High art?

I ran across this on a list of top desks for 2021.

My first thought was “Dancing banana animated GIF meme.” I haven’t looked up the designer or their inspiration. But to be honest, it doesn’t matter. The designer could tell me it came from their muse. What matters is that, happenstance or deliberate, it’s an inanimate object that mimics an Internet meme that mimics a piece of fruit that mimics a dancing human being. Is it high art just by the number of shoulders it’s standing on? Or just a good example of “all creative work is derivative”? Can I call it anthropomorphic? Why not?

We stand on the shoulders of giants. Or on the shoulders of dancing bananas.

Not my style of desk, but bravo! Whimsical yet functional.

Time for a new desktop keyboard: the switches

I just ordered a set of Kailh Box Pink key switches from novelkeys.xyz. And I’m going to order a barebones hot swap keyboard. Why?

First, I spilled some coffee in my desktop keyboard. Maybe 1/4 cup. Which is a lot, though nowhere near what I’ve spilled on some of my buckling spring keyboards. I tore it down, doused it with 99% isopropyl alcohol, cleaned all the switches, the PCB (both sides), the case (inside and out), and the keycaps. It’s working again, but it’s a reminder…

It’s my oldest CODE keyboard, with Cherry MX blue switches. I like the CODE keyboard. I don’t love the price, though, and I also don’t love how difficult it is to disassemble. I’ve had it apart a couple of times, and this time one of the case tabs broke. It’s not a huge issue, but it’s annoying. If I press on the lower right corner of the keyboard case… “SQUEAK”. It’s likely more a matter of knowing it’s broken than it is any sort of real annoyance, but…

I’ve never loved Cherry MX switches. My computing life began when we had some truly delightful keyboards to type on. Before the IBM Model M (very good). Even before the IBM Model F (better). Anyone truly interested in how we got where we are today would do well to do some history homework online before making their next mechanical keyboard purchase. But I can sum it up in two words: cost reductions.

I’m not going to judge that. It is what it is, and the good far outweighs the bad; cost reductions are what have allowed many of us to own desktop and laptop computers for many decades.

But… there are those of us who spend our lives at our keyboards because it’s our livelihood. And there are a LOT of us. And some of us care deeply about the human-machine interface we spend 8+ hours at each day. And guess what? We’re not all the same. Unfortunately, we’ve mostly been saddled with only two predominant key switch types for a very long time now: rubber domes (pretty much universally considered crummy but inexpensive) and Cherry MX (or something similar to it). We do still have Unicomp making buckling spring keyboards with tooling from Lexmark (who manufactured the Model M keyboards for the North American market). And we have some new switch types (optical, Hall effect, etc.). But at the moment, the keyboard world is predominantly Cherry colored.

Perhaps worse, those of us who like a switch to be both tactile and clicky have few good choices. Unicomp buckling spring tops the list of readily available and reasonably priced options. But for a large crowd of us, the compromises on a modern desktop are significant. Number one: it’s huge (including the SSK version). For some, number two is no backlighting. And yet others want more keycap options. But it’s a long drop from the buckling spring to any MX-style switch if your goal is clicky and tactile.

I don’t hate Cherry MX blues. Nor Gateron blues. Nor many others. But… most of them feel like just what they’re designed to be. They’re not smooth (including the linears), most of them are not truly tactile, and they’re fragile (not protected from dust or liquid ingress). Most have considerable key wobble. They’re usable, I’ve just been wanting something better for a while. In a TKL (tenkeyless) keyboard with minimal footprint.

Some personal history… one of the reasons I stopped using buckling spring was the sheer size of any of my true Model M keyboards or any of the Unicomps. The other was the activation force. I wanted something a little lighter to the touch, but still noisy. The Cherry MX blue and similar switches have filled the niche for me for a while. But… the scratchiness/crunchiness has always been less than ideal, and the sound in any board I’ve tried has been less satisfying than I’d like. I’ve not had any of the switches die on me, which is a testament to their durability. But I’ve had to clean them on more than one occasion to get them working again after a small spill. And over time, despite the fact that they still function, their characteristics change. Some keys lose some of their clickiness. Some get louder. And out of the box, they’re not terribly consistent sound-wise. While I’ve disassembled keyboards to clean and lube the switches… it’s very time consuming. And despite the fact that I have a pretty good hot-air rework setup, it’s very hard for me to justify spending time replacing soldered switches. I can barely justify the time to swap switches in a hot-swap board!

So… I want a more durable switch. And something smoother (less scratch/crunch) than a Cherry MX, but with a nice distinct click. And, unfortunately, something that works in a PCB designed for Cherry MX, since that’s where there are far and away the most options. The Kailh Box White or Box Pink seem to fit the bill. The White are readily available, so I bought the Pink first to make sure I don’t miss out on available inventory. I’ll put them in a hot-swap board with PBT keycaps and give them a test drive for a few weeks.

I know the downsides ahead of time. I had an adjustment to make when I went from buckling spring to Cherry MX blue. Buckling spring feedback and activation occur at the same time; it’s ideal. With Cherry MX and related designs… most of them activate after the click. The Kailh Box Pink and White appear to activate before the click, and they don’t have the hysteresis of the Cherry MX switches. But based on my own personal preferences, which align pretty closely with those of reviewers of the Kailh Box White and Kailh Box Pink (like Chyrosran on YouTube), I think one of these switches will make me happier than my MX blues.

Of course I could be wrong. But that’s why I’m going with an inexpensive hot-swap board for this test drive. PCB, mounting and chassis all play a significant role in how a keyboard feels and sounds. But I know many of those differences, and the goal at the moment is to pick the switches I want in my next long-term keyboard.

Facebook… go away

I don’t think I’ve ever seen such a disingenuous paragraph in a full-page newspaper ad as this one from Facebook in their ongoing attack on Apple:

Apple’s change will limit their ability to run personalized ads. To make ends meet, many will have to start charging you subscription fees or adding more in-app purchases, making the internet much more expensive and reducing high-quality free content.

Let’s be clear here. For one… there is literally NO free content on Facebook; you pay with your privacy and your time. And very little of the content there is high-quality. That which is does not come from Facebook. They are not a company of journalists and writers.

Newsflash for those who’ve been under a rock for the last 25 years… the Internet has always been expensive. The real issues here:

1) who’s profiting?

2) in what currency?

3) is the transaction clear and transparent?

There are many companies profiting from the existing model of ‘free’ Internet. But it’s not small businesses. It’s Google, Facebook and a trove of others (Apple included).

On the currency and transparency… Facebook is far and away the worst here. They despise transparency. Apple wants to show its users what Facebook is collecting from them, the product, and let them choose whether or not they’d like to participate. You can choose to opt in. Facebook is worried that many will opt out once they understand what Facebook is doing. Not an unjust fear, but it’s yet to be seen how it will play out.

Apple’s motivation is coming from its customers. They (and I’m one of them) want these options. They’re one of the reasons we choose to buy iOS devices instead of Android devices. I don’t want targeted advertising. In fact, at this point I’ve been using the Internet for 30 years and I’m essentially blind to all online advertising; my brain has a highly-trained ad-ignoring filter. I don’t want large corporations tracking my every move online. Especially without transparency. Heaven forbid that I be willing to pay Apple for a device that allows me to protect some of my privacy!

Facebook’s motivations are at least partly coming from their customers too. But you, the end user, are NOT their customer. The advertisers are their customers. You are their product. I don’t quite get why Facebook tries to deny this; without you (the end user) and all the data they collect on you… they have no product to sell to advertisers. They’d have to change their business model. Perhaps charge a subscription fee. And for most of us… Facebook is definitely not something we’d knowingly pay ‘real’ money to use. But if you’re a Facebook user, you ARE paying for it. With your privacy and your time. And possibly your mental health. And maybe even your data plan.

And Facebook knows this to be true.

Beyond hurting apps and websites, many in the small business community say this change will be devastating for them too, at a time when they face enormous challenges. They need to be able to effectively reach the people most interested in their products and services to grow.

LMAO. “Hurting apps and websites”. Could you be more ambiguous? Oh, I see… you mean facebook.com. Sorry, I forgot for a moment that Google and Facebook have _decimated_ many small businesses as well as some large ones (news broadcasters, journalists, ad agencies, large newspapers, local sign makers…).

Again… you, the end user, are not the customer. The advertisers are the customers.

Forty-four percent of small to medium businesses started or increased their usage of personalized ads on social media during the pandemic, according to a new Deloitte study. Without personalized ads, Facebook data shows that the average small business advertiser stands to see a cut of over 60% in their sales for every dollar they spend.

In other words… once users understand what Facebook is doing, most will opt out?

Update May 12, 2021: it looks like the answer is a resounding ‘yes’. In the first week of the iOS 14.5 rollout, 96% of users are choosing to opt out. Should I feign surprise?

New IPv4 and IPv6 container templates

I’ve spent a little bit of time working on some new slimmed-down C++ containers keyed by IPv4 addresses, IPv6 addresses, IPv4 prefixes and IPv6 prefixes. The containers keyed by prefixes allow longest-match searching by address, as would be expected.

My main objective here was to minimize the amount of code I need to maintain, by leveraging the C++ standard library and existing classes and class templates in libDwm. A secondary objective was to make sure the containers are fast enough for my needs. A third objective was to make the interfaces thread safe.

I think I did OK on the minimal-code front. For example, DwmIpv4PrefixMap.hh is only 102 lines of code (I haven’t added I/O functionality yet). DwmIpv6PrefixMap.hh is 185 lines of code, including I/O functionality. Obviously they leverage existing code (Ipv4Prefix, Ipv6Prefix, et al.).

The interfaces are thread safe. I’m in the process of switching them from mutex and lock_guard to shared_mutex and shared_lock/unique_lock.
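For illustration, here’s the reader/writer pattern in miniature. This is just a hedged sketch with hypothetical names; the real class templates in libDwm wrap richer key types (Ipv4Prefix, et al.):

  #include <cstdint>
  #include <shared_mutex>
  #include <unordered_map>

  // Hypothetical sketch of the locking pattern, not the actual libDwm
  // interface. Readers take a shared_lock, writers take a unique_lock.
  template <typename VALUE>
  class Ipv4AddrMapSketch
  {
  public:
    void Add(uint32_t addr, const VALUE & value)
    {
      std::unique_lock  lock(_mtx);   // exclusive: a writer blocks everyone
      _map[addr] = value;
    }

    bool Find(uint32_t addr, VALUE & value) const
    {
      std::shared_lock  lock(_mtx);   // shared: readers don't block readers
      auto  it = _map.find(addr);
      if (it == _map.end())  return false;
      value = it->second;
      return true;
    }

  private:
    mutable std::shared_mutex           _mtx;
    std::unordered_map<uint32_t,VALUE>  _map;
  };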

Performance-wise, it looks pretty good. I’m using prefix dumps from routeviews to have realistic data for my unit tests. On my Threadripper 3960X development machine running Ubuntu 20.04:


 % ./TestIpv4AddrMap -p
 831,915 addresses, 7,380,956 inserts/sec
 831,915 addresses, 16,641,961 lookups/sec
 831,915 addresses, 9,032,736 removals/sec
 831,915 addresses, 8,249,196 inserts/sec (bulk lock)
 831,915 addresses, 54,097,737 lookups/sec (bulk lock)
 831,915 addresses, 9,489,272 removals/sec (bulk lock)
 831,918/831,918 passed

 % ./TestIpv4PrefixMap -p
 901,114 prefixes, 6,080,842 prefix inserts/sec
 901,114 prefixes, 14,639,881 prefix lookups/sec
 901,114 addresses, 5,105,259 longest match lookups/sec
 901,114 prefixes, 6,378,710 prefix inserts/sec (bulk lock)
 901,114 prefixes, 25,958,230 prefix lookups/sec (bulk lock)
 901,114 addresses, 5,368,727 longest match lookups/sec (bulk lock)
 1,802,236/1,802,236 passed

 % ./TestIpv6AddrMap -p
 104,970 addresses, 11,360,389 inserts/sec
 104,970 addresses, 15,206,431 lookups/sec
 104,970 addresses, 9,159,685 removals/sec
 104,970 addresses, 12,854,518 inserts/sec (bulk lock)
 104,970 addresses, 20,434,105 lookups/sec (bulk lock)
 104,970 addresses, 10,302,286 removals/sec (bulk lock)
 104,976/104,976 passed

 % ./TestIpv6PrefixMap -p
 110,040 prefixes, 11,181,790 prefix lookups/sec
 110,040 prefixes, 1,422,403 longest match lookups/sec
 440,168/440,168 passed


What is ‘bulk lock’? The interfaces allow one to get a shared or unique lock and then perform multiple operations while holding the lock. As seen above, this doesn’t make a huge difference for insertion or removal of entries, where the time is dominated by operations other than locking and unlocking. It does make a significant difference for exact-match searches. One must be careful using the bulk interfaces to avoid deadlock, of course. But they are useful in some scenarios.
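To make that concrete, here’s a hedged sketch of the bulk-lock idea, again with hypothetical names rather than the actual libDwm interface: take the shared lock once, then use an unlocked variant of the lookup for the whole batch.

  #include <cstdint>
  #include <shared_mutex>
  #include <unordered_map>
  #include <vector>

  // Hypothetical sketch, not the actual libDwm interface.
  struct Ipv4AddrMapSketch
  {
    mutable std::shared_mutex         mtx;
    std::unordered_map<uint32_t,int>  map;

    // Per-call interface: locks and unlocks on every lookup.
    bool Find(uint32_t addr, int & value) const
    {
      std::shared_lock  lock(mtx);
      return FindNoLock(addr, value);
    }

    // Bulk interface: the caller must already hold a lock on mtx.
    bool FindNoLock(uint32_t addr, int & value) const
    {
      auto  it = map.find(addr);
      if (it == map.end())  return false;
      value = it->second;
      return true;
    }
  };

  // Lock once, look up many. This is where the exact-match lookups/sec
  // gain in the numbers above comes from.
  int Sum(const Ipv4AddrMapSketch & m, const std::vector<uint32_t> & addrs)
  {
    int  sum = 0, v = 0;
    std::shared_lock  lock(m.mtx);
    for (auto addr : addrs) {
      if (m.FindNoLock(addr, v))  sum += v;
    }
    return sum;   // lock released at scope exit
  }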

The best part, IMHO, is that these are fairly thin wrappers around std::unordered_map. Meaning I don’t have my own hash table or trie code to maintain and I can count on std::unordered_map behaving in a well-defined manner due to it being part of the C++ standard library. It is not the fastest means of providing longest-match lookups. However, from my perspective as maintainer… it’s a small bit of code, and fast enough for my needs.
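For the curious: the usual trick for longest-match over a hash table is to mask the address at each prefix length and probe from longest to shortest. A minimal sketch of the idea with hypothetical types (the real DwmIpv4PrefixMap surely differs in detail):

  #include <cstdint>
  #include <functional>
  #include <optional>
  #include <unordered_map>

  // Hypothetical sketch of hash-based longest-prefix match, not the
  // actual DwmIpv4PrefixMap code.
  struct Ipv4PrefixKey
  {
    uint32_t  addr;  // host-order address, already masked to len bits
    uint8_t   len;   // prefix length, 0..32
    bool operator == (const Ipv4PrefixKey & k) const
    { return ((addr == k.addr) && (len == k.len)); }
  };

  struct Ipv4PrefixKeyHash
  {
    size_t operator () (const Ipv4PrefixKey & k) const
    { return std::hash<uint64_t>()(((uint64_t)k.addr << 8) | k.len); }
  };

  template <typename VALUE>
  struct Ipv4PrefixMapSketch
  {
    std::unordered_map<Ipv4PrefixKey,VALUE,Ipv4PrefixKeyHash>  map;

    static uint32_t Mask(uint32_t addr, uint8_t len)
    { return (len ? (addr & (~uint32_t(0) << (32 - len))) : 0); }

    void Add(uint32_t addr, uint8_t len, const VALUE & value)
    { map[{Mask(addr, len), len}] = value; }

    // Longest match: at most 33 hash probes per address.
    std::optional<VALUE> LongestMatch(uint32_t addr) const
    {
      for (int len = 32; len >= 0; --len) {
        auto  it = map.find({Mask(addr, (uint8_t)len), (uint8_t)len});
        if (it != map.end())  return it->second;
      }
      return std::nullopt;
    }
  };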

Threadripper 3960X: the birth of ‘thrip’

I recently assembled a new workstation for home. My primary need was a machine for software development, including deep learning. This machine is named “thrip”.

Having looked hard at my options, I decided on AMD Threadripper 3960X as my CPU. A primary driver was of course bang for the buck. I wanted PCIe 4.0, at least 18 cores, at least 4-channel RAM, the ability to utilize 256G or more of RAM, and to stay in budget.

By CPU core count alone, the 3960X is more than I needed. On the flip side, it’s constrained to 256G of RAM, and it’s also more difficult to keep cool than most CPUs (280W TDP). But on price per core, and overall performance per dollar, it was the clear winner for my needs.

Motherboard-wise, I wanted 10G ethernet, some USB-C, a reasonable number of USB-A ports, room for two large GPUs, a robust VRM, and space for at least three NVMe M.2 drives. Thunderbolt 3 would have been nice, but none of the handful of TRX40 boards seems to officially support it (I don’t know if this is an Intel licensing issue or something else). The Gigabyte board has the header, and Wendell@Level1Techs seems to have gotten it working, but I didn’t like other aspects of the Gigabyte TRX40 AORUS EXTREME (the XL-ATX form factor, for example, is still limiting in terms of case options).

I prefer to build my own workstations. It’s not due to being particularly good at it, or winding up with something better than I could get pre-built. It’s that I enjoy the creative process of selecting parts and putting it all together.

I had not assembled a workstation in quite some time; my old i7-2700K machine has met my needs for most of the last 8 years. And due to a global pandemic, it wasn’t a great time to build a new computer. The supply chain has been troublesome for over 6 months now, especially for some specific parts (1000W-and-up 80+ Titanium PSUs, for example). We’ve also had a huge availability problem with the current GPUs from NVIDIA (RTX 3000 series) and AMD (Radeon 6000 series). And I wasn’t thrilled about doing a custom water-cooling loop again, but I couldn’t find a worthy quiet cooling solution for a Threadripper and a 2080 Ti without going custom loop. Given the constraints, I wound up with these parts as the guts:

  • Asus TRX40 ROG Zenith II Extreme Alpha motherboard
  • AMD Threadripper 3960X CPU (24 cores)
  • 256 gigabytes G.Skill Trident Z Neo Series RGB DDR4-3200 CL16 RAM (8 x 32G)
  • EVGA RTX 2080 Ti FTW3 Ultra GPU with EK Quantum Vector FTW3 waterblock
  • Sabrent 1TB Rocket NVMe 4.0 Gen4 PCIe M.2 Internal SSD
  • Seasonic PRIME TX-850, 850W 80+ Titanium power supply
  • Watercool HEATKILLER IV PRO for Threadripper, pure copper CPU waterblock

It’s all in a Lian Li PC-O11D XL case. I have three 360mm radiators, ten Noctua 120mm PWM fans, an EK Quantum Kinetic TBE 200 D5 PWM pump, PETG tubing and a whole bunch of Bitspower fittings.

My impressions thus far: it’s fantastic for Linux software development. It’s so nice to be able to run ‘make -j40’ on large C++ projects and have them complete in a timely manner. And thus far, it runs cool and very quiet.

An ode to NSFNET and ANSnet: a simple NMS for home

A bit of history…

I started my computing career at NSFNET at the end of 1991, which then became ANSnet. In those days, we had a home-brewed network monitoring system. I believe most or all of it was originally the brainchild of Bill Norton; later there were several contributors: Linda Liebengood, myself, and others. The important thing for today’s thoughts: it was named “rover”, and its user interface philosophy was simple but important: “Only show me actionable problems, and do it as quickly as possible.”

To understand this philosophy, you have to know something about the primary users: the network operators in the Network Operations Center (NOC). One of their many jobs was to observe problems, perform initial triage, and document their observations in a trouble ticket. From there they might fix the problem, escalate to network engineering, etc. But it wasn’t expected that we’d have some omniscient tool that could give them all of the data they (or anyone else) needed to resolve the problem. We expected everyone to use their brains, and we wanted our primary problem reporter to be fast and as clutter-free as possible.

For decades now, I’ve spent a considerable amount of time working at home. Sometimes because I was officially telecommuting, at other times just because I love my work and burn midnight hours doing it. As a result, my home setup has become more complex over time. I have 10 gigabit ethernet throughout the house (some fiber, some Cat6A). I have multiple 10 gigabit ethernet switches, all managed. I have three rackmount computers in the basement that run 7×24. I have ZFS pools on two of them, used for nightly backups of all networked machines, source code repository redundancy, Time Machine for my macOS machines, etc. I run my own DHCP service, an internal DNS server, web servers, an internal mail server, my own automated security software to keep my pf tables current, Unifi, etc. I have a handful of Raspberry Pis doing various things. Then there’s all the other devices: desktop computers in my office, a networked laser printer, Roku, AppleTV, Android TV, Nest thermostat, Nest Protects, WiFi access points, laptops, tablet, phone, watch, Ooma, etc. And the list grows over time.

Essentially, my home has become somewhat complex.  Without automation, I spend too much time checking the state of things or just being anxious about not having time to check everything at a reasonable frequency.  Are my ZFS pools all healthy?  Are all of my storage devices healthy?  Am I running out of storage space anywhere?  Is my DNS service working?  Is my DHCP server working?  My web server?  NFS working where I need it?  Is my Raspberry Pi garage door opener working?  Are my domains resolvable from the outside world?  Are the cloud services I use working?  Is my Internet connection down?  Is there a guest on my network?  A bandit on my network?  Is my printer alive?  Is my internal mail service working?  Are any of my UPS units running on battery?  Are there network services running that should not be?  What about the ones that should be, like sshd?

I needed a monitoring system that worked like rover: only show me actionable issues. So I wrote my own, and named it “mcrover”. It’s more of a host and service monitoring system than a network monitoring system, but it’s distributed and secure (using the ed25519 stuff in libDwmAuth). It’s modern C++, relatively easy to extend, and has some fun bits (ASCII art in the curses client when there are no alerts, for example). Like the old Network Operations Center, I have a dedicated display in my office that displays only the mcrover Qt client, 24 hours a day. Since most of the time there are no alerts to display, the Qt client toggles between the next week’s forecast and a weather radar image when there are no alerts. If there are alerts, the alert display is shown instead, and will not go away until there are no alerts (or I click on the page switch in the UI). The dedicated display is driven by a Raspberry Pi 4B running the Qt client from boot, using EGLFS (no X11). The Raspberry Pi 4 is powered via PoE. It is also running the mcrover service, to monitor local services on the Pi as well as many network services. In fact, the mcrover service runs on every 7×24 general-purpose computing device here. mcrover instances can exchange alerts, hence I only need to look at one instance to see what’s being reported by all instances.

This has relieved me of a lot of sysadmin and network admin drudgery. It wasn’t trivial to implement, mostly due to the variety (not the quantity) of things it’s monitoring. But it has proven itself very worthwhile. I’ve been running it for many months now, and I no longer get anxious about not always keeping up with things like daily/weekly/monthly mail from cron and manually checking things. All critical (and some non-critical) things are now checked every 60 seconds, and my attention is only stolen when mcrover finds an actionable issue.
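The philosophy in miniature, as a hypothetical sketch (this is not mcrover’s actual code, just the shape of the idea): pluggable checks on a fixed period, where silence is the normal state and only actionable problems produce output.

  #include <chrono>
  #include <iostream>
  #include <memory>
  #include <string>
  #include <thread>
  #include <vector>

  // Hypothetical sketch, not mcrover's actual code.
  struct Alert
  {
    std::string  source;   // which check raised it
    std::string  message;  // what's actionable
  };

  class Check
  {
  public:
    virtual ~Check() = default;
    virtual std::vector<Alert> Run() = 0;   // empty result means "all good"
  };

  void MonitorLoop(std::vector<std::unique_ptr<Check>> & checks)
  {
    using namespace std::chrono_literals;
    for (;;) {
      for (auto & check : checks) {
        for (const auto & alert : check->Run()) {
          // Only actionable problems reach the display.
          std::cerr << alert.source << ": " << alert.message << '\n';
        }
      }
      std::this_thread::sleep_for(60s);   // every 60 seconds
    }
  }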

So… an ode to the philosophy of an old system.  Don’t make me plow through a bunch of data to find the things I need to address.  I’ll do that when there’s a problem, not when there isn’t a problem.  For 7×24 general purpose computing devices running Linux, macOS or FreeBSD, I install and run the mcrover service and connect it to the mesh.  And it requires very little oomph; it runs just fine on a Raspberry Pi 3 or 4.

So why the weather display? It’s just useful to me, particularly in mowing season when I need to plan ahead for yard work. And I’ve grown tired of the weather websites. Most are loaded with ads and clutter. All of them are tracking us. Why not just pull the data from tax-funded sources in JSON form and do it myself? I’ve got a dedicated display with no alerts to show most of the time, so it made sense to put the weather there.
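As a hedged example of what pulling the data myself can look like: I’m assuming a National Weather Service endpoint here (api.weather.gov is one such tax-funded JSON source), and the gridpoint in the URL is just an illustration, not necessarily what mcrover uses.

  #include <curl/curl.h>
  #include <iostream>
  #include <string>

  // Append each chunk of the HTTP response body to a std::string.
  static size_t Collect(char *data, size_t size, size_t nmemb, void *userp)
  {
    static_cast<std::string *>(userp)->append(data, size * nmemb);
    return (size * nmemb);
  }

  int main()
  {
    std::string  body;
    CURL        *curl = curl_easy_init();
    if (! curl)  return 1;
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.weather.gov/gridpoints/DTX/65,33/forecast");
    // api.weather.gov asks for a descriptive User-Agent.
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "example-weather-client/1.0");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, Collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    CURLcode  rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (rc != CURLE_OK)  return 1;
    std::cout << body << '\n';   // raw JSON forecast; parse as desired
    return 0;
  }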

The Qt client using X11, showing the weather forecast.

The Qt client using X11, showing the weather radar.

The curses client, showing ASCII art since there are no alerts to be shown.

Apple M1 thoughts

Apple silicon has arrived for the Mac. Not in my hands, but it has arrived.

My thoughts…

Wow. I’m hesitant to call it revolutionary, simply because they’ve been on this path for over a decade. But I’m wowed for a number of reasons.

From the benchmarks I’ve seen, as well as the reviews, the performance isn’t what has wowed me. Yes, it’s impressive. But we had seen enough from iPhones to iPad Pros to know full well what we could expect from Apple’s first generation of their own SoC for the Mac. And they hit the marks.

I think what had the most profound impact on me was just the simple fact that they delivered on a promise to themselves, their users and their company. This wasn’t a short road! In this day and age, there are almost no technology companies that can stick the landing on a 10-year roadmap. Heck, many tech companies abandon their products and users in the span of a few years. Apple quietly persevered. They didn’t fall prey to hubris and conceit. They didn’t give us empty promises. They kept plugging away behind the scenes while Intel and others floundered, or overpromised and underdelivered, or just believed that the x86 architecture would be king forever. And much of this work happened after the passing of Steve Jobs. So to those who thought Apple would flounder without him… I think you’ve been wrong all along.

It’s not like I didn’t see this coming; it’s been rumored for what seems like forever. But I hadn’t really reflected on the potential impact until it arrived. Some background…

I’m a Mac user, and I love macOS. But I’m a software developer, and the main reason I love macOS is that it’s a UNIX.  I like the user interface more than any other, but I spend most of my time in a terminal window running emacs, clang++, etc.  Tasks served well by any UNIX. For me, macOS has been the best of two worlds. I shunned OS 9; I loved the Mac, but OS 9 fell short of my needs. When OS X arrived, I was on board. Finally an OS I could use for my work AND heartily recommend to non-techies. And the things I liked about NeXT came along for the ride.

The other reason I’ve loved Macs: the quality of Apple laptops has been exceptional for a very long time. With the exception of the butterfly keyboard fiasco and the still-mostly-useless Touch Bar (function keys are WAY more useful for a programmer), I’ve been very happy with my Mac laptops. Literally nothing else on the market has met my needs as well as a Macbook Pro, going back longer than I can remember.

But now… wow. Apple just put a stake in the ground that’s many miles ahead of everyone else in the personal computing space. It’s akin to the Apollo moon landing. We all saw it coming, but now the proof has arrived.

To be clear, the current M1 lineup doesn’t fit my needs. I’m not in the market for a Macbook Air or a 13″ Macbook Pro. I need a screen larger than 13″, and some of my development needs don’t fit a 16G RAM limitation, which also rules out the M1 Mac Mini (as does the lack of 10G ethernet). And like any first generation product, there are some quirks that have yet to be resolved (issues with some ultra wide monitors), missing features (no eGPU support), etc. But… for many users, these new machines are fantastic and there is literally nothing competitive. Just look at the battery life on the M1 Macbook Air and Macbook Pro 13″. Or the Geekbench scores. Or how little power they draw whether on battery or plugged into the wall. There’s no fan in the M1 Macbook Air because it doesn’t need one.

Of course, for now, I also need full x64 compatibility. I run Windows and other VMs on my Macs for development purposes, and as of right now I can’t do that on an M1 Mac. That will come if I’m to believe Parallels, but it won’t be native x64, obviously. But at least right now, Rosetta 2 looks reasonable. And it makes sense versus the original Rosetta, for a host of reasons I won’t delve into here.

Where does this leave Intel? I don’t see it as significant right now. Apple is and was a fairly small piece of Intel’s business. Today, Intel is in much bigger trouble from AMD EPYC, Threadripper, Threadripper Pro and Ryzen 3 than Apple silicon. That could change, but I don’t see Apple threatening Intel.  Apple has no products in Intel’s primary business (servers). Yes, what Apple has done is disruptive, in a good way. But the long-term impact is yet to be seen.

I am looking forward to what comes next from Apple. Something I haven’t been able to say about Intel CPUs in quite some time. Don’t get me wrong; I’m a heavy FreeBSD and Linux user as well.  Despite the age of x86/x64, we do have interesting activity here.  AMD Threadripper, EPYC and Ryzen 3 are great for many of my needs and have put significant pressure on Intel. But I believe that once Apple releases a 16″ Macbook Pro with their own silicon and enough RAM for my needs… there will literally be nothing on the market that comes even close to what I want in a laptop, for many years. It will be a solid investment.

For the long run… Apple has now finally achieved what they’ve wanted since their inception: control of their hardware and software stack across the whole product lineup. Exciting times. Real competition in the space that’s long been dominated by x86/x64, which will be good for all of us as consumers. But make no mistake: Apple’s success here isn’t easily duplicated. Their complete control over the operating system and the hardware is what has allowed them to do more (a LOT more) with less power. This has been true on mobile devices for a long time, and now Apple has brought the same synergies to bear on the PC market. As much as I appreciate Microsoft and Qualcomm SQ1 and SQ2 Surface Pro X efforts, they are far away from what Apple has achieved.

One thing that continues to befuddle me about what’s being written by some… things like “ARM is now real competition for x86/x64”. Umm… ARM’s relevance hasn’t changed. They license reference core architectures and instruction sets. Apple is not building ARM reference architectures. If ARM were the one deserving credit here, we’d have seen similar success for Windows and Linux on ARM. ARM is relevant. But to pretend that Apple M1 silicon is just a product of ARM, and that there’s now some magic ARM silicon that’s going to go head-to-head with x86/x64 across the industry, is pure uninformed folly. M1 is a product of Apple, designed specifically for macOS and nothing else. All of the secret sauce here belongs to Apple, not ARM.

I’ve also been seeing writers say that this might prompt Microsoft and others to go the SoC route. Anything is possible. But look at how long it took Apple to get to this first generation for the Mac, and consider how they did it: mobile first, which brought unprecedented profits and many generations of experience. Those profits allowed them to bring in the talent they needed, and the very rapid growth of mobile allowed them to iterate many times in a fairly short span of time. Wash, rinse, repeat. Without the overhead of owning the fab. And for what many have considered a ‘dead’ market (personal computers). Yes, PC sales have on average been on a steady decline for some time. But the big picture is more complex; it’s still the case that a smartwatch isn’t a smartphone, a smartphone isn’t a tablet, a tablet isn’t a laptop, a laptop isn’t a desktop, most desktops are not workstations, a workstation isn’t a storage server, etc. What we’ve seen is the diversification of computing. The average consumer doesn’t need a workstation. Many don’t need a desktop, and today they have other options for their needs. But the desktop and workstation market isn’t going to disappear. We just have a lot more options to better fit our needs than we did when smartphones, tablets, ultrabooks, etc. didn’t exist.

I’ve always been uneasy with those who’ve written that Apple would abandon the PC market. The Mac business, standalone, generated 28.6 billion U.S. dollars in 2020. That would be at spot 111 on the Fortune 500 list. Not to mention that Apple and all the developers writing apps for Apple devices need Macs. The fact that Apple’s desktop business is a much smaller portion of their overall revenue isn’t a product of it being a shrinking business; it’s 4X larger in revenue than it was 20 years ago. The explosive growth in mobile has dwarfed it, but it has continued to be an area of growth for Apple. Which is not to say that I haven’t bemoaned the long delays between releases of Apple professional Mac desktops, not to mention the utter disaster of the 2013 Mac Pro. But Apple is notoriously tight-lipped about their internal work until it’s ready to ship, and it’s clear now that they wisely directed their resources at decoupling their PC fates from Intel.  None of this would have happened if Apple’s intent was to abandon personal computers.

So we enter a new era of Apple. Rejoice, whether you’re an Apple user or not. Innovation spurs further innovation.

Replaced IronWolf Pro 8TB with Ultrastar DC HC510 10TB

Due to a firmware problem in the Seagate IronWolf Pro 8TB drives that makes them incompatible with ZFS on FreeBSD, I returned them over the weekend and ordered a pair of Ultrastar DC HC510 10TB drives. I’ve had phenomenal results with Ultrastars in the past, and as near as I can tell they’ve always been very good enterprise-grade drives regardless of the owner (IBM, Hitachi, HGST, Western Digital). The Ultrastars arrived today, and I put them in the zfs1 pool:

# zpool list -v
NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs1              16.3T  2.13T  14.2T        -         -    10%    13%  1.00x  ONLINE  -
  mirror          3.62T  1.53T  2.09T        -         -    29%    42%
    gpt/gpzfs1_0      -      -      -        -         -      -      -
    gpt/gpzfs1_1      -      -      -        -         -      -      -
  mirror          3.62T   609G  3.03T        -         -    19%    16%
    gpt/gpzfs1_2      -      -      -        -         -      -      -
    gpt/gpzfs1_3      -      -      -        -         -      -      -
  mirror          9.06T  1.32M  9.06T        -         -     0%     0%
    gpt/gpzfs1_4      -      -      -        -         -      -      -
    gpt/gpzfs1_5      -      -      -        -         -      -      -

Everything seems good. Note that the scrub repair of 33.8G was due to me pulling the IronWolf drives from the chassis with the system live (after having removed them from the pool). This apparently caused a burp on the backplane, which was fully corrected by the scrub.

# zpool status
  pool: zfs1
 state: ONLINE
  scan: scrub repaired 33.8G in 0 days 04:43:10 with 0 errors on Sun Nov 10 01:45:59 2019
remove: Removal of vdev 2 copied 36.7G in 0h3m, completed on Thu Nov  7 21:26:09 2019
    111K memory used for removed device mappings
config:

	NAME              STATE     READ WRITE CKSUM
	zfs1              ONLINE       0     0     0
	  mirror-0        ONLINE       0     0     0
	    gpt/gpzfs1_0  ONLINE       0     0     0
	    gpt/gpzfs1_1  ONLINE       0     0     0
	  mirror-1        ONLINE       0     0     0
	    gpt/gpzfs1_2  ONLINE       0     0     0
	    gpt/gpzfs1_3  ONLINE       0     0     0
	  mirror-3        ONLINE       0     0     0
	    gpt/gpzfs1_4  ONLINE       0     0     0
	    gpt/gpzfs1_5  ONLINE       0     0     0

errors: No known data errors

Expanded zfs1 pool on kiva

I purchased two Seagate IronWolf Pro 8TB drives at MicroCenter today. They’ve been added to the zfs1 pool on kiva.


# gpart create -s gpt da5
da5 created
# gpart create -s gpt da6
da6 created

# gpart add -t freebsd-zfs -l gpzfs1_4 -b1M -s7450G da5
da5p1 added
# gpart add -t freebsd-zfs -l gpzfs1_5 -b1M -s7450G da6
da6p1 added

# zpool add zfs1 mirror /dev/gpt/gpzfs1_4 /dev/gpt/gpzfs1_5

# zpool list -v
NAME               SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs1              14.5T  2.76T  11.7T        -         -    14%    19%  1.00x  ONLINE  -
  mirror          3.62T  1.87T  1.75T        -         -    33%    51%
    gpt/gpzfs1_0      -      -      -        -         -      -      -
    gpt/gpzfs1_1      -      -      -        -         -      -      -
  mirror          3.62T   910G  2.74T        -         -    24%    24%
    gpt/gpzfs1_2      -      -      -        -         -      -      -
    gpt/gpzfs1_3      -      -      -        -         -      -      -
  mirror          7.25T  1.05M  7.25T        -         -     0%     0%
    gpt/gpzfs1_4      -      -      -        -         -      -      -
    gpt/gpzfs1_5      -      -      -        -         -      -      -