There’s a spectrum of openness when it comes to computers. Most people hover somewhere between fully closed – proprietary hardware, proprietary operating system – and partly open – proprietary hardware, open source operating system. Even if you run Linux on your AMD or Intel machine, you’re running it on top of a veritable spider’s web of proprietary firmware for networking, graphics, the IME, WiFi, Bluetooth, USB, and more. Even if you opt for something like a System76 machine, which has open firmware as a BIOS replacement and to cover some functions like keyboard lighting, you’re still running lots of closed firmware blobs for all kinds of components. It’s virtually impossible to free yourself from this web.
Virtually impossible, yes, but not entirely impossible. There are options out there to run a machine that is entirely open source, from firmware all the way up to the applications you run. Sure, I can almost hear you think, but it’s going to be some outdated, slow machine that requires tons of tinkering and deep knowledge, out of reach of normal users or people who just want to buy a computer, take it out of the box, and get going.
What if I told you there is a line of modern workstations, with all the modern amenities we’ve come to expect, that is entirely open? The instruction set, the firmware for the various components, the boot environment, the operating system, and the applications? No firmware blobs, no closed code hiding in various corners, yet modern performance, modern features, and a full, modern operating system?
Full disclosure: Raptor Computing Systems sent us the Blackbird Secure Desktop as a loan, and it will be returned to them. They did not read this review before publication, and placed zero restrictions on anything I could write about.
Now you’re playing with POWER
Most people’s knowledge of and experience with the Power ISA begins and ends with Apple. The company used Power-based processors from 1994 until 2006, when it switched to processors from Intel and the x86 ISA. Aside from Apple, there are two other major cornerstones of the Power ISA that most people are familiar with. First, game consoles. The GameCube, Wii, Xbox 360 and PlayStation 3 all used PowerPC-based processors, and were all wildly successful. Second, various embedded systems use Power processors as well.
Aside from Apple, game consoles, and embedded systems, IBM has been developing and using processors based on the Power ISA for a long time now. IBM released the first Power processor in 1990, the POWER1, for its servers and supercomputers. They’ve steadily kept developing their line of processors for decades, and they are currently in the process of rolling out POWER10, which should be available later this year.
Other Power ISA processors you may have heard of, such as the PowerPC G4 or G5 or the various gaming console processors, do not necessarily correspond to IBM’s own POWERx generations of processors, but are implementations of the same ISA. The nomenclature of the Power ISA has changed quite a bit over time, and companies like Apple and Sony using their own marketing names to advertise the processors they were using certainly didn’t help. To this day, PowerPC is often used as the name of the entire ISA, which is incorrect. The proper name for the ISA today is the Power ISA, but the confusion is understandable.
The Power ISA, and related technologies, have been made freely available by IBM for anyone to use, and the specifications and reference implementations are open source, overseen by the OpenPOWER Foundation. The goal of the OpenPOWER Foundation is to enable the various partners involved in making Power hardware, like IBM, NXP, and others, to work together and promote the use and further development of the open Power ISA. In 2019, the OpenPOWER Foundation became part of the Linux Foundation.
With Apple no longer making any Power-based computers, and with game consoles all having made the transition to x86, you may be left wondering how, exactly, you can get your hands on this fully open hardware. And, even if you could, how exotic and quirky is this hardware going to be? Is this another case of buying discarded IBM POWER servers and turning them into very loud workstations with tape and glue, or something unrealistic and outdated no sane person would use?
Thank god, no.
Luckily for us, one company sells mainboards, POWER9 processors, and fully assembled POWER workstations: Raptor Computing Systems. Last year, they sent me their Blackbird Secure Desktop, and after many, many shipping problems caused by UPS losing packages and the effects of COVID-19, I can now finally tell you what it’s like to use this truly fully open source computer.
Specifications
The Blackbird Secure Desktop is built around Raptor’s Blackbird micro-ATX motherboard. This motherboard has a Sforza CPU socket, 2 DDR4 RAM slots compatible with ECC registered memory with a maximum combined capacity of 256GB, 2 PCIe 4.0 slots (16x and 8x), 2 gigabit Ethernet ports, another Ethernet port used for the BMC (OpenBMC – more on that later), 4 SATA ports (6Gb/s), more than enough USB options (4 USB 3.0, 1 USB 2.0), and two RS-232 ports (one external, one internal using a header). On top of that, it has a CMedia 5.1 audio chip and associated jacks, an HDMI port driven by the on-board ASpeed graphics chip, as well as the ASpeed BMC.
The board also comes with amenities we’ve come to expect from modern motherboards, like fan headers, an internal LED panel that displays the status of the motherboard, standard front panel connectors, a header for external audio, and so on. You also get a number of more exotic features, such as various headers to control the BMC, headers to update the open source firmware packages on the board, a FlexVer connector, and more. The only modern amenity that’s really missing from this board is an M.2 slot, which is something Raptor should really add to future revisions or new boards.
In what will be a running theme in this review, for an exotic non-x86 ISA, the Blackbird motherboard is decidedly… Normal. Anyone who knows their way around a regular x86 motherboard won’t be confused by the Blackbird. Neither the unique ISA nor the fact that the entire board is free of binary blobs makes it any harder to use than any other motherboard. Sure, the processor socket and the cooler mounting mechanism are a bit different, but even within x86 there are various different socket types and mounting mechanisms, so this is just another one to add to the list.
My preassembled machine came equipped with the base processor option – an IBM POWER9 processor with 4 cores and 16 threads, running at a base clock speed of 3.2GHz, with a turbo frequency of 3.8GHz. Unlike x86 cores, POWER9 uses four-way multithreading (or eight-way for the more exotic chips). This particular processor also boasts 48 PCIe lanes. You can also configure the Blackbird Secure Desktop with an 8-core variant, but higher core counts will most likely lead to instability and downclocking due to power delivery constraints. If you want more cores, you’ll have to step up to the single-socket Talos II Lite board or the dual-socket Talos II board.
My machine further came equipped with 64GB of registered ECC DDR4 RAM (running at 2666MHz) and an AMD Radeon Pro WX4100 GPU. To circumvent the lack of an on-board M.2 slot, my machine came configured with a PCIe M.2 adapter carrying a Samsung 960 EVO M.2 SSD at 500GB. All this hardware is housed in a relatively small generic Antec desktop-style micro-ATX case (with a stand for orienting the case vertically), and is powered by a standard 300W TFX power supply.
Performance is excellent, and benchmarks show that POWER9 processors can hold their own against competing x86 processors from Intel and AMD. Not once did I feel this machine was lacking in power, performance, or smoothness.
Of note here is that if you buy the Blackbird motherboard and CPU separately and build your own machine from there, you can use any regular PC case you want, as long as it can fit a micro-ATX motherboard. The same obviously applies to the power supply – if it’s ATX, you’re good to go. And while the board supports registered ECC memory, you can opt for cheaper, regular memory too. I’m guessing quite a few OSAlert readers have a random case, PSU, and some DDR4 memory lying around, so if you’re interested in building a POWER9 machine, you won’t necessarily have to buy a lot of specialised, expensive equipment.
There’s an elephant in my room
One aspect where hardware like this decidedly differs from generic x86 is pricing. Exotic, niche hardware like this that eschews the large PC part makers is not cheap, and the Blackbird is no exception. Time to rip off the band-aid: a base configuration of the Blackbird Secure Desktop, with the 4-core/16-thread CPU, 8GB of ECC registered RAM, no dedicated GPU, and a 128GB Samsung NVMe drive will set you back $3,370. My model, with the bigger SSD, dedicated GPU, and 64GB of RAM, is considerably more expensive at an estimated $5,000. Buying just the motherboard with the base 4-core/16-thread processor and passive 2U CPU heatsink costs $1,732.07.
There’s no getting around it: that’s a lot of money. You can get a lot of x86 for that – current processor and GPU shortages notwithstanding – and there are going to be a lot of people here who would be perfectly fine with that. However, this hardware does offer the one thing other platforms simply cannot offer: complete openness. There isn’t any other platform that’s completely free and open source from top to bottom. Is that unique feature worth the price of admission?
If you’re tired of companies like Apple, Intel, Microsoft, and so on invading your privacy and taking ownership of “your” hardware, or in case you’re a journalist investigating serious corporate or government crimes – either in totalitarian dictatorships like China or in western democracies – it just might be. There’s really no other way to know for sure your hardware hasn’t been compromised.
These machines cost a lot of money, but that’s the price to pay for hardware you actually own, instead of merely lease. Machines from x86 competitors don’t go beyond sort-of-but-not-really disabling the IME and some open firmware, which is obviously better than a fully locked-down machine, but nowhere near something like the Blackbird.
Are you sure this is exotic?
Taking the machine out of the box and setting it up is pretty much identical to any other computer, but the server-like architecture of the Blackbird does come with a few peculiarities that you won’t find in generic x86 hardware. Much like a server, the Blackbird has a BMC – running OpenBMC, an open source BMC firmware stack – that powers on first, the second you connect the PSU to the power outlet. It’s the BMC’s job to interface between the system-management software and platform hardware. OpenBMC is a tiny Linux distribution designed specifically for running on BMCs.
The BMC outputs to both the VGA port and serial, but most of us will use the former. Once the BMC has fully booted its Linux installation, you end up at a Petitboot menu, where you can select your preferred boot device.
Petitboot is an operating system bootloader based on Linux kexec. It can load any operating system image that supports the Linux kexec re-boot mechanism like Linux and FreeBSD. Petitboot can load images from any device that can be mounted by Linux, and can also load images from the network using the HTTP, HTTPS, NFS, SFTP, and TFTP protocols.
Petitboot might be one of my favourite features of the Blackbird. It automatically recognises any bootable medium, and can rescan for new media even once it’s already running. Think of it as a combination between a BIOS boot menu and GRUB, but easier to use than both. In Petitboot you can also check system logs, change individual boot options, exit to a shell for more control, and more.
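To make the mechanism a bit more concrete, here is a rough Python sketch of the kexec handoff Petitboot performs once you pick a boot entry – assuming kexec-tools is installed and you are running as root; the kernel, initrd, and command line below are made-up examples, not the exact calls Petitboot makes.

#!/usr/bin/env python3
# Rough sketch of a Petitboot-style kexec handoff. Hypothetical paths;
# requires kexec-tools and root privileges.
import subprocess

kernel = "/mnt/boot/vmlinuz-5.11.12-300.fc34.ppc64le"        # example path
initrd = "/mnt/boot/initramfs-5.11.12-300.fc34.ppc64le.img"  # example path
cmdline = "root=/dev/nvme0n1p3 ro"                           # example arguments

# Stage the target kernel and initrd in memory...
subprocess.run(
    ["kexec", "-l", kernel, f"--initrd={initrd}", f"--append={cmdline}"],
    check=True,
)
# ...then jump straight into the new kernel, skipping a firmware reset.
subprocess.run(["kexec", "-e"], check=True)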
From here on out, booting an operating system is pretty much identical to any other PC. Linux and several BSD variants are supported, with the more popular operating systems on POWER machines like these being Fedora and Void Linux. Installing these distributions is identical to installing their x86 counterparts, and the two distributions I tried, Fedora and Void, have outstanding support for POWER and work out of the box, without any additional hacks or tricks.
Actually running these distributions – I settled on Fedora myself – is almost an entirely uneventful experience. Everything just works, and other than actively searching for it, you’d be hard-pressed to find any signs you’re not running on x86. The repositories for Fedora seem fully covered, and even external projects such as RPM Fusion just work. I run Fedora 34 using Wayland, and that, too, works entirely flawlessly.
There are a few notes, however, about running Linux on POWER. First and foremost, the browser situation. Firefox is my preferred browser, but the POWER9 version is severely crippled because its JIT has not yet been ported to ppc64. This means anything more complex than basic web pages brings the browsing experience to a crawl, and using Firefox on POWER is, therefore, a very unpleasant experience. There is an effort underway to port the Firefox JIT to ppc64, but it seems it hasn’t been very active.
With Firefox being problematic on POWER9, the best browser to use is Chromium. The open source base for Google’s Chrome browser has been ported to ppc64 and works perfectly fine and without any issues, with my preference definitely going to the Ungoogled Chromium version, so we don’t have to deal with any Google nonsense on a fully open source workstation. The installation is straightforward – add the repository and install it from there, or download the specific RPM for the latest release.
The second limitation of running Linux on POWER is one that is entirely obvious, but that I want to mention anyway. It goes without saying, but anything that is not or cannot be ported to POWER won’t run. There isn’t much of this kind of software – one of the strengths of the Linux world is the relative ease with which different architectures can be supported because of its open source nature – but it does exist.
An example of this is obviously video games. Steam, which thanks to Proton and native Linux games has turned Linux into a very capable gaming platform (I don’t run Windows at all anymore), doesn’t run on POWER, and while work on bringing Wine to POWER is underway, I doubt it will deliver usable performance for games. Interestingly enough, since Minecraft, one of the most popular games of all time, is written in Java, it runs just fine on POWER with a small modification. The latest version of Minecraft – 1.16.5 – is available for POWER.
Other than these two limitations, running Linux on the Blackbird is an uneventful experience. My biggest surprise while using Linux on POWER is just how… Pedestrian it all feels. If you’ve used Fedora or Debian or Void on x86, you’ve pretty much used them on POWER, too. For instance, I was pleasantly surprised to see that the very latest version of my Linux Twitter client of choice, Cawbird, was available in the Fedora ppc64 repositories without any issues, which you just wouldn’t expect from a non-essential app developed by a small team.
Adding a dedicated GPU
There is one other unique quirk of the Blackbird that straddles the line between software and hardware. The onboard ASpeed graphics chip isn’t exactly great – it maxes out at 1920×1080, with performance that’s only just usable – which means most people will want to add a dedicated GPU. However, adding a dedicated GPU requires loading a proprietary firmware blob, which goes against the very nature of the hardware. As such, if you are interested in a Blackbird because your use case requires 100% user-controlled, open source hardware without any proprietary code, you have no choice but to stick to the more limited ASpeed graphics or possible future fully open source graphics cards.
For people willing to make the concession and add a dedicated GPU, there are a few steps you need to take that aren’t required on x86 hardware. The firmware required for your GPU needs to be loaded by the Linux video drivers in Petitboot; a small area of the firmware’s flash storage – about 1.8MB – has been set aside specifically for firmware loaded this way, and you need to copy the required firmware into this area.
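As an illustration only – the real steps, mount points, and file names are in Raptor’s documentation and depend on your card – the copy-and-size-check idea looks roughly like this in Python:

#!/usr/bin/env python3
# Illustrative sketch: gather the firmware files a hypothetical Polaris-class
# card needs and make sure they fit in the ~1.8MB region reserved in flash.
# The file list, source, and staging directory are made-up examples.
import shutil
from pathlib import Path

budget = int(1.8 * 1024 * 1024)             # reserved firmware area, in bytes
source = Path("/lib/firmware/amdgpu")       # firmware shipped by linux-firmware
staging = Path("/tmp/petitboot-fw-staging") # hypothetical staging directory

# The files the amdgpu driver requests differ per card; these are examples.
needed = ["polaris11_ce.bin", "polaris11_mc.bin", "polaris11_smc.bin"]

staging.mkdir(parents=True, exist_ok=True)
total = 0
for name in needed:
    total += (source / name).stat().st_size
    shutil.copy2(source / name, staging / name)

print(f"Copied {len(needed)} files, {total} of {budget} bytes used")
if total > budget:
    raise SystemExit("This firmware set does not fit in the reserved area")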
Once you know which firmware files you need, it’s not a difficult process – especially not for people reading OSAlert – but it is the only instance I’ve experienced where there is a marked difference between using Linux on regular x86 and using Linux on POWER. There’s room for making this process a little easier – maybe through a script or a tool that takes some of the guesswork and manual commands out of the equation – but making it easier to compromise the security of machines like this seems… Counterproductive.
In short, while using the onboard graphics is a must if you need to maintain the security of the machine, you at least have the option to move to a dedicated GPU for massively increased performance. Whether or not you feel comfortable doing so is a question I cannot answer. Firmware blobs like these have access to a lot of important areas of the system, so running unaudited, closed source firmware is a massive security risk.
Proceed with caution.
Some random Post-its®
I’ve noticed that quite a number of people who remember why Apple transitioned to Intel in 2006 have a tendency to assume the Blackbird will be an overheating power hog. Nothing could be further from the truth, as the user-reported power consumption figures illustrate. The 300W power supply my system came with has no issues powering the hardware, and while POWER does run a little hotter than x86 processors tend to (70-90°C), this is normal for POWER and within the temperature range Raptor’s engineers aim for.
I am not a big fan of the case the Blackbird comes in, since its airflow is pretty terrible. The 2U CPU cooler the Blackbird Secure Desktop comes with is a passive heatsink, connected to the PSU fan through a duct, effectively meaning the PSU fan draws air past the CPU heatsink, exhausting it out the back. However, since the front of the case is almost entirely closed off, the influx of ambient air isn’t going to be great. The upside is that the case is quite small, and easy to stow away under or next to your monitor or desk.
Raptor and I are discussing the possibility of sending me the 8-core CPU with an actively cooled 3U heatsink, so I can transplant the mainboard into a bigger, airflow-optimised case. If this goes through, you can expect a follow-up article with some benchmarks comparing the 4-core CPU to the 8-core model, as well as information about whether we can get lower temperatures – and thus, less fan noise – using a bigger case, which is valuable information for people considering buying just the mainboard. If you would like me to test some of the BSDs or a specific Linux distribution, let me know, and I’ll see if I can write about that, too.
Note that aftermarket coolers do not exist; you can choose between Raptor’s fanless 2U cooler and the 3U cooler with a fan. While you could probably jerry-rig some Intel/AMD coolers with some redneck engineering and elbow grease, do so at entirely your own risk.
Conclusion
I’m rarely this positive in reviews, but I have to say I love the Blackbird. Having such a capable, modern workstation that is entirely open source, without any dubious, unaudited firmware blobs anywhere in the system is something I deeply appreciate. We’re in the middle of the war on general purpose computing, and it seems that every day we read the tech news, we learn of another consumer or user right that we seemingly give up without a fight to the likes of Apple, Google, Microsoft, Intel, and others.
The Blackbird, and its higher-end sibling the Talos II, is, as far as I know, the only fully open source alternative to the Intel and ARM machines that you lease, not buy. That you may use, not own.
That being said, the Blackbird has a number of problems, with the most obvious one being its price. The cost of admission to the front lines of this war is nothing to sneeze at, and it’s entirely unreasonable to expect someone who worries about the state of computing to just shell out this kind of money. Most people’s computing budgets – including my own, since our first kid is on the way! – simply do not have any room for $3000+ machines, and there’s nothing wrong with appreciating a machine like this without being willing to spend the money to own one.
Still, the mere fact a fully open source machine like the Blackbird exists at all is astonishing. Here we have a fully capable, easy to use and modern computer that is fully open source and free of proprietary code, that is barely distinguishable from a proprietary firmware-ridden PC or, even worse, Mac. All I can hope for is that Raptor, its customers, and its suppliers like IBM, can somehow, perhaps slowly, manage to bring the price down, making truly Free hardware accessible to more and more people.
Also, a laptop would be nice, but you know, baby steps!
The Blackbird Secure Desktop is an excellent piece of hardware, and a machine the current abysmal state of the computing landscape desperately needs.
Ouch, that pricing is a bit ridiculous from a price/performance standpoint. The whole “totally open source” thing seems a very flimsy value proposition at that cost, honestly.
A pity, I’d love to have a Power9 workstation to fart around with. I’ve always liked the Power architecture.
javiercero1,
It’s not for you if you don’t appreciate openness. To be perfectly honest I could seriously see myself using this instead of an x86 computer. It even has a BMC, which is awesome. I suspect quite a few of us would appreciate totally open hardware. Alas, at this price point it’s just not something I can afford.
Apparently, we both have a similar appreciation for openness, since neither of us considers this product to have enough of a value proposition to justify purchasing it.
javiercero1,
Appreciation and having enough money are two independent variables. To quote Thom Holwerda:
Oh, congratulations Thom!
Compared to server and workstation stuff, the price isn’t that outrageous.
I’d really like to see something based on the A2O processor. That looks to be more in line with AMD’s and Intel’s consumer stuff.
They are being hopeful. Good luck finding anyone who can build a secure general purpose platform.
That supply chain thing again.
So you plugged it into the wall without a secure router? Oh, dear. It gets worse.
I’m just funning around. It was a very thorough review that raised lots of interesting points and made some nice observations, especially from a manager’s or end user’s point of view. The issue of open and secure systems won’t go away, and a system like this requires real expertise and a lot of resources to make the fullest use of it, but it shows what is possible and opens up discussion, so this is useful. Like you say, baby steps!
What’s the carbon footprint of this system? I’m guessing that on a per-unit basis the answer is probably horrendous relative to the wider hardware platforms, so it seems like a huge barrow to push for a smallish benefit, and somewhat ironic!
Thom, would you consider using a kill-a-watt for reviews? Ideally several measurements for off/idle/heavy load…
Also, I’m really curious about the scalability of 4X threads per core. Are you open to benchmarking the systems you review?
An actually usable machine at “not such a steep” price? However, “without any proprietary code” is not 100% true.
As the article mentions, the GPU runs dedicated firmware code. But that is not the only extra code inside the computer. The HDD/SSD controllers are proprietary, and they are already known to be hackable attack targets: https://icmconference.org/wp-content/uploads/A14-VanK-HardDrive_Firmware_Hacking_ICMC-Copy.pdf .
And having open source BMC is very good, but might not be perfect. ASPEED chips used on POWER machines (among others) had known vulnerabilities: https://www.zdnet.com/article/bmc-caught-with-pantsdown-over-new-batch-of-security-flaws/
At the end of the day, this is still better than 99% of the things out there.
sukru,
That’s a good observation. Intel AMT has had some sinful vulnerabilities in its proprietary firmware and is a good incentive to move away from closed proprietary systems, but at the end of the day open firmware is only part of the solution. We also need the code to be audited. A compelling case could be made for these subsystems to be running formally verified kernels. This would really raise the bar for secure systems compared to typical x86 systems of today.
Alfman,
BMCs are double-edged swords. They are really helpful, but also have insecurities by design.
It goes without saying they should be behind a firewall, and in a separate LAN (or VLAN). But that is not enough. For example, Supermicro variants can have their passwords reset from a local shell (I could not find a definitive answer for the OpenBMC one).
So, if an exploit somehow gains local root (which is already very bad), it can in theory leave behind backdoors through the BMC:
https://serverfault.com/questions/85042/is-it-possible-to-reset-the-password-on-a-supermicro-ipmi-interface
Computer security used to be much simpler…
Yes, you are right. It’s not just Supermicro, either.
Virtually all admin tools can potentially open up attack surfaces. The capabilities of SSH can be invaluable, even though enabling it opens up a greater attack surface. A VPN is simultaneously useful and yet can open up new potential lines of attack. The way I see it, there are legitimate and even mandatory use cases to use these kinds of admin tools. When used properly, they can offer a high degree of security. However it really sucks when we’re forced to rely on proprietary blobs in positions of trust and acting as gatekeepers. I do not consider this acceptable at all, however x86 vendors for their part (ie intel, dell, hp, etc) are collectively guilty.
Ideally to me the BMC would just be a very simple low power SBC that runs the user’s operating system of choice without relying on anything proprietary. Maybe running off an SD card or something where you could set the write-protect tab if you are paranoid enough.
Actually, we are in luck:
https://pikvm.org/
I had forgotten about this project. I took a look a while back, but realized my soldering skills were not good enough, and passed on it. But it will allow a Raspberry Pi (4 or Zero W) to handle 90% of the BMC tasks (and the 2 and 3 a smaller subset).
It even supports the Redfish standard, which modern BMCs use to communicate.
There is a bit of work involved, and it requires external connectors for USB, power control, and an HDMI-to-camera converter.
sukru,
That is super awesome! The product is still in pre-order, and I’d rather buy a turnkey product than build it myself, but I am definitely interested!
“The kit will cost about $130 – or less, we are working to make it as cheap as possible.”
I really hope they have a good case that covers up the exposed breadboards and wires, since that’s not really production ready. Apart from that, this looks like it could be way better than the proprietary stuff I have now!
Alfman,
It actually looks nice in the case, and can be used with a KVM, too: https://youtu.be/dTchVKxx7Fo?t=265
Nitpicking here, but Nintendo Switch is ARM…
Q: Could this platform play games…?
It is a perfect workstation for pretty much anything. I could play many games (provided they have source code available) without any issue at all; I’ve been documenting my exploration in https://www.youtube.com/watch?v=erNb_5mFypw&list=PLDegflDdH9RKn08gkPWb_v71hhMFFeTD6
We obviously have very diverging definitions on what “perfect” means. Ha ha.
I think a $5K workstation to run a 20 year old game at low frame rate is a bit suboptimal gaming experience, but that’s just me…
Yes, this is not a gaming computer, it is more for professionals and scientists. But being able to support some games is a plus (to me), a workstation should be fun.
The GPU performance is pretty much on par with x86_64, simply because the amdgpu binary blob is exactly the same.
So the AMD drivers work natively on the PPC with full acceleration? That’s pretty good news. I knew that AMD had opensourced a lot of the linux driver, but I didn’t know if it was a portable deal that allowed AMD GPUs to work on non-x86 systems.
Yes, full 3D acceleration. There are known crashing issues when running with 64K pages (CONFIG_PPC_64K_PAGES=y) on certain cards. The solution is to use 4K pages (CONFIG_PPC_4K_PAGES=y).
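If you’re not sure which page size your kernel was built with, a quick check from a running system looks something like this (it just reads the kernel’s page size, so it works on any Linux box):

#!/usr/bin/env python3
# Print the running kernel's page size: 65536 means a 64K-page
# (CONFIG_PPC_64K_PAGES) kernel, 4096 means 4K pages.
import os

page_size = os.sysconf("SC_PAGE_SIZE")
print(f"Kernel page size: {page_size // 1024}K")
if page_size == 65536:
    print("64K pages: some cards may need a 4K-page kernel to avoid crashes.")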
In fairness, I’m pretty sure the game is locked at 75Hz. It stays there the entire video, no matter what is happening on screen.
Most games now run at 60fps, but there are many first-person shooters that can go further, like 120Hz to 240Hz. It comes down to the monitor’s ability to support high frame rates.
Heh. “Open” just means it’s easier for the original supplier or anyone else with access (retailer, courier, software repository maintainer, coworker) to create a malicious/trojan clone of the hardware/firmware/software; and often (for open source projects like OpenBMC which tend to rely partly on poorly vetted and unpaid volunteers) it’s easier for an attacker to become a developer and slip something into the original. This means that you end up having to trust more people more; which makes security worse.
Note 1: I’m assuming that nobody is able to verify that what they bought actually matches the original “open” design; which is a very reasonable assumption given that it’d take some extreme equipment and a huge amount of time (e.g. electron microscope and many decades) to check.
Note 2: The only true basis for trust is the consequences of being proven untrustworthy later (e.g. “I can trust you because if you break that trust you will lose profit”). From this perspective there’s also minimal basis for trust as the company itself is a small high risk venture.
Higher price for “same or worse” security isn’t a great proposition.
In addition; there’s also no alternative suppliers. You lose the ability to switch from one motherboard manufacturer to another, or from Intel to AMD, or …. This means that the risk of “vendor lock in” (which is arguably the biggest problem for products from companies like Apple and Microsoft) is also “same or worse”. Lack of diversity has other problems too (e.g. lack of competition between hardware manufacturers to drive prices down).
Brendan,
Your point is certainly true for hardware, which is difficult to validate. But the same criticism doesn’t really apply to open drivers & firmware. Obviously you’re right that open drivers can’t solve hardware trust issues, but it’s still incrementally better over proprietary drivers.
Unfortunately though, this trust is often ill-founded in practice. Whether intentional or not, manufacturers of proprietary devices including Intel, Dell, HP, Cisco, and on and on continue to produce exploitable code. We need to make a commitment to improve, or else I have no doubt that we will go in circles repeating the same mistakes over and over again. Open source is a good start because it reduces the need for implied trust in proprietary blobs. We’d make a lot of progress by phasing out dangerous programming languages one project at a time. And long term we should be training students on formal code verification so that eventually more programmers could apply verification skills to real world systems. I think these are good plans, but it’s another thing altogether to actually get the industry to implement them. I’m afraid the most likely outcome may just be more of the same…
Bringing prices down is a matter of scales of economy, which may prove very difficult to overcome. For better or worse the market usually follows the incumbents.
Hardware-wise, I agree. But software/driver wise, FOSS is actually one of the best ways we have to combat vendor locking.
All people writing drivers (both proprietary and open) continue to produce exploitable code. The difference is that for proprietary code nobody except the developers looks at the code (because they can’t), and for open source nobody except the developers looks at the code (even though they can in theory, if they have the skills to understand it, and if they have nothing better to do with their time than to spend weeks familiarizing themselves with a foreign code-base and the low level interface of a specific device).
The best solution is to not need to trust drivers in the first place (a micro-kernel); combined with a digital signature scheme (to prevent “trojan malware” drivers – so you can prove that a driver signed using Intel’s key definitely did come from Intel and hasn’t been tampered with since). Sadly; we’ve known about these solutions for decades, and yet we’re still dealing with absurd security disasters like Linux (e.g. “Let’s put all drivers in kernel space and map all RAM into kernel space so that any minor bug in any driver can be exploited to access anything in RAM; and then have no useful security or safeguards whatsoever!”).
For software/drivers, FOSS is dependent on vendor lock in. Most of the push against “binary blobs” comes from FOSS kernel developers saying “If it’s not in our code you can’t trust it” (and then deliberately not having a stable driver interface to prevent third-party drivers from working). Ironically; this extreme vendor lock in is a direct result of the “no security whatsoever” I mentioned (in the absence of any security that matters, random unskilled users looking at the source code with blank stares of pure confusion is their holy grail of “faux security”).
Brendan,
Clearly the set of people who review FOSS drivers is much smaller relative to the user base, but often still bigger than the set of original developers. I mean I’ve spot checked code and reported bugs in open source software and drivers, the process does work. Granted no one can physically review everything and there are never guarantees, but nevertheless some FOSS projects require multiple people to sign off on code before committing it, and it’s far easier to hold people accountable given public audit trails, which is way more than can be said of proprietary products. Not to mention that FOSS makes independent audits possible.
https://www.helpnetsecurity.com/2016/12/09/openvpn-security-audits/
Intel’s vpro vulnerabilities were especially annoying to me because I was having trouble with one of the very same vpro authentication functions that turned out to be vulnerable….it’s very likely I would have discovered this fault sooner than intel did if it had been open source.
So I personally prefer FOSS software over proprietary software and think proprietary blobs being more secure than open source is a hard sell. But that’s just me.
I don’t have a problem with that, but good luck convincing the world, haha.
Unfortunately this tends not to be that useful. Whether it’s something like MS eternalblue vulnerabilities or intel vpro vulnerabilities, dell drac vulnerabilities, etc. the code often IS signed and it IS authentic, but it’s still vulnerable anyways. Code signing does absolutely squat to solve trust issues in proprietary code. And anyways code signing works just as well with open source so I don’t think it changes the balance between open versus proprietary.
A bit of a stretch, n’est-ce pas?
I agree with you that lack of driver stability is problematic, however the main push against binary blobs is very straightforward: we want to see what the code is doing and fix it if necessary. Our interests for openness are not simply contingent upon ABI stability.
Every person on earth can invent a fake identity, then spend ~6 months or so creating good/useful patches to get a project’s developers to trust them, then start slipping in back doors (e.g. an occasional “innocent looking mistake” hidden in a large change). You’d have to assume that NSA (and/or China, and/or Russia, and/or…) already did this to Linux (and/or Gnome, and/or FireFox, and/or …).
Sure; if it’s ever discovered (after it’s too late) it could be tracked back to a meaningless pseudonym, and if the attacker can’t pass it off as an innocent mistake they’ll have to spend another few months getting developers to trust a new/different email address.
In contrast, for proprietary software, it’s not “LOL, spend 5 minutes creating a new email address” – you’re looking at creating bank accounts and tax file numbers, and a valid home address; and people recognizing your face; and passing job interviews (where “referees” and past employers are likely checked).
Of course that’s assuming the attacker/s couldn’t be bothered doing a larger scale attack (e.g. creating a fork of an existing project, or a whole new distro, so that they can install their own “patch reviewers”); and it’s assuming they don’t try a different approach (e.g. getting access to an existing distro’s repository so that everyone downloads/installs corrupted binaries that don’t match the source code at all).
That isn’t the kind of attack that digital signatures protect against.
Would you let a computer repair shop fix your Linux computer, knowing that it would take less than 1 hour for them to replace your kernel with a maliciously modified clone (and maybe 2 hours to create a copy of all the data on your encrypted file systems) without you knowing it?
Who is “we”? There’s about 7800 million people in the world and I’d guess that less than 1% are able to understand the source code, less than 1% of those will ever look, and out of that irrelevant “niche within a niche” they’re only going to look at a tiny fraction of all the source code they have to trust (probably averaging several thousand lines out of many billion lines of source code spread across boot code, kernel, drivers and user-space). Heck; even people like Linus Torvalds (or Richard Stallman, or Andi Kleen, or….) have probably never looked at (e.g.) the source code for GRUB, or the internals of Wayland’s user-space libraries, or the majority of Firefox. They’re only slightly less clueless about the huge volume of code they’re trusting.
The real “we” is actual normal people, who have proven repeatedly (e.g. Microsoft’s continuing dominance in desktop) that they will never give a single crap. The only thing the overwhelming majority care about is cost; and if they use open source at all it’s because it’s “zero price” and not because it’s open.
“Guaranteed 100% secure” is impossible. All security comes down to making it so difficult that the attacker finds an alternative method to achieve their goal (why rob a bank if it’s easier to get a normal job? Why break RSA encryption when it’s easier to use social engineering? Why decrypt someone’s hard drive if it’s easier to threaten them with a shotgun?).
The proprietary benefit is that creating a malicious forgery of authentic software is harder than creating the original authentic software (e.g. consider ReactOS and all the work they’ve done trying to create a clone of Windows, then multiply that effort by 10 because they’re not trying to make something that is indistinguishable from genuine Microsoft and only want something that is compatible). For comparison, creating an indistinguishable malicious forgery of (e.g.) Debian would only take an attacker a few days.
Note that “security through obscurity” is mainly about algorithms. E.g. being able to tell everyone “it uses AES-256 for encryption” rather than having to say “I can’t tell you what the algorithm is because if other people know it’d break the security”. For almost everything else you still have to depend on obscurity (e.g. do not broadcast your private encryption keys and login passwords to the general public – security depends on obscuring the daylights out of those things).
I can:
a) open source is significantly more vulnerable to “original supplier vulnerability” (e.g. people injecting back doors into the project’s source)
b) open source has an extreme risk of “trojan clone forgery”
c) open source has a significantly higher risk from vulnerabilities in the supply chain (e.g. repository servers between original source code author/s and final consumer/installer of binaries)
d) Open source tends towards a monoculture and dampens diversity. For an example, if you ask people to list new operating systems from the last 20 years you’ll be lucky if someone mentions Fuchsia (but even one of the largest companies on earth is feeling the pressure to add Linux compatibility and turn Fuchsia into yet another boring pile of *nix crud because forking existing open source user-space stuff is significantly more economically viable than doing anything new).
e) it can be construed as an economic attack (like predatory pricing but worse) – a deliberate attempt to destroy an industry by using “the illusion of free” (e.g. social engineering to reduce development costs combined with a system of hidden fees to cover the rest) to wipe out honest and ethical “pay for what you use” competitors. To understand this, imagine what would happen if one car manufacturer started giving away free cars (but petrol/gas/diesel prices jumped 10 times higher due to oil companies subsidizing the “free” cars). For software; the main difference is that (for most normal consumers) the cost of physical hardware is much higher than the (easily amortized) cost of software development, so people don’t notice much when (e.g.) Intel or AMD increase prices of hardware to subsidize Linux development. Note that in the last 20 or so years almost every OS died (Solaris, NextStep, BeOS, Blackberry, …) and the few that remain become hardware providers (Apple) or cloud providers (Microsoft Azure) to remain viable during the (slow and ongoing) collapse of the industry.
f) The quality is often substandard due to management/leadership problems, funding issues and pointless forking. E.g. an open source project can’t tell a volunteer “do this or you’ll be fired” and nobody can say “we’re switching to Wayland in September so you all have 6 months to update all software and all documentation to ensure there’s a nice smooth transition for end users”.
Brendan,
You’re making a lot of assumptions here. Not all open source projects let random people off the street commit stuff without any verification. Many FOSS projects are under a corporate structure and submissions get reviewed. Being open source doesn’t mean it’s a free-for-all. Those involved are not inherently shadier than corporations with black boxes. If anything FOSS has an advantage because there’s much less secrecy surrounding the code.
That has nothing whatsoever to do with FOSS.
In the context of what you responded to, “we” is developers. And given the high profile of something like IME, I am certain there are thousands of us interested in reviewing that code, myself included. Honestly I think there is lots of merit for an open source replacement.
Brendan,
I don’t know who you’re quoting there, but social engineering attacks have absolutely nothing to do with FOSS vs proprietary.
No, that is not what is meant by “security through obscurity”. You’re allowed to have keys. Security through obscurity means the security of the lock is compromised when an attacker learns how it works despite not having a key.
Most of your points are variations of the same point where you are making assumptions about how the project is managed. Are you going to boycott open source browsers, javascript engines, tools, etc just because they’re open source? No, that’s ridiculous.
We don’t trust or distrust a project because of the license, we trust or distrust it because of reputation. And while reputation can be faulty, this is tangential to the point of FOSS versus proprietary. There are countless reputable FOSS projects and proprietary ones too.
This is a great example of the anti-FOSS bias I was talking about. Are such things possible? Absolutely. But this has been such a rampant problem for closed source windows software that it’s hugely hypocritical that you’d try to object to FOSS over it.
And not for nothing, but it’s very ironic that commercial software distribution is increasingly taking the shape of central repos used in the FOSS world for decades because it’s more secure for average users.
A lot of your points are so hypocritical though. Mono culture and substandard management/leadership problems are huge problems in the proprietary corporate world too. You’re right, but it has absolutely nothing to do with the licensing.
All of these things are the kinds of bias I’m talking about. We all have gripes about bad management, I can appreciate that. But let’s be honest, source code availability has very little to do with it!
@Brendan
Yes, all true. I’m still waiting for the penny to drop about the fundamental theories governing this but so far you are the only one who has posted on one of these. Any academic paper or stack of academic papers is subject to these fundamental theories if you track them back.
Right again on closed versus open systems.
I use security through obscurity. It’s a valid tool. What with and when is a secret. Sometimes in plain sight. Sometimes covert. Nothing hugely important but it keeps me amused.
HollyB,
But why though? You probably use a lot of software without even realizing that it’s built on open source. What browser do you use?
Microsoft has published source code for software that gets a lot of use by millions of users, like .NET, PowerShell, npm, MS PowerToys, and even Windows Calculator, haha. Are you going to criticize them, or is this just selectively hating on open source for projects and companies that you don’t like?
An obscure project may be able to get away with it for a while because it isn’t a target. But for mainstream products security through obscurity is a disaster waiting to happen.
https://www.crn.com/news/security/microsoft-exchange-server-attacked-by-chinese-hackers
Security by obscurity doesn’t work and as developers it is our responsibility to improve the standards of our industry to address the exploits that leave us vulnerable. This advice applies to both open source and closed source software. Blaming FOSS is a diversion to the real issues.
How, exactly, do you think it helps?
If normal people can install any IME code they like, then it’s a huge gaping security disaster (because attackers can install any IME code they like). If normal people can not install any IME code they like (e.g. the binary must pass a digital signature check before hardware will accept/use the binary) then there’s no practical way for anyone to check if the IME binary that is actually being used in their computer matches the open source code. You end up with silly assumptions that mean nothing (e.g. the assumption that if the source is good, then the binary that may have nothing to do with the source at all must also be good).
The end result is that regardless of whether it’s open source or not you must trust the manufacturer/supplier (e.g. trust that they are providing binaries that do match the source); and if you don’t trust the manufacturer then open source makes no difference whatsoever.
Sure; if you do trust the manufacturer (and trust that they are providing binaries that do match the source) and want to help the developer save a few $$ by helping them find their bugs for free (so that they don’t have to pay someone competent to do a formal security audit) then open source can be an effective way for them to reduce costs by taking advantage of you. In practice it also adds costs (preparing source and documentation for the general public, providing access for external people to obtain the source code, dealing with random clueless people that don’t have the skills/knowledge to actually be helpful, etc) so relying on gullible suckers to do your job for you can be more expensive. Even in this case, it makes absolutely no difference for security (do you honestly think that a few random nutjobs off the street that have nothing better to do between pondering if the earth is flat or not are actually going to find security problems that a team of highly trained and skilled developers/employees couldn’t/didn’t find?).
Let’s take this one step further. What if a company (e.g. Raptor Computing Systems) employed all of the people involved in writing OpenBMC (and hired all volunteers that might look at the source code as temporary consultants); and made OpenBMC closed source/proprietary. Would it suddenly become less secure purely because it became proprietary; even though the exact same people are doing the exact same thing with the exact same source code and exact same binaries? Obviously this is completely ridiculous (and the assumption that “open” is more secure simply because of magic pixie dust is equally ridiculous).
What this is all about is economics (not security) – leveraging suckers to reduce development costs (with no difference in security at all); and then using “open” in marketing bullshit to convince more suckers that it’s “better” (even though the final product is still significantly more expensive due to low volume production and not more secure at all).
We can even do a fair “apples vs. apples” comparison and look at vulnerabilities that affected everyone equally (most hardware, most operating systems); like rowhammer or meltdown or spectre; and prove that open source did not help anything. In fact if you look at what actually happened, proprietary products were able to roll out fixes/mitigations before the vulnerabilities were publicly disclosed, but open source was prevented from doing that (because rolling out fixes in open source products would disclose the vulnerabilities publicly during the non-disclosure period) and was less secure than proprietary products during the non-disclosure period. In other words, not only did “open” not help security at all, “open” was directly responsible for making security worse.
Think like an attacker – if you wanted to inject a back door into a kernel’s source code (so that people installing the authentic/unmodified OS from its original publisher get pwned), how would you do it? For most (open or proprietary) products changes/submissions will be reviewed so in the end you’re going to need to “slip under the radar” (rely on the back door being hard to notice and rely on the reviewer/colleagues being a little less diligent because they already trust you) regardless of whether it’s open or proprietary. For proprietary there’s a massive hurdle (getting access needed to make any submission) before you can reach that point though.
Given that security is about making successful attacks harder (and never about making successful attacks provably impossible), how does letting the attackers have direct access to source code make it harder for them to create trojan clones?
Sure, it’s “not provably impossible” (see note) for attackers to create trojan clones of proprietary software (and yes, it probably has happened, even though I have never seen any evidence of this “rampant problem for closed source” ever actually happening for any proprietary software) but that was never a practical goal of security in the first place.
Note: Digital signatures and white-lists can make it “almost provably impossible” for an attacker to make trojan clones because they’d need to break (e.g.) RSA-2048 encryption to generate a digital signature that is indistinguishable from the signature on a genuine binary. While this approach has become a fundamental security feature for some parts of modern systems (e.g. UEFI “secure boot”, Windows device drivers, etc) it hasn’t been fully implemented (e.g. normal user-space applications) and does create the potential for other problems (key management – who controls the white list/s?).
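For anyone curious what that kind of check looks like in practice, here’s a toy sketch using Python’s cryptography package – the key and file names are made up, and real implementations (UEFI Secure Boot, signed drivers) are far more involved:

#!/usr/bin/env python3
# Toy example: verify a binary against a vendor's RSA public key.
# File names are hypothetical placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("vendor_pubkey.pem", "rb") as f:
    pubkey = serialization.load_pem_public_key(f.read())
firmware = open("firmware.bin", "rb").read()
signature = open("firmware.bin.sig", "rb").read()

try:
    # Raises InvalidSignature unless the signature matches this exact binary.
    pubkey.verify(signature, firmware, padding.PKCS1v15(), hashes.SHA256())
    print("Signature OK: the binary is the one the vendor signed")
except InvalidSignature:
    print("Signature mismatch: refuse to trust this binary")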
Version control systems have existed since the 1970s (e.g. https://en.wikipedia.org/wiki/Source_Code_Control_System ), were mostly proprietary, and were used by teams writing proprietary software before the open source movement began. They exist because there are no adequate tools for collaboration between members of a team (e.g. currently there’s still no collaborative real-time IDE). Over the years they’ve evolved (e.g. got more “distributed” when the Internet was invented) and open source alternatives have been created; but mostly FOSS uses the same tools that proprietary software development had always been using. You’d have to be an extremely biased against proprietary products to falsely assume that proprietary software development is “suddenly” adopting “new and different” development practices from open source to “improve security” for average users.
The real irony is that companies like Microsoft are using things like Github to help lock people into their ecosystem (including Azure, and service contracts with ICE), while their main competitor (Gitlab) has mostly become “proprietary cripple-ware (built on top of open source parts)” using subscription fees to unlock features. Open source furthering the interests of “for profit” companies isn’t exactly the hippy’s utopia that RMS envisioned.
“Mono-culture between different companies” is rare for proprietary products because it requires the agreement of competitors (and in some cases it’s outright illegal – anti-competitive practices, cartels). As soon as open source gets involved you end up with a “cross-pollination” phenomenon where (e.g.) someone forks pieces from KDE to create WebKit in Apple’s Safari, that becomes (part of) Google’s Chrome (and Chromium and Opera), which then becomes Microsoft’s Edge, and sooner or later you reach the point where no truly new code has been written for 20+ years (just many different people forking and evolving the same old code) and it doesn’t matter what software you install because it’s all mostly the same thing underneath.
I think you’re missing my point – while proprietary companies can/do have problems relating to substandard management, they do not have the specific problems I mentioned (e.g. no ability for a leader to force unpaid volunteers to work towards a specific goal to ensure a consistent end product; resulting in something more like directionless groups working at cross-purposes to the detriment of the project as a whole). The running “year of the Linux desktop” joke that’s spanned 20+ years now is the result of “divided we fall, but choice is good, so let’s have 100+ different distros!” thinking (and the rapid success of Android was the result of applying a strict “for profit” management structure that’s rare in the FOSS world).
I don’t “think” it helps, I *know* it helps. I’ve had multiple occasions to look at the IME source in order to diagnose a problem I was having with automating it. I’ve also had different problems with Dell DRACs and Lantronix devices that I would have liked to fix personally if these devices were open. Not to mention I could fix some of my peeves with these products, like hard dependencies on legacy versions of Java, ugh!
I understand that if you don’t take advantage of openness, you might think that nobody else does, but we do!
I want to be absolutely clear this isn’t hypothetical, these are real limitations and bugs that many of us encounter as real customers, and often the manufacturers don’t care and won’t help. That’s when open hardware really shines!
http://www.osnews.com/story/28229/google-reveals-third-unpatched-90-day-windows-vulnerability/
It couldn’t be any worse than what intel did, but anyways… It wouldn’t need to be enabled by default, just those who explicitly flash an open bios on their own system. Many of these features get locked down in conjunction with secure boot, which is good because it prevents the OS itself from being an attack vector requiring deliberate action to override.
Furthermore, all of the security features we have like remote attestation that keeps OEM systems secure can also work with open firmwares too.
It’s not just about trust though, Intel has proven that its proprietary blobs are of bad quality.
Intentional backdoors -> unknown
Unintentional backdoors -> known
Bad quality -> known
We actually have great role models for open hardware projects, take a look at the highly dedicated communities like dd-wrt who have shown their value to the world many times over with better features and better support than the original manufacturer.
Honestly, you’re stereotyping. Many employed programmers are bad and independent programmers are good, and vice versa. Some of these high end products from reputable vendors are notoriously bad for both quality and security…
https://duckduckgo.com/?q=dell+drac+crashing&t=ffab&ia=web
So yes for me I am positive an open community could do better.
You want to believe that corporate developers are more qualified, but in fact a lot of closed products end up using open source code anyways and then locking down the product (like roku). Are roku’s developers actually better than the open source developers they took from? Heck even intel itself is using open source minix inside the proprietary IME. This whole premise that corporate developers are superior is nothing but bias.
You’re making assumptions that it’s different, but in fact many open source projects have the same processes in place and are even managed by companies. There is no implied difference, just bias against open source.
The risk isn’t so much that an attacker creates a clone, but that they embed malware alongside the official software and re-release it into the wild, probably even in an automated fashion.
There are plenty of open options, but this is completely beside the point anyways because software isn’t distributed to the public using source control. Most unix platforms have used software repositories for decades and microsoft/google/apple are increasingly switching to centralized repos too.
It’s true, a lot of FOSS users hated the MS buyout, although it’s still not an objective reason for end users to avoid open source.
I agree that mono-culture is bad, but wide scale market consolidation is happening independently of software licenses. Open source has not caused this and eliminating it will not solve it! Either way the network effects will go on.
A) Actually there are times when open source projects go in directions that the community doesn’t approve of, just as with proprietary software. This shouldn’t be a surprise because many of these open source projects are owned by corporations after all.
B) At least with source code and open platforms there’s more the community can do to escape restrictions, bad decisions and anti-features.
It doesn’t matter to me whether you use open source or not, but honestly it seems that your justifications for users avoiding open source projects stem from bias rather than material differences. That’s why I asked for objective reasons that users should boycott software because the developer has made the source available, it just doesn’t make sense.
These examples are going in the wrong direction – starting from a bug you know exists, then finding the cause yourself (instead of submitting a bug report and hoping a developer finds/fixes the known bug). For security purposes it’s the opposite – you start not knowing if there’s a problem or not, and examine the source code extremely carefully hoping that you find something that you don’t know exists (with no clues/symptoms guiding you to where the unknown problem/s might be found), and acknowledge from the start that you might spend months of your time searching for problem/s that don’t exist at all.
In other words; I don’t think these are examples of open source helping for security.
Based on these examples; you can’t even say that “open” helped to fix known bugs more than sending a bug report to a (proprietary) software developer would have. The only thing your examples show is that “open” did not prevent the bugs that you found from existing in released software. Essentially, you’ve proven that there wasn’t adequate testing/validation done before release, but this “low quality, buggy on first release” is exactly what you expect for “zero profit” where it’s harder to cover the expense of extensive pre-release testing (note: admittedly, it’s also what you expect for a lot of proprietary software too).
Let me fix that for you:
“It wouldn’t need to be enabled by default, just those who explicitly flash an open bios on their own system without any way to know if the new bios is malicious or not, victims of retailers and 2nd hand dealers that flashed an open bios without the new owner knowing, victims of couriers that flashed an open bios without the owner knowing, victims of computer repair/IT people that flash an open bios without the owner knowing, victims of anyone else that has any physical access (taxi driver, janitor, builder that was supposed to be fixing the floor while you were on lunch break), and companies that have disgruntled employees who flash an open bios on a company’s computers without the company knowing.”
To me; that’s a huge gaping security problem.
Note that it’s highly probable that any “malware bios” will include code to prevent the victim/owner from re-flashing another “known good” (LOL) version of the open bios afterwards. Once an attacker gains control of “firmware update”, you’re permanently hosed.
Sure; if the firmware and OS are proprietary (including “proprietary binary compiled from open source”) it’s possible to lock the computer down and close attack vectors. As soon as any part can be replaced by “end user” (attacker) it gets ripped to shreds (e.g. Secure Boot becomes irrelevant as soon as firmware can be modified by an attacker to allow anything to be executed).
Sure; but again, only if the firmware and OS are proprietary (including “proprietary binary compiled from open source”) and nothing can be replaced by “end user” (attacker). For things like remote attestation, merely replacing a binary with anything different is enough to break the attestation all by itself (you end up with a different result from “measurement” so the remote system thinks your computer is a different computer).
And you think open source alternatives have proven they’re of better quality? If more people used open source (and it wasn’t “rare niche with market share too small for attackers to bother attacking”) you probably wouldn’t be able to sneeze without 12 new CVE reports falling out.
No, I’m not stereotyping. There are many very skilled developers working on open source projects, who deserve to be rewarded for their efforts, and who would be employees (or contractors, or consultants) if the open source projects were “proprietary, for profit” instead. Note that a large number of these people actually are employees already (working for companies like Redhat, AMD, Microsoft, Google, Huawei, etc).
These are NOT the people I was referring to when I said “random nutjobs off the street”, they’re obviously in the “highly trained and skilled developers/employees” group. I’m talking about the student that’s halfway through a physics degree who wants to use their knowledge of optics to help improve lighting in Blender, or the accountant that has been using visual basic in excel spreadsheets for years and wants a Linux kernel developer to explain how to write code in C because they want to make radical changes to the Linux kernel’s scheduler, or the Trump supporter that wants to do a security audit on voting machine software and begins by asking if RSA is rimfire or centerfire ammunition. Mostly; it’s people with good intentions that currently don’t qualify for the “highly trained and skilled developers/employees” group (which honestly, once you consider “domain knowledge”, is extremely small).
The reality is that there are advantages and disadvantages of both proprietary and open source; but you asked for the disadvantages of open source so I provided (my attempt at) listing those disadvantages.
More honestly; I think both proprietary and open source suck (in different ways) for a large number of reasons. My preference is mandatory open standards (written by developer neutral committees) for things like interfaces, protocols and file formats; combined with small software components working together using those open standards; where vendor lock-in is virtually impossible (due to open standards) and developers are significantly more able to compete with much finer granularity (e.g. “word-processor front-end A vs. word-processor front-end B” and not “the whole of MS office as a huge indivisible blob vs. the whole of OpenOffice as a huge indivisible blob”). With something like this, the advantages/disadvantages of “open vs. closed source” become a lot less significant.
For IME; I think it needs to be split into “required functionality considered part of hardware (even though it internally uses software to reduce manufacturing costs), baked directly into an immutable ROM at the factory and not upgradeable by anyone” and “optional remote management stuff that you won’t have if you don’t pay extra for it” (I don’t use it, so why should I pay higher hardware costs to allow it to be supported when I know it’s going to increase the attack surface regardless of where the unnecessary and unwanted software came from?).
@Brendan
Nice summary of the meta issues. As with most things in discussion, people have a nasty habit of starting off vague and general while groundpounding their way, whether they like it or not, to the core philosophical issues where the discussion should have started in the first place. This is why 90% of philosophy discussion is about definitions and terms of argument before the actual argument, which is a tiny fraction of the whole discussion.
Shannon, Godel, and Turing walk into a bar… Discussion for another day I think.
@Brendan
Yeah there’s a lot of magical thinking and cherry picking to suit the political argument. The fact is that software development and management and security and all the other factors are hard and require some honesty, not groundpounding from a position of ego and going with the loudest voices.
I had this exact same discussion with law enforcement last week. Coincidentally someone else has been mapping links between organisations and people for far right activity including far right terrorism. I’m unsure if and when is the best time to raise this issue but it’s on the to-do list.
It doesn’t matter whether it’s hardware or software or people or organisations they are all systems…
HollyB,
To be fair though there are practical benefits to FOSS that have nothing to do with politics or ego. There are a lot of benefits with FOSS that cannot be had with proprietary software, but the converse isn’t really true. The only proprietary benefit that comes to mind is “security through obscurity”, which is often looked down upon by security circles anyways.
https://en.wikipedia.org/wiki/Security_through_obscurity
Can you enumerate any other reason(s) that an end user might objectively prefer a proprietary solution? To be clear, I’m not suggesting that all users care about source code, but asking whether you can think of an objective reason for them to boycott hardware/software just on account of it having a FOSS license?
That’s the thing: there are certainly end users who may be biased against FOSS, but it’s really hard to pin down purely objective cons versus a proprietary product.
Brendan,
For something as critical and widespread as the system management engine, I am quite certain that the open community and security researchers would independently audit it. And speaking just for myself, I would very seriously consider replacing all proprietary blobs with my own code. It would fit my IT use cases very well.
I’m not denying the “evil maid” attack is real, but withholding source code does not help with that. Physical access means that technicians can bypass firmware restrictions even assuming there are no firmware vulnerabilities. Look at the Xbox modders who have specialized tools to make it quick and easy. Firmware has never been and will never be secure from physical attacks. Withholding source code only gives you security through obscurity. That can make it a pain for legitimate use cases and researchers, but it doesn’t stop hackers.
The solution for this is remote attestation where the hardware verifies the firmware. Most people and companies don’t use it, but that’s a human problem and not a technological one. Remote attestation is exactly what you are asking for and it works just as well for open source systems. So IMHO you should be encouraging people to use remote attestation rather than blaming open source for firmware security problems that have nothing to do with open source.
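To make that concrete, here is a toy sketch of the measurement chain that attestation relies on. This is only an illustration: fnv1a() is a non-cryptographic stand-in for the real hash, the firmware stage strings are made up, and an actual implementation would extend TPM PCRs (or the POWER equivalent) and have the hardware sign the result for the remote verifier. The point is simply that the measurement is derived from the firmware bits themselves, so it works exactly the same whether those bits were built from open or closed source.

/*
 * Toy sketch of the "measured boot" chain behind remote attestation.
 * NOT real TPM code: fnv1a() stands in for a cryptographic hash, the
 * firmware stage names are made up, and a real system would sign the
 * final measurement with a hardware-protected key before sending it
 * to the remote verifier.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t fnv1a(uint64_t h, const void *data, size_t len)
{
    const unsigned char *p = data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL; /* FNV-1a prime */
    }
    return h;
}

/* "Extend" the running measurement with the next boot stage. */
static uint64_t extend(uint64_t pcr, const char *stage)
{
    return fnv1a(pcr, stage, strlen(stage));
}

int main(void)
{
    const uint64_t basis = 0xcbf29ce484222325ULL; /* FNV offset basis */

    /* Measurement taken on the machine being attested. */
    uint64_t pcr = basis;
    pcr = extend(pcr, "open-bootloader-v1.0"); /* hypothetical stage */
    pcr = extend(pcr, "open-kernel-v5.10");    /* hypothetical stage */

    /* The remote verifier computes the same value from the published
     * firmware images and compares. Any change to either stage gives
     * a different measurement, open source or not. */
    uint64_t expected = basis;
    expected = extend(expected, "open-bootloader-v1.0");
    expected = extend(expected, "open-kernel-v5.10");

    printf("measurement %016llx %s the expected value\n",
           (unsigned long long)pcr,
           pcr == expected ? "matches" : "does NOT match");
    return 0;
}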
Well, I appreciate your feedback on that. It’s just that you’ve been speaking as though “openness” is the con, when it’s actually other things. If someone starts using a product that they like, only to find out that the source is open, does it then imply a disadvantage? Does it imply that it is less secure? Does it imply the product has been mismanaged? Does that imply the developers aren’t qualified? I don’t think anyone can objectively say “yes” for any of these. It may be insecure, and it may be mismanaged, and developers may be unqualified, but this doesn’t change just because the source code is public.
I agree with open standards as well. However, open standards don’t necessarily mean much if the hardware is locked. So I would say we need open standards in conjunction with open hardware.
In hindsight, I am thankful that intel’s proprietary blobs are not hard coded. Otherwise we would end up with more hardware needing to be physically replaced every time it gets compromised. The only way immutable code is acceptable is if the code is published and audited by multiple truly independent parties because the risk is so high. Or having a cheap rom chip that can be easily replaced, like in the past.
There’s loads of cherrypicking and bikeshedding in this topic. Nobody has yet touched on tiered security. Basically, a generic x86 Windows XP system with a no-name motherboard and components when behind layers of tiered security and isolation is 1000 times more secure than this product will ever be without proper use.
HollyB,
What are you talking about? Exploits come out in proprietary x86 products every year. We need a multi-pronged approach to solve the software/firmware security issues that have plagued our industry for decades. If we keep going down the same path of insecure languages, unverified&unverifiable proprietary blobs, implicit trust in parties with privileged access and so on, it’s going to be more of the same. We’re not going to get a new outcome unless we work hard to get there. Better security is achievable if we choose to adhere to better standards and transparency than in the past, but I concede that the willingness may not be there.
In your example you need to put a lot of trust in the equipment around the Windows XP box. It is hard to find good hardware that you can trust, especially when it is tied to proprietary software.
In this case the hardware you start off with can be trusted. Of course trusting yourself/the end user is another essential step, but that is independent from this.
I addressed all this in my comment at the bottom. It’s all about degrees of assurance and tiered security, with options on or off depending on your use case (which is itself a security issue all of its own), etcetera. There’s no single answer as it involves so many variables. I’m not religious about proprietary versus non-proprietary because, again, variables.
Yeah you’re right. Open doesn’t imply secure. Only that one could perform a full audit without a specific dependency on any third party.
“That being said, the Blackbird has a number of problems, with the most obvious one being its price. The cost of admission to the front lines of this war is nothing to sneeze at, and it’s entirely unreasonable to expect someone who worries about the state of computing to just shell out this kind of money. Most people’s computing budgets – including my own, since our first kid is on the way! – simply do not have any room for $3000+ machines, and there’s nothing wrong with appreciating a machine like this without being willing to spend the money to own one.”
But there are people like me that have been willing to buy computers around $2,500 (both my wife and I have iMacs with pretty much everything upgraded; the upgraded memory was from Crucial, not Apple), and I used to have 5 PCs which I dual booted each to different OSs while sometimes running up to a half dozen virtual PCs on them, so they were loaded with RAM. These were PCs, not Apple. I also had 4 older servers running different NOSs (network OSs – none being MS because I love uptime, not headaches) with different email systems to see how compatible all the different OSs and NOSs were with each other. This was geek paradise to me while it lasted, until I got burned out.
During that time period I *might* have thought about buying one of these machines. I wonder if OS/2 runs on the Blackbird.
PS: Ask them if the name has anything to do with the Beatles and the song Blackbird. I would ***definitely*** ask if I were you, since I’m a big Beatles fan who has most of their albums in 5 languages and in all the different formats, ranging from period (meaning first edition) 45s and albums to “Picture Albums”, which have the picture from the album cover on the records themselves. I geek out on multiple things, including motorcycles. I don’t just hide in a basement.
Well, apparently you can get the mobo+8-core CPU for $2K, so if you have a spare ATX case and components, you can get on that $2.5K budget ;-).
If I were a betting person, I’d think the “blackbird” moniker may have more to do with the Mach 3 SR-71 plane, as the name conjures speed.
I’m not sure I quite understand how much openness one really gains with something like the Blackbird, as opposed to just careful selection of ordinary x86 components.
Thom writes:
Even if you run Linux on your AMD or Intel machine, you’re running it on top of a veritable spider’s web of proprietary firmware for networking, graphics, the IME, WiFi, BlueTooth, USB, and more. Even if you opt for something like a System76 machine, which has open firmware as a BIOS replacement and to cover some functions like keyboard lighting, you’re still running lots of closed firmware blobs for all kinds of components. It’s virtually impossible to free yourself from this web.
Let’s consider all these blobs individually:
o WiFi and Bluetooth: Thom doesn’t mention anything about WiFi and Bluetooth on the Blackbird. I assume it doesn’t have them, and so any workstation without them is just as open in this regard.
o Networking: my understanding is that most desktop-class ethernet hardware doesn’t utilize driver-loaded firmware. As reported by ethtool, they do apparently contain some sort of firmware, presumably factory installed. Thom doesn’t tell us anything about the Blackbird’s NICs: do they not have such firmware? Do they have open source firmware?
o Graphics: As Thom himself concedes, with the Blackbird you have a choice between onboard “not exactly great” graphics, or installing a performant GPU that uses non-free firmware. But isn’t that the case with x86 as well? If you want open, just use Intel graphics, which have open source drivers and work fine without firmware (IIUC, GuC / HuC firmware is entirely optional).
o USB: I admit ignorance here: do USB controllers use firmware? If so, what do the Blackbird’s USB controllers do – do they work without firmware, or do they have open source firmware?
o IME: Certainly a concern, although it can apparently be disabled, and some x86 machines are shipped with it disabled.
o BIOS: This (along with the IME) is probably the biggest issue. The x86 solution is coreboot hardware, as Thom himself notes.
So at the end of the day, if you’re using x86 with coreboot, from Purism or System76, and you have similar components to the Blackbird (no WiFi or Bluetooth, Intel GPU, etc.), how are you “still running lots of closed firmware blobs for all kinds of components,” and why is it “virtually impossible to free yourself from this web”?
Honestly, I think the whole “openness” angle is more of a marketing gimmick to try to justify the poor value proposition from a price/performance point of view.
The target audience is people with relatively sophisticated IT skills, so that same audience can just buy a more performant Ryzen system for a third or less of the price… and they know how to lock down the respective ports in their router if privacy from some malicious management engine (which I don’t think is that common in the AMD world) is that much of a concern.
It’s neat they managed to get a full blown ATX motherboard for Power9 though. Pity about the cost. I wonder if this could work for the Amiga folk. I think they also had some custom ATX boards with PPC parts on them at some point.
atrocia,
I agree with your points in principle. The problem with x86 is that it is difficult to avoid intel & amd proprietary firmware. It’s true we can disable some features, but not everyone actually wants to disable features. For example I actually benefit from AMT, these features can be extremely useful for admins, but I just wish the damn thing were open source you know? If I disable the feature, I’m left having to purchase more dumb proprietary gear (like a lantronix spider), which is expensive and their tech support treats small customers poorly. Oftentimes we’re left with zero good options for voting with our feet. You raise valid points about closed peripheral hardware, but even so these open initiatives are a move in the right direction and I applaud them for it. I wouldn’t mind replacing some of my x86 servers, unfortunately I’m quite price sensitive.
As a long-time (quite satisfied) Blackbird owner I wanted to comment on a few items here…
Yes, I have my Blackbird wired in as there is no WiFi. I’m also the kind of person that doesn’t use WiFi after the recent EU/US lockdowns, so this is a “don’t care” in my book.
They do indeed have open source firmware.
Those Intel graphics require both the Intel ME and various “management” cores that are still being discovered. Not exactly a great option there — while the AMD GPU firmware can’t exactly just go extract data from the rest of the system, the Intel ME certainly can!
There is no firmware, either in a kernel-loaded or on-chip form, for the USB 3.0 controller on the mainboard. Other platforms definitely use firmware, and there are even reports of that firmware being potentially malicious in the form of arbitrary platform DMA.
Nope! IME cannot be disabled. It must always run during system startup and then only afterward (on those “disabled” machines) it’s politely asked to go into an undocumented mode where it doesn’t appear to respond to certain outside stimuli. The problem with that is that for all we know the mode switch just changes what it responds to — you can’t prove a null hypothesis here with the data available. Look up the “BUP” modules for just a piece of what still runs on the Intel ME in “neutered” or “disabled” mode.
See above — coreboot is only the second level firmware (best case) and, unfortunately, also has the well-deserved moniker “shimboot” on x86 due to mostly gluing together / sequencing various large proprietary binaries on the majority of x86 platforms.
Given the above points, including the fact that the Intel ME in reality cannot be disabled, your argument only really holds for the GPU and disk on-board controllers. The interesting part about that is that the GPU and disk controllers are both very easy to sandbox / work around — GPU via the IOMMU (why should the GPU need to read / write arbitrary data to the system?) and encryption for the disk controllers (what the controller never sees cannot, by definition, be stolen / leaked).
For me, the cost was worth it to have a trustable ring 0 / ring 1. For others, it’s entirely possible they’d rather buy a cheap x86 laptop each year and just throw it out a few years later at end of support. Choice is a good thing, the world would be quite boring if everyone was the same and used the same computer.
whitepines,
I appreciate all the info you provided in your post!
Do you know for a fact whether Blackbird uses the IOMMU to isolate the GPU/disk controllers or was the point hypothetical?
I’m left with a question about sandboxing the GPU using the IOMMU in conjunction with unified memory APIs (for example CL_MEM_ALLOC_HOST_PTR)… How exactly would graphics drivers open up IOMMU apertures into host address space? Is every single allocation mapped into the IOMMU? Is the entire process mapped? When it comes to the IOMMU there always seems to be a perpetual balancing act between performance and isolation.
I ask because a very similar issue comes up with thunderbolt devices where external peripherals actually have DMA access on the host. This was an absolutely terrible design IMHO, but anyways the official solution is to stick it all behind an IOMMU. The issue is this has been fraught with security/performance trade-offs: either giving a device too much access, or relying on inefficient bounce buffers for enforcement, resulting in bad performance.
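For reference, this is roughly the kind of host-visible allocation I mean: a sketch in plain C against the standard OpenCL API, with platform/device selection simplified and most error handling omitted. The open question is how much of the host address space the driver ends up exposing through the IOMMU to back a buffer like this.

/*
 * Minimal sketch of a host-visible OpenCL allocation (the kind of
 * "unified memory" buffer asked about above). Platform/device
 * selection is simplified and most error handling is omitted.
 */
#define CL_TARGET_OPENCL_VERSION 200
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue =
        clCreateCommandQueueWithProperties(ctx, device, NULL, &err);

    /* Ask the driver to allocate the buffer in host-accessible memory. */
    const size_t size = 1 << 20;
    cl_mem buf = clCreateBuffer(ctx,
                                CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                size, NULL, &err);

    /* Map it into the host address space; the device can DMA the same
     * pages, which is where the IOMMU configuration comes in. */
    void *host = clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE,
                                    0, size, 0, NULL, NULL, &err);
    memset(host, 0xAB, size);
    clEnqueueUnmapMemObject(queue, buf, host, 0, NULL, NULL);
    clFinish(queue);

    printf("allocated and mapped %zu bytes with CL_MEM_ALLOC_HOST_PTR\n", size);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}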
Yeah. I think it’s best not to be dependent on proprietary stuff in the first place because the community can support it long after the manufacturer loses interest. It’s an easy choice to make, but sometimes it can be a challenge as a consumer to find open hardware. A lot of new IOT devices fit in this category. There’s tons of hardware out there and it’s getting quite cheap too, but it can be extremely difficult to find something open unless you go the DIY route. There’s nothing wrong with DIY, but sometimes you just want something that’s production ready AND is open source & open API.
We need a store like amazon dedicated to exclusively open hardware!
Yes it does, it’s basically part of how POWER works in terms of the PCIe controllers. While x86 tends to run in permissive mode, Power defaults to strict mode where the peripheral is only allowed to access specific pre-configured areas of memory by the IOMMU hardware. I have seen constant EEH faults (bad / blocked DMA) with certain peripherals, but at the same time that gives me confidence that the system is indeed blocking the invalid DMA the peripheral was trying to do.
For more information on the controller than you could ever want, check out the datasheets and specifications.
I’ve only partly glanced through the controller datasheet myself, it’s quite dense:
https://wiki.raptorcs.com/w/images/a/a5/POWER9_PCIe_controller_v11_27JUL2018_pub.pdf
https://wiki.raptorcs.com/w/images/6/6c/IODA2WGSpec-1.0.0-20160217.pdf
Exactly, every single range the card needs is opened up (and closed) dynamically in the IOMMU by the driver. Without that configuration of the IOMMU, the PCIe controller would detect the invalid access, block that same access, stall the bus, and raise an error to the OS (EEH). I’ve seen that happen firsthand with some of the early AMD GPU drivers, but haven’t seen it any more from those same GPUs (a couple of AMD RX series something or others) with the newer 5.x series kernels.
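For anyone curious what “opened up (and closed) dynamically” looks like from the driver side, here is a stripped-down sketch using the standard Linux DMA API. The device and driver names are hypothetical, and a real driver would also program the hardware to actually perform the transfer; the point is that dma_map_single() is what installs the translation entry and dma_unmap_single() is what tears it down again. On a strict IOMMU like the POWER9 one, a device access outside such a window is what triggers the EEH errors mentioned above.

/*
 * Stripped-down sketch of the driver side of "open a window in the
 * IOMMU, do the transfer, close the window". The device/driver names
 * are hypothetical; a real driver would also program the hardware to
 * actually perform the DMA.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int demo_probe(struct platform_device *pdev)
{
    void *buf;
    dma_addr_t handle;
    const size_t len = 4096;

    buf = kmalloc(len, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* Open an IOMMU mapping so the device may DMA into this buffer. */
    handle = dma_map_single(&pdev->dev, buf, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(&pdev->dev, handle)) {
        kfree(buf);
        return -EIO;
    }

    dev_info(&pdev->dev, "buffer mapped at bus address %pad\n", &handle);

    /* ... tell the hardware to DMA into 'handle', wait for completion ... */

    /* Close the window again; further device access would fault (EEH). */
    dma_unmap_single(&pdev->dev, handle, len, DMA_FROM_DEVICE);
    kfree(buf);
    return 0;
}

static struct platform_driver demo_driver = {
    .probe = demo_probe,
    .driver = {
        .name = "iommu-window-demo", /* hypothetical device name */
    },
};
module_platform_driver(demo_driver);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch of dynamic IOMMU mappings via the DMA API");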
Performance is quite good, but that’s largely because IBM designed the PCIe controller and IOMMU properly. I’ve heard many stories of x86 systems that were less well designed and the performance loss and other issues that came with that.
I’ve even passed the AMD GPU through to a virtual machine on the Blackbird, which makes it fairly easy to do low level driver development. The same translation system is used to do that passthrough as is used to isolate the card from the host, if that makes sense?
whitepines,
Indeed, an IOMMU makes it trivial to attach a device to a VM because it’s such a straightforward mapping.
I think it’s much harder to use IOMMU to isolate peripherals on the host because now all the associated drivers are involved. It becomes especially complex inside operating systems that optimize data transfers using shared memory buffers and zero-copy DMA. This is super efficient and secure assuming the owner trusts their PCI hardware. But it starts to break down when an IOMMU is required in order to securely isolate DMA from external peripherals.
Here’s a link to give an idea of what I mean:
https://www.lightbluetouchpaper.org/2019/02/26/struck-by-a-thunderbolt/
This is why I think thunderbolt security is fundamentally broken. Anyways I realize that I’m really off topic here…thanks for answering my questions!
Interestingly, POWER9 is immune to the attack against Linux in that paper. From the PHB4 datasheet:
https://wiki.raptorcs.com/w/images/a/ad/P9_PHB_version1.0_27July2018_pub.pdf
> “No PCIe ATS services support”
ATS is a very bad idea conceptually, and it seems IBM agreed.
Thanks for the detailed explanation. I appreciate in particular the clarifications of how the problems with coreboot (“shimboot”) and Intel’s ME are deeper than I understood.
A few follow-up questions:
> They [the Blackbird’s NICs] do indeed have open source firmware.
Can you provide more details, and / or a link to information about this?
> Other platforms definitely use firmware [for their USB controllers], and there are even reports of that firmware being potentially malicious in the form of arbitrary platform DMA.
Can you provide links to the use of firmware and to the reports you mention?
No problem, the Intel ME is quite cleverly hidden away from the public eye and, worse, a number of companies want to downplay its actual capabilities, which is what motivated my response here.
Here’s a direct link to the open firmware project itself: https://github.com/meklort/bcm5719-fw
And to the contest Raptor ran that helped make it happen: https://www.raptorcs.com/TALOSII/nic_fw_contest.php
Some of that information was communicated to me privately, but there is enough in public locations to connect the dots:
The controllers in question are ASMedia devices. Proof of the firmware requirement can be found on Dell’s website:
https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=xc0m7
Note in particular that this update appears to fix unnamed “security vulnerabilities” in the device firmware.
There are then a number of relevant entries on the Raptor HCL:
https://wiki.raptorcs.com/wiki/POWER9_Hardware_Compatibility_List/PCIe_Devices
> “ASUS Xonar SE – Contains ASMedia USB host controller with errant DMA access flaw”
> ASM1142: “EEH errors may occur during long reads from multiple devices.”
At minimum this is very shoddy device firmware. In the worst case it could be malicious. What is known for sure is that the devices have a nasty habit of trying to read from arbitrary memory, and that proprietary device firmware is involved somewhere. The Power systems block all of that, which again proves the strength of the Power IOMMU, but at the same time it does make that silicon nearly worthless as a USB controller.
Most (all?) USB 3.1 and higher controllers on the market right now do require proprietary firmware, presumably for some sort of DRM function given the HDMI features in USB 3.1 and up. Even the USB controllers in AMD CPUs are licensed from a third party and firmware-riddled, sadly.
Thank you very much. This is really eye-opening!
> o Graphics: As Thom himself concedes, with the Blackbird you have a choice between onboard “not exactly great” graphics, or installing a performant GPU that uses non-free firmware. But isn’t that the case with x86 as well? If you want open, just use Intel graphics, which have open source drivers and work fine without firmware (IIUC, GuC / HuC firmware is entirely optional).
AFAIK there is no discrete Intel graphics card for the consumer market (YET).
It sounds like this system has a ways to go to be a reasonable daily driver even for relatively sophisticated users, but someone has to get the process started so we can get economies of scale and build community interest in getting more software ported, etc. One blocker for this to be my main system would be Qubes OS support; fortunately, it looks like some work on that is underway (https://github.com/QubesOS/qubes-issues/issues/4318).
As someone who uses a Debian Blackbird as a daily driver, I’m honestly curious (other than Qubes) what you think still needs work?
Sure, I can’t play proprietary games on it, and Firefox could use help, but I have a dedicated gaming PC with Steam for games and Chromium is a reasonable (very fast) alternative to Firefox on the Blackbird. It was installed graphically from a USB stick just like I would do on an x86 box, so I wonder what I’m missing?
To me the biggest hurdle is adoption, by both the big elephants and OSS enthusiasts. The hardware is not readily available, which impedes the community from porting many OSS apps to PowerPC64 LE. Furthermore, big brothers like Google refuse to support the platform officially, which leads to technical difficulties in porting apps. We all know how many hacks are needed to get V8 or Electron to compile. Having said that, I have to give credit to IBM and Red Hat for their efforts in supporting the POWER platform on Fedora and RHEL.
I can (and have) switched off the IME on my old Thinkpad laptop. You can also block its ports at a router. I could, if I wanted to, easily open the case and remove all the wifi and bluetooth aerials and 3G modem and speakers and microphones and webcam. The graphics is integrated Intel. My laptop, bought used, cost one tenth of what this did, and new it would have been one third. If I wanted security I wouldn’t plug it into the internet. Data can be transferred by DVD or floppy, or scanner and printed paper for the paranoid. Good luck breaking into that even if it was running an unpatched version of Windows 2000.
First there’s the memory. Not just the usual suspects such as the BIOS or hard disc, but the half a dozen or so components on the board with a few megabytes or a few kilobytes that you have forgotten about. Is it read only or re-writable? You do have a read only BIOS, right? If it’s writable, there’s a hardware switch guaranteed to make it read only, right? Programs and data are indistinguishable. Stop and think about that. Then there are all the other vulnerabilities at a board level, but this is a nightmare beyond the scope of this discussion. Then there is the ROM in your mouse and other external devices. Oh, whoops.
Then there is the surrounding ecosystem. You guys are running at least two routers, right? No don’t tell me you’re plugged straight into the wall or hotspotting. Who has access? Where do you use it? One of the joys of a 15 inch clunker is it’s a disincentive to carry it around and leave it with that nice student I’ve been chatting with while I dash off to the loo.
I’d call this product (mostly) open access but I wouldn’t call it secure. I don’t personally see why the majority or almost all or maybe all computers cannot be open access. For various reasons you can lock things down in specific ways if necessary, either before it leaves the factory or when you receive it. Some stuff will almost certainly be locked down at the point of manufacture. This will either be by leaving components out or blowing fuses. Am I bothered by CPU level OS code? Not especially, as it serves a purpose. Am I bothered by locked down baseband code? No, and for the same reasons. The same arguments extend out to closed versus open software. They both have their merits and demerits. Some people need fully auditable systems and others don’t. Some have this level of access and can’t be bothered or don’t know how to ensure it is bug free, let alone secure. Developers and end-users have different use cases too.
One of the best security measures is not voting an idiot into power. No computer needed, only pen and pencil, plus a good walk outside.
So when people say “security” what do they mean by security? Security from what? From whom? Someone else or you? YMMV.
HollyB,
Please don’t chastise me just for commenting, but I think it’s extremely important to shine a light on this topic of disabling privileged ME vendor code in the context of proprietary firmwares, which I thank you for bringing up.
The IME is not designed to be disabled. While bioses can provide some options to set some flags that the ME checks to disable certain features like vPro or temporarily stop the ME from grabbing the bus, intel’s proprietary management engine code is still there running in the background. It’s proven difficult to patch out entirely because intel still uses the ME for normal system management tasks and they employ a watchdog to reset the system when the ME isn’t responding (ie because the user hacked it out of the firmware).
https://www.intel.com/content/dam/support/us/en/documents/motherboards/desktop/sb/intelmebxsettings_v02.pdf
http://blog.ptsecurity.com/2017/08/disabling-intel-me.html
As far as I’m aware, these researchers were the first to discover the existence of the “HAP bit”, accessible with a flash programmer. I’m not aware of a bios where this is easy for owners to get to, but if anyone knows otherwise, please comment.
https://www.bleepingcomputer.com/news/hardware/researchers-find-a-way-to-disable-much-hated-intel-me-component-courtesy-of-the-nsa/
I remember when vendors including system76 and purism were working hard to disable ME without affecting functionality back in 2017 too. Apparently even dell provided the option at one point, but it was retracted shortly thereafter…
https://www.zdnet.com/article/computer-vendors-start-disabling-intel-management-engine/
https://www.dell.com/community/Laptops-General-Read-Only/Deactivating-Intel-ME/td-p/5188150
This is so suspect that I believe dell was pressured behind the scenes to remove the option.
IMHO what needs to happen is what zdnet proposed:
The note about 8 core / 32 thread CPUs being potentially unstable is incorrect. I’ve had my Blackbird since the mid-spring of 2019 (pre-ordered it in 2018), and it’s been running the 32 thread CPU, 64GB of RAM, a couple of SSDs (now three NVMes), and an AMD GPU flawlessly since then. According to the BMC, the CPU is idling at 38 watts. I routinely peg all 32 threads for extended periods of time when compiling things / running VMs, and I’ve never once had a system crash. It is utterly reliable under load.
Yes, I have been running for more than a year. Never ever have I experienced any instability with the CPU.
Looks like the Martians prefer the power architecture as well!
https://finance.yahoo.com/news/nasa-perseverance-powerpc-750-171516292.html
I don’t suppose you had a chance to run FreeBSD?
I’m seriously considering one of these and FreeBSD is my go-to UNIX-like. I know Power is a Tier-2 architecture, but they are working towards Tier-1 support soon.
AFAIK FreeBSD 13-BETA supports ppc64le, though I have not tried it out yet.
Excellent article, I noticed just a couple of typos: “EEC registered” -> “ECC registered”
The irony is that this fully open computer is actually a prime example of a black box.
That is the ultimate goal of the project, but it will take some time to get there. Neither the company nor anyone in the community claims this one is “fully open”; there are black boxes in proprietary driver firmware here and there (hopefully one day they will be fully open too). So in spirit this is a prime example of ever-increasing openness. (No one in their wildest dreams would have imagined Linux and OSS getting such a strong foothold, but guess what, the world is changing for the better.)