I’ve got this Rock64, which is an aarch64 board comparable to a Raspberry Pi 3 B+ with 4 GB of RAM. For years I’ve wanted to put a distribution on here that doesn’t have a premade image available, mainly because out of all the options on that page I don’t actually like any of them. Well, except NetBSD, but NetBSD doesn’t have GPU drivers for it. The problem is, everything I do want to use provides rootfs tarballs and tells you to figure the rest out. To do that I’ve got to get a Linux kernel, track down the device trees so it knows what hardware it has, and then wrangle U-Boot into actually booting the whole thing. I figured that would be the hard part; little did I know the depths that Single Board Computer Hell would reach.
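For context, the "wrangle U-Boot" step usually ends in something like U-Boot's generic distro-boot config. A hypothetical extlinux.conf for an RK3328 board such as the Rock64 might look like this (the exact paths and partition are illustrative, not taken from the article):

```
# /boot/extlinux/extlinux.conf -- read by U-Boot's distro-boot support
LABEL mainline
    KERNEL /boot/Image                                 # arm64 kernel image
    FDT /boot/dtbs/rockchip/rk3328-rock64.dtb          # device tree for this board
    APPEND root=/dev/mmcblk0p1 rw console=ttyS2,1500000
```

The point being: once U-Boot itself is flashed and working, pointing it at a kernel, a device tree blob, and a root filesystem is the comparatively easy part.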
Unlike x86, ARM is far, far from a standardised platform. The end result of this is that unless you can find tailor-made images specific for your particular ARM board, you’re gonna have to do a lot of manual labour to install an operating system that should work.
We have pretty much been spoiled by standard hardware.
With the PC, I know the keyboard controller is on port 60h:
https://fd.lod.bz/rbil/ports/keyboard/p0060006f.html
System timer is wired to IRQ 0:
https://en.wikipedia.org/wiki/Interrupt_request_(PC_architecture)
And I can use the “multiboot” header to pass my generic ELF file to the GRUB bootloader:
https://www.gnu.org/software/grub/manual/multiboot/multiboot.html
And even easier with the EFI:
https://github.com/brutal-org/brutal/blob/main/sources/loader/main.c
Every kid in undergrad can easily write a kernel for the x86 platform:
https://wiki.osdev.org/Projects
Thom Holwerda,
As much as I wish it weren’t the case, you are right, and this situation may well be permanent. On x86, manufacturer support can be lacking, but on ARM, manufacturer support for FOSS is practically non-existent! x86 may end up being the most “open” and standardized platform whether we like it or not. I’ve been looking forward to practical ARM computing for such a long time, and yet now that ARM systems are becoming competitive I am disappointed that the openness and standardization needed to make ARM practical for everyday FOSS isn’t there.
Arm is working on fixing this with their SystemReady program. They’re picking UEFI to solve the boot problem, but it’s a very new program and many boards are old. The RPi 4 might be the first SystemReady-certified SBC.
It’s an expansion of their ServerReady program, which has been a success for Arm servers and Linux, from what I’ve heard.
https://www.arm.com/architecture/system-architectures/systemready-certification-program
Server and workstation ARM is already standardized around UEFI. But don’t expect that to ever be the case for SBCs.
This falls into the category of unsolvable problems: the small workshops crafting SBCs simply don’t have the resources to port and maintain a firmware, even a fork of an open source one like EDK2. The effort to do that is many times what is required to just write the device trees and use U-Boot.
This SystemReady program for embedded will work very well for companies with big budgets that have guaranteed customers deploying at scale, and that can pay for the certification process. For everyone else, expect the device tree/U-Boot situation to last for a veeeeeeeery long time.
And if ARM ever makes that mandatory, they will effectively kill the hobby SBC market.
Maintaining firmware is not trivial; that’s why x86 motherboard manufacturers outsource most of it to companies like American Megatrends and Award.
> And if ARM ever makes that mandatory, they will effectively kill the hobby SBC market.
If that’s all it takes, then that is a good thing.
It won’t and those who care will go on.
I fail to see the benefit of restricting the market to a handful of manufacturers geared to supply only businesses. They would be free to raise costs to infinity, restrict access to manuals and technical documentation, and create barriers like “certification processes” and “educational courses” (making them the only way to get development tools at an affordable price) to pocket more money. That would recreate the hell that access to this kind of embedded hardware used to be in the past, when a “development board” cost 15,000 dollars just because an accountant told management to use it as a cash cow.
All these fine small SBC workshops, some even providing schematics for free, exist entirely because the entry barrier to using ARM SoCs is low. And with that, they have created one of the most vibrant hacker subcultures of the last decade, one that keeps on giving.
CapEnt,
Who is calling for any of that though?
I don’t understand your point here at all. As someone who enjoys the ARM SBC scene and hacking on them, the proprietary nature of ARM has got to be one of the biggest pains in the ass. It’s been this way for a long time, and if anything it has been holding back the vibrant hacker subculture. Even programming microcontrollers in assembly is better, because they’re so well documented. There is zero doubt in my mind that innovation would skyrocket if we could only somehow get the industry to collectively embrace open standards. But how do you do that when nobody wants to budge or compromise?
Alfman,
If someone keeps raising the bar, this is what will happen. It will drive all the small players out of the equation. Certification, and maintaining complex firmware forever and ever for every single product released, is not cheap.
And one that will not be solved with this ARM System Ready program.
An ARM SoC is basically a micro motherboard on a chip. Things like the bus and interconnection protocols are all standard already.
But SoC manufacturers are free to connect whatever they want to that bus, like their own GPU, their own AI accelerator, their own 4G/5G radio blocks… and they are very lazy about providing drivers. And that’s a situation that is impossible to solve without reducing everything to the lowest common denominator. Forcing all ARM SoC manufacturers to use the same GPU, the same cell radio blocks, will just not happen.
A board firmware, UEFI style, will not solve that, because the party responsible for maintaining the firmware will not be the component manufacturer (Allwinner, Rockchip, NVidia…), it will be the SBC manufacturer, because it must be customized for the board. And the real problem is ill-maintained drivers, not device trees and U-Boot.
Just imagine if, on a desktop PC, the party responsible for maintaining the GPU driver were not NVidia or AMD, but Gigabyte, MSI, eVGA, etc., the ones manufacturing the board itself and using the GPU. That would be insanity.
People are targeting the wrong guy here. It’s not the fault of Pine64, Orange Pi, Banana Pi, etc. They simply don’t have the resources to maintain drivers as complex as a GPU or WiFi driver by themselves if the component manufacturer doesn’t update them.
CapEnt,
Well, I disagree with the premise. I think unified standards would lower the bar and make things far more accessible for everyone, SBC makers included. It would be so awesome if all SBCs would circle the wagons around a FOSS implementation with no proprietary bits. The obstacle, of course, is that the Qualcomms of the world control so much of the process, and the single board computer manufacturers are just soldering chips together. They are likely barred by NDAs from doing anything like this. So in order for this to work, we’d really need to get the chipmakers on board, which may be hard to do if they’re already happy with the status quo.
Assuming that it works as advertised then you’d be wrong, this would be absolutely huge progress for ARM!
https://www.arm.com/architecture/system-architectures/systemready-certification-program/sr
The question is over whether standards will reach critical mass and whether benefits will carry over into practice. I’d like to be optimistic but who knows, maybe it’ll be a dud or maybe it will remain too niche to help the linux SBC market.
Technically, most of the drivers in linux are maintained by the community, so the idea is not so insane. But that’s tangential to the issue of unified booting standards that ARM desperately needs. Video hasn’t been the bottleneck for me; my personal interest in ARM SBCs lies more in automation and IoT applications.
But I don’t think anyone here was blaming them, I’ve been a part of some of those communities and I think that they too would agree that ARM’s lack of standards is a mess.
@CapEnt.
I’m personally NOT going to use an SBC again until it supports SystemReady. It’s just not worth it to fight these pointless fights just to get a Linux, LINUX, LINUX distro on a board. It’s a raised bar for any alternative operating system. It’s this kind of junk that more or less killed the KDE on tablets effort. (Well yes, drivers sucked too, also a big problem.)
So yeah, in the future you may have fewer SBC boards, but each one of them will be a better platform to build upon.
So a project like CoreBoot then?
I know I’m being super idealistic/dumb about this, but wouldn’t something like a GPLed firmware project be the answer?
I think it would really help, yes.
UEFI is open, but it is large and complicated.
There are open system firmware alternatives out there.
o CoreBoot — https://www.coreboot.org/
o OpenFirmware — https://www.openfirmware.info/Open_Firmware
It needs Arm Ltd. to get insistent about support, IMHO.
Flatland Spider
I hope something good comes out of it. Any standard needs to be broadly supported. Ideally it would become important enough such that a majority of manufacturers are compelled to actually support it. Today there is so much genuinely cool ARM hardware but then the depressing reality sets in that it’s unsupportable by 3rd parties.
Me too. I think most companies are going to follow the lead of RPi, since everyone points to them as the gold standard, once they get around to releasing new boards.
There are multiple reasons for companies to get on board, and most of them have to do with not alienating their new customer base. The Arm board companies have an in, but they could easily be out if their products are not as easy to use as x86.
As it stands, most boards are junk toys.
The problem is not whether or not UEFI is used to boot the processor. ARM already has a perfectly capable way of booting the operating system. It involves device trees, and U-Boot (among others) already implements this.
(A quick search found a story from almost 7 years ago reporting that U-Boot had support for UEFI on ARM and ARM64.)
Linaro implemented modified ACPI for ARM64 servers quite a while ago. Like device trees, it’s just a vehicle for passing hardware descriptions to the operating system on boot. (Unless somebody is going to explain the deficiencies device trees have that cause issues for doing this? It’s been a few years since I’ve worked with them.)
Part of the problem with booting ARM SBCs is where that initial hardware description comes from. It’s typically a mix of information that has been set up in the bootloader (in a BIOS, if you disable a piece of hardware, it gets removed from the information), autodetected stuff (how much RAM is there, and where is it?), and information baked into the boot firmware itself. (To be able to detect RAM, for instance, you need to know how the RAM controller is connected. You need to know how your storage controller is connected to configure it to read the operating system in from wherever it is. You may even configure the flash storage holding your boot firmware so that it runs quicker for the rest of the boot process.)
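A device tree is exactly that hardware description written down declaratively. A minimal, hypothetical fragment (node names and values are illustrative; real board trees are far larger, and the firmware often patches the memory size in after probing) might look like:

```
/ {
    /* Tell the kernel where RAM lives. */
    memory@0 {
        device_type = "memory";
        reg = <0x0 0x00000000 0x0 0xEF000000>;  /* base, size */
    };

    /* Which UART the console lives on. */
    chosen {
        stdout-path = "serial2:1500000n8";
    };
};
```

The kernel never probes for any of this on most ARM boards; it simply trusts whatever tree the bootloader hands it, which is why a wrong or missing .dtb means a silent failure to boot.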
If you’re flashing U-Boot into an SBC to replace the boot firmware that’s there, you’re already increasing the complexity of getting boot to work by orders of magnitude. (Although you won’t always have a boot firmware in there already.)
It’s kind of similar to buying an x86 motherboard and replacing the boot firmware with Coreboot and Tianocore.
I must agree. I’ve also been looking forward to some good ARM options, but this is the sort of pain most can do without. We turned to RISC-V for hope. Not much RISC-V news lately, hmmm? Apart from the articles about Haiku testing on RISC-V, it’s been about a year since the last RISC-V article. I’ll go back to playing with my x86 boxen.
In regards to the terms on which you get the instruction set, RISC-V is ahead of ARM. Where ARM is really bad is in things like upstream GNU/Linux support for ARM hardware, and here I fear that RISC-V is mimicking the same pattern. But let’s hope they come to their senses. If Intel, AMD and now even NVIDIA can, then sooner rather than later others will too, as the business pressure should help, in regards to improving customer trust in your products, among other things.
What do you mean by “RISC-V is ahead of ARM” on “instruction sets”? (Out of interest)
In theory you can use it royalty free.
To be fair, if ARM made the ISA royalty free, they effectively lose part of their business model. (Apple and Qualcomm (IIRC) would stop paying them money, for a start)
I think RISC-V has a problem of economies of scale. There are some boards out there with RISC-V chips on them, but there are so many ARM processors out there that, even including the royalty, you’re probably going to get more bang for your buck buying the ARM-based board.
(I may be wrong)
You can make cheap RISC-V chips – some of the ESP32 chips have a RISC-V core in them – but that is a low-power core chosen to bring down power consumption, not a general purpose one.
ARM is open in regards to licensing IP and core designs to third parties; here Intel is still in the dark ages for things like being able to develop your own chip designs. The ARM toolchain is rather well developed too, so this part is rather good. Beyond that, ARM is a bloody mess.
Why is it always the graphic drivers?
At a guess, graphics drivers have enough complexity that the people working on them generally want to reduce the amount that other people can poach their ideas while, at the same time, trying to shield themselves from random patent lawsuits from other companies chasing down small advantages. (They’re the sort of companies that will write a driver that has 2 main code paths – one that passes the Microsoft WHQL tests and one that the customer uses that may not – and then there’s the whole quake/quack thing, where one of them saw that quake.exe was running and modified settings to get a higher benchmark out of the Quake demo run.)
There was, once, an ethernet driver for a device that came as a blob for Linux. Somebody reverse engineered it and created the forcedeth driver. (Although, to be fair, that ethernet device was built into an nVidia chipset)
Frame buffers are pretty simple once you figure out how to initialize video modes. It’s when we add the accelerated co-processors that things get complex quickly. They have their own machine code, need their own compiler, have proprietary bootstrapping processes, etc. 3rd party driver writers have to re-build all of this without the benefit of any specs. One needs even more skill to accomplish this for a black box than to write the original driver with full specs.
It’s true about the framebuffers, with one exception that I can think of: DisplayLink devices.
I’ve not really dug into compilers – or graphics drivers, to be honest – but using LLVM as part of the shader compilation does impress me. (Dunno why)
There is more to drivers than just implementing APIs.
For a long while, chipset developers have also optimized 3rd party code to run better on their GPUs. This is not just “cheating” in benchmarks, but effectively fixing bugs in games and other software.
Why else would nvidia need to release “Game Ready” drivers? https://www.nvidia.com/en-us/geforce/game-ready-drivers/ (Or AMD with their brand)
They essentially run a free compatibility lab for 3rd party software. And that is very valuable.
Indeed. I remember when I first installed Max Payne on my PC, one of the polygons for Max decided that it was somewhere at the top of the screen. Until I updated the drivers.
Probably deciding that any graphical glitches might get blamed on them, so they fix it in the graphics drivers rather than taking the blame.
I can relate to some of the topics touched on by this article. I bought a Cubietruck (aka Cubieboard 3) many years ago, and after about a year I was no longer able to properly use it. It’s now been sitting in my closet for the last 4 years. The sunxi Linux kernel is stuck at version 3.4 and is basically a binary blob. You can update to a more recent kernel, but none of the graphics or advanced features of the Allwinner A20 will be usable.
Back in those days, it was not obvious that Raspberry Pi was going to be the success that it is. Essentially, all the competitors took a release-and-forget approach.
This is why we should never trust hardware manufacturers to supply the operating system. Having one entity control both the hardware and the software has been one of the worst developments for FOSS and owner control.
The thing is: it’s not an SBC manufacturer problem.
It’s an Allwinner problem. They do have this approach. AND A LINUX DEVELOPMENT CHOICE PROBLEM AS WELL.
A long time ago, Linus Torvalds and company chose to just say “fuck kernel-level ABI stability”: if companies don’t want to open their drivers (and don’t have a community to keep an open driver updated forever and ever against a fast-moving ABI), just let them suffer. And this is one of the consequences: the one who actually suffers is the user.
The problem is not lack of standardization; it’s that Rockchip, Allwinner, Broadcom, NVidia… all ARM SoC manufacturers pack their chips with features that go way beyond the scope of standard ARM (with all kinds of custom IP to get an edge over the competition, like better GPUs, AI accelerators of all kinds, video decoder blocks…) and don’t bother to make the drivers open source, or even to maintain them.
Don’t expect a 5-engineer SBC workshop to be able to maintain drivers on behalf of their component supplier, especially for something as complex as a GPU.
And it’s the same situation with x86 as well. You just don’t feel it as much because manufacturers are forced to rally around Windows, and Windows doesn’t change its kernel-level ABI enough to break binary drivers.
But just try to use an x86 peripheral that has a closed source 4-year-old binary blob as a “driver” on a fine Linux x86-64 desktop or laptop, and see what happens.
CapEnt,
I mostly agree with your points, but honestly, if the code were open sourced in the first place I’d have much more confidence in the FOSS community being able to support it better and longer than the manufacturers themselves. I’ve maintained linux drivers personally, and although I hate that the kernel ABI is always breaking, mapping to it is nevertheless fairly straightforward. I can usually get it right on the first try. Granted, I haven’t done a GPU driver, but even so I think the community would end up handling it better than manufacturers do if they were given the chance. Of course the problem is that they won’t be given the chance with proprietary code, and reverse engineering is a significant undertaking that may never reach the performance and feature parity of the original driver.
IMHO things got worse with windows than before it. Before windows, all hardware adopted pseudo-standards (like adlib, soundblaster, SVGA, VBE, NE2K, PS/2 kb & mouse, etc) in order to maximize compatibility with software. If it wasn’t compatible, then no one would want to buy it. But windows drivers changed this: software would use the windows API instead of programming the hardware directly. Consequently hardware could and did become more proprietary, and for a long time linux support was pretty bad.
Today linux benefits from a huge community, and there are more manufacturers that explicitly support linux too. It’s the #1 OS in major markets like hosting and HPC. If a manufacturer only supports windows, then that manufacturer isn’t a serious contender in these markets. This critical mass has helped linux a lot on x86. It doesn’t help in markets where linux doesn’t have critical mass though (or in markets where critical mass takes the shape of android… uck).
Are you talking about proprietary nvidia drivers? Obviously they have a poor track record, but it could get better if nvidia’s FOSS drivers become mainlined.
Lack of interest in maintaining your product isn’t really Linux’s fault. On this front Raspberry Pi does a rather good job. That is, you could have bought their board a couple of years back and the latest OS they provide still supports it just fine. You don’t really need a “stable ABI” to achieve that, as although the drivers are not open source, you as an OEM (or through some contractor) can still have access to the code. Bottom line: Linux isn’t preventing you from maintaining your product. Poor maintenance hence should be attributed to the OEM. It’s up to them to do a good or poor job. And it’s up to you as a buyer to do the research on who does a good and who a poor job in regards to long term maintenance.
Geck,
Both the manufacturers and Linux are responsible for this long term stalemate. As a user who believes that any ability to update the kernel is objectively better than none, I’m not giving linux a free pass anymore. Linux’s stubbornness is holding everyone back. Inaction==binary kernels==no progress.
Without GNU/Linux I would say that very likely you wouldn’t even be able to buy such a board. Hence your claim that GNU/Linux is holding things back is in my opinion not well founded. I don’t know the case in detail and I will hence assume that in this specific case the OEM sold a board but had no plan to maintain it long term in regards to software. Blame the OEM for that. Not Linus. Or better, blame ARM, as this is ARM’s fault. A well known issue on their side.
Geck,
As the dominant FOSS OS, the linux community could single-handedly solve this. Linux is wasting its opportunities in a gridlocked staring contest that will never end. Enough is enough.
I don’t blame him personally so much as the market’s failure to produce viable FOSS competition. When there is no urgency to innovate and fight for users, the result is complacency, which has unfortunately grabbed hold of linux. The lack of competition signals to linux that they can just stay on top indefinitely without solving anything. That right there is the whole problem! Linux either needs to fight for our progress or get out of the way. I know you speak of Fuchsia as the enemy, but I hope Fuchsia can bring in much needed competition for FOSS. We deserve that much.
Note that it’s just your opinion, not necessarily the truth, no matter how much you believe in it. You sound like you have it all figured out and obviously can’t be wrong. As for me personally, I can agree that I am not interested in a Google blob offering me no control whatsoever. Hence I will stick with GNU/Linux. Yes, that likely comes down to actively fighting against the Google blob, though obviously it won’t make all that much difference, as the majority of people really couldn’t care less. But still. As said, this really wasn’t a Linux vs Fuchsia debate. It was a debate about poor maintenance of your product. Somebody just wanted to make a quick buck and after that couldn’t care less. On the other hand, Raspberry Pi maintains their boards basically, and so far, indefinitely. GNU/Linux hence can’t really be the problem here.
Geck,
No, actually, I don’t have an opinion about Fuchsia yet. My primary interest in it at the moment is that it introduces new competition within the FOSS space, which has been lacking under the linux FOSS monopoly. And this lack of competition seems to be why linux continues to get away with so little progress year after year. It’s been decades. There is no plan to get through the stalemate keeping us dependent on binary linux kernels. Even as linux users and supporters, we need to admit the truth that our inaction has already caused prolonged harm to the FOSS community and will continue to do so.
We’ll have to see where Fuchsia mobile devices end up in practice, but for the time being you need to take a hard look in the mirror and admit that the linux/android kernel situation is NOT a solution for our FOSS needs as it stands today. I’ve told you this before, but objectively you will need to compare Fuchsia to where Linux is at and not where you wish it were, otherwise it makes you a hypocrite.
Linux blobs are just as harmful. If you were genuinely in favor of FOSS and owner control then you should be fighting these linux blobs as well. Linux should not be getting a pass on this!
I agree that the majority of people don’t care about FOSS. But of those of us who do care about FOSS, there’s a good chance that the majority of us are disappointed with the android & linux on ARM status quo. Everyone involved, including linux, deserves some of the blame for this stalemate, which shows no signs of resolving itself.
I don’t agree that there isn’t any meaningful competition in the FOSS world. Among others, there is BSD. As for your claim that Linux is holding things back, this couldn’t be further from the truth. It’s GNU/Linux that is pushing things forward. Without GNU/Linux we would likely still live in caves. Including Google. As for your use and definition of FOSS: when you add binary drivers, it’s not FOSS anymore. But OK, you are not a purist. I get it. Neither is Linux. That is why you can put a product on the market, provide binary drivers for it, and more or less maintain it indefinitely, if the hardware runs GNU/Linux. In addition you do have access to the device driver source code. In the end it’s a choice. What you are proposing is: let’s support companies that do a poor job of maintaining their products. Imagine you bought an NVIDIA graphics card today. The latest one. Let’s say for 2000 dollars. And let’s say you use Windows. Now imagine NVIDIA would not maintain or update their graphics card driver from that day forward. They would say the driver was released as 1.0 and their job is done; due to the “stable ABI” it will just work, for now and forever. Software just doesn’t work like that, does it? And this is what my remark was about. You would basically kill off GNU/Linux for some niche use case nobody really wants or asked for.
Geck,
I would love it if that were true, but it’s not enough for a competitor to exist in the shadows; it needs to have critical mass and be well supported. Take Sailfish: it probably has more marketshare than BSD on mobile, but neither is “meaningful competition” for android/linux.
Are you counting iOS towards BSD marketshare? If so, then sure, iOS has a lot of market share, but iOS is a proprietary product – we as end users cannot get the source code, and I don’t count it as a FOSS platform.
Yes, that’s the whole point. Not even linux lives up to this open driver standard on android & ARM. That’s a double standard.
Everyone agrees that binary drivers suck, but we need to face the reality that *linux* got us here. And while I’m all for criticizing the binary drivers, that avenue has been a complete stalemate. The truth is that linux’s unstable ABI = binary drivers + binary kernel. Linux has made the problem so much worse than it needed to be, since the entire kernel becomes a binary blob. It’s a slap in the face for FOSS users, and it’s taken place with none other than *linux* at the helm. For the sake of maximizing tangible end user FOSS benefits, we need to stop glorifying ideas that have failed to bring results and start adapting and focusing on ways to make real progress. Being able to update the kernel would be a huge step toward getting FOSS back on track.
Raspberry Pi is a nice example of how GNU/Linux and blobs can coexist and how long term maintenance is possible. Technically, hence, GNU/Linux isn’t preventing your favorite smartphone provider from doing the same. If you personally would like to tinker more with it, then what you will need at minimum is root access and maintained FOSS or binary device drivers, for GNU/Linux or Fuchsia. Technically all four variants are viable and possible.

As for what has the best track record so far, it’s GNU/Linux with FOSS drivers. Here you really can have full control, tinker with it down to changing it programmatically, and run the latest kernel, user space and programs on 15-year-old hardware. Some applications, like let’s say Blender, can cause issues, due to demanding a higher version of OpenGL. Nobody else can beat that, not even Windows. And while Windows is in popular culture considered to be the best in this regard, it’s far from it compared to GNU/Linux with FOSS drivers.

On mobile, hence, we have what we have ATM: GNU/Linux plus blobs, and companies not interested in long(er) term maintenance. As this is a choice and not a technical limitation, you can be rather certain you wouldn’t get much more than that by having a “stable ABI”. On the other hand, once companies such as Broadcom start producing and maintaining FOSS drivers, likely a decade or more of support is to be expected, for as long as the hardware remains a viable option, since somebody would likely keep the FOSS device driver viable by applying minimal maintenance work to it. It doesn’t have much to do with ideology; it’s just how things work in real life. All this has been proven in the past on numerous occasions.
Geck,
RPI is a great project, but they too have been stuck in a similar boat. Their way of dealing with the situation is to take a snapshot of the linux kernel and apply back-patches to a fixed version rather than track mainline. In a roundabout way this gives them a “stable ABI” to work with, albeit at the cost of a stale kernel until they can get new blobs from the chip manufacturers. It’s not that different from the windows model, except that windows was designed for a stable ABI, whereas here we’re just freezing whatever state the linux ABI happens to be in to achieve compatibility. So it’s not ideal, but it’s probably the best they can do with the hand they’ve been dealt.
To their credit some developers are trying to reverse engineer the hardware to eliminate the blobs.
https://hackaday.com/2017/01/14/blob-less-raspberry-pi-linux-is-a-step-closer/
But it’s a considerable effort to reach 100%, and unless they do, things will be buggy and incomplete. At least at the tail end we might end up with proper open source drivers that hopefully match or beat the manufacturer drivers. IMHO the main problem with this is that by the time it’s done, some newer and better chips will be out. So the new dilemma becomes: should RPI pin the project to older hardware just to be able to use FOSS drivers? It’s a complex issue with no easy answers.
Exactly. Even with GNU/Linux and ARM blobs, “chip manufacturers” are not limited in any technical way from keeping your mobile phone working for, let’s say, a couple of years more, nor is the hardware inherently limited to some specific kernel version. In the end it’s their choice not to do it. They could if they wanted to. If we are talking about you and me doing something, then obviously the first thing is to allow root access. The second thing is to have FOSS drivers, or conditionally maintained and accessible blobs. There is nothing ideological or technical preventing this.
Geck,
Having to hold back kernels is no good though; I don’t find that acceptable, and you shouldn’t either. Users deserve to be able to use the latest mainline kernel. The RPI is forced to do this to get a “stable ABI”, which is a key ingredient for long term support, but having middlemen like RPI do this instead of doing it upstream has lots of negatives. The RPI project is making the best of bad options, but it really doesn’t scale well. ARM kernels on real hardware become increasingly divergent, creating a maintenance burden for RPI and others like me who maintain our own linux distros. I really hate the idea of having to maintain several different kernel versions for different hardware targets, as this multiplies my workload. Also, using RPI’s method we’re not free to change kernel build features arbitrarily, because that risks breaking the driver ABIs. We can end up being stuck with the kernel features that the manufacturers chose when building their binary blobs, and as a consequence we not only have to deal with diverging kernels, but also with feature discrepancies.
This isn’t just theoretical either; I’ve given up on a linux ARM project because I didn’t have sufficient control over the kernel build. It sucks that the full potential of mainline linux remains out of reach on ARM. I commend RPI for their awesome & tireless work, but that’s not an excuse for the broader linux community to ignore the numerous problems the unstable ABI creates downstream.
Alfman, Geck,
I think the issue is resolving itself, in an unexpected way.
If I am reading this correctly, nvidia’s recent open source move is in that direction: https://www.osnews.com/story/134860/nvidia-transitioning-to-official-open-source-linux-gpu-kernel-driver/
Basically, Linus is “locking down” Linux by putting more functionality behind a GPL barrier. This will force “binary blob” manufacturers like nvidia to build their own “stable kernel ABIs”.
That open source “driver” is not actually a full-featured driver. But if it initializes the cards, has a mechanism to send commands, and does not require DRM for unlocking, it is as good as it gets. They (nvidia/arm/rockchip/samsung) can then keep the actual driver functionality separate, and keep it compatible over many kernel generations.
I might be reading this wrong, though.
Armbian is your friend.
They’ve put out new releases this year for the CubieTruck:
https://imola.armbian.com/archive/cubietruck/archive/
sukru,
That’s an interesting thought. Nvidia’s open kernel driver will provide a stable ABI for interfacing with their userspace drivers, which should address the breakages on x86. So maybe it could be a solution for ARM too.
I want to point out that there may be binary blobs for a broad range of functions like USB, ethernet, wifi, bluetooth, audio, battery charge control, cpu regulation, etc. Moving them all into userspace would effectively turn linux into a microkernel, haha, but I assume you were only thinking about the GPU.
Your idea could help, but I’m not sure whether Qualcomm and other ARM manufacturers could be convinced to copy nvidia’s move. Consider this difference: on x86, nvidia has no control over the kernels that their customers use; however, on ARM, devices are expected to use the kernels supplied by the manufacturer. So they may dismiss the ability to support arbitrary kernels (like nvidia needs to) as a non-goal: “You run the kernel we give you and that’s all you need.”