Linux hardware projects are made or broken by their community support. PINE64 has made some brilliant moves to build up a mobile Linux community, and has also made some major mistakes. This is my view on how PINE64 made the PinePhone a success, and then broke that again through their treatment of the community.
I want to start by pointing out that this is me leaving PINE64 and not the projects I’m involved in like postmarketOS.
This is just a sad story. I hope some of the problems can be mended in time.
For all its shortcomings, I was always thankful that the PC had a standard boot environment. It seems like such a trivial thing, but when an architecture like ARM gets it wrong over and over again, it really reminds us just how important boot standards are.
If PCs were like ARM, not only would alt-OS be a constant struggle, but both users and developers would risk perma-bricking their computers every time they flash a new OS. If only ARM had worked on a more standardized boot process, they could have avoided this perpetual mess!
Agreed, it is beyond frustrating trying to use anything but the “official” Manjaro release for my Pinebook Pro, to the point I’m questioning why I bought the damn thing in the first place (and I liked it so much I even replaced the LCD when it died early on, sourcing it myself instead of waiting for Pine64 to get it back in stock). The hardware feels nice, the screen is decent, it’s definitely fast enough to be a daily driver unlike the horribly slow RPi 4…but the lack of any real support from Pine64 combined with the revelations in this article make me never want to use it again. It’s currently sitting in the bottom of my sock drawer with the battery unplugged, waiting for the day Void Linux or OpenBSD run properly on it…and now it seems that day will never come.
If I may add, at least the PC has something approaching standardization in the audio and GPU departments. Sure, it’s not perfect and there are other peripherals to worry about, but at least it’s something. Instead, ARM SoCs are typically binary blob this and binary blob that.
But as you say, the main problem is the boot situation. Not only do users and developers risk perma-bricking their computers every time they flash a new OS, but each ARM board requires its own special ISO image.
Sometimes I wonder why the Desktop Linux community even bothers with ARM, when it’s clearly a hostile platform (no surprise here, considering it was shaped by the nasty world of embedded) and when most pre-compiled software for Desktop Linux is x86-64 anyway.
Now, don’t get me wrong, it’s possible to ship a Desktop Linux product on ARM, but the resulting product would be far from what people would call tinkerer-friendly.
So, should we start pestering some x86 vendor for a dirt-cheap x86 single-board computer or something?
But even with the cost advantage of ARM over x86, ARM is still not worth it in the long run in my opinion, especially when you consider most ARM boards ship without storage and a case, so the actual cost of the ARM board is higher.
Which brings the question: Which is the cheapest x86 board available out there?
https://shop.udoo.org/en/bolt-boards/
https://shop.udoo.org/en/x86-ii-boards/
https://www.dfrobot.com/product-2594.html
https://www.dfrobot.com/product-1728.html
https://www.seeedstudio.com/ODYSSEY-X86J4105800-p-4445.html
…
Clearly not the ARM SBC price range.
kurkosdr,
To be honest I feel like I am the target demographic for ARM SBCs, but I really wish they worked much better in these key respects. All the parties have had enough time to iron things out, but it isn’t happening. They keep rolling their own solutions and fragmentation is the norm.
On the one hand it would have been nice for MS to succeed on ARM, because Microsoft is well situated to push standards, such as requiring manufacturers to support the UEFI standard. I would get on board with that, but on the other hand there’s no way in hell I can condone what Microsoft was doing with mandatory secure boot locks on ARM blocking alternatives. If it weren’t for their anti-competitive behavior, I feel like I could have endorsed Windows-certified ARM hardware for Linux.
You’re right. While ARM looks great for efficient applications, frankly my own Linux distro doesn’t support them because of these hassles. Technically I could create and maintain device-specific forks for the various ARM devices I own, but frankly I’ve never needed to do this on x86! To be perfectly honest I am not interested in supporting anything other than mainline Linux. I understand why people do it, but I refuse to invest time and effort into back-porting my stuff just to support 3rd-party kernels that are updated at the manufacturer’s discretion. For me that’s intolerable.
Alfman,
The amusing thing about this is we can truly thank Compaq for making the PC have a standard boot. They reverse engineered the IBM BIOS and opened it up. All of the later attempts to try to lock it down with non-standard stuff failed. It’s literally the only reason the ‘PC’ of today exists like it does. Otherwise it’s quite possible we’d still have options of Atari, Spectrum, Commodore, etc. Granted, it could be argued whether that would be better or worse than the current landscape. I tend to think, much like with the mobile industry, we as a people prefer a ‘two party system’ so we’d likely still be kind of stupid in that regard. Some of the non-Intel systems had open firmware and it still ended up dying off, of course.
Also on the flip side, we have UEFI and Secure Boot, where it’s trying almost to be more ‘arm like’ and lock out alternate operating systems… Freedom, no matter where it’s at, is still a fight worth fighting for.
Indeed, without Compaq the IBM PC would have been yet another platform that had a good run before being succeeded by something else (IBM PS/2). Also, Compaq took the PC out of the 4.77 MHz ghetto by introducing the “turbo button” that underclocked the CPU (in hardware) to the speed of 4.77 MHz, allowing for faster CPUs without breaking compatibility with software that required 4.77 MHz. It seems obvious now, but it wasn’t back then.
Although personally I never understood the purpose of turbo buttons on later PCs that underclocked the computer to some random frequency.
I fail to see what ARM has gotten wrong over and over again.
ARM also has a very standard boot process. If you look at Documentation/arm/booting.rst, it starts off by saying “The following documentation is relevant to 2.4.18-rmk6 and beyond”. It’s dated 18th May 2002 (the changelog in the text is a bit newer). Effectively it’s:
1. Setup and initialise the RAM.
2. Initialise one serial port.
3. Detect the machine type.
4. Setup the kernel tagged list. (Device trees – the early days of ARM weren’t that sophisticated, though, and you’d get a register giving you a machine type and the size of RAM, IIRC)
5. Load initramfs.
6. Call the kernel image.
(I think the original bootloader called 0x8000 – and the later ones call 0x80000. That said, the “Booting ARM linux” documentation doesn’t give an address for ARM – and that the default for ARM64 is 0x80000)
Ah – reading on, the document has 4 split into 4a, covering ATAGs – I vaguely remember having to use them – and 4b covering device trees.
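For anyone curious what that pre-device-tree tagged list from step 4 actually looked like: ATAGs were just a packed array of (size-in-words, tag-id, payload) records terminated by ATAG_NONE, placed in RAM for the kernel to walk. Here’s a rough Python sketch of that layout – the tag constants are the classic ones from the old boot protocol, but the blob below is a hand-built toy example, not something dumped from real hardware:

```python
import struct

# Classic ATAG tag ids from the old ARM boot protocol ('AT' magic in the high bytes).
ATAG_CORE = 0x54410001
ATAG_MEM  = 0x54410002
ATAG_NONE = 0x00000000

def walk_atags(blob):
    """Walk a packed ATAG list: each record is (size_in_words, tag_id, payload...)."""
    tags = []
    off = 0
    while off + 8 <= len(blob):
        size, tag = struct.unpack_from("<II", blob, off)
        if tag == ATAG_NONE:
            break  # terminator record
        tags.append((tag, blob[off + 8 : off + size * 4]))
        off += size * 4
    return tags

# Toy list: a bare ATAG_CORE header, one 64 MiB memory bank at address 0, terminator.
blob  = struct.pack("<II", 2, ATAG_CORE)                # header only, no payload
blob += struct.pack("<IIII", 4, ATAG_MEM, 64 << 20, 0)  # (size, tag, mem_size, mem_start)
blob += struct.pack("<II", 0, ATAG_NONE)                # end of list

for tag, payload in walk_atags(blob):
    print(hex(tag), payload.hex())
```

Device trees replaced this with a self-describing blob, but the handoff idea is the same: the bootloader leaves a hardware description at a known place and passes a pointer to the kernel.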
Anyway, the reason you can’t take a random ARM board and boot your own Linux isn’t a problem that’s been caused by ARM. The problem is down to the use cases of different boards.
People making any sort of board with an x86 or AMD64 processor on it really don’t care about whether you boot something else on them. There are exceptions to this – the XBox being one – and there are a few embedded products that have intel processors and also secure boot.
On the other hand, a large percentage of ARM boards have secure boot on them. Or, at the very least, something that verifies the image being booted is full and intact. You may require signed images. The images may also be encrypted.
The bootloader has information about how the board is put together. How things are wired up, that sort of thing. (Incidentally, the hardware also has to be set up in such a way that when the processor – either ARM or x86 – reads certain memory addresses, they get boot code. That sounds simple – but generally isn’t)
Thing is, if you want to replace the bootloader, and there is a signing mechanism but no secure boot, you’re already in a world of pain. You have to figure out how things are wired up on the board and create your own hardware description – whether that’s U-Boot with the ARM standard boot process (it’s slightly fluid, but I suppose it’s generally been simple enough that even changing one little bit changes a high percentage of it) or UEFI and its cut-down/genericised ACPI (developed for the datacentre server people by Linaro).
With secure boot, you’d better hope for a vulnerability in the boot process that’ll let you load something else. (You just have to look at games consoles to see that it does happen.)
xslogic,
Standardizing the boot process outside of the ARM CPU may not have been on the drawing board initially. They just let each vendor do their own thing. But IMHO this lack of coordination & standardization was a bad decision in retrospect, one that continues to haunt the alt-os community. Until this can be fixed, standard tools and operating systems that are able to boot and run on ARM hardware regardless of vendor remain impossible. This is a huge con compared to x86, where standards are both normal and expected. If ARM could do better and commit to a full boot standard then I would be willing to cheer them on despite my fatigue thus far. But I will remain disappointed if what we have now is the best they can do.
I agree with you about secure boot and bootloader locking. Some vendors will intentionally block owners from installing what owners want. However, I see it as a separate problem altogether. But even on devices with unlocked/defeated locks, the lack of standards on ARM is still a large reason universal alt-os on ARM remains out of reach.
I’m failing to see which bit isn’t standardised that should be. Which bit that Intel is doing do you think should be co-ordinated? I am trying to understand.
There is a specific way of starting an ARM processor up (with a few minor tweaks over the years – then again, x86 has also had a few tweaks) that even the Raspberry Pi follows, albeit most of the hardware setup there is performed by the VideoCore GPU.
Generally speaking, ARM chips of the past have mainly been designed to be put into embedded systems. ARM hasn’t even made the chips for a long time – other chip vendors do. (At which stage, it’s effectively similar to how the southbridge manufacturers decide how x86 chips get their initial boot code, for instance.)
(I’ll let you off about secure boot being separate – I was going to ask you to explain the intel secure boot process if you hadn’t…)
xslogic,
Well, to put it simply, I want to be able to take an idea like Knoppix – a utility disk you could run on nearly any x86 computer – and translate that to ARM.
On x86 you didn’t have to care too much about x86 vendors. Whether you’re installing Windows, Linux, BSDs or anything else, the entire boot process was highly standardized. You didn’t have to be particularly knowledgeable, didn’t need per-device instructions, didn’t have to be proficient at reverse engineering, and didn’t have to be proficient at bypassing restrictions. Having ubiquitous boot standards is very powerful, and by contrast the ARM devices of today are unsatisfactory for DIY.
Projects like LineageOS and its forks only work with ARM by having lots of volunteers doing lots of redundant work across lots of devices, but even so most devices will never be supported. I work on my own distro, and for someone like me standards are even more important because I simply cannot build/maintain custom images for every ARM device I’d want to support. x86 is so much easier! And it’s not because of anything intrinsic to x86 processors themselves, but simply because there’s a complete standard we can count on for nearly 100% of hardware (whether legacy BIOS or the newer UEFI).
This degree of standardization would go a long way in reducing ARM barriers for DIY. I’m sure there are differing opinions on how best to accomplish it, but even so I believe that the overwhelming majority of DIY users want this.
Alfman: Okay – I get it – there is a standard that the platform expects certain things in certain places in RAM, but there’s no actual standard for where it actually loads it from or how it controls that.
I’m not sure that’s actually part of a standard, though – more so something that’s expected of the platform. (And the different BIOS/UEFI vendors on x86 will, generally, make them work close enough on each board)
(Sidenote: I have never needed to look at UEFI – so I don’t really know much about it – other than that the PC one has ACPI as part of the standard and the ARM one has a modified version of ACPI)
And I sort of agree that it’s not that it’s intrinsic to x86 – but, again, it’s down to the use case. Non-embedded x86/AMD64 boards won’t sell if you don’t provide a method for people to load their own operating systems onto the board with some method of recovery. They can be sold as part of a system – but they can also be sold individually. People need ways to load software onto them.
As much as it’d be nice to be able to have ARM boards that boot what we want and have a controllable boot path, it doesn’t pay for anybody to develop them. We are very much in the error bars of most ARM SoCs’ sales figures. (There are 2 large groups of users of general-purpose ARM devices: Apple devices – and Apple mostly didn’t bother with the boot standard most other x86 motherboard manufacturers used – and datacentre boards, which will probably be very expensive and have a large number of cores on them. I don’t know; never used one.)
Embedded systems, generally, don’t have the developer time or QA bandwidth to add in other methods of booting. (I’m pretty sure that the embedded x86/AMD64 boards out there will also typically have unused boot methods disabled – even ignoring secure boot and chain of trust issues)
Firstly, there are many people on many platforms who do not know how good they have it: long-term stability, by comparison to some modern fleeting alternatives – fleeting alternatives that are somehow sold as better than the same old same old. I wonder how the internal discussions go when developers start pushing an OS monopoly; there must be some true irony!
Secondly, in contrast to the OS decisions, early adoption is always built on hope and promises; sometimes – in fact many times – it just won’t work out. We should not be surprised when we read that it leaves some with disenchantment.
In the end, it’s just another example that nothing seems to be any more certain than change itself.
Sad. I was going to buy PinePhone with UB Touch, but now…
As usual, 20 distros didn’t really do any favors. Endless differentiation without something to prune out the losers just produces 20 bad products. It is unsurprising that the most commercial of the distros won out, and that distro doesn’t care about this niche. Although it is a bit sad, as judging by their finance scandal, Manjaro is very likely an incredibly small operation pretending to be a big one.
As usual, 20 distros didn’t really do any favors.
Did you miss the part where Manjaro is using everyone else’s work poorly? From the blog post:
Much of the original hardware bring-up was done by Ubuntu Touch. Mobian developers built the telephony stack via their eg25-manager project. And in my role for the postmarketOS distribution, I developed the camera stack.
[…]
shipping known broken versions of software and pointing to the developers for support.
So Manjaro is using other people's efforts while contributing nothing and reaping financial rewards for doing so, actively breaking stuff by shipping broken software versions, and being an actual negative by pointing angry users – who are running Manjaro's broken builds – to the developers who have already fixed the issues those users are hitting.
And your take is that there were too many cooks, rather than serious ethical issues with the "winners" of this situation?
Nah. Nah, that dog doesn't hunt. The 'twenty distros' were doing just fine, all of them sharing the load and contributing back to each other before Pine64 decided to give over everything to a company that is already known to have a financial scandal which is something that should make anyone take a pause for a moment and question just how this situation came to be. The party that is not sharing and is not contributing would seem to be the problem here, not the strawman of many hands making light work you’re beating up on.
A usual misinterpretation of reality. Just because there are 20 projects technically doing all the work to package and port things doesn’t mean any of them are successful. It’s more like 20 chefs in 20 different kitchens trying to serve one restaurant. Looks like the last survey of only around 3000 people said the biggest distros were Mobian and Manjaro. Naturally people trended towards the bigger project, as when they searched for help to resolve issues, only answers related to Manjaro would be found. Thus a feedback loop is created where the most popular distro wins out over the smaller projects. In reality Linux is far more akin to the awful restaurants on Kitchen Nightmares, where the public has voted with their feet but the owner/manager/chef wants to use Gordon Ramsay to tell the public their bad food is actually good. Much like the Linux community, neither can take criticism, and the restaurant/marketshare will always remain empty/low due to lack of customers and the product just being bad to begin with. At the end of the day, if desktop Linux were good, it would be preinstalled already; and Linux people cannot accept this hard truth about how reality works.
dark2,
Your chef metaphor is apt until the end. It should be more like 20 chefs in 20 different kitchens trying to serve 20 different restaurants. With all the overlap do we need so many restaurants in town? Couldn’t we get by with just one or two? Maybe, but not everybody wants the same thing and there are niches to serve. Also consider that when we have viable choices, the competition helps discourage anti-features.
It would be nice if you could cite your source. What you are describing is called the network effect. It’s not specific to operating systems, it’s true of almost every market in existence and even our two party political systems.
I wouldn’t object to constructive criticism, but I do object to the zombie criticism that has no thought, balance, or objectivity. It merely goes through the motions of hating without providing real insight and nuance. Unfortunately there is this dogmatic mindset for some whereby it’s not enough to hold an opinion for themselves, they have to push their opinion onto others as though we’re wrong for choosing linux for ourselves. That’s silly.
Linux may not be for you and that’s fine, but with all due respect your opinion is irrelevant when it comes to my choices and those of alt-os users world wide. For the record I was a solid windows user before moving to linux and you know what even in hindsight I’m still glad I did because linux is better than windows for me.
So you think marketshare is a stand-in for something being good or bad?
“We’re the best, baby! Our customers love us and our numbers prove it! You people cannot accept this hard truth about how reality works.” – AT&T circa 1980 alt-universe antitrust proceedings.
> finance scandal, Manjaro
I couldn’t find any information on this. Can you please direct me to a link?
You could probably start here: https://linuxreviews.org/Manjaro_Linux_Lead_Developer_In_Hot_Waters_Over_Donation_Slush_Fund_For_Laptop_And_Personal_Items
Ohhh yeah, I vaguely remember hearing about this. Thank you. I agree, it does look pretty bad.
Regardless, as an Arch Linux user I’m already natural enemies with the Manjaro Linux distribution.
Thom, since OSAlert is OSAlert Inc. and not just a blog, I’d sometimes wish you’d keep things in your Drafts folder for a while longer. In this case, there’s now also https://www.pine64.org/2022/08/18/a-response-to-martijns-blog/ , which gives the other party’s side of the story. And before long additional information might emerge in the forums and maybe even some journalist will write about the whole thing – as is often the case, and as I know you know, too. It’d be sad if OSAlert of all places became tendentious instead of showing the whole picture.
Yet on the other hand if Thom doesn’t post “right now” people will complain being another repost from Phoronix or Reddit or pick whatever source to rant on.
You do have a point there, unfortunately.
Thank you for the response. There are always multiple perspectives to any issue; they aren’t always equally valid, but it helps a bit to understand what the real disagreement was. Here, it looks like Martijn was upset at having to re-litigate the previous discussion over SPI inclusion on the Pinebook Pro, and that Tow-Boot wasn’t preflashed. At first I thought he was saying SPI wasn’t included on the PCB, but reading again after Pine64’s response makes it a little more clear he didn’t claim that. It was just the pain of relitigating and not getting Tow-Boot on for whatever reason.
I think ultimately the problem, as others have highlighted, is that the project isn’t using the Server Base System Architecture, so you’ll have these arguments until the end of time. And it’s an incentive to stay away. This custom booting stuff is a pain on ARM, and leads to more e-waste.
I read both articles, and to some extent I can agree that cutting off the “Community Editions” is likely a rather bad move, and that focusing only on Manjaro is not that good an idea. Why? I agree that distributions such as Manjaro more or less do the packaging, while the “Community Editions” likely took care of the development. Now I guess only packagers are getting some sparse amount of funding, and the development side was cut off completely. This, in my opinion, can’t work – unless there will be more paid people doing both the development and packaging for Manjaro. But I somehow doubt that, as software development is expensive and I doubt Pine64 can afford it long term.
I understand and agree partially with both sides on the issues that were raised. I should point out that I bought my PinePhone about a year and a half ago and that it has been my daily driver for at least 6 months now. I have changed the bootloader to Tow-Boot and currently run Mobian. The fact that the SPI is something that had to be argued multiple times is frustrating, and yet it is not an uncommon problem in any business that needs to make payroll for its staff (today is payday for the small 15-person non-profit that I run). I simultaneously think that, considering the highly technical market that the PPP is targeting, I would rather see it ship sans OS (cut some minor costs), let the purchaser choose which OS they want to receive the $10, and offer links to each OS with detailed installation instructions. Since Manjaro became the de facto OS for the PP and PPP, I tried running it as my primary phone OS multiple times and regularly found myself frustrated as I had to fix many packages myself. On one hand I have been messing around with Linux and alternative OSes for decades, and don’t consider it beyond my skill set; however, when many of those issues have already been fixed on the other community-driven versions for months, it seems frustrating at best. The dialer was wonky on Manjaro for many months after I could reliably make phone calls on other systems. As long as I can always mess around with various operating systems on PINE64 devices, I will continue to be a customer. Hopefully they remember the “if/then” caveat.