Responding to a forum post on upcoming ARM server offerings, Linus Torvalds makes a compelling case for why Linux and x86 completely overwhelmed commercial Unix and RISC:
Guys, do you really not understand why x86 took over the server market?
It wasn’t just all price. It was literally this “develop at home” issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a “real server”. And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over.
It is actually not about “x86” per se.
The reasoning has nothing to do with the cpu architecture itself, but only with the “PC” being the most widespread and affordable option.
If IBM had chosen the 68k as the CPU back in 1982, the title here would read accordingly.
Yeah, but that was not the main point.
The main point is that ARM distributors just have boards that are “good enough for tinkering, without enough horsepower to develop and deploy software for the cloud”, or “development boards that are expensive, and not really good for daily cloud usage”, or “servers that are more expensive than x86”. As soon as a decent laptop (not Chromebooks with shitty eMMC and low-power processors) comes with ARM and the possibility of installing a Linux distribution on it, this scenario could change.
Another point (my opinion) is that ARM is far too heterogeneous; even with that utopian development laptop you would have to do some cross-compilation or compilation tweaking depending on the language being used, while with Intel you can use an “amd64” distribution that will run well enough for cloud development purposes. No matter if it’s an old Core 2 Duo, an AMD Ryzen or a Xeon, the packages are generic among them.
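(To make the cross-compilation point concrete: the same portable C source builds natively on amd64 but needs a cross toolchain to target ARM. A minimal sketch, assuming GCC and Debian’s gcc-aarch64-linux-gnu cross package:)

/* hello_arch.c – the same portable C source, only the toolchain changes.
 *
 *   native amd64 build:  gcc -O2 hello_arch.c -o hello_arch
 *   cross build for ARM: aarch64-linux-gnu-gcc -O2 hello_arch.c -o hello_arch
 *   (the cross compiler name assumes Debian's gcc-aarch64-linux-gnu package)
 */
#include <stdio.h>

int main(void)
{
    /* predefined compiler macros reveal the target we were built for */
#if defined(__x86_64__)
    printf("built for x86-64\n");
#elif defined(__aarch64__)
    printf("built for 64-bit ARM\n");
#else
    printf("built for some other architecture\n");
#endif
    return 0;
}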
+1
x86 had “competition” from AMD, which offered an alternative “upgrade” path to Intel (remember, AMD chips were better, running at 40 MHz when Intel was at 33 MHz, back when IPC wasn’t a thing), while Motorola made really, reallyyyy slow improvements and offered few alternatives (Thomson and Hitachi had a license for the 68k). Hence “backward compatibility” was more real on the x86 path, offering you yearly improvements in terms of both “horsepower” and power consumption (if that mattered in the 90s), whereas it was a real issue on the 68k (some instructions, like movep, were not executed the same way from a 68000 to a 68060, not even talking about the FPU issue).
So, while the x86 ISA definitely had its quirks, it indeed provided a true “backward compatibility” that ensured an old binary would run, no matter what, from a 486 (let’s take that as a true reference) to the latest Ryzen, provided you ensured software framework compatibility. Of course, the extended ISAs are not taken into account (MMX, SSExyz, …). If some other CPU manufacturer had at least offered such “backward compatibility” with a more “open to license” ISA, maybe they would have had a bit of a chance. But in fact, even ARM has troubles (ARM 7/9/11, Cortex A/R/M divergences), and that’s not even speaking of PowerPC, with no “low cost” alternative.
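(As an aside, that extended-ISA caveat is usually handled by probing at runtime and falling back when a feature is absent, which is how one binary stays compatible from old chips to new ones. A minimal sketch using GCC’s x86-only builtins, available since GCC 4.8:)

/* isa_probe.c – how an x86 binary can stay backward compatible across
 * extended ISAs: probe at runtime, fall back when a feature is absent.
 * Uses GCC's x86-only builtins (available since GCC 4.8).
 *
 *   build: gcc -O2 isa_probe.c -o isa_probe
 */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();  /* populate GCC's CPU feature cache */

    /* a 486-class code path needs none of these; faster paths check first */
    printf("MMX:  %s\n", __builtin_cpu_supports("mmx")  ? "yes" : "no");
    printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    printf("AVX2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
    return 0;
}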
What are you talking about? The 68K had backwards compatibility at least as good as the x86. There was ONE FREAKING INSTRUCTION that generated a privilege violation between the 68000 and 68010 (and it really was needed, since the instruction never should have been allowed in user state), and from there on, there were no issues that you didn’t also run into with x86 transitions, like between the 386 and 486. Atari and Amiga both had code in the OS so that privilege violations by that one instruction got patched on the fly to the proper user instruction, so that only one exception ever occurred on old code that used this improper instruction. The 68060 was a bigger transition than others, but it still had exceptions for old instructions to allow their emulation, and again, OSes from Atari and Amiga handled that using code from Motorola. There were even some products allowing the use of the 060 on the Mac, although Apple had already transitioned to the PPC. I remember far more issues getting old x86 software to work than old 68000 software.
There never was a “Frankenstein” 68k-based machine that would allow you to upgrade it card by card (sound card, graphics card, CPU, …); you had to change the whole machine at once (Amiga, ST, Mac, …), whereas the PC allowed exactly that kind of upgrade. You could just swap the CPU for a faster one, which created a demand/market for ever faster CPUs. Even though Motorola was providing an upgrade path (68000 -> 68020 -> 68030 -> …), there were still bugs in the CPU support.
For instance, from an Atari ST to an Atari Falcon030, or just a Booster020, you had to patch the original 68000 binary because some instructions wouldn’t behave correctly (and no, it’s not because of the TAS instruction).
Some 68k computers:
https://ddraig68k.com/
https://www.ist-schlau.de/
http://www.kswichit.com/68k/68k.html
https://www.yorku.ca/mack/68KMB.html
http://www.bigmessowires.com/68-katy/
http://www.classiccmp.org/cini/ht68k.htm
https://www.retrobrewcomputers.org/doku.php?id=boards:sbc:tiny68k
http://www.s100computers.com/My%20System%20Pages/68000%20Board/68K%20CPU%20Board.htm
Some articles about the 68k’s downfall:
https://spectrum.ieee.org/tech-history/heroic-failures/the-inside-story-of-texas-instruments-biggest-blunder-the-tms9900-microprocessor
http://www.filfre.net/2015/03/the-68000-wars-part-1-lorraine/ (and follow up)
You could upgrade the Amiga. It would take a new CPU in the expansion port. There were 68020 expansions available for the 68000 Amiga 500.
yeah but IBM didn’t. So the article can be boiled down to “x86 won because PCs won, and they ran x86”
exactly – so it is not about the architecture, the ISA, RISC vs. CISC or whatever…
MamiyaOtaru,
cybergorf,
I also agree. x86 didn’t have a technical advantage; it mostly came down to our winner-takes-all markets, where having an early monopoly and using strong network effects and cash flow can be enough to stay ahead of competitors for decades, regardless of merit. I don’t necessarily like the moral of this story, but this is often the way business works.
Does that include the Unixy Linux?
I’ve several ARM boards, however they cannot run a “real” desktop. Yes, the Raspberry Pi has progressed over time, but even at the 3B+ iteration it is a “toy” desktop, not something that can replace my main one. (This is true even if I were to use a remote desktop, and just use the RPi for terminal emulation. I cannot do 4K monitors.)
Others like ODroid offer slightly better alternatives, but still fall short.
I can easily get a much more powerful motherboard at ARM “dev kit” prices.
For example, the nvidia dev kit costs $400+:
https://www.newegg.com/Product/Product.aspx?Item=N82E16813190006
Whereas a Celeron ITX board, with everything except RAM, costs $72:
https://www.newegg.com/Product/Product.aspx?Item=N82E16813157728
Yes, the nvidia one has “CUDA” cores for high performance computing, however other dev kits with ARM Linux functionality are at similar prices.
It is much more pronounced with servers. I can buy a used SuperMicro quad-core server at that price:
https://www.ebay.com/itm/Supermicro-X10SAE-2U-Server-8-Bay-3-5-E3-1270V3-4Core-16GB-1TB-SC825TQ-563LPB/113449675487
An ARM server is only offered as a NAS device, which would require hacking to get a generic distribution, and would not have even half the specs of the Intel one:
https://www.ebay.com/itm/NETGEAR-ReadyNAS-RN214-4-Bay-Diskless-NAS-Server-w-ARM-Cortex-A15-2GB-RAM
(Intel or AMD can be used interchangeably here).
I don’t know where you get your information.
https://www.asacomputers.com/Cavium-ThunderX-ARM.html
Arm64, in the form of the Cavium ThunderX chips, is offered in quite powerful workstations/servers. Getting them second hand does not happen yet. If you compare a new ThunderX-based workstation to a new Xeon-based one, the costs work out about the same.
Celeron ITX boards vs the ROCKPro64 https://www.pine64.org/?page_id=61454 make for a slightly more complex comparison. 4GB of RAM for the Celeron ITX basically doubles its price.
The Rockpro64 is about half the processing power of the Celeron ITX boards, but it’s also about half the cost once you take into account that on the Celeron ITX you have to buy RAM. The Rockpro64 also consumes less than 1/4 of the power of the Celeron ITX boards.
At only 4GB of RAM, the Rockpro64 has trouble at times with the current-day desktop, which is quite RAM hungry.
The next generation of these ARM boards could see 8GB boards out there at reasonable prices.
So yes, those companies making small hobbyist-class ARM boards are starting to move into making boards that are competitive against ITX x86.
There is no need to hack into a NAS either. Note the A15 is only a 32-bit processor:
https://www.eetimes.com/author.asp?section_id=36&doc_id=1318968#
It’s not always the best choice of processor.
http://www.banana-pi.org/r2.html is an A7 board that is supported out of the box by a few different distributions.
Most of the time with ARM you don’t have second hand as an option. And if you put a price on your own time, the hacking required is going to cost you more these days than using a Linux-compatible new ARM board from the start.
There is a lot of reasonable stuff appearing at the 100-dollar level.
Does this Rockpro run a mainline kernel, and is it updated with ease?
This, and the lack of SATA ports, is what has been holding me back for my home server, where I don’t need a lot of power.
The most exciting board for me to build a server with HDDs and a possibility of routing has been the ODROID-H2, currently unavailable due to Intel’s CPU supply issues. It is kind of sad that the best product is still based on Intel hardware. I’d like to use ARM again (I ran a Dockstar for several years) but cannot find a good offering against x86 right now.
benoitb,
I may try getting one for myself at some point. The thing is, many of these ARM SBCs are quite appealing on paper, but after I buy one some problems usually arise. I had written a fairly detailed wish list in one of my posts, but I can’t find it since osnews lost the ability to search user comments.
In short: full support by the mainline kernel would be great (this is more typical on the x86 side). Also, many of these ARM SBCs suffer from overheating & throttling using the official OEM kits, and they generally aren’t as extensible as their x86 counterparts.
For better or worse, your best bet may be buying a NAS product and rooting/jailbreaking it. I’ve done this in the past with decent results. While it bothers me that this voids the hardware warranty, it might be more production-hardened than a DIY SBC kit. You’ll probably be stuck on a non-standard kernel, which sucks big time, but this is fairly par for the course with ARM SBCs.
I thought for a moment you were talking about the odroid-hc2, which I have, except it has an ARM processor. The model numbers are too similar, haha.
https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two/
https://www.hardkernel.com/shop/odroid-h2/
I built my router a few years ago using a quad ethernet x86 board with an j1900 processor, a predecessor to this board:
https://www.dminipc.com/products/4-ethernet-ports-motherboard-j1900-fanless-j1900-2-5-hdd-inboard-bypass-supported-itx-m9f_b-high-performance-router-motherboard
The Chinese documentation was a con, but getting it to boot a standard mainline linux kernel was straightforward, and it is the fastest & most powerful router I’ve ever owned.
Alfman, the idea of jailbreaking a NAS is a path to more issues.
“Also, many of these ARM SBCs suffer from overheating&throttling using official OEM kits and they generally aren’t as extensible as x86 counterparts.”
Some of this is the user’s fault. Take the Rockpro64: there are 3 different heatsinks to buy, each under 5 dollars, and not buying one is a problem. Next, not all ARM SBCs are created equal; there are development ones and production ones.
The other thing is that the Rockpro64 kit I pointed to
https://www.pine64.org/?product=rockpro64-metal-desktopnas-casing
is in fact a NAS kit.
There is a difference between a development board and a board designed for deployment. It’s clear from most of the pine64 stuff that they are serious about it.
The Rock64 is a development board and the Rockpro64 is a production board; both use related CPUs. One of the big giveaways that the board you are looking at is a development board is stick-on heatsinks, where production boards will have some form of heatsink mounting system based on holes through the board.
Next, a production board is backed by fully functional production devices, not just a small compact protective case, that the company itself supports using exactly that board; this is your completed reference implementation. So effectively, when you buy a production board you are buying a spare part for another device, except the maker of that board will provide you with full support for it. The Rockpro64 kit is the reference implementation for a production device.
Finally, the kicker: go to the SoC maker’s specifications and look up what size heatsink should be installed. Many development boards only have a heatsink sized for an air-conditioned room under light load.
The pine boards have 5 years of support at least. For the Rockpro64, the heatsink from them is less than 5 dollars, and it is sized for a 50 C room under full load.
The distributions that pine64 provides don’t run a mainline kernel yet. Key word: yet, and I will explain why. The reason has been a problem for development boards and jailbroken devices as well as proper production boards.
The Rockchip RK3399 hexa-core SoC in the Rockpro64 is a good example of a chip that has not been good to run with a mainline kernel until really recently. It really needs Energy Aware Scheduling (EAS), which only became part of the mainline kernel as of 5.0:
https://community.arm.com/processors/b/blog/posts/energy-aware-scheduling-in-linux
OK, the numbers look the same, except for realtime. Realtime is in fact more critical in most ARM SoCs than most people give credit for. A lot of ARM SoC chips like the RK3399 don’t have dedicated power management chips, so thermal throttling is managed by realtime tasks running in your Linux kernel. So if your realtime tasks are consuming more power, they are producing more heat, which means stalling in thermal throttling for longer.
After Linux kernel 5.0, the RK3399 still has Mali graphics driver issues; the fix will be the Panfrost open source drivers. This is enough to make you curse: if you want the Mali GPU to work with X11, you have to use the older closed-source Mali drivers, which then lock you to SoC-vendor-modified kernels so that power management works right.
Also, there is a big difference between what you get in a prebuilt NAS and what you get in a production-grade board like the Rockpro64 NAS kit. The issue is the heatsink, or the possible lack thereof. In a prebuilt NAS the heatsink will be sized to the projected load: to cut production costs, the NAS maker shrinks the heatsink to the smallest that will handle the load they expect. A production-grade board for unknown usage will have more than one heatsink option, and one or more of those options will cover the chip running at 100 percent. The Rockpro64 has 3 passive heatsink options and 1 active heatsink option, provided by the maker rather than a third party, so they are covered under the 5-year support: you can ask them which heatsink is designed for 100 percent load in your environment, then buy and install the right one and never suffer from thermal throttling.
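(For anyone wanting to check whether a board is actually throttling: the generic Linux thermal sysfs interface reports the SoC temperature in millidegrees Celsius. A minimal sketch; zone 0 is an assumption here, since zone numbering varies per board:)

/* thermal_check.c – read the SoC temperature from the generic Linux
 * thermal sysfs interface (value is in millidegrees Celsius).
 *
 *   build: gcc -O2 thermal_check.c -o thermal_check
 */
#include <stdio.h>

int main(void)
{
    long millideg;
    FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");

    if (!f) {
        perror("thermal_zone0");
        return 1;
    }
    if (fscanf(f, "%ld", &millideg) != 1) {
        fclose(f);
        fprintf(stderr, "could not parse temperature\n");
        return 1;
    }
    fclose(f);
    /* if this sits near the SoC's trip point under load, it is throttling */
    printf("SoC temperature: %ld.%03ld C\n", millideg / 1000, millideg % 1000);
    return 0;
}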
Really, rooting a NAS is a path to hell. It might take a while to find a proper production SBC with proper heatsinking, but it is worth your time.
We are starting to see open hardware routers appear for many of the same reasons, because rooting routers and NASes runs into the problem that once you change the workload, the installed heatsinking is no longer good enough.
Even rooting commercially produced x86-based NASes runs into this lack-of-heatsinking problem.
If you are customizing, be it on ARM or x86, you want a proper production-grade board with proper optional cooling; you do not get this by hacking your way into the hardware.
oiaohm,
I never said it was ideal; in fact, I’m quite opposed to even having to jailbreak devices that we own, but sometimes it’s the most pragmatic way to get the hardware you want.
The trouble is it can be a lot of trial and error. It doesn’t help that ARM communities will hide their weaknesses. But I might give the pine64 ARM computers a shot next time. I’ll certainly keep it in mind, especially if there are strong recommendations from users here.
Well, sometimes DIY research and assembly is ok, but other times I really need a turnkey system that I know is going to work solidly with a mainline kernel with no fussing around. So far, professional x86 products have been more readily available & standardized in my experience. However, I’m optimistic this will change, and I will continue to revisit the progress that ARM vendors are making.
Yeah, these kernel issues continue to be a major gripe of mine but unfortunately there’s not much I can do except wait and hope.
To be honest, I’ve had very good luck getting arbitrary x86 hardware running mainline linux kernels (and therefore able to run my homemade x86 linux OS). I look forward to the day when ARM computers offer the same level of standardization; however, the evolution of ARM products diverged considerably from x86 in that it’s quite typical for ARM manufacturers to tether the hardware to a specific kernel for use in a specific product rather than sell it as a generic/commodity piece of hardware. So, while I’m hoping this won’t be the case indefinitely, there’s the possibility that ARM computers will never function as generically as x86 computers do today. Consider that with mobiles, you see phone hardware and phone operating systems sold strictly together, with very little interest from manufacturers in making them generic & interchangeable.
Alfman, things are changing.
https://fosdem.org/2019/schedule/event/one_image_to_rule_them_all/
At this stage it looks like 4 different vendors of ARM SoC chips will be able to use a single boot disc. The obstacle has been their unique boot sectors; it requires fixing up u-boot so that it can boot different brands of SoC chip from the same build.
One of the ongoing issues with the ARM instruction set is that you can ask what features the ARM cores have, but you cannot ask which SoC chip this is. Of course, there is also no instruction to ask which board the SoC chip is on. The lack of software-acquirable answers to these questions means that during installation you will have to answer more questions than on x86.
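(The board name is at least recoverable in software on device-tree systems, even though there is no instruction for it: the kernel exposes the DT model string. A minimal sketch, assuming a DT-based ARM board; on x86 the rough equivalent would be the DMI strings under /sys/class/dmi/id/:)

/* board_id.c – there is no ARM instruction that answers "what board is
 * this?", but on device-tree systems the kernel exposes the board name.
 */
#include <stdio.h>

int main(void)
{
    char model[256];
    FILE *f = fopen("/proc/device-tree/model", "r");

    if (!f) {
        fprintf(stderr, "no device tree exposed here\n");
        return 1;
    }
    if (fgets(model, sizeof model, f))
        printf("board: %s\n", model);  /* e.g. "Pine64 RockPro64" */
    fclose(f);
    return 0;
}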
With kernel 5.0, Rockchip arm64 chips are down to only the Mali driver not being in mainline Linux. Rockchip is not the only vendor in this position, where all drivers are mainlined bar Mali. Google with Android wants to be able to fire up Android recovery mode using a mainline Linux kernel alone, so there is pressure on SoC and ODM vendors from there.
This led to this other project: now that the Linux kernel has almost all the drivers, how are we going to have a universal boot disc? That is the “one image to rule them all”.
Things are getting better; it has just been a hell of a long road getting to this point.
The Linux kernel does not support all x86 either; the Atoms with PowerVR chips are horribly badly supported. At least once we have the one image to rule them all, a decent percentage of arm64 boards should be able to use a single installation disc.
@post by Alfman 2019-02-25 10:20 am
The list in https://www.osnews.com/story/129331/intel-to-discontinue-itanium-9700-kittson-processor-the-last-of-the-itaniums/ in reply to my post where I mention Toradex? (ctrl+f it)
And DDG doesn’t have site search? (my first comment in https://www.osnews.com/story/129339/firefox-66-to-block-automatically-playing-audible-video-and-audio/ … )
@post by Alfman 2019-02-25 10:12 pm
and @post by oiaohm 2019-02-26 1:00 am
It seems the community is under way creating the standards. We could call such a boot standard for Linux on ARM SoCs… LEG (hm… Linux… Execution… or Embedded… Guidelines? You heard it here first, folks )
I’m hopeful that the next major revision of RPi will finally have good enough specs (RAM mostly) to largely replace my old desktop …or will my expectations shift up by then as well? :/ (hm, I am probably getting a 4k monitor this year…)
https://www.theregister.co.uk/2019/02/23/linus_torvalds_arm_x86_servers/
“For Jobs, cross-platform code represented a competitive threat, bugs, and settling for lowest-common denominator apps.”
“For Torvalds, it may be that supporting Arm architecture complicates kernel development, demanding more work and creating more potential issues to resolve.”
Apple is currently switching to go full ARM; Linus is saying that the cloud is to be full x86. There’d be a schism anyway, and since higher-level programming languages matter more than C/C++ (Java/OpenCL/…), I don’t care anymore about the underlying ISA (I used to be a 68k advocate), provided it delivers enough IPC (Instructions Per Cycle) at the lowest EPI (Energy Per Instruction) for a reasonable price.
That’s all that matters to me now. x86 is perhaps brain dead, but even using C/C++, you get enough abstraction not to be annoyed by some legacy architecture quirks.
Maybe I’m way off, but I think that with the advent of Kubernetes and serverless, the underlying hardware is a lot less relevant unless you are still buying your own hardware. A lot of shops are just giving their dev teams an aws/google cloud environment for dev.
Bill Shooter of Bul,
I agree.
The problem for ARM isn’t compatibility; the demand for ARM servers is absolutely there. I want my next server to be an ARM server, something I’ve been saying for years! I can buy affordable new & used x86 servers from thousands of vendors. The lack of commodity server-grade ARM hardware is a huge impediment to would-be ARM buyers like me, who are desperately waiting for hardware vendors to sell these at affordable prices.
Given the fact that most linux software is installed from repos or compiled from portable source code, there’s virtually no high level difference for users and administrators between x86 and ARM. I’m willing to bet most users and administrators who log into a turnkey system would be 100% oblivious to the underlying architecture if they didn’t actually look at /proc/cpuinfo. Linux is an abstraction that makes architecture almost entirely irrelevant: same environment, same workflow, same software, which makes Linus being wrong about the importance of x86 for “cloud” all the more ironic, haha. Maybe his world focuses on low level kernel modules, but for the vast majority of web software & administrators who use linux there’s no dependency on x86 architectures (think php/exim/dovecot/bind/apache/nginx/wordpress/joomla/etc).
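(To put that in perspective: from userspace, the architecture is literally one field returned by uname(2); everything above it, distro, packages, config, looks the same. A minimal sketch:)

/* whatami.c – from userspace, the architecture is one field of uname(2);
 * the rest of the environment (distro, packages, config) looks identical.
 */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }
    /* u.machine is "x86_64" on an amd64 box, "aarch64" on 64-bit ARM */
    printf("%s %s (%s)\n", u.sysname, u.release, u.machine);
    return 0;
}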
Even more so when you know that Linus worked at Transmeta on the Crusoe, a CPU emulating other ISAs. So he should know that a specific ISA has no real importance; it might offer some conveniences for some operations, but the fundamental basis of any CPU is the same: processing and producing a valid output.
Kochise,
That simply does not follow. He would also know, from working on the Crusoe, that emulation is costly in more ways than just instruction cycles. And he already addressed the ISA issue by discussing the actual development workflow – most dev stuff happens on the side, on the dev’s local machine. You can’t ignore that initial advantage that Intel has. ISAs do not start on an equal footing when it comes to local development machines.
But since personal/local development machines are not x86/x64-centric anymore, who cares? It’s Javascript, Python, Java, OpenCL, whatever, with C/C++ slowly going downhill (except for hardware-specific tasks, like kernel development), so it shouldn’t be a home/work issue. This was probably the case in the 90s or 00s, but things have changed since then.
Kochise,
But they are. If you crushed down all the local development machines of the world into architecture lumps, the centre of mass of all local development machines would literally be inside the biggest x86 lump.
Yeah, because the Javascript VM is written in Javascript, Python interpreter is written in Python, and the JVM is written in JVM bytecode…
Or, since we are talking about servers, backend development. It doesn’t even matter if they use Node.JS for backend development, really, since the same thing still applies – if your Node.JS backend crashes for some reason, you would still want to debug in the same environment, and it’s much easier to do so if the production machine is running on hardware very similar to the local dev machine.
kwan_e,
The point Kochise was making doesn’t depend on languages being written in themselves, but on the code being portable such that the underlying architecture becomes abstracted and therefore irrelevant. For example, javascript is javascript regardless of the underlying CPU it’s running on, by design. I’d say the main things missing are assembly language portability and binary portability, but the former is rarely (if ever) used in “cloud” or client/server development, and the latter is hardly an obstacle when local binaries are installed from repos or are one “make” away for developers.
For example I’ve coded ffmpeg (libav) applications that process video streams in real time on x86 and recompiled them on ARM with no effort at all. Even on a new server running a completely different architecture I was able to continue development without even noticing the switchover.
Even running a full graphical linux desktop on ARM is indistinguishable from x86 (ignoring possible performance differences). I’m not endorsing these products, but if you haven’t tried it before, I recommend you buy a cheap ARM SBC like a raspberry or banana pi that supports a full desktop to witness just how similar the software experience is to x86!
https://www.youtube.com/watch?v=2rWsTpDYMwg
I honestly think it’s fair to say the majority of users on linux hosting wouldn’t notice a single difference caused by switching to an ARM architecture.
As an example, let’s take a PHP website. If the PHP code is bug free, it should be bug free on both architectures. If it is buggy, it should be buggy on both architectures. Why would CPU architecture make a difference in debugging PHP?
Now maybe we can focus on a scenario where PHP behaves differently between the two computers, which is a fair point. However, in practice there’s not usually any guarantee that hosting providers are running the exact same binaries as you anyway, even assuming that you share the same architecture (like AMD64). It might not be ideal for you, but I’d even say it’s quite common for developers at home to be running a different distro and/or different versions of software or different compiler flags compared to the hosting provider.
It’s not that your point is invalid, but just take note that binary differences are often present anyway.
Alfman,
Yes, and I think we’re all getting away from the point that Linus was actually making – he’s not talking about what should be. He’s talking about what is, and what was, that led to x86 being dominant. Whether or not it matters any more does not change the fact that it did matter.
“By design” is neither here nor there, because what happens when things don’t work by design is precisely Linus’ argument. Sure, your lowest-common-denominator programmer would just throw their hands up at some unexplainable bug, but programmers who want to understand the root of the problem will find themselves needing to rule out architectural changes for the “unexplainable” bugs they do encounter.
Experience with Java shows that cross platform is a bit of an illusion when things break, and they do break, regardless of how “by design” they’re implemented.
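(A concrete, if low-level, flavor of that class of bug: under GCC’s defaults, plain char is signed on x86 but unsigned in the ARM ABI, so identical source with identical flags behaves differently on the two architectures:)

/* char_sign.c – plain 'char' is signed on x86 but unsigned in the ARM ABI
 * (GCC defaults), so identical source behaves differently per architecture.
 */
#include <stdio.h>

int main(void)
{
    char c;  /* BUG: should be int, so it can represent EOF (-1) */

    /* on x86 this exits at end of input (and, worse, on any 0xFF data
     * byte); on ARM, c can never compare equal to -1, so it never exits */
    while ((c = getchar()) != EOF)
        putchar(c);
    return 0;
}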
But performance differences do matter, especially for development in interpreted languages like Javascript. No one likes developing on slow machines, and the unavailability of truly performant ARM desktop/laptop systems means the barrier to adoption makes x86 the option that requires no thinking about.
Because if your website is running on ARM, but your local development machine is x86, how are you going to diagnose the problem quickly (and that’s important these days)? That’s the point you keep missing. Linus’ argument is that development starts on a developer’s local machine, which is most likely x86. If using x86 on the server means a much quicker turnaround between encountering a bug on the server, reproducing it locally, fixing it locally, testing the fix locally, and then deploying the fix, most people are going to take the option with the quicker turnaround time. Because business needs outweigh every other concern when it comes to a non-functioning website.
Yes, and adding architectural differences between the server and the development machine to the debugging experience is going to slow turnaround down even more. Think about it. Even in a homogeneous environment, a developer has to test combinations of the differences to figure out what went wrong. Now add in the difference between architectures. How much of a blowout of combinations will a developer have to even contemplate, if not test? I’m pretty sure it’s a factorial blowout.
So unless ARM makes any headway into providing machines as easy to get and as reliably generic as x86, they’re simply not going to make a dent in the monopoly.
He’s wrong about “cloud” being the reason we don’t have ARM. My company is in the hosting business, and year after year we continue to provision x86 servers, not because x86 specifically plays such a crucial role in the infrastructure, but because the availability of ARM servers is so limited while commodity x86 servers are cheap and plentiful. Linus is wrong in claiming we need x86 because of the ISA. It has more to do with price & availability than sheer demand for x86 specifically.
Of course performance matters. I mentioned it because many of the SBC ARM computers, including the banana pi I brought up, are thermally throttled, which puts them at a disadvantage compared to normal desktops – keep it in mind.
As an aside: I do think intel is king when it comes to absolute sequential speed. For hosting, sometimes having more slower cores can win out over fewer faster ones. There are pros and cons for each depending on the workload. It may be a debate worth having at some point
I’m perplexed why you’d ask this question… whether it’s PHP, python, ruby on rails, sql, etc, I’d literally use the exact same tools to debug & diagnose web pages on ARM servers that I use on x86 servers. The process is no different at all.
For low level development, that’s fair enough. But he specifically cited “cloud” services, which is almost entirely high level development where CPU architecture is irrelevant. The truth is the vast majority of high level developers (like php, go, javascript, even bash, sql, etc) don’t care one iota about the CPU’s ISA. I don’t see why this should be controversial at all.
Low level development, sure. Not for a high level developer though.
Accessibility of ARM hardware aside, linux ARM distros are already every bit as useful as their x86 counterparts, which is why I suggested you try it yourself. You’ll probably have many of the same gripes I have (regarding proprietary kernels, poor cooling, and what not), but I’m not exaggerating one bit when I say you’ll feel right at home on an ARM computer. Aside from binaries getting compiled to a different architecture, everything is practically the same even for developers. Seriously, go buy one today and try it for yourself, it will wipe your skepticism away.
Alfman,
But that’s not his argument at all. You keep neglecting the main point of his argument.
He’s not saying anything about need. He specifically addressed how most development happens. On local dev machines that people already have. ARM has to be that local dev machine if it wants to get ahead. That is Linus’ argument.
We’re not debating the merits of the architectures, which is what you seem to want to do but has nothing to do with Linus’ argument. It doesn’t matter if they would perform better if they weren’t thermally throttled. At the end of the day, they ARE thermally throttled, giving a substandard experience compared to what most people are used to. And because of that, ARM will have a much harder time breaking that cycle of fast local development with a direct migration to the server side.
Again, missing the point. If you can, from the outset, rule out any bugs due to architectural differences, wouldn’t you? If you have a mixed architectural development environment, you will always have the question of “what if the architectural differences are causing this bug” hanging over every bug. That barrier is simply not acceptable to most businesses.
Yes for high level. How can a high level developer be sure that the bug they encountered was not due to the architectural differences?
I’ve worked on commercial tools that aimed to provide diagnostics for Java services on mainframes. Java is a cross-platform high-level language. And yet, companies STILL want to be able to develop and debug entirely on the mainframe, precisely to RULE OUT the architecture playing any part in a bug.
If high level languages were so great, why is there a market for such mainframe tools? Why do you think these customers aren’t satisfied with developing and debugging their mainframe Java services on their local x86 machines?
I don’t know why you keep wanting to turn this into a battle of architectures.
Linus’ argument is about pragmatic development cycles and whether that can reasonably happen in mixed architectural development environments.
If both the dev and production environments were ARM, yes, it would be great. But they aren’t, and that is the argument you keep not addressing.
kwan_e,
Look, I feel this is redundant, but I’m just going to quote him directly while I say he’s wrong for all the reasons I already mentioned.
He’s not necessarily wrong about the financials. x86 servers can be more profitable for manufacturers, and it’s certainly true that displacing intel head-on in a server market it dominates goes against the grain. The economies of scale are absolutely in its favor. But HIS argument, that ARM servers are inadequate for the cloud needs of home/work users, has very little merit. The truth is most users don’t even use the same operating system between their desktop and servers anyway. You can disagree with me if you want, but don’t say that my rebuttal isn’t directed at what Linus himself is saying; that’s just not true.
He doesn’t work in the web industry, so he may not realize how little the ISA matters for web developers. Web developers just don’t deal with such low-level things that having a specific ISA would be critical.
I’m saying architecture doesn’t matter to the hordes of web developers, be it ARM or x86. It’s just totally off the radar. They don’t care if they run x86 at home and ARM in the data center. I say this both in theory and from practical experience. If you insist on disagreeing with me, then find us a specific instance where typical high-level web programming works properly on x86 but not ARM using the exact same software & configuration built for both architectures (i.e. the exact same & stable versions of apache, php, nginx, etc). Then we can debate those specifics. Otherwise, I feel you are arguing about something hypothetical that rarely happens in practice.
I’ll take the argument to increasing extremes just to make a point about things being “good enough”:
Users should not run AMD in the data center if they’re running Intels at home. Users shouldn’t have motherboards with different features at home and on the server, because that introduces more uncertainty – how do we know it won’t be responsible for a bug? We shouldn’t use TCP offloading, RAID, or virtualization on our servers if that same configuration wasn’t used at the office. Even though these new components are designed to work the same at a high level, we must prohibit them because there’s always a tiny chance that alternate server hardware will cause bugs.
Well, certifying that the entire stack works as a whole can matter sometimes, but these cases are the exception rather than the norm. The rest of us typically don’t care that the hardware used at the server is different from home/office. Millions of websites use shared hosting and VPSes every day, and the odds of them running different hardware are near 100%. None of this matters to everyday web developers, because the service is hosted on normal distros that fulfill our high-level needs. The hardware is below our radar; so long as it runs well and supports our databases and PHP scripts, that’s all that matters.
Granted x86->ARM is a bigger change under the hood, but in the end, insofar as web developers are concerned, so long as it runs well and supports our databases and PHP scripts that’s all that matters.
Again, I’d like you to find an instance where a high-level web developer was affected by bugs caused by having the same distro & software build versions running on ARM instead of x86 (such that the bug is in fact caused by the ARM architecture).
Alfman,
I already gave you an example, but you seem to ignore it, so here it is again:
Java on IBM mainframes. Not ARM, but still a completely different ISA, completely incompatible with x86.
This isn’t a hypothetical, but an actual situation with customers really wanting to hand over money for it.
So I ask you again: why would customers be willing to pay millions of dollars for Java dev tools on the mainframe when, according to you, they should be happy with free, non-million-dollar dev tools on x86, if Java magically makes bugs due to architectural differences go away?
kwan_e,
Linus was not talking about interchanging mainframes with other servers; he was talking about interchanging x86 servers with ARM servers. This makes a difference because mainframes are completely different beasts from other servers. x86 and ARM, on the other hand, can run the same distros and software. Even if it were true that mainframe software compatibility is a problem, it wouldn’t imply that x86 & ARM linux distros & software have to be incompatible too. So for this reason I’m going to insist you stick to the x86 & ARM servers that Linus is referring to. That’s specifically where he claims the issues are, and it’s what I object to in his argument.
So can you find a specific instance where using the same distro & software between x86 and ARM has caused a compatibility problem for high level development?
If you can give enough specific evidence to prove it is problematic for the hosting industry as Linus claims, then I’m willing to change my opinion. But on the other hand, if we cannot find specific examples where running on ARM produces buggy results for high-level development, then I think it’s completely fair to conclude that the risks of this hypothetical problem are being blown out of proportion. That’s all there is to say about it, really.
Alfman,
I really don’t know what to say. You seem really adamant about missing the point. Here are choice quotes from his argument you seem intent on completely missing:
I mean, how many freaking times can the man explicitly point out that it’s the “cross” part?! How many times can I point out that his point was about cross-platform development?! How more explicitly can he say “develop and deploy on the same platform”?
None of those quotes apply only to ARM. None of those quotes apply to “interchanging x86 with ARM” servers.
I brought up mainframes, because Linus’ issue is with CROSS DEVELOPMENT. His point would still stand if we’re talking about ARM on the local side, and mainframes on the server side. Or whatever bloody heterogenous combination you can think of.
It’s not about x86. It’s not about ARM. It’s not about Y. It’s not about Z.
I see no point in continuing this discussion because you are INTENT on disregarding CROSS DEVELOPMENT. I quoted the very many times he brought it up. It’s not about ARM. It’s about the CROSS aspect. CROSS CROSS CROSS.
@kwan_e: from the quotes
Cross-development is pointless and stupid when the alternative is to just develop and deploy on the same platform.
But whenever the target is powerful enough to support native development, there’s a huge pressure to do it that way, because the cross-development model is so relatively painful.
“Same platform”? Really? Do you have a mainframe at home to be sure those are really equivalent, like Alfman mentioned? Is cross development really THAT painful? What is he talking about? He coded Linux on a 386, he deployed it on a 386, not on an Itanium.
Talking about higher-level languages is another beast, but if Sun/Oracle cannot ensure that Java behaves the same regardless of the underlying architecture, that’s not the programmer’s fault. That Python 2 doesn’t behave like Python 3 is another issue.
Linus is just wrong on this issue as an explanation of why x86 succeeded over the alternatives. x86 at home rose because Atari and Amiga stalled for games in the 90s; x86 won in the servers because Itanium was a mess, Alpha disappeared, Sparc and PowerPC were not powerful enough, and when AMD reached the GHz with the Athlon, the competition was won by x86 purely on performance grounds.
Now servers are looking for a more energy-efficient alternative, and ARM still has a pretty good niche here. Not all servers run at full speed 24/7; lower specs could be enough. If performance is requested, stronger-performing CPUs are of course required, like x86 or Power.
But again, this doesn’t require having such a machine at home, because those beasts are coded in higher-level languages (halide, simit, opencl, …) and not directly in C/C++, because even OpenMP is a mess to get parallel processing right.
So HE missed the point: x86 won not because you wanted to develop at home what you could use at work; it’s just a coincidence that workers and gamers found in the x86 a larger ecosystem than in any other architecture. Ever found a Sparc computer in your local shop? Ever found a PowerPC computer in your local shop? Why did Apple switch to Intel? Because of performance and lower price (and the possibility of using AMD as a second source), etc.
Don’t focus on the “server” thing, that’s a fallacy.
Kochise,
I’ve worked on mainframes professionally, you idiot. I did cross-platform development between x86 and mainframes in Java environments. And yes, cross-platform development even using Java was quite the nightmare.
Do you know what Linux is? It’s a kernel. It runs on x86 and ARM (and, yes, even mainframes). He would know about cross-platform development because he has to merge everything from ARM developers. Are you seriously such an idiot that you didn’t know that? Do you really think Linus, being the main kernel maintainer, doesn’t know anything about developing on ARM? SERIOUSLY?!
He’s NOT talking about why x86 rose at home. READ.
No, you and Alfman seem to have real trouble reading what he was actually saying.
@kwan_e: don’t call me an idiot, because I’ve read his prose pretty thoroughly. My point is precisely that citing this “develop at home” issue as the reason “x86 won for servers” is stupid. And I think you’re the one who is obviously too much of a fanboy and cannot cope with Linus being wrong.
People working on mainframes and servers do it at work, with the machines put at their disposal AT WORK. Why would they bring work home and ensure their server is compatible with their home computer? I think it’s the other way around: people would PERHAPS get a compatible machine at home.
And to be fair, I don’t think there are enough coders working at home to have the leverage to force servers to be x86 just because they have an x86 computer at home. I read, though, that in 80s Germany people bought Atari ST computers because they had the same machine at work (or the other way around, I don’t remember exactly).
Anyway, cross development is not that difficult (provided you have the right IDE); it’s more Java and its false promises that give you grief about this issue. You can code in emulators, if it really matters. Servers use x86 because home x86 improved due to gaming demand.
Otherwise we’d have Alpha, Sparc or Mips servers.
kwan_e,
How about this: “I don’t know what to say. I haven’t found a single example of ARM breaking high level code that works on x86, so instead of conceding that it’s not such a big deal, I’m going to make up a narrative about how it’s not about ARM despite the fact that Linus literally made it about ARM. It’s about the CROSS aspect. CROSS CROSS CROSS.”
Seriously, your not being able to come up with even a single specific case where high-level websites are broken by running on ARM rather than x86 only further supports my position that the ISA is irrelevant for web developers. As long as they’re running the same software & database, the CPU architecture doesn’t change a thing. The real questions are how much it costs and how well it performs.
Unless you can provide specific evidence showing that I’m wrong, this seems as good a place as any to end the discussion. Till next time, old friend
@post by kwan_e 2019-02-26 1:59 am
That is a non-argument; better devs work on them in C/C++ than the average JS, Python, or Java dev…
@post by Alfman 2019-02-26 11:09 am
Hm, it seems you and Linus are kinda in agreement, at least from the blurb quoted… he’s saying that it was the price _and_ availability of x86 machines, both dev and server; essentially what you said.
@post by kwan_e 2019-02-27 11:41 pm
Was it actually justified by some previous issues?
zima,
I don’t disagree with him over financial aspects. Even more than price, the lack of availability of commodity ARM servers is a major obstacle even for those of us who want them. He was just wrong that having x86 on the server is important for typical cloud work. Those who try it find that for high level development, ARM just works the same as x86.
https://medium.com/@jonmasters_84473/amazon-aws-graviton-processor-in-newly-launched-amazon-aws-a1-instances-4e2414e27cb6
Companies like amazon had to manufacture their own ARM servers, but most hosting providers cannot afford to do that. I hope that in a few more years we’ll see more ARM servers on the market, but it’s possible they might never have the economies of scale to be competitive with intel in the data center. Such is the way it goes.
@post by Alfman 2019-02-28 10:49 am
PS. BTW, this ending reminded me of what the bad (really bad) character from the film Dark Tower, which I saw in the cinema some time ago, said when he stops a bullet shot at him by the (good) gunslinger.
Bill Shooter of Bul,
But they have to access that cloud somehow. And a lot of shops provide either a desktop or a laptop. Linus’ point is that a lot of development happens ad-hoc on those personal machines, and by the time a side project becomes well-developed enough, it’s easier for the devs if they have an x86 environment to upload to.
I’d be surprised if any dev shop could run without providing their devs some form of local hardware, even just to get onto the cloud. And once the dev has that local hardware, most devs would rather develop there than on the cloud.
Kochise,
Yeah, his argument needs work. After all, linux enjoys a majority in the server space despite the fact that most of us use windows desktops at home, and that’s a far bigger difference than x86 versus ARM servers.
To be fair, his argument could make sense coming from a windows-centric world view, where lots of windows software really is tied to the x86 architecture and switching architectures can be a non-starter. But it seems unnatural for Linus to be thinking in windows-centric terms as the author of linux. I’ll chalk it up to Linus probably not having had enough sleep when he wrote that, haha. I know I’ve been there