It’s been many years since Intel Itanium processors made for a convincing story, and they have faced a slow demise over the past decade. The last of the Itanium 9700 “Kittson” processors shipped in 2021, and just two years later the Linux kernel is already looking at possibly removing its IA-64 support over having no maintainers or apparent users.
I have a morbid curiosity when it comes to Itanium, and I’ve been on the lookout for an Itanium workstation for two decades now. This is the first time where one of these “Linux to deprecate some old unused architecture” posts might actually affect me at some point, and I’m outraged. Outraged, I tell you!
Back when Itanium was new, SGI tried to convince its customers that IRIX on MIPS was a dead end and that Linux on Itanium was the future. SGI loaned my lab a few HP-built, SGI-labeled Itanium workstations for testing purposes. At least for our bioinformatics use cases, they were the first truly boring higher-end workstations I encountered. The OS options were SUSE SLES and Red Hat Enterprise Linux, both with SGI management extensions for performance and support monitoring. (It was pretty neat to see Performance Co-Pilot running on Linux. There was a time when SGI was a true leader in providing tools to manage large, complex computing environments.) SGI also provided a nice range of pre-built open-source scientific codes and clustering frameworks.
Unfortunately for SGI and Itanium, most of the codes we needed to run were efficiently parallelized, so there was no real advantage for us to switch from large MIPS systems to Itanium systems, even if the latter could address larger memory spaces. This was compounded as the price-to-performance ratio of Linux clusters became more compelling. On the workstation side, my team had mastered IRIX and loved working with it. We would have paid a premium to maintain another generation of IRIX workstations, and we did, buying a bunch of Fuels and Tezros, even though we knew they were the end of the road.
The bottom line of our evaluation was that Itanium was the most expensive way to put an otherwise standard Linux workstation on someone’s desk. Maybe it would have been fun to try HP-UX on Itanium, but we didn’t.
SGI was completely mismanaged by that time. I did an internship there in college, and I remember it was understood they were pretty much SOL. Which sucked, because when I was a kid it seemed like one of the coolest tech companies in the world.
MIPS was a dead end, as they didn’t have the money to keep pushing development. Same story with IRIX: they didn’t want to rewrite the whole stack to run on Itanium. And ATI/NVIDIA were already kicking their butt with insane sub-12-month ASIC development cycles that SGI could never match.
So they decided to sink their cash into Cray and into developing x86 NT machines that were not PC-compatible. It’s like they went out of their way to go out of business in the most idiotic ways.
Indeed, by the time x86 clusters were performant enough, there were very few codes that required huge distributed single memory spaces, since message passing had taken over by then. And the same story played out on the visualization side of their business. Their last “custom” graphics boards were literally off-the-shelf ATI ASICs repackaged into their own custom form factor, sold at 20x the price of the similar PC board for literally no improvement in performance.
SGI was completely mismanaged by that time, I agree, but I don’t blame them for thinking Itanium would overtake everything else. Imagine hearing about a 64-bit CPU that’s going to replace IA32 and end up in every PC (with all the economies of scale that implies), and with the backing of Intel’s fabs too.
In other words, SGI wasn’t afraid of Itanium per se, they were afraid of the idea of 64-bit CPUs going into commodity PCs. And you know what, history proved them right. Whatever IA64 failed to do, x86-64 did. Companies that continued down the path of iterating on their own architectures like Sun (SPARC) and Apple aka AIM (PowerPC) ended up wasting a lot of money before eventually porting their respective OSes to x86-64 anyway.
SGI’s mistake was that they failed to understand that their customers didn’t buy MIPS anymore (x86 had caught up or was about to), they bought IRIX. IRIX deserved a port to x86-64, but never got one.
Funnily enough, Apple have now abandoned x86-64 again and moved to ARM.
SGI spun MIPS out, and for a few years MIPS was trying to compete with ARM in the embedded space. There was a MIPS port of Android among other things. I think MIPS missed a big opportunity in the move to 64bit, as while ARM64 was a new architecture which took a few years to get all the toolchain pieces in place and debugged, MIPS64 has been around since the early 90s and OS/toolchain support was already well established and mature.
Itanium made a lot of sense in the time frame when SGI adopted it.
SGI customers did not buy “IRIX”; they bought, for the most part, turnkey systems.
SGI’s sales offices usually partnered with the final “packager” of the solution; at the end of the day the customers were buying a seat to run a specific application or applications. Whether that seat ran IRIX or something else was not the main deciding factor.
During the 80s and early 90s, SGI was winning lots of bids and could charge an arm and a leg, because there were not that many competing systems that could do the same in terms of graphics plus high-performance CPUs (and SMP). So they built their business model around the expectation of ridiculous margins.
The thing was, that model was severely disrupted when NT came into play. And SGI simply did not understand the new model, as their answer was to produce a custom x86 NT machine with very little value added compared to much, much cheaper standard NT PCs. Which were winning the bids.
Also, by the late 90s the turnaround for those machines was two years tops, so a high degree of reliability wasn’t a main concern anymore. Lots of smaller system integrators started to eat SGI’s old desktop business by putting in bids using cheaper PCs running the same application for a fraction of the overall cost. It was a no-brainer, since by then the P6 had caught up with MIPS’s high end in terms of performance.
They also misread where the high-performance market was shifting, so they invested heavily in single-image systems when most codes were migrating to message-passing clusters.
And just like Sun and HP, SGI also completely missed the shift in the data center, where reliability was achieved through system-level redundancy: use seas of cheap boxes that replicate functionality all over, and just swap nodes when they go down. The old super-reliable single system, where you could hot-swap internal components, was no longer attractive since it cost much, much more.
Porting Irix to x86 would not have made any difference, since most of its desktop applications had been ported to NT.
Some backstory:
The whole reason anyone cared about Itanium was because the 4GB limit of x86-32 CPUs was visible on the horizon, and Itanium could run x86-32 binaries somewhat better than the other 64-bit RISC CPUs. Of course, going down the Itanium path meant locking yourself into Intel, since Itanium was protected by a wall of patents to which nobody else had a license, and the chips weren’t that great either. The moment x86-64 (AMD64) appeared, everyone lost interest in Itanium, including Intel. Intel kept making new Itanium chips as long as HP was willing to pay for them, but that was it.
If you run Itanium today, it’s probably because you are locked into HP-UX, not because you are running Linux. So I agree, it makes no sense to maintain Linux on Itanium.
And some bit of trivia:
The lasting heritage of Itanium was that SGI and DEC stopped their investments in RISC chips, afraid of the possibility of 64-bit Itanium CPUs in PC destroying them with their economies of scale (as Pentium II and III had already done for low-end RISC workstations). But that would have happened anyway due to x86-64, as Sun’s SPARC inability to compete showed, so nothing was really lost.
@kurkosdr – Agreed on all points.
Itanium was actually pretty poor at running x86 binaries. Alpha was much better at this with FX!32 for NT.
Sure, but an Alpha was so fast and cost so much money that this was kind of expected. There is a reason Windows 2000 and Windows XP had an Itanium version but no Alpha version (the last Windows version to support the Alpha was Windows NT 4.0, with a Windows 2000 port reaching Release Candidate status but never released). Of course, x86-64 happened shortly afterwards, which is why there was no Itanium port of Windows Vista and beyond.
IA64 was significantly more expensive than Alpha and never really came down in price. Alpha was more widely used, and older models were available affordably on the used market. There were lower end Alpha systems available, including bare boards/cpus in standard form factors.
Windows ending support for Alpha was because Compaq pulled the plug on it to move to IA64 more than anything else. It outlived all the other non-x86 ports of NT and actually had a user base which although small was larger than the user base for IA64 ever was.
Windows 2000 for IA64 saw extremely limited release and internal use at MS, there was also a final win2k for alpha and a 64bit win2k for alpha (the released rc3/nt4 versions did not support full 64bit operation) but they were only used internally at MS.
The IA64 version survived until Windows Server 2008 R2. The very tiny niche of Windows users on IA64 were mostly running MSSQL databases, so the client version was dropped after XP.
Early Itaniums had their own HW for x86 emulation, basically an in order x86 pipeline.
FX!32 was not as performant as native hardware, which is why it never really got much traction. Although it was a very interesting emulation technique, especially since a lot of its ideas/techniques made it into the original Rosetta (which was originally targeted at translating MIPS to IA64, I believe, although my memory is very foggy).
Also, Alpha was pretty much dead by the time Compaq bought DEC. One of the reasons why DEC was going under was because the AXP ended up being such a money drain.
Sun was also pretty much killed by SPARC. And somewhat the same goes for SGI and MIPS, although their mismanagement was more generalized.
IBM was one of the few organizations that managed to keep their proprietary architectures and survive.
The hardware x86 support in the first gen itaniums was extremely poor, performance around the level of a p90. The software emulation was actually found to be faster, which is why later IA64 chips dropped the hardware level support.
FX!32 was at one point faster than any real x86 hardware, although obviously it wouldn’t be as fast as native Alpha code. It was pointless to pay the premium for Alpha if you were just going to run x86 code. Many people were also not aware that Alpha existed as a high-end option.
Pretty sure it cost Intel way more than they were paid, but Intel was contractually obligated to keep developing Itanium despite it being a total failure.
To be fair, since the P6 and PAE… x86-32 could technically address 64GB.
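For anyone curious where that 64GB figure comes from: PAE widened x86-32 physical addresses from 32 bits to 36 bits, while each process still saw only a 32-bit virtual address space. A quick sketch of the arithmetic:

```python
# PAE (Physical Address Extension) arithmetic, as mentioned above.
# Classic x86-32 paging uses 32-bit physical addresses; PAE
# (introduced with the P6/Pentium Pro) widens them to 36 bits.
GIB = 2 ** 30

legacy_physical = 2 ** 32   # without PAE
pae_physical = 2 ** 36      # with PAE

print(legacy_physical // GIB)   # 4  (GiB of physical RAM addressable)
print(pae_physical // GIB)      # 64 (GiB of physical RAM addressable)

# The catch: each process still gets only a 32-bit *virtual* address
# space, so no single program can map more than 4 GiB at once.
virtual_limit = 2 ** 32
assert virtual_limit < pae_physical
```

So the OS could use up to 64GB of RAM, but any one application was still stuck at 4GB, which is part of why a true 64-bit architecture remained attractive.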
Also DEC stopped investing in their RISC efforts when they were bought by Compaq. IA64 at least provided an industry standard for 64bit computing in the mid 90s, which is why so many RISC vendors flocked to it. I don’t think people understand just for how long Itanium was in development.
It’s fascinating how Intel just has never been able to kill off x86 though. 3 times they tried and sunk billions that went nowhere. LOL.
Whilst I agree that basically no one today is running Linux on Itanium, there were still many more options than just HP-UX for it. Itanium, until very recently, was the most modern architecture supported by OpenVMS, and I’m sure in some niche somewhere, someone is running XP 64-bit on an Itanium server, because that’s what the proprietary software/hardware package supported.
It is quite surprising that Linux wants to pull the plug already. I know “already” is a long time in this case, but Linux also sells itself as the most democratised OS, and to me that should mean long, long, long-term support, basically until us oldies shuffle off, and even then it should be a museum curiosity maintained by volunteers!
Years ago I was working in the media industry, at what is now called News Ltd. We were on Itanium overload back then; the machines were used as workstations for everything from TV ad production to newspaper typesetting. The Itanium fetish is not surprising, given the earlier addiction was all about SGI. Even the loading dock had an SGI or Itanium workstation, and pretty much all it was used for was keypunching handwritten delivery dockets! I remember the storeman getting a sparkling new SGI workstation and not turning it on until the last week of the month, when they entered all the audit ledger. Corporate tech gone mad!
SGI persisted a lot longer than Itanium. Some of the SGI workstations were almost treated like places of worship: guarded by zealots, sparingly allocated, and never touched without permission!
Well, democratised implies there are people to vote… IA64 was never affordable, and never widely used enough to flood the used market with cheap retired hardware. The machines themselves were also pretty generic; they didn’t come in cool cases like SGI machines. They don’t really run any unique OS: VMS you can more easily/cheaply run on an Alpha, HP-UX can run on HPPA, and Windows/Linux can run on anything. What you’re getting with IA64 is a generic beige box with a high price tag and equally high power consumption.
SGI machines looked cool, ran IRIX which wouldn’t run on any other hardware and were available cheaply on the used market at one point.
There is virtually no community around IA64 and never really was, as mentioned in the article – only one remaining known user. Initial development was sponsored by Intel/HP who have since lost interest.
Contrast that with something like the Amiga or Atari which are much older. Linux not only still supports these platforms, but they have also seen active development quite recently.
Also IA64 has a lot of architectural “oddities” that makes it harder to find maintainers.
Very few people are that proficient in the VLIW programming model, especially for system-level software. So I am not surprised.
Although I would expect some of the BSDs that specialize in old archs to support it for a while. Support was at least there for NetBSD 10, I think.
Interestingly, Linux support for older hardware actually seems to be getting worse over time. Basically if your hardware isn’t x86 Intel only at this point, you’re probably going to have a bad time. The main problem being that the drivers being open source doesn’t necessarily mean there’s someone out there with the will, skill, and hardware on hand to update and test those drivers. Definitely not a lot of people with Itanium workstations around, so you’d need a large corporate entity that can bankroll updates for them.
Keep in mind, that this hardware will still be supported in the previous generation LTS kernels that will be supported for a long time by the community.
Well, I’m a user, so that’s unfortunate lol.
I have an RX8640 that I scored on eBay, and it runs Gentoo. Can’t afford to run it much these days due to the crazy power prices, but I’d hate to see IA64 support disappear.
I remember that there was a discussion about who is going to maintain IA64 support for GCC. It was very difficult to find someone.
I hope they don’t pull the plug.
I am also outraged! I am still playing with floppies on my old retro computers (they want to abandon floppies).
Actually I am using ELKS (https://github.com/jbruchon/elks), an old version of Linux that took its own path and supports old 16-bit machines quite well. So this is one solution: fork from a specific version of Linux, but then you still need someone to develop for it.
Sad in a way, but also understandable.
Let’s blame Oracle (there’s a lot of truth to that btw, but regardless, it’s always good to blame Oracle).