This whitepaper details the architectural enhancements and modifications that Intel is currently investigating for a 64-bit mode-only architecture referred to as x86S (for simplification). Intel is publishing this paper to solicit feedback from the ecosystem while exploring the benefits of extending the ISA transition to a 64-bit mode-only solution.
This seems like a very good idea – and it does seem like the time is ripe to remove some of the unused cruft from x86. Intel is proposing removing the 16-bit and 32-bit modes and starting in 64-bit mode right away. The company’s proposal does retain the ability to run 32-bit code on a 64-bit operating system, though.
As a sidenote, the introduction to this proposal is hilarious:
Since its introduction over 20 years ago, the Intel(R) 64 architecture became the dominant operating mode. As an example of this evolution, Microsoft stopped shipping the 32-bit version of their Windows 11 operating system. Intel firmware no longer supports non UEFI64 operating systems natively. 64-bit operating systems are the de facto standard today. They retain the ability to run 32-bit applications but have stopped supporting 16-bit applications natively.
It’s 2023, and Intel is still not, in any way, capable of acknowledging AMD for coming up with AMD64. Sad.
“Since its introduction over 20 years ago, the Intel(R) 64 architecture became the dominant operating mode”
Wait a second!
IA64 is Itanium, not AMD64.
Do you really think they care about accuracy? It’s Intel we’re talking about… https://en.wikipedia.org/wiki/Pentium_FDIV_bug https://www.securityweek.com/aepic-leak-architectural-bug-intel-cpus-exposes-protected-data/ https://www.tomshardware.com/news/intel-2021-security-report-amd-vulnerabilities https://www.zdnet.com/article/intel-fixed-236-bugs-in-2019-and-only-5-11-bugs-were-cpu-vulnerabilities/
“IA64” and “Intel 64” are not the same
IA-64 is Itanium, whereas Intel 64 is what is referred to as x86-64. Which is confusing, since x86-32 is referred to as IA-32.
But I don’t blame Intel for trying to pretend that Itanium never happened.
FWIW, contrary to popular opinion, AMD64 and Intel 64 are not 100% the same. And there was lots of cross-licensing involved, which is why Intel doesn’t refer to their x86-64 as AMD64, and why AMD doesn’t mention Intel in their architecture namings either. Even though they used each other’s ISA extensions extensively (pun intended).
That being said, ever since they stopped using simple numbers for their architectures/processors in the early 90s and moved to meaningless words, Intel’s naming schemes have been a mess.
Itanium was developed under pressure from HP, as a way of consolidating its many homebrew and acquired architectures and associated OSes/software under one cohesive ISA. It was kinda like the AIM Alliance behind PPC, but with HP dictating the requirements and Intel engineering it.
Of course, it was a total failure and a colossal waste of money. Arguably, HP already had several ISAs that may have been better suited to focus on developing and improving, rather than starting from scratch. (*cough* Alpha *cough*)
The123king,
Yeah, Alpha really could have been a better choice in hindsight. DEC was known for technical excellence at the high end of the market, and they already had working 64-bit products. But instead Itanium was touted as the next-generation architecture, and it succeeded in siphoning away customers, money, and resources from other 64-bit architectures. Itanium ultimately failed, and it was all for naught.
@The123king
Nobody was pressuring anyone LOL.
Itanium, at the time it began its development in the early 90s, made some sense. The complexity of the out-of-order designs that were coming up was expected to explode, and it kind of did. HP also knew that they wouldn’t be able to finance their own architectures past a certain point in terms of design complexity/costs, about which they were also right, so they knew they had to partner with someone that would have the ability to produce CPUs in the long run (in terms of fabs and being able to ride commodity curves), and Intel was the logical choice. So they bet on software complexity instead of HW complexity, and on Intel being around; in that sense HP was correct.
However, they ended up having to introduce some HW complexity, like predication, which kind of negated some of the initial simplicity benefits in terms of area and power. Same thing happened to the original RISC architectures, which ended up negating a lot of their initial simplicity.
Itanium was never a bad performer, it just didn’t make sense once AMD figured out a reasonable 64-bit extension to x86.
Regarding Alpha: almost every modern x86 core has some Alpha DNA in it, BTW. A lot of the design team from DEC was absorbed by Intel (and AMD as well). And when I worked at Intel, our research group used Alpha arch simulators during our arch performance studies, so many Intel cores started life as an Alpha. Developing AXP further by itself made no sense, since the ISA had little mass penetration besides Tru64 and OpenVMS, which were very niche OSs to begin with.
BTW, some people keep confusing the ISA with the architecture itself, even though the two have been decoupled for decades. Keeping an ISA only makes sense if there is enough of a software catalog to be worth the bother. That is why x86 won ultimately, as the underlying uArchs kept changing.
“Itanium was never a bad performer, it just didn’t make sense once AMD figured out a reasonable 64-bit extension to x86.”
No, Itanium was bad at almost every task, thanks to its VLIW design.
Intel/HP bet on compilers and lost everything.
javiercero1,
That’s debatable. It’d be one thing if it paid off, but given what we now know in hindsight, it’s harder to make the case that they backed the right horse. Intel has historically struggled with future proofing.
You keep preaching this but everyone here already knows the difference.
The thing is, microarchitectures are not totally disconnected from the ISA. It’s not as simple as saying the benefits and cons of ISAs are irrelevant under microarchitectures. Code density, for one, is quite important. Ease of decoding will dictate the latency cost for uop cache misses, which can have a dramatic impact under some workloads. Also there are granularity issues; you can’t simply optimize the microarchitecture irrespective of the ISA. For example, the Itanium ISA’s register windows with hundreds of registers are quite unique and imply certain constraints on the microarchitecture that an alternate ISA wouldn’t necessarily be bound to.
@smashit
For most of its history, Itanium was a top performer in FP and enterprise workloads, which were its main use cases.
Itanium’s main issues were x86 becoming competitive in FP, and the datacenter in the mid 2000s moving towards reliability models based around mass redundancy of commodity boxes, rather than the traditional redundancy-within-a-single-system model for which Itanium (and its RISC competitors) had been designed. So it followed the same fate as the previous niche RISC designs and priced itself out of existence.
@Alfman
That modern processor design costs have followed an almost exponential increase, and that Intel was a dominant semiconductor organization during the lifetime of Itanium, are simple historical facts. You are free to debate whether water is wet all you want, I guess.
I am not “preaching” anything. That “ISA and uArch are decoupled” has been a rule in our field for ages. Like every rule, it has exceptions, IA64 being one of them, and now that Itanium has gone the way of the dodo, it kind of reinforces that simple point… which, as is traditional, you go out of your way to miss.
Cheers.
javiercero1,
While Itanium was sold as enterprise-class hardware, that has a lot more to do with server features like redundancy, reliability, and uptime. When it comes to application performance for tasks like email, file sharing, web hosting, domain servers, databases, Java, application servers, etc., it has a lot more to do with how well the architecture aligns with what the application does. Itanium was more optimized for VLIW. And while there’s nothing wrong with this, the architecture was not well suited for applications that were more sequential in nature (regardless of them being “enterprise workloads” or not).
You’re free to be a contrarian and argue that itanium was not a hugely expensive mistake. I’m perfectly comfortable with my view that itanium over promised and under delivered.
@Alfman
” Itanium was more optimized for VLIW” I mean, itanium was a VLIW architecture….
I have no clue what you mean by “sequential workloads.” Is that your way of saying “workloads with low parallelism”?
FWIW I literally said Itanium priced itself out of existence. Itanium being performant, and it being a commercial failure are not mutually exclusive arguments.
Feel free to continue your usual counterarguments to claims I never made, I guess.
@javiercero1
“For most of its history, Itanium was a top performer in FP and enterprise workloads, which were its main use cases.”
It was its sole use case.
And only IF you could achieve 4-way parallelism during compilation.
In reality it meant that 50-75% of your expensive Itanium box was wasting electricity on NOPs.
I worked for an HP-certified preferred sales partner (or whatever that certificate was called) during that time.
We sold Alphas to every large corporation and university in Austria.
If I remember correctly, we never sold a single Itanium.
javiercero1,
It’s related to programming. Consider an email server following sequential routing rules or a firewall following sequential filtering rules, executing Java software or web applications, and countless other enterprise applications where there are long chains of sequential instructions. These programs might contain instructions that could be executed in parallel on a superscalar CPU, but they don’t map naturally & efficiently to VLIW and aren’t particularly good candidates to take advantage of Itanium’s architecture unless you have developers re-engineer software for VLIW.
That’s the thing, it wasn’t particularly performant. When itanium was released three years after the Alpha 21364 from 1998, it was slower! By 2002, intel would release itaniums that could beat alpha scores, at least if you purchased the fastest itaniums.
However…
https://www.eweek.com/pc-hardware/intel-warns-of-itanium-2-bug/
Itanium customers following this advice would have to wait until the next generation itaniums to arrive in 2003.
https://www.spec.org/cpu2000/results/cpu2000.html
Intel is punching backwards at alpha by some years. It would have been nice to see what the 2001 Alpha 21464 and beyond could have brought to the table had the industry gone with them instead. Alas, we’ll never get to see it, but it would have beaten itanium on these benchmarks by an even larger margin.
Well, a lot of your comments are critical without explaining exactly what you disagree with, so perhaps I wrongly assume you are disagreeing when you are actually in agreement. If so, that’s great. It would help if you’d say it though. Maybe we’ll get there some day
@Alfman.
Itanium, especially with Itanium 2, did have internal dynamic parallelism with things like predication, bundle scheduling, SMT, etc. This is why I keep reiterating that “ISA and uarch have been decoupled for ages,” because you still don’t understand what that means…
BTW the 21364/EV7 didn’t hit the market until 2002
Again, feel free to continue that debate that only exists in your head about stuff I didn’t say…
@smashIt
I have no idea what the sales figures of a random office and your misunderstanding of what a NOP is have to do with itanium.
javiercero1,
This is exactly what I was talking about. Maybe it is the case that you actually agree with me that the Itanium wasn’t all that great compared to the other alternatives, but you don’t want to admit it. You keep relying on these ad hominem attacks that aren’t true and are meaninglessly generic. I’m sorry to say, but that’s not a professional or insightful way to approach the subject.
Anyway, not only does the empirical data back what I am saying, but many in the industry agree that the hype and promotion around itanium ended up killing architectures that had more merit.
https://www.pcmag.com/archive/how-the-itanium-killed-the-computer-industry-236394
This is what everyone in this thread has been saying except you. If you do agree with everyone including myself, then just say it.
javiercero1,
BTW I did look into this; the spec benchmark didn’t show the model and I had the wrong date. Thanks for the correction! That makes the processors closer date-wise, but it still wasn’t great that the Itanium underperformed it.
@ Alfman
Once again, you invariably end up conducting debates that only go on in your head.
I simply claimed Itanium was performant, not that it was the cpu with the highest performance. Really not a hard concept to grasp, alas…
Yes, the highest-clocked EV7 outperforms the lowest-clocked I2. And the highest-clocked I2 outperformed the contemporary 21364 significantly (1640 vs 2120 in SPECFP), for example. So empirically we can show that you know so little about this matter that you do not realize how little you know.
javiercero1,
I’m happy that you’re admitting this! So don’t you also agree it pours some cold water on the notion that Intel was the only company to have the chops to build 64-bit architectures and that Alpha engineers weren’t good enough for the job? Because that’s really what the essence of the whole thread boils down to. We feel that Alpha had merit, but ultimately failed because Intel had stronger partnerships.
I’m not sure where you’re getting those numbers from, do you mind providing links?
Unfortunately the newest Alpha CPUs slated for 2003/2004 got canceled, so from that point forward Intel would be competing against a dead horse.
I collect rare and exotic machines.
One of the machines i own is an HP RX8640 with four cell boards.
16 sockets populated with dual-core 1.66 GHz 9000-series Itanium 2 chips.
The machine was built in 2006.
32 cores, 64 threads, 128GB of RAM. A monster machine for 2006.
This should represent a good case for itanium performance, right?
The performance is godawful lol.
When you compare it to a well specced 2006 era x86 machine, the performance per core is just no contest.
Of course, as a whole, having 64 threads and lots of memory, it solidly outperforms any x86 offering of the time.
I’m not going to get into the politics, decisions, etc. that led to it being this way, I’m just an observer here. It’s just a bad deal. Always was.
I like the machine for what it is. An obscure oddity.
A very, very expensive obscure oddity.
There is still IA64 out in the wild these days. Did 9 months of consulting for a large automotive company that was still running critical services on hpux on ia64.
There is still VMS on ia64 out there as well.
IA64 sucked, at least performance wise.
If it didn’t it would have conquered the market, since intel was ready to ditch x86 for it, which gave it a very very unfair advantage over sparc, mips, etc.
Fact is that even with this unfair advantage, IA64 wasn’t able to make even a small dent in the server market.
The end.
@ Alfman
Who is this “we?” LOL. Again, you seem to be unaware that you’re conducting a debate that only exists in your head.
My numbers are from the spec database.
https://www.spec.org/cpu2000/results/cfp2000.html
Now kindly go away. LOL
javiercero1,
I searched and the numbers you provided don’t show up for alpha or itanium, only xeon. I don’t know if you made an error, but you need to be more specific.
Pretending that no one else is in the thread is silly, but whatever. Most, maybe even all of us believe itanium was a mistake. You can agree or not, I don’t care either way. If your arrogance is getting in the way of admitting that you agree with us, well that is a conundrum, I suppose. But that’s truly your problem and not mine.
I think you’ve got a serious superiority complex going on. It doesn’t seem to matter what I say on any topic, you always seem to feel threatened by everything I say and react aggressively.
https://www.verywellhealth.com/superiority-complex-7374764
And before you respond with another “I know you are but what am I”, let me remind you that I consider you my peer and I always have…
You would have me be silent, but honestly I really don’t feel that I should have to censor myself here on osnews just because you are too sensitive about the topics we cover. That punishes me and the whole point of being here on osnews is to discuss stuff we are all interested in.
@ Alfman
Well that’s “odd”; when I click the link I gave you, I see the official SPECFP2000 result list including about a dozen Itanium 2 results and just about every contemporary Alpha. Maybe your internet is “different.”
Cope harder LOL.
javiercero1.
We’re looking at the same list; however, I cannot find the specific scores you mentioned. “1640” has no results on that list and “2120” only comes up with Xeons, which is why I asked.
Fair enough, I’ll cope. haha.
Have a great day.
@ Alfman
Of course you wouldn’t be searching for CPU model! See the peak for the [email protected] and the base for the [email protected]. With the report, there is a 0.02% rounding error.
Just so we are clear, I don’t consider you my peer. You lack the education and professional experience in this specific field for me to do so.
javiercero1,
It feels like you’re putting me on a wild goose chase to find your source. I tried looking through all the benchmarks for those 1.3 GHz models and I still don’t see where your scores came from. Oh well, next time please link whatever benchmarks you are quoting so that the source is immediately clear and obvious.
And just so we are clear, our credentials mean diddly squat here on the internet. You can make up whatever you want, but it’s not verifiable, and ultimately we are just two random dudes having a chat on the internet. The only thing we have to build a reputation on is what we say and how well we present ourselves. Education, professionalism, and experience do carry through online, but it’s not enough to just say that you are an educated professional; you actually have to behave like one. If you’re always threatened by opposing views, demeaning others, and using ad hominem fallacies, well, those are antithetical to being professional. You can certainly earn a reputation that way, but it will be one of being a bully rather than a professional.
@ Alfman
You seem to think your poor capacity to process information is somehow my responsibility. I literally gave you a full list of SPECFP numbers so that you could indeed validate that my numbers are within the expected ranges.
As I said: you know so little about this matter, you don’t realize how little you know. Thus the lack of basic contextual education/information/experience to even begin to understand what is being discussed.
Which makes the whole “let’s pretend we’re in a peer review” act of yours that much more hilarious.
Indeed, we are just two random people on the internet. Which is why we are not peers.
I am sure you’re very good at whatever it is that you do, but it is clearly not CPU/VLSI design. If you were providing, as a courtesy, an insight about something you actually knew about academically/professionally, I would have no problem taking what you say at face value. Because unlike you, I understand perfectly well how professionalism works, especially professional courtesy. You would benefit greatly from working on your understanding of context, set, and setting; this is just a random website where people come to shoot the shit, which you somehow have confused with some kind of peer-reviewed journal.
All of this to say that, as usual, I could claim that water is wet and you would still take issue with it. Because somehow at some point I said something that caused such a narcissistic injury in you, which you clearly have never recovered from. I think it’s hilarious.
@ javiercero1:
“Because unlike you, I understand perfectly well how professionalism works, especially professional courtesy.”
Then why do you behave like a 15 year old with an inferiority complex?
OK. So Itanium is now so dead that Intel can re-purpose “Intel(R) 64 architecture” to actually mean AMD64?
Yeah. Linus Torvalds called out Intel in 2004 for being ‘petty’ and ‘try to make it look like it was all their idea’, and added: ‘Any Intel people on this list: tell your managers to be f*cking ashamed of themselves.’
Well said!
(https://yarchive.net/comp/linux/x86-64.html).
Old Linus rants are always a hoot.
Thom Holwerda,
It’s funny, but I still kind of resent AMD for doing that. If not for amd64/x86-64, I think we would have made a clean break from x86 to cleaner 64-bit architectures. Yes, this “x86s” will remove old x86 modes, but it doesn’t fundamentally redesign x86-64, which is yet another extension to x86 from the 70s. The ancient x86 architecture, even with 64-bit extensions, still leaves things to be desired, like more efficient decoding, less decode latency, longer prefetch queues, and more consistent registers, all with better code density. Sure, it’s proven “good enough”, but by continuing to use it for new architecture variations, it holds back the adoption and promotion of more modern architectures.
Alfman,
We could argue it was not AMD, but the market, that decided to keep x86-isms into the 64-bit era.
We had Itanium from Intel; PowerPC supported by Apple, IBM, and Sony (PS3/Cell); SPARC from Sun Microsystems; and probably many others I forgot to include. All had 64-bit support, some with very clean designs. Nevertheless none of them made inroads even after being pushed for so long.
Only after AMD introduced an x86-ish design did people jump onto the 64-bit bandwagon, both on the Linux and Windows sides. (Windows had several 64-bit versions in the past.)
So, once again “good enough” won. If not for AMD, it could have been any of the other players coming with a similar idea. At least AMD’s design made some good choices (like dropping 16 bit support, segmentation, and old fpu registers), and made things like “no execute” and SIMD standard. It was a bit of a semi-clean start.
sukru,
But… it was in fact AMD who was responsible for bringing x86 to the 64-bit space. If not for AMD, another architecture would have won and we might have been better off for it. On top of the other contenders you mention, AMD’s own talent could have introduced another architecture in place of the x86-based amd64. Competition could have gotten interesting, but instead we got a continuation of the x86 monopoly… ugh.
I wouldn’t say that. There was relatively little consumer demand for 64-bit on PCs for several years after 64-bit processors became available:
https://www.hp.com/us-en/shop/tech-takes/specifications-personal-computers-over-time
Not only was 64bit unnecessary for average consumers, the heavy footprint of 64bit software & OS was actually a compelling reason to prefer 32bit installs even on 64bit hardware. In fact I still remember many of these 64bit capable systems continuing to ship with 32bit versions of windows and I owned one myself. By the time 64bit truly became important for consumers, 64bit alternatives weren’t competing with x86-32, they were competing as underdogs against already dominant x86-64 systems, for better or worse.
You may already know this, but it’s not clear from your wording…
AMD64 didn’t drop any of those things in silicon; rather, they added a new long mode in which those things are not accessible. This is why you can still run 16-bit DOS software & games on a 32-bit version of Windows on a 64-bit CPU, but you cannot run the same 16-bit software on a 64-bit version of Windows on the same CPU.
This new “x86s” spec may be the first time they actually drop old features from the CPU. I suppose it’s all well and good for x86, but I still feel a modern architecture would be more optimal. The problem has always been the difficulty of overcoming the x86 monopoly.
Yes, probably for the average user, but I remember installing Windows XP x64 as soon as it became available. It is true that it had an additional memory footprint, but it also allowed going over the 3GB limit without /PAE hacks.
I remember there was a weird period where we had the x64 kernel + 32bit runtime hybrid mode in some Linux distros (can’t remember the name).
Yes, this change actually drops them.
But many were essentially “rewired” to modern functions, or side stepped. FPU registers were inaccessible, since they became part of the SSE space, for example. Segments reused for virtualization, and there is probably more I don’t know the details about.
Agreed. But clean architectures rarely gain any steam against those with backwards compatibility. Even that is not sufficient; Transmeta could not survive, for example.
sukru,
Obviously I realize the technical advantages of 64bit, but my point is that most average consumers didn’t actually need 64 bit when x86-64 came out. Years later, 64bit would become more important, but by then x86-64 had already been out for a long time and x86 simply continued the monopoly it already had without a contest.
The assumption is incorrect. The x87 floating point unit is still usable from 64bit code. You can confirm this using “gcc -S -mno-sse test.c”.
Additionally, SSE only goes up to 64-bit floating point, whereas x87 arithmetic has always supported 80 bits. Applications that use extended precision are still dependent on x87 and will automatically use it.
https://en.wikipedia.org/wiki/Extended_precision
And I tested all of this with a 64bit program just now to be sure…
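For example, here is a minimal sketch of the kind of test I mean (the file name and exact code are just my illustration; it assumes gcc on x86-64 Linux):

    /* x87_test.c -- illustrative only.
     * Compile with: gcc -O2 -S x87_test.c
     * Even in 64-bit mode, the long double values below are handled via the
     * x87 stack (fldt/fstpt and friends), because SSE has no 80-bit format. */
    #include <stdio.h>
    #include <float.h>

    int main(void) {
        long double x = 1.0L / 3.0L;                  /* 80-bit extended precision on x86 */
        printf("mantissa bits: %d\n", LDBL_MANT_DIG); /* prints 64 when x87 extended is used */
        printf("x = %.21Lf\n", x);
        return 0;
    }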
I don’t think this matters much to the discussion, but I just wanted to point out that the x87 floating point units are still used to this day and I believe will continue to be supported on “x86s” out of necessity. My impression is that they only intend to get rid of legacy 16bit and 32bit operating modes that aren’t used by today’s bare metal operating systems. But they won’t be cleaning up legacy functionality that is still needed by 32bit and 64bit userspace software.
Yeah. Apple proves it’s possible, but they had control over both the hardware and software. The adversarial relationships and incentives in the windows space made change more difficult. It was much easier for linux to pull it off “if you build it, we’ll support it”, but linux was mirroring the investment in existing architectures rather than motivating investment in new ones.
Alfman,
Interesting. I remembered reading that x86-64 dropped FPU support in 64-bit mode in favor of SSE. But apparently, they did not. I stand corrected.
Thanks.
I think 64-bit x86 was always going to happen. It was inevitable. If it wasn’t AMD, it would’ve been Intel later on, since it was clear Itanium wouldn’t pan out. Or Microsoft might’ve partnered with Cyrix/VIA, since they had a large incentive for x86 to continue. It was going to happen regardless.
There was just too much momentum behind x86 for it not to happen.
Drumhellar,
Well, maybe… but that wasn’t Intel’s plan. An important motivator for Intel to ditch x86 in favor of Itanium in transitioning to 64-bit was that AMD wouldn’t have the right to clone it. That’s the last thing Intel wanted. By the time Intel figured out that Itanium wasn’t going to cut it for 64-bit platforms, if AMD had already had an alternate solution, it’s plausible that things could have turned out differently.
I’ve already suggested this in a different post, but I believe the success of any alternative that had technical merit was ultimately in Microsoft’s hands. They could have succeeded in embracing a new 64-bit architecture if they were committed (just as Apple has done). However, Microsoft really wasn’t committed to changing the status quo as a direct beneficiary of the “wintel” monopoly, for better or worse.
“Only after AMD introduced an x86-ish design did people jump onto the 64-bit bandwagon”
Pretty sure Alpha had 64-bit well before x86_64. Plus most of these designs didn’t wait for that. Everyone said Intel’s IA64 was probably going to crush their architecture and pulled the plug on expensive chip R&D. As for Cell, well, its advantage was entirely decimated by GPU computing. (No reason to limit yourself to 8 SPEs on a specific CPU and awkward programming when you can have as many general GPU cores as you can buy, and access them directly.)
dark2,
Again, there were many 64-bit designs, even from decades ago. But only after x86-64, with IA-32 backwards compatibility, did they become mainstream.
Cell is a weird case. Sony pulled the plug on Linux support since they were afraid it could lead to hacks. I had actually used it on my PS3. They also did not try to license or develop it. Of course it dwindled and died over time.
But the PowerPC architecture it was based on lived longer. (I think there are still some new chips coming out, but they are much rarer now.)
sukru,
I’m still not agreeing that 64bit adoption would not/could not have occurred without x86-64.
You’re trying to imply that x86-64 created consumer demand for 64-bit when no other architecture that preceded it could. However, this lack of demand from typical consumers in the early 2000s isn’t surprising at all, because 64-bit simply wouldn’t become important to them until memory requirements made it important, by which point it would be important regardless of whether x86-64 existed or not.
Let’s consider 32-bit. Do we give credit to x86 for creating demand for 32-bit that wouldn’t have happened otherwise? No, obviously the demand for 32-bit was already going to get there regardless of whether x86 participated or not.
To be clear, I do acknowledge the value of compatibility, but it’s not logical to credit x86-64 with the demand for 64 bits when the demand was already going to get there naturally regardless of whether x86 participated or not.
Alfman,
Thinking back, the slightly more correct explanation could be:
Athlon 64 was just a better x86 chip.
It did not cost too much to have 64 bits, neither in monetary terms, nor in software compatibility.
It would run our existing 32-bit OS out of the box (almost no other alternative did that, at least without major emulation cost). And if anyone wanted to experiment, both Windows and Linux provided 64-bit support, again with backwards compat.
“But only after x86-64, with IA-32 backwards compatibility, did they become mainstream.”
Alpha was very mainstream for email servers, and other use cases that already needed 64 bit. Plus being able to add pc expansion cards, etc. It would most certainly have hung around for a long time if they hadn’t pulled the plug on their R&D funding like everyone else on the false assumption Itanium would be successful. Also 64 bit adoption truly lagged for a long time. I remember when Vista was out and while new PCs had 64 bit processors, a lot of them had motherboards that were only capable of 32 bit. It took a long time for the average person to need all that RAM.
Also, Linux support has nothing to do with the Cell dying. As I said, you need special programming to feed 8 cores through one CPU, and it’s always 8 cores per CPU, so scaling sucks. It’s not 2007 anymore, and all the tasks those SPEs were handling can now be fed directly to a GPU with no such annoying limitations or complications in programming. Back then GPUs were pretty much only double floating point calculators and couldn’t do those tasks. Cell simply can’t be competitive anymore, even if R&D had continued.
sukru,
Yeah, we can agree on this. There wasn’t a negative and OEMs would start including x86-64 CPUs even on systems that didn’t use it (both with intel and AMD systems).
dark2,
Indeed. 64bit applications working on alpha was an extremely compelling advantage for those that needed it. Particularly with enterprise customers running applications that benefited from more ram.
Alas, Microsoft’s commitment to promoting alternatives was rather weak. It seemed they were hedging their position to be ready for a rapid move to x86 alternatives as an insurance policy in case things went south with intel, but they never put the full force of microsoft behind the alternatives, which is what alternatives needed to be truly competitive with intel software-wise…
http://alasir.com/articles/alpha_history/dec_collapse.shtml
I think we can agree that the competitiveness of alternative hardware ecosystems largely came down to software rather than hardware. Microsoft could have made other 64bit architectures first class citizens, but as second class citizens, they were deprived of their best chances to grow organically.
IBM was manufacturing Cell-based blades for a time. They weren’t selling especially well.
@sukru
You are right in regards to consumer-space adoption; in the pro/enterprise markets, 64-bit had been a thing for almost a decade before.
FWIW, people are still confusing uarch with ISA. Underneath, the AMD64 was basically the same as an Alpha/POWER/MIPS high-performance core of the era. In fact, a big chunk of the K8 design team came straight from the AXP group at DEC.
What AMD got right was providing a consistent 64-bit extension to the x86 ISA. So you didn’t have to throw the baby out with the bathwater in terms of keeping the x86 32-bit software base. And it was a move the market proved correct.
So it was not a case of the K8 being merely “good enough” (it was just as good as or better than the competition), but of it being the right path forward for the largest software base on the planet (at the time at least). And people/organizations buy computers to run software.
javiercero1,
Agreed. Even in gaming markets we had 64- and 128(!)-bit chips before. But not in general-purpose consumer PCs. (The 128 is the vector size, I think, not the regular word size.)
And, yes, all modern chips since the 1990s use microcode to convert RISC into an internal CISC-like representation.
@sukru
I think you meant convert CISC to internal RISC-like representation.
It’s an interesting Freudian slip, because most people are not aware that most modern high-performance “RISC” processors have also broken down their “simpler” interface instructions into nano/micro ops internally as well.
javiercero1,
Haha… Yes, I think there are almost no more “pure” RISC chips out there.
@sukru
To be fair, there was never a “pure” RISC chip really, since not even the research groups who came up with the term initially could agree on what it meant exactly. It just sounded cool, and that’s why it has stuck for so long… way past its expiration date about three decades ago (when ISA and uArch started to decouple).
Intel wanted to make 64-bit computing exclusive to their big iron customers. They had no intention of making 64-bit computing mainstream for us ordinary folk. We’d all be stuck with IA32 if they had their way.
adkilla,
I agree and think intel would have lost the 64bit battle for consumers with itanium.
I disagree. Obviously we know with hindsight that ordinary consumers weren’t going to need 64bit for many more years to come, but eventually the need would be there and an alternative would likely have beaten itanium. sukru mentioned some other existing alternatives. AMD’s 64bit architecture didn’t have to be based on x86 and if it weren’t, then I suspect we would have had another more modern architecture from them.
To be fair, Itanium was HP’s idea, and a replacement for PA-RISC. It was never originally intended for ordinary customers. The only reason Intel even kept developing it was that they were contractually obligated to HP.
@dark2
Both organizations were all in at the beginning. I think HP bought some VLIW startup from the 80s, and Intel had experimented with LIW programming models with the i860.
At the time Itanium started, both RISC and CISC designs had reached the same approaches to achieve performance: caches, prediction, fast pipelines, and superscalar scheduling. Stuff that was a big deal in the 80s when RISC came to be commercially, like instruction encoding, had stopped being a perf limiter as complexity crept up elsewhere. The next step was going to be out-of-order scheduling, but it would significantly increase design complexity.
A lot of teams that went towards out-of-order had a lot of issues. MIPS basically went bankrupt designing the R10000 and had to be bought out by SGI. Same for DEC, which had tremendous difficulty with the EV6. Even the first out-of-order PPC, the 620, was a failure. Sun literally took over a decade to get an out-of-order SPARC, and it basically sank the company financially.
HP and Intel both had teams that bet internally on OoO: PCX-U for PA-RISC and P6 for x86, which became the PA-8000 and Pentium Pro.
But both Intel and HP were well aware of how costly continuing to design aggressively speculative HW was going to be, so they also decided to bet on a common platform that would “tame” the HW complexity by using an in-order superscalar core, and instead transfer a lot of the scheduling complexity over to the compiler, since a lot of trace-analysis tech from the 80s and 90s was showing a lot of promise. VLIW is basically an attempt to make the superscalar part of the machine visible to the programmer.
Like everything in this life. Things are not black and white, and it all ended being a mixed bag.
The assumption that out-of-order was going to induce massive runaway design costs was correct. VLIW also proved to have its own merits: Transmeta, for example, managed to make their x86 virtual engine run on top of a VLIW core, and a lot of modern GPUs and media/vector processors use VLIW designs. The compilers also, contrary to a lot of the nonsense that is said on the internet, managed to produce decent VLIW code.
However, what both HP and Intel got wrong was that x86 would get a 64-bit extension from an external entity (AMD). And with out-of-order, 64-bit x86 had access to similar performance levels to IA64, along with something that Itanium lacked: a huge software base and economies of scale.
And at the end of the day, customers buy CPUs to run software… and whatever allows you to run your current software fast today is always going to win over whatever allows you to run your future software fast tomorrow. Which is why x86 has proven almost immortal, as it will always carry enough momentum from its huge software catalog to overcome whatever performance gains its competitors may have.
Then tell me why SPARC, PowerPC or MIPS didn’t catch up? https://en.wikipedia.org/wiki/64-bit_computing
The “wintel” monopoly. Until recent snafus at Intel, their fabs gave x86 a huge advantage over competing architectures. Also, before Windows RT, no architecture besides x86 was marketed to or affordable for Windows consumers. Having a “good enough” product with strong monopoly backing can be a much better position than having a better product with little market share and weak partners. Apple showed that it is possible to catch up to the leader(s), but they did so from a strong position: their own dominant platform, a market value in the trillions of dollars that dwarfs Intel’s, exclusive access to cutting-edge fabs, etc.
That’s kind of the point: x86 would lose more battles if they were fighting on a level playing field, but for better or worse x86 has had a privileged life throughout most of its existence.
Alfman,
But wasn’t that their strong point?
Microsoft was always the cheapest option, until Linux came of course.
Windows PCs were cheaper than Macs. Windows for Workgroups was cheaper than Novell Netware. Visual Studio was cheaper than later Delphi versions (earlier it was the other way around, hence many people went with Borland), and so on…
Same with Intel chips. They were usually the cheapest option out there. Or at least we had AMD/VIA/Cyrix with the same arch. Even today, ARM desktops are barely competitive with the x64 offerings. Aren’t they?
Bottom line: cheap and good enough (VHS) wins against the better and more expensive (Betamax).
sukru,
Yes, absolutely, it happens everywhere, and that’s kind of the point. Markets favor monopolies & duopolies because the positive feedback loops (economies of scale, network effects, partners, momentum, etc.) keep them on top with “good enough” products. We clearly agree on this aspect. I just find it unfortunate that most alternatives will need immense resources just to offset the monopoly advantages before they can have fair competition.
I don’t think Microsoft should be blamed for this. They usually just focused on the software side of things, letting everyone else do the rest.
Besides they already had the AXP (Alpha) and Jazz (MIPS) platforms that were already 64-bit before AMD released the Athlon 64, plus the future Itanium port they released. So it was not for the lack of trying.
Civitas,
I’m not sure I would use the word “blame” this way, but it’s well known that MS Windows support on Alpha was subpar. So let me ask you this directly: do you think quality OS support has an impact on the viability of an architecture? I hope your answer isn’t “no”, but if it is, then please explain.
The Alpha hardware was 64-bit, but the Windows kernel running on Alpha was only 32-bit. There were other deficiencies too, but IMHO “the lack of trying” would be a fair assessment. It was a proof of concept that it worked, but with none of the effort that would be needed to really make it a first-class Windows platform.
This extremely timely article from a few days ago covers the topic perfectly…
https://www.theregister.com/2023/05/19/first_64bit_windows/
(my emphasis)
IMHO this is newsworthy enough to be its own osnews article.
This is ancient news, but it was linked to from the article above and is also relevant…
https://www.theregister.com/1999/08/26/microsoft_puts_boot_into_64bit/
Theregister articles used to be so full of satire. “Satan of Software”, haha
https://www.theregister.com/1999/08/23/compaq_alpha_cuts_pull_rug/
So arguably Microsoft’s repeated non-committal approach towards alternative 64bit architectures really did hurt the future prospects of those architectures.
@Kochise
Price. These things priced themselves out of existence.
Design costs have grown almost exponentially (just like performance). So when you’re in a market that is not growing as fast, you end up with smaller and smaller margins for bigger and bigger investments, until you reach a point where you simply can’t afford to carry on business.
Alpha, MIPS and SPARC basically killed DEC, SGI, and Sun respectively. Especially with the transition to out-of-order designs, where complexity exploded, and thus design costs.
These architectures made sense in the early 90s, when organizations were willing to pay x times more for y times more performance with respect to commodity off-the-shelf stuff. But by the turn of the century the roles had reversed: the bespoke designs were costing more than the x86 commodity stuff, sometimes over twice as much for half as much performance. Which is not a sustainable business model once your customers figure out how to migrate away their legacy bases.
BTW I built an AMD64 dual 244 on an MSI mobo in 2003 as a workstation to replace my dual ABIT BP6 Celeron 500. Ran 64bit Gentoo for a few years (still have it next to me). The arguments and benchmarks for 64bit were favourable depending on how much RAM you had.
Bringbackanonposting,
Some things performed better and others performed worse. Prior to amd64, a longstanding problem with x86 was that it suffered from an inadequate number of registers. Even register-renaming pipelines couldn’t mitigate the fact that compilers often had no choice but to store local variables in memory due to the lack of registers. AMD64’s new registers were a hugely welcome addition! On the other hand, algorithms that were already close to optimal on x86-32 could end up performing worse when compiled to 64-bit due to the additional memory consumption pushing things out of cache. Also, if the 64-bit software used up memory to the point of swapping, then the performance hit would be MUCH worse…
So yeah, I’m going to bold your point about RAM
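As a rough illustration of the register-pressure point above, here is a made-up sketch one can play with (my own example, assuming gcc on Linux; exact output will vary by compiler version and flags):

    /* regs.c -- illustrative only; compare the generated assembly of
     *   gcc -m32 -O2 -S regs.c   (8 GPRs: expect some stack traffic/spills)
     *   gcc -m64 -O2 -S regs.c   (16 GPRs: the temporaries tend to stay in registers)
     */
    long mix(long a, long b, long c, long d,
             long e, long f, long g, long h)
    {
        long t1 = a * b + c;               /* several values are live at once...     */
        long t2 = d * e + f;
        long t3 = g * h + t1;
        return (t1 ^ t2) + (t3 ^ (a + h)); /* ...so the allocator has to juggle them */
    }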