Ah, Intel’s IA-64 architecture. More commonly known as Itanium, it can probably be seen as a market failure by now. Intel consistently failed to deliver promised updates, and clock speeds have lagged behind. Regular x86-64 processors have already overtaken Itanium, and now Microsoft has announced that Windows Server 2008 R2 is the last version of Windows to support the architecture.
Windows XP and Windows Server 2003 were the first Windows releases to introduce support for Intel’s Itanium architecture. Windows Vista didn’t come in an Itanium version, and now, the server line of Redmond’s operating systems will drop support for it too.
Therefore, Windows Server 2008 R2 will be the last version of Windows that supports Itanium. This means that mainstream support for Windows Server on Itanium will end July 9, 2013, while extended support will end July 10, 2018. In other words, current customers have little to fear.
This won’t be that big of a hit for Microsoft, either. Itanium has never gained a whole lot of traction, and since it has been estimated that only five percent of Itanium systems run Windows, it only makes sense to divert the Windows on Itanium resources somewhere else.
Red Hat has already announced that Red Hat Enterprise Linux version 6 will not support Itanium either. All this spells doom for Itanium, which was once predicted to more or less replace x86 outright. Despite massive investments from Intel, as well as a boatload of hype, it was pretty much still-born.
This graph says it all, really.
It’s sad to see IA64 dying. It was intended as a replacement for the old x86, but due to stubbornness and inertia on the part of software developers and users, it never made it. It’s a shame, since it’s a newer, smarter technology that doesn’t have x86’s restrictions and doesn’t sacrifice architectural design to support legacy features.
Edited 2010-04-05 18:57 UTC
It would have been a much easier sell to people if Intel was able to show consistent performance measurements over x86/x64, but they couldn’t in the face of competition. Fact is that the newer x86/x64 chips can exceed Itanium in many (most?) meaningful benchmarks.
So what is the advantage of Itanium over x86? Exactly the question people keep asking and Intel can’t seem to answer.
The fictitious problem with x86 that Intel wanted everyone to believe in was the struggle to maintain performance while still supporting legacy instruction sets. Considering how easy and fast it is today to emulate the older instructions, that “problem” sure doesn’t seem like a real problem anymore.
1. If we really want to emulate those “older instructions”, we’ll be stuck with x86 for the next 50 years.
2. In that case, why emulate anything, if the decision is “stand by x86”? It would be better just to make the same chips over and over… of course with more GHz, more cores, and more cache in each new version.
Personally, I suppose that open-source operating systems (Linux in first place, of course) will slowly change the market; they are compatible with different hardware. Just consider all the handheld devices…
My point was that Intel intentionally tried to fool us by claiming it was easier to ramp up performance on Itanium than on x86 because of the burden of supporting the older instruction sets (backwards compatibility). They intended to support the older chips through emulation on Itanium.
Funny how, once AMD came up with x86 chips that were plenty fast compared to Itanium, Intel was able to ramp up the speeds at an accelerated rate for their x86 chips. x86 has a lot of life left in it so long as competition stays in place.
Edited 2010-04-05 19:21 UTC
To clarify –
Intel wanted to do Itanium and get away from having to share technology with AMD, Cyrix, etc. They wanted to control the market again. They designed IA64 to move to 64-bit computing, emulate the 16/32-bit x86, and move on to new things. It was (and is) a powerhouse of a chip. But it was always expensive, and never really targeted at the average user. Intel went from aiming at everyone to just the server market.
AMD, OTOH, decided to stick with the x86 instruction set and designed AMD64. AMD64 allowed them to overcome the issues in the prior x86 versions; though micro-code probably helps at least as much. AMD64 simply extended x86 into 64-bit, creating ‘long mode’ (versus protected and real modes). Essentially, AMD beat Intel at their own game, and eventually forced Intel to support the AMD64 instruction set themselves (originally scaled down as EM64T, then brought to par and extended as Intel64).
Interestingly, if Intel had decided to not do the IA64 thing, AMD probably wouldn’t still be around. AMD64 helped AMD out a lot.
Yes but what if that emulation has no appreciable cost?
It costs power (performance per watt), dude. Just compare Atom to ARM, for instance: Atom can run old Intel 8-bit code, but what is the point if it just uses more watts than ARM? The tradeoff is that Intel has a huge instruction decode unit that generates nothing but heat, while ARM has a very simple, straightforward decode frontend.
Edited 2010-04-05 20:48 UTC
Ya, that’s basically what I was getting at when I said “So is virtually everything else.” The x86 instruction set has a lot of baggage, baggage that needs to be let go.
Because reworking your entire corporate infrastructure around new technology that’s “better”, at significant cost, when you have a working “good enough” setup in place is a smart business decision?
Until other architectures offer some killer app or extreme cost savings, you’re not going to see x86 going anywhere
You are correct. If anything ends up replacing it, it will need either initial hardware support for the x86 instruction set, or one hell of a software emulator (on par with or better than Apple’s Rosetta).
I have nothing against x86-64; it does its job well enough, but there will come a day in the near(ish) future that brings about its end (10–15 years, realistically). Then again, graphene could very well keep x86 alive forever… Personally, I am hoping for quantum computing. Or making Nvidia’s Fermi into a CPU/GPU combo, though it would need a few additional instruction sets to pull that off.
Of course, .NET was supposed to kill x86 binaries, and make applications run just as well on an ARM Windows Mobile device as on an x86 laptop and an Itanium workstation.
But we all know how well that turned out…
Don’t be so sure about counting out .NET just yet. MS still likes that plan a lot, and though it’s going to take more time, I wouldn’t be surprised if we see something come out of it (though currently there hasn’t been much fruit from said labors… except in Windows CE).
I found an interesting article today:
S. Bansal, A. Aiken, “Binary Translation Using Peephole Superoptimizers,” in Proceedings of the Symposium on Operating Systems Design and Implementation (OSDI), December 2008.
(S. Bansal is the same guy (then at IBM) who developed the CAR algorithm for cache management, which is even better than ARC.)
It is a quite simple and easy way to translate existing binaries to a different architecture. What is interesting is that the binaries run on average at about 60% of native speed, and in some situations they are even faster than natively optimized versions.
What is most interesting about their approach is that the hardest part of the translator was automatically generated! Given this, it would be quite simple to add support for 10 more architectures and all possible translation pairs.
In the article they implemented a static PowerPC-to-x86 translator (so they could compare their results with Apple’s Rosetta and QEMU, both of which are dynamic translators).
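To give a feel for the idea, here is a toy sketch of peephole-style binary translation in the spirit of that paper. The two mini instruction sets and the rule table are invented for illustration; the real system enumerates and formally verifies its translation rules automatically with a superoptimizer, which is the hard part mentioned above.

```python
# Toy peephole-style binary translator. Instructions are tuples; the
# rule table maps short source-instruction sequences to target
# sequences (the hypothetical "source" ISA is MIPS-like, the
# "target" is x86-like). All rules here are hand-written examples.
RULES = {
    (("li", "r1", 0),): [("xor", "eax", "eax")],        # idiomatic zeroing
    (("add", "r1", "r1", 4),): [("add", "eax", 4)],
    # A fused two-instruction rule: superoptimizers find these pairs.
    (("li", "r1", 0), ("add", "r1", "r1", 4)): [("mov", "eax", 4)],
}

def translate(source):
    """Greedy longest-match translation over the source instruction stream."""
    out, i = [], 0
    max_len = max(len(pattern) for pattern in RULES)
    while i < len(source):
        for n in range(min(max_len, len(source) - i), 0, -1):
            window = tuple(source[i:i + n])
            if window in RULES:
                out.extend(RULES[window])
                i += n
                break
        else:
            raise ValueError(f"no rule for {source[i]}")
    return out

prog = [("li", "r1", 0), ("add", "r1", "r1", 4)]
print(translate(prog))  # the fused rule wins: [('mov', 'eax', 4)]
```

The longest-match step is why the fused rule beats the two single-instruction rules; the quality of a peephole translator comes almost entirely from how good its rule table is.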
I hope this project will go on (S. Bansal was working at VMware and is now going to India to start his own business), so we might possibly be freed of this x86 legacy. There are, of course, other interesting possibilities, like running x86 games on ARM, running Itanium binaries on more promising architectures (like PowerPC, or just amd64), or running some older programs on current computers.
I once had a problem with an old DEC UNIX workstation connected to some scientific equipment with specialised software. That computer would be 14 years old now, but the equipment is very useful, so I was asked to try to run the software (Alpha CPU, operating system and programs) on some x86 machine using emulation. Unfortunately I had big problems finding any products to do this (or they were astronomically costly).
Edited 2010-04-06 01:47 UTC
Are we talking about servers or netbooks here?
When Intel has had Itanium undermined by their own x64 CPUs, you really need to question how much the x86 baggage actually matters.
When you can get an 80 watt quad core Xeon for $230 the Itanium becomes a very hard sell.
The x86 baggage makes a hell of a lot of difference per MHz… an SGI Fuel machine with a 900 MHz MIPS processor (R16000 CPU?) had similar floating-point calculation speed to a 3.4 GHz Pentium 4.
A chip with almost 4x the clock speed was being equalled by something that was sub-GHz.
It does make a difference.
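The per-clock arithmetic implied by that comparison is easy to check, assuming (as the post does) that both chips land on roughly the same benchmark score:

```python
# Back-of-the-envelope work-per-clock comparison from the figures
# quoted above: a 900 MHz MIPS part matching a 3.4 GHz Pentium 4
# on floating point implies ~3.8x more work done per clock cycle.
mips_mhz, p4_mhz = 900, 3400
score = 1.0  # assumed equal benchmark score for both chips
per_clock_ratio = (score / mips_mhz) / (score / p4_mhz)
print(f"MIPS work per clock vs P4: {per_clock_ratio:.1f}x")  # 3.8x
```

Of course, as replies below point out, work per clock is a design tradeoff (deep pipeline and high clock vs. wide issue and low clock), not purely an ISA property.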
No one disputes that it makes a difference but that doesn’t mean it always makes sense to use ARM from a performance/price perspective, especially when Intel and AMD are constantly improving the performance/watts of their x64 cpus.
Look I find Itanium interesting and I’m disappointed to see it in decline but I also know that there is rarely a good business case for it. Red Hat dropped Itanium support last year which shows that even Linux shops aren’t interested in it. It’s rare for a business to even need more than a couple Xeons so a 20% drop in power in those cases would be pocket change.
No it doesn’t. Please stop spouting off about things you clearly don’t understand.
A processor’s IPC is all about tradeoffs, and Intel decided to go with a design that focused on high clockspeed with low IPC, while other chips have gone the other way. This has nothing to do with the instruction set, it’s a valid tradeoff that you can choose to go either way on. PowerPC chips have gotten really high speed as well, and they aren’t x86.
First off, those 900 MHz MIPS chips were rarer than hen’s teeth, and cost many times as much as the P4.
Now, calculate the cost/performance ratio of the MIPS part vs. the P4 and weep.
Also, the MIPS part needed some insane caches in order to be competitive in SPEC, which drove the cost up even further.
And thus I point you to this (a quote of a quote):
http://kawaii-gardiner.blogspot.com/2010/04/one-thing-i-love-about-…
Itanium isn’t power efficient for starters, it is massive in size, it is expensive, and the question remains: what can it do that x86 can’t? It is a solution looking for a problem to solve, and so far, as much as people would love x86 to die, it hasn’t done so. In all the years it has been out, Intel has failed to provide an Itanium CPU that is cheaper, more power efficient, and scalable down to the laptop level. That alone is an indication that as much as people would love to dream Itanium up as a replacement for x86, Intel’s actions have been in the opposite direction. What I think would be a more fruitful discussion is why in 2010 we’re still dicking around with BIOS when there is UEFI. Hopefully in my lifetime we’ll eventually see a move away from the awful BIOS, along with the buggy ACPI implementations.
Edited 2010-04-05 22:20 UTC
The overhead of x86 decoding is now less than 5% in area/power (in the worst case). That is a very small price to pay for basically unlimited backwards compatibility.
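To put that percentage in absolute terms (the TDP figure below is a hypothetical desktop-class number, not from the post):

```python
# Illustrative only: what a worst-case 5% decode overhead means in
# watts for an assumed 95 W desktop CPU.
tdp_watts = 95.0      # hypothetical TDP, chosen for illustration
decode_share = 0.05   # worst-case share claimed above
decode_watts = tdp_watts * decode_share
print(f"decode overhead: ~{decode_watts:.2f} W of {tdp_watts:.0f} W")
```

A few watts on a desktop part is noise; the same fixed overhead is a much bigger fraction of the budget on a sub-watt embedded chip, which is roughly the Atom-vs-ARM argument elsewhere in this thread.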
BTW, Arm and Atom target two very different segments.
I was talking about the IA64 architecture, not about the actual implementations. And I think that IA64 is superior when compared to x86.
“And I think that IA64 is superior when compared to x86.”
So is virtually everything else…
But which platform is superior when it comes to providing a poor performance/cost ratio? Sparc or Itanium?
Itanium. SPARC is very, very good; even Oracle believes in SPARC in a big way. It’s also my personal favorite architecture, but I am trying to be as unbiased as I can be.
for the cost (and the open factor) SPARC wins.
Cost is relative to needs. You can’t just say “Sparc wins” because I can show a thousand cases where building a Sparc server is a waste of money compared to x64. Actually I would take that even farther and say that for most server needs Sparc is a waste of money.
Sparc is more elegant than x64 but that doesn’t matter when it comes to performance/price. I think it would be nice if RISC was more competitive at the server level but it isn’t. Sparc sales are in decline and for good reason. They usually aren’t worth the price unless you have code designed for RISC systems. The new Xeon 7500s will drive even deeper into RISC territory.
http://www.itproportal.com/portal/news/article/2010/4/3/intel-prese…
Look I wish there was more investment into RISC at the server end but the sales growth is really in x64. Maybe Oracle will mix things up a bit but at this point the future of servers is really looking like x64. But if it makes you feel any better Sun 1U servers are really cheap on ebay right now and make great file servers.
Maybe, but that really isn’t important. *All* that matters is implementation – the best architecture is irrelevant if the implementation doesn’t deliver.
IA64 is a great architecture that never delivered – but x86 is a good-enough architecture that’s been delivering for some thirty years now. There’s only one winner in that fight…
I really think it died* due to being bad/expensive hardware. There was no time during Itanium’s lifetime when it gave a better price/performance ratio for any 1P/2P servers (which are >95% of all servers).
I mean, Linux did have some good support for Itanium, but there was no reason to move any Linux servers to Itanium even if all the software was supported. (And most servers just need Apache/PHP, Java, MySQL/PostgreSQL, which run fine on Itanium.)
So no, it was not just a lack of software. Even people with full software support did not move.
*Insert Monty Python parrot joke here
It’s not dead. It’s sleeping!
Wakey wakey!!!
I don’t think it was ever supposed to be a replacement for the x86 platform, if it had been, Intel would have been pushing it into the market much harder.
Itanium was mainly introduced to compete with other high-end non-x86 platforms like MIPS, Alpha, PA-RISC and so on. Now that Itanium has pushed all of them out of the market, Intel can abandon Itanium and has a cleaned-up processor market.
Intel has the same attitude towards backwards compatibility as Microsoft. How else do you explain that even the latest x86 processors (I don’t know about amd64, though) still have that much-hated A20 gate?
Adrian
Although it didn’t push SPARC or POWER out of the market…
(Then again, AMD64 did that.)
Well part of the original appeal of moving away from x86 meant that Intel could sell chips that AMD couldn’t clone. AMD threw a monkey wrench into that plan with x64 which had backwards compatibility and addressed the main shortcoming of x86 on the server which was 32 bit addressing. So Intel had to build their own x64 chips to remain competitive which then cut into their own Itanium sales.
I think that’s an often overlooked point. Intel really wanted to freeze AMD out of the market entirely by moving everyone to an instruction set that AMD couldn’t legally copy. That opened up a window for AMD, though, to come up with an improved x86 architecture while Intel was still trying to get IA64 right, and the success of the original Athlon64 showed that there wasn’t any real benefit to leaving x86, at least for the majority of the market.
Edited 2010-04-06 02:15 UTC
No, Intel wanted to neutralize the high-end RISC processors in the datacenter.
Given that today you can’t buy any server platform designed around Alpha, PA-RISC, or MIPS, and SPARC is on life support, I’d wager Intel has succeeded.
As usual, a lot of people in this site tend to equate their particular (and in most cases very detached from the reality of the field) opinion with fact.
An interesting approach might be found in clusters.
http://www.beowulf.org/
Take something like this:
http://cs.boisestate.edu/~amit/research/beowulf/
or this:
http://lis.gsfc.nasa.gov/Documentation/Documents/cluster.shtml
and build it instead out of thousands of tiny ARM SoCs. Put a smallish hard disk with each CPU, say 100GB or so. Maybe a “headless smartbook” type of configuration.
Low-end RISC processors in the datacentre, but a great many of them.
– Fault tolerant.
– Easy repair by replacement of nodes, possibly hot replacement (self-healing).
– Redundant, distributed storage, but still many terabytes of it.
– Low cost.
– Low power.
– Significant performance.
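The capacity and power arithmetic for such a cluster is easy to sketch; the node count, replication factor, and per-node wattage below are assumptions for illustration, only the 100 GB per-node disk comes from the proposal above.

```python
# Rough capacity/power arithmetic for the ARM-SoC cluster idea.
# Only disk_per_node_gb comes from the post; the rest are assumptions.
nodes = 1000
disk_per_node_gb = 100   # "smallish hard disk with each CPU"
replication = 3          # assumed: keep 3 copies for fault tolerance
watts_per_node = 5       # hypothetical SoC + small disk

raw_tb = nodes * disk_per_node_gb / 1000
usable_tb = raw_tb / replication
total_kw = nodes * watts_per_node / 1000
print(f"raw: {raw_tb:.0f} TB, usable: {usable_tb:.1f} TB, "
      f"power: {total_kw:.1f} kW")
```

So even with three-way replication you keep tens of terabytes, at a power budget far below a comparable rack of x86 servers, which is the "low cost, low power, redundant" claim in the list above.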
Edited 2010-04-06 13:18 UTC
What are x86 restrictions exactly?
At the time it seemed like a market grab by Intel. Instead of adding 64-bit to x86, they decided to release a brand new chip with 64-bit support and charge lots of $$$$$ for an architecture they fully controlled.
AMD did what Intel should have done and just released amd64, and pretty much owned the x86 server market for several years.
Finally the deprecated platform is dying.
So Itanium becomes a platform for professionals only?
Don’t take it seriously, just a bad joke
When nobody wants your overpriced, outdated and awkward technology… call it uber ultra Pro Enterprise.
It worked for AIX… why not Itanium!?
At least the last OS to support it is a great, stable one from MS. I am one of the few who like the Itanium processor, and not just because I can dual-boot OpenVMS and Windows Server 2008 R2. If I recall correctly, MS only signed on to release operating systems for Intel’s Itanium for x amount of years anyway.
Good thing DEC killed Alpha so they wouldn’t be stuck on a niche architecture only used for VMS and a proprietary Unix, eh?
Bitter? Me? Nah!
I wonder if HP will change their strategy for HP-UX, since they have canceled their PA-RISC and begun supporting Itanium as their remaining HP-UX platform.
Given that HP has completely outsourced their HP-UX file system software to Veritas, I’ve been expecting HP to find a strategy to discontinue HP-UX and port their system administration tools (which are generally considered “best in class”) to Solaris and/or AIX and license it as a third party enhancement (thus getting out of the OS business altogether).
If Itanium continues to lose support this seems even more likely.
I could see HP picking up Solaris. AIX would be a longggggggg shot and I don’t think IBM has any plans to let anyone but IBM mess with it.
HP-UX is in support mode (yes new features are being added, some of them awesome, but its going to go the way of IRIX).
OpenVMS might not be around much longer either, except for fixes based on customer request.
Don’t get me wrong, I REALLY hope I am wrong here (at least about OpenVMS, I couldn’t care less about HP-UX), but it looks to be where things are heading…
That would be a really cool idea, especially for Solaris (and maybe Linux). AIX has SMIT and it’s pretty good. xD
BTW, I’ll be happy if they kill HP-UX altogether… c’mon HP, you can’t sell that thing anymore, it’s unethical. I feel guilty when I see our clients paying uber-expensive support and licences for that sh*t (and I can’t say a word ’bout it). You have to pay even to extend an FS… wtf
Amen to that. I can’t speak for its capabilities at a kernel level, but the userspace is appalling – not just compared to Linux (i.e. GNU utils), but compared to pretty much any other Unix variant I’ve dealt with.
Apparently its kernel level capabilities are quite good, at least at getting out of Oracle’s way. It’s long been a leader in database performance.
lolwut? Are you on drugs? Being an HP-UX sysadmin is absolutely nightmarish! Their tools are buggy, weird, arcane, and unreliable. SAM is a joke, and other tools are not much better than that. Everything reeks of early-80s-UNIX, even on 11i.
Maybe the reason is that it did not sell. You can get IA64 with perfectly fine Linux or Unix support. I think the target audience was using it more with those OSes than with Windows… It’ll live some more!
… don’t forget about OpenVMS
It was EPIC
I’ll be here all week.
I thought Itanium was still producing strong performance in certain scientific workloads though? I wonder if Windows HPC will still continue to support it for some time — it may not be very good for server workloads, but the only way I see Microsoft pulling out entirely is if Intel themselves are pulling out of Itanium.
Then again, nVidia and, to a lesser extent, AMD are gunning for the scientific processing market, and given the huge performance benefits GPUs exhibit for the types of jobs they’re good at, Itanium may soon find itself in an unsustainably small niche.
Itanium is an interesting technology, and an even more interesting approach to breaking free of x86 compatibility, but the technology, particularly on the compiler side, just isn’t there in a strong enough way to make Itanium the clear win it needed to be if it was going to have any chance at supplanting x86.
Sparc failed, MIPS failed, Alpha failed (even with a huge performance advantage at the time), PowerPC failed (even after a good run), and now Itanium has seemingly failed.
I strongly believe that ARM has a real shot — probably the best shot any competing architecture has had — if they keep making inroads from the mobile/embedded/low-power space, and don’t make the mistake of trying to compete with Intel in the desktop/mainstream market too soon (on the other hand, I’d love to see some snappy ARM-based netbooks/nettops/STBs and even thin-and-light laptops right now.)
IMO, ARM’s best chance was 1987.
Every year since then, their chances have gotten worse, until 2007’s netbook revolution. Even then, ARM can’t truly challenge x86 – the ARM netbooks that are actually coming to market appear to be glorified iPads, essentially, with real keyboards.
It’ll take a massive shift in the software most people run for ARM to stand a chance. Two ways that can happen: mass migration to open source, or Microsoft creating a fat binary format, making all of their compilers build for ARM, x86, and AMD64, only giving Windows Logo approval to applications and drivers that are compiled for all three architectures (unless they’re not applicable to certain architectures – for example, I wouldn’t expect drivers for an ATI northbridge for an AMD CPU to be compiled for ARM,) AND supporting this for 10 years without expecting ANY return on investment. (However, the way Microsoft does Windows ports, they may not need to do the funding. See Alpha – DEC and Compaq funded that.)
Then, an ARM port of Windows MIGHT take off.
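The fat-binary idea above can be sketched in a few lines. The container format (a dict keyed by architecture name) and the byte strings are invented for illustration; real fat binaries such as Apple's universal format use structured headers, not anything like this.

```python
# Toy "fat binary": one container holding a code slice per target
# architecture, with a loader that picks the slice for the current
# machine. All formats and byte contents here are made up.
import platform

ALIASES = {"x86_64": "amd64", "AMD64": "amd64", "i386": "x86",
           "i686": "x86", "armv7l": "arm", "aarch64": "arm"}

fat_binary = {
    "x86":   b"\x55\x89\xe5",       # placeholder machine-code bytes
    "amd64": b"\x55\x48\x89\xe5",
    "arm":   b"\x00\x48\x2d\xe9",
}

def select_slice(container, machine=None):
    """Pick the code slice matching the given (or current) machine."""
    machine = machine or platform.machine()
    arch = ALIASES.get(machine, machine)
    if arch not in container:
        raise RuntimeError(f"no slice for architecture {machine!r}")
    return arch, container[arch]

arch, code = select_slice(fat_binary, "x86_64")
print(arch)  # amd64
```

The hard part of the Microsoft scenario above isn't the container, it's getting every application and driver actually compiled for all three slices, which is exactly what the Logo-approval stick would be for.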
Alpha almost did it in 6 years, but it was significantly faster than x86 at the time (and could EMULATE x86 as fast as the fastest x86s could run natively.) The only thing that killed it was Compaq pulling the plug on it in favor of Itanium, and killing the Windows 2000 port just before it was finished. (Of course, Windows on Alpha was 32-bit, despite Alpha being a 64-bit CPU. There was ALSO a 64-bit port of Windows 2000 in the works… and Microsoft continued that on their own internally, as they needed Windows to be 64-bit clean for the Itanium port, and the work on finishing the 64-bit Alpha port would be valuable for the Itanium port.)
ARM is the best selling processor right now.
ARM is exactly in the market segment they want to be, why on earth would they move over to a space in which they have to compete face to face with intel.
I dunno if I misunderstood your post, but if you think that ARM has missed any opportunity for unmitigated success, you haven’t paid attention… at all.
I’m well aware that ARM is doing quite well, however, they have announced an attack on Intel: http://www.pcpro.co.uk/news/351619/arm-launches-attack-on-intels-ne…
And, Intel’s going after the high-end ARMs: http://news.cnet.com/Intel-has-ARM-in-its-crosshairs/2100-1006_3-62…
ARM has designs that are within striking distance of Atom, and Atom is within striking distance (except on power) of ARM’s high-end smartphone designs.
ARM needs to push up, to displace Atom altogether, just to STAY where they are. If they let Intel keep coming down, my prediction is Intel will eventually start digging up old designs, die-shrinking them, and starting to compete with ARM11 and Cortex-A5, and then ARM7 and Cortex-M3. If Intel gets the ARM7 market, with, say, a massively die-shrunk 386, ARM is dead.
As for the comment about 1987… had Acorn pushed the Archimedes worldwide, HARD, directly against IBM and Compaq, things would’ve been very, very interesting. The RISC vs. x86 battle royale would’ve happened then and there, rather than the PowerPC vs. x86 long drawn out battle that ended with PowerPC fizzling out, because Acorn could’ve massively undercut the Intel machines on price, and beat them on performance (and graphics capability, too.)
Edited 2010-04-06 04:20 UTC
If Moore’s law had held regarding die shrinks and clock rate bumps, Intel could have won. It seems at this point in time Intel’s manufacturing lead over other foundries isn’t what it once was. So ARM stays a serious contender at the low end. Intel would have to find some way to make the die-space- and power-hungry x86 decoder just *go away* to make it work.
Question: Do you even know the overhead that an x86 decoder induces?
A lot of you do not seem to understand the difference between ISA and microarchitecture, or where most of the power consumption comes from in a modern microprocessor.
Also the context of the workload/application between ARM vs. x86 are for the most part completely different.
If you were to scale up an ARM core in order to offer single-thread performance similar to a modern x86 core, chances are that you will end up with a similar power envelope.
ARM understands that, and that is why they will not target the same areas where x86 is king right now. Among other things, because they don’t have to: they are fairly dominant in the embedded market.
Edited 2010-04-06 16:14 UTC
1. I think we cannot just simply write “chances are”; consider, for example, the power consumption of processors made by VIA (“silent power”) versus Pentiums of equal computing power.
2. …and even when “chances are” (which I doubt) – you were right writing “if”. Why? Because I don’t need that much. I’m using an old Pentium III/750 for everyday work (and I’m not even fully using my 750 MB of RAM), so a Cortex-A9 will still be much, much more than I need. If the news from ARM is correct, such a 4-core A9 will have a power dissipation of about 1 W, so it won’t need a fan or even a heatsink…
Yes, they will target it – as someone wrote already. And as I wrote in the past: I’d like to buy an ATX motherboard fitted with an ARM (but not a Beagleboard, and not that expensive “development board” from ARM).
Again, by the time you have added the multiple functional units, the aggressive out-of-order dispatch unit, and the massive branch predictors needed to match the single-thread performance of a Core i7, an ARM core will end up with a similar power envelope to the i7 at the same fab process. In fact, Intel will probably have a better envelope in that case, since they control both the microarchitecture and the fab process, which ARM does not.
A lot of you keep focusing on the ISA, which honestly has not been an issue for the better part of a decade. Right now the power/area budget overhead associated with x86 decoding is less than 5%. Unless some of you truly think that ARM’s ISA has magical qualities which allow the microarchitecture executing it to completely ignore the laws of physics.
ARM is not stupid enough to target the high performance market. Period. In fact, none of ARM’s partners are remotely interested in that space, and they are ultimately the ones who drive ARM’s development and targeting.
Edited 2010-04-06 22:29 UTC
Well, maybe you’re right talking about the “high performance market”, but that one could rather be SPARC’s domain, I suppose.
Still, I believe that machines like this one: http://tinyurl.com/yd5anhz – could be fitted with ARM instead of Atom. If not a “regular” ATX board, I’d like to buy something like this with ARM (at a reasonable price, of course).
I doubt that ARM would use as much power as x86 if it scaled up, because it would need many fewer transistors to do the same work as x86.
x86 is buggy and shitty. x86 has over 1,000 instructions in the instruction set! That is terrible bloat.
I don’t remember the exact numbers, but it is something like this (ballpark numbers): you need 20 million transistors just to figure out where one x86 instruction ends, another 10 million to decode the instructions, 20 million to support legacy instructions no one uses, etc. Whereas you could implement a whole SPARC CPU in 20 million transistors.
Anandtech had an article recently about how bloated and buggy the x86 instruction set is.
After reading that ( http://www.agner.org/optimize/blog/read.php?i=25 ) I think there’s hope that chips like the ones from ARM – or especially SPARC, because of its openness – will have their chance anyway. Maybe even sooner than we expect.
… and if my grandma had grown balls, we would have called her grandpa.
How exactly could Acorn push hard a product against IBM/Intel/Microsoft… when they were a tiny British company, barely making it.
Besides, you don’t seem to have much of a historical context. There were previous hard pushes to use RISC in order to fight x86 directly: For example, the ACE consortium which was based around MIPS processors running NT and Unix. It had large players like Compaq, Microsoft, Olivetti, Digital, etc and it still failed. So you expect a two bit British company to succeed against a giant like Intel?
ACE was later, though, and x86 had penetrated further in 1993 than it had in 1987.
But, of course, at the time, Olivetti owned Acorn, and could’ve spent their own money pushing their subsidiary’s products as world-beaters.
Edited 2010-04-06 16:21 UTC
ACE started being drafted in 89/90. And before that there were plenty of RISC competitors: MIPS, SPARC, and Motorola’s 88000. All were in vogue in the late 80s and were offering far better performance than ARM. Even Intel had its own RISC CPU, the i860, at that time.
There is more to a computing ecosystem than an ISA. Which is what you seem unwilling to accept. I consider these “what if” exercises rather silly. IMHO.
One HUGE difference between your MIPS, SPARC, and 88k example, and ARM is…
Those three architectures were EXPENSIVE. A MIPS R2000, SPARC MB86900, or 88k box (and 88k was a flop even in the workstation market, FWIW) would have cost significantly more than a 386 desktop, possibly even one of equivalent performance, in 1987.
ARM2 was CHEAP. ARM sharply undercut 386s at equivalent performance.
And, I’m not even talking about the ISA. I’m talking about actual silicon, and either actual machines using that silicon, or estimates thereof. In the case of ARM vs. x86, I most definitely am using actual machines. Go look up, in 1987, the price of an Acorn Archimedes 440. Now go look up, again in 1987, the price of a Compaq Deskpro 386/25 with 4 megs of RAM and a hard drive (I think 40 megs?)
Edited 2010-04-06 22:27 UTC
Dupe delete
Edited 2010-04-06 22:36 UTC
ARM2 was barely faster than a 286.
So what if ARM-based systems were cheaper than Compaq? That’s no feat, since Compaq at that time was fairly high end. But you could get a no-name x86 clone with better performance at similar or lower cost than the Archimedes.
Benchmarks from the time seem to indicate otherwise: http://www.realworldtech.com/page.cfm?ArticleID=RWT110900000000&p=2
Note the thrashing it gives to the 16 MHz 386 on integer code. (No FPU, so that’s why it sucks at FP code. There was a podule-based FPU for ARM2 systems, and later the ARM FPA, for the ARM3.)
As for price… Wikipedia claims the Archimedes 440 was 1499 GBP at launch, with a 20 meg drive and 4 megs of RAM. Graphics chip capable of 640×480 @ 256 colors and 1152×896 @ 2 colors. 8-channel 8-bit stereo sound, FWIW.
As for the Compaq… we’ll go ahead and use a 16 MHz machine, and US prices (easier to find)… $6499 for a 40 meg drive, but only 1 meg of RAM, no graphics ($599 for an EGA card that can do 640×350 @ 16 colors) and a beeper. Source: Infoworld, http://rdr.to/0pJ (And it looks like they kept that price up past the release of the 386/20 in October 1987, which had a 60 meg drive, and was $7499.)
Oh, and the floating point unit is optional, too, on the Compaq. So, discard the floating point results from the benchmarks I linked, they were with the FPU.
So, for the equivalent of $2450, you got faster hardware, better graphics, much better sound, and 4 times the RAM, albeit half the storage, as a $7098 machine (remember, gotta add a graphics card. You could cheap out and get the MDA/CGA-compatible card for $199, though.)
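The price arithmetic from the figures quoted above is straightforward; the GBP-to-USD rate is not stated anywhere, so it is inferred here from the post's own conversion of 1499 GBP to roughly $2450.

```python
# Price comparison using only the numbers quoted in the post above.
archimedes_gbp = 1499
gbp_to_usd = 2450 / 1499          # rate implied by the post's ~$2450 figure
compaq_base, ega_card = 6499, 599
compaq_total = compaq_base + ega_card
archimedes_usd = archimedes_gbp * gbp_to_usd
print(f"Compaq total:  ${compaq_total}")                       # $7098
print(f"Archimedes:    ${archimedes_usd:.0f}")                 # $2450
print(f"price ratio:   {compaq_total / archimedes_usd:.1f}x")  # 2.9x
```

So on these figures the Compaq cost nearly 3x as much, before accounting for its 1 MB of RAM against the Archimedes' 4 MB.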
Edited 2010-04-06 23:25 UTC
Itanium is nowhere to be seen in scientific workloads. And GPUs are seen as a curious gimmick right now. With the number of cores packed onto each chip exploding, things do not look promising for GPUs.
Edited 2010-04-06 15:14 UTC
I doubt it… maybe true in the past, but there’s cheaper horsepower available today.
… as well as being a historically interesting screw-up :] Really.
I would not call Sparc a failure (yet), as Sunacle seems to believe in it; MIPS has reincarnated in China, and PowerPC is in fact very successful: there is a PPC in every PS3, and two of them in every X360… that makes dozens of millions of PPC CPUs sold. And Power7 is coming.