Intel announced today its line of Itanium products for high-end computing servers. Codenamed Montvale, the chip is an update to Montecito, the dual-core Itanium 2 chip launched in July last year, Eddie Toh, regional platform marketing manager of the Server Platforms Group for Asia-Pacific at Intel, told ZDNet Asia in an interview on Monday.
The latest and greatest Itanium is manufactured on 90nm technology?! Xeons will be manufactured at 45nm pretty soon! There is no frequency upgrade except for the front-side bus, and the next version will appear sometime next year. Sorry to say, but this is so lame; just compare it to the developments on the POWER front. Shame on you, Intel: so many great processor architectures have been buried because of all the promises Intel and HP made with Itanium, and now we see what came of it.
You only mention POWER when SPARC is making leaps… 8)
What I find funny is that they would be better off fixing x86 than continuing to flog a dead horse; there are features at the high end which they would be better off going the full monty on and incorporating into their mainstream processors, MMIO for example.
MMIO? What’s that? And how should it be introduced or fixed in x86?
And what I find funny is how many people know precisely what Intel is doing wrong in terms of their strategic and economic decisions. Even when carrying the disadvantage of having just a smidgen less information than the decision makers at Intel.
The information is on Wikipedia. As for why? Look at the latest conversations regarding SCSI/OpenSolaris and how the lack of MMIO (when compared to SPARC) makes driver writing that little bit more difficult. It would also improve performance, especially on very large configurations.
Intel will be introducing it partially when they release their next x86 platform, which will have all the components (chipset/processor/etc.) virtualisation-aware; that should also address any performance issues.
It would also be great if the PC market got its act together and finally killed off the BIOS; UEFI is here, let's move on. Since moving to Apple, thanks to dropping all the legacy crap, the OS loads faster and there isn't the laundry list of issues I used to face. Part of the Mac's success is in the hardware: OpenFirmware avoided the crap of the BIOS, and UEFI does the same thing.
What in god’s name are you talking about? x86 systems have supported memory-mapped I/O since forever.
The information is on Wikipedia. As for why?
MMIO? As in memory mapped IO? Intel x86 CPUs have had this capability for a long time. Seeing as all their memory traffic goes through a discrete northbridge chip anyway, the main thing for the actual CPU to provide is really the memory access policies to make it useful.
It helps if you actually know what you’re talking about, when you’re making assertions.
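To put something concrete behind that: memory-mapped I/O just means device registers appear at ordinary physical addresses and get poked with plain loads and stores. A minimal C sketch (the address and register layout here are placeholders, not any real device, and it assumes a kernel/driver context where the mapping actually exists):

[code]
#include <stdint.h>

/* Hypothetical device register block at a platform-assigned
 * physical address (placeholder value, not a real device). */
#define DEV_BASE 0xFED00000UL

typedef volatile struct {
    uint32_t status;   /* bit 0 = busy */
    uint32_t control;  /* writing here kicks off a command */
    uint32_t data;
} dev_regs_t;

static dev_regs_t *const dev = (dev_regs_t *)DEV_BASE;

void dev_start(uint32_t cmd)
{
    /* volatile forces real loads/stores to the device; on x86
     * these compile to ordinary mov instructions, no special
     * port-I/O (in/out) instructions required */
    while (dev->status & 1)
        ;                   /* spin until the device is idle */
    dev->control = cmd;     /* memory-mapped register write */
}
[/code]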
Look at the latest conversations regarding SCSI/OpenSolaris and how the lack of MMIO (when compared to SPARC) makes driver writing that little bit more difficult.
I have a feeling you read some thread where people were talking about IOMMUs. Completely different, but again, due to the nature of Intel's CPUs, IOMMUs are more a function of the platform. And that's true of ia64 too: Itanium CPUs don't have IOMMUs either. IBM xSeries x86 systems do have IOMMUs, while, on the other hand, SGI's ia64 Altix systems don't.
It would also improve performance, especially on very large configurations.
Possible, but not always the case. If devices are capable, there is not much room for an IOMMU to improve performance, since the days of DAC over slow old 32-bit PCI are over. (Actually, using one could reduce performance due to translation-management overhead.) Memory protection is the main reason for the renewed interest recently.
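To make the translation/protection point concrete, here is a toy C model of the work an IOMMU does per DMA access. Everything here (names, single-level table, sizes) is invented for illustration; no real chipset exposes this interface:

[code]
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT    12
#define PAGE_SIZE     (1UL << PAGE_SHIFT)
#define TABLE_ENTRIES 1024

typedef struct {
    uint64_t phys_page;  /* host physical page number */
    bool     valid;      /* may the device touch this page at all? */
    bool     writable;
} iommu_pte_t;

static iommu_pte_t table[TABLE_ENTRIES];

/* Translate a device-visible DMA address to a host physical address.
 * The lookup is the per-access overhead; the checks are the memory
 * protection that is driving the renewed interest. Returns all-ones
 * where real hardware would raise a translation fault. */
uint64_t iommu_translate(uint64_t dma_addr, bool is_write)
{
    iommu_pte_t *pte = &table[(dma_addr >> PAGE_SHIFT) % TABLE_ENTRIES];

    if (!pte->valid || (is_write && !pte->writable))
        return ~0ULL;  /* protection fault: device strayed */

    return (pte->phys_page << PAGE_SHIFT) | (dma_addr & (PAGE_SIZE - 1));
}
[/code]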
They probably would like to kill the whole thing ASAP, but don't because it:
– would give them a lot of headaches because of contracts;
– would damage their public image with tech partners, tech media, and customers (the big ones that really spend money).
After all, Intel pledged its word, and it must honor it, even as it gives people more and more incentives to move away.
“The latest and greatest Itanium is manufactured on 90nm technology?! Xeons will be manufactured at 45nm pretty soon!”
The problem is that for processes under 90nm there are still a lot of unknowns regarding electromigration, which means that sub-90nm processors have a fairly compromised lifetime. In the non-mission-critical Xeon marketplace, which expects replacement in less than 18 months, that is not an issue. On top of that, the cache design for the Itanium is fairly hand-tuned and not easily portable to other processes, and the gains of shrinking do not offset the reduced performance of the resulting cache at 65nm. Even at 1.5GHz, an Itanium2 still manages top FP scores. Not too shabby.
In the mission-critical segment that IA64 and some other manufacturers target, speed is not as important as staying up for eons and having parts not fail over years of 24/7 operation. That is why a lot of IBM mainframes are not using Power6 but rather some “unsexy” 130nm processors. Because even at 90nm, electromigration may be considered too risky.
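For what it's worth, the standard way to frame that reliability argument is Black's equation for electromigration-limited mean time to failure:

MTTF = A \cdot J^{-n} \cdot e^{E_a / (k T)}

where J is the current density in the interconnect (which aggressive shrinks tend to raise), n is typically around 2, E_a is the metal's activation energy, k is Boltzmann's constant, T is temperature, and A is a process-dependent constant. Smaller geometries mean higher J and hotter wires, hence the conservatism.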
There is a method behind the madness…
“The problem is that for processes under 90nm there are still a lot of unknowns regarding electromigration, which means that sub-90nm processors have a fairly compromised lifetime. In the non-mission-critical Xeon marketplace, which expects replacement in less than 18 months, that is not an issue. On top of that, the cache design for the Itanium is fairly hand-tuned and not easily portable to other processes, and the gains of shrinking do not offset the reduced performance of the resulting cache at 65nm. Even at 1.5GHz, an Itanium2 still manages top FP scores. Not too shabby.
In the mission-critical segment that IA64 and some other manufacturers target, speed is not as important as staying up for eons and having parts not fail over years of 24/7 operation. That is why a lot of IBM mainframes are not using Power6 but rather some “unsexy” 130nm processors. Because even at 90nm, electromigration may be considered too risky.
There is a method behind the madness…”
Yes, enterprise machines are more conservative, but you are full of crap wrt IBM:
POWER6 – 65 nm
http://www.research.ibm.com/journal/rd/516/le.html
z6 – 65 nm
http://www2.hursley.ibm.com/decimal/IBM-z6-mainframe-microprocessor…
The previous POWER and mainframe processors were 90 nm.
“Yes, enterprise machines are more conservative, but you are full of crap wrt IBM:”
Before you use such language, I recommend you understand what you posted.
The current offering from IBM, the z9, is a 90nm process, just like Itanium, for the reasons I cited. The z6 will come out at 65nm just as the new 65nm IA64 parts roll out in a year or two. The z-series are the types of systems targeted by the Superdomes and Integrity series from HP. The z-series don't use Power6 but the z9/z6, which have some commonalities but are different enough to be their own beasts.
The consumer parts undergo more aggressive process shrinking than the carrier-grade stuff, which is usually 1 or 2 process geometries behind. The reasons are some of the ones I cited before for why the IA64 and z-series stuff are fabbed on the less “sexy” 90nm process.
http://www.research.ibm.com/journal/rd/511/poindexter.html
So rather than saying I am full of crap, just bother to read the links you referred to. In any case, I get a chuckle about all this nm stuff when most people in this forum don't even understand the basic operation of a transistor :-)
“Before you use such language, I recommend you understand what you posted.
The current offering from IBM, the z9, is a 90nm process, just like Itanium, for the reasons I cited. The z6 will come out at 65nm just as the new 65nm IA64 parts roll out in a year or two. The z-series are the types of systems targeted by the Superdomes and Integrity series from HP. The z-series don't use Power6 but the z9/z6, which have some commonalities but are different enough to be their own beasts.
The consumer parts undergo more aggressive process shrinking than the carrier-grade stuff, which is usually 1 or 2 process geometries behind. The reasons are some of the ones I cited before for why the IA64 and z-series stuff are fabbed on the less “sexy” 90nm process.
http://www.research.ibm.com/journal/rd/511/poindexter.html
So rather than saying I am full of crap, just bother to read the links you referred to. In any case, I get a chuckle about all this nm stuff when most people in this forum don't even understand the basic operation of a transistor :-)”
I read what I posted. In fact, I attended the z6 presentation.
Let’s start from scratch. Here is what you originally posted:
“That is why a lot of IBM mainframes are not using Power6 but rather some “unsexy” 130nm processors. Because even at 90nm, electromigration may be considered too risky.”
I never disagreed with your argument. I took issue with the numbers that you tried to use to support it. They are just wrong. The z9 mainframes that IBM is selling today, with their bluefire processors, are 90 nm. They are not “unsexy” 130 nm. It's nice that you corrected yourself in your reply.
Just wanted to note that Intel is developing Itanium only as long as HP pays for it.
That the new Itaniums are still 90nm just means that HP didn't want to pay the extra price.
If you want to point fingers at someone, point them at HP. Intel just invented a new architecture (which is still a good thing, even if it didn't go the way Intel hoped), but HP dropped PA-RISC and Alpha.
(which is still a good thing, even if it didn't go the way Intel hoped)
What’s good about inventing a crappy new architecture?
Intel never invented the architecture; HP did. Intel wanted a high-end, high-margin chip; HP had one. HP no longer wanted to be in the chip business, so they sold whatever assets they had to Intel.
Intel is still developing it, but basically it's a dead end for Intel. They have realised that although x86 is ugly, it's going to be the architecture that never died. BTW, this isn't the first time a superior architecture has gone up against x86 and failed.
Alpha was a superior architecture that went up against x86 and failed. IA64… just failed…
Alpha superior in what aspect?
AXP was a very compromised architecture, a lot of people seem to take that as “elegant.” By the time the 21364 came along, it was clear that it would take a significant investment to make it competitive in the post GHz era. The 21464 was a complete bloated pig that was almost impossible to fab. I love alternative architectures, however one must be realistic…
It is usually those who know the least about the subject who get to judge what “elegance” is. To this day I still get a kick out of the typical fanboy complaining about the “ugliness” of x86 and how PPC is the shit because it is “RISC!”, never mind that those people can barely write a half-assed program in C, much less code anything in assembler. And don't get me started on the idiots who couldn't pass an intro computer architecture class but get to weigh in on the latest design from a top architecture bureau.
Itanium is geared towards a segment of computing about which most people on this website have little to no knowledge. IA64 has been fantastically successful for HP; the superdomes et al are making a shitload of money for HP. In that context, Itanium is doing very well. Not only that, but even at 1.5GHz it achieves some impressive performance numbers.
There is more to computer architecture than being able to assemble a computer from whatever parts you just bought at Fry’s.
[i]Itanium is geared towards a segment of computing about which most people on this website have little to no knowledge. IA64 has been fantastically successful for HP; the superdomes et al are making a shitload of money for HP. In that context, Itanium is doing very well. Not only that, but even at 1.5GHz it achieves some impressive performance numbers.[/i]
I BACK YOU UP!
AXP was a very compromised architecture, a lot of people seem to take that as “elegant.” By the time the 21364 came along, it was clear that it would take a significant investment to make it competitive in the post GHz era.
We’re talking about instruction sets here, not the micro-architectures of particular CPUs. The IA64 ISA itself is just a crappy design. VLIW is a dumb idea for a general-purpose processor, and IA64 is an overly-complex and obscure VLIW at that. Maybe these things weren’t obvious when IA64 was designed, but they’re painfully obvious now.
Itanium is geared towards a segment of computing about which most people on this website have little to no knowledge. IA64 has been fantastically successful for HP; the superdomes et al are making a shitload of money for HP. In that context, Itanium is doing very well.
Itanium is a “success” by very limited and compromised criteria. The claims of Itanium “profitability” ignore the huge initial investment into the architecture. Itanium is making an operating profit for HP and Intel, but over the long term history of the product, it has lost money. The former point means it makes sense for HP and Intel to string Itanium along until x86 eats its lunch somewhere down the road, while the latter point means that if Intel and HP could go back in time and never do the whole EPIC thing, they would.
Business considerations aside, IA64 has failed as a piece of technology. Itanium is a reasonable chip for certain applications, but by and large its strengths have f*ck-all to do with IA64 itself. Pretty much the only codes that show EPIC in a good light are certain HPC codes, and while hindsight rationalizers will say otherwise, Intel and HP sure as hell didn't invest billions of dollars into a whole new architecture over a decade and a half to develop an ISA targeting such a specific niche!
Fundamentally, IA64 bet on certain things that just didn’t pan out. Specifically, mainstream software ecosystems aren’t conducive to VLIW designs, the compiler technology to make them effective isn’t feasible, and general-purpose software is moving away from the types of contexts in which VLIW makes sense. Things like this happen all the time, really — technology fails because it makes assumptions about other technology that don’t pan out. The only reason IA64 won’t die quietly is because Intel and HP really put a lot of money into it, and they want to milk it and at least get some of it back. For the rest of the market, the only thing IA64 really accomplishes is keeping a bunch of compiler research dollars tied up in unproductive endeavors.
The problem with your analysis is that the IA64 ISA was never intended to be visible to the programmer; ironically, neither were most RISC architectures. This whole nonsense of evaluating ISAs, which there is no quantitative way of doing, BTW, gets tiresome really. It is like arguing over which language is “better,” French or English.
There is nothing inherent to VLIW, RISC, or CISC for that matter that makes them more or less suited for general purpose computing. An ISA is merely a programming interface to a microarchitecture at the end of the day. And it is a function of the programmability of that microarchitecture where the burden of the generality of it lies.
An Itanium2 is no better or worse a general architecture than an Opteron. The compiler/VLIW pipeline combination seems to perform fairly decently as far as the Itanium2 is concerned, and the equivalent compiler/out-of-order pipeline combination seems to do the same for the Opteron machines, as far as general “purposedness” goes. They both do the same thing: get input… process input… dump output. It all depends upon the metrics you are considering. Price/power? Probably Opteron wins (I am just picking an example). Reliability/throughput? Maybe the Itanium2 has the edge.
I don’t want to put words in your mouth, but it seems that you are mixing up scalability with general-purposeness. As far as scalability goes, true, the IA64 is nowhere near the scalability factor of x86, for example, which can move from a laptop all the way to low/mid-range enterprise systems. Whereas the IA64 was always intended to stay in the high range of things. I assume that was Intel's idea. Things like predication on the Itanium mean that it will never be a power-efficient architecture, and its reliance on cache means that it will always be a pig and a silicon whore, never suitable for resource-constrained systems.
Is that a good or a bad decision? I don’t know. But please make no mistake that most architectures succeed because their manufacturer puts wads of cash behind them. It took a loooong time for IBM to recoup the investment it made in POWER, and for the most part the only reason why we have seen a Power6 is because IBM decided to put wads of money into it. Further, the main reason why we may see a Power7 is because DARPA is paying for its development.
Architectures like Alpha, MIPS, etc. have died off in the middle/high end of things because no single architecture can maintain itself there. In fact, the only reason x86 is where it is today is because Intel poured wads of cash into the Pentium and Pentium Pro, which allowed x86 to endure the onslaught of the RISC machines and out-of-order architectures that appeared through the 90s. Most people assumed x86 to be dead in the water in 1990…
Thus saying things like the only reason why IA64 won't die is because Intel and HP are pouring wads of money into it, is a bit disingenuous IMHO, because that is the reason why other architectures like SPARC, Power, and x86 won't die.
If you look at HP's standpoint on IA64, it is a win-win for them, as they got rid of the PA-RISC and AXP architectures, which they had no way of continuing to develop on their own. And they make money off their systems and services, not the processors themselves, as they stopped being a microelectronics vendor a long time ago. As far as Intel is concerned, the IA64 was a hefty investment, but it managed to pretty much kill off a big chunk of the competition at the high end. It means that sales are going to either IA64 or x86-64, not the old 64-bit competitors that were around a while back. So in the end it means $$$$ to Intel; whether it comes from IA64 or x86 is not that important for the final balance sheet.
I don’t particularly agree with the approach, and I am fairly attached to the AXP architecture as I did most of my graduate research on an AXP-like pipeline. However, one has to give the devil its due.
As far as Intel is concerned, they get to recoup a lot of their investment. A good deal of the tracing, value prediction, and static analysis technology from the IA64 compilers is making its way into their x86 compilers. Their performance measurement unit is also being moved over to the x86 pipeline. And a lot of the RAS research will go into their general products. Intel is a very cheap corporation, and they always find a way of recouping their investments, either in the short or the long run (which is fairly uncharacteristic for an American corporation; they are notoriously short-sighted).
In any case, we may end up seeing IA64 become part of the x86 ISA spec at some point, once it stops being cost-effective for Intel to keep two separate lines. At the end of the day, if they spent $1 billion to kill off and carve $10 billion worth from competitors, it seems quite a small investment in the scheme of things. And that is the light in which Itanium is regarded inside Intel…
The problem with your analysis is that the IA64 ISA was never intended to be visible to the programmer; ironically, neither were most RISC architectures.
VLIW isn’t just an ISA design, it’s an implementation strategy. The whole point of VLIW is to move complexity from the implementation to the compiler by creating a suitable ISA that exposes implementation details to the software. It is not just an abstract interface for programming the micro-architecture; such a statement goes against the very idea of VLIW. The purpose of EPIC specifically goes further: it uses VLIW principles to allow an implementation that can take advantage of large amounts of static ILP (in an in-order design, no less!). None of IA64 makes any sense without keeping that design goal in mind.
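To make “exposes implementation details to the software” concrete, here is a toy C model of an EPIC-style bundle. The field names and sizes are mine, not the real IA64 encoding; the point is that instruction grouping, normally an internal scheduling matter, becomes part of the binary interface:

[code]
#include <stdint.h>

/* Toy model of an EPIC-style bundle (illustrative only, not the
 * real IA64 format). The compiler, not the hardware, decides which
 * operations are independent and packs them together. */
typedef struct {
    uint8_t opcode;
    uint8_t dst, src1, src2;    /* register operands */
} op_t;

typedef struct {
    op_t ops[3];  /* three slots the compiler declared independent */
    int  stop;    /* stop bit: later bundles must wait for this one */
} bundle_t;

/* An in-order implementation can issue all three ops of a bundle at
 * once with no dependency checking, because the ISA guarantees the
 * compiler already did that analysis. That is the complexity shift
 * VLIW/EPIC is built around. */
[/code]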
There is nothing inherent to VLIW, RISC, or CISC for that matter that makes them more or less suited for general purpose computing.
The idea of depending on the compiler to discover ILP is what makes VLIW unsuitable for general purpose computing. The compiler technology isn’t there, and even if it were, nobody wants to recompile their code every year anyway!
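A plain-C illustration of the wall: in the first loop the iterations are independent, so a static scheduler can unroll and overlap them; in the second, each load's address comes out of the previous load, and no compiler, however heroic, can schedule around that:

[code]
#include <stddef.h>

/* Compiler-friendly: iterations are independent, so a static
 * scheduler (VLIW/EPIC or otherwise) can unroll and overlap them. */
long sum_array(const long *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

struct node { long val; struct node *next; };

/* Compiler-hostile: each load depends on the previous one, so there
 * is no ILP to find statically and the issue slots go to waste. */
long sum_list(const struct node *p)
{
    long s = 0;
    for (; p != NULL; p = p->next)
        s += p->val;
    return s;
}
[/code]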
The compiler/VLIW pipeline combination seems to perform fairly decently as far as the Itanium2 is concerned
No, it doesn’t. I2 performs “fairly decently” only on heavily optimized code run through heroic compilers. That’s where my point about the software ecosystem comes in. When you’re targeting the “general purpose” market (high volumes, wide distribution), a couple of heroic C and FORTRAN compilers plus the necessity of recompiling your software for each new iteration of the architecture don’t cut it.
A major lesson of the success of x86 processors in the last decade is that a good architecture is one that’s easy to generate code for, and one that runs existing code adequately (e.g., the Pentium Pro’s poor performance on 16-bit code drastically curtailed its success in the mainstream). IA64 falls down very badly on this criterion.
Whereas the IA64 was always intended to stay in the high range of things.
This is a retroactive rationalization. IA64 was intended to eventually replace x86; it doesn’t make sense in any other context. You don’t go to all the trouble of creating a fairly radical new architecture, one that you know is going to require a huge long-term investment in developing new software technologies especially for it, without expecting it to be very broadly applicable. IA64 was created because Intel thought that EPIC and VLIW would allow them to make better processors across its range of markets. It did not succeed in that regard.
Now, the Itanium series of processors was very likely always intended for the high range of things, but IA64 as an ISA wasn’t. However, as it has become obvious that IA64’s ideas of magic compilers haven’t panned out, people have realized that the sole redeeming qualities of the platform lie in high-end features of the Itanium implementation that have nothing to do with the ISA. As such, IA64 is de facto relegated to the high end, but not by choice!
Thus saying things like the only reason why IA64 won’t die is because Intel and HP are pouring wads of money into it, is a bit disingenuous
That’s not what I said. I said that IA64 won’t die because Intel has _already_ poured a wad of money into it, and is now looking to recoup whatever it can. In contrast, HP let Alpha die, because it had no such motivating drive.
As far as Intel is concerned, they get to recoup a lot of their investment. A good deal of the tracing, value prediction, and static analysis technology from the IA64 compilers is making its way into their x86 compilers.
No they’re not. The IA64 compiler technology is virtually useless to everyone else. They can get you a few percent here and there, but the complexity just isn’t worth it. It’s particularly stupid because nobody wants to do static FORTRAN compilers anymore anyway. The future, in the general purpose market, is in JIT’s that have to do code-gen in 100ms on the fly, and all of the stuff developed for IA64 is just too damn expensive for that.
At the end of the day, if they spent $1 billion to kill off and carve $10 billion worth from competitors
First, Intel’s investment in Itanium was more like $10 billion, and second, that’s another retroactive rationalization. If the real goal was to kill off a bunch of RISC competitors, Intel could have achieved that at much lower cost and _much_ lower risk by creating a traditional RISC architecture. VLIW, EPIC, none of that stuff was necessary from a strictly marketing standpoint.
You have a lot of preconceived notions from a third-party, overly superficial standpoint, so be it.
However, EPIC, for example, was an HP product, not an Intel one. And for the most part Intel never intended to replace its cash cow, x86, with IA64. EPIC-based products were intended from the get-go for the high end and the enterprise side of things, two fields in which Intel at the time had little to no presence. You make a set of assumptions and requirements that are of your own bias, and thus move the goal posts and declare success or failure accordingly. I was at Intel until recently, and that was not the light in which Itanium was cast (and the overall costs are much less than the $10 billion stated, per the shareholder info I got).
Second, EPIC, in theory at least, was not supposed to require recompilation of the whole code; rather, programs would rely heavily on shared code in the form of libraries, which could then be recompiled whenever advances in the compiler were available. This was to offer a way to decouple silicon and compiler advances. When EPIC was conceived, the silicon turnaround cycle was nowhere near the speed it is right now, so a processor was expected to have a lifetime of a few years. Allowing speedups to come decoupled from the silicon during those few years seemed like a sensible approach. At the time, at least…
The main problem for the Itanium folks is not that it is a bad processor, but rather that the x86 folks have been fairly successful at not only keeping up with but also commanding the performance curve. And that, as far as Intel is concerned, is their main goal, as x86 is their cash cow.
Itanium for the most part has been subsidized, either by HP or by Intel’s internal interests. The same can be said for any other non-mainstream architecture. POWER is only alive because IBM made a hefty investment in it, and even though a POWER chip is a money loser for IBM, they make it up in the long run with services, which are their bread and butter. Same goes for SPARC: SPARC64 at least is a money loser for the microelectronics side of Fujitsu; it is the services attached to the PRIMEPOWER side of things that make them money. The old adage “you’ve got to spend money to make money” is also true in the microelectronics world.
Alpha died because it ended up being a dead weight around the neck of DIGITAL first and Compaq later. The size of the investment required to design and implement the AXPs was too much for Compaq to recoup from the services stream provided by the AlphaServers and AlphaStations. Same goes for PA-RISC, which was at some point a significant drain on resources and money for HP. Heck, the only reason MIPS lasted in the non-embedded side of things was because SGI poured shitloads of cash into a dying platform, which in the end pretty much did the whole company in.
IBM and Intel/HP, on the other hand, are large enough that they can absorb the cost of their POWER and IA64 investments, because that gives them access to a market that other vendors are being locked out of, mostly due to the elevated entry fee required.
One has to understand the context under which Intel and HP see Itanium to better gauge its implications.
You have a lot of preconceived notions from a third-party, overly superficial standpoint, so be it.
A third-party, overly superficial standpoint? What does that even mean?
However, EPIC, for example, was an HP product, not an Intel one.
The ideas behind EPIC originated at HP, but IA64 was closely co-designed with Intel.
And for the most part Intel never intended to replace its cash cow, x86, with IA64.
You’re hedging with “for the most part” because it’s quite clear Intel did have much higher aspirations for IA64 than it has achieved. You can argue about whether Intel ever intended to put IA64 in every $500 PC, but the early literature certainly suggests that Intel did intend to target it at least at the large swaths of the server and workstation markets that PPro-based designs allowed x86 to capture.
EPIC-based products were intended from the get-go for the high end and the enterprise side of things, two fields in which Intel at the time had little to no presence.
While that was certainly the initial goal of the Itanium line, it certainly wasn’t the plan for IA64, as a whole, in the long term. It would make no sense for Intel to invest that kind of money into such a different architecture if they didn’t have bigger plans for it; it just wouldn’t. You’re basically saying that the guys behind IA64 were drooling morons, while I’m saying that they just misjudged the progression of technology.
You make a set of assumptions and requirements that are of your own bias, and thus move the goal posts and declare success or failure accordingly.
_I’m_ moving the goal posts? Where were you in the late 1990s? Ever since Merced started having problems, Itanium apologists have been moving the goal posts. Now their criterion for ‘success’ is ridiculously narrow. They’re defending a chip that’s basically only good for two things (HPC and mission-critical databases), and neglecting the huge monetary and intellectual investment that went into creating a design with such limited applicability.
The lack of proportionality is truly staggering. IA64 was an enormous proposition. It was not only a new architecture, necessitating all the things new architectures require (new toolchains, OS ports, application recompilations), but a new _kind_ of architecture. Even at the early stages it was clear that IA64 would require new compiler and software techniques to be developed. The end result just doesn’t justify the magnitude of the investment and risk that went into it. If Itanium really was all that Intel intended for IA64 to be, then their management was out to lunch when they green-lighted it. If they just wanted a good super-high-end part with lots of RAS capabilities, suitable for very scalable systems, they could have achieved that goal at _far_ lower cost and _far_ lower risk with a more traditional architecture.
There are also a couple of objective measures by which Itanium has failed. First, the project as a whole has failed to make money for the company. IA64 was a high-risk venture, and it has yet to provide a net-positive ROI. Second, IA64 has failed to prove the superiority of EPIC and VLIW. The compiler technology necessary to make it really useful just hasn’t materialized, and the software world has largely moved on from the type of workloads that EPIC was supposed to be good at.
Second, EPIC, in theory at least, was not supposed to require recompilation of the whole code; rather, programs would rely heavily on shared code in the form of libraries, which could then be recompiled whenever advances in the compiler were available.
“Code reuse” is a myth, and always has been. Aside from a few HPC applications (BLAS libraries, etc) and some games (and only then recently), nobody reuses code to do computationally intensive work. I don’t know what the mindset was at Intel when somebody put that forth as a good idea, but clearly there couldn’t have been any historical precedent supporting such a notion!
Either way, what the rationale behind such ideas was at the time is really beyond the scope of my argument. At _this_ point in time, it’s obvious that many of the things that EPIC depended on “in theory” just haven’t panned out. Trying to extract large amounts of ILP statically from general-purpose code is just not feasible, and it’s not clear that it’s even possible. It’s basically a dead-end idea, at least until there are fundamental advancements in compiler technology. Given the slow rate of progress of compilers (Proebsting’s Law versus Moore’s Law), counting on such advancements is just plain foolish.
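A rough back-of-the-envelope on that last point, taking Proebsting’s Law as compilers doubling performance every 18 years against hardware doubling roughly every 2 years:

\text{compiler gain over a decade} \approx 2^{10/18} \approx 1.5\times, \qquad \text{hardware gain} \approx 2^{10/2} = 32\times

Betting an architecture on the left-hand curve catching up is a tough sell.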
Again, for the nth time: you are deciding from an armchair what the expectations for IA64 were. I am just pointing out that the goals and expectations of both Intel and HP were, and are, far different.
As far as HP is concerned, they get to sell Itanium systems, which are making them lots of money. At the same time they got rid of PA-RISC and Alpha, which were serious money drains and which they could no longer afford to develop. Intel gets a segment of the market that they had no access to before, and most of the development gets subsidized by HP. Itanium2 is an HP design, BTW, and Intel is reusing fab technology it already had to begin with (which is what HP could not afford). There are myriad workloads, and for the ones HP wants Itanium for, HPC and transaction-oriented processing, Itanium2 does remarkably well. So this whole nonsense of describing “workloads” as some sort of homogeneous entity is pointless. It may not be a sexy field, and it may be something fairly foreign to most people reading this thread; it is, however, a fairly important market.
There have been plenty of problems and issues with Itanium, but that is true for every other processor programme out there. Thus, as I said, it is disingenuous to treat the norm as if it were some sort of IA64-specific handicap. Sun lost hundreds of millions on the US V, which had to be canceled, and now that their fab partner has gone out of the fabbing business they are up the creek without a paddle. IBM is trying to figure out what to do with POWER. Fujitsu’s SPARC64 is becoming a more expensive proposition with each generation. MIPS pretty much killed SGI. And even AMD is debating giving up the fabbing business because they simply can’t afford to develop a successor to the Opteron and get their fab processes in order. In that scheme of things, IA64 has weathered those waters relatively OK.
I just get a chuckle about all these armchair architect/computer tycoons, since what goes on inside of the beast is soooo different.
Sun might have lost with the US V, but they’ve won with the US T1 and now the T2, by throwing out the rulebook that the other manufacturers, including AMD and Intel, have stuck to since the mid-’80s. Fujitsu is rolling out CPUs of the same form now as well. New manufacturers out of Europe and Asia are jumping on board, so perhaps the US V’s failure was the best thing to happen to Sun in the end.