OSAlert recently linked to several articles on the two remaining big-iron, RISC-based platforms still alive and kicking, something of great interest to me both in a professional capacity and for personal reasons (I wouldn’t be an avid OSAlert reader and poster if I wasn’t into non-mainstream architectures).
Introduction
During a thread in which several of us were discussing the new POWER7 architecture, user Tyler Durden (I like the name, by the way), somebody with vastly more knowledge of RISC processors than I, pointed out the following:
The development of a high end processor and its associated fab technology will soon reach the billion dollar mark.
$1 billion! Ouch!
Basically, what this means is that, as these systems serve a niche market, high-end RISC-based systems are reaching a point where it’s no longer possible to sell enough of them to recoup the cost of developing them. This, for me, is going to cause countless issues on a professional level, and if you’ve ever worked with one of these systems in a mission-critical environment, you’ll know exactly what I’m talking about.
Why care?
You would most certainly be forgiven for asking what’s so special about these systems anyway. Well, for me, it comes down to several things. Because both the hardware and (usually) the OS of these systems are developed and/or certified by the same company, including device drivers, there is a tight integration that only the Mac can match in the consumer space, and although this in itself is nothing special, it does allow for enterprise-level features you can’t get anywhere else. One such feature is the ability to identify a faulty device, be that an expansion card, DIMM or even a CPU, power off that specific device or the port into which it’s plugged, open the system and replace the faulty component without powering down the system, rebooting or even entering single-user mode. Obviously, these features are exceptionally handy when dealing with high-availability applications and services, but to me they pale in comparison to these systems’ virtualization capabilities.
Current Intel and AMD processors support virtualization features, via VT-x and AMD-V respectively, but these deal only with CPU virtualization of x86, an ISA where it has been notoriously difficult to implement full virtualization. What systems like POWER and SPARC have had for quite some time is hardware I/O virtualization, provided by a unit known as an IOMMU.
IOMMU: A brief introduction.
IOMMUs basically perform the same function as a CPU’s MMU, in that they map virtual memory address ranges to physical address ranges. Though they can be used for several other purposes, such as providing memory protection against faulty devices, their main use is to provide hardware I/O virtualization capabilities. Devices on the PCIe bus do almost all of their communication with other devices via main memory, a process known as DMA (direct memory access). If a hypervisor were to allow a guest OS direct access to a PCIe device’s memory space, this could overwrite a) the data of any other guest OS with access to that device, causing corruption, and, more importantly, b) data from the host system, almost certainly taking the system down. Currently, on x86 platforms, this is avoided by using either dedicated devices or paravirtualization, a technique that uses software to emulate a device, which is then presented to the guest OS. If you’ve ever created a virtual hard drive file for use with VirtualBox, QEMU or VMware, you have basically used paravirtualization to present the semblance of a hard drive to the guest OS.
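To make that translation step concrete, here’s a toy C sketch of what an IOMMU does on every DMA access. Everything here (the single-level table, the page counts, the names) is invented for illustration; real IOMMUs use multi-level translation tables set up by the hypervisor.

    #include <stdio.h>

    #define PAGE_SHIFT 12u               /* 4 KiB pages */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)
    #define NUM_PAGES  16                /* tiny I/O address space */

    /* Per-guest translation table: I/O-virtual page -> physical page.
     * -1 means "no mapping": the device may not touch that page. */
    static int iommu_table[NUM_PAGES];

    /* Translate a device-issued address; real hardware raises an
     * I/O page fault on a miss instead of letting the DMA through. */
    static int iommu_translate(unsigned iova, unsigned *phys)
    {
        unsigned page   = iova >> PAGE_SHIFT;
        unsigned offset = iova & (PAGE_SIZE - 1);

        if (page >= NUM_PAGES || iommu_table[page] < 0)
            return -1;                   /* blocked: fault, no corruption */

        *phys = ((unsigned)iommu_table[page] << PAGE_SHIFT) | offset;
        return 0;
    }

    int main(void)
    {
        /* Grant this guest's device three pages; block everything else. */
        for (int i = 0; i < NUM_PAGES; i++)
            iommu_table[i] = -1;
        iommu_table[0] = 7;
        iommu_table[1] = 3;
        iommu_table[2] = 12;

        unsigned probes[] = { 0x0123, 0x2abc, 0x5000 };
        for (int i = 0; i < 3; i++) {
            unsigned phys;
            if (iommu_translate(probes[i], &phys) == 0)
                printf("DMA to iova 0x%04x -> phys 0x%05x\n", probes[i], phys);
            else
                printf("DMA to iova 0x%04x -> I/O page fault\n", probes[i]);
        }
        return 0;
    }

The key point is the failure path: a device assigned to one guest simply has no mappings for anyone else’s memory, so a stray DMA faults instead of corrupting anything.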
The problem with paravirtualization is that, because it adds a translation layer between the hardware and the OS, it incurs a performance penalty. This is why, until recently, 3D acceleration for guest operating systems was infeasible. With hardware I/O virtualization, this penalty is drastically reduced, as the only added step is the virtual-to-physical address translation that the IOMMU performs, allowing you to allocate the same device to multiple guest operating systems as though they were running directly on the hardware.
For many devices this isn’t actually a problem, as they are already “virtualized”; RAID arrays, for example, where you allocate a LUN to a specific host. But what if you have a multiport network adapter or HBA and wish to map a specific port to a specific guest OS? You can buy multiple adapters, but this approach is limited by the number of expansion slots available.
What is often done with x86 systems in these situations is that the ports are bonded together, paravirtualized and shared out among however many guest operating systems, a process called multiplexing; the traffic is then usually VLAN- or VSAN-tagged (see the sketch below for what a VLAN tag actually looks like). This is far from perfect: not only do you incur the performance penalty of paravirtualization and reduce flexibility, there are added costs for the advanced infrastructure needed to support these setups. Although VLAN tagging is common as muck these days, VSAN tagging is still a relatively new technology and, last I heard, only implemented on top-of-the-range Cisco fibre directors.
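For the curious, “VLAN tagged” just means an extra four bytes inserted into each Ethernet frame, per IEEE 802.1Q. A minimal C sketch of that tag (the VLAN ID and priority are example values; on the wire both fields are big-endian, which this sketch ignores):

    #include <stdio.h>

    /* The four extra bytes 802.1Q tagging inserts into an Ethernet
     * frame, right after the source MAC address. */
    struct vlan_tag {
        unsigned short tpid;  /* Tag Protocol Identifier, always 0x8100 */
        unsigned short tci;   /* 3-bit priority, 1-bit DEI, 12-bit VLAN ID */
    };

    int main(void)
    {
        /* Tag traffic for VLAN 42 at priority 5 (example values). */
        unsigned priority = 5, dei = 0, vlan_id = 42;
        struct vlan_tag tag = {
            .tpid = 0x8100,
            .tci  = (unsigned short)((priority << 13) | (dei << 12)
                                     | (vlan_id & 0x0fff)),
        };

        printf("TPID 0x%04x, TCI 0x%04x (VLAN %u)\n",
               tag.tpid, tag.tci, tag.tci & 0x0fffu);
        return 0;
    }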
So what am I going to do when it becomes too expensive to produce my beloved high-end RISC systems? Am I going to have to deal with inelegant systems to continue to provide my customers with virtualization? Thankfully, AMD and Intel have both come up with solutions.
Enter AMD-Vi and VT-d.
Not long after Mr. Durden made the comment that prompted me to start thinking about this situation, Peter Bright published a guide to I/O virtualization on Ars Technica describing the new IOMMUs being developed for the x86 platform: AMD’s AMD-Vi and Intel’s VT-d. Basically, these provide what the likes of POWER- and SPARC-based systems have provided for years now: the ability to present a guest OS with hardware I/O virtualization. Not only will this allow big-iron UNIX vendors to produce high-end systems using commodity hardware, it will allow one-to-one hardware access to graphics cards, letting guest operating systems take full advantage of the latest 3D capabilities. As long as the guest OS has the required drivers, you can now ditch that dual-boot system and start up a virtual machine for all your gaming needs.
Some of you may have noticed there is a slight, to say the least, problem with this kind of setup. If hardware access is one-to-one, aren’t we back in the same memory-corruption boat as before? What if two guest operating systems attempt to access the same device at the same time? To avoid the problems that could arise from one-to-one hardware access while still retaining multiplexing capabilities, devices on the PCIe bus need to present the same set of functions multiple times. Although this sounds like a lot of work, something like it has already been implemented: PCIe devices already support multiple functions, but so far this has been used to expose different hardware capabilities. With the adoption of IOMMUs by both AMD and Intel, it will be possible to develop devices that are virtualization-aware, presenting several guest operating systems with exactly the same set of functions.
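Here is a rough C sketch of what such a virtualization-aware device looks like conceptually: one physical device owning the hardware, plus several identical virtual functions, each assignable to a guest. The structure and names are invented for illustration; the real mechanism is standardised in the PCI I/O virtualization (SR-IOV) specs.

    #include <stdio.h>

    #define NUM_VFS 4   /* identical virtual functions the card exposes */

    /* A virtualization-aware NIC, sketched: the card presents several
     * identical virtual functions (VFs), each with its own registers
     * and DMA queues, so each guest can be handed a VF and drive it
     * directly through the IOMMU. */
    struct virtual_function {
        int owner_guest;     /* which guest this VF is assigned to */
        int tx_queue_depth;  /* per-VF resources, isolated from the rest */
    };

    struct nic {
        const char *model;
        struct virtual_function vf[NUM_VFS];
    };

    int main(void)
    {
        struct nic dev = { .model = "example-10GbE" };

        /* The hypervisor hands each guest its own VF; every guest sees
         * exactly the same function set, as if it had the whole card. */
        for (int g = 0; g < NUM_VFS; g++) {
            dev.vf[g].owner_guest    = g;
            dev.vf[g].tx_queue_depth = 256;
            printf("VF%d of %s -> guest %d\n", g, dev.model, g);
        }
        return 0;
    }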
…and finally.
Although it could be some time yet before we see the demise of high-end custom ISAs, the writing is on the wall, and although I’m going to miss these systems, if only for their novelty value, I’m going to leave you with another quote from Mr. Durden:
…the processor is not the main value proposition for their systems. The actual system, and its integration is.
Amen to that, brother!
For further reading on virtualization, check out the following links:
- http://en.wikipedia.org/wiki/Platform_virtualization
- http://arstechnica.com/hardware/news/2008/08/virtualization-guide-1.ars/
- http://arstechnica.com/hardware/news/2008/12/virtualization-guide-2.ars/
- http://arstechnica.com/business/guides/2010/02/io-virtualization.ars/
- http://en.wikipedia.org/wiki/Virtualization
About the author:
I’m a systems engineer and administrator mostly dealing with UNIX and Linux systems for the financial sector.
Currently, Intel does not include its virtualization extensions on all models of its chips, and it’s not easy to tell whether a given chip has the extensions just by its name. This holds back many things that could be done with virtualization on desktops (e.g. WinXP Mode on Windows 7). I imagine they’ll probably repeat that for the next generation of virtualization, which means there won’t be as much demand for hardware that is aware of virtualization, so it will be expensive and hard to find for a longer period of time than normal.
Yes, that is something that really sucks about Intel. They have a list of CPU features that they basically combine in a Cartesian product to create a product matrix, and then create individual CPUs for every square in that matrix. IOW, they are segmenting their CPU market into a thousand little niches for no good reason, other than that they can.
Working with AMD CPUs is so much easier. Turion for mobile, Sempron for low-end/budget desktops, Athlon/Phenom for desktops, Opteron for servers. Other than the type/speed of RAM, size of L2/L3, and number of cores on the die … they all have the same CPU features. Even the lowly Sempron supports 64-bit and hardware virtualisation.
Like Microsoft, Intel’s chief competitor/enemy is itself.
Oh, give me a break, Intel isn’t pushing 32-bit processors anymore. Sure, the low end doesn’t come with hardware virtualization support, but you shouldn’t be buying a single-core Celeron in the first place if you plan on running a VM.
You say AMD is easier to work with but I got seriously burned by that period where they kept changing socket interfaces. AMD also takes too long to play catch-up when it comes to power efficiency.
AMD is changing sockets far less frequently than Intel right now, and has been doing a bit better on the power front too, probably b/c they are licensing a lot of the patents from what used to be Transmeta.
What are you talking about? Intel still makes CPUs for LGA 775. Maybe you are unaware of when AMD burned everyone with their abrupt ditching of Socket 939. It was only on the market for two years.
As for power efficiency, what does AMD have to counter the Arrandale lineup? They not only use a 32nm fab but also come with impressive features like Turbo Boost, Hyper-Threading and HD video support. I’d consider AMD for a desktop system, but Intel still wins when it comes to laptops.
I’ll raise you a Socket 423.
I think you’re choosing to selectively forget the Rambus debacle. Had Rambus chosen a different path, RDRAM would still be in use today. The 845D chipset and Socket 478 were a direct result of the actions Rambus took.
Intel has of late gotten into the habit of making a new socket for every chip it releases. AMD, which has made only a couple of changes in the last few years, is more stable about socket changes.
So it’s basically: Intel is burning everyone with socket switches, where AMD used to.
Also, on the power front: Intel is pretty much only using its in-house SpeedStep technology for power management, while AMD is using Transmeta’s LongRun technology instead. One of the big differences is how many points of power optimization are available: SpeedStep typically has 3 or 4, where LongRun has dozens. (Even if Intel upgraded SpeedStep to more than 3 or 4, it’s still a drastic difference.)
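If you’re on Linux, you can count the frequency steps your own CPU exposes through the kernel’s cpufreq subsystem. A small sketch, assuming a driver such as acpi-cpufreq that publishes the scaling_available_frequencies file (some drivers don’t):

    #include <stdio.h>

    /* Count how many frequency steps ("points of power optimization")
     * the cpufreq driver exposes for CPU 0. The file lists each step
     * in kHz, separated by spaces. */
    int main(void)
    {
        const char *path =
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror(path);   /* driver may not provide this file */
            return 1;
        }

        unsigned long khz;
        int steps = 0;
        while (fscanf(f, "%lu", &khz) == 1)
            printf("step %d: %lu MHz\n", ++steps, khz / 1000);
        fclose(f);

        printf("%d frequency steps available\n", steps);
        return 0;
    }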
Blah blah blah, SpeedStep. Yes, we’re all aware of SpeedStep; it’s not exactly a new technology.
Now tell me exactly what AMD has to compete with the Arrandale series when it comes to performance/efficiency.
Not a future lineup but right now.
Funny – since Arrandale is basically an updated version of the same architecture used by the Pentium:
http://en.wikipedia.org/wiki/Arrandale_%28microprocessor%29
http://en.wikipedia.org/wiki/Nehalem_%28microarchitecture%2…
http://en.wikipedia.org/wiki/Penryn_%28microprocessor%29
AMD answered the Pentium with the K8, and now the K10 – available in Athlon II and all Phenom processors on market.
It’s also quite interesting how Intel has had to backpedal and redesign toward what AMD did in the K8 in order to keep up with performance.
Funny stuff.
Intel has Core-series duals and quads w/o HW virtualization, and it is not unknown for motherboards to lack the feature, even when the chipset supports it.
Meanwhile, for AMD, if you avoid Semprons and LE-series Athlons, you’re good.
Likewise, Intel sells many such Celerons, and many Atoms, as 32-bit only.
It would be sad to see more architectures die off, but their advantages are slowly being worked into mainstream computing (to an extent). I at least hope SPARC survives.
Well, SPARC is an open spec, you can produce your own if you like.
But really, it is sad.
x86 and its 64-bit variant are horribly inefficient things; 64-bit is a bit better, I hear, but delivering cheap boxes, also for home use (the large-volume market), seems to be winning?
I guess we still have ARM as the last large-volume competing architecture?
Hmm, I seem to be forgetting Power, MIPS and ?? for embedded. I guess the whole smartphone market somehow blinded me a bit.
The best comment I heard from a CPU engineer was that x86 is like sausage: it’s great as long as you don’t know what goes into it.
Yes, it is true that x64 fixed a lot of the shortcomings of x86, like the relative lack of registers and, of course, address space.
x86 is less efficient from an engineering point of view, but Chipzilla has been able to push efficiency gains through intensive R&D. It’s kind of like how an advanced V8 can get better mileage than a standard V6.
RISC chips still have advantages, especially when it comes to custom CPUs. This can be seen in the fact that MS went with a PowerPC-based CPU for the Xbox 360 instead of partnering with Intel. The early 360 dev kits were actually Mac G5s with Microsoft stickers.
Intel stopped making new CISC processors with the introduction of the Pentium Pro.
Modern Intel processors are RISC. They use a lot of transistors to translate the CISC instructions into something their cores can handle. Sure, it’s all one big chip, but the x86 ISA is essentially implemented through hardware emulation.
If it were not for the requirement to keep compatibility with Microsoft’s operating system, x86 would have been dead a long time ago. There is no advantage to it in any way, shape or form, except for backwards compatibility and people being familiar with it.
No, modern x86 CPUs are still CISC; yes, they have a RISC-like core, but that doesn’t change the fact that they still present a complex instruction set (see the toy decoder below).
The Itanium, however, is clearly a RISC CPU and is advertised as such:
http://www.intel.com/products/processor/itanium/index.htm
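For what it’s worth, the “RISC-like core” point is easy to picture: the decoder cracks a complex x86 instruction into simple internal micro-ops. A toy C illustration (the micro-op names and the three-way split are invented; real decoders are far hairier):

    #include <stdio.h>

    /* Toy illustration of micro-op "cracking": a CISC instruction like
     *     add [mem], reg     (read-modify-write on memory)
     * is split by the decoder into RISC-like internal operations. */
    enum uop { UOP_LOAD, UOP_ADD, UOP_STORE };

    static const char *uop_name[] = { "load  tmp <- [mem]",
                                      "add   tmp <- tmp + reg",
                                      "store [mem] <- tmp" };

    int main(void)
    {
        enum uop cracked[] = { UOP_LOAD, UOP_ADD, UOP_STORE };

        printf("x86: add [mem], reg  cracks into:\n");
        for (int i = 0; i < 3; i++)
            printf("  uop %d: %s\n", i, uop_name[cracked[i]]);
        return 0;
    }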
It may be that developing a new CPU costs a lot of dough, but over what time period? And how long is it sold? Are costs growing faster than the revenue? Are current competitors leaving the market?
There are a lot of variables here and as long as IBM can sell chips for Xboxes and PS3s and Wiis there will be development, it just may be that the big iron ISAs are just a byproduct of that.
I, for one, am not convinced that special CPUs are going the way of the dodo. But I’d like to be convinced…
The one thing IBM does that Intel and AMD don’t is make very heavy use of computer-aided design. Their chips are modular, and the components are easily put together to make various types of custom chips for IBM’s customers.
That is probably a big reason why all the modern gaming systems use PowerPC in their design. IBM is able to crank out custom designs relatively quickly.
CPUs are fabulously expensive to create. Intel has 3-4 generations of CPU designs in their corporate pipeline at any given time.
As Intel releases its newest chip, it is already designing the manufacturing facilities for the next generation of chips, and while designing those facilities it is also developing and testing the next-next generation of designs in its labs.
So that is why it seems to take Intel or AMD so long to modify their chip designs when they turn out not to be competitive. Consider the Pentium 4, and why it took Intel so long to come out with a chip to put the hurt back on AMD…
It takes _years_ of work to get a new processor design out.
And what makes it worse is that as you move to new and smaller processes you have to, basically, build all-new production facilities. It’s more cost-effective for Intel or AMD to build an all-new manufacturing plant for a new fab process than it is to try to retrofit or upgrade an older facility.
Exactly. Think of POWER7 as the deluxe version with all the bells and whistles. IBM still makes lower-end POWER chips that pull down technology from the highest-end models, and considering they charge a huge premium for them, they’re at least making their money back.
The real issue is that hardware is not a 20% growth business anymore, and you can’t gamble against the future like companies used to.
I think we’re seeing a trend toward more custom systems. Oracle bought Sun, and one of the key reasons seems to be specifically the hardware, as SPARC is the most common Oracle platform. Apple is pushing its own spin on mobile chips, and I think they’ll gradually move to more and more “personal” devices rather than traditional systems, but they’re going “fabless”, farming manufacturing out to other companies. Nvidia and VIA have always been fabless. ARM and MIPS long ago switched to an “IP” model of selling the designs and letting companies find their own manufacturing. Even AMD has separated its design and fab businesses, because the cost of cutting-edge fab facilities means they have to split so other companies feel comfortable sharing manufacturing time at “AMD’s” fab.
Again, companies like IBM want to be in the manufacturing business for the control of technology, and as things consolidate, that makes other people pay them to keep IBM’s factories up to date on a regular basis. They might “lose” money on something like POWER7, but they can sell manufacturing time and technology for a die-shrink for Xboxes or Wiis and make money hand over fist that way.
Capital investment required for the current silicon wafer technology is huge. There haven’t been any huge changes in the area for a while, so basically there are fewer players who can survive the competition.
I really can’t wait until somebody discovers a new way to make processors that can be done in someone’s basement or garage. That would really change the whole market dynamic.
You can get a friggin dual-core Wolfdale for 50 bucks.
What the hell do you want? A free CPU with every BK value meal?
So apparently computers and computing have gotten as far as they ever will and we may as well give into our intel overlords?
How does that make them overlords? Because we buy something they sell? That’s called capitalism.
The demise is long overdue. Other architectures might have some niche advantages, but the SPARCs, Itaniums, PowerPCs and whatever else don’t cut it (anymore).
I work for a company that produces software that usually runs on AIX, HP or Sun systems. Normally between 1 and 128 processors (yes, big stuff). Recently we ported to Linux on x86 as well. And then we tested the performance.
x86 was up to three times faster than all the other stuff, for a fraction of the hardware cost. Of course, you still pay a lot for support and licenses to IBM, Sun and HP, but really, it makes you wonder why people are still computing on special CPUs.
Even in the x86 market you can achieve redundancy and hot-swap hardware, but maybe there are some corners where proprietary hardware is useful. Can’t think of many, though.
It just proves that your company’s software works better on the x86 architecture; there is still a lot of other software that runs far better on those big-iron CPUs than on x86. Oracle DB is one piece of software that runs very well on POWER and Itanium if tuned properly. And there’s one other thing the big-iron boys win at in the enterprise: scalability. When you combine all those things, reliability, availability and scalability (RAS), those architectures shine on their own, and x86 simply can’t match them, at least not yet. The things you see coming to the x86 space have been in big-iron boxes for ages. And when you talk about big companies, they don’t care about paying $100k less for a box if the loss per hour when that box is down is $200k (or even more).
That was actually the biggest surprise. We have customers that insert and process in excess of 100 million records per day (we use Oracle), and x86 was faster than the other CPUs for a fraction of the cost. Not the Oracle license cost, of course, though if performance is better you can buy fewer Oracle licenses, which often go per processor, as you need fewer processors.
The nature of our business is such that a downtime of a couple of hours is acceptable, so no fancy realtime failover hardware is required.
All I can say is: if you buy from IBM, Sun or HP, don’t do it for the reputation of the CPU; do it for what they offer extra, and measure the performance (or ask your software vendor for figures).
That’s key: you identified that you don’t have to be absolutely 24/7.
The savings you realize here end up being real savings for customers as well. I suspect more and more traditional “must have 24/7” customers are reconsidering their “must” position when they see the pretty huge cost differences.
It’s not just downtime. Can you survive an undetected bit error that corrupts data in your database? If a processor goes bad, will it silently mangle several thousand transactions before it fails?
Think of it this way: would you use RAID to store your critical data? If so, why are you expecting better error detection and recovery out of your fixed storage than out of your processor?
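To make the analogy concrete, here is the weakest possible form of that error detection, a single parity bit, as a toy C program. ECC memory and caches use stronger SECDED codes that can also correct single-bit errors; this sketch only detects them.

    #include <stdio.h>

    /* Compute even parity over a 32-bit word, one bit at a time. */
    static int parity(unsigned word)
    {
        int p = 0;
        while (word) {
            p ^= word & 1u;
            word >>= 1;
        }
        return p;
    }

    int main(void)
    {
        unsigned data = 0xCAFEBABE;
        int stored = parity(data);   /* parity recorded alongside the data */

        data ^= (1u << 13);          /* a cosmic ray flips one bit */

        if (parity(data) != stored)
            printf("bit error detected\n");      /* caught, not corrected */
        else
            printf("data looks fine (it isn't!)\n");
        return 0;
    }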
Because failure rates of conventional HDDs far exceed those of processors.
The processors are replaced well before their MTBF. However, if you have a crapload of processors, that doesn’t really matter. We don’t have enough for that to be a problem currently, but it might be if we had thousands of CPUs.
http://www.betanews.com/article/How-long-can-Unix-hang-on-What-thre…
This is a good read and is mostly related.
x86 is still proprietary to Intel, isn’t it?
with AMD and VIA as licensees.
POWER is proprietary to IBM, I think.
Though you can get licenses (Freescale, etc.).
SPARC is an open standard from Sun.
Well, the 80486 is NOT proprietary to Intel any more – the patents have all expired. However, many modern techniques for making an 80486 faster may still be covered under someone’s patents – Intel, NexGen (now owned by AMD,) the remnants of Cyrix (now owned by VIA and AMD,) and Centaur (owned by VIA.)
I think Intel is a licensee of AMD for the 64-bit extensions.
“AMD licensed its x86-64 design to Intel”
http://en.wikipedia.org/wiki/X86-64#Legal_issues
AMD and Intel have an old cross-license agreement. Previously it was mostly for AMD’s benefit, but Intel has now also taken advantage of it.
Like Mr. Reilly, I really like this stuff, but at this point I mainly wonder how it will end. My guess is that IBM will be the last RISC/UNIX vendor standing. Among other things (e.g. performance), sharing the physical platform with the high-margin OS/400 market can’t hurt.
So I wonder, how will it end? Clearly, UNIX’s decline into legacy status could last decades; Unisys still sells Burroughs B5000-compatible machines (mostly emulated). But UNIX isn’t as hard to migrate from as MCP, and HP apparently felt comfortable enough about this to leave Tru64 customers out in the cold regarding compatibility. I wonder if other commercial UNIXes will end this way, or if Oracle will be selling emulated SPARCs in 2030?
Not just i5/OS (ex-OS/400): just think about the mostly governmental institutions (e.g. fiscal administration) that use IBM mainframe/HPC installations for their z/OS, MVS and VM stuff. The “end of job” card has not yet been given.
One chance the commercial UNIXes have to survive, at least from a “usage share” point of view, is to be open-sourced. The question, of course, is: will it make sense if there’s no platform, not even an emulated one, that they could run on?
Nor is it likely to be given any time soon. There’s nothing in the commodity market that can match what a parallel sysplex of z/OS systems can provide. When your system has to work with disastrous economy-wrecking consequences of failure, this kind of system (or other expensive high-reliability systems like Tandem, er, HP’s NonStop) is what you use.
IBM’s i5/OS, OS/400, or whatever the name is this week, can survive on just about any architecture, due to the virtual machine layer (TIMI) that hosts the operating system.
While it was convenient for IBM to use their own processors, which weren’t System/370-compatible, for their midrange division, they made the transition to a modified IBM/Motorola PowerPC 620 (as the PowerPC AS) and then to the POWER series without much trouble for customers.
Of course, the main problem is cooling (and the associated power requirements), which is likely why IBM will continue to produce their own processors, plus the fact that they go into their networking equipment and other systems.
Virtualization isn’t the only feature that keeps big-iron RISC ticking. There are a raft of RAS features built in to modern SPARC and POWER processors that just don’t exist in the x86 world. So far as I can determine, neither the current Xeon nor Opteron products have ECC L2 caches, let alone the type of instruction parity and retry that’s present in SPARC64 or POWER. Intel’s own Itanium supports these features (as well as core-level lockstep).
Producing a reliable system out of redundant unreliable parts only works insofar as faults are actually detected. x86 servers don’t match up to enterprise RISC servers in this area. Intel has little incentive to add these features to Xeon, which would just result in Itanium cannibalization. AMD ought to, but isn’t doing so – and they have bigger fish to fry. This niche will no doubt be dominated by IBM and Oracle for many years.
Opterons have had ECC caches since the early days. It’s up to the BIOS makers to enable it, though.
On our Tyan motherboards, we can set a full screen of options regarding ECC for the L1/L2/L3 caches. And another screen for ECC options for the RAM.
What’s ISA in this article? An uncommon abbreviation deserves definition when first used.
International Society of Arboriculture (thanks, Google)
or, for real, “Industry Standard Architecture”
No, “Industry Standard Architecture” is a type of bus; in this context, ISA means “instruction set architecture”.
http://en.wikipedia.org/wiki/Industry_Standard_Architecture
Maybe we’ll be going back to the bit-slice modular processor construction method, just like the ’70s and ’80s. The company I worked for had a 28-bit 2901-based add-on processor (to extend its 24-bit legacy processor).
We know it’s expensive to develop a processor. Intel spends well over $1 billion every three years to dig a hole in the ground for a new fab plant so it can keep producing faster chips.
It’s why Sun can’t keep SPARC up, why even Fujitsu is finding it tough to develop, and why other architectures like MIPS have fallen by the wayside. They have relied on a niche, high-return market that was always going to be at risk. It’s not rocket science.
On the topic of x86/x64 IO virtualization:
I just hope this is one step closer to a world where every user account is, really, a virtual machine. Each VM’s “worldview” would optionally be (to save space and ease maintenance) a combination of a “base” system image plus software maintained by an admin, and each user could alter their “system” as they see fit.
Combine this with a fast enough network and accelerated video compression, and we’re just a step away from having one “real” computer per household tucked away in the basement or a closet and a few relatively dumb terminals wherever we want them: the office, the kitchen, the kids’ bedrooms, the television for media streaming… just a little box that accepts local input, has a couple of free USB ports, video and audio I/O, and just enough brains to decode the compressed A/V in real time. Heck, this is basically what the OnLive gaming service hopes to do over the internet, so it should definitely be doable over a home network, maybe even the faster variants of WiFi.
Being unable to share hardware, particularly the video card, is what’s holding this sort of thing back: little Timmy needs to be able to play WoW while dad watches a video in the living room.
I liked the article, thank you. The Big Iron ISAs are cool but it’s interesting / amazing to see the other technology that these large systems have had for years that is still trickling down slowly to commodity systems. It’s fascinating to learn more about the capabilities of large systems; there’s a lot to learn there and the majority of folks using commodity hardware rarely get to hear about it. People perhaps think of a large, lumbering mainframe but don’t realise just how advanced some of this stuff still is…
I have one main quibble: Paravirtualization (or sometimes paravirtualisation in the UK) generally refers to modifications that expose virtualisation to the guest. That can involve modifications to the guest’s low-level architecture dependent code or to device drivers. If the guest thinks it’s talking to a real hardware device then that’s still full virtualisation / emulation of that device, regardless of how the VMM is actually providing that device. So providing a host OS file to the guest as a hard drive is still actually full virtualisation unless you’ve installed virtualisation-aware drivers into the guest.
I believe that this statement is slightly behind the state of the art, though I may be confused. The I/O virtualisation specs for PCI do allow a somewhat standardised way to export multiple virtual interfaces from a single PCI device.
A few devices that predate this spec also defined their own ways to export multiple virtual interfaces so that similar functionality has been available for a while now. This gets used to provide safe direct hardware access to virtual machines and to unprivileged applications. It’s still not that common, though, whereas I presume Big Iron makes equivalent functionality somewhat ubiquitous!
The scary thing is just how far ahead the likes of IBM have been; x86 systems are still catching up to features that have been available on IBM’s large systems for decades. And that’s ignoring the integration advantages of buying hardware, OS and support from one company. Not forgetting the extreme Ninja-ness of the support that the really serious vendors provide. I knew a guy whose university acquired an IBM mainframe in a competition; part of the deal involved a dedicated telephone line between the mainframe and IBM. They reconfigured storage and, before they knew it, they had a phone call from IBM saying “Your mainframe says there’s a problem”. That’s service!