I had the pleasure earlier this month of attending a demo day at HP’s Cupertino campus to commemorate the tenth anniversary of the Superdome server, see what’s new in the high-end server market, and learn about what’s going on with HP-UX.
Hardware is really a secondary interest for OSAlert, but in this case the hardware is pretty impressive (and it had better be, considering that you can easily spend several million dollars on one of these). What I found most interesting was how much effort HP has put into making the hardware platform upgradable and modular. The Superdome was first released in 2000 with PA-RISC processors and can be outfitted with as few as one “cell” (a cell is like an overgrown blade) containing 4 CPUs. That same ten-year-old hardware could have been expanded up to its modern limit of 128 Itanium 2 “Montvale” EPIC processor cores and 2 TB of memory, spread across two adjacent cabinets with ancillary I/O cabinets.
In contrast to today’s common practice of enterprises relying on a large number of smaller, less expensive servers that are periodically pulled out and replaced with new hardware, users of these large servers appreciate never having been required to do a “forklift upgrade” over ten years. Having seen photos of large servers being hoisted into the upper floors of office buildings by crane, through openings where portions of the wall had been removed, I can see why.
You can run a mix of older and newer processors, adding cells with newer, faster hardware without having to decommission the older stuff unless you’re running out of slots, as long as they run in separate partitions. Partitioning and virtualization are a big part of how people use these servers, as we’ll cover in more depth later.
If you’re more interested in learning about the hardware, you can read an overview on the Wikipedia page, or more in-depth from HP.
Now, on to software. HP invited several bloggers and “new media” types to the event (and paid our travel expenses), and one of the most memorable comments came from blogger (and, I learned, long-time OSAlert reader) Ben Rockwood, which I’ll have to paraphrase: “I was just surprised to hear that people are still using HP-UX.” If you read Ben’s blog, you’ll learn that he’s a Solaris guy and not a fan of HP-UX, but even so, I think it’s a reasonable reaction. We haven’t heard much about HP-UX in a while. But as Ben points out in his coverage of the event, HP’s decision ten years ago to focus on partitioning and virtualization, thereby centralizing one company’s various computing tasks onto one machine, really gave the Superdome more relevance than Sun’s high-end servers, which seemed more focused on raw-power bragging rights. Since then, virtualization technology has become more sophisticated and gone downmarket, but HP’s early lead has helped it keep pace.
The Superdome actually supports two kinds of partitioning: a “hard partition,” in which individual cell boards are separated into electrically isolated partitions, and the more familiar virtual partitions we all know about these days. The virtual partitions give you more flexibility, and obviously, if you’re going to get anywhere near the 1000 VMs that the Superdome will support, the majority of them will need to be virtual ones. Since hard partitions support dedicated I/O, they’re best for applications where low I/O latency matters. The modular architecture lets you “scale out” with more virtual partitions to perform varied tasks, or “scale up” by adding new CPUs or cells when you need more power.
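Just to make the scale-up/scale-out distinction concrete, here is a toy sketch in Python. It is purely illustrative: the class names and numbers are mine, not HP’s partitioning tools or terminology, but it captures the idea of hard partitions built from whole cell boards (with dedicated I/O) hosting virtual partitions for individual workloads.

```python
# Toy model of cell-based partitioning. Illustrative only; not HP's tooling.
from dataclasses import dataclass, field

@dataclass
class Cell:
    cpus: int        # processor cores on this cell board
    memory_gb: int   # memory on this cell board

@dataclass
class HardPartition:
    """An electrically isolated group of whole cells, with dedicated I/O."""
    name: str
    cells: list = field(default_factory=list)
    vpars: list = field(default_factory=list)   # virtual partitions / VMs

    def scale_up(self, cell: Cell) -> None:
        """Add a newer, faster cell board alongside the existing ones."""
        self.cells.append(cell)

    def scale_out(self, workload: str) -> None:
        """Carve out another virtual partition for a new workload."""
        self.vpars.append(workload)

    def capacity(self) -> tuple:
        return (sum(c.cpus for c in self.cells),
                sum(c.memory_gb for c in self.cells))

# A latency-sensitive database gets its own hard partition (dedicated I/O);
# everything else shares a second hard partition via virtual partitions.
oltp = HardPartition("oltp", cells=[Cell(cpus=4, memory_gb=64)])
consolidation = HardPartition("consolidation", cells=[Cell(cpus=4, memory_gb=64)])
consolidation.scale_up(Cell(cpus=8, memory_gb=128))   # newer hardware added later
for app in ("web", "erp", "test"):
    consolidation.scale_out(app)

print(consolidation.capacity())   # (12, 192)
print(consolidation.vpars)        # ['web', 'erp', 'test']
```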
On the OS front, the Superdome can run HP-UX, Windows, SUSE, Red Hat, and OpenVMS, and can even run them all at once. Of the operating systems running on Superdomes, though, 80% are HP-UX.
In addition to the hardware platform and software tools that help a Superdome user augment the server’s power with new processors (you can add a new cell board without powering down the system) and dynamically monitor and re-allocate computing resources, HP has a really interesting service in which you can have hardware installed on your Superdome that you won’t use; it’s kept in reserve as a sort of insurance policy, and you pay only 30% of its full cost. If you have a spike in utilization, you can bring that extra capacity online instantly, and you pay the full amount only while you’re using it. HP notes that, in their experience, companies run at 15% utilization of their computing resources on average, in large part to be prepared for spikes (that never come). It’s like utility computing, only instead of offloading to the cloud, you temporarily beef up your existing computer.
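To see why that reserve arrangement can pay off, here is a back-of-the-envelope sketch in Python. All of the prices, the per-day activation charge, and the number of spike days are made-up figures; only the “pay about 30% up front, full price while active” structure comes from HP’s description.

```python
# Back-of-the-envelope: reserved "insurance" capacity vs. owning it outright.
# Every number below is invented for illustration; only the 30%-up-front,
# pay-in-full-while-active structure reflects the service as described.

CELL_PRICE = 100_000            # hypothetical price of one extra cell, owned outright
RESERVE_FRACTION = 0.30         # paid up front to keep the cell installed but idle
DAILY_ACTIVE_RATE = CELL_PRICE / 365   # hypothetical charge per day while switched on

def reserved_cost(spike_days: int) -> float:
    return RESERVE_FRACTION * CELL_PRICE + spike_days * DAILY_ACTIVE_RATE

for spike_days in (10, 60, 300):
    print(f"{spike_days:3d} spike days/yr: "
          f"owned ${CELL_PRICE:,.0f} vs reserved ${reserved_cost(spike_days):,.0f}")
```

Under these made-up numbers, the reserved cell only costs more than owning it outright once it is active for most of the year; for occasional spikes it is far cheaper, which is exactly the “capacity kept around for spikes that never come” scenario HP is pointing at.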
These big servers are expensive; dramatically more expensive per processor cycle than a bunch of racks of smaller servers would be. HP has a couple of answers to this. First is the utilization issue: if you’re getting 15% utilization out of your bank of smaller servers because you have to keep that capacity in reserve to avoid overloading your more important servers before you can shift resources around, then you may be better off with a more expensive server that can balance your computing load, let you run at 60% utilization, and still give you peace of mind. The second answer is that “companies are spending way too much money on operations and maintenance.”
I was having lunch the other day with an old colleague of mine, the CIO of a local company, and he was bemoaning the can-do hacker ethos of his staff. Normally we’d think of that as a good thing, but he mentioned that they were planning to hack together an appliance using open source software (software that needed a bit of customization to suit their needs). He pointed out that he could buy a commercial appliance to do what they needed for about $10,000, while what they were proposing would cost about $50,000 in labor and six months to get it up and running, plus who knows how much more in ongoing maintenance. His staff saw it as a way to advance the state of the art in open source software, which it probably was, but it was also a waste of money and effort that could have gone toward advancing the company’s true business goals.
That’s kind of the point HP was trying to make, I guess. You can save a lot of money by having 100 smaller servers do the job that one larger server can do, but unless you take into account all the labor costs of achieving that goal, you won’t be able to measure accurately which method is really cheaper. And don’t forget that you can’t just add up the salaries of the sysadmins who are doing the extra work; you need to take into account what they could be doing with their time if their operations and maintenance load were lightened. I’m not totally convinced that the economics of a Superdome-type server would work in most cases, but it’s food for thought. I’d appreciate hearing from OSAlert readers about their experience with big servers vs. multiple smaller ones.
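Since the argument hinges on labor and opportunity cost rather than just hardware, here is a crude illustrative calculation in Python. Every figure (server prices, admin headcounts, salaries, the opportunity-cost multiplier) is a placeholder I made up; the only point is that the comparison changes once you price in what the extra sysadmins could otherwise be doing.

```python
# Crude TCO sketch: a fleet of small servers vs. one big partitioned server.
# All numbers are placeholders; the structure, not the figures, is the point.

def tco(hardware: float, admins: int, salary: float, years: int,
        opportunity_factor: float = 1.5) -> float:
    """Hardware plus admin labor, with admin time valued above raw salary
    to reflect what those people could be doing instead."""
    return hardware + admins * salary * opportunity_factor * years

small_fleet = tco(hardware=100 * 6_000, admins=3, salary=90_000, years=5)
big_iron    = tco(hardware=1_500_000,   admins=1, salary=90_000, years=5)

print(f"100 small servers over 5 years: ${small_fleet:,.0f}")
print(f"one large server over 5 years:  ${big_iron:,.0f}")
```

With these invented inputs the big server comes out ahead once labor is included; tweak the headcounts or the opportunity factor and the answer flips, which is really the only conclusion worth drawing.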
Not being a big hardware guy, and not being any kind of HP-UX expert, that about sums up what I took away from the Superdome Tech Day. But one of the nice things about HP inviting a bunch of bloggers to this event is that we can have a little Rashomon moment, and you can see the other participants’ takes on it. Since each of us came from a different branch of the tech media tree, it makes for an interesting comparison:
- Ben Rockwood @ Cuddletech comes from a Sun server administration background.
- David Douthitt @ Administratosphere covers HP/UX, OpenVMS and Linux system administration.
- Andy McCaskey @ SDRnews captured the presentations on video.
- Shane Pitman @ Techvirtuoso covers enterprise technology.
- Saurabh Dubey @ Activewin covers Windows.
Of all the big guns of UNIX (IRIX, HP-UX, AIX, Solaris, OpenVMS), HP-UX interests me the least. Now, if HP put its attention into one of its Unixes, OpenVMS would be my choice. At least it has some really unique offerings; it could have been the king had it been given proper funding and development. C’est la vie.
If HP starts throwing its weight behind HP-UX, that means OpenVMS will be on life support that much more. Shame, really.
Edit: and yes, I know IRIX is dead(ish), but it was cool and had a lot of industry-first features when it was in active development, just like OpenVMS.
I had the pleasure of using IRIX on some SGI O2 machines back at university… a great operating system, IMO one of the best I have ever used. I just got on with it; it’s a pity it’s pretty much dead.
God, I wish I had more free time. I really want to do an OSAlert article called “The OSes of yesteryear,” talking about some of the great operating systems (like IRIX) and others and their strengths: what they were then, where they are now, what happened to make them fade away, etc. I am lucky because my job has permitted me to use virtually every OS out there over the years (though most of my time is spent in the embedded world these days). Still, there are so many great operating systems out there, and the world needs to know! (Seems like the right site for that kind of thing.) Anyone know if there is a video montage of every OS? If not, who wants to help me make one? It would be one awesome community project (as I don’t have access to every OS out there these days). Any takers?
OpenVMS has nothing to do with Unix.
OpenVMS isn’t Unix. It isn’t even Unix-y. It is cool, though.
I think IBM pretty much cleans up in megaservers with the ultra-speedy POWER6+ p595. Itanium has yet to pay any dividends for HP. The only reason I’d consider the HP option would be if we had legacy HP-UX apps.
With IBM, you can run any mix of Linux, AIX, and IBM i. Linux on POWER is second only to x86, AIX is pretty close to HP-UX but probably better for DB2 and Oracle, while IBM i has a large presence in companies that have been around a while.
POWER7 will be lethal. It will be interesting to see if Oracle/Sun can keep up. Intel seems to not really care about Itanium.
I’m actually surprised that Itanium is still being developed given how far x86-64 has come. Personally, I’d love to see HP port OpenVMS to x86-64 and sell workstations and servers pre-loaded with OpenVMS; with some effort on chipset design, x86-64 can be just as reliable as Itanium but without the massive cost involved and the prohibitively high development costs.
Oh well, it’s only a dream.
That will be the day Hell freezes over, then?
Seriously, if HP could kill VMS tomorrow then they would. But too many businesses use it. They find it sits there day in, day out and runs and runs and runs.
Last year, I decommissioned a VAX Cluster that had a cluster uptime of 17.6 years.
If by some chance HP were to have a sudden attack of Common Sense, then the first complaints would be (select one from below):
“Where’s my MS Messenger?”
“Where’s MS Word?”
“Where’s Photoshop?”
Sort of just like what people say about Linux.
OpenVMS on x86-64 workstations and servers wouldn’t be for the great unwashed masses, but for high-end workstation work that needs to be done, and for massive multicore x86-64 servers serving millions each day. The problem with HP is that it would require them to look long term, invest some money, and stop being a bitch for Microsoft. HP might as well label themselves “HP, a subsidiary of Microsoft”; at least it would be an honest reflection of their business plans.
So, a short disclaimer: I work for HP, but…
Realize that HP is really several companies.
The stuff you buy in Best Buy etc. is from PSG, and yes, they are very Microsoft-centric.
However, ESSN (Enterprise Servers, Storage and Networks) is very multiplatform. There is huge internal use of Linux and Windows, and Linux is a first-class citizen on all of our servers, both Itanium and x86-64. That includes support for RHEL, SLES, OEL, even Debian. Solaris is additionally supported on our x86-64 gear and is quickly reaching parity with Linux and Windows. HP doesn’t just prefer Windows; heck, we are allowed to run Linux on our work desktops instead of Windows when we don’t have a business need to run Windows for a specific application.
Itanium really has a sordid past at HP, because toward the end of DEC’s life, Intel and DEC got into a patent dispute, and Intel ended up buying most of the Alpha technology and manufacturing from DEC; hence VMS, Tru64, and NonStop went to Itanium. HP-UX went Itanium because HP couldn’t afford to maintain PA-RISC anymore.
Realize that, from an HP perspective, Itanium runs every OS they sell, and there are features that are absolutely critical to how NonStop works. Now, granted, Itanium seems a bit neglected compared to x86-64, but it really is the only chip that can run everything:
Windows / Linux / HP-UX / VMS / NonStop.
The folks I feel sorry for are the users of MPE! A bit like the AS/400: factory-floor reliability. HP is suffocating the OS.
A special day was when Fiorina, at her first HP World user conference, asked, “What’s MPE?”
I figured HP was done around ’94 when they blessed NT over HP-UX, and HP-UX has moved slowly ever since. At least AIX treats the sysadmin as educable; HP seems to think the sysadmin can do nothing without a menu.
OpenVMS for x86 already exists. It is called Windows (from 2000 to 2008 and 7, based on NT).
The NT kernel was almost a clone of VMS, made by the same designer, Dave Cutler:
http://en.wikipedia.org/wiki/Windows_NT#Development
Just because there are similarities does not make Windows OpenVMS. Just because they are designed by the same guy does not make Windows OpenVMS.
By your criteria, OS X, Linux, and Minix are all AIX, because they use similar designs and philosophies.
You’re kidding, right? Do you realise Sun is now king of the TPC-C benchmark? And it also comes out better on price/performance.
http://www.tpc.org/tpcc/results/tpcc_perf_results.asp
Also, the Exadata V2 database machine is something that IBM will find hard to match.
I don’t think people really care about overpriced IBM gear anymore.
What I find funny is Microsoft isn’t to be found on that list – what happened?
The only people who care about IBM are those who purchase Microsoft products and think they perform peachy in the enterprise; you know the sort: when they ring up and want a mail server, they’ll ask for an Exchange server, asking for products by brand because that is all they know, the brand.
Oracle has improved though, I remember 3 years ago it was a horrible piece of crap that made any Sun hardware appear like it was a slug; Sybase had no problems pumping out the numbers on Solaris on both SPARC and x86-64 machines.
Do you actually know the price of exadata? Do you really know what you are getting, when you buy Exadata? Or are you just trolling?
I know what you get when you buy Exadata. You buy a bunch of x86 servers with SATA drives for a LOT of money, and you are stuck with that piece of gear, which can’t run anything other than Oracle software. And trust me, Oracle RAC ain’t cheap, and Oracle RAC ain’t what they advertise it to be. And you tell me IBM HW is overpriced?! Hardware on which you can get support for Linux/AIX… and you actually have a choice of what to use, and no one is forcing you to use AIX?!
EDIT: dvzt, are you a manager or something? You sound like one; they love the marketing slides and the fancy charts. And most of the time they don’t have a clue about the matter at hand.
Wait a second, are you saying that a negative point of an application-specific machine like Exadata is that it is not general purpose? Whaaaaaaaa?
You’d be shocked to know that the customers who are buying those machines like hotcakes do so because they run their tailored Oracle apps really, really fast. That is why they purchase them, not because they wanted to run something else. Jeez.
…and InfiniBand HCAs, switches, and the Oracle RDBMS (== expensive)
So what? Any reasonable application can use Oracle as its database backend. You will hardly want to use that hardware for a different purpose.
Tell me then, what is it? I must have been fooled.
I would choose AIX instead of Linux any day.
But let’s get back to the topic: Exadata V2 was just an example of another “lethal machine.” Let’s compare oranges to oranges (and apples to Apples, under yet another Psystar article). The IBM p595 was recently beaten by a cluster of Sun T5440s, giving better price/TPC-C, lower latency, etc. That’s what I was talking about. IBM is known to be overpriced.
Sorry to disappoint you, but no, just a regular sysadmin.
While I always enjoy a good manager/salespeople bashing, you should make sure not to act like one, too. And you kind of did that: appliances shouldn’t be compared to general-purpose servers.
I have seen the new TPC score from Oracle. Congratulations to them for topping the chart.
One funky thing, though: that is the only cluster solution you see in the list. You can’t even get Oracle people to recommend using RAC over a big multi-CPU machine so long as someone pays for the hardware. Administration and support are a night-and-day difference. No admin wants to run RAC if they can avoid it.
And have you looked at the percentage licenses are of the total cost? No wonder they want to sell you that setup.
You might also guess that there might be updates possible to the IBM setup.
Which is all so-so interesting of course.
What IBM _is_ doing is more interesting. Power7 will come sometime next year and will probably come in a 256-way p595. You can take a guess at the performance that will have. (Cost? Yes. Lots of it, I expect.)
They also have their own cluster solution with DB2 PureScale. I expect you can chain p595s into that cluster till the cows come home. There will be very interesting times ahead indeed, and I’m not at all certain Oracle will be able to scale up if they want to stay in the numbers game.
Personal opinion: 7.7m was by far not enough to make an impact. IBM has publicly talked about Power7, and I think that will have the “PS2 effect” on the market: it will make many who might be seriously interested in the Sun RAC setup wait and see (if they have the time) and compare it to a similar Power7 setup.
(For the record: I both use and like AIX.)
Yeah, Power7 will be fast. But Fujitsu is releasing an octo-core SPARC64 next year called Venus. It has the same performance as Power7: 128 GFLOPS.
And also, the SPARC Niagara is extremely fast on certain workloads. For instance, in Siebel v8 benchmarks, one Sun T5440 with four 1.4 GHz Niagara CPUs is twice as fast as three IBM Power 570 servers with fourteen POWER6+ CPUs at 4.7 GHz. One T5440 costs $76,000, which is quite a lot of money. But one Power 570 server costs $413,000, which is many times more, and you would need six of them to match one T5440. See the benchmarks on Oracle’s site; there are whitepapers from IBM and from Sun. Compare them.
How has Itanium not paid dividends? It hasn’t taken over the world like it was hoped, but it is ubiquitous across HP’s enterprise server line, and it runs Windows, Linux, HP-UX, OpenVMS, and NonStop, the latter two of which are used in high-profit sales for mission-critical apps.
Especially the new Tukwila-class chips should be sweet.
I’ve been an employee of a Sun partner company. I’ve installed interesting servers with enterprise-oriented software services, and I had the opportunity to evaluate medium-sized servers against a bunch of entry-level machines.
In the beginning, we used to install bigger machines and have them sliced (electrically or logically) into “domains,” a sort of virtual machine. But later, we shifted to a different approach with plenty of smaller machines. The only reason for this was cost: customers prefer the cheaper version of the solution.
In my opinion, it would’ve been wiser if we had retained the first type of installation, due to easier servicing, better expansion, and less downtime. Sure, the system works, but it’s much harder to maintain, let alone reconfigure. The money the customer saved on hardware is less than the money lost to long reconfiguration downtime, and the maintenance contract costs more.
From your review, Superdome + HP-UX sounds like a properly engineered solution, and I believe it is. But, still, I prefer SPARC + Solaris much more than any other high-end enterprise HW + OS combination.
Ben Rockwood is “she”, not “he”. You’re linking to her blog called cuddletech (hint) with her photo (hint hint).
Nope, Ben… is a he. I have yet to see a single female named “Ben.”
I believe the pics are of his wife/girlfriend/partner/whatever.
Here’s your “she”
http://www.alobbs.com/album/opensolaris_dev07_mugshots/P1090196/Ben…
ohhh, cutie!
Well, tbh, he is indeed pretty cute
A sucker for dog tags? Me too. But shouldn’t we be studying, or programming, or something?
Nope, I’d rather spend my time drooling over good-looking men or women, especially this late at night.
Your small and medium-sized companies can’t do big iron. It’s not an option. You start off small with maybe two web servers, two DB servers, and one backup. That’s 5 machines @ $6k each = $30k (assuming really high IBM/HP pricing for x86 servers, for comparison). Obviously, no big-iron system can compete with that on price. As the company grows, it will be easier to just add new x86-64 servers as needed. They only get cheaper and more powerful as time goes on, and they are a standard with multiple vendors. Maybe the initial purchase of five servers wasn’t enough of a quantity to get much of a discount, but when you start buying them 15 at a time, you can get a substantial discount and get competing offers from HP, IBM, Dell, Sun, and Intel. If you were to go big iron, you can’t take an HP “cell” and stick it in your Sun or IBM mainframe. You have one supplier for the lifetime of that chassis, and when they decide to stop selling additional processor boards, you’re screwed. Managing that many physical servers isn’t that much different from managing that many virtual servers/partitions.
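Just to put toy numbers on that growth argument, here is a quick Python sketch. Every price in it is invented (the $6k box is from the example above; the chassis and cell costs are guesses purely for illustration):

```python
# Throwaway sketch of grow-as-you-go x86 vs. a big-iron chassis plus cells.
# All prices are invented for illustration only.

X86_BOX = 6_000              # commodity server, as in the example above
CHASSIS = 500_000            # hypothetical up-front cost of a big-iron chassis
CELL_BOARD = 40_000          # hypothetical cost of each additional cell board

def x86_cost(boxes: int, bulk_discount: float = 0.15) -> float:
    """Commodity boxes get a per-unit discount once you buy them in volume."""
    discount = bulk_discount if boxes >= 15 else 0.0
    return boxes * X86_BOX * (1 - discount)

def big_iron_cost(cells: int) -> float:
    return CHASSIS + cells * CELL_BOARD

for boxes, cells in [(5, 1), (20, 3), (60, 8)]:
    print(f"{boxes:3d} x86 boxes: ${x86_cost(boxes):>9,.0f}   "
          f"big iron with {cells} cells: ${big_iron_cost(cells):>9,.0f}")
```

The chassis cost dominates until you are big enough to fill it, which is the point.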
From my perspective, big iron is a good choice for big companies that have known computational needs that are either static or more predictable (i.e., open 20 new branches = one more “cell” required).