“Version 5 of Red Hat’s Red Hat Enterprise Linux operating system hit the streets last month, complete with a truckload of updated open-source components and brand-new support for server virtualization – courtesy of the Xen hypervisor project. eWEEK Labs tested RHEL 5 with a particular focus on its new virtualization features. While we think that Red Hat is off to a good start with its Xen implementation, companies in search of an out-of-the-box server virtualization solution shouldn’t expect it from RHEL 5.”
They seem to be on track for some market increases…
On a side note, anybody test the updated cluster suite?
I remember my first purchase of a ‘boxed set’ of Red Hat Professional 6.0, and trying to install it on a home-built machine. They had a number you could call for phone support; if I recall, what I needed were the keys to press to go from X Windows back to the console. I had forgotten them, and the guy on the phone was very helpful. To me, all companies that go mega-corp forget where they came from and soon get a big head. But I remember that the personal service was excellent; in fact, I think I called a few other times for help.
Now they want money, period. They said the heck with the desktop, run Windows, because competing there would have taken some effort and marketing. Now they are in a catch-22, because they know SuSE has SLED, and it is a fine desktop OS, and Ubuntu is hard at work developing its desktop WITH support. I am using Fedora at work and at home, and I have had some problems with it; it really seems like Red Hat is now thinking they missed out on the desktop market (for which there is plenty of demand, despite what people say).
I will continue to use Fedora just because at work we use RHEL 3 & 4 in the enterprise; however, I may switch my desktop OS, because I see the number of users shrinking as people switch toward Ubuntu & SuSE.
Um, Red Hat has ranked #1 for two years running amongst IT service providers. Better than IBM, better than Sun, and way better than Microsoft. You can’t get better corporate IT support services for anywhere near this kind of price.
Novell is first starting to become relevant (again) in new corporate IT deployments, and for all of its promise, Ubuntu/Canonical has gone nowhere in this space. Anyone who claims that Red Hat should be worried about Novell or Canonical might as well be saying that Google should be worried about Yahoo or Microsoft.
I thought the review was fair, but JB searched a little too hard for things to complain about. I’ve always found his writing style thoroughly unenjoyable.
His expectations are unrealistic. Deploying a virtualized server with strong isolation and RBAC is not trivial, and this is the first serious attempt at a GUI virtualization manager for a general-purpose OS. Sun doesn’t do host virtualization, Microsoft’s solution isn’t yet available as a public beta, and IBM won’t ship its virtual I/O solution for AIX until November (and I can assure you that RHEL will seem like a walk in the park by comparison). Of course, JB finds one of the vanishingly few manageability features that Novell has over Red Hat and points it out.
Red Hat’s virtualization manager could be more intuitive. But it’s their first crack at it, and they seem to have done a really good job. All of the hiccups were minor and easily fixable without so much as calling for support. Any decent sysadmin would have no trouble deploying this solution. If nobody in your shop knows bridged Ethernet from a host bus adapter, then you’d best be hiring a consulting firm anyway.
I said nothing about some opinion poll. What I meant was that they totally ditched the ‘end users’ who helped them get started…
Whatever the poll says, it’s just like any opinion: one day you are liked, the next day people couldn’t care less.
RHEL 5 permissions are a lot different than in RHEL 4 when transferring files over a mounted “cifs” share.
Unless you open Nautilus as root you cannot cut & paste anymore, which is really annoying.
I have also discovered that SELinux is the cause of many problems, not allowing applications to install and run; without changing its settings it will hinder your work.
Packages are updated, yes, but now dependency issues are even bigger than before, especially with apps that use wxWidgets for their interfaces.
Nautilus crashed several times in the last month, though it restarted after each crash; the bad thing is that desktop icons disappeared during the crash, before recovery happened. Nothing wrong on the hardware side, though.
Larger monitors are much better supported now, like 1920×1200 resolution at 32-bit depth (on a 23″).
RHEL 5 doesn’t seem to be revolutionary to me but rather evolutionary.
System tools are not much improved, unfortunately. Imagine that you cannot set up a RAID volume with a GUI after the RHEL installation finishes; to administer your disks you have to run “fdisk”, then create “md0”, then run “mkfs”, then mount it, then … endless pain!!!
Otherwise it is quite a stable OS to work with, unlike Windows Server 2003, which nagged me about a bug dating back to Windows NT 4 that affects my shares (the I/O DWORD-size bug in the networking stack).
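For what it’s worth, the multi-step sequence the poster complains about can be shortened a little with mdadm, which creates and assembles the array in one command. A rough sketch only – it assumes two spare partitions, /dev/sdb1 and /dev/sdc1, already typed as Linux RAID autodetect via fdisk, must be run as root, and will destroy whatever is on those partitions:

```shell
# Build a two-disk RAID 1 array from the partitions prepared with fdisk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Put a filesystem on the new array and mount it.
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# Record the array so it reassembles on boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

Still no GUI, but it is fewer steps than the fdisk/mkfs/md0 dance described above.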
Imagine that you cannot set up a RAID volume with a GUI after the RHEL installation finishes; to administer your disks you have to run “fdisk”, then create “md0”, then run “mkfs”, then mount it, then … endless pain!!!
Quite true. The lack of decent GUI tools (whether local or remote) from the major Linux distributors is troubling, and isn’t going to win them any headway beyond the Linux and Unix worlds and into the Windows one – which is where they need to go. The closest we have is YaST, and this gap is one of the reasons I’m mystified that people call Ubuntu a user-friendly distribution.
At this point, some smart alec will come along and point out the virtues of using the CLI, but that cuts no ice. If you just want to quickly and effortlessly set something up, then a good-quality GUI is a must these days. Yes. On servers.
The lack of GUI tools for RAID setup is probably not a big problem for Red Hat. They aim for a market where most people will use hardware RAID.
Most of the boxes will also be servers, which probably have no GUI anyway. I would say that good command-line tools are far more important than GUI tools – not that I would object to having such tools.
yeah, what’s with the gui obsession these days?
I guess it’s useful for clueless admins, or for Windows admins playing with Linux, but any serious Linux admin couldn’t care less about those so-called missing GUIs.
I couldn’t imagine having to go through VNC or something to get the job done. Real *nix admins use ssh (and vi).
Servers shouldn’t boot past init3 anyway.
software raid? please, are we talking enterprise servers here or home servers?
Are we *nix sysadmins or not?
You forgot to use the macho grunt Tim Allen used in “Home Improvement”: “hurhurhurhurhur”
yeah, what’s with the gui obsession these days?
I guess it’s useful for clueless admins, or for Windows admins playing with Linux, but any serious Linux admin couldn’t care less about those so-called missing GUIs.
I couldn’t imagine having to go through VNC or something to get the job done. Real *nix admins use ssh (and vi).
I agree with most of your sentiments here (apart from the derogatory ones, of course).
However, are you sure you want to start up the fight about vi and emacs?
Servers shouldn’t boot past init3 anyway.
This has nothing to do with GUI administration. There should be no reason to have to run the management tools on the local box; that is a Microsoft leftover, and something I believe is slowly going away as they migrate to a central form of management.
The same thing is happening in the Linux/Unix world regarding central management, but they are not concentrating on GUI front ends; they are concentrating on feature sets. I know which I’d prefer them to concentrate on (it’s not pretty clickety icons).
software raid? please, are we talking enterprise servers here or home servers?
Are we *nix sysadmins or not?
Erm, I think it is quite obvious that quite a few people here are not sysadmins, and we should be trying to educate them in a non-condescending manner.
Erm, I think it is quite obvious that quite a few people here are not sysadmins, and we should be trying to educate them in a non-condescending manner.
You’re right… Sorry about that.
The same thing is happening in the Linux/Unix world regarding central management, but they are not concentrating on GUI front ends; they are concentrating on feature sets. I know which I’d prefer them to concentrate on (it’s not pretty clickety icons).
Interesting, do you have any open source projects in mind?
“””
software raid? please, are we talking enterprise servers here or home servers?
“””
Software RAID is perfectly good for servers. Like hardware RAID, it has its advantages and disadvantages.
I go with Linux software RAID on most of my customers’ servers and am quite pleased.
If you want a thorough argument of both sides of the issue, filled with useful information, “innocently” ask which is best on the CentOS list. But don’t tell them I sent you!
I guess it’s useful for clueless admins, or for Windows admins playing with Linux, but any serious Linux admin couldn’t care less about those so-called missing GUIs.
No. Good GUI tools, whether run locally or remotely, are extremely useful for day-to-day work and time-saving, and are an absolute must if Linux distributors are to make headway against Windows Server and ensure their survival. Yes, you read that right.
Linux distributors cannot just sit and watch themselves take the low-hanging fruit (as they call it) of existing Unix and Sun systems whilst pretending they don’t compete with Windows Server. John Terpstra got this spot on in a talk he gave once.
I couldn’t imagine having to go through VNC or something to get the job done. Real *nix admins use ssh (and vi).
An awful lot of people seem to have a pathological fear that using a GUI is somehow not doing it ‘for real’, for reasons I can’t quite fathom. Command lines are useful for repetitive tasks and scripting, which is why Microsoft is putting work into theirs; GUI tools are several times faster for many of the things you just need to do on an ad hoc basis. That’s how desktops came to be.
Come back to me when you have a Swiss Army knife tool that integrates LDAP, users, Samba, e-mail, groupware and all the other essential items in one centralised place that is sane to look at.
The vast majority, just about all in fact, of the people who say things like “command line rulez” and harp on about how GUI tools are useless have never actually been near a Windows server properly, and have never actually used things like WMI. That’s where Windows scores, and why an awful lot of organisations keep faith with it despite all its other faults.
However, just because Microsoft has created some boneheaded and needless GUI tools in its time (the stupid server profiles in Windows 2003 R2 come to mind) doesn’t mean the concept of, and need for, such tools is bad.
I couldn’t imagine having to go through VNC or something to get the job done. Real *nix admins use ssh (and vi).
Then you don’t run anything on that server apart from a few web applications. Organisations require a sane and centralised way to manage their users, LDAP, mail, groupware and universal authentication. That authentication is reused time and again, in everything from the infrastructure up to access to networked COM+ components. Linux distributors have a long, long way to go here.
This whole condescending ssh-and-vi attitude is what kills (and has killed) *nix as a server in many disciplines, and firmly cements Windows’ place in the server world.
Servers shouldn’t boot past init3 anyway.
You may think that, but Red Hat and SuSE obviously don’t. Why? Because their customers understand the need for GUI tools – however pathetic the attempts that pass for GUI tools in those distributions are. If they had proper remote administration APIs, such as WMI on Windows, then you wouldn’t run a GUI on the server at all, but simply run the GUI tools remotely – if you could get away with it.
Remote login through RDP, or especially NX, is faster and less error-prone, though; you would probably designate one machine to run a GUI environment to RDP, NX, VNC etc. into, and manage the network and all your servers from there ;-). Many people have servers on leased lines and in remote locations, though, which can make this difficult.
software raid? please, are we talking enterprise servers here or home servers?
There’s nothing wrong with software RAID when applicable. Linux scores well on that front – country miles ahead of Windows. Setting it up is more painful and drawn out than it should be, though.
I’ve just been through a really painful procedure to get hardware RAID up and running on a Debian Etch machine. Yes, it picked up my RAID array no bother but, of course, I need the tools to notify me when something bad happens and to manage my array. You need the right hardware and software to make this work properly.
Yes, it picked up my RAID array no bother but, of course, I need the tools to notify me when something bad happens and to manage my array. You need the right hardware and software to make this work properly.
First, I agree with your comment, except for the part where you say you need tools for hardware RAID.
Most RAID controllers output events to the standard console (I’m sure about DAC and, if I remember correctly, Adaptec) and to the standard logs, so you can simply redirect log events into a pipe and decide your reaction based on the event.
As for your GUI tools (personally, I would like a software RAID creation and management tool too): if you need a RAID controller, then DAC is the best option on Linux. Its support is really great, and you have quite a lot of remote-control tools and such.
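The redirect-and-react idea above can be sketched as a small shell filter. This is a sketch under assumptions: the keyword pattern below is a guess covering md, DAC960 and Adaptec-style messages – tune it to whatever your controller actually logs:

```shell
#!/bin/sh
# Decide whether a syslog line looks like a RAID controller event.
# The keyword list is an assumption; adjust it for your hardware.
is_raid_event() {
  echo "$1" | grep -qEi 'raid|mdadm|dac960|adaptec'
}

# In real use you would pipe the live log through the filter, e.g.:
#   tail -Fn0 /var/log/messages | while read -r line; do
#     is_raid_event "$line" && echo "$line" | mail -s "RAID event" root
#   done
```

The reaction (mail, pager script, whatever) is entirely up to you – that is the point the poster is making.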
I guess it’s useful for clueless admins, or for Windows admins playing with Linux, but any serious Linux admin couldn’t care less about those so-called missing GUIs.
So, in your opinion, one has to know exactly every command in *nix? I’m a hardcore bash person, but I still have to look at “man mdadm” whenever I need it. I just don’t need it often enough to remember.
software raid? please, are we talking enterprise servers here or home servers?
Both. I have a few servers, and I can’t imagine having SCSI disks or hardware RAID for testing or home purposes.
Servers shouldn’t boot past init3 anyway.
So??? You can still run all your GUI tools over an X11-forwarded ssh session. What would your point be?
Are we *nix sysadmins or not?
Obviously you’re taking the *nix admin thing a few steps too far. The CLI is wonderful for automation, control, and repetitive tasks. Creating a RAID array is not in that category. Overdoing any obsession is mostly a bad trait that leads to unoptimized performance.
Even the enterprise makes use of software RAID – for example, when you want to mirror two RAID arrays to avoid controller failures.
There are good GUI tools for setting up software RAID/LVM on Linux, and there have been for quite some time. EVMS (http://evms.sourceforge.net/) comes to mind. I don’t understand why Red Hat hasn’t adopted it, rather than creating its own.
Well, I think the big thing is that Xen is only a temporary solution and will go away. Since the announcement of KVM, why build up Xen? Xen is a pain since it only paravirtualizes, and that means no Windows – unless you have Vanderpool or the newer AMD chips, which makes it the same as KVM. And KVM will be in the kernel, so why use Xen? Anyone really serious about running virtualization on older hardware will probably already be using VMware anyway. So I think the Xen issue is a non-issue.
Other than that I can’t comment yet as I haven’t had a chance to try it out.
I don’t believe Xen has potential *only* as a temporary solution that KVM will (eventually) supersede. I think it mostly depends on the situation you’re in: in my case I don’t really need a Linux box running Windows; as long as that Linux box is really good at virtualizing other Linux guests I’m perfectly happy with that – even if it means I (perhaps) need another virtualization tool for Windows, FreeBSD or Solaris guests. It still gives me a good chance to consolidate.
I must add: this is exactly what I think the Xen in RHEL5 is good for: paravirtualizing other RHEL5 instances.
If there were one perfect solution that does it all, does it all very well, is reasonably priced, and … and …? Why not. But:
Xen does offer paravirtualization, which I believe gives you a bit more than KVM does. The tradeoff might be that it also adds more complexity, but you gain performance at least (and perhaps the domU kernel can be less complex).
What I’m most worried about is the stability of the Xen API. They showed themselves willing to change things in the past, between Xen 2 and Xen 3 – if they do a similar thing in the future, you can end up with Xen 3 islands awaiting a Xen 4 (or even 3.1) migration, and so forth… things can become a mess. Until now they seem to keep adding new features in the Xen 3 branch and release new enterprise versions in rather short order… if they keep doing that, I might also become more attracted to KVM or other solutions. Perhaps Xen loses its value for paravirtualization if the API is not stable, making full virtualization more attractive… I can’t tell yet (at least I can’t). I’d love to hear others’ insights on this. It’s what I’m a bit afraid of; other than that, I’d like to build on Xen.
This is one of the reasons why Fedora and RHEL are built on development from Red Hat, like the graphical virtualization manager and similar command-line tools built on the libvirt library (http://libvirt.org), which provides a stable API/ABI and supports Xen, KVM, QEMU etc.: it allows end users to pick, or switch to, any of these implementations without breaking the management tools and interfaces.
So regardless of any changes in the Xen implementation or API, or the development of other technologies like KVM, RHEL customers are shielded from them and have the benefit of choice.
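You can see what that abstraction buys you in libvirt’s virsh client: the same management commands work whichever hypervisor the connection URI points at. A sketch only – the Xen and QEMU URIs assume the matching drivers and hypervisors are installed, while the built-in test driver needs nothing:

```shell
# Same commands, different hypervisors -- only the URI changes.
virsh --connect xen:///          list --all   # Xen dom0
virsh --connect qemu:///system   list --all   # KVM / QEMU
virsh --connect test:///default  list --all   # libvirt's built-in test driver
```

Swap the backend and the management tooling on top keeps working unchanged.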
Simple. KVM is Linux-only. Xen is cross-platform – Linux, Solaris, some BSDs. Even Microsoft is working with XenSource.
Xen is also available FOR Windows. A XenServer license for Windows-on-Windows solutions is only $99. I think you may wanna do a liiitttle more research before you start playing favorites. Xen can run Windows, and has had that ability for some time, and many people, including myself, running Linux-on-Linux still prefer the paravirt method over HVM because of the flat-out speed. No matter what chip you are running, you will take a performance hit when moving to HVM versus paravirt.
Get a little more versed in the facts before you make unfounded claims. I’m getting kinda sick of the “here comes KVM, bye bye Xen” discussions. It’s like the Debian commenters yesterday who kept speaking in terms of Ubuntu: can we have a discussion about a Debian release without someone saying how irrelevant it is because of Ubuntu?
The biggest DRAWBACK of KVM is that you can’t run it WITHOUT Intel VT or AMD-V, whereas Xen can run Linux and BSD at near-native speeds on any modern CPU.
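The VT/AMD-V requirement is easy to check: on Linux, the CPU advertises the relevant flags in /proc/cpuinfo (vmx for Intel VT, svm for AMD-V). A minimal sketch:

```shell
#!/bin/sh
# Report whether the CPU advertises the hardware virtualization
# extensions KVM needs: vmx (Intel VT) or svm (AMD-V).
# Xen's paravirtualized guests need neither flag.
hw_virt_supported() {
  grep -qE '(vmx|svm)' "${1:-/proc/cpuinfo}"
}

if hw_virt_supported; then
  echo "VT/AMD-V present: KVM (or Xen HVM) can run unmodified guests"
else
  echo "no VT/AMD-V: limited to paravirtualized Xen guests"
fi
```

If the first branch never fires on your hardware, KVM is simply not an option, which is exactly the drawback being argued here.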
many people, including myself, running Linux-on-Linux still prefer the paravirt method over HVM because of the flat-out speed. No matter what chip you are running, you will take a performance hit when moving to HVM versus paravirt.
Really? I was under the impression that things are exactly the opposite way around–that paravirtualization is almost always slower than full virtualization (when running on chips that support it). Where are you getting this information from? I’d like to read up on the issue some more.
I should note at this point that I do do paid work for XenSource and have been involved with the project for some years – I’ll try to be fair, but please call me on any bias that might creep in.
The paravirtualised implementation of a guest OS ought to achieve as-good-as or better-than performance relative to a non-Xen-aware guest. Paravirtualisation avoids the inefficiencies of emulating hardware that may not be well suited to virtualisation.
The full virtualisation in Xen performs decently too, but there’s more work to be done in accomplishing it, hence you’d expect a performance hit relative to the paravirt case…
Serious VMM products tend to include at least a degree of paravirtualisation, in the form of special virtualisation-aware custom device drivers (e.g. VMware has a special network device, a “balloon” driver, and special graphics/HID devices).
Guess they never heard about XenExpress either.
http://www.xensource.com/products/xen_express/index.html#
First, I’m a Xen fan and rather doubtful of KVM at this stage. There’s just too much missing right now. I would like to wait a year or two before concluding which one is better.
Get a little more versed in the facts before you make unfounded claims. I’m getting kinda sick of the “here comes KVM, bye bye Xen” discussions.
Actually, you should take a more realistic view of your own facts.
If Xen doesn’t become a de facto part of the kernel, then that is exactly what will happen. Forced inclusion is not what is preferred, and it might just happen that distros stop shipping Xen once KVM matures. The Xen patch is not something “Joe THE distro maker” would be able to ship with his home-made distro.
Again… sysadmins are just a different kind of user, but like all users they mostly share a tendency to want things as simple as possible. And it can’t get simpler than being included in the base kernel tree.
If KVM benefits from anything, it is its simplicity and its inclusion in all kernels. Quality will earn fewer points than that. Sad but true.
The biggest DRAWBACK of KVM is that you can’t run it WITHOUT Intel VT or AMD-V, whereas Xen can run Linux and BSD at near-native speeds on any modern CPU.
This is not a permanent drawback. All machines sold now have virtualization. Two years from now, who will be thinking about those older machines in production?