Linux has developed an undeserved reputation as less scalable than commercial Unix and Windows, say Sam Greenblatt, Kenneth Milberg, Matt O’Keefe and John H. Terpstra in an interview with SearchEnterpriseLinux. They attribute that reputation largely to the vendors of competing platforms. Organizations like Google and NOAA run huge Linux clusters that prove otherwise, and scalability has improved enormously over the past few years.
How nice that there are still people who speak the truth about Linux. Yes, it rulez on the server, it’s a cool OS, and you can also toy with it! I would like to see Win Server scaling like that and having all the features that Linux has; then I’ll buy Win Server. Till then, Linux RoX!
It’s a bit of a confusing article.
Sam Greenblatt claims Linux has not lagged in scalability, while John Terpstra admits that Linux doesn’t scale as well as Unix and Windows, but says you have to see it in context. How nice.
Also, Greenblatt claims that with 2.6, “vertical” scalability will improve to 16-way, while Linux can already be used on 128-way systems, with work underway to get that to 256 (a quick way to check a box’s CPU count is sketched after this comment). Hmm…
Anyway, some Unix systems scale to a much higher number of processors.
Perhaps it is not all FUD.
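On the subject of how many processors these N-way boxes really drive, glibc exposes the kernel’s view through sysconf; a minimal sketch in C (the _SC_NPROCESSORS_* names are glibc extensions rather than strict POSIX, so this assumes a glibc-based Linux system):

    /* cpus.c -- ask the C library how many processors the running kernel sees */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long configured = sysconf(_SC_NPROCESSORS_CONF); /* CPUs configured in the system */
        long online     = sysconf(_SC_NPROCESSORS_ONLN); /* CPUs the kernel has online right now */

        printf("configured: %ld, online: %ld\n", configured, online);
        return 0;
    }

It only reports what the kernel itself believes, which is exactly the number being argued about in these 16-way versus 128-way claims.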
This article is lacking in content, and as for the experts, these guys sort of miss the point of the questions…
Beowulf clusters are the opposite of scaling up: just piles of 2-way nodes… (see the MPI sketch below)
None of the OSes really scale well above 32 processors. They basically become controllers for parallel machines.
Not really good controllers, either. Whether it works or not is up to the app. Filesystem, I/O, and network data paths are bottlenecked and limited by design. Basically, the OSes are really written for crappy little RISC boxes or 386 PCs.
The $$ Unixes have hacks that let them address large memory, but usually not well.
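To the Beowulf point above: such a cluster scales out by message passing across many small nodes rather than scaling up inside one big shared-memory box. A minimal sketch of the kind of program every node runs, assuming an MPI implementation such as MPICH or LAM is installed (node counts and the launch command are site-specific):

    /* hello.c -- every process in the cluster job reports its rank */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id within the job */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total processes across all nodes */

        printf("process %d of %d checking in\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Compile with mpicc and launch with something like mpirun -np 16 ./hello; the 16 processes talk over the network rather than over a shared memory bus, which is why a pile of 2-way nodes says little about how a single kernel image scales across 32 or more CPUs.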
I’ll still put my trust in time-tested, scalable operating systems such as Irix and Solaris, though.
Despite what some of the regular Linux trolls around here (I’m talking about you, samb and phil) would have you believe, Irix and Solaris are still far more scalable on multiprocessor systems. Linux does its best on UP or 2-way systems, although for a UP system FreeBSD is probably a better choice as it has the better VM.
Linux’s development cycle is full of massive rewrites rather than sequential, progressive refactoring. While this is great for decrufting the code base, it doesn’t bode well for fine-tuning the engine; after all, it’s hard to fine-tune the engine when you throw the baby out with the bathwater every generation.
“it doesn’t bode well for fine-tuning the engine; after all, it’s hard to fine-tune the engine when you throw the baby out with the bathwater every generation”
Holy mixed metaphors, Batman!
“Terpstra: Accusations have been made that Unix and Windows scale to far greater numbers of processors than the Linux 2.4 kernel can. While this is true, […]”
My sentiments exactly! But an interesting point nonetheless.
“Linux has support for symmetric multiprocessing beyond eight CPUs, which is sufficient for most systems today. Linux will have increased support for large memory systems, have a far more expansive feature set suitable for enterprise deployments, and has effectively caught up with most other Unix operating systems in the area of scalability.”
Yeah, so what? Will it outperform, or perform as well as, a 32x32 Superdome? Or an E15k? I doubt it. I think this guy should just stick with the 4-processor bit for file sharing and such.