Heise Open Source provides an extensive breakdown of the innovations present in the latest release of the Linux kernel, announced by Linus Torvalds. This version marks Ext4 stable for the first time and adds the much-anticipated GPU memory manager that will be the foundation of a renewed graphics stack, support for Ultra Wide Band (Wireless USB, UWB-IP), memory-management scalability and performance improvements, a boot tracer, disk shock protection, the Phonet network protocol, support for SSD discard requests, transparent proxy support, high-resolution poll()/select()… full Changelog here
Linus who?
His lesser known estranged half-brother, twice removed. Thanks for spotting that.
It’s been a while since I last followed kernel developments, but these new features (especially the graphics stuff) look very impressive, even more so considering this is an “incremental” update.
While Windows is (due to software selection) my primary desktop, I wish Microsoft would learn from the “release early, release often” and “add features to already-released products” mentalities.
To release often you actually need programmers, not lawyers. This is why Microsoft is still stuck with the same NT kernel which they got from a group of developers from Digital Equipment Corporation, led by Dave Cutler.
I guess you are right. Unlike Linux developers, of course, who are obviously not stuck with Linux.
Your argument misses the point. I’ll tell you why. 1.) The Linux kernel is modular and not monolithic, and it comes in a variety of flavours because companies with actual developers use it to fit their needs. This is why today you have a multitude of devices running Linux – probably also your router/cable modem.
2.) Of course it depends on what you are developing. But as long as you are not really a kernel developer, you have a broad choice of systems to develop on and are not stuck with the Linux kernel. E.g. my work mostly involves Python and Unix-like environments. Whether I do this work on my customer’s FreeBSD server or my Linux box is irrelevant most of the time.
No, it’s because it’s open source, cheap, and so can be customized by third parties.
Your second point doesn’t actually make any sense at all, and it doesn’t help you to bash NT or Windows, which is exactly what you did in your first message just for the sake of it. Sorry.
Well, I’m a Linux guy. But I don’t really see where this particular type of Linux kernel vs NT kernel stuff gets us. So let me throw this into the ring to see if maybe something constructive comes of it: MinWin is/was/will be about emulating some strengths of the Linux kernel philosophy.
‘Monolithic’ refers to the memory model used by the kernel. The Linux kernel, like many other common kernels (e.g. BSD, BeOS, Syllable), is monolithic because all kernel code, including modules, runs in the same memory space; i.e. any kernel code, whether compiled into the kernel or loaded as a module, can access the variables and data structures of any other part of the kernel. A consequence of this is that a crash or bug in a module can corrupt the whole kernel.
The usual example of a non-monolithic kernel is Minix, http://en.wikipedia.org/wiki/MINIX.
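To make the same-address-space point concrete, here is a minimal sketch of a loadable module (assuming a 2.6-era kernel build tree; the "hello" names are purely illustrative). Once inserted with insmod, this code executes in kernel space and can read kernel globals such as jiffies directly, with no message passing or protection boundary in between.

    /* hello.c -- illustrative module; runs entirely in kernel space */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>
    #include <linux/jiffies.h>

    static int __init hello_init(void)
    {
        /* Direct access to a kernel data structure (jiffies),
           possible exactly because modules share the kernel's address space. */
        printk(KERN_INFO "hello: loaded at jiffies=%lu\n", jiffies);
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

Build it against the running kernel’s headers with an ordinary kbuild Makefile and load it with insmod; a bad pointer write in here can take the whole kernel down, which is exactly the trade-off described above.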
One might actually say that Windows Vista is less monolithic than Linux, since the new video driver system has video drivers running in userspace. This means that a buggy video driver won’t crash the whole system – it just gets safely restarted and everything goes on as usual.
While Linux is not a microkernel it isn’t as monolithic as earlier versions of Unix. A lot has been moved outside the kernel and it is extremely modular, much more so than Windows.
Vista’s graphics driver model is actually similar to Linux’s, because the driver has two parts: one in kernel space for things like memory management, and another in userspace to handle things like GL acceleration. Linux also has split graphics drivers, and DRI2 introduces a kernel memory manager for graphics much like Vista has.
It is either a microkernel or monolithic. How much of the Linux kernel is actually executed in userland?
The graphics part of the driver stack is shared with other OSes that use X.org like OpenSolaris.
That’s simply not true. Most kernels mix elements of a microkernel with elements of a monolithic kernel. Look at OS X, which has a hybrid Mach/FreeBSD kernel. Linux has things like libusb, udev, and FUSE, which operate from userspace.
I’m not sure how that is relevant to the discussion.
According to me there are only two categories: monolithic or microkernel. This whole hybrid kernel thing is just marketing.
udev does nothing to make Linux more microkernel-like. It is a userspace device manager that maintains the /dev directory and its symlinks. udev just listens for kernel events and manages /dev entries.
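For what it’s worth, the “listens for kernel events” part is easy to see from userspace. The sketch below (plain C, no libudev; it assumes a Linux host and typically needs to run as root) subscribes to the same kernel uevent netlink socket that udev uses and prints each event header; udev’s additional job is then to create or remove the matching /dev entries according to its rules.

    /* uevent_listen.c -- illustrative only; not how udev itself is implemented */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    int main(void)
    {
        struct sockaddr_nl addr;
        char buf[4096];
        int fd = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.nl_family = AF_NETLINK;
        addr.nl_groups = 1;   /* kernel uevent multicast group */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        for (;;) {
            ssize_t len = recv(fd, buf, sizeof(buf) - 1, 0);
            if (len <= 0)
                break;
            buf[len] = '\0';
            /* e.g. "add@/devices/..."; key=value pairs follow the header */
            printf("uevent: %s\n", buf);
        }

        close(fd);
        return 0;
    }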
FUSE has a kernel module that runs in privileged (supervisor) mode.
In a microkernel, the code running in privileged mode only handles IPC, address-space management, interrupts, and thread scheduling. Everything else runs in non-privileged mode.
Also none of this makes any one OS better for everything, which is the point I was trying to make to the OP.
Oh ok, according to YOU! Case closed! Too bad you are completely wrong. Have you heard of XNU which is a hybrid kernel or XOK, MIT’s exokernel which is smaller than a microkernel but does not use message passing?
Methinks you don’t understand the point behind microkernels. The main reason is to move things out of kernelspace to avoid errors that can take down the entire system. By moving more things out of the Linux kernel into userspace, Linux gains some of the same advantages.
This is true, but the filesystem implementations are completely in userspace.
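As a concrete illustration of that split, here is a minimal read-only “hello” filesystem against the libfuse 2 API (FUSE_USE_VERSION 26 assumed; all the hello_* names are illustrative). Everything below runs as a normal unprivileged process; the small fuse.ko driver in the kernel only forwards VFS requests to it.

    /* hello_fs.c -- minimal userspace filesystem sketch using libfuse 2 */
    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/stat.h>

    static const char *hello_path = "/hello";
    static const char *hello_body = "Hello from userspace!\n";

    static int hello_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        if (strcmp(path, hello_path) == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = strlen(hello_body);
            return 0;
        }
        return -ENOENT;
    }

    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                             off_t offset, struct fuse_file_info *fi)
    {
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0);
        filler(buf, "..", NULL, 0);
        filler(buf, hello_path + 1, NULL, 0);   /* "hello" */
        return 0;
    }

    static int hello_open(const char *path, struct fuse_file_info *fi)
    {
        if (strcmp(path, hello_path) != 0)
            return -ENOENT;
        if ((fi->flags & O_ACCMODE) != O_RDONLY)
            return -EACCES;
        return 0;
    }

    static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                          struct fuse_file_info *fi)
    {
        size_t len = strlen(hello_body);
        if (offset >= (off_t)len)
            return 0;
        if (offset + size > len)
            size = len - offset;
        memcpy(buf, hello_body + offset, size);
        return size;
    }

    static struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .open    = hello_open,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        /* All filesystem logic stays in this process; fuse.ko only routes requests. */
        return fuse_main(argc, argv, &hello_ops, NULL);
    }

Compile it with "gcc hello_fs.c $(pkg-config fuse --cflags --libs) -o hello_fs" and run it with a mount-point argument; if this process crashes, the filesystem goes away but the kernel is untouched, which is the microkernel-ish advantage being argued about here.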
I guess most implementations of Mach are not microkernels then, since they use co-location to move servers into kernelspace because of the atrocious performance of pure microkernels.
You clearly have no clue. We were talking about microkernels and monolithic kernels. Hybrid kernels are supposed to be an amalgam of those two concepts. The MIT exokernel is neither of those.
I didn’t say they are the only two ways of doing things.
Wrong! udev has nothing to do with device drivers or privileged code. First understand what udev does before you talk out of your nether regions.
I was pointing out that your examples are completely wrong. You are trying to convey a point but you don’t even understand the examples you are using.
Therefore FUSE is a bad example, because not all of it is in user space. It is a good abstraction, but it doesn’t make the OS that FUSE is running on any more of a microkernel, because a kernel-mode driver is needed to make it work.
Most commercial implementations of Mach, like Mac OS X’s Darwin, are no longer microkernels.
One word: QNX. You really need to pay attention. QNX beats the pants off Linux in scaling down, real-time behaviour, and latency. Got any more ignorance to spread?
Actually you did. “According to me there are only two categories: monolithic or microkernel.”
What are you talking about? Where did I mention device drivers in connection with udev? Maybe you haven’t been around long enough to remember devfs. Devfs was the precursor to udev, and it was inside the kernel. The point I was trying to make is that, when feasible, Linux has moved things out of the kernel.
I never said Linux was a microkernel or was trying to be a microkernel. I just said that concepts from microkernels have entered into other kernels like Linux. The main concept behind microkernels is running as little code in kernelspace as possible. Linux is pushing code out to userspace when it is feasible. Microkernels have done it the other way around and started pulling things into kernelspace when performance suffered dramatically in userspace.
It depends on architecture. Context switching on x86 is expensive. QNX and other microkernels don’t perform well on x86.
The Linux kernel is monolithic. You have no idea what you are talking about. Dynamically loadable modules are available in most modern kernels, but they are still monolithic.
In kernel parlance, “monolithic” refers to whether the kernel and all of its modules, including device drivers, execute in privileged mode.
You are confusing runtime/compile time binary level implementation with architecture.
The NT kernel is a hybrid kernel.
Your understanding is incorrect. The architecture of Linux has nothing to do with its popularity or its ability to run with a small memory footprint.
QNX is a microkernel and runs fine on embedded systems with little memory.
True. And that particular choice of architecture is nothing to be ashamed of. The microkernel was considered avant-garde in the ’90s. Nowadays, the word “microkernel” mainly evokes mental images of Andy Tanenbaum waiting for the Great Pumpkin to rise out of the pumpkin patch. I’m not a QNX expert. But from what I’ve heard, they do seem to have done a good job with a microkernel design in the RT space.
The NT kernel is a monolithic kernel in the ways that matter. Microsoft was happy to have buzz word compliance in the 90s, when NT was architected. But they were no more willing than was Linus to accept the overhead of message passing at that level. (Remember that QNX and real-time are about determinism, and not about speed.)
Probably not directly. But to the extent that Linux’s design has allowed it to be performant, especially in the server space, it has no doubt contributed.
Yep.
Oh please, spare me the Linus vs. Tanenbaum argument.
Each kernel architecture, when implemented properly, works just fine. QNX is a great example of that.
The OP didn’t mention anything about speed. QNX offers much better latencies than Linux, so obviously the message passing doesn’t cause that much overhead.
Nothing in my response or the person I was responding to mentioned performance.
You are mistaken if you think the standard Linux kernel offers anything close to the real-time performance and microsecond latencies that QNX offers, even with RTAI and other extensions.
QNX also scales down to smaller systems than Linux. I don’t see where the overhead from message passing comes in.
So it doesn’t matter one bit whether the kernel is monolithic, a microkernel, or a mishmash of both.
The architecture doesn’t matter; the implementation does. No one OS or architecture can deal with all the niches.
Actually, I was thinking about Minix3:
http://lwn.net/Articles/220255/
Thanks for pointing that out. I wasn’t aware that, according to the correct nomenclature, the Linux kernel is indeed monolithic.
Which has been improved on for years and seems to work extremely well. But let me not interrupt your trolling…
As opposed to yours?
The NT kernel is *not* the same as the original NT kernel. It has been updated and changed over the last 15+ years, just like any other OS. The kernel in Windows is pretty damn good. It’s the userland that sucks ass.
But don’t let facts get in the way of your trolling.
(I’m a mostly neutral commentator; I use NEXTSTEP at home and WM6.1 when on the go.)
My understanding (and our experience) is that the NT scheduler doesn’t handle resource contention as well as Linux, which itself seemed in the past to lag behind FreeBSD and, I assume, Solaris.
We have one NT server in house with five users and tepid response, despite it being a $14,000 machine. Our factory (50 simultaneous users) runs under Linux on a $400 server, and it seems crisper.
Your mileage may vary, but it seems telling that Microsoft delivers its updates via Akamai Linux servers. Why didn’t they use Windows Server?
What kinds of apps are running on the two servers? Are you talking about graphical environments, databases, or SSH sessions?
NT doesn’t have a fair scheduler, so it’s possible for a high-priority task to starve lower-priority things for a significant amount of time, but this decision was made to improve throughput by reducing the overall context switch rate, I think.
On the NT server, we’re running an accounting reporting system. There are five PCs connected, but usually only one is using the system at a time. We use it for reporting only. It has a gigabit network connection, which, for some odd reason, was required.
The Linux system is a factory system which handles customer maintenance, order entry and maintenance, packaging and shipping, the factory office, the timeclock, and factory production. It’s the bread and butter of the operation. There are usually around 200 connections to the database (200 different applications running at the same time), some of which are monitors that poll at regular intervals of between 1 and 15 seconds.
Amazingly, we run all this from a $400 Dell workstation working as a server.
Our conclusion is: if you have work to do and your application is available for Linux, or you write it yourself (as we did), Linux is very well suited.
If you have a specific application which requires Windows or it’s not in the critical path, consider Windows.
Thanks for giving the details of your operation. It doesn’t sound like that’s a particularly heavy workload, so I doubt you’re hitting a design limitation of Windows Server. It seems like a case of misconfiguration.
Yes. It must be his configuration, and the user’s fault. That old cop out seems lamer and lamer every time I see it used.
It very well could be a config issue. It was set up by the vendor of the software. They seemed competent, but who knows.
I have a friend who worked in a shop of similar size (50 workstations) which was a pure Windows shop. They have a full-time network admin, and they had to get a 70,000 euro server to handle the load, mostly because they are using Terminal Server. For whatever reason, it seems a lot of Windows shops require a lot of maintenance and heavy-duty servers.
I don’t know about Microsoft, but if we know one thing, it’s that Linux developers clearly are not lawyers.
On the other hand, you can always be happy that Microsoft integrated a GPU memory manager into the NT kernel two years ago (the much-complained-about new Vista graphics driver model exists to support GPU memory management and GPU thread scheduling in a generic way in the kernel). So they clearly do miss the “release often” part, but at least when it comes to graphics technology they have been good about being early over the last 10 years. Overall the graphics/DirectX team has clearly come a long way since the early days.
Still, it is good to see the Linux kernel guys taking a clearer stance on the graphics side of things. With graphics hardware becoming more and more general-purpose, its handling clearly has a place in the core kernel rather than in the current odd mix of kernel/module/X11 responsibilities.
Indeed, those improvements are impressive.
The changes I like most are those towards cleaner, less fuzzy source code. In many parts the kernel can be considered feature-complete, so the effort spent scrubbing and/or rewriting earlier work is well invested. Nobody wants to see the kernel dying a slow, Netscape-like death of bloat, after all.
Well done, Linus & friends..
Linux kernel development is breathtaking: the number of new additions and the amount of new functionality are simply amazing, and in my opinion development will only continue to grow with the economic downturn. Because software licensing is so expensive, open source will flourish, offering new possibilities for companies struggling to keep the doors open: they can cut costs in this area rather than laying off workers.
Ext4, from what I read, is really neat, and another great advancement is the SSD enhancements. The SSD arena is the area of most interest to me; I have seen companies offering SAN SSD units, and it will be interesting to watch how speed and throughput increase over time.
The play by play (advertised) details sound amazing. But overall, I still get the same feeling that I get when I see “New and improved!”, “33% more!”, and “33% less fat! 25% fewer calories!” on the labels at the grocery store. If everything is always so amazingly much better, why are we not all living like kings and queens?
Yeah, I think I see Andrew Morton out there quietly furrowing his brow and wondering about the ignored regressions.
We do live like kings and queens, compared to the kings and queens of yesteryear. It just happens that the wealthy and powerful are also living relatively better off than before, so it gives us the illusion that we are still living in a craphole, when we are not.
That depends on what you mean by yesteryear. Real wages are lower in the US today than they were a decade ago. If yesteryear means a thousand years ago then yes, we are living better than kings and queens of yesteryear.
Just look at the dinky little houses entire families lived in during the 1950s. Houses that they could *afford*. My parents managed to pay up-front for their second house in Tucson, AZ (about $10,000 as I recall); my father was in the Air Force and my mother was a nurse. That was two bedrooms, one bath, no basement and no garage.
Now it seems no one is happy unless they have three bedrooms, two baths, a big kitchen and living room, and a two-car garage, which runs over $200,000 around here and generally requires about a 20-year mortgage.
I think that if people lived without so much debt, like people did in the 50’s, they’d be better off.
As for “real” wages, I don’t know your comparison index. If it is television size, then no way, we are much better off today. If it is a food or real-estate price index, then “real” wages are always and forever going to go down. More people are competing for land and for the food supply grown on what land is left for farming.
I agree. But would like to clarify my original analogy. Despite all the “New and Improved” we see on a daily basis, a jar of mayonnaise still looks like a jar of mayonnaise to me. And the contents taste like… well… mayonnaise. I guess Queen Elizabeth II probably does eat mayonnaise sometimes, though.
Where I live you can’t buy a $200,000 home that has everything you describe. You would have to pay at least $300,000 for a home like that. This is vastly different from 10 years ago, when you could buy a home like you describe for under $200,000. The homes haven’t gained real value in ten years’ time; rampant inflation is what causes home prices to skyrocket like that. Couple that with people who re-financed against their newly “higher”-priced homes, which then crashed in value, leaving them owing more on their house than it is now worth, and you start to see the cracks in the economic foundation.
Unfortunately, in a system like ours there has to be debt to create more money. Currency isn’t backed by anything material, and the only way to create more wealth is simply to make more money in the form of loans. Of course this leads to inflation, and when inflation outstrips increases in real earnings (by which I mean average income adjusted for inflation), or in this case an actual decrease, we are bound to be in for some trouble as the middle class backslides.
This assumes that population will continue to grow. Most first world countries have very slow population growth and some even have declining populations.
If you’re an American or from most European countries, you do live like a king/queen from way back in the day. Actually, you live way better than most kings/queens used to.
It’s a good thing we have history books to see these things, because for most people “history” starts the day they are born.
FOSS development continues apace. It rolls along at a very healthy pace (compared to the competition) despite the intense efforts of those competitors to stop it or even slow it.
Right now what we seem to have coming in FOSS for early next year is beautiful, functional, stable, robust, secure and best-of-class-performing desktop software:
http://openmode.ca/2008/12/why-you-might-be-using-linux-in-2009/
http://nuno-icons.com/images/wall/snapshot3.jpg
Desktop software that will run extremely well on even quite modest hardware specifications.
The challenge is to get people to try it, or perhaps even get them to become aware of it.
Small inroads in this direction are starting to be made.
I absolutely agree with you. I also can’t wait for Btrfs; it will rock =D