Linus Torvalds has announced the first release candidate for version 2.6.22 of the Linux kernel, noting that the changelog itself for this release is just too big to put on the mailing list. According to the kernel-meister himself: “The diffstat and shortlogs are way too big to fit under the kernel mailing list limits, and the changes are all over the place. Almost seven thousand files changed, and that’s not double-counting the files that got moved around. Architecture updates, drivers, filesystems, networking, security, build scripts, reorganizations, cleanups… You name it, it’s there.”
“You name it, it’s there.”
Then I’d like to name 100 new regressions.
No doubt it’s inevitable, but the changes/improvements can’t sit in -mm forever. Even if the freakiest ones remain marked as experimental in the config, at some point they have to be pushed into mainline. One just hopes they were deemed stable enough to merge.
Still, I do find the tendency for breakage/regression/driver deprecation to be a little at odds with the oft-stated implication that hardware vendors should leave drivers to the kernel community to maintain for them. Even the devs are complaining about the lack of resources in terms of people willing to do the dirty work in maintaining older code. Andrew Morton has brought this up in the past, but it seems to have been easily dismissed by the community-at-large.
As much as I love Linux, the API is a far cry from the stability of Irix and Solaris, where they guarantee forwards and backwards compatibility (correct me if I’m wrong, but I _think_ that this extends into the kernel and not just userland stuff).
You mean a stable in-kernel API? That’s nonsense, and here’s why: http://www.kroah.com/log/linux/stable_api_nonsense.html
Yeah, but you do know that that is a very idealistic if not arrogant point of view, right? Not everybody can or wants the driver in the mainline kernel, given the nature of the GPL.
“Yeah, but you do know that that is a very idealistic if not arrogant point of view, right?”
Linux, by its very nature of being Free, is entirely idealistic. It is this idealism that has brought us as a community as far as we are in terms of features, speed, and stability. It is this idealism that will push us even further in the coming generations.
“Not everybody can or wants the driver in the mainline kernel, given the nature of the GPL.”
Eh? Just because it’s in the mainline kernel doesn’t mean it’s necessarily GPL on its own. Many drivers in the kernel are under a dual BSD/GPL or MIT/GPL license. That is, if you removed them from the kernel and used them in a standalone fashion in your software, you could then write the software complying with the BSD or MIT licenses (therefore perhaps making it proprietary).
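For the curious, the dual licensing is declared right in the module source via the MODULE_LICENSE macro. Here is a minimal sketch; the module name and body are made up, only the license string is the point:

/* hello_dual.c - minimal sketch of a dual-licensed kernel module.
 * The driver name and contents are hypothetical; only the
 * MODULE_LICENSE() string matters for this example. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_dual_init(void)
{
	printk(KERN_INFO "hello_dual: loaded\n");
	return 0;
}

static void __exit hello_dual_exit(void)
{
	printk(KERN_INFO "hello_dual: unloaded\n");
}

module_init(hello_dual_init);
module_exit(hello_dual_exit);

/* "Dual BSD/GPL" tells the kernel, and anyone reusing the code,
 * that this module may be taken under either license. */
MODULE_LICENSE("Dual BSD/GPL");
MODULE_DESCRIPTION("Illustration of dual BSD/GPL licensing");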
Yeah, but you do know that that is a very idealistic if not arrogant point of view, right? Not everybody can or wants the driver in the mainline kernel, given the nature of the GPL.
No. This is a very simple equation. Most of the Linux drivers are in the tree. Only a few, mostly closed-source drivers, are out of the tree.
So Linux just optimizes for that. It’d be stupid to optimize the workflow and spend time creating and maintaining a binary kernel ABI if most of the drivers do not need an ABI at all.
Let us say today there are 100 drivers in the kernel, tomorrow 1000, and then 10000. According to you, changing the kernel ABI is not a problem? Who the f^^k is going to test all those 10000 drivers in the world?
At least keeping the kernel ABI stable guarantees that you haven’t introduced any bugs in the driver code itself.
http://www4.osnews.com/permalink?239517
http://www4.osnews.com/permalink?239553
You were already refuted once, and the same still stands true. You still haven’t responded. Give it a rest, posting the same biased opinion piece over and over won’t make it true.
> Even the devs are complaining about the lack of resources in terms of
> people willing to do the dirty work in maintaining older code. Andrew
> Morton has brought this up in the past, but it seems to have been easily
> dismissed by the community-at-large.
Probably because Morton and others are working full-time to get things working anyway. The community at large won’t realize the problem until things fall apart.
I’d be surprised if it was only 100. Sounds like a small number to me.
ZFS?
If you follow the Linux kernel mailing list you’ll find that most kernel hackers don’t care about or even bother with ZFS. ZFS is mostly a user request, certainly not a kernel development goal. Linux kernel hackers don’t find ZFS as interesting as some people do, nor do they feel that Linux is inferior. Most of them do agree that Sun’s marketing is great and has a lot of effect, though.
If you follow the Linux kernel mailing list you’ll find that most kernel hackers don’t care about or even bother with ZFS. ZFS is mostly a user request, certainly not a kernel development goal. Linux kernel hackers don’t find ZFS as interesting as some people do, nor do they feel that Linux is inferior. Most of them do agree that Sun’s marketing is great and has a lot of effect, though.
Yes, of course, the famous Not Invented Here syndrome.
Those pesky users and their requests can be annoying to the enlightened. Everyone knows all you have to do is use LVM2 for snapshots.
Everyone who has never used ZFS, anyway.
Not to mention the value of automatic, transparent checksumming on block IO to detect or correct corruption before it is too late…
You would have a lot more luck asking for them to implement automatic, transparent checksumming on block IO in the native Linux filesystems than asking for ZFS. Adding ZFS to the kernel is ILLEGAL and won’t happen unless Sun gives the OK. So try focusing on the possible rather than endlessly bringing up the impossible.
So try focusing on the possible rather than endlessly bringing up the impossible.
Thank you for saying this. Negativity all the time is a real downer. People don’t solve problems by complaining about what is wrong; they solve problems by actively working on real solutions.
Who the f^^k is going to test all those 10000 drivers in the world?
You and I are going to start. Get the latest development kernel and build it. Try it out. If you find bugs then submit them. The more people that try out the kernel and test it the more stable it becomes. Welcome to the open source community where you are part of the process at making things better.
You and I are going to start. Get the latest development kernel and build it. Try it out. If you find bugs then submit them. The more people that try out the kernel and test it the more stable it becomes. Welcome to the open source community where you are part of the process at making things better.
Most people don’t want to be beta testers or free kernel testers. I for one make my living by developing Windows drivers. I would waste a lot of my time and money if I hit kernel bugs. Sorry, but the proposition is meaningless when people are trying to make a living off of the OS.
Think of a scenario: hey, I bought a new device, but its drivers are only in the latest kernel, and I can’t upgrade because I get a kernel panic.
Sorry, but this IMO does not fly for consumers of the OS.
Think of a scenario: hey, I bought a new device, but its drivers are only in the latest kernel, and I can’t upgrade because I get a kernel panic.
Or think of this scenario: hey, I bought a new device, but its drivers are only on a CD, and I can’t install them because I get a kernel panic (BSOD).
The exact same thing can happen in Windows, and in fact I think it is actually more likely. Most of the hardware developers make sure their drivers are OK, but there will always be a few companies out there that aren’t up to it.
>Who the f^^k is going to test all those 10000 drivers in the world?
> You and I are going to start.
> Most people don’t want to be beta testers or free kernel testers. […] Sorry, but this IMO does not fly for consumers of the OS.
Your reply is modded down to oblivion, while it’s pure common sense.
> You and I are going to start. Get the latest
> development kernel and build it. Try it out. If you
> find bugs then submit them. The more people that try
> out the kernel and test it the more stable it becomes.
> Welcome to the open source community where you are part
> of the process at making things better.
If someone has a problem with kernel testing, here is the “Linux Kernel Tester’s Guide” (translation unfinished): http://www.stardust.webpages.pl/files/handbook/handbook-en.pdf
>>Not to mention the value of automatic, transparent checksumming on block IO to detect or correct corruption before it is too late…<<
Note that this fixes only one failure path; there are others: your disk could write to the wrong place, a block could become corrupted after it is written for no reason, etc.
You still need a fsck and backups to fix those other cases.
So ZFS does not prevent this kind of corruption either?
Proposition:
checksumming + cow + ext4 – journal = ext5?
So ZFS does not prevent this kind of corruption either?
Proposition:
checksumming + cow + ext4 – journal = ext5?
If you are running a non-RAID-protected setup and there is a checksum error, ZFS will report it but cannot correct it, since the information is essentially gone.
If you are running mirroring or raidz, it will regenerate the corrupted block to complete the request and log the error.
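To make that concrete, here is a rough, self-contained sketch of how checksumming plus mirroring lets a read both detect and repair a bad copy. This is not actual ZFS code; the toy checksum, layout, and names are all made up for illustration:

/* Sketch (not ZFS code) of a self-healing mirrored read: checksums
 * detect a silently corrupted copy, the surviving mirror supplies
 * good data, and the bad copy is rewritten. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 16
#define NMIRRORS   2

static uint8_t mirror[NMIRRORS][BLOCK_SIZE];  /* two copies of one block */
static uint64_t stored_checksum;              /* kept in parent metadata */

static uint64_t checksum(const uint8_t *buf)
{
	uint64_t sum = 0;  /* toy checksum; ZFS uses fletcher2/4 or SHA-256 */
	for (int i = 0; i < BLOCK_SIZE; i++)
		sum = sum * 31 + buf[i];
	return sum;
}

/* Return a verified copy of the block, healing any bad mirror. */
static int read_block(uint8_t *out)
{
	for (int m = 0; m < NMIRRORS; m++) {
		if (checksum(mirror[m]) != stored_checksum) {
			printf("mirror %d: checksum mismatch\n", m);
			continue;               /* try the other copy */
		}
		for (int fix = 0; fix < NMIRRORS; fix++)   /* heal bad copies */
			if (fix != m && checksum(mirror[fix]) != stored_checksum)
				memcpy(mirror[fix], mirror[m], BLOCK_SIZE);
		memcpy(out, mirror[m], BLOCK_SIZE);
		return 0;
	}
	return -1;   /* no good copy left: report, cannot correct */
}

int main(void)
{
	uint8_t data[BLOCK_SIZE] = "important data";
	memcpy(mirror[0], data, BLOCK_SIZE);
	memcpy(mirror[1], data, BLOCK_SIZE);
	stored_checksum = checksum(data);

	mirror[0][3] ^= 0xFF;   /* silently corrupt one copy */

	uint8_t out[BLOCK_SIZE];
	if (read_block(out) == 0)
		printf("read ok; mirror 0 healed: %s\n",
		       checksum(mirror[0]) == stored_checksum ? "yes" : "no");
	else
		printf("no good copy left\n");
	return 0;
}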
And I agree, ext4 is looking good. Solaris UFS and Linux extX are a lot alike in technology and in the fact that they both evolved into reasonably fast and stable, if not technologically flashy, filesystems. To me, that is the real lesson of ZFS; to make a dent in that sort of (totally reasonable) inertia, you have to offer a lot of genuine improvements at the same time. Otherwise, I think it is wise to stick to compatibility as they are doing with ext4.
I’ve said nothing about LVM2’s greatness (nor have I said the kernel hackers said it) or block checksumming; that was all you. I only said that most kernel hackers seem to think that there’s a lot of fanboyism around ZFS.
Linux had a deadline IO scheduler and IO priorities (which work for all filesystems, not just for a single filesystem like ZFS due to its so-called “layering violation”) long before Solaris and ZFS did, for example, and nobody spent thousands of dollars telling the world how great it was.
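For reference, IO priorities are assigned per process at the block layer, which is why they apply no matter which filesystem sits on top. A rough sketch using the raw ioprio_set(2) syscall (glibc has no wrapper for it, so the constants are spelled out here as they appear in the kernel headers; double-check them, and whether SYS_ioprio_set is defined, on your own system):

/* Lower this process's IO priority to best-effort level 7 (lowest),
 * using the ioprio_set(2) syscall directly. Constants mirror
 * include/linux/ioprio.h; verify them against your headers. */
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define IOPRIO_CLASS_SHIFT        13
#define IOPRIO_PRIO_VALUE(cl, d)  (((cl) << IOPRIO_CLASS_SHIFT) | (d))
#define IOPRIO_CLASS_BE           2   /* best-effort scheduling class */
#define IOPRIO_WHO_PROCESS        1

int main(void)
{
	/* who = 0 means "the calling process" */
	long ret = syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
			   IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7));
	if (ret != 0)
		perror("ioprio_set");
	else
		printf("IO priority set to best-effort, level 7\n");
	return 0;
}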
I’ve said nothing about LVM2’s greatness (nor have I said the kernel hackers said it) or block checksumming; that was all you. I only said that most kernel hackers seem to think that there’s a lot of fanboyism around ZFS.
Linux had a deadline IO scheduler and IO priorities (which work for all filesystems, not just for a single filesystem like ZFS due to its so-called “layering violation”) long before Solaris and ZFS did, for example, and nobody spent thousands of dollars telling the world how great it was.
Yes, I pulled the LVM2 example directly from the mailing list: http://kerneltrap.org/node/8066
And this is exactly what I am talking about: one (somewhat influential) guy comes up with the phrase “layering violation”, a nice, glib sound bite to be sure, and people cling to it for months afterward. What I would love to see is a detailed explanation of why they think it is true and what difference it makes; that is, a real debate versus hand waving.
Regardless, my point is that many Linux people may indeed be dismissive of ZFS. But if you read what they write, it is often apparent that they haven’t even tried it. The LVM2 example is perfect because LVM2 snapshots (VxVM snapshots, etc) are a different idea than ZFS snapshots (NetApp snapshots, etc), the term snapshot being inconveniently overloaded, and this is immediately apparent to anyone who has used said software.
Edit: Another nice example from the above link: Several people suggest that ZFS doesn’t address the large filesystem fsck problem and that chunkfs is needed for this. ZFS has no fsck program and does not run fsck.
I wouldn’t say that ZFS never runs fsck; it uses scrubbing to perform the same task. So instead of the large filesystem fsck problem, ZFS has the large filesystem scrub problem. However, the fact that ZFS can scrub online is a real advance.
I wouldn’t say that ZFS never runs fsck; it uses scrubbing to perform the same task. So instead of the large filesystem fsck problem, ZFS has the large filesystem scrub problem. However, the fact that ZFS can scrub online is a real advance.
I agree.
But if you read the lkml link, the poster, dismissing ZFS, said:
The fsck for none of these filesystems will be able to deal with
a filesystem that big. Unless, of course, you have a few weeks
to wait for fsck to complete
Which you and I know is not true, since, as you say, a scrub runs on a mounted FS.
Of course, in this context, you could call it “large filesystem scrub problem” but I would call that a “disks are not infinitely fast” problem, since the point of a scrub is to touch all allocated blocks in the FS and this is, in any conceivable implementation, IO bound.
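A quick back-of-the-envelope illustration of that point; the pool size and throughput below are just made-up numbers, pick your own:

/* Back-of-the-envelope scrub time estimate: a scrub has to read every
 * allocated block, so wall-clock time is roughly data / bandwidth. */
#include <stdio.h>

int main(void)
{
	double allocated_tb = 10.0;       /* hypothetical: 10 TB of allocated data */
	double throughput_mb_s = 300.0;   /* hypothetical aggregate read bandwidth */

	double seconds = (allocated_tb * 1024.0 * 1024.0) / throughput_mb_s;
	printf("estimated scrub time: %.1f hours\n", seconds / 3600.0);
	/* ~9.7 hours with these numbers: far from "a few weeks", but clearly IO bound */
	return 0;
}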
My overall point here is that a lot of the people who dismiss ZFS do not really know much about it besides that it did not originate in Linux.
No way. Not only would the Linux community need to re-implement it using a clean-room procedure, but we’d also need to steer clear of the dozens of patents Sun holds on ZFS. Even if Sun follows through with their plans to license OpenSolaris under the GPLv3, it still can’t be ported to Linux without being completely re-written. Unless Sun wants to distribute a Linux distribution with ZFS support, they will not license it under the GPLv2, and I’m inclined to think they want to keep this as a differentiating feature for Solaris.
The way forward for enterprise storage on Linux appears to be doublefs, which combines a ZFS-style COW filesystem with a log-structured filesystem as an on-disk write buffer. It should offer the same kinds of features as ZFS (100% metadata consistency, atomic transactions, inexpensive snapshots, etc.) while providing reduced write transaction latency. The idea was conceived leading up to the 2006 OLS by a former ZFS developer and other Linux hackers. I’d be surprised to see it hit the mainline before the end of 2008 unless some huge corporate Linux supporter decides that this is crucially important for competing against Sun…
The way forward for enterprise storage on Linux appears to be doublefs, which combines a ZFS-style COW filesystem with a log-structured filesystem as an on-disk write buffer. It should offer the same kinds of features as ZFS (100% metadata consistency, atomic transactions, inexpensive snapshots, etc.) while providing reduced write transaction latency. The idea was conceived leading up to the 2006 OLS by a former ZFS developer and other Linux hackers
I don’t like the sound of that. I can see the lawsuits coming already.
The way forward for enterprise storage on Linux appears to be doublefs, which combines a ZFS-style COW filesystem with a log-structured filesystem as an on-disk write buffer. It should offer the same kinds of features as ZFS (100% metadata consistency, atomic transactions, inexpensive snapshots, etc.) while providing reduced write transaction latency. The idea was conceived leading up to the 2006 OLS by a former ZFS developer and other Linux hackers. I’d be surprised to see it hit the mainline before the end of 2008 unless some huge corporate Linux supporter decides that this is crucially important for competing against Sun
Can you link this? I am confused, since this (doublefs) does not sound (in detail) a lot like what you describe (though it may be a good idea):
http://lwn.net/Articles/190224/ (scroll down a bit)
On the other hand this (dualfs) sounds more like what you describe, but still not quite:
http://lwn.net/Articles/221841/
Neither mentions snapshots, though.
The separation between the filesystems isn’t as explicit as I described. That’s probably the cause for confusion. There’s only one copy of the data blocks. Only metadata blocks are stored in the log. So it’s like a journal, except the logged metadata is the lookup metadata for as long as necessary until the in-place copy is updated, at which time the logged copy sticks around for redundancy and error correction.
All COW filesystems provide snapshots. The snapshot functionality just sort of falls into place as a consequence of the design, so you basically get it for free.
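A toy illustration of why that is (nothing to do with the real doublefs or ZFS code): since a copy-on-write update never overwrites a live block, a snapshot is just a saved copy of the small block-pointer table.

/* Toy copy-on-write "filesystem": one file made of NBLOCKS block
 * pointers. Writes allocate a fresh block and update the pointer
 * table; a snapshot is merely a copy of that table, since old blocks
 * are never overwritten in place. Purely illustrative. */
#include <stdio.h>
#include <string.h>

#define NBLOCKS   4
#define BLOCKSZ   32
#define POOLSIZE  64

static char pool[POOLSIZE][BLOCKSZ];   /* the "disk" */
static int next_free = 0;              /* naive allocator, never reuses blocks */

struct table { int blk[NBLOCKS]; };    /* block-pointer table (the "root") */

static void cow_write(struct table *t, int idx, const char *data)
{
	int newblk = next_free++;              /* always write to a fresh block */
	strncpy(pool[newblk], data, BLOCKSZ - 1);
	t->blk[idx] = newblk;                  /* only the pointer table changes */
}

int main(void)
{
	struct table live = { { -1, -1, -1, -1 } };

	cow_write(&live, 0, "version 1 of block 0");

	struct table snap = live;              /* snapshot: copy the pointers, cheap */

	cow_write(&live, 0, "version 2 of block 0");

	printf("live: %s\n", pool[live.blk[0]]);   /* version 2 */
	printf("snap: %s\n", pool[snap.blk[0]]);   /* still version 1 */
	return 0;
}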
The separation between the filesystems isn’t as explicit as I described. That’s probably the cause for confusion. There’s only one copy of the data blocks. Only metadata blocks are stored in the log. So it’s like a journal, except the logged metadata is the lookup metadata for as long as necessary until the in-place copy is updated, at which time the logged copy sticks around for redundancy and error correction.
All COW filesystems provide snapshots. The snapshot functionality just sort of falls into place as a consequence of the design, so you basically get it for free.
Can you provide a link, a whitepaper, something?
The link to doublefs which I previously provided does not even say that it is a COW filesystem, only that they (to paraphrase) want the advantages of a COW filesystem without the disadvantages. But some of what you say here sounds a lot more like dualfs, at any rate.
From the previously provided link, it doesn’t sound like doublefs even exists beyond “Zach Brown and Aaron Grier put some effort into prototyping doublefs”. Dualfs seems to have a working kernel patch.
(Off topic slightly)
How about you look at this really new project:
http://ext3cow.com/ EXT3 copy on write
(Off topic slightly)
How about you look at this really new project:
http://ext3cow.com/ EXT3 copy on write
Pretty neat. I thought this project went the way of most academic CS papers (published and forgotten), but it looks like it is back and current.
(this is all I had seen before, a 2003 publication)
http://citeseer.ist.psu.edu/peterson03extcow.html
Microsoft patents, that is.
No, only 48 in the kernel. And they are not confirmed but merely alleged by Microsoft.
And none of them are valid, since software patents aren’t valid everywhere; remember, there’s a world outside the USA.
Add to it the fact that Microsoft is patenting the IP of FLOSS developers. This IP-theft has been confirmed.
Just a joke, sorry. And apparently not a very funny one, either!
Actually that was RC1 of the joke. Our kernel of truth humor engineers are hard at work improving it as we speak.
I wonder if we can pull a Microsoft v. AT&T and claim that the “parts” of Linux can be developed wherever, but if they are “assembled” in jurisdictions that don’t recognize software patents, then the patents in those parts aren’t valid in the assembly as a whole. You want a Linux distribution without patent risks? Just make sure the master build server is located on some tropical island nation with friendly laws.
IANAL…
One would assume that since Linux re-implements UNIX, it therefore is using technology that is either owned by UNIX vendors or exists in the form of prior art, which would invalidate any of Microsoft’s claims. That’s unless Microsoft is claiming invention of UNIX.
And none of them are valid, since software patents aren’t valid everywhere; remember, there’s a world outside the USA.
Whether they’re valid or not in Europe is irrelevant, as long as U.S. firms such as Red Hat, Novell, and others are shipping supposedly infringing code. I doubt that it will happen but, in theory, MS could bring suit against U.S. based Linux distro firms, and a federal court could compel them to remove the offending code, if it’s found to infringe. So, unless there’s a U.S. code fork — and a rest-of-the-world code fork — it could wreak havoc in the Linux community.
“Whether they’re valid or not in Europe is irrelevant, as long as U.S. firms such as Red Hat, Novell, and others are shipping supposedly infringing code.”
Sure it is relevant. It only affects *those* companies and not Linux itself. Now might be a good time to consider not hosting the source in the U.S., though (if it is hosted there, I’m not sure).
“So, unless there’s a U.S. code fork — and a rest-of-the-world code fork — it could wreak havoc in the Linux community.”
Nothing like a little fear mongering to brighten up the day, eh?
if it’s found to infringe. So, unless there’s a U.S. code fork — and a rest-of-the-world code fork — it could wreak havoc in the Linux community
And if pigs could fly, it would wreak havoc in the barnyard. That’s a pretty big ‘if’, considering MS’s record of bs and the recent Supreme Court decision basically saying the basis for granting and supporting patents has been too low and many are just bs. That makes MS’s claims bs squared.
On top of that, I think MS figured their best shot at linux was through SCO, and look how well that turned out.
These “patents” are at best a fallback plan to spread FUD. Trouble is, they’ve cried wolf one too many times to be believed. And don’t tell me Novell believed MS’s patent threat. MS paid Novell millions to take the deal. If anyone wants to pay me millions in return for them not suing me for a patent of theirs that I may infringe, I’ll take the money.
It’s a pity that the OSNews guidelines for moderation don’t include modding down for inaccuracies. Prove that Microsoft has 235 VALID patents being infringed by the Linux kernel. If you’re just believing what Microsoft says, then you’re a fool.
Dave
My apologies for my previous post; I didn’t realise you were being humorous.
Dave
It’s true: the kernel is progressing fast, the patches are big, etc.
But let’s face it: things broke just too often lately. Almost anyone working with the kernel, i.e. distributing it, complains.
The kernel developers should perhaps reconsider whether they really want to go further down this path. They could lose more than they win, and one thing they could lose is the good attitude. If everyone in the herd starts to get the Web 2.0 feeling (rush the release, everything that shines, beta doesn’t matter, bugs will go away on their own eventually), the people who tend to care about things and polishing will drift away.
Aren’t the kernel developers maintaining 2.6.15 as a “stable” kernel for that very reason? If all the distros are ignoring it and just getting the latest and greatest, then doesn’t that mean that they think the fast progress is worth it?
It is 2.6.16, take a look:
http://www.kernel.org/pub/linux/kernel/v2.6/
And about the state of the kernel, I agree: things start to fail to compile and I get more kernel panics than ever; it is time to think about 2.7 brainstorming. I removed 2.6.20 from Ubuntu Feisty and recompiled 2.6.18 myself. I keep 2.6.18 on my Gentoo box too; anything after 2.6.18 makes my system unstable (I don’t use Beryl, that’s pure kernel panic).
I would really, really hate it if they went back to a 2.7 release like they did with 2.5. I don’t think I’m alone here either.
You never really answered my question. If the new kernels are so unstable then why not just use 2.6.16? It probably gets just as much attention as 2.6 would if they started a 2.7 branch.
Thanks for pointing that out. I just failed to notice it, especially as nothing is written about it on kernel.org.
It seems to be exactly what I wished. Selecting a “good” release once in a while and keeping it stable and bugfixed, so one can rely on it.
Adrian Bunk is running that show, and it’s up to release 2.6.16.5x or so.
Which is another testament to the greatness of the development model. You are completely free to choose how close to the edge you dwell. No mugger deciding where, when, and for what price you will be forced into an upgrade.
Things broke just too often lately
Like what? (And I don’t doubt there are regressions.)
I’ve had no problems with the kernel in the last few years. The development model seems to work. There’s a reason why features are only merged during the first two weeks and the following two months or more are spent stabilizing. It’s certainly much better than the “keep a stable version, start a development version, keep developing it for two years, and after two years release a new stable version that is full of crap and will take another year to stabilize” model.
The Linux kernel has the big advantage of merging new features gradually, instead of once every n years like most projects do. Because only a few features are merged each time, it’s much easier to debug, fix and stabilize them than a big pile of crap that hasn’t been tested for two years and is full of thousands of new things.
For example, the tickless feature rewrote a LOT of the x86 timer code, and the scheduler. Very low-level code. Code REALLY prone to create bugs. That was some major feature. And it works. There were only some marginal regressions, which are being tracked and fixed for -stable. I’d say it’s a big success.
Notice that OpenSolaris is also merging new features rapidly; in fact, OpenSolaris is the “unstable” Solaris tree. There are reasons why OpenSolaris exists, and using the community as beta testers is one of them. So it’s not just Linux.
I didn’t want to question it in general, and I also know that 2.4/2.5 didn’t work as well as the new development model.
I just missed some “step back”, because there were many releases which had “silly” bugs in them, making some hardware fail completely, and so on.
I heard about 2.6.16 being a “maintained” release and think this is going in the right direction. But not maintaining the same release for two years while everybody starts backporting.
I feel classifying OSOL as “unstable” and comparing it to Linux is rather silly. I’d trust OSOL CE on a server more than I’d trust Ubuntu 6.06LTS, we’ll put it that way.
I also think the development model is working. In traditional software development, you wouldn’t think of changing 7,000 source files every two months in any kind of software project, let alone a kernel. But that’s what’s happening in the Linux kernel these days, and although some objective measurements show some quality decreases, it isn’t anywhere near as bad as you’d expect. The quality of the Linux kernel in spite of the ridiculous breakneck pace of development is astonishing.
But I think they have to slow it down a tad. They’re trying to manage the patch backlog, and they’re succeeding for the time being. But the recent torrent of patch submissions isn’t going to abate. It’s going to turn into a deluge, and then an all-out assault. As Linux really hits its stride, with ISV/IHVs and enterprise IT really buying into it, the current pace will seem like a trickle in hindsight. Will the current model scale, or will it creak under the enormous pressure?
They’re going to have to let the backlog grow, and they’re going to have to turn to automated testing. Everyone’s going to have to commit unit test code along with their patches. Perhaps someone will make a nifty KVM-based screensaver that makes it brain-dead simple for everybody to use their spare resources to test development builds. We’ve got to get smarter about the way we ensure quality, because in two years we’re going to want to change over 10,000 source files per release cycle.
I know 2.6.16 is being maintained, but is there a plan yet for another stable/maintained kernel, such as 2.6.24 for example?
Simply I would like to see a stable release every six months and a development release. “Testing” release does not sound good. In the stable release only errors and security holes should be added, no new features should be added. This is Red Hat policy. It is not something new.
In the stable release only errors and security holes should be added,
You meant should be fixed?
The menuconfig menu is now more logical. You don’t have to make your way through numerous sub-menus; just check or uncheck the category.
Although this doesn’t say much about other installations, I’m quite happy with this release of the kernel.
2.6.22-rc1 runs flawlessly so far with Nvidia’s NVIDIA-Linux-x86-100.14.03 beta driver.