“Con Kolivas is a prominent developer on the Linux kernel and strong proponent of Linux on the desktop. But recently, he left it all behind. Why? In this interview with APCMag.com, Con gives insightful answers exploring the nature of the hardware and software market, the problems the Linux kernel must overcome for the desktop, and why despite all this he’s now left it all behind.”
To cut to the chase of the article: on the one hand, ego makes you compete; OTOH, ego can lead to bad decisions by getting in the way of helpful cooperation. It is naturally at odds with friendliness. Indeed, Kolivas pegs ego as problem #1:
“If there is any one big problem with kernel development and Linux it is the complete disconnection of the development process from normal users. You know, the ones who constitute 99.9% of the Linux user base.
The Linux kernel mailing list is the way to communicate with the kernel developers. To put it mildly, the Linux kernel mailing list (lkml) is about as scary a communication forum as they come. Most people are absolutely terrified of mailing the list lest they get flamed for their inexperience, an inappropriate bug report, being stupid or whatever. And for the most part they’re absolutely right. There is no friendly way to communicate normal users’ issues that are kernel related. Yes of course the kernel developers are fun loving, happy-go-lucky friendly people. Just look at any interview with Linus and see how he views himself.”
I’ve mailed the kernel mailing list before with various boot failures I’ve had during a couple of -rc cycles and the responses I got were rather friendly.
The kernel mailing list is a list for *development*, for posting patches and discussing changes to code, but as a tech-savvy user (someone at least familiar with compiling the kernel), if you post enough information in a suitable manner you’ll be welcomed.
That’s really at odds with my experience back when I was first learning about Linux. Granted, it was a Red Hat specific IRC channel, but even when learning the very basics I got abused to no end because I didn’t “RTFM”.
The only problem was that I didn’t know how to access the man pages, and being new, searching for things on the Internet can be difficult when you don’t know the specific keywords to enter.
I think it’s come a long way since then, but I am not at all surprised people get nailed for asking questions others take for granted as assumed knowledge.
Well, if you don’t know how to access man-pages you are definitely not a “tech savvy user”
It’s true that developers are often harsh on newbies, but more often than not it’s because newbies are swallowing human resources like nobody else can. And that’s annoying – and being annoyed (even when a person does not intend to annoy) brings out the worst in people.
OTOH, show the devs that you know what you’re doing, and they will be more than friendly.
Maybe Linux should first acquire some real man pages, or any documentation at all, before crying RTFM at newbies. Even new developers are lost in the Linux limbo (it’s not the dance) because of a massive lack of documentation.
Depends whether you refer to Linux (the kernel) or Linux (the distribution).
The amount of documentation depends on the distribution. Some have better documentation than others.
In terms of the kernel alone, it’s true that documentation tends to lag behind. But the same could be said for *BSD as well as Microsoft products.
OTOH, you get the documentation for free, so if it doesn’t please you go make it better.
Every time a person complains about this, it’s always a person who forgets that Open Source is contributor-centric, not developer-centric nor user-centric. Contribute or shut up. It’s that simple. If you don’t want to take the time it takes, pay somebody who will.
But if they’re new, how can they contribute? The ‘take it or leave it’ attitude is a major hurdle that must be overcome.
By asking nicely what they can help with, stating which gifts they have and don’t have – making it clear that they are newbies.
Nope. It is an attitude which must be correctly understood. Many newbies don’t want to be taught but just want to make demands. The slogan is “Gimme what I want now! Or I’ll scream! I swear I’ll scream!”
This behaviour pisses off developers and occasionally this harms innocent people, but out of 100 harsh treatments of newbies only 1 is unwarranted.
It’s stupid that new people are expected to have such knowledge. They are new to the software for crying out loud.
Perhaps some people in the Linux community should get off their high horse, stop being asses and realise they too once didn’t know anything about Linux.
Having myself been a newb at many things, I think there is enough blame on both sides. Yes, experienced users can get annoyed at newbie questions. Yes, newbies can even be a source of amusement and the target of sarcasm. At the same time, it’s all too common to see new users acting like pricks and being generally annoying. New users tend to get frustrated when things don’t work, and take out their frustration by being rude and disrespectful. They often have an attitude that they are owed help, and get angry if they are not tended to immediately. They’ll often take shots at whatever thing they’re trying to learn, without really having an understanding of what they’re talking about. You’ll see this a lot with people who know X and are trying to learn Y, both of which are subsets of Z*. They think they have a general knowledge of Z, when they really just have knowledge of X. They try Y, find that their knowledge isn’t working, and instead of realizing that it’s just a matter of Y not working like X, they blame Y for being a bad example of a Z.
*) If this is confusing, replace X with “Windows” and Y with “Linux” and Z with “operating systems”, or X with “Java” and Y with “Lisp” and Z with “programming”, and you’ll get the idea!
I agree, however it is not unreasonable for the seasoned Linux users to expect newbies to search and read rather than post without first making an effort to figure it out on their own. There are copious amounts of tutorials, how-tos, and even forum posts answering many of the questions that get asked by newbies over and over (and over and over….).
When I see a newbie who asks a question that has an answer available on the web, but who at least explains that he tried to find the answer, I am much more inclined to be helpful than with “This is broke, what do I do?” style posts.
Kernel developers don’t need direct communication with normal users. All this will do is waste the time of the kernel developers.
The Linux distribution developers are the ones that should be dealing with the users and passing bugs upstream if they are relevant.
I’m not sure I agree that bug reports should come from Distributions only. The kernel is already heavily isolated from its users.
But I do agree that a mechanism needs to be in place that filters *timewasting* posts, although to be fair, scaring the users witless actually sounds like a good mechanism.
The point is Linux developers isolate themselves from GNU users. I actually don’t believe this is true for *all* developers, but with the exception of Thomas Winischhofer http://www.winischhofer.eu/linuxsisvga.shtml I cannot think of another developer that has an open *forum* for his driver, although I’m sure there must be some…but then I think Linux has little to do with desktop users anyway.
“There were so many subsystems being repeatedly rewritten that there was never-ending breakage. And rewriting working subsystems and breaking them is far more important than something that might improve the desktop right? Oops, some of my bitterness crept in there.”
Brilliant quote, and agreed wholeheartedly.
Seconded. Too bad the Linux devotees here confuse ‘constant breakage’ and ‘re-writes’ with innovation. Again, how many of the developers have any formal education when it comes to development processes?
I’ll be slammed for that, probably by butters or someone else claiming ‘but, but….’ and a whole lot of ad nauseam about evolution and Darwinian theory. The simple fact is, until you’ve actually *STUDIED* it and understand it, you’ll never truly understand the benefits of following a procedure when developing something.
There is a reason for System Analysis and Design in any IT related course – and it isn’t there so that the university can extract more money. For small projects you can get away with hackery, but when you have a large project with 100s of programmers, clients and so forth, you need structures and procedures in place to ensure that there is a logical flow to development.
The key thing is that refactoring *is* part of good *evolutionary* system analysis and design. And I think you’re missing something fundamental.
Every project that’s lived long enough and is actually useful starts to develop “Code Smell” ( http://en.wikipedia.org/wiki/Code_smell ). It starts simply enough. Your system was designed to handle some workload, but it gets increasingly used to handle another workload (e.g. from server to desktop to mobile and mainframe) because “it’s good enough”. Eventually you start carefully tweaking things so that nothing breaks, make a trade-off or two, and eventually end up with either hard-to-maintain code or the need to break the API or even the conceptual model (e.g. moving from Cartesian to polar co-ordinates).
Then you have a choice to make:
(1) Keep the old system and add a new one. With this approach you end up with the Function(), FunctionExt(), FunctionExt2(), FunctionExt64() monstrosities that you see on Windows, and you have to maintain a massive code base. More crucially, sometimes it’s discovered that the old API needs additional parameters in order to be secure, so maintaining compatibility might be a security risk.
(2) Implement the old one in terms of the new one and deprecate the old one. A cleaner approach than (1), but since we’re dealing with enterprise software, it might not be much different: you’ll need to keep the old system up for 2 to 10 years, a life-time in computer terms. You’ll also need to hobble your new system to keep track of things it otherwise might not have to, in order to implement that compatibility.
(3) Use a “buffer” API library layer and implement the “deprecation code of (2)” when it makes sense, but break things when it doesn’t. The “buffer” API library is versioned, and only applications that need the specific API will use it. If the demand for the crufty API is gone, that buffer API version may be dropped.
Approach (3) is the approach Linux uses, and the “buffer” API people rely on is glibc, the GNU C library. AFAIK, glibc has proper stringent unit testing, so if you rely on it to hide the kernel flux, you should be okay.
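To make approach (2) concrete, here is a minimal C sketch (hypothetical function names, not from any real API): the old entry point survives as a thin wrapper over the new, richer call, and a GCC attribute nags callers into migrating.

    #include <string.h>

    /* New API: takes an explicit length and a flags word. */
    int frobnicate2(const char *name, size_t len, unsigned int flags);

    /* Old API: kept working for compatibility, but flagged so that
     * anyone still calling it gets a compile-time warning. */
    int frobnicate(const char *name) __attribute__((deprecated));

    int frobnicate2(const char *name, size_t len, unsigned int flags)
    {
        /* ... the real implementation would live here ... */
        (void)name; (void)len; (void)flags;
        return 0;
    }

    int frobnicate(const char *name)
    {
        /* Old callers implicitly get the default flags. */
        return frobnicate2(name, name ? strlen(name) : 0, 0);
    }

Once nothing calls frobnicate() any more, the wrapper can be deleted outright, which is exactly the “drop the buffer version once demand is gone” step of approach (3).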
No, you’re missing something fundamental – and that’s reading the post. I never said that changing things when parts are broken is bad. It’s when working things are replaced with broken things – anyone remember the VM fiasco in the 2.4 series, with the split between Alan Cox’s and Linus’s kernel sources?
Again, that was a *good* change but not adequately tested – six long and painful releases later it *finally* became more stable – but it should *never* have been that painful had they actually spent the time to adequately test it before merging.
The issue was raised that parts were being changed for no good reason, replaced with code that was fundamentally broken from the ground up. That is the problem; it has nothing to do with backwards or forwards compatibility – it has to do with the most basic of processes. If they’re strictly enforced then only good, well-tested code is merged. It means a slower development cycle, but when something is merged there isn’t a massive scramble of ‘merge first and fix bugs later’ mentality – again, anyone remember the quick succession of Linux 2.4.x releases to fix race conditions, or the IDE corruption in the first 2.4.x series?
Oh yes, shocked? You bloody well should be. I have been around; I have been using Linux since God was a teenager, so nothing surprises me about what the interviewee talked about. It’s a rehash of what everyone else who has ever been involved has said about the development process.
For me, if I were in his shoes, I would have given up long ago – probably moved on to FreeBSD or something else, if that is the sort of attitude to development that many Linux developers hold.
Again, that was a *good* change but not adequately tested – six long and painful releases later it *finally* became more stable – but it should *never* have been that painful had they actually spent the time to adequately test it before merging.
Of course, in an ideal world the problems would have been ironed out before anything was released. But no matter how much process you have, most problems in complex systems like a VM won’t be detected until real users put them into production, which doesn’t happen until you release. So you can spend the next 2 years writing test cases to cover every aspect of the system, or you can release and let the early adopters tell you what’s wrong. This is the open source process. Early releases and fast feedback, and it has worked fine for a long time.
The issue was raised that parts were being changed for no good reason, replaced with code that was fundamentally broken from the ground up.
You are arguing based on the assumption that Linus is a complete idiot and will accept any new code that is thrown his way, even if it is broken. While new code may not be immediately better on all fronts than the old code, it is never put in just for shits and giggles. It is put in because the old code is crufty (and causing hard to fix bugs), has problems, or is completely unmaintained. No matter how much experience you have with Linux and computing in general, Linus and the other kernel maintainers have a hell of a lot more, so I would trust their judgement on what is wise to rewrite over your ramblings based on some software engineering classes you took. I took the same classes, and I have several friends that swear by TDD and those processes and yet the systems they design are pure crap in the end. In real systems, processes need to be balanced by a heavy dose of common sense and practical decision making.
You also have to look at results. If Linux is such a godawful mess as you say, then why is it so widely used? Linux doesn’t have any external pressures promoting its use like Windows does, it exists entirely on its own merit. If the BSD’s were miles ahead, they would be used more frequently.
Because IBM, Novell, Red Hat, Oracle, Canonical, Mandriva, Sun and countless others are pushing it.
Exquisitely decorated rock, this one you’ve been living under for the last decade
BSDs are quite comparable, and they’re not used as frequently because Linux is quite the cash cow for the companies I just cited. It takes them a 5% effort to reap 100% of the results of everyone else doing a similar 5% effort, while the “public domain art” status of being able not to disclose usage of BSD-licensed code scares the hell out of them, but only in an economic sense of serving their middle-tier services model. And in a twisted sense of irony, everyone uses BSD-licensed code daily and doesn’t realise it.
Those companies are backing Linux mostly because of economics, not pure merit (which is evident by the fact that they take the minute matters that they care about most, i.e. the parts where Linux is lacking, into their own hands, while they benefit from everyone else’s itch scratching). Not altruism. Not some karmic duty of “giving great code to the World”.
Top-tier service providers like Yahoo! and plenty of the recurringly cited *BSD customers are more than happy with it, and I take the lack of flair and glittery announcements more as an effort to conceal the recipe of their secret sauce than some attempt to build mindshare by riding with the hype.
Remember Corel.
(Notice I’m not saying there are no merits in the Linux kernel codebase, quite the contrary. But the quality in Linux’s code has to emerge somehow, given the amount of man-years of work poured in there. It’s only that, unlike the engineering that goes on in the BSDs, Linux’s quality emerges by natural selection, which, as time approaches infinity, is bound to produce high quality, enduring stuff. And in a twist that I can’t help but find extremely humorous, the FreeBSD ULE 1.0 scheduler that was considered a failure was modeled on Linux’s original O(1) scheduler, and now that ULE 2.0 is being modeled on Jeff’s own expertise and past experience instead of imitating someone else’s design, it’s making great strides. How is that for irony?
I can appreciate the quality of both approaches, but as a computer scientist myself, and born under the sign of Virgo, and a Dog in Chinese Zodiac, this ad-hoc-iness is extremely, extremely uncomfortable to me. I really like the correctly and thoroughly thought out, engineered approach better.
But that’s just me.)
Quoted for great truth.
The main sponsors/controllers of Linux function much like the large ERP companies: sell support, change the base frequently enough to keep your customers on a continual upgrade/training cycle, and make sure the cost of changing from your solution to another is very high.
Let’s take a theoretical feature: using Windows binary drivers with Linux. In theory, that’d be a good thing, right? Hardware companies could write drivers once and they’d run on both types of OS. Good for desktop customers, good for server customers; it’d make it easier for Joe Sixpack to give the Linux distro of the week a spin, hardware companies would have a larger market, and it’d make hardware companies more willing to work with Linux as well. Win-win all around, right?
Yet that is unlikely to happen anytime soon, much less in the kernel. Linux’s minders aren’t going to let it happen. The reason is not that it’s a bad idea; there are thousands of webpages throwing around ways to do it, from people who say “if only I had the time.” Linux has hundreds of programmers and billions of dollars at its disposal (heck, IBM could buy out Microsoft); you could have alphas in a few weeks if you wanted, and stable, tested driver support ready in time for 2.8.x, if the support was there. The reason we won’t see that sort of Windows-Linux support is that Windows’ diverse software environment offers a lot of competition to the products of Linux’s patrons, and it’s in their best interests, once they’ve made the sale, to drive the cost of change up as much as possible to keep Windows software from being fiscally competitive with their products.
For that reason alone, we won’t see the two OSs working together anytime soon. But that was just an example; similarly, the rest of Linux’s development is guided not by user concerns and desires, but by corporate needs and profits. Which, in maybe not so many words, is what Con was trying to get at.
Almafeta,
So now Linux is the corporate-driven OS and Windows is the OS of the people? Give me a break.
Linux development functions on a meritocracy, whereas Windows is developed to drive up Microsoft’s revenue and that of its corporate allies, whether or not it makes any technical sense. The wasted cycles to implement DRM in Vista are all you need to know on this subject.
A lot of recent research shows that while corporations have a large seat at the kernel table, a large portion of Linux kernel development is still done by volunteers following their ideas and sharing their knowledge.
It’s as you said. Microsoft stands alone and has to respond to the will of the customers to survive, where the various Linux corporations get to dictate to the customer what they are able to use (an oligopoly with the FSF/GNU/ISO/LKO functioning as the cartel).
The article shows that that is not the case. Maybe it was at one point (back in the early 90s, say), but that isn’t the case today and hasn’t been for years.
Large corporations using copyleft licenses to legitimize taking small programmers’ code without their consent and/or knowledge is not the same thing as their having a say in the actual development of the kernel. Ask Con.
“Linux development functions on a meritocracy”
“The article shows that that is not the case.”
The article shows that kernel devs who don’t get their way can run to the media.
“The article shows that kernel devs who don’t get their way can run to the media.”
Absolutely, and Linus used a public forum to make rants against the FSF. It’s a disturbing trend that only provides fodder for those who are against adoption of Desktop GNU.
It’s unfortunate that Con, unlike Linus, is focused on that very area.
“It’s a disturbing trend,”
I think you are misunderstanding my post. I don’t see anything particularly wrong with devs doing interviews, etc. and speaking their minds.
I do think that it is a mistake to say that “the article shows” something when the article is a one-sided interview of one person telling their version of the story.
BTW, as the administrator of an XDMCP desktop server running approximately 50 (and climbing) Gnome desktops for my users, on just a dual Xeon 3.2GHz box with 4096MB of RAM… if Linux desktop performance had half the problems that Con describes here, I would know about it. And yet things hum along nicely, day after day. And one of the word-of-mouth selling points for getting more users moved from standalone Windows boxes to thin-client desktops on the XDMCP server is the improvement in speed that people notice after the conversion.
If you move away from the thin-client “everything runs on the server and is sent to the display/dumb terminal via the network” model and over to the diskless “boot off the network, mount everything via NFS, run everything on the local CPU/video/soundcard” model, you’ll be able to support a lot more than 50-60 on that server.
Our server is only a dual-Opteron 2 GHz box with 4 GB RAM, and supports (at our largest site) 340 diskless clients. Our second largest site is 110 diskless clients on a similar server. The clients are AMD Sempron 1.6 GHz boxes with 512 MB RAM and an onboard GeForce 6100 videocard. Each client only costs $200 CDN, making it a drop-in replacement appliance. Each runs a full-blown KDE 3.5.6 desktop with OpenOffice.org 2.2. (These are installed in secondary schools, where every computer in the school, including office stations, are diskless setups.)
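For anyone wanting to try this, the usual recipe (a generic sketch, not necessarily this poster’s exact setup) is DHCP/PXE to fetch the kernel over the network, then an NFS-mounted root. On the kernel side it boils down to boot parameters along the lines of:

    root=/dev/nfs nfsroot=192.168.0.1:/srv/diskless ip=dhcp

where the server address and export path are placeholders for your own. After that it’s a normal distro boot, just with / living on the NFS server while the local CPU, video card and sound card do all the work.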
I would love to know all the technical details of your setup. Have you documented how you went about building this?
You may reach me at my usernameheronosnews AT GMAIL DOT COM.
Thanks.
“I do think that it is a mistake to say that “the article shows” something when the article is a one-sided interview of one person telling their version of the story.”
The interview is not one-sided; it’s his story as *he* sees it. If you are arguing that there is some sort of subterfuge on his part, say so. I tended to agree with just about everything. He is bitter and upset, and he’s not little and unknown; his patchset is well known.
The bottom line is Linux has nothing to do with the GNU desktop. It’s not a secret; the massive rant by Linus against GPL3 showed the emphasis on development geared towards embedded hardware…and why not, they pay these developers *money*.
Many of the topics mentioned are well known, and are often reported…some of them can even be spun in a positive manner.
…at the end of the day, Con’s patches have been available *forever*. We all know why he stopped doing them: because after all this time, similar patches are suddenly available from the very people who rejected the same functionality themselves. Which tends to heavily support the argument that he was right all along.
…but yes. I don’t know if machines are that much faster, but I don’t get any of the problems described and I run a very moderate machine; that’s not to say I don’t want improvements that benefit the desktop.
>> (heck, IBM could buy out Microsoft)
that would be some feat, considering MSFT’s market cap is nearly twice that of IBM
I think you’ve got those flipped. IBM’s net worth and income are between 3 and 5 times that of Microsoft.
“I think you’ve got those flipped. IBM’s net worth and income are between 3 and 5 times that of Microsoft.”
no, my stats are correct, because they are indeed stats, not speculation
http://finance.yahoo.com/q/ks?s=IBM
http://finance.yahoo.com/q/ks?s=MSFT
ibm and msft show roughly the same net profit
msft has twice the cash in the bank
msft has almost twice the market cap
msft has higher revenue growth, higher margins, pretty much eclipses ibm in every significant category
ibm won’t be buying out msft any time soon
Because IBM, Novell, Red Hat, Oracle, Canonical, Mandriva, Sun and countless others are pushing it.
Talk about backwards logic. In the beginning those companies weren’t pushing Linux, and it was purely a volunteer effort. The companies realized it was an excellent system, and started pushing it further ahead. Linux did not start because of any companies pushing it.
Exquisitely decorated rock, this one you’ve been living under for the last decade
Of course there is commercial backing to Linux now, but my point was that it didn’t start that way. This is an important difference to Windows, which was always commercial and always pushed by commercial interests.
BSDs are quite comparable, and they’re not used as frequently because Linux is quite the cash cow for the companies I just cited.
So? They are comparable, in the sense that if you compare them, Linux will come out ahead in most cases. Sure there are always exceptions, and the BSD’s have some technically superior features, but overall they’re just not as featureful as Linux. You may not need the features that the BSDs don’t have, but there are lots of people that do.
Those companies are backing Linux mostly because of economics, not pure merit (which is evident by the fact that they take the minute matters that they care about most, i.e. the parts where Linux is lacking, into their own hands, while they benefit from everyone else’s itch scratching). Not altruism. Not some karmic duty of “giving great code to the World”.
What’re you talking about? Of course no company chooses Linux out of altruism. They chose it because it works well for them. As you say, they improve the bit they rely on, and benefit from the work of others. You say that as if it was a bad thing.
My dusty RedHat 2 and InfoMagic 6CD set beg to differ. Been there for a long time, pal
In the beginning Linux was just an academic experiment. No wonder companies weren’t pushing it.
The companies realised it could be made into a working replacement for SCO Unix on the PC, since the BSDs were entangled in a high profile lawsuit then.
And the mascot was cute.
Guess what happened as soon as someone found that Linux could be packaged and sold in sets of floppies or CDs, and that services could actually make real revenue.
Oops! Red Hat was born!
Gosh, I don’t know if you remember it, but do you have any idea how crude RH’s site looked then? Lots of text, the logo sprinkled here and there, HTML tables ad hoc…
BTW, IBM only entered the game when it decided to give up on OS/2 and that porting AIX to PC platforms would be a lot of work.
IBM’s MO was to tout Linux as the cost-effective Unix alternative for budget PCs, and AIX as the OS for grown-ups. At first they just got a free ride off the work Red Hat and others (especially academia) poured into Linux; it really felt like having a lot of college students working for free instead of being hired as trainees.
Then IBM had some itches of their own to scratch. It was the same time frame when IBM started to buy smaller companies left and right, and reinvented itself as a middle-tier business.
Then other companies decided to check out how they could benefit from this flurry of work. Which coincided with the Internet bubble, and ISPs using Linux as cost-effective hosting solutions.
Well, you know, the rest, as the saying goes, is history.
We were once just a single cell, now we’re grown up human beings. Your point is…?
What DID make a difference for Linux was that from its inception, it behaved like the OS of choice of both academia and big iron, while Windows was a home PC novelty (business apps ran on DOS).
Remember the academic roots of the Internet. You’re seeing the connection now, aren’t you?
People keep saying that, but they seldom back these remarks with actual examples. Care to list yours? Because the only example I can come up with is official vendor support, but one that’s tied to a specific pair of distros (you know which) and versions.
Don’t lose yourself in those last two sentences; the context of the paragraph is important. I said that out of sarcasm, which is my standard knee-jerk reaction to those seeing things through rose-tinted glasses and believing that companies advocate Linux out of selfless good will instead of textbook economics.
I think he may be talking about Linux having a native Flash plugin while the BSDs don’t.
They’re pretty evenly matched, with one of them scaling a little bit better at the extreme high end. I don’t remember which, though, since I’m never going to see that end of the spectrum.
The biggest difference is that Linux is System V-like and has runlevels, to put things simplistically.
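To put a rough example behind that (simplified, and of that era): on a SysV-style Linux the default runlevel comes from a line like

    id:3:initdefault:

in /etc/inittab, with per-runlevel start/kill scripts under /etc/rc.d, whereas the BSDs boot through /etc/rc and take their settings from /etc/rc.conf.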
Sure there are always exceptions, and the BSD’s have some technically superior features, but overall they’re just not as featureful as Linux. You may not need the features that the BSDs don’t have, but there are lots of people that do.
There really is only one area where Linux can claim to do better than the BSDs, and that is hardware drivers. For quality hardware, the BSDs are usually just as good, and the quality of the drivers is often higher for what is supported.
What other features are you saying are missing from, for example, FreeBSD?
>most problems in complex systems like a VM won’t be detected until real users put them into production, which doesn’t happen until you release.
Yeah, quite some nonsense; this is true only in an ever-changing environment like Linux. But in real operating systems it’s not true. Have a look, for example, at wheels: working with this attitude of Linux’s would be a disaster. Or have a look at *engineering* at all. Feedback from a mere user isn’t really useful most of the time. A developer needs quality input, not dmesg plus an error.
>If the BSD’s were miles ahead, they would be used more frequently.
Windows is used more frequently, so it’s miles ahead?
>If Linux is such a godawful mess as you say, then why is it so widely used?
Have a look at Windows again.
Windows is used more frequently, so it’s miles ahead?
In some areas (specifically an incredible commitment to backwards compatibility), yes. This is partially why they are ahead; other reasons include good marketing, shady business practices, and incompetent competitors (Linux was not really usable until Windows was already completely dominant).
Windows is used more frequently, so it’s miles ahead?
In a lot of ways, if some of the open source zealots can get their heads out of their asses, yes it is. And in other ways it is not.
To be completely frank, I do not believe I have even suffered a BSoD in 2-3 years, if not longer. And if I did, it was a shoddy driver by a third party, which can bring any kernel to its death.
Fonts, input methods, and internationalization. Three areas I consider paramount to any decent uptake. And Windows does them well with XP, and especially now that Vista is released it has only gotten better for various other languages. Stuff you dare not even try on Linux.
But then again, I guess most people have no idea how hard Microsoft actually participates in things like OpenType and Unicode.
Like hell it does! Hype, IBM, number of drivers, user base, and marketplace penetration. How many hosting services use Linux instead of *BSD or OpenSolaris when for “Bob’s Crayfish Emporium” it wouldn’t make a lick of difference? “No one ever got fired for going with IBM,” that was the quote, non? Then no one got fired for choosing Microsoft. Linux is the next in line. Now, that doesn’t mean it doesn’t have its virtues, or that it isn’t the best available, but to argue that it stands entirely on its merits is ridiculous.
Hell, at work we used Linux to host; our sys-admin had never heard of BSD. So it was obviously not a merit-based decision. I’d bet there are many more where that came from.
Notice that you’re talking about the 2.4.x series, not the 2.6.x series. The 2.4.x development model followed the “never break anything, so you can rely on any version of glibc you like” model of development, and it broke badly — several times — and caused several deep forks (e.g. the Red Hat version, the SUSE version, etc.)
By most measures, the 2.6.x series, with its “break things when necessary to keep things up to date, and let glibc worry about maintaining compatibility”, has been a huge success. Is it perfect? No. The “/proc” interfaces need proper standardization, since that’s something glibc doesn’t insulate you from and there’s often no other way to manipulate or view kernel objects. *But* it works and it works well.
The problem with Con is that he doesn’t have sufficiently thick skin to work in the Linux kernel development environment, which, in Linus’s own words, is brutal, and that’s just the way he likes it. Could it be better? Absolutely. Should it be better? That’s less clear. I know I wouldn’t have much sympathy if I lost a year’s worth of work “because the kernel guys were too nice to say no to a feature addition”.
Personally, I think it *could* be better since it’s almost always possible to criticize without getting personal and in a reassuring way. But there’s no way to prove this would work better for Linux than Linus’ method without forking the kernel, implementing a code of conduct, and banning people (even technically valuable people) who violate it (in order to keep valuable but less aggressive people like Con). Since no-one is willing to step up and since Linux is succeeding, this point is moot anyway.
The issue was raised that parts were being changed for no good reason, replaced with code that was fundamentally broken from the ground up. That is the problem; it has nothing to do with backwards or forwards compatibility – it has to do with the most basic of processes. If they’re strictly enforced then only good, well-tested code is merged. It means a slower development cycle, but when something is merged there isn’t a massive scramble of ‘merge first and fix bugs later’ mentality – again, anyone remember the quick succession of Linux 2.4.x releases to fix race conditions, or the IDE corruption in the first 2.4.x series?
Can you come up with something a little more recent than that? The 2.6 kernel is developed in a much different way than the 2.4 kernel.
For me, if I were in his shoes, I would have given up long ago – probably moved on to FreeBSD or something else, if that is the sort of attitude to development that many Linux developers hold.
No disrespect to CK, but maybe he should work on OpenBSD. I’m sure he’ll think a little better of Linus and crew after dealing with OBSD’s lead developer.
>after dealing with OBSD’s lead developer
If you’re offended by Theo’s sayings, you haven’t heard any of Linus’s sayings, it seems.
Linus is really brilliant, but “his crew” is somewhat “out of order” and driven by politics.
Linus is driven by pragmatism. He doesn’t like the politics.
“Linus is driven by pragmatism. He doesn’t like the politics.”
Do not misuse the word pragmatism. Linus used the word in a *very* specific instance, to describe binary blobs in the kernel. He is driven by money, and there is *nothing* wrong with that.
In fact a large part of this interview is about how Linux is driven by money, not the desktop…because there is no money in the desktop.
Con argues strongly that this has negative effects for GNU. I suspect it’s swings and roundabouts.
He is highly political, hence all the flaming about GPL3 and the FSF…and he’s not very good at it.
If you’re offended by Theo’s sayings, you haven’t heard any of Linus’s sayings, it seems.
Linus is really brilliant, but “his crew” is somewhat “out of order” and driven by politics.
I think you’re confusing the Linux kernel developers with Theo. If anything Theo is much more political about his decisions than the Linux kernel developers. They seem to be much more practical.
“They seem to be much more practical.” Practical!? What could you mean? This word seems to pop up an awful lot. If you said “self-serving”, “hardworking”, “successful”, “paid”, “highly skilled”, “dedicated” or almost anything else…but how are they practical…how does the word apply?
By practical do you mean they “make good technical choices on improving desktop performance” or “make good financial choices by pandering to the server market”…which is one of the major points of the interview?
Seriously, it’s become a silly, overused, meaningless word. Think about what it actually means.
By practical do you mean they “make good technical choices on improving desktop performance” or “make good financial choices by pandering to the server market”…which is one of the major points of the interview?
The kernel actually has very little to do with the Desktop. Despite this GNU/Linux has the best desktop environment among all the Unixes so I don’t know how anyone can say that the desktop is left behind. Linux has more desktop related drivers, has better GNOME/KDE integration, and so on.
“The kernel actually has very little to do with the Desktop. Despite this GNU/Linux has the best desktop environment among all the Unixes so I don’t know how anyone can say that the desktop is left behind. Linux has more desktop related drivers, has better GNOME/KDE integration, and so on.”
You missed the point of my post, and are making a Linus in separating the kernel from the desktop. There are two points:
1) Kernel Optimized for Server Performance
2) Effort to develop server features over desktop features.
Con uses 1) as a criticism of the kernel, particularly because his emphasis is on the desktop. I also point at 2): features that benefit desktop users come last…and I remember when it was sound cards…and then linmodems…and then graphics cards…and now wi-fi.
It’s not a *bad* thing; it’s about *money* for development.
> Again, how many of the developers have any formal education when it comes to development processes?
Since so many of the big contributors are working for big companies at the same time, I figure a lot of them are trained, or at least skilled, professionals.
Process in programming is overrated. By all accounts, Apple’s development practices are pretty good. Yet Safari leaks memory like it’s free, and my 2GHz MacBook with 2GB of RAM will still beachball at completely random times. Microsoft has a *ton* of process, and we know how Vista turned out!
It’s not that process is useless. As someone who took the sorts of classes you refer to (in an engineering context, not a programming one), I understand that you don’t build a bridge or design an airplane without process. But programming isn’t engineering. If real engineers had the kind of track record programmers do, walking out your door in the morning would be dangerous business. Whatever processes software “engineering” has developed, they’re not very good! Maybe this will change in the future. But at this moment in time, relatively unstructured, “hacker-ish” development methods have about as good a track record at delivering quality software as highly “engineered” methods do.
There is a difference between having a process and enforcing it. Heck, when I took over a department at my last job, we had lots of great procedures and processes, but no one ever actually followed them. They were as useless as a condom vending machine in the Vatican.
You take the same classes in programming as well. From analysing the problem to documentation to pseudo code, layouts, feedback from users, etc. It’s a cycle. People who try to avoid that cycle do so at their own peril. Either you rush it and get a ball of unmaintainable crap that causes problems in the future – such as the case of Windows Vista – OR you have a set of processes in place that, although it doesn’t push out bleeding-edge software, actually works as it should. When things do need to be fixed, it doesn’t cause breakages further up or further down the chain of dependencies.
You take the same classes in programming as well.
I don’t doubt that. What I’m saying is that it’s not helping.
From analysing the problem to documentation to pseudo code, layouts, feedback from users, etc. It’s a cycle.
A surprising amount of great software has been written in a “code and run” kind of style. As chaotic as Linux’s development style may be, it’s hard to argue with the results. There is no other OS that even attempts to do what Linux does: address everything from cell phones to supercomputers with a single code base. You can argue that AIX or Solaris is a notch more robust, or VxWorks is a touch more real-time, but what does that really say? That a bunch of much older, more mature products that solve a fraction of the problems Linux does do slightly better in their limited application domain? This is supposed to be a criticism of the development model?
Generally speaking it’s true that you can “code and run” with Linux. “Code and run” is a property of the open source nature of this particular software product, and it’s true for any piece of software on which a developer can lay his hands.
But the assumption that you can “code and run” and end up with something that makes it into the mainline is wrong. Being extremely big system software, there is quite a large number of non-coding steps, like coding style and quality control, with which any patch, update or module new to Linux must comply in order to even be considered for inclusion in the mainline.
This isn’t strange; every project as big as this kernel has rules of this kind, open or closed source.
Windows, OS X and QNX would state otherwise.
The NT5 kernel is used in many different embedded appliances, like the Xbox or Media PCs, and in heavyweight servers at the same time.
OS X is used in cell phones, desktops and clustered supercomputers.
And with its unique range of features, QNX is in a league of its own.
It is true that Linux has seen more usage of this kind – but this isn’t a magical property of the code. It is a consequence of its GNU-ish nature which makes this happen. Anybody can, free of charge, pick up the code and customize it to the desired use. And as it is good code with a big base, it gets chosen a large number of times.
You don’t see the process in the Linux kernel model? Think about the ordeal that Con, Ingo, and the maintainers went through over this scheduler issue. Long email threads debating their design. Feedback, complaints, and rewrites. I think CFS got merged at version 19 or so. Passionate arguments from supporters of each camp.
These patches were put under a microscope and ripped apart to an extent you would rarely see outside of life-critical software development. Nobody is questioning the code quality or maintainability of either solution. In the Linux model, controversial changes receive a somewhat ridiculous amount of attention, argument, and criticism. Getting a scheduler merged is like running for president.
The Linux news outlets covered the CFS/SD debate. Hundreds of people weighed in. That’s a process.
> They were as useless as a condom vending machine in the Vatican.
Heh, somebody watches Red Dwarf
This distinction between programming and engineering is rather weak. I am an engineer, BTW, not a computer science grad.
First and foremost, software is not like other fields of engineering. If it were, we wouldn’t have 1/1000 of the software we have today. You certainly won’t hear someone say ‘I bought a Corolla, now I want it upgraded to be a Ferrari’. But with software, that is the mentality of people. Even in ‘critical’ fields like core routers…they’re always adding new features, new protocols…on top of an existing design. No one wants to start from scratch. So it is a trade-off. We trade stability for versatility, speed, and cost. A reasonable trade-off, I would add.
Bridges are stable not because of process but because we know about them. We know what designs work and what to account for. A bridge also has a very limited number of things to consider…will it hold up? Also, many people complain that applications use too much memory. Hmmm, do you know how much extra support is in a building? It’s a lot more than needed to support its normal load; just like software. Also, buildings and everything else have laws to prevent bad things from happening and require regular maintenance. For example, you can’t drive with chains on your tires in snow in most cities, because it damages roads, and roads are cleaned/inspected often.
We could make software that was as reliable and rock solid as a building. But it would be just as boring, expensive, and restricted in its use as a building.
That said, I’m not advocating less process. Hacking is not good. But even in strict, process-oriented environments, you will find the same issues.
You certainly won’t hear someone say ‘I bought a Corolla, now I want it upgraded to be a Ferrari’. But with software, that is the mentality of people. Even in ‘critical’ fields like core routers…they’re always adding new features, new protocols…on top of an existing design.
This is really at the heart of the issue. The specification for an airplane is pretty solid years before it makes its first flight. Customers change software specs up to and after deployment.
Given the vastly different environment in which programmers operate, relative to engineers, doesn’t it make sense that they should adopt different development methodologies?
Programming most definitely *is* engineering, although it is the engineering of processes rather than of physical structures. I would argue that the flaws of the software engineering process are related to the confusion many seem to have over what that process really is. You, for instance, claim that software is *not* engineering; others will claim that it is, and model their process tightly on traditional engineering processes. I would argue that neither claim is true, and that the truth lies somewhere in the middle of all reasonable assertions, as it often does.
We know from our other human pursuits, like math and science, that some information, and very often processes and methodologies, carry over to other fields of study with little or no modification. Indeed, the ultimate pursuit of the “discovery” sciences is to find the one truth which simultaneously explains, or is the root device which can explain, everything in the natural world around us. This is the reason why Einstein’s theory was such a huge breakthrough: it brought together two previously disjoint elements, matter and energy. Mathematicians would love to discover a way to represent all of our current number systems in a single, consistent and elegant system.
Based on this previous experience, I think it’s more than safe to assert that software engineering *is* engineering in the true sense, in much the same way that Chemistry, Biology and Physics are all sciences: different fields of study ultimately bound to the same underlying laws and principles.
Software engineering is still quite a young process, with much less than a century under its belt, while we have been building tools and buildings for many thousands of years. Software engineering today is much the equivalent of stone knives and grass huts.
It is the condition of such young pursuits that they often remain disjoint from more established ones while the study of, and theory behind, them matures; eventually, worthy pursuits are integrated into the established ones.
You are mixing two concepts: work methodology and development process.
A methodology is an approach to how to do a particular unit of work. A process is how to assemble, test and deploy all those units.
Modern software development went in the direction of lightweight methodologies (like RUP or Agile), where every developer should be a hacker fluent in all parts of the software under development, without strict hierarchical ranks between developers. So modern methodologies are very casual, informal things, because this way hackers can hack.
But the reason methodologies like RUP or Agile work is that all the hackers are strict and skilled with the development process in use. So modern software development has a very strict process: formal demands on the software (use cases etc.), documentation writing, software architecture, a testing framework and so on.
The problem with older approaches (the waterfall model, for example) indeed lay in an improper process – hackers were constrained in their hacking.
And you mentioned Vista. Well, if you think that Vista sucks, think how much more it would have sucked if no processes had been used during its development. It would be a horrible, bug-ridden, goalless, improperly tested piece of completely useless crap.
If you are interested in this subject, see http://www.martinfowler.com/ for excellent articles on these lightweight, hacker-friendly, productive models of modern-day development.
That is the view of a blind man. Engineering – as in construction engineering – is “easy to do” in the sense that construction engineering builds on physical laws. The application of physical laws and mathematics gives a construction engineer a framework within which he can test the semantics of his design. Basically, a construction engineer can ask himself a “will this work” question and find an answer without actually building the end product. That is a consequence of physical laws, which do have semantics – that is, they *state* something, and physical statements can be tested.
Although you did notice that software is a different kind of engineering, you failed to see why it differs. Software development is an engineering discipline which builds on math itself. No physical laws are applicable to software. As, in contrast to the natural sciences, math doesn’t have real-world semantics (math is just a bunch of self-contained transformations of purely abstract things like numbers, sets and operations on those), there isn’t any way to formally test code for bugs, and the “will it work” question will remain unanswered until you actually build the end product.
The second problem with software is this – for the same amount of work, no other human-built thing is more complex. Complexity kills human brains. And humans aren’t gods. They make mistakes.
Now, you can build perfectly fine, bug-free software. NASA does it, the military does it, the airplane industry does it, and nuclear reactor control software does it – for $2000+ per line and 10-year development cycles.
An excellent, classic and absolutely essential piece of literature on this topic is “The Mythical Man-Month” by F. P. Brooks Jr.
Engineering – as in construction engineering – is “easy to do” in the sense that construction engineering builds on physical laws.
You’ve got a fundamentally mistaken notion of “meat-space” engineering. Engineering is governed by physical laws, yes, but the hard part of engineering is dealing with the fact that most of our knowledge of those laws is incomplete. Take, for example, air turbulence. Nobody has an equation for turbulent airflow. There are various probabilistic models, but nothing that’s terribly accurate, much less actually representative of the underlying physical processes. Yet, every airplane engineer has to take turbulent flow into account in designing their products. That’s why we have engineers at all, to deal with the imprecision in our knowledge of the real world.
Meanwhile, programmers work in a world of mathematical precision. They not only have a rich, well-defined framework of theory within which to work, but they also are free of the extreme limitations imposed by physical reality. The closest they come to limitation is NP-completeness or undecidability, and even then they have heuristics that can give highly precise conservative solutions.
Basically, a construction engineer can ask himself a “will this work” question and find an answer without actually building the end product.
Hardly. Most of the interesting questions are way too hard to answer from physical laws alone. Engineers rely heavily on statistical data, past experience, and heuristics to answer such questions. You know how most airplanes are designed? By starting from an existing one that is known to work!
there isn’t any way to formally test code for bugs, and the “will it work” question will remain unanswered until you actually build the end product.
You’re right that building the product and testing it is the only way to know that it really works. Which is a strong argument against too much process in programming!
Process is not a tool to improve quality. You don’t prove that something works by following a particular process, but by building it and testing it. Instead, process is a tool to manage risk. Engineers use lots of process because of the enormous costs of building, testing, and experimentation in “meat-space”. Programmers aren’t subject to the same sorts of limitations. Building, testing, and rebuilding are relatively cheap in programming. Ergo, a highly process-oriented development methodology isn’t necessary for programming, and indeed actively hinders the activity.
OK. It’s true that turbulence is quite wild. Heisenberg himself had a figure-out-turbulence project during his studies. He gave up, stating that it was mind-boggling, and 6 months later created quantum physics.
However, you could hardly pick a more extreme example of an engineering project. Large-scale airplane projects (civil or military) are $10+ billion projects. That is quite a bit more than an average space project (ESA has a budget of one third of that).
Anyway, you don’t even need turbulence to see these problems. Even simple construction problems are unsolvable by analytic methods, in spite of knowledge of all the laws governing them. Like the resonance frequencies of a beam fixed at both ends.
However, when mathematics fails, both problems have a relatively cheap solution – the experiment. That’s why airplane engineers have prototypes and wind tunnels. For a fraction of the cost of the whole project they can measure and see whether it will work.
It doesn’t work like that with software. The closest you get to verifying your design is peer review by your colleagues.
It’s funny that you think that software design is virtually unlimited. Math is an unbounded discipline. Software is bounded by performance, time, tools, customer desires, skills and money.
For example, right now I am having extreme problems with the 32K size limitation of the CICS COMMAREA.
And it doesn’t change the fact that you will not know whether it works until you build the whole thing.
Indeed it isn’t. Quality comes from the individuals involved. However, it is a way to lessen communication overhead and open space for more productive development. What’s your idea of a development process? A bunch of managers telling everyone how to write code?
Do you know how much time it takes just to analyze problems before coding in software development? In a well-managed project it’s about half of all the time spent. Compare that to no more than 10% for a typical engineering discipline, where 90% of the time is spent on building. In software, building eats no more than 10% of the time for application software and represents no more than one ninth of the money spent. For system software (drivers, kernels) it is even far less. Testing of system software is really, really expensive.
And why do you think that software is cheap? Did you know that a third of the cost of the B-2 (the US stealth bomber) is in software? Another airplane project, the Gripen fighter, stalled on software errors, not on turbulence.
On my current project, for the debugging time spent on one processor of a zSystem, one could buy a luxurious, over-the-top BMW every month. And that’s a fraction of the total cost, at Eastern European wages, on a mid-sized project by Western standards.
Yes – if you are building the next Tetris or your aunt’s video-store application. It is completely the opposite when the problems aren’t trivial. How much money do you think a Linux-like fat kernel would tend to cost if developed by some company? My judgment is no less than $200 million, and that’s optimistic.
I don’t know if you are aware of it, but software development exponentially slows with its size. Linux is system software with something about 6-7 milions LOC’s. Thats expensive. Extremely expensive.
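For a rough sense of scale, here's a back-of-the-envelope sketch using the classic COCOMO model (my own illustration: the coefficients are the textbook values for the "embedded" project class, and the dollar rate is an assumption, not data from any real project):

/* Back-of-the-envelope COCOMO estimate for a ~6.5-MLOC system project.
 * Basic COCOMO: effort (person-months) = a * KLOC^b, with a = 3.6 and
 * b = 1.20 for the "embedded" class. The $10k fully loaded cost per
 * person-month is purely illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double kloc   = 6500.0;                 /* ~6.5 million lines of code */
    double months = 3.6 * pow(kloc, 1.20);  /* person-months of effort */
    double cost   = months * 10000.0;       /* assumed $10k per person-month */

    printf("effort: %.0f person-months (~%.0f person-years)\n",
           months, months / 12.0);
    printf("cost:   ~$%.2f billion at $10k/person-month\n", cost / 1e9);
    return 0;
}

Under those assumptions it prints an effort of roughly 135,000 person-months, well over a billion dollars, which if anything suggests the $200 million guess above is conservative.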
OK. It's true that turbulence is quite wild. Heisenberg himself had a figure-out-turbulence project during his studies. He gave up, declaring it mind-boggling, and six months later created quantum physics.
Turbulence is an extreme example, but it's not the only one. There are almost no interesting problems in atmospheric flight that are simply described in terms of physical laws. Structural mechanics? Anything that isn't a simple geometry (brick, beam, sphere) is analyzed in terms of simple geometries (finite elements). Aerodynamics? It's impossible to work at the physical level (the motion of molecules in air), so you make all sorts of assumptions to derive approximate models (e.g., low-subsonic continuous-flow systems). Thermodynamics? There are some beautiful equations based on conservation laws, but the minute you hit something as complex as a compressor you're back to empirical models.
However, when mathematics fails, both problems have a relatively cheap solution: the experiment. That's why airplane engineers have prototypes and wind tunnels. For a fraction of the cost of the whole project they can measure and see whether it will work.
This is dead wrong. Experimentation in software is enormously cheaper than experimentation in engineering. A commercial-grade wind tunnel can cost tens of millions of dollars, and thousands per hour to operate. A high-fidelity wind tunnel model will run you hundreds of thousands of dollars or more. It's also very difficult, painstaking work, because the solution space is enormous (tens of gradations in tens of dimensions even for a small system). Just running the simulations to evaluate a small range of configurations can take a day of processor time. Software developers don't face nearly the same cost constraints.
What's your idea of a development process? A bunch of managers telling everyone how to write code?
Seems to me like this is exactly what a process-heavy style often gives you. I don't have the experience to know what's effective in managing a large project, but from my experience programming in a research setting, I think a good model is this: get working code as soon as possible, iteratively refine it to add features and refactor existing code, and test at every step to ensure there are no regressions. Add architecture only as necessary, make a point of using good tools, and build in a strong framework for debugging and experimentation from the very beginning.
Do you know how much time it takes just to analyze problems before coding in software development? In a well-managed project it's about half of all time spent.
This is process at fault: the insistence that the problem be fully analyzed before work starts. That is necessary in engineering, but rarely necessary for most programming projects.
And why do you think that software is cheap?
I never said software is cheap. I said the relative cost of experimentation in software is cheaper than the relative cost of experimentation in engineering projects.
There is also a cost-benefit issue to consider here. The software environment is what it is. The budgets are a given and the ever-changing requirements are a fact of life. *Given* those constraints, what's the best way to maximize the quality of software? The methodology used for aerospace-grade software is not appropriate for most software. The budget isn't there, the requirements stability isn't there, and the extensive safety requirements aren't there.
On my current project, the debugging time spent on one processor of a zSystem would buy a luxurious over-the-top BMW every month.
You could buy a BMW with what it costs to run a wind tunnel for a day or two.
How much money do you think a Linux-like fat kernel would cost if developed by some company?
Whatever it was, I think it would've been higher, with no improvement in quality, had a traditional, process-heavy development methodology been used.
How many of the developers have any formal education when it comes to development processes?
Oh, please. “Open source development violates all known software development processes.” Or something like that.
I believe in the fluid evolutionary model of the Linux kernel project, but there is a troubling problem here. It's not that the model is broken, but that it's only been partially implemented, and the missing pieces are crucial for evolution instead of devolution.
As Con mentioned several times in the interview, the central issue here is about measuring improvements. There’s no standard scheduler benchmark suite to objectively compare one solution to another, and the same problem exists in other performance-critical subsystems.
The kernel community is good at discussing and evaluating design, including code cleanliness, interface convenience, correctness, generality, and things of that nature. But when it comes to evaluating performance, the decision process reverts to a simple matter of trusting your closest friends.
We see this all over the kernel, from Mel’s sophisticated VM patches that Andrew is afraid of merging to this debate over fair scheduling. Performance tuning is a black art in the kernel community. Evolution doesn’t work efficiently if we can’t select the most successful adaptations.
Linux needs to adopt a benchmark-centric model for evaluating performance-related patches. You want your patch to be considered? Develop a benchmark that illustrates how it performs under a variety of workloads. Yes, you should also be able to explain how it works theoretically. But that's not sufficient.
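As a toy illustration of one measurement such a suite could include, here is a minimal wakeup-latency probe (my own sketch, assuming Linux and POSIX clocks; a real suite would need many workloads, not one number):

/* Minimal scheduler wakeup-latency probe: sleep for a fixed interval in
 * a loop and report how much later than requested each wakeup arrives.
 * Lateness under load is a crude proxy for desktop interactivity. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static int64_t ns_of(const struct timespec *t)
{
    return (int64_t)t->tv_sec * 1000000000LL + t->tv_nsec;
}

int main(void)
{
    const long interval_ns = 10 * 1000000L;  /* request a 10 ms sleep */
    const int  samples     = 1000;
    int64_t total = 0, worst = 0;

    for (int i = 0; i < samples; i++) {
        struct timespec before, after, req = { 0, interval_ns };
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        int64_t late = ns_of(&after) - ns_of(&before) - interval_ns;
        if (late > worst)
            worst = late;
        total += late;
    }
    printf("avg lateness: %lld us, worst: %lld us\n",
           (long long)(total / samples / 1000), (long long)(worst / 1000));
    return 0;
}

Run it on an idle box, then again under a kernel compile, and you have a pair of numbers on which two schedulers can be compared objectively.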
In a way, the behavior of the kernel community concerning performance is very un-hacker-ish. Instead of trying a bunch of ideas and keeping the good ones, the community is pre-selecting what they perceive to be the inevitable winner according to popularity and clout. That’s aristocracy, not meritocracy.
While there are many differences between software development and other engineering disciplines, one of the most prominent concerns verification and analysis. In other fields, there are always ways of testing to make sure things work and evaluating just how well they work. These remain challenges across most of the software industry.
But no matter whether you use design by committee or design by proposal, you face the same challenge of evaluating solutions objectively. You come up with the solutions in different ways, but judging their merit is very much the same problem.
However, this isn’t a problem that comes up very often in commercial software development. You don’t commission two teams to develop similar solutions that will compete for a spot in the final product. Linux has this kind of embarrassment of riches, and it’s not without its problems.
So Linux wound up with a similar fair scheduler from a more trusted and prominent developer. The jury is still out on which solution actually performs better, despite the subjective claims from either camp. They’re fairly sure that both solutions are better than the existing scheduler, a bittersweet conclusion.
Let’s not simply dismiss this as an ego issue, because it’s not. This is about being able to confidently select the best solution out of a set of proposals. We lost Con because his solution was passed up without much in the way of objective evaluation. That shouldn’t happen.
Although I agree with most of your post, this caught my eye:
“However, this isn’t a problem that comes up very often in commercial software development. You don’t commission two teams to develop similar solutions that will compete for a spot in the final product. Linux has this kind of embarrassment of riches, and it’s not without its problems.”
I strongly disagree with this. I know folks that work at Microsoft and Microsoft is notorious for having separate teams work on an identical problem as a way of promoting some competition. They do indeed “commission two teams to develop similar solutions that will compete for a spot in the final product.” The MS Office teams are especially subjected to this.
I'm sure Microsoft tries all sorts of wacky project management tactics. But how do they pick the best solution at the end? The cynic in me wonders if they decide the winner beforehand and use the pressure to expedite the development process. What do they do to keep the losers from quitting and blogging about it? Pizza party?
Oh, quite the contrary. There are very real economic rewards at stake, in the form of remuneration and/or promotions. Careers have been made (and broken) in certain instances, and there have been a few occasions where the losing parties have left the company, either of their own accord or due to some handwriting on the wall in an especially large typeface.
In any event, my point was not about the effectiveness of these managerial styles; rather, it was to illustrate that your point was incorrect. :-p
“””
We lost Con because his solution was passed up without much in the way of objective evaluation.
“””
I doubt we “lost Con”. The way he continues to post to lkml about how he’s “no longer part of your kernel world” and is doing interviews about it, suggests to me that it’s a sort of sour grapes publicity thing. Otherwise he’d just get on with something else rather than continuing to beat the issue.
I’m pretty sure he’ll be back. And my guess is sooner rather than later.
That’s a pretty harsh recounting of what goes on behind the scenes of the Linux kernel development.
“I think the kernel developers at large haven’t got the faintest idea just how big the problems in userspace are.”
Rather harsh comments maybe, but this kind of open critical talk from a long time developer might be needed in order to get enough attention to problems that may have been neglected too much so far.
By the way, when comparing Linux to Solaris or the BSDs, hasn't Linux usually paid much more attention to desktop users' needs after all? Maybe things could still change in the future, though. I suppose that the big companies using Linux as a server system, and financing Linux development, have usually had the last word in kernel development decisions.
From my usage, Linux gets very unresponsive when CPU usage goes above 50% or so. FreeBSD, on the other hand, is still responsive even under incredibly high load. Doing compiles at work makes my computer unusable, and it's a P4 3GHz with 1GB RAM! Doing the same compiles at home on a lowly Athlon XP 1800+ with 1GB RAM, running FreeBSD 6.2, everything is still very responsive, and I don't drop network packets like at work (the work machine runs Debian Testing with the latest Testing kernel, I think 2.6.18). So, no, Linux must not be paying attention to the desktop, if on a desktop running X every part of the network stack (even the loopback) is useless at non-zero load.
I’d say KDE, Gnome & the crew paid much more attention to the desktop user’s need. But KDE and Gnome are not Linux…
“Yet all it took was to start up an audio application and wonder why on earth if you breathed on it the audio would skip. Skip! Jigabazillion bagigamaherz of CPU and we couldn’t play audio?”
With the latest stable kernel I cannot get the audio to skip, even at 100% load.
The only thing that can still ruin my multimedia experience is when I run out of RAM but this is not Linux’s fault.
> With the latest stable kernel I cannot get the audio
> to skip, even at 100% load.
Try running a vmplayer with something that uses a few hundred megs and lots of I/O. With a load of 6-7 you will get skipping audio no matter how little CPU and I/O your mp3 player requires.
> The only thing that can still ruin my multimedia
> experience is when I run out of RAM but this is not
> Linux’s fault.
Sure it is. With the “2.6.20-16-lowlatency #2 SMP PREEMPT” kernel in Ubuntu Feisty, an Opteron 180, 3 GiB of RAM (almost all of which was free) and swap on 3 different disks, I yesterday tried running 7 instances of “nice -n 19 memtest 500M”. Immediately when the mouse pointer froze I typed “killall memtest<Enter>”, but still the computer was as good as dead for about an hour. Nothing should make the mouse pointer freeze for an hour (or even a few seconds), even if the computer had the world on its shoulders. That a few nice-19 processes accessing a few hundred more megs of RAM than is available puts it into a deep coma is inexcusable, and it's most certainly Linux's fault.
It would be interesting to know the results when you use BSD, so we know whether it really is Linux's fault or X's.
> It would be interesting to know the results when you
> use BSD, so we know whether it really is Linux's fault or X's.
Well, considering Alt-SysRq-r didn't help me Ctrl-Alt-F1 to a console, I'd say it's certainly Linux's fault. It's not like “oh, the kernel is alive and well, it's just everything else that refuses to do anything”. If the kernel weren't in a coma, I'm sure the rest of the system wouldn't be either. At least in this case.
I think you should run your test on another hardware platform first, to determine whether your hardware is causing the problem, before saying that it is Linux's fault.
Even if BSD has the same issue, that does not mean Linux's bug is not a bug.
“””
With the “2.6.20-16-lowlatency #2 SMP PREEMPT” kernel in Ubuntu Feisty, an Opteron 180, 3 GiB of RAM (almost all of which was free) and swap on 3 different disks, I yesterday tried running 7 instances of “nice -n 19 memtest 500M”. Immediately when the mouse pointer froze I typed “killall memtest<Enter>”, but still the computer was as good as dead for about an hour.
“””
I hope you find out what is wrong with your hardware.
I just tried the same thing on my sempron 2800+ laptop with 768MB of ram, also running feisty, while listening to a 192kb/sec internet radio station coming in over the wireless nic. I didn’t bother to nice it, though.
Silky smooth mouse. Audio stream didn’t skip a beat.
> > 7 “nice -n 19 memtest 500M”
>
> I hope you find out what is wrong with your hardware.
There’s nothing wrong with it. Linux behaves the same way on all hardware I’ve ever tested it on.
> I just tried the same thing on my sempron 2800+ laptop
> with 768MB of ram, also running feisty, while listening
> to a 192kb/sec internet radio station coming in over the
> wireless nic.
You ran seven “memtest 500M” on 768M of RAM while still being able to listen to music? I don't believe you for even a fraction of a second. How much swap do you have? Are you sure those memtests weren't killed?
Well, I tried it as an experiment on my X60s with 3GB of RAM and a single drive with 6GB of swap. It slowed down MASSIVELY; however, I did get a response out of my mouse and was able to kill the processes and continue on my merry way 5 minutes later. (Gutsy Gibbon Alpha 3 with 2.6.22-8 SMP.) The mouse was somewhat responsive, but X itself was not.
That said, running 7 copies of memtest over a large amount more than physical RAM is ill-advised. memtest forces pages into physical memory, attempting to lock the RAM. You have 7 competing processes each locking all available RAM, and whenever one cannot lock that much it retries with a smaller amount until it succeeds. Of course this is going to send the system into an unresponsive spiral; the amount of swapping done by the memory tests alone guarantees that. Further, since it locks memory pages as it gets them, how exactly do you expect anything to be responsive after any significant amount of time?
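To make the mechanism concrete, here is a rough sketch of the allocation pattern just described (an illustration of the behavior, not memtest's actual source):

/* memtest-style allocation sketch: grab a chunk, try to pin it into
 * physical RAM with mlock(), and back off to smaller sizes on failure.
 * Several of these running at once will fight the VM for every page.
 * (Locking this much memory typically requires root or a raised
 * RLIMIT_MEMLOCK.) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t want = 500UL * 1024 * 1024;  /* start at 500 MB, as in "memtest 500M" */

    while (want > 1024 * 1024) {
        void *buf = malloc(want);
        if (buf && mlock(buf, want) == 0) {
            printf("locked %zu MB of physical RAM\n", want >> 20);
            memset(buf, 0xAA, want);    /* touch every page, as a test pass would */
            break;
        }
        free(buf);
        want -= want / 8;               /* couldn't lock that much; retry smaller */
    }
    return 0;
}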
There’s nothing wrong with it. Linux behaves the same way on all hardware I’ve ever tested it on.
If you are absolutely sure there is nothing wrong with the hardware, then there is something wrong with your install, because the behaviour you are seeing is not typical. I just tried your test again, this time executing memtest 1000M, memtest 1500M and memtest 1800M. The first two (1000M and 1500M) went fine, without even a minuscule skip of the mouse or audio. The third caused the mouse to skip a few times (< 1/2 second per skip). This is because I was running into the limits of my RAM, and the system was busy writing 400MB of data to swap.
“””
How much swap do you have?
“””
Lots. I’m a big believer in the Andrew Morton school of shuffling the bloat off to swap.
4GB swap and vm.swappiness = 100 in /etc/sysctl.conf.
Nothing should make the mouse pointer freeze for an hour (or even a few seconds), even if the computer had the world on its shoulders.
As sbergman27 has already mentioned, either your hardware is defective or your kernel/drivers are seriously malfunctioning. I did the same test (2GB RAM), with and without the nice, and the computer didn't skip a beat. Music kept on playing, the mouse kept moving. Overall I find the responsiveness of Linux to be far better than Windows on the same hardware. While a lot of hard drive and CPU activity tends to make the Windows GUI grind to a halt, I barely notice on Linux when I've got a compile going in the background.
> either your hardware is defective, or your kernel/drivers
> are seriously malfunctioning. I did the same test (2GB
> RAM), with and without the nice, and the computer didn't
> skip a beat. Music kept on playing, the mouse kept moving.
Hmm… I just tried running some memtests on my laptop from the console, with only a gdm running X. The system didn't become very unresponsive, but a memtest immediately got killed if I tried getting even close to the RAM limit. The laptop has 2 GiB of RAM and a swap partition, and trying to run 4 “memtest 500M” almost immediately killed one of them.
Then I logged on using gdm and added a 2 gig swapfile and tried the same thing again. Now the system was comatose for 10 minutes.
Then I removed the swapfile and tried again. Coma for about a minute.
Then I re-added the swapfile and tried again. Coma for 3 minutes.
> [memtest] is locking the memory pages as it gets them […]
I see. I should have guessed that memtest is a bit different from normal applications.
I just made a small application that simply malloc()s some memory and then reads it in a loop, and it had only a minor effect on overall responsiveness even when using all available RAM + 1 GiB.
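A minimal sketch of that kind of program, for reference (illustrative, assuming plain malloc() and one read per page; not the exact code):

/* Allocate a buffer (possibly larger than free RAM) and keep reading it,
 * one touch per 4K page, so the VM must continually fault pages back in.
 * Unlike memtest, nothing is mlock()ed, so the kernel stays free to evict. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    size_t mb  = (argc > 1) ? strtoul(argv[1], NULL, 10) : 1024;
    size_t len = mb * 1024 * 1024;
    char *buf  = malloc(len);
    unsigned long sum = 0;

    if (!buf) { perror("malloc"); return 1; }

    for (int pass = 0; pass < 100; pass++) {
        for (size_t i = 0; i < len; i += 4096)  /* one read per page */
            sum += (unsigned char)buf[i];
        printf("pass %d done (checksum %lu)\n", pass, sum);
    }
    free(buf);
    return 0;
}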
Still, all desktop linuxes I’ve seen under high load (I’ve seen maybe 15 different computers under such conditions, and only a few of those mine) have had problems with responsiveness. Usually when there’s some disk I/O involved the overall responsiveness drops to, or close to, uselessness. I’ve seen this happen e.g. when much of a program using garbage collection has been swapped to disk and it suddenly starts a big GC while some other program does some heavy disk I/O (e.g. copying a big file from one partition to another).
Another problem is vmplayer. Even if I run all vmware-related processes at nice 0 and all audio-related processes at nice -15, I still get skipping sound when whatever is running in vmplayer starts doing something. However, that's probably not Linux's fault. (And I haven't seen the same problem with VirtualBox, but VirtualBox, OTOH, always dies after a few minutes anyway, so I haven't tested it much.)
I should probably make a repeatable test case that I can give people who claim there’s something wrong with my h/w. However, I really, really don’t have time for that right now.
Anyway, I do think some of Con’s feelings are completely justified. I’ve seen Torvalds and Morton time and again value throughput over all else.
Running seven memtest processes on more memory than you have is an… “interesting” test …
Yeah, really. The last time I had audio problems that bad was around Red Hat 5.x/6.x, I think, and I'm pretty sure it was due to developing Sound Blaster Live drivers.
So 2008 won't be the year of the Linux desktop?
FYI, the Linux desktop has already been available for a long time.
The real reason he left is that Linus chose the scheduler patchset from Ingo Molnar instead of his.
At least, that’s my thinking…
Yes. And I’d also point out that Con saying that he is leaving Linux kernel development forever is a very poor strategy for getting any other code merged. e.g. swap prefetch.
Thinking as a manager, as Linus must do, would you merge VM code from someone who is abandoning it and everything else kernel related?
Thinking as a casual observer: do you think Con gives a rat's ass after trying to have swap prefetch merged for more than a year, without anyone coming up with bug reports or anything, and with no real progress happening?
“””
you think Con gives a rat's ass…
“””
Actually, yes I do think that he still cares. And also that his strategy of “Look everyone! I’m leaving!” is a poor one which he will later regret.
Con’s not leaving forever. Mark my words.
Edited 2007-07-25 16:35
Well, that’s pretty much what he said. He was getting into all sorts of flamewars and not being listened to, so he decided to quit since it wasn’t doing him any good.
With everything I read on the LKML, I would leave too because of that. It’s the equivalent of being told by your boss: “Your idea was amazing, we love it. But I didn’t think of it, so yours sucks, and I’m going to write one almost identical to yours, but just different enough to claim it as my own.”
I’ve considered quitting my job several times because of that. The only reason I don’t is because it’s my only source of income. If it was just a hobby, I’d be out the door faster than you can say “clone wars”
Very interesting read. I think that the primary reason that Linux may never succeed on the desktop to the extent Windows has is because, like he said, development is almost exclusively geared towards the server. While I certainly don’t condone the behavior, I’m afraid that as long as the kernel developers are the direct recipients of the various suggestions and patches… there will be a problem with friendliness and a big lack of tact and helpfulness. In the commercial software world the developers are generally hidden from the general public, for good reason I think. That’s what the customer service representatives are for. Developers aren’t usually known for tact and excellent people skills.
Frankly, I don’t think that has *anything* to do with the success of Linux on the desktop. For starters, normal users don’t have anything to do with kernel development – you could argue that many of them don’t even know what the LKML is.
Remember, Linux is only the kernel…the userspace is mostly the responsibility of distro makers and app developers. The obstacles to Linux adoption all have to do with customer inertia, very little marketing, some “big name” apps missing, and lack of games. None of this is related to kernel development.
You’re right. I was thinking more about someone’s complaint/comment about developers responding harshly to various suggestions, and being generally unfriendly. It wasn’t intended to be related to whether Linux succeeds on the desktop. But I stand corrected, whether the Linux kernel is developed primarily for servers or desktops has little bearing on how well it gets received on the desktop… there are much larger issues such as public perception, certain applications, etc. etc.
Honestly, I think this developer is quitting more because of the same problem he accuses the other developers of having… ego.
Edited 2007-07-24 18:37
“Honestly, I think this developer is quitting more because of the same problem he accuses the other developers of having… ego.”
Maybe in small part, but there were quite a few other reasons too. Don't oversimplify things; read the whole interview. The man seems quite honest and doesn't overemphasize the ego-related issues.
To put things simply: if he finds no joy in his hobby as a kernel developer anymore, why should he continue? Besides, sometimes a protest may be the right thing to do when a more constructive approach no longer seems to work.
Anyway, the man has put lots of his own time, sleepless nights and so on into kernel development, only as a hobby. But if the ideas and patches are not accepted and appreciated, why should one continue? In that kind of situation, people just tend to quit and/or burn out. He has earned his right to rest from it all now. Maybe he will change his mind later, or maybe not.
As to Linux in desktop use, there was a time, long ago, when Linux was known to be fast on older computers too. Although some people still use that kind of phrase, it is not really true anymore, especially when speaking of common desktop distributions. (Of course, bloated desktop environments and software may be a big part of the problem, but you can change the software easily, not the kernel.) The fact is that practically all GNU/Linux distributions require quite modern and powerful PC hardware these days to run smoothly as a desktop OS. Under heavy load, Linux is often more of a pain than a joy. MS Windows 98 is quite speedy compared to modern desktop Linux distributions, not to mention comparing Linux to something like Syllable.
But I suppose that the big companies using (GNU/)Linux as a server OS are happy…
Edited 2007-07-24 20:12
As to Linux in desktop use, there was a time, long ago, when Linux was known to be fast on older computers too. Although some people still use that kind of phrase, it is not really true anymore, especially when speaking of common desktop distributions.
You are right that running a modern distribution with a full desktop environment (KDE or Gnome) on slow hardware is not a fun experience. But the software world has evolved, and so must your expectations. Compare two modern operating systems, Vista and any Linux distro: Vista uses more RAM and resources than Linux plus a DE will. I run Debian on my machines, and after booting into KDE with a few small utilities running, I am using about 80MB of RAM. I also have Vista installed on the same hardware, and it uses 400+MB after booting, even with the sidebar disabled.
(Of course, bloated desktop environments and software may be a big part of the problem, but you can change the software easily, not the kernel.)
That is almost the whole problem, actually (although what you call bloat, I call features). Boot into Linux without starting X or any major services, and you'll be using less than 10MB of RAM.
MS Windows 98 is quite speedy compared to modern desktop Linux distributions, not to mention comparing Linux to something like Syllable.
Of course, and MS Windows 98 is also an incredibly primitive piece of software compared to modern Linux (or really anything). If you are happy with older software (which is quite capable), then by all means use Windows 98, but don't expect all the features of a modern OS to come at no cost.
“that Linux may never succeed on the desktop to the extent Windows has is because, like he said, development is almost exclusively geared towards the server.”
I guess this argument also explains why Windows sucks on the server…
Edited 2007-07-25 07:32
He claimed that Microsoft is the one to blame for hardware not advancing as fast as it might have. Well, bah: Microsoft has actually contributed to adding USB and some other things, as well as to graphics hardware, if only by optimizing DX for it. Besides, Microsoft has mostly only provided software, and has never made a regular PC. Microsoft isn't to blame for the seemingly small advances (diversity, in other words) in hardware. Rather, technology and market forces made it so that when computers became commodities, the margins became too small, and the technology advanced far enough that it became too expensive to push hardware design forward as quickly or as diversely as in the late 80s. Back then, clock speeds were still low enough that you didn't have to take ultra care where you put down circuit traces for RAM for timing purposes, like you do now.
No, the relative lack of diversity of regular PCs (non-embedded, non-mobile devices) has everything to do with a more mature hardware industry that's become far more cut-throat, and with software that has been written to more completely abstract the hardware, to the point where the details of implementation matter far less than they used to.
Microsoft contributed USB?
And all this time I thought it was INTEL.
From a technology standpoint USB is much worse than FireWire (IEEE 1394), but it succeeded anyway because of great marketing and the absence of royalty payments.
And bad marketing is to blame for the demise of AMIGA, ATARI etc. mentioned in the article.
I remember well the times when I was sitting at work with my boring DOS computer, looking over the shoulders of coworkers with Atari STs using PageMaker on the GEM desktop…
Anyway, in the end money drives it all. The big companies contributing to Linux do that in order to get a server OS out of it, not a desktop.
Hopefully Haiku will fill the needs of desktop users.
Hey Con, do you know about Haiku?
Careful reading of what I typed shows that Microsoft didn't single-handedly bring USB to the world, but they were definitely contributors to the specification: without OS support, hardware is just a consumer of power.
Who is Con? Either you read my name incorrectly, or you purposely did that in reference to me, or perhaps I didn't notice which post you were responding to in addition to mine.
The answer is, yes, I most certainly know about Haiku, and I’ve personally attended the last two WalterCons.
Who is Con?
Only the subject of the interview and this discussion.
When you use one post in reply to someone to ask someone else a completely unrelated question, did you seriously think there'd be zero chance of confusion?
Also, I admit, I didn’t pay that much attention to his name
Wasn’t he the Bad Guy(tm) in Star Trek II?
Who is Con?
The kernel developer Con Kolivas.
Remember the article?
Shoot, I’d actually have to -read- the article to remember it!
i think the usb chips are much cheaper also.
in many ways the usb vs firewire is like ata vs scsi.
sure scsi is better, but it's also damn more expensive…
in usb, everything is centrally controlled by the motherboard chipset or usb controller. and when you want to add more you pop in a hub somewhere.
in firewire, there is more control built into each device, making the firewire chips more expensive. there is also the daisy chaining (which never really took off iirc, as in i have yet to see anything other than external drives with two firewire ports).
“as well as graphics hardware if only by optimizing DX for it.”
This is exactly a reason that Microsoft has hurt the industry. OpenGL was just fine when DX got started and is an extensible open standard that has kept up with all DX features. By pushing DirectX MS has made it so people like me are having a hard time getting game developers to release their games on multiple platforms. In many cases they just can’t justify the expense of rewriting a 3D engine from D3D to OpenGL. Whereas if most 3D games today already used OpenGL it would be much easier to get them to port the game to Mac or Linux.
The “Year of the linux desktop” is always 1+the current year.
Couldn't there be two kernels, one optimised for the server and one for desktops? Or would that not be practical? I'm not a programmer as such, so go easy on me.
There could be two kernels
But that’s twice the cost, twice the maintenance…
Also, who is to say what the difference between server and desktop kernels is? You could argue that Windows 9x was the desktop kernel and Windows NT the server :) Over time, though, Windows 9x needed features more in line with server kernels. That resulted in a bunch of hacks, and now we are back to basically one kernel (Windows 2000, XP…).
There's only one thing about engineering: constant struggle. There is no magic solution.
No. There’s no difference between server and desktops, except a few differences in the software they run.
A kernel is either good at desktops AND servers, or it's bad. One thing missing from that article is that all the effort put into improving Linux on servers has also improved the desktop. A lot. Today's servers are the hardware base of the future desktop, for example.
No. There’s no difference between server and desktops, except a few differences in the software they run.
That might be somewhat true on the low end of the server spectrum where software like Linux tends to reside, but it is not at all true on the high end.
High-end servers are very different from desktops both in terms of the types of software they run and in the basic nature of the hardware, security, and even basic software execution environments in which user-space processes run.
Neither the mainframe servers nor the redundant UNIX server clusters I work on bear much resemblance to a typical desktop system… even in terms of the OS kernel at the center of things.
But doesn't Windows have separate desktop (XP, Vista) and server kernels? A bit off point, but there have in fact been a number of articles discussing running Windows Server on the desktop, and how much more robust it is than what MS wants you to use on the desktop. Does this say something?
So I guess that’s why, in my previous experiences with Ubuntu and SuSE, the desktop computer never stopped sounding like a jet engine (oops, I mean a server) when it ran? A perpetual bootup fan sound?
Yah, thanks for clarifying that much for me.
And for the record, OpenDarwin x86 actually turned that sound off on the same machine, so why not a Desktop Linux distribution?
No. There’s no difference between server and desktops, except a few differences in the software they run.
You haven't worked with many large or big-iron servers, have you? They are very different from desktops and small servers. There's a lot more than a CPU, northbridge, southbridge, and PCI bus in such a server. There are storage controllers, management controllers, I/O controllers, hot-pluggable RAM and CPU slots, and more. There are more acronyms in large servers than I can decipher in a week.
A kernel that works well on one doesn’t mean it will work well on the other. Desktops are oriented toward interactivity and responsiveness of the foreground app and GUIs. Servers are more oriented to I/O throughput, CPU throughput, memory throughput, and network throughput.
Here you go:
One is Syllable Desktop, the other one is Syllable Server
Edited 2007-07-24 18:10
Ah, another article from Dr. Bias and the apostles.
Perhaps this will help Vista sales, or one's résumé now that the hobby is over (I doubt it).
From my own experience as an OSS contributor, you have to be careful how much development you do that may not be accepted (into mainline).
It's also important to become a part of that community and let the other devs know what you're working on and why…
Of course, he made valiant attempts at benchmarking tools and the like for this… so it seems the Linux kernel devs should have paid more attention to his efforts.
In retrospect, this benchmarking tool could have been a higher priority, with published results comparing distros side by side to get people involved – including the distros themselves (Ubuntu, Fedora, SUSE etc.).
Of course it's a pity he had to work on that, when he apparently already had REAL improvements to the kernel working.
Then again, this is HIS SIDE of the story, and at the end of the day we have a pluggable scheduler – so in the grand scheme of things, his ego is low on my priority list.
Edited 2007-07-24 18:49
While I haven't used Con Kolivas' patches myself, I know a few people who did, and the performance gains were quite noticeable without any side effects (that I'm aware of) at all. He had a large following with his -ck patchset back in the day, due to the improvements in responsiveness it brought to the Linux desktop.
I agree with you in the sense that he probably needed to have his work “advertised” better on the LKML, with those benchmarking tools he developed and everything else. I don't follow the LKML much these days, but I feel that group thinking plays a large role in the way things are conducted there. We all know how well Hans Reiser's efforts to get his work included in mainline went, despite the amazing features it could provide (mostly because of his people skills, or rather the lack of them), so I wonder if that isn't what happened in this case as well.
I prefer the conservative and centralized development model of FreeBSD. You won't get bleeding edge, but who cares; I want a solid and predictable system, and that comes from a predictable development model. Everything is well organized and changes are PLANNED. Repeat after me: RELEASE ENGINEERING. Just two words.
What I see in the Linux world: anarchy and unplanned kernel hacks. You get 2.18.1,2,3,4,4-1,4-A,4-BC,4-BX,4-B-SMP… I'm just kidding, but you know what I mean.
Very good opinions here. We are talking about the fact that Stallman ignores: open source, closed source, free or not free, all of it is ultimately guided by profit. You can't fly above the laws of capitalism. So if we're living in a monopoly-driven world, the open source movement, or at least part of it, will be sucked in by the interests of the big companies. And whether you like it or not, the big companies use the work of other programmers for free. And under the rules of capitalism, that is exploitation, no more, no less.
You can’t fly above the laws of capitalism.
more correctly, you can't fly above greed…
that's just about the only law of capitalism there is.
Your reply is raw and crude… like the world we live in. Thank you for your simplification
glad to be of service.
But everyone has to eat, too, right?
yes, but how much, and of what type of food?
Edited 2007-07-25 00:08
“We are talking about the fact that Stallman ignores: open source, closed source, free or not free, all is guided ultimately by profit.”
Except it’s not a fact but your opinion.
Here’s a fact though, not every software project is profit driven.
This article is kind of sad, and it actually tugs at my heart a little to see someone with obvious talent just letting it go because others are too egocentric to acknowledge that they might be wrong. We have very few people dealing with the kernel who know about, care about, and can do anything about what affects the user. Linus and friends might not care about Linux as a desktop, but many other people are working hard to make it a reality, and if the devs don't realize that interactivity and usability aren't measurable but at the same time are not always subjective, we are pretty much going to hit a brick wall at some point. The part about the scheduler nearly broke my heart; it's really sad to see.
I note at the very end that he is giving up Linux for Japanese. Which, while very cool, begs a question: how common is multilingualism, or the desire/attempt to learn a foreign language, among coders?
it raises a question (fixed that for you) [/pedant]
That’s a good question. I learned my second language not because of coding, but it has proven quite helpful with coding anyway. After English, German is one language quite widely used in the computing world and I’m glad I learned it. If I were to go for a third language, I might pick Japanese to be able to read stuff from another nation that does a lot with software. Unfortunately, IMHO there is no good way to learn a language without living among native speakers (as I did with German), so this is unlikely to happen.
Long story short: my second language choice had nothing to do with programming, but my choice of a third language would (if I ever made such a choice at all).
I too wonder to what extent programming concerns themselves are the primary motivation for multilingualism (I doubt it's often; there are so many other potential benefits that I would think are a more likely cause).
Edited 2007-07-24 20:45 UTC
Why doesn't someone like Ubuntu hire this guy to start building a desktop-focused distro? I would love to have better desktop support, and this guy sounds like he loves doing development. Ubuntu WOULD REALLY kill the competition if they came out with a system that was tailored for the desktop instead of a server.
Totally agree.
It wouldn’t be much different than now, where he’s maintaining a separate patch tree, and constantly playing catch-up to stay abreast of changes in the mainline tree.
I always thought that Linux, as in the kernel, was already ready for the desktop, and I still think it's ready.
I mean, all my hardware is detected wherever I go; userspace tools and even the kernel are improving every day. I think next year is going to be an awesome year for the Linux desktop with KDE 4, Plasma, and all that incredible stuff. Xorg is also making big changes every year. I think Linux, as in the kernel, still needs some changes, mostly in the architecture, but that is happening too.
Linux is more than ready for the desktop. I use it every day on my desktop; it does everything I need or want and more. It's a joy to use.
Edited 2007-07-24 22:59
Linux is more than ready for the desktop. I use it every day on my desktop; it does everything I need or want and more. It's a joy to use.
I agree with you. There are certain spots where you notice performance issues with the UI and IO access. For example, I was burning a CD while listening to streaming MP3 music, and when I moved a window on the MP3 player the audio cut out for a second. I tried the same thing on Windows XP and it was smooth. I believe these are some of the scheduling issues -ck was talking about. It is almost as if there need to be options for compiling the kernel in -enterprise, -server or -desktop mode, which would adjust the scheduler so that interruptions in processing events are optimized for each application of the OS. Perhaps Linux as a kernel has outgrown the one-size-fits-all design.
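For what it's worth, 2.6-era kernels already expose one such build-time knob: the preemption model. A desktop-leaning choice in a .config excerpt looks roughly like this (illustrative; CONFIG_PREEMPT_NONE is the classic throughput-first server setting, CONFIG_PREEMPT_VOLUNTARY a common distro default, and CONFIG_PREEMPT the low-latency desktop option):

# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y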
Edited 2007-07-24 23:45
Yeah sometimes I experience those issues too, mostly when I’m copying big files.
but other than that, it’s great
Edited 2007-07-25 00:01
the issue here is just as much hardware.
on the pc, the cpu is in charge of controlling the io, in charge of checking that every bit gets transferred correctly, and so on.
on a bigger server, this would instead be delegated to a storage controller cpu. this is how the ibm mainframes do it.
also, let's not forget that the later linux kernel releases have a pluggable scheduler framework that's being refined.
the interesting bit is that kolivas wrote such a framework, and it got rejected. then one of the kernel big names comes along, writes a similar framework, and it gets accepted.
still, i don't think there is a single issue here that we can point to and say “there's your problem”. at least not just based on the article. one would have to pull the lkml logs for all participants on the given subject, and read said correspondence carefully.
i don't think the linux kernel has ever been one-size-fits-all. in many ways it's a collection of kernels that can be compiled from a single pool of code. but there are some things that i don't think one should need to recompile for. and it appears changing the behavior of the scheduler may be one of those.
PCs have DMA engines in every piece of major hardware as well, so no, the CPU does not have to sit and spin on IO transfers on a PC either. The file-copying/multimedia issue is a different problem: probably some flaw in the IO scheduling system or in the scheduler's heuristics around I/O-bound versus CPU-bound processes… or it could just be a really poor audio implementation that doesn't do enough buffering/DMA and so skips if not given the CPU in long enough chunks.
i can't say i have noticed much benefit from DMA, but that's me…
It is almost as if there need to be options for compiling the kernel in -enterprise, -server or -desktop mode, which would adjust the scheduler so that interruptions in processing events are optimized for each application of the OS. Perhaps Linux as a kernel has outgrown the one-size-fits-all design.
Hmmm, it's almost as if a pluggable CPU scheduler framework would be useful. Then one could create a specialised scheduler for desktop use, another for server use, another for mega-server use, and so on. And one could switch between them as needed.
Wait, isn’t that exactly what Con was trying to do over a year ago? But was shot down horribly by Linus (who wants a one-size-fits-all CPU scheduler, but is fine with pluggable I/O schedulers)? And, isn’t that almost what they now have with Ingo’s framework?
Yeah, sometimes I wonder why he would want to quit.
i kinda get the impression, as i reread the article, that con didn't want a pluggable scheduler. but he made one anyway, as it was the quickest way to experiment with different configurations.
i think another straw that helped break this camel's back was that one of those who flamed con enthusiastically helped ingo with his work. as if, as long as it was coming from the right person, the idea was golden.
so in many ways, one got a sniff of a personality cult…
i kinda get the impression, as i reread the article, that con didn't want a pluggable scheduler. but he made one anyway, as it was the quickest way to experiment with different configurations.
I may be misremembering the threads on kerneltrap/lkml, but Con was willing to write the pluggable scheduler framework to allow multiple CPU schedulers in the kernel, so that side-by-side comparisons could be done without ripping out the existing scheduler.
Linus shot down the idea, preferring a one-size-fits-everything CPU scheduler that could somehow work for all workloads. A new CPU scheduler would not be added to mainline until it was perfect and could replace the current scheduler completely.
Then Ingo goes and writes almost the same scheduler as Con, wraps a pluggable framework around it, and it gets added to mainline with hardly any issues. No complaints from Linus that there's now a pluggable CPU scheduler framework in place.
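To give a flavor of what “pluggable” means here: conceptually, it's a table of operations that the core kernel dispatches through, so a different policy can be swapped in without touching the rest of the kernel. A hypothetical sketch in kernel-style C (an illustration only, not the actual interface that went into mainline):

/* Hypothetical pluggable-scheduler ops table: the core kernel calls
 * through these hooks, so a desktop-tuned or server-tuned policy can
 * be substituted wholesale. Illustrative, not the real mainline API. */
struct rq;            /* per-CPU runqueue (opaque here) */
struct task_struct;   /* a runnable task (opaque here) */

struct sched_policy_ops {
    const char *name;                                        /* e.g. "desktop" */
    void (*enqueue_task)(struct rq *rq, struct task_struct *p);
    void (*dequeue_task)(struct rq *rq, struct task_struct *p);
    struct task_struct *(*pick_next_task)(struct rq *rq);    /* who runs next */
    void (*task_tick)(struct rq *rq, struct task_struct *p); /* per-tick hook */
};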
“””
so in many ways, one got a sniff of a personality cult…
“””
What the people claiming “personality cult” and “good ole boy club” have so far completely failed to provide is a motive for Linus to do things that are contrary to the long term welfare of his kernel.
It may very well be that he is more comfortable accepting important code from some people than others, based upon trust and a feeling that they are likely to admit to inevitable issues with that code and address them… and also to stick around to maintain that code. In fact, that is exactly how it *should* be. As Linus has said, the only thing more important than code is a willingness to fix it.
By abandoning Linux kernel development and maintenance, Con is confirming any worries Linus might have had on that count.
So far as I know, Ingo has always lived up to a high standard of responsibility regarding code that he writes and maintains. And to my knowledge, he has never threatened to take his ball and go home.
There is a lot more to a meritocracy than selecting the code with the best benchmark scores.
Edited 2007-07-25 01:14
Did he threaten to take it home, or did he just make like a ball as soon as everything was made clear?
If your contribution isn't recognized, and someone else's equivalent-function contribution is taken as is, then what's the point in staying on and wasting your time? It's an unemotional slap in the face, and no one would want to waste their time on such a project, no matter how awesome it is (or could have been if it had accepted your input).
Sensibly, Con didn't waste his time and left. The kernel will go on being developed with the other guy's scheduler, and Con will find another project that, hopefully, will be more receptive to his input (OpenSolaris, perhaps).
that's the area the article doesn't touch upon, and one has to dig out the lkml archive to read up on what was “said” by whom, and when…
To be honest with you, Con's efforts would likely be better spent on some OS that isn't steeped in legacy like unix/mach whatever. I'm sure his talents would be quite useful in helping design and implement some next-gen kernel. At least I hope they would be…
Call me a newbie, but I was curious what separates development for the server from development for the desktop in kernel space. I would think that if the kernel gets faster with a better CPU scheduler, then both the server and the desktop would benefit.
Could someone please clarify this?
it's a question of priority, if i understand the subject correctly.
with a server one is more interested in the rapid movement of data to and from storage. for, say, a web server it's critical to get the data off the storage media as fast as possible and out to whoever is accessing the page.
for a desktop system, it's critical to present some indication of a reaction to the user. if i click some button i want that button to show it has been pressed as soon as possible (blinking, depressing or similar). if not, you get the classic scenario of someone hitting the same button multiple times, or trying other buttons to see if the system is having issues. then you get a situation where the queued-up commands all complete at once, often with unpredictable results.
thing is, on a single-cpu system like the desktop pc (or what it was, at least), the cpu is in charge of all these jobs. so the question becomes: what should get priority? and here is where the scheduler comes in.
in the past, with the linux kernel being used as the basis for inexpensive server equipment and similar, the priority would shift towards IO.
but as one starts to use the kernel on the desktop, giving priority to the user interface (the x server and the desktop environment it serves) becomes more important than pushing data around rapidly.
i recall some early desktop distros having the x server set to launch with a negative nice value. in other words, it would have a very high priority in the scheduler. this was done to improve the responsiveness of the gui (a long-time sore point for linux distros in general).
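conceptually, that trick is just a few lines (a sketch, assuming linux and root privileges; the -10 is an illustrative value, not a recommendation):

/* sketch of the "negative nice for the gui" trick: raise our own
 * priority, then exec the x server so it inherits it. negative nice
 * values require root. */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    if (setpriority(PRIO_PROCESS, 0, -10) != 0) {  /* -10: illustrative */
        perror("setpriority (need root for negative nice)");
        return 1;
    }
    execlp("X", "X", (char *)NULL);  /* replace ourselves with the x server */
    perror("execlp");
    return 1;
}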
basically, process scheduling is not “one size fits all”. it needs to be tuned to the exact job one wants the machine to do.
same deal in real life, really. if you have a boss that can't make up his mind about what he wants, either the job done as quickly as possible or the max amount of feedback on what's being done, things grind to a halt as the workers jump from one priority to the other.
this is one potential benefit of the multi-core cpu: one can toss the gui job to one core and the file transfer to another and have both get their jobs done. but then one may well risk problems further down the chain, in ram and bus…
Someone should just fork the linux kernel. If the current linux developers are too important and too good to listen to anyone, take the power away from them.
Fix the problems in the kernel, release it as a kernel of your own, and don't look back.
Edited 2007-07-25 01:45
> Someone should just fork the linux kernel.
> […] release it as a kernel of your own
That won’t work because then that person would have to spend all his/her time just merging bugfix-/feature-patches from the mainline.
Yes, and the mainline kernel developers spend all of their time merging bug fixes and features from other developers. Is there really a difference?
You're just turning the tables, requiring THEM to convince YOU that their patches and fixes are worth merging into YOUR kernel.
“””
Someone should just fork the linux kernel.
“””
Uhhhh, there are *already* several forks of the Linus kernel. That is considered to be a good thing by most everyone, including the kernel devs. -ck is a fork of the kernel that Con is abandoning. -mm is another fork. And there are plenty of others.
Distros typically, though not always, base their own kernels off of Linus’ kernel because it has the greatest respect and following.
At one time, RedHat and some others based their kernels off of the -ac kernel.
Your knee-jerk response that “someone should fork the kernel” does not even make any sense in this context.
Edited 2007-07-25 02:03
Actually, I'd argue that -ck and -mm, along with a host of others such as -wireless-dev, are simply branches, not forks. All of the branches sync with the mainline branch regularly to ensure the code doesn't stray too far out of sync.
One of the inherent strengths of the branch-driven development model, IMHO, at least from a user's POV, is that you can select a purpose-driven kernel branch for testing or even full-time use without sacrificing too much in terms of stability or compatibility with mainline. I ran -wireless-dev for a while as my primary kernel back when I was struggling to get my Broadcom card supported; aside from the stack and wireless improvements, everything else was standard vanilla kernel, and it even patched cleanly with -ck.
A fork, on the other hand, invariably becomes orphaned from the parent project, and that's where it becomes much more resource-intensive to manage. Then you get the quasi-forks like Beryl that claim to be a separate project but were syncing their core code with the parent project (Compiz), which let them obtain but not influence patches and updates while doubling the effort needed to keep their own patches and improvements functional. That simply becomes a mess.
Branches provide important flexibility for the kernel and allow project/branch maintainers to work independently without interfering with mainline kernel development or constantly breaking things. Well, at least in theory.
But a true fork of the Linux kernel, one with all ties cut, would be an absolute b*tch to maintain and would almost certainly be stillborn, unless it had a wide swath of developers willing to adopt it a la the XFree86/X.Org split.
I think that's the biggest reason the distros standardize on vanilla with a small selection of custom patches; it's the closest thing the kernel devs will come to a consensus on as a “stable” or baseline kernel, and therefore the one that will receive the most attention when something breaks. Distros like RH or Novell have a number of kernel devs on their payroll, including some of the high-profile maintainers, so they may have a little more leeway to aggressively patch production kernels, knowing they have the ability to troubleshoot and address problems internally (I know SUSE applies a number of patches to the vanilla kernel; not too sure about RH, though). Many of the other distros, though, need to rely on the kernel dev team for assistance when something breaks.
Anyways, just my interpretative 2c…
Or just base it off a different POSIX implementation, like a BSD or Minix.
“Why I Quit”
I thought Thom was leaving. What a disappointment…
Edited 2007-07-25 06:49
Linux zealots have been saying for years that Linux is ready for the desktop; but now a Linux kernel developer (who knows what he is talking about) says it's not, and leaves…
“but now a Linux kernel developer (who knows what he is talking about) says it's not, and leaves…”
Good thing a difference of opinion is so rare, eh? Clearly, now that *one* person has left due to some reason his story is obviously the truth set in stone and everyone else is wrong. Or not.
Linux zealot spotted.
Really? Better check myself. Hmmm..nope, I’m still not using Linux.
Clearly, now that *one* person has left due to some reason his story is obviously the truth set in stone and everyone else is wrong. Or not.
That *one* person is the one working and optimizing the kernel for desktop usage. That *one* ran benchmarks and knows the kernel code impacting desktop performance.
Now you can trust who you want:
– the Linux end user who doesn't know how things work and tries to convince the world to use his OS;
– the Linux kernel developer who has worked for many years to improve the desktop on Linux and is famous for his patches.
“That *one* person is the one working and optimizing the kernel for desktop usage.”
He’s not the only person who has done that.
“Now you can trust who you want”
You forgot one group of people:
– the remaining kernel developers who've worked on the kernel for many, many years.
Please put it in perspective. The bottom line is that Con was viewed as one of the people working towards a better desktop for users. Linus' rants against GPLv3 and his goading of the FSF are indicative of how isolated they are from users.
Comments about the FSF and hobbyists not coding for the kernel are *common* and viewed as a good thing.
His statement although too black and white is right, yours is a lie…that Linus refutes often.
…but if the goals of the *companies*, the ones that *pay* for real development, are the same as those of the desktop, then desktop users benefit, so it’s not bad news. Things once exclusive to servers are becoming *part and parcel* of the machine on your desk: multi-processor support, RAID, file systems. It’s the stuff servers do not need that remains poor on the desktop, wi-fi and 3D support, and Linus deals with those pragmatically.
Now Ubuntu has proved that GNU can win desktop users in droves, so times are changing… but unless this is turned into real revenue, very little will change.
This paragraph sums it up for me:
“I even recall one bug report we tried to submit about [desktop performance] and one developer said he couldn’t reproduce the problem on his quad-CPU 4GB RAM machine with 4 striped RAID array disks… think about the sort of hardware the average user would have had four years ago. Is it any wonder the desktop sucked so much?”
The -ck line of kernels was always much more usable for me, until I gave up running Gentoo and bought a new laptop to run Ubuntu. Even now it suffers performance issues if I leave it overnight and updatedb pushes all my apps into swap, or if I open too many pages with Flash (still a CPU hog on Linux) plus a few other apps and my laptop can’t cope.
However the kernel developers with their mega machines simply can’t reproduce such poor performance. I wonder why?
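One concrete way to back up such a bug report, rather than just describing the symptom, is to log swap pressure overnight. Below is a minimal sketch in C that samples SwapFree and Cached from /proc/meminfo once a minute; the field names are the standard /proc/meminfo keys on Linux, but the one-minute interval, the output format and the file name are arbitrary choices of mine.

/* swaplog.c (hypothetical name) — log swap and page-cache figures
 * from /proc/meminfo once a minute. Build with: cc -o swaplog swaplog.c */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* Return the value (in kB) of one /proc/meminfo entry, e.g. "SwapFree:". */
static long read_meminfo_kb(const char *key)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long val = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            sscanf(line + strlen(key), " %ld", &val);
            break;
        }
    }
    fclose(f);
    return val;
}

int main(void)
{
    for (;;) {
        printf("%ld SwapFree=%ld kB Cached=%ld kB\n",
               (long)time(NULL),
               read_meminfo_kb("SwapFree:"),
               read_meminfo_kb("Cached:"));
        fflush(stdout);
        sleep(60);
    }
    return 0;
}

Run it before leaving the machine overnight and attach the output to the report; a steadily falling SwapFree alongside a growing Cached figure is consistent with the updatedb behaviour described above.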
“You can put a pig in a dress, but Linux is still a server.”
I heartily thank ck for his contributions and wish him good luck and wisdom in whatever he pursues next.
One of the best interviews in a while, probably due to the subject matter.
I found it very hard to disagree with any of his points.
I always knew deep down inside that eventually the Linux kernel would have to “fork”.
Not fork in a bad way, but probably in a good way.
As Con has pointed out, Linux, due to its success, exists in just about any computing environment or infrastructure nowadays.
My phone, my access point, my laptop and even my Zaurus all run a Linux kernel. Those are what I call mild “forks”.
However, if there is any point in his conversation that I would disagree with him on, it is the innovation in hardware.
All of the above devices have tailor-made kernels with tailor-made schedulers for the tasks they need to perform.
I think any Linux kernel on the DESKTOP is going to have to be different from the one they put in a BlueGene, a Zaurus or an access point. Make no mistake, it is.
I think he implies that the kernel developers are trying to stonewall everyone into a particular kernel design for any device, and I do not see that.
What I see is a continual push to modularize the kernel into sections to accomplish these goals (to use Con’s own example, the scheduler); see the sketch below.
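To make the scheduler example concrete: that modularization amounts to putting policy behind a table of function pointers, so the core code calls through the table instead of hard-coding one algorithm. The userspace sketch below only illustrates the idea — the names are made up and far simpler than the kernel’s real sched_class — but it shows why a different policy (say, a desktop-tuned one like Con’s) can be swapped in without touching the rest of the system.

/* Illustrative only: a tiny, made-up "pluggable scheduler" interface. */
#include <stdio.h>

struct task {
    int id;
    int priority;
};

/* Hypothetical ops table; the kernel's real interface is far richer. */
struct sched_ops {
    const char *name;
    struct task *(*pick_next)(struct task *tasks, int n);
};

/* One policy: always run the highest-priority task. */
static struct task *pick_highest_priority(struct task *tasks, int n)
{
    struct task *best = &tasks[0];
    for (int i = 1; i < n; i++)
        if (tasks[i].priority > best->priority)
            best = &tasks[i];
    return best;
}

static const struct sched_ops prio_ops = {
    .name = "static-priority",
    .pick_next = pick_highest_priority,
};

int main(void)
{
    struct task tasks[] = { {1, 5}, {2, 9}, {3, 1} };
    /* Point this at a different ops table to change policy wholesale. */
    const struct sched_ops *ops = &prio_ops;
    struct task *next = ops->pick_next(tasks, 3);
    printf("[%s] next task: %d\n", ops->name, next->id);
    return 0;
}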
But on all of the other points, the PC hardware industry in particular, I totally agree with him. PC manufacturers have LOST THEIR WAY. We basically all have miniaturized PC ATs on a chip, just running a lot faster, with a new AT bus with more wires (i.e. PCI).
As evidence of this, I point to the fact that PC clock rates have stalled over the past 5 years. Manufacturers are not really doing anything about the real problem in digital computers, which is the buses and glue for information retrieval.
I TOTALLY BELIEVE Con when he says that Microsoft has pretty much toasted the industry.
Why?
No PC manufacturer in their right mind would build a desktop machine with radical new bus architectures because it would break Windows.
This fact alone is holding the entire information age hostage.
Every time you purchase or sell a Microsoft product, you might want to consider whether you want the same old same old for the next 20 years. Vista 2020, anyone?
GAD.
OR
Is it time to take Con’s advice and change the way the hardware industry builds motherboards and chipsets?
-Hack
Here is a 20-minute video about a real desktop OS.
If you consider when this video was made and which computer they used (two 266MHz processors with 64MB of RAM)… impressive.
http://video.google.com/videoplay?docid=1659841654840942756
Can I please have such a Desktop OS that works on modern machines?
I would even pay for it.
I was contemplating posting about that myself actually.
I remember getting really excited about BeOS back in the days when there really wasn’t any reason for me to even think about using anything other than Windows (yet Be still managed to catch my attention somehow, and it even had me signing a petition to get Cubase ported to it).
So I was furious when I heard about Be’s intention to totally abandon the desktop (or “focus shift” if you will) but I’m even more angry about it now.
If you look at what Be was doing in the 90’s and how advanced the OS was, just imagine what it would have been capable of now, after 10 odd years of further development.
For people (like me) who are currently deeply dissatisfied with both Windows and *nix-based OSes, this really could have been a third way.
I liked the part where the dev moved playing video windows around and the movie didn’t freeze or stutter during the move.
I think the issue here is that we are getting to the point in computing where it’s hard to have your cake and eat it too. A divide where one side says “no, the desktop is key” and another says “no, appealing to the enterprise is key”.
I know when I’m at my desktop, I want it to do desktop things easily and have it be stable. I want it to support laptop buttons, X-in-one card readers and other things that Windows users are privy to.
On the other hand I don’t expect servers to have the same functions as desktops etc.
Yet, myself, I like the all-in-one quality to an extent, as I can set up servers if I want; but I’m a nerd, and most people aren’t.
In the end, I think Con’s frustrations are justified, but that is what happens when things are run by a community: the majority wins.