“Proprietary kernel modules are flaky, out of date, and the bane of a Linux professional’s existence – right? Maybe not. This three-part (part I | part II | part III) series makes a case for formalizing the uneasy truce between GPL-clean and ‘tainted’ modules.”
Unlike the days when software was a vehicle to sell hardware, and despite utopian ideals of an open source software world entirely free of patents and copyrights, today's open source community must coexist with closed source developers.
No. Why? First, let me remind you that it was a closed printer driver that made Stallman start GNU (yeah, that crazy guy – even I hate him, but he's the one who started it all). Linux was born with basically no real hardware support. Thanks to open source, and not closed drivers, you can now run Linux on almost every computer out there, and Linux has only started becoming popular. The percentage of hardware devices supported by Linux is only growing – I need to care less about what computers I buy because Linux already supports them. I install Linux on my friends' computers and it just works most of the time. Linux is the operating system with the most out-of-the-box device drivers (Microsoft gets all of its drivers through third parties; Linux developers can modify or even recompile theirs whenever they need to – there's a lot of hardware supported by 64-bit Linux that isn't supported by 64-bit Windows), and it's at least the second-place operating system counting the number of architectures supported.
In other words, Linux's open source drivers support more hardware than any closed source operating system in the world.
I can't use my webcam in 64-bit Windows or on a Mac because Creative either doesn't seem to have enough money to hire programmers or just doesn't care about customers. Windows XP SP2 and Vista don't support my SATA controller out of the box (a controller made in 2001), so installing them is a pain in the ass. For Vista I can use my USB disk to supply the third-party drivers, but XP SP2 requires a floppy – and I don't have a floppy drive, I don't live in the '90s anymore – so I just can't install Windows XP SP2 except by doing an even uglier hack. At least QEMU works. And now ATI has decided that my Radeon 9200 is too old, so they've stopped supplying new drivers for XP. And they won't supply drivers for Vista either, so I can only use the stupid basic VGA driver with my computer.
I'm *sick* of proprietary drivers. They create more problems than they solve; they're relevant now just because Linux geeks use Nvidia/ATI cards. I can't even use ATI's proprietary Linux drivers, because ATI has decided (as with its Windows drivers) that my 9200 card is too old.
Maybe it should be closed source operating systems that adopt open source development methodologies, not the other way around. It's not about zealotry. It's just that these days my hardware only boots and works with open source drivers.
While it would be nice if companies would just open-source all of their drivers, let’s face it: in the case of most companies, it simply isn’t going to happen. The only way open source drivers for such companies’ hardware will exist is if third parties reverse engineer the hardware and create them, similar to what is done now.
IMO, the problem with proprietary drivers in Linux now is the fact that they are so frowned upon that it is extremely difficult to make one that will work with many distributions. Because of this, many companies that might come out with binary Linux drivers won't, because of the complexities involved in creating bizarre abstraction layers, such as what is found in the Nvidia drivers. An official binary driver API would alleviate such problems and allow users to choose between binary and open source drivers easily.
IMO, the problem with proprietary drivers in Linux now is the fact that they are so frowned upon that it is extremely difficult to make one that will work with many distributions.
Is it? I think you meant to say "…make one that will work with many kernels", because in fact both Nvidia and ATI each make only one Linux driver, not many. AFAIK the driver's interaction with the kernel and the OS is pretty much standardized.
A “shim” such as the one used by Nvidia is not a “bizarre abstraction layer”, and it’s not that hard to do. Do you have any example of companies who would produce such drivers but don’t due to technical difficulty (and not simply because there are already open-source drivers that do the job)?
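For reference, the shim amounts to a thin layer of glue code shipped as source, compiled against each user's kernel headers at install time, and linked with the vendor's precompiled core. A minimal sketch, assuming a hypothetical binary blob that exports vendor_core_init()/vendor_core_exit():

```c
/* shim.c - glue module compiled per-kernel, linked against the
 * vendor's precompiled object at build time.  The vendor_core_*
 * symbols are hypothetical stand-ins for that binary blob. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

extern int vendor_core_init(void);   /* provided by the binary object */
extern void vendor_core_exit(void);

static int __init shim_init(void)
{
	printk(KERN_INFO "shim: binding to vendor core\n");
	return vendor_core_init();
}

static void __exit shim_exit(void)
{
	vendor_core_exit();
}

module_init(shim_init);
module_exit(shim_exit);

/* A non-GPL-compatible string here is what sets the kernel's
 * "tainted" flag when the module loads. */
MODULE_LICENSE("Proprietary");
```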
is extremely difficult to make one that will work with many distributions
[…]
An official binary driver API would alleviate such problems and allow users to choose between binary and open source drivers easily.
I offer you this nice API proposal: open source it.
It'll get included in the main kernel tree and distros will support your device. And you don't need to build an official binary API, nor maintain it, nor do QA to ensure it doesn't break. You avoid all the problems derived from closed drivers, like debugging your kernel in the presence of a binary driver. You also… etc., etc. (we all know this story)
It's magic! All your technical problems solved by just… open sourcing it!
Yes, there are companies that do not want to open source anything. So what? There's an easy path, but they've chosen the difficult one. Why on earth should we waste a second trying to make their path easier? It's fine that they want to choose the difficult way. I won't stop them from banging their own heads against a wall, if banging their heads against a wall makes them happy. They've avoided the easy path *on purpose*. So let them deal with their choice. Their drivers will always be a PAIN to install; it's their choice, let them live with it.
So, opening a driver guarantees it’ll get in the main tree then? Right…
I've got an extremely common Dell printer, based on a Samsung engine. Samsung provided a Ghostscript module that makes it act as a pure PS printer, and a PPD, both open source. It's not yet in Ghostscript, despite many hundreds of other printers being supported by it.
IIRC Samsung released its driver two years ago.
I also have a Via graphics card which is supported by OpenChrome, but the driver updates required for it were, as of my last use of Linux, not in the kernel (DRI/DRM) or X.Org. Additionally, for a long time an updated driver from the original source was required to get the full feature set on Lucent ORiNOCO cards – it took about a year until the driver was updated in the kernel tree.
If you rephrased that to “it’ll get included in the main kernel tree if a main kernel dev has that hardware”, you’d be closer to right.
Very wrong. If someone submits a driver for kernel inclusion, it will be examined thoroughly, and once everything required for inclusion is fixed, it will be included. But you can't just slap down a huge pile of code that, although it may work, is a giant piece of shit, and then expect it to be merged.
You can submit it for inclusion; it will be examined, and then, if you're willing to maintain it (starting with bringing it up to quality), it will be included.
But that's far more than just open sourcing it.
Fitting driver code (most probably derived from the Windows driver's source) to kernel standards (many of which relate more to the kernel devs' convenience than to mere code quality) may be more trouble than it's worth.
Many hardware companies (especially once you look outside the circle of the two or three biggest ones) are not really adept software houses, and getting software up to kernel standards is not an option for them.
Some take the risk (exposing trade secrets, getting stonewalled over exposed hardware bugs) and open source their software anyway. It rarely pays off, as the desired effect (getting the hardware supported out of the box in major distros) is so delayed that, years after release, it's a wasted effort from a promotional POV. More than that, not only do they get bad publicity from angry users for releasing "a pile of crap", it also steals the thunder from their current products by "promoting" the outdated, unsold but now supported ones on the second-hand market.
But that’s what needs to be done.
I hear Windows people say "Windows is stable, it's the drivers that suck" all the time. If that's true, then including those crap drivers in the Linux kernel would make Linux just as unstable as Windows.
Right now, when I compile and install a vanilla Linux kernel, I can expect that 99.999% of the kernel, drivers and other modules are stable and relatively bug free. I wouldn’t be able to do that if just any piss-poor code could be added to the kernel.
Given the choice between a rock-solid OS with limited driver availability and a buggy, unstable OS with drivers for everything, I’d take the stable one every time.
if someone submits a driver for kernel inclusion, it will be examined thoroughly, and once everything required for inclusion is fixed, it will be included
It may be examined, and if fixed may be included.
There are many factors that affect whether or not a particular driver is ever considered, and not all submissions are evaluated, no matter what their initial quality is.
“If you rephrased that to “it’ll get included in the main kernel tree if a main kernel dev has that hardware”, you’d be closer to right.”
Bingo. I could not have said it better myself. Or the fact that the kernel devs decide what you can and cannot do with your hardware – which they do, no matter how much this gets flamed. My example is them discontinuing support for a very common Intel RAID controller for no reason other than that they "didn't feel it was necessary". My system in question is a Dell XPS Gen 5, so needless to say Linux no longer runs on that box. Something to be said for keeping backward compatibility, eh?
If you rephrased that to “it’ll get included in the main kernel tree if a main kernel dev has that hardware”
You are completely right. That is, if you don't bother submitting and maintaining the code, the only drivers that will get into the tree are, obviously, the ones the kernel hackers can develop themselves.
You need the hardware in order to develop a device driver for that device. Apparently you think that's not a requirement.
I mean, do you really think that Linux developers will pick up drivers and happily include them? Why on earth would they do such a stupid thing? They aren't going to include your printer and SiS drivers if they can't even test the damn things.
But if you submit your driver, fix the issues people find when reviewing it, and find a maintainer, it will get included even if it's for the stupidest hardware device on the face of the Earth.
It'll get included in the main kernel tree and distros will support your device. And you don't need to build an official binary API, nor maintain it, nor do QA to ensure it doesn't break. You avoid all the problems derived from closed drivers, like debugging your kernel in the presence of a binary driver. You also… etc., etc. (we all know this story)
That’s the theory, but the reality is that the devs are having a hard enough time maintaining all of the drivers in the kernel as it is, particularly the legacy ones. Andrew Morton has brought this up as an issue.
Don’t get me wrong, I’m not knocking the kernel devs and they do a fantastic job of keeping things working as it is, but I’ve always questioned the pragmatism of the wrap-everything-in-the-kernel-and-leave-it-to-us approach. How big is the kernel expected to grow, and how prepared are the devs to handle the exponential growth in the complexity of maintaining, troubleshooting and updating the drivers with every kernel revision?
I don't think binary drivers are necessarily the right answer, but I'm not sure trying to convince large hardware vendors to hand off their QA and brand reputation to the kernel devs is going to fly either.
Idealism aside, I don’t think it’s a problem with an easy solution.
I don't think binary drivers are necessarily the right answer, but I'm not sure trying to convince large hardware vendors to hand off their QA and brand reputation to the kernel devs is going to fly either.
There is no reason to dump their code in the kernel and then forget about it. They can get it merged with the kernel and then MAINTAIN IT THEMSELVES within the public tree by submitting patches to the kernel.
The problem with a few of the legacy drivers is that very few people have the old hardware and nobody seems to be stepping up to maintain them. Just because a driver is open source doesn’t mean it should magically live forever; just until nobody cares enough to actually maintain it any more. Thankfully that’s usually a much longer term than the vendor who created the product in the first place.
There is no reason to dump their code in the kernel and then forget about it. They can get it merged with the kernel and then MAINTAIN IT THEMSELVES within the public tree by submitting patches to the kernel.
Fair enough, but to play devil's advocate, then why is that different from producing proprietary modules? Savvy users can always mod the code themselves if they need to, which even happens with the Nvidia drivers from time to time, where users post patches to fix something in the wrapper. But where's the benefit for the vendor?
See, I think the biggest problem here is that nobody has produced a suitable business case for the manufacturers to fully embrace the OSS model. The day may come when Linux has a sufficient install base to leverage, but until then we're in a bit of a chicken-and-egg scenario.
From the user's point of view, there's no doubt that open drivers provide more flexibility and are likely to enhance the value they receive from their hardware. But (sadly) I'm not sure that's sufficient incentive for the vendors.
Yet, anyways. That could all change if ATI or another major player opens up, but I suspect for now hardware companies are taking a wait and see approach.
Fair enough, but to play devil's advocate, then why is that different from producing proprietary modules? Savvy users can always mod the code themselves if they need to, which even happens with the Nvidia drivers from time to time, where users post patches to fix something in the wrapper. But where's the benefit for the vendor?
Well for one, open source modules are much easier to change than proprietary ones and can be recompiled to work on different architectures etc. They also survive the vendor going out of business or being purchased by an anti-open source entity.
As for the benefit to the vendor, they get free code auditing, code inspection and even maintenance by having their code in the kernel. As a small example, when someone changes the name of a MACRO, their code will be automatically patched to use the new name without effort. Not to mention that it will be distributed automatically to everyone that installs a stock kernel.
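To make that concrete, here is a minimal sketch of the kind of tree-wide fixup in-tree code gets for free; the driver structure and function are hypothetical, but the removal of the old SLAB_KERNEL allocation-flag alias in favor of GFP_KERNEL is exactly this sort of change – in-tree callers were patched along with it, while out-of-tree modules simply stopped compiling:

```c
#include <linux/slab.h>

/* Hypothetical per-device state. */
struct mydev_state {
	int unit;
};

struct mydev_state *mydev_alloc(void)
{
	/* was: kmalloc(sizeof(struct mydev_state), SLAB_KERNEL);
	 * fixed tree-wide when the SLAB_* aliases were removed. */
	return kmalloc(sizeof(struct mydev_state), GFP_KERNEL);
}
```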
Yet, anyways. That could all change if ATI or another major player opens up, but I suspect for now hardware companies are taking a wait and see approach.
You’re probably right. There are vendors who are watching from the sidelines. But even with that situation, the hardware support in Linux (save for a couple of unfortunate areas) is incredibly good.
“There is no reason to dump their code in the kernel and then forget about it. They can get it merged with the kernel and then MAINTAIN IT THEMSELVES within the public tree by submitting patches to the kernel.”
You forgot to add that it could take a year or two before one of the kernel devs decided he wanted to do that. Not just anyone can submit code; only the elite can.
You forgot to add that it could take a year or two before one of the kernel devs decided he wanted to do that. Not just anyone can submit code; only the elite can.
Well of course you have to get past the gatekeepers, and that’s a _good thing_. It stops the kernel from being a dumping ground for garbage code.
However, if you approach it with a good attitude and an open mind, I think you'd find getting things merged isn't all that hard. My company has had a handful of patches accepted without a problem, and I see new drivers being accepted all the time from people who are brand new to the process and haven't had time to become "elite". There are even drivers in the kernel for devices that are only used by one or two people.
While the Linux kernel mailing list might seem like a flamefest, there is actually a huge amount of work being accomplished and there are some very reasonable people among the so called elite. Linus and Andrew for instance do their best to see new submissions being accepted and properly reviewed and are pillars of reason.
The process must be working okay, since there are thousands and thousands of patch lines being accepted in the kernel every month. Clearly if those lines are only being created by the “elite” they’re doing some damn good work.
You forgot to add that it could take a year or two before one of the kernel devs decided he wanted to do that. Not just anyone can submit code; only the elite can.
Duh? Anyone can submit code as long as they follow the rules. What the hell does "elite" mean? Subsystem maintainers? Well, yes, it'll be a subsystem maintainer who merges your driver, not you. You know, organization is the small price you have to pay when it's impossible to handle one-to-one relationships with every other developer.
There are lots of new drivers added in every release from unknown people who just developed and maintained a driver, submitted it for review, fixed the problems and got it included in the main tree. Apparently those people live in a parallel world, much different from yours…
The problem with maintaining a driver in the Linux kernel tree is that most maintenance work is caused by other parts of the kernel spontaneously changing without consultation with the driver devs.
That's a lot of effort (easily a dedicated position) with no apparent benefit, aside from the assurance that you won't be kicked out of the kernel in the next release.
Nvidia can afford that, but for smaller/specialized hardware houses it's overkill.
The problem with maintaining a driver in the Linux kernel tree is that most maintenance work is caused by other parts of the kernel spontaneously changing without consultation with the driver devs.
Well, that's true to a certain degree. However, most changes that affect driver code are required to also update the drivers themselves.
On top of which, an out-of-tree driver will have to be modified to accommodate such changes _anyway_. So that issue really shouldn't be a deciding factor in whether to submit for inclusion or not. In fact, you have a better chance of _not_ being affected by a change by being included than you do otherwise.
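For a picture of what the out-of-tree alternative looks like in practice, here is a minimal sketch of the version-conditional cruft such drivers accumulate; the handler is hypothetical, but the trigger is real – in 2.6.19 the IRQ handler prototype lost its struct pt_regs argument:

```c
#include <linux/version.h>
#include <linux/interrupt.h>

/* An out-of-tree driver that must build against several kernel
 * versions ends up wrapping every touched interface like this. */
#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 19)
static irqreturn_t mydev_isr(int irq, void *dev_id, struct pt_regs *regs)
#else
static irqreturn_t mydev_isr(int irq, void *dev_id)
#endif
{
	/* acknowledge the hardware, schedule the bottom half, ... */
	return IRQ_HANDLED;
}
```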
The problem with maintaining a driver in the Linux kernel tree is that most maintenance work is caused by other parts of the kernel spontaneously changing without consultation with the driver devs.
Well, that's true to a certain degree. However, most changes that affect driver code are required to also update the drivers themselves.
In theory. But often the changes made to the drivers are mere guesswork and not tested, and sometimes drivers are broken by people who do that sort of updating.
Plus, of course, only the drivers for the small number of mainstream architectures are patched in this process, leaving the embedded community to cope.
Which is actually worse than simply breaking the driver…
There is no reason to dump their code in the kernel and then forget about it. They can get it merged with the kernel and then MAINTAIN IT THEMSELVES within the public tree by submitting patches to the kernel.
You have no idea how old it gets having to constantly update a driver in the kernel tree just because someone toyed with kernel interfaces and broke it again.
There just aren't enough Linux-based hardware sales to justify spending the money that sort of endeavor takes.
You have no idea how old it gets having to constantly update a driver in the kernel tree just because someone toyed with kernel interfaces and broke it again.
I’m curious as to the frequency at which this happens…When you say “constantly”, do you mean once every release? Once a month? Once a year?
I understand that a stable interface is preferable, but in real-world terms, how much of a problem is this with Linux? Is it a mild annoyance, a serious issue, or a “theoretical problem”, along the lines of “there’s too many distros” or “there should only be one DE”?
Not trying to start a flamewar or anything…I’m genuinely curious about the real-world extent of the problem.
I’m curious as to the frequency at which this happens…When you say “constantly”, do you mean once every release? Once a month? Once a year?
When I was maintaining a port during the time frame from 2.6.8 to 2.6.18, every kernel update broke our drivers.
I suspect that was a particularly extreme period, but it seems to be getting worse rather than better.
Thanks, I had no idea…that sounds pretty bad.
“While it would be nice if companies would just open-source all of their drivers, let’s face it: in the case of most companies, it simply isn’t going to happen. The only way open source drivers for such companies’ hardware will exist is if third parties reverse engineer the hardware and create them, similar to what is done now. ”
You are only partially correct. What is required in most cases is *not* for the vendors to open their drivers but to provide open specs for their hardware, which is not hard to do in most cases. Many vendors these days, like Intel, are willing to work on producing drivers, merging them into the Linux kernel, and continuing to maintain them, because it provides them the best amount of control over their hardware and software.
While it would be nice if companies would just open-source all of their drivers, let’s face it: in the case of most companies, it simply isn’t going to happen.
It's not necessary for every company to open-source every driver. Enough drivers need to be open-sourced such that a viable ecosystem of hardware exists for open systems. Platforms are huge investments. People will pick hardware to suit their platform, as long as doing so is reasonably cheap and easy. Whether Linux works with every single webcam that people already own is not going to drive adoption one way or another. What matters is that it's easy to find webcams that work with Linux, and work with a minimum of hassle.
The current situation is already most of the way there. For example, every piece of hardware on my MacBook, aside from a few Apple-specific bits like the backlight controller, is supported by good open drivers. The only major gap is video cards, but there are a lot of avenues through which that gap can be addressed. Intel, the largest seller of GPUs, has open Linux drivers. As they move their GPUs up-market, those drivers will cover a lot of the gap. As AMD moves towards using the GPU as a HyperTransport coprocessor, it becomes substantially more attractive for them to open up the specs (or even sources) of ATI's hardware. And of course, reverse-engineering efforts for NVIDIA GPUs continue. Any one of these avenues could significantly remedy the GPU driver issue, without requiring compromises for binary drivers.
Binary drivers are in the long term bad for Linux as a platform. They limit flexibility and maintainability, and in some segments (specifically graphics), not having specs for the hardware severely limits the ability of Linux developers to innovate relative to those working at Apple and Microsoft, who have the access they need. Compromising on binary components may be necessary to win some short-term battles, but embracing them in general will cause Linux to lose the war.
One more thing has to be remembered. Linux is not SkyOS or Haiku or BeOS. It's got a desktop market share comparable to Apple's, server revenues that totaled $5.7 billion last year, 24 percent of the smartphone market (bigger than Palm and WinCE combined), 10% (by revenue, probably more by shipments) of the embedded software market, and it runs on 74% of the top-500 supercomputers. It has double-digit annualized growth rates in the desktop, server, and embedded markets. It's already pretty huge, and growing faster than pretty much everything else.
Linux is not at the point where the community can afford to throw its weight around like Microsoft, but it's also not some hobby OS that has to accept the decisions of hardware makers as the word of God. Linux vendors can afford to fight a few key battles on the binary drivers front in order to preserve the viability of the system in the long term.
Back when the GNU system was created, nobody thought that one day Sun would open source Solaris under a very liberal copyleft license. Nobody expected IBM and SGI to open up important pieces of code like XFS or JFS. Nobody expected that big corporations would be funding thousands of employees to work on open systems and open code. But all that has happened. In the big scheme of things, expecting some key drivers to be open sourced is not a far-fetched hope at all.
While it would be nice if companies would just open-source all of their drivers, let’s face it: in the case of most companies, it simply isn’t going to happen. The only way open source drivers for such companies’ hardware will exist is if third parties reverse engineer the hardware and create them, similar to what is done now.
They won't open source it for the following reason: management is advised by the legal department, who are advised by their engineers. If you're an engineer and you realized that, were the source submitted to the Linux kernel tree, only half the programmers would be needed to accomplish the same job (since the work of maintaining the driver would be spread across many people)…
Well, if you were that engineer, would you support an idea that could possibly make you unemployed? Of course not! So you feed bull crap to your legal department and management, claiming all sorts of lies and half-truths.
Patents? Bullcrap – a patent relates to the PROCESS, it doesn't relate to the actual code. Heck, FreeType is a prime example: if you want to ship a product that relies on FreeType with patented hinting, you have to pay up to the patent holder.
Same situation here: the patent holder would still get their money, their IP would still be protected, and all would be happy. Whether or not the code is made open source, the patented process is still protected, because even if a company were to copy the process, they too would have to pay up to the patent holder.
What's the real reason for not open sourcing? Vested interests, ivory towers, and egos the size of Goodyear blimps hoping, perish the thought, that the unwashed masses don't get hold of the code and maintain it a lot better than the paid programmers do.
They won't open source it for the following reason: management is advised by the legal department, who are advised by their engineers. If you're an engineer and you realized that, were the source submitted to the Linux kernel tree, only half the programmers would be needed to accomplish the same job (since the work of maintaining the driver would be spread across many people)…
There’s a talent shortage in the industry. Any software developer (we are not engineers) competent enough to be asked to advise management on open source decisions isn’t going to have to worry about being unemployed.
In addition to which, in seven years of developing Linux kernels, I have never seen open sourcing software lead to a need for fewer developers.
The real reason for not open-sourcing drivers? It varies from company to company, but the number one concern is usually that it costs more to open source a driver than the return on investment for having done so.
How does it cost more to "open source a driver" than the return on investment? If fixing is done by four volunteers and heavy development by the paid programmers, wouldn't that in itself be a nice way to save money and focus on the important things – like the next product launch – rather than the mundane crap that chews up time, namely maintaining old code for old products?
How does it cost more to "open source a driver" than the return on investment? If fixing is done by four volunteers and heavy development by the paid programmers, wouldn't that in itself be a nice way to save money and focus on the important things – like the next product launch – rather than the mundane crap that chews up time, namely maintaining old code for old products?
That's a big "if". The reality is that you spend money on documentation; you spend money on doing release engineering for source that you don't spend doing release engineering on binaries; you spend money on the person coordinating the volunteers; you spend money on answering questions from the community that you wouldn't need to answer.
In return you get 4 volunteers who don’t have good access to your hardware people, who don’t care about your development standards, who don’t understand the limitations forced on you by having to support multiple OSes, who aren’t motivated to do testing beyond what suits their immediate needs, and who tend to be far more enthusiastic than talented.
If you lived in an open-source-only world, maybe the volunteers would save you money, but they're just duplicating work you'd have to do and test for your majority customers, anyway.
And frankly, the sales to open source customers that you’d generate as a result wouldn’t make up the difference.
And frankly, the sales to open source customers that you’d generate as a result wouldn’t make up the difference.
Luckily there are companies who aren't nearly as negative about the process as you appear to be. We're starting to get good open source driver support for lots of hardware: Adaptec, IBM, Intel, etc. While in these early days there may not be enough profit motivation for smaller companies to participate, that's the _very_ reason we need to stop all this promotion of binary modules and remind people to support the foundation that saw Linux through to where it is today. It's an iterative process of creating a bit more supply and a bit more demand, back and forth, until there is an even healthier open source ecosystem than we enjoy today. Even though you're very negative about the process, it's produced a marvelous operating system with exceptional open source support for a _vast_ array of hardware. It can't all be as bad as you think.
Even though you’re very negative about the process, it’s produced a marvelous operating system with exceptional open source support for a _vast_ array of hardware. It can’t all be as bad as you think.
A marvelous operating system? Is there a new OS I haven’t heard of?
But I don’t think it’s “bad” or “good”. I merely described the reality as it stands.
When you compare how much effort has gone into Linux with how short a distance that effort has taken it relative to even 1980s commercial software development, you quickly see that there's nothing "exceptional" or "marvelous" about it – unless you enjoy watching software developers churn because they can't stay with one concept long enough to work out the kinks in it.
A marvelous operating system? Is there a new OS I haven’t heard of?
But I don’t think it’s “bad” or “good”. I merely described the reality as it stands.
Perhaps not; it's called Linux and it's really quite a good operating system. Since you can't see that, I'd say your view of reality is a little, shall we say, cloudy ;o)
The main point is that you continue to go on and on about how bad the development model is and how burdensome it is for anyone to contribute and maintain a driver. Yet reality seems to suggest that it isn't as bad as you imagine, since Linux supports as much if not more hardware than any other operating system. So maybe it's time for you to rethink your stance; maybe it's just _you_ who finds the current situation unacceptable. Things just can't be as bad as you describe, or Linux wouldn't have had the success it has had or support nearly as much hardware as it _does_.
Perhaps not; it's called Linux and it's really quite a good operating system. Since you can't see that, I'd say your view of reality is a little, shall we say, cloudy ;o)
Actually, I can't see it, since I've been developing operating systems for thirty years and Linux for seven. As far as quality goes, Linux is almost as good as the best Unix boxes were about 15 years ago – which is sad, because that's about where Linux should have started, not where it should be after all this time.
The main point is that you continue to go on and on about how bad the development model is and how burdensome it is for anyone to contribute and maintain a driver. Yet reality seems to suggest that it isn't as bad as you imagine, since Linux supports as much if not more hardware than any other operating system.
I described what happens. You’re the one who’s judging the way it is as “bad”.
You're also confusing quantity with quality. I've written Linux drivers, and I've debugged a lot more of them. I've also written and debugged drivers for a range of Unix derivatives, several IBM operating systems, several DEC proprietary OSes and several embedded OSes.
It is in the context of that experience that I’ve evaluated Linux and found it sadly wanting.
So maybe it's time for you to rethink your stance; maybe it's just _you_ who finds the current situation unacceptable. Things just can't be as bad as you describe, or Linux wouldn't have had the success it has had or support nearly as much hardware as it _does_.
Alas, I am not alone in holding this viewpoint. Andrew Morton is coming around to it, albeit slowly, and there are plenty of people who've noticed it. Even Greg K-H knows that there is a problem, although he's ascribed the cause to the wrong reasons.
But you're confusing quality with success. If success were a measure of quality, then XP would clearly be the best operating system ever written.
There are two reasons why Linux supports so much hardware. First, the support is often quite poor, but still counted as "support". Second, there has been a massive brute force effort to make it do so.
Actually, I can't see it, since I've been developing operating systems for thirty years and Linux for seven. As far as quality goes, Linux is almost as good as the best Unix boxes were about 15 years ago – which is sad, because that's about where Linux should have started, not where it should be after all this time.
Sorry that’s just BS. Number one, those Unix boxes weren’t running on PC hardware. Secondly, Linux started out as a project of one man. It developed without the resources thrown at many other operating systems. However, that’s changing and now we see massive investment and many full time programmers dedicated to it and the results shouldn’t be dismissed. The next release will contain support for virtualization built right into the kernel and there have been a _huge_ number of recent changes advancing its scalability. Your opinion of Linux seems dated.
Alas, I am not alone in holding this viewpoint. Andrew Morton is coming around to it, albeit slowly, and there are plenty of people who've noticed it. Even Greg K-H knows that there is a problem, although he's ascribed the cause to the wrong reasons.
Sorry, I just don’t agree with you. I think you’re confusing your lack of comfort with the development model with “quality” or supposed lack thereof. By what metric are you measuring quality?
But you're confusing quality with success. If success were a measure of quality, then XP would clearly be the best operating system ever written.
No, actually, I wasn't – just showing you that it is perfectly possible for people to contribute and maintain drivers. The sheer number of people doing it suggests that it isn't as onerous a task as you conclude. Faced with that fact, you want to turn this into a discussion about quality instead of about how difficult it is (or isn't) to maintain a driver. It sounds like (correct me if I guess wrong) your own experience was maintaining _out of tree_ code, not code included in the mainline kernel. There is no doubt that's harder than maintaining in-tree code. Perhaps that is what is coloring your opinion.
There are two reasons why Linux supports so much hardware. First, the support is often quite poor, but still counted as "support". Second, there has been a massive brute force effort to make it do so.
Sorry, I can’t swallow that knowing that Linux has become one of the most popular embedded O/S’s and runs more supercomputers than _any_ other OS. That just _wouldn’t_ happen if there were huge quality and maintenance issues.
We will probably just have to agree to disagree. I think you not only have the facts wrong today, I also think you're discounting the ability of Linux developers to resolve any _real_ problems that are identified.
Sorry that’s just BS. Number one, those Unix boxes weren’t running on PC hardware.
So what – you're saying it's the PC hardware that makes Linux good?
Secondly, Linux started out as a project of one man.
So did Unix.
It developed without the resources thrown at many other operating systems. However, that’s changing and now we see massive investment and many full time programmers dedicated to it and the results shouldn’t be dismissed.
“many” full time programmers have been dedicated to Linux since 2000, if not earlier.
The next release will contain support for virtualization built right into the kernel and there have been a _huge_ number of recent changes advancing its scalability. Your opinion of Linux seems dated.
I know all about kvm. And how it's a return to forty-year-old IBM technology. It may shock you to hear this, but virtualization has been around for a long time.
It sounds like (correct me if i guess wrong) that your own experience was maintaining _out of tree_ code, not included in the mainline kernel.
You guess wrong.
There are two reasons why Linux supports so much hardware. First, the support is often quite poor, but still counted as "support". Second, there has been a massive brute force effort to make it do so.
Sorry, I can’t swallow that knowing that Linux has become one of the most popular embedded O/S’s and runs more supercomputers than _any_ other OS. That just _wouldn’t_ happen if there were huge quality and maintenance issues.
Well, it's debatable how "popular" Linux is for embedded development, but again, you're confusing popularity with quality.
Also, as someone whose supercomputer history goes back to the time of the CDC Star (about which we used to joke that its biggest problem was that its mean time to failure was shorter than the time it took to boot it), you're going to find me very unimpressed by attempts to conflate "quality" and "supercomputing."
Linux is a fine example of an early-'90s operating system. It's a pity that tens of thousands of man-years have gone into making it no more than that, but that's what it is.
By the way, I don't have hard numbers, but I speculate that over 50,000 man-years have gone into Linux development, with at least half of that done by paid programmers.
(Back of the envelope: there were roughly 1,000 people at OLS in '05. Approximately half of those were on company time (thus the guesstimate of half by paid programmers). It is rare for more than 1 in 10 people in a field to go to a symposium, so that means there were at least 10,000 people programming Linux in '05. Five years at that rate is 50,000 man-years. That's a conservative estimate.)
and that’s just kernel developers.
So did Unix.
Sure… all you had to do was ignore that he was paid full time to develop it, and that it was subsequently maintained and developed by a large corporation. Nicely done.
“many” full time programmers have been dedicated to Linux since 2000, if not earlier.
And we’re now starting to see real progress as evidenced by the technologies being integrated into the kernel, along with better debugging and quality control tools that didn’t exist before.
I know all about kvm. And how it’s a return to forty year old IBM technology. It may shock you to hear this, but virtualization has been around for a long time.
Indeed it has, but your "historic" view misses the fact that it's a relatively _new_ capability of modern PC hardware. Linux is handling that technology rollout without a problem. Again, with your sweeping old-timer view you seem to _miss_ the fact that Linux is quite healthy and able to adapt.
First, the support is often quite poor, but still counted as "support". Second, there has been a massive brute force effort to make it do so.
First, your repeating it doesn't make it fact. Second, it doesn't matter how much "brute force" was involved; the _results_ are still positive.
Well, it's debatable how "popular" Linux is for embedded development, but again, you're confusing popularity with quality.
Again, I'm not confusing anything. I'm telling you that the _results_ speak for themselves. Your doom-and-gloom "analysis" doesn't carry the same weight. You had to change the conversation and completely _avoid_ discussing your original claim that it was difficult to maintain a kernel driver, and inject some nonsense about "quality" instead. But since you seem to recognize how successful, or in your terms "popular", Linux is, it should tell you that whatever it is you're worried about just doesn't have that much _practical effect_ on the end results. Which is why there's just no reason to spend much time worrying about it.
That’s a big “if”. The reality is that you spend money on documentation
If one were even a half-competent programmer, this would be done automatically during the development of the hardware and of the software/drivers required to run it – this is Programming 101: write the documentation as you do the work, not as an afterthought.
No wonder today's programs are a bloody minefield of crap, with cavalier attitudes towards documentation and process during the design and development of programs.
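For what it's worth, the kernel tree has a convention for exactly this: kernel-doc comments written next to the code, which scripts extract into the generated documentation. A minimal sketch, with a hypothetical function:

```c
/**
 * mydev_set_rate - program the device's sampling rate
 * @dev: device state returned by mydev_alloc()
 * @hz: desired rate in hertz, clamped to the hardware's limits
 *
 * Must be called with the device stopped.  May sleep.
 *
 * Returns 0 on success or a negative errno on failure.
 */
int mydev_set_rate(struct mydev_state *dev, unsigned int hz);
```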
you spend money on doing release engineering for source that you don’t spend doing release engineering on binaries
What "release engineering"? It's source code! Just upload the damn thing to CVS; I mean, Jesus F Christ, two cents' worth of electricity would probably be used in the upload process!
More gobbledygook from the ivory-tower sitters to justify their jobs, and to justify the "privileged position" of only a few "elite" having access to the source code.
you spend money on the person coordinating the volunteers
You would have a volunteer do that; that's how other projects are run, and you don't see them falling over in a screaming heap.
you spend money on answering questions from the community that you wouldn't need to answer.
If your documentation weren't half-assed and half-baked, people wouldn't need to ask questions.
In return you get 4 volunteers who don’t have good access to your hardware people, who don’t care about your development standards, who don’t understand the limitations forced on you by having to support multiple OSes, who aren’t motivated to do testing beyond what suits their immediate needs, and who tend to be far more enthusiastic than talented.
Interesting that you point out "support for multiple operating systems", and yet it seems to be your camp that suffers most from being sucked into the abyss of programming crap that is the Win32 API – programming for Windows first, then everything and everyone else later on down the track.
If you lived in an open-source-only world, maybe the volunteers would save you money, but they're just duplicating work you'd have to do and test for your majority customers, anyway.
And frankly, the sales to open source customers that you’d generate as a result wouldn’t make up the difference.
Based on what evidence? If Microsoft had the same source code as everyone else, and the driver were dual-licensed under BSD/GPL, Microsoft would have the benefit of being able to fix problems with drivers as their customers stumbled across them, rather than simply saying, "Well, it's not our fault, we don't maintain that driver!"
Driver development should be a community effort; the programming fraternity, however, sees this as a last-ditch attempt to maintain their status as the "elite", keeping underlings like me coming cap in hand, pleading that a fault with the driver might actually get fixed within the next decade!
What "release engineering"? It's source code! Just upload the damn thing to CVS; I mean, Jesus F Christ, two cents' worth of electricity would probably be used in the upload process!
More gobbledygook from the ivory-tower sitters to justify their jobs, and to justify the "privileged position" of only a few "elite" having access to the source code.
I showed your remarks to a bunch of firmware developers. When they stop laughing, I’ll get some quotes to reply with.
Meanwhile I’ll simply say that describing driver writers as ‘ivory tower sitters’ who think of themselves as ‘elite’ merely demonstrates that you have no idea of the dynamics of software development.
You really have no idea of the economics of software development, or of its sociology, but you’re not interested in learning about it either.
There are many reasons for not open sourcing. You spend all your time working for free, and Novell makes money off your software and makes a protection deal with Microsoft, and you get nothing.
Wait…didn’t you defend the BSD license in an earlier thread?
You do realize that someone releasing their software under the BSDL could very well have someone profit from their work and get nothing in return, right?
Gee, I didn't know there was a lot of money to be made selling hardware drivers. Silly me!
I don't know why you say you hate RMS or why you call him crazy, but if I recall, it wasn't a printer driver that caused him to write the GPL or start the FSF – although I'm sure it played a part. As I recall, the GPL was RMS' reaction to Gosling's close-sourcing and privatizing of the patches he did to improve Emacs. The de facto modus operandi up to that point had been to share changes with the community; however, because there were no binding agreements, the resulting free-for-all system of sharing patches produced, in essence, something comparable to a prisoner's dilemma, in that a non-cooperating agent can reap benefits greater than – and at the expense of – the cooperating agents. RMS' GENIUS was to recognize that process and to figure out a way to ensure cooperative behavior among participating agents by creating a license that required mutual cooperation.
RMS may act (and look) different from the vast majority of people we know and respect, which probably makes him an all-too-easy target of ridicule; however, there really is no need to caricature him as "crazy" when all he has done – all he has ever done – is care enough to put everything he has on the line to help others reap the benefits of sharing software. How that makes him loathsome to some I shall never understand.
With BSD, Apache and Mozilla you can see the benefits of sharing code. The difference is, this shouldn't be forced on people. If people want closed source, fine! Let them learn the hard way when companies fail to update their software. But that is their choice, and if you really think that open source will beat out closed source in the long run, you don't need a bunch of lawyers to make it so.
What truce?
If you believe that free/open is beneficial, then why would you want anything else?
Because you'd like your "free/open" system to boot (BIOS), run its devices (firmware blobs), and maybe even do decent graphics (driver blobs).
Unless you're running a box with an open source BIOS, or some embedded box booting entirely via RedBoot or some other open source boot loader, *and* you don't have any devices that have loadable firmware, *and* you don't want high-end graphics support, you're running a compromise system and have made your own truce.
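To make the firmware-blob point concrete: even a fully open driver often just shovels an opaque vendor blob into the device at runtime through the kernel's firmware loader. A minimal sketch, with hypothetical device and file names:

```c
#include <linux/firmware.h>
#include <linux/device.h>

static int mydev_load_firmware(struct device *dev)
{
	const struct firmware *fw;
	int err;

	/* Asks userspace to locate e.g. /lib/firmware/mydev.bin;
	 * the contents are a closed vendor blob either way. */
	err = request_firmware(&fw, "mydev.bin", dev);
	if (err)
		return err;

	/* ... copy fw->data (fw->size bytes) into the device ... */

	release_firmware(fw);
	return 0;
}
```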
Articles like this and constant debate about the legalities of mixing binary and open source code are really getting tiresome. People who want to do it should be (and I believe are) free to do so.
However, personally I'm not interested in doing so; I just want to see how far open source software can go on its own, and I'm happy to live within its limits. For me, there's no reason to use a hybrid system. If I wanted to use an OS that had a bunch of proprietary features, I'd use Windows; it's a good operating system.
So no matter what the legal experts think, hopefully there will be lots of people who continue to use and promote open source free of binary funk.
That's exactly the problem: people still look at Linux as a hobby OS… if you leave it as a pure open source system, it will never survive as a mainstream OS.
If you want it to be a mainstream OS that replaces Windows and OS X, you need to make it a hybrid system, by which I mean the basic and fundamental functions are open source and the add-ons can be closed source.
That's exactly the problem: people still look at Linux as a hobby OS… if you leave it as a pure open source system, it will never survive as a mainstream OS.
It’s not a hobby, I use it for real work every day as do many businesses. It’s a ridiculous and uninformed argument that without binary modules Linux is just a hobby OS.
If you want it to be a mainstream OS that replaces Windows and OS X, you need to make it a hybrid system, by which I mean the basic and fundamental functions are open source and the add-ons can be closed source.
There’s no sound reason to try to become mainstream before we’re ready. I think this whole notion that we must do everything and at any cost to become “mainstream” is doing Linux and its community more harm than good. To me a device driver is a fundamental function that should be as open source as the rest of the O/S.
Edited 2006-12-28 20:43
You defined it as a hobby OS by your earlier argument…
You said "I just want to see how far open source software can go on its own, and I'm happy to live within its limits."
By your argument, you look at Linux as a research experiment or hobby and you want to see how far it can go… etc.
Later in your argument, you emphasized your vision of Linux as a hobby and Windows as a business OS by saying
"If I wanted to use an OS that had a bunch of proprietary features, I'd use Windows; it's a good operating system."
If you're waiting for Linux to become mainstream with a pure open source implementation, you'll wait forever.
By your argument, you look at Linux as a research experiment or hobby and you want to see how far it can go… etc.
How on earth did you get that out of what I said? I didn't say I want to see how far Linux can go as a hobby OS. Maybe it's your skewed view that binary modules are what define a non-hobby OS that makes you get confused when you read sentences from others that don't mention the word hobby at all.
Later in your argument, you emphasized your vision of Linux as a hobby and Windows as a business OS by saying…
Wow, two for two… You imagine _again_ that someone who doesn't want to use proprietary binary modules only wants to use the OS in a "hobby" fashion. All your arguments are predicated on the mistaken belief that binary modules are necessary for Linux to be anything but a hobby OS. You just couldn't be more wrong.
If you're waiting for Linux to become mainstream with a pure open source implementation, you'll wait forever.
Did you read my post at all? I'm _NOT_ waiting. In fact, I'm suggesting that people are putting way too much emphasis on this as an issue.
It's such a good topic, spread over 21 pages. Maybe I'm getting old, but click, read a paragraph, click is way too annoying.
Press the *print* link at the bottom to see the article on one page.
Some people just want to have stuff working. They have a Radeon, and they want Beryl. The open source stuff won't work too well (it does work, but it's not usable at all due to the speed).
Some people think it's better to have open source drivers, but due to their absence or their lack of performance, they choose a closed source variant.
Yes, they have a legitimate choice to do so. So a module is tainted – so what, they say.
Of course, if I can choose between ATI's fglrx and radeon, I use the latter, but only if the performance allows it. If the performance were roughly the same, it wouldn't be a problem.
So it's what many users just want/need.
It would be wrong and harmful if it were forbidden.
Nobody forgot anything.
Yes, they have a legitimate choice to do so. So a module is tainted – so what, they say.
Sure they have the right. That doesn’t make it a good choice or one that people who care about open source should support or promote.
By the way, the open source r300 driver handles a Radeon X700 (for example) with beryl/compiz very nicely. While it might not be good enough for the demands of a die hard gamer, it’s surely good enough for the vast majority of users.
The case that proprietary modules are even needed in the first place is often overstated by a large margin.
It isn't, and it can't be, forbidden to USE a binary driver. Similarly, it isn't forbidden to create a Linux fork with a stable driver API to make it easier (cheaper) to develop binary drivers.
It's not the users' freedom to run whatever junk they want on their machines that is the issue.
The whole issue is about the balance between proprietary and free software. Copyright tips the balance towards the proprietary, and the GPL (copyleft) tips it towards free software.
Free software proponents envision a world where software is free to be studied, altered and shared. As long as the legal system makes it more profitable to make software that isn't free to be studied, altered or shared, there has to be a balancing force to counteract this.
So by making it cheaper to create free software than non-free software, we hope that the market eventually moves to a model where free software is the dominant way to share computation services.
Thus the issue is that once we begin to talk about ways to make proprietary software easier (cheaper) to produce, we are directly counteracting the chosen strategy.
I have nothing against hardware companies that supply closed source drivers. It is their choice. However, if I have a choice I would use hardware with open source drivers.
Why? If the driver is open source and in the Linux kernels I can be confident that it will work on new Linux kernels, and over an extended period of time.
In a worst-case scenario I could learn how to modify it,
or hire somebody to modify it for me, in case the kernel developers, or whoever is maintaining it, lose interest.
In short, it's my guarantee that my hardware isn't turned into an expensive paperweight overnight, on an OS upgrade, if the hardware vendor decides to drop my particular hardware model.
So hardware vendors that want my money had better supply open source drivers. It shouldn't be that difficult.
It’s not that difficult, really, hardware vendors are just secretive and lazy.
It's not that difficult, really, hardware vendors are just secretive and lazy.
It’s not just that. They do, like it or not, have the right to keep their secrets. Actually, ‘lazy’ doesn’t apply. It takes more work to keep a secret than to let it out into the wild.
I'll be honest, I don't like it. I'd rather have open drivers. But I'll never lower my own personal standards to the point where I put myself in the position of forcing *anyone* to give up their rights (business or not, rights are rights). And you should be wary of anyone who is on this bandwagon.
I'll be honest, I don't like it. I'd rather have open drivers. But I'll never lower my own personal standards to the point where I put myself in the position of forcing *anyone* to give up their rights (business or not, rights are rights). And you should be wary of anyone who is on this bandwagon.
You don’t have to force anyone to give up their rights, and you couldn’t if you wanted to. The point is to create conditions where people see the benefits of open source solutions and start to create a market demand. Business will _choose_ to deliver open source solutions given enough demand. It’s already starting to happen with the likes of IBM, Intel and other hardware suppliers. But we’re just at the very start of the process; it will take some time, not force.
Can’t work.
The open source community moves too fast compared to our commercial competitors. We are not going to freeze the kernel API because some company doesn’t want to update its driver.
The whole idea behind corporate intellectual property in the USA is write it once, then never fix it and patent it. Sue everyone if they make anything similar. Meanwhile, only fix it if you can charge ridiculous support rates per “incident”.
The open source model doesn’t work that way, because it relies on cooperation from users and the engineers at a fairly low level to fix problems.
The open source model also doesn’t have a dividing line between user and engineer. Any customer can become an engineer; after all, they have the source code.
I think the efficiency of such a relationship speaks for itself.
Open Source engineers also have a love of the software we write, so we tend to care about bugs more than our commercial counterparts.
That means we need open access for our customers to help us with those bugs, if they decide to do so, and we need the ability to debug the software in the wild sometimes.
After all, you can only test so much in the lab, and sometimes customers have great labs they run the software in… also known as their organization.
-Hack
============The open source community moves too fast compared to our commercial competitors. We are not going to freeze the kernel API because some company doesn’t want to update its driver.===========
Having a stable driver API would in *NO* way put Linux development at a disadvantage. One could argue that *MORE* drivers might be written for Linux if there were a well-documented, stable API.
I’ve been a programmer for eighteen years now, and I just can’t believe that someone would even begin to make this claim.
============Having a stable driver API would in *NO* way put Linux development at a disadvantage. One could argue that *MORE* drivers might be written for Linux if there were a well-documented, stable API.===========
Some pretty smart people disagree with you; see the “stable_api_nonsense.txt” document in the kernel source tree: http://tinyurl.com/y7lm5c
While you’re right that it might be a bit easier to maintain drivers if there were a stable API, that’s only one half of the equation. You must also consider how easy it is to change _core_ kernel code to incorporate new features. What ends up happening is that the _drivers_, by virtue of demanding a stable API, end up holding the overall system back and making it much harder to change code.
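To make that maintenance burden concrete, here is a minimal, hypothetical sketch of an out-of-tree driver fragment. The device, name and IRQ number are invented; the API change it guards against is real: kernel 2.6.19 removed the pt_regs argument from interrupt handlers, and every out-of-tree driver had to grow a version guard like this to keep compiling across releases.

/* Hypothetical out-of-tree driver fragment; the device, IRQ line and
 * names are invented, but the guarded API change is real: 2.6.19
 * removed the pt_regs argument from interrupt handlers. */
#include <linux/version.h>
#include <linux/module.h>
#include <linux/interrupt.h>

#define EXAMPLE_IRQ 10 /* hypothetical IRQ line */

#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 19)
static irqreturn_t example_isr(int irq, void *dev_id)
#else
static irqreturn_t example_isr(int irq, void *dev_id, struct pt_regs *regs)
#endif
{
        /* acknowledge the (hypothetical) device here */
        return IRQ_HANDLED;
}

static int __init example_init(void)
{
        /* the flag itself was renamed too: SA_SHIRQ became IRQF_SHARED
         * in 2.6.18, another change out-of-tree code has to track */
        return request_irq(EXAMPLE_IRQ, example_isr, IRQF_SHARED,
                           "example", (void *)example_isr);
}

static void __exit example_exit(void)
{
        free_irq(EXAMPLE_IRQ, (void *)example_isr);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

An in-tree driver never needs these guards: whoever changes a core interface fixes every in-tree user in the same patch, which is exactly the trade-off stable_api_nonsense.txt describes.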
============Some pretty smart people disagree with you; see the “stable_api_nonsense.txt” document in the kernel source tree: http://tinyurl.com/y7lm5c===========
I’ve debunked that before here. Search the archives for an explanation of what’s wrong with those arguments.
============I’ve debunked that before here. Search the archives for an explanation of what’s wrong with those arguments.===========
Please see the archives where people showed that they actually make a lot of sense.
Linux continues to gain acceptance and support in industry, so I have a hard time worrying about any of the doom-and-gloom written in this thread. Hardware companies like Intel continue to increase their support of Linux and offer open source drivers for their hardware.
Things are pretty damn good for Linux these days, it’ll be interesting to see just how much better it gets :o)
Edited 2006-12-29 10:01
============The open source community moves too fast compared to our commercial competitors. We are not going to freeze the kernel API because some company doesn’t want to update its driver.===========
Perhaps if you slowed down you’d stop running around in circles and make some progress out of all that Brownian motion.
============Open Source engineers also have a love of the software we write, so we tend to care about bugs more than our commercial counterparts.===========
Nonsense. Most “Open Source engineers” make their living as commercial developers. Are you suggesting that those of us who do that are unethical enough to care less about bugs in the software we’re paid to develop than we do about bugs in the software we’re not paid to develop?
After more than thirty years developing software, both open and closed, I’ve found that the reasons for bugs are different in the two worlds, but that neither is particularly more bug free than the other.
Opensource drivers are the only reason I can have an Opteron running in amd64 mode without having to worry about whether component x or y is supported. Or why I can run an Alpha machine with off-the-shelf network (Realtek) and graphics cards (ATI).
So, open-source drivers not only give us the warm fuzzies about being “free”, but also reassure us that our hardware can continue to work over different versions of the OS, and enables us to (mostly) think in an architecture independent way.
Now, closed-source drivers are a necessary evil, especially on the desktop. But if vendors don’t want to provide us with the benefits of having open-source drivers, we also don’t have to relieve them from the pain of maintaining a driver outside of the main tree. It’s quid pro quo, they choose what’s best for them, and we choose what’s best for us.
Let’s not forget that Linux is mostly irrelevant on the desktop, but a big player on the server side. Well, opensource drivers are actually much more important there, because hardware doesn’t become obsolete as fast as on the desktop, and the “everything works right out of the box” experience is very important (and opensource drivers tend to be way more stable than their closed source counterparts).
Love the on-the-fence discussions nowadays.
Linux making it into the mainstream? … Maybe not.
Everyone uses KDE and GNOME but they are at a standstill? … Maybe not
Take OSAlert editorials seriously from now on? … Maybe Not
Sorry, but I completely disagree with the author. 3rd party kernel modules taint the kernel – to allow them is to basically thumb the nose at the GPL and all it stands for.
I do NOT see why Open Source developers have to bend to suit the rich 3rd party corporations, let them bend instead. They want a piece of the Linux pie, then they have to start working with the ‘movement’, not just doing a push and shove to get their own way.
Dave
============3rd party kernel modules taint the kernel – to allow them is to basically thumb the nose at the GPL and all it stands for.===========
I disagree. As long as the combined kernel is not redistributed, it complies with the GPL and does not go against what it stands for.
The GPL is a copyright license. You can add a proprietary module to a GPL kernel just like you can make changes to a GPL program and not release the changes… as long as you don’t redistribute it.
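For what it’s worth, the kernel itself encodes this distinction. Here is a trivial sketch of a module (not any real vendor’s driver): every loadable module declares a license string via MODULE_LICENSE(), and declaring anything non-GPL-compatible simply sets the kernel’s “tainted” flag at load time and hides symbols exported with EXPORT_SYMBOL_GPL(); the module still loads and runs.

/* Minimal module sketch illustrating license tainting; the name and
 * messages are invented for illustration. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
        printk(KERN_INFO "hello: loaded\n");
        return 0;
}

static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

/* "GPL" here keeps the kernel untainted. A string such as
 * "Proprietary" (or any unrecognized one) would instead set the
 * TAINT_PROPRIETARY_MODULE flag and hide EXPORT_SYMBOL_GPL symbols
 * from this module, but the kernel would still load it. */
MODULE_LICENSE("GPL");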
Look, I would certainly prefer that ATI/Nvidia open-source their drivers, and I fully support petitioning them until they do, but I’m not going to stop using their binary drivers in the meantime. That’s all there is to it.
… because, if you expect hardware manufacturers to give away their IP within open source code, they’re not going to do it — so you’d better get used to using reverse-engineered drivers that lag their proprietary counterparts by many months.
============… because, if you expect hardware manufacturers to give away their IP within open source code, they’re not going to do it — so you’d better get used to using reverse-engineered drivers that lag their proprietary counterparts by many months.===========
Businesses exist to sell things CUSTOMERS want. The more people who demand open source access to the hardware they purchase, the more businesses there will be who find a competitive edge by delivering it.
The movement towards open source is very young with many people still locked into thinking along old lines. In the long term, the advantages of open source will win over many more converts than exist today. Business is sure to follow its customers, and supply the demand.
Edited 2006-12-29 11:53
I’m not saying you shouldn’t demand open source access. But demanding and getting are two separate things. Meanwhile, unless you don’t care about getting work done, you’re going to come up empty, so you might as well seek a compromise. Or is compromise only for more reasonable people?
Look back at my posts. I’ve said that if you _must_ use binary drivers, by all means go ahead; I’ve also recognized that it’s legal. But that’s a lot different from promoting them and telling people it’s a _good_ compromise. It’s not in the best interest of a sustainable Linux. Frankly, I think the need for binary modules is greatly exaggerated. Very few people actually need bleeding-edge graphics, for instance. As an example, Compiz/Beryl works just wonderfully for me with open source drivers.
The reason most people give for _promoting_ binary modules is this notion that we must do anything and everything at any cost to convince millions of people to start using Linux soon. I think while this is done out of the best of intentions, it actually does way more harm than good.