Yesterday, the ninth Firefox 4.0 beta was released. One of the major new features in Firefox 4.0 is hardware acceleration for everything from canvas drawing to video rendering. Sadly, this feature won’t make its way to the Linux version of Firefox 4.0. The reason? X’s drivers are “disastrously buggy”. Update: Benoit Jacob informed me via email that there’s some important nuance: hardware acceleration (OpenGL only) on Linux has been implemented, but due to bugs and issues, only one driver has been whitelisted so far (the proprietary NVIDIA driver).
Boris Zbarsky, a long-time Mozilla developer, commented on the issue over at Hacks.Mozilla.org. The release notes for Firefox 4.0 beta 9 state that it comes with hardware acceleration for Windows 7 and Vista via a combination of Direct2D, DirectX 9 and DirectX 10. Windows XP users will also enjoy hardware acceleration for many operations “using our new Layers infrastructure along with DX9”. Furthermore, Mac OS X has excellent OpenGL support, they claim, so they’ve got that covered as well.
No mention of Linux, and there’s a reason for that. “We tried enabling OpenGL on Linux, and discovered that most Linux drivers are so disastrously buggy (think ‘crash the X server at the drop of a hat, and paint incorrectly the rest of the time’ buggy) that we had to disable it for now,” explains Zbarsky, “Heck, we’re even disabling WebGL for most Linux drivers, last I checked…”
It’s not all bad, though. “If your drivers are decent (some of the closed-source ones can be, nouveau can be sometimes), you do get something akin to Direct2D on Linux through XRender, though,” Zbarsky adds, “So while you don’t get compositing acceleration, you do get faster canvas drawing and the like. drawImage, for example, can be much faster on Linux than on Mac. But only if you manage to find a driver and X version that happens to not suck…”
Yeah, good luck with that – I’ve been trying to find that magical combination for a long time. But I digress.
He further requests help from Xorg developers and distributors on this issue, since Mozilla is still working on it for future releases. In other words, if you happen to know people from those parts, be sure to let them know about the difficulties the Firefox team is apparently having with X; maybe they can help out, give advice, and so on.
By the way, Direct3D appears to be the saving grace for Windows here, as Zbarsky notes in another comment. “Sadly enough, GL drivers on Windows aren’t that great either,” he notes, “This is why WebGL is done via Direct3D on Windows now… But that’s mostly a matter of performance issues.”
“He further requests help from Xorg developers and distributors on this issue, since they are still working on it for the future. In other words, if you happen to know people from those parts, be sure to let them know about the difficulties the Firefox team is apparently having with X. ”
Please, please – don’t bother. The fact that the OpenGL implementations in current X drivers for many cards are buggy is hardly news to anyone, least of all the developers. Inundating them with ‘OMG WHERE’S MY FIREFOX ACCELERATION U SUCK!’ messages is not going to help.
Sorry, I should’ve worded that better. Fixed it in the article.
nope, but moving to other browsers that perform decently will;)
i went for chrome long ago and will never go back to the bloat called firefox
You can change browsers, but not the fact that drivers and X are buggy beyond belief. They won’t just magically work for Chrome.
Bloat as in… memory usage, for example? Then you better look back, because Firefox actually needs less.
Well, there is a chance they will.
Linux OpenGL implementations are not very different from what we (used to?) have with html+css+… implementations. They are a buggy, inconsistent mess but if you know the safe path across the minefield you can still produce a working product. Sometimes the obvious path is not the “proper” one.
It’s likely that Mozilla guys are performing some operations that don’t match the semantics of underlying layers well (after all it’s a multiplatform program). Such corner cases are more likely to have bugs or suffer from poor performance. This of course is not an excuse for guys producing these bugs but I can easily imagine another application doing the same things differently and managing to work these bugs around.
Yep, indeed: with WebGL we are basically exposing 95% of the OpenGL API to random scripts from the Web. So even “innocuous” graphics driver bugs can suddenly become major security issues (e.g. leaking video memory to scripts would be a huge security flaw). Even a plain crash is considered a DOS vulnerability when scripts can trigger it at will. So yes, WebGL does put much stricter requirements on drivers than, say, video games or compiz.
But is it the job of Firefox to shield users from blatant (security) bugs in the underlying OpenGL implementations, neglecting the bug-free implementations in the process?
Rather, more use and exposure would motivate the driver developers to fix their buggy drivers.
Perhaps a blacklist could be implemented, notifying users that their driver is buggy and that Firefox will run unaccelerated? This would raise awareness without negatively affecting the “good systems”.
Edit: I see you have already implemented a blacklist. But perhaps notifying the user would still be a good idea?
First of all, if an implementation is shown to be ‘bug-free’ then we’ll gladly whitelist it in the next minor update.
And yes, it is our job to shield the user from buggy drivers, buggy system libraries, whatever. You don’t want to have to wait for your OpenGL driver to be fixed to be able to use Firefox 4 without random crashes.
That would be nice, but we also need to be able to ship Firefox 4 ASAP without lowering our quality standards.
This is information of a very technical nature that most users won’t know how to act upon. For technical users, we *are* already printing this information in the terminal.
Bummer. I forgot WebGL is involved. That indeed complicates things “a bit” as you no longer fully control which parts of the OpenGL API get used.
Perhaps a more graceful solution would be to selectively white/blacklist parts of the WebGL API, or WebGL itself.
There’s a major difference between the two if you use a sane (process-oriented) design: a bug in an HTML (etc.) component only crashes a tab, or at worst the web browser (if poorly designed), while a bug in an OpenGL driver can crash the *whole* computer, and it is much, much more complex to debug, especially with hardware acceleration; and without hw acceleration OpenGL isn’t very interesting!
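A minimal sketch of that isolation argument (plain POSIX C, nothing browser-specific; the “tab”/“browser” naming is made up):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* A crash in a child process (one "tab") does not take down
     * the parent (the "browser"). */
    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0)
            abort(); /* child: simulate a renderer hitting a fatal bug */

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("tab process died (signal %d), browser still running\n",
                   WTERMSIG(status));
        return 0;
    }

A driver bug, by contrast, lives below the process boundary, so no amount of process separation in the browser can save you from it.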
Chrome is consistently more responsive than FF on any computer that I’ve used (I suspect that it is thanks to its multi-process design).
That’s probably why the GP said that and I agree with him.
As I already said, there’s a difference between unresponsive and being bloated.
Not all lean software is responsive. A single-threaded design where UI rendering is on the same thread as the number-crunching algorithms (like Firefox’s, though thankfully they’re working on that) is all it takes to make software unresponsive, no matter how well the rest is coded.
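To illustrate the design point (a generic sketch, not Firefox’s actual code): the heavy work simply has to move off the thread that services the UI.

    #include <pthread.h>

    /* Long-running work belongs on a worker thread so the thread
     * driving the UI never blocks on it. */
    static void *crunch(void *arg)
    {
        (void)arg;
        /* ... heavy number-crunching here ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, crunch, NULL); /* work leaves the UI thread */
        /* ... the event loop keeps handling input and repaints here ... */
        pthread_join(worker, NULL);
        return 0;
    }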
Bloat is not exactly the best term when trying to make a Firefox vs Chrome comparison that favours Chrome. Firefox is now nearly the mainstream web browser that consumes the least memory, AFAIK, while Chrome would be near the top with its multi-process model.
Being more responsive does not equate to being less bloated. Vista x64 is probably very responsive on a machine based on one of those upcoming Bulldozer CPUs from AMD, backed by 16GB of DDR4 RAM, 4 Vertex 2 SSDs in RAID 0, and an SLI setup of 4 high-end graphics cards. That wouldn’t make it less bloated. Responsiveness depends on proper use of threads and having powerful hardware underneath, not so much on how heavy software is (except when you go the Adobe way and make software so heavy that your OS constantly has to swap data in and out while you run it because your RAM is full).
But even if his word usage was wrong, it’s hard to argue with “it feels like crap after a while”, regardless of how well-coded all the stuff that makes it run like a turd may be.
Geez, so many fickle users out there. Most don’t appreciate even a little that it was Firefox that stirred up the browser wars, when the alternatives were a sluggish Netscape and an anti-standards IE. So your Firefox is ‘sluggish’? Sounds like you have other issues on your system too. My primary box is a six-year-old P4 and Firefox launches/views pages pretty well.
Also don’t forget Google only offered Chrome to Windows users for quite a while, leaving Linux users with a somewhat supported ‘build your own’ option of Chromium. Their excuse was a public statement about how it was too difficult and problematic to offer Linux or OS X versions. Yet Firefox and Opera have been popping out concurrent versions for multiple platforms for years. (OK, well Opera has been concurrent version-wise only recently, but their developers are too busy innovating unique ideas that other browsers pick up on.)
So on a topic about a mature multi-platform browser that is having a big problem with Linux….
….you complain that a new browser didn’t immediately provide a well-functioning browser for Linux?
ooooh, the irony. Developing multi-platform software is very difficult, especially if you cannot rely on the underlying platform, have cutting-edge technology requirements, or depend on other platform-specific bits.
They call it lobbying.
The more media presence the problem gets, the more chances on a speedier solution.
Squeakiest wheel gets the grease.
Oh no… But that was expected.
Couldn’t the OpenGL mode be enabled on a whitelist basis? I thought the NVIDIA proprietary drivers were pretty good as far as 3D is concerned (you can run pretty recent games under Wine, for instance)?
I thought the situation was pretty good, with the Video Acceleration API, compositing & 3D accel all working well with NVIDIA cards.
First thing we need is a good way to test.
I think this is the way you can help test it:
http://jagriffin.wordpress.com/2010/08/30/introducting-grafx-bot/
https://addons.mozilla.org/en-US/firefox/addon/grafx-bot/
But on my machine it did not seem to want to be enabled, or it would just crash and burn (I used a second Firefox profile to even get that far).
Here are some of the results:
http://jagriffin.wordpress.com/2010/09/22/grafxbot-results-update/
UPDATE: OK, just tried again and now I do seem to be able to do a proper test run with a separate profile (it is actually recommended now).
Also, for WebGL (which is enabled on Linux if your driver is whitelisted), the best way to test is to run the official WebGL conformance test suite:
https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/te…
Click ‘run tests’, copy your results in a text file and attach it to this bug:
https://bugzilla.mozilla.org/show_bug.cgi?id=624593
and of course tell us the precise driver, version, Xorg version, kernel version that you’re using.
If a driver can pass almost all these tests (and doesn’t crash running them…) then it’s quite probably good enough and we should try to whitelist it!
Looking forward to enabling the whitelist once we get more data. It must be said that the above WebGL test suite is, AFAIK, the first time Khronos has published a complete, public test suite for a *GL standard. I hope to convince developers of GL drivers to use it to test their drivers against.
Yep, the NVIDIA proprietary driver is pretty good for us, that’s why it’s whitelisted: see my other comment on this story, http://www.osnews.com/permalink?458166
What about the proprietary drivers from NVIDIA and ATI? Are they buggy too?
Is there a way to manually turn them on/off?
The NVIDIA proprietary driver is not buggy for what we are doing (which is pure OpenGL). We are enabling hardware acceleration on X with the NVIDIA proprietary driver. So the title of this OSAlert story is inaccurate.
The FGLRX driver is crashier; it’s blacklisted at the moment, but this could change (hopefully everything will change).
Yes, you can turn the whole driver blacklisting off by defining the MOZ_GLX_IGNORE_BLACKLIST environment variable. Just launch firefox with this command (you can use it in the properties of your desktop icon, too):
MOZ_GLX_IGNORE_BLACKLIST=1 firefox
We did this blacklisting to put an end to the endless series of Linux crashes that were caused by buggy graphics drivers and were causing lots of grief among Linux users (“Firefox 4 is crashy!”). This was the top reason for crashiness on Linux.
We are looking forward to un-blacklisting drivers as soon as they get good enough, see the discussion in this bug (scroll down past the first comments sent by an angry user):
https://bugzilla.mozilla.org/show_bug.cgi?id=624593
While I really like the fact that this is a runtime and not a build time choice, why do it as an environment variable and not in about:config?
An environment variable requires that .desktop files for menus and other forms of UI launcher be modified, or that some system- or user-level environment script be modified.
Especially since I read in one of your other comments that another related feature is switchable through about:config.
Really, it’s just because we’re in a rush now and an environment variable switch can be implemented in 1 line of code (while an about:config switch is, say, 5 lines of code).
Heck, ideally I would like the existing about:config force-enable switches to circumvent the blacklist. But that was harder to implement due to where the GLX blacklisting is implemented.
Eventually yes it’ll be in about:config.
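(For illustration, the environment-variable switch really can be that small; a hypothetical sketch in C, not the actual Firefox code:)

    #include <stdlib.h>
    #include <stdbool.h>

    /* Hypothetical sketch: the override is just a presence check on the
     * variable, performed where the blacklist would otherwise be consulted. */
    static bool ignore_glx_blacklist(void)
    {
        return getenv("MOZ_GLX_IGNORE_BLACKLIST") != NULL;
    }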
I see
Still, if it is about 5 lines, it might result in more people reporting whitelistable combinations.
Excellent!
It is good to see that another big free software application provider (in addition to KDE) is now running into driver bugs holding back implementations of current state-of-the-art interfaces.
Maybe you can share notes on whitelisted/blacklisted combinations with the developers of KWin. They’ve been in this very situation for a couple of months now and might have data which could be useful to you as well.
Actually I’ve gotten in touch with them already, asking for that
http://lists.kde.org/?l=kwin&m=129231532921117&w=2
They didn’t have a blacklist or whitelist already; what came out of this conversation is that we’re doing different stuff than they are.
Ah, interesting read!
I just thought about KWin because their problems with driver status had also made quite some waves, but indeed their needs don’t compare to yours a lot.
How about projects which use OpenGL more than for compositing? GNOME Shell or GNOME/KDE games?
Do you plan on making your blacklist/whitelist public (on some developer page) so other developers could re-use it?
I expect GNOME Shell to be quite similar to KWin in this respect.
You don’t usually bother making a driver blacklist for a game. If a game crashes because of a driver, so be it.
Of course, this is open source.
The (currently very simple, just allowing NVIDIA) OpenGL-on-X driver blacklist is implemented here:
http://hg.mozilla.org/mozilla-central/file/f9f48079910f/gfx/thebes/…
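For the curious, the shape of such a check is roughly this (a hypothetical sketch; the real code in gfx/thebes differs in detail):

    #include <stdbool.h>
    #include <string.h>
    #include <GL/gl.h>

    /* Hypothetical vendor-string whitelist. glGetString(GL_VENDOR) is
     * standard OpenGL and requires a current GL context. */
    static bool gl_driver_whitelisted(void)
    {
        const char *vendor = (const char *)glGetString(GL_VENDOR);
        if (vendor == NULL)
            return false; /* no usable context, stay unaccelerated */
        return strcmp(vendor, "NVIDIA Corporation") == 0;
    }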
Quite likely for the compositing part of it; I was thinking more about the part which does “the desktop”, i.e. which does actual drawing operations, effects, etc.
True, but I wasn’t actually thinking about blacklisting in this context, more like knowing which subset of GL is safe to use across all existing drivers. Might make it possible to distill some whitelisting from that.
I was thinking more in terms of a “page on the developer wiki”
I.e. something other developers might find through (web) searching
I didn’t realize that GNOME shell would do much more than compositing with OpenGL. If Clutter does all its drawing with OpenGL, then indeed that’d be the case. I’ll have a look.
The ‘good’ thing with WebGL is that since it exposes most of the OpenGL API to scripts, it’s universally the hardest thing to support and whitelist. So anything on the whitelist of a browser implementing WebGL will be good enough for practically any other application. So the ‘subset’ you’re looking at is what’s exposed by WebGL, which is almost the same thing as OpenGL ES 2.0.
Yes, a wiki page would be a good thing to write once things settle down a little. Still talking with Xorg devs to see what we can whitelist:
http://lists.freedesktop.org/archives/mesa-dev/2011-January/004877….
The interesting part is that KWin did not run into driver bugs.
The driver bugs ran into KWin.
To understand that, one has to look back at KDE 4.0:
KWin (!) worked quite well on most drivers.
Yet in KDE 4.5 suddenly things did not work. Why?
No, not because of KWin; the code in those areas was mostly untouched since 4.0!
Instead, drivers suddenly said they supported features when this was not the case.
I.e. 4.0: drivers honest –> few KWin problems.
4.5: drivers lying –> many problems.
And the only way around this for KWin is blacklisting, which is a lot of work.
Then just support the most stable driver. I’m pretty sure most people use the NVIDIA binary, so just detect it and enable 3D support.
Adobe supports VDPAU in their Flash beta, so it’s time to stop making excuses and support what’s established, stable and working, which the NVIDIA binary is. AMD binary driver too.
That’s exactly what we are doing: the NVIDIA proprietary driver is whitelisted at the moment. So you get WebGL right away. If you want accelerated compositing too (at the risk of losing the benefit of XRender), go to about:config and set layers.acceleration.force-enabled to true.
If the manufacturers released proper specs for their cards, maybe we could have first-class drivers for Xorg.
Highly likely… sad but true. The blame probably could be directed toward the graphics hardware manufacturers with far more accuracy and truth than the X.org and open-source driver developers.
Or you could put the blame where it really lies, with Xorg. Nvidia’s drivers work better because they basically replace half the Xorg stack.
This state of affairs is rather retarded. If to get good performance and stability you have to replace half the underlying graphics stack, then the graphics stack must be the grand master of suck.
No, sorry, but you can’t blame Xorg. It is an open source project and anyone can contribute. The question is why Nvidia does not contribute to Xorg instead of replacing half the stack in its proprietary driver?
Xorg is an extremely complex piece of software but it is also extremely capable. It is understandable that it has more bugs than the Mac OS X or Windows graphics stacks. MS Office has more bugs than Notepad. Xorg just needs more developers and cooperation from hardware manufacturers.
What if you manufacture a good card but the driver for it sucks? Your product sucks overall. Manufacturers need to put more effort into the software part on Linux. They will lose customers in the long run if they don’t.
Uh, I am certainly blaming Xorg. It’s overly complex, and has too many features that have no real place in today’s computing environment. I use Linux every day, and Xorg is the weak spot in the whole OS: it’s slow, it crashes (not often, but Windows 7 has never crashed on the same computer, nor did Vista). There is a reason that Red Hat and Ubuntu are looking at Wayland, and that is simplicity, reliability and speed.
Wait. You are too quick to dismiss Xorg features as irrelevant. They are relevant to many people. Wayland may be a nice alternative for you but it is still far from being as stable as Xorg. Xorg has problems but it has many strengths. You would not be using it if it had more problems than useful features.
For me there is no alternative to Xorg because I need network transparency. Yes, network transparency is relevant, today. On Windows you have to use a hack like VNC or a product like RDP, which both suck, or buy Citrix, which is also a hack, costs an arm and a leg, and sucks. When you are used to Xorg and NX, this is a huge step back.
Both RDP and VNC are much more usable than Xorg’s network transparency, both over wireless and over the Internet, where Xorg is unusably slow. RDP even supports 3D on Windows 7 and Vista.
People don’t need network transparency, people need network access, which Windows does 100% better than Xorg, and VNC does a better job. FreeNX proves that Linux can provide a proper, usable remote GUI environment, but holding on to this broken functionality is part of the problem with Xorg. You can even use RDP with Linux, using xRDP, which is much more usable than network transparency.
I could not disagree more.
VNC and RDP do not replace Xorg. Only Citrix does, but poorly. With Xorg you can have an application server and administer your applications in a single place. Just let your users connect and use their applications as if they were local. They can resize windows, put them next to their local windows, cut and paste, everything. It is integrated into their desktop. They don’t need another desktop with poor-quality graphics and scrollbars.
FreeNX is nice but it does not replace Xorg either. It depends on it; it is a layer on top of Xorg.
Think about it for a while. We develop so-called RIAs in PHP, JavaScript, jQuery, Java or .NET. RIAs suck compared to what they should be. The web was not designed for that. We use the web for RIAs simply because most internet users use Windows or Mac OS X. We should use X for that.
I know it doesn’t replace Xorg, but with such better replacements for that one complicated part of Xorg’s functionality, it makes more sense to remove it and use a better alternative.
When you say “better alternative”, are you talking about Citrix or are you still saying VNC is an alternative to X?
If you are saying that VNC is an alternative to X, let me say it is not. A lot of people use Citrix because VNC does not work for them.
Obviously that is not your case because VNC is enough for you but I think your attitude is wrong.
I use MS Office to write code and it is overbloated. I don’t need all the formatting crap and the page layout is useless for me. Does this mean MS Office sucks? No! It just means it is not for me. OK, that was a hyperbole and maybe a very bad analogy, but there is some kind of point in there. Why aren’t you using the framebuffer? Or Windows? The framebuffer sucks, I know. GTK used to work on it, but not that well. It has a lot of problems, but you are complaining about Xorg. Xorg has problems and some bugs need fixing, but you will find that the alternatives have their own problems. Maybe the best use of resources would be to fix those problems instead of removing stuff from Xorg that many people rely upon so as to make it like the framebuffer.
Anyway, your attitude of “VNC is enough for everybody” sounds very wrong to me. I use Xorg extensively and my colleagues install Exceed or Xming on their Windows machines even though there is a VNC server on the server, because VNC is not adapted to our usage pattern.
Citrix, RDP, FreeNX… really, VNC is the worst performer of the bunch, but it’s still more reliable than X over busy networks or the internet.
Of the bunch, for your usage pattern, Citrix is useless. FreeNX is the best performing one and that is because it makes use of X. You will find that people are paying good money for Citrix though. Maybe “the people” care about desktop integration, despite what you think.
I assume that you have actual numbers to back this claim, as opposed to simply making things up, right?
– Gilboa
(Ignoring for a second that network transparency has -nothing- to do with X.org performance, as local->local display doesn’t use the same network-aware code paths…)
Numbers? Making things up? Whatever…
Network transparency was useful when the machines you used every day were not as powerful as they are today. When I first used an XTerm, back in the late 80s, X’s network transparency was useful because it allowed both the local machine and the remote server to share the burden of displaying graphics; this was useful on resource-limited systems.
Today, it’s just added complexity, because in most cases, neither machine is resource limited, and therefore, you don’t need the overhead of a remote X client, compared to a lighter access method such as ssh, or even RDP and VNC.
It really doesn’t add that much complexity. Most of the issues that X is having these days have nothing to do with things like network transparency and everything to do with incomplete and buggy driver implementations, or bad GL frameworks (Gallium helps, supposedly, but much work is left to be done).
I disagree; I think it’s 20 years of legacy features and a broken design that are the problem. Honestly, you can blame it on the drivers, but it can’t be the drivers, because the proprietary drivers are much more stable on other OSes, which makes me think that there might be other factors.
Sure, they exist, those other factors, but their importance is limited. The biggest limiting factors have been solved already with new acceleration architectures, for example. If you can name some limiting factors, that would be great. For now, you are just assuming that they exist because the Linux proprietary drivers are worse than the Windows ones. Well, the Linux drivers are not as well supported as the Windows ones given the difference in market share, so there is definitely a reason. Also, the proprietary drivers have chosen not to move to the newer X architectures, so they do not get the benefit of those.
-You- claimed that -you- know what -people- want.
Now stop acting like an agitated 3 y/o, admit that -you- can only guess what -people- want (based on limited personal experience) and continue from there.
“whatever” is not a solid argument.
– Gilboa
What are you talking about? I’m not talking about what people want, I’m talking about technology, and what is better. It’s not about numbers, it’s about usability, and Xorg’s network transparency is not usable compared to the alternatives; it’s a very inefficient way to provide remote access, which is what people need.
I don’t think anybody really cares about the exact way it works, as long as it works, and Xorg’s network transparency is not usable as a reliable tool for network access.
X is no longer as network transparent as it used to be, unfortunately.
It was perhaps the case 20 years ago, when most of the computer graphics, font rendering etc. was done on the server side. Now we have XShm and XRender, which enable reasonably fast client-side rendering on local machines but no longer work across the network (at least not if you care about the user experience).
The network itself has changed too. Over the years bandwidth has increased dramatically but latency hasn’t changed that much. (Hard)wired networks are now often replaced with wifi connections, VPNs and other ad-hoc networks.
X is still able to deliver its promise on LANs (ideally with NIS/NFS) and with some classes of applications (e.g. engineering apps using 2D vector rendering). But in most other applications, even if the program manages to start up properly, you still have to be very aware of the fact it is not running locally (if only for performance and reliability reasons).
Rdesktop and VNC chose a different way: if it is no longer possible to make graphics rendering network transparent, let’s make it obvious and put the user in control. Thus, having a remote session in a separate desktop is GOOD – it makes it easy to find out which application is running where. Having the possibility to disconnect from and reconnect to a remote session (and thus move your existing session between computers) is GOOD. Using protocols that benefit from increased bandwidth and don’t stress network latency (asynchronous transfer of bitmaps, video) is GOOD. Having additional features (audio redirection, file transfer) is GOOD.
After all, with a network you can do much more than just open a window from machine A on machine B.
I think you hit the nail on the head. X is extremely complex and extremely capable and so it can take an extreme amount of effort and time to have stable drivers.
IMHO we should think about making X and/or Wayland as simple and efficient as possible while still keeping the relevant features.
Just to clarify I am not blaming this all on X, but complexity does not help.
NVidia works better for OpenGL, and only OpenGL, because that is what they focus on. Even the ancient VESA driver is faster and more stable than the NVidia drivers when it comes to 2D graphics, and the nouveau driver is somewhere between 100 and 1000 times faster while using 100 times less memory (Xorg with nvidia: 300MByte resident, Xorg with nouveau: 22MByte, where almost all of it is the binaries).
Sorry, but the nvidia 260.19.21-1 on Debian sits at 136MB presently.
4 tabs in Chrome Unstable, 60+ files open in Kate, 3 tabs in Konsole, Inkscape Trunk open as well, plus the usual crap running in the background for KDE 4.5.x.
We have proper specs for most of the AMD/ATI chips. There are specs for Intel chips and drivers from Intel. Is this of any help? No.
It does help. The open drivers for both of those offer 3D as standard. The open drivers for NVidia only offer ‘experimental’ 3D, after much blood, sweat and tears of reverse engineering. The Gallium3D/DRM changes are not complete yet; as we get to the optimizing end of things, it’s going to get interesting. Phoronix is quite a good place to keep up.
You’d think it would help, especially with Intel seeing as the X developers work for them.
You would have better GPU drivers if Linus provided a stable ABI.
But working with third parties has never been a goal of Linus any more than creating a desktop OS.
He also doesn’t seem to care about creating a server OS that meets the needs of the market, given how often the unstable ABI has broken VMWare.
Because having a stable ABI in a *nix is just unthinkable… like OS X, Solaris, oh wait, never mind. Where are all the benefits from the unstable ABI? How has Linux leaped past other *nix systems?
http://lxr.linux.no/#linux+v2.6.37/Documentation/stable_api_nonsens…
I read that years ago.
You didn’t answer my question.
read it again
And many people think it is wrong.
This is not how responsible devs work. You tell them you are supporting the interface until date X and mark it as deprecated. E.g., I was fixing some Java code I wrote 3 years ago for Java 1.3… I added the fixes and compiled, and I was warned things were to be deprecated in a future version… so I updated accordingly.
This here is basically a big middle finger up to any driver dev. It is basically “GPL or Else”.
Any change to code is a risk. It can regress functionality and/or introduce new bugs… any first-year software engineer knows this.
It seems like you didn’t really read the http://lxr.linux.no/#linux+v2.6.37/Documentation/stable_api_nonsens… text.
I did… you obviously didn’t read my response… It forces you to GPL your driver… you have to be part of the club.
Yep. GPL it and get in the trunk.
Which slows everything down, gumming up development and leaving room for confusion about what’s current. No, get it in the trunk, and it will be kept up to date. No legacy, no confusion, and improvements propagate quickly.
If you want to play in the Linux game, you play by Linux rules. This stickiness of the GPL is deliberate and is why Linux is where it is.
What are you arguing, that they shouldn’t fix anything? Of course changes introduce risk, but that doesn’t mean you don’t change what needs to be changed. Then test, test, test, which happens both because of the scale of people trying out the latest and greatest, and through just normal testing. Then distros take the latest, put in their patches, and there is more test, test, test, and it’s rolled out to users who want the bleeding edge.
The proof is in the pudding.
GPL or else …
It is marked deprecated; there is only confusion if you are an idiot. This rapid improvement process causes bugs… which is why there is no nice hardware acceleration for Firefox on Linux. The only driver which works with it is closed source…
At 1% market share… GPL is freedom, but only as we tell you.
Of course they should fix things … but not in a manner that is likely to cause more bugs.
Functional testing will not catch all the bugs. As I said in an earlier comment, it should be marked deprecated to give people time to move over.
Quite; it is why I see regression issues in hardware support on my chipset, for which open-source drivers have been provided (Intel).
GPL or it’s your problem.
That depends greatly on how clearly it is marked deprecated.
The whole graphics stack is going through a revolution. This is a good thing. X runs as the user. Nice VTTY switching. Lots of shared code, thus shared improvements in the future. X alternatives become possible (like Wayland).
This public big-deal noise from Mozilla may even speed the changes up. Lots of other projects have managed OpenGL fine in these times of transition, so Mozilla might be making more noise than is justified.
1% on the desktop. Check out the phone market (counting Android as Linux, which it basically is), or GPS units, or routers, or TVs, or web servers, or supercomputers. You name it, Linux has a big market share, if not dominance. So the approach clearly works.
Graphics cards are clearly desktop only, so surprise surprise, the support isn’t good. But as I said, the graphics stack is going through a revolution right now. I know you are going to come back with “it always is” but this is really massive, not just general change.
The developers know what they are doing. If Linux was like you seem to think, it wouldn’t have been as massively successful as it has been (outside the desktop).
I tried to be clear that won’t be the only testing.
In the closed world that is required. In the open world, it’s not. You change the API, you go and change all the drivers too. Notify the maintainers, etc etc. Things can happen much quicker if the code is in the trunk.
Blame the maintainers. It’s not that Intel doesn’t have the staff; they don’t allocate enough, and the 1% desktop market share is why. As things settle down to normal levels of churn, Intel won’t need to do as much work as they do now.
Just today, there was some more good news on the Gallium3D front:
http://www.phoronix.com/scan.php?page=article&item=amd_r500_expande…
Why should it be? I thought the GPL gave you freedom and didn’t tie you to the developer… except with the Linux kernel you have to bend to the kernel devs’ whims or fork the code (à la Android).
This is not really a problem with the process. My point still stands.
Again, you branch it and actually have an “unstable branch”… so I don’t understand why code with some obvious flaws is being put into a new release. The BSDs do this quite well: they have release, stable, and current.
Also, if I released code that had obvious flaws at work I would get pulled into my manager’s office, but because it is Linux it is suddenly perfectly okay.
People don’t see Linux, they just see Android, Chrome OS whatever it is.
It is used because it is cheaper than trying to customize something else for the purpose. Not because the development model is “better”.
Also, on embedded devices they usually use a specific version of the kernel (which has been thoroughly tested to work), because this version of the kernel happens to work well with the device.
Doesn’t prove anything about the success of the “kernel development method”.
Again why is this being put into kernels that are deemed to be “stable”, when there are massive and obvious flaws?
I wholeheartedly disagree. It is because it is free and it costs business less to customize the kernel than to write their own.
Why? When I build code with .NET 2.0, I know it will work with versions 3.0, 3.5 and 4.0 because the API is kept consistent… this is because of good design of the API; it doesn’t need to change constantly.
.NET 1.1 and .NET 1.0 had obvious flaws in it, and these were fixed … after that the core API has been extended … not reworked (which is what happened between 1.1 and 2.0).
If they kept a stable ABI, Intel wouldn’t have to keep reworking it… which is exactly my point. Thanks for proving it.
Also, every time someone says this to me, I point out that they work perfectly in OpenBSD… which has an even smaller market share than Linux and fewer devs. Maybe they actually know what they are doing.
Agh, too much to answer point by point. Look, if the world was like you thought, someone would build a stable wrapper around the unstable API, and everyone would use it. There would be pressure for it to be rolled into Linux itself. This hasn’t happened, or shown any signs of happening.

Your point about .NET is completely unrelated. That’s userland. Linux’s userland interface is very stable. Christ, I’ve grabbed old Unix apps and found they just compile under Linux and run, which shocked even me. I think the thing is that you are seeing the kernel and drivers as separate things, so you are thinking of an interface across them. But the developers decided that was a bad model; they decided it was best to have it all be one thing (the trunk), so the interfaces have no need to be stable, and can change to whatever is best.

You say there is no proof that its wide and varied use has to do with this model working, and that it’s used because it’s free/cheap. Then why don’t they use one of the other free OSes? Why has none of them come even close to the device support of Linux, even though they existed before Linux? I’ll tell you why: because Linux went critical mass in a way they didn’t. This is because of the stickiness of the GPL. The GPL does tie you in, and that is deliberate. It’s BSD that doesn’t.

Forking is not only allowed but encouraged. Forks get merged, forks take over, forks die. It’s an organic process. I’m not saying Linux is the best in all ways; I hate the un-Unix-like parts (ALSA and the lack of /dev/eth0), and there’s no generic fs device sharing like Plan 9. But you can’t fault it on hardware support or speed. Graphics is sticky, but it is being dealt with: even though the new drivers aren’t finished, they are already taking over, and you can’t stop that because it’s open source; it’s the distros’ choice.
It doesn’t seem that you read the texts; you talk about interfaces and miss examples like “old programs that were built on a pre 0.9something kernel that still work just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.”
You are not forced to use Linux. As you probably know by now, its driver makers are also a community where each one gives and all receive. So imagine someone who wants to use Linux for his own interests, who gets the source code to modify it, to study it, to publish it, etc… but doesn’t want others to be able to do the same…
There are some things that are not easy to be talked about. I’ll try to put the results of past conversations:
A binary-only driver is very bad news, and should be shunned. That proprietary software doesn’t respect users’ freedom; users are not free to run the program as they wish, study the source code and change it so that the program does what they wish, and redistribute copies with or without changes. Without these freedoms, the users cannot control the software or control their computing. As Stallman says: without these freedoms, the software controls the users.
Also, as Rick Moen said: binary-only drivers are typically buggy for lack of peer review, poorly maintained, not portable to newer or different CPU architectures, prone to breakage with routine kernel or other system upgrades, etc.
In the article at http://www.kroah.com/log/linux/stable_api_nonsense.html it’s explained that:
Linux does not have a binary kernel interface, nor does it have a fixed kernel interface. Please realize that the in kernel interfaces are not the kernel to userspace interfaces. The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is _very_ stable over time, and will not break.
The author of the article says that he has old programs that were built on a pre 0.9something kernel and still work just fine on the latest 2.6 kernel release. This interface is the one that users and application programmers can count on being stable.
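To make the distinction concrete, this is the stable interface in action; nothing here depends on the kernel version, because the write(2) syscall has not changed:

    #include <unistd.h>

    /* The kernel-to-userspace interface is the stable one: this compiles
     * and runs unchanged across kernel generations. */
    int main(void)
    {
        const char msg[] = "hello from a stable interface\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }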
That article reflects the view of a large portion of Linux kernel developers: the freedom to change in-kernel implementation details and APIs at any time allows them to develop much faster and better.
Without the promise of keeping in-kernel interfaces identical from release to release, there is no way for a binary kernel module like VMWare’s to work reliably on multiple kernels.
As an example, if some structures change on a new kernel release (for better performance or more features or whatever other reason), a binary VMWare module may cause catastrophic damage using the old structure layout. Compiling the module again from source will capture the new structure layout, and thus stand a better chance of working — though still not 100%, in case fields have been removed or renamed or given different purposes.
If a function changes its argument list, or is renamed or otherwise made no longer available, not even recompiling from the same source code will work. The module will have to adapt to the new kernel. Since everybody (should) have source and (can find somebody who) is able to modify it to fit. “Push work to the end-nodes” is a common idea in both networking and free software: since the resources [at the fringes]/[of the developers outside the Linux kernel] are larger than the limited resources [of the backbone]/[of the Linux developers], the trade-off to make the former do more of the work is accepted.
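A contrived illustration of that structure-layout hazard (hypothetical structs, not real kernel code):

    /* Release N: */
    struct device_info {
        int   id;
        void *ops; /* a binary module reads this at a fixed offset */
    };

    /* Release N+1 inserts a field, shifting every later member: */
    struct device_info_next {
        int   id;
        int   flags; /* new field for a new feature */
        void *ops;   /* now at a different offset */
    };

    /* A module compiled against release N still reads the old offset and
     * gets garbage. Recompiling against the new headers fixes the layout;
     * a removed or renamed field would require source changes instead. */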
On the other hand, Microsoft has made the decision that they must preserve binary driver compatibility as much as possible — they have no choice, as they are playing in a proprietary world. In a way, this makes it much easier for outside developers who no longer face a moving target, and for end-users who never have to change anything. On the downside, this forces Microsoft to maintain backwards-compatibility, which is (at best) time-consuming for Microsoft’s developers and (at worst) is inefficient, causes bugs, and prevents forward progress.
ABI compatibility is a mixed bag. On one hand, it allows you to distribute binary modules and drivers which will work with newer versions of the kernel (with the already-mentioned long-term problems of proprietary software). On the other hand, it forces kernel programmers to add a lot of glue code to retain backwards compatibility. Because Linux is open source, and because kernel developers question whether binary modules are even allowed, the ability to distribute them isn’t considered that important. On the upside, Linux kernel developers don’t have to worry about ABI compatibility when altering data structures to improve the kernel. In the long run, this results in cleaner kernel code.
Bad news for who? Users? They just want something that works. The current system already provides plenty of bad news.
Freedom as defined by Stallman’s newspeak that only exists to push his agenda.
Everyone in this thread agrees that the proprietary nvidia drivers are the best.
Why does Microsoft have to be pulled into this? Why not limit the discussions to Unix systems that have a stable ABI?
Tell me where Linux would have been held back if they had kept a stable ABI on a 3-year cycle. FreeBSD keeps a stable ABI across minor releases, so be specific and show in comparison how Linux has had an advantage.
Yes.
That’s why they don’t want a stable kernel API.
You think you want a stable kernel interface, but you really do not, and you don’t even know it. What you want is a stable running driver, and you get that only if your driver is in the main kernel tree.
I certainly don’t. They are the only driver on my system that crashes, regularly. They don’t keep up with X developments, so you are left behind. I can’t wait to not have to use them.
Fine. Which Unix system supports the most devices and architectures? (In fact, more than any OS, ever.)
Dude, really, read the doc, it covers all that you are bringing up.
* LONG POST * (basically, the whole ABI-stability thing is a convenient thing to blame, but is just a distraction from the real problem).
I don’t see that FreeBSD has gained any advantages by having a more stable ABI than Linux. In terms of graphics drivers, it has exactly the same problems that Linux has, for exactly the same reasons.
Those reasons are that Xorg is full of legacy crap that nobody uses anymore, which still needs to remain fully supported (and no, I’m not talking about network transparency). This makes Xorg far more difficult to maintain and improve without breaking everything, and slows development down.
Worse still – the newer stuff that people actually use / want to use doesn’t work properly. Either because it’s not had enough time spent on it, or because it interacts poorly with the legacy crap.
Not having a stable ABI doesn’t hurt the open-source side of things. Xorg developers have no problems keeping up to date with Linux (and the FreeBSD developers have no problems keeping up with the latest DRI2 changes from Linux either). So the only group it could possibly hurt are the closed-source guys. That’d be Nvidia and ATI, basically. Let’s see what Nvidia have to say…
http://www.phoronix.com/scan.php?page=article&item=nvidia_qa_linux&…
According to the lead of Nvidia’s Linux / Unix driver team:
– The drivers are focused on workstation graphics (CAD, 3D modelling and animation) first, because that’s where Nvidia make their money.
– Desktop or gaming features are added if they have spare time, but are a much lower priority.
– The driver is almost entirely cross-platform, with most of it being shared between Linux, FreeBSD, Solaris, Mac OS X, and Windows. The Linux-specific kernel module is tiny.
– The lack of a stable kernel ABI is “not a large obstacle for us”, and keeping the Linux-specific driver up to date “requires occasional maintenance… but generally is not too much work”.
So, Nvidia don’t seem to think it’s a problem. I think they’d know better than you do.
As for other drivers… I don’t see the problem. Nearly everything in a modern PC will run just fine with no special drivers. On Windows, you use Microsoft’s drivers, on Mac OS X you use Apple’s drivers (and they even work on general PC hardware with few problems), and on Linux you just use the standard kernel drivers.
The only exceptions are printers, video card drivers, and wireless network drivers.
Printer drivers are user-space (even on Windows these days), so the question of a stable kernel ABI is irrelevant. Besides, Linux and Mac OS X use the same printer driver system (CUPS, which is owned by Apple), yet only HP bothers to provide Linux drivers.
As for wireless network cards… the hardware manufacturers can not be trusted to make drivers that don’t suck, for any OS. The in-kernel drivers for wireless devices kick the ass of any vendor-supplied Linux driver, or of the Windows drivers running through NDISWrapper.
One other point – remember the problems Microsoft had with third-party drivers on Windows? How the number one cause of BSODs was Nvidia’s video driver? How much trouble lousy third-party drivers caused?
To solve this problem, Microsoft had to develop a huge range of static test suites, and a fairly comprehensive driver testing regime. They then had to force hardware manufacturers to use these tools and certify their drivers, by adding scary warnings about unsigned drivers. Later on, they even removed support for non-certified drivers entirely.
The Linux community can not do that, for a whole heap of licensing, technical, and logistical reasons. Plus, we don’t have the money, and we don’t have the clout to force hardware manufacturers to follow the rules. So they won’t – they just won’t release Linux drivers at all.
FreeBSD does not have even close to the same desktop market share or mindshare as Linux, and as such does not get the same amount of attention from hardware companies. The point of bringing up FreeBSD is that it has had a stable ABI for minor releases, and yet no one has told me how Linux was able to leap ahead in terms of specific features that could not wait out a minor release cycle.
Your link doesn’t work. Try this one:
The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA
For proper Sandy Bridge GPU support under Linux you are looking at the Linux 2.6.37 kernel, Mesa 7.10, and xf86-video-intel 2.14.0 as being the critical pieces of the puzzle while also an updated libdrm library to match and then optionally there is the libva library if wishing to take advantage of the VA-API video acceleration
What a mess.
So you cherry-picked a few positive quotes. Would it have taken more or less labor for them to provide a binary driver for a 4-year interface versus their shim / open-source shenanigans? Actions speak louder than words, and by their actions they clearly prefer to release binary drivers for stable interfaces. Users prefer binary drivers to having an update break the system. Users just want something that works.
Wait so you are saying everything else works fine in Linux? What about webcams, sound cards and bluetooth? No complaints about Audigy then?
The question is obviously related to video card drivers and most of your long winded post is irrelevant. I asked a simple question that you haven’t been able to answer.
Bullshit, I can list numerous network cards that have excellent customer ratings. Intel cards especially have been stellar for me.
No, I don’t recall that actually. If you tally up video card driver issues then Linux definitely comes out on top. There are endless cases of video card drivers being broken in Linux. That requires more than a restart.
You don’t understand what you’re talking about. What the above quote says, is that Ubuntu 11.04 should support Sandy Bridge out of the box.
Yes, for users, not for monopolists. You just have to think long-term.
Something that forces users to depend on a company whose goal is to suck the largest amount of money from them… works for the company. You know, Bill Gates got to be the richest man and Microsoft got to be a convicted monopolist (at least three times).
You can try to modify free software to your needs… and you can try to modify proprietary software to your needs, and see where we have our hands tied
The word “typically” doesn’t mean “always”; that’s why people use the word “typically” instead of the word “always”. Also, when Nvidia stops maintaining a driver (on Windows, Linux, etc.) we start seeing what happens, so we have to think long-term.
It’s to show what happens with the “choose ABI” alternative.
If there are problems in this thread with elementary facts, imagine if we start speculating.
Modding down the parent comment… without giving an argument… That way reasoning is avoided?
I really like how GPL advocates proselytize the basics even on a site called OSAlert, even to someone who clearly knows about Linux and who Stallman is. Reminds me of Mormons who knock on doors and ask if you have heard of Jesus. Jesus Christ? No I have never heard of him. I’ve lived in America all my life and have not heard of the guy. Is he somehow related to that holiday…whats it called…. Santaday or something?
OSX has a stable ABI and has clearly been more successful than Linux on the desktop.
I see you can’t answer the question either.
Perhaps I should write a formal proposal and see if the Linux devs can answer it. stable_api_nonsense was written years ago, so where are the benefits? Which specific feature could not have waited out a 3-year stable ABI cycle?
OSX also has a billion-dollar marketing campaign, and limits itself to running only on specially chosen hardware, because they either don’t want to or can’t support as much hardware as Linux can. Poor choice of example, there.
Some of the BSDs have stable APIs, and that hasn’t seemed to help them be successful. I’m sure your argument would be that they don’t have enough market share for that to make a difference. And you’re right – where you’re wrong is thinking that Linux would be any different. Linux doesn’t have enough marketshare for hardware companies to be very interested in it either, and for those that are the changing ABI is a relatively small inconvenience.
If having a stable API was that important, the distros would just freeze on a particular kernel/X/etc. version for 3 years while all the devs kept working on newer code that could change. In fact, that’s exactly how corporate support is handled. So, why doesn’t everything work that way?
It’s not difficult to figure out – general Linux users are more interested in getting the new features that the changing API provides as soon as possible, and are willing to give up the stable API which could get them more binary drivers on old distros. Because this is OSS, there is no way to control what users pick – you can’t simply dictate that people use the old distros, because they are free to grab whatever they want, and they’ve chosen otherwise.
Right, but FreeBSD has a smaller budget than Linux and can still maintain a stable ABI across minor releases.
Linux drew popularity by being successful on the server where the unstable abi is less of an issue. FreeBSD has numerous advantages but Linux has the inertia.
I already posted a Phoronix article about the troubles Intel has gone through. A stable ABI would mean less work for video card companies, end of story.
I’ve already gone over this. If a distro freezes the kernel then they run into a host of compatibility issues. For a desktop distro it is more trouble than it is worth. Then on top of that you have the subsplit problem: a distro that maintains a stable binary interface for video drivers won’t matter much to GPU companies, since most distros would still use the standard kernel.
Linus has designed Linux in a way that discourages forking and binary drivers. He doesn’t care if Linux is a success on the desktop or even the server. It’s a hobby kernel to him and the Linux desktop legions need to learn this and accept that at its core Linux is not designed to compete with Windows or OSX. Distros like Ubuntu aim for the desktop but have to continually deal with disruptive changes made upstream. It’s a big mess but Linus prefers it that way. He is on record as stating that Linux is software evolution, not software engineering. If kernel changes break working hardware downstream that is all part of evolution. If Linux only gains success as a server and embedded OS then that is fine with him.
No, it would mean more work, because they would have to work around all the problems in the stable API rather than just fixing them. That’s always going to result in more code and hacks, until it gets cleaned up the next time they’re allowed to do so. (For Intel, that is. I agree that the binary driver developers do have to do a little more work).
Because:
1 – FreeBSD is not just a kernel. It’s a complete operating system. The few Linux distributions that have minor releases (basically RHEL and SLED) manage to have a stable ABI as well.
2 – FreeBSD doesn’t change very much between minor releases. By definition. Between major releases, all bets are off.
No, sorry. Your article is completely irrelevant.
It does NOT support your unsupported assertion that we need a stable ABI, or binary drivers. At all.
What it SAYS is that the open-source drivers are impossible to upgrade. Which is true, and is a pain in the ass. Nobody is disputing that.
This is because those components are too tightly coupled. Despite being distributed separately, they can really only be used as a single unit, and if you upgrade one, you have to upgrade them all.
It’s done this way because developer resources are tight. It’s orders of magnitude more difficult to provide a stable, external API / ABI than it is to provide an internal, unstable one. Progress is already glacially slow. Having to maintain a stable external API / ABI would only slow them down even more.
Basically, the interfaces between the DRI module and the user-space components (Mesa or Gallium3D, the Xorg driver) are all considered to be private. This is fine – nobody else uses them, after all.
Remember, the Nvidia and ATI drivers both contain a kernel module, an OpenGL implementation, and an Xorg driver, shipped as a single unit, with a private (and constantly changing) interface between them. This is all fine.
The problem is that the updated open source drivers are only made available for the latest Linux kernel, and the latest version of Xorg.
So yes. The situation with the open-source drivers sucks. I don’t see how this reflects on the closed source drivers. They don’t have any of these problems.
1 – Strawman argument. Stop it.
2 – Someone has a different opinion than you do. Oh no! That does not make them an idiot, or selfish, or arrogant. You can drop all the crazy conspiracy bullshit as well.
And seriously, relying on binary drivers just isn’t going to work.
Even on Windows, almost every driver you use is written by Microsoft these days, because Microsoft doesn’t really trust hardware manufacturers to write drivers for their own hardware. And I can’t blame them – the manufacturers made a huge mess of it, and Microsoft got fed up with crappy third-party drivers damaging Microsoft’s reputation.
That’s why they started doing WHQL testing back in 2000-ish, which the Linux community doesn’t have the resources to copy, nor the clout to force hardware manufacturers to use. That’s why x64 versions of Windows don’t load unsigned drivers. That’s why print drivers (and most USB drivers) were moved to userspace. That’s why the Vista logo program requires that new PCs shipped with Vista or Windows 7 use HDA on-board audio chips, which must work without any drivers.
Breaking hardware :-). That was good
In a more practical sense, if someone had actually read
http://lxr.linux.no/#linux+v2.6.37/Documentation/stable_api_nonsens…
they would have seen:
The very good side effects of having your driver in the main kernel tree are:
– The quality of the driver will rise as the maintenance costs (to the original developer) will decrease.
– Other developers will add features to your driver.
– Other people will find and fix bugs in your driver.
– Other people will find tuning opportunities in your driver.
– Other people will update the driver for you when external interface changes require it.
– The driver automatically gets shipped in all Linux distributions without having to ask the distros to add it.
and more in the quoted article.
Nt-JerkFace talked about freedom that “only exists to […]”, and the answer pointed out where we are all free to do something and where we are not.
*Cough Bullshit *Cough.
As someone that actually maintains a fairly large out-of-tree kernel project (with >200K LOC), I find your comment to be misguided, at best.
Less than 1% of my team’s time is spent on keeping the code compatible with upstream kernel.org releases, and I’m using far more APIs than your average graphics card driver (sockets, files, module management, etc).
– Gilboa
You can try answering the same question: how would Linux have been held back if they had kept the ABI on a three-year cycle?
No idea.
From my -own- experience, I can’t say that maintaining Windows kernel code with its own semi-stable ABI is any easier compared to Linux. (Actually, the availability of the complete kernel source makes Linux far easier – at least in my view.)
Getting back to the subject: you claimed that the lack of a stable ABI is the main problem with writing good drivers. I claimed, from my own personal experience (which may or may not be relevant in the case of graphics card writers), that this is -not- the case.
Now, unless you have some actual experience and/or evidence to prove your point, your initial argument is pure speculation.
– Gilboa
Define semi-stable.
Write a binary driver for Windows and it will work for the life of the system. Write one for Linux and it will likely be broken with the next kernel update.
No, I didn’t claim that. I claimed that Linux drivers would be better if it had a stable ABI. There is a difference. Hardware companies would produce higher-quality drivers, and in a more timely manner, if there were a stable ABI. This is partly due to IP issues and companies wanting to get drivers out on release day.
The Challenge In Delivering Open-Source GPU Drivers
http://www.phoronix.com/scan.php?page=news_item&px=ODk3MA
Microsoft should give Linus millions in stock for being so stubborn about binary drivers. It’s a needless restriction that has held back the Linux desktop, especially during the XP days. That single decision has helped Windows keep its dominant position.
In general that’s true, but I’ve had drivers broken by SP releases and between different classes of Windows (e.g. XP vs 2K3).
Again, at least from my own experience, this is complete (!!!) bullshit.
ABI changes in the kernel are -few- and -far between-.
In the same fairly large kernel project mentioned above, we have 35 (!!!) LINUX_VERSION_CODE adjustments required to support Linux 2.6.9 -> 2.6.35.
This means that, in order to support all the kernels used from RHEL 4.0 through RHEL 6.0 and Fedora 14 (~6 years), we only had to make 35 adjustments – fewer than 6 changes a year.
… At an average of 10-60 minutes a change (and I’m exaggerating), we spent on average ~3 (!!!!) hours a year on keeping our project current.
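For the curious, a typical adjustment is little more than a version guard. A minimal, hypothetical sketch (not from our actual code) of the kind of change being counted here – in 2.6.19, for instance, the pt_regs argument was dropped from IRQ handlers:

    #include <linux/version.h>
    #include <linux/interrupt.h>

    /* Out-of-tree code supporting both sides of the 2.6.19 IRQ-handler
     * change guards the prototype with a version check: */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 19)
    static irqreturn_t my_irq_handler(int irq, void *dev_id);
    #else
    static irqreturn_t my_irq_handler(int irq, void *dev_id,
                                      struct pt_regs *regs);
    #endif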
Color me unimpressed.
I beg to differ.
See the comment above.
As much as I enjoy reading Phoronix, in this particular case I wasn’t too impressed.
Plus, see my comment below.
You’re completely mixing Linux stable ABI (as in Linux kernel stable ABI) and Xorg and Mesa ABI.
Two completely different things.
(Plus, I have zero experience with the latter, so I can’t really comment on that…)
Wow, you’re mixing so many different things I don’t know where to start…
Binary drivers, the stable ABI in the kernel, the stable ABI in Mesa and Xorg… you’re really making a salad here.
I’ll start by pointing out that nVidia (undoubtedly the best binary driver on Linux) is not really concerned by the lack of a so-called stable ABI [1].
I’ll continue by pointing out that other OSes which do have a stable ABI (Solaris?) haven’t fared better than Linux – quite the contrary.
In short, thus far you haven’t really provided any proof for your POV – not from personal experience and not from actual binary driver developers (see below).
Maybe it’s time to reconsider?
– Gilboa
[1] http://www.phoronix.com/scan.php?page=article&item=nvidia_qa_linux&…
Those are two different operating systems. XP users were not expected to upgrade to 2K3, while Linux users are expected to upgrade every 6 months.
WorksForMe(tm).
What do you have to say to the millions of VMServer users who had their software broken numerous times by kernel changes? Tough shit?
No I’m not. A stable ABI for video cards would reduce the total amount of work required of GPU companies – work which extends into Xorg, as seen in that article.
Cherry-picking positive PR comments. Do you expect a major company like NVIDIA to come out and say that Linus is a stubborn asshole? Would a stable 3-year ABI be more or less work for NVIDIA and other hardware companies? Just answer that question. Oh, and please don’t claim that opening their specs would be the easiest route – AMD has already done this, and now we hear that there is a lack of open source driver developers.
I find it hilarious that the Linux defenders are so adamant about this issue. How dare I question the resounding success of Linux on the desktop. Linus and Greg KH have already stated that the kernel is a minefield for companies that want to release binary drivers. The Year of the Linux Desktop would have happened years ago if the guy at the top had been interested in meeting the needs of third parties like Nvidia that could help alternative systems succeed.
Oh, and you still haven’t answered my question, along with everyone else here. Show me what couldn’t have waited 3 years.
If you know anything about NVidia, it’s that they aren’t shy about speaking their minds even when it’s going to piss someone off. And why would they even care about pissing off Linus? It wouldn’t affect their driver in any way, and might even get people to change things if they make a good enough argument.
I find it rather sad that some people are so adamant about this issue too. Just on the other side… That’s the internet for you, though – no one ever wins an argument.
It has been. Look, there’s nothing that absolutely CAN’T be delayed. For that matter, there’s no reason the OSS drivers even have to implement 3D at all – 2D is enough for everybody, I’m sure people could argue. The point is that a stable API would slow down everything. Maybe only by a few days, maybe a few months. Maybe a few years; it just depends on the feature. If Linux had kept a stable API since 2000, the current OSS graphics situation would look much like it did back in 2006, and that would be TERRIBLE. Is there any one feature in there that couldn’t wait? No. But the sum total of all of them is HUGE. And things would just keep falling further behind. Sooner or later the OSS support would just be a joke, and everyone would have to rely on the proprietary drivers. That doesn’t matter to a lot of users, but for those it does matter to, it’s unacceptable.
I know you’re never going to change your mind about this no matter how many times people answer your questions. That’s OK, you keep believing whatever you want and the rest of us will be here in reality.
Saying things that are not true will get you nowhere.
That is not true, of course.
http://lxr.linux.no/#linux+v2.6.37/Documentation/stable_api_nonsens…
http://www.osnews.com/thread?458215
No comment
Graphics is a whole other kettle of fish. As far as I know, writing a graphics driver involves writing multiple high-quality JIT compilers, a memory management layer, and a bunch of difficult-to-debug libraries. Plus you need a minimal OS on the ASIC side. The statistic I heard (and believe) is that the NVidia driver on Windows contains more code than all the other drivers on a typical system combined.
As you pointed out, unlike, say, kernel-based deep packet inspection software (ummm….) that’s forced to use 70 different kernel APIs (from memory management to files, sockets, module management, and an assortment of contexts and memory spaces), a video driver such as the nVidia driver is fairly light on kernel APIs, making it far less susceptible to kernel changes.
Most of the code (JIT, HW register management, etc) can easily be shared between Windows and Linux.
To quote nVidia [1], ~90% of their code is shared between Windows and Linux.
– Gilboa
[1] http://www.phoronix.com/scan.php?page=article&item=nvidia_qa_linux&…
Another way that the title of this story is inaccurate is that we do have hardware acceleration on Linux, thanks to XRender – and we have had it for years.
So if your drivers have a good XRender implementation then your Firefox can blow the competition into orbit in 2D graphics benchmarks such as:
http://ie.microsoft.com/testdrive/Performance/PsychedelicBrowsing/D…
What’s blacklisted on buggy X drivers is OpenGL. It is used for WebGL, and for accelerated compositing of the graphics layers in web pages.
However, for the latter (compositing), we are still working on resolving performance issues in the interaction with XRender, and that won’t make it into Firefox 4, so we don’t enable accelerated compositing by default (regardless of the driver blacklist). If you want accelerated compositing (at the risk of losing the benefit of XRender), you have to go to about:config and set
layers.acceleration.force-enabled
to true. I’m happily using it here, and it can double the performance in fullscreen WebGL demos.
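If you’d rather make the setting persistent, the equivalent line in your profile’s user.js (standard Firefox pref syntax – same effect as flipping it in about:config) would be:

    // force-enable accelerated compositing, at your own risk
    user_pref("layers.acceleration.force-enabled", true);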
Interesting test. With Chrome/Chromium (same version), it’s faster in Linux with both fglrx and Gallium3d than in Windows7. Not much between fglrx and Gallium3d. With Firefox 4 Beta9, fglrx is slower than Chrome (but faster than Firefox 3.6 Linux and Windows), whereas both Windows and Linux w/Gallium3d are very fast indeed. Which is to say that the Gallium3d Radeon 5xx0 driver finally does something very well.
…after all, we really need Gallium and Wayland on Linux.
Gallium helps X too. It means X has a single driver, on top of Gallium3D/KMS/DRM, that works on multiple cards. Removing drivers from X will greatly help X: it will mean much less code and make changing things much easier. It doesn’t just make X alternatives possible. Everyone is a winner.
It’s open source; someone else can do it if they can’t. There is a graphics drivers problem, but it’s getting much better and the future is bright (Gallium3D and friends). Even with what we have now, many, many applications manage to do OpenGL just fine on X (even with the crappy closed NVidia drivers I must run, which crash X about once a month). They will look like fools if someone else does a fork of Firefox with working OpenGL. My guess is that this is what will happen, because they are effectively throwing down the gauntlet. If it does happen, working OpenGL will be the sole purpose of the fork, and Mozilla will probably quietly take the code, grumbling under their breath.
pffft… the fact that they can’t do this as easily as they can on other platforms (Windows, Mac) already tells you a lot about Linux and X.
Others don’t make such a fuss and manage. My old OpenGL stuff just works.
That’s true too
Well, I tried the WebGL test suite linked earlier, bypassing the Intel driver blacklisting.
It crashed Firefox.
If some simple tests supplied by WebGL’s vendor can already lead to this result, I agree that WebGL should not be enabled by default for this chipset. As jacquouille said, it’s too much of a security risk.
I don’t doubt there is a problem with it; what I’m saying is that others manage. Worst case, show a message blaming the graphics card drivers.
So they would have to put workarounds for every bug they find in a platform-specific graphics driver, right in a multiplatform web browser ?
Sounds out of place to me. Well, sure, it is doable, but I understand their decision not to do it.
No, do what they do with plugins: a separate process. Let it crash, and if/when it crashes, say it’s probably the graphics driver’s fault >insert card name here<. With the open drivers, someone will try to fix it; with the closed ones, well, let’s hope they care enough about last year’s device.
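Something like this, conceptually – a minimal POSIX sketch of the crash-isolation idea (my illustration, not how Firefox actually structures its GL code):

    /* Run the risky GL work in a child process; if the child dies on a
     * signal, report it instead of taking the whole browser down. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void do_risky_gl_work(void)
    {
        /* imagine WebGL rendering against a flaky driver here */
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                 /* child: isolated GL work */
            do_risky_gl_work();
            _exit(EXIT_SUCCESS);
        }
        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            fprintf(stderr, "GL process died (signal %d) -- probably "
                    "the graphics driver's fault\n", WTERMSIG(status));
        return 0;
    }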
Or, you know, look at the source of things that manage just fine….
But things which manage just fine only use a subset of the OpenGL API. As jacquouille said, the goal of WebGL is to put 90% of said API in the hand of scripts, without knowing which parts of it said scripts will use…
Unless you advocate supporting only a subset of WebGL, the part which doesn’t crash on the currently used drivers. Then we simply don’t agree. We’ve had too much partial web standard support in the past, I think.
Wine is one of the things that manage, and for OpenGL it will probably do very little beyond passing calls on. But the DX implementation is more complex than OpenGL. Crashes in Wine are normally due to the nature of the project – reimplementing a bag of closed APIs to run closed programs that use those APIs – not due to graphics drivers. That’s what I think, anyway; I don’t know of any real data on this.
Xorg drivers are buggy. Yes… sure. What’s really buggy is the crap called GFX hardware, which is:
1. Not standardized
2. Under-documented
If the GFX hardware had a standard for access, more people could improve the OpenGL stack (coincidentally, this happens with Microsoft; however, vendors write blobs to conform to their interfaces). We live in 2010, and after all the technological advancements, graphics cards still cannot export a common hardware access API. It makes me wonder about the author’s unfair and inaccurate description of the GFX situation in general. Why not have HW-accelerated browsers on Haiku or Syllable? Because people would be involved in an eternal hunt for documentation. The only answer is standards. There are enough FPGAs out there to burn a standard driver into. If you want my 2 cents:
1. GFX cards should handle the interface to the monitor and do elementary 2D acceleration in a standardized way (have you heard of VESA?).
2. 3D/GP computing should be refactored onto another chipset (“APU”, as AMD calls it, is good terminology) that could be put on a PCIe card to provide standardized access. Put an FPGA on it to do the translation from standardized calls to the vendor HW.
For example, I buy a cheap standard 2D gfx card and a standardized accelerator board – cheaper because it is more oriented toward GP computing and weaker in 3D (I want to solve differential equations with Octave on FreeBSD, for example). If you want to go cheaper, buy only the first and let your 8-core CPU do the rest.
So, we could have two markets: cheap standardized 2D cards (like OHCI, OHCI1394 and PCI-ATA cards) and accelerator/co-processor cards that would also be standardized. Less unemployment.
What do we have now? Everything combined in a proprietary, non-standards-compliant, uncompetitive manner, and the older vendors killed off. OSS is part of the global market, and making drivers for particular OSes is uncompetitive. Even the mighty Windows needs a vendor driver.
There is always the cheapness factor. But would you sacrifice freedom and standards compliance for price? If yes, then, in my opinion, computing is not for you.
This sounds like a recipe to make everything slow and lowest common denominator.
This sounds like FUD.
Fear? No
Uncertainty? No
Doubt? well, maybe
The field of graphics cards is advancing at a rapid pace. Bridling it with some committee-derived standard would be extremely hurtful to the companies involved, and mostly unnecessary anyway. They already provide drivers for the platforms that matter, and since they control both the card and the driver, they can develop at a much faster pace.
By the way, there already is a standard interface and it’s called OpenGL. DirectX would count too. Adding yet another layer is just bloat and unnecessary.
IEEE 1394 is slow and lowest common denominator?
I suggest that DirectX or OpenGL be burnt into the GFX card.
err, no. Supporting a standard API doesn’t magically make your hardware less capable. It’s not that the standard must be a feature superset; rather, what the standard requires is a subset of the hardware’s features. A good standard should provide an extension point for specific/proprietary features and a means of probing capabilities.
I’m not a gamer, but I gather there are several cards from distinct manufacturers that support DirectX 10. Are all those cards incapable of doing anything that isn’t in the DX10 API? I doubt it.
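For what it’s worth, OpenGL itself already provides exactly this kind of probing: drivers advertise their extensions at run time. A minimal sketch (my illustration, assuming a GL context has already been created; the naive substring match is good enough here):

    /* Probe the driver's advertised extensions before using a feature. */
    #include <stdio.h>
    #include <string.h>
    #include <GL/gl.h>

    static int has_extension(const char *name)
    {
        /* GL_EXTENSIONS is a space-separated list of extension names. */
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext && strstr(ext, name) != NULL;   /* naive match */
    }

    void pick_render_path(void)
    {
        if (has_extension("GL_ARB_framebuffer_object"))
            puts("FBOs available: taking the fast path");
        else
            puts("falling back to the basic path");
    }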
I just tried beta 9 and well, either I get a completely black window or it flickers like Speedy Gonzales having an epileptic seizure. Not really what I would call usable :/
I bet Thom loved this article!
I too love anti-X articles!
Actually, Xorg developers have spontaneously contacted us and are looking into the driver issues we’re having (which they could reproduce). Looking forward to un-blacklisting stuff in the future.
…because of this article? Or did they contact because of something else? I can’t really see OSAlert having this kind of influence :/.
In any case, that sounds like good news!
I don’t know if it’s because of this article, or because of the article that Phoronix is currently running on the same topic, or because of the various blog posts flying around the interwebs
And yes, it’s good news
Colour me confused: why is it a bad thing that WebGL is implemented on top of Direct3D instead of OpenGL? If the outcome is consistent with WebGL implemented using OpenGL, then why is it even a problem? I mean, if the outcome is the same, then why is it a ‘sad situation’?
WebGL is based on OpenGL ES 2.0. All smartphones outside of the Windows world are OpenGL ES 2.0 compliant.
It should be obvious that you want WebGL to be a thin layer over OpenGL ES 2.0.
That makes absolutely no sense whatsoever – the issue is layering WebGL on top of Direct3D, and a programmer writing for WebGL doesn’t care what happens under the hood and behind the scenes, because all he is concerned about is the fact that WebGL is provided. If the WebGL-on-Direct3D implementation covers the whole WebGL stack, a programmer can program against WebGL and it runs on Windows, Mac OS X and Linux regardless of what the back end is – and then the whole commotion is for nothing other than the sake of drama.
I think the whole sadness has to do with the fact that they have to maintain two separate back ends instead of a single one. Sorry to sound pathetic, but boo-f–king-hoo. It’s time the Firefox developers stopped writing their code for the lowest common denominator and started taking advantage of the features which operating systems expose to developers. So they’re apparently OK with using Direct3D/Direct2D/DirectWrite, but maintaining an extra back end for WebGL is ‘one step too far’? Good lord. There is a reason I refuse to use Firefox on Mac OS X.
Performance is an issue, too. Having WebGL code translated to Direct3D on Windows is akin to DirectX-based Windows programs running on top of Wine, which get all their Direct3D calls translated to OpenGL.
Sure, those programs don’t know about it, but the call translation overhead results in very poor performance in the end. And THAT they care about.
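A toy example of where that overhead comes from (my illustration, not anyone’s actual translation layer): the two APIs don’t even agree on matrix layout, so a layered implementation can end up transposing every matrix it uploads.

    /* GL stores matrices column-major, classic D3D row-major; a
     * translation layer pays this kind of tax on every call that
     * crosses the API boundary. */
    static void transpose4x4(const float in[16], float out[16])
    {
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out[c * 4 + r] = in[r * 4 + c];
    }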
The problem is that some OpenGL drivers are so bad that doing WebGL on top of DirectX is still more stable and faster than using OpenGL.
Not everyone has an ATI or NVidia graphics card, and even those have issues with OpenGL.
Care to respond to the obvious connection the other poster pointed out between translating graphics calls and the poor performance of that translation?
You really needed that explained to you?
ATI is partially to blame for bad OpenGL drivers on Windows (since ATI cards are quite widespread); they never invested the same effort as they did in their DirectX drivers. Nvidia, on the other hand, produces decent OpenGL drivers across all platforms.
Still, this whole situation is a mess.
So where are all the X-lovers now? Wayland brings forth his hammer.
Wayland isn’t ready for use by normal people.
Wayland isn’t supported by NVIDIA and ATI.
Wayland is no better if underlying OpenGL is all screwed up.
I was reading that the DirectX 10/11 API was successfully ported to Linux – not 100% done at the time I read it, but it was already running Direct3D demos and such.
Ah yes, here:
http://www.phoronix.com/scan.php?page=article&item=mesa_gallium3d_d…
If it becomes a little more mature, we could see DirectX becoming an alternative to OpenGL on non-Windows systems… (which kinda shows the sad state of OpenGL right there…)
OpenGL support for Firefox will likely arrive well before it can be done by the DirectX state trackers.
http://www.phoronix.com/scan.php?page=news_item&px=OTAyMA
I’m going to switch to Chromium.
http://www.phoronix.com/scan.php?page=news_item&px=OTAyMA
Finally, some movement towards OpenGL 3.0 support in Mesa:
http://www.phoronix.com/scan.php?page=news_item&px=OTAyMQ
…*clears throat*… ahem, so isn’t this going to push Wayland developers to move even faster, so Linux can finally have a proper graphics server? About damn time they retired that stupid kludge of software called X.