“Joining the Linux Foundation is one of many ways Valve is investing in the advancement of Linux gaming,” Mike Sartain, a key member of the Linux team at Valve said. “Through these efforts we hope to contribute tools for developers building new experiences on Linux, compel hardware manufacturers to prioritize support for Linux, and ultimately deliver an elegant and open platform for Linux users.”
Mark my words: Valve will do for Linux gaming what Android did for Linux mobile. Much crow will be eaten by naysayers in a few years.
You mean fork Linux (the kernel), use as little GPL software as possible, and minimise feeding changes back to upstream?
I sincerely hope (and looking at TFA, suspect) Valve will do much better
You mean avoid dealing with the Linux upstream cabal in order to ship the product in time, scrapping the 20 year old cruft sitting above the linux kernel (X.org, the mess that is Linux audio) while building a more streamlined userland better suited to smartphones, and getting their patches rejected by the upstream cabal (wakelocks)?
Nokia tried to play by the rules with MeeGo, and look what happened to them. Everyone was shipping their new mobile OSes by Q4 2010. Nokia wanted to put MeeGo in the Nokia N8, but MeeGo wasn’t ready, so the N8 shipped with freaking Symbian. Nokia themselves admitted “technical difficulties” with MeeGo as the reason for the huge delays. By the time they had hammered out the issues, the investors were getting impatient and Elop happened. Android would have been yet another perpetually-in-beta no-show if it had been based on “real linux”.
I have enough real life experience to know that _being_ upstream on product release might not be feasible (although possible), but not working with upstream at all, even after the fact, is quite different.
What was released on the first device had been years in the making, and in that time a lot could have been done with upstream. Even when not doing that, doing it after the fact is still very doable.
Nokia was shipping “tablet” products before Android even existed. The fact that they screwed up says more about Nokia management than anything else….
kurkosdr, even the Google developers behind wakelocks ended up openly acknowledging their major defects. People like you also overlook that Motorola submitted quickwakeup around the same time, and that there was already an existing framework in the Linux kernel (earlier Linux mobile phone projects were partly responsible for that).
So it was not just the Linux upstream. The Android makers were infighting among themselves over how wakelocks should be implemented. The flamefest was not just the Linux community vs Google; it was Google vs Motorola vs some other Android makers vs the Linux community. The good thing that came out of it was a stack of test cases.
http://www.elinux.org/Android_Power_Management Long and brutal.
https://lwn.net/Articles/479841/ In the end Linux 3.5 merged almost all the features of both solutions, compatible with the framework already in the kernel, and it passes all the failure cases that could bite you with Google’s wakelocks, Motorola’s quickwakeup or suspend blockers.
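For reference, the mechanism that ended up in mainline is the /sys/power/wakeup_count handshake. A rough sketch of how a userspace power manager is meant to use it (my own illustration, not anything from the linked articles; assumes root and a kernel built with CONFIG_PM_SLEEP):

```c
/* Rough sketch of the mainline wakeup_count handshake that replaced
 * out-of-tree wakelock-style suspend decisions in userspace.
 * Error handling is kept minimal on purpose. */
#include <stdio.h>

int main(void)
{
    char count[32];

    /* 1. Read the current wakeup event count. */
    FILE *f = fopen("/sys/power/wakeup_count", "r");
    if (!f || !fgets(count, sizeof(count), f)) { perror("read wakeup_count"); return 1; }
    fclose(f);

    /* 2. Write the same value back; the kernel rejects the write if new
     *    wakeup events arrived in between, so we must not suspend yet.
     *    With buffered stdio the rejection shows up at fclose(). */
    f = fopen("/sys/power/wakeup_count", "w");
    if (!f || fputs(count, f) < 0) { perror("write wakeup_count"); return 1; }
    if (fclose(f) != 0) { fprintf(stderr, "wakeup events pending, retry later\n"); return 1; }

    /* 3. No events raced with us: it is now safe to suspend. */
    f = fopen("/sys/power/state", "w");
    if (!f || fputs("mem\n", f) < 0) { perror("suspend"); return 1; }
    fclose(f);
    return 0;
}
```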
Sorry, being based on real Linux does not have to mean a no-show. In fact it never did.
http://www.openmoko.org kurkosdr, Openmoko pre-dates the existence of MeeGo or Android. Openmoko is a full GNU/Linux phone, X11 included, released on time and functional.
Nokia was trying to make MeeGo source-compatible with Symbian applications.
Even with all the crud of X11 a mobile phone was more than possible.
So how can you say Android would have been yet another perpetually-in-beta no-show if it had been based on real Linux? Google was building its own new framework, much in the Openmoko style, and it would have worked that way too if not for another reason.
Android was designed to be less restrictively licensed. Google did a lot of things to avoid restrictive licensing, some of which got them sued by Sun. The major barrier to Openmoko-style Linux was phone makers’ fear of the GPL at the time.
Sailfish OS, the descendant of MeeGo, is going forwards quite well now, particularly because it does not have the compatibility conditions Nokia was asking for.
Just go and look at what Nokia did to Symbian. When they worked out that the Linux kernel could not be bent to emulate Symbian, they tried wrapping QT over it.
Nokia was not exactly playing by the rules with MeeGo either. Magical patch submissions without a decent explanation of why caused a lot of them to bounce.
Nokia’s big problem with MeeGo was attempting to make two incompatibly designed OSes able to use each other’s application source code without alteration, costing them more time than if they had just rewritten particular applications from the start.
Overall, most of Google’s Android patches did merge. There were just a few key ones that did not.
Yeah dude.
That diatribe is an unholy mix of rewriting history, strawman attacks and plain old making stuff up…
“http://www.openmoko.org kurkosdr, Openmoko pre-dates the existence of MeeGo or Android. Openmoko is a full GNU/Linux phone, X11 included, released on time and functional.”
“Nokia was trying to make MeeGo source-compatible with Symbian applications.”
“Just go and look at what Nokia did to Symbian. When they worked out that the Linux kernel could not be bent to emulate Symbian, they tried wrapping QT over it.”
openmoko was not very good and excessively buggy compared to other systems at the time.
OP is completely correct in the assessment that had Nokia chosen to not cooperate with upstream the project would have been finished earlier and might have been successful.
Your argument that Nokia failed because they were trying to make the “linux kernel compatible with QT” is just plain strange…
You have misread. Nokia attempted to make MeeGo source-compatible with Symbian application source code. This project turned into a complete failure and made sections of the MeeGo ABI stupidly complex.
When this failed, Nokia attempted to move all Symbian developers to Qt. Basically two complete disasters.
Most of Nokia’s delays with MeeGo were not upstream issues; they came from attempting to be sideways compatible between MeeGo and Symbian.
Openmoko, funnily enough (go read the reviews), was not that buggy, mostly because all the applications on it had been built specifically for it.
Android has the same thing: all applications are built specifically for it.
MeeGo had these huge porting efforts, or in other words, a way to dig itself into a hole.
Nokia with MeeGo did not follow what Openmoko and Android did. That is why Nokia ended up in hell.
We have seen this before with Lindows, which had the same problem. Remember how it was going to use Wine to port every single application? Of course that turned into a nightmare from hell as well.
Everyone so far is forgetting Maemo. The real reason ‘MeeGo’ had so many issues is that it had so many different origins.
So basically Nokia went from Maemo being GTK based, to working with Intel to merge Maemo and Moblin into MeeGo and then pushing Qt as the basis (since Nokia had purchased Trolltech) and on top of that the aforementioned Symbian / MeeGo / Qt combination.
I know the version of Qt Creator for the Harmattan SDK has a build button for Windows/Mac/Linux/Symbian/Harmattan. Newer ones have Android and Blackberry added to it, and Jolla has modified it for their SailfishOS SDK to port to it. So for the most part the compatibility layer was working (at least as far as I can tell, I haven’t gotten that far in my studies yet).
Another big kicker to the “MeeGo isn’t ready yet!” claim is that full-on MeeGo was RPM-based (from Moblin), whereas MeeGo Harmattan as shipped on the N9/N950 is deb-based (from Maemo).
Android didn’t use GTK or Qt or even a standard libc. All they did was grab the Linux kernel.
There was a blog post that I can’t seem to find again that talks about how much better the N9’s power management is compared to Androids, due to the way the wakelocks are more dynamic or something.
Either way, Nokia definitely was trying to work more with the Linux developers, and their code managed to get into the kernel.
Rather, it had many issues because the project didn’t get a chance to fix them before both Nokia and Intel ditched it. Any transition is disruptive, and Maemo switching to MeeGo was disruptive as well. GTK to Qt and dpkg to rpm were technicalities, but ones that required effort to integrate. While MeeGo was ditched, it was forked as Mer, and Mer uses Qt, rpm and the rest of the MeeGo heritage successfully.
Qt.
Get it right.
Sailfish is beta quality at best, unstable …and shipping. That didn’t play out well for N9 Meego.
I hope Valve’s SteamOS will be better than Android on mobile. Valve didn’t create a split and hopefully will be using a regular glibc Linux stack, including the graphics (Wayland etc.). So it will benefit all Linux distros at large in the long run. Android on the other hand created a hard split and benefits only itself.
If the Linux folks can’t even agree on something as basic as a f*cking graphics server (X, Wayland, Mir etc.), I say don’t hold your breath. Let’s not even get started on the audio stack.
Don’t get me wrong, I like choices too, but at the end of the day, IF you want to be taken seriously and have success with your endeavours, one project must be chosen above all and all efforts should be focused on improving that particular choice, instead of the current bickering and infighting that’s going on right now.
The Linux folks *are* in agreement, if you just ignore one company – Canonical.
X is to be succeeded by Wayland, due to inherent flaws.
SystemD modernises system administration and maintains modularity.
The main thing needed now is for Pulse to be able to usurp the use-cases of JACK and Pulse-less ALSA.
The one thing Canonical are doing right is switching to Qt5 with QML, but GTK3 isn’t that bad, and it’s not too different to MS’s plethora of GUI options.
Overall I do agree that the Linux audio stack is a mess, and embarrassing after all this time.
However, I would much rather see Pulse disappear to be replaced by JACK.
Pulse has caused nothing but problems since day 1. In my opinion JACK is far superior not just for latency but also flexibility.
I can’t ever see Pulse being able to cover the low-latency area and supporting the Pro-Audio segment.
Pulse does have more momentum, and more features for end-users as opposed to pro-audio (it makes it so easy to set per-application sound/stream audio etc.)
That’s the main reason I thought it would be better as a candidate for consuming the other; it doesn’t really matter which does it, there simply needs to be an overhaul of the stack, wayland/systemd-style.
Out with the old, and in with something that has been designed for all modern use-cases, with the knowledge of how a modern stack should be.
The effort will be gargantuan, but most people seem to acknowledge it’s necessary, even with the vast improvements that have been made over how things were even half a decade ago.
OK, what “etc.”? X11, due to its age and design, without question has to die. Wayland is from the X.org project and is the designated successor to X11. Wayland has the support of all but one of the major Linux desktop environments and of all the major video card makers.
Mir/Unity/Ubuntu is the only breakaway. It has no support from anyone other than Ubuntu, including the video card makers. The odds of long-term success are low.
Now, 1c3d0g, what other “etc.”s are there? There is nothing.
Let’s move on to audio. Are you aware that back in 1996 there were 12 POSIX audio servers in use on Linux, all incompatible with each other? I mean 100 percent incompatible. Today we are down to 3: PulseAudio, JACK and AudioFlinger (Android only). All support very different needs, and PulseAudio and JACK have a cooperation interface.
Yes, a cooperation interface between sound servers was unthinkable in 1996. Why? Because the developers had your foolish logic: “you must choose one” equals “you don’t have to cooperate with competing projects.”
1c3d0g, there is very little real bickering and infighting, really. The Mir stuff is mostly Ubuntu not wanting to admit that the path they have taken is mostly a no-go.
One project chosen above all others does not happen in the FOSS world. What happens is that the weaker one slowly shows itself and gets crushed.
PulseAudio is in fact a merge of tech from 6 different sound server projects.
1c3d0g, think about it this way. Would it matter if we had 100 different sound servers and 100 different graphical servers, if application developers only had to worry about 1 ABI/API that worked on them all? Getting to 1 universal ABI/API requires cooperation and test cases, not this “you must choose one” point of view. Cooperation, like what happened with PulseAudio, might then lead to projects merging and their number shrinking.
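To be fair, something close to that single API already exists at the application level: SDL2’s audio interface is one API that SDL maps onto PulseAudio, JACK or plain ALSA depending on what the system runs. A minimal tone-playing sketch (my own illustration, nothing from this thread):

```c
/* Minimal SDL2 audio sketch: one API, and SDL picks the PulseAudio,
 * JACK or ALSA backend at runtime.
 * Build (assumption): gcc tone.c $(sdl2-config --cflags --libs) -lm */
#include <SDL2/SDL.h>
#include <math.h>

/* SDL pulls audio from this callback whenever the backend needs data. */
static void fill(void *userdata, Uint8 *stream, int len)
{
    static double phase = 0.0;
    const double step = 2.0 * 3.14159265358979 * 440.0 / 48000.0; /* 440 Hz at 48 kHz */
    Sint16 *out = (Sint16 *)stream;
    int samples = len / (int)sizeof(Sint16);

    for (int i = 0; i < samples; i++) {
        out[i] = (Sint16)(3000.0 * sin(phase));
        phase += step;
    }
    (void)userdata;
}

int main(void)
{
    if (SDL_Init(SDL_INIT_AUDIO) != 0) return 1;

    SDL_AudioSpec want = {0}, have;
    want.freq = 48000;
    want.format = AUDIO_S16SYS;
    want.channels = 1;
    want.samples = 4096;
    want.callback = fill;

    /* NULL device name = default; the backend choice is SDL's problem. */
    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(NULL, 0, &want, &have, 0);
    if (!dev) { SDL_Quit(); return 1; }

    SDL_PauseAudioDevice(dev, 0);  /* start playback */
    SDL_Delay(1000);               /* play ~1 second of the tone */

    SDL_CloseAudioDevice(dev);
    SDL_Quit();
    return 0;
}
```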
Yes, I would like to take the heads of the graphics and audio worlds on Linux and lock them in a room with no way out until they reached universal consensus.
1c3d0g, basically you have to stop repeating garbage. The Linux world is more in consensus today than it was 5 or 10 years ago. OK, the noise being generated is louder; the noise from the Wayland camp is so loud because they don’t want that consensus disrupted.
Hi,
These are (were?) symptoms of a larger problem.
It doesn’t matter if it’s X vs. wayland, or KDE vs. Gnome, or Qt vs. GTK, or Python 2 vs. Python 3, or various different package managers, or changes in “/dev”, or different init/runlevel scripts, or different CRON daemons or….
It’s a loose collection of pieces that happen to work together sometimes (if distro maintainers throw massive quantities of effort into fixing all the breakage and most software handles the hassle of many different alternative dependencies); and it is not a single consistent set of pieces that were all designed to work together. There is nobody in charge that can say “this is the standard from now on”, or “these are the APIs/libraries that will remain supported for the next n years”, or “yes, this alternative is technically better but not enough to justify the compatibility problems it will cause and therefore that alternative will not be included in any distribution”.
– Brendan
[q]I hate linux, linux is stupid.
– Brendan
[/q]
Way to go Fergy … what a great argument.
This is the problem with the Linux community: it cannot and will not listen to any criticism, no matter how valid (and no, I am not spending my free time contributing when most projects are quite caustic).
lucas_maximus,
“This is the problem with the Linux community: it cannot and will not listen to any criticism, no matter how valid.”
Well, some people don’t like hearing criticism of the communities they belong to. However it’s no more true for linux than for other communities. You get the same kind of flack for criticizing microsoft and apple.
For what it’s worth, I think linux should adopt a stable ABI. I’ve given it a lot of thought and in my mind a good compromise exists by making the ABIs stable between minor versions and allowing them to break between major ones.
Keep in mind that even though the windows kernel maintains long term kernel API/ABI, microsoft has nevertheless broken many existing drivers using those stable APIs due to new kernel restrictions (especially with win vista/7). Even with win8 I discovered the dameware mirror driver broke from win7. So from a practical user point of view, windows users sometimes have to go through similar kernel driver breakages (regardless of the underlying cause).
So for a linux example: a driver could be compatible with specific major versions like linux 3.x. A new driver build will be needed for linux 4.x. This allows linux to have most of the benefits of stable interfaces. Additionally linux would not get stuck with long term cruft in legacy interfaces that no longer make sense or aren’t optimal for exposing the features of new hardware. Hardware manufacturers could stop worrying about individual kernel builds, only the major ones.
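To make that concrete, here is a hypothetical sketch of what driver code could look like under such a compromise: one compatibility branch per major series, keyed off the real LINUX_VERSION_CODE/KERNEL_VERSION macros. The my_hw_* helpers are made-up stand-ins for whichever interface actually changed, not real kernel functions.

```c
/* Hypothetical sketch of the "stable within a major version" compromise.
 * LINUX_VERSION_CODE and KERNEL_VERSION are real macros; the my_hw_*
 * functions are invented stand-ins for whatever interface changed. */
#include <linux/module.h>
#include <linux/version.h>

/* Stand-in implementations so the skeleton is self-contained. */
static int __maybe_unused my_hw_register_30x(void) { return 0; } /* 3.x flavour */
static int __maybe_unused my_hw_register_26x(void) { return 0; } /* 2.6.x flavour */
static void my_hw_unregister(void) { }

static int __init mydrv_init(void)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 0)
	/* Under the proposal this branch is valid for the whole 3.x series. */
	return my_hw_register_30x();
#else
	/* ...and this one for the whole 2.6.x series. */
	return my_hw_register_26x();
#endif
}

static void __exit mydrv_exit(void)
{
	my_hw_unregister();
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch: one compatibility branch per kernel major version");
```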
I think this is a very reasonable approach, but alas I am not in charge.
Tell Linus that.
lucas_maximus,
“Tell Linus that.”
This is not to change the subject, you can criticize linus all you want. However too often this criticism seems hypocritical. I have no more control over linus than I do over bill gates, and I’ve certainly got many complaints for him as well for the numerous times I’ve cussed at windows for not performing as well as I thought it should. All platforms have pros and cons. Anyone who only recognizes the pros of one platform and the cons of another is being biased. Can we agree on this?
It not an argument … I was just presenting the reality of the situation.
lucas_maximus,
“It not an argument … I was just presenting the reality of the situation.”
Can I use this as the signature for my emails?
(after fixing the grammar of course)
if you like.
Let’s take a Linux example of an equally broken driver: ATI, in the case when the Big Kernel Lock was removed, Alfman.
Was it possible to get the ATI drivers working again on Linux, with the same binary blobs that were used in the past? Yes it was. Why? Because there were interfaces allowing wrappers to be placed on top of the driver to cope with the new kernel limitation. Did this require AMD or ATI to fix the issue on abandoned hardware? No it did not.
This is the big problem with Windows 7 to Windows 8: how can you fix it to get your hardware working again?
The binary blob with a source wrapper that Linux classes as its minimum acceptable driver allows drivers to operate longer. Even Nvidia’s 8K page-size assumption, which caused failures on kernels built with 4K pages, could have been wrapped over if Nvidia had never fixed the driver.
There is case after case where mostly-binary-blob Linux drivers have been brought back to life by altering the wrapper code over them.
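To make the wrapper idea concrete: the glue is a small open-source file compiled on the user’s machine, and the blob only ever calls into that glue. The names blob_core_* and os_alloc_page below are invented for illustration; real vendors define their own symbol sets.

```c
/* Sketch of the "source wrapper over a binary blob" pattern used by
 * out-of-tree drivers.  blob_core_init/exit and os_alloc_page are
 * invented names.  When an internal kernel interface moves, only this
 * glue file needs to change. */
#include <linux/module.h>
#include <linux/gfp.h>

/* Entry points provided by the pre-built binary object linked in. */
extern int  blob_core_init(void);
extern void blob_core_exit(void);

/* The blob calls this fixed helper; the wrapper maps it onto whatever
 * the running kernel actually provides today. */
void *os_alloc_page(void)
{
	/* If the kernel's allocator interface changes, only this body
	 * changes; the binary blob keeps calling os_alloc_page(). */
	return (void *)__get_free_page(GFP_KERNEL);
}

static int __init shim_init(void)  { return blob_core_init(); }
static void __exit shim_exit(void) { blob_core_exit(); }

module_init(shim_init);
module_exit(shim_exit);
MODULE_LICENSE("Proprietary");
```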
Hardware makers also don’t worry that much about particular kernel versions with Linux. If one kernel version does not work, they publish a note to use a different one for a while.
The Nvidia and AMD wrapped binary blobs are compatible with more than just 3.x; they in fact support 2.6 and 2.4 kernels as well. So yes, all the way back to 2001.
Any API on the stable list that is exported from kernel space to userspace cannot be broken; it must keep functioning exactly the way it used to.
The decision to forbid solid-blob drivers keeps the kernel cleaner. If there is an issue with a blob on Linux, the wrapper that fixes it only has to be loaded if the driver requiring it is in use.
Before you can consider binary-only drivers with Linux, something else needs to be done: implement the means to re-link a binary blob, so that wrappers can be inserted the way they currently are.
Yes, your Windows 7 to Windows 8 problem also comes about because Microsoft’s Windows driver tools don’t have any way to relink a fix object into an existing driver.
Linus will not allow the change unless the same quality of support for old, maker-abandoned hardware can be maintained after the change.
Alfman, the Linux kernel back in 2.2 did have binary driver support. The feature was intentionally disabled, for a list of reasons:
1. GCC versions passing arguments differently.
2. An alteration to an API in kernel space would leave no way to repair the driver, forcing either the kernel to grow in size with emulation parts, or binary drivers without a wrapper to be forbidden.
If binutils ld had the means to relink a binary with wrappers where required, both 1 and 2 could be addressed.
Alfman, like it or not, the Linux stance of no binary drivers is grounded in operational and build-tool limitations.
oiaohm,
“Alfman, the Linux kernel back in 2.2 did have binary driver support. The feature was intentionally disabled, for a list of reasons: 1. GCC versions passing arguments differently.”
That happened back in 1999? That’s really more of a very rare issue with GCC than the linux kernel per se. Changes to argument passing in GCC would break lots of binary interfaces and not just those in the kernel anyways. This is a great example of a breaking change that I would perform between major linux versions. Under a scheme where there isn’t any expectation of API/ABI compatibility between major versions, this case becomes a non-issue.
“2. An alteration to an API in kernel space would leave no way to repair the driver, forcing either the kernel to grow in size with emulation parts, or binary drivers without a wrapper to be forbidden.”
You do understand I wasn’t suggesting we keep fixed legacy interfaces over a long term, right? Just make changes at more predictable intervals. This gives devs much more time to come up with good interfaces and more time to write drivers for them. If an interface is so broken that it warrants immediate changes, then I would allow it, but that really should be the exception rather than the norm.
“Alfman, like it or not, the Linux stance of no binary drivers is grounded in operational and build-tool limitations.”
You are welcome to that opinion, those are some of the reasons this debate exists. However in truth there are both pros and cons for adopting stable API/ABIs and this is exactly why I called my earlier solution a “compromise”.
Alfman, the Linux kernel does not use normal exporting and importing of functions from .ko objects. This is why it is exposed to GCC optimisations altering how things work, even when not using GCC. The last screwup in this regard happened only 4 years ago.
Alfman, this is the problem: breaking at fixed, predictable intervals means you have drivers fail beyond repair at each of those change points, unless you have some way to wrap them.
By forcing a wrapper, the ABI is not required to remain fixed, yet the driver can keep working. This is why with Linux you are talking about drivers that can be made to work over a very long lifespan.
The lifespan of Linux’s official stable ABI/API is the kernel major number, with the exception of the 2.6 to 3.0 transition, where there was no function stripping. So all the stable ABI/API that existed in 2.6.0 still exists and works the same way in the latest 3-series release. 2.6.0 was released on 18 December 2003; we are talking over 10 years of stability.
Yes the official stable ABI/API is the one exported to user-space. But it is completely operational in kernel space.
The issue where the Linux kernel has broken binary drivers a lot is simply internal-only items being hooked into. It is better in recent years with functions being declared GPL-only, which kinda keeps the closed source driver makers out of the prototype area.
Alfman, the reason Linux is such a problem for closed source driver makers is the fact that it is open source. Closed source driver makers see something and use it while it is only a prototype being tested. All new API/ABI ideas have to be tested somewhere, and what seems like a good idea might fail completely in real life.
Microsoft and other closed source OSes get to hide the prototype stuff they are working on. An open source OS does not have the means to do this.
The Big Kernel Lock issue was not only closed source driver developers getting it wrong; open source ones also disobeyed how it should be used. Even so, it was still repairable.
A lot of embedded systems use Linux userspace drivers, because these are stable and work across a huge range of Linux kernels without requiring rebuilding. Even so, a userspace driver can in many cases be wrapped if something has changed. Yes, userspace drivers on the Linux kernel do have a full stable ABI. There have been only a handful of breakage cases, mostly userspace drivers using insecure solutions, like using /dev/mem to interface with a device instead of something that only gets permission for a limited address range.
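For anyone who has not seen one of these: the UIO framework is one face of that stable userspace driver ABI. A minimal sketch (the device node and mapping size are illustrative; a real setup binds the hardware to uio beforehand):

```c
/* Minimal sketch of a userspace driver using the kernel's UIO interface.
 * Assumes a UIO device at /dev/uio0; path and mapping size are
 * illustrative, error handling kept minimal. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the device's first memory region (registers) into userspace. */
    size_t map_size = 4096;                       /* illustrative size */
    volatile uint32_t *regs = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    printf("register 0 = 0x%08x\n", regs[0]);

    /* A blocking read returns the interrupt count once the device fires an IRQ. */
    uint32_t irq_count;
    if (read(fd, &irq_count, sizeof(irq_count)) == sizeof(irq_count))
        printf("interrupts so far: %u\n", irq_count);

    munmap((void *)regs, map_size);
    close(fd);
    return 0;
}
```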
You say compromise. In reality, the acceptable compromise for the Linux kernel developers has already been made: user-space drivers.
It’s point 2. If you want a kernel-space stable ABI in the Linux kernel, that requires working out how to fix the linker to allow relinking of drivers. The Linux world is really after closed source drivers with a 15+ year lifespan, even if the company making the driver goes bye-bye. By 15 years most of the hardware is dying of old age. This is the big problem.
Alfman, basically, for your compromise to be acceptable to the Linux kernel developers, a re-linker of binary objects has to exist for Linux. Otherwise the Linux kernel developers will stick to the almost-forever-stable userspace API/ABI for userspace drivers and a wrapper stub for closed source kernel-space drivers.
Unless you address the very reason why the Linux kernel developers are refusing, you cannot expect them to change. “I want a kernel ABI” does not answer “I want to use all my hardware until it physically dies, with the latest and most secure kernel.”
You might call Linux developers cheap in their wish to use all their hardware until it dies.
Everyone overlooks that the Linux kernel did have a stable ABI for kernel-space drivers. The major reason for giving it up was long-term operation of drivers.
In fact a lot of hardware makers particularly don’t like the idea that users will use their hardware until it physically dies. They want you buying new hardware sooner.
Think of it this way: if Linux ever does provide a stable kernel-space ABI and it includes the wrapping the kernel developers want, the lifespan of drivers will remain the same as it is now, where a driver only becomes useless when the hardware no longer works.
Linux core developers don’t want the issues Windows has, where a new version of Windows can mean having to go out and buy new hardware.
Clearly note: it’s not that a Linux kernel-space ABI is not possible. It’s that the conditions required for the Linux kernel developers to accept it are not being met.
The driver must be able to work for 15 years no matter what has changed in the kernel even if it means a wrapper. This is one hell of a requirement. This is why every push to get a stable Kernel ABI in the Linux kernel is failing.
Alfman, you are thinking in far too small a time frame, time frames that are acceptable to Microsoft. The Linux world is the Linux world; they have a completely different view on the problem.
oiaohm,
You seem to be focusing on user space stability in API/ABI. Linux userspace forwards/backwards compatibility is very impressive, however that’s really quite a different topic than the one everybody here is talking about.
“So all the stable ABI/API that existed in 2.6.0 still exists and works the same way in the latest 3-series release. 2.6.0 was released on 18 December 2003; we are talking over 10 years of stability.”
“Yes the official stable ABI/API is the one exported to user-space.”
“You say compromise. In reality, the acceptable compromise for the Linux kernel developers has already been made: user-space drivers.”
This is all valid if we were talking about userspace API/ABI breakages, but it doesn’t do anything to negate the criticism of kernel API/ABI breakages. Unless linux were to become a microkernel, then the criticism over kernel API/ABI breakages will obviously still apply.
“The driver must be able to work for 15 years no matter what has changed in the kernel even if it means a wrapper. This is one hell of a requirement. This is why every push to get a stable Kernel ABI in the Linux kernel is failing.”
This seems to be a textbook response against long term stable API/ABIs, and I already admitted that I don’t want to keep legacy cruft around for long periods. From this it’s not clear to me that you actually understood the compromise I proposed: the API/ABI stability would only apply within major versions. That’s maybe a couple years, certainly not 15. Between major versions, the API could change just like it already does today, so we would be no worse off there.
I probably won’t change your mind, but there are certainly many others who want stable API/ABIs. The best path is often a middle road in between of the two extremes.
Alfman, the stable ABI that is exported to userspace is used inside kernel space as well; the Linux kernel avoids lots of duplication.
Anything that a driver maker has requested and got onto the Linux kernel stable API list remains that way. It never changes until the primary version number changes, and often not even then. So a source-code driver written against the 2.6.0 kernel’s stable driver API works in the latest 3.12 kernel.
The Linux kernel does not remove legacy crud from an API once it is declared stable. An API only has to exist in source code; an ABI is the true pain, since it has to exist in the kernel in memory.
Alfman, you don’t understand your own request. Let’s say we just stick to the longterm kernels and give those a stable ABI:
longterm:3.10.22
longterm:3.4.72
longterm:3.2.53
longterm:3.0.101
longterm:2.6.34.14
longterm:2.6.32.61
So your proposal now means 6+ drivers have to be maintained and new ones created all the time, compared to the current DKMS solution with a wrapper, where one driver covers them all and more.
A re-linker solution would also only require one. Again, why only one? You can document which functions in the ABI have been altered, spot those in the driver, and add wrappers as required via the re-linker. This is what API stability in the Linux source code is doing: when something has gone wrong, wrap in a replacement just for that driver.
Alfman basically your solution does not fly because you are asking for far too much work.
The Linux world wants one driver, or close to one driver.
Kernel stable ABI just does not make sense with the speed of Linux development. Wrapper + binary blob makes sense. Re-linker + binary blob makes sense.
You have the extremes wrong. The extremes are open source drivers only, or binary drivers only.
Exactly in the middle is a wrapper with a binary blob, or a re-linker with a binary blob. Anything else is not the middle any more and is in fact leaning to one side or the other.
oiaohm,
“So your proposal now means 6+ drivers have to be maintained and new ones created all the time, compared to the current DKMS solution with a wrapper, where one driver covers them all and more.”
I still don’t think you are getting it. I am not suggesting adding any long term interfaces over what there is today. To the extent that wrappers are useful, then they can still be used – however there would be less need for them if kernel interfaces had more medium term stability.
“Alfman basically your solution does not fly because you are asking for far too much work.”
Having supported file system drivers myself, the lack of stable API/ABI creates MORE work every time the interface breaks. It’s discouraging especially as an independent developer with limited resources. I honestly don’t think anybody can make the case that stable API/ABIs are more work once you factor in the developers using those interfaces. Let me take the argument to ridiculous extremes to make this point: let’s say an official kernel interface changed once every year, how much work is needed to support this? What if it changed once every month, how much work is there? What about every week? Now every day? Now let me ask you, is the quantity of work directly or inversely proportional to the frequency of changes? If you are truthful, you have to acknowledge that longer term time frames will tend to alleviate workload, while short term time frames will tend to mandate additional resources.
“Kernel stable ABI just does not make sense with the speed of Linux development.”
Well designed interfaces really don’t need to change that frequently. In the past I might have agreed with you, but we’re not in the cowboy days of linux any more; arguably it should be mature enough now to have longer stability.
“Wrapper + binary blob makes sense. Re-linker + binary blob makes sense.”
I’m not a fan of the whole userspace-blob approach to begin with; the motivation seems to be more about sidestepping the GPL license than making the right technological choices. Even so, having wrappers was never ideal. Wrappers create a maintenance burden of their own and there are limits to what they can do efficiently. Having a stable kernel API/ABI is still helpful.
“You have the extremes wrong. The extremes are open source drivers only, or binary drivers only.”
Give me a break, we are talking about the merits of API/ABIs themselves. Even when 100% of the code is open source, there are still pros/cons to having a stable API/ABI. If you don’t think linux should budge one iota to having more stable interfaces, that’s your opinion. However you should face the truth that proponents of stable kernel interface aren’t misguided, we simply hold a difference of opinion than you.
I get it now: you are another driver developer doing exactly what ATI and Nvidia used to do, then complaining when they get busted all the time.
Have you ever put in a single request to have any of the functions you require provided by the stable API of Linux?
Please note I said stable API, not stable ABI.
The stable API in Linux is, at minimum, serviced by header files. Once something is declared stable it stays that way forever more. The only stable API/ABI request I know of relating to file system drivers was done by the FUSE solution. So you are using all-internal Linux functions and then complaining when you get stomped on. Yes, it is simpler to blame someone else than to accept that you are doing the wrong things and so causing yourself pain.
It took Nvidia a long time to learn; ATI never did. AMD acquiring ATI brought their developers up on common sense. If the Linux kernel developers think something is internal-only they will change and break it; why not, when no one else is using it? You come and complain that it broke, and their answer will be “we don’t care; you never put in the request to make it stable.”
Wrapper + binary blob is how the Nvidia and AMD closed source kernel drivers are done. Both things I am talking about, wrapper + binary blob and re-linker + binary blob, are about kernel space.
I see you don’t understand the cost of a stable ABI. A stable ABI has very poor efficiency compared to an unstable ABI. Mainline Linux can be unstable because every time they change something they can use transformation tools to make every bit of mainline code use the new path. This keeps calls short and performance high.
A stable ABI mandates wrappers between the core of the Linux system and the parts using the stable ABI, so a stable ABI will always be slower than the unstable one. The stable ABI will require maintaining wrappers anyhow; worse, if the kernel itself provides the stable ABI, these wrappers have to be loaded all the time, providing more security weaknesses.
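The overhead being described is essentially one extra table of function pointers between the caller and the real code. A toy user-space illustration, with invented names, of what a frozen ABI plus wrapper layer looks like:

```c
/* Toy illustration of why a stable ABI costs something: callers go
 * through a versioned table of function pointers (the "wrapper" /
 * "redirection table") instead of being recompiled against the current
 * internal functions directly.  All names are invented. */
#include <stdio.h>

/* The frozen ABI: a v1 driver only ever sees this struct layout. */
struct stable_ops_v1 {
	int (*map_buffer)(unsigned long size);
	void (*unmap_buffer)(int handle);
};

/* Today's internal implementation, free to change at any time. */
static int  internal_map(unsigned long size)  { printf("map %lu\n", size); return 42; }
static void internal_unmap(int handle)        { printf("unmap %d\n", handle); }

/* The wrapper layer: kept loaded so old callers keep working; the extra
 * indirection (plus any argument translation) is the price. */
static const struct stable_ops_v1 ops_v1 = {
	.map_buffer   = internal_map,
	.unmap_buffer = internal_unmap,
};

int main(void)
{
	/* An "old driver" calling through the frozen table. */
	int h = ops_v1.map_buffer(4096);
	ops_v1.unmap_buffer(h);
	return 0;
}
```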
I have not said Linux cannot have more stable API. A stable ABI is the one that is out. The price is high enough providing the stable user-space ABI. Stable API additions only happen if the parties needing them put in the requests.
Basically, stop demanding a stable ABI and start submitting requests to have the functions you require moved onto the stable API list, and tolerate the performance cost this will have.
Linux kernel internal interfaces are a product of evolution, not a product of design. The interfaces provided in the stable API list go through a design process in vast detail.
Alfman, basically, stop complaining and start doing the right things. After 6 to 12 months of doing the right things your breakage rates will drop off.
For particular calls you might be told to use the FUSE interfaces from kernel space instead of direct calls to Linux internals. Then you will do what Nvidia and ATI did and complain these are too slow. Sorry: you want stable, that equals wrapped, that equals less speed. Nothing is free.
Alfman, the simple reality is that if it keeps breaking on you, you are not using official Linux kernel interfaces for your non-mainline parts in the first place. Developers complaining about their drivers breaking with the Linux kernel is a clear sign they have not done the submissions they should have. If it is good enough for Nvidia and AMD to do correct submissions to have an API stabilised, it is good enough for you.
This is the big, bad problem. There is an official stable Linux API for kernel-space usage; every item on it has to be requested.
Alfman, really, what is the point of implementing an official stable ABI if no one will use it because they don’t like the speed?
The only way an official stable ABI will be looked at is if enough drivers prove they will use the stable API option Linux already provides. Using the stable API proves you will put up with the speed price.
oiaohm,
“The only stable API/ABI request I know of relating to file system drivers was done by the FUSE solution. So you are using all-internal Linux functions and then complaining when you get stomped on. Yes, it is simpler to blame someone else than to accept that you are doing the wrong things and so causing yourself pain.”
I already know all about fuse. Nor have I actually blamed anyone else for these problems. I’m just pointing out that the lack of stable kernel interfaces creates a higher maintenance and compatibility burden and that a simple compromise could be made to get the best of both worlds.
You continue to insist on user space solutions, such as fuse in my case. It’s true at least they are stable. Userspace file systems are fine for low performance work but not really for high performance production systems. You can continue to search for ways to blame me, but the fact is the kernel is the right place to have it under linux.
Of course that I’m right means nothing to you, but maybe you’ll show a little more respect for the words of my friend, Linus Torvalds:
https://lkml.org/lkml/2011/6/9/462
Offtopic: IMHO your posts would be much easier to read if you’d use the standard OSAlert quote tag…
Hi,
More like, I’ve been using Linux for over 10 years and know that (especially for graphics) it is far from perfect.
– Brendan
And let’s not forget that Linux lacks a stable API and a stable ABI. Writing software for Linux is chasing a moving target: what works today will be broken tomorrow.
I wonder why Valve didn’t choose to use FreeBSD as a base, besides the poorer hardware support.
The thing is, that Valve can decide the future of Linux. If they choose to only support Wayland, it will force everyone to use it. They have a lot of power, and it may benefit users and get rid of some of the unnecessary fragmentation.
There would have to be GOOD (comparable to windows quality) Wayland graphic drivers for both AMD and nVidia (and intel, though those first 2 are more important) for that to happen.
SteamOS is not your average linux distro. It NEEDS to be good from the first release otherwise it’s going to be a laughing stock. I hope valve understands that. They are trying to pull people away from windows gaming, if it sucks it’s not going to happen.
I actually think the initial reception of the SteamOS/Steam Machine platform will be very poor.
Valve has always followed the slow-and-steady approach, but they always have a well-defined vision and the patience to stand behind their vision until it comes to fruition. Steam was originally considered a failure, a step-back, but after a couple years people began to warm up to it as Valve slowly-but-surely continued to improve it. I expect something similar to happen with the SteamOS/Steam Machines.
The difference is that Steam in its infancy had no real competition. In the OS market there are already better alternatives for people to turn to, so if it fails to impress it’s not going to last. We’ll see; I’m anxious to see what kind of waves (if any) it can generate among AAA publishers.
I have to agree. It’ll probably at first only be adopted by Linux-supporters and those kinds of gamers who are already into tech and aren’t afraid of using not-quite-ready stuff. It’s not a bad place to start, though, since it means you’re much more likely to get actually useful feedback, bug-fixes and whatnot in return without all the Average Janes and Joes overwhelming your support lines with nonsensical stuff. In my experience Linux is still severely lacking in all sorts of scenarios and it’ll take time for Valve to come around and find a fix for it all, but the feedback may help them with focusing on the most important things first or help them find the most effective solution.
SteamOS and Steamboxes aren’t meant as replacements for Windows or regular PCs, they’re meant to supplement them, so Valve isn’t really in any particular need to hit the ground running and can just focus on things in the long run.
I’d argue that it is. This is why we’re still waiting (well, not really) for that mythical “year of the desktop linux”. Most people don’t like using half-assed things, and being “free” is no argument for this particular target group.
Valve is a big-ish company, it’s not some group of enthusiasts pushing out a new linux distro. Relying on customers for beta-testing is a shit way to handle things.
Intel is probably more important than those two, Intel GFX is becoming “good enough” for more and more people.
Android had to split itself to thrive.
Back in 2009-2010 the Linux kernel, and several other components of the GNU ecosystem, were anything but suitable for mobile (even today it still has not sorted out its power management issues in laptops, just to give an idea), and with the tight product-shipping schedules these days, there was no time to play the politics required to adapt all the components and push the patches back to the community before using them.
Valve, on the other hand, doesn’t need to do any of that. The changes that Valve needs to make to Linux are minimal. Its distro will be desktop oriented, all the components needed for game development are already in place, and developing software on Linux these days is a bliss. The only thing lacking is a decent display server, but even that is questionable. Perhaps better IDEs are needed, but that is a non-issue for many developers, more a matter of taste and development style.
And yet there were already quite a few handset manufacturers using Linux-based OSes before Android was released to the world.
No, it didn’t have to split to thrive. Android was created as a closed system, they didn’t care about Linux community or any synergy with it. Then Google bought it and “opened” it somewhat. But the split remained.
You have similar “splits” all over, from distributions not shipping the vanilla kernel by default, to the desktop, to … everything. This is not a bug but a feature. Allow, no, support all kinds of people and groups to innovate on top, and if something turns out to be useful, work on getting the concepts upstream, back into mainline.
Android is a prime example. It did modify vanilla; everybody does. It innovated successfully with new concepts, like improved power management. These concepts, note: the concepts, not just the patches, made it step by step back into vanilla.
This is why Linux beats all the competition out there: unlimited innovation, then rework the patches, make them even better, less dirty, cover more cases, with a focus on future innovation, and bring them into mainline.
The best requirements-driven innovation possible. And here comes another player, Valve, doing the same. They profit from all of that: a Steambox will eat less power because of work done before for Android. Compared to Android, we already know that Valve is innovating, driving new concepts and requirements in. It’s going to be good for Valve and for Linux mainline. Win, win, again and again.
It did a split on a very deep level – libc. That was bad in the long term. Of course, maybe they intended to be separate to begin with; then my point is even stronger – Android is too isolationist.
TomTom had been shipping plenty of mobile devices running Linux, and so did many other vendors, so this is largely overstated.
Agreed with this though
I would very much like them to succeed; it will give Linux a huge boost on the desktop for gaming, which everyone can benefit from either directly or indirectly through improved competition.
This debate about the forks is interesting. On the one hand forking is bad because it can split the community. On the other hand, those doing it are just exercising rights explicitly granted under the source code license. So it feels a bit ironic to point a finger at them while embracing the license that allows them to do it.
In any case, I don’t particularly care if “steamos” is forked *to the extent that the games can still run on a normal linux desktop*. I think Valve is genuinely betting for Linux gaming to succeed as a whole. Fragmenting the small linux gaming market at this point would be completely counterproductive for their linux strategy, IMHO.
Another way to look at that would be that there isn’t much to lose. There are problems with Linux at every turn when it comes to gaming so the question is, do you really want that to be your starting point, or is it better to throw out the crap to redo it properly and move on?
ilovebeer,
“Another way to look at that would be that there isn’t much to lose.”
I agree, there isn’t much to lose, and there’s everything to gain for valve. Games that are opengl are already going to run without too much porting effort, it’s almost a gimme.
“There are problems with Linux at every turn when it comes to gaming so the question is, do you really want that to be your starting point, or is it better to throw out the crap to redo it properly and move on?”
I think this is greatly exaggerated. However even assuming there were such huge disparities in usability, none of this really applies to “normal” consumers who could buy the steam box as a ready to use gaming console. So long as they don’t use poor hardware, it should run perfectly. Normal consumers won’t be the least bit affected that it runs linux under the hood.
And besides, if not linux, then where else would you start? Valve has already indicated that Windows is a poor choice due to the lockdowns MS is imposing under metro. It would be foolish to give one of your primary competitors so much control over your product, don’t you agree? Lest we forget Microsoft’s history of abusive relationships with competitors on its platforms.
Looking forward to better audio support and standards.
Sometimes the bickering on this topic is akin to a scene from the Life of Brian.
I look forward to a Linux based SteamOS running my favorite games. Enuf said.
That was a scream of a movie too!
The only way to get a large number of games built to play on Linux (SteamOS) is to build a good cross platform competitor to DirectX. It needs to be easy to use and high performance. Once they get it built they need Nvidia and AMD to include it in their chips.
OK, what are you smoking? What you are requesting has been done for years.
http://www.phoronix.com/scan.php?page=news_item&px=MTQzNTA
SDL and OpenGL cover what DirectX does. Wayland support is planned for SDL 2.0.1. SDL also provides all the wrappers to Linux’s different sound and graphics systems.
Valve is funding SDL development and the Steam Linux Runtime includes SDL. SDL is also used by quite a few major games.
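For anyone who doubts the pieces are there: getting a window and a GL context up through SDL2 is only a few dozen lines. A minimal sketch (the build line is an assumption about a typical setup):

```c
/* Minimal SDL2 + OpenGL skeleton: window, GL context, clear-and-swap loop.
 * Build (assumption): gcc main.c $(sdl2-config --cflags --libs) -lGL */
#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

    SDL_Window *win = SDL_CreateWindow("demo",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        1280, 720, SDL_WINDOW_OPENGL);
    if (!win) { SDL_Quit(); return 1; }

    SDL_GLContext ctx = SDL_GL_CreateContext(win);
    if (!ctx) { SDL_DestroyWindow(win); SDL_Quit(); return 1; }
    SDL_GL_SetSwapInterval(1);          /* vsync */

    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev))
            if (ev.type == SDL_QUIT)
                running = 0;

        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);   /* real draw calls would go here */
        SDL_GL_SwapWindow(win);
    }

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```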
The problem on Linux has had bugger all to do with the ABI. OpenGL drivers on Linux not supporting threading is a major performance hit. This is not an OpenGL issue either; it has been a video card maker issue.
So all Valve needs is for the video card makers to release decent drivers for Linux.
AMD’s Mantle is also being talked about as platform neutral, just not video-card-maker neutral.
Reality check here: Valve has to do nothing to make new high-performance ABIs. All Valve has to do is create a market for AMD and Nvidia to fight over, then let AMD and Nvidia do what they do best and select the winner to be included in the Steam Linux Runtime.
AMD, Nvidia and the other parties making graphics cards are the major writers of OpenGL.
A good API/ABI starts with the video card maker, not the other way around.
Hi,
I wouldn’t hold your breath. ATI and NVidia do try to provide drivers; but both the kernel developers and X developers are constantly breaking them (changing APIs). After years of having their work broken by morons (who can’t create a standard and stick to it), I can’t understand why ATI and NVidia haven’t given up on Linux completely.
– Brendan
Intel is even more funny.
They are supposedly the best contributor to Linux GPU drivers and X development.
Yet their OpenGL drivers are always behind their DirectX ones and the Linux ones are worse than their Windows ones.
For a long time, their graphics performance analyzers only targeted Windows/DirectX developers. That situation only changed when they started to support Android on x86 processors.
Talk about half-hearted contributions.
Out of curiosity, could you point out the numerous and constant changes to the APIs in Linux recently?
Brendan, this is a complete lie. Linux kernel breakages with Nvidia and ATI have in fact in all cases traced back to them depending on behaviour that was not defined in the stable API of the Linux kernel. The stable ABI is also exported to user-space; if functions exported to userspace that make up the stable ABI are ever broken, they will be fixed in a kernel revision. So no, the Linux kernel cannot be on this list.
Nvidia and ATI have got into some trouble for bad coding behaviour. It was never good coding behaviour to just use the Big Kernel Lock instead of creating your own lock; this busted ATI. Nvidia broke by assuming that page sizes would always be 8 KB. The standard API did not say that either; in fact it said the page size was platform definable, with a lookup function that told you how big the current page size was.
The kernel side of the Nvidia and ATI drivers does not break that often, and almost all cases have been something that should not have been done in the first place. There are functions in the Linux kernel marked GPL-only as well; these are not stable and are only for drivers included as part of the mainline Linux kernel.
Nvidia is getting wiser with age, like recently, when needing dma-buf, making sure it was exported to user-space with an interface that would be stable.
The X developers’ complaint is that Nvidia designed bypasses around most of the X11 stack instead of fixing it.
The X11 driver API in fact changes more slowly than Microsoft’s does. Brendan, so I do not get where you get this constantly-changing-API bit from. Look at the time frames of DRI1, DRI2 and DRI3; please note they overlap with each other, for a very long time.
ABI changing is a lot more common.
Brendan, DRI driver compatibility in X11 is a 10-year thing for each version. DRI1 has only recently started being nuked; DRI1 drivers from 1998 still work right up to the version of X11 where DRI1 is removed.
Nvidia’s issue with X11 is hooking into functions that are not part of the X11 DRI driver interfaces. Yes, randomly hooking into stuff is a way to get burnt.
Brendan, yes, the reason Nvidia and ATI have not walked away from Linux is that most of the trouble they have had is their own fault, for not working with upstream and not using the upstream-provided interfaces.
This has been the big problem: most of the argument against Linux on drivers is bogus.
So AMD and Nvidia have bad coding behaviour and they hook onto the wrong stuff. Why doesn’t that happen on Windows? Is it because Windows has a stable API, a stable ABI and well-documented ones?
This is a lie. The reason Vista had so much trouble with DirectX 9 drivers not working was in fact that ATI and Nvidia had used non-exported interfaces.
There was a time frame of very badly behaved video card driver makers.
So yes it did happen on Windows and Linux and OS X and Solaris…. Basically everywhere.
Hi,
The stable API is for user-space, not for device drivers. There is no stable API that’s useful for device drivers on Linux. To work around that both AMD and NVidia use a “shim”. I’ve seen this break before (e.g. the shim relying on a kernel function that either ceased to exist or had its name changed); but what do you expect when there’s no stable API for device drivers to begin with?
Ah, so you agree it does break.
Note that I didn’t blame it all on the kernel alone. The graphics on Linux is a huge mess with different pieces responsible for different things (kernel, X, mesa, gallium, DRI, DRI2, Xrender); where responsibilities change over time (e.g. the introduction of KMS; the change from “nothing” to TTM to GEM, etc). To be fair we need to blame the entire huge mess (rather than just the piece/s of the mess that happen to be in the kernel).
Sure – functions marked “GPL only” with no alternative that native/binary drivers can rely on for the same functionality, leaving no choice other than to “do something that should not have been done in the first place”.
Sounds nice in theory. In practice there’s a 75% chance that updating X will break your graphics driver or break your GUI or break something else; a 50% chance that you’ll spend the entire afternoon searching for solutions all over the web, and a 35% chance that you’ll end up downgrading again to fix the problem after wasting an entire day.
For example, I’m using version 12.4 of ATI’s drivers (newer versions of the drivers don’t support my card). It works perfectly fine; except that newer versions of X11 don’t support the older ATI drivers. This means that I haven’t been able to update X11 for about 2 years. Now older versions of X11 have fallen off of Gentoo’s packages and I’m screwed unless I switch to the open source ATI drivers. Of course I’ve tried the open source ATI drivers in the past and know they never work – the best I’ve managed with them is no 3D acceleration and only one monitor (where attempting to use a second monitor causes the system to crash).
Because I can’t update X, I don’t dare touch KDE either. It’s far too likely that the newest versions of KDE rely on some fantastic extension or something that only newer X11 provides, and I’ll end up with a broken system where KDE won’t like old X, new X won’t like old driver, and new driver won’t like actual hardware.
Sure, except “working with upstream” typically means “go screw yourself until you’re willing to make all your driver’s code open source”, and still doesn’t prevent Xorg from breaking things.
– Brendan
Brendan, the introduction of KMS did not prevent non-KMS drivers from working. Most people miss that you have the option to boot the Linux kernel with KMS disabled.
Unfortunately the fault for this lies with ATI, AMD or Nvidia. Every formal request to the Linux kernel by ATI, AMD or Nvidia to move a function to formally stable has been granted, heck, even some requests by video card makers no one has heard of. For every one of those functions that ceased to exist, Nvidia, ATI or AMD had not made the request. Linux kernel developers are not mind readers. Nvidia and ATI both complained that they did not like the overhead cost of providing a formal stable interface, so for a long time they were doing the wrong thing. In the last 5 years Nvidia has changed: today Nvidia will simply refuse to support particular hardware combinations until the functions they need are moved onto the formal stable list. AMD has the same policy. Result: no more of this problem.
This kernel-space problem is basically a thing of the past.
The correct response here from a closed source driver developer is to make a formal request to stabilise an interface. So far this has never failed to be granted within 6 months of the request, although sometimes there has been an argument over what should and should not be exposed. Support for any of these driver-requested interfaces also goes into all currently supported kernels, treated the same as security updates.
So all this interface trouble you are talking about, Brendan, lands cleanly on the heads of the closed source driver developers. The main problem is that being formally wrapped in the Linux kernel for long-term suitability adds overhead to every call; this is in fact unavoidable. Direct jumping into stabilised functions is not allowed. Most people don’t know that you can tell the Linux kernel to pretend to be a particular version; allowing that possibility requires a redirection table, and a redirection table is overhead. Even the Windows kernel has a redirection table for long-term driver support. Yes, there is a price for long-term stable interfaces.
Brendan you will notice that Nvidia older don’t break that often. Old ATI driver on the other hand did not use any of the interface specs. No DRI1 no DRI2 some form of random-ally hook where ever we like into X11.
Brendan AMD themselves are behind the open source drivers and that is the one the officially support.
I have used Nvidia cards for the past 10 years. Last 5 not once as a X11 server update broken it. Mostly because Nvidia in the last 5 has been information X11/x,org project where they hook in. Yes before 5 years Nvidia did have issues they never told the x.org project where they were hooking in. Again developers are not mind readers they cannot avoid breaking what you are using if they don’t know you are using it. Nvidia driver update screwing my system over yes I have had that. Where 2 Nvidia drivers installed at the same time completely shoot each other dead. This is not X11 or kernel or broken GUI. This is Nvidia being Nvidia and only allowing 1 copy of drivers installed.
I run KDE. I can tell you that anything past KDE 4.2.0 copes with missing functionality and is nowhere near as X11-server dependent as KDE 4.0.0 was. So your KDE fear is not based in reality.
This is completely untrue. If it were true, the Nvidia drivers would not be able to work as dependably as they do.
The driver you are having trouble with dates from before AMD took over ATI. In fact AMD is dropping it because internally it was not legally sound, and AMD cannot keep supporting it. Yes, 12.2 and before fail legal department auditing for containing questionably sourced code. When those cards' drivers fail under Windows as well, they will be dead on Windows too, because AMD cannot update the highly questionable code without legal risk. This is why AMD had no choice but to go open source for those cards. New Windows drivers for those cards, if they ever happen, will be based off the open source code base.
Also, do you know what was removed in the move from X11 1.12 to 1.13 that broke the ATI drivers? UMS driver support was killed off, the predecessor to DRI1. Yes, that's right: ATI had been writing drivers using interfaces older than DRI1, and DRI1 dates from 1998.
Brendan, so how is this the Linux kernel's or X11's fault? ATI was writing drivers against pre-1998 X11 interfaces; UMS goes back to the 1980s. Your problem is that ATI was writing highly obsolete drivers. Interfaces get 15 years of support, and UMS was well past 15 years when it was killed off. Yes, DRI1 is coming up to end of life; it is now 15 years old.
Brendan, how far do you think you would get if I gave you a Windows NT 4.0 or Windows 98SE driver and told you to use it with Windows 7 or 8? That is what you have been doing. Does this explain why you have been suffering?
Brendan, this is the problem: when you dig into the issues, most of them land squarely on the closed source driver maker for doing the wrong things, some insanely wrong. ATI was insanely wrong on both legal issues and obsolete design.
Brendan, do you even know what the X11 ABI/API breakage policy is? I guess not. Roughly (there's a small code sketch of these rules after the list):
1) If it is older than 15 years and has been marked deprecated for 4 years (so a minimum of 19 years old), it can be deleted without question.
2) An API/ABI under the age in rule 1 can be broken to find out whether it is in use, if no one has reported it in use. If even one bug report comes in saying it is in use, the functionality must be restored exactly as it was.
3) No ABI that X.org has been informed is in use, and that is under the age in rule 1, can be broken.
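For illustration only, here is my own reading of those three rules as a decision function (a sketch, not anything from the X.org codebase):

```c
#include <stdbool.h>
#include <stdio.h>

enum policy { KEEP, MAY_PROBE_BREAK, MAY_DELETE };

/* age_years: how old the interface is; deprecated_years: how long it has
 * carried a deprecation warning; reported_in_use: has anyone told X.org
 * they still depend on it. */
static enum policy abi_policy(int age_years, int deprecated_years,
                              bool reported_in_use)
{
    if (age_years >= 15 && deprecated_years >= 4)
        return MAY_DELETE;       /* rule 1: old enough, deletable without question */
    if (reported_in_use)
        return KEEP;             /* rule 3: known users, no breakage allowed */
    return MAY_PROBE_BREAK;      /* rule 2: may break to flush out users, but
                                    must restore if a bug report comes in */
}

int main(void)
{
    /* A UMS-era interface: decades old and long deprecated, so removable. */
    printf("%d\n", abi_policy(25, 5, false)); /* prints 2 (MAY_DELETE) */
    return 0;
}
```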
Sorry, if you are in a situation where you cannot just drop back one version of X.org and have your driver work again, you are dealing with a driver so legacy in design it's not funny. The same goes for any program that does not work with X11.
Brendan, this is why your argument does not hold. You are mostly shifting blame onto parties that are not responsible. Closed source driver makers have a responsibility to do the right things; the open source world is not being a pain in the butt to them.
If open source developers were being a pain in the butt, drivers would break with every kernel release and every X11 release.
Hi,
Oiaohm; you're trying your hardest to pretend that the sun shines out of open source developers' butts – carefully choosing facts that suit your argument (and then stretching them as far as you think you can) and ignoring everything else. I don't know if you're stupid or dishonest, but I don't really care enough to find out which.
Oiaohm; my video cards are only about 5 years old (Radeon HD 4770, released in 2008). Initially the best I could get was unstable 2D (screen flickering black when scrolling, mouse pointer turning to trash occasionally, random crashes). Support improved over time and after 2 years video finally worked properly (including 3D acceleration, etc). Then I had about 6 months of drivers that actually worked before Xorg assholes broke it again. For comparison, there’s a “Windows Vista” machine sitting next to it that is even older; where updating the video driver is a few mouse clicks with no chance of “update breakage” (and not a single bug or glitch from the start). It doesn’t matter who you blame, it’s not good enough.
Oiaohm; I don’t care if most of the problems were bugs in X11 and not “Xorg policy”. I don’t care if it was AMD/ATI’s fault that smaller OSs like FreeBSD and Solaris weren’t able to support DRI1/DRI2/DRI3 quickly enough and therefore made it impractical for AMD/ATI to change their “intended for many OSs” driver over to something only Linux supported.
Oiaohm; I also don’t care if everything has improved a lot recently or if it might work if I upgraded to recent X11 and open source ATI driver today. The fact is that after years of pain there’s no way in hell I’m willing to do anything that might upset the “carefully balanced pile of turds” I’ve got and risk ending up knee deep in shit again.
Oiaohm; don't get me wrong – I'm not saying that all of Linux is bad (it's rock solid for most things), and I'm not considering using any other OS for servers or as a programming environment; however, I'd still rather waste $$$ on a stupid locked down X-Box (that we all know will be useless/obsolete junk in 4 years) than attempt to get "free" software working for 3D gaming.
Also note, Oiaohm, that I’ve tried to put your name at least once in every paragraph in the hope that I sound like a patronising jerk. It seems to be fashionable..
– Brendan
Brendan, Solaris and FreeBSD both had DRI1 support by the year 2000. So why was ATI still using pre-DRI interfaces in its driver development in 2006? Cost cutting/stupidity.
So no, it was ATI not supporting current interfaces. 2006 was AMD's acquisition of ATI, and the Radeon HD 4770's tech development was in 2006; there is a bit of a lag between tech development and production release.
In 2006 AMD also announced that cards in the class yours is in would at some point have to live with the open source driver only.
Brendan, basically the best thing that could happen was for ATI to be acquired. Fixing up the stack of garbage they left behind is not simple.
I chose a card that was supported, and supported well, by Linux.
You were informed in 2006 that support would end but you paid no attention.
September 4, 2008 is when DRI2 arrived, and FreeBSD and Solaris picked it up by 2011. So yes, there is a 2 to 3 year lag.
Brendan, the reality is you chose a card whose maker was not producing current drivers, and now you are complaining when it no longer works. OK, it does work now, but with open source drivers that are still missing a few features; the maker is working on restoring all of those features.
Really, are you kidding me with Windows Vista and no possibility of breakage? Try running some of the not-properly-supported video cards in it, you know, the ones that force you to use DirectX 9 drivers.
Brendan, there is Linux/Windows-compatible hardware and there is Linux-incompatible hardware. This is not a Linux thing.
Did you build that machine to be a Linux machine? I would say no, because in 2006-2008 building a Linux machine would have meant using an Nvidia video card rather than an ATI one, given how poor the ATI drivers' support was.
My Nvidia card is a Geforce 6, a 2004-2005 card. It is older than yours and has had far fewer issues.
I will get 10 years of operational life out of the Geforce 6 with minimal issues before I have to replace it. Nvidia did promise this.
Brendan,
“Also note, Oiaohm, that I’ve tried to put your name at least once in every paragraph in the hope that I sound like a patronising jerk. It seems to be fashionable..”
Haha, this is so true it really made me laugh!
The real irony is that earlier in these comments I was arguing that the over-zealousness of linux community members was an overstated generalization, and yet here we have a poster who makes an incredibly solid case for the argument lucas_maximus was making.
I like linux and want to promote more widespread adoption. However, even for me it's a real turn-off when someone is as stubborn as an ass and refuses to consider the needs of the community as a whole. There's always room for improvement and I'm confident that linux is improving all the time. But the extreme arrogance of some individuals is very discouraging to those who want to join the linux community and make it better. At least I know they are in the minority, but they are still giving linux the reputation that lucas_maximus was referring to.
What are you smoking? I said COMPETES.
OpenGL sucks to work with. Look at the number of games that work with SteamOS: it is far smaller than the number that work with Windows. The reason is DirectX.
Yes, if Mantle becomes popular then Nvidia will release their own version, developers will just focus on those two APIs, and we can expect Linux drivers to be created by AMD and Nvidia. In that case Valve needs to do nothing.
You must be joking. Tell me some reasons OpenGL sucks to work with.
All the smoking and joking aside, the important thing is that Valve already recognizes the deficiencies of Linux/OpenGL compared to Windows/DirectX and is still willing to throw their (considerable) weight behind the effort to minimize/eliminate these deficiencies. Valve isn’t the type of company to do this flippantly. Regardless of the solution, they will almost certainly address these issues while simultaneously taking advantage of Linux/OpenGL’s inherent strengths.
What do you think is the graphics base of the Wii and the PlayStations? That's right, OpenGL. What about all those Android games? OpenGL again. Those platforms don't have DirectX. In fact there are still a lot of games on Steam that do have OpenGL support but have not been ported off Windows yet.
SteamOS and Steam on Linux are a newish platform. There are more games out there with an OpenGL base than there are DirectX games.
OpenGL does compete with DirectX quite successfully.
Sorry to say, you have a completely invalid view of the world. DirectX dominance on Windows is nothing more than an anomaly.
OpenGL is not used in serious PS3 titles; they all use PSGL. It's similar with the Wii – any serious game talks more directly to the hardware.
Has Valve released any details on SteamOS yet? My initial assumption was that it’d be Ubuntu-based, since Ubuntu was the first distro to get the Steam client. But I’m starting to suspect that Valve may be building a distro from the ground-up that’s optimized for gaming.
As many commenters have noted, things will get very interesting once Ubuntu starts pushing Mir while Red Hat and others want to advance Wayland. Valve will have to make a decision in favor of one or the other.