Finally. Finally the leader of a major distribution who has the guts to stand up and say what a lot of people have known for a long time, but didn’t dare to say because it usually leads to a storm of criticism. Mark Shuttleworth has announced that Ubuntu will be moving away from X.org, opting to go with Wayland instead.
I’ve said it before, and I’ll say it again: for desktop computing, X is a total and utter mess. Layers upon layers of cruft, with new features added with duct tape and glue, leading to an incredibly slow development pace for an already anachronistic graphics stack.
Ubuntu is the first of the major distributions who has the guts to stand up and say “no more”. Shuttleworth has announced that Ubuntu will switch to the brand-new and modern Wayland display server, maintaining compatibility with X.org by running X.org rootless inside Wayland. This way, the transition to Wayland can be gradual, minimising breakage. The first useful images should appear within a year, with the complete transition expected to take about four years, according to Shuttleworth.
“We don’t believe X is setup to deliver the user experience we want, with super-smooth graphics and effects,” he says, “I understand that it’s *possible* to get amazing results with X, but it’s extremely hard, and isn’t going to get easier. Some of the core goals of X make it harder to achieve these user experiences on X than on native GL, we’re choosing to prioritize the quality of experience over those original values, like network transparency.”
Ubuntu also considered Android’s compositing environment, but it would take too much time and work to adapt the free software stack to that solution. Ubuntu also talked to several providers of proprietary solutions to get them to open up their code, and they even considered writing their own. These options were all discarded in favour of Wayland.
The Wayland architecture is drastically less complex than that of traditional X. Most of the complex stuff the X server used to take care of (things like KMS, evdev, mesa, fontconfig, freetype, cairo, Qt, and so on) is now available in the kernel or in self-contained libraries, which has turned the X server into “just a middle man that introduces an extra step between applications and the compositor and an extra step between the compositor and the hardware”. In Wayland, the display server is the compositor.
This is incredibly good news, and I think it’s both wise and brave for Ubuntu to take this daunting but much-needed step – the announcement alone will increase interest in Wayland, and Canonical, too, will contribute considerable manpower to its development. Hopefully, other distributions will follow in Ubuntu’s footsteps, and we can finally enjoy a modern, fast, and stable graphics stack on Linux.
Well, it is probably good for 95% of users, but I loved being able to open a remote window from a server using ssh without any additional apps or protocols. X has great features, but yeah, when the client and server are the same computer, it is a bit overkill, and the protocol is not like any other modern display backend, probably because it is not modern at all.
As long as they ship Qt/GTK with X support, everything will probably keep working.
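For anyone who hasn’t used it, the workflow being mourned here is a one-liner with stock OpenSSH (real flags; nothing extra to install):

    # run the app on the remote machine; its window opens on your local display
    ssh -X user@remote-server firefox
    # the "trusted" forwarding variant, for apps the X security extension breaks
    ssh -Y user@remote-server gimp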
Yeah ssh -X is good, but the network transparency itself was so limited anyway as to be almost useless.
* It is a bandwidth inefficient protocol, so you have to use something like NX for anything but LAN speeds.
* Even with NX there is no way to shadow a local session. For that you have to use x11vnc.
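For completeness, the x11vnc route really is a one-liner (a real tool with standard flags):

    # attach to the session already running on display :0 and export it over VNC
    x11vnc -display :0
    # then, from the machine you are shadowing from:
    vncviewer remote-host:0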
I haven’t personally used it, but there’s mention of a “shadow” desktop profile in the NX docs, and if you go into the Advanced Configuration, you can select “Shadow”. Not sure what it does, but it’s there.
I think the future of network-transparent desktops lies with ownCloud for your personal computing, and Telepathy Tubes for controlling remote apps / receiving streaming media.
That’s an easy thing to do when you don’t actually invest resources towards the stated goal. It costs him nothing, as he invests no development time towards the display stack.
I fear this move, as it emphasizes glitter over utility. Network transparency is extremely useful, and is used far, far more often than most people realize.
I feel this is a bad move: a side-effect of this change is that for commonly used toolkits, such as GTK or Qt, development energy (which is finite, though lots of people forget that point) will move away from X11 towards Wayland.
Remember this when you think both can coexist without negative consequences for one.
Network transparency at the core of the display system simply isn’t an essential feature for most people, especially if clinging to it is going to hold back the rest of the display architecture.
MS Windows’ core display system isn’t network transparent, and neither is Mac OS X’s. And yet, somehow, people have found ways to work on them remotely. It isn’t a real problem that requires sticking with X11 to the detriment of everything else.
Apple had to make this same choice 10-12 years ago. Originally they WERE going to use X11 for Mac OS X, but they quickly realized that they’d have to cut it up into little pieces, shove them in a blender for 4 years and reassemble them to make it do what they wanted, so it was easier to just use something else.
Clinging to it hasn’t had an impact on OpenGL performance on X, and it hasn’t held back the display architecture. If it had, X would have been discarded ages ago.
It should be considered bloat, but not a bottleneck.
For graphics performance, X already is discarded.
What do you think full-screen OpenGL apps are doing? They use direct access, bypassing X entirely. That’s what DRI is: the Direct Rendering Infrastructure.
Except when you are using AIGLX and piping OpenGL over the network, meaning the machine you are sitting at (the X server) is doing the OpenGL rendering. Which is not only bloody clever, but means a much better user experience, as it’s much faster.
Err… Windows’ UI system is actually more like X than you might think.
In terms of network support, there’s really only one difference. In Windows, an application always talks to a display server directly, using local RPC and shared memory. The display server can act as a proxy, and send all that over the network. In X, network support is required in the applications, as well as the display server.
Having thought about this… I actually like the way Windows implements it. The trickier bits, like latency hiding, and handling the network protocol efficiently, are done by the UI system and the display server. They actually do a good job of it. In X, applications (or UI toolkits) are required to do this, and they do a lousy job of it. That’s why you really need something like NX, which makes X over WANs actually usable…
Being network transparent is fine (although delegating that responsibility to applications is a bad idea). That’s not the problem. The problem is the rest of the legacy crap X11 has built up over the years. At least Windows seems to implement most of the legacy crap on top of the current way of doing things. In X11, newer stuff has to be implemented on top of the legacy crap.
To be more specific, applications talk to the kernel window manager and rendering system (both implemented in win32k.sys). There is no RPC involved AFAIK.
The kernel renderer has display drivers (the old XPDM-style drivers), one of which is the Remote Desktop display driver that converts the drawing and/or the drawn data to send out over the network.
You forgot to mention how Windows interacts over the network compared to X-Windows. The big difference is that Windows only does full RDP protocol; and thus it only pushes the entire display – desktop, start menu, applications, etc – over the network as a whole. It has zero capability to isolate and send just a single application.
However, X-Windows can isolate single applications or send the whole display. It typically works by having the local system proxy to the remote system, whereby the X-Windows server then does the actual drawing.
Want a full KDE/GNOME experience? Set up X11 port forwarding and then run ‘startx’.
Want just an application? Set up X11 port forwarding and then run that application.
Can’t do that with Windows. It’s RDP or nothing (at least with native, built-in functionality, since you can get applications, e.g. Cygwin, that can do it).
Bzzzz! Incorrect.
The updated Remote Desktop Connection client in Windows XP Pro (Service Pack something, installed via Windows Update) and the RDP client in Windows 7 (guessing Vista as well) includes the ability to run a single program instead of a whole desktop. It’s all managed via the “Programs” tab in the client.
There’s also “seamless RDP” which allows you to do the same for non-Windows machines using an rdp client and rdp server.
It’s not the default mode, and it may not work everywhere, but it’s there.
As an ex-NeXT/Apple engineer, I can tell you the premise that Apple was moving to X is a joke.
The WindowServer from OpenStep, with DPS, was never in doubt. What was in doubt was DPS itself, given Adobe’s absurd $10 fee on every copy of OS X, and thus Display PDF was born in the minds of Peter Graffagnino and the other brilliant guys on his Quartz team.
WindowServer was re-written and Quartz/Quartz-Extreme, etc., followed.
There was never a thought about X-Windows.
The reason Apple changed their Compositing Engine was because Mac OS was dead and Openstep was the foundation for OS X.
NeXT re-invented itself as Apple.
There are no X-Windows.
What Apple abandoned with Mac OS X wasn’t X11 but Adobe’s Display PostScript technology, in favor of Quartz, which is a sort of compositing engine built on ‘display PDF’.
Display PostScript provided superior graphics in WYSIWYG printing and DTP environments, but the licensing fees seem to have been more than Apple was willing to pay, so it developed its own display model instead.
Sun Microsystems’ X11 was extended to provide the Display PostScript support needed for OpenStep to run on Solaris, but this never played a part in Apple’s decision making. The basis of Mac OS X was NeXT’s OPENSTEP (the Mach 4.0/4.1/4.2-based operating system born of NeXT’s and Sun Microsystems’ joint effort to develop OpenStep as a platform-neutral application and development environment running on NeXTSTEP/OPENSTEP, Sun Solaris and Windows NT, an effort that eventually led Sun to develop Java to achieve the same goals), not OpenStep on Solaris.
I fear this as well.
So the question is… what will Ubuntu put in place that allows the power of X to continue in this new generation display stack?
… and I’m not talking “backward compatibility”
Yes we all want new stuff but with the awesomeness of X preserved.
Just as on OS X and Windows, X11 can run on top of Wayland. It doesn’t get more compatible than that, imo.
You can either
1. Have the X server installed alongside Wayland
2. Forget about it as there are hundreds of other non-desktop distributions that do this
3. Have a separate X-Server edition
Not for Ubuntu’s target demographic it isn’t. If network transparency is useful for you, use a distro that has not switched to Wayland.
I agree completely. If I’m going to be doing something that requires remote X sessions and the like, I’m not going to be using Ubuntu in the first place. I’d stick with stable, fast standbys like Slackware, that make it super easy to get your hands dirty.
Ubuntu is fast-track desktop Linux, and while it can be used for powerhouse computing, that’s simply not what it was designed for. Even the server edition leaves a lot to be desired.
To put it another way: I love Ubuntu on my netbook, but would probably be frustrated with Slackware on it. The opposite is true regarding my main desktop though.
I support Ubuntu Desktop machines as many as 2000 miles away from me.
If I lose that network transparency… it’s not a good thing. That would make it near impossible for me to use the tools the people are complaining about and see what they see from a first-hand use perspective.
So… I hope everything just effing works.
While Shuttleworth’s claims about performance may be untrue (is there any benchmark evidence on the web that Wayland is significantly faster than X?), the great advantage is indeed simplicity. People who understand the X server are few and far between, while Wayland seems like something quite approachable (and supportable, forkable) by mere mortals.
That said, I’m pretty sure nerdgasms were had by various people around the world because of this. Wayland has been laughed off by tons of “people in the know”, and it takes a foolhardy astronaut to push the envelope here.
This is way cooler than the news about switch to Unity ;-).
That’s the heart of the problem though. The speed of development on X.org is glacial, and that is MILES better than it used to be under XFree86. Even if Wayland is total garbage in comparison, if it was done in a more intelligent fashion it would be worth banking on it for the future.
That initial release of Ubuntu with Wayland is going to totally suck though. PulseAudio all over again.
“Mark Shuttleworth has announced that Ubuntu will be moving away from X.org, opting to go with Wayland instead.”
Except, no, he didn’t. He made a vague statement of a possible future direction.
This is made clearer in the comments.
“# mark says: (permalink)
November 5th, 2010 at 4:01 pm
@Tim
It’s highly unlikely that the default Ubuntu install in a year will be on Wayland. It’s possible that there will be versions of Ubuntu that use it, or proof-of-concept images, by then. More importantly, we won’t make it the default until it really is widely supported and supportable, by folks using a wide variety of hardware providers. And as for network connectivity, there’s sufficient time between now and then for these problems to get solved.
And finally, you’ll still be able to host an X application on a Wayland desktop in Spain. It will feel a little old, perhaps, because everything else on your machine in Spain will be crisper, but you’ll still be able to do it.”
Everyone seems to be reading a heck of a lot into this, when all it currently boils down to is that Shuttleworth thinks Wayland is neat and it would be cool to run Unity on it. As far as anyone’s stated publicly or any commit logs indicate, Canonical has done no work on Wayland nor any work on making Unity run on Wayland, as of yet. Nor any work on the supporting drivers which would be necessary for this to happen.
What?? A dose of balanced, thoughtful, well-delivered facts. That’s enough of that, mister AdamW. Out here in the “dubyas” we don’t tolerate that kind of behavior.
My mother always taught me, if you don’t have something controversial to say, then don’t say anything at all.
I’m terribly sorry, let me rectify that terrible oversight.
Mark Shuttleworth eats kittens lovingly prepared by Jono Bacon using chainsaws and a blowtorch!
There, does that do the trick?
See, that wasn’t so hard after all.
Possible? Vague? I guess you read a different blog post than I did. What is vague about “The next major transition for Unity will be to deliver it on Wayland, the OpenGL-based display management system. We’d like to embrace Wayland early […]”?
You’re really reading a different post than the rest of the world here, Adam. He clearly states that Wayland is the way to go, and that the first test images should be available within the year – exactly what my article says, exactly what the comment you copied says.
With all due respect, you might want to note that you are employed by a competitor of Canonical/Ubuntu. It would give your comment the necessary context.
“What is vague about “The next major transition for Unity will be to deliver it on Wayland, the OpenGL-based display management system. We’d like to embrace Wayland early […]”?”
It doesn’t include a date, a roadmap, or any details at all. “We’d like” is (I expect intentionally) a very weak (that’s a linguistic term, not a judgmental one) phrasing – there’s a big difference between ‘we are going to embrace Wayland early’ and ‘we’d like to embrace Wayland early’.
“He clearly states that Wayland is the way to go, and that the first test images should be available within the year – exactly what my article says, exactly what the comment you copied says.”
Again, no, he doesn’t. He says “Timeframes are difficult. I’m sure we could deliver *something* in six months, but I think a year is more realistic for the first images that will be widely useful in our community.” None of that is a definite commitment; it’s all aspirational. It also very definitely *doesn’t* say “Wayland will be default in 11.10”, which for some odd reason is what half the press has decided it means.
“With all due respect, you might want to note that you are employed by a competitor of Canonical/Ubuntu. It would give your comment the necessary context. ”
Er, it didn’t seem necessary. What I say isn’t a judgement in any way. If you want me to pass judgement, I’d say committing to shipping working Wayland images in a year would be insanely foolhardy, and Mark’s doing exactly the right thing by *not* committing to that. I just wanted to point out that he isn’t, actually, definitely committing Ubuntu to Wayland, nor is he definitely committing to any particular timeframe for the migration if it does happen. I think he was probably a bit surprised to wake up after writing that blog post and find world+dog declaring ‘Ubuntu is going Wayland next year!’
“You’re really reading a different post than the rest of the world here, Adam.”
The rating on my post suggests otherwise.
Which I didn’t do.
Oh, more stuff wrong with the article: you wrote the headline in the present tense: “ditches”. That suggests it’s happening *now*.
Also, the choice of verb “to ditch” at all, and the phrasing “Ubuntu will be moving away from X.org”. Shuttleworth’s post is about making Unity run on Wayland. It does *not* say anything about not running on X.org. In fact, it says rather the opposite: it explicitly says both that they’ll be making sure you can run X apps on Wayland, and that you can still use X instead of Wayland if your hardware needs it:
“Nor is it a transition everyone needs to make at the same time: for the same reason we’ll keep investing in the 2D experience on Ubuntu despite also believing that Unity, with all it’s GL dependencies, is the best interface for the desktop.”
basically the blog post says “we’re going to make Unity run on Wayland, and our hoped-for end game here is that we’re able to make Wayland the default Ubuntu display server at some point”. This is quite a long way from “Ubuntu Ditches X, Switches to Wayland”.
Yes. Which is exactly what I wrote in the article: “Shuttleworth has announced that Ubuntu will switch to the brand-new and modern Wayland display server, maintaining compatibility with X.org by running X.org rootless inside Wayland. This way, the transition to Wayland can be gradual, minimising breakage.”
The blog post is far more definitive than just “hoped-for”. The switch to Wayland is real, it’s going to happen; there’s nothing “hoped-for” about it. The time frame set by Shuttleworth is the exact same one in my article.
No matter how many mod-points you get, it doesn’t change the fact that my article says exactly the same thing as Shuttleworth does.
I don’t think either of you have said anything that is completely “wrong”, it’s just the way you are interpreting what Shuttleworth said. I think Adam’s interpretation is probably closer to reality. Your story makes it sound like Ubuntu is now dropping X immediately and going forward full steam with Wayland, even if that’s not exactly what you said. The reality is that this was mostly a warning to AMD and NVidia to prepare a plan to get their drivers working. Plus it sounds like there will be work on getting Ubuntu’s new desktop running in Wayland eventually, but probably not anytime soon. I doubt it will be much past the proof-of-concept phase by 11.10, and 12.04 is a LTS release that certainly won’t do much in that area. Maybe we’ll start seeing demos around 12.10…
Shuttleworth notes that they’re planning on helping out Wayland development with touch input code, since that’s what Ubuntu knows, which should tell you everything you need to know right there – they’re relying on Red Hat and other developers to actually get Wayland into a state where it’s shipping, and will just try to take the lead in advertising it and getting others to port their applications to it away from X. That’s actually something that is really needed if Wayland will get traction on the desktop, but it also means that Ubuntu is almost completely at the mercy of other players when it comes to creating a timeline to actually ship the thing.
Man, what’s up with the coherent, rational arguments? That’s no way to feed the irrational Ubuntu-hating crowd.
We’ll have none of that logic of yours here, you hear?
Does anyone know if copy-paste actually works?
I’m gonna be bold and say yes. As evidence I’m submitting the post of vivainio. The one above yours.
In my 10 years using Linux, I’ve never had a problem with C&P. Why do so many people using Linux have problems with it?
What is not working?
I believe there are 3 completely separate copy-and-paste systems running under Linux, and the one in X isn’t even used by most people. They use the stuff that GNOME/KDE do instead. It would be nice to see all this consolidated into one Wayland system.
There is only one system in X as well, but it can be used on multiple channels.
Mouse select and middle mouse button paste, keyboard/menu based copy&paste and drag&drop are all working through the same mechanism, just on different “channels.”
Toolkits like GTK+, Qt, and Java/AWT implement that mechanism and usually provide access to all such channels, but always at least the main clipboard and D&D.
A clipboard or D&D operation in X does not at first include any data; the source application announces that it can provide data and in which formats it can provide it.
The target application (where the drop occurs or where paste is performed) then asks for the data in the format it likes best.
Example: the source application is a browser, and a portion of text with markup is selected. In a copy operation it could announce the text as text/plain without any formatting, as well as text/html, but also as ODT (if it has export-to-OpenDocument capabilities).
Depending on the target’s capabilities, it in turn can now select one of these and then the actual data transfer happens.
This even allows transporting data in an application- or toolkit-specific format in case both apps are the same or use the same toolkit.
For example a Java application will always announce a MIME type equivalent of java.lang.String when pasting text, allowing a target Java app to just get the serialized String object.
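You can actually watch this negotiation from a terminal with xclip (a real utility; TARGETS is the standard query defined by the ICCCM selection protocol):

    # ask the current clipboard owner which formats it is offering
    xclip -selection clipboard -t TARGETS -o
    # request a specific one, e.g. the HTML flavour of the copied text
    xclip -selection clipboard -t text/html -o
    # the middle-click channel is the separate PRIMARY selection
    xclip -selection primary -o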
The Windows clipboard system has a similar feature. When you copy something in say Excel, the program where the paste happens can choose which format it wants the data in; BIFF, XML, HTML, RTF, CSV, DIF, SYLK, Text, Bitmap, etc.
What I wonder is this: In Windows you can still paste after the source program has exited, while in X11 you can’t. Does this mean the program does 20 times more work on Windows when you do a copy in a program supporting 20 clipboard formats? Does copying 60k rows in Excel take up 20 times more memory than it needs to?
(When I copy in Excel 2007, it puts 29 formats on the clipboard. Some of these have types such as “DataObject” or “OwnerLink”, but nearly 20 are actual different data formats.)
God damn you OSAlert!!! I spent quite a while writing a long edit note that got killed by the timeout! Now I have to write it again as a reply:
I have a tiny program that lists the formats on the clipboard, which was how I knew there were 29 entries. To partially answer my own question, all I had to do was close Excel, and then check the clipboard. It turns out only 5 formats remain; RTF and Text in 4 different encodings. Text seems to always come in the 4 encodings, so I guess this is a feature of the clipboard.
It also seems to depend on how I close Excel – cleanly or from the task manager. When I kill it, RTF is not there, but Text and some formats that are removed when Excel exits normally are. Pasting still works.
My guess is therefore that Text goes onto the clipboard right away, while the rest of the formats are on demand, but RTF data is added when Excel is exiting. Can anyone confirm this, or explain what is really going on?
EDIT: Copy/paste from Excel to Paint works while Excel is open, but not after it has closed.
It makes everything ever so much better.
I do that for anything that needs a lot of thought or a lot of explanation…
It just makes it easier.
1. select some text in your web browser, hit CTRL+C
2. close the browser
3. open a text editor, hit CTRL+V
4. see what I mean?
No. I am using KDE. This problem hasn’t existed since 2.0.
This is purely a GNOME issue (in the default configuration), I believe.
KDE uses klipper by default. This is a better clipboard than Windows XP had (I can’t speak for Windows 7).
http://en.wikipedia.org/wiki/Klipper
Klipper has clipboard history, and it merges X selections and Ctrl-C selections if you want it to, or it can keep the two separate if you like.
“Keeping them separate” means you can do either of the following and they will not interfere with the other (i.e. you can have two different active clipboard selections at the same time, if the X clipboard and the Ctrl-C clipboard are kept separate):
X clipboard:
1. select some text in your web browser
2. with the browser still open, open a text editor, middle-click
Ctrl-C clipboard:
1. select some text in your web browser, hit CTRL+C
2. close the browser
3. open a text editor, hit CTRL+V
If you run klipper, which is part of KDE by default, then both copy-paste methods above are available at the same time, separate from each other. They can have different clipboard contents. If that is too confusing, then the two clipboards entry methods can be merged into the one combined clipboard, if you like.
For GNOME, one can choose to run Glipper instead, which offers similar functionality:
http://en.wikipedia.org/wiki/Glipper
Glipper is typically not installed by default, but it is a simple matter to add it.
KDE users by default, and GNOME users who have installed glipper, do not have the “problem” quoted above. It isn’t actually a problem at all.
Unless MeeGo beats Ubuntu to the punch. It makes more sense to me for Wayland to cut its teeth in an environment where the lack of an X server wouldn’t be much of a problem, and where there isn’t much legacy software that people depend upon.
http://www.phoronix.com/scan.php?page=news_item&px=ODYwMQ
but it’s still pretty stupid.
This shouldn’t have been announced so early into Wayland’s development. I think they should have started contributing, added it to the archives with a howto once it was able to do anything, then made an experimental installable livecd to play with.
Maybe that’s what they’ll do, but to utterly dis X is not a good way to get your patches upstream in the meantime.
Here’s hoping Wayland is the panacea, because I think a lot of weird little distros will be popping up soon…
They are basically giving NVIDIA and ATI 2 years to make a driver. If they had announced it with only a year’s notice, the vendors would not make it in time.
What?
Could you use a bit more punctuation?
I know that it’s a bad idea. I said that.
It’s brave, but there’s an arrogance to the whole thing that’s just plain hilarious.
Read words for their inherent meaning. I said it was stupid, and that it was bad timing, and could be managed wrong, but might not be.
This is unfortunately the “Linux way”[1]. Now if only they went about rewriting and cleaning up the kernel in the same manner.
1. “On to the next thing, compatibility be damned”
I was running Linux well before a Matrox G450 was a top of the line card and for the life of me, I cannot understand what the issues or problems people seem to have with Xorg.
It has always worked well for me on a variety of hardware, in a variety of software environments, and network transparency is a killer feature. I was the project director on a set of school labs where people donated old hardware that was turned into thin clients and we donated the servers.
Given enough RAM, it was possible to have fifty sessions running on a single P4 or Xeon server of the time. These computers/thin clients brought the internet to lots of low-income and hard working people who could not afford internet access or a computer.
For me the X architecture is a thing of beauty and its development pace has been just fine. All my hardware old and recent has worked rather well within six months of it being released, usually within weeks.
I am writing this just in case some hard working X developer reads these forums on the hope that they may understand that their hard work is appreciated and has changed the lives of millions of people.
Yes, but you are not the use case Ubuntu is going for. What good would network transparency do on a desktop or a netbook? That’s why all other major OSes avoid it and use a layer on top of their display server instead. Like I said, X11 will still be there, running on top of Wayland. Most of X runs in userspace anyway; it won’t really make a difference either way, and you should still be able to run it as you normally would if configured correctly. Very similar to how OS X uses it.
Wayland will just be a more efficient display server, but it still leverages a lot of the work the X developers have done lately in trying to get X to a usable state for modern desktop use. The fact that Wayland’s focus is only on being a display server and compositor should reduce complexity and should increase developer interest. Currently X is so big that, for such an important part of the Linux desktop, it has a limited amount of resources and developers compared to other OSS projects.
I never understood why X tried to do everything within itself. Things like input, drivers, etc. should have been handled by the kernel to begin with, imo.
“I never understood why X tried to do everything within itself. Things like input, drivers, etc. should have been handled by the kernel to begin with, imo.”
That is because of a lot of Unix history too long to summarize here, but in brief: X was developed to share common infrastructure between proprietary Unix vendors, and when XFree86 was formed, its developers were in a completely different universe, with no input or access to the kernel teams of the various Unix vendors, which meant having to do everything themselves. X.org forked from XFree86 and has been doing the monumental work of modularizing X and moving bits to other places where it makes sense, and this includes the kernel.
Wayland, originally developed at Red Hat and now supported by Intel and others, is a means of taking advantage of the modern architecture that X has been moving onto, bit by bit. It is in essence a complete rewrite of the graphics subsystem of Linux, done in a staged manner.
Aaah. I like that plan.
This does leave me with one pondering thought: does everyone understand the master plan? It seems some of it inconveniences some folks. Instead of modernization, they’d rather drop whatever is inconvenient at the time…
I think that sort of thing has to happen. Some people will be annoyed by a deprecated api or system call, but sometimes they have to die.
I’d say porcel has a point: I myself have built and work on an XDMCP-based school lab that runs on old PCs turned into GUI terminals through X. The fact that X is able to offer me this excellent way of remotely running apps, due to its network transparency, *is* a killer feature. And yes, there are ways for other display systems to do this somehow, but not in the transparent way X does it.
There is also a hidden point here: I agree with you, if you need X for this reason you will need some other distro than ubuntu. But then ubuntu is (at least for the time being) the leader in desktop linux. If ubuntu moves away from X, it won’t be long until other distros will do the same and X developers will probably cease development due to lack of interest. Then the rest of us will have no choice.
Ubuntu will probably keep X for Edubuntu, as it uses LTSP.
Depends on what kind of environment you run that desktop in. On your home PC, it’s probably not a lot of use. In a software development shop where logging into other people’s machines with ssh is routine, it’s absolutely invaluable.
Ok, so which kernel are you talking about? Linux? But X11 predates the Linux kernel by about 4-5 years. Or if you mean the kernel in a more generic sense, then you either need all possible kernels to present the same driver interface (which isn’t going to happen), or you need some code that’s both OS-specific and X11-specific – e.g the evdev input driver on Linux.
Indeed. Remote X was an invaluable feature in the lab I worked in. It was actually a lot better than the way (as I understand it) most remote desktop clients work, because it didn’t lock the remote machine’s screen: ten different users could be remoted into the same machine from ten other seats, without affecting a user actually sitting at that seat. (IIRC, Windows remote desktop, at least, will lock the remote screen. This would not have worked.) There are some situations where remote X is extremely helpful.
And some of us have more than one computer at home, too.
The RDP server in Windows XP Pro (probably Vista and 7 as well) only allows 1 user to be logged in, regardless of whether it’s a local login or a remote login.
Terminal Services in Windows 2003 (and newer I’m guessing) allows up to 5 users logged in simultaneously. If you want more than 5, you need to pay for extra licenses.
There are hacks online to allow the terminal services dlls from Win2K3 to be used on a WinXP system, allowing you to connect 5 users simultaneously. Works surprisingly well. Especially when XP is running in a Linux-KVM virtual machine.
Thanks for the info. That still wouldn’t have worked: we had applications that only existed on one machine, that the entire lab might need — you could easily be talking about more than five people. Tools like the Qt development kit, OpenOffice newer than 1.0 and MatLab all only existed on one or a handful of hosts. And there could easily be more than fifteen people working in that lab at a time, any one of which could conceivably need access to any one of those tools.
There’s also the nice property that remote X windows interleave with your own local application windows, which enables smoother workflow than otherwise.
That’s false. Remote Desktop Protocol, or “Windows remote desktop” as you like to call it, actually allows many users to use RDP at the same time without “locking the screen”. Maybe you are confusing it with VNC or Radmin?
Anyway, ssh -X sucks badly across a long distance; RDP is much better in that respect. There are many solutions which allow using a GUI remotely and which do a much better job than X.
The X protocol is archaic and a dumb thing to use in the 21st century.
Maybe they should name Wayland WNX: WNX IS NOT X.
The protocol itself allows for multiple simultaneous access. Terminal Services is built on it.
However, Terminal Services in Windows XP is limited to 1 connection only. If I’m logged into the console, and you connect via RDP, it will log me out (well, pops up a warning, and I get the option). If you’re logged in via RDP and I sit down at the console to login, you get disconnected.
It’s an artificial limitation imposed by Microsoft to get you to use the more expensive Windows Server.
Even that, though, is limited to only 5 concurrent logins. If you want more than 5 concurrent logins, you have to buy Terminal Services licenses.
Why is it people still keep bringing up network transparency as the defining difference (and deficiency) between X and anything else?
If network transparency were the only good thing about X vs. Wayland or any other up and coming replacement then we’d all just be lobbying for network transparency to be added to the new system. The people who hate X, irrationally I might add, always harp on network transparency as the obvious cause of all of X’s flaws and issues, implying as self-evident that any system without it is better. This is the only reason why network transparency is such a cornerstone of X-replacement ‘debate’: the people who do not understand or like X keep bringing it up!
The real value of X is that it was designed carefully by a number of smart people to be as neutral as possible and as extensible as possible. There are certain fundamental flaws to X’s architecture, which the designers themselves freely admit, but these problems almost never come up amidst the barrage of “We hate network transparency” bullshit. Thus, supporters of X are forced to defend network transparency which, while useful, does not illustrate why any system that is not X will not be as good as X.
How has X lasted as long as it has with its protocol unchanged? It’s not because the protocol is the best or because all future problems were foreseen! It’s because it was designed well and made extensible. Is Wayland extensible? If it is not, then it is fundamentally worse than X. Take a program written against X11 20 years ago and run it today against the latest X.org server. It will work. It may not be very pretty, because the old program doesn’t talk any of the extended features, but it will work. Can Wayland promise that the same will be true in another 20 years?
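You can see that extensibility on any modern installation; xdpyinfo is a standard X utility, and on a 2010-era server it lists dozens of extensions layered behind the one stable core protocol:

    # list every extension the running X server supports, with protocol opcodes
    xdpyinfo -queryExtensions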
There are a lot of good reasons to replace X with a better X but not a lot of good reasons to replace X with a system that is poorly conceived, poorly designed and solves a small subset of the problem-set that X solves.
I’ll close out by bringing network transparency in to the debate again by means of this observation: If your display system is sufficiently good it will give you network transparency for free, regardless of your fundamental architecture. How does Wayland implement or plan to implement network transparency? If it just can’t then it’s inferior. If no plan exists then it’s inferior and its developers are insufficiently foresightful to be building a display system.
It’s easy to solve some of the problems some of the time. To replace the incumbent you must solve all of the same problems at least as well, or state explicitly why it is that you won’t or don’t need to, or explain why not solving them as well is still better than sticking with the incumbent. Perhaps the Wayland developers can describe all of these things about their system and perhaps they can’t; it doesn’t matter here. What matters here is that if you can’t so describe then you shouldn’t be running around declaring Wayland as the presumptive usurper to X’s crown.
By running X server as a normal application within Wayland.
Sorry, but that only gets you transparency for X apps, not for native apps. What happens when there are no more X apps?
You will typically use either Gtk+ or Qt apps. Those can eventually adapt to your environment, so they use the X backend if you want it.
E.g. with Qt you can already specify the graphics system you want to use on the command line (./myapp -graphicssystem opengl). This is a small technical problem, not a large philosophical issue.
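For the record, that Qt 4 switch takes several documented values, so you can run the same binary with different paint engines:

    ./myapp -graphicssystem native   # the default: platform (X11) drawing
    ./myapp -graphicssystem raster   # pure software rasterizer
    ./myapp -graphicssystem opengl   # the experimental GL backend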
I’ve read that Wayland’s main developer thinks that network transparency should be handled by the toolkits on top of Wayland.
This can work: the toolkits have to be able to ‘speak X’ for compatibility anyway, so they could use their X compatibility mode for network transparency.
No, network transparency has to be done at the lowest level possible, beneath all the GUI tookits. Otherwise, you end up with each toolkit doing it a different way, and you have learn Z different ways to do things for Z different apps.
Are Windows apps written to be network-aware? No, because it happens at a lower layer.
Are MacOS X apps written to be network-aware? No, because it happens at a lower layer.
Are KDE apps written to be network-aware? No, because it happens at a lower layer.
And so on.
Network transparency has to occur near the bottom of the graphics stack. Otherwise, it’s no longer transparent to the user, and basically useless.
It’s worth noting that even Windows does this below the toolkit level, from what people have been saying. On a *nix system, saying “let the toolkit do it” is like saying “let’s have six hundred incompatible, incomplete and poorly thought-out ways of doing it”. GTK and Qt cannot even agree on something as simple as sharing common theme preference data, such as your preferred fonts; they’re never going to agree on something like this unless it’s something that just happens.
Uhm, yeah, that’s what I said.
This is a recipe for disaster. Simple example: Developer writes app on top of GTK on top of Wayland. Performance is Good Enough(tm) and he releases it. Some user tries to run it on top of X and hits a corner case or a race condition which doesn’t get exposed under App->GTK->Wayland and causes a crash or a showstopper performance issue. User reports this bug to the app author, who never did test his app under this configuration because he doesn’t need it himself. Most authors at this point update the documentation to mark the X back end as buggy and unsupported and close the bug. Now imagine this effect multiplied by thousands of apps over ten years. Ten years after Wayland becomes the default you won’t be able to run just any app on top of GTK->X.
In all probability it won’t even get that far. Some smart GTK developer will wake up in 5 years and say “Hey, you know what? We have a pile of on-top-of-X code that no one really likes to maintain any more, because most GTK developers just care about Wayland, and it’s getting crufty and accumulating blocker bugs and making evolving the toolkit really messy. How about we just remove support from that from the next release?” and without someone stepping up to clear the backlog this will happen, then poof! So much for network transparency.
The key word here is “transparency” as in “No one has to worry about this, it just happens.”
God bless you.
“what good would network transparency do on a desktop or netbook”
I frequently open apps from different machines. I’m on the couch with my notebook but want my desktop’s browser and plugins; done. Need to pop open a GUI app from the notebook while someone else is using it; done. Want to use a work-issued notebook as a thin client around the house; done (via liveCD and ssh -X).
Granted, it’s not a universal need but it’s darn handy to have available.
It’s 2010. Anyone who says ‘network transparency is bloating X/killing us’ is either a troll or an idiot who just wants to be involved in the conversation without actually knowing what they are talking about. Plain and simple. Please stop.
For you trolls and insecure-but-wanna-be-in-the-conversation 5 year olds.
I’m gonna be rude and say this: “You are too stupid to actually look into the issue rather than just parroting bullshit you heard somewhere at some time from someone who heard it somewhere at some time, so JUST.SHUT.THE.HELL.UP.” If I’ve pissed you off, good, maybe you’ll go look into it to prove me wrong. (Not gonna happen, but at least you’ll look for yourself.) For you other idiots who just wanna flame me because I revealed your stupidness, eh, I got better things to do than respond to your now-obviously-mad-because-i-showed-the-world-how-dumb-you-really-are comments.
On a local machine, _ABSOLUTELY NOTHING_ goes over the network sockets. NOTHING!!! You idiots who keep saying this should just go back to Slashdot.
Now if you wanna pretend you actually know something, shift the argument to say this: “I can’t believe X does all those unnecessary context switches drawing stuff to the screen. I also can’t believe that Canonical/Intel/Redhat/Novell aren’t helping Jamie and Josh with XCB.”
LOL!
Your post is true; however, you leave out a few key details:
1) X does use Unix domain sockets on a local machine. That’s about as close to “network” sockets as you can get (and it’s not like X would be sending its data off to the Internet only to be redirected back to the machine it came from).
2) Wayland does not make network transparency impossible. Network transparency really belongs in the toolkits, which would be much more efficient than in a generic protocol like X.
3) The X protocol is extremely bloated and outdated. Even with XCB, it is still nowhere close to being as clean and efficient as Wayland’s protocol.
>Network transparency really belongs in the toolkits, which would be much more efficient than in a generic protocol like X.
Utterly absurd. The last thing anyone needs is each and every toolkit do its own thing with its own protocol, reinventing the wheel again and again.
Nobody said that it needs to be different protocols. Besides, if you want to argue that my opinion is nonsense, you should at least pay attention to my arguments.
You lose credibility once you start calling anyone “idiot” just because they have an opinion you do not agree with.
ok. I know I said I wouldn’t respond but I will to this one.
No, calling an idiot an idiot costs me no credibility. That idiot may not like me, but I don’t care about that.
It’s nothing personal, but someone needs to slap idiots upside the head. Otherwise they don’t know they’re being stupid. And mindlessly repeating something that the developers themselves have repeatedly stated is false (and given exact reasons why) is the mark of an idiot.
If I look at my Android phone and then at my Ubuntu desktop, Android looks and feels much more modern and nicer to use. People who hate X think it is because of X, and that using something like Wayland would make it as nice as Android.
If this isn’t true, you should explain why instead of calling people idiots.
Credibility isn’t about what you care about; it’s about the people you are trying to address.
In my humble opinion, whenever you feel the need to call anyone an “idiot” or “slap idiots upside the head”, it has become *personal*.
That could be called “lack of knowledge” too. And you happen to be right and in the position to insult people because…?
I beg to differ. Calling someone an idiot is an attack, plain and simple. It causes an air of hostility that is unnecessary and harmful to the conversation.
If someone is doing or saying something that seems idiotic, pause for a moment and consider the fact that they may just be ignorant. There’s nothing wrong with being ignorant; every single human on this planet is ignorant of something, and most of us are ignorant of a lot of things. I’m not an idiot because I can’t understand most of molecular physics; I’m ignorant. If I were to go to school and learn about that subject, I would no longer be ignorant.
And this is purely an observation, but you might want to consider that you are being modded down constantly because you are ignorant of certain social standards regarding rudeness and false superiority.
(Ok, maybe two replies. )
I’m attacking idiots. Ok, fine. Someone needs to.
I am not attacking anyone specifically. However, if anyone here still thinks that ‘network transparency’ is the problem, then consider my post a slap upside the head.
There’s a difference between being ignorant and being an idiot. There’s nothing wrong with being ignorant.
But if said person is ignorant about something yet still feels the need to comment on it (as every person who keeps repeating the network transparency garbage does), then that person is an idiot.
Luckily, idiotness is curable. Simple research and thinking for oneself.
Regarding the downmodding:
Honestly, I don’t care. If one person has looked into the issues for him/herself and now knows why network transparency isn’t the issue, then I consider that one step forward in the battle against stupidness. If it costs me 1000 negative karma points, that’s fine by me.
It’s not an ignorance of social politeness. I chose my words.
After 10 years of listening to the same argument said over and over and over, maybe a little rudeness is in order.
I believe you’re going for “stupidity”… but I’m probably the last person to point at spelling or grammar.
It’s okay, he was just ignorant of the right way to say “stupidity”. Now that you’ve corrected him, he is no longer ignorant.
well played sir.
I wish I could mod you +10 Absolute Damn Truth.
It annoys me to read that network transparency is no longer a main feature of a graphics stack. For me it is the most important feature of X besides displaying graphics at all.
But here only a trend continues that seems to be inevitable.
At our company everybody works at an X11 thin client. We see that with each upgrade of our distribution of choice, GNOME and KDE run slower than before.
KDE 3.5 especially was perfectly usable over the network. KDE 4 is a nightmare to use over the network. In my opinion Qt 4 is the cause.
Most of our KDE users have now switched to XFCE.
I think there are enough users like us that we will not have to give each user his own PC.
The main issue with X11’s network transparency is roundtrip processing. In a remote X11 session, tons of stuff is sent back-and-forth across the connection. The worst of it in my own experience is mouse movement – all mouse positioning is sent to the system running the X11 client app, which is murder when you’re running a remote app over anything further away than your garden variety LAN. I’ve learned that ANY mouse movement is excessive and incredibly frustrating if you’re working on a GUI on a system more than a timezone away or over any wireless connection.
Don’t get me wrong, I love X11’s networkability and have been using it to do remote-GUI and LTSP for a while now, but it is really an outdated model in these days of WAN/MAN/wireless connections to remote systems with more than 20ms latency.
Things are changing though, particularly with NX/NoMachine, which I see as a wonderful and updated solution to the problem, one that (mostly) intelligently caches data on both ends of the connection. VNC and RDP are also more modern solutions to the same problem (although they aren’t rootless like X11’s native support is). NX seems to be the path forward IMO, as it works to solve the roundtrip delay problem inherent in X11.
My personal leanings would be to push for Wayland for local graphics, run a rootless X11 server on top and use NX for remote-GUI applications. I’ve already switched all my remote-GUI stuff over to NX anyway (if you haven’t tried it, the difference is simply AMAZING for long-distance work) so the rest sounds rather trivial once Wayland is up to par.
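Worth noting: short of a full NX install, plain OpenSSH can at least take the bandwidth edge off (real flags, though compression does nothing for the roundtrip latency that NX tackles):

    # -X forwards X11, -C compresses the stream; helps on DSL, useless against latency
    ssh -XC user@far-away-host konsole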
This is very true. It’s the reason things like NX were developed: keep local “traffic” local. With NX proxying and caching the X traffic, you can use a full KDE 3.5 desktop, via an SSH-encrypted session, over a simple DSL link.
We have a lot of staff in our district with slow DSL or even dial-up Internet connections who connect to the school NX server and do their Internet surfing that way.
Problem with VNC and RDP is that you can’t do it for a single application. It’s “whole desktop login” or nothing. There are a *lot* of uses for connecting to a remote system and running just a single GUI app, without having any GUI installed on the remote system.
Can you use NX for a single client app, though? That would be nirvana.
Updating the X protocol to include the things that NX does would be the best solution.
That’s what the other poster meant by not being rootless.
Why not the reverse, and update NX to do the things the X protocol does? That seems like the much better solution to me, keeping network transparency out of the lowest levels of the protocol where it doesn’t belong and resulting in much cleaner code. Apple and Microsoft have had multiple opportunities to do the same type of architecture that’s in X, and they’ve chosen not to every single time. There’s a reason for that. And if you need just app windows and not a full root like RDP gives you, then just make sure your new protocol supports that. There’s no need to stick it directly into Wayland to get that functionality.
I don’t know about you, but I’ve had enough “lost work because the network disconnected for a few minutes” to stop trusting X forwarding.
The current version of RDP, introduced with Vista/Server 2008, can remote single applications instead of the entire desktop. It’s branded as “Terminal Services RemoteApp”
http://technet.microsoft.com/en-us/library/cc753844(WS.10).aspx
Besides using the protocol for standard app publishing via TS, it seems MS also uses it for XP Mode to integrate apps running in the XP Mode VM with the native desktop.
Combined with RemoteFX, introduced with Server 2008 R2 SP1, you can remote high-performance 3D and media applications using host-side GPUs rather than client-side.
http://technet.microsoft.com/en-us/library/ff817578(WS.10).aspx
RemoteFX Thin Client Demo
http://technet.microsoft.com/en-us/edge/remotefx-in-server-2008-r2-…
PDC 2008: RDP Today and Tomorrow
http://channel9.msdn.com/Blogs/pdc2008/ES21
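For reference, RemoteApp is driven by a few extra properties in an ordinary .rdp file; to the best of my knowledge these are the documented names (the “||” alias is whatever program name the server publishes):

    full address:s:server.example.com
    remoteapplicationmode:i:1
    remoteapplicationprogram:s:||EXCEL

Feed that file to the stock mstsc client and you get just the one application window, not a whole desktop.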
Nice.
Wonder how portable that would be to non-Windows systems?
You can setup NX to launch a single app. You create a custom connection and specify the location of the application.
For instance, I’ve run Konsole on its own, so I wouldn’t have to use NX’s default terminal of xterm.
Unfortunately, there doesn’t appear to be a way to pass parameters through the NX connection, so you can’t run apps with scriptable options.
For example, we currently use SSH X-forwarding to connect to a remote server, run a script to find which computer a student is currently logged into, and then run vncviewer to connect to that machine. Even with heavily tweaked resolution, compression, and whatnot enabled on the VNC connection, it’s not as smooth as an NX connection.
Would be nice to be able to run something like “nxclient --profile vncviewer --host some-host some-username” and have it run vncviewer on the remote server, tunnelled back through the NX connection.
You really got my hopes up, only to dash them on the hard, pointy rocks of reality.
I think I see what you’re saying. You want to be able to use a script to launch the nxclient. The client kind of sucks like that. You have to create a profile for each server you want to connect to.
Check out the below. It talks about creating a nxwrapper script, and it might help you out.
http://punkwalrus.livejournal.com/937899.html
The profile-per-server isn’t the issue.
The way we do things right now is like so:
vncuser server.schoolname studentname
The vncuser script connects to server.schoolname with X forwarding enabled, runs a “finduser” script on that server looking for “studentname” to get the IP of the station they are currently logged into, and then runs vncviewer to connect to that IP.
I can’t find a way to send “studentname” through the nxclient connection, especially since “studentname” will change on every connection.
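The script described above boils down to something like this minimal sketch (“finduser” is the poster’s own server-side tool; the rest is stock OpenSSH):

    #!/bin/sh
    # usage: vncuser server.schoolname studentname
    SERVER="$1"
    STUDENT="$2"
    # -X forwards X11 so vncviewer's window appears on the local display;
    # finduser runs remotely and prints the IP of the student's station
    ssh -X "$SERVER" "vncviewer \$(finduser $STUDENT)"

The NX client offers no equivalent hook for that per-connection argument, which is exactly the complaint.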
Have you seen the battles you have to fight to get OpenGL working network-transparently with X11?
Really, Wayland will work out simpler to send over the network if you are doing OpenGL.
The big advantage of X11 was having applications from many different computers on one desktop. But, and this is the big but, OpenGL really does not work that way.
Basically, X11 needs to be tombstoned so we can move on. Wayland provides local display, and tech based on VirtualGL, integrated into Wayland, can provide remote display.
Heck, lots of us have wanted to tombstone X11 for over 10 years. The simple reason we could not: lack of video card drivers. Yes, that is why the major focus on open source video card drivers appeared in the first place: to get rid of X11. If you want to see an early attempt to tombstone X11, look no further than DirectFB. Lots of promise, no video card maker support.
https://wiki.ubuntu.com/Wayland
I know nothing about Wayland. But, removing some overhead and barriers to progress seems like a good idea.
I can’t wait to have something to play with it, even if it is buggy and lacking features.
Things like network transparency are less likely to be needed on mobile devices. Here speed, or perhaps rather CPU cycles per mAh of battery life, is much more important.
Most X11 apps would have to be totally redesigned to fit the smaller screens of most mobile devices, so backward compatibility is less of an issue here.
This is another sign that Ubuntu has already given up on the desktop, or will in the future. This is quite understandable; after all, that is where the money is going to be. With Unity and Wayland, Ubuntu will be able to provide an iPad-like user experience on tablet devices.
I would disagree. Network transparency is more important on thin clients like mobile devices, than on fat devices like desktops. Opera Mini for instance is essentially a network transparent browser.
Hearing this kind of thing makes me happy. Yet it also makes me sad, since I don’t feel Linux distros are any better than, say, Red Hat 7 or Slackware 8.
There is definitely a lot of work being done that is making it better and better. But I think I will be 80 years old before Linux is where I want it to be today.
Note: I used the word “feel” above, not saying that there hasn’t been any progress.
First of all, the title is very misleading. This is probably not something that is going to happen in the next two years, at least. I’m quite sure that Thom really doesn’t know much about X or Wayland; he just knows that X is “bad”. (In the same way that taxes are “bad” to Republicans here in the USA.)
However, X does have problems. Ironically, while I was reading this article (on Ubuntu 10.10), my computer stopped sending data to the monitor, so it went black, forcing a hard reboot. This was probably an X bug (however I have had very good luck with X in general, much better than with Windows graphics drivers).
Probably the biggest hurdle for this plan will be NVIDIA. I just don’t think that Nouveau will ever be good enough to be the only way to run Ubuntu (though they have done a very impressive job so far). It is unlikely that the proprietary driver will ever support GEM, DRI2, etc. My advice to NVIDIA would be: make the driver totally independent from X. Provide just a mode-setting API, GL, and EGL with an extension to allow passing images between processes. This would be much less work for NVIDIA, and it would allow X/Wayland devs to mess around with the driver stack more freely.
The major complaint that people seem to have with Wayland is the lack of network transparency. That is not really a valid complaint. Network transparency is not part of Wayland’s focus; it is a job that really belongs in the toolkits (Gtk+, Qt). They could create custom protocols that would be much more efficient than a generic protocol like X, or just use some sort of standard VNC-type system. The Wayland protocol already supports processes providing images through shared memory buffers as well as GEM buffers, so it could easily be extended to support sending images directly over the socket that it uses for communication with the server. Network transparency in Wayland would most likely be faster and work better than in X, assuming that it is done right (in the toolkits).
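To make the toolkit-level idea concrete, here is a minimal sketch in plain POSIX C of what “sending images” can mean in practice: a hypothetical toolkit backend pushing a freshly drawn rectangle of pixels to a remote viewer over an already-connected TCP socket. Every name here is made up for illustration; a real backend would add compression, damage coalescing, and an input channel going the other way.

    /* Toy "remote toolkit backend": ship one damaged rectangle of a
       32bpp framebuffer over a connected TCP socket.
       Hypothetical sketch only -- no real toolkit works exactly like this. */
    #include <stdint.h>
    #include <unistd.h>
    #include <arpa/inet.h>

    struct update_hdr {            /* tiny wire header, network byte order */
        uint32_t x, y, w, h;       /* damaged rectangle */
    };

    static int send_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = write(fd, p, len);
            if (n <= 0)
                return -1;         /* connection lost; caller falls back */
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* fb: 32bpp pixels, stride: row length in pixels (not bytes) */
    int push_update(int fd, const uint32_t *fb, uint32_t stride,
                    uint32_t x, uint32_t y, uint32_t w, uint32_t h)
    {
        struct update_hdr hdr = { htonl(x), htonl(y), htonl(w), htonl(h) };
        if (send_all(fd, &hdr, sizeof hdr) < 0)
            return -1;
        for (uint32_t row = 0; row < h; row++)   /* damaged scanlines only */
            if (send_all(fd, fb + (size_t)(y + row) * stride + x,
                         (size_t)w * 4) < 0)
                return -1;
        return 0;
    }

Sending only the damaged rows is the whole point: modern toolkits already render client-side anyway, which is part of why shipping X drawing commands over the wire buys so little today.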
It’s not. Ubuntu is going with Wayland in a process that is going to cover the next four years, with the first test images appearing within a year. The headline is entirely accurate.
Are you kidding? Stop defending your erroneous headline. We have a future tense in this language for a reason. Use it.
He’s not. Ubuntu is ditching X.org but “We’re confident we’ll be able to retain the ability to run X applications in a compatibility mode”.
The headline in its present form can (and will) be interpreted as if the switch has already happened or will happen immediately.
To avoid such an (inevitable) interpretation by your readers, something along the lines of “Ubuntu to Ditch X, Switch to Wayland” would have been better, as it would have conveyed the fact that the change will take effect in the future in unmistakable terms.
Blame the messenger. In California the Democrats snicker at the Republicans for complaining about taxes but guess where businesses are moving? Texas and Arizona. Even if you disagree with someone you have to realize that their complaints do not manifest themselves from thin air. By being blindly dismissive you can work against your own cause.
If X was “good enough” then no one would complain except in specific areas. However, you can find X crashes in general use scenarios.
X is the dead stinking rhino in the room and you are all angry at Thom for pointing out that something smells.
I’m hoping with the hardware stuff being pushed out into the kernel, and the display server and compositing being pushed to Wayland, that it will result in a lot of old arcane junk getting stripped out of X.org.
I would like to see stuff like input handling also moved out to an external project.
X should just be about session management and the protocol that provides transport for remote sessions. That’s it.
If it were stripped down I think it could develop and evolve a lot more quickly into something much more nimble and modern. I like the concept of the X server, it’s just that the current protocol SUCKS.
I really wish they could scrap it and start over to create something much more sane, secure, and efficient. *cough* Persistent sessions *cough*
I really dislike Shuttleworth. He makes these grand statements about how Ubuntu is going to do this or do that. Problem is that he has very few developers to actually implement what he wants. Yet all the press pay attention to him. This story has probably gotten more press than the release of Fedora 14.
How many developers does Canonical have working on Wayland?
What was the last major piece of architecture that Ubuntu debuted ahead of say Fedora or OpenSuse?
Just because network transparency isn’t important to Ubuntu users, doesn’t mean that the Linux community is going to flush it down the drain.
Since Linux does all the graphical stuff that Windows and OS X do, exactly how is Linux hurting by using X as-is?
Perhaps you should read the article instead of talk out of your ass.
How is this a problem? For who?
Ah, that’s what your real problem with this is..
I dunno, I don’t care, it doesn’t matter.
Who cares? It’s not a fucking contest.
So? Is Shuttleworth trying to force other distros to stop using x.org?
No. He’s killing the best “4noobee” desktop distro. Mark wants a netbook-only distro. That’s it.
I thought he was a genius. Now *I* hate him with a passion and I hope someone will stop him before it’s too late.
Wayland? Yes. I’m all for it IFF I can keep network transparency.
(DES)unity? NO FUCKING WAY!
Which you will, of course. It will simply be implemented on top of Wayland rather than infecting its guts.
I really don’t get all the whining people are doing about this. Wayland will work just fine; it’s not going to be the end of the world. X isn’t nearly as bad as people often make it out to be either – the biggest problems are just the messy API (which is mostly hidden by toolkits anyway) and the slow development caused by all the complicated code in it, both of which Wayland will hopefully improve.
And you base that assertion on, what? That he wants to use a simpler and perhaps better performing graphics system? Wow, talk about jumping to conclusions.
Maybe his genius is not to listen to whiners and losers.
And you will still be able to run an X server.
Nope. That he targets a user base whose level of noobness might be far lower than his [the poster above] in the future, driving the usefulness of the distro toward zero for him. And I understand that, but thankfully there still exist distros that don’t follow suit.
Chill
Changing to Wayland looks like innovation to me – no doubt it will involve a lot of investment and will maybe answer some of Canonical’s detractors, who say it doesn’t give anything back. We will see if they can deliver and produce something better.
Copy and paste in Linux isn’t great and occasionally catches me out – certainly it needs improving, not a show stopper but – “can do better”
As for Unity as a desktop – I have no doubt that it won’t be the same as the current netbook Unity, so wait and see (or get involved). Again, this looks like innovation, which we all want to see.
If Mark kills Ubuntu, fear not: Mint will move to Debian and all will be well for the noob.
I have read the article. And other interviews he has done. And from what I’ve seen he takes most of the credit given for innovations in Linux and doesn’t have very many programmers working on much of anything. He also, from what I can see, plans on leeching as much money from open source as possible, while not really contributing to the ecosystem.
Well, someone has to do the work. It sure as hell won’t be Canonical, if the past is any indication.
NO, it’s a community. And Canonical needs to act like a better community member than it has in the past.
Sigh. Another who believes the only contribution possible is CODE OMG HAXX0RZ. If you don’t contribute code, you’re a leecher, right? I guess drawing boatloads of users to the Linux world isn’t a contribution, right? OH wait – users don’t contribute code, they’re just leechers!
He takes the credit? How? Examples?
Leech money from open source? Jealous much? Canonical isn’t particularly profitable, buddy – if at all.
The point is that Canonical does almost no upstream development. They have nearly zero ability to dictate a roadmap, similar to how CentOS is developed. Debian does a better job dictating what goes into Ubuntu than anyone at Canonical does. Shuttleworth saying that Ubuntu is moving to Wayland carries no weight as they have no development ability to back up that claim.
Shuttleworth just saw a nifty presentation on Wayland at the Plumbers conference and thinks ‘ooh, shiny’. When everyone else does the work for it, then Ubuntu will start shipping it. They aren’t the thought leaders here, only people trying their best to figure out what’s going to be the future instead of creating it themselves. With Canonical being perceived as some sort of thought leader in the community, they look more like the emperor with no clothes.
And Thom, please fix the title. As other posters have pointed out it’s not only misleading, but being in the present tense dishonest as well.
The other problem may very well be: how do applications take advantage of both display systems? If the Linux world moves to Wayland, will anyone make apps that work with X? Can an app take advantage of the things in Wayland AND be used transparently over the net with X? Also, as someone above mentioned, what if I don’t want to run anything (X or Wayland) on the machine I am remoting into? That works just fine now with X, but probably won’t work with Wayland.
How many plain “X” apps do you know of? For me, not very many. Everything uses a toolkit (Qt or Gtk+), which will work on both Wayland and X.
Yes, but for how long?
For the foreseeable future
I do not agree. I foresee a not-very-distant future in which this doesn’t happen.
Sure, Wayland isn’t network transparent, but Windows isn’t network transparent either, and RDP is pretty good.
I see no good reason that you couldn’t render a duplicate desktop to RDP, NX, VNC, ICA, X, or better yet SPICE, to serve up to remote clients. For that matter, Wayland is a compositor, so it could handle scaling or depth reduction or any number of other things to reduce network needs before rendering to the remote display.
Kind of a moot point I think. Good chance you already have rdesktop or vnc and you could likely use X as well.
So will Wayland allow me to work with occasionally buggy programs without crashing the entire graphical environment? In other words, is it somehow fault-tolerant so that my PDF viewer doesn’t crash my entire desktop and take down all my work?
No. However, it should be possible to implement it fairly easily (easier than with X). Theoretically, even with X right now, a PDF viewer should not be able to bring down the system. If it can, that is a serious bug. Of course a bug in Wayland will always be able to bring down graphics, just as a bug in X or Aero can do the same (anyone claiming that Aero cannot be brought down by a buggy application is deluding themselves).
Something that would be nice to have in Wayland is some sort of fail-over functionality.
For example, if Wayland crashes for whatever reason (driver issues, etc.) then the program could save its current state and wait for the server to reconnect so that we don’t lose our work.
This was one of the issues that people complained about with X11, and I think Windows can do this.
See this to see what I’m talking about:
http://www.osnews.com/story/21999/Editorial_X_Could_Learn_a_Lot_fro…
It’s possible. If I could get Wayland to run on my desktop, I might try to implement it.
The problem is that Wayland is highly unlikely to crash (unless the window manager part is buggy). The graphics part is so simple that if there is a graphics driver bug, it’s more likely that it will crash either the kernel (and there’s no recovering from that), or the application. X was prone to crashing because it was so big and complicated.
Well, Windows “can” do this, but often it doesn’t. I’ve had Aero crash on Windows 7 a few times on my desktop, and it’s never recovered. Also, there is nothing stopping X from working the same way. The problem is that the toolkits are unwilling to implement it, partly because it requires storing a lot more state on the client, which will be a lot easier with Wayland.
Thom really has no idea what he’s talking about when it comes to X and graphics. I don’t think he understands that a driver bug can always bring down a graphics system, no matter how much you try to restart it. He also doesn’t understand that this is not an inherent problem in X, but rather Gtk+/Qt.
I’ve seen the developers say “users know nothing about this and nothing about that”, but the reality is that we are users and we shouldn’t have to know about the internals in the first place.
However, we know about the experience and how bad the experience with X is, and I think the developers should take this feedback as a gem instead of insulting the users with “users know nothing about X internals”.
We don’t even want to know about the internals or how bad it is. We only care about the experience and the experience is bad. And the evidence of that is in that post that Thom wrote where X made him lose all his work.
Let’s try to be a little more responsible.
?
How is refusing to understand a computer system responsible?
I think people _should_ understand the internals of a computer. Computers are one of the most powerful creations of mankind, and the more you understand about them the more you can use them to better mankind.
Yeah, I make a big deal about computing, because it _is_ a big deal.
Yeah right, tell that to your grandmother.
Not everyone is a geek or an engineer or a nerd. There are other kind of people too, with different professions, hobbies and tastes.
It’s not just black or white, if you refuse to understand this then Linux will only remain as a minority, as a server OS or as a toy for geeks and nerds to have it as their desktop.
Stop being so closed-minded and face the reality. Wake up from your dream where everyone should be a geek or a nerd, not everyone is and not everyone wants to be, and not everyone has the same interests as yourself. Face it, period.
I don’t care what happens to the luddites, just like I don’t care about people who refuse to vote, understand their government, actually learn their _native_ tongue, let alone any others…
They can go off and intellectually rot on their own.
I wish people would learn about these things. You know, get involved in things that affect their lives.
If they don’t, I don’t care.
Can you change the oil on your car? Do you know why? Do you know how an internal combustion engine works? Even the basics are helpful in daily life.
If you don’t know righty-tighty, lefty-loosey etc you can’t fix anything with a screw/bolt.
Everyone should learn to read and write, natural and programming languages.
If they refuse to, they’re just intellectual luddites, and deserve the mastery politicians, MS and Apple trick them into.
Basic programming skills aren’t that hard to acquire.
“AH DUN WANNA LERN IT!” Isn’t a good reason for anything.
I mostly agree with your comment, but you sound a bit extreme.
Well unfortunately with voting, the bigger problem is that people vote when they shouldn’t, because they hardly know anything about any of the candidates, besides what they hear on the campaign ads.
As for learning languages, I would love to see a world in which everyone knew at least two or three. My personal goal is six.
I agree. I refuse to use any machine about which I don’t have at least a basic understanding. Most people seem to have apathy toward all technology, and everything else for that matter.
Actually it is very difficult for most people. I don’t know if it’s genetic or environmental, but at least 90% of the population doesn’t have the capability to ever do any “real” programming. It doesn’t mean they’re not smart enough, it just means that their brains work in different ways.
Learning languages other than English seems like a waste of time these days. I’d advise against wasting your time on that.
The effort of learning one human language is probably the same as learning ~ 10 programming languages. Knowing programming languages is more useful unless you aim to become a professional tourist.
Well that’s part of the problem.
I’d hardly consider it wasted time. I’m also naturally good at it, and it’s extremely fun.
I’d go with 100. Once you understand programming, you can learn a new programming language in a few weeks. It takes years to do that with natural languages.
And knowing natural languages is more useful unless you aim to become a professional programmer.
(There are many more jobs that involve natural languages than there are jobs that involve programming languages… Personally, I enjoy programming, but I’d be bored out of my mind doing it for a living.)
There is still the matter of learning the libraries and frameworks that go with the language platform. 10 is more realistic than 100.
Yeah, this is violently off topic .
I can’t think of many jobs that involve natural languages (apart from the low-level jobs related to tourism and customer service).
When I was in high school, they recommended learning languages to everyone. I figure that since that time, the fact that everybody everywhere learns English has pretty much destroyed the professional need to know non-English languages (hobbies are another matter).
The _professional_ need.
Not everything is about money.
Improving the _self_ and the _world_ is more important, I think.
Someone (taxpayers? parents? you?) is paying for your education. Learning useful things would be better use of that money – or, the time could be used on actual work.
Yes, because I serve the corporate machine.
Oh, wait, I don’t.
You’re suggesting that people who are not tech-savvy need to drop what they are doing and go learn about the arcane workings of computers.
Let’s take a break and resume this discussion after you’ve completed your social study on people who have different interests outside of technology and why they may choose not to invest more time in studying the topic.
Or, explain why “I don’t care about luddites” is any less intellectually lazy than “I don’t care about how technology works”.
(I agree that people could benefit from knowing how the complicated processes they interact with work, but that’s no justification for dictating how others spend their time.)
I’m equating reading, writing, and computing.
Anything you say sounds like the words of a Sarah Palin fan at this point.
“Some people don’t want to be ‘readers’, you know? We don’t have interest in literature, and would rather just turn on the TV and have entertainment happen _for us!_ We don’t want to waste our time learning how to read some arcane dialect just to sit through 3 hours about some non-existent goth Danish prince! That’s not relevant to my life! It’s pointless!”
Oh shit this thread just went to hell. Thanks dude.
Nah.. let’s not get into BS US politics. let’s stick with the original point here; you slamming people for not spending the time you do on the areas of interest you have.
Let me ask with a little less sarcasm than my previous post: what topics do you have no interest in (the way you have no interest in people who don’t learn programming/computers), and how much time have you put into understanding those topics? Unless you’ve spent as much time on uninteresting topics as you’ve spent studying computers, how are you any different from those you deride, by your own metrics?
Yeah, I’ll ridicule people who don’t want to learn all I want.
It’s just so easy, and it’s just so right.
*cue disco porno music, sleazy dance of hate*
Free country and all that. Ridicule others all you like. I’m just pointing out the hypocrisy of ridiculing people when you’d do the exact same thing as them, given uninteresting topics. But, accepting that, have at it.
“Uninteresting” != “Intellectually Difficult”
I’m just saying that making an effort to understand the most powerful tools mankind has is important.
_Making an effort_ is important.
“Important” may be a stretch depending on one’s personal interests and needs but I’d agree that understanding information tools can be a benefit. Not learning about computers is going to make operating a computer a little more difficult; it’s not going to make finding food and water impossible though.
The ongoing point here remains: you ridicule others for not having the same areas of interest that you do. They don’t devote time to learning about a topic you are interested in. So the question remains: what topics do you have little interest in, or do you actually devote equal time and learning to every possible topic under the sun? Unless you are the omni-learner with an equal grasp of, and time devoted to, all areas of study, you really don’t have any basis for bitching about how others spend their time. You don’t see the irony and hypocrisy in that?
In short: sure, you can complain because others don’t share the same areas of interest and study that you do, but who are you to pass judgment on how they spend their time, and does that equally justify them passing judgment on how you spend yours? (“Oh, he doesn’t arrange flowers… he’s so intellectually lazy for not studying flower arrangement in addition to the topics he does study.”)
The English language, European history, religious history, various theologies/philosophies I don’t subscribe to… Because it’s all one big human mess/party, and if I learn about as much of it as I can, then I can understand every _other_ thing I come across.
I travel to know about people, culture, and food. I study history to understand the present and future. I study about language to understand a people and what their priorities are, also to understand thought and the expression thereof. I read fiction to better understand the culture that produced those stories. I study computing to make better use of the most powerful human invention since written language.
I think less of people who reject thinking and/or learning.
YouTube search: “Program or be programmed”
It’s important.
Huh, interesting, actually Thom’s article that you linked to is exactly the one that I was thinking about. I was sort of under the impression that the newer versions of Windows are much more resistant to crashes.
Yes, I admit I don’t fully understand how the internals of Xorg and Linux work, although I’ve been using it almost exclusively for the past 8 years. I just know that for years I have experienced constant problems with X crashes that make me lose all my work. And yes, the most recent annoyance I had is Okular from KDE 4.5.2 crashing X. And I got a certain menu combination on OpenOffice to also consistently crash X every time. Not good.
But when Aero crashes, the graphics system restarts and the applications are still running.
With X, everything shuts down. There’s no reason for it actually. The toolkits could reconnect to the X server and keep going. They just don’t.
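For what it’s worth, Xlib gives a client enough rope to try this today. Here is a rough illustration (not how Qt or GTK+ actually behave; save_state() and rebuild_windows() are hypothetical application routines; link with -lX11): install an I/O error handler that jumps back to a reconnect loop instead of letting Xlib exit.

    /* Sketch: survive an X server crash by reconnecting.
       save_state()/rebuild_windows() are hypothetical app routines. */
    #include <setjmp.h>
    #include <unistd.h>
    #include <X11/Xlib.h>

    static jmp_buf reconnect;

    static void save_state(void)            { /* persist unsaved work */ }
    static void rebuild_windows(Display *d) { (void)d; /* recreate UI  */ }

    static int io_error(Display *dpy)
    {
        (void)dpy;
        /* Xlib calls exit() if this handler returns, so jump out instead */
        longjmp(reconnect, 1);
        return 0;                            /* not reached */
    }

    int main(void)
    {
        Display *dpy;

        XSetIOErrorHandler(io_error);
        if (setjmp(reconnect)) {
            save_state();                    /* connection died */
            sleep(2);                        /* let the server respawn */
        }
        while ((dpy = XOpenDisplay(NULL)) == NULL)
            sleep(2);                        /* retry until a server is back */
        rebuild_windows(dpy);                /* old IDs are gone; start over */
        /* ... normal event loop would follow here ... */
        return 0;
    }

The hard part is exactly what the parent says: every window, pixmap and GC ID died with the old server, so the toolkit must be able to rebuild its entire scene from client-side state – and that is the work nobody has done.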
No, but then again, neither is X
WindowServer and Quartz showed even more clearly how pathetic X-Windows was and always has been.
It’s crap software.
Wayland is just a FOSS copy of OS X’s approach to Window Messaging and Compositing.
And that’s why I still don’t use Ubuntu for anything other than quick tests in virtual images.
The quality of my experience depends as much on those “original values”, as on new features. Dropping or devaluing useful capabilities in favor of new bling, well, I couldn’t agree less.
Mind you, Ubuntu is heading toward the average joes. There are hundreds of distros that will still continue to use X.
(quite a statement from a person using this feature everyday)
It’s not that network aware user interfaces are not useful – quite the opposite, they are becoming increasingly more important. It’s just the apps that have moved elsewhere.
Most general-purpose network-oriented apps were taken over by the web. It’s better suited to internet applications than X: it’s built around the network, not just abstracting it. It handles network latency better (it has higher-level abstractions and client-side code execution), has better security features, and so on.
Native networked applications are now better served by placing network transparency layer at the disk interfaces (NFS, NIS, SMB) and running the apps locally – CPUs are powerful and cheap, CPU-GPU bandwidth is increasing rapidly and the border between the two is becoming fuzzy. Many modern applications simply don’t work well across the network anymore (even LAN) and when they do X is slower than other, bitmap oriented protocols (eg. RDP).
The third application – remote desktop access (as an auxiliary, not a primary, way of using GUI apps) – is better served by dedicated protocols like RDP or VNC. It’s important to be able to restart/migrate the session, adjust the size/colors of the desktop to the local display, etc. X simply doesn’t address these requirements, plus it’s terribly slow on anything but a low-latency LAN.
I’m totally OK with Ubuntu going that way under the following conditions:
– they make Wayland feature-rich enough to be a drop-in replacement for a local X server. That means: reliable drivers, good performance, multi-head, OpenGL, native ports of most UI toolkits, etc.
– they will provide a root-less, on-demand X-server for legacy apps from day one,
– they will provide an integrated VNC (or some other protocol, RDP?) server. Ideally working with both single application windows and whole desktops. This might be less important for people using newer Intel CPUs with KVM (that have a VNC server built in the hardware).
Will Canonical invest in Wayland development?
What about the hardware vendors such as NVIDIA, ATI/AMD, etc. Will Canonical negotiate with them to make proprietary drivers for Wayland?
I hope so.
Or perhaps the free drivers will become better, which is my hope.
Yeah that would be nice…
Will Ubuntu hire some programmers?
Or will it wait for Red Hat to do the job?
What a stupid question. Since they’re diverging from the pack, they obviously have programmers to build Unity and Wayland.
Not really. Ubuntu is really good at talking, but when it’s time to work… it’s something else.
What do you mean by that?
I think he meant that Canonical does a lot of glitter and partying, instead of spending time producing code that will actually innovate the old technologies out there.
This is the main criticism Canonical has been receiving for a while now, and it is finally catching up with them – very good.
But sometimes one needs to let go of old technologies in order to innovate and be ready for the marketplace.
Take Apple, for example. They had to let go of the old Mac OS and completely rewrite a Unix OS. If they hadn’t done so, the OS would just have been a drag on the company.
And now Ubuntu: we already know that Canonical wants to do things differently from the other distros. Unfortunately, the open-source community is afraid of innovation, because consensus building will not lead to innovation.
Therefore, Canonical decided to go their own way, developing technology that they think will greatly benefit them in the long run.
Remember, Red Hat is not meant for average joes, so its contributions are less focused on this kind of innovation.
Let’s see if things work out for Ubuntu. And don’t forget: Linux is only a kernel, so there are a lot of distro choices out there.
There’s more behind the curtain. I don’t think Linux is just a kernel. The Linux kernel can survive without a distro, whereas a distro can’t survive without the Linux kernel. So I don’t think it’s right for distros to remove the word Linux from their brand.
http://www.ubuntu.com/project/about-ubuntu
Isn’t that enough? Do you also bitch at Android, etc? Sheesh.
With Debian running on a FreeBSD kernel, and Ubuntu running on an OpenSolaris kernel, it’s most definitely possible for a distribution to survive without the Linux kernel.
It’s just too bad more programmers don’t realise there’s more to the open-source OS world than just the Linux kernel.
What about distributions like Debian?
Debian/Linux
Debian/BSD
Debian/Hurd
“Debian” is not dependent on the Linux kernel.. for example.
In general, I believe that distributions should be recognized as the product and referred to by distro name though too. Referring to anything that happens to use the Linux kernel as “Linux” only causes confusion. For example, consider solid distributions that get slammed and blamed for the faults in unstable distributions. Ubuntu “Linux” has a bug in XYZ so therefore, any “Linux” must have the same problems.
But the key point here is that the kernel is indeed separable from the distribution, as demonstrated by Debian, which has at least three distinctly different kernels available for use (though, in the case of Hurd, “usable” is not quite implied by “available for use” ;D).
Indeed.
I’d like the inverse of their FreeBSD flavour.
That is to say, Debian BSD/Linux.
The BSD userland with the Linux kernel? The BSDs focus on userland standards, but with the hardware support of the Linux kernel… that would be interesting, actually. It’s only the hardware support that keeps me from taking more interest in the BSDs.
Indeed.
You inherit the horrifying Linux sound systems, though.
If I was setting up a DAW, I’d probably go with BSD and OSS4, spend a month on it, and then NOTHING GETS CHANGED EVER.
I’m OK with ALSA, but I wouldn’t personally complain about a new sound framework replacing it. Give me stable 5.1 X-Fi support and you can call it anything you like.
NeWS will be re-born someday (probably in around 3 languages at once, like Ajax), so we have a network-efficient protocol and more local handling of the windows themselves.
Ah, re-invention of the wheel, because some companies had a very expensive license on round objects in the 1980s…
I really hope this move makes Canonical contribute to OSS more intensively. Pushing a new display stack and a new interface could help shake off the stagnation in projects like GNOME and Xorg.
That is, I hope this is not just a daydream in which outside developers get their hands dirty on the hard parts and Canonical reaps the benefits. Time to REALLY contribute something solid, other than glittering and colourful desktop themes.
(I use ubuntu, but don’t agree with everything Canonical does)
X has been one of the most annoying things to deal with in my decade’s worth of Linux usage. I’m glad they’re going for the gradual shift; sudden huge changes that are premature are always suicide. People who really want Network Transp…. whatever still can have it – it’s OSS and they can put on whatever they like. For too long, desktop code/solutions have been ignored in favour of networking/admin solutions.
http://gitorious.org/~krh/qt/qt-wayland
lighthouse is gonna eat your dog, at pace
>Finally.
Finally, nothing. This is complete BS. Wayland is trash compared to X. As for Ubuntu, they have been getting worse and worse. “Unity” was just about enough, this on top of it is more than enough to break the camel’s back. Ubuntu is irrelevant now. I don’t care about it anymore, and certainly will not recommend it. If foolish people want to continue using Ubuntu, fine, they can have their pointless pseudo-Windows. I will stick with real Linux, thank you very much.
/rant
Is Debian a real Linux?
I’m asking, because Ubuntu is basically Debian with a few opinionated choices.
And if Debian is not real Linux, I don’t know what the heck is one.
Debian is real Linux, and the distro that I use. Debian without X and with a forked Gnome interface (i.e. Ubuntu) is not real Linux.
And who are you to define what “real Linux” is? So if a distribution runs without X at all (say, a server install), it isn’t “real Linux”? What is this definition? Where did you get it? What is it based on? By whom has it been accepted?
Ridiculous comment.
You should know that Linux is a kernel (the brain), and the GNU coreutils packages http://www.gnu.org/software/coreutils/ make up the rest of the system (the hands, feet… of the OS).
Therefore, any distro with the Linux kernel (the brain) is Linux – such as Android, MeeGo, Ubuntu, Fedora… etc.
Linux is just a kernel. Any OS built around that is “Real Linux”. Or do you not consider Android a “Real Linux”? Or all those embedded devices without any GUI layer at all (think wireless routers) not “Real Linux”?
It doesn’t matter what you stack above it; any system using the Linux kernel is a “Real Linux”.
Guess all my headless servers are not “Real Linux” ™ what with not having X installed on them.
Indeed, and all these years I was fooled into thinking that “Linux” was powering a good percentage of the Internet. I hope we can at least still claim a few of the servers out there are “Real FreeBSD”.
You are an idiot, let’s see if you call it “trash” again in 2 years from now when X is extinct and replaced by Wayland.
Fine, we don’t want to know, go away and use your “real” Linux with your ancient X server.
>2 years from now when X is extinct and replaced by Wayland.
If anything is extinct in just two years, it will be Wayland. If Wayland ever displaces X completely, it will take a lot more than two years.
My primary concern with this is further desktop fragmentation. We have had GNOME and KDE, along with a slew of other toolkits, for a long time, but they have at least all had X as a common underlayer. If switching to Wayland causes developers to further choose between X and Wayland (which in many cases it appears it will), then we will have further, and more fundamental, fragmentation than we already do – especially if Wayland doesn’t replace all of X’s features, e.g. network transparency, leaving X as a competitor rather than something replaced.
I believe Wayland will end up replacing X in the long run, and I think X will remain as a subsystem layer for running legacy X11 applications.
But I won’t even bother to have X11 as a subsystem, I will try to have only Wayland applications installed on my system.
But let’s be serious: who is going to be interested in running a legacy windowing system when there is a modern one which is faster and more suitable for desktop use? Not me, for sure.
And if Wayland doesn’t end up replacing X, developers could simply target the highest-level APIs, Qt and GTK+, both of which are already being made to work on X11 and Wayland.
With respect to Wayland replacing X, I agree: it probably will, as long as it doesn’t target too different a use-case than X. Currently X has all sorts of features that I don’t use on my laptop, but if X is replaced without those features, it is likely that both systems will remain and be sort-of compatible, à la X on OS X. So even though I may not use X’s more esoteric features, it is still important to me that its replacement has them, so that we can completely replace X. Otherwise, let’s fix X’s performance for single-user systems and keep its features. We don’t need two competing, supported windowing systems to choose between, even if they are compatible. That is more bloated and cumbersome than anything anyone could ever claim X is.
If Wayland does take off, and replace Xorg on Linux, it will cause a schism between Linux-based OSes, and non-Linux-based OSes.
It’s already starting with all the Linux-kernel-only features going into Xorg (GEM, KMS, etc) splitting Xorg support between Linux and non-Linux systems.
Network transparency is a useful asset for home users too (a GUI on a home server). But I still do not think it is a problem in itself that Wayland does not offer network transparency. This functionality should move to the toolkit:
* The amount of data involved in drawing windows has increased tremendously since the X protocol was invented. If I am correct, gradients in e.g. a window bar are moved pixel-by-pixel in X. It would be much more bandwidth-efficient to move higher-level graphics primitives over the network (see the rough numbers below).
* Remote apps integrate awfully into a local workstation, even if they use the same toolkit as your local desktop. Themes are often very different.
Moving the network transparency to the toolkit would solve both problems. The downside is of course that more toolkits would have to offer this functionality. But in my opinion it will be enough if Qt and GTK+ operate over the network. If we are burying old ancient software, let’s bury Motif as well.
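A back-of-the-envelope example of the bandwidth point above (my own rough numbers, not measurements): a 1920×24 title-bar gradient shipped as pixels at 4 bytes per pixel is 1920 × 24 × 4 = 184,320 bytes, about 180 KB for every redraw, while a high-level “linear gradient from colour A to colour B over this rectangle” primitive fits in a few dozen bytes.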
We went through this in the early X11 days (Xt, fonts) and the conclusion is: it doesn’t work.
There are always differences between server and client versions, architectures, etc. It means that the server-side part of the toolkit quickly becomes obsolete, and there is no way to upgrade it without a major overhaul of all installed systems.
The solution is server-side user code. If the toolkit/application could upload code to the X server for execution, it would solve all maintenance problems. The obvious new issue is security, so forget about implementing toolkit backends in C; that would have to be something like a Java applet or PostScript or JavaScript running in a sandbox.
…NeWS, in other words. Thank you.
Yes, pretty much NeWS. Except for several details:
– PostScript is probably no longer the best choice. For efficiency it should be a language hosted on a VM with a JIT.
– The toolkit should be able to completely bypass this mechanism when running on a local workstation and talk directly to the hardware. It still makes more sense to implement the low-level operations directly in machine code whenever possible.
Well, PS could be made to have a JIT. It’s certainly a less complex language than Java, and Dalvik manages it.
Why bypass it if it’s JIT-compiled? It’s native code running, but portable.
Writing drivers for 3D-capable video cards would be a big new task, but what does PS need to work efficiently? Vectors, shading and compositing.
And what do 3D cards do?
Vectors, shading and compositing.
It’s already been extended to 3D. The library is out there, but it was written for Level 2, so it needs porting.
It’s not like Ghostscript can be used out of the box and be fast enough.
My guess is that a complete new implementation will need to happen. A friend and I have been planning to do this for a hobbyist OS concept we have (it starts with Open Firmware, adds libraries and implementations of a few languages, some new, some old – and a ton of work for years into the future).
It all just makes too much sense to me.
Will it be dominant? Probably not. It’s a very idealistic system that I doubt too many people will make programs for, but the point is for it to be fun to program on instantly for anyone.
Remote apps inherit the themes and colour schemes of the X Server, which is running on the local host with the monitor plugged into it. IOW, if your remote apps are not looking like local apps, you are doing something wrong.
Network transparency is no longer transparent if the user has to know it’s there, and has to learn Z different ways to do it, since each separate toolkit does it its own way. Network transparency *has* to happen at the lower/lowest level of the graphics stack.
While I personally couldn’t care less what Ubuntu does, I think it’s interesting that Phoronix suggested this idea to Ubuntu developers way back in 2008. Some of the developer comments are interesting. Have a read here: http://brainstorm.ubuntu.com/idea/15205/
So I understand that Wayland handles compositing and that Qt4 and Gtk+ have/are being ported to it but where do window managers stand?
I ask because these days, anything more than the min, max and close buttons is considered ‘fringe/geek-only’ and I somehow doubt the included window management functionality will meet my needs.
Window managers rely on X11 functions, so they will have to be ported from X11 to Wayland in order to work.
I was looking at the Wayland home page, and it all looks good; it can have X as a Wayland client.
But there isn’t much about the other open source Unix-like OSes, i.e. the BSDs, which I find concerning.
http://wayland.freedesktop.org/architecture.html
It talks about KMS and evdev which appear to be Linux only technologies.
All Wayland really requires are:
1) a way to do modesetting
2) a way to do OpenGL rendering without X
3) a way to pass images between processes
4) a way to get input events
Wayland could theoretically be ported to any OS that supports these things. #4 is relatively minor. #2 and #3 are covered by Mesa, assuming you have KMS (#1). If you want to see Wayland on *BSD, Solaris, etc., getting KMS and GEM working would be the number one priority.
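For the curious, requirement 1 is already a concrete userspace API on Linux: the DRM/KMS interface exposed by libdrm. Here is a minimal sketch (Linux-only; build with something like gcc kms.c -I/usr/include/libdrm -ldrm) that lists the connected outputs and their modes:

    /* Sketch: enumerate KMS connectors and modes via libdrm. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* the KMS device node */
        if (fd < 0) { perror("open"); return 1; }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) {               /* drivers without KMS fail right here */
            fprintf(stderr, "no KMS support on this driver\n");
            return 1;
        }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
            if (!c)
                continue;
            if (c->connection == DRM_MODE_CONNECTED)
                for (int m = 0; m < c->count_modes; m++)
                    printf("connector %u: %s @ %u Hz\n",
                           c->connector_id, c->modes[m].name,
                           c->modes[m].vrefresh);
            drmModeFreeConnector(c);
        }
        drmModeFreeResources(res);
        close(fd);
        return 0;
    }

The same probe doubles as a rough Wayland-readiness test: if drmModeGetResources() fails, you are on a UMS-only driver, which is precisely the situation the binary blobs discussed below are in.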
I’ve been a reader of osnews for a very long time (os2 days), but only today felt like I needed to register an account to comment…
A lot of these posts in this thread are exactly why so many people detest the linux community. This infighting on ‘technology’ choices is pretty pathetic, extremely sad, and quite embarrassing. A lot of you should be ashamed of yourselves. Name calling over one (of many!) distributions doing something different… give me a break people.
I am looking forward to testing the first images using Wayland. Reading http://wayland.freedesktop.org/architecture.html, the Wayland architecture seems to make sense.
Hopefully Unity and eventually Wayland will bring an improved experience. And if Wayland does not work out, hopefully some of the ideas will rub off on XOrg.
Regarding wayland and meego, related mailing list thread:
http://www.mail-archive.com/[email protected]/msg06364.html
Spoiler: no commitment for MeeGo 1.2. But we’ll hear more during MeeGo conference next week.
Wayland is probably the next big thing for the next couple of months. I have never seen a hot debate like this, not just here, but all over the internet. Shuttleworth really stirred things up.
Taken directly from Wayland’s FAQ: http://groups.google.com/group/wayland-display-server/web/frequentl…
“What is wrong with X?
The problem with X is that… it’s X. When you’re an X server there’s a tremendous amount of functionality that you must support to claim to speak the X protocol, yet nobody will ever use this. For example, core fonts; this is the original font model that was how you got text on the screen for the first many years of X11. This includes code tables, glyph rasterization and caching, XLFDs (seriously, XLFDs!). Also, the entire core rendering API that lets you draw stippled lines, polygons, wide arcs and many more state-of-the-1980s style graphics primitives. For many things we’ve been able to keep the X.org server modern by adding extensions such as XRandR, XRender and COMPOSITE, and to some extent phase out less useful extensions. But we can’t ever get rid of the core rendering API and much other complexity that is rarely used in a modern desktop. With Wayland we can move the X server and all its legacy technology to an optional code path. Getting to a point where the X server is a compatibility option instead of the core rendering system will take a while, but we’ll never get there if we don’t plan for it.”
Sounds to me like it’s time to work on X12, where all the old, not-really-used-anymore features (core fonts, for example) are removed, and the most useful extensions are rolled back into the core protocol.
You know, kind of like what happens with OpenGL. Release X.Y. People extend it with custom extensions. Release X+1.0 with most useful extensions rolled into the core.
Instead of trying to replace one piece of the X11 Window System, how about just updating the protocol? After all, the industry went through 11 protocol versions already. Why are we suddenly so against a version 12?
I’m sure Wayland developers don’t mind if someone wants to use their time and money to make X12.
X11 actually works pretty fine already. It’s better to make something radically simpler (like Wayland) instead of breaking compatibility without really gaining much. It’s all about bang for your development buck.
http://www.phoronix.com/scan.php?page=news_item&px=ODc2Mg
So … no Nvidia binary graphics drivers for Ubuntu going forward? Also no AMD binary graphics drivers either, but at least the AMD open source drivers are working from programming specifications published by AMD.
Is the open source Nouveau driver good enough?
I guess the answer is: use Intel chips or stay with Xorg, at least for as long as it takes for Nvidia or AMD to catch up. This isn’t that hard taking into account Intel’s share of the GPU market.
Linux is not a toy OS – it has some measurable market value. If a GPU maker can’t make a fairly simple port (Wayland is way simpler than Xorg) within 1~2 years, then it’s likely to lose this segment (embedded and “home” desktop) of the Linux market.
Of course, no one would do this for Wayland alone; it simply lacked credibility. But Ubuntu with Wayland is a whole new story.
Whilst one could certainly “use Intel GPUs” or “stay with Xorg”, there is another course of action. AMD has released programming specifications for its GPUs, and open source developers have used those specifications to write open source Linux drivers for AMD/ATI GPUs. There has been for some time now a “classic Mesa” open source driver available for most AMD/ATI GPUs; a Gallium3D open source driver is available for R500 GPUs and earlier, and a Gallium3D driver for R600/R700 GPUs is also close to becoming usable.
http://www.x.org/wiki/RadeonFeature
http://www.x.org/wiki/GalliumStatus
So, one could alternatively “use AMD/ATI GPUs with open source drivers” and move to Wayland (or stay with Xorg).
Certainly, using AMD/ATI GPUs with open source drivers will result in better performance than using Intel GPUs.
PS: Interestingly, video decode functionality (using the 3D engine) for AMD/ATI cards might finally be coming for Linux via the Gallium3D drivers. Bonzer.
What I meant is that if Wayland is really going to be used as the default display server in desktop and (particularly) embedded Ubuntu, then it is highly unlikely that either Nvidia or AMD will ignore it and leave the whole niche to Intel.
I’ve seen AMD drivers custom-designed for some obscure closed source X11 thin clients, so I simply don’t think they will ignore a player like Ubuntu. Especially since the whole port is mostly repackaging their existing code base and removing some cruft.
It’s more likely that Ubuntu will tell OEMs: “Here is our new distribution, works with both X and Wayland on Intel+X+Y. With Wayland it’s snappier, has no tearing or visual artifacts and consumes 20% less battery power”.
This is a simple business choice, not a charity.
You’re right that older chipsets may be better supported by open source drivers. That’s good for all of us – users of older laptops converted to Linux, etc. But Wayland’s power will likely come from the embedded market: tablets, netbooks, electronic dictionaries (Sharp NetWalker, anyone?). For these devices X is only ballast, and GPU performance is not critical. It’s more important to get well-supported drivers customized for the particular device.
I think you may have missed the technical point in all of this.
http://www.phoronix.com/scan.php?page=news_item&px=ODc2Mg
Wayland requires KMS and GEM buffer support.
KMS needs to be part of the kernel (this is why it is called Kernel Mode-Setting).
Anything that is part of the kernel needs to be open source.
Ergo, Nvidia and AMD binary drivers for Linux won’t support Wayland … because (without making at least part of them open source) they cannot support KMS.
It isn’t as though open source KMS isn’t available, it is. It is just not compatible with Nvidia or AMD/ATI binary drivers … which are essentially Windows drivers with open source wrappers around them.
I don’t think so … it isn’t that simple.
I doubt very much that either Nvidia or ATI want to break up their Windows driver code into modesetting and not-modesetting functional pieces just to enable their binary drivers to have an open source KMS piece that can go into the kernel.
But even if they did go to all that trouble … the kernel developers won’t have it. The kernel devs won’t put only a “part of a driver” into the kernel.
VIA tried exactly that. VIA announced they would be releasing an open source Linux driver … but when they submitted it, it turned out to be only a KMS stub for the Linux kernel; the bulk of the VIA driver code was still a binary blob. Linux developers rejected it, and now there is no KMS support for VIA graphics. VIA support in Linux is pretty much a bust. VIA graphics won’t be able to run Wayland either.
Finally, your reference to “older chipsets” is IMO slightly misplaced. As far as Wayland support is concerned, ALL chipsets (from any maker) are ONLY supported by open source drivers.
Perhaps the following would be clearer:
No open source driver == No Wayland support.
Wayland support == open source driver.
Yes, I get your point. There are technical difficulties, maybe even some management or legal issues to workaround etc. That’s all part of engineering.
Just wanted to say that it is not impossible for them to port the driver; it is not even difficult. The only missing part (at least for now) is motivation, and the good thing about motivation is that where there is money to be made (or lost), it magically appears.
Whether they actually strip off KMS and GEM from their drivers and implement them as opensource kernel modules (that would indeed be the best solution for the users) or they simply stuff proprietary xserver driver code into Wayland and leave the kernel module untouched – that’s all technical details. They may but they don’t have to use the method envisioned by Wayland developers (just like they did with Xorg drivers). Here their task is even easier – they can simply fork the whole thing with almost no effort to maintain it.
I’m sorry, but I don’t think you do get it. Not at all.
Have a look at this page:
http://www.x.org/wiki/RadeonFeature
Scroll down to the bit where a header says “Feature dependency tree”. Find the feature in the tree which says “Wayland”. OK, now everything to the left of that point where it says “Wayland” is a dependency of Wayland. You must have support for those things in the driver before you can have wayland. OK?
So, the two mandatory things required to be supported in a Linux graphics driver, before it can support Wayland, are: (1) memory manager, and (2) KMS.
These dependencies make one approach that you suggested – to wit, “they simply stuff proprietary xserver driver code into Wayland and leave the kernel module untouched” – a complete non-starter for technical reasons. It can’t be done like that. Period. Kernel Mode-Setting means graphics card mode-setting done by the kernel, early in the boot sequence. This is a requirement for Wayland.
So, KMS goes in the kernel. In order to go in the kernel, it has to: (1) be open source, and (2) be accepted into the kernel tree (by the folks at kernel.org).
Now, in the past, VIA have tried to get some open source KMS code for their graphics chips accepted into the kernel tree. The kernel developers at kernel.org rejected it because the remainder of VIA’s driver was a closed binary blob.
Get it now? The very approach that you are proposing – to wit, “they actually strip off KMS and GEM from their drivers and implement them as opensource kernel modules” – has already been tried by VIA and utterly rejected by the Linux kernel developers at kernel.org. They simply won’t accept a “bit of a driver for KMS only” and include it in their kernel. They insist that if a driver is to go into their source tree, they must have the source code for all pieces of it. Period.
http://www.phoronix.com/scan.php?page=news_item&px=ODI3OA
http://www.phoronix.com/scan.php?page=news_item&px=NzY2Ng
The Linux kernel maintainers are very consistent on this point. Either it is a complete open source driver, or it doesn’t go into the kernel tree.
If it doesn’t go into the kernel tree, it cannot support KMS.
The closed source binary drivers for Linux from Nvidia and from ATI/AMD are not in the kernel tree; they are instead both kernel loadable modules. By their very nature they cannot support KMS; instead they support UMS (userland mode-setting) only.
Ergo, no Wayland support via closed-source binary blob drivers. Period. No matter what Nvidia try and do (apart from open-sourcing their code).
It is that simple.
Fortunately for Wayland, there are perfectly adequate open source drivers for Intel and ATI/AMD chipsets. Even the open source Nouveau driver for Nvidia chipsets can support Wayland, but AFAIK this driver is a fair way behind the ATI/AMD and Intel drivers for most functionality, due to the lack of publication of programming specifications for Nvidia chipsets.
This actually brings up another point, that very few people (that I’ve noticed) have pointed out.
Wayland, as far as I know, is going to be sitting on top of the exact same KMS/GEM drivers that current X already sits on top of (or can sit on top of). To cut that down: Wayland will use the exact same drivers that X already does. This is an extremely significant point because, as a consequence, any driver issue that X has now, Wayland will have also. (And, conversely, any issue in a GEM/KMS driver that’s solved for Wayland will also be solved for X, because the same driver drives both.)
And this is particularly significant because, in my own experience, most of the few remaining X performance/stability issues I’ve encountered over the last several years have actually been driver stability/performance/availability issues. And if Wayland uses the same drivers, then, consequently, Wayland will encounter all the same stability issues and capability restrictions that X-on-GEM/KMS does now.
Or, in very terse terms, moving to Wayland will not address most current X issues, because most current X issues are really driver issues, and Wayland will use the same drivers as X.
Edit: now, underscore that “that I know of”. Lemur – or anybody else – if I’m wrong on that point, please correct me.
This is not entirely true. Wayland only uses the KMS drivers and the Mesa drivers. “X drivers” also include DDX parts and EXA, etc. Wayland doesn’t use any of that. In my experience, these X specific parts tend to be the most unstable.
So theoretically, Wayland should have less driver issues than X.
Thanks for the response. Reading that, I’ll score myself as “mostly but not completely correct.” I think it’s still fundamentally true that the driver issues that plague X will for-the-most-part plague Wayland too. Specifically, driver availability may actually be an even bigger problem, since there are a number of drivers that support DRI but not GEM or KMS, that X could still use but Wayland cannot.
I’ll repeat myself, with apologies: the real weak point in Linux’s graphics stack is the driver layer, not the X server. The solution to most of these problems is modernizing and QCing the drivers that we already have (and furthering the development of the nascent KMS/GEM drivers that we already have).
Wayland doesn’t depend on most of the driver functionality, it depends only on memory management and KMS.
http://www.x.org/wiki/RadeonFeature
As long as the memory management and KMS functions of the driver are stable, then Wayland should be stable.
Any other instability one encounters further up the stack, such as in OpenGL, DRI, GLSL, video decode, OpenCL etc etc will be exactly the same for the classic X server or for Wayland, as you point out, but this won’t be a problem of Wayland itself … it will be a problem of OpenGL, DRI, GLSL, video decode, OpenCL or whatever.
Thanks for the clarification.
As I read that, I still think it’s a true statement that “many of the X issues that people have now are actually lower in the stack, primarily driver issues, and Wayland will still have those.”
I really think people are missing the point in all this. Like I said, I think most of the problems with, for example, OpenGL on Linux are driver problems, not X problems, and the real solution for most of this is to fix the display drivers, not tinker with (or replace) the X server. If we move most display drivers over to KMS and GEM, then most of these problems will be fixed for X. If we don’t, then Wayland won’t work a lot better than X does now.
I’d be a little curious of your and Sorpigal’s take on that.
You are missing some obvious solutions:
Nvidia or AMD don’t have to convince kernel guys to ship their code – they can, and do, ship it themselves. If the code cannot be compiled as a loadable module then (a) they will simply not use this mechanism (see below), or (b) ship an Ubuntu kernel package with their open-source code compiled in.
You are also assuming that Wayland’s dependencies are set in stone – they are not. If everything else fails, Nvidia or AMD are free to simply take the Wayland code (it is MIT licensed), add their binary blob to it and ship it as their own set of libwayland*.so libraries (after all, they already replace libGL.so with their own version). In fact, that would be the easiest solution, as they could just copy-paste their existing driver code into Wayland and ignore all the proposed/enforced third party mechanisms.
Of course, the ideal solution would be to opensource their drivers but don’t hold your breath for it. There are plenty of ways they can achieve their goal (if they want) without doing this.
Nvidia aren’t interested.
The kernel.org developers would probably blacklist such a driver. They have threatened to do just that before for nvidia binary blob drivers.
In any event … distributions would ship with Nouveau. This is already what is happening:
http://www.osnews.com/story/21033/Nouveau_Becomes_Default_Driver_in…
PS: Open-sourcing Nvidia’s driver code is not the only route to improving this situation. Nvidia could instead simply provide Nouveau developers with programming specifications (information of the form: “to enable function Y, set value X in register Z”). This act would not reveal any of Nvidia’s IP.
Nvidia isn’t interested now. Wayland, as it stands now, has no market value behind it. But that may change if there are more developments like Mark’s announcement.
Kernel.org can only advise blacklisting certain drivers, or decline to support users who load them. But nowadays building a kernel is a distributor’s job. You don’t expect Ubuntu to blacklist Nvidia’s or AMD’s drivers, do you?
Nouveau is only an option for the secondary market – it’s not something OEMs would touch. They would rather go with a different chip and supported drivers instead. So there is still space for Nvidia to fit in – if they care to, that is.
Opening the docs is indeed an option (I agree with you that obfuscating the interface of their products is a stupid business strategy). But that won’t happen just because we are crying for it. It will rather be forced by competitors that have already done it.
At the moment I’d simply choose either an Intel or an older AMD product, especially if I was buying a battery powered device.
Simply, no. It would be a tremendous expenditure of effort to distribute a binary copy of either Wayland or the Linux kernel, and one which would grow significantly greater given the multiplicity of distributions and architectures, and the task of trying to keep pace with upstream kernel development. They’d potentially have to build a unique package for each architecture and each distribution. It would also be all but impossible to convince distributors to actually use it. To say nothing of the fact that you specifically can’t do that with GPL’ed code.
That’d constrain Wayland to using user-space graphics drivers. Which would be a technological step backwards by years.
And do note that, in this scheme, you’d have one unique Wayland binary, coming from one unique Wayland source tree, per closed-source driver vendor. Which is a distribution nightmare.
As Lemur says, as long as Wayland depends on KMS and GEM, it’ll only work on top of the open-source in-kernel-tree drivers.
Edit: spelling.
My point is: they can do (almost) exactly the same thing they are doing now. They just don’t find it profitable, yet.
I agree that these methods may not be pretty but they are legal and work. In fact they may work better (for them and the users they care about, at least) than supporting a solution that relies on multiple technologies, which are evolving in parallel in a way they don’t control.
Building their own kernels is not a big deal if they only want to support platforms like Ubuntu LTS etc. (remember that they only care about OEMs) and serve others with a patch (to stay legal about the GPL). However, I suspect that in this particular case they would simply convince distributors to patch their kernels for them.
Shipping their own Wayland is not a big deal at all. It’s only a relatively simple library. The rest of the machinery (compositor, window manager, etc.) is (or is going to be) in code that links to libwayland. Pretty much the same situation as with the libGL stuff today.
I think that by “open source” you really mean GNU GPL. :)
NVIDIA has been providing parts of their drivers as Linux kernel modules for at least 10 years.
True enough. Even now, part of any given binary driver has to run in the kernel, and therefore has to include a little glue-layer that is distributed as source code, and compiled against the specific kernel-in-question’s source tree at install time.
Although I believe this is for straightforward technical reasons, not licensing concerns – since any two Linux kernels may have different driver ABIs, you have to compile at least part of your driver against the specific, exact Linux kernel into which you want to load it.
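For anyone who hasn’t seen one, the glue layer really is just an ordinary loadable module, rebuilt against the target kernel’s headers at install time. A minimal sketch – all names here are invented, not taken from any real vendor’s shim:

/* glue.c – built per-kernel with kbuild, e.g. a one-line Makefile: obj-m += glue.o */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>

static int __init glue_init(void)
{
    printk(KERN_INFO "vendor glue: binding blob to this kernel's ABI\n");
    /* A real shim would register the vendor's opaque core with this
     * specific kernel version's interfaces here. */
    return 0;
}

static void __exit glue_exit(void)
{
    printk(KERN_INFO "vendor glue: unloading\n");
}

module_init(glue_init);
module_exit(glue_exit);
MODULE_LICENSE("GPL"); /* real vendor shims declare their own licence strings */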
Yes, what you say is true enough, but the driver that Nvidia provides for Linux is basically a very large binary blob. It is mostly their Windows driver code.
On Linux, one cannot link a binary blob directly to the Linux kernel. Linux kernel developers would blacklist such a driver. Such drivers, therefore, work via open source “wrappers” which sit between the kernel and the binary blob driver. One must re-compile the wrapper every time the kernel changes.
The closed-source binary blob drivers of this nature are necessarily loadable kernel modules.
http://en.wikipedia.org/wiki/Loadable_kernel_module
Such modules cannot support KMS.
Since they cannot support KMS, they cannot support Wayland.
It seems to me like it should be possible to just add some hooks from KMS through the thin glue layer into the binary driver. In another post, you reference VIA’s attempt to contribute just KMS and GEM code that referenced their binary driver into the kernel. While that was rejected, is there a reason that VIA (or anybody else) couldn’t just distribute that code with their binary image and compile it on install as a module, exactly as is already done now with the thin glue-layer code of most binary drivers?
Yes, there is a reason. In order to function properly, KMS must load as part of the kernel itself. This is why it is called KMS I suppose.
Graphics drivers which are implemented as loadable modules (exactly as is already done now with the thin glue-layer code of most binary drivers) can implement user modesetting only.
http://en.wikipedia.org/wiki/Mode-setting
http://en.wikipedia.org/wiki/Mode-setting#Linux
Only the open source Nouveau driver implements kernel-based mode-setting (KMS) for NVIDIA cards. KMS is required for Wayland.
Because Nouveau has been forced to use reverse engineering as its only source of design data, it is a fair way behind the various other open source drivers which have programming specifications to work from.
http://nouveau.freedesktop.org/wiki/FeatureMatrix
This is actually not true. Drivers implemented as loadable modules can implement KMS. Check here: https://wiki.archlinux.org/index.php/ATI and note the part a little down the page about starting KMS. It is possible now to build the open-source ATI KMS driver as a loadable module (loading radeon with the modeset=1 option), and get KMS from it when it’s loaded.
(To note up-front, that page is flagged as “out of date.” Still, if it was possible in the past to implement KMS in a loadable module, it probably still is. I’ll root around for a better source.)
The real reason that closed-source drivers don’t support KMS is much more likely to be political. (a) Companies don’t like disclosing information about how their cards work, and they’ll resist any architecture change that makes the internal workings of their driver (and hardware) more visible; and (b) most companies don’t put more than a token effort into their Linux drivers, and are likely to have the response that, if the kernel architecture moves away from them, well, then that’s the kernel team’s fault, right?
One doesn’t have to reveal how graphics cards work in order to release programming specifications for an open source driver.
Have a look at some programming specs:
http://www.x.org/docs/AMD/
You can’t build an ATI card from that information. Not even close. All it says is stuff like “in order to perform function Y, set value X in register Z”, and it gives you the odd block diagram to explain the context.
Intel and AMD/ATI have no problem whatsoever with releasing information of this kind. Intel even release complete source code for Linux drivers for their chipsets.
It doesn’t make the company non-competitive to release this kind of information, far from it. In fact, releasing programming information used to be standard practice for chipset makers.
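To illustrate what a programming spec actually gives a driver writer – and how little it reveals about the silicon – here is a purely hypothetical fragment. The register name, offset and bit are invented, not taken from any real AMD or Nvidia document:

#include <stdint.h>

#define EXAMPLE_CRTC_CONTROL  0x6080u    /* invented register offset */
#define EXAMPLE_CRTC_ENABLE   (1u << 0)  /* invented enable bit */

/* Write a 32-bit value into a memory-mapped register block. */
static inline void reg_write(volatile uint32_t *mmio, uint32_t offset,
                             uint32_t value)
{
    mmio[offset / 4] = value;
}

/* "To perform function Y (enable scan-out), set value X in register Z." */
static void enable_crtc(volatile uint32_t *mmio)
{
    reg_write(mmio, EXAMPLE_CRTC_CONTROL, EXAMPLE_CRTC_ENABLE);
}

Specs are page after page of facts of exactly that shape; you cannot reconstruct the hardware from them.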
Because AMD/ATI has released programming specifications, because there are now open source Linux drivers for their chipsets, and because AMD/ATI chipsets significantly out-perform Intel chipsets, the use of ATI/AMD amongst Linux users has seen a big upsurge, to the point that:
http://www.phoronix.com/scan.php?page=article&item=lgs_2010_results…
ATI/AMD users in this survey = 2074 + 495 + 1645 = 4214
exceed Nvidia users = 3293 + 628 = 3921.
PS:
The upsurge in ATI/AMD amongst Linux users has not come about through effort from ATI/AMD and their Linux driver … ATI/AMD simply released the programming specs. The open source drivers that resulted are largely written by other people, such as coders from Novell and RedHat.
Not necessarily true. Right now, Wayland uses KMS, GEM, Mesa, etc. However, all it really needs are:
* a way to do mode setting
* a way to do OpenGL rendering without X
* a way to pass buffers between processes
If NVIDIA were smart, they could easily provide these features in their proprietary driver with EGL + a few extensions. They could then eliminate the majority of the DDX driver and have X.org use this driver. NVIDIA would save themselves a lot of work, and it would give X devs a lot more freedom to mess around with the internals of the server.
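As a sketch of the second bullet above – OpenGL setup with no X server in sight – something like the following is the shape of what a compositor needs from a driver’s EGL. This assumes an EGL implementation that can run on a native, non-X display; the exact platform plumbing varies between implementations:

/* egl_no_x.c – build: gcc egl_no_x.c -lEGL */
#include <stdio.h>
#include <EGL/egl.h>

int main(void)
{
    /* On a suitable implementation this display is not backed by X at all. */
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY)
        return 1;

    EGLint major, minor;
    if (!eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "eglInitialize failed\n");
        return 1;
    }
    printf("EGL %d.%d up, no X connection required\n", major, minor);

    /* A compositor would now pick an EGLConfig, call eglCreateContext(),
     * and bind the context to a native surface or buffer for rendering. */
    eglTerminate(dpy);
    return 0;
}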
What OEMs are using Ubuntu? HP? Dell? Are those OEMs that you mention selling 90% of their hardware prepackaged with Ubuntu? Is it the year of the “Ubuntu desktop”?
Repeat after me: Open Source drivers suck, Open Source drivers suck, Open Source drivers suck…
And they will continue to suck unless they are provided as open source by the hardware manufacturer.
Care to give any reason? Just because you say so? Nouveau is obviously a bit flaky because it’s entirely reverse engineered, but the open source AMD and Intel drivers provide a very good out of the box experience. The open sourced AMD drivers are certainly a lot more stable than the proprietary ones…
Also, the proprietary drivers really hold back a lot of things for Linux. Projects like Wayland and Plymouth are entirely impossible with proprietary drivers. NVIDIA’s driver essentially forces everyone to stick with the same old X server that we’ve had forever, unless they are willing to do something about it.
Intel’s drivers are entirely provided by them, and AMD’s open source drivers are largely developed by AMD.
Last I heard, the open-source KMS intel driver was a bloody mess. If they’ve managed to clean it up, it’s news to me.
In my own experience, there are two major problems with the newer open-source KMS drivers. The first is performance; the open-source drivers typically don’t offer the same level of performance that the closed-source drivers do. The second is coverage: the open-source drivers (usually) have lots of gaps in their coverage of devices. IIRC, about a year ago, the Nouveau driver didn’t support critical features of the NVidia 280 that was driving my desktop at work (I think that DRI didn’t work), and I had to resort to NVidia’s binary driver in order to get KDE4 to work.
The open-source ATI KMS driver – again, in my own experience, for the cards I’ve tried it on – is pretty good, and very stable. But even then, it still suffers from reduced performance, as compared to ATI’s binary driver.
Edit: I recall what didn’t work. I was trying to use a dual-head set-up, and the interactions between DRI and the multiple-desktop system were complex. It basically boiled down to being limited to something like a 1280×1280 combined resolution for both desktops, and I think DRI was right out.
All of this is more-or-less true, although none of it has any relevance to the fact that the closed-source drivers are a complete non-starter when it comes to Wayland.
It should perhaps be noted that the open source drivers are moving ahead at a fair pace now. The Gallium3D drivers have recently almost reached feature and performance parity with the classic mesa driver for R500 or below and R600/R700 ATI/AMD chipsets. Gallium3D is apparently much easier to work with, and state trackers are beginning to emerge for new features never before seen in Linux open source drivers, such as GLSL, video decode and Direct3D API state trackers.
Open source drivers are still immature and there is a lot of performance tuning still to be done, but even now they are quickly starting to overtake the closed source drivers simply in terms of the functionality of the GPUs which is supported.
According to the 2010 survey:
http://www.phoronix.com/scan.php?page=article&item=lgs_2010_results…
… the open source xf86-video-ati driver has overtaken the closed source fglrx driver in number of users. I believe that happened this year. It will only accelerate from here on.
I know, and I am elated because of it. GEM drivers might finally give us support for OpenGL > 2.1 in Linux, which is glorious, glorious news.
And this is my point: X doesn’t suck because X is poorly implemented. X sucks because it’s sitting on rotten drivers. And moving to Wayland won’t fix the drivers underneath: fixing the drivers will fix the drivers underneath. And that’s happening, and it’s great news.
All of the people clamoring for a move to Wayland, claiming it’ll fix all the graphical ills plaguing desktop Linux, don’t really understand what the fundamental problems are. The Open Source drivers in the kernel tree have been neglected for waaay too long, a problem that has been addressed only recently. But, thank God, it is being slowly-but-surely corrected. And that’s what’s going to fix Linux’s graphics issues, not tearing out a fundamental component of every open-source Unix-alike in a fit of pique.
Concise answer from Nvidia about Wayland:
“We have no plans to support Wayland.”
http://www.nvnews.net/vbulletin/showpost.php?s=99209e23a1013723450a…
That’s because NV has only one employee who works on Linux drivers. There’s no incentive for them yet to pay more people to rewrite the drivers, unless Intel comes out with some killer GPUs that will support Wayland.
I see no reason Wayland couldn’t work with binary drivers. They’d do exactly the same thing they do with X – patch it and replace the parts that interact with your driver with their own code. Wayland is still using an MIT or BSD license, right? It’s just a matter of a bunch of work for Nvidia to get it working. Obviously it wouldn’t be smart for them to do so before it’s settled down a little bit since anything they do now will probably be useless in 6 months anyway.
Two problems … (1) Nvidia have said already that they aren’t interested in doing any such work, and (2) in any event the kernel.org folks won’t include any partial driver in the kernel and neither will they link in a binary blob driver from Nvidia as part of the kernel.
http://www.phoronix.com/scan.php?page=news_item&px=ODc2Mg
It isn’t going to happen. There will be no support for Wayland using Nvidia’s binary drivers. Deal with it.
For Nvidia cards, this means using the Nouveau driver. Otherwise … no go. That is your only choice if you want to run Wayland.
I don’t think you understand what I was saying. It would be literally exactly the same situation they have now. They distribute their own kernel modules, and modify X to hook into them. That’s exactly the same as a potential future driver, which would distribute its own kernel modules, and modify both X and Wayland to hook into them.
I mean, your argument that the kernel devs would ban the binary driver applies equally to the existing X driver. I guess it’s possible that might happen, but it has nothing to do with whether Wayland is being used or not.
And Nvidia has only said they aren’t currently interested in supporting it. Which makes sense, they wouldn’t want to do all the work now just to have to completely redo it all in a few months when things change, because Wayland is currently under heavy development and will no doubt have to change before it can be fully used as a replacement on the desktop. If Ubuntu releases a decent Wayland solution in 4 years, then I’d guess we’d end up with an nvidia solution in about 5 years. There’s no point in them working on it before then.
True.
It isn’t really likely that kernel devs would ban the Nvidia binary blob driver, I suppose, but nevertheless it is worth remembering that this exact move has been contemplated by the kernel.org folks before.
It was decided at that time NOT to outright ban binary blob drivers (which is to say, to have the Linux kernel simply refuse to load them), but if Nvidia started again to play funny tricks in an effort to keep their driver as a binary blob, such a proposal could easily re-surface. If the kernel.org developers implemented it, that would be the end of the Nvidia binary driver. All parties who wanted to run the Nvidia binary driver after that would have to patch the kernel source to get it to work.
Just use Nouveau … it’s easier. An even better solution, perhaps, is to invest in an AMD/ATI card.
It’s still true that, historically, hardware vendors haven’t given a rat’s ass about their Linux drivers, and will do almost no maintenance or upkeep work on them. I mean, KMS and GEM are available now, and are clearly better driver architectures, and neither nVidia nor ATI has made any effort to modify their current binary drivers to use them.
If they modified their drivers to use GEM, they could stop distributing OpenGL components with their drivers, as those are all built on top of GEM. They’d have to do less work, and they could distribute a much simpler driver, if they moved to GEM. But they haven’t.
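For reference, “using GEM” for the buffer-sharing piece is not exotic; the generic DRM ioctls already expose it. A rough sketch – error handling trimmed, and this is the mechanism, not any vendor’s actual code:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <drm.h>

/* Producer: publish a driver-allocated GEM handle as a global name. */
static uint32_t gem_flink(int fd, uint32_t handle)
{
    struct drm_gem_flink flink;
    memset(&flink, 0, sizeof(flink));
    flink.handle = handle;
    drmIoctl(fd, DRM_IOCTL_GEM_FLINK, &flink);
    return flink.name;   /* hand this name to the other process */
}

/* Consumer: open the named buffer and get a local handle to it. */
static uint32_t gem_open(int fd, uint32_t name)
{
    struct drm_gem_open args;
    memset(&args, 0, sizeof(args));
    args.name = name;
    drmIoctl(fd, DRM_IOCTL_GEM_OPEN, &args);
    return args.handle;  /* same underlying buffer, no copy made */
}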
Don’t hold your breath for any improvement in the closed-source binary drivers. The hardware makers just do not care.
I think Wayland will be the default display server within 2 years on new installations and on enthusiasts’ computers. The only problem I see is Nvidia, but I think if Nvidia doesn’t support Wayland one way or another, Linux users will abandon Nvidia, and graphics cards are cheap today…
There are a few reasons why going to Wayland would be attractive.
http://www.phoronix.com/scan.php?page=news_item&px=ODc3MA
The writing has been on the wall for a couple of years now that Nvidia is no longer the best solution for graphics for Linux.
Right now, anyone contemplating new purchases to build a Linux machine from parts (you kind of have to build them from parts, because retailers have been frightened out of selling them to you) would be looking at getting a decent AMD/ATI card rather than a Nvidia card. (PS: If graphics performance is not that important to you, Intel graphics on the motherboard would be OK for such a role. You can’t get Intel graphics on a separate card anyway).
I myself came to that conclusion as long as three years ago (when I last put together a machine for myself – one which is still working as well as the day it was first assembled, BTW), when the work on open source Linux drivers for AMD/ATI cards was just beginning. Such a choice must be a completely obvious no-brainer to almost everyone by now.
Indeed, recent surveys show significant increases in Linux users running an AMD/ATI card to the point now that more appear to be running AMD/ATI than are running with Nvidia.
Unlike Intel and AMD, Nvidia don’t have any CPU of their own. AMD/ATI have just announced a very interesting development in the low-power space:
http://www.engadget.com/2010/10/20/amd-demos-next-gen-llano-fusion-…
http://www.amd.com/us/press-releases/Pages/amd-demonstrates-2010jun…
Nvidia are going to get squeezed out pretty soon, IMO. Given their insistence on not releasing programming specifications for their chipsets (the release of which would not have hurt Nvidia’s “IP” one iota), I say to them: Karma’s a bitch.