According to a recent email to the X.Org developers’ mailing list, Canonical is nearing one of its goals for Ubuntu 10.10: a rootless X Server, i.e. the ability to run the X.Org Server without root privileges.
All that’s left to accomplish on the Ubuntu side, according to Canonical’s Christopher James Rogers, is working out a /dev/backlight device interface on which udev would set the appropriate permissions for the user. /proc/mtrr may also need to be handled, but Rogers doesn’t believe any of the drivers (at least the main KMS drivers) are using that interface.
With all of the necessary prerequisites addressed, the X Server startup will check whether kernel mode-setting is in use, whether /dev/backlight exists, and whether /dev/input/* has appropriate user permissions. If all three conditions are true, the X.Org Server will not be run as the root user, which improves security. Of course, this feat has already been achieved by other Linux distributions such as Moblin and now MeeGo.
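As a rough sketch of that startup logic (the function name, paths, and fallback behavior below are illustrative assumptions, not Canonical’s actual code), the pre-flight checks might look like this:

```shell
#!/bin/sh
# Hypothetical sketch of the rootless-X pre-flight checks; not Canonical's
# actual implementation. Arguments let the checks be pointed at test paths.
#   $1: directory with DRM card nodes (normally /sys/class/drm)
#   $2: backlight device node (normally /dev/backlight)
#   $3: directory with input device nodes (normally /dev/input)
can_drop_root() {
    drm_dir=$1 backlight=$2 input_dir=$3
    # 1) kernel mode-setting in use: at least one DRM card node exists
    set -- "$drm_dir"/card*
    [ -e "$1" ] || return 1
    # 2) the backlight device exists (udev would set user permissions on it)
    [ -e "$backlight" ] || return 1
    # 3) every input device node must be readable by the user
    for dev in "$input_dir"/event*; do
        [ -e "$dev" ] || continue    # unexpanded glob: no devices to check
        [ -r "$dev" ] || return 1
    done
    return 0
}

if can_drop_root /sys/class/drm /dev/backlight /dev/input; then
    echo "conditions met: X could start as the current user"
else
    echo "conditions not met: X would still start as root"
fi
```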
This would largely help out those with the open-source ATI, Intel, and Nouveau drivers that use kernel mode-setting, while those using non-KMS drivers, including the binary drivers from ATI and NVIDIA, would still run their X Server as root.
The mailing list thread discussing this can be found on xorg-devel.
There is also the Maverick blueprint discussing this likely feature of Ubuntu 10.10. Other details can also be found on the Ubuntu Wiki.
Is this only for Ubuntu, or will other distros benefit from rootless X server as well?
It’s got nothing to do with Ubuntu specifically, since it’s the X.org devs that did the work. All Ubuntu does is implement it in their distro, like all the others.
Canonical just has the goal of implementing it in their distro by Ubuntu 10.10.
I see, thanks.
Maybe they should work on getting rid of X altogether instead… They might be able to come up with something that’s able to handle vsync…
On a server, you don’t need to run X at all. Indeed, Ubuntu Server Edition does not include X. Of course, that means you must do everything in text mode, which is not much fun if you’re also going to use the machine for other things.
For many people, it would definitely be nice to have the ability to run X on a server without compromising security. Actually, I’ve wanted to do that myself. So I would say that this is very good news. Thus, a big “thank you” to the developers from me.
Edited 2010-06-26 03:10 UTC
Why not run the X11 apps remotely, displaying them to your workstation?
It is extremely bad practice to run X11 itself on a server, and many servers don’t have onboard video, or have an extremely low-end card intended only for the initial installation.
It is able to handle it if you configure it correctly.
Xorg is one of the best pieces of technology available on Linux. But it is like KDE: many of its best features are unknown to users, who only hit bugs and problems in the small set of features they do know about, like displaying local content on your screen.
It is great when you have to use a window over the network (without any additional software), thin-client mode, input peripherals over the network, proxies, Xfbdev/KDrive/Xephyr, X in X, multiple X servers, fine-grained per-screen control, and clients over ssh.
It is also great when you think of all the developer specifications, APIs, and extensions that are not used as much as they could be.
Just saying “Argh, I hate X because I use XRandR and it is not perfect on my unsupported, Linux-unfriendly laptop, and I don’t want to ever edit xorg.conf, so X sucks” is just trolling.
This new feature sounds really good!
Has this been resolved yet?
http://www.osnews.com/story/21999/Editorial_X_Could_Learn_a_Lot_fro…
Agreed.
Some would argue that showing local content on screen is what X should do; everything else is bloat. But I guess we don’t need to care, since the bloat is harmless.
KDE is an exaggerated comparison; KDE also does millions of things but struggles with the basics (pulseaudio, networkmanager), while the basics in X seem to be in a pretty good shape.
I have to say, though, that with Lucid and the latest nvidia driver, I’m very happy with X, using both laptop screen and monitor at the same time, as well as using projectors without issues.
One of my main beefs with X (and I’m just assuming it’s X’s fault, since I see it in Mac OS apps ported from Linux as well as on the Linux desktop) is visual artifacts when redrawing parts of the screen. This, to me, is the one thing that screams “unprofessional” when compared to Mac and most modern Windows apps. Is anything being done, or has anything been done recently, to fix this?
I only recall seeing redraw graphical glitches when using KDE4. And they involve the tooltips when you hover the mouse over the various objects on the panel. Then again, it’s been a while since I used something other than KDE4 extensively, so maybe it just doesn’t come to mind.
This is something that’s been solved a long time ago by compositing window managers. Desktop graphics on X are every bit as smooth as on win7 these days.
Hmm, that doesn’t sound like a full solution to the problem. If I remember correctly, these window managers only work with 3D acceleration turned on–what happens if you use 2D-only drivers? And secondly, doesn’t that only cover window moving and resizing? What about the contents of the windows?
If you don’t want to use proper drivers, you don’t really have the right to complain that things don’t work well for you.
Classical problem with X was that stuff moving over your window caused bad artifacts (typically leaving a blank area until repaint was done). This doesn’t really happen anymore. I don’t know what you mean by problems with contents of the windows, everything seems to work fine for me.
I mean artifacts that are very short-lived but noticeable, widgets that change their state not all at once, but in bits and pieces… like the outline of a drop-down box remaining shortly after the box itself disappears… window contents when resizing not being refreshed in real-time, but instead getting big black areas prior to the full refresh… just generally anything that requires large-ish refreshes seems to happen in jerks or discrete steps rather than smoothly. Sure it may seem “minor”, but compared to Windows and Mac it is noticeably less polished. It’s just like Linux font rendering used to be: “fine” but noticeably inferior…. I believe this is what ggeldenhuys is referring to below as X’s “poor performance” in comparison to the other platforms.
This should not happen when compositing is on, as the lower layer does not need to be drawn again. Think of 2D without compositing as one single image: if you cover part of it (with a combobox), then what was under it no longer exists, so it needs to be recreated after the covering widget (the combobox) is gone.
With compositing, it is a stack. The image you see is composed of multiple layers, each on top of (or below) another. They are separate and do not depend on each other. If you remove a layer (your combobox), it simply reveals the layer below; there is no need to ask the widget to repaint itself (which is slow).
Are you sure about that? I was under the impression that the compositing window managers did nothing other than composite windows, not the widgets within the windows….
The widget layer (Plasma) is a layer, and the QComboBox is a proxy widget for a real QComboBox. The QComboBox creates a popup window on top of the layer that created it. So yes, a QComboBox popup is composited.
This has nothing to do with X and everything to do with either bad drivers or badly written apps. If you are getting actual glitches (and not just “not quite fast enough” redraws of complicated windows) then it’s definitely not an X problem. Windows apps have this too, although since Windows now has compositing, it’s not present any more unless you switch to classic mode.
What if it’s running on decade-plus old hardware that doesn’t have a GPU worthy of 3D processing? Maybe the device vendor no longer even supports that particular chip, which is highly likely; nVidia seems to have gone through three or four different, incompatible driver generations in the time I’ve owned this nearly decade-old machine (though its original GeForce2 Ultra has long been replaced). The original GF2:U is now in another machine (currently running Windows and not in my possession), and if I were to install Linux or BSD on it, I would be at least three driver generations behind. No telling whether the drivers would even *work* on modern systems/kernels.
Or what if using “proper drivers” goes against your wishes, such as using third-party kernel blobs and drivers? Hell, for that matter, what IS a proper driver–the crap nVidia and ATI put out, or something that is more well-designed to fitting into the system as a whole, both in terms of design and philosophy (open source)? Or just the “appropriate driver to get the job done” which, depending on how a person looks at it, could really be either? And if the binary drivers the manufacturers put out are the “proper drivers,” then wouldn’t that make Windows the “proper OS?”
Really, it sounds like you’re saying something along the lines of, “if you don’t like the way things are and how the GPU companies are restricting the use of their hardware through drivers, even if you run (or wish to run) a fully open system–you have no right to complain. Either use the blobs if possible, or shut up. Or upgrade to a newer model graphics card. And oh, and if those blobs are incompatible with your particular hardware and/or OS, well… tough luck. Enjoy the glitches.”
Yeah, that’s what I’m saying. If you insist on taking the less supported path, adjust your expectations. OTOH, I’ve never been seriously bothered by these artifacts, so the issue seems overblown.
As for ethical reasoning – choose your battles. NVIDIA has been able to provide the best drivers there are for Linux and have good reasons for keeping them closed; I have no problem supporting them for that. I specifically chose my laptop based on the fact that it has an nvidia card.
Well, at least you’re upfront about it… that’s all I’ll add.
If you’re running on decade-old hardware, are you surprised to get 1990’s performance? Surprised that you can’t get the benefit of new work being done in 2010?
My machine from 2001 (not quite 90s, but close) is still perfectly usable (aside from slowdown issues caused by memory/swapping, but having only 256MB will do that), and in fact, would be an excellent machine if given the RAM. Ironically, it still runs much better than the Windows Me that came preinstalled on it and Windows XP which I ran up through SP2 to get rid of the abomination of an OS that it originally shipped with. Memory back in those days was scarce and expensive, and in the case of this computer, it’s still expensive.
Since when do the graphical glitches in the video driver get in the way of “real work” (other than being annoyances, just like those fancy 3D effects produced by modern compositing window managers/GPUs)? And I’m not talking about some bug that completely garbles the screen (which I don’t even remember when I last saw), I’m talking about what the original poster originally was talking about–minor graphical glitches. Brought up to a “modern” spec of 512MB or better yet 1GB of memory (or even maxed out to 2GB), this machine would certainly be enough to keep going for another six years at least… and likely more. The aging GPU, though, is already “obsolete” though, by at least one driver generation if using the binary nVidia drivers.
- Windows does not support your computer anymore
- Mac OS X does not support equivalent computers anymore
- GNOME, starting with GNOME 3, will drop support for your computer
- KDE still has some bits of backward compatibility, but is too modern to run on your hardware.
The two major Linux DEs supported you longer than any other vendor; don’t complain. Now you have to accept that, to stay competitive, they have to be modern and use modern technologies such as indexing, compositing, database-driven information collection, and theming. As 95% of their users can use those technologies, going that way is the right choice.
I still use mostly Pentium 4s, with the exception of one Core 2 Quad and two Core 2 Duos. All my servers and services run fine on the P4s, but I know that in a few years I will have to replace them because they will lack the muscle to run the new versions.
Take the file server, for example. With ext2 and smbfs, 200MHz and 32MB of memory were fine on the old P2. But now, with btrfs with compression and soft RAID, 3GHz and 2GB of DDR400 is the minimal config. I have not complained on the kernel mailing list; I accept those changes, since I benefit from them after all.
You ask to have all the benefits of modern software on outdated hardware; this can’t work. You can still use Fluxbox and KDE 3 apps on your computer; they will work forever as long as you don’t update certain packages.
Tell me, though: what “modern” features *honestly* cannot be done at all on this 9-year-old machine? Hell, the GPU itself has 512MB VRAM, double that of the actual computer itself; compositing, technically, is no problem at all. Remember, the machine *does* have a fully capable GPU, and it really did when I bought it. But both of them are deprecated and need old drivers (the original possibly needing an older OS, though I’m not sure). Everything you described, this machine would perform effortlessly (and at quite a respectable speed) with a nice RAM upgrade.
I’m not quite sure what you mean by KDE4 and Gnome 3 “dropping” support for my hardware. KDE runs like shit to the point of not even really being usable (it really does require 512MB), but it *does* run. Gnome 3 will likely be the same way. Get it up to 512MB, and it’ll run like a dream. Various other desktops and window managers (Xfce, LXDE) exist as very capable alternatives.
The *only* ones constantly dropping support where it matters are the GPU vendors… hell, even the original Sound Blaster would still work in it if I hadn’t upgraded it to a newer model a few years ago. If Nouveau really can get 3D graphics working, that would really be one of the major steps forward for Linux/BSD, IMO.
What? So what is the real, true benefit of modern hardware? I mean, in actual, practical terms? Aside from the fact that you can run in 64-bit and have a faster CPU with more memory (and the ability to use more memory), I’m really not seeing much… just a machine more capable of doing the same old things and with greater future expandability.
The real “features” which are unique are completely new technologies like Blu-ray, which if I really wanted I could get a BR drive and use it as a massive optical storage device. Sure, actually watching the video would likely require far more processing power, even just to decode the disgusting DRM. But that’s nothing about a new feature; that’s just more of a question of having a fast-enough processor. Then again there’s that new DRM that goes through every component of the system, from Blu-ray to GPU to monitor–but that’s another problem (one I don’t care about) which was only brought on by greedy media corporations that Microsoft has in their pockets.
I deeply agree with you. When I see that old computers that could flawlessly display the complex environments of Morrowind, a huge resource hog in its day, can’t cope with a “modern” desktop environment, it just makes me laugh. I know that “good enough” optimization is trendy nowadays, but in such cases a better name for it is wasted power.
So true. It’s sad: with a memory upgrade, almost every decent computer sold in the last decade could be sent kicking and screaming into a new decade instead of rotting (or not, since they don’t decompose naturally) in landfills. Meanwhile, their original clueless owners use a shiny new machine that, literally, has more power than they’ll ever know what to do with. And yet they’ll probably junk it in another 3-5 years for a new one with even *more* power.
I should also add, the original GeForce2 Ultra GPU that came with my 9+ year old computer had 256MB onboard VRAM, so even that was a damn nice, capable chip (and for a desktop, 3D or not, still is). I will probably be getting the computer it’s now inside eventually, so I’ll see how the whole driver situation holds up.
I can’t stress it enough… this antique machine, if provided with enough RAM, provides all the power and functionality that any typical user would need. The only real exception is if they had some absolute requirement for high-definition video (in which case, the 1.7GHz P4 would need to be upgraded). I have watched some HD videos that play perfectly fine, yet others that stutter pretty bad (codec or compression level, maybe? More likely the lack of support for hardware video acceleration outside of Windows, even with the official drivers… Thanks again nVidia!). Or, if some game has come out by now that a 1.7GHz processor is just not enough for (GPU is an easy upgrade).
The only things the new Dell I’ve been running lately has over the antique is the ability to run more “modern” (ie. resource-hogging) operating systems and make virtualization usable–both relating directly to the larger amount of RAM (1GB). Oh, and no swapping slowdowns. Take this same, “modern” Dell machine and run it with 256MB memory, and it’ll run about as good as the memory-bottlenecked old Gateway I’ve been talking about. “Modern” doesn’t mean much any more; computers have long ago reached the point where for most things, they’re all fast enough. Even the cheapest will provide enough power and resources.
Well, my reply was based on the fact that you had a non-compositing-capable GPU. I probably lost something in the 12 layers of this topic.
If you can have decent compositing, then KDE4 and GNOME 3 can work fine with 1GB of memory. KDE4 does a lot of caching to preload icons and some GUI elements, so if that minimal cache starts to drop into swap, performance will drop badly in a matter of seconds.
A non-compositing-capable computer will not work with GNOME 3 (without the legacy package; the real and full GNOME 3, that is), and KDE4 is too slow to be usable without true compositing, even if it works.
KWin includes a 2d compositor (switch to Xrender mode) and you can always use xcompmgr. This mode is often faster on, e.g., radeon cards using the OSS drivers.
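For reference, switching KWin to the XRender backend is just a config change. The key names below are from memory of KDE 4-era kwinrc files, so treat them as assumptions and verify against your version:

```shell
# KDE 4-era: in ~/.kde/share/config/kwinrc, set
#   [Compositing]
#   Enabled=true
#   Backend=XRender
# then tell the running KWin to reload:
#   qdbus org.kde.kwin /KWin reloadConfig
#
# On lighter window managers, a standalone compositor works too:
#   xcompmgr -c &     # -c: simple compositing with client-side shadows
```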
Only partially true. I have generally found the situation to be wildly variable and greatly affected by various point revisions. Generally it has been mildly acceptable, and never as smooth or consistent as OS X and Win7.
There are qt and kde apps that are quite a bit smoother on windows than on linux using the Xrender backend.
And one of the things that really gets to me is the fact that both major compositors seem to become sluggish and act weirdly after some uptime.
Is it just me? Or do many people with criticisms of X need to “get a life”. Visual artifacts? When I was growing up, we had a device called “rapid reader”. It had a card with short, printed sentences on it, and you could slide it so that a single one resided in the view window. It had a spring-loaded shutter that you could snap into place. Slide the card, Push a button. And whhht! You had a tiny interval of time in which to read the sentence.
I really can’t help but feel that some people are still trying to play “rapid reader” with display managers today.
And no. I note no visual artifacts, that are of any consequence to me, on my X displays.
Of course, if I plug a projector into my netbook during a presentation, it’s anybody’s guess what X might do. Hopefully, no one gets hurt.
(And, as an aside… I’ve been away for a while. But the OSAlert readership is about the best I’ve seen anywhere. And despite my criticisms in the past, the OSAlert moderation system is one which I find myself holding up as a golden standard to some of the less enlightened “cookie cutter” board administrators out there. Good job, Thom and Adam!)
As a former Linux user, I find this insulting to my previous OS of choice. There is great tech in the Linux world, sure: JACK, NetworkManager, udev, APT… But X.org is *not* among them. It’s slow, it kills all apps when it crashes, its standard widgets (Motif) haven’t been updated for ages, leading to the appearance of a bunch of incompatible toolkits… And let’s not consider the time it took to get mouse hot-plugging working.
The sole thing which Xorg does very well is error messages. /var/log/Xorg.0.log is a great way to know what has happened when X has crashed. Again.
I don’t claim to love X (in fact I am programming my own window system similar to Wayland), but basically all the things you said about it are wrong.
Well, its architecture is not the greatest, but pretty much any computer can handle it fine. (By comparison, Windows Vista and 7 are very slow, even though their graphics architecture is fairly well designed.)
No, apps decide to kill themselves when they crash. And guess what, if Windows crashes, all your apps crash too!
Motif isn’t really the “standard widget toolkit”. It was an old toolkit used many years ago that got replaced with much better things.
By a bunch, I think you really mean two. Gtk+ and Qt. However, they both cooperate pretty well, and can be themed to look the same very easily.
However, I do agree that we should only have one standard toolkit: Qt. Gtk+ is simply technologically inferior in every way. I would be very happy if GNOME was rewritten in Qt…
Try to play with it when some heavy calculation is running in the background, and see if it remains responsive ^^ Though you’re right that “slow” is not the right word. In terms of performance, X is acceptably fast, except for demanding things like games, where I think crappy drivers and compositing are more likely to blame.
Wrong. If windows’ graphic layer crashes, and I’ve seen it crash many times, the desktop disappears for a moment, then reappears with all your apps on top of it. You can safely save your work before rebooting your computer and investigating what’s wrong if crashes happen again.
There’s also the Enlightenment toolkit which is slowly getting popular lately, but you’re right that the UI toolkit mess on Linux is not nearly as complicated as the media API mess…
I know, QtCurve is one of the first things which I install on my Linux boxes. However, it’s just about looks. Things like open/save dialogs are still inconsistent…
Well, I agree that Qt looks much better for the most part, but developers are lazy (tm). Properly rewriting GNOME in Qt would take a long time, during which people would continue to complain that the GNOME desktop stagnates, and the result of the rewrite would be extremely buggy. Do you think the GNOME devs would ever take this risk?
I’ve never had a problem with it. X is designed to be able to do that, but I’m not sure how good the implementation is.
Well, maybe it can do that in some special cases, but I know that I’ve had Windows 7 and XP crash and bring down everything. My point is, in any OS, if a critical system component goes down, everything will go down. And Windows in general is more prone to this since it is more tightly integrated. Also, I’ve never had X crash, but I use only Intel hardware, which has very good drivers, so I understand that I’m not in the majority…
There is certainly nothing in X that prevents apps from not killing themselves if X crashes. The main problem is that right now, the toolkits leave a lot of state on the server. However, if you look at the design of Wayland, there is really no server state. The kernel stores the window surfaces. This means that it would be trivial for apps on Wayland to reconnect. The nice thing is that the same method is entirely possible with X too, toolkits are just lazy.
I don’t think so. Enlightenment is gaining some usage with Samsung’s Bada thing, but other than that, my guess is that its usage is declining.
This is just a myth popularized by Adobe (which is just using the myth as an excuse to be lazy). It’s actually really simple. If you are writing anything to do with playing or editing videos/music, you use GStreamer. If you just want to play some simple sounds in your app, you use libcanberra. If you are writing a game, you use SDL. That’s pretty much it. Those libraries will handle all the low level things that you don’t need to know about. It’s really not complicated…
True, it’s not perfect. It still beats Windows, where every other app decides to draw its own ugly, unthemable widgets. Linux is certainly way more consistent in this regard. However, Mac OS X beats them both by miles.
Yes, it’s really not realistic. I’m probably going to start using KDE soon, because I’m tired of GNOME not doing anything interesting in the last 5 years, and I really don’t like GNOME-Shell.
X is a server, not a framebuffer with compositing and acceleration. It is a design made to scale up and to allow advanced corner cases like headless, or mindless (thin), computers.
It was designed to be that, and it is. If you think it’s too much for you, then it probably is. Some design decisions may make some aspects of it look strange compared to Windows, but they were not made without reason. Your needs are just too restricted to see why it was done that way.
Yes, I understand that. For what and when X was designed, it was awesome. It still is awesome for remote stuff. On the desktop, it’s merely okay.
Personally, I think that network transparency should be built into toolkits like Gtk+ and Qt, not stuffed into the windowing system. I think the way Wayland does it is the best for general desktop use.
Looking at other posts, it seems that several other people have this problem, but that it isn’t necessarily related to X. So although I don’t understand how an unresponsive graphics layer can exist at all (just put X and the toolkits at RT priority! ^^), I won’t try to argue.
I only encountered this because of a BSOD. If you’re talking about X, please compare it to the equivalent part(s) of Windows. A kernel crash is not the same as a graphics layer crash.
A graphic layer is *not* a critical component. As long as the kernel is alive, every error should theoretically be recoverable without reboot. As far as I know, Windows NT does this pretty well…
I don’t want to discuss general Windows quality, just to say that their design, in this precise area, looks better to me.
That’s probably it. Before my new laptop (which won’t ever work with anything else than VESA on linux), I’ve never owned any graphic hardware from Intel because I used to play games in the past. Drivers from ATI and NVidia can cause severe crashes, both on Windows and Linux.
This looks interesting. I just wonder: do you think the “lazy” part is important enough to prevent the guys from the Qt and GTK teams from fixing one of the worst design flaws of the Linux desktop? Are those behaviors so deeply embedded in Qt and GTK’s design that a major rewrite would be necessary to get rid of them?
Ubuntu is considering using it too, and since it is at the moment the #1 beginner linux distro, chances are that they could turn it into less of a technological demo and more of a mature desktop…
I tried E17 some time ago; it looked very nice and its speed was impressive, confirming my impression that hardware acceleration should not be needed on the average PC, but it lacked some global vision and was still too buggy for everyday use.
Well, they are partly right. I remember days spent fighting with OSS emulation, nightmarish encounters with PulseAudio, and struggling to make all applications support popular codecs (be they based on GStreamer, FFmpeg, or Xine). Then there was aRts fighting with ESD, before Phonon came around. And JACK was incompatible with all of them for some reason, even if some walkthroughs pretended to make other apps work with it (a flat-out lie). Looking at this, I think there really is a problem with the multimedia infrastructure on Linux. It’s just much more complicated than it should be.
In all cases, there should be one standard toolkit to rule them all ^^ Though actually, except for Adobe apps, drivers, and other support software, Windows managed to keep a relatively consistent toolkit in all apps until the XP days. Then came the Vista days of consistent look&feel inconsistency, which are far from being over yet…
QtCurve might suck, but QGtkStyle (the style Qt uses on Gnome desktop) looks just like the Gnome theme looks, and uses the native dialogs.
I don’t see a need to rewrite anything in Qt. Why rewrite a program in the first place? You just let the old program live on and write a new program.
Right now the innovation within Gnome happens on Javascript + Clutter…
If you’re using the default Oxygen theme, try the Oxygen Molecule GTK+ theme at:
http://kde-look.org/content/show.php?content=103741
You’ll probably have to manually install it, but I find it makes GTK+ applications fit in just as well in KDE as QGtkStyle does for Qt applications in GNOME/XFCE. It still doesn’t use native dialogs, but it does make the GTK+ dialog look like it belongs in KDE.
I don’t know why more distributions don’t package it; the only ones I know of that do are Gentoo and PCLinuxOS, the latter of which uses it by default.
Try to play with it when some heavy calculation is running in the background, and see if it remains responsive ^^
Are you saying that X becomes unresponsive if you have heavy calculation in the background? If so then you’re just talking out of your rear-end. I often compile stuff and as you should know compiling IS rather CPU intensive. And I haven’t noticed any kind of lag or issues with responses from X.
Wrong. If windows’ graphic layer crashes, and I’ve seen it crash many times, the desktop disappears for a moment, then reappears with all your apps on top of it. You can safely save your work before rebooting your computer and investigating what’s wrong if crashes happen again.
True, indeed. And surprisingly many people insist that X and Windows act the same in this regards but they don’t: I’ve had X crash several times and it took down everything I had open, including a coding session I hadn’t saved for half an hour. But I’ve also had Windows graphics layer crash, in XP and in 7, and all that happened was that the screen went black for a moment and then got back, with all my apps still intact.
What does this mean? Well… that even the god damn old XP handles this thing better than X!
I was under the impression that apps crashing when X goes down was due to the toolkits and not X, since X is not actually killing anything nor stopping apps from reconnecting.
At the end of the day you are still losing your work if X crashes, but the blame is not on the usual suspect.
Why do people keep saying this? X handles it fine. It’s Gtk+ and Qt that are too lazy to implement it properly.
A good way to understand it is to look at how Wayland works. Pretty much all clients do is get events from the server, and give handles to kernel-managed buffers containing the window contents to the server. This means that a client would not even have to know if the server crashed. A new server could just take over on the same socket, and everything would be fine.
The same method is possible with X, it’s just that X makes it a lot easier to design your toolkit in such a way that this is not possible… (like how Gtk+ and Qt do it now)
Again, don’t blame shortcomings of Linux on X11. Just because Linux has a horrible scheduler that can’t handle background tasks without locking up the X interface layer, doesn’t mean that’s how it is for every OS that runs X11.
For example, FreeBSD can run a tonne of compilations and file transfers in the background, without affecting mouse/keyboard input in X.
There’s nothing wrong with X11 … just with some of the OSes that it runs on.
Really, there is no problem in Linux or X with interactivity.
Don’t make things up. The CFS scheduler is quite good and has no problems with interactivity on the desktop.
If you want to see a bad scheduler, look at Windows XP. I don’t know if it has been improved in Vista and 7 though.
Umm… so can Linux. I do it all the time.
While there are some things I don’t like about Linux, in general, we would be much better off if X were replaced than if Linux were replaced.
No, he is right. Compiling is OK, but just try to uncompress a 1 GB rar file or copy some files in the background and the GUI will slow to a crawl. Not just the GUI, actually: CLI programs like vim are affected too. CFS is horrible for the desktop.
And I experience this both on my Core2Duo home computer and at my workplace on an 8-CPU beast, so the problem is definitely there.
An even simpler way to reproduce the issue is to use dd.
dd if=/dev/urandom of=/somefile bs=1M count=10000 can cause an X11 GUI (KDE, GNOME, Xfce, etc) on Linux to slow down to a crawl until the dd is done. Doing the same with a USB drive makes it even more noticeable.
The same test on a non-Linux system will not affect X11, though.
I’m not sure where you’re getting this…
I just ran this command:
dd if=/dev/urandom of=test bs=1M count=10000
on Ubuntu 10.04 on a Core i3 530, which is relatively slow. I noticed no decrease in responsiveness whatsoever. I even tried playing an OpenGL game. The FPS dropped by a few frames, but it was not choppy or unresponsive at all.
Maybe it’s just your particular system that has problems?
(Note: I’m not trying to say that there is a problem with X11. I’m saying that the problem doesn’t exist, in Linux or X11.)
Every system we try this on at work (Kubuntu 10.04, Kubuntu 9.10, Debian 5.0, Debian 4.0) will grind to a halt until the dd command is done. You can’t CTRL+TAB to other windows (well, you can, but it takes 30-45 seconds to register the keypress), you can’t load other apps, you basically just stare at the blinky HD light.
These are Intel P4, AMD Sempron, desktops and Intel Core2 Duo laptops, using ATi Radeon X-series PCie, nVidia something PCIe, nVidia onboard, and Intel onboard graphics.
Every system we try it on is affected, even our multi-core Opterons with 3Ware RAID controllers and no GUI (keyboard lag, 30s or more to ALT+F* switch consoles, etc).
Try it with a USB stick as the of= option to really cause havoc.
Our sys analyst is a RedHat guy, and he’s avoided dd for years because of this issue (RedHat 7-9, Fedora something, RHEL 4/5).
The last time I tried this was to make a bootable ChromiumOS USB stick. Took 20 minutes to dd the stick … during which time I could not use my desktop.
This Linux bug was my biggest problem on my laptop running Linux in 1 GB RAM.
It has absolutely nothing to do with the scheduler and everything to do with the virtual memory and disk systems.
The VM and disk get so busy writing data out to disk that the VM has no free pages and the disk is too busy to read in data blocks.
Heaven help you if the system starts to swap.
People are working to improve this in every Linux release but it’s been a big problem since 2.6 was released.
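For what it’s worth, the writeback behaviour described above (dirty pages piling up until the flusher throttles everyone) is governed by the kernel’s VM knobs. A minimal sketch of the relevant sysctls, assuming a sysctl.conf-style file; the values here are purely illustrative, not a tuning recommendation:

```
# Illustrative values only — not a recommendation.
# Cap how much dirty (unwritten) data may accumulate, as a
# percentage of RAM, before writing processes get throttled:
vm.dirty_ratio = 10
# Start background writeback earlier, so less data piles up
# before the flusher kicks in:
vm.dirty_background_ratio = 5
```

Lowering these tends to trade peak write throughput for smoother interactivity during big sequential writes like the dd test above.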
It is an official Linux kernel bug affecting 15% of x86_64 installations. It is not the scheduler; the kernel just fails to do its job. Linus tried to solve it but failed, and many others have tried too.
My desktop is affected by the bug, my laptop is not. It depends on the chipset and some other hardware parts, or rather on their drivers.
In fact, no theming is even necessary. Qt treats GTK/Gnome similar to how it treats OS X and Windows these days: it assumes the widget styles of the system. To use a Qt/KDE theme on a Gnome desktop, you actually have to jump through hoops, which I think is a very good thing.
Whether GTK is or isn’t inferior, there is no denying that it is faster to take advantage of native Linux features.
Yes, but the problem is that Gtk+ doesn’t do the same. Running Gtk+ apps on KDE is still kind of painful.
No, I think you have that backwards. (And what native linux features are you talking about?) Qt is the one that jumps through hoops to look like Gtk+, while Gtk+ doesn’t play very nicely on a KDE desktop.
That’s a problem with GTK+, not with X11. Go complain to the GTK+ devs to get their heads out of the sand, stop with the NIH, and start co-operating with others.
It’s amazing how often QT/KDE devs bend over backwards to make GTK+/GNOME interoperate and look nice on a non-GNOME system … and how rarely it’s reciprocated.
Yes, I know that. That was my whole point.
I know. It annoys me because Qt/KDE does all the extra work and has way better technology, but I just really don’t like KDE that much. So I use GNOME, even though it feels like I’m stuck in 2005 and Gtk+/GNOME is technologically inferior in practically every way.
Also, KDE/Qt apps can be used with the other X.org server, Xfbdev, with little performance penalty, while Gnome/GTK apps can’t be used at all at any resolution higher than 2×2. Qt is able to ask the framebuffer to update a bunch of pixels at once directly, while GTK redraws the whole screen pixel by pixel. So I can browse the web, watch a video on the framebuffer, or scroll thumbnails in Dolphin at more than 24 fps, while GTK cannot even draw them in less than 2 seconds per frame.
That’s when you really see how well Qt works and how well it is integrated into X.
You have no GPU but a lot of CPU? Fine, it will use the raster backend and draw the window as if it was a moving image (bitmap).
Have normal needs, or networked ones? Fine, the X11 backend will use the protocol in an efficient way to draw the window (as in my Xfbdev example).
Have a good GPU, or want to save energy on a smartphone with an efficient GPU and a power-hungry CPU? The hardware-accelerated OpenGL backend is there for you (though still a little experimental).
I don’t agree. First off, GTK solves fewer problems than Qt. It doesn’t integrate its own XML or SVG parsers, network tools, and everything else Qt does; GTK is just a widget toolkit, and is lighter. Secondly, there is an at-spi bridge that works. Qt with at-spi does not work yet. It’s ready to work with at-spi over D-Bus (at-spi2), but at-spi over D-Bus is not finished. There is no C binding, and applications will not be ported to Python.
So yes, Qt is a very good toolkit and superior to GTK in many ways, but not in every way.
“And guess what, if Windows crashes, all your apps crash too!”
Really, FUD-boy? Not since Vista and Windows 7. What a tool.
It depends.
If “Windows crashes” = BSOD and the like, he’s right.
If it means graphics layer crash, it’s not been true since the very first releases of Windows NT ^^
Well, out of that list, really only JACK and apt are noteworthy. The rest are just kludges on top of missing kernel features. Other OSes have better solutions that Linux is still catching up with.
X11 most definitely *is* one of the greatest pieces of OSS available for Unix-like systems.
Define slow.
Yes, that is an issue.
Do any modern apps still use Motif?
And how is this any different from Windows or MacOS X??
Mouse hot-plugging in X11 has worked on non-Linux OSes for many many many years (FreeBSD has had moused (/dev/sysmouse) support since the early 3.x days, for example). Don’t blame X11 for a shortcoming of Linux.
I don’t know how long FreeBSD or Linux has supported it, but I know that I’ve been hotplugging USB mice since I started using Ubuntu four years ago. FreeBSD may have supported it earlier, but it’s not like Linux got support yesterday…
For some reason, you really seem to have a strong anti-Linux agenda. Whenever anyone points out some problem with anything, you immediately attribute it to a shortcoming of Linux, and point out how FreeBSD has done that right for years. I’m sure you’re sometimes correct. FreeBSD is awesome (I just use Linux because it’s given me less hardware problems). Just try not to be a fanboi.
The kernel device /dev/input/mice has worked since at least 2001 and maybe earlier. This device takes the input from all mice and presents it using the Microsoft Explorer PS/2 protocol.
If you or your distro set X to use /dev/input/mice, then hotplugging any number of mice worked flawlessly.
The only exceptions were special devices like the Space Orb, multi-ball trackballs, mice with more than 7 buttons, etc.
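For reference, the setup described above would look roughly like this in xorg.conf (a sketch; the identifier is made up, and the device path and protocol name are the conventional ones, so your distro may differ):

```
Section "InputDevice"
    Identifier "HotplugMice"
    Driver     "mouse"
    # The kernel multiplexes all connected mice into this one node,
    # so X never needs to notice a mouse coming or going:
    Option     "Device"   "/dev/input/mice"
    Option     "Protocol" "ExplorerPS/2"
EndSection
```

Because X only ever opens the one multiplexed node, hotplug works even without any HAL/udev integration in the server itself.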
Actually, since you mentioned remoting as a strong side of X: it is not really X’s strong side, because the protocol is way too low-level. Just compare X and RDP. RDP wipes X’s buttocks and has been for years. X’s remoting worked fine for simple terminals and the Athena widget set, but that’s it; once you move beyond that, you have to rely on protocol compression hacks or entirely different remoting protocols like VNC to get decent performance over an average network.
Err, I use X from a Linux server to my Macbook all the time on a 100 Mbit network. gvim, gedit, meld, wireshark and others all work great.
I never use compression hacks or VNC. Well, I have used VNC but not because it performs better than X.
Replacing X with something new would be very welcome to me. I’m a GUI toolkit and desktop application developer. X has the worst performance of the big three (OS X, Windows, X) – even a relatively new project like Haiku seems to have better graphics performance than X.
I hope one of the big players like IBM could start such a project. Many individual developers have tried, but those projects very quickly get discarded.
As for the “wonderful” remote/networking support built into X – I don’t see that as relevant any more. That was designed for the monochrome terminal displays of 20+ years ago. VNC, RDP and a host of others all manage the same thing (in color) with much better performance!
Apple had the right idea with OSX, by writing their own GUI – it’s time the rest of the *nix world wakes up to that fact – X11 sucks in performance.
PS:
I run Linux at work and home, as my only development platform. Windows only lives in VM session on my systems. So please don’t take this up as a rant/flamewar request – it’s purely my observation of X in my day-to-day work.
X? So are you referring to all X11 implementations, or a particular one out of: X.org, Accelerated-X, Hummingbird eXceed, Reflections X, Cygwin/X, …
A Red Hat X developer is working on a lightweight alternative – Wayland http://groups.google.com/group/wayland-display-server
The good news is that he knows X – what’s good and what’s bad about it – because he works on it. Better still, with his stack it’s possible to run an X server on top for applications that depend on X. More good news is that Wayland uses the new driver infrastructure that is also used by X. As I understood it, these drivers exist for ATI and Nvidia, but sadly not for Intel.
I think there are already proof of concept integration with GTK+ running on top of Wayland.
I’m just going to say that I think Wayland also has some design issues. The main one is pushing too much windows management into the clients. The plan is that things like window moving and even wobbly windows would be implemented in the client. To me this is a clear layering violation. It also makes it way harder to do things like tiling window managers.
Personally, I am working on my own windowing system like Wayland, mainly as a proof of concept for how I think window management and input should be done. I think Wayland has a much greater chance of actually becoming the standard Linux windowing system though.
That’s possible, but you would actually almost never do that. Almost all apps use Gtk+ or Qt, which would simply be ported to run on Wayland. If for some reason that doesn’t work, you would run a rootless X server like Xming for Windows, so that the apps would properly integrate with the Wayland desktop, much like X11 on Mac OS X. (Note that this article really should not say “rootless”, because that has a very different meaning in X.)
Nope, you’ve got that completely backwards. The drivers exist for Intel. ATI would work with a few small patches. NVIDIA’s binary drivers will probably never work with it, but the Nouveau drivers would also work with a few small patches.
Nope. That’s also backwards. I think Qt is much closer to being able to run on Wayland (especially with acceleration) than Gtk+ is.
Thank you for the information. I use Linux too little (I work with .NET/Windows), but I am interested in it, and I’ve been able to introduce Linux/GPL into the company I work for. So now we use a Debian Linux server and open software such as Subversion/Bugzilla. The X Window System works, but I haven’t been too impressed with it, and as a system architect I know that it’s only after you’ve built a system and used it that you find all its shortcomings – and much has happened in the last 20 years that the architects of the X Window protocol couldn’t predict.
I haven’t heard of that one yet, but there are a few X-replacement prototypes available on the net. Y being one of them (very unoriginal name though <smile>). Some have put a lot of research in them.
The major obstacle is obviously convincing GTK and KDE (or Qt rather) to support the new windowing system. That way existing GTK and KDE and Qt apps should be able to move with relative ease.
The other question raised is should the replacement windowing system include a “default” widgetset as well.
X11 performance for me is fine with good drivers. Most people are really just pissed about slow drivers. X itself is just fine, performance-wise. If drivers properly accelerated all the things they should, I bet the complaints about X being slow would go away overnight.
Actually, even X’s remoting is lousy protocol-wise: while it scaled perfectly for simple widgets like Athena, it falls flat on its face, traffic-wise, with complex modern UIs. You need protocol hacks to get the data load down.
People have been saying that X sucks for 10 years and I agree it might be the time to either let it rest or make a huge overhaul into X12 instead of doctoring around on X11.
The toolkits apparently make very bad use of the protocol, but the biggest culprit is Xlib. XCB supposedly improves network performance considerably, but no toolkits use it as of yet.
I believe X.org (and actually Xlib) already uses XCB, but overall X11 performance is still very bad. Doing ldd against almost any X11 applications shows a dependency on libxcb.so – how much of XCB it uses, I don’t know..
$ ldd gnome-terminal
…
libuuid.so.1 => /lib/libuuid.so.1 (0xb7231000)
libxcb.so.1 => /usr/lib/libxcb.so.1 (0xb7217000)
libpixman-1.so.0 => /usr/lib/libpixman-1.so.0 (0xb71d4000)
…
$ ldd xmag
…
libxcb.so.1 => /usr/lib/libxcb.so.1 (0xb7c4f000)
…
$ ldd /usr/lib/libX11.so.6
linux-gate.so.1 => (0xb8085000)
libxcb.so.1 => /usr/lib/libxcb.so.1 (0xb7f67000)
…
It’s not just the remote protocol in X11 that is slow; everything related to windows is slow. I have rather fast Nvidia and ATI video cards, with proprietary drivers installed, but performance is not even close to what I get when I reboot into Windows. Simply dragging a window around shows tearing on the edges, and dragging Firefox around while viewing the OSAlert site shows really bad draw artifacts (never seen under Windows). The list goes on.
Xlib only uses XCB essentially as a transport layer. It still has the same bad implementation on top of that.
I think you need to check your driver configuration or talk to some X.org devs because unless you have a really crappy gfx card (and it sounds like you don’t), you shouldn’t be seeing those issues. I have a T500 with a Radeon 3650 in it and it is quite snappy and I don’t see artifacts or tearing. Compositing is also quite usable, which will significantly reduce flickering (just like in Windows Vista/7).
This is not a performance problem, it is a vsync problem.
Use the framebuffer then. The rest of us are happy with X.org. GTK works on the framebuffer; your toolkit should be able to do it too.
The framebuffer is well supported by Linux. It’s just that all applications run on X because it is much better.
Not even close. VNC is a hack and will always be a hack. It runs on the server, which means the server has all the bloat and security holes. It runs at a fixed resolution, and you can only run a full desktop, not a single window. Use X with FreeNX just to see how much VNC and RDP suck.
And who runs Mac OS X on a server?
OpenWF can be used. Syllable also has a GPL solution for end users. The only problem I see is the lack of a standardized GPU access method (like USB mass storage).
Why don’t assholes like yourself write the code for it? Oh, that’s right *YOU CAN’T*
Loser.
True, he can’t. But it is an example of one way that X is inferior to other window servers.
It is something the X devs have said would require re-architecting X to accomplish, yet it is something I definitely want (it drives me up the wall watching videos with people’s faces torn down the middle).
And who are “they”? Where do these “they” people come from? Will you pay their salary?
Old news, mates. I ran a rootless X server over 10 years ago!
Funny thing … OpenBSD did X.org privilege separation ages ago. Too bad that lazy X.org devs care only if you are a big software vendor, like Canonical.
Or am I getting it wrong? even if they [Xorg] get this idea recently … well, it’s better late than never.
One of the OpenBSD developers has become the leader of the Xorg Development … I forget his name, and cannot seem to find it with a quick google.
Matthieu Herrb, and he’s not “the leader,” but is on the Board of Directors.
Didn’t know that. Thanks – that explains the whole thing. Canonical, and Linux in general, never used to give a damn about security in the long run [features come first], so that looked just awkward. And yes – this developer is Matthieu Herrb, and he did a wonderful job in OpenBSD.
I saw this story on Phoronix (http://www.phoronix.com/scan.php?page=news_item&px=ODM2Ng) just before it got published here. Shouldn’t this be a link?
I can’t believe they’re still running X as root with NVIDIA’s binary driver.
AFAIK, it was possible with NVIDIA’s driver years before the open source drivers! IIRC, all that is necessary is that the driver’s device files need proper permissions.
This seems like a very clear case of ignoring binary drivers out of ideology…
Nvidia probably did it right because they also wrote drivers for SGI and Solaris, both of which ran X as a user-level process. If I recall that correctly.
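The permission fix described above would amount to something like a udev rule granting a non-root group access to NVIDIA’s device nodes. A hedged sketch (the group name is an assumption; distros differ, and some ship their own rule):

```
# Hypothetical udev rule — the "video" group is an assumption.
# Matches /dev/nvidia0, /dev/nvidiactl, etc., and makes them
# readable/writable by group members, so X need not run as root
# just to open the driver's device files.
KERNEL=="nvidia*", MODE="0660", GROUP="video"
```

With the nodes accessible, the remaining blockers for a non-root server are the ones listed in the article: /dev/backlight-style interfaces and /dev/input/* permissions.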
You should not use the word “rootless” to describe a server that is run by a non-root user.
“Rootless” has a totally different meaning in X. A “rootless X server” is an X server that does not have a root window (desktop). For example, the X servers on MS Windows or OS X are rootless – because they do not draw a background, they “embed” X windows in native windows.