We reported earlier on a blog post entitled “Ubuntu Report Card (2009)” where the author detailed how they felt the Ubuntu experience had improved over the years. In a follow-up series of articles looking at the future, Tanner Helland has written 10 different broadly-scoped feature requests that [he] ‘and many others would like to see by the time Ubuntu 10.10 rolls around’.
Whilst some of these are more than purely ‘feature requests’, as they cover relationship and management issues, they are nevertheless important to the success of Ubuntu. Here’s how the 10 days break down:
- Day 1 – A Great Package Management (Add/Remove Software) Experience
- Interestingly, Tanner asks that the “store” moniker actually be dropped from Ubuntu’s plans for a new UI on top of the package manager that makes adding and removing software easier. Tanner argues that because the ‘store’ will be doing a whole lot more than just selling software (updates and patches, upgrades, drivers, codecs and more), and most software will be free, calling it a ‘store’ would be confusing and misleading to users.
- Day 2 – A Music Player That Doesn’t Suck
- Whilst there’s plenty of variety in the Linux scene as far as music players go, they are rather narrow and lack the kind of features and usability that Windows migrants expect from iTunes. “New users should be given a great default option, because not everyone wants to try out 15-20 possible music players just to settle on one that doesn’t do half of what iTunes does.”
- Day 3 – Improved Visual Aesthetics
- Take a look at Apple’s “Get a Mac” front page. What’s the first line? “It’s gorgeous. Inside and out.” Could the same be said of Ubuntu? Not with a straight face.
- Day 4 – Real Wine Integration
- Tanner proposes a simple way in which Ubuntu could offer a smoother, more integrated experience with Wine, to help Windows migrants acclimatise.
- Day 5 – Solid, Functional Video Editing
- Tanner weighs in on the video-editing situation on Linux and surveys five specific video editors.
- Day 6 – Simple, Reliable, Integrated Backup Tool
- Ubuntu does not ship with any GUI backup tools and backup in general is a sore spot compared to Windows and Mac. Where is Time Machine for Linux?
- Day 7 – Mend Key Relationships
- As part of a larger ecosystem, Ubuntu has to interact with a large number of communities, some of which are at odds with the distro.
- Day 8 – Better Online Video Experience
- HTML5 video needs evangelism; Linux cannot rely on Adobe to solve the Flash problem soon enough. (I’ve personally found the HTML5 video tag to be less reliable on Linux than it is on other platforms, which really doesn’t help.) That said, there’s still room for improvement in Gnash and Swfdec.
- Day 9 – Renewed Focus on Marketing
- Personally I think there is a lot more to marketing than Tanner makes out: very few people even know what a “browser” is, so telling them you have one for free isn’t all that effective if they still don’t know what it is they’re getting for free; and “Ubuntu” could be hair cream for all it sounds like to consumers.
- Day 10 – Paper Cuts, Paper Cuts, Paper Cuts
- The biggie, the one that matters. Really, it’s the little things that make OS X so good, and Ubuntu should follow suit. I’m in full agreement here. The paper-cuts project will make or break Ubuntu in the long run.
Remove pulseaudio
Not sure what the deal is with people whining about PulseAudio. I’ve never had any issues with it, and I hear it makes working with more than one sound card a breeze. Sure it’s young, but it’s a good project.
Worksforme(tm)
Good for you.
It sure as hell doesn’t work for me on my Audigy ZS. It’s a known bug.
Fix the bugs then maybe it will be worthwhile.
PulseAudio for me has on a number of occasions introduced “clicking” and stuttering in the audio, which is pretty easy to hear. Right now I’m running Ubuntu without Pulse and everything works fine (in which case, why do I need Pulse?). The downside is that removing Pulse breaks Update Manager and apt, because of the ridiculous dependency on Pulse from ubuntu-desktop.
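For reference, the removal itself is a single command; the catch is what apt drags out with it. ubuntu-desktop is only a metapackage, so letting it go does not uninstall the actual desktop, it just stops new default packages being pulled in on upgrades:

sudo apt-get remove pulseaudio    # apt will offer to remove ubuntu-desktop too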
“Pulseaudio for me has on a number of occasions introduced “clicking” and stuttering in the audio which is pretty clear to hear”
https://fedoraproject.org/wiki/Bug_info_PulseAudio#Playback_problems…
Which is nice, but Fedora is intended to be a bleeding-edge distribution; Ubuntu is supposed to be usable. The quicker solution is ‘apt-get remove pulseaudio’.
PulseAudio has always been a disaster for me too. Stuttering sound that drives me insane. With each new install of Xubuntu I try it, discover it is still useless, then get rid of it in favour of alsa.
I would agree completely. Especially since Pulse is the thing providing software sound mixing (unless dmix has suddenly started working properly and I just missed that bulletin). My understanding, gathered from battling with my sound system on various distros over my short few years of Linux experience, is that ALSA offers exactly as many sound output channels as your sound hardware actually has; and, surprise, most people’s integrated sound systems will have one hardware sound channel, which every other operating system would multiplex to produce multiple virtual sound output channels in software. ALSA doesn’t do that: it’s left to userspace systems (i.e. Pulse!) to provide that functionality. So, without Pulse, only one application will be able to use the ALSA device to produce sound at any given time.
I understand that newer ALSA versions do include a software mixer plug-in, dmix, and that it’s supposed to be enabled by default. Now, in theory, if your ALSA is new, you can just drop multiple sound streams on ALSA, and it will mix them in software and output them on your real sound hardware (like every other OS’s sound system has always done). Except that, insofar as my experience goes, most ALSA-speaking applications don’t understand dmix, and the ALSA installations I’ve encountered don’t actually have working, default-loaded dmix plugins. So, if dmix works well enough to make Pulse unnecessary, it’s news to me.
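For anyone who wants to test dmix themselves, the usual recipe is a hand-written ~/.asoundrc along these lines (a sketch; “hw:0,0” is an assumption, adjust for your card):

pcm.!default {
    type plug
    slave.pcm "dmixer"       # route everything through the software mixer below
}
pcm.dmixer {
    type dmix                # ALSA's software mixing plugin
    ipc_key 1024             # any unique key, so clients share one mixer instance
    slave {
        pcm "hw:0,0"         # first device on the first card
        rate 48000
    }
}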
Edit: shorter post:
The system should be doing the job Pulse does; the equivalent of Pulse is a system service on every other OS on Earth. Until ALSA starts properly providing multichannel sound in software, something else is gonna have to do it, and presently Pulse does the best job.
No. Instead fix whatever bugs are left in PulseAudio and the ALSA drivers. It has come a long way since it was first added, prematurely and without configuring other apps in the repository to use it by default. This has given PulseAudio a worse reputation than it deserves.
Patch badly coded applications and work on better backward compatibility with ALSA and OSS. The CUSE subsystem (the character-device equivalent of FUSE), which appeared in kernel 2.6.31, is a very good start.
And remove ALSA. It’s failed; no other *nix wants it, and much of Linux doesn’t either.
Update OSS to at least OSSv4, i.e. going back to the Unix/Plan 9 way: everything is a file, a single naming system (the filesystem), KISS, and all the other Unix things ALSA isn’t.
Then update and use http://www.chaoticmind.net/~hcb/murx/xaudio/ for audio over the network. That way it can use the same ssh connection as graphics, which is cleaner (design) and faster (less duplicated work).
Don’t get me wrong, despite what some say, Linux audio does work, but it is ugly and not Unix at all.
… Given the fact that I’m using ALSA on 5 different sound cards (SB1, SB2SZ, 3 different variants of snd_hda) and have yet to experience any problems, your post is dangerously close to FUD’ing.
– Gilboa
Not saying it doesn’t work. It seems to work just fine. I’m saying I don’t like the way it works. It has its own addressing system (outside the filesystem) and doesn’t work via a file interface. It’s not like a Unix component. Which, I think, is why none of the other Unixes have taken it, and why it’s not managed to kill OSS. It’s a shame OSS went crazy/closed, but ALSA ignores that it’s for a Unix.
… You seem to forget that Linux was forced to use ALSA because OSS wasn’t truly open at the time (only 3.x was open; 4.x was proprietary).
As far as I know (correct me if I’m wrong), barring the weird device-node configuration (/dev/snd/* as opposed to /dev/mixer* and /dev/dsp*) and a somewhat cleaner API, there are no technical reasons to switch back. Am I wrong?
– Gilboa
I understand that OSSv4 was proprietary (big mistake), but I’m unclear on whether they could have implemented a separate, more modern OSS instead of going and doing the un-Unix ALSA.
There is no reason a modern OSS couldn’t have something like /dev/snd/<card_name> symlinked as /dev/dsp*. The device file /dev/dsp* is a real device file, in that “cat /dev/random > /dev/dsp1” gives a blast of noise out of the first sound card. The great thing about having everything as a file is that a computer is just a filesystem, so in theory you can create a mash-up computer out of bits of other computers by mounting their folders.
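A couple of shell one-liners make the appeal concrete (the symlink is the hypothetical scheme above; device names vary per machine):

cat /dev/urandom > /dev/dsp       # a blast of white noise out of the default card
cat /dev/dsp > sample.raw         # record raw samples straight off the mic
ln -s /dev/snd/mycard /dev/dsp1   # the hypothetical per-card symlink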
Clean API and file interface, which is why it’s easier for ALSA to ape OSS than for OSS to ape ALSA.
Pulse only came along (argh, a third solution) because of the OSSv3/ALSA mess. I much prefer the idea of http://www.chaoticmind.net/~hcb/murx/xaudio/ for network sound.
Second, judging the Linux kernel devs’ decision in retrospect is easy.
At the time, Linux devs had no idea if or when OSSv4 would be opened, and as such they used what was available at the time: ALSA.
As it works out of the box for 90% of people (if not more), I see no reason for them to change their decision.
True, I do prefer ‘cat file.wav > /dev/dsp’, but this doesn’t look like a compelling reason to switch back to OSS.
I have zero experience with OSS and ALSA as APIs, so I can’t really comment on either one.
False.
As far as I can tell, Pulse came to solve 4 issues:
1. -Reliable- source mixing for cards that do not have a working hardware mixer.
2. Dynamic volume management across different streams.
3. Ability to dynamically reroute streams to different sound cards (see the pacmd sketch after this list).
4. Network transparency and multi-seat support.
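To make point 3 concrete, it can already be driven by hand with pacmd; the index numbers below are illustrative:

pacmd list-sink-inputs      # find the index of the playing stream
pacmd list-sinks            # find the index of the target card's sink
pacmd move-sink-input 5 1   # move stream 5 onto sink 1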
You may claim that Pulse is too buggy to be effective, but AFAIK neither OSS nor ALSA can support 2, 3 and 4.
I still don’t use Pulse, as I find it far too unreliable (partially due to buggy drivers), but I have no doubt that in the long run, Pulse is the way to develop a versatile sound system for Linux.
– Gilboa
4 is the one I was saying could be done via an X audio plugin, thus using all the same networking code, the same ssh connection, etc.
OSS and ALSA should do 1; a third solution shouldn’t be required.
2 and 3, OK, but do those need a whole other solution?
My problem with ALSA is I just feel it’s not Unix, and it’s moving away from the Plan 9 ideal of Unix. It’s not just aesthetics: if you have a file interface, no special APIs are needed. Now OK, anything that needs audio has one of many APIs available, but any new environment would need a binding, etc. That’s just not Unix!
I probably should move to a BSD if I want a purer Unix, but I like the GPL... and the size of the community... the speed of development... etc.
As I said before, I can’t say that I disagree about the file semantics, and it should not be a reason to switch OS. (At least IMHO).
But in the end, Linux -really- needs a good hardware-independent sound server/mixer. Pulse may or may not be the solution, but in 3 years’ time we will not understand how we could live without it...
– Gilboa
I don’t see Pulse taking over any more than ALSA did. It all works, and is getting better, but the design and direction just isn’t one that will get my vote. I don’t think the ex-Bell Labs Unix/Plan 9 elders would approve of how this all hangs together.
I’m 3/4 of the way through Lions’ commentary on the Unix v6 source; those guys really knew their stuff, and wouldn’t have grown Unix the way it has grown. Plan 9 is how they would have done it. Linux audio seems to be completely ignoring the Unix wisdom. It depresses me.
I blame Windows for polluting OS thinking!
Glendix might help, but there is no mention of audio, and I’m not holding my breath.
I guess for me, not being a Unix component is a big deal, but not quite enough to leave Linux and make my life more difficult.
It might be that an upgrade to 9.04 from 8.04 is where the failure of pulse shows itself. Or the failure of pulse integration.
I use mplayer. About 33% of the time I get flawless playback. The other 33% of the time I get crackle at startup and possibly no audio. The final 33% of the time I get this crap:
AO: [pulse] Connection died: Connection terminated
AO: [pulse] Connection died: Connection terminated
AO: [pulse] Connection died: Connection terminated3% 1% 0.8% 36 0 49%
For mplayer I just hit ‘q’ to quit and try again. If any of these happen in firefox using flash then the browser will lock and I have to kill it.
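One workaround, which I can’t promise works everywhere, is to point mplayer straight at ALSA and bypass Pulse for playback:

mplayer -ao alsa movie.avi    # use the ALSA output driver instead of Pulse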
I’m getting kind of tired of people bashing PulseAudio. Most of the problems that people have with Pulse are configuration issues with the way Ubuntu (and some other distros) ship it. Harp on the Ubuntu team to fix the configurations.
https://tango.0pointer.de/pipermail/pulseaudio-discuss/2009-February…
Also, since release 8.10 I’ve hardly had any problems with audio... not saying that problems don’t exist, just that it seems to work fine for me.
Ubuntu needs some “official” way to run Windows software easily and perfectly… something like VMware Fusion’s Unity, but totally integrated with the system out of the box (maybe using Virtual Box and the Windows partition that 90% of PCs have).
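VirtualBox can already wrap an existing partition as a raw disk, which is roughly the plumbing such a feature would need. A sketch, assuming the Windows install lives on /dev/sda2 (getting the device wrong here is dangerous):

VBoxManage internalcommands createrawvmdk \
    -filename ~/windows.vmdk -rawdisk /dev/sda2    # VM disk backed by the real partition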
Windows Activation kicks up a stink if you try to side-boot Windows.
I was moaning in 2006 that Linux didn’t have any ‘migration wizard’ to copy your Windows profile to your newly installed Linux and put everything in its relative place (Firefox profile, My Docs / photos / videos &c.).
Nope, still not there, all these years later.
Uhm, it’s been in there for a while now in Ubuntu.
Where the crap’s that? I missed it??
When you install the system, at the user account creation stage, you’ll be prompted to “import” any Windows accounts it finds into your Ubuntu system.
The Ubuntu installer does this if you nominate a mount point for your Windows partition.
Apple did this for years, until they decided to concentrate on their OS and their users instead of trying to make Windows software work on the Apple OS. Microsoft was the one to port its software to the platform when the OS was finally ready.
All the other failing OSes had this strategy, and we only have #1 GNU/Linux, #2 Windows, #3 Mac OS X as mainstream OSes today.
You want to run Windows software? Use Windows. You want Mac OS X software? Use Mac OS X. The only real solution would be them porting their apps to GNU/Linux.
You are right, but it sucks. My favorite editor in existence is TextMate, which is OS X only. On Windows we have e-texteditor, which is almost there, but not quite. On Linux there is redcar, but it is in the very early stages of development (and currently it’s unusable on HEAD).
I really don’t want to have to spend 3k on a MacBook Pro just to get TextMate, but I have come extremely close 3 times now. If there were something like Wine for OS X, I would be overjoyed.
The Linux platform should be so compelling that the author of TextMate would _want_ to port it to Linux.
It’s an API war, and Apple are winning.
The author of TextMate flat out said he is a Mac developer, likes being a Mac developer, and is making enough money with TextMate being Mac-only not to want the pain of developing for and porting to other platforms.
He also said he gets emails all the time from people switching from other operating systems for TextMate, and he is totally OK with that since he supports Apple getting money, as he is a Mac guy.
In my mind, both of those are completely reasonable points of view, and it’s not like I am pissed off at him or anything. It just kind of sucks for me, because there aren’t any editors out there that even come close to TextMate.
It’s just a text editor. I use TextMate and I think it’s brilliant, but it’s not impossible to beat, just unlikely given the needless complexity of the Linux ecosystem that makes developing something highly polished difficult. OS X is a platform where software is very highly polished in general. I don’t know what makes it that way, but it just is.
In your opinion. Polish is in the eye of the beholder.
I tried TextMate and it is too slow for me.
My Debian box is very polished and Vim can do a lot more. (Emacs can do even more, but to a Vim user it obviously sucks.)
I have been a vim user for ages as well, and while it is a great editor (unbeatable, really, for certain tasks), there are a whole bunch of things it isn’t that great at; even basic things like indenting and coloring are rather unsophisticated compared to other things that are out there. The flip side is that it is very simple to add language support to the editor, since it is done in such a simple way. Emacs is ridiculously powerful; the problem is learning the insane bindings, and getting used to regularly using 5-key chords for every command.
TextMate reminds me a lot of emacs, just using Ruby instead of elisp as a scripting language. It also has fewer features, but the features it has are quite polished, and both discoverable and usable. Also, while both the vim and emacs communities have the benefit of having worked on addons since the dawn of computing, TextMate probably has the most currently active addon community.
Wine for OSX?
- It’s called Darwine; it’s been around for ages. Try doing some research.
I want to run OSX apps on other platforms, not Windows apps on OSX
Apple and Mac OS X still make up a pretty insignificant share of the market, even though they’ve managed to create a reasonable installed base of applications.
If you don’t pay attention to what the installed base is using then you’re going nowhere. Even Microsoft has experienced it. If you’re introducing a new OS to people, like Vista, with a new development platform that you hope will get developers writing lots of cool new applications then it doesn’t amount to a hill of beans if the incumbent installed base, Windows XP users and developers, can do nothing with it.
OSX welcomes proprietary developers and gives them a platform they can target.
Linux distros are designed around open source and seem to expect that all programmers volunteer their time.
You would think that with 1% share they would re-think this strategy of being hostile to proprietary developers.
At this point the iPhone has better games than Linux even though the latter has been in development for over a decade. That’s because the iPhone is designed around developers who want to get paid for their hard work. The GPL ideology needs to go, there’s nothing wrong with proprietary software. Programmers need to get paid like everyone else.
You can’t buy freedom.
We see this with the Apple devs.
I’d argue that a good operating system needs both. Look at Windows, for example: a fantastic selection of open source and freeware programs written by the FOSS community, hobbyists, gov’t, educational organizations, etc. And right alongside that is a full lineup of commercial offerings: games, CAD, GIS, etc.
No need for the either/or ultimatum, pick and choose whatever you need from both.
I have no problem with open source. I do have a problem with people who push an ideology that demonizes proprietary software developers.
If you think a good OS should have both then you should probably avoid Linux because every distro is designed around open source.
Why would anyone do that?
The only people who might deserve demonizing are those who would demonize you. An eye for an eye, as it were …
Now there do seem to be a lot of proprietary software developers who want to endlessly try to discredit open source. In reality, open source is just self-help collaboration, so why on earth should it be disparaged?
You are somewhat misinformed. The FSF is not every Open Source developer. For every loud-mouthed zealot, there’s an enormous corpus of normal programmers who just want to create decent, usable software that benefits them. Using Linux, or Open Source software in general, does not require that one subscribe to a particular philosophy; some will push it, yes, but more won’t, and you’ll be able to get along pretty well without anyone force-feeding you Kool-Aid.
http://ultimateedition.info/ultimate_edition/ultimate-edition-2-3-g…
http://ultimateedition.info/Ultimate_Edition_2.3/gamers.png
http://www.google.com.au/search?hl=en&q=%22open+source%22+e…
Thanks for providing links to a distro that contains a bunch of ’90s clones and Quake 3 mods, but I already knew they made up the bulk of Linux games.
Those were just the native games. They are perfectly fine if you just like to run games.
If you like spending a fortune on commercial games, there was “Play On Linux” provided.
http://www.playonlinux.com/
It isn’t perfect, but it will let you run most contemporary native Windows games on Linux.
What are PlayOnLinux’s features?
Here is a non-exhaustive list of the interesting points to know:
– You don’t have to own a Windows® license to use PlayOnLinux.
– PlayOnLinux is based on Wine, and so profits from all its possibilities, yet it keeps the user away from Wine’s complexity while exploiting some of its advanced functions.
– PlayOnLinux is free software.
– PlayOnLinux uses Bash and Python.
Nevertheless, PlayOnLinux has some defects, as every piece of software does:
– Occasional performance decreases (the image may be less fluid and graphics less detailed).
– Not all games are supported. Nevertheless, you can use our manual installation module.
Put it this way... it does a fantastically better job than anything available on Windows for running native Linux software.
That’s a non sequitur, because no one wants to run Linux software. The best open source software gets compiled for Windows.
Which has been better at attracting big developers? Linux or the iphone?
Have a look at the EA iphone games:
http://www.eamobile.com/Web/ipod-games
Why isn’t there an EA Linux section?
>Which has been better at attracting big developers? Linux or the iphone?
It is estimated that there are the equivalent of 1.5 million full-time developers of FOSS software.
It’s not a non sequitur; you may want to look the term up.
Also: Cygwin. Or Microsoft’s own POSIX compatibility layer, Microsoft Windows Services for Unix: http://www.microsoft.com/windowsserver2003/R2/unixcomponents/defaul…
Yet OS X will still have an insignificant market share, because there is a lot of incumbent software people can’t run.
I don’t think people are hostile to proprietary development. I’d love a pound for every time I’ve heard how the LGPL attracts proprietary development over the past ten years. Unfortunately there is little to no proprietary development because the development tools many distros push are crap and getting your software installed on a system without putting it into a central repository is a nightmare.
It’s a hell of a lot easier when you enter into a new market and can write whatever you like because it’s all new. It used to be like that for the desktop computer market in the 90s, but it isn’t that way any longer.
Not really; Ubuntu needs to focus on LINUX applications! There’s far too much focus on desktop Linux as “Windows-cheap”. I like that many of the comparisons in the list are to the Mac... but I don’t want “Mac-cheap” either.
I want to see KDE with its plasmoids and new GUI elements turned into something nobody else has yet. Desktop “Linux” needs its own way of doing things. Something clean and simple. Ubuntu has one thing going for it that even the Mac doesn’t: nearly all the apps are Free Software... there should be less bickering about which apps, and more focus on small apps working together well. That’s the “Linux” way.
We need more focus on making apps “invisible”, on apps as mere “plug-ins” to work with data types; but most importantly we should be focusing on “apps” as toolboxes to build task-specific tools, and on making desktop distros that play on that fact and lose the old-fashioned idea of an application.
Now, I read that several times…
but what does it…
mean?
Really bad idea. Ubuntu is not Windows. It can never be a better Windows. If you want Windows, buy Windows. What you are suggesting cannot be part of Ubuntu, as it would require a Windows license, which means it can’t be free. What you are suggesting would have to be a separate, bought piece of software. Perhaps what you want could be done with Wine, but I don’t think it will ever be perfect, as Wine will always be playing catch-up with Windows, and MS will make life harder for Wine the more successful it is. Windows is not an open standard. If you’re running Linux, you should run Unix/Linux software.
The closest thing to what you want is http://portableubuntu.demonccc.com.ar/
Running Ubuntu on Windows, that way round can work.
Wait, I thought Linux was the “desktop OS that blows Windows away”... so why would you need to run Windows software?
Linux. Is. For. Servers. For desktop, get Vista or 7.
Funny, I’m using Arch Linux right now, as I type this in Firefox 3.5, listening to a CD with the latest version of Amarok 2.2 on my KDE 4.3 desktop system. I can give you a screenshot if you really want.
http://ourlan.homelinux.net/qdig/?Qwd=./KDE4_desktop&Qiv=name&Qis=M
It works just great. My modest AMD 64 X2 system with 1GB RAM and humble ATI HD2400 video card really performs well. Far better than Vista (and yes, I have had the misfortune of having to use a Vista system). Given that experience, I’m pretty sure that Vista would be a dog on this system.
Windows 7 is not released yet. I doubt very much that it will hold a candle to what I have running on my desktop right now on this modest hardware.
http://www.psy-q.ch/blog/articles/2009/09/13/win7-review-from-free-…
WorksForMe!(tm)
http://linuxhaters.blogspot.com/
What has that got to do with the point?
The point being that for most of the desktop machines running right now, something like Arch Linux is by far a better desktop OS than Vista or Windows 7.
For some recent (more expensive) machines that are capable of running Windows 7 or Vista well enough … then something like Arch Linux is only a little better.
Don’t feed the trolls.
I have Vista on my home machine, for gaming. One day, I wanted to get a number of non-gaming-related tasks done: do some online banking, rip a CD, play some music, create a document. The easiest way for me to do that? I installed Sidux Linux in a VirtualBox VM. And it worked very, very well. Linux may not be as a-hamster-could-use-it user-friendly as Windows, but almost every distro on the planet comes loaded with a broad array of high-quality power-tool software that you just don’t quite get on Windows, at least without tracking down and installing a lot of disparate third-party software; everything I needed either came with Sidux or I pulled it in with one apt request. If you want a real, productive work environment, don’t dick around with Windows.
Windows. Is. For. Gaming. For everything else, there’s Linux.
Perhaps you were being facetious, but that sure seems a bit arrogant. I have been running Linux as my main desktop since the late ’90s. My wife and 2 of my kids use Linux on the desktop. I am a Java/.NET/PHP developer, and I have no problem with Linux as my primary desktop. I use Wine or VMs when I need to do .NET development, or sometimes one of my Windows boxes (especially for games, even though a lot of them work in CrossOver Games).
So your statement seems rather dogmatic. It would be like me saying you should not run Windows. I also have Windows boxes. I love playing with all OSes. But, honestly, Linux really is and has been my main desktop for a long time, even when I was employed as a Windows developer. So it seems disingenuous to make such a blanket statement. I know, I try to avoid them, but they do slip out at times!
I might also add, that Windows makes a pretty fine server. I think the Windows Server 200x line is pretty solid. So I wouldn’t be dogmatic about that either.
Don’t feed the troll.
Thank you,
– Gilboa
1. My laptop has 3.1 surround. An apt-get remove pulseaudio will make it actually work, but ALSA does not see the subwoofer as LFE, just as another speaker. This means it will not do a low-pass filter to route lows to the speaker designed for lows. The end result is fantastic sound on Windows, and awful sound on Linux, even after some work.
2. I have an NVidia GeForce 9600M GT, not an uncommon card. However, if the boot splash is enabled (which it is out of the box), the OS will not only irrevocably crash on shutdown (without actually shutting down), but it will puke white garbage all over the screen that looks remarkably like a broken monitor (it was pretty scary the first time I saw it).
3. I can be installing an OS in a virtual machine on Windows while watching a DVD, and the DVD plays perfectly. On Linux on the same machine, if I am copying some files from one directory to another, everything is so choppy it is virtually unusable until the process is done.
I have the same problem on every Linux box. When is Linux sound ever going to get fixed? Even just the “startup sound” tends to stutter on my Linux boxes. Sound is basically at a Windows 3.1 stage in my opinion. Everyone gripes, but no real solution ever seems to appear, despite the pile of frameworks that keep arriving.
Join the club. The state of sound on Linux systems, and the way that distributors simply package up and pile on yet another layer to solve the problems rather than thinking for themselves, makes me despair of ever being able to put a properly working Linux desktop on someone’s system.
That’s why the Rails developers in our company are all going the Mac way for their desktops. It’s more expensive, our deployment environment is Linux and I’d love to use the same thing locally... but I just can’t trust it. Perhaps not ever. It doesn’t get any easier when people start drinking the anti-freeze and telling you that Ubuntu, or something else, is the answer to everything.
PulseAudio works perfectly well with multichannel systems, but it sucks at autodetecting them. It always sets up stereo 2.0 sinks by default, even if the card is capable of 7.1.
To get 3.1 he has to manually set default-sample-channels = 4 in /etc/pulse/daemon.conf, or manually create a sink, and restart PA.
As ALSA doesn’t properly report his LFE channel as LFE (this is an ALSA bug, not a PA bug. It must be fixed in ALSA.), he HAS to manually create a sink in /etc/pulse/default.pa and enumerate the channels like this:
load-module module-alsa-sink channels=4 sink_name=Foobar channel_map=front-left,front-right,center,lfe
Yes, this sucks badly. But the proper solution is to fix PulseAudio so it becomes better at automatically detecting hardware and configuring itself.
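Whichever route he takes, the daemon then has to be restarted before the new sink shows up:

pulseaudio -k         # kill the running daemon
pulseaudio --start    # bring it back up with the new configuration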
1. X.
Thom, I have to ask you this: do you hate the X11 protocol, the X.org implementation of the X11 protocol, or both?
Well… the first thing is that I do not want a graphical environment for networked computers. I need a graphical environment for my own PC for my own use. And I do not run applications from other machines. If I need that, I just use a web browser. It feels that X is just making things more complex than they need to be. I probably don’t understand X at all, but this is just my gut feeling, and I bet there are people who share this feeling.
The thing is, the network stuff is only used when you are actually using it over the network. When you use it on the same machine, X doesn’t hit the network and you don’t get any network-related performance issues.
An X server servicing clients on the same system uses traditional Unix sockets. All windowing systems have a client/server model; X isn’t unique here.
No, it doesn’t hit the network, but it still uses IPC and the same protocols as if it were. Saying that it doesn’t hit the network is like saying that it doesn’t format your hard drive. True, but hardly relevant.
Latency, caused primarily by the protocols it uses for IPC, is the main culprit for the bad user experience.
Graphics cards and drivers can be at fault sometimes, but 2D performance, which is what should be used most of the time, is by now a solved problem, and 3D support in X11 is more than enough for drawing windows and controls.
X has other problems, such as applications going down when it dies. One would expect even a network protocol designed by 3-year-olds to keep that from happening. And it would be great to have *that* network protocol in our desktops. Just not X, or at least not Xorg.
It’s clear that, as usual, you don’t know what you are talking about and you are proud to flaunt your ignorance on the subject. There are problems with the X11 protocol, sure, but the fact that it uses IPC (like EVERY OTHER MAJOR OS) is not one of them.
But really, have you measured the latency of IPC? Do you know that’s why it’s slow? Or do you just assume because someone else mentioned how they think network transparency is worthless and you just ran with it? I’m inclined to think the latter.
The X protocol is used over IPC instead of a network, which is why it doesn’t matter that they removed the network. That is what I wanted to say, and you know it, but ad hominem attacks work better.
Of course I know that from third parties who write about X’s design. Nobody (sane) can possibly bear looking at the X11 source for long periods of time. I did it once, to debug a driver that was bothering me, and won’t do it again. Certainly not to win an argument with you.
Anyway, the fact that I screwed up a term doesn’t help with my apps going down when X dies because of the convenient bad-driver(TM), or with getting lagged I/O, or the other marvelous side effects of using X as opposed to some superior system like Windows 3.0 GDI.
Linux could do fine with even that, and would, if it weren’t that X gets all the drivers and support.
Argh, more blind X hate.
Play with X and ssh. Sorry, but it’s a great system.
From the shell on the laptop I can:
* log in to the desktop
* run some GUI app so its windows are on the laptop’s desktop for me to use as I see fit (Firefox, for instance, is faster running this way than natively on the craptop).
* set some GUI app running on the desktop, on its second screen (for instance the second screen is a TV, so: a movie) (X server 0, screen 1).
All with just:
ssh -Y user@mydesktop       # log in with trusted X11 forwarding
someguiApp &                # its windows appear here on the laptop
SSHDISPLAY=$DISPLAY         # remember the ssh-forwarded display
DISPLAY=:0.1                # target the desktop's second screen directly
setsid someotherguiApp &    # detach it from this ssh session
DISPLAY=$SSHDISPLAY         # switch back to the forwarded display
And yes, I do use my computer like that. And no, I don’t feel there is some make-believe IPC/network cost when I’m not.
And yes, I like the server/client design; it fits well here (but I do feel audio belongs with graphics, and I want a Plan 9 style filesystem interface).
But there is a problem with XOrg: it contains real drivers that should be in the kernel.
However, I know XOrg is going through a revolution (though of course closed drivers like Nvidia’s lag way behind).
With Gallium3D and KMS, the drivers can be removed from XOrg and put into the kernel where they belong.
Then there would be one XOrg “driver”, and it would just be a Gallium3D+KMS one.
This means XOrg can be stripped right down. This will improve the code no end, and mean that XOrg can run as the user, not root.
Also, it means better-accelerated Xnest, etc., which means you could perhaps have an extra X just piped through, so if the “real” one crashes, the “fake” one can keep everything and hook into a new “real” one once it’s running, and nothing is lost.
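Xnest as it exists today already gives a taste of that piping; a minimal sketch:

Xnest :1 -geometry 1024x768 &    # a nested X server running in a window
DISPLAY=:1 xterm &               # point any client at it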
All that plus Wayland, etc.; X’s future is very interesting.
The real benefit of Gallium+KMS to me is that once the drivers are moved out of X, we can finally get rid of X itself. Most of the legwork will already have been done, and any display server should be able to piggyback off Gallium+KMS without having to rewrite all the drivers, which is what has kept most competitors to X away.
Yes, it enables competitors to X, which is good (like Wayland), but I bet each of those will have X compatibility (again, like Wayland), which means they could be argued to be X implementations. To replace the X standard (not just the XOrg implementation), a standard that can do all that X can, but better, is required. Even if an X replacement comes along that everyone is happy with, compatibility will be needed for decades. There is little doubt that XCB is better than Xlib, and in fact the Xlib most people use just wraps XCB, and yet everything is still written to Xlib.
The big thing is to get drivers out of X: you can then have a much smaller X server that runs as the user instead of root, more readable code, more reliable code, and better redirection, so better Xnest and X forwarding, etc. Everyone wins.
//All with just:
ssh -Y user@mydesktop
someguiApp&
SSHDISPLAY=$DISPLAY
DISPLAY=:0.1
setsid someotherguiApp&
DISPLAY=$SSHDISPLAY
//
Oh, yes, that’s simple. Most computer users would find that a lot easier than clicking on an icon.
Linux on the Desktop = fail.
Linux on >your< desktop = fail.
Most users aren’t going to want to do that kind of thing, or many of the things I do, but needless to say I do.
In the wilderness years between RiscOS and Linux I missed the command line. Despite what some will tell you, some things really are easier on the command line, and on some systems, you don’t have the resources to spend on GUIs.
WorksForMe!(tm)
Typical freetard response.
Ah, name-calling. Thanks for bringing the level of this discussion back in line with the internet norm.
The point that he was trying to make is that people still use X’s networking capabilities. He was not trying to argue that desktop users should; for you guys, there’s VNC. WorksForMe actually has nothing to do with his point; people have been using that one to mean “when someone claims that a bug does not need to be fixed, because it does not affect them.” Which is not even remotely the claim that was being made.
So your point is that having to use some console commands to take advantage of a feature that Windows simply lacks means the OS is a failure?
If you don’t need/want to export X, use remote desktop. There, problem solved.
You don’t have to use the console. But you can use the console, it’s easy and powerful. I don’t think you can argue either OS is a failure.
Amen, amen. I work with a Linux cluster, and I do a similar dance a lot:
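Something along these lines (the host and program names are made up):

ssh -Y me@cluster-head    # trusted X forwarding to the head node
module load matlab        # hypothetical environment-modules setup
matlab &                  # the GUI comes up on my local desktop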
Windows/Mac users, don’t let the fact that they’re weird commands dropped into a console scare you, it ends up working out quite well. Home Desktop users may not use the networked X architecture much, but there definitely are real-world usage-cases out there for it.
You’re also definitely right that using IPC is not why X sucks. There are basically three reasons: horrible/third-party-hacked drivers; lack of adequate failure recovery and isolation; and poor auto-detection. And you’re also right that X is evolving right now, as we speak, and points 1 and 3 are being actively and incrementally dealt-with.
Someone who actually knows X, a rare commenter indeed!
I think issue 2 will be dealt with; it will be much easier after issue 1 is, and it won’t be such an issue then anyway (though it’s never been one for me).
There really should be an article about the truth behind X, how it’s evolving and why. There is so much rubbish out there.
You screwed up another term: you criticize X’s design by saying that the X11 sources are awful.
Do you understand that the design of an API is something totally different from the sources of a particular implementation???
I’m not sure exactly why you don’t like X; me, I like its network-transparent design. XFree isn’t that good on many points, but that’s a manpower issue; it has nothing to do with the design.
What use is a network-transparent design that doesn’t get used? Even on Linux today, most people use VNC, RDP or Citrix to access remote resources, because those don’t really require any special command-line options to get them to work. There are real issues with X; performance is one of the top complaints against it. The real problem with X is that it wasn’t designed with real desktop use in mind; instead it’s being retrofitted for that purpose, and it shows. IMO, it probably would have been easier to start something new, and there have been plenty of opportunities for that to happen, but we keep chipping away at the same old codebase in hopes of making it work properly. Why? Because of compatibility, which is something we constantly berate MS for clinging to with Windows. X is no different. X itself is the least forward-thinking part of Linux.
Like I said before, once the drivers are removed from X and put in the kernel where they belong, it will be much easier to replace X altogether, if that is what the community chooses to do (and I hope they do). There are alternatives on the sidelines waiting to take its place. Wayland is a great candidate, IMO.
“What use is a network transparent design that doesn’t get used? Even on Linux today most people use VNC, RDP, Citrix to access remote resources because they don’t really require any special command line options to get them to work.”
I don’t know anyone who uses X over ssh anymore. I’ve tried it myself a few times, but I get better performance if I just use a full remote desktop like VNC or NX. NX especially is just so much faster. Not to mention the fact that I don’t lose what I was doing if the network goes down or anything like that.
VNC sucks, big time. It’s a hack for Windows. Anyone who uses it on Linux is clueless. It’s far slower than X and you can do much less.
I know a lot of people who use X over ssh. That is the way we administer servers at my job. They have Linux servers and Windows clients; Xming is installed on each Windows client. X forwarding provides clean integration into the Windows desktop, unlike VNC, which sucks (big time).
BTW, RDP sucks and so does Citrix. And Citrix costs an arm and a leg and is a pain. FreeNX is the way to go if you have a slow network. If the network is faster than you need, then X over ssh is perfect. Just don’t use VNC if you can avoid it; it sucks!
“VNC sucks, big time. It’s a hack for Windows. Anyone who use it on linux is clueless. It’s far slower than X and you can do much less.”
I don’t understand how you can do “much less.” It’s a full desktop, you can do the same as you could do with RDP or NX or anything. And as said, I’ve always gotten better performance even with VNC than with X over SSH. (And of course even better performance with NX) Maybe it’s something I do wrong, but I don’t really see what I could be doing wrong.
Well, for a start, VNC is for a single user. On Windows, it moves things on the screen and everybody can see it; that’s one hell of a big security hole. On Linux it doesn’t do that, because it runs on top of an X server launched specifically for it. So VNC is dependent on X on Linux, and that is a good thing, or we would be in the Windows situation where it is single-user. It also means that it consumes a lot of memory for nothing on Linux.
The next thing is that it can only do a full desktop. If you are using a VNC client, you already have a desktop, which means you don’t actually need another full desktop. When you use X11 or NX, you don’t launch a full desktop (for what?); you use your running desktop and only open the window you need on the server. If you only open a Firefox window, or an administration window, it’s a lot faster than VNC. It’s far more user-friendly, because the application you open on the server appears in your task bar like any other window and you can switch to it with Alt+Tab.
Using VNC, you have to lower the resolution and colors in order to get acceptable performance over the network.
VNC is ugly, rigid and slow. Actually, it is just a hack.
Using it every day, I find this quite funny.
Which performance?
-Remote display performance? Agreed (It should really include NX style compression natively).
-Local display performance? I disagree: I find it good enough (the responsiveness issues, I feel, are caused by applications such as Firefox which have a poor design and are just the same on Windows).
Using both Windows and Linux, I don’t understand why you find the performance of X such a big issue...
Given that it had network transparency far before the competition, I find this quite laughable.
As for the rest, yes, stripping the drivers out of X is a good idea, and more competition is always good.
I wish Wayland the best but don’t forget that there are already many dead X competitors: Berlin/Fresco, Y, etc.
I tested UNIX domain socket latency on my machine and I get an average of around 250 microseconds to send a 1 KB message between two processes. Granted, there’s no overhead for an abstraction layer like libxtrans, but shuffling bits around entirely in userspace is probably pretty fast compared to sending data through the kernel.
Network latency is not the problem. I don’t know why people keep harping on it.
As for the rest of your incomprehensible rant, the best I can say is that you are uninformed, or you are overgeneralizing. The drivers do not have to follow any specific architecture (except at the interface between the driver and X), because that is dependent on the hardware and on the people who developed the driver. So you will find a range of quality. Then again, based on the crappiness of many Windows drivers, I’m not convinced that they are really that much better.
The rest of the X code base is actually pretty decent. The X server is decently well-designed, albeit not perfect (no software is, except for what they use at NASA). It has proper layering from the abstract portions in the DIX down to the generalized framebuffer and machine-independent code, down further to the OS and HW specific parts. Extensions live in their own directories for the most part and don’t interfere more than need be with the core stuff.
Convenient bad driver: I’m sorry you don’t like this response, but the fact of the matter is, it generally is the problem. Hardware is flaky and really hard to deal with, especially when you have no docs or limited docs. In my experience, and from watching the mailing lists and, to a lesser degree, Bugzilla, most bugs and problems are related to the drivers. You just need to accept that and move on.
I don’t know what lagged I/O you are talking about. X drawing is pretty fast in many cases, where it is properly hardware-accelerated. I don’t notice any input lag. If applications are laggy, blame that on the app/toolkit or, again, on bad drivers that don’t accelerate commonly used operations (RENDER is a repeat offender here). On my machine, most 2D ops are properly accelerated, so I get great 2D performance, even with a compositing manager running. It’s almost on par with XP, and that’s saying something given the crappitude of GTK+ (Qt3 feels, however, about as good as GDI, if not better). I have noticed that some distros, Ubuntu especially, are particularly poor performers on my machine. I will continue to use Gentoo as long as that’s the case.
The fact that apps go down when X goes down is entirely the apps’ and the distros’ fault. Apps are perfectly capable of waiting for X to restart and reconnecting to the new X server, and distros are perfectly capable of not returning users to GDM/KDM when the X server crashes.
Yes, there are problems with X, but this is not one of them. With a few simple patches to GTK+ and Qt, and a few changes to upstart in Ubuntu, this problem would be nearly solved for most users.
Yeah, but the sole fact that X is crashing is X’s entire fault.
Perhaps a combination of all of the above, but perhaps mostly the ludicrously disjointed driver situation, which the Linux kernel doesn’t have, incidentally. It’s made things unpredictable and disjointed, and developers hate working on X.
The X11 protocol and XDMCP were certainly useful at one time in the ’80s and ’90s, but they’ve had their day. Back when desktops were less complex and required fewer hardware resources, client hardware was generally expensive and exotic, and we didn’t need to secure the traffic over an unreliable and insecure network, things were OK. None of that holds today, however.
Desktops and applications are more complex, they require more memory and hardware resources located where the user is, powerful client hardware is ludicrously cheap compared with yesteryear, and we use remote desktops for remote working these days. That means we need it all to work over unreliable, unsecured and unpredictable networks. There have been a few attempts to make X work with those requirements over the years, and Keith Packard himself says that it can’t be done.
The embarrassing failure of Sun and Oracle’s ‘Network Computer’ revival over ten years ago should have been the time to quietly take X11 and XDMCP out the back and shoot them in the back of the head. However, backwards compatibility reared its head again and nothing new was created.
I think the fact that Apple didn’t fork X says enough.
The open source world needed something like quartz a decade ago.
Apple didn’t fork X then because X was in a terrible state then. A huge amount of work has gone into it since then and I have a feeling that if Apple had to make the decision today, it would be much more likely to fork X.
Sure, but in some ways things were actually better in the 1990s, with 1990s hardware and X, than they are in the 2000s with 2000s hardware and X.
Now that I think of it, beyond anything else my first impression is of the enduring instability throughout the whole decade.
The funniest thing is that ever since they completely overhauled their development process, moved to git, etc., the X stack has constantly been in an immense state of chaos.
Git isn’t the problem. They just need more manpower. There is a big discussion going on on their listserv right now about how to fix the development process and get more people involved. That is, IMO, the biggest problem facing X today. The technical issues can be surmounted, but not without enough developers and testers to make it happen.
I think the biggest thing that can be done is to get the “real” drivers out of X! This is happening with Gallium3D, KMS, etc., but it can’t happen soon enough. It will shrink the code no end, and thus make it simpler, and thus more people can get involved.
Isn’t google going to ditch X as well with ChromeOS?
Anyways X obviously still has problems, just look at some of the complaints in this recent thread:
http://www.osnews.com/comments/22271
I see people complaining about driver problems and Thom’s usual baseless X11 complaints. The vast majority of the rest seems to be about PulseAudio, GNOME and Ubuntu’s vision.
Thom isn’t the only one in this thread to complain about X. Scroll down to the comment by Deathshadow.
I really think it is funny that you guys attacked him for suggesting what google plans on doing: dumping X.
I remember that editorial where he was attacked for complaining about how resizing a window made X crash.
X is a weak stack and blaming the drivers won’t hide some obvious flaws in the design:
X crashes when mouse is unplugged
http://ubuntuforums.org/showthread.php?t=496059
X is a standard you cannot ignore. You can argue about implementations (Xorg, for instance), but the concept is sound. XOrg is having drivers moved out of it, which will simplify it massively. That makes it easier to maintain, or even to write alternatives like Wayland; but regardless, X support is always going to be required, just like it is on OS X the moment you go away from Apple-specific software. If Google’s Linuxes are going to be worth having, they will have X support, XOrg or not.
They included it for backwards compatibility, but they wisely decided not to base a new system off the back of it. That’s what Linux distributions should have done years ago.
Actually they didn’t fork it in order to support their own legacy applications.
Maybe my expectations are incredibly low, but both my parents run Fedora 11 (moved from Arch Linux for compatibility reasons) and I haven’t noticed the issues which a number here keep pointing to. My father’s laptop is able to go into suspend and recover without any problems.
Are there issues with X? Of course, but they’re more to do with the lack of resources and leadership than with anything intrinsically wrong with Xorg itself. When you look at most of the complaints, they come from too many problems and not enough resources to solve them.
The solution is for Novell, Red Hat, Canonical and Sun to put some real manpower behind the Xorg project, instead of the tokenism of 2-3 employees. Have small teams dedicated to the parts of the X server that need resolving, and you’ll find that a good many of the problems people here whine about will become non-issues.
One could argue that switching to a simpler architecture like Quartz would have required less resources than incrementally patching modern graphics features into X, but it’s difficult to prove or disprove that.
True, but unfortunately almost every such idea, such as project Berlin, hits a snag when it comes to getting the drivers ported over. What I think is really required is for Nvidia, ATI and the other, minor graphics producers to come together and tell the developers what they would consider the ideal model, so that driver production is easier and can deliver the best bang for the buck for the end customer.
I would love to see a replacement, but given the lack of willpower by the big distribution companies like Red Hat, Novell and Canonical when it comes to actually allocating real manpower to the project, unfortunately any ideas that come about will never get off the ground.
We will have to wait for Wayland, and first for the drivers to be moved into the kernel.
Yeah, unfortunately for Wayland there is only one developer working on it. For something on which the whole of Linux’s success or failure rests, it doesn’t appear that any of the Linux vendors or open source advocates are taking the project all that seriously. If they get this sorted out, it’ll be a massive leap for Linux: one of the biggest problems solved, with a few minor ones left to deal with.
It always seems that Linux is on the cusp of success, only to have it taken away by the absence of any real contributions from the big-name vendors such as Novell, Red Hat and Canonical. Imagine if each of those companies hired 5 programmers (15 in total) to work on this project; if they did that, it would be ready by the middle of next year.
Why don’t you just use the framebuffer?
If they do that, then they will need XFBDev, which is the X server for the framebuffer (shock: X and XOrg are not the same), or they won’t be able to run much, and they will find performance sucks, because there is no acceleration going on, just framebuffers. I’m guessing that was your point, but I was just making it clear.
Point 10 is dead on.
I love to try Ubuntu, because I appreciate the vision and work behind the project… but I then end up just getting paper cuts trying to do simple, intuitive things.
For instance, I still don’t get why there are two areas in the System menu for system-type stuff; the Preferences/Administration split is just frustratingly arcane. If I want to change the look and feel and configuration of the OS, aren’t I administrating it? I never know which one to look in to change the things I want.
And even then, I rarely use anything in the Administration menu – why not just clear it out if people aren’t going to use it? Make an Administration Utility which pulls those rarely used functions together in one place.
And then get rid of the System Menu altogether and move it to the Application menu under a System folder.
Lots of little things can be worked on while still being different and keeping a sense of unique style.
Glad to see some developers are talking about these things.
It seems to me that the goals in Linux reflect promoting certain usage habits under certain conditions, which all operating systems do. You can ask for special icons here, but chances are just as many people won’t want them there, etc. You fix something, you break something. I don’t see how asking for Feature X in Scenario X helps, when the Linux way seems to be trying to find a Feature Y that works in every scenario.
Else we wouldn’t keep dropping back to plain text and the command line. No one criticizes the process, just the results of the process.
The repeated biggest problem remains video – it’s like the dark ages of computing. When I cannot plug in a 16:9 1440×900 19″ LCD that has DDC information over a DVI connection and have it actually start up in the native video mode – hell, it doesn’t even LIST the video mode, and I have to spend three to four hours dicking with xorg.conf to get it to show anything other than 800×600 – there is something MAJORLY wrong. Then of course xinerama is still a joke, going to more than one display kills compositing, and in general it’s like a trip in the wayback machine to analog CAD displays circa 1989 X11R4… All those cute ‘panels’ for controlling the display are **** worthless if they don’t work/don’t do anything.
Of course, what’s wrong is that steaming pile of manure known as X11.
I have a CRT monitor, so when I first booted up Ubuntu I got a nauseatingly low refresh rate. Then I had to look up all the info to get everything to work (had to figure out how to use the terminal, etc)… and ended up with a big headache. I was willing to do this because I like to do crazy things for the heck of it. But Ubuntu for the average user? This is crapware not ready for primetime. Sorry.
If I had a Linux distribution liveCD and I booted it and it did not run my video card & monitor at optimal resolution and refresh rate (like Windows when you first boot the installation disk), then I would:
(1) be very disappointed, and then I would
(2) get a better liveCD distribution.
JustUseAnotherDistro(tm)
InSearchOfAMagicalCombinationOfGimpAndPidgim(tm)
Ahh the joys of not having a Control Center. Also all that autoconfigure X crap is a pain in the behind. Use gtf and generate a modeline.
e.g. gtf 1440 900 60. Stick that modeline in your xorg.conf. Force xorg.conf to use that mode and off you go.
Here is my xorg.conf
Section "Monitor"
Identifier "monitor1"
VendorName "Generic"
ModelName "Flat Panel 1600x900"
HorizSync 31.5-90.0
VertRefresh 60
# modeline generated by gtf(1) [handled by XFdrake]
Modeline "1600x900_60.00" 119.00 1600 1696 1864 2128 900 901 904 932 -HSync +Vsync
EndSection
Section "Device"
Identifier "device1"
VendorName "nVidia Corporation"
BoardName "NVIDIA GeForce 6100 and later"
Driver "nvidia"
Option "DPMS"
Option "ModeValidation" "NoDFPNativeResolutionCheck,NoVirtualSizeCheck,NoMaxPClkCheck,NoHorizSyncCheck,NoVertRefreshCheck,NoWidthAlignmentCheck"
Option "DynamicTwinView" "false"
Option "AddARGBGLXVisuals"
EndSection
Section "Screen"
Identifier "screen1"
Device "device1"
Monitor "monitor1"
DefaultColorDepth 24
Subsection "Display"
Depth 8
Modes "1600x900_60.00"
EndSubsection
Subsection "Display"
Depth 15
Modes "1600x900_60.00"
EndSubsection
Subsection "Display"
Depth 16
Modes "1600x900_60.00"
EndSubsection
Subsection "Display"
Depth 24
Modes "1600x900_60.00"
EndSubsection
EndSection
And you really think desktop users want to deal with that?
Stick that modeline in your xorg.conf.
I think you just summarized the Linux desktop experience in a single sentence.
1) Joe Sixpack is going to tell you where to shove it when you tell them “add that to your x.org.conf”
2) That does NOT work with the nvidia drivers, you have to include a metamodes line, and metamodes doesn’t seem to work right on a GTX260 if you have more than one color depth specified as ‘displays’.
For me: I am using Arch right now.
http://chakra-project.org/news/index.php?/archives/17-Chakra-Alpha3…
It doesn’t use Pulseaudio.
You can select the open source drivers for your video card, and in my case doing that gives me great performance compared with the binary blob driver.
http://www.phoronix.com/scan.php?page=article&item=amd_r600_r700_2d…
Arch is a rolling release, so as soon as new software is released:
http://amarok.kde.org/en/releases/2.2
I can install it right away, without having to wait for the next six-month update:
http://www.archlinux.org/packages/extra/i686/amarok/
Since the open source 3D driver for my card won’t be released until kernel 2.6.32, which is still in release candidate status, this means I will have to wait only about 10 weeks now until I can run the 3D driver.
Hopefully the great 2D performance, which is what is required for desktop use, will still remain after the 3D functionality is added.
Having said all that, right now with Arch (apart from the current lack of 3D), I am having no problems with audio or with X.
I think you are mixing up X11 and XOrg. There are many implementations of X11; XOrg is just the one used by Ubuntu and most Linux desktops.
And your problem is the XOrg driver for your card, and I’m guessing it’s not XOrg’s own driver you’re using but the closed driver from the graphics card manufacturer, and that will be because I’m guessing XOrg’s driver for your card can’t use all the features (like 3D) of the card because the people writing XOrg’s own driver for the card don’t have the specs.
xinerama is old hat; Xrandr is what you want, only the closed drivers aren’t keeping up.
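For example, with a driver that supports RandR 1.2, putting a second monitor to the right of the first is a couple of commands (the output names below are only examples – check what xrandr -q reports on your own hardware first):
xrandr -q
xrandr --output DVI-1 --mode 1440x900 --right-of DVI-0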
But this is a silly conversation from the start: “real” drivers have no place in any X server, and through efforts such as KMS and Gallium3D the drivers are indeed being moved out of X and into the kernel; then there need only be a single X driver which talks to these abstractions. The closed drivers need rewriting or replacing, but they are closed; however, there are efforts such as Nouveau to replace them with open ones that can be kept up to date.
XOrg needs people, specs and time, but it is evolving.
>> I think you are mixing up X11 and XOrg.
>> There are many implementation of X11, XOrg
>> is just the one used by Ubuntu and most Linux
>> desktops.
… and they all suffer from these problems, many of them WORSE than x.org.
>> And your problem is the XOrg driver for your
>> card, and I’m guessing it’s not XOrg’s own driver
>> you’re using but the closed driver from the graphics
>> card manufacturer, and that will be because I’m
>> guessing XOrg’s driver for your card can’t use all
>> the features (like 3D) of the card because the
>> people writing XOrg’s own driver for the card don’t
>> have the specs.
Don’t give me that open source BULLSHIT… or more specifically where you point the finger on that is complete **** manure. Why do I say that? Because Windows has had FLAWLESS multiple display support since Win98, Apple has had it since System 5, and ALL you have to do is plug in the cards, install the drivers, and check off a box under display properties… and assuming your monitor is connected via DVI the mode detection has worked pretty well back to win98 and is flawless under V/7 (one of the few things that WORKS in Vista) and I’ve been running multi-display since Win 3.1 using a Targa board… NONE of which involves ANY open source driver malarkey. What it involved is a stable damned driver API – But don’t ask that of the *nix community.
The dirty ***** hippy attitude of open source or nothing has prevented there being a consistent binary base for closed drivers – and hardware makers LIKE closed drivers… and so do I since to be brutally honest I’ll stack nVidia’s closed drivers on a crappy decade old Ge2 against the best open source driver efforts on a ‘modern’ Intel. What would you rather run linsux on? Ge2MX with closed drivers or GMA950 with open ones? Unless you’ve dipped into the FSF cool aid….
>> xinerama is old hat, Xrandr is what you want,
>> only closed drivers aren’t keeping up.
Given that the two do entirely different things – one setting display resolution and the other allowing the use of multiple monitors – I want BOTH. I want them talking to each other, and be it nVidia, Intel or ATI, guess what, they don’t do so worth a flying ****.
>> “real” drivers have no place in any X server
That X is effectively monolithic when it comes to the video drivers – I agree on that one part, but…
>> and through efforts suchs as KMS and Gallium3D
>> the drivers are indeed being moved out of X and in
>> to the kernel, then their need only be a single X
>> driver which talks to these abstractions.
Because adding yet another layer or two of abstraction to the process is the answer… NOT. Let’s face it, the X11 server/client layers were not even designed to run on the same machine, and as such it ends up like driving with the parking brake on – which is why damned near every low-end extension that’s been added to X11 tries to bypass that relationship altogether. XRandR, Composite, dbe, bitmap, DRI, GLX – all exist to bypass how X11 is supposed to work, because the server/client relationship is too slow to be practical for anything except remote serving.
Much less the programming API that sucks so bad everyone and their brother has another layer of abstraction to sit atop it to make it usable – Old school you’d be hard pressed to find a single book that actually tells you how to program X11 directly – they all will tell you to just use Motif. Today we have GTK, QT, lessTif, FLTK, Fox, TCL/TK – all exist entirely because the X11 API is such a half assed convoluted mess nobody actually wants to program for it directly.
Adding yet another layer of abstraction means adding yet another layer of bloat… So yeah, let’s add another layer of abstraction to that; that’s a GREAT idea.
… and we wonder why, even when it works, anything running X Windows feels like a disjointed buggy mess. Hell, it’s so bad most desktop managers can’t even get user notification that a program is in the process of launching right. When I start almost any application with more than four already running, the cursor sits there as the normal arrow for about fifteen seconds before it shows any indication of activity on screen – on a Q6600 w/GTX 260 – so naturally I click again and eventually have five copies open en masse…
Even with the big fancy desktop managers most every X11 implementation still has all the fit and finish of a 1984 Yugo GV. If you are lucky it will get you where you are going – but you aren’t going to be happy when you get there.
I’ll be quick;
The other X implementations aren’t as good as XOrg for the desktop. That is why everyone uses XOrg. Outside the desktop it’s different, though. But it is important to distinguish between X the standard and XOrg the implementation.
The open source point is not bullshit. How can you possibly expect XOrg’s developers to do anything if they don’t have the source or specs to do it with?
Linus has made plenty of good arguments for why a stable driver API + closed drivers isn’t as good as an unfixed API with all open drivers in the kernel trunk. Or you can think all the kernel guys don’t know what they are doing…
You can use Xrandr to set up a single desktop over two monitors. Just not on all the closed drivers, which goes back to my first point.
Locally, XOrg IPC is done with shared memory and sockets. The server/client model isn’t an issue, and I use the power it adds almost daily.
The X API isn’t meant to be a widget lib. X is designed to provide the basics required, not an easy widget kit. That’s why you have widget libs on top of it. On top of that, I’m betting you’re looking at Xlib, not XCB, which is meant to replace Xlib.
KMS and Gallium3D replace existing abstractions; they don’t go on top. Read up on it, it’s all pretty cool. It could be used to make an X replacement, like Wayland, but any X replacement will need X compatibility for the foreseeable future – again, like Wayland.
The current X works well for me – as I said, I use the network stuff often – and what’s in the pipe looks even better.
They don’t need the source! They need to provide a stable API/ABI that vendors can target with their cards.
For crying out loud, Windows 98 is closed source and vendors still targeted it fine, because the API was there and it was documented.
Because drivers have to be compiled into the kernel instead of targeting a clean, separate interface, the Linux kernel has done more to hold back the state of graphics on Linux than market share or lack of games ever has.
YOU HAVE WINE, for Heaven’s sake: the community has already shown it can map APIs for compatibility. It wouldn’t be beyond the open source community to clone the Windows graphics API so drivers could be made for Linux with nothing but a recompile of Windows code, if only the fecking kernel and X and all that shite would stoop to making things easy for developers instead of being such hurdles to progress.
And there is ReactOS for that.
There’s also NDISwrapper, which wraps Windows wireless drivers so they can be loaded into the Linux kernel. A lot of Linux users experience a lot of hassle when they have to re-install their binary drivers every time they update their kernel; there really, really should be a stable driver ABI.
Compare an Ubuntu machine, where the driver for the wireless dongle is open and in the kernel, against a Windows machine where the driver must be installed. You only get out-of-the-box drivers in Windows at release time; after that you must install them yourself. You always know what brand it is – it’s in your face, often with custom software you don’t want. Once hardware is old it’s forgotten; not so with open drivers in the Linux kernel. And so on. Give me Linux every time!
You mean:
1. If the device is supported in Linux
2. If the device isn’t new, since it takes time for a driver to make its way into the kernel
I’ll stick with Windows where I know the device will be supported instead of dealing with stuff like this:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/203819
I’ve read stable_api_nonsense.txt, and I’ll tell you what the real nonsense is: expecting hardware manufacturers to meet your demands when you have 1% of the market.
As we have seen many times, even when they release an open source driver, they don’t need to make it as good as the Windows version.
But whatever, keep following Linus and Greg with their sacrosanct view of open source drivers. Who can argue with the resounding success of Linux on the desktop – almost as much market share as Windows 7, which hasn’t even been released yet. Amazing. Greg K-H was right all along.
We’ll see. The Linux desktop is perfectly usable right now. The only driver I’ve had to install is the Nvidia one, so I could get 3D. I didn’t have to install printer drivers (thanks to HP’s good Linux support), or the wireless dongle drivers, or anything else. It all just worked. It was dual boot with XP, and for XP everything needed drivers installing, each bringing along its own branded crap “free” software. Nope, I’ve dropped Windows for home and won’t be looking back, especially now I can get more out of old kit.
Yep, the Windows model works really well: I can use all my old devices, and Windows supports more devices than any other OS……
Oh please. The Windows driver API isn’t really stable either. The difference is that when it changes, it often changes completely rather than incrementally, and old drivers won’t get updated if it’s not economical to do so. Just like bugs won’t get fixed unless it’s economical to do so.
Even when you really do have something stable, things change and you don’t want to be stuck with some old broken design. Yes, you can start to stack interfaces, but it is better to change the driver to use the new design properly.
The policy of open drivers and unstable driver API is deliberate.
http://git.kernel.org/?p=linux/kernel/git/aegl/linux-2.6.git;a=blob…
And it is working.
Yeah, because whatever the reason we have to copy it to Linux. And if that doesn’t increase marketshare, then copy Time Machine.
Rinse and repeat until Linux is a success on the desktop!!
Time to fire up the Xerox machine!!
Only trouble is keeping mr. Jobs away from Xerox…
I agree with a lot of what was said. WINE integration is a must; there are some precious applications whose developers have declared they will never port to Linux. I can mention a few of them: ImgBurn, foobar2000, uTorrent, the TAK lossless codec… therefore it is essential for us to have some direct compatibility.
I also think it’s time for Ubuntu to move on to the DVD – and, if desired, a slimmed-down CD version for the poor. My ASUS motherboard came with a DVD, not a CD, and that’s where people should head when things start to get past the CD media size.
As for the music player, I agree… the alternatives suck, including Amarok and Banshee itself. The closest to foobar2000 was Aqualung, but that one was not that user-friendly. Songbird may be an alternative in the future. But I doubt Banshee ever will be, because of the Mono libraries. I tried MPD plus clients and it basically sucks. I used to like Winamp a lot, and the current Winamp-esque projects – XMMS, Audacious and the like – are a bit odd, with their own slowness quirks.
Each Ubuntu release is getting better. I am sorry that the next version is illustrated by a stupid Koala, which has roots in Kardecism. Oh Well…
I cannot stress how true that is. foobar2000 in WINE is the best audio player on linux and it’s not even remotely close. And that’s even with the typical WINE bugs such as non-perfect unicode support and menus not working perfectly.
1) Custom title formatting. No more having to write false information in tags (or omitting tags altogether) to get your player to display the information you care about. Just write semantically correct information and a small function that takes the dictionary of metadata and returns a string, and you have nice titles in your playlist (see the sketch after this list).
2) Support for APEv2 tags on MP3 files! This is related to 1) in that you just can’t do proper semantic metadata with ID3v(1|2). Those ancient tag formats need to die a horrible death ASAP. They are (partly) responsible for why we have so many stupid audio players (pretty much everything that is not foobar2000, actually) that assume all you ever care about is Artist, Album and Title.
3) It uses a smaller font for the playlist, with less vertical spacing than the GTK/Qt default. OH MY LORD, WHY do most Linux players show so few lines in the playlist by default?! Banshee, Rhythmbox, Amarok – you’re all guilty, and it makes you SUCK!
4) It does everything else. Replaygain scanning, tagging, multiple playlists, album list, directory structure list and it plays everything under the sun. I can’t think of a single linux native player that isn’t missing at least one of those features.
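Just to illustrate what I mean by title formatting – this is only a rough Python sketch of the idea, with made-up field names, not foobar2000’s actual titleformat syntax:
# Hypothetical example: the player hands over a dict of tag metadata,
# and a small user-supplied function returns the playlist line.
def format_title(meta):
    artist = meta.get("albumartist") or meta.get("artist", "Unknown Artist")
    title = meta.get("title", "Unknown Title")
    # Optional fields only show up when the tag actually carries them.
    if "version" in meta:  # e.g. "Live", "Remix"
        title = "%s (%s)" % (title, meta["version"])
    return "%s - %s" % (artist, title)

print(format_title({"artist": "Foo", "title": "Bar", "version": "Live"}))
# prints: Foo - Bar (Live)
The point is that the player never has to guess: semantically correct tags go in, and the display string is whatever you decide.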
Something like GoboLinux (gobolinux.org) where multiple versions of the same library can be used simultaneously.
Yes, definitely. I personally think this is the single greatest issue with Linux. It would be entirely possible to come up with a good standard, cross distro package manager. Unlike most other things, a package manager really can be “one size fits all”. It could be used on all distros anywhere from embedded devices to desktops to servers. With a unified package format that actually works, installing software would be much easier for end users (although I think it’s not too bad already), and it would be much easier for developers to package software (especially proprietary developers). This would allow proprietary developers to release just one package that would work on all distros with a specific required ABI so it could continue to work after the ABIs of all the libraries change.
The current package management system is heavily biased toward free software developers (which makes sense). I don’t like proprietary software, but I understand that programmers need to make a living, and support for proprietary software will be important for the success of Linux as a desktop operating system.
People have tried it before, and it’s consistently failed. It’s a nice idea, but I don’t think it’s going to happen any time in the foreseeable future.
This is true, but I think there have been two issues with all previous attempts:
1) All previous “universal package managers” (Autopackage, Zero Install, and Klik are the only ones I know of) were lacking features or ease of use. A universal package manager would need to be easy for end users and support multiple versions, advanced dependencies, non-root installation, and installation of both system software (libraries, daemons) and applications. As far as I know, no package manager satisfies these requirements (except possibly GoboLinux’s package manager, but that is made specifically for GoboLinux).
2) A universal package manager would need the backing of at least one big distro. If Canonical supported it in Ubuntu, they could probably get most other distros to adopt it, since Ubuntu is by far the most common desktop Linux distro.
It may not be easy, but it would certainly be possible if the desktop Linux community cared enough.
I for one am not going to bitch too much about a powerful, easy-to-use UNIX-like OS that’s free! I have been using Linux for many years now, and while it’s not perfect, it’s also NOT Microsoft (which is its biggest selling point as far as I’m concerned). Microsoft needs to realize that there is a certain faction of computer users who refuse to put up with its crap! That’s us Linux users. I have no doubt that sound and video will eventually be straightened out on the Linux platform. Further, Windows apps run in Linux will become easier to use and more true to form in WINE. Finally, DEs like GNOME and KDE will become polished and feature-rich to the point that Windows and Mac users will wonder how we, in the FOSS community, do it all for free! In the meantime, I’m enjoying Linux and FOSS, and am grateful for the vast community that supports this fantastic software for absolutely free! Thanks!
I’m glad people are posting criticism, because it gives a good balance to the horde of Linux advocates who spend their time writing forum and blog posts about how OEMs are to blame, how Microsoft is evil, how it WorksForMe(tm), how they switched their grandma, and blah blah blah; enough with the religious revival already.
Linux desktop adoption has problems that cannot be blamed on external factors when more people are willing to buy a $1400 macbook that only comes in one color.
Can we at least get a year without Linux activism? A year of desktop criticism perhaps?
But if you read what they wrote – they have to set up the Linux installation for their mum, dad, grandma or granddad; that by itself proves Linux isn’t ready, especially given the pain in the ass it was trying to find a printer compatible with Linux, or finding that Canon is too lazy to release a 64-bit driver for their MP240 multifunction printer.
US$1400 MacBook? A white MacBook costs US$999 (in NZ it costs NZ$1,999.00, incl. GST).
It isn’t going to happen – if there were zero noise from people using Linux, I wouldn’t care, but far too many are willing to advocate but unwilling to address shortcomings.
I’m getting pretty tired of hearing some of these arguments. Linux users are not all frothing idiots; many of us know there are problems, and we’re trying to freaking fix them. Nobody’s saying Linux is perfect now; it has problems, some things work well, some things work poorly, some things work great here and badly there. But Linux has strong points other OS’s don’t, it’s at least decent, and it’s getting better all the time, slowly but surely.
Let me underscore that: Linux has faults, but so does every damned OS. Windows, Haiku, Mac OS X – they all have strengths and weaknesses, and bugs, and things that don’t work as well as they should, that make them appealing to some users and repulsive to others, and that make them work well on some hardware platforms and poorly on others. A month or two ago, I managed to blue-screen Windows Vista by unplugging my USB headset; I chuckled, but I didn’t go piss and moan on the interwebs, because I understand that every OS has some drivers that don’t work, and some bad subsystems, and even in 2009 you occasionally meet a damned kernel crash. And don’t even get me started on the WinXP/WinXP64/Win7RC horror stories some of my more adventurous Windows-using friends have told me.
Note also that the Linux camp is far, far from the only group advocating their platform, and we certainly don’t have a monopoly on fanaticism, arrogance, misinformation, or blindness to our faults, Apple.
But it does seem to be the only group advocating not from a love of its own platform, but from hatred of a single other: Microshaft, err, Microsoft.
1) Ever seen an “I’m a Mac, I’m a PC” ad? Rabid hatred, no, but certainly mockery and derision. I’ve seen a lot of Windows-hating Mac fans who don’t have the first bit of technical knowledge, have no idea what they’re talking about, and basically just want to shit on Windows users for being uncool. And I’ve seen Windows power-users who think Macs are for idiots who don’t understand computers. And so on. There are a lot of rational people and a lot of fanaticism on every side.
2) Plenty of Linux users use Linux for no greater reason than it’s the best fit for their needs and preferences. I’d wager the majority of Linux users use Linux for this reason, rather than because they have a deep, abiding hatred for Microsoft products. They’re just not as outspoken.
I was more making the general point that there are more people willing to spend over $1000 on a MacBook that comes in one color than to run Linux as a desktop.
I’d like to see a simple GUI way of allowing Ubuntu to authenticate using an LDAP server and mount NFS directories as well.
Oh, and while we are at it, it would be nice to have some tools on the server to set up LDAP, and I think the option to set up a light window manager on the server as well.
Simple. Just use Windows XP, Vista, or 7.
I don’t believe this paper cut effort is realistic at all. I submitted tons of real paper cuts that do in fact make Ubuntu (or the GNOME desktop in general) look far less polished than OS X and even Windows. So far they all got rejected for some reason. They ranged from graphical glitches to behavioral issues, and in all cases they were rejected with some “working as intended” comment, or even comments about it not being their problem – this despite following the rules of the paper cuts project.
It kind of reminds me of that minister in Iraq who denied on TV that the Americans were in Baghdad while you could almost see the tanks rolling in in the background. The devs may say “it’s not an issue” lots of times, but that does not make it true for anyone except themselves.
So after that I am actually considering buying a Mac in pure frustration over the arrogance, when I spent time actually trying to contribute and was shot down each time. I don’t really think they will ever hit the same level of user experience with this attitude towards the problems.
I have just installed Ubuntu 9.10 beta and I must say I am completely underwhelmed by the mess that appears in gnome-volume-control. While 5.1 output does work on my X-Fi, there is no lowpass filter for the subwoofer (yay, constant noise from the 200+ Hz bands), and what’s worse, using any of the sliders for balance/fade/LFE does completely unexpected things:
1. Using the sliders in most instances adds pink noise.
2. Balancing left lowers the volume.
3. Fading front and back is impossible: sliding front actually mutes the front speakers and lowers the volume on the middle speaker; sliding back mutes the back speakers and lets the front and middle play at full volume.
4. Sliding LFE towards minimum immediately adds pink noise.
I have tried to file a bug report for this, but it is simply too complex to even begin to put into reasonable words, and it would probably take a wizard to grasp how many bugs are involved and how to break them down into manageable bug reports.
I think this is a far more important issue than any of those feature requests because it’s actually making it impossible for me to even use my soundcard properly.
And to make a feature request: please build a working GUI, remove PulseAudio, build around asoundrc if at all possible, and make LADSPA plugins part of the GUI options so I can easily build a nice chain of modulations, including a lowpass filter for the LFE channel and a system-wide equalizer (a rough asoundrc sketch follows below).
thank you, I hope it will be done in 5 years.
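For reference, the raw mechanism already exists in alsa-plugins; a bare-bones ~/.asoundrc along these lines is roughly what I’d want a GUI to generate. The lowpass_iir label is from the GLAME LADSPA collection and the control values are guesses – substitute whatever listplugins reports on your system:
# Route the default PCM through a LADSPA chain before the hardware.
pcm.!default {
    type plug
    slave.pcm "filtered"
}
pcm.filtered {
    type ladspa
    slave.pcm "plughw:0,0"
    path "/usr/lib/ladspa"
    plugins [
        {
            label lowpass_iir
            input {
                # assumed controls: cutoff frequency (Hz), stages
                controls [ 200 2 ]
            }
        }
    ]
}
Note this only shows the shape of the thing for a stereo stream; the per-channel LFE routing I’m asking for is exactly the part that needs a real GUI.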
As a server it will always be a good option, but Linux as a desktop is going to suck forever.
I tried distro after distro since 1998, believing that at some point in the future Linux would be a usable desktop. Guess what: I realized I was wrong.
The biggest problems as far as I can see:
1) 10,000 new wheels, each of a different size, shape and colour, and another 10,000 wheels still to be reinvented.
But we don’t need 10,000 wheels: we need a single wheel that just spins. We don’t need 10,000 libs that do the same thing, we don’t need 10,000 file systems, we don’t need 10,000 window managers, 10,000 desktop environments and 10,000 apps that do the same thing in a crappy way. We do need, though, a single one of each that does the job.
Nor do we need 10,000 distros. I mean, if Adobe decides to make Photoshop for Linux, they can’t, because a binary can’t support each and every single distro out there. And that’s because each distro and each release uses different libs, different desktops, or different versions of the same lib.
The Linux world is too diffused and scattered. For things to work out it would take some unity and a central development team like Haiku has.
2) “Community developed” small apps might be good. But bigger and more complicated apps aren’t going to be half as good as commercial-grade apps. Don’t talk nonsense about Mozilla and OpenOffice, because Mozilla was first developed by Netscape and OpenOffice by Sun.
3) GPL sucks for major software houses. BSD and other open source license are better in that matter.
It might have been better if Linus had never released his kernel and Pandora’s box had remained closed. That way, we might have seen an open source OS on the desktop. Maybe FreeBSD, maybe another.
My two cents.
Focus on RPM and deb; that is what Canon does for their printer drivers, and I haven’t experienced any issues besides my whine that there aren’t 64-bit drivers available.
Because there are very few instances where large and complicated applications are needed by the average end user. Most applications the average end user interacts with regularly are small, uncomplicated applications that get mundane things done quickly. No doubt OpenOffice.org and Firefox have been major contributions, but most of the time I see my parents use small applications more often than big ones.
The GPL has zero impact on software houses, because everything they would link against uses the LGPL, not the GPL.
Take them back because every point you made was factually incorrect.
YouDontNeedThat(tm)
GetBetterHardware(tm)
YouDontNeedThat(tm)
Mind you, a lot of end-users use rather complicated software for which there are no open source equivalents.
Have you ever wondered what the huge class of so-called white-collar workers use daily?
EndUsersJustUseFacebook(tm)
MyGrandmaUsesUbuntu(tm)
Oh, mine don’t.
STFU(tm) ’cause you don’t get this great Linux thing — a big ™.
TheSameProblemKeepsHavingTheSameSolution(tm)
How is it that Adobe can’t do what OpenOffice and Mozilla can do easily?
Linux as a desktop OS is a better Windows than Windows:
http://www.unixmen.com/linux-tutorials/421-install-ms-office2007-on…
That tutorial was amusing, just as it started at the end of the first paragraph:
The first instruction:
You are correct in that the author did not strictly follow the original promise.
However, typing “sudo apt-get install playonlinux” as an instruction, and copy-pasting that line into a terminal when following instructions, are both a lot easier actions to describe than:
However, even that latter alternative set of instructions for the GUI method is easier than the equivalent set of instructions if this were Windows, where one would have to:
Start IE or Firefox,
Find a website that had a trustworthy copy of the desired program to download,
Download the program and save it somewhere
Open the Windows explorer and navigate to the place where it was saved,
Run the executable
Click next, next, enter a name for the menu group, next, next.
Run the virus checker to make sure your system didn’t pick up a virus.
Do that again after the next two virus database updates.
Actually that means: google the damn app, click to save and clickety-click the damn file in the damn FF or IE download window to install.
And guess what? You won’t have dependency problems, you don’t have to download libBullshit.15.so.1000 which in turn makes you need to download libBiggerBullshit.20.so.2000.
Don’t talk nonsense about deb packages, because even if deb is the best package system out there, I have had unsolved dependency issues with it from time to time.
And for sure, I no longer like having to download & compile some app, track down compile bugs, download another damn gcc and/or autoconf/automake, or another libc – which in turn breaks some dependencies – or patch the source code just to get the damn app to compile.
I remember with sorrow the days when I used Linux as a desktop. 75% of the time I was fixing driver issues (insmod, editing /etc/modules.conf, compiling the kernel, etc), editing various config files, compiling some software because a .rpm had unsolved dependency issues, editing the init system, hacking GNOME or KDE to suit my needs, trying to track & fix various crashes, etc etc.
And after 2 weeks, when I was done making the OS more suitable for me and almost decent, I’d go try another damn distro, so I had to start everything from the beginning: editing files, compiling, fixing crashes etc.
Sure, things got better, and with Ubuntu I can now use .deb for 99% of apps. But I still have to heavily modify that damn UI, because GNOME has big usability issues. There are still various things crashing. I still have to manually edit config files to get online, because for some reason pppoeconf freezes in 9.04.
And why is it that if I compile some program against user32.dll and gdi32.dll on damn windoze XP, that program will work under windoze 7? And vice versa, if I compile the program against windoze 7’s user32.dll and gdi32.dll, it will still work on windoze XP. Not so in Linux: you will always depend on some weird version of some weird lib. So if you want to run a binary compiled on another distro, or on another version of the same distro, the chances are that you can’t. Unless you install the versions of the libs that the binary was linked against and break almost everything else.
If you try to use an rpm from SuSE, or a deb from Debian, on Ubuntu, the chances are you can’t. And if you make it work, then you will break all the dependencies for other packages. Isn’t it fun?
Utter rubbish. Complete bilgewater.
The procedure in Linux is exactly as described.
Either type “sudo apt-get install appname” in a terminal window, or if you don’t like typing, copy that line from instructions such as these you are reading in your web browser and paste them anywhere in a terminal window. If you don’t like terminal windows, open the GUI package manager, search for the application by name or by keywords in the description, select it for installation, and click apply.
That’s it. That is all that there is to it. Any unmet dependencies will also be installed and a menu entry for the new application will automatically be made for you.
Guaranteed malware free if it is from an open source repository, as a bonus.
Windows fans are getting really desperate in this discussion. Personally, I just don’t see any reason why they feel compelled to “lie for Microsoft”, but apparently they do.
PS: If you don’t like GNOME, try KDE 4. It is a far better desktop than any other available today.
PS: If you don’t like GNOME, try KDE 4. It is a far better desktop than any other available today.
This is off-topic, but I don’t quite agree here. I just installed KDE 4 yesterday, and while it seems a LOT better than GNOME performance-wise (f.ex. animations are always very fluid, resizing images doesn’t stutter, and so on), it has some serious usability issues; I, for example, spent a good long while trying to figure out how to move the panel and make it the correct size. It just was not in any way or form intuitive. There are actually lots of places in KDE where things are very unintuitive or completely misleading if you aren’t familiar with it.
As such, I still think that GNOME is more newcomer-friendly and more suitable for people who want minimal fuss and just want things done. KDE seems to have some good things going for itself too, but someone should give more thought to the UI design.
Anyways, this is off-topic, just felt the need to make a point here.
I completely agree. Prettier-looking and a lot harder to figure out. Sometimes, the places configuration options get put seem completely arbitrary; it seems like I’m always wandering over several unrelated tabs in several unrelated configuration menus before I finally find the option I’m looking for. Nothing’s ever where you’d expect it to be. That’s one thing XFCE does a lot better; their configuration application is very clearly laid out, and there are very few “where the hell is this — why is it way over here?” moments.
I also kind of think that KDE has a habit of picking up its own discrete subsystems that are different from everyone else’s, causing needless divisiveness – I somewhat suspect that Phonon is just an attempt to not have to admit that aRts was a bad idea, and it’s a better idea if everyone uses the same sound daemon.
Nearly every KDE component has its config available via the right-click menu. For the panel, you’d just right-click on an empty space and select “Panel Settings”. If it’s locked you’d obviously need to unlock it first, but that’s no different from other OS’s way of doing things.
This is actually one of the things I really like about KDE as opposed to Gnome. There’s very little you can’t tinker with very easily
That’s not to say it’s perfect of course…
Once you have unlocked the panel, and you subsequently right-click on the panel, a right-click context menu pops up, with one of the available menu selections being “Panel Options”.
Click on “Panel Options”.
One of the “Panel Options” is “Panel Settings”.
Click on “Panel Settings”.
Once you have done that a “Panel Settings” box appears above the panel. The widgets within this box are all fairly self-explanatory, and they all work as one would expect them to.
It isn’t hard, really. Not that much different from other panel settings on other desktops. I can’t really fathom why one would think of KDE in particular as unintuitive.
Once you have done that a “Panel Settings” box appears above the panel. The widgets within this box are all fairly self-explanatory, and they all work as one would expect them to.
That’s the issue: they aren’t self-explanatory to someone who isn’t used to KDE. They may well be that to you, but you’ve already used KDE before.
Of course I knew how to find the options menu, it’s the same in every OS.
Mozilla is open source, and there always will be people willing to compile it for every damn distro, with regard for every shitty revision of every shitty library or layer.
On the other hand, Adobe is never going to give its products away, neither as in “free speech” nor as in “free beer”. So the source is never going to be open. Same case with Autodesk, Symantec, Microsoft etc.
Hell no. Even Haiku, as alpha software, is better as a desktop OS. You feel that the OS is a whole, not a bunch of various software put together at random.
It’s good that we have Wine so we can run various Windows software, but no software is going to run emulated as well as it runs native.
Being Open Source is unrelated.
Take another example: UT2003. Runs everywhere, with one single set of binaries.
I installed Google Desktop Search on a Red Hat Enterprise Linux 4 machine, and it basically went like a Windows install. Google bundled all the dependencies with the app, so it created a folder like /opt/google/ (or whatever it was) and put all its binaries – and all the libraries it needed to run – in that folder.
The whole dependency-resolution issue doesn’t have to come up; distributors could do exactly what Google – and Adobe, I’ll bet – did, and just bundle the dependencies with their binaries, and it would work. The reason Linux software distribution is done the way it is, is that it’s more efficient when it works, not because it absolutely has to be.
Just to point that out: I know the dependencies-and-packages model does pose real problems for proprietary software distributors. But they’re not quite as insoluble as people are trying to make them out to be.
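To make that concrete, the usual trick is to ship the .so files alongside the binary and launch through a tiny wrapper script – something like this sketch (all the paths and names here are made up for illustration):
#!/bin/sh
# Hypothetical launcher installed as /opt/vendorapp/vendorapp.sh:
# prefer the libraries bundled in ./lib over the system copies,
# then start the real binary with whatever arguments were passed.
APPDIR="$(dirname "$(readlink -f "$0")")"
LD_LIBRARY_PATH="$APPDIR/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
exec "$APPDIR/bin/vendorapp" "$@"
Statically linking everything is the other option, at the cost of bigger binaries and duplicated libraries on disk.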
Great example, a game from five years ago. Dumbass.
Yeah, because programs and libraries from five years ago didn’t work like today’s programs and libraries at all.
Get a clue, troll.
Typical freetard response. YouDontNeedToRunModernSoftwareTitles(tm).
Thanks for keeping the streak alive!
(Plus, with 7 coming out this month, there actually is a HUGE difference in how apps from five years ago operate compared to 2009. But I’m sure you know that, wise ass.)
Adobe actually does it quite well with Adobe Reader for Linux. They do not seem to have a problem with that application making it onto Linux distributions, and actually running on all of them. And to point out: Adobe Reader is free, as in beer, for every platform they support, which includes Linux. It does not require open source at all, just a company willing to support Linux.
Ever had a job? That’s the place where your boss tells you to do something (like compile a program) and you do it because you want to keep the job.
Exactly what I was thinking.
Linux is fine for what it is..
But as a serious option for the desktop? I don’t see it ever happening.
1) Make laptop LCD brightness keys work.
2) Make laptop WiFi on/off key work.
3) Make laptop battery life not suck.
4) Organize the efforts to seamlessly integrate Pulseaudio and Jack for reasonable coordinated behavior in pro- and desktop audio scenarios.
1) Works fine on my laptop (after some fiddling)
2) Worked out of the box for me
3) I got ~6 hrs of battery life without tweaking anything from my Acer Timeline 5810T (given, it’s not ideal, but it’s not bad)
4) You have the right idea, but I’d rather have Linux dump ALSA and Pulse altogether for OSS4, since all the other Unices use OSS and it’s a de facto standard. And OSS doesn’t need something like Pulse, since it does channel mixing in the drivers. On top of that, it recognizes surround sound setups better.
Lucky you with the first three points. My wish is that they worked more or less everywhere. I have about an hour of battery life with Jaunty, while not being able to dim the screen (unless I use nv – but then I cannot dim as much as I would like) or turn off WiFi. On my tamed Vista (dual-boot) installation, however, I have about 3 hours of battery life.
Dunno about OSS4, haven’t tried that. My understanding is that this isn’t going to happen anyway, so it makes sense to focus on integrating what we have and not enact another drastic change.
5) WorksForMe(tm)
1) WorksForSome(tm), but in your case it may be that YouDontNeedThat(tm)
2) WorksForSome(tm), but in your case it may be that YouDontNeedThat(tm)
3) WorksForSome(tm), but in your case it may be that YouDontNeedThat(tm)
4) YouDontNeedThat(tm) but it may be JustAroundTheCorner(tm)
I have been using both Windows 7 & Ubuntu and I have to say that I am truly impressed with the way Ubuntu works. I like Windows 7 a lot but I equally like Ubuntu.
There is a real chance for Ubuntu to become the default ‘Linux’ distribution. This means, hardware manufacturers would start producing drivers for ‘Ubuntu’ rather than ‘linux’. For example: you get a box and you look at the back: ‘Compatible with Ubuntu 9.x, Windows, Mac OS X’. On the disk or the manufacturer’s web site, you find ‘.deb’ packages (pretty much like Adobe flash player).
I really hope Ubuntu doesn’t become the “default Linux distribution”. While I do like Ubuntu, I equally like Fedora, Arch Linux, and GoboLinux. Choice is one of my favorite things about Linux. If hardware manufacturers start making drivers specifically for Ubuntu (i.e. drivers that wouldn’t work on other distros, at least without some work), then Canonical would practically control desktop Linux.
Already happened, I’m afraid, as far as desktop is concerned.
Luckily this can’t happen because of GPL.
It hasn’t happened, but it would be a good thing to have just one single Linux “distro” or OS. That way things would be pretty clear, and you could have one single app that does its job and does it well. No need for hundreds of desktops, window managers, sound daemons, toolkits, APIs, layers and bloat.
I really like the Windows and OS X model: just a standard set of APIs and libs, with all apps linked against the standard APIs. That way you won’t have dependency or compatibility issues, and you can download & run software without needing to download other libs or packages.
That doesn’t happen because of any single set of libs, but rather because of static compilation and bundling any required library with the application (thus lib duplication).
You can do the same on Linux if you wish (some do that already, eg. adobe and google).
On the other hand, Linux’s package managers allow apps to actually share libraries instead of bundling them with every app (hence all the dependencies), something that on Windows would be just impossible.
Drop Gnome, use KDE. This would solve points 2 and 3:
A Music Player That Doesn’t Suck? Amarok.
Improved Visual Aesthetics? Lose the turd-brown / garish orange colour. It’s not a good look, it never has been a good look, it does not look ‘human’, it looks like a very sick person’s poo.
Real Wine Integration? Wine is and always will be a hack. Making Linux run windows apps perfectly will NOT help Linux. Porting the apps is a better use of resources.
Better Online Video Experience? That will come with HTML 5 – instead push for open codecs such as theora to become the web standard.
Renewed Focus on Marketing? See point 2, lose the turd colour!
And a few I’d like to see – stop making releases that break on older machines (note the fiasco with the Intel/ATI drivers and X); dump PulseAudio and make all apps JACK-aware instead; stop screwing up Debian by adding Ubuntu-only hacks and instead fix Debian and make Ubuntu properly Debian-compatible (i.e. able to use the same repos); and finally, once again, get rid of that horrid, horrid, horrid turd colour!
Porting apps to Linux would be better, but…
It will NEVER HAPPEN.
Many companies don’t even update their software to new versions of WINDOWS, so why does anyone think they’d care about Linux?
Microsoft has about a zillion engineers who do nothing but write special layers into Windows to emulate bugs and “features” from previous versions … this is basically Wine for Windows … so that these apps will still run.
Just properly implement the Elantech touchpad drivers so I can disable this accursed tap-to-click functionality.
Yeah, I really, really hate tap-to-click. I forget how I finally turned it off, but I eventually did.
I bet your problem is that the psmouse driver is conflicting with the synaptics driver.
You may benefit from the following low-tech hack:
http://tinyurl.com/disabletouchpad
It will save your skin at least when typing.
To disable tap-to-click on touchpad (MaxTapTime), you need to create or edit /etc/hal/fdi/policy/shmconfig.fdi and add the following contents:
<?xml version="1.0" encoding="ISO-8859-1"?>
<deviceinfo version="0.2">
<device>
<match key="input.x11_driver" string="synaptics">
<merge key="input.x11_options.MaxTapTime" type="string">0</merge>
</match>
</device>
</deviceinfo>
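After saving the file you’ll need HAL to re-read its policy – on distros of this era that’s something like sudo /etc/init.d/hal restart (the exact command varies by distro), followed by restarting X, before the MaxTapTime=0 setting takes effect.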
This is not necessarily an Ubuntu issue; it could be solved at the Desktop (KDE/Gnome) or even X-Windows level; however a graphical tool that allows one to configure the pointing device — whether mouse, trackpad, trackball, or something else — completely is really needed.
I am left handed, and the first thing I do when setting up an account on a machine (whatever OS) is to configure the pointing device for left-handed use.
The standard tools allow one to swap the left and right buttons; however, they do so poorly. For example, with a touchpad, the traditional touchpad actions are mapped to the swapped right-button actions, rather than either (a) retaining their original mapping, or (b) making their behavior user-definable at a GUI level. Also, when three or more buttons are available (my IBM ThinkPad T41 has 5 buttons), there is no GUI method to configure them.
The functionality to do so exists at the X-Windows level, but for such basic GUI functionality these should be configurable at the GUI level for two very good reasons:
– It is one of the first things a new user experiences.
– As a GUI feature, setting and verifying should be at the same level.
There was a qsynaptics/ksynaptics package available for KDE at one point; however, it did not contain all the necessary functionality, seemed to be Synaptics-specific, and was discontinued. I believe this one missing feature turns off many potential new users up front.
for the general consumer, and then you’ll be getting somewhere.
Except that the general consumer doesn’t know shit about software design. Trust me, you don’t want software designed by the general consumer; it would suck, big time. You want software engineers to design software, because they are experts in software design. They know that “organizing my desktop like a home, putting apps in the living-room and files in the fridge and deleting documents in the toilet” is a bad idea.
Nope. You want an engineer to program software. To design it, you need a designer. That’s the problem with Linux: too many programmers involved. And the few designers involved are not very talented: the Ubuntu desktop is a clear example. Even the name “Ubuntu” is ridiculous. This comment section is an example of that: the article dealt with minor changes to the interface to make Linux more friendly, and all I see is 4 pages ranting about X and ALSA. Linux is a major fail on the desktop in 2009, and if more people who are not programmers don’t get involved, it will still be a fail in 3009. Use OS X, then use Linux. There’s a world of difference. And I’m talking about the desktop here. Screw servers, screw the command line. We want pretty, clean, useful GUIs. Let me repeat the mantra: software is what you see, not the code. In 2009, software has to look good and be easy.
Shhh … don’t let the freetards think about such things. CLI should be enough for anyone.
Who said anything about the consumer designing the software? The consumer needs to be listened to in order to find out what features they actually use, want and need. Places do that, you know, like Apple and such. I want people with a clue to design the software, and I want the software engineers (programmers) to put in the features that I want, not what they think is “good enough” for me.