Tech Firms To Tackle Linux Desktop Standards

Adobe, IBM, Intel, Hewlett-Packard, Novell, RealNetworks and Red Hat are all backing the new Linux standards effort led by the Free Standards Group. The nonprofit organisation plans to marshal their resources to form standards for key components of Linux desktop software, including libraries, application runtime and install time. The group said Monday that it will encourage software developers to use its guidelines when building programs for Linux as part of its Linux Standard Base project.
You can already do that with tar balls.
-
2005-10-18 9:15 pm by segedunum
Hey Segedunum, so now that some major players are backing this does it count?
ROTFL.
Yer, these people are all major players. In fact, most of them are the people that lost the last time around.
It was inevitable that the Qt license was going to come back to haunt some people.
Ahhh. I go away for a nice pint, come back, peruse the internet for a bit and lo and behold – over a hundred comments about making pieces of shit like GTK and Gnome standards.
If it’s crap unreliable software, people ain’t using it. Users and businesses do not pick software based on favourable licenses, no matter how much the technology and license wankers like Lumbergh jerk. General rule of thumb there explained many times before, but we have quite a few goldfish round here. I’m amazed you found your way back onto the internet.
-
2005-10-18 10:20 pm by Lumbergh
So much bitterness Sege. Not surprised though. I love how your story has changed from “it doesn’t matter” to “Unix tried this before”. Nice one.
But you might be right that it doesn’t matter. It’s not like the Linux desktop is going anywhere anyway.
I love how you throw a tantrum anytime the Qt license topic comes up. It’s amusing.
-
2005-10-18 10:50 pm by Anonymous
So much bitterness Sege.
Errr, no. I’ve never been here for most of this useless ranting with people wanking over perceived big player support again.
Not surprised though. I love how your story has changed from “it doesn’t matter” to “Unix tried this before”. Nice one.
Errr, no.
But you might be right that it doesn’t matter. It’s not like the Linux desktop is going anywhere anyway.
Why bother then?
I love how you throw a tantrum anytime the Qt license topic comes up. It’s amusing.
So what were the last seventy odd comments about then?
Businesses all over the planet have decided to standardize on Linux. The open source crowd wrote the licenses; they will now have to deal with the consequences.
Almost every major piece of software I run on Linux is the result of corporate business involvement. It is the only reason Linux is a viable desktop system today. After years of 0.1alpha software littering the Linux world, big business has come in and is turning Linux as a Project into Linux as a Product.
Corporations are taking over Linux and finally whipping it into shape and there is nothing the open source crowd can do about it now. They wrote the licences, the code is out there. The amount of money to be saved for companies to standardize on a free and open system is so great that nothing the open source crowd will try to do to stop it will have any effect.
What some bearded GNU freak still running Slackware has to say about the matter will be of no consequence.
-
2005-10-18 6:58 pm by Lumbergh
Nobody can co-opt anything on linux because it’s open source. But I’ll grant you that corporations like RedHat, Novell, Suse, and even Trolltech have pumped resources into the desktop that wouldn’t have been there. On the flip side, corporate involvement could hinder development too. If it ever came to be that some RedHat marketing guy started playing puppet master on the RedHat gtk+/Gnome guys then that could be problematic.
But this whole thing is not about choice or freedom because you can’t take that away with open source. It’s all about standards and making the desktop more palatable for ISVs to target.
Frankly, I think it’s too little too late. I’ll run KDE as my desktop and a bunch of gtk apps with it, but I don’t think either desktop is really that important compared to Mozilla and its derivatives.
First off, Lumbergh, had you bothered to read some of the other replies you would have known that libQt will be standardized as part of the LSB desktop’s first release. So you have to find another approach for your FUD.
And the link you provided as proof was rather telling, it gives me a 404 Not Found. Nice way to tell us all that you are wrong.
Besides you are rather confused with the “no strings attached” argument of yours, as it’s the LGPL that has strings attached. Qt on the other hand has a no strings attached license.
-
2005-10-18 7:49 pm by Lumbergh
Sorry Morty, try the second link. I don’t care about the other “replies”. Show some evidence. And the wiki page doesn’t count.
-
2005-10-18 7:57 pm by amadeo
Show some evidence.
Here is some evidence: Qt has been discussed for inclusion in the last 3 months in the LSB desktop meetings / conference calls. Here is the agenda for the next one:
http://mail.freestandards.org/pipermail/lsb-desktop/2005-October/00…
-
2005-10-18 8:03 pm by Lumbergh
Here is some evidence: Qt has been discussed for inclusion in the last 3 months in the LSB desktop meetings / conference calls. Here is the agenda for the next one:
let’s hope more distrobutions jump on the ship!
about time. This is really needed badly. Looking in from the outside the whole linux thing seems like a complicated mess with no standards.
LSB has been going on for quite a while. It’s not new, it’s just one more step.
But it’s needed, no doubt about that
Maybe this is a good thing, maybe not. If you look closer at the companies that are leading this they are Gnome-centric. Let’s hope they don’t shut out KDE at the same time they are drawing up their standards.
With Trolltech being one of the members of the FSG, I don’t think that will happen anytime soon
If you look closer at the companies that are leading this they are Gnome-centric.
Well, Novell isn’t Gnome-centric; if you look at openSUSE and the products they actually sell, they’ve actively gone the other way.
I’d hardly call those companies Gnome-centric either. They’ve employed a bunch of people getting lots of free lunches in the open source community who happen to push Gnome and GTK, but it’s not as if their companies use it. The companies listed on there are some of the biggest Windows developers around – hypocrites!
Let’s hope they don’t shut out KDE at the same time they are drawing up their standards.
I doubt it. People have been saying that non-stop for about six years, but when all is said and done, people want technology and a desktop that works. What’s the point of drawing up standards for a desktop absolutely no one will use when put up against what they have now – Windows, perhaps Mac? After all, that’s what happened to Unix and stuff like CDE in the 80s and 90s.
“let’s hope more distrobutions jump on the ship!”
Linux is doing more to destroy the English language than education-related budget cuts.
Distrobution. Heh.
Well, it should be distribution.
But some people probably get confused over the distro->distribution thing :p
dylansmrjones wrote:
>Well, it should be distribution.
>
>But some people probably get confused over the distro->distribution thing :p
What we need is a standard that specifies the correct term.
“What we need is a standard that specifies the correct term.”
Which would promptly be ignored by Linus as being bullshit, and thereafter be packaged in RPM, DEB and assorted other formats that will unpack themselves to different directories based on the distributor’s whim.
Red Hat’s will require GNOME as a dependency (as it uses aspell for some of its sub-packages), and Debian won’t include it in anything other than Sid for the foreseeable future.
This is definitely a good move. I look forward to the fruits of the work.
About Linux destroying the language… I think it was Curly Howard who complained about mangling the “Queen’s English” by messing up the tense in sentences. Remember Justin Wilson, the Cajun Chef? “Now, what I’m going to DID is…”
Of course, a minor spelling error has nothing to do with the English language, as that can happen in any language and the point of the person was clear as can be. But all that said, it’s amazing to me how many articles get published on the web with bad spelling and improper usage of language!
If you have ever heard true Cajun dialect, you would understand that it is a mixture of French and English. Also, South Louisiana is the only part of the USA where French is still spoken natively.
Regards,
A Coonass
I wonder what sort of overlap this will have with freedesktop.org.
Freedesktop.org has accomplished great things in a relatively short space of time. I’d hate a clash between the two.
I totally agree. FreeDesktop should work together with “Free Standards”… we don’t need any more forks.
I wonder what sort of overlap this will have with freedesktop.org
I don’t think there is any overlap.
LSB specifies libraries, APIs and ABIs and freedesktop.org hosts specifications and is a collaboration forum for working on new specifications.
Very likely LSB will base some of their work on that of the developers contributing to freedesktop.org.
Standards for the sake of standards are what killed desktop Unix systems and open source software off in the late 80s and 90s, allowing Microsoft to move ahead unchallenged.
Basically, and this happens a lot with the LSB now, many people get involved in technology wanking (and conferences in Hawaii) over whose software, and even which particular implementations, will be counted as a standard. Once something is ratified as a standard (and it isn’t even the best implementation most of the time) it then stagnates totally and all calls to replace it with a better option are totally ignored.
What is even worse than a lack of standards is crap and stagnant software that stays around simply because many people can call it a standard. That might not matter to these people, but it matters to end users. I see all the usual suspects have not learned that lesson.
Hope it won’t be another UnitedLinux.
Let’s see if these companies actually stick to this. I know in the past when other groups formed a standard, they lasted for a while and then disbanded. UnitedLinux, as the person above mentioned, is a good example.
They could solve a good deal of the current situation by moving to a single desktop. Supporting Gnome and KDE might offer choice, but the downside is huge.
Hopefully not, they’ll just marginalize themselves. Whatever any supposed standards body decides, Gnome and KDE aren’t going anywhere.
IMHO those big vendors RedHat, Novell and HP are pushing Gtk as the standard desktop for Linux.
too bad
What is needed is Untied Linux. We don’t need no stinking innovation-crushing standards declaring “oh, System V is the only right way to do it”; stupid crap like that suffocated UNIX in its formative years and this will do it again. There is a broad spectrum of requirements out there and we need a broad spectrum of Linux distributions to meet them. STAND UP, give this trash the raspberry and just say no to conformity.
IMHO those big vendors RedHat, Novell and HP are pushing Gtk as the standard desktop for Linux.
I don’t understand why these companies push a toolkit/desktop that is technically inferior to what Microsoft and Apple have to offer. If they don’t want to use Qt because it is not LGPL, why don’t they a) buy Trolltech and release Qt as LGPL, b) help the GNUstep people to finish their toolkit, or c) create something that is technically as advanced as Qt and release it as LGPL? While it may be possible for them to make some money with GTK/Gnome based stuff in the short run, Linux will never be a viable desktop in the long run if it is technically inferior to Windows or Mac OS X.
If they don’t want to use Qt because it is not LGPL, why don’t they a) buy Trolltech and release Qt as LGPL
Yeah, why don’t they buy Trolltech? It’s a simple matter, right? “Hey Trolltech, we’d like to buy you, here’s some cash.”
LGPL, a great toolkit, Cairo, and it’s being optimized right now; not proprietary, no dual-license bullshit.
Go GTK, Go Free Standards Group.
> Linux will never be a viable desktop in the long run if
> it is technically inferior to Windows or MacOSX.
In which way is Gtk+ technically inferior to Windows? As someone who has programmed in Win32 and MFC, I strongly dispute your claim.
Linux is based on or around many, many standards. Examples might well be TCP/IP, The linux kernal and Posix.
However, I have to say that in general, the home brew nature of every man and his dog, and everyone involved is at the stage where hopefully people can come together.
It does’nt need RFC style standards. It just needs some simple stuff. I’ll add that I think Linux’s strength and the homebrew nature mean that this might mean that you never reach where you need to reach to get this.
During my Linux LPI 1 exam, I studied over several areas, RedHat Vs Debian. From the critical side, Permissions were different, start up scripts, and settings were held in different locations, logfiles were called names that logically held no resemblence to what they would be logging, software installation was all over the place, with each distro operating on its own idea of what and where things should go, software directories were the same. In the harshed terms, you might have a unified Kernal, but the system sitting on top suffers the homebrewed nature of being all over the place. And that was before the 2.6 driver changes.
Now, for the uninitiated home user who fires up linux, I am sure they can live with this, they probably don’t delve to deeply in the system. And for the Unix specialist, I would guess that they too can cope with this ‘evolving’ mix of end product.
There *must* be a way that this could be addressed. The LSB to be honest does’nt seem to actually tackle the underlying issues. Perhaps there are just so many people and distro’s involved that they deem it as unwanted should they go there.
The older Linux actually gets, the more cludgy and backward looking it seems to grow at the same rate.
Perhaps its time to take the kernal, and just start with a clean state from there.
That is a much better example of poor English. I had to read the second paragraph several times to see whether you were just injecting random sentence fragments or trying to make a point. You get bonus points for redundant phrasing like “you never reach where you need to reach to do this,” and for seemingly random use of commas and capitalization.
Nonetheless, I do agree with your viewpoint and the basic thrust of your message. It is difficult for vendors to provide solutions that work properly together without some form of standards. Standards can introduce stagnation by stifling creativity, but they can also stimulate innovation by stifling contrariety (difference for its own sake).
Standards are an essential part of evolution. As growth occurs, new ideas are tried and basic similarities are recognized between these ideas. In time these become standardized either through a formal committee or through natural use (i.e. de facto standards). These new standards become stable platforms upon which we can build more elaborate and diverse structures by developing new solutions to new problems.
For this reason I believe those who are “for” or “against” standards are simply ducking the question. Effort must be expended to determine which things should be standardized. It all depends on whether the potential standard has immediate, tangible benefits. There is no sense in reinventing the wheel: when we accomplish something, let’s mark it done and move on until there is a legitimate reason to revisit it.
The only real problem here is human laziness.
Let’s have a nice, sane packaging format first that allows people to install what they want, when they want.
Currently, to get the latest software, people need to build from source or use ‘unstable’ repositories and pull in dependencies or find some RPM for Mandriva that might work on SUSE and and and…
Let’s get using Autopackage (www.autopackage.org) and make new programs double-click easy install.
(If you don’t see it as a problem, try getting the new Gnumeric release when it comes out, without using some ‘unstable’ package repository. Not easy, eh?)
(If you don’t see it as a problem, try getting the new Gnumeric release when it comes out, without using some ‘unstable’ package repository. Not easy, eh?)
Why not wait until the distribution has added it to their repository and have the new stuff tested a little bit, before you yourself clumsily destroy your setup with the latest and greatest?
Installing Autopackage software is like Russian Roulette. No way on earth that an Autopackage binary blob can play nicely with each and every intricacy of all the GNU/Linux based OSes out there. I trust the distribution’s software management.
Plus, I really, really, really hate the idea of having to trawl the net and download each and every piece of software I want to use and then afterward have to click each and every piece after I’ve downloaded them and then having to answer the same stupid questions over and over again. Click -> next -> next -> next times 1,000,000.
Repository-based and (graphic) package-manager accessible software is superior to any installshield solution.
You’ve just supported my argument even more. Why should users have to “wait for it to appear in the distro”? In many cases, a new major release of a software package fixes bugs and problems in earlier releases. And now you’re saying, users shouldn’t have easy access to those bugfixed releases?
That they should have to wait for it to get into their “repositories”? Or download the source (and all the dependencies) and build by hand? Or poke around for unstable repositories that bring in their own problems?
All to fix a bug? It’s insane.
And if you hate the idea of ‘trawling’ the net for software, ask your distro to collect together Autopackages of software into one place, so you don’t have to go hunting. That also leads to a MASSIVE reduction in duplicated effort.
Right now, when a security issue is discovered in FooApp, hundreds of developers for hundreds of distros scramble to patch, rebuild and distribute the exact same fix over the myriad of distros. If there was an Autopackage for FooApp, distro vendors could simply grab that and push it out to end-users.
If you look around forums on the Net, one of the most common problems Linux newcomers grumble about is software installation. They see FooApp, go to the site, and just want to download and run a program. But instead they have to go through the massive somersaults mentioned above, or wait five months for it to appear in their next distro release.
Do you think that’s acceptable? Do you not think we, as a community, should be doing something to get Linux out of its ~5% market share, and going upwards?
Software installation under Linux is fiendishly messy, and projects like Autopackage could solve it — with you STILL having your repositories if you wanted them.
I prefer the Mac approach, first developed on NeXT computers –> everything there is about an application is bundled within its own subdirectory; drag it/copy it to wherever on your system, double-click and it runs. Erase it, it’s gone. Have many releases of the same app runnable on your computer? Sure!
For folks interested in writing apps for a widespread Linux audience, including potential commercial developers, this would be the pipe dream.
Problem? All the desktop/services stuff. The icons, associations with files, launch/start menu, etc. MS led the way in requiring packages to stick things in system directories to make such magic work. Thus, they require install scripts that know where to put these things. It also requires (ugh!) uninstall scripts to remove this junk when you want to ditch or upgrade a package. And we all know how often we need to upgrade apps in Linux!
Ditch all this. That’s an accomplishment I’d like to see from a Linux Standards Board.
Let package managers handle the underpinning distro toolkit foundation stuff. Software like Open Office, Abiword, Audacity or Unreal Tournament should not have to be squeezed into /usr/bin, /usr/etc, /usr/lib, /opt/kde, etc. That’s as bad as MS requiring DLL’s to be stuffed into the Windoze directory. Remember DLL hell?
It was handling DLL hell which encouraged the creation of Installers in the first place – a kludge to handle a kludgey OS that required tens and hundreds of files to be put just so or else the program wouldn’t run.
Heck with that. All these problems came from a time when memory and storage were scarce and there was 16-bit windoze to remain compatible with. Linux need not follow in such footsteps. I hope the LSB establishes a standard where Apps may be drag-n-drop install, like the Mac. Simple and clean for the expert, easy to comprehend for the newbie.
It will take work for something like this to come to pass, but what’s the LSB there for, anyway?
This has been discussed times and times already.
Apps have library dependencies. Having more copies of the same library installed would (probably) break dynamic linking in one way or another. Either you would end up using a library from a different app’s directory or you would end up with two copies of the same library in memory. Though disk space and even network connectivity (at least in some places) are cheap, RAM still is not. I’d rather use my RAM for a file cache than for duplicate copies of the same code.
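To make the RAM point concrete, here is a minimal sketch (Linux-only, assuming /proc is mounted; it is not part of any packaging tool, it just inspects what the kernel has mapped for a process):

    import os
    from collections import defaultdict

    def mapped_shared_objects(pid="self"):
        """Group the shared objects a process has mapped, by file name."""
        libs = defaultdict(set)
        with open(f"/proc/{pid}/maps") as maps:
            for line in maps:
                parts = line.split()
                if len(parts) >= 6:              # the 6th field, when present, is a path
                    path = parts[5]
                    if path.endswith(".so") or ".so." in path:
                        libs[os.path.basename(path)].add(path)
        return libs

    if __name__ == "__main__":
        for name, paths in sorted(mapped_shared_objects().items()):
            note = "  <-- more than one copy mapped" if len(paths) > 1 else ""
            print(f"{name}: {', '.join(sorted(paths))}{note}")

On a normal desktop each library shows up from exactly one path and its pages can be shared; a bundle-everything-per-app world is what would make the duplicate case appear.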
There’s nothing nice about drag and drop installation. It might be easy to explain – “just” download and drag’n’drop – and it fits well into a desktop metaphor. However, I prefer typing “aptitude install app1 app2 … app10” to doing ten drag’n’drops.
I also _welcome_ and _appreciate_ that the distro (Debian in my case) people test the new versions of applications and make sure they don’t break too much of the existing stuff.
I’d rather use my RAM for a file cache than for duplicate copies of the same code
Good point, but I think an even worse problem is maintainability.
Every app has to provide its own update mechanism and in the case of a security problem in one of the libs, you will need to get them multiple times and if one of the apps doesn’t have an update you can’t use the secured version of another.
Every app has to provide its own update mechanism and in the case of a security problem in one of the libs, you will need to get them multiple times and if one of the apps doesn’t have an update you can’t use the secured version of another.
Yes. But I wrote about this point on OSAlert so many times I did not want to go into it again.
That’s true, but what I’d like to see is an effort to define a clean separation between foundation/base packages (libs, critical data) and leaf, application-level packages.
The first group could be tightly coupled and intermingled, could be developed intensively and break compatibility among themselves, but as long as they presented a self-consistent, extendable and stable API/ABI to the second group, users could sleep well (see the analogy with the kernel’s internal/external API).
The other group would be self-contained application packages that are only allowed to depend on the stable API of the first group but are not dependencies for any other apps (let’s call them leaf packages).
Optionally, distros could provide special-purpose extensions for the base that include more libs.
That setup would let the two software distribution ideas work together and combine their strengths:
+ foundation package distribution would be repository/dependency-graph based and thus would receive all the update goodness. As their number would be finite, the approach wouldn’t hit the scalability problems that the all-encompassing repository-based approach is ridden with
+ leaf packages could employ an easy install=copy mechanism. As they would share the popular foundation libs, the amount of bloat could be kept manageable
+ leaf apps would check for the presence of non-foundation libs and use them instead of their own copies, reducing bloat even further
+ distros could still provide their own repository of selected leaf apps to present a user-friendly application selection mechanism (aka click’n’run), but these would have to be replaceable by upstream leaf packages
I believe that the LSB is driven by this very vision, but by taking a conservative (yet the only currently feasible) approach and including only a handful of foundation components under its umbrella, it limits its usefulness.
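A rough sketch of how a tool could check that split (the manifest format, the foundation ABI list and the package names below are all made up purely to make the idea concrete; nothing like this is defined by the LSB today):

    # Hypothetical leaf-package manifest check: a leaf package may depend only on
    # libraries that the (imaginary) foundation set promises to keep stable.
    FOUNDATION_ABI = {"libc.so.6", "libstdc++.so.6", "libgtk-x11-2.0.so.0", "libqt-mt.so.3"}

    leaf_manifest = {
        "name": "fooapp",   # made-up example package
        "depends": ["libc.so.6", "libgtk-x11-2.0.so.0", "libfoo-internal.so.1"],
    }

    def unsatisfied_by_foundation(manifest, foundation=FOUNDATION_ABI):
        """Dependencies the leaf package would have to bundle (or drop) itself."""
        return [dep for dep in manifest["depends"] if dep not in foundation]

    print(unsatisfied_by_foundation(leaf_manifest))   # ['libfoo-internal.so.1']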
Of course there are technical obstacles to overcome:
+ borderline cases – e.g. should Apache be considered a foundation or a leaf package? I think we could look at Windows when deciding that
+ what about plugin-enabled apps?
+ what about not-yet-stable but still necessary foundation components (e.g. power management, hotplug, virtually every new technology)?
Nevertheless I think it’s worthwhile overall.
rgds,
DS.
Drag and drop and installation are different things. You could have a wrapper around apt so you have a window called “repository” and another one called “installed packages”, and when you drag from “repository” to “installed packages” it automatically downloads and installs the package and all its dependencies. I think this can be done with kioslaves or gnome_vfs.
Now if you complain that when you drop a package on “installed packages” it asks you to install N more packages (dependencies) and you feel bad about that, the “repository” window can be tweaked to show you only end-user applications and no library or utility packages, and when you drop a package it only shows you the final size of all the packages together. The system can schedule periodic cleaning of unused packages so you don’t have to think about it. With this you get the sensation that packages are simple files. Even if you wish to pick up a package and copy it to a pen drive, the system can copy all the needed packages (all the dependencies) as one big file to the pen drive, and when you get to another system and drag it onto “installed packages”, that system only installs the packages that are not already present.
With this system you can keep installing and uninstalling things and the system can keep tracking dependencies and updating packages and libraries without redundancy.
The only things needed are:
a new package type: a package of packages
a kioslave or gnome_vfs module to control the package manager
a way to tag some packages to be shown as “end user applications”
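A minimal sketch of the drop-handler side of that idea (assuming a Debian-style system with apt-get on the PATH; the function names are invented and the kioslave/gnome_vfs glue is left out):

    import subprocess

    def on_drop_to_installed(package_names):
        # Dropping onto the "installed packages" window: delegate to apt-get,
        # which downloads the packages and resolves dependencies itself.
        subprocess.run(["apt-get", "install", "-y", *package_names], check=True)

    def on_drop_to_repository(package_names):
        # Dragging back out removes the packages; the periodic cleanup step
        # then drops dependencies nothing else uses any more.
        subprocess.run(["apt-get", "remove", "-y", *package_names], check=True)
        subprocess.run(["apt-get", "autoremove", "-y"], check=True)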
Do you think that’s acceptable? Do you not think we, as a community, should be doing something to get Linux out of its ~5% market share, and going upwards?
I did not choose Linux for its market share. I chose it, because I like how it works, how it is created, etc.
You believe that technically-worse solutions are worth it for a project like Linux just to increase a market share? Keep in mind, that there is not one entity called “Linux”, just a number of coders and distributions and companies, each with a different motivation and goals. I, personally, prefer technical correctness to appealing to Joe Sixpacks.
So hundreds of developers all scrambling to patch the same flaw for their distros is “technical correctness”?
So software that needs libfoo.so.0.2.1 but won’t work on libfoo.so.0.2.2 and requires a certain GCC and glibc symbols is “technical correctness”?
Methinks you just enjoy saying things like “technical correctness” when you have no idea about the problem. It just gives you the warm and fuzzies.
No. Having a unified way to install all software in your OS is technical correctness. Having a unified way to apply patches is technical correctness. Checking that a patch does not break the existing system / distribution is technical correctness. Utilizing existing dependency-resolving schemes and libraries-sharing (both in RAM and on disk) technologies is technical correctness. Getting your software from a central, trusted, verifiable source is (sort-of) technical correctness.
I agree that whatever worked with libfoo.so.0.2.1 should work with libfoo.so.0.2.2 as well. However, someone has to check / test it. Requiring GCC or glibc symbols is OK, if they are public. Requiring unsupported / deprecated symbols is not OK, but that is a coding issue, not a packaging / distribution issue.
Methinks I should start ignoring anonymous comments.
Methinks you should spend some time with Linux newcomers, and help out on Linux newbie forums, and you’ll start to see what a nightmare the packaging situation is. I’ve seen far, far too many potential convertees go back to Windows because it’s way too fiddly and convoluted to get a new program.
If the labyrinthine tangle of per-distro repositories, duplicated effort, compiling or waiting months for a new distro release is all ‘correct’, how come so many other OSes have a much cleaner system?
Is the Mac, BeOS, RISC OS, Windows etc. way of installing software ‘technically incorrect’ then? Because to the vast majority of users, it’s orders of magnitude faster and simpler.
I love Linux, but I’ve come to accept that there are inherent problems in trying to develop a modern desktop OS via a vast number of disparate groups working independently. It has bad side-effects as well as good. And yes, if you’re involved with Linux purely for technical reasons, then these problems won’t be a concern.
But if like me, you want to share Free Software and get the masses away from proprietary systems, we need to offer something better. In some ways we do, but in some ways we don’t. You can choose to ignore these problems and the thousands of newbies struggling with them (I know, I run a Linux website forum), but don’t complain when, in 2010, Linux still has ~5% of the market share.
I’ve seen far, far too many potential convertees go back to Windows because it’s way too fiddly and convoluted to get a new program
It might be a matter of what you are used to or how you expect things to work.
I for example find it a lot easier to just type a query, click an install button and be done, but of course others might find it easier to search with google, locate a download page, potentially register for download, download and install.
Is the Mac, BeOS, RISC OS, Windows etc. way of installing software ‘technically incorrect’ then?
Besides Linux I have only experience with Windows and while it is a different approach for installing programs, it works quite well.
The main problem I have with their way is what happens after installation.
The programs register themselves with the central software list but do not register any update hooks.
If you start a software update, the only thing it can update is the operating system and maybe other Microsoft software.
I really don’t get why other software vendors do not register their applications with the system’s update service.
Instead they have their own update thingie that checks for updates during runtime.
So if you already know that you have an application with a potential security problem, you either have to start it despite the threat or remove it and install a newer version.
How can anyone count this as simple?
“I for example find it a lot easier to just type a query, click an install button and be done.”
But it doesn’t work that way. That’s only applicable if the specific piece of software has been explicitly packaged for your distro.
But what happens when a new version comes out? After a while, your distro may include it in its ‘unstable’ repositories. So you have to add some ‘unstable’ source, pull in loads of other files, just to get the new program?
Or find some unproved 3rd-party repository with, again, packages for your specific distro? apt-get, yum et al. don’t solve the real problem.
To prove my point:
You’re running FooLinux. SuperProgram 3.0 comes out, but your distro was supplied with SuperProgram 2.1. You can just ‘type a query and click install’? Really? Without switching to some unstable repository, which has its own problems? Or without adding some 3rd party repository, which again, will pull in other files that’re untested?
The real hurting problem is not really the distro-incurred latency before new functionality is available, but the fact that the crucial fix you absolutely depend on only appears in the dreaded next 3.0 version, along with new features (and bugs). Because your distributor most of the time doesn’t backport it (as it depends on 3.0-specific changes), you’re stuck with a broken app most of the time. That’s especially true in the case of usability bugs that only get ironed out after some early-adopter feedback. You grumble, knowing that somewhere there are happy people enjoying life and a fixed version. You can look (at screenshots on a news site), you can touch (on live CDs), but you can’t take it. And your frustration only spirals up.
Well said. If only more people would see these problems, instead of mindlessly believing the Linux way is always ‘the best’ and avoiding anything that makes people’s lives easier…
That’s only applicable if the specific piece of software has been explicitly packaged for your distro
Which is in my experience (almost) all the software most users ever need.
The only software I have installed from a secondary source is mplayer but it is still available through the software installation framework.
But it might heavily depend on a user’s working patterns.
But what happens when a new version comes out?
Assuming that I am using a consumer-level distribution, for example Ubuntu, I guess they will package the new version as soon as they have stabilized the possible new dependencies.
Of course there will still be a delay between a project’s release and availability of the package but I can always use something like klik if I can’t wait for six months until the next release.
“Which is in my experience (almost) all the software most users ever need.”
No, it’s software at a snapshot in time. When a new major release comes out with bugfixes and new features, then you’re stuck, unless you start fiddling around with unstable repositories and/or source. Why should it be that way when in almost every other OS, that’s not the case?
“Assuming that I am using a consumer-level distribution, for example Ubuntu, I guess they will package the new version as soon as they have stabilized the possible new dependencies.”
You sure are guessing. Ubuntu doesn’t do that, nor do many of the major distros. Ubuntu issues security fixes only. New software releases are placed in the next release’s repos. So when the new FooApp comes out with major bugfixes and new features, it’s only added to Ubuntu’s ‘unstable’ repository — and you expect production users to run that? Or wait several months? Do you see what the problem is now?
“…if I can’t wait for six months until the next release.”
Thanks for consistently proving my point! The fact that it’s even an issue, that we’re talking about ‘repositories’ and distro-specific ‘packages’ is a clear indication of how big the problem is. Users of other OSes don’t have that problem — they just see a new app, download and go.
But the best thing is that you’ve mentioned Klik, so you’re aware that something cleaner, easier and more elegant is required. I’ve recommended Autopackage, which is very similar.
The sheer duplication of effort and convoluted tangle involved in Linux packaging is astounding. You can talk about ‘repositories’ and ‘unstable’ and ‘source’ while everyone scrambles to package the exact same software for hundreds of distros…
…or you can consider a system where anyone can just download and run the latest software. In a snap. If people can do that on Mac OS, Windows, BeOS, RISC OS, Amiga OS, Syllable, SkyOS…. why not Linux?
Why should it be that way when in almost every other OS, that’s not the case?
As I mentioned before, my only other experience is with Windows, and there I surely miss the maintenance capability.
But of course this might be a misconfiguration on my part and installed programs are indeed upgradable through a central UI.
So far I only managed to get Microsoft software updated that way.
You sure are guessing
Yes, I am. I have never used one of the consumer level distributions, only the full blown can-do-everything distributions like SuSE and Debian.
Ubuntu issues security fixes only. New software releases are placed in the next release’s repos
I see, thanks. What about Linspire? Do they update the software in the CNR warehouse?
Do you see what the problem is now?
Actually just from some theoretical viewpoint. The users I know personally, mainly relatives, do upgrade their software less often than once per year.
Now that I think about it I am sure they do not upgrade at all, upgrades are usually part of the things I do when I have to fix their system.
But of course different users might have different needs.
But the best thing is that you’ve mentioned Klik, so you’re aware that something cleaner, easier and more elegant is required.
It certainly helps for the adventurous users that want to try lots of stuff.
I’ve recommended Autopackage, which is very similar.
I will start recommending autopackage once it stops destroying the system as their default option.
Planned for version 1.2 if I remember correctly.
If people can do that on Mac OS, Windows, BeOS, RISC OS, Amiga OS, Syllable, SkyOS
So I take it that those systems do in fact have a central software registration service which can update each package when required and does not need to depend on each vendor to fix shared code?
My information on other systems is rather limited, but my impression is that they don’t have this, that they rather let the user deal with lazy vendors, lots of duplication and manually hunting for updates or depending on each application to update itself.
Doing that would be a real regression compared to what I have now, but of course if they actually have a component registry with update mechanism I’d agree that it would be a better solution.
In many cases, a new major release of a software package fixes bugs and problems in earlier releases. And now you’re saying, users shouldn’t have easy access to those bugfixed releases?
In many cases, a new release is buggy and unstable. Waiting for the distro to include it in the repository is the reasonable thing to do for 95% of users. The 5% of users who must compulsively use the latest and greatest version can always install it from tarballs. It’s not hard, and with utilities such as checkinstall it’s pretty safe as well.
The idea that you must absolutely install the latest version of an app comes from the Windows world. Things don’t exactly work the same way with open source…
If you look around forums on the Net, one of the most common problems Linux newcomers grumble about is software installation.
Not really. The most common thing Linux newcomers grumble about is lack of native drivers for their hardware, or the fact that they are proprietary drivers that require special procedures to install. In other words, they indirectly complain about the fact that such drivers are not available as packages, since that means that they could be installed automatically during system installation. Autopackage (while a good idea) would not solve this.
Programs like Autopackage and other similar installers are useful for commercial software, because that provides cross-platform packages for ISVs. For most system and open-source software, however, the repository system works just fine. I use it every day and I don’t feel slighted by it at all. I really do think you’re exaggerating the problem here.
“Waiting for the distro to include it in the repository is the reasonable thing to do for 95% of users”
What, even waiting up to six months? Or a year? Most distros only include security-fixed packages in their archives, so unless you risk your whole system by switching to an ‘unstable’ repository you have to wait for a whole new distro release several months down the line — upgrading everything else too, which you may not want to do. Or go through the trials of compiling from source.
Do you realise what you’re saying here? There are many, many operating systems that have orderly and clean packaging solutions, so that users can get new programs as and when they want. But all this waiting and distro-upgrading is somehow a ‘feature’ for Linux, when it’s seen as vastly backward compared to other OSes?
Please, go use a Mac for six months. Or RISC OS. Or BeOS. Or even Windows. Then come back and see that this distro-specific-repository-waiting-or-source charade is comically bad.
What, even waiting up to six months? Or a year?
You obviously haven’t heard of backports. This means I can get up-to-date packages without switching to an unstable distro.
Please educate yourself a little bit more before trolling.
Oh, and I use Windows every single day. I have done so for 15 years. So please STFU.
Installing Autopackage software is like Russian Roulette. No way on earth that an Autopackage binary blob can play nicely with each and every intricacy of all the GNU/Linux based OSes out there.
It would work if distributors collaborated with the Autopackage developers. What would not work is having the Autopackage developers trying to guess at the intricacies of many distributions.
I trust the distribution’s software management.
That promotes duplicated effort and stifles creativity. It’s harder to create a new distribution when you have to set up your own repository. Even if you counter this point by saying that a new distribution could be based off an existing one, that existing one is still duplicating effort; therefore, the derived distributions’ package selections are constrained. Distributors ought to be specifying distribution-specific details, not repeating the entire packaging process.
Plus, I really, really, really hate the idea of having to trawl the net and download each and every piece of software I want to use and then afterward have to click each and every piece after I’ve downloaded them and then having to answer the same stupid questions over and over again. Click -> next -> next -> next times 1,000,000.
Repository-based and (graphic) package-manager accessible software is superior to any installshield solution.
I agree wholeheartedly. Remember that this is a separate issue from Autopackage; there is no reason why we couldn’t have one distribution-neutral Autopackage repository. This would also make it easier for new software to be published, particularly so if the developer is new to Linux.
It would work if distributors collaborated with the Autopackage developers.
What kind of collaboration would you like to see?
Autopackage didn’t make themselves a lot of friends when they decided to treat all distributions as “broken” and to deliberately risk breaking the system.
Bad first impressions take a long time to overcome
What kind of collaboration would you like to see?
Cherry flavored, please.
Seriously, collaboration on Autopackage would involve the two kinds of developers coming together to create a feature set that is flexible enough to allow software to be packaged once for all distributions. I thought this meaning was rather obvious given the rest of my post.
Perhaps you just wanted a hook into the conversation?
Autopackage didn’t make themselves a lot of friends when they decided to treat all distributions as “broken” and to deliberately risk breaking the system.
From the viewpoint of someone who develops solutions for problems with existing systems, those existing systems are going to appear broken, so people who come up with new ideas tend to irritate those using the old ones. If what you say is correct, then people just need to get their egos out of the way so that they can start improving distribution methods.
I have no idea what you mean by “to deliberately risk breaking the system.”
Cherry flavored, please.
Hehe
I thought this meaning was rather obvious given the rest of my post.
Yes, what I meant is which kind of work/contribution would be considered collaboration.
Since autopackage is designed to work on different distributions, I have no idea how a distribution developer could help.
Did you mean installing the autopackage package by default?
From the viewpoint of someone who develops solutions for problems with existing systems, those existing systems are going to appear broken
I was referring to the /usr/local problem. Seems some weird distribution didn’t have it in their PATH setup.
Not sure which one, but obviously all the ones that do have it were treated likewise.
I have no idea what you mean by “to deliberately risk breaking the system.”
Installing into the directories controlled by the package manager (/usr) without notifying it about changes.
/usr/local or /opt are there for a reason.
There are all sorts of excuses for that, but it is still a bad default.
Distribution developers spend their time to ensure stability for their users and of course anyone who just walks over them isn’t going to be liked a lot.
Even if autopackage changes this in a later version, which I think is going to happen, their bad initial choice will still be in the heads of a lot of people.
hoorah
As much as I see a need for better standards in linux distributions, something I feel unnecessary is the push for a single package format (that is, RPM).
Why can’t applications be distributed as binary tarballs, which can be extracted to a directory and run? “Installing” would simply consist of copying the directory to /usr/local, and uninstalling would simply involve deleting it.
RPMs, DEBs and other dependency-tracking package managers are great for managing the software which is supported and provided by the distribution, but I don’t see the point of having third-party applications provided in these formats.
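As a rough sketch, the whole “installer” for such third-party apps could be little more than this (assuming the tarball unpacks into a single top-level directory named after the app; a real tool would at least verify the archive first, and writing to /usr/local needs root):

    import shutil
    import tarfile
    from pathlib import Path

    PREFIX = Path("/usr/local")

    def install(tarball_path):
        """Unpack a (trusted) application tarball into its own directory under /usr/local."""
        with tarfile.open(tarball_path) as tar:
            top = Path(tar.getnames()[0]).parts[0]   # e.g. "fooapp-1.0"
            tar.extractall(PREFIX)
        return PREFIX / top

    def uninstall(app_dir_name):
        """Uninstalling is just deleting the application's directory."""
        shutil.rmtree(PREFIX / app_dir_name)

    # install("/tmp/fooapp-1.0.tar.gz")
    # uninstall("fooapp-1.0")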
If we followed the same reasoning with our automobiles, we would all be driving generic no-name four-door Ford Tauruses. If I couldn’t tell the difference between a Ford and a Toyota, I would lose satisfaction in car ownership. I love my PT Cruiser but my neighbor likes his minivan just as much. And Billy, down the street, despises his Accord. Who’s right?
Linspire and Xandros are already creating their own ‘standard desktops’. I don’t particularly like them, but they do provide a ‘standard’ for their customers. They are providing the structure and security some clients need. More power to them.
If Linspire had 40% of market share, we wouldn’t be discussing desktop standards, we would be full of suggestions for a disparate approach.
Frustration with package management is real, but let Red Hat take care of Red Hat and let Xandros take care of Xandros. We are free to contribute to both and others, but gosh we need variety. The differences foster innovation, but standards foster complacency. The day that Slackware and Fedora look exactly the same is the day I will find something else.
If we followed the same reasoning with our automobiles, we would all be driving generic no-name four-door Ford Tauruses.
Which in a sense, we do. All cars have the exact same layout, they work the exact same way, they have the gas pedal in the same place, the gear stick (automatic is for women, semi-automatic for playboys) in the same place, the doorknobs, the whole nine yards (except for TVR which insists on placing everything exactly there where you don’t expect them, have fun trying to open the door on a TVR ).
Your comparison is more akin to having a different theme on your desktop than to having different toolkits and package managers.
I agree with Thom. And I’ll also add, yes, competition is good. But can we have competition between OSes please? There’d be nothing wrong with having Linux, Syllable, Haiku etc. all battling it out to make the best desktop OS, cross-pollinating ideas and bringing new innovations.
But 600 Linux distros, with an incalculable number of frustrating little quirks and differences, making it hard to move software or configuration skills from one machine to another, doesn’t help anyone.
Which in a sense, we do. All cars have the exact same layout, they work the exact same way, they have the gas pedal in the same place, the gear stick (automatic is for women, semi-automatic for playboys) in the same place, the doorknobs, the whole nine yards (except for TVR which insists on placing everything exactly there where you don’t expect them
You’re talking about physical objects there. In the software world standards are effectively setting the laws of physics, in many ways, skewed to the advantage of a handful of people and companies.
Software is a whole different kettle of fish.
Qt and GTK will be in the LSB desktop specification. No problem here. See the LSB desktop wiki.
I didn’t find anything on the wiki, but I found this:
http://www.linuxbase.org/futures/candidates/Qt/
Where it shows that the status of Qt is Blocked.
Where are you looking?
This page:
http://www.linuxbase.org/futures/candidates/Qt/
indicates that Qt is blocked.
http://www.linuxbase.org/LSBWiki/DesktopWG
“LibQt libraries: LSB WG and FSG are considering the licensing criteria update proposal. Pending that ratification and trolltech schedule alignment, libQt will be standardized as a part of LSB desktop first release.”
It says that it is being considered, not that it will be part of it; to do that they need to rework the license criteria in the LSB, and so far nothing has changed.
The Desktop working group has already accepted the reasoning, and new licence criteria have already been drafted:
http://www.linuxbase.org/LSBWiki/LicenseCriteria
There is currently a lot of work around Qt, and it would be very unfair not to include it after that.
[4] “Almost no strings attached.” On occasion, licenses with reasonable restrictions may be considered, e.g., a number of ABIs in the LSB have conforming component implementations licensed under the LGPL. The LGPL has requirements of which you must be aware if you intend to ship a proprietary work. In particular, you cannot ship your work with an End-User License Agreement (EULA) that forbids reverse engineering. The LSB intends to make development easier, but you must still invest effort to ensure you understand the licenses of the components you select.
Will this mean I will be able to use Qt without the annoying GPL issue for my customers?
So the “End Work” will be GPL-free?
Dear Anonymous (IP: 201.145.141.—):
You are so confused I don’t even know how to start. This point has nothing to do with GPL.
It points out that even an LGPL-licensed library needs to be carefully evaluated by the legal department before it is used. It is not a “no strings attached” license: you have to allow reverse engineering. Some companies may prefer to pay for a developer licence from Trolltech and keep the reverse engineering restriction rather than allow it, especially if they are writing a simple Linux front end.
you have to allow reverse engineering.
Is that true?
I didn’t know the LGPL forced you to allow reverse engineering?
Can someone confirm this or put a link?
Yes, it is true. You have to allow reverse engineering if you use the LGPL. See section 6 of the LGPL license. And you have to allow the user to modify the binary too:
http://www.fsf.org/licensing/licenses/lgpl.html
If you don’t want to allow reverse engineering, you can’t use GTK.
It doesn’t say that you have to allow reverse engineering; it keeps it as an option.
No. The licensor has to allow modifications of the binary and reverse engineering:
“6. As an exception to the Sections above, you may also combine or link a “work that uses the Library” with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer’s own use and reverse engineering for debugging such modifications.”
Basically it says that you can use any license, provided that you allow for “modification of the work for the customer’s own use and reverse engineering for debugging such modifications.”
See the GNOME wikipedia article, it says the same thing:
http://en.wikipedia.org/wiki/GNOME
(look at the origins part and the footnote)
you may also combine or link a “work that uses the Library” with the Library to produce a work
In other words, it is an option (“you may”) to use a separate library, so your project can be reverse-engineering free.
In other words, it is an option (“you may”) to use a separate library, so your project can be reverse-engineering free.
Wrong: if you link to the library, you have to allow reverse engineering. If you don’t link, then you are free to use any license. See section 5.
Wrong: if you link to the library, you have to allow reverse engineering.
I did; the meaning is the same, it is still optional.
There is currently a lot of work around Qt, and it would be very unfair not to include it after that.
I see it as more unfair to force software vendors to pay Trolltech for licenses or force them to use the GPL.
I see it as more unfair to force software vendors to pay Trolltech for licenses or force them to use the GPL
True, that would be pretty unfair.
Fortunately nobody is forcing them to use Qt if they don’t want to.
I see it as more unfair to force software vendors to pay Trolltech for licenses or force them to use the GPL.
How is anyone forcing anything? You can still use GTK. It seems to me that it is the other way around: you want to force Qt out.
Not at all, but how can these two projects interact without forcing you to use a GPL-compatible license?
Is there a way?
but how can these two projects interact
Interaction between applications usually involves either working on the same data (e.g. files) or communicating using some kind of IPC (inter-process communication).
Both are usually not bound to a fixed implementation.
For example two applications communicating by D-BUS could use different implementations of the D-BUS stack.
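To make that concrete, here is a minimal sketch using the dbus-python bindings to call the freedesktop.org notification service; whether the caller is built with GTK, Qt or nothing at all makes no difference to the call (the service name and method come from the desktop notifications spec, not anything the LSB mandates):

    import dbus

    # Any application, whatever toolkit it was built with, can talk to a
    # desktop service over D-BUS as long as it speaks the same interface.
    bus = dbus.SessionBus()
    proxy = bus.get_object("org.freedesktop.Notifications",
                           "/org/freedesktop/Notifications")
    notifications = dbus.Interface(proxy, "org.freedesktop.Notifications")

    # Notify(app_name, replaces_id, app_icon, summary, body, actions, hints, expire_timeout_ms)
    notifications.Notify("lsb-demo", 0, "", "Hello over D-BUS",
                         "Toolkit-independent IPC in a few lines.", [], {}, 5000)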
Both are usually not bound to a fixed implementation.
In other words: you can use any toolkit you want, Qt, GTK, Java, whatever.
But you don’t explain how a GTK+ based project with an LGPL license can interact with GPL Qt libraries. Is there a way? If not, I don’t see much reason for the LSB to exist; it won’t change anything.
But you don’t explain how a GTK+ based project with an LGPL license can interact with GPL Qt libraries. Is there a way?
What do you mean by interact? Give me a concrete example?
I want my project to be LSB compliant, with an LGPL license. I don’t want to pay Trolltech, but I need what the LSB offers me, so if the LSB offers Qt and GTK it means I can use both. But if I use a GPL Qt library, that would force me to change my project’s license to GPL or to buy a Trolltech license. That means a division within the LSB between those who can pay for a license and those who can’t, and in that case I don’t see a reason for the LSB to exist.
Because one reason for the LSB to exist is so that projects can interact with each other in a simpler way, but with that there is division, which means that not all projects will be able to interact with each other, so I don’t see a reason for the LSB to exist.
Because one reason for the LSB to exist is so that projects can interact with each other in a simpler way
The main reason for LSB is to make deployment easier, i.e. that software vendors can rely on certain ABIs being present.
As I wrote earlier, interaction is usually toolkit/library independent, unless some specification can only be implemented in one way, which is not very likely.
The main reason for LSB is to make deployment easier
Exactly, and if I want to deploy my work and it needs to interact with a bunch of GPL Qt libraries, that wouldn’t make it easier; it would force you to change your license or to buy a license from Trolltech. So, no easier deployment after all, and in that case the LSB shouldn’t exist because it doesn’t fix anything.
if I want to deploy my work and it needs to interact with a bunch of GPL Qt libraries, that wouldn’t make it easier
Why would you need to interact (could you describe what you mean with that) with a library?
You either use a library or you don’t. As nobody is going to force you to use a Qt library, why would you?
Why would you need to interact (could you describe what you mean by that) with a library?
That’s my problem. It is a use case that you can be sure will show up, and I need a solution for it. If the LSB allows this restriction, many won’t be interested in it, and it shouldn’t exist because it doesn’t fix anything.
The LSB is not meant to fix licensing confusion. That’s a job for your lawyer.
It fixes a number of issues with Linux as an ISV platform and thus DOES ITS JOB.
If you want a clean-room, “GPL-free libs” solution, develop your own distro and advertise it as such.
The LSB is about ease of deployment. Including Qt does not make deployment (or development) of GTK software any harder, it makes the deployment of Qt software easier.
If you think that the LSB is a “pressure group” to create a free-as-in-beer platform for development, excluding any GPL software, you are wrong. BTW, the LGPL has its own restrictions too, so it is not so simple anyway. It is not true that the LGPL is a “no strings attached” license.
That is why Qt is going to be included.
But you don’t explain how a GTK+ based project with an LGPL license can interact with GPL Qt libraries
Why would a GTK+ based project want to interact (whatever this means in this context) with a library it doesn’t need?
Or in other words: which features of the Qt library would it possibly like to use?
Or in other words: which features of the Qt library would it possibly like to use?
Could be any of them; all the cases must be studied, and you can be sure the need for interaction will arise.
Could be any of them; all the cases must be studied
I think all features available in the Qt library collection are also available in the GTK+ library collection; that’s why they are counted as alternatives for each other.
This is not true: there are some differences in the features. They are generally similar, but not totally.
Then why include both?
Get rid of one and use the other.
Then why include both?
To let Qt using software vendors get easy deployment as well.
To let Qt using software vendors get easy deployment as well.
And to force me to use the GPL or to buy a license from TrollTech, so where’s my easier deployment?
And to force me to use the GPL or to buy a license from TrollTech, so where’s my easier deployment?
Come on, everybody knows that you’re not forced to use Qt, because there are alternatives.
Unless the definition of alternative changed, it means that they do not depend on each other and using one does not require using the other.
Come on, everybody knows that you’re not forced to use Qt, because there are alternatives
Sure, but projects can’t live completely isolated. Sometimes there will be a need for interaction, and the problem is there, and I want a solution.
“Sure, but projects can’t live completely isolated. Sometimes there will be a need for interaction, and the problem is there, and I want a solution.”
Are you on crack or mushrooms, maybe?
sometimes there will be a need for interaction and the problem is there
Care to specify what kind of problem?
Unless you don’t like details, since you haven’t yet clarified what you mean by interaction; obviously it is neither operating on the same data nor communicating with another application.
The problem is interacting with the DE itself. If an LGPL project needs something that is in the DE but the DE is GPL, then my project can’t interact with its libraries, so I’ll need a DE that is LGPL, and that wouldn’t make it easier. Now if you tell me that the DE can have the LGPL libraries too, I don’t see the need to have both.
If there is an LSB for GPL and an LSB for LGPL, then I don’t see a future for it. I find the LSB attractive because there are no strings attached.
What’s the point of the LSB if you can’t use it completely? And if to use it completely you will be forced to GPL your code or to buy a TrollTech license? There’s simply no logic in that, because the main idea of easier deployment won’t exist anymore.
The problem is interacting with the DE itself
Ah, interaction with a desktop environment.
I was already wondering how one would interact with a library.
As far as I understand, interacting with a DE is not significantly different from interacting with a simple application, i.e. using IPC like D-BUS.
An application which wants to leverage the LSB for easier deployment will very likely not depend on DE libraries, as they are not part of the LSB set.
While the DE libraries are usually nice enhancements that make the developer’s life easier, they are not required IIRC.
What’s the point of the LSB if you can’t use it completely?
Not sure why you wouldn’t be able to use it, whatever completely means in this context.
And if to use it completely you will be forced to GPL your code or to buy a TrollTech license?
No, there is no such clause in the LSB.
I.e. being LSB compliant doesn’t imply that you have to link against all the libraries it has; you can still link only to the ones you need.
It would be pretty stupid to require linking all libraries; application startup times would be horrible.
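(Again just a sketch of my own, nothing the LSB mandates: a trivial GTK-only program. It links against nothing but the GTK+/GLib stack, so whatever license Qt ships under never enters the picture for this particular binary.)

    /* hello_gtk.c -- illustration: link only against what you actually use.
     * Build (assuming the GTK+ 2.x development files are installed):
     *   gcc hello_gtk.c $(pkg-config --cflags --libs gtk+-2.0)
     * Running ldd on the result shows the GTK+/GLib libraries and their
     * dependencies -- and no Qt anywhere.
     */
    #include <gtk/gtk.h>

    int main(int argc, char *argv[])
    {
        GtkWidget *win;

        gtk_init(&argc, &argv);

        win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(win), "LGPL-only, thanks");

        /* Quit the main loop when the window is closed. */
        g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(win);
        gtk_main();
        return 0;
    }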
If there is an LSB for GPL and an LSB for LGPL, then I don’t see a future for it. I find the LSB attractive because there are no strings attached.
You’re misunderstanding. There would be one LSB, which would have LGPL options for everything, plus some optional alternatives with other licenses. Why is this so useful? Because it allows the programmer a choice. If they don’t want to use any GPL code, then they don’t have to. But if they want to use libraries that have more features and don’t care about the license, then they are allowed to do that as well. Your way forces people to care about the license even if they don’t want to. Allowing both would let people who care about the license use what they want, and everyone else could use what they want.
Software installation under Linux is fiendishly messy, and projects like Autopackage could solve it — with you STILL having your repositories if you wanted them.
That’s nice, but who’s going to create the Autopackage files? As a developer, it’s not my problem. I’m only concerned with getting the source tarball out. What you want to do with it beyond that is your problem.
You want the distro to make the Autopackage file? Why would they do that when they already have a system set up that works? Besides, you wouldn’t get it any faster than the already established means that distro has developed.
Autopackage solves nothing. It just makes devs do more work that takes time away from the real work they want to get done. If you want bleeding edge, use Gentoo.
Well it’s a nice idea but if the views here are representative it has about as much chance as a snowball in the Sahara. To paraphrase a Red Hat spokesman, “They’ll be crying ‘Choice’ all the way to the bargain bin at Comp USA.”
Another angle to consider is that eventually one of the larger distros will take matters into their own hands, announce the standards they will follow and in effect fork Linux (fully so, if they decided to tackle the kernel as well). Some folks might put up their hands in horror, but if the distro managed to get just a couple of big ticket items on board – open office, Adobe ports perhaps, apache or whatever – they would have every chance of doing well. Apple have already done something not that dissimilar, after all, and no one complains because Mac OS doesn’t major on KDE or apt-get.
In many ways this might be better than the present morass, which is slowly getting worse as Linux gets more complex. The prospect of an “OS Lite” from Sun/Google could be quite tempting as a way of avoiding the f/oss standards swamp. “Different standards” is basically just a euphemism for “incompatible”.
They could solve a good deal of the current situation by moving to a single desktop. Supporting Gnome and KDE might offer choice, but the downside is huge.
But not as huge as the upside of being able to choose a kit that works best for what you want to do.
Making one desktop the “standard” is a bad idea. Creating common standards by which the different desktops / toolkits can interact with each other is what needs to be done. (Actually it’s been something that has been happening for quite some time now. Perhaps you’ve heard of freedesktop.org?)
The trolls are already working on the details. The only doubt now is whether Qt3, Qt4 or both will be included.
The LSB should not pick favorites. It is a standard to make the deployment of Linux apps easy. And there are plenty of Qt/KDE Linux apps, commercial and free. Nobody is forced to use Qt: GTK will be there too.
Therefore, the inclusion of Qt is logical. And competition is good: the desktop battle will be fought on the merits and qualities of the desktops, not by trying to exclude the opponent, making both desktops better in the process.