PolishLinux has an interview with the KPackageKit developers. PackageKit is an abstraction layer over the different Linux package management tools. It is primarily designed to unify the graphical tools and provide a consistent, distribution-neutral framework for application developers to install add-ons as well. The project was initiated, and continues to be maintained, by Red Hat developer Richard Hughes, who also wrote the initial GNOME frontend to it, called gpk-application. Multiple backends already exist, and it is the default for Fedora and Foresight Linux. Other distributions, including Ubuntu, OpenSUSE, Mandriva, and Gentoo, are actively participating in the development of different backends. A KDE interface has been under rapid development recently and made its 1.0 release just last week. This interview provides more details.
They want to present a common interface to the user, no matter what distro they happen to be sitting in front of?
While commendable, I don’t think this fixes the real issue: being able to put one file or link up on some webpage and expect it to install across distros.
A common dependency notation would help a bit, but when distros feel like overruling the original developers on how something should be packaged (IIRC, Debian slices KDE into 1001 small packages when the original setup is 5-6 larger ones around specific feature sets), and apply distro-specific patches rather than wait for them to be processed upstream, it can quite easily turn into a never-ending mess.
Exactly. Just as I can have an ODF file on my computer and open it with any word processor that supports the OpenDocument standard, so too do we need a package format standardized to the point where you could download a single package and use it across multiple distros and package managers.
I don’t know why people are opposed to this idea. On one hand, whenever there’s a major release of some big package, I see comments on here from people asking why there are no packages available for distro X yet. But on the other hand, many of these same people will claim that different distros reinventing the wheel and duplicating work by creating the same packages is not a problem.
Looks like they’ve gone from the idea of trying to create a package manager that works across all distros to an abstraction layer. Well, at least they’re getting warmer.
On one hand I see your point, but on the other hand, I’d love being able to download and install Krita by itself, like I do the Gimp. On Windows the KDE installer only lets you pick KOffice as a whole.
Yes, it can be both a problem and a benefit.
But I am of the opinion that if the KDE people want to, they can provide for this splitting themselves, rather than having someone at the distro level overrule their decision.
But then there is a counterexample, X.org, where a “split” into sub-projects has made updating drivers and introducing new features outside of full releases easier; it also puts a bigger workload on the distros to make sure all the pieces fit together.
So it’s not one-size-fits-all, but I firmly believe that the decision on how the code should be packaged rests with its maintainers, not the distros.
Having distros overrule the individual projects just muddies the water even more, as one can’t say for sure where problems originate.
But then I guess one should always report to the distro one uses and let them handle it. Then again, that kinda puts one at the mercy of one’s distro of choice…
According to what the KDE developers themselves have said for years, your opinion is wrong. They strongly recommend that the packages be split up for end users. They also go to great lengths to ensure the build system supports this, and maintain strict module separation to make this easier for the distributors.
The KDE project only provides source packages, and it’s more efficient to provide the source packaged as it is today, with the splitting done by the build system. The recommendation for the distributions is to split up the packages, but the distributions are free to use whatever strategy they feel supports their users best.
And that Krita and the rest of the KOffice packages on Windows are not split is, by all accounts, simply a case of it not being done yet. Not surprising, since KOffice 2 is still in beta and the Windows distribution team is very small; they have tasks they deem more important.
While a nice idea in an ideal world, I don’t know how realistic it is.
There are dozens of very different GNU/Linux distributions with very different goals. Some have little in common with other distributions except that they use some, possibly heavily modified, Linux kernel as a base. As Linux is free and open, the very heterogeneous GNU/Linux OS landscape isn’t going to change.
Secondly, I think most people would agree that making the system stable, and making packages work well without constant system and application crashes and other problems, is more important than reaching some ideal state where the same package can be installed regardless of distro. Debian stable achieves the stability goal admirably, because they control everything that goes into it, from package management to individual packages. On the other hand, there have been some distro experiments that tried to support many package standards simultaneously; usually they haven’t been very successful or stable.
As for OS X- or Windows-style software management in the Linux world, nothing prevents distribution designers from building such a Linux distribution too, with their own kind of software management. But that will be just another Linux distribution, and most other distros aren’t likely to follow suit. For example, Debian and Ubuntu package management has so many advantages that it is very doubtful their users would want those distros to change to something like OS X has.
Hi
there is a PackageKit plugin which allows one to do exactly that. The specification allows the developer to specify different package names for the different distros (e.g. libfoo on distro A, foo on distro B…). It’s not widely used yet, though… A rough sketch of the idea is below.
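For illustration only, picture an INI-style file along these lines (the syntax here is my sketch, not the real spec, and the names are invented) that maps one logical dependency to per-distro package names:

[example-dependency]
distro-a = libfoo
distro-b = foo

The front end then asks the native back end to resolve whichever name matches the running distro.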
Cheers
Adrien
Ah, now it gets interesting.
One way to solve that is to simply depend on libs… which most package managers have the ability to do today, so that’s not really a big deal…
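As a concrete illustration: RPM, for one, records automatic soname dependencies at build time, so a dependency can name the library itself rather than a distro-specific package. Something like this lists them (the package name is just an example):

rpm -q --requires firefox | grep libgtk

On an RPM-based system where the app links against GTK, that shows soname entries such as libgtk-x11-2.0.so.0.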
I’d like to see PackageKit come up with something like OneClickInstall for itself, which I believe is in the process of being done.
This way, you can have PackageKit itself talk directly with the underlying package management suites, and sites can still post single links to download a package.
Let’s face it: the package management infrastructure is often the primary differentiator between distros; take that away, and there is little left to separate them. Maybe PackageKit is the common ground here, making it so the average user simply doesn’t need to care.
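PackageKit’s console client already hints at what that looks like: one command, whatever the back end (the package name here is just an example):

pkcon install gimp    # resolved through yum on Fedora, apt on Ubuntu, and so on

The user-visible interface stays the same; only the back end differs.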
PackageKit is an interesting idea, but not entirely new. The clean separation of concerns, with a ‘neutral’ daemon, D-Bus communication, and multiple back ends, all sounds good. But in the end you must still solve the issues always encountered by package management GUIs, and deal with the problems that always arise when trying to gloss over the differences between superficially similar back ends.
I don’t doubt that you can make it do something useful, but I would prefer a complete GUI for e.g. apt which is easy to use, easy to understand, and exposes all or almost all of the power of the command-line utility. A GUI which implements the lowest common denominator of package management won’t make me happy. (Don’t try to tell me PackageKit somehow solves this problem; it’s no better than previous attempts in this area.)
While I’m talking, is anyone else annoyed with the look-at-me-I’m-like-Apple way in which certain recent efforts have adopted the *Kit naming convention? As if choosing this as a name somehow gives your application the usability pluses of OS X. It’s strange to see.
Apart from PackageKit, which recent projects…?
DeviceKit, PolicyKit…
I thought *-Kit came from BeOS’s API names
Actually Apple doesn’t do this, they are on a Core* kick. CoreAudio, CoreAnimation, etc, etc…
I believe BeOS was the *Kit people.
OSX shows that an OS can be easy to install an app onto and still be secure and “*nix”.
Why can’t Linux implement Apple’s way of installing?
Most 3rd party apps are that way. They are just an installer or script, or even tarball, that you unpack somewhere, or that goes into /opt and that’s it. No scattering of files across the file system or anything like that.
What is needed is not to get rid of existing package managers, but have two types of package managers. One type is for the base system and the other is for 3rd party stuff. The latter could be standard and would basically manage installing into /opt or something like that, with a nice GUI front-end, so software can be easily installed from websites, CDs, etc.
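A hypothetical layout for such a third-party install might look like this (the app name is invented for illustration):

/opt/exampleapp/
    bin/exampleapp     (launcher)
    lib/               (bundled private libraries)
    share/             (data, icons, docs)

Everything lives under one directory, so removing the app is just deleting that directory.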
The existing package managers should remain because they solve the problem of modularity in the Linux base system and they do it quite well. It’s just that they shouldn’t be used for 3rd party software. Neither should they be thrown away because they fail with that one case.
I’m tempted to drag out GoboLinux again, especially the rootless part.
But then it’s more or less a new take on the old GNU Stow app, which put everything in a subdirectory under /usr/local (or wherever one set it) and then symlinked things into the FHS.
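For reference, the Stow workflow looks something like this (package name and version are examples, assuming the app was installed with its prefix set to /usr/local/stow/foo-1.0):

cd /usr/local/stow
stow foo-1.0      # symlink its bin/, lib/, share/ into /usr/local
stow -D foo-1.0   # remove the symlinks again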
But another problem is deciding where the line between distro and third party goes.
Is it the kernel and coreutils, X.org, or some desktop or other?
Still, if there were a system where one could say “these are binaries, these are libs, and these are misc files,” and then let the distro’s own tools slot them into the file system wherever the distro wanted, one would be at least one step closer.
But a lot of paths are set at compile time and can’t be changed afterwards…
Because…
root@dcom1:~# ldd /{bin,sbin}/* /usr/{bin,sbin}/* | grep libgtk-x11-2.0.so.0 | wc -l
78
Maintaining 78 copies of GTK is not only insanely unnecessary bloat but insecure as hell. And I don’t even run GNOME. I don’t even want to know what that number would be on a standard Red Hat install.
Next question?
Bingo. I recall when Microsoft announced they had found an issue with one of their core libs.
Their “fix”? A program to scan for the problematic lib, as it was something that nearly all programs shipped either compiled in or alongside the binary.
Using that command on my system gives 166!
Ouch…
Why exactly would you install 78 copies of GTK? GTK is clearly a system component. It would be part of the system installation and applications could depend on it being installed.
How is GTK a “standard” system component? The only standard component of a Linux system is the Linux kernel. You cannot depend on any specific library always being there. For a system like the one OSX uses, every package distributed that way has to be built statically against its dependencies. OSX has a clear dividing line between the base system and third-party packages. Linux does not.
That’s up to the distribution, many of which follow the LSB & fd.o standards.
This does neatly highlight one big problem with Linux distributions as a whole, though: there is no one to codify or enforce standards of any kind, and there is always someone who thinks they know better and refuses to follow what standards there are.
That’s why desktop Linux distros should have a common standard including GTK+ (and Qt, and all that stuff), each version of which should be supported for a reasonably long time (say, 3 years). They need not be based off that standard, just support it. And there should be a distro-neutral package standard for 3rd party software that every distro would understand – again, without the need for the distro to be actually built around it. This could be some kind of binary tarball with additional metadata convertible to the distro’s native package system using alien (so that after first install, the same app could then be installed directly from say apt package archives). Of course, the packages should be relocatable, which can be ensured using the binreloc tool from Autopackage.
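As a concrete example of the conversion half of that, alien already translates between the existing formats (the file names here are examples):

alien --to-deb exampleapp-1.0.rpm   # convert an RPM into a .deb
alien --to-rpm exampleapp_1.0.deb   # or the other way around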
Fortunately this is exactly what the LSB is about.
Its interface deprecation policy is even longer than three years:
“…interface deprecation policy does provide us with a mechanism for removing interfaces, but only after the interfaces have been marked “deprecated” for at least three major versions, or roughly six years…”
http://ldn.linuxfoundation.org/lsb/roadmap
One can use LSB RPMs (not to be mistaken for some distribution’s RPMs) and there is some work done on a kind of packaging API for installers: http://www.linux-archive.org/debian-dpkg/110818-lsb-package-api.htm…
Yeah, I know about LSB, but it has been largely irrelevant to date and is widely considered a failure (BTW, this very perception can be an obstacle in getting things done). I suspect the reasons were mostly organizational. Maybe a new enthusiastic and dedicated team under a new name with a narrower focus (specifically the desktop and desktop integration) working closely with major distros could change that.
Also, LSB specifically uses RPM and is therefore not perceived as “neutral” by distro makers, which unfortunately has undesired political consequences. I think that a neutral tarball convertible to the native package format wouldn’t cause as much controversy.
Finally, I’m not sure whether LSB plans to provide fixed library versions as a base – without that, it’s difficult for an ISV to target the standard, since we cannot be sure that library developers won’t break anything even with minor version changes.
True, the LSB isn’t adopted as widely as it should be, but right now most distributors support it and the main problem is ISVs ignoring it.
They would rather complain about the differences than use the common ground, probably out of misunderstanding or lack of information, but probably sometimes also intentionally.
Anyway, the problem is no longer that there is no stable base system of common libraries; the problem is that it is not being used by those who supposedly need it.
Right, quite unfortunate.
The package format and the package manager having the same name didn’t help either. It often triggers the misconception that an LSB-compliant distribution needs to have RPM (the package manager), while in fact it can use any means it wants to get the data out of an LSB RPM (the package format).
Yes, probably. The Berlin packaging API tries to decouple the distribution format from the installation data, most likely because of unfortunate political issues like the ones mentioned above.
It specifies all available symbols of each library so that there won’t be any build or runtime-link problems.
A bugfix release could potentially change behavior, but then such a change would also be a bug, and that can happen, and has happened, on pretty much any other platform as well (e.g. the famous “service pack” regressions).
Well, probably not on a server system or a non-user-interface embedded Linux, but most certainly on a desktop Linux, because GTK+ and Qt are part of the LSB Desktop module and most of the common distributions are actually LSB-certified.
Well, it does, but it is a lot more convenient to ignore this and keep claiming it doesn’t. Otherwise one would need to find new excuses to hide one’s personal unwillingness or inability to support Linux as a target platform.
That command, I believe, does not print out how many times a certain library is installed, but rather, what shared libraries all programs in the given directories use. In your case, you have 78 programs that link to libgtk-x11-2.0.so.0, not 78 copies of libgtk-x11-2.0.so.0 floating around your system. Examine what the output is before you run it through wc, and you’ll see that all references point to the same file.
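For example, a pipeline like this (same paths as the command above) shows how many distinct files those 78 references actually resolve to:

ldd /{bin,sbin}/* /usr/{bin,sbin}/* 2>/dev/null | grep libgtk-x11-2.0.so.0 | awk '{print $3}' | sort -u

On a typical system it prints a single path: one shared copy of the library.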
That’s exactly the point. I have 78 binaries that use GTK. If I were on a system with OSX-style packaging, I would have 78 instances of GTK compiled into the various apps I use that require GTK. As it is, Linux uses shared libs, so I have exactly one.
It would also mean the possibility of multiple versions of GTK being installed, depending on what version each particular app was built against. It would be a nightmare to keep up with security updates on a system like that.
Then there is the problem of a particular library getting loaded multiple times, once for every app that uses it. If it’s statically compiled, or dynamic yet shipped with the binary, then I have Firefox loading its copy of GTK and Sylpheed loading its copy and so on. Freaking insanity. It wouldn’t take long for people to get real tired of 10GB Linux installs that need 8GB of RAM to run.
No, you wouldn’t, because the app wouldn’t bundle GTK libs; they are standard on any LSB Desktop Linux.
Whatever. I’m not explaining it for the umpteenth time just because some people can’t wrap their heads around the fact that a Linux system without GNOME is still a Linux system. There is nothing standard about GTK, Qt, or half of the hundreds of libraries installed on any one Linux box.
I know this is going to blow your mind, but until just two days ago I had a fully functioning desktop and GTK was nowhere to be found on my system. It only came along because I decided to use Midori and AbiWord.
The world isn’t black and white. A massive core library should be installed separately for just that reason, and dynamically linked. Small non-core libraries should be statically linked, simply because the extra 1k of memory you may be saving is not worth making software deployment a total nightmare.
> Why can’t Linux implement Apple’s way of installing?
I think the responses in this thread show it very nicely: Because the OSX way requires certain libraries – most importantly those used by many applications – to be installed as a “core system” such that only one copy exists for them, while the remaining libraries are duplicated in each application that uses them. Obviously, as a system advances and more and more applications use a specific library, it would be incorporated into the core system.
This requires the ability to draw a line between “core system” and “applications”. Apple has managed to do this. Linux, so far, hasn’t.
And the reason why GNU/Linux systems do not draw such a line is that while Apple is a single entity, there are literally hundreds of Linux vendors. This in turn comes down to the philosophy of having choice.
Strictly defining a “core system” more or less means killing off the concept of distros, which a lot of people would not be very happy about. Choosing “one true distro” over all others isn’t really possible at this point either.
Having one common way to install things would be a big help to people who write manuals or answer questions at help desks. Not to mention that it makes one’s knowledge of how to install things portable from one distro to the next.
Once we have a common user interface for installing things, there is not much point in having different install file formats either. Right now, developers of third-party commercial software need to learn .deb, .rpm, and so on to support all the distros out there. If they only needed to learn one packaging method, more software would be ported to Linux.
Finally, if there were a standard for package naming, at least for the packages that make up the LSB, then developing for Linux would suddenly be a lot more profitable.
I started reading the linked text, until I came to the shot with “The transaction has completed. A system reboot is required. /link/Restart Computer Now/link/”. Now that’s what I call progress. I don’t usually care what yet another installer-and-package-manager GUI thing is doing or how, but when I see such shots and such behavior, I couldn’t care less.
“No, no one from this team helped us. – But we didn’t contact them either.” – Ahem. Ok.
And this is due to PackageKit how?
If you look at the previous screenshots, the kernel was among the updates installed, so a restart was definitely required for the update to take effect.
The kernel can’t be updated while it’s running (nor could you update X or KDE from within an X/KDE session); the best that can be done for these packages is to set up the updated files to be used the next time. So the reboot notification you’re complaining about is actually a good feature.
IIRC, Ubuntu has this same feature.
kexec
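(For the unfamiliar: kexec boots straight into a newly installed kernel without going through the firmware. A sketch, with example kernel/initrd paths:)

kexec -l /boot/vmlinuz-new --initrd=/boot/initrd-new.img --append="$(cat /proc/cmdline)"
kexec -e    # jump into the loaded kernel, skipping the BIOS/firmware reboot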
And an X.org or KDE update is often just a logout away, as long as one does not start any extra apps while the update is ongoing…
If we can afford tons of package managers, then why stop at just one abstraction layer? Don’t we have enough man-hours? Choice is good, remember? Let’s code a whole bunch of those things!
It’s not like I’m really calling for that, but let’s face it: there’s no way to come up with a single solution in the free software world. Any attempt to try to build an abstraction layer is doomed, because another one will be quick to pop up, spoiling the whole idea of having a single interface.
They work just fine when developers want them to; there are multitudes of examples.
Wouldn’t it be a cool idea if everything were distributed in tarballs, and used autotools to install everything?
It’s such a novel idea, somebody should look into it!
autotools doesn’t resolve dependencies or automatically download and install them. And frankly, given the huge amount of work that autotools already does, I don’t think bolting something like that onto the side would be a good idea.
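To illustrate where the standard flow stops (a generic sketch; the tarball name is an example):

tar xzf foo-1.0.tar.gz && cd foo-1.0
./configure          # detects missing dependencies and aborts; it does not fetch them
make && sudo make install

Resolving and downloading the dependencies is exactly the part left to the package manager.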
Hmm, Gobo-style recipes for dependencies, bundled with the primary code? Now that’s a thought…
And this is again a fine example of how the Linux community keeps insisting time and again on creating technical solutions as a workaround to a problem that is essentially political in nature.
That kind of thing is not limited to the Linux community. We are, perhaps, more transparent about it because we are much more transparent about everything than the proprietary world. But rest assured, this kind of thing is the norm, not the exception.
Indeed. How many tools exist that make some form of installer for Windows software? I recall seeing at least five different ones over the years…
All the people who complain about there being too much choice in Linux need to take a look at a site like download.com and see the tens of thousands of notepad clones and FTP clients, etc. Linux is far more consolidated than people make it out to be. I think that’s because when they were on Windows, they had their set of apps that they used, and that was that. Then they come to Linux, have to figure out what to use as replacements, see all the options, and say “Linux is disorganized and duplicates effort.” They never had to take that look when they were on Windows.
There’s only one Windows vendor, and one platform that remains stable and predictable for a reasonable period of time.
If you are tempted to reply with the number of Windows Vista editions… well, they share the same core platform. You don’t get that with Linux.
Because every so often the technical solution to a political problem actually gets adopted. You’ve got to keep re-introducing the solution, slightly modified, until it becomes palatable to the people who rejected it on political grounds. This is a lot like voting down a bill again and again until it appears with just the right sponsors and amendments to no longer be quite so offensive to those who dislike it.
This has happened before in the Linux and free software world, and it will happen again. We start with the position: No one can agree how to do it. Eventually someone proposes an answer which ‘everyone’ can agree on, but only a third of the people like the solution. So we get three different solutions everyone can agree on, fiercely defended by their camps. Eventually someone proposes an abstraction layer so everyone can keep disagreeing but still work together. It’s rejected because it favors one solution over another, usually by accident. This process repeats until an abstraction is introduced which everyone can actually live with, or a new solution that everyone really does agree on is invented, or the dictator of the project makes an arbitrary decision.
Most of the time we’re stuck in between the discovery of the problem and the final state.
To me, this is just another package manager. It’s the big idea: why not create one package manager that everybody will use, and be done with all the different ways of doing the same thing? The big problem is that every time you have this idea, you create another package manager, and there is now one more way to do the exact same thing. In other words, it is not such a great idea, but an addition to the problem.
Actually, RPM is supposed to be the LSB format, universal across distros, but guess what? Some distros decided not to install it and not to follow the LSB at all. Your new package manager or API or whatever makes no sense unless every single distro under the sun has it, and that ain’t gonna happen.
In my opinion, the right way to go is the alien way.
By the way, it is not a Linux problem. Linux doesn’t handle or care about packages at all; Linux is a kernel. It’s not Mandriva’s fault that Windows uses MSI, that OSX uses a strange package format, and that Debian uses deb files. Linux is not an OS. Mandriva is consistent with Mandriva, Debian is consistent with Debian, and Red Hat is consistent with Red Hat. There is no political problem here. If you see a political problem, that’s because you don’t understand the difference between a distro and Linux. Mandriva or Debian is not Linux; Linux is the kernel of Mandriva and Debian.
PackageKit is not a package manager at all. It still requires the underlying package manager to be able to find the given packages.
Sites and the like will still need to provide RPMs and DEBs and whatever else… PackageKit will just make things easier for the user, and will provide a way to install codecs and plugins seamlessly across distros, helping application developers.
Most distributions introduced since the LSB adopted RPM have used RPM. The distributions which “decided” not to use it natively decided this long before there was an LSB: Debian and Slackware. Their derivatives naturally retain that decision. How many distributions begun from scratch since the introduction of the LSB don’t use RPM? Answer that.
The RPM format itself is not much of a problem, though I hear of some complaints about it. There are some issues involved in the RPM database and the rpm(1) toolset. A distribution should, properly, control these things and cannot be beholden to the control of some competing vendor which does not care about the same set of improvements.
What PackageKit proposes is a lot like the Windows Installer system. It’s not mandating the MSI format, or RPM, or DEB. It’s mandating an API: a mechanism for installing, updating, and removing software. How it works on a technical level is left entirely up to the distributions. You, or rather Microsoft, could implement a back end for Windows which plugs into the Windows Update subsystem and uses the same API.
The problem is that when abstracting something away, it is too easy to gloss over important details. If you build it with RPM in mind, which is what happened, then perhaps you make it work beautifully for RPM, but when adding an APT back end you cannot quite represent all of APT, because it has behaviors you did not anticipate. It’s an easy trap to fall into.
I can instantly think of quite a few, actually: Puppy, GoboLinux, DSL, and several other specialized and general distros that appeared after the LSB but decided not to use it.
Anyway, your post makes sense. I wish the project good luck, and I hope a lot of distros will use it.