There are currently at least five popular ways of installing software on GNU/Linux, and none of them is accepted across all of the popular distributions. This situation is not a problem for experienced users – they can make decisions for themselves. However, for a newcomer to the GNU/Linux world, installing new software is always pretty confusing. This article tries to sum up some of the recent efforts to fix the problem and examines the possible future of packaging software in GNU/Linux.
The user goes to the package manager of their distribution to install any program…
The real problem is being able to install a package built for distribution X on distribution Y.
According to the Microsoft Defense Brigade we must all stop using Linux and sell our souls to Billy Gate. There is no escape. In a year or two Windows Vista will rule our skulls.
Could you be any more off-topic, or troll-like?
We have had this conversation before, but I’ll say it again: I’d rather use something like the Synaptic package manager than go looking around the net for software. What is the problem?
Besides, a lot of the software you need is installed anyway on most distros, so you don’t need to go outside the package manager.
“We have had this conversation before, but I’ll say it again: I’d rather use something like the Synaptic package manager than go looking around the net for software. What is the problem?”
It’s because most novice Linux users have migrated from Windows and think using Google is the way to install software. Some of them have trouble accepting the ease of having a built-in solution from their OS distribution that does the job for them.
“Besides, a lot of the software you need is installed anyway on most distros, so you don’t need to go outside the package manager.”
The usual home user should be satisfied with the programs that come along with KDE. In fact, KDE comes with more applications that you won’t use than ones you will. If that’s not enough, there are apt, yum, YaST, etc. For example, even PC-BSD has a “dull mode” packaging system (called PBI) – no, it’s not dull! In fact, it’s great if you’re willing to accept using more disk space in order to completely avoid entering “dependency hell”.
As is true for most free software, it will be packaged sooner or later for the respective Linux distribution. So there’s no need for the average user to “./configure && make install”; they just need to know where the icon of their package manager is in order to manage installed software.
1) Synaptic is nice when you know what you need, but it doesn’t really shine for search.
2) You cannot see the screenshots, reviews etc.
3) Not all software is, or can be, in the repositories. Even when it is, it may not be the version that a user needs.
What if the software is not available for my distro? The amount of software available for RHEL (Red Hat Enterprise) (or any RHEL-based distro) is ridiculously low. What if one needs RHEL (or a compatible distro) for work but also needs desktop apps not packaged for RHEL?
Or what if one needs a new version of a desktop app (say Inkscape, Epiphany, Rhythmbox)? Having to upgrade the entire operating system just to install new versions of a few desktop apps is extremely inconvenient for the end user.
daemonologist,
Re: “What if the software is not available for my distro?”
This is a real concern. Having various packaging methods that are not supported by all Linux distributions can make it difficult for third parties to supply binary software, drivers, etc. For example, in the post-production film/television industry the Linux standard has always been RPM (i.e. RHEL, SLED). The reason is that Red Hat Linux was first used by studios on standard PC systems to lower costs compared to SGI IRIX workstations. That change led commercial software developers like Alias (bought by Autodesk), Discreet (also bought by Autodesk), Softimage, etc. to port their software to Linux, compiled for RPM installation. These companies, whose products such studios rely on, do not support or offer binary packages of their commercial software for Debian-based distributions such as Ubuntu and Linspire.
Re: “The amount of software available for RHEL (RedHat Enterprise) (or any of RHEL based distros) is ridiculously low.”
Can you clarify what software you need that is available on a Debian based distribution that is not available for RHEL/Fedora Linux users?
These companies, whose products such studios rely on, do not support or offer binary packages of their commercial software for Debian-based distributions such as Ubuntu and Linspire.
http://kitenet.net/~joey/code/alien.html
Yes, this can be dangerous for system software, and should not be used for that, but for apps it often works quite well.
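For illustration, converting a vendor RPM into a .deb with alien is roughly this (the package name is just a placeholder, and again, this is for application packages rather than system software):

sudo alien --to-deb someapp-1.0.i386.rpm   # produces a someapp_*.deb in the current directory
sudo dpkg -i someapp_*_i386.deb            # then install it like any other Debian package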
BTW, the reason these big commercial vendors will only offer a few versions of their apps is not because it’s hard to package for multiple distros, but because they don’t want to *support* multiple distros. Also, they often have commercial agreements (i.e. partnerships) with RedHat, so the choice of distro is also a strategic decision.
The way I see it, the problem exists on basically two levels: the packaging format and the run-time environment, i.e. libraries (the more difficult of the two to solve).
The run-time environment problem can’t be solved even by compiling software from source. Here is an example: I just tried to compile the latest Inkscape (vector graphics app) from source on my RHEL 4 (actually Scientific Linux CERN) machine. The ./configure script stops with an error:
checking for INKSCAPE… Requested ‘gtk+-2.0 >= 2.8.0’ but version of GTK+ is 2.4.13
configure: error: Package requirements (gdkmm-2.4 glibmm-2.4 gtkmm-2.4 gtk+-2.0 >= 2.8.0 libxml-2.0 >= 2.6.11 libxslt >= 1.0.15 cairo sigc++-2.0 >= 2.0.11 gthread-2.0 >= 2.0 libpng >= 1.2) were not met:
The new software apparently does not work with RHEL’s “obsolete” libraries. This basically means that I would have to install a new GTK+ (and its dependencies!) from source as well. I actually did this some time ago and it sort of worked, but the new GTK+ broke some existing RHEL apps.
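You can see the mismatch directly with pkg-config, which is what ./configure queries under the hood (the version shown is simply what RHEL 4 happens to ship):

pkg-config --modversion gtk+-2.0                                  # prints 2.4.13 here
pkg-config --atleast-version=2.8.0 gtk+-2.0 || echo "GTK+ too old for this Inkscape"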
There is lots of software not available for RHEL (note that I’m NOT talking about Fedora here, there are many more software packages for Fedora than for RHEL). Just some of them are listed below:
– LaTeX beamer (presentation tool, had to install it by hand, but works)
– Auctex (emacs extension)
– new version of Rhythmbox (the version provided with RHEL crashes all the time)
– Inkscape
– OpenInventor (either SGI or Coin3d implementation, Fedora 3 RPMs seem to work)
– Evince
– KPDF (the one for KDE >= 3.4, not the KDE 3.3 version)
– Desktop search tools
What I would like to have is this:
1. More software
2. More up-to-date software (e.g. when new version of some program becomes available I would like to upgrade this particular tool to its latest version)
3. No need to upgrade the entire operating system just to install new end user apps.
Someone sees an announcement on a news page that version x.2 of FireFox just was released and they want to install it.
On OS X you click on the link to the new version and either you get a disk image or an archive of the app. Double-clicking either unpacks the app or mounts the disk image; both ways the app is ready to run. The user can drag the app anywhere they want, and OS X automatically takes note of file types and other such app-related issues. Prefs go right in the common user pref directory. Deleting the app involves dragging it to the trash can. Most apps have all their resources tidily packed in the .app bundle folder, so when the app is moved or deleted nothing is left behind.
Windows usually has an installer/uninstaller for apps. Corruption of the registry can lead to problems, and apps need to be re-installed if you re-install your system. Not as flexible and tidy as OS X’s bundles, but you also don’t have to worry about apps leaving extra files around on the hard drive.
Linux…
You have to know about adding repository URLs to your package management utility.
You have to wait around for someone to manually add new apps to the database.
You can be faced with your current repositories not having the app you are interested in.
Other than that packages are the way it’s always been done with Linux, what practical benefit does an average desktop user get from downloading a common user app like Firefox from a repository and having it wrapped in one of the many different Linux package types?
Since Firefox can run on Fedora, Ubuntu, Gentoo, etc., the differences between the systems can’t be that drastic. Is it just that each of these distros can’t agree on where to place various app and config files? And if these differences are important enough to keep that you require a package format to handle them, what benefit are Linux users getting from this disparity?
You can download Firefox and run it from its own directory in Linux if need be. The benefit of waiting for it to appear in your package manager is knowing it’s gone through another layer of testing.
Yep, as well as drastically reducing the chances of running malware on your machine, there’s the fact that it will install with no immediate setup required (no desktop icon nonsense), the fact that you can supply it as an argument on the command line along with 500 other programs that will all work the same way, the fact that you’ll be able to trace which files on your system were installed by which particular program, the fact that you can uninstall 1000 applications cleanly at once, the fact that you can update all your software and the system at once, the fact that you can keep a single shared library updated with ease and save on resources… I really could go on.
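To illustrate a few of those points on a Debian-based system (the package names are only examples):

sudo apt-get install firefox inkscape rhythmbox   # install many apps in one command
dpkg -S /usr/bin/inkscape                         # trace which package owns a given file
dpkg -L inkscape                                  # list every file a package installed
sudo apt-get remove inkscape rhythmbox            # clean uninstall of several apps at once
sudo apt-get upgrade                              # update all software and the system together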
This is all very nice for an engineer who manages a bunch of servers, but definitely not for Granny who wants to use instant messaging and Epiphany.
Because Granny will want to go out and upgrade her Epiphany before the next version hits the repo in a few weeks or months? I don’t think so.
My point was that Granny doesn’t know how to use CNR or Synaptic, even if it’s damn easy for whoever developed it.
This is to you, (and the other guy).
Look, if you think CNR or Synaptic are harder than finding apps on the web, I really don’t know what to say. There is surely nothing I can say to convince you.
But in my humble opinion, package managers are perhaps the very best thing about Linux, a good package manager (hello pacman & apt-get) is just an incredible thing.
Anyway, I’m not really into advocating for the most part; I’ll let people do what they want, and I don’t much care about Linux adoption, so I’m not going to try any harder to convince anyone.
I don’t care about Linux adoption either, because I dislike its community and the system itself, but from my experience, the elderly have no problem opening msn.com, clicking the “Download MSN Messenger” link and installing MSN Messenger. Now, installing software on Ubuntu is another story. It’s because it’s different, and people don’t bother learning how to do it. That’s life.
I don’t care about Linux adoption either because I dislike its community and the system itself
No need to remind us that you have an anti-Linux bias, it comes through in your posts.
the elderly have no problem opening msn.com, clicking the “Download MSN Messenger” link and installing MSN Messenger.
I guess they’ll have no problems with CNR, then. See, you should be happy that your concerns are being addressed, instead of whining about the ease of use of a system you don’t even use.
Granny will not be hunting the web for bleeding edge software updates. The tested and tightly integrated distribution packages provide a far more convenient solution for this kind of user. These are not a weakness but a strength of Open Source systems.
For “power users” (those on the border between just wanting to use their computer as a tool and having some development skills), custom installers or autopackages should do the job just fine if they aren’t already using bleeding-edge or source-based distributions. Of course, most small projects don’t provide these yet, and this is where I believe that autopackage or similar systems can be most beneficial.
Granny will not be hunting the web for bleeding edge software updates. The tested and tightly integrated distribution packages provide a far more convenient solution for this kind of user.
OK, to tell you the truth, on Windows systems there is an “Update” icon in the system tray that tells the user there are system updates ready to install. Most non-technical users don’t even notice or care about it, and even if the computer downloads system updates in the background, they aren’t installed, so the system isn’t updated.
Alright. In Linux, there is an “Update” icon in the upper-right corner that turns red when the system has new system/software updates. We click this icon because we know what it’s for, and because we care about it. But I’m sure Granny wouldn’t notice or care about it. The situation wouldn’t change just because it’s Linux.
So, no, the tested and tightly integrated distribution packages do not provide a far more convenient solution for this kind of user.
snip —>”
Granny will not be hunting the web for bleeding edge software updates. The tested and tightly integrated distribution packages provide a far more convenient solution for this kind of user.
OK, to tell you the truth, on Windows systems there is an “Update” icon in the system tray that tells the user there are system updates ready to install. Most non-technical users don’t even notice or care about it, and even if the computer downloads system updates in the background, they aren’t installed, so the system isn’t updated.
Alright. In Linux, there is an “Update” icon in the upper-right corner that turns red when the system has new system/software updates. We click this icon because we know what it’s for, and because we care about it. But I’m sure Granny wouldn’t notice or care about it. The situation wouldn’t change just because it’s Linux.
So, no, the tested and tightly integrated distribution packages do not provide a far more convenient solution for this kind of user.
” <— snip
What kind of logic is that? Of course it’s more convenient! Just because someone doesn’t click the “Updates Available” type of icon doesn’t make it less convenient. It’s also a perfect example of when to set the “Apply Updates Automatically” option in the update manager of the OS of your choice.
Granny will not be hunting the web for bleeding edge software updates.
Yes! Very well said. The real problem is that Windows Power Users see themselves as regular users, and think regular users have the same desire for bleeding-edge versions of applications. Since Power Users are usually reluctant to learn new ways of doing things (having spent so much time getting comfortable with a particular system), they are the ones complaining the most about it – and then they claim that they speak for casual users as well…
Granny will not be hunting the web for bleeding edge software updates.
Unless Granny needs to perform some task that is only possible in a newer version of a program than the one in the repository, or needs software that is not, or never will be, in the repository.
Unless Granny needs to perform some task that is only possible in a newer version of a program than the one in the repository, or needs software that is not, or never will be, in the repository.
Then Granny will have to wait for the program to be available. If Granny was using Windows, she would also have to wait for the program to be packaged and distributed.
The difference with open-source is that development versions are available beforehand, while in the closed-source world they aren’t. This gives the impression that the open-source apps are not available in the repos fast enough, but the reality is that the actual *source* is available early, while in the closed-source world you have to wait for the program to be released.
If the software will never be available in the repository, then *tough luck*! Amarok is not available for Windows, last I knew…
If your distro doesn’t have enough packages quickly enough, then change distros.
It amazes me how many people here think that impatience in getting bleeding-edge applications is a legitimate feeling. I think it means there are too many people with too much time on their hands…
Then Granny will have to wait for the program to be available. If Granny was using Windows, she would also have to wait for the program to be packaged and distributed
Oh, come on. Packaging: make a single EXE installer for all versions of Windows, or one for Win 95–ME and another for Windows 2000/XP. Distribution: post a link to the installer on the vendor’s website. This way the new version becomes available instantly for all grannies in the world, and no granny is told to change distros (well, maybe it still IS a good idea if she has Win95 – but the majority uses XP now).
And another thing regarding open source. Let’s forget about grannies now and think about slightly more experienced users who are willing and able to test a new version of a program but can’t build from source. Take for example Sylpheed, an open-source cross-platform email client. On Linux, no distro I know of makes development versions of Sylpheed available via repositories. On Windows, it’s a matter of downloading the installer from the website. This way even not-very-experienced users can provide valuable feedback to the developer.
Oh come on. Packaging: make a single EXE installer for all versions of Windows or one for Win 95-ME and another for Windows 2000-XP. Distribution: post a link to the installer on vendor website. This way the new version becomes available instantly for all grannies in the world, and no granny is told to change the distro (well, maybe it still IS a good idea if she has Win95 – but the majority uses XP now).
You completely missed the point. It doesn’t matter what the OS is, the application still has to be packaged. As a user, you still have to wait for that before you can download and install it.
Making a Linux package doesn’t take any more time than it does to package a Windows app using an installer – in fact, it often takes *less* time. If the version of an app is available for Windows before Linux, that’s because the ISV actually took the trouble to package the Windows app, and not the Linux app.
Take for example Sylpheed, an open source cross platform email client. On Linux, no distro I know makes development versions of Sylpheed available via repositories. On Windows, it’s a matter of downloading the installer from the website.
That’s exactly my point: you’re faulting distros for not packaging it for Linux, but it was the ISV who packaged it for Windows, not the equivalent of the distro maker in the Windows world (Microsoft). The culprit here is *not* the distro makers, but the Sylpheed developers themselves.
BTW, there’s a reason why distros don’t make development versions available for current, “stable” distros: that would risk the integrity and stability of the system. Someone who wants bleeding-edge development versions should run an unstable distro, or compile it themselves.
I used to do that. I was a Windows Power User, and when I switched to Linux I ran Cooker, Mandriva’s (then Mandrake’s) unstable distro. At some point, I got tired of the constant instability, and then I realized that I *didn’t* need to run the “latest and greatest”… I just wanted a system that worked. And lo and behold, that is what I got when I switched to a stable distro.
Okay, okay, I sometimes download development stuff (such as Beryl)…what can I say, old habits die hard!
You completely missed the point. It doesn’t matter what the OS is, the application still has to be packaged.
No, it’s you who missed the point. With Windows, you package the application once and for all, and it becomes available in binary form right away. On the other hand, there is no such thing as a “Linux package”. The software has to be packaged for a gazillion distros (and VERSIONS of those distros). The developers rarely do it, because it is time-consuming compared to the Windows case, and to make packages for different distros you have to have them installed. So packaging is relegated to distro makers, hence the delays, packaging bugs, etc.
BTW, there’s a reason why distros don’t make development versions available for current, “stable” distros: that would risk the integrity and stability of the system. Someone who wants bleeding-edge development versions should run an unstable distro, or compile it themselves.
OK, so the user compiles the development version himself… So what? Does it magically become less dangerous to the integrity and stability of the system? You see, people want to help test stuff (wide-scale user testing has to be done anyway), and this means they are willing to sacrifice integrity and stability a little bit. FWIW, I’ve run many development versions of different apps, and not in a single case have I compromised the integrity of the system. Stability, yes (a question of killing the app or possibly rebooting), but not integrity. On the other hand, installing KDE on STABLE Ubuntu Dapper rendered my Linux installation unusable (I still don’t know what happened back then). Go figure.
No, it’s you who missed the point. With Windows, you package the application once and for all, and it becomes available in binary form right away.
You can do the exact same thing for Linux by packaging it with a standalone installer. *That* was my point.
On the other hand, there isn’t such a thing as a “Linux package”. The software has to be packaged for gazillion distros (and VERSIONS of the distros).
You might have missed this, but there is only one Windows “distro”, while there are many Linux distros. That’s the way it is with Linux, and it’s not going to change.
Linux is *different* from Windows. It is developed differently, and it evolves differently. Don’t expect it to replicate the Windows monoculture. However, it’s easy to distribute one’s app as a standalone installer that will install on *all* Linux distros.
So what? Does it magically become less dangerous to the integrity and stability of the system?
If you compile something yourself then, yes, it’s going to be less risky, because you’ve compiled it for the current libraries on your system. It doesn’t mean the app will be stable, but at least you’re not changing your system libraries.
You see, people want to help test stuff (a wide-scale user testing has to be done anyway), and this means they are willing to sacrifice integrity and stability a little bit.
Then you should a) ask the ISV to provide a standalone installer, b) compile the app yourself, or c) run a bleeding-edge distro, such as Cooker or Feisty Fawn before the feature freeze, so that the app will generally be at the very latest version available.
For most people who just want their system to work, bleeding-edge versions are just “not ready”. Windows users have a hard time understanding this, because for them, if a version is available, it usually means it’s ready. The fact that versions are available quicker (in source form) for Linux doesn’t mean that you should absolutely run them right away…
You can do the exact same thing for Linux by packaging it with a standalone installer.
My point is that there is no standard yet for a standalone installer. That is why the existing solutions like Autopackage are not very reliable – and they cannot be, given that the idea of being able to install software from outside the repositories sounds like a deadly sin to most distro makers, and they don’t want to cooperate. Luckily, some of them, like the founder of Debian, are starting to get it and are actually trying to do something about the situation.
However, it’s easy to distribute one’s app as a standalone installer
See above – it’s not. Also, read the archives of the autopackage-devel mailing list to see what kinds of problems they encountered. And somewhere there was an interview with someone from VMware who talked a bit about what a PITA working with dozens of distros is.
but at least you’re not changing your system libraries
I don’t change system libraries in any case. I compile something that is absent from my system, or I install a binary that works with my system. Compatibility is about interfaces: it is not necessary for the software to be compiled against the exact same version of the libs – it only needs to support the same interfaces. You can compile your app against the GTK+ 2.2 headers, and the binary will work perfectly on your GTK+ 2.10.
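A quick way to see this is to look at what a binary actually asks for – it records library sonames (interfaces), not exact versions. The binary below is just an example; the output will look something like this:

ldd /usr/bin/inkscape | grep gtk
#   libgtk-x11-2.0.so.0 => /usr/lib/libgtk-x11-2.0.so.0
# Any GTK+ 2.x providing that soname (and at least as new as the interfaces
# the program uses) satisfies it, whether it is 2.8 or 2.10.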
Then you should a) ask the ISV to provide a standalone installer b) compile the app yourself, or c) run a bleeding edge distro
Whereas with Windows, they don’t have to do any of that, the ISV gets a bigger user base and wider testing, and more users get the same open- or closed-source software more quickly. Everybody benefits! You see, it is all actually possible on Linux – it is NOT that different from Windows. It’s just about standards. Just as there are POSIX conventions that are supported by every Linux distro, so there can be a common packaging standard supported just as broadly. The nature of Linux, or of the way Linux distros are developed, does not in fact prevent it. The technical hurdles are surmountable – it is political will and cooperation that are needed.
My point is that there is no standard yet for a standalone installer.
That’s because you don’t need one. All you have to do is install in /opt with statically-linked libraries (or to the home folder for locally installed apps).
There is no “standard” for standalone installers in Windows, either – installing in Program Files is the norm, but you have programs that install in other locations too.
See above, it’s not. Also read the archives of the autopackage-devel mailing list to see what kind of problems they encountered.
Autopackage is not a standalone installer, it was conceived as a universal package manager. That’s quite different, and one of the reasons it failed.
Providing standalone installers that put the app in /opt with statically-linked libraries is quite easy.
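A minimal sketch of what such an installer has to do – the app name and paths are purely illustrative, and the tarball is assumed to unpack bin/, lib/ and share/ at its top level, with the bundled libraries linked in statically:

#!/bin/sh
set -e
PREFIX=/opt/myapp
mkdir -p "$PREFIX"
tar xzf myapp-1.0-linux-x86.tar.gz -C "$PREFIX"   # drop the self-contained app into /opt
ln -sf "$PREFIX/bin/myapp" /usr/local/bin/myapp   # expose the launcher on the PATH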
I don’t change system libraries in any case. I compile something that is absent on my system or I install a binary working with my system.
You missed my point: I was talking about installing bleeding-edge RPMs or debs as opposed to compiling the apps yourself. If you compile the apps yourself, you’re not mucking around with system libraries (though if the app *requires* a new library for some functionality, you may be screwed, unless you compile it with static libraries – something I’ve never done but imagine is possible).
Whereas with Windows, they don’t have to do anything of that
Yes, you do. The executable file you download from their site *is* a standalone installer. The ISV had to package it. The fact that they may choose to do one for Windows and not for Linux is *their* choice, and *they* are the ones you should complain about, not the distro makers, whose sole responsibility is to make sure all the software is stable and plays nice together.
Just as there are POSIX conventions that are supported by any Linux distro, so there can be a common packaging standard supported as broadly.
This is not going to happen. For starters, which one would you choose, debs or rpms? Proponents of each will not budge. And then, how will distros make sure that they all have the same versions of libraries and other dependencies installed at the same time? To do what you suggest, all distros would have to be in lockstep as far as version numbers of system software is concerned. And all this effort for what? Standalone apps can be installed on any distros with standalone installers already, so really it seems to me that this is a solution in search of a problem.
You should direct criticism where it’s due, i.e. ISVs who do not provide standalone installers for their applications. As for the rest, the package management system we have, which mostly works, will continue to evolve and become more user-friendly, but it will not change radically.
That’s because you don’t need one. All you have to do is install in /opt with statically-linked libraries (or to the home folder for locally installed apps).
Not quite. Static RPMs under /opt will not give you desktop integration – menu entries, MIME type registration, manpage installation, D-Bus config files. That is, they will install these files under that prefix, but the system won’t see them. You will have to mess with /usr anyway for everything to work. Again, read the autopackage archives. These problems are not specific to any packaging system – you will run into them even with a source install.
Autopackage is not a standalone installer, it was conceived as a universal package manager. That’s quite different, and one of the reasons it failed.
I can’t see how this matters in the light of the technical problems described above.
The fact that they may choose to do one for Windows and not for Linux is *their* choice
This choice is made for a reason. Because on Linux, it is a hassle which it needn’t be.
This is not going to happen. For starters, which one would you choose, debs or rpms?
This is the least of the problems. Firstly, it needn’t be one of those two. Secondly, alien is our friend – if two packages differ only in format, alien will convert them successfully. In the small percentage of cases where it won’t, it could be adapted to do so – if only there is the will to do the right thing.
And then, how will distros make sure that they all have the same versions of libraries and other dependencies installed at the same time
As I said earlier, this is not needed. You just code against specific interfaces, not versions.
Also, I’m afraid you may be thinking that I advocate a unified packaging API and format to be adopted as a base for all distros. This is not the case – the base system, be it RPM, DPKG or whatever, should stay intact. The standard is needed for installing third party software or software versions that are unlikely to be in a repository. There is just no common, reliable, and fully functional means to do that. Hopefully, with the packaging initiatives of Debian’s Ian Murdock and the help of the wider community, it will change.
Not quite. Static RPMs under /opt will not provide you desktop integration – menu entries, MIME type registration, manpage installation, dbus config files.
There are standards for menu entries and MIME type registration, as set forth by freedesktop.org. There’s nothing preventing standalone installers that use /opt to follow these standards.
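For example, a standalone installer (or the user) can drop a freedesktop.org .desktop file into the standard location and most desktops will pick it up. Everything below refers to a hypothetical app installed under /opt:

cat > myapp.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=MyApp
Exec=/opt/myapp/bin/myapp %f
Icon=/opt/myapp/share/icons/myapp.png
Categories=Graphics;
MimeType=image/x-myapp;
EOF
cp myapp.desktop ~/.local/share/applications/   # per-user; use /usr/share/applications for all users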
As far as manpage installation goes, that’s something used mostly for system or command-line apps… I don’t think that’s what we’re talking about here.
I don’t know enough about dbus to comment on the last point, but I imagine some standardization (along the lines of what was done by freedesktop.org) is possible.
That is, they will install these files under that prefix, but the system won’t see them.
It can easily add itself to the path by updating the /etc/profile file, or the .bashrc file if the app is installed locally.
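Something as small as this is enough (the path is illustrative):

echo 'export PATH="$PATH:/opt/myapp/bin"' >> ~/.bashrc   # for one user
# or append the same line to /etc/profile for all users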
This choice is made for a reason. Because on Linux, it is a hassle which it needn’t be.
I beg to differ. It is not done because a) most developers will let the distro makers package the app, and b) they know that technical users who will want to test the app can compile it themselves.
Note that we’re not talking about system software, here. The distinction is important.
Also, I’m afraid you may be thinking that I advocate a unified packaging API and format to be adopted as a base for all distros. This is not the case – the base system, be it RPM, DPKG or whatever, should stay intact.
Well, at least we agree on that.
The standard is needed for installing third party software or software versions that are unlikely to be in a repository. There is just no common, reliable, and fully functional means to do that.
Common, no. Reliable and functional, yes. The proof is that many commercial apps use standalone installers already. When I install CodeWeavers’ CrossOver Office, I don’t use the RPM (because there isn’t one for Ubuntu specifically), I use the graphical installer. It automatically appears in my menu structure, and file associations are also created without me needing to do anything. Of course, it installs static libraries as well, but that’s okay, I can live with that.
Let me make myself clear as well: I also think that standardization is good, and I like alternate installation methods. I thought Autopackage was good, but tried to do too much. I like Klik. I have nothing against standalone installers. However, I don’t think that any *radical* remodeling of how Linux software installation is handled by distro makers can succeed. Any improvements will have to happen in an evolutionary way, not a revolutionary one – if only because it’s impossible to get everyone in such a large and diverse community to all go in the same direction together.
There is the way things should be ideally, and then there is how things really are. Unless you want to spend a lot of energy to accomplish relatively little, you’re better off working with what is there instead of trying to reinvent the wheel.
Anyway, I don’t think we disagree that much…and all in all this is a healthy debate, which is part of the FOSS experience. I salute your reasonable, non-aggressive approach (which, as you may have noticed on this thread, is not the case of everyone…)
“as well as drastically reducing the chances of running malware on your machine”
OS X has no need to use packages or repositories and has no, or at least no greater, malware problem than Linux.
“the fact that it will install with no immediate setup required”
The vast majority of user apps on OS X run without any installation without packages or repositories.
“the fact you can uninstall 1000 applications cleanly at once”
Just like you can go to /Applications, Select All, and drag them to the trash?
“the fact you can keep a single shared library updated with ease and save on resources.”
Can you give a disk space/memory space figure that common Linux apps are saving vs the same apps or type of apps on OS X?
“I really could go on.”
Please do. You have yet to give any reason that adds value to the average user in going through the hassle of using application repositories or installation packages.
” OS X has no need to use packages or repositories and has no, or at least no greater, malware problem than Linux. ”
Malware could be included in your OS X app. Unless it is open source and somehow “certified” to be malware-free, like the packages in the distro repositories, you cannot be aware of what the app you are running is doing. It could be keylogging you while you type passwords, or exploiting your system. So it really comes down to you trusting the application you downloaded.
“Can you give a disk space/memory space figure that common Linux apps are saving vs the same apps or type of apps on OS X?”
I don’t know how OS X manages shared libraries, but if a couple of applications bundle some library with slightly different versions, I pretty much guarantee that both end up loaded in memory (I could be wrong, though).
Other than that packages are the way it’s always been done with Linux, what practical benefit does an average desktop user get from downloading a common user app like Firefox from a repository and having it wrapped in one of the many different Linux package types?
The benefit doesn’t come from downloading a common user app like Firefox; it comes from downloading, installing or upgrading, and configuring nearly any common app like Firefox, plus the uncommon ones, plus the core operating system, typically in only one or two steps. If you don’t want to upgrade everything, you have that option too.
People complain about package management in Linux, but I prefer any package manager to the Windows model of:
1. look up the app on a third party site to find out if it has spyware/adware/etc.
2. download from the distributor and click next a bunch of times to install
3. waste space with redundant copies of incompatible versions of libraries
4. rely on the app to notify you of updates and clean up after itself on uninstall
1. look up the app on a third party site to find out if it has spyware/adware/etc.
This is FUD, and it’s just not true. I have never downloaded spyware from Adobe or Skype web sites. To tell you the truth, I have never downloaded spyware/adware from any software vendor.
2. download from the distributor and click next a bunch of times to install
This is handy for setting your configuration (where to install, install for one or all users, start at system bootup, readme file, language support choice, etc.) – neat flexibility that Linux doesn’t give you during installation.
3. waste space with redundant copies of incompatible versions of libraries
This is hardly true – maybe a few KB, and that definitely doesn’t make any difference on today’s 500 GiB drives.
4. rely on the app to notify you of updates and clean up after itself on uninstall
Fine, but Linux applications don’t uninstall everything either, so…
This is FUD, and it’s just not true. I have never downloaded spyware from Adobe or Skype web sites. To tell you the truth, I have never downloaded spyware/adware from any software vendor.
Good for you, but the fact is lots of people do download spyware. Even experienced users like us who know which vendors to trust can benefit from only having to trust one.
This is handy to set your configurations (place to install, install for one or all users, start together with system bootup, readme file, language support choice, etc…). Neat flexibility that Linux doesn’t give during installation.
A better way to configure applications is to have most of the settings global through package management, then let the individual apps configure themselves the first time they’re run, on a per-user basis. Store documentation in a central location.
This is hardly true, maybe a few KBs, and this definitely doesn’t make any difference in today’s 500GiB drives.
I’ll give you that one. I should have complained about wasting RAM during runtime and bandwidth for updates.
Fine, but Linux applications don’t uninstall everything either, so…
True, but that’s a point in favor of package management. The package manager does know how to uninstall every single file, because it installed them all in the first place. It will leave per-user configuration behind, but that’s not a big problem, because it just means a few hidden files cluttering your home directory, as opposed to the Windows world where junk gets left in the registry.
Well, IMHO you guys are comparing apples to oranges. Centralised repositories of open-source packages are GREAT (more convenient, easier to automate, much safer…). The problem is what to do when you want a program and it’s NOT in the repos. If you are using a mainstream distro, chances are that there’s an unofficial repo you can add to your “sources.list”. This is not an ideal solution, since name clashes and similar problems are rather frequent. If your distro is not so mainstream, then you need an installer (not so frequently found), or you have to compile from source. Among other problems, these methods are more likely to wreck your system, and the apps are not integrated with the applications from the repos.
So, an ideal package management system should let you install programs on your own, keep track of which programs are from the repos and which ones are from outside, make them work with each other and with the DE, and uninstall them cleanly. And it has to manage shared libraries if it’s going to be used extensively.
The problem is what to do when you want a program and it’s NOT in the repos.
Solution: use Ubuntu. I’m an advanced user, and I often try rare/obscure stuff, and I’m amazed at how often there’s a package available for Ubuntu in the Universe/Multiverse repositories.
The reason this is a minor problem is quite simple: the apps a newbie/intermediate user will want and/or hear about will be in the repositories; the rare/uncommon apps an advanced user might need may require him to compile them – but that’s all right, because he’s an advanced user. Meanwhile, commercial software will be available as a standalone installer from the ISVs (or as packages, if they offer them).
Whenever this debate crops up (and it does so quite often), all we hear about are hypothetical cases… but in reality, these types of situations are quite rare.
I hope we can finally put this minor issue to rest with the advent of newbie-friendly front-ends such as CNR.
I’ll answer point-by-point.
“Solution: use Ubuntu. I’m an advanced user, and I often try rare/obscure stuff, and I’m amazed at how often there’s a package available for Ubuntu in the Universe/Multiverse repositories. ”
That’s a non-solution. Of course, if there were only one distro, there would be much less of a problem (I wouldn’t say “no problem”, because that depends on how good the “one distro” actually is). But this is not the case, it won’t be anytime soon, and it shouldn’t be. Package management is an unresolved issue, and that’s why there are so many widely different package management schemes. Sure, you can pick one of those systems, build a huge community around it, and brute-force the illusion that there’s no problem.
“the rare/uncommon apps an advanced user might need might require him to compile it – but that’s all right, because he’s an advanced user. ”
Have you ever wondered why one has to be an advanced user just to do something as conceptually simple as compiling a program? It should be something as simple as installing a package. Sometimes it (almost) is (configure, make, make install), but sometimes problems arise and you wreck your system.
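For what it’s worth, the safer variant of that ritual is to give ./configure a prefix in your home directory, so a failed or unwanted build can’t touch anything system-wide (the package name is just an example):

tar xzf someapp-1.2.tar.gz
cd someapp-1.2
./configure --prefix="$HOME/local"
make
make install          # no root needed with a home-directory prefix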
“Whenever this debate crops up (and it does so quite often), all we hear about are hypothetical case…but in reality, these types of situation are quite rare. ”
The only reason this works now is that there aren’t so many big, specialised programs for GNU/Linux that can’t be included in the main repos for whatever reason (distro policy, lack of resources, lack of packagers, …). This is not a stable or desirable situation. The expression “dying from success” comes to mind.
BTW, don’t tell me about Ubuntu, I’ve tried it at least twice. Yes, the main repos are huge, but unofficial repos are a chore (“library bar of package foo is trying to overwrite …, which is also in package foo” – will you tell me it never happened to you?). Now I’m trying Linux Mint, which is basically Ubuntu with all this repo hassle done for you, and a few customizations. But I’m afraid the name clashes will stay.
“I hope we can finally put this minor issue to rest with the advent of newbie-friendly front-ends such as CNR.”
Well, I don’t agree that it’s a minor issue, and I don’t think CNR will solve anything if it is, as you say, a front-end. If the underlying structure is wrong, a front-end only hides the problem. What I mean is that it only shifts the burden of the problem from users to maintainers, which is never a good thing, especially for GNU/Linux, where there’s no clear distinction (at least no brick wall) between them.
The only reason why this works now is that there aren’t so many big, specialised programs for GNU/Linux that can’t be included in the main repos for whatever reason
Which is why standalone installers exist.
Yes, the main repos are huge, but unofficial repos are a chore (“library bar of package foo is trying to overwrite… which is also in package foo”, will you tell me it never happened to you?)
Yes, it has happened to me a couple of times. That’s not due to a bad system, but to bad packagers who haven’t done their job right. Every time this has happened, I’ve waited a day or two, then tried again, and this time it worked.
Again, you guys are seeing this from the power user’s point of view. For people who want a system that “just works”, staying with the current distro set and being patient is the key. I don’t agree that a system is broken just because it won’t pander to the impatience of a few power users addicted to bleeding-edge software.
“Again, you guys are seeing this from the power user’s point of view. For people who want a system that “just works”, staying with the current distro set and being patient is the key. I don’t agree that a system is broken just because it won’t pander to the impatience of a few power users addicted to bleeding-edge software.”
I’m not saying the system is broken. It does work, if you restrict yourself to official repositories. This is a problem because it’s not very scalable. Sometimes you want a 3rd party app and you know it won’t be available in the official repos anytime soon, or not soon enough.
Unofficial repos can break your whole system, and that is just not acceptable. I disagree with the argument that goes “well, you wanted the app before it was ready, you accepted the risk. That’s a feature. If it breaks your system, tough luck.” Not! It’s one thing that the app is buggy and I have to uninstall it after a while; it’s a wholly different thing that I can’t uninstall it AT ALL because my system is hosed. Linux is very much about trying things before they are ready for production, so it should be much less risky and much more convenient to test new programs.
I’m a FOSS and GNU/Linux fan, and I don’t think Windows or Mac OS X manage this any better. In those systems, apps simply don’t depend on each other (usually), only on the base system. But someone had to develop that base system (MS or Apple engineers). In Linux, there isn’t much of a base system (for the moment, despite the LSB and similar efforts), because any library could be a key part of the system. I like this approach better, but it also requires a MUCH better-thought-out packaging system. I point this out because I think it’s possible and, to some extent, already implemented by some distros.
Sometimes you want a 3rd party app and you know it won’t be available in the official repos anytime soon, or not soon enough.
Yes, I understand this. My point is that, at least for non-system software, it is the ISV’s responsibility to make that app available to all with a standalone installer.
Unofficial repos can break your whole system, and this is just not acceptable.
Okay, I realized I’m not quite sure what you mean by unofficial repos. I thought you were talking about Universe/Multiverse for Ubuntu, but now it seems as if you’re talking about something else…
You can install third-party RPMs/DEBs. It’s actually really easy. When you download one with Firefox, it automatically launches the software installer and prompts for the root password (or your user password on Ubuntu). These packages generally don’t have any dependencies that aren’t already installed, so dependencies aren’t a problem, but at least on Ubuntu apt-get will take care of those anyway. I think newer Fedora may do the same thing.
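From the command line the same thing looks roughly like this (the file names are placeholders):

# Debian/Ubuntu:
sudo dpkg -i someapp_1.0_i386.deb
sudo apt-get -f install            # pulls in any missing dependencies afterwards
# RPM-based distros:
sudo rpm -Uvh someapp-1.0.i386.rpm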
They have dependencies, but those are compiled into the package. It’s a waste of disk space, but a nice way of releasing stuff you don’t trust the distros to package for you…
Someone sees an announcement on a news page that version x.2 of FireFox just was released and they want to install it.
what practical benefit does an average desktop user get from downloading a common user app like Firefox from a repository and having it wrapped in one of the many different Linux package types?
The benefit is that they don’t have to “(see) an announcement on a news page” for Firefox, or for however many other apps they may have installed. apt-get upgrade will install it for them, along with new versions of any other programs they may have. They don’t have to hunt around news pages for each bit of software; they can open up their package manager, see which apps have new versions, and choose which ones to install, all in one go.
If the software is available only from the app vendor’s own repository, adding the repo is a bit more work than double-clicking an exe (after finding and downloading it), but once the repo is added, it’s there. One doesn’t have to add it again for the next new version.
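Concretely, adding a vendor repository on a Debian-based system is a couple of commands – the URL and package name here are made up:

echo 'deb http://packages.example.com/apt stable main' | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install someapp       # later versions then arrive with normal upgrades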
It’s just a different way of doing things, with obvious advantages. People who really don’t like it have other options, like OS X and Windows. Maybe people should stick with those instead of pestering Linux distros to change their time-honed methods of installing and keeping up to date the many apps people use.
Someone sees an announcement on a news page that version x.2 of FireFox just was released and they want to install it.
Any kind of user who reads news sites that post about new Firefox releases will have no problem going to mozilla.com and grabbing the standalone tarball (by clicking on the big green “Download Firefox” button). Your DE will automatically open the tarball in the archive application. There’s a script in there conveniently called “firefox” that starts the browser. All of the settings and whatnot are stored in this folder, and removing the folder removes the app. It’s a quick and dirty solution – very much like how all software is installed on Mac OS X.
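Spelled out on the command line, the whole “quick and dirty” procedure is just this (the version number is simply the current one at the time of writing):

tar xzf firefox-2.0.0.1.tar.gz     # unpack the official tarball into ./firefox
./firefox/firefox &                # run the bundled launcher script
rm -rf ./firefox                   # removing the folder removes the app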
You have to know about adding repository urls to your package management utility.
Only when you’re updating to new releases of your Linux distribution, which is much easier (and cheaper) than doing so on MacOSX.
You have to wait around for someone to manually add new apps to the database.
On MacOSX you also have to wait for somebody to package the software for you. Except that on Linux there’s almost always a way to build it yourself if you know how, which is almost always impossible on MacOSX.
You can be faced with your current repositories not having the app you are interested in.
New Mac users often find themselves in the same situation–not being able to find the kinds of software they used to use on Windows. Old Mac users prefer to live in a world where you never need software that isn’t readily available for MacOSX, which is similar to the way old-school Linux users think. I don’t have a single package on my system that wasn’t installed through my package manager. Heck, most Linux distributions come with 90% of the software you’ll ever need right out of the box.
what practical benefit does an average desktop user get from downloading a common user app like FireFox from a repository and having it wrapped in one of the many different Linux package types?
Benefits include adding Firefox to the applications menu, placing the binary in a directory that’s already in your PATH, allowing all applications that use the Gecko renderer or XUL toolkit to use the same version, allowing all users on the system to run Firefox, placing the documentation in a known place where your DE’s help functionality can find it, enabling your distributor to specify defaults (including themes) that are consistent with the rest of your desktop, keeping you aware of available updates (which Firefox does on its own but most other apps don’t), and providing a single interface for installing any application. There might even be more benefits, but that’s enough for now.
Since Firefox can run on Fedora, Ubuntu, Gentoo, etc., the differences between the systems can’t be that drastic.
Firefox also runs on Windows and MacOSX, so they must not be drastically different either…
Is it just that each of these distros can’t agree on where to place various app and config files?
No, binaries go in /usr/bin on nearly every distribution (except for GoboLinux and some other weirdos) and config files go in ~/.appname for each user, or /etc for system-wide configs. It’s all very standardized from distro to distro.
The reason why each distribution maintains their own repositories is so that they can ensure that all of the packages work together seamlessly. The reason why they need to ensure this is because each distro includes earlier or later versions of packages depending on when they were released and includes (or excludes) different packages depending on the requirements of their userbase.
if these differences are important enough to keep that you require a package format to handle them, what benefit are Linux users getting from this disparity?
The differences don’t mandate different package formats, they mandate separate package repositories (as explained above). Since one distro’s repositories don’t work with another, it doesn’t matter which package format they choose. There are plenty of RPM distributions with different repositories that don’t work together. The RPMs are all in the same format, but one distro’s RPMs won’t work on another’s. Same with DEBs, although there’s slightly more unification on that side of the universe.
The ultimate benefit is that we aren’t locked into one distributor with one idea of how a Linux distribution should be managed, developed, tested, and released–as well as how free (libre) it should be, what kind of user it should target, what kinds of machines it should run on, what support options should be available, which desktop environment should be the default (if any), and other matters of taste and requirements. The benefit we get is choice, and along with it the high probability that at least one of the distros is right for you, possibly right out of the box. We also get progress, since small distros can try out new ideas that eventually make it into the big ones, and each distribution benefits from the unique insights of the others.
If we really had this thing nailed down–exactly how to build an operating system that everybody likes–then we would standardize. But we refuse to settle for merely “good enough” like some other OS vendors. We honestly want to get this right, and in order to do this we need to try different things and see what works (and what doesn’t). Sharing enables us to take advantage of the benefits of diversity without a large penalty in the overall productivity of the community as a whole.
The fact of the matter is that we aren’t seeing a proliferation of new major distributions. That era has passed. Instead, we are seeing the community come together behind a few projects, and distributions begin to derive from those projects. Just look at all of the consolidation that has happened around Ubuntu within the past 18 months. The Linux ecosystem is taking the form of a three-horse race, and that seems like a manageable number for proprietary ISVs to target. The community has proven that it can take care of packaging the OSS stuff itself.
I’m afraid, you’re generalizing a bit too much here.
“You have to know about adding repository urls to your package management utility.”
Nope, I don’t have to do any such thing. When the software becomes available, I will be able to get it simply by telling my package manager to install it. Could you be more specific as to which distro you’re referring to?
“You have to wait around for someone to manually add new apps to the database.”
Not really. All _I_ have to wait for is for the package to be marked as stable, which I don’t mind waiting for at all, since I’m not a big fan of running poorly tested software on any of my machines.
“You can be faced with your current repositories not having the app you are interested in.”
Once again, not really. If this piece of software is something I need, I can just compile it from the source, which is not as scary as some people sometimes think. Otherwise, like I said, I can wait. Or, I can compile it and help test it if I’d like to speed up the process.
“Other than that packages are the way it’s always been done with Linux, what practical benefit does an average desktop user get from downloading a common user app like Firefox from a repository and having it wrapped in one of the many different Linux package types?”
Say, tomorrow all Linux-based distro developers get together and decide that they’re going to adopt a Mac OS-like way of installing software on their distros — you know, the process you’ve described. No more packaging software fragmentation. What’s going to happen?
For the sake of the argument, let’s assume that “average user” is someone who just wants to install Linux and use it without ever having to worry about things like conditional compilation, optimizations, etc. What about the power users? What about those not-so-power users (myself included) who want to have a higher degree of control of what gets installed on their systems and what doesn’t? I mean, the system you’ve described sounds awesome, but I _personally_ would find it extremely limiting.
Allow me to illustrate. Say I want to install the Xine media player. So, using your system, how should Xine be built? Should it include support for DVD, MP3, ALSA, DirectFB, Ogg Vorbis? Who’s going to decide? What if I don’t have any .ogg files and I don’t plan to use ALSA, because I actually prefer the OSS compatibility layer in the kernel? Why should I have capabilities installed that I won’t ever use? Doesn’t it limit _my_ freedom to decide what goes on my computer and what doesn’t (beyond the packages needed to resolve the dependencies, that is)?
You may say, Well, dude, if you’re so picky, then download the source and compile it yourself. Sure, and if push came to shove, I could do that. No problem. And I wouldn’t be alone. Soon there would be hordes of anal nit-pickers like myself and power users compiling their own software from the source. But, you see, they would have to resolve dependencies by hand in this case, which is a bit of a pain in the butt, so before you know it, someone would come up with a set of scripts to automate it and — voila! — behold a new package manager. Back to packaging software fragmentation.
Personally, I believe that the general fragmentation present in today’s Linux-based systems (a multitude of package managers, desktop environments, media players, web browsers, etc.) exists because of two factors:
1). The very nature of open source: freedom to modify the source code. If someone capable of coding doesn’t like where a particular project is heading, (s)he is free to either fork it or write a clone.
2). Users. In other words, if users didn’t want to use GNOME, its development would have stopped a long time ago.
“Since the FireFox can run on Fedora, Ubuntu, Gentoo, etc the differences between the systems can’t be that drastic. Is just each of these distros can’t agree where to place various app and config files? And if these differences are important enough to keep that you require a package format to handle the differences what benefit are Linux users getting from this disparity?”
Methinks that the flawed assumption here is that “Linux users” are some kind of a homogeneous group. They aren’t. Maybe they can’t even be homogeneous.
As long as software developers have the “works for me” attitude, they will consider this a non-problem. In a previous thread I read that software installation isn’t a problem. Users are probably inventing a problem then? We could have said the same about package managers. Why should we offer a package manager if we can run a tar xvzpf firefox-2.0.0.1-x86.tar.gz after all? Following the same reasoning put forward by short-sighted developers, I suggest removing Synaptic from Ubuntu and CNR from Freespire. And you should thank the developers who are kind enough to offer a tar.gz precompiled package of your favorite applications! So installing software on Linux is not a problem, not even for Eric S. Raymond. Autopackage and Klik are trying to fix non-existing problems. Real men use plain .deb and .rpm and resolve their own dependencies themselves!
In a previous thread I read that software installation isn’t a problem. Users are probably inventing a problem then?
It’s not users who are inventing this problem, it’s usually Internet forum posters with an anti-Linux bias.
We could have said the same about package managers.
No, we couldn’t. Package managers *do* make it easy to manage installing/removing applications. All you need after that is a nice front-end (Add/Remove Programs, CNR) to make it newbie-friendly.
Your analogy is flawed, and therefore the rest of your argument is invalid.
Generally speaking the package manager/repository system works really well. I definitely like it.
The problem is really developers who design their software so that it works with the latest libraries but not with, say, three-to-five-year-old ones. So if you have an old “stable” distro and you need it for work, you can’t install the latest desktop/productivity apps on it. Where can you find modern desktop apps for RedHat Enterprise 4? Not only are there no binary packages, but building from source fails too (the libraries are too old)… Is it really a good idea to require users to upgrade the entire OS/distro just to get some new apps?
Fortunately some developers do recognise this problem. E.g. Firefox and OpenOffice.org developers deliver software that can actually be installed and used on older systems as well as new ones.
@archiesteel
You really are doing a disservice to the Linux community by simply invalidating Joe User’s comments. You said that it is forum posters with “anti-Linux bias” that are creating a problem with installing software on Linux. That’s simply not the case.
I was tired of Windows XP, and resolved that I would not be upgrading to Vista. I decided to give Linux another shot (after three other failed attempts) and installed Ubuntu. I was quite surprised by how much Linux has matured with this experience. Many things are very easy to configure now, installation is a breeze…in general, I’m really liking Linux. The exception is software installation.
Software installation is the ONLY real failing point I’ve still encountered in Linux. I know there have been at least ten things I’ve wanted to install that have required me to get the source and compile it (which was tough for me to do, and I’m fairly competent with computers, so to expect the average user to be able to do this is ridiculous). I can’t remember all of the times I’ve had to do a compile to install, but here are a few instances where what I wanted simply wasn’t in Synaptic or Automatix2:
– beta version of Audacity (needed this because my USB microphone was not working, even though it does in Windows)
– Avant-Window-Navigator. I saw this on Digg and wanted to check it out, but couldn’t find it in a search of Synaptic. I had to compile it from source.
There have been others as well. Don’t get me wrong, I really like the whole package manager thing; in many cases it works very well. However, what if the package manager is not an option (which, as I have shown, it sometimes isn’t for some people)? What is going to be easier for the end user in getting their software installed? They can either:
– Download the source, open it with an archive tool, extract it, drop to a command line, perform a make, and install it. (Linux method)
– Download a file, extract it, double-click and the program is running. (MacOS X method)
– Download a file, double-click it to start installation process, install, double-click an icon on the desktop. (Windows method)
I use four different operating systems almost daily (Windows and MacOS X at work, Linux and SkyOS at home). I can, with much confidence, say that Linux has the least accommodating installation process of all four systems. Even SkyOS, which is still in beta development, has two methods of installation:
1. Software store (similar to a repository process, a la CnR)
2. Download a .pkg file, right-click on it, choose “Install”, and then run the program from the SkyOS menu.
Again, the repository idea is fantastic, and often works. However, why does this fact seem to convince many Linux users that there does not need to be an easy and, more importantly, STANDARDIZED solution to software installation, should the user prefer to simply download the software outside of the repository? There is nothing stopping you from continuing to distribute and install from source, but there needs to be a middle ground between a source install and the repository, because it is a solution that people seriously need, despite what you may think.
You really are doing a disservice to the Linux community by simply invalidating Joe User’s comments. You said that it is forum posters with “anti-Linux bias” that are creating a problem with installing software on Linux. That’s simply not the case.
First of all, stating an opinion is not doing disservice to anyone. It is what I believe, and it is my right to say it. I will not be censored by accusations that this will somehow prevent anyone from making software installation in Linux better.
Second, there are standalone installers that ISVs can use to mimic what is available with Windows – so if a vendor doesn’t want to provide packages or an installer, it is *their* choice, and their responsibility.
That said, I take exception to the examples you gave. First, if you needed to install the beta version of Audacity, then it means that the software wasn’t *ready* yet…the fact that you were able to install it from source means that there are ways to install “not-ready-for-prime-time” software – that’s a feature! Distros have a responsibility to make sure that all software installed will play nice…what’s worse for the average user? Having to wait a little bit for the software to be available, or allowing the risk that libraries updated to work with the bleeding-edge program will break the system?
– Download the source, open it with an archive tool, extract it, drop to a command line, perform a make, and install it. (Linux method)
– Download a file, extract it, double-click and the program is running. (MacOS X method)
– Download a file, double-click it to start installation process, install, double-click an icon on the desktop.
That comparison is unfair. All of these programs start as source, in Windows and OSX as well as in Linux. If you want to compare, you have to compare how easy it is to install the program from source on all three systems. If the ISV packages an app for OSX and Windows but not Linux, then it’s their fault, not the system’s.
there needs to be a middle-ground between a source install, and the repository, because it is a solution that people seriously need, despite what you may think.
That solution exists: a standalone installer with statically-linked libraries. Again, if the ISV doesn’t provide it, it’s their fault.
That’s why I say that this is not a real problem: there are ways to make Linux software install in a way similar to Windows and OS X, however it’s not the distro maker’s responsibility. As is the case in the Windows and Mac world, that falls on the ISV’s shoulders.
If the ISV packages an app for OSX and Windows but not Linux, then it’s their fault, not the system’s.
Yes it is. Stop seeing only your own point! You know Linux doesn’t offer all that is required to use stuff like Autopackage. And Linux distros need to provide a *framework* to unify this sort of step-by-step software installation, a framework any ISV could rely on, instead of developing their own in the dark. If a Linux distro were ready for it, it would greatly help, and Autopackage wouldn’t have to do its initial install. But Linux distros refuse Autopackage and refuse to be Autopackage-ready because they don’t want to have a Windows-like package installer. That’s the problem, recognize it’s true.
That’s not due to a bad system, but to bad packagers who haven’t done their job right.
Ah, right. But the end user doesn’t care. He wants his application to install flawlessly like on Windows, or he says “screw Linux”. You don’t have such problems on Windows.
The problem is that Linux is not an operating system — each distribution is, and naturally there will always be incompatibilities. On the other hand, MacOS and Windows only have one flavour. Obviously it’s easier to support the latter.
Whose fault?
would you prefer to have only statically compiled programs? No dependency problems, but also no possibility of benefiting from library updates and (much) more disk usage
No one cares about your shared libraries. It’s not faster in the real world, just in your imagination. And it doesn’t save *much* more disk space. The savings aren’t worth the hassle.
Free OSes naturally choose the more elegant approach, as opposed to commercial OSes.
Is a missing dependency error message “elegant”? No comment. A frustrated Linux n00b is not a GoodThing(tm).
The failure to recognize that such tools already exist is a symptom of a grave misconception of how things work in the Linux world.
The once-promising solutions like Klik, ZeroInstall or Autopackage have been rejected by the übergeeks, so I don’t consider the problem solved, if we can ever agree there *is* a problem.
No, they don’t. Stop your FUD.
Yes they do. Do yourself a favor and download Ubuntu this week-end and try it at least once to see what I’m talking about: http://www.ubuntu.com/products/GetUbuntu/download
I have. What do you think of that?
I think you’re a liar, or at least that you’re referring to the time you were a Windows user, and you stopped using Windows when it was still the notorious Windows 98, which was indeed buggy and problematic when dealing with software installation. The problem was fixed with Windows XP.
Guess what: they all have installers.
If you call a shell script an “installer”, LOL!
Granny will be happy running a shell script; next she’s gonna ask for SVN access to tweak the Linux kernel
Your clear, long-standing anti-Linux bias doesn’t give you much credibility when it comes to discussing Linux’s future, I’m afraid.
I’m just reporting facts. Sorry about that.
Yes it is. Stop seeing only your own point!
Stop seeing only your own.
You know Linux doesn’t offer all that is required to use stuff like Autopackage.
That’s a nonsensical statement: if it didn’t offer it, then Autopackage wouldn’t exist in the first place.
And Linux distros need to provide a *framework* to unify this sort of step-by-step software installation, a framework any ISV could rely on, instead of developing their own in the dark.
Linux *does* offer the framework. It’s up to the ISVs to use it.
What are we talking about here, really: location of files and methods of launching the executables. That’s all there is to software installation, really! Libraries can be statically-linked, and there are standards (freedesktop.org) for stuff such as main menu entries and file associations.
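As a rough sketch of that freedesktop.org part (the application name, paths and field values below are invented for the example), a standalone app can register a menu entry on most modern distros with nothing more than a small text file:

    # hypothetical: register a menu entry for an app installed under /opt/exampleapp,
    # following the freedesktop.org desktop entry specification
    cat > ~/.local/share/applications/exampleapp.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Example App
    Exec=/opt/exampleapp/bin/exampleapp
    Icon=/opt/exampleapp/share/pixmaps/exampleapp.png
    Categories=Utility;
    EOF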
But Linux distros refuse Autopackage and refuse to be Autopackage-ready because they don’t want to have a Windows-like package installer. That’s the problem, recognize it’s true.
That’s nonsense. The reason distros don’t change to Autopackage is because the package system is *good enough*, and changing it would be a *huge* undertaking for very little benefit. Package managers are still the best choice for system software, including such things as desktop environments. Heck, even Windows uses a package manager (the .msi installer)! For standalone apps, standalone installers are an acceptable alternative, no matter the OS.
Ah, right. But the end user doesn’t care. He wants his application to install flawlessly like on Windows, or he says “screw Linux”.
But applications do install flawlessly on Linux, and they’re much easier to manage.
No one cares about your shared libraries. It’s not faster in the real world, just in your imagination. And it doesn’t save *much* more disk space. The savings aren’t worth the hassle.
I’m not the original poster you’re replying to in this particular instance, but I’ll reply anyway: *all* OSes use shared libraries to a degree. If you think this isn’t the case, then you know even less about computers than you seem to.
The issue is not disk space, it’s RAM and security. Multiple versions of the same library being loaded at the same time can end up taking more RAM, and if a program hasn’t updated its bundled copy of a library with security fixes, this may introduce risks.
That said, it’s all right for standalone apps to be statically-linked. You can have the best of both worlds.
Is a missing dependency error message “elegant”? No comment.
I haven’t had a single dependency error message in a year. No comment.
Yes they do.
No, they don’t.
Do yourself a favor and download Ubuntu this week-end and try it at least once to see what I’m talking about
I’m running Ubuntu (Kubuntu, actually) on a laptop and a desktop. Programs install flawlessly. Enough with the FUD already.
I think you’re a liar, or at least that you’re referring to the time you were a Windows user, and you stopped using Windows when it was still the notorious Windows 98, which was indeed buggy and problematic when dealing with software installation. The problem was fixed with Windows XP.
I’m talking about Windows XP. It doesn’t happen often, but it has happened. Call me a liar if you want, I really don’t care – you’re not above using insults and making derogatory comments, as we’ve seen on this thread.
If you call a shell script an “installer”, LOL!
Yes, a shell script is an installer, albeit a text-based one. You also have graphical installers, such as the Loki installer (you must know about those, since you say you use Crossover Office).
Granny will be happy running a shell script; next she’s gonna ask for SVN access to tweak the Linux kernel
Yeah, because the two things are identical, right? I guess using fallacious analogies is a sure sign that you’re out of arguments.
I’m just reporting facts.
No, you’re not. You’re posting your opinion, and trying to support it using half-baked arguments.
So installing software on Linux is not a problem, not even for Eric S. Raymond. Autopackage and Klik are trying to fix non-existing problems. Real men use plain .deb and .rpm and resolve their own dependencies themselves!
RPMs and DEBs are okay when it comes to open-source software (note I say okay, not great).
The problem is packaging commercial, closed-source software. How do you package that? Do you really expect to walk into a shop, buy a CD, go home, put it into your PC, and see a few dozen different RPMs and DEBs, one for each distro? Of course not. That’s why some commercial software for Linux only supports 1 version of 1 distro.
For commercial software I hope RPMs and DEBs are not the future, and things like autopackage would solve a real problem.
Great, insightful comment.
The problem is packaging commercial, closed-source software. How do you package that?
Give packages for the most popular distros, and offer a standalone installer for others. This is what many commercial ISVs already do for Linux.
Personally, there is one thing I do not like about Linux package management. In the distributions I know (Debian/Ubuntu, Fedora Core, Slackware) a package version is always tied to a distribution release.
If I want to easily install the latest update of my favorite program, I may have to upgrade the core system or use backport/contribution repositories if the package is too new.
With systems like FreeBSD – I know, the article is about Linux and not *BSD – the core system is independent of the package system. For instance, I can upgrade to a very new package without having to upgrade the whole system, as they are managed independently.
I frequently have to use the latest versions of some packages without modifying the core system too much. With such a system, I do not feel stuck forever with old programs.
I know what you’re trying to get at here but it’s not quite the panacea you’re making it out to be. Try installing FreeBSD 4.11, or even 5.2.1 and see how well packages from a fresh csup’d ports tree react. Half of them won’t even build, the others are going to have all kinds of insane bugs.
Seriously, I don’t see why CNR is easier to use than Synaptic, YUMEX or YaST, so it doesn’t solve the problem of installing stuff on your desktop.
Also, I don’t see Red Hat, Mandriva or Novell adopting the technology of a competitor anytime soon, especially Red Hat that has invested a lot into the RPM technology.
CNR is not a package manager, it is a web-based package browser with an e-commerce solution for purchasing proprietary packages. It attempts to better categorize and search the packages already available for your distribution in a marginally slicker interface compared to Synaptic and other GUI package frontends.
The package managers for the major distributions work really well. Installing software on Linux is remarkably easy as long as you have a rough idea of what you’re looking for, and new tools like CNR are designed to make that process a little easier. Why don’t you try it? I don’t need it, and maybe you don’t need it, but it’s worth going to their site and checking it out. At least before you rant about how it won’t fix anything.
And Red Hat has not invested nearly enough in RPM technology. If they had, the prevailing dependency-resolving utility wouldn’t be called the Yellowdog Updater, Modified.
So once again, we’re down to everyone arguing for their favourite package systems for their own personal reasons as the be all and end-all.
However, from what I’ve observed, the current requirements could be met by a small modification of the existing apt-get/dpkg system (rpm/yum etc. could probably work as well, but I’m an Ubuntu/Debian sys-admin, so I’ll just get my bias clear and on the table).
1. Sys-Admins are probably quite happy with the existing system.
2. Allow gdebi to automatically handle dependencies – maybe it already does this.
3. Get third-party proprietary vendors to package their wares as debs with static libs, except for the really low-level stuff like libc6.
4. Bleeding edge people work in the same way as third party vendors.
The only modification I would support would be a file that holds information specifying:
A third party repository.
Applications and Libraries it provides.
GPG keys for the above.
Thus, your third party vendors can specify a repository for their packages, and get the full benefit of the apt-get update system.
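To make that concrete, here is a minimal sketch of what such a vendor-supplied hookup could look like on a Debian/Ubuntu system (the vendor URL, key file and package name are hypothetical):

    # hypothetical vendor repository: add the archive and its signing key,
    # then let the normal apt machinery handle installs and future updates
    echo "deb http://packages.example-vendor.com/apt stable main" \
        | sudo tee /etc/apt/sources.list.d/example-vendor.list
    wget -O - http://packages.example-vendor.com/apt/archive-key.gpg | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install example-app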
—
The only people this doesn’t work for are those without admin privileges who want to install stuff into their home directory. Personally, I’m not entirely convinced by this argument; it sounds a lot like ‘My sysadmin has done his level best to stop me screwing around with the system, but I’m going to try anyway.’
But in the interests of fair play, I’ll bite. I believe Debian already has a fakeroot that could easily be adapted to allow standard applications to believe they are installed as root. Add some local user-space magic, and you’ve got local installs with auto-update as well.
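For what it’s worth, a crude approximation is already possible today without any of that magic: unpack a .deb into your home directory and point your environment at it. This is only a sketch (the package name is made up), and it resolves no dependencies and gets no auto-updates, which is exactly the gap the fakeroot idea would fill:

    # unpack a package into ~/local without root privileges
    mkdir -p ~/local
    dpkg-deb -x example-app_1.0_i386.deb ~/local
    export PATH=~/local/usr/bin:$PATH
    export LD_LIBRARY_PATH=~/local/usr/lib:$LD_LIBRARY_PATH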
The problem, as I see it, is that some programs don’t get packaged in a timely manner, if at all. I recall GQView took a while to get a package for v2.0 on my distro (it was either Mepis or Ubuntu). I saw delays for MPlayerplug-in too. Meanwhile, XnView debs are almost impossible to find. (I hear there’s some ideological/legal issue, but the upshot is, no XnView debs.)
There’s an annoying attitude I see that says “Your distro’s package system has all the programs you’ll ever want! Well, okay, it’s missing some more obscure programs, but nobody cares about those! Use something mainstream!” Well, I care, if I want to try a program out, but there’s no package for it. By that point I’ve probably tried the popular programs already, and found them lacking.
(There’s some parallels to be drawn here. “This website works for the vast majority of websurfers! Well, okay, it doesn’t work in Firefox or Opera, but nobody cares about those! Use IE! What, you say there’s no IE for Linux? Use Windows!”)
Also, re: not looking up websites, what if I want to read more about the program before trying it? What if I want screenshots? (Which I usually do.) Again, what if I’ve tried all the torrent clients in my distro’s repos, and didn’t find one I liked? I can tell you for certain that Ubuntu’s repos didn’t have every client in existence back in the day. (Again, don’t assume what’s missing from the repos must be obscure and therefore crap.)
Multiple distributions differing in application installation, different directory structure etc are a pain in the ass for developers.
This is one aspect of Linux that turns me off a lot. Thankfully there are not multiple incompatible Linux kernels out there, or it would be an even worse nightmare…
Both sides of the argument are ignoring the positives of the other side and the negatives of their preferred solution.
There are a few “real” problems with the OSX/Windows method of installation being implemented on GNU/Linux. One (this is something the BSD guys often talk about) is that a GNU/Linux distro as an “operating system” is not really a stable target. It’s a collection of packages made to work together, but the number of packages that are guaranteed to exist across distributions is minuscule (you don’t really get much beyond glibc) and the number of libraries out there to link against (dynamically!) is astronomical. Windows and MacOSX have a fairly large set of system libraries that maintain ABI stability (Windows actually does this quite well), and application developers on these systems can more or less create dynamically linked binaries that they know will work without shipping too many libraries with their installer/image.
The assembled packages in a Linux distro, however, all have different development cycles, and because of this, ABI stability for the “core system libraries” (which are not very numerous or featureful from an application developer’s point of view) is nearly impossible. This means that placing the onus of creating a binary that is compatible with “linux” on the developer will more frequently than not yield binaries that are compatible only with installations similar to the developer’s machine.
Because of this “moving target” nature of linux, a binary package format that is “point and click” is also relatively useless, as it’s guaranteed to stop working as the sands of gcc/glibc shift.
There are a few ways you can “solve” this (static linking, say) but the solutions are usually a bigger step back technically than anyone wants to take. You could glacialize development on the “core library set” and even try to strong arm your decisions on what is core on everyone, but what you’ll end up with is a technologically inferior distro compared to those who keep with “current” and benefit from new features and abilities.
You also have to remember that there is little to no distinction made between a 40-megabyte DSL installation and a 6-gig Fedora Core desktop installation, insofar as they are both GNU/Linux. If you make a dominant binary package format based on a large standard core library, even disregarding that you are forcing everyone to have X windows and a number of somewhat redundant toolkits to reach the same features in the core set as the other OSes in question, you are also destroying the ability of your binary package format to be used in the case of the embedded or headless Linux machine (which are numerous!)… Fail to capture these and your binary format won’t ever truly catch on.
As for the benefits of repository-based package management, it’s important to actually look at what the words say. It’s not repository software installation; it’s package management. This means maintenance of the operating system (the system libraries and the kernel) as well as the userland and the desktop applications. It means dependency resolution and thus the ability to call upon the hordes of “third party” libraries when necessary, rather than having multiple copies of them in different or even the same versions (like the VB runtime dynamic libs that used to be distributed with virtually everything).
There are a few other specialized realms in which centralized package management makes far more sense than “go off into the ether and find your software”, like systems administration (where there are expensive tools in order to allow repository and package rollouts to many machines; on a team of GNU/Linux systems with apt or yum this is relatively simple and built-in to the OS).
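As a tiny illustration of that administration point (host names invented; real shops would use proper deployment tools, so treat this as a sketch), keeping a handful of Debian-style machines patched can be as simple as:

    # push updates to a few machines over ssh (illustrative only)
    for host in web1 web2 db1; do
        ssh root@$host "apt-get update && apt-get -y upgrade"
    done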
There are also some heavy disadvantages to the OSX/Windows method of “installers”. They push the work of installation and upkeep of software onto every application developer. They decentralize trust relationships, and for the most part make opaque what a piece of software has done to your system by installing itself. They also decentralize information about what programs are present in your system, especially software that neglects to register with the central authority (software that is not “installed” on Windows, for instance, but maintains keys in the registry, is tough to eradicate completely).
Finally, and unfortunately for the folks lamenting how unfriendly repositories are to their grandmothers, package management and software installation in GNU/Linux is driven by the developers writing the software, who have their own ideas of what is and is not convenient based on usage characteristics that might not match a novice’s. They have their own ideas about how best to maintain a system, and quite frankly I agree with some of them… I’ve never had a Windows system (or OSX system… I find OSX gets obsoleted a bit too fast because application developers are usually very eager to try the latest candy) last as long as a Debian system and still “feel clean”… assured that applications are all being tracked and updated, that stale installation files are not languishing in unknown locations, and that the occasional cleanup of unnecessary software is in fact exhaustive and total.
Outstanding post, jonas. I wish I could mod it up straight to +5. This post should be saved by the admins to be used every time that these discussions about package management being useless and yadda-yadda-yadda come up.
“Outstanding post, jonas”
I also agree. This should be saved. The sticky mode of some forums could be useful here. There’s almost nothing more to be added to jonas’s post.
Wow, great post.
I think it exposes some fundamental….not flaws, but maybe issues that Linux has as a system, at least as compared to the current paradigm that people are used to.
Hopefully I don’t get voted down for that, because I don’t mean it in a negative way.
People should understand that CNR is just a service/interface, it uses the Debian package format with APT repositories. The major advantage is the fancy interface and the commercial repository.
The CNR packages have the same features and problems of the Debian ones.
The news of CNR becoming available for Ubuntu is just marketing. Because Linspire will be based on Ubuntu all their packages will need to be Ubuntu (Debian) compatible, they are not adding new supported platforms.
How about a different scenario where jane user doesn’t have an internet connection? Did anyone even consider that? Use CNR or add Repo’s yada yada, not everyone has dsl or broadband.
So if jane/joe needs a particular program how do they get it? And please think about this, just because it is easy for you to use a particular method doesn’t hold true for everyone else.
On OS X you go to a store, buy or get the cd from a mag, it’s in .dmg. You double click. BAM.
Same thing on most windows, .exe or .msi , double click. BAM.
On *nix, depending on your system, what do you do?
Remember not everyone can or wants to build from source.
You’re fighting against the wind. The guys who are reading this thread don’t care about people who don’t have a connection to the Internet. They only care about themselves.
“How about a different scenario where jane user doesn’t have an internet connection? Did anyone even consider that? Use CNR or add Repo’s yada yada, not everyone has dsl or broadband.”
You’re making a valid point here, but the most famous Linux distributions can be bought in a store where they come along with some CDs or DVDs holding all the programs a novice home user might need.
Linux and UNIX systems that are *not* aimed at novice home users are usually not used by these novice home users. So the problem you’re pointing out does not exist there. The users of these systems are usually able (and willing) to read and to educate themselves, so they solve the problem on their own.
“So if jane/joe needs a particular program how do they get it? And please think about this, just because it is easy for you to use a particular method doesn’t hold true for everyone else.”
They use the software managing program included in their OS distribution. Simple answer.
Now try to imagine which application Jane or Joe might need that does not come with the distribution itself. I’ve got no clue…
“Same thing on most windows, .exe or .msi , double click. BAM.”
Ever heard of this? “This program cannot be run. An error occurred. Error: 2953.” or “Missing DLL VCXY12L.DLL, cannot continue.” And now imagine this message is in a language you don’t know. So your “most” is absolutely correct.
“On *nix, depending on your system, what do you do?”
As I said, just use the right tool and read the instructions. Because in Linux and UNIX you have the choice (!), you can do whatever you want. For example, PC-BSD offers a simple way to download and double-click the package you need (including a search function), with no “dependency hell” here. Along with this PBI system, you may use “pkg_add -r xmms” or “portinstall gimp2” if you like.
“Remember not everyone can or wants to build from source.”
Can? What’s the problem with “./configure && make install”? It’s the usual way.
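Spelled out, that usual way is only a handful of commands (the tarball name is made up, and many programs also want their development libraries installed first):

    # the traditional build-from-source routine
    tar xzvf example-app-1.0.tar.gz
    cd example-app-1.0
    ./configure          # checks for a compiler and the required libraries
    make                 # compiles the program
    sudo make install    # copies the result into /usr/local by default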
Want? Agreed. Especially on older hardware with newer software; go compile X and OpenOffice on a P2/300 with 128 MB RAM.
But this thread is about package managing systems, not about building from source, so there’s no problem here.
All that is needed is repos and decent repo management.
That’s why Ubuntu, Debian and Gentoo have strict policies for package management. Their repos include more packages than many RPM distros. And because a lot of users require multimedia playback and so forth not covered by the default repo settings, it’s essential that the additional repos are governed by the distro members and not (no pun intended) by third-party volunteers.
Perhaps linux should have a peek at how PCBSD handles some click and go packages.
Personally I want it to work both ways.
I want to use a repository and install my software via apt or synaptic (if the GUI is your thang) but when a new piece of software comes out which is not available in any of my repositories I want to be able to go to the website and get a binary installer built with BitRock or something along those lines.
The best of both worlds is what I’m looking for.
I would just like to add some sort of view to this conversation about CnR. How many people here that have posted on this forum have used both CnR and their local Package Management software for their distribution?
Those people that have would be saying things like… CnR is great because I go into the “Media” section of CnR and just click the software download button for the software I want…
Or they might say..I don’t like being presented with a list of software. I want to go out and get what I want and install it the way I want….
The difference here is who the target group is: the first group is Joe Average and the second is the Linux power user/admin.
However, the thing that I would suggest is that, for Linux to become acceptable in a day-to-day environment for Mr Joe Average who has a PC at home, CnR is bliss… it will show them what it is… tell them what it is… and install it for them when they want… including updating the menus and desktop icons etc.
I have used Microsoft Windows 2000/XP and Linspire with CnR, and I use Ubuntu as my distro of choice. I am very happy that CnR is coming to Ubuntu, for two reasons. First, because all the dependencies will work when installing… and second, and most importantly, commercial companies can use CnR as their way of releasing software to the Linux community. If I were a vendor with a driver that Linux users needed, it would take time and effort on my part to ensure that this driver worked with “your” flavour of Linux… but if CnR does this for me and makes sure that it works with all the dependencies, then I am more likely to write the drivers for you.
I urge all people here to try CnR and then make up your own mind. See if it works for you. If it doesn’t, then instead of waiting for the solution to come to you (whatever that solution may be), start putting pressure on the vendors and the Linux distros that you use to get what you want.
It’s the only way the monopoly will change – I have no problem with Microsoft as such… I just have a problem with one company dictating the whole marketplace.
No flames intended here.
First I tried the RPM aisle. I tried to get Warcraft and the Burning Crusades. They said I had to buy StarCraft and probably Diablo and Diablo II (just to be safe).
So I went over to the Deb aisle. It wasn’t available yet, but they said some unstable guy might have a copy. I got out of there quick!
I hardly ever go to the TGZ aisle, that’s mostly $9.99 tetris games anyway!
Finally I went to the Emerge aisle and tried to get my Warcraft and the Burning Crusades expansion pack. The sales person said he could help me, but that I wouldn’t be able to pick it up until next Thursday.
Finally I threw up my hands, and the sales person did an apt-get manager.
Anyway, have a great week-end!
I support a granny (it’s my gf, and I’m a grampa too) and I show her the Add Applications menu on Ubuntu. “Don’t install crap off the internet”, I tell her. “Try the stuff in there, and if you can’t find what you want, then get me.”
The Windows way of application management is terrible except for the operating system. Microsoft, the company, gives me no assurances that third party software will work well.
I can understand them not providing official repositories for third party apps, but couldn’t they at least provide automatic checking of md5sums for tested downloads of free software, freeware, shareware, and trials?
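That kind of check is already easy to do by hand, which is all that would need automating (the URL and file names are invented for the example):

    # verify a download against a vendor-published checksum
    wget http://downloads.example.com/example-app-1.0-setup.exe
    wget http://downloads.example.com/example-app-1.0-setup.exe.md5
    md5sum -c example-app-1.0-setup.exe.md5   # prints "OK" if the file is intact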
I think this is an often forgotten part of Linux security and stability. With a repository-based distribution, I am maintaining my _ENTIRE_ system within a simple interface. ALL of my programs get security updates. In windows, it’s just the OS and maybe Office. The rest is up to me. Someone should really play that aspect up to the media!
You can do something like it in a Windows business domain, but it requires cash as well as admin time. I get that at home as a Linux user OUT OF THE BOX. And thousands of applications are supported, as opposed to a few well-chosen Windows apps (where updates may be infrequent, and may even cost!)
The Windows way of application management is terrible except for the operating system. Microsoft, the company, gives me no assurances that third party software will work well.
Oh please, don’t tell me all applications install flawlessly with Synaptic or Adept. It breaks quite often. On the other hand, I have never had any problem of broken installation on Windows XP.
And thousands of applications are supported, as opposed to a few well-chosen Windows apps (where updates may be infrequent, and may even cost!)
Nonsense. I’d rather have 10 cutting-edge paid applications than thousands of free and crappy applications. Makes sense.
What kind of logic is that? Of course it’s more convenient! Just because someone doesn’t click the “Updates Available”-type of icon doesn’t make it less convenient. It’s also a perfect example of when to set the “Apply Updates Automatically” option in the update manager in the OS of your choice.
It’s not convenient if it’s not used. To apply updates automatically, you need to *set* it to apply updates automatically, which a regular person will *not* bother to do. This is what happens nowadays on Windows: most people like Mom and Dad don’t set updates to be applied automatically; they don’t apply them manually either, and the system tray icon stays lit.
most famous Linux distributions can be bought in a store where they come along with some CDs or DVDs holding all the programs a novice home user might need.
I suppose these CD-ROMs stay synced with the Debian repository? This is not serious: you say Linux software is always up-to-date, and then you suggest using CD-ROMs. My RHEL CD-ROMs are like 3-4 years old.
Linux and UNIX systems that are *not* aimed at novice home users are usually not used by these novice home users. So the problem you’re pointing out does not exist there.
Ah! I was waiting for this one: So it’s official, Linux is not aimed at regular people. We can close this thread then, problem solved.
Now try to imagine which application Jane or Joe might need that does not come with the distribution itself. I’ve got no clue…
You’re not a very imaginative and creative person. Are you a Linux software developer by chance? Most applications that I use on Linux are applications that are not available in the repository: Flash Plugin, Acrobat, Opera, Win4Lin, CrossoverOffice, Skype, RealPlayer, Google Earth. Yes, I don’t care about GKrellM and SuperKaramba.
Ever heard of this? “This program cannot be run. An error occurred. Error: 2953.” or “Missing DLL VCXY12L.DLL, cannot continue.” And now imagine this message is in a language you don’t know. So your “most” is absolutely correct.
No, never heard of this, after 10 years using Windows, which is in my own language.
read the instructions
No, I won’t read the instructions on Linux if I don’t have to read the instructions on my current system. No thanks.
What’s the problem with “./configure && make install”?
You’re dreaming. I feel bad for you and your little world.
Malware could be included in your OS X app. Unless it is open source and somehow “certified” to be malware free like those in the distro repositories
I think it’s the other way around. It’s easier to add spyware to an open-source application because the source code is available to crackers, so anyone in the repository team with bad intentions and coding skills can grab the source code, add malware, recompile the code and add the package to the repository. On the other hand, I don’t see a company like Apple packing iTunes with malware, LOL. Of course you can’t trust as easily a company that was born yesterday and is located in Russia/China, but anyone has enough judgement to decide.
Because of its close-minded members and developers, Linux will never go mainstream and get like 30% market share on the desktop. Too sad, it had great potential.
Oh please, don’t tell me all applications install flawlessly with Synaptic or Adept. It breaks quite often.
No, they don’t. Stop your FUD.
On the other hand, I have never had any problem of broken installation on Windows XP.
I have. What do you think of that?
Most applications that I use on Linux are applications that are not available in the repository: Flash Plugin, Acrobat, Opera, Win4Lin, CrossoverOffice, Skype, RealPlayer, Google Earth.
Guess what: they all have installers. And yet you keep complaining…
It’s easier to add spyware to an open-source application because the source code is available to crackers, so anyone in the repository team with bad intentions and coding skills can grab the source code, add malware, recompile the code and add the package to the repository.
Yeah, and the reason this doesn’t happen is that the person would get found out, and they would never be allowed near a repository again, in addition to being the target of legal action.
In any case, reality disagrees with you. There is practically no spyware in open-source applications.
Because of its close-minded members and developers, Linux will never go mainstream and get like 30% market share on the desktop. Too sad, it had great potential.
Your clear, long-standing anti-Linux bias doesn’t give you much credibility when it comes to discussing Linux’s future, I’m afraid.
“What kind of logic is that? Of course it’s more convenient! Just because someone doesn’t click the “Updates Available”-type of icon doesn’t make it less convenient. It’s also a perfect example of when to set the “Apply Updates Automatically” option in the update manager in the OS of your choice.
It’s not convenient if it’s not used. To apply updates automatically, you need to *set* it to apply updates automatically, which a regular person will *not* bother to do. This is what happens nowadays on Windows: most people like Mom and Dad don’t set updates to be applied automatically; they don’t apply them manually either, and the system tray icon stays lit.”
Still not a logical statement…so, I guess just because someone doesn’t eat at McDonalds, it’s not fast food?!?!
BTW, auto update *is* the default setting for Windows since around SP2, so if it’s turned off, it was intentionally turned off. One would think you knew that already being such an MS guru!?!
Edit: BTW, what do you care about Linux software packaging….you won’t be using it anytime in the near future (unless of course your software install doesn’t make it past the next MS Genuine Advantage check.)
Still not a logical statement…so, I guess just because someone doesn’t eat at McDonalds, it’s not fast food?!?!
This is correct. As long as I don’t know what “McDonald” is, it’s not fast food for me.
I see a lot of excuses from the distro crowd: “Use Ubuntu”, “don’t use that software”, “use only what is in the repositories”.
If this was on Windows, people would be going crazy.
We need BOTH rpm/deb AND a 3rd-party installation method that is easy for everyone to use. This 3rd-party installation method should work both on the command line and graphically.
Google evaluated the Linux desktop and also came to the conclusion that it is not viable for Joe User because of the software installation problem (google that).
The failure to produce such a tool is a symptom of conservatism and non-cooperation. Please step up and solve this!
We need BOTH rpm/deb AND a 3rd-party installation method that is easy for everyone to use. This 3rd-party installation method should work both on the command line and graphically.
You’ll be happy to know that there is already a variety of standalone installers for Linux, both graphical and command-line.
Google evaluated the Linux desktop and also came to the conclusion that it is not viable for Joe User because of the software installation problem (google that).
Uh, no. That’s Mike Hearn’s conclusion (the guy who wrote Autopackage, and who now works at Google), and I do believe there’s a bit of sour grapes in that assessment.
The failure to produce such a tool is a symptom of conservatism and non-cooperation. Please step up and solve this!
The failure to recognize that such tools already exist is a symptom of a grave misconception of how things work in the Linux world.
Never will be.
It’s for programmers, and the code contributors use it because they feel that they created it.
It is not a superior OS regarding security.
It was never built for security or threading.
The real problem is that all these developers think it’s the future.
It’s not.
The problem is that Linux is not an operating system — each distribution is, and naturally there will always be incompatibilities. On the other hand, MacOS and Windows only have one flavour. Obviously it’s easier to support the latter.
Plus there is the trade-off: would you prefer to have only statically compiled programs? No dependency problems, but also no possibility of benefiting from library updates, and (much) more disk usage. Free OSes naturally choose the more elegant approach, as opposed to commercial OSes.
I believe the only practical way of offering a cross-OS package format is to produce static binaries (if you have dependencies then you have yet another package manager), possibly built from source at install time, or made path-independent so that they may be installed anywhere (or configured at install time). Then each OS would have a program that would convert such a package to a native package in order to install it. Or maybe put it in a separate tree (/opt/static or something). And put all supported architecture/kernel combinations inside such a package.
Yes, a pain for the developer and a pain for the user in some ways, but it would Just Work. I don’t believe anyone has come up with anything better.
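A minimal sketch of what such a path-independent package’s install script might look like, assuming statically linked binaries shipped per architecture and paths filled in at install time (everything here – the names, files and layout – is hypothetical):

    #!/bin/sh
    # install.sh inside a hypothetical cross-distro package:
    # one static binary per architecture, paths configured at install time
    PREFIX="${1:-$HOME/apps/example-app}"   # user chooses where it goes
    ARCH=$(uname -m)                        # e.g. i686 or x86_64
    mkdir -p "$PREFIX"
    cp "bin-$ARCH/example-app" "$PREFIX/example-app"
    sed "s|@PREFIX@|$PREFIX|g" example-app.conf.in > "$PREFIX/example-app.conf"
    echo "Installed to $PREFIX"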
Since Linux installation is a subject that comes up regularly on OSAlert, I thought best to clarify my main objection to the way the matter is approached.
I don’t see Linux software installation as a problem that needs to be fixed, but rather as a good, open system that can always be improved. Some posters here seem to imply that Linux is broken because of the package management system; their goal is to argue that the OS should not be chosen over others because of that, which is IMO ridiculous.
The choice of words is important. One is meant to discredit the OS, the other to make it progress.
Software installation in Linux has both advantages and drawbacks over methods used by other OSes. Even more, for each chosen method (i.e. packages vs. standalone) there are advantages and drawbacks, no matter the OS.
The Autopackage guys, in a way, were trying to bridge the gap between the two methods, which was a noble endeavour, but I really question whether that’s the right way to do it. I think it’s much better to have parallel methods (with user-friendly interfaces for both) that use common standards for desktop integration. That’s a much more realistic goal, in my opinion.
The ISVs must do their part, providing standalone installers that use freedesktop standards, as well as statically-linked libraries when necessary. I personally like Klik for those, and I recommend trying it if your distro is compatible. Since it’s not a replacement for the package management system, it’s much easier to integrate too. I personally think that Ubuntu and others should install it by default.
Distros will continue to use the source to provide
We’re wasting our time with guys like archiesteel. He’s the kind of selfish and narrow-minded guy who is always right. He’s a typical Linux advocate.
Responding with personal attacks instead of arguments…seems I struck a nerve!
Come back when you’ve actually got something to say.
No, nothing else to say, all I had to say has been said.