There are a lot of people who believe that program and application management is currently as good as it gets. Because the three major platforms – Windows, Linux, Mac OS X – all have quite different methods of application management, advocates of these platforms are generally unwilling to admit that their methods might be flawed, leading to this weird situation where over the past, say, 20 years, we’ve barely seen any progress in this area. And here we are, with yet another article submitted to our backend about how, supposedly, Linux’s repository method sucks or rules.
Not too long ago I characterised listening to such discussions as “listening to a discussion between a deaf and a blind man about whose condition is the easiest to live with”. If people could just take a few steps back for a second, they would see that program management can be improved dramatically.
All of the points raised in the article are addressed in my Utopia of Program Management, a system designed from the ground-up for simplicity and ease-of-use, while also offering all the advanced functionality that power users have come to expect – and a whole lot more that none of the current systems offer. It offers centralised updating, the ability to effortlessly run multiple versions of the same program side-by-side (including multiple settings files), installation via repositories or Mac OS X-like bundles – whatever you prefer. You can manage your programs by hand, as on Mac OS X, or you can write an application like Synaptic and use repositories. Simple CLI tools are envisioned for command line junkies, but thanks to live queries, you could use the graphical file manager as well – no need for special tools if you don’t want them.
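To make the side-by-side idea concrete, here is a minimal sketch in Python of how a launcher in such a system might resolve multiple installed versions of the same bundle, each with its own settings directory. The /Programs layout, the settings path and the function names are all hypothetical – they illustrate the idea, not any existing implementation.

import os

PROGRAMS_DIR = "/Programs"          # hypothetical bundle location

def installed_versions(app_name):
    """List every installed version of a bundle, e.g. /Programs/Foo/1.2, /Programs/Foo/2.0."""
    app_dir = os.path.join(PROGRAMS_DIR, app_name)
    if not os.path.isdir(app_dir):
        return []
    return sorted(os.listdir(app_dir))

def launch_command(app_name, version=None):
    """Build the command line for one specific version, with its own settings dir."""
    versions = installed_versions(app_name)
    if not versions:
        raise LookupError(f"{app_name} is not installed")
    version = version or versions[-1]   # fall back to the highest-sorting version
    bundle = os.path.join(PROGRAMS_DIR, app_name, version)
    settings = os.path.expanduser(f"~/.settings/{app_name}/{version}")  # per-version settings
    os.makedirs(settings, exist_ok=True)
    return [os.path.join(bundle, "bin", app_name), f"--settings-dir={settings}"]

# Example: run two versions side by side, each with its own settings files.
# print(launch_command("MyEditor", "2.0"))
# print(launch_command("MyEditor", "3.1"))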
Obviously, my proposal is just one possible solution. My point is that squabbling over today’s methods is pointless – they all have flaws and downsides which are very hard to fix without breaking the model anyway. What is needed is a dramatic re-thinking of how we manage programs and applications, instead of wasting all this energy on pointless discussions about Linux vs. Mac OS X vs. Windows. We could do so much more with the technology available to us today.
Program management is probably my number one frustration in managing operating systems today. Sadly, I’m not a programmer nor am I anyone with any power, so all I can do is highlight the problem, and make sure people know about my alternative plan that is far superior to anything any current operating system has to offer.
I fully understand that you can’t change the world overnight, but it’s pretty obvious that no one seems even remotely interested in breaking the status quo, as we’re far too busy with “my dad can beat up your dad!”-discussions; and that in an industry which is built on fast development and the rapid adoption of new technology. It’s appalling.
“over the past, say, 20 years, we’ve barely seen any progress in this area”
oh, _come on_. 20 years ago, apt-get didn’t exist.
Or app bundles, for that matter (almost).
Bundles came to OS X from NeXT. NeXT shipped its first model on September 18, 1989.
I hope that you are not implying that NeXT/Steve Jobs invented App “bundles.”
I suppose not, but who else was shipping them before 1989?
Acorn Computers Ltd; http://en.wikipedia.org/wiki/RISC_OS
Ok, thought that was later.
Too bad you didn’t like my article.
OTOH, let’s consider your proposal:
1) It requires a whole new userland. Any app available today will need porting. So you start with a very large barrier to entry.
2) If you don’t think your proposal is practical (as the “utopia” bit seems to imply?), then why use it to bludgeon people discussing practical things?
3) Who decides what dependencies are “common” or “unusual” in your scheme? What happens if you get an app with an “unusual” dependency the author thought was “common”?
It won’t work, and since the dependency support in the package manager doesn’t seem to handle “unusual dependencies”, the user is left at sea without a paddle.
4) The idea of live queries to handle the package database has a huge problem with race conditions.
The locking of the database is not a bad thing, it’s a *good* thing, because it guarantees you don’t start installing things that will not work when the installation ends because you sawed off the branch they were sitting on.
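For readers who haven’t seen it spelled out: this is roughly the kind of exclusive lock a package manager takes around its database, sketched here in Python (the lock file path is made up; dpkg and rpm each use their own). Two installs cannot interleave, so neither ever sees a half-updated database.

import fcntl, os

LOCK_FILE = "/var/lib/pkgdb/lock"   # hypothetical path, for illustration only

def with_package_lock(action):
    """Run `action` while holding an exclusive lock on the package database."""
    os.makedirs(os.path.dirname(LOCK_FILE), exist_ok=True)
    with open(LOCK_FILE, "w") as lock:
        # Blocks until any other install/remove in progress has finished,
        # so we never read or modify a half-updated database.
        fcntl.flock(lock, fcntl.LOCK_EX)
        try:
            return action()
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

# with_package_lock(lambda: install("foo"))   # `install` left undefined in this sketch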
In short, IMHO, your proposal looks like a good idea… until one starts thinking how to practically implement it beyond the level of blog post.
Obviously, I never denied that. Sometimes it takes massive change to improve your product. Look at Mac OS X, KDE4, and to a slightly lesser degree, the massive overhaul of Windows Vista.
I called it Utopia not because it’s technically impossible, but because of an unwillingness to change. All the things described in the Utopia can be done with the technology available today, on the computers of today.
Mac OS X and Windows seem to do just fine with such a set separation – Mac OS X more so than Windows, however. My idea relies more heavily on Mac OS X and BeOS than it does on Linux, that much is true. The BSDs seem to have a similar separation, but I’m not too knowledgeable on the BSDs.
Linux-centric thinking. In the Linux world, yes, this would break the system. However, if you treat applications as separate entities – heck, even different versions of applications as separate entities – then your point becomes moot. You have a default set of system software/libraries/services (like Mac OS X offers, for instance), and applications are small contained islands that can be moved around and managed individually – or collectively. BeOS offers something similar (to a degree), and AmigaOS4 as well.
It’s been done before, just not in a good enough way.
Again, another proposal to learn from failure instead of success. It’s so sad.
This is so typical of “end-user thinking” that I don’t know where to start. It’s OK to be an end user but here we’re talking about package management and that’s a technical discussion, no way around it.
So the basic fact is this: in Linux package managers, the lock is necessary because Linux packages are interdependent. Yes, if they were completely independent app bundles, packing their own libs instead of sharing system libs (except for a small set of system libs), the lock wouldn’t be needed.
However:
There are huge advantages in the “interdependent” Linux way. It means that installed software can take 5x, even 10x less disk space. Ever heard of live CDs and how much fits on them? It also means that software takes far less space when loaded into memory, because each library is loaded once even if used by 5 apps (except when braindead companies like NVIDIA compile without -fPIC to gain 1% speed). It also means that libraries need to be patched only once, and the patch then applies everywhere.
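As a rough illustration of where those savings come from, here is a hedged Python sketch that estimates how many bytes are wasted when each bundle ships private copies of the same libraries instead of sharing them. The /Applications path and the per-bundle lib/ layout are assumptions for the sake of the example.

import hashlib, os
from collections import defaultdict

def duplicated_library_bytes(bundles_root):
    """Rough estimate of the space wasted by identical bundled libraries,
    assuming a hypothetical layout where each bundle keeps its private
    libraries under <bundle>/lib/."""
    sizes_by_hash = defaultdict(list)
    for bundle in os.listdir(bundles_root):
        lib_dir = os.path.join(bundles_root, bundle, "lib")
        if not os.path.isdir(lib_dir):
            continue
        for name in os.listdir(lib_dir):
            path = os.path.join(lib_dir, name)
            if os.path.isfile(path):
                digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
                sizes_by_hash[digest].append(os.path.getsize(path))
    # For every library shipped N times, N-1 copies are pure overhead.
    return sum(sum(sizes[1:]) for sizes in sizes_by_hash.values())

# print(duplicated_library_bytes("/Applications"))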
We’re talking about a few megabytes here, in a time when most hard drives are measured in terabytes. Applications take up a tiny fraction of my hard drive, and most of the space used by any significantly sized application is for stuff that cannot be shared between apps.
Live CDs should be the exception, not the target we optimize for.
The difference in real terms is trivial, especially in a world where 1 GB of RAM is considered low end.
Optimizing for low-end hardware is a worthy goal and all, but I’ll happily sacrifice a few hundred megs of hard drive space and a few dozen megs of RAM if it means I can have a more pleasant computer experience, as I’m sure would most people.
I understand your point.
However, there is nothing that says sacrificing disk space will lead to a solution that gives you a more pleasant experience. A good solution solves both problems.
Obviously not, but my main point is that disk space should not factor into your considerations when designing a new system. If you come up with a good idea that works, don’t scrap it just because it pushes my full Fedora install from 3.5 GB to 3.8 GB.
No, we are talking about pushing a Fedora install from 3.5 GB to 8 GB AND upping the memory footprint after having started the desktop and 10 different apps from 500 MB to 1 GB.
And thinking of netbooks, that IS a problem.
In Linux I have all community-managed open source software from the repositories, and if I install two or maybe five closed-source programs that are all statically linked, THAT does me no harm.
Additionally we need to have a differentiation between apps installed on a local drive, vs. apps installed on a network drive. Makes a big difference for companies.
You are right, but if exactly the same software can take 3.5 GB or 3.8 GB, then the 3.5 GB solution is better than the 3.8 GB one.
Of course, on modern systems it is less important than 20 years ago, but that does not mean it is irrelevant. 300 MB is not just disk space. It is also bandwidth, time to download, security issues and file management headaches.
The point is that less files and less memory usage IS a Good Thing.
Things can quickly get out of hand if you don’t care about disk space. Why not put in your package a binary for x86, one for x86_64, one for PPC, one for ARM and one for MIPS if you don’t care about space? Then your Fedora install is 20 GB. But then you still don’t care and you copy the whole system each time you install an application. Then your system is 2 TB. Then you still don’t care and you create a virtual machine for each application. Then your system is 200 TB.
BTW, my dad can break a bottle of beer with one hand.
This is where we fundamentally disagree. If the exact same software can be installed in two ways, one taking 3.5 GB and one 3.8 GB, then the better solution is the one that is easiest for everybody to manage.
Define pleasant.
As a sysadmin I define it as easy to manage across hundreds of hosts and easy to fix *when* it breaks.
Multiple copies of everything sounds hard to fix and hard to manage.
I’ve been a sysadmin for many years and as such I’m more than sympathetic to their plights. But even as a sysadmin I’ll take a hard to manage but easy for my users to use system over the other way around. After all I exist to make their life easy, not them to make my life easy.
I believe this way of thinking is behind a phenomenon known as Wirth’s law.
The only type of thinking that can generally be attributed to Linux/open-source is that “any type of system that you can think of you can probably create.”
One cannot say that there is a “Linux way” to handle packages. There are many, varied Linux methods.
However, one can certainly say that there is a Mac way or that there is a Windows way.
Are you referring to the way in which Gobolinux handles packages?
Here is the second sentence on the Gobolinux homepage: “In GoboLinux you don’t need a package database because the filesystem is the database: each program resides in its own directory…”
It is easy to run different versions of the same app/library in Gobolinux. I think that I have seen other Linux distros that use a similar system.
In addition, I recently noticed another variation in Tiny Core Linux, which has “*.tcel” extensions which come bundled with libraries and also “*.tcem” extensions with modules bundled.
No doubt, there are a few more examples.
Linux distros like Debian stable do a solid, nearly flawless job in this regard.
I remember moving apps to odd directories and running them… in DOS and Windows 3.1.
Furthermore, while I run one Linux distro, I sometimes use apps compiled for another distro, located on another partition.
And, of course, we could refer back to the Gobolinux method (and other distros with similar systems).
Obviously. It’s been done and it’s being done in a lot of different ways — especially in open-source.
The value/appeal of each method is subjective.
I would like to inject at this junction a useful aside:
http://nixos.org/index.html
Okay. Question: If you started with Nix, what problems does it *not* solve? Would the remaining problems be solvable more easily than designing and creating an entirely new system with its own set of deficiencies?
Package management and designing the perfect one are like assholes. Everyone has one, and everyone thinks everyone else’s stinks. For every package management idea, every attempt to create a silver bullet, there will be Murphy who raises his head to wreck what appears on the surface to be a utopian idea.
The only thing I think comes close would be standardisation of the base of the operating system, plus bundles like they have with Mac OS X. The icing on the cake would be the bundle tracking when configuration and temporary files and directories are created, so that uninstalling is as easy as right-clicking on the bundle and clicking ‘uninstall’, which removes not only the bundle but all the other crap it creates.
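A minimal Python sketch of that tracking idea, assuming a hypothetical per-bundle manifest file that the system appends to whenever the app creates a config or temp path; uninstalling then just walks the manifest. Nothing here corresponds to an existing OS X or Linux mechanism.

import json, os, shutil

MANIFEST = "created_files.json"     # hypothetical per-bundle manifest name

def record_created_path(bundle_dir, path):
    """Called (in this sketch) whenever the app creates a config or temp path."""
    manifest = os.path.join(bundle_dir, MANIFEST)
    entries = json.load(open(manifest)) if os.path.exists(manifest) else []
    if path not in entries:
        entries.append(path)
        json.dump(entries, open(manifest, "w"), indent=2)

def uninstall(bundle_dir):
    """Remove the bundle and everything it recorded as having created."""
    manifest = os.path.join(bundle_dir, MANIFEST)
    if os.path.exists(manifest):
        for path in json.load(open(manifest)):
            if os.path.isdir(path):
                shutil.rmtree(path, ignore_errors=True)
            elif os.path.exists(path):
                os.remove(path)
    shutil.rmtree(bundle_dir, ignore_errors=True)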
The problem with Linux is that there is no standard base – there is no ‘Linux core’ – which causes a whole host of problems: a piece of code might compile and work reliably on one distribution, while all hell breaks loose on another, because either the distribution has custom-patched a library or its library lacks the custom patches which the software was originally tested against.
Microsoft has a package system, MSI, but we have companies who insist on using their own packaging systems because, from what I have heard, the tools to make MSI packages are so atrocious that one is better off just creating a custom one from scratch. The problem with Microsoft is this: why don’t the vendors keep their own damn bundled DLLs in their own directory? DLL hell could easily have been avoided if the bundled DLLs that vendors ship with their software were kept in the same directory as their software – why don’t vendors do that? Am I really asking for too much?
What I really miss on Linux is a standardized way of installing stuff that does not come in source form – mostly talking about games.
The package manager is great when it comes to installing/uninstalling programs, as long as they are from a repository. I wish there were a way for my package manager to handle a precompiled binary blob, so I could add the binary to the package manager and have it handle the blob just as if it were a program installed from the repository.
Now that would simplify distribution of games for Linux.
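For what it’s worth, something close to this can already be approximated on Debian-style systems by wrapping the blob in a minimal .deb, so dpkg tracks and removes its files like any repository package. A rough Python sketch, with made-up package name, version and install path:

import os, shutil, subprocess, textwrap

def wrap_blob_as_deb(blob_path, name, version, staging="./pkg"):
    """Wrap a precompiled binary in a minimal .deb so dpkg tracks its files.
    (The /opt path, architecture and metadata are illustrative, not a recommendation.)"""
    bindir = os.path.join(staging, "opt", name)
    os.makedirs(os.path.join(staging, "DEBIAN"), exist_ok=True)
    os.makedirs(bindir, exist_ok=True)
    shutil.copy(blob_path, bindir)
    control = textwrap.dedent(f"""\
        Package: {name}
        Version: {version}
        Architecture: amd64
        Maintainer: nobody <nobody@example.com>
        Description: Precompiled binary wrapped for the package manager
        """)
    with open(os.path.join(staging, "DEBIAN", "control"), "w") as f:
        f.write(control)
    subprocess.run(["dpkg-deb", "--build", staging, f"{name}_{version}.deb"], check=True)

# wrap_blob_as_deb("./mygame.bin", "mygame", "1.0")
# then:  sudo dpkg -i mygame_1.0.deb   /   sudo dpkg -r mygame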
Program management is probably my number one frustration in managing operating systems today. Sadly, I’m not a programmer nor am I anyone with any power, so all I can do is highlight the problem, and make sure people know about my alternative plan that is far superior to anything any current operating system has to offer.
This is the kind of drivel you get on blogs. Which I guess is not far from the truth for OSAlert vis-a-vis Mr. Holwerda.
I appreciate taking the time to expand on one’s ideas, but IMHO it needs some technical insight. The opinions of other commenters are practical.
My own desktop environment project has things in common with Mr. Holwerda’s utopia. However, it still relies on the underlying operating system to be managed by something like apt-get or yum. FYI, I am a programmer.
I would prefer a system with some kind of repositories where you can add repositories for all kinds of software. Software vendors would have to create their own repository, which should be kept simple, and clicking a download link would add the repository and install the software from there.
Each program could be installed completely in its own directory or rely on another program – e.g. GNOME can rely on GTK. (I’m calling the shared libraries on Linux ‘programs’ here to keep things simple.)
I don’t like the Windows way of putting all DLLs in the same directory, or the use of the registry. I also don’t like the long list of dependencies of some Linux apps.
Updating a program would go via a single update service, and removing/installing one via a program manager (or package manager) – both with a command-line option for the fans and for system admins.
Linux distributions could ship a long list of software already in the repository, while for Windows that list would be very short (only Windows itself).
Of course everybody thinks his/her system is the best, but as you can read, every OS has its pros and cons. I haven’t mentioned OS X because I have no experience with installing software on it, but I have read about it.
This isn’t about MSI or deb or any other way of packaging things, just a way to use them. For every OS there should be as few packaging formats as possible, but I haven’t enough knowledge to decide which one is best.
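To illustrate the one-click idea described above, here is a hedged sketch of what a vendor-supplied repository description and the click handler might look like. The JSON fields, URLs and helper functions are entirely hypothetical – this is not any existing apt/yum/zypper format.

import json, urllib.request

# A made-up vendor manifest: just the minimum a one-click installer would need.
EXAMPLE_MANIFEST = {
    "name": "Example Vendor",
    "repository": "https://repo.example.com/packages/",
    "package": "example-app",
    "signing_key": "https://repo.example.com/vendor.pub",
}

def add_repository(url, key_url):
    print(f"registering repository {url} (key: {key_url})")   # stand-in for the real work

def install_from_repository(package):
    print(f"installing {package} from the newly added repository")

def handle_download_link(manifest_url):
    """What clicking the vendor's download link might trigger:
    register the repository, then install the named package from it."""
    manifest = json.load(urllib.request.urlopen(manifest_url))
    add_repository(manifest["repository"], manifest["signing_key"])
    install_from_repository(manifest["package"])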
Anyone ever tried PCBSD? Their PBI installation is quite good, and I think there wouldn’t be such a problem to port it. Just a shame that the OS isn’t usable otherwise.
While I know Thom you are a VB6 and not .NET fan…
Actually, MS has attempted to address the installation and versioning issue with .NET, via XCOPY deployment: DLLs are placed into the application folder, which supports side-by-side deployment of different versions of an application. This alternative approach was introduced because of the “DLL hell” caused by ActiveX controls.
The negative side is bloat, but as already pointed out, memory constraints are becoming less an issue than ease of use.
Alternatively, if you do want to share DLLs, the GAC (Global Assembly Cache) permits multiple shared versions of DLLs (assemblies, in .NET speak) to coexist, so even that problem is eliminated. Since .NET DLLs are self-describing, all you need to do is copy the DLL into the GAC, instead of going through the messy registry system.
In looking at this approach, it required operating system support (e.g. the GAC structure) as well as requiring programmers to use .NET DLLs instead of ActiveX DLLs, so it is still, ultimately, left to the programmer to enforce.
Finally, this still does not solve cleaning up the files an application creates (if the app is removed with a simple delete), e.g. in the user’s documents folder. Additionally, if services etc. need to be installed, you still have to use the MSI installer or an alternative – so not a complete solution, but it sure is nice to just copy an app into Program Files and run it, as you can with many common .NET applications.
So a partial solution at best, but at least another approach to evaluate.
I don’t really understand why 0Install has had so little coverage. I wouldn’t say it’s quite perfect, but it’s a very interesting solution indeed.
As I’ve said before, there’s a huge difference between core system software and user applications. Package managers are well-suited to system software, and appdir-like solutions work better for user applications.
What is this ‘huge difference’ and who would decide which software is ‘core’ and which is an ‘application’. Furthermore, what if there’s non-core software that’s not an application?
Example: jack, its libraries, daemon and such forth. You cannot foist that upon everyone as a core system, yet it is certainly not a user application.
I’m not suggesting a complete system here — just a concept. The point I’m making is that central package management works brilliantly for installing a missing library or command-line tool. However, package managers are a pain to use for graphical user applications, which need a more distributed and flexible approach. It’s simply a pain when you come across an interesting app, but have to go through the whole process of compiling + packaging just to try it out. Mac and Windows users get to just download and try it out — why not us too?
However, to answer your question — plenty of distros already have a concept of a “core” repo. It should actually be quite possible to have a unified system that works despite differences in the available core packages. 0install already does a good job of this.
Finally, it’s important to remember a core design principle here: consider the interface before the implementation. We need to think of how users want to use their systems, and not let difficulty of implementation get in the way of good design.
But that would be up to the people who make their program available on their website.
They can link everything in statically, so that in the end you get one big binary. They do it for Mac and Windows, why not for Linux?
If you’re a programmer, I’m sure you’ll be aware that not all libraries are designed to be linked statically. For instance, many features of the Qt toolkit are unavailable when linked statically.
What’s more, if a library is statically linked, and a new version of the library is released, it means that the whole application must be relinked (and probably recompiled) in order to update it. If the library were linked dynamically, it would be possible to simply replace the library file in the package before distributing it, which is far easier.
I’m not talking about difficulty, I’m talking about impossibility. I was trying to ask leading questions so you’d realize your mistake.
There is simply no way to logically separate ‘base’ from ‘applications’. Any distinction is purely arbitrary and highly mutable. Distributions do not generally have ‘core’ repositories, not in the way you mean. What they do have changes drastically with every release.
What the user wants is “See any application, click application, have application.” There’s nothing fundamentally at conflict with that in the package management world, 90% of closing the gap between reality and what the user wants is adding some good UI. The other 10% is making third party integration easier, which is a tricky but solvable problem, unlike your ‘concept’.
I think it’s important to note that “Centralized” package management is bad and was AFAIK never really intended to be the norm. Distributed, independent repositories is the way it ought to be done. Throwing out package management for most software is not a workable solution.
Well, I’m open to the idea that I might make mistakes. However, I don’t see my mistake so far.
Yes, but that doesn’t mean that the whole idea comes crashing down. 0install already has a good framework for dealing with this — it ties in with the native package manager to determine whether or not an adequate library for the given application is already installed. I’m worried you’re letting implementation details get in the way of design ideas. It may be years and years before a good system emerges, but that doesn’t mean it’s impossible; it means we need a clear idea of where we’re going.
Mmm; I think what users really want is “see application; use application”. They don’t really care about the “having / installing” bit. That, I think, is where package managers slip up.
Mmm, again — I don’t think users want to think about repositories. It’s a very developer-centric idea.
Yes, it does.
Leading to a very messy situation in which every system is entirely unique and application dependencies are based on the order in which you installed them. No, thanks.
Did I say it was impossible to have a good system? No, I said your approach won’t work. I said separating core system from applications is impossible. And it is! I’m all for designing a good system, but your proposal is emphatically not it.
When did I say “installing”? If I have a menu, such as the GNOME application menu, I have precisely the described workflow. I see an application, I click it, I have it.
If it’s not the GNOME app menu but instead a web page then ideally the same thing would happen.
What has this got to do with having or not having package managers?
Whoever said that users would see repositories? It is not necessary for users to know how it works, I think you’ll agree. What users want is:
1. See application.
2. Click application.
3. Have application.
There is no reason you need to throw away repositories to get this! I am, frankly, a bit suspicious of anyone who wants to throw away repositories and cannot provide an advantage to doing so other than a straw man of usability.
I think we’re on the same page where a desire for increased usability is concerned, but you seem to have some strange notions of what is to blame for the current abysmal state of affairs.
Portage is far superior in my opinion. And BTW my dad could beat your dad easily.
Every discussion that ends up with people calling for a universal Linux installer is just plain stupid. They are separate operating systems! No one seems to be discussing how this will kill diversity in the Linux eco-system. Either that or they just don’t care to realize how important that is to Linux. It’s also extremely impractical considering how Linux is developed. Libraries and programs are developed independently and released independently. There are advantages and disadvantages to this approach, and I have no problem with people pointing out the disadvantages, but the solutions always seem to be completely impractical for Linux.
Diversity does not preclude interoperability. For instance, all major desktop environments now conform to the standards laid out by the FreeDesktop organisation. However, we still see plenty of diversity, and it doesn’t stop new DEs popping up every year. What it means is that they all observe a unifying standard that ensures that all the software on a system is able to behave as expected.
You’re not addressing the issue. A universal installer just isn’t possible. Some distributions are more cutting edge and require newer (or even different) libraries while some focus more on stability and require older libraries. The solution always put forward is to static link every library but that defeats the purpose of shared libraries and their benefits. There are many compile time options available for open source software that can depend on everything from the installed kernel to options enabled in other applications. It seems the only solution is to kill diversity on the Linux Desktop to achieve enough compatibility to make a universal installer even feasible. Filesystem layouts and DEs have nothing to do with it.
Yes; I’m not suggesting we completely replace every distribution’s package manager. I’m suggesting a complementary system mainly for user-facing GUI applications in a Desktop Environment. If you read up on 0install, you’ll see that they already have a decent model for dealing with pre-existing system libraries and different versions of libraries.
I agree completely. In fact, I said exactly the same in another thread. It’s silly to link everything statically.
The kind of software you’re talking about is what I’d call “system” software — mainly CLI applications. Package managers already do a good job of dealing with those. I agree that they’re often not very portable. I’m talking about GUI applications, that usually use relatively high-level toolkits and are pretty self-contained. Basically, anything that would be an App Bundle on MacOS X. There can be little doubt that downloading & testing a small app from a friend’s blog is easier on the Mac than in Linux. Casual users should not need to compile something to try it out. It’s about making simple tasks easy.
You missed my point. Who determines what compile time options are enabled when packages are created? Even gui applications have compile time options, in fact they generally have a lot more options than CLI programs. There is bound to be a conflict. I don’t see how a system can handle this gracefully. There are lots of hacky solutions floating about but nothing that doesn’t introduce a lot of complexity unnecessarily.
Oh come on — I accept that there are plenty of technical difficulties to overcome before a really nice system can emerge, but this really isn’t one of them.
There’s a reason all the large cross-platform toolkits like Qt and GTK+ ensure binary compatibility. These large libraries really won’t give you any difficulty across distributions. If you’re talking about the sort of utility libraries that give you the option to exclude certain features at compile-time, that’s a different story. For those libraries, you’d obviously be looking to bundle a copy of the library with the application.
Forgive me if someone else suggested this. I got bored about halfway through, reading all the comments about how hard it would be to transition Linux over to a new way of managing apps.
Let’s get the first thing out of the way. I’ve seen arguments in favor of the way Linux “organizes” apps into separate pieces going variously into /bin, /usr, /etc, /usr/local/bin, some random place under /opt, and I’ve even seen vital app data stored under /var. Most of those arguments were some obscure comment about mounting things over NFS. As if 90% of Linux users did that.
Face it. It’s a mess. It’s broken and needs to be fixed. And don’t even get me started on the topic of how every app uses its own unique text-based config file format and how absolutely no distro can properly merge user changes to config files with updates (despite unsuccessful efforts to automate the process).
However, what I don’t get is why so many people seem to think this change has to happen over night. Even MacOS X still retains the legacy hierarchy, hidden from the regular user. Moreover, there are LOTS of programs that are part of the base GNU system that BELONG in the legacy hierarchy (IMHO), like bash, ls, cp, date, tar, sync, etc.
This can and must be a very gradual process. And the steps are simple:
(1) Define a sensible filesystem hierarchy. Imitate something like MacOS X or GoboLinux. It doesn’t have to be overly fixed either. Sure, keeping apps in /Applications makes sense, but there’s no reason a bundle couldn’t be installed somewhere else (although some automatic references might break, but that’s the user’s choice). Also, of course, define a bundle. Again, copying GoboLinux makes sense. No reason not to imitate the legacy hierarchy. The only thing in /Applications/MyApp.app/bin/ would be MyApp. No problem there. Seems organized enough. Defining this new standard (with the locations of the executables, default config files, user config files, app data, etc.) will actually be by far the hardest part of the whole transition!
(2) Have every Linux distro installer create this new hierarchy in the filesystem. INITIALLY EMPTY.
(3) Update application managers so that they can support the new format and hierarchy in addition to the old style.
(4) Pick one app and create a stub that makes it LOOK like the app is in the new hierarchy, even when it isn’t. Just symbolic links. This is to help with the transition (see the rough sketch just after this list).
(5) Port one app so that it works cleanly under the new hierarchy. Aside from the fact that the app now appears on a bundle under /Applications, KDE and GNOME otherwise make it appear no differently to the user. Seamless migration.
(6) Goto 4.
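A rough Python sketch of the stub from step 4, under the assumption of a hypothetical /Applications/<Name>.app layout: the “bundle” is nothing but symlinks pointing back into the legacy hierarchy, so nothing actually moves during the transition.

import os

# Hypothetical mapping from bundle subdirectories to the legacy hierarchy.
LEGACY_PARTS = {"bin": "/usr/bin", "lib": "/usr/lib", "share": "/usr/share"}

def make_stub_bundle(app_name, files_by_part):
    """Create a bundle of symlinks so the app *appears* to live under
    /Applications before it is actually ported. `files_by_part` maps
    'bin'/'lib'/'share' to lists of existing file names."""
    bundle = f"/Applications/{app_name}.app"
    for part, names in files_by_part.items():
        target_dir = os.path.join(bundle, part)
        os.makedirs(target_dir, exist_ok=True)
        for name in names:
            legacy_path = os.path.join(LEGACY_PARTS[part], name)
            link = os.path.join(target_dir, name)
            if not os.path.exists(link):
                os.symlink(legacy_path, link)

# make_stub_bundle("Gimp", {"bin": ["gimp"], "share": ["gimp"]})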
This transition will take YEARS. Even when the transition is “complete”, there will still be apps that belong on the old hierarchy, there will still be legacy apps that no one felt like updating, and there will be apps that are too difficult or too illogical to port for one reason or another. And all of this is okay. Change takes time. So develop a new system that people can transition into slowly and comfortably.
The transitions from 16-bit to 32-bit and then 32-bit to 64-bit, for some OS’s, took this form. The user didn’t know the difference. New and old stuff ran side-by-side in one system. Doesn’t it seem intuitive that we should do the same here?
I stopped reading here.
If you don’t understand it, throw it out. Right? Wrong. If you don’t understand it you are not qualified to design a replacement. But thanks for playing.
Okay, I lied. I kept reading. This statement is naive to the point of being funny. Who gets to decide what’s sensible and what will you do if it’s me and I reproduce the FHS you don’t like?
This is all unnecessarily complicated and would be much simpler in practice. Transitions happen all the time in the Linux world and are not especially hard.
The hard part, you were right here, is *defining* the new system. One of the advantages of what we have is that it’s mostly defined in written standards, which are mostly agreed upon by all distributions and the specs are mostly followed. Getting a new set of specs that most people can agree on would be very hard. Your only real option is to do what Gobo has done and just *do it*, then await fame and glory (meaning everyone sees the superiority of your system and adopts it).
I still don’t see what problem you’re trying to solve. You seem to begin with the unspoken assumption that app bundles are obviously the way of the future and the right thing to do, then you invent a straw man argument concerning how difficult transitioning to app bundles would be and how we could manage anyway. Well, no kidding! It could be done, but it won’t be, because it isn’t a good idea.
If you could actually develop a *superior* system for (as the article says) “program management”, then I would be happy to read your proposal. I’m not sure what redesigning the FHS has to do with it in any case.
You and I are both guilty of being incredibly arrogant.
Actually, the main reason that you might want to pay attention to my attitude towards the legacy UNIX file system layout is that I’m not primarily a software developer or a system administrator. I’m a chip designer. I have plenty of software experience (drivers, X11), but I think more like a hardware designer, and as a result, I have no patience for any software that strikes me as unintuitive. Now, you and I will disagree about what’s intuitive and what’s not. But I’m the kind of user that you want to be supporting. Technically savvy. Knows how to use Google to find things out. Has published in computer science conferences (in both AI and Computer Architecture). Willing to listen (I would really like you to elaborate further on your argument). Contributor to open source hardware. But NOT all that interested in system administration. If you make me memorize all these random details about how to make Linux do what I want, that’ll take up some space I need to store things I know about chip design. Unlike perhaps some people like yourself, I have limited capacity.
First, the file system is no longer a hierarchy. It is a relational database, and the shell can do SQL and PL/SQL:
SELECT data FROM config WHERE user='me' AND program='house designer';
SELECT p.name FROM programs p, tags t WHERE t.program=p.name AND t.name IN (‘WEB’, ‘DEVELOPMENT’);
…
Next, the package manager allows you to manage programs directly from the database (with triggers and such):
DELETE FROM features WHERE name=’QT support’;
INSERT INTO programs SELECT p.* FROM programs@dblink p WHERE p.name IN (SELECT t.program FROM tags@dblink t WHERE t.name='GAMES');
UPDATE programs SET optimize=’SIZE’;
…
I could do it because I’m a developer, but unfortunately I have to spend my time on useful stuff like posting on OSAlert.
Let’s not make things overcomplicated. The last thing I want is a filesystem that is dependent on a database system to run; it would have significant overhead and a greater chance of corruption and failure. Now, a database to complement the filesystem might actually work well, similar to what BeOS did, if I remember correctly. What I think you’re actually thinking of is a live-query-like system, where file or application info would be stored in metadata attached to files and/or folders, and all this metadata linked in an RDBMS-like structure. This would work well, provided you had a way to ensure the metadata was transferred over network connections and/or different filesystems; otherwise you could lose it when downloading or giving an app to someone else. OS X does this – that is the purpose of their dmg files: you download a filesystem image of the app you want, so that when you transfer it the associated metadata gets transferred too – not, of course, that OS X uses that metadata anywhere near its full potential.
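As a small illustration of the “metadata attached to files” idea, here is a hedged Python sketch using Linux extended attributes; the attribute names and paths are made up, and a real system would keep an index so queries don’t have to scan directories.

import os

# Minimal sketch of file metadata living next to the file itself (Linux
# extended attributes), with a query that behaves a bit like a live view.
# Requires a filesystem with xattr support.

def tag_file(path, key, value):
    os.setxattr(path, f"user.{key}".encode(), value.encode())

def query(directory, key, wanted):
    """Return every entry in `directory` whose `key` attribute equals `wanted`."""
    matches = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        try:
            if os.getxattr(path, f"user.{key}".encode()).decode() == wanted:
                matches.append(path)
        except OSError:      # attribute not set on this entry
            pass
    return matches

# tag_file("/Applications/MyEditor.app", "category", "DEVELOPMENT")
# print(query("/Applications", "category", "DEVELOPMENT"))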
I think an rdb would complement the filesystem in a very useful way, but I do not think the filesystem should be replaced with an rdb entirely.