One question on everybody’s mind when thinking about Linux and how it will fit into the enterprise mould is whether the number of known distributions — believed to have reached approximately 130 — is helping or hurting Linux. This week at “CA World” in Las Vegas, a handful of the Linux world’s most influential activists gave their viewpoints on that issue. Elsewhere, CNET News.com’s Charles Cooper says the refusal of Linux resellers to indemnify customers is bound to weigh on the minds of CIOs implementing open-source software. “Who’s liable for Linux?” Editorial at ZDNet.
‘Choice in Linux distros healthy’ says Torvalds; More Linux News
About The Author
Eugenia Loli
Ex-programmer, ex-editor in chief at OSAlert.com, now a visual artist/filmmaker.
Follow me on Twitter @EugeniaLoli
65 Comments
Mystilleef, I have to agree; the filesystem also makes a lot of sense to me as well. The problem is that some people are too arrogant or lazy to adapt to change. Thus they should stay in the Windows world, IMHO. If it suits them nicely, then why change? Why do they sit here and moan about things when no one is forcing them to use Linux? If you like Windows and how it does things, then stick with that and stop whining like a little baby.
> If you have a lot of programs installed on a Windows
> machine it can be very difficult to find the one you want.
Most programs have an icon, so you can just right-click on the icon and see the path to the program in the properties. Not as easy as “which program”, but still…
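For comparison, the Unix-side lookup really is a one-liner; a quick sketch (the program name `sh` is only an example — substitute whatever you’re looking for):

```shell
# Look up where a command's executable lives by searching $PATH.
command -v sh          # POSIX shell builtin; prints e.g. /bin/sh
which sh || true       # the traditional external tool, where installed
```

Both print the full path of the first match on `$PATH`, which is what the “which program” shortcut above refers to.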
Quote (Mystilleef) –
“Your various distros use variations of this latent standard.”
But you see, that is the problem. Since they all use variations, the standard is as good as nothing.
“And your various distros provide documentations, or should provide one, regarding their file structure standard.”
Even if they do provide documentation, if the various distributions provide different file system structures, it’s not a standard as you claim. In fact, the need to provide separate, unique documentation for each distribution’s file system structure is proof that the “standard” isn’t achieving its intended purpose.
Quote (Anonymous) –
“Why do they sit here and moan about things when no one is forcing them to use Linux? If you like Windows and how it does things, then stick with that and stop whining like a little baby.”
One of the touted advantages of Linux is that the user community can more directly influence the software development. What you are trying to say is that users who dislike the software should withhold their feedback and return to Windows. How is that going to help Linux get better?
“One of the touted advantages of Linux is that the user community can more directly influence the software development. What you are trying to say is that users who dislike the software should withhold their feedback and return to Windows. How is that going to help Linux get better?”
That’s not really true. The touted advantage is that if you want/need a feature you can implement it yourself. The code is open to allow anyone to change or improve a program. The advantage is NOT for every user to whine about how they don’t like something so people who program IN THEIR FREE TIME can implement things because joe shmoe thinks it’s more like Windows, therefore easier to use.
Quote (abraxas):
“That’s not really true. The touted advantage is that if you want/need a feature you can implement it yourself. The code is open to allow anyone to change or improve a program.”
Not everyone is a programmer, nor do we expect the masses to be. Let’s use the example we have here; we’re talking about changing the file system layout. You may not know which programs expect files at different locations in the filesystem (distribution maintainers may add their own patches to tweak the software to fit with their file system). Hence, even if you are a programmer, you may not be able to do it yourself without considerable effort. This is discounting the fact that the user may not even have a clue about programming.
Linus Torvalds himself says user feedback is a *good* thing which has helped Linux mature. He admitted he would not have been able to think up himself every possible use for it that has surfaced up to this point. Rather, the wide exposure and feedback allows him to consider such uses when he accepts or rejects kernel patches.
“The advantage is NOT for every user to whine about how they don’t like something so people who program IN THEIR FREE TIME can implement things because joe shmoe thinks it’s more like Windows, therefore easier to use.”
Nobody said anything about more like Windows == easier to use. Sometimes it is, mostly because you adopt the habits Microsoft has imbued into you with Windows. Windows does some things right, however. The file system layout is one of those things — I’m not talking about its actual layout, I’m talking about how its layout is _standard_ (well, more standard than Linux has anyway).
To put it negatively, you could say users are “whining” about problems in Linux, but I’d say it’s unfair to generalize that when users provide feedback all they do is whine. Such users believe their comments won’t be heard, so they “whine” in hopes of raising a big enough stink to get the developers’ attention. They are accustomed to this because that’s how it has to be done in the proprietary world. They’ll eventually learn from people who discuss the problems in a mature manner, because those kinds of people are more likely to persuade the project to incorporate the needed changes. However, this doesn’t mean that feedback, be it whining or not, is not a good thing. All kinds of feedback are beneficial, and it’s an advantage of open-source. Take note that this does *not* mean all feedback should be implemented, just that it should be at least considered.
Also, you seem to be expressing the thought that open-source developers who program in their free time are taking great pains to do so. This is not usually true – open-source programmers are there because they do it as a hobby (read: doing it for _fun_). Besides, if they disagree with someone’s complaint, they are free to ignore it. Since they are not doing it for money, they have no obligation to adhere to any individual user’s desires. Of course, I do not think it would be wise if they did not even consider the user’s point, but ultimately it’s up to the developer whether or not he/she wishes to accept feedback. If the developer wishes to remain aloof from all user feedback, it will be his loss.
Whilst I broadly agree with much of what you’ve said, I just have a few minor quibbles:
“You may not know which programs expect files at different locations in the filesystem (distribution maintainers may add their own patches to tweak the software to fit with their file system). Hence, even if you are a programmer, you may not be able to do it yourself without considerable effort.”
LOL! Let’s not go overboard here; firstly, most packages use the PATH and other variables rather than hardcoded locations, so in the majority of cases you can just manually crack the package open and move the contents to wherever you like. Or there’s always the source to build from. It’s not rocket science, especially for a programmer. That said, sure, the masses are never going to take to this.
“Windows does some things right, however. The file system layout is one of those things — I’m not talking about its actual layout, I’m talking about how its layout is _standard_ (well, more standard than Linux has anyway).”
Having a standardised layout which is as crap as that of Windows would not, in my opinion, be preferable to the minor variations on the clean and elegant FHS which are in place right now. Besides, if you’re a distributor, creating packages for the different distributions is not *that* hard; indeed, projects like http://www.easysw.com/epm/ claim to be able to generate them automatically. In addition, if your package manager failed, the filesystem layout is intuitive enough that you should be able to cleanly uninstall the package and all its contents yourself with a little effort. In Windows, it would be impossible to do this except through Add/Remove Programs, and even then it often leaves old registry keys behind.
“they “whine” in hopes of raising a big enough stink to get the developers’ attention. They are accustomed to this because that’s how it has to be done in the proprietary world.”
No, it’s not. In the proprietary world, it’s not the developers they’re whining to. It’s the complaints department of the corporation, which may or may not then pass the feedback down to the developers. A similar model is available in the Open Source world, as well. Choose a distribution, and support it financially. Then gripe at them, and *they* will endeavour to implement the functionality you require, because it’s in their business interest to do so.
“I’d say its unfair to generalize that when users provide feedback all they do is whine.”
Yes, totally agreed. User feedback is a vital part of the software development process, e.g. bug reports.
“However, this doesn’t mean that feedback, be it whining or not, is not a good thing. All kind of feedback is beneficial, and its an advantage of open-source.”
Rubbish. *Constructive* feedback is useful; a bug report, a feature request, a suggestion for improvement. Some feedback is utterly useless, e.g. a bug report that simply says, “it doesn’t work.”
“Also, you seem to be expressing the thought that open-source developers who program in their free time are taking great pains to do so. This is not usually true – open-source programmers are there because they do it as a hobby (read: doing it for _fun_).”
I’d say it’s more of an effort to write and maintain code in your limited free time than if you’re being paid to do it as your day job. Besides, as your project grows, more and more time is going to be taken up by management, support etc rather than the coding that you love. This can rapidly make a project less fun.
“Besides, if they disagree with someone’s complaint, they are free to ignore it.”
True, but also bear in mind that flooding volunteers’ mailboxes with negative feedback is more likely to demoralize them than motivate them. Maybe they’ll even stop maintaining their code altogether and move on to something new, because they feel unappreciated. Just something to bear in mind. For the same reason, if you use their app and are *happy* with it, take the time to drop them a thank-you note.
“most packages use the PATH and other variables rather than hardcoded locations”
GCC hardcodes the prefix used in its compilation into the final binary to be used (Amongst other things) to find the specs file.
GLIBC hardcodes /var/mail as the path to the spool directory, and also the default user/root paths.
The kernel hardcodes the path to its init parameters, although this can be overridden at boot time, or did you think it just magically knew where your bootscripts were?
A whole host of packages won’t work without a set of non-standard symlinks to XFree86 directories.
LILO goes a step further and installs the actual positions of the disk blocks that make up the kernel image into the boot sector.
Hardcoded paths are all over the place, and breaking the FHS starts breaking all kinds of things. Sure, if you know what you’re doing you can configure source packages around whatever filesystem hierarchy you’re using, but it isn’t a job for the faint-hearted and is a pain in the neck. For the light user it’s more than a little annoying when the “./configure && make && make install” scheme of things no longer works as it should.
With the exception of /usr/local (Which can die a painful death because nobody’s using it properly anymore) the FHS might suck, but the rewards of changing it aren’t worth the effort it would involve.
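For what it’s worth, autotools-style source builds already let you pick your own hierarchy via the prefix; a minimal sketch, simulated under /tmp rather than a real source tree (the package and prefix names are hypothetical):

```shell
# Keep one package's files together under a single prefix instead of
# scattering them across the FHS. In a real source tree you would run:
#   ./configure --prefix=/opt/myapp && make && make install
# Everything then lands under one directory tree, simulated here:
prefix=/tmp/prefix-demo
mkdir -p "$prefix/bin" "$prefix/lib" "$prefix/share/doc"
ls "$prefix"    # bin  lib  share
# Uninstalling then becomes a single, clean operation:
#   rm -rf "$prefix"
```

This is essentially what /opt and /usr/local were meant for, which is why the variation in how distributions treat them causes the friction discussed above.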
Quote (Syntaxis):
“Let’s not go overboard here; firstly, most packages use the PATH and other variables rather than hardcoded locations, so in the majority of cases you can just manually crack the package open and move the contents to wherever you like. Or there’s always the source to build from.”
Ok, perhaps this is true. But there are always the exceptions to the rule and poorly written software that will trip you up and slow you down.
“Having a standardised layout which is as crap as that of Windows would not, in my opinion, be preferable to the minor variations on the clean and elegant FHS which are in place right now.”
While the Windows file system layout leaves something to be desired, the fact that it’s standard is advantageous over the minor variations. Although package management on slightly varying file system layouts may be the most obvious hurdle to overcome, you also have to consider documentation – it’s much easier to follow documentation on a standard than on multiple variations of a standard. IMO, it would definitely be preferable if you looked up the location of something in your Linux book and it gave you the exact location rather than 3 or 4 different possible ones.
“Besides, if you’re a distributor, creating packages for the different distributions is not *that* hard; indeed, projects like http://www.easysw.com/epm/ claim to be able to generate them automatically.”
I’ll admit here that I have not seen software such as EPM before, so I’ll give you this one.
“In addition, if your package manager failed, the filesystem layout is intuitive enough that you should be able to cleanly uninstall the package and all its contents yourself with a little effort.”
Eh, not quite. An installed software package will typically spread its files throughout the Linux file system, dropping individual files in their appropriate directory for binaries, documentation, configuration files, etc.
“In Windows, it would be impossible to do this except through Add/Remove Programs, and even then it often leaves old registry keys behind.”
Actually, in Windows you can clean up after a program which doesn’t have an uninstaller far better (in terms of the files it uses, anyway). Each program typically occupies its own subdirectory within the “Program Files” directory. Delete the appropriate directory corresponding to your program, and all of its associated files – whether documentation, libraries, executable – are all gone. Registry keys are usually not a problem, either. There’s usually a directory in the registry containing the keys for all the apps (each app gets its own registry directory, so you just delete the directory and its keys are gone).
“No, it’s not. In the proprietary world, it’s not the developers they’re whining to. It’s the complaints department of the corporation, which may or may not then pass the feedback down to the developers. A similar model is available in the Open Source world, as well. Choose a distribution, and support it financially. Then gripe at them, and *they* will endeavour to implement the functionality you require, because it’s in their business interest to do so.”
I’ve digressed on that quote, which is somewhat irrelevant. The point I was trying to stress in the paragraph I wrote was that whining feedback is not necessarily bad, contrary to what abraxas seemed to convey.
“Rubbish. *Constructive* feedback is useful; a bug report, a feature request, a suggestion for improvement. Some feedback is utterly useless, e.g. a bug report that simply says, “it doesn’t work.” ”
Point accepted. I had originally taken the “constructive” part as understood, because we are talking about ways of making Linux better, not simply lauding the design of Windows. However, my point still stands – whining feedback is not necessarily bad. As you have pointed out, as long as the feedback is constructive, it helps – even if the user presents it in a not-so-friendly manner.
“I’d say it’s more of an effort to write and maintain code in your limited free time than if you’re being paid to do it as your day job.”
Once a task becomes a *job*, it is more burdensome than an enjoyable hobby. Because we are free to work on the code we *want*, whenever we want to, it is enjoyable. Once someone says “do this, by this deadline,” you are restricted by rules and regulations, thereby decreasing the fun factor. Therefore it’s probably more enjoyable when you write code in your free time than when you are paid to do it. Otherwise, why would you bother if you considered it work?
“Besides, as your project grows, more and more time is going to be taken up by management, support etc rather than the coding that you love. This can rapidly make a project less fun.”
This is true, but if your project has grown to such a size, there’s nothing to prevent you from delegating the tedious tasks to other members of the project.
“True, but also bear in mind that flooding volunteers’ mailboxes with negative feedback is more likely to demoralize them than motivate them.”
Well, not all feedback is going to be directly sent to the developers by e-mail. I’m talking about the kind of feedback that you can kind of sense from the community – e.g., how years ago everyone was criticizing Linux for not having a journalled file system. Word would have gotten around to the developers some way or another as a result of growing community dissent.
“For the same reason, if you use their app and are *happy* with it, take the time to drop them a thank-you note.”
Very good advice. Hobbyist programmers have little external motivation to develop free software. Some positive feedback never hurts.
“But there are always the exceptions to the rule and poorly written software that will trip you up and slow you down.”
Ok, true. This is the same on all OSes though. The way to deal with this is to only use packages from a source you can trust, e.g. restrict yourself to packages from the official Debian tree. Of course, if you want to try out a piece of software that’s not yet in the tree, or a newer version, you’re screwed.
“you also have to consider documentation – it’s much easier to follow documentation on a standard than on multiple variations of a standard. IMO, it would definitely be preferable if you looked up the location of something in your Linux book and it gave you the exact location rather than 3 or 4 different possible ones.”
I don’t really see what’s wrong with just reading the docs for your specific distribution. End users are only going to be dealing with one distribution at a time, anyway.
“Each program typically occupies its own subdirectory within the “Program Files” directory. Delete the appropriate directory corresponding to your program, and all of its associated files – whether documentation, libraries, executable – are all gone.”
I’m afraid not. The C:\Windows directory gets bigger over time. Note, too, those annoying “This file may still be in use: C:\Windows\<dll-name>.dll – remove file?” warnings one often gets in uninstallers. Most users will be freaked by this and just click no. The registry grows over time, as well… just general accretion of crap.
“An installed software package will typically spread its files throughout the Linux file system, dropping individual files in their appropriate directory for binaries, documentation, configuration files, etc.”
On Debian, at least:
Binaries: /usr/bin
Documentation: /usr/share/doc
Configuration files: /etc
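And the dispersal is at least queryable on a package-managed system; a sketch (the package name `coreutils` is just an example, and `dpkg` is Debian-specific):

```shell
# Debian's package database records exactly where each file went:
#   dpkg -L coreutils        # list every installed path for a package
#   dpkg -S /usr/bin/ls      # reverse lookup: which package owns a file
# The conventional destinations themselves are easy to verify:
for d in /usr/bin /usr/share/doc /etc; do
  [ -d "$d" ] && echo "$d"
done
```

RPM-based distributions offer the equivalent via `rpm -ql` and `rpm -qf`, so “where did my files go” is an answerable question as long as the package manager did the installing.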
“Once a task becomes a *job*, it is more burdensome than an enjoyable hobby.”
Okay, I agree with that. I guess I didn’t express myself too well. What I was trying to get across was the general idea that it’s far more of a gift for someone to donate his time to help me, answer my questions etc, than if he were paid to do so. This denotes genuine effort, and he’ll have gone out of his way to do so.
“This is true, but if your project has grown to such a size, there’s nothing to prevent you from delegating the tedious tasks to other members of the project.”
It’d be great if it worked that way, but unfortunately it doesn’t. You’ll have tons of volunteers for the fun stuff, but the boring tasks will tend to languish. Notice how documentation for OSS projects tends to lag quite a way behind the code? Well, that’s why. Similarly, people will often prefer to contribute a new feature than polish up existing code, because it’s more fun.
“e.g., how years ago everyone was criticizing Linux for not having a journalled file system. Word would have gotten around to the developers some way or another as a result of growing community dissent.”
I think you’ll find that, generally speaking, the journalling filesystems for Linux were written/financed/ported by people that wanted or needed them, not because “community dissent” drove them to it. But yes, I suppose user demand would be a factor, though not the determining one in my opinion.
“most packages use the PATH and other variables rather than hardcoded locations”
Okeydokey, my bad… I guess I was a little unclear. What I meant by this is simply that you can e.g. plop the binary in any directory included in your PATH variable and it’ll execute just fine. Just because the package system puts it in /usr/share/games/bin doesn’t mean you have to have it there.
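That relocation claim is easy to demonstrate with a throwaway script standing in for a packaged binary (the name `hello` and the directory `~/mybin` are hypothetical):

```shell
# Create a stand-in "binary" in a non-standard directory...
mkdir -p "$HOME/mybin"
printf '#!/bin/sh\necho hello\n' > "$HOME/mybin/hello"
chmod +x "$HOME/mybin/hello"
# ...and it runs fine once that directory is on $PATH:
PATH="$HOME/mybin:$PATH"
export PATH
hello    # prints: hello
```

The shell neither knows nor cares that the file didn’t come from /usr/bin, which is the point being made above.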
“Harcoded paths are all over the place and breaking the FHS starts breaking all kinds of things.”
You’re quite right. I wasn’t talking about breaking the FHS. It’s perfectly possible to have two differing filesystem layouts that are still both FHS compliant. For example, Debian doesn’t make use of /opt in the same way that some other distributions do, since the usage of said directory is stated as a suggestion rather than an explicit requirement. There are lots of similar little variations like this.
Quote (Syntaxis):
“The way to deal with this is to only use packages from a source you can trust, e.g. restrict yourself to packages from the official Debian tree. Of course, if you want to try out a piece of software that’s not yet in the tree, or a newer version, you’re screwed.”
I think this is wandering a bit off on a tangent from the subject – the obstacles faced when you implement/fix up the software yourself. If you’re going to be changing things, you’re not going to be able to use official Debian (or whatever other distribution) packages. The more fundamental the change, the fewer of the vanilla packages you can use.
“I don’t really see what’s wrong with just reading the docs for your specific distribution. End users are only going to be dealing with one distribution at a time, anyway.”
There’s nothing wrong with reading the documentation with your specific distribution, and yes, end users will be dealing with one distribution at a time. However, the reasons for having some unified documentation are similar to the reasons for having commonalities and standards among distributions. It simplifies things. Skills in one distribution can therefore carry over to another distribution with minimal effort.
“The C:\Windows directory gets bigger over time. Note, too, those annoying “This file may still be in use: C:\Windows\<dll-name>.dll – remove file?” warnings one often gets in uninstallers. Most users will be freaked by this and just click no.”
This is _if_ there is an uninstaller. I am referring to the possibility of a software package omitting an uninstaller (i.e. a bad oversight on the part of the software developer). For example, I had installed Opera on Linux a while back, and I was not as pleased with it as with the offerings of Mozilla and Konqueror. Therefore, I went to uninstall it, only to realize there was no uninstaller. Now what? It had installed its files who knows where. Assuming the Windows equivalent of Opera had no uninstaller (I think it does, but hypothetically assume it does not for now), Opera would have most (if not all) of its installed files in its subdirectory of the Program Files directory. All I would need to do would be to delete that directory, instead of searching around for individual files that Opera installs. Sure, there may be a few DLLs still left in the windows/ directory, but you would have a majority of Opera’s files deleted, reclaiming most of the disk space. In Linux, it would be much more difficult to identify the files which were installed, and I wouldn’t know if I had deleted all of them even if I tried. There might have been an install log, but I couldn’t find any, and even if I had, it would have been much easier the Windows way anyway.
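For what it’s worth, the name-based hunt can be scripted; a sketch that simulates the Opera situation under /tmp rather than touching a real system (all paths here are fabricated for the demo):

```shell
# Simulate a program ("opera") that scattered files with no uninstaller:
mkdir -p /tmp/unins-demo/usr/bin /tmp/unins-demo/etc
touch /tmp/unins-demo/usr/bin/opera /tmp/unins-demo/etc/operarc
# Hunt the files down by name, as you would from / on a real box:
find /tmp/unins-demo -iname '*opera*'
# ...and delete what the search turned up:
find /tmp/unins-demo -depth -iname '*opera*' -exec rm -rf {} +
```

On a package-managed install you’d skip the guesswork and ask the database instead (e.g. `dpkg -S opera` or `rpm -qf /path/to/file`), but for tarball installs without a log, name matching is about the best you can do.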
“The registry grows over time, as well… just general accretion of crap.”
I’d be interested in some verifiable examples. Yes, I’ve heard that this is a problem on Windows, but I’ve never actually seen any solid proof.
“On Debian, at least:
Binaries: /usr/bin
Documentation: /usr/share/doc
Configuration files: /etc”
This supports my point: the fact that a package’s files are dispersed throughout the filesystem, as opposed to organized in its own directory, makes them tougher to manually manage (see above on manual uninstallation).
In addition, this is – as you point out – something you are only sure of /on Debian/. Standardization goes a long way for a better user experience.
“What I was trying to get across was the general idea that it’s far more of a gift for someone to donate his time to help me, answer my questions etc, than if he were paid to do so. This denotes genuine effort, and he’ll have gone out of his way to do so.”
Not that he has to. When we speculate in news comments like this, the developer never has to go out of his way to read all of it. And I agree with you that when he donates time to answer your questions and guide you personally, he is mostly going out of his way to do so. On the other hand, he can also be partially self-motivated to do such things – people will continue to use his software, and he gets valuable bug reports and feature suggestions that he possibly never thought of himself. So his project can benefit from such interactions, too. It’s not just a one-way interaction.
“You’ll have tons of volunteers for the fun stuff, but the boring tasks will tend to languish. Notice how documentation for OSS projects tends to lag quite a way behind the code? Well, that’s why. Similarly, people will often prefer to contribute a new feature than polish up existing code, because it’s more fun.”
The two original categories you mentioned are management and support. Generally, it’s not too difficult to find help in management. People like to manage because it puts them in authority over others – they have the power to control. Of course, as the initiator of a project, you can always step in and override management if you think something is going in the wrong direction. As for support, a majority of it is provided by the community itself. This also has the advantage that your support will scale with the userbase: an increasing userbase will yield increasing support. Users will help other users – you see this kind of thing far more often in OSS projects than in commercial projects, because open-source projects don’t guarantee any form of support, unlike commercial vendors. That is _not_ to say that the people in the project don’t do any form of support at all. They do, but a great deal of the burden is lifted when the community provides its own support. And if you are the entrepreneur of a project, things get better: you do not shoulder all of the project support either. Other developers who specialize in maintaining portions of the source tree will diagnose bugs for those specific portions, since they are the most knowledgeable about them. So in the end, it seems quite feasible to me.
“I think you’ll find that, generally speaking, the journalling filesystems for Linux were written/financed/ported by people that wanted or needed them, not because “community dissent” drove them to it.”
You’re probably right on this one, because I am not sure of the details. I was merely trying an example to show you what I mean. You seem to have understood anyway, so I guess there’s no need to clarify further.
“But yes, I suppose user demand would be a factor, though not the determining one in my opinion.”
That’s true. I never said user demand was to be a determining factor, however. The developer should at least /consider/ it.
If we want OSS to succeed, we need to have as many projects as possible, and that goes for distributions as well. Period. We need as much software as we possibly can so that ambitious startups wanting to get some marketshare have all that at their disposal.
The idea that hardware companies don’t have a standard to conform to, hence they won’t enter the linux market, is not really true. With the exception of 3d video cards, hardware companies want to add to the kernel. In the case of 3d cards, companies all have to work within the XFree mess (so it’s bad for them, but not because of different distributions).
As far as games go, as more and more distributions become LSB compliant, they’ll be able to just provide general packages for everyone.
The important thing is that everyone that has a unique direction they want to explore, whether it be software or distributions, should start their own project or add to another (fork it if necessary).
OSS on the desktop is in its infancy, and we need all the lines of code we can get.
Before anyone thinks that Microsoft is liable, go re-read your Windows XP EULA. XP may be Microsoft’s best and most stable operating system, but the EULA doesn’t hold Microsoft responsible for code they stole (no proof), or for damages done by viruses, blue screens of death, or work lost to a constantly rebooting system. To top it all off, they have the right to modify your computer’s registry to disable programs that they do not like (though they have said that this is extremely hard). With the Trusted Computing initiative, Microsoft will soon be able to track your computer very easily.
wow… links to 2 different sites running the EXACT SAME ARTICLE BY THE EXACT SAME AUTHOR (news.com and zdnet.com).
sorry for the yelling… so much for different sides of the story, eh?
now go and choose Mandrake
Who’s liable for any other commercial piece of software?
Remember that license you clicked through when you booted up your Windows box the very first time?
How about that license you agreed to when you installed *insert your software here*?
You’d have to be insane to allow yourself to be held liable for software when some companies and individuals make their “living” almost exclusively off of lawsuits *cough* SCO *cough* and *cough* fat, err, ummm obese micky dees customers *cough*
I Choose GENTOO!!
Why choose?
-a happy Mandrake/Red Hat/Slackware/FreeBSD/OpenBSD… user
OpenBSD, NetBSD, FreeBSD, MirBSD, hopefully soon DragonflyBSD
Yes, choice in Linux distros is nice; however, certain things need to become standard between the Linux distros. Things like file placement, which is PATHETIC in every version of Linux. Windows does a much better job here: all system files are in the Windows directory, and all folders are named so you know what the heck is in them by reading the name of the folder. All program files (with the exception of a few programs that feel they should be placed in, say, C:\app) are in the Program Files directory, etc… A lot of UI needs to be standardized (not necessarily the same UI, but the same basic way to organize data within a UI).
In short, yes, the choice is good, but in its current form, the whole distro system sucks.
I agree, but the LSB makes this practically a non-issue for those distros that follow it. Windows does a bad job also; I really don’t like the idea of having all my binaries in many locations, which is my chief complaint about the /opt file system. The unix file system makes sense once one spends 10 minutes learning how it works. What’s wrong with the UI? I’m really curious. I think it’s smart to put user bins in /usr/bin, and shared files in /usr/share. It’s also smart to put user libraries in /usr/lib, which is *NO DIFFERENT* than System and System32 on a Windows machine, the exception being that system libs are in /lib. Once you get the feel for the unix filesystem layout, you’ll wonder why Windows wasn’t organized that way.
The choice in distros is good for the common user. For system types it sucks sweaty donkey balls. Until I see Oracle saying they support LSB version blah.blah, then I will continue to think it’s one big mess. In fact, what’s happening now with third party commercial application vendors is they are “supporting” certain distros. This is a bad bad bad thing…..as it gives you zero choice or at least very little. The BSDs have a much better mechanism. LSB is also starting to go south as it requires RPM as its package management scheme….laaaaaaaaaaaaaame.
-Dan
It would be better if they were to stick those libs into /distroname/lib or /linux/lib, etc… so you aren’t cluttering your drive with system folders all over.
Dumping everything in the C:\WINDOWS directory does not strike me as an elegant solution to file placement. There is a standard for file placement in Linux, and it is fairly rational.
I think the most rational of the bunch is OSX, but that’s just my take.
Cheers,
prat
re-reading my comment, /distroname would be BAD… /linux would be great.
I’ve not really used OSX in depth enough to actually look, only to look up a website on a friend’s computer and such. How does it work?
The choice in distros is good for the common user. For system types it sucks sweaty donkey balls. Until I see Oracle saying they support LSB version blah.blah, then I will continue to think it’s one big mess. In fact, what’s happening now with third party commercial application vendors is they are “supporting” certain distros. This is a bad bad bad thing…..as it gives you zero choice or at least very little. The BSDs have a much better mechanism. LSB is also starting to go south as it requires RPM as its package management scheme….laaaaaaaaaaaaaame.
They only officially support certain distros…it doesn’t mean others won’t work. Most commercial Linux software is targeted toward Red Hat, Mandrake, and SuSE…they’re the most popular…they’re the ones most likely to earn back the cost of development.
In addition, since the majority of Linux distros are forks of existing distros (mostly Red Hat) these forks tend to be reasonably compatible with the original distro, often offering only superficial differences (686 optimizations, different default window manager, extra management tools).
RPM was chosen as the package manager for LSB because it’s the de facto standard…the big three (Red Hat, SuSE, and Mandrake) all use RPM and a lot of the small guys do too.
What should they have used instead? .debs? .tgzs?
They sort of do. All system libs go in /lib. Anything installed for a regular user by an admin puts its libs in /usr/lib. Anything that a user installs would have its libs placed in ~/lib, but that’s pretty uncommon these days. It makes perfect sense to separate userspace libraries from system libraries. I think I see where you are going with /linux/lib; you would also have /linux/bin and /linux/sbin too, right? Would you then also want /users/bin and /users/lib as well?
Why? It works, and nearly every major Linux supports it. It has its issues, but so does every packaging system.
Here is a link from the apple developer site on the OSX file system layout:
http://developer.apple.com/documentation/MacOSX/Conceptual/SystemOv…
Cheers,
prat
I never knew the number was that high. We do need standards for Linux; it is just stupid that there is no standardization yet. When the same program installs in one place in one distro and in another place in a different distro, that is just a lack of common sense.
Why are the directories under / almost always just 3 letters anyway? Is it that much harder to name the user folder “user” instead of “usr”?
In short, yes, the choice is good, but in its current form, the whole distro system sucks.
You got the source code. Design your own, since you obviously know how to do it better.
There’s nothing preventing Linux from taking on that Windows filesystem layout. Only us programmers who would rather type “/usr/bin” than “C:\Program Files”.
Why are the directories under / almost always just 3 letters anyway? Is it that much harder to name the user folder “user” instead of “usr”?
This is a common misconception…usr doesn’t stand for “user” it stands for “Unix System Resources”
Also, the reason the directories under / are almost always just 3 letters is the same reason almost all of the program names are just 2 or 3 letters long – less typing…remember unix started out as a command-line only OS, and things like tab-completion are a fairly recent innovation. In the early days, typing things like copy /UnixSystemResources/Binary/copy /Users/JoeSchmoe would have been a nightmare
I don’t know about the US, but where I am from the EULA is not worth the paper it’s printed on — whatever may be printed on there.
– They only officially support certain distros…it doesn’t
– mean others won’t work. Most commercial Linux software is
– targeted toward Red Hat, Mandrake, and SuSE…they’re the
– most popular…they’re the ones most likely to earn back
– the cost of development.
Try telling that to upper execs who need accountability. I’ve run things like the DoubleClick adserver on “unsupported” distros without a problem, which is fine for small-time products. The minute I call Oracle and tell them I’m running Debian (I haven’t looked at their support list lately — but it wasn’t on there a while ago) they will tell me they can’t help me.
– What should they have used instead? .debs? .tgzs?
LSB should not include a specific package manager in the specification. It should specify what a package manager has to do. As it stands, they have specified an implementation rather than the behavior that implementation must provide. Which is bad for GNU/Linux.
-Dan
Try telling that to upper execs who need accountability. I’ve run things like the DoubleClick adserver on “unsupported” distros without a problem, which is fine for small-time products. The minute I call Oracle and tell them I’m running Debian (I haven’t looked at their support list lately — but it wasn’t on there a while ago) they will tell me they can’t help me.
Well… you could use Red Hat or SuSE if that sort of thing is important to you.
LSB should not include a specific package manager in the specification. It should specify what a package manager has to do. As it stands, they have specified an implementation rather than the behavior that implementation must provide. Which is bad for GNU/Linux.
Now you’re really showing your ignorance!!! How are commercial software vendors supposed to deliver their apps then? Do you honestly expect Oracle to deliver .deb .tgz .yourmotherspackagemanager AND .rpm files too???
Choice is good. But basic things like the file system layout need to be standardised. There’s just no advantage to differing on basic things like that.
– Now you’re really showing your ignorance!!! How are
– commercial software vendors supposed to deliver their
– apps then? Do you honestly expect Oracle to
– deliver .deb .tgz .yourmotherspackagemanager AND .rpm
– files too???
No, Oracle should be able to release their packages in any package format, as long as it meets the LSB specification. What this gets you is the ability to have an LSB-compliant Debian/Lindows/etc distro without needing some conversion tool to be compliant. The difference is that now you might be required to get a conversion tool in order to install Oracle, but that conversion tool is not required to become LSB compliant — a much more important issue.
-Dan
Okay, so you’re a Debian zealot…
apt-get install rpm
Was that so hard?
…Packaging paradigm. Yes, I think rpm and deb both suck a fair bit. And so does Windows style too.
I think using executables to deliver software is a WRONG idea. It’s a nightmare on Windows when my friends click ‘Yes’ on every prompt they see, and I end up with all sorts of advertising software I really hate.
RPM is too complex for the common user too. I know apt makes it better, and so does stuff like Red Carpet.
Deb is also the same. They do a lot of stuff similarly.
I think the Mac way is very good. Drag and drop into a certain folder and it works. I think the community needs to develop something like that. I realise that it is a bit harder, given how many apps are dynamically linked, and therefore how difficult it would be to install dynamically linked software like that. So maybe this is not feasible for Linux. Or it needs refinement.
I think an ‘executable directory’ system would be beneficial. An executable directory is simply a directory from which software is allowed to run. No hiding software in shady locations. The system can maintain a lot of executable directories, and the list of these is readily available to the sysadmin too.
Maybe a way to register all the software dropped into the ‘executable’ directory. (There could be more than one such directory, a new attribute or maybe just a config file). Executables would not be allowed to execute outside these directories. Obviously, only the root may install into these directories. Or, each user could have their own ‘executable’ directory, but these executables are limited to being used by the user alone, and have no permissions to change any of the roots files or those of other users.
Thus this ‘executable’ directory would be some sort of an ‘active directory’ which monitors any changes to it.
For dynamic linked software, there could be a system where every ‘executable’ is required to publish information about other software it needs to run. So when one drags and drops, before it copies, it could check to see if the requirements are met, and then install the software. It would then register all the files placed there.
Removing software would then also be a matter of the admin removing the directory with the software in it, à la Mac. When removing the dir, it would check to see if any software depends on it, and advise the user whether to remove that software and all software dependent on it, or remove just that software and leave the dependents there, at the risk of running an unstable system.
This is brainstorming right here. This is probably easier said than done. But I think such a system could possibly make it easier for users to install software for themselves, if the system admin allows it. And the scope of damage that software could do would be limited to their home directories.
Also, for those who like to compile their own software, as long as they compile their software to the specs, installation is as easy as copying the directory with the binaries to the ‘executable’ directory.
These are just ideas. But I think package delivery on Linux is hell as it stands. Feel free to critique/criticize what I have written here. I cannot code to save my life, but I think I have an intuitive idea of what an easier way of package management on Linux would look like.
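For what it’s worth, the “executable directory” policy sketched above can be roughed out in a few lines of shell. This is purely a sketch of the idea, not an existing Linux feature; ALLOWED_DIRS and run_trusted are made-up names, and a real implementation would have to live in the kernel or loader:

```shell
#!/bin/sh
# Sketch of the "executable directory" idea: a launcher that refuses
# to run anything outside an approved list of directories.
# ALLOWED_DIRS and run_trusted are hypothetical names, not a real API.
ALLOWED_DIRS="/bin /usr/bin /usr/local/bin"

run_trusted() {
    prog=$1; shift
    dir=$(dirname "$prog")
    for d in $ALLOWED_DIRS; do
        if [ "$dir" = "$d" ]; then
            exec "$prog" "$@"   # approved location: hand over control
        fi
    done
    echo "refused: $prog is not in an executable directory" >&2
    return 1
}
```

A userspace wrapper like this is trivially bypassed, of course, which is exactly why the poster’s idea would need enforcement below the shell.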
– Okay, so you’re a Debian zealot…
– apt-get install rpm
– Was that so hard?
Actually, I run RH pretty much all over the place. I use rpm on a daily basis, but I just don’t like rpm being tied to the specification. Historically rpm was added to the LSB because nothing else was around, and in fact it wasn’t a requirement to be “LSB compliant”. Recent LSB releases have changed that, for the worse.
Sure, apt-get will install the rpm for me… and that’s great, but we need to go back to the original points:
1 – Oracle doesn’t support LSB compliant distros, but rather certain versions of RH and SUSE.
2 – On a totally tangential topic, LSB requires the RPM package format.
My arguments are with those two issues:
For 1 – In an environment that deploys commercial applications you are very limited to the choices you have at hand. This completely diminishes that whole argument of having choice — which you don’t and won’t in the foreseeable future. LSB attempts to fix this problem, but I have yet to see a commercial product say it supports all distros that are LSB version blah.blah compliant.
For 2 – I think this is a mistake. In some cases you can’t help but reference an implementation, such as glibc (and I would even argue that that is bad — but the LSB makes up for it by declaring system calls and all sorts of low-level functions that must be supported). Saying you need RPM is anti-open-source and overall anti-Linux.
-Dan
“LSB is also starting to go south as it requires RPM as it’s package management scheme….laaaaaaaaaaaaaame.”
I don’t know what proof you have that makes you think it’s going south, especially since every major software VAR in the Linux world has agreed to the LSB standard and is part of the council that decides on it. One package format has to be chosen for a standard; the different package formats are too different otherwise. If anything, the LSB is just starting to be adopted; it hasn’t been around long enough for things to really kick into gear. Once it does, certifying things will be easier.
LSB does not specify the kernel yet, which I believe will have to be the next step. The kernel is one of the big reasons why Red Hat / SuSE are certified for Oracle and no one else is (at last check). Both Red Hat and SuSE have worked hand in hand with Oracle to make sure that the proper optimizations and tweaks have been done to ensure Oracle runs reliably.
Anyway, package format is part of the specification. You need to choose one because of dependencies and various other things. If you don’t understand that, then you haven’t worked as a major commercial software developer.
I actually don’t think the LSB is going south… I just don’t like the RPM requirement — a relatively small part of the whole specification.
-Dan
Heh, I should have re-read what I wrote….my original “Going South” statement was a bit strong…
-Dan
There ARE too many Linux distros. That is why I use FreeBSD. I got really tired of each frickin’ Linux distro doing things its own way, and having to figure out what was different, how and why. Cr*p! Think if there were 130+ different versions of Windows 2K/XP – NIGHTMARE…
*BSD is heading the same way, with plenty of forks there too, although not nearly as many as Linux — but that’s probably only because it isn’t as popular.
Personally I use FreeBSD.
These comments about the UNIX filesystem hierarchy are inane. It’s not like total newbies have any idea what “Windows\System” stores, and I refuse to believe that knowledgeable computer users could have a problem understanding a file hierarchy that, for all intents and purposes, has only a few properties:
bin – Binaries
lib – Library files
share – Other resources
/usr – Network-wide system resources
/usr/local – Resources specific to this machine
/home – Resources specific to users
/etc – Configuration resources
/usr/X11R6 – GUI resources
Take different permutations of these properties, and you cover most of the files in the entire system. Second, all you really need to know is the ~ directory. There is nothing outside of ~ (and maybe /etc if you administer your own machine) that you need to be concerned with. Your uber-intelligent package manager is supposed to manage all the messy stuff. If you find yourself mucking around in /usr/bin and /usr/lib (trust me, I did the same thing when I started using Linux) you’re doing it wrong. UNIX doesn’t work that way.
“Windows does a much better job here, all system files are in the Windows directory, all folders are named so you know what the heck is in it by reading the name of the folder”
I don’t necessarily agree with this comment. If you have a lot of programs installed on a Windows machine it can be very difficult to find the one you want. Some of the programs come from some odd company and the directory name for the program is the company’s name. I can’t even begin to count how many times I’ve seen one of those directories and thought “what the hell program is that?”. With Linux it’s just a simple command to find the program:
which <command>
On the other hand it is a pain sometimes when you don’t know what the command is. Not all the programs have intuitive names for their commands.
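When you do know a command’s name, the shell can also show you where it lives and what else is installed beside it; for example (ls is just a stand-in for whatever program you’re hunting):

```shell
# Locate a command, then look at what else lives beside it.
# "ls" here is only a stand-in for any program of interest.
bindir=$(dirname "$(which ls)")
echo "ls lives in: $bindir"
ls "$bindir" | head -5   # peek at a few of its neighbours
```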
Firstly, I’d like to make a comment on the subject of too many distros. Having too many me-too distros is bad. There needs to be a balance between creating competition and spreading resources too thin. At the moment, even though there are hundreds of Linux distros, I would still say that the Linux landscape is rather healthy. The reason is that most of these distros are trying something unique (Debian, RedHat, Gentoo, etc.). Hence, they are not reinventing the wheel but instead are creating new ideas and new technology. RedHat is interested in commercializing Linux by standardizing the base. Debian is trying to get rid of version numbering by creating an easy packaging system. Gentoo is trying to make compiling packages easy so people will have everything optimized for their system. However, if these hundreds of distros were all just RedHat Linux clones, I would say there is something definitely wrong. Most of these companies would be doing nothing but trying to rewrite the solution to a problem that is already solved. There just need to be 3-4 clone distributions, enough that they compete but not so many as to steal resources from innovation in other areas.
Now, I’d like to comment on Gentoo. While I definitely see situations where compiling is good, Gentoo has taken it too far. Yes, compiling your server kernel is good. Yes, compiling Apache on the server is good. But compiling GNOME or XFree86? That is going way too far. Unless I want to export them, or create some kind of terminal server, spending 1-2 days compiling them is just too much effort. This is especially apparent when you have to go through this ritual every 2-3 months when a new version comes out. I think Gentoo is a great idea, but they need to create a precompiled package list, like Debian, so that I can choose what I want to optimize and what I don’t really care to optimize.
Perhaps in the old days stuff did get dumped in the Windows directory, but the Windows directory is no longer flat; it has much more structure than it used to.
> Dumping everything in the C:\WINDOWS directory does not strike me as an elegant solution to file placement. There is a standard for file placement in linux, and it is fairly rational.
> I think the most rational of the bunch is OSX, but that’s just my take.
> Cheers,
> prat
Gentoo does have binary packages, and they are dealt with by portage in much the same way as emerging source code.
I think they will provide binaries with the imminent 1.4.
You can also download an older install CD with binaries at the moment, though it’s quite outdated.
Why? Is RPM any different from .deb? I really want to know.
Debian has apt, and now most RPM distros do too. That makes me think there is no longer any point in choosing Debian as a normal user.
Also, I won’t have to search all over the web for good Debian sources and worry whether they are stable, testing or unstable.
With apt4rpm I don’t think there are a million sources.
gentoo does have binary packages, but really, do you trust them?
The LSB doesn’t require you to use RPM natively, just to have a way to handle RPMs; Gentoo uses rpm2tgz, unpacks them into the sandbox, then installs them into your system.
I’ve tried FreeBSD, but gentoo+glibc(nptl)+linux 2.6.0-test1 is just erotic, no ifs, ands or buts about it.
But compiling Gnome or Xfree86? That is going way too far. Unless I want to export them, or create some kind of terminal server, spending 1-2 days compiling them is just too much effort.
Well, I guess it depends on the speed of the computer. XFree takes ~2 hours to compile on my AXP 2200+. KDE (the base package without the addons) takes ~3 hours ’cause it needs to compile Qt. IMO, the only real bitches to compile are Mozilla & OpenOffice, and you can get an up-to-date binary package for both (Mozilla-bin & OpenOffice-bin).
I think Gentoo is a great idea, but they need to create a precompiled package list, like Debian, so that I can choose what I want to optimize and what I don’t really care to optimize.
I agree on that. Personally, I prefer to compile, but I understand why some people don’t. I once compiled Gentoo on a Duron 900 and it took about 2 or 3 days to compile everything I needed. There are some big programs where compilation isn’t really necessary for most of us (Xfree, KDE/GNOME…) too.
Spoken like someone who doesn’t actually use Gentoo. You very rarely have any time when your Gentoo machine isn’t usable because of compiling — only the first time you install it. After that, all updates are incremental. You start an “emerge world” before you go to sleep, and it’s 100% up to date in the morning. Time required to maintain a system is an absolute non-issue with Gentoo, because (unless you don’t sleep) it works on your downtime, without you having to manually intervene. That’s how computers should work, and is very unlike the awful Windows Update manual-labor mess.
The real issue with Gentoo is that people who like to play with all sorts of different software have to be patient and allow new programs to compile. I don’t diddle with my software config too often, and I don’t care if I have to wait till the next day to get a new piece of software installed. In fact, even taking into account compile time, I still get the software before everyone else, because Gentoo ebuilds are often available the minute source packages are released (or, in the case of high-profile projects like KDE, before the source packages are released, rather than a week afterwards when a range of binary packages are built and released).
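The overnight routine described above is typically just a cron job. A hypothetical crontab entry might look like this (the 3 AM schedule and the exact emerge flags are illustrative, not a Gentoo recommendation):

```shell
# Crontab fragment: sync the portage tree, then rebuild world nightly.
# m  h  dom mon dow   command
  0  3  *   *   *     emerge --sync && emerge --update --deep world
```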
Is that why you name them usr, bin, opt, lib, etc.?
It’s not a technical limit; it came rather out of laziness… in the olden days, before command-line auto-completion, when you had to type these paths all the time, it made sense to make them as short as possible.
You get used to it pretty quickly, btw.
Yup. Back in the days when UNIX was command-line only and had no tab-completion, it was a great usability improvement to have all those three-letter names. From a command-line perspective, it’s great not to have to type “/UnixSystemResources/LocalMachine/Binaries/SomeObscureProgram”
I know I’d rather type out /usr/bin/whatever. On the other hand, in a GUI-based OS, with tab-completion in the CLI, it makes more sense to me to hide the three-letter names (They’re still there, you just never see them; after all, they do serve a purpose still), and have something like “/Applications/Web/MozillaFirebird.app”. You end up typing “/Ap[TAB]We[TAB]Moz[TAB]”.
In fact, even taking into account compile time, I still get the software before everyone else, because Gentoo ebuilds are often available the minute source packages are released (or, in the case of high-profile projects like KDE, before the source packages are released, rather than a week afterwards when a range of binary packages are built and released).
Unless said project happens to be Mozilla, in which case they take ages to release the source so everyone else has 1.4 well before you do.</whinge>
That sort of file system should be the goal of the Linux developers. But Linux is a bit of a one-way street, isn’t it? There’s no changing certain aspects of it for the better.
So, learn to deal with the complicated mess, and perhaps provide support for the many who will need it.
“Where is my VMWare installed? Star Office?” etc.
I’m of the mindset that I don’t care if there’s 1 distro or 10,000, so long as what works in one distro works in all of them. Of course, it’ll be a cold day in hell before that actually happens, which only leads me to conclude that Linux users must enjoy only having < 1% desktop marketshare.
Come on guys … is it really that f**king difficult to come up with a standard package managing scheme?
I like Dan’s idea where you come up with a standard, not an implementation. The LSB requiring the use of RPM (or at least requiring a distro to be compatible with it) is like the W3C mandating Internet Explorer as the web standard instead of XHTML.
Dependencies are dependencies – if a program requires libdvdread on the system to function, that isn’t going to change no matter what package manager you’re using. Why can’t we come up with some sort of standard for polling the system for packages and a standard for installing packages so any package manager can scan the system and find out what is installed and what is needed to install something else?
Hell, maybe you could come up with some sort of way to poll any program on the system and have it give you a list of what dependencies it needs, kinda like how you can tell which DLLs an executable uses on a Windows system.
Having something like this would mean you could download an .rpm, .deb, .tgz, or whatever… and install the same package on any LSB-compliant distro. Surely something like this is not impossible?
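Part of this already exists at the binary level: on Linux, ldd asks the dynamic linker which shared libraries an executable pulls in, much like inspecting a Windows EXE’s DLL imports. A package-level standard could expose the same kind of query:

```shell
# List the shared libraries a binary depends on (/bin/ls is used
# here only because it exists on virtually every system).
ldd /bin/ls
```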
If we want OSS to succeed, we need to have as many projects as possible, and that goes for distributions as well. Period.
If all of those distros make their own file system and their own packaging wrapper, where would that get us? Nowhere!
I think, and this is only my opinion, that a standard file system layout (like Windows NT 4/5 and Mac OS X have) would be good (should I say professional?) for Linux as a kernel.
I said it before: only Mr. Linus, who owns the trademark, has the power to demand it.
Let’s say that, unlike Unix, Linux *must* comply with _these_ rules on the file system for commercial software, and with this (new!) kind of *packaging wrapper* (say, package.bin.lin) for commercial/ordinary software vendors to *claim* their product is *Linux software/binary* compatible (just like it happens with the BSDs!).
That would clear a lot.
Besides, kernel 2.6 is almost out. There is nothing stopping developers from including some install routine that would output something like:
“This (rpm/deb/tar.gz) package is NOT compliant with the new 2.6 kernel packaging system; use it at your own risk or contact your software vendor!”
“An 8-letter limit? Like DOS?”
Yes! (It will look like Win32. Good.)
… is it really that f**king difficult to come up with a standard package managing scheme?
I like Dan’s idea where you come up with a standard, not an implementation. The LSB requiring the use of RPM (or at least requiring a distro to be compatible with it) is like the W3C mandating Internet Explorer as the web standard…
Dream on !
No one can demand that.
Forget about the LSB. It will never be enforced (it’s a free standard; although it’s good, it will never go anywhere). Linux will continue to suffer, like 95% of Linux users
(who are not software/kernel developers, or don’t have the time, and only want to use software and GUIs on the distro *they* like; and Red Hat, Mandrake, SuSE, Debian and Slackware together have less than 45% of the Linux market [which is 4% of the desktop]; and plenty of source code doesn’t compile equally because of ‘config’ files!) (oops).
______________________
I wish too it was so, but it is not.
What a shame.
Sorry if I am whining. That is real life. Life and truth can hurt.
Like many I am not satisfied with Linux today.
One of the posts asked about the difference between package managers. Surfing http://cbbrowne.com/info/total.html I found this link: http://www.kitenet.net/~joey/pkg-comp/ which does a very good job.
On the question of short command names and three letter directory names, remember that unix was originally used with very slow teletype machines. Short names decreased the amount of typing a user had to do as well as the amount of typing the teletypewriter had to do. It influenced a lot of the unix philosophy.
Personally I look forward to either doing my own Linux from Scratch http://www.linuxfromscratch.org/ or trying out GoboLinux http://www.gobolinux.org/ which I first read about here on OSAlert.
My question is: what compatibility issues have people had? I know that some files are located differently between distros, but it seems to me like most of them are in standardized places. I constantly hear people complain about standardization in Linux, but what specific issues are there from a system standpoint?
I’m not talking about user interface issues but specifically about supporting different programs. I know about the packaging issue, but that can be worked around. Most distros come with more libraries than you would ever need and contain both KDE and GNOME libraries. Most binaries and config files are located in standard places, but I realize that there is a conflict between putting things in /opt or /usr or /usr/local or whatever. Shouldn’t properly written programs check for this, or a proper installer make the necessary arrangements for a program?
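One common convention for such checking is for a program to derive its install prefix from its own location at run time instead of hard-coding /opt or /usr. A rough sketch of the idea (ls stands in for the program’s own path, which a real script would take from $0):

```shell
# Derive an install prefix from a binary's location rather than
# hard-coding it. "ls" is only a stand-in for the program itself.
self=$(command -v ls)                      # e.g. /usr/bin/ls
prefix=$(dirname "$(dirname "$self")")     # strip /bin -> prefix
echo "install prefix: $prefix"
```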
I’m in the minority here once again, as I find the Linux file system structure, in particular, the least cumbersome. Lord, could it be much easier!?
/bin
Common programs, shared by the system, the system administrator and the users.
/boot
The startup files and the kernel, vmlinuz. In recent distributions also grub data. Grub is the GRand Unified Boot loader and is an attempt to get rid of the many different boot-loaders we know today.
/dev
Contains references to all the CPU peripheral hardware, which are represented as files with special properties.
/etc
The most important system configuration files are in /etc; this directory contains data similar to that in the Control Panel in Windows.
/home
Home directories of the common users.
/initrd
(on some distributions) Information for booting. Do not remove!
/lib
Library files, includes files for all kinds of programs needed by the system and the users.
/lost+found
Every partition has a lost+found in its upper directory. Files that were saved during failures are here.
/misc
For miscellaneous purposes.
/mnt
Standard mount point for external file systems, e.g. a CD-ROM or a digital camera.
/net
Standard mount point for entire remote file systems
/opt
Typically contains extra and third party software.
/proc
A virtual file system containing information about system resources. More information about the meaning of the files in proc is obtained by entering the command man proc in a terminal window. The file proc.txt discusses the virtual file system in detail.
/root
The administrative user’s home directory. Mind the difference between /, the root directory and /root, the home directory of the root user.
/sbin
Programs for use by the system and the system administrator.
/tmp
Temporary space for use by the system.
/usr
Programs, libraries, documentation etc. for all user-related programs.
/var
Storage for all variable files and temporary files created by users, such as log files, the mail queue, the print spooler area, space for temporary storage of files downloaded from the Internet, or to keep an image of a CD before burning it.
Excerpt from,