Earlier this week, we ran a story on GoboLinux, and the distribution’s effort to replace the Filesystem Hierarchy Standard with a more pleasant, human-readable, and logical design. A lot of people liked the idea of modernising/replacing the FHS, but just as many people were against doing so. Valid arguments were presented both ways, but in this article, I would like to focus on a common sentiment that came forward in that discussion: normal users shouldn’t see the FHS, and advanced users are smart enough to figure out how the FHS works.
There are numerous problems with that sentiment. In fact, the last people I would expect such a statement from are open source and Free software enthusiasts, who generally believe that it is the users who should control and understand their computers and the software that runs on them, instead of big, greedy, and scary corporations. The statement above goes directly against that entire noble idea, hence my surprise at seeing it made so often.
The statement can be broken up into three parts, all of which raise serious questions:
- Normal users shouldn’t see the FHS,
- and advanced users are smart enough to figure out
- how the FHS works
I want to focus on the third part first. “How the FHS works”. As has been shown by numerous people in the comments on the GoboLinux story, but also in various articles around the web, the Filesystem Hierarchy Standard does not work. In theory, the system sounds like it could work: the standard explains which files should go where, so that both developers and users can find these files, as long as they know how the standard works.
In practice, the situation isn’t as pretty and idealistic. The reality of things is that every distribution has its own rules, and organises its files in a way it deems fit. This effectively means that as soon as you have figured out how the filesystem is laid out in distribution Abc, you’ll have to re-learn various things before you know how distribution Xyz lays out its filesystem. And to make matters worse, the standard itself allows for various exceptions, such as X11R6 being placed in /usr.
Because of this, the FHS can barely be regarded as a standard. How much of a standard is a standard that nobody follows (ever seen the /srv directory?)? Consequently, the merits of the FHS are debatable. Its goals are to make various UNIX systems’ filesystem layouts consistent with one another, but there is no consistency among Linux distributions – let alone other UNIX systems such as Mac OS X or the BSD systems. In other words, how valuable is the FHS in reality? Does it really provide the often-advocated advantages?
Which brings me to the second part. “And advanced users are smart enough to figure out”. I am an advanced user. I know my way around stuff. I know how to get tricky things done in Linux, Windows, Mac OS X, and my personal favourite, BeOS. Still, I would greatly appreciate a human-readable filesystem. It would make my life so much easier if I didn’t need to decipher cryptic acronyms and learn ten billion million exceptions for each specific UNIX variant.
Why is it that just because I’m an advanced user, I need to painstakingly invest so much time in learning things that could be done so much more easily? How much easier would Windows maintenance be if the /Windows directory actually made sense? How much easier would it be to fix a broken Ubuntu install if files were located in directories whose names actually explain what they do?
So, why do so many people resist the idea of making computers easier to use not just at the surface, but also at the core? Why do I, as an advanced user, need to learn a lot of difficult and cryptic things? Is it because the developers want to take revenge on other people because they had to learn it themselves too? A sort of geek hazing? Creating job security?
This leaves me with the first part of the statement, which also happens to be the most important one to me. “Normal users shouldn’t see the FHS.” I’ll keep it plain and simple.
Why not?
Seriously. Why shouldn’t normal users see the original directory structure on their machines? Are we purposefully trying to prevent users from learning how their systems work? Isn’t the goal of Free and open source software to allow users to take ownership of their systems again, contrary to proprietary software such as Windows? If that’s the case, then why do we still hang on to an incomprehensible directory structure that pre-dates the first ice age?
If we make the directory structure easier to understand and to manage, by making it human readable and clear, we will make it easier for normal folk to move from merely using their computers, to actually understanding them. I’m harping on the directory structure here, but of course, this goes for a lot more parts of a modern operating system.
Designing for developers, by developers
All three major operating systems today are designed for developers, by developers, and it shows in almost every aspect of computing except the very surface. Developers have stacked layer upon layer upon layer just to make it possible for ordinary people to use these complex beasts we call operating systems. Funnily enough, by providing all these layers, developers actually flat-out admit operating systems are anything but designed for users. If they were actually designed for users from the ground up, they wouldn’t need all those layers. Those layers are the developers saying: “look, we know we have an incomprehensible messy system that even we ourselves barely understand, but instead of actually fixing the problems and making our operating systems better, we’ll just staple another layer on top to hide the ugliness.”
The holy grail of the coming ten years of the operating system world is to make them not only pretty on the surface, but also pretty underneath – logical, structured, human-readable. This is exactly what I’ve been trying to do with my theoretical program management system, which I designed from the ground up to be easily usable by all those hypothetical grandmothers, while still offering advanced functionality that no other operating system offers. I strongly believe that this is the next big challenge for developers on all sides – Windows, Mac OS X, or Linux.
We got computers to work. We got them in every home. We got them to be usable. The next big step is to make them manageable.
If there were an attribute for ‘humane’ directory names associated with the existing directory names, that would be a good start at bridging the gap between the FHS and novice users.
So, /etc could have the ‘humane’ name ‘/settings’ or ‘/system settings’.
Such a surgical change would be fairly straightforward to implement (especially if it were limited to the GUI) without the hacks GoboLinux has to employ.
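Purely as an illustration of that surgical approach (the mapping below is invented for this sketch; nothing on disk changes), a GUI-side table could do it:

# Hypothetical sketch: a file-manager-side table mapping FHS
# directories to ‘humane’ display names. Unmapped paths fall
# through and are shown as-is.
HUMANE_NAMES = {
    "/etc": "Settings",
    "/bin": "Programs",
    "/var": "Variable Data",
}

def display_name(path: str) -> str:
    """Return the label shown in the GUI for a given directory."""
    return HUMANE_NAMES.get(path, path)

print(display_name("/etc"))  # Settings
print(display_name("/srv"))  # /srv (no mapping, shown unchanged)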
But how does that improve the situation for non-English speaking users? Seems to me it only makes it worse for them, or we have to use some sort of an abstraction layer on top of FHS for language-specific directory names.
All of this talk about human-readable directory names seems to imply that Linux users should all learn English, in order to simplify things for English-speaking users.
Why?
As soon as you have a set of standard human-readable names, translation only gets easier – not harder. How on earth do you translate /usr? Or /etc? Compare that to /settings. Or /programs. Or /system.
The desktop environment and cli can easily be aware of the locale and language settings and change the names displayed accordingly. Heck, even non-Latin alphabets could be used. It’d be a massive improvement in localisation.
Yet another advantage.
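A minimal sketch of that locale lookup (the translations below are invented for illustration; the names on disk never change):

# Hypothetical sketch: one standard set of names, displayed
# per locale. Unknown locales fall back to the English name.
TRANSLATIONS = {
    "it": {"/settings": "regolazioni", "/programs": "programmi"},
    "de": {"/settings": "Einstellungen", "/programs": "Programme"},
}

def localized(path: str, locale: str) -> str:
    """Return the display label for `path` in `locale`."""
    return TRANSLATIONS.get(locale, {}).get(path, path.lstrip("/"))

print(localized("/settings", "it"))  # regolazioni
print(localized("/settings", "fr"))  # settings (English fallback)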
Well, if translation is needed, and let’s say your locale was English, couldn’t you just as well translate /usr as /programs and /etc as /settings and so on, too – if you want an intuitive file hierarchy? Just a thought.
The Unix and Linux file hierarchy, with its /etc, /usr, /dev and so on, can indeed look cryptic, at least if you are not an experienced user yet. But at least /etc and /usr are short, which can be a big plus if you want to see full paths to files and use the command line a lot.
Having said that, I do like the GoboLinux ideas, such as its file hierarchy, and have often considered giving that distro a try.
Seeing as more than just programs reside under /usr, you’d have to be more specific.
Please. We don’t really want to localize the system hierarchy, do we?
Can you imagine how terribly complex and fragmented things become if each computer can have a different name for the /settings directory, based on its locale? And what about people using more than one locale?
(begin rant) Probably I don’t get it because I simply don’t understand the real point of localization. It seems a terrible waste of time to me. Computers are tools. Tools require knowledge to be used, and one piece of that knowledge is (basic) English.
To want to live in the modern world and to want to use modern technology without knowing the current worldspeak (that is, English) is utterly beyond my comprehension.
The time spent localizing could be much better spent teaching the people who need localization proper English.
Note that English is not my mother tongue (I am Italian, in fact, living in Italy), so this is not “who cares about non-Anglo-Saxons” talk. (end rant)
Good point. Imagine if computer languages were redesigned for non-English languages. In some sense you have to know at least some English before you can really get into the computer.
Well, I do. But I also see the implications go much further than some admit.
In short, if properly done, it would only mean that you have to set the correct locale in the instance of Konsole where you are about to copy-paste the advice you’ve just seen in a forum, mailing list or IRC channel. For instance, if your system is localized to Italian, but you are asking in an English forum, the pieces of code you copy from there will have English localization, so you have to open a terminal and set English localization (just) for that instance of the terminal. Languages and locales should be like skins.
Yes, that’s not how it works now. Not just in system directories, but also in programming languages and all kinds of technology infrastructure.
I’ve often heard the following “reductio ad absurdum”: If system directories should be localised, then why not APIs, keywords in programming languages and so on? Good point, but you know what, I’m willing to bite the bullet.
The idea that you have to learn some keywords in a foreign language if you want to program is so pervasive that most people take it for granted, but it’s just a product of the monolingual environment in which most of the programming standards were established (hello, ASCII, I’m pointing at you!).
If the first programming languages had been developed in, say, Canada, their authors would have realized that the first thing a parser should do is check the locale of the program, and that every program should declare its locale (at least when distributed outside its original locale).
One crucial activity of national standards bodies should be to establish translations for such keywords. But even if no “official” translation is available, it should be straightforward for users to define their own mappings, let their systems do the translation and interoperate with everyone else about as easily. If, for some reason, a new keyword only has translations to some languages, it should be possible to still use it in a program localized to a language where this keyword is missing, for instance by prefixing its name with a locale-setting expression.
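As a toy sketch of that idea (the keyword tables below are invented; no real language works this way today), a translator could map localized keywords back to a canonical form before parsing, honouring an explicit locale prefix for borrowed keywords:

# Hypothetical sketch: translate localized source keywords to a
# canonical form. A prefix such as "EN/" forces a specific locale,
# as proposed above for keywords lacking a translation.
KEYWORDS = {
    "EN": {"for": "for", "while": "while", "if": "if"},
    "IT": {"per": "for", "mentre": "while", "se": "if"},
    "ES": {"para": "for", "mientras": "while", "si": "if"},
}

def canonical(token: str, source_locale: str) -> str:
    """Map a (possibly locale-prefixed) keyword to its canonical form."""
    if "/" in token:                       # explicit locale, e.g. "EN/for"
        locale, word = token.split("/", 1)
        return KEYWORDS[locale][word]
    return KEYWORDS[source_locale].get(token, token)

print(canonical("per", "IT"))      # for
print(canonical("ES/para", "IT"))  # for (borrowed keyword, locale marked)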
The only people giving some serious thought to the need for localization in programming, AFAIK, are those involved in semantic web technologies. Maybe that’s because ontology building is sufficiently close to natural language that the unfair advantage of being a native English speaker is most evident.
As I said in other posts, I have nothing against English, I find it nice and all. What upsets me, in this case, is the attitude of carelessly hardcoding a particular language into technology, and then expecting people to learn the language if they want to use the technology, as if said hardcoding were the most sensible thing to do.
It’s still “who cares about non-Anglo-Saxons” talk, just made by a non-Anglo-Saxon.
Hi,
If each directory has a standard name and a locale dependent nickname, then hopefully people giving advice would use the standard name instead of the locale dependent nickname.
To make it easier for beginners to learn the standard directory names, a GUI would display both. For example, you might see “/etc (settings)” or “/etc (regolazioni)”, instead of just “/etc” or “/settings” or “/regolazioni”.
-Brendan
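A minimal sketch of the dual display Brendan describes (the “regolazioni” nickname follows the example above; the rest is invented):

# Hypothetical sketch: show the standard name together with its
# locale-dependent nickname, so beginners learn both at once.
NICKNAMES = {
    "en": {"/etc": "settings"},
    "it": {"/etc": "regolazioni"},
}

def label(path: str, locale: str) -> str:
    nick = NICKNAMES.get(locale, {}).get(path)
    return f"{path} ({nick})" if nick else path

print(label("/etc", "it"))  # /etc (regolazioni)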
Hi, Brendan
Yes, that would probably work and would be of genuine help for beginners, but only as long as “etc” and the like are accepted as the standard. My proposal is more along the lines of not having to pick a particular keyword like “etc” or “settings” as the standard for everyone. So, “Settings” would be the standard for those using an English locale, and “regolazioni” would be the standard for those using an Italian locale. The only names that would arguably have to be the same for everyone are the names of the locales themselves, so that if I say (in an IRC channel, in my post, in my computing tips blog, whatever) that I’m using ES_es@Euro, you can copy-paste that string into the “locale” slot of the terminal and then just use my code. It may look more cumbersome, but it would be more general.
One possible reason why this idea sounds so odd when applied to programming language keywords may well be the deeply ingrained unique names assumption (UNA), that is, that no two names refer to the same object. So, if something is called EN/Settings but IT/Regolazioni is another equally valid name for the same thing, we are violating the UNA. Well, some languages (IIRC, OWL is an example) don’t make this assumption. So what’s wrong with violating the UNA? It’s not fatal, since no ambiguity results from it. The problem is that code becomes less readable: if there’s just one name for each thing, you can hope to memorize them all.
Anyway, the UNA would still hold (if so desired) inside each locale, as long as all keywords have a translation. The only occasion where the UNA would be patently violated is when a foreign locale has to be used for a particular keyword inside a locale lacking this keyword. For instance, if you are using the IT locale to program in C, and there’s no translation available to the “for” keyword (that would be “per”), you would have to insert a foreign keyword, with its locale marked, something like “EN/for” or “ES/para”.
Hopefully, UNA violations inside a locale, for traditionally UNA-assuming programming languages, would be rare, as long as the importance of comprehensive keyword translation is recognized.
The fact is, everyone using English means everyone using a standard.
A standard is always better than no standard.
Natural, homeland language is all good and well (heck, I like Italian more than English) for personal talk purposes. But when a standard is required, a standard should be enforced.
Why someone would want to build a complex system to allow for non-standards, when people could just be taught the standard (and such a standard is useful for much, much more than simply reading filesystem hierarchies), again escapes my comprehension.
I have issues with both sentences. English is not a true standard, it’s just a “de-facto” standard, much like the MS Office file formats.
Your complaint seems to be that people can’t leave politics aside and pick whatever looks like a dominating option as a standard. But standards are all about politics, and that’s the whole point, IMO: a good standard must be perceived as common ground by all the parties making up the standard’s intended audience.
A true standard should be carefully designed with input from all the interested parties, beginning with a minimal common ground and then building upon it, trying to keep it simple, fair and balanced. Whenever possible, if parties disagree on how to do or how to express something, the standard should provide means for each party to specify their chosen option, rather than picking one of the competing solutions.
A bad standard may well be worse than no standard. At least a no-standards situation is fair, so it’s a much better starting point for building good standards.
Of course you can ignore all of this, take the dominant player’s caprices and call them standards. And then wonder why on earth so many people don’t adhere to them.
Again, calling English “the standard” begs the question. English, the standard… says who? In my view, it’s just the dominant player’s option in a no-true-standards situation. The “complex” (not really) system I propose is an attempt at creating a standard, one that lets people agree to disagree and still communicate. Again, it only seems odd because modern computing was developed in a largely monolingual environment. If it had been an international (and inter-lingual) effort from the beginning, locale-agnostic provisions would seem only natural.
Regarding direct human (spoken and written) communication, learning a second language up to near-native level is *hard*. As long as the “standard” language is native for some of its speakers and not for others, the native speakers have a *huge* advantage. The only way a particular natural language could work as a fair standard would be for all the other languages to be effectively eliminated, so that this “standard” was everyone’s first language. Given that other strategies are available or at least conceivable (auxiliary languages like Esperanto, real-time translation), is it so surprising that many of the affected societies resist this trend?
MS Office is not a good standard because it is not free and it is undocumented.
English is both free and documented.
So it’s a wrong comparison.
A de facto standard is not always a bad standard. Take Linux, for example: it’s far from being a standard, formally, but it’s a good example of a de facto free standard in the Unix world.
The only problem with the usual de facto standards, like the Office file formats, is that they are neither free nor documented. If these file formats were free and documented, I’d have no problem using them.
In scientific contexts, for example, English is the required de facto standard. No serious journal publishes something in a language other than English.
The same holds for a lot of other disciplines.
*All* language standards emerged as de facto dominant players. It has been Greek, Latin, then German, then French, and now it is English. One day maybe it will be Chinese, who knows.
Please. I’m not talking about writing all newspapers, novels and so on in English. I’m not saying I want to eliminate all languages (in fact, I’d like language diversity to be preserved as a value). But there are contexts in which it is better to know a bit of a standard language than to rely on your own. Technical contexts, in which technical information has to be shared worldwide, are among them.
Also, pushing people to learn the current de facto standard language can be hard, but it gives them an enormous advantage that goes much, much further than simply being able to read computer documentation.
They are not resisting any trend. Quite the contrary. English teaching is compulsory in most, if not all developed countries, and probably also in most non-developed ones. Here in Italy there’s a lot of concern on the relative illiteracy in English of our population. I guess in a couple of generations a substantial part of humankind will be mostly (also) English speaking (if another language does not jump in as the new de facto standard).
But English is still a de-facto standard.
Linux is more of a free implementation of a standard (POSIX). In particular, the fact that so many people use GNU/Linux as opposed to, say, FreeBSD, doesn’t make FreeBSD users’ lives more difficult, as they never worked under the assumption that all the world would use FreeBSD. OTOH, when one of the GNU/Linux distros, with all its quirks, tries to be “the one Linux distro”, and third-party developers only build packages for this distro, then indeed a de-facto standard is being imposed. I think that’s a bad thing.
Office formats are quite similar to each other, at least from a user POV. You don’t have to learn the new format, the computer does it for you (you do have to learn small differences in the tools available, but that’s not intrinsic to the format). The situation with human languages, measuring systems and other conventions is very different in this aspect.
Sure, I’m not saying that other languages have a higher moral ground over English. All I’m saying is that it’s perfectly logical for speakers of a language to resist the attempts to impose a foreign language as a standard, no matter how successful these attempts are turning out to be.
I see that’s not what you want, but I think that’s the most likely outcome. If only English is admitted in technical communication worldwide, it will soon become the only language admitted in any kind of public communication.
That’s the trend I’m talking about. Some sectors of these societies are enthusiastic about it, but others aren’t. For instance, here in Spain there’s an institution called “Instituto Cervantes”, which is concerned about the expansion and “good health” of the Spanish language here and abroad. I guess there are similar institutions in other countries. They tend to see a problem in the increasing predominance of English.
I would much prefer that some international agreement puts an end to all the de-facto standard dance, maybe by agreeing on a suitable auxiliary language, or on more effective translation protocols.
But translation would have to be done completely (and not just in parts, as you can see in KDE). Not only the directories would need to be translated. What about the system’s utilities, the commands? And their manpages? Maybe the kernel interfaces and library functions?
Personally, I prefer comfortable English naming across the system to a subset of poorly done translations into another language.
/usr = UNIX system resources
/etc = et cetera (and others) – well, this is not easy to translate; /system_settings would definitely be a better name to explain what’s inside this directory. But /etc has historical reasons. Anyone remember /etc/mount or /etc/fsck?
It would be good if those new names had the same relationship to their contents as the “old fashioned” names have – see the difference between, say, /usr/bin and /usr/local/bin; the difference is well intended: a difference in purpose causes a difference in naming convention.
That’s what desktop environments (at least the “two big ones”) already successfully do. They add a layer of abstraction that leaves the “old fashioned” infrastructure intact, while providing a “more humane” surface to the user.
The standardization on the English language enables help (with illustrating examples) across the language barrier. If someone from Russia asks how to do this and that, I (from Germany) can answer with a script that solves the problem, which he can use 1:1 without needing to translate it from German to Russian.
That’s why I think the desktop environments should put more emphasis on good (!) translation, instead of adding feature after feature, just to scare a German user with an English error message that makes the user throw Linux out of the window. Yes, it is that way in my country – one English word and whoosh! away it flies. Strange languages scare Germans.
That whole “/usr = Unix System Resources” thing is not really confirmed. Most people think it’s a “backronym” invented long after the /usr structure was in effect.
And what the hell is “et cetera”!? I know what the term means, but if 50% of a usable Linux system lives there, that’s poor categorization, no?
Frankly, the current layout sucks. From one system to another, the same program could be installed in ten different locations (/bin, /sbin, /usr/local/bin, /opt, to name a few common ones).
Whatever the changes, something should be done about this obscene mess. The only logical argument to keep it is that millions of programs follow its ridiculous rules.
I think you’re right, I just wanted to illustrate what /usr is today.
This is correct. Traditionally, /etc resided on the / partition and contained essential binaries to run and maintain a system at a low level, for example when problems occur mounting further partitions. So you did have things like /etc/INIT, /etc/rc, /etc/mount or /etc/fsck, all of them in the same directory. At some point, there was a consensus to use /etc for the system startup scripts and their configuration, and later on, for the configuration of installed applications (Linux: sometimes /etc; BSD: /usr/local/etc).
This is not due to the concept in general, but to the developers, maintainers or distributors not following recommendations and standards (if you may call them that), or just to their different interpretations of how to use existing structures. Again, BSD is cleaner here than Linux, but not universally, as I have to admit.
While projects like GoboLinux have interesting approaches to abstraction, most of the good GUI solutions still have the mess you described under the hood. If you can’t get the developers to reach a reasonable consensus, this mess will only get worse, I believe.
As I said in my post, this basically entails an abstraction layer on top of the file system. Why? What’s the point? What’s the gain? Why is this any different from simply using symlinks to point /programs at /bin, or the equivalent language-specific symlink at /bin? At the end of the day, people are going to have to deal with /bin because it is the baseline.
Right now, somebody looking for support in …insert distro… can find information regardless of language or environment, because it refers to /bin.
How does using language-specific references enable somebody who doesn’t understand English, or understands it poorly, to use any sort of support or other Google-generated information to understand that /programs should be translated to their specific environment settings?
How will anybody packaging applications using the current tools be able to have their applications layout properly when there are 100+ potentially different environment-based directories to deal with?
The only way this can happen is to have some sort of baseline point of reference that the system understands, whether it is dealing with environment variables for interpreting what directory names should be, or for installing packages. Which means we’re using an abstraction system. Which means, ultimately, we’re still relying on /bin as a baseline.
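To make that concrete, the symlink approach mentioned above amounts to something like this hypothetical sketch (run against a scratch directory, never a real root; the alias names are invented):

# Hypothetical sketch: friendly names as plain symlinks onto the
# FHS baseline. Whatever label the user sees, everything still
# resolves to bin/ underneath.
import os
import tempfile

root = tempfile.mkdtemp()                # scratch stand-in for /
os.makedirs(os.path.join(root, "bin"))   # the baseline directory
for alias in ("programs", "programmi"):  # per-locale friendly names
    os.symlink(os.path.join(root, "bin"), os.path.join(root, alias))

print(os.path.realpath(os.path.join(root, "programmi")))  # .../bin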
Trying to make the FHS “user friendly” is no more useful than slapping lipstick on a pig. Yes, it’s ugly, and yes, with the benefit of hindsight it could be implemented much more cleanly. But at this point, changing something as substantial as the FHS will carry less benefit than the effort required to do it. As archaic as the FHS is, and despite the fact that Linux distros have their own interpretations, it doesn’t change the fact that there are decades’ worth of applications and documentation that rely on it to some extent. It also doesn’t change the fact that, at the very least, one common baseline is that every user is dealing with equally archaic directory structures regardless of their language. The archaic nature is almost a universal language in itself.
I’m one of those people who believe that the average user should not have to deal with file structures at all. User interfaces should be abstracting this, not the platform itself. We’re slowly reaching a point where file browsers will start categorizing files and information based on meta-tags and contextual references. This will ultimately make things more intuitive, and non-ethnocentric. That’s what we should be aiming for.
And if anyone wants to touch the filestructure, they should know what they’re doing. I don’t see the benefit to changing such a core component, ugly as it is, to address perceived corner cases.
Linux and friends have much bigger hurdles to cross for wider acceptance and improvements in ease of use. Pointing fingers at the FHS is like pointing fingers at Adobe for not porting Photoshop as a reason holding Linux back from further adoption. It’s a popular deflection.
Sorry, but just my 2c…
If all the applications install correctly and get their menu entry, then who gives a rat’s ass where the binary resides?
As long as the application can be upgraded or get fixes when necessary via a sane package manager.
That’s kind of my point. The emphasis should be on FHS transparency via the user interface.
I don’t think it is such a good idea to translate directory names.
For Example:
You, being Dutch, run your $OS with the locale set to Dutch. Now, you have to explain to a Chinese user, in English (he doesn’t speak Dutch, you don’t speak Chinese), where he would find $PROGRAM or $FILE. He is, of course, running $OS with the locale set to Chinese.
You repeatedly tell him to go to his “Programs” folder. He keeps telling you he doesn’t have one. Let’s assume that the translation from Chinese to English for that very same folder is “Applications”.
Now apply this to the entire directory tree.
It’s even worse when naive or inexperienced developers make assumptions about existing directories. My Windows XP is set to German, and every once in a while there is a program that installs to “Program Files” instead of “Programme”.
While translating, more often than not, you end up translating *concepts* rather than *words* because it’s the only way to make sense.
While I do think I18N and L10N are really great and necessary, I also think they shouldn’t be applied (naively) to file system hierarchies.
I am more or less indifferent to how a file structure is designed. From Windows to Linux to Gobo and everything in between, they all make some sense, but all require the layout to be memorized. Where is the user’s data kept? How about his configuration settings? Quite frankly, I only want to have to learn so many layouts.
Ideally, a layout should keep program files easy to find and separate from any data and configuration, which should also be easy to find. I want to back up quickly and not miss configurations or user data. I can’t think of any major file structure that fails in this regard worse than the Windows structure. Quick: where are the Outlook and Quicken user files?
If the only major advantage to Gobo is that multiple versions are easy to keep, is that a big enough benefit to justify learning a whole new system? That sounds like a personal decision to me.
BeOS only shows folders in the Tracker that are “real”; there are still /var and other such folders. But I agree 100%: BeOS has the best folder layout I’ve ever come across. It’s easy, and you understand every bit of it just by reading the names of the folders; and if not, there is always excellent documentation.
Definitely. It’s pretty easy to guess what you’ll find in ~/config/settings/, ~/config/add-ons/, or apps/.
The extent of the filesystem abstraction in BeOS is that there’s the /boot/ folder – and in the terminal, you can go all the way up to /, but the Tracker treats /boot/ as the root of the filesystem (or the Desktop is the filesystem root, depending on how you look at it – since /boot/home/Desktop/ is invisible in Tracker). Above /boot/ are symlinks for compatibility with POSIX software.
As a desktop user I am not really bothered about the layout used by the Operating System, providing it is hidden from me.
The only layout I am really interested in is my own home directory (e.g. pictures, music, documents, etc).
I suppose the layout is more important to people administering servers, who have to make manual configuration changes in /etc…
Personally, I think the /home (or /users) directories should start out empty.
It is me who decides how to keep my own files, not the operating system. Yet recently, all operating systems have started to drive me crazy with things like “Images”, “Music”, “Documents” (heck, even Ubuntu started with this rubbish). Are there any serious reasons for users to want that?
Don’t get me wrong I am not suggesting an enforced structure for the home directory.
What I am saying is that I am only interested in the structure of my own home directory and that should be left to me.
I don’t want something enforced on me like “My Documents\Music” and also don’t care about the underlying layout of the OS (e.g. “/usr /usr/local/ /opt”).
I also hate it when operating systems like Windows enforce stuff like “My Documents\Music” etc.
Not as in “I won’t use this system anymore” but as in “OMG this can’t be happening again”.
Each time I have to deal with “invisible” .directories that keep accumulating files but won’t ever be deleted by uninstallers, and “documentation” packages that hide the text files you want to read in some obscure directory deep inside /usr, I kill a Linux developer.
In case you feel sorry for them: they should have developed a NetHack version of cd, so that browsing the endless dungeons would be somewhat more interesting than hacking through them.
That’s very debatable, because it means that if you want to back up your account, you need to back up several directories, since you also want to back up your program settings (bookmarks, passwords, etc).
And many people will forget to do this.
The directories’ names are capitalized.
Hatred.
Really though, most users never touch the system directories, and people who actually have to will be able to remember them. It would be nice if the various distributions could standardize on a (non-capitalized, for the love of god) directory structure of course.
who generally believe that it is the users who should control and understand their computers and the software that runs on them, instead of big, greedy, and scary corporations
The problem is not that FOSS people don’t want “the people” to use, understand and control their software. The problem pops up when larger numbers of people start using the OS who don’t understand how it works, but enjoy considering themselves professionals, and from time to time they come up with pretty delusive ideas about how things should work. Don’t tell me a sane developer would consider an ad-hoc bunch of links dropped on top of a file system to be anything worth even thinking about seriously, let alone introducing into the OS.
I’m not against any change for the better in any aspect of my earthly life, which includes the Linux file/directory hierarchy to an extent – a very small one, but still. But when I see that change lead into a smoky cloud which is not even a bit thinner than the one we’re currently in, then I really don’t feel the need to be enthusiastic about it.
And again, about users being in control or not: that does not depend on how their system directories are spread all over their drives, not the smallest bit. Does the current one represent the best possibility? No. Do we need something else? Maybe, if it’s proven to be better, and not just by the “professionals” but also by the professionals. Would the average user crowd care about the change? Absolutely not. All they would notice is that these crazy people haven’t even delivered the best OS yet, but they still keep changing parts of it most won’t even notice; they’ll just wonder what the fuss was all about.
Other than the odd folder names, I really don’t see anything wrong with the FHS. Maybe making the names slightly more human-readable, like kristoph suggested, would be good, but I _really_ like having all executable files in a single folder, and all libraries in a single folder.
In Windows it can be a challenge to find your application, since Program Files often contains names of companies, and when you do find its folder, it often contains other junk with it, like DLLs that mean nothing to an average end user.
I prefer having all libraries in their own place, so that as a programmer I know what I have access to. Stuff like /etc and /dev could be nicer, but the structure is still very sound to me.
My only gripe would be that some programs use weird directories, like /usr/local/firefox and /opt/gnome. I agree with setting a standard, but I think GoboLinux’s way is trying to be like Windows in a way that isn’t really relevant to end users.
Other than the odd folder names, I really don’t see anything wrong with the FHS. Maybe making the names slightly more human-readable, like kristoph suggested, would be good, but I _really_ like having all executable files in a single folder, and all libraries in a single folder.
The problem with this, as Gobo pointed out, is that in the FHS all your executables are NOT in the same folder. There’s /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin, /opt, etc. It doesn’t get any better for the libraries, documentation or anything else, really, and the only standardization across the various distributions is that they can use those folders, not what must go where.
Gobo’s effort does seem to accomplish a great deal for both developer and user. Developers can still use /etc, /bin, /lib, but the end user never sees them! Their ability to install and run different versions of the same libraries would be a great help in a lot of situations. For instance, I installed OpenSuSE’s official releases of TCL 8.5.2 and VTCL 1.6.0. VTCL won’t run on TCL 8.5.2 and hasn’t been updated yet; how does your average computer user figure that one out?
These sorts of problems are not isolated to Linux, and forward-looking ideas like Gobo might work or might not, but at least it puts a spotlight on the problem and gets people thinking about it.
That’s the problem though: they aren’t. You have /sbin because you don’t want superuser apps in the normal user’s path variables. You have /bin, which has the basic apps required to use bash, because years ago /bin and /usr were always on separate partitions (if not drives), and people wanted the system to be usable even if /usr failed to load. /usr/bin is where things installed with the OS go, and /usr/local/bin is where stuff compiled on that machine goes.
In reality, if /usr isn’t loading properly, 99% of the time having access to sh and vi isn’t going to do it for the person, and they are just going to end up rebuilding the machine. /usr/bin now has plenty of apps that require superuser creds, so at this point it’s a pretty arbitrary line between what goes in sbin and what doesn’t. Now that we have package managers, /usr/local/bin is also a pretty silly convention, since 95%+ of what is on your machine is going to come from the package manager, and if you don’t package up the stuff you compile yourself, you are really asking for the pain you are going to be inflicting on yourself.
And that is just one thing getting spread all over the place, for reasons that are now irrelevant, or at least irrelevant for most installations. I mean, with the advent of tab completion, is it really worth dropping the ‘e’ in ‘user’? Most of the system folders are gibberish words, and there is no reason for that any more. Sure, you would still need to learn that /applications is where your executables live, but at least it is a proper word.
First of all, I would say it is a lot closer to OS X than it is to Windows. Secondly, /usr/bin has stopped being relevant to anyone for at least a decade. Being archaic for the sake of being archaic is only really helpful for the people who have been using it since the time there were reasons for these things.
Either I’ve misunderstood or you have!
“user” without the ‘e’ would be “usr”, and that is not short for user, but for Unix shared resources.
But then again, I might have misunderstood your post, making the quote at the top here out of context?
Nalle Berg
/.nalle.
Not so much. /usr is likely just shorthand for “user”, as documented a zillion times everywhere. Unix System Resources was introduced after the hierarchy was already in effect.
http://www.definethat.com/define/7110.htm
http://lists.freebsd.org/pipermail/freebsd-chat/2003-December/00171…
I suppose you believe f(s)ck stands for Fornication Under Consent of the King, too.
Backronyms ftl.
Quite apart from this article’s somewhat demented rantings, there really are problems with the FHS. None of them have anything to do with being cryptic or unfriendly to average users.
Here are a few:
Configuration goes in /etc/. Which files go in /etc/ and which go in their own subdirectory of /etc/? If /etc/ is for settings, why does /etc/init.d/ contain scripts? (Set aside for a moment a debate about the fine line between a config file and a script.) Under what circumstances is configuration stored in /usr/lib/appname/ and/or /opt/appname/etc?
Libraries go in /lib. Or is it /usr/lib? Which libraries go in which directories? Why do some libraries get stuck in /usr/lib/appname/? Why are some, sometimes, under /opt? Why do I sometimes see a /lib/firmware? Is a firmware blob a library? Where does the FHS say firmware goes?
Does each program get installed into /usr/lib/appname? If so, do you symlink from /usr/bin to the binary or place the binary directly in /usr/bin? If I’m using /opt, do I symlink or adjust the system PATH? What’s the difference between /opt, /usr and /usr/local? What about games, where do they go?
What is /mnt for, and what directories will you find under it? What is /media for? Where should /floppy and /cdrom really be located?
Is there any structure to /tmp? What is the structure of /usr/local and what determines which programs are installed there? What’s the difference between /usr/doc and /usr/share/doc?
What directories can I expect to find in /var? How do I decide whether a file should be created in /var or /tmp? If I have a web site where should the files for it be stored?
If you think you know the answers to some of these questions I have a surprise for you: *every unix does some of them differently and even distributions of Linux can’t all agree*. The FHS does not even attempt to answer some of these questions except in mostly useless ways.
You’re bringing up valid questions.
This is really something (along with the lib/ problem) that I find a bit strange in Linux. In BSD, there’s a difference between “the OS” and “installed packages”, while Linux does not have this kind of separation. BSD puts the system’s stuff in /etc/, and local parts (not belonging to the system) in the respective /usr/local/etc/ directories. You can conclude the nature of a file from its name and its place within the file hierarchy.
This one continues the aspect mentioned before. Some Linux distributions have /opt/, others don’t. In some cases, the purpose of lib/ and share/ subtrees is merged, too.
In general, additional software should go into /usr/local/, where the basic subtrees of the system (etc/, lib/, include/, bin/, share/) are replicated with the respective purpose. Games should obey this rule. But as I mentioned before, it’s hard to say which things do not belong to the system, because Linux distributions do not differentiate between the OS and installed packages; in fact, the “OS part” is a set of packages chosen by the creator of the distribution. Rule: everything within /usr/local/ is extra stuff, everything outside is the system (mountpoints and home directories not mentioned here).
And /opt/… I think it has initially been intended as a directory that contains extra stuff that does not obey the subtree rule, i. e. no etc/, lib/ or bin/ separation, instead a name of the application with its own subtree.
Following the rule:
/usr/local/bin/foo
/usr/local/lib/libfoo.so.2
/usr/local/share/doc/foo/readme.txt
Not following the rule:
/opt/foo/foo
/opt/foo/libfoo.so.2
/opt/foo/readme.txt
/mnt is intended as a temporary mount point for the system administrator (according to man hier).
/media is intended for (usually auto)mounted media, it contains a subtree for the devices (e. g. /media/cdrom, /media/dvd, /media/stick) or mountpoints are created from a label provided by the media itself or by the class of the drive (man geom).
Although access to /floppy and /cdrom is much easier than to their successors within /media (due to less typing), these mountpoints may already be deprecated.
No, because programs or users that use /tmp should keep an eye on the stuff they put there on their own. This is because /tmp may disappear at system shutdown, or may be empty after system startup, for example when /tmp leads to a RAM disk or some system setting clears /tmp at startup. It’s the system’s waste dump.
I mentioned this before, it’s complicated in Linux because it’s hard to determine what’s local and what’s not. In BSD, it’s obvious.
Only the last one should exist, an assumption from priority and precedence considerations.
Usually databases and logs that are created and managed by programs, not by the user.
If it’s okay to lose it – /tmp. If it should be kept – /var.
In ~/public_html?
You can see this from my explanations, and some of the reasons why it is so. Although much of the stuff is well intended, there are inconvenient uses of the existing structures, maybe due to sloppiness, or due to general problems of interpretation. There are many differences between the many Linux distributions, and among the UNIXes, too.
These are not questions; they’re rhetorical or, rather, they’re designed to make you think of the answers and wonder whether they are truly correct. I already know the answers to them, as far as I’m concerned, it’s just that the answers vary depending on who you ask.
I knew some BSD user would mention this. I hate that in BSD some configuration goes under /usr/local/; this makes no sense! Why isn’t it under /etc/local/? That would be much better.
And no, you can’t always be sure (as a layman) whether something is part of the “base OS” or not.
Defining “additional” is hard. For the BSD guys it’s ports, fine. For Linux distributions it’s either things not managed by the package manager or things not packaged by the distribution vendor, or it’s a mess. Usually it’s a mess even if one of the other rules was supposed to apply. If you make a strict distinction between local and base, why is there no /var/local? /etc/local? For that matter, why isn’t base stuff under /base/ and local stuff under /local/? That would really make it all clearly different.
But why are games a special class of binary? Why do I have /usr/games/? Supposing on BSD there are no base-system games (I really don’t know) why are games in /usr/local/games and not /usr/local/bin? On Linux some games are in games, some are in bin. Some game-related binaries (like wesnothd) are in games when they’re not really games at all. If games are a special class of binary, why not development-related binaries like gcc or bison? Why not make database utilities special too?
/opt typically has this structure:
/opt/appname/etc/
/opt/appname/bin/
/opt/appname/lib/
/opt/appname/share/
Unless it has some other structure or none at all.
But even if you were right, why should there be any exception to the rules officially allowed? I despise “standards” that say “You must do it this way to conform, unless you don’t feel like it in which case we’ll say you conform anyway.” So useless.
You say this and the standard says this, but do the users know this? Do distributions obey? I see frequent violations.
What’s more, defining “temporary” is hard. Is a flash drive temporary? I certainly think so: It is not part of the system and it may come and go. But you just placed stick/ under /media/, which makes sense in a lot of ways, too. I would have defined dvd and cdrom drives as temporary, had you asked me. What about an external USB cdrom drive? Why should it be in /media/?
It’s easy to say no, because that’s how it has always been. But *shouldn’t* there be a structure? Some kind of convention for temporary files that an application relies on while running vs. ones it just throws out and does not care if it ever sees again. Maybe distinguish tmp files created by “the OS” and “the user’s applications”. Do lock files go in /tmp? Lock files tend to matter, yet I see e.g. /tmp/.X0-lock. Many applications now create a subdirectory in /tmp to hold their files. Is there any documentation on when this should be done or how to name the directories?
If e.g. lock files are violations, why are they so common? I tend to blame the standard when it gets ignored.
This is your opinion. I agree, but since I see /usr/doc/ frequently it appears we are in a minority.
Forgive me if I desire a more precise answer. Does each app make its own subdirectory in /var/? In /var/lib/? Or do you first make a subdirectory for purpose, then for app, like /var/run/appname/ or /var/lock/appname/? I see all of these things being done without agreement, rhyme or reason.
Is there any restriction on creating more directories directly in /var/? Or, if you believe each app should make its own anyway, what is the naming convention, and are there any restrictions on creating directories that do not obey that convention?
Correction: If it’s okay to lose it between reboots /tmp, otherwise /var. Some things in /tmp are *not* good to lose while an app is executing.
In whose home directory? I’m not trying to be difficult, it’s not as if I can’t answer these questions, it’s just that nobody agrees on the answers. You can always give one, and I can always give one, but that doesn’t make it correct and it doesn’t make it likely that someone else will think the same thing and do it the same way.
I know why these things are the way they are, I know most of why they got that way and I know most of the differences that will be found. Your explanations are just that: yours. If there were right answers that everyone agreed on and conventions that most people followed correctly then there would be no problem.
That’s why they are valid. And you are right, many answers depend on the concepts the person you ask is aware of.
Why? The content in /usr/local/ is structured exactly the same way the system’s directories are structured. So if you know what something is, you can simply conclude where it is.
This is correct. While BSD has a strict concept of how to define “the OS” and “anything else”, other OSes don’t. This is due to their nature. If you understand the BSD system, you will know what’s “the OS” and what’s not.
That’s the situation, exactly.
Or /usr/local/var, just as /usr/local/etc. As far as I know, the content of /var is managed through system services (such as the system logger or system database tools), or at least special users and groups have to be created on the system to put things in /var (for example “CUPS Owner” or “MySQL Daemon”).
Yes, if you could exactly distinguish between the two.
There are the basic system games, but most people won’t call them games. The programs bcd, factor, grdc, number, ppt, random, strfile, caesar, fortune, morse, pom, primes, rot13 and unstr are… toys?
I’ve never heard of /usr/local/games. Installed games go to /usr/local/bin (and their components to lib and share, respectively).
Thanks for giving the layout of /opt; I haven’t seen /opt for years, though I still remember something like /opt/kde/share…
So /opt does seem to contain structures like those created by the PC-BSD’s PBI system: one entry per program name, bin and libs beneath; for compatibility, symlinks from /usr/local subtrees.
In Linux, yes. In BSD, no. See “man hier”, everything is explained. Of course, the question “Why?” remains.
You’re completely right: the definition of “temporary” depends a lot on how you use a medium. Plug in, plug out, use today, not tomorrow, well, that would be temporary. An external USB disk that is mounted all the time, okay, not temporary.
That would be helpful if your system did not clean /tmp automatically. You would have a better idea, for example, of who (which program) placed files there, and you could see from their names whether it’s okay to remove them.
Anyway, at system shutdown, /tmp usually disappears.
I’ve seen lock files for programs inside “dot dirs” inside the user’s home directory, ~/.netscape/lock for example. And there’s /var/spool/lock, too.
That would be a matter of the maintainers of this particular software. As far as I remember, KDE creates own subtrees in /tmp, but for documentation… it’s not that you can “man kdehier”…
Not blame those who ignore it? If I don’t obey the traffic rules, is it the rules’ fault?
I think this is Linux-specific again.
At least in BSD, there’s some kind of standardization (see section “var” in “man hier”: “multi-purpose log, temporary, transient, and spool file”). And sadly, I don’t have a more precise answer because, often, applications do things on their own.
Yes, I intended it to be understood this way. The content of /var should be present, without anything disappearing, while the system is running. Anything else would be a catastrophe.
In whose home directory? I’m not trying to be difficult, it’s not as if I can’t answer these questions, it’s just that nobody agrees on the answers. You can always give one, and I can always give one, but that doesn’t make it correct and it doesn’t make it likely that someone else will think the same thing and do it the same way.
A common way is to use the home directory of the user to whose name the HTML content is registered. But this doesn’t have to be the case, for example, if an automated system is running that isn’t registered to any user on the system. Another concept is to symlink files from a user’s home directory into a directory belonging to the HTTP server application, so you could place “unregistered” content there directly.
As you pointed out, the knowledge of existing standards and concepts, their interpretation and their de facto use are very individual. Just imagine what happens when people start questioning and interpreting the traffic rules. Just a general consensus helps here.
I meant to say that *even in BSD* it’s hard to be sure, as a user, whether something came with the base or was part of an add-on. “User” here is both system admins and regular users.
An interesting definition. OK, so how does the author of a system service know how to answer the questions about what structure /var/ has?
Perhaps in BSD this might be true, but on my system there’s /usr/games and /usr/local/games. Some games don’t put their binaries there, most do. This is part of the FHS, see here: http://www.pathname.com/fhs/pub/fhs-2.3.html
/opt only *sometimes* contains this structure. Each program has a subdirectory, after that it’s up to the whim of the author.
I’d like to believe that no violations exist, but I just don’t. Nobody is that perfect.
All *nix systems clean /tmp on start. This is not a workaround for a broken system that doesn’t clean /tmp. Systems rarely clean /tmp while the system is up. I don’t know about you but I very rarely reboot my computers, except to patch the kernel and upgrade hardware. Can we really rely on boot-time cleaning?
Secondly, even if we’re not worried about crufty junk accumulating it seems to me that it would be useful to provide more clarity. Don’t tell me “just don’t ever look in /tmp” because sometimes you have to… and sometimes you’re writing a program that has to work with temporary files. Isn’t it better to have a clear place to put things?
This is a perfect example of the problem: the correct behavior is not known, so a developer makes something up. I’d like to avoid this kind of thing.
There are two problems with that analogy: (1) Laws are enforced, standards aren’t. (2) When you have a rule no one obeys you have a bad rule, not bad people.
In whose home directory? …
A common way is to use the home directory of the user to whose name the HTML content is registered. But this doesn’t have to be the case, for example, if an automated system is running that isn’t registered to any user on the system. Another concept is to symlink files from a user’s home directory into a directory belonging to the HTTP server application, so you could place “unregistered” content there directly.
But in the FHS there is no place for a directory belonging to the HTTP server except for /usr/lib/httpd (or under local, as you choose) and somewhere in /var. Yet a web site’s files are not exactly run-time modified and clearly should be under /home, but no user in /home owns them, so…
The best answer I have apart from /var is /home/www on a system where a user named www executes the httpd.
This once again goes back to my point: the FHS has problems, mostly that it doesn’t answer questions it should, and partly that it’s terribly, arbitrarily inconsistent. People who want to radically overhaul it are usually misguided, but their frustration springs from very real issues.
BTW, your reply looks truncated. Was it?
A means to determine it is by looking at the path of an installed program:
% which lpq
/usr/bin/lpq
Ah, this one belongs to the OS.
% which lpstat
/usr/local/bin/lpstat
This has been installed afterwards. (Now it’s possible to use the tools provided by the package management system to find out which application had installed it.)
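For example, a quick sketch on FreeBSD, where pkg_info can report the owning package (the package name in the output is illustrative, not taken from a real system):
% pkg_info -W /usr/local/bin/lpstat
/usr/local/bin/lpstat was installed by package cups-base-1.3.8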
Furthermore, you can read from the creators of the BSDs which things they provide (with their OS) and which they don’t. For example, the default installations of FreeBSD and OpenBSD differ in regard to what belongs to the base system.
Other questions coming into mind could be: Why is a name server part of the base system? Why is a DHCP server not part of the base system? I’m sure you can imagine similar considerations.
Usually from “man hier” or the respective description – if one is available. If not, well, I think the author starts guessing and finally implements something on his own.
I won’t claim there is no exception. Often, I find applications using share/ and lib/ directories in a similar way (e. g. to put icons there). There are recommendations, but not everyone uses them. So it’s completely possible in the BSDs, as it is in Linux.
No. For example, if you set clear_tmp_enable=”NO” in /etc/rc.conf in FreeBSD, the content of /tmp will be kept between reboots.
At home, my computers run just as long as I use them or as long as they’ve got something to do. At work… well, who reboots servers?
It’s a system setting, the maintainer of the system should know. And it’s the standard behaviour to start with an empty /tmp, as far as I know.
Definitely. Maybe you know the term “file disposition” from IBM mainframe OS architectures / JCL. You can define how a file will be handled during a job, e. g. it’s deleted after the job has finished (often welcome solution), or it should be kept for further use (sometimes useful, mostly for diagnostics).
But I still think the term “temporary” indicates that something is not very useful to the user, but maybe to other programs.
Exactly. But when we suggest a “correct behaviour”, it should be documented in an understandable way. I’m not sure who would be responsible for this, maybe the creators or maintainers of an OS? But then, what about cross-platform applications? And when we’re talking about Linux, who should develop a common standard there? And would the different distributors follow it?
Interesting look at the nature of rules, but understandable.
Other arguments could be “never touch a running system” or “don’t ask why it works, it just works”. Sooner or later, this can lead into real problems. I see the problems simply by following your questions: Many of them cannot be answered completely, and answers sometimes lead to the inconsistencies you mentioned. Concepts leading to such answers are far away from a mandatory standard.
Maybe I exceeded the char[8000] limit, but the preview was complete. “Just a general consensus helps here.” should be the last line; it’s possible that I didn’t press the keys hard enough.
Yes, it is these kinds of things to which I was referring.
I did just look up hier(7). It is nice in that it does specify some directories that exist under /var/, but it does not say what application authors should do with their own apps. I do like that there’s a /var/tmp/ – this suggests that someone is thinking about these issues.
The BSD systems are certainly less schizophrenic than Linux distributions. It’s probably because of the more cohesive base on which things are built and the lesser variation due to there being many fewer bases (like… one, each).
I should be more careful what I say. All *nix systems of which I am aware have as a default or common behavior to clean /tmp between shutdown and start. That is, none are missing this feature.
Windows admins (-;
[drumroll]
I am imagining an all-too-likely real world scenario in which root doesn’t really understand how the system works and is supposed to work and where application authors don’t take time to understand the system but just write as quickly as possible. This is the mainstream “Windows”-style world of server administration that I see. I don’t expect the useless people of the world to become less worthless and more expert just because they start using a better OS. I don’t want to exclude these people for a lack of knowledge. So I try to think how we can, without giving up any actual power or control, create a system that will be nicer to people who really *don’t* know what they’re doing, are not going to learn it if they can help it and feel that they are far too busy to be concerned about details. Having a system which is consistent, logical and predictable in as many ways as possible would help.
I did not know about this. IBM mainframes are before me, or above me, depending how you look at it. Thanks for mentioning it.
I would treat a ~/tmp/ very differently from /tmp/, to be sure. That said, I remember a story about a MacOS user who stored all his files in the trash can and was very upset when he found one day someone had emptied it…
This is an important point. I don’t think any answer which originated from a single Linux distribution is ever likely to gain wide adoption (too much NIH syndrome). If several people who together do not represent a single vendor were to get together, preferably in an open fashion (e.g. a mailing list, a conference) and hammer out some agreements, that would be good. If they could then produce some documentation as you describe (clear, understandable, preferably with rationalizations in the footnotes) and release it as a recommendation then I think this might have some potential.
I am a believer in success on merits. The reason no FHS revision has gained traction is because they all have large deficiencies vs. the present way of doing things, or are equally arbitrary. If some good, clear approach were designed carefully it could avoid serious harm to any major workloads. If it also provided some clear advantages to distributions, packagers, and developers then I think it might gain acceptance. Good documentation will naturally be followed if it is known to the people who might need it. That is my belief.
Well as long as you see my point I think that is as much as I can ask. I’d prefer more concrete standards (mandatory is such an ugly word!) and not just concepts. Concepts are fine, but some concrete recommendations would be better.
Ah, yes. It was definitely truncated… by three characters. It ended: “Just a general consensus helps he”
Oh, if only this was a rare event. You wouldn’t believe how many phone calls we get on the helpdesk line from users who emptied their Deleted Items folder in Outlook, or emptied the Trash Can on their Windows or KDE desktops, or went looking into their Trash folder in the webmail server, only to find their uber-important files/messages were no longer there.
The most common excuse: they don’t know how to create folders, and the Trash is already there, and you can move items there with a single keystroke (Del).
Some days, I understand how the “average” IQ has dropped to 80, when 100 is labelled as “average”.
It truly boggles the mind how little people want to think.
From what I understand, local is for stuff compiled on that machine. Although, that does seem a less arbitrary way to separate things.
The idea behind /opt is: let’s say I have KDE 3 and I want to try KDE 4, but not as a full-time thing. I want it somewhere that is easy to blow away, but at the same time I don’t want to put it all in my home dir. /opt is a place to stick stuff you want isolated from the rest of the system.
I really hate this. Saying /media is for stuff the system mounts, and /mnt is for stuff that I mount, basically means I have to think about who mounted the thing I am looking for every time I go looking for it. Why did /media have to be created in the first place? The more old-school automounting stuff never had a problem sticking everything in /mnt.
Err, no, at least not from a BSD standpoint. This is due to the difference between “the OS” and “anything else” I did try to explain before. Let’s assume you’re rebuilding your kernel and system, so it would be “compiled on that machine”, but because it’s the system, it would go to the designated directories. If you install a package of software, you don’t compile anything. So everything you install *after* the base OS (whose content is well defined in its basic distributions, for example base, man, dict) will go in /usr/local, where you have the same subtrees that the system uses itself: bin/, etc/, share/, lib/ or include/.
Ah, that’s another interesting interpretation of what /opt should be for. I haven’t seen /opt for many years, but I think it was some S.u.S.E. Linux that had KDE in /opt/kde by default.
At least according to “man hier”, /mnt is reserved for the system administrator for temporary mounts. In most cases, /media is used, and due to the concepts that KDE and Gnome propagate, i.e. to make media accessible as soon as it is available, most of the content in /media gets automounted. I don’t have such services; I do have to “sudo mount /media/pd” or similar. This depends on the paradigm a Linux or BSD (or a desktop environment installed on the system) follows.
And remember the traditional mountpoints in the root directory, such as the /cdrom or /floppy you mentioned. Furthermore, there have been concepts of creating mount points for media within a user’s home directory, for example ~/mnt/dvd. Or subtrees in /export that lead to different hard disks containing home/, share/ or dist/ subtrees.
Missing consensus.
In complex machines such as computers, the interface can be organized for the uninitiated user in a manner that overcomes the machine’s complexity, allowing the new user to perform many common tasks. Usually, this feat is accomplished by organizing and keeping visible the more frequently used controls/aspects of the machine, while hiding infrequently used controls/aspects.
However, hiding the basic conceptual model (or teaching an inaccurate conceptual model) of a complex machine frequently leads to user frustration and usability mistakes. Such is the case with obscuring (or “dumbing-down”) the directory structure of a computer from the user.
People are often smarter than designers (and CEOs of trendy computer/electronics companies) think. Any grade school child can understand the concept that information on computers is organized into a tree of files and directories (folders) within directories. From this rudimentary model, it is not a huge mental leap to realize that some of the files are executable (programs/scripts) and some files merely store data, while a few files are a combination of the two types. It is not much of a brain strain to additionally realize that directories are often organized to separately contain data files, applications, code libraries, configuration files, temporary files, etc. One does not have to be a programmer nor a computer expert to comprehend such a simple model and to memorize a few of the frequently used directories.
A user’s understanding of such a basic conceptual model does not ruin the user’s ability to thoroughly employ the computer desktop model nor does it impair the use of search-based systems (such as Gmail, Sup, slocate, Spotlight, etc.). This understanding merely enhances the user’s ability to work with a computer and solve problems. For example, a common frustration occurs with new users when one cannot find a file downloaded from the Internet. With a basic knowledge of the directory tree, one can readily track down the location of the file and also configure the system to download future files to a more convenient directory.
Consider the analogy with hand calculators and math education. Hand calculators have been available for decades and they preclude the need to understand simple addition, subtraction, multiplication, division, etc. However, if we refrain from teaching basic math to every school child, we would probably have a lot of frustrated, helpless people in the world. Such a situation is actually happening right now with computers. Computers are now a part of everyday life, and the lack of comprehension of the basic conceptual model of computers often gets a lot of naive users stuck on the tiniest of problems.
Everyone should have a rudimentary knowledge of the directory tree and of the basic internal components of a computer. Such knowledge is much less involved than the algebra we all learned in middle/high school. Almost all that is needed to understand the directory tree is contained in the single paragraph above, and the internal components of a computer can be explained in four or five more paragraphs. The prevalence of this simple knowledge would eliminate a lot of problems and would allow a majority of computer users to grow and flourish.
I’m sure most here agree with that, the real question at hand is whether knowing the difference between /sbin, /usr/sbin, /opt and /usr/local/bin counts as the sort of rudimentary knowledge the avg. computer user should have.
There is no good reason not to simplify things that can be simplified.
I’m actually not too sure that most here agree with that assertion. In fact, the first question/point listed in the parent article is: “Normal users shouldn’t see the FHS [directory structure].”
In addition, I agree that the Gobo/Beos/OSX-style directory configuration is better, and that the *nix directory configuration could probably be greatly simplified (especially for single user machines). However, I don’t think that it would be a major effort for anyone to learn the basic differences between the directories that you mentioned. There are many simple charts that plainly show the particular properties of each basic directory.
I think you’re giving end users more credit than they deserve. I’ve worked at a help desk, I’m currently a system administrator at my place of employment, and before that I worked selling and supporting computers as part of my own company – I can tell you that you have the optimism of youth.
The end user is a lemming. I’ve seen people who, after moving an icon slightly, are completely clueless as to what to do. I remember telling an end user to ‘double click on Internet Explorer’ who claimed it wasn’t there – even though it was sitting on the desktop (do end users ever read what is on their screen or do they just randomly click stuff?). End users need to be educated from day one, but I go back to blaming a society which has embraced laziness and slovenly behaviour as its forte rather than people wanting to learn for the sake of learning. Then again, this is an entirely new topic altogether.
Back to the original article; MacOS X did it right: hide the traditional UNIX structure and have the applications end users run sitting in the Applications directory. There are a lot of things I’d love to see the opensource world copy from Irix, Amiga and MacOS X. Copying doesn’t mean you can’t come up with good ideas – it is recognising that there is already a good idea and it makes little sense re-inventing the wheel for the sake of dogma.
I don’t want to look like a bad guy, so I’d like to state this first: I’ve worked with users who were very smart at the beginning, e. g. those who came from a mainframe background or were developers, but after using /insert monopoly OS family here/ for more and more years, they developed into the kind of person you described – maybe not in a very nice way, but those people make up the majority of users, at least from my individual point of view. Why do I think so? Because I’ve seen them, I’ve served them, they trampled on my nerves.
No, because /insert monopoly OS family here/ propagates that you don’t need to know (or to read) anything in order to use a computer.
If you think you’re unfair to the users in characterizing most of them, feel free to read this:
http://www.rinkworks.com/stupid/
Lots of things there are really stupid, but what scares me most is that I’ve already seen the stupidest of them myself.
Hey, I wasted all my youth reading and learning; should all this be useless now?
PC-BSD provides something similar with its PBI package system.
You said Irix. Well, that’s a UNIX system I really enjoyed using. All the power, but still a system that could be used with only rudimentary knowledge of computers. Of course, reading what’s on the screen and a bit of common sense are very useful everywhere.
That is just the tip of the iceberg. Be more concerned that these people can have children and vote.
To quote Chris Rock: “I don’t need to learn that sh-t! keeping it real!” – yeah, real dumb
The key is to bring all these ideas together; there’s no use having 100 operating systems, each implementing one good idea. I want one operating system that implements all the good ideas in one product.
IRIX had a wonderful desktop; it was designed for graphics boffins, arty-farty people who have no time to learn the intricate details. Whenever I see the opensource community wholesale suck up and clone Windows, I can’t help but scream to high heavens as to why they’re copying a half-baked POS when there are better things to ‘clone’.
I agree that there are a lot of clueless computer users. However, I think that much of this cluelessness stems from a helpless attitude that has been conditioned by a decades-long prevalence of desktop interfaces designed around users with no mind.
In current ergonomic design circles, a lot of emphasis goes to designing interfaces that can be quickly comprehended by the uninitiated user (usually at the expense of power and speed). The usability phrase for this practice is “reducing the knowledge required in the head” of the user. My point is that, with just a little more prior “knowledge in the head,” users will act much more resourcefully. They will learn to think for themselves and will actually look at the screen.
Most people are smart enough to understand a lot of what is typically considered too complex for the everyday user, and most will use their minds if they are encouraged to do so. Jef Raskin kept the Mac mouse from having more than one button, because he thought three buttons were too complicated for the typical person. Underestimating users seems to be a common mistake with usability “experts.”
By the way, as I recall, MSDOS and Windows 3.1 employed user-friendly directory names, such as “programs,” “data” and “system,” etc.
Certainly, you do not recall correctly with MS-DOS, and I do not think you are recalling correctly for Win3.1. Those were the wild and woolly days where really only one directory mattered to DOS (c:\dos). Windows lived under c:\windows, where there were a few known sub-directories (e.g., c:\windows\system), but that is about it. Other directories were made up by users – for instance, I usually created c:\games and c:\downloads directories. Most applications preferred to live in a folder off the root directory (e.g., c:\jazz).
The funny thing is, I had to revert to this behaviour in Vista. Many of my applications would not work if installed in c:\Program Files, so I had to install them in folders off of the root c:\ directory. Sure made a mess!
No, you didn’t. ‘downloads’ is 9 characters, and DOS only allowed 8.3 names (8 characters, plus a 3-character extension). (Fake-ish) long filenames were added in Windows NT 3.5 or Windows 95.
Oops, you’re right, it was c:\download. It’s been a while!
Windows for Workgroups 3.11 came with the first version of vFAT, which introduced the first incarnation of long filenames.
Windows 95 inherited vFAT, and extended it into FAT32.
I’m sure this isn’t my idea, but perhaps it’s time to dump the directory tree altogether and replace it with a relational database. Files can be associated with various tags that denote not their “location” but their purpose. Files that have multiple purposes can be given several tags. If you want to find a file, you don’t have to remember which directory it’s in, only which tags are associated with it. Hasn’t someone already done this successfully, or studied it?
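It has certainly been studied and attempted (see the WinFS and BeOS replies below). As a back-of-the-envelope sketch of the tagging idea, using sqlite3 from a shell – every table, tag and path name here is made up for illustration:
% sqlite3 files.db "CREATE TABLE files(id INTEGER PRIMARY KEY, blob TEXT);"
% sqlite3 files.db "CREATE TABLE tags(file_id INTEGER, tag TEXT);"
% sqlite3 files.db "INSERT INTO files(blob) VALUES('/store/0001');"
% sqlite3 files.db "INSERT INTO tags VALUES(1,'invoice'); INSERT INTO tags VALUES(1,'2008');"
% sqlite3 files.db "SELECT blob FROM files JOIN tags ON id = file_id WHERE tag = 'invoice';"
/store/0001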
That said, I don’t entirely agree with this statement you made:
As long as you’re talking about the file system layout, I can agree with you. As a general remark about operating systems, however, I disagree strongly. The layers sitting upon layers – say, KDE sitting on Qt and X, or Gnome sitting on GTK sitting on X – are a matter of abstraction and/or code reuse. Developers are taking something that is inherently complicated and messy and distilling it into something simple and accessible. The same can be said for many, many of the layers involved in an OS. That’s very different from the layout of a typical *nix file system, which essentially looks like the long accretion of various practices that grew up in the absence of standards, or for historical reasons that, as others have pointed out, no longer apply.
I don’t think you meant that, but it seemed as if you were making a comment about OS design in general.
I’ve yet to see a truly persistent relational database that doesn’t live in a file itself. You still need a file system underneath your database if you want it to store data anywhere besides RAM. The actual database application needs a binary somewhere too, and the contents of the files have to be dumped to disk if you want them to persist.
And, by the way, Microsoft has been trying to do this since Windows 95 with “Cairo,” through 2003 with “WinFS,” and they’ve yet to get it working. No one has been able to actually move to an RDBMS for file storage at all, let alone for an entire filesystem (which, to be frank with you, doesn’t make a lot of sense anyway). WinFS ended up being much like Google Desktop, Spotlight’s database, or Beagle – they’re just indexes on top of an optimized filesystem.
WinFS isn’t done yet; what we got with Vista was Windows Desktop Search. Apparently they are still working on WinFS, although, as you already mentioned, they have been working on it on and off since ’95.
Seth Nickell started a project a few years ago to do a similar thing on linux, but to my knowledge it never really got anywhere http://www.gnome.org/~seth/storage/
WinFS beta 2 has been out for some time. It is NOT an RDBMS. It is a layer that runs on top of NTFS. And, as far as I know, it never made it past beta 1 “refresh.”
http://blogs.msdn.com/winfs/archive/2005/12/01/499042.aspx
In order to turn a raw file system into an RDBMS, the raw file system must be informed about the types of entities and their relations.
That’s something that clearly belongs to the application domain, not the operating system domain. No two applications can agree on the binary schema of things in a file system.
And that’s the reason WinFS has never materialized: even Microsoft applications could not agree between themselves on what schema each file should have.
For example, Word may require doc files to have schema X, whereas Excel may require doc files to have schema Y, for interoperating with Word.
What if, the same way an app “registers” its file extension with Windows, it registered some sort of schema definition in a master table and got its own “table” with its own schema?
The problem is moved to poorly designed apps/schemas/queries, but it nonetheless permits each application to have its own schema.
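A hypothetical sketch of that registration step, again in SQL (the application, table and column names are invented purely for illustration):
% sqlite3 winfs.db "CREATE TABLE master(app TEXT PRIMARY KEY, schema_ddl TEXT);"
% sqlite3 winfs.db "INSERT INTO master VALUES('WordProc', 'CREATE TABLE wordproc_docs(path TEXT, author TEXT, pages INTEGER)');"
% sqlite3 winfs.db "SELECT schema_ddl FROM master WHERE app = 'WordProc';"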
“I’m sure this isn’t my idea, but perhaps it’s time to dump the directory tree altogether and replace it with a relational database”
This was actually tried in an early version of BeOS:
http://www.letterp.com/~dbg/practical-file-system-design.pdf
“At the time, the BeOS managed extra information about files (e.g., header fields from an email message) in a separate database that existed independently of the underlying hierarchical file system (the old file system, or OFS for short). The original design of the separate database and file system was done partially out of a desire to keep as much code in user space as possible. However, with the database separate from the file system, keeping the two in sync proved problematic. Moreover, moving into the realm of general purpose computing brought with it the desire to support other file systems (such as ISO-9660, the CD-ROM file system), but there was no provision for that in the original I/O architecture.”
“Funnily enough, by providing all these layers, developers actually flat-out admit operating systems are anything but designed for users. If they were actually designed for users from the ground up, they wouldn’t need all those layers.”
All these layers exist because computers are complicated. Computers are complicated because they are versatile. The single machine on your desk can become a planetarium, music player, ham radio accessory, cash register, dental x-ray viewer, slot machine, fire sprinkler system controller…
If you want a simple machine, go get yourself a single-purpose box. Instead of Amarok, get an iPod. Instead of Doom, get an Xbox. Instead of Thunderbird, get a Blackberry. These gadgets are complex too, but because their uses are limited, they are hermetically sealed so you can’t mess around with them. That’s a blessing – I read articles about how I can turn a PC into a mega-powerful router. I don’t care. I prefer my basic, $100, hermetically sealed router – without even DD-WRT.
What you want is a versatile machine that is also simple. You want a computer that does it all, and yet is simple. That isn’t gonna happen.
“You want a computer that does it all, and yet is simple. That isn’t gonna happen.”
So let’s all just stop trying to achieve a balance and name shit by smacking our palms on the keyboard.
A redesign of the FHS is desperately needed and obviously beneficial, but it in no way should be undertaken with the goal in mind that it be made easier for average users to understand. A curse upon the average user and a world that caters to his fickle whim! The real FS should not be deliberately hidden nor made artificially ‘friendlier’. Doing so would be partly both useless and harmful.
The FHS should be systematically rethought. It should be deliberately designed, this time, to serve the purposes for which it is now used in a manner which is structured, logical and consistent. As long as the structure is logical and the purpose of and place for every thing is made clear then it really doesn’t matter if Joe Average Used To Use Windows And Thinks My Documents Is The Computer can fathom it or not. People who *need* to understand it will find it easier and less stressful, people who *don’t* need to use it will continue to be as oblivious as ever.
What kind of stupid comment is this? The user is still in control of the machine; he can do everything he wants to even if he does not see the filesystem hierarchy. Does that mean you don’t need to put in an effort to learn things? No! What’s next, the demand that all programs should be written in Logo, so people can understand them? As many others have said, a computer is a complicated machine; if you want to understand it, it takes effort, and it isn’t easier to understand just because /etc is named /system\ settings.
What a lot of the people here who complain about the complexity or inconsistency of the FHS don’t realise is that reordering actually increases complexity in a lot of cases. Take the suggestion of creating a /settings directory and a /data directory and then having subdirectories for each user. First, how do you decide what is data and what is a setting? You will end up with inconsistencies. Second, why does anyone think it’s a good idea to keep user settings and data separate? It complicates backups, for one. Or what if you want your personal things to be on a portable drive? Now you have to worry about two directories instead of one. For what benefit? I could come up with a lot more examples.
Mozilla could have simply said “ok, Firefox is open source. Do what you want with it.” What they have done instead is make it easy to get into it with extensions, and that makes Firefox more open in spirit.
I have no problem with any distribution monkeying around with any aspect of Linux they want to, FHS included. FOSS is about evolution, and we have to create “mutants” if we want to find the “fittest”. If every distro did things the same, the software would never evolve.
But I think the criticism of FHS is overblown. What bothers me most is how narrowly some folks look at Linux. There’s this mentality of “all this junk is from the server days, now we all run Linux on our laptops so we can throw it all out”. I happen to like the fact that I can put /usr on an NFS mount. Maybe that’s useless to the average myspace user, but I don’t see why it makes a difference to them whether all the executables are in one directory or seventeen.
Really, why does it matter? Why does it matter if firefox is in /usr/bin or /usr/local/bin? As long as both are in your $PATH you don’t need to know anyway. The only times I can think of that I needed to know where an executable was, I was doing things no “average user” would do.
A few other notes:
– /srv is standard on Novell SLES, it’s used for the webroot, ftp root, and similar. I wish more distros would use it (Hello Debian? Why is the webroot in /var???). Granted, in 99 out of 100 other cases, Debian sticks to the FHS better than SLES (ahem.. KDE binaries in /opt? What?)
– /etc is not English. It’s Latin. Maybe we should create a FHS using all Latin terms. At least nobody would be getting favoritism.
– /usr/local is used by most Linux systems for packages that are compiled locally as opposed to being installed through the package manager. To me that is more useful than trying to create an artificial distinction between “OS” and “apps”.
– I wonder how many of us Linux admins are guilty of inventing our own TLDs and stinking up the situation even more. Yep, I look guilty…
Most of what you say is great, but I take some issue with this. It doesn’t matter to the user where their binaries are, for the most part, but it ought to matter to system designers, developers and packagers. Currently there’s not enough agreement. My favorite example: games.
I think improving the logic and consistency of where things are placed and what those places are named would be good for Linux. It would improve matters for people who have to deal with it every day and it would ease more “average users” into “power users”, make learning easier, which makes hacking easier, which will lead to more free software.
Why would there even be a relationship between name spaces and physical layouts?
Of course you should be able to distribute your data across different machines and different connection types. Especially these days when the web is more and more getting integrated into the “computer”.
But why should the name space be dependent on this? Today we have the beginnings of ways to separate the two with tools like unionfs and such.
I think the discussion is going in the wrong direction. The original Gobo concept has two important points, not one. Friendly names are important, but not that important, actually. It is not hard to realize that /lib means /libraries.
The most important point is that eventually we need some regularity and consistency in our file system. Then we can use the file system instead of all these manifests and configuration databases. Instead of copying an executable into one of the bin directories, we can keep it in, and execute it from, its own well-known place.
It is like books on a bookshelf. In a bedroom it is Ok to keep all books together, sorted by size and color of covers, but it does not work in a library. We need a catalog.
There are a lot of architectural possibilities coming from that. For example, side-by-side execution of different versions of software. Or, maybe, we could mount the repository itself and run applications from it directly, using the local file system as a file cache.
For me, the GoboLinux approach clearly goes in the right direction. Many packages use their own subdirectory in /bin, /lib, /etc, … already, and Gobo makes this movement complete. Like setting your sights on the goal instead of looking only at the next step.
What I’m missing are different places for distribution installed packages, admin installed packages and user installed packages. Many people have root privileges on their own machine these days but this is by no means always the case. Once you set up a Linux box for your grandma or your kiddies, you should think twice before you grant them root privileges. University and corporate environments are other examples.
There are two bright things Gobo achieves I’ve not seen mentioned yet:
First, once developers have learned there is not necessarily an /etc or /tmp or /bin directory, they learn to use system functions to find what they are looking for. Many apps do this already, as GNUstep, Windows and Mac OS X strongly recommend this way. That done, some of the links can vanish.
The second opportunity to get rid of legacy cruft is to teach the executable-finding mechanism, which currently uses the PATH variable directly, to look not only at /Programs, but also at /Programs/*/current/. The big link directories Gobo crafted are currently necessary, but with a few not-so-complex changes to the kernel, they could go away entirely; a userland approximation is sketched below.
So yes, Gobo advances in the right direction, and once they’ve taken a few steps more, ugly hacks like hiding directories via a kernel extension can be taken back without disturbing the bright picture.
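For what it’s worth, a minimal userland sketch of that idea – building PATH from a shell glob at login instead of changing the kernel, and assuming a GoboLinux-style /Programs/<name>/current/bin layout as described above:
# appended to a login shell's profile; purely a sketch
for d in /Programs/*/current/bin; do
    [ -d "$d" ] && PATH="$PATH:$d"
done
export PATH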
Re complaining that binaries are spread amongst /bin, /sbin, /usr/bin, /usr/local/bin. It doesn’t matter! Just typing the name of the binary will run it! If you want to delete it, you go to your package manager, not to your file browser.
The same sort of thing applies with libraries – does it matter if different libraries are in different places? I believe they can be used without problems, as long as they are in a directory that is used for libraries.
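On Linux, for instance, the dynamic linker already supports exactly this: any directory listed in /etc/ld.so.conf is searched, wherever it lives. A sketch (the listed directories are examples; many distributions split this file into /etc/ld.so.conf.d/ snippets):
% cat /etc/ld.so.conf
/usr/local/lib
/opt/someapp/lib
% ldconfig    # run as root to rebuild the linker cache after editing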
I believe users only really need to go to a handful of places:
1. ~/
2. /etc, which is so messy anyway that calling it “Miscellaneous Preferences” won’t help anyone
3. /var/www
4. /var/log
5. /tmp
Everything else is usually manipulated by command-line programs or even GUI programs (usually package managers). Does the average Joe need to go there? No. If the average Joe needs to go there, they will be doing so under instruction from someone who does know, and the FHS is not so complicated that nobody knows it.
Finer-grained standards for where files should go are a worthy idea, as there is notable variation there, but that’s just a matter of somebody putting in symbolic links so the SUSE file locations work on Ubuntu, and vice versa.
No, please no. Papering over the problem with symlinks is a rather horrifying idea. Wouldn’t it be easier to get people together and get them to agree on something?
I have had the same thought, Thom, especially since GoboLinux came out.
If a car engine could be so simple that anyone could fix it, would you build it that way? I would. I don’t want to need a car mechanic to fix my engine.
So why not do the same with software? Car engines are not as malleable as software. Software we can make to do almost anything we want. It doesn’t have to fit into a certain shape or size anymore. Possibilities may be limited only by imagination and ambition.
So why not build software so it doesn’t require a mechanic to fix it? The real reason is “change is hard.”
Because it’s impossible.
Software is complex. If it’s broken, then that means it has a bug. Fixing a bug is rarely simple, and can even be disastrous when left in the hands of someone who doesn’t understand it. Look at all the examples on TheDailyWTF.com. Or look at the recent Debian OpenSSL vulnerability, which was caused by the fact that a person who didn’t understand the code tried to “fix” it.
You can’t fix something without understanding it.
Man, OSX and Windows already do some of this better than Linux, so if you think it is impossible you are way off.
Software is complex, no kidding. The whole point here is to make it more understandable to humans. Not an impossible task.
You can’t fix something without understanding it, no kidding. The whole point here is to make it more understandable to humans so more of them can fix it. Not an impossible task.
If you’re referring to scandisk and defrag tools, that’s to fixing software as throwing a paper airplane is to being a real pilot. What happens if there’s a bug in the filesystem driver? Or a bug in the virtual memory system? How do you ever expect a normal person to fix that?
Making software usable by humans doesn’t mean that everybody can *fix* problems in it. If there’s a problem, then the cause can be *anything*. How do you expect a non-programmer to fix things like that?
Yeah, and understanding how software works happens to be equal to learning programming. And now you’re back at square 1.
> What happens if there’s a bug in the filesystem driver?
We aren’t talking about fixing driver bugs. Software is so complex that you’re losing sight of the difference between things we’re talking about and things we aren’t talking about. We are talking about the design of the file system and maintenance relating to file locations.
So for instance, if your settings break you should be able to just copy them from /settings on your backup disk to /settings on your system disk. We’re not talking about fixing bugs in code.
+1 for you, you get the gist. Others here get lost in debates about details, while all I was trying to say was: we made the front-end rather usable (not in all cases, but hey) – the next big step is to make the back-end usable and logical.
That being said, I am an OCD patient (and I’m not just saying that to be cool – I’ve actually been diagnosed as such), so my inclination towards order, cleanliness, and structure might not be exactly… healthy.
Car engines used to be basically so simple that anyone with reasonable mechanical aptitude could fix most problems. Then people started making demands like more horsepower, lower weight, lower emissions, better fuel efficiency, etc. So to accommodate all this, engines needed to become more complicated.
So yeah, car engines could be a lot simpler, but most people didn’t want that; they wanted features.
Same with software, sure it could be more simple and stable and with less bugs, but at the expense of features. Most people aren’t willing to make that trade off.
Except your mutation of my analogy isn’t correct. For example, simply moving all user binary files to one directory with a human readable name makes the system more simple and prevents bugs, and it doesn’t sacrifice any features.
I fail to see what bugs that would prevent or what problems would be significantly easier to fix. Not that it’s necessarily a bad idea, but I fail to see what you win.
If you’re going to do something like that, why not go all the way and use a solution closer to what OS X uses.
> I fail to see what bugs that would prevent or what problems would be significantly easier to fix.
man, how much software depends on the file system?
Making the file system easier to work with would affect every piece of software above it, from package managers to Open File dialogs.
Where is smb.conf on my computer? Can you help me with it? File search says there are two. Do you know why? Come on, give me a break. The complexity of the linux file system design is causing headaches in places we cannot even comprehend.
I just searched the firefox bugzilla for the first relevant thing to come to mind, “usr”. There are too many bugs to display. I just picked one: https://bugzilla.mozilla.org/show_bug.cgi?id=246672
I’ve seen bugs like that for longer than I can remember. Mozilla.org-released builds look for plugins in /usr/lib/mozilla/plugins. SuSE uses /usr/lib/browser-plugins, according to Hendikins. Someone on FC2 reported the plugins were in mozilla-1.6/plugins, and mozilla/plugins was empty. Holy shit, let’s get some standards that make sense.
How many bugs would never have existed if the present unnecessary complexity had never existed? It might be possible to guess. My guess is: a lot. How many tech support problems would have been easier without that complexity? Again, my guess is “a lot”!
Sure, but dumping all binaries in a single folder? Also, do you consider libraries to be binaries? Are plug-ins binaries, libraries, or something else?
Yes. Of course you could argue that most people shouldn’t need to care where smb.conf is located, or even what smb.conf even is. They should be using supplied tools to interact with that file. Those who actually need to edit smb.conf by hand should be knowledgeable enough about what they’re doing to know which smb.conf file they should be editing. Otherwise they stand a good chance of breaking their system.
That complexity also gives power and flexibility that some people need. There are situations where you actually want several smb.conf files. Admittedly most people don’t, but those who do should still have the option. One of Linux’s strengths is that complex and uncommon configurations are relatively easy to pull off compared to, for example, Windows. Any improvements made should be made without sacrificing this strength.
But I agree that things could be a lot better. However we shouldn’t simplify things to the point where the uncommon, yet occasionally useful, configurations become impossible.
Again I agree, but dumping everything into one folder isn’t the solution. Things are a bit of a mess, but a flatter file system with everything in one folder isn’t the answer.
locate smb.conf
This means *your own* system is messy. My Gentoo Linux system has only one smb.conf, as sanely expected.
Let’s be careful with attributing blame here. I doubt the user in question actually created the two smb.conf files. It was most likely done by the packagers of his distribution. Saying the user’s system sucks (and hinting at the incompetence of the user) because of choices made by the distro he uses is hardly a good start to a healthy debate.
Anyway most distros come with two (or more) smb.conf files, one under /etc and one under /usr/share (and before we go any further, yes I agree that neither of those names are particularly good). So if anything it’s Gentoo that’s the odd one out and I’m a bit surprised that it doesn’t have both of those. Are you sure you haven’t deleted one of the .conf files yourself or does Gentoo rename one of them to something else?
I didn’t mean at all that it was the user’s fault, and I apologize if I gave that impression. By saying “your system is messy” I meant “there is something wrong with your OS”, no matter who made the mess (the user, the packagers, etc.) – in fact, I was thinking about the OS designers when writing that.
I am on Kubuntu now at work, but I think on Gentoo there is smb.conf.example in one dir, and smb.conf in /etc.
I checked, and on Kubuntu the duplicate file does indeed show up; that’s to be blamed on (K)ubuntu, surely.
Why is it so, btw?
Actually, it would remove some features – just to mention three:
– being able to put /usr on a remote filesystem
– keeping superuser-only files out of normal user’s paths
– being able to keep non-package-manager-installed files separate from those installed by the package manager
But I guess you might argue that these aren’t common use-cases and thus not compelling enough reasons to put binaries in different directories. But I guess I don’t see a compelling enough reason to sacrifice features like this to accommodate the odd situation where someone needs to know where their executable is. I just don’t see how it’s that big of an issue.
Seems like the real issue is that the FHS is not well enough defined or followed. I wonder how many distros even have a clear standard for how they define the hierarchy?
How about a filesystem laid out like this:
/Home
    /User
        /Applications
        /Settings
        /Frameworks
        /Data
        /Documents
        /Desktop
        /Temp
/Common
    /Applications
    /Settings
    /Frameworks
    /Data
    /Documents
    /Desktop
    /Temp
/Shared
    /WWW
    /FTP
    /NFS
/System
    /Applications
    /Settings
    /Frameworks
    /Data
    /Documents
    /Desktop
    /Temp
/Boot
    /Applications
    /Settings
    /Frameworks
    /Data
    /Documents
    /Desktop
    /Temp
A FS would be split into the five domains below the root (/) level: Boot, System, Shared, Common and Home. The Boot domain would contain files vital to the system bootstrap and nothing else. This domain would always be local, and reside on a separate read-only flash device. The System domain would contain the rest of the OS, including drivers and frameworks for advanced functions such as network, sound and accelerated graphics. The Common domain is for applications and data common to all local users. All non-vital services such as HTTP servers should be placed here. The Home domain is for user files only, and the Shared domain must only contain resources shared over the network. A technology like unionfs should be implemented, so that if the files /Home/User/Applications/ping, /Common/Applications/ping and /System/Applications/ping all exist, only the first is visible to that user on the working system, the second is visible to users who don’t have /Home/User/Applications/ping, and /System/Applications/ping is used only for startup and system maintenance.
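For what it’s worth, that shadowing rule can be approximated today with the out-of-tree unionfs module, where earlier branches take precedence (a sketch using the proposed paths; exact mount options vary between union filesystem implementations):
% mount -t unionfs -o dirs=/Home/User/Applications=ro:/Common/Applications=ro:/System/Applications=ro none /Applications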
Define “boot”. When my laptop boots I get a graphical login screen. Should X be in the boot domain?
I see that you advocate a unionfs solution, so that could answer this question. It’s certainly an interesting approach. I’m trying to say… be careful, because deciding which things belong in which domain can be very tricky and debatable.
Also please be aware that no system which includes routinely capitalized first letters is likely to replace the current FHS. There’s really no harm in saying /system/application/ping, is there?
No, it should not. /Boot (or /boot) should contain only the kernel, storage and system console drivers, a set of hardware testing and filesystem repair tools, and a flashing tool for updating /Boot from some storage device.
Services for user authentication, network, graphics, sound, etc. should be in /System, along with the WM and decorator, frameworks like OpenGL and OpenAL, and servers. This is also where system configuration tools should be placed.
/Common and /Home/John Doe/ should contain user-level applications only.
The article states that the common sentiment from the previous article was that normal users should never see the filesystem. Now from my reading of the posts, the more common belief was that users should never *need to* see the filesystem, which is quite a different message.
Thom interprets this argument as a statement of elitism, that users are being prevented from learning how their system works, going against all the principles of openness. Nonsense. It’s a statement that users shouldn’t be forced to learn about things that shouldn’t be relevant to them. They should be welcome to do so should they choose, but it should never be a requirement in order to successfully use their computers. Windows users manage just fine without knowing anything outside of “My Documents”, and Linux desktop users should be able to do the same.
I could not have said it better and have in fact said it worse.
The problem is that users NEED to see the filesystem. If everything worked perfectly all the time and there existed an application for everything a user would want to do, then MAYBE the layout of the filesystem wouldn’t matter.
I am a good example. I work as a programmer; the company I work for only develops Windows applications, therefore I have limited time to spend with Linux. In this time I have had to spend too much time navigating the cryptic layout of the filesystem. I am really not interested in “mucking” around there, and I always use apt/Synaptic to install programs, but still, too often I have to use Emacs and the command line trying to fix some problem.
Another problem I have heard about concerns different versions of a library. The solution is simple – both Windows (.NET) and Mac do it today. You make a filesystem layout like this:
library
    openGL
        1.0.0.0
        1.0.0.1
        1.1.0.1
        2.0.0.0
        2.0.0.2
        2.5.0.1
If your application doesn’t say which version, it will use the latest version; otherwise the application can say it should always use, for example, version 1.0.0.1, or say 1.0 <= version < 2.0 – no need for slimy soft links.
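A sketch of how that resolution could look from a shell, given the tree above (the range test is simplified here to a prefix match on the major version):
% ls /library/openGL | grep '^1\.' | sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | tail -n 1
1.1.0.1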
The reason users shouldn’t see the FS is the same reason users shouldn’t see the libc API. I see the FS as no different from any other API; the only difference is that somewhere we decided that this API was a good UI metaphor and built thin GUI wrappers around it, like file managers and desktop icons and folders for user-managed documents and such. Imagine if we did the same for libc: what is a good icon for printf that my father could use? That question is just wrong. (Hint: it assumes the solution before stating the problem.)
The FHS may very well be a bad API, and designing a better API is probably a good idea. But the intended audience for an API is application programmers. Don’t design a bad UI for application users just because the basic assumption is that the application user is the audience.
If you want to fix the way a user interacts with computing objects, I don’t think the solution looks anything like a structure of files, hierarchical or not.
If we do have to change everything, why not use a lot of variables like
$(bin) $(sbin) $(local_bin) $(include) $(local_include) etc. Then every distribution can make its own choices as to where to place its files, and Joe User who doesn’t want to know how his computer works can easily find important system files to delete.
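This is roughly what the GNU build conventions already do at configure time; a minimal shell sketch of the indirection (variable names taken from the post above, values just one distribution’s possible choice):
# each distribution defines the locations once...
bin=/usr/bin
sbin=/usr/sbin
local_bin=/usr/local/bin
include=/usr/include
# ...and packages refer only to the variables, never to literal paths
install -m 755 myprog "$bin/myprog"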
How has this change to human-readable names worked out for OS X or GoboLinux users in helping them understand how the computer works? It just seems to me like a pointless area to focus on.
Then again, to guess at one possible answer to that question, I seem to recall what I learned about Plan 9. In their papers about namespaces, and from what I was told, the reasons they shifted to a namespace-rich architectural design were portability, scalability and predictability.
“The view of the system is built upon three principles. First, resources are named and accessed like files in a hierarchical file system. Second, there is a standard protocol, called 9P, for accessing these resources. Third, the disjoint hierarchies provided by different services are joined together into a single private hierarchical file name space. The unusual properties of Plan 9 stem from the consistent, aggressive application of these principles.”
“The client’s local name space provides a way to customize the user’s view of the network. The services available in the network all export file hierarchies. Those important to the user are gathered together into a custom name space; those of no immediate interest are ignored. This is a different style of use from the idea of a `uniform global name space’. In Plan 9, there are known names for services and uniform names for files exported by those services, but the view is entirely local. As an analogy, consider the difference between the phrase `my house’ and the precise address of the speaker’s home. The latter may be used by anyone but the former is easier to say and makes sense when spoken. It also changes meaning depending on who says it, yet that does not cause confusion. Similarly, in Plan 9 the name /dev/cons always refers to the user’s terminal and /bin/date the correct version of the date command to run, but which files those names represent depends on circumstances such as the architecture of the machine executing date. Plan 9, then, has local name spaces that obey globally understood conventions; it is the conventions that guarantee sane behavior in the presence of local names.”
The above two paragraphs were taken from an article on the Plan 9 website, which didn’t seem to be reachable at the time of this post (I had a copy just lying around, though).
You can always check Wikipedia for Plan 9… the point is that with well-established conventions and the concepts of union directories and localized namespaces, they seem to have a solution that could address the currently debated issues with the FHS.
All this debate about the FHS being too complicated for newer users is just sad. I started off my computing days using the Atari 8-bit computers. There really were no specific file structures for them at all. Then came the Atari ST, and with the exception of the Auto folder and a desktop.ini or newdesk.ini file, depending on what version of the OS, there weren’t any ‘system’ files, so everything was just either the program or its libraries, and they went in the program’s folder.
Then I used Amiga OS and Windows 95. Frankly when I switched to Linux it was a five minute read and I completely understood why the directory structure was the way it was, and it made complete sense.
So those who are confused by it… read Wikipedia or something; it explains it nicely.
Thanks for your comment. It seems the most down-to-earth one here. People who say the FHS is “complex” really do not understand the meaning of “complexity” even remotely.
In addition, it seems that half the problems stem from the fact that the FHS isn’t strict or detailed enough, leaving too much leeway for distros to decide where stuff should go. If anything, the FHS should contain more “complexity”, to the extent that it should contain even more restrictions and details about how things should and should not be done.
The UK’s RISC OS, used by desktop PCs running an ARM processor, was like that.
All folders can be renamed and put wherever the user wants. This allows the user really deep organising control.
If you want to install an app, you copy it from the source material. If you want to uninstall, you just delete it, as all apps were self-contained inside a folder. No hunting for other files to delete.
I believe Linux has a GUI shell based on RISC OS, called ROX. You should give it a try.
http://roscidus.com/desktop/
Imagine being able to re-arrange all the folders in your root drive to suit the way the work you’re currently doing is going.
Your desktop becomes so productive it’s unreal. You don’t have to hunt to find things or wonder where the OS has put files, because it’s organised the way you like it, not as a 3rd-party developer decided.
…that you cannot ‘reason’ about an OS like you can with mathematics, mechanics, physics, electronics, chemistry, and biology. In those fields I can make inferences, form hypotheses, and conduct experiments. When using a computer however, whether it’s through the command line, writing a program, or even just knowing which icon to click, I have to know the EXACT ‘name’ of a thing to get anything done.
The only way to reason about names and symbols is through their cultural significance. The name ‘settings’ is more meaningful only because it is (or perhaps just SEEMS to be) culturally significant to a larger group of people. However, even with this cultural reference, I often can’t infer the meaning of an OS related name/word/term based on the context it is used in like I can with a natural language, especially on the command line where names are nearly devoid of context. Directories can provide this kind of context, but only if there are already enough meaningful names in the path.
The specific names are really not that important. What is important is what they MEAN, and the meaning of these names are either not shared within a large enough cultural reference (or perhaps not the particular cultural reference some people want), or they are still used despite their reason for existence becoming obsolete or forgotten.
This is why I don’t believe a ‘semantic’ web is possible. You would need too large a cultural reference to do it (the whole world), and it would have to change over time.
I came up with an idea like this six years ago and actually tried to implement it:
http://groups.google.at/group/alt.os.linux/browse_thread/thread/1c5…
At that time I was mostly bombarded with comments about how absurd the idea was. How interesting to see the sentiment turn and awareness of the problem grow. Unfortunately it took six years to get here. Considering that, chances are I will not live to see the day Linux uses a decent directory structure.
Hello,
after years of studying packaging systems and software distribution technologies – OS X bundles, ROX AppDir, the GoboLinux way of doing things, Autopackage, Klik & co., BSD ports, the PC-BSD way of distributing catalogued software to the world, Java Web Start, NetBSD pkgsrc, the ex-Lindows Klik-like catalog with browser integration to mimic OS X dmg management… and so on – I made SpatialBundles.
Single-file, full application bundles that you can manage like any other file in a typical, human, object-oriented way – move, remove, send – without using any intermediate layer like GIO/KIO/GVFS & co.; they just use the power of the file system engine (the layer exported by the kernel and available in the same way to everyone).
I think the best way to get a feel for what I mean is to try software packaged with the SpatialBundle technology.
Here you can try it:
http://downloads.infodomestic.com
Today the SpatialBundles are built on Ubuntu Linux, but in the future it will be easy to build them against any UNIX flavour that supports a POSIX shell, like *BSD, OS X and Windows.
To get a better feel for the power of what I did, just try Winamp, which represents more than one technology glued together into a SpatialBundle.
SpatialBundle is made for the generic human who does not like complexity. Think of my little children or my grandmother: people who don’t know much about computers but just want to make a few direct clicks on object-oriented things.
The process to manage a SpatialBundle is reduced to:
1) Download it from the internet or receive it by mail, USB key, CDROM or whatever works best for you.
2) Add the exec attribute to the file.
3) Double-click on it.
Then you have your application running
No root password
No installation
No dependencies required other than those provided by a standard Ubuntu (operating system) installation.
SpatialBundles now support the freedesktop menu, and hidden configuration and local files.
When you start a SpatialBundle, an icon appears on the desktop and in the tray.
The tray icon gives you access to a little menu to better manage the package (About, Open, Send, Reset); e.g., the Send item lets you send your bundle via mail (in the future by Bluetooth, like on a generic mobile phone) or send it to the Desktop or a selected folder…
At this point in my development cycle, I think there is nothing equivalent to SpatialBundles in the world; they seem to be really unique.
I know there are a lot of very similar technologies around, but nothing that comes so extremely close to a file/object without any dependencies other than a POSIX shell.
SpatialBundles are self-protected against code injection, so it’s up to the distribution to provide a key-signed catalog.
If code injection has happened, the SpatialBundle does not start at all.
The portability of a SpatialBundle is guaranteed by using the most portable and widely available language there is: POSIX shell. No Perl, no Python, no Ruby – no language dependencies other than POSIX shell.
This helps me think in terms of easy migration across OSs like OS X, BSD and finally Windows (why not!!).
Today I’m fine-tuning the builder before releasing it as GPL source code, but you can freely use the SpatialBundles I have already made.
Hope this will help you thread your life better