“This document outlines the set of requirements and guidelines for file and directory placement under the Linux operating system according to those of the FSSTND v2.3 final (January 29, 2004) and also its actual implementation on an arbitrary system. It is meant to be accessible to all members of the Linux community, be distribution independent and is intended to discuss the impact of the FSSTND and how it has managed to increase the efficiency of support interoperability of applications, system administration tools, development tools, and scripts as well as greater uniformity of documentation for these systems.”
FSSTND is the old standard; the latest released one is called FHS 2.3 and is available here:
http://refspecs.linuxfoundation.org/FHS_2.3/fhs-2.3.html
There is also a beta of FHS 3.0 here (that includes /run and /sys among others):
http://www.linuxbase.org/betaspecs/fhs/fhs/index.html
See here for the history of FHS: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard#Release_…
PDF or ebook somewhere?
Kochise
You can find PDF, PS, and text formats here:
http://refspecs.linuxfoundation.org/fhs.shtml
Looks to me like they refer to the same document by different names.
Edit: although the format appears different.
Wasn’t it on OSAlert last year that there was a direct quote from the guy who originally wrote the file system, saying all this was nonsense? The reason for sbin and usr came down to the low capacity of hard drives and nothing else at all. A disk filled up, so they invented a reason to move stuff elsewhere. In hindsight this was a mistake, as people now argue over the directory structure and its purpose, and nobody is willing to fix the mess and remove the excess.
Nope. sbin was originally for STATICALLY compiled binaries that could run standalone and didn’t need any dynamic libs to be loaded.
That is important for the early phases of bootup, and for single-user rescue tasks, when /usr/lib is not mounted yet.
However, just using /bin+/lib on the root disk and /usr/bin+/usr/lib on the OS disk would do the trick if you ask me.
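For what it’s worth, ldd will tell you whether a given binary is statically or dynamically linked; a quick sketch (the binaries named here are just examples):
ldd /bin/busybox     # a statically linked build prints "not a dynamic executable"
ldd /usr/bin/perl    # a dynamically linked binary lists the shared libraries it needs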
In this day and age of bind mounts, why even bother? Have a small /bin and a small /lib on the initrd and bind-mount the real disk over them.
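A minimal sketch of how that might look from an early-boot script, assuming the real root is already mounted at /newroot (the path is just an example):
mount --bind /newroot/usr/bin /bin
mount --bind /newroot/usr/lib /lib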
No, sbin is for SYSTEM binaries. Anything that a normal user will never run. sbin is usually not in a user’s PATH.
sbin and bin have their use and should/will remain separated forever.
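On a typical setup you can see that split directly; the exact values are distro-dependent:
echo "$PATH"
# an ordinary user often gets something like /usr/local/bin:/usr/bin:/bin,
# while root's PATH also includes /usr/local/sbin:/usr/sbin:/sbin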
/usr/bin, /usr/sbin and /usr/lib are redundant given the size of our disks, however. In fact, a few distros create them as symlinks to /bin, /sbin and /lib.
Other distributions do the opposite, with the symlinks being in the root and the real directories being in /usr.
To those arguing that /sbin and /bin are useful for rescue: then explain to me, what is the purpose of the initrd? Drivers can perfectly well be loaded from the disk or built into the kernel, so what remains? The rescue prompt!
For statically compiled programs, some systems offer (or offered?) a specific /stand directory located on the root partition that contains programs to be used under absolute emergency circumstances.
As an example, the FreeBSD manual page for the file system hierarchy, “man 7 hier”, explains /sbin as follows: “system programs and administration utilities fundamental to both single-user and multi-user environments”.
Source: http://www.freebsd.org/cgi/man.cgi?query=hier&sektion=7
The separation is intended and may even be useful.
This is possible. Non-Linux systems sometimes keep the differentiation between /sbin and /usr/sbin and define /usr/sbin as follows: “system daemons & system utilities (executed by users)”.
This concept and its execution are relatively specific to Linux, whereas the directory separation is more generic and applies to many kinds of UNIX and UNIX-alikes.
Sometimes you’re lucky to get that far.
As I said, for normal users all this directory layout discussion does not matter. In case of an emergency that you cannot even imagine today, and under the worst circumstances, you’ll be happy about any separation and differentiation that lets your system come up to a state where you can start recovery procedures or other actions to get out of the shit. Because you never know. And if you don’t experience that kind of trouble: be happy. Ignore the rest. It doesn’t matter.
“Nope. sbin was originally for STATICALLY compiled binaries that could run standalone and didn’t need any dynamic libs to be loaded.”
Oh really? The guys that created it would disagree…
http://www.osnews.com/story/25556/Understanding_the_bin_sbin_usr_bi…
I believe that Fedora (and therefore the next RHEL) has moved away from the standard directories to put everything inside usr/.
There are still symlinks from the old locations though.
What it meant back then is not important; what’s important is what it means today, and today it’s an established *nix convention.
/bin, /etc, /usr – it’s all dinosaurs.
At least dinosaurs went extinct…
So, what are you trying to say? That “C:\WINDOWS” and “C:\Program Files” are better? That you’d rather deal with something like “C:\WINDOWS\system32\drivers\etc\hosts” instead of “/etc/hosts”?
I’d say that the UNIX file system structure is far from “extinct.” It could use a few improvements (some of which are already being done), but as it is it’s very comfortable to navigate–either by command line or by graphical file manager. By comparison, I can navigate a Windows file system fine with Windows Explorer (well, in most cases–the hosts file example above is one that I never remembered…), but there is no point in even trying to navigate the Windows file system structure by commands. Too many special characters needed to bypass spaces in file/directory names, and the file/directory names tend to just be too damn long.
Even the Amiga is miles ahead of the old DOS structure (which Windows continues to this day).
At least it has the ability to Assign drives or directories to wherever you want them.
For example, you can do DH0: (or HD0:, depending on whatever you want) as the first partition and then name it System, or whatever, but then use an assign to make it so you can use System: to mean that partition’s root. Or let’s say you have the SSL software (AmiSSL) installed in the Utilities folder under the System partition. You could put in the startup-sequence file “Assign >NIL: AmiSSL: SYS:Utilities/AmiSSL”.
Is it simple? Not especially. Is it useful? Very. Is it unix like? Certainly. You basically do the same thing with mount points in Linux.
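A rough Linux equivalent of the Assign example above, as a sketch with made-up paths:
mkdir -p /amissl
mount --bind /mnt/system/Utilities/AmiSSL /amissl
# or, for a non-root user, a symlink gives much the same convenience:
ln -s /mnt/system/Utilities/AmiSSL ~/amissl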
Anytime I see articles or distributions talking about changing paths around, it makes me want to start beating people. In fact, I’m debating whether or not I should dump Arch Linux, which is going that route, and start using something else. Anyone know of another rolling release as awesome as Arch that doesn’t jump on every single bandwagon that Fedora wants to push? (Fedora was the first one I saw that said they were merging all of the ‘bin/sbin’ folders into one…)
Probably inspired by the logical names in VMS. Cool feature nonetheless.
http://en.wikipedia.org/wiki/Files-11#Logical_names
You could do something comparable with DOS, using the SUBST and JOIN commands (e.g. “SUBST P: C:\PROJECTS” makes a directory available as a drive letter, while JOIN grafts a drive into a directory on another drive). Both can be used to compose a “lazy man’s mount command”.
Allow me to mention a small detail.
Today, “folder” is being used synonymously for a directory. This is technically wrong. A directory is represented by a folder (a pictorial element) in many (or most) GUIs, but it’s not the same. The relations we are talking about are “is a” vs. “is represented by a” or “looks like a”. Therefore directory is the correct term, and “folder” is the name of the kind of icon used for a directory. (By the way, it’s not the only existing visual representation. Others are a filing cabinet or a drawer.) No, honestly: Terminology sometimes matters. Just because many people insist on calling directories “folders”, they do not become folders.
I know people will start bashing me for being more than pedantic about this issue. It will probably convince me to call any computer “Bob”.
Hey, if you can’t be pedantic on the internet – where can you be?
Total Commander.
Don’t blame Windows if you’re using Explorer for navigation.
As for file system layout – GoboLinux did it right.
Wait–let me get this straight. Don’t blame Windows for coming with a crap file manager? Something as basic and critical as a file manager?
But actually, that’s not the problem. I can use Explorer; I never really had any problem with it that I recall. I have tried others very briefly (pretty sure Total Commander was one of them), but either didn’t like them or the idea that you have to pay for such basic functionality that comes with the system in the first place (also not a fan of nagware)… and again, Explorer worked just fine for me.
The real problem that I’m referring to is the file system itself, not the interface/file manager (Explorer). It sucks no matter what file manager you throw at it. And I have to say that I wasn’t exactly amazed with the GoboLinux file system either… seemed heavily Apple-inspired to me. Their idea was never adopted by anything else, and where is GoboLinux today? Seems it’s long been dead. No new release in years. It was interesting, though.
Let me get this straight 2x: I should blame Linux, not Adobe, that Linux doesn’t come with a decent photo editing tool?
I should blame Linux, not Microsoft, that Linux doesn’t open my shiny new docx file?
Boy, Linux is unbelievably bad!
The operating system is one thing; applications are another.
[q]Let me get this straight 2x: I should blame Linux, not Adobe, that Linux doesn’t come with a decent photo editing tool?
I should blame Linux, not Microsoft, that Linux doesn’t open my shiny new docx file?
Boy, Linux is unbelievably bad!
The operating system is one thing; applications are another.[/q]
What point are you making?
[mis_q]I should blame Windows, not Adobe, that Windows doesn’t come with a decent photo editing tool?
I should blame Windows, that Windows doesn’t open my shiny new docx file?[/mis_q]
Incidentally, most Linux distributions can open docx files, and many come with a decent photo editing tool; Windows can’t do either, or even open PDFs. Should Windows be blamed for not having a decent file manager (which is a critical part of the GUI of the OS)? Obviously it should: if you need a third-party app just to manipulate files, that is bad.
To be fair, if you are navigating those paths by CLI, then you’d be better off using the env vars (e.g. %programfiles%, %windir%, etc.). Plus they take into account non-standard directory paths (which are rare, but can happen).
Worse than that, on a really big project we run out of characters in our system model hierarchies (around 260 characters max; I don’t remember the exact number off-hand, except that it’s small) unless you keep your folder names really short.
And we constantly must ask each other the eternal question, “So what did you map to M: to run that script again?”
Windows’ drive letters are an abomination IMHO, a relic of the distant past where each floppy drive had to be manually tended with care. It’s always a bit of a relief to me to return to a sane Linux/Unix unified file system.
Yeah, who needs to keep the binaries and the configuration files separate anyway? Just put things at random all over the place.
Structure? Ain’t nobody got time for that.
SunOS/Solaris was really bad back in the day … they kept executables under /etc!!
RH/CentOS still has Apache logs and libraries in /etc.
Well, symlinks to the logs and libraries, but totally braindead regardless. I guess it’s some kind of compatibility with something that was done back at the dawn of time. You know, “enterprise” stuff.
A tradition originating from classic UNIX. If I remember correctly, UNIX System III had those, like /etc/mount or /etc/fsck. I remember this because I have been working for some time on WEGA, a UNIX System III derivative developed in the GDR for the EAW P8000 workstation. Those entries are also mentioned in the OS manual.
The intention of /etc, read as “et cetera”, emphasized the character of “additional” things, whereas one would usually consider things like mount essential rather than additional.
# ls -l
bin -> usr/bin
lib -> usr/lib
sbin -> usr/sbin
We have to start somewhere, but I fear that those links will have to stay for the next 20 years.
The good thing is that all that mess is on the root partition, which doesn’t interest users that much. Mounting is no longer a problem (even the media and mnt dirs are no longer required, since removable media now ends up under /run/media/<user>), /dev is maintained by udev, and /home is often on another partition. Only /etc is still important, and it’s a mess, but so much software depends on that mess that it’s impossible to solve.
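For example, on a udisks-based desktop a plugged-in stick typically lands here (the volume label below is made up):
ls /run/media/"$USER"/
# MY_USB_STICK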
The /lib and /sbin links could probably be removed without much fuss – there’s little software that actually cares about those directories specifically.
The tricky one is /bin, since there’s a billion scripts out there starting with #!/bin/sh or #!/bin/perl that would need to be modified for it to work. Not to mention tools (like autotools, for instance) that *generate* such scripts.
Wouldn’t recommend it. /lib is where the kernel modules are kept (under /lib/modules). You would have a very hard time booting if there were no /lib to be found.
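For illustration, the module tree lives under /lib/modules, keyed by whatever uname -r reports:
ls /lib/modules/"$(uname -r)"/
# modprobe, depmod and the initramfs tooling all expect to find the modules at this path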
Why not put everything in /bin, /sbin and /lib? I mean, if you’re going to change this why even bother with /usr? You could just move /usr/local to /local.
What’s so important about /usr?
I’m curious, what real-world problem does this solve?
The main goal is to have one less directory for “exe” files, and generally a cleaner /. There’s too much in there now. I can live with /etc, /usr, /home, /boot, but /srv, /mnt&/media, /bin&/sbin, /opt? I rarely even look inside of those, so why are they exposed so much? I simply type something in the terminal (or use #!/usr/bin/env) and hope that the PATH in my shell config is OK. Or plug in a USB memory stick and click the icon.
I wonder if we’ll ever have one dir for binaries in Linux and throw PATH away.
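For what it’s worth, it is easy to see how much PATH is quietly doing today (output will vary per system):
echo "$PATH"
command -v perl    # prints the full path the shell resolves the command to via PATH, e.g. /usr/bin/perl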
But, why move everything under /usr, instead of moving everything from /usr into the root?
The Linux FHS is a joke. It’s been the source of countless and endless arguments. Some people use it, some use only portions, and others disregard it completely. Obviously there is a problem, but the Linux community is far too segmented to ever resolve it. Too many people with too many varying (and often opposing) opinions. The beauty of Linux is that if you don’t like something, you can change it. The Achilles’ heel is that uniformity is thrown right out the window.
What? You don’t like a standard with so many optional parts that two distros can be “FHS-compliant” without having more than 4 directories in common?