The split between /bin and /sbin is not useful, and also unused. The original split was to have “important” binaries statically linked in /sbin which could then be used for emergency and rescue operations. Obviously, we don’t do static linking anymore. Later, the split was repurposed to isolate “important” binaries that would only be used by the administrator. While this seems attractive in theory, in practice it’s very hard to categorize programs like this, and normal users routinely invoke programs from /sbin. Most programs that require root privileges for certain operations are also used when operating without privileges. And even when privileges are required, often those are acquired dynamically, e.g. using polkit. Since many years, the default $PATH set for users includes both directories. With the advent of systemd this has become more systematic: systemd sets $PATH with both directories for all users and services. So in general, all users and programs would find both sets of binaries.
Proposal on the Fedora wiki
I think Arch already made this move a while ago, and it seems to make sense to me. There’s a lot of needless, outdated cruft in the directory structure of most Linux distributions that ought to be cleaned up, and it seems a lot more distributions have started taking on this task recently.
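If you’re curious how far your own system has already gone down this path, here’s a small illustrative C sketch (mine, not from the proposal; the output varies by distro) that prints where the classic binary directories actually resolve and what the default $PATH contains:

    /* Illustrative sketch: print where the classic binary directories resolve
     * and what $PATH contains. On a distro where the merge is complete, all
     * four directories point at the same place. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    int main(void) {
        const char *dirs[] = { "/bin", "/sbin", "/usr/bin", "/usr/sbin" };
        char resolved[PATH_MAX];

        for (size_t i = 0; i < sizeof dirs / sizeof dirs[0]; i++) {
            if (realpath(dirs[i], resolved))      /* follow any symlinks */
                printf("%-10s -> %s\n", dirs[i], resolved);
            else
                perror(dirs[i]);
        }

        const char *path = getenv("PATH");        /* default already lists both */
        printf("PATH = %s\n", path ? path : "(unset)");
        return 0;
    }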
Thom Holwerda,
Yes, there’s tons of legacy cruft in there and I’d like to see a mass simplification. I merged these directories (among others in /lib and /usr) in my distro in 2007 or so. I had to use symlinks to keep software happy though because there are a lot of binaries and scripts that have paths hard coded and will not work when moved.
Indeed. People forget Linux is now Unix.
Fedora doesn’t, but there are some niche distros that still do; stali comes to mind immediately, as well as Oasis, though I don’t think there has been much development on it lately. The concept seems to be most popular with musl libc-based distros, with the notable exception of Alpine Linux.
It will be interesting to watch the Fedora team implement this change, and of course like many other things in the Linux ecosystem, once Fedora does it the rest of the major distros fall in line, so this is likely coming to your favorite distro soon (unless you are a weirdo like me and use Void and Alpine).
Morgan,
I agree, static linking can still be very useful for creating simple Linux applications that are more portable because they carry no external dependencies. This is often easier than trying to maintain different dependencies across distros. I’ve done this on CentOS.
Speaking of things which are artifacts of a different time, dynamic linking. LOL
Newer programming languages are defaulting to static compilation, and more things should probably be statically compiled. Dynamic linking is useful, but maybe not for everything.
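If you’ve never compared the two directly, a tiny sketch (illustrative only; exact library lists and sizes vary by toolchain and distro) makes the difference obvious:

    /* hello.c -- trivial program for comparing dynamic and static builds.
     *
     *   gcc hello.c -o hello-dynamic          # resolves libc at run time
     *   gcc -static hello.c -o hello-static   # embeds libc in the binary
     *
     *   ldd hello-dynamic    # lists the shared libraries it depends on
     *   ldd hello-static     # reports "not a dynamic executable"
     */
    #include <stdio.h>

    int main(void) {
        puts("same program, different linking");
        return 0;
    }

The static build gives up shared library updates and some disk/memory sharing in exchange for a binary you can copy to another distro and just run.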
Flatland_Spider,
Yeah, whether software is dynamically or statically linked doesn’t really change the way a user invokes it, and it makes no difference where the binary is stored. These two decisions should be made independently.
This makes sense. “sbin” and “bin” being separate made sense when *nix systems were timesharing systems with small disks, but most current *nix systems are headless servers with maybe one person logging into them. People can stop pretending the majority of Linux systems aren’t single-purpose appliances.
Next, unmerge “/usr” or admit “/sys” should have been for the install instead of procfs v2. LOL I want “/usr” back for user accounts.
I disagree completely.
Sure, if you want to make new stuff, then pick just one directory for that (i.e. /usr/bin), but for heaven’s sake leave everything else where it was originally placed.
Merging (i.e. smushing, smashing, etc.) directories with a very loooong history will lead to unintended consequences and will result in symlink spaghetti just to satisfy someone’s superficial make-work idea that things should be “tidied up” at the top level.
ponk,
The problem with this is, quite obviously, that it leaves us with lots of legacy cruft over time. (Not just Linux, but Windows, macOS, etc.) So naturally the question is one of cost versus benefit: is cleaning up the cruft worth the cost of breaking with old convention? Some would say yes, others might say no.
I’d like to point out that these costs and benefits are affected by the period of time involved. Cleaning up cruft every build can and will result in a lot of overhead and thrashing, which can be harmful in and of itself; an example of this is the Linux kernel’s unstable internal ABI, where out-of-tree kernel modules break frequently. At the other extreme, never cleaning up cruft means you accumulate more and more of it over time, burdening future generations with nuanced details whose justifications are long obsolete. The example I’d pick for this is the termios & tty subsystem responsible for very low-level console operations: *nix has inherited tons of obsolete complexity from 1960s/70s line printers even though the hardware is long gone and kernels have no business being designed around it today. And yet there it is, continuing to complicate shells, SSH, etc.
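To make that concrete, here’s a minimal sketch (illustrative only, using the glibc/BSD cfmakeraw() helper) of how much teletype-era state a program has to switch off just to read one raw keypress:

    /* Illustrative only: putting the terminal into "raw" mode means undoing
     * settings (echo, line buffering, CR/NL translation, output processing)
     * that exist because of 1960s/70s terminal hardware. */
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void) {
        struct termios saved, raw;

        tcgetattr(STDIN_FILENO, &saved);           /* remember current state */
        raw = saved;
        cfmakeraw(&raw);                           /* clears ICANON, ECHO, ICRNL, OPOST, ... */
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);

        char c = 0;
        if (read(STDIN_FILENO, &c, 1) != 1)        /* one unbuffered keypress */
            c = '?';

        tcsetattr(STDIN_FILENO, TCSANOW, &saved);  /* restore the terminal */
        printf("read: %c\n", c);
        return 0;
    }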
Of course, one “solution” is to hide the cruft behind cleaner abstractions. This does work, and SDL is a good example of hiding tons of complexity behind a relatively simple abstraction. There are still drawbacks though: the code inside SDL itself can end up very nuanced and hard to maintain. We can also end up with leaky abstractions and code that has to work around the abstraction, which is bad. Furthermore, abstraction layers often add overhead, especially when trying to emulate a new model on top of a different underlying model.
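For comparison, the whole “hello window” in SDL2 is a handful of calls (sketch only, minimal error handling), which is exactly the kind of complexity-hiding I mean:

    /* Sketch: a few SDL2 calls hide a large amount of platform-specific
     * windowing and input complexity. */
    #include <SDL2/SDL.h>

    int main(void) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            SDL_Log("SDL_Init failed: %s", SDL_GetError());
            return 1;
        }
        SDL_Window *win = SDL_CreateWindow("hello",
                SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                640, 480, SDL_WINDOW_SHOWN);
        SDL_Delay(2000);                 /* keep the window up briefly */
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }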
I may be getting into the weeds here, but my point is that there’s a balance of needs and this balance changes with time. As more time elapses, we should start to favor cleaning up rather than imposing legacy decisions indefinitely.
I’d say take a look at GoboLinux, since they’ve done a great job at cleaning up the file system and it’s so much more pleasant to work with.
https://gobolinux.org/
You’re right that change can be difficult, but I still feel there is merit in cleaning up legacy code & designs long term. The way we go about it becomes very important for a smooth transition. If it can be done with respect for the needs of users, earnestly mitigating the problems they have, then I’m all for it. The problems tend to stem from upstream developers who don’t give a damn about their downstream users; that’s exactly why we tend to get a bad taste for changes in the first place. Hell, look at Wayland and those of us who still can’t use it; it’s probably the perfect example of this conflict.
And, of course, I forgot one of my favourite quotes from Usenet days:
| This is Unix, we shouldn’t be saying “why would you want to do that”,
| we should be saying “sure, you can do that if you want.”
| — Steve Hayman
So sure, by all means design and create a Linux distro to your liking, with all the so-called “cruft” excised. It would not affect anyone who still prizes that cruft because there will always be a version of Unix/Linux with all that legacy still available as part of its core.
I would still argue that there’s not much value in “cleaning up” because there’s no real cost to leaving things the way they are, and there is value in keeping all that history in place for others to discover. But for those who really want to go down that path, the exercise will surely add to their knowledge of the history of Unix/Linux.
ponk,
I see…but I don’t quite agree. There is such a thing as “maintenance burden”, where overhead multiplies over time. The costs of not cleaning up the baggage will eventually exceed the costs of cleaning it up.
It’s a real mess, and I’ve faced these challenges personally on two separate occasions: once as a developer porting BACnet code (a low-level RS-485 serial protocol) to Linux, and again while developing a distro and having to deal with obsolete terminal interfaces.