James Hague: “But all the little bits of complexity, all those cases where indecision caused one option that probably wasn’t even needed in the first place to be replaced by two options, all those bad choices that were never remedied for fear of someone somewhere having to change a line of code… They slowly accreted until it all got out of control, and we got comfortable with systems that were impossible to understand.” Counterpoint by John Cook: “Some of the growth in complexity is understandable. It’s a lot easier to maintain an orthogonal design when your software isn’t being used. Software that gets used becomes less orthogonal and develops diagonal shortcuts.” If there’s ever been a system in dire need of a complete redesign, it’s UNIX and its derivatives. A mess doesn’t even begin to describe it (for those already frantically reaching for the comment button, note that this applies to other systems as well).
I don’t think it’s the underlying design of UNIX that needs to change so much as the applications layered on top of it. The userspace stuff. In contrast with other operating systems it’s often the other way around, where the userspace apps and controls are well designed and mesh together, but the underlying system is often a mess.
Totally agree. Just look at TermKit (http://acko.net/blog/on-termkit/) to see what a terminal should look like.
All I see is yet another GUI/DE replacement that runs in its own window.
A new broom sweeps clean, but new brooms become old brooms in short order.
That’s the heaviest site I’ve ever visited.
I hope that the whole “HTML5 + JS + CSS3 (+ webGL + …)” stuff will not be worse than Flash in terms of CPU usage
Don’t get your hopes up too much… http://en.wikipedia.org/wiki/Wirth‘s_law
(and when you think about it, Flash was relatively light a decade+ ago – but then, it was also used primarily for its originally intended function of vector animations, not video streaming with overlays)
What? “Other operating systems” (at least where the userspace apps and controls are distinct enough to warrant talking about them like that; embedded OS are really outside of it, as are hobby OS) basically means… Windows – there’s not much of any other non-*nix around.
And the underlying system in that case, NT kernel and its immediate surroundings, is hardly a mess… seems to be less of a mess than typical *nix, actually.
(plus OTOH, I wouldn’t really call win apps “well designed and mesh together” – they, and the compromises forced by them, are the real reason for poor reputation of Windows)
It’s easy to complain that Cygwin doesn’t follow the Windows philosophy, but that’s not a commentary on the benefits or shortfalls of windows. Nor is it representative.
It’s not representative to say that a Java Application Server is “not Unix”. I mean, of course it’s not Unix – it’s Java. Java brings along its own virtual machine world that’s different from Unix (or Windows, or anything else).
The JVM is less “in your face” about its differences than, perhaps, Cygwin is on Windows, or some other virtual machine environment running on top of Unix (such as an MV emulator to run legacy IBM applications). But it’s certainly different.
The fundamentals of Unix came to life before the interconnected world of network computing became dominant. Even still, systems such as inetd and HTTP CGI made publishing pipelines trivial.
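To sketch why publishing pipelines were trivial in that model: a CGI handler is just a filter from environment variables and stdin to stdout, while the server (inetd plus a tiny httpd, or Apache) owns the socket. The handler name and query string below are made up for illustration.

```shell
# Classic CGI in the Unix filter style: the web server accepts the
# connection and passes the request in environment variables; the
# handler just writes an HTTP response to stdout.
cgi_hello() {
    printf 'Content-Type: text/plain\r\n\r\n'
    printf 'Hello, %s\n' "${QUERY_STRING:-world}"
}

# The server would run this once per request; here we invoke it directly.
QUERY_STRING=unix cgi_hello
```

The whole “protocol” between server and handler is environment variables and standard streams, which is exactly the process-pipelining model discussed above.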
The fundamental issue today with the Unix model is simply its inefficiency. Chaining heavyweight processes has costs, high enough costs that it’s worth going to a different dedicated model, a model that foregoes the simplicity and safety of the classic Unix system.
Combine that reality with the stark difference of the GUI event/window model to the command line process pipelining model, and it seems as if there is “less Unix in Unix”. Where Unix becomes simply a process loader rather than a component toolkit.
Java and database servers are the worst offenders here. A good app server deployment is one that dedicates the vast majority of system resources to a singular JVM instance. A dedicated DB deployment is the same kind of thing. Why leave free memory for generic file buffers when it can be better used for dedicated internal database pages, since the database is doing all of the I/O anyway?
But contrast those to a classic php_mod server on top of the legacy Apache forking model. That’s almost a pure Unix design. Process safety, effectively a stdin -> stdout processing chain. Embedding PHP in to the Apache server saves the process pipeline an extra fork and exec, along with the associated startup and resource costs. But process protection means that one connection can’t kill Apache, can’t kill the machine, can hardly kill anything.
While started by Apache, the OS still has a say and can control that process. Too much memory? Kill it. Too much disk? Kill it. Too much time? Kill it. Kill it now, kill it safely. Recover cleanly. Services the OS can provide, wheels I as a designer and developer don’t have to reinvent.
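A minimal sketch of that “services the OS can provide” point, assuming a POSIX shell: ulimit puts hard caps on a child process, and the kernel does the killing. The wrapper name and the specific limits are illustrative, not from the original comment.

```shell
# Let the OS police the worker: set hard resource caps in a subshell
# so they apply only to the command being run, not to our own shell.
# If the worker exceeds them, the kernel kills it; we don't reinvent
# that wheel in application code.
run_limited() {
    (
        ulimit -t 10      # at most 10 seconds of CPU time
        ulimit -f 2048    # at most 2048 blocks written to any one file
        "$@"
    )
}

run_limited echo "worker finished within its limits"
```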
Today, folks are not stacking processes in pipelines as much any more as they are stacking entire machines and sticking them together via the network. VMs are the unit of work today, which makes much of the Unix philosophy unnecessary. Back in the day, processes were cheap and ubiquitous enough to create and kill on demand for the most trivial of tasks. VMs aren’t quite that cheap, but designers are at the point where they’re considering them that way. While not as cheap as processes, they’re getting cheaper on their own every day.
Now the Unix philosophy is transcending the operating system itself.
In continuance of something I said in another article comment – I don’t think we can solve any operating system organization given that we can’t solve real world organization. The messiness of an operating system, whether it’s the user interface or file system, is a reflection of the messiness of the people (in general) who use them.
“Intelligent design” doesn’t even work for things that are actually intelligently designed, because sooner or later, outside pressures change (often as a result of the new design), causing the need for evolution of the system in a positive feedback loop.
I think the mistake is to try and force everything into just one kind of organization. Just like software design patterns and data structures, each organization has its own strengths and weaknesses in flexibility, performance and consistency maintenance. The argument between Torvalds and Tanenbaum notwithstanding, there just hasn’t been a scientific study of the properties of organization.
The creators of UNIX realized this years ago. That’s why they created Plan9.
Plan 9 which, if adopted in any meaningful way, would probably quickly end up similarly…
Its niche, hardly used, academic status is what allows it to remain pristine.
I whole-heartedly agree!
Indeed. With a suitable object-file* oriented desktop Plan9 would rock really hard.
* Plan9 is taking the “everything-is-a-file” concept to its extreme. OS/2 took “everything-is-an-object” to an extreme. Combine those and you have “everything-is-an-object-is-a-file”.
You also have two operating systems that went pretty much nowhere.
You can’t argue that OS/2 went nowhere. It may not have ended up on the average Joe’s home computer alongside Windows but it was (and still may be) used in corporate America.
The Win32 API that so many people complain about, me included, was originally developed for OS/2.
NT kernel likewise, no?
OS/2 actually DID go somewhere. Back in the early ’90s, it wasn’t a foregone conclusion that Windows would win the OS wars. OS/2 was largely sabotaged by its principal developer after the MS/IBM split. That helped lead to OS/2 leaving the consumer market, but it stayed in the corporate market – doing exceptionally well in the finance & manufacturing sectors. In fact, I once saw a crashed ATM that had OS/2 installed on it in a mall in Killeen, TX at the end of the ’90s.
Curious that “exceptionally well” made you recall a crashed OS/2 ATM ;P
Anyway, it really hardly went anywhere worldwide – the position of OS/2 in banking or manufacturing, in some places, most likely stemmed from the earlier, long standing IBM presence in those sectors there, before OS/2 was even conceived; it had nothing to do with any virtues of OS/2 itself.
And you can’t say it was sabotaged after the split when it hardly gained a foothold in the first place – also because of earlier (warped*) development processes…
…some of them by design. IBM wanted to use OS/2 to recapture the control over PC market. Of course most manufacturers and users wouldn’t go with that – so yeah, “in the early ’90s”, with already quite nice Win 3.x around (with strong worldwide presence), it was a foregone conclusion.
* you have to wonder what kind of people insisted on such codename – for most of the population not evoking pop scifi, but something completely different…
http://www.insearchofstupidity.com/ch6.htm
Like the English language, Unix is inconsistent, difficult to learn and impossible to master.
For the same reasons it is flexible, powerful and often fun.
I remember my first experience with Unix, back before I had access to the internet. I had to get a book just to figure out what the goddamn ‘help’ command was. That is how f**ked up and completely counter-intuitive Unix is.
I know some people will defend Unix to the death, but I hate it. Just because something is powerful doesn’t excuse it from being a pain in the ass to deal with. (C/C++ also comes to mind here.)
As an architectural masterpiece I prefer OpenVMS, but its shell was never as usable as the Unix shells.
Overall my favorite setup was NeXTSTEP — Unix shell + the best GUI I’ve ever used.
Like the English language, Unix is inconsistent, difficult to learn and impossible to master.
Perhaps it should be
Like the English language, Windows is inconsistent, difficult to learn and impossible to master.
As a Unix user since 1981, Linux since 1994, I find Windows far more inconsistent than Unix has ever been.
Want an example?
I have a VM running on Windows 7. No matter what I do on one drive, every time I want to start it, it needs an admin override. Move the VM to another system or drive and it does not ask for an admin override. Three different Microsoft gurus have looked at it and they are stumped.
Except we’re not talking about Windows here. Comparing Unix to Windows is like comparing your ugly girlfriend to your friend’s ugly girlfriend, and then arguing over which one is uglier. In the end, everybody loses.
Command lines are just cryptic. If you sat down at a computer running DOS, without knowing any commands, you would have had to get a book as well. Using Powershell for the first time is another example, it at least has a bunch of aliased commands to ease the transition.
Well, when I sit down in front of a Unix terminal now days and type ‘help’, I at least get a list of commands to try. That wasn’t the case when I first started
LOL!
I had to switch to a terminal and type help to verify this works.
$ help
In bash, it apparently does, although I’m not sure how helpful it would be to a complete neophyte.
Bash’s help command (and probably any other shell that has a help command) is only good for built-in shell features so it’s not that good to anyone who doesn’t realize they’re using a built-in feature of the shell. The shell on windows is similar, has its own help system for built-in features.
I’ll bet if you do this in Aix, HP-UX or Solaris, it won’t work.
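For the record, in bash the split looks roughly like this (assuming bash is installed; exact wording of the output varies between versions): `type` tells you whether a name is a builtin, documented by `help`, or an external program, documented by `man`.

```shell
# bash's "help" covers only shell builtins; external programs have man
# pages instead. "type" reveals which category a name belongs to, i.e.
# where to look for its documentation.
bash -c 'type cd'    # e.g. "cd is a shell builtin"  -> use: help cd
bash -c 'type ls'    # e.g. "ls is /bin/ls"          -> use: man ls
bash -c 'help cd' | head -n 2
```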
Command line interfaces are more like a language than selecting from pictures. As every language, you have to learn it to make use of all its features. That’s nothing bad per se; in fact, it’s the required “pre-knowledge” that enables you to utilize its immense power of expression.
But allow me to illustrate this by quoting Master Foo Discourses on the Graphical User Interface.
One evening, Master Foo and Nubi attended a gathering of programmers who had met to learn from each other. One of the programmers asked Nubi to what school he and his master belonged. Upon being told they were followers of the Great Way of Unix, the programmer grew scornful.
“The command-line tools of Unix are crude and backward,” he scoffed. “Modern, properly designed operating systems do everything through a graphical user interface.”
Master Foo said nothing, but pointed at the moon. A nearby dog began to bark at the master’s hand.
“I don’t understand you!” said the programmer.
Master Foo remained silent, and pointed at an image of the Buddha. Then he pointed at a window.
“What are you trying to tell me?” asked the programmer.
Master Foo pointed at the programmer’s head. Then he pointed at a rock.
“Why can’t you make yourself clear?” demanded the programmer.
Master Foo frowned thoughtfully, tapped the programmer twice on the nose, and dropped him in a nearby trashcan.
As the programmer was attempting to extricate himself from the garbage, the dog wandered over and piddled on him.
At that moment, the programmer achieved enlightenment.
Source: http://catb.org/~esr/writings/unix-koans/gui-programmer.html
Also note that (like human languages) pictorial elements can change their meaning. The most prominent example is the 3.5″ floppy disk, which means “save” even to those who do not know this medium anymore.
For details, read the article “The Floppy Disk means Save, and 14 other old people Icons that don’t make sense anymore”.
http://www.hanselman.com/blog/TheFloppyDiskMeansSaveAnd14OtherOldPe…
I don’t even try to claim that command lines (in general) aren’t cryptic. Some of them are, some are not. I could try to argue that one human language is less cryptic than another. It always depends on what language you already know. This kind of knowledge can be applied to command lines: If you know the language, there’s nothing cryptic in it. If you don’t know it, it’s mostly unreadable.
Again, try to also see this argumentation for pictures and how we “read” them. Well… the quotes aren’t necessary, I’d say. Pictures are also a form of language, with all the implications. However, expressing yourself in that language is much harder (in terms of usage related to a computer). You can select from a predefined set of symbols, but you cannot express yourself directly in those symbols (unlike typing letters, which form the language of a command line). This means what you can do with pictures is limited. You are limited in creativity on the basis of their language.
As always: Depending on specific settings, this can be a good thing or a bad thing.
Well go use Singularity or something then… then you’ll have no UNIX *and* no C/C++.
This is actually what is so important in the C and UNIX relationship.
UNIX’s success was partially due to the portability offered by C, and C became successful because UNIX took over universities.
If this integration had not been so strong, they would most likely have failed.
If my memory serves me right, the environment provided by Aztec C on the Apple ][ was Unix-like. It wasn’t beautiful, and it wasn’t ugly, but it wasn’t very friendly, either.
Really? I just typed “Help” into Ubuntu, and got… help. Not that hard, really.
Oh, wait, you were probably using a shell, which isn’t Unix – more like the command prompt on Windows or bash on Mac OS X. Common mistake.
But yes, if you try to use the command line *only*, “man” and “info” are not exactly intuitive. Google, on the other hand…
What I’m really saying is that Unix years ago isn’t remotely similar to Linux today in terms of new user friendliness – bad choice to use the word “is”, don’tcha think?
Give it another shot; you’ll be surprised.
No, not really. Sometimes, to access a file in the same directory I’m in, I have to do ‘./filename’ (or is it /.filename? I can’t remember). Some of the most important files in the system are in a directory named etc. Do you know what ‘etc’ means?
http://en.wikipedia.org/wiki/Et_cetera
Why the f**k would you put a lot of critical files in a directory that means ‘and other stuff’ ?
The default text editor for crontab on the systems I have to use is still vi, which is one of the most user-UNfriendly pieces of shit ever written. Hard drives are named ‘hda’ in the file system. And I could go on and on.
I suppose many Unix gurus would argue that the pain of learning such an ass-backwards and incomprehensible system as Unix is a rite of passage for enjoying its power. And I also understand that a lot of its eccentricities can be understood if you ever learn what a developer was thinking back in 1970-ish when all of this was being put together. I’m just saying that in 2012, we should be able to do better than this.
OK,you’ve convinced me – you really have tried to avoid Unix all these years, haven’t you?
You don’t need a preceding ./ to access a local file; the filename does nicely. On ALL modern operating systems of significance, including Windows, “.” refers to the current working directory (a most useful concept they borrowed from Unix). So if you have a file in the current directory named foo, you could access it as ./foo on Linux, .\foo on Windows, or simply foo on either one.
Windows tosses all of its files into C:\System, but then, it’s a single user, local operating system. Unix and Linux are multi-user network operating systems – they segregate files into several different directories depending on purpose. /etc holds all configuration files not necessary for boot and basic system configuration. Since booting and configuring the system are unique concerns relative to normal operation, it’s natural to think of all of the non-unique config data as “et cetera”. But you’re free to think otherwise.
When you launch vim in a modern version of Linux, you get an editing window. Those little icons at the top are buttons for the typical editing features that you would find in (say) Windows Notepad – from left to right on this computer, they are Open File, Save Current File, Save All Files, Print (separator), Undo, Redo (separator), Cut, Copy, Paste (separator), Find/Replace, Find Next, Find Previous (separator), Choose Session, Save Session, Run Script (separator), Make, Build Tags, Jump to Tags (separator), and Help. This doesn’t strike me as more difficult – or indeed, much different – than Notepad (ignoring script and make support, of course – but surely you can just ignore those?).
Now, if you want to run vim within a text window, it’s a bit more complicated – but ed in Windows isn’t exactly a paragon of user friendliness, either!
All of this is rather academic, though, since vim is not the default editor in a modern Gnome-based Linux system – gedit is.
But look, I really don’t care if you want to hate Unix based on a comparison of whatever you use today compared to what you tried decades ago. Feel free. But doesn’t that seem a little unfair? Just a thought.
What do you mean by “access”? If you’re going to read from or write to a file in the current working directory, no path needs to be specified. If you want to execute a file (a binary or a script) from the current working directory which is not part of $PATH (the search path for programs to execute), you need to prefix it with the ./ “here” path. This makes sure you don’t run the “ls” binary some hacker left in your home directory by accident.
The dot in front of a filename means it’s a “hidden file”. Files starting with . are not listed by ls by default; you need to use ls -a to see them. To be more generic, “.* is not part of *”.
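A quick demonstration of the dotfile rule, done in a throwaway directory so nothing real is touched:

```shell
# Files whose names start with "." are skipped by a plain "ls";
# "ls -a" (all) lists them, along with the "." and ".." entries.
dir=$(mktemp -d)
touch "$dir/visible" "$dir/.hidden"
ls "$dir"       # shows only: visible
ls -a "$dir"    # shows: . .. .hidden visible
rm -r "$dir"
```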
It seems you should read a little bit about UNIX history. In the older versions, /etc (having the meaning “et cetera”) did not only contain files for system configuration as it does today, but it did also contain “additional binaries” such as /etc/mount or /etc/fsck.
Today, some people interpret /etc as “editable text configuration”, which matches how it is commonly used on UNIX and Linux: text files for configuration of the system, its services and additional software.
Again, you should read some history. Agreed that vi is not intended to be a “word processing typewriter”, it’s a powerful editor. Again my statement about language applies: You have to know how to operate it in order to make full use of its power.
Some UNIX and Linux systems have a different standard editor (even though they often ship vi in the basic distribution). $EDITOR or $VISUAL can be used to configure what editor should be invoked for programs that will open a file for editing (e. g. chsh).
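The lookup order such programs typically use can be sketched like this (the `pick_editor` helper is hypothetical; real programs such as crontab do the equivalent internally):

```shell
# Typical editor selection: $VISUAL wins over $EDITOR, and a
# compiled-in default (often vi) is the fallback.
pick_editor() {
    printf '%s\n' "${VISUAL:-${EDITOR:-vi}}"
}

# Subshells so the variable changes don't leak into our shell:
( unset VISUAL; EDITOR=nano; pick_editor )    # -> nano
( VISUAL=emacs; EDITOR=nano; pick_editor )    # -> emacs
( unset VISUAL EDITOR; pick_editor )          # -> vi
```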
Yes, and what’s the problem?
The problem is that you are wrong. On some Linux systems, the 1st hard drive is named /dev/hda, yes. But some Linusi use sda. BSD uses ad0, da0 or ada0 for the same disk, depending on how it is attached to the system (and which driver grants access to it). OpenBSD did use /dev/wd0. On other UNIXes, it’s just /dev/hd0. On Solaris, it’s even “more complicated”, like /dev/dsk/c0t0d0s0. Since disks usually use labels, there is often no need to deal with these “bare metal” kinds of files. Just ignore what you don’t need.
Not quite. Learning the basics independently of their actual implementation is what makes a UNIX guru that powerful. He can use any system even though they are different. He has successfully learned how to deal with new situations, not being tied to strange concepts of how to do things. This flexibility is the result of learning. I know many people fear learning because it consumes their time, and they feel much more comfortable in their “just works” world. Until it stops working. Then they are helpless, having a black box that just doesn’t work. And of course, they cannot diagnose problems, create workarounds or create something new. They can only consume what others have left for them. A UNIX guru can always create the functionality he needs, and he can do so with the most limited tools. Because, you know, in the worst case, when nothing works, you’ll be happy for all that “ass-backward” stuff because it brings your data back, your system up and your company back into production.
UNIX has always been about development. It started its life as a development system, not as a consumer platform. This heritage is even present in modern systems that come with development tools, compilers and debuggers, with means of looking inside the black box. Free UNIX-derived systems even come with the source code of all their parts. All the parts of today’s modern technology that consumers take for granted are somehow related to these beginnings. Do you know why the Internet works today? Because there’s a lot of “old-fashioned” UNIX stuff (systems, concepts and philosophy) that keeps it running. This is not a short-term consideration of the kind you typically see in (home) consumer products.
What exactly do you mean? What are you missing in particular? If it’s just about that you don’t like it – just don’t use it. It is that simple.
UNIX and Linux development has come a long way. But also recognize that much older stuff like mainframe technology or COBOL is still alive and kicking, especially in governmental use (where money doesn’t matter). Could they do better? Sure! But why risk breaking something that has proven to work?
Plural Linusi, srsly?
loli ^^
Well, that’s true for most things in life, yes? Since I am a ‘computer guy’, I can diagnose and fix most of my own issues, unless it is a hardware problem, in which case I usually take it to a repair shop, because I don’t have the time nor patience to try and figure out which one of the hardware components is causing the machine to glitch or lock up.
But when it comes to cars, I don’t know shit. If it breaks down, I pay a mechanic to fix it. Same with my washer and dryer, air conditioner, plumbing, television, etc. If I were a doctor, lawyer, etc, the computer would be just another appliance to me. Nothing wrong with that, really. A doctor may not know much about computers, but probably knows a hell of a lot more than we do about healing a sick person. It’s just a different area of expertise. For this reason, nobody is ‘better’ than anybody else just because they know a lot about computers.
What I’m missing is something that’s intuitive. We both know how powerful Unix is, but why can’t we have something with the power of Unix, with the added benefit of being able to sit down and use it right away, without having to know that if you want to run a command named ‘foo’ that’s in the directory you’re in, you have to type ‘./foo’? The very fact that you had to sit and type out a lengthy explanation as to why Unix works the way it works is proof positive that we can do better than that. The way something works ought to be apparent by just looking at it. It shouldn’t be necessary to comprehend 30-40 years of an operating system’s history just to be able to use the f**king thing and understand the way it works.
All I’m saying is that Unix is a very powerful OS, but is about as intuitive to grasp as the Chinese alphabet.
I don’t even try to claim that a UNIX guru is superior to other professionals per se. I just say that he grew up with a stronger habit of learning, concluding, adapting, exploring, training and re-orienting than many other professions have (which are quite static in knowledge). In terms of computers, the UNIX guru of course is superior to the lawyer to whom the PC is just a tool.
And that’s what a computer with an operating system and application software is: a tool. So even the lawyer should try to treat it as such. To properly use a tool, you should at least know a bit about how it works and what you use it for. “Know your tools” may mean fully different knowledge for a lawyer than for a doctor. The lawyer will know how to retrieve his cases from the database, edit text, search for law content and references. The doctor will know how to access patient data, bring up x-ray images, magnify them or change contrast, administer medicine or order tests. Both have a thing in common: They have a computer as an important tool.
To reply to your car analogy: If you’re using your car as a tool, you don’t need to know about all its inner workings. I say that’s even impossible for a modern car because you’re surrounded by “black boxes”. Most of the stuff you cannot repair, or even remotely understand. But there are a few things you need to know, and you even have to prove it: You need to be familiar with the pedals, the steering wheel, the dashboard. You have to know how to accelerate, brake, turn, shift gears and so on. You also need to know traffic rules: the funny signs and colorful lights, the speed limits and the precedence at a crossing. You even have to prove that you know them. Without that pre-knowledge, your car would be totally useless to you.
Again, the language analogy applies: It’s not sufficient to know the letters of the cyrillic alphabet. A language is more. You need to know the words, the grammar, rules and uses. And you need experience. You will get it only by using the language.
GUIs are the abstraction layer that provides this access. They limit the pre-knowledge to pointing and clicking, the mentality to “trial & error”, which is not a big problem as you cannot break things easily (but you can, if you try hard enough).
As I said, typing is the way UNIX gurus get their work done. It’s faster, because it’s more direct. Instead of selecting from a list of prepared possibilities (“show on the chart”), they form their commands directly (“build a sentence”). This implies that the language is known, otherwise any attempt would be futile.
Please try to understand the important difference between “foo” and “./foo”. “foo” means: if we have a program “foo” in our $PATH, which contains the directories where execution of programs is allowed, run that program; if not, give an error message back (like “foo: Command not found.”). “./foo” means: if there’s a command called “foo” in the current working directory (and nowhere else!), I deliberately command to execute that program because I know what I’m doing.
Just imagine some hacker compromises your account, for example because you have a weak password. He cannot access system directories, but he can access your home directory. Therefore he will place “dummy programs” like ls, cp, cat or dd in your home directory. So if you routinely call them, you will call his programs instead of the ones you intend.
But there’s also a positive effect of the ./ notation. Imagine you’ve downloaded the source of a program you’ve installed, but you did some modifications to it. You can then call “./program” or “~/src/foo/compile/program” directly which makes sure the modified version will be tested; if you run “program”, just the one that is installed will be run, e. g. for reference.
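The same point, made concrete in a scratch directory (the script name “foo” follows the example above; the echo text is made up, and this assumes no program named foo is actually installed in $PATH):

```shell
# A command name without a slash is resolved via $PATH only, never the
# current directory; "./foo" names the file explicitly.
dir=$(mktemp -d)
cd "$dir"
printf '#!/bin/sh\necho modified version\n' > foo
chmod +x foo
./foo                # runs the local file -> modified version
foo 2>/dev/null || echo "foo: not found via PATH"
cd /
rm -r "$dir"
```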
There are two main concepts:
WYSIWYG: What you see is what you get. Emphasize: see. If you don’t see it, you won’t get it. Visual confirmation is highly important here, and the set of possible selections is limited. You can compare this to a baby that points to a thing and says “gah!” or “bah!” in order to acquire it.
YAFIYGI: You asked for it, you got it. You have to use the proper “language” to say what you want. The analogy here is that the baby has grown and adopted spoken language; it will no longer point and “gah!”, but start with simple sentences like “Timmy ball!”, and the more it learns, the more complex operations it can perform using language, like “Mommy, can you bring me the ball?” and so on.
You can clearly see that there is an evolutionary gap between the two concepts. This doesn’t mean that mouse action in general is inferior to keyboard interaction. It isn’t. The key to productivity is a good combination of both!
So when the system presents a prompt, for example
bob@vs19:/home/bob/src/foo% _
you can already obtain information from it. But what else does it say? It says: “I’m ready for your commands. Go ahead and tell me what you want.”
You already answered your own question: “Understand the way it works”: As long as you don’t try that, you don’t need any more knowledge than pointing and grunting. But compare it to your car: Want to fully understand how it works? You’ll get your hands dirty, you’ll learn a lot, and with every car generation, you can use a lot of pre-knowledge, but you also have to re-learn many things. That’s what development and evolution is.
Tell that to a Chinese, saying that your native language is way superior to his and he should forget his stupid little drawings and learn it instead?
Depending on your individual education and knowledge, things look easy or hard. If your “IT career” started with UNIX stuff, you’ll find everything in UNIX quite simple and logical, so this would be “easy stuff” in the end. If not, it might be overwhelming and not even understandable. Learning is the key. There is no other way to get access to and to employ technology this flexible, powerful and advanced. Sorry, I don’t want to sound impolite, but it simply is that way. You cannot acquire UNIX guru skills without knowing your tools. I claim nearly anyone can, but one has to actually do it, learn it, practice it. It’s not a matter of “PC on, brain off”. There are many resources for help and learning, but you have to access them and use their input for your own thinking. As powerful as today’s computers are, they don’t free you from thinking on your own. They don’t make a working brain obsolete.
Allow me to come back to the car analogy: If you can’t figure out the car, better don’t touch it – or just f*ing learn.
This is not true for commercial UNIX systems. Shortly after System V went commercial, GNU tools were the only way to get free compilers.
Try to get a free compiler for HP-UX, Aix or Solaris, from the compiler vendor, with an EULA that allows you to resell the software.
I do not understand how inconsistency would be fun or flexible.
Here’s a quiz – which one will operate recursively on directories:
‘ls -r’ or ‘ls -R’? ‘rm -r’ or ‘rm -R’? ‘chmod -r’ or ‘chmod -R’?
The only reason I can come up with for this inconsistency is that there is no reason. There wasn’t any thought put into it when those commands were written, and we’ve stuck with it for decades.
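For anyone who wants to check, here is the quiz played out on a typical GNU/Linux system (a sketch; BSD userlands may differ in the details):

```shell
#!/bin/sh
# The recursive flag differs per command on GNU coreutils:
mkdir -p demo/sub && touch demo/sub/file

ls -R demo          # -R recurses into demo/sub; 'ls -r' only reverses sort order
chmod -R u+rw demo  # chmod needs -R; a bare '-r' is parsed as the mode "remove read"
rm -r demo          # rm takes -r (GNU rm also accepts -R as a synonym)
```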
It is only “rm -rf /” which will operate recursively on directories…
Bad joke aside, you’re quite right.
I like the fact that there are different shells, compilers, make utilities, editors, etc. etc. These are not all equivalent or even compatible. But it’s good to do things different ways, as long as it all hangs together in the end.
This bazaar-like system will produce the kind of inconsistencies you point out, but it’s no biggie if it produces better software in the end.
I find it’s often important to understand the “why” of a thing, be it computers or other topics.
For chmod, “-r” means “remove read permissions”, so it can’t also mean “do this recursively”; hence you have the only other available option, “-R”.
Granted, there are other examples, like things that only recognize “--help” rather than also providing a “-h” quick help display.
Still haven’t found an OS that does remain truly consistent though either.
You do realize this describes every language, right? The only thing really unusual about English is its spelling.
Now that I think about it, this describes every non-trivial operating system too…
The human languages I know other than English (Japanese, Latin) are far more consistent and easy to learn than English — uniform grammar, phonetic spellings, etc.
Nope, sorry. Just because you want something to be true doesn’t make it true.
There is really nothing very out of the ordinary in the grammar of English (except for the usual selection of oddities that you can find in every natural language). I’m not sure how you can say it’s any less “uniform” than Latin or Japanese. It’s also far easier to learn than Latin or Japanese for people of most linguistic backgrounds due to its analytic/isolating nature. Latin wins on phonetic spelling, but English and Japanese are pretty comparable in terms of wackiness of the writing system.
How do you know what I want?
Anyway, understanding this objectively is not trivial. Maybe worth reading this: bit.ly/JKRzxc
As a French person who has dealt with English, German, Swedish and Japanese in the past, I tend to disagree with you on this one. Of those, English is probably the easiest to learn, and by a fair amount.
That is noticeably because you English persons have been smart enough to do away with most of the cruft that curses German and French. Common noun genders, abysmally complex verb conjugation, dozens of article cases and declensions, a stupid amount of diphthongs, and other WTF rules that serve no purpose other than adding complexity, such as mandatory noun capitalization or context-sensitive past participle endings… Basically, English grammar, although not perfect, is nice enough that it lets us foreigners focus on the challenging task of learning your vocabulary (in which all common words seem to have at least two meanings), which in my opinion should be the goal of every language.
Modern Swedish tries to get rid of the cruft as well, but it has not gone as far as English in some areas, such as articles, and more noticeably has a much more complex pronunciation (particularly due to the strong difference between short and long vowels, and them fiendish sk and sj sounds). Japanese is as nice as English from a grammatical point of view (easier in some areas, harder in others) and easier to pronounce, and I’d say that oral Japanese is overall a truly nice language, but then they ruin it by having a writing system that feels completely disconnected from the oral language, effectively requiring one to learn vocabulary twice.
Edited 2012-05-26 08:31 UTC
As with zifre above, we’ll have to agree to disagree until we can get some hard data. I would define complexity as how many rules one has to know. English has few big rules, but many small ones, e.g. regarding different meanings for words, which you point out.
True, but then in Japanese one frequently meets words that are pronounced in exactly or near-exactly the same way, and can only be differentiated by context or by seeing their written form. And then there are all the etiquettes rules concerning vocabulary use, most obvious of which being the half-dozen ways one can say “I” or “you” depending on the context.
I cannot discuss vocabulary peculiarities much, though, because I don’t know well about those for the languages which I have only studied out of curiosity, without a serious attempt at speaking or writing them every day. In general, I hate the repetitive task of learning vocabulary no matter how simple it is
Edited 2012-05-26 08:49 UTC
The difficulty in learning Japanese vocabulary is mostly limited to learning the kanji, which are recycled to form different compound words. Once you know their Japanese and Chinese readings it’s not hard to get the pronunciations of even technical words.
But learning kanji is quite a chore.
Sou desu ka…
Not to mention the onomatopoeia and mimetic words. As a foreigner who has been trying to master the language here in Japan, I can assure you that the Japanese readings of kanji are not 100% predictable. That’s true even for natives, especially with people’s names nowadays.
By the way, technical words nowadays are usually in katakana, not kanji, since most of them are loan words from other languages.
While it’s true that kanji names have obscure readings, surely English wins on this count — it’s almost impossible to work out the pronunciations of, say, American names from their spellings. Same is true of place names in England.
Technical words from Meiji era, such as in medicine and basic sciences, seem quite sensible. It does seem true that modern technical words, such as for computing, are in katakana, or simply English. This adds complexity as one must basically learn a second language.
> English has few big rules,
Agreed! In other words: its grammar is simple.
> but many small ones, e.g. regarding different meanings for words, which you point out.
Double, triple, quadruple overlapping meanings are a staple of *every* language.
And so are the per-word grammar twists. As the base grammar is that simple, there is not much complexity there either.
Empirical test:
compare a Latin and an English dictionary of equal physical size. Compare the number of words listed on the cover. Latin will be approx one fourth! I.e. each entry is four times as long, with exceptions for this case, that sub-phrase, those prepositions. A nightmare. Only manageable, because no one really tries to *speak* that.
The complexity of English comes from
1.) incredibly crappy spelling a.k.a. inconsistent pronunciation rules
2.) the sheer raw size of its vocabulary; by whichever way of counting, at least twice the size of the next biggest corpora (yes, corpora), which are French and German.
What if you compared a Latin grammar book with an English one?
Now if we could only find sane gender-neutral pronouns for English to replace she / he, him / her, and so on.
Why not use the “it” family of gender-neutral pronouns that you already have ?
It sounds strange to talk about someone using that word today, but give it a chance and maybe tomorrow it’s he/she that will sound weird or derogatory…
Edited 2012-05-26 18:43 UTC
“Why not use the ‘it’ family of gender-neutral pronouns that you already have ?”
Maybe that’d go something like this?
http://www.youtube.com/watch?v=RQb2m6VJ-eo
(buffalo bill scene from silence of the lambs)
Edit:
I admit, word genders are the most pointless feature of the French language. At least English generally limits the use of he/she to things that actually have a gender.
Edited 2012-05-27 04:59 UTC
There’s an easier way that’s not contrived: all men use male pronouns, all women use female pronouns.
Problem solved.
What about the very common case where the speaker doesn’t know the gender of the referenced person, or he or she (ahem) knows that the gender is indefinite? That’s the actual problem case, not the case where we know the gender!
Edit: Wait, just realized your gender reference is to the speaker. But I dislike that approach as well. It means the language is different for men and for women – an odd and unnecessary distinction – and also doesn’t cover all of the cases, such as when text is machine-generated. Gender-neutral pronouns are far more satisfactory IMHO.
Edited 2012-05-26 20:12 UTC
I generally follow the xkcd method:
http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=…
I use “them” / “they” / “their” pretty much all the time if I am not speaking about a specific person. It is just easier to use in cases of ambiguity.
Not a bad solution, but it will take a little time to adapt for us old-timers. Of course, we replaced ye and thou with you, so anything is possible…
Actually, English is pretty easy in comparison to a lot of other languages.
But Unix is at least trying to adhere to the KISS principle, which, in my book, makes it pretty easy too.
It has a few basic concepts.
Try to understand how Windows works. Good luck with that.
The ease of learning Windows came from the GUI, and actually, for non-computer users, Ubuntu is even easier to learn than Windows these days.
I guess I do not fully understand the issue. You malign UNIX for being too complex, so what? Do Apple users care that a Macbook is BSD at its core? No, it just works.
That’s what UNIX is to me. It just works. I have no concerns about the differences between /bin and /usr/bin, as those decisions were made long before I delved into the digital world.
If a large group of users decide that UNIX is too fragmented then that group should create an alternative to UNIX and just leave the old guy alone.
Because it just works.
I once explained this insidious habit of calling the stacking of layer upon layer “fixing things” (instead of actually fixing the root issues) to a non-geek friend like this:
Saying that Mac OS X addresses the complexity from UNIX is like saying large sunglasses and extensive clothing fixes the issue of a woman getting physically abused by her husband.
Edited 2012-05-25 17:31 UTC
The reason why this layer stacking happens (developers like to call it “abstraction”) is in the futile hope that if they can get everything to work on a standardized layer, they can then get on with the task of fixing everything underneath the layer without breaking people’s setups.
That ends up not happening because 1) developers, whether open source or commercial, are fired up about designing a new layer, but that excitement dies soon after it’s half finished; 2) end-user software can’t keep up with the changes, whether it’s the people making it or the people using it not wanting to change; 3) developers create competing standards, which means no one wants to commit to a design that won’t survive.
Layering is a necessary evil because no one wants to break everything, especially the developers.
IMHO: It’s this “layering” (or rather clear separation of layers?) that keeps “Unix” going and relevant, is it not?
It means it can change and adapt instead of needing to throw everything out and start from scratch whenever a new trend comes out.
Sure, it’s nice to design a new bespoke system with no kludge from the past. You (potentially) get a smooth system, without many of the historical inconsistencies. However, more often than not, you end up solving problems that already have been solved a dozen or more times before.
Also regarding the calls to “start over”, I know from bitter experience that it is very disappointing to invest so much time and energy only to find that everything you know and have developed expertise in is now completely deprecated.
Remember the feedback from the .NET developers when the primary development environment for Metro was announced? Can you imagine that sinking feeling? This is what it means to “start over” – we should really pause and consider the full ramifications of this idea before calling for a complete re-design.
I know things that I learnt way back when I was using Irix and HP-UX on a daily basis still have some relevance today. This is what *I* consider to be a good design – it permits knowledge accumulation.
IMHO: The “problems” people are seeing on the Linux desktop is not a fault of the Unix philosophy. They are merely competing visions. You really don’t get this in any other OS environment. I for one appreciate the options available, even if it means the end result is not always something that appears super-slick to the naked eye.
I wish “it just worked”. For one thing, the switch from Mac OS to Mac OS X meant that all of a sudden, spaces in filenames became a real problem. Try it, create a folder with spaces in its filename, check out some sizable Unix OSS project and try to run the configure/make/etc command chain. Quite often you will get error messages. Since Xcode is just wrapping gcc/llvm, it suffers from similar problems.
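The failure mode is easy to reproduce; any build script that expands a path variable without quoting it word-splits on the space (the path below is a hypothetical stand-in):

```shell
#!/bin/sh
# Unquoted expansions split on whitespace -- the root cause of most
# "spaces in filenames" build failures.
dir="/tmp/my project"
mkdir -p "$dir"

set -- $dir                      # unquoted: the path splits into two words
echo "unquoted: $# arguments"    # prints: unquoted: 2 arguments

set -- "$dir"                    # quoted: one argument, as intended
echo "quoted: $# arguments"      # prints: quoted: 1 arguments

rmdir "$dir"
```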
Well… the skill level required does go up when you are playing with the UNIX side of OS X. A Mac user really does not have to touch that part of the Mac for normal usage.
Yes, that’s why almost all production and even home systems are Unix-like: GNU/Linux, Solaris, *BSD, Mac OS X.
Even Windows has included SFU (Services for UNIX) at one point.
It is the most vital philosophy, simply because it just works over many years and allows you to easily port and replace small pieces of software without significant changes to the whole product.
Fat and monolithic systems tend to die in relatively short order. Just look at Windows: they are unable to replace the UI properly because their old UI is part of the solid initial design.
And what of Unix? No, KDE is bad now, we switch to GNOME; oh, it is bad too, just use Xfce; still too fat? dwm is your friend.
X11 is bad, bad, we are switching to Wayland.
What if text logs and shell scripts are too easy to maintain and understand, and we need to be called super-pros? No problem, bro: systemd and journald are your best friends. Or Upstart. Or just keep plain old init, and nobody shall change your mind.
Yes, this diversity has an obvious side effect of fragmentation.
But it is free as in “free culture” – just reuse, replace and share.
Unix is a free culture of a system design. Everyone is welcome.
Yes, without GNU and BSD, Unix would be just another proprietary blob. But because of them it was a revolution, and it still has great potential.
I just wanted to reply to say I think this was a really excellent post. I can’t just mod you up, since I’ve already posted above as well.
I think that there are two main reasons for the complexity:
– compatibility
– closed minds: the reaction to PowerShell from Unix users is quite sad really..
The POSIX layer has existed for a very long time. MS has a gigantic case of NIH due to its desire to lock in its users.
To be fair to them, I’m sure that having a member of the VMS team in power during NT’s design and development didn’t help.
On the other hand, the VMS guys were obviously the only skilled low-level people they had.
Like any other OS vendor that does not sell UNIX based OS.
You totally misunderstood my post, I was pointing out that PowerShell should have been a wake-up call for Unix users: it shows that relying on objects instead of text to pipe between executables has many advantages, but nearly nobody thought about it seriously as it comes from Microsoft..
I despise Microsoft as a company too, but this doesn’t mean that the technical idea behind PowerShell isn’t good; IMHO it has many advantages over the traditional text-based Unix interfaces.
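A small illustration of the fragility that an object pipeline avoids (a sketch in classic shell; the awk field index is the assumption): every downstream tool must re-parse the upstream tool’s column layout, and that layout lies as soon as a filename contains a space.

```shell
#!/bin/sh
# Text pipes force positional re-parsing; an object pipe would carry the
# filename through as a single typed field instead.
mkdir -p demo && cd demo
touch "a file"

ls -l | awk 'NF >= 9 { print $9 }'   # prints "a" -- the name split on whitespace

cd .. && rm -r demo
```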
I wouldn’t exactly call the two arguments counterpoints. They seem one and the same. The second post, by Cook, merely explains why we have created machines of such complexity.
I think Cook does a pretty good job of it by showing the 3 points of Unix philosophy.
1. Write programs that do one thing and do it well.
2. Write programs to work together.
3. Write programs to handle text streams, because that is a universal interface.
I’d actually suggest people don’t really like all 3 all the time. Many people don’t like text streams and would prefer tables or some other more explicit format… or, in the case of programs calling programs, there’s needless encoding, decoding and text grabbing.
But the same is true, of course, of Windows. Windows had an ideology of centralization (registry, event logger…), but many people and programs didn’t like it, and this resulted in complexity on its end. Especially in Windows, this often resulted in weird, convoluted ideas. The easiest to pick on is Add/Remove Programs: without any enforced standard install/uninstall format, it became a weird, convoluted system.
Only by rigidly controlling your apps by some kind of approval process could you keep your ecosystem simple. Both Windows and Unix rejected any control of the system as a whole.
Maybe this inconsistency and incomprehensibility is an inevitable product of complexity and evolved complexity.
Using organisms as an analogy: on the outside they look neat enough, but inside, the organization is often a ridiculous mess. No one and his dog would wire the vertebrate eye with the nerves in front of the photoreceptors, let alone some of the blood vessels in the neck and thorax of a giraffe.
Things get messed up, it is an inevitable consequence of the way the universe is.
You already used the word evolved, I would put it more strongly:
complexity is usually the result of evolution.
When something is designed from scratch and all you need it for is what it was designed for, you’ll (be able to) get beautiful designs.
The typical mammalian teeth are probably one of the more frustrating pieces of baggage… I’m fairly certain that most of humanity would like to have a new set of teeth every dozen years to two decades, or so.
But no, our rodent-like distant ancestors at some point lost the ability to have more than two sets. And since that didn’t matter in their very short lives anyway, and exerted no selective pressure, it stuck, eventually into times when, for some mammalian lineages, lifespans became quite a bit longer. (Even more frustrating: rodents, the contemporary mammals outwardly and in lifestyle most similar to those ancestors, managed to bypass it with continuous tooth growth.) By now, even if we managed to reinitialize the leftovers of the gene pathways responsible for further sets, the results would probably be… messy (with how our skulls evolved for 100+ million years without the need to accommodate more than two sets).
Damn entropy (and since we already mention things which are very unsettling to some people… ;p http://groups.google.com/group/net.origins/msg/ca73e0fd518a23f8?lnk… )
BTW what’s with giraffe? (I’m not familiar with any specific blood vessel weirdness of this one / I’m lazy )
I was being lazy too, and misremembering.
I was thinking of the recurrent laryngeal nerve, which starts in the brain, goes down the neck, loops round the aorta / subclavian artery (in the thorax), then goes all the way back up the neck. A detour of about 4 meters.
A bad bodge by any standards; the kind of thing that happens when things evolve and new features are added.
I like to think the reason why humans are so successful is because we were dealt a really crappy hand by natural selection. We have probably the least remarkable mammalian features – we’re not strong, we’re not fast, we’re not fast breeders, we are not huge or tiny, we have average eyes, we have average smelling, we have average ears, we have a bad spine. And it’s probably because of these crappy parts that we had to work really hard on cooperation and language and tools which increases intelligence to allow us to overcome our crappy bodies.
I like that most of the time I spent learning about the UNIX shell and tools, X Window, shell scripting, etc on an HP-UX system during my student years helped me build skills I still use today.
I like that I’ve been able to use Unixware (SCO), CLIX (Intergraph), SunOS, Solaris, AIX and OSF/1 (DEC) systems without investing weeks of my time on them.
I’m sad that my years of experience on Novell Netware won’t buy me anything today.
The problems of Unix is that:
1. Graphics and audio stacks that don’t suck are proprietary (read: belong to Apple). Android is intended for mobiles, and we can’t be sure its graphics and audio stack will fit the desktop’s needs. So that leaves X.org as the only real choice for non-Apple systems. You know, the one that’s slow and has hardware-accel issues like tearing. The much-praised nvidia drivers actually have to replace a significant part of X.
2. The terminal sucks. The commands have no logical consistency: for example, ls prints its result by default, find doesn’t, and yet they were made with the thought that one command’s output might be an input for another. Someone needs to build wrappers for Unix’s CLI commands, now. This would also separate interface from engine.
3. The fact that the hard drive Unix is installed on gets labelled “/”, and everything else is essentially mounted on virtual folders inside /mnt, is also insane. Let’s be honest, this is a hack because Unix was designed for systems with only one drive. If this can’t be fixed, let’s at least fix it at the terminal/GUI level so we can have HDD2:something.jpg instead of /mnt/hdd2/something.jpg.
4. Command arguments should be separated by the ASCII “unit separator”, not spaces. Using the space as both a separator and a legitimate filename character is bad. Which reminds me, control characters should be banned from filenames.
Anyway, as a last note, the author of the linked article is falling into the same trap the Unix Haters Handbook fell into: treating “Unix” as if it’s one thing. Proprietary Unix systems like OS X and HP-UX, with truckloads of resources behind them, are different from FOSS projects like PC-BSD with few resources behind them (sorry, free software fanatics). The only grudge one may have with OS X is that they didn’t bother to fix the terminal, because it wouldn’t look good in a WWDC presentation.
Edited 2012-05-26 11:04 UTC
GNU find has printed the results by default as long as I can remember using it. From the man page:
You seem to have a fundamental misunderstanding of how the UNIX file system hierarchy is constructed and why it is that way. There is no reason at all to change it, even a little bit.
Why? What’s the benefit?
If it helps, most file browsers you’re likely to come across these days will show you a bunch of shortcuts, usually these shortcuts include devices I.e. if you plug in a USB drive, a new shortcut will show up. Clicking on it takes you straight to the device. Which is precisely what you’re asking for.
Just how were you planning to type this non-printable ASCII character?
In hindsight, you may be correct about spaces in filenames. But the set of allowable characters in a UNIX filename is not going to change now: it’d break too much stuff.
Why? What’s the benefit?
Simple. An amateur user sees HDD1:Folder/image1.jpg and HDD2:image2.jpg and immediately understands what’s going on. The user sees /Folder/image1.jpg and says “Da freak is this? Okay, I guess / is the hard drive I installed the OS on or something, because this is where my usr directory is.” Then he sees /mnt/hdd2/image2.jpg and says “Why is my second hard disk a subdirectory under my first hard disk? What’s that mnt folder? Waahhh, I don’t understand what’s going on!” See, most people don’t know the concept of “mounting”, and good luck explaining it to them before their attention span ends. As you say, GUIs try to (and partially do) solve this problem by making it appear as if Unix/Linux has multiple roots, but Gnome still has a “filesystem” button that will expose the nastiness and confuse the user. IMO all GUIs should completely hide from users the fact that Unix doesn’t have multiple roots, by replacing “/” with HDD1: and /mnt/hdd2 with HDD2: and hiding the mnt folder, with a switch somewhere in the settings that old-timers can activate to get the real filesystem back (if you know how the Unix filesystem works, you’ll find the button).
Just how were you planning to type this non-printable ASCII character?
This sounds funny, but it’s a real problem. It’s the lack of a unit separator key that’s causing spaces to be a problem with most CLIs. Just use some key combination like shift+space. No, that would make typing slower. You could mandate double or single quotes, but again some people would complain it makes typing slower. OK, I don’t have an answer. Anyway, this isn’t Unix’s fault exactly, I admit.
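For what it’s worth, the Unix answer that did stick is NUL: a POSIX filename may contain anything except “/” and the NUL byte, so NUL is the one separator that can’t collide, and GNU find and xargs already speak it:

```shell
#!/bin/sh
# NUL-separated pipelines survive spaces (and even newlines) in filenames.
mkdir -p demo && touch "demo/a file" "demo/another file"

find demo -type f -print0 | xargs -0 ls -l   # each name arrives intact

rm -r demo
```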
Well tell me why I don’t have X.org (or Wayland or MGR or any type of GUI) installed but I can still use graphical programs?
How? What is your OS using to draw on the screen? Do you have hardware accel, and what drivers are you using? Anyway, the fact that most (non-Apple) Linuxes and Unixes ship with the slow, problematic X.org is turning people off from using them. If X is not needed, why does everyone bundle it with their OS? So that some geeks can boast that their OS has a network-transparent desktop? Just shelve the network-transparent desktop most people don’t need till you have perfected it. Geeks can download and install X by themselves if they want a network-transparent desktop. And since we are talking about graphics and audio stacks, Linux innovates once again by breaking audio too (which used to work in *nix) with PulseAudio, yet another middleman unnecessary for most users.
PS: Another problem with *nix is that Unix has a really narrow hardware compatibility list, and Linux tends to break on upgrades if binary blobs are used to run the hardware. But I am an optimist, and hope that Sputnik will solve this by providing a range of laptops with Linux-friendly hardware that doesn’t break during upgrades. Till that happens, it’s still a problem. And when it comes to Unix, it’s still “good luck, dude” when trying to find compatible hardware (especially laptops).
Edited 2012-05-26 19:46 UTC
How do you figure that? I mean, what usability data do you have to support your claim that HDD1: is more understandable than /? Both are symbolic representations of an abstract concept. I’d put money on both being as “understandable” as each other.
“[A]mateur user sees HDD1:Folder/image1.jpg and HDD2:image2.jpg and immediately understands what’s going on.”
Moot point. The amateur user is already using gnome or kde (or whatever) so they have no problem knowing what the drives are and accessing the files on them.
>>Well tell me why I don’t have X.org (or Wayland or MGR or any type of GUI) installed but I can still use graphical programs?
>How? What is your OS using to draw on the screen?
I use Linux framebuffer to draw graphical stuff on my screen. With it I can run at least Links2 with graphics, mplayer, Netsurf and DOSBox. I like it because it has made me able to quit X which took about 65% of my RAM on normal use. (I have 64MB of RAM)
Now this suggestion is really bad. The main point, the single reason for using a tree (as in a filesystem tree), is to have one root. You don’t want a forest. You want one root, one mount point.
The algorithms/programming will be much more elegant, with fewer bugs, if you have one root. If you have several roots, a forest of independent filesystems, then your program needs to find out how many roots there are, and then run on each root. You have lost elegance and require additional steps (how many roots are there?) before you can run your program.
If you have a single root, with several mount points, the algorithm/programming will be unified and simplified. You just start to run on the root, and it will automatically traverse every node.
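A sketch of the point: with a single rooted tree, one recursive walk covers everything below any starting point, mounts included, with no “enumerate the drives first” step (directory names below are made up for the demo):

```shell
#!/bin/sh
# One root, one traversal: find recurses through every directory (and any
# filesystem mounted inside it) from a single starting point.
mkdir -p tree/a/b tree/c
touch tree/a/b/x.conf tree/c/y.conf

find tree -name '*.conf'   # visits the whole subtree in one call

rm -r tree
```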
Windows works the way you describe. You have C: D: E: … Z:. Now, let me ask you: where do you find the database server? On which disk? Where is all the source code? On which disk? Easy to answer, right? Very intuitive, right? One company might use E:, another uses Z:. No standard; a new user needs to traverse and examine each drive. Very, very, very ugly.
Compare to Unix. /opt/database. Or /opt/sourcecode. One root. You always know where to start. In Windows, do you start at D:? Or L:?
A programmer knowledgeable in algorithm theory always prefers a single tree. The reason you propose your ugly suggestion is that you are not a computer scientist, that is obvious. If you study some computer science, you will change your mindset and understand how beautiful trees can be, and how much they simplify the algorithms and programming. Recursion is extremely powerful, and makes for elegant, simple solutions.
There are several storage devices in your computer, some removable, some not. Abstracting it into one hierarchy just for the sake of lazy programmers and some other made-up reasons is crazy in the mind of a novice user and, at best, questionable when the novice develops into an advanced user. This “one root” magic that somehow maps into the real world (encompassing virtual filesystems like /proc and whatever is below /mnt) is a consequence of the unix design, not some universal panacea or *TEH ONLY CORRECT WAY* as some religious unixers make it out to be.
Is it /opt/database or /var/database? /database? /var/database-vendor-name ? No standard, a new user needs to traverse and examine every possible place in the tree that the database can reside in. Very Very Very ugly.
I’m with kurkosdr, it’s madness. His (and my) “fundamental misunderstanding” is indicative of just how wacky it is. Your inability, or unwillingness, to explain what we’re missing just further underscores this. I’m guessing it would take a major effort to lead us through the convoluted logic to justify a scheme that’s so counterintuitive to normal, non-Unix-indoctrinated people.
Furthermore, I’ve seen how well named volumes worked on the AmigaDOS command line. That made sense.
Since many UNIX and Linux systems support file system labels (in different ways), this functionality can be considered standard. It’s a feature commonly used today.
You can “mount /dev/label/data2012 /home/db/thisyear”, or have the disks in your NAS labeled A1, A2, B1, B2, and SPARE. You can reflect RAID criteria (part of mirror, part of stripe, spare) as well as functional parts (system disk, data disk, program disk, scratch disk, secret disk). You can even automate mount actions depending on labels.
Oh I’m sorry, I didn’t realise I was here to provide a CIS lecture. How about you make it your responsibility to educate yourself and then we can have an informed debate on both sides?
From the context of your post I’m guessing you’re complaining about the directory structure. That wasn’t what the original discussion was about. Whether “everything is a directory” with volumes mounted under / makes sense and whether the names and structure of those directories makes sense are two different things.
I’m actually inclined to agree that the current hierarchy and naming scheme in most *nixes are a complete mess.
So do I. Volumes on the Amiga were no more intuitive than everything-is-a-directory on UNIX. Why is LIB: or C: any easier to understand than /usr/lib or /bin?
Edited 2012-05-27 12:44 UTC
Not just GNU find, either. Every ‘modern’ UNIX system I’ve used (AIX, HP-UX, Solaris) behaves this way.
That, and it’s not *that* big a problem. Modern shells with tab completion cope pretty well with spaces in filenames, automatically escaping or quoting characters that would cause parsing problems. An annoyance, certainly, but only a small one.
>So that leaves X.org as the only real choice for non-Apple systems.
Well tell me why I don’t have X.org (or Wayland or MGR or any type of GUI) installed but I can still use graphical programs?
>2. The terminal sucks. The commands have no logical consistency, for example ls prints the result by default, find doesn’t
Well both print results on my system (I have busybox).
3. The fact the hard drive unix is installed gets labelled as “/”, and everything else is essentially mounted on a virtual folders inside /mnt is also insane. Let’s be honest, this is a hack because unix was designed for systems with only one drive. If this can’t be fixed, let’s at least fix it at terminal/GUI level so we can have HDD2:something.jpg instead of /mnt/hdd2/something.jpg
Some people like multiple roots, some don’t. I prefer to have my data on /.
find prints by default.
More wrappers, awesome idea…
Because that’s a great improvement. Or not.
Are there any modern operating systems that don’t do this? Nope, they all use the space both as a command separator and as a valid filename character.
I’m pretty happy you don’t have any real input on how the OS should work.
To wrap them into what exactly?
For CLI programs, I think one can see the interface as an essential part of the engine. You cannot separate them at this low level.
First of all, please try to use the proper terminology. There are no folders. Those are called directories. A folder is the name of a visual representation of a directory.
Then, the disk isn’t “labeled as /”. Depending on support, disks and their partitions can actually be labeled (e. g. “bootdisk”, “datadisk1”, “datadisk2”). / is the root of the file system hierarchy. This is not necessarily a hard disk! For example, if you boot from a DVD or a USB stick into a RAM-disk-like environment, there are no hard disks at all. And now you introduce hard disks to that system. Depending on whether you apply some RAID technology, a disk, a part of a disk or a compound of disks might be mounted at /home in that hierarchy.
Depending on the system you’re using, /mnt might be reserved for specific operations, e. g. as a temporary mount point for the system operator. Using the “everything is rooted in /, but mounts can be done everywhere” concept, you are free to mount several disks “combined” into one directory, e. g. you have /opt/stuff containing files from many disks. This enables you to separate functionality of the system onto different media (or into a memory file system), and you can easily enforce specific mount options for security (e. g. no execution possible) or access (e. g. read-only) or speed. You can even swap the set of installed programs or databases per mount point (just by changing which device gets mounted).
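As a sketch of those per-mount security options (devices and paths here are hypothetical, and the commands require root):

```shell
# Mount a reference data disk read-only, and a scratch disk with
# execution and setuid disabled:
mount -o ro            /dev/sdb1 /opt/stuff/reference
mount -o noexec,nosuid /dev/sdc1 /opt/stuff/scratch

# Swapping the installed program set is just mounting a different
# device at the same place:
umount /opt/stuff/scratch
mount -o noexec,nosuid /dev/sdd1 /opt/stuff/scratch
```

Nothing above /opt/stuff needs to know or care which physical device is behind each directory.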
As you see from my examples, UNIX has been designed to have lots of drives. Flexibility is the key.
What’s the difference? I don’t see anything significant here. You’re showing a “drive as a prefix” habit like the “drive letters” you find in DOS and “Windows”, but why should the drive be tied to the first element of a path? What if symbolic links enter the stage? How should that be processed?
You see that the flexibility of “mount wherever you need it” is superior to this approach.
However, the representation to the user is a totally different thing. For example, if you attach a disk to the system, an icon pops up on the desktop, you click it and see what’s on the disk. Need to deal with device names or paths? No.
The space is the command line separator. If filenames do contain spaces, quoting or escaping is needed.
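A quick sketch of the three equivalent ways to do that quoting or escaping (the filename is just an example):

```shell
# Create a file whose name contains a space...
touch '/tmp/my file.txt'

# ...then reference it with double quotes, single quotes,
# or a backslash escape -- all three are equivalent:
ls "/tmp/my file.txt"
ls '/tmp/my file.txt'
ls /tmp/my\ file.txt
```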
If you compare to VMS, you’ll find a slightly different approach: Here command options or qualifiers are often found as associations, e. g. instead of calling “dostuff in.txt out.txt” you do “dostuff /in=in.txt /out=out.txt” which is more verbose, but you could omit the spaces.
Correct. You shouldn’t use spaces in file names. Using _ is possible, and many GUI programs simply substitute the _ to an actual space in display. However, there are “tricks” that a good programmer has to implement to make scripts work with malformed filenames that don’t just contain spaces, but also other possible characters, such as quotes, newlines, and backspaces.
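One such trick is the classic find -print0 / xargs -0 pairing: names are separated by NUL bytes, which cannot appear in filenames, so spaces, quotes and even embedded newlines pass through safely. A small sketch with a throwaway directory and an awkwardly named file:

```shell
# Robustly process filenames containing shell-hostile characters:
mkdir -p /tmp/demo_names
touch "/tmp/demo_names/it's a file.txt"

# -print0 emits NUL-terminated names; xargs -0 reads them back,
# so no word-splitting ever happens on the names themselves:
find /tmp/demo_names -type f -print0 | xargs -0 ls -l
```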
This makes it easy to compare to “Windows” which is also interpreted as being “one thing”.
One of the issues I see when many people compare UNIX against other operating systems is that they start from false premises.
Usually Linux or BSD experience is used; however, these systems are much more user-friendly than any real UNIX system ever was.
Just try to use a fresh installation of AIX, HP-UX, Solaris, or other more exotic commercial UNIX systems to see how close they still are to the original UNIX, in terms of lack of user-friendliness.
UNIX is a very nice operating system architecture, but the same way Windows everywhere is bad, the same can be said about UNIX everywhere. We need diversity in our field, and new ideas and operating systems.
I use Linux framebuffer to draw graphical stuff on my screen. With it I can run at least Links2 with graphics, mplayer, Netsurf and DOSBox. I like it because it has made me able to quit X which took about 65% of my RAM on normal use. (I have 64MB of RAM)
Yes, but do you have hardware acceleration and Gnome or KDE with all their bundled apps? “I have graphics without X” is different from “I managed to have some graphics without X, as long as I don’t use Gnome or KDE or need hardware acceleration”. From what I hear around, X.org is a necessary evil.
As regards the dudes who try to convince people that the Unix filesystem as it is is a good thing, I guess if all you used in your life are bicycles with the steering behind you instead of in front of you, you will eventually convince yourself and other people it’s better than a normal bike. Assigning “/” to the hard drive the OS is installed on and making everything else appear as a subfolder is silly. Mounting a drive to a folder should be an option, not a requirement in order to use your drives. Gnome and KDE know this and are trying to hide the issue, but due to dudes considering the Unix filesystem a good thing, they still have some button that exposes the nastiness.
Edited 2012-05-27 19:15 UTC
The UNIX filesystem was designed to be network agnostic. You’re not arguing anything profound: you’re just arguing for one set of conventions over another which doesn’t really change anything.
The “average” user doesn’t even care how drives are represented. All they want is a window they can open for the drive, regardless of whether it uses the UNIX / convention or the DOS drive-letter convention.
Assigning “/” to the hard drive the OS is installed on and making everything else appear as a subfolder is silly
Wow, you really don’t understand the Unix filesystem. The drive mounted as / is specified in /etc/fstab, and the physical drive files are located in /dev/. Some people mount /usr separately, some mount /var separately, and a lot of people using *nix at home mount /home separately. The beauty of the Unix filesystem is that nobody freaking cares what your partition scheme is unless you’re running out of room. Mounting remote filesystems is completely transparent once they’re mounted. Even NT does this internally; it only keeps drive letters as links because people are retarded and incapable of change.
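To make the fstab point concrete, here is a hypothetical /etc/fstab sketch with the partition scheme split the way described (device names and filesystem types are examples only):

```
# /etc/fstab -- hypothetical devices. Nothing above these directories
# knows or cares that /home and /var live on different disks than /.
/dev/sda1   /       ext4   defaults   0 1
/dev/sda2   /var    ext4   defaults   0 2
/dev/sdb1   /home   ext4   defaults   0 2
server:/export/data   /srv/data   nfs   defaults   0 0
```

The last line shows the same transparency for a remote filesystem: once mounted, paths under /srv/data look no different from local ones.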
tidux,
“Wow, you really don’t understand the Unix filesystem.”
I don’t think that the OP’s opinion demonstrates any lack of understanding. For some, *nix filesystems can seem cumbersome, and that’s a valid opinion.
For me, linux mounting is a nice abstraction, but sometimes I’m put off by the lack of overlays in the mainline kernel. I shouldn’t have to store all /home/ directories on one disk, for example. Overlooking several caveats, we can mimic overlays manually using symlinks, but linux’s mount capabilities are occasionally inadequate.
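The symlink workaround mentioned here can be sketched as follows (using /tmp stand-ins for the real mount points; names are hypothetical):

```shell
# Pretend /tmp/disk2 is a second disk mounted elsewhere,
# and /tmp/home is the usual home-directory parent.
mkdir -p /tmp/disk2/alice /tmp/home

# One user's home actually lives on the "second disk";
# a symlink makes it appear under the common parent:
rm -f /tmp/home/alice
ln -s /tmp/disk2/alice /tmp/home/alice
readlink /tmp/home/alice   # -> /tmp/disk2/alice
```

The caveats alluded to above are real: tools that resolve symlinks (backups, `pwd -P`, some daemons) will see the true path, which is exactly why proper overlay or bind-mount support is nicer.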
A bigger problem for me is the standard linux directory hierarchy. I prefer an application centric hierarchy rather than one where everything is dumped together in the big /usr/bin soup pot.
Um… you don’t have to put all the home directories on one disk. Nobody said you absolutely have to assign $HOME values within /home, although it is easiest.
http://sprunge.us/BUCV
That’s the output of “df -h” on sdf.org, a NetBSD shell provider I use. My home directory isn’t in /home at all, but in /arpa/tz.
You don’t have to. Home directories can be spread across many disks, they can even be placed on “no (local) disk” (see NFS home).
True, there are options to do it differently. For example, PC-BSD uses a concept like the one you are suggesting. Still, this may have disadvantages, e. g. duplicated and triplicated libraries. But as hard disk space is cheap, nobody sees a problem in this.
However, the traditional layout has advantages and intended behaviour, even if it’s hard to see this on modern Linux where, as you said, things tend to be thrown into one pot.
Allow me to point you to the FreeBSD file system hierarchy documentation, “man 7 hier”, for a more detailed description about what the different directories should be used for:
http://www.freebsd.org/cgi/man.cgi?query=hier&sektion=7
In addition to them, some systems even use /opt (a directory initially coming from Solaris, if I remember correctly) to manually manage software that is not handled by the system’s software management facilities, thus avoiding problems with the standard tools.
“man hier” is present on Debian-based systems as well, although it describes the Linux Filesystem Hierarchy Standard instead of FreeBSD’s layout.
Linux has supported bind mounts since kernel 2.4.0
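A bind mount makes an existing directory appear at a second location without symlinks, which covers much of the overlay-like use case discussed above. A minimal sketch (paths are hypothetical, and the commands require root):

```shell
# Expose a user's directory at a second path in the hierarchy:
mkdir -p /srv/www
mount --bind /home/alice/public_html /srv/www

# The second view can then be remounted read-only:
mount -o remount,ro,bind /srv/www
```

Unlike a symlink, the bind-mounted path is indistinguishable from a real directory to every program, including those that canonicalize paths.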