Federkiel writes: “People working with Apple computers are used to a very consistent user experience. For a large part this stems from the fact that the Lisa type of GUI does not have the fight between MDI and SDI. The question simply never arises, because the Lisa type of GUI does not offer the choice to create either of both; it’s something different all along. I usually think of it as ‘MDI on steroids unified with a window manager’. It virtually includes all the benefits of an SDI and the benefits of an MDI.” Read on for how I feel about this age-old discussion.

We have touched on this discussion a couple of times before on OSAlert, most notably when we ran a poll on whether or not GNOME should include a patch to enable a global application menubar. The 175 comments to that story provided us with some very valuable insights concerning the matter; most importantly, it illustrated how hard it actually is to make a case for any (G)UI-related standpoint, probably because, in contrast to many other computing-related issues, you cannot benchmark usability. There is no ‘UsabilityMark2001’ you can run on your Mac and Windows machines and then compare the results in order to come up with a grand theory of usability which will predict users’ behaviour.
The author of the article writes:
“First of all, it saves a lot of screen space. Because the additional menubars are no more than optical bloat. ‘But,’ you may say, ‘screen estate is not so important anymore. Screens get bigger and bigger, with higher resolutions and stuff.’ Well, yes. But the human visual sensory equipment has limits. There is a limit of how much information you can get on an area of a certain size. And there is a limit to the area the human eye can usefully overview.”
While this makes a lot of sense, the article’s author fails to realise that this is exactly why menubars ought to be standardised; the order of menubar entries should be the same across all applications, reducing the amount of new information the eyes and brain must process in order to use the menubar. On top of that, the author also fails to mention that no matter how many windows of, say, Firefox you have open, the menubars in all those instances are exactly the same. In other words, the eyes and brain only have to process that menubar once, since they will know that the menubar will be the same in any instance of Firefox.
In addition, the author’s argument does not take training into account. Because I use Firefox so often, I know the structure of its menubar off the top of my head. I do not need to process the menubar at all, simply because it is imprinted in my spatial memory. In other words, the brain has processes in place to minimise the amount of information it needs to actively process.
The article goes on:
“Another advantage is the document-centric approach the Lisa-type GUI takes. Documents, not applications, are the center of the desktop. No matter what kind of document you’ve opened, it never feels like you’ve ‘started’ an application. It never feels like you are using an application, rather the document itself seems to be providing the necessary tools.”
This is a very bold statement to make. Here, the author is promoting his own personal user experience as fact. I have been a Mac user for quite a while now, and I ‘still’ think in terms of applications. The document-centric approach has often been touted as easier to use than the application-centric approach, but in spite of its apparent superiority, many people do not seem to have any problems – at all – when using an application-centric approach (Windows). Interestingly, many people switching from Windows to the Mac complain about applications not closing when their last window is destroyed – which brings us to another interesting point: experience.
If you have only had experience using an application-centric user interface, such as Windows, a document-centric approach is just plain weird. You have become accustomed to the application-centric approach, and as such, this approach will feel superior to you, no matter the documented advantages of its alternatives. I always compare this with manual and automatic gearboxes: no matter how many people explain to me the advantages of an automatic gearbox (and I acknowledge those), I simply cannot get used to driving a car with an automatic gearbox, simply because I feel uncomfortable on the road while using one.
Of course the author also brings in Fitts:
“Last but not least there is Fitts’ Law. More properly termed: Fitts’ Rule. There have been many discussions about how much Fitts’ Law really applies. But in the long years I’ve been using graphical user interfaces, the rule has proved itself many times, again and again.”
Now, I expected the author to dwell a bit more on Fitts, since Fitts really deserves more than just a few lines when it comes to this discussion. Fitts’ Law is very hard (if not impossible) to disprove, but it is actually not very hard to restrict the law’s influence on user/computer interaction: training and experience come into play once more.
When you first learn to play darts, it would really help if the triple 20 were the size of Texas. This would make it a whole lot easier to hit, and thus would greatly improve your accuracy. However, as you get better at playing darts, the triple 20 does not need to be the size of Texas. Experienced darts players do just fine with a triple 20 the size of a few square centimeters.
The same applies to user interface design. Sure, Fitts’ Law predicts that the larger a graphical user interface target is, the easier it is to accurately click. However, just as with playing darts, the better you get at clicking stuff with your mouse, the smaller a target can become. When my grandparents bought their first computer at age 76, it was extremely difficult for them to hit and click the icons on the Windows 98 desktop. However, as time progressed, even my grandmother and (late) grandfather got better at performing this task, and now, my grandmother has no trouble at all hitting smaller targets.
In other words, despite its correlation coefficient of 0.95 (which is quite high), Fitts’ Law does not take training into account, which is a major limitation in this day and age, where computers have become ubiquitous in our western world.
Now, it was not my intention to ‘attack’ the Macintosh interface; in fact, I prefer it over Windows’ and GNOME’s, and I make my KDE behave exactly like it. In fact, my PowerMac Cube is my main machine. What I did want to show you is that it is pointless to claim that one approach to user interface design is superior to that of another. There are simply too many factors involved (most notably, the 6 billion different people on this planet) to make generalised statements about it.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSAlert.
I am not an Ion/RatPoison user, and I don’t know if I *could* become one, because for some reason I prefer to have control over the exact size and position of my windows (this size is seldom “maximized”, except on very small screens). But it is interesting, and I think relevant, that they follow a sort of meta-Fitts’ law/rule: when using the keyboard (as we usually are), the fastest point to reach on screen still involves your keyboard.
That said, I’ve observed that conversations about Fitts’ law and the archetypal Windows/Mac interfaces concentrate on the corners and top of the screen. I think this is probably because Mac users are more aware of the design philosophy of their platform, or even just more aware of design philosophy itself (as a whole) than Windows users. This isn’t really a criticism of either camp, and since I don’t really know a lot about either anymore, it is probably based on inaccurate stereotypes.
Still, the discussion almost always ignores, or merely mentions in passing, the fastest point to reach on the screen, the fifth point of Fitts’ law/rule: your pointer’s current position.
Windows and the applications that run on it (and most Linux applications, too) heavily use this position for contextual menus. In my Linux environments of choice, the context of the root window is a list of my programs, which further makes use of this point when my pointer is not inside the context of an application. Interfaces coming from Apple still use it, but less heavily, and usually less contextually (because they always include these options elsewhere on the interface).
I think that mouse gestures were an attempt at bypassing the limitations of “aiming”, but I feel that people with physical handicaps or people who are just plain not practiced with a mouse will probably struggle to do any but the simplest mouse gestures. Does anyone know of any studies done on this that prove or disprove my guess? Have there been any other ideas in user interface history to utilize the fifth point?
Some of us also move our hands when we talk, and move our mouse as we read. One more type of person to go on the list of people that can’t get the hang of mouse gestures.
Personally, I think context menus are a great way to use the fifth point, because they allow you to do stuff to what’s at that point, which is easy to put into a mental model. I right-click on a nut, I want to see a wrench in the list.
IMO, getting farther from the desktop metaphor (as OS X does, as XFCE 4, KDE 3, and Gnome 2 allow, as E basically forces, and as KDE 4 is doing) is more important than worrying about Fitts, except insofar as figuring out exact placement (like the stupidity of leaving a useless 2px border at the bottom edge of the Windows taskbar until Windows XP, or similar borders in KDE, XFCE, and Gnome themes, and other borders that get in the way of otherwise big targets).
Personally, I prefer having all elements of the application (file menu, etc.) within the context of that application’s window.
On a Mac, I dislike having to first bring focus to the appropriate application on the desktop, and then having to move to the file menu at the top of the screen to do something with it.
When the file menu is part of that main window, I can bring focus to the application and access the file menu with one action instead of two. Seems to be more efficient to me.
As others have pointed out, there is a matter of training. I’m a long-time Windows user. I don’t use a Mac as a main machine, but it wouldn’t surprise me that if I did, the change in interface wouldn’t bother me for long. The reason being that those functions are more typically accessible within elements of the document interface, or within contextual menus and keyboard shortcuts.
Now as far as “aiming” goes, I suppose mouse gestures could be an attempt at bypassing that limitation, but it doesn’t have much to do with one design philosophy versus another, as the forward/back buttons are within the main application window regardless.
I don’t know about any studies, but what might be useful to users who have difficulties with gestures as well as aiming is actual forward/back keys. For example, Opera gives you the choice between buttons, mouse gestures, and the Z and X keys to go forward and back between pages.
Menu bars and such are only a part of the equation. There are many other variables. For example MacOS X and its applications are designed in such a way that it makes using the system very intuitive. When you want to do something on a Mac you just have to do it the most natural way, for example by dragging and dropping something. What I observed in Windows users who switch to a Mac is that they try to do everything in a complicated way, always looking for a workaround when it’s not really needed. For some reason Windows leaves an imprint on users’ minds that you cannot just do anything easily, there’s always some inventive “hard” way. And that’s what they try to do; when it doesn’t work, that’s when they start complaining. So actually when you make a switch it’s always better to forget what you knew completely and start from scratch. Otherwise, however productive and helpful the Mac interface can be in theory, a user wouldn’t be able to take advantage of it.
And that’s what they try to do; when it doesn’t work, that’s when they start complaining.
You just don’t seem to get it. If a user knows how to do action A on Windows just fine, and OSX uses a different method (which you deem ‘easier’, something you cannot prove AT ALL), then it doesn’t mean either of the two is “the hard way”. It means just that – that both employ a different means to achieve the same end.
On top of that, dragging and dropping is overrated. It is actually a VERY complicated and muscle-straining way of doing things. Using keyboard shortcuts or context menus to copy/paste things can not only be faster, but also easier on the muscles.
So actually when you make a switch it’s always better to forget what you knew completely and start from scratch.
And that’s something you cannot do, so this is a completely irrelevant remark.
Otherwise, however productive and helpful the Mac interface can be in theory, a user wouldn’t be able to take advantage of it.
Ok, you REALLY didn’t get it. The interface that is the best for user A is the interface that makes them do tasks in a way that is easiest and most familiar for them. If a user has been using Windows for 15 years, then the Mac is simply (probably) not the best way of doing things. It MIGHT become the best way, but that can take years – it might not happen ever.
Experience and training is not something you can just brush aside – something many self-proclaimed “usability experts” seem to do all too easily.
Ok, you REALLY didn’t get it. The interface that is the best for user A is the interface that makes them do tasks in a way that is easiest and most familiar for them. If a user has been using Windows for 15 years, then the Mac is simply (probably) not the best way of doing things. It MIGHT become the best way, but that can take years – it might not happen ever.
I think there’s something you aren’t getting either: Everyone is different. Fred Brooks reports that in the old days, when everyone who had a computer could afford to pay onsite programmers, each company would have its own payroll program, and that payroll program would be written to mimic, as best as possible, the company’s paper accounts system. Now that everyone and his dog uses Excel, everyone fits their accounting practices to it (and it would be the same if they were all using Oo.org Calc, VisiCalc or 1-2-3).
In the same way, since not everyone has the time, inclination, or ability to design and write a user interface for their operating system and applications, everyone makes do with the one they’re given (whether their system of choice/work is Mac, Linux, or Windows). Even Linux, which in theory allows you to use any number of interface designs, is moving towards “the big two”.
But that’s far and away a different kettle of fish from someone doing studies (on virgin subjects) to determine which (if any) of a myriad of different interface styles people liked. And the one that grabbed the most people in total is far more likely to be the one that the greatest number of people disliked least, rather than the one they liked best.
I think there’s something you aren’t getting either: Everyone is different.
Exactly, which is what I said in the article. And because of this, it is hard (impossible) to claim that one design is better than the other – you can, at best, claim that one design is better than the other, for you.
But that’s far and away a different kettle of fish from someone doing studies (on virgin subjects) to determine which (if any) of a myriad of different interface styles people liked. And the one that grabbed the most people in total is far more likely to be the one that the greatest number of people disliked least, rather than the one they liked best.
However nice that is in theory, such research would most likely prove to be fairly useless in the western world, since basically everyone here already has experience with graphical user interfaces.
Even saying a design is better FOR YOU just means it’s what you’re used to… Wasn’t there an article about this, people who simply preferred a familiar interface over another one, even if the second one was ‘objectively’ much more usable… ???
That and had the brightest shiny. My roommate’s girlfriend had him install Ubuntu on her laptop <s>because of</s> for Beryl after she saw him playing with it.
Not saying that it would be the only reason that people would choose one but attractive packaging does help.
Even saying a design is better FOR YOU just means it’s what you’re used to… Wasn’t there an article about this, people who simply preferred a familiar interface over another one, even if the second one was ‘objectively’ much more usable… ???
Quite true. It had to do with the Safari font rendering on Windows:
http://www.joelonsoftware.com/items/2007/06/12.html
The gist:
Apple font rendering honors the design of the typeface, at the expense of blurriness. MSWindows hammers the fonts into the pixel grid for greater legibility, at the expense of the look of the font.
Users generally like the one that they’re used to.
What does this have to do with what I posted?
I was commenting on UI packaging and the shallowness of people when it comes to choosing a UI. They don’t sit down and consider all the fine grained details like button placement; they go after whatever shiny widget catches their eye.
My roommate’s girlfriend wanted Ubuntu with Beryl because the windows would jiggle like Jello. Not for any other reason.
I posted farther down that UI gurus are just snake oil salesmen, and a good UI is totally subjective.
I see I replied to the wrong person, it was meant as a reply to what Thom said above.
Anyway, I don’t think a good UI is totally subjective. It’s pretty easy to imagine two interfaces which do the same task, one being horribly complex, the other one needing just a few mouseclicks. Things like non-resizable dialogs, overcrowded toolbars, badly designed configure sections etc – it makes a difference.
Until you’ve worked with them long enough. After a few months, you’ll prefer the one you’re used to, no matter how horrific it is…
Ah, no worries then.
We’re talking absolute values here. Okay, somebody might be accustomed to doing everything in Norton/Midnight Commander and doesn’t want to learn a real CLI or GUI at all. Does it make them more productive? Is it a better way? The answer is “no”; just because somebody doesn’t want to learn to do things better doesn’t mean that Norton Commander is the best interface out there. Same thing for Windows and Macs. It takes a Windows user 15 steps to complete a task while in MacOS it would take only 4 steps, so naturally if the user learns how to do things right it would save him time and make him more productive. The truth is, computer interfaces limit humans in different ways, and we’re in constant search of new and different ways to make the user more productive. That means learning or re-learning. And honestly I never understood that kind of stubbornness – “I won’t learn because I WILL NOT!”. One explanation that comes to mind is that people are so afraid of computers that it’s a miracle when tasks can be successfully accomplished. Then even if it takes them 30 steps to do something, they’d rather go the longer route because they’re afraid that a different method will break everything. It’s psychology. It doesn’t mean that doing things the way a user doesn’t know is inferior. Crying about it doesn’t validate it. Consider a photographer who used to do everything on film and is being confronted with the digital workflow. There’s no reason to dismiss it and live in the stone age, so he goes and learns Photoshop and Aperture and colour space theory and everything.
Okay, somebody might be accustomed to doing everything in Norton/Midnight Commander and doesn’t want to learn a real CLI or GUI at all. Does it make them more productive? Is it a better way? The answer is “no”; just because somebody doesn’t want to learn to do things better doesn’t mean that Norton Commander is the best interface out there.
If that user prefers Commander, then yes, it is the best interface – for him. Just like how you prefer the Finder – making Finder the best interface, for you.
It takes a Windows user 15 steps to complete a task while in MacOS it would take only 4 steps, so naturally if the user learns how to do things right it would save him time and make him more productive.
The world isn’t that simple. The tasks a computer has to perform cannot be brought down to “a task” requiring “x number of clicks”. Using a computer involves millions of little tasks which all require a certain amount of user input. I’d think you’d have a very hard time trying to prove that the Mac requires fewer clicks to operate than a Windows box.
And even if you could, it is irrelevant, since a trained Windows user might perform his 15-step task faster than the equivalent 4-step action on the Mac, due to training and preference.
Glad you brought up the Midnight/Norton Commander.
Something that’s great about the Midnight/Norton Commander is the predictability of it all. After building up some muscle memory, I can manipulate files almost faster than my screen can refresh; people watching me work just see the screen flash a bit ;-). Combined with the CLI, it’s great. I still use it daily, even though it’s a bit dated and clunky in some parts (the editor is only nice because I’m used to it, and very clunky when it comes to copy/pasting for instance)
That being said, even though graphical variants have been tried (a lot) in Windows, Linux, and other operating systems and desktop environments, somehow they never work out. Someone interested in usability should really look into why it worked in the first place, and why it fails so badly in mouse-based/WIMP environments.
I think it worked because of its source/destination system (the 2-panel layout always made clear where your file would end up), the always available command line (it doesn’t get in the way of the CLI, on the contrary it augments it), the integrated file search/filtering/comparing, and the integrated editor – and most of all: absolute predictability of the commands.
Maybe I should write my own eventually, not necessarily using the same layout (2-panel + CLI) but copying some of the concepts, and adding some modern features in the mix.
(Something like http://hotwire-shell.org/, the old http://en.wikipedia.org/wiki/XMLTerm, using modern editors/viewers, and try to get away from being “filesystem based”, allow different views based on its content/metadata. I still think you need an source/destination thing, so 2 panels might stay ^_^)
“Glad you brought up the Midnight/Norton Commander.”
“Something that’s great about the Midnight/Norton Commander is the predictability of it all. After building up some muscle memory, I can manipulate files almost faster than my screen can refresh; people watching me work just see the screen flash a bit ;-).”
Just as an addition: the MC takes into account that most operations done with files and directories are “source/target operations”, such as copying, moving or symlinking. The two-panel concept seems to be a very good approach here – better than handling files using the edit buffer (^C, ^X, ^V).
“Combined with the CLI, it’s great.”
In fact, it is, because it does not limit you. If you need to do an operation that is not supported by the MC, you just call the command you want.
Using the mc.ext file, you can even implement the “open on doubleclick” feature here. And for anything else, you just enter a command followed by Meta-Enter (Esc, Enter if no Meta key is available), such as “mplayer -idx Esc Enter Enter” for a malformed AVI file.
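For the curious, an entry in the mc.ext file looks roughly like this (a sketch from memory: treat the exact file location and syntax as assumptions and check the documentation for your mc version; %f stands for the currently selected file):

    # Hypothetical mc.ext entry: open AVI files with mplayer
    # when Enter is pressed (or the file is double-clicked).
    regex/\.avi$
        Open=mplayer -idx %f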
“I still use it daily, even though it’s a bit dated and clunky in some parts (the editor is only nice because I’m used to it, and very clunky when it comes to copy/pasting for instance)”
mcedit is one of my favourite editors. It even supports syntax highlighting that you can configure and extend as you wish.
Running in an X terminal, the MC has mouse support. The downside: you cannot copy/paste via the middle mouse button from/to other applications.
“I think it worked because of its source/destination system (the 2-panel layout always made clear where your file would end up), the always available command line (it doesn’t get in the way of the CLI, on the contrary it augments it), the integrated file search/filtering/comparing, and the integrated editor – and most of all: absolute predictability of the commands.”
This is completely correct. Sure, it takes the average user a bit of time to see how this concept works (and why it works), but I knew many people coming from DOS and NC who cried out at having nothing similar in their new “Windows”.
I prefer the way Apple handles toolbars. It uses less screen space.
“It takes a Windows user 15 steps to complete a task while in MacOS it would take only 4 steps, so naturally if the user learns how to do things right it would save him time and make him more productive.”
Show me a task that takes 4x more steps on a Mac. I’ll be waiting.
That is not what he/she said.
The OP said it takes Windows users 15 steps to complete a task where a mac user can do it in 4.
I know, exaggeration, but let’s see what he replies to this rewording of your question…
Show me a task that OS X users can do in 4 steps that would take 15 on Windows?
He talked about being “more intuitive”, not “easier”.
Maybe. But it’s LESS INTUITIVE. Do you get it?
Why can’t you? I did it, and a lot more people out there are doing it… Windows -> MacOS, Windows -> Linux… and the larger part of them are not complaining at all.
That’s YOUR opinion. I switched to Mac two years ago, having used Windows since Win 3.0. It took about *two weeks* to be as productive as in Windows. I’d never go back. So?
So is pressing Cmd+O easier than just pressing Enter to open a document?
Is having to open the Applications folder and then drag an application to it easier than just double-clicking a package to install?
Each platform has its own moments of stupidity.
Well, when you install something, you drag it from one place to another; why is that so unnatural on a computer?
I didn’t say it was unnatural.
I said it was more complicated, as in more steps are used to accomplish installing an application.
I like the entire package design, as it is more streamlined, and it makes backing up my Mac easy. What I don’t like is the extra steps needed to get something installed. I’d rather be able to right-click on a dmg and select an install option.
I guess I could write a script to do that, but I haven’t looked into it.
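For what it’s worth, here is a rough sketch of what such a script could look like. This is hypothetical code of my own, not a polished installer; it assumes a single .app bundle sits at the top level of the image, and the parsing of hdiutil’s output is simplistic:

    #!/usr/bin/env python
    # Hypothetical "right-click to install" helper: mount a .dmg, copy the
    # first .app bundle found inside to /Applications, detach the image.
    import glob
    import shutil
    import subprocess
    import sys

    def install_dmg(dmg_path):
        # hdiutil prints tab-separated mount info; the mount point is the
        # last field of the last line (simplistic, but it works for most images)
        out = subprocess.check_output(
            ["hdiutil", "attach", "-nobrowse", dmg_path]).decode()
        mount_point = out.strip().split("\t")[-1].strip()
        try:
            apps = glob.glob(mount_point + "/*.app")
            if not apps:
                sys.exit("no .app bundle found in " + dmg_path)
            dest = "/Applications/" + apps[0].rstrip("/").split("/")[-1]
            shutil.copytree(apps[0], dest, symlinks=True)
            print("installed " + dest)
        finally:
            subprocess.call(["hdiutil", "detach", mount_point])

    if __name__ == "__main__":
        install_dmg(sys.argv[1])

Hooking it up to the Finder’s right-click menu would still take some extra glue (an Automator action or similar), which is presumably the part the poster hadn’t looked into.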
What, and pressing Ctrl+O is any different on Windows?
That’s just one way of opening a file, it’s not like you can’t double click on a Mac.
I think what you were trying to say is Cmd+Down to open a file, like pressing Return in Windows. In OS X it’s Cmd+Down to drill down into folders / open files, and Cmd+Up to go back up. It’s Return / Backspace to do the same in Windows.
Most disk images for Mac applications now include an alias to the Applications folder, so you only need to drag the application an inch to install it; how is that any stupider than going through an MSI installer in Windows?
Ctrl+O doesn’t do anything in Windows.
Most of the time I select the window and then type the name of the file to jump to it, instead of scrolling through with the mouse. I don’t have to move from the home row if the open-file hotkey is Enter. I do if it is Cmd+O.
I haven’t tried Cmd+Down. Is that something specific to the Mac’s browser view? I’ll have to test that when I get back home. It’s not any better, though. I still have to move off the home row.
I use spatial view and Cmd+O is what works for me. I like the view where the folders scroll inside the window; I can’t remember what it’s called at the moment though. I like how the structure can be navigated with just the arrow keys and then opened with the open hotkey. I just don’t have enough screen real estate to really run it the way I want to on my 12″ PowerBook.
Some of them don’t have it. There are still a lot of DMGs that are just the application. If they have it, it’s nice; if not, then I have to open the Applications folder first. I’d still rather just right-click, select install, and have the application magically appear in my App folder.
I wasn’t trying to prove Windows was better. I was just trying to point out that even the Mac platform has its quirks that could work better.
So is selecting an icon and pressing F2 really simpler than selecting an icon and pressing Enter when you want to rename an item in your file browser…
Select-Enter already has a useful function in the Finder.
Well, seeing as F2 and Enter are the same number of buttons, actually it sorta is AS simple.
And seeing as I don’t tend to spend much time renaming files by hand in the file browser.
And seeing as it’s also available from right-clicking.
And seeing as I can also simply click on a file a second time after the double-click time window is up.
I’d still say that Enter, which in pretty much every application ever means activate/confirm, is the better choice.
Pressing CMD+O isn’t easier, it’s an extra keystroke and the keys aren’t close to each other really. But that’s not the point… this isn’t about easy.
On a Mac, CMD+O is what you use to open anything, whether a folder or drive in the Finder, or a document in a program. On Windows ENTER only works in Explorer, and then you have to use CTRL+O within an application. Two different key strokes within the OS to do the same thing. Opening something.
I’m not sure if anyone else has had this (I’m sure I’m not alone), but just pressing Enter is a bother when you’ve selected a bunch of icons and accidentally hit ENTER and they all open! CMD+O ensures you don’t make these mistakes.
It’s about consistency and thought.
“For example MacOS X and its applications are designed in such a way that it makes using the system very intuitive. When you want to do something on a Mac you just have to do it the most natural way, for example by dragging and dropping something.”
God, I hate the way it is assumed that dragging and dropping makes tasks easier or more intuitive. If anything, I find the opposite. As an experienced computer user, I am still never sure, when dragging and dropping from one folder to another, whether it will copy or move the file in question. The best implementation I have used is in KDE, where dragging and dropping brings up a context menu offering the option to move, copy or cancel.
As a software developer, probably the most unintuitive load of crap I have to deal with is visual query builders. I have the SQL all worked out in front of me, but trying to get these things to generate the same query is often nearly impossible if they have any form of complexity in them.
…I’m glad someone else finally admitted that there’s no way to measure usability, because I’ve been saying this for years. Having said that, the each-application-has-its-own-menu approach (which both Linux* and Windows suffer from) really sucks: It really wastes a lot of screen space, and despite Thom’s comment about consistent menus, different applications can only have consistent menus up to a point – there’s no point in having “ray tracing” options in the Tools menu on a browser, for example, or “Bookmark this page” options in AutoCAD.
*KDE mitigates it somewhat because you have the option of having the menus at the top of the screen, but this only works for KDE apps (surprise, and I’m not suggesting that there’s anything they can do about it). And no matter how much I try to stick to KDE-only apps, sometimes I have to use something like Firefox or (gasp) pidgin.
my £0.02
different applications can only have consistent menus up to a point
No, but the most important ones are always the same (File, Edit, View), and so are their contents.
and despite Thom’s comment about consistent menus, different applications can only have consistent menus up to a point
I believe Thom was talking about the names and placement of the menus, which should be consistent across apps, and not their contents, which are (obviously) application-dependent.
Menu bars should look the same across apps, and for the most part, they do:
KDE:
File Edit View … Tools Settings Window Help
WinXP:
File Edit View … Tools Window Help
Don’t have access to a Mac, so can’t comment on the order of the menus there.
GUI design is entirely subjective. There is no right interface, just like there is no right color. Red is not better than yellow, but I do prefer red cars.
People will adapt to whatever the designers decide. I was looking at a mockup for KDE with the KDE menu in the middle part of the bottom of the screen. That flies in the face of the “the menu should be in the corner” wisdom. Will it work? Yeah, people will get used to it after feeling odd for about a week. If someone really wanted to make a useful UI, it would tailor itself to the user, or they should get rid of most of it altogether. (<–Subjective Opinion so totally invalid)
Microsoft’s surface interface looks really interesting from a UI point of view. I’d like to see them adapt it to a regular mouse and keyboard desktop though.
As far as Win versus Mac goes, I personally like to have things in neat little packets; it’s my nature, so the application-centric interface feels best for me. It also helps me switch between tasks; it’s like context switching for my brain.
My main thing against the document-specific interface, or at least Apple’s version, is that the menu bar changes. I like things to stay the same, and I learn hotkeys for any frequent task, which cuts down on my usage of it. I’d rather use the space for something more productive.
Lastly, I think you should check out Office 2007’s new ribbon interface. It really is pretty cool, and much more useful than the File menu plus toolbars approach.
Lastly, I think you should check out Office 2007’s new ribbon interface. It really is pretty cool, and much more useful than the File menu plus toolbars approach.
I’m a total addict of the ribbon interface element as used in Office 2007. Great stride forward compared to 2003’s interface.
Even as a Mac user, I’ve followed the development of the Ribbon interface extremely closely. It is one of Microsoft’s best interfaces and even gives me some Windows-envy. The ribbon is scheduled to appear in Office:Mac 2008, albeit in a slightly more traditional way.
Coming up with a way to get rid of hundreds of menu and toolbar buttons, and to end up with something that’s actually more productive than before, is as big a UI innovation to me as the migration from DOS to GUI. It makes the GUI usable like it was supposed to be in the first place.
I used to know where things were on the menus, and some of those things aren’t even present on the ribbons. Secondly, Microsoft really screwed over OLE integration with the ribbons. Used to be that there was menu merging when doing embedding. Now Microsoft doesn’t support it–it has the ribbons instead. Plus the ribbon takes up more screen real estate.
The ribbon _DOES NOT_ take up more screen real estate. In fact, it takes up less space for the same amount of total UI (i.e. turn on all the toolbars in Office2K3, the ribbon never gets taller).
Read this and get edumacated:
http://blogs.msdn.com/jensenh/archive/2006/04/17/577485.aspx
Oh, nonsense, man!
Surely anyone who has a clue about actually /using/ apps efficiently realises a few basic points:
[1] screen height is generally more important than width, especially in word processors, web browsers and so on
[2] screens are getting wider as both TFTs and movie viewing become more common
[3] for easy legibility, one should always set programs to zoom their contents to fit the window as a default
[4] therefore, it follows that the effective place to put your toolbars in Office, the taskbar in Windows and so on is down the left and right edges of the screen, not across the top. Leaving them across the top squanders precious depth, meaning you can see less of your document. Put them at the sides, then zoom the doc to fit width, and you get bigger fonts *and* more lines visible, both at once. Plus, lateral top-to-bottom toolbars once again benefit from Fitts’ law – you just whack the mouse over to the left or right.
Can you do that with Fluent? I think not. And I have tried.
Some fields do not work so well sideways – the font combo box, for example. Toolbar functionality is reduced in some places when putting all toolbars on the side. Sideways text does not make for good UI, and the most common screen resolution in the world is 1024×768, closely followed by 800×600. It’ll be a while yet until everybody is on widescreens.
Well, true, but so what? If I want to format fonts, I use the character formatting dialogue. It’s more valuable by far to me to have a larger amount of the document visible than to be able to occasionally change fonts from the toolbar. Priorities!
Some valid points; however, what I and many other users of widescreen monitors like to do is have two documents side by side – a 22″ monitor at 1680×1050 displays two A4 Word documents side by side at approximately actual size (although the menus and toolbars obviously detract a little from this). This makes copying and pasting from one to the other much easier, and allows you to compare documents. As someone who dabbles in web development, I find it handy to have a browser and my text editor side by side as well.
Also, the human visual system is inherently better at taking in more horizontal information than vertical information. Our peripheral vision works well in the lateral plane, because our two eyes are side by side, not one on top of the other. It is for this reason that the vast majority of writing systems that have been invented go from left to right or right to left, not top to bottom. And for this reason, if I can get away with it, I prefer having documents in landscape format, unless it really needs to be in portrait.
I personally would rather do away with menus altogether – programs like Tracktion (http://en.wikipedia.org/wiki/Tracktion, http://www.mackie.com/products/tracktion3/record.html) which use a single screen interface with no menus are much more enjoyable to work with than apps with menus and toolbars floating all over the place (for me at least).
As far as MS Office goes, the ribbon is brilliant in my opinion.
Get rid of menu bars altogether, I say! I dislike the way both Windows and the Mac handle menus and windows.
I don’t really see any of your objections. I do, frequently, have 2 documents side by side: I have 2 17″ monitors, one next to the other. I find 2 screens much more easy, simple and productive than one whacking great one, and 17″ monitors come free with a packet of cornflakes these days.
2 eyes are for binocular vision, not for looking at wide things with. If you want to see a visual system optimised for scanning the horizon, go look at a sheep’s eyes. There’s nothing about the human visual system that I’m aware of (as someone with a degree in biology) that’s optimised for horizontal scanning and there have been plenty of vertically-oriented writing systems.
As for your music app: jesus wept. If you think that’s a friendly, explorable and discoverable interface, you should be using Unix from a command line, sunshine.
The human visual field is certainly horizontally extended – look, for example, here (nice images somewhere in the middle of the article):
http://vision.arc.nasa.gov/personnel/al/papers/64vision/17.htm
Horses for courses. I prefer a single monitor uninterrupted by bits of plastic. Many programs are not designed to work with dual monitor setups, and I find that kind of fractured workflow frustrating.
Binocular vision merely requires two eyes separated by some distance. The simple fact that your eyes are side by side, rather than one on top of the other, makes them able to take in and process more information in the horizontal plane than in the vertical plane. Couple that with the fact that your cheeks and eyebrows limit the amount of vertical space you can scan without moving your head, and the fact that you can turn your head from side to side much further than you can tilt it forward and back, and wide screens become a much more natural format for presenting information.
Sheep have a whopping great blind spot in the front of their head, so your twin monitor setup might actually suit them.
Interestingly, as someone who also has a degree in biology (I work on rodents), I have perhaps gotten a bit more out of my studies and profession than you have. The human eyes pick up a much wider field of information in the horizontal plane than the vertical (see the reasons above, for example). Of course we do not have the field of vision that sheep have, but even without moving your head or your eyes, you will have greater visual awareness horizontally, because you have two eyes side by side (the human eye sees 180° in the horizontal plane, versus 135° in the vertical – admittedly, this is closer to 4:3 than 16:9, but that assumes no movement of the eyes). If one eye were above the other, this situation would be quite dramatically different. This is a well-known principle, and is the main reason even 4:3 monitors are wider than they are tall, why more people prefer widescreen TVs and cinemas, etc, etc. I’m surprised you didn’t pick this up in your studies.
Sure there have been plenty of vertical writing systems invented, but as a proportion of the total number of writing systems invented, they are in a very small minority. And the vast majority of vertical writing systems use ideograms (more information contained per unit of linear space) so that excessive vertical scanning is reduced.
Having used Sonar, Cubase and various other sequencers in the past, absolutely nothing compares to the ease of use and efficient workflow that Tracktion offers. Perhaps you haven’t used sequencers before, but sequencers with windows popping up all over the place are a workflow nightmare for me, and totally kill my creativity. A great many people agree with me, as Tracktion’s user base is growing rapidly.
There is even a Linux window manager (Twindy) based on the Juce SDK used to write Tracktion. Haven’t tried it though.
A good detailed reply!
Yes, our visual /field/ is wider than it is tall; I certainly wouldn’t deny that. But then again, my monitors are side-by-side, not stacked. I have actually tried it, when pushed for space, and whereas it’s better than nothing, it’s not ideal.
You seem to be contending that horizontal toolbars and so forth are somehow more ergonomic than vertical ones, and frankly, I doubt that.
/Currently/, yes, L-R (and a few R-L) writing systems significantly predominate over T-B or B-T, but that’s cultural; it’s not an emergent phenomenon of the human visual system. Go back 1Ky or 2Ky, and I think you might find things were very different. If you want an efficient, ergonomic writing system, you need boustrophedon! (But it has other drawbacks.)
Multiple screens over one are helpful if you use >1 app at once; you can have 1 app maximised per screen with no manual window positioning at all. Multiple real desktops beat multiple virtual desktops, and they are very widely used. I think the ideal might be 3 portrait displays, but I am not willing to pay that much.
The other thing you have to consider is that widescreen monitors are considerably cheaper to manufacture (due largely to the way the raw components are manufactured and supplied – when you cut sheets of glass etc. for the LCD, you get more sheets if you use an aspect ratio of about 16:9). 19″ widescreens are dirt cheap, cheaper than most 17″ 4:3 monitors (technically, you get more pixels on a typical 17″ 4:3 monitor, but most people perceive the widescreen as larger, even though the vertical aspect is smaller).
I dislike clutter on my desktop, and I would ideally prefer a single 30″ ultra high res widescreen monitor over multiple smaller monitors.
I would prefer a tiling window manager over traditional windows & menu bars etc., but there are no tiling WMs that I am aware of that are sufficiently mature or polished for everyday use.
Another app that I enjoy using, even though initially I found quite intimidating, is Blender. For the novice user, it is probably the least intuitive and discoverable interface ever invented, but once you have gotten over the ridiculously steep learning curve, it becomes one of the most efficient and powerful ways of interacting with the software around.
Actually as far as menus go, what are your views on the way iTunes on Windows handles the menu? (it sticks it in the actual window bar itself, so that if you maximise the window, you effectively have a Mac style menu bar).
I enjoy large monitors myself, but in actual practice, I find multiple smaller ones a lot more productive. And they’re much cheaper, which doesn’t hurt.
A friend of mine wrote a piece on how they’re better for you, here:
http://technology.guardian.co.uk/online/story/0,3605,1022647,00.htm…
But I must admit that most people who I know who used to be multihead evangelists have now gone over to one big TFT. The desk space argument seems powerful. Doesn’t to me; I need them a certain distance away, so the space in front remains relatively constant. But anyway.
Yes, of course, there are UIs which are much more efficient for experts than a simple WIMP with pulldown menus. That’s not the point; the point is that the WIMP with pulldowns is one of the most discoverable, beginner-friendly UIs yet developed. We are /all/ beginners at one time and most computer users remain so all their life.
I vaguely dislike the win iTunes GUI, ‘cos it’s not very Windows-like. It’s usable, though.
But then, I detest all applications with “skins” and “themes”. I want my UI to be uniform, thanks; that was one of the original core concepts of the WIMP GUI itself and I think it was a valuable one. Now the market droids, who are, axiomatically, stupid and don’t understand, have gotten to it and nobbled it.
Whereas on the FOSS side of the fence, a lot of the graphic design appears to be done by 16y old boys with the sophisticated aesthetic sensibilities of a budgerigar on LSD.
>GUI design is entirely subjective.
Let’s lay this myth to rest once and for all.
Imagine a GUI that is controlled by a mouse, but the mouse pointer moves in a random direction every time you move the mouse. Clearly, this would be bad for *everyone*.
A mouse-driven GUI where the pointer always moves in a consistent direction is, therefore, a demonstrably better design than a similar GUI where the pointer moves in a random direction.
Therefore, GUI design is not *entirely* subjective. There are at least *some* absolutes.
Of course, this does not mean that GUI design is not *partially* subjective.
This really makes sense, since most people read from the top left, across and down. Microsoft made putting the menu in the bottom-left corner popular, but to me that’s just screwy.
I like how OS X puts the window functions on the left-hand side; this makes sense, since most of what you do is on the left-hand side anyway.
Since I read from the top-left corner, I do NOT want the menu there. I want my applications.
From the article: “But the human visual sensory equipment has limits. There is a limit of how much information you can get on an area of a certain size. And there is a limit to the area the human eye can usefully overview. And while there are people that are working with two, three or more screens, this is only the exception.”
Doesn’t this actually argue for keeping the menu WITH the window?
Also: “Microsoft may have recognised the folly of their action back in the ’80s; Microsoft’s Office 2007 suite behaves very much like a Lisa-type user interface when in full screen. But only if in full screen. And they are constantly trying new user interface concepts. The new Internet Explorer hides the menu bar in the default configuration; don’t be surprised if it re-appears at the top of the screen in version 8. And the new Microsoft Office 2007 plays with an interesting new concept: Ribbons. Maybe they’ll come up with something new altogether, who knows.”
This has nothing to do with one placement of the menu being better than another, and everything to do with menus sucking balls regardless.
First off, it is more than personal taste: some interfaces ARE better than others. A better-designed interface takes less effort (i.e. the brain consumes less energy while using it).
As for the article, I think pretty much everyone abandoned the idea of SDI a while ago, in favor of a permanently maximized document window switchable with tabs. This is because the SDI was a prime example of Microsoft’s attempts to copy something they didn’t understand. The spatial design metaphor worked because the entire operating system functioned under it. When you start mixing spatial stuff with other stuff, the whole thing falls apart. The whole reason the spatial design works is that it is consistent, and you can count on thinking of objects as objects. This is why the spatial Explorer was horrible in Win95, and why spatial Nautilus was only slightly less horrible in GNOME.
The last point is the ribbon, which IMHO is one of the best UI innovations to come out of Microsoft since the tooltip. Menus on Windows suck, and Microsoft should be lauded for their attempts to get rid of them. My only criticism is that it seems like every Vista app takes a different approach. You have the ribbon in Word, tabs and button menus in IE, a contextual toolbar in Explorer, mode-switching/menu buttons in WMP, and an almost webpage-style UI for Windows Defender. While the ribbon is, IMHO, the best of the bunch, MS really needs to sit down and decide on some standards, because Vista looks like it was conceived by a designer with ADHD.
“My only criticism is that it seems like every Vista app takes a different approach. You have the ribbon in Word, tabs and button menus in IE, a contextual toolbar in Explorer, mode-switching/menu buttons in WMP, and an almost webpage-style UI for Windows Defender. While the ribbon is, IMHO, the best of the bunch, MS really needs to sit down and decide on some standards,”
The Office team has always tried new GUI stuff first… if it works (which, in the case of the ribbon, it definitely seems to), the rest of the OS will soon follow. It eases the burden of transition a bit financially. I expect ribbon menus to appear in Vista in SP1 or SP2, given how well it’s doing in Office.
What I don’t get is the obsession with short ways and speed; that’s an assembly-line mentality and reduces the human being to a time factor. The value of my work, computer work BTW, is not measured in units but in quality. My employer couldn’t care less if I performed a repetitive mundane computer task with 100 or 1,000 clicks a day, as long as my code is A-OK.
That being said, I love my setup — menus pop up under my mouse pointer if I press hotkeys on the keyboard — I’d hardly say it is the fastest way of doing things, but it works for me.
It’s not so much about being in a hurry as it is about getting rid of unnecessary work.
I have to use my computer every day, and pretty much all day. Two extra clicks for doing a task starts to get annoying pretty fast if there is no good reason for the extra clicks to be there. Especially for people used to the expressiveness of the commandline.
Regarding your employer, if it takes you twice as long to do your work because you spend most of your time doing 1000 clicks when you could be doing 100, I’m pretty sure he’d start caring about why half your salary is spent clicking around when you could be doing actual work.
All of this stuff that you guys are commenting on, back and forth, is something I’ve been wrestling with for a while now – why it seems that everyone claims to be an expert on what’s “intuitive” (FWIW, I consider myself a student of usability, but no expert), but there are seemingly endless debates on what’s right.
Usability is both an art and a science — both objective and subjective.
Some components of usability can be easily verified scientifically, such as Fitts’ Law, the time savings in using keyboard shortcuts, or why it’s so time-consuming to switch between the mouse and keyboard. Other aspects aren’t so simple, like what constitutes a logical order in window control placement for a particular task, or what would be a suitable toolbar icon for a “Compile” command in an IDE.
Case in point: the menu bar placement. This is an age old debate — each has its pros and cons.
A global menu bar (like MacOS) consumes less screen space and, according to Fitts’ Law, is easier to click on because the user need only concern himself with horizontal aiming. Having a menu bar in each window has a natural mapping of tools to the document, as you have said.
Both cases deal with mode — the current app/document — in different ways. The global menu bar provides a consistent place for the user to look for what app he is dealing with, but at the cost of a visual disconnect between the menu and the document window. Window menu bars utilize nearness to connect tools with a document, but at the cost of being slower to click on and using more screen space. These are the empirical parts of menu bar usability. Which one is easier for a user is partly the human factor, but it’s also a matter of training/habit.
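To put rough numbers on the “slower to click on” part, here is a toy comparison using Fitts’ law, T = a + b*log2(D/W + 1). The constants, the distance, and the “effective height” of the edge target are purely illustrative assumptions of mine; the point is only that the screen edge stops the pointer, so an edge menu behaves like a much deeper target:

    # Toy Fitts' law comparison: in-window menu vs. global menu at the screen edge.
    # All numbers below are assumptions for illustration, not measurements.
    from math import log2

    a, b = 0.2, 0.15   # assumed intercept (s) and slope (s/bit)
    D = 400            # assumed pointer-to-menu distance in pixels

    T_window = a + b * log2(D / 20 + 1)    # 20 px menu bar you can overshoot
    T_edge   = a + b * log2(D / 200 + 1)   # edge stops the pointer: large effective W

    print(round(T_window, 2), round(T_edge, 2))   # -> 0.86 0.44

Under these assumptions the edge menu is roughly twice as fast to acquire, which is the usual quantitative argument for the global menu bar.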
We all have some low-level training, such as using the left mouse button for regular clicking. The art of usability is really a balancing act between habit, training, scientific facts, and the ever-so-unpredictable human factor. It’s this fuzzy, subjective stuff that causes all the bikeshed debates, or at least as far as my experience has been.
Everyone loves to talk about Fitts’ law, because it’s quite well established, and very simple.
Unfortunately, like Thom said, it doesn’t take training into account. It also doesn’t take inexperience into account. I have never seen an inexperienced computer user take advantage of Fitts’ law. Even if the Start menu is in the bottom-left corner, or the close button in the top right, they still carefully move the mouse cursor on top of the button and click. Fitts’ law, while a valuable measure, is not as important as everyone seems to assume.
As for the global application bar, I’m against it, mostly because I do take advantage of Fitts’ law, but I very rarely need to access the menus in an application, and would much rather have other functions in the top corners of the screen (minimize and close). If I need to use a function often, it’s in a toolbar or a keyboard shortcut. I only very rarely use menus at all.
Thanks Thom for a great article. People need to stop thinking of usability as a black and white issue. I resent desktops which force one interaction model on me and then claim that it is optimal. I decide what is optimal, not them.
Oh, it does take training into account, but not explicitly.
The general formula “T = a + b*log2(D/W + 1)” is the same for trained and untrained users, but the constant “b” would be a lot lower if you’re trained in using the interface.
Let me explain what this means by a simple example:
Let’s say we have a 1cm target (W=1) 30cm away from your pointer (D=30). This yields:
T = a + b*log2(31) ≈ a + b*5
“b” could be 1 for a very inexperienced user and 0.1 for a trained user. “a” is probably mainly the reaction time + time needed to decide which button to hit, somewhere between 0.2 and 1s.
Let’s say it’s 0.3 for the pro and 1 for the noob.
noob: T=1+1*5=6s
pro: T=0.3+0.1*5=0.8s
So our pro could still be 7–8 times faster, all within the mechanics of Fitts’ law.
Now let’s say the target was 2cm and 10cm away:
noob: T=1+1*log2(10/2 + 1)=3.6s
pro: T=0.3+0.1*log2(10/2 +1)=0.6s
While the pro would still be 6 times faster, he would also be 33% faster than with the smaller target at a bigger distance.
Of course, the noob would be 1.7 times faster, so any change in UI layout has the biggest effect on noobs.
So while the influence of Fitts’ law is most easily seen with noobs, it is still valid for a trained person.
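For anyone who wants to check the arithmetic, here is the same calculation in a few lines of Python (simply transcribing the constants assumed above):

    # Recomputing the Fitts' law examples above:
    # T = a + b*log2(D/W + 1), distances in cm, times in seconds.
    from math import log2

    def fitts(a, b, D, W):
        return a + b * log2(D / W + 1)

    for D, W in [(30, 1), (10, 2)]:
        noob = fitts(1.0, 1.0, D, W)   # untrained: a=1, b=1
        pro  = fitts(0.3, 0.1, D, W)   # trained:   a=0.3, b=0.1
        print("D=%dcm W=%dcm  noob=%.1fs  pro=%.2fs" % (D, W, noob, pro))
    # -> D=30cm W=1cm  noob=6.0s  pro=0.80s
    #    D=10cm W=2cm  noob=3.6s  pro=0.56s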
As I commented on TFA, I always experience mingled amusement/annoyance at reading these methodically argued “analyses” of GUIs where it seems clear that the author has only used 2: Windows and the Mac. There are many other ways; NextStep’s cascading context menus from a fixed position, for instance.
But one I think should always be considered, as it shows a genuinely different model, is Acorn’s RISC OS. No fixed menus at all; as the 1st commenter says, the easiest menu to hit is the one which requires you not to move the pointer /at all./ Other things RISC OS pioneered 20yr ago are OS-integrated global font anti-aliasing, solid window drag, the global taskbar – all now widely used – and drag-and-drop file open and close instead of file selectors with a mini-browser, which, alas, no-one else has adopted.
You cannot judge these things on a sample set of 2. Especially 2 which are actually quite similar.
If you want to judge GUIs, go out and master half a dozen different ones first. *Then* and only then can you make informed, educated comment.
The same goes for most things, of course, from word processors or operating systems to types of cheese or varieties of beer. Who would take a wine critic seriously who had only ever tasted 2 wines?
I can do pretty much all of that on my NeXT from 1991 too. It has fixed menus at the top left of the screen, AND you can activate the same menus from anywhere on the screen with the right mouse button. It also does anti-aliasing, full window dragging, etc. etc.… And it's really fast and stable on a 33MHz '040 with 128MB of RAM.
Here's a screenshot of my NeXT… which I'm probably selling, sadly :'(
http://helf.freeshell.org/NeXT-9-4-06.jpg
I also forgot to mention that the GUIs on NeXT machines and on RISC OS machines (I used to have a RiscPC 700, loved it…) do what a GUI should do. They are intuitive, fast to get around in, and best of all, STAY OUT OF THE FARKING WAY. I don't want shiny shit bouncing around everywhere. I want to work and not be annoyed.
Great, now I'm talking myself out of selling my NeXT. Poo. I need that money.
Nice!
But for me, the idea of a context menu that you have to move over to seemed to be a bit… well, daft, frankly. If I summon a menu, I want it where I am now! If I have to go to a specific place, then why not just leave the menu always there, in a single cascading tree, like Nokia’s Series 90 (Symbian) or Hildon (Maemo Linux) GUIs?
Alas, I have /very/ little NeXT experience. I’ve always wanted one, though. I’d happily buy yours, except that I am in the UK and you’re probably not, and I can’t afford to spend any more money on old computers while I have so many…
Yeah, I'm in the USA. Shipping it to the UK would probably be around 80 GBP.
I'm not quite sure I get your meaning on the menus without seeing that in action… But on the NeXT you can 'tear off' menus and make them stay on the screen if you are going to use their functions a lot. Comes in quite handy. You can also rearrange most of the screen.
Do you have any pictures showing what you were talking about with the menus?
You can summon menus wherever your pointer is on the NeXT, using the right mouse button. So you have a menu that stays on the screen in the top left, and one you can summon wherever your mouse pointer is…
"I'm not quite sure I get your meaning on the menus without seeing that in action… But on the NeXT you can 'tear off' menus and make them stay on the screen if you are going to use their functions a lot. Comes in quite handy. You can also rearrange most of the screen."
As far as I know, GTK has the 'tear off' feature too; it's very useful for the GIMP.
Window Maker lets you keep menus on the screen; they then get an 'X' button next to the title. This gives a little feel for how NeXT looks and works.
Oh yeah, I forgot you could do that in wmaker. I haven't used it in ages. It's a really nice feature.
I haven't seen much mention of mouse button 1 vs. buttons 2 and 3 (or more), multiple clicks, or mouse use vs. keyboard use.
As one gets older, one's methods may need to change due to the sheer pain of RSI; even during an extended session, the mode of use will shift from multi-clicks and the mouse to the keyboard and single clicks to avoid pain. It no longer makes sense to assume the fastest sequence will be the best way, even over an hour, since the digits need variety in their exercise. Might as well build in two or more ways to do the same thing, just to avoid RSI.
In some applications I may have to use 5-10 mouse actions over and over; if the developer had known what I was doing, they might have realised that a single command to do the same thing would really speed things up. In the past I might have used a macro tool to repeat these sequences, but they never really worked well.
Another issue that bothers me is the sorting of file names that have partial numerical values, say x1, x2, … x10, x11. It is pretty common to see files sorted and placed in a folder in the wrong order because the sort is just plain wrong. Sorting by the character position of digits rather than by their decimal significance is just inexcusable (the fix is sketched below).
The issue of menu bar at top vs. in each window pales in significance when these other factors are considered.
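For what it's worth, the fix is simple enough that there's no excuse; here is a minimal Python sketch of a 'natural' sort key (my own illustration, not any OS's actual code):

import re

# Split a name into alternating text and digit runs, comparing digit runs
# by numeric value, so that x2 sorts before x10.
def natural_key(name):
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

names = ["x1", "x10", "x11", "x2"]
print(sorted(names))                   # ['x1', 'x10', 'x11', 'x2'] -- wrong
print(sorted(names, key=natural_key))  # ['x1', 'x2', 'x10', 'x11'] -- right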
Agreed.
As with the other issue, I would add:
– Consistency: for example, tab handling is vastly different from one app to another, which is annoying.
On Unix, the command line, due to historical baggage, is too different from the GUI: rm should move a file into the trash as the GUI does, there should be a single shortcut sequence for Copy whether in the GUI or in the console, etc.
– Undo-ability: every action should be undoable. It's nice that we can now reopen a closed tab in many applications, but we should be able to do the same thing for a closed window.
– Data persistence (related): as much data as possible should be kept all the time, so that in case of failure or error it is possible to go back quickly to the previous state.
– Responsiveness: applications should be at least as responsive as BeOS apps were.
And let's kill the 'start-up screens'; use instead a start-up window which you can iconify, kill (if you started the wrong application by mistake), or put behind other windows…
Well, first of all, in *nix each desktop environment handles its trash folder in a different way. Having rm send things to the trash bin means that rm is going to have to know about every DE out there and what protocol or folder its trash bin uses. And if you don't use a DE, where is rm supposed to send the deleted items? Changing its current behaviour would be silly, not to mention that it would break compatibility with the majority of automake scripts out there. However, writing a script named del, after the Windows command, that does what you suggested might work for you; a rough sketch follows.
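Something like this minimal Python sketch, say. It assumes the freedesktop.org-style trash location (~/.local/share/Trash/files) rather than any one DE's protocol, and for brevity it skips the .trashinfo metadata the full spec asks for:

import os
import shutil
import sys

# Move each argument into the freedesktop.org-style trash directory
# instead of unlinking it, so the deletion can be undone from the GUI.
def trash(path):
    trash_dir = os.path.join(
        os.environ.get("XDG_DATA_HOME", os.path.expanduser("~/.local/share")),
        "Trash", "files")
    os.makedirs(trash_dir, exist_ok=True)
    dest = os.path.join(trash_dir, os.path.basename(path))
    n = 1
    while os.path.exists(dest):  # avoid clobbering an earlier trashed file
        dest = os.path.join(trash_dir, "%s.%d" % (os.path.basename(path), n))
        n += 1
    shutil.move(path, dest)

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        trash(arg)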
>each desktop environment handles its trash folder in a different way.
If this is true, it is stupid IMHO; there is no value added for the user (only pain). This should be fixed by the freedesktop.org guys…
I doubt that it would break many scripts; after all, when you remove something to the trash, it is still removed from its original location.
Coherency is especially valuable for new users, so adding a separate del command is much less interesting.
There is one thing I really wish to see the end of, and that's the double-click to open (or do anything). I could do this with glee 20 years ago (even triple and quadruple clicks), but now I try to avoid them at all costs. I see Ubuntu has a switch to not use them, but I haven't really explored that yet.
There are just so many issues with each individual OS that I use (BeOS, W2K, Ubuntu, and probably KDE) that it's not worth going over them anymore, given they can't or won't be fixed. I am curious whether XP or Vista seriously changed any of the mouse-click feel in Explorer; they look mostly the same to me, just a shinier skin over the same core.
A question to any OS X users (and others):
In OS X, do files named x1, x2, … x10, x11 get listed in that order, or in the not-quite-right order? Do any OSes get this right?
Does OS X always remember the precise x,y coordinates of files arranged manually in a folder, as classic Mac OS did?
Windows is still Windows, if that's what you're asking. The Explorer experience is not fundamentally different from Win98's. A few new bells and whistles, but still Windows.
I'm pretty sure that since Win98, Explorer can be set to single-click mode. I can't recall how to do it in 2000, but it's under Folder Options in XP.
Back in the days of the command line, there was Unix, VMS, and other mainframe command lines. Unix uses rather cryptic commands, which makes it difficult for new users to learn, but once you do learn them they are a very efficient way of doing things. VMS used a far wordier way of getting around; while it normally allowed just the first few letters of a command, the commands were still rather wordy:
cd / on Unix
vs.
SET DEFAULT [000000] in VMS.
But still, VMS had a HELP command vs. Unix's man. If you are stuck on an unfamiliar command line, what are you going to try first: typing HELP, or man man (so you know how to use the manual)?
It all comes down to what you are used to. A Linux GUI isn't any less user-friendly than the Mac's or Windows'; if you know it, it is user-friendly for you. Where Linux fails is intuitiveness: the application menu is filled with oddly named programs… Konqueror, yeah, of course I would think that is a web browser. Or GIMP, with a picture of some rodent, which would make me think it is a program like Photoshop. Where the Mac is really strong is in being able to click and drag things between disjunct programs, and it often does something you expected it to do. For example, I'll drag my Terminal app to this text window…
/Applications/Utilities/Terminal.app
Cool, I now have the full path and location of my Terminal application. It is little things like that that help out. People without these features will not suffer; they will live without them. But what GUI people need to learn is that a good GUI doesn't take stellar code and a bunch of cool stuff, but a way for a novice user to use the application intuitively while not making the expert feel insulted.
In my experience, coders/techies etc. go via applications, as they love to configure.
New users and the completely non-technical prefer to go via a document name, as they use their PCs as tools to produce documents.
Horses for Courses
The weird thing is that I don’t like to use menus when I don’t have a global menu bar.
But in Mac OS X it's a pleasure to use menus, so I use them more often. So I think that if you say, "I don't like to use menus that often anyway", that's probably because you don't have a global menu bar.
My problem with all GUIs is a lack of configurability. The only one that even comes close to what I want is KDE. On the desktop I want a launcher bar (launcher only, no taskbar functions) at the bottom, with my most frequently used applications. At the top left I want a main menu; at the top right I want a notification tray. In between I want a real taskbar. I want my applications to close when I close their last window.
I want more than that, too. I want to change the window widgets and decorations on a whim; I want tools to fully configure my desktop.
In short, why should I not be given the ability to roll my own desktop if I want? Provide tools for corporate users to lock down the look and feel for supportability across the enterprise, but give other users the ability to make the desktop what they wish.
I would switch to the Mac in a minute if I could get application menus in the application window, if I could resize windows from anywhere on the edge, if it had a real file manager instead of the POS Finder, if it had a task manager, and if I could turn off all spatial features. Since other users feel strongly otherwise, keep those things, even ship them as the defaults; just give me the option to customise them as I see fit.
One of the things “personal” computers were supposed to usher in was mass customization. Let a billion UI paradigms bloom. Death to UI Nazis of all stripes. I want to work like I want to work without keeping you from working like you want.
A foolish consistency is the hobgoblin of little minds. What I want is for the UI to work and look how I want it to work and look, not how someone else thinks it ought to.
Why should a UI expect everyone to work the same way?
I think UIs are overthought, like food sometimes. I ate at a restaurant where they served caviar with little spoons that had mother-of-pearl bowls. It was said that silver reacted with the acidity of the caviar to produce a less than pleasant taste. Was this refined? Oh my, yes. But it was entirely too precious, refined past the point of meaning and sense.
That's happening with UIs. When they get so refined that they can't change, because they've become "works of art" and touching anything causes cascading problems through the entire system, they stagnate and die: nobody is willing to change anything for fear of offending the cognoscenti, and they become less and less relevant. What we need are robust UIs, not refined, artistic UIs.