“The purpose of this article is not to emphasize the strengths and merits of Ubuntu user experience, but instead to shed a brighter light on areas that have been neglected due to shortage of time and resources, usability testing, and various software and artwork defects. I hope those who are sometimes overprotective of open-source software will take my recommendations with a pinch of salt and see this article for what it really tries to be: a vocal user experience report and constructive criticism.”
I hope that the Gnome and Ubuntu people pay close attention to this for future releases.
To me, it is especially important that they pay some attention to making the 3D desktop features really work well and cleanly. It seems as if so much work has gone into getting them to work at all that they have ended up lacking a considerable amount of polish (though I still love having them enabled on my 7.10 install).
The rest of his issues were less useful, imo. I would have preferred him to focus more on the overall picture than so much worrying about widgets. But worrying about widgets is important as well, I guess.
It would be nice if someone would go through the whole preferences panel and reorganize it. It has become chaotic and disorganized, imo.
I am a Gnome user, but I just wonder when Gnome will ever work on making the menu easier to configure. Alacarte is buggy. We need transparency too. And yes, the two panels are unnecessary unless you like to fill one with lots of icons (which you can access through the menu anyway).
I think the two-panel approach exists for just what you mentioned: a wealth of permanent icons and static items on the top panel (menus, clock, favourite apps, weather) and the more dynamic, resizing items on the bottom (task list, notification area).
It should also be noted that the placement of the show-desktop and trash applets uses the very user-friendly concept of hotspot corners. Without intimate knowledge of the desktop's keyboard shortcuts, one can quickly, and consequently more easily, accomplish these very common tasks.
Note that a significant amount of Gnome users use and prefer “standard” aspect screens*.
In this case, trying to squeeze the Gnome menus, shortcut icons, desktop pager, clock, notification icons and controls onto one panel would leave little room for the window list (the most important part) on all but the highest resolutions.
Plus, remember that almost all Linux users are at least “power users” and are likely to want to add applets to their panels. With a single-panel layout (as used in earlier versions of Gnome) there is simply no space available for this.
Better to have too much available space than not enough.
*IMHO “widescreen” is a con by screen manufacturers to exploit the fact that screen sizes are measured diagonally – so they can make 21″ “widescreen” monitors with barely as many pixels as 15″ “standard” aspect monitors.
1024*768 = 786432 pixels
1680*1050 = 1764000 pixels
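A quick back-of-the-envelope sketch (in Python, with a hypothetical helper name) shows both sides of this argument: the widescreen panel in the numbers above does have more pixels, but for a fixed diagonal measurement a 16:10 panel has measurably less physical area than a 4:3 one, which is what the “measured diagonally” complaint is really about:

```python
import math

def screen_area(diagonal, aspect_w, aspect_h):
    """Physical area of a screen, given its diagonal and aspect ratio."""
    height = diagonal / math.hypot(aspect_w / aspect_h, 1)
    width = height * aspect_w / aspect_h
    return width * height

# The pixel counts quoted above check out:
assert 1024 * 768 == 786432
assert 1680 * 1050 == 1764000

# But for the SAME diagonal, widescreen gives you less glass:
ratio = screen_area(19, 16, 10) / screen_area(19, 4, 3)
print(round(ratio, 3))  # → 0.936, i.e. roughly 6% less area per diagonal inch
```

So both posters have a point: comparing a 1680×1050 panel to a 1024×768 one favours widescreen, but inch-for-inch of advertised diagonal, the 16:10 shape is smaller.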
I don’t see how that’s a ‘con by screen manufacturers.’
I prefer the widescreen (even with two panels) because I find the work flows better when windows are next to each other. The windows don’t get hidden nearly as much.
IMHO, I think your humble opinion might be wrong, or at least a little misguided.
I guess that it’s also possible that I missed some oozing sarcasm in your post.
A 16:10 ratio makes way more sense for monitors in virtually every application other than document processing. Not only do you have more space on the taskbar, but you can have more tool palettes up without sacrificing document space.
I thought it was rather gimmicky too, until I got my widescreen laptop. Now I would never buy a “standard” ratio monitor again.
Heh! Oh, widescreens are scams all right, just not in the way you’re thinking of. Look at the biggest feature of Vista: the sidebar. Now try to remember back when IE 4.0 first came out, and Microsoft contracted with Disney and all sorts of other advertisers to sell our desktops as their own personal billboards…
Yup! IMHO the main reason for the decision to go ‘widescreen’ was so the media companies could exploit the fact we’re used to standard displays, placing advertising and other crap on the sides of the screen while still giving us the “standard” display area we’re used to. You already see it happening with the little mini-ads the TV networks run over the top of their shows…
–bornagainpenguin
It’s also worth mentioning that you can ditch one of the panels if you wish, or add new ones. The choice is yours.
I laugh reading this because with my operating system I totally have artifacts. I can afford a “screw the picky user” attitude because… I don’t care. There’s a saying that 20% of the effort produces 80% of the result; the spit-and-shine the article refers to is no fun. I don’t know about you, but I want to say screw the damn picky bastard users, and if this guy were my boss I’d hate him.
You guys should be nice to the coders and stop worrying about desktop domination. If you never spent the 80% on spit and shine, you’d have 5 times as many cool features.
I do agree with your opinion to a degree, but developers sometimes forget that it’s the ‘spit and shine’ that often attracts new users (OS X is an example of this). Unfortunately, the name of the game is to attract new users to keep projects (such as Ubuntu) running.
To me you’ve demonstrated the problem with software development in general. Most software developers really are content to make it “good enough”. That’s a shame.
I personally currently use Kubuntu. No issues are raised here for me.
It seems to me that too many of the “name” Linux distributions have decided on GNOME as the default desktop. This applies to Fedora, SuSE and Ubuntu.
Perhaps this focus of the “name” distributions is the entire reason why lesser-knowns such as PCLinuxOS, Sabayon and MEPIS are on the rise?
“””
“””
Are they?
http://tinyurl.com/yu5zlu
“I personally currently use Kubuntu. No issues are raised here for me.”
Should I care? … Mmmm, no!
Red Hat always had Gnome as the default (and more or less only proper) desktop, but now KDE is much more on an equal footing. SuSE was always KDE-based; now Gnome is more on an equal footing. Ubuntu has always been Gnome-based, and Kubuntu is slowly getting more support.
Most distributions still use KDE as the default, like the ones you mention, and don’t forget Mandriva, almost all Asian distributions (mostly based on Red Flag), Linspire and Xandros, etcetera.
But the Gnome-based distros (e.g. Ubuntu) make more noise, communicate better, have more marketing, are more vocal, however you want to put it, which leads people to think they have a much larger market share than they probably really have.
Yeah, sure. I wonder how a psychology student can live in eternal denial.
No, this is exactly how a psychologist thinks. He doesn’t trust what people say or scream, as there is almost always a (totally different?) underlying reality.
PCLOS is little known? AFAIK, it has been dominating DistroWatch for ages now.
Good read, and I agree fully with the author. I hope that with an LTS release due next year the Ubuntu guys and girls focus on these little things and perhaps shy away from hefty changes. For comparison, I watched the feature walkthrough for the latest OS X. My god, it really is a work of art, and Time Machine looks beautiful and seems to function just as well! I think the hard reality of the moment is that people will judge, and are judging, things by their facades. Just see that recent article about Apple’s great quarter. Even compared to the likes of SuSE, Ubuntu leaves a lot to be desired in terms of looks. Apple is an order of magnitude ahead again.
Most of the highly notable elements of OS X that make it so appealing have, I think, more to do with the time and attention that goes into the fundamental infrastructure. I say that because, with limited use of the OS itself, I have noticed many inconsistencies and nuances in there. Notwithstanding that, the great majority of promised behaviors are consistently and predictably there. So aesthetics is one thing, but consistency is another, and that’s where I think OS X is really gaining mind share. That, and the ability to treat a window as a stateless application that you can minimize, close and restart as needed.
I have to say that I started reading this article ALREADY disliking the premise from which it was written, but I have to admit that as I read through it, the blog writer made some very well-thought-out and well-written points. On that, I was pleased.
I feel, though, that the writer is tearing down the proverbial straw man, or throwing punches blindly into the dark. But who do you blame when, in the open-source world, software issues don’t have a single person who is responsible, but several people over several different projects working to several different release schedules? When you take the time to think about this, critiquing user-interface shortcomings in the DE is difficult and perhaps a wasted effort.
Some of you might say, and I think some of you do, that constructive criticism is good and should be encouraged. Which leads me to my next point, which is slightly more contentious than the one before, but which I believe to be equally valid when focusing on priorities in advancing the Linux desktop experience. It can be simply stated by asking, “Who cares?”
Let that sink in for a bit. Who cares about usability issues in the desktop/GUI environment, really? Hear me out, though, before you shoot me down. The most important thing in a GUI environment is the applications that it runs. The visual quirks or instabilities don’t count for much. From a business-user or home-user perspective, people have put up with this in desktop computing for a long time, and it does not bother them that much if their icons are slightly dark around the edges, or this or that is not quite the way they’d like. When using Photoshop, or Word for that matter, such concerns are soon forgotten. (Just to preempt those who will argue that MS has set the standard, the bad standard, of computing and forced it down users’ throats: perhaps so, but that doesn’t diminish at all what I have said about the reduced importance of some of the usability issues raised in this article.) Completing tasks and getting real work done is what computing is mostly about, and yes, you will always find fault with the one-size-fits-all model, whether from an open-source or a closed-source model of production. It will be a fact of life for a long time to come.
By the way, the article was a good read (better, IMHO, than what has shown up recently), just a bit misdirected.
“But who do you blame, when in the open source world, software issues don’t have a single person who is responsible but several people over several different projects working to several different release schedules.”
This is (in my opinion) a valid criticism of most _desktop_ free software projects. Because it’s developed in a rather organic manner, it’s sometimes difficult to pin responsibility on any person or factor specifically. It’s a very reactive, immediate methodology for developing software. Have a need? Write a program, or add a new feature to an existing one. The positive side of the chaos is that many wonderful ideas are given the opportunity to prove their usefulness, but there is the resultant disorganization, and the necessity to “figure out how to make it all work together”: work which has to be done over and over again with every distro release.
I think, for all it might make some squeamish, that the community development model of Linux finds its analog in Microsoft’s development model. A manager will go out and find the needs of Microsoft’s customers: the larger corporations, and the ones that work closely with Microsoft to solve a very specific problem. These requirements all get added to the list of “stuff it has to do.” There are engineers who decide how to advance the state of the art, as well as how to “make it do all that stuff,” things it has to do NOW, with today’s software, where it might be more elegantly solved 3 to 5 years from now. The result is the same massive release-engineering project Microsoft has with every release of Windows. Now Windows can do all that “stuff,” but some of the software is immature, and only does that “stuff” if you use it the way it was designed to be used. Forget using the computer as you want to; the software can’t keep up with you.
I think the coordinated efforts to make the Linux desktop more robust, as long as the code stays available to everybody, will make Linux MORE hackable, not less. If every component is very competent at doing its job in every situation, it means less work to implement a good idea in an elegant manner, because it has a solid architecture to build on. A rising tide raises all ships, and the like.
That’s why I really like Ubuntu. Ubuntu has a pretty well defined path ahead of it because Shuttleworth has stood up and said, “This is my pet project, and my contribution to society, and you all have to play by my rules, or watch from the sidelines.” While being open to listening to what users want or need, the leadership behind Ubuntu has produced a consistently high-quality product because they are focused on changing things proactively, instead of waiting for a solution by community committee.
Now, I know that in some ways this is the opposite of the “community spirit” of free software, but the benefits of it trickle down to all users of the software, and it does not hamper anybody’s ability to use the software for whatever purpose they see fit.
Again, I hope not to mischaracterize free and open source software in general. I find many of the non-desktop projects have clear leadership, and very stringent requirements for accepting an outside contribution.
A second reply to you, antwarrior:
Just because many people have learned to “put up” with computers doesn’t mean it should be that way. There are many people who have never learned to overcome the difficulties inherent in computing, and that’s a shame, because they can’t benefit from a very useful tool.
There are those who are not capable of grasping the abstractions in interacting with a computer, and there’s no problem with that. They’d be more productive and happier NOT using a computer. The problem comes when those who are capable of grasping the abstractions find those abstractions implemented inconsistently. Why should they have to be able to compensate for the flaws in the tool? Some of the flaws don’t really “matter,” and are more about ergonomic usage, but presenting desktop computing as a set of inconsistent, opaque paradigms only frustrates people. They begin to think that the computer is always “doing something” behind the scenes, out of their control, and that at any moment they could find themselves in a situation they don’t know how to fix.
There is always difficulty in shoe-horning a new task into the desktop model, because most tasks do not lend themselves naturally to presentation in a computer interface. The desktop paradigm ceased to be an abstraction of a desk in the early 90s, due to the proliferation of tasks demanded of computers. Configuring a printer? What is the real-world counterpart of installing a printer driver and setting up a print queue? The least we can settle for is a solid set of abstractions that, when implemented consistently, will themselves describe the way to complete the task the user wishes to complete.
For instance, there is nothing that says we must have a back button in a web browser, but because we have an established concept of what the back button’s function is, due to reinforcement in other areas, the behavior and purpose of the back button in a web browser should be intuitive.
That’s why the whole wizards concept was tried, and ultimately failed. It was thought that if we could give straightforward, natural language instructions about what is going on, and how to complete the task, there should be no problem completing it. The problem with this is that it places the back-end, mathematical churnings back into that place where the user feels out of control of the computer. Furthermore, the user learns nothing about what just happened, and has nothing to draw on in the future.
You know you have a solid set of interface primitives when you find you can use the same set to accomplish tasks which have never been represented in the collective mish-mash we call the “desktop” paradigm.
I won’t go too far into it, but my opinion is that people are scared to experiment with computers and see what works because they’ve learned that if they do that they’re going to mess something up. If we give people an environment where they are capable of figuring out which actions can be undone, and which actions are destructive and binding, I think we’ll make a lot of progress in making computers discoverable, and encourage people to experiment and try to figure things out for themselves.
Let me finish by saying that there is nothing wrong with not being able to figure out how to accomplish a certain task using a computer. If you can’t figure it out on your own, learning a strict procedure is one way of doing it, as long as you are comfortable with the limitations that puts on you. I don’t mind helping people, but honestly I feel like being honest sometimes, and saying, “Look, it’s not worth it for you to be dependent on me to be able to do your work. Have you considered finding other means besides a computer for doing your work?”
OK, I’m done playing armchair usability expert. I can also be an armchair brain surgeon if anyone would care to listen. (No, the hypothalamus has no point regulating circadian rhythms. Circadian rhythms should be determined from the surface of the brain, because it’s closest to sunlight.)
…a button that turns into a slider bar, which is the most recent incarnation. What does the first click achieve? Saving the space of displaying the bar when it’s not needed?
Don’t most Gnome users adjust the volume using the mouse wheel over the volume icon? No clicks.
Don’t know about you, but I use the volume control keys on my laptop to control the volume. Yes, using gnome.
I know I do. I am guessing he doesn’t even know that feature exists. It is WAY nicer than any other way.
Wow, I didn’t know that. Thanks for the tip!
I don’t disagree with any of the comments in the article, but they seem tangential to me. Ubuntu and desktop Linux have come a long way, but: My phone doesn’t sync properly. I had to spend half a day getting A2DP to play to my Bluetooth stereo receiver. It’s great that the external monitor works now, but it doesn’t work out of the box. I can’t capture the video from the builtin webcam. Fix some of these problems and I’ll happily take as many screen spasms and duplicate volume controls as you serve me.
That’s where the “push bugs/patches upstream” mindset comes in very handy. The bulk of the issues being experienced are being addressed by the various developers. I find it often comes down to the level of support, assistance and feedback that they receive, which short-changes us all. After all, if the guys who have the know-how and ability to come up with these systems in the first place are busy doing other, less beneficial things, like finding a way to pay the bills, they won’t be able to spend the necessary time fixing the ever-so-often simple issues that cause the greatest damage.
I definitely agree with the author of this article. Just because you have free choice to do whatever you like with your computer doesn’t mean that distro makers should forgo release engineering. Not that I think they ignore it in the least bit, but it’s just taking Linux a long time to establish a really robust desktop environment.
I believe the “spit and shine” should absolutely remain high on the list for the developers and distro makers. That’s one thing I’ve always been a bit unsettled about: the Linux desktop experience, while nice in its own way, feels fractured. Especially in KDE. I feel like I have to work the way the computer wants me to work if I want to avoid all the corner cases that haven’t been addressed. It’s really not as robust as it should be.
Take a look at Mac OS X. Why is the experience so pleasant (at least for me)? Is it because it’s architecturally superior to all other OSs out there? While the userspace is pretty well architected, the answer is NO, it’s not entirely due to the architecture. Apple puts a whole lot of work into making sure the computer knows what to do in all the possible usage scenarios. Each component is made to do one thing, and do it well, in any given situation. The result of having these deeply written components is that the whole thing just does its job.
I applaud all the efforts going into making each component of the Linux desktop work better. I know that you can make Linux run on anything, given enough work, but I’m glad to see that efforts are being made to make a working desktop the natural result of the system, instead of something that has to be coaxed out of it.
Think of it this way: If more work goes into making X.org work without a config file, then the natural state of X.org will be working, instead of only working when it is passed the correct configuration from a text file. Maybe I’m mischaracterizing it, but X.org only does what it’s told, and it doesn’t really know too much on its own about the capabilities of the hardware it’s on without that magic configuration file. Now that work is going into making X.org know how to handle its job regardless of the environment it’s dropped into, I believe we’ll see considerably less flakiness, because hopefully X will know what to do, and be more able to handle errors on its own. How much less work distro makers will have to do to ensure that they set up the system the *right* way, when the right way is automatic, instead of having to assemble the parts manually, and find what works, and what doesn’t!
I know this has been a long and convoluted post, but I’m really jazzed to see that these things are finally being addressed satisfactorily. Ubuntu devs, take heed of what the author of this article is saying! You’re all on the right track.
This is a very good article that I think is a definite must-read, especially regarding Compiz, which I have come to basically detest. In this latest version of Ubuntu it had so many problems that it was not even worth enabling. Comparing Compiz today with either Apple or Vista is not even close. It seems like not much more than a hack; at times it actually seems more like a bug. While the potential is positive, the implementation is certainly not.
One aspect that is currently lacking is time spent finding out, through thorough market research, how users respond to the UI. Both Microsoft and Apple invest a considerable amount of time and research into finding out how users respond to their UIs. The end result is that both GUIs are close to how users want things to look and feel, and both have consistency and features that are actually beneficial, not just plain eye candy. A spinning globe of the desktop may wow your friends and family for a second or so, but beyond that it has no functional purpose.
Personally, I am in the minority who feels that instead of having a new release every few months, I would rather have just a completed release. Instead of a “new” release in 6 months, the time would be better spent on a complete feature freeze, just polishing what is already there.
I see quite a few comments about how ‘unstable’ and ‘alpha’ compiz/beryl/fusion are… but in all honesty, I rarely have any issues with them. I do have to change a few settings after installing the latest NVidia drivers due to missing titlebars, but I’ve done that so many times it’s pretty much an automated action now. All the effects (spinning cube, transparency, etc.) work exactly as advertised for me.
…but then, I’m a KDE user, so maybe that’s the difference?
Yeah… it may be because you are less bothered (or inconvenienced might be a better word) by tweaking/fixing an out-of-the-box setup than less technical users are. Users who, upon seeing a missing title bar, might think their program has crashed and try to restart it. They most likely do not have the prior knowledge that there is something named Compiz, a window manager responsible for the window borders, and Xorg, which has drivers configured for their NVidia video card. Not knocking KDE or anything, but sometimes having the option to do exactly what you want does lead to forgetting that there are those who don’t know yet, and who may just rely on sensible defaults being there.
To a point I agree, the user shouldn’t need to do anything to make this work correctly, but a quick google for “missing titlebars kde” provides a bunch of information about this issue.
My initial install of Vista gave me no compositing effects and required a quick search to find out how to enable that, so I don’t see this being all that different. Maybe we just need to educate users on how to make better use of the search tools so they can easily find what they need.
Responses to your points:
I don’t know what your issues with Compiz are, but I use it without problems and many other people do too (if it really was as buggy as you say, then Canonical could not have shipped it in Ubuntu (seeing as they provide commercial support for it)).
You imply that you tried it in Ubuntu 7.10, but you go on to mention a “spinning globe of the desktop”, by which I assume you mean the “desktop cube”, the old desktop-switching effect. Seeing as this is only available as an option in 7.10 (and one that you need to install additional packages for, at that), you obviously did not try it in any depth.
Customer research takes money. If you want to be helpful, try sponsoring some, rather than complaining about the lack of it.
As someone who uses Vista a lot (my computer triple-boots XP (for games), Vista and Ubuntu, and I had networking issues with Ubuntu 7.04 (and to a lesser extent with 7.10), so Vista is my “main” OS), I don’t know where you see consistency in it! Pick any 3 apps that ship with Vista and you will see 3 different UIs: different colours, different widget placement, menubar or lack thereof, etc. Vista builds on the consistency problems of previous versions of Windows to create a user experience that seems to use inconsistency as a way of keeping the user interested.
Apple, on the other hand, gets it right (mostly).
Many of the effects of Compiz are actually very good. Shadows give the windows a sense of depth, window transition effects make applications appear to come from somewhere, rather than just appear, the “desktop plane” effect (that replaced the cube) allows you to see all your desktops simultaneously and drag windows between them, even the “wobbly windows” effect makes the system feel more friendly and natural (even a short period of use makes the rigid windows of Vista feel odd. (Well they are made of glass, maybe they would break…)).
On a technical level, they easily surpass what Vista and OS X have and with the customizability that Linux has you can tune the effects until they are just how you like them. (Unlike Windows that loses customizability with each release and the Mac which never had any). Of course, Ubuntu’s defaults are pretty good too.
Not that Compiz is completely without issues; I spent a good hour trying to stop it from making Alien Arena’s window transparent (eventually I had to exclude it by window title), and the screensaver is also transparent on my second monitor. (How about this, Compiz: leave fullscreen windows ALONE! Do not apply any effects to them at all!)
I agree with the lifecycle issues, it does get annoying to have to re-install every 6 months (last time I tried an upgrade (6.06-6.10), it broke and left the system unbootable).
Upgrades should have worked without issues since Feisty; you have to make sure you use update-manager, as opposed to the old way of running aptitude (or apt-get) and doing a manual dist-upgrade. So re-installing is not really necessary.
I agree with the article on some things, like transitions. For example, when I start Gnome, it starts Metacity, then kills it, then starts Compiz. GNOME shouldn’t even be displayed until everything has been loaded; the panels should either fade in or slide out like they do in OS X. Usplash needs to go; it’s not a very good solution, and there are better ones out there. GDM needs to use AIGLX to handle effects, so that when it starts it fades in instead of just popping up. I happen to like the two Gnome panels; even on a widescreen I find them very useful. There have been many complaints about the Gnome panels for years, and so far very little work has gone into updating them; at this point maybe they should consider re-writing the Gnome panel altogether.
There are a couple of other things I agreed on too, such as some of the consolidation of options. I understand why they moved the pointer themes to the Appearance menu, but it really doesn’t make much sense in the end, and it should be moved back to its original place. I like his idea about the two About links in the System menu. There should be only one About item, and the other should be something similar to the “About This System” item in OS X. This would allow things such as the system monitor to be easily accessed. Maybe they should get rid of both About GNOME and About Ubuntu; users are already using the system, so they don’t need to be told what they are using when there are logos plastered all over the distro. Instead, the About information should be moved to the Help menu.
I have actually had very few issues with Compiz and Beryl since whenever the project started. I think there is room for improvement on many fronts, but so far I’ve had very few bugs, maybe because I tend to go with a very conservative set of effects (except for expo and the shift switcher). I usually turn off the animation plugin (because I find it slower) and use the classic minimize plugin. Most of my issues with Compiz have to do with how it interacts with Gnome, and some of these issues are actually Gnome’s fault. For example, Compiz should be used for any transition, fade, animation, etc. However, this level of AIGLX support requires that GTK+ get an overhaul, and many developers are calling for changes in the toolkit’s development, but progress is slow. Compiz is only a window manager; there is only so much it can do, and at this point it does more than its fair share. GTK+ needs to pick up the slack a bit. I’m hoping KDE 4 will give the Gnome team a good kick in the pants and hopefully incite some action.
Well, SOMEONE needs to fix these things.
I’d very much like to see no tearing effects when I move windows around. I’d also like to see one of the bottom bars being for ‘most recently used applications’ in addition to ‘launchers you intentionally placed there’.
I suppose with X making rapid progress in the last three years after a period of relative stagnation, we may see something soon. Xorg 7.3 supposedly won’t need xorg.conf files (fine, as long as autodetection is perfect)
@Mallard
I think you need to reread the article. Not once does the author say that Compiz is buggy. He does, however, say it is unpolished, which I happen to agree with.
@ssa2204:
I understand that the spinning cube itself does not do much but wow the audience, but other features of Compiz are pretty useful. The Compiz implementation of Exposé works beautifully, as does the expo plugin, which is what I use every day. I have Vista and OS X as well, and Gnome/Compiz appears very polished to me compared to both.
Some of the other plugins that I use a lot are tab/group, ADD helper amongst others.
I understand that not everyone may agree with me, just as not everyone will agree with you, but Linux is free/libre and I get to do ALL my work on it AND play my games (under wine and native) so I don’t really see any value in going to OS X or Vista (though I work on both every day).
Apple only just made the OS X UI more consistent with Leopard. There were many glaring inconsistencies in the way applications looked, which is why projects like UNO exist. And many apps on Vista also look different from one another.
Gnome, KDE, etc. are all getting better and better with each release, and I don’t see this trend changing anytime soon.
Edited 2007-10-26 00:14
I’m far from an Ubuntu fan, but I think this guy’s attitude is not very helpful.
Did he make any of these issues known during the testing phase? He did say that he noticed them before release, so where are his Bugzilla entries? If he had filed them, the issues might not have made it into the release version.
Also, I would hate the disk1, disk2 thing; it’s almost as bad as some installers I’ve seen calling /dev/sda the C: drive!
As for the widescreen comment above, I totally agree: I can’t think of anything more useless for a desktop than widescreen. Frankly, I prefer more height to more width; hell, I’d even prefer a portrait-mode monitor! Widescreen is fine for your TV for watching films, but for desktop computing it’s silly.
And yeah, my work laptop’s 15″ widescreen is smaller and lower resolution than my 14″ normal laptop’s.
Edited 2007-10-25 08:24
Disclaimer: I like Linux a lot and use it myself, and encourage family and friends to use it.
Overall my family, and especially my very set-in-his-ways and picky step-father are happy with 7.10.
However, when he created launchers on his desktop he got little dots with text. Stretching the icons isn’t intuitive: you seemingly get four corner handles, but whichever one you drag, the icon only enlarges in one fixed way. It felt artificial, quite unlike resizing a window by its corner, for example.
Additionally, the icons were the generic spring icon, and trying to change them simply brought up a file browser at the home directory rather than any icon list. Had I not known where the icon files lived, I would have deleted the shortcuts and recreated them in the hope that changing the icon there brings up an icon list (it does).
I showed him that he must select the icon during shortcut creation, that he must then stretch it by dragging it just so, and that to change the icon it may be easier to just recreate the shortcut.
The author of the article is right about one thing if nothing else – the small details, especially in quantity, can kill the deal.
Edited 2007-10-25 10:08
I see a lot of bellyaching about inconsistencies, etc. in the Gnome (specifically Ubuntu 7.10) desktop. But users should be thankful for small favors. The beauty (no pun intended) of Gnome, and Linux in general, is its open nature: if you don’t like a particular convention, you can change it, or even change desktop environments altogether. With Windows or OS X, this isn’t an option. It should also be emphasized that Gnome (and, again, Linux in general) is relatively young; many of these so-called irritations and inconsistencies will likely be polished away in time.
Just yesterday, I read about the huge hoopla that ensued when Apple redesigned the OS X Dock with a more 3D appearance. Many users don’t like the new look but cannot change it without hacking. This scenario will not occur with Linux, since every single line of code is freely accessible, so anything can be changed.
Apple made the classic Dock available; you should be able to change it back through the command line. People are still complaining about the look, though; it seems it looks rather different from the one in Tiger. These same people complained when the Dock was created in the first place.
Excellent article. These are exactly the kinds of things that the Linux community needs to pay attention to if Linux is ever to overtake or compete with Windows/Mac OSX over the long haul. Good interaction design is extremely important to create that perfect user experience.
17″ – 19″: 1280×1024 > 1440×900
21″ – 22″: 1600×1200 > 1680×1050
“A 16:10 monitor with the same diagonal size has 6.8% less area, meaning that you buy less screen space in total. A 16:9 monitor with the same diagonal size has 12.3% less area than standard aspect ratio display. ” – http://en.wikipedia.org/wiki/Wide_screen
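The quoted figures are easy to sanity-check. A quick sketch of the area arithmetic (assuming area at a fixed diagonal, with 4:3 as the baseline; the exact percentages shift a little depending on whether 4:3 or 5:4 is taken as the "standard" ratio, which is presumably why they don't match Wikipedia's numbers exactly):

```python
import math

def area(diagonal, ratio):
    """Screen area for a given diagonal and aspect ratio (width/height)."""
    h = diagonal / math.sqrt(1 + ratio ** 2)
    w = ratio * h
    return w * h

d = 19.0  # any diagonal works; the percentage loss is size-independent
loss_16_10 = 1 - area(d, 16 / 10) / area(d, 4 / 3)
loss_16_9 = 1 - area(d, 16 / 9) / area(d, 4 / 3)
print(f"16:10 vs 4:3: {loss_16_10:.1%} less area")  # ≈ 6.4%
print(f"16:9  vs 4:3: {loss_16_9:.1%} less area")   # ≈ 11.0%
```

Either way, the direction of the result is the same: for equal diagonals, wider aspect ratios give you measurably less screen area.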
Not that wiki is all knowing, but I can only do 2 math problems a day.
“Not that wiki is all knowing, but I can only do 2 math problems a day”
That’s still 2 more than I can do
Yup, it’s a total scam.
A 19″ widescreen monitor should basically be a regular 19″ with a bit of extra width, not less height!
Widescreen needs to be renamed shortscreen, or same-width-but-not-as-tall-screen.
I can’t be bothered to do the Pythagoras and get my ruler out, but to get a widescreen that’s bigger than a regular 19″ monitor, I suspect you’d need to buy a 22″.
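The Pythagoras is quick to do in code. A sketch, assuming the "regular 19″" monitor is a 5:4 panel (1280×1024) and the widescreen is 16:10:

```python
import math

def height(diagonal, ratio):
    """Physical panel height for a given diagonal and aspect ratio (width/height)."""
    return diagonal / math.sqrt(1 + ratio ** 2)

h_std = height(19.0, 5 / 4)                 # height of a 19" 5:4 panel (~11.9")
d_wide = h_std * math.sqrt(1 + 1.6 ** 2)    # 16:10 diagonal with the same height
print(f"{d_wide:.1f} inch widescreen needed")  # ≈ 22.4 inch
```

So the suspicion above is about right: you need roughly a 22.4″ 16:10 widescreen just to match the height of a 19″ 5:4 display.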
Something else to be checked by the Ubuntu guys ?
http://www.linux-hero.com/rant/explanation-ubuntu-hard-drive-wear-a…
I did it on my laptop and it decreased the clicking a LOT.
[ignorance-mode]
Pity that I’ve been “killing” my hard drive for so long without knowing what to do…
[/ignorance-mode]
The article makes many good points, but I’m not sure I agree with all of them.
For example, having two panels improves usability in my opinion.
It is very easy to reach the four corners of the screen with your mouse, so the corners and the left and right edges of the top and bottom panels are natural places for the most-used shortcuts and icons (log out, show desktop, trash can, and the main menu).
Also, the Gnome taskbar needs enough space in order to remain easy to use as the taskbar is easily filled with open tasks even when you use a big resolution monitor. If there was only one panel, there might not be enough room for all the icons plus the taskbar, especially if you have a small monitor.
Also, if you have a tiny monitor (a portable computer, for example), the empty space in the middle of the top panel may not be that big. Besides, I personally like to put my most-used app icons in that empty space, so I don’t mind at all having that extra room ready for my own shortcuts.
Edited 2007-10-29 09:35