“The first KDE 4 release will come along with several major changes compared to KDE 3.x. While explanations for these changes have been posted at several places before there is a central list missing which explains the reasons to normal users. This post lists some hot topics and tries to shed some light on the reasons behind certain decisions.” Update: The release date for KDE 4.0 has been postponed from mid-December to January 11th 2008. I’ll be sure to give you my address, Aaron. Insert smiley face.
It seems like there were, and still are, many debates about how things should look — but this is to be expected. If it is really as flexible as they say, I am sure sooner or later (sooner) everyone will be able to tune the desktop to his/her needs.
I just hope that the menu can be changed back to the old style. I like the stone age, so I change it back whenever I have to work with Windows.
Anyway, did someone realize that “powerful” was written with double ‘l’ not only in the post, but in the comments as well? Is this another KDE syntax convention?
I’m afraid it’s my fault, I’ve started to infect others with that stupid spelling mistake
About 2-3 years ago, when Microsoft ‘updated’ their Intellipoint mouse drivers to version 5, they removed several of the features that made version 4 so good (such as the universal scrolling and the ability to customize mouse buttons at the application level). I don’t know why they did this, but I assume it was because they had to rewrite the drivers to support the new wireless mice they were releasing, and didn’t have time to put all the features back in. And it really sucked balls not to have those features that I’d been using for years, so much so that I decided not to buy one of the newer models so that I could stick with version 4.
The last time I checked (a few months ago), they had added most of the features back, but it still wasn’t as complete as v4 was. Let’s hope that the KDE devs don’t wait too long to add the missing features back.
And let’s also hope that in their quest to ‘idiot proof’ the interface, they don’t make life unnecessarily difficult for the power users.
that’s the beauty of open source. even if the features never get officially put back in, anyone can add them on their own.
with proprietary stuff it’s wait and pray, with open source one can either wait, or go do it yourself…
I’m starting to learn C++ programming tomorrow so I can customize KDE features.
So where do I go to get started? Will one of those “learn C++ in 24 hours” books do?
How long you figure before I can start submitting patches? Couple weeks?
> So where do I go to get started?
once you get far enough along with your c++, there are great tutorials that come with Qt itself and then there is techbase.kde.org for kde stuff on top of that.
can’t wait to see your patches when they appear =)
You may want to start with a Qt specific book rather than a generic C++ one. I’ve heard good things about this one:
http://www.nostarch.com/frameset.php?startat=qt4
Techbase also has some good tutorials and examples to walk through for some more KDE specific code.
http://techbase.kde.org/Development/Tutorials
I’d recommend using C++ GUI Programming with Qt 4 by Jasmin Blanchette and Mark Summerfield.
Their examples got me up to speed, and I still keep it within arm’s reach of my computer at all times.
Also, I’d recommend against buying a generic C++ book. C++ is a _very_ complex language, and there are lots of things that Qt makes easier (even for console applications) than plain C++, so for a beginner, I’d say jump directly to Qt.
(Disclaimer: all my homegrown c++ apps use Qt, even if they’re console ones. I can’t stop using it, it feels too good =D)
I totally agree. C++ without Qt is more or less terrible. I made several attempts to learn C++ from a pure C++ book and the examples are just incredibly tedious to work through. Start with Qt, and you won’t have to suffer through the crap that is plain C++. It really is a completely different beast.
Unfortunately, a lot of people have been scared off by C++ because they used it without a good class library like Qt.
Although it’s fine to learn Qt first before the C++ fundamentals, I would recommend at least reading about the basics of C++ like operator overloading, classes and templates, because Qt makes heavy use of those. Qt can look intimidating if you don’t know anything about those concepts first.
I think you are absolutely right.
I’ve looked at Qt and some of the Qt tutorials. But, I’m focused on C++ because I think there’s no substitute for knowing what’s going on behind the scenes. Sure, take shortcuts and don’t reinvent the wheel — but, only after you already know how to do it the hard way. I would never write my own queue or stack again. I would always use the one included with the language. It only makes sense. But, I would do it because it is tried and tested code, not because I wasn’t capable of eventually doing it myself.
Well, I’d prefer to learn at least the basics of pure C++. No matter how great a toolkit is, it’s only one of the many the language supports. I never used Qt, but I was very satisfied with wxWidgets. Just choose what’s best for you and your task.
A great book for learning pure C++ is Bruce Eckel’s “Thinking in C++”. Available in print and fully online:
http://mindview.net/Books
I agree you should first learn the basics of C++, I’m not sure if those Qt books really do much of that or not. I suspect they assume you’re already familiar with the language.
However, shapeshifter said he was doing it to work on KDE, and a lot of what you would learn in one of those C++ books is much more complicated and wouldn’t be accepted in a patch for KDE anyway since they already have Qt/KDE specific ways of doing things. That’s why I suggested buying a Qt book instead. Reading the first few chapters of that online c++ book first would be a good idea, but I think reading the entire thing is overkill.
Actually one of the reasons why I started attending programming courses at university – besides my boring and unfulfilling BA study… – is that I also wanted to improve programs I use regularly – not to forget that I like to fiddle with problems. And after all, why not give something back when you’ve taken so much?
Though this semester we are only using C with so few libraries, and man, that sucks. I know a little C++ (I made it nearly through that learn C++ in 20 days book in the summer) and that’s a lot more comfortable.
Reading so many positive things about Qt (on planetkde e.g.) makes me curious.
I won’t have time in the next months though.
Go to #c++ on irc.freenode.net. They will explain to you why “learn C++ in 24 months” is somewhat more realistic
Damn, so I will not be able to fork KDE in January if I don’t like the new design?
But seriously, 24 months seems kind of long.
24 months is a minimum, depending on the background and on the level of expertise one is looking to achieve
Edited 2007-12-03 00:14
if he wants to become an expert, then yeah, at least 24 months. if he just wants to hack on fun stuff, individual bug reports or specific application features … it’s a couple months for a decently motivated person with a half-decent mind to get to the point of being able to sling kde c++ code about proficiently enough to get acceptable patches together.
Well… I was working productively in C++ on Krita within one month — with my previous experience being Java, Python, Visual Basic and a couple of other languages. But I had never used C++ before. I’m still learning new things, of course, even after four years. (Like, if your subclass reimplements only one of a set of overloaded virtual methods, the remaining overloaded virtual methods are hidden). But it’s really doable to get started in Qt and C++ and do useful stuff really soon.
Don’t buy Steven Oualline’s Practical C++ Programming, though, and be wary of Thinking in C++. If you don’t mind using an index, getting Stroustrup’s The C++ Programming Language together with the Blanchette book will put you miles ahead of the game.
To get a sense of how complex the issues you deal with in C++ are I propose that you take a look at the following C++ FAQ and the corresponding FQA (just the fact that a FQA like this exists suggests something already):
http://www.parashift.com/c++-faq-lite/index.html
http://yosefk.com/c++fqa/index.html
this looks like top quality sarcasm, but given the serious responses i’m starting to have some doubts..
But how many actually do?
There’s all this talk about open-source and how “anyone can just add/fix this!”, but in reality, everyone waits for the devs of the actual software to do that.
If the original devs/maintainers aren’t the ones doing the fixing/adding, either the original software has to be forked and re-released, or everyone has to wait around until they approve a patch and release a new version themselves.
Well, quite a few of us are sitting here and there, patching the life out of firestarter to keep it working with the newest kernels – all because development has come to a halt and there’s no response from the devs. So we do it ourselves.
Of course most people will wait for somebody else to do things – it’s so much easier. But open source _does_ give you the option to do things. The fact most people don’t want to is completely irrelevant. The important thing is that those of us who care have the option to do so. And we have. Great, ain’t it?
> If the original devs/maintainers aren’t the ones
> doing the fixing/adding, either the original
> software has to be forked and re-released, or
> everyone has to wait around until they approve a
> patch and release a new version themselves.
that’s not particularly how it works. patches get submitted and are picked up based on their merit. it really doesn’t matter where they come from. nor do submitted patches lead to having to wait longer for releases.
please consider the actual history of the kde project in this regard and you’ll find that your assessment is just a chicken little scenario.
At least you can hire them (or anyone else who wants to). No legal barrier like with proprietary software.
> Let’s hope that the KDE devs
..
> let’s also hope that
let’s not hope, let’s do. you can help by providing specific feedback with specific solutions.
http://dot.kde.org/1196525703/
Congratulations Thom, you won your bet with Aaron =)
And that on my birthday! Cool .
http://youtube.com/watch?v=yj6cbM-h8xg
Happy Birthday, Thom!
Double congratulations then
What did you win? Wasn’t it something with beers?
it’s totally not finished, and yet again KDE 4.0 is delayed for the same reasons. Does this mean Feb for proper polish?
“Does this mean feb for proper polish?”
No, it still means 4.1 for PROPER polish.
Edited 2007-12-01 21:46
“proper”.. what an ill defined word. it makes your queries and statements really difficult to respond to. =)
ok then, by proper I mean a proper release date. we can quite clearly see KDE 4.0 is not ready for release, and so can you KDE devs, so why set unrealistic dates?
> by proper I mean a proper release date
when you are ready to discuss specifics versus yet more generalities (i have no idea what a “proper release date” means to you, nor do i think you probably have the context to make such an assessment to be honest) then we can start having a productive conversation.
I prefer that they use another month and get it right the first time. Excellent decision!
When I read this post, though it explains a lot, I feel like the author is acting a bit condescending towards those who might have complaints or criticism of some decision. E.g. the kickoff menu discussion is ended with: “there’s been thorough user testing”, pretty much a dead end to the discussion, but… while this might be true, things that shouldn’t be in a menu (favorite apps/places, e.g.) are put in a menu… perhaps it’s done in the most optimal way possible, but did any of the menu designers consider that these things don’t belong there in the first place? One can use as many scientific tests as he/she’d like; in the end he’ll never get the right answers if he asks himself the wrong questions to begin with.
But well, the coders decide, and luckily there are alternatives, and perhaps the stone age will save the day.
speaking of condescending feelings, did you ever figure that the menu designers figured what should go in a menu?
I’d hate for KDE4 to end up being compared to Vista (or Leopard, for that matter) in the arena of ‘unpreparedness’. The kommunity has produced incredible work so far, no reason to believe they can’t do it again.
Edited 2007-12-01 20:43
The “RC” looked and felt completely unprepared.
i know that most people on the receiving end of the release completely lacked the context needed to understand why an rc release was done. i’ll be on the linux action show in the next week or two explaining a bit more about that context. perhaps you’ll listen in and discover another perspective on things.
I’m sorry, but a Release Candidate is a candidate for release. KDE 4.0 “RC1” is in fact another beta, rather than a pre-release without apparent bugs.
It’s OK to do pre-releases with obvious flaws. That’s what pre-releases are for, but please don’t call them Release Candidate.
Edited 2007-12-02 16:25
> That’s what pre-releases are for, but please don’t
> call them Release Candidate.
i’m sorry, what part of “[you lack] the context needed to understand why an rc release was done” was hard to understand? simply restating your position doesn’t suddenly give you that missing context. seriously, tune in to the show when it’s released and *then* comment on things further.
There’s no context for calling an obvious beta a Release Candidate besides marketing. End of story.
Extremely bad marketing then, because all this has done is made people very upset and they’re all assuming KDE4.0 is going to be extremely buggy.
I think they were just trying to pressure developers to finish up by letting them know that KDE4.0 was almost ready to release, and it wasn’t going to be delayed for months. But there are lots of better ways of doing that than misleading all the users.
I look forward to hearing aaron’s explanation, although it gives me pause that he won’t just say it now – it makes me wonder if he’s still trying to figure out what he’s going to say.
I still think KDE4 is going to be great, but I’m starting to wonder if I’d be better off waiting for the service pack (4.1). Time will tell, I’m still hopeful about 4.0.
Edited 2007-12-02 19:55
maybe it’s just because it was a long day and it’s getting later here, but.. man, it strikes me many of the people who post to this board are really, really … gah. frustrating.
KugelKurt says:
> There’s no context for calling an obvious beta a
> Release Candidate besides marketing.
you’re kidding me, right? because from a marketing POV it was the worst choice. see, marketing is about making the public happy and keeping them informed.
smitty says:
> But there are lots of better ways of doing
> that then misleading all the users
would’ve been nice, yes. the reality / theory quotient tends to screw with things though.
> they’re all assuming KDE4.0 is going to be
> extremely buggy
a negative assumption that lasts a couple of weeks is better than fulfilling that assumption in a couple of months.
> although it gives me pause that he
> won’t just say it now – it makes me wonder if he’s
> still trying to figure out what he’s going to say.
i didn’t want to go through the whole thing right at that moment right on this board. it had everything to do with my time availability combined with the level of annoyance the comments on osnews tend to inspire in me (regardless of whether it’s a kde topic or not.. it’s like slashdot all over again =)
thanks for assuming the worst, but if you ever meet me in person you’ll quickly discover that the one thing i never suffer from is a lack of verbiage, even when forced to think on my feet. (that’s the interesting way of saying it’s hard to shut me up sometimes)
What does it take for you to finally admit: “You know what? I f–ked up big time, but now I want to clean up my mess”?
Looks like your ego won’t allow you to admit any failure. Scary, coming from the president of the KDE foundation.
Edited 2007-12-03 00:42
the testing card seems to get played a lot when there is opposition voiced towards changes related to usability.
in many ways it’s a bit like reductio ad hitlerum for gui debates…
> testing cards seems to get played a lot
yes, it’s absurd that actual data is used instead of just personal musings. maybe that’s what’s wrong with all the various fields in science. repeatable experimentation is such a silly idea.
</sarcasm>
seriously though, there are aesthetic issues to take into consideration, there are general usability principles to keep in mind … and those are augmented by testing.
having people offering personal opinions on a whim really doesn’t help much. it just leads to a cacophony of equally valid statements with no way to measure what makes for a good solution.
it may be more “fun” to just sit in an armchair and expound randomly, but .. yeah .. the software stands a better chance of improving this way.
while this might be true, things that shouldn’t be in a menu (favorite apps/places, e.g.) are put in a menu… perhaps it’s done in the most optimal way possible, but did any of the menu designers consider that these things don’t belong there in the first place?
Why not?
Where’s the part of the 10 commandments that says “thou shalt not put bookmarks in the Kmenu; they’re fine in a bookmark menu though”?
I think allowing people to put places in their menu like regular entries is a no-brainer, and it’s just a minor leap from there to having categories for all apps, favs and places.
I really don’t see how having those buttons at the bottom of kickoff will confuse anyone or how it is a major hurdle to usability, because if you don’t use any of the buttons apart from the “applications” button, they won’t slow you down in any way.
What I don’t like is the “change levels in situ” part. My resolution is 1280×1024 and the (thoroughly oversized, too many apps… =) Windows start menu uses almost all of it to display all those tools. Kickoff otoh uses just that single column. Oh, I can enlarge it until it at least doesn’t waste all that vertical space, but making it wider is just a waste of pixels because it doesn’t give you additional information.
With displays getting larger and larger, and more and more of them widescreen, that’s a complete waste of space.
I fear Razor’s gonna be even worse. All the stuff I’ve seen about it talks about how it keeps down the distances your mouse has to travel. WTF? Taking the mouse cursor from one corner of the screen to the other takes a fraction of a second; actually finding the menu entry you’re looking for is generally much more time-consuming, and having to look at half a dozen menus with 5 entries is bound to be a lot slower than one larger menu (well organized, with categories the same way Dolphin can group by first letter) with 50 entries.
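For reference, the mouse-travel part of this argument can be made precise with Fitts’s law, the standard HCI model for pointing time; in its common (Shannon) formulation, moving a distance $D$ to a target of width $W$ takes roughly

```latex
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where $a$ and $b$ are empirically fitted constants. Because the distance term is inside a logarithm, doubling the travel distance adds only a constant increment to $T$, which is why pointer travel tends to be dwarfed by the visual search time through long menus.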
That said, I’ll probably keep using Alt+F2, so what do I care?
EDIT: Let me add that I don’t think the article was particularly condescending. It didn’t belittle those of a different opinion.
It explained that they had to replace kicker and had to make a choice: keep the old style menu that was kind of a mess, come up with something new while (as the delay proves) they were already short on time, or use a design Novell spent a lot of money on, which unlike all possible competitors got actual usability testing and did well in it. Which one would you choose in that case if you try to look at it objectively?
It also mentions that plasma will make it easy to do a relatively primitive menu like the old one (primitive in the sense that it’s static with no bling) or other alternatives and in fact there are already a number of those in development.
KDE’s motto’s always been “he who codes decides” and SuSE had the code =)
Edited 2007-12-01 22:37
The part about wasting space by having just a single column was spot on.
But you should have said the 10 Kommandments
the kickoff menu discussion is ended with: “there’s been thorough user testing”
Guess I can skip reading this one then. That horse is so dead. [joke]Besides, I already know why they made each and every change: to annoy me personally[/joke].
that’s the beauty of open source. even if the features never get officially put back in, anyone can add them on their own.
But how many actually do?
I used to. I had a patchset for Konqueror I maintained for personal use from about 3.3 on that corrected several behaviors I wasn’t enamored with. People actually do that sort of thing. I also did it with Litestep back in the day. I don’t think I’ll be continuing though. Better things to do with my time. The whole job and responsibilities thing.
If anyone wonders, at least one of the behaviors addressed by my Konqi patches was brought up in a bug report, and ignored like I would have expected. One other I remember was a deliberate design decision. Perpetually maintaining patches as the underlying codebase changes gets old. Probably almost everyone is just going to deal with how things are.
So yeah, some people actually do that stuff but if my experience means anything (and it does for me) it doesn’t last long. In my case I’m starting to look elsewhere.
*meandering slightly off topic. Not entirely, but one can ignore the following*
It’s not just the little things I don’t feel like patching anymore. It’s that other people also have that “jobs and responsibilities” thing I mentioned earlier. I’m sick of wondering whether software I use frequently is going to get ported over, or further maintained at all. My latest example is Klibido (binary newsreader): basically unmaintained for a year. Relying on volunteer labor keeps leaving me up a creek. In contrast, I’m fairly certain Unison (on OSX) isn’t going anywhere.
I love the principles of free software and I used Linux exclusively for about 4 years. I’m not dropping it now but I just don’t feel so strongly about expending time and effort and passion getting it working like I’d prefer, or even just working, and KDE4 hasn’t helped. That’s my thing though not theirs. Hah I’m giving the “it’s not you it’s me” speech. It feels like a breakup in a way.
Ironically I’m typing this from Konqueror.
Edited 2007-12-02 01:42
There are two different concepts here. You claim to be supporting proprietary software but what your arguments actually support is commercial software.
For example, MySQL is free software maintained by a company; as long as that company is doing well, it’s not going anywhere. I suppose you mean the same with Unison: as long as it’s profitable, you’ll be able to use it. Let’s not ignore the possibility of these companies going out of business.
Non-commercial software can stop being maintained when the developer loses interest. That applies to both free software and proprietary as well, and it can happen as non-commercial is often done as a hobby (there are other categories though, like academic, etc).
In both cases, the difference is that free software can be picked up by anyone interested, which is not the case for proprietary software. If Unison’s developer goes out of business, you’re screwed and will have to stop using it eventually. If the maintainer of Klibido loses interest, you’re only screwed if nobody else steps up to the plate, and it could be anybody really. It’s not uncommon to see free software changing maintainers.
> my Konqi patches was brought up in a bug report,
do you have a br#?
> and ignored like I would have expected
the implication that bugs as a generality get ignored is overly broad and not ime correct. there are two types of bugs that get ignored: those filed against products which are over-reported on already given the # of people working on it (khtml comes to mind) and those that are simply filed poorly.
> I’m fairly certain Unison (on OSX) isn’t going
> anywhere.
if we ignore all the successfully maintained mainstream free software and all the abandonware in the proprietary world, i could perhaps agree. i just don’t like ignoring reality like that, though.
Let me give you an anecdote that really hits me where I live:
I do software engineering for environmental modeling. A typical year will see me writing upward of 100K lines of (mostly open source) environmental modeling code. That puts me on a variety of systems from IBM, HP, Sun, SGI, and others, many of which run some UNIX variant and not Linux. My desktop *is* Linux. I want a common programmer’s editor across all of these platforms, and don’t really want to have to deal with Emacs. That pretty much means “NEdit” is my only choice.
KDE klipper’s non-standard behavior pretty much breaks NEdit, whether I’m running it locally or running it remotely and displaying back to my desktop. You can search the NEdit developer archives and see just how many times we have complained about this behavior, even quoting chapter and verse from the X11 standards to the klipper maintainers, and either been ignored or told, “We don’t care; we’ll do it our way anyway.”
And for that matter, kdbg’s lack of support for Fortran is an obstacle as well (and RedHat’s announcement that they’re EOL’ing ddd because of OpenMotif, without providing any viable alternative, does not endear me to them, either!)
This — condescending! — attitude does not endear me to some of the KDE developers. Particularly since I’d otherwise greatly prefer KDE to Gnome or other alternatives.
FWIW.
Why are you running klipper if you don’t like it? It’s completely optional. For most people it works just fine, and goes a long way to fixing the broken linux copy/paste support (like being able to paste content from an app that is no longer running), but if you don’t want it, just right click on the icon and choose quit. Not so hard. I haven’t heard of any other apps being broken by klipper, so I suspect NEdit isn’t completely blameless here.
What the heck. So you’re missing a feature, and the devs consider it low-priority. How is that condescending? There’s just limited manpower.. not every feature can get done.
Well, for me, Klipper is great – the X11 copy behaviour sucks, and klipper goes a long way to fix it. The fact that it doesn’t always work perfectly I can live with – until the X11 protocol gets fixed.
If you can tell me of another tool which does what Klipper does (have a clipboard history, and keep stuff in memory even after the app is closed) please do so, if not, fix X11 or Nedit, not Klipper.
“Fix X11” — can you say arrogant and condescending ??
There’s nothing arrogant about the observation that X11 clipboard handling is not very good. The major problem that hits me a lot (that klipper solves btw) is that you can’t copy something from one app, then close that app, and paste that content somewhere else. That content is lost when the source app closes.
Yeah, that’s one major headache. I’ve seen some discussions between developers on X11 and Klipper, and even though I myself don’t know much about either’s internals, I’m pretty sure Klipper mostly consists of stuff X11 should’ve done properly in the first place…
So it’s lovely if whatever app works fine with X11 – adhering to a stupid standard counts as stupid itself, I’d say. Of course, I’m sure there are arguments in any direction, and I’m by far not knowledgeable enough to really participate in any discussion concerning these issues – so I won’t. But I think I do have a good point saying at least SOME of the blame is to be put on X11, and the Klipper developers did one great job…
Sorry but it’s one of the major apps I can’t live without, and believe me, after you got used to it, many tasks aren’t the same on ANY OS or DE without it – they take far longer and are thus much more boring
“Fix X11” — can you say arrogant and condescending ??
Well, I personally can’t do without Klipper. I copy and paste stuff all the time, and I like a history of stuff I’ve copied. The X11 clipboard, as is, drives you absolutely up the wall. Not being able to bloody well keep text that you’ve copied when you’ve closed an app is one major pain, and has been the source of more than one expletive.
From a quick google, it seems that nedit tries to set the clipboard to empty. That is what causes the problems. That is not standard behaviour for clipboard use. (At least, I’ve not seen any other app do that).
It sounds like the nedit developers should fix that problem, or you should simply configure klipper and untick “Prevent empty clipboard”.
Problem solved.
previously KDE has always made proper decisions, and i don’t see that changing.
and as previously said lots of times, KDE4 isn’t just 4.0. sure, some features are missing, but one thing is for sure: kde4 as a whole is a hell of a lot better than 3.x, allowing for everything, and more, of the old features to be implemented; for example the panel, which is a shitload easier with plasma.
hey thom.. congrats, yeah, i’ll pick up that bottle of.. whatever it was. you’ll need to email me what it was again. i forget exactly which it was =)
I find arguments over usability testing interesting. There is NO scientific way to test usability. The best you can get is a survey. Expose a set of individuals to several interfaces and see which they think is most usable. This testing is empirical and repeatable, but it does not answer the question of which interface is most usable. It answers the question of which interface the test subjects thought was most usable.
In short, it is an opinion poll about which interface the test group liked.
The answer to the question “which interface is most usable” is always “It depends”.
That’s why the way forward is to increase the number of choices users have in customizing the interface instead of restricting choice. If you like a certain UI element or feature it improves usability for you. When it comes to selecting a user interface to use, yours is the only opinion that matters.
Sorry, but most science is based on empirical and repeatable experiments. You are right in that such experiments can’t tell us what the most usable interface would look like, but they can certainly identify differences between certain approaches.
As for saying that the test results only apply to the actual test subjects, there have been studies on this. Read any book by Jakob Nielsen or some other author dealing with usability engineering and they will tell you that you can get very good and repeatable results with fewer than 20 test subjects.
There are also a lot more sophisticated ways to measure usability with test subjects than just doing a survey. You can use eye movement tracking to see what part of the screen first catches their attention; you can use EEG to see what parts of their brains are active in different stages of interaction with the system.
All of these methods may be a bit too expensive for most open source developers, but there are also cheaper ways. One example is the “thinking aloud protocol” method, where you ask the test subject to describe how he thinks while he is using the interface. When he reaches something difficult, he will have a hard time both talking and concentrating on what he is doing; as a result he stops talking, and you have identified a problem.
You can also apply usability technology to help you before you build your system. E.g. you can use “Card sorting” to see how people relate different concepts.
However, you are right in that the answer to the question of which system is the most usable usually is “It depends”. More specifically, it depends on the intended user and the intended use. What’s usable for Linus Torvalds doing programming is not necessarily so for the average accountant performing his work tasks.
To some extent you can remedy this by configurability, but then you have the problem of choosing sane defaults. Generally such defaults should be aimed at the least skilled users that you expect to use the system, as they are the least likely to be able to configure things.
Usability is even more complex than the usability goons like thom would have you believe. Not only do users not know what they want nor how to express it (thus invalidating what they say), the usability goons who do their studies are also a point of failure in their design and interpretation of experiments. If this is obvious to you, stop reading.
Part of the problem is that usability goons are usually computer scientists rather than biologists, so they try to fit everything into a math equation which doesn’t completely represent physical reality. The other part of the problem is that usability is a young science that is sometimes practiced poorly by young and inexperienced people, thus leading to the ridicule you see above.
The Novell study in question (the one that redesigned the Microsoft Start menu) doesn’t guarantee a universally optimal result. That is another problem that makes a usability person a goon. They do a study and think they’ve hit upon the word of God. Well, get 999 other teams of people from around the world, from different backgrounds, to design what they think is the best PC user interface, and you’re going to get some different words from God. And hey, some of them might not even copy the Microsoft Start menu!
For your information: I’ve modded you down for generally insulting usability people by calling them “goons”.
For your information: I’ve modded you up because even though OSAlert comment scores are meaningless to me, I know YOU care, so maybe I’ve made your day.
Actually, I doubt that most usability people have a computer science background; most usability people I have come across in my work have an informatics or psychology background.
As I see it, the biggest problem, at least in the FOSS world, is that coders have a very hard time accepting that people who can’t write a single line of C++ can actually give valuable input to their projects.
As for the results from the Novell studies on the start menu, they were quite predictable, and I actually expect that you would get similar results if you tested on randomly selected accountants, lawyers, nurses, doctors, and other people who work outside the field of computer science, system development, or system administration.
First of all, most of them will be used to that kind of interface from Microsoft. Not having to learn something new and being able to reuse old knowledge is part (but not all) of usability. Second, we can see how people’s behaviour on the web has changed as its complexity has increased over the years.
In the beginning of the web age people used tree structures (e.g. that of Yahoo) to manually navigate to the right page. Today the web is more complex and we see another behaviour: people google for the info they need and then they bookmark what they find useful.
As the operating environment of the desktop gets more complex (e.g. having more programs), it gets harder and harder to make a cognitive map of the system. In other words, just like in the web case, the tree structure gets too big to grasp, and a search-and-bookmark strategy becomes much more natural.
Most usability people will also tell you that if you have a menu with more than six or so menu items, people start to read the menus instead of relying on motor memory to select the right choice. This reading process is much more time consuming. This is one of the reasons why the old K-Menu was far from optimal, especially as most users will never use all the programs installed in it.
There are of course other factors that may lead to other conclusions. E.g. if you run your tests on a slow machine where the Novell menu appears too slow, it may feel very stressful to the user. But in general I would think that the Novell studies will be repeatable.
This is not the same thing as saying everything should be done the Microsoft way, but for better or for worse Microsoft has done a lot to define users’ expectations of a computer desktop.
> usability goons
poor form. really.
> users not know what they want
> nor how to express it (thus invalidating
> what they say)
apparently you don’t understand how this works then. users are put through tasks, observed, recorded and then the results are interpreted, based partly on the user’s feedback but also on interpreting that feedback and the context of their actual actions. it’s not a direct user feedback channel. we already have those: bug trackers, mailing lists and irc; there’s a reason we are augmenting that with actual testing.
> also a point of failure in their design and
> interpretation of experiments
of course, as is any experimenter in relation to the experiment. however, this does not mean that every usability tester is the cause of a failure, any more than the fact that a physicist conducting a physics experiment is a possible point of failure means all physics experiments are no good.
this is, of course, why we have peer review. which implies actually reviewing the experiment. not hand waving generalities where you insult others and elevate yourself.
> a guaranteed universal optimality
obviously. that’s why increasing the data set often helps. however, standing on the side and going “woah! the collective results of an experiment that was repeated across a representative cross section of users leads to a given conclusion, but that may not be the only conclusion!” without providing your own test data is not helpful. the way the scientific method works is by presenting new data, not saying that there might be more data therefore the current data sets should be ignored.
> to design what they think
yes, i’m sure we’d get variety. that’d be awesome, and something i’ve allowed for in plasma. however, if you think usability is about “designing what person X thinks”, then you’ve missed the point.
I like everything you’ve said except the part where I don’t know how this works
I’ve found usability to be a poorly practiced science. That was the impetus and message of the rant. The rest is just hot air.
deleted
Hey, I called it: I said in a comment here I wouldn’t be surprised if it slipped to January. I have to get something right every once in a while. (And I’m fine with that; 3.5.8 is quite fine.)
I just want to wish the KDE developers the best of luck in getting a ‘good’ KDE 4 out the door in the new year. A certain amount of developer fatigue is apparent on the lists, and blogs, and even here… I just want to say thanks for all of the hard work, and I hope you get a good release out the door in relatively short order without killing yourselves or each other.
Cheers!
Arts “… would have required a major rewrite in almost every regard with many new features to add to get into a future-ready shape. And there was no one really willing to do that.”
Kcontrol: “there is no maintainer for one of two solutions, leaves us with the other solution.”
Kicker: “…the code was hardly readable, and new introduced features often introduced bugs and problems at other places.”
Kmenu: “…one part of the answer is the code base of the old solution: it was, according to the developers, ugly.”
It seems that a lot of code is being replaced to satisfy developers’ whims. I guess that’s to be expected for volunteer-written software, since the opportunity to implement your own design attracts more developers than cleaning up someone else’s.
It seems that a lot of code is being replaced to satisfy developers’ whims. I guess that’s to be expected for volunteer-written software
That’s hardly something particular to open source software. Look at how Microsoft completely rewrote the networking stack for Vista. They did that because they figured that in the long run it would be simpler and cheaper to maintain a new codebase than to fix all the little problems the old code had. The same is true for a lot of this KDE code as well.
This is the KDE project’s first major overhaul since v2.0 in Oct 2000. From my understanding, much of the code in the 3.x series was just ported over (with some modest improvements) from the 2.x series, so a lot of the code probably did have to be replaced or rewritten this time around.
The KDE4 framework should allow the implementation of alternative menus and whatnot. Also, when v5 of the Qt toolkit is released, they should be able to just port over most of the existing framework, as KDE4 is very forward looking. This kind of stuff is difficult to implement and has required an almost complete reworking of the code base, from my understanding.
KDE 4.0 is probably going to be the equivalent of Mac OS X 10.0 (probably not quite that extreme), and I think many of us remember how painful that was. But now Apple is reaping the benefits of their major overhaul, as will KDE. KDE4, along with the now rapidly advancing X.Org server, should allow the FOSS desktop to finally match and begin to surpass its proprietary competitors.
> It seems that a lot of code is being replaced to
> satisfy developers’ whims
nope.
arts: dead end design, as stated by the developer of arts itself. it also left us with non-portable code. there was very good reason for the move to phonon, and nothing to do with whims. in fact, at first the main phonon developer did the work solely because it needed to be done.
kcontrol: we decided 3 years ago to not bring kcontrol along as the default. replacing it was not a whim. there were 4 other options developed from among which one was selected (and tested). that kcontrol itself does not have a maintainer now is nothing to do with whims, but rather the inability for whingers on the sidelines to get into the game. too busy whinging.
kicker: guy, i maintained kicker for the last few years. i’m also the one who started plasma. that was not done on a whim as much as it was on a deep understanding of the code base and the implications of it.
kmenu: the quote you provide is inaccurate.
now, out of literally thousands of improvements you’ve cherry picked four, one of which was an inaccurate assessment. that leaves thousands minus 3 as non-whims, though as you can see above even that’s a dubious number.
thanks for your faith and support, however, second only to the time you’ve taken to educate yourself on the matter.
Hello,
out of literally thousands of improvements you’ve cherry picked four
I did not cherry pick. My four examples are four of the six from the referenced article.
thanks for your faith and support, however, second only to the time you’ve taken to educate yourself on the manner
I am always glad to educate myself. That’s why I read the article, and also your helpful reply.
Anyway, I didn’t mean to sound critical. I like the way KDE is developing. Thank you for that as well.
Let me clarify: I did not mean that the results of a survey could not be applied to a larger population; this is done all the time in opinion polling. Instead, what I was driving at was that the result of user testing/surveying is not the answer to the question of which UI is better, but of which UI users prefer. This is an OK outcome because there is no single correct answer to which UI is better/more useful.
Maximum usefulness from a given set of software is attained by maximizing perceived usefulness for each individual using the software. This requires maximizing choice in UI configuration. UI design should not concentrate on uncluttering the desktop/menu/dialog/whatever, simplifying choices, or anything of the sort. Instead the focus should be on making sure as many options as possible are available.
Fortunately, maximizing user choice fits well with open source software.
I like the KDE desktop and the ability to customize it to my liking, however it seems like it almost looks like a clone of the Vista desktop model. Other than that I have no problems with it, I just would like to see them totally separate themselves from the direction MS is going with Windows. I can understand the need to listen to the community and follow their direction. The end user is the one who will make the decision to use this window manager in whatever Linux distro they are using.
KDE has some excellent apps, and the window manager has some advanced features that closed source vendors would never be able to catch up with. The key to success is going with the community at large and ignoring the direction of the mega corps in the industry. Gnome has gone its own direction and it has worked well for them; I think the KDE group needs to do the same. It is troublesome for me when a lot of applications or other items start trying to mimic Windows Vista. To me the whole concept was to have functionality, and the open source model allows the end user to choose what they want and change it.
The point being, Open Source is very powerful, and it is time they chose their destiny with the community at large; they will be very successful in return. Innovation is astounding to say the least, and the last thing I want to see is a battleground between KDE/Gnome where they try to destroy one another. Each desktop needs to plot its own course with the backing of its end users. Options make this alternative like a fortress, and the community involvement as a whole is stronger than mega corps trying to cram down some Vista cookie cutter fiasco.
The marketplace has plenty of room for growth and alternatives in design, theory and expansion; it is still in its infancy. KDE needs to find that common ground and run with it. The sky is the limit and the user base continues to grow in size. Presently, closed source vendors may call the shots among the corporations, but like all other empires it will come to an end, and a new king will emerge (Open Source) that will be able to quickly adapt and allow the end user to have the best in functionality and a platform that grants options.
> almost looks like a clone of the Vista desktop
> model.
which aspects? (and if you end up listing things we’ve had in kde since kde 2.x or 3.x that microsoft just recently added in vista, and so you are only now aware of them .. i’ll be so highly disappointed)
> start trying to mimic Windows Vista.
well, we haven’t. so .. rest easy? =)
I like the KDE desktop and the ability to customize it to my liking, however it seems like it almost looks like a clone of the Vista desktop model. Other than that I have no problems with it, I just would like to see them totally separate themselves from the direction MS is going with Windows.
OMG! It has windows! And dark stuff! And some clear stuff! IT’S JUST LIKE VISTA!!!
Sorry, but this is what goes through my head any time someone says “It’s just like Vista”. For cryin’ out loud, Vista doesn’t even (directly) support multiple desktops. The “widgets” (which OSX did first, KDE did second, and Vista did… later) are constrained to a section of the screen (though they can be moved).
You want comparisons? Go look up how difficult plasma is to theme, and how difficult Aero is to theme, and come back and explain how they’re copying Microsoft.
For that matter, Microsoft is trying to move into a more secure direction with Vista. Are you suggesting KDE should be less secure? No, you couldn’t be that dense, since KDE is the desktop UI, rather than the underlying OS, but still.
As inconceivable as it might sound, not all of Microsoft’s ideas are bad. The “breadcrumbs” feature is implemented better by MS than by most (though the fact that it obfuscates the path is an annoyance), so it doesn’t bother me to see KDE adopting it.
Seeing KDE adopting the “simple is as simple does” philosophy from Gnome: that has me more worried than any supposed cloning of Vista. I hate being hamstrung by the UI, and while the devs are making promises, at the moment there’s a lot of missing functionality.
For your homework, provide actual ideas as to how the interface could be improved, still be fast, responsive and usable, and yet look nothing like Windows or OSX. Otherwise, you’re at best ranting, at worst, parroting other people’s zealotry.
Assuming you take the time to think about this, here’s a test case: Given a directory with 600 objects, select the 25 oldest files, and move them to another directory two levels up, using only the mouse (using the keyboard to name the new folder is allowed).
few comments.
for those who don’t like the new menu: see ^above^; remake the old menu if you are of a mind to
is the default KDE4 theme going to be black like the RCs’? i certainly hope so, and hope KDE4 apps follow the same.
please please please can we have KDE 4.1 in time for opensuse 11.0 around July 2008.