In his lengthy and interesting blog post covering the future of Plasma, KDE’s Aaron Seigo proposes Qt Quick and QML (a declarative language that embeds JavaScript) as a replacement for the Graphics View architecture currently used by Plasma. This holds the promise of massive speedups and cheap effects, as all paint operations become candidates for OpenGL acceleration, in contrast to the aging Graphics View architecture, which is still stuck with various inefficiencies caused by the underlying QPainter approach. The expressiveness and easy programmability of QML is a nice bonus, of course.
This blog post at Qt Labs describes the plans surrounding QML and (eventual) benefits from aggressive GPU acceleration.
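For readers who haven’t seen it, QML’s basic shape is easy to sketch. Below is a minimal, self-contained example in Qt 4.7-era syntax; the elements used are standard Qt Quick, but this is an illustrative sketch, not code taken from Plasma:

```qml
// Qt 4.7-era import; later releases use "import QtQuick 1.0"
import Qt 4.7

Rectangle {
    id: page
    property int clicks: 0
    width: 200; height: 80
    // a JavaScript expression used as a declarative binding
    color: mouseArea.pressed ? "darkgray" : "lightgray"

    Text {
        anchors.centerIn: parent
        // re-evaluated automatically whenever page.clicks changes
        text: "Clicked " + page.clicks + " times"
    }

    MouseArea {
        id: mouseArea
        anchors.fill: parent
        // imperative JavaScript in a signal handler
        onClicked: page.clicks += 1
    }
}
```

Because the scene is described declaratively rather than painted imperatively with QPainter, the runtime is free to hand the whole thing to an OpenGL-backed renderer.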
KDE has always been on the cutting edge of Qt, and they already have a significant QML buy-in (Plasma Mobile is implemented in QML). Continuing this process with desktop Plasma overhaul, while not yet a committed goal, would seem like a natural progression.
Additional notes from Thom
The process of moving Plasma from the Graphics View architecture over to Qt Quick and QML won’t be completed in a few snaps of the fingers. The plan is to make the transition over a number of KDE releases, during which more and more parts of Plasma are moved over to the new framework.
“Every Plasmoid, Containment, popup and window dressing (e.g. the add widgets interface, the panel controller, the activity manager, …) that does direct painting needs to be moved over to QML,” Seigo explains, “Thankfully we can do these one at a time with the results working very nicely with the current QGraphicsView based libplasma. This means we don’t need to do a ‘massive porting effort with a huge pause between releases’; each release can contain more and more QML driven elements and fewer and fewer uses of QPainter.”
Some breakage may occur in the process of moving from libplasma to the binary and source incompatible libplasma2, but Seigo explains the impact will be very small. “The good news is that this is an almost completely internal set of changes. The design of Plasma lends itself very nicely to this set of changes, and the C++ classes in libplasma are nicely aligned as well,” he notes, “So the external impact looks like it will be surprisingly small for anything written in Javascript or which uses QML for its user interface.”
One concern raised in the comments is that these changes mandate the use of OpenGL. Seigo hopes they’ll end up with a non-GPU render path as well. “This is all still work-in-progress and the plain CPU-rendered path is valuable to us in many use cases,” he states, “I’m hoping that we’ll end up with a proper OpenGL path (scene graph; OpenGL backend for QPainter just isn’t a full solution) as well as a QGraphicsScene fallback.”
Seigo does note, though, that eventually everything will move to OpenGL, and he’s right. That’s called progress, and cannot (and should not) be stopped. There’s enough choice in the Linux world when it comes to less demanding (hardware-wise) desktop environments.
So, after four years of almost getting there, we are now going to see another two years of overhaul, more testing, more crashes. Just to use QML. And after those two years, who knows if another change will be necessary.
Will they ever settle on a stable API? A stable desktop? Not in the next two years.
By the sound of things the drawing models will live side-by-side for the foreseeable future, so your worry is probably misplaced.
Better baseline performance often makes it easier to write more robust code too, since needing fewer hacks to make things perform usually makes things easier to handle.
Those same sentences were said when QGraphicsWidget was released; now it turns out QML is the one. And after that, who knows. I’m glad you are so optimistic; I’m not.
first off, we’re already moving to QML. some parts started on QML (e.g. the Plasma Mobile shell and some of the mobile specific components). QML renders right now on top of QGraphicsScene, so it hasn’t been a shift for us in that respect. the real shift is moving away from QGraphicsScene to QSceneGraph, which in turn will require (in all practicality) all Plasma components to use QML (right now we can pick and choose which do). but that’s a decision that is still 1-2 years out, and in the meantime we continue to work on what we have done right now.
it’s also useful to consider QGraphicsScene’s path thus far: it absolutely was a large improvement over what we had prior to it, and due to its design (and how we used it in Plasma) moving to QSceneGraph as well as QML is relatively straightforward. it is nothing like what we had to go through to get a scene based component system (aka Plasma) together in the first place.
i don’t know if QML will be the “final” answer in the long term (5-10 years), but it is a move in the right direction and provides us a way to get more out of what we are already doing.
as a comparison, QGraphicsScene debuted 4 years ago and was in development for probably a full year before that. it’ll still be what we’re using, with improvements being made to it, for at least the next 1-2 years. so if Plasma does move to QSceneGraph, then we will have had 5+ years of service by QGraphicsScene and the benefits it brought us.
i’d expect at least that (if not more, as it embodies lessons learned from QGraphicsScene) from QSceneGraph.
given that the migration path to QSceneGraph is fairly evident at this point and can be done pretty smoothly for us, i think it’s a reasonable situation.
and at the end of the day, the vast majority of API and code in plasma itself will remain untouched due to clean separations between visualization and business logic.
The fact that all Plasma widgets will have to be migrated to QML, eventually or not, is a lot of change, a lot of work, a lot of testing, a new start over. It’s like admitting that things were wrong in the beginning and now have to be corrected. The impact is not only on the visual side, but also on the Python and Ruby bindings, and if you say it’s doable, it may be, but not without a lot of work and testing first. So it feels frustrating that it may now become mandatory to write Plasma widgets in QML, throwing away so much invested time.
I think it sucks when you don’t control the toolkit and the one who controls it makes these low moves that makes you change lots of stuff.
Time will tell. In a couple of years we’ll see how it goes and how much was won or lost in that period.
yes, it’s a fair amount of work. which is why i’m giving us up to 2 years to get there. that gives us enough room to work on other things as well and not rush.
it isn’t a new start at all. a lot of code is re-usable, and the code that uses QPainter should get a lot simpler with QML.
the design of libplasma when it comes to things like DataEngine, Service, Applet, Corona, Containment, Svg, FrameSvg, etc. all stand without change in this. it’s a change in the presentation layer, and yes, it is admitting that what we have now is not as good as it could be. that also doesn’t mean what we have now is crap, just that we can make that part of it better.
there are significant implications for the python and ruby bindings, yes. finding a way to make QML work nicely with them is not a solved matter at this point, but it also isn’t a problem that anyone has looked into extensively either. that’s something that is only starting to be done.
i do think it should be possible, but then again .. there is a reason i’ve been encouraging people to move to Javascript for Plasmoid development.
those costs are why we are looking into this early (QSceneGraph isn’t in Qt yet and its exact future is still under research; QML is brand new and libplasma will only have deep integrated support for it in KDE Platform 4.6). it’s not a light set of decisions to take.
the good news is that we have a couple of years to do this in. personally, if i was working on a plasmoid, i’d make the move to QML when i wanted to do some work on the user interface presentation, meaning it would be already some of the work i’d be doing.
we aren’t forced to move to QML or to QSceneGraph. they are new options, and they are compelling enough that we are electing to use QML more and more and investigate QSceneGraph as a serious future replacement. nobody is forcing our hand. we’re also involved with the toolkit below us.
similarly, people writing Plasmoids today can continue to not use QML in the near term. so these changes aren’t forcing any changes in the immediate.
it’s very important to me that we retain as much effort as possible put into plasmoids that exist (there are hundreds and hundreds of them out there) and make any migrations as easy as possible.
one part in achieving that is sharing our decision making processes early and openly. plasmoid developers should feel very welcome in joining in on that conversation.
indeed
I think you are forced to move to QML unless you want to maintain all the legacy code yourself. Which I don’t think will be the case.
But what kind of Nokia employee would you be if you had no faith in the toolkit your employer maintains?
I just can’t avoid comparing KDE4 with Duke Nukem Forever.
Peace, and I wish you a flawless migration.
I agree with many of your concerns, however the Duke Nukem comparison is out of line. There is a huge difference between vaporware that never gets released, and software that is continuously improved and released.
KDE 4 today works great … when it works. When it doesn’t, it doesn’t work too well. I wouldn’t describe it as unstable, maybe just less stable than other desktop environments. Part of that is just the maturity of the code base. It also concerns me that the existing core is being changed, but it sounds like a lot of the framework will remain intact. Maybe it won’t be that much of a change, but it still has the possibility of shifting focus away from the small tweaks and bug fixes to the deeper changes.
Duke Nukem forever? Wishful thinking.
We’ve been through several releases of KDE4 and it’s no different to when Microsoft moves people from Winforms to WPF or Apple with their terrible framework churn – and a damn sight less painful.
But, whatever.
You don’t know what you are talking about.
I understand completely you Gnome fanboys aren’t used to changes.
Now let me write plasmoids in PostScript, and your journey to the dark side will be complete.
NeWS for eh VER.
Why not just use PDFs instead? :p
Wait, better idea, have the UI generated with Flash.
Well, considering you can write a plasmoid in JS, and have it rendered using a lot of SVG, the main difference between that and Flash is the implementation of the ECMAScript-plus-vector-graphics combination, and that all the current Flash implementations are shite.
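As a sketch of that JS-plus-SVG point: plain QML already combines vector artwork with embedded JavaScript. The file names below are hypothetical, and this is generic Qt 4.7 QML (assuming Qt’s SVG image-format plugin is available), not the actual Plasma scripting API:

```qml
import Qt 4.7

Image {
    id: face
    source: "clockface.svg"   // hypothetical local SVG asset
    width: 128; height: 128

    Image {
        id: hand
        source: "hand.svg"    // hypothetical local SVG asset
        anchors.centerIn: parent
        // embedded JavaScript computes the angle; evaluated once at
        // creation here -- a Timer element would be needed for live updates
        rotation: {
            var now = new Date()
            return now.getMinutes() * 6
        }
    }
}
```

Swap the image loader for Plasma’s themed SVG support and the script engine for the one in a Flash player, and the comparison above is not far off.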
PS isn’t a bad language, really; it’s just that most people have only seen non-optimised generated code.
That’s all that most people’s use-case for PS need! Who cares if it takes twice as long to interpret? That’s still milliseconds!
I think PS is a better language than JS, but I’m no mystical programming wizard. Just an opinionated weirdo with a ton of free time and interest.
OK TheGZeus, where have you been? PostScript is a dying breed. The printing stack on Linux and others is moving over to PDF. Now, a PDF plasmoid could be scary.
PDF is a better page-description method, indeed.
I’m talking about _real programming_.
What PS needs is libraries, and it doesn’t need a graphics library, just a toolkit. Still a big project, but not unreasonable.
Need better implementations, though.
PS, again, I’m new-ish to programming, though I cut my teeth on Apple Logo on a //c in like ’92 (yeah, my dad didn’t understand how to deal with any changes to the family 486sx, so I couldn’t install anything. I was relegated to that little green screen for any real fun).
Where have I been? Learning, and trying to learn from the mistakes of the past and compromising only when I must.
I get really intense when it comes to tech. To the point that I consider the *nix family of OSen basically “done”. They do what they must do, they provide what they must to the programmers. The number of limitations isn’t high compared to other systems, but the flexibility I want isn’t there.
There’s a big jump in difficulty from the garbage-collected ‘scripting’ languages that work on it (and elsewhere) to C(et al) which you need to alter the fundamentals of the OS, and you can’t do that at run-time without terrifying hacks.
My workstations will run *nix. I’m not likely to hack much on them because I want my hands dirtier than that.
QML is relatively new on the scene. It wasn’t available in Qt until Qt 4.6 (released in 2009) – and even then it was an add-on that had to be installed separately, and now it’s a native part of Qt as of 4.7 – released right at the end of last month (09/2010).
So to say that it’s like admitting that the things were wrong in the beginning and now have to be corrected would be false since it wasn’t an option when they started.
In other words, they’re trying to evolve the platform as Qt evolves the platform, taking advantage of new features as they become available instead of years afterwards.
That said, I do wonder how the performance of QML will come into play. Even the Qt guys don’t recommend it for all things – only things that do not need quick response (e.g. high FPS video players would not be good to do in QML).
When QML is used with QSceneGraph, the performance will be really good, as it will take advantage of the computer’s hardware 3D acceleration. So by separating the code that manipulates data, written in C++, from the visualization part in QML, you will make your app run faster.
Do you have citation for that?
Perhaps you are confusing using QML with doing heavy calculations with JavaScript inside QML. QML is being pushed specifically for high-FPS 2D user interfaces – this use case is pretty different from 3d games. You should do heavy calculations in C++, and let QML take care of the UI parts of the program.
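On the QML side, that division of labour looks something like the sketch below. The `cpuMonitor` object is an assumption for illustration: it stands in for any C++ object exposed to QML (for example via QDeclarativeContext::setContextProperty() in Qt 4.7) that does the heavy lifting and merely notifies the UI of results:

```qml
import Qt 4.7

// Sketch only: "cpuMonitor" is a hypothetical C++-side object with a
// notifying "load" property (0.0 to 1.0). All the expensive sampling
// happens in C++; QML only binds the visualization to the result.
Rectangle {
    width: 200; height: 20
    color: "black"

    Rectangle {
        height: parent.height
        // re-evaluated whenever the C++ side signals a change
        width: parent.width * cpuMonitor.load
        color: cpuMonitor.load > 0.8 ? "red" : "green"
    }
}
```

Nothing on the C++ side changes when the renderer underneath QML does, which is exactly the separation being advocated here.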
If you are into 3d, you may find this QML video interesting:
http://www.youtube.com/watch?v=OXcxFZbKUNI
It’s pretty cool stuff, even if with very little practical applications for now.
Check the archives for the Qt-Interests and Qt QML mailing lists (see qt.nokia.com). There are a number of e-mails discussing what to do and not to do under QML. I might have just picked a bad example.
Plasma Mobile already uses QML and the general libplasma QML integration for use by Plasma components (e.g. plasmoids) is already being merged into trunk for 4.6.
from there we can migrate, at our own pace, one element at a time from QPainter to QML.
only once all of that is done do we need to (or can, for that matter) look at slimming out the then-unused portions of libplasma.
once that is done, we can then make the further decision of if/when to move away from QGraphicsScene and over to QSceneGraph.
it is all one step at a time, we are able to stop at any point it doesn’t make sense to continue down that path and we can happily wait while using QGraphicsScene for QSceneGraph to be where we need it to be. if QSceneGraph takes longer to properly mature and move into Qt than it does for us to move Plasma components to QML, that’s not a problem. (and vice versa)
that’s one of the best things about this set of changes: we are able to go at our pace, migrate one item at a time (and we should get some nice improvements while doing so, even when still on QGraphicsScene) and make the hard decisions when the time is right. there are no forced decisions, no required jumps (even while we do the necessary work to be able to make such a jump) and no API breakage required until we do make that jump.
so while it will be quite a bit of work stretched out over the next 1-2 years (in addition to the usual new feature development, stability and bug fixing we do), the disturbance to the user and 3rd party developers should be very minimal.
and that’s not an accidental situation, it’s the result of a set of very deliberate design decisions made both in Qt and in Plasma. migration has to be uncomplicated, paced and safe. not just for Plasma, either, but for any Qt based app that wishes to do similarly.
I couldn’t agree with you more and will probably get modded down for this, but it is the very reason I moved to gnome as I was sick of the KDE team pissing about.
It might be fun for them to try out new technologies and make radical changes, but I am an end user and I need something stable to work with that doesn’t crash.
KDE has a stable API. Plasma on the other hand is a moving target sometimes, but plasma is an application not an API.
Being “in motion” is both the strength and weakness of plasma, and why I love it and disapprove of the development model at the same time. Aaron has excited a lot of new developers, bringing new blood into KDE. We would never have achieved this much without this boost, but the price has been focusing plasma on art, creativity and innovation rather than the old KDE tradition of “just make it work”.
Still plasma at this point works, and if the art becomes too much you have the options necessary to tune it down. It is not even the “old guard” who have installed these, the options to turn plasma down a notch mostly comes from within the plasma sub-community as they are slowly getting more and more awesome.
I could vent my fur(r)y over the young whippersnappers working on plasma, and tell them to “get off my lawn”, but that is so 2007. At this point in 2010, plasma works much better than kdesktop and kicker ever did; the only minor issue has been that nvidia has moved from being 2-3 years behind in XOrg tech to now being 5-6 years behind. You can avoid that problem by using the open source nouveau drivers now.
Indeed, but the resources taken for that migration are going to be needed in other interesting parts that, imho, need to be the priority and have already been delayed too much.
I don’t forget that QML is the “me too” answer to Adobe AIR, JavaFX and MS WPF, and all three of those technologies never got past the hype. I don’t think QML will be different. Anyway, best of luck.
not everyone will be working on this (in fact, just a few will be, most likely). others will continue their work on things such as activities and what not.
what parts are you concerned about not getting the necessary attention?
it’s really not so much an answer to Air, WPF, etc. as it is a similar kind of solution to an increasingly common problem. one very major difference with QML is that it isn’t a platform being created with the hope of luring app developers to it; it’s a new tool being added to a framework app developers are already using.
as a result application developers are already using QML (Kontact Mobile and Plasma Mobile to name two from KDE; i’m aware of others, but will let them announce for themselves). it’s also being used by platform developers such as MeeGo. this essentially prevents it from becoming “hype only” since it already has an audience which is already starting to use it. it’s a significant difference.
regardless of whether QML takes over the world or not, it is starting to give those of us already using Qt some much needed tools that we’ve been missing. and that’s enough for me.
JavaFX, Adobe AIR and WPF are also used by people, just not by most people, who still work with the legacy methods offered by those companies: Java, Flash, and Win32/Winforms. It wouldn’t be surprising if QML is similarly underused by Qt developers.
This is a bit of a late post, and I’m not the person you were replying to, but the state of kdepim does concern me. I haven’t heard any updates from them on planet kde, their website (pim.kde.org) hasn’t been updated since February 2008, and Gentoo is still blocking KDE 4.5 until there is a 4.5 pim release. What’s going on there, anyhow?
Spoken like someone who hasn’t worked with any of those technologies.
I used to develop multimedia applications in WPF. I could build complex user interfaces in WPF in a fraction of the time it would take using a more traditional framework.
WPF, FLEX, and now QML truly allow designers to get more involved with the development process. One of the best developers I’ve worked with was a XAML (WPF XML) guru who couldn’t write a line of code in any language.
If you want to develop ugly boring business apps, then go ahead and code in MFC or Winforms.
I’ve also developed with WPF; it’s awesome, but somehow it didn’t take off the way it should have. Why? I don’t know.
But if you ask me, I think it was overshadowed by HTML5 and JavaScript.
If you want to develop ugly boring business apps, then go ahead and code in MFC or Winforms.
Depends; with Winforms you can do kick-ass business apps too, and WPF lacks some goodies Winforms has.
WTF does that mean?
Like what? Apart from the fact that Winforms to WPF is a hell of a lot harder to migrate to than this will be.
Dude, you are nothing but a visual basic 6 developer, so don’t come here to give me programming lessons.
Sometimes I have to agree with Hiev. At least he has a sense of humor
Vector graphics, hardware rendering, Win7 integration, better printing framework to name a few.
Is WPF needed for business applications? No, especially when most desktops are still running XP. Winforms to WPF is expected to be a very slow migration. Not only have most companies built around Winforms but a lot of small and indy developers will continue to use Winforms since it leaves the option of porting through Mono.
I can’t speak for Air or FX but WPF is definitely not riding on hype. WPF vs Winforms is debated endlessly in .net forums but everyone agrees that WPF comes with discernible benefits. Very few would suggest using Winforms if the targeted audience is Windows7/Vista users.
You also can’t expect an explosion of applications immediately after a new framework is released. Software takes time to develop and for a lot of applications it isn’t cost effective for the company to switch over.
Some key benefits of WPF only work with Win7/Vista so WPF will become more appealing as the XP market starts to shrink. Give it some time, WPF applications are on the way.
KDE’s nothing but a tech playground these days. You don’t seriously expect anybody to use it.
Despite the fact that you are a huge troll, and I don’t like feeding trolls, I have to answer.
KDE is among the biggest FOSS projects and is seriously used worldwide. However, it’s one of the few projects which are brave enough to be early adopters of new technologies. And also to take part in developing them.
Since the beginning everyone complained why FOSS always ‘follows’ and never takes the lead.
Well, guess what, it takes some courage to develop new technologies and use them, before anyone else does.
A huge troll that, until the KDE4 debacle, used to enjoy KDE.
Nice to see that the standard FOSS answer to everything is still to label people as trolls. Perhaps that’s why 99% of computer users are just trolls and wouldn’t touch an open source desktop with a ten-foot pole.
Keep it up, you’re doing just great.
If you used to enjoy KDE, then why wouldn’t you enjoy it now that it is better and more powerful than it has ever been?
Your response seems like pure deflection to me. You got accused of trolling so your best method of defense is attack, hey?
People are too smart for you. If they can run great software like KDE, have great performance and functionality for free, be free from viruses and malware, and run a copy on as many machines as they please without fear of being pestered about licenses or plagued by endless adware, then they will go right ahead and do so, enjoy the benefits, and pay no heed at all to your silly debating bluffs.
KDE 4.0 was problematic, and the change in attitude turned some people off. I personally believe people turned off by the first 4.x release should give the new ones a try. (Ditch the crappy NVidia drivers first though, or you will be disappointed… but there is nothing KDE can do about NVidia’s unusually crappy binary drivers.)
Yeah, I know. It’s the only open source desktop competing with the proprietary competition on development and the frameworks they have at their disposal. When it isn’t done, open source desktops are playthings no one can use, and when someone attempts to move things forward, people shout that things should stay the way they are.
Go figure.
Precisely. This, and this alone, is the reason why you get such a lot of spiteful hate-filled criticism of KDE … you can tell this because when you boil it down and analyse what is actually being criticised … it is nothing.
KDE is criticised only because commercial software interests need it to be criticised.
KDE is criticised only because commercial software interests need it to be criticised.
Even F/OSS proponents who criticise KDE4 do it because commercial software wants them to do it? Lolwut?
There are a lot of people who do not have much more to say other than “me too”. These people are not the ones who set the direction of what is discussed, and what is not.
There are also a number of people who pretend to be F/OSS proponents but who really are not. There is even an entire website dedicated to presenting articles about F/OSS but which somehow always seems to conclude that F/OSS is lacking in some way. I have in mind the site called Linux Insider.
All that you need to look at is what KDE is criticised for, and what about KDE is ignored. You might, for example, get a review and pages and pages of discussion about the design of KDE notifications and the gripe that an icon does not quite line up horizontally with the notification message or some such … and yet there will be no mention at all in reviews that the KDE photo manager/editor application digikam is considerably better than F-Spot in every way, or that K3b has been better for many a year than any GNOME desktop CD/DVD burner. You might get an article about the “inadequacy” of Linux PDF viewers that talks about Evince and XPDF, but entirely ignores Okular.
What gives with that? Why is it so? Who is driving the discussion along the lines of absolutely trivial and often plain incorrect criticism and bitching about KDE, and almost complete suppression of mention of its benefits and advanced features?
I’ll try to give you an example:
http://desktoplinuxreviews.com/2010/10/13/kubuntu-10-10/
This article purports to be a “review” of Kubuntu 10.10 but in effect all it says is that Kubuntu 10.10 is OK I suppose but “I found these six things which I didn’t like”. None of the criticism is actually correct. There is much uninformed discussion that ensues about the six things. There is an almost complete oversight of the actual quite decent KDE desktop in Kubuntu 10.10.
Sigh.
The very last user comment I feel sums it up nicely even if it is worded a little clumsily:
This is something that has been bothering me for a while as well. There are many cases where KDE and/or Qt applications are clearly better than the competition but online reviewers somehow manage to completely ignore them. And despite people trying their best to set the record straight in the comments section, when one is available, these “articles” and “reviews” keep coming up with the same old glaring omissions and the same old rehashed complaints about KDE faults that in some cases no longer apply or are not even valid to begin with!
Even worse is the trend of crediting Ubuntu for *everything* that happens on the Linux desktop, completely ignoring that other Linux distros have pretty much the same feature set and in some cases, as in with Fedora, having them before Ubuntu.
From the things that actually are unique about Ubuntu, the stuff that Canonical itself actually develops, the only thing that I can think of that is worth something is Ubuntu One – which can be easily replaced with Dropbox to some extent so no biggie there – and the integration of Rhythmbox with 7Digital’s music store which is nice despite me not being a Rhythmbox user and Canonical’s complete failure to integrate other music players such as Amarok in the same mold.
Funny thing is that even though I find KDE the best looking and most powerful DE out there, I am not even THAT biased towards KDE/Qt applications and will happily use GNOME/GTK applications when they’re clearly better – GIMP, Audacity and Inkscape, for instance – but these trends I discussed above and your observation have been really disturbing…
keep coming up with the same old glaring omissions and the same old rehashed complaints about KDE faults that in some cases no longer apply or are not even valid to begin with!
To be honest, it’s no different in regards to GNOME either, you know.
I am not sure we’re talking about the same thing here. The way I see it, there are mainly two high profile writers out there that judge KDE fairly on their reviews, giving brownie points when it gets something right and calling the developers on it when it gets something wrong. These are Joe Brockmeier and Bruce Byfield.
Byfield’s reviews tend to be as neutral as possible to the point that it becomes annoying to read them sometimes, but they are usually fair, pretty balanced and reasonably accurate even though one can easily spot faults here and there every now and then. Brockmeier was formerly community manager or some such for OpenSUSE which is, by most accounts, a KDE-centric distro so he might have a little bias there but his articles related to both GNOME and KDE and its applications look reasonably fair to me, for what it is worth.
Most other reviewers and columnists appear to evaluate KDE based on what they can get from Kubuntu or Ubuntu with the kubuntu-desktop meta package which, frankly, sets the bar pretty low. It is not a big secret that KDE is not exactly loved over there in *buntu waters and it shows. Even smaller profile distros such as SimplyMEPIS can claim to be a lot better than any of the *buntus as far as KDE is concerned.
Heck, recently I engaged in an online discussion on someone’s blog because he/she was claiming the upcoming GNOME Shell Activities as the pinnacle of usability and a revolution on the desktop “when it arrives”, especially when compared to previous iterations of KDE’s Activities. This completely disregards the fact that Activities were completely revamped in KDE SC 4.5, and that the groundbreaking work the KDE developers did when taking the plunge and implementing said Activities back when KDE 4.0 came out, despite the shitload of crap that was thrown at them for daring to do so, drew a blueprint for GNOME developers: where the usability pitfalls are, and what to do and what not to do when working on theirs!
It is somewhat ironic that KDE SC 4.5 actually offers something closer to a traditional desktop, with a taskbar on the bottom, menu launchers and (optionally) icons on the desktop for those that want them, even though Plasma offers many of the same advantages of other composited desktops and many of its own while retaining the familiar work flow, whereas GNOME Shell will be much more of a departure from the classic GNOME DE when it arrives.
Like Lemur said above, it is not unusual at all to find articles evaluating GNOME and its related tools, correctly finding that they leave something to be desired when compared to proprietary alternatives available on Linux and elsewhere, while completely disregarding KDE and its applications, which does make some of us wonder why that is so.
correctly finding that they leave something to be desired
Uhh, it’s an opinion. Not everyone has the same tastes and thus claiming that people are correct by default whenever they say there is something to be desired from GNOME or GNOME applications.
For example, I have still not found a single KDE application that I’d prefer over the GTK+/GNOME ones. I still don’t go around claiming KDE is inferior or something; it just doesn’t suit me, personally.
Sorry, it sounded a lot harsher than I intended. What I meant to say is that in some cases, the criticism of some GNOME applications or the desktop per se might be somewhat valid but it is weird that some people leave it at that, as if the alternative was not worth mentioning as in the aforementioned case of comparison between Evince and XPDF and the omission of Okular. That’s all.
I certainly don’t want to imply that GNOME/GTK applications are inferior by default – in fact, I am a heavy user of certain GTK apps even on Windows – and apologize if that was the impression that I gave.
as if the alternative was not worth mentioning as in the aforementioned case of comparison between Evince and XPDF and the omission of Okular. That’s all.
Okay, I understand your point. It could of course be that the reviewer just isn’t familiar enough with the KDE alternatives? If I were writing a review, I’d probably omit mentioning them too, since I have no experience with them and it would thus be unfair of me to say anything. Though yeah, there are biased people in all camps, and they are sometimes willing to do almost anything to promote their own camp…
I certainly don’t want to imply that GNOME/GTK applications are inferior by default – in fact, I am a heavy user of certain GTK apps even on Windows – and apologize if that was the impression that I gave.
It did sound like you were giving the impression that GTK+/GNOME apps were somehow inferior to anything else by default, but thanks for clearing up the misunderstanding!
It would be somewhat understandable if we were talking about the smaller and lesser-known desktop environments available for Linux, but how can a reviewer seriously, with a straight face, ignore the *other* major desktop that, depending on whom you ask, might hold between 40 and 50% of the Linux desktop user base?
But I agree with you; it could be that they simply are not familiar with KDE and its ecosystem. It is just… I don’t know… upsetting(?) that it keeps getting snubbed on these reviews and reviewers keep spreading misinformation when it is obvious that they know nothing about it.
Despite the best efforts of some people to disparage KDE, there are some objective measures available for its rate of use right now compared with GNOME.
http://www.phoronix.com/scan.php?page=article&item=lgs_2010_results…
Compiz 3131 : Kwin 2133
GNOME appears to have a roughly 50% larger user base than KDE right now, according to these user-submitted survey results.
This despite the many years of continual sniping at KDE that some people have endlessly and uselessly been engaged in.
The design of KDE 4 was a very significant departure from KDE 3, and it took a fair effort to switch over, with a notable drop in feature support during the transition. But now people may be able to see the benefits… due to the cleaner design of KDE 4, we can now contemplate a significant change in the presentation layers, bringing with it the benefits of multi-threading and more extensive GPU acceleration, without serious impact on other parts of the desktop software.
That KDE can now move with the times WITHOUT significant regressions or disruptions gives the lie to your original claim.
So much so that one has to wonder at your motivations for making claims that are so obviously wrong.
>> Seigo does note, though, that eventually everything will move to OpenGL, and he’s right. That’s called progress, and cannot (and should not) be stopped. There’s enough choice in the Linux world when it comes to less demanding (hardware-wise) desktop environments. <<
So a user should choose a *full desktop environment* according to whether he has hardware-accelerated OpenGL or not??
It seems incredibly rigid!
IMHO, users should only have to choose whether they want to enable “shiny effects” or not; this is not progress.
While I see your point, the same could be said of bitmapped graphics and colour graphics.
Not everyone has 2D acceleration, either.
With the way X.org drivers are moving, and the fact that probably 90% of graphics processing units sold today support 3D acceleration, this is a logical move.
Should it happen quickly? No.
Should it or something similar happen? Yes.
I’m completely disenchanted with OpenGL, though.
Well, I don’t agree that it’s the same situation, if only because bitmap blitting and color graphics are standardized (through VESA), whereas 3D is implemented in a proprietary and generally undisclosed way, making implementation a PITA if you don’t work for the relevant HW manufacturer.
Intel and ATI aren’t really a problem, there.
Just don’t buy NVidia.
VESA is pretty much an Intel thing, too.
The less we think of ourselves as tied to x86(-64) the better off we will be.
Even if one must make some thermal/energy compromises, you can get more total power and RAM for less money elsewhere.
*glances at 700usd loaded V880 in living room*
You’re disenchanted because the OpenGL on Linux that is integrated into Desktop Environments is still in the 1.x era.
Wake yourself up when it reaches 2.x let alone 3.x.
No, it’s more that I’m… disenchanted with OpenGL.
Considering the time it took to make D3D work with Gallium3D versus how long it took to get that far with OpenGL (and the developers’ comments on that), plus the fact that OpenGL is really only designed to work with C-family languages and still works in terms of pixels and triangles, I think it’s crap.
I think X is crap, too.
I can’t actually think of a current windowing system that I don’t think is a long-term dead-end.
I’m not a fan of über-static languages like (Obj)C(++ et al) that have no built-in means of interactive development or run-time modification.
I hate GTK.
I hate that to write a ‘web app’ I’m supposed to write it in 3 different languages with vastly different syntaxes.
I hate that there’s still C in Emacs.
I hate that Common Lisp won’t give up or improve, but instead stagnates and produces a ton of disparate efforts to extend the language, rather than remove the cruft.
I hate that Clojure doesn’t care how huge the JVM is memory-wise.
I hate that OpenFirmware/OpenBIOS was ignored by the people that became LinuxBIOS/CoreBoot because they didn’t like Forth, and OpenFirmware was extended to emulate enough BIOS calls in under a year, but CoreBoot works on a small handful of mobos.
I hate that FreeBSD makes claims that are patently false, like supporting the Sun Fire V880 (the SCSI DVD-ROM doesn’t work with any known version; the bug is known, yet it’s still listed as supported. What am I supposed to do? Install over USB 1.1?).
I hate that OpenBSD doesn’t want to have a real installer.
I hate that NeWS was proprietary and _expensive as hell_.
I hate Ruby.
I hate George W. Bush’s entire administration and everyone that backed him.
I hate homophobes, racists, and pedophiles.
I hate red Twizzlers.
I hate driving, or rather, how amazingly dangerous it is.
I hate that weed is illegal.
I’m allowed to hate things I’m well-informed on, and believe to ‘be of sucks’, to paraphrase Henry Rollins’ impression of bad English.
So, yeah. The next time you’re going to assume something, spell it out.
Hey now! The rest I can understand, but red Twizzlers? That’s going just a bit too far…..
Hah!
I dig _real_ licorice.
Eating Australian stuff right now. It’s amazing.
Let me guess… you want a crappy GUI installer. No thanks.
The OpenBSD installer is a breath of fresh air compared to every other OS I’ve had to install over the years.
I want sensible defaults and clear, concise explanations.
Sysinstall and Debian Installer are lovely.
Try the installer with 4.8 or -current. It has been simplified even more so than it already was.
I hate both of those installers.
Heh.
I wish d-i would ask me all the questions up front, and it’s got its roots in sysinstall’s interface, so it’s got similar issues.
However, I could hand it and a readme off to a friend who’s never installed an OS and they’d probably walk away with a working install, and I can use it to do most install tasks.
Then you should find the new pc-sysinstall in FreeBSD 9.0 to be almost perfect. Text-mode, answer a bunch of questions, an install script is generated, then the install script is run. Every install is a scripted install, so you can manually tweak it as you see fit. And you can build any kind of TUI/GUI over top that you desire.
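For the curious, a pc-sysinstall run is driven by a plain-text script of key=value directives, which is what makes every install reproducible. The fragment below is purely illustrative: the directive names are recalled from the pc-sysinstall documentation of that era and the disk name is made up, so check both against the pc-sysinstall help output before relying on them.

```
# Hypothetical pc-sysinstall script (directive names and disk name
# are assumptions; verify against your FreeBSD documentation)
installMode=fresh
installInteractive=no
installType=FreeBSD
installMedium=dvd

# Wipe the first disk and use all of it (device name is an example)
disk0=ada0
partition=all
bootManager=bsd
commitDiskPart
```

Because the installer simply replays such a script, the same file can drive an unattended install on a dozen machines, or be generated by whatever TUI/GUI front end someone builds on top.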
Wow, that sounds awesome.
I’m fighting with the results of my OpenBSD experiment (“hmm, I’m not liking this. I see why people would. I don’t. I’ll try FreeBSD”): sysinstall can’t write to MTA-formatted drives…
_trying_ to dd them away from MTA formatting, but I may have to do something less of a brute-force hack.
In any case, I now see the benefits to both install methods. I just have different priorities from OpenBSD, and I think FreeBSD embodies the philosophy of this particular project. If I was building a NAS or wanted to see how far I could get with vanilla ‘real’ Unix, I’d totally be on OpenBSD.
FreeBSD has a few more bells and whistles I want, and more focus on performance.
Strokes, folks, etc.
This won’t happen for a while.
Right now, QML is based upon QGraphicsView, which has pluggable backends: software or OpenGL.
This is a blessing and a curse, since using OpenGL here helps, but only so much; at some point you hit a wall where you can’t improve the visual quality/effects without a big cost in performance, unless the graphics system is designed around the OpenGL (or Direct3D, for that matter) API.
It may be sad, but you don’t find modern 3D games with software rendering.
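To make the backend discussion concrete, here is a minimal QML sketch in Qt 4.7-era syntax (element names are standard Qt Quick; the layout itself is just an illustration). The declarative tree describes *what* to draw; whether it is rasterized by the software backend or through OpenGL is decided by the pluggable graphics system underneath, not by this code.

```qml
import Qt 4.7

Rectangle {
    width: 200; height: 80
    color: "steelblue"

    Text {
        anchors.centerIn: parent
        text: "Hello Plasma"
        color: "white"
        // Embedded JavaScript expression: font size tracks the width
        font.pixelSize: parent.width / 10
    }

    // A cheap animated effect, and a natural candidate for GPU work
    Behavior on width { NumberAnimation { duration: 300 } }

    MouseArea {
        anchors.fill: parent
        onClicked: parent.width = (parent.width == 200) ? 300 : 200
    }
}
```

The point of the scene-graph plans is precisely that a tree like this can be handed to the GPU wholesale, instead of being repainted through QPainter calls.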
Once upon a time there were only monochrome displays. I think it’s incredibly rigid that today’s desktops require colors. And not just something simple like 8 or 16 colors; we are talking thousands and millions of colors.
…and NeWS re-appears implemented in a combination of languages with less extensibility.
There are advantages to doing it in multiple languages/namespaces etc, but there are disadvantages, too. Mainly extensibility is dropped, but yeah.
I’m not really a KDE user, but I’m a KDE fan.
Wow! Reading so many comments makes me wonder… how many of you have actually tried QML? How many of you installed the Qt Creator 2.1 beta and fiddled with it?
Just back from the Qt Developer Days in München, and from my perspective QML was the ‘buzz of the town’. The presentations about QML (there were a lot, covering the different aspects and possibilities of QML) were all… great!
Is it just me (and Aaron, I presume) who thinks that with QML (or, more correctly, Qt Quick) Nokia has developed a great technology?
Other than being new for Qt, nothing about it is new in terms of technology. It’s just new for Qt.
It may be that Silverlight and JavaFX (and even Flash?) have similar tech, but those are not really available to the desktop Linux crowd (or are widely shunned!).
QML looks cool and after installing Kubuntu 10.10 in a VM I am super impressed with KDE 4.5. After I figured out that I needed to force font dpi to 96, it looks really nice. I’m impressed with how far KDE has come, I think I like it better than GNOME at the moment. Keep up the good work.
GTK is moving at a snail’s pace compared to Qt, and the GNOME 3 shell looks like a parody of the desktop.
http://www.youtube.com/watch?v=cmKwFSwvjD0
If I was a KDE hater I would be worried too.
It’s the 4.0 fiasco all over again. The same was said back then: “Look, Qt has QProxyGraphicsWidgetsWhatEver, you can rotate them, you can zoom out”, but you can’t actually use it. If the next two years are going to be more trial and error, then these guys haven’t learned anything.
I have been a KDE user since the 1.x days back in ’99, and it has served as my sole desktop environment for the last couple of years. I really believe that KDE SC is one of the greatest and most innovative open source projects yet.
Aaron and the rest of the guys thank you and keep up the good work.
This article is perhaps worth a look:
http://www.linux-mag.com/id/7883/2/
…
While this article still manages somehow to have a kind of sideways ping at previous releases of KDE, it does illustrate the point nicely … KDE is moving ahead and IMO keeping pace with advances in hardware, increases in user expectations and advances in other competing desktop operating systems. KDE cannot be accused of stagnating.
The proposed advances in the KDE/Qt rendering layers described in this thread (hopefully without impacting much on other layers of the KDE applications stack) seem to be further significant, worthwhile advancements in this vein.
I’m not too sure what I think about OpenGL being required either, but note the following:
1. Gnome 3 also has the same philosophy.
2. Plasma won’t be ready to switch over to the SceneGraph tech for 2 years, possibly 3.
3. By that time, it’s entirely possible that LLVMPipe will be able to render a basic 3D desktop (without effects enabled) at an acceptable speed, even if slow, just as VESA can render 2D, but not very fast.
4. The SceneGraph is under heavy development right now, and it’s entirely possible it will be given a non-3D rendering backend.
“The plan is to make the transition over a number of KDE releases, during which more and more parts of Plasma are moved over to the new framework.”
Wise decision. A new “KDE 5.0” revolution would have ended up as a shiny new but useless product, as KDE 4.0 was.