This is a response to yesterday's article on the progress of GNOME and KDE. Aaron Seigo, a lead KDE developer, sets out the current state of progress with KDE 4, pointing out that KDE 4 is on track for a release in mid 2007. “Thom points to a quote from me that our goal is to have a 4.0 ready sometime in the first half of next year. That gives us until sometime in June and I’m still thinking we can make it.”
A nice response that pretty much sums up my opinion on Thom’s editorial. I think Aaron might be a bit optimistic about KDE4’s release date, but he’s nowhere near as off as Thom was.
I think Aaron might be a bit optimistic about KDE4’s release date, but he’s nowhere near as off as Thom was.
Imho the big problem with Thom’s article was that he started out with the conclusion and then went looking for evidence to support it. This is bound to produce clusterfscks; see the war in Iraq for a far more drastic example.
Let me add two things:
1. Thom’s article was probably in part a result of the “200X, the year of the Linux Desktop?” hype that’s been around for a decade. How about a more realistic approach instead of pendulum-like hype and anti-hype?
2. Kudos to Thom for posting this. OSNews has often had bs articles (imho =), but you’ve also always been very fair about posting criticism and differing opinions.
I agree, it’s good he posted Aaron’s reaction here.
Still the best desktop around
I think everyone who works on KDE is to be applauded for trying to move the Linux desktop forward. Kudos, Aaron, for setting the record straight.
Just because KDE and GNOME don’t hype their point releases the way MS and Apple hype up the smallest improvements they make doesn’t mean that there hasn’t been much progress.
@HK47
It would help KDE a lot.
“If you don’t show the world your toys, the world won’t know the cool stuff you’ve got to offer them.”
Heck, people would discover that KDE has its own office suite (KOffice) instead of defaulting to OpenOffice every time.
Well, to be fair, it’s probably for the better that people see OpenOffice.org instead.
For all of the progress that it has made (and there has been a lot), KOffice really doesn’t measure up to OO.Org at this point.
Hopefully 2.0 will improve on that considerably.
Just remember though that KOffice has a LOT fewer developers working on it than OpenOffice. Furthermore, Sun backs OpenOffice; there’s no real financial backing for KOffice in a similar vein. And, probably, Sun has more information on office document compatibility in its hands than the KDE team, especially since Sun is still quite a big player in the market.
Dave
And, probably, Sun has more information on office document compatibility in its hands than the KDE team, especially since Sun is still quite a big player in the market.
I’m not so sure about that – as far as I can tell, MS never released any info about its closed binary formats, so both are starting from scratch. There’s no question that OOo does this a lot better though. Of course, both were heavily involved in creating ODF so neither should have much of an advantage there.
OpenOffice is not starting from scratch. OpenOffice used to be StarOffice, which has been around for close to twenty years in one form or another.
So OpenOffice starts from a mature, aging and complex codebase. Additionally, as part of the Sun-Microsoft agreement, Sun does get access, at least that’s what we were told, to Microsoft file formats for interoperability purposes. Admittedly, nothing truly revolutionary has happened on this end so it’s hard to judge the effectiveness of the agreement.
KOffice is young, truly built from scratch, incredibly fast, and needs time to get to the point that you can use it as a drop-in replacement for either MS Office or OpenOffice, but it already does a lot of things that neither of the above do particularly well.
In my opinion, KOffice will be the long-term contender and choice of the free desktop, as its architecture is cleaner, easier to learn and extend, but it will take a while to get there.
I’m not so sure about that – as far as I can tell, MS never released any info about its closed binary formats, so both are starting from scratch. There’s no question that OOo does this a lot better though. Of course, both were heavily involved in creating ODF so neither should have much of an advantage there.
But OpenOffice.org/StarOffice is a much older product than KOffice – easily more than 10 years old – and the support for Microsoft’s document formats is a lot older as well.
As Microsoft’s format has evolved – they’ve never created a new format with each release, just evolved it – so has OpenOffice.org/StarOffice built on previous experience.
What I hope, however, is some focus by KOffice on using the information from Novell to get OpenXML supported – sure, it might not be the de facto format right now, but when it does become one, they’d have a filter mature enough to handle files created in Office.
“KOffice really doesn’t measure up to OO.Org at this point. ”
Sure it does. The majority of users will never need 90% of the features in OOo and thus KOffice will do all they need.
The only thing OOo has over KOffice is, imho, that OOo handles MS Office formats better.
“KOffice really doesn’t measure up to OO.Org at this point. ”
Sure it does. The majority of users will never need 90% of the features in OOo and thus KOffice will do all they need.
The only thing OOo has over KOffice is, imho, that OOo handles MS Office formats better.
Unfortunately handling MS Office formats is probably what 90% of the users need. I know I do.
I like KOffice. It’s small, light, fast, integrates well into KDE, and does some cool stuff that OOo or MS Office do not (e.g. editing PDFs).
But until I can interoperate with PowerPoint and Excel files, I simply can’t use it for my day-to-day work, no matter how much I’d like to. And I do want to, because it’s a pretty wicked package for what it does.
Hell, I’d be happy if I could interchange files with OOo2 using OpenDoc, but I can’t even do that reliably.
KOffice is great as a standalone office suite, but we don’t live in a standalone world. It sucks being tied to the MS-Office format, but that’s a sad reality, at least in the corporate world. I have high hopes that KOffice 2.0 will improve in this area.
Depending on just how open Microsoft’s new Office Open XML formats really are, compatibility with Office should be less difficult in the future.
The MS OpenXML format is open, but it is full of problems. Like codifying twenty-year-old Excel bugs (did you know Excel still uses the Julian calendar, as obsoleted by Pope Gregory?), or things like the (copyrighted!) border graphics for Word (all two hundred of them). People inside Microsoft and Adobe have computed that it will take about 50 man-years to implement support for the OpenXML wordprocessor subformat _alone_. Basically, from lock-in through secrecy we’ve gone to lock-in through overload.
Simply put: writing import/export filters for MS Office file formats is not something that’s doable for volunteer hackers. The formats are designed to make that hard. If we had a bag of money we’d probably use it to out-source the development of GPL, library based filters, but we haven’t got a bag of money.
For KOffice 2.0, better MS file format support is not a goal. If you get an MS Word document that KWord’s quite adequate import filter (it’s better than the one Pages has…) cannot handle, convert it to ODF with OpenOffice, then work in KWord. We believe that KWord will be so much more fun and pleasant to use that that will be acceptable to many people.
We cannot be all things to all people, and we prefer to do one thing well: make a suite of applications that are fun to use, that help people create, put their thoughts on paper and organize their business. There are many cases (and many current users) where that is exactly what’s needed. We are glad that there’s OpenOffice for when better MS file format compatibility is needed.
(That said, one of the main problems with the old MS Word importer isn’t actually the importer itself, which can handle anything but a particular kind of graphics, but the KWord table and layout code. Now that code is going to be completely rewritten, and the result may well be that the importer suddenly starts working a lot better.)
Boudewijn Rempt, Krita maintainer
In my opinion, in the last year both KDE and GNOME have advanced twice as much as they did in the three years prior. KDE is really making strides and I can’t wait for KDE 4. I have never been a fan of GNOME, but even GNOME is becoming usable to me, which is amazing as I have always disliked it in the past. KDE is still my desktop of choice and KDE 4 is an exciting prospect; I am waiting, and I hope it is everything the devs are working for it to be.
I enjoyed seeing Thom’s article being tagged as flamebait over on Slashdot. They’re absolutely right!
Thom, I’ve said it before and I’ll say it again, think before you write. Sure you can state your opinion but don’t include supposed facts that aren’t true.
Best of luck to the KDE team, I’m still a GNOME user, but that might change when KDE 4 comes out!
Man, I remember when GNOME was the biggest piece of trash, and KDE was slightly better. It’s amazing how far they both have come. I am a heavy GNOME/XFCE user (and GNUstep!), and never really liked KDE (even when it was better), but I am very excited about the new release.
Thom, you can make a mistake…
and apparently you can also kind of set things straight.
Thanks for posting this reply to your article; in my view it gives OSNews some credibility back.
Not really, he’d have got endless crap if he hadn’t posted it…
but their menus are AWFUL… I keep going back to see how they are doing and I cannot get around the menus.
Try Kubuntu; they uncluttered the menus a lot, which just goes to show it depends on your taste, and if you don’t like it you can change it.
I think the problem here is not KDE or GNOME, but the distributions that set them up. Ubuntu and Fedora Core do it properly (for example, in GNOME you click on Applications -> Internet -> Firefox, for KDE you click the K -> Internet -> Firefox) but in distributions like Mandriva, you have K -> Internet -> Web Browsers -> Firefox, or in GNOME, Applications -> Internet -> Web Browsers -> Firefox. There is an extra level, and sometimes two, for applications. There is no need for this unless of course the distribution packs 5 different kinds of each application. There is a word for this… starts with a b, ends with a t and sounds like ‘boat’.
Leech
Or try SUSE 10.2, with a totally new menu. It has gone through extensive usability testing, and was found to be more efficient than Vista’s, Apple’s, GNOME’s and KDE’s current menus.
The slab for GNOME is available in Debian (and probably Ubuntu) nowadays; the KDE slab ought to be coming soon.
Menus… you know… file, edit, etc… the kicker is not the problem… the problem is the menu organization in the apps themselves.
Aaron’s response was pointed and defensive, as would be expected, but he was actually nicer to Thom than I would have predicted. Pretty calm, considering.
Now, if he could just learn to use capital letters for his sentence starters, I could have an easier time reading his stuff!
Aaron has proven himself to be a sensible person again and again.
He doesn’t seem to lose his temper a lot; kudos to him.
And I don’t think he’ll ever start using capital letters at the start of a sentence…
Jaroslaw Staniek once wrote a “deaaronify” service [1]. Unfortunately it seems to be out of service.
Regards,
Aron
[1] http://www.kdedevelopers.org/node/2238
Thom, thanks for posting this.
…he did spark a lively debate about the progress of KDE 4, and even got some of the project’s leads to provide a “public” update on its progress.
I do think the tone of the original article was indeed inflammatory, but I suspect that Thom belongs to the polemicist school of rhetoric.
I also believe that there is a fundamental misunderstanding about the direction KDE4 has taken, i.e. it is mostly not a cosmetic redesign, but rather largely “under the hood”…
The thing I wonder now is how KDE 4 and other next-gen WMs will seize the opportunities offered by new hardware-accelerated desktops.
I also believe that there is a fundamental misunderstanding about the direction KDE4 has taken, i.e. it is mostly not a cosmetic redesign, but rather largely “under the hood”…
I agree. When KDE4 was first being announced there was a ton of hype around Plasma and there hasn’t been nearly as much discussion about all the other changes going on. So when someone who isn’t really following KDE’s progress in detail takes a quick look and sees that not a lot of Plasma has gotten done yet, they think that not much of KDE4 has been finished. The reality is that Plasma is the tip of the iceberg as Seigo put it, and a lot of the hard work being done is under the hood.
That’s how successful software development tends to go though… Things happen which you can’t see for the first 60-95% of the development, and then things explode at the end.
I agree. When KDE4 was first being announced there was a ton of hype around Plasma and there hasn’t been nearly as much discussion about all the other changes going on. So when someone who isn’t really following KDE’s progress in detail takes a quick look and sees that not a lot of Plasma has gotten done yet, they think that not much of KDE4 has been finished. The reality is that Plasma is the tip of the iceberg as Seigo put it, and a lot of the hard work being done is under the hood.
Right at the bottom of the blog article there was a good post relating to the issue; most of the pain and misery associated with the 4.0 development has to do with the move from Qt 3.x to 4.x – the break in compatibility has resulted in a ton of stuff that needed to be done.
You’re right about the old iceberg – once they get the whole Qt 4.x and KDE dependency work done, development will skyrocket, because ultimately everything relies on that foundation; it isn’t as though they could work on Plasma whilst the porting was taking place – e.g. Plasma relies on JavaScript, which sits inside kdelibs, which relies on Qt 4.x.
For me, I’d sooner 4.x take a bit longer and release a really stable and reliable system which can work great on all the supported platforms than hype features, over-promise, under-deliver and, as a result, destroy the reputation of open source projects as an open and honest alternative to the skulduggery that occurs in the commercial software world.
I do think the tone of the original article was indeed inflammatory, but I suspect that Thom belongs to the polemicist school of rhetoric.
He’s just Dutch… Still, nice that he linked this article; thumbs up there.
I also believe that there is a fundamental misunderstanding about the direction KDE4 has taken, i.e. it is mostly not a cosmetic redesign, but rather largely “under the hood”…
This is exactly what’s been going on, but Plasma and the new KWin got hyped for flashy desktops while we already have Compiz, even though Plasma is just the last thing to be done, since it’s built on the stuff under the hood.
About the hardware acceleration in KDE 4: I’ve been testing kwin_composite for some time. Well, it works, but needs a lot of work… at least work is going on, and it’ll also support older systems by allowing acceleration through OpenGL *AND* XRender & friends.
Here’s an old blog post from Aaron (http://aseigo.blogspot.com/2006/02/support-kde4-by-using-kde3.html) to put it in perspective:
but someone on a mailing list pointed out in passing that many of those in the “bleeding edge” crew of people will likely move on to other options during the kde4 devel cycle just because it’s going to take a while. we’re not talking about an e17 or duke nukem forever type schedule, but it will be longer than our usual “what, it’s 9 months already? new release!” standard operating procedure.
i wish this weren’t so, but in my gut i can’t help but think, “yep, people will move on in search of the latest and greatest something.” this has a really negative effect on open source projects, as this fickle attention span can make it rather difficult for us to do longer development cycles (and therefore larger changes). why?
well, user base is everything for large projects. it’s what keeps the q/a going, where we get new contributors from, where user support comes from, the pool for regional support at things like tech shows, what packagers use to gauge what packages to give more love to and more …
in the proprietary world they just lock their users into their platform with file formats, hardware platforms and other nasties so they can take their time if they need to: their users ain’t goin’ anywhere. we don’t do that (because we respect Freedom), and so our users are free to roam.
but when they do roam in larger numbers, that can impact the project. when the project does release that spiffy new version that took a year and a half to complete, it often has to start building the user base back up to where it once was. and no, most projects don’t usually have the resource to simultaneously develop two trees indefinitely.
And such is the conundrum of the OSS development model.
Microsoft can take 5+ years to deliver an OS update because, seriously, WTF are you going to do about it? Nothing. Frankly, they can get away with it because, all of the OSX and even desktop linux hype aside, Microsoft owns the desktop and gets to do whatever they want with it.
But in the linux/FLOSS space, things are modular and users have options. Not happy with Red Hat? Look at Suse. Not happy with linux? Look at BSD. Not happy with KDE? Look at Gnome. Etc etc. This freedom and flexibility for the user to choose the tool that works best for them is a wonderful thing, in many ways it’s the entire keystone for the FLOSS movement, but it also forces developers into a different position than their proprietary counterparts face.
KDE made a difficult decision in terms of the desktop battle. To virtually drop everything and spend a couple of years reworking the foundation, at the cost of further development on the current product, knowing full well the risk of attrition from a user base that is accustomed to immediate gratification, is a brave, bold and ballsy move.
But it was a smart decision because it needed to be done if KDE was going to remain viable long-term, at least in terms of attracting developers and ISV’s, and hence users.
A lot of detractors refer to KDE4 as vaporware, but that’s unfounded. If you actually follow the trail and look at what KDE4 is going to deliver, there’s nothing there that is over-the-top and wildly-optimistic. It’s basically a series of frameworks.
Solid is a framework that isolates KDE applications from having to interface directly with the hardware layer; it not only leads to stability for Linux-based KDE apps but allows for a degree of cross-platform portability. It’s not rocket science, but it takes some effort. And it already exists in svn.
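Just to make that concrete, here is a rough sketch of the sort of code Solid is meant to let applications write. Solid is still in svn, so the headers, class and method names below are my assumptions based on the design as it currently looks, not a frozen API:

    #include <solid/device.h>
    #include <solid/deviceinterface.h>
    #include <QtCore/QDebug>

    // List every storage drive the hardware layer knows about, without the
    // application ever talking to HAL (or whatever backend Solid wraps) directly.
    void listStorageDrives()
    {
        foreach (const Solid::Device &device,
                 Solid::Device::listFromType(Solid::DeviceInterface::StorageDrive)) {
            qDebug() << device.udi() << device.vendor() << device.product();
        }
    }

The point is simply that the enumeration and the hardware notifications go through one KDE-side API instead of each app talking to the platform directly.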
Phonon is a framework that lets applications use multimedia functionality independently of whichever backend the user selects as their preference. Again, we’re not talking about anything pie-in-the-sky, it’s common sense. KDE got burned by depending on aRts, and isn’t willing to take the same risk by depending on an unstable backend like GStreamer. Phonon already exists in svn.
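The promise, roughly, is that playing a file looks the same to the application no matter which engine does the decoding. A minimal sketch of what that might look like (Phonon is still in svn, so take the header and function names here as assumptions rather than the final API):

    #include <phonon/mediaobject.h>
    #include <phonon/mediasource.h>
    #include <phonon/phononnamespace.h>
    #include <QtGui/QApplication>

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);
        app.setApplicationName("phonon-sketch");

        // The application only talks to Phonon; whether xine, GStreamer or
        // something else actually decodes the file is the backend's business.
        Phonon::MediaObject *media = Phonon::createPlayer(
            Phonon::MusicCategory, Phonon::MediaSource("/home/user/music/song.ogg"));
        media->play();

        return app.exec();
    }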
Akonadi is probably more ambitious, but is very cool. It’s the reason kdepim has become a core library in addition to kdebase and kdelibs. A lot of the press I’ve seen focuses just on things like KitchenSync and the framework for syncing PDAs etc., but the ambition is much greater than that. We’re looking at the potential for having a framework for managing data stores of information. It’s a significant undertaking, but once again, we’re not talking about anything epoch-shattering here. kdepim is in svn, though I’m not sure about the overall state of Akonadi.
Tenor was intended to be the conceptual framework for data searching, and there is some promising work on its integration with Nepomuk, an ambitious framework for metadata-based searching that is supported by a number of major IT organizations (IBM, HP et al.) as well as the EU. Right now Strigi, which is in very basic development, runs rings around Beagle as a file-searching application. In fact, Strigi is desktop independent and could even supplant Beagle for non-KDE desktops.
And then there’s Plasma. Well, Plasma relies on the underlying framework, and a considerable portion of that framework is Qt itself. At the end of the day, much of Plasma’s eye-candy will be driven by what Qt already provides. But of course, the problem is that Plasma is what everybody is looking for and wants screenshots of. It’s like looking at mockups for new automobiles; the body is relatively simple to design compared to the framework underneath, but in this case they’re not going to design the finished body until the framework is done.
Behind all this, there has been a lot of work on defining usability guidelines, coding guidelines, documentation guidelines etc. This is the un-sexy work that people on the outside don’t get to see, or get excited over.
KDE 4 is progressing nicely. If you want to roll up your sleeves, the svn is available, and Suse and Kubuntu have precompiled versions of the svn snapshots available to play with. There is stuff going on, it’s just not the sexy stuff that gets headlines. That won’t happen until their work is done and users get to enjoy the benefit.
But I guess one of the biggest problems is that there is no single source for updates on KDE 4 development and its progress.
For those really interested in watching the process, check out planetkde and dot.kde.org on a regular basis, it’s worth it.
Anyways, just wanted to re-assert that KDE 4 is not stalled, and it’s not vaporware. It already exists, and it’s still evolving; it’s just not sexy enough yet to drool over if all you’re looking for is screenshots.
But KDE 4 will rock. Eventually.
Just my 2c…
If they fulfill their goals with the release, they will have zero problems whatsoever drawing back the user base.
Nicely put. KDE 4 will take some time, it’s a massive change from earlier KDE versions. The KDE team is thinking ahead, the KDE 4 trunk will probably see KDE through for the next 5-10 years. I hope KDE doesn’t lose a lot of users, because imho it’s the best desktop environment (overall) that Linux has.
Dave
The question is:
Would a print journalist have given Aaron and a representative from GNOME the ability to respond before publishing tripe like the previous article?
After some of the other sensationalist news on the internets (Novell, patents, GPLv3, etc. times infinity) in the last few months, I thought a well-researched and balanced story would have been good for OSNews. I shall keep waiting…
Reading the responses to that blog post was pretty entertaining. A bunch of inane uninformed rambling complaining that there is uninformed rambling on OSNews.
While I think Thom’s article sounded a bit more desperate than is probably warranted, there is no doubt that there is a very big kernel of truth in his statements.
First of all, a KDE 4.0 release in “mid 2007” does not mean KDE 4.0 will be out in any practical form in 2007. The primary necessity of the 4.0 release is HIG-ifying the desktop, and that’s a process that starts with a release, and continues for years afterwards (as GNOME’s experience has shown).
Second of all, both Vista and OS X are in a different league technologically than any competition on Linux. No truly objective person can say otherwise. Both Vista and OS X have fully composited GUIs that are properly supported from the driver up to the desktop notification widget. Neither XGL nor AIGLX is playing the same game. XGL doesn’t have the necessary underlying infrastructure, and AIGLX is a solution more notable for its convenience than its technical merit.
There is a long process remaining before the Linux desktop has something comparable to OS X Tiger from a UI standpoint, or Vista from a technical standpoint. I’ll enumerate some of the main points here:
1) Proper driver-level support for a composited desktop. The DRI is not there yet. The new DRM memory manager has just landed in development versions of the Intel driver. Efficient support for hardware context switching and simultaneous rendering from multiple contexts is just not there yet.
2) The window system-level support for the composited desktop needs to be improved. XRender is adequate for competing with OS X circa 2002. It cannot support the level of hardware-accelerated 2D that Vista features, and that Leopard will presumably feature. Put simply: it doesn’t expose the power of shaders, and that makes it a non-starter for advanced features that require that power.
3) The new capabilities offered by an accelerated, composited desktop need to be exposed through APIs in the toolkit. Cairo is a first step in that direction, but it’s far from the final target.
4) All these features need to percolate through the stack and throughout the desktop. Some fancy Compiz effects are nice, but they’re ad-hoc. You’re not competing with OS X Leopard until the UI folks go through these new features and rationalize them with the HIG and come up with some systematic way to integrate them into the environment as a whole.
5) The stack needs to be optimized. The details of the components’ interaction need to be worked out. For example, when resizing a window, you want the window/composite manager to synchronize with the toolkit, and to efficiently use the memory management primitives of the GL stack. That integration and optimization isn’t there yet.
Most of the pieces of what is shaping up to be a very great system are in place. However, anybody who says GNOME 2.18 + Cairo + Compiz + AIGLX is going to be in the same league as OS X Leopard is not being objective. And that’s from a purely technical consideration, not some abstract “Linux is not as usable as OS X” bullshit. The Linux desktop stack that will exist in early 2007 will be comparable to OS X 10.2 in technical capability. It’ll be several releases and a year or two beyond that before you’re looking at something with the maturity and completeness of Vista or Leopard.
NOTE: In the interest of fairness, I should point out that being late isn’t necessarily a bad thing. It is very probable that Linux circa late 2008 is going to be better than OS X or Vista in the same timeframe. GNOME’s UI is better today than Vista’s, and is in very many respects better than OS X’s (and I type these words on a MacBook). X will still be network transparent, and neither Vista nor OS X will be. X’s indirect rendering model and XEvIE will allow leveraging the user-interface potential of a composited desktop (as opposed to merely the graphical potential) in a way that OS X and Vista won’t.
KWin is built to be able to use XRender or OpenGL, so hardware acceleration will be there for all. Now, many effects can’t be done with XRender I suppose, but even with Linux you can’t expect Vista-like effects on Win ’95-era hardware…
Arthur, the rendering engine in Qt 4, is much more capable (and mature) than Cairo, so KDE doesn’t have to wait for Cairo to get decent performance. You’re right in the HIG area, though KDE 4 will have automated usability testing and use its framework to get more usability as well. For GNOME apps, you had to spend time on problems like ‘the icons in Gedit are 2 pixels further from each other than the HIG states’. You don’t have such trivial problems in KDE, as the framework dictates such settings, so you can focus on the really important stuff, which hopefully will result in much faster adoption of the HIG than GNOME has been able to pull off.
Also, the KDE 4 HIG will be more usable for developers and easier to apply, with better and more examples than most other HIGs, also speeding up adoption.
KWin is built to be able to use XRender or OpenGL, so hardware acceleration will be there for all. Now, many effects can’t be done with XRender I suppose, but even with Linux you can’t expect Vista-like effects on Win ’95-era hardware…
As I said, composited windowing is 2002 level technology. The technologies offered by Vista and Leopard (and even Tiger today) go way beyond that.
Arthur, the rendering engine in Qt 4, is much more capable (and mature) than Cairo, so KDE doesn’t have to wait for Cairo to get decent performance.
Arthur has the same problem Cairo does. If it goes through XRender, it can’t do all of Vista’s and OS X’s pixel-shader based tricks. If it goes through OpenGL directly, it’ll hit the DRI stack’s limitations on context switching and concurrent rendering.
Well, we’re getting more technical than my knowledge can support, but afaik Arthur can have several rendering backends. As a lot of work is going into X.org lately, couldn’t that fix this deficit? After all, Zack Rusin has been put to work on X.org technology and integration with Arthur. In time, they may be able to extend things like XRender, AIGLX or other extensions to enable the stuff Vista and Mac OS X have as well.
The problem is that Arthur or X.org is just one piece of the stack, while the solution to accelerated drawing and compositing cuts through the whole stack.
Consider how Vista handles drawing a path in a window: the app calls into Avalon to draw and fill a path. Avalon decomposes the mathematical form of the path and stores it into a texture. It then instructs D3D to draw a polygon encompassing the projection of that path onto the window. It associates with this polygon a special pixel shader that will later handle the actual rendering of the path. This data is packaged up and handed off to the D3D driver, which effectively virtualizes the GPU and manages the render command streams from the various apps and the textures to which those apps render. Once the driver dispatches the app’s command packet, the GPU loads the shader, draws the polygon, and then the shader reads the geometric data of the path from the texture and fills in the correct pixels on the polygon to render the path.
Much of this technology is not there yet on the Linux side. Consider how Cairo handles rendering a path: the app instructs Cairo to stroke and fill a path. Cairo tessellates the path into trapezoids, and sends the data to the X server via the RENDER extension. Then, RENDER rasterizes the data, in software, to the window’s pixmap, and Compiz comes along and uses texture_from_pixmap and OpenGL to composite the window to the front buffer. The only OpenGL client in this scenario is Compiz, and the DRI handles the single client just fine.
Note that there is nothing particularly wrong with this model. You can achieve very high-quality and adequately fast rendering in software. You can make a very nice desktop on this model (it’s basically exactly what Tiger does if you don’t consider CoreImage/CoreVideo). However, it’s not in the same league as Vista. You’re still drawing vector graphics on the CPU, like NeXTStep did in the early 1990s.
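For what it’s worth, the client side of that pipeline is easy to picture. Here’s roughly what it looks like with Qt 4’s Arthur (Cairo code is analogous); this is only an illustrative sketch, and the point is that the application describes geometry in immediate mode while the toolkit rasterizes it on the CPU or via RENDER, exactly as described above:

    #include <QtGui/QApplication>
    #include <QtGui/QWidget>
    #include <QtGui/QPainter>
    #include <QtGui/QPainterPath>

    class PathWidget : public QWidget
    {
    protected:
        void paintEvent(QPaintEvent *)
        {
            // The application only describes the geometry of the shape...
            QPainterPath path;
            path.moveTo(20, 80);
            path.cubicTo(60, 10, 140, 10, 180, 80);
            path.closeSubpath();

            // ...Arthur then rasterizes it (in software, or as trapezoids via
            // XRender); the GPU's shader units are never involved.
            QPainter p(this);
            p.setRenderHint(QPainter::Antialiasing);
            p.fillPath(path, QBrush(Qt::darkBlue));
        }
    };

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);
        PathWidget w;
        w.resize(200, 120);
        w.show();
        return app.exec();
    }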
In order to use Blinn’s vector texture technique like Vista does (which is really the only practical way that exists right now to do high-quality anti-aliased accelerated vector graphics on existing GPUs that doesn’t involve comically high levels of multi-sampling), several pieces of this stack need to be changed.
1) Cairo needs a back-end that preserves the Bézier information in the path, and doesn’t tessellate it before sending it forward. IIRC, there is a back-end interface in Cairo, or at least being tested in Cairo, that allows this.
2) RENDER needs to be able to pass the full geometric data from Cairo to the GPU. There are a number of potential solutions to this. First would be extending RENDER to expose pixel shaders and more general textures. Second would be to extend RENDER to allow Cairo to send it full Bézier paths and associated fill information. Third would be to ditch RENDER, and use GL directly.
3) The DRM needs to be able to efficiently handle managing the memory of all of these window textures, which will be used in ways that are different to how textures are used traditionally. For example, when windows are resized, textures will be allocated and freed much more rapidly than a system designed for more conventional applications might expect.
4) Depending on the solution to (2), the DRM might need to handle large numbers of GL clients more efficiently. Specifically, if (2) is solved by ditching RENDER and having Cairo render via GL directly, it will need to deal with the fact that instead of a few GL contexts, you’re suddenly dealing with one for every window. You might get around this by using indirect GLX, and then multiplexing multiple GLX streams to a single GL context owned by the X server.
Sounds like a lot of work, but it also sounds doable.
I don’t think it’s weird that a new Windows release (after 5 years…) puts the Linux desktop behind on some stuff… but not on everything; there are areas in which Linux has the lead.
No idea how long it’ll take to catch up where Linux is behind, but I think it will be before the next Windows version. And by that time the areas in which Linux is ahead will have been improved as well. Overall, the doom scenario Thom painted for us is, as I said before, imho overdone.
BTW, interesting write-up. Do you have any idea how this is with Arthur? For Cairo, it sounds like it “just” needs another painting backend…
Sure, it’s doable, but the question is “doable in what time frame?” Thom’s late 2008 prediction is probably a good one.
Regarding Arthur, I’m not conversant on its internal architecture, but I don’t see why it should be any more difficult for it than for Cairo. The problem isn’t Cairo or Arthur, but the rest of the stack. What Cairo/Arthur need is a way to get at the pixel shader hardware on the GPU. Providing that access within the existing XRender/DRI infrastructure is the tricky part. It’s eminently doable (indeed, Cairo/Glitz uses the pixel shaders right now), but none of the OpenGL back-ends are something you’d put into production with the existing stack.
Sure, it’s doable, but the question is “doable in what time frame?” Thom’s late 2008 prediction is probably a good one.
Except that Thom’s late 2008 prediction was not about when Linux graphics capabilities would catch up with Vista – it was about when KDE4 would be released. There is no way KDE4 is going to be delayed until all of that work is done, it will come out long before that happens.
Regarding Arthur, I’m not conversant on its internal architecture, but I don’t see why it should be any more difficult for it than for Cairo. The problem isn’t Cairo or Arthur, but the rest of the stack. What Cairo/Arthur need is a way to get at the pixel shader hardware on the GPU.
I don’t think pixel shaders are the responsibility of the client side (they belong on the server side). If I understand correctly, the problem is how to pass all of the vector data (from Cairo or Arthur) to the compositing manager, which can transform windows and is in fact a good candidate to transform and rasterize vector data to the screen (using pixel shaders!), so we can get a DPI-independent and virtually aliasing-free view (i.e. no upsampling on zoom, well-antialiased transformed fonts etc). It could contain shader programs to do high-quality antialiasing of vector primitives.
I’d say there IS a problem with Arthur and Cairo. Beyond the X protocol part, they don’t send information about window structure or the visibility of components. With server-side bitmaps you don’t need this, because new XRender data will happily overwrite the old bitmap when the app decides to modify the screen. The situation is different, however, if you e.g. want to rescale a window without the app knowing it. To perform this without artifacts, you’ll have to replay the whole vector data stream and rasterize it again at the higher resolution, which can be a bottleneck if there is a lot of data. This needs better handling of what is visible and what can be flushed as no longer visible, so that the least possible amount of vector data is replayed on redraw. It could either be done with a complex scene definition that the compositing manager can understand, so it can cut out invisible parts, or by trusting the application to be aware of the issue and notify the server when old vector data stored there can be dumped.
Of course most apps will have a rather simple structure so this won’t be much of an issue, but a badly behaved app could potentially cause problems, and generally it is always better to hide issues in the server (or toolkit) than to put an additional burden on app developers.
Sending a scene description to the server would solve the problem I’m talking about, but such a retained-mode design completely goes against the X architecture. The way to achieve resolution-independence within the existing X architecture is to retain the immediate-mode semantics, and do scaling with minimal cooperation from the toolkit. See XEvIE for the general model of how this would be done.
As I said, composited windowing is 2002 level technology. The technologies offered by Vista and Leopard (and even Tiger today) go way beyond that.
I suppose the root question, as an end user, would be what would the more advanced approach buy me, and would I be able to tell the difference?
“””I suppose the root question, as an end user, would be what would the more advanced approach buy me,”””
As admin of about 50 Linux desktops, I’ve been asking myself the same thing while reading through this thread. The conclusion I have come to is “absolutely nothing”. My users need to browse the Web, send and receive email, do word processing and spreadsheets, and run a curses based accounting package. The local users do this via regular old X protocols from thin clients. The remote users come in over NX.
I’m not saying that these technologies don’t have their uses. But I don’t see where they mean diddly to me and my business users. And I think that the kind of users I support are probably the more common case.
I think hardware-accelerated compositing technology is going to be one of the biggest steps forward in user interfaces in the last decade. Apple is just scratching the surface of what’s possible with the technology. The benefits range from the aesthetics to efficiency. First of all, not-ugly is generally better than ugly, all else being equal.
Second, things like animation can allow for subtle cues that reduce the cognitive load on the user. For example, many people find multiple desktops in Virtue Desktops to be useful in a way they never found multiple desktops on other systems to be. That’s because the animated transitions help them keep track of the spatial orientation of the various desktops.
Third, scalable graphics offer substantial potential for compressing large amounts of information; things like Exposé are just the start. Imagine an IDE that used Exposé-like techniques for browsing source files, used vector graphics to display complex class and call-graph hierarchies, and automatically scaled the most relevant data up to be readable while scaling less relevant data down to fit more information on the screen.
Fourth, even if you don’t consider these things to be useful, you can’t argue that the lower-tech approach is comparable to Vista. It might be inferior along dimensions that aren’t important, but objectively it is still inferior in those dimensions.
Arthur has the same problem Cairo does. If it goes through XRender, it can’t do all of Vista’s and OS X’s pixel-shader based tricks.
I followed this discussion and while I understand that Vista has access to some pixel shaders that are trickier to use under Linux, I do wonder how much those pixel shaders are used by Vista, and how much more they bring.
In my experience as an end user, I’m not sure I would notice much difference. What can’t be noticed easily will not be missed much, even if it takes a year more to implement…
Seriously, there’s stuff in Beryl now that is well ahead of what will be in Vista when it ships. Sure, the technology is better and there’ll be third-party add-ons to get all kinds of cool effects software, but these will not be available right away, and they won’t be part of the default package. If anything, it’ll be like a more advanced version of WindowBlinds: cool stuff, but try to convince your IT department to have it installed on your workstation.
Unless I’m missing something fundamental, it doesn’t seem to me that this slight advantage for Vista will make that much of a difference.
What do I know, anyway… I’m just happy to have a hardware-accelerated exposé effect on my Kubuntu laptop. That’s more than enough for me!
(I have to admit I’m starting to like the exposé-like function even *more* than virtual desktops…)
I followed this discussion and while I understand that Vista has access to some pixel shaders that are trickier to use under Linux, I do wonder how much those pixel shaders are used by Vista, and how much more they bring.
Pixel shaders are at the core of Vista’s 2D rendering architecture. When Avalon renders a path on the screen, it does not tessellate it into triangles and then use the GPU to render the triangles. Instead, it draws a polygon that covers the projection of the path, and uses a pixel shader to fill in the regions that are inside the path, ignore the pixels that are outside the path, and anti-alias regions that are on the edge of the path.
This allows them to achieve very high-quality coverage-based anti-aliasing without having to use any full-scene anti-aliasing in the scene itself. That’s a huge win because even 16xFSAA (which itself incurs a huge memory cost and is only even supported on the latest cards) can’t touch the quality of a good coverage-based anti-aliasing system like the one in Cairo’s software renderer.
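To make the shader trick a bit more concrete: in Loop and Blinn’s technique (the “vector texture” approach mentioned earlier in the thread), each curve segment is drawn as a triangle carrying special texture coordinates, and the per-pixel work boils down to evaluating a tiny implicit function. The sketch below writes that per-pixel test in plain C++ purely to illustrate the math; a real implementation would of course run it as a pixel shader on the GPU, and this is my paraphrase of the published technique, not Vista’s actual code:

    #include <cmath>

    // Per-pixel coverage for a quadratic Bezier segment, following Loop & Blinn's
    // resolution-independent curve rendering. The triangle covering the segment
    // carries the canonical texture coordinates (0,0), (0.5,0), (1,1), which the
    // rasterizer interpolates to (u,v) at each pixel.
    float quadraticCurveCoverage(float u, float v,
                                 float dudx, float dvdx,  // screen-space derivatives of u and v
                                 float dudy, float dvdy)  // (how fast they change per pixel)
    {
        // Implicit function: f(u,v) = u^2 - v is zero exactly on the curve,
        // negative on the filled side, positive outside.
        float f = u * u - v;

        // Approximate signed distance in pixels, f / |grad f|, which gives an
        // anti-aliasing ramp about one pixel wide along the curve's edge.
        float fx = 2.0f * u * dudx - dvdx;
        float fy = 2.0f * u * dudy - dvdy;
        float dist = f / std::sqrt(fx * fx + fy * fy);

        // Map signed distance to coverage: 1 well inside, 0 well outside.
        float coverage = 0.5f - dist;
        if (coverage < 0.0f) coverage = 0.0f;
        if (coverage > 1.0f) coverage = 1.0f;
        return coverage;
    }

Because the test is evaluated per pixel on the GPU, the curve stays crisp at any zoom level without any multi-sampling of the scene.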
Without pixel shaders, using the GPU for rendering while maintaining quality becomes a rather difficult exercise. Basically, you either just punt and do everything in software (what Cairo currently does on AIGLX), or you use OpenGL only very late in the pipeline (what XGL does), thus giving up a lot of the potential benefit of using the GPU.
Note that this point doesn’t just apply to pixel shaders. What happens when GPUs get advanced enough that you can just up and run most of pixman (Cairo’s software renderer) on the coprocessor? How are you going to facilitate THAT through XRender?
The difference between Vista and Linux’s stack as it stands today is the difference between OS X Jaguar and Vista. It’s the difference between using the GPU just for some desktop effects, versus leveraging the GPU for the whole graphics pipeline. You can get a very good desktop with just the former (GNOME + Compiz is a VERY good desktop), but you’re also giving up a ton of potential (and not to mention losing the feature war).
You can get a very good desktop with just the former (GNOME + Compiz is a VERY good desktop), but you’re also giving up a ton of potential (and not to mention losing the feature war).
I understand this, however what I’m really curious about is how Vista will take advantage of this technological edge to implement such features. In other words, it’s all fine and dandy to have a more powerful stack, but are they really using it to its potential? From what I’ve seen of Vista, it doesn’t seem as if users will notice that much difference between it and, say, a Beryl-enhanced desktop.
To make a game console analogy: both the Xbox 360 and the PS3 are more powerful than the Wii, but the Wii is the one that is the most innovative, and the one that has generated the most buzz. While Vista has the technological edge (for a year or two), videos of it have yet to provide the “wow” factor that I get when I showcase my Beryl-enhanced laptop (which, I have to say, has a cheap 128MB integrated ATI card – nothing to get excited about, and yet I get very good performance…)
BTW, you seem very knowledgeable on this…I have to wonder, are you collaborating in any way to help improve the FOSS stack? It seems to me the *nix world would greatly benefit from someone like you working on such a project…
Even Vista had lots of advertised GUI features axed. For example, Aero Glass Diamond (vector widgets) or most of the effects from that 2003 video on YouTube.
Obviously their driver framework and development just wasn’t ready in time for many planned features, so they postponed them for a Vista SE or a later OS. Avalon, for example, isn’t used very much in the Vista GUI.
True, Vista’s 3D driver stack is already here and is probably better than the DRI right now. It is focused exclusively on next-gen cards and DX10, which allowed MS developers to produce an advanced design quickly, but older hardware support got sacrificed (talking about WGF2/DX10).
However, the new memory manager in the DRI is proof that the catch-up is ongoing (despite the devs currently being focused more on low-end Intel hardware). Besides, the latest generation of NVIDIA and ATI hardware is supported only by binary drivers anyway. I believe those two should be persuaded to base their future Linux drivers on the DRI, even if their specific kernel- or userspace drivers remain closed source. Unlikely to happen, though.
A big problem is the X protocol. As you say, XRender isn’t enough. We might even need something equivalent to WPF/XAML to be able to propagate and draw purely vector widgets and application windows WITH a “scene” definition (borrow a few ideas from SVG, Flash etc.?). Until then, the workaround will be to do the demanding stuff *and* 3D-on-the-desktop through windowed OpenGL.
Integration of compositing effects (or 3D) is another issue, as you mentioned. Currently window managers are limited, as they have no useful communication with the desktop environment (the DE isn’t really aware of them). Also, toolkits don’t have the slightest idea what is happening; they just issue bitmaps to the compositing manager and that’s it. I believe there is a lot of room for improvement here. A first try at compositing desktop integration might be seen with KDE 4.
Even Vista had lots of advertised GUI features axed. For example, Aero Glass Diamond (vector widgets) or most of the effects from that 2003 video on YouTube.
That’s just a maturity issue. New effects and widgets have to go through the UI people, and Vista doesn’t have the time for that. However, the underlying technology to do those things is present in Vista.
True, Vista’s 3D driver stack is already here and is probably better than the DRI right now. It is focused exclusively on next-gen cards and DX10, which allowed MS developers to produce an advanced design quickly, but older hardware support got sacrificed (talking about WGF2/DX10).
That’s a cop-out. The DRI folks couldn’t have come up with a Vista-like stack even if they didn’t want to keep compatibility with older hardware. Not because of any lack of technical capability, of course, but the fact that they just don’t have the access to the specifications of next-gen hardware in the way MS does.
However, the new memory manager in the DRI is proof that the catch-up is ongoing (despite the devs currently being focused more on low-end Intel hardware).
Barring a change of heart by NVIDIA and ATI, low-end Intel hardware is DRI’s best hope. Implementing a Vista-like 3D stack on reverse-engineered drivers is an incredibly daunting task. Even given the fact that the Intel drivers solve most of the spec-access issues, hoping for a Vista-like stack by mid-2007 is silly. Thom’s prediction of late 2008 is a much more reasonable one.
That’s just a maturity issue. New effects and widgets have to go through the UI people, and Vista doesn’t have the time for that. However, the underlying technology to do those things is present in Vista.
In fact this is not just a UI-people issue. The Desktop Window Manager (a compositor in essence) was supposed to handle vector data as well, and do high-quality rasterization itself (instead of this happening on the “client” side). But now there is no mention of this in Vista. Yes, in X(.org) having this would require a new protocol extension to define window structure as a vector object (as opposed to just having a bitmap in the server) and new XAML-like definitions of windows which would be rasterized by the compositing manager.
Windows does support a DPI-independent desktop, but this requires the application (“client”) to redraw using higher-res bitmaps (AFAIK).
Barring a change of heart by NVIDIA and ATI, low-end Intel hardware is DRI’s best hope. Implementing a Vista-like 3D stack on reverse-engineered drivers is an incredibly daunting task. Even given the fact that the Intel drivers solve most of the spec-access issues, hoping for a Vista-like stack by mid-2007 is silly. Thom’s prediction of late 2008 is a much more reasonable one.
Having a Vista-like DRI stack is a priority, but not an urgent one. The memory manager is most important now, and it might be ready by the X.org 7.3 release (both VRAM & AGP handling for the Intel, r200 and r300 drivers). The Vista stack will support high-granularity GPU scheduling for next-gen cards, so currently in the DRI it is only possible to implement what DX9-class hardware allows (similar to WGF 1.0), and I believe this won’t take so long (btw, Avalon is DX9 anyway, so scheduling on DX9 hardware obviously isn’t a big issue).
While I’m confident that NVIDIA will have its own good implementation of memory management and scheduling, there is hope that AMD might take a similar approach to Intel regarding the DRI. Rumored next-gen Intel hardware might also, if things stay this way, drive development of the DRI.
Following how KDE progresses, using it every day (not exclusively, just for the record), and having expressed my opinion about the pretty fake and false alarms raised about development being stalled, I can only agree with everything Aaron said. KDE has never been a slow follower when it comes to features and implementing novelties, and hopefully this tendency will just go on. All I can add to this is that I use a very large set of applications and developer tools on a daily basis, but if I had to pick my favourite apps and my favourite desktop, it would all lead to KDE. Yes, this is subjective, this is personal preference, yet it’s not unfounded but the result of some years. And I’m not alone in this opinion, and as long as this is the case, KDE will not stall and will not go away just because some internet journalists think everything is junk that doesn’t come with strict deadline promises, three dozen features dropped just to meet those deadlines, and two jumbos full of shiny bells.
It’s great to read that KDE should be on track for next year’s release, and that the devs have worked hard on the project: keep up the good work, I (and many other happy Linux users) will be following it closely.
But at the moment I am also a happy GNOME user who’s a little concerned about its future. The one thing Mr. Holwerda got right in his “article” (oh my God, I’m criticizing Linux stuff, Thom, go note this in your diary) is that GNOME is, at the moment, in a worse position than KDE. That’s because:
1. the drive from 1.x to 2.x was largely based on usability: it was successful, but it’s now over;
2. the incremental increases did a lot for GNOME; as many have noticed in these forums, there’s a sidereal distance between 2.0 and 2.16: but is it enough to continue on this path? Can you refine something from here to eternity?
3. the developer community looks divided (at least to my “external guy” eyes), see for instance the Mono debate, and surely undermanned, see the GTK+ dev remarks.
4. it also seems that the most involved devs (like Pennington) have been busy with other stuff, and that the whole subject of GNOME’s future has been put in the background for the time being.
So I’m concerned about vision, about the future programming framework, about the will to imagine a revolutionary, more than evolutionary, Gnome 3.0. It doesn’t have to happen tomorrow, nor next year, evolutionary progress is fine (for me at least) at the moment. But it would be nice to know that there is a long term vision or, if not yet a vision, a movement towards creating one.
rehdon
1. the drive from 1.x to 2.x was largely based on usability: it was successful, but it’s now over;
1.x had dramatic usability issues, which are now taken care of. There is no obvious issue that would justify a complete rework (and all the pain 2.x went through) in the near future.
2. the incremental increases did a lot for GNOME; as many have noticed in these forums, there’s a sidereal distance between 2.0 and 2.16: but is it enough to continue on this path? Can you refine something from here to eternity?
I’d say that GNOME 2.x has just reached a certain maturity as a platform. The exciting part should now be to observe what people build on top of this platform, while it is continuously being refined. The “next big thing” will certainly happen as a parallel development and won’t be affected by the six-month release cycle.
3. the developer community looks divided (at least to my “external guy” eyes), see for instance the Mono debate, and surely undermanned, see the GTK+ dev remarks.
I see no large scale divide. Mono is just another language to write GNOME platform applications with, just like Python or Java. Gtk has always been undermanned, but that didn’t stop it from kicking ass in the past and certainly won’t stop it from kicking ass in the future. Things can always be better, but it’s not like our existence would be threatened by this.
4. it also seems that the most involved devs (like Pennington) have been busy with other stuff, and that the whole subject of GNOME’s future has been put in the background for the time being.
It certainly is in the background, because there is no pressing need at the moment. GNOME 2.x does just fine for now and I don’t see anything in the competition that couldn’t be added incrementally to this platform.
So I’m concerned about vision, about the future programming framework, about the will to imagine a revolutionary, more than evolutionary, Gnome 3.0. It doesn’t have to happen tomorrow, nor next year, evolutionary progress is fine (for me at least) at the moment. But it would be nice to know that there is a long term vision or, if not yet a vision, a movement towards creating one.
There doesn’t have to be a grand vision. GNOME 3 is most likely to happen once someone sits down and develops a new desktop concept that actually works. It can happen at any time, whenever it is necessary or whenever someone has the right inspiration. There are many ideas that can be explored, but it isn’t as easy as “doing what everyone else is doing” anymore, so this probably should be an evolutionary process. Try different approaches and let the fittest survive.
You want a higher page ranking, Thom? It’s too bad that 95% of users use Adblock Plus with Firefox.
ooops
Thom’s original article did what a good article should do: it hit on a very raw nerve.
The key point doesn’t lie in how gtk+-this and kdelibs4-that are doing. Those are just details. What matters is how the whole Linux desktop project feels about itself – optimistic, or uncertain and a bit down in the dumps?
My guess is that the Linux desktop project doesn’t feel too good at the moment. There’s a bit of a whiff around. Hmmn, maybe some Chief Gerbil Engineers keeled over a while back and no one’s yet noticed. The often furious response to Thom’s article strongly suggests that he’s on to something here. Hmmn, how much progress does all this Gnome/KDE work really represent when Apple and Microsoft are shortly to move the game to a whole new level?
I can’t comment on Aaron Seigo’s piece. Nor, I would suggest, can 99 per cent of the rest of the world, since the details he cites are not matters of record but inside details for insiders available (at best) in an svn repository somewhere. He too fails to answer the key question: how does the Linux desktop project feel about itself at the moment?
I think Aaron Seigo’s reply was quite clear – people involved in KDE feel great about where they are right now. I’ve seen several other blogs and they all seem very upbeat – I haven’t seen any complaining about slow progress or other problems. OTOH, if you mean end users then I think there is a bit of worry.
The thing that really got me about Thom’s piece was his assertion that each OSX point release was major but each KDE/GNOME point release was minor. There are supposedly some revolutionary new features in 10.5 that we haven’t heard of and he simply takes them at their word, while KDE4 features have been discussed fully and many are already available through SVN if you want to get into it, and apparently they are only vision and no substance according to Thom. What? I don’t think Thom was purposely trying to write flamebait or anything, but that just really shouted out to me that he is biased in this area.
If I insult you, or if I start spreading lies about you, I’m sure I’d get some reaction; you could say I “hit on a very raw nerve”. Is that good communication? I don’t think so.
Est modus in rebus: there is a measure in all things.
rehdon
“Thom’s original article did what a good article should do: it hit on a very raw nerve. ”
Eh, no. Good articles are well written and well researched; Thom’s was neither. Hitting a raw nerve is not a measure of an article’s quality. Writing inflammatory articles that cause outrage is easy; writing articles that lead to fruitful discussion isn’t.
The original article was thus a failure.
“when Apple and Microsoft are shortly to move the game to a whole new level? ”
They are? Really? I think you’re confusing hype with facts. Maybe they are, maybe they aren’t; right now we don’t know.
“how does the Linux desktop project feel about itself at the moment?”
It’s pretty obvious when you read Seigo’s article that they feel good about it.
No, Thom generated a negative reaction because he was so off the mark. It’s tiring and a bit depressing to hear so many detractors of open source misconstrue the situation and spew so much gloom and doom. Why do people continue to think that unless our goals are achieved this year or next, all is lost?
For example, over the longer term, the problem with open source drivers will disappear. Eventually we will have all of the major 3D hardware covered by open source drivers. It will happen as the rate of change in the 3D hardware sector diminishes; the current rate of change just won’t continue. Once things settle down, there will be a more stable target to reverse engineer, for example. Things are ticking along just fine.
But even if Linux had the best desktop experience in every sense, it would not be an automatic ticket to mass acceptance today. The Macintosh was, by all accounts, a better desktop experience for quite some time before Microsoft _finally_ borrowed enough ideas to catch up. If the desktop were all that mattered, Apple would be the dominant company today and Microsoft would be the also-ran.
We must take the longer view; slow, plodding improvement and promotion is the way to go, not Chicken Little-style panic about the sky falling in (or bubbles bursting).
What matters is how the whole Linux desktop project feels about itself – optimistic, or uncertain and a bit down in the dumps?
I’d say definitely optimistic.
Over the last few months there has been more cooperation between projects and individual developers than ever before.
In addition to the shared specifications we see a lot of shared technology, i.e. actual implementations shared between developers, e.g. Poppler.
D-Bus has opened a whole new avenue of integration, both between user session components (also a kind of shared technology) and especially between the system and the user session, e.g. network/wireless management, changes of hardware and hardware states, … (see the sketch after this comment).
Only a year ago the first OSDL Desktop Architects meeting started a limited yet fruitful dialogue between commercial ISVs and desktop developers, allowing the latter to base their development on actual needs rather than speculation.
All in all, the last year has been a very positive experience for the desktop projects, so I’m confident that they look optimistically toward the upcoming one.
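Since the D-Bus point above is the most concrete technical claim in this thread, here is a minimal sketch of what that system-to-session integration can look like in practice: a small Python script using dbus-python that sits on the system bus and reacts when HAL announces new hardware. This is only an illustration, assuming a typical 2006-era desktop with HAL and dbus-python installed; it is not code from either GNOME or KDE.

    import gobject
    import dbus
    import dbus.mainloop.glib

    def on_device_added(udi):
        # HAL passes the unique device identifier (udi) of the device that was just plugged in.
        print("New hardware detected: %s" % udi)

    # Hook dbus-python into the GLib main loop so incoming signals get dispatched.
    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)

    # Hardware events travel over the system-wide bus, not the per-user session bus.
    bus = dbus.SystemBus()
    bus.add_signal_receiver(on_device_added,
                            dbus_interface='org.freedesktop.Hal.Manager',
                            signal_name='DeviceAdded')

    gobject.MainLoop().run()

The same add_signal_receiver pattern is what lets a session component, say a network or volume-manager applet, react to a system-level daemon without linking against its libraries, which is the kind of integration the comment above is describing.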
Who cares if KDE4 arrives on time or not? What should be important is:
– KDE development is far from stalling. Quite the opposite: the pace of development has actually been increasing in recent months. The task at hand is a big one, which is why it’s obviously taking time.
– That once it’s out, it fulfills what its developers envisioned for it. It really makes no sense to rush the release date. More crucially, KDE4 will be the platform of choice for many apps, whether they are ported from KDE3 or brand new. Because of that, it’s very important to get it right.
Edited 2006-12-23 10:06
I’d really like to see some frameworks being laid down to make Gnome and KDE work better together. I know there are already things from freedesktop.org, but what I’d like to see, for example, is more themes that work across toolkits, or simply for Qt to read GTK themes and vice versa.
The problem right now is that (at least in my experience) if you use KDE and load up a Gnome application, it loads fairly quickly, since there aren’t a lot of under-the-hood frameworks that need to be loaded. But the opposite is not true. For example, if I’m inside Gnome and load up K3b, it takes longer the first time around because it has to load a lot of the underlying KDE system. Once K3b is loaded, though, if I then start Konqueror, it’s much faster, since the KDE libraries are already in memory.
I could be wrong, but I think the only things Gnome apps usually load are the GTK libraries and the GConf daemon, whereas KDE apps mostly rely on KIO and the other KDE services.
I would also like to see Gnome become more configurable and KDE less so; both should move towards a happy medium. I use Gnome because I generally like the way it’s set up by default, though I usually change the theme. Theming in KDE, on the one hand, is great because you can do anything and everything; on the other hand, it can be overly complex to get your setup just right. A good example of this was when I was trying to set up Baghira.
One of these days I’ll do a straight, no-holds-barred comparison of how KDE does things and how Gnome does things (for example, switching the clock between 12-hour and 24-hour format, theming, etc.). Not as a “KDE is better than Gnome” or “Gnome is better than KDE” flamewar, but as a simple “this is how each desktop environment works.” Maybe if I can find the time, I’ll add Windows into it. I’d add Mac OS X too if I had access to one.
“They should both be more towards a happy medium between the two.”
I’d much rather see them stay different than meld into some bland, lowest-common-denominator borefest.
That’s not what I meant in the slightest. What I meant was that Gnome should have more visible configuration options and that KDE should have fewer. That does not mean they should become alike. The only things I think they should share are details like theme support and menu support (not necessarily layout, but at least to the point where installing a program in the package manager adds a menu entry in both desktop environments; for the most part this works now anyhow, it’s just a small example). In essence I’m saying they should work together rather than doing the opposite of what the other is doing just for the sake of competition.
Thom doesn’t get it. Most of these guys don’t. First of all, it’s not a competition. It’s not KDE vs OS X vs Gnome vs Microsoft. It doesn’t matter if OS X is prettier, or KDE is behind, or Gnome is whatever. KDE will continue to grow and change and improve at its own pace, which is what makes it what it is. KDE is not ahead, it’s not behind, it’s not superior, it’s not inferior. It is what it is, nothing else, nothing more. If it really were a competition, KDE would be something else entirely.
I didn’t respond to the original article like the other 266 did, because I didn’t even want to read it in the first place – already from the headline it was unappealing to me.
Having read it now – yes, “Mac OS X & Vista beautiful, KDE & GNOME still in the stone age” – it’s predictable.
Even KDE3 has features which OS X & Vista have not implemented – and has had them for years.
Thom – you got your booty kicked good by Aaron – IMO rightly so.
Great reply from Aaron Seigo. Okay, I’m also slightly venting my annoyance at some of your articles, but I fully agree with Seigo’s view.
Just IMO
or gimme death!
Below is my own reply to the recent article published by OSAlert:
In 2001, both Apple and Microsoft released their last major revisions: Apple released Mac OS X 10.0 on March 24, while Microsoft followed shortly after with Windows XP in October. For the proprietary desktop, therefore, 2001 was an important year. Since then, we have continuously been fed point releases and service packs which added bits of functionality and speed improvements, but no major revision has yet seen the light of day. What’s going on?
Both Apple and Microsoft have some serious problems which cannot be solved easily. Cocoa and Objective-C have an alarming shortage of industry adoption, and you do not need a degree from MIT to understand what that means: less quality assurance, and a slower pace of bug fixing. Cocoa, of course, is the base on which Mac OS X is built, and hence any problems with it will have their effects on Mac OS.
The second big problem with Mac OS is that it lacks any form of a vision, a goal, for the next big revision. Mac OS XI is not even that: a name. There is not one line of code for Mac OS XI, not even a goal or a feature description. All Mac OS XI has are some random ideas by random people in random places. Nobody is actually working on defining what Mac OS XI should become, and hence the chance that Mac OS XI sees the light of day in the coming two years is highly dubious. Mac OS XI is supposed to be a radical departure from the current Mac OS X, and you just don’t do something like that in a 12-month release cycle. Mac OS XI won’t be on your desktop until at least 2009, which means that by then, Mac OS will not have seen a major revision in 8 years.
On the other side of the river the future may seem a little brighter, but do not let appearances fool you. Microsoft might have had a vision for what Windows Vista should have become, but with vision alone you will not actually get anywhere. Microsoft developers were indeed planning big things for Vista, but that is not what they delivered. Show me the results. Vista was supposed to be out long ago, with a release somewhere in 2003. However, if you take a look at the latest Vista build, it is just XP, but uglier. We’ve been hearing WinFS this and NGSCB that for a very long time now; however, nothing solid has emerged.
In the meantime, the competition has not exactly been standing still. GNOME has continuously been improving its desktop, adding new and sometimes even innovative features, while also increasing the desktop’s speed with every release. GNOME 2.18 is scheduled for the first half of 2007, and even though what we have been shown so far is not really revolutionary, some previously ‘top secret’ features such as Compiz are maturing quite fast. I think GNOME’s recent track record in delivering allows us to believe this.
KDE has not been resting on its laurels either. KDE 4 builds are already available for testers and developers. Many anti-FOSS trolls complain that KDE 4 is nothing more than KDE 3 with a new Qt version, but anyone with an open mind who has followed it for even a short period of time (myself included) realises this is absolutely not the case. In any case, we can say that everyone used to KDE 3 will definitely view KDE 4 as a major upgrade, and in the end, that is what really matters.
The proprietary software world will not have any answers ready for GNOME’s and KDE’s big releases for at least the coming five years. Has the proprietary bubble burst? I would not go that far; however, it is certainly about to, and unless the Microsoft and Apple teams get a move on, it will do so shortly.
Which is great.
Sometimes I wish we could mod posts up to +10 or +20 instead of just +5. If you haven’t read the post I’m replying to yet, then do so now!
So, let me get this straight.
Microsoft takes forever and a day to release Vista, a product that will not be in real consumers’ hands for possibly close to another year and one so obviously bloated to any impartial observer that it will require brand-new hardware, and yet this is heralded as good progress.
In the meantime, KDE produces point releases that improve performance, fix bugs and introduce new applications while building the solid infrastructure for a real, sustainable future, and it gets crap for it.
Most of us who do network and tech support want manageable and stable solutions, not the latest crap to come out of Redmond. I run 50 thin clients off a dual 2.4 GHz Xeon with KDE 3.3 on SUSE 9.3. Try that in the Windows world.
People want solutions. People are getting smarter. People are getting sick of buzzwords that do not deliver, which is what the security nightmare of the Windows world amounts to. Speak to the home users around you. Speak to people at work and find out how they really feel about a Windows desktop on which you need to spend lots of money on third-party add-ons to keep it safe, and lots of time to truly lock it down.
Thanks, but no thanks.
One last point: polemics sell and create buzz, which is what someone like Thom is after. Thom, you have big shoes to fill as an editorialist provocateur, but Dvorak has a pair he can lend you.
Peace.
Edited 2006-12-23 15:23
“the big problem with Thom’s article was that I think he started out with the conclusion and then went on to find evidence to support his conclusion.”
If you ask me, he started with the goal in mind to write something very provocative.
Something that would make it onto Slashdot; something that would stir up a lot of reactions; something that would drive the OSAlert website hits up dramatically; something that would benefit their advertising income figures.
He succeeded.
He didn’t take into account that his personal credibility would suffer from such a hit piece. Or maybe he didn’t care…
Hilarious!
This isn’t totally unrelated… although it’s a bit of a stretch… anyway it’s fun:
http://www.youtube.com/watch?v=QT6YO30GhmQ
Merry Xmas…
That is what I consider a perfect answer to a very uninformed article.
Kudos to thebluesgnr!
What about GNOME 3.0?
Because I’m a fan of GNOME and want to know about it.
Long live Qt and long live KDE.
Oh, and here’s a nice article:
http://www.computerworld.com.au/index.php/id;855780098
Cheers,
Dave