Now that Mac OS X and Linux are starting to become viable alternatives to Windows on the desktop, more and more applications are developed to be cross-platform, so that all potential users can run them on their platform of choice. In the following article I will discuss different ways of creating a cross-platform application and their advantages and disadvantages for the user. The author's native language is not English, so please forgive any grammar and/or spelling mistakes.
First, let me explain my definition of a multi-OS platform. A platform is an environment which an application uses to communicate and interact with the system and with the user. GUI, I/O, and multimedia libraries are part of a platform; so is the kernel API. Applications running on the same platform mostly have the same look and feel, share global settings which the user can adjust in common preferences, and are able to communicate with each other. Quite often the applications offer interfaces so the user can write scripts which connect these interfaces into a new application (think AppleScript, or ARexx on the Amiga). UI techniques like copy/paste and drag and drop work across applications; not only for unformatted text, but also for complex objects. From the programmer's view, a platform offers a consistent environment with one main programming language (usually the language which was used to create the shared libraries) and several supported languages with bindings.
About ten years ago it was fair to equate an operating system with a platform. Mac OS, Windows, AmigaOS, BeOS, Atari TOS: all these operating systems offered their own unique platforms, their own environments. UNIX was a bit different, though; while the combination of a POSIX-compliant kernel, X11, and Motif was a very common platform, other variants were also possible. Nowadays an operating system can have several platforms: Mac OS X has Classic, Carbon, and Cocoa; Linux has KDE, GNOME, and GNUstep (among lots of others); and Windows has .NET and MFC. They can all be considered native, but a lot of effort has been made to enable interoperability between platforms running on the same operating system. One example is the standardization effort at Freedesktop.org to enable interoperability between KDE and GNOME.
Now let's talk about Multi-OS. Usually, if an application is supposed to run on several operating systems, you call it multi-platform. This term fits the above equation 'one OS = one platform'. However, since we want to talk about platforms running on multiple OSes, 'multi-platform platform' sounds a bit silly; instead I prefer the term 'Multi-OS'.
Let's talk about Multi-OS applications first. A Multi-OS application still has to interact with the system and the user, but the difference is that it must be adaptable to several environments (native platforms), so there must be a kind of translator between the application's calls and the current environment. In the case of Mozilla and Firefox it is the XUL toolkit; in the case of OpenOffice.org it is VCL. These toolkits are mini-platforms. The question is how well they coexist with the native platforms. The answer: not very well. When the application is launched, the whole mini-platform must be launched as well, which takes time and resources. Communication with the rest of the system can only be good if a lot of integration work has been done. The situation becomes even worse if several mini-platforms have to communicate with each other; this happens only through the native platform, so only the least common denominator is understood by all three participants. Since a mini-platform is highly optimized for one particular application, it is quite hard to use it as a foundation for other applications; XUL is used for Sunbird and Thunderbird, and VCL is the foundation for some OpenOffice.org forks.
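The "translator" idea can be sketched in a few lines of Python. This is a toy illustration only: all the names below are invented for the sketch and are not the real APIs of XUL, VCL, or any native toolkit. The application makes one abstract call, and the layer forwards it to whatever the current environment provides:

```python
# Toy sketch of a cross-platform toolkit's translation layer.
# The application calls one abstract function; the toolkit maps it
# to the current native environment. All names here are invented.

NATIVE_CALLS = {
    "windows": lambda text: f'MessageBoxW("{text}")',
    "x11":     lambda text: f'XDrawDialog("{text}")',
    "mac":     lambda text: f'NSRunAlertPanel("{text}")',
}

def show_message(environment: str, text: str) -> str:
    """Translate the abstract 'show a message' call into a native one."""
    try:
        return NATIVE_CALLS[environment](text)
    except KeyError:
        raise ValueError(f"no backend for environment {environment!r}")

# The same application code yields a different native call per platform.
print(show_message("windows", "hi"))  # MessageBoxW("hi")
print(show_message("x11", "hi"))      # XDrawDialog("hi")
```

The cost the article describes follows directly from this design: every abstract call the application might make needs a translation for every environment, which is exactly the "lot of integration work" a mini-platform has to carry around.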
Other toolkits are more flexible, so a variety of Multi-OS applications can be written using them. Examples of this are wxWidgets and Qt (and to some extent GTK, which is available on different OSes but really optimized only for UNIX/X11). Applications written with these toolkits share the look and feel of the native platform (more or less), but they still do not communicate with each other or with the native platform, so their integration is suboptimal.
A different approach is to use a virtual machine. A VM is also a kind of translator, but the difference is that it tries to provide as many of its own libraries as possible, while relying as little as possible on the libraries provided by the native platform. This way a large variety of different applications can be created, all running in a VM. The two main examples are the Java platform and Mono.
The problem with this approach is that these applications feel even more alien on a native platform, even though, with the introduction of SWT and newer versions of Swing, the look and feel of Java programs now resembles that of the native platform. The weak point is still that communication between programs inside and outside of the VM is insufficient, and there is no easy way to use parts of a Java program for interaction with a different application. There is no language like Visual Basic or AppleScript which glues parts of different Java programs together (maybe Groovy can become such a language).
Another point is that there are simply not enough Java programs where such combinations would make sense. A platform is only viable if it has a large ecosystem, which means that a lot of applications have been developed for this particular platform; only then are all the efforts around drag and drop, copy/paste, and connecting parts of different applications justified.
So now we finally come to Multi-OS platforms. A Multi-OS platform might have the same weaknesses as a VM, but the difference is that so much software is available for the platform that interaction with the native platform does not need to be perfect: most of the required tasks can be done using applications written for the platform itself.
Let's take a look at one such platform. A few days ago IBM announced Lotus Expeditor, based on the Eclipse Rich Client Platform. Lotus Expeditor runs on Linux, Mac OS X, and Windows. From a technical point of view it uses Java with SWT, but the ecosystem includes the groupware Lotus Notes, the instant messaging software Lotus Sametime, and the office software Productivity Tools. All the applications inside this platform are nicely integrated, interconnected, and extendable with third-party plug-ins. Interaction with the native platform is not important in this case, as this is a platform for an office worker; most of the tasks he requires for his business are already covered.
Before talking about the platform which might become the most important Multi-OS platform in the future, let's take a look at two earlier failed attempts at creating one. The first attempt was OPENSTEP, a platform available for Solaris and Windows, and, in its incarnation as NeXTSTEP, as its own OS running on several hardware platforms. The reasons for the failure are quite numerous: OPENSTEP was too alien on a platform like Windows, with its completely distinct look and feel, so it was too hard for users of a native platform to get used to it. Not many applications existed for the platform, and the company NeXT was too small to push it.
The second attempt was the Java Desktop System. After introducing the Java Desktop System on Linux, Sun ported it to Solaris and thought out loud about porting its components to Windows to achieve a similar working experience across several platforms, but this remained a wish, and Sun, which changes its business direction as often as its socks, has since dropped the idea. However, maybe such a discussion will appear again when the Looking Glass project becomes more mature and becomes the toolkit for a complete platform.
Now, finally, I want to introduce the most promising Multi-OS platform: KDE 4.0. What is so exciting about it? KDE 4.0 is based on Qt, which, as we have already seen, is well suited for Multi-OS development. However, KDE 4.0 is much more. KDE consists of thousands of programs, many of them of quite high quality, so if a user installs KDE 4.0 on his OS he has potential access to most of them. Technologies like Phonon help to develop Multi-OS applications: all access to multimedia codecs, which are different on every native platform, happens through the Phonon layer, so the application does not care whether it is QuickTime on the Mac, GStreamer on Linux, or DirectShow on Windows. Applications communicate with each other via D-Bus. Application components are so-called KParts, which can be combined into new applications. Copy/paste and drag and drop work across the platform. The look and feel can be configured to resemble the native platform, so a user can quickly feel at home.
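The Phonon idea, that the application never names a codec framework, can be sketched roughly as follows. This is a toy Python illustration of the backend-selection principle, not Phonon's actual C++ API; the class and names are invented:

```python
# Toy sketch of a Phonon-style multimedia abstraction layer.
# Application code asks to play a file; the layer picks the codec
# framework that exists on the current OS. All names are invented.

BACKENDS = {
    "mac":     "QuickTime",
    "linux":   "GStreamer",
    "windows": "DirectShow",
}

class MediaObject:
    def __init__(self, current_os: str):
        if current_os not in BACKENDS:
            raise ValueError(f"unsupported OS: {current_os!r}")
        self.backend = BACKENDS[current_os]

    def play(self, path: str) -> str:
        # A real layer would hand the file to the native framework here;
        # we just report which framework would decode it.
        return f"{self.backend} decoding {path}"

# Identical application code on every OS; only the chosen backend differs.
print(MediaObject("linux").play("song.ogg"))  # GStreamer decoding song.ogg
print(MediaObject("mac").play("song.ogg"))    # QuickTime decoding song.ogg
```

The design choice worth noting is that the platform, not each application, owns the per-OS knowledge; porting KDE to a new OS means writing one new backend, not patching every application.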
So what are the advantages and disadvantages of having a complete Multi-OS platform beside the native platform?
Advantages
1. Compatibility across all OSes. Using the same office suite, the same instant messenger, and the same groupware regardless of the operating system allows sharing all data without compatibility issues.
2. A painless switch between different OSes. Once the user understands the basic look and feel of the platform, he can use the same programs, which might look a bit different but are called the same and behave the same (all the Windows users I have met were unable to surf the Internet on a Mac, because they had no idea that Safari and Camino were web browsers). Again, compatibility is very important when switching platforms, because all user data can be transferred without any potential loss through conversion.
3. Since KDE 4.0 is open source and provides tons of applications, an average computer user does not have to buy expensive software for basic tasks; he can save his money for software which really demands special functionality of the OS, something a cross-platform application cannot provide.
4. More programmers, using different OSes, can write application software which is available to all desktop users.
5. A Multi-OS platform can be ported to less widespread systems like Haiku, AmigaOS, MorphOS, and SkyOS, giving the users of these systems lots of software for their daily work.
Disadvantages
1. While Qt mimics the look and feel of a native platform quite well, it can still be recognized as alien, which is highly unwelcome especially among Mac users, who react very sensitively to any look and feel other than Cocoa or Carbon. This is especially true when Apple changes the look and feel of the platform (usually with a new Mac OS X release) and Qt still mimics the old version. But this consistency argument only holds as long as few applications have a different appearance; what if the majority of applications look different? Does that become the common appearance, with the native applications starting to look like the odd ones out?
2. A Multi-OS platform can offer only the least common denominator, so applications cannot use the special advantages of a particular OS. This is true, but we are not talking about highly specialized software packages which really demand special functionality of the OS; we are talking about applications like an office suite. I am not saying that everybody should use KOffice, but several different office suites can be created on a Multi-OS basis, serving every taste. A demanding user can use a suite as powerful as Microsoft Office, while a less demanding user can use a suite as design-oriented as iWork. There is no reason why such applications cannot be cross-platform; they do not require any special capabilities from the OS. The last office suite which was really optimized for one OS was the Gobe Productive suite, and even that one was ported to Windows.
3. But what about integration with the native platform, which was so important in the previous cases? Well, close integration with the native platform is not required, because we have a whole ecosystem with plenty of applications, so communication outside the platform becomes a nice-to-have rather than an absolute necessity.
Conclusion
We have seen that it makes a lot of sense to have a Multi-OS platform beside the native platform. Sure, there are disadvantages, but in my opinion most people can live with the fact that some applications behave slightly differently than the rest. Imagine what will happen if KDE 4.0 becomes as big a winner as Firefox: a lot of people will use plenty of open source software, first on Windows, and then they will see that they can easily switch to any other platform. That will be the moment when Windows finally faces serious competition on the desktop.
About the author:
I am currently located in Bracknell, UK, and work for a major EDA company. If you want to find out more about me and my interests, please read my blog at kloty.blogspot.com (only in German), where you will find my profile and some other interests beyond operating systems.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSAlert.
After MacOSX and Linux start to become viable alternatives to Windows on the desktop
Mac OS X has been a viable alternative for years; it's not just "starting" to become one…
I'm a Mac user, but Linux is just as much a viable alternative to Windows for those it suits. Get the right tool. The world would be a very creative place if Windows was used for what it is good at, instead of everything it's rubbish at (*cough*security*cough*)
Indeed. Both linux and Mac are viable alternatives, but the dominance of Windows makes it hard for them to get users. And this is not just due to how well known they are, but also because MS has been deliberately trying to make it harder to switch for years…
Ahem…
Before 2000 and the second coming, the Mac was a dying platform. Obscene markups on overpriced beige boxes and no one at the wheel. And the "total widget" thing's bound to limit your market share. And in those days, QuickTime wasn't going anywhere. Apple did it all by itself, giving MS all its market share.
Linux didn’t reach critical mass before Kernel 2.4
In the end, MS didn’t have to deliberately do anything. There was no one else. CBM could have had Europe to itself but there was no one at the wheel there either. lol
Edited 2007-02-19 19:42
linux has reached critical mass?
don’t get me wrong, i love messing around in it, and it can certainly be used full time by some, but it’s a long way from critical mass.
mac os x too, for that matter.
“Critical mass” implies reaching a level of adoption wherein there are enough users to continue development of the platform at a healthy pace, and to sustain an ecosystem of software and services.* It doesn’t imply the point at which a product could take over the market (that would be “tipping point”).
OS X certainly has reached critical mass, as it's got a small but stable ecosystem of developers and users, and the product is profitable for its parent company. Linux has a much larger market share than OS X, and so has certainly reached critical mass as well.
*) The definition makes sense if you remember that “critical mass” refers to the mass at which a star can shine through a self-sustaining fusion reaction. The phrase is applicable whether the star is a white dwarf or a blue giant.
> “Critical mass” implies reaching a level of adoption wherein there are enough users to continue development of the platform at a healthy pace, and to sustain an ecosystem of software and services.
> The definition makes sense if you remember that “critical mass” refers to the mass at which a star can shine through a self-sustaining fusion reaction.
I think by that definition, Linux had critical mass nearly from the beginning, because it was already self-sustaining after a short time. Somewhat like the big bang
While the commercial OSes need a constant input of money to keep them burning.
In 2000, Microsoft had Windows 98 as its premier home user OS… How did that compare to Mac OS 9? Pretty much equal, I'd say. And in 2001, when Windows XP came, Mac OS X was released. Way ahead in many (if not most) areas. And there was BeOS, and probably more. There wouldn't have been such a software monoculture if MS (AND the other players in the market, they all played that game) didn't try everything to lock customers in.
No, it's not MS's fault – almost all proprietary vendors do it. I'd say it's proprietary software. It's a bad thing, unless you regulate and control it heavily, as a government. But they don't. In EVERY other branch of industry, a player with 95% of the market would be split up. But not Microsoft. I don't know the reason; there are many, I guess. But it's bad for consumers, like every other monopoly, and the government should step in.
I see Free Software (and making it mandatory) as a good solution to the problem in the software market – it's too easy to lock customers in. Free Software could change that.
Actually, OS X came before XP, so MS decided to make XP pretty.
I know, didn’t I say that? Anyway, OS X might indeed have played a role in the last-minute changes in XP in the theming area… So we might have Apple to thank for the Fisher Price look
No, it's not MS's fault – almost all proprietary vendors do it. I'd say it's proprietary software. It's a bad thing, unless you regulate and control it heavily, as a government. But they don't. In EVERY other branch of industry, a player with 95% of the market would be split up. But not Microsoft. I don't know the reason; there are many, I guess. But it's bad for consumers, like every other monopoly, and the government should step in.
You don’t know any one reason … so there must be many, you guess. But maybe there aren’t any?
Maybe most consumers don’t in fact feel that Microsoft is charging too much?
I don’t KNOW for sure – anything… I’m no government official, but I can give you some (pretty good) guesses:
– They don’t know sh*t about the software market. The Dutch government agency who’s supposed to ensure good competition *admitted* that was the reason they didn’t interfere…
– They don’t dare to – MS is big, has deep pockets, and especially in the US, money buys power. Yes, I really believe there are some immoral things going on in the political area…
– Lawyers. They’ve tried – but an average going-to-court against Microsoft takes 10 years, and it’s pretty impossible to get them convicted – and if you do, they still weasel out of it, paying by providing the money in the form of ‘free software’ to schools – which they would’ve loved to do anyway, as they can hook children as young as possible on windows…
Three good reasons, imho. Oh, and judges don’t know much about software, too… It’s just very complex, and MS has a strict policy of deleting email after 30 days so they don’t keep records – it’s very hard to prove anything that way…
Three good reasons, imho
Actually your reasons make very little sense. You want the government to intervene because you don’t like the Microsoft OS, which just happens to be a market leader. Not much of a capitalist, are you.
I ask you where is the abuse of monopoly. (Don’t say Netscape. We really don’t want to open that can of worms.)
I’m sure we can all agree that in most scenarios the Microsoft OS is now a fundamental tool in home and business.
If Microsoft were outrageously over-charging for the product, this would require a government action, sure. But it is not the case.
I went from 1 to 3 and back to a score of 0.
Such compassion. Gotta love the fanatics.
I would argue that Linux was mostly usable as a server with 2.2. And I would guess that it’s a safe assumption that Linux is most widely deployed as a server, not a desktop. So while I get your point, i think that Linux did what Linux does best even back then.
In the end, MS didn’t have to deliberately do anything. There was no one else.
As usual, ronaldst, your memory is quite selective and forgetful. The history of the PC can be divided neatly into two eras. Before 1996 there were several types of PCs in homes and businesses, running different operating systems and different productivity suites. After 1996… not so much.
What happened? In 1996, Microsoft released Windows NT 4.0 and, crucially, the first release of Microsoft Exchange Server. Email had grown to be synonymous with network computing, and in one simple invocation of the client-server model, Microsoft made email proprietary. For nearly a decade, there would be no third party email client that could reliably interact with Exchange, and interoperability is still incomplete.
From what became known as corporate IT outward to multimedia systems, every client platform competitor effectively vanished. Email was the killer app on the PC, and Exchange bound Windows to the PC. It's really that simple.
I am not so sure about that because at that time you had several email platforms out there on the business side.
I worked for the government at that time (still do), and you had Notes, you had GroupWise, and what was really popular in government at that time was Banyan and BeyondMail.
The thing MS did on the government side to crush the competition was give away a lot of stuff. For instance, Banyan would not make global patches most of the time; they would make patches for particular clients and charge a good amount of money for them. MS would make patches for all of their products and give them away to everyone. For free.
Banyan was great. It ran on top of UNIX System V Release 3 (SVR3) (but that also limited its growth on new servers because of memory management issues, drive sizes, etc.).
You also had to buy stupid keys that you would plug into the serial port to activate the OS. And BAD marketing killed Banyan (just like it has hurt everyone else).
Honestly, MS is a marketing company more than a software company. MS's software is JUST catching up to their marketing.
Don’t forget their Lawyers, they’re good… They spend at least 10 times more money on lawyers and courts than on developing software.
Remember, what made MS big was 4 things.
1. They were the first well-known hardware-independent OS.
2. They were almost giving away everything but Windows.
3. They copied the competition. (For example, I used to manage Banyan and Novell servers; MS salespeople used to come in and tell us that LDAP was stupid, hard, and would never go anyplace. As soon as Novell was sidelined and MS had poached people like Jim Allchin from Banyan, they came out with AD. Or LDAP lite, as I call it.)
4. They were the only company marketing software only. Other companies like IBM were marketing PCs; they just happened to have software on them.
The biggest thing MS realized was that the biggest cost was the hardware; if you got every hardware company to put your software on all of their PCs, then people could shop around for hardware but always get the same software.
People couldn't care less about the hardware as long as they have the ability to do the same thing at work and at home and share things with friends and family.
What’s a hardware independent OS? Do you mean that their first OS supported the PC platform (x86+BIOS among other things) and that vendors quickly moved to get their clones working with their software to fully clone the PC?
I wouldn’t call that a hardware independent OS… I’m sure there were OS’s out there with minimal, for the time, asm and driver mechanisms for hardware.
You’ve got me curious now though. I wonder if they really were the first to sell only software.
I probably used the wrong terminology. What I meant was hardware-vendor independent.
Meaning, like I said, that theirs is still one of the only OSes that works almost 100% with almost every PC you can go to the store and buy, or that a business would buy.
Linux has not gotten to that point yet (Not the fault of Linux) And no one else is even close.
Back when Windows got popular, companies like IBM were using the Apple model: make a great OS, but the key purpose of the OS was to sell the hardware.
I wonder if, back when MS was small, there were any other companies selling operating systems that could run on any x86 PC?
Also, the other thing MS did was NOT care that people could copy their software. They figured out how to make profitable contracts, and the fact that people copied Windows and Office just made more future customers for MS. Companies like Banyan, on the other hand, made stupid license KEYS (physical keys that plugged into the back of the server), without which your server would not run, trying to squeeze out every penny.
Shoot, I know even in the government, when we were running NT4, we would just pop up a server whenever we wanted (still do it), probably breaking all kinds of licensing. But that was one more server that got counted when we renewed licenses (more money in MS's pocket).
I must admit MS knows how money works! And they know how to make it!
> Linux has not gotten to that point yet.
I think that assumption is wrong nowadays. Linux is probably the OS which supports the most architectures (x86, Alpha, MIPS, SPARC, POWER, …) available today. And when it comes to hardware on x86, I would also say that Linux has the most complete driver base. Maybe not for the bleeding edge, but for most of the older hardware, which often wouldn't even run on XP because the manufacturer never developed a driver.
And I think with the new Vista you can have the same hassles with hardware as on Linux: beta drivers for new hardware, no drivers for older hardware. You now also have to check whether Vista will support all your older hardware, and if you want to buy something new, whether Vista supports it.
You are right about the architectures, and I also think Linux has the biggest driver base. But remember that Windows Vista could have drivers for older machines if MS wanted it to. They don't want it to, because they want you to buy newer machines. Not for technical reasons, but to keep the triple threat going: OS/PC/CPU. They keep companies like Intel and Dell beholden to them by helping their sales. They force people to buy new machines.
But the comment I was making is that out of the box I know I can get either Windows 2000, XP, or Vista to run on almost any x86 machine made in the last 10 years or more. Yes, I may need to pull out drivers, but with Vista, for instance, I can force XP drivers to work, and work pretty well, if I need to.
For example, I have a pretty new Compaq laptop, designed for Windows XP but Vista-ready. XP runs fine (just have to crack out the driver disk), and Vista runs 100 percent on it with no driver disks. Linux runs OK; only a couple of desktop versions run on it, and they all run sub-par.
Freespire: won't install, can't see the Serial ATA hard drive.
Xandros: buggy video, sound, and login.
Ubuntu: wireless doesn't work, and video (Intel video card) only works at 1024 (I can get 1280 out of Windows Vista).
openSUSE: same issues as Ubuntu, but less stable.
And I know it’s not a Linux issue, a lot has to do with openness of drivers etc.
But MS has been able to make it look like a Linux issue.
Yes, they were pretty innovative in some areas. We talked about MS a lot in my studies (I study Organizational Psychology); Microsoft has been pretty innovative in the management and human-resources area. Interestingly, they seem to have lost that lately.
Though Microsoft isn’t a very nice chap, it’s not The Big Evil ™ – I think proprietary software at large is. If it wasn’t MS, it would’ve been Apple, Atari, Be or someone else…
I would like to see more cross-platform apps,
and games!
OpenGL games would run on every platform, not just on Vista (DX10) and XP (DX9).
That’s what it will take to get more users on Linux faster. Most people don’t buy an OS because of some ethical or moral reason, they buy it because it runs the applications they use. Well, that or because it came with the PC when they bought it.
Linux needs gamers simply because they do drive the market in a large way. Get Linux in the home and you will gain market share in the business. Going after business first, which is what it seems that many are trying to do, is just silly and counter intuitive.
businesses drive the market, not games. not saying that games aren’t a serious money maker, but it’s businesses that control market share.
Gamers are the ones buying the latest hardware and driving the need for better OS’s. Business probably spend the most money, but they aren’t the ones pushing for the latest and greatest.
Moreover, I still think that it's the home user that will drive the adoption of Linux more than the business user. Home users are more willing to take the risk to save money, or for whatever reason they have. In turn, those kids that grow up using Linux are also going to be the ones who end up making decisions at those businesses.
Don't get me wrong, Linux should keep targeting business, but they need to look at really targeting home users as well.
I agree wholeheartedly with godawful. We need businesses to take on linux because it solves their business problem more cheaply. That’s not happening right now since Linux may not in fact be cheaper for businesses than Windows because the big name linux vendors charge a lot of money on a yearly basis. Hopefully linux will become easier and support will become cheaper.
I’ll also pick up Kaiwai’s anthem and cite the difficulty of having a consistent user experience across distributions because there is no way of knowing which libraries and versions of libraries will be installed or even which desktop environment to target. Making closed-source applications viable on the generic linux desktop would go a long way to boosting business adoption.
Yeah right. Why do you think gamers are using IBM-compatible PCs today and not Amigas? Because it’s business that drives the market. People use at home what they use at work. That’s been true for two decades now.
It also could have had something to do with Commodore sitting on their asses while the PC steadily improved in the graphics and sound department and it eventually surpassed the Amiga.
You have to consider that Linux and Macs aren't very pleasant platforms for game developers.
The big majority of Macs sold are the cheaper models, like the small iMac, the MacBook, and the Mac mini.
Those models have a GMA950 GPU, which does not have any vertex acceleration and is thus, performance-wise, on the level of 1999-2001 GPUs, not acceptable for modern games. So you would probably have less than 2% of the market left, and that likely won't even be enough to pay the cost of the port, except perhaps for a few blockbusters (yes, I'm aware of the Doom 3 port).
On Linux, the ATI driver is very buggy and is thus going to increase the costs of a port due to problematic development (that is the reason Linux Doom 3 only runs on nVidia). The open-source 3D drivers available are unusable for more demanding apps as well. So you can probably only develop for half of the Linux market right now, which makes it unlikely to recoup the porting costs. Another problem is the higher support costs due to the diversity of the Linux market.
The fact that Linux users are less willing to pay anything for software (Quake 3 proved that) isn't going to help either.
I think we will sooner see the death of PC games than cross-platform games, which I would find annoying, because console games suck badly imho.
The fact that Linux users are less willing to pay anything for software (Quake 3 proofed that) isn’t going to help either.
Actually, I think it just “proofed” that a lot of gamers have a dual boot system or a second machine running Windows.
I have yet to see a real study with comparable control groups, everything based on sales is highly skewed
But Quake 3 for Linux came out the exact same day as the Windows version. So if there was a base of Linux Quake 3 gamers who bought the Windows version despite the fact that there was a Linux version, that would clearly show that they don't want Linux games at all.
So if there was a base of Linux Quake 3 gamers who bought the Windows version despite the fact that there was a Linux version, that would clearly show that they don't want Linux games at all.
Not necessarily.
First, I am sure that quite a few sales of Quake are handled by stores, and just because the vendor had a Linux version available does not imply that the stores did as well.
Second, as already pointed out in an earlier post on this topic, problems with proprietary graphics card drivers can also lead an otherwise Linux-using person to buy the Windows version instead.
I find it highly unlikely that gamers who primarily use Linux would be more reluctant to pay for software than, e.g., those who primarily use Windows.
In order to support such a claim one would need a valid statistical sample that is not skewed by other influences.
First, Classic isn’t really a viable platform of Mac OS X. It’s not even available on any new Mac you can buy from Apple anymore. Java is not an acronym and therefore writing it in all caps is meaningless.
Also, while the author stresses interopability between apps running on a single platform, he misses the point, which is: interopability between apps running on a single operating system (regardless of which ‘platform’ each app uses).
That means, I should be able to copy and paste between Java, Cocoa and Carbon apps without problems (and other more complicated things). This is the goal. The user shouldn’t care what platform an app was developed against, only that it works like it should.
Edited 2007-02-19 19:06
After reading this, did anyone else feel like you just participated in a KDE circle-jerk game of wet biscuit and were the last one off?
I can understand being a freak, but for God's sake… take your fanaticism and shove it.
I know Kaj would mention it, so I’ll do it as I’m here:
REBOL ( http://www.rebol.com/ ) with REBOL/View is a powerful and proven “multi-OS platform”.
” Imagine what will happen if KDE4.0 becomes such a winner like Firefox;”
You can imagine a lot of things, but when you look at the big Linux/Unix supporters you will see that they depend, or have started to depend, on GNOME, e.g. Novell SLED, Red Hat RHEL, Sun Solaris.
I like the idea, though, like many other ideas, but we have to suffer the real-life side effects.
Anyway, at least I know Xandros uses KDE, and they are considered an enterprise-oriented Linux producer.
Good luck for both.
“””
” Imagine what will happen if KDE4.0 becomes such a winner like Firefox;”
You can imagine a lot of things, but when you look at the big Linux/Unix supporters you will see that they depend, or have started to depend, on GNOME, e.g. Novell SLED, Red Hat RHEL, Sun Solaris.
“””
Yes. For smaller distributors, KDE’s big advantage is that it is easier to package up and maintain.
For distributors with more resources, and more customers, the usability advantages of Gnome outweigh its extra maintenance burden and its larger number of independent modules.
Edited 2007-02-19 21:26
KDE is also easier and more efficient to develop for.
It’s more a community project, and is technically ahead in most areas. I think plenty of reasons for the small distro’s to focus on KDE.
The usability advantages are there in Gnome, but I don’t think that’s the reason Novell, Redhat and Sun prefer it. RedHat did because of licensing issues, and I think that’s a big reason for Sun as well. They want to make it easy to allow proprietary software development on their platfroms. Novell just bought Ximian before Suse and the gnomes have always (well, until the last year) been better at advertising themselves.
Gnome can be steered by the big three in whatever direction they like – the develop it mostly together. It is Free Software, of course, they can’t fully control it (and that’s a good thing) but you can’t ignore 70% paid developers… So they made some choices not all Linux users like (the usability thing drove many away, though I’m not saying it was a bad move – it WAS innovative, and it helped the Free Desktop).
This control might be a reason, for the smaller distro’s, not to focus too much on Gnome. Even Ubuntu is increasing their KDE support, and I think that’s a smart move. Nobody knows exactly what KDE 4 will bring, but it might very well be ‘big’ in the sense of innovative and good…
KDE for Windows would be a huge boost for Linux and open source. It will cause people to start really developing applications for Linux, because they'll be able to reach the large Windows market for free.
Yes, AND Mac OS X…
“””
KDE is also easier and more efficient to develop for.
It’s more a community project, and is technically ahead in most areas.
“””
1. More efficient to develop for.
2. More community oriented. (That’s a new one!)
3. Technically ahead.
OK. Where are the superior community apps to back that up?
> OK. Where are the superior community apps to back that up?
Apps as in applications?
How about:
– k3b
– koffice
– konqueror with its powerful kio-slave plugins
– amarok
– kopete
– kate
– kpdf
– kmymoney
– krecipes
– konversation
– ktorrent
– …
If really all KDE apps become available for Windows, you'll have almost everything you need for your daily computer use. Except good games and maybe a virus scanner.
Indeed. Most of the apps in this list are ahead of their competition on the Free Desktop… K3b hasn't had competition for a long time, and aside from OpenOffice (which was a commercial product) KOffice doesn't have real competition either ('Gnome Office' isn't very complete or integrated). And not many would argue Amarok isn't ahead, even of the commercial competition…
The others (kpdf, kopete, kate) are ahead of, or at least equal to, their Free competition. Konqueror indeed doesn't really have competition, and kpdf can barely see evince in its rear-view mirror.
And this is after a long time with no release in the KDE area. KDE 4 apps have been improving lately, and many are already way beyond their predecessors – if they continue to improve at this rate, KDE 4 might almost make Gnome obsolete…
Gnome is already obsolete.
Well, it’s a bit harsh to say, but it might be true. It’s behind, even after more than a year of no big KDE releases. I wonder how it will recover from a seriously good KDE 4. It might survive a bad KDE 4, though, and we don’t know which one it’ll be for sure
And it is free software, so as long as ppl work on it and use it, it’ll continue to exist. Red Hat, Sun and Novell won’t give up on it soon anyway, though they may decrease their investments even more (they’ve been cutting down for quite some time now).
And in the usability area, Gnome still has some unique value. Though I wonder how long that’ll last.
I use Gnome myself, but there are quite a number of KDE apps that don't have a good Gnome alternative. I use Klipper, KTorrent, Kate, Krita, and Scribus (Qt). K3b is an oft-cited example (though I generally use Nautilus for burning).
Edited 2007-02-20 21:48
Ah, Gnome boys, do I detect a tone of uncertainty? Do you feel that you have to shout out some big-name companies to defend your platform?
This is quite worrying – it just reinforces my view that Gnome users are too 'fruity' to pass proper judgement.
Comparing graphical toolkits with a window manager and the set of applications that are available for it makes no sense.
According to the author, interoperability with the native platform and other applications is a problem for the existing graphical toolkits but not for KDE, because people will only depend on KDE apps. Please, give me a break.
But I don’t think it was very complete or in-depth.
I wouldn’t think of KDE as a platform, I think it’s a desktop environment. http://www.kde.org/ I think the acronym KDE actually stands for K Destop Environment. And I think it’s kind of silly to make up new words for ideas that have been around a long time… (OS-compatibility layer would have worked just as well, and been clearer). Still, a nice topic to start on.
I think Qt is nice though, except they started before the STL (well before a _usable_ STL compiler), so I’m wary of the fact they have all their own home-brewed containers. Eck. Can’t say I’ve worked with it much, though.
Of course there’s GTK for the C folk .
And C++ if you like. I’ve used pygtk, and found it quite nice. The one nice thing about gtk is all the bindings in different languages. I don’t think QT really has this, but I could be wrong (I’d imagine it’s harder to make this work with name-mangling, etc).
There’s lots of cross-platform (oop multi-os platform ) stuff now. For games, OpenGL is almost to low-level to bother with, especially with windows and their gradual hobbling of OpenGL. And now we’ve got CrystalSpace, Irrlicht, Ogre (just rendering), etc. And they all work well with OpenGL or DirectX, and often even do tricks straight with the hardware.
Oh, and don’t forget SDL!
That old workhorse is actually kind of nice.
My 2 cents to the conversation…
Have fun!
I wouldn’t think of KDE as a platform, I think it’s a desktop environment.
Well, KDE is
– a free software project
– a community of said project
– a platform
– on X11 a desktop environment
– an eco system of applications
I don’t think QT really has this, but I could be wrong
Depends what you are referring to.
If you mean that it doesn’t have C bindings, you are right.
If you mean that it doesn’t have language bindings, you are wrong (at least Python, Ruby, 2xJava, C#)
> Well, KDE is
> – a free software project
> – a community of said project
> – a platform
> – on X11 a desktop environment
> – an eco system of applications
Yup, much better definition. But it's not something you would use for a cross-platform (i.e. I want my program to work on Windows, Linux, OS X) application. It would be kind of silly to require someone to install all of KDE on a machine just for one program. Qt, maybe. That was what I was getting at. A platform, yes, but not the kind of platform to solve the problem this article was seemingly addressing.
> If you mean that it doesn’t have language bindings, you are wrong (at least Python, Ruby, 2xJava, C#)
I was wrong. Bindings in more languages than I know. Good enough.
>It would be kind of silly to require someone
> to install all of KDE on a machine just for
> one program.
Then many people will behave silly within a year, because I really think many KDE apps will be used by Windows users. Remember, pretty much all KDE apps will become available for Windows. And many of the kinds of apps KDE offers are currently paid for by users on Windows… This might prove to be a hard time for small software vendors.
But it’s not something you would use for a cross-platform (i.e. I want my program to work on Windows, Linux, OS/X) application
Really depends what you want your application to do.
At some level of functionality you have the choice of either doing lots of platform dependent exceptions (e.g. #ifdef stuff in C/C++) or use a portable framework which does this for you.
It would be kind of silly to require someone to install all of KDE on a machine just for one program. Qt, maybe
Well, you wouldn’t need “all of KDE”, just the “KDE as a platform” part.
And as I wrote above, depending on the functionality your application should provide, a smaller framework might lead to lots of platform-dependent work for you. It's always a tradeoff: dependency vs. increased work.
<em>I think Qt is nice though, except they started before the STL (well before a _usable_ STL compiler), so I’m wary of the fact they have all their own home-brewed containers. Eck. Can’t say I’ve worked with it much, though.</em>
Then you need to start working with it before you make such blanket statements. While it’s true that Qt predates the standardization of the STL, it is not true that you need to be wary of it. The Qt containers are so compatible with the STL that you can use one’s containers with the other’s algorithms, and vice versa. (It goes without saying that conversion functions are included.) Also, Qt’s containers are optimized for speed and efficiency, and they’re somewhat easier to use. But even so, nothing is stopping you from using the STL with Qt.
<em> I don’t think Qt really has this, but I could be wrong</em>
Qt doesn’t have nearly as many language bindings as GTK+, but that’s because it’s geared towards professional developers rather than hobbyists. In the commercial desktop software domain, C++ and Java overwhelm all other languages, with Python as an up-and-coming challenger. (Web applications may have a different mix of languages, but web applications don’t use GUI toolkits.)
Qt has native Java bindings (currently in beta), and the third party Python PyQt bindings are extremely popular.
That’s good to hear that Trolltech is doing Java bindings, because that’s something I would really like to see! The QtJava project has been inactive for over half a decade now!
> I wouldn’t think of KDE as a platform, I think it’s
> a desktop environment. http://www.kde.org/ I think
> the acronym KDE actually stands for K Desktop
> Environment
and several years ago you’d be right. the acronym is now something of an anachronism as we move not only to more platforms, including non-Linux/UNIX ones, but our set of APIs becomes more and more comprehensive.
when you can write one body of code and only see the APIs native to it, it’s a platform. welcome to today’s KDE.
as for bindings, we have bindings to Java, Python, Ruby and Javascript in addition to C++. people are working on C# bindings and even the perl bindings seem to be reviving; we’ll see if those gain any traction. bindings with real-world use cases tend to thrive and those that are little more than cute toys tend not to.
There’s a lot more to multi-platform development than your GUI toolkit. It’s unlikely you’ll write a useful program where the only porting concern is the GUI…
You have to worry about:
Data retrieval (from the disk), possibly an XML library or ODBC to connect to a relational database.
Network issues: Are you using system calls for sockets? Or are you using a wrapper library that’s already ported, or making your own?
Threading issues: Threading can change in subtle ways on some platforms, although most are largely equivalent today. However, if you’re using C you’ll have issues with threads on Windows vs. Unix (you can use pthreads with a compatibility layer, or the other way around).
Related to the GUI you have usability issues (I don’t mean accessibility): Mac users don’t want what Windows users want…
It’s not just about picking Qt or wxWidgets or SWT and being home free… There’s a lot to consider. For example, you might fix almost everything I mentioned by using Java. But then you find that the speed isn’t good enough for some small part of your program. You can then write some ANSI C code (and complicate your build cycle – a lot, if you don’t understand your build cycle). Or you can force your users to deal with a little slowness.
Or maybe you choose C# because it’s portable, right? There’s Mono, sort of. Good luck finding bindings! And good luck sticking to the part of Windows Forms that Mono implements, not to mention the fact that Mac users will reject your program hands down because it will look very odd to them. After they install Mono just for that program.
Or maybe you choose Python… It’s a long long debate and it’s much more complicated than what GUI toolkit you want to use.
Or maybe you choose Python… It’s a long long debate and it’s much more complicated than what GUI toolkit you want to use.
For a lot of non-performance-critical apps, Python + wxPython could really be the way to go. Not only is it a multi-platform choice by design, but it actually looks native everywhere (because it is native: the wxWidgets library works as a unified wrapper on top of the native libraries for each supported system; on Un*x the default wrapper is for GTK – I don’t know if there are others).
I don’t understand why in 2007 people still always yell out “Java!” when they think of interoperable languages, when Python is at least as interoperable, less complex and, thanks to wxPython, features native widgets for all OSes.
Unified wrappers are nice, but they still miss the mark with picky users. I’m not against them (I use them), but it’s an unfortunate reality that your users will bug you about platform specific behavior that they want. And you could implement it for their platform, and often you probably will.
But it’s still harder than just supporting one platform.
I believe they support Motif as well, but it’s not pretty (and I don’t just mean aesthetically).
“There’s a lot more to multi-platform development than your GUI toolkit. It’s unlikely you’ll write a useful program where the only porting concern is the GUI…”
You’re right. Therefore, some standards exist, for example POSIX. Along with cross-platform compilers, one could be fine.
You mentioned some fields where this will be of importance:
– Data retrieval
– Network issues
– Threading issues
“Mac users don’t want what Windows users want…”
And sometimes, users don’t know what they want (or need).
“Or maybe you choose Python… It’s a long long debate and it’s much more complicated than what GUI toolkit you want to use.”
There’s more. What about internationalisation, language support, and charsets? Do you offer multi-language support? How? Via gettext? Or do you write all the possibilities yourself? Can you guarantee the correct charset will be loaded?
It’s because I’m actually thinking of making an application multi-OS capable. I’d like to use GTK along with C. It will run on BSD and Linux. Will it run on MacOS, too? Or will I have to use Java? Or implement it as a kind of browser application? Uh…
All these things are provided by the KDE libraries. Read my reply to the parent of this thread. If you go for the KDE 4 libraries, you won’t have to worry about any of these: they provide it all, platform-independently. Easier and much more complete than GTKxyz or other toolkits.
You get free and easy multimedia support, network transparency, spell and grammar checking (auto-language detection thrown in), communication support (easy cooperation between people over the network!), and internationalization et al.
Unless you’re a C-only guy, I think this is the way to go. Check http://developernew.kde.org/ and have fun.
(This site is being built; content is migrating from several sources, including the next link. They will change the name, but there’s no decision on what yet.)
http://developer.kde.org/
Thank you for your comment.
“You get free and easy multimedia support, network transparency, spell and grammar checking (auto-language detection thrown in), communication support (easy cooperation between people over the network!), and internationalization et al.”
I have to admit this really sounds good. I just worry about two things:
1. How about licensing for commercial projects?
2. How about performance on older systems?
At the moment it’s no issue, but maybe one day, when I get started on a concrete implementation…
“Unless you’re a C-only guy, I think this is the way to go. Check http://developernew.kde.org/ and have fun.”
I prefer C over C++, that’s true. KDE and its frameworks are usually used with C++, but that should be no problem.
The KDE libs are LGPL, so you can do pretty much anything with them, but Qt is GPL, and you can’t mix Free with non-Free software. So if you want to write non-GPL-compatible software with Qt, you need a non-GPL version of Qt, which you can purchase from Trolltech at http://trolltech.com/
They charge per developer seat, not per sold product, and are willing to make a special deal if you’re a startup and can’t afford their prices – they’re very reasonable, so just email them if you “would use it if it wasn’t so expensive”.
Btw, your money is being used to work on Qt and KDE, so it’s well spent. Trolltech’s shares are owned mostly by its employees, and they sponsor a lot of KDE developers (full-time!) – so it sure isn’t some kind of evil company.
There’s more. What about internationalisation, language support, and charsets? Do you offer multi-language support? How? Via gettext? Or do you write all the possibilities yourself? Can you guarantee the correct charset will be loaded?
I’ve never experienced an issue with different platforms and multi-language tools… otherwise I’d have included that.
It’s because I’m actually thinking of making an application multi-OS capable. I’d like to use GTK along with C. It will run on BSD and Linux. Will it run on MacOS, too? Or will I have to use Java? Or implement it as a kind of browser application? Uh…
Mac users will shun GTK applications until GTK has been implemented natively on the Mac: it runs under X11 right now. If you’re going to need X11 on the Mac, you’re not going to be well liked there: X11 on the Mac really, really sucks for programs a user works with a lot.
You’re right. Therefore, some standards exist, for example POSIX. Along with cross-platform compilers, one could be fine.
Too bad Microsoft doesn’t implement much of it without extra tools that no one installs on a desktop. Not to mention, <sarcasm>the pthreads system is always wonderfully well implemented to its incredibly useful specification.</sarcasm>
KDE and Qt provide almost everything an app needs in a platform-independent way – everything you mention, for example.
KDE 4 apps, using the KDE and Qt libraries, will compile for Windows and Mac OS X almost without effort. So the guy is right there: a KDE app will be mostly platform-independent, yet can be coded in C++ (or Ruby, Perl, Java…).
Remember, the Qt 4 and KDE 4 libraries are more than a GUI toolkit. They provide database functionality, network transparency, multimedia, configuration, XML/SVG/etc. import and export – all of it platform-independent.
Or that’s the plan; it isn’t out yet, of course. Though you can already download development snapshots of the KDE libraries for Windows and Mac OS X (!)
C++ is a portability issue in itself: the only thing saving us today is g++. However, the other listed options are fine.
But KDE 4 might turn out to be an incredible platform to go with. It does have one problem: being Qt-based, if you plan to sell your software you’ll have to pay Trolltech. But other than that, as long as it doesn’t carry the horrible price KDE 3 applications had to pay (kdeinit), it might be great!
I haven’t looked into it.
You can sell your software, for sure, but it has to be GPL-compatible. Qt is Free Software, and you can’t use parts of Free Software in non-Free Software – thus you’d have to pay if you want to make money selling proprietary software. You can still build in-house software and things like that, though.
Anyway, they’re very reasonable; if you can’t afford their prices (you’re a startup?) you can contact them and they can make a nice deal. It’s a really nice company, owned mostly by its own employees. They use the money from proprietary software vendors like Adobe and Skype to develop Qt and sponsor KDE – a beautiful deal, if you ask me.
And what’s wrong with kdeinit? It can be disabled, but it’s there to speed up the starting of applications by a factor of 3 or more… As long as the GNU toolchain isn’t up to the task of efficiently starting C++ apps, resolving dynamic dependencies on the fly in a decent time, kdeinit will have to stay…
*sigh*
Spending money, no matter how reasonable the party you spend it with, is always an issue. Trolltech being reasonable and only charging you maybe $500 to license their software is still $500 more expensive than free. You have to consider this.
Most people who sell software do not sell GPL software: I’ve yet to see anyone sell GPL software (I’ve seen many sell GPL software support contracts).
kdeinit takes more than half a second; that’s what’s wrong with it. The cost of launching a KDE program outside of KDE for the first time is very high right now: I know, I used KMail and not KDE for a long time.
Sounds like a C++ problem.
Sorry, but I can’t really feel for you – you want to use Free Software to make money off of other people, and you’re absolutely against paying for it?
I know money is always an issue, but Qt is worth it – it’s simply better than the free (as in money) competition, and you get support as well (which isn’t even available for many other toolkits). Lastly, I think the LGPL is bad – it doesn’t make everybody contribute; it allows freeriding – and that sucks. At least the GPL forces you to contribute OR pay – either way, I’m happy.
kdeinit doesn’t slow down the starting of apps outside KDE (they would have to spend the same time loading their libraries themselves), and it greatly speeds up subsequent application startups after it has been started. AND you can turn it off.
I didn’t say I wanted to do that… I also don’t desire your sympathy… I’m simply pointing out that people exist who wish to use free software (due to some pragmatic benefits for them) to develop proprietary software. Regardless of whether or not I’ve ever done this (5 minutes on Google will put your foot firmly in your mouth), these people do exist and they are not sub-human (yes, I’m defending them).
Qt probably is worth it, though. However, some people may not have the funding, or may not be willing to risk the funding (then again, maybe they shouldn’t be risking the time either, right?). It’s just one item to weigh in your considerations.
The GPL does not force you to contribute or pay. It forces you to contribute – or, being the single copyright holder, you can offer multiple licenses (yes, copyright holders are allowed to use as many licenses as they wish; they just have to uphold them all). This is what Trolltech is doing with Qt: multiple licenses. I’m not sure their licensing mechanism has been shown to hold up in court, but I wouldn’t want to be the first to contest it either!
If kdeinit isn’t slowing things down, then something else is wrong, because KMail takes longer to load than Evolution, and almost every other application for that matter. As does every other KDE application when it’s the first one you load. There’s something slowing the process down, and it’s a real problem on non-KDE platforms.
Is C++ linking just this horrible on Linux/BSD?
About the legal issues of the GPL license: the way Trolltech uses the GPL is given as an example of a good way of making money with Free Software on the site of the Free Software Foundation, so I don’t question it…
About kdeinit: I think what’s slowing things down is the KDE configuration database checking all timestamps to see if the files have changed (and if so, re-reading them). It maintains a binary cache of them so retrieving configuration data is faster. But it has to run once before you can use KDE apps. It is normally started on the first startup of KDE, but when you use KDE apps outside of KDE, it is started the first time you start a KDE application. You can see all this, btw, if you start a KDE app from the command line.
And yes, Linux has some serious drawbacks in the area of starting apps, though a lot of time has already been spent on this. An example was fontconfig – KDE applications often spent more than 25% of their startup time loading fonts… This has been fixed in fontconfig > 2.4.x, if I recall correctly. And there are (and have been) more issues like this one.
You see, Microsoft focuses a lot of effort on performance – and they might have some bad code and things holding them back, but in other areas they excel (pre-fetching data, re-arranging data on the disk for faster loads, stuff like that – Linux doesn’t have it, or only a little).
The FSF being behind something doesn’t exactly make it a legal reality… I’m well aware that the FSF has no problem with people making money off free software, although it is news to me that they don’t mind dual licensing.
Windows applications, I believe, do a lot less dynamic linking and use a lot more built-in static code. There are things like prelink on Linux that work pretty well, but TMK don’t work with KDE applications.
I’ve not noticed a serious load-time penalty for C programs on Linux… And knowing that the C++ ABI on GNU has been something of a nightmare, I wonder if that is partially to blame for KDE’s troubles?
The other trouble I’ve had with KDE programs outside KDE is that there seem to be race conditions in the startup if you start multiple KDE programs at the same time. I eventually learned to start one, wait for it to load, and then start the other.
Again, the FSF mentions dual licensing as a GOOD WAY of making money with Free Software. And as they wrote the GPL, their opinion on these matters should (and will) be important when someone interprets the GPL in court.
It might be the C++ in GNU; I’m not sure about the exact reasons. Things have progressed lately, and the startup times have decreased. And there is the point that KDE apps share more code than the average C application (for example Gnome apps), so there is simply more library to load. The way the libs are loaded now isn’t efficient. Well, it IS in terms of memory usage (they are mmapped) but NOT in terms of startup speed, as they aren’t read sequentially but jumped around in, which seriously degrades throughput on a disk drive… There is a patch which fixes this by immediately loading the whole library into memory when you start the app, improving startup time at the cost of memory usage. It didn’t get in, though. Some distributions include this patch.
The race might be kbuildsycoca checking the stamps twice and interfering with itself.
> There’s a lot more to multi-platform development
> than your GUI toolkit.
you’re right. and everything you mention is available in Qt itself. and a lot more. KDE libraries provide even more on top of that.
perhaps it’s time to revisit the platform before you comment on it
The Runtime Revolution tool is a descendant of Apple HyperCard. It is very easy to use and it can cross-compile for Windows, Mac Classic, Mac OS X (Universal), Linux, FreeBSD, Solaris, and HP-UX (the odd Unices only with older RunRev versions, not the current one). You can buy a license, develop on one platform, and deploy native applications on any.
I think any developer that is faced with multi-OS needs should really look into this tool. And today they ship a version compatible with Vista.
http://www.runrev.com
Cheers
andre