Since Nokia announced its switch to Windows Phone 7, people have been worried about the future of Qt. Well, it turns out Nokia is still going full steam ahead with Qt: it has just announced its plans for Qt 5. Major changes are afoot in both code and functionality, but the biggest change is that Qt 5 will be developed out in the open from day one (unlike Qt 4). There will be no distinction between Nokia developers and third-party developers.
Lars Knoll announced the changes, as well as the intended goals and scope of Qt 5, on Qt Labs. There are four goals for Qt 5, and according to Knoll, achieving them requires breaking binary compatibility; source compatibility will be maintained ‘for the majority of cases’. Knoll promises the transition will be far less painful than the one from Qt 3 to Qt 4. In any case, these are the four goals:
- Make better use of the GPU, allowing you to create smooth (and accelerated) graphics even with limited resources
- Make creating advanced applications and UIs easier and faster (with QML and JavaScript)
- Make web-connected apps as powerful as possible, i.e. embed and power up web content and services in any Qt app
- Reduce the complexity and amount of code required to maintain and implement a port.
The focus will be on X11/Wayland on Linux, Windows, and Mac; other platforms will be on the back burner within Nokia. “The goal of the Qt 5 project is to offer the best possible functionality on each platform, implying that Qt will begin to offer more differentiated functionality on some OS’s, while still offering efficient re-use for the absolute majority of the code across platforms,” Knoll adds.
The remaining platforms currently supported by Qt will have to be added to Qt 5 by the community, which ought to be easier now that Qt is switching to an open governance model. “Qt 4 was mainly developed in-house in Trolltech and Nokia and the results were published to the developer community,” Knoll writes, “Qt 5 we plan to develop in the open, as an open source project from the very start. There will not be any differences between developers working on Qt from inside Nokia or contributors from the outside.”
The plan is to have a beta release toward the end of 2011, with the final release hitting somewhere in 2012.
Sounds good!
While open development is good (it was pretty open to begin with; development happens almost in real time in public Git), I see the switch of graphics system as the most important change.
The focus now switches to making QML as fast as possible (with maximal GPU support) by making the Scene Graph the primary drawing method.
This is what Aaron Seigo was waiting for, with an eye on Plasma speedups:
http://www.osnews.com/story/23899/Aaron_Seigo_Plasma_Overhaul_Plann…
I loved working with Qt. It’s a great toolkit.
Now I have just released my first .NET application (gotta learn that as well) and since I want to have it on Linux my only question is: how good is Qt in Mono? Will there be Qt5 bindings for Mono when Qt5 is released?
Has anyone tried Qt and Mono and is willing to share the experience?
You could try the Java Qt binding (Qt Jambi); it's supposed to be pretty good, and Java and Mono are pretty similar languages.
But the whole application is written in C#. What good are Qt bindings for Java when the language is C#? :S
The Java Jambi binding is dead anyway. I think KDE has a working C# binding (SMOKE/Qyoto), but I have never heard of anyone using it, so I can't tell. If KDE has one, then Qt will work as well.
I can assure you it's not dead. Trolltech abandoned it, but there's a small group of people working very hard on it. Mavenizing the whole project, ironing out bugs, … It's all being worked on.
http://qt-jambi.org/
The package for Qt bindings for C# is called Qyoto:
http://en.wikipedia.org/wiki/Qt_%28framework%29#Bindings
http://techbase.kde.org/Development/Languages/Qyoto
AFAIK you do not need to have Mono installed.
Yes, I’ve heard of it. But has anyone used it? Is it good? Any thoughts on it? Experiences? Tips and tricks?
It is autogenerated and, as far as I know, nobody uses it. Sometimes a clean C++ reimplementation is a better idea. C# -> Qt is possible, but if you use Mono, your code will still use the .NET standard library instead of Qt's, so it won't mesh that well. I'm also not sure how it handles signals and slots; that works fine in Python and used to work fine in Java, but I am not sure about C#. If the app is less than 5K LOC, I would consider a rewrite; it will be easier than debugging a binding that you're the only one using.
Of course you need Mono. Mono is the .NET Runtime required by C#.
As for the bindings themselves, they're not too good.
ALSO, worth noting in the announcement by Nokia is that they admit what I and others have been saying for months, which people on this website have dismissed.
One platform to rule them all is not practical at all.
This is exactly what's true of the current situation with Silverlight. You can reuse 80% of your code across Xbox 360, Windows, Surface, Zune, and Windows Phone 7, but the other 20% must be specifically catered to the host platform.
The fairy tale that you can just write it once in Qt, and have it look, feel, and behave the same on all the platforms Qt supports is exactly that, a fairy tale.
So I’m glad Nokia has come out and said this, I mean, even if they are repeating what any competent developer has been screaming at the top of their lungs for some time now.
Qt: good platform. Possibly the best C++ platform for application development (despite the fact that it heavily dresses up C++ into something usable using its Meta Object Compiler).
Let's not make things more than they are.
I looked into this … it appears that you are correct.
Although C# is an ECMA standard and is covered by Microsoft's Community Promise, Mono inseparably includes a lot more than just the C# runtime, including components that Microsoft claims as proprietary and that are covered by Microsoft-held patents.
This would appear to make the whole thing a bust.
Better to program in any language but C#.
Perhaps for those who do not like C++, D is the go.
http://en.wikipedia.org/wiki/D_%28programming_language%29
Yeah, just use Java, then you'll be safe from patent litigation!
But seriously, patent suits could come from anywhere, so being overly concerned with .NET doesn't seem useful. This is especially true when you consider that two implementations of Java have been taken to court, while AFAIK no .NET implementation has seen the same treatment.
I would say Microsoft’s Community Promise actually provides more protection than is usually provided (ie, none).
… and I would say that Java and .NET are the only two languages that one should avoid because of threats of lawsuit from the originators of these languages.
Most languages are MEANT for people to use and program applications in.
I agree that languages are meant to be used! C# is a pretty nice language with a good set of libraries, it should be in the arsenal for those who want to take advantage of it!
The point I’m making is that it is nearly impossible to determine in advance (who saw Oracle v. Google coming?) if there’s a body holding some patents which may or may not read on the implementation that you’ve chosen to use.
It doesn’t even have to be the originator of the language who holds the patents, it could be anyone.
The point I’m making is that this is ONLY a problem for Java and .NET.
Every other language is fine.
It’s a problem for /EVERY/ language that needs to be interpreted; Perl, Python and PHP included. And as every language is interpreted or compiled at some point, there is no “safe” language.
However C++ is safer than most as:
a/ you’re redistributing a compiled object
b/ and there are several C++ compilers, so should one C/C++ compiler be targeted by a patent lawsuit, developers could switch to another.
I was talking about threats from holders of patents. Perl, Python and PHP are all perfectly safe from such threats.
What are you talking about?
If Oracle has a patent on a garbage collection technique, then what saves the Perl, Python or PHP garbage collectors from the threat?
Programming languages don't run natively on the hardware; they have to be compiled or interpreted. Thus every language's compiler or interpreter might (accidentally) use patented technology.
A similar example would be the developers of WebM having to work around MPEG patents in WebM's encoders/decoders. Except with languages it would be garbage collectors, methods of compiling arrays into ASM stacks, and so forth, instead of methods of image compression.
To say that only Java and Mono could have accidentally used/stolen patented technology, given the current state of the US patent system, is a little naive.
Interpreter technology has been around for ages. Donkey's years.
http://en.wikipedia.org/wiki/APL_%28programming_language%29…
http://en.wikipedia.org/wiki/Perl
http://en.wikipedia.org/wiki/Interpreted_language#Historical_backgr…
http://en.wikipedia.org/wiki/Interpreted_language#List_of_frequentl…
Prior art in this field is extensive. It is highly unlikely that a newer language such as Python or PHP implements techniques that are not foreshadowed by something like Perl, which is over two decades old. The main differences are in the language syntax, not in the fundamental concepts.
http://en.wikipedia.org/wiki/Prior_art
I still contend that Perl, Python and PHP are all perfectly safe from patent threats.
Who exactly is going to sue anyone for writing an implementation of Perl, given that Perl has been around for more than 20 years and patents expire after 20 years?
The car analogy: just because cars are 100 years old, doesn’t mean you can’t patent a new engine design.
The same holds for language implementations. To assume that a Perl interpreter uses the same internal structure it did 20-odd years ago would be wrong.
I'm guessing you've just ignored the news items on here about updates in design for BSD's C++ compiler. Just because a language may standardise doesn't mean development ceases on compilers or interpreters.
Also, Perl isn't a static language. It may be over 20 years old, but Perl v5 is only 17 years old, the CPAN archive is 16, and clearly there are going to be libraries in CPAN which are much younger than that. Furthermore, you may check your own Perl code for known patents, but who's to say that the entirety of CPAN is clean of patent-infringing code? So are you now advocating that Perl developers steer clear of CPAN and use a pre-v5 implementation of Perl?
As for Python, well, the main interpreter (CPython) compiles to byte code (like Java and .NET) and runs inside a virtual machine (just like Java and .NET). So the CPython engine might well infringe an Oracle or MS patent. Granted, you could move away from CPython and use another Python interpreter, but the next most complete project is written in Java anyway – thus on even more unstable ground (as raised by yourself) than CPython.
I could go on but I think I’ve made my point; no language is safe from patent threats these days.
C and C++ seem to be safe, unless you sue ISO. In emergency some compiler optimizations could be replaced with alternatives, or whatever.
Ada is probably safe too, unless you want to risk annoying the military-industrial complex.
Why is it only a problem for Java and .NET ? What exactly saves the other languages?
+1
I would stay far away from Mono. It’s a lawsuit waiting to happen. Most of the useful classes and other things in Mono are all patented by Microsoft.
An alternative option I would happily recommend is the Object Pascal language via the Free Pascal (FPC) compiler. FPC supports a lot of platforms (10+), including mobile ones, and is 32-bit and 64-bit enabled. The language is damn easy to learn, and full object-oriented programming support is included (it is NOT the Pascal language from the '80s!). FPC also has excellent documentation, loads of bindings to various libraries out of the box, and a huge FCL (Free Pascal non-visual component library) for a myriad of things.
There are also many GUI toolkits to choose from as well.
Lazarus LCL, which includes a very capable IDE too, with a visual forms designer, integrated debugging, an excellent editor, etc. The LCL uses native widgets from each platform and supports Mac, Windows, Linux, Windows CE, etc.
Then there is fpGUI Toolkit, which is a 100% custom drawn toolkit, if you want the exact same look and behaviour on all platforms. It supports Windows, Linux, Mac, Windows CE and Embedded Linux.
There is also MSEgui, which includes its own IDE and is also 100% custom drawn, supporting Windows and Linux.
We have used FPC for over 8 years in a commercial environment (moving away from Delphi) and our products run under Windows, Linux and Mac with great success!
http://www.freepascal.org
http://www.lazarus.freepascal.org
http://fpgui.sourceforge.net
http://www.msegui.org
I can’t wait for Qt to replace GTK+ on desktop Linux. In fact I think most of the GNOME environment needs to be abandoned or replaced. Then maybe more people would make Linux applications.
Not going to happen, but we can dream.
(Sarcasm on)
Yeah… since multiple choice is always bad and there should be no competition so that people can stick with only one piece of software… hey, wait a minute!!
(Sarcasm off)
GPU acceleration required + more javascript = a big meh, as far as I’m concerned.
There are those who, like Haiku and Enlightenment, choose to optimize their graphics stack enough that it doesn't require a gaming-grade GPU to smoothly draw gradients and buttons. And there are those who just give up, use a slow web scripting language everywhere in the UI for optimal responsiveness, and turn on the battery hog that a GPU is for trivialities. Sadly, more and more organisations put themselves in the second category…
Haiku’s app_server optimized? Don’t shout it too loud…
And mind, I love Haiku, as much hacked together as it is.
As I already told you in other threads, in the IT world VM-based environments + JIT are more valuable than the previous all-native solutions.
The hardware allows it, and people prefer the productivity gains over the difficulty of dealing with low-level APIs.
Surely you can also create high-level APIs in native languages, but no one seems to care about it.
It does not mean I agree with it, but it is the reality, and I doubt any of us will be able to change it.
QML sort of lets you have your cake and eat it too. You have full access to the native platform on “metal” level (C++ & Qt), while allowing you to write as much of the code in “scripted” environment on QML side.
It's a much neater concept than e.g. C#/Silverlight, where you are forced to write almost everything in C# – and C#/CLR is still not “low-level enough” when you really want that (to conserve RAM, hand-tuning algorithms to optimize CPU cache use, whatever).
I should point out that on the UI level Silverlight is a pretty similar concept to Qt Quick, but uses XAML instead of JavaScript. From what I've seen of Qt Quick, binding to the UI is not as intuitive as in Silverlight.
I don't like JavaScript that much (probably due to lack of familiarity), but its syntax is far cleaner than XML (which gets worse when using a tool like Expression Blend, as it literally vomits out XML tags).
Everything else is very true, but that has been the tradeoff with managed languages since forever…
The big difference between QML and XAML is that you just can't add nontrivial logic to XAML; all the logic ends up in the C# file. XAML can only do the “declarative” part, i.e. you can do simple bindings there.
You can start going on about enforcing the separation of concerns as being a good thing, but you would only be covering up inherent flaws of selecting XML as the ui annotation technology ;-).
What does this mean?
you’ll get no argument from me there
It could be from my lack of familiarity. Or it might stem from Silverlight's more complete widget library, but there are more explicit/predefined ways of binding events to the underlying logic or exposing events to the UI layer (the whole Model-View-ViewModel thing).
Then again, the ViewModel approach is not required in Qt Quick due to the ability, as you said, to do non-trivial logic in the UI layer.
(Just for clarity: the ViewModel is essentially the controller architected as a model for the UI, the View is almost pure graphics, and the Model is the same as always.)
Ah, you were contrasting against raw QML. Luckily we have Qt Quick Components hitting production status in the next few weeks:
http://confusingdevelopers.wordpress.com/2011/04/01/intels-qt-quick…
http://labs.qt.nokia.com/2010/09/10/building-the-future-reintroduci…
You can do multibinding, binding to ancestors, element bindings, binding with data validation, binding to pure XAML data sources, priority binding, and result filtering in XAML…
PLUS really anything else under the sun you can imagine by creating a markup extension (Hint: “{Binding …}” is simply a built in markup extension.)
But it speaks to a greater issue, and it's a valid one: testability and separation of concerns.
The fundamental look of a View and the mechanics behind that View should be separate things, in my opinion. This is the difference between a View (which should ideally be pure XAML) and a ViewModel (which is simply an adapter for the Model).
So in my View I handle the layout, where bindings will appear, and do all the visuals (animations, caching brushes, freezing resources, etc..)
The ViewModel is where you'd communicate with the Model and turn its platform-agnostic data into WPF/SL-specific entities. You can also optionally present different values for design time and run time, so I can see what my UI will look like without having to run my project.
The separation is an important one, and I'm glad it's like that (for those who really have a problem with it: you *can* use inline code in XAML, though not in Silverlight, IIRC). Intermixing logic with my UI code in QML doesn't sound like my cup of tea.
XAML is directly woven into the .NET object model. Every XAML element corresponds 1:1 to a .NET class. I can traverse them using C# and have first-class support inside the IDE.
Besides, JavaScript is such a yuck language to use compared to C# for these kinds of things. A typeless, completely dynamic language is really an abomination… and should never be part of the picture moving forward.
Hell, I’d rather do C++/QML than deal with the incoherent mess that is Javascript/QML.
OK, so let's say “you can only do binding in XAML” instead. As an example, can you invoke XHR in XAML and store the result in a database?
Let me wager “markup extensions” need to be written in C#?
It’s not for everyone, but it’s pretty damn powerful and agile. You can do janitorial tasks like moving inline stuff out later, if you find it offensive. Just having the option to do it makes QML more expressive.
In QML, everything is a QObject (or QDeclarativeItem) as well. There is no “code generation” like with XAML, though. I don’t really miss it.
JavaScript is not typeless; it's dynamically typed (like Python, Ruby…). It's yuckier than many other languages, but it's pretty much here to stay – and as it appears, it's a valuable skill in today's job market, so tons of people know the language.
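For what it's worth, the distinction is easy to demonstrate in plain JavaScript (a trivial standalone snippet, nothing Qt- or QML-specific):

```javascript
// Every JavaScript value carries a runtime type; only the variables
// are untyped. The same variable can hold a number, then a string:
let v = 42;
console.log(typeof v);        // "number"
v = "forty-two";
console.log(typeof v);        // "string"
v = { answer: 42 };
console.log(typeof v);        // "object"

// Type errors still exist -- they just surface at run time:
try {
  v.toUpperCase();            // plain objects have no toUpperCase
} catch (e) {
  console.log(e instanceof TypeError);  // true
}
```

Values always have a type; it's the variables that don't, and mistakes show up at run time instead of compile time.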
I also opted to go C++/QML first, but later changed my mind – almost everything could be done in QML/js in a more succinct way.
XAML has a killer flaw that makes it irrelevant to most people here, though – it's neither open nor truly cross-platform (Win + Mac != cross-platform). QML, OTOH, is something everyone can pick up and start using.
Yes, you can. You can do it declaratively with XmlDataProvider (WPF only, unfortunately). You can subclass it and override OnQueryFinished to insert the result into a database, but really I don't see the point.
You’re polluting the View with needless behavior though. Any good performance programmer will tell you “Cache implies policy” and policy is best defined in the ViewModel, not in the View.
The View is supposed to be reusable, replaceable, and testable (UI Automation); having any kind of logic in the View beyond what is absolutely necessary breaks this workflow.
Yes, but also every <Button> corresponds to a Button control (written in C#) so a markup extension you define in XAML being written in C# is not a huge deal, and in fact, it is no different from what QML does.
That would make sense if you couldn't use inline code in XAML with WPF, but you can. Also, is it really that much of a hassle to wire up an event handler? I mean, if you absolutely must break the MVVM programming pattern, do it in a less disgusting way.
I don’t get what you mean. QML, like XAML in Silverlight is loaded, parsed, interpreted, and generated at runtime into the corresponding Qt objects.
XAML in WPF, on the other hand, is compiled to BAML, which is even faster than interpreting QML or XAML.
Like I replied in another post, weak typing is effectively no typing, because implicit conversions beyond a few base types (integers, strings) are largely undefined operations, which can cause all sorts of problems with functions not designed to handle them.
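A few standard examples of those conversions, in plain JavaScript (strictly speaking the results are defined by the ECMAScript spec rather than undefined, but they are surprising enough to cause exactly the problems described):

```javascript
// "+" prefers string concatenation, while "-" forces numeric conversion:
console.log("5" + 3);    // "53"  (concatenation)
console.log("5" - 3);    // 2     (numeric subtraction)

// Loose equality (==) applies coercion rules of its own:
console.log(0 == "");    // true
console.log(0 == "0");   // true
console.log("" == "0");  // false -- so == isn't even transitive

// Strict equality (===) skips coercion entirely:
console.log(0 === "");   // false
```

This is why style guides generally recommend === over ==.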
It’s more work, more room for error, and a lot slower than compiled C#. So why use it?
It's also a lot less mature than XAML and, in my opinion, a lot harder to work with; the relationships between parent/child nodes are much more pronounced.
XAML is absolutely open, its vocabulary is covered under Microsoft’s OSP.
At the end of the day though, QML is good, it’s great, and I think I said that earlier. However, QML being great doesn’t make Silverlight/Wpf/Xaml bad, or inferior.
I don't see how you're having your cake and eating it too, when JavaScript is a typeless language, which can lead to all sorts of weird quirks when unexpected things happen.
I don’t know, the CoreCLR in Silverlight 5 is pretty damn fast now and is still tiny. In fact, last time I checked, the CLR’s JIT engine still outperformed Javascript JIT engines.
If most of the time is spent in QML/Javascript, I don’t see how you can claim the performance benefits of being “closer to the metal” while using an interpreted/JIT’d language.
I'm not sure if QML is interpreted or compiled to an intermediate representation (XAML on WPF is compiled to BAML, a binary representation, and XAML on Silverlight is interpreted).
JavaScript is not typeless. It’s dynamically (and duck) typed. It still has types and classes.
Without type safety, having a type system is effectively useless.
I’d say you don’t know what you’re talking about. Compile-time type safety is only one of the many benefits of having a type system. Regardless, JavaScript *does* have types, whether or not you personally consider it worthless for it to have one.
For all practical purposes, there is no point in a type system if it's weak. The end game here is program correctness, and that is not something that comes out of the box with JavaScript.
The types themselves in Javascript don’t matter as much as the values of the variables, which is where the problem lies.
The problem is that of undefined behavior from implicit conversions. Because of that, Javascript is effectively typeless for every reason that matters.
This abomination of a language has no place anywhere, at all.
You should brush up on how graphics works these days. It’s not at all about “optimizing” – it’s about creating a graphics model that works as fast as possible on modern hardware, instead of “generic” model that works okay everywhere.
GPU is always on anyway. You can waste it and burn the power on CPU instead, or get with the program and use that GPU. Using GPU over CPU prolongs battery life and makes the applications look better.
Speed does matter, but it's not the only thing that matters. Things like battery life, stability and portability matter too, in the context of Qt. Sadly, some of the platforms Qt runs on simply do not have stable, efficient GPU drivers for recent hardware, because of the way the GPU ecosystem works (particularly on the desktop). In fact, considering how easy it is to crash my GPU drivers on Windows, and that driver update which broke something as basic as fan operation on high-end graphics cards some time ago, I have to wonder whether such a thing as a reliable GPU driver for modern hardware exists on any platform…
I don't think it's impossible to turn off a GPU. I think NVidia's Optimus software does it, and someone (I think it's oahiom) told me that on AMD and Intel GPUs there is even a fine-grained power management model allowing one to shut down parts of the GPU (e.g. individual execution units).
While if you can't turn the GPU off the most efficient thing to do is indeed to use it, turning off GPUs, or parts of them, is totally worth it when CPU power is sufficient. I haven't made a pure comparison of software rendering with all GPUs off (or in their minimal power state) and GPU-accelerated rendering yet, but I can already tell what the difference is between an idle Intel GPU and an idle NVidia GPU: the battery life of my laptop is halved in the latter case. This is a pretty impressive result, because it means that, even if we neglect the power consumption of the Intel IGP, when it does nearly nothing an NVidia GPU eats up as much power as all the other power-hungry components of a modern laptop (screen, HDD, wireless, CPU…) combined!
A more debatable example would be a comparison between smartphones with and without GPUs: those without GPUs generally provide pretty smooth graphics too. You lose the shiny effects, granted, but you can also get hardware at half the price with twice the battery life (3 days vs. 1.5 days).
What this shows is that while CPUs’ power management features have dramatically improved in the past few years, high-end GPUs are still the power-hungry beasts that they were in the days of graphics accelerators : something suitable for gaming or high-end graphics, but not for anything which has a battery plugged in it. When I see everyone embracing them for trivial tasks, I do think that the computing world has really gone completely mad.
Why have square wheels when you can have round ones?
That is essentially the logic you're proposing. It is more energy-efficient to use ASICs designed for the task at hand instead of throwing generic compute muscle at a problem, so why is graphics any different?
If most machines include dedicated hardware for graphics why not optimise for the majority?
Last but not least… since when have accelerated UIs needed gaming-level GPUs?
We are not talking gaming-grade GPUs here. All CPUs will have GPUs integrated into them in the near future: AMD has its Fusion program, Intel already ships Sandy Bridge with an integrated GPU, and all ARM chips in mobile phones have a GPU on die. So if you already have this piece of silicon which can do these drawing operations more efficiently than a CPU can, why not use it?
Also, GPUs require a different drawing model than traditional CPU-based renderers do. Compositing window managers manage off-screen buffers which they use to composite the desktop, while CPU-based renderers write more or less directly into the screen buffer. While this “deferred rendering” requires more memory than “immediate mode”, it is actually more efficient when moving windows around, because applications do not have to repaint all the time.
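As a toy sketch of that difference in plain JavaScript (all names here are made up for illustration; a real compositor is of course far more involved than this):

```javascript
// Toy model of immediate vs deferred (composited) window drawing.
function makeWindow() {
  return { x: 0, y: 0, cached: false, paintCount: 0,
           paint() { this.paintCount++; } };
}

// Immediate mode: all windows draw into one shared screen buffer,
// so moving any window invalidates it and every app repaints.
function moveImmediate(windows, target) {
  target.x += 10;
  let repaints = 0;
  for (const w of windows) { w.paint(); repaints++; }
  return repaints;  // app repaints caused by this move
}

// Deferred mode: each window renders once into its own off-screen
// buffer; afterwards the compositor just blits the cached buffers
// at the windows' current positions, so a move costs no repaints.
function moveDeferred(windows, target) {
  target.x += 10;
  let repaints = 0;
  for (const w of windows) {
    if (!w.cached) { w.paint(); w.cached = true; repaints++; }
    // compositor blits w's buffer at (w.x, w.y)
  }
  return repaints;
}

const wins = [makeWindow(), makeWindow(), makeWindow()];
console.log(moveImmediate(wins, wins[0]));  // 3 -- every move repaints all
console.log(moveDeferred(wins, wins[0]));   // 3 -- first frame fills caches
console.log(moveDeferred(wins, wins[0]));   // 0 -- later moves are free
```

In the composited model the cost of dragging a window is just re-blitting cached buffers, which is exactly the kind of work a GPU excels at.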
Enlightenment uses such a deferred approach and can use a software rasterizer but it can also use the GPU via OpenGL. Haiku is left in the 90s for now.
Basing the drawing model on a GPU friendly paradigm and OpenGL does of course not mean you cannot run it on a CPU. There are already numerous ways to accelerate OpenGL on the CPU for backwards compatibility such as LLVMPipe.
ASeigo chips in on subject of KDE 5:
http://aseigo.blogspot.com/2011/05/qt5-kde5.html
ASeigo – relax
http://aseigo.blogspot.com/2011/05/relax.html
‘The Plasma team has no intention, desire or need to start “from scratch” nor engage in a massive redesign of the existing netbook or desktop shells.’
Unlike KDE 3, KDE 4 is designed to cope with API breaks such as Qt 5's.
‘This will end up affecting binary compatibility, but source compatibility will remain largely intact, especially for modules considered ‘done’ like QWidget based things.’
It will merely require some tweaks here and there, and a re-compile.
I would love to see first class Python support via PySide inside tools like Creator