In June 2009 we had some very good news about the integration of multitouch event support into the Linux kernel. Since then, many multitouch device drivers have been developed, mostly in collaboration with LII-ENAC, to take advantage of it. All of that work was kernel-based, and multitouch support needs more components added to the stack before it works out of the box. Canonical got interested in providing the needed user experience for multitouch by developing a new gesture engine that recognizes the grammar of natural hand gestures and provides them upstream in the stack as new events.
Many other components of the uTouch framework standardize the events gathered from devices, since some devices do finger tracking and others do not. With the help of many people from the community, a protocol is being discussed on the xorg-devel mailing list and should be ready for Maverick.
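The "standardize events from devices with and without finger tracking" step can be sketched roughly. The function below is a hypothetical illustration (not the actual uTouch code) of how a stack might synthesize stable tracking IDs for hardware that only reports anonymous contact points, using greedy nearest-neighbour matching against the previous frame:

```python
import math

def assign_ids(prev, curr, max_dist=50.0):
    """Greedy nearest-neighbour matching: give each new contact point
    the ID of the closest point from the previous frame (within
    max_dist), or a fresh ID otherwise.
    prev: dict id -> (x, y); curr: list of (x, y); returns dict id -> (x, y)."""
    result = {}
    unused = dict(prev)
    next_id = max(prev.keys(), default=-1) + 1
    for pt in curr:
        best, best_d = None, max_dist
        for pid, ppt in unused.items():
            d = math.dist(pt, ppt)
            if d < best_d:
                best, best_d = pid, d
        if best is not None:
            # Contact continues an existing track: reuse its ID.
            result[best] = pt
            del unused[best]
        else:
            # No nearby track from the last frame: this is a new finger.
            result[next_id] = pt
            next_id += 1
    return result
```

Real trackers have to deal with sensor jitter and fingers crossing paths, but this is the essence of what a driver or library must do before gesture recognition can even begin.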
Mark Shuttleworth described the gesture grammar in his blog as “rather than single, magic gestures, we’re making it possible for basic gestures to be chained, or composed, into more sophisticated ‘sentences’. The basic gestures, or primitives, are like individual verbs, and stringing them together allows for richer interactions”. More information about this can be found in this Google document.
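The "chained verbs" idea can be illustrated with a toy dispatcher. All the names and primitives below are invented for illustration and are not the actual uTouch API: applications register "sentences" of primitive gestures, and the longest registered chain ending the event history wins:

```python
# Registry mapping gesture "sentences" (tuples of primitive names,
# the "verbs") to application actions.
ACTIONS = {}

def register(sentence, action):
    """Register an action for a sentence, e.g. ('tap', 'drag')."""
    ACTIONS[tuple(sentence)] = action

def dispatch(history):
    """Fire the longest registered sentence that ends the event history,
    so a composed gesture takes priority over its last primitive alone."""
    for n in range(len(history), 0, -1):
        action = ACTIONS.get(tuple(history[-n:]))
        if action:
            return action(history[-n:])
    return None
```

With `('tap', 'drag')` registered as "move window" and `('drag',)` as "scroll", a drag preceded by a tap moves the window while a bare drag scrolls, which is exactly the "richer interactions from strung-together verbs" idea in the quote.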
The stack will make hardware vendors’ jobs easier, as it does finger tracking and delivers a generalized gesture recognition system. This means Ubuntu can become the standard platform for instant multitouch application prototyping and use, since most components of the stack are already here and licensed under the GPLv3 and LGPLv3.
Multitouch support has been tested on hardware like the NTrig digitizers, but it works on some multitouch touchpads as well. Multitouch device manufacturers can contact the uTouch developers to check whether their hardware is supported or still needs some minor tweaking.
Canonical has announced uTouch on its blog and there is more technical documentation on the Launchpad multi-touch-dev mailing list.
Am I the only one seeing a problem with this?
There was an article here, some time ago, about the similarities between a gesture interface and a command line interface. A gesture interface, like a CLI, has no way to present itself to the user.
So the way you get around this is by having only a few gestures with a strictly defined meaning (pinching zooms, two-finger dragging scrolls, etc.).
Allowing so much arbitrariness seems pointless to me. The whole point of multitouch is to present the user with the real-life equivalents of the actions they perform with their computers (again, pinch to zoom is quite natural). If you are going to introduce arbitrary, hard-to-find gestures, you’d probably be better off with good ol’ keyboard shortcuts.
I couldn’t agree more. Most of the time when these developers try to make something easier for the user by trying to anticipate our needs, they usually end up with a complicated mess that nobody uses. KDE 4.0 anyone? But you know how it is around here…anytime an Ubuntu dev picks his nose it makes front page news.
The KDE4.0 mess was because users went out of their way to run beta software despite frequent warnings that the desktop environment was still buggy and unfinished.
The problem was that everybody used it when they shouldn’t have – so in that respect, you couldn’t have picked a worse example.
However I do agree with you on principle re many developers missing the point; but you also have to remember that what is a good interface for one person isn’t great for another. We all have different ways of working.
Developers often like to blame users for their own mistakes.
If you remove the word “beta” from a release, it is considered stable. That is the expected convention.
I understand the reasons for it, as it meant the core libs were ready, and it was to encourage developers to start porting etc. in addition to other reasons. But you still can’t expect users to do a bunch of research for your software. The release number “4.0” signified to users that it was stable, even though that wasn’t what the developers meant. Hence the confusion.
It was clearly marked on the website for every distro that shipped KDE that the 4.0 release was a testing release.
In fact, not one distro shipped KDE4.0 as default – users had to specifically go out of their way to get a copy of KDE4.0, ignoring the 3.5.x ISOs found right there in big bold letters at the very top of the download pages. I don’t even recall KDE4.0 being in the main / stable repositories. Users had to enable testing branches and the like.
So yes, if someone deliberately puts effort into downloading a non-standard package, then I do expect them to at least read the warnings on the very page they hit “download” from. And what’s more, I don’t think I’m being unreasonable here either.
If KDE4.0 had been the default, or hadn’t come with warnings on the websites of every single distro that shipped the optional packages, then I would agree with you. But the reality was that users knew the warnings, ignored them, and then complained when they didn’t have a production-quality desktop.
Furthermore, how many times do you hear horror stories with first releases of Windows? Even OS X has had its fair share of teething problems from version upgrades. In fact, it’s pretty common knowledge in IT that n.0 release users are often guinea pigs and that it’s never a good idea to upgrade a critical machine until the dust has settled and bugs have been fixed.
So as far as I’m concerned, the warnings were well known, but people chose to ignore them and went out of their way to install KDE4.0 despite it being the non-standard KDE package. And sorry for the rant, but sometimes it just strikes me as if it’s easier for people to moan about open source than it is to use a little common sense or contribute back.
Actually, this isn’t entirely true. While most distros continued to ship KDE3 and offered KDE4 as an option, Fedora shipped KDE4 as the default (Fedora 9) and refused to provide a KDE3 package.
Though for the most part, the overall commentary is correct. It was made perfectly clear by KDE, and by the distributions, that KDE 4.0 was an early development release, and no one in their right mind should have assumed it would be stable.
People who are complaining it wasn’t ready are just pointing out their own foolishness.
Sigh. Go look at the official release notes for KDE 4.0. Is there anything in them that even slightly resembles “unstable release”, “platform preview”, “development preview” or anything of the sort? Anything that implies that only the base libraries are somewhat complete and the rest is in an alpha state? No – everything there presents what is being released as nothing other than a complete, usable, final user-oriented desktop environment.
If they had been honest, no one in their right mind would have blamed them. If they’d come out and said, “look, it’s a huge endeavour and it’s taking far longer than we thought”, wouldn’t that have been much better for everyone?
Of course not. Why be open and honest when you can blame poor, foolish, stupid users? After all, it’s a strategy that is working wonders for open source desktops, right?
You read OSAlert, so you would have also read Ars Technica at the time of the KDE 4.0 release, right? Here is a quote from Ars Technica about the KDE 4.0 release:
“..While reading this article, it is important to keep in mind that KDE 4 is still largely incomplete. Many of the details provided in this article reflect the fact that the 4.0 release is not a finished product. The KDE development team controversially decided to release 4.0 in a premature state in order to stimulate user interest and promote accelerated development. The result is that KDE 4.0 is, in many ways, like a preview for developers and technical enthusiasts rather than a release for enterprise desktops and production environments. My extensive testing shows that KDE 4.0 can be used on a day-to-day basis, but there are many inconveniences posed by the software’s current limitations. In this article, I will try to provide a balance of forward-looking analysis and detailed descriptions of the software’s current state..”
I’m just so tired of reading all these posts about KDE 4.0. Get over it; the release happened over two years ago. Without KDE 4.0 we wouldn’t have the awesome KDE 4.5 release.
Let’s talk about more interesting stuff, like the Ubuntu multi-touch APIs for the Linux desktop, instead of sounding like stuck records.
One thing I noticed when I was playing with the iPhone APIs a while ago was how primitive they were. They worked in terms of things like ‘finger one is now at position x1,y1 and finger two is now at position x2,y2’. It seemed pretty low-level to me. If the Ubuntu touch API works at a higher level than that, then I think it could be a big step forward. A sign that the iOS APIs are a bit low-level is that iPad apps don’t have a standard vocabulary of touch gestures.
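To make the “low-level” point concrete: with only raw coordinates, every application has to derive even a basic pinch-zoom itself. A minimal, purely illustrative sketch of that arithmetic:

```python
import math

def pinch_scale(f1_start, f2_start, f1_now, f2_now):
    """Derive a zoom factor from raw two-finger coordinates: the ratio
    of the current finger spread to the spread when the gesture began.
    A value > 1 means the fingers moved apart (zoom in)."""
    start = math.dist(f1_start, f2_start)
    now = math.dist(f1_now, f2_now)
    return now / start if start else 1.0
```

Doing this (plus hysteresis, velocity, multi-finger bookkeeping) in every single app is exactly the duplication a higher-level gesture API removes.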
That’s completely wrong. I guess you missed UIGestureRecognizer.
Yes, you’re right – that looks really good. I see it was introduced in iOS 3.2, which is after I last looked at the iPhone api a year ago.
Yeah, why be honest when you can blame your own inability to read simple instructions on something else?
You can’t even download KDE from the press release, so why should they post their bug list there? In fact, the only way you could get KDE4.0 from their website was via SVN – and if users are smart enough to do that and compile their own packages, then they’re also smart enough to know not to do so on a production machine.
They did say that, though. They said it numerous times. In fact, OSAlert even reported it on a few occasions. The KDE team can’t be held accountable because some users just click “install” without reading a single thing first. Heck, on Debian, KDE4.0 was only available in the “experimental” branch – surely that should have raised alarm bells. IIRC the same was true with Arch Linux. Kubuntu and OpenSuse even offered a LiveCD for users to try without installing.
Any major upgrade like this demands a little reading first anyway – so the fact that they missed the very public statements on both the distribution sites, news sites and KDE’s own site is even less excusable. It’s like upgrading your Windows box from XP to Vista then complaining when your 16bit apps aren’t working. If it’s a major upgrade to a production system then read, test and only then should you go live.
So yes, I do blame the users. It was made very clear that this was an unstable and very major upgrade. It was also a non-standard release on all but Fedora. And finally it was also made very easy to test 4.0 without even needing to install. Thus quibbling about version numbers is nothing more than an excuse to compensate for a complete lack of common sense.
Excuse me? Remember the problems with Windows Vista and how users were ignored by MS because they bought computers that weren’t powerful enough despite having Vista preinstalled? Or how about how Apple blamed users for holding the iPhone wrong?
At every point the KDE team responded with “run 3.5.x if you want a stable production environment”. If anything, that’s an acknowledgement that their software isn’t ready yet rather than blaming the users that something doesn’t work.
I’m sorry, but that last pop at open source really does sound more like prejudice than an informed opinion. Particularly as all I’m blaming users for is ignoring the warnings. Everybody else knew KDE4 was unstable, so they either stuck with 3.5.x or ran KDE4 on non-critical machines.
Except that in order to get KDE4 you had to either compile it yourself or install it from your distro. Since regular users don’t build anything, this is clearly a problem of distributions providing or switching to KDE4 too early.
If you can do a lot, you can also do little; but the opposite is not true.
In other words, it’s better for underlying systems to be very generic and flexible because then it makes it easier to build more specific pieces on top without resorting to hacks.
They are not excluding those basic, ancient multi-touch gestures, such as “pinching” and “two-finger scrolling.” As your quote from the article reads, they are just “making it possible for basic gestures to be chained, or composed, into more sophisticated ‘sentences'”.
Furthermore, basic gestures such as pinching and scrolling are not self-evident, and they still have to be learned by the uninitiated.
It is similar to the command line: you use pipes to “chain” commands together to do more complex tasks, and this sounds like it would work the same way – you string simpler gestures together to accomplish more complex tasks.
I’m kind of hoping we don’t end up with the nVidia situation, i.e. device manufacturers writing their own stuff anyway and replacing key bits of the framework, making everything more incompatible than it already is.
Regarding gestures not presenting themselves to the user: shouldn’t this approach lend itself well to teaching the user how input works using short videos?
And I do not really think this is similar to the CLI. On the command line you need to be precise: mistype one character and it won’t work. With gestures, just as with natural language, you don’t need that level of precision to communicate the essence.
But in BASH, if you hit [TAB] then you get auto complete (or where there are more than one branch, the options present themselves).
So one could argue that you don’t need much precision so long as you know roughly how the command starts.
Plus there are -? / --help and man pages, so you have a means of communication to teach the users.
However, I’m not disagreeing with your point entirely – more playing devil’s advocate.
Gestures do not require precision? You should try those wonderful laptops equipped with a multitouch trackpad. Unless you move your fingers very carefully (because you got used to it, just like you get used to the CLI), most of them easily mistake scrolling for zooming…
This depends on the gesture recognition engine.
The engine in Maverick provides information about each gesture along with the amount of movement.
The precision can be adjusted the same way as on single-touch touchpads.
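As an illustration (not the actual Maverick engine), a recognizer might disambiguate two-finger scrolling from pinch-zooming with a relative threshold on the change in finger spread, and that threshold is exactly the kind of knob that could be tuned like a touchpad sensitivity setting:

```python
import math

def classify(f1_start, f2_start, f1_now, f2_now, threshold=0.15):
    """Classify a two-finger movement as 'zoom' when the finger spread
    changes by more than `threshold` (relative to the starting spread),
    otherwise as 'scroll'. A higher threshold makes accidental zooms
    less likely at the cost of requiring a more deliberate pinch."""
    spread0 = math.dist(f1_start, f2_start)
    spread1 = math.dist(f1_now, f2_now)
    if spread0 and abs(spread1 - spread0) / spread0 > threshold:
        return 'zoom'
    return 'scroll'
```

With a strict threshold, two fingers drifting slightly apart while dragging downward still counts as a scroll, which is the misrecognition the trackpad complaint above is about.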
How is that progress? It looks cool on CSI or Minority Report, but can you imagine the horror on a day-to-day basis? What was a twitch of the wrist is now a whole-arm movement. Worse, your input and output devices are one, so to provide input you must cover your view of the output. What happened to “do one thing, do it well”? You can’t beat a keyboard for text input or a mouse for providing a 2D position. Touch might add something new, but I can’t see it replacing everything, as some seem to think. I would be very surprised if I had a machine and OS with this and used it much – providing it’s optional, that is. I hope good money hasn’t been burnt on this; there are other areas of development I would rather see the money spent on.
I think everyone would agree that the fully natural way to distribute mod points is by giving comments the finger
“a protocol is being discussed in the xorg-devel mailing-list and would be ready for Maverick.”
OK, and how about KDE? What’s happening with KDE’s touch implementation? Are the KDE developers going to wait until this new protocol is completed, or is KDE going its own way (typical for Linux), like the rest of the *nix software, creating its own rules, guidelines and protocols, leading to incompatibility nightmares between toolkits because they all compete with each other? My advice for a KDE touch edition is to wait until this X.Org-level protocol is done and implement it before proceeding any further. This will not only make behavior consistent between KDE and GNOME, but will also make jobs a lot easier for OEMs, as there will be only one protocol to follow.
“rather than single, magic gestures, we’re making it possible for basic gestures to be chained, or composed, into more sophisticated ‘sentences’”
Hmmm, this is a bit scary if you ask me. Exactly these types of “sophistications” have made the Linux desktop suck, because it doesn’t behave the way someone coming from Windows would expect. So my advice is to make their touch input behave like everyone else’s, because I am fairly confident they’ll otherwise end up making a square wheel out of a round one. Give the users what they are used to and then slowly introduce new ways…
KDE is based on Qt.
Qt’s touch API is still missing the Linux piece.
We are discussing the coming support with them.
I hope it all ends up well then. Consistency is the key to success.
Thank goodness it doesn’t follow what Windows users would expect. Hell, Windows 7 doesn’t either. At least in all the Linux desktops I’ve seen, you can hold down Alt and click on a window to drag it. Windows fails at, well, windowing!
Windows 7 multitouch is kind of crap too. You can’t even resize a window with two fingers. From what I’ve seen of the videos on Linux (haven’t quite gotten my multitouch to work – was waiting for this!), you can rubber-band with multitouch as well as resize and move multiple windows around, etc.
Linux is succeeding in a new user interface paradigm where Windows is failing miserably.
There is a good side and a bad side to it. For us, yes, it is good; however, if one wants to win users, one should provide what users are used to and slowly (as stated) introduce new ideas.