Dan Hon makes a thought-provoking assertion on his blog: remember all those ridiculous, unrealistic computer interfaces that Hollywood characters are always using, showing “hackers” infiltrating systems by flying through virtual reality worlds of strange codes, with cutesy animations accompanying every task? Hon’s point is that instead of ridiculing these unrealistic interfaces, maybe we should try to emulate them. He makes a pretty good case.
I’m certain that most readers of OSAlert have spent a fair amount of time designing the perfect operating system in their heads, or talking it over with their friends. OSAlert was born after I spent a six-hour car ride engaged in just such a conversation with a few friends. But technologists tend to focus on utility, and usually strive for simplicity. Computer power users often denigrate the features of their OS that are intended to help lesser mortals navigate the computer, because they’re less efficient. How many times have you watched someone painstakingly navigate a nest of menus to do something, and wanted to shake them and say, “learn the keyboard shortcut, moron!”?
But the part of this blog post that really struck me was the discussion of using animation to hint at the importance of what’s happening on the computer. When Mr. Incredible is lifting something heavy and important, you can see that on the screen; but when you’re deleting half your hard drive, or your doctoral dissertation, it’s displayed the same way as when you’re clearing your web cache.
The idea of injecting animation-as-storytelling into the personal computer experience is so simple that it took me a while to realize it could also be earth-shattering. We now have both the powerful hardware and the software to do the kinds of data analysis and real-time animation that would be necessary to make this a reality. And of course, most of the time all this power is just lying idle while we read blogs. I’d like to see this idea go someplace.
Of course, the problem with “Movie OS” is that an animation that moves the story along would be incredibly annoying to have to wait through every time you do a routine task, so you can’t take the joke too far. By no means would I advocate copying the silly ideas you see in movie UIs. But the idea that what we see on the screen should tell the story is worth thinking about.
This is a great idea. It should be implemented in such a way that real-world, everyday paradigms are carried into the user’s computer experience. For example, the daily paradigm of moving through your “house” could be translated into navigating your PC. You could find “documents” by clicking papers on tables, and open “programs” by interacting with objects in the room. Here’s a mockup of what it might look like:
http://www.catb.org/~esr/writings/taouu/html/graphics/bobhome1p.png
Now we just need a cool name for it….
Whatever you say, Bob!
Ha ha! I was thinking exactly the same thing when I read Thom’s comments!
I could swear the point of computers was to make things easier for us, not harder.
So the files on my computer will be just as disorganized, “lost”, and cluttered as my house?!?
No thanks!
I remember seeing that type of interface on a C64; it worked great until you wanted to do something really different.
BBS program – nope.
Special interface program for a slow daisy wheel printer that suffered a buffer over-run if you sent it data too fast – nope.
Ham radio – Morse code – nope.
These types of interfaces are bad at what makes computers really good: being flexible.
What if we:
1) Tracked the difference between a computer doing things and a person doing things? (And maybe there could be shades of grey, like when you access a shortcut and the machine therefore opens a file on your behalf)
2) Tracked how much people / computers used files or other data, toward the goal of profiling the difference in importance of a file so that, for example, upon deleting something I spent hours making, my computer could say, “Yeah, you spent hours making this. You sure you want to delete?”
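Something like this minimal sketch could cover the second idea; it assumes the system logs human edit time per file somewhere, and every name in it (the log location, the one-hour threshold) is hypothetical:

    import json
    import os

    LOG = os.path.expanduser("~/.edit_time.json")  # hypothetical per-user log

    def _load():
        if os.path.exists(LOG):
            with open(LOG) as f:
                return json.load(f)
        return {}

    def record_session(path, seconds):
        """Accumulate how long a person (not the machine) spent editing a file."""
        log = _load()
        log[path] = log.get(path, 0) + seconds
        with open(LOG, "w") as f:
            json.dump(log, f)

    def delete_with_warning(path):
        """Refuse to delete quietly when the user invested real time in a file."""
        hours = _load().get(path, 0) / 3600
        if hours >= 1:  # arbitrary cut-off for "you spent hours on this"
            answer = input("You spent %.1f hours on %s. Delete anyway? [y/N] " % (hours, path))
            if answer.lower() != "y":
                return
        os.remove(path)

The first point, telling the machine’s actions apart from the person’s, is exactly why record_session would only be called from interactive editing sessions in this sketch.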
Your first point makes me think of *nix-style systems that provide a very scriptable machine-level interface and a very pretty human-level interface, the inclusion of one not limiting the other. The machine can have full control, including changing config files through the CLI and scripting. The human can have full control, including mouse and gestures.
Your second point is a very interesting idea as well: make the platform aware of files in a more meaningful way. The trick here is including that metadata without it becoming a security risk. This probably means metadata in the file system rather than in the file itself. I’m thinking of that combined with a localized Google-style search that can generate relevance based on the metadata and the collected behavioral data.
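As a rough sketch of keeping that metadata in the file system rather than in the file, Linux extended attributes would do; here it is in Python (the attribute name is made up, and it only works on filesystems with xattr support):

    import os

    ATTR = b"user.osalert.edit_seconds"  # hypothetical attribute name

    def add_edit_time(path, seconds):
        """Accumulate edit time as an extended attribute, so the metadata
        travels with the file without ever living inside its content."""
        try:
            current = int(os.getxattr(path, ATTR))
        except OSError:  # attribute not set yet
            current = 0
        os.setxattr(path, ATTR, str(current + seconds).encode())

    def edit_time(path):
        """Read the accumulated edit time back, defaulting to zero."""
        try:
            return int(os.getxattr(path, ATTR))
        except OSError:
            return 0

A local search indexer could then rank results by edit_time(path) alongside ordinary text relevance.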
2) Tracked how much people / computers used files or other data, toward the goal of profiling the difference in importance of a file so that, for example, upon deleting something I spent hours making, my computer could say, “Yeah, you spent hours making this. You sure you want to delete?”
Please don’t. The amount of time could be indicative of importance, but it doesn’t have to be. It would be a shame if the concept for the definitive cure for cancer, hastily written down in fifteen seconds on a digital note after a long afternoon of thinking, got deleted without warning because you didn’t spend enough time on it for it to count as important.
As a consolation, the horrid pixelstain you produced after three hours of mucking about with a Wacom tablet, even though you know you’re no Van Gogh, won’t be accidentally deleted…
It’s when a program starts to assume I am doing something or want something that it starts driving me up the wall. Clippy, anyone?
As long as nothing gets deleted automagically, I see no problem with the idea of the PC knowing how long the user has worked on a text document or other file.
If the user decides to delete a random folder with some potentially important content, the system would give a heads-up based on the time spent creating it.
The system should always warn against deletion. Most systems do.
A creation-duration indicator could be included, but it won’t stop some (most?) people from skipping the reading part and still deleting the stuff they wanted to keep.
No amount of coding can guard against nonchalance and computer illiteracy.
Of course, there’s already an article about this on TV Tropes. They call it the “Viewer Friendly Interface.”
I could keep quoting project after project that attempts to emulate this, but it seems silly when they’ve collected them all into one place…
Thank you for that! I’ve included it in my post.
What’s so special about MovieOS is how effortlessly it represents information using maps or videos or 3D objects, when we all know such stuff has to be created by someone, somewhere, and you can’t just throw together a 3D diagram of the solar system to count down to the planets aligning.
However, with mashups and public data becoming machine-readable across the web, we may actually one day have enough datasets freely available in accessible formats to throw together highly visual representations of data with just a few clicks or lines of code.
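As a sketch of what that could look like — the URL and field names are entirely hypothetical, and it assumes the third-party matplotlib library is installed:

    import json
    import urllib.request

    import matplotlib.pyplot as plt

    # Hypothetical machine-readable public dataset; any JSON API would do.
    URL = "https://example.org/api/planets.json"

    with urllib.request.urlopen(URL) as response:
        planets = json.load(response)  # e.g. [{"name": "Mercury", "au": 0.39}, ...]

    names = [p["name"] for p in planets]
    distances = [p["au"] for p in planets]

    plt.bar(names, distances)
    plt.ylabel("Distance from the Sun (AU)")
    plt.title("One public dataset, a few lines of code")
    plt.show()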
I think you hit the nail on the head here. It will take some AI to actually evaluate the nature of the data that we’re dealing with, and bring in information from the web to help represent it in order to achieve this goal. But I think we’re quite close to having the necessary technology.
The idea of making the user aware of big and possibly bad changes is good; however, adding more eye candy will call for more powerful machines.
This is why every time a new OS comes out we need a machine several times more powerful than the one we have to perform the exact same tasks. As the article says, enough is enough, and it is already getting in the way of actually doing things. Maybe someone will come up with an effective way to accomplish this without using many resources while still making it look good.
Nothing. First, there isn’t any real MovieOS. Second, in most movies actors use a bunch of Linux terminals showing code being compiled so it looks like something is going on on the screen. Third, the touch-screen user interfaces shown in some movies are nothing new.
The only OS that is at all revolutionary today, when we talk about user interfaces, is iPhoneOS.
The things that stick out to me most often, besides the random terminal spewing crap that nobody is ever supposed to read, are the random bar charts and completely superfluous crap that has absolutely nothing to do with what the person is trying to do. I cannot imagine any universe where being rapidly assaulted by flashing images is representative of actions being performed on a computer.
Oh yeah, the No. 1 most irritating thing in MovieOS is the stupid beep/buzz noises that happen whenever any action is performed. We have that capability in computers today; it’s just not enabled because, again, it is downright annoying to real people.
But printing words on the screen is so computationally intensive it must make the sound of a dot-matrix printer D:
My computer totally plays a bootup AVI showing a 3D-rendered happy face. And when I get a virus it’s always the same: a stupid waving 3D fractal display blocking my view of the AV pop-up…
If you ask me, it would be awesome if the bootup could have some extended animation (like the mouse running in Xubuntu, or the happy face of Zer0Cool!)
Also, in financial systems and other data-entry systems like bank telling or casino cashiering, the mouse simply can’t be used, because it would make the operator “go slow” when inserting and retrieving data. Maybe this principle is used in all those keyboard-driven military applications. Can anyone tell me if this is also true in real life?
When I made my money with Excel I’d do anything that kept my hands on the keyboard. The mouse was only a button with variable position, for those few functions that were faster without the keyboard. Custom database forms are also much faster if you can move through them by keyboard; the input pages that slowed me down were the forms with broken double-hotkeys. Also, if you can fully control it by keyboard, then you can script against it. I can’t tell you how fun it is to turn data input into a mostly automated process with your own scripting: reduce twenty fields across multiple OK buttons into two or three, as in the sketch below. RSI pain is also much less without the mouse twist and reach.
I don’t know about real life but my experience is that the mouse is overused and really only appropriate for a few of the tasks it’s habitually used for.
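To give a flavor of the scripting I mean — this is a sketch, not the tool I actually used; it assumes the third-party pyautogui library and a form that moves focus with Tab and submits with Enter:

    import pyautogui  # third-party: pip install pyautogui

    # One record's worth of made-up field values.
    record = ["ACME Corp", "42", "2010-04-17", "approved"]

    pyautogui.PAUSE = 0.1            # brief pause between actions
    for value in record:
        pyautogui.write(value)       # type the field value
        pyautogui.press("tab")       # jump to the next field
    pyautogui.press("enter")         # one submit instead of twenty clicks

Loop that over a list of records and the data entry becomes mostly automated, hands never leaving the keyboard.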
Don’t forget how keyboard-intensive MovieOS is. The mouse is hardly used.
MovieOS also seems to make extensive use of passwords and encryption, but is still easily hackable.
I wish the MovieOS image manipulation software would be ported to other platforms. That “zoom and enhance” feature looks very useful.
They are not easily hackable; that’s the password recovery feature.
Doesn’t it suck when you forget your password and get locked out because you provided false data you can’t remember?
MovieOS helps and entertains you with a game of hangman and infinite lives. What I don’t get is why it always takes the “hacker” so long. He’s probably brute-forcing it by typing.
Not easily hackable?
Have you seen the movie “Swordfish”???
The hacker cracks the system while a gun is pointed at his head and a girl is giving him a blow job.
1024-bit, no less, using nothing but randomly guessed passwords at a password prompt. Because, as we all know, hackers are really, really good at guessing highly improbable things.
Though, truth be told, most massive hacks really are just that, or simple social engineering.
Oh, really?
http://ubuntuforums.org/showthread.php?t=531781
Haven’t seen too many movies or tv shows, have you?
I’ve noticed you rarely use a mouse with MovieOS. You want to zoom in on that license plate from a satellite? *tap tap tap*… Rotate the view of a scene taken from a 2D security camera? *tap tap*. They must know some crazy keyboard shortcuts to get that working.
They must know some crazy keyboard shortcuts to get that working.
No, don’t you know? They just use the “Any Key”.
MovieOS computers have special keyboards where all keys are any keys and all any keys are bound to the dwim(x) function.
Yupp, I’d also like to know the keyboard combinations for obtaining views of an object from a camera feed that were never recorded. It would make a lot of my work easier.
Also, the combination they use when they take a low-resolution reflection from a low-resolution image and create absolutely sharp, content-rich zooms from it.
I said it before and I’ll say it again; or is it “asked before, ask again”?? Anyway, here goes:
Why does KDE 4 emulate the KDE 3 look and feel? Forget I asked – it’s to do with the office desktop, I get it.
But can someone tell me there’s some attempt to fully use the power of “everything’s a plasmoid”? Lie to me if you must.
Moving the desktop forward takes time, and KDE 4 tech seems like a solid place to start, don’t ya think?
There is a link to a really interesting paper further down the article ->
http://research.sun.com/techrep/1995/smli_tr-95-33.pdf
Loads of déjà vu in there…
There are some environments that remind me of MovieOS, but they tend to be for specific functions. For example, the Windows Media Center, or the Xbox 360. The iPad too, to a certain extent.
I think the common factor is single-tasking. You can afford to be flashy with a task when you know it’s the only thing the user wants to look at. Having a full-screen “password” prompt (Matrix 2) or a full-screen email-sending animation (You’ve Got Mail) looks great when you’re watching it on screen, but holds little practical value in a real multitasking desktop environment.
I generally *LOATHE* those fake UIs in movies and series.
No really, I vehemently hate them. When I see one, I hit pause and take my time to meticulously ridicule it.
They’re tacky, incoherent, full of bloat, and generally read like random technical concepts run through a blender. They are evidently the product of visual-designer types aiming to include at least 5 or 6 random buzzwords, so as to sound “familiar”, and thus plausible, to a layman.
With that in mind, David, I’m inclined to disagree with your sentiment that grandiloquent animations can help us understand what’s going on.
Feedback doesn’t necessarily have to be glittery and eye-catching, like every movie director seems to think. It can be humane, smart and sensible without all the bang and smoke.
In fact, like you said yourself, I think they would make everyday usage painfully annoying.
You either aim for sensibility or for Hollywoodesque flash. I can’t see how you could reconcile the two.
Surely the status quo can advance and more suggestive UIs can be developed, but the movies are a definitive counter-example which I *REALLY* hope never influences any real designs.
That’s just my $0.02.
I know this, this is Unix!
Computer Movie Classics
——————————————–
1. Raiders of the Lost Sparc
2. Eternal Sunshine of the Spotless Build
3. iSparticus
4. Avatar xvf
5. Clash of the Tyans
Classic Computer Literature
——————————————–
1. Insanely Great Expectations
2. The Old Man and the C++
3. A Farewell to Arm7
4. /bin/hur
5. A Room with a Vi
It’s Friday!
p.s. Sandra Bullock in “The Net”. She can also act!
Oh come on, there’s an entire movie called “Solaris”
That’s one heck of a suspenseful drama, too.
I would like to see operating systems with GUIs from computer games. MovieOS doesn’t make sense most of the time.
A GUI like Farmville?
I hope not… Why, you ask?
Because the first thing I do in any OS is turn OFF the damned animated bull. I like my operating system to be RESPONSIVE. This means when I click on a menu I want the menu open NOW, not two seconds from now after some goofy animation plays. When I close or minimize a window I want it gone NOW, not two seconds from now after some goofy animation plays… To that end I don’t want some giant animated “access granted” page to sit there for 15 seconds, I want the page I’m trying to access to show NOW.
It’s this type of thinking that led to Microsoft Bob. To me most of this cutesy graphical bull is like driving with the parking brake on.
But what do I know; I consider Windows 98 the pinnacle of UI design and everything since to be steps backwards in functionality.
Mostly that we shouldn’t ever let the movie industry design our user interfaces or encryption technologies.
Agreed
In the movie Swordfish, the hacker cracks the password while a girl gives him a blow job and the bad guy points a gun at his head. If a hacker can crack a password with half his body’s blood down under, the password must be “password”.
Yeah, and they said it would normally take 60 minutes, so they gave him 60 seconds. Classic
I’d let them do the part of the design stage that involves dreaming up the concept. Let the imagination flow without current limitations and all that. Science fiction has been filling this role for years.
But, for Baud’s sake, hand it off to professional designers after that. That’s where I’d draw the line. And definitely not with encryption any deeper than the cosmetic GUI layer.
I was wondering, as I haven’t seen this kind of movie in a while: do they still use those silly interfaces?
I know they wanted to make them look spectacular 15 years ago, when computers were still new and the Internet had barely hit the media’s radar, but nowadays, when everybody and their mom has a computer connected to the Internet, do they still try to make computers look impressive?
Yea, they do. Normally with a bluish glowing interface with striped lines and interaction with maps to put everything in context for the viewer; the blue map thing always seems to do a good job. I think I will hack Marble to work that way, some day, maybe.
Hi everyone ;-P
Please post the coolest TV/Movie OS you have seen
Then we shall have a vote for the coolest.
Movie OSes are usually good at what they are intended for: giving the potentially computer-illiterate audience a clue about what the user is doing.
In real life, though, when you are the user, you know what you are doing. I mean, if you hit “send mail” you know you are sending an email; that’s why you opened the mail client and wrote the email text to begin with. So a full-screen animation of a letter folding itself into an envelope might look pretty the first couple of times, but it’s unnecessary, annoying, and distracting.
Imagine watching that animation 50 times every day at work (and waiting for it to finish so you can continue working).
The same goes for 3D representations of filesystems. There are working implementations of that already, and guess why no one uses them: they don’t improve your workflow, but rather the opposite.
It’s like that version of Doom where every monster has a PID over its head, and killing it kills the process. OK, sounds cool, but why would you want to bother hunting a 3D entity in a 3D scenario when you can just “kill PID”?
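For comparison, the boring non-3D way of doing the same thing, as a two-line Python sketch (POSIX signals; the PID is obviously made up):

    import os
    import signal

    pid = 1234                     # hypothetical PID read off the monster's head
    os.kill(pid, signal.SIGTERM)   # ask the process to exit, no rocket launcher needed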