The Windows 7 craze is barely over, and yet the internet is already buzzing with the next big thing from Microsoft: a project called Midori. The SD Times claims to have seen internal Microsoft documents detailing what Midori actually is, and they say it’s the clean break from Windows many of us have been waiting for. The SD Times article is heavy on detail and quite technical, but luckily Ars Technica provides a more accessible summary of what Microsoft has in store for Midori.
Microsoft promised an operating system written in managed code a long time ago, but instead we got Vista – the managed code came with too many compatibility problems. Midori, being based on Singularity, is written entirely in managed code. In addition, it is built for a ‘cloud computing’ world.
According to Ars’ Peter Bright, Microsoft is facing two major problems in its future operating systems strategy. We are all very familiar with the first problem: compatibility. The software and hardware world is ever changing and evolving, and Microsoft’s commitment to providing as much backwards compatibility as possible is holding the development of its flagship product back. Many have advocated using a virtual machine for backwards compatibility, much like Apple did with Mac OS 9 in Mac OS X, or OS/2 did with DOS and Windows 3.x. This would allow Microsoft to make a clean API break without wrecking backwards compatibility.
Midori seems to be doing just that, while allowing for a rather clever migration path. Midori will not only run as a stand-alone operating system, but also under the Hyper-V hypervisor, and even as a process under Windows. Bright explains that the migration path is a three-stage process:
Initially, then, Midori might work as just another Windows program used for cloud applications. As Midori applications become more abundant and can be used for more day-to-day computing tasks, it can run as a complete OS under Hyper-V, so the machine would be shared between a (legacy) Windows virtual machine and a (new and shiny) Midori VM. Further still into the future, as Windows applications become less and less necessary, Midori can be run as the sole OS on a machine, with the occasional Windows app relegated to a virtual machine.
The second problem arises from the focus on cloud computing. Being geared for cloud computing means having the ability to run not on 2 or 4 cores, but on possibly hundreds or thousands of them. Developing for multiple processors or cores is already a major challenge for developers dealing with just a few cores today, so you can imagine how complicated things get when we’re talking hundreds of cores. With Midori, Microsoft is aiming to make parallel programming significantly easier, enabling programmers to efficiently utilise the benefits of having a vast number of cores available.
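To get a feel for what “significantly easier” might look like in practice, here is a minimal C# sketch using Parallel.For from Microsoft’s Parallel Extensions for .NET – an existing library, used here purely to illustrate the style of programming model being described, not anything confirmed to be part of Midori:

using System;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        const int n = 1000000;
        var squares = new long[n];

        // The library partitions the iteration space across whatever cores
        // are available; the programmer never touches threads or core counts.
        Parallel.For(0, n, i =>
        {
            squares[i] = (long)i * i;
        });

        Console.WriteLine(squares[n - 1]);
    }
}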
This is all still speculation at this point, as Microsoft’s official response when inquired about Midori is that it is an incubation project, and one of many, at that. It is far too early to claim that Midori is the next Windows, just as it was far too early to claim that Singularity was the next Windows. Microsoft Research is a big place with lots of interesting projects going on, and Midori seems to “just” be one of them, no more, no less.
I concur with Peter Bright’s conclusion: developing an operating system to supersede Windows that is “fundamentally designed for cloud computing” seems like a “risky gamble”. As Bright puts it:
Midori could be anything from a complete dead-end, to the OS 95 percent of the world will be running in five to ten years. I suspect that the truth will lie somewhere in between; a future Microsoft OS will use virtualization to provide backwards compatibility, and that future OS will use managed code. Finally, the asynchronous, networked, fault-tolerant parts will materialize within the next year as part of Microsoft’s cloud computing initiative – a software platform, libraries, and tools. Indeed, this cloud computing platform might be Midori.
Midori is a browser.
http://software.twotoasts.de/index.php?/pages/midori_summary.html
“Midori” is a codename, too, which means it most likely isn’t the final name that’ll be used. What did they call Windows Vista before it was referred to as Windows Vista? Same thing going on here, assuming it ever gets outside of research and development. In fact, if you look up “Midori” on Wikipedia, you’ll see that it is a generic word/name in its own right, so getting upset that Microsoft is using “Midori” when it also happens to be used for a web browser is silly. Heck, Linus Torvalds used the name first for a Linux distro back in 2001, and that’s just what Wikipedia lists under computing: there are also various films by the same name that pre-date that, not to mention who-knows-how-many other places and things.
“What did they call Windows Vista before it was referred to as Windows Vista?”
Doomed?
Hardly, they called it everything that they could possibly call it to promote it.
Exciting
Awesome
Fast
Feature-packed
Modern
Stable
Secure
I don’t, however, see anything that said “Doomed”
Your list is incomplete
Er, thanks for the joke – but FYI it was “Longhorn” I believe.
“What did they call Windows Vista before it was referred to as Windows Vista?”
Doomed?
Er, thanks for the joke – but FYI it was “Longhorn” I believe.
That was before they started removing promised features they could not deliver. After that everyone just called it “shorthorn”.
You just topped my joke
Midori is a melon liqueur from Japan.
Midori means “green” in Japanese, in addition to being a common name for women.
I guess it’s back to the dumb-terminal days my mom used to tell me stories about.
It is if you stick with Microsoft products. No-one is forcing you to do that anymore.
“The safety and portability of managed code would eliminate many of the security flaws that still regularly crop up in software. .NET already makes these bugs impossible”
A string datatype solved that for most languages many decades ago. Also unless .NET can cure lepers and bring the dead back to life, coders will just replace unsafe standard C functions by brand new unsafe C# objects to finish 3 minutes earlier. And even if your code is safe, now you have to trust the VM in addition to the OS core.
.NET != the end of security bugs.
They’re referring to security flaws such as buffer and stack overruns. Unless you’re dropping into native code, these flaws are indeed impossible using managed languages.
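To make that concrete, a minimal sketch in ordinary C# (nothing Midori-specific): an out-of-range write is caught by the runtime’s bounds check and surfaces as an exception, instead of silently overwriting adjacent memory the way the equivalent C code could:

using System;

class BoundsCheckDemo
{
    static void Main()
    {
        var buffer = new byte[16];

        try
        {
            // In C this write could scribble past the end of the buffer;
            // the CLR verifies the index and throws instead.
            buffer[32] = 0xFF;
        }
        catch (IndexOutOfRangeException e)
        {
            Console.WriteLine("Caught: " + e.Message);
        }
    }
}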
Managed code has nothing to do with that though! You can do it all with a sufficiently safe language at compile time.
The advantage of managed code is that it’s easier to verify (the required annotations are part of the VM’s language, and you don’t have to parse and analyze the assembly for each processor you support separately.)
An intermediate bytecode representation also makes it easier to recompile code to partially evaluate a program in a language that allows stuff like currying. C# doesn’t support these sorts of features, though, so really, the only advantage is easier program verification. (recall, you can also verify raw assembly. it’s just hard, and it needs annotations)
People seem to get confused about what managed code really is; it’s really just a special assembly language that doesn’t run on a real processor, and has some extra type information mixed in. Garbage collection isn’t even a part of it, strictly — effectively, it’s a tightly integrated native-code library.
He was (IMHO) saying that managed code is only as strong as the language itself.
Or… the “VM” (which would be the kernel in Midori, I ass_u_me – and would certainly hope to be the case) could have bugs which could be exploited. Most especially in the case(s) where C/C++ or assembly code could run natively, though there are likely to be flaws exploitable from any given language.
In fact, there is NO possible way to create a PERFECTLY secure OS. Perfectly secure would mean no execution would occur, at any level, for any number of clock cycles, that was not safe. As long as there is an API to access from any given language, there will be a bug to exploit, regardless of other security accommodations.
This is not to say that a fully secure OS isn’t possible, it is to say that so long as any user-commanded execution can take place or anyone can write a program to run on the OS, exploits will exist one way or another – at least if software alone is expected to carry the burden of security with only minor hardware accommodations.
A hardware solution would be ideal, in theory, but, of course, then you have problems with integrating security into hardware – bugs which can’t simply be fixed with a nicely packaged security update ( unless every PCB uses replaceable components – which gets expensive fast ( due to patents, mostly ) ), so we get stuck into software due to lack of absolute maturity ( which will take about 10-40 years to arise – with true quantum computing ).
So, managed code is a GREAT start, but it is NOT the be-all-end-all solution. It is a way to change the rules of the game in a very short period of time, but those who live only for violating the rules shall continue to find ways to bend, twist, and, ultimately, entirely break the rules which stand in the way of their freedom. Managed code, COULD, however, create an instant compatibility issue with ALL existing exploits, so exploits would take a couple of years to become a real PITA again.
The biggest obstacle, in reality, is that creating these exploits is PROFITABLE. So long as someone can make money breaking through, someone will. Windows XP created the spy/ad-ware phenomenon and came with (at least?) one pre-installed (Alexa is found on many/(ALL?) XP CDs even before they are ever connected to the internet).
Besides, would Microsoft REALLY be willing to prevent C/C++ or assembly code from running? Would they be willing to create a product so good and secure that there would be no reason to buy the “faster & improved” version of the SAME THING? I say NO WAY, JOSE! Not a CHANCE in hell, let alone on Earth! But I can PROMISE YOU that they will try to make it LOOK as if they had, in fact, done so. But then, they have investors to worry about – too good a product is GREAT short term, potentially VERY bad long term (given Microsoft’s *CURRENT* business model).
Indeed, flaws shall persist so long as the star which guides the voyage of the wise rests above the wrong roof. Money over perfection.
–The loon
But it doesn’t. It provides exactly two things:
1) It provides a language (MSIL) that’s easier to verify than assembly, since the required annotations are there, and it’s the same on all platforms
2) It allows easier dynamic recompilation for things like partial evaluation (which C# doesn’t support), removal of runtime checks (which, for C#, can be more or less optimized out), and polymorphic inline caches (which make method calls slightly faster if you don’t know the object type.)
The first point is the only real advantage of the virtual machine — You don’t need the source code to verify a binary. Current systems handle this with code signing (on embedded systems where you don’t want hardware protection) or hardware protection.
For languages like C#, the win of the second point is small, and probably becomes a cost when you factor in the recompilation cost.
As a side note — languages like Lisp are already compiling to native code by default. In some cases, they’re faster than C, with all the safety of an advanced dynamically typed and checked language. See CMUCL, GCL, SBCL, Allegro, Gambit, Stalin, etc.
The strength of which I speak is on-topic, in regards to security. Yes, fewer bugs will be created in new software; however, any flaw in the language or OS will still be exploitable without further accommodations. Often security flaws are not even classifiable as bugs – the code does exactly what you want: no bugs, no glitches.
The only problem is that the design or implementation itself is exploitable for unsafe execution or data mining.
I remember back when exception handling (try, throw, catch) was expected to put an end to crashing programs. The problem, then and now, is that the correct features must be used in a very precise manner in order to achieve the intended results.
So, you write a PERFECT program, certified by GOD to be bug-free in and of itself. But it runs on platform X, which has bugs and exploits. You tie into a specific API in a less than ideal manner and an unknown exploit is created, making your application the potential gateway for exploitation.
Mind you, once the exploit is discovered, someone will use it to their full advantage, which begins to erode all the gains achieved by a paradigm & language shift.
In the end, nothing is entirely secure – not in hardware, not in software, not even in your mind.
Not to say Midori ain’t a good idea – I think it is absolutely necessary and has much potential. If Microsoft releases a product I genuinely like, I buy it. Windows is simply not one of them.
Of course, I came from the OS/2 world to BeOS, and only played with Windows off and on during my love affairs with ‘niche’ OSes, so I expect more from a company with all the money and resources they have at their disposal. Perfect security I do not expect – but at least try (which it seems they are seriously doing: I haven’t been running an anti-virus on Windows XP SP3 for a long time with no problems, yet I can’t connect a pre-SP1 XP machine to the internet for more than five minutes before an infection occurs).
–The loon
In 10 years (when Midori is actually shipping), nobody is going to be using C++ for anything other than very old legacy systems, like the way Fortran is still around now.
I read a similar comment 10 years ago, and C and C++ are still the base of today’s computing… even Midori will have a part (though a very small one) written in C and assembler.
There is no such thing as an unsafe C# object, unless you are talking about COM interop.
.NET is not the end of all security bugs, but it mitigates a lot of them due to code access security and the sandboxing that comes from having the OS abstracted away. It is the end of memory management bugs, which are about 70% of all bugs in non-managed apps.
From where did you pull that statistic? I’m thinking of an orifice down in your nether regions. Most bugs are pure logic goofs. No type-safe language is going to save you from those. Anecdotally, maybe half of _security related_ bugs are buffer overflows, but certainly not 75% of _all_ bugs.
I don’t think it’s worth throwing away a more or less thoroughly debugged infrastructure just for type safety. After they get done rewriting the whole thing, they’re going to have to deal with the far more numerous and hard-to-detect types of bugs caused by the often fallible human brain. Then throw in a case of second-system syndrome, and you have a complete disaster on your hands.
Most buffer overflows can be detected with static code analysis anyway, without any of the run-time overhead of having the VM enforce it or adding extra code to the generated binaries.
I actually heard that in a Ruby presentation a while back. The guy threw a lot of statistics out there, and I recognized a bunch from studies I had run across myself (like 25% of people saying they often find problems with code after it has been reviewed, and 70% of people having a hard time debugging), so I figured the rest would be accurate. If there is a study behind it, I can’t find it anywhere, so thanks for calling me on it – I won’t be quoting that stat any more.
I never, ever said that old code should be re-written just because it is unmanaged. When you are looking at a new project though, unless you are talking about a very small subset of problems (games, drivers, operating systems, etc), the question should be strongly typed managed code or dynamic managed code, not unmanaged or managed.
Depends on the environment, but for the most part VM overhead is negligible. The big differentiators are typically the massive core libraries in both C# and Java that get pulled in for even the most minimal of apps, and GC. I don’t know Java that well, but on .NET allocation is actually faster than unmanaged for the most part, because everything is going on the heap behind the scenes. But a GC has to walk all the references to figure out what it can free up, and then has to do compaction, which in a big app can give you a noticeable perf hit. For some applications that doesn’t matter (say, a line-of-business app slowing down for a second or so every once in a while), but for anything that needs real time it is unacceptable.
FYI: With Java update 10 (currently a release candidate, to be finalised in the next month or so) it is a bit ahead of .NET in this respect (it should be, it is older). Java now has improved modularity and new packaging called the “Java Kernel”. This allows Java programs to run with a subset of the full JRE (even though the JRE is smaller than the .NET distros). Hopefully .NET will also get this eventually.
Darn that +/- 1!!! I can sympathize, though, it can be VERY hard to remember the importance of ranges based upon a zero-index.
Seriously, this is a bug:
void func() {
    int cnt = 10;
    for (int i = 0; i != cnt; ++i) {
        /* ... */   // loop should use i < cnt
                    // (or the slower, equivalent i <= cnt - 1)
    }
}
Of course, sometimes the matter is a little less obvious, but it seems Microsoft developers do the above more often than not – just use “< cnt” in all loops and you’ll be fine… unless you are using something that represents the maximum addressable value (which is really, really, really STUPID, but probably about as common – I’ve seen it in code from script kiddies fairly often, when they think they are somehow optimizing a program by doing more work).
Naturally, it is possible that the count itself is wrong, or that an object addressed at a given index is invalid (which is a DIFFERENT issue); those bugs can be harder to see, find, and fix than anything.
The real fix would be removing software loops, so that any range bug would cause complete execution failure (this is IDEAL: we want *ALL* bugs present to cause VERY negative, reproducible, and traceable compile-time, or at least run-time, effects).
Of course, software can provide the solution. A good example is for_each statements, where the language or class library can provide the solution by strictly controlling access – but then you can get bugs from there as well, which could be even worse due to their prevalence (think: one bug affecting 100,000 different execution pathways merely because it was compiled into those projects to provide improved security).
–The loon
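On the for_each point above, a small illustration (plain C#, chosen only because it is the language under discussion in this thread): iterating with foreach removes the index arithmetic entirely, and with it the off-by-one bug shown in the earlier snippet:

using System;
using System.Collections.Generic;

class ForeachDemo
{
    static void Main()
    {
        var counts = new List<int> { 3, 1, 4, 1, 5 };

        // No index variable, no bounds expression, nothing to get off by
        // one: the enumerator hands back each element exactly once.
        foreach (int c in counts)
        {
            Console.WriteLine(c);
        }
    }
}

Whether the enumerator itself is bug-free is, of course, exactly the caveat raised above about pushing the problem into the library.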
Here’s the real fix, btw:
void func() {
    unsigned int cnt = 10;
    for (unsigned int i = 0; i < cnt; ++i) {
        /* ... */
    }
}
The real fix for ranges is to not allow a negative number to start with. This is especially true when doing bounds checking on an array or other kind of index. A negative index is always a bad thing – and no, a negative value should not be returned when you request the index for something that does not exist – Java is broken in that fashion.
There are several other reasons to do so:
1) Not having to check for the negative value reduces your checks. (Performance increase)
2) Reduce the logic you have to debug – it can only be positive so if it goes too high, you know the issue.
Well, I, for one, employ a debug-mode set of checks for error-case returns from any function within my own API ( I don’t use standard APIs without further abstraction and management ).
BeOS doesn’t provide access to enough low-level hooks which throw exceptions, instead debugger calls are made directly from within the API, which I dislike immensely. Granted, I hate dealing with exceptions as much as any other coder, but I prefer to do my own error handling as much as possible.
My error handling is accomplished via a global non-release-mode object called ‘ErrorTracker’, which is accessible via the _error pointer. So when I have an error, a call is made: _error->Critical(ERR_LOOP_BOUNDS, ClassInfo(this).cleanName(), “functionName()”, __LINE__, __FILE__, kLoopBoundErrorString);
I use macros to clean it up:
CriticalError(ERR_LOOP_BOUNDS, this, “functionName()”);
The macros are disabled in release-mode and the _error pointer is inaccessible. Of course, my debug mode is really well layered, supporting four modes each giving more information than the last, the most advanced creates a GUI so that lists, allocators, and all other possible fail-points can be visually monitored with little reliance on tracing through the command line or falling back into the bdb ( Be Debugger ).
–The loon
If the int is negative, the code never got that far anyway, as ‘someone’ is requesting something wrong; an improper return is considered fatal in most cases, but not all… but then you’d need to see my API to really get it, I guess – and it will not be made public for at least a year or so (I’m waiting on Haiku to repair one specific bug before I can make adjustments, as I’m ending all development on the old code base).
Yes there is. C# allows “unsafe” blocks. See http://msdn.microsoft.com/en-us/library/chfa2zb8(VS.71).aspx
These would be disallowed on a Singularity based system, though.
yes there is. Add the unsafe keyword and it is unsafe.
http://msdn.microsoft.com/en-us/library/chfa2zb8(VS.71).aspx
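For anyone who hasn’t run into it, a minimal sketch of what such a block looks like (it must be compiled with the /unsafe switch, and it is exactly the kind of code a Singularity-style system would refuse to load):

class UnsafeDemo
{
    // Requires compiling with the /unsafe switch.
    static unsafe void Main()
    {
        int value = 42;

        // Raw pointers, outside the reach of the verifier.
        int* p = &value;
        *p = 7;

        System.Console.WriteLine(value); // prints 7
    }
}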
No, it is not the end of memory management bugs. If your objects are still referenced they will never be garbage collected and you will have a memory leak.
Instead of seeing null-pointer errors from C++ programs, you now see “Object reference not set to an instance of an object”
.NET is good but *not* a revolution.
IIRC, in Singularity only the garbage collector itself and other I/O parts (plus something else I surely forgot) are written in ASM/C and unsafe C#; what you write inside a SIP is completely managed/safe code.
It is so not the end of memory management bugs, not by a long shot. Arguably it’s easier to create a memory leak in managed code than it is in unmanaged code. Rooted objects are a source of constant memory battles in .Net, and trying to find out what the rooted object is usually requires breaking out WinDbg and doing a fairly non-trivial debugging session.
I’ve seen so many .Net developers go into writing an application assuming there cannot be memory leaks because of the above attitude. Then when their app swells to several hundred megs they are left wondering why. And it’s almost always due to objects not being able to be cleaned up by the GC due to rooting.
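A minimal sketch of the kind of rooted-object leak being described (ordinary C#; the class and method names are made up for illustration): a static collection is a GC root, so everything it references stays reachable and is never collected, no matter how many garbage collections run:

using System;
using System.Collections.Generic;

class Cache
{
    // A static field is a GC root: everything reachable from here
    // stays alive for the lifetime of the application.
    static readonly List<byte[]> Entries = new List<byte[]>();

    public static void Remember(byte[] data)
    {
        Entries.Add(data); // never removed -> a managed "leak"
    }
}

class Program
{
    static void Main()
    {
        for (int i = 0; i < 10000; i++)
        {
            // Each buffer remains referenced by the static list,
            // so collection after collection reclaims nothing.
            Cache.Remember(new byte[1024]);
        }
        Console.WriteLine("Roughly 10 MB still rooted and unreclaimable.");
    }
}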
It’s years away from even potentially being able to ship as an actual product.
If you think it will, you’re smoking. Microsoft ain’t about to give up the cash cow that its shareware ecosystem feeds it. And as long as Windows – any version of it, present or future – retains that anchor, it will be the slowest OS on the market.
Meanwhile that turtle named Linux just continues to roll up a few million more dedicated users year after year….
You didn’t even read the blurb, let alone the article. They are talking backwards compatibility through virtualization.
And last time I checked, Linux adoption took a dip recently. Still reaching for that 1% of the market.
Actually, it’s increasing now, but still around the 1% mark.
What will be interesting is whether Midori does become a reality, and whether anything comparable will be developed.
Perhaps Midori could be a guest in Linux-based hypervisors but if it becomes a full-fledged OS then what will the alternative be?
SharpOS?
Linux would suddenly become a “last-generation” OS with traditional features and functionality.
Like Plan 9?
Honestly, the whole cloud computing thing is one of those things MS has been talking about for a very, very long time that nobody else is really excited about. Internet apps are cool and everything, but they open up a new market for applications; they don’t really replace the existing one. The few cases where they do make sense as replacements for traditional client apps are entirely due to the zero-cost deployment to an unlimited number of people. MS has put a lot of work into their ClickOnce deployment story; I think that is all we really need (or want).
The competition is clear – AmigaDE. I hope my irony is clear enough.
I wonder, though, if environments like Flash/Flex could become full-fledged OSes? E.g. the REBOL3 design is close to an async OS, with features like an Amiga-like device model, and the Wildman project (R3 on a thin HW layer) is planned too…
Yer. Wine.
Midori is just a research project, NOT a product.
Actually Singularity is a research project and Midori is a product in incubation stage i.e. it may never see the light of day and is only in the beginning phase. Much of its design is said to be inspired by Singularity.
Midori is NOT a product.
… “and they say it’s the clean-break from Windows many of us have been waiting for …”
Many of us have already been waiting quite a few Fridays to see ‘the real break’. Was XP one of them? Maybe Vista?
As far as I’ve experienced, every major new piece of software from MS has almost always been a break into our wallets, leaving them clean.
Not even mentioning the reliability/security…
There are many interesting aspects to Singularity that can’t be found in Inferno, although they have many things in common.
But still – the way they will deploy Midori (first running hosted as a “cloud” computing environment, later standalone) is what Inferno should have been.
Hopefully MS does it right.
“The SD Times claims to have seen internal Microsoft documents detailing what Midori actually is, and they say it’s the clean-break from Windows many of us have been waiting for.”
The clean break from Windows has existed for some time… it’s called Linux. Or, if you prefer, Macintosh. I have never been able to figure out why so many people are willing to put up with a crap product, throwing good money after bad, upgrade after upgrade. If Windows is working for you, great; if not, there are alternatives if you’re not too stubborn or lazy to learn something new.
Singularity hasn’t yet been proven in any capacity whatsoever, so hyping Midori just seems like yet another Microsoft vapourware campaign to get people to wait for the non-existent product they have. It’s not like they haven’t done this before.
Cut the rhetoric, segedunum. I see you didn’t read the article at all, since it’s quite clear Microsoft isn’t hyping anything here – the SD Times is. In fact, Microsoft is ‘anti-hyping’ the whole thing by clearly and officially stating that Midori is nothing more than yet another project in Microsoft Research.
Darn, some people are so stuck in Microsoft-bashing-troll mode it’s almost laughable.
Oh yer. I opened my free newspaper that has no interest in IT and technology whatsoever on the train this morning, and what do I see? “Is this the end for Windows?”, and pretty much the same article word-for-word but with less of the technical details and more of the hype. Really, Thom, I thought you would have at least grokked this kind of ‘leaked details’ marketing campaign over the years.
Anyone who thinks this isn’t some kind of daft, orchestrated marketing campaign with lots of juicy ‘leaked’ information is an idiot, and it’s not like they haven’t done this before. Does anybody know where the object-oriented and modular features of Windows went – the ones we had so many articles about before Windows 2000? These articles look suspiciously similar.
Oh, and what do I find? The utterly laughable thing is that the ZDnet article linked to there calls this ‘Cairo revisited’. My stomach muscles are aching from the belly laugh.
An awful lot of people seem to know an awful lot about it all of a sudden, which is another trait ;-).
You think that’s Microsoft bashing? Excuse me while I get back on my chair from laughing. Again.
Well, actually, an awful lot of people have read the details of Singularity, learned about the Midori incubator project (and, to be fair, many of those projects have become products of one kind or another in short order) and then embarked upon an orgy of speculation, in order to seem to have “the scoop” on a future version of Windows (which, regardless of your view of it, is of global significance).
Prompted by what, exactly? All of a sudden it’s going to be some cloud computing OS and virtually take over the internet, and it’s in publications where IT, computing and technology are not even considered.
Actually, that takes a concerted amount of PR effort ;-).
In the absence of anything newsworthy, and with all sorts of cool and hip terms such as ‘cloud computing’ that all the kids are using but you have nothing for, you might as well prod some people into talking about you again. Who knows, people might even wait for you ;-).
Who said this was going to be a future version of Windows?
Vapourware V2
One suspects it will be many, many years before a Midori becomes reality in a fully-fledged form.
In the first place, it is commercially very risky for a huge, risk-averse corporation. A full-strength Midori rolled out worldwide would mean hocking oneself to the cloud, which in turn means hocking oneself to the major infrastructure companies or even potentially hostile infrastructure regimes (China, the former Soviet Union, etc.).
In the second place, the Midori path doesn’t present the user with anything different. The user still sees the same apps or at least the same functionality. The difference is that whether the app runs off a local hard disk or off a computer thousands of miles away has become transparent, as has where user files are kept and all the rest. That’s a hugely complex and expensive way of turning a full circle. At the end of the day user still sees the same app, whereas what might really get the user going is a much better app. Meet the new app, same as the old app and so on.
So I’d guess that for a long time, Midori will basically be presented as a money-saver for the enterprise: “Let us take over your whole IT shebang for you, run via the cloud (our own, custom-built server farms) into dumb or pretty dumb terminals.” That brings problems of its own, of course, but it also places Midori squarely in context. The context is meeting Google head-on in any attempt by Google to do exactly the same. This is more evolution than revolution. Ah, competition.
They have already done this once, with NT. Apparently this won’t be ready for another 7-8 years or so, and it will probably take another 7-8 years after that for the “XP” edition to come out that obsoletes NT. So my guess is we are talking 14-16 years before Windows as we know it is fully gone.
I’ve been reading the Midori design notes on and off for the last year and a half. It’s still very much in incubation, and it’s highly likely that the ideas and code from the project will be dispersed among other, more traditional products rather than coming together into a single coherent product. All the ‘features’ being hyped up by the SD Times are aspirational… the group has come up with cool ideas and mechanisms for async programming, but the jury is still out on them.
Midori is something to be excited about, but I wouldn’t take it as something that Microsoft is hyping. The hype is mostly coming from the press which seems to have gotten leaked internal documents.
I have to wonder if Midori isn’t going to first show up as MS’s HPC OS. It sounds like it would be a good fit: a massively parallel OS with nice dev tools that clusters easily.
The quote that really makes me think HPC is the direction Midori is going is below.
“Mere mortal developers need a programming model/application model that lets them distribute processing to massively parallel devices without having to become experts,” explained Forrester Research senior analyst Jeffrey Hammond in an e-mail. “Even with the quad-core Intel chips today, you have to have specialist teams to take full advantage of them,” he added.
As of now, MS doesn’t have a big presence in this market, and I have to wonder how well the NT kernel is scaling; Windows Server is not known for its clustering ability. A Windows HPC Server 2008 and Red Hat EL4 mixed cluster ranks 23rd in the Top 500 supercomputers, but Windows is getting help from RHEL4. There may be only so much MS can do with the NT kernel in the HPC space, and this may be their answer to that.
http://www.top500.org/list/2008/06/100
I can see where the shrewd business minds at Microsoft are going with this one:
1) Release Vista
2) No one buys Vista, they insist they still want XP.
3) Do not listen to the customers. Pull the plug on XP.
4) Instead, spend millions on publishing a fake experiment designed to convince people that they should just hand over their money to Microsoft anyway.
5) Also send out rumours that something called *Windows 7* is coming *real soon now*, so hopefully people will wait for *Win7* if they won’t use Vista, and not use Linux (in the process, making it less likely that anyone in their right mind will use Vista).
6) (OOH, just got to use more advertising $$$!) Publish info saying that the NEW NEW ALL-IMPROVED WINDOWS MIDORI-IN-A-CLOUD is coming *kinda-real-soon-now*, and that it will make all previous versions of Windows obsolete. Also, announce it will not be backward compatible with existing software, so it will be more traumatic and uncertain than moving to enterprise-proven products like Linux (thereby ensuring that no one will buy Vista or *Windows 7*, and everyone will switch to Linux or MacOSX instead of waiting for hell to freeze over and *Midori* to come out).
Oh yeah – I see that Microsoft really knows how to market their products. And their campaign IS working! I am typing this on Ubuntu, which took 20 minutes to install!