While newly minted Windows head Steven Sinofsky continues to play his cards close to his chest, we’re seeing signs that Microsoft is rethinking its monolithic approach to not only the mass-market Windows operating system but the entire family of Windows products, from servers down to CE-based embedded devices. First up is a streamlined microkernel codenamed MinWin, around which a re-engineered Windows line will be built. Described as ‘the Windows 7 source-code base’, in reference to the successor to Windows Vista which is slated for a 2010 release, MinWin strips back the current NT-based kernel to the barest of bare metal. Ars Technica has more, including a one-hour video presentation [.wmv] about MinWin. Sassy quote of the day from Microsoft kernel engineer Eric Traut: “A lot of people think of Windows as this really large bloated operating system, and that may be a fair characterisation, I have to admit.” My take: Maybe this will be closer to reality after all?
Why do we need another microkernel? Don’t we run Windows XP because it is so fast?
This is not ‘another’ microkernel. This is Windows NT with all the fluff stripped out, making it run in 25MB of disk space and 40MB of RAM.
That will surely grow, I’d assume.
Does this new kernel mean MS will finally give up on backward compatibility?
This isn’t just a kernel, nor is it new. It’s currently shipping with Vista.
http://community.winsupersite.com/blogs/paul/archive/2007/10/19/mic…
Read your own link, that’s about Windows 7
Read the bottom of the post at that link. It points to 3 articles about “Longhorn”/Vista and how they are built atop MinWin.
Bloggers are *always* going on about where they think they guessed something right in a previous blog entry. The problem is that his (and hence your) claims don’t align at all with what Eric Traut of Microsoft, the man who actually knows (as opposed to guessing and blogging about it), said about MinWin. MinWin is, indeed, something new, and it has not been used in a product yet. It is a completely new kernel intended to be used at the core of Windows 7. Like I say, Windows 7 looks like the biggest overhaul since NT.
And you know what? I, a long-time, dyed-in-the-wool, card-carrying Microsoft detractor and Linux advocate, am actually a little excited about that efficient little core he was showing off. I get the impression that they’ve learned from the Vista development cycle, taken a page out of the *nix playbook, and are taking a step in the right direction.
You’re right… MinWin is not in Vista or 2k8. It is also not a huge departure from the NT codebase.
“””
“””
If that’s true, someone’s playing games with the word “microkernel” again. You can’t go from the kernel that Windows uses now to a microkernel and call the change trivial.
If you watch the talk (which apparently no one who writes these articles and blurbs actually has), Traut mentions nothing about microkernels. It’s someone else’s imagination.
NT is not a microkernel, but it is unique in that it is explicitly designed to display different “personalities” to user-mode.
It’s Paul Thurrott, not just any old blogger. Yes, he is wrong on occasion, but not in this case. Traut got facts about Windows history wrong.
http://www.istartedsomething.com/20071019/eric-talk-demo-windows-7-…
It’s not as if he couldn’t have screwed up the history of MinWin as well. He hasn’t always been part of the kernel team; he came in after MS acquired VPC et al. If you actually read the links at the bottom of that blog post, you’d see that the comments on MinWin were sourced from Microsoft and documented during “Longhorn’s” development.
If MinWin did not exist at that time, explain the existence of Paul’s articles. Also explain the Microsoft documents on their website stating that MinWin is part of the parent partition above the hypervisor in Server 2008.
You can choose not to believe it. You can even choose to believe that Seven won’t be based on Vista. In either case, it’s an incorrect assumption.
“””
“””
Right. Since he made a couple of minor errors regarding an obsolete MS product from 22 years ago, it’s obvious that Microsoft’s Director of Kernel and VM Development just doesn’t know that a major project which he and his team are currently working on is already being used as the basis of the company’s flagship product. He mistakenly thinks that it’s not being used anywhere yet.
*If* this is truly the case, I would not expect Windows 7 to be ready for a *very* *very* long time.
It could’ve been an error of omission. I’m not claiming incompetence. You can read about MinWin directly on MS’ website.
http://search.live.com/results.aspx?q=site%3Amicrosoft.com+minw…
If you still believe it’s new to Seven, so be it. But it isn’t new, and it isn’t a signal that Seven is a new code base. Believe the (non-Microsoft generated) hype or believe the docs. Time naturally will reveal more details.
I just know that people always have radical dreams of what’s to come from events like this, feed off of the inaccuracies and suppositions on the net, then blame all their misconceptions on MS. When Seven is indeed revealed to be an evolution of technologies in Vista (and of course, the advancements from SQL, .NET, etc.), but is not an all new code base, there will be disappointment and blame from those expecting otherwise.
“””
“””
Well, there is one slightly less than salient factor which has dampened my enthusiasm. And that is the fact that rather than a minimal version of Windows running in 39MB with 7MB free, it’s really running a naked kernel with a web server built in, on 76MB of virtual memory and using 61MB of it. Far from a great technical accomplishment, it’s actually pretty sad.

My Linksys WRT54GL router, with 4MB of flash and 16MB of total virtual space (all in the form of RAM), is a *lot* more functional than what he was showing off. It needs no swap, is running an ssh server, a web server, a dhcp server, a forwarding/caching nameserver, and maintaining an openvpn connection to my office… and excluding disk cache, it is only using 8MB of RAM and 3.5MB of the flash for the whole OS and associated applications. (I’ve loaded some other stuff on it, like tcpdump and other networking diag tools, else it wouldn’t be using more than 2.8MB of the flash.)
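For anyone who wants to do the same accounting on their own machine: the router itself runs BusyBox, so there is no Python on it, but on any desktop Linux box a rough sketch like the following reads /proc/meminfo and reports usage excluding cache, which is the figure quoted above.

#!/usr/bin/env python3
# Rough sketch: report RAM usage excluding disk cache on a Linux box.
# Assumes the standard /proc/meminfo fields (MemTotal, MemFree, Buffers, Cached).

def meminfo_kb():
    """Parse /proc/meminfo into a dict of {field: kilobytes}."""
    values = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            values[key] = int(rest.strip().split()[0])  # values are reported in kB
    return values

m = meminfo_kb()
used_excl_cache = m["MemTotal"] - m["MemFree"] - m["Buffers"] - m["Cached"]
print("Total RAM:            %.1f MB" % (m["MemTotal"] / 1024))
print("Free RAM:             %.1f MB" % (m["MemFree"] / 1024))
print("Buffers + cache:      %.1f MB" % ((m["Buffers"] + m["Cached"]) / 1024))
print("Used excluding cache: %.1f MB" % (used_excl_cache / 1024))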
Why would they do that? Backwards compatibility might be an issue for freeware operating systems with shoestring budgets, but it shouldn’t be for something that actually generates income…
Why would I want to use a “freeware” operating system?
Did you mean: “open source” ?
My point is simply that, with large changes to the architecture, providing backwards compatibility might prove to be problematic. XP was not 100% backwards compatible (meaning a number of applications refused to run properly or at all), and from what I have read, for Vista it was even a bit worse.
So now I wonder how that will fare in the next version. The kernel seems to be an important part of the equation, although I could be wrong.
No, I meant freeware — i.e., software that does not generate income. But if you pay for it, you kind of expect basic things, like backwards compatability…
How can this:
“Why would they do that? Backwards compatibility might be an issue for freeware operating systems with shoestring budgets, but it shouldn’t be for something that actually generates income…”
… mean this:
“No, I meant freeware — i.e., software that does not generate income. But if you pay for it, you kind of expect basic things, like backwards compatability…”
I can only assume that the initial comment was supposed to be:
“… Backwards compatibility might NOT be an issue for freeware operating systems with…”
… And you’re wrong.
Having no/limited backward compatibility has nothing to do with free (as in beer) OSes with shoestring budgets. Free (as in speech) OSes tend to care less about legacy support even if they generate a lot of income (RedHat, Novell) simply because they don’t force you to upgrade (e.g. RHEL 2.1 is still supported) and because you have access to their code.
When Microsoft drops support for Windows 2000, you are forced to upgrade; nobody but Microsoft can release security patches and OS updates. On the other hand, when RedHat stopped supporting RedHat Linux 9 (back in 2004), a large number of willing third parties took the code and continued releasing security updates. (And as far as I know you can still buy support for RedHat Linux 9.)
– Gilboa
You mean ‘copyleft’ OSs. Copyleft has nothing to do with freedom.
And such support for legacy code is required because they broke compatability with newer versions; you couldn’t expect to run your old programs on your new system, unless you got lucky. And we’re right back where we started again.
“You mean ‘copyleft’ OSs. Copyleft has nothing to do with freedom.”
Somehow I can’t really find a suitable comment to your remark.
I think I’ll just ignore it.
“And such support for legacy code is required because they broke compatability with newer versions; you couldn’t expect to run your old programs on your new system, unless you got lucky. And we’re right back where we started again.”
Wrong again.
Both RHEL 2.x/3.x and Fedora 1.0 ran RH9 applications just fine.
RedHat didn’t release the source of RH9 because of API changes in FC1 and RHEL 3; the source was released because RedHat is an OSS company that was built on OSS roots and because RedHat Linux 9 had a GPL license.
Same goes for the present – RedHat doesn’t release the sources of RHEL 5.0 because of possible future (?!?!) API breakage, and they don’t even do it because of the GPL license (RedHat releases the sources as easy-to-rebuild SRPM files – far above and beyond what the GPL requires) – they do it because it’s their basic philosophy.
Given your current post (and previous ones), one can only assume that you know little about the OSS movement. I’d advise you to do a little reading before your next comment.
– Gilboa
P.S. “compatability” should be “compatibility”.
I watched the 1 hour presentation yesterday. It does *not* contain 1 hour of technical content.
I don’t think what is interesting here is that Microsoft are thinking in terms of microkernels. The NT kernel is already a hybrid kernel. My understanding of what’s being presented here is essentially a Microsoft version of Xen, it consists of a hypervisor, a master partition (Dom0 anyone?) and lots of guests (“partitions”). The only difference I can see is that guest OSes won’t *require* modification, and that they are introducing a focus on bringing in third-party modules (which is never surprising with MS).
The technology here isn’t that exciting or new. What’s interesting is that this built-in virtualisation could lead to a clean break in Windows 7 that drops all the nastier legacy APIs, removes the cruft and moves it into a safe virtual environment. This is hardly surprising either; a lot of people have been saying MS should be doing this.
Another gem in the presentation is a rag on people using uninstallers, and a hint is dropped that Microsoft are working on the clean installation and uninstallation of software. Are they going to push and strengthen MSI? Package manager, anyone…?
They already have several package managers, depending on the servicing stack used: MSI/msiexec.exe for applications; Update.exe for OS packages on pre-Vista Windows; and Component-Based Servicing (CBS), used by pkgmgr.exe/ocsetup.exe/pnputil.exe, which replaces Update.exe on Vista and above. They are also continuing to update MSI. Windows Installer 3.5 shipped with Vista and 4 is currently in beta.
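For what it’s worth, the MSI side of that list is easy to drive from a script already; a hedged sketch in Python (the package path and product code below are just placeholders, and this simply shells out to msiexec.exe with its standard flags):

import subprocess

# Sketch only: drive msiexec.exe for a silent install/uninstall with a verbose log.
# "example.msi" and the product-code GUID passed in are placeholders, not real packages.

def install_msi(msi_path, log_path):
    """Silently install an MSI package, writing a verbose log."""
    return subprocess.call(["msiexec", "/i", msi_path, "/qn", "/l*v", log_path])

def uninstall_msi(product_code):
    """Silently uninstall by the product code registered at install time."""
    return subprocess.call(["msiexec", "/x", product_code, "/qn"])

if __name__ == "__main__":
    install_msi("example.msi", "install.log")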
Most of what he was talking about is present in Vista via the Trusted Installer, the layering gates, and system resource virtualization.
The NT kernel is already a hybrid kernel.
There’s no such thing as a hybrid kernel.
My understanding of what’s being presented here is essentially a Microsoft version of Xen, it consists of a hypervisor, a master partition (Dom0 anyone?) and lots of guests (“partitions”).
Yes, the broken Xen privileged guest model.
It looks as though what will happen in the future is that this Viridian model will be what you will install on your PC, and any given environment that you run will be run inside a virtual machine.
I’ll not go there this time.
They’re making some interesting choices, though. For instance, the Microsoft hypervisor does not require special kernels. Kernels do not have to be altered in order to run on the Microsoft hypervisor. With Xen, they do need to be altered.
I’ll not go there this time.
Well, let’s not ;-).
They’re making some interesting choices, though. For instance, the Microsoft hypervisor does not require special kernels.
Well, I suppose it goes back to that whole recent debate about where you put the drivers. Can you really make an OS obsolete by running everything within the hypervisor, or is it just a ridiculous idea since you have to have drivers somewhere regardless? It goes to the heart of why some people think the notion of privileged guests is broken, especially in Xen, because you’re not working from a known base. The Xen kernel diverges wildly from mainline, and there’s no end in sight.
Whereas Linux can get away with running as the base system (you have to run it on something!), Windows probably can’t because of its large number of bizarre dependencies, but even then, they could probably go the KVM-type route and run everything on a very small system with a very small kernel. There tends to be a lot of context switching with privileged guests, so anyone who says this model is better is silly. Also, a privileged guest means one big point of failure for all your machines, and the domains are a maintenance nightmare.
Basically, just like XenSource and VMware with their very strange ideas, Microsoft would love to create some all-encompassing lock-in here, and hopefully lock out future Windows versions from any other virtualisation solution and create a Windows container (protector).
Kernels do not have to be altered in order to run on the Microsoft hypervisor.
Not sure really. They might be doing this through some form of hardware virtualisation (I can imagine Microsoft driving this), or falling back to full software virtualisation like VMware does as well as allowing for modified guests for better performance.
This probably explains why I run VMware Server, will continue to for some time, and why I’ll recommend ESX to people who need it – until VMware starts getting some bizarre ideas, at which point I really, really hope KVM will have saved us all from this madness.
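As an aside, whether the hardware virtualisation route mentioned above is even available on a given box is easy to check under Linux; a minimal sketch, assuming the usual /proc/cpuinfo flags (“vmx” for Intel VT-x, “svm” for AMD-V):

# Minimal sketch: check whether the CPU advertises hardware virtualization
# extensions on Linux ("vmx" = Intel VT-x, "svm" = AMD-V).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x present: full hardware virtualization is possible")
elif "svm" in flags:
    print("AMD-V present: full hardware virtualization is possible")
else:
    print("No hardware extensions: software or paravirtualization only")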
Xen 3.0+ supports full hardware virtualization. Of course, it requires the heavily-modified Linux Dom0, but the Microsoft solution will similarly require a privileged, Viridian-aware Windows guest. Viridian is almost a one-to-one mapping of the Xen design philosophy from Linux to Windows.
In any hardware virtualization solution, there exists a certain functional subset that is a single point of failure for all hosted guests. The flaw in the Xen/Viridian model, IMHO, is that this functionality is split between two distinct software components, and the privileged guest contains a lot of functionality that wouldn’t otherwise be a part of this critical subset.
The hypervisor can’t fail, the host can’t fail, the coherency between them can’t fail, and the superfluous non-host code can’t cause the privileged guest to fail. If any one of them does, all of the guests go down. Contrast this with the KVM or VMware ESX approach, where the entire critical subset, and not much else, is in the hypervisor.
The hypervisor probably won’t fail. The root VM’s drivers could fail and that would be the source of problems. On the other hand, since we’re talking about normal Windows drivers here that are used on millions of machines under all kinds of circumstances, the core virtualization-sensitive drivers probably won’t fail more often than the hardware itself. After all, enterprise-grade network and storage drivers are usually quite well-tested. This is likely better than a custom special-purpose NIC driver to plug into the hypervisor. Traut mentioned that the MS hypervisor will NOT be extensible at all.
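To make the dependency structure being debated here concrete, here is a toy sketch (nothing to do with the real Xen or Viridian internals, just the shape of the “privileged guest” model being argued about): every guest depends on both the hypervisor and the partition that holds the drivers.

# Toy model of the dependency shape discussed above (not real Xen/Viridian code).
# In the "privileged guest" design, a guest's I/O depends on BOTH the hypervisor
# and the root/Dom0 partition holding the drivers; if either fails, the guest is down.

class Component:
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)
        self.failed = False

    def is_up(self):
        return not self.failed and all(d.is_up() for d in self.depends_on)

hypervisor = Component("hypervisor")
root_partition = Component("root partition (drivers)", [hypervisor])
guests = [Component("guest %d" % i, [hypervisor, root_partition]) for i in range(3)]

root_partition.failed = True  # e.g. a buggy driver takes the root partition down
print([(g.name, g.is_up()) for g in guests])  # every guest loses its I/O path

In the KVM/ESX shape described a few posts up, that second dependency collapses into the hypervisor itself, which is the crux of the disagreement.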
I agree that “hybrid” is not the best way to describe the NT kernel series, and neither is “modified microkernel”. Hybrid implies a union of multiple designs. NT is essentially a monolithic kernel; it cannot be considered a microkernel.
It’s a monolithic kernel that contains an architecturally distinct core. It has none of the characteristics of a microkernel besides this logical structure, and this is merely a programming expedient rather than a consequence of interprocess communication.
Maybe this structure disqualifies NT from being called a monolithic kernel. In that case, you can call it a macrokernel. A lot of people have taken to describing Linux as a macrokernel, and there’s not a lot of space between NT and Linux in this sense.
If MS is going to re-architect Windows like that, it’s going to take more than 2-3 years to finish. But then perhaps, after ~20 years, MS might finally get Windows right.
I’m nobody to say what Microsoft should or shouldn’t do…
But wouldn’t it be easier if they just added a layer of virtualization to the OS to handle backwards compatibility?
Old applications could run easily there; even if that’s not a great design, Microsoft should look to the future. The user base would be happy and the system wouldn’t be so bloated…
Maybe even the same sort of API sets as Wine/CrossOver…
Win NT used to run on machines with 24MB of RAM. I’m not quite sure what is supposed to be slim about this kernel if it needs 40MB without a GUI. It’s neither a microkernel by design nor a micro-small kernel by appearance.
Nothing to see here, move along.
Say QNX?
“Nothing new here folks, now move along.”
As a side note, it’s amazing to me that Microsoft has the power to clobber its faithful users with the cruft that Vista is, and yet these same people still care about what will be forced upon them in three to five years.
It’s more about the tech than anything else.
It’s always interesting to see what MS or Apple will do as they can popularize some radical concepts.
Yeah, for real. Win 2k Pro and XP Pro can happily run in 45MB of RAM if you disable a bunch of useless stuff. NT 3/4 run fine in 24-32MB of RAM.
I don’t know for sure, but I would wager strongly that this is 40 MB of total virtual address space. There are probably many pieces that could be paged out without performance loss given RAM limitations.
“””
“””
The numbers are actually all shown in the talk.
39MB total physical memory
7MB free physical memory
37MB total paging file
8MB free paging file
So there is 76MB of virtual memory. 61MB of which was being used at the time the snapshot was taken.
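Spelling out the arithmetic behind those two figures (the numbers are taken straight from that screen):

# Memory statistics from the talk, in MB.
phys_total, phys_free = 39, 7
page_total, page_free = 37, 8

virt_total = phys_total + page_total                              # 39 + 37 = 76 MB of virtual memory
virt_used = (phys_total - phys_free) + (page_total - page_free)   # 32 + 29 = 61 MB in use
print(virt_total, virt_used)  # 76 61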
I’m now pretty curious about this. You’re right, of course; I forgot the memory statistics screen.
Now I wonder what’s so large down there.
“””
“””
Indeed. I just did a quick check, and I can boot my Ubuntu Gutsy laptop with a 2.6.22 kernel into single user mode and run lighttpd on 38MB of total virtual space without any tweaking. Over 2MB free. And over 24MB in combined cache and buffers after boot.
What I question is the logic behind dedicating so many resources to something that I doubt will yield the results they require to turn it into a maintainable product for the long term.
Someone at Microsoft needs to be sent to their local university to take “System Analysis and Design” – the cycle can only repeat so many times before you have to throw it out and start again. Microsoft needs to face reality: re-engineering Windows will take forever and yield little in terms of benefits to them or their customers.
Throw out the code base, embrace FreeBSD 7, and build a virtualisation layer for compatibility; heck, even go so far as to implement OpenStep and bring it up to date. There is nothing stopping them; they have the resources – it’s too bad that they use the “UNIX-Haters Handbook” as the source of all direction within Microsoft rather than facts.
So basically you are saying MS should make OSX 10.0?
“So basically you are saying MS should make OSX 10.0?”
I think he means MS-X 11
“Tyrannosaurus” or “Predator” could be a nice codename
Or MSUX. Whatever they do, something needs to be done to correct the system’s deficiencies.
Throw out the code base, embrace FreeBSD 7
Why embrace an inferior OS? Face it: FreeBSD has inherited its design from the old BSD Unixes. It was never internally designed for SMP – and after years of work, its SMP scalability is still not very good.
NT, as much as people hate it, was designed from the ground up to support SMP. With multicore CPUs already here, it would be stupid for them to adopt an operating system that still needs a lot of that work when the NT kernel already did it a decade ago.
If I had to choose between NT and FreeBSD for a future OS, I’d choose NT without a doubt. The only Unixes worth using are those that recognised the huge UNIX design mistake of not being designed for SMP machines and took radical approaches to evolve their codebases to solve the problem, like Solaris or Linux.
You are right. As an OS design, Mac OS X is not that impressive except in its haphazardness. Especially the fact that there are two layers of APIs that are intermingled (the BSD layer and the Mach layer) without much apparent rhyme or reason.
Kaiwai is pretty much wrong on this one: the way to keep a system going over time is to loosen the couplings between its parts and replace them one or two at a time when the old design is outdated or when new techniques come about. Starting from scratch and rewriting is a recipe for disaster of Netscape Communicator proportions. The NT kernel is already designed well with componentization and loose-coupling in mind. Windows user-mode is evolving and it’s not a one-release job.
But that is looking at Mac OS X – which isn’t a purely *BSD system. I only threw out FreeBSD 7 as it was the first to come to mind – why not use the OpenSolaris core and build upon that?
Evolution is OK, if it is evolving in a forward direction. Right now it’s going backwards at an accelerating pace – none of the problems are being addressed; layers upon layers of backwards compatibility are pulling the operating system further and further back.
Sometimes one just has to stand up, say “this is getting beyond a joke”, and then do something about it. A bit of short-term pain for long-term gain.
As for Netscape, it failed because they dumped millions of lines’ worth of code onto the open-source world, poorly documented and badly written – with the expectation that the ‘magic of open source’ would solve the problem like a silver bullet – and we all know how that turned out.
From my use of Vista, there’s more than a little evidence that it is, in fact, a BSD now. I’ve been collecting “unnecessary BSDisms” to see if I can make an article out of it…
What’s wrong with the kernel? It’s not like everyone blames it for the bloat. The rest of the system and how it’s put together – that’s what matters. Users don’t “run the kernel”, they run an operating system.
I think he means MS-X 11
Xenix revival, silly
But really, I hate all the constant Microsoft-related articles – cut it out, “Thom”.