Earlier today, OSAlert ran a story on a presentation held by Microsoft’s Eric Traut, the man responsible for the 200 or so kernel and virtualisation engineers working at the company. Eric Traut is also the man who wrote the binary translation engine for the earlier PowerPC versions of VirtualPC (interestingly, this engine is now used to run Xbox 1 [x86] games on the Xbox 360 [PowerPC]) – in other words, he knows what he is talking about when it comes to kernel engineering and virtualisation. His presentation was a very interesting thing to watch, and it offered a little more insight into Windows 7, the codename for the successor to Windows Vista, planned for 2010.

A few months ago, I wrote a story called “Windows 7: Preventing Another Vista-esque Development Process”, in which I explained how I would like to see Windows 7 come to fruition: use the proven NT kernel as the base, discard the Vista userland and build a completely new one (dropping backwards compatibility, reusing code where it makes sense), and solve the backwards compatibility problem by incorporating virtualisation technology. Additionally, maintain a ‘legacy’ version of Windows based on Vista (or, preferably, 2003) for businesses and enterprises that rely on older applications.
Traut’s talk gives some interesting insights into the possibility of this actually happening. He discusses virtualisation at four different levels:
- Server Virtualisation: Virtual Server 2005/Windows Server 2008
- Presentation Virtualisation: Terminal Services (RDP)
- Desktop Virtualisation: Virtual PC
- Application Virtualisation: SoftGrid Application Virtualization – this allows applications to run independently, so they do not conflict with each other. Traut: “You might think, well, isn’t that the job of the operating system? Yeah it is. That’s an example of where we probably didn’t do as good a job as we should have with the operating system to begin with, so now we have to put in this ‘after the thought’ solution in order to virtualise the applications.”
There was a lot of talk about the new hypervisor in Windows Server 2008. It is a small kernel (75,000 lines of code) that uses software “partitions” into which the guest operating systems go. The hypervisor is a very thin layer of software: it does not have a built-in driver model (it uses ‘ordinary’ drivers, which run in a partition – access to, for instance, a NIC passes straight through the hypervisor, which is not even aware of the NIC), and, most importantly, it is completely OS-agnostic. It has a well-defined, published interface, and Microsoft will allow others to create support for their operating systems as guests. In other words, it is not tied to Windows.
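To make the idea of a small, published, OS-agnostic interface a bit more concrete, here is a minimal sketch (in plain C) of what such a hypercall interface might look like from a guest’s point of view. Every name and call number in it is invented for illustration – it is emphatically not the actual published interface, and the real thing traps into the hypervisor with a dedicated instruction rather than a simulated function call.

```c
/* Hypothetical sketch of an OS-agnostic hypercall interface.
 * Call numbers, names and layouts are invented for illustration;
 * they are not the hypervisor's actual published interface. */
#include <stdint.h>
#include <stdio.h>

enum hypercall_code {                 /* operations a guest may request */
    HC_CREATE_PARTITION = 1,
    HC_MAP_GUEST_PAGE   = 2,
    HC_POST_MESSAGE     = 3,
};

typedef struct {
    enum hypercall_code code;         /* which operation */
    uint64_t arg0, arg1;              /* operation-specific parameters */
} hypercall_t;

/* In a real system this would trap into the hypervisor (e.g. via a
 * dedicated instruction); here it is simulated with a function call. */
static uint64_t hypercall(const hypercall_t *hc)
{
    switch (hc->code) {
    case HC_CREATE_PARTITION:
        printf("hypervisor: creating partition with %llu MB\n",
               (unsigned long long)hc->arg0);
        return 0;                     /* status: success */
    case HC_MAP_GUEST_PAGE:
        printf("hypervisor: mapping guest page at 0x%llx\n",
               (unsigned long long)hc->arg0);
        return 0;
    default:
        return (uint64_t)-1;          /* status: unsupported call */
    }
}

int main(void)
{
    /* Any guest OS that speaks this small, documented interface can run
     * in a partition -- the hypervisor has no Windows-specific code. */
    hypercall_t create = { HC_CREATE_PARTITION, 512, 0 };
    hypercall_t map    = { HC_MAP_GUEST_PAGE, 0x100000, 0 };
    hypercall(&create);
    hypercall(&map);
    return 0;
}
```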
Traut also introduced the ‘Virtualization Stack’ – the functionality that has been stripped from the micro-hypervisor to allow it to be so small in the first place. This stack runs within a parent partition, and this parent partition manages the other ‘child’ partitions (you can have more than one virtualization stack). Interestingly – especially for the possibility of future Windows versions pushing backwards compatibility into a VM – this virtualization stack can be extended with things like legacy device emulation, so the guest operating system is presented with virtualised instances of the legacy devices it expects to be there.
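As a rough illustration of what ‘legacy device emulation’ means in practice, the sketch below shows how a virtualization stack could intercept a guest’s I/O port accesses to a legacy serial port (the traditional COM1 at 0x3F8) and satisfy them entirely in software. The structure and function names are invented for illustration; only the register layout of the classic 16550-style UART is real.

```c
/* Hypothetical sketch: emulating a legacy 16550-style serial port for a
 * guest partition. Names and structure are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define UART_BASE 0x3F8               /* traditional COM1 I/O port base */

typedef struct {
    uint8_t line_status;              /* emulated Line Status Register */
} emulated_uart_t;

/* Called by the virtualization stack whenever the guest executes an
 * OUT instruction to a port in the emulated device's range. */
static void uart_port_write(emulated_uart_t *uart, uint16_t port, uint8_t val)
{
    if (port == UART_BASE)            /* data register: "transmit" the byte */
        fputc(val, stdout);           /* e.g. forward it to a host log */
    (void)uart;
}

/* Called on IN instructions; the guest reads the status it expects. */
static uint8_t uart_port_read(emulated_uart_t *uart, uint16_t port)
{
    if (port == UART_BASE + 5)        /* Line Status Register */
        return uart->line_status;
    return 0xFF;
}

int main(void)
{
    emulated_uart_t uart = { .line_status = 0x60 };  /* transmitter idle/empty */

    /* Simulate a guest writing "Hi\n" to what it believes is real COM1. */
    if (uart_port_read(&uart, UART_BASE + 5) & 0x20) {
        uart_port_write(&uart, UART_BASE, 'H');
        uart_port_write(&uart, UART_BASE, 'i');
        uart_port_write(&uart, UART_BASE, '\n');
    }
    return 0;
}
```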
Interestingly, Traut made several direct references to application compatibility being one of the primary uses of virtual machines today. He gave the example of enterprise customers needing virtual machine technology to run older applications when they upgrade to the latest Windows release (“We always like them to upgrade to our latest operating system.”). Additionally, he acknowledged the frustrations of breaking older applications when moving to a new version of an operating system, and said that virtual machines can be used to overcome this problem. He did state, though, that this really is kind of a “sledgehammer approach” to solving this problem.
However, it is not difficult to imagine that, a few years from now, this technology will have developed into something that could provide a robust and transparent backwards compatibility path for desktop users – in fact, this path could be a lot more robust and trustworthy (as in, higher backwards compatibility) than the ‘built-in’ backwards compatibility of today. Additionally, it has a security advantage in that the virtual machines (as well as their applications) can be completely isolated from one another as well as from the host operating system.
The second important change that I would like to see in Windows 7 is a complete, ground-up overhaul of the userland on top of the proven Windows NT kernel – reusing XP/2003/Vista code where it makes sense, and disregarding backwards compatibility, which would be catered for by virtualisation technology. Let’s just say that step 1 of this plan is already complete: strip the NT kernel of basically everything, bringing it back to a bare-metal kernel on which a new userland can be built (again, reusing code where it makes sense).
The result was shown by Traut, running in VirtualPC 2007: Windows 7 MinWin. It is 25 MB on disk and uses 40 MB of RAM (still not as small as Traut wants it to be). Since MinWin lacks a graphical subsystem, it sports a slick ASCII-art bootscreen. It has a very small web server running inside of it, which can serve a task list, a file list (MinWin is made up of about 100 files, compared to roughly 5000 for standard Windows), and memory statistics in your web browser. Traut: “So that’s kind of proof that there is actually a pretty nice little core inside of Windows.”
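For a sense of how little code such a status page actually needs, here is a minimal sketch of a tiny HTTP server that reports one memory statistic over Winsock. It has nothing to do with MinWin’s actual implementation (and omits all error handling); it merely illustrates the idea of a very small web server exposing system state.

```c
/* Minimal sketch of a tiny HTTP server reporting a memory statistic.
 * Illustrative only -- not MinWin's actual built-in server. */
#include <winsock2.h>
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);          /* serve on http://localhost:8080/ */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 1);

    for (;;) {                            /* one request at a time, forever */
        SOCKET client = accept(listener, NULL, NULL);
        char request[1024];
        recv(client, request, sizeof(request), 0);   /* ignore its contents */

        /* Gather one statistic, the way a status page might. */
        MEMORYSTATUSEX mem = { sizeof(mem) };
        GlobalMemoryStatusEx(&mem);

        char body[256], response[512];
        int blen = sprintf(body, "Physical memory in use: %lu%%\n",
                           (unsigned long)mem.dwMemoryLoad);
        int rlen = sprintf(response,
                           "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n"
                           "Content-Length: %d\r\n\r\n%s", blen, body);
        send(client, response, rlen, 0);
        closesocket(client);
    }
}
```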
MinWin is actually not the same as Windows Server Core. MinWin is vastly smaller than Server Core (which is about 1.5 GB on disk). MinWin is also far less useful than Server Core – the latter is a full-featured server operating system, whereas MinWin is a stripped Windows NT kernel with minimal userland features.
Traut stressed more than once that MinWin is not going to be ‘productised’ in itself – you should see it as a base on which the various editions of Windows are going to be built: server, media center, desktop, notebook, PDA, phone, you name it. Of course, only time will tell whether Microsoft will ‘simply’ dump an updated Vista userland on top of MinWin and call it Windows 7, or whether it will actually build a new userland from the ground up on top of MinWin.
All in all, this presentation by one of the most important kernel engineers at Microsoft has taught us that, at the very least, Microsoft is considering the virtualisation approach for Windows 7 – it only makes sense, of course, but having some proof is always a good thing. Additionally, the presentation also showed us that Microsoft is in fact working with a stripped-down, bare-metal version of the NT kernel, to be used as a base for future Windows releases.
I can now ask the same question I ended my previous Windows 7 article with: “Is it likely a similar course of action will pan out over the following years?” The answer is still “no”, but the possibility has inched closer.
And we can only rejoice about that.
So, can we admit then, finally, truthfully, and in an unbiased form, that Windows itself is done?
Windows – as we know it today – is in its downswing. Yes, it’s everywhere, and it’s the basis for virtually every corporate environment. But the word is out. Any major shop that isn’t evaluating alternatives is woefully delinquent. If Microsoft has any chance of surviving in this arena for more than the next decade or so, they need a dramatic change.
What Thom is proposing here, essentially, is scrapping Windows as a whole. Save only the kernel – nay, a subset of a fraction of the kernel – and rebuild a new OS atop.
I welcome this move. Windows is fundamentally used-up. The licensing is draconian. The software is a constant battle for most users. I should know, I support hundreds of them. They don’t work on Windows – they work in spite of it.
So I agree that a move like this would be a great strategic move for Microsoft if they want to stay relevant in the long run in this corner of the market.
Windows Vista is Microsoft’s OS9. It’s done, it’s used up, there’s no more stretch in the elastic, as we Dutch say. It’s time to move on and make a viable plan for the future – and I don’t see how Microsoft can maintain its relevancy by building atop Vista.
I thought Vista was more like OSX.0. Slow, incompatible, horribly inadequate and it’ll be five revisions later before it’s up to scratch.
The problem with Vista is that if MS moves to a new OS and virtualises Vista, Vista will end up being heavier than the new OS – what a drag that will be. It would have been far, far better if MS had virtualised XP inside of Vista and dropped all backcompat in the name of a cleaner, leaner stack on top of the kernel.
Vista was mismanaged. I don’t doubt the programmers themselves, because Microsoft produces good, solid products on every front except for consumer Windows releases!
Except OSX was (from the Mac community’s standpoint) a completely new OS, whereas Vista is just more stuff piled onto NT5 (2000 and XP), and I highly doubt they can do much to make it better other than stripping it down to the bare essentials and putting something new on top of it (which will be “WinOSX”).
“Windows Vista is Microsoft’s OS9. It’s done, it’s used up, there’s no more stretch in the elastic, as we Dutch say. It’s time to move on and make a viable plan for the future – and I don’t see how Microsoft can maintain its relevancy by building atop Vista.”
Yes, exactly like OS9 – a 64-bit, preemptive multitasking, hardware graphics accelerated OS9 with virtual memory.
Not literally, of course. Don’t you grasp the concept of the analogy?
I would send this privately, but mine’s preemptive. 2000 Hz swap rate, actually. It has the OPTION of turning off preemption on a task-by-task basis. Other operating systems prevent potentially abusive features, like applications turning off interrupts (not the same as turning off preemption). Mine allows both.
True, mine does not have hardware graphics acceleration — couldn’t bring myself to look at Linux code and steal it.
I worked for a certain nameless monopoly event-ticket-selling company (who probably wasn’t the one which crashed today selling World Series tickets) that had their own operating system, and learned about processes voluntarily yielding the CPU before preemption. They had a proprietary VAX operating system and I’m pretty sure it once ran without preemption, since all code was controlled by the company and could be guaranteed not to abuse the privilege. In addition to some work on the operating system, I wrote business report applications and had to include commands to “swap out” periodically so they didn’t hog the CPU and ruin other users’ responsiveness.
Yeah sure – apples are just like oranges.
Hmmm, let’s see, they’re both fruits, they both have coloured skins, they both grow in trees, they both contain seeds, they can both be made into juice, they both come in many variations. Yeah, they seem pretty similar to me.
It truly feels like the end of an era. Not just for software systems, but for a whole corporate American mindset on how to manage large projects of great economic and social import. You can almost feel the “whoosh” of the deflating ideology as its symbolic champions knowingly head for the exits and their dutiful sidekicks blindly rearrange the deck chairs on the Titanic.
Meanwhile, the era of centralized control and explicit agreement is gradually giving way to decentralized empowerment and implicit tolerance. The transition will be sticky and bumpy, with winners and losers of all shapes and sizes. The key for stalling giants like Microsoft is to reinvent itself with an eye toward sustainability. Computing is no longer a revolutionary frontier, it’s an evolving reality, and Microsoft has to reexamine its priorities with this in mind.
Where did all the frontiers go? We chewed through them all like caterpillars through leaves. Now it is time to contemplate our borderless reality and emerge as butterflies, elegantly endowed with a common vision for a sustainable future. Hopefully our cocoons won’t become our graves.
The transition will be sticky and bumpy, with winners and losers of all shapes and sizes.
I’m gonna be a winner of IT2.0 you just wait!
Try as some might to look to alternatives to Windows, in many cases it just won’t work – at least not without the help of virtualization running Windows for, in our case, about three different critical apps.
So we can be assured that it won’t be out until at least 2012. And if somebody thinks I’m trolling: has there EVER been a Microsoft OS that has actually been released when they originally said it would be, since at least Windows 95?
Yes, Windows Home Server was the latest. I believe SBS before that.
“””
Yes, Windows Home Server was the latest. I believe SBS before that.
“””
Nice try. But wouldn’t those be more like Windows “distros”? Variations upon the themes of existing products? What *major new release* was ever even *remotely close* to being delivered on the original schedule?
The OP’s point is valid.
The OP didn’t ask about major releases. He asked about OSes in general.
Quote:
"""
The OP didn’t ask about major releases. He asked about OSes in general.
"""
Again. Nice try. But the context made his meaning clear enough. And this sounds like the most major rethink and overhaul since NT.
The context was about any version of Windows shipping on time. The question was asked and answered. Stop trying to move the goalpost.
This is a continuation of changes made during the Vista dev cycle. That was the major rethink and overhaul for the near future. If you expect Seven to be an entirely new base, you’re going to be disappointed.
Well, technically, Windows Vista was perfectly on time. It was released on the only official release date ever set – not a day later.
Hmmm, how did Thom get modded down?
Did you mean: virtualization
http://www.google.com/search?q=virtualisation
I write in British English, because that’s what I got taught in primary and high school, and I study it now at university.
Microsoft’s been working with the idea of a new codebase for, well… forever. Vista was to be the new codebase, with a ground-up restructuring, but it was pushed off due partly to stockholder impatience.
It’s good to see Microsoft evolving and looking to the future.
Windows isn’t dead; it just needs a major garbage cleaning. You don’t throw out code that works – XP was one of the best OSes in history, especially since SP2.
The issue is that Vista didn’t do enough to move further than XP. It was XP SP3.5, perhaps, but not the OS that Microsoft had envisioned. They did a lot to make it close to what they wanted, but they’re not getting the help they need.
The new TCP stack is great, but only really if routers and ISPs work to implement the needed protocols, and the fact is most aren’t.
The new graphics architecture is great if the drivers and the hardware support it, but the problem is that neither ATI nor Nvidia can produce fast, stable drivers to save their lives.
The new systems for increasing speed, like hybrid drives, are great, but only if the hardware and drivers support them – which, to date, hasn’t really been accomplished.
The new UAC works, it really does, but it’s very obviously a 1.0 attempt at it. In my view it’s a great move, it’s just not a move that was 100% worked out; the fact that running installs requires an OK window before the secure UAC prompt even launches is pretty much proof of that.
The new sandboxed environments are a wicked move – something even Apple didn’t do for Safari, and I’m very thankful Microsoft did it for IE – but if we can’t easily lock applications in sandboxes by themselves when needed, then its effect is not as great… it’s also a 1.0 move.
Windows isn’t dead; it just needs a major garbage cleaning. You don’t throw out code that works – XP was one of the best OSes in history, especially since SP2.
The XP code base seemed to work, but it’s really a delinquent code base that needs a lot of work, and a lot of legacy crap dropped from it. The author has a good approach for how to do so while still maintaining backwards compatibility, and it would behoove Microsoft to actually do it.
As to throwing out a code base – yes, there are times when you do throw out a code base. Typically, it is when you can no longer control the code. Sure, you might be using CVS or SVN or something similar, but that doesn’t mean you can truly 100% control the code.
For instance, I worked on one project where the code base was really uncontrollable. It had a legacy history to it, and we couldn’t solve the problems it had by continuing to use that code base. The only answer was to start afresh – use new practices so that we could manage the resources of the code, ensure security, etc. The old code base, while it worked, wouldn’t have supported those efforts. Moreover, the new code base allowed us to add in new features quickly, easily, and maintainably. (When we fixed a bug or added a new feature to the old code base, we would end up with more issues coming out than we went in with. It was really bad.)
The Windows code base is likely at that point. It was likely there before XP, and only made worse by XP. It’s easy to tell when you’re at that point as every new change takes longer to get in and keep the old code functional.
So yes, it’s high time Microsoft cut the cruft and started a new code base, and designed the code base to be more modular, maintainable, secure, etc. It’s the only way the software will survive another generation (e.g. Windows 7 and Windows 8). Otherwise, it will collapse under its own weight.
In large part, Vista is the beginning of the new code base. Again, MinWin isn’t new to Seven. It’s there in Vista/Server 2008. A lot of code was rewritten for Vista. They’ve started to virtualize system resources, they’ve mapped/eliminated most dependencies and layering violations, and they’ve turned each feature into manifest-backed components. They are more agile in what they can add/remove without affecting other components because of this work and the processes put in place during Vista’s development.
They aren’t going to throw out all of that work in Seven. They’re going to build upon it. I expect there will be a greater shift towards updated versions of the managed code services they’ve added in Vista as the preferred method for application development. I also believe they’ll start to integrate application virtualization for legacy compatibility as well as driver virtualization for reliability, but the end product will be the offspring of Vista/Server 2008, not an all-new code base. I wouldn’t expect something that big for another 1 or 2 major releases.
and turned each feature into manifest-backed components
About that…
Have you ever taken a look at WindowsPackages or wherever they’re stored? All it is, is a manifest of bloat.
In large part, Vista is the beginning of the new code base. Again, MinWin isn’t new to Seven. It’s there in Vista/Server 2008. A lot of code was rewritten for Vista. They’ve started to virtualize system resources, they’ve mapped/eliminated most dependencies and layering violations, and they’ve turned each feature into manifest-backed components. They are more agile in what they can add/remove without affecting other components because of this work and the processes put in place during Vista’s development.
It isn’t a matter of how agile the code is. It’s a matter of how much the code itself can take change. Windows, due to quite a lot of reasons (e.g. backward compatibility, competition stifling, incomplete and undocumented APIs, bugs, etc.), is a monolithic code base that is not very easy to change. Revising it, refactoring it is not going to help. The only way you solve that is by starting over.
Starting over is often good for a project too. You lose a lot of legacy code that is not needed, and you get the chance to do it better, more correctly. You can apply newer design and architectural principles and fix things proactively instead of retroactively. (Sure you’ll still have stuff to fix retroactively, but they’ll be different things than before if you did your job right.)
Every software project will eventually reach a point where its entire code base has to be thrown out and restarted. In many respects, it is really a sign of the program’s maturity – you understand the program well enough to know how to do it right, and you need to give yourself the opportunity to do it. A clean cut is often the only way to do so.
Vista is better in some respects when it comes to the modularity of its parts. However, it is still far from what it needs to be, and it has a lot of cruft in it – stuff Microsoft simply can’t get rid of unless they start over. Otherwise, they’re just continuing in the same paradigm, fixing the same issues over and over.
“It isn’t a matter of how agile the code is. It’s a matter of how much the code itself can take change. Windows, due to quite a lot of reasons (e.g. backward compatibility, competition stifling, incomplete and undocumented APIs, bugs, etc.), is a monolithic code base that is not very easy to change. Revising it, refactoring it is not going to help. The only way you solve that is by starting over. ”
The very fact of Microsoft’s existence, and its spectacular stock valuation, proves this point utterly and completely false. They’ve built an extremely successful business around never starting over from square one.
The past few decades are littered with the carcasses of companies that were stupid enough to think they could start from scratch. In the meantime, Microsoft acquired code they didn’t have, and incrementally improved the code they did. We’ve come from DOS, all the way to Vista, and at no point along the way did MS ever start from scratch. I don’t expect them to any time soon.
The very fact of Microsoft’s existence, and its spectacular stock valuation, proves this point utterly and completely false. They’ve built an extremely successful business around never starting over from square one.
I would hardly call their stock spectacular. It moved high in the bubble just like all the others. Since the bubble it has sat flat, due to their inability to produce products and deliver on their primary programs in a timely manner. It took them five years (and two development cycles, since they restarted the development 2.5 years into it) to deliver Vista and Windows Server 2008.
The fact is that Windows has become a monolith that they can no longer develop the way they have been, and it is causing them headaches. They’re producing products like Windows Server Core, and projects like MinWin and others, in order to get the code to a manageable state so that they can even begin to compete.
So, yes – they are very likely to do so very soon. They did it in the past with WinNT, which was a brand new, from-scratch code base that they later (with WinXP) merged their crap code and legacy support into. Win2k and earlier did not run their DOS-based programs, and vendors typically had to support two different code bases for products to run on both the WinNT line and the DOS/Win9x/WinME line.
They can do it, and they will. Otherwise, it will be the end of them. Oddly enough, this is pretty much what all the commentators are saying about Microsoft and Windows. They will likely choose to use isolated, app-centric VMs to manage legacy programs, but they will have to do it.
NT wasn’t exactly written from scratch. Its API was an extension of the pre-existing Windows API, and its design borrowed heavily from VMS. Much of the NT design/development team came from Digital, including Dave Cutler, one of VMS’s chief designers.
NT wasn’t exactly written from scratch. Its API was an extension of the pre-existing Windows API…
It was still a largely incompatible code base with the pre-existing Windows and DOS programs. So the point still stands.
Gee, what the hell are you talking about?
Have you ever used Windows at all?!
I’m really getting sick of hearing that XP is “one of the best OSes in history”, when in reality it’s THE WORST in history (out of the 32-bit ones).
Even OS/2 was/is better than XP.
And the stupid gimmicks you mention, the hybrid drives, UAC, sandboxed environments, are the lamest ever attempts to hide the complete incompetence and most horrible design of any OS in history.
TCP/IP stack, graphics architecture? What?!
Again, what are you talking about?
What’s so great about them?
Have you used Vista at all?
Nvidia has had the best graphics drivers in the industry for quite a few years now, so that tells me that the graphics problems are not Nvidia’s fault.
(ATI has been crap since the early ’90s.)
And the whole networking system in vista is seriously demented. Only a complete moron could come up with something that stupid.
Have you seen the network dialogs and screens that Vista provides? Can you say confusing?
To this day I have to laugh when I recall my first encounter with Internet Explorer on Windows 2003 server.
I start IE, I type a web address, and I get some idiotic notice that this will not work.
What?! A web browser is not allowed to browse the web?!
Seriously, this is beyond ridiculous.
What’s next, Word without the ability to type text?
So Microsoft’s solution to security is to simply cut functionality.
That convinced me that Microsoft is a company that will never write a good OS.
And yes, Server 2003 is garbage too, even though it stinks a bit less than XP, it’s still garbage.
You might have used Windows, but you apparently know little about operating systems. There’s a good reason Win2k3 denied you the browser: it’s a SERVER. You shouldn’t be trying to use the web browser with it! Yes, you can use it as a workstation or a desktop OS, but that’s not what it’s intended for. Why is IE included then? Well, because IE was still integral to much of the Explorer-based system. They’re getting much better at moving that dependency out, but as far as I’m aware it still exists.
As to both the graphics and networking stacks in Vista, they have been significantly improved. The changes can, and will, greatly increase stability and performance down the road. However, as with ANY major version-1 change, nothing works perfectly. Much of the blame does lie in the hands of driver makers (actually, ATI’s drivers have been far, far superior to NVIDIA’s in regards to Vista – they still are in most cases, NVIDIA just has the higher-performing and non-late hardware releases), but it doesn’t help that it’s a completely different interface to the OS. That takes time to catch up with the changes. You think OSX.0 didn’t perform like shit? Was highly stable? Must not have used it.
The same has mostly been true with Linux when they do big changes but again, most of the fault lies in the hands of the driver makers. Seeing a trend?
don’t you mean Windows 8?
Nope, Windows 7. It’s Microsoft’s internal versioning scheme:
Windows 3.1 = Windows 3
Windows NT = Windows 4
Windows 2000 = Windows 5
Windows XP = Windows 5.1
Windows Vista = Windows 6
So the next Windows is 7.
Windows 3.1 = Windows 3
Windows NT = Windows 4
Windows 2000 = Windows 5
Windows XP = Windows 5.1
Windows Vista = Windows 6
You forgot
Windows 1 = Windows 1
Windows 2 = Windows 2
Windows 286 = Windows 3
Windows 3.0 = Windows 4
Renumbering your list, we get
Windows 3.1 = Windows 5
Windows NT = Windows 6
Windows 95 = Windows 7 (You forgot that one)
Windows 2000 = Windows 8
Windows XP = Windows 9
Windows Vista = Windows 10
So in reality, any new version should be called Windows 11
Not taking any stand on whether your numbering is correct or not, but the web is already full of posts about the kernel of the coming Windows 7. Everyone will always call it Windows 7 even though it could be anything else. That’s just the way the world goes; try to adjust.
No, in reality it should be called Windows 4 since this would be the fourth version of the NT kernel if MS had done a proper 1.0 release instead of syncing it with the current Windows version. Windows 1-3.11, 95-98, and Me don’t factor into the count as they were built on DOS.
This is the same way that Mac OS X.4 is really NextStep 5.4 and not Mac OS 10.
Close, but not quite:
Win1.01-.04
Win2.0-.03, Win2.1-.11
Win3.0, Win3.1, Win3.11
WinNT3.1, WinNT3.5, WinNT3.51 (first non-DOS-based Windows)
Win95(4), Win98(4.10), WinME(4.90) (end of DOS-based line)
WinNT4
Win2K(5.0), WinXP(5.1), WinServer2003(5.2)
WinVista(6)
Windows 7 (in development)
I keep posting it whenever this topic is being talked about.
WOW64 is proof, shipped with every 64-bit Windows, that it’s entirely possible to run two different userlands (more like subsystems) on the same kernel. There’s a full 32-bit subsystem installed to run any 32-bit application, and apart from messaging, the 32-bit subsystem runs completely on its own, only sharing the kernel as common code.
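That WOW64 boundary is even visible from user mode: the documented IsWow64Process call in kernel32 tells a 32-bit process whether it is running on a 64-bit kernel. A small sketch:

```c
/* Small sketch: ask Windows whether the current process is a 32-bit
 * process running on a 64-bit kernel via the WOW64 subsystem. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BOOL wow64 = FALSE;

    /* IsWow64Process is not present on older systems, so resolve it
     * dynamically instead of linking against it directly. */
    typedef BOOL (WINAPI *IsWow64Process_t)(HANDLE, PBOOL);
    IsWow64Process_t pIsWow64Process = (IsWow64Process_t)
        GetProcAddress(GetModuleHandleA("kernel32"), "IsWow64Process");

    if (pIsWow64Process && pIsWow64Process(GetCurrentProcess(), &wow64) && wow64)
        printf("32-bit process, 64-bit kernel: running under WOW64\n");
    else
        printf("Not running under WOW64\n");

    return 0;
}
```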
Nothing speaks against a completely new main subsystem, keeping the old one running side by side for “legacy” applications, with glue put in where needed (i.e. windowing).
Alternatively, Microsoft could take a clue from Solaris Zones, if there’s a more heavy-handed approach needed (quasi full hosting of an operating system), which is still lightweight in regards to resource sharing.
64-bit computing on Windows is useless, to say the least; you may as well run the 32-bit version because only a handful of apps are 64-bit. Windows 7 being pure 64-bit, like they said, is their marketing team after a bad night out.
The kernel, drivers, and all of the apps in the package are 64-bit. What’s not pure about it? In terms of third-party apps, most don’t need 64-bit versions. They run on x64 Windows just fine via WOW64.
Depending on your workload, 32-bit Windows may be fine, but some people benefit from the larger available address space (even when running 32-bit apps — particularly some games).
That’s what I mean – why have a 64-bit OS if third-party apps don’t even support 64-bit? I don’t see the point of running 32-bit apps on a 64-bit OS; may as well use the 32-bit version.
Point being that you may as well use the 32-bit version, because third-party support is useless on Windows.
Uh, there’s quite a few apps in various fields (audio, video, virtualization) that support 64-bit. That would be the point of 64-bit operating systems.
"""
The kernel, drivers, and all of the apps in the package are 64-bit. What’s not pure about it?
"""
It’s obvious. Your question contains the answer: apps that are not in the package. Something that simply doesn’t need to exist in the free world.
When not all the apps you want to run are part of the package, including many Microsoft apps, nobody can take win64 seriously.
Oh, and drivers too. Most drivers aren’t made by MS. In fact, many aren’t even validated by them.
In fact, when you consider the switching overhead, you might just end up with a slower system.
The NT kernel indeed allows for subsystems (up until Windows 2000, for instance, it had an OS/2 subsystem), but would you really want to run the entirety of Win32 in an NT subsystem?
One of the prime points in these two articles of mine is that you really do! not! want! to ship/run the current Windows userland, because it is a mess – if you move it to a subsystem, you do just that: you move it to a subsystem. You’re just moving it around, you’re not sandboxing or isolating it.
The NT kernel indeed allows for subsystems (up until Windows 2000, for instance, it had an OS/2 subsystem), but would you really want to run the entirety of Win32 in an NT subsystem?
Actually, Win32 as it is IS a subsystem in the very sense of the NT definition. Maybe over the years they spaghetti-coded some stuff, but that has been getting unwired for quite some time now (this was said a whole lot during Vista’s development, and Traut said it in his presentation).
One of the prime points in these two articles of mine is that you really do! not! want! to ship/run the current Windows userland, because it is a mess – if you move it to a subsystem, you do just that: you move it to a subsystem. You’re just moving it around, you’re not sandboxing or isolating it.
If it runs in a tailored VM or in a controlled subsystem (using resource virtualization a la UAC), where’s the difference? The latter is easier on the total system.
As said earlier, the best way to implement this IMO would be using a construct like Solaris Zones, where there’s hard partitioning inside the kernel already, running full blown operating systems (well, everything right above the kernel) inside the partitions, but using the same kernel and as such able to share resources (mostly just CPU and memory). Using a huge shim, you would be able to keep the old Win32 system running, just like Solaris can run e.g. the whole unmodified Ubuntu userland using a syscall translator in a zone.
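A rough sketch of the ‘syscall translator’ idea: the foreign userland’s system call numbers are simply dispatched through a per-subsystem table onto native handlers. All the numbers and handler names below are invented for illustration – Solaris branded zones (and any hypothetical Windows equivalent) are of course far more involved.

```c
/* Hypothetical sketch of a syscall translation table: a "foreign"
 * userland's system call numbers are dispatched to native handlers.
 * All numbers and handler names are invented for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef intptr_t (*native_handler_t)(intptr_t a0, intptr_t a1, intptr_t a2);

/* Stand-ins for the host kernel's native services. */
static intptr_t native_write(intptr_t fd, intptr_t buf, intptr_t len)
{
    return (intptr_t)fwrite((const void *)buf, 1, (size_t)len,
                            fd == 2 ? stderr : stdout);
}

static intptr_t native_getpid(intptr_t a0, intptr_t a1, intptr_t a2)
{
    (void)a0; (void)a1; (void)a2;
    return 4242;                        /* pretend process id */
}

/* The translation table: index = the foreign system call number. */
static native_handler_t translation_table[32] = {
    [4]  = native_write,                /* foreign "write"  -> native write  */
    [20] = native_getpid,               /* foreign "getpid" -> native getpid */
};

/* Entry point invoked when a process inside the foreign subsystem traps. */
static intptr_t foreign_syscall(unsigned nr, intptr_t a0, intptr_t a1, intptr_t a2)
{
    if (nr < 32 && translation_table[nr])
        return translation_table[nr](a0, a1, a2);
    return -1;                          /* unsupported foreign call */
}

int main(void)
{
    const char msg[] = "hello from the translated userland\n";
    foreign_syscall(4, 1, (intptr_t)msg, (intptr_t)strlen(msg));
    printf("pid = %ld\n", (long)foreign_syscall(20, 0, 0, 0));
    return 0;
}
```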
VMs really aren’t a solution for this, because they’re too static and have a huge footprint (memory). To make them more flexible in that regard, the guest operating system would have to be able to deal with fluctuating memory sizes. I don’t see that coming anytime soon, at least not automated, because different systems deal with memory pressure in different ways, resulting in a memory scheduling clusterf–k.
WoW has been around since the earliest releases of Windows NT 3.51. There were several “personalities”, as they were called, released with NT 3.51:
– Win16
– Win32
– OS/2
– Posix
– probably more, but that’s all I can remember off the top of my head
The OS/2 personality was dropped with Windows 2000, the Posix subsystem was “replaced” with Services for Unix, and Win64 was added.
This is not a new concept, and was one of the main selling points of Windows NT back in the day.
Consider that cache usage is contingent on fitting the working loops into a small amount of memory. (I paid an obscene amount of money for a sucky Micro-A1c just to run AmigaOS 4.0 based on this same reasoning!)
On one programming site somebody observed that (referring to compiler flags) optimizing code for less memory usage typically generates faster code than optimizing for speed.
If they manage to integrate this with their Singularity project that replaces some page faults with API functions that call the pager, etc., directly, then this might actually be a good version of Windows.
So, they’re stripping the Windows kernel down until it’s almost as compact as Linux, and presumably less functional. At this point Windows’ only advantage over the competition – its familiar GUI for idiots who can’t grasp a true CLI-driven OS – is gone.
If Microsoft tries to make Windows the new Linux, people will start wondering why on earth they would want this new bastardised offering, if Linux, the BSDs and even Apple’s dodgy little OS can already do all that Microsoft is struggling so hard to replicate.
If they want a proper brawl with Linux on its own territory, maybe MS should buy out Minix.
Modded to -1? I didn’t realise it was so offtopic to discuss the subject of the original article…
Maybe giving everyone mod points all the time isn’t such a great move.
I dream of the day when Microsoft creates a sister company (just so that all those “monopoly” accusations wouldn’t apply and MS could do whatever it wanted with that product – for instance, integrate AV).
That sister company should create a following OS:
1) Purely 64-bit (maybe even 128…)
2) Based on Singularity (great stuff that)
3) The .NET Framework should BE-THE-API (no need for p/invoke and stuff like that)
4) Everything even remotely related to backwards compatibility should be handled via virtualization
5) New PC hardware wouldn’t hurt either. Throw out all the legacy crap (yes, your current hardware is also built around f**ked-up backwards compatibility layers) and definitely redesign the USB stuff (programming for USB is such a-pain-in-the-A**)
6) Throughout, openness and good documentation should be embraced. For instance, Bill Gates’s letter – in which he said that ooxml should render perfectly only in IE – made me sick to my bones (really, somebody shoot that idiot instead of throwing a cake in his face).
I guess I’ll be dead, buried and long forgotten before that ever happens…
“””
“””
Oops! You lost a lot of credibility with that. What, pray tell, do you think that > 64 bits would buy you? At the traditional rate of memory increase of doubling about every 2 years, even the 48 bits allotted to memory access in current 64-bit processors will last us 30+ years. And that’s just a hardware limitation. It can easily be increased to 64 bits, extending us out to 60+ years. 64-bit filesystems are good for about 40 years at the current exponential rates of expansion. (The 128-bitness of the otherwise excellent ZFS was, quite frankly, a marketing gimmick.)
And besides, what processor would you run this 128bit OS on? Did AMD announce something that I missed?
“””
“””
Well, you are thinking in the right time scale, anyway.
Good point. Many novices use induction to say: well, we ran out of 32-bit space and now 64-bit is needed, so let’s jump to 128 bits.
Sometimes people find uses for excess bits by incompletely utilizing the space… like placing kernel stuff in 0x80000000-0xFFFFFFFF even before you run out. I think I heard that IP numbers are way excessive, but good for routing.
One really good thing about living in the year 2007 is that both the hardware and software creators got out of the “let’s add 4 more bits” mindset that we lived with through the 80s and 90s. Remember all those “barriers” we broke, only to face them again in a few years? Now we have the opposite problem: Adding an excessive number of bits gratuitously for marketing reasons. That’s a far lesser problem though. As an old friend of mine was fond of saying: Better to have it and not need it than need it and not have it.
The ext3->ext4 transition is, hopefully, the last major barrier we will face for some time. (Famous last words!)
Is ext4 128 bits? I tend to think that’s excessive, but maybe you might have virtual drives composed of several physical ones and it might prove convenient to use high bits for physical drive number, or if they were on a network, include numbers for that. I picked 64-bits for my filesystem, but I can see reasons for 128.
“””
Is ext4 128 bits?
“””
No. (Can you *imagine* the ruckus on LKML if anyone proposed such a thing?!) But the current ext3 filesystem size limitation is only about 16 terabytes, depending on architecture. It does not take full advantage of 64 bit. Ext4 raises that to an exabyte. And adds extents, as well. But that’s tangential.
Now don’t get me wrong, I agree with 64 bit being enough for a long time.
But when I first read this, I read it as “no one will ever need more than 64 bits for software”. Is it possible that saying we don’t need it is too short-sighted or arrogant?
But yeah… 64 is enough, and even with the idea of 128, I’m not so sure the advantages outweigh the disadvantages. Memory space available vs. usage needed for typical software is the first thing that comes to mind.
“””
“””
Never say never. And, of course, 40-60 years is not never.
But I don’t *think* I’m being short sighted. The 20th century conditioned mind thought in terms of arithmetic progressions with regards to computer hardware and was continually underestimating future requirements. The 21st century conditioned mind is used to thinking in terms of geometric progressions. And the geometric constants for the increase in disk space, ram, and transistor density in processors have remained remarkably constant over the 20 years I have been watching. If anything, I would expect the constants to *decrease* over the years.
At any rate, and for now at least, one can figure 1 bit for every two years on memory, and 1 bit for every year and a half of disk.
X86_64 took us from 32 bits to 48 bits for memory addressing. A difference of 16 bits. So I figure we’re good for 32 years. In fact, due to the way X86_32 works, one has to start making tradeoffs at less than 1GB. Not sure if and where such tradeoffs might need to be made with current processors.
I’m sure some kind soul will stop by to fill us in on that.
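The rule of thumb being applied here can be written out explicitly (assuming, as the poster does, that memory capacity doubles roughly every two years, so each additional address bit buys about two years of headroom):

```latex
% Bits-to-years rule of thumb, assuming capacity doubles every ~2 years.
\[
  t \;\approx\; 2\ \tfrac{\text{years}}{\text{bit}} \times \Delta b
\]
\[
  \Delta b = 48 - 32 = 16 \;\Rightarrow\; t \approx 32\ \text{years},
  \qquad
  \Delta b = 52 - 32 = 20 \;\Rightarrow\; t \approx 40\ \text{years}.
\]
```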
sbergman27
Don’t underestimate the future. In 5 years someone might come up with something so revolutionary that it will require that address space, or maybe even more. God knows, maybe we’ll all be living in full-HD worlds (sound, music, video, etc.) and memory will come in TB sticks. So to say that >64 bits is unnecessary is to say, like IBM did in the early ’80s, “who needs personal computers?!?”
Creating a 128-bit or larger processor is a piece of cake anyway. All you have to do is enlarge the instruction size and mingle with the microcode. If I’m not mistaken…
The only reason no one is making these is because there is no market for them yet.
But if you are starting anew and using a larger address space doesn’t seriously hurt performance, then why settle for less? Why not embrace the future right now?
And for those who still can’t see the point in 64-bit proccessors, all I’ve got to say to you is – memory, there is never enough of it.
“””
“””
Then it would be totally unfeasible to implement, because physical memory availability would not be within many orders of magnitude of that requirement. At the rate of exponential increase that we have seen in the last 20 years, which has remained fairly constant, 2^52 bytes of memory, the limit for future versions of X86_64 processors, would cost about 100 million dollars in 5 years time. (Requiring 262,144 16GB memory sticks, which are likely to be the largest available at that time.) Do you have some reason to think that the rate of *geometric* expansion will increase? It hasn’t over the last few decades.
Your terabyte sticks of memory would actually be scheduled for about 2023-2027, BTW.
“””
“””
Precisely. There is no reason in the world to think that memory will be available in large enough quantities to require > 64 bit processors for about 40-60 years.
BTW, I should take this opportunity to correct my previous posts now that I’ve refreshed my memory on the topic. The physical addressing limit of current AMD 64 bit processors is 2^40 bytes (not 2^48), giving us about 16 years respite. This can be increased to 2^52 (not 2^64), which would give us a total of 40 years.
My statement is not at all like “who needs personal computers”. It is more like “whether people need this much memory or not, it is unlikely to be available in such quantities for at least 40-60 years”.
My statement is somewhat *more* like “Nobody will ever need more than 640k of ram”. But that statement, if it was ever actually made back then, was *demonstrably* short-sighted and wrong at the time. Can you provide actual *evidence* that my statement is demonstrably short-sighted and wrong?
> Creating a 128-bit or larger processor is a piece of cake anyways. All
> you have to do is enlarge the instruction size and mingle with the
> microcode. If I’m not mistaken…
The details are a bit more complex, but yes, it would be a piece of cake if there was any market for a 128-bit CPU.
> But if you are starting anew and using a larger address space doesn’t
> seriously hurt performance, then why settle for less? Why not embrace
> the future right now?
Increasing address space size *does* hurt performance. Modern programming sees a lot of “passing pointers around”, and even more so in reference-eager programming languages such as Java or C#. All those pointers would now be twice the size, meaning the size of the CPU cache measured in number of pointers halves and resulting in more actual memory accesses. And those are *really* bad for performance. Similar arguments apply to instruction size.
Unless you are changing to an entirely different memory model (e.g. non-uniform pointer size), 128-bit addressing would kill performance.
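A back-of-the-envelope illustration of that cache argument: take a typical two-link node and count how many fit in a 64-byte cache line as pointer width grows. The 16-byte pointer case is hypothetical, since no mainstream CPU uses 128-bit pointers.

```c
/* Back-of-the-envelope look at pointer width vs. cache line occupancy.
 * The 16-byte (128-bit) pointer case is purely hypothetical. */
#include <stdio.h>

#define CACHE_LINE 64                        /* bytes, typical for x86 */

int main(void)
{
    /* A node with two links and a small payload, a very common shape
     * in reference-heavy code (lists, trees, object graphs). */
    const unsigned payload = 8;                 /* bytes of actual data */
    const unsigned ptr_width[] = { 4, 8, 16 };  /* 32-, 64-, 128-bit pointers */

    for (unsigned i = 0; i < 3; i++) {
        unsigned node = payload + 2 * ptr_width[i];   /* payload + two links */
        printf("%3u-bit pointers: node = %2u bytes, %u nodes per cache line\n",
               ptr_width[i] * 8, node, CACHE_LINE / node);
    }
    return 0;
}
```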
Where’s my flying car, personal robot butler and hologram?
Speaking from experience as a chip designer, right?
Sure, but at every point in time there’s a size at which there’s no gain from adding more.
As Dr. Albert Bartlett famously said, “the greatest shortcoming of the human race is our inability to understand the exponential function”.
200 engineers? how do they work effectively if they don’t share a single mind? do they just blindly code functions as per input/output specs? if so, aren’t they just coders? perhaps uni grads? perhaps that explains why there are so many variations of the same dialogs in windows, they all coded their own…
“””
“””
You’re forgetting the Borg implants.
Yes… it’s far more efficient for us to communicate that way (and assimilate people into the Windows collective).
I feel a Dr. Who episode coming…
Gee, I dunno. How do the Linux developers work?
If I was to make a wild guess, I’d say by communicating and using source control tools.
I’ll also go out on a limb and guess they have project managers who oversee things and delegate tasks.
I think they should remove all legacy support, and start from scratch. And please, no registry!
No, it’ll be called Registry.net and all your settings will be stored on live.com servers. Imagine the startup latency that would create.
I didn’t know the Peter Principle was valid for some OSes as well.