“Last month, in the first installment of this three-part series, I looked at Windows Vista kernel enhancements in the areas of processes and I/O. This time I’ll cover advances in the way Windows Vista manages memory, as well as major improvements to system startup, shutdown, and power management.” More here.
SuperFetch seems interesting; not revolutionary, but more of a common-sense step to take.
I haven’t heard of hybrid HDs, though it seems logical… I’ll have to shop around when it’s upgrade time.
*later on* Sysinternals… it's like PowerToys all over again. Why is it that Microsoft spends obscene amounts of money developing or purchasing really great software and releases it for free on its website, but won't bother to add it to its installs?
The software has to meet a broader set of requirements to be included in shipping products vs. being released as a separate web package.
I’m sure this has been detailed on one of MS’ employee blogs, but basically there are greater support, localization, and testing requirements, as well as different packaging (installer) requirements depending on how the application is distributed.
Windows PowerShell is a good example: it is an out-of-band operating system component and had to fulfill the above requirements for distribution. One requirement is using the standard operating system update technologies for component servicing, so it uses a different package for Vista vs. XP/Server 2003.
Ah. I can see why that would be so, then; things you pay for have to be held to a higher standard than those that are free.
Every article from microsoft.com will tell you that Vista [or any other Windows] has thousands of improvements and is tuned to the maximum.
As usual, useless bullsh!t.
score -2: looks like a lot of Windows fans/zealots here.
Try actually reading the article before posting trollish comments next time. You couldn't be further off the mark in terms of what it contains. Its author was well known in the past for posting very detailed analyses of how things actually work in Windows; this isn't marketing fluff at all.
I’m sorry, but there’s simply no other way to say this: You’re an idiot.
Next time try reading the article, as well as making note of who the author is.
Then go compile a kernel or two to quell your anti-Microsoft anger.
One of the reasons Windows Vista uses the BCD is that it unifies the two current boot architectures supported by Windows: Master Boot Record (MBR) and Extensible Firmware Interface (EFI).
Does that mean EFI is supported? I didn’t think it was.
Not a bad article, always interesting to read, although the next one, on security and reliability, seems like it's going to be even better.
Also, here's part one:
http://www.microsoft.com/technet/technetmag/issues/2007/02/VistaKer…
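Side note on the BCD bit quoted above: the store is no longer the plain boot.ini text file, but it can be inspected and edited with the bcdedit tool that ships with Vista. A few basic invocations (the description string is just an example):

    rem list every entry in the boot configuration data store
    bcdedit /enum

    rem back the store up before experimenting
    bcdedit /export C:\bcd-backup

    rem rename the entry for the currently running OS
    bcdedit /set {current} description "Windows Vista (tweaked)"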
EFI is supported in 64-bit (x64) Vista only (needs confirming; they may have pulled it before RTM). Support for the rest is to be rolled out in SP1.
From documentation I’ve seen previously (I’ll see if I can dig up a link), Windows will never support (U)EFI on x86, only x64 and IA-64.
Edit: Here’s the link:
“EFI, UEFI Support, and Windows Vista
Windows 2003 Server supports EFI 1.10 on Intel Itanium platforms. Microsoft Windows Server codename “Longhorn” supports EFI 1.10 on Intel Itanium platforms, and also introduces support for native UEFI boot on x64 64-bit platforms. Although the initial release of Windows Vista will not include UEFI x64 64-bit support, a subsequent Windows Vista release will support UEFI.
Given the advent of mainstream 64-bit computing and the platform costs previously discussed, Microsoft determined that vendors would not have any interest in producing native UEFI 32-bit firmware. Microsoft has therefore chosen to not ship support for 32-bit UEFI implementations.”
http://www.microsoft.com/whdc/system/platform/firmware/efibrief.msp…
Microsoft as usual: "let's cling to the past for dear life".
Not quite. It’s more like “why support UEFI on x86 when most implementations will be for x64”. With the exception of some current Macs, the majority of the UEFI market is running 64-bit CPUs.
Sure, why not jump into the brave new world of bootloader technology so that we can put it on our box (even though it’ll only be relevant for 500 ms during the boot process)!
The only reason Vista uses the BCD is to piss off people using other OSes by screwing up their MBR and refusing to boot if they restore it.
This goes against any sane practice, and I believe one could probably find a law in France or elsewhere that it violates (something like a "hidden flaw", since it doesn't behave the way any OS is expected to).
Besides, it's also yet another manifestation of the full DRM lock-in they have been pushing all along. No Vista here, never!
I could boot Vista RC1 normally alongside GRUB (from Ubuntu Edgy) installed in the MBR. Of course, I had to back it up before installing Vista, as Ballmer's minions still don't care to ask before rewriting the MBR on install. After restoring it and adding a Windows loader entry to menu.lst, all OSes (XP SP2, Vista RC1, Edgy Eft) were bootable. No better or worse than with any previous Windows version. Linux live/install disks should have a special repair menu option to check and fix a ruined MBR (with a question about adding or removing Windows from the GRUB list).
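For anyone who wants to do the same, a minimal sketch of the backup-and-chainload routine; the device names (/dev/sda, hd0,0) are examples, so adjust them to your own layout:

    # back up the MBR (boot code + partition table) before the Vista installer touches it
    dd if=/dev/sda of=/root/mbr.backup bs=512 count=1

    # restore only the first 446 bytes afterwards, so the GRUB boot code comes back
    # but the partition table (which the installer may have changed) is preserved
    dd if=/root/mbr.backup of=/dev/sda bs=446 count=1

And the menu.lst entry to chainload the Vista boot manager from the first partition of the first disk:

    title Windows Vista
    rootnoverify (hd0,0)
    chainloader +1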
Also, Finchwizard, thanks for linking to part one! Before I read this, I was worried that Vista was mostly a GUI upgrade.
It's interesting how Microsoft assumes most users will be using heavy audio/video applications, and is altering its OS to support those programs better. That will definitely put Windows in a unique place on the market (besides that only-OS-with-significant-support thing, that is), and keep Windows the king of multimedia OSes.
I’m not impressed by ‘symbolic links’, however; I can’t see the need or the use for them, although I suppose certain tasks might be made easier.
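For what it's worth, here's roughly what creating one looks like from Win32; a minimal sketch using the new Vista CreateSymbolicLink API, with made-up example paths:

    #define _WIN32_WINNT 0x0600
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Opening the link path transparently redirects to the target.
           Creating symlinks requires SeCreateSymbolicLinkPrivilege,
           which by default means an elevated process. */
        if (CreateSymbolicLinkA("C:\\reports\\latest.txt",           /* link (example path) */
                                "C:\\reports\\2007-02\\summary.txt", /* target (example path) */
                                0)) /* 0 = file link; 1 = directory link */
            printf("symlink created\n");
        else
            printf("failed, error %lu\n", GetLastError());
        return 0;
    }

The same thing is exposed on the command line through the new mklink tool; the classic use is keeping one "current" path that always points at the latest version of something.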
I hope that third-party vendors use the I/O priority system appropriately, because it seems like the sort of thing that many people will make assumptions about without trying to understand the reasoning behind it. Additionally, I worry that if some malware author figures out a way to back-door his program's I/O into Critical priority, what seems like a good idea could easily be abused.
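For reference, this is roughly how a well-behaved background task is supposed to cooperate; a minimal sketch using the Vista-era background thread mode. As far as the documented interfaces go, an application can only lower its I/O priority this way, not raise it to Critical:

    #define _WIN32_WINNT 0x0600
    #include <windows.h>

    /* Entering background mode drops the calling thread's I/O (and memory)
       priority so a bulk scan doesn't starve foreground applications.
       The mode can only be applied to the current thread. */
    void do_bulk_scan(void)
    {
        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

        /* ... long-running, I/O-heavy work goes here ... */

        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
    }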
I have Windows Vista Business 32-bit edition on my notebook (Turion @ 2000 MHz, 1 GB of RAM, ATI X1600) and it pretty much sucks.
Until I turned off all the "new" features (security, indexing, "autodefrag", etc.), the thing was basically unusable; I had constant disk usage going on. After turning all those off (Aero is fine at full), it became usable, but it's still pretty crappy.
I’m also one of the “happy” people with the known vista “lid close/open -> no screen” problem.
There's also quite bad battery drain; even with "battery preserving" settings I get much less time than in Ubuntu.
On top of that, the thing eats ~450 MB of RAM right on start (the "real" usage, not caching/prefetch) and 8 GB on disk, which is outrageous considering the nothing it does.
Compared with Ubuntu Edgy, which I have on a second partition, it's horrible; the looks are not worth it.
I think I'll be forced to remove it and put XP in.
Out of curiosity, how did your computer score in that performance index thing? Perhaps more RAM would help (Vista eats tons of it; at least that's my impression, as I've only run the beta).
For comparison: when Win95 came out, I was very annoyed by the fact that I had to upgrade from 4 MB to 8 MB of RAM in order to be able to run it at all (it was swapping all the time). And my first computer worked fine with 64 KB. Software these days, tsk tsk.
It scored 4 overall, with processor being worst (4.0) and some parts going as high as 4.9.
I think Vista with enough RAM would be faster than Windows XP; even Dell recommends 2 GB of RAM for Vista. You could try upgrading your computer's memory to 2 GB. We recently purchased a new computer system at our office (Pentium D 960 @ 3.2 GHz with 1 GB of RAM and an NVIDIA Quadro 4500 graphics card), and Vista seems to be a lot faster than Windows XP was. Maybe that's just my opinion.
The disk thrashing is most likely the indexer at work.
This would also be the source of your battery life problems, though having Aero Glass on doesn't help either.
It could also be the fact that Windows has always had incredibly aggressive page caching. It doesn't help that NTFS read performance goes to crap (more than any other modern filesystem) if the page cache hit rate drops.
Or it could be excessive VM paging due to the new pageable kernel buffers in Vista. Kernel paging is a double-edged sword: it only works well for systems with small amounts of physical memory and lots of disk throughput, and most Vista systems will be exactly the reverse. Plus, kernel paging is incredibly complex and error-prone; there are lots of interesting edge cases that trigger the hard reset trap.
NT kernel code has always been pageable (a main cause of the IRQL_NOT_LESS_OR_EQUAL bluescreen), and kernel buffers have also been (mostly) pageable. There is a non-paged pool, and there are non-paged code sections for things like the page-in code, ISRs, and other critical routines, but the bulk of the kernel is pageable. Maybe now that RAM is more plentiful than in the old days they shouldn't page certain parts of the kernel, but that's another issue.
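A tiny driver-side sketch of that paged/non-paged split (the pool tag and sizes are arbitrary examples):

    #include <ntddk.h>

    /* Pageable allocations are fine for data only touched at PASSIVE_LEVEL.
       Anything touched at DISPATCH_LEVEL or above must come from the
       non-paged pool: touching paged-out memory at high IRQL is exactly
       what raises IRQL_NOT_LESS_OR_EQUAL. */
    VOID AllocExample(VOID)
    {
        PVOID pageable    = ExAllocatePoolWithTag(PagedPool,    4096, 'Xmpl');
        PVOID nonPageable = ExAllocatePoolWithTag(NonPagedPool, 4096, 'Xmpl');

        if (pageable)    ExFreePoolWithTag(pageable,    'Xmpl');
        if (nonPageable) ExFreePoolWithTag(nonPageable, 'Xmpl');
    }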
Vista uses more RAM on boot because of its caching. The article talks about this. You are comparing apples and oranges, so to speak. If you don't like Vista, that's fine, but just know that it's a whole different beast.
What got my attention, along with about five minutes of laughter, was the new kernel memory management. With XP and before, each kernel subsystem (I/O, filesystem cache, device drivers, etc.) was statically allocated its own region of kernel memory. Starting with Vista, they all share the full 2 GB space. From what I can tell, most other operating systems have long had a global memory pool for kernel pages and dynamically allocate from that.
I think you read that wrong.
XP and the NT versions before it used a statically-allocated region of memory for all kernel subsystems, but those kernel subsystems could dynamically allocate from that pool all they wanted. The problem arose when some subsystem or driver wanted more memory than had been statically allocated to the whole pool at boot.
Vista now dynamically manages both the kernel and application pools.
^ That is at least my understanding based on the article.
Quoting from the article:
“… Prior to Windows Vista, the Memory Manager determined at boot time how much of the address space to assign to these different purposes, but this inflexibility sometimes led to situations where one of the regions became full while others still had plenty of available space. The exhaustion of an area can lead to application failures and prevent device drivers from completing I/O operations….”
This seems to support my understanding of the article.
Now somebody go ahead and mod this -2 as well.
.. the result is so slow? All the improvements are meant to make the OS more responsive, but I clearly see the opposite result. I guess all this optimization logic is quite expensive itself and simply requires faster hardware with more memory to work on.
Is the Microsoft Defense Brigade already out in full force? Why does Joe Average love wasting his money on inferior crapware from Microfart?
Because it’s the best desktop OS on the market. It’s that simple.
Windows users even get the best versions of the open-source stuff, like OpenOffice.org, BitTorrent, and Firefox.
What do you mean by "the best versions"? On Linux you can customize them, as with Gaim.
–On topic–
Don't see any improvements in startup and shutdown; it takes just as long as XP does, and XP x64 takes an age to shut down sometimes.
Because your favourite operating system works wonders on a server yet is hopelessly amateurish on a home computer.
I did the Linux, full time thing for 3 years and then I decided to join the rest of the world.
Wow … you lasted three years ….
As long as there was another Linux distro, there was still hope of finding the one that fixed the deficiencies of the previous. After going through "all" of them, twice, I lost hope.
I think there are fundamental problems with the concept of a Linux distribution which simply can't be solved without breaking the whole model. But that's a long story.
First, the choice of operating system is decided by the preferences of the user, and it's useless to discuss each person's preferences here. Second, the article is about the Windows kernel and not the whole operating system, so comparisons with Linux distributions like the ones you make here are completely out of scope and don't make any sense. Third and last, talking about kernels and not operating systems (which are two completely different things, despite the two concepts being related), the Windows kernel is huge and much buggier when compared with Linux. The point is that Linux isn't bug-free, and is far away from that, but it's more reliable and secure than the Windows kernel.
I take it you've audited the Windows kernel source to determine its level of bugginess?
I agree that there are many attacks against Windows, but I think the core of the kernel is pretty well designed and secure. No doubt some drivers aren't great, and even some kernel-mode subsystems could have problems (like the RPC services or SRV, the SMB daemon), but the level of stress testing the kernel goes through is quite immense.
/Is the Microsoft Defense Brigade already out in full force?/
No, but it looks like someone let the linsux flocktards out of the cage.
OS News is hardly pro-Microsoft, and actual information about the operating system is hard to come by here. Please don’t attack those few here who are interested in Windows as an operating system.
Because rarely can anyone give a good, sensible argument against doing so.
It's interesting that the article is from Microsoft itself; that reminds me of Bill putting his 10-year-old daughter into ads for the Xbox 360 :p
Mark Russinovich started Sysinternals in the mid-90s. He has been an NT kernel expert for more than a decade, and his company was only recently acquired by MS. He is one of the most respected names in the Windows world and has been writing technical articles about Windows for a very long time. You may be able to bitch about MS, but that guy is a serious NT wizard, regardless of who published his article.
Oh, I see.
Good to see there is still someone online who recognizes Mark's work.
For all those talking about how bad Windows is: this article is not about that at all; it's information regarding the inner workings of Vista.
These are the kind of articles this site, OSAlert.com, was made for. It's interesting to read about the inner workings in plain English, which provides a greater understanding of what is actually going on inside. Regardless of what I think of the OS itself, the article was very good, and I would love to see others written in the same vein for all the other OSes out there (Mac OS X, Solaris, etc.).
If you wanna bash Microsoft for no reason, find a forum; this site is for operating system news.
I agree, but my point is that, with all the "new" (mostly ancient) ideas implemented in Vista, it simply doesn't show from a user's perspective.
I could go on and sing about how superior Linux I/O is to Windows's, but nobody would listen if it didn't show in the end.
High-quality articles about the internals of BSD, Solaris, and Linux are everywhere. There is no secret about it.
This article roughly hints at the internals, but in no way matches the quality of what is available for the open kernels.
It's refreshing to see an article about Windows that actually says something, but it is imprecise, and of limited value for a closed kernel, since no one can change or tweak it anyway.
While I don't agree with most of what SuperFetch does, I do like fetching running applications back into memory once the active process that evicted them has finished. In fact, thanks to Con Kolivas and his swap prefetch patch, I already have this:
“Therefore, if you leave your computer to go to lunch and a memory-intensive background task causes the code and data from your active applications to be evicted from memory while you’re gone, SuperFetch can often bring all or most of it back into memory before you return.”
And it works extremely well.
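To give a flavour of the general idea (this is not Con's actual code, which lives kernel-side and prefetches swapped-out anonymous pages during idle time): userspace can ask for much the same effect on a file-backed mapping with madvise. A toy sketch:

    #include <sys/mman.h>

    /* Hint to the kernel that this (page-aligned) region will be needed
       soon, so it can read the pages back in ahead of time instead of
       taking hard faults when the application touches them. */
    void prefetch_region(void *addr, size_t len)
    {
        madvise(addr, len, MADV_WILLNEED);
    }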
Unfortunately, the major kernel developers have never provided solid reasons as to why it ought not to be included in mainline. They keep bringing up hypotheticals about huge NUMA systems being affected in certain ways, or the fact that their mammoth development systems don't seem to show any benefit. It does on this mostly *typical* desktop user's system!
Con's patch set: http://members.optusnet.com.au/ckolivas/kernel/