Paul Thurrott takes a long look at Vista. “It’s hard to put Windows Vista in perspective. On the one hand, the product has been in development for over five years, which means that Vista had one of the longest development cycles in the 20+ year history of Windows. Paradoxically, Windows Vista is both revolutionary and evolutionary. While it includes modern OS features, such as a new hardware-based graphical user interface, Vista will also feel like familiar territory, for the most part, to anyone that’s already familiar with Windows XP. And Mac advocates can claim, truthfully, that many of Vista’s best features appeared first on Mac OS X, sometimes years ago.”
Paul Thurrott.
Edited 2006-11-09 21:06
For me, a hardware-based GUI is hardly a modern operating system feature. I rather thought a fine scheduler and effective multitasking are what really count.
Yes, you’re oldschool.
RC2 froze on me every 2 hours… I hope the RTM is much better.
And the new Explorer is really terrible… for the life of me I cannot understand how removing all the toolbar buttons except a stupid “burn” button that I never use is a good thing…
Just a minor correction. Every Windows since NT4 SP X has had “Instant Desktop Search”. It’s just turned off by default and is anything but user-friendly.
But you _can_ use the Indexing Service to get instant results. You just have to enter parameterized queries in the free search textbox in Windows standard Search. And off we go, though it’s somewhat rough.
I don’t know why they aren’t utilizing it much more, but perhaps bad management is the reason?
About the Windows N versions, Paul Thurrott doesn’t know what he’s talking about. Calling them unpopular is wrong, since nobody except a few geeks knows they exist. And ordinary users don’t care about what are mostly ideological issues for geeks who want the possibility to uninstall WMP and related services. Nor do they care about companies wanting to get rid of WMP on their servers.
Apart from that, Thurrott should stay away from politics in a review of Vista. Offensive terms like “Created to satisfy power-hungry antitrust regulators in the EU” just make him look like a cheap street whore. If I should ever decide to buy Vista, it would be Vista Business N.
He is right about a few things like the “watering down the uniqueness of the Vista platform” due to the backports to XP and Win2K3 (and hooray for those backports). And of course the fact that “many of Windows Vista’s best features have been jettisoned”. I actually liked the old Side Bar with integrated task bar and systray over the newer heavily crippled Side Bar.
A replacement of the aged start bar|quick panel|taskbar|systray thingy would have been great.
EDITED: Fixed a typo in Thurrott’s surname, and a few other typos.
Edited 2006-11-09 21:36
Offensive terms like “Created to satisfy power-hungry antitrust regulators in the EU” just makes him look like a cheap street whore.
I think he was accurate. Fining Microsoft because they bundled a media player with Windows for 10 years or more is evidence of bias in the EU anti-competition commission.
Calling someone a “cheap street whore” means you are doing the usual venom-spewing thing typical of cultists who disagree with anyone who doesn’t hate Microsoft.
accusing someone of ‘venom spewing’ and being a ‘cultist’ is typical of anyone who blindly loves microsoft and hates freedom and little puppies. Wow, isn’t accusing a person of a shortcoming whilst exhibiting that same shortcoming fun?
Anyway, I’ve read your posts here and there, so I won’t bother arguing with you over how the EU (and the US for that matter) had every right to label MS’ business practices as monopolistic.
Edited 2006-11-09 22:11
*LOL*
I just like the way NotParker always picks the parts where I criticize something or someone more or less related to Microsoft, while he leaves out the praise I give Microsoft in areas where they deserve it.
It appears to be more important for NotParker to harass me than it is for him to remind people that Microsoft was years ahead of FLOSS and Apple in regard to Desktop Search technologies.
I think that the point behind the EU ruling was fining Microsoft for being a monopolistic, anti-competitive, typical-example-of-corporate-America ugly bully. They just had to hang the fine on something legitimate.
Now please tell me how could you be so offensive to street whores:
“Just makes him look like a cheap street whore.”
Please, for the sake of god, do not equate street whores with Mr. Parrot. They (the whores) do, after all, have some integrity. And we all know who has none. It is them… Europeans. They constantly whine about WMP, the network protocols used by Windows, other software vendors, etc. How dare they make MS play on a level field???
Sorry guys, couldn’t resist.
Aahh..
Well, please accept my apologies on behalf of the cheap street whores. They too have a place in this universe (I think)
Though I use and respect Mac OS X, virtually every version Apple has shipped since 2001 has been a minor update, akin to a Windows 98 SE or Windows XP Service Pack 2 (SP2). Meanwhile, Microsoft has pushed an amazing variety of Windows versions out the door since 2001. Some highlights include Windows XP Embedded, Windows XP Media Center Edition (MCE), Windows XP MCE 2004, Windows XP MCE 2005, Windows XP Tablet PC Edition (TPC), Windows XP Tablet PC Edition 2005, and Windows XP Professional x64 Edition.
I’m curious, how different are all the versions of XP he’s listing? I mean, I’m fairly sure Embedded and 64-bit are rather different from regular XP, but does anyone know how much changed between XP MCE, XP MCE 2004 and XP MCE 2005, for example? I guess I always assumed not that much.
All in all, this article seems to be a well-written exhaustive rundown of Windows Vista’s history and comparison of differences between the versions (or seems like it will be once he publishes the rest of it).
EDIT: Punctuation
Edited 2006-11-09 21:38
Windows XP Media Center Edition (MCE), Windows XP MCE 2004, Windows XP MCE 2005, Windows XP Tablet PC Edition (TPC) and Windows XP Tablet PC Edition 2005 are almost identical. The only difference in the MCE versions is the skin and Media Center itself (duh), and all the Tablet PC versions add a handwriting feature and a useless utility called “Journal”. Windows XP Professional x64 Edition is identical to XP Pro except for being 64-bit, and Embedded is just a heavily stripped-down version.
Remember, marketing is the art of selling the same thing twice, at different prices, to different targets, or under different branding.
Identical to XP except for being 64-bit.
is that ALL?
It’s not like migrating a 32-bit OS to a 64-bit platform isn’t hard or anything.
XP64 is actually based on Windows 2003. In many ways it is the true predecessor of Windows Vista.
Maybe because of:
“Linux has been running on 64 bits since the summer of 1994, when Linus Torvalds ported Linux to the Alpha chip.”
*nix users are not impressed with MS catching up. As per Paul’s own admission that “Vista’s best features appeared first on Mac OS X, sometimes years ago”, may I recommend a new marketing catch phrase:
“the most caught up OS … puff puff … will you other guys please slow down …. “
You do know that NT also has been running on ALPHA for a long time, right? In fact, even earlier than Linux– NT 3.1 was released in 1993. NT also runs on SPARC, MIPS, Clipper, and probably some more I’m forgetting at the moment.
Forget the myth. NT is just as portable as Linux. It’s just that there is no financial interest in running NT on uncommon platforms. However, I am sure that if a big customer wanted a SPARC build of Windows Server 2003, they’d get it.
Of all the people that post on this forum, I would expect the OSN staff to be a bit more informed on the subject. As is obvious from your reply, CLEARLY I am wrong.
Please provide detailed information (a link will do) on when (if ever) NT 3.1 operated in 64-bit mode (not to be confused with operating ON a 64-bit processor)!!! I’ll gladly provide you with detailed info that Linux has worked as a 64-bit operating system if you need it.
Here is an article from our friend le Parrot:
“Microsoft pledged today to deliver a 64-bit version of Windows NT on the Digital Alpha CPU. The product is due in beta by the end of the year. The Alpha version of NT 5.0 will support 64-bit Very Large Memory (VLM), which allows more information to be processed simultaneously, speeds access to information in the system’s RAM, and is especially useful for any database and high transaction volume work.”
Date :
March 14, 1997
NOTE: NT 5.0
“Windows NT 3.1 was released for Intel x86 PC compatible, DEC Alpha, and ARC-compliant MIPS platforms. Windows NT 3.51 added support for the PowerPC processor in 1995, specifically PReP-compliant systems such as the IBM Power Series desktops/laptops and Motorola PowerStack series.”
http://en.wikipedia.org/wiki/Windows_NT
Yeah… 32-bit versions. The topic is 64-bit versions.
Removed. Dupe of notParker
Edited 2006-11-10 01:11
I am sorry, it was much earlier. NT 5.0 on Alpha never made it to our company, but we had a product on NT 4.0 around 94/95 when I was working at DEC.
WTF are you people talking about? There was no 64-bit Windows NT (server or workstation). Quit smoking whatever you are smoking.
People, get a grip: we are talking about a 64-bit OS, not a 32-bit OS that can run on a 64-bit CPU. Please tell me that you can tell the difference. MS’s first crack at a 64-bit OS was with W2K. Windows WAS and still IS an OS for low-end computers (PCs).
//NT is just as portable as Linux.//
Not correct.
Linux has been ported to more architectures than any other OS.
http://en.wikipedia.org/wiki/Linux
“Linux has been more widely ported to different computing platforms than any other operating system. It is used in devices ranging from supercomputers to mobile phones, and is gaining popularity in the personal computer market.”
http://en.wikipedia.org/wiki/Linux_%28kernel%29#Portability
“Linux supports the following machine architectures:
* Argonaut RISC Core (ARC) from ARC International
* ARM architecture:
o Acorn Archimedes and Risc PC series
o DEC StrongARM
o Marvel (formerly Intel) XScale
o Sharp Zaurus
o iPAQ
o Palm, Inc.’s Tungsten Handheld[14]
o Gamepark Holdings’ GP2X
o Nokia 770 Internet Tablet
o gumstix
* Atmel AVR32
* Axis Communications’ ETRAX CRIS
* Fujitsu FR-V
* Alpha architecture:
o DEC Alpha
o Samsung Alpha CPU.
* Hewlett-Packard’s PA-RISC family
* H8 architecture from Renesas Technology, formerly Hitachi.
o H8/300
o H8/500
* Intel IA-64 Itanium, Itanium II.
* IBM’s S/390 (31-bit)
* IBM’s zSeries and System z9 mainframes (64-bit)
* x86 architecture:
o 8086, 8088, 80186, 80188 and 80286 CPUs is under development[15]
o Intel 80386, 80486, and their AMD, Cyrix, Texas Instruments and IBM variants
o The entire Pentium series
o AMD 5×86, K5, K6, Athlon (all 32-bit versions), Duron, Sempron
o x86-64: AMD’s 64-bit processor architecture now known as AMD64 or EM64T (Intel); supported by the Athlon 64, Opteron and Intel Core 2 processors, among others
o Cyrix 5×86, 6×86 (M1), 6x86MX and MediaGX (National/AMD Geode) series
o VIA Technologies Eden (Samuel II), VIA C3, and VIA C7 processors
o Microsoft’s Xbox (Pentium III processor), through the Xbox Linux project
o SGI Visual Workstation (Pentium II/III processor(s) with SGI chipset)
o Sun Microsystem x86 Workstation (80386 and 80486).
* M32R from Mitsubishi.
* MIPS architecture:
o Jazz
o Cobalt Qube, Cobalt RaQ
o DECstation
o Godson (MIPS-like), Godson II, and Godson IIE from BLX IC Design Ltd (China).
o Some PlayStation 2 models, through the PS2 Linux project
* Freescale (formerly Motorola) 68K architecture (68020, 68030, 68040, 68060):
o Some Amigas: A1200, A2500, A3000, A4000
o Apple Macintosh II, LC, Quadra, Centris and early Performa series
o Sun Microsystems 3-series workstations (experimental, uses Sun-3 MMU) [citation needed]
* NEC v850e[citation needed]
* Power Architecture:
o IBM Servers.
* PowerPC architecture:
o IBM’s CELL
o Most newer Apple Computers (all PCI-based Power Macintoshes, limited support for the older NuBus Power Macs)
o Clones of the PCI Power Mac marketed by Power Computing, UMAX and Motorola
o Amigas upgraded with a “Power-UP” card (such as the Blizzard or CyberStorm)
o AmigaOne motherboard from Eyetech Group Ltd (UK)
o Samantha from Soft3 (Italy).
o Amy’05 PowerPC motherboard from Troika.
o IBM RS/6000, iSeries and pSeries systems
o Pegasos I and II boards from Genesi.
o Nintendo GameCube, through Gamecube Linux
o Project BlackDog from Realm Systems, Inc.
o Sony Playstation 3 (PS3).
o V-Dragon CPU from Culturecom.
o Virtex II Pro Field Programmable Array (FPGA) from Xilinx with PowerPC cores.
* SPARC:
o Sun-4 series[citation needed]
o SPARCstation/SPARCserver[citation needed]
o Sun Ultra series
o Sun Blade
o Sun Fire
o Clones made by the Tatung Company and others[citation needed]
* SuperH”
Linux has been ported to more architectures than any other OS.
How is that possible? Linux is not an OS, it’s only a kernel. NetBSD, as a whole OS, has been ported to over fifty-seven different architectures.
How is that possible? Linux is not an OS, it’s only a kernel. NetBSD, as a whole OS, has been ported to over fifty-seven different architectures
It’s possible because he was talking about GNU/Linux. Guess what, GNU is extremely portable too!
Not correct.
Linux has been ported to more architectures than any other OS.
Whether something is portable or not has nothing to do with how many platforms it has actually been ported to; Linux is ported to more platforms not because it is necessarily more portable, but because you don’t have to have a company’s approval to do so.
NT is a hybrid kernel, and therefore it is probably even *more* portable than the monolithic architecture of Linux.
NT is a hybrid kernel, and therefore it is probably even *more* portable than the monolithic architecture of Linux.
Too bad that “hybrid kernel” is a rather meaningless term, coined by marketeers who think “monolithic” sounds less cool. Please let’s refrain from kernel talk if we don’t know what a kernel really is.
It’s cool to repeat what others are saying, but maybe it’s a good idea if you actually try to explain why the term hybrid is meaningless?
It’s cool to repeat what others are saying, but maybe it’s a good idea if you actually try to explain why the term hybrid is meaningless?
It’s cool to claim that NT’s “hybrid kernel” is “probably more portable” than the monolithic kernel Linux, but maybe it’s a good idea if you actually try to explain why this would be the case?
The term hybrid is meaningless for NT, at least because we’d like to see a little more stuff (USB, filesystems, networking, etc.) in user space instead of in kernel space, wouldn’t we?
As I said, I don’t know enough about this to discuss it in great length, but enough to recognize your claim as lacking substance.
h3rman, I’ve written articles on this subject, I’ve done research on this subject, so I know what I am talking about (sort of).
You seem to lack understanding of what it means to be a certain type of kernel. People often claim that the most distinctive feature of a microkernel (muK) is the fact that it has all its processes running in userspace (save for stuff like thread management and IPC). While this definitely gives the muK its advantage over monolithic designs in various use-cases, it is not the most defining feature. The most defining feature of a muK, which is closely related to the above, is that it is highly modular. It is this modularity that allows processes to live in userspace in the first place!
So, yes, NT might have 90% of its functionality running in kernelspace, but that does not mean it is a monolithic kernel. The various processes of the NT kernel (which in muKs would live in userspace) are still separate modules, which can actually be moved out of kernelspace relatively easily (some stuff has been moved into userspace for Vista, actually, such as the audio and display subsystems).
I can see how you came to think that the muK-has-everything-in-userspace characteristic is what defines a kernel as being a muK; while this is definitely an important aspect, it is the highly modular structure of a muK that makes it possible in the first place.
That’s where the term hybrid comes into play. NT is essentially a very muK-like design, but in order to eliminate the overhead created by having a true(tm) muK design (see my article on QNX’s Neutrino kernel [1] about this overhead), it has most of its processes in kernelspace. However, that does NOT make it a monolithic design. Hence the term ‘hybrid’, catchy as it may be, is actually a pretty dead-on term for the NT kernel (and other kernels like it).
[1] http://www.osnews.com/story.php/8911/QNX-The-Unexpected-Surprise
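To make the modularity point concrete, here’s a toy sketch (Python for brevity, and purely illustrative — the `Module`/`Kernel` names are invented, none of this is NT’s actual code): when every subsystem sits behind one narrow interface, where a module runs is just an attribute of the module, and relocating one doesn’t touch any of the others.

```python
# Toy model of a modular ("hybrid"-style) kernel. Each subsystem is a
# self-contained module behind one narrow call interface; whether it
# runs in kernelspace or userspace is a deployment attribute, not
# something baked into every other module's code.

class Module:
    def __init__(self, name, space="kernel"):
        self.name = name
        self.space = space  # "kernel" or "user"

    def handle(self, request):
        return "%s handled %s" % (self.name, request)

class Kernel:
    def __init__(self):
        self.modules = {}

    def register(self, mod):
        self.modules[mod.name] = mod

    def move_to_userspace(self, name):
        # Relocating one module changes nothing in the others.
        self.modules[name].space = "user"

    def dispatch(self, name, request):
        return self.modules[name].handle(request)

kernel = Kernel()
kernel.register(Module("display"))
kernel.register(Module("audio"))
# Vista reportedly moved the audio stack out of kernelspace; in this
# toy model that is a one-line relocation:
kernel.move_to_userspace("audio")
print(kernel.dispatch("audio", "play"))   # audio handled play
```

Dispatch goes through the same interface either way — that, and not where the code happens to live, is the modularity being argued about.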
Thom, quoting you:
I strongly believe that a lot of people have a lot more interesting things to say about this subject than I do, seeing my limited knowledge on kernel design.
The same goes for me.
All I can say is, NT may have a certain design (much like a microkernel as you say), if it works for 90% like the opposite design (monolithic), then in what way (to come back to your original claim) is NT, as a “hybrid kernel”, more portable than monolithic Linux? (To be frank, I couldn’t care less which OS easily ports to more architectures, as I [amateur, aren’t I?] use only two ).
All I can say is, NT may have a certain design (much like a microkernel as you say), if it works for 90% like the opposite design (monolithic)
Like I said, it doesn’t. It works like a microkernel, but with most of its processes in kernelspace. As I just explained, the defining feature of a muK is not where its processes live, but how they live.
then in what way (to come back to your original claim) is NT, as a “hybrid kernel”, more portable than monolithic Linux?
I said it might be more portable; I can’t really tell. Theoretically, however, muKs are thought to be more easily portable than monolithic designs, since you can focus on one module/component at a time. You first port the relatively small kernel (i.e. MINIX3 has 4000 lines of code in kernelspace), after which you can port over component by component, injecting them into the system without affecting the running code in kernelspace or any of the other modules.
You can’t do this with a monolithic kernel; a pure monolithic kernel must be ported over to the new architecture in one big swoop, all the functionality in one big effort, with each piece of code you rewrite affecting the entire kernel, and not just the module of which that piece of code is part.
But let me stress that this is all theoretical.
Theoreticaly, however, muKs are thought to be more easily portable than monolithic designs, since you can focus on one module/component at a time
I fail to see the relation there. More portable because you can focus on one module at a time? How is that so?
Anyway, Linux development works exactly the same, so it can’t be a monolithic design by your definition!
You first port the relatively small kernel (i.e. MINIX3 has 4000 lines of code in kernelspace), after which you can port over component by component, injecting them into the system without affecting the running code in kernelspace or any of the other modules
Strangely enough, Linux devs can do that too!!!
You can’t do this with a monolithic kernel; a pure monoltihc kernel must be ported over to the new architecture in one big swoop, all the functionality in one big effort, and each piece of code you rewrite affecting the entire kernel, and not just the module fo which that piece of code is part
Strangely enough, the Linux kernel has a big common code part for all the architectures, and several small specific ones. How is that so?
But let me stress that this is all theoretical
Ah, that’s why: a bad theory destroyed by practical evidence.
Unfortunately, when you focus too much on one point, you lose others. Like the time needed to port these so-called portable modules in your microkernel.
I fail to see the relation there.
Of course. You need to get a clue first, then you’ll see the relation.
Please bring some decent arguments into the discussion; this is a technical discussion I’m having with h3rman, not some silly flamewar debate about how superior Linux is because it runs on my toaster.
I fail to see the relation there
Of course. You need to get a clue first, then you’ll see the relation
The implied question was: do you even have this clue? It seems you don’t.
AFAIK, there is nothing technical about this, and being able to work on little modules never meant your code was more portable, in theory or in practice.
After your amazing answer, I still fail to see any relation between the two, so please enlighten me.
As it appears that to you, saying “this is a technical discussion I’m having with h3rman, not some silly flamewar debate about how superior Linux is because it runs on my toaster” is a decent argument, I’d also like to know why an OS that is knowingly portable, and that you called monolithic, is suddenly shrugged off as an indecent argument because I or others supposedly run it on a toaster? And why is this a flamewar debate exactly when people mention Linux being more portable than any microkernel *in practice*?
Is microkernel portability an ivory tower people concept or what ?
And finally, is this so called hybrid kernel in Windows Vista ? It should as it is NT, right ?
Edited 2006-11-10 15:25
Look, all I’m saying is that the fact that Linux runs on 2348729 different platforms does NOT, I repeat does NOT, mean it is easily portable. NT runs on fewer platforms, but that does NOT, I repeat, does NOT mean it is any less portable than Linux. How hard is this to understand?
You are somehow assuming that just because Linux runs on many platforms, it is more portable than something else. Portable != how many platforms does it run on. Portable = how easy it is to port something.
Now, like I explained, using the characteristics of the two kernel designs, theoretically, muK designs are more easily ported (=more portable) to other platforms than monolithic designs. Since NT follows a muK design a lot more than Linux does, the theoretical implication is that it should be more portable.
The fact that Linux runs on more platforms than NT is irrelevant to that discussion, simply because it does not prove a damn thing, since there is no level playing field. ANYBODY can port Linux, NOBODY can port NT. If NT were to be open sourced, sure as hell it’d run on a lot of different platforms within months.
The fact that right from the release, NT ran on Alpha, i386, and PowerPC only proves the portability of NT. In fact, in order to prevent x86-specific code from slipping into NT, NT was developed on non-i386 machines (it was actually developed on architectures designed in-house by Microsoft). Linux, on the other hand, was not designed with portability in mind.
You just don’t get it, do you?
What determines if an OS (or to extend the question, any software) will run on a certain platform?
//What determines if an OS (or to extend the question, any software) will run on a certain platform?//
Let me guess the number one criterion. You have to have a cross-compiler?
That is, a compiler that you can run on an existing machine that will produce code for the new machine.
Let me guess the number two criterion. You have to have a means of getting your compiled code onto the target machine?
This might seem like a gimme, but it isn’t easy for some targets that Linux runs on *cough* Xbox *cough*.
//Look, all I’m saying is that the fact that Linux runs on 2348729 different platforms does NOT, I repeat does NOT, mean it is easily portable. NT runs on fewer platforms, but that does NOT, I repeat, does NOT mean it is any less portable than Linux. How hard is this to understand?
You are somehow assuming that just because Linux runs on many platforms, it is more portable than something else. Portable != how many platforms does it run on. Portable = how easy it is to port something.
Now, like I explained, using the characteristics of the two kernel designs, theoretically, muK designs are more easily ported …. //
You were doing quite well, until that last bit.
Let’s agree on this: “Portable = how easy it is to port something”.
OK then, instead of going into the esoterics of kernel design and somehow getting ourselves to believe the obviously false conclusion that the kernel that runs on everything from wristwatches and mobile phones through to supercomputers is somehow less portable, let’s actually look at how easy it is to port something.
It is easy to port if, and only if: (1) you have a cross-compiler for the target machine instruction set, (2) you have the code to port, and (3) the code doesn’t have many built-in dependencies on the machine hardware, and any dependencies that do exist are well documented and well understood.
Linux has all 3 of those attributes, and so is easy to port, and so it is ported.
NT … who knows? For most people, it misses out badly in at least two of those areas, probably all three.
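Condition (3) is the interesting one, and it fits in a few lines. A toy sketch (Python, purely illustrative — the `ARCH_OPS` table and its layout are invented; only the page sizes and TLB-flush mnemonics are real): keep the machine dependencies behind one small, documented table, and a port only has to fill in a new entry, while the hardware-independent logic is untouched.

```python
# Machine-dependent bits live behind one small, documented interface;
# everything else is hardware-independent and ports by recompilation.
ARCH_OPS = {
    "i386":  {"page_size": 4096, "flush_tlb": "invlpg"},
    "alpha": {"page_size": 8192, "flush_tlb": "tbia"},
}

def pages_needed(arch, nbytes):
    # Hardware-independent logic: identical on every architecture.
    page = ARCH_OPS[arch]["page_size"]
    return (nbytes + page - 1) // page

def port_to(arch, page_size, flush_insn):
    # "Porting" this sketch means supplying one new arch entry;
    # no existing code is rewritten.
    ARCH_OPS[arch] = {"page_size": page_size, "flush_tlb": flush_insn}

port_to("mips", 4096, "tlbwi")
print(pages_needed("alpha", 10000))   # 2 (10000 bytes on 8 KB pages)
```

Whether the dependencies really are this well isolated and documented is exactly what differs between codebases — which is the point of condition (3).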
//Linux, on the other hand, was not designed with portability in mind.//
That is true.
Yet amazingly, it (along with GNU) is far & away the most ported code on the planet.
Portability might not have been a design aim for the original Linux kernel (it was however for GNU), but that does not preclude the outcome of it actually being portable.
//ANYBODY can port Linux, NOBODY can port NT.//
Granted. How does this help your argument in any way? This actually says that Linux is more portable, because anyone can port it, and NT is not as portable because access to the source code (which is a necessary precondition for being able to port something) is far more limited.
//If NT were to be open sourced, sure as hell it’d run on a lot of different platforms within months. //
Since NT is not open sourced, this conclusion is pure speculation on your part at best. In any event, the fact that NT is not open source surely is a point arguing against NT portability not for it.
Edited 2006-11-11 00:21
Thom, although basically I get your point, I feel there is some confusing contradiction here in what you said:
Look, all I’m saying is that the fact that Linux runs on 2348729 different platforms does NOT, I repeat does NOT, mean it is easily portable. NT runs on fewer platforms, but that does NOT, I repeat, does NOT mean it is any less portable than Linux. (…)
Theoretically, that’s correct I guess, because of
Portable != how many platforms does it run on. Portable = how easy it is to port something.
Now, like I explained, using the characteristics of the two kernel designs, theoretically, muK designs are more easily ported (=more portable) to other platforms than monolithic designs. Since NT follows a muK design a lot more than Linux does, the theoretical implication is that it should be more portable.
Now what I would like to know is, what is, technically, the reason for that? Say NT is really “hybrid”, given the fact that it mostly behaves like a monolithic kernel, what’s that little extra thing that does the trick?
The fact that Linux runs on more platforms than NT is irrelevant to that discussion (…)
The fact that right from the release, NT ran on ALPHA, i386, and PowerPC, only proves the portability of NT. (…) Linux, on the other hand, was not designed with portability in mind.
Here you contradict yourself. If Linux running on many different platforms proves nothing in the field of portability, then NT running on three different platforms proves nothing in the field of portability either.
If the only reason it somehow would prove something of the kind is that “Linux.. was not designed with portability in mind”, then I think we’re ignoring the fact that, back in 1991, Linux was a one man’s hobby project, whereas NT was not. At least, in the sense that it was developed by a corporate multi-billion dollar software and operating system giant. The Linux kernel has, obviously, evolved quite a bit, and evolution means change.
By way of analogy, it would be a little weird to say that since Linux was not designed to conquer the servers and desktops of millions, it would somehow be less capable of doing so (technically) than NT.
Now what I would like to know is, what is, technically, the reason for that? Say NT is really “hybrid”, given the fact that it mostly behaves like a monolithic kernel, what’s that little extra thing that does the trick?
For this explanation to work, let’s assume that we have kernel A and kernel B. Kernel A is strictly monolithic, while kernel B is a pure microkernel.
Now, a muK is designed in such a way that each kernel task, process, or whatever you want to call it (various drivers, memory management, you name it) is a separate module, one which can even be injected into a running kernel, or even replaced in a running microkernel. The actual microkernel itself, the bit of code (i.e. MINIX3: 4000 lines of code) running in kernelspace performing the task of (mostly) IPC, is small, and easily ported (because it is so small).
Now, kernel B consists of several small modules which can be ported module-by-module, without ever affecting the modules that have already been ported. Meaning, once you get the actual muK ported and running, any new module you inject, will not bring the running kernel down, nor will it affect the other modules. In other words, if you make a mistake in porting (resulting in a bug) it will not affect all the other modules or the actual muK. Any changes you make will only affect the module you made the changes in, not the other modules or the actual muK.
Now, compare that to the pure monolithic kernel A. In that kernel, any line of code can affect any other line of code. A bug in a networking driver can bring down the entire kernel, severely hindering the porting process. Any changes you make in a monolithic kernel might affect any other part of the kernel. This is not the case in kernel B, because there the bug in the networking driver will only bring down that specific driver– not anything else. You can focus completely on the part containing the bug.
You see how the muK design would make porting easier?
Now, again, let me stress that this is all theoretical, and we all know theory can differ greatly from practice. Also, for this example to work, I had to make the kernels either strict monolithic or muK, but kernels like that don’t really exist. Linux has loadable modules for example.
BUT, Linux is a monolithic kernel with muK-like capabilities; NT, on the other hand, is essentially a microkernel with most of its parts running in kernelspace, BUT they are still separate parts, just like in a muK.
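To make the fault-isolation argument concrete, a toy simulation (Python, purely illustrative — real kernels obviously don’t work like this): bring up the same set of subsystems under both disciplines, with a deliberate bug planted in the network driver.

```python
# Toy simulation of the porting argument: in the modular design a buggy
# driver fails alone; in the strictly monolithic one, any fault aborts
# the whole bring-up.

def buggy_net():
    raise RuntimeError("bad DMA setup")   # the bug we hit mid-port

def port_modular(modules):
    status = {}
    for name, init in modules.items():
        try:
            init()
            status[name] = "up"
        except Exception:
            status[name] = "failed"       # only this module is down
    return status

def port_monolithic(modules):
    for init in modules.values():
        init()                            # one bug stops everything
    return "up"

modules = {"mm": lambda: None, "net": buggy_net, "fs": lambda: None}
print(port_modular(modules))   # {'mm': 'up', 'net': 'failed', 'fs': 'up'}
```

In the modular bring-up you can keep working on everything else while you debug the one failed driver; in the monolithic one the same bug halts the whole port.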
Here you contradict yourself. If Linux running on many different platforms proves nothing in the field of portability, then NT running on three different platforms proves nothing in the field of portability either.
Yes, granted, I put that unclearly, sorry. What I meant to say was that NT runs on many systems, and while this does not prove it is more portable than anything else, it does prove that NT is at least portable– something some people seem to forget.
Interesting; I guess I’ll really have to study kernel design a little more in order to appreciate all this.
…this does not prove it is more portable than anything else, it does prove that NT is at least portable– something some people seem to forget.
Obviously, you’re right; no matter how much some people may hate Microsoft (and I might be one of those), one shouldn’t forget that after all, NT, and Vista with it, is just one of the various operating system kernels out there.
We started out this discussion (sorry Paul T., we lost you there ) with the “hybrid kernel” idea, and another question was whether a certain type kernel is more portable than another one. Merely theoretical questions for most of us, and people, let’s not turn this into a “micturation” contest. :p
//Any changes you make in a monolithic kernel might affect any other part of the kernel. This is not the case in kernel B, because there the bug in the networking driver will only bring down that specific driver– not anything else. You can focus completely on the part containing the bug.
You see how the muK design would make porting easier?
Now, again, let me stress that this is all theoretical, and we all know theory can differ greatly from practice. Also, for this example to work, I had to make the kernels either strict monolithic or muK, but kernels like that don’t really exist. Linux has loadable modules for example.
BUT, Linux is a monolithic kernel with muK-like capabilities; NT, on the other hand, is essentially a microkernel with most of its parts running in kernelspace, BUT, they are still separate parts, just like in a muK.//
//What I meant to say was that NT runs on many systems, and while this does not prove it is more portable than anything else, it does prove that NT is at least portable– something some people seem to forget.//
This is nice in theory, but it ignores the actual task of porting an OS.
When porting an OS, you have on the one hand the part of the code which is hardware independent, and then you have the drivers. You also have at least one “reference implementation”, which is how the code is supposed to work. This is the same whether you are porting NT or porting Linux, microkernel architecture notwithstanding.
To port the OS, basically you have to re-write the drivers, and you have to recompile the hardware-independent bits.
This task is easier if you have lots of examples of re-written drivers, and lots of information on how to stitch in the drivers to the OS. The more open the better.
Linux is significantly easier to port than NT, given the actual process of porting an OS, and given the number of times it has already been ported, so there are many previous examples of porting available to study.
Theoretical arguments about kernel organisation don’t really apply anyway, because neither NT nor Linux is strictly conformant to the different theoretical models.
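The split described above can be sketched in C. This is a hedged illustration with invented names (`arch_enable_irqs`, `arch_read_timer`, `sched_init` are not real NT or Linux symbols): porting means rewriting only the arch-specific layer and the drivers, while the hardware-independent core is simply recompiled.

```c
#include <stdint.h>

/* Arch-specific layer: this is the part a port rewrites. */
static uint64_t timer_ticks;

static void arch_enable_irqs(void)
{
    /* on x86 this might be "sti", on ARM "msr daifclr, #2";
     * stubbed out here so the sketch builds with a host compiler */
}

static uint64_t arch_read_timer(void)
{
    return ++timer_ticks;       /* stand-in for a hardware counter */
}

/* Hardware-independent core: recompiled unchanged for every port. */
static int sched_init(void)
{
    arch_enable_irqs();
    return arch_read_timer() > 0 ? 0 : -1;
}
```

The practical argument in the comment above then reduces to how big the arch-specific layer is, how well documented its interface is, and how many prior ports exist to crib from.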
Whether something is portable or not has nothing to do with how many platforms it has actually been ported to.
But when it’s actively ported to lots of platforms and works on them in the latest version available, we know it’s portable.
When it’s not actively ported like that, we sure enough can’t assert it’s portable, and can’t disprove people who say it isn’t portable.
It’s even worse when the company puts out another product for other platforms.
Linux is ported to more platforms not because it is necessarily more portable, but just because you don’t have to have a company’s approval to do so.
But Linux works on more platforms because it is more portable.
Saying MS has no incentive to make NT for more platforms is wrong, as they made WinCE for some architectures, not WinNT: why?
NT is a hybrid kernel, and therefore it is probably even *more* portable than the monolithic architecture of Linux.
Which is BS, as Linux has loadable modules and heavily used, if limited, message-passing capabilities, so it is hybrid too.
Anyway, that Windows NT is probably more portable doesn’t change the fact that it is not ported to as many architectures, is not as efficient as Linux on any of the architectures it runs on, supported 64 bits after Linux did, and does not scale as well on SMP/SMT.
Given only that, I understand all you can do is brag about the probability of WinNT being more portable than Linux: there are not so many unprovable things left to brag about.
//NT is a hybrid kernel, and therefore it is probably even *more* portable than the monolithic architecture of Linux.//
Whether something is portable or not has almost nothing to do with being a hybrid or monolithic kernel, and more to do with (1) there being a cross-compiler, and (2) how much of the code you have to change to suit the new target architecture.
Those two things are immeasurably more important criteria in code portability.
//Whether something is portable or not has nothing to do with to how many platforms it is actually ported//
Are you sure about that?
Why isn’t the seldom-ported code ported as often as the other?
I know what you are driving at, but “nothing to do with it” is far too strong a claim here.
Actually, NetBSD runs on more architectures than Linux, but it is close.
//Actually, NetBSD runs on more architectures than Linux, but it is close//
I don’t know about architectures, but Linux runs on way more different machines … including something like 75% of the top 500 supercomputers (this is relevant because most supercomputers are “one-off” designs, or exist in only very low numbers of machines).
Let me just say….
WTF
???
You do know that NT has also been running on Alpha for a long time, right? In fact, even earlier than Linux: NT 3.1 was released in 1993. NT also runs on SPARC, MIPS, Clipper, and probably some more I’m forgetting at the moment.
The difference being, as always, that when it’s done in Linux, it’s done right, not on Windows. Which is shown by the fact that the NT you talk about on SPARC was 32-bit only, while it was 64-bit with Linux.
Forget the myth. NT is just as portable as Linux.
Surely not. Thinking that an OS that struggled to reach a stable result on 32-bit SPARC is as portable as another that arrived just as featureful on 64-bit SPARC is strange at best. Claiming that NT is as portable as Linux is even more bizarre, especially when it was never 64-bit, and doesn’t work on embedded systems (like Linux does).
It’s just that there is no financial interest in running NT on uncommon platforms. However, I am sure that if a big customer wanted a SPARC build of Windows Server 2003, they’d get it.
And if we want Windows NT on a phone? You know Linux does this, don’t you?
AFAIK you have to change to WinCE for embedded. So much for the so-portable Windows NT …
Even a portable architecture is worthless if it can’t be ported by those who need it.
It is true, you can get 64-bit Linux. When installing Etch on my new amd64 machine, I did. But I regret it. You have all kinds of problems with libraries, and the other day I really wanted to see the Flash tutorials on the dabodev site: well, no Flash. I also had problems with Mandriva 2006 64-bit on another machine, and ended up putting in 32-bit regardless. Problem is, not all the apps you want are fully compatible. It’s ok if all you want to do is Office and stuff (though Office behaves a bit strangely too; Base and Forms are either unstable or one or two features simply don’t work).
Just one person’s experience, but I feel it’s not quite ready for prime time. Been a while, too.
Flash is a known problem with 64-bit Linux, and Flash is a known problem in Linux generally. Its source is not really available, though there is lots of information around from the person responsible for porting it.
Personally I would check out the archives at Gnash (http://www.gnu.org/software/gnash/); lots of work going on.
Your post indicates you have exaggerated your problems.
The reality is that regardless of platform, the 64-bit transition will be tricky, as they still sell 32-bit CPUs today.
I just thought to mention, although this is off-topic, that as far as I know you can use CrossOver Office under 64-bit Linux, install the Windows version of Flash Player in it, and then enjoy Flash content in your browser of choice. Also, the 64-bit version of Flash 9 for Linux is supposed to arrive some time next year. Gnash is a good try, but I haven’t gotten it working at all yet. We’ll see how long it takes for it to actually start working.
It’s Thurrott, and not Thurrot.
Nah it’s Parrot.
Now now, be nice
An ex-Parrot!
it’s not, it’s just sleeping
This is so unimportant. Who could even bother to remember that?
Well, names _are_ important. They are a part of your identity
Besides that I didn’t remember anything. I discovered it while writing my own post.
It’s Thurrott, and not Thurrot.
LALALALALALALALALALA I’m not hearing you LALALALALALALALA.
Ok, it’s fixed.
*LOL*
Please don’t do that again. It makes my stomach hurt
Where’s the review?
Windows Ultimate Extras, of course, which isn’t included in Home Basic. Doesn’t that sound logical?
What about SMP ?
> What about SMP ?
I guess the Home edition will barely allow a single CPU, maybe a dual core
The Home edition only allows 1 CPU, but unlimited cores, according to the article.
Windows NT has supported SMP since 1994. Paul Thurrott evidently did not find it worth mentioning as a “modern OS feature”.
Unisys OS2200 supported multiple processors back in the mid-1960s, when the OS was called OS1100 and the boxes it ran on were branded UNIVAC.
It *isn’t* a modern OS feature.
> “[…] many of Vista’s best features appeared first on Mac OS X, sometimes years ago.”
You mean those features that appeared 10 years ago in BeOS, AmigaOS, and others?
I despise software patents; still, I wish Be, Inc. had patented those things long ago. They would still be in business and would be getting millions in royalties from MS and Apple.
nah they would be a part of microsoft after a hostile takeover
This is only the first (and second) part of an eight-part review. The first part is only intended as an introduction to the following seven parts. So, it isn’t much of a review. There is a link to the second part at the bottom of the page. I guess that the other six parts haven’t been completed yet. For those expecting a complete review (like me), we’ll have to practice some patience.
Re: the first part. I was quite surprised by his implications/claims that Microsoft was the only OS supplier that did anything truly significant over the past few years. I was more surprised by some of the things he claimed to be significant. It was even more difficult to believe after seeing the comparisons he was making whether directly or indirectly.
Re: the second part. I have not seen any decent table of features comparing the different versions prior to this. Granted, I haven’t really looked, but it is much easier to see what differences exist between versions with this kind of format as opposed to a rant about how terribly inadequate Vista Basic will be.
In short, much of the “introduction” was quite unrealistic about Windows in comparison to alternative systems, and the second part was nicely informative about Vista’s different offerings. If I didn’t know any better I’d say that Thurrott lived in a Microsoft box seeing the world through Microsoft-colored glasses.
I love the spin they’ve put on what is essentially a 3 year development overrun. Trying to make it sound like a vintage .
But really, Microsoft has been playing catch-up. An MS exec admitted that they only announced Longhorn because Apple had released Tiger. No one in Microsoft really expected Longhorn/Vista to come out before now; they just had to string businesses along for long enough to build something that could rival the main features of Tiger.
What was graphics acceleration before? It’s the same thing with a different set of drawing primitives.
Ok so they release Vista, big deal. We all have to upgrade to it at some point, as MS is the de facto standard. Everyone uses it because everyone uses it.
Is MS changing their development cycles to stop this from happening again? Or are they going to just repeat this and make a Vista/Longhorn 2 that will also never be fully released?
No matter how you slice it, Vista had some very major cutbacks from what they were promising. Maybe they should have done what Apple has done and just slowly release features in a new version every year. Mr Thurrott has put Apple in a bad light for only releasing minor upgrades to their OS instead of delivering a revolution like Vista was SUPPOSED to be. Instead we get an upgrade equal to 2-3 years of updates instead of a revolution.
Personally I think that MS was overreaching with their aspirations. They need to release minor updates all the time rather than one huge update; like the frog in water, it’s better to slowly heat the frog rather than throw it in boiling water.
For many people, Vista may just be too much of a change. I mean, why not jump to another platform (Linux/Mac) if moving to Vista will be such a large change?
//Personally I think that MS was overreaching with their aspirations. They need to release minor updates all the time rather than one huge update; like the frog in water, it’s better to slowly heat the frog rather than throw it in boiling water.//
MS does release minor updates. They are called service packs. MS does not charge for service packs.
MS cannot “release minor updates all the time” because then MS cannot charge you for buying Windows again. If MS holds off for a period, then releases “one huge update” amidst a lot of hype and gives it a new name … then MS can charge everybody all over again for the software.
your service pack analogy is old, tired and fallacious. until xp sp2, microsoft had never released a service pack that added any new, exciting or must-have features to windows, under the hood or otherwise.
oh unless you can call windows 3.1 a service pack to windows 3.0. of course you had to buy that one too…
Mr Thurrott has put Apple in a bad light for only releasing minor upgrades to their OS instead of delivering a revolution like Vista was SUPPOSED to be. Instead we get an upgrade equal to 2-3 years of updates instead of a revolution.
I wouldn’t characterize Apple’s OS X upgrades as minor upgrades. They usually include major underlying changes, such as new APIs, changes to the kernel, a new window system, a new concurrency model, new Objective-C language features, etc. Many earlier versions would also break application compatibility.
See this article to get an idea of the kind of changes you would see in Thurrott’s “minor upgrade”: http://arstechnica.com/reviews/os/macosx-10.4.ars/
“They need to release minor updates all the time rather than one huge update; like the frog in water, it’s better to slowly heat the frog rather than throw it in boiling water.”
That’s, um, not the point of the frog / water analogy at all.
I remember reading here that NT ran on PowerPC, MIPS, Alpha, and x86 in released versions.
Apparently the i860 and SPARC versions were never released.
I saw a 64-bit version of Windows 2000 advanced server running 64-bit sql server at pc expo at the javits center in 1999. It never got released either.
I think Win2K was the first 64-bit Windows, based on what I saw.
here is a bit of history on the subject
http://www.theregister.co.uk/1999/08/23/compaq_alpha_cuts_pull_rug/
Vista supports more than 128GB of RAM in Business and above versions, but only a maximum of 2 processors.
So Vista is not for 2+ processor workstations, which might actually have up to that amount of RAM?
Do all 2+ processor systems use Windows Server?
Check yourself if you don’t believe me. Literally, and seriously, the summary:
“Ummm.”, “Yer olde pops here!”, “Here is a picture of my childern”, followed by a soup of about a thousand permutations of words “Vista, Windows, XP, Longhorn, long wait, totally worth it”
Was the author drunk while writing the article?