And this is part two of the story: Microsoft has just confirmed that the next version of Windows NT (we'll refer to it as NT for clarity's sake) will be available for ARM – or more specifically, for SoCs from NVIDIA, Qualcomm, and Texas Instruments. Also announced today at CES is Microsoft Office for ARM. Both Windows NT and Microsoft Office were shown running on ARM during a press conference at CES in Las Vegas.
This is pretty big news, since it's the first time since 29 July 1996, when Windows NT 4.0 was released, that a released version of Windows NT will run on anything other than x86 chips or Intel's Itanium. This means that for the first time in 15 years, Microsoft itself is actively trying to break up the sometimes dreaded 'WinTel' alliance.
“With today’s announcement, we’re showing the flexibility and resiliency of Windows through the power of software and a commitment to world-class engineering. We continue to evolve Windows to deliver the functionality customers demand across the widest variety of hardware platforms and form factors,” said Steven Sinofsky, president of the Windows and Windows Live Division.
ARM CEO Warren East is full of glee, obviously. “We are excited by today’s announcement, which marks a significant milestone for ARM and the ARM Partnership, and we look forward to working with Microsoft on the next generation of Windows,” he said, “Windows combined with the scalability of the low-power ARM architecture, the market expertise of ARM silicon partners and the extensive SoC talent within the broad ARM ecosystem will enable innovative platforms to realize the future of computing, ultimately creating new market opportunities and delivering compelling products to consumers.”
Sadly, there’s not a lot of meat here just yet – what we are all interested in, of course, is how the ARM version of Windows NT will deal with the boatload of x86 software available for the Windows NT ecosystem. In a Q&A, Sinofsky said that more news about the underlying components of the next version of Windows NT will be made available over the coming months, much in the same way the company handled the Windows 7 release.
I’m leaving you with this nugget that really made my evil smirk come out to play: several companies delivered quotes to the press release – including AMD. Guess who wasn’t on the list?
I can't see a reference to Windows NT in the press release.
Maybe I’m just missing it?
It says the next release of Windows will support ARM and system-on-a-chip designs.
Are you just assuming this is Windows NT?
Windows 8 will be a new OS.
Windows 8, like Windows 7, and every version of Windows since XP, is Windows NT. Do you think that MS rewrites Windows every time they release a new version?
You and the people who mod me down aren't very up to speed, are you?
Win 8 will have an overhauled filesystem.
http://www.geek.com/articles/news/windows-8-coming-in-2012-20090422…
And there are a lot of other references I'm not even going to link.
Do you think I just make my comments up?
Windows 8 will just be another evolution of Windows 7, as it was an evolution of Windows Vista, which is just an evolution of Windows XP, which is just the combination of the Windows 9x GUI-look on top of the Windows NT core, evolved from Windows 2K, and, originally, from Windows NT 4.0, and 3.5 before it.
They aren’t going to chuck all the code from Windows 7, start from scratch, and create an entirely new Windows OS. That’s like saying that Ubuntu 11.04 is completely new, and not an evolution of Debian.
Think Midori. It is a research OS being brewed in Microsoft's labs and rumored to be a successor to Windows 7 (http://www.zdnet.com/blog/microsoft/goodbye-xp-hello-midori/1466). Nothing can be substantiated at the moment. It's all speculation…
A new file system does not mean a completely different operating system.
With any operating system there is always stuff that changes, but a lot remains the same. Determining when to call something a completely new operating system versus just a new version of an operating system is sometimes difficult, but I don't think Windows 8 will have enough differences to qualify as "completely new". Now, if it's really just Midori (a kernel written in C#) with a compatibility API for legacy Win32 stuff, that would be a "completely new" operating system. And there has been some speculation that something of that calibre is, or has been, considered.
That has to be the world's crappiest article; in one paragraph the author says: "Windows 8 will see a radical rehaul of the file system", then in the next the author says: "One job posting looks for someone to help program the next generation of Windows' Distributed File System Replication storage technology, with 'new critical features… including cluster support and support for one way replication' and performance improvements a big plus."
How on earth does the author leap from a job advertisement about file-system clustering to the conclusion that there is a 'radical overhaul' of the file system coming in Windows 8? Pie-in-the-sky circle jerks may be fun, but I'd sooner read an article with real substance instead of pie-in-the-sky promises.
Windows 8 will not be a revolution; it will be an evolution. The foundations are there – it is a matter of Microsoft making the changes to take advantage of them.
Agreed. Evolution. But it should be good evolution. No, I don't have proof of that, but it just makes good business sense at this point… use what you have that is obviously working well for them at the moment. They can continue to work on "Midori" or whatever else might be in the oven in the background, using fewer resources, until the time comes when they really NEED to put it out.
I'd say Midori is more a 'playground' for the future, once there is a movement away from Win32 – but that won't happen for at least another 5-10 years at the earliest. There are lots of projects that never really turn into complete end products – Microsoft has many projects on the go where the end result is not necessarily a product in itself, but what they learnt during the project gets put into existing products.
Here is another cool article over at Neowin:
http://www.neowin.net/news/what-jupiter-means-for-windows-8
Now that is awesome; I hope that when they do open their 'application store' they put restrictions on it, such as having to use the latest Visual Studio and the latest APIs – forcing developers to upgrade their code so that applications look gorgeous on the desktop rather than being the epitome of fugly, as many of today's look.
The filesystem is not related to the kernel, or to any other part of Windows; it is just a subsystem. NT can work with NTFS or FAT, and if anybody wanted to, it could use ext3, XFS, really anything.
Does Linux change to some other OS depending on what filesystem it is currently booting with? No it does not. FreeBSD does not change to Solaris just because you are using ZFS.
To be correct, yes, Linux does sometimes have to be changed depending on what file-system it is booting from. For example, the Linux Security Module SELinux requires particular features from the file-system to operate, which gives you an OS that behaves differently. Booting a real-time Linux also places particular requirements on which file-systems you can and cannot use.
Yes, you stay in the same family, but distributions cannot always behave the same on different file-systems – to the point that, in some cases, they cannot be installed with particular file-systems as the root file-system.
Also, 'subsystem' has a very particular meaning when talking about NT, and it has nothing to do with file-systems. A file-system in NT is an "Installable File System": http://en.wikipedia.org/wiki/Installable_File_System
Linux is simpler to boot from an alien filesystem than Windows is.
Linux has a kernel image and an initrd that are loaded by the bootloader; these do not have to be on the filesystem that the OS will boot into.
Let's move over to NT. The bootloader of NT-style OSes reads the Registry to work out what drivers to load with the kernel. The issue here is that the bootloader must be able to read the filesystem the OS is on. So Windows boot loaders carry a file-system driver of their own, independent of the Installable File System.
So it is not true that anybody can use any filesystem with NT. They would have to be able to rewrite the bootloader as well as write an IFS, whereas with Linux a person only has to write a filesystem driver for Linux.
Now the nasty part: with Windows, replacing the bootloader means you could end up on the wrong side of an update. So really, MS is fully in charge of what file-systems you can boot Windows from.
The claim that the filesystem is not related to the kernel or any other part of Windows is invalid. It's related to the bootloader, which loads the kernel and also loads the drivers the OS needs to boot.
Linux can claim that the file-system is not related to the kernel, since a file-system driver can be bundled into the initrd and loaded by any Linux-supporting boot loader, letting you start up from any file-system you like. But then you have to remember the other points above: not all Linux distributions will operate after that, due to the limitations of the file-system drivers.
Yes, Linux distributions can be broken down into groups by security design. So Linux is not a uniform system; talking about it as one item is foolishness.
FreeBSD is one distribution from the BSD class of OSes. Yes, there is more than one in the BSD class, and some of those OSes have file-system limitations. Solaris, again, is a distribution from the Solaris class of OSes, but at this stage the Solaris class has not branched into having file-system support differences. Comparing a single distribution with a whole family is really a mistake.
There is nothing stopping MS from supporting other file systems. If MS put out a version of Windows 7 that used ext3, IT WOULD STILL BE WINDOWS. All the arguments in the world would still not make it something other than Windows.
You can continue to argue, but that doesn't make you right. And your arguments about Linux and BSD do not make sense, because regardless of what filesystem they are using (and they can use several at the same time), THEY ARE STILL LINUX AND BSD.
Actually, I’ve used EXT3 on Windows, and it worked well.
where’d you find such a beast?
I was using Ext2IFS – it worked with Ext2 and Ext3 (Ext3 is just Ext2 with journaling). There are other solutions as well.
I once needed something equivalent and used
http://www.ext2fsd.com/
It’s a GPL2 driver, so you can even study it, modify it, etc.
OMG! You made your own version of Windows? What did you call it?
Lindows
Windows lets you write and add file system drivers – there are projects for UFS and ext2/3 around, at least.
Did you just say that the filesystem has nothing to do with the kernel?
When I think “kernel” I think process scheduling, memory management, etc. I’m sure that is all he meant by that. I/O drivers and file systems can vary as long as the kernel can manage them, as long as they meet criteria.
Don’t know how it’s done in the Windows NT kernel, but in most other modern kernels, the drivers used to read ExtN, NTFS, FAT, etc… are removable modules of the kernel, not a core part of it. So it’s not too far-fetched to say that the kernel is not linked to a specific file system, as long as you have linked the drivers for all popular FSs in it.
Now if you’re talking about the virtual file system (VFS), that is the hierarchical directory-based organization of files which applications and users see, that’s another story. It’s a core part of most monolithic kernels.
I admit that the distinction is subtle.
It still happens in Kernel space in NT, Linux, BSD…
If it’s not a microkernel, then it’s part of the kernel.
The distinction is technical, but should be made.
From the standpoint of filesystems: sure, you can have third-party filesystems that run as drivers, and if MS added support for ext3 to Windows, for example, it would still be the NT kernel, regardless of filesystem support.
since Windows 2000…
Windows Me was released after Windows 2000.
When Windows 2000 came out, it was the "business OS". WinME, which is not NT, had also just come out.
There was a Windows 2000 Home version too. Windows 2000 was the first version designed to cater to both business and home users; somehow they just didn't think home users were quite ready, or maybe multiple teams were competing inside Microsoft. It is still a mystery today WTF was up with the ME thing.
There certainly was not a Windows 2000 Home version. There was Windows 2000 Server, and Windows 2000 Pro. No other versions.
There was no Home version, as far as I recall. Here’s what Wikipedia says:
Four editions of Windows 2000 were released: Professional, Server, Advanced Server, and Datacenter Server. Additionally, Microsoft sold Windows 2000 Advanced Server Limited Edition and Windows 2000 Datacenter Server Limited Edition…
Carewolf mentioned…
Nice to know I’m not the only one who remembers Windows Neptune!
http://en.wikipedia.org/wiki/Windows_Neptune
–bornagainpenguin
But Neptune was never released. Therefore it never happened.
It is without doubt very much NT. The likelihood of Microsoft completely rewriting every single subsystem to such an extent that they are no longer the least bit compatible is very low, particularly considering the portability of the NT kernel and Windows API.
But of course… Microsoft may instead have ported the Windows API to Haiku, OS/2 or even the Linux kernel. I doubt it, though.
You read Thom’s summary wrong – he was the one clarifying this – to make sure nobody was confusing it with Windows CE/Phone/Mobile/Pocket (whatever they call it now).
Thus, Windows 8 for ARM will not be based on NT the same way that Apple’s iOS was not based on OS X.
No, he reads the summary like it's written. Thom also claims the demonstrated Windows version has already been released. The truth is Microsoft demoed a version of Windows that Thom himself refers to as 'Windows NT', and that particular version has not been released yet.
Therefore we still have to wait some time before we get a repeat of 1996's multi-platform Windows release.
And, there’s a (potentially doctored? it looks a bit odd) screenshot of “Windows 6.2.7867” – which fits right into the Win7 lineage, as well as a screenshot of this running Office 2010 on ARM.
This is NT, no doubt about it. (As opposed to CE, 9x, or anything like that.)
If this turns out to be successful, this will be the biggest binary break in the history of mankind. Microsoft delayed this for a very good reason, and that's not big/little endian ;-).
It means there is no longer value in all the "legacy" crap that runs only on Windows (shareware, etc.), and it means there will be a bunch of Windows computers running around that are immune to computer viruses (for a while).
Viruses (in the strictest sense of the term) – perhaps; that depends on how MS handles x86 emulation (if at all).
Malware – definitely not. So long as shell scripting and other such interpreted code can still execute, malware can still be written. In fact, with Office being ported to ARM, you instantly open up the problem that the same malicious VBA macros that run on x86 Office will work on ARM NT too. The same would be true for WSH, PowerShell and even DHTML et al.
Also, if Windows NT and MS Office can both be recompiled for ARM, so too can any viruses or other malware.
Just about the only thing that needs to be retained (in order for Windows malware to still work on ARM) is that the OS API is still Windows. This means that the same source code can still be re-compiled for a different machine architecture.
That is probably exactly what Microsoft themselves did to make MS Office for ARM.
In the short term Windows on ARM won’t have any malware, but if Windows on ARM reaches significant usage numbers, Windows malware for ARM will very soon also start to appear.
The essential features for malware are: (1) a consistent API (so that source code can be recompiled), and (2) trade-secret source code with binary-only executables that are routinely distributed and installed by end users.
Windows for ARM will faithfully retain those two essential elements from Windows for x86.
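To make the recompilation point concrete, here is a minimal, hypothetical Win32 C sketch; nothing in it is architecture-specific, so the identical source builds for x86, x64 or ARM given a compiler that targets it and the same Windows headers:

/* Hypothetical sketch: plain Win32 code with no architecture-specific
   assumptions. Recompiling for ARM needs only an ARM-targeting compiler
   and the same Windows headers -- the source itself is unchanged. */
#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    /* The Win32 API call is identical on every architecture Windows
       supports; only the generated machine code differs. */
    MessageBoxA(NULL, "Same source, any architecture", "Win32", MB_OK);
    return 0;
}

Whether that source is a useful application or a trojan makes no difference to the compiler, which is the whole point being made above.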
1) This is also necessary for third parties to write good software that can run on multiple versions of the same operating system on multiple platforms.
Which isn’t really a problem if people download the closed source executables from a reputable source i.e. the distributor.
If you downloaded a shell script for Unix/Linux from a random site without understanding how it worked and just ran it, it would cause havoc on your system as well.
Ergo, the problem is user education, not the fact that it is closed source. Funnily enough, as an educated user I have no problems with viruses and malware, even though I use both open and closed source applications.
But you will continue to push your anti-Windows/anti-closed-source agenda at every opportunity.
There is indeed a great deal of closed-source software, which is distributed as binary executables only, which is perfectly good and functional software.
The problem is that almost all malware is also distributed as closed-source binary executables only, and that (being closed source) there is no way that anyone other than the creators of any given piece of such software can tell the difference. No amount of user education will change the fact that no-one (other than the authors of the software) can tell if a given closed-source binary executable does or does not contain new malware.
This fact is only relevant to this topic because someone stated that Windows for ARM would initially be free of malware, which is true, but my point is that there is nothing about ARM that would mean this remains true for long.
It is "made for Windows" and "distributed as closed-source binary executables" that characterises 99% of existing malware. x86/x86_64 versus ARM really doesn't come into the picture. Just as Microsoft can fairly readily make a version of MS Office for ARM, so malware authors can rapidly make an ARM version of their trojan malware in a similar fashion. It merely has to become worth their while.
BTW … my agenda is merely to point out facts such as these to everybody, so they can make good decisions for themselves regarding which software they choose to run on their hardware. I make absolutely no apology for this agenda.
What exactly is your agenda in trying to disparage mine?
Actually, it occurs to me that if Windows on ARM does gain appreciable market share, such that it does become worthwhile for malware authors to port their Windows malware (which is almost all malware) to ARM, then existing virus databases will be useless. Any re-compiled-for-ARM malware will have a different binary “signature” than the x86/x86_64 malware does.
This will open up the beginning of a "golden age" for Windows-for-ARM malware, until some lengthy time later when the antivirus and anti-malware scanner authors can build up a similar signature database for the new for-Windows-for-ARM malware binaries.
At least a part of malware can be blocked without knowing how a program works internally, by using a capability-based security model. If the binary blob is sandboxed, it can only do the amount of harm it has been allowed to do.
Most desktop applications, as an example, don’t need full access to the user’s home folder. Really, they don’t. Most of the time, they use this access to open either private config files, or user-designated files. Thus, if we only allow desktop apps to access their config files and user-designated files, we just got rid of that part of malware which used this universal access to the user’s home folder for privacy violation or silently deleting and corrupting files without the user knowing.
It's exactly the same tactic as preventing forkbombing by not allowing a process to fork an unlimited number of times by default. Seriously, what kind of non-system software would require that with honest intentions?
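As a purely illustrative sketch of that fork-limit idea on a POSIX system (the numbers are made up), the cap is essentially a one-liner:

/* Sketch: cap how many processes this user may have, so a runaway
   fork() loop gets an error instead of exhausting the machine.
   Assumes a POSIX system with RLIMIT_NPROC (Linux, the BSDs). */
#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    struct rlimit rl = { .rlim_cur = 256, .rlim_max = 256 };

    if (setrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    /* From here on, fork() fails once this user owns 256 processes --
       plenty for a desktop session, fatal for a forkbomb. */
    return 0;
}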
This doesn’t block the “please enter your facebook password in the form below” kind of malware, though… But at least, the user is conscious of what he’s doing now. Only then may user education work.
And that is why you get the software from the original author, and guess what … if you educate someone to always get the software from the original author … mmmmm.
Furthermore, if someone is so uneducated as to how to avoid threats, how will it being open source help? A malware author can just offer an "alternative download source" and stick a keylogger in there, for example … having the source won't help, because the uneducated simply won't know any different.
Also, have you not heard of a checksum? They are used on Unix/Linux binary packages as well, and can be used on any file to validate its integrity.
For example, I remember Windows XP Service Pack 1 having a checksum in the installer properties … if it didn't match what Microsoft published, you had a duff/dodgy download.
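For what it's worth, verifying such a checksum is easy to automate; here is a rough, hypothetical C sketch using OpenSSL's SHA-256 routines (the file name and expected digest are whatever the publisher lists, and it assumes libcrypto is installed):

/* Sketch: compute a file's SHA-256 and compare it with the value the
   publisher lists. Build with: cc verify.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <expected-sha256-hex>\n", argv[0]);
        return 2;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    unsigned char buf[8192];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA256_Update(&ctx, buf, n);
    fclose(f);

    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256_Final(digest, &ctx);

    /* Render the digest as lowercase hex and compare. */
    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", digest[i]);

    if (strcmp(hex, argv[2]) == 0) {
        puts("checksum OK");
        return 0;
    }
    puts("checksum MISMATCH - do not install this file");
    return 1;
}

Of course, as pointed out below, this only proves the download matches what the publisher intended to ship, not that what they shipped is benign.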
The thing is, your "facts" aren't facts. They are opinions from someone who, IMO, doesn't really have any practical experience of developing or deploying software.
Unless you work directly in the software industry as a developer or a manager for a development team you simply don’t understand the landscape and the issues that developers face.
Also you are biased in thinking that open sourcing everything is a cure to all software problems. This IMO couldn’t be further from the truth.
Because I think you are biased and do not present the facts fairly.
The point is that if the original author is a malware author, then even going to the trouble of getting software directly from the original author won’t prevent it from containing malware.
It is a matter of adopting a self-imposed policy. Linux distributions all maintain repositories of source code, and parallel repositories of binary executables compiled from that source code. Anyone at all can download the source code and verify that compiling it produces the corresponding binary executable. This means that people who did not write the code can nevertheless see what is in the code, they can compile it for themselves to verify its integrity, and they are users of that code on their systems.
Any user adopting a self-imposed policy of only installing software directly from such repositories is guaranteed to never get a malware infection on his/her system. There is a very long history of vast amounts of open source software delivered via this means which proves this claim.
Yes, it will make a difference. Not every single user needs to know how source code works; just one user needs to download the source code, discover the keylogger within it, and "blow the whistle" on that code. It can then be added to a blacklist for all users. It only takes one person out of millions of users to spot the malware.
Certainly. If you use a checksum to verify that you have downloaded a closed source binary package (even directly from the original author) correctly, and the original author did deliberately include malware within that software, then all you have managed to do is confirm that you have a correct copy of the malware-containing package.
Fine. I don’t claim that this is not the case, and I do acknowledge that there is a great deal of perfectly legitimate closed-source non-malware software out there for Windows. Windows XP service pack 1 would be one such piece of software, no argument from me. So?
Oh yes they are. Each and every one of the claims I have made in this discussion is a verifiable fact.
I am a project engineer by profession, leading projects which develop and deploy bespoke software. I have many years of experience. We supply source code to our customers.
OK, so? I do happen to have many years of engineering experience at leading development teams.
You are of course as entitled to your opinion as I am to mine.
BTW, I have made no claim that “open sourcing everything is a cure to all software problems”. That is your strawman argument. My claim here is only that users who stick to a self-imposed policy of only installing open source software will be guaranteed that their system never is compromised by malware. If you are going to argue against what I am saying, then this is what you must argue against. Friendly advice … don’t make up something I did not say, and argue against that … doing that will get you nowhere.
And I think you are even more biased, you have no idea how to assess technical matters, and you simply do not heed what experienced people are telling you. How does this help the actual discussion?
And you need to be educated, trained, whatever you want to call it, to do that. You won't do it if you don't understand that you need to do it.
Stop making circular arguments.
Actually, you don’t need to be trained at all.
For example, on an older Ubuntu system, there is an application right on the topmost level menu called “Add/remove applications”.
Click on that. It will present you with a searchable list of available applications organised into categories, with those that are already installed marked with a tick in an adjacent box.
Click un-ticked boxes to select new applications to be installed, and un-tick existing ticked applications to select them to be removed. Click apply.
This installs applications from the Ubuntu repositories, or removes them from the local machine.
Here is a picture so that you might get the idea:
https://help.ubuntu.com/community/InstallingSoftware?action=AttachFi…
Recently, this has been replaced in Ubuntu (not Kubuntu) with the Ubuntu Software Centre:
http://en.wikipedia.org/wiki/Ubuntu_Software_Center
http://www.ubuntu.com/desktop/features
“Get all the software you need
The Ubuntu Software Centre gives you instant access to thousands of open-source and carefully selected free applications. And now you can buy apps too. Browse software in categories including: education, games, sound and video, graphics, programming and office. All the applications are easy to find, easy to install and easy to buy.”
So, in order to follow such a self-imposed policy, all that an Ubuntu user needs to do is simply stick only to the Ubuntu Software Centre to install software. Use no other methods even if you read something on a website.
Simple. Everyone can do it, it is dead easy.
You are guaranteed to get no malware if you stick to installing software only via the Ubuntu Software Centre.
Other Linux distributions also have similar tools to install software from the distribution’s repository, although not all of them are quite as easy to use.
Here are a couple:
http://en.wikipedia.org/wiki/KPackage
http://en.wikipedia.org/wiki/File:Kpackage_3.5.5.png
http://en.wikipedia.org/wiki/Synaptic_%28software%29
http://en.wikipedia.org/wiki/File:Synaptic_screenshot.png
The principle is the same, however.
Do you know what you just did… you suggested someone pretty much format their PC and install an OS which may or may not work properly with their PC, their printer, their phone, etc., etc. … instead of spending 10 minutes explaining to someone what they should do to protect themselves online.
Are you nuts?
Education is the key to solving a number of the world’s problems some of these issues are related to computing … other are life in general.
If you don’t believe me, watch Series 4 of The Wire …
http://en.wikipedia.org/wiki/The_Wire_%28season_4%29
And when reading that … try to think laterally.
I have mixed results with Linux. Some favorable impressions, and some things that need definite improvement. The entire “suppository” system falls into the latter.
The Ubuntu Software Centre is more attractive than other distros’ package management systems. But dang, they expect a person to decide whether or not to install JuK (an example from the screenshot you linked) based on the provided information? How about a real description? (“Music player,” are they joking?!?) User reviews? Screenshots?
Maybe there’s more to the Ubuntu Software Center than it appears from that screenshot?
And I certainly don’t mean to single out Ubuntu. Selecting software from GoBo’s repository was absolutely maddening!
So, you looked at a screenshot of one front-end, and decided that software repositories suck…
Great evaluation process!
No they are not … they are an opinion. You make circular arguments. Circular arguments have a fundamental problem and you just don’t see it.
Don't believe it for a second. You pointed me (in another discussion) to using the C# bindings for GTK when I said I would use Visual Studio and .NET because it works. This is crazy …
You also said "What is soo special about source code" (in another discussion) … if you led software development teams you would know the sweat, blood and tears it takes to make a decent product, and also the amount of money.
I also give my source code to my customers … however, my contract states that they may not disclose it to third parties unless they ask for my permission. If they have their own developers they can work on it. Most customers are happy about this … they pay extra if they want to own it.
It is implied in every post you make … most people "read between the lines". It is certainly obvious to me, and to others I have spoken to about your posts on OSAlert.
I assess technical matters every day. I think through decisions on a logical basis almost every day of my life.
However you have an “open source” agenda that skews your thinking.
Also, in software engineering, experience only counts for so much … and it's not only me who thinks this … the author of Code Complete, one of the best books on software engineering ever written, agrees with me.
I didn’t say I was a Software Engineer, I am a Systems Engineer.
http://en.wikipedia.org/wiki/Systems_engineering
Software is but one part of a system.
The type of systems my teams engineered are Cockpit Procedures Trainers (CPT) and Flight Training Devices (FTD). These indeed take a number of years to build, and much blood, sweat and tears goes into them. A decent FTD may use as many as twenty PCs to drive the various simulated cockpit screens, the outside-world visuals, and other player/tactical simulations.
http://en.wikipedia.org/wiki/Flight_simulator
This represents a bucketload of software and hardware all integrated together into a complex system. It is actually more complex than the aircraft being simulated.
Perhaps this might give you a feel for the scope of such a project:
http://www.cwu.edu/~aviation/facilit_simlators.html
Having said that, a full-feature A-grade movie takes just about as much effort, and that venture is protected only by copyright.
Anyway, back to software … if one's team had to write the entire software from woe to go, it would be impossible (the final software deliverable occupies about 20 CDs, and even that uses common components such as the same OS on most machines). The airframe would reach end of life before the simulator on which to train the pilots was ready.
The best approach to providing software for a complex system is to use as much as possible of what already works and is proven.
For example, for the outside world graphics subsystems, we sometimes used this solution:
http://real-time.ccur.com/solutions_businessneed_imagegeneration.as…
The point is that even though this solution is based on open source, we still paid for it, and we still paid about twenty software engineers to integrate with it and write the aircraft-specific parts of the FTD software. It was still part of an overall engineering solution, and money was still made on the deal by both us and Concurrent. Re-using open source solutions for components of the overall system was better for us, better for our customer, and better for the whole life-cycle cost (including software maintenance) of the solution, because the customer got all the source code, and we got the FTD product out the door at about the same time as the real aircraft was first commissioned.
Where is the problem?
Problem is that you are like a broken record.
When insulting, it’s the insulter who gets the worst part.
[a string of garbled, mis-encoded characters]
That makes no sense.
I have an opinion (not a fact) on what that meant if you might be willing to listen.
In this thread, you have tried to fling all kinds of insults at me, and I have responded civilly and calmly answered every one of your "points" and accusations. You have even mounted (what you thought were) scathing attacks over things I did not say, and positions I did not espouse.
Frankly, that makes you look very bad, very biased, and it severely undermines any point you may actually have otherwise made. Your "case" is shot to tatters in the eye of an unbiased observer, merely because you have been unjustifiably aggressive.
Now, if you had been civil, you might have made your point, and convinced someone.
True (providing one goes through the step of making the script executable after downloading it).
This is an excellent reason to avoid the practice of simply downloading software from some random site, making it executable, and then running it.
Fortunately, it is entirely possible to install and run a complete Linux desktop (open source) software ensemble without ever once having to do such a thing.
Sticking to such a process as a self-imposed policy is the one known and well-proven way to be utterly certain to completely avoid malware and yet still be able to run a complete desktop software ensemble.
You need to be educated not to do this. What if someone, for example, was following commands from a website and one part was to run rm -rf ~/ … their home directory would be blown away … the system, however, would be safe.
I see incorrect advice given to new users every day on various Linux forums; just look at the Ubuntu forums. I saw this on there, for example:
dd if=/<somefile> of=/dev/sda
Which would blow away someone's whole hard drive.
Also, this is possible with Windows, Mac OS X, Solaris, FreeBSD, Haiku, Amiga, OpenBSD, etc., as well.
Which again requires that you have a certain level of competence in the first place, i.e. a certain set of specialist knowledge … you have been educated in this particular area of expertise.
Really?
1. It is possible, but very, very difficult, to get a booting system without taking binary code from a source you didn’t generate yourself. Typically people use distributions as a starting point. But just like binary code on Windows, this relies on a chain of trust – that the binaries are not malware infested. If I want to create my own distribution tomorrow, users can’t know whether to trust me or not. In the end, users have to decide trust by word of mouth – what works, what doesn’t – just like Windows.
2. Even when compiling by source, it’s common to blindly execute code. Consider how autoconf/configure scripts work. Do you really read configure scripts before running them? Source availability gives a means to ensure trustworthiness, but that is only as effective as user habits. As the volume of source running on people’s machines increases, and assuming a human’s ability to read code does not increase, the practicality of reviewing code decreases over time. Again, this relies on others reviewing the code, and building up communities based on which code is trustworthy and which isn’t, which isn’t that different to binary components above.
With the source code, you can see what is going to be done, study it, modify it, etc. If you can’t do it by yourself now, you can study how to do it or you can contract someone to do it, etc.
Without the source code it’s not you who has the control, you don’t control the software or control your computing. As Stallman says: without freedoms, the software is who controls the users.
It is not like binary code on Windows, because people who did not write the code nevertheless can download the source code, compile it for themselves, and verify that it makes the binary as distributed.
It is not just one isolated instance of one person doing this that builds a trust in the code … the trust comes from the fact that a program such as gcc, and repositories such as Debian’s, have existed for well over a decade, through countless upgrades and versions of the code, downloaded by millions upon millions of users over the span of that decade, with the source code visible in plain sight to millions of people the entire time, and not once has malware been found in the code.
Not once.
We can trust Debian repositories by now.
That’s an interesting point.
MS has talked (in the past) about continuing efforts to cleanly break the Win32 libs from any system-level entanglement.
Maybe this will also be an opportunity to move a little farther in this direction.
@Vivainio
Microsoft thinks there is lots of value in their legacy software. That is why they are porting Windows over to ARM. And remember this is not the first time Microsoft is working on ARM CPUs, they have been making Windows CE/Mobile for a long time. Maybe they will succeed this time, maybe not.
I do not understand your complaint about shareware. The ability to test before you buy is much more in the user's favour. Much better than novelty apps ("I Am Rich", 999 USD) that you must pay the Apple App Store for before even testing.
A company I used to work for used to run NT servers on Alpha. They were fast, really fast, and being servers, app compatibility was mostly irrelevant.
While I was there, we tested NT on an Alpha desktop, and any x86 apps were emulated (very slowly) under a layer called FX!32 (I think). Hopefully that won't be an issue here, and a native version of Office is encouraging.
Don't see how you tie this in.
The security improvements in the next version of Windows are related to running all applications except the OS itself in a virtual environment.
http://www.zdnet.com/blog/microsoft/more-windows-8-hints-this-time-…
It is not going to be exclusive to ARM.
Nowhere is Windows NT mentioned anyway.
Clearly they’re confused. Did they not read the previous OSAlert story about Windows on ARM where everyone claimed Windows on ARM would never happen because it would be impossible to run Office? Silly Microsoft.
What’s that? A compiler you say? Damn you Microsoft!
NT had been written as cross-platform from the start, so it's nice to see that the code is still portable. I wonder if NT for ARM will get a port of FX!32, too…
In this article: http://www.osnews.com/story/24165/Windows_NT_on_ARM_It_s_a_Server_T… it was mentioned that Windows on ARM would only make sense on servers.
Microsoft seems to disagree, as they are also developing MS Office for ARM, and normally you don't run that type of program on a server.
Either way, I think it's a bold move from Microsoft; on the other hand, they might think that if consumers accept new platforms (hardware/software), then they might even accept Windows on a new hardware platform.
What might also have triggered this decision is the push from Intel towards Meego – it’s payback time…
Please take all my comments with a grain of salt…
Yeah, I’m trying to be modest over my skills of predicting an obvious future. There will not be a version for servers just yet. Probably the version after it hits the desktop.
In the linked story, I kept asking why Windows on ARM. I never got a good response from anyone. The announcement from NVIDIA provides the answer I was looking for. ARM in servers? Doesn't make sense. ARM + NVIDIA GPU in servers? Makes a huge amount of sense. The GPU does all the hard work; the main processor just sits there and looks pretty without consuming much energy.
Actually, in my experience, straight ARM in servers makes plenty of sense… the same way that Atom-based servers do. Power efficiency is always a major concern for large server farms.
It ultimately depends on the server load – for disk-heavy servers, a dual-core Atom (or ARM) may be plenty, and the low power utilization is a major bonus.
At the moment, both of my servers at home now utilize Atom processors each consuming ~30-40w of power at full load. ARM would be even better, but purchasing/building commodity-hardware-based ARM machines isn’t terribly easy to do yet
Yeah, heavy disk loads also might make sense, assuming the analysis of that data is minimal and disk I/O is the bottleneck.
I was thinking that any low CPU loads would be better served off virtual machines, but disk I/O from virtual machines stinks.
If power efficiency matters, do not use server applications implemented in scripting languages.
Why not have Office on the server ? Many people/companies use some kind of terminal server-like solution running on Windows.
Actually, we do run Office on servers. We use it to generate spreadsheets for end users. They can request a process, the output of which is a spreadsheet in Excel format with nice formatting, pivot tables, etc. This is done, at least sometimes via actually running Office.
Windows on PPC and other platforms did exist. PPC did have MS Office and most of the MS product line as well. Just no one else was making applications for it.
Name the number one problem with people migrating their desktop to Linux: legacy Windows programs that will not be ported.
Windows 8 on ARM will suffer the same problems. I do support for Wine, and we regularly get questions about porting Wine to Windows to run old legacy games that work in Wine but not in Windows.
The legacy issue is a huge roadblock for Linux. But it is also a reason why some users are using Linux.
With Windows 8 on ARM, I cannot see how it will not be just as much of a roadblock. Not that I have seen .NET take off enough to counter this.
Final major question: what is going to happen to Windows CE? Is Windows Phone 7 going to be the last CE release? It would make sense from a cost-cutting point of view.
I’m not sure how many legacy programs I used in last several years. Probably zero?
Anything that is not supported is not worth supporting.
Now Microsoft’s strategy with .NET makes quite a lot of sense.
Their hope is to get as much software as possible running on .NET so that it will work on x86 desktops and newer ARM computers.
Pretty sensible from the outset …
For devs it is a nice toolkit to use and it makes developing for Desktop, Web and Mobile nice and familiar.
Also, consider how bad .NET 1.0 and 1.1 were compared to .NET 2.0 and above. There has been a nice, steady improvement in .NET since version 2.0.
Whole new ground for mass infections. Yay!
[despite the fact that CE was not infected and ARM is not x86, but you never know]
I sure hope they are planning to target only a future 64 bit ARM. It would be annoying if 32 bit addressing gets a new lease on life due to this. Personally, when I write a (linux) program, I generally don’t even consider whether it would be portable to a 32 bit machine, just like I don’t consider whether it would be portable to a 16 bit machine. I’d like it to stay that way.
Considering the Windows codebase is already portable across 32 or 64 bit addressing, it seems like it would be a (pointless) step backward to disable that capability just to spite people.
What a strange thing to be annoyed at… especially given that if you never need 64bit addressing, you’re potentially saving the overhead of having to address it that widely to begin with.
Just sounds like lazy development to me – what assumptions in your software would implicitly fail on 32-bit addressed systems? Don't you use portable pointer types in your code? Is your code going to simply fail on 128-bit systems someday? The proper use of abstraction goes both ways, my friend…
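For reference, the "portable pointer types" in question are the standard C99 ones; a small sketch like this compiles and behaves the same whether pointers are 32, 64 or, some day, 128 bits wide:

/* Sketch: width-agnostic pointer handling using the C99 types meant
   for exactly this. Nothing here assumes 32-bit or 64-bit pointers. */
#include <stdint.h>
#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    int array[16];

    /* size_t for object sizes and indices, never int or long. */
    size_t count = sizeof array / sizeof array[0];

    /* uintptr_t when a pointer really must be treated as an integer,
       e.g. for alignment checks or printing. */
    uintptr_t addr = (uintptr_t)&array[0];

    /* ptrdiff_t for the difference between two pointers. */
    ptrdiff_t span = &array[count - 1] - &array[0];

    printf("count=%zu addr=0x%" PRIxPTR " span=%td aligned=%d\n",
           count, addr, span, (addr % sizeof(int)) == 0);
    return 0;
}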
It’s not “just to spite people”, why would you think that? It is to allocate development resources efficiently, both for OS developers (fewer builds to test) and for all application developers (same reason). In case you haven’t noticed, 2008 R2 is already 64 bit only for this very reason. It is a question of “I have X dollars to spend on this project, how can I most efficiently use them?”
Furthermore, there is no 32 bit application base on NT/ARM now, so there is no one who could be spited. My point is, if you are starting with a clean slate, make it clean!
Of course it’s lazy! Do you test all your C on VAX and M68K? How could somebody be so lazy as to not do that?
I own a couple of UNIX boxes from the early 90s. I like playing with them. But I wouldn't actually expect anybody writing software in 2011 to worry about it being portable to OSF/1 or Solaris 2.3. My personal belief is that 32-bit x86 is on its way down that road; others are free to disagree, but as time goes on, I think fewer and fewer people will.
One other thing, just for fun: Let’s say that the biggest single system image machine you can buy now can handle 16TB of RAM (eg, the biggest Altix UV). To hit the 64 bit addressing limit, you need twenty doublings*, which even if you assume happen once per year (dubious), puts us around 2030. Obviously it is possible to hit the limit. But the question is, will the programming environment in 2030 be similar enough to UNIX now such that thinking about 128 bit pointers now would actually pay off? On the one hand you could cite my 1990 UNIX machines as evidence that the answer would be “yes”, but on the other, modern non-trivial C programs are not usually trivially portable to these machines. So it’s hard to say how much I should worry about 128 bit pointers; they may be the least of my problems in 2030. Or maybe not. Who knows.
* OK, disk access issues like mmap will make it useful before then. Maybe we’ll even want (sparse) process address spaces bigger than that before then. But it doesn’t change the core question of whether you can anticipate the programming environments of 2030.
The ARM Cortex-A15 MPCore CPU architecture, which is the one aimed at desktops and servers, is a 32-bit architecture. Nevertheless, it does not suffer from a limitation of 4GB of main memory, it can in fact address up to one terabyte (1TB) of main memory.
http://www.engadget.com/2010/09/09/arm-reveals-eagle-core-as-cortex…
I believe the Cortex-A15 MPCore architecture includes a built-in memory management unit to achieve this feat.
Also, an important note: the 4GB limit of a 32-bit OS on a lot of x86 chips is garbage as well. PAE mode gives you 64GB to 128GB of physical memory in 32-bit mode.
So 32-bit being limited to 4GB is mostly market segmentation by Microsoft, nothing more.
So we can expect MS to treat ARM the same way they do x86: different versions, different limits, nothing to do with real hardware limits.
Lolwut?
Windows' 32-bit client versions do PAE, but limit the *operating system* to 4GB anyway due to the instability it caused with some drivers (32-bit Windows Server does support more than 4GB).
http://blogs.technet.com/b/markrussinovich/archive/2008/07/21/30920…
However, *applications* on 32-bit Windows can access more than 4GB if they want to, using AWE (Address Windowing Extensions).
In other words, you’re talking out of your ass.
A Windows client SKU won’t address more than 4Gb of physical memory. This means that applications can’t use those physical pages either. If an app can use those physical pages, you’ll have those pages going through device drivers, which is what the article claims is not supported. If an application attempts to address more than 4Gb of memory, this can only be achieved by paging (ie., giving more than 4Gb of virtual address space but without more than 4Gb of physical pages.) So if you want to put 8Gb of RAM in a machine and actually use it, you have to choose between a 64-bit client SKU or a 32-bit server SKU; a 32-bit client SKU will not use half of that memory.
Please don’t include this kind of discourse. It’s not constructive, helpful, or informative.
I think the confusion is that Windows is usually limited to only using 3GB of RAM, but using PAE allows it to use up to 4GB (even in client versions). Also, the AWE API can be used in the client versions to access that extra 1-2GB of memory if needed.
Perhaps that was how they did it with WinXP (I don't know, I've never used PAE mode on XP) because they needed a reason for people to upgrade later on, but on Windows 2000 Advanced Server/Datacenter Edition (yes, Win2k), I've seen PAE enabled to provide 16GB of RAM available to the OS *and* SQL Server (via AWE) – so I know you are wrong.
This is simply wrong.
http://msdn.microsoft.com/en-us/library/aa366527(v=vs.85).aspx
It’s not this simple.
I don’t know how Microsoft implemented this in practice, but the way I see it they only have a few choices :
-Having application developers swap data in and out of their application’s address space all by themselves (cumbersome)
-Having applications not directly access their data, but only manipulate it through 64-bit pointers which are sent to the operating system for every single operation. They can do that e.g. by having the extra RAM being manipulated as a file (slow because of the kernel call overhead)
Really, PAE is only good for multiple large processes which each use less than 4GB. Having an individual process manipulate more than 4GB on a 32-bit system remains a highly cumbersome operation.
32-bit applications on Linux don't know whether they are on PAE or not, so application developers don't need to know about it.
It's a simple trick: virtual memory, i.e. swapping out. To a 32-bit application, that is what appears to have happened to the memory; in fact the memory block has just been placed in a PAE memory block outside the 4GB address space. It is way faster to get that memory back using PAE than using swap.
Yes, simple: treat PAE as a RAM-based swapfile and all the complexity is solved, since 32-bit applications have to put up with swapfiles in the first place.
PAE is a good performance boost on a 32-bit system running large applications that, with the 4GB limit, would be hitting swap like mad. Yes, hard drives are massively slow.
Now, drivers and anything running in kernel mode are a different case. A lot of Windows drivers are not aware of PAE's extra memory. Even so, there are ways around this issue while still taking advantage of PAE. Again, drivers have to be swap-aware or they will cause trouble. Being PAE-aware does avoid having to pull a page back into low memory for a driver to place its data in, but even then it is still way better than if that page had been sent to disk and had to be pulled back.
Basically, there is technically no valid reason to limit particular versions of Windows to 4GB of memory. Heck, Windows Starter has an artificial limit of 1GB of memory. The 4GB limit was just nice and simple to blame on the 32-bit limit.
Could MS not write a simple RAM-based swap system using PAE?
Yes, it gets trickier with 32-bit PAE when you have multiple cores, since to get the most advantage out of PAE you have to use it with NUMA in mind. Is this something applications need to worry about? Answer: no.
Everything needed to support PAE is kernel-based. It just has to be done right.
I can’t understand how that swapping you describe could possibly work.
As we both know, ordinary swapping occurs when you run out of physical memory, but you still have some spare linear addresses around. In that case, all the kernel has to do is to allocate a new range of linear addresses and return a pointer to the beginning of it as usual, but mark the corresponding pages as absent.
Later on, when the process tries to access one of these newly allocated linear addresses, a page fault occurs. The kernel is summoned, and it swaps some things in and out so that the requested data ends up being in RAM. Then it makes the non-present linear addresses point there, marks them as present, and the process starts running again as if nothing happened.
But what we’re talking about is totally different. It’s when we are running out of linear addresses, but still have some spare physical addresses. The virtual address space of the application is now full, even though the RAM is not.
In that case, malloc() and such simply can’t work anymore. Because there is no new pointer they could possibly use and return. All possible pointers of 32-bit addressing are already in use somewhere in the process. There is no such thing as a spare linear address range which we could mark as non-present as we do in swapping.
Have you looked at my Bio? I work on Windows full time. If you want me to go over memory management, I can bore you to tears, but it’s very unlikely that you’ll be able to dismiss me that easily.
Firstly, here’s the page that describes limits on physical addressing:
http://msdn.microsoft.com/en-us/library/aa366778(v=VS.85).aspx#physical_memory_limits_windows_7
Second, this section might be helpful (the part that talks about how PAE, /3Gb, and AWE are related and not related):
http://msdn.microsoft.com/en-us/library/aa366796(v=VS.85).aspx
That’s all well and good, but it’s still subject to the physical memory limits described in the link I gave above. See the part where it says “The physical pages that can be allocated for an AWE region are limited by the number of physical pages present in the machine, since this memory is never paged…” What this link really shows is that a single process, which has 2Gb of VA, can use greater than 2Gb of physical pages on a 32-bit client system. It cannot use more than 4Gb of physical pages, since that’s the absolute maximum the client system will ever use.
It also shows that MS’ implementation of AWE requires physical pages and is therefore unsuitable to extend addressing. On client systems it’s only useful to get from 2Gb to (some value less than) 4Gb.
I’m sorry, I think we were talking past one another – I thought you meant that client versions of 32bit Windows had zero options to go past the 2GB limit for applications. We actually meant the same thing, except I was unaware of the 4GB limit of AWE. Thanks for clarifying!
PAE allows you to have more than 4 GB of addressable physical memory, but you can only map them in a 32-bit address space*. A single process thus still cannot hold more than 4 GB of data easily.
PAE is just fine for running lots of small processes on a big machine, as an example if you’re running lots of small virtual machines on a server. But for the power-hungry desktop user who wants to crunch terabytes of data in some video editing software, on the other hand, I don’t think it’ll ever be that useful. Except if we start coding multi-process video editing software, but since developers already have issues with multiple threads I don’t see this happening soon…
* AMD64 Vol2, r3.15 (11/2009), p120
A single process can use AWE to map subsets of data in its limited address space at a time while still using more than 32-bits of physical memory. Or, in many cases, it can delegate that job to the operating system (eg. by using the OS file cache, which is not limited to the process’ 4Gb limit.)
So perhaps we’re moving to 64-bit for simplicity, not necessity.
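For anyone curious, here is a rough, untested sketch of the AWE windowing pattern described above on 32-bit Windows: reserve a small virtual-address window, allocate physical pages beyond what fits in it, and map different subsets of those pages into the window as needed. It assumes the "Lock pages in memory" privilege is already granted and skips most error handling:

/* Sketch of the AWE windowing pattern (32-bit Windows).
   Assumes SeLockMemoryPrivilege is already granted to the account. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* Ask for 64 MB of physical pages... */
    ULONG_PTR nPages = (64u * 1024 * 1024) / si.dwPageSize;
    ULONG_PTR *pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                             nPages * sizeof(ULONG_PTR));

    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &nPages, pfns)) {
        fprintf(stderr, "AllocateUserPhysicalPages failed: %lu\n",
                GetLastError());
        return 1;
    }

    /* ...but only reserve a 16 MB window of virtual address space. */
    ULONG_PTR windowPages = (16u * 1024 * 1024) / si.dwPageSize;
    void *window = VirtualAlloc(NULL, windowPages * si.dwPageSize,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

    /* Map the first quarter of the physical pages into the window,
       touch them, then remap a different quarter into the same window. */
    MapUserPhysicalPages(window, windowPages, &pfns[0]);
    memset(window, 0xAA, windowPages * si.dwPageSize);

    MapUserPhysicalPages(window, windowPages, &pfns[windowPages]);
    memset(window, 0xBB, windowPages * si.dwPageSize);

    /* Unmap and free. */
    MapUserPhysicalPages(window, windowPages, NULL);
    FreeUserPhysicalPages(GetCurrentProcess(), &nPages, pfns);
    return 0;
}

The application, not the OS, decides which pages sit behind the window at any moment, which is exactly why this is workable but cumbersome compared with simply having a 64-bit address space.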
Well, I wouldn’t bother going 64-bit if it wasn’t
1/A priori faster and certainly easier to use than 32-bit + PAE (no need to have the OS juggle with your data so that it fits in your 32-bit address space)
2/Much, much more convenient on x86 (AMD have taken the opportunity of AMD64 to clean up part of x86’s legacy mess, so x86 processors are easier to play with in 64-bit mode ^^)
PAE allows the *kernel* to access more than 4 GB of RAM. However, *processes* can only see 4 GB of RAM, period. Each process can be given its own 4 GB chunk of memory, though. But they are still limited to 4 GB.
And the kernel has to do a lot of thunking and bounce buffers and hoop jumping and whatnot to manage PAE accesses. And all your drivers need to be coded to support PAE. And all your low-level apps need to be coded to support PAE. And on and on.
PAE is a mess, and should be avoided like the plague unless there’s absolutely no way to run a 64-bit OS/apps.
The only way for an app/process to access more than 4 GB of RAM (on x86) is to use a 64-bit CPU with a 64-bit kernel.
A 64-bit kernel is not necessary; see OS X. x86-64 CPUs can switch between two modes of operation at run-time, allowing 64-bit processes alongside a 32-bit kernel and drivers.
Maybe you could, but once you get enough 64-bit support in the kernel to be able to run 64-bit processes, it’s just weird to keep the rest of the kernel 32-bit.
Moreover, without being PAE-aware, the kernel and drivers could only write to the first 4 GB of RAM, which could be problematic for things like DMA.
Yeah, I know about the hybrid mode that AMD CPUs can work in.
But, I thought it was only the other way around. You could run 32-bit userland on a 64-bit kernel. Not that you could run a 64-bit userland on a 32-bit kernel.
Really? Name a Linux program that has to be changed between a PAE and a non-PAE kernel. Answer: zero.
PAE does not have to have anything to do with userspace.
The overhead of PAE thunking is far lighter than hitting swap.
What tricks do 32-bit programs that need more than 4 GB of space use? Memory-mapping to a file. PAE gives the OS more physical memory to work with, so it can reduce the number of disk operations on a memory-mapped file.
So don’t quote trash: a 64-bit system is not the only way to exceed the 4 GB limit.
Yes, a program running on a non-PAE 32-bit machine can already use such methods to work with more data than the 4 GB limit allows, at a cost in performance. PAE lets you reduce the cost of those tricks.
Shock horror: using PAE just for swap, disk cache, and backing memory-mapped files to reduce disk accesses doesn’t require all of your drivers to be PAE-compatible, since most drivers shouldn’t be touching that memory anyway.
Here’s the best bit of all: PAE used this way isn’t even new. It’s basically the same style as expanded memory; breaking an address-space limit by banking physical memory in and out goes back to 1984.
The limit on 32-bit x86 is 4 GB of memory addressable at any one time. With memory mapping and other methods, a program’s data can in reality be many times larger than that, with or without PAE active.
The difference is that PAE can remove much of the speed hit from the methods programs use to exceed the 4 GB limit.
That is the big mistake here: you are presuming that programs cannot use more than 4 GB of memory. That presumption rests on the idea that the OS gives programmers no way around the problem, which is incorrect.
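For anyone who hasn’t seen the trick, here’s a minimal sketch of the memory-mapped-window approach on a POSIX system (my own illustration; the file name is made up and error handling is trimmed). A 32-bit process walks a file far larger than 4 GB by mapping one small window of it at a time; with PAE, the extra physical RAM ends up in the page cache, so many of those windows never touch the disk:

    #define _FILE_OFFSET_BITS 64     /* 64-bit off_t even in a 32-bit build */
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW (64UL * 1024 * 1024)   /* 64 MB of address space at a time */

    int main(void)
    {
        /* "huge_dataset.bin" is just a hypothetical example file, possibly > 4 GB. */
        int fd = open("huge_dataset.bin", O_RDONLY);
        struct stat st;
        fstat(fd, &st);

        uint64_t sum = 0;
        for (off_t off = 0; off < st.st_size; off += WINDOW) {
            size_t len = (size_t)((st.st_size - off < (off_t)WINDOW)
                                      ? (st.st_size - off) : WINDOW);
            unsigned char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);

            for (size_t i = 0; i < len; i++)   /* touch the whole window */
                sum += p[i];

            munmap(p, len);   /* give the address space back for the next window */
        }

        printf("checksum: %llu\n", (unsigned long long)sum);
        close(fd);
        return 0;
    }

The program’s virtual footprint never exceeds one window, yet the data it processes can be arbitrarily large; PAE only changes how often those windows have to be fetched from disk.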
I’ve been claiming that ARM is the biggest competitive threat that x86/x64 has ever seen pretty much since ARM’s A8 core, when it was clear to me that they were eyeing ever higher performance. Intel confirmed the threat with Atom, and now Microsoft has endorsed ARM’s march into the x86 stronghold.
We’ve already seen niche thintops and netbooks based on ARM, ARM servers have been announced, and NVIDIA says its high-end ARM design will find its way into fuller-featured laptops and even the desktop. Exactly as I was predicting around the time Atom was announced.
Windows 8 “for SoCs”, as they are calling it, is actually a pretty interesting product, and supporting a limited number of SoCs means that the number of hardware permutations is much lower and a known quantity. They can also throw away tons of cruft — no BIOS, no AGP, PCI or ISA — but more importantly, no need to support every device ever conceived and built.
On the software side, port Windows itself, Microsoft’s first-party stuff, and .NET, get some primary ISVs involved, and most people will be happy — particularly users of iPad-like tablets or i-ified netbooks, whose usage/input model essentially demands new apps anyhow — people who live on the net and enjoy a few focused, snack-size apps.
Even if Windows 8 on ARM SoCs fails to oust the PC from its traditional space, Microsoft still wins, because they’ll have succeeded in migrating mobile devices off of a Windows CE-based OS and onto an NT-based OS. One code base to move forward, mostly overlapping APIs — CE will hang on for a while longer, but ultimately be relegated to industrial-type uses, probably even morphing into an RTOS. Windows NT will be the only consumer- and business-facing OS.
On the other hand, if it succeeds, Microsoft gains an exit strategy if x86 ever tops out, or programming models change so drastically anyhow that it no longer makes sense to be tied down to the legacy processor.
I don’t see how ARM would deal with upcoming programming models better than x86.
For now, ARM systems are cheaper and have more mature power management, but they are far away from x86 in performance. Calling x86 “legacy” in this light is a bit preposterous.
Totally agree there. It’s natural that everyone gets excited by announcements like this, but ARM is still ARM – it’s got a lot of things going for it, but it has a LONG way to go to compete with x86 on pure performance.
ARM is good enough at what it does that I think it could easily become a serious player for systems where power use is critical, but it will have to undergo quite a lot of changes to compete on pure horsepower. I’m not an EE, but I would bet that the mechanics of making a built-for-speed chip use less power (i.e. Atom/Bobcat) are a lot easier to tackle than the other way around.
I think for a lot of people that’s not so important…
ARM, now with dual cores and soon with tri- and quad-cores, is powerful enough for web, office…
Yes, I agree again. I’m simply saying that if you are one of those who DO care about performance, and power usage is not your primary concern, then ARM is not going to be very attractive now, and maybe never will be.
Here’s an idea: don’t depend on the CPU for raw performance. CPUs are very “smart” when it comes to logic, but they choke easily on big datasets. GPUs, on the other hand, are vector-oriented, kinda “dumb” processors which excel at parallel number crunching. So why strain the CPU trying to make it decode one video frame as fast as it can when you can use the GPU to decode 10 frames simultaneously? Maybe in a 1 vs 1 comparison some CPUs can beat GPUs, but when it comes to parallel processing the difference is abysmal, in favor of the GPUs.
For now, we developers depend on tools like CUDA or OpenCL to tell the computer what kind of hardware we want to use for each task, but eventually we will have toolchains smart enough to figure that out for themselves.
It’s just a theory, of course, but this looks very similar to what AMD wants to do with Fusion. Maybe we should call this APU instead of CPU+GPU.
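As a tiny illustration of that split (my own sketch, not anything the article describes), the OpenCL host API already lets a program ask for a GPU for the data-parallel crunching and fall back to the CPU if none is exposed. It assumes an OpenCL runtime is installed and trims error handling:

    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char name[256];

        clGetPlatformIDs(1, &platform, NULL);

        /* Prefer a GPU for the data-parallel number crunching; fall back to the
           CPU device for the "smart but serial" work if no GPU is exposed. */
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("Offloading the parallel work to: %s\n", name);
        return 0;
    }

The smarter toolchains mentioned above would essentially automate this choice per task instead of leaving it to the programmer.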
ARM may not be directly competitive in performance right now, but it has had 20+ years of evolution toward low-power, embedded applications. A single current-gen ARM core alone draws maybe 500 mW at load. Intel’s most frugal Atom draws, IIRC, 4 W at idle and twice that under load. You could lay down 16 ARM cores in the same thermal envelope as a single Atom core (though doing so would be of dubious use). My point is that the comparison isn’t all that fair, since all current ARM processors are fighting with both hands tied behind their backs.
Even so, ARM performance has grown by leaps and bounds in the last 5 years, coming from PII levels of performance with the ARM9 and ARM11 to being nearly on par with Intel’s Atom with the Cortex-A9. Thus far they’ve made those advances without throwing power consumption to the wolves, but imagine if someone came along with the ‘radical’ idea of even a 10 or 20 W power envelope for an ARM implementation. Imagine indeed — this is exactly what NVIDIA promised to do today, aiming at the desktop and server markets.
The ARM ISA isn’t what’s holding ARM back — it’s been the power/thermal requirements of their core markets (SoCs, embedded). Given power and die size to burn, there’s no reason an ARM design couldn’t be just as beastly as AMD’s or Intel’s (experience in doing so notwithstanding).
ARM has a similar problem to Intel in that they utterly dominate all the current markets where they compete — this is why ARM is eyeing Intel’s turf and vice versa.
Intel may have a massively larger market cap, but ARM has volume that Intel can only dream about — to give you an idea, it took McDonald’s 21 years to sell a billion hamburgers, and 3 billion ARM cores were produced last year alone. When ARM itself (Cortex-A15) and others (NVIDIA) want to push ARM to the limits, they’ll find the market waiting.
Which is easier, to take a rich man who drives a fast car and convince him to drive a run-of-the-mill sedan, or to put a poor man into that same sedan?
Define easier.
By “easier”, I mean less technically challenging… Atom and Bobcat are fundamentally more similar to their ancestor designs than current high-end designs are. Intel and AMD are taking the route of removing complexity in order to achieve lower power use (Intel by going in-order, AMD by sharing execution resources behind a single front end). The point is that the complexities they are removing are, in many cases, what make their higher-end parts perform so well.
ARM has never had those types of complexities in the first place – most of the special sauce in ARM (Thumb, for example) is there to optimize for the embedded space: smaller binaries, better performance with smaller caches, etc. No one has ever tried to make an ARM core where performance was the primary goal – all existing cores were designed for power envelopes an order of magnitude or more smaller than high-end x86 parts.
I’m not at all saying you can’t make a very fast ARM core – I’m just saying it isn’t as simple as ramping up the clock speed and doing some minor reworking. A 3 GHz ARM might be possible with current designs, but even at 3 GHz it would have a long way to go to reach the performance of a similarly clocked i5/Phenom core, let alone match them when they can legitimately run at 4 GHz or more themselves. It will take a lot of work to make ARM competitive if you factor out power use, and I have no reason at all to believe that NVIDIA could accomplish such a feat.
Also, I want to stress that I am talking about single-threaded performance, i.e. performance per core, not overall performance. ARM can scale up very well by just throwing more cores at the problem, but that is not the same thing.
ARM doesn’t deal with them better, and that’s kind of the point — no traditional CPU does or likely ever will. The closest paradigm shift on the horizon is GPGPU, and specifically heterogeneous on-chip computing (AMD’s Fusion, NVIDIA’s Tegra 2 and the Project Denver announcement). The first of these products look like CPUs with little GPUs attached, but over time that will shift toward looking a lot more like big GPUs with little CPUs attached.
Ultimately there’s a limit on how many ‘serial’ processors (henceforth “CPUs”) are useful in a system. Parallel processors (henceforth “GPUs”), on the other hand, are happy to spread the load across as many computing elements as they have available. Tasks for the GPU are high-throughput and data-parallel, while tasks suitable for the CPU are, comparatively, low-throughput and data-serial or I/O-bound — there’s only so much actual compute work to be spread around. Parallel tasks are also the ‘sexy’ ones — graphics, gaming, HPC — and the serial tasks are not. Eventually, the CPU will become little more than a traffic cop routing data into and out of the GPU.
Now, this in and of itself is not good for ARM — they’re in no better position than x86, MIPS, or SPARC. What makes it a good thing for ARM is that we are nearing an inflection point where traditional hardware ISA compatibility isn’t going to amount to much. It’s not actually true that SPARC or MIPS has as good a chance as ARM, though — neither is a ‘consumer-facing’ architecture. Yes, only geeks know or care about ARM vs x86, but by consumer-facing I mean that ARM runs what the typical user wants (email, Facebook, Flash content, streaming video), does it in form factors that are popular, and undercuts the competition on price. Once the CPU architecture no longer matters a great deal, x86 (and specifically Intel) market share is so high that it can only decline. My argument is that only ARM will be there to pick up the pieces if or when that happens.
There’s something of a perfect storm aligning against the traditional lock-in Intel and x86 have enjoyed — heterogeneous computing (Fusion, CUDA, OpenCL), the ‘cloud’, a shift away from desktops to laptops (and eventually smaller iPhone-like devices) — and ARM is ready for it.
Initially it’ll be the low-end, low-raw-crunch servers: http://www.linuxfordevices.com/c/a/News/ZT-Systems-R1801e-/
Think that, but with 8x 2.5 GHz Cortex-A15-based quads and 256 GB+ of RAM, running Windows Server 2012 for ARM.
It’s a scary thought, but that’s what we’ll be seeing.
It’ll probably be another 2+ years before Win8 on ARM is all that useful for general consumers, though, as it’ll take some time for the non-business apps to filter in.
Wow, this is a very interesting turn of events. I heard about the new multi-core ARM CPUs and thought it would be interesting if they gave Intel a run for its money, but now, with Windows on ARM, this is a whole new can of worms.
I can’t wait to see what happens. I’m sure we’ll see nothing but incredible performance gains on both the x86 and ARM platforms.
I’ve been hankering for a good ARM desktop for ages now, but so far the biggest hurdle seems to have been that no major software company delivered anything for one, and thus no one has been bold enough to start producing ARM desktops. With Microsoft now openly embracing ARM, such desktops will definitely come out during the next 2-3 years.
And hell, having an NVIDIA GPU in addition to low-power, low-heat ARM processor core(s) means it’ll even be able to support gaming, very multimedia-rich applications and all that.
Companies will have no choice but to aim for easily portable code so as to reach Windows users on both architectures, and that _could_ also spawn more Linux versions, though I suppose the chances of that are still somewhat small. Something is still better than nothing.
As for Windows itself… well, I am strongly for open-source, free software — not necessarily free as in beer, mind you — but ever since I got myself Windows 7 I’ve noticed myself booting into Linux less and less. Thus I’m slightly ashamed to admit it, but I will most likely be a Win8 user on my ARM system, unless they manage to screw it up in some really major way.
And I will have to pay for Windows if I want a computer, even if I will never, ever use it.
Having had a read over at a few other websites, I saw this interesting piece:
http://www.neowin.net/news/rumor-windows-8-to-feature-tile-based-in…
Apparently Silverlight is going to play a greater role in the future of Windows application development:
With the mixing and matching of native and managed code/Silverlight, as shown by the improvements coming in Silverlight 5, are we going to see a migration away from the Win32 GUI components to using Silverlight for all the visual presentation? Silverlight is native for mouse and touch, so a migration to a Silverlight interface would provide the sort of flexibility that allows Windows 8 to run on touch handheld devices, laptops, desktops, etc. I just hope Windows 8 has a backbone and pushes through, instead of compromising for the whingers and whiners demanding that their 40-year-old punch-card application work flawlessly with Windows.
By “pure” I mean without any unmanaged C/C++ legacy code.
These should run out of the box on the ARM variant of Windows 8.
Any numbers?
pica
My suspicion is that Microsoft realised that ARM is a dynamic architecture, and that if they didn’t run with it then… others would, as ARM dominates tablets and potentially other new forms of system.
The key driver here, I suspect, is the consumers who will (asap) become intolerant of anything less than (say) a 12 hour battery life.
If ARM takes less silicon to make and consumes less power then that battery life is easier to achieve.
Microsoft, being a massive corporation, will have ported NT (all versions) behind closed doors in any case.
As this was one of the announcements I expected to see in the same place as Duke Nukem Forever getting released (oh wait…), IMHO it’s a positive if only Windows can get a clean slate and leave behind all the crummy baggage of legacy code lurking under its hood, and maybe, just maybe, they can try to be a tad more open this time. Secondly, it gives Linaro and the other Linux-on-ARM aficionados a run for their money; complacency stifles innovation, and with this, hopefully we’ll see more value in the Linux-on-ARM space. Not to belittle what’s already going on in that sphere of development, but with the head start they already have, now would be a good time to prove their worth before the 900-pound gorilla gets in on the act.
Microsoft have many obstacles to overcome in this category of the market — the least of which is time. How long have ARM netbooks and mobile devices been around? How long has Windows CE had an ARM port? And how much long-term success has Microsoft had with its NT family of operating systems running on architectures other than x86-32/64? Sing the doom song, GIR.
-Gary
What would be REALLY interesting is OS X, which probably already runs on it.
Windows on ARM? There isn’t enough NoDoz in the world for that.
This should really wake up Intel though.
Speaking of Apple (I was; I didn’t read the other posts because they are probably about Windows), they probably already know about this. I wonder if they know enough about it that they will base their low-end computers on it in the beginning, and then later maybe all of their computers.
Apple were rumoured to be running full-blown OS X on Intel machines years before they made that transition. I expect that they’ve also had full-blown OS X running on ARM for nearly as long. I also expect that they also have 10.6 running on PPC.
It seems to me that this move by Microsoft validates Apple’s pre-Intel position – where they perpetually claimed to mostly deaf ears that PowerPC was better and that it didn’t matter that Macs didn’t run on x86 chips.
It also seems to me that this opens up the prospect for Apple not only to produce low-end Macs based on ARM chips, but also, should they desire to do so, frees them up to produce high-end Macs based on PPC chips. They did after all buy PA Semi, whose PA6T was pretty danged quick… Whilst IBM kinda failed a bit with their G5 roadmap, they do now make POWER7 chips with 8 cores running at over 4 GHz, and their PowerXCell 8i chips (as used in the IBM Roadrunner supercomputer, based on the Cell in the PS3) are pretty nifty and would work nicely with OpenCL and Grand Central…
It was probably a Windows virtual machine running on top of QEMU (or something like that), on top of Linux, on top of ARM.
Having closed-source drivers is the problem. With the PowerVR situation, the VIA graphics situation, and the NVIDIA situation, they just move the “hardware supported only by proprietary drivers” problem to another platform. The unfair war on open source by vendors continues.