An unpatched flaw in an ATI driver was at the center of the mysterious Purple Pill proof-of-concept tool that exposed a way to maliciously tamper with the Vista kernel.
Purple Pill, a utility released by Alex Ionescu [yes, that Ionescu] and yanked an hour later after the kernel developer realized that the ATI driver flaw was not yet patched, provided an easy way to load unsigned drivers onto Vista – effectively defeating the new anti-rootkit/anti-DRM mechanism built into Microsoft’s newest operating system.
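Nothing beyond Ionescu's description of the tool has been published, so what follows is only a sketch of the general class of bug being discussed, not of Purple Pill itself: a signed driver exposing an IOCTL that writes caller-supplied data to a caller-supplied kernel address. The device name, IOCTL code, and request layout below are all invented for illustration.

/* Sketch of the *class* of flaw, not of the actual ATI bug: the key
 * point is that no unsigned driver is ever loaded through the front
 * door; the signed-but-buggy driver performs the kernel write on the
 * caller's behalf. */
#include <windows.h>
#include <stdio.h>

#define IOCTL_WRITE_ANYWHERE 0x222003   /* invented value */

struct write_req {
    void *kernel_dest;                  /* caller-chosen address */
    unsigned char data[8];              /* caller-chosen bytes   */
};

int main(void)
{
    HANDLE dev = CreateFileA("\\\\.\\HypotheticalVulnDev",
                             GENERIC_READ | GENERIC_WRITE, 0, NULL,
                             OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "no such device (expected: this is a sketch)\n");
        return 1;
    }

    struct write_req req = { 0 };       /* attacker fills this in */
    DWORD ret = 0;
    DeviceIoControl(dev, IOCTL_WRITE_ANYWHERE, &req, sizeof(req),
                    NULL, 0, &ret, NULL);
    CloseHandle(dev);
    return 0;
}

Patch the driver and this hole closes; revoke its signature and the same binary stops loading, which is why the unpatched-flaw detail matters.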
This is yet another reason to appreciate the FSF campaign: http://defectivebydesign.org
Not really… even if you weren’t worried about DRM, you can use this to exploit the Vista kernel. It’s just a defectively-designed video driver… nothing else.
Since ATI’s drivers are so buggy (and supposedly Nvidia’s too), I wonder about the dangers of installing their drivers on Linux machines, considering they run in kernel land.
Come to think of it, I suppose the Linux versions are even more dangerous, since I guess the developers pay more attention to stability and security on Windows drivers (by far their largest market) than on Linux ones.
It’s a good thing most servers running Linux have slower graphics cards (they are servers, after all) or are simply headless; otherwise, I suppose people would have started exploiting those flawed drivers ages ago.
It may very well be true that the Linux drivers provided by ATI and NVIDIA (obfuscated binary blobs that they are) have more flaws, more security holes, more opportunities for infection with malware. This may someday be shown true, but unlike with Windows, on Linux I at least have a choice of drivers. I can choose to run the community drivers, slow though they may be.
What do you do in Windows when your drivers are faulty?
–bornagainpenguin
You install a different version.
How many Vista ATI drivers can you choose from?
Any alternatives?
In total, I can’t say. From ATI’s website, there are 14 (7 non-beta for Vista RTM) listed for Vista x64. This excludes OOTB Vista drivers (and the standard VGA driver), leaked drivers like 7.8 beta, OEM drivers, and unofficial distributions from groups like Omega (though Omega specifically has not yet released any Vista drivers as far as I can tell).
And how many of those have the source available for auditing, if it came to that?
I don’t know. It depends on whether ATI makes their source available for external auditing. Likely none for the general public. Though source would be nice to have (and great should anyone care enough to start such a project), it isn’t required to check any of the other drivers for the vulnerability. You could just use the same methods Ionescu used.
From whom? Containing what, the same files as in the “official” drivers but “remixed” by a hobbyist cherry-picking which DLLs he or she prefers over whatever the current ones might be? Face it, even so-called unofficial drivers like the Omega ones are nothing like the community open-sourced drivers, and that’s a good thing! Since the community open-sourced drivers are so different from the official binary blobs, there is less likelihood of the same flaws or vulnerabilities existing in both.
You do not have this advantage in Windows drivers, because even if the packager of the unofficial drivers packages an older DLL (presumably not containing whatever popular exploit) there is still the risk of those older DLLs having unknown vulnerabilities*….
So again, what do you do in Windows when this comes up?
–bornagainpenguin
*that is they are unknown to the public perhaps but have been patched by the manufacturer without comment.
A microkernel architecture (and even better, an IO-MMU) would solve problems like this, wouldn’t it?
Considering the absolutely huge amount of closed-source but hardware-privileged code present in the average system, mostly on Windows servers but occasionally on Linux servers too, wouldn’t this be a good argument for microkernels?
Things like organisation and pure stability are admittedly not too much of an issue, given modern networks and fault tolerance. Certainly they are not as important as security, or as the potential for self-propagating code on a trusted network in a situation like this one, only with a bad network driver instead of a video driver.
Isn’t it sort of weird that a network driver has full access to my hard drive?
Also, it does highlight the folly of these secure loading mechanisms and signed kernel modules unless they’re going to try to mathematically prove everything that is allowed to touch the ‘secure’ part of the kernel. They should at least do a lot of security checking; logic would tell me that should be the main reason for secure loading mechanisms, even assuming they’re only used to implement DRM.
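To make the “security checking” point concrete, here is a deliberately minimal sketch in C of the trust decision a loader makes, using a plain hash allow-list rather than Microsoft’s actual certificate-chain scheme. Note that it only proves the module is the blessed build; it says nothing about whether that build contains a flaw like the ATI one, which is exactly the folly being pointed out.

/* Minimal sketch of a load-time integrity check (NOT Microsoft's
 * actual mechanism): hash the module image and refuse anything not on
 * an allow-list.  Build with -lcrypto. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Hash of the one blessed driver build; zeroed placeholder here. */
static const unsigned char known_good[SHA256_DIGEST_LENGTH] = { 0 };

static int module_allowed(const unsigned char *image, size_t len)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(image, len, digest);
    /* Identity only: a match says who built it, not that it's safe. */
    return memcmp(digest, known_good, sizeof digest) == 0;
}

int main(void)
{
    const unsigned char fake_module[] = "not a real driver image";
    printf("allowed: %d\n",
           module_allowed(fake_module, sizeof fake_module));
    return 0;
}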
Also, now that one person can get access to the secure media processing path in Windows Vista, does that mean that all this protected media is capturable and is going to end up downloadable for the masses, who don’t need to be able to do the copying themselves?
This really is the point: digital copying gives you flawless reproduction, so all it takes is one person to crack it and everyone can get a copy. I know that professional pirates spend thousands on their hardware, and as much as the internet contributes to piracy, I wouldn’t be surprised if the old-fashioned copy-and-sell knock-off DVD for £0.80 is still the main source of pirated films. Still, I’m not sure why they advertise in cinemas as though digital copies suffer from analogue degradation…
I’m not a security expert, so take this with a grain of salt. AFAIK, in a microkernel the flaw wouldn’t have allowed direct access to the core of the kernel like this one did. However, it would still let the attacker do tons of bad stuff to your system, and from an end-user point of view really wouldn’t be any better at all. From a DRM provider point of view it probably would have been appreciated.
I don’t know; in theory microkernels are awesome, but in practice the flaws they fix tend to be smaller than you might think, and the performance cost associated with them tends to drive down widespread consumer use.
Wouldn’t the driver in a microkernel not even be in the same space as the kernel, and as such would not be able to wreak any havoc at all?
This is the video driver we’re talking about here. If you can compromise that, you can compromise all other software that presents a GUI on the system (there is always a strong likelihood of vulnerability when a higher-privileged piece of software talks to a lower-privileged one, so the driver could often take the privs of whoever is calling it). Applications cannot and should not try to protect themselves from the graphics subsystem in current OS designs. And the driver controls hardware with a DMA engine, so it can arbitrarily overwrite memory anywhere in the system if it so chooses.
I think a better answer going forward is the Singularity approach of a fully type-safe, compiler-verified runtime system. Then you don’t take the hardware costs of isolation, but things are just isolated by construction. This comes at the cost of everyone having to use one compiler, but that’s true anyway on both Linux and Windows for the most part.
Taking the Singularity approach would perhaps require some redesign of the hardware, because you would need to extend the compiler’s proving abilities to DMA devices, which can arbitrarily write memory on their own.
It’d still be a very difficult task to compromise an entire system if all you have control over is, for example, the OpenGL acceleration, which is where video drivers and user interfaces seem to be headed. The exploits would depend on both a specific piece of UI software and a specific vulnerable video driver. And at the most, it’s going to hurt the particular user logged in and not the entire system and all users. Plus it wouldn’t be able to install a system-wide keylogger to get admin passwords or whatever.
If you read some of the more recent information on micro/nano/exo kernels, you might find the way system components interact somewhat interesting. Fully comprehensive MAC/RBAC seems rather nice.
Security, unfortunately, seems to all be about layers, and the best way to get security is to accept that there’s very little that you can completely trust, including video drivers.
(IO-MMUs provide memory protection to DMA access, by the way)
Hmm, really? I know in practice there are a lot of differences from the theoretical microkernel, mostly due to the difficulties of doing things like that on current computer architectures.
Surely in a true microkernel architecture where the video driver (and video card with the IO-MMU) was completely protected from the memory that had nothing to do with video processing, it wouldn’t be able to do things like directly accessing files and so on.
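For what it’s worth, this is roughly what an IO-MMU buys you, sketched in the style of the Linux in-kernel IOMMU API (the function names exist in linux/iommu.h, though exact signatures vary by kernel version; newer kernels add a gfp argument to iommu_map, and real driver code needs far more setup and error unwinding):

/* Sketch only: once the GPU is attached to an IOMMU domain, it can
 * DMA solely to ranges the kernel has explicitly mapped; writes to
 * any other address fault instead of silently corrupting memory. */
#include <linux/iommu.h>
#include <linux/device.h>
#include <linux/errno.h>

static int give_card_one_buffer(struct device *gpu,
                                phys_addr_t framebuf, size_t len)
{
    struct iommu_domain *dom = iommu_domain_alloc(gpu->bus);

    if (!dom)
        return -ENOMEM;
    if (iommu_attach_device(dom, gpu))
        return -ENODEV;                 /* unwinding omitted */

    /* Map ONLY the framebuffer into the device's address space. */
    return iommu_map(dom, 0 /* iova */, framebuf, len,
                     IOMMU_READ | IOMMU_WRITE);
}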
I do like true microkernels, and I think that people should persevere with them and try to work out their inherent performance difficulties.
Besides, it’s not like we don’t have massively powerful systems that can mask the latencies by sheer power alone.
“Besides, it’s not like we don’t have massively powerful systems that can mask the latencies by sheer power alone.”
Take this with a grain of salt, because I’m far from being an expert, but I think that an important part of the performance impact is due to architecture aspects; today’s chips (well, x86 stuff at least) aren’t really designed to run micro-kernels (context switches are expensive, no real support for messaging, etc.)
BTW, keep in mind context switches are nothing but a waste of energy. Why not minimize them in the first place?
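If anyone wants to put a rough number on how expensive a switch actually is, the classic measurement is two processes bouncing a byte across a pair of pipes, which forces at least two context switches per round trip. A minimal, runnable sketch:

/* Ping-pong a byte between parent and child; each round trip costs at
 * least two context switches.  The figure is a ballpark only, since
 * cache and TLB effects make the true cost workload-dependent. */
#include <stdio.h>
#include <unistd.h>
#include <time.h>

int main(void)
{
    int ab[2], ba[2];
    char c = 'x';
    long iters = 100000;

    if (pipe(ab) || pipe(ba)) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* child: echo it back */
        for (long i = 0; i < iters; i++) {
            read(ab[0], &c, 1);
            write(ba[1], &c, 1);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < iters; i++) {  /* parent: ping, await pong */
        write(ab[1], &c, 1);
        read(ba[0], &c, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per round trip (>= 2 switches)\n", ns / iters);
    return 0;
}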
As I understand it, though I’m not an expert, x86 does provide hardware context switching but no mainstream operating systems use it because it doesn’t save all the registers present (specifically floating point registers) and I think it isn’t as fast as one might suspect.
Context switches are a huge waste of energy and recent schedulers are going a long way to try to minimise unnecessary ones while maintaining good interactivity. Work is also being put into ensuring threads don’t wake up for no reason. I don’t think that x86 makes this easy though. Maybe tasks are unnecessarily granulated into separate processes, particularly in (dare I say it) UNIX style operating systems.
Most operating systems are based around the ideas of a kernel, kernel modules, processes, threads, fibres, libraries. I think that there could possibly be more appropriate types of task than the ones outlined above, and I know research is being done into this sort of thing. Essentially, though, it generally comes down to interactivity vs throughput. If you have a lot of context switches (intelligent or not) you generally have better interactivity but worse throughput. The inverse is true.
Assuming hardware support for message queues and so on, I wonder if there’s any work being done on a completely event based operating system where libraries and apps are implemented entirely as services. That would seem to match UI principles. I think hardware generally operates in an event driven mode, so the way that tasks are implemented as “processes” with context switches would seem to be an inappropriate abstraction.
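As a toy illustration of what “apps as services in an event-based system” could look like, here is the smallest possible dispatcher in C: each “service” is just a handler function keyed by descriptor, and the only scheduler is a readiness loop (echo_service and the single-entry tables are invented for the example):

/* One loop, no per-task threads: services run to completion on each
 * event.  A real system would need timeouts, many descriptors, and a
 * story for handlers that block or misbehave. */
#include <poll.h>
#include <unistd.h>

typedef void (*handler_fn)(int fd);

static void echo_service(int fd)        /* a "service": echo stdin */
{
    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0)
        write(STDOUT_FILENO, buf, n);
}

int main(void)
{
    struct pollfd fds[1] = { { .fd = STDIN_FILENO, .events = POLLIN } };
    handler_fn handlers[1] = { echo_service };

    for (;;) {                          /* the entire "scheduler" */
        if (poll(fds, 1, -1) < 0)
            break;
        for (int i = 0; i < 1; i++)
            if (fds[i].revents & POLLIN)
                handlers[i](fds[i].fd);
    }
    return 0;
}

Note that this also motivates the lock-up question raised in a reply below: if echo_service loops forever, everything behind it starves.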
About the completely event-based OS: You would still want to isolate work into different “processes” such that bugs or malicious code won’t hose the whole system (except for systems in which you can exclude both, but that would only be possible in embedded devices or similar custom systems). That means you get one event handler thread per process. Then some processes would want to do background work – this can be done on an event basis, but this is usually hard to implement since you’d fiddle around with events just to re-create the concept of a background thread. Soon you’re back to the old model.
Singularity and JNode will bring an interesting twist to the whole scene since they don’t require context switches at all. But even JNode (I don’t know about Singularity) follows the old model when it comes to concurrency and the like. Making it completely event-based would bring up other questions, such as: what if an event handler locks up? May another handler be run concurrently, and if so, what about synch locks?
> Singularity and JNode will bring an interesting twist to the whole scene since they don’t require context switches at all
That isn’t quite right. From what I understand, Singularity does away with virtual memory (relying on code verification techniques and managed code to keep things tight) but doesn’t do away with preemptive multitasking or get rid of the concept of a context switch.
The situation is a lot more complex. First, context switches waste time in different actions (state saving and restoration, process management overhead, cache/tlb misses, etc.) that all have to be considered independently. This is further complicated by the fact that IPC mechanisms in a microkernel can be implemented in a lot of ways (for example, remote procedure calls can be implemented without thread switching, thus saving time).
Secondly, microkernels do not need to do context switching as often as usually claimed, *provided* that programmers get used to asynchronous system requests. If several system requests do not depend on each other’s results, they can all be issued with only a single context switch (or at least, a number of context switches equal to the number of targeted service processes); a sketch of this follows at the end of this post.
Third, one has to consider that even microkernels aren’t all blue sky. While claimed to be much easier to develop for, microkernels produce nondeterministic behaviour between the different service processes. Stallman once talked in a speech about this problem and how it was one major issue with HURD. If there is any one argument that I could bring up against microkernels, then it’s this one. It’s not an unsolvable problem, but a really hard one. However, it assumes that the processes in a microkernel run concurrently, which is by no means set in stone (remember remote-procedure-call-based IPC).
To sum it up, there is no such thing as “the” microkernel. Saying that microkernels are faster, slower, more secure, less secure, or whatever simply ignores that microkernels themselves are a huge range of possibilities, and much of that is still research area.
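To make the asynchronous-batching point above concrete, here is a minimal sketch in C. Everything in it is invented for illustration; the request format, the port number, and the send_batch primitive (a stub standing in for whatever single-message IPC the microkernel provides). The point is only that three independent reads cost one switch, not three.

/* Batch independent requests into ONE message to the file service,
 * paying for one context switch instead of three. */
#include <stddef.h>
#include <stdio.h>

enum req_op { REQ_READ, REQ_WRITE };

struct request {                        /* invented message format */
    enum req_op op;
    int handle;
    void *buf;
    size_t len;
};

/* Hypothetical IPC primitive: deliver the whole array in one message.
 * The stub just stands in for "one call, one switch". */
static int send_batch(int service_port, struct request *reqs, int n)
{
    printf("port %d: %d requests, one switch\n", service_port, n);
    (void)reqs;
    return 0;
}

int main(void)
{
    char a[512], b[512], c[512];
    struct request batch[] = {          /* three independent reads */
        { REQ_READ, 3, a, sizeof a },
        { REQ_READ, 4, b, sizeof b },
        { REQ_READ, 5, c, sizeof c },
    };
    return send_batch(42 /* file service */, batch, 3);
}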
You’re assuming the driver supported the protected media path and that the vulnerability allows access to the unencrypted stream that is running in a protected process. PMP requires that components have an additional PMP-specific signature from Microsoft.
Also Purple Pill doesn’t bypass PMP:
“Vista is perfectly aware that an unsigned driver has been loaded: you will even get a warning a bit after the driver is loaded. This also means that PMP will become aware that the driver is loaded, and disable high-definition media playback. This means that this tool will not help you bypass DRM in any way, because the original Vista protection mechanisms are still in place. Note that on Vista 32-bit, this behavior already exists by default in the OS, so it is not a “bug” of Purple Pill.”
http://blogs.zdnet.com/security/?p=427
The articles are also inaccurate in stating MS can’t revoke ATI’s driver(s). MS can ship an updated driver via Windows Update, then blacklist any vulnerable drivers. This is exactly what the revocation mechanisms are there for.
About that Purple Pill description, I interpreted it as saying that the person who made the Purple Pill software made sure it couldn’t be used to bypass DRM (I’d imagine to cover his ass legally), rather than that the exploit itself couldn’t be used in such a way.
Not much info seems to be known about the tool, but I thought that it could be used to inject other code into the kernel and have it executed as though it was running under the ATi driver, which would be PMP signed, wouldn’t it?
Though, it’s fair to say I don’t know much about PMP; indeed, I’d forgotten its name. I’m curious as to how the kernel protects the unencrypted PMP stream from other kernel modules that theoretically should have access to the whole of the RAM.
He could likely read the memory of a protected process on 32-bit Vista by installing a kernel mode driver. In fact, he’s done work in that area before. However, this would not automatically give him access to the protected processes of the PMP (at least not while they are processing media). Signing isn’t enforced for kernel drivers on 32-bit because of legacy, however signing is enforced for PMP components (including video/audio drivers) on both 32 and 64-bit.
As stated in the article, PMP detected the presence of Purple Pill because not only is it not signed (which would mean the kernel wouldn’t even load it in the case of Vista x64), but it also isn’t signed with Microsoft’s special PMP-attributed signature. PMP checks that all components in the media path, including codecs, are signed and PMP attributed. If it finds any that aren’t, as with Purple Pill, PMP will not start playback of protected media.
… logs out and restarts X11 w/o XGL to be able to run a 3D game (sort of works, a bit laggy)… accidentally hits suspend-to-RAM key so laptop hangs.
… reboots and selects outdated Linux kernel + AMD/ATi driver versions… loses support for Xorg 7.3, loses new kernel improvements (virtualization, SD-card reader etc), loses track of current and maintained versions of distro packages (also needs to downgrade wireless driver version etc)… gets XV accelerated video and suspend functionality back…
Ha ha! Eat your hypothetical exploit Windows L00sers!
To me this news item is more about bad, buggy ATI software than about so-called Vista flaws. Yet here come the fanatics to bash MS at every single opportunity. It is true, though, that video drivers for Vista are less than ideal right now. I just hope things improve – a lot.
This isn’t about FSF or open source or Linux. It’s about ATI’s buggy driver software and perhaps their lack of decent quality control. Maybe it’s also about MS putting a lot of unusual requirements on Vista video driver software. Market forces will work this out, I suspect.
“To me this news item is more about bad, buggy ATI software than about so-called Vista flaws. Yet here come the fanatics to bash MS at every single opportunity.”
What a shame. The problem, people often say, is binary drivers; it would be better to point at that than to just call people fanatics.
There was a rather nasty flaw in the Nvidia binary driver on Linux that existed for 2 years, something unheard of in all the other drivers in Linux.
“To me this news item is more about bad, buggy ATI software than about so-called Vista flaws. Yet here come the fanatics to bash MS at every single opportunity. It is true, though, that video drivers for Vista are less than ideal right now. I just hope things improve – a lot.”
This is not only an ATI driver flaw but a Vista kernel flaw. Vista’s whole driver-signing-as-security scheme has just been smashed. Instead we need security that limits access to only the needed files and services, not something vulnerable inserting code labeled as “trusted” that can then access anything on the system and is in fact NOT trusted. To be really secure you cannot trust anything.
To be really secure, you should just turn your computer off… because you simply can’t trust it.
Of course this is a tradeoff. Drivers that actually have functionality (and Video Card Drivers have crazy amounts of functionality) have a high risk of containing subtle security bugs. This isn’t even a subtle bug, if it’s as bad as it seems. Maybe there’s a requirement to be admin to call this API (you might need special privileges). But in any case, you already need to have local code running on the machine.
If you’re running a server on Windows, just use the in-box VGA driver and you’ll be safe. You don’t really need a gfx card on your server anyway. On a multi-user enterprise computer, go with Intel or Matrox, or some other low-powered graphics card. They probably have significantly simpler drivers and are more likely to share source with Microsoft for auditing.
“effectively defeating the new anti-rootkit/anti-DRM mechanism built into Microsoft’s newest operating system.”
Some people will find this a very useful feature. If this actually makes it possible to break DRM, it’s just more proof that DRM only hurts paying customers and does nothing to prevent the bad guys from pirating software and media.
DRM is unaffected for reasons expressed in a previous post:
http://www.osnews.com/permalink.php?news_id=18437&comment_id=262586
More ATI news that isn’t good.
Reminds me of the old joke.
“ATI cards are like a city Bus. They are Big, Red, and have TERRIBLE drivers.”