There are several ways to run Windows programs on Linux (virtualisation, WINE), and the reverse really isn't a problem either, with Cygwin or, better yet, native ports thanks to the Windows variants of Gtk+ and Qt. Still, what if Windows support was built straight into the Linux kernel? Is something like that even possible? Sure it is, and a group of Chinese developers figured it would be an interesting challenge; they call it the Linux Unified Kernel.
I must admit I hadn't heard of this project until now, but I was intrigued right away. It's called the Linux Unified Kernel, and it aims to combine the Linux kernel and the Windows NT kernel to effectively create a new kernel that supports both Linux and Windows binaries natively. They do this by implementing Windows NT kernel mechanisms in the Linux kernel using kernel modules: NT's process management, thread management, object management, virtual memory management, synchronisation, system calls, the Windows registry (a can of worms, anyone?), WDM (the device driver framework), the Windows DPC mechanism, and so on. Even Windows drivers would work using this new hybrid kernel (a little kernel geek humour, there).
The new kernel actually supports two sets of system calls, one for Windows and one for Linux. Since I'm not a programmer (let alone a kernel programmer), I have to resort to Wikipedia to explain how this all works. If I understand it correctly, Windows binaries reach the syscall table via a different software interrupt than Linux binaries do (int 0x2e vs. int 0x80). I'm sure some of you will be able to provide more insight into what, exactly, this all means. When it comes to the userland, the LUK project does not write its own material; instead, it relies on the GNU project for the Linux side of things, and on Wine and ReactOS for the Windows side.
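For the curious, here is a rough sketch (my own, not LUK code) of what the Linux side of that looks like on 32-bit x86: the syscall number goes in EAX, the arguments in EBX/ECX/EDX, and int 0x80 traps into the kernel. A Windows binary does the analogous dance through int 0x2e with NT's own service numbers, which is presumably how a unified kernel can tell the two kinds of callers apart.

```c
/* Minimal sketch (32-bit x86, GCC inline asm, build with -m32) of how a
 * Linux binary traditionally enters the kernel: syscall number in EAX,
 * arguments in EBX/ECX/EDX, then "int 0x80".  An NT binary does the moral
 * equivalent with "int 0x2e" and NT's own service numbers.  The numbers
 * below are the real Linux i386 ones; nothing here is LUK code.
 */
#include <stddef.h>

static long linux_write(int fd, const void *buf, size_t len)
{
    long ret;
    asm volatile("int $0x80"
                 : "=a"(ret)                    /* return value in EAX   */
                 : "a"(4),                      /* __NR_write on i386    */
                   "b"(fd), "c"(buf), "d"(len)  /* args in EBX, ECX, EDX */
                 : "memory");
    return ret;
}

int main(void)
{
    linux_write(1, "hello via int 0x80\n", 19);
    return 0;
}
```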
The people behind LUK hope that eventually, their code will be merged into mainline. There's still a lot of work to do before every Windows application will just work out of the box, but progress is steadily being made, and the Chinese Linux distribution MagicLinux even has a special version built on top of LUK. Currently, it's x86-only, but work is underway to port it to the Loongson architecture. The Loongson 3 processor includes 200 additional instructions to speed up x86 translation, in case you're wondering how MIPS would run Windows.
In any case, version 0.2.4 of LUK was released recently, so if you’re adventurous, you can get it from their download page.
What about viruses and malware that already exist for Windows? Will they affect this "hybrid" in the same way they affect Windows?
It probably would, since the aim is to be able to run x86 Windows binaries natively.
Viruses and all types of malware that already exist are almost exclusively x86 Windows binaries.
Having said that, LUK should be able to run anti-malware programs natively also, just as they run on Windows. Perhaps this one would be an option:
http://www.moonsecure.com/
If running anti-virus and anti-malware addons is good enough for millions upon millions of Windows users, I don’t see why anyone would (justifiably) object to having to do it also for LUK.
PS: Thom … you did a good job of polishing this article from the original submission.
It probably would, since the aim is to be able to run x86 Windows binaries natively.
Indeed, Windows viruses and malware would work just as well as any other app. The difference is that we have access to all the source-code and such so the devs can fix bugs and harden the default settings. And you don’t need all the services running under LUK that you have under “real” Windows.
Personally, if they can get this LUK thing working properly then I welcome it, as long as they/distro makers make it easy to turn the NT-kernel parts on and off. I know many people don't want those, reasons aside, and at least on server machines it is better to leave the NT kernel out. The thing I am most looking forward to is the ability to use Windows drivers for the few odd hardware devices that don't have Linux drivers available.
Well, Windows Vista and onwards are pretty darn secure – so secure, in fact, that there hasn't been a major breach since Vista got released (and as always, Conficker doesn't count: it only affects machines that weren't updated, and the patch was out eons before the worm was created).
In other words, it all depends on how well the guys behind LUK handle security barriers. Windows is pretty advanced when it comes to that these days, so I suggest the LUK people take it all just as seriously.
I disagree. Conficker does count. A good security policy will prevent unknown threats as well as known threats. Most Windows malware comes directly from patches released by Microsoft. If you don't count non-zero-day vulnerabilities, then Microsoft has had a good security track record for years now, but reality says otherwise. It certainly is disappointing, though, because Vista does have advanced security features, but they are not implemented well. Windows has a track record of implementing security features and then scaling back their policies to make the system easier to use, but that also makes it easier to exploit.
What? That's the single most trollish statement I have ever read. Of course a good security policy will prevent unknown threats, and a good security policy includes KEEPING YOUR SYSTEM UP TO DATE. This applies to all OSs, including Linux and OS X. The patch for Conficker came out in October; if you caught Conficker in February, then you have no one to blame but yourself.
And how you manage to believe that MS patches are the source of most of the malware on Windows is beyond me. A bit of explanation may be in order, I think, before anybody takes such utterances seriously.
It's true that it's the user's responsibility to keep their system up to date with patches. But I would imagine it would be more difficult with LUK. If LUK were emulating Windows perfectly, they'd have to reverse-engineer every security update Microsoft released to keep non-zero-day exploits in Windows from being zero-day exploits in LUK.
He's right that most malware that actually exploits a vulnerability (as opposed to the typical trojans, which just exploit users) is based on virus writers reverse-engineering Windows Updates (i.e. it's non-0day).
I think Conficker is a good example of how policy can help protect the user. I believe that Vista and later OSes were less vulnerable to Conficker since the endpoint that was being attacked was locked down to be inaccessible to unauthenticated remote connections.
I think Conficker is a good example of how patching systems is important, because I don't think there would be any use in developing an exploit based on a patch if people just applied the patch when it was distributed through Windows Update.
If everybody had installed the patch, then Conficker would not have existed, even if the patch had been reverse-engineered, because it would not have been able to spread.
The argument that most malware is developed by reverse-engineering patches is a very weak argument.
I have to ask what planet you are living on where this is EVER going to happen? It's pure fantasy to think that EVERYONE is going to update ALL THE TIME. As I said, it's not always possible either. Some people have very expensive software that will not work with an update, and some software vendors take advantage of this by forcing you to purchase new software. Microsoft should know this by now. The whole point of my argument was that Microsoft has the capability to lock down the system to a much tighter degree, but they don't, mainly because of legacy issues. If they took advantage of what is already there, they could prevent whole classes of exploits.
People always tend to gloss over exploits that have been patched but OS vendors are never going to win that race. You need preventative measures that are based on general policy not specific exploits.
I'm from a planet where I think people are responsible for their own actions. Microsoft did their part; they fixed the hole. If even only 90% of users had patched, it would have mitigated Conficker's effects.
There is no evidence that this patch affects any software in an adverse way. It's not XP SP2, it's just a small patch. That argument is pretty weak.
Vista, Win2k8, and Windows 7 are locked down to a much greater degree, but Conficker wasn't caused by the computer not being locked down; it was caused by a vulnerability. Having a locked-down XP box wouldn't have helped. Applying the patch would have.
That’s all well and good until a botnet takes down your website.
SP2 did break a lot of things, and you're just glossing over that fact. Everyone who had to deal with it knows what I am talking about. Service packs aren't the only patches that break things, either; a cursory Google search makes that apparent.
I guess you missed my point. Vista and Win7 have better security models but they are implemented poorly.
Well, it's obvious you're not a security expert. There are plenty of ways to mitigate attacks without a patch to the vulnerable application/service. For example, a good security policy would have prevented Conficker from downloading additional code, scanning for infected hosts, and disabling anti-malware, which would have effectively mitigated its spread. Conficker was the result of a buffer-overflow vulnerability, which has several mitigation techniques. In fact, Windows DEP should have stopped Conficker, but it is so shitty that the Conficker worm was able to disable it.
Did you read? I was saying this patch is NOT XP SP2. I am not glossing anything over. There is no evidence that this patch is disruptive. I was comparing it to the very disruptive changes that SP2 introduced.
I guess you missed my point, the changes in Vista and 7 mitigate the conficker hole, so I guess they aren’t implemented that badly.
That's why it's called a security hole. No security policy can protect against everything, and MS did what they had to do: issued a patch. Every OS has security holes, and every OS manufacturer issues updates. A good security policy also includes INSTALLING PATCHES. Every OS has buffer overflows. The only difference between this hole and the big ol' gaping hole in OS X's version of Java is that MS actually fixed it.
There is no way people can dodge the responsibility for the fact that a buffer overflow existed, MS patched and then millions of people did nothing. NADA. And then they feel the need to blame MS.
DEP is not turned on by default in XP. If you are talking about proper defaults, then maybe you do have a point. Except that they ISSUED A PATCH 9 MONTHS AGO.
The patch would have mitigated the risk, regardless of the defaults in windows. Some people however, feel the need to deflect the blame from the real problem, which is user education.
Security policy should be layered. Relying on updates alone is foolish. Microsoft's patches sometimes break things and cannot be applied on some systems. It's also incredibly naive to think that everyone is going to update their system regularly. It would be nice, but it's not reality, and it can affect other people in the form of a DDoS. A proper security policy would prevent Conficker from spreading even without a patch. That's why security layers are important.
Where do you think Conficker came from? It was reverse-engineered from Microsoft's patch. That's generally how Microsoft exploits are created. There are a lot fewer zero-day exploits.
The aim, apparently, is to achieve LUK via loadable kernel modules.
http://en.wikipedia.org/wiki/Loadable_kernel_module
I don’t know for sure, but I would say that if you didn’t want a given module, you could just blacklist it.
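For what it's worth, blacklisting a module is a one-line config entry. The module name below is purely hypothetical, since I don't know what LUK actually calls its modules:

```
# /etc/modprobe.d/blacklist-luk.conf  (hypothetical module name)
blacklist unifiedkernel
```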
Indeed.
One could argue that if this works properly, it would completely undermine some if not all of the few remaining barriers to Linux adoption.
So long as binaries don't get executable permissions based on extension, I don't think Linux/UNIX will have the same level of virus infestation on users' desktops.
The reason being, if users have to set an executable flag on files, worms (such as dodgy e-mails / web scripts) couldn't infect a Linux machine. Thus the only way a host machine could become infected is if the user intentionally runs a Trojan (e.g. dodgy warez or porn sites that "require" additional codecs to be installed) – which, granted, will happen, but you can't protect against that kind of stupidity on any kind of OS.
So in short – I don't think integrating Windows binary compatibility into the Linux kernel automatically makes Linux vulnerable to Windows malware.
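To make the executable-bit argument above concrete, here's a tiny sketch (nothing LUK-specific assumed) showing that exec() simply refuses a file nobody has marked executable:

```c
/* A tiny sketch of the point above: on Linux/UNIX a freshly saved file
 * (e.g. an e-mail attachment) has no execute bit, so execve() refuses to
 * run it with EACCES until someone deliberately marks it executable.
 * Compile it and point it at any non-executable file to see the refusal.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    struct stat st;
    if (stat(argv[1], &st) == 0 &&
        !(st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH)))
        printf("%s has no execute bit set\n", argv[1]);

    char *args[] = { argv[1], NULL };
    execv(argv[1], args);                           /* only returns on failure */
    printf("execv failed: %s\n", strerror(errno));  /* typically EACCES        */
    return 0;
}
```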
I could just imagine trying to mix Microsoft patented and copyrighted technologies into Linux distros. That is all we need. Next thing, we would be sending payment to Redmond every time we install a Linux distro.
This sounds like an attempt at a quick fix that would invite nothing but time-sucking problems, security issues, and legal disasters into the pristine Linux landscape. Keep your distance with that stuff! It ain't goin' on my box.
Technically, if a virus relies on a bug or exploit in Windows, it won't work on Longene, just as it won't work (or is very unlikely to) on ReactOS. If it just uses standard Win32 APIs, it will work. So that's a yes and a no.
I don’t see the point.
Linux is (will be) usable for most users (within the next few years).
ReactOS will be usable in a few years more.
So make your choice, but don’t build an unsupportable monster OS.
What’s next? Add OSX?
I don’t see the point.
You’re lacking imagination.
I'd imagine that the point is being able to run the vast combined corpus of Windows software and Linux software, including hardware drivers, natively on the one machine at the same time. In particular, this effort has the Loongson processor in mind, I would say. It is a Chinese project.
http://en.wikipedia.org/wiki/Linux_Unified_Kernel#Development
If this all works, it would almost cancel the technological head start (due to the corpus of available applications) that x86 and x86_64 currently enjoy over Loongson.
That would be a very good outcome for China.
As for this bit:
I believe that this same mechanism is also exactly how Wine works:
http://en.wikipedia.org/wiki/Wine_(software)
I do. And I’m mightily afraid.
Afraid of what? I'm looking on this rather dubiously myself, but I don't really see anything to be afraid of… the one thing that does concern me is that we'll have fewer alternative applications developed, since the Windows ones will just run… that has implications for the free software ecosystem, and I'll be honest, the last thing I want to do is run buggy Windows drivers alongside my Linux kernel; let's not forget that badly-written drivers are one of the main causes of Windows instability in the first place.
I don’t see anything to be afraid of in this project though… it’ll most likely have as much success as Wine has now anyway. Am I missing something to be very afraid of?
Why not? I bet they’ve left the possibility open to support any operating system.
I think there’s another project to do the same thing – have programs from any operating system be able to run – but I think it’s based on a custom kernel and not directly on Linux.
It would be quite interesting to get LUK into kernel.org!
Why not? Remember, OSX IS UNIX! OSX is 100% source compatible with Linux, Solaris, BSD, … (well, for non-GUI calls), such as pthreads, fork, mmap, etc., and all the other UNIX goodies that are not available natively on Windows. (Side note: why are the Windows low-level APIs so freaking ugly compared to their UNIX counterparts???)
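To illustrate that source-level compatibility (this is just a generic POSIX sketch of my own, nothing OSX-specific assumed), the same fork/mmap/pthreads code compiles unchanged on Linux, OSX, and the BSDs:

```c
/* The same POSIX calls mentioned above -- fork, mmap, pthreads -- compile
 * and run unchanged on Linux, Mac OS X, and the BSDs.  Link with -pthread.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

static void *worker(void *arg)
{
    printf("hello from a pthread: %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    /* Shared anonymous mapping, visible to the child after fork(). */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANON, -1, 0);

    if (fork() == 0) {
        strcpy(shared, "written by the child");
        _exit(0);
    }
    wait(NULL);
    printf("parent sees: %s\n", shared);

    pthread_t t;
    pthread_create(&t, NULL, worker, "same code everywhere");
    pthread_join(t, NULL);

    munmap(shared, 4096);
    return 0;
}
```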
As for GUI stuff, OSX is kind of, essentially source compatible, via GNUStep.
Now, for binary compatibility there is a slight issue, in that Linux natively supports the ELF and a.out executable formats while OSX uses Mach-O, and they are slightly different. The actual executable code, i.e. the instructions, is identical (they are both x86); it's how binaries are loaded into memory and how external linking is resolved that differs.
However, although I am not an expert, I suspect that it probably would not be that big of a deal to add Mach-O linking support to Linux, much like BSD supports a Linux binary compatible runtime. Considering that Linux supports ELF and a.out, I suspect that it should be possible to add some code to the linker to also support Mach-O.
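For a feel of what those "slightly different" formats look like from a loader's point of view, the publicly documented magic numbers at the start of the file are enough to tell them apart. This is just an illustrative sketch of mine, not anything a real loader would stop at:

```c
/* A minimal sketch of what a loader sees first: ELF files start with the
 * bytes 0x7F 'E' 'L' 'F', while little-endian 64-bit Mach-O files start
 * with CF FA ED FE (MH_MAGIC_64) and 32-bit ones with CE FA ED FE.
 * Everything after this magic -- segment layout, dynamic linking info --
 * is where the two formats really diverge.
 */
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    uint32_t magic = 0;
    fread(&magic, sizeof magic, 1, f);
    fclose(f);

    if (magic == 0x464C457Fu)                          /* "\x7fELF" read LE */
        puts("looks like an ELF binary");
    else if (magic == 0xFEEDFACFu || magic == 0xFEEDFACEu)
        puts("looks like a Mach-O binary");
    else
        puts("neither ELF nor Mach-O (or a fat/universal binary)");
    return 0;
}
```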
Support for OSX binaries would otherwise be easy, but AFAIK there is no OSX-compatible userland available, whereas for Windows compatibility one can just use Wine. So, being able to run OSX binaries under Linux would require a lot, LOT more work.
Besides, there is little incentive for that. There’s a whole lot more software for Windows than there is for OSX, and most of OSX software is already available for Windows as well. The same applies to drivers also; there are far fewer drivers for OSX whereas the ones that exist are also available for Windows.
Yeah, you probably could implement a method for running OS X applications on Linux. There are three basic approaches I can think of.
First, the WINE approach. You implement a user-space application that can load and run Mach-O binaries, and service OS X system calls. Then, you need a reimplementation of the OS X userland libraries.
Second, paravirtualization. Take the existing OS X kernel, make it work under KVM with appropriate paravirtualization hooks, and use that to run OS X binaries. Then, you need a reimplementation of the OS X userland libraries.
Finally, the approach that Solaris and FreeBSD use to run Linux applications, and the approach LUK takes to run Windows applications. Natively implement the system calls in the kernel, providing the ability to run Mach-O binaries directly. Then, you need a reimplementation of the OS X userland libraries.
In any case, you still need the OS X userland libraries. Those libraries aren’t nearly as numerous as the Windows ones, and at least the low-level libraries are open-source, but there are still a lot of them. Gnustep provides some of them, but certainly not all of them.
Probably a more realistic goal than WINE or ReactOS though.
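As a side note on the first approach: Linux's binfmt_misc already provides the dispatch half, letting you register the Mach-O magic bytes and hand matching binaries to a userspace loader. The loader path below is entirely made up; only the registration syntax and the magic bytes are real, and writing the loader itself is the actual hard part.

```sh
# Register little-endian 64-bit Mach-O binaries with binfmt_misc so the
# kernel hands them to a userspace loader.  /usr/local/bin/macho-loader is
# a made-up name for a program that does not exist yet.
mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc 2>/dev/null
echo ':macho64:M::\xcf\xfa\xed\xfe::/usr/local/bin/macho-loader:' \
    > /proc/sys/fs/binfmt_misc/register
```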
Because Win32 is a C++ API.
That's a really good question: I worked with C++/Win32 for one homework assignment in college… and swore, "never again." That really is a fugly API. Amusingly, most of the hard-core Windows advocates in the class had the same opinion. They were used to Java and had never actually directly used the Win32 API either; they were surprised at how hideous it actually turned out to be.
Things that are COM/ActiveX appear to be C++ based, though they’re really quite agnostic to the host language: they just have a specific binary layout that’s most closely associated with C++, but can be done in other languages if you desire.
Win32 itself, for the low-level stuff outside of 3D graphics and some other COM-based things, is C-based: perhaps you're thinking of the added-on ActiveX libraries, DirectX, and more recently some of the GUI controls, unless you're confusing MFC with Win32. MFC is a bad C++ wrapper around the Win32 API that tends to add as much trouble as it claims to solve.
And AIX, OS/2, MacOS Classic, Solaris, etc. …wait. We already had this some 18 years ago, anyone remember Taligent?
Too bad we’ve been hearing this mantra for the past ten years!! I think adding NT compatibility to the Linux kernel by hybridizing them is a great idea! That being said, however, people should be careful what they wish for… The myriad viruses, malware, and other Windows weaknesses would readily transfer to our beloved OS should Unified Linux actually achieve its goal.
You are right not to see any point, because there isn't one. Windows emulation in a hypothetical Linux of the Desktops would be a sort of DOSBox-meets-Wine at most. Support shouldn't be anywhere close to the kernel, and processor emulation would be provided so that emulation isn't limited to x86-compatible processors. Performance wouldn't be an issue because new Windows programs would be a thing of the past.
If Linux needs a (working) ReactOS kernel to reach the desktop – that is, if it needs to be Windows – why not just use the free Windows-like OS ReactOS and forget about POSIX, X11, and command lines altogether?
My thoughts exactly.
Why not OSX? Free and OpenBSD (not sure about Net) have Linux binary support already.. So what would be wrong with Linux getting BSD (Darwin/OSX) support?
NetBSD has had Linux binary support for ages. There was also work on Darwin binary compatibility on NetBSD, maybe someone will resurrect it one day. It got far enough to run X:
http://hcpnet.free.fr/applebsd.html
It seems to me that the idea is not all that “new”, it’s just that it’s the first major discussion of doing it with Windows binaries.
For years, Linux has (had?) an Intel Application Binary Interface (ABI) layer that let it run SCO and UnixWare applications directly. Similarly, the BSDs have had Linux, SunOS, Solaris, and Darwin compatibility, depending on flavor and processor.
I agree completely: while it’s a nifty research project, it sounds like it’d be an unwieldy, unmaintainable, impractical monster. Between WINE, ReactOS, and the maturity of virtualization technologies, I’m not convinced there’s a practical call for it.
Hell, you could just dual-boot, or virtualize Linux from Windows, if high performance and compatibility (for your Windows apps) are really issues. I'm running Vista on my main/gaming rig, but I have VirtualBox and Sidux Linux installed for when security and productivity are important: Linux virtualizes quite well – especially flavors that are designed to be light-weight. I'm doing the reverse at work, virtualizing XP on a RedHat box, and that's also working out fine.
If you really, really want to have access to the Win32 API from inside a linux environment, I’d bet that WINE is still a better bet — especially if we could coax desktop environment providers into integrating WINE support to make the use of Windows apps more transparent.
As a final note, I'm also not convinced that they'll be able to pull off loading Windows drivers – and, even if they can, I doubt they'll be able to get them to expose functionality to the "Linux half" of this new monster kernel.
Linux finally gets the BSOD!!
Well, it somewhat already has one; it's just not blue, and it's called a kernel panic.
It is possible to have a real BSOD with the new kernel mode-setting (KMS) support in Linux.
I think this is a very interesting project. However, what is even more interesting is the cosmic joke behind it:
So a passionate Anti-Microsoft Linux user and a passionate Anti-Linux Windows user walk into a bar ….
If Windows drivers were able to load alongside Linux drivers for devices that don't have a Linux driver, a lot of headaches would go away (at least for me).
You could have printers and add-ons that weren't supported before automagically supported, along with software that runs great on Windows now running on a Linux distro.
I greatly welcome that addition because a whole new world opens up now.
Not to rain on your parade too much, but printer drivers do not live in the Linux kernel. And to be complete, video drivers don't live in the Linux kernel either – well, until recently anyway. They are moving a small part into the kernel now.
Not to rain on your parade too much, but LUK could probably accommodate running Windows printer drivers.
To further muddy this picture … Linux rather than Windows probably has the more complete set of printer drivers. Since OEMs are expected to write Windows drivers, currently shipping versions of Windows will typically fail to have drivers for a lot of hardware that was out of production when the version of Windows first came out. Where is the incentive for a printer OEM to write a driver for Vista or Windows 7 for a printer model they stopped making (and therefore selling) some years ago?
This situation is exacerbated on Windows because both Vista and Windows 7, I believe, will not install the XP drivers that are typically all that the printer actually shipped with, and the Linux drivers won’t work on Vista or Windows 7 either.
We only need one OS to run system-specific applications.
Making the kernel bigger and doing the same thing twice (or more) isn't a good idea…
Using software interrupts for system calls is the old and slow way of crossing the user/kernel boundary. Modern Linux kernels on the x86 use the sysenter instruction (if available on the hardware) and I imagine that Windows does the same thing. Resorting to software interrupts is going to have an adverse effect on performance, especially for system-call heavy workloads.
On a different subject, I can understand the appeal of this project from a research point of view, but fail to see any real advantage coming out of it. There are proper (TM) ways for running Windows applications on Linux and vice versa (i.e., compatibility layers such as WINE) and for running multiple OS kernels (virtualisation).
This is a kernel module which does the process and thread management, as opposed to a userspace wineserver. Most of the rest of WINE is used as-is, and without the resource bloat you'd see elsewhere. I actually think this is pretty darn handy, though I'll want to see this stuff secured.
So what? I see no advantage in that. Performance-wise, the only reason to implement something as a kernel module is to reduce the number of user/kernel mode switches and data copies, which I don’t think that you achieve in this case. Other than that, the kernel has no magic power – only the nightmare of having more code executing at ring 0.
As far as I can tell, performance has never been a shortcoming of WINE. The only problem was (and, to a large extent, is) lack of completeness in the implementation.
I think it would be much easier – well, if somebody knows how to do it – to add a Linux personality to the NT kernel than to add the whole NT kernel to Linux. It's a bit like the Linux personality for the Mach kernel.
That has already been done with coLinux, and there are several distributions that support coLinux.
LUK would be a sort-of “inverse coLinux”, wherein Linux was the core OS and the NT kernel extensions were the tacked-on bits.
The major, major difference in these approaches would be that LUK would still give you an open source kernel, whereas coLinux only gives you the Linux add-ons as open source.
That has major implications from a licensing perspective. You would no longer require a Windows license to be able to run applications designed for Windows. You wouldn’t have to put up with WGA. There wouldn’t have to be a “Windows Update” backdoor. Updates to LUK itself could be delivered via Linux distribution repositories.
Finally, software vendors who wanted to easily extend their customer base could prepare additional (binary only if they wanted) repositories for their applications (without having to re-write anything), so that all applications on LUK machines could still auto-update via the one updater.
Well, nothing wrong with this project. It's just highly unlikely to ever become an officially merged part of the kernel.
In the end it appears just to consolidate multiple Windows-compatibility projects into kernel space for performance.
Yay! In 5-10 years, when this magnificent kernel is released, we'll be able to run games and apps like Photoshop on Linux.
I think it's fair to say this has racked up a fair bit of interest, so please keep us informed about the progress of this project!
Thinking into the future somewhat, I can see this becoming a fairly big thing (in both good and bad ways, of course). However, good luck to them! Their perseverance in getting this far with such a project is impressive at a minimum.
Sam
The NT architecture has the capability to add "Environment Subsystems" that allow it to run applications from other operating systems.
This seems like adding that capability to the Linux kernel. I think that would be a cool feature to add to the Linux kernel. You could run applications from different OSs without the need for a relatively slow VM.
NT’s environment subsystem is in userspace and so doesn’t cover hardware drivers and stuff. Also, the subsystems are completely isolated, so you couldn’t do something like create a WDM userspace driver in the NT subsystem to provide hardware access to a Linux subsystem application. It appears that with this solution, you could just use NT drivers in Linux.
Is it a good idea to mix ketchup with ice cream, to explore a wider range of tastes? Or do you want to wear two different pairs of pants at the same time, to feel more flexible? Such mixes are not good. It wouldn't be GNU/Linux and it wouldn't be Windows. Personally, I prefer both, but separately. It's better to have one solution for one purpose than to have universal tools.
But the idea is interesting indeed and that’s all for me.
What about keyloggers, adware, spyware, worms, viruses and other malware crap? Are they supported “out-of-the-box” by such kernel?
I totally see the point as well as many others here…just to see if it can be done!
It would be cool to see someone do it! Kind of like running MacOSX on a non-Apple Intel box.
I am a firm subscriber to a quote from Linus where he said (to paraphrase) that the user shouldn't have to worry about which OS s/he is using. I think that just as an academic exercise it is an interesting concept, but also, in keeping with the aforementioned quote, just tackling the issues involved in making this LUK come to pass would let us learn what it might take to become a truly OS-independent world…
Of course, I still believe that humanity is basically good while the people suck, so what do I know?
Just my 2 cents
Cooperative Linux http://www.colinux.org/ allows running the Linux kernel on Windows, to much the same effect as this project: running Linux and Windows binaries side by side.
The same effect perhaps, but a totally different set of implications. With colinux, you still need a Windows kernel. Your OS is still closed source. You are still subject to being audited by the BSA or some similar organisation.
With LUK (if it works) you would still be running an open source kernel. Unlike colinux, or any virtualization solution, with LUK you would not need a Windows license at all.
If I understand correctly, this hybrid kernel handles things just like the FreeBSD kernel, which allows you to run Linux binaries.
But an NT kernel implementation would not suffice. To run Windows apps, you need to implement all the Win32 subsystems, including DirectX.
For all those out there wondering how hard it'd be to support Mach binaries: it's not very difficult at all. It has already been done on BSD. They got the OSX-bundled X server to load on NetBSD years ago. They cancelled the full Mach-O binary compatibility layer due to lack of interest; of course, this was before the x86 move. There's also a SoftPear project, unrelated to the PearPC emulator, which was working on making PowerPC Mach-O executables link/run on x86. I tested it a while ago and was able to run PPC command-line Mac applications. It should be possible to take the work of both projects and get it going on x86 Linux.
http://hcpnet.free.fr/applebsd.html – mach-o bsd compat
http://softpear.sourceforge.net/ – ppc osx binaries on linux
I don't see how this could work without choosing one kernel to take precedence over the other (in this case, I guess it would be Linux?). A program runs either in user space or in system (kernel-mode) space; moving there requires the program to make a system call interrupt to the OS, and the OS moves it there itself.
I just don't see how two kernels could "coexist" with all the interrupt handling coming from peripherals and so on… isn't the goal of a kernel to manage those for the OS above it? What if two kernels want to access them at the same time? I guess this is where the Linux-over-Windows-kernel thing would come into play; ultimately one has to grant access to the other, as both can't make their own decisions for themselves.
Very interesting topic, and I'd love a detailed resource on this as I just finished my OS class at university.
Would this offer any advantage over Windows, since most popular Linux software is already available for Windows?
Do I need to pay for Linux and this kernel patch to run Windows and Linux programs? No – that's the advantage. Windows freaks like you just don't like this project.
A Windows freak? You don't know what you're talking about. Yeah, it's true I run Windows, but it's also true that I regularly boot BeOS, Linux, and about a dozen other operating systems (Geos, BSD, QNX…). That's why I come to OSAlert: not to bash other operating system users or fanatically promote a favorite OS.
But I don’t need to explain myself to you.
I asked a legitimate question. The advantage of this Unified Kernel is obvious for Linux users, as many Windows software titles are not available as Linux versions. Anything to make Windows apps easier and more convenient to run on Linux is a fantastic achievement, IMO.
But for Windows users considering Linux compatibility the advantage appears less clear. I’m not sure though, and could be overlooking something; that’s why I asked. Is there a significant number of popular Linux applications that Windows users don’t currently have access to? Is there an advantage to running Linux ports over Windows ports of the same program? Is Unified Linux Kernel faster or simpler or more stable than other methods?
I see this as being a great project to develop; it will increase migration from Windows to Linux. Yes, it may seem that this project will slow down the development of open source software, but I don't believe that is the case. People circle around their habits; to break a habit, you first give people what they understand, then slowly pull them away from Windows. When people use Linux with the Linux Unified Kernel, they are exposed not only to Windows software but to Linux software too, thus increasing exposure to Linux software. Once exposed, they will find certain software in Linux easier to use than in Windows, and they will start to migrate automatically.
A lot of people are not willing to move to Linux because they would have to change the way they do things in a short time, and that change is too big. The Linux community must also understand that this project will increase movement and will affect them in both good and bad ways, but often the bad way can become a good way.
Having a larger population moving to Linux will increase the chance of major companies shifting their attention toward Linux at a faster pace.
This project is a tactic that can be used to shift the population at a faster pace; thus development will also increase at a faster pace.
I hope the Linux community sees this as a tactic and not something to be afraid of.
I am afraid of making Linux insecure, as I enjoy Linux and the many distros out there.
I think the Linux community needs to shift its attention to how to promote this project, and also needs to be concerned with how to keep some control over it and make it as secure as possible.
I do not agree with integrating or merging the Unified Linux kernel into the main Linux kernel source from kernel.org; I would only agree with a patch that patches the kernel. The Linux community needs to keep its distance while also keeping in close contact with this project.
I do see this as a great tactic to shift the population at a very fast pace.
Shit, that’s like the Borg taking over the Enterprise. This does not bode well for the Federation of Linux.