The Linux Foundation has spent the past two years analyzing just how much code is being added to the project and who is contributing it. This year’s report is out, and the results are quite smile-worthy if you’re a Linux advocate: the increase in code contributions is phenomenal, contributions are being submitted faster, and there are more individual developers than before. The Linux Foundation’s Linux Kernel Development report, also known as the “Who Writes Linux” report and now in its second year, tracks development from kernel 2.6.24 to 2.6.30. Its findings are very interesting indeed.
Since the first edition of the report last year, the net amount of code contributed to the Linux kernel is a staggering 2.7 million lines. About a third of that came from the addition of the staging tree, which brought in nearly 800,000 lines of previously out-of-tree code.
This monumental net increase accounts for the removal of outdated code as well. Counting weekends and holidays, an average of 5,547 lines were removed, 2,243 lines were changed, and 10,923 lines of code were added every day since 2.6.24. An average of 5.45 patches were also added to the tree every hour, up from the previous average of 3.83, which is 42 percent faster than in the previous report. According to the report, this rate of change is larger than that of any other public software project.
The work is being done by 1,150 developers, ten percent more than worked on the 2.6.24 kernel. Over the past four and a half years (between kernel 2.6.11 and now), a grand total of 4,190 individual developers have contributed code. According to the report, though, “despite the large number of individual developers, there is still a relatively small number who are doing the majority of the work. In any given development cycle, approximately 1/3 of the developers involved contribute exactly one patch. Over the past 4.5 years, the top 10 individual developers have contributed almost 12 percent of the number of changes and the top 30 developers have contributed over 25 percent.”
Some say it’s a surprise, but I say it’s only natural that Linus Torvalds himself isn’t the top contributor of changes over the past year. Since 2.6.24 he has made 254 changes, while others have contributed far more, such as Red Hat kernel developer Ingo Molnar with 1,164 changes. Linus obviously has a lot more on his plate these days than just a kernel to develop. Linux has become quite the empire, and he does a lot to direct it, as liquid and free as it is. There’s everyday life to consider, too; he’s not a robot, after all. About Linus’ contributions, the report said, “Linus remains an active and crucial part of the development process; his contribution cannot be measured just by the number of changes made.” It also notes that Linus is ninth on the list of those managing kernel merges, with 2.6 percent.
Company-wise, the top four contributing corporations are Red Hat with 12 percent of change contributions, IBM with 6.3 percent, Novell with 6.1 percent, and Intel at an even 6 percent. 21.1 percent of total changes since 2.6.24 were made by developers with no corporate affiliation whatsoever.
Overall, it seems Linux is being developed at a faster pace and on a larger scale than ever before. No wonder Microsoft is getting scared.
The Xorg stack is where the problems are.
Maybe Chrome OS will offer a solution to that problem (or Moblin, or Wayland, or maybe Xorg will get its act together eventually... who knows. My bet is with Google, tbh.)
Xorg and (pulse)audio.
And as I understand it, Google’s “OS” is just a Linux distribution, isn’t it? I thought they were using standard things like Xorg. Are they developing something different?
You best make that Xorg, (pulse)audio, and intel drivers
I think pulseaudio wouldn’t be necessary if ALSA was better. Why not put the pulseaudio functionality in the core, in the audio stack where it belongs, and let programs communicate over one simple API, instead of relying on 10,000 different APIs for outputting audio?
I think that’s the problem with sound on Linux nowadays: too many layers of cruft piled on top of the audio stack (ALSA).
If we could improve ALSA without going the “band-aid” route, we could have a better system today.
Pulseaudio, esd and the like just treat the symptoms instead of the root cause of the problem.
There’s nothing really wrong with ALSA (I realize it has some rough edges, but you can find issues with any API). People generally use PulseAudio’s API because it makes their app multiplatform.
As long as people want to build multiplatform audio apps, there will always be an abstraction layer like PulseAudio stuck in there.
No. You’re wrong.
Pulseaudio is designed to address the limitations of the Linux audio API for desktop purposes.
If you want cross-platform audio, you use audio libraries like SDL (for games) or Gstreamer (for complex media applications). Those libraries are cross-platform and can run on Alsa, OSS, Solaris, FreeBSD, OS X Core Audio, Windows XP, and Windows Vista, as well as other platforms.
You don’t program directly against Alsa or OSS if you can help it. You use a library. Alsa and OSS programming should mostly be for people making libraries or building stuff that is very low level.
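For a sense of scale, here is roughly what the library route looks like with SDL 1.2 (a sketch, not from any particular app; SDL picks the ALSA/OSS/Pulse/Windows/OS X backend at runtime, and this just "plays" two seconds of silence):

#include <SDL/SDL.h>
#include <string.h>

/* SDL calls this from its audio thread whenever the backend wants samples. */
static void fill_audio(void *userdata, Uint8 *stream, int len)
{
    (void)userdata;
    memset(stream, 0, len);            /* silence; a real app would mix here */
}

int main(int argc, char **argv)
{
    SDL_AudioSpec want;
    (void)argc; (void)argv;

    memset(&want, 0, sizeof(want));
    want.freq     = 44100;
    want.format   = AUDIO_S16SYS;      /* 16-bit signed, native endian */
    want.channels = 2;
    want.samples  = 1024;              /* buffer size in sample frames */
    want.callback = fill_audio;

    if (SDL_Init(SDL_INIT_AUDIO) < 0 || SDL_OpenAudio(&want, NULL) < 0)
        return 1;

    SDL_PauseAudio(0);                 /* start the callback */
    SDL_Delay(2000);
    SDL_CloseAudio();
    SDL_Quit();
    return 0;
}

Build it with something like gcc app.c $(sdl-config --cflags --libs); the same source compiles unchanged on the other platforms.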
Linux audio is a MESS and PulseAudio is the nearest thing we have to a sane API.
PA is the first serious attempt at consolidating and unifying all the dozens of different audio APIs that Linux supports.
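To show what "sane" means here, this is roughly what playback looks like with PulseAudio's simple API; the app and stream names are made up and error handling is left out:

#include <pulse/simple.h>

int main(void)
{
    pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 2 };
    short buf[2 * 4410] = {0};                     /* 0.1 s of stereo silence */
    int error;

    /* NULL server and NULL device mean "use the defaults". */
    pa_simple *s = pa_simple_new(NULL, "demo-app", PA_STREAM_PLAYBACK, NULL,
                                 "playback", &ss, NULL, NULL, &error);
    if (!s)
        return 1;

    pa_simple_write(s, buf, sizeof(buf), &error);  /* blocking write */
    pa_simple_drain(s, &error);
    pa_simple_free(s);
    return 0;
}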
Alsa fell short because the original developers were all over the map. Alsa was designed to be both a high-performance, low-level API for driver development and a high-level API for application programming. That was too much, and combined with a lack of documentation it became a mess.
Nowadays they are trying to steer people away from programming Alsa directly, and if application developers need to or insist on programming directly against Alsa, they are told to restrict themselves to the ‘high level’ or ‘safe’ subset of Alsa APIs. That approach allows Alsa applications to run on top of Pulse Audio via the Pulse alsa plugin.
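For reference, the "safe" subset boils down to something like this; opening the "default" device is what lets the ALSA pulse plugin quietly route the stream to PulseAudio (a sketch with almost no error handling):

#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    short buf[2 * 4410] = {0};                     /* 0.1 s of stereo silence */

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* One call sets format, access, channels, rate and a ~100 ms latency. */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 44100, 1, 100000);

    snd_pcm_writei(pcm, buf, 4410);                /* count is in frames */
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

Link with -lasound. Stay at this level and the stream goes wherever the user's setup says it should.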
=============================
Pulse Audio has a bad rap.
People got used to the way Linux audio worked and figured out ways to deal with its brokenness. It’s an extremely flexible, high-performance system, but it is difficult to deal with and has a lot of bad drivers.
Pulse Audio, instead of working around problems with things like custom asoundrc configurations, actually exposed a lot of bad behavior in drivers. This caused all sorts of problems for lots of users. Plus, PA was not optimized and was still in beta form when it started being introduced.
However, because of PA a lot of drivers that only sorta-worked are now fixed and are much more robust and application-friendly.
——————————-
This may come as a surprise but Pulse Audio is used as the sound API for Palm’s Pre Phone.
A smart phone is the very definition of a platform requiring a robust and flexible sound system with low latency and low CPU requirements. PulseAudio is able to fit that bill nicely.
————————–
If correctly set up and configured, it can make life MUCH easier for users.
For example… I have a ‘USB Dock’ I use at work. It has a bunch of little things on it, like a serial port, a printer port, an Ethernet port, and an audio port.
That way, when I use my laptop, all I have to do is plug in one cord instead of half a dozen.
With Pulse Audio the USB sound card is automatically detected and configured and ready for me to use.
I can take a running application like Totem or Mplayer, plug in the USB sound card, and then migrate the sound from my onboard speakers to the sound card seamlessly.
Plus I can combine the output of the two devices and listen to music on both sets of speakers.
It is fast and efficient and all is done on the fly. I can add and remove the sound card at will without killing any sound applications, doing any text configuration or anything like that.
Something like that is impossible using just Alsa.
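For the curious, the migration piece isn't magic: it's a single call in libpulse's introspection API (the same thing pactl and the GUI mixers use). A rough sketch follows; the stream and sink indexes (7 and 1) are made up for the example:

#include <pulse/pulseaudio.h>
#include <stdio.h>

static void moved(pa_context *c, int success, void *userdata)
{
    (void)c;
    printf("move %s\n", success ? "succeeded" : "failed");
    pa_mainloop_quit((pa_mainloop *)userdata, 0);
}

static void state_cb(pa_context *c, void *userdata)
{
    pa_context_state_t st = pa_context_get_state(c);
    if (st == PA_CONTEXT_READY)
        /* Move playback stream #7 to sink #1 (e.g. the USB card). */
        pa_operation_unref(
            pa_context_move_sink_input_by_index(c, 7, 1, moved, userdata));
    else if (st == PA_CONTEXT_FAILED || st == PA_CONTEXT_TERMINATED)
        pa_mainloop_quit((pa_mainloop *)userdata, 1);
}

int main(void)
{
    int ret = 0;
    pa_mainloop *m = pa_mainloop_new();
    pa_context *c = pa_context_new(pa_mainloop_get_api(m), "migrate-demo");

    pa_context_set_state_callback(c, state_cb, m);
    pa_context_connect(c, NULL, 0, NULL);          /* default local server */
    pa_mainloop_run(m, &ret);

    pa_context_unref(c);
    pa_mainloop_free(m);
    return ret;
}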
——————————–
PA is not good for everything.
If you’re doing audio creation, you should use JACK instead of PA.
Jack is designed from the ground up to provide a high performance low-latency way to route MIDI and PCM audio from one application to another.
This allows long chains of individual applications that can be used by sound editors and recording artists to create very complex workflows and unique sound combinations.
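To give an idea, a bare-bones JACK client really is this small; the sketch below registers one output port and feeds silence from the realtime callback (the client and port names are invented):

#include <jack/jack.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *out_port;

/* Called by the JACK server once per period, in a realtime thread. */
static int process(jack_nframes_t nframes, void *arg)
{
    float *buf = (float *)jack_port_get_buffer(out_port, nframes);
    (void)arg;
    memset(buf, 0, sizeof(float) * nframes);   /* a synth would render here */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("demo", JackNullOption, NULL);
    if (!client)
        return 1;

    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);

    sleep(30);        /* other JACK apps can now patch into "demo:out" */

    jack_client_close(client);
    return 0;
}

While it sleeps, any other JACK application (or a patchbay like qjackctl) can route "demo:out" wherever it likes; that routing graph is the whole point of JACK.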
I know most people won’t believe me.. but I know better than most of them.
With the combination of the realtime (RT) patches on the kernel, the ALSA low-level drivers, and the JACK audio server, Linux can easily outperform any Windows-based DAW and even OS X’s excellent Core Audio.
The downside is that the environment is unfamiliar and requires a high level of expertise to set up correctly.
Also, none of the catalogs calling themselves ‘Professional Audio Magazines’ have a crapload of ads pushing Linux audio software, so it remains unknown to the vast majority of wannabes.
The truth of the matter is that Linux and JACK are used in dedicated professional Digital Audio Workstations that win awards from time to time. Linux also makes an excellent multitrack recording box for studio work and is slowly gaining traction with the Linux-aware folks working in studios.
The requirements for professional audio and for desktop/home-theater use are so different that the best approach right now really is to have separate Jack and PulseAudio daemons. If you want to use regular applications with Jack (like pulling sound samples from movies or whatever), you can run PA on top of Jack.
A short and simple example of why ALSA should lose the first A (it stands for Advanced) in its name:
Try configuring anything Bluetooth-audio-related and you’ll see that even Windows XP needs less intervention when installing audio drivers.
Bull crap. Go read “What is PulseAudio” on their front page. The first line reads: “PulseAudio is a sound server for POSIX and Win32 systems.”
It’s a sound server, not a sound sink and it supports a lot more than Linux.
So does Pulse Audio
ALSA offers both low level and high level APIs.
Linux supports 1, not dozens. (2 if you use an old kernel with both ALSA and OSS in it)
All those other systems are sound servers and abstraction layers, NOT a sound sink interface. They are supported outside of Linux, and interface one way or another with ALSA.
Sound and graphics are my two needs currently. The Alsa driver for Creative X-Fi boards is in testing, but it installed cleanly on my Mandriva 2008.1 and seems to be working. I thought I was going to have to decide between true 5.1 sound under the gaming boot and any sound at all under the productivity boot. It’s not in the distribution alsa packages yet, though.
On the graphics side, I’d like to upgrade past my GeForce 8800, but documented support seems to top out before the boards above the 8800 actually start showing a performance improvement that justifies the cost difference. The 9600 and 9800 don’t justify it, the 250 is also just a rebranded 8800 from what I read, and nothing indicates support for the GeForce 280 yet, let alone the 250. I may be missing something here, though.
For now I’m sticking with the X-Fi and the 8800 until I have more reason to move from Mandriva to a Lenny or Squeeze host OS.
People generally use pulseaudio because of one big failure: ALSA. Alsa is a severe pain in the backside, and this is a known fact to everyone dealing with audio professionally.
Do you remember the days when there was only one API, OSS? Today it’s open source and we still don’t use it on Linux. It supports everything we need, including 5.1, S/PDIF passthrough and the rest, and the best thing is that it’s also supported on all UNIX-like OSs (FreeBSD, OpenSolaris, etc.).
Furthermore, it’s the _ONLY_ API that looks like a UNIX API: fopen(/dev/dsp) and you’re set.
There are currently only 2 reasons to use another sound API:
1) Jackd for stuff such as Rivendell (for radio stations). Nothing beats the patchbay that Jack offers you, but that’s only required for a few things.
2) Networked stuff. If I remotely login and I have a fiber at my disposal, the sound system should follow my X11 session.
For anything else the OSS API is perfect.
Please note the use of the word API next to OSS. I am not sure which implementation is best between ALSA and OSS, but I know that the OSS API is my personal favorite. Maybe we can keep the ALSA drivers and still use the OSS API.
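For comparison, here is what that "it's just a file descriptor" OSS style looks like in code; a sketch that assumes /dev/dsp exists and skips the error checks:

#include <sys/soundcard.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;
    short buf[2 * 4410] = {0};                 /* 0.1 s of stereo silence */

    int fd = open("/dev/dsp", O_WRONLY);       /* a plain device file */
    if (fd < 0)
        return 1;

    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);

    write(fd, buf, sizeof(buf));               /* write() it like any other fd */
    close(fd);
    return 0;
}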
Phonon is multiplatform, has a nice and simple API and none of the problems that PULSE has.
I think if more people were aware of how well it works, they would not have embarked on the mess that Pulse is.
Right now, it is broken in most major distributions and lots of applications do not work with it. They work fine without it.
Please stop the shitty Linux hype. Phonon isn’t multiplatform, it’s ‘multi-Linux’. You have to do some really heavy chin-ups just to port it to multiple platforms.
You do know that Phonon has already been ported to OSX and Windows, right? Which means you don’t have to do the porting anymore.
Google said they’d be using “a new windowing system on top of a Linux kernel” for Chrome OS:
http://googleblog.blogspot.com/2009/07/introducing-google-chrome-os…
No. Chrome is a completely different OS built on top of the Linux kernel. It will be binary incompatible with GNU/Linux.
No, Chrome is a new distribution with a new (non X-Windows based) GUI application and framework on top of Linux; Linux is the OS. It will probably include many GNU applications and libraries as well.
Where did you see that? All of the sketchy information I saw indicated that Google was going to use the same design philosophy as Android–a Linux kernel with a completely different OS stack.
It makes sense they would use some GNU libraries, but unless they use nearly all of them it will take porting to make a traditional application run on Chrome.
Which piece are you asking about? You agree that it will use Linux as the OS, and agree it will probably use some GNU applications and libraries. There was a link earlier to information that it will not be X-Windows based. So what exactly are you questioning?
I am not questioning your insight specifically, and I don’t think we are very far apart. I am questioning that Chrome will be “just another Linux distro.” The kernel alone doesn’t make an OS any more than an engine alone makes a car.
Will Chrome use (u)glibc? How about the userland networking stuff or PAM or rc or bash? Android doesn’t and there is no reason to assume that Chrome will.
Since Chrome is pure vaporware right now, I think both of us are speculating.
Oh, I see; you don’t understand that the kernel is >98% of the OS; the rest is the userspace API, the device driver API, and a bootstrapping mechanism. Everything else, be it libraries, frameworks, etc., is simply applications or parts of applications on top of the OS. Many layfolk are confused because MS-Windows, the class of operating systems they are most directly familiar with, has muddied the issue by bundling them tightly together and creating a poor abstraction layer between them. The result is a blurring of the difference between an Operating System and a Computing Platform (the latter including the base set of libraries, frameworks, and core applications). A particular version of Ubuntu, for example, is a Computing Platform, consisting of some version of Linux as the OS, a set of device drivers, and a set of libraries, frameworks, and applications. In the case of Linux, where such platforms are created by a large number of different providers, such a platform is also known as a (Linux) Distribution.
Look through any Operating Systems textbook or take a college-level course in Operating Systems (where either is part of a curriculum for a Computer Science degree) and you will get a much better feel for where the boundaries of Operating System truly lie.
Point taken. Thanks for the explanation.
I doubt Chrome OS will have feature parity with Xorg, even without the network transparency part. It’s meant for netbooks, not for advanced media and 3d.
I agree completely. I think people and companies should put more effort and manpower into things like X; the kernel is fine and very mature already, and it works great for today’s desktops and servers.
I think the X and graphics stack is where the problems are. If people could improve X with Wayland, or someone could come up with a replacement that works and is good enough for modern desktops, that would be great.
I hope developers and companies, and people in general who have the chance to change and improve things, will listen to their users this time, for once.
Wayland is a replacement for X, not an improvement.
It’s an alternative display technology.
And people are heavily improving the Xorg graphics stack.
============================================
The Traditional Linux Graphics Driver Model
=============================================
The traditional Linux graphics drivers stack up like this:
1) VGA console for x86 or Framebuffer for other platforms.
This provides virtual console access. Or ‘Text Only’ access. (although there are plenty of ways to display graphics on just a framebuffer)
This is a Linux In-Kernel driver.
——–
2) Xorg DDX. This is the ‘Intel’ or ‘ATI’ driver. DDX stands for ‘Device Dependent X’.
(as opposed to Device Independent X, which is Xlib/XCB and other X features, none of which are hardware-specific)
The DDX provides the 2D acceleration APIs; XAA was the traditional one and EXA is an updated version. The DDX bypasses the kernel and directly accesses and configures the hardware itself. This is because operating systems traditionally lacked the capabilities to properly configure video cards, and this way X could be cross-platform. The DDX also handles mode setting for configuring monitors, as well as the input drivers.
The Xorg driver runs completely in userspace.
————————
3) The Linux Kernel DRM driver. The Kernel DRM driver allows controlled access to the video card. It is used primarily for 3D acceleration.
The DRM driver is rather minimal and provides the _stable_ API interfaces for 3D drivers called the ‘DRI Protocol’.
4) The Mesa-based DRI driver.
OpenGL is a programming API for 3D applications. Hardware acceleration is optional and even on high-end cards only a small part of the OpenGL API is ever accelerated.
So open source DRI drivers are created by taking the Mesa OpenGL stack and porting as much as possible to the video card hardware.
=====================
WHY THIS SUCKS:
=====================
Having four separate drivers (VGA console, 2D acceleration, and the kernel/userspace 3D pair) has a number of undesirable effects.
* The Xorg driver and Linux have a hard time getting along. The Xorg driver has full control over your video card at all times. When you want to run a 3D application, Xorg hands over a portion of the video card’s output framebuffer to Linux, which is where your application is then rendered.
So:
Video card -> Xorg -> Linux DRM -> Linux DRI.
Not good.
* Coordinating between the drivers is hell and makes things much more complicated than they need to be. This makes bugs more common and development much harder. As anybody knows, switching between X and the virtual console is one thing that caused problems in the past… X does NOT want to give up its hardware.
Also, Xorg f--king around with the hardware directly happens without Linux’s knowledge. You can imagine that will cause badness.
* Each application has its own memory space. The console, 2D, and 3D all have their separate memory management schemes and fight over the same video card memory. Usually users have to manually configure how much VRAM is handed over to the 2D vs. 3D APIs. Bad for efficiency, bad for performance.
When doing 3D desktops, where you combine both the 2D and 3D APIs, objects need to be copied over from the Xorg DDX and converted to something that is compatible with the 3D DRI drivers. This has a lot of extra overhead and wastes memory and bus bandwidth.
* The DRI drivers and APIs are originally designed for fixed-function video cards and don’t match well with modern hardware. They don’t provide support for multiple APIs and are unable to take advantage of a lot of features.
* Lousy multiuser support. With multiple GUI desktops running usually only one user gets 3D acceleration…
* Security issues. X is a networking protocol. But it also runs as root and controls your hardware. This is NOT good.
=====================================
Steps to modernization:
=====================================
A) Remove X from the hardware. The only software that will be directly accessing the hardware will be Linux. X becomes just another application that can run on video application libraries.
B) Unify video memory management. This way objects from one API can easily be adapted to another, making composited desktops much more efficient.
C) Use a single driver that can provide support for multiple APIs.
=============
Challenges:
=============
* Mode setting (configuring video outputs and monitors) is _EXTREMELY_ difficult. It’s a huge problem and much worse than you can imagine. Traditionally X does the mode setting, but if you remove X from the hardware then you can’t depend on that.
* Unified video card memory management. Since X is no longer allowed to access the hardware, it can’t manage memory directly. Since you want multiple APIs accelerated, they need to have their memory controlled and virtualized so you can do multitasking without sacrificing efficiency and performance.
* A new driver needs to be developed, and abstractions need to be created in that driver, so it can support multiple APIs and be modernized for GPGPUs and the like, which are entirely unlike the old fixed-function video cards and can be used as general-purpose processors for all sorts of stuff (video decoding acceleration, physics, 2D acceleration, 3D acceleration, etc.).
* The port from Mesa to hardware-specific DRI driver requires a massive amount of work and is too complicated.
* Newer hardware has no 2D acceleration core anymore. Everything is done through 3D pipelines. 2D is now only emulated for ATI graphics and will be eliminated in the next generation hardware.
========================
The Solution
========================
* KMS (Kernel Mode Setting). The mode setting features have been moved into the Linux kernel. This means X is no longer needed to configure your monitor and video outputs, which leads to improved stability and a more unified visual look and feel. (See the little KMS sketch right after this list.)
* Kernel Video Memory Management (GEM/TTM) — Like KMS this is done through the improved DRM driver. ‘GEM’ for Intel and ‘Translation Table Maps’ (TTM) for open source Nvidia and ATI drivers.
Leveraging the mature and very efficient Linux memory management subsystem, Linux now has the capability of managing video card memory.
* DRI2 Protocol. Updated API/ABI for userspace video drivers to talk to GEM/TTM/KMS-enabled kernel DRM drivers.
* Gallium. New DRI2 driver model to support multiple APIs. It uses Kernel memory objects, a small hardware-specific driver, and generic ‘State-Trackers’ so that it can support all the different APIs required by modern desktops. 2D acceleration, OpenGL, OpenCL, Video decoding acceleration, etc etc.
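To make the KMS part concrete, here is a rough userspace sketch using libdrm: it asks the kernel (not X) which connectors are plugged in and prints the first mode each one advertises. The /dev/dri/card0 path is an assumption and error handling is trimmed:

#include <xf86drm.h>
#include <xf86drmMode.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);   /* the kernel DRM device node */
    if (fd < 0)
        return 1;

    drmModeRes *res = drmModeGetResources(fd);
    for (int i = 0; res && i < res->count_connectors; i++) {
        drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
        if (c && c->connection == DRM_MODE_CONNECTED && c->count_modes > 0)
            printf("connector %u: %ux%u\n", (unsigned)c->connector_id,
                   (unsigned)c->modes[0].hdisplay, (unsigned)c->modes[0].vdisplay);
        if (c)
            drmModeFreeConnector(c);
    }
    drmModeFreeResources(res);
    close(fd);
    return 0;
}

Build it against libdrm, e.g. gcc kms.c $(pkg-config --cflags --libs libdrm). This only works on drivers that already have KMS support.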
==========================
Current state of things
==========================
Intel has the most mature DRI2 driver. With newer Xorg releases and the Intel drivers, it is possible to run X without direct hardware access. Intel still uses the older Mesa DRI driver ported to DRI2, but KMS and DRI2 are working great, and UXA (the GEM-enabled EXA driver) works fine as well.
Intel drivers recently have sucked because they are undergoing a transition from traditional to new. They had to support KMS and Non-KMS, DRI and DRI2, XAA, EXA, and UXA. That is too many combinations and is impossible to get done correctly. Next release of drivers will ONLY support DRI2 + KMS + UXA and will be much simpler and hopefully more stable.
Gallium is still a year away, but Intel users will get it first.
Once Gallium is here, Linux users will get good OpenCL support, improved OpenGL performance, and eventually video codec acceleration, as well as better 2D support and other possible features like raytracing acceleration.
See: http://www.x.org/wiki/GalliumStatus
Currently available state trackers: Mesa.
—————————————–
You don’t need X to use Gallium. This is what Wayland is being designed for….
This is very nice info about the current state of X and where it’s going, but is there any chance that we will switch to Wayland in the future? And I mean completely.
I’m so sick and tired of installing a newer distro just to find out that X still ships crap like this:
http://upload.wikimedia.org/wikipedia/en/2/2e/Xeyes.png
http://upload.wikimedia.org/wikipedia/en/c/cc/Xcalc_suse.png
http://upload.wikimedia.org/wikipedia/commons/d/d4/X-Window-System….
It makes me feel like X still has all this old and obsolete cruft that no one uses anymore just sitting there doing nothing.
apt-get remove x11-apps
or whatever the equivalent is for SuSE.
Maybe.
Like I said, Wayland is designed to be much simpler than X Windows; it’s a different display technology. It is designed to leverage all the new Linux graphics stuff I mentioned above.
One thing to remember is that there are multiple X servers. The default X.org X server is what you use on your Linux desktop, but there is also Xephyr (an X server that runs inside your desktop), X servers for OS X (Darwin), and X servers for MS Windows (Xming).
With composition, which Wayland will support, you can run a headless X server that outputs to an off-screen buffer and then import the X applications’ output into Wayland’s display. Then you can do the same thing with Android apps and that sort of thing.
Right now Wayland is pretty immature, but already has GTK ported to it for Gnome stuff and I _think_ QT4.
===========================
The reason there is so much cruft in X Windows is that it’s designed to be backwards compatible.
X has been around since 1984 when it was developed as part of the Athena project. The version that X.org supports is X11, which was developed in 1988.
X is designed to be backwards compatible to that. So any X application developed for any Unix system since 1988 should work fine on modern Linux systems.
But this does not hold back development as much as it would seem. Since X11 is designed to be extensible, we now have things like XRender and AIGLX that provide more modern features without sacrificing backwards compatibility.
So all the weird early-1990’s tech that X supports isn’t something that GTK or QT4 is using… they are all using much more modern APIs.
It would be nice to have an X12 that got rid of all the cruft, but that isn’t going to be a priority.
And remember that X Windows is a network protocol, like HTTP. The stuff I described above are all about removing X from the driver equation.
On Intel’s Moblin project, when using their newest drivers (KMS, DRI2, and UXA only), X no longer provides any substantial video driver features. Once Gallium gets out, Linux will be fully broken away from the tyranny of X.org video drivers.
There’s also MicroXwin, another alternative for the X Window System.
http://en.wikipedia.org/wiki/MicroXwin
No.
That’s another X Server.
Remember that X is a network protocol, like HTTP.
Comparison:
HTTP == X Windows
Web Browser == X Server
HTTP Server == X Clients
Like an HTTP server, an X client can run on your local machine or remotely. If it’s running remotely, you’re using TCP/IP; if it’s local, you’re using fast Unix sockets.
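To illustrate, here is about the smallest X client there is: it connects to whatever $DISPLAY points at, which ends up being a local Unix socket for ":0" or TCP for something like "otherhost:0":

#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);   /* NULL means "use $DISPLAY" */
    if (!dpy) {
        fprintf(stderr, "could not connect to an X server\n");
        return 1;
    }

    printf("connected to %s, %d screen(s)\n",
           DisplayString(dpy), ScreenCount(dpy));

    XCloseDisplay(dpy);
    return 0;
}

The program itself neither knows nor cares which X server implementation sits at the other end of the connection.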
And as for the X server: like with web browsers, there are all sorts of different X servers, and like Firefox 3.5 vs. Internet Explorer 5, they can vary a lot in performance and feature support.
That is an alternative, proprietary X server.
And it’s not the only one. Another one is:
http://www.xig.com/
That one was popular for a while and actually was available by default on early commercial Linux distros if you paid for them.
Now, Wayland… Wayland does not support X11 at all. It’s entirely something else. It can support X by running a headless X server and importing the X clients’ output into its composited desktop. Android does not use X either; it uses its own display tech.
I hate to be that guy, but keep in mind that “Linux” only refers to the kernel. It is going to be increasingly important as time goes on to differentiate GNU/Linux from Chrome/Linux, Android/Linux, Palm/Linux, etc.
While the criticisms of Xorg are completely valid, it is an application that runs on top of the Linux kernel. Xorg is as much a part of BSD as of Linux.
ALSA and Pulseaudio ARE uniquely Linux.
Again, I hate to be that guy…
Maybe you could count GNOME-oriented distros as “GNU,” but Linux isn’t. It’s just FSF propaganda. You just hate yourself, so do we. Torvalds has said Linux is not a GNU project, so forcefully tagging it as GNU/Linux is more like trademark infringement.
There are at least 1,000 more pieces of KDE software in my Linux install than there are from GNU.
Linux is definitely not a GNU project. The Linux kernel is the Linux project.
As far as modern Linux systems go, GNU is as important a component as the Linux kernel, or more so. GNU provides all your basic utilities, your basic userland environment, development tools, management tools, etc.
The idea behind the GNU/Linux label is to remind people that Linux would be nothing without GNU and GNU would be nothing without Linux.
What about saying “Oh, Apache is important and so is X! Why don’t we say Apache/GNU/X.org/Linux?” Well, the difference is that you can’t run Apache or X on Linux without GNU, but you can run GNU without Apache or X. GNU is the basic, fundamental dependency and is what provides the personality of a modern Linux desktop, even if people are unaware of it.
Of course as the software matures there are more and more Linux systems that do not directly depend on GNU.
For example, embedded systems often use things like uClibc and busybox. Both of those provide a much more lightweight environment that is often more suitable for embedded devices. Of course, the trade-off is that you’re sacrificing a huge amount of features and performance, as well as compatibility. But Busybox/Linux is definitely not GNU.
Another example is Android. Android uses a very stripped-down Unix userland, but the majority of the action goes on in its Java-like environment.
But both Busybox- and Android-based environments have totally different personalities than GNU/Linux.
BUT….
There are projects to run GNU on other kernels also.
There is Debian’s GNU/kFreeBSD, which takes the kernel from FreeBSD and combines it with the GNU userland for source code compatibility with the Debian system.
Then there is a GNU/Solaris, called Nexenta, which takes the Solaris kernel and a few other bits and adds the GNU environment for compatibility with the Debian/Ubuntu system.
Now if you were to use GNU/kFreeBSD or Nexenta, it would be indistinguishable from a modern Linux system from an end user’s perspective. It would look the same, act the same, and at the source code level be _extremely_ compatible with any open source “linux” application.
————————————
So if you say Ubuntu is “Linux,” then taking GNU and combining it with other kernels like FreeBSD or Solaris gets you a much more Linux-like environment than combining the Linux kernel with non-GNU environments like Busybox or Android!
So while saying GNU/Linux is rather stupid, saying “it’s Linux, not GNU/Linux” is actually much more inaccurate when talking about the entire OS.
However when you talk about the Linux kernel… then that is most DEFINITELY NOT a GNU project.
The GNU Kernel project is HURD… which is a series of ‘kernel’ services designed to run on a modern Microkernel like L4.
And since all Microkernels are shit, from a practical standpoint, then so is HURD.
Android, for example, is a mixture of the Linux kernel, a libc from NetBSD/OpenBSD, and a GNU/BSD userland. The latter, by the way, is true for many well-known Linux distros, so maybe you should call these distros GNU/BSD/Linux. In my opinion it’s a silly approach, but maybe it’s some fun for Stallman and other zealots.
I do count the distros we know and love as GNU/Linux, but not because I am an ideological shill for the FSF in any way.
I was trying to make two points. First, Xorg is not a uniquely Linux application, so criticism of it, while completely valid, doesn’t belong in responses to a kernel article. Second, we are starting to see a new world of Linux operating systems that use new stacks on top of the kernel. These new operating systems will offer limited or no compatibility with the traditional Linux distributions. That is the reason why I think it will be important to use the term GNU/Linux, not because of some ideologue who lives on a mountain and forgot to buy a pair of scissors.
They’re one of the contributors. They might be on this list one year…
“Analyzation”? Are you competing with Thom?
ALSA works great for me (as a user), and I hate saying this, but wouldn’t it be better if we dropped ALSA in favor of OSSv4 and integrated that into the Linux kernel?
I mean, if all people talk about is a unified, consistent, cross-platform audio API that is easy to use, there is always OSSv4.
That way we could have a unified sound stack on every OS, and we could drop all the ALSA/PulseAudio nonsense.
OSSv4 does too much (like mixing) in the kernel. Programs shouldn’t be sending their data straight to the kernel.
Obviously some people will disagree, but most of the Linux audio people do not, which is why Alsa and Pulse Audio are the way they are. They may be a terrible mess, but dropping them and moving to something with a fundamental design error (from the viewpoint of those in charge) instead of fixing Alsa or Pulse simply isn’t going to happen.
Something like that.
Most application programmers should target audio libraries like SDL or Gstreamer (or whatever KDE uses, etc.). Those libraries are designed to be portable and support Windows, OS X, OSS, and Alsa, as well as PulseAudio and whatnot.
That way application programmers can do their stuff without worrying about the underlying details.
The biggest problem with OSSv4 is that it was closed source for a very very long time. The original OSS used in Linux was crippled in comparison and to get the full features you needed to go out and purchase proprietary drivers.
Needless to say they wanted to get away from that and the OSS version at the time didn’t support any features in high-end audio cards… each audio driver needed a huge amount of work and special card-specific features. So Alsa was created.
And going from OSS to Alsa was a HUGE pain in the ass.
OSSv4 is now open source and addresses most of the shortcomings of the old OSS stuff… but Alsa is what is used right now, and switching back would be ANOTHER huge PITA.
If OSS had been fully open source from the get-go, things would probably be much different. But history is history; Alsa can provide all the same features that OSSv4 can, and it is already supported and in use. There is no work required to keep using Alsa, but a huge amount of work would be required to support OSSv4 properly.
Instead of switching or making users choose between two sets of drivers, it’s much more efficient to put resources into fixing Alsa and improving the desktop experience with PA than to try to switch back to OSS.
Remember also that while Unix systems support OSS, it is only one of a dozen different low-level APIs.
Neither Windows nor OS X supports OSS. So among BSD, Solaris, Linux, Windows, and OS X, OSS is the least likely low-level desktop API to be supported right now. If you truly care about portability, programming directly against either OSS or Alsa is just out.
You’ve got it wrong: pulseaudio is the answer to this big failure called alsa. It tries to compensate for many of the shortcomings and the crappy design of alsa. Alsa was about politics, not quality. So don’t blame pulseaudio for the hardly correctable mistakes in alsa.
I have some questions.
Can Wayland run normal X11/Xorg apps, such as KDE/GNOME apps, or do normal X11 apps require stuff like Xlib in order to run?
Can’t we simply get XCB on Wayland, with an Xlib emulation layer so that normal X11 software could run on it, and encourage people to start porting their stuff to newer APIs?
After last year’s report, Mark Shuttleworth promised that Canonical would finally start contributing to Linux.
So where is it? I can’t see it listed in the file.