An open-source security audit program funded by the US Department of Homeland Security has flagged a critical vulnerability in the X Window System, which is used in Unix and Linux systems. Coverity, the San Francisco-based company managing the project under a $1.25 million grant, described the flaw as the “biggest security vulnerability” found in the X Window System code since 2000.
I think I read somewhere that this had already been fixed, thus (if true) neatly illustrating the gaping gulf between the approaches to security taken by Linux and Windows (OS) developers.
How does OS X compare for speed of fix?
Fixing the hole is one thing. Updating all those millions of installations of vulnerable X out there is another.
As for OS X … last time I checked, they weren’t running X11. By reading the latest news, though, you’d think that if you’re running OS X, hackers will steal your house and first-born baby.
Heh.
I wasn’t referring to how quickly X Window System gets patched on OS X, but to how quickly Apple patch the OS in general.
OK, so technically X isn’t part of the (GNU/)Linux OS, either! But hopefully I’m getting my point across.
OK, so technically X isn’t part of the (GNU/)Linux OS
Whenever it comes to security, “Linux” means just the kernel. When it comes to functionality, “Linux” means everything from KDE to Firefox to the Win32 libraries for Xine and the kitchen sink.
Come on, you’ve been here more than a month. You should know the ways of OSAlert by now.
Heh.
“Fixing the hole is one thing. Updating all those millions of installations of vulnerable X out there is another.”
One of the reasons people should just give up and use a “real” distribution if they’re going to use *nix as a desktop OS.
I can’t speak for others, but I had a nice little clean alert come from my notification area telling me updates were available. A couple were for X so I highlighted them and read change logs, clicked update, and was finished.
I’d imagine even Fedora would have such a nice system.
‘a “real” distribution’ being what?
A ‘real distribution’ probably means one that is properly maintained; one of the big ones like Fedora, OpenSuSE, Debian and so forth.
It pretty much comes down to this: the less visible the distribution, the less accountability there is, and therefore the more likely the maintainers are to be lazy.
For example, say a vulnerability like this one is found in X Windows and two distributions fail to provide a speedy update, one called Fedora and the other called Peanut Linux – which one do you think will get the most flak? Fedora, of course. So if you want a distribution that keeps up to date with the latest fixes and security patches, you’re better off getting a mainstream, highly visible distribution rather than relying on one that has been cobbled together by a guy operating out of a toilet cubicle in southern Siberia.
In this case, lazy developers would be fine as they’d still be on 6.8.x, which this vulnerability doesn’t affect.
But that’s a good point. The more widely used distributions are more likely to get fixed quickly.
Fedora is not a stable distribution, and is certainly not properly maintained. Maintenance cuts out after every new release, and new releases are made every nine months or so. RedHat ES and WS are stable distributions. Fedora is just a testing ground.
OpenSUSE is more stable than Fedora, due to its lineage as a full, retail product, but in the long term I think SuSE Linux Enterprise Desktop (SLED) and Server (SLES) will be the truly stable distributions.
Debian, of course, is stable and well maintained.
Fedora is not a stable distribution, and is certainly not properly maintained. Maintenance cuts out after every new release, and new releases are made every nine months or so. RedHat ES and WS are stable distributions. Fedora is just a testing ground.
Incorrect; firstly, Fedora is a community-based distribution, and secondly, there is a service set up called “Fedora Legacy” which supports releases all the way back to version 1 ( http://www.fedoralegacy.org/ ).
As for OpenSuSE, I doubt it; the eventual move will be to brand Novell as *the* Linux distributor. They’ll phase out SuSE branding in favour of Novell, and Novell will use OpenSuSE as the basis of its commercial distribution.
I don’t see anything wrong with the split between ‘commercial’ and ‘community’ – the community version can be controlled by its traditional ‘roots’, whilst the distributor takes the bits it finds desirable and makes a commercial version out of them, coupled with application compatibility testing and support.
Incorrect; firstly, Fedora is a community-based distribution, and secondly, there is a service set up called “Fedora Legacy” which supports releases all the way back to version 1 ( http://www.fedoralegacy.org/ ).
Fedora Legacy does not cover all releases (in fact they’re having serious trouble covering all the releases), and it is furthermore unofficial: separate from the core Fedora group, and separate from Redhat. While they’re working as hard as they can, they’re too few, and there are too many releases. You cannot rely on Fedora Legacy to keep your system secure. See these for more:
January 2006: http://lwn.net/Articles/168907/
April 2005: http://lwn.net/Articles/131982/
January 2005: http://lwn.net/Articles/119892/
As for OpenSuSE, I doubt it; the eventual move will be to brand Novell as *the* Linux distributor. They’ll phase out SuSE branding in favour of Novell, and Novell will use OpenSuSE as the basis of its commercial distribution.
If you re-read my comment, you’ll see that I’m agreeing with you: i.e. that OpenSUSE will, like Fedora, become a testing ground for the full enterprise distributions for Desktop and Server. A consequence of this is that OpenSuSE, like Fedora, will cease to have any significant support (a new release every 9 months is not support).
I don’t see anything wrong with the split between ‘commercial’ and ‘community’ – the community version can be controlled by its traditional ‘roots’, whilst the distributor takes the bits it finds desirable and makes a commercial version out of them, coupled with application compatibility testing and support.
Again, you’re not paying attention to what I wrote. Did I not list Debian – a completely free, community-supported distribution – as one of the sample stable distributions?
The point I’m making is that both Fedora and OpenSuSE are testing grounds for companies that aim to make money on products built on their foundation. As such they are geared to experimentation and rapid releases: in effect they’re a perpetual beta. There is no promise of long-term (2+ years) support for Fedora, and I doubt that there will be for OpenSuSE. There is with Debian, and will be with Dapper Drake, but not necessarily with the Edgy Eft.
Therefore the stable distributions are Redhat AS, ES and WS, SuSE/Novell Linux Enterprise Desktop and Server, and Debian. To a lesser extent, there is also Mandrake, but the company seems to have stagnated lately. Ubuntu has no track record, but it does have a lot of funds, so Dapper Drake, once it’s released, may join this group (though I, frankly, would be wary of it).
Thank you for your reply, I think at the end of the day, if one really wanted an ultra supported distribution, and required all the support, bells and whistles, then they could always go for a commercial distribution.
With that being said, however, I use FreeBSD, which promptly fixes problems and has a fairly good legacy-support record.
I think ultimately with community distros, Debian being an exception, you’re really choosing between a free distribution with limited legacy support and a paid distribution with a high level of legacy support.
Any one of the major ones actually designed for desktop use.
“This was caused by something as seemingly harmless as a missing closing parenthesis,” Chelf said.
I’m not a programmer, but that doesn’t seem like something hard to fix.
Am I missing something?
No doubt fixing it wasn’t the problem; just finding it.
Finding an aeroplane in an airport isn’t hard, but finding it in a desert is.
I know that, but your original comment makes it sound like this wouldn’t have been the case if it was closed source.
Maybe I just misunderstood your comment, sorry.
“This was caused by something as seemingly harmless as a missing closing parenthesis,” Chelf said.
I’m not a programmer, but that doesn’t seem like something hard to fix.
Wouldn’t such code fail to compile?
Normally, but here a function and a variable had the same name “foo”. So:
if (foo == 0) { do something }
should have been:
if (foo() == 0) { do something }
Normally, but here a function and a variable had the same name “foo”. So:
if (foo == 0) { do something }
should have been:
if (foo() == 0) { do something }
But that is missing both the opening and the closing parenthesis. The article said, “This was caused by something as seemingly harmless as a missing closing parenthesis.”
Yes, well, the article is wrong then. It was a whole pair of parentheses: geteuid instead of geteuid().
Even without the variable, I think foo would be a function pointer which would be non-zero, and therefore foo==0 would always be false. Remember that X is written in C, so lots of things that higher-level languages would catch might slip through.
At first I thought they meant a single ) parenthesis was missing, but it would have had to be a pair or the compiler would have complained.
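For the curious, here’s a tiny self-contained illustration of why such code compiles at all (foo is just a made-up name, nothing from X.org): without the parentheses the function name decays to a pointer, so the comparison is perfectly legal C and is merely always false – at most the compiler warns about it.

#include <stdio.h>

static int foo(void) { return 0; }

int main(void)
{
    /* compares the address of foo with 0: always false, typically just a warning */
    if (foo == 0)
        puts("never printed");
    /* calls foo and compares its return value with 0: true here */
    if (foo() == 0)
        puts("printed");
    return 0;
}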
It is fixed. The offending lines used geteuid as a pointer rather than calling it as a function.
It looked like, but wasn’t exactly:
if (geteuid)
…
Where it should be
if (geteuid())
…
C lets you do this, but it’s very rare that you’d want to, so security scanners look for it.
Of course, the check is whether you’re root (uid 0, which is false in C), so you can see how it’s a privilege escalation.
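To make that concrete, here is a minimal, hypothetical sketch of the kind of root check being described; it is not the actual X.org code, just the pattern, using the real geteuid() from unistd.h.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Intended check: is the effective user root (euid 0)? */
    if (geteuid() == 0)
        puts("correct check: effective uid really is 0");

    /* Buggy form: without the parentheses, geteuid is a pointer to the
     * function, which is never null, so this branch is taken no matter
     * who runs the program – exactly the kind of always-true test a
     * privilege check must never rely on. */
    if (geteuid != 0)
        puts("buggy check: taken for every user");

    return 0;
}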
I’m not sure, but I think it only works if the X11 server is running as root. It’s possible for it not to be; that’s how OS X runs it when it runs it at all, and some people do the same on Linux and BSD.
Last time I checked, the OS X version of X11 didn’t actually contain this bug. Apple’s version of X11 is based on XFree86 4.4 code, not the X.org X11R6.9/7.0 source code, and therefore doesn’t inherit this error.
Oh that’s true. But I was saying that I didn’t think it’d be vulnerable even if it was.
I’d be surprised if it didn’t appear in XFree too. The code in question is directly derived from it — I think you’d find that section of code appears identically in X.org and XFree.
I think I read somewhere that this had already been fixed
You did read it somewhere: the article itself, where they say that it was fixed within one week of being reported.
The other poster (Tom K) might well have thought that the OS X comment implied you thought OS X uses X11 rather than Quartz. Of course, Apple does ship an X server on the OS X install DVD, and for those of us who have it installed it will be interesting to see how long it takes for Software Update to deliver a patched/fixed version.
The flaw, which affects X11R6.9.0 and X11R7.0.0, was fixed within a week of its discovery
Already fixed, and if your distro is responsible, they’ll have acted on it.
Coverity has implemented a system to analyze the X Window System on a continuous basis to help prevent new defects from entering the project.
Nice, though the article does a good job of seeming like an advertisement at times, complete with testimonials:
“[Coverity’s tool exposed] vulnerabilities in our code that likely wouldn’t have been spotted with human eyes. Its attention to subtle detail throughout the entire code base—even parts you wouldn’t normally examine manually—makes it a very valuable tool in checking your code base,” – Daniel Stone
Anyway, it does seem useful to have some automated checking in a project with as much code as Xorg. Hopefully the FSF doesn’t freak out like they did with the use of BitKeeper.
Overall I’d say this is a demonstration of an advantage of open source. Flaw discovered, and fixed. It’d be a bit tougher to audit Windows’ source code that way. Of course, that doesn’t prevent MS itself from doing so, and they may. It’s just not out in the open.
Anyway, it does seem useful to have some automated checking in a project with as much code as Xorg. Hopefully the FSF doesn’t freak out like they did with the use of BitKeeper.
I don’t think that’s much of a concern. These are two entirely different circumstances: in order to hack on the Linux source, you (as I understand it) had to use BitKeeper; a testing suite, on the other hand, is orthogonal to the ability to use or contribute to a project, and so it doesn’t matter that it’s not free except insofar as its existence reduces motivation to write a free alternative.
Coverity and the DHS are hyping this one, because it makes them both look good (DHS gave Coverity a contract to find bugs in open source).
Certainly far worse bugs have been found in the past.
Coverity probably is doing this for the publicity, but that doesn’t mean it’s not valuable. They’re also scanning several other projects[1], including Wine, and have helped uncover many bugs.
http://scan.coverity.com/
True, but at the same time, Coverity doesn’t do it out of the goodness of its own heart; it uses the results to improve its bug-finding capabilities. In the Wine scan, for example, a number of reported ‘bugs’ weren’t really bugs, and Coverity will use the feedback from Wine to adjust its scanning tool to account for odd programming approaches rather than misdetecting them as bugs.
I’ve just been thinking about this a bit more …
The mistake they made was that they were referencing geteuid as a variable in the if statement, rather than a function, because of the missing (). Now, my question is … unless they have a declared variable called ‘geteuid’, which would be STUPID, why didn’t GCC error out with a “variable undeclared” error?
Have they disabled errors for undeclared variables? That’s just lazy programming, and the error would have been caught at the first attempt of compilation had this not been disabled.
Now, my question is … unless they have a declared variable called ‘geteuid’, which would be STUPID, why didn’t GCC error out with a “variable undeclared” error?
geteuid != 0 checks whether the address of the geteuid function is non-zero (which it always is), but the code was supposed to check whether the *return value* of the function was 0.
Because functions are pointers.
uid_t (*myGeteuid)(void) = geteuid;
That statement is legitimate, and you do occasionally do things somewhat like it.
You’re not a C programmer, are ya?
Haven’t done it seriously for about 7 years now.
Thanks, though. I get it.
A function designator is converted to a function pointer. So given void f(void), f == &f.
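A quick self-contained illustration of that rule (f here is just a placeholder function, nothing from X.org):

#include <stdio.h>

static void f(void) { }

int main(void)
{
    /* The designator f decays to a pointer to the function,
     * so f and &f compare equal. */
    if (f == &f)
        puts("f == &f");
    return 0;
}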
GCC does flag it. Unless you set the error level, though, it’s a warning rather than an error — because that’s the way C is supposed to work. This is perfectly valid:
if (printf == 0) { /* do something */ }
… assuming you included stdio.h, because printf is a pointer to a function and the conditional is testing to see if the pointer is null. The only problem is that the two items are not of the same type, so a warning is issued by the compiler. That warning can be made into an error by setting the proper option.
The real problem here is that X.org doesn’t yet compile without warnings. So many warnings, in fact, that you miss warnings of situations that could easily be errors (such as the one cited in the article).
C99 specifies, like C++, that 0 is a null pointer constant, and a null pointer constant can be converted to a null pointer of any type. So the function designator is converted to a pointer to a function returning type T, the constant 0 is converted to a null pointer to a function returning type T, and the comparison is performed. The standard, IIRC, dictates that a null pointer always compares unequal to a pointer to an actual function. This would imply (the rules about converting a function designator to a pointer, except in conjunction with sizeof and &, notwithstanding) that the comparison is superfluous, and that is what deserves a warning – not any type mismatch with 0.
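As a hedged illustration of the warning behaviour being discussed (exact messages, and which flags enable them, vary between GCC versions):

/* warn.c -- try: gcc -Wall -c warn.c, then gcc -Wall -Werror -c warn.c
 * Recent GCC versions flag the comparison below (e.g. via -Waddress,
 * "the address of 'printf' will never be NULL"); adding -Werror turns
 * that warning into a hard error, which is the point made above. */
#include <stdio.h>

int check(void)
{
    if (printf == 0)   /* legal C, but the function pointer is never null */
        return 1;
    return 0;
}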
The bug which they announce relates to X Windows, which runs on a number of platforms outside the Linux sphere, and yet we have an article blazing at the top that it is a ‘Linux vulnerability’ – great mckiddy-kuddy, isn’t there a single journalist in the IT world who has a flaming clue about IT?
There is a distinct difference between Linux and X Windows, as neither necessarily has to exist on the computer – like I said, Linux can operate without X Windows, and many people running servers choose not to install X Windows, whilst others run X Windows on FreeBSD (such as me), Solaris, HPUX or any number of other operating systems.
As for MacOS X: although X11 is available for MacOS X, I think the way in which it is set up (rootless local hosting – that is, it doesn’t access remote resources) means it is not susceptible to this vulnerability.
the way people are justifying the bug…cheap linSux community.
Who’s justifying the bug? I don’t see anybody justifying the bug. It’s a bug, it was serious, it was fixed. Nobody’s justifying it.
One: it’s not a bug in X, it’s a bug in the X.org source code.
Two: it’s not a Linux bug.
What was that article a few days ago about guys like Dvorak whoring for attention?
I have found this interesting comment on Slashdot:
” Fri Mar 10 17:29:51 2006 UTC (7 weeks, 4 days ago) by deraadt:
proper geteuid calls because suse hires people who mistype things”