“Open source code, much like its commercial counterpart, tends to contain one security exposure for every 1000 lines of code, according to a program launched by the Department of Homeland Security to review and tighten up open source code’s security. Popular open source projects, such as Samba, the PHP, Perl, and Tcl dynamic languages used to bind together elements of Web sites, and Amanda, the popular open source backup and recovery software running on half a million servers, were all found to have dozens or hundreds of security exposures and quality defects. A total of 7826 open source project defects have been fixed through the Homeland Security review, or one every two hours since it was launched in 2006, according to David Maxwell, open source strategist for Coverity, maker of the source code checking system, the Prevent Software Quality System, that’s being used in the review.” Note: I just want to state for the record that the headline has not been written by me. I do like the total kicking-in-open-doors air surrounding it, though.
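As a quick sanity check, the article's own numbers do hang together. A minimal sketch of the arithmetic, using only the figures from the quote above (7826 fixes, one every two hours, program launched in 2006, article dated January 2008):

```python
# Figures taken from the quoted article; nothing here beyond plain arithmetic.
def implied_duration_days(defects_fixed, hours_per_fix):
    """Total elapsed days implied by a fix rate of one per N hours."""
    return defects_fixed * hours_per_fix / 24.0

days = implied_duration_days(7826, 2.0)
print(f"{days:.0f} days, about {days / 365.25:.1f} years")  # 652 days, about 1.8 years
```

Roughly 1.8 years of round-the-clock fixing, which is consistent with a review launched in 2006 and reported on in early 2008.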
Award me a captain obvious tag, but did you really think there would be much variation in code quality between closed and open source?
Morglum
Without arguing for either side of the fence: the argument goes that more eyes make bugs shallow. More people read and check prominent code from open source projects, so there is a higher probability of finding bugs.
Of course, in practice it depends heavily on the project. While the “more eyes make bugs shallow” mantra works for open source projects with a high number of developers, it may not apply to projects with few developers.
Additionally, it should be pointed out that Coverity’s software does static analysis. It cannot uncover (security) bugs that are not detectable by static analysis, such as logic or design flaws.
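To illustrate that distinction, here is a hedged sketch of the kind of defect a static analyzer can flag versus one it generally cannot. The functions below are hypothetical examples, not from any real project, and not Coverity's actual checks:

```python
# Illustrative only: two hypothetical functions showing what static
# analysis can and cannot see.

def read_config(path, files):
    # A checker can trace that files.get() may return None and that
    # calling .strip() on None would raise -- a flaggable defect.
    text = files.get(path)
    if text is None:          # the guard a tool would tell you to add
        return ""
    return text.strip()

def delete_account(user, target):
    # No data-flow error here: the code "works". Whether 'user' should
    # be allowed to delete 'target' is an authorization/design question
    # that static analysis has no way to evaluate.
    return f"deleted {target}"

print(read_config("app.cfg", {"app.cfg": "  debug=1  "}))  # -> debug=1
```

The first kind of bug is exactly what this review found by the thousands; the second kind is what still needs a security specialist.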
I would say the results support the “more eyes” argument. The active projects with many contributors (like the Linux kernel or libc) have far fewer than 1 bug/1kloc. Of course, without a comparison to a comparable closed source system, it’s difficult to tell whether that is a good result or not.
Maybe we’ll run it on the code for a prominent clos… Oh, I see the problem.
That statement comes from Eric Raymond, and I disagree with it. For performance bugs and bugs that cause software not to function properly, sure, the statement holds water; but if it applied equally to security holes, then this test would have found NO security holes in open source software. Coding with security by design takes a specialist, someone who knows what they are looking for.
You may argue that Linux and open source are superior to proprietary software security-wise. They may have that perception, but look at the userbase, or lack thereof, and the fact that Linux “inherits” a lot from UNIX, a system designed with security in mind.
What I find interesting is how this is labelled news when it is confirming what is common knowledge.
The issue isn’t necessarily the number of bugs but how quickly they are fixed, how quickly the fixes are made available, and, when there are security issues, how quickly structural problems are addressed.
Does anyone remember when, around two years ago, KHTML went through a spate of security problems? Over Christmas one year, one of the developers (on his holiday) did a complete code audit, and there were some development procedure changes, etc. Let’s remember, these weren’t serious security issues, but the developers were proactive enough to nip it in the bud before it became worse. Here we are, two years later, with a very secure and stable KHTML/WebKit.
The problem, however, is that many companies don’t want to do the above. What is easier: fixing a problem correctly, which might set them back several thousand, or simply continuing to patch, which is cheaper (but later offset by a buggy, complex, ugly code base to maintain)? That is the issue at hand.
I think the news is not that there are actual bugs in OSS but that the Department of Homeland Security has spent 1562 man(person) hours fixing them.
And if it were a commercial product they would have been royally screwed. Ring up the company, then pray that the actual problem is fixed rather than simply being told, “here is a workaround; the problem won’t be solved until months later (or the next release)”.
You either have flexibility or perceived ‘teh cheapness’. The fact is, if the DHS refuses to work with the community on security issues, how is it open source’s problem that they, the DHS, spent 1500 or so man-hours on something that could have been avoided? Why not set up security audit groups consisting of DHS IT personnel and vendors to improve quality?
The whole point of open source is community. Even though IT is a cost centre within a company, working together with other companies and organisations should not be an issue: you make no money from the software itself, and working with others on fixing common issues isn’t going to lead to a loss of competitiveness. So there are no excuses as to why it isn’t possible.
To me it seems that businesses still live in an era where each is an island, where they can’t seem to get their heads around the idea of working with other companies on the common issues they all face, whilst at the same time still competing with each other in the product sphere.
(Removed)
Edited 2008-01-10 08:23 UTC
“That’s why I make sure to pack my code in 999 lines or less”, an engineer just told me over IM.
That figure does include some PHP. Did they include BIND and sendmail as well? That would just be unfair.
I’m kidding. Mostly.
Heh. In all fairness though, BIND security has improved substantially over the last few years. Don’t know about sendmail since I’ve abandoned it completely in favor of postfix and lately qmail again, now that it’s got a workable license.
Yeah, I also have to rise in the defense of BIND. One of my past jobs involved static code analysis for a large software project, and we were evaluating whether or not to license Coverity Prevent. In order to judge its merit, I was to select an open source project and compare the Coverity results to those from another analysis tool that we’d been using for some time.
I initially picked BIND, since it’s well-known and the defect rate from Coverity was very low. I figured it would be a good showcase for comparing the performance of the tools on high quality code. The problem was, when I ran the other tool on BIND, it hardly found any real defects at all. I was shocked. In order to produce more meaningful results, I settled on the FreeBSD kernel. That produced plenty of data.
During my time investigating BIND, I really liked what I saw. Most functions begin with a set of assertions. It has its own uber-paranoid memory allocator that’s used universally across the codebase. BIND is a very solid piece of code. Respect.
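The assertion-first style described above can be sketched roughly like this. BIND itself is C and uses macros such as REQUIRE/INSIST for this purpose; the following is just an illustrative Python analogue, not BIND code:

```python
# A sketch of the "assert your preconditions first" defensive style:
# every precondition is checked up front, before any work is done, so a
# bad caller fails loudly at the boundary instead of corrupting state.

def buffer_copy(dst, src, length):
    assert dst is not None and src is not None, "null buffer"
    assert 0 <= length <= len(src), "length out of range"
    assert len(dst) >= length, "destination too small"

    for i in range(length):
        dst[i] = src[i]
    return dst

out = buffer_copy(bytearray(4), b"abcd", 4)
print(out)  # bytearray(b'abcd')
```

The payoff is exactly what the comment describes: with preconditions made explicit, both human reviewers and static analyzers have far less guessing to do about what each function expects.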
Coverity Prevent is also an outstanding static analysis tool. Definitely the best out there by far. It’s pretty damn expensive for proprietary projects (it can run several million dollars for a large codebase), but it’s free (beer) for any free software project as long as Coverity is credited in the bug reports. Every free software project ought to be running Coverity as a part of their development and release processes. Most of the bigger projects like Linux and Apache have already been running it for years.
I’m not sure I like our department of “Homeland Security” auditing all programs they can for “exploitable security holes.”
Call me paranoid with not trusting the government to not misuse that information. Especially considering the track record of our “security” department.
I’m not sure I like our department of “Homeland Security” auditing all programs they can for “exploitable security holes.”
They have to waste tax dollars doing something!
I suppose that you would rather someone else do it secretly & not release the results, but profit from it instead?
If Homeland Security is doing this and releasing the results, I say it’s about time they did something useful with all the money we’ve thrown at them over the last few years, especially after the Katrina debacle. (Remember that FEMA is part of HS now.) What color is the current alert anyway?
Would you prefer they kept it a secret backroom project, rather than presenting the results?
Don’t kid yourself, every government does this anyways. If anything, the surprising thing is that they are openly addressing it, and that’s nothing to criticize.
More to the point, at least the issues with OSS software can be published and addressed, without the nasty NDAs behind proprietary software that would allow something like, say, Adobe Dreamweaver to produce flawed code exploiting Flash vulnerabilities that can lead to the potential compromise of thousands of web sites. For instance. And of course, hypothetically speaking…
I second Almafeta’s concerns.
It’s actually Stanford, Symantec, and Coverity, on a DHS grant, that are involved in the project, not DHS proper. The program is called the Vulnerability Discovery and Remediation Open Source Hardening Project.
Here are links that explain everything without the sensationalism (ignorance of the subject matter?) of the linked piece.
http://www.iosn.net/network/news/news_item.2006-01-25.9430603968
http://arstechnica.com/news.ars/post/20060112-5966.html
If it’s the NSA, then yea I’d be a little paranoid. They’ve advocated putting gov only backdoors into software in order to facilitate investigations. Basically they want an easy route into systems in order to spy on people, but it would only be a matter of time before someone cracked in or leaked the keys.
This might be the National Cyber Security Division sponsoring the effort; their job is to ensure that the US’s technology infrastructure is secure. I’m not really sure who the sponsor is, though, as I can’t find any information on it.
http://www.dhs.gov/xabout/structure/editorial_0839.shtm
http://www.dhs.gov/xnews/releases/press_release_0337.shtm
I would rather have them openly auditing code than doing it covertly. If the DHS were to use this as a tool for compromising government “threats,” they wouldn’t have gone public with it in the first place.
The end result of all this is that these open source programs are now better. That’s not a bad thing.
I think it would be interesting to compare the number of defects found during this process and then measure against some proprietary code and also some BSD code, which is supposed to be well reviewed. That might give us a real comparison. As it is now, it just tells us that the defects found were all fixed but says nothing about any code that has never been reviewed.
Open source code, much like its commercial counterpart, tends to contain one security exposure for every 1000 lines of code, according to a program launched by the Department of Homeland Security to review and tighten up open source code’s security.
Dissecting this comment, it would appear that open and closed source compare favorably. Remember: The government has access to Microsoft source code via the Shared Source License.
coverity’s services to open source projects have been very welcome, but the number of flaws coverity’s software finds that are actually security issues is remarkably small.
this should really read “1 source code bug per 1000 lines of code”, and as others above have noted for many projects it is a *lot* lower than that.
for the kde4 scans, we are currently at 0.019 defects per 1000 LOC. all of those are in qt4 or kdesupport (i.e. non-kde specific projects) atm. the kde4 body of code they scan is over 4.7 million lines, so .. yeah.
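Taking the figures in this comment at face value, the implied absolute defect count works out as follows (plain arithmetic on the numbers given above, nothing more):

```python
# Figures quoted in the comment: 0.019 defects per 1000 lines of code,
# across roughly 4.7 million lines of scanned code.
def implied_defects(rate_per_kloc, total_loc):
    return rate_per_kloc * (total_loc / 1000.0)

print(round(implied_defects(0.019, 4_700_000)))  # 89 open defects in total
```

Under 90 outstanding defects across 4.7 million lines is a couple of orders of magnitude below the "1 per 1000 lines" headline figure, which is the commenter's point.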
as for the many eyes thing: coverity has become part of those many eyes. which .. sort of proves the validity of it all. how many proprietary packages have had fixes thanks to coverity without first approving and paying for the service?
that said, we’ve closed over 1500 valid coverity finds in kde since this all started. so it has been valuable, just not quite the security juggernaut they are spinning it to be.
from the user perspective, most of those fixes were on code paths that rarely, and often times never, got taken with the state in the bug report. iow, many of them are valid but rather theoretical as users would not have actually tripped them.
all the same, it’s great to improve the codebase and coverity is providing a nice service and they are quite responsive to issues uncovered. in return they’re getting quite a bit of free advertising and some nice proving of their software, so it seems an equitable trade.
Edited 2008-01-10 01:59 UTC
Yay, 7826 fewer bugs in the world. Nice work to everyone involved.
Seriously though, this article should be seen in a positive light.
^^This whole thing is good news. You can’t even do the same kind of testing on closed source software, so you cannot be sure of closed source quality.
The article isn’t bad news, it is instead a celebration of why open source is GOOD!
Out of curiosity, how would you go about exploiting a security hole in a backup/recovery program?
1) create a malicious file.
2) wait for it to get backed up.
3) “lose” the file and ask your admin to restore it.
4) insert rootkit, harvest hashes, and make mayhem
Maybe you can get access to the backup server of a competing company and you can restore files from their backup on your own systems. Or within the same company: you can restore files from other users and read their email like that. Basically: getting access to data you’re not supposed to have access to. Or even something else: through a security hole, you succeed in deleting the existing backups on the backup server.
This story is the same old FOSS bashing.
People who drive titanic Microsoft ships heading toward the gigantic iceberg in front of them should not laugh at the tea-cup sized ice cubes in the open-source waters.
How is this FOSS bashing? FOSS is not perfect. Neither is commercial software. All the article is saying is that the DHS has improved some popular open source software. This is a good thing.
The only bad thing about this article is that it’s an advertisement for Coverity. And that’s not *too* bad since apparently Coverity’s product is pretty good.
This is FOSS bashing because, obviously the same applies for closed source code as well.
With the way this title is written, it somewhat implies that closed source code may -not- contain security holes.
Yeah, riiiight.
News flash: It’s code, it’s going to have holes, it’s going to be imperfect.
According to the article: “Open source projects are different from commercial products in that commercial companies rarely acknowledge security defects in their code or whether they have been dealt with. ‘Our commercial customers wouldn’t like it too much if we aired the number of defects found in their code,’ said Maxwell, when asked about the results from scans on 400 product lines of the firm’s private customers.”
As usual, the headlines are as suggestive as possible in order to attract a lot of hits. And hits are more important than people actually reading the article, it seems.
I found this article to be supportive of FOSS, and the summary doesn’t do it justice. Compare it with the Slashdot summary:
Stony Stevenson alerts us to a US Department of Homeland Security program in which subcontractors have been examining FOSS source code for security vulnerabilities. InformationWeek.com takes a glass-half-empty approach to reporting the story, saying that for FOSS code on average 1 line in 1000 contains a security bug. From the article: ‘A total of 7,826 open source project defects have been fixed through the Homeland Security review, or one every two hours since it was launched in 2006 …’ ZDNet Australia prefers to emphasize those FOSS projects that fixed every reported bug, thus achieving a clean bill of health according to DHS. These include PHP, Perl, Python, Postfix, and Samba.
Some projects fixed *every* reported bug, isn’t that amazing? Closed source projects? Oh, they still have bugs, and now, they have way more than open source. The Linux kernel had way way way less than 1 vulnerability per 1000 lines.
I wonder why OpenBSD, one of the most secure OSes out there, is not on the projects list [1]. Scanning is free for open source projects, you just have to sign up, and they meet all the requirements [2].
Do they have something to hide? :p
[1] http://scan.coverity.com/rungAll.html
[2] http://scan.coverity.com/faq.html#newprojects
Edited 2008-01-10 08:37 UTC
No. Heck, there are even OpenBSD developers (Ted Unangst & Mark Kettenis) who work for Coverity.
I can’t recall exactly why OpenBSD isn’t tested, but the reason is somewhere in the misc@ archives.
I’m betting it’s something like “We keep getting 0 hits for OpenBSD, so we won’t be wasting our time on it any more.”
There, I fixed the headline for them.
All code contains security holes.
Open source code can be freely reviewed and audited for them.
Closed source code … you have to get access to the code first. Even when and if you do (and you will no doubt have to sign an NDA first), and you find the same or higher rate of security holes … the software vendor is just as likely to smile at you and say “thank you for your effort, have a nice day”.
The response from open source projects is to fix the identified hole ASAP.
So I could propose yet another correction to their headline:
“Open Source code used to contain security holes”.
Edited 2008-01-10 11:28 UTC
“All code contains security holes.”
Yes. What kind of security a given piece of software has depends on the individual developers and on the nature of the software in question; openness of source code is just one part of the whole thing. I’m pretty sure that there is both good and bad security in both closed and open source software.
It has indeed been shown that open source is often quite a good way to help guarantee good security: more eyes may study the code, weak points are more easily seen and cannot easily be hidden by lazy developers, no unknown backdoors can be hidden in the code, etc. However, not all open (or closed) source projects and developers are equally security conscious, and not every open source project out there has a big, active developer community constantly reviewing the code for security holes and other bugs. It is good to be reminded of this, because then maybe even more open source developers will start to pay more attention to security. That can only be a Good Thing.
Edited 2008-01-10 21:23 UTC
Thinking that way leads us to believe that we are safer using closed source applications instead of open ones, but if you stop to think about it for a minute you will see that it is not true.
Let’s take Oracle IAS as an example: it’s widely used, it’s considered very secure, and it’s a closed source application, yet it uses Apache as its HTTP server, and IBM WebSphere does the same.
Those two giants use an “Open Source” application in their products because it is secure and stable; otherwise, the companies would write their own web server application to merge into their products. I showed just two examples, but you can find many others along the same lines.
IMHO, that article doesn’t show what really happens.
Closed Source Code Contains Security Holes.
Who’d of thunk it?
I’d say that anything written for public use, especially over the net, is going to be somewhat insecure. It’s been said already here, however. The one thing I do notice though is:
“A total of 7826 open source project defects have been fixed through the Homeland Security review, or one every two hours since it was launched in 2006, according to David Maxwell, open source strategist for Coverity, maker of the source code checking system.”
And I think the above is what really counts. Clearly, though there are bugs, there’s a whole lot of people willing to repair those exposures once known about. Not everyone has time to dedicate to detecting all issues and repairing them as well, so something like this is very beneficial. It’s too bad they wrote an awkward summary making it seem like it’s weak.
Oh, by the way, congratulations on the transition to the new version of OSAlert! =) I just noticed now, and it’s looking great.