OSDL lab manager and open source test-giver Tim Witham is on a mission to push Linux performance testing to higher-level, real-world applications, to produce reliable, retestable, comparable data that will let users compare operating systems or open source applications in a transparent fashion. Witham said everybody seems to have a different idea of what “performance metrics” means.
If you ask me, there are a LOT of things not properly tested. But Linux? This guy (Linux) is probably one of the most tested pieces of software ever. Linux is always IN PRODUCTION, not in acceptance testing or anything of the sort.
You don’t really know how many bugs Windows has because Microsoft releases bug fixes in big patches. They also do not publicly document all of them. That said, if you look through their support docs, there are thousands upon thousands of them.
Linux, on the other hand, publicly documents its bugs and often releases individual fixes for each program.
Additionally, the general rule is: the more code, the more bugs. They just need to be discovered. I’m not sure where you are getting your lines-of-code numbers from, but I’m pretty sure they are both wrong.
You have to distinguish between two types of test:
(a) “beta testing”, i.e. people using and testing the product: Linux gets a _lot_ of this; its entire testing model is arguably one big beta test (interestingly, much criticism was aimed at existing closed-shop vendors, e.g. MS, over this very approach).
(b) “v&v testing”, i.e. logical regression testing via test cases and scenarios: targeted and planned testing to verify recent changes and possible secondary effects, etc. This is the model you see (and I have experienced) in normal closed shops, but Linux gets little of this in the “open world” (although Red Hat, SUSE, and the big Linux vendors may be doing it behind closed doors).
What Linux would benefit from is a better-defined open source testing approach: e.g. creating a shared repository of test cases and making sure regression testing happens before anything goes out the door. Unfortunately, I feel this goes against the informal approach taken by the community, who would rather stick with (a), hoping that the sheer volume of beta testing makes up for (b). So far it seems to work, in an ad hoc way.
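To make (b) concrete, a shared regression repository could start as small as the sketch below. Everything in it, from the function to the cases, is invented for illustration (it is not an existing kernel suite); the point is just that every fixed bug leaves a test behind:

    # Invented example of a shared regression case (Python): once a bug is
    # fixed, a test pinning the fixed behaviour goes into the repository so
    # the regression can never silently return.
    import unittest

    def parse_mount_options(opts):
        # Hypothetical function under test: split "ro,noatime" into a list,
        # tolerating empty input (the bug pinned down below).
        if not opts:
            return []
        return [o for o in opts.split(",") if o]

    class MountOptionRegression(unittest.TestCase):
        def test_normal_options(self):
            self.assertEqual(parse_mount_options("ro,noatime"), ["ro", "noatime"])

        def test_empty_string_regression(self):
            # Regression case: empty options once crashed the parser (invented).
            self.assertEqual(parse_mount_options(""), [])

    if __name__ == "__main__":
        unittest.main()

Run that before every release, and (b) stops depending on whoever happens to be beta testing that week.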
M, very nice post, thanks.
Not many people know about test cases and script testing. Heck, not even YellowTAB’s CVO knew anything about it in my interview with him a few months ago, but that’s THE way to ensure quality.
Having experienced testing efforts for multimillion dollar products, I can tell you that _concentrated_ test cycles take 3-6 months of intensive stress, performance, and regression coverage before a release passes all the gating criteria and can ship.
Mostly, that 3-6 month cycle is (a) 100% 24/7 soak testing under stress, (b) large scale automated regression tests, (c) targeted manual testing, (d) ongoing development to continually move the frameworks for (a) & (b) forward, and (e) daily/weekly monitoring of the metrics coming out of (a)-(d), bug open/closure rates, etc. All very rigorous, scientific, documented, etc.
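To give a rough flavour of how (a), (b), and (e) glue together, here is a toy soak-test driver. The suite command, duration, and metrics line are placeholders, not anyone’s real framework:

    # Toy soak driver (Python): run the regression suite in a loop for a
    # fixed period and log pass/fail counts so daily/weekly metrics can be
    # graphed. All names here are placeholders.
    import subprocess
    import time

    SUITE = ["python", "-m", "unittest", "discover", "tests"]  # placeholder

    def soak(hours):
        passed = failed = 0
        deadline = time.time() + hours * 3600
        while time.time() < deadline:
            if subprocess.run(SUITE).returncode == 0:
                passed += 1
            else:
                failed += 1
            print("metrics: %d passed, %d failed" % (passed, failed))
        return passed, failed

    if __name__ == "__main__":
        soak(hours=24)

The real versions of these are orders of magnitude bigger, but the shape is the same.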
I don’t see any of this in the Linux community. Like I said, it doesn’t seem to have caused problems so far, but it makes me wonder.
M, again, exactly! Your comment is spot on again.
>Like I said, it doesn’t seem to have caused problems so far, but it makes me wonder.
Actually, depending on the distribution, it has. My personal experience with every Fedora version has always been negative bug-wise, and with Mandrake too. SuSE was so-so, while Slackware and Debian have proved really good (bug-wise).
Regarding Slackware, the reason they are more stable overall is not that Slackware does proper testing either; it is just that they refuse patches, so they always ship “more tested” code as intended by the app authors.
“I don’t see any of this in the Linux community. Like I said, it doesn’t seem to have caused problems so far, but it makes me wonder.”
Yeah, most will see this as a troll, but you’ve not tried Fedora yet, have you?
Pretty bad quality control…
“I don’t see any of this in the Linux community. Like I said, it doesn’t seem to have caused problems so far, but it makes me wonder.”
It’s not all that black and white. Red Hat and possibly various other commercial vendors do a lot of testing behind the scenes.
There are a whole lot of regression tests built into the kernel code, notably the ext3 filesystem code.
It could probably benefit from more systematic tests, but I’m not sure what exists is comparable to the internal development testing or build tests adopted by many proprietary shops.
“Yeah, most will see this as a troll, but you’ve not tried Fedora yet, have you?
Pretty bad quality control…”
That’s off topic. He is talking about kernel testing, not distro test releases.
I am running Fedora test3 on a test server right now, btw.
Not quite off topic. While OSDL does kernel development exclusively, none of the other pieces of the *Linux platform* (e.g. CUPS, GNOME/KDE, Samba, etc.) enjoy the techniques described by “m” either.
Thank you. One of the few voices of reason around here.
Hi
That’s true, but you shouldn’t talk about Fedora Core test releases when the topic is kernel testing.
Good points. I guess in some sense this is the open source business model showing its real teeth: some distributions distinguish themselves by the quality of the product because they invest in their testing effort, and thus they build a reputation and market share on that, even though the code is GPL and the fixes and stability flow back into the community (albeit with some time delay). Over time, the mission-critical commercial users of a distribution will be prepared to pay the extra cost (above what a home Linux user would be prepared to pay) for that assurance, so market forces will do the work of keeping those distributions committed to driving a high level of quality. It would be interesting to see some studies and data on how realistic and workable this approach is.
What would make sense is for all the vendors to collaborate and sponsor a centralised _kernel only_ testing effort to ensure integrity in the core.
I am not sure whether you would consider Mozilla part of the platform, but they do this kind of testing.
> but you shouldn’t talk about Fedora Core test releases
Who talked about “test” releases? I talked about ALL Fedora releases, tests or not. None was up to my standards regarding the bugs found in them.
“I am not sure whether you would consider Mozilla part of the platform, but they do this kind of testing.”
Yes, but without completely stress testing the kernel, this is like building a castle on a swamp.
Hi
Take a look at
http://ltp.sourceforge.net/ltphowto.php#3.1
It’s pretty easy to participate.
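For instance, assuming the runltp driver described in that howto (the script location and the “FAIL” log format here are my assumptions, so check the docs before relying on this), you could wrap a run and collect the failures like this:

    # Sketch (Python): drive an LTP run and summarize failures. Assumes an
    # LTP checkout whose top-level runltp script prints per-test results
    # containing "FAIL" on failure; verify against the howto first.
    import subprocess

    def run_ltp(ltp_dir="/opt/ltp"):
        proc = subprocess.Popen(["./runltp"], cwd=ltp_dir,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        failures = [line.strip() for line in proc.stdout if "FAIL" in line]
        proc.wait()
        return failures

    if __name__ == "__main__":
        for failure in run_ltp():
            print(failure)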
Do you have any info on testing web applications? I’m trying to find the best way to go about creating unit tests for a medium-sized webmail site. Right now I’m considering writing my own tests with Pear’s Form_Request, but I wonder if there is more I should be doing, or a better way to do it.
Any info would be appreciated!
The LTP looks spot on …
“Who talked about “test” releases? I talked about ALL Fedora releases, tests or not. None was up to my standards regarding the bugs found in them.”
The article is about kernel testing. If Fedora didn’t live up to your expectations, that doesn’t say anything about the kernel. You already gave your review.
>The article is about kernel testing. If
It is not just about “kernel testing”; it is about OSS app testing, including the kernel, Java apps, and databases. Please read the article in its entirety before you get so defensive.
“It is not just about “kernel testing”; it is about OSS app testing, including the kernel, Java apps, and databases. Please read the article in its entirety before you get so defensive.”
It doesn’t talk about the platform.
It talks about OSS apps in general. THAT’S the platform anyway, and these apps don’t get this kind of stress testing! That’s the whole point! So stop it, please! You are the one who is getting off topic now.
[…] but let’s be fair. Windows XP has more than 50 million lines of code. How many lines are in loonix? Maybe 5 million […]
I don’t know what Loonix is, but the Linux kernel might well have 5 million lines of code. Now, first, the comparison you make is a kernel (Linux) against a whole OS (Windows XP), and second, lines of code don’t say much about quality, stability, security, bloat, etc.
Howdy
I think what people don’t understand is the need for such testing; all they see is the many, many hours of coding required to add tests to existing apps.
The time spent on good testing is worth far more than the actual coding of features, IMHO, and I remember seeing a quote somewhere that basically said ..
A working app is 100% faster than a non-working app
If anyone is interested in learning more about testing and a good framework for development (not just unit tests, etc.), have a squiz at…
A Discipline for Software Engineering - Watts S. Humphrey
This is quite heavy on the paperwork side of things, but the ideas are well worth the read.
I’ve been testing “loonix” for years and it’s not doing so bad for a hobby project.
Must be great for those who are actually selling it to have an army of us poor bastards working for free.
There is no product ever that is so “tested” as “loonix”!
“Do you have any info on testing web applications?”
Do you have people to do it for you, or are you working alone?
I usually create a test-script first that I can run every now and then during the dev process. When I’m done I manually test it with some common procedures and a lot of unexpected ones. Really, the only key is to try pretty much everything and never expect anything.
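As a rough sketch of what such a test script can look like (in Python rather than PEAR, and with a made-up URL, form fields, and success markers, so adapt everything to your own app):

    # Sketch of a form-submission smoke test for a webmail login page.
    # The URL, field names, and expected strings are all made up.
    import unittest
    import urllib.parse
    import urllib.request

    BASE_URL = "http://localhost/webmail"  # hypothetical test deployment

    class LoginFormTest(unittest.TestCase):
        def post(self, fields):
            data = urllib.parse.urlencode(fields).encode()
            with urllib.request.urlopen(BASE_URL + "/login.php", data) as resp:
                return resp.status, resp.read().decode()

        def test_valid_login(self):
            status, body = self.post({"user": "alice", "password": "secret"})
            self.assertEqual(status, 200)
            self.assertIn("Inbox", body)  # made-up success marker

        def test_empty_fields_rejected(self):
            # the "never expect anything" case: garbage in, error page out
            status, body = self.post({"user": "", "password": ""})
            self.assertIn("error", body.lower())

    if __name__ == "__main__":
        unittest.main()

The valuable tests are the unexpected-input ones; the happy path almost always works.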
Writing code and debugging is not testing. Just using a product and finding a bug is not testing either. You need a disciplined approach:
1. write the code, debug it
2. I&T the code (integrate it with other components, then test everything)
3. kick it back to the developers on I&T failures
4. when it passes, QC the system
5. kick it back down the chain when it doesn’t.
and most important
6. trade off what absolutely has to be fixed in your bug list against what is acceptable for release (showstoppers vs. workarounds/non-showstoppers)
7. when you’re ready, proceed to user acceptance testing. Failures are to be addressed in the next release cycle starting at (1).
Fail to stress-test with scenarios and follow a chain like the list above, and you can get to development release cycle 3 only to have some ‘tester’ (a user) find a critical flaw that sets you back a cycle or costs you a lot of money.
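As a toy illustration of the step 6 trade-off (the severity labels and the shipping rule are invented for the example), the gating decision boils down to:

    # Toy release gate (Python): ship only if the open bug list contains
    # no showstoppers; otherwise kick the blockers back down the chain.
    def gate_release(open_bugs):
        showstoppers = [bug for bug, sev in open_bugs if sev == "showstopper"]
        if showstoppers:
            return False, showstoppers  # back to steps 3/5
        return True, []                 # on to user acceptance, step 7

    ok, blockers = gate_release([(101, "workaround"), (102, "showstopper")])
    print("ship it" if ok else "blocked by %s" % blockers)

Trivial on its own; the discipline is in actually maintaining that bug list and refusing to ship past it.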
Most OSS projects, for the most part, seem to implement point 1, and if the project is more advanced, throw in some bug tracking and a ‘project manager’. Note I said *for the most part*: companies like Red Hat have a test methodology for releases (not Fedora, but RHEL), and you *pay* for it, as you should, since this type of logistical infrastructure can only be accomplished with efficiency by an organization with management, directives, and pressure to perform.
Howdy
Will the need for ever-increasing coding standards one day stop all the unqualified people from contributing code to Linux?
I mean, if the next step is exhaustive testing of all apps, will the one after be that they must include extensive documentation (such as requirements docs, UML diagrams, etc.)?
It makes me feel ill when people can’t understand that writing good apps is not just learning how to make a nice GUI but all the little things under the hood that add up and matter. But it is good to see the online community starting to appreciate the difference between some guy/gal who learned on the net and a Software Engineer, and that just cannot be a bad thing IMO.
I hope that made sense .. now where’s my coffee gone…
Many of us who are running Linux are reporting bugs all the time!
Testing in the real world is better than script testing.
“Many of us who are running Linux are reporting bugs all the time!”
And many don’t. Some bugs require a lot of testing to track down or even reproduce, so filing a bug report is not always trivial. And since most of those things happen when you are right in the middle of something, you just want to get on with what you were doing and then forget about it, unless it happens a lot of times.
Many people also assume that the developers already know about the bug and that it will be fixed in the next release (most of the time this is true, but not always).
“Testing in the real world is better than script testing.”
Both together are even better.
Actually, for a company like MS you’d think that they have pretty good testing. But since I was able to find several bugs during the first 10 minutes I sat down with the original release of WinXP, I somehow doubt that they take testing and quality control seriously. XP SP1 is OK though; it shows that a public beta test can do wonders.
It’s not a derogatory term, get a clue. I thought Linux geeks were the kings of Google.
My personal experience with all MS software is that you always need to wait for SP1 to hit the streets, because the “off the shelf” release will have too many bugs to be useful (not that it doesn’t work, mind… it just takes so much time to fix that it becomes too expensive to use in the real world).
“I don’t know what Loonix is, but the Linux kernel might well have 5 million lines of code. Now, first, the comparison you make is a kernel (Linux) against a whole OS (Windows XP), and second, lines of code don’t say much about quality, stability, security, bloat, etc.”
That wouldn’t be fair either. And if you just compared kernels, neither of them would have that many bugs anyway, because contrary to what most people like to think, the NT kernel (that includes Windows 2000 and XP, btw) is pretty much bug-free. Almost all bugs on Windows are in the userland libs.
What Linux certainly doesn’t lack is OSAlert’s hype; this site is turning into LInews. 10 out of the last 20 stories were Linux-centered:
1 Coder “t3rmin4t0r” seems to have ported MyXAML to Linux
2 Linux Lacks Testing Methodologies
3 What Lies Ahead For Linux
4 SUN: why would they market a Linux distribution?
5 Ballmer: “Linux Requires Our Concentrated Focus and Attention”
6 Asking Red Hat to Open GFS
7 Linuxbeginner.org spent a week with Linspire 4.5
8 Miguel de Icaza talks to Glyn Moody about Mono’s progress
9 Red Hat ES 3.0 vs. SuSE Server 8.0
10 WordPerfect for Linux
11 The Rise of Interface Elegance in OSS (on KDE and GNOME): +50% Linux
12 XDevConf: +50% Linux
Here is Apple’s share:
1 On 1st Birthday, iTunes Unwraps New Features
JUST ONE?
OpenBSD’s:
1 Review: OpenBSD 3.4 SPARC64 Edition
Microsoft “news”:
1 A virtual tour of Microsoft’s embedded “Device Alley”
2 Critical Update for Microsoft Windows XP Media Center Edition 2004
3 Ballmer: “Linux Requires Our Concentrated Focus and Attention”
OS neutral:
1 Subjectivity and Operating System Choice
2 Overview of Intel’s next-generation BIOS architecture
3 Interview with Dan Joncas of ALT Software
4 The Importance of Software Standards
So how about a Linux break?
If you have some interesting news about other systems/techs, then submit it.
The reason there are so many Linux articles is probably that there is a lot happening there, and when a lot is happening, people submit news about it.
Rejecting news just because it is about Linux doesn’t sound reasonable.
I’m sure a lot happens over at Apple, but we don’t have much insight into the company, do we?
I’ve worked as a professional developer for 11 years with small development companies and I can honestly say that the importance of quality assurance is always acknowledged, but seldom taken seriously in practice.
Developing, testing, and documenting medium/large software systems is very expensive. As the system functionality grows and as more dependencies are introduced (operating systems, DBMSes, email systems, web browsers, etc.), development and QA testing costs increase dramatically. Automating unit testing, integration testing, and regression testing is a solution, but one that requires full-time staff with considerable technical ability and domain knowledge to maintain the testing environment.
I have never been associated with any organization that takes QA seriously until, of course, a major client screams about a bug. A development/QA meeting is held to discuss process improvements, but alas these improvements are never long-term, as it always seems more important to get something out the door quicker. Since scope is never reduced, quality will suffer once again.
I can honestly see the day when large companies and governments will not purchase software that has not been certified by some kind of quality assurance body. Much like an ISO 900x certification, I suppose.
So, this is not only a FOSS issue. It’s an industry-wide problem.
“Actually, for a company like MS you’d think that they have pretty good testing. But since I was able to find several bugs during the first 10 minutes I sat down with the original release of WinXP, I somehow doubt that they take testing and quality control seriously.”
No doubt. All these people in here complaining about bugs in this distro or that one, and not a word about Windows, as if they’ve never used it before, or as if it didn’t get off to a very rough start in its early days. Come on, people, let’s at least be fair about this when we’re griping about bugs.