Linux founder Linus Torvalds on Tuesday called for more regular performance tests on the Linux kernel, so that any reduction in efficiency can be highlighted sooner.
It's interesting to see the performance differences for a dot rev. 1-5% is not that significant, but 23%! Wow!
Linux will stand to benefit greatly if each new release operates more efficiently. Added features are nice, but I’d like to get the latest and greatest without worrying about usability on older hardware.
I’m behind the idea too, but it should be automatic, run on different archs/configurations, and always test the same things – rather resource-intensive, but a good project.
The Linux Test Project already tests for regressions before each kernel release. It uses over 100 benchmarks designed for specific system calls or subsystems, so I do not know what he means by more regular performance tests.
I’d be curious to see if Linux’s competitors have similar performance problems. It’d be interesting to see the same test run on, say, NetBSD and OpenBSD. (I’m discounting FreeBSD for the moment due to the massive kernel-internal changes in the 5.x series, although I guess you could test the 4.x branch.)
I was surprised to find that Linux doesn’t run any faster than a clean install of Windows XP on the same machine. (I tested SuSE 9.1.) From clean installs of both, Linux doesn’t seem to have any major performance improvements from what I perceive, just as a Linux n00b.
SuSE is pretty bloated, plus KDE isn’t very light. =/
If you’re really interested in speed/responsiveness, you might want to check out XFCE.
It’s a very beautiful desktop environment, and also very light.
For desktop uses, no, I don’t find GNU/Linux faster than XP. Linux has a great advantage, though: its performance does not degrade with time as XP’s does.
“Linux’s competitors… NetBSD and OpenBSD…” – hurdboy
While they aren’t Linux, they shouldn’t be viewed as competitors so much as sorta-peers under slightly different licenses.
Windows/Solaris are Linux’s competition.
(I’m not a fan of the BSD license, though; it’s a shoot-yourself-in-the-foot sort of thing.)
“I was surprised to find that Linux doesn’t run any faster than a clean install of Windows XP on the same machine. (I tested SuSE 9.1.) From clean installs of both, Linux doesn’t seem to have any major performance improvements from what I perceive, just as a Linux n00b.”
This article is about the kernel. What you “experience” more is all the stuff running on top of it. You’d see a bigger difference if KDE, GNOME, or your X11 drivers got significant improvements. I don’t think we’ll see much improvement from the kernel for desktop uses since the days of pre-empt.
Well, have you tried FluxBox? I guarantee that it will blow the socks off your cherished XFCE (not that it’s a bad DE).
First off, I’ll save you the bother of telling me that I’m wrong and you’re right – you’re going to say it, so let’s save people the posts.
Linux Test Project – the last published results available at http://ltp.sourceforge.net/kernelresults.php are for 2.5.62 – no 2.6 kernels published.
Benchmarks – where are the benchmark numbers? There are none published up there. And running benchmarks for hours is not the same as running benchmarks to get performance data. Back up your statements, Adam.
Linux needs this work to be done; let’s see IBM and co. throw some bodies and machines at this.
Chen said that he will try to persuade his managers to allow him to do more regular performance tests. “I sure will make my management know that Linus wants to see the performance number on a daily basis,” he said.
That sounds a bit odd; it might give the impression that Linus has more power than he actually does. It’s not just Linus who wants more tests. If the community wants to iron out the flaws or oddities that cause the differences in speed, more tests are needed, not just by Intel. That’s like saying “Linus wants more info for a new driver, so we’ll tell management to look into that.”
“I’d be curious to see if Linux’s competitors have similar performance problems. It’d be interesting to see the same test run on, say, NetBSD and OpenBSD. (I’m discounting FreeBSD for the moment due to the massive kernel-internal changes in the 5.x series, although I guess you could test the 4.x branch.)”
Well, the system in question was a quad Itanium 2 with 64GB RAM and 450 disks.
I doubt Free, Net, or Open BSD could even boot such a machine, and they definitely would not be able to scale to 4 processors in this workload (i.e. most time spent in the kernel, driving I/O).
So in that sense, they don’t have similar performance problems. They have very different problems.
“That sounds a bit odd; it might give the impression that Linus has more power than he actually does. It’s not just Linus who wants more tests. If the community wants to iron out the flaws or oddities that cause the differences in speed, more tests are needed, not just by Intel.”
Yeah, who is this Linus guy anyway?
Actually, this sounds good, but let’s put the numbers up where everyone can see them – nothing fancy, just a simple web page. We’re either an open community or we’re not.
It sort of sounds more like something a kernel or standards body would be better fit to handle (say, the OSDL folks). Depending on the frequency of tests, we’re probably looking at a nearly full-time job for somebody if done right.
I’d suggest taking, say, three accepted benchmarks (file serving, web serving, and a database benchmark) and running them on standard low-end, medium, and high-end servers. I know, just make more work for someone, grin. But hey, do we go changing the kernel just because a single benchmark on a quad Itanium box showed a change in performance? Who says the change that caused a drop in perf on the quad box didn’t give a boost on other configurations? Scaling across multiple CPUs has always been a bit tricky, and not just for Linux.
Just a bit of rambling thought on this.
JT
The issue was raised when Intel employee Kenneth Chen announced some performance figures for various versions of the 2.6 kernel. The tests found that versions 2.6.11, 2.6.9, 2.6.8 and 2.6.2 of the kernel performed 13, 6, 23 and 1 percent slower respectively than the Red Hat Enterprise Linux 3 baseline–which runs on version 2.4 of the kernel, with some added features from version 2.6.
This almost suggests the benefit of the 2.6.x series is only added functionality and more drivers, which means more code and more bugs. Compared to advanced code analysis, fault injection, and the like, doing some performance tests is trivial. Sad to see how little time and money is being put in, still today, on whatever software project.
If the kernel were OOP there wouldn’t be this problem, because you could test individual components without having to integrate the whole.
…better yet, multi-paradigm.
The LTP contains over 2,500 different test programs to discover regressions. I believe stewardship of this project has been taken over by either the OSDL or IBM’s Linux Technology Center.
So IBM has thrown bodies at it.
You do realize that the kernel code needs to be super-efficient, right? No automatic variables; no 16 levels of inheritance; no virtual function tables…
Oh… and NO STL!
Doing server code in C++ is bad enough as it is.
Gilboa
> If the kernel were OOP there wouldn’t be this problem,
> because you could test individual components without
> having to integrate the whole.
Modularity is not an exclusive privilege of OOP.
And all data structs should be XML – those struct statements are so… stone age!
XML will make it both STABLE and fast.
And oh, oh, I’ve heard that this .NET thing really is great.