“This feature includes various hacks to boost Ubuntu’s performance, such as viewing running processes, identifying resources, finding process startups, tuning kernel parameters, and speeding up boot time. This is a complete chapter from the ExtremeTech book ‘Hacking Ubuntu: Serious Hacks, Mods and Customizations’.”
I wouldn’t really call this hacking. It is viewing/killing running processes and viewing/killing startup scripts and services.
omgz0rz! l33t h4x0rinz!!1 I killed running processes! WOOOWOOO!
Jeez, *ANYTHING* can be labeled ‘Hacking’ now-a-days! It’s so sad.
*opens task manager and kills a few processes*
w00t, I just h4x0r3d windows xp!
“w00t, I just h4x0r3d windows xp!”
Maybe you just load HIMEM.SYS and NOSMOKE.EXE in order to gain more l33t performance? Wow, I’m using the keyboard (the thing with the many anykeys) instead of the mouse, I must be a Hacker!
Ok
>_
Oh joy. The good ole days of fiddling with config.sys/autoexec.bat for hours on end to get some game working. I wish I could say I miss it, but I really don’t.
Gah… That’s a lot of words for little help. If you want your Linux box to be ‘l33t fast’ on any distro, here’s what you do (a combined sketch follows the list):
1. Prune your init. Use Google. If you don’t use it, turn it off. Also handy for security… unless you really need to be running that FTP server. Also check whether you’re running a ‘superserver’ that autostarts services when they’re needed. Turn that off, or make sure it won’t spawn an FTP server when somebody hits the right port.
2. Make sure your hard disks are using DMA and are running in the proper mode. Google for your distro; most have a config file lurking about for that. Test with hdparm, then make it permanent in the config file.
3. Make sure you’re using a 2.6 kernel, or a 2.4 kernel with the CK patch set. You can roll your own, but under 2.6, for the sake of your health, just tune your vendor kernel. The article mentions all sorts of fun things, but two worth considering that weren’t mentioned are swappiness and the I/O scheduler. In my case I get less stutter with cfq than with anticipatory, so I append elevator=cfq to the kernel line in my bootloader. Once again, Google is your friend. There are more exotic schedulers out there as well. (If you do roll your own kernel: preemption, finding a balance between latency and overhead with the tick rate, and gutting everything you’ll never use, along with the CK patch set, are good places to start. But the last kernel I hand-rolled was < 2.6.10, so may the force be with you.)
4. Tweak your fstab. One fun thing to do is use noatime on anything where you don’t need to know when a file was last touched. Different filesystems have different options, so man and Google are your friends.
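A minimal combined sketch of all four, assuming a Debian/Ubuntu-style SysV init and GRUB legacy; the service name and device are just examples:

    # 1. See what starts at the default runlevel, then drop what you don't use:
    ls /etc/rc2.d/                        # runlevel 2 is the Ubuntu default
    sudo update-rc.d -f proftpd remove    # hypothetical unwanted service
    # 2. Check that DMA is on for PATA disks, and switch it on if not:
    sudo hdparm -d /dev/hda
    sudo hdparm -d1 /dev/hda
    # 3. I/O scheduler and swappiness (make the sysctl permanent in /etc/sysctl.conf;
    #    for cfq, append elevator=cfq to the kernel line in /boot/grub/menu.lst):
    sudo sysctl -w vm.swappiness=10
    # 4. fstab: add noatime where last-access times don't matter, e.g.:
    #    /dev/hda1  /  ext3  defaults,noatime,errors=remount-ro  0  1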
This article kind of aspires to turn you into a sysadmin, instead of just plucking the low-hanging fruit. Also, the advice that you might want to turn off cron and at would break some automated cleanup processes, so I wouldn’t suggest turning those off.
Really. 4 things:
1. Clean up your init
2. Make sure your drives are ‘working to capacity’ (use DMA and the fastest modes)
3. Tune your kernel and use a 2.6 kernel (I/O scheduler and swappiness)
4. Tune your filesystem via fstab (noatime can be your friend)
That’s the low-hanging fruit. That’s what you want to play with. Well, until you blow up your system. Remember, you’re getting free advice from a guy on the internet.
Just curious here: with newer SATA drives, do we have to mess with hdparm at all? (Except for any PATA optical drives, of course.)
I’m adding a #5:
Tune your X server, i.e. your xorg.conf:
-Turn off extensions you don’t use
-Add performance options for your video card. Distros are notorious for conservative settings; ‘man YOURCHIPSET’ is your friend. Oh, and make sure you have a console text editor installed first… cough… trial and error = hang and pain. (I’d suggest nano if you fear Vi and Emacs.)
-Some say remove xfs (The X Font Server) and only use absolute font paths. Not sure how much gain you get there.
The big one is ‘tune your video card’: your desktop experience and games will thank you (a fragment sketch follows). Also, you might want to take a look at DRIconf to tune your DRI implementation.
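For the curious, a minimal xorg.conf fragment along these lines. The driver and options here are just examples; check the man page for your actual chipset before copying anything:

    Section "Extensions"
        Option "Composite" "Disable"     # example: drop an extension you don't use
    EndSection

    Section "Device"
        Identifier "Card0"
        Driver     "radeon"              # hypothetical chipset
        Option     "AccelMethod" "EXA"   # example option from 'man radeon'
    EndSection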
>> -Some say remove xfs (The X Font Server) and only use absolute font paths. Not sure how much gain you get there.
In Mandriva 2007.0, the speed with xfs (the X Font Server) is the same as without it (as in Ubuntu) on my laptop (Pentium M 730 1.6GHz, 768MB RAM).
By default, Mandriva’s boot is quite fast anyway, thanks to its parallel init system.
The biggest thing that you can do as a casual sysadmin is removing unneeded services from your init, but even that is more of a security and boot-speed optimization than a runtime performance enhancement. The big picture is that the 2.6 kernel and the core userspace libraries are really efficient, but the desktop layers can be major offenders. Every time something like this comes up I link to Dave Jones’ enlightening paper from the 2006 OLS:
https://ols2006.108.redhat.com/reprints/jones-reprint.pdf
You can tweak your I/O scheduler all you want, but if GNOME is going nuts with stat() and open(), all that I/O traffic is going to take its toll on performance, especially in the form of noticeable latencies. Similarly, no process scheduler can counteract that self-important process that insists on waking up every few seconds to pat itself on the back. The OS provides services to satisfy user requests as efficiently as possible. But if the user requests are unreasonable, maximum efficiency might not be good enough.
Hacking for performance on free software systems starts and stops with the development community and its vendors. The truth of the matter is that we need to be doing a better job of profiling, benchmarking, and regression testing. We don’t need one of the top kernel hackers to spend his valuable time running strace and oprofile. Any developer can do this, and often glaring inefficiencies become immediately obvious. We have the tools, we just need to use them. And we need projects that run this stuff automatically and open upstream bugs for gross offenses and regressions.
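To make that concrete, a syscall summary is one command away; strace ships with most distros, and the target programs here are just examples:

    # Run a program and summarize the syscalls it made:
    strace -c -f -o /tmp/syscalls.txt gnome-calculator
    # Or attach to something already running, wait a bit, then Ctrl-C for the table:
    sudo strace -c -p $(pidof nautilus)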
“””
Every time something like this comes up I link to Dave Jones’ enlightening paper from the 2006 OLS:
https://ols2006.108.redhat.com/reprints/jones-reprint.pdf
“””
So do I. It’s a real eye-opener. Horribly embarrassing to those of us who trumpet the quality of OSS code on the basis of it being better peer reviewed.
For balance, however, it’s probably wise to also include a link to this response by Dave on lwn.net from some months after the data for that paper was gathered:
http://lwn.net/Articles/192371/
“The truth of the matter is that we need to be doing a better job of profiling, benchmarking, and regression testing. We don’t need one of the top kernel hackers to spend his valuable time running strace and oprofile. Any developer can do this, and often glaring inefficiencies become immediately obvious. We have the tools, we just need to use them. And we need projects that run this stuff automatically and open upstream bugs for gross offenses and regressions.”
Quite simply: Yes.
But here’s the thing: it’s a ‘yes, but’. Yes, write fast code in the fast path, and profile and test it to make sure you’ve identified bottlenecks and regressions. But that pretty much rests with program developers and the narrow subset of a program’s users who can do that sort of thing. Don’t get me wrong: this work, and automated frameworks for it, are great. But in the non-developer space there are plenty of things the distros could be doing a better job of.
While the distros have gotten better at defaulting servers to off, using DMA, and many other things, the simple fact that a skilled user will still go through a shortlist of half a dozen things to get performance to a respectable level is a little discouraging. I understand that the distros don’t have the breadth of hardware access, and that they have to be conservative…
Actually, I think the speech/article you referenced has had a positive effect, not unlike that of the ‘UNIX Sux’ speech of yore.
On the KDE end (it’s the desktop I follow), they actually have an automated testing framework in place, on the English Breakfast Network ( http://englishbreakfastnetwork.org/ ). Following the mailing lists, tests are being added pretty much continuously. Also, if you’re following KDE 4 development, you’ll notice they’re being quite methodical about adding per-component test suites to avoid regressions. Then there’s the ‘optimization’ mailing list.
I’m not holding this out as an example of KDE being ‘l33t’, but as an example that people are hearing this message. In KDE’s case I think the desire predates your article… there was that perception that KDE 3.0 was ‘fat’, but now that it’s at the end of its life that perception is largely gone, and you can see the developers trying very hard to make sure KDE 4 is sensibly optimized and well tested.
Like KDE 3, GNOME 2 came in feeling quite heavy. The GNOMEies put a lot of work into making it better, but where they’ve fallen down a bit is in having some largish, high-profile performance regressions. Then there’s Xorg (“Hammering /proc” and other fun quotes)…
I’m sure all of these groups are trying to some degree. It’s just that the number of people really capable of doing these jobs is rather small. Even the number who can write automated tests that reflect reality is somewhat small. But at least the message is out there, and some people (not just KDE) are trying.
It is not about hacking, since it is just one chapter of the book. And it is useful. I’d recommend reading this chapter (though it is long) and learning the useful commands and hacks, or at least adding it to your bookmarks, as I did.
From TFA: “The kernel can only access 1 GB of RAM. Any remaining RAM is unused.”
I don’t know of any modern distros that ship without at least highmem enabled. Looking at my Ubuntu Edgy 6.10 kernel config, I see…
# CONFIG_NOHIGHMEM is not set
CONFIG_HIGHMEM4G=y
# CONFIG_HIGHMEM64G is not set
…
CONFIG_HIGHMEM=y
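(For reference, you can pull the same lines from your own kernel config with something like

    grep HIGHMEM /boot/config-$(uname -r)

assuming your distro ships the config in /boot, as Ubuntu does.)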
Looks like my system will support up to 4GB of physical memory, and HIGHMEM64G (above) would raise that limit further via PAE.
That was the big error that stuck out to me. I’m sure there were others.
Yeah, this isn’t Windows XP. (I have 3GB of RAM, and while the OS itself claims to recognize it, all the games that include a tool to report installed RAM say I have only 2GB.)
I’m not sure what void he pulls this one out of.
****
Yeah… they clearly show some misunderstanding here. On a 32-bit system (or a 64-bit system with a 32-bit kernel), the virtual address space is limited to 4GB. All processes on the system have a “split” between kernel and userspace memory segments. Traditionally, 32-bit UNIX-like systems have a 1GB/3GB split between kernel and userspace respectively. In this layout, the kernel can only allocate 1GB of virtual memory, and likewise only 1GB of physical memory if you have more than 1GB installed. The other 3GB of virtual memory (and any physical memory beyond 1GB) is reserved for userspace.
So this is where TFA got that 1GB number from. The statement that any more than 1GB of RAM is a waste is completely inaccurate. 32-bit Linux systems can use all 4GB of addressable memory, and 64-bit Linux is running on systems with over 1TB of RAM.
However, in the relatively recent past, various Linux kernel patchsets have offered alternative layouts for 32-bit systems: 2/2 and 3/1. These provide a tradeoff between the address space demands of the kernel and userspace for systems with over 2GB or 3GB of physical memory. Other operating systems do this, too. There was a Vista installer bug where the installer would crash on 32-bit systems with 3GB+ of memory, because the kernel would configure a 3/1 split and the installer process would exhaust its address space.
The bottom line is that if you need/want 3GB+ of physical memory for your system, you best have a 64-bit CPU and a 64-bit OS. Don’t try to press up against the addressable memory limit. You’re going to end up with either sub-optimal page/disk caching or crashing processes. 64-bit hardware should have a lifetime of 40 years or more before we bump our heads on the ceiling again.
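As a quick sanity check before deciding, the ‘lm’ (long mode) flag in /proc/cpuinfo tells you whether the CPU can run a 64-bit kernel; a one-liner sketch:

    grep -qw lm /proc/cpuinfo && echo "64-bit capable" || echo "32-bit only"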
I believe there was also a 4/4 patch floating around somewhere… That one comes with a performance penalty, since it has to switch page tables on each syscall entry and exit.
If you can stand using Synaptic a lot for a few days, download the server version and install that. Then just add GNOME/GDM and the components that you want, and watch it fly!!
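Something like this (or the equivalent clicks in Synaptic), assuming the package names of the day, which vary by release:

    sudo apt-get update
    sudo apt-get install gdm gnome-core   # GDM plus a basic GNOME desktop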
Not only that, but you don’t have to download 700MB of some (for installation purposes) useless live CD just to install your system.
Or *cough cough* just install Debian using a netinst if you’re going that route. It’s basically the same thing, though even Ubuntu’s alternate and server edition ISOs are still rather large compared to the 100MB netinst.
This is bad. Really, astonishingly bad.
“Lowering the number of PIDs can improve performance on systems with slow CPUs or little RAM since it reduces the number of simultaneous tasks.”
Unless you’re having tasks fail to start because you’re out of PIDs, this isn’t going to make any appreciable difference. And the symptom of that is going to be programs refusing to start. Not really a great optimisation.
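For reference, this is the knob being discussed; a sketch only, since the default is rarely worth touching:

    cat /proc/sys/kernel/pid_max          # typically 32768
    sudo sysctl -w kernel.pid_max=16384   # example: lowering it (not recommended)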
“A 32-bit PC architecture like the Intel Pentium i686 family can only access 4 GB of RAM. ”
Except for the ones with PAE, though given the prevalence of 64-bit CPUs now, this is somewhat moot.
“The kernel can only access 1 GB of RAM. Any remaining RAM is unused.”
Wrong. The stock 32-bit Ubuntu kernel is built with support for 4GB.
“anacron—As mentioned earlier, this subsystem periodically runs processes. You may want to disable it and move any critical services to cron.”
There’s an important difference between cron and anacron – anacron will “catch up” on jobs that haven’t been run because the machine has been switched off.
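For illustration, an /etc/anacrontab entry looks roughly like this: period in days, delay in minutes, a job identifier, then the command:

    7   10   cron.weekly   run-parts /etc/cron.weekly

So if the machine was off when the weekly run was due, anacron fires it about ten minutes after the next boot; plain cron would simply skip it.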
“atd and cron—By default, there are not at or cron jobs scheduled. If you do not need these services, then they can be disabled. Personally, I would always leave them enabled since they take relatively few resources.”
The default install contains important cron jobs.
“apmd—This service handles power management and is intended for older systems that do not support the ACPI interface. It only monitors the battery. If you have a newer laptop (or are not using a laptop), then you probably do not need this service enabled.”
Exits if you don’t have apm available, so disabling it won’t do much other than improve boot time by a really small amount.
“acpid—The acpid service monitors battery levels and special laptop buttons such as screen brightness, volume control, and wireless on/off. Although intended for laptops, it can also support some desktop computers that use special keys on the keyboard (for example, a www button to start the browser). If you are not using a laptop and do not have special buttons on your keyboard, then you probably do not need this service.”
Except that by disabling it, you also won’t have anything to load the various acpi support modules and so won’t get temperature feedback via ACPI. Various systems depend on this being available in order to perform proper CPU throttling to deal with overheating. Only disable it if you know that you’re not risking hardware damage by doing so.
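For instance, with acpid running and the ACPI modules loaded, thermal readings show up under /proc; a sketch, assuming a 2.6-era kernel with at least one ACPI thermal zone:

    cat /proc/acpi/thermal_zone/*/temperature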
“vbesave—This service monitors the Video BIOS real-time configuration. This is an ACPI function and is usually used on laptops when switching between the laptop display and an external display. If your computer does not support ACPI or does not switch between displays, then you do not need this service.”
As far as I can tell, this has just been made up entirely. It certainly doesn’t bear any resemblance to reality. (Hint: I wrote that code.)
If this is actually representative of the content of the book, I recommend you stay well clear.
Matthew Garrett – Ubuntu kernel team
Ha, official confirmation that the article is junk. Shame on ExtremeTech.
It’s a good reminder that paper publishing isn’t immune to the quasi-truth we see on the internet so often. Unfortunately that is less of a boon for the internet, and more of a strike against humankind.
This was not a good article/chapter, in my opinion. As an advanced but “lazy” Linux user, what I wanted was a few easy tips, with examples, on what most people can disable (more than the ten lines on services the chapter had) and tweak for performance gains… and some benchmarks would have been nice too. That read was a waste of time and not what I expected.
It would have been nice if there were some benchmarks to show how each tweak affected the performance of the system.