I don’t actually have a reason for trying to build a Linux kernel with the CONFIG_PREEMPT_RT
patch set. There’s no way I can even measure the impact of it. Still, I felt like having a “real-time” Linux box, and set out to make one. Little did I know how difficult it would be to even get started.
A little bit of background. I haven’t used Linux all that much, though I’ve been using various *nix variants for about 20 years. I played with Linux in the pre-1.0 beta days, installed Yellow Dog on my PS3 (before Sony took it away), and played with Ubuntu when it started to get a lot of buzz. In college and at my first job I did a bit in SunOS (yes, I’m so old that I predate Solaris). For much of my career I have used the QNX real-time operating system. I wanted to build a system similar to a QNX box, even though I have no practical use for such a beast. I thought it would be fairly easy. I’m no expert, but I didn’t think it would require more than a basic familiarity and the help of Google.
First things first: what do I use as a base system? There are so many distributions that it’s difficult to pick one. I’ve been defaulting to Ubuntu, but I stopped at Maverick, and I wasn’t sure I wanted to learn the new Unity interface for this project. I settled on OpenBox as a desktop environment, and that cut things down to ArchBang and CrunchBang. Of the two, CrunchBang seemed like the better route for me since I was used to apt-get and Synaptic. So I installed CrunchBang in a virtual machine using VirtualBox. Installation was no problem. I downloaded the kernel and rt patch. No big. Extracted the kernel and patched it; easy enough. Most packages can be compiled with a three-step process:
./configure
make
make install
The Linux kernel, however, is a very large chunk of code, with lots and lots of options. Therefore, it has its own version of “configure.” Instead you run “make menuconfig” or “make gconfig”, depending on whether you want a console ncurses configuration program or a GTK+ graphical one. This is where the problems began.
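For reference, here is a minimal sketch of that step, run from the top of the kernel source tree (the targets are standard kernel make targets; which development packages each one needs varies by distro, so treat the comments as assumptions):

# Pick ONE of these configuration front-ends:
make menuconfig   # ncurses text UI; needs the ncurses development package
make gconfig      # GTK+ GUI; needs the GTK+ development packages
make xconfig      # Qt GUI, if you prefer that toolkit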
Though I’m perfectly comfortable on the command line, I see no reason not to take advantage of GUIs when they’re useful. So I tried “make gconfig.” It didn’t work. I lacked a dependency. It took me quite a while to find instructions on the Web as to which packages I needed to apt-get. When I did finally discover the correct package names, they failed to install. Very frustrating.
I decided to give up on CrunchBang and try ArchBang. I immediately liked it; it booted quickly, the archbang.org website looked nice, and there seemed to be plenty of instructions out there. Still, I ran into trouble almost immediately. ArchBang creates four separate partitions: boot, swap, system, and home. I hate this. Twenty years ago maybe there was a reason to have a separate swap partition; no longer. Same with boot. Separating /home can make a lot of sense, but for a single-user development system it’s pretty pointless. With the 8GB default hard drive size I had used, I ended up with a home partition of about 500MB, which meant that I couldn’t actually extract the kernel in my home directory. Ugh. Fine, whatever: sudo mkdir /work, chown, chgrp, and on with life. Still no good. I had the space, but make gconfig errored out.
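The workaround, roughly (a sketch; the user and group names are placeholders for whatever your system uses):

# Create a work area outside the cramped /home partition
sudo mkdir /work
sudo chown youruser /work    # substitute your login name
sudo chgrp users /work       # substitute your group; "users" is common
cd /work
tar xjf /path/to/linux-[version].tar.bz2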
Well, at least ArchBang has a nice post-install instruction list that gave me a good idea of what to do next. It mainly involves using pacman (the Arch package manager, similar to apt-get) to update the system. Except it didn’t work. I followed the instructions exactly, and the commands failed. I also realized I didn’t have the VirtualBox Guest Additions installed. Stymied there as well, because I couldn’t figure out how to mount the (virtual) CD-ROM. Fortunately, “pacman -S virtualbox-archlinux-additions” did the trick.
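For the record, the steps that eventually worked boil down to something like this (the package name was correct for Arch at the time; it may have changed since):

# Sync the package databases and update the whole system
sudo pacman -Syu
# Arch-packaged VirtualBox guest additions
sudo pacman -S virtualbox-archlinux-additions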
Then I had to go on a business trip. I switched to my laptop. Due to lack of foresight, I failed to move the virtual machine over. I tried to get it over Windows Remote Desktop and failed. I tried to have my desktop upload the (roughly 2GB) file to SkyDrive so I could get it from there. For some reason the file never uploaded. Fine, whatever, it doesn’t take that long to install ArchBang, so I started over. I got it mostly up and going, but then at some point I bricked the install. I’m not entirely sure what did it, but probably either “pacman -S linux” or “pacman -S udev” was the culprit. Most advice on the forums seems to be “re-install.” I could have tried that, but I didn’t see much point. I just started over from scratch, and hoped I could figure out a new set of post-install instructions that would work consistently.
Back at home, I decided maybe I had bitten off more than I could chew trying to get an “expert” distro going. I switched back to Ubuntu. Oneiric doesn’t split /home onto a different partition, but that wasn’t the problem. After the install had updated everything, I was already out of space thanks to that 8GB default drive size in VirtualBox. 8GB should be plenty of space, but a default Ubuntu install eats almost all of it. Ugh.
Fine, fine, back to ArchBang. I started from scratch again, this time specifying a 20GB virtual disk. ArchBang still split off /home into its own partition, but now I had about 10GB for it. I went through the post-install instructions more carefully, and Google found me work-arounds for the problem areas. I got everything into good shape, but then the VirtualBox guest additions stopped working. I managed to get the CD-ROM mounted and installed straight from the CD, rather than using pacman to get the ArchLinux-specific version. Finally, I had a good working system. Clipboard sharing worked, I could mount my VirtualBox shared folders, and my .bashrc was configured well enough to not drive me insane. “make gconfig” worked. Phew!
The key setting is the “Preemption Model” option. I didn’t do a whole lot besides turn on “Fully Preemptible Kernel (RT).” However, I had to go through the config / build process numerous times. I had ignored the advice of starting with a kernel configuration copied from a working system. Literally every site that talks about building the kernel mentions this. Unless you really know what you’re doing, start with a known-working configuration. Also, gconfig isn’t a very good program. I wasn’t sure whether to click or double-click to change an option. (Double-click.) Sometimes it wouldn’t let me change an option. I wanted to turn off the I2C drivers, for example, but couldn’t. I couldn’t change the custom name field from gconfig; I had to go edit the .config file manually.
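That custom name field is CONFIG_LOCALVERSION, the suffix appended to the kernel version string. Editing it by hand looks like this (a sketch; as far as I can tell, the rt patch also appends its own -rtNN suffix via a localversion-rt file, which is how “rt” can end up in the version twice):

# In /path/to/build_dir/.config:
CONFIG_LOCALVERSION="-rt"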
Eventually I got through a config / build sequence that I felt confident in. The actual build takes a while: about 4 hours on my Core i7 inside a virtual machine. Now, of course, I needed to install the kernel. This is not something I had any experience with. The ArchLinux wiki came to the rescue with a nice set of instructions. Even after this whole process, I don’t really know where all this stuff goes, so it’s a good thing the instructions worked. Then the scary part: reboot, and hope it works.
Hurray! I am running a custom kernel! Note the “rt” in the kernel version string. Twice. Once was automatic, and the one on the end was because I didn’t realize it would be there automatically. Also note the slightly older kernel rev; the rt patch only existed up to 3.2.12. When I started, the 3.3 kernel was already out, and while I was working on this, 3.2.13, 3.2.14, and 3.3.1 were released. The rt patch for 3.2.13 came out by the time I finished this article, but by that point I was too exhausted to try again.
I’m afraid I started off a bit over-confident. Okay, a lot over-confident, or else I wouldn’t have had to write this article. I had assumed a trained monkey with access to Google could build the kernel. It turned out to be harder than I thought, but it still isn’t rocket science. The main thing to have is a fair amount of patience. And Google.
In addition to all of the help from the ArchBang and ArchLinux documentation and forums, I was also looking over the Linux From Scratch (LFS) book. While there wasn’t a specific set of instructions I used from LFS, I did encounter a concept that I should have embraced early on. When trying to build a system following LFS, it helps to actually follow LFS. The forum abbreviates this to “FBBG,” which stands for “Follow Book, Book Good.” This was good advice. I know that sounds a bit like saying a trained monkey would have been better off than I was. That’s partially true. The point at which to deviate from the path is when (a) the path obviously isn’t working, and (b) you know why you’re stepping off the path.
So, was it worth it? Probably not. This was more a learning exercise than something useful. It was frustrating and ultimately doesn’t have any real benefit. Still, I learned an awful lot, and there was a definite sense of accomplishment. I’ll probably continue to run a custom kernel, if only for the whole mountain-climber “because it’s there” feeling.
The actual kernel build process:
- Download the CONFIG_PREEMPT_RT patch set from https://www.kernel.org/pub/linux/kernel/projects/rt/. Do this before downloading the kernel, because you need to know the latest kernel version the rt patch supports.
- Download the matching kernel from https://www.kernel.org/pub/linux/kernel/.
- Extract the kernel to a working directory.
- From within the linux-[version] directory, apply the patch: “bzcat patch-[version].patch.bz2 | patch -p1”
- Get a copy of a working .config file. On Arch, this can be done with “zcat /proc/config.gz > /path/to/build_dir/.config”
- Configure: “make O=/path/to/build_dir gconfig”
- Edit the configuration to your liking. In particular, remember to turn on the “Fully Preemptible Kernel (RT)” preemption model, as it is not on by default.
- Build: “make O=/path/to/build_dir”
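Put together, the whole build phase looks roughly like this (a sketch; the version numbers are placeholders, and the patch file name should match whatever you actually downloaded):

tar xjf linux-[version].tar.bz2
cd linux-[version]
bzcat ../patch-[version].patch.bz2 | patch -p1
mkdir -p /path/to/build_dir
zcat /proc/config.gz > /path/to/build_dir/.config
make O=/path/to/build_dir gconfig    # enable "Fully Preemptible Kernel (RT)"
make O=/path/to/build_dir            # then walk away for a few hours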
The kernel install procedure:
- Install the modules: “make O=/path/to/build_dir modules_install”
- Copy the kernel to /boot: “cp -v /path/to/build_dir/arch/x86/boot/bzImage /boot/vmlinuz-[KernelName]”
- Make the initial RAM disk: “mkinitcpio -k [FullKernelName] -g /boot/initramfs-[KernelName].img”
- Configure the boot loader. This is different for each loader, but the short version for GRUB is: edit /boot/grub/menu.lst, copy-paste a working entry, and edit it to match your new kernel and the files you put in /boot.
- Reboot. Note that there is no evidence that a blood sacrifice would improve the odds of the system booting at this point.
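For GRUB legacy, the copied-and-edited entry in /boot/grub/menu.lst looks something like this (a sketch only; the disk and partition numbers depend entirely on your setup, and if /boot is its own partition, as in the default ArchBang layout, the kernel and initrd paths are relative to that partition):

title  Arch Linux (rt kernel)
root   (hd0,0)
kernel /boot/vmlinuz-[KernelName] root=/dev/sda3 ro
initrd /boot/initramfs-[KernelName].img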
A few more things to think about:
- The kernel in tar.bz2 form is 75MB. Expanded, the source is 505MB. Compiled, the build tree is 853MB. Huh? One would expect much larger binaries, given the size of the source. Of course, not all of that code gets compiled. Just the arch directory is 117MB, of which only 10MB is for x86. There’s at least 30MB of drivers and filesystems I disabled. Still, that’s something like a 3-to-1 size ratio. I made a quick “Hello, World!” style program, and the binary came in at over 60 times the size of the source file. Suffice it to say that I really have no idea what is going on in the kernel source code or build process.
- The build time can vary greatly. Just hit make and walk away. Even better, start it before you go to bed or leave for work, or some other time when you can just ignore it, rather than go through the whole “watched pot never boils” pain.
- Don’t use an important production system to try this out on. This is a great time to use a virtual machine, be it VirtualBox, VirtualPC, VMware, Parallels, etc.
- It’s unlikely that you’ll be able to significantly impact your system, at least in a positive way. Rendering your machine unable to boot is pretty easy, but getting any actual benefit out of a custom kernel is tough.
- Start with a working .config. I know I said that already. It’s worth repeating.
- Be careful with the boot loaders. A “make all” will run an install script for LILO that will wreck your world if you’re using GRUB.
- Don’t try to go it alone. There are lots of resources out there. In particular, different distros will have specific instructions for anything odd.
Resources
A bit like an Academy Award speech, except that I’m genuinely grateful. My thanks to the anonymous creators of the resources I used; I really, really couldn’t have done it without them.
- ArchLinux Wiki – Kernels/Compilation/Traditional
- ArchBang Post-Installation Instructions
- ArchLinux Wiki – pacman Tips
- Kernel.org, of course, and the README that comes in the Linux kernel tarball. This was the only place that recommended the “O=/path/to/build_dir” option. However, it somewhat dubiously says that LILO is the most likely boot loader.
- Real-Time Linux Wiki
- Linux From Scratch had quite a few interesting points, even though I didn’t directly use anything from the LFS book.
- I tried many things that didn’t work out, but I am still grateful to have had the option of trying CrunchBang, Ubuntu, Lubuntu, and Gentoo.
About the author:
James Ingraham is the Software Development Team Leader for gantry robot manufacturer Sage Automation, Inc. A graduate of the University of Pennsylvania’s Management & Technology program, he’s been developing on various *nix platforms since the early ’90s.
Comments:
The Linux kernel build process can definitely be a bit intimidating at first.
I am not sure why you needed the “O=/path/to/build_dir” option. I’ve never run across that before; maybe something is different in your setup compared to the Debian systems I have built kernels on. Also, after the kernel is built, running make install, make modules_install, and update-grub should handle putting it into the /boot directory and adding it to the GRUB menu (not sure if update-grub is a Debian-specific tool). You shouldn’t have to know the filenames of everything. And since the previous kernel should still be in your GRUB menu, there isn’t much to worry about: if the new one doesn’t work, just boot the old kernel and try again.
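That sequence looks roughly like this on a Debian-ish system (a sketch; make install relies on the distro’s installkernel hook, and update-grub is, as far as I know, Debian’s wrapper for regenerating the GRUB menu):

# From the configured kernel tree:
make
sudo make modules_install
sudo make install
sudo update-grub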
The kernel you compiled is huge and took a very long time to compile. I suspect this is because you used the Arch default config, which (like any distro’s configuration) has every driver you could ever possibly want. Kernel builds usually take a few minutes for me (on a Core 2, so it should be faster on your Core i7), but that’s with only the drivers I actually use enabled (I vaguely remember some make option for detecting your hardware, but I don’t know what it was or if it works well). Also, on a modern multicore, don’t forget the -jn option to make, where n is the number of processes to run in parallel. The recommendation I usually see is the number of cores + 1, but I don’t know what the actual best value is.
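A sketch of the parallel-make suggestion (nproc is a coreutils command that reports the number of available cores; the +1 heuristic is folklore more than science):

# Use all cores, plus one to keep the pipeline busy
make -j$(($(nproc) + 1))
# Or just hard-code it, e.g. on a quad-core:
make -j5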
I too noticed the far-too-long build time. You’re probably right about the modules, and I guess VBox does not help here. It took my little Core 2 Duo (throttled to 1.2–1.6GHz) 20 minutes.
With a Core i7 machine I would go LFS just to see if the GCC test suite can run in under 20 minutes.
I didn’t know about `make all` being coupled to LILO, always good to know.
I seem to remember an article where Linus Torvalds argued why your grandmother needs to recompile her kernel. If there is a make option to compile just the modules that are actually loaded, that would drastically reduce granny’s compile time.
Ah, found it. It’s called make localmodconfig: http://kernelnewbies.org/Linux_2_6_32#head-11f54cdac41ad6150ef817fd…
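A sketch of how that works (localmodconfig has been a standard make target since 2.6.32; it trims the config down to the modules currently loaded, so plug in all your hardware first):

# With your devices attached and their modules loaded:
make localmodconfig    # disables every module not currently in use
# A sibling target, localyesconfig, builds those modules into the kernel instead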
Yes, the first thing that struck me was 4 hours? On a core i7? Then I saw he wrote ‘make’, rather than ‘make -j5’. Using only one core out of 4 is quite inefficient.
The “O=/path/to/build_dir” option just puts the output somewhere else. This isn’t especially useful, but it made me feel better. I hate having all the source and binaries jumbled up.
Interesting to know about the compile time. It was longer than I expected, but not a LOT longer. I’ll have to see what I can do to trim it down.
Note that I was in a virtual machine. My experience has been that a VM has virtually no impact on purely processor-intensive tasks, but I only gave the VM 1 processor, which is why I didn’t use the -jn option.
If you want to do serious audio work in Linux, you need an rt-capable kernel to run jackd reliably. It is required by Ardour, LinuxSampler, Hydrogen (the list goes on).
I used to muck about with custom kernels, but now I just use AV Linux, which comes with everything you need for a Linux-based studio. If you are both a musician and a Linux user, you really should check out Ardour; it’s a full-featured DAW that can be compared to anything other platforms have to offer, except maybe a lack of bling.
If you’re about to do serious audio work you’re better off with other operating systems.
That statement is just silly. Have you actually used the software I just mentioned? I usually do a lot of live audio processing with very CPU-intensive plugins like impulse response. On other platforms, the norm seems to be to record with a raw monitor signal and apply effects afterwards, because the system can’t handle doing this without using large buffers.
I might have a play with building an FX unit using a Linux box this weekend.
I have a low-latency pro-audio sound card already – I just wasn’t sure what software to install. What would you recommend for listening to line-in and applying real-time effects to? (Last time I did this, I just did it in Ableton Live on XP – but there was around 32ms latency, which made the thing impractical.)
Rubbish. Try using Reaper under Mac/Windows. It can even run in Linux with WINE. I record live with FX on all the time. I also record the ambient room with a mic (so no FX required). Truth is, latency is your enemy, so you need the right drivers for it to work with Windows (ASIO, though ASIO4ALL will work with standard cards), but Mac handles it fine with base system drivers. Even GarageBand will record live audio with FX. But honestly, you really need a real audio card. I use a Tascam 122mk2 personally. USB. Drivers work well enough. Low latency. Sorted.
I wonder what those OS’s are? Hrmmmm. Can’t think of any.
Not really, unless you define ‘serious’ as ‘serious amateur’, using FruityLoops or GarageBand or whatever kids like these days. On the other hand, ‘serious’ might be ‘serious work’, in which case things like Csound or Ardour and Linux might be what you want to use.
There are plenty of musicians and audio engineers out there who aren’t technophobic imbeciles.
That’s like saying to dismiss Mac OS X because it has iPhoto, which is not as powerful as Photoshop.
Apple Logic Pro and Mainstage are pro sound applications.
Ardour was pretty unusable last time I tried it. And this is after using real DAW software like Ableton Live, Cubase, Reaper and Logic Pro. I’m sure it got better, but I’d choose Reaper over it any day of the week.
No, that’s not dismissing anything.
That’s your professional opinion as a professional user of these kind of products, right? Right?
I suppose you could have searched the Arch User Repository (AUR) before doing a whole lot of work.
There is already a PKGBUILD ready to use for linux-rt: just download, tar xf, cd, makepkg, wait, pacman -U, adjust the boot menu, reboot. For that extra something there is linux-rt-ice, which mixes the rt patches and TuxOnIce.
You really got to love Arch :p
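A rough sketch of that flow (assuming the AUR package is still named linux-rt and you’re building by hand rather than through a wrapper):

# Fetch the PKGBUILD tarball from the AUR, then:
tar xf linux-rt.tar.gz
cd linux-rt
makepkg -s                    # -s pulls in build dependencies via pacman
sudo pacman -U linux-rt-*.pkg.tar.xz
# Then point your boot loader at the new kernel and reboot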
Ugh, that sounds awfully similar to running Gentoo.
I guess I can see the similarity…
Regardless, the OP was just pointing out that there’s already a package for a realtime Linux kernel for Arch Linux, which makes it pretty easy to install.
The other commands they included are just to tell GRUB to boot using the newly installed realtime kernel.
But that wouldn’t have been nearly as much fun. It was much more about learning the process than the end result.
Or, install yaourt and it becomes a simple:
yaourt linux-rt
Everything else happens automatically, all nicely colour-coded and everything.
Not to get TOO far off topic, but any particular reason to prefer yaourt over packer + pacman-color? I went the latter route because that’s what most of the ArchBang forum posts seem to use.
yaourt was the first AUR pacman wrapper I found that worked. Didn’t try them all, just tried enough to find one that worked.
You don’t need a reason to recompile your kernel.
You don’t even need a reason to run Linux.
Blah, and I wasted a perfectly good reply on a troll.
So you spent a lot of time finding a check box, checked it, and waited a long time for a build. And then, because you were running it in a VM, it didn’t really matter anyway.
I don’t remember finding so many complications back when I used to build my kernels on Gentoo, and I was a trained monkey with Google too.
Then again back then it took me about 8 hours to compile it, so getting a non working kernel because of some config options was a major PITA.
I guess I’ve always been lucky with Linux, from that first time I managed to install Debian (or was it Red Hat?) from floppies on the first try, not even really knowing exactly what Linux was or what I would even want to do with it.
There’s a Debian package of the kernel with rt. It’s in squeeze-backports: http://packages.debian.org/squeeze-backports/linux-image-rt-amd64 .
apt-get -t squeeze-backports install linux-image-rt-amd64
Why on earth would you do make gconfig? make menuconfig is much easier and only requires ncurses, which is installed by default in nearly every competent Unix-like OS ever released.
Well, if I’d realized how bad gconfig was I might have gone the menuconfig route, and I actually did try it at one point. Still, it OUGHT to work, so why not?
Was just about to post the same:
Really? You couldn’t install a package to make “make gconfig” work, so you abandoned the entire OS? Really? Instead of just typing “make menuconfig” and carrying on with your day?
Aren’t techies supposed to be lazy? How is installing a completely new OS, learning a new package manager, yadda yadda easier than typing “make menuconfig” after “make gconfig” fails?
A ridiculous dependency loop is why I gave up on CrunchBang. Package managers are supposed to take care of this kind of thing automatically. The fact that I could have just not used those packages doesn’t change the fact that CrunchBang was inherently broken. I don’t want to take away from the difficulty of maintaining a distro, but when it fails so drastically out of the box, I don’t see the point in sticking with it. There are so many options out there; why bother with something when it has such a glaring flaw?
Hi Ingraham,
I have a strange dependency-loop in one of my #! boxes as well (I have 2 machines on that distro). What distro did you replace it with?
ArchBang is what I went with. I’m fairly happy with it. I also seriously considered Lubuntu, which might be an easier migration from CrunchBang. Then again, I considered throwing a dart at the GNU/Linux Distribution Timeline.
http://futurist.se/gldt/wp-content/uploads/12.02/gldt1202.svg
Fwiw, I think the problem you had with libcairo and libpango not installing in Crunchbang was because:
A) Crunchbang already included patched builds with higher version numbers (at least for libcairo), or
B) The builds currently installed were from Crunchbang’s own repo, which was given higher pin-priority (apt-pinning) than the stock Debian Squeeze repos.
I’ve done my own compiling in stock Debian (albeit I run Sid) and have had no issues in the past. Though, like the above poster, I prefer make menuconfig over make gconfig.
This sort of stuff is still more straightforward in Arch, imho. Though I’m talking about stock Arch, not derivatives like ArchBang which I have no experience with.
Mod me down, I don’t care. But.
I have to say, reading this has not been any fun. Or funny. Or interesting. I’d say such stuff would’ve been better placed in a low-class Linux newbie blog. The steps described and the things done would be ok if my art major sister wrote them. Otherwise, geez.
Seriously, this is how OSAlert rolls now? Come on.
I agree. I’ve been disappointed to read this on OSAlert too.
+1
And written by a self proclaimed “Software Development Team Leader”, no less!
RT.
Yes, really. “I tried to build a kernel but didn’t do my homework and ended up frustrated” isn’t really newsworthy. Anywhere.
I’m inclined to agree as 3/4 of the article was about him distro hopping because the default behaviour wasn’t what he expected.
Ouch.
There, there fella. Don’t let haters get you down. I have no idea what in the world any of you are talking about. But Jimmy, keep on monkeying with this witchcraft and making the monies.
– Your friend John
PS. My little girl is about at the age where Dawnie learned to ski. Hint hint.
This article was ok. I think the biggest thing is that, in my opinion, it doesn’t quite fit in with the recent “feeling” of the articles on OSAlert. There wasn’t any news in it and there wasn’t much of an opinion. It was just kind of a “here’s something I did” article.
Instead, I think any lessons you learned about compiling a realtime kernel would be appreciated if you added them to a place like the Arch Linux wiki.
It’s immediately visible that Thom hasn’t had any Slackware experience. Compiling a custom kernel is pretty much an essential skill for Slackware (although the stock kernel works fine). The Slackware installation book covers this in enough detail. Also, it’s normal to see a kernel panic on the first couple of custom compiles due to leaving out a critical component.
Ultimately, you will not have that dependency hell in Slack. I hate it when a package manager tries to babysit me. If I miss something, I see the “library cannot be loaded…” message, and that is enough to warn me of a missing piece I must install. Package managers usually demand unnecessary dependencies which do not actually prevent usage of the program. That is why I like Slackware’s approach: dependencies are usually not justified, and I can sort them out myself if needed.
So I was hoping to see the RT kernel experience, but this article was about custom kernel compilation. Disappointed!
I have to jump in here and point out that Thom didn’t write the article, so the blame falls on me.
Sorry. Obviously, that was the long-term goal. I am currently torn between continuing towards that goal and just keeping my mouth shut. On the one hand, I’d like to redeem myself. On the other, I think that may not be possible.
-James
Sorry about missing that very important detail that the author wasn’t Thom this time. Sorry again!
Still, I would urge you to continue the quest for RT kernels, and I’d recommend trying out a more hands-on Linux where you have to make things work yourself. That way you learn what is important and what is not; you learn Linux in general, not just that particular distro. After getting a bit accustomed to tinkering and all the command-line stuff, I suggest starting to compile your custom kernels from scratch – beginning from a clean config, that is. After a couple of trials and errors, you learn which parts of the kernel config are important to you and which are not. For example, if you leave the V4L part out of your kernel, it will probably still boot, but if you choose an IDE controller on a SATA system, you will be greeted with a kernel panic on the next boot. If you haven’t configured the boot loader with the previous working kernel available, that means a reinstall. Or, if you are wizard enough, boot with some live CD, mount your partition, copy a working kernel with modules over there, configure the boot loader, and hope for the best. Linux is fun, if you have the time. Also, you will learn A LOT.
Good luck; I will still be waiting for your RT kernel article, with all the comparisons of where it is better and where it is worse.
As you know, it is hard to beat QNX with Linux for real time.
One thing I had heard, though, was that it was basically impossible to get real time on any kernel after 2.8. I forget the reason, but the rt patch is not really real time; it’s a close-to-real-time option.