“One of the biggest complaints about Linux, particularly from developers, is the speed with which Linux boots. By default, Linux is a general-purpose operating system that can serve as a client desktop or server right out of the box. Because of this flexibility, Linux serves a wide base but is suboptimal for any particular configuration. This article shows you options to increase the speed with which Linux boots, including two options for parallelizing the initialization process. It also shows you how to visualize graphically the performance of the boot process.”
I’ve found that paralyzing some of my applications helps my boot-up time.
Paralyzed applications might find it rather difficult to run.
I’d rather just see efforts to improve support for sleep and hibernate. Linux doesn’t need reboots often anyway, so why bother to optimize this?
One thing I don’t understand is why OSes or applications take so long to shut down. They should just save all volatile data, flush buffers, and die(). No need to close file handles or do any nice cleanup… just go away.
How about those who don’t want their machines up and running the whole day long? Energy costs money, not to mention that I wouldn’t be able to sleep at night.
I for one wish I could boot my laptop(s) in under a minute without going through dirty terminal and config work.
> I’d rather just see efforts to improve support for sleep and hibernate.
Amen from me! I’ve yet to have an ACPI laptop properly suspend and hibernate in Linux. I know many have, but it is a dream for me. Under Windows I just put it to sleep, and it easily wakes up. Same for Macs. I know it is improving as we speak, but maybe some day I’ll see it!
> They should just save all volatile data, flush buffers and die().
Or just die[1] period.
[1] <wrong link…>
Edited 2007-03-13 22:59
Here: http://lwn.net/Articles/191059/
Here is mine:
http://fopref.meinungsverstaerker.de/div/bootchart1.png
My system rarely boots though, as I use software suspend on my desktop.
Everybody should. Boot time is one thing; the other is reopening all your applications and getting back into a productive state.
Well, my two laptops both suspend and hibernate well in Linux. However, when I shut down and boot properly, a minute is generally enough, and they aren’t core9quadros. Now tell me, what’s the hurry that one can’t wait a minute for a damn ‘puter to start? On my desktop PC it takes about thirty seconds. Man, how many other things I could do in those seconds! And you know what, I do those many other things, like read two lines from a paper, go halfway out for a coffee, scratch my elbow. And no, I don’t have time to waste making it boot faster.
I use KDE and have found that starting KDM first gives the illusion of a faster boot. While you are logging in via KDM, all the other rc.d scripts (e.g. samba, hplip, etc.) are being executed in the background. On my laptop, it takes 19 seconds (instead of 35) to get to the KDM login prompt, and by the time I have logged in (also quick) all the rc.d scripts are up and running. I guess you could use this approach with XDM, etc. For some people this might not be ideal, but it works for me and has never failed me yet.
> I’d rather just see efforts to improve support for sleep and hibernate. Linux doesn’t need reboots often anyway, so why bother to optimize this?
Good argument, but to be honest, Linux should optimize both. If it wants to be a good OS it should act like one. The last thing you need is users or devs to start reducing development here and there because they believe that it’s not important. That’s where the OS will start to fail.
As for me personally, to be honest, I’d like to see improvements in application load times. It’s still a killer, especially the first time an application is loaded, but even after it’s resident in memory…
But in the end, what we have so far is still a DAMN good result, considering the vastly different range of developers, skills, coders, and enthusiasts that have worked on the OS so far. Other organisations have spent hundreds of millions to come close in certain areas, fall short in others, and push further in still others.
I’ve been thinking about this load-time problem. Would it be possible to improve loading of dynamically linked code by only loading actually used symbols and then rearranging them so that they can be read from disk in the shortest amount of time?
Prelink.
Late binding.
ELF relocation sorting.
Build with visibility=hidden to limit symbol count.
Yes, pretty much every way to speed up dynamic library load is being tried.
But in all those cases the whole library file is loaded even if only parts of it are needed. Right?
I’m not precisely sure how the Linux kernel deals with code memory, but Windows does not page the whole DLL in; only the parts that are needed are demand-paged (i.e. Windows simply memory-maps the DLL). I think it’s worse if fix-ups are required, though.
Moral of the story: rebase your DLLs to avoid addresses that are likely to be already taken. This is one of the minor things that brought Netscape down: their load times were much slower than IE’s because all their DLLs were at the default address (0x70000000 or something like that), so everything always had to be rebased.
Even if memory-mapped, it still loads whole pages at a time, I would think. The trick I’m thinking about is to rearrange the internals of DLLs dynamically so that each page only contains code that’s actually used.
Edited 2007-03-15 02:00
Linux is not only about the desktop. How about Linux cellphones? I have a friend who works for Wind River, and this is one area they are trying to optimize heavily. Specifically, if it takes too long to boot, people have a tendency to keep hitting the power on/off button.
You could just use runit, freedt or daemontools, which, in addition to parallel startup, provide better process monitoring and control.
Boot time mainly depends on the distro you’re using.
Not trying to show off, but my LFS system boots in 11-12 seconds from grub to text login. This includes loading vsftpd, cups, an svn server, ssh and some other small things.
I gained a speed improvement by switching from SysV-style init scripts to BSD-style scripts. It is a little harder to start/stop/restart services this way, but the text files are (in my opinion) easier to read and start faster.
Another thing is the enormous number of services started by other distros. I always wonder why some distros feel the need to start a time server or whatever. This fills up memory and reduces startup speed.
Finally, compiling apps yourself and optimizing for your architecture (in my case an old Athlon XP) instead of the generic i386 greatly improves speed.
So don’t blame GNU/Linux as a whole for long boot times!
Has anybody timed Windows XP – NOT to the desktop, which is quick – but to a fully functioning system?
The XP desktop comes up fast – then you spend another thirty seconds or more waiting for the services and the apps in the tray to load – especially the antivirus, the antispyware, the firewall, the anti-trojan, etc., ad nauseam.
And while that’s going on, the desktop really is not usable as you try to click on things with a busy cursor that doesn’t respond well because the system is still busy.
My Kubuntu boots fairly quickly as it is. I have a ton of wallpapers whose directories it needs to load, which is the only thing that slows it down.
Of course, I don’t use four virtual desktops loaded with apps, which is the same as using a ton of apps in the Windows system tray.
This improves a lot if you run a clean system (keep Norton, McAfee, Real, QuickTime, HP, and others out of your systray). Also, Vista does a better job here than XP.