“A lack of physical memory can severely hamper Linux performance. In this article, learn how to accurately measure the amount of memory your Linux system uses. You also get practical advice on reducing your memory requirements using an Ubuntu system as an example.”
It’s not really memory related, but I would recommend using the -ck patchset, especially on lower-spec hardware. It makes the system more responsive (try moving a window over another one and watch the repainting of the lower one, then compare with -ck: it’s better…)
-ck (a patchset by Con Kolivas) does this by way of the more responsive Staircase CPU scheduler (which chooses between the running tasks) and some other patches that have to do with swap usage (it’s less likely to swap than mainline).
It’s been out there quite a while, and it is very stable. It’s not included in the kernel, though, as the current CPU scheduler maintainer thinks people should improve his instead of writing new ones, and he doesn’t want the ability to choose between them, as is the case with I/O schedulers (which schedule disk access).
Code-wise, it’s smaller and supposedly cleaner, designed for interactivity instead of having it as a hack-on. Though I can’t comment on this myself, as I don’t know the mainline’s code (and I’m not much of a coder anyway).
I used to like staircase, but it’s a little weird sometimes. I find that nicksched (by Nick Piggin) is better _on my system_. Neither are part of the mainline. The default was developed by Ingo Molnar and others. All are O(1) schedulers. They share the vast majority of their code. They just use different algorithms for calculating dynamic timeslices and priorities.
Traditional UNIX and Linux use completely different approaches to process scheduling. UNIX uses a fixed timeslice, usually 10ms. Every timeslice ends in a clock tick that invokes the dispatcher, which saves the running thread and loads the highest priority thread on the runqueue onto the CPU. The scheduler runs at a longer period, usually around 1 second. It does some fuzzy math to reduce each thread’s priority according to its recent CPU usage and then lowers its CPU usage factor to forgive it for past usage. The dispatcher is O(1), but the scheduler is O(N).
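To make that concrete, here is a rough compilable sketch of that classic periodic pass (the field names and decay constants are illustrative, not taken from any particular UNIX):

```c
/* Illustrative sketch of a classic UNIX-style scheduler pass (~1/sec).
 * The cost is O(N): every thread is touched, runnable or not. */
struct thread {
    int nice;       /* user-assigned niceness */
    int cpu_usage;  /* decayed measure of recent CPU time */
    int priority;   /* effective priority, recomputed here */
};

#define NTHREADS 1024
static struct thread threads[NTHREADS];

static void scheduler_pass(void)
{
    for (int i = 0; i < NTHREADS; i++) {
        struct thread *t = &threads[i];
        /* "Fuzzy math": penalize recent CPU usage... */
        t->priority = t->nice + t->cpu_usage / 2;
        /* ...then decay the usage factor, forgiving past usage. */
        t->cpu_usage /= 2;
    }
}
```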
Linux uses dynamic timeslices, and therefore has no fixed-period dispatch or schedule routines. Timeslices are calculated based on how much time the thread spent sleeping during previous scheduling rounds. Threads that don’t sleep often are CPU hogs; threads that sleep a lot are interactive. Interactive threads are given longer timeslices, because if an interactive thread wakes up and it’s out of timeslice, it can’t run until the next scheduling round. CPU hogs take all of their timeslice and immediately want to run again, so they get shorter timeslices.
As threads exhaust their timeslices, they are moved from the active runqueue to the expired runqueue. When there are no more threads in the active runqueue, or when the oldest thread in the expired runqueue reaches a tunable age, the kernel swaps the active and expired pointers. This begins a new scheduling round. Since priority and timeslice adjustments occur asynchronously whenever a thread expires, there is no need to walk the thread table every second. Combined with the fact that starting a new scheduling round involves swapping two pointers, we end up with a completely O(1) scheduling system.
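The round-swap really is just a pointer exchange; a simplified sketch (these names are made up for illustration, and the real kernel structures are more involved):

```c
/* Simplified model of an O(1) per-CPU runqueue. */
struct prio_array {
    unsigned long nr_active;    /* tasks queued in this array */
    /* ...per-priority task lists and a priority bitmap live here... */
};

struct runqueue {
    struct prio_array *active;  /* tasks with timeslice remaining */
    struct prio_array *expired; /* tasks that used up their slice */
    struct prio_array arrays[2];
};

/* Starting a new scheduling round is O(1): exchange two pointers. */
static void start_new_round(struct runqueue *rq)
{
    struct prio_array *tmp = rq->active;
    rq->active = rq->expired;
    rq->expired = tmp;
}
```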
The different scheduling algorithms plug in the math for calculating timeslice adjustments and dynamic priority deltas. Staircase is named after the fact that its timeslice function looks like a staircase. It allows timeslices for a thread to change quite rapidly from one round to the next compared to the default algorithm. Sometimes this means a suddenly interactive thread will respond quicker, and sometimes this means that a CPU hog can block on I/O and then get a much longer timeslice to hog.
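Purely to illustrate the shape (this is a toy step function, NOT Con Kolivas’s actual formula): plot the slice against how far a task has descended and you get discrete steps rather than a smooth curve.

```c
/* Toy "staircase": each step down in dynamic priority halves the
 * timeslice, giving the plot its staircase shape. Invented numbers;
 * not the real Staircase algorithm. */
static int toy_staircase_slice_ms(int steps_down)
{
    int slice_ms = 100;              /* hypothetical base slice */
    while (steps_down-- > 0 && slice_ms > 5)
        slice_ms /= 2;               /* 100 -> 50 -> 25 -> 12 -> ... */
    return slice_ms;
}
```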
I think most people will be happier with the default scheduler. It’s less aggressive and therefore less unfair on average. Nicksched is a less aggressive algorithm that doesn’t forget as easily as Staircase does. It comes with a slight bit of overhead, but it can be really interactive without acting weird when high-priority CPU hogs appear on the runqueue.
NOTE: I use the term ‘thread’ to represent a schedulable entity (stack and registers), whereas a ‘process’ is a collection of resources (address space, file descriptors, etc.) that threads need in order to run. UNIX makes a distinction in code, but Linux doesn’t. Linux refers to both as a ‘task’, where a process is a task that has its own resources, and a thread is a task that just happens to share the resources of its parent task. It turns out that, because of nitty-gritty stuff like the latency of allocating memory, process and thread creation are significantly faster using Linux’s approach than using the (perhaps more sensible) UNIX method.
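To make the ‘task’ idea concrete, here is a minimal sketch using the real clone(2) flags; the same call creates a ‘process’ or a ‘thread’ depending only on what it shares (error handling and waiting omitted):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>

#define STACK_SIZE (64 * 1024)

static int worker(void *arg) { return 0; }

int main(void)
{
    char *s1 = malloc(STACK_SIZE);
    char *s2 = malloc(STACK_SIZE);

    /* A "process": a new task with its own copies of the resources
       (this is essentially what fork() does). */
    clone(worker, s1 + STACK_SIZE, SIGCHLD, NULL);

    /* A "thread": a new task sharing the parent's address space,
       filesystem info, file descriptors, and signal handlers. */
    clone(worker, s2 + STACK_SIZE,
          CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | CLONE_THREAD,
          NULL);
    return 0;
}
```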
Recently a (the?) problem with I/O was indeed found in -ck, but it had to do with the tunable /proc/sys/vm/dirty_ratio, which is 0 by default on -ck and 40 (or 30?) with mainline. Most likely, due to a bug somewhere in the I/O codepath, the system can stop responding while the kernel waits for data. So I’m not sure it’s Staircase’s fault that a CPU hog can block on I/O.
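For reference, the tunable is just a percentage readable from /proc; a tiny sketch (the comment is my summary of the kernel docs):

```c
#include <stdio.h>

int main(void)
{
    /* dirty_ratio: percentage of RAM that may be filled with dirty
     * pages before a writing process is made to do writeback itself. */
    FILE *f = fopen("/proc/sys/vm/dirty_ratio", "r");
    int ratio;
    if (f && fscanf(f, "%d", &ratio) == 1)
        printf("dirty_ratio = %d%%\n", ratio);
    if (f)
        fclose(f);
    return 0;
}
```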
I’d also like to point out that Con is/has been experimenting a lot with scheduler policies like SCHED_BATCH (background), SCHED_RR (realtime), and SCHED_ISO (user realtime). This work can be very valuable, IMHO. For example, SCHED_BATCH processes don’t slow down the system at all, not even the little bit they do on mainline, and they get larger timeslices IF they run, increasing performance. So you get an ultra-responsive system even when compiling, yet the compile still finishes as fast as or faster than before due to fewer context switches…
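For what it’s worth, mainline has had SCHED_BATCH since 2.6.16 too; a process can opt into it like this (minimal sketch, no error recovery):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* SCHED_BATCH: tell the scheduler this task is a non-interactive
     * CPU hog so it won't be treated as interactive. Priority must be 0. */
    struct sched_param p = { .sched_priority = 0 };
    if (sched_setscheduler(0, SCHED_BATCH, &p) != 0)
        perror("sched_setscheduler");
    /* ...long-running batch work (e.g. part of a compile) goes here... */
    return 0;
}
```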
I wonder why Linus is reluctant to include both schedulers in the mainline kernel… The BSD folks currently include two schedulers that are easily swappable (by uncommenting a single line in the kernel config file). FreeBSD 7 will actually have three schedulers (SCHED_ULE 2.0, SCHED_4BSD, SCHED_CORE). I don’t mean to say that FreeBSD is better in this respect – because 1) I don’t think so, and 2) I happen to like and use both Linux and FreeBSD – so please don’t take the discussion in that direction. I was just wondering whether this is merely politics (as your post would imply) or whether there are other, technical reasons…
As it happens, I have a 4GB XDMCP server with about 50 desktop users.
The machine is starting to get a bit tight on memory. Looking at the output of top, by far the biggest offenders on the RSS-minus-shared front were the 20 sessions of Firefox 1.5.
It turns out that the amount of memory cache Firefox allows itself is autotuned based on system RAM. If you have 32MB, it allows 2048KB for the memory cache. But if you have 4GB of memory, it says “Wow! I can use as much as I want” and allows up to 60MB. 20 x 60MB ~= 1.2GB. On top of that, they each use 50MB of disk for cache… and I have them all going through a pretty good-sized Squid cache on another machine.
What a waste!
I went through and set everyone to 2048KB. (Firefox really doesn’t like having the memory cache turned off.)
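For anyone wanting to do the same, the relevant prefs (these are the real Firefox cache pref names; values are in KB) can be dropped into each user’s prefs.js; the disk figure below is just an example value:

```
// Cap the memory cache at 2MB instead of letting Firefox autotune it
// from system RAM (value in KB; -1 restores the autotuning).
user_pref("browser.cache.memory.capacity", 2048);
// Disk cache is also in KB; the default is 50000 (~50MB).
user_pref("browser.cache.disk.capacity", 10000);
```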
It is sad to see even OSS projects focused entirely on single-user boxes, to the exclusion of multiuser servers, which are, to me, *very much* the natural way to deploy Linux on business desktops.
Granted, a multiuser system *should* have an admin who can tune these things for the particular system.
But system tuning is a *very* time-consuming activity, if done right. Intuition is notoriously misleading, and it is easy to screw things up because one *thought* they understood what a particular parameter did. It’s expensive.
It would be oh so much more efficient (time-wise) if apps like Firefox could be smarter. If they could look (even if just at startup) and say “Hey, this machine is under a fair amount of memory pressure; maybe I shouldn’t suck up all the memory I can get” instead of saying “4GB! Wheee! We can have a memory orgy!”
I agree that something needs to be done about that, but it must be done at the kernel level.
If you put a huge plate of food in front of somebody, they might not eat all of it, but they’ll eat until they think they’ve had enough. If you give them a smaller plate with less food than they would have eaten off the big plate, they usually won’t ask for seconds.
The same logic applies here. The kernel needs to give each process a smaller plate. It should scale the per-process pinned memory allotment more gradually with physical memory size. Obviously you are at the far end of the range of a very coarse scaling function. Perhaps the default should be closer to 8MB for a system with 4GB of physical memory.
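To illustrate “more gradually”: instead of a coarse lookup table, something like a curve that adds a fixed amount per doubling of RAM, capped near the 8MB suggested above (numbers entirely hypothetical):

```c
#include <stdio.h>

/* Hypothetical smoother autotune: the cache grows with the logarithm
 * of physical RAM rather than jumping in coarse steps. */
static unsigned cache_kb_for_ram(unsigned ram_mb)
{
    unsigned cache_kb = 1024;               /* floor: 1MB */
    for (unsigned mb = 32; mb < ram_mb; mb *= 2)
        cache_kb += 1024;                   /* +1MB per doubling of RAM */
    if (cache_kb > 8192)                    /* ceiling: 8MB, per the   */
        cache_kb = 8192;                    /* suggestion above        */
    return cache_kb;
}

int main(void)
{
    unsigned sizes[] = { 32, 256, 1024, 4096 };
    for (int i = 0; i < 4; i++)
        printf("%u MB RAM -> %u KB cache\n",
               sizes[i], cache_kb_for_ram(sizes[i]));
    return 0;
}
```

With these invented constants, a 32MB box gets 1MB of cache and a 4GB box gets 8MB, instead of the 2MB-to-60MB jump described above.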
It is sad to see even OSS projects focused entirely on single-user boxes, to the exclusion of multiuser servers, which are, to me, *very much* the natural way to deploy Linux on business desktops.
The particular problem comes from Firefox being primarily a Windows application, which also happens to run on Unix/Linux.
Forgotten are the days when Mozilla’s survival was basically secured by users wanting to stay on Unix/Linux and not having to switch just for a browser.
This might be a reason, but still, Firefox’s memory consumption is an issue on Windows too, isn’t it?
My naive question is this: what prevents them from writing efficient code like the Opera folks? What makes Gecko such a resource hog? (I do think it is the rendering engine, because Epiphany and other browsers using Gecko are similarly heavyweight.)
As you will have noticed in the Web-browsing section above, it is often the case that the best memory savings come from using an app that is tightly integrated with your desktop environment. This is because such apps make heavy use of shared libraries that are embedded into the DE and are most likely already loaded. For instance, Konqueror is the file manager for KDE as well as a Web browser; hence, it uses much less memory than Firefox when run on a KDE system, because much of its functionality is already loaded by other apps. Similarly, if you then wanted to use an RSS aggregator, Akregator may be a good choice, as it will most likely use those same libraries again.
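If you want to measure this yourself, /proc/&lt;pid&gt;/smaps (kernel 2.6.14+) breaks each process’s resident memory into shared and private portions; here is a small sketch that tallies them (the pmap tool shows similar information):

```c
/* Sketch: tally shared vs. private resident memory for one process by
 * summing the per-mapping fields in /proc/<pid>/smaps (2.6.14+). */
#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64], line[256];
    long shared = 0, priv = 0, kb;

    snprintf(path, sizeof path, "/proc/%s/smaps",
             argc > 1 ? argv[1] : "self");
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "Shared_Clean: %ld kB", &kb) == 1 ||
            sscanf(line, "Shared_Dirty: %ld kB", &kb) == 1)
            shared += kb;
        else if (sscanf(line, "Private_Clean: %ld kB", &kb) == 1 ||
                 sscanf(line, "Private_Dirty: %ld kB", &kb) == 1)
            priv += kb;
    }
    fclose(f);
    printf("shared: %ld kB, private: %ld kB\n", shared, priv);
    return 0;
}
```

Run it with a PID (say, one Konqueror and one Firefox process) to see how much of each one’s RSS is actually shared pages.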
That PC-BSD does the opposite of this, preventing libraries from being shared among different apps, is what bothers me most about that otherwise good system…
How many times do I have to dispel this myth?
You have the bunch of applications that come with your install CD – they work exactly like on any *nix system (in fact, they are actually pkg_add-ed or portinstalled packages rolled into a big tarball, along with package registry information stored in /var/pkg).
Now there could be some truth in what you say when it comes to PBIs – but actually, even if you install programs from PBIs, they will use the system’s shared resources 90% of the time. That is because when you build a PBI, you make certain assumptions about the presence of libraries in the base install. For instance, when I created a PBI for Scribus, I didn’t have to include the Qt library, because it is safe to say that it is present in all PC-BSD installs.
This holds true for most apps packaged as PBIs – they only hold the extra libs that the packager cannot assume to be present on your PC-BSD install. These extra libs are less than 10% of the actual dependencies. You can even tell this from the size of the PBIs. Even a large PBI like OpenOffice contains only a fraction of its actual dependencies – if it contains anything at all! The OpenOffice 2.1 PBI is 114MB, while the standard FreeBSD tbz package is 113MB!
So you need not be bothered any longer – the fact that PBIs contain extra libs doesn’t affect your system’s memory usage in any significant way. Really. I’m a FreeBSD user, but I used PC-BSD for a while, and memory consumption (using approximately the same application stack) was similar, because PC-BSD already contains 90% of the dependencies any app would need by default. And that remaining 10% doesn’t translate into 10% more memory usage either, for more likely than not these will be shared libs that no other programs would use anyway (those that are common dependencies, as I said, are already present in the default install). I’d put PC-BSD’s extra memory needs in the range of 1-5% (or zero, depending on the software you need) – but this is wildly speculative.
> Even a large PBI like OpenOffice contains only a fraction of its actual dependencies – if it contains anything at all!
I don’t think this is a realistic or fair comparison. OpenOffice suffers from NIH syndrome and doesn’t use outside libraries much.
What would these figures be with things like KDE or GNOME applications, which depend heavily on shared libraries? Are they somehow hacked to live within the already-installed system?
I don’t think this is a realistic or fair comparison. OpenOffice suffers from NIH syndrome and doesn’t use outside libraries much.
I hadn’t thought of this – but still, it has lots of runtime dependencies, some of which are shared libraries.
Anyhow, this works even better with KDE apps:
What would these figures be with things like KDE or GNOME applications, which depend heavily on shared libraries? Are they somehow hacked to live within the already-installed system?
They don’t have to be hacked – you simply don’t include any extra libraries in KDE-based PBIs, unless there are some that are not available in the default install. In those rare cases, they would probably be libraries that are not shared with many apps anyway, so the point is moot.
Your PC-BSD install CD contains a complete system. All the dependencies of all the apps that come preinstalled as part of a PC-BSD install are there. Additional packages available as PBIs only have to include the extra libs that nothing in the default install already provides. Take a look at this list:
http://www.pcbsd.org/?p=releasenotes
Now name a program that you think is an absolute necessity for a desktop-oriented distro targeted at Windows users. You’ll see that most of the dependencies are already there, and PBIs will automatically use those libraries – so this issue of PC-BSD not using shared libraries is blown out of proportion by people who don’t really know what they are talking about.
I think the PC-BSD developers might be partly responsible, though – a clear definition of the packages that can safely be regarded as present (being part of the basic install) in PC-BSD over the long term would be helpful for PBI developers, because I can imagine some including extra libs even though they are already present on the system and can be expected to remain present. I think I will mention this on their forums sometime. This is especially true of KDE, which maintains API stability for a ridiculously long time – so even if someone built a PBI relying on kdelibs 3.0, that PBI will work with current kdelibs. Similarly, xorg maintains backwards compatibility fairly well. I built OpenOffice on FreeBSD 6.1 with xorg 6.9.0, and it works flawlessly on my current install of FreeBSD 6.2 and the latest release of modular xorg (7.2). And if you think about it, xorg + KDE are the major providers of shared libraries, covering 90% of use cases for desktop applications.
Last time I installed OO I wound up with a JVM I never wanted, so I switched back to AbiWord and Gnumeric.
Yes, I was thinking of PBIs specifically, but I do appreciate knowing that most PBIs can depend on the libs in the base install rather than including their own. I hope that gets stressed, though, because the most touted advantage of PBIs is that they are self-contained, and I haven’t seen a list of “stable” base libs/programs that packagers can depend on in the docs.
You are right about the need for such a list – and it is absolutely doable, for the major shared library providers (see my reply above) maintain API stability for a long time.
I thought I mentioned the need for such a list when I was dabbling with PC-BSD – but I may be wrong. I think I will suggest it again, for it will help clarify a few things.
The much-touted self-containedness is a misleading term in this respect. Obviously, when you create a PBI for a KDE app, you don’t have to include kdelibs, Qt, and everything KDE in the PBI. This is an extreme example of course, but it holds true for more complex packages as well. And just think about it: the more heavily shared libraries are utilized (the more they are shared between applications), the more likely it is that they are included in the default install.
I’m also sorry for my initial harsh tone – I was having this discussion with folks here previously, but they didn’t seem genuinely interested in an explanation: they just kept repeating their claims even after things were explained to them numerous times. I don’t remember their nicknames, so I assumed you were one of them (or was it just a single person?) – sorry about that.
Swap is never a solution for memory issues. I have seen the GUI on RHEL 4.4 AS almost freeze when the HDD was being hit by a P2P write workload while at the same time being used by the Java plugin in Firefox to render a popup with 3D graphics.
Actually, the OS should stop you from executing anything new when your physical RAM is 80% saturated, while allowing the already-running applications to reach a 90% threshold. There should be an OS popup telling you that an application is exceeding your RAM limits and asking you to choose what to do (a rough sketch of such a check follows below):
1. Don’t start this application.
2. Continue the application startup (and risk performance degradation).
3. Close some of the running applications to free more RAM (choose the applications you want to stop: a, b, c, …).
4. Delayed startup: automatically start the application whenever used physical RAM drops below a chosen level (drop-down menu: 70%, 75%, 60%, …).
I wish Linux had this kind of management, to let normal users master their system’s performance.
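A rough sketch of what the launch-time check could look like, reading /proc/meminfo (the 80% policy and the choice to count buffers/cache as reclaimable are assumptions from this proposal, not anything a real distro ships):

```c
#include <stdio.h>
#include <string.h>

/* Return the value (in kB) of one /proc/meminfo field, or -1. */
static long meminfo_kb(const char *key)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    long kb = -1;
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, key, strlen(key)) == 0 &&
            line[strlen(key)] == ':') {
            sscanf(line + strlen(key) + 1, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void)
{
    long total = meminfo_kb("MemTotal");
    /* Count page cache and buffers as available, since they shrink
     * under memory pressure. */
    long avail = meminfo_kb("MemFree") + meminfo_kb("Buffers")
               + meminfo_kb("Cached");
    if (total > 0 && avail >= 0) {
        long used_pct = 100 - (avail * 100 / total);
        if (used_pct > 80) {   /* the 80% threshold proposed above */
            fprintf(stderr, "RAM %ld%% used: refusing to launch\n",
                    used_pct);
            return 1;
        }
    }
    /* ...otherwise exec the requested application here... */
    return 0;
}
```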
There should be an OS popup telling you that an application is exceeding your RAM limits and asking you to choose what to do
Good idea! Perhaps that would force the Firefox developers to finally fix their exceedingly high memory (and CPU) consumption issues.
The problem is that some people just couldn’t see it.
Assume you’ve got a server running Apache which is taking all your memory because your site got hit by /., digg, or similar.
Which desktop is there to pop up that notice on?
In the desktop case, then sure, KDE or GNOME could add something like that. Both of them have panel applets that show CPU load, so it should be a simple thing to write an applet that stops launches and alerts when a threshold trips – but that is the place for it, not the kernel, etc.
This is the problem when people start saying “Linux needs” rather than thinking about users other than themselves. Linux is the kernel. Distributions produce desktop systems, servers, etc. on top of that.
You should say “Linux desktop environments need …”, but perhaps I’m being overly pedantic as a server admin who uses X on maybe one out of 250 of my machines.
Assume you’ve got a server running Apache which is taking all your memory because your site got hit by /., digg, or similar.
Which desktop is there to pop up that notice on?
Gkrellm
http://members.dslextreme.com/users/billw/gkrellm/gkrellm.html
Not that my Apache servers would ever send me more than a page, but if you insist on graphical notification, there’s one that does that.
Swap is never a solution for memory issues. I have seen the GUI on RHEL 4.4 AS almost freeze when the HDD was being hit by a P2P write workload while at the same time being used by the Java plugin in Firefox to render a popup with 3D graphics
I’d rather say swap is a solution, but it’s not magic. If your drive has high latency, the OS won’t magically make it have less latency.
I have SCSI disks, and I never see such freezes. Actually, I do, but only when one app goes mad and eats up all the memory before crashing and releasing it.
With IDE disks (on a laptop), I see slowdowns under load.
Swapping is nature’s way of telling you to get more RAM.
I just asked myself whether Epiphany, which is based on Mozilla code, suffers from the same drawbacks as Firefox or whether it is the better option, especially in a 50+ user setup.
“””I just asked myself whether Epiphany, which is based on Mozilla code, suffers from the same drawbacks as Firefox or whether it is the better option, especially in a 50+ user setup.”””
Since we run Gnome desktops, Epiphany would indeed be a win for us regarding memory usage. Unfortunately, Epiphany is quite nontrivial to get compiled on CentOS 4, which is based upon Gnome 2.8.
But that will change when we upgrade to CentOS 5.
Also, though, I must admit to having a political reason that I will probably keep my clients on Firefox: Visibility.
I have 50+ mostly nontechnical people who know what “Firefox” is and are comfortable with it because they use it at work.
These are people who would probably never consider switching “The Internet” out for anything else on their home computers. But they just *might* now.