“It has been almost two years since LWN covered the swap prefetch patch. This work, done by Con Kolivas, is based on the idea that if a system is idle, and it has pushed user data out to swap, perhaps it should spend a little time speculatively fetching that swapped data back into any free memory that might be sitting around. Then, when some application wants that memory in the future, it will already be available and the time-consuming process of fetching it from disk can be avoided. There is a vocal set of users out there who will attest that swap prefetch has made their systems work better. Even so, the swap prefetch patch has languished in the -mm tree for almost all of those two years with no path to the mainline in sight. Con has given up on the patch (and on kernel development in general). It is an unfortunate thing when a talented and well-meaning developer runs afoul of the kernel development process and walks away. So it is worth the trouble to try to understand what went wrong.”
Personally, I *like* that the kernel maintainers are conservative when it comes to memory management. This article sheds some light on the sometimes tense environment of Linux kernel development.
Anal more like…
Yeah. The kernel devs should bow to the will of the Kolivas fan club, the Reiser fan club, the Gooch fan club, the Raymond fan club…
Only in that way can they please everyone, which should be their ultimate goal.
They’re never a good thing. A very good article. People shouldn’t be surprised that such changes are difficult to push forward, even with Linux.
How do you know what exactly to prefetch? Suppose the pages that have been swapped out exceed free memory; what are you going to prefetch then? Clearly you need lots of heuristics, and just like any heuristic, they may work well in one situation and create mostly overhead with no performance gain in another. OK, so there are some users who experienced some performance improvements. That’s anecdotal evidence, and nothing more than that.
Actually it’s simple. The topic summary even explains it.
Whatever gets swapped to disk due to a lack of available free RAM, slowly gets moved back into RAM when free space becomes available. No fancy heuristics needed.
Oh, now I see. It’s soo simple. (sarcasm intended) Thanks for the clear and detailed explanation. At least it makes clear why the logic never found its way into the main tree.
Fine, I’ll try to give you a more complex explanation.
So I have an application that requires 500MB of RAM during runtime, but my system only has 512MB of RAM total.
Now let’s assume I was already using 256MB of that RAM for Xorg, GNOME and Firefox before I started the application that needs 500MB of RAM at runtime.
That 256MB of RAM now needs to be swapped out to disk, since 256MB + 500MB > 512MB; it simply will not fit in the available RAM. So we copy blocks of data from RAM onto a sequentially laid-out swap file on the hard disk.
Remember that RAM holds blocks of data of different sizes. It’s basically one big array, and every application knows where its own data lives within it.
After my application finishes, 500MB of RAM should be free again. Remember that we previously swapped out 256MB of data to disk.
So now that my application is finished, I expect Firefox to load up without lag, but all of its data is located on the swap partition, so the virtual memory system now has to quickly copy Firefox’s data back from swap into RAM before user interaction is possible; this is a slow and painful process (from the user’s perspective).
The copy isn’t complicated, again we are just copying entire blocks of data from the hard drive back into RAM. Every OS’s VM system that has support for swap files already does this operation.
What does swap_prefetch do that’s different? Instead of waiting on the user to try and interact with Firefox before copying data from the swap to RAM, it slowly over time copies in the data as soon as free RAM is available. Again, this isn’t complicated, and no additional heuristics are needed since the VM system already knows how to copy data from swap to RAM.
swap_prefetch doesn’t care what type of data or what application the data belongs to. It just copies the entire 256MB of swapped-out data back into RAM in sequential order, as soon as possible, without hurting performance.
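To make the arithmetic concrete, here’s a toy simulation of that scenario (purely my own illustration; sizes in MB, the kernel’s own footprint ignored). Strictly speaking only the overflow, 244MB, has to go to swap, not the whole 256MB:

#include <stdio.h>

int main(void)
{
    int total_ram = 512, in_use = 256, app = 500;

    /* How much existing data must move to swap to fit the new app. */
    int need = in_use + app - total_ram;
    int swapped = need > 0 ? need : 0;          /* 244 MB in this example */
    in_use = in_use - swapped + app;            /* RAM is now full: 512 MB */

    printf("while app runs: %d MB in RAM, %d MB in swap\n", in_use, swapped);

    in_use -= app;                              /* the big app exits */
    printf("after exit: %d MB free; prefetch can pull the %d MB back\n",
           total_ram - in_use, swapped);
    return 0;
}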
To continue your example: you click on Firefox, and because there are no heuristics implemented, the prefetched memory did not include what Firefox needs. Now, in order to switch to Firefox, the memory that was just read in has to be swapped back out to make room for the memory segments Firefox needs.
Yup, this would be true. But if you look at it from a performance perspective, even if the wrong pages were swapped in for Firefox in this example, the very worst case is that the VM has to bring in the correct pages… as it normally would without pre-fetching.
Actually, there would be a slight penalty for writing back to disk the unneeded pages in a low RAM situation. But most likely at least some portion of the swapped in pages are useful. All the benchmarks I’ve seen on lkml show a significant positive improvement.
Actually, swap-prefetch is smart enough to not evict prefetched pages from the swap. The prefetched pages don’t have to be swapped out again if the RAM is needed by something else.
The incorrectly prefetched pages can just be overwritten by the Firefox pages in your example.
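A tiny sketch of why that reclaim is essentially free (my own illustration with made-up names; the kernel really tracks this with page flags and LRU lists, not a struct like this):

/* A prefetched page still has an identical copy sitting in swap,
 * so it behaves like a clean cache page. */
struct prefetched_page {
    int dirty;        /* 0 while the swap copy is still identical */
    long swap_slot;   /* where that copy lives */
};

/* Reclaiming a clean prefetched page costs zero writes: the frame is
 * simply reused and the swap copy stays valid. Only a dirty page
 * would have to be written out again. */
int reclaim_writes_needed(const struct prefetched_page *p)
{
    return p->dirty ? 1 : 0;
}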
“swap_prefetch doesn’t care what type of data or what application the data belongs to. It just copies the entire 256MB of swapped-out data back into RAM in sequential order, as soon as possible, without hurting performance.”
I’m not going to negate the whole issue, but you’re wrong. There’s this nice, everyday use case when:
a) you start the big program
b) you end the big program
c) you wait, or just listen to music for a while
d) you start the big program
In this case, with prefetch, you’d get some bad ass (useless) swapping going on. First the OS would slowly move the old apps back (which you didn’t need), and then swap them out again because you need those 500MB again.
So it’s a two-way highway. I don’t think it’s worth it; also don’t forget it adds background slowness (the prefetching itself) and some minor slowdowns from managing the whole thing.
It reminds me of Vista’s SuperFetch and prefetch, which are both useless and have to be turned off, along with indexing.
“In this case, with prefetch, you’d get some bad ass (useless) swapping going on. First the OS would slowly move the old apps back (which you didn’t need), and then swap them out again because you need those 500MB again.”
If the design of swap prefetch were that stupid, we wouldn’t even be discussing the possibility of getting swap prefetch merged into mainline…
As I understand it, swap prefetch leaves the fetched data in the swap, so it doesn’t result in that kind of “swap madness”.
From http://ck.wikia.com/wiki/SwapPrefetch :
In short, swap prefetch makes use of a computer’s idle time to swap back in applications that may have been swapped out while using another application. Interestingly, swap prefetch does not evict from swap data that was swapped back in to physical memory. Therefore, an application that was swapped back by prefetching may end up needing to be swapped out again, but it’ll happen essentially for “free,” thereby helping to reduce perceptible desktop lag due to I/O bottlenecking.
Yeah, but if you can’t fit all of swap into RAM, then you need to think about what should go back, when it should go back, how much, and so on… There are situations where it’s definitely a plus, but the overhead from ‘thinking’ isn’t always worth it (especially in power-limited situations like laptops).
Hence swap-prefetch doesn’t activate by default when running on battery. I believe there is a tunable for this. There is also a tunable for the frequency (default of 5 seconds) at which swap-prefetch fetches data.
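If you want to poke at it, something like this should work; I’m going from memory on the sysctl name, so treat the path and value semantics as guesses that may differ between -ck patch versions:

#include <stdio.h>

/* Hypothetical toggle for the swap prefetch sysctl; the exact /proc
 * path varies by patch version, so check the documentation in your
 * patched tree first. */
static int write_sysctl(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;              /* probably not running a -ck kernel */
    fprintf(f, "%s\n", val);
    return fclose(f);
}

int main(void)
{
    /* 0 = off, non-zero = on (semantics differed across versions) */
    return write_sysctl("/proc/sys/vm/swap_prefetch", "1") ? 1 : 0;
}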
Complicated logic is not needed (a sketch, with made-up helper names):

while (swap_in_use() > 0 && !ram_full()) {
    prefetch_small_chunk();   /* copy a little data from swap back into free RAM */
    sleep(5);                 /* the 5-second default mentioned above */
}
Finally, in response to one of the posters: yes, a benchmark was made! RTFA.
“but the overhead from ‘thinking’ isn’t always worth it”
So couldn’t swap prefetch be optional and configurable, so that those who need it could turn it on, and those who oppose it could switch it off? Or distros like Debian could offer optional prefetch-enabled kernels, like they already have low-latency kernels for music editors etc.?
I’m not against its inclusion and I’m aware that it’s not hard to do / configure; it’s just that I didn’t feel anyone had posted a case in which it would actually be “bad”.
I think everyone was thinking of a very simplistic mechanism based on time stamping, a last out first in type of thing. (I might be wrong about this.)
I don’t think anyone is talking about prefetch on the level that Vista does it, where the OS does all sorts of trend-analysis calculations to predict what the end user is going to use first or next.
Really, the only things I think prefetch is masking are hard drive latencies and context-switching penalties.
“Whatever gets swapped to disk due to a lack of available free RAM, slowly gets moved back into RAM when free space becomes available. No fancy heuristics needed.”
There’s no such thing as free memory. For example:
A bunch of memory belonging to process A gets paged out when you start process B. On desktop systems, processes rarely exit on their own. More often, they go to sleep when they have nothing to do. So when process B goes to sleep, should swap prefetch page out memory from B in favor of memory from A, which is also asleep? Shouldn’t we wait to see if B wakes up first?
Traditional OS theory holds that the most recently used pages have the greatest probability of reference. So when process B goes to sleep, its pages still have a higher chance of reference than the pages from process A, which has been sleeping for longer. Who’s to say which process will wake up first? Our best guess is that the last process to go to sleep is the most likely to wake up.
Now let’s consider the case where a process really does exit. For example, you finish watching a video and close your media player. Should swap prefetch evict the player’s memory? What happens if you immediately restart the player on the next video in the directory? Oops, we just filled “free memory” with pages from disk. It didn’t occur to us that you might restart the process or run it repeatedly in a script.
The LRU eviction policy isn’t beyond criticism, but it’s the best policy we know. Until resident pages are least-recently-used, they don’t deserve to be swapped out. I’m not convinced that we should ever evict pages from memory until we need to in order to satisfy demand (hence demand paging), but I’m sure that we should not evict pages in favor of less-recently-used pages on disk.
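For readers who haven’t met it, LRU victim selection looks roughly like this in code (a toy illustration with invented names; the kernel actually uses active/inactive page lists rather than per-page timestamps):

#include <stddef.h>

struct page_info {
    unsigned long last_access;  /* e.g. a tick count */
    int resident;               /* 1 = in RAM, 0 = on swap */
};

/* Evict the resident page that was used least recently; returns n if
 * nothing is resident. Recently used pages survive, which is exactly
 * the property swap prefetch bends. */
size_t pick_victim(const struct page_info *pages, size_t n)
{
    size_t victim = n;
    for (size_t i = 0; i < n; i++) {
        if (!pages[i].resident)
            continue;
        if (victim == n || pages[i].last_access < pages[victim].last_access)
            victim = i;
    }
    return victim;
}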
If you asked a computer scientist to broadly classify the Linux kernel, she would say that Linux is a UNIX-like kernel with preemptive multitasking and demand paging. Swap prefetch alters the most basic properties of the kernel. It deserves at least as much debate as kernel preemption, which was one of the key features introduced in the 2.6 kernel series. Some kernel developers are still unsure whether full or even voluntary kernel preemption is worth the overhead and complexity.
Swap prefetch is at least as contentious. Instead of preempting running tasks, we’re preempting resident pages. While tasks become more deserving of CPU time while they wait, pages become less deserving of physical memory frames while they wait. So the issue goes beyond overhead and complexity versus granularity. We’re suggesting that in some cases, the most recently used pages should be evicted. We’re not tweaking paging to become more ideal, we’re contemplating that maybe our ideal is sometimes the opposite of what we really want to do.
There hasn’t been nearly enough debate on swap prefetch. Not nearly enough. But the Wikipedia entry on Virtual Memory already hails Con Kolivas’ swap prefetch as an improvement. It doesn’t even mention Vista’s SuperFetch. While there are plenty of articles that gush about SuperFetch in theory, benchmarking results are hard to come by. In fact, benchmarkers consider SuperFetch a considerable adversary.
How are we all of a sudden so sure that this is a good idea? We already know that Microsoft’s flash cache technologies are far less beneficial than we’d been led to believe. What about the fastest growing sector of the client market, mobile and ultra-mobile form factors? With power and disk throughput at a premium, does swap prefetch make sense?
So before we make fundamental changes to the virtual memory model, let’s make sure that we aren’t pandering to some edge cases in certain environments. Let’s see if this is generalizable to mobile and server environments. If not, then is it really a good idea for anybody?
Point is, the only thing swap prefetch does is replace a portion of the longest-unused caches with the most recently used application data from swap, in such a way that the memory filled with this swap data is marked as cache; so when that memory is needed, the data doesn’t have to be written back to swap (it’s there already).
So there is indeed one scenario where swap-prefetch does the wrong thing: when you need that disk cache more than you need the application data. Using swap prefetch assumes you need the most recently used application data more often than you need those oldest disk caches.
All the ‘logic’ used here is just what is already in the VM subsystem (e.g. LRU info).
As this presumption is generally right on a desktop, it works out fine. What Vista’s prefetch tools do is much more complex, and hence controversial and prone to errors.
You speak of fundamental changes to the VM system. Swap prefetch is no such thing. It is just a daemon-like tool which does the above. It’s not complex; it’s actually pretty small. I think the most complex part might be where it tries to figure out whether it should run, to ensure it doesn’t disturb normal use of the desktop.
Absolutely, and I can say that for Vista, the jury is still out on SuperFetch. Although I have personally seen some nice improvements with SuperFetch, reading through user sites it seems YMMV.
Yeah, Vista can seem more responsive when dealing with certain desktop workloads but for server systems, I doubt SuperFetch would be a good idea and I doubt we will see any great improvements in Windows Server because of it.
Then again, time will tell.
No thanks. I’d rather let my system keep sleeping processes’ memory on swap and use free RAM for disk buffers.
You do realize that on a desktop system, the VAST majority of processes spend the VAST majority of their time sleeping?
How about maintaining a working set of non-I/O-intensive processes (in addition to maintaining the working set of pages for those processes)? Then we could prefetch the working-set pages of those working-set processes.
just my $0.02
Tejas Kokje
Point is, the only thing swap prefetch does is replace a portion (it’s limited to a certain % of the RAM) of the longest-unused caches with the most recently used application data from swap, in such a way that the memory filled with this swap data is marked as cache; so when that memory is needed, the data doesn’t have to be written back to swap (it’s there already – essentially an almost-free lunch).
So there is indeed one scenario where swap-prefetch does the wrong thing: when you need that old disk cache more than you need the most recently swapped-out application data. Using swap prefetch assumes you need the most recently used application data more often than you need those oldest disk caches.
All the ‘logic’ used here is just what is already in the VM subsystem (e.g. LRU info).
As this presumption is generally right on a desktop, it works out fine. What Vista’s prefetch tools do is much more complex, and hence controversial and prone to errors.
Does anybody else think the argument against including swap-prefetch is just nonsense?
I’ll give the non-inclusion people one valid argument. Con no longer wants to maintain the swap-prefetch code, and for good reason. So unless somebody can be found who wants to maintain the code, including it is a no-go.
But now, what is the harm in slowly pulling data from the swap file back into RAM? Obviously this is intended mostly for people with small amounts of RAM, i.e. around 512MB or less, where today’s memory-hungry applications can quite feasibly consume it all. On my own system I routinely use 1GB of RAM for caching and applications, and sometimes even start to swap when memory usage goes past my 2GB of RAM. Swap prefetch is not really needed on my system, but for many others it’s a win-win situation.
The non-inclusion supporters argue that perhaps the problem is with the applications, or with some other virtual memory system bug. OK; but unless developers go back and rewrite their applications to consume less RAM, swap prefetch is the only viable solution.
What other options are there? What other options fit in a single, uncomplicated file, swap_prefetch.c? Swap prefetch has already shown significant performance improvements, and no obvious regressions.
From the article:
“To attack the second question we could start out with bug reports: system A with workload B produces result C. I think result C is wrong for “reasons” and would prefer to see result D.”
OK, so system A with workload B uses 500MB of RAM on a 512MB system. The solution? Magically make workload B consume less RAM… Maybe not magic, but assuming we can somehow “fix” the amount of RAM needed for workload B is naive.
Perhaps the solution is to do a “swapoff” and thus have no swap at all. The system will just stop working after you run out of RAM. Heh.
Forget that! My development machine has 16GB of RAM so I don’t need swap prefetch!
> I’ll give the non-inclusion people one valid argument.
Do you have a benchmark?
No benchmark, no “valid argument” to include swap-prefetch.
PS: My computer: 512 MB.
Numerous benchmarks have been done. In essence, it’s pretty simple: the perfect benchmark shows swap prefetch speeding up application use by exactly the difference between (almost random) disc access and reading from memory.
The only thing swap prefetch does is replace a portion of the least-used disc caches with the most recently used application data from swap.
Using swap prefetch assumes you need the most recently used application data more often than you need those oldest disk caches.
As on a desktop this presumption is generally right, it works out fine.
When it is wrong, you have to re-read the data which was in the caches, which is generally laid out contiguously on the disk (fast). When it is right, your app is back in a fraction of a second, instead of forcing you to wait while it is swapped in (almost random access, slow).
Unless you never use swap space because you have plenty of free RAM left;-)
In which case it simply does nothing at all, and you won’t ever notice it even exists
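To put rough numbers on that random-versus-sequential asymmetry (all figures are my own assumptions for a 2007-era desktop disk, not measurements, and real swap-in clusters reads, so the worst case below is pessimistic):

#include <stdio.h>

int main(void)
{
    double mb = 256.0;                    /* swapped-out application data */
    double pages = mb * 1024 / 4;         /* 4 KiB pages */
    double seek_s = 0.008;                /* ~8 ms per random page read */
    double seq_mb_s = 50.0;               /* sequential throughput */

    /* Demand paging worst case: one random read per page. */
    printf("demand swap-in (random):    ~%.0f s\n", pages * seek_s);
    /* Re-reading evicted file cache later, laid out sequentially. */
    printf("cache re-read (sequential): ~%.0f s\n", mb / seq_mb_s);
    return 0;
}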
I always have it turned off completely on my PC; I’ve never maxed my RAM out… Guess I’m not much of a power user!
I wouldn’t necessarily agree it is just nonsense, but I’ll admit I’m cynical over the whole thing.
Linus has put aggressive patches into the kernel before under the logic that -mm testing can only go so far, and at some point it needs to live or die under fire.
However, as someone who subscribes to lkml and scans through the posts, I have to admit there is something to the point that CK’s proponents are a little over-zealous at times. I can understand why this undermines the process.
I do think it’s a drag that CK is leaving. I’ve used -ck on all of my kernels since 2.6.18 and can feel an improvement in GUI responsiveness, even if it’s nothing I can quantify in numbers. I have run various -mm kernels, but I can’t attest to the effectiveness of swap-prefetch since I don’t use swap any more. I have noticed a perceivable improvement with adaptive readahead though, which is another limbo patch that has existed for some time in -mm, so it makes me wonder.
I won’t second guess the devs decisions, but I will admit I sometimes wonder about the criteria that goes into those decisions…
Just my 2 pennies…
BTW, swap prefetch is by no means ‘aggressive’. Its most complex part might very well be where it tries to be as unobtrusive as possible… And it’s pretty solid at that.
It uses mostly already-there logic, can be easily (and on run-time) turned on and off, and it works in such a way that the memory filled with the swap data is marked as cache, so when it is needed, it doesn’t have to be written to swap (it’s there already).
I can’t believe how many commentators, in that LWN.net article too, suggested simply buying more RAM as a perfect solution to Linux memory and swap management problems. It feels too much like hiding rubbish under the carpet.
Sure, more RAM is always good, and RAM may be relatively cheap these days. But most ordinary PC users don’t see why they should constantly spend more money on hardware upgrades just to keep up with ever more bloated software demands. They expect that the PC they bought new just a couple of years ago should still do the job as it is; and why shouldn’t it?
A scenario that may be quite familiar to many Linux users: a GNOME or KDE desktop environment, after some heavy use, may get and stay slow even long after the programs that used that swap and RAM have been closed. Often it may be better to restart the PC, or at least X, in order to free memory and swap again. Why?
I’m sure lots of things could be more efficient in desktop environments and desktop software, and that may well be where the real bloat and bugs are, rather than in kernel space itself. But the Linux kernel is very important too. It also matters what sort of message the kernel developers send to other software developers: what are, and should be, the main ideals and goals when developing software for Linux? Is inefficient memory management only a small, trivial problem that can be solved by simply upgrading your PC hardware, or a serious software-level problem that should be solved?
Furthermore, it isn’t always possible or feasible to buy more RAM, due to the limited number of slots, the maximum capacity supported by the motherboard, and constantly changing memory form factors (DDR1 is already becoming rare and expensive).
I don’t think it’s entirely fair to lay the blame at Linux itself: whilst it’s certainly true that there’s always room for improvement in the kernel’s memory management strategies, and that the kernel has grown over the years (remember when “make zImage” produced a kernel that fit on a floppy?), as Linux moves further into the desktop arena people are running a lot more on their Linux systems now, and many of the things they’re running are memory hogs by anybody’s standards.
It’s by no means a problem limited to Linux, of course—other operating systems suffer just as much from it, and you are limited if you’ve got less memory than would be ideal in what you can actually do about it.
I know when I first started using Linux, running X was a luxury (not least because it was quite sluggish on a 386), MySQL didn’t exist, and Apache may not even have done either (I could be wrong about that part, though). Take those out of the equation and you find that you’ve got loads of spare RAM and a very snappy system indeed… but then you’re pretty limited in what you’re going to do with it.
As RAM became less of a scarce resource, more people ran X setups (the really limited ran twm, the Sun geeks ran ol(v)wm, and pretty much everybody else ran fvwm), but largely just so that you could squeeze as many xterms as possible into 1024×768. The desktop environment, with all the components and background services it brings (many of which won’t stay swapped out for long, because they’re in use whenever you interact with the DE), was only something you ran on “proper” Unix systems with more memory than most of us could afford.
Some time later, KDE and GNOME variously appeared on the scene, and people did complain that they were slow. People complained about Mozilla, too (but then, it was huge). Disks got bigger, processors faster and RAM cheaper, and people complained less and less. Everything was good, because everything was quicker. As time went by, though, more people developed useful stuff that people wanted to use, and the existing applications grew over time as their user-bases did.
The end result is that invariably people want to have their cake and eat it: they want to run lots and lots of programs that inevitably consume more resources than are readily available. Thanks to the wonders of virtual memory, that’s cool—the OS will fake it. The problem is that the expectation is there that not only should there be no performance drain when you add more programs into the mix, but that swapping things in and out is something you shouldn’t notice happening. Eventually, you’re going to get to the point where the amount of data you’re shifting to and from disk is pretty immense by the standards of a few years ago, and that’s just to cope with you Alt+Tab cycling through running applications.
Swap prefetch helps this, certainly, because it lessens the effect of the “swap hang” effect, but the underlying problem is still there: very few people writing software that gets used on modern computer systems actually care about the RAM footprint. 30MB here, 45MB there—it all adds up, and it adds up really quickly.
Hmmm. I mostly know of the attitude towards resource usage in KDE, of course, but I think I should comment here.
You say very few people care about RAM usage, etc. Well, the problem is that there is a trade-off between the complexity of the software and the time needed to write it versus resource usage. Often you could optimize the software more, but it would degrade readability, and of course take time in itself. Bad readability is a big NO-NO in FOSS. It’s why we are slowly transitioning to higher-level languages: they increase the speed of development at the expense of higher resource usage. Sorry about the latter, but the former is important for the survival and advancement of FOSS…
Oh well, reading what you say I’m constantly reminded of Windows :o)
All this makes me think of the recent talk about how Vista must be tuned so as not to overload the limited machines that UMPCs are.
The last thing the community needs is a buttload of specialty modules or similar that one has to load or unload depending on what one plans to do at the moment.
I think that was Con’s biggest problem: his solutions were too specialized. They only worked for his limited sphere of interest, the desktop.
And, same as with pluggable schedulers, it may well balkanize the kernel as everyone runs their own specialized setup…
Yeah, because no one cares about the desktop, we all prefer our 23″ displays with big black and white characters.
huh?
I know my post got f–ked up (I blame the browser having a bad day), but still…
I have an older Gateway machine with a 1.7GHz P4 that originally came with two sticks of memory (128MB total). Right after buying it, I upgraded it to 256MB, realizing that WinMe ran like **** on it. The only problem: this machine uses RD-RAM, way overpriced memory. I’d be better off buying a whole new machine, though I don’t really want to (yet…). Especially after desperately getting it up to a measly 256MB right after buying it, and given its age now, I just don’t think pouring any more money into memory for this machine is worth it.
A couple years ago, I switched the system to Linux. One of the biggest reasons for the switch (and there were a lot of big ones) was that XP was just too memory-hungry… unless you disable dozens of services, turn off lots of unnecessary visual crap, defragment all the time, keep the system clean, be extremely selective when choosing your apps (i.e. Burnatonce instead of Nero; Opera instead of Firefox; Winamp instead of the other bloated players; Media Player Classic; etc.), and disable all the startup apps (why does just about every Windows program insist on having its icon on the desktop, quick launch bar, system tray, AND set to run at startup?). When making the move, I had two primary goals: speed and memory use. As I learned more about Linux, simplicity became another goal, which is why I tend to favor Slackware-derived distros (the best of all three).
Long story short, I’m impressed with my chosen distro’s (Zenwalk’s) speed. It takes a while before it even starts eating into my swap, and usually Firefox is the program to do it (something needs to be done about its bloat/memory management/leaks… but that doesn’t seem to be happening). Normally swap usage is somewhere between 2-6 megs, and one of the FIRST things to get swapped out is Xfce’s right-click menu, including all the icons, causing an unresponsive menu half the time. This is quite unacceptable, given that the right-click (applications) menu is probably the most used part of just about any desktop. This happens after about an hour or two (or less) of using Firefox, or an hour or so with the screensaver running. I tried setting the swappiness to a lower value, and setting the screensaver to just turn the screen black, but it still hasn’t solved the problem.
As a side note, I’ve played around with some of the heavyweights like Ubuntu and Fedora, and even those run quite well (though they are quicker to bite into my swap space), and still better than Windows’ 150+ megs of virtual memory at boot (and it was hard to even get it *down* to 150…). In conclusion… yeah, I’d obviously really like to see swap prefetch enter the Linux kernel. I can imagine all my current memory issues completely diminishing, especially in those few times the swap does manage to hit 50MB (not very often).
Yeah, XP isn’t too happy with anything under 512 MB. Default and clean, it loads 233 MB into memory by itself, and anything more gets swapped out. I had a 1.3GHz Duron desktop with 256 MB for a while, and it was much happier after I upgraded it to 1.5 GB.
I’m not sure what Firefox’s problem is. The thing will just sit there and eat RAM. I’ve heard this attributed to Linux’s memory allocator not being able to release pages that aren’t at the top of the heap.
I’m not sure Prefetch would help in that situation as there isn’t enough RAM to hold everything to begin with. I’d first start looking for ways to trim it down, like getting rid of unneeded services. The most extreme thing I would suggest is trashing Xfce in favor of Fluxbox.
The same story, again and again…
Lkml patches are about kernel performance and improvement, small clever tricks in beautiful code.
Con’s (& friends’) patches are about desktop usability, hacking all layers of the system if necessary.
Swap prefetch is a “Desktop feature”. As a desktop user, the memory pages you need to be available are those of your current interactive desktop applications (X11).
And that’s the problem: you would need an explicit dependency between the X11 layer (inactive/active X11 windows) and the kernel memory manager (unused/used pages).
That’s unacceptable from the lkml developers’ point of view. The kernel MUST be application agnostic. They prefer to code a (bad) detection algorithm in the kernel rather than rely on an explicit indication from the upper (X11) layers.
That’s absolute rubbish. Swap-prefetch is completely application agnostic. It just restores stuff from swap once the memory is free again without a user having to revisit an application. It has absolutely no knowledge of what applications it is working on. The fact it helps with X11 apps is coincidental. It would have an impact in a console-only environment too if you were using a bunch of apps and you used enough memory to have them pushed into swap.
[[And that’s the problem: you need to have an explicit dependency between the X11 Layer (unactive/active X11 windows) and the kernel memory manager (unused/used pages).]]
AFAIK this is false: what swap prefetch does is restore the pages that were swapped out due to a memory-hungry application which has since stopped: nothing to do with X.
As for those who claim that buying more memory will solve the issue: do you realise that one kind of application which swaps out all the others is a badly written updatedb, which scans the whole disk?
As the disk is much bigger than the memory, a badly written updatedb will swap out all the other applications whatever the amount of memory you have, so a swap prefetcher is nice as it makes the kernel more robust against that kind of abuse.
You’re absolutely right. The actual swap prefetch loads into RAM the most recently swapped pages (when free RAM is available).
As was said earlier, this is a simple, X11-agnostic way to model the behaviour of a desktop user:
1. start and use app #1
2. start and use app #2 => swap out app #1
3. end app #2 => reload app #1 (prefetch)
4. continue to use app #1
This implementation of swap prefetch is not “a killer feature” because it only operates on memory pages, knowing nothing of processes, applications, windows…
For server configs, swap prefetch is mostly useless. For desktop configs, it’s a minor optimization, as long as swap prefetch ignores the upper (X11, DM) layers.
I know for a fact that swap-prefetch is application independent.
However, application-dependent optimization maybe needed in the future.
Just look at TCP/IP. The 7-layer OSI model (or the 5-layer TCP/IP model, for that matter) looks all nice and clean, but people are finding performance bottlenecks when all the layers are kept separate. These days a lot of research has been going into Cross-Layer Design (CLD). The implementation won’t be “clean” at all, but the performance improvements are there. My guess is that eventually even kernel design will have to move in this direction, so that optimization can be performed for specific workload classes.
Systems with more than 1GB of RAM are quite common today, and most new PCs ship with 2GB or more. What’s there to fetch back from swap into memory if nothing has been swapped out in the first place?
Nothing wrong with optimising the overall performance for challenged systems. But what about something that benefits all?
“Systems with RAM > 1GB are quite common today.”
Nope, they are not. Although most new PCs sold might already be of that caliber, most computer users run old PCs, not brand new ones. And often they have no option to upgrade the hardware either.
There are thousands of old PCs still in active desktop (and server) use in people’s homes all around the globe. Linux should still be a good option for such PCs.
"Ubergeeks who upgrade their PC hardware all the time are a very, very small minority among PC users… Most people can’t and don’t buy new hardware all the time. It is a completely wrong geek idea that everyone would be running modern powerful hardware anyway. They don’t.
OK, there is a point at which really old hardware may not be worth supporting anymore. But something like a 5-year-old PC is certainly not yet in that class and should still run Linux quite fine. In my opinion, viewing Linux development needs only from the perspective of the most modern and powerful hardware is completely wrong.
“There are thousands of old PCs still in active desktop (and server) use in people’s homes all around the globe.”
EDIT: and thousands of old PCs in active use at offices too, not just homes.
Even if you have a PC with a huge memory, some background task such as updatedb can fill the memory with the disk cache (unless it’s programmed very carefully): even if you have 2GB of RAM, your PC will be slow when you arrive in the morning.
I see two cases in which swap prefetch is useful:
A) you just stopped a huge application.
In this case more memory could prevent the problem, and it’s possible that the swap prefetcher does the wrong thing: assume you want to restart the big application. Without the swap prefetcher it would still be mostly in memory, so the restart would be fast; the swap prefetcher could therefore slow the restart down.
B) a background task has to run in the night and its disk cache has filled the memory: more RAM won’t help, but either
– better programming of the task,
– a daemon to take a ‘memory snapshot’ before the task runs and restore it after the task has run, or
– the swap prefetcher
could solve the ‘morning slowness’.
I wonder if the kernel provides the necessary interfaces to build such a ‘memory snapshot’ tool in user space?
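For file-backed pages, at least, the interfaces seem to exist already: mincore(2) tells you which pages of a mapping are resident, and readahead(2) can pull file data back into the cache. Here’s a rough sketch of the snapshot half (my own toy code; it only covers page-cache residency of a single file, not anonymous memory that went to swap):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    long pg = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + pg - 1) / pg;

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    unsigned char *vec = malloc(npages);
    if (map == MAP_FAILED || !vec)
        return 1;

    mincore(map, st.st_size, vec);        /* which pages are in RAM now? */

    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        resident += vec[i] & 1;
    printf("%zu of %zu pages resident\n", resident, npages);

    /* A snapshot daemon could later repopulate the cache with e.g.
     * readahead(fd, 0, st.st_size);  (Linux-specific, since 2.4.13) */
    return 0;
}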
Swap on Linux could do even better than just prefetching. Actually, it could take a lesson from Windows and start copying stale pages into swap ahead of time, so that it can free memory instantly, without needing to swap anything out at that moment.
The same would then work in the other direction: pages would be swapped back in and kept marked as stale, so that they can be used when necessary or dropped yet again in record time.
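A sketch of the page states being described, purely illustrative of the poster’s idea rather than existing Linux behaviour (“stale” meaning RAM and swap hold identical copies):

/* Pre-cleaning:  DIRTY_IN_RAM  -> STALE_IN_BOTH  (background write)
 * Prefetching:   ON_SWAP_ONLY  -> STALE_IN_BOTH  (background read)
 * Either way, a STALE_IN_BOTH page can have its frame reclaimed
 * instantly, with no I/O at the moment memory is needed. */
enum page_state {
    DIRTY_IN_RAM,     /* only copy is in RAM; freeing it costs a write */
    STALE_IN_BOTH,    /* identical copies in RAM and swap */
    ON_SWAP_ONLY,     /* must be read back in on next access */
};

/* I/O operations needed to free this page's RAM right now. */
int io_to_reclaim(enum page_state s)
{
    return s == DIRTY_IN_RAM ? 1 : 0;
}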