A recent bug report on the lkml complained of significant performance degradation from enabling CONFIG_PREEMPT (kernel preemption). 2.6 kernel maintainer Andrew Morton pointed out that such degradation is not normal and is more likely the result of preemption exposing a bug. Also, it was recently asked on the lkml what value there is in upgrading the microcode on an Intel processor. Tigran Aivazian, author of the IA32 microcode driver and the Microcode Update Utility for Linux, explained.
What do Linux Kernel Preemption and the Windows Registry have in common?
People are blaming a perfectly good concept when all there is to blame is a shoddy implementation (or, as in this case, an implementation *perceived* as shoddy).
People who ran those tests have certainly not read the kernel documentation: for this option (Linux Kernel Preemption) it says it should be enabled for desktop use, not for the server workloads they seem to be working with.
I find this option really great, as it's very easy to feel the difference when using a GUI (KDE, GNOME).
What does the microcode actually do?
I thought that all x86 instructions were executed by dedicated hardware (for example the ALU).
Where does the microcode fit in, and if it is software, why is it not slow, and where is it executed?
kind regards,
Int.
Intel x86 is a CISC (Complex Instruction Set Computer). RISC processors do everything in hardware, e.g. hardware multiplication that takes one cycle, or multiplying two floating-point numbers in one (!) cycle. RISC processors are more complex. x86 is CISC because it contains microcode. For example, the CPU implements some operations in hardware, so that you can combine them to create e.g. multiplication. Microcode is a kind of firmware that tells the CPU how to execute instructions. That's why x86 executes code slower than RISC (I wish the x86 architecture were RISC). For example, you can implement an instruction that converts BCD code to binary using microcode. A RISC CPU would do it in hardware, but x86 does it in hardware plus built-in software.
http://www.arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
A good read for CISC/RISC stuff.
Sorry to say it, but Jarek is wide of the mark. Better read the Ars Technica link from Richard.
To try it out:
install Mandrake 9.2
urpmi some sources and contrib
urpmi kernel-multimedia (with this patch already in)
open an xterm
launch mozilla
everything looks fine, huh ???
music good, net is good…
What about running a compile (or some LaTeX stuff)?
Just do a ./configure, make, and make install on any C/C++ program and you'll see what sluggish means…
Yes, there's no skipping or lag in mp3 playback, but everything else is… how to say? crappy and laggy…
BUT with another kernel without the preemption patch but with the low-latency one, no problem… everything is fine…
go figure…….
Djamé
I’ll explain it because I’m bored. The RISC processor does NOT do every instruction in one cycle. It takes multiple cycles to do each instruction. The reason it is faster is that the processor is pipelined so it can be working on multiple instructions at one time. The result is that one instruction can FINISH executing every cycle, but each instruction has to go through the processor’s pipeline.
Since the processor is broken into pipeline stages, such as memory access, register read/write, ALU access, etc., there can be a different instruction working in each stage at the same time. This is called pipelining. Picture a car assembly line; each car is like an instruction and each worker is like a pipeline stage. Sure, a car comes off the production line every 5 minutes, but that sure as hell doesn't mean that the whole car was built in 5 minutes! It's the instruction throughput that increases – it does not take only 1 cycle to complete an instruction. This is a common misconception about RISC.
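To make the assembly-line picture concrete, here is a toy C model (my own sketch, not any real CPU) of a 5-stage pipe. The per-instruction latency stays at 5 cycles, but once the pipe is full roughly one instruction finishes per cycle:

    /* Toy 5-stage pipeline model: instruction i enters at cycle i and
     * leaves at cycle i + STAGES, so latency is constant while throughput
     * approaches one finished instruction per cycle. */
    #include <stdio.h>

    #define STAGES 5   /* fetch, decode, register read, ALU, writeback */

    int main(void)
    {
        int instructions = 20;
        int total_cycles = instructions + STAGES - 1;   /* last one drains the pipe */

        printf("latency per instruction: %d cycles\n", STAGES);
        printf("%d instructions finish in %d cycles\n", instructions, total_cycles);
        printf("throughput: %.2f instructions/cycle\n",
               (double)instructions / total_cycles);
        return 0;
    }

With 20 instructions that prints a throughput of about 0.83 instructions/cycle; push the count up and it creeps toward 1, even though every single instruction still took 5 cycles.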
What the microcode does is take an instruction and tell the processor what it should do during each cycle for that instruction. It goes something like this: Fetch instruction from memory. Now send the instruction to the instruction decoder while you read some values from the register file. Now add the two values. Now use the result as a memory address and fetch that word. Etc, etc. etc.
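If it helps, here is a little C sketch of that idea (the step names and the microprogram are made up for illustration, not Intel's actual microcode format): one architectural instruction is just a table of micro-steps, and the control unit walks the table, one step per cycle.

    /* Toy microcode sequencer for a base+offset load, mirroring the
     * sequence described above.  Real microcode drives control signals
     * inside the CPU, not printf. */
    #include <stdio.h>

    enum ustep { FETCH, DECODE_READ_REGS, ALU_ADD, MEM_READ, WRITEBACK, DONE };

    static const char *names[] = {
        "fetch instruction", "decode + read registers",
        "ALU add (form the address)", "memory read", "write result back", "done"
    };

    /* hypothetical microprogram for one load instruction */
    static const enum ustep load_uprog[] = {
        FETCH, DECODE_READ_REGS, ALU_ADD, MEM_READ, WRITEBACK, DONE
    };

    int main(void)
    {
        for (int cycle = 0; load_uprog[cycle] != DONE; cycle++)
            printf("cycle %d: %s\n", cycle, names[load_uprog[cycle]]);
        return 0;
    }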
Monkeying with your microcode can render your processor non-functional, or highly unstable at best, if you don't duplicate exactly the intended behavior of the instruction. If there are bugs, I say let the Intel engineers fix them.
My understanding is that Intel also uses the microcode as a way to fix bugginess in the processors. Presumably, it's similar to a firmware update, but the microcode must be loaded at boot time and is volatile, i.e. it must be reloaded on the next boot.
You, personally, shouldn’t need to mess with the microcode, but you may wish to update it if Intel provides such an update that fixes a problem you’ve noticed with your processor.
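For the curious, this is roughly what a user-space updater such as microcode_ctl does, assuming Tigran's driver exposes /dev/cpu/microcode: read the update image and hand the whole thing to the driver, which validates it and applies it to each CPU. The image path here is made up and error handling is minimal; treat it as a sketch, not a replacement for the real utility.

    /* Sketch of a microcode update from user space (hypothetical image path). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *image = "/tmp/microcode.bin";   /* assumed: image already in binary form */

        FILE *f = fopen(image, "rb");
        if (!f) { perror("open image"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) {
            perror("read image"); return 1;
        }
        fclose(f);

        int dev = open("/dev/cpu/microcode", O_WRONLY);
        if (dev < 0) { perror("open /dev/cpu/microcode"); return 1; }

        /* the driver checks the blob and loads it into every CPU */
        if (write(dev, buf, size) != size) { perror("write microcode"); return 1; }
        close(dev);
        free(buf);
        puts("update written; check dmesg for the result");
        return 0;
    }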
That’s not really correct. In theory, the difference between RISC and CISC is:
CISC: Complex, usually variable-length instructions
RISC: Simple, usually fixed-length instructions
This means that the RISC CPU wouldn’t even have a BCD instruction, because they wouldn’t be able to implement it in a way that’d get 1 op/cycle throughput. Some pure-ish RISC CPUs like the SPARC didn’t originally even have integer multiply — they had a partial multiply instruction that did one part of the operation per instruction. RISC CPUs also generally lack other instructions that could affect the 1 op/cycle throughput — no memory operands other than for store/load, for example.
Of course, modern x86 CPUs are RISC internally. They mimic the x86 CISC ISA by breaking x86 instructions up into internal RISC ops. Most instructions break up into one or two RISC ops. Some instructions (like the string instructions) break up into more, so are executed out of microcode.
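As a toy illustration (the internal micro-op format and the temporary register are invented; real P6 uops are undocumented), here is how a memory-operand instruction like add eax, [ebx] gets cracked into a load plus a plain register-register add:

    /* Toy decoder output for "add eax, [ebx]" -> two RISC-style micro-ops. */
    #include <stdio.h>

    typedef enum { UOP_LOAD, UOP_ADD } uop_kind;

    typedef struct {
        uop_kind kind;
        const char *dst;
        const char *src;   /* a register name, or an address for loads */
    } uop;

    int main(void)
    {
        uop decoded[] = {
            { UOP_LOAD, "tmp0", "[ebx]" },   /* bring the memory operand into a temp */
            { UOP_ADD,  "eax",  "tmp0"  },   /* then it's an ordinary register add */
        };

        for (unsigned i = 0; i < sizeof decoded / sizeof decoded[0]; i++)
            printf("uop %u: %s %s, %s\n", i,
                   decoded[i].kind == UOP_LOAD ? "load" : "add",
                   decoded[i].dst, decoded[i].src);
        return 0;
    }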
Whether you are talking about CISC or RISC machines, microcode is something entirely different. It determines how the control unit (common to both CISC and RISC) operates. In some systems the control unit is implemented entirely in hardwired logic, and in other cases it is implemented using microcode (often held in ROMs).
It is not a way to fix “bugginess”. The control unit in modern processors would be way too huge if implemented directly in hardware. That is why a “software” style solution is normally used to implement the control unit.
Like someone pointed out, the Intel CPUs are RISC machines with a complex decoder.
NT has had a preemptable Kernel since day one. Nice to see Linux is catching up.
I don’t want to give the full definition of RISC here (yes, there is more to it than the acronym) as defined by David Patterson, but I want to mention one thing:
According to Patterson, a RISC architecture is simple enough to be implemented without microcode, but it is *not* a necessity.
You could take a RISC architecture and implement it using Microcode, and it still would be RISC.
NT has had a preemptable Kernel since day one. Nice to see Linux is catching up.
Linux’s worst case latency has historically been much, much lower than NT’s, so technically, it hasn’t needed it. NT hides this fact by defaulting to giving foreground processes CPU bias.
Also note, people who enable preemption must also make sure that their distro is NOT renicing X. Otherwise, you'll find you have worse performance than before, as doing so greatly messes up the kernel's idea of idle as it relates to X and all of its subprocesses.
Sorry, I was wrong about RISC/CISC. I just wanted to explain the idea of microcode. My teacher told me about multiplication of floating-point numbers in one cycle on some RISC CPU.
@Edward: don’t confuse “preemptable Kernel” with “preemptive multitasking”
Linux had preemptive multitasking from about 1992, so during the Windows 3.11 years (which used cooperative multitasking).
No, he's not confusing anything. He is right: the NT kernel has been preemptible since the beginning. NT also was started in 1988, while Linux was started in 1991. Further, Linux, for several years, had a development team of a few coders, while NT started out with a couple of dozen highly experienced engineers. And, in any case, NT was originally designed to be a workstation OS, while much of Linux's early development was in the server market (where preemption can harm throughput).
Modern Intel CPUs (Pentium Pro through Pentium 4, usually called P6) are based on superscalar, superpipelined RISC cores.
The assembly language (visible to programmers) is still CISC, M-R (Memory-Register), 2-1, for compatibility reasons, but these instructions are translated by hardware into simpler DLX-like, RISC, R-R, 3-3 instructions during (not sure… maybe even earlier than) the Instruction Fetch stage.
This translation simplifies the pipeline (in the P-IV, more than 20 stages) and the datapath realisation.
Hi
The discussion has been along the lines of whether a preemptible kernel actually gives a performance boost in a server-side usage scenario. Some people like Con and Andreas argue that in the 2.6 kernel preempt should be off by default and be used for debugging purposes. Comparing NT and Linux is not appropriate since the design goals and models are very different.
It looks like the 2.6 series kernels won't have preempt enabled by default for kernel processes. Red Hat kernels certainly don't as of now.
regards
Rahul
Hmm. I had a problem with preempt in one of the mm kernels but haven’t had any before or since.
NT has had a preemptable Kernel since day one. Nice to see Linux is catching up.
Glad to see you know nothing about Linux. Preempt has been around for a while.
Anyone want to explain why nice-ing X screws performance?
Are you sure? There is a difference between preempting tasks (or processes, or threads) and preempting the kernel itself (the former any Unix-like system will do, except the first versions of Minix).
Preempting the kernel means that it must be largely reentrant. I may be wrong, but I think BSD and Linux are only now getting this.
Preemption in the kernel is only now a standard part of the kernel (as of 2.6), but there has been a patch for it for a while now. Some kernels from certain distributions have included it by default. I have been using a preemptable kernel for, I think, a year now. In fact, I think the patched 2.4 kernel had fewer issues than 2.6 has reported.
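If you are not sure whether your distro kernel was built with it, a quick check is to look for CONFIG_PREEMPT in the installed kernel config. This little C program assumes the config is shipped as /boot/config-<version> (Red Hat and Mandrake do this); it just prints any matching lines, including "# CONFIG_PREEMPT is not set".

    /* Print CONFIG_PREEMPT lines from the running kernel's config file. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        char path[256], line[256];

        uname(&u);
        snprintf(path, sizeof path, "/boot/config-%s", u.release);

        FILE *f = fopen(path, "r");
        if (!f) { perror(path); return 1; }

        while (fgets(line, sizeof line, f))
            if (strstr(line, "CONFIG_PREEMPT"))
                fputs(line, stdout);

        fclose(f);
        return 0;
    }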
Because the new O(1) scheduler and Con's interactivity work can handle things dynamically. The problem with renicing X is that it's a hack. X shouldn't necessarily have a higher priority. For example, when an application gets a ConfigureNotify (during a resize, for example), you want the application to run quickly to redraw the contents in response. However, after the application has sent the redrawn contents to X, you want X to get the higher priority so it can draw that to the screen. This phenomenon is especially noticeable during resize, where the app which needs priority (either the client, the X server, or the window manager) quickly cycles during every update.
Because, when you renice X, it gives it bias in the scheduler. Normally, this would be a good thing; however, with preemption, it can cause problems for the scheduler getting things done correctly (as in timely). In a nutshell, it causes the scheduler to falsely schedule the wrong task, instead of the one which should actually be getting a priority boost from being idle. Since X was re-niced, it was given priority and was more than likely running, which means it was not idle. That, in turn, causes it to go to the bottom of the list when X actually needs to update the screen. In turn, that causes extra latency, which is exactly 180 degrees from what enabling preemption is supposed to be doing for you. IIRC, there were several other corner cases as well. Remember, in X, you have clients (applications) and servers (the display). By boosting the display, you cause things to be displayed timely at the expense of the clients (the applications) being able to timely tell X to actually perform said actions. This is known as a priority inversion.
Long story short, by boosting the priority of X, you are working against the scheduler and its ability to preempt processes in a timely manner to ensure low latency and fast updates. Renicing X is and always has been a hack. With preemption enabled, this hack works against the kernel's best efforts to fix the original problem which initiated the need for the hack.
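If you want to feel the CPU-bias half of that on your own box, here is a rough C sketch (my own, run as root on an otherwise idle single-CPU machine; the numbers are only suggestive). A busy child reniced to -10 stands in for the boosted X server, and the parent at nice 0 stands in for a client that still has work to do before it can even talk to X. Run it once as-is and once with the setpriority() line commented out, and compare the times.

    /* Rough demo of nice bias: a -10 CPU hog versus a nice-0 worker. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t hog = fork();
        if (hog == 0) {
            setpriority(PRIO_PROCESS, 0, -10);   /* "X reniced by the distro" */
            for (;;)
                ;                                /* burn CPU forever */
        }

        struct timeval start, end;
        gettimeofday(&start, NULL);
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 500000000UL; i++)   /* the "client's" work */
            x += i;
        gettimeofday(&end, NULL);

        printf("client work took %.2f seconds\n",
               (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1e6);

        kill(hog, SIGKILL);
        wait(NULL);
        return 0;
    }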
Above, where I said "worst case latency", I should have actually said "average case latency". It's the worst case latency which initiated desktop users asking for preemption. By adding support for preemption, they addressed most (all?) of the horrible cases, so that the worst case is on par with, if not slightly better than, NT and the average is far, far, far better than NT. IIRC.
Sorry about the confusion folks.
Sorry, this is OT, but as usual when the NT people and the Linux people start arguing, I just *can’t* resist…
Linux was 1992. NT was 1988.
AmigaOS was 1985, with a fully preemptive microkernel architecture (Exec) mostly written by *one* person (Carl Sassenrath, today CEO of rebol.com). Hail to the king.
Thanks for clarifying this; obviously it's not that bad for a server OS not to have a preemptable kernel.
Although I guess it could be an asset on an application server that handles many small transactions.
Actually, preemption is bad for throughput. This means, from a typical server perspective, you would not want preemption enabled for a server. Not at all.
On a very busy server, many, many tasks can constantly preempt one another, making it difficult for any single task to make forward progress. As a result, throughput can greatly suffer. This is exactly why preemption targets nothing more than desktop and maybe workstation users. Server users should disable it. Period.
IMO, this is one of the reasons why, as bad as I/O has been under Linux 2.4, it's on par with NT; compare 2.4.x versus BSD to see what I mean. The 2.6 kernels horribly laugh at NT's throughput, and MS' recent "advancements" with XP have further degraded throughput in most areas. Long story short, I would hazard a guess that preemption is a major reason for NT's poor I/O performance when compared to recent Linux kernels or BSD. Furthermore, Linux now has both the anticipatory and deadline elevators. For those of you with heavy linear disk I/O needs (i.e. databases), the deadline elevator should be seriously considered.
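For anyone who wants to try the deadline elevator, it can be selected for the whole system at boot with the elevator=deadline kernel parameter, or, assuming a 2.6 kernel new enough to expose the per-queue scheduler file in sysfs and a disk at sda, switched at runtime with a write like this sketch does:

    /* Select the deadline I/O scheduler for one disk via sysfs (path assumed). */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");
        if (!f) { perror("open scheduler file"); return 1; }
        fputs("deadline\n", f);   /* or "anticipatory"/"noop", whatever the kernel was built with */
        fclose(f);
        return 0;
    }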