“The default timer resolution on Windows is 15.6 ms – a timer interrupt 64 times a second. When programs increase the timer frequency they increase power consumption and harm battery life. They also waste more compute power than I would ever have expected – they make your computer run slower! Because of these problems Microsoft has been telling developers to not increase the timer frequency for years. So how come almost every time I notice that my timer frequency has been raised it’s been done by a Microsoft program?” Fascinating article.
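For anyone wondering what “increasing the timer frequency” looks like in practice, the usual way a program does it is the timeBeginPeriod() call from winmm. A minimal sketch (MSVC syntax):

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")   /* timeBeginPeriod/timeEndPeriod live here */

int main(void)
{
    /* Ask for 1 ms ticks instead of the default ~15.6 ms.
     * This is a machine-global request: every process pays for it
     * in extra interrupts and power draw until it is released. */
    if (timeBeginPeriod(1) != TIMERR_NOERROR) {
        fprintf(stderr, "timeBeginPeriod failed\n");
        return 1;
    }

    Sleep(5000);        /* while we sleep, the whole system ticks at ~1 ms */

    timeEndPeriod(1);   /* always pair with the same value */
    return 0;
}

Sysinternals’ clockres, or powercfg /energy, will show the raised resolution (and which process requested it) while something like this is running.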
How does Windows compare to other OSes here? I guess once Linux is really fully tickless (which is still a long way off) it will be better in this regard, wouldn’t it? What about OS X? Also I heard Windows CE is already tickless. Is Microsoft porting this ability to Windows? Is such a thing possible? I don’t know much about these things.
Unless I misread the article, I think it says that Windows 8 is tickless.
Linux has the ability to be tickless, as it is by default in RHEL 6, but I think you can still make it tick at compile time.
What do you mean by “fully tickless”? Just wondering, because I thought that Linux (speaking specifically of the kernel) was fully tickless, or at least came with the option. Are you referring to “Linux” in the distribution sense, as in the userland and all the other tools and programs running on top of it?
Back when the “tickless” setting was introduced into the kernel (quite a while ago) I remember reading that a lot of packages had to be updated to take advantage of it. I’m guessing that’s what you’re referring to? Either way, it’s been a long time… I figured virtually everything that needed updating should be updated by now. What could be left?
Ah, out of curiosity I just stumbled upon this link in a Google search:
http://lwn.net/Articles/549580/
So, that was the first step, allowing the kernel to go into a tickless mode whenever it is idle. And now they are making further progress toward making it tickless all the time. I think I understand now.
XNU is tickless
I never had any idea it was this bad with Windows.
With Linux, this is user-definable when compiling the kernel.
I always felt that the system would run faster with a faster timer. It just felt like latencies were lower. Things felt snappier.
Yes, it’s the old latency vs. throughput dilemma. Raise one and you lower the other. That’s why it’s common to recommend a high frequency for desktops (better latency) and a low one for servers (better throughput).
Funny to see that Windows lets apps tune this setting while it is compile-time for Linux. That’s what you get when the same kernel has to work for everybody, but I wonder if that feature costs the Windows kernel performance in and of itself (on top of the “give userspace some power and it’ll misuse it” problem described in the article).
Getting the best of both worlds (a fully tickless kernel) is obviously hard, so it’s great to see so much engineering time poured into it for Linux. It’s the kind of endeavour routinely seen in the Linux community, one that I imagine happens less often elsewhere.
Yeah, for servers, I usually run 1 kHz.
I’ve also upped it to 1 kHz on my netbook, and powertop is showing far fewer wakeups, which is logical.
There might be a kernel parameter that changes the timer; I’m looking.
It’s not something that is often discussed. Pretty interesting stuff!
You aren’t going to see tremendous improvements using a fully tickless system. Although it doesn’t seem to get a lot of obvious public discussion, this isn’t a new subject at all.
If the Windows kernel and its timer are so hard on battery power, then why does Windows typically get so much longer battery life on laptops compared to Linux? Except for a few specialty distros, Linux tends to be really hard on battery power compared to Windows.
The primary reason: drivers. For example, the open-source AMD Radeon drivers do not support power management at all yet, and that is obviously going to hurt. Many drivers under Linux are reverse-engineered, so there may simply be no power-management code in them at all, whereas under Windows the manufacturers provide the drivers and nothing has to be reverse-engineered. Luckily, at least the open-source AMD Radeon drivers are getting power management soon.
Also, I hear there was a regression in the kernel recently, but that should be fixed by now.
Just because A is not as bad as B doesn’t mean A is good.
On all modern computer hardware, you can set timers in single-shot mode for any duration you want, with microsecond to nanosecond timing accuracy. So how come we are still using periodic timers, apart from truly periodic phenomena such as round-robin scheduling without any external timing disturbance? Is it so costly to set up timers this way in terms of CPU cycles or power consumption?
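To make the question concrete, here is roughly what arming the local APIC timer for a single wakeup looks like in a kernel. This is only a sketch: the MMIO base is the usual xAPIC default, and the ticks-per-microsecond value is an assumed calibration constant a real kernel would measure at boot (and on newer CPUs you would use TSC-deadline mode instead).

#include <stdint.h>

#define LAPIC_BASE        0xFEE00000u    /* default xAPIC MMIO base            */
#define LAPIC_LVT_TIMER   0x320          /* LVT timer register                 */
#define LAPIC_TIMER_DIV   0x3E0          /* divide configuration register      */
#define LAPIC_TIMER_INIT  0x380          /* initial count (writing arms it)    */
#define TIMER_VECTOR      0x40           /* arbitrary free IDT vector (assumed) */

static volatile uint32_t *lapic = (volatile uint32_t *)LAPIC_BASE;
static uint32_t apic_ticks_per_us;       /* calibrated at boot (assumed)       */

static inline void lapic_write(uint32_t reg, uint32_t val)
{
    lapic[reg / 4] = val;
}

void arm_oneshot_timer(uint32_t microseconds)
{
    /* Leaving LVT timer mode bits 17:18 at 00 selects one-shot mode:
     * the timer fires once and then stays quiet until re-armed,
     * so an idle CPU takes no periodic ticks at all. */
    lapic_write(LAPIC_LVT_TIMER, TIMER_VECTOR);              /* one-shot, unmasked     */
    lapic_write(LAPIC_TIMER_DIV, 0x3);                       /* divide bus clock by 16 */
    lapic_write(LAPIC_TIMER_INIT, microseconds * apic_ticks_per_us);
}

The catch, as discussed below, is that apic_ticks_per_us only stays valid as long as the bus/CPU frequency doesn’t change underneath you.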
Neolander,
“So how come we are still using periodic timers…Is it so costly to set up timers this way in terms of CPU cycles or power consumption ?”
Knowing you, you’ve probably already read this but it might provide some insight here.
http://wiki.osdev.org/APIC_timer
The APIC timer is dependent upon the bus & CPU frequency, which modern systems can adjust on the fly depending on CPU load. The PIT has a reliable independent clock; however, I know from experience that the PIT IS expensive to re-program (see the sketch at the end of this comment). It does seem that the timer API was designed for the PIT, so it wouldn’t surprise me if there are legacy reasons for keeping the periodic timer design.
In theory an OS should be able to use APIC timers and recompute the appropriate timer scaling factors on the fly as the CPU changes frequencies. However, therein might lie the problem: I think the CPU sleep/scaling functions are often controlled in System Management Mode, whereas the system timers are controlled by the OS. This lacks a needed citation on Wikipedia, but…
https://en.wikipedia.org/wiki/System_Management_Mode
“Since the SMM code (SMI handler) is installed by the system firmware (BIOS), the OS and the SMM code may have expectations about hardware settings that are incompatible, such as different ideas of how the Advanced Programmable Interrupt Controller (APIC) should be set up.”
Also, using variable-rate timer references may result in timer drift that does not occur with the PIT. Not that I think it should matter so much; in my opinion, typical consumer software should never really need such fast and precise timing anyway.
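To put a number on the “expensive to re-program” point above: re-arming the PIT for a one-shot delay means three writes to the legacy ports, each of which is a slow bus access on the order of a microsecond. A rough sketch, assuming the usual outb() port-I/O helper found in freestanding kernel code:

#include <stdint.h>

#define PIT_CH0_DATA  0x40
#define PIT_COMMAND   0x43
#define PIT_HZ        1193182u          /* PIT input clock, ~1.193 MHz */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

void pit_oneshot(uint32_t microseconds)
{
    uint32_t count = (uint32_t)(((uint64_t)PIT_HZ * microseconds) / 1000000u);
    if (count > 0xFFFF) count = 0xFFFF;          /* 16-bit down-counter */

    /* Three slow port writes per timer event: command byte, then the
     * 16-bit count in two halves. Mode 0 = interrupt on terminal count,
     * i.e. one-shot; the classic periodic setup uses mode 2 instead. */
    outb(PIT_COMMAND, 0x30);                     /* ch 0, lo/hi access, mode 0 */
    outb(PIT_CH0_DATA, count & 0xFF);            /* low byte  */
    outb(PIT_CH0_DATA, (count >> 8) & 0xFF);     /* high byte */
}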
Indeed, that’s something that’s been bugging me forever about APIC timers: how are they supposed to replace the legacy PIT if the rate at which these timers run keeps changing, partly due to factors outside of OS control?