“If you’re doing any kind of animation or running a process that needs to poll another process or a particular piece of hardware on a regular basis, you need an accurate timer. Depending on your application, the timer might need to be accurate to within one second, or within a fraction of a millisecond. If your timer’s resolution is too coarse or its margin of error is too large, your animations will appear jerky or uneven, and your program that’s collecting data from custom hardware will miss data or will fail altogether. Windows has two primary methods of measuring elapsed time, and two ways to provide periodic events.”
I am writing software for BeOS and would love a function like ‘QueryPerformanceFrequency’, and everything up to the discussion of ‘QueryPerformanceCounter’ made perfect sense to me.
However, the following paragraph says a 2 GHz processor takes about 5 microseconds to perform ‘QueryPerformanceCounter’. If I understand the article right, this is about the same function as ‘system_time()’ in BeOS, but in BeOS this call takes less than 450 nanoseconds on my 366 MHz Dell laptop. The hardware basics are the same, so why such a large performance difference? Is this function called through an interrupt handler in Windows?
Well, high-performance timers on PC hardware are not a universally solved problem.
As far as I know, the best granularity can be gained from the rdtsc instruction, which reads the CPU's tick counter. However, power management changes the frequency of the CPU, making the method unreliable.
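For concreteness, reading that counter directly looks like this on x86 (GCC/Clang inline assembly; just a sketch, and the caveat above still applies, since the raw tick values mean different things as the clock frequency changes):

```c
#include <stdint.h>

/* Read the CPU's 64-bit time-stamp counter (x86 only).
 * rdtsc leaves the low 32 bits in EAX and the high 32 bits
 * in EDX; recombine them into one 64-bit value. */
static uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}
```

This is the cheapest timing primitive available, which is why it is so tempting despite the frequency-scaling problem.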
So Windows internally uses several alternative time sources (varying with the HAL) and tries to give the best answer. The adjustment algorithms used here may be the reason for the delay.
There is a blog post on the issue too:
http://blogs.msdn.com/oldnewthing/archive/2005/09/02/459952.aspx
Edited 2006-11-26 21:52
Thanks, that matches a timing problem I am seeing in BeOS on my Dell: there seems to be an irregular glitch that messes up some of my timing loops – not enough to cause sound quality problems, but just enough that no two timing loops show the same exact timing stats.
Sounds like BeOS uses only one method/call to do the high-resolution timing, which makes the system_time() call fast, but I can easily see that there are times when a slower but more stable (timing-wise) call would be considered the better solution.
Just got to read the entire blog without interruptions, and noticed that some of the problems I am having seem to exist under the Windows API too! Glad to see it is not just me. Again, thanks for the pointer – it gave me some ideas on stable interrupt timing handlers.
Edited 2006-11-28 19:17
BeOS uses nanosecond resolution natively, throughout the system and API.
The granularity of the threading and the absence of 16-bit code means that BeOS can provide every program the level of timing performance Windows developers fight so hard to implement.
It is one of BeOS’s strongest points, and is why the BeOS messaging system can be so featureful and yet deliver a message, under normal load, from one app to another (on my machine) in about 1800 nanoseconds.
This is, of course, also the result of other optimizations, but simple timing code on BeOS can realize near-nanosecond-accurate timing. You still have to carry out your workload, but the underlying system is no longer just giving you an API through which to demand that the system do something; instead it tells you when things happen, and you can tell it what to say.
The article was a nice read, though. I just finally realized a few things about the structure of Windows that I didn’t really care to have crammed in my head (which is currently filled with lovely lines of C++ and Perl and the Haiku code base – I’ve been thinking about contributing to the POSIX support), but I guess that is all I should expect as I start delving into porting applications from that wretched platform (well, libs anyway).
Anyway, hope this helps ya!
–The loon
>> “since core 2 is lagging a little behind core 1, the returned endTime value is less than the startTime. All of a sudden, you have a negative elapsed time” <<
This quote is from the middle of page 2, just below the code snippet. So programs that make decisions based on QueryPerformanceCounter() may branch differently – I had not heard about this, oops!
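One defensive pattern for this (my own sketch, not from the article) is to wrap the raw reading so that reported time never decreases; the other commonly suggested fix on Windows is to pin the timing thread to a single core (e.g. with SetThreadAffinityMask) so both readings come from the same counter:

```c
#include <stdint.h>

/* Clamp a timer source that can appear to step backwards,
 * e.g. QueryPerformanceCounter read on two different cores.
 * Reported time is monotonic: a backwards raw reading just
 * repeats the last value instead of producing a negative delta. */
static uint64_t last_tick = 0;

static uint64_t monotonic_tick(uint64_t raw)
{
    if (raw > last_tick)
        last_tick = raw;   /* normal forward progress */
    return last_tick;      /* backwards step gets clamped */
}
```

The cost is that a backwards glitch shows up as a zero-length interval rather than a negative one, which is usually far easier for downstream code to tolerate.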
system_time() relies exclusively on RDTSC. This is one of the main reasons why BeOS cannot run on anything less than a Pentium, since that instruction was added with that series of CPUs. And yes, it will result in problems if you have any form of frequency scaling enabled via power management.
Read http://www.ccsl.carleton.ca/~jamuir/rdtscpm1.pdf
http://www.midnightbeach.com/rdtsc.html and http://en.wikipedia.org/wiki/RDTSC
I see what is going on now.
The article brought to my attention the timeBeginPeriod() and timeEndPeriod() functions. These functions seem to greatly improve Sleep(), which I use to (somewhat) control fps in my game loop. For some reason, I am unable to get anything out of timeGetTime(). I am using DevC++. -lwinmm is added to the Linker.
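In case it helps anyone, here is roughly the loop shape I mean, with POSIX nanosleep standing in for Sleep() so it compiles anywhere; on Windows you would call timeBeginPeriod(1) before the loop and timeEndPeriod(1) after it, so that Sleep()'s granularity tightens to about a millisecond:

```c
#include <time.h>

/* Crude fixed-delay frame pacing: do the frame's work, then
 * sleep away the rest of the frame.  nanosleep stands in for
 * the Windows Sleep() call; the counter stands in for the
 * game's update+render step so the loop is testable. */
static void run_frames(int frames, long frame_ns, int *frame_counter)
{
    struct timespec delay = { 0, frame_ns };

    for (int i = 0; i < frames; i++) {
        (*frame_counter)++;          /* stand-in for update + render */
        nanosleep(&delay, NULL);     /* yield the rest of the frame */
    }
}
```

A fixed delay only approximates the target fps, since it ignores how long the frame's work took; subtracting the measured work time from the delay each frame paces more evenly.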
http://www.cminusgames.com