Does the timeBeginPeriod API affect the system clock? - windows

I have a Win32 application with some animated UI. To make the animation smoother, I called timeBeginPeriod to improve the timer resolution, but I found that the system clock falls behind by a few seconds when my application has been running for a very long time. Does timeBeginPeriod affect the system clock?

Good question. I didn't know this, but yes, it can. According to MSDN: "Use caution when calling timeBeginPeriod, as frequent calls can significantly affect the system clock, system power usage, and the scheduler."

A call to timeBeginPeriod changes the system's interrupt period. As a consequence, the update rate and the update increment of the system time change accordingly. This answer provides a closer look at system time and timeBeginPeriod.
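For illustration, the usual usage pattern looks roughly like this (a minimal sketch, not code from the question; the 1ms value and the error handling are just examples). The finer resolution should be requested only for as long as the animation actually needs it, and every timeBeginPeriod call should be paired with timeEndPeriod.

    #include <windows.h>
    #include <mmsystem.h>   /* timeBeginPeriod / timeEndPeriod */
    #pragma comment(lib, "winmm.lib")

    int main(void)
    {
        const UINT periodMs = 1;   /* request ~1 ms timer/scheduler granularity */

        /* Request the finer interrupt period only while the animation runs. */
        if (timeBeginPeriod(periodMs) != TIMERR_NOERROR)
            return 1;              /* the requested resolution is not supported */

        /* ... run the animation loop; Sleep() and timers now tick at ~1 ms ... */

        /* Every timeBeginPeriod call must be matched by a timeEndPeriod call
           with the same value, otherwise the system stays at the finer,
           more power-hungry interrupt period. */
        timeEndPeriod(periodMs);
        return 0;
    }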

Related

Change system clock by some increment or decrement

Assuming I am able to determine that my Windows 7 system clock is running 300ms slow, is there a safe way to programmatically advance it by that amount?
These steps usually work:
1. get the current time
2. add 300 ms
3. set the clock to the new time (using win32api.SetSystemTime)
But there is a delay between steps 1 and 3, and that delay might itself introduce an even bigger error.
Is there a system call for a safe increment/decrement?
Or can I prevent interruptions or delays? (Using python!)
Or is there a command-line utility I can invoke to do this operation safely?
You should look at the SetSystemTimeAdjustment function.
This function lets you speed the system clock up or slow it down.
You can keep the adjustment (the changed speed of the system clock) applied for as long as required until the offset finally reaches zero.
Note: You're specifically asking for Windows 7. I'm confident you are aware that support for 7 will end shortly. Windows 10 offers the SetSystemTimeAdjustmentPrecise function.
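A minimal sketch of how such an adjustment might be applied (assuming a 300 ms deficit and a 10% speed-up, both purely illustrative; the call requires the SE_SYSTEMTIME_NAME privilege, so the process must run with sufficient rights):

    #include <windows.h>

    /* Sketch: speed the clock up slightly until a 300 ms deficit is consumed. */
    int main(void)
    {
        DWORD adjustment = 0, increment = 0;
        BOOL  disabled   = FALSE;

        if (!GetSystemTimeAdjustment(&adjustment, &increment, &disabled))
            return 1;

        /* `increment` is how far the clock advances per interrupt, in 100-ns
           units. Add ~10% to every tick so the clock gradually gains time. */
        DWORD     extraPerTick = increment / 10;
        ULONGLONG deficit      = 300ULL * 10000ULL;      /* 300 ms in 100-ns units */
        ULONGLONG ticks        = deficit / extraPerTick; /* ticks needed to catch up */

        if (!SetSystemTimeAdjustment(increment + extraPerTick, FALSE))
            return 1;

        /* Wait roughly long enough for the deficit to be absorbed
           (one tick lasts `increment` * 100 ns). */
        Sleep((DWORD)(ticks * increment / 10000ULL));

        /* Restore the previous adjustment behaviour. */
        SetSystemTimeAdjustment(adjustment, disabled);
        return 0;
    }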

How bad is modifying the timer interrupt?

Suppose we're talking about a cloud Linux server.
For a project of mine, how bad would it be to modify the timer interrupt so that on each tick the processor also checks 1-4 cached dwords?
Would that make the system totally unstable? Much slower?
Second, is the timer interrupt anywhere near the CPU's clock rate, or much slower?
(The system timer, not the RTC.)
Bad.
An OS does a lot of things on a timer interrupt. It sounds like what you are proposing to add is insignificant. But I still wouldn't recommend adding it to the timer interrupt handler itself. Interrupt handlers are tricky business.
You should use the scheduling mechanisms the kernel already has in place to run your task. (Sorry I can't be more specific, but if you are seriously considering changing a fundamental interrupt handler, you should have no trouble figuring it out.)
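As one concrete, hypothetical example of "using what's already in place": on a reasonably recent kernel, a periodic kernel timer can perform the per-tick check without touching the timer interrupt handler at all. The module and callback names below are made up; timer_setup, mod_timer and del_timer_sync are the real kernel APIs.

    #include <linux/module.h>
    #include <linux/timer.h>
    #include <linux/jiffies.h>

    /* Re-check the watched dwords roughly once per tick using a kernel timer,
       instead of patching the timer interrupt handler itself. */
    static struct timer_list check_timer;

    static void check_timer_cb(struct timer_list *t)
    {
        /* ... inspect the 1-4 cached dwords here ... */

        /* Re-arm for the next jiffy (one scheduler tick). */
        mod_timer(&check_timer, jiffies + 1);
    }

    static int __init check_init(void)
    {
        timer_setup(&check_timer, check_timer_cb, 0);
        mod_timer(&check_timer, jiffies + 1);
        return 0;
    }

    static void __exit check_exit(void)
    {
        del_timer_sync(&check_timer);
    }

    module_init(check_init);
    module_exit(check_exit);
    MODULE_LICENSE("GPL");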

forced preemption on windows (occurs or not here)

Sorry for my weak English; by preemption I mean a forced context (process) switch applied to my process.
My question is: if I write and run my own game program in such a way that it does 20 milliseconds of work, then sleeps for 5 milliseconds, and then runs the Windows message pump (PeekMessage/DispatchMessage) in a loop, again and again - will it ever be forcibly preempted by Windows, or does this preemption not occur?
I suppose such preemption would occur if I did not voluntarily give control back to the system by sleeping or peeking/dispatching for too long a stretch. In the scenario above, will it occur or not?
The short answer is: yes, it can be preempted, and it will be.
Not only can driver events (interrupts) preempt your thread at any time; it may also happen because of a temporary priority boost, for example when a waitable object on which another thread is blocked becomes signalled, or when another window becomes the topmost window. Or another process might simply adjust its priority class.
There is no way (short of giving your process realtime priority, and this is a very bad idea -- forget about it immediately) to guarantee that no "normal" thread will preempt you, and even then hardware interrupts will preempt you, and certain threads such as the one handling disk I/O and the mouse will compete with you over time quantums. So, even if you run with realtime priority (which is not truly "realtime"), you still have no guarantee, but you seriously interfere with important system services.
On top of that, sleeping for 5 milliseconds is imprecise at best, and unreliable otherwise.
Sleeping will make your thread ready (ready does not mean "it will run", it merely means that it may run -- if and only if a time slice becomes available and no other ready thread is first in line) on the next scheduler tick. This effectively means that the amount of time you sleep is rounded to the granularity of the system timer resolution (see timeBeginPeriod function), plus some unknown time.
By default, the timer resolution is 15.6ms, so your 5ms will be 7.8ms on average (assuming the best, uncontended case), but possibly a lot more. If you adjust the system timer resolution to 1ms (which is often the lowest possible, though some systems allow 0.5ms), it's somewhat better, but still not precise or reliable. Plus, making the scheduler run more often burns a considerable number of CPU cycles in interrupts, and more power. Therefore, it is not something that is generally advisable.
To make things even worse, you cannot even rely on Sleep's rounding mode, since Windows 2000/XP round differently from Windows Vista/7/8.
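To see the granularity effect for yourself, a small measurement loop along these lines (a sketch, not taken from the answer) times Sleep(5) with QueryPerformanceCounter, once at the default resolution and once after timeBeginPeriod(1):

    #include <windows.h>
    #include <mmsystem.h>
    #include <stdio.h>
    #pragma comment(lib, "winmm.lib")

    /* Measure how long Sleep(5) really takes, in milliseconds. */
    static double measure_sleep5(void)
    {
        LARGE_INTEGER freq, start, end;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        Sleep(5);
        QueryPerformanceCounter(&end);
        return (double)(end.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    }

    int main(void)
    {
        printf("Sleep(5) at default resolution:   %.2f ms\n", measure_sleep5());

        timeBeginPeriod(1);   /* raise the timer interrupt rate to ~1 ms */
        printf("Sleep(5) with timeBeginPeriod(1): %.2f ms\n", measure_sleep5());
        timeEndPeriod(1);
        return 0;
    }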
It can be interrupted by a driver at any time. The driver may signal another thread and then ask the OS to schedule/dispatch. The newly-ready thread may well run instead of yours.
Desktop operating systems like Windows do not provide any real-time guarantees - they were not designed to.

windows c++ and possibility of a microsecond sleep

Is there any way at all in the Windows environment to sleep for ~1 microsecond? After researching and reading many threads and various websites, I have not been able to see that this is possible. Since the scheduler appears to be the limiting factor and it operates at the 1 millisecond level, I believe it can't be done without going to a real-time OS.
It may not be the most portable approach, and I've not used these functions myself, but it might be possible to busy-wait (block) using the high-resolution timer described in this link: QueryPerformanceCounter
Despite the fact that Windows is claimed not to be a "real-time" OS, events can be generated at microsecond resolution. The use of a combination of system time (FILETIME) and the performance counter frequency has been described elsewhere. However, a careful implementation that takes processor affinity and process/thread priorities into account opens the door to timed events at microsecond resolution.
Since the Windows scheduler and the Windows timer services rely on the system's interrupt mechanism, microsecond accuracy can only be obtained by polling. Particularly on multicore systems, polling is not so ugly anymore, and the polling only has to last for the shortest possible interrupt period. The multimedia timer interface allows the interrupt period to be brought down to about 1ms, so one can get near the desired (microsecond-resolution) time and the polling will last for 1ms at most.
My implementation of microsecond-resolution time services for Windows, test code, and an extensive description can be found at the Windows Timestamp Project located at windowstimestamp.com.
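For illustration only (this is not the project's code, just a bare-bones sketch of the polling idea described above): sleep away the bulk of the interval at ~1 ms granularity, then poll QueryPerformanceCounter for the final stretch.

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    /* Wait approximately `micros` microseconds: coarse Sleep first,
       then poll QueryPerformanceCounter for the remainder. */
    static void micro_delay(long long micros)
    {
        LARGE_INTEGER freq, start, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);
        long long target = start.QuadPart + micros * freq.QuadPart / 1000000LL;

        /* Sleep for everything except the last ~2 ms: with timeBeginPeriod(1)
           in effect the timer granularity is about 1 ms, so keep a margin. */
        if (micros > 2000)
            Sleep((DWORD)((micros - 2000) / 1000));

        /* Busy-wait (poll) for the final stretch. */
        do {
            QueryPerformanceCounter(&now);
        } while (now.QuadPart < target);
    }

    int main(void)
    {
        timeBeginPeriod(1);    /* keep the polling window near 1 ms */
        micro_delay(250);      /* e.g. wait roughly 250 microseconds */
        timeEndPeriod(1);
        return 0;
    }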

Timeout resolution of WaitForSingleObject

When I wait on a non-signaled Event using the WaitForSingleObject function, I find that in some cases the call will return WAIT_TIMEOUT in less than the specified timeout period. Simply looping on the call with a timeout set to 1000ms, I've seen the call return in periods as low as 990ms (running on WinXP). I'm using QueryPerformanceCounter to get a system-clock independent time measurement, so I don't think clock drift is likely to be an answer.
This behavior doesn't present any practical problems for me, but I'd like to understand it better. It looks like it may be working at roughly the resolution of a timer tick. Does Microsoft publish any further details on the precision of this function? Should I expect greater precision in Vista?
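A measurement loop of the kind described in the question might look roughly like this (a sketch of the described procedure, not the asker's code): wait on a never-signalled event with a 1000 ms timeout and time each call with QueryPerformanceCounter.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Manual-reset event that is never signalled, so every wait times out. */
        HANDLE ev = CreateEvent(NULL, TRUE, FALSE, NULL);

        LARGE_INTEGER freq;
        QueryPerformanceFrequency(&freq);

        for (int i = 0; i < 10; ++i) {
            LARGE_INTEGER start, end;
            QueryPerformanceCounter(&start);
            DWORD rc = WaitForSingleObject(ev, 1000);   /* 1000 ms timeout */
            QueryPerformanceCounter(&end);

            double ms = (double)(end.QuadPart - start.QuadPart) * 1000.0
                        / (double)freq.QuadPart;
            printf("result=%lu elapsed=%.1f ms\n", rc, ms);  /* often just under 1000 */
        }

        CloseHandle(ev);
        return 0;
    }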
Yes, WaitForSingleObject uses the timer tick resolution, it does not use a high-resolution timer like QueryPerformanceCounter.
The MSDN article on "Wait Functions" (http://msdn.microsoft.com/en-us/library/ms687069(VS.85).aspx) expands on this:
The accuracy of the specified time-out interval depends on the resolution of the system clock. The system clock "ticks" at a constant rate. If the time-out interval is less than the resolution of the system clock, the wait may time out in less than the specified length of time. If the time-out interval is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on.
This article also explains how to use timeBeginPeriod to increase system clock resolution - but this is not recommended.
I can think of several reasons why. First, higher resolution is not needed for nearly all use cases of WaitForSingleObject. Using a high-resolution timer would require the kernel either to poll the timer constantly (not feasible, since kernel code is not guaranteed to be running at any given moment) or to reprogram it frequently to generate an interrupt (since there could be multiple outstanding WaitForSingleObject calls and most likely only a single programmable interrupt).
On the other hand, there already is a timing source that is constantly updated, at a resolution that is more than good enough for WaitForSingleObject, SetWaitableTimer, and Sleep.
