Change system clock by some increment or decrement - windows-7

Assuming I am able to determine that my Windows 7 system clock is running 300ms slow, is there a safe way to programmatically advance it by that amount?
These steps usually work:
1. Get the current time.
2. Add 300 ms.
3. Set the clock to the new time (using win32api.SetSystemTime).
But there is a delay between steps 1 and 3, and that delay might itself introduce a new error.
Is there a system call for a safe increment/decrement?
Or can I prevent interruptions or delays? (Using Python!)
Or is there a command-line utility I can invoke to do this operation safely?

You should look at the SetSystemTimeAdjustment function.
This function lets you speed up or slow down the system clock by a controlled amount.
You apply the adjustment (a change in the rate of the system clock) for as long as required until the remaining offset is zero.
Note: You're specifically asking for Windows 7. I'm confident you are aware that support for 7 will end shortly. Windows 10 offers the SetSystemTimeAdjustmentPrecise function.
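For illustration, here is a minimal C sketch of that slewing approach. It assumes the process holds the SE_SYSTEMTIME_NAME privilege; the ~1% rate and the 300 ms target are arbitrary choices for this sketch, not anything the API mandates.

```c
/* Sketch (untested): slew the clock forward by ~300 ms with
 * SetSystemTimeAdjustment instead of a discrete SetSystemTime jump. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD adjustment, increment;
    BOOL disabled;

    /* Nominal per-interrupt time increment, in 100-ns units. */
    if (!GetSystemTimeAdjustment(&adjustment, &increment, &disabled))
        return 1;

    LONGLONG offset = 3000000;            /* 300 ms in 100-ns units */
    DWORD extra = increment / 100;        /* run the clock ~1% fast */
    LONGLONG ticks = offset / extra;      /* interrupts needed to absorb it */
    DWORD duration_ms = (DWORD)(ticks * increment / 10000);

    /* Fails without the SE_SYSTEMTIME_NAME privilege. */
    if (!SetSystemTimeAdjustment(increment + extra, FALSE))
        return 1;

    Sleep(duration_ms);                   /* let the slew accumulate */

    /* Re-enable the default (unadjusted) behaviour. */
    SetSystemTimeAdjustment(0, TRUE);
    printf("slewed ~300 ms over %lu ms\n", (unsigned long)duration_ms);
    return 0;
}
```

With the default ~15.625 ms interrupt period, a 1% speedup absorbs a 300 ms offset in roughly 30 seconds, without the time ever jumping or running backwards.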

Related

Is there a way to read the current time stored in RTC using windows?

I am looking for a way to query the current RTC from the motherboard while running under Windows. I want the simple, unaltered time as it is stored in the hardware clock (no drift compensation, no NTP time synchronization, not an old timestamp extrapolated with a performance counter, ...).
I looked at the Windows calls GetSystemTime, GetSystemTimeAdjustment, QueryInterruptTime, QueryPerformanceCounter, GetTickCount/GetTickCount64, and GetLocalTime. I read about the Windows Time Service (and that I can shut it off), looked for a way to get to the BIOS using the old DOS methods (ports 70/71, INT 21h, INT 1Ah), looked at the WMI classes, ... but I'm running out of ideas.
I understand that Windows queries the hardware clock from time to time and adjusts the system time accordingly when the deviation exceeds 60 s. This is without NTP. The docs I found do not say what happens after that reading of the hardware clock; there must be another timer in use to do the micro-timing between hardware reads.
Since I want to draw conclusions about the drift of clock sources, it would defeat all reasoning to ask Windows for the "local time" and compare its progress against any high-resolution timer (multimedia timer, time stamp counter, ...).
Does anybody know how to obtain the time currently stored in the hardware clock (RTC), as raw as possible, while running under Windows?

How to measure ISR execution time?

I am on Linux kernel 2.6.32.
I am facing an issue in which one of two ISRs (serial and Ethernet) takes more time (hundreds of microseconds) on several occasions, under some scenarios I haven't identified. I would like to get the time difference every time the ISR executes.
What would be the best way (least expensive in terms of overhead)? The ARM architecture doesn't seem to have a TSC register (read_tsc API) that gives direct access to time the way some other architectures do.
So the idea is:
1. Measure the time the moment the ISR is invoked.
2. Measure the time the moment the ISR completes.
3. Store the difference between 1 and 2 in a variable.
4. Repeat steps 1-3, and whenever the value from step 3 exceeds the stored value, overwrite it (preserve the maximum latency). When the issue happens (some abrupt condition), print the value (or an array of the last 10 values).
I need to do this in a kernel driver, so please let me know the least expensive way.
The OMAP3 has a Cortex-A8 core, which does have a Performance Monitor Unit (PMU). Its Cycle Count register (CCNT) corresponds to the x86 TSC, except that you probably have to enable counting before you read it. There is good info in a BeagleBoard post.
In 2.6.32.55 I see that arch/arm/oprofile/op_model_v7.c gives full access and control. My need was bare-metal; I used ARM example code that was simple and worked for me.
It would also be possible to use an OMAP3 GPT (general-purpose timer), but that would be more work, e.g. to get its clock input set up from the PRCM.
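As a rough illustration of the CCNT approach (not the ARM example code mentioned above), a kernel-side sketch for a 2.6.32-era ARMv7 driver might look like this. The names my_isr, ccnt_enable, and max_delta are illustrative, and PMU overflow handling is omitted.

```c
#include <linux/interrupt.h>

static unsigned int max_delta;          /* worst-case ISR cycle count seen */

static inline void ccnt_enable(void)
{
    /* PMCR: set E (enable, bit 0) and C (cycle counter reset, bit 2) */
    asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r"(5));
    /* PMCNTENSET: bit 31 starts the cycle counter */
    asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r"(0x80000000));
}

static inline unsigned int ccnt_read(void)
{
    unsigned int cc;
    /* PMCCNTR: the CCNT cycle counter */
    asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(cc));
    return cc;
}

static irqreturn_t my_isr(int irq, void *dev_id)
{
    unsigned int start = ccnt_read();

    /* ... the actual serial/Ethernet handling ... */

    unsigned int delta = ccnt_read() - start; /* correct across 32-bit wrap */
    if (delta > max_delta)
        max_delta = delta;                    /* step 4: keep the maximum */
    return IRQ_HANDLED;
}
```

Reading CCNT is a single coprocessor register read, so the measurement overhead is a few cycles, which suits the "least expensive" requirement.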

windows c++ and possibility of a microsecond sleep

Is there any way at all in the Windows environment to sleep for ~1 microsecond? After researching and reading many threads and various websites, I have not been able to see that this is possible. Since the scheduler appears to be the limiting factor and it operates at the 1 millisecond level, I believe it can't be done without going to a real-time OS.
It may not be the most portable approach, and I've not used these functions myself, but it might be possible to use the information in the High-Resolution Timer section of this link and busy-wait: QueryPerformanceCounter
Despite the claim that Windows is not a "real-time" OS, events can be generated at microsecond resolution. The use of a combination of system time (file time) and the performance counter frequency has been described elsewhere. A careful implementation that takes care of processor affinity and process/thread priorities opens the door to timed events at microsecond resolution.
Since the Windows scheduler and the Windows timer services rely on the system's interrupt mechanism, microsecond targets can only be hit by polling. Particularly on multicore systems, polling is not so ugly anymore, and it only has to last for the shortest possible interrupt period. The multimedia timer interface allows the interrupt period to be brought down to about 1 ms, so one can get near the desired time and the polling will last for 1 ms at most.
My implementation of microsecond-resolution time services for Windows, test code, and an extensive description can be found at the Windows Timestamp Project at windowstimestamp.com
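A rough sketch of that hybrid polling idea (not the windowstimestamp.com implementation) could look like this; wait_microseconds is an illustrative name and the 2 ms polling threshold is an arbitrary choice.

```c
/* Sleep off most of the interval at ~1 ms resolution, then busy-poll
 * QueryPerformanceCounter for the remainder. */
#include <windows.h>
#pragma comment(lib, "winmm.lib")

void wait_microseconds(LONGLONG us)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    LONGLONG target = start.QuadPart + us * freq.QuadPart / 1000000;

    timeBeginPeriod(1);                    /* request a ~1 ms interrupt period */
    while (us > 2000) {                    /* leave ~2 ms for the poll phase */
        Sleep(1);
        QueryPerformanceCounter(&now);
        us = (target - now.QuadPart) * 1000000 / freq.QuadPart;
    }
    do {                                   /* final busy-wait to the target */
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < target);
    timeEndPeriod(1);                      /* always match timeBeginPeriod */
}
```

The busy phase burns at most one interrupt period (~1-2 ms) of CPU, which is the trade-off the answer above describes.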

Does the timeBeginPeriod API affect the system clock?

I have a Win32 application with some animation UI. To make the animation smoother, I called timeBeginPeriod to improve the timer resolution, but I found that the system clock lags by some seconds if my application runs for a very long time. Does timeBeginPeriod affect the system clock?
Good question. I didn't know this but yes it can. According to MSDN: "Use caution when calling timeBeginPeriod, as frequent calls can significantly affect the system clock, system power usage, and the scheduler."
A call to timeBeginPeriod changes the system's interrupt period. As a consequence, the update rate and the update increment of the system time change accordingly. This answer provides a closer look at system time and timeBeginPeriod.
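As a hedged illustration, one way to look for this effect is to read the time increment reported by GetSystemTimeAdjustment before and after timeBeginPeriod(1). Whether the reported increment actually follows the interrupt period depends on the Windows version; this sketch only shows how to observe it.

```c
#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

int main(void)
{
    DWORD adj, inc;
    BOOL off;

    GetSystemTimeAdjustment(&adj, &inc, &off);
    printf("time increment before: %lu (100-ns units)\n", (unsigned long)inc);

    timeBeginPeriod(1);                 /* raise the interrupt rate to ~1 ms */
    GetSystemTimeAdjustment(&adj, &inc, &off);
    printf("time increment after:  %lu (100-ns units)\n", (unsigned long)inc);

    timeEndPeriod(1);                   /* always pair with timeBeginPeriod */
    return 0;
}
```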

How do I obtain CPU cycle count in Win32?

In Win32, is there any way to get a unique CPU cycle count or something similar that would be uniform across multiple processes/languages/systems/etc.?
I'm creating some log files, but have to produce multiple logfiles because we're hosting the .NET runtime, and I'd like to avoid calling from one to the other to log. As such, I was thinking I'd just produce two files, combine them, and then sort them, to get a coherent timeline involving cross-world calls.
However, GetTickCount does not increase for every call, so that's not reliable. Is there a better number, so that I get the calls in the right order when sorting?
Edit: Thanks to @Greg, who put me on the track of QueryPerformanceCounter, which did the trick.
Here's an interesting article! It says not to use RDTSC, but instead to use QueryPerformanceCounter.
Conclusion:
Using regular old timeGetTime() to do timing is not reliable on many Windows-based operating systems because the granularity of the system timer can be as high as 10-15 milliseconds, meaning that timeGetTime() is only accurate to 10-15 milliseconds. [Note that the high granularities occur on NT-based operating systems like Windows NT, 2000, and XP. Windows 95 and 98 tend to have much better granularity, around 1-5 ms.]
However, if you call timeBeginPeriod(1) at the beginning of your program (and timeEndPeriod(1) at the end), timeGetTime() will usually become accurate to 1-2 milliseconds, and will provide you with extremely accurate timing information.
Sleep() behaves similarly; the length of time that Sleep() actually sleeps for goes hand-in-hand with the granularity of timeGetTime(), so after calling timeBeginPeriod(1) once, Sleep(1) will actually sleep for 1-2 milliseconds, Sleep(2) for 2-3, and so on (instead of sleeping in increments as high as 10-15 ms).
For higher precision timing (sub-millisecond accuracy), you'll probably want to avoid using the assembly mnemonic RDTSC because it is hard to calibrate; instead, use QueryPerformanceFrequency and QueryPerformanceCounter, which are accurate to less than 10 microseconds (0.00001 seconds).
For simple timing, both timeGetTime and QueryPerformanceCounter work well, and QueryPerformanceCounter is obviously more accurate. However, if you need to do any kind of "timed pauses" (such as those necessary for framerate limiting), you need to be careful of sitting in a loop calling QueryPerformanceCounter, waiting for it to reach a certain value; this will eat up 100% of your processor. Instead, consider a hybrid scheme, where you call Sleep(1) (don't forget timeBeginPeriod(1) first!) whenever you need to pass more than 1 ms of time, and then only enter the QueryPerformanceCounter 100%-busy loop to finish off the last < 1/1000th of a second of the delay you need. This will give you ultra-accurate delays (accurate to 10 microseconds), with very minimal CPU usage. (See the code in the original article.)
You can use the RDTSC CPU instruction (assuming x86). This instruction reads the CPU's cycle counter, but be aware that if you keep only the low 32 bits, they wrap around to 0 very quickly. As the Wikipedia article mentions, you might be better off using the QueryPerformanceCounter function.
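For illustration, a minimal timestamp helper along the lines of the accepted QueryPerformanceCounter approach might look like this (log_timestamp_us is an illustrative name):

```c
/* A monotonically increasing microsecond timestamp, suitable for
 * ordering merged log records from processes on the same machine. */
#include <windows.h>

LONGLONG log_timestamp_us(void)
{
    static LARGE_INTEGER freq;          /* counter frequency is fixed at boot */
    LARGE_INTEGER now;

    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);

    /* Split the conversion to avoid 64-bit overflow at long uptimes. */
    return (now.QuadPart / freq.QuadPart) * 1000000
         + (now.QuadPart % freq.QuadPart) * 1000000 / freq.QuadPart;
}
```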
System.Diagnostics.Stopwatch.GetTimestamp() returns the number of timer ticks since a time origin (possibly when the computer starts, but I'm not sure), and I've never seen it fail to increase between two calls.
The count is specific to each computer, so you can't use it to merge log files between two computers.
RDTSC output may depend on the current core's clock frequency, which for modern CPUs is neither constant nor, in a multicore machine, consistent.
Use the system time, and if dealing with feeds from multiple systems use an NTP time source. You can get reliable, consistent time readings that way; if the overhead is too much for your purposes, using the HPET to work out time elapsed since the last known reliable time reading is better than using the HPET alone.
Use GetTickCount and add another counter as you merge the log files. This won't give you a perfect sequence between the different log files, but it will at least keep all records from each file in the correct order (a minimal sketch follows).
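A minimal sketch of that tie-breaking idea, with illustrative names (stamp_record, g_seq):

```c
/* Stamp each record with GetTickCount plus a per-process sequence number,
 * so records sharing one tick still sort in emission order within a file. */
#include <windows.h>

static volatile LONG g_seq;               /* per-process sequence number */

void stamp_record(DWORD *tick, LONG *seq)
{
    *tick = GetTickCount();               /* coarse cross-file ordering */
    *seq  = InterlockedIncrement(&g_seq); /* in-file tie-breaker */
}
```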
