I have read this answer somewhere but I don't understand it exactly:
I understand that Windows increments the clock every curTimeIncrement (156001 units of 100 nanoseconds) by the value of curTimeAdjustment (156001 +/- N). But when the clock is read using GetSystemTime, does the routine interpolate within the 156001 * 100 nanosecond interval to produce the precision indicated?
Can someone try to explain it to me?
What are curTimeIncrement and curTimeAdjustment, and how does Windows do this?
What is the effect of this on getting an accurate time?
Is this true only for Windows 7, or also for other operating systems (Windows 8, Linux, etc.)?
It refers to the values returned by GetSystemTimeAdjustment() on Windows. They tell you how the clock is being adjusted to catch up or slow down to match the real time. "Real" being the time kept by an institution like NIST in the USA; they have atomic clocks whose accuracy is far, far higher than that of the clock built into your machine.
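A minimal sketch of reading those values on Windows (C++; error handling trimmed, units are the documented 100-nanosecond increments):

    #include <windows.h>
    #include <cstdio>

    int main() {
        DWORD adjustment = 0;      // amount added to the clock per tick, in 100 ns units
        DWORD increment  = 0;      // length of one clock tick, in 100 ns units
        BOOL  disabled   = FALSE;  // TRUE means no adjustment is currently applied

        if (GetSystemTimeAdjustment(&adjustment, &increment, &disabled)) {
            printf("time adjustment    : %lu x 100 ns per tick\n", adjustment);
            printf("time increment     : %lu x 100 ns per tick\n", increment);
            printf("adjustment disabled: %s\n", disabled ? "yes" : "no");
        }
        return 0;
    }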
The Real Time Clock (RTC) in your machine has limited accuracy, a side-effect of keeping the hardware affordable; it tends to be off by a few seconds each month. So periodically the operating system contacts a time server through the Internet (time.windows.com is the common selection on Windows), which tells it the current real time according to the atomic-clock oracle.
The inaccuracy of the RTC is not the only source of drift; sometimes the real time is changed intentionally, by adding a leap second to resynchronize clocks with the true rotation of the Earth. The current day (24 x 60 x 60 seconds) is a bit too short: the Earth's rotation is slowing down by roughly 1.5 msec every century and is in general irregular due to large storms and earthquakes. The inserted leap second makes up for that. The most recent one was added on June 30th of this year (2015) at 23:59:60 UTC. 60 is not a typo :)
The one previous to that was on June 30th, 2012. That one was a bit notorious: the insertion of the leap second crashed a lot of Linux servers. Google "Linux leap second bug" to learn more about it.
That kind of upset is in general what the machinery underneath GetSystemTimeAdjustment() is trying to avoid; instantly changing the time to the value obtained from the time server is very dangerous. Software often has a hard assumption that time progresses steadily and misbehaves when it doesn't, like observing the same time twice when the clock is set back, or observing a fake time like 23:59:60 UTC due to the leap second insertion.
So it doesn't. The clock is updated 64 times per second, at the clock-tick interrupt. In other words, there is 1 / 64 = 0.015625 seconds between ticks, which is 156250 in 100-nanosecond units. If a clock adjustment needs to be made, the kernel doesn't just add 156250 per tick but slightly more or less, thus slowly resynchronizing the clock to the true time and avoiding upsetting software.
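To put numbers on "slightly more or less": the correction rate is simply the difference between the adjustment and the increment, applied 64 times per second. A back-of-the-envelope sketch (the adjustment value of 156350 is an assumed example, not a value Windows necessarily uses):

    #include <cstdio>

    int main() {
        // Example values only: one tick is 156250 x 100 ns; assume the kernel
        // adds 156350 x 100 ns per tick while an adjustment is in progress.
        const double increment  = 156250.0;   // 100 ns units per tick
        const double adjustment = 156350.0;   // 100 ns units actually added per tick
        const double ticks_per_second = 64.0;

        double slew_ppm = (adjustment - increment) / increment * 1e6;  // ~640 ppm
        double correction_per_second =
            (adjustment - increment) * 100e-9 * ticks_per_second;      // seconds gained per second
        double seconds_to_fix_1s_error = 1.0 / correction_per_second;  // ~1562 s, about 26 minutes

        printf("slew: %.0f ppm, 1 second of error corrected in %.0f seconds\n",
               slew_ppm, seconds_to_fix_1s_error);
        return 0;
    }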
This of course has an unpleasant side-effect on software that paints a clock. An obvious way to do it is to use a one-second timer. But sometimes that is not a second; it won't be when a time adjustment is in progress. Then Nyquist's sampling theorem comes into play: sometimes a timer tick does not update the clock at all, or it skips a second. Note that this is not the only reason why it is hard to keep a painted clock accurate; the timer notification itself is always delayed as well, a side-effect of software not being able to execute instantly. That is in fact the much more likely source of trouble; the clock adjustment is just icing on the cake that's easier to understand.
Awkward problem. Mr. Nyquist has taught us that you have to sample more frequently to eliminate undesirable aliasing effects, so a workaround is to set the timer to a small interval, like 15 or 31 milliseconds, short enough for the user to no longer observe the missing update.
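A hedged sketch of that workaround in Win32 C++ (the window handle and timer ID are hypothetical): run a short timer and only repaint when the displayed second actually changes.

    #include <windows.h>

    static WORD g_lastSecond = 0xFFFF;   // last second that was painted

    void CALLBACK ClockTimerProc(HWND hwnd, UINT, UINT_PTR, DWORD) {
        SYSTEMTIME st;
        GetLocalTime(&st);
        if (st.wSecond != g_lastSecond) {        // a new second has started
            g_lastSecond = st.wSecond;
            InvalidateRect(hwnd, NULL, FALSE);   // repaint the clock face
        }
    }

    // In window setup, sample well above once a second so a stretched or
    // skipped tick is never visible:
    //     SetTimer(hwndClock, 1, 31, ClockTimerProc);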
Related
If you watch the Windows clock under Windows 7 carefully, you will find there is a quick tick for every 6 slow ticks (which all have the same length).
I googled and found an article which says Windows 2K/XP/2K3 sets up the clock with:
SetTimer (hWnd, TimerID, OPEN_TLEN, 0L);
and it gives this explanation:
OPEN_TLEN is the length of the timer; it is a constant. If you look at clock.h, you will find the number, which is 450. What does this 450 mean? It means that every 450 ms the timer will be triggered; it will detect time changes and redraw the clock.
Then he mentions my problem:
BTW, the clock application under Vista/2K8 is completely rewritten, so you may not have that problem. But if you watch it for a minute, you will still notice quite a "quick" second. :)
If what he said is true, that explains the unusual behavior of the clock, but my questions are:
Why did the Windows developers choose that odd 450 ms? If it were 500 ms, every tick would be shown with the same length.
And in Windows 7, where they rewrote the clock, they didn't fix the problem at all. I think there must be some reason the developers had to choose that odd redraw interval.
So I want to know: what is the mysterious reason?
The clock will register a timer for 1s and simply move the "seconds" hand every time the timer handler is triggered.
Depending upon the dynamic system load, the ticking movement of the "seconds" hand will be delayed by a few milliseconds at every iteration.
Once in a while (not sure how often, but it seems like a minute at most), the clock app will check the actual system time to sync the timer (and hence the movement of the "seconds" hand). At this moment, it catches up with the actual time, and all the delay accumulated over the past few iterations is corrected. This is observed as a quick tick of the "seconds" hand of the clock.
Why is this done?
The clock app is NOT guaranteed to be accurate to the millisecond. Since checking the system time once every second is expensive, it is avoided with the above design. However, to compensate for the "drift" of the time shown in the clock app (with respect to the actual system time), the app does check the system time once in a while; ergo the "fast" tick.
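A hedged sketch of the scheme described above (my own reconstruction in Win32 C++, not the real clock app code; the 60-tick resync period is an assumption): advance the hand blindly once per tick and resync it to the system time only occasionally, which shows up as the one quick tick.

    #include <windows.h>

    static int g_displayedSecond = 0;    // what the "seconds" hand currently shows
    static int g_ticksSinceSync  = 0;

    void CALLBACK HandTimerProc(HWND hwnd, UINT, UINT_PTR, DWORD) {
        if (++g_ticksSinceSync >= 60) {                        // occasional resync
            SYSTEMTIME st;
            GetLocalTime(&st);                                 // the "expensive" real time check
            g_displayedSecond = st.wSecond;                    // accumulated lag corrected in one jump
            g_ticksSinceSync  = 0;
        } else {
            g_displayedSecond = (g_displayedSecond + 1) % 60;  // cheap: just advance by one second
        }
        InvalidateRect(hwnd, NULL, FALSE);                     // redraw the seconds hand
    }

    // SetTimer(hwndClock, 1, 1000, HandTimerProc);   // nominal one-second interval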
I have an NTP client implementation (on Linux) that sends/receives packets to a (Stratum 1 or 2) NTP server and gets the server time on the board. Also, I have another application running on Linux which gives me the GPS time.
Before I get the time information from the NTP and GPS sources, I set the time on the board manually (using date) close to the current GPS time (taken from http://leapsecond.com/java/gpsclock.htm).
Keeping the system time on the board as the reference, I take the difference between this reference time and NTP (say X) and between the reference and GPS (say Y). The difference between X and Y comes out to 500+ ms. I am curious about the achievable time precision between NTP and GPS.
Is 500 ms an expected value?
I tried enabling hardware timestamping on the NTP client; however, it has not made any difference.
Using a GPS as a reference clock boils down to one thing: PPS (Pulse-Per-Second). Everything else is pretty jittery (unstable/unpredictable).
The PPS output is extremely accurate (nanoseconds).
PPS does not contain any information except when a second starts. That means we have to feed our clock the date and time from another source. NMEA (the actual data from the GPS) is fine as long as it’s good enough to determine the time to one second accuracy.
My guess is that your “GPS time” is the time (and date) from the GPS “data outlet”. This time can be off by 500ms (or even worse) and that’s normal. That’s why we do not use that time as an accurate reference clock.
You might want to read about time references. I think the GPS time system is not strictly identical to the UTC time returned by those time servers. The time measured by the atomic clocks has a leap second added periodically to keep UTC within +/- 1 second of the astronomical time, which is not stable.
Is your NTP implementation able to correct for network latency? Try using an NTP server with low latency to you...
These factors may explain the difference you see.
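For reference, the standard (S)NTP offset/delay calculation (RFC 4330) is how a client removes symmetric network latency; a small sketch in C++ with illustrative variable names:

    // t0: client transmit, t1: server receive, t2: server transmit, t3: client receive
    // (all on some common time scale, in seconds).
    double ntp_offset(double t0, double t1, double t2, double t3) {
        return ((t1 - t0) + (t2 - t3)) / 2.0;   // clock offset; assumes a symmetric path
    }

    double ntp_delay(double t0, double t1, double t2, double t3) {
        return (t3 - t0) - (t2 - t1);           // round-trip network delay
    }

If the path to the server is asymmetric, half of the asymmetry ends up in the computed offset, which is one way a few hundred milliseconds of error can creep in.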
Is it possible in Go to get the system time at a resolution finer than a nanosecond, i.e., in picoseconds or so? I want to measure the time gap between two consecutive events, which I can't capture at nanosecond resolution on our fast system.
The cost of calling profiling functions/instructions on modern hardware is larger (and more spread out and prone to variance) than the interval you're going to measure. So even if you try, you'll get erroneous results.
Consider tracking the elapsed time over 100 events instead, if that's at all possible.
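A hedged sketch of that batching idea, written in C++ for consistency with the other examples in this thread (the same approach carries over directly to Go's time package): time N repetitions and divide.

    #include <chrono>
    #include <cstdio>

    int main() {
        const int N = 1000000;             // repeat the fast event many times
        volatile long long sink = 0;       // keeps the loop body from being optimized away

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i) {
            sink = sink + i;               // stand-in for the fast event being measured
        }
        auto stop = std::chrono::steady_clock::now();

        long long total_ns =
            std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
        printf("total: %lld ns, per event: %.3f ns\n", total_ns, (double)total_ns / N);
        return 0;
    }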
The time resolution is hardware and operating system dependent. The Go time.Nanoseconds function provides up to nanosecond time resolution. On a PC, it's usually 1000 nanoseconds (1 microsecond) or 100 nanoseconds at best.
Computers normally keep time with a built-in clock on the motherboard. But out of curiosity, can a computer determine when a certain interval of time has passed without it?
I would think not, as a computer can only execute instructions. Of course, you could rely on the computer knowing its own processing speed, but that would be an extremely crude hack that would take up way too many clock cycles and be error-prone.
Not without something constantly running to keep track of it, constantly pulling the time off the Internet, or a piece of hardware to get the time from a constantly broadcast signal.
In certain cases, this is possible. Some microcontrollers and older processors execute their instructions in a defined time period, so by tracking the number of instructions executed, one can keep track of periods of time. This works well, and is useful, for certain cases (such as oscillating to play a sound wave), but in general, you're right, it's not particularly useful.
In the olden days there was a fixed timer interrupt. Often every 60th of a second in the US.
Old OS's simply counted these interrupts -- effectively making it a clock.
And, in addition to counting them, the OS also used these interrupts to reschedule tasks, thereby preventing a long-running task from monopolizing the processor.
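A hedged sketch of that scheme (generic C-style code illustrating the idea, not any particular OS; the handler name is hypothetical):

    #include <stdint.h>

    #define TICKS_PER_SECOND 60              // fixed timer interrupt rate (US line frequency)

    static volatile uint64_t g_ticks = 0;    // incremented on every timer interrupt

    // Hypothetical interrupt handler, invoked 60 times per second by the hardware.
    void timer_interrupt_handler(void) {
        g_ticks++;
        // ...this is also where the scheduler would preempt the running task...
    }

    uint64_t seconds_since_power_on(void) {
        return g_ticks / TICKS_PER_SECOND;   // wall-clock time still has to be set at boot
    }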
This scheme did mean that you had to set the time every time you turned the power on.
But in those days, powering down a computer (and getting it to reboot) was an epic task performed only by specialists.
I recall days when the machine wouldn't "IPL" (Initial Program Load) for hours because something was not right in the OS and the patches and the hardware.
I was under the impression that QueryPerformanceCounter was actually accessing the counter that feeds the HPET (High Precision Event Timer). The difference, of course, is that the HPET is a timer which sends an interrupt when the counter value matches the desired interval, whereas to make a timer "out of" QueryPerformanceCounter you have to write your own loop in software.
The only reason I had assumed the hardware behind the two was the same is because somewhere I had read that QueryPerformanceCounter was accessing a counter on the chipset.
http://www.gamedev.net/reference/programming/features/timing/ claims that QueryPerformanceCounter uses chipset timers which apparently have a specified clock rate. However, I can verify that QueryPerformanceFrequency returns wildly different numbers on different machines, and in fact, the number can change slightly from boot to boot.
The numbers returned can sometimes be totally ridiculous, implying ticks in the nanosecond range. Of course, when put together it all works; that is, writing timer software using QueryPerformanceCounter/QueryPerformanceFrequency allows you to get proper timing, and latency is pretty low.
A software timer using these functions can be pretty good. For example, with an interval of 1 millisecond, over 30 seconds it's easy to get nearly 100% of ticks to fall within 10% of the intended interval. With an interval of 100 microseconds, you still get a high success rate (99.7%), but the worst ticks can be way off (200 microseconds).
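For reference, the usual measurement pattern looks like this (a minimal sketch; the frequency has to be queried at runtime precisely because it differs from machine to machine):

    #include <windows.h>
    #include <cstdio>

    int main() {
        LARGE_INTEGER freq, start, stop;
        QueryPerformanceFrequency(&freq);    // counter ticks per second, machine dependent
        QueryPerformanceCounter(&start);

        Sleep(1);                            // the interval being measured

        QueryPerformanceCounter(&stop);
        double elapsed_us =
            (double)(stop.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
        printf("elapsed: %.1f us (counter frequency: %lld Hz)\n",
               elapsed_us, (long long)freq.QuadPart);
        return 0;
    }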
I'm wondering if the clock behind the HPET is the same. Supposedly HPET should still increase accuracy since it is a hardware timer, but as of yet I don't know how to access it in Windows.
Sounds like Microsoft has made these functions use whatever the best available timer is:
http://www.microsoft.com/whdc/system/sysinternals/mm-timer.mspx
Did you try updating your CPU driver for your AMD multicore system? Did you check whether your motherboard chipset is on the "bad" list? Did you try setting the CPU affinity?
One can also use the RTC-based time functions and/or a skip-detecting heuristic to eliminate trouble with QPC.
This has some hints: CPU clock frequency and thus QueryPerformanceCounter wrong?
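A hedged sketch of one such skip-detecting heuristic (my own illustration, not a documented Windows mechanism; the 100 ms tolerance is arbitrary): cross-check the QPC delta against the coarse but monotonic tick count and fall back when they disagree badly.

    #include <windows.h>

    // Elapsed milliseconds between two samples, distrusting QPC when its delta
    // disagrees wildly with GetTickCount64 (e.g. after a chipset-induced jump).
    double ElapsedMsChecked(LARGE_INTEGER prevQpc, ULONGLONG prevTick,
                            LARGE_INTEGER nowQpc,  ULONGLONG nowTick,
                            LONGLONG qpcFreq) {
        double qpcMs  = (double)(nowQpc.QuadPart - prevQpc.QuadPart) * 1000.0 / (double)qpcFreq;
        double tickMs = (double)(nowTick - prevTick);   // ~15.6 ms granularity, but monotonic

        if (qpcMs < 0.0 || qpcMs > tickMs + 100.0 || qpcMs < tickMs - 100.0)
            return tickMs;                              // QPC looks implausible, use the tick count
        return qpcMs;
    }

    // Samples would be taken with QueryPerformanceCounter(&qpc) and GetTickCount64().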