NTP is doing its best at syncing my local time with the various NTP servers. If my computer (which has no hardware clock) doesn't have Internet access for a long period of time, the time starts to drift. NTPD slowly corrects it when it's back online, but that can take a long time if the offset is big. I understand the point of this method, but I don't want it to be careful. I want it to be strict in changing the date and time, even if that means a huge leap in time.
Is it possible to make NTPD stricter and less careful?
Check out chrony. Chrony is designed to handle intermittent network connections, whereas the NTP reference implementation is not.
http://chrony.tuxfamily.org/
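If you do switch, chrony can also be told to step the clock outright when the offset is large, via the makestep directive in chrony.conf; the values below are only an example, not a recommendation:

```
# chrony.conf (example values; the file's path varies by distro)
makestep 1.0 3   # step the clock if it is off by more than 1 s, during the first 3 updates
```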
NTPd actually can compensate for the drift by learning the error rate of your local oscillator. Quoting this:
The driftfile entry specifies which file is used to store the system clock's frequency offset. ntpd uses this to automatically compensate for the clock's natural drift, allowing it to maintain a reasonably correct setting even if it is cut off from all external time sources for a period of time.
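A typical entry looks like this; the path is distro-specific (a Debian-style layout is assumed here), and ntpd needs write access to it:

```
# /etc/ntp.conf
driftfile /var/lib/ntp/ntp.drift
```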
If you would like to set the time almost immediately by hand, you can use the -g option to ignore the 1000 s sanity check (usually, ntpd will not sync if/once the delta is beyond this threshold):
ntpd -qg
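If you also want the running daemon to keep accepting and stepping large offsets (not just at startup with -g), the reference ntpd has the tinker directive in ntp.conf; the values here are illustrative only:

```
# /etc/ntp.conf (illustrative values)
tinker panic 0     # never refuse to run because the offset looks "too large"
tinker step 0.5    # step (rather than slew) whenever the offset exceeds 0.5 s
```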
I am looking for a way to query the current RTC from the motherboard while running under Windows. I am looking for the simple, unaltered time as it is stored in the hardware clock (no drift compensation, no NTP time synchronization, not an old timestamp extrapolated with a performance counter, ...).
I looked at the Windows calls GetSystemTime, GetSystemTimeAdjustment, QueryInterruptTime, QueryPerformanceCounter, GetTickCount/GetTickCount64, and GetLocalTime. I read about the Windows Time Service (and that I can shut it off), looked for a way to get at the BIOS using the old DOS methods (ports 70/71, INT 21h, INT 1Ah), looked at the WMI classes, ... but I'm running out of ideas.
I understand that Windows queries the hardware clock from time to time and adjusts the system time accordingly when the deviation exceeds 60 seconds. This is without NTP. The docs I found do not say what happens after that reading of the hardware clock; there must be another timer in use to do the micro-timing between hardware reads.
Since I want to draw conclusions about the drift of clock sources, it would defeat all reasoning to ask Windows for the "local time" and compare its progress against any high-resolution timer (multimedia timer, time stamp counter, ...).
Does anybody know a way to obtain the time currently stored in the hardware clock (RTC), as raw as possible, while running under Windows?
I have an embedded system with WiFi (based on an 80 MHz ESP8266, developed using the Arduino IDE), and I would like it to keep reasonably accurate clock time (to one second), using the two tools at its disposal: the internet, and its own internal timers.
Challenges:
- The processor clock will likely drift over time, ostensibly in a predictable manner.
- NTP uses UDP, so return packets with the time are not guaranteed to return in order, or to return within any set interval, or to return at all.
- Latency of return NTP packets varies widely over time, from under 100ms to (potentially) several seconds.
- Latency of DNS varies widely over time, from under 100ms up to several seconds (I can't control the timeout). DNS is needed to look up IP addresses for the NTP server pool(s).
- The system takes various actions based on certain times and intervals, so I don't want to over-control the time, setting it forward and backward continually, causing actions to be needlessly missed or duplicated (or, alternatively, to complicate the program logic handling all of these conditions - an occasional miss or duplicate is not mission critical).
- Sometimes an NTP server will return the wrong time (e.g., pool.ntp.org occasionally returns 0 seconds since 1900, which is easy to detect).
- Sometimes a stray return packet with an old time will arrive just ahead of the return packet from the current request.
Current Approach (a rough code sketch of this loop follows the list):
- Keep a local device time incrementing using an ISR triggered every 0.1 second.
- Periodically poll (currently every 6 minutes, but really doesn't have to be this often) an NTP server (pool).
- Try again if there's no response within a short interval (currently 1 second, which is shorter than typical but most requests return in under 150ms).
- Vary the NTP server (pool) on each try, to spread the load and to average out response times and any service errors.
- Extract the time to the nearest 0.1 second (and adjust for typical receive latency).
- If the NTP time is off from the local device time by more than a second, update the local device time (in a critical section).
- Timeout, retry, and re-initialize (where appropriate), for failed network elements of the process. Abandon the request after most hope is lost, and just try again next time.
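A rough sketch of that loop, assuming the ESP8266 Arduino core (Ticker, ESP8266WiFi, WiFiUdp); the Ticker callback stands in for the 0.1 s ISR, and the SSID, server name, ports, and intervals are placeholders. Pool rotation, latency compensation, and re-initialization are left out:

```
#include <ESP8266WiFi.h>
#include <WiFiUdp.h>
#include <Ticker.h>

static const char* NTP_HOST = "pool.ntp.org";        // rotate pools in the real code
static const uint32_t SECS_1900_TO_1970 = 2208988800UL;

Ticker tick;                         // drives the 0.1 s local timebase
volatile uint64_t localTenths = 0;   // local device time, tenths of a second since 1970

WiFiUDP udp;

void onTick() { localTenths++; }     // keeps local time running between NTP fixes

// Send one NTP request; on success fill secs1900 with the server's transmit time.
bool queryNtp(uint32_t& secs1900) {
  uint8_t pkt[48] = {0};
  pkt[0] = 0x1B;                                     // LI=0, VN=3, Mode=3 (client)
  IPAddress ip;
  if (!WiFi.hostByName(NTP_HOST, ip)) return false;  // DNS itself may be slow or fail
  udp.beginPacket(ip, 123);
  udp.write(pkt, sizeof(pkt));
  udp.endPacket();

  uint32_t start = millis();
  while (millis() - start < 1000) {                  // ~1 s timeout, as in the list above
    if (udp.parsePacket() >= 48) {
      udp.read(pkt, sizeof(pkt));
      // Transmit timestamp, seconds field: bytes 40..43, big-endian.
      secs1900 = (uint32_t)pkt[40] << 24 | (uint32_t)pkt[41] << 16 |
                 (uint32_t)pkt[42] << 8  | (uint32_t)pkt[43];
      return secs1900 != 0;                          // reject the bogus "0 since 1900" replies
    }
    delay(10);
  }
  return false;                                      // abandon; try again next cycle
}

void setup() {
  WiFi.begin("ssid", "password");                    // placeholders
  while (WiFi.status() != WL_CONNECTED) delay(100);
  udp.begin(2390);                                   // arbitrary local port
  tick.attach(0.1, onTick);
}

void loop() {
  static uint32_t lastPoll = 0;
  if (millis() - lastPoll > 6UL * 60UL * 1000UL) {   // poll roughly every 6 minutes
    lastPoll = millis();
    uint32_t secs1900 = 0;
    if (queryNtp(secs1900)) {
      uint64_t ntpTenths = (uint64_t)(secs1900 - SECS_1900_TO_1970) * 10ULL;
      noInterrupts();                                // critical section around the shared counter
      uint64_t local = localTenths;
      uint64_t diff = ntpTenths > local ? ntpTenths - local : local - ntpTenths;
      if (diff > 10) localTenths = ntpTenths;        // step only when off by more than 1 s
      interrupts();
    }
  }
  // ... the rest of the application reads localTenths (under noInterrupts) for its scheduling ...
}
```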
Is there a better, or canonical, or best practices way to do this time synchronization? Are there other factors or approaches I'm missing?
Automatically updating the time from the internet or a GPS receiver would be overkill for a cheap Arduino project, since you would need an Ethernet shield and a resident (TSR-style) program that regularly sends the updated time to the Arduino. Instead, you can buy a more accurate RTC module (some are getting quite accurate these days) and manually update the clock from time to time using a keyboard and LED display attached to the Arduino. If your project is very time-critical, a cheap crystal-based solution may serve you better.
I have an NTP client implementation (on Linux) that sends/receives packets to a (Stratum 1 or 2) NTP server and gets the server time onto the board. I also have another application running on Linux which gives me the GPS time.
Before I get the time information from the NTP and GPS sources, I set the time manually (using date) on the board, close to the current GPS time (taken from http://leapsecond.com/java/gpsclock.htm).
Keeping the system time on the board as the reference, I take the difference of this reference time with NTP (say X) and with GPS (say Y). The difference between X and Y comes out to 500+ ms. I am curious what time precision to expect between NTP and GPS.
Is 500 ms an expected value?
I tried enabling hardware timestamping on the NTP client; however, it has not made any difference.
Using a GPS as a reference clock boils down to one thing: PPS (Pulse-Per-Second). Everything else is pretty jittery (unstable/unpredictable).
The PPS output is extremely accurate (nanoseconds).
PPS does not contain any information except when a second starts. That means we have to feed our clock the date and time from another source. NMEA (the actual data from the GPS) is fine as long as it’s good enough to determine the time to one second accuracy.
My guess is that your “GPS time” is the time (and date) from the GPS data output (NMEA). This time can be off by 500 ms (or even worse), and that’s normal. That’s why we do not use that time as an accurate reference clock.
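For reference, the date and time the receiver reports arrive in NMEA sentences such as RMC; a rough, receiver-agnostic illustration of pulling those fields out (the sentence is the usual textbook example, not real data):

```
#include <cstdio>

int main() {
    // $GPRMC,<hhmmss>,<status>,<lat>,<N/S>,<lon>,<E/W>,<speed>,<course>,<ddmmyy>,...
    const char* rmc = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A";
    int hh, mm, dd, mo, yy;
    float ss;
    if (std::sscanf(rmc, "$GPRMC,%2d%2d%f,%*c,%*[^,],%*c,%*[^,],%*c,%*[^,],%*[^,],%2d%2d%2d",
                    &hh, &mm, &ss, &dd, &mo, &yy) == 6) {
        // Good to one second, which is all the PPS edge needs from the data side.
        std::printf("UTC %02d:%02d:%04.1f on %02d.%02d.%02d\n", hh, mm, ss, dd, mo, yy);
    }
    return 0;
}
```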
You might want to read about time references. I think the GPS time system is not strictly identical to the UTC time returned by those time servers. The time measured by atomic clocks has a leap second added periodically to keep UTC within +/- 1 second of astronomical time, which is not stable.
Is your NTP implementation able to correct for network latency? Try using an NTP server with low latency to you...
These factors may explain the difference you see.
I am in a weird situation and I need some direction with NTP time adjustments.
I have a PC (Red Hat) that runs NTP daemon and this PC adjusts its time with a Stratum 2 time server on my LAN.
My PC is also connected to a DVR over a serial port (RS-232). This device's time and my PC's time need to be kept in synchronization.
However, after some time the clocks of my PC and the DVR begin to drift apart, so I need a way of detecting a time adjustment on my PC and applying the same adjustment to the DVR as well.
Is there any way of doing this?
I am hoping to find a way of subscribing to some kind of event at the OS level for system clock changes on Red Hat (if this is possible at all on Red Hat).
It seems this is possible on Windows with the SystemEvents.TimeChanged event, but I could not find a counterpart on Red Hat using C++.
Any help is appreciated.
Most of the time the NTP daemon does not make discrete adjustments to the system clock; it only slows it down or speeds it up using adjtime, in an effort to keep the drift rate under control. The daemon might be doing this fairly continuously, and it will not tell you every time it makes an adjustment.
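In API terms, that slewing is what adjtime(3) does: the kernel smears a small correction over time instead of jumping the clock. A minimal illustration (the offset value is arbitrary, and the call needs privileges):

```
#include <sys/time.h>
#include <cstdio>

int main() {
    struct timeval delta = { 0, 250000 };   // ask for +250 ms, applied gradually by the kernel
    struct timeval pending;                 // how much of a previous request is still unapplied
    if (adjtime(&delta, &pending) != 0) {   // the daemon makes adjustments of this kind continuously
        std::perror("adjtime");
        return 1;
    }
    return 0;
}
```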
Even if the NTP daemon told you whenever it made adjustments, that information would not be useful to you. Your DVR's clock is driven by different hardware and therefore has a different drift rate and would require a different set of adjustments at different times. Ideally this would be done by an NTP daemon on the DVR!
The NTP daemon does under some circumstances cause a jump in the clock. It may happen at startup or if the clock gets way off, but this should be a very rare event. It probably emits a log message when it does this, so one possibility would be to watch the logs. Another possibility would be to compare the results from clock_gettime(CLOCK_REALTIME) and clock_gettime(CLOCK_MONOTONIC) from time to time. When you notice that the delta between these two clocks has changed, it must be because someone made the system time jump. But beware: the results will be jittery because an unpredictable and variable amount of time elapses from the moment you fetch one of the clocks until the moment you fetch the other one.
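A minimal sketch of that comparison on Linux; the 0.5 s threshold and 1 s polling interval are arbitrary choices, not part of the suggestion above:

```
#include <ctime>
#include <cstdio>
#include <cmath>
#include <unistd.h>

static double now(clockid_t id) {
    struct timespec ts;
    clock_gettime(id, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main() {
    // The wall clock minus the monotonic clock; a step in the system time
    // shows up as a change in this delta.
    double last = now(CLOCK_REALTIME) - now(CLOCK_MONOTONIC);
    for (;;) {
        sleep(1);
        double delta = now(CLOCK_REALTIME) - now(CLOCK_MONOTONIC);
        // Small wobble is expected (slewing, plus the time elapsed between the
        // two clock_gettime() calls); only react to jumps well above that noise.
        if (std::fabs(delta - last) > 0.5) {
            std::printf("system clock stepped by about %+.3f s\n", delta - last);
            // ... forward the same correction to the DVR over the serial link ...
        }
        last = delta;
    }
}
```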
Depending on your needs, you might be able to achieve what you need by ignoring the system time and using only clock_gettime(CLOCK_MONOTONIC) for synchronizing with the DVR. That clock is guaranteed not to jump. But beware! (again?! haha!) I believe CLOCK_MONOTONIC may still slow down and speed up under the direction of the NTP daemon.
Since there doesn't seem to be a solution to this problem short of an API change from Microsoft, I've added a feature request. Please upvote if you're having the same issue.
Computers normally keep time with a built-in clock on the motherboard. But out of curiosity, can a computer determine when a certain interval of time has passed without one?
I would think not, as a computer can only execute instructions. Of course, you could rely on the computer knowing its own processing speed, but that would be an extremely crude hack that would take up way too many clock cycles and be error-prone.
Not without something constantly running to keep track of it, constantly pulling the time off the internet, or a piece of hardware that gets the time from a constantly broadcast signal.
In certain cases, this is possible. Some microcontrollers and older processors execute their instructions in a defined time period, so by tracking the number of instructions executed, one can keep track of periods of time. This works well, and is useful, for certain cases (such as oscillating to play a sound wave), but in general, you're right, it's not particularly useful.
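As a toy illustration of that idea (all numbers invented): on a hypothetical 1 MHz part where every loop iteration costs a fixed 4 cycles, counting iterations is counting time:

```
// Busy-wait delay calibrated purely by instruction timing; no clock hardware involved.
void delay_ms(unsigned long ms) {
    // 1,000,000 cycles/s / 4 cycles per iteration = 250 iterations per millisecond (assumed)
    for (volatile unsigned long i = 0; i < 250UL * ms; ++i) {
        // each pass through this loop is assumed to take exactly 4 cycles
    }
}
```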
In the olden days there was a fixed timer interrupt. Often every 60th of a second in the US.
Old OS's simply counted these interrupts -- effectively making it a clock.
And, in addition to counting them, the OS also used this to reschedule tasks, thereby preventing a long-running task from monopolizing the processor.
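Roughly, that scheme amounts to this (the names and the interrupt hookup are invented for illustration):

```
#include <cstdint>

volatile uint32_t ticks = 0;        // incremented 60 times per second by the timer interrupt
uint32_t time_set_at_boot = 0;      // seconds since midnight, typed in by the operator

// Wired to the fixed 60 Hz timer interrupt.
void timer_interrupt() {
    ++ticks;
    // ... the scheduler hook lived here too: every N ticks, preempt the running
    // task so a long-running program cannot monopolize the processor ...
}

// The "clock": the boot-time setting plus counted interrupts.
uint32_t seconds_since_midnight() {
    return time_set_at_boot + ticks / 60;
}
```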
This scheme did mean that you had to set the time every time you turned the power on.
But in those days, powering down a computer (and getting it to reboot) was an epic task performed only by specialists.
I recall days when the machine wouldn't "IPL" (Initial Program Load) for hours because something was not right in the OS and the patches and the hardware.