How to periodically synchronize NTP time with local timer-based time

I have an embedded system with WiFi (based on an 80MHz ESP8266, developed using Arduino IDE), and I would like it to keep reasonably accurate clock time (to one second), using the two tools at its disposal: the internet, and its own internal timers.
Challenges:
- The processor clock will likely drift over time, ostensibly in a predictable manner.
- NTP uses UDP, so return packets with the time are not guaranteed to return in order, or to return within any set interval, or to return at all.
- Latency of return NTP packets varies widely over time, from under 100ms to (potentially) several seconds.
- Latency of DNS varies widely over time, from under 100ms up to several seconds (I can't control the timeout). DNS is needed to look up IP addresses for the NTP server pool(s).
- The system takes various actions based on certain times and intervals, so I don't want to over-control the time, setting it forward and backward continually, causing actions to be needlessly missed or duplicated (or, alternatively, to complicate the program logic handling all of these conditions - an occasional miss or duplicate is not mission critical).
- Sometimes an NTP server will return the wrong time (e.g., pool.ntp.org occasionally returns 0 seconds since 1900, which is easy to detect).
- Sometimes a stray return packet with an old time will arrive just ahead of the return packet from the current request.
Current Approach:
- Keep a local device time incrementing using an ISR triggered every 0.1 second.
- Periodically poll an NTP server (pool) - currently every 6 minutes, but it really doesn't have to be this often.
- Try again if there's no response within a short interval (currently 1 second, which is shorter than typical, but most requests return in under 150ms).
- Vary the NTP server (pool) on each try, to spread the load and to average out response times and any service errors.
- Extract the time to the nearest 0.1 second (and adjust for typical receive latency).
- If the NTP time is off from the local device time by more than a second, update the local device time (in a critical section).
- Timeout, retry, and re-initialize (where appropriate) for failed network elements of the process. Abandon the request after most hope is lost, and just try again next time.
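In code, the loop looks roughly like this (a simplified sketch, assuming the stock ESP8266 Arduino core's WiFiUdp and Ticker libraries; the pool hostnames, WiFi credentials, ports and the fixed 0.1 s receive-latency allowance are placeholders, not my production values):

    #include <ESP8266WiFi.h>
    #include <WiFiUdp.h>
    #include <Ticker.h>

    Ticker ticker;                      // software timer standing in for the 0.1 s ISR
    WiFiUDP udp;
    volatile uint64_t localTenths = 0;  // device time in 0.1 s units (NTP epoch after first sync)
    const char* pools[] = {"0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"};
    uint8_t poolIdx = 0;
    byte pkt[48];

    void onTick() { localTenths++; }    // fires every 0.1 s

    void sendNtpRequest(const char* host) {
      memset(pkt, 0, 48);
      pkt[0] = 0b11100011;              // LI unknown, version 4, client mode
      udp.beginPacket(host, 123);       // DNS lookup happens here
      udp.write(pkt, 48);
      udp.endPacket();
    }

    // Returns NTP time (seconds since 1900) in 0.1 s units, or 0 on timeout/garbage.
    uint64_t readNtpTenths(uint32_t timeoutMs) {
      uint32_t start = millis();
      while (millis() - start < timeoutMs) {
        if (udp.parsePacket() >= 48) {  // note: a stale reply from an earlier request could still slip in here
          udp.read(pkt, 48);
          uint32_t secs = (uint32_t)pkt[40] << 24 | (uint32_t)pkt[41] << 16 |
                          (uint32_t)pkt[42] << 8  |  pkt[43];
          uint32_t frac = (uint32_t)pkt[44] << 24 | (uint32_t)pkt[45] << 16 |
                          (uint32_t)pkt[46] << 8  |  pkt[47];
          if (secs == 0) return 0;      // reject the bogus "0 seconds since 1900" replies
          return (uint64_t)secs * 10 + frac / 429496730UL + 1;  // +0.1 s nominal receive latency
        }
        delay(5);
      }
      return 0;                         // abandon; try again next cycle
    }

    void syncTime() {
      sendNtpRequest(pools[poolIdx]);
      poolIdx = (poolIdx + 1) % 3;      // rotate pools to spread the load
      uint64_t ntpTenths = readNtpTenths(1000);
      if (ntpTenths == 0) return;
      noInterrupts();                   // critical section around the shared counter
      uint64_t local = localTenths;
      if (local + 10 < ntpTenths || ntpTenths + 10 < local)
        localTenths = ntpTenths;        // only step if off by more than 1 s
      interrupts();
    }

    void setup() {
      WiFi.begin("my-ssid", "my-password");   // placeholder credentials
      while (WiFi.status() != WL_CONNECTED) delay(100);
      udp.begin(2390);                  // arbitrary local UDP port
      ticker.attach(0.1, onTick);
    }

    void loop() {
      static uint32_t lastSync = 0;
      if (lastSync == 0 || millis() - lastSync > 6UL * 60UL * 1000UL) {  // every 6 minutes
        lastSync = millis();
        syncTime();
      }
    }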
Is there a better, or canonical, or best practices way to do this time synchronization? Are there other factors or approaches I'm missing?

Automatically updating the time from the internet or GPS may be overkill for a cheap Arduino project, since you may require an Ethernet shield and a resident (TSR-style) program that regularly sends the updated time to the Arduino. Instead you can purchase a more accurate RTC clock (some are getting more accurate these days) and manually update the clock from time to time using a keyboard and LED display incorporated with the Arduino. If your project is very time critical, it is better to use some cheap crystal-based solution instead.
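If you take the RTC route, reading it from the sketch is straightforward. A minimal example, assuming a DS3231 module and the Adafruit RTClib library (the answer above doesn't name a specific part, so treat these as placeholders):

    #include <Wire.h>
    #include <RTClib.h>

    RTC_DS3231 rtc;

    void setup() {
      Serial.begin(115200);
      Wire.begin();
      rtc.begin();
      if (rtc.lostPower()) {
        // Set once, manually or to the sketch compile time; afterwards the
        // battery-backed RTC keeps time across resets.
        rtc.adjust(DateTime(F(__DATE__), F(__TIME__)));
      }
    }

    void loop() {
      DateTime now = rtc.now();         // read current RTC time
      Serial.println(now.unixtime());   // seconds since 1970
      delay(1000);
    }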

Related

Adjusting parameter - Machine learning

I am transferring some data remotely, packet by packet.
Before sending each packet I need to sleep for some time (milliseconds). After transferring each file I get feedback: fail or success.
Of course, the smaller the delay, the lower the success rate will be, but the transfer will also take less time.
My goal is to automatically adjust the current delay so that the average SUCCESS RATE equals some constant (say 98%).
Intuitively I assume:
After each successful transfer I'll increase the current delay
After each unsuccessful transfer I'll decrease the current delay
Over time I'll modify the current delay less (fade)
What algorithms would you suggest for efficiently (in terms of time to learn and memory) finding the optimal parameter value?
You are essentially describing a network congestion solution. Look at http://en.wikipedia.org/wiki/Network_congestion_avoidance#Avoidance for much more information on the subject.
One algorithm that might suit you well is to decrease the time you wait after each successful transfer. After an unsuccessful transfer increase the time (either by a set amount or dynamically) and repeat indefinitely. I wish I could remember the specific name for this algorithm but at the moment it is escaping me.
Note that if you are truly sending packets over a real network and not just a toy network, "optimal" is not a constant, as the network is always in a state of change.
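A rough C++ sketch of that increase/decrease scheme (the concrete numbers are illustrative, not from the question): if each failure increases the delay by about 49 times the amount each success decreases it, the delay tends to settle where roughly 49 successes balance one failure, i.e. around a 98% success rate.

    #include <algorithm>

    struct DelayTuner {
        double delayMs = 100.0;   // current inter-packet delay (starting guess)
        double step    = 1.0;     // base adjustment size
        double fade    = 0.999;   // shrink adjustments over time, as the question suggests

        void onResult(bool success) {
            if (success)
                delayMs = std::max(0.0, delayMs - step);  // speed up slightly
            else
                delayMs += 49.0 * step;                   // back off much harder on failure
            step *= fade;                                 // adjust less as it converges
        }
    };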

Time precision between NTP and GPS source

I have an NTP client implementation (on Linux) to send/receive packets to a (Stratum 1 or 2) NTP server and get the server time on the board. Also, I have another application running on Linux which gives me the GPS time.
Before I get the time information from the NTP and GPS sources, I set the time manually (using date) on the board close to the current GPS time (this info is taken from http://leapsecond.com/java/gpsclock.htm).
Keeping the system time on the board as the reference, I take the difference of this reference time with NTP (say X) and GPS (say Y). The difference between X and Y comes out to 500+ ms. I was curious to know the time precision between NTP and GPS.
Is 500 ms an expected value?
I tried enabling hardware timestamping on the NTP client; however, it has not made any difference.
Using a GPS as a reference clock boils down to one thing: PPS (Pulse-Per-Second). Everything else is pretty jittery (unstable/unpredictable).
The PPS output is extremely accurate (nanoseconds).
PPS does not contain any information except when a second starts. That means we have to feed our clock the date and time from another source. NMEA (the actual data from the GPS) is fine as long as it’s good enough to determine the time to one second accuracy.
My guess is that your “GPS time” is the time (and date) from the GPS “data outlet”. This time can be off by 500ms (or even worse) and that’s normal. That’s why we do not use that time as an accurate reference clock.
You might want to read about time references. I think the GPS time system is not strictly identical to the UTC time returned by those time servers. The time measured by the atomic clocks has a leap second added periodically to keep UTC within +/- 1 second of astronomical time, which is not stable.
Is your NTP implementation able to correct for network latency? Try using an NTP server with low latency to you...
These factors may explain the difference you see.
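For reference, an (S)NTP client corrects for network latency using the four timestamps of the exchange. A small worked example (the timestamp values are made up):

    #include <cstdio>

    int main() {
        // T1 = client transmit, T2 = server receive, T3 = server transmit, T4 = client receive
        double T1 = 100.000, T2 = 100.120, T3 = 100.121, T4 = 100.260;  // seconds, illustrative

        double delay  = (T4 - T1) - (T3 - T2);          // round-trip network delay
        double offset = ((T2 - T1) + (T3 - T4)) / 2.0;  // clock offset, assuming a symmetric path

        std::printf("delay=%.3f s offset=%.3f s\n", delay, offset);
        return 0;
    }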

500Hz or higher serial port data recording

Hello, I'm trying to read some data from the serial port and record it on the hard drive. I'm using Visual C++ Express and made an application using Windows Forms.
The program basically sends a byte ("s") every t seconds; this triggers the device connected to the serial port to send back 3 bytes. The baud rate is currently 38400 bps. The time t is controlled by the Visual C++ Timer class.
The problem I have is that if I set the timer's tick interval to 1 ms, the data is not recorded every 1 ms but roughly every 15 ms. I've read that the resolution of the timer may be limited to about 15 ms, but I'm not sure about it. Anyhow, how can I make the timer event trigger every 1 ms instead of every 15 ms? Or is there another way to read the serial port data faster? I'm looking for 500 Hz or higher.
The device connected to the serial port is a 32-bit microcontroller whose program I also control, so I can easily change it, but I just can't figure out another way to make this transmission.
Thanks for any help!
Windows is not a real-time OS, and regardless of what period your timer is set to there are no guarantees that it will be consistently maintained. Moreover the OS clock resolution is dependent on the hardware vendor's HAL implementation and varies from system to system. Multi-media timers have higher resolution, but the real-time guarantees are still not there.
Apart from that, you need to do a little arithmetic on the timing you are trying to achieve. At 38400,N,8,1, you can transfer at most 3.84 characters per millisecond, so your timing is tight in any case, since you are pinging with one character and expecting three characters to be returned. You certainly can't go any faster without increasing the bit rate.
A better solution would be to have the PC host send the required reporting period to the embedded target once then have the embedded target perform its own timing so that it autonomously emits data every period until the PC requests that it stop or sends a different period. Your embedded system is far more capable of maintaining hard-real-time constraints.
Alternatively you could simply have your device perform its sample and transmit the three characters with the timing entirely determined by the transmission time of the three characters, and stream the data constantly. This will give you a sample period of 781.25us (1280Hz) without any triggering from the PC and it will be truly periodic and jitter free. If you want a faster sample rate, simply increase the bit rate.
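The timing arithmetic from that answer, spelled out (10 bit times per character at 8N1):

    #include <cstdio>

    int main() {
        const double baud = 38400.0;
        const double bitsPerChar = 10.0;          // 1 start + 8 data + 1 stop
        double charTime  = bitsPerChar / baud;    // ~260.4 us per character
        double frameTime = 3 * charTime;          // 3-byte sample -> ~781.25 us
        std::printf("chars per ms: %.2f, free-running sample rate: %.0f Hz\n",
                    1e-3 / charTime, 1.0 / frameTime);
        return 0;
    }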
Windows Forms timer resolution is about 15-20 ms. You can try a multimedia timer; see the timeSetEvent function.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd757634%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743609%28v=vs.85%29.aspx
Timer precision is set by the uResolution parameter (0 = maximum possible precision). In any case, you cannot rely on getting a timer callback exactly every millisecond - Windows is not a real-time system.
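A minimal sketch of that multimedia-timer route (link against winmm.lib; the 1 ms period and the empty callback body are just for demonstration - the serial I/O would still have to be hooked in elsewhere):

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>
    #pragma comment(lib, "winmm.lib")

    volatile LONG g_ticks = 0;

    void CALLBACK onTimer(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR) {
        InterlockedIncrement(&g_ticks);   // keep the callback short; no blocking I/O here
    }

    int main() {
        timeBeginPeriod(1);               // request 1 ms scheduler granularity
        MMRESULT id = timeSetEvent(1, 0,  // 1 ms period, best available resolution
                                   onTimer, 0, TIME_PERIODIC);
        Sleep(1000);                      // let it run for a second
        timeKillEvent(id);
        timeEndPeriod(1);
        std::printf("ticks in 1 s: %ld\n", g_ticks);
        return 0;
    }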

Get notified of ntp adjustments

I am in a weird situation and I need some direction with NTP time adjustments.
I have a PC (Red Hat) that runs NTP daemon and this PC adjusts its time with a Stratum 2 time server on my LAN.
My PC is also connected to a DVR over a serial port (RS-232). This device's time and my PC's time need to be in synchronization.
However, after some time the clocks of my PC and the DVR begin to drift apart, so I need a way of detecting time adjustments on my PC and applying the same adjustment to the DVR as well.
Is there any way of doing this ?
I am hoping to find a way of subscribing to some kind of event at OS level for system clock changes on Red Hat. (If this is possible at all for RedHat)
It seems it is possible on Windows with the SystemEvents.TimeChanged event, but I could not find a counterpart on Red Hat using C++.
Any help is appreciated.
Most of the time the NTP daemon does not make discrete adjustments to the system clock; it only slows the clock down or speeds it up using adjtime, in an effort to keep the drift rate under control. The daemon may be doing this fairly continuously, and it will not tell you every time it makes an adjustment.
Even if the NTP daemon told you whenever it made adjustments, that information would not be useful to you. Your DVR's clock is driven by different hardware and therefore has a different drift rate and would require a different set of adjustments at different times. Ideally this would be done by an NTP daemon on the DVR!
The NTP daemon does under some circumstances cause a jump in the clock. It may happen at startup or if the clock gets way off, but this should be a very rare event. It probably emits a log message when it does this, so one possibility would be to watch the logs. Another possibility would be to compare the results from clock_gettime(CLOCK_REALTIME) and clock_gettime(CLOCK_MONOTONIC) from time to time. When you notice that the delta between these two clocks has changed, it must be because someone made the system time jump. But beware: the results will be jittery because an unpredictable and variable amount of time elapses from the moment you fetch one of the clocks until the moment you fetch the other one.
Depending on your needs, you might be able to achieve what you need by ignoring the system time and using only clock_gettime(CLOCK_MONOTONIC) for synchronizing with the DVR. That clock is guaranteed not to jump. But beware! (again?! haha!) I believe CLOCK_MONOTONIC may still slow down and speed up under the direction of the NTP daemon.
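A sketch of that delta-watching idea: poll the difference between CLOCK_REALTIME and CLOCK_MONOTONIC and treat a change in that difference as a clock step (the 50 ms threshold and 1 s polling period are arbitrary choices here):

    #include <time.h>
    #include <stdio.h>
    #include <math.h>
    #include <unistd.h>

    static double now(clockid_t id) {
        timespec ts;
        clock_gettime(id, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main() {
        double lastDelta = now(CLOCK_REALTIME) - now(CLOCK_MONOTONIC);
        for (;;) {
            sleep(1);
            double delta = now(CLOCK_REALTIME) - now(CLOCK_MONOTONIC);
            if (fabs(delta - lastDelta) > 0.05) {    // 50 ms: ignore slew and sampling jitter
                printf("system clock jumped by %.3f s\n", delta - lastDelta);
                // ...forward the same adjustment to the DVR over RS-232 here...
                lastDelta = delta;
            }
        }
    }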

What is the best way for "Polling"?

This question is related to microcontroller programming, but anyone may suggest a good algorithm to handle this situation.
I have one central console and a set of remote sensors. The central console has a receiver, and each sensor has a transmitter operating on the same frequency, so we can only implement simplex communication.
Since the transmitters work on the same frequency, we cannot have two sensors sending data to the central console at the same time.
Now I want to program the sensors to perform some "polling". The central console should get some idea about the existence of the sensors (whether each sensor is responding or not).
I can imagine several ways.
1. Use the same interval between poll messages for each sensor and start the sensors at random times, so they will not transmit at the same time.
2. Use some round-robin mechanism: sensor 1 starts polling at 5 seconds, the second at 10 seconds, etc. A more formal version of method 1.
The maximum data transfer rate is around 4800 bps, so we need to consider that as well.
Can someone suggest a good way to resolve this with less usage of the communication links? Note that we can use different poll intervals for each sensor if necessary.
I presume what you describe is that the sensors and the central unit are connected to a bus that can deliver only one message at a time.
A normal way to handle this is to have collision detection. This is e.g. how Ethernet operates, as far as I know. You try to send a message, then attempt to detect a collision. If you detect a collision, wait a random amount of time (to break ties) and then re-transmit, of course with the collision check again.
If you can't detect collisions, the different sensors could have polling intervals that are all distinct prime numbers. This would guarantee that every sensor has dedicated slots for successful polling. Of course there would still be collisions, but they wouldn't need to be detected. Here is an example with primes 5, 7 and 11:
----|----|----|----|----|----|----|----| (5)
------|------|------|------|------|----- (7)
----------|----------|----------|-:----- (11)
* COLLISION
Notably, it doesn't matter whether the sensors start "in phase" or "out of phase".
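A quick simulation of that scheme, counting collided slots over one full cycle of the three illustrative periods:

    #include <cstdio>

    int main() {
        const int periods[3] = {5, 7, 11};
        const int cycle = 5 * 7 * 11;             // the pattern repeats every 385 ticks
        int collisions = 0;
        for (int t = 1; t <= cycle; ++t) {
            int talkers = 0;
            for (int p : periods)
                if (t % p == 0) ++talkers;        // this sensor transmits on tick t
            if (talkers > 1) ++collisions;        // more than one sensor in the same slot
        }
        std::printf("%d collided slots out of %d\n", collisions, cycle);
        return 0;
    }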
I think you need to look into a collision detection system (a la Ethernet). If you have time-based synchronization, you rely on the clocks on the console and sensors never drifting out of sync. This might be ok if they are connected to an external, reliable time reference, or if you go to the expense of having a battery backed RTC on each one (expensive).
Consider using all or part of an existing protocol, unless protocol design is an end in itself - apart from saving time you reduce the probability that your protocol will have a race condition that causes rare irreproducible bugs.
A lot of protocols for this situation have the sensors keeping quiet until the master specifically asks them for the current value. This makes it easy to avoid collisions, and it makes it easy for the master to request retransmissions if it thinks it has missed a packet, or if it is more interested in keeping up to date with one sensor than with others. This may even give you better performance than a system based on collision detection, especially if commands from the master are much shorter than sensor responses. If you end up with something like Alohanet (see http://en.wikipedia.org/wiki/ALOHAnet#The_ALOHA_protocol) you will find that the tradeoff between not transmitting very often and having too many collisions forces you to use less than 50% of the available bandwidth.
Is it possible to assign a unique address to each sensor?
In that case you can implement a Master/Slave protocol (like Modbus or similar), with all devices sharing the same communication link:
Master is the only device which can initiate communication. It can poll each sensor separately (one by one), by broadcasting its address to all slaves.
Only the slave device which was addressed will reply.
If there is no response after a certain period of time (timeout), device is not available and Master can poll the next device.
See also: List of automation protocols
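A sketch of the master-side poll loop for such a scheme (Modbus-like; the addresses, timeout and the two driver functions are invented for illustration):

    #include <cstdint>
    #include <cstdio>

    // Stubs standing in for the real radio/serial driver.
    bool sendRequest(uint8_t address) { (void)address; return true; }           // broadcast "sensor <address>, report"
    bool waitForReply(uint8_t address, uint32_t timeoutMs) {                    // true if the addressed sensor answered in time
        (void)address; (void)timeoutMs; return true;
    }

    int main() {
        const uint8_t sensors[] = {1, 2, 3, 4};
        for (int round = 0; round < 10; ++round) {        // a few poll rounds for the example
            for (uint8_t addr : sensors) {
                sendRequest(addr);                        // only the addressed slave may reply
                bool alive = waitForReply(addr, 200);     // on timeout, just move on to the next device
                std::printf("sensor %u %s\n", addr, alive ? "ok" : "not responding");
            }
        }
        return 0;
    }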
I worked with some Zigbee systems a few years back. We only had two sensors, so we just hard-coded them with different wait times and had them always respond to requests. However, we considered something along the lines of this:
Start out with an announcement from the console 'Hey everyone, let's make a network!'
Nodes all attempt to respond with something like 'I'm hardware address x, can I join?'
At first it's crazy, but with some random retry times, eventually the console responds to all nodes: 'Yes hardware address x, you can join. You are node #y and you will have a wait time of z milliseconds from the time you receive your request for data'
Then it should be easy. Every time the console asks for data the nodes respond in their turn. Assuming transmission of all of the data takes less time than the polling period you're set. It's best not to acknowledge the messages. If the console fails to respond, then very likely the node will try to retransmit just when another node is trying to send data, messing both of them up. Then it snowballs into complete failure...
