What is the best way for "Polling"? - algorithm

This question is related to microcontroller programming, but anyone may suggest a good algorithm to handle this situation.
I have one central console and a set of remote sensors. The central console has a receiver and each sensor has a transmitter operating on the same frequency, so we can only implement simplex communication.
Since the transmitters work on the same frequency, we cannot have two sensors sending data to the central console at the same time.
Now I want to program the sensors to perform some "polling". The central console should get some idea about the existence of the sensors (whether each sensor is responding or not).
I can imagine several ways.
1. Use the same interval between poll messages for each sensor and start the sensors at random times, so they will not transmit at the same time.
2. Use some round-robin mechanism: sensor 1 starts polling at 5 seconds, the second at 10 seconds, etc. A more formal version of method 1.
The maximum data transfer rate is around 4800 bps, so we need to consider that as well.
Can someone suggest a good way to resolve this with less usage of the communication link? Note that we can use different poll intervals for each sensor if necessary.

I presume what you describe is that the sensors and the central unit are connected to a bus that can deliver only one message at a time.
A normal way to handle this is to have collision detection. This is, as far as I know, how Ethernet operates: you try to send a message, then attempt to detect a collision. If you detect a collision, wait for a random amount of time (to break ties) and then re-transmit, again with collision detection.
If you can't detect collisions, the different sensors could use polling intervals that are all distinct prime numbers. This would guarantee that every sensor gets dedicated slots for successful polling. Of course there would still be collisions, but they wouldn't need to be detected. Here is an example with primes 5, 7 and 11:
----|----|----|----|----|----|----|----| (5)
------|------|------|------|------|----- (7)
----------|----------|----------|-:----- (11)
                                  *  COLLISION
Notably, it doesn't matter whether the sensors start "in phase" or "out of phase".
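To make the point concrete, here is a small simulation of the idea (the intervals match the diagram; the horizon, offsets and code are my own sketch, not part of the original answer). It counts how many transmit slots collide over one full cycle and how many remain dedicated:

    // Quick simulation of the prime-interval idea above: three sensors transmit
    // every 5, 7 and 11 time units; we count how many slots collide.
    // Purely illustrative -- intervals, horizon and start offsets are arbitrary.
    #include <cstdio>
    #include <map>

    int main() {
        const int intervals[] = {5, 7, 11};   // distinct prime poll intervals
        const int offsets[]   = {0, 0, 0};    // "in phase" start; change to test phase shifts
        const int horizon     = 5 * 7 * 11;   // one full cycle (lcm of the primes)

        std::map<int, int> transmissions;     // time slot -> number of sensors transmitting
        for (int s = 0; s < 3; ++s)
            for (int t = offsets[s] + intervals[s]; t <= horizon; t += intervals[s])
                ++transmissions[t];

        int collided = 0, clean = 0;
        for (const auto &slot : transmissions)
            (slot.second > 1 ? collided : clean)++;

        // Each sensor still gets plenty of dedicated (collision-free) slots per cycle.
        printf("clean slots: %d, collided slots: %d\n", clean, collided);
        return 0;
    }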

I think you need to look into a collision detection system (a la Ethernet). If you rely on time-based synchronization, you rely on the clocks on the console and sensors never drifting out of sync. This might be OK if they are connected to an external, reliable time reference, or if you go to the expense of putting a battery-backed RTC on each one.

Consider using all or part of an existing protocol, unless protocol design is an end in itself - apart from saving time you reduce the probability that your protocol will have a race condition that causes rare irreproducible bugs.
A lot of protocols for this situation have the sensors keeping quiet until the master specifically asks them for the current value. This makes it easy to avoid collisions, and it makes it easy for the master to request retransmissions if it thinks it has missed a packet, or if it is more interested in keeping up to date with one sensor than with others. This may even give you better performance than a system based on collision detection, especially if commands from the master are much shorter than sensor responses. If you end up with something like Alohanet (see http://en.wikipedia.org/wiki/ALOHAnet#The_ALOHA_protocol) you will find that the tradeoff between not transmitting very often and having too many collisions forces you to use less than 50% of the available bandwidth.
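For reference, the standard textbook throughput figures behind that remark (not part of the original answer): with offered load G, random-access throughput S is

    S = G * e^(-2G)    (pure ALOHA:    peak S = 1/(2e) ~ 18.4% at G = 0.5)
    S = G * e^(-G)     (slotted ALOHA: peak S = 1/e   ~ 36.8% at G = 1)

so even the slotted variant stays well under the 50% of available bandwidth mentioned above.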

Is it possible to assign a unique address to each sensor?
In that case you can implement a Master/Slave protocol (like Modbus or similar), with all devices sharing the same communication link:
The Master is the only device which can initiate communication. It can poll each sensor separately (one by one) by broadcasting that sensor's address to all slaves.
Only the slave device which was addressed will reply.
If there is no response after a certain period of time (timeout), the device is considered unavailable and the Master can poll the next device.
See also: List of automation protocols
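As a rough illustration of such a poll cycle (the transport is abstracted away; poll_request, wait_for_reply, the addresses and the timeout are hypothetical placeholders, not a real Modbus stack):

    // Minimal sketch of a Modbus-style master poll loop (transport abstracted away).
    // poll_request / wait_for_reply are hypothetical stand-ins for the real serial I/O.
    #include <cstdio>
    #include <optional>
    #include <vector>

    // Hypothetical transport: returns the slave's reply, or nothing on timeout.
    std::optional<int> wait_for_reply(int address, int timeout_ms) {
        // Simulated: pretend slave 3 is offline, everyone else answers with a reading.
        if (address == 3) return std::nullopt;
        return 100 + address;                 // fake sensor value
    }

    void poll_request(int address) {
        // Real code would transmit a frame containing the slave address + CRC here.
        printf("master -> broadcast: poll slave %d\n", address);
    }

    int main() {
        const std::vector<int> slaves = {1, 2, 3, 4};
        const int timeout_ms = 250;           // arbitrary reply timeout

        for (int addr : slaves) {             // one poll cycle; repeat forever in real code
            poll_request(addr);               // only the addressed slave replies
            if (auto value = wait_for_reply(addr, timeout_ms))
                printf("slave %d -> %d\n", addr, *value);
            else
                printf("slave %d: timeout, marked unavailable this cycle\n", addr);
        }
        return 0;
    }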

I worked with some Zigbee systems a few years back. Ours had only two sensors, so we just hard-coded them with different wait times and had them always respond to requests. However, we considered something along these lines:
Start out with an announcement from the console 'Hey everyone, let's make a network!'
Nodes all attempt to respond with something like 'I'm hardware address x, can I join?'
At first it's crazy, but with some random retry times, eventually the console responds to all nodes: 'Yes hardware address x, you can join. You are node #y and you will have a wait time of z milliseconds from the time you receive your request for data'
Then it should be easy. Every time the console asks for data, the nodes respond in their turn. Assuming transmission of all of the data takes less time than the polling period, you're set. It's best not to acknowledge the messages: if the console fails to respond, the node will very likely try to retransmit just when another node is trying to send data, messing both of them up. Then it snowballs into complete failure...
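A rough sketch of the node-side behaviour described above, assuming the console broadcasts a single "send data" request and each node delays by its assigned offset before transmitting (the radio_* helpers, node id and slot width are made up for illustration):

    // Node #y waits its assigned offset after each data request, so replies from
    // different nodes never overlap. radio_* helpers are hypothetical.
    #include <chrono>
    #include <cstdio>
    #include <string>
    #include <thread>

    // Hypothetical radio layer.
    bool radio_wait_for_data_request() { return true; }       // blocks until "send data" broadcast
    void radio_send(const std::string &payload) { printf("tx: %s\n", payload.c_str()); }

    int main() {
        const int node_id        = 2;             // assigned during the join handshake
        const int slot_offset_ms = node_id * 40;  // assumed wait time z = node id * slot width

        for (int round = 0; round < 3; ++round) {              // a few demo rounds
            if (!radio_wait_for_data_request()) continue;
            std::this_thread::sleep_for(std::chrono::milliseconds(slot_offset_ms));
            radio_send("node 2: sensor reading");              // transmit inside our own slot
        }
        return 0;
    }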

Related

ZeroMQ - Handling slow receivers without dropping

I have an architecture where I have a ROUTER socket and multiple DEALER sockets.
Some DEALER sockets will only send data, some will only receive data and others can do a mixture of both.
I have a scenario where I have one DEALER socket that is sending data at an extremely fast rate. This data is received by another DEALER that processes it as fast as it can. The send rate is always going to be higher than the receive rate.
In my current setup the ZMQ_SNDHWM on my ROUTER socket gets hit for the receiving client and messages are silently dropped. I do not want this to be the case.
What is the best way to deal with this scenario?
I have looked at DEALER->DEALER on a different port, but this could be hard to maintain; depending on the number of sessions created, I could potentially need one port per session.
The other way I can think of to solve this is some pipelining, in which the receiving DEALER socket tells the sender when it is ready to receive, but this seems to add a lot of complication to the overall protocol and a lot more state management. It also seems to defeat the ability to block naturally on DEALER sockets, which is really what I need in this case; the DEALER sockets will never have to communicate with any other socket.
Do not rely on blocking, even less on uncontrolled use of resources
In distributed systems there is not much room for optimistic beliefs and expectations. In many posts I advocate, where possible, not relying on blocking states: your code goes out of control and you cannot do anything to leave such a state except pray for a message to arrive any time soon, if ever.
Rather, bear the responsibility end-to-end, which in a distributed system means you also need to design strategies for surviving a "remote" death and similar situations that are outside your domain of control, but which your code design has no right to abstract away.
Even if you would rather not, explicit flow management is the way to go
The late 1990s demonstrated many flow-control strategies for distributed systems, so this is definitely not a new field.
Increasing buffer sizes does not help to manage an uncontrolled / unmanaged flow of events. While ZMQ_SNDHWM / ZMQ_RCVHWM and ZMQ_SNDBUF / ZMQ_RCVBUF may help in non-brutal cases, where a bit more space temporarily postpones the principal weakness of uncontrolled message flows, the problem is like standing still with closed eyes in the middle of crossfire. Such an agent may survive, but its survival is not a result of its design cleverness, just accidental luck.
Token passing seems to be the cheapest way to throttle the flow so that it remains acceptable / processable by the slowest node. Increasing that node's processing performance may be done with a static, incrementally expanded, or fully adaptive pooling proxy, so even this known bottleneck remains manageable and under your design control.
The highest layer of robustness is making your distributed system's design resilient to spurious bursts of events (be they messages or .connect() attempts). So, independently of the building blocks selected, the designer also has the responsibility to design in all the survival strategies. Not doing so leaves your system vulnerable to capacity-directed attack vectors or other exploits of these kinds of known weaknesses.
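As one concrete (hypothetical) shape of such token passing, here is a sketch of a credit-based sender loop using the plain libzmq C API; in the ROUTER-based setup the CREDIT messages would be relayed by the broker from the receiving DEALER back to the sender. The endpoint, batch size and payloads are assumptions, not from the question:

    // Credit-based flow control sketch: the sender only transmits while it holds
    // credits; the receiver grants BATCH more credits each time it has drained a
    // batch. Endpoint and message formats are illustrative.
    #include <zmq.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        void *ctx = zmq_ctx_new();
        void *dealer = zmq_socket(ctx, ZMQ_DEALER);
        zmq_connect(dealer, "tcp://127.0.0.1:5555");   // hypothetical ROUTER endpoint

        int credits = 0;                               // messages we may still send
        const int BATCH = 16;                          // credits granted per "CREDIT" message

        for (int seq = 0; seq < 1000; ++seq) {
            // Block only when we have run completely dry of credits.
            while (credits == 0) {
                char buf[16];
                int n = zmq_recv(dealer, buf, sizeof buf, 0);          // blocking wait
                if (n >= 6 && memcmp(buf, "CREDIT", 6) == 0)
                    credits += BATCH;
            }
            // Drain any further credit messages without blocking.
            char buf[16];
            while (zmq_recv(dealer, buf, sizeof buf, ZMQ_DONTWAIT) >= 6 &&
                   memcmp(buf, "CREDIT", 6) == 0)
                credits += BATCH;

            char payload[64];
            int len = snprintf(payload, sizeof payload, "sample %d", seq);
            zmq_send(dealer, payload, len, 0);          // send one data message
            --credits;                                  // spend one credit
        }

        zmq_close(dealer);
        zmq_ctx_destroy(ctx);
        return 0;
    }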

How to periodically synchronize NTP time with local timer-based time

I have an embedded system with WiFi (based on an 80MHz ESP8266, developed using Arduino IDE), and I would like it to keep reasonably accurate clock time (to one second), using the two tools at its disposal: the internet, and its own internal timers.
Challenges:
- The processor clock will likely drift over time, ostensibly in a predictable manner.
- NTP uses UDP, so return packets with the time are not guaranteed to return in order, or to return within any set interval, or to return at all.
- Latency of return NTP packets varies widely over time, from under 100ms to (potentially) several seconds.
- Latency of DNS varies widely over time, from under 100ms up to several seconds (I can't control the timeout). DNS is needed to look up IP addresses for the NTP server pool(s).
- The system takes various actions based on certain times and intervals, so I don't want to over-control the time, setting it forward and backward continually, causing actions to be needlessly missed or duplicated (or, alternatively, to complicate the program logic handling all of these conditions - an occasional miss or duplicate is not mission critical).
- Sometimes an NTP server will return the wrong time (e.g., pool.ntp.org occasionally returns 0 seconds since 1900, which is easy to detect).
- Sometimes a stray return packet with an old time will arrive just ahead of the return packet from the current request.
Current Approach:
- Keep a local device time incrementing using an ISR triggered every 0.1 second.
- Periodically poll an NTP server (pool) - currently every 6 minutes, but it really doesn't have to be this often.
- Try again if there's no response within a short interval (currently 1 second, which is shorter than typical, but most requests return in under 150ms).
- Vary the NTP server (pool) on each try, to spread the load and average out response times and any service errors.
- Extract the time to the nearest 0.1 second (and adjust for typical receive latency).
- If the NTP time is off from the local device time by more than a second, update the local device time (in a critical section); see the sketch after this list.
- Time out, retry, and re-initialize (where appropriate) for failed network elements of the process. Abandon the request after most hope is lost, and just try again next time.
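A minimal sketch of the "step only when off by more than a second" update from the list above. The 0.1 s ISR is simulated with an atomic counter, and fetch_ntp_tenths() is a hypothetical stand-in for the request/parse/latency-adjustment steps described in the question:

    // Step the local 0.1 s tick counter only when it disagrees with NTP by > 1 s.
    #include <atomic>
    #include <cstdio>
    #include <cstdlib>
    #include <optional>

    std::atomic<long> device_tenths{0};           // incremented by the 0.1 s tick ISR

    std::optional<long> fetch_ntp_tenths() {      // hypothetical: NTP time in 0.1 s units
        return device_tenths.load() + 23;         // pretend we have drifted 2.3 s behind
    }

    void sync_once() {
        auto ntp = fetch_ntp_tenths();
        if (!ntp) return;                         // timeout / bad reply: try again next period
        long offset = *ntp - device_tenths.load();
        if (std::labs(offset) > 10) {             // only step when off by more than 1 second
            // On the ESP8266 this assignment would sit between noInterrupts()/interrupts().
            device_tenths.store(*ntp);
            printf("stepped clock by %+.1f s\n", offset / 10.0);
        }
    }

    int main() { sync_once(); return 0; }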
Is there a better, or canonical, or best practices way to do this time synchronization? Are there other factors or approaches I'm missing?
Automatically updating the time from the internet or a GPS system may be overkill for a cheap Arduino project, since you may need an Ethernet shield and a TSR program that regularly sends the updated time to the Arduino. Instead you can purchase a more accurate RTC clock (some are getting quite accurate these days) and manually update the clock from time to time using a keyboard and LED display incorporated with the Arduino. If your project is very time-critical, it is better to use some cheap crystal-based solution instead.

Synchronizing a counter across a network

I have two computers that can talk to each other over a serial connection. The connection is made over a wireless network, so there is a variable, changing delay in communications between the two systems. On both systems I have a counter running that increments by 1 every ms. Both start as soon as the applications start, and each computer may be started at a different time. How can I, using the serial connection, synchronize the counters so that systemA.counter equals systemB.counter and so that both counters increment at the same time (or as close as possible)?
Ideally, once synchronized, the counters would drift apart only slowly, so that once every 3 or 4 thousand increments I could re-synchronize.
I'm looking for good resources on the topic, example algorithms, example code (C/C++), anything to point me in the right direction.
Update
This is a closed system, no internet. For all intents and purposes there is no real protocol at all besides an open serial line over the wireless link. That link at the moment is Bluetooth, but I'm thinking of moving it to a ZigBee mesh. There are currently 2 nodes, but if I had 30 nodes all running this same application I would want them all to synchronize. There is no client/server designation, just a couple of devices running the same program with a counter. I don't have access to anything like wall-clock time, just this counter that increments once a millisecond and whatever algorithm I can put in place.
Once I get this working, I would like to put in place a positioning and mapping system, but to figure out distances between nodes I need accurate timing synchronized across the devices.
If you use these counters to order events in a system, you should look at vector clocks or Lamport timestamps.
The obvious resource is NTP, which is documented for example at http://www.eecis.udel.edu/~mills/ntp.html and the links off there. Basically, it uses timestamps to adjust the frequency at which the local clocks run. The protocol has been around for years and has been the subject of continuous research - I can't see any pack of slides there which immediately makes it clear how it works. You might be better off seeing whether there is already an NTP implementation available than trying to re-implement it yourself.
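For the counter question specifically, the basic building block NTP-style protocols start from is a round-trip offset estimate, assuming roughly symmetric link delay. Here is a tiny self-contained sketch; the fake_* values and helper names merely simulate the serial exchange and are not from the answer:

    // A sends a request at t0, B replies with its own counter t1, A receives the
    // reply at t2. With symmetric delay, offset = (t1 + (t2 - t0)/2) - t2.
    #include <cstdint>
    #include <cstdio>

    // Hypothetical stand-ins: in the real system these would use your 1 ms counter
    // and a "send ping / read reply" exchange over the serial link.
    static int64_t fake_a = 10'000, fake_b = 12'500, fake_delay = 15;
    int64_t local_counter_ms()          { return fake_a; }
    int64_t request_remote_counter_ms() {
        fake_a += 2 * fake_delay;          // the round trip elapses on A's clock
        return fake_b + fake_delay;        // B's counter when the request arrived
    }

    int main() {
        int64_t t0 = local_counter_ms();            // A: request sent
        int64_t t1 = request_remote_counter_ms();   // B: counter value when it replied
        int64_t t2 = local_counter_ms();            // A: reply received
        int64_t offset = (t1 + (t2 - t0) / 2) - t2; // assumes symmetric link delay
        printf("B is ahead of A by ~%lld ms\n", (long long)offset);
        // A would add 'offset' to its counter (or slew toward it) and repeat periodically.
        return 0;
    }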
It appears (e.g. from searching) that there is a small industry of people working on time synchronisation algorithms, especially in the context of wireless sensor networks. One jumping-off point, apart from searches, is the survey paper at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.85.2012 - Time synchronization in sensor networks: A survey (2004)

500Hz or higher serial port data recording

Hello, I'm trying to read some data from the serial port and record it to the hard drive. I'm using Visual C++ Express and made an application using Windows Forms.
The program basically sends a byte ("s") every t seconds; this triggers the device connected to the serial port to send back 3 bytes. The baud rate is currently 38400 bps. The time t is controlled by the Visual C++ timer class.
The problem I have is that if I set the ticking time of the timer to 1 ms, the data is not recorded every 1 ms, but around every 15 ms. I've read that the resolution of the timer may be set to 15 ms, but I'm not sure about that. Anyhow, how can I make the timer event trigger every 1 ms instead of every 15 ms? Or is there another way to read the serial port data faster? I'm looking for 500 Hz or higher.
The device connected to the serial port is a 32-bit microcontroller whose program I also control, so I can easily change it, but I just can't figure out another way to make this transmission.
Thanks for any help!
Windows is not a real-time OS, and regardless of what period your timer is set to there are no guarantees that it will be consistently maintained. Moreover the OS clock resolution is dependent on the hardware vendor's HAL implementation and varies from system to system. Multi-media timers have higher resolution, but the real-time guarantees are still not there.
Apart from that, you need to do a little arithmetic on the timing you are trying to achieve. At 38400,N,8,1, you can only transfer at most 3.84 characters in 1ms, so your timing is tight in any case since you are pinging with one character and expecting three characters to be returned. You can certainly go no faster without increasing the bit rate.
A better solution would be to have the PC host send the required reporting period to the embedded target once then have the embedded target perform its own timing so that it autonomously emits data every period until the PC requests that it stop or sends a different period. Your embedded system is far more capable of maintaining hard-real-time constraints.
Alternatively you could simply have your device perform its sample and transmit the three characters with the timing entirely determined by the transmission time of the three characters, and stream the data constantly. This will give you a sample period of 781.25us (1280Hz) without any triggering from the PC and it will be truly periodic and jitter free. If you want a faster sample rate, simply increase the bit rate.
Windows Forms timer resolution is about 15-20 ms. You can try a multimedia timer; see the timeSetEvent function.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd757634%28v=vs.85%29.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743609%28v=vs.85%29.aspx
Timer precision is set by the uResolution parameter (0 = maximum possible precision). In any case, you cannot get a timer callback every ms - Windows is not a real-time system.
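A minimal sketch of such a multimedia timer callback (standard winmm API; the tick counter and timings here are illustrative, and the serial I/O itself is intentionally left out):

    // 1 ms periodic callback via timeSetEvent. Link with winmm.lib.
    // Even with this, timing is not guaranteed: Windows is not a real-time OS.
    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>
    #pragma comment(lib, "winmm.lib")

    volatile LONG g_ticks = 0;

    void CALLBACK onTick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR) {
        InterlockedIncrement(&g_ticks);
        // A real version would signal an event that triggers the "send 's',
        // read 3 bytes" serial exchange, rather than doing I/O in the callback.
    }

    int main() {
        timeBeginPeriod(1);                                     // request 1 ms system timer resolution
        MMRESULT id = timeSetEvent(1, 0, onTick, 0,             // 1 ms period, max precision
                                   TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
        Sleep(1000);                                            // let it run for about a second
        timeKillEvent(id);
        timeEndPeriod(1);
        printf("ticks in ~1 s: %ld\n", g_ticks);                // typically close to 1000
        return 0;
    }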

maximum response time on keypress

I work for a company that develops psychological tests. One of these tests measures the reaction time of a candidate.
Does anyone have an idea of the maximum delay between a key press and the time this key event is available? What are the dependencies?
Is there a guaranteed maximum response time? I've read something about 5-25 ms.
What is the best way to handle keyevents to have a minimum delay?
Thanks in advance,
Kevin
Windows UI processing is very complex. It includes algorithms like priority promotion on keypress, but it will usually wait until the next scheduler tick (at worst, 30 ms on desktop systems and 60 ms on server systems) if another process is demanding a full CPU timeslice.
To overcome that, you would need a special keyboard driver: the event would still be delivered with the same latency, but the driver could also measure the accurate time. Accurate time measurement is possible on Windows systems if dynamic CPU clock switching is disabled (look up QueryPerformanceCounter(); you would need to know how to invoke it from the DDK). In that case the keyboard event would still arrive with unpredictable latency, but the original bus event would be time-stamped correctly. Then you are left only with the bus latencies, which should be smaller than the standard deviation of your measurements. See also What happens from the moment we press a key on the keyboard, until it appears in your word document
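If you only need accurate timestamps in user mode rather than a custom driver, a minimal QueryPerformanceCounter sketch looks like this. It timestamps when the application sees the key, so USB polling, driver and scheduling latency are still included; the busy-poll on VK_SPACE is just for illustration:

    // Timestamp a keypress with the high-resolution performance counter.
    #include <windows.h>
    #include <cstdio>

    int main() {
        LARGE_INTEGER freq, t;
        QueryPerformanceFrequency(&freq);                 // ticks per second
        printf("press SPACE...\n");
        // Busy-polling keeps scheduling latency low but burns a CPU core; a real
        // test harness would use a raw-input window or a filter driver instead.
        while (!(GetAsyncKeyState(VK_SPACE) & 0x8000)) { /* spin */ }
        QueryPerformanceCounter(&t);
        printf("space seen at %.3f ms since counter epoch\n",
               1000.0 * t.QuadPart / freq.QuadPart);
        return 0;
    }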
