Synchronizing the timestamp of multiple Allied Vision GenICam cameras through PTP

I am developing a system with multiple industrial Allied Vision Mako cameras. I need those cameras to be synchronized, for which Allied Vision recommends the PTP protocol. I therefore have a time server which acts as the PTP master clock. The cameras are connected to that server through an Ethernet switch. Unfortunately, that switch is not PTP-aware, meaning it introduces unpredictable latency when forwarding the PTP packets. This causes the cameras to remain in PtpStatus == Uncalibrated.
As far as I understand the Allied Vision GigE Features manual, PTP causes the camera timestamps to be synchronized across all cameras, i.e., GevTimestampValue should be the same on all cameras at any given time. However, during an experiment where I filmed a clock with multiple cameras, I observed that the timestamps delivered by two different cameras were about 187511041595600 ticks apart (approx. 187511 seconds), while the clock visible in the frame showed an actual time difference of approx. 0.04 seconds.
Therefore, my questions are:
Did I understand the PTP interface of Allied Vision correctly?
Could PtpStatus == Uncalibrated be the reason this does not work?

After some investigative work, I found the cause and will share my findings here, in case someone else is stuck in the same situation:
Short answer:
The cause of my issues was indeed the switch. I temporarily replaced it with a different switch, which fixed the problem.
I did understand PTP correctly; however, the cameras need to reach PtpStatus == Slave at least once in order to be in sync. If they lose synchronization later, they revert to Uncalibrated and remain roughly in sync, but if they were never in PtpStatus == Slave, they have not yet been synchronized at all. This is what caused the timestamps to be so far apart.
Long answer:
I configured my switch to mirror the ports to which the time server and the camera are connected to my laptop. With Wireshark, I was then able to inspect the PTP traffic and found that the Sync and Delay_Req packets get delivered, which causes the cameras to transition from Listening to Uncalibrated (my time server does not send Follow_Up). However, the Delay_Resp packets (which are sent by the time server) get dropped by the switch. It thus seems that my switch is misconfigured in some way, letting the other multicast packets pass while dropping Delay_Resp.
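For anyone debugging a similar setup, here is how the relevant camera state could be checked in software. This is a minimal sketch assuming the legacy Vimba C++ API and the standard GigE Vision SFNC feature names (PtpStatus, GevTimestampControlLatch, GevTimestampValue) referenced in the Allied Vision manuals; treat it as an illustration, not a verified build:

    #include <cstdio>
    #include <string>
    #include "VimbaCPP/Include/VimbaCPP.h"

    using namespace AVT::VmbAPI;

    int main() {
        VimbaSystem& sys = VimbaSystem::GetInstance();
        if (sys.Startup() != VmbErrorSuccess) return 1;

        CameraPtrVector cameras;
        sys.GetCameras(cameras);
        for (const CameraPtr& cam : cameras) {
            if (cam->Open(VmbAccessModeFull) != VmbErrorSuccess) continue;

            FeaturePtr feature;
            std::string ptpStatus;  // expect "Slave" once PTP has locked
            if (cam->GetFeatureByName("PtpStatus", feature) == VmbErrorSuccess)
                feature->GetValue(ptpStatus);

            // Latch the current timestamp, then read the latched value.
            VmbInt64_t timestamp = 0;
            if (cam->GetFeatureByName("GevTimestampControlLatch", feature) == VmbErrorSuccess)
                feature->RunCommand();
            if (cam->GetFeatureByName("GevTimestampValue", feature) == VmbErrorSuccess)
                feature->GetValue(timestamp);

            std::string id;
            cam->GetID(id);
            printf("%s: PtpStatus=%s, GevTimestampValue=%lld\n",
                   id.c_str(), ptpStatus.c_str(), (long long)timestamp);
            cam->Close();
        }
        sys.Shutdown();
        return 0;
    }

Only once every camera reports Slave here is it meaningful to compare GevTimestampValue across devices.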

Related

High UDP communication latency because of audio rendering (Windows, C++)

I am trying to communicate with an external robot at 1 kHz over UDP using the Winsock library. I am using Windows 10 Pro Version 21H2. On the hardware side, I use a PC with an Intel Core i9-10900X, 32 GB RAM, and an Intel I219 NIC.
Normally it works pretty well: I measured the time spent on communication (sequentially sending and receiving a packet takes between 200 and 500 microseconds), and I also verified with Wireshark the number of packets exchanged (1000 packets sent per second and 1000 packets received per second). The send throughput is 2 Mbps and the receive throughput is 3 Mbps.
The issue starts when any audio is rendered (even the sound played when changing the volume on Windows); this leads to noticeable latency (about 10 to 15 milliseconds).
Stopping the Windows Audio service solves the issue, but our application needs to render sound permanently.
[graph: round-trip time and frequency vs. UDP query index, using the PCI NIC]
A temporary workaround was to use a USB-to-Ethernet adapter instead of the NIC. With this device we see no added latency, but in the past we have experienced performance drops with such adapters due to thermal throttling.
[graph: round-trip time and frequency vs. UDP query index, using the USB-to-Ethernet adapter]
I also tried reducing the audio process priority: no difference. I also tried setting the affinity mask so the audio service runs on different cores than my application: no difference either.
My question: is there a way to let the audio path tolerate more latency in order to prioritize the UDP communication, or to reduce the latency of the UDP communication, so that we can meet our 1 kHz requirement?
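For reference, the measurement loop described above looks roughly like this. A minimal Winsock sketch; the robot's address, port, and payload sizes are placeholders, and error handling is omitted:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <windows.h>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        sockaddr_in robot{};
        robot.sin_family = AF_INET;
        robot.sin_port = htons(50000);                        // placeholder port
        inet_pton(AF_INET, "192.168.0.10", &robot.sin_addr);  // placeholder address

        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        char out[256] = {}, in[512];
        for (int i = 0; i < 1000; ++i) {                      // one second at 1 kHz
            QueryPerformanceCounter(&t0);
            sendto(s, out, sizeof(out), 0, (sockaddr*)&robot, sizeof(robot));
            int fromLen = sizeof(robot);
            recvfrom(s, in, sizeof(in), 0, (sockaddr*)&robot, &fromLen);
            QueryPerformanceCounter(&t1);
            double us = 1e6 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
            printf("cycle %d: %.0f us round trip\n", i, us);
        }
        closesocket(s);
        WSACleanup();
        return 0;
    }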
This problem is due to the Receive Side Throttling feature that some NICs support.
To fix it, you need to set the registry value
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile\NetworkThrottlingIndex
to 0xFFFFFFFF and reboot Windows.
Note that this registry key is private and internal to the Windows OS; it is not meant for public use and is not officially supported by Microsoft.
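For completeness, here is a hedged sketch of making that change programmatically with the Win32 registry API instead of regedit. It must run elevated, and it sets exactly the value quoted above:

    #include <windows.h>
    #include <cstdio>

    int main() {
        HKEY key;
        // Open the SystemProfile key with write access (requires elevation).
        LONG rc = RegOpenKeyExW(
            HKEY_LOCAL_MACHINE,
            L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Multimedia\\SystemProfile",
            0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "open failed: %ld\n", rc);
            return 1;
        }

        DWORD value = 0xFFFFFFFF;  // disables the network throttling
        rc = RegSetValueExW(key, L"NetworkThrottlingIndex", 0, REG_DWORD,
                            reinterpret_cast<const BYTE*>(&value), sizeof(value));
        RegCloseKey(key);

        printf(rc == ERROR_SUCCESS ? "value set; reboot to apply\n" : "set failed\n");
        return rc == ERROR_SUCCESS ? 0 : 1;
    }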

Is it possible to cause 2.4 GHz co-channel interference if no clients are transmitting

I installed some APs at a facility. The facility is now complaining that they are having issues with their 2.4 GHz phone system.
The APs that I installed (different SSID) are running but no clients are associated or transmitting data.
Is it possible to cause co-channel interference without data being transmitted?
Thanks
Yes, 802.11 access points are chatty, users or no. You can expect every access point to transmit beacon frames on the order of 5-15 times per second (the common default beacon interval is 100 time units, i.e. 102.4 ms, which works out to roughly 10 beacons per second per SSID).
These frames are transmitted very quickly, and 2.4 GHz is generally very noisy, so I have difficulty believing that a 2.4 GHz phone system would fail in this scenario -- at least, assuming you didn't install an AP right on top of the phone system. Any device transmitting at +20 dBm a few inches away from a device listening for -90 dBm signals could easily cause problems. Similarly, 2.4 GHz devices don't actually operate at 2.4 GHz along the entire signal path; the signal is generally shifted down towards baseband at something like 100 MHz, and sometimes (particularly with high-power APs) this section is poorly shielded. That leakage can cause issues even outside the target frequency band.
That said, none of that really matters for troubleshooting. The line of questioning I would pursue is: does the problem go away if you shut off all your devices? If so, does it go away if you shut off one in particular? If so, what makes that one special?

Synchronizing a counter across a network

I have two computers that can talk to each other over a serial connection. The connection is made over a wireless network, so there is a variable, changing delay in communications between the two systems. On both systems I have a runtime counter that increments by 1 every millisecond; each starts as soon as its application starts, so the computers may begin counting at different times. How can I use the serial connection to synchronize the counters so that systemA.counter equals systemB.counter and both counters increment at the same time (or as close as possible)?
Ideally, once synchronized, the counters would drift apart only slowly, so that I could re-synchronize once every 3 or 4 thousand increments.
I'm looking for good resources on the topic, example algorithms, example code (C/C++), anything to point me in the right direction.
Update
This is a closed system, no internet. For all intents and purposes there is no real protocol at all besides an open serial line over the wireless link. That link is currently Bluetooth, but I'm considering moving it to a ZigBee mesh. There are currently 2 nodes, but if I had 30 nodes all running this same application, I would want them all to synchronize. There is no client/server designation, just a number of devices running the same program with a counter. I don't have access to anything like wall-clock time, just this counter that increments once a millisecond and whatever algorithm I can put in place.
Once I get this working, I would like to put in place a positioning and mapping system, but to figure out distances between nodes I need accurate timing synchronization across the devices.
If you use these counters to order events in a system, you should look at vector clocks or Lamport timestamps.
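As a minimal sketch of the Lamport idea (a logical ordering only, not wall-clock time):

    #include <algorithm>
    #include <cstdint>

    // Minimal Lamport clock: tick before every local event, and on receipt of
    // a message carrying a remote timestamp, jump past it. Events can then be
    // ordered by (timestamp, node id) even though no clock is synchronized.
    struct LamportClock {
        uint64_t time = 0;
        uint64_t localEvent() { return ++time; }
        uint64_t onReceive(uint64_t remoteTime) {
            time = std::max(time, remoteTime) + 1;
            return time;
        }
    };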
The obvious resource is NTP, which is documented for example at http://www.eecis.udel.edu/~mills/ntp.html and the links off there. Basically, it uses timestamps to adjust the frequency at which local clocks run. The protocol has been around for years and has been the subject of continuous research, though I can't point to any single slide deck there that immediately makes clear how it works. You might be better off looking for an existing NTP implementation than trying to re-implement it yourself.
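The core of the NTP exchange is small enough to sketch here. Assuming each side can stamp a message with its own millisecond counter, one round trip yields four timestamps from which offset and delay fall out (this is the textbook formula; real NTP filters many samples and slews the clock rate instead of stepping the counter):

    #include <cstdint>
    #include <cstdio>

    // t0 = A's counter when the request leaves A
    // t1 = B's counter when the request arrives at B
    // t2 = B's counter when the reply leaves B
    // t3 = A's counter when the reply arrives at A
    struct SyncSample { int64_t t0, t1, t2, t3; };

    // How far B's counter is ahead of A's, assuming a symmetric link.
    int64_t offset(const SyncSample& s) {
        return ((s.t1 - s.t0) + (s.t2 - s.t3)) / 2;
    }

    // Round-trip delay, excluding B's processing time.
    int64_t delay(const SyncSample& s) {
        return (s.t3 - s.t0) - (s.t2 - s.t1);
    }

    int main() {
        SyncSample s{1000, 5203, 5204, 1010};  // made-up example values
        printf("offset=%lld ms, delay=%lld ms\n",
               (long long)offset(s), (long long)delay(s));
    }

The symmetric-link assumption is the weak point on a wireless hop; averaging many samples and discarding those with large delay helps.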
It appears (e.g. from searching) that there is a small industry of people working on time synchronisation algorithms, especially in the context of wireless sensor networks. One jumping-off point, apart from searches, is the survey paper at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.85.2012 - Time synchronization in sensor networks: A survey (2004)

What is the best way for "Polling"?

This question is related to microcontroller programming, but anyone may suggest a good algorithm to handle this situation.
I have one central console and a set of remote sensors. The central console has a receiver, and each sensor has a transmitter operating on the same frequency, so we can only implement simplex communication.
Since the transmitters work on the same frequency, we cannot have two sensors sending data to the central console at the same time.
Now I want to program the sensors to perform some "polling". The central console should get some idea of the existence of the sensors (whether each sensor is responding or not).
I can imagine several ways.
Using the same interval between poll messages for each sensor and starting the sensors at random times, so they will not transmit at the same time.
Using some round-robin mechanism: sensor 1 starts polling at 5 seconds, the second at 10 seconds, etc. A more formal version of method 1.
The maximum data transfer rate is around 4800 bps so we need to consider that as well.
Can someone suggest a good way to resolve this with minimal use of the communication link? Note that we can use different poll intervals for each sensor if necessary.
I presume what you describe is that the sensors and the central unit are connected to a bus that can deliver only one message at a time.
A normal way to handle this is collision detection; as far as I know, this is how classic shared-medium Ethernet operates. You try to send a message, then attempt to detect a collision. If you detect one, wait a random amount of time (to break ties) and re-transmit, again with collision detection.
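A sketch of that retransmit loop with binary exponential backoff (as classic Ethernet used). The collision test here is a stand-in for whatever detection the hardware actually offers:

    #include <chrono>
    #include <cstdio>
    #include <random>
    #include <thread>

    int main() {
        std::mt19937 rng(std::random_device{}());
        const int maxAttempts = 8;
        for (int attempt = 0; attempt < maxAttempts; ++attempt) {
            bool collided = (attempt < 3);  // stand-in for real collision detection
            printf("attempt %d: %s\n", attempt, collided ? "collision" : "sent");
            if (!collided) break;
            // Wait a random number of slots in [0, 2^(attempt+1)) before retrying.
            std::uniform_int_distribution<int> slots(0, (1 << (attempt + 1)) - 1);
            std::this_thread::sleep_for(std::chrono::milliseconds(10 * slots(rng)));
        }
        return 0;
    }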
If you can't detect collisions, the different sensors could have polling intervals that are all distinct prime numbers. This would guarantee that every sensor gets dedicated slots for successful polling. There would still be occasional collisions, but they wouldn't need to be detected. Here is an example with primes 5, 7, and 11:
----|----|----|----|----|----|----|----|  (interval 5)
------|------|------|------|------|-----  (interval 7)
----------|----------|----------|-------  (interval 11)
                                  *  collision: the 5- and 7-tick sensors both transmit at t = 35
Notably, it doesn't matter whether the sensors start "in phase" or "out of phase".
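This is easy to check by brute force. The simulation below counts, for every tick, how many sensors transmit; with intervals 5, 7, and 11 the collisions land only on multiples of 35, 55, and 77, so each sensor still gets plenty of dedicated slots:

    #include <cstdio>

    int main() {
        const int intervals[] = {5, 7, 11};
        for (int t = 1; t <= 5 * 7 * 11; ++t) {  // one full cycle
            int transmitting = 0;
            for (int p : intervals)
                if (t % p == 0) ++transmitting;
            if (transmitting > 1)
                printf("t=%3d: collision (%d sensors)\n", t, transmitting);
        }
        return 0;
    }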
I think you need to look into a collision detection system (a la Ethernet). If you use time-based synchronization, you rely on the clocks of the console and sensors never drifting out of sync. This might be OK if they are connected to an external, reliable time reference, or if you go to the expense of putting a battery-backed RTC on each one.
Consider using all or part of an existing protocol, unless protocol design is an end in itself; apart from saving time, you reduce the probability that your protocol will have a race condition that causes rare, irreproducible bugs.
A lot of protocols for this situation have the sensors keeping quiet until the master specifically asks them for the current value. This makes it easy to avoid collisions, and it makes it easy for the master to request retransmissions if it thinks it has missed a packet, or if it is more interested in keeping up to date with one sensor than with others. This may even give you better performance than a system based on collision detection, especially if commands from the master are much shorter than sensor responses. If you end up with something like Alohanet (see http://en.wikipedia.org/wiki/ALOHAnet#The_ALOHA_protocol) you will find that the tradeoff between not transmitting very often and having too many collisions forces you to use less than 50% of the available bandwidth.
Is it possible to assign a unique address to each sensor?
In that case you can implement a master/slave protocol (like Modbus or similar), with all devices sharing the same communication link (a sketch of the poll loop follows the list):
Master is the only device which can initiate communication. It can poll each sensor separately (one by one), by broadcasting its address to all slaves.
Only the slave device which was addressed will reply.
If there is no response after a certain period of time (timeout), the device is not available and the master can poll the next device.
See also: List of automation protocols
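Here is that poll loop from the master's side. The serial send/receive call is a hypothetical stand-in (mocked here so the sketch runs); only the structure matters: address one slave, wait with a timeout, move on:

    #include <chrono>
    #include <cstdio>
    #include <optional>
    #include <vector>

    // Hypothetical stand-in for "broadcast the address, then wait for that
    // slave's reply or a timeout"; a real version would wrap the serial port.
    std::optional<int> pollSlave(int address, std::chrono::milliseconds timeout) {
        (void)timeout;
        return (address % 2 == 0) ? std::optional<int>(42) : std::nullopt;  // mock
    }

    int main() {
        const std::vector<int> slaves = {1, 2, 3, 4};
        for (int addr : slaves) {
            if (auto reply = pollSlave(addr, std::chrono::milliseconds(100)))
                printf("sensor %d responded: %d\n", addr, *reply);
            else
                printf("sensor %d timed out (not available)\n", addr);
        }
        return 0;
    }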
I worked with some ZigBee systems a few years back. Ours had only two sensors, so we just hard-coded them with different wait times and had them always respond to requests. However, we considered something along the lines of this:
Start out with an announcement from the console 'Hey everyone, let's make a network!'
Nodes all attempt to respond with something like 'I'm hardware address x, can I join?'
At first it's crazy, but with some random retry times, eventually the console responds to all nodes: 'Yes hardware address x, you can join. You are node #y and you will have a wait time of z milliseconds from the time you receive your request for data'
Then it should be easy: every time the console asks for data, the nodes respond in turn. Assuming transmission of all the data takes less time than the polling period, you're set. It's best not to acknowledge the individual messages: if the console fails to respond, the node will very likely retransmit just when another node is trying to send data, messing both of them up, and then it snowballs into complete failure. A small sketch of the slot timing follows.
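The slot arithmetic is the whole trick, so here it is spelled out (the numbers are illustrative, not from the original deployment):

    #include <cstdio>

    int main() {
        const int slotMs = 20;  // must exceed one node's worst-case transmission time
        const int nodes = 5;
        // Node #y replies y * slotMs after the console's request, so replies
        // never overlap; the polling period must be at least nodes * slotMs.
        for (int y = 0; y < nodes; ++y)
            printf("node %d replies %d ms after the poll\n", y, y * slotMs);
        printf("minimum polling period: %d ms\n", nodes * slotMs);
        return 0;
    }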

Cannot achieve full speed on Symmetrical Internet Connection

We are using a business Ethernet connection (3 Mbit upload, 3 Mbit download) and trying to understand issues with our measured bandwidth. When uploading a large file we sustain 340 KB/s; downloading, we also sustain 340 KB/s. However, when we run these transfers simultaneously, the two transfer speeds rise and fall erratically, with an average speed for both at around 250 KB/s. We're using a Hatteras HN404 CPi, and we've bypassed the router (plugged a machine directly into the Hatteras and set the NIC to full duplex).
Is this expected? Should a max upload interfere with a max download on this type of Internet connection?
Are you sure the bottleneck is your connection?
Do you also see this behavior when the simultaneous upload and download are occurring on different systems, or only when one system is handling both the upload and download?
If the problem goes away when independent machines are doing the work, the bottleneck is likely closer to the hard drive.
This sounds expected from my experience with lower end lines. On a home line, I've found that traffic shaping and changing buffer sizes can be a huge help.
TCP/IP without any unusual traffic shaping will favor the most aggressive traffic at the expense of everything else. In your case, this means the outgoing ACKs for the download will be delayed or maybe even dropped. See if your HN404 supports class-based queuing or something similar, and try it out.
Yes, it is expected. This is symptomatic of any case in which you have a throttled or capped connection. If you saturate your uplink, it will affect your downlink, and vice versa.
This is because your connection's rate limiting impacts the TCP acknowledgement packets (ACKs) and disrupts the normal "balance" of how these packets flow.
This is very thoroughly described on this page about Cable Modem Troubleshooting Tips, although it is not limited to cable modems:
"If you saturate your cable modem's upload cap with an upload, the ACK packets of your download will have to queue up waiting for a gap between the congested upload data packets. So your ACKs will be delayed getting back to the remote download server, and it will therefore believe you are on a very slow link, and slow down the transmission of further data to you."
So how do you avoid this? The best way is to implement some sort of traffic-shaping or QoS (Quality of Service) on individual sessions to limit them to a maximum throughput based on a percentage of your total available bandwidth.
For example, on my home network no outbound connection can utilize more than 67% (2/3) of my 192 Kbps uplink. That means any single outbound session can only utilize 128 Kbps, protecting my downlink speed by preventing the uplink from becoming saturated.
In most cases you are able to perform this kind of traffic shaping based on any available criteria, such as source IP, destination IP, protocol, port, time of day, etc.
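The mechanism behind most of those shapers is a token bucket. A minimal sketch of the idea (a real shaper queues packets rather than refusing the send, but the accounting is the same):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>

    class TokenBucket {
        double tokens_, rate_, burst_;  // bytes, bytes/sec, bytes
        std::chrono::steady_clock::time_point last_;
    public:
        TokenBucket(double rateBytesPerSec, double burstBytes)
            : tokens_(burstBytes), rate_(rateBytesPerSec), burst_(burstBytes),
              last_(std::chrono::steady_clock::now()) {}

        bool allow(double packetBytes) {
            auto now = std::chrono::steady_clock::now();
            std::chrono::duration<double> dt = now - last_;
            last_ = now;
            // Tokens accrue at the configured rate, capped at the burst size.
            tokens_ = std::min(burst_, tokens_ + rate_ * dt.count());
            if (tokens_ < packetBytes) return false;  // over the cap: hold it
            tokens_ -= packetBytes;
            return true;
        }
    };

    int main() {
        // 2/3 of a 192 Kbps uplink = 128 Kbps = 16000 bytes/s, as in the answer.
        TokenBucket uplink(16000.0, 1500.0);
        printf("first 1500-byte packet:   %s\n", uplink.allow(1500.0) ? "sent" : "held");
        printf("second packet right away: %s\n", uplink.allow(1500.0) ? "sent" : "held");
        return 0;
    }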
It appears that I was wrong about the simultaneous transfer speeds. The 250 KB/s up and down was miscalculated by the transfer program (it seems to have shown an inflated average). Apparently the business Ethernet (in this case an XO circuit provisioned by Speakeasy) only supports 3 Mbit total, not 3 Mbit each way (6 Mbit total). So if I transfer up and down at the same time, in theory I should only get 1.5 Mbit each way, or 187.5 KB/s at most (assuming zero overhead).
