GPS time synchronisation

I'm parsing NMEA GPS data from a device which sends timestamps without milliseconds. As far as I have heard, these devices use a specific trigger point when they send the sentence with the .000 timestamp - supposedly the $ of the GGA sentence.
So I'm parsing the GGA sentence and take the timestamp when the $ is received (I compensate for any further characters read in the same operation using the serial port baud rate).
From this information I calculate the offset for correcting the system time, but when I compare the result against some NTP servers, I get a constant difference of 250 ms - when I correct for this manually, I'm within a deviation of 20 ms, which is OK for my application.
Of course I'm not sure where this offset comes from, and whether it is somehow specific to the GPS mouse I'm using or to my system. Am I using the wrong $ character, or does someone know how exactly this should be handled? I know this question is very fuzzy, but any hints on what could cause this offset would be very helpful!
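For reference, the compensation I do roughly looks like this (a simplified sketch, not my actual code; the names and the time source are only illustrative):

#include <chrono>
#include <cstddef>

// Sketch: back-date the host timestamp of the '$' character to the moment it
// actually arrived on the wire. With 8N1 framing each byte occupies 10 bit
// times, so characters read in the same operation after the '$' are subtracted.
std::chrono::steady_clock::time_point
timestampOfDollar(std::chrono::steady_clock::time_point readTime,
                  std::size_t bytesAfterDollar, // characters read after the '$' in the same operation
                  long baudRate)                // e.g. 4800, the common NMEA default
{
    const double secondsPerByte = 10.0 / baudRate; // start bit + 8 data bits + stop bit
    const auto delay = std::chrono::duration_cast<std::chrono::steady_clock::duration>(
        std::chrono::duration<double>(bytesAfterDollar * secondsPerByte));
    return readTime - delay; // estimated arrival time of the '$'
}

The UTC time in the GGA sentence (e.g. 003539.000) then tells me which second this back-dated host time corresponds to, and the difference between the two is the correction I apply to the system clock.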
Here is some sample data from my device, with the $ character I take as the timing reference marked:
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003538.000,A,5046.8555,N,00606.2913,E,0.00,22.37,160209,,,A*58
-> $ <- GPGGA,003539.000,5046.8549,N,00606.2922,E,1,07,1.5,249.9,M,47.6,M,,0000*5C
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPGSV,3,1,10,09,77,107,17,12,63,243,30,05,51,249,16,14,26,315,20*7E
$GPGSV,3,2,10,30,24,246,25,17,23,045,22,15,15,170,16,22,14,274,24*7E
$GPGSV,3,3,10,04,08,092,22,18,07,243,22*74
$GPRMC,003539.000,A,5046.8549,N,00606.2922,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003540.000,5046.8536,N,00606.2935,E,1,07,1.5,249.0,M,47.6,M,,0000*55
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003540.000,A,5046.8536,N,00606.2935,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003541.000,5046.8521,N,00606.2948,E,1,07,1.5,247.8,M,47.6,M,,0000*5E

You have to take into account what is going on inside the GPS device:
it receives the satellite signals and calculates position, velocity and time,
it prepares the NMEA message and puts it into the serial port buffer,
it transmits the message.
GPS devices have relatively slow CPUs (compared to modern computers), so the latency you are observing is the result of the processing the device must do between generating the fix and the moment it begins transmitting the data.
Here is one analysis of latency in consumer-grade GPS receivers from 2005. There you can find latency measurements for specific NMEA sentences.
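If the latency turns out to be constant for a given receiver, you can calibrate it once (e.g. against NTP, as the questioner already did) and fold it into the correction - a minimal sketch, with the 250 ms taken from the measurement above and certainly device/firmware specific:

#include <chrono>

// Assumption: the ~250 ms is a fixed processing/transmission latency of this
// particular GPS mouse. Calibrate it once and apply it to every correction.
constexpr auto kDeviceLatency = std::chrono::milliseconds(250);

// rawOffset = (UTC time from the GGA sentence) - (back-dated host time of the '$')
std::chrono::system_clock::duration
correctedOffset(std::chrono::system_clock::duration rawOffset)
{
    // The fix described by the sentence is already kDeviceLatency old when the
    // '$' leaves the receiver; the sign depends on how you define the offset.
    return rawOffset + kDeviceLatency;
}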

Related

How to prevent SD card from creating write delays during logging?

I've been working on an Arduino (ATMega328p) prototype that has to log data during certain events. An LSM6DS33 sensor is used to generate 6 values (2 bytes each) at a sample rate of 104 Hz. This data needs to be logged for a period of 500-20000ms.
In my code, I generate an interrupt every 1/104 sec using Timer1. When this interrupt occurs, data is read from the sensor, calibrated and then written to an SD card. Normally, this is not an issue. Reading the data from the sensor takes ~3350us, calibrating ~5us and writing ~550us. This means a total cycle takes ~4000us, whereas 9615us is available.
In order to save power, I wish to lower the voltage to 3.3V. According to the Atmel datasheet, this also means that the clock frequency should be lowered to 8MHz. Assuming everything then runs twice as slowly, a measurement cycle would still be possible because ~8000us < 9615us.
After some testing (still at 5V / 16MHz), however, I noticed that every now and then a write cycle would take ~1880us instead of ~550us. I am using the SdFat library to write to and test SD cards (RawWrite example). The following results came in when I tested the card:
Start raw write of 100000 KB
Target rate: 100 KB/sec
Target time: 100 seconds
Min block write time: 1244 micros
Max block write time: 12324 micros
Avg block write time: 1247 micros
As seen, the average write time is fairly consistent, but sometimes a peak of 10x the average occurs! According to the writer of the library, this is because the SD card needs some erase cycles after a certain number of write cycles. This causes a write delay (src: post #18 & #22). This delay, however, pushes the time required for a cycle out of the available 9615us, because the total measurement cycle would then be 10672us.
The data I am trying to write is first put into a string using sprintf:
char buf[80] = ""; // must be large enough for six tab-separated long values plus the terminator
sprintf(buf,"%li\t%li\t%li\t%li\t%li\t%li",rawData[0],rawData[1],rawData[2],rawData[3],rawData[4],rawData[5]);
myLog.println(buf);
This writes the data to a .txt file. At my sample rate, only 21*104 = 2184 B/s would be needed. Lowering the speed of the RawWrite example to 6 KB/s makes the SD card write without the extended write delay. Yet my code still gets the delays, even though it writes less data.
My question is: how do I prevent this delay from occurring (if possible)? And if not possible, how can I work around it? It would help if I understood why exactly the delay occurs, because the interval is not always the same (every 10-15 writes).
Some additional info:
The sketch currently uses 69% of RAM (2 kB) for variables. Creating two 512-byte buffers - as suggested in the same forum thread - is not possible for me.
Initially, I used two strings; merging them into one did not noticeably affect the write speed.
I don't know how to work around the delay, but I get a more stable and faster write time when I write to a binary file instead of a .csv or .txt file.
The following link provides a fine script for writing data as a binary struct to the SD card (there are a few small typos in the example, but they are easily fixed):
https://hackingmajenkoblog.wordpress.com/2016/03/25/fast-efficient-data-storage-on-an-arduino/
This will not help with the time variation itself, but it minimizes the writing time and may thus make the issue negligible.
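To make the idea concrete, here is a rough sketch of binary logging with SdFat (pin number, file name and record layout are only illustrative, not taken from the linked post):

#include <SdFat.h>

// One fixed-size binary record per sample: 12 bytes instead of ~40 characters
// of text, and no sprintf overhead in the measurement cycle.
struct Sample {
    int16_t raw[6]; // the six 2-byte values from the LSM6DS33
};

SdFat sd;
SdFile logFile;

void setup() {
    sd.begin(10); // chip-select pin is board specific
    logFile.open("log.bin", O_WRITE | O_CREAT | O_TRUNC);
}

// Called once per measurement; SdFat buffers the data internally.
void logSample(const Sample& s) {
    logFile.write(&s, sizeof(s));
}

void loop() {
    // fill a Sample from the sensor and call logSample() here
}

Because every record has the same size, reading the file back on the PC is also trivial: sample n starts at byte n * sizeof(Sample).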

Getting an ETA of a packet in OMNeT++

I embedded the OMNeT++ simulation kernel into my application and I'm using the INET framework for my simulation. The problem I'm having is that I need to estimate at which time a packet is going to arrive at its destination.
More specifically: there are two EtherHosts named H0 and H1 (inet.node.ethernet.EtherHost) and one EtherSwitch named switch (inet.node.ethernet.EtherSwitch), connected like this:
H0 <-> C <-> switch <-> C <-> H1
C denotes a DatarateChannel with datarate = 100Mbps and delay = 0.1us. When the EtherAppCli in H0 sends an EtherAppReq to H1, I need an ETA for the EtherAppReq packet during its transfer.
My first thought was to always get the encapsulating packet of the EtherAppReq, which is added in the EtherLLC and EtherMAC modules, but this is not as simple as I thought. I would need to change all the encapsulation functions in all the lower layers to always get a pointer to the encapsulating packet, or am I wrong?
Or is there another way to get an ETA of a packet mid-transfer?
Edit: For my purpose I only need the arrival time at the next module, so if the packet is in the mac module of H0 I need the arrival time at the mac layer of the switch (so no multi-module hops). Like when you have a cMessage, you can call getArrivalTime() on the message and get an estimate of the arrival time, if I'm not mistaken.
Thank you very much for the help in advance!
It is impossible to obtain the ETA of a packet in advance. A packet's arrival time depends on many factors (as in a real network), for example: the current number of packets in the host's and switch's queues, the processing time in the switch, the number of CPUs in the switch, the processing time of the MAC layer in a host, etc. Therefore, at the moment of sending a packet, neither the host nor the simulation environment knows when this packet will arrive at the destination host. That is why we build a model and run a simulation: to measure this time and learn how various factors influence it.
By the way: there is no H1 in your figure.

Real-time workaround on Windows for fixed sampling time

I am trying to collect data off an accelerometer sensor. I have an Arduino doing the analog to digital conversion of the signal and sending it through a serial port to MATLAB on Windows.
I send a reading every 5ms from the Arduino through the serial port. I am saving that data in a vector using MATLAB's serial read, together with the time at which it was read using the clock function.
If I plot the column of the vector where I save the second at which each value was read, I get a non-linear curve, and when I look at the difference between one read and the next, I see that it varies slightly.
Is there any way to save the data in real time with a fixed sampling time?
Note: I am using a 250000 baud rate.
MATLAB code:
%%%%% Initialisation %%%%%
clear all
clc
format shortg
cnt = 1;%File name changer
sw = 1;%switch: 0 we add to current vector and 1 to start new vector
%%%%% Initialisation %%%%%
%%%%% Communication %%%%%
arduino=serial('COM7','BaudRate',250000);
fopen(arduino);
%%%%% Communication %%%%%
%%%%% Reading from Serial and Writing to .mat file%%%%%
while true,
    if sw == 0,
        if (length(Vib(:,1))==1000),% XXXX Samples in XX minutes
            filename = sprintf('C:/Directory/%d_VibrationReading.mat',cnt);
            save (filename,'Vib');
            clear Vib
            cnt = cnt+1;
            sw = 1;
        end
    end
    scan = fscanf(arduino,'%f');
    if isfloat(scan) && length(scan(:,1))==6,% Change length for validation
        vib = scan';
        if sw == 1,
            Vib = [vib clock];
            sw = 0;
        else
            Vib = [Vib;vib clock];
        end
    end
end
%%%%% Reading from Serial and Writing to .mat file%%%%%
% Close Arduino Serial Port
fclose(arduino);
Image 1 shows the data received through serial (each row corresponding to one serial read)
Image 2 shows that data saved with the clock
I know that my answer does not contain a quick and easy solution; instead it primarily gives advice on how to redesign your system. I worked with real-time systems for several years and saw it done wrong too many times. It might be possible to just "fix" your current communication pattern by tweaking the performance, but I am convinced you will never get reliable time information that way.
I will answer this from a general system design perspective, instead of trying to fix your code. Where I see the problems:
In general, it is a bad idea to append time information on the receiving PC. Whenever the sensor is capable and has a clock, append the time information on the sensor system itself. This allows for an accurate relative timing between the measurements. Some clock adjustment might be necessary when the clock on the sensor is not set properly, but that is just a constant offset.
Switch from ASCII-encoded data to binary data. With your sample rate and baud rate set, you only have 50 bytes for each message.
Write a robust receiver. Just dropping messages you "don't understand" is not a good idea. Whenever the buffer is full, you might receive multiple messages unless you use a proper terminator.
Use preallocation. You know how large the batches you want to write are.
A simple solution for a message:
2 bytes - clock milliseconds
4 bytes - unix timestamp of measurement
For each sensor
2 bytes - int16 sensor data
2 bytes - Terminator, constant value. Use a value which is outside the range for all previous integers, e.g. intmax
This message format should theoretically allow you to use 21 sensors. Now to the receiving part:
To get a first version running with good performance, call fread(serial) with large batches of data (the size parameter) and dump all readings into a large cell array. Something like:
C = cell(1000,1);
% seek until you hit a terminator
while not(terminator == fread(arduino,1))
end
% then read the data in large batches
for ix = 1:numel(C)
    C{ix} = fread(arduino, 1000, 'int16');
end
fclose(arduino);
Once you have read the data, concatenate it into a single vector (C = [C{:}];) and parse it in post-processing. If you manage the performance, you may later return to on-the-fly processing, but I recommend starting this way to get the system established.
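For completeness, a sketch of how the sending side could pack such a message on the Arduino (the layout follows the format proposed above; the function name and terminator value are illustrative):

#include <Arduino.h>

// Constant terminator outside the range of the sensor data, as suggested above.
const int16_t TERMINATOR = 0x7FFF; // intmax for int16

// Pack one measurement: 2 bytes milliseconds + 4 bytes unix time
// + nSensors * 2 bytes of data + 2 bytes terminator.
void sendMeasurement(uint16_t millisPart, uint32_t unixTime,
                     const int16_t* data, uint8_t nSensors) {
    Serial.write((const uint8_t*)&millisPart, sizeof(millisPart));
    Serial.write((const uint8_t*)&unixTime, sizeof(unixTime));
    Serial.write((const uint8_t*)data, nSensors * sizeof(int16_t));
    Serial.write((const uint8_t*)&TERMINATOR, sizeof(TERMINATOR));
}

The AVR is little-endian; check that the ByteOrder of the MATLAB serial object matches, or swap the bytes in post-processing.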

How to measure channel busy time in Veins?

Is there any function that returns the channel busy time? I use Veins 2.2 with the 802.11p MAC and decider. If there is no such function, how can the channel busy time be measured?
Channel busy time in Veins 2.2 is measured at two points: in the Phy layer and in the Mac layer. Both record a corresponding scalar value at the end of the simulation. Note that there is a difference in meaning between the two:
Mac busy time is (in almost all cases) what you want to record: it records how many seconds the Mac treated the channel as busy. Divide the scalar totalBusyTime by the total simulation time and you know the fraction of time that the Mac could not send. For example, a totalBusyTime of 12 s in a 60 s simulation means the Mac saw the channel as busy 20% of the time.
Phy busy time is calculated very differently: its value busyTime increases for each frame received above the sensitivity threshold. To give an example, if exactly 1 frame is being received at any given time during the simulation, the value of this scalar would be 100%. If 4 frames are interfering for the whole simulation, the value of this scalar would be 400% (which is different from the Mac busy time you probably want).

Why wait SIFS time before sending ACK?

A question about the MAC protocol of 802.11 Wi-Fi.
We have learned that when a station has received the data it waits for a SIFS time and then sends the ACK. The reason always mentioned when searching online is to give ACK packets a higher priority. This is understandable, since a station first has to wait a DIFS time when it wants to send normal data (and DIFS is longer than SIFS).
But why wait at all? Why not immediately send the ACK? The station knows the data has arrived and the CRC is correct, so why wait?
It is theoretically possible to know that the CRC is correct at the exact end of the received data from the wire, but in practice you need to accumulate all the samples of the last block in order to run the IFFT, deconvolution and FEC, and only then, at the very end, after finally getting the input data out of the waveform, do you know that the CRC is correct. Also, you sometimes need to turn on transmit circuitry to send the ACK, which can hamper receive performance. If each step in the processing chain were instantaneous, and if the transmit circuitry definitely didn't interfere with the receive circuitry, and if there were no lead time necessary for building the waveform of the ACK, it would be possible to send the ACK immediately after getting the last bit of the waveform. But while each element in this chain takes some deterministic time, it is not instantaneous. SIFS gives the receiver time to get the data from the PHY, verify it, and send the response.
Disclaimer: I'm more familiar with Homeplug than 802.11.
It is like that because Distributed Coordination Function (DCF) and Point Coordination Function (PCF) mode can coexist within one cell. That is, a base station may use polling while, at the same time, the cell uses distributed coordination with CSMA/CA.
So during SIFS, control frames or the next fragment may be sent. During PIFS, PCF frames may be sent, and during DIFS, DCF frames may be sent. During SIFS and PIFS, PCF can work its magic.
Even though not all base stations support PCF, all stations must wait, since some may support it.
Update:
The way I understand this now is that during SIFS the station may send an RTS, CTS or ACK and still have enough time to switch back to receiving mode before the sender starts to transmit. If that's correct, it will send the ACK during SIFS, then change to receive mode and wait until SIFS has elapsed, at which point the transmitter will start sending.
Also, PCF is governed by PIFS, which comes after SIFS and before DIFS and is therefore not relevant for this discussion (my mistake). That is, SIFS < PIFS < DIFS < EIFS.
Sources: This PDF (page 8) and Computer Networks by Andrew S. Tanenbaum
SIFS = RTT (based on PHY transmission rate) + frame processing delay at the receiver (PHY processing delay + MAC processing delay) + frame processing delay (for composing the CTS/ACK response) + RF tuner delay (change from RX to TX)
At the transmitter side, after the last PHY symbol (of an RTS, for example), there is the time required to change to RX mode (at the RF stage). So I would see SIFS as an RX-side calculation rather than a TX-side one.
I can't say for sure but it sounds like an optimization strategy similar to IP. If you don't require an ACK for every data packet, it makes sense to hold off for a bit so that, if more data packets arrive, you can acknowledge them all with a single ACK.
Example: client sends 400 packets really fast to the server. Rather than the server sending back 400 ACKs, it can simply wait until the client takes a breather before sending a single ACK back. Combined with the likelihood that the client will take a breather even under heavy load (it has to as its unacknowledged-packets buffer fills up), this would be workable.
This is possible in systems where ACK(n) means "I've received everything up to and including packet #n".
You'll get better performance and less traffic by using such a strategy. As long as the wait-before-sending-ack time on the receiver is less than the retransmit-if-no-ack-before time on the sender (taking transmission delays into account), there should be no problem.
Quick crash-course on 802.11:
802.11 is essentially a giant system of timers. The most common implementations of 802.11 use the distributed coordination function, DCF. The DCF allows nodes to come in and out of range of a radio channel being used for 802.11 and to coordinate, in a distributed fashion, who should be sending and receiving data (ignoring hidden and exposed node problems for this discussion).
Before any node can begin sending data on the channel, all nodes must wait a DIFS period during which the channel is determined to be idle; if it is idle for a whole DIFS period, the first node to grab the channel begins transmitting. In standard 802.11, i.e. non-802.11e and non-802.11n implementations, every single data packet that gets transmitted needs to be acknowledged with an acknowledgement frame, regardless of the upper-layer protocol being used. After a data packet is sent, a SIFS period must expire; once SIFS expires, control frames destined for the node that has "taken" control of the channel may be sent, in this instance an acknowledgement frame. SIFS allows the node that sent the data packet to switch from transmitting to receiving mode.
If a packet does get lost and no ACK is received after the SIFS/ACK timeout, exponential back-off is invoked. Exponential back-off, a.k.a. the contention window (CW), begins at a value CWmin; in some Linux implementations this is 15 slot times, where the slot time varies depending on the 802.11 variant being used. A random back-off is then chosen between 0 and the current CW. If the current packet was lost, the CW roughly doubles (15 to 31) and a new random back-off is chosen from that larger range. With every consecutive loss the CW doubles again, up to a limit of 1023. Once a packet is received successfully the CW is reset to CWmin.
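To illustrate the contention-window behaviour just described, here is a small sketch (simplified; real implementations also track retry limits and per-queue parameters):

#include <cstdlib>

// Sketch of DCF exponential back-off: CW starts at CWmin, grows as
// 2*(CW+1)-1 on every failed transmission (15, 31, 63, ..., 1023) and
// resets to CWmin after a success.
const int CW_MIN = 15;   // slot times
const int CW_MAX = 1023;

int cw = CW_MIN;

// Uniform random back-off in [0, cw] slot times.
int nextBackoffSlots() {
    return std::rand() % (cw + 1);
}

void onTransmissionFailed() {
    int doubled = 2 * (cw + 1) - 1;
    cw = (doubled > CW_MAX) ? CW_MAX : doubled;
}

void onTransmissionSucceeded() {
    cw = CW_MIN;
}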
Regarding 802.11n / 802.11e:
Every data packet still needs to be acknowledged, but when using 802.11e (incorporated into 802.11n) multiple data packets can be aggregated in two different ways, A-MSDU or A-MPDU. An A-MSDU is a jumbo frame with one checksum for the entire aggregated packet; within it are many sub-frames containing the individual data frames to be sent. If there is any error in the A-MSDU frame and it needs to be retransmitted, then every sub-frame has to be resent. With A-MPDU, however, each sub-frame has a small header and its own checksum, so any sub-frame with an error can be retransmitted by itself (or within another aggregated frame) the next time the sending node gains the channel. With these aggregated sending schemes there is the notion of the block ACK. The block ACK contains a bitmap of the frames, starting from a given sequence number, that were just sent in the aggregated packet and received correctly or incorrectly. Aggregated frame sending greatly improves throughput, as more data can be sent per channel acquisition by a sending node, and it also allows out-of-order packet sending. However, out-of-order packet sending considerably complicates the 802.11 MAC layer.
SIFS = D + M + Rx/Tx
where:
D = receiver delay (RF delay) and decoding of the physical layer convergence procedure (PLCP) preamble/header
M = MAC processing delay
Rx/Tx = transceiver turnaround time
None of the above delays can be avoided, so the receiver has to wait a SIFS time before sending the acknowledgement.
