I embedded the OMNeT++ simulation kernel into my application and I'm using the INET framework for my simulation. The problem I'm having is that I need to estimate at which time a packet is going to arrive at its destination.
To be more specific: there are two EtherHosts named H0 and H1 (inet.node.ethernet.EtherHost) and one EtherSwitch named switch (inet.node.ethernet.EtherSwitch); the three are connected like this:
H0 <-> C <-> switch <-> C <-> H1
C denotes a DatarateChannel with datarate = 100Mbps and delay = 0.1us. When the EtherAppCli in H0 sends an EtherAppReq to H1, I need to get an ETA for the EtherAppReq packet during the transfer of said packet.
My first thought was to always get the encapsulating packet of the EtherAppReq, which is added in the EtherLLC and EtherMAC modules, but this is not as simple as I thought. I would need to change all the encapsulation functions in all the lower layers to always get a pointer to the encapsulating packet, or am I wrong?
Or is there another way to get an ETA for a packet mid-transfer?
Edit: For my purpose I only need the arrival time at the next module, so if the packet is in the MAC module of H0 I need the arrival time at the MAC layer in the switch (so no multi-module hops). It's like when you have a cMessage: you can call getArrivalTime() on the message and get an estimate of the arrival time, if I'm not mistaken.
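To make this concrete, here is roughly the kind of single-hop estimate I have in mind (just a sketch, not code from my model; the gate handling and the use of cDatarateChannel directly are simplifications):

    #include <algorithm>
    #include <omnetpp.h>
    using namespace omnetpp;

    // Estimate when pkt would arrive at the module on the other end of
    // outGate, looking only at the directly attached DatarateChannel.
    simtime_t estimateNextHopArrival(cPacket *pkt, cGate *outGate)
    {
        cDatarateChannel *ch =
            check_and_cast<cDatarateChannel *>(outGate->getTransmissionChannel());

        // Serialization time of this packet on the link
        simtime_t txDuration = pkt->getBitLength() / ch->getDatarate();

        // If the channel is still busy with an earlier frame, transmission
        // can only start once it becomes free again.
        simtime_t txStart = std::max(simTime(), ch->getTransmissionFinishTime());

        // Last bit arrives after serialization plus propagation delay.
        return txStart + txDuration + ch->getDelay();
    }

This ignores queueing in the MAC of H0 and in the switch, of course.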
Thank you very much for the help in advance!
It is impossible to obtain the ETA of a packet in advance. The arrival time of a packet depends on many factors (as in a real network), for example: the current number of packets in the host's and switch's queues, the processing time in the switch, the number of CPUs in the switch, the processing time of the MAC layer in a host, etc. Therefore, at the moment of sending a packet, neither the host nor the simulation environment knows when this packet will arrive at the destination host. So we prepare a model and run a simulation in order to measure this time and learn how various factors influence it.
By the way: there is no H1 in your figure.
Question about the MAC protocol of 802.11 Wi-Fi.
We have learned that when a station has received the data it waits for SIFS time and then sends its packet. When searching online, the reason that is always mentioned is to give ACK packets a higher priority. This is understandable, since a station first has to wait DIFS time when it wants to send normal data (and DIFS is longer than SIFS).
But why wait at all? Why not immediately send the ACK? The station knows the data has arrived and the CRC is correct, so why wait?
It is theoretically possible to know that the CRC is correct at the exact end of the received data from the wire, but in practice you need to accumulate all the samples in the last block in order to run the IFFT, deconvolution and FEC, and only then, at the very end, after finally getting the input data out of the waveform, do you know that the CRC is correct. Also, you sometimes need to turn on transmit circuitry to send the ACK, which can hamper receive performance. If each step in the processing chain were instantaneous, if the transmit circuitry definitely didn't interfere with the receive circuitry, and if there were no lead time necessary for building the waveform for the ACK, it would be possible to send the ACK immediately after getting the last bit of the waveform. But while each element in this chain takes some deterministic time, it is not instantaneous. SIFS gives the receiver time to get the data from the PHY, verify it, and send the response.
Disclaimer: I'm more familiar with Homeplug than 802.11.
It is like that because the Distributed Coordination Function (DCF) and Point Coordination Function (PCF) modes can coexist within one cell. That is, a base station may use polling while at the same time the cell can use distributed coordination with CSMA/CA.
So during SIFS, control frames or the next fragment may be sent. During PIFS, PCF frames may be sent, and during DIFS, DCF frames may be sent. During SIFS and PIFS, PCF can work its magic.
Even though not all base stations support PCF, all stations must wait, since some may support it.
Update:
The way I understand this now is that during SIFS the station may send an RTS, CTS, or ACK and have enough time to switch back to receiving mode before the sender starts to transmit. If that's correct, it will send the ACK during SIFS, then change to receive mode and wait until SIFS elapses. When SIFS has elapsed, the transmitter will start sending.
Also, PCF is controlled by PIFS, which comes after SIFS and before DIFS, and is therefore not relevant for this discussion (my mistake). That is, SIFS < PIFS < DIFS < EIFS.
Sources: This PDF (page 8) and Computer Networks by Andrew S. Tanenbaum
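For completeness, the gaps are related in a simple way: PIFS and DIFS are SIFS plus one and two slot times respectively. A tiny illustration with 802.11b numbers (other PHYs use different SIFS and slot values):

    #include <cstdio>

    int main()
    {
        const double slotTime = 20e-6;   // 802.11b slot time: 20 us
        const double sifs     = 10e-6;   // 802.11b SIFS: 10 us

        const double pifs = sifs + slotTime;       // 30 us, used by PCF
        const double difs = sifs + 2 * slotTime;   // 50 us, used by DCF

        std::printf("SIFS=%g us, PIFS=%g us, DIFS=%g us\n",
                    sifs * 1e6, pifs * 1e6, difs * 1e6);
        return 0;
    }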
SIFS = RTT (based on PHY transmission rate) + frame processing delay at the receiver (PHY processing delay + MAC processing delay) + frame processing delay for composing the response (CTS/ACK) + RF tuner delay (change from RX to TX)
At the transmitter side, after the last PHY symbol (of an RTS, for example), SIFS covers the time required to change to RX mode (at RF). So I would see SIFS as an RX-side calculation rather than a TX-side one.
I can't say for sure, but it sounds like an optimization strategy similar to IP. If you don't require an ACK for every data packet, it makes sense to hold off for a bit so that, if more data packets arrive, you can acknowledge them all with a single ACK.
Example: client sends 400 packets really fast to the server. Rather than the server sending back 400 ACKs, it can simply wait until the client takes a breather before sending a single ACK back. Combined with the likelihood that the client will take a breather even under heavy load (it has to as its unacknowledged-packets buffer fills up), this would be workable.
This is possible in systems where ACK(n) means "I've received everything up to and including packet #n".
You'll get better performance and less traffic by using such a strategy. As long as the wait-before-sending-ack time on the receiver is less than the retransmit-if-no-ack-before time on the sender (taking transmission delays into account), there should be no problem.
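As a sketch of what such a receiver-side rule could look like (the names and the 50 ms timer are made up for illustration, not taken from any particular protocol stack):

    #include <cstdint>

    // Cumulative-ACK receiver: ACK(n) means "everything up to and including
    // packet n was received".
    struct Receiver {
        uint32_t highestInOrder = 0;   // last packet received without a gap
        double   ackDelay = 0.050;     // wait up to 50 ms before acking; must be
                                       // shorter than the sender's retransmit timeout
        bool     ackPending = false;
        double   ackDeadline = 0.0;

        void onData(uint32_t seq, double now) {
            if (seq == highestInOrder + 1)
                highestInOrder = seq;      // in-order: advance the cumulative point
            if (!ackPending) {             // start the delayed-ACK timer
                ackPending = true;
                ackDeadline = now + ackDelay;
            }
        }

        // Poll periodically; returns true when a single ACK(highestInOrder)
        // should be sent, covering every packet received so far.
        bool shouldSendAck(double now) {
            if (ackPending && now >= ackDeadline) {
                ackPending = false;
                return true;
            }
            return false;
        }
    };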
Quick crash-course on 802.11:
802.11 is essentially a giant system of timers. The most common implementations of 802.11 use the distributed coordination function, DCF. The DCF allows nodes to come in and out of the range of a radio channel being used for 802.11 and to coordinate in a distributed fashion who should be sending and receiving data (ignoring hidden and exposed node problems for this discussion). Before any node can begin sending data on the channel, all of them must wait a period of DIFS during which the channel is determined to be idle; if it is idle for a DIFS period, the first node to grab the channel begins transmitting.

In standard 802.11, i.e. non-802.11e and non-802.11n implementations, every single data packet that gets transmitted needs to be acknowledged by a physical layer (PHY) acknowledgment packet, regardless of the upper-layer protocol being used. After a data packet is sent, a SIFS period needs to expire; once SIFS expires, control frames destined for the node that has "taken" control of the channel may be sent, and in this instance an acknowledgment frame is transmitted. SIFS allows the node that sent the data packet to switch from transmitting to receiving mode.

If a packet does get lost and no ACK is received after the SIFS/ACK timeout occurs, exponential back-off is invoked. Exponential back-off, a.k.a. the contention window (CW), begins at a value CWmin; in some Linux implementations this is 15 slot times, where a slot time varies depending on the 802.11 protocol being used. The CW value is then chosen at random from 1 to whatever upper limit has been calculated for the CW. If the current packet was lost, the CW is incremented from 15 to 30, and then a random value is chosen between 1 and 30. Every time there is a consecutive loss the CW doubles, up to 1023, at which point it hits its limit. Once a packet is received successfully, the CW is reset back to CWmin.
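To make the back-off part concrete, here is a minimal sketch of the contention-window behaviour described above (CWmin = 15 and CWmax = 1023 as mentioned; channel sensing and slot timing are left out):

    #include <algorithm>
    #include <random>

    class Backoff {
        int cwMin = 15, cwMax = 1023;
        int cw = 15;
        std::mt19937 rng{std::random_device{}()};

    public:
        // Number of slots to wait before the next transmission attempt.
        int drawBackoffSlots() {
            std::uniform_int_distribution<int> dist(1, cw);
            return dist(rng);
        }

        void onTransmissionFailed() {        // no ACK before the timeout
            cw = std::min(cw * 2, cwMax);    // double the contention window
        }

        void onTransmissionSucceeded() {     // ACK received
            cw = cwMin;                      // reset to CWmin
        }
    };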
In regards to 802.11n / 802.11e:
Every data packet still needs to be acknowledged, but when using 802.11e (incorporated into 802.11n) multiple data packets can be aggregated together in two different ways, A-MSDU or A-MPDU. An A-MSDU is a jumbo frame that has one checksum for the entire aggregated packet being sent; within it are many sub-frames that contain each of the data frames that needed to be sent. If there is any error in the A-MSDU frame and it needs to be retransmitted, then every sub-frame has to be resent. However, when using A-MPDU, each sub-frame has a small header and checksum that allow any sub-frame with an error in it to be retransmitted by itself/within another aggregated frame the next time the sending node gains the channel. With these aggregated-frame sending schemes comes the notion of the block-ack. The block-ack contains a bitmap of the frames, from a starting sequence number, that were just sent in the aggregated packet and received correctly or incorrectly. Using aggregated frame sending greatly improves throughput, as more data can be sent per channel acquisition by a sending node, and it also allows out-of-order packet sending. However, out-of-order packet sending greatly complicates the 802.11 MAC layer.
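As an illustration of the block-ack idea (the struct and function below are made up for this answer, not the API of any driver), the receiver essentially reports a per-sub-frame bitmap relative to a starting sequence number:

    #include <bitset>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct BlockAck {
        uint16_t startSeq;
        std::bitset<64> received;   // 64-frame window, as in 802.11n block-acks
    };

    // Each bit says whether the sub-frame at (startSeq + i) passed its own
    // checksum, so only the failed sub-frames need to be retransmitted.
    BlockAck buildBlockAck(uint16_t startSeq, const std::vector<bool> &subframeOk)
    {
        BlockAck ba{startSeq, {}};
        for (std::size_t i = 0; i < subframeOk.size() && i < 64; ++i)
            if (subframeOk[i])
                ba.received.set(i);
        return ba;
    }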
SIFS = D + M + Rx/Tx
where:
D = receiver delay (RF delay) and decoding of the physical layer convergence procedure (PLCP) preamble/header
M = MAC processing delay
Rx/Tx = transceiver turnaround time
None of the above delays can be avoided, so the station has to wait SIFS time before sending the acknowledgement.
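A small worked example of that sum (the component values below are placeholders I made up; only the structure SIFS = D + M + Rx/Tx is the point):

    #include <cstdio>

    int main()
    {
        const double D    = 3e-6;   // RF receive + PLCP decode delay (assumed)
        const double M    = 2e-6;   // MAC processing delay (assumed)
        const double RxTx = 5e-6;   // transceiver RX->TX turnaround (assumed)

        const double sifs = D + M + RxTx;   // SIFS = D + M + Rx/Tx
        std::printf("required SIFS >= %.1f us\n", sifs * 1e6);
        return 0;
    }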
I'm parsing NMEA GPS data from a device which sends timestamps without milliseconds. As far as I've heard, these devices use a specific trigger point for when they send the sentence with the .000 timestamp - afaik the $ in the GGA sentence.
So I'm parsing the GGA sentence and take the timestamp when the $ is received (I compensate for any further characters being read in the same operation using the serial port baud rate).
From this information I calculate the offset for correcting the system time, but when I compare the resulting time against some NTP servers, I get a constant difference of 250 ms - when I correct for this manually, I'm within a deviation of 20 ms, which is OK for my application.
But of course I'm not sure where this offset comes from, and whether it is somehow specific to the GPS mouse I'm using or to my system. Am I using the wrong $ character, or does someone know how exactly this should be handled? I know this question is very fuzzy, but any hints on what could cause this offset would be very helpful!
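For reference, this is roughly how I do the compensation (a simplified sketch, not my actual code; the chunk-based read and 8N1 framing are assumptions):

    #include <chrono>
    #include <string>

    // When a read from the serial port returns a chunk containing the '$' of
    // a GGA sentence, subtract the transmission time of the characters that
    // were received after that '$' in the same chunk.
    std::chrono::steady_clock::time_point
    estimateDollarTime(const std::string &chunk,
                       std::chrono::steady_clock::time_point readEnd,
                       int baudRate)
    {
        const double secondsPerChar = 10.0 / baudRate;   // 8N1: 10 bits per character

        std::size_t pos = chunk.find("$GPGGA");
        if (pos == std::string::npos)
            return readEnd;                              // no GGA start in this chunk

        std::size_t charsAfterDollar = chunk.size() - pos - 1;
        std::chrono::duration<double> correction(charsAfterDollar * secondsPerChar);
        return readEnd - std::chrono::duration_cast<std::chrono::steady_clock::duration>(correction);
    }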
Here is some sample data from my device, with the $ character I will take as the time offset marked:
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003538.000,A,5046.8555,N,00606.2913,E,0.00,22.37,160209,,,A*58
-> $ <- GPGGA,003539.000,5046.8549,N,00606.2922,E,1,07,1.5,249.9,M,47.6,M,,0000*5C
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPGSV,3,1,10,09,77,107,17,12,63,243,30,05,51,249,16,14,26,315,20*7E
$GPGSV,3,2,10,30,24,246,25,17,23,045,22,15,15,170,16,22,14,274,24*7E
$GPGSV,3,3,10,04,08,092,22,18,07,243,22*74
$GPRMC,003539.000,A,5046.8549,N,00606.2922,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003540.000,5046.8536,N,00606.2935,E,1,07,1.5,249.0,M,47.6,M,,0000*55
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003540.000,A,5046.8536,N,00606.2935,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003541.000,5046.8521,N,00606.2948,E,1,07,1.5,247.8,M,47.6,M,,0000*5E
You have to take into account the things that are going on in the GPS device:
receive the satellite signal and calculate position, velocity and time
prepare the NMEA message and put it into the serial port buffer
transmit the message
GPS devices have relatively slow CPUs (compared to modern computers), so the latency you are observing is the result of the processing the device must do between generating the position and the moment it begins transmitting the data.
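A quick back-of-the-envelope calculation shows the timescales of the last step alone (4800 baud and 8N1 framing are assumptions about the port settings):

    #include <cstdio>

    int main()
    {
        const int    baud       = 4800;
        const double secPerChar = 10.0 / baud;   // 10 bits per character on the wire
        const int    ggaLength  = 72;            // roughly one GGA sentence

        std::printf("one character: %.2f ms\n", secPerChar * 1e3);
        std::printf("one GGA sentence: ~%.0f ms\n", ggaLength * secPerChar * 1e3);
        return 0;
    }

At roughly 2 ms per character, the serial transfer alone is on the order of 150 ms per sentence, so together with the fix computation and message formatting on a slow CPU, a constant offset of a couple of hundred milliseconds is plausible.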
Here is one analysis of latency in consumer-grade GPS receivers from 2005. There you can find measurements of latency for specific NMEA sentences.