iBeacon is receiving abnormal RSSI signal

I developed an iBeacon-based iOS app, but the RSSI it receives during beacon ranging jumps between 0 and a normal value (there seems to be a pattern: a normal RSSI reading shows up once every 4-6 zero readings).
I am trying to make my iPhone respond in real time to the received RSSI, but I can't do anything useful with a signal this unstable. I don't know whether this is caused by the hardware, the battery, or something else. Any idea is appreciated.

When ranging for beacons on iOS, if no beacon packets have been received in the last second (but beacon packets have been received in the last five seconds), the beacon will be included in the list of CLBeacon objects in the callback, but it will be given an rssi value of 0.
You can confirm this by turning off a beacon. You will notice that it continues to appear in ranging callbacks for about 5 seconds, but its rssi will always be zero. After those five seconds, it is removed from the list.
If you are seeing it bounce back and forth between 0 and a normal value, this indicates that beacon packets are only being received every few seconds. The most likely cause is a beacon transmitter that rarely sends packets (say every 3 to 5 seconds). Some manufacturers sell beacons that do this to conserve battery life.
For best ranging performance, turn up the advertising rate to 10 Hz if your beacon manufacturer allows it, and also increase the transmitter power to maximum. This will use much more battery but will eliminate the dropouts you are seeing.
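If you still need to react to RSSI in near real time in the meantime, a common workaround is to treat an RSSI of 0 as "no data for this interval" rather than as a real measurement, and to smooth the remaining readings. The sketch below is a minimal, hypothetical illustration of that idea (plain Java, with made-up names; on iOS you would apply the same logic to the rssi values of the CLBeacon objects delivered to the ranging callback):

// Hypothetical helper: ignores rssi == 0 ("no packet received in the last
// second") and smooths the remaining readings with an exponential average.
public class RssiSmoother {
    private double smoothed = Double.NaN;    // NaN until the first non-zero reading
    private final double alpha;              // smoothing factor, e.g. 0.3

    public RssiSmoother(double alpha) {
        this.alpha = alpha;
    }

    /** Feed one ranging callback's RSSI value; returns the current estimate. */
    public double update(int rssi) {
        if (rssi == 0) {
            return smoothed;                 // keep the previous estimate
        }
        if (Double.isNaN(smoothed)) {
            smoothed = rssi;                 // first real reading
        } else {
            smoothed = alpha * rssi + (1 - alpha) * smoothed;
        }
        return smoothed;
    }

    public static void main(String[] args) {
        RssiSmoother smoother = new RssiSmoother(0.3);
        int[] readings = {-60, 0, 0, -62, 0, -58};   // 0 = no packet this interval
        for (int rssi : readings) {
            System.out.println(smoother.update(rssi));
        }
    }
}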

Related

Android Beacon Library - Long Search Time

Update: I would still like advice, as the start-up time is still slow, though it has been reduced to about 10 seconds from 2 minutes.
As far as I understand, the default scan cycle for this library is around 1.1 seconds. However, despite setting my beacon broadcast frequency to 10 Hz (iBeacon) and 'didDetermineStateForRegion' reporting that beacons have come into range, it takes about 1 minute for 'didEnter/ExitRegion' and the range notifier to alert me that a beacon is in range / give me a list of beacons that are in range. Once it starts giving me alerts of beacons coming into range, the response is great, at less than 0.5 seconds for a beacon being turned on/off.
What are the possible reasons for and solutions to this issue? I am trying to create an iBeacon attendance app. Many thanks.
*I also tried advice given in other posts, such as turning off Wi-Fi to minimise interference.
Clement
The time it takes to detect a beacon in the background is largely determined by how the phone scans in low power mode, combined with the beacon transmitter's advertising rate.
Android devices generally put BLE scans into low power mode when the screen is off. The Android Beacon Library does so explicitly when using BackgroundPowerSaver and the OS enforces this anyway on newer Android versions.
Low power mode means the BLE chip is commanded to use a duty cycle when scans are on. On open source Android, this is set to a 5120 ms interval with only 512 ms window of active scanning -- a 10% duty cycle. This saves battery by about 90% vs constant scanning, but it delays detections.
private static final int SCAN_MODE_LOW_POWER_WINDOW_MS = 512;
private static final int SCAN_MODE_LOW_POWER_INTERVAL_MS = 5120;
private static final int SCAN_MODE_BALANCED_WINDOW_MS = 1024;
private static final int SCAN_MODE_BALANCED_INTERVAL_MS = 4096;
private static final int SCAN_MODE_LOW_LATENCY_WINDOW_MS = 4096;
private static final int SCAN_MODE_LOW_LATENCY_INTERVAL_MS = 4096;
See here at AOSP
This is where the transmitter's advertising rate comes in. If the transmitter is advertising at 10 Hz, there should be about 10 packets per second to detect. These are spaced randomly but arrive on average every 100 ms, so you might expect to detect about 5 packets during the 512 ms active scan window. In practice, you almost never detect that many, as some are lost to noise and collisions in radio space. At close range, an 80 percent receive rate is typical. At longer ranges, the receive rate goes down further.
If a packet is detected in one scan interval under this scenario, the OS will get a callback in just under 5 seconds. If for some reason no packet is detected in the first scan window but one is detected in the second, the callback will come in just over 9 seconds.
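As a rough back-of-the-envelope sketch (not part of the library, and using the open source AOSP numbers above plus an assumed per-packet receive rate), the expected background detection delay could be estimated like this:

// Rough estimate of background detection delay with a duty-cycled scan.
// Uses the AOSP low-power values above and an assumed 80% receive rate.
public class DetectionDelayEstimate {
    public static void main(String[] args) {
        double windowMs   = 512;     // SCAN_MODE_LOW_POWER_WINDOW_MS
        double intervalMs = 5120;    // SCAN_MODE_LOW_POWER_INTERVAL_MS
        double advHz      = 10;      // beacon advertising rate
        double rxRate     = 0.8;     // assumed fraction of packets actually received

        // Packets transmitted while the scanner is listening in one window:
        double packetsPerWindow = advHz * windowMs / 1000.0;
        // Probability that at least one of them is received in that window:
        double pDetectPerWindow = 1 - Math.pow(1 - rxRate, packetsPerWindow);
        // Expected number of scan intervals until the first detection:
        double expectedIntervals = 1 / pDetectPerWindow;

        System.out.printf("P(detect per window) = %.3f%n", pDetectPerWindow);
        System.out.printf("Expected delay ~ %.1f s%n",
                expectedIntervals * intervalMs / 1000.0);
    }
}

With a 10 Hz transmitter this comes out at roughly one scan interval (about 5 seconds); with a longer scan interval, as some manufacturers use, the same calculation gives correspondingly longer delays.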
Improving this time means changing the scan interval to be smaller. On Android, you can only do this by changing the scan mode to high power as the window size is fixed by the OS. This usually means having the screen on.
The numbers above are for open source Android (e.g. Pixel phones). Some manufacturers (secretly) customize these settings. My testing suggests most Samsung devices with Android 6+ set the scan interval to 10 seconds with an unknown active scan duration. This means Samsung devices will give you about the results you describe even under the best conditions. Other manufacturers may vary. Getting the value for your manufacturer is impossible without the source code -- the only alternative is experimentation like you are doing.
Finally, do not confuse the Android Beacon Library's scanPeriod and betweenScanPeriod with the scan window/interval described above. While both have similar goals and effects, the OS scan window is not configurable and enforced at a much lower level, usually by the Bluetooth chip itself on newer devices.
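For completeness, those library-level periods are configured on the Android Beacon Library's BeaconManager. A minimal sketch with illustrative values is below; shortening them changes how often the library delivers results, but it cannot shrink the OS-level scan window described above.

// Sketch: adjusting the Android Beacon Library's own scan cycle.
// These periods are separate from the OS scan window/interval discussed above,
// and the values here are only examples.
import android.content.Context;
import org.altbeacon.beacon.BeaconManager;

public class ScanPeriodConfig {
    public static void configure(Context context) {
        BeaconManager beaconManager = BeaconManager.getInstanceForApplication(context);
        // Set these before starting ranging/monitoring.
        beaconManager.setForegroundScanPeriod(1100L);         // active scan per cycle, ms
        beaconManager.setForegroundBetweenScanPeriod(0L);     // idle time between cycles, ms
        beaconManager.setBackgroundScanPeriod(1100L);
        beaconManager.setBackgroundBetweenScanPeriod(10000L);
    }
}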

Veins delay does not change with beacon frequency or number of nodes

I'm trying to simulate an emergency braking application using Veins and analyze its performance. Research papers on 802.11p show that as the beacon frequency and the number of vehicles increase, the delay should increase considerably due to the MAC layer delay of the protocol (for 50 vehicles at 8 Hz, about 300 ms average delay).
But when simulating the application with Veins, the delay values do not show much difference (they range from 1 ms to 4 ms). I've checked the MAC layer functionality and it appears that the channel is idle most of the time, so when a packet reaches the MAC layer the channel has usually already been idle for more than the DIFS and the packet gets sent quickly. I tried increasing the packet size and reducing the bitrate; that increases the previous delay by a certain amount, but a drastic increase in delay due to the backoff process cannot be seen.
Do you know why this happens?
As you use 802.11p, the default data rate on the control channel is 6 Mbit/s (source: ETSI EN 302 663).
6 Mbit/s = 750,000 bytes/s
Your beacons contain 500 bytes, so the transmission of one beacon takes about 0.0007 seconds. With about 50 cars in your multi-lane scenario, each sending beacons at a 10 Hz frequency, it takes about 0.35 s out of every second to transmit those 500 beacons.
In my opinion, these are too few cars to create the effect you mention, because the channel is idle for roughly 65% of the time.
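The same estimate as a tiny, self-contained calculation (numbers taken from the scenario above; PHY preamble and MAC overhead are ignored, which would push the real load somewhat higher):

// Back-of-the-envelope channel load for the scenario described above.
public class ChannelLoad {
    public static void main(String[] args) {
        double bitrate     = 6_000_000;  // 6 Mbit/s control channel
        int    beaconBytes = 500;
        int    cars        = 50;
        double beaconHz    = 10;

        double txTimePerBeacon = beaconBytes * 8 / bitrate;          // ~0.00067 s
        double busyPerSecond   = cars * beaconHz * txTimePerBeacon;  // ~0.33 s per second
        System.out.printf("Airtime per beacon: %.4f s%n", txTimePerBeacon);
        System.out.printf("Channel busy: %.0f%%, idle: %.0f%%%n",
                busyPerSecond * 100, (1 - busyPerSecond) * 100);
    }
}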

How to measure channel busy time in Veins?

Is there any function that returns the channel busy time? I use Veins 2.2 with the 802.11p MAC and decider. If there is no such function, how can the channel busy time be measured?
Channel busy time in Veins 2.2 is measured at two points: in the Phy layer and in the Mac layer. Both record a corresponding scalar value at the end of the simulation. Note that there is a difference in meaning between the two:
Mac busy time is (in almost all cases) what you want to record: it records how many seconds the Mac treated the channel as busy. Divide the scalar totalBusyTime by the total simulation time and you know the fraction of time that the Mac could not send.
Phy busy time is calculated very differently: its value busyTime increases for each frame received above the sensitivity threshold. To give an example, if exactly 1 frame is being received at any given time during the simulation, the value of this scalar would be 100%. If 4 frames are interfering for all of your simulation, the value would be 400% (which is different from the Mac busy time you probably want).
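In practice, then, the post-processing for the Mac case is just a division of the recorded scalar by the simulation duration; a trivial sketch (the values are placeholders for what you would read out of the recorded results):

// Post-processing sketch: fraction of time the Mac saw the channel as busy.
// totalBusyTime and simDuration are placeholders for values taken from the
// recorded simulation output.
public class BusyFraction {
    public static void main(String[] args) {
        double totalBusyTime = 12.7;   // seconds, from the Mac's totalBusyTime scalar
        double simDuration   = 60.0;   // seconds of simulated time
        System.out.printf("Channel busy fraction: %.1f%%%n",
                100.0 * totalBusyTime / simDuration);
    }
}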

Why wait SIFS time before sending ACK?

Question about the MAC protocol of 802.11 Wi-Fi.
We have learned that when a station has received the data, it waits for the SIFS time and then sends the ACK. When searching online, the reason that is always mentioned is to give ACK packets a higher priority. This is understandable, since a station first has to wait the DIFS time when it wants to send normal data (and DIFS is larger than SIFS).
But why wait at all? Why not immediately send the ACK? The station knows the data has arrived and the CRC is correct, so why wait?
It is theoretically possible to know that the CRC is correct at the exact end of the received data from the wire, but in practice you need to accumulate all the samples in the last block in order to run the IFFT, deconvolution, and FEC; only at the very end, after finally getting the input data out of the waveform, do you know that the CRC is correct. Also, you sometimes need to turn on transmit circuitry to send the ACK, which can hamper receive performance. If each step in the processing chain were instantaneous, if the transmit circuitry definitely didn't interfere with the receive circuitry, and if there were no lead time necessary for building the waveform for the ACK, it would be possible to send the ACK immediately after getting the last bit of the waveform. But, while each element in this chain takes some deterministic amount of time, it is not instantaneous. SIFS gives the receiver time to get the data from the PHY, verify it, and send the response.
Disclaimer: I'm more familiar with Homeplug than 802.11.
It is like that because Distributed Coordination Function (DCF) and Point Coordination Function (PCF) mode can coexist within one cell. That is, a base station may use polling while at the same time the cell uses distributed coordination with CSMA/CA.
So during SIFS, control frames or the next fragment may be sent. During PIFS, PCF frames may be sent, and during DIFS, DCF frames may be sent. During SIFS and PIFS, PCF can work its magic.
Even though not all base stations support PCF, all stations must wait, since some may support it.
Update:
The way I understand this now is that during SIFS the station may send RTS, CTS, or ACK and have enough time to switch back to receiving mode before the sender starts to transmit. If that's correct, it will send the ACK during SIFS, then change to receive mode and wait until SIFS elapses. When SIFS has elapsed, the transmitter will start sending.
Also, PCF is controlled by PIFS, which comes after SIFS and before DIFS, and is therefore not relevant for this discussion (my mistake). That is, SIFS < PIFS < DIFS < EIFS.
Sources: This PDF (page 8) and Computer Networks by Andrew S. Tanenbaum
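To make that ordering concrete, here is a small sketch using the 802.11a OFDM timing values (SIFS = 16 µs, slot time = 9 µs); other PHYs use different numbers, but the relationships PIFS = SIFS + 1 slot and DIFS = SIFS + 2 slots are the same:

// Inter-frame spacing relationships, with 802.11a OFDM values in microseconds.
public class InterframeSpacing {
    public static void main(String[] args) {
        int sifs = 16;               // short inter-frame space
        int slot = 9;                // slot time
        int pifs = sifs + slot;      // PCF inter-frame space = 25 us
        int difs = sifs + 2 * slot;  // DCF inter-frame space = 34 us
        System.out.printf("SIFS=%d < PIFS=%d < DIFS=%d (us)%n", sifs, pifs, difs);
    }
}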
SIFS = RTT (based on the PHY transmission rate) + frame processing delay at the receiver (PHY processing delay + MAC processing delay) + frame processing delay for composing the response (CTS/ACK) + RF tuner delay (change from RX to TX).
At the transmitter side, after the last PHY symbol (of an RTS, for example), there is also the time required to change to RX mode (at RF). So I would see SIFS as an RX-side calculation rather than a TX-side one.
I can't say for sure but it sounds like an optimization strategy similar to IP. If you don't require an ACK for every data packet, it makes sense to hold off for a bit so that, if more data packets arrive, you can acknowledge them all with a single ACK.
Example: client sends 400 packets really fast to the server. Rather than the server sending back 400 ACKs, it can simply wait until the client takes a breather before sending a single ACK back. Combined with the likelihood that the client will take a breather even under heavy load (it has to as its unacknowledged-packets buffer fills up), this would be workable.
This is possible in systems where ACK(n) means "I've received everything up to and including packet #n".
You'll get better performance and less traffic by using such a strategy. As long as the wait-before-sending-ack time on the receiver is less than the retransmit-if-no-ack-before time on the sender (taking transmission delays into account), there should be no problem.
Quick crash-course on 802.11:
802.11 is essentially a giant system of timers. The most common implementations of 802.11 use the distributed coordination function, DCF. The DCF allows nodes to come in and out of range of a radio channel being used for 802.11 and to coordinate in a distributed fashion who should be sending and receiving data (ignoring hidden and exposed node problems for this discussion). Before any node can begin sending data on the channel, they all must wait a period of DIFS in which the channel is determined to be idle; if it is idle during a DIFS period, the first node to grab the channel begins transmitting.
In standard 802.11, i.e. non-802.11e and non-802.11n implementations, every single data packet that gets transmitted needs to be acknowledged with an acknowledgment frame, regardless of the upper layer protocol being used. After a data packet is sent, a SIFS period needs to expire; after SIFS expires, control frames destined for the node that has "taken" control of the channel may be sent, and in this instance an acknowledgment frame is transmitted. SIFS allows the node that sent the data packet to switch from transmitting to receiving mode.
If a packet does get lost and no ACK is received after the SIFS/ACK timeout, exponential back-off is invoked. Exponential back-off, a.k.a. the contention window (CW), begins at a value CWmin; in some Linux implementations this is 15 slot times, where a slot time varies depending on the 802.11 variant being used. A back-off value is then chosen at random between 1 and the current upper limit of the CW. If the current packet was lost, the CW is incremented from 15 to 30, and then a random value is chosen between 1 and 30. Every time there is a consecutive loss, the CW doubles, up to 1023, at which point it hits its limit. Once a packet is received successfully, the CW is reset back to CWmin.
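A minimal sketch of that contention window behaviour, following the description above (real implementations use CW values of the form 2^k - 1, i.e. 15, 31, 63, ..., 1023, but the idea is the same):

// Sketch of DCF exponential back-off: the CW doubles on each consecutive
// loss, is capped at CW_MAX, and resets to CW_MIN after a success.
import java.util.Random;

public class Backoff {
    static final int CW_MIN = 15;
    static final int CW_MAX = 1023;
    private int cw = CW_MIN;
    private final Random random = new Random();

    /** Number of idle slots to wait before the next transmission attempt. */
    public int nextBackoffSlots() {
        return 1 + random.nextInt(cw);    // uniform in [1, cw]
    }

    public void onTransmissionFailed() {
        cw = Math.min(2 * cw, CW_MAX);    // 15 -> 30 -> 60 -> ... -> 1023
    }

    public void onTransmissionSucceeded() {
        cw = CW_MIN;
    }

    public static void main(String[] args) {
        Backoff backoff = new Backoff();
        for (int attempt = 1; attempt <= 8; attempt++) {
            System.out.println("attempt " + attempt + ": wait " + backoff.nextBackoffSlots() + " slots");
            backoff.onTransmissionFailed();   // pretend every attempt fails
        }
    }
}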
In regards to 802.11n / 802.11e:
Every data packet still needs to be acknowledged, but when using 802.11e (incorporated into 802.11n), multiple data packets can be aggregated together in two different ways: A-MSDU or A-MPDU. An A-MSDU is a jumbo frame with one checksum for the entire aggregated packet; within it are many sub-frames containing each of the data frames that needed to be sent. If there is any error in the A-MSDU frame and it needs to be retransmitted, every sub-frame has to be resent. With A-MPDU, however, each sub-frame has a small header and checksum that allow any sub-frame with an error in it to be retransmitted by itself, or within another aggregated frame, the next time the sending node gains the channel. With these aggregated sending schemes there is the notion of the block-ack. The block-ack contains a bitmap of the frames, from a starting sequence number, that were just sent in the aggregated packet and received correctly or incorrectly. Aggregated frame sending greatly improves throughput, as more data can be sent per channel acquisition by a sending node, and it also allows out-of-order packet sending. However, out-of-order packet sending greatly complicates the 802.11 MAC layer.
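As an illustration of the block-ack idea, the sketch below checks a (hypothetical) received bitmap against the sub-frames that were sent and collects the ones that still need to be retransmitted:

// Sketch: interpreting a block-ack bitmap. Bit i set = sub-frame
// (startSeq + i) was received correctly; bit clear = retransmit it.
import java.util.ArrayList;
import java.util.List;

public class BlockAck {
    public static List<Integer> framesToRetransmit(long bitmap, int startSeq, int framesSent) {
        List<Integer> retransmit = new ArrayList<>();
        for (int i = 0; i < framesSent; i++) {
            if ((bitmap & (1L << i)) == 0) {
                retransmit.add(startSeq + i);   // not acknowledged
            }
        }
        return retransmit;
    }

    public static void main(String[] args) {
        // Example: 8 sub-frames sent, the 3rd and 6th (bits 2 and 5) were lost.
        long bitmap = 0b11011011;
        System.out.println(framesToRetransmit(bitmap, 100, 8));   // [102, 105]
    }
}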
SIFS = D + M + Rx/Tx
Where:
D = receiver delay (RF delay) plus decoding of the physical layer convergence procedure (PLCP) preamble/header
M = MAC processing delay
Rx/Tx = transceiver turnaround time
None of the above delays can be avoided, so the receiver has to wait the SIFS time before sending the acknowledgement.

GPS Time synchronisation

I'm parsing NMEA GPS data from a device which sends timestamps without milliseconds. As far as I have heard, these devices use a specific trigger point for when they send the sentence carrying the .000 timestamp - as far as I know, the $ of the GGA sentence.
So I'm parsing the GGA sentence and take a timestamp when the $ is received (compensating for any further characters read in the same operation using the serial port baud rate - see the sketch after the sample data).
From this information I calculate the offset for correcting the system time, but when I compare the resulting time against some NTP servers, I get a constant difference of 250 ms. When I correct for this manually, I'm within a deviation of 20 ms, which is OK for my application.
But of course I'm not sure where this offset comes from, and whether it is somehow specific to the GPS mouse I'm using or to my system. Am I using the wrong $ character, or does someone know how exactly this should be handled? I know this question is very fuzzy, but any hints on what could cause this offset would be very helpful!
Here is some sample data from my device, with the $ character I take as the time reference marked:
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003538.000,A,5046.8555,N,00606.2913,E,0.00,22.37,160209,,,A*58
-> $ <- GPGGA,003539.000,5046.8549,N,00606.2922,E,1,07,1.5,249.9,M,47.6,M,,0000*5C
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPGSV,3,1,10,09,77,107,17,12,63,243,30,05,51,249,16,14,26,315,20*7E
$GPGSV,3,2,10,30,24,246,25,17,23,045,22,15,15,170,16,22,14,274,24*7E
$GPGSV,3,3,10,04,08,092,22,18,07,243,22*74
$GPRMC,003539.000,A,5046.8549,N,00606.2922,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003540.000,5046.8536,N,00606.2935,E,1,07,1.5,249.0,M,47.6,M,,0000*55
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003540.000,A,5046.8536,N,00606.2935,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003541.000,5046.8521,N,00606.2948,E,1,07,1.5,247.8,M,47.6,M,,0000*5E
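For reference, the compensation for characters already read in the same operation (mentioned above) is essentially just the character count times the character time at the given baud rate; a hypothetical sketch, assuming standard 8N1 framing:

// Sketch: estimate when the '$' of the GGA sentence arrived on the wire,
// given the moment the read returned and how many characters were read
// after (and including) that '$'. Assumes 8N1 framing = 10 bits per char.
public class NmeaTimestamp {
    public static long dollarTimeMillis(long readReturnedAtMillis,
                                        int charsFromDollarOnward,
                                        int baudRate) {
        double bitsPerChar = 10.0;                   // start + 8 data + stop
        double charMillis  = 1000.0 * bitsPerChar / baudRate;
        return readReturnedAtMillis - Math.round(charsFromDollarOnward * charMillis);
    }

    public static void main(String[] args) {
        // Example: the read returned at t, 25 characters from the '$' onward, 4800 baud.
        long t = System.currentTimeMillis();
        System.out.println(dollarTimeMillis(t, 25, 4800));   // about 52 ms before t
    }
}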
You have to take into account the things that are going on in the GPS device:
receive the satellite signals and calculate position, velocity, and time
prepare the NMEA message and put it into the serial port buffer
transmit the message
GPS devices have relatively slow CPUs (compared to modern computers), so the latency you are observing is the result of the processing the device must do between generating the position and the moment it begins transmitting the data.
Here is one analysis of latency in consumer-grade GPS receivers from 2005. There you can find measurements of latency for specific NMEA sentences.
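Given that this processing latency is roughly constant for a given receiver, one pragmatic approach (effectively what the manual 250 ms correction in the question already does) is to calibrate the offset once against a reference clock such as NTP and then subtract it from every fix; a hypothetical sketch:

// Sketch: apply a per-device latency calibration to the NMEA-derived time.
// The offset would be measured once against NTP (e.g. roughly 250 ms here).
public class GpsTimeCorrection {
    private final long deviceLatencyMillis;   // calibrated constant for this receiver

    public GpsTimeCorrection(long deviceLatencyMillis) {
        this.deviceLatencyMillis = deviceLatencyMillis;
    }

    /** The '$' was observed at dollarTimeMillis; the fix itself is older by the latency. */
    public long correctedFixTimeMillis(long dollarTimeMillis) {
        return dollarTimeMillis - deviceLatencyMillis;
    }

    public static void main(String[] args) {
        GpsTimeCorrection correction = new GpsTimeCorrection(250);
        System.out.println(correction.correctedFixTimeMillis(System.currentTimeMillis()));
    }
}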
