Improving ContactTracing API efficiency with Bluetooth signal strength - iBeacon

As per the current specs, only contact duration is tracked, in 5-minute increments; the suggested scan interval appears to be 200-300 ms. In Singapore signal strength was taken into account, but it varies per device. What if we also tracked the signal strength during that time? You would get a curve from weak to strong that gives an indication of the speed of travel while approaching, and couldn't you also derive fairly accurate indications of proximity after just one day of data?
I noticed that beacon libraries already attempt to estimate distance: Understanding ibeacon distancing
But it does not seem these libraries self-calibrate yet, based for instance on min/max readings against moving targets. I'm thinking that could work, especially as phones are modified to keep Bluetooth scanning always on for this purpose.
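As a rough sketch of the idea, here is how such a weak-to-strong curve could be computed from logged sightings (the sample format and the 5-second window are assumptions, not part of any real API):

```swift
import Foundation

// Hypothetical log of RSSI sightings for one contact; not a real API.
struct RssiSample {
    let time: TimeInterval   // seconds since first sighting
    let rssi: Double         // dBm
}

/// Moving average over a sliding time window, to tame the large
/// sample-to-sample variation in BLE RSSI before looking at the trend.
func smoothedCurve(_ samples: [RssiSample], window: TimeInterval = 5.0) -> [RssiSample] {
    samples.map { center in
        let neighbors = samples.filter { abs($0.time - center.time) <= window / 2 }
        let mean = neighbors.map(\.rssi).reduce(0, +) / Double(neighbors.count)
        return RssiSample(time: center.time, rssi: mean)
    }
}

// A rising smoothed curve (say -90 dBm to -65 dBm over 30 s) suggests an
// approach; its slope gives a very rough sense of closing speed.
```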

It is very difficult to accurately determine distance from the Bluetooth RSSI measured between two phones, because there is huge variation in how different phone models measure Bluetooth signals. Check out this graph produced by the OpenTrace folks behind the effort in Singapore:
Those variations are consistent with my work in this area for the Android Beacon Library open source project. The fragmentation of Android devices has made it impossible to keep up with all the variations in signal strength response.
One point that the OpenTrace team did not address in their work is that there are a number of different Bluetooth channels (BLE advertises on three), and RSSI varies greatly on a given phone depending on which channel is in use. Mobile phones give you no indication of which channel the radio was on when a measurement was taken. The channel difference probably accounts for much of the "height" of the blue bars in the graph.
Unfortunately, there is no way to know if a device is approaching or stationary by reading RSSI updates. The changes could be because of natural variation, motion, or changes in obstacles. I do not believe self-calibration in a contact tracing app is viable.
This does not mean that RSSI is worthless for distance estimates, but it does mean that the margin of error is very high in what you can measure. If you see a device at all, there is a very good chance it is within 50 meters. And if you see that the RSSI is stronger than -70 dBm, there is a good chance you are within 2 meters. But there will always be false positives and false negatives.
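If all you need is that coarse bucketing, it is trivial to express; a minimal sketch (the -70 dBm threshold is the rule of thumb above, not a calibrated constant):

```swift
// Coarse proximity bucketing per the thresholds above. Bucket names are
// illustrative; expect false positives and false negatives either way.
enum CoarseProximity { case likelyWithin2m, likelyWithin50m, unknown }

func classify(rssi: Int) -> CoarseProximity {
    guard rssi != 0 else { return .unknown }   // 0 conventionally means "no reading"
    return rssi > -70 ? .likelyWithin2m : .likelyWithin50m
}
```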

Related

Is GPS inaccuracy consistent over short time spans?

I'm interested in developing a semi-autonomous RC lawnmower.
That is, the operator would decide when to stop, turn, etc., but could request "slightly overlap previous cut" and the mower would do so automatically. (Having operated high-end RC mowers at trade shows, I know this is the tedious part. Overcoming that, plus the high cost -- which I believe is possible -- would make for a commercial success.)
This feature would require accurate horizontal positioning. I have investigated ultrasonic, laser, optical, and GPS. Each has its problems in this application. (I'll resist the temptation to go off on these tangents here.)
So... my question...
I know GPS horizontal accuracy is only 3-4m. Not good enough, but:
I don't need to know where I am on the planet. I only need to know where I am relative to where I was a minute ago.
So, my question is: is the inaccuracy consistent in the short term? If so, I think it would work for me. If it varies wildly by ±1.5 m from one second to the next, then it will not work.
I have tried to find this information but have had no success (possibly because of the ubiquity of other GPS-accuracy discussion), so I appreciate any guidance.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Edit ~~~~~~~~~~~~~~~~~~~~~~
It's looking to me like GPS is not just skewed but granular. I'd be interested in hearing from anyone who can give better insight into this, but for now I'm going to explore other options.
I realized that even though my intended application is "outdoor", this question is technically in the field of "indoor positioning systems" so I am adding that tag.
My latest thinking is to have three "intelligent" high-dB ultrasonic (US) speaker units. The mower emits RF requests for a tone from each speaker in rapid sequence, measuring the time it takes to "hear" each unit's response, thereby calculating the distance to each of these fixed points and using trilateration to get position (a sketch of that step follows below). If the fixed-point speakers are 300' away from the mower, the mower may have moved several feet between the 1st and 3rd response, so this would have to be allowed for in the software. If it is possible to differentiate three different US frequencies, they could be requested/received "simultaneously". You still run into issues when you're close to one fixed unit and far from another, so some software correction may still be necessary. If we can assume the mower is moving in a straight line, this isn't too complicated.
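A minimal sketch of that trilateration step, assuming known speaker positions and distances derived from one-way sound travel time (the 343 m/s figure assumes dry air at about 20 °C):

```swift
import Foundation

struct Point { let x, y: Double }

/// Distance from a one-way ultrasonic travel time, assuming ~343 m/s.
func distance(fromTravelTime t: TimeInterval) -> Double { 343.0 * t }

/// Solves the three circle equations (x - xi)^2 + (y - yi)^2 = di^2 by
/// subtracting the first from the other two, leaving a 2x2 linear system.
func trilaterate(_ p1: Point, _ d1: Double,
                 _ p2: Point, _ d2: Double,
                 _ p3: Point, _ d3: Double) -> Point? {
    let ax = 2 * (p2.x - p1.x), ay = 2 * (p2.y - p1.y)
    let bx = 2 * (p3.x - p1.x), by = 2 * (p3.y - p1.y)
    let c1 = d1*d1 - d2*d2 + p2.x*p2.x - p1.x*p1.x + p2.y*p2.y - p1.y*p1.y
    let c2 = d1*d1 - d3*d3 + p3.x*p3.x - p1.x*p1.x + p3.y*p3.y - p1.y*p1.y
    let det = ax * by - ay * bx
    guard abs(det) > 1e-9 else { return nil }   // collinear speakers: no fix
    return Point(x: (c1 * by - c2 * ay) / det,
                 y: (ax * c2 - bx * c1) / det)
}
```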
Another variation is the mower does not request the tones. The fixed units send RF "here comes tone from unit A" etc., and the mower unit just monitors both RF info and US tones. This may simplify things somewhat, but it seems it really requires the ability to determine which speaker a tone is coming from.
This seems like the kind of thing you could (and should) measure empirically. Just set a GPS of your liking down in the middle of a field on a clear day and wait an hour. Then come back and see what you find.
Because I'm in a city, I can't run out and do this for you. However, I found a paper entitled iGeoTrans – A novel iOS application for GPS positioning in geosciences.
That includes a figure which duplicates the test I propose. You'll note that both the iPhone 4 and the Garmin eTrex 10 perform pretty poorly versus the accuracy you say you need.
But the authors do some Math Magic™ to reduce the uncertainty in the position, presumably by using some kind of averaging. That gets them to a 3.53m RMSE measure.
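For illustration, a naive averaging sketch; note that averaging only helps against zero-mean noise, not the slow correlated drift GPS positions exhibit over minutes:

```swift
import CoreLocation

/// Average of fixes logged while the receiver sat still. For a small
/// area, averaging raw latitude/longitude directly is close enough.
func averagedFix(_ fixes: [CLLocationCoordinate2D]) -> CLLocationCoordinate2D? {
    guard !fixes.isEmpty else { return nil }
    let lat = fixes.map(\.latitude).reduce(0, +) / Double(fixes.count)
    let lon = fixes.map(\.longitude).reduce(0, +) / Double(fixes.count)
    return CLLocationCoordinate2D(latitude: lat, longitude: lon)
}
```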
If you have real-time differential GPS, you can do better. But this requires relatively expensive hardware and software.
Even aside from the above, you have the potential issue of GPS reflection and multipath error. What if your mower has to go under a deck, or thick trees, or near the wall of a house? These common yard features will likely break the assumptions needed to make a good averaging algorithm work and even frustrate attempts at DGPS by blocking critical signals.
To my mind, this seems like a computer vision problem. And not just because that'll give you more accurate row overlaps... you definitely don't want to run over a dog!
In my opinion a standard GPS is nowhere near accurate enough for this application. A typical consumer-grade receiver that I have used has a position accuracy defined as a CEP of 2.5 metres. This means that for a stationary receiver in a "perfect" sky-view environment, over time 50% of the position fixes will lie within a circle with a radius of 2.5 metres.

If you look at the position that the receiver reports, it appears to wander at random around the true position, sometimes moving a number of metres away from its true location. When I have monitored the position data from a number of stationary units, they could appear to be moving at speeds of up to 0.5 metres per second. In your application this would mean that the lawnmower could be out of position by some not insignificant distance (with disastrous consequences for your prized flowerbeds).
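To make the CEP figure concrete, here is a sketch of how you could estimate it yourself from fixes logged at a surveyed truth point (the helper is hypothetical, not from any receiver SDK):

```swift
import CoreLocation

/// CEP: the radius containing 50% of the fixes, i.e. the median
/// radial error of stationary fixes against a known truth position.
func circularErrorProbable(fixes: [CLLocation], truth: CLLocation) -> Double? {
    guard !fixes.isEmpty else { return nil }
    let radialErrors = fixes.map { $0.distance(from: truth) }.sorted()
    return radialErrors[radialErrors.count / 2]   // metres
}
```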
There is a way that this can be done, as has been proved by the tractor manufacturers, who can position seed drills and agricultural sprayers to centimetre-level accuracy. These systems use Differential GPS, where there is a fixed reference station positioned in the neighbourhood of the tractor being controlled. This reference station transmits error corrections to the mobile unit, allowing it to correct its reported position to a high degree of accuracy. Unfortunately this sort of positioning system is very expensive.

Beacon: Why we need to calibrate Tx power

As far as I know, the packet sent by a beacon contains information about the calibrated Tx power (or measured power: the power value at 1 meter). I just wonder why beacons send the calibrated Tx power and not the broadcasting power (the signal power the beacon transmits at the source). The calculation logic would only differ a little, and it would make the terms and configuration simpler.
Think about a beacon like a friend calling out to you by shouting your name. You can estimate how far your friend is away by the volume of the sound when it reaches you.
But some people have louder voices than others. A friend with a loud voice who is far away may be heard at the same volume as a friend with a soft voice who is nearby. To tell the difference, you might have each friend shout out how loud their voice is on a scale of one to ten.
This is the same concept behind beacons transmitting their "measuredPower" (also known as txPower or calibratedPower). The beacon transmits the "volume" that should be heard by the receiver (measured in dBm) if the receiver is 1 meter away. This way, beacons with strong transmitters can work alongside those with weak transmitters.
The reason 1 meter is used as a reference is because it is relatively easy to measure a signal at 1 meter. Practical considerations make it difficult or impossible to accurately measure signal levels at a distance of 0 meters. Also, physical environmental factors (like a metal cabinet behind the beacon or a wooden door in front of one) can cause reflections or attenuations that affect the signal level. A 1 meter reference point makes it easier to account for this.
The distance estimate provided by iOS is based on the ratio of the iBeacon signal strength (RSSI) to the calibrated transmitter power (txPower). The txPower is the known measured RSSI at 1 meter away. Each iBeacon must be calibrated with this txPower value to allow accurate distance estimates.
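Apple does not document its exact curve, but a common model consistent with that RSSI/txPower ratio is log-distance path loss; a sketch (the environment exponent n is an assumption you would tune per site, roughly 2 in free space and higher indoors):

```swift
import Foundation

/// rssi = txPower - 10 * n * log10(d), solved for d:
///   d = 10^((txPower - rssi) / (10 * n))
func estimatedDistance(rssi: Double, txPower: Double, n: Double = 2.0) -> Double {
    pow(10.0, (txPower - rssi) / (10.0 * n))
}

// Example: txPower -59 dBm, measured RSSI -75 dBm, n = 2  ->  about 6.3 m.
```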

Do iBeacons have influence on each other?

I have a question about iBeacons. I've been checking the stability of iBeacons by using different applications to detect them and the distance from the device to the beacons themselves.
What I've noticed is that if two beacons are put directly next to each other or on top of each other, the distance isn't correct anymore.
So my question is: do iBeacons interfere with each other's signals?
Thanks in advance.
Bluetooth LE beacons generally do not cause significant disruption to each other's signals. While they share the same radio band, there are multiple advertising channels, and advertisers randomize their transmission timing, so any collisions are transient.
When you get to extreme density (hundreds of beacons in radio range), it can start to reduce detection counts for individual beacons, but that is an extreme scenario and not what is described in the question.
For normal operations the above holds. For signal strength readings, however, any radio signals, noise, reflections or obstructions can affect the signal strength that is used to provide distance estimates. The important thing to note is that these are just estimates, and many things can throw them off. Rarely are they very accurate.

Improve location using Bluetooth?

For my project I need to estimate a node's position on a grid. To check that my method works, I took some readings along the x-axis, as below, at y = 2:
Blue, brown and grey are the average RSSI readings from access points 1, 2 and 3. The node moves from the red dot to the blue dot, which is 120 cm. The fluctuations of the RSSI readings are not linear, and this is a very big problem in my case for getting an accurate position. I use kNN to get the nearest position. What can I do to correct this? Would using some other classifier help?
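For reference, a minimal sketch of the kNN step, averaging the k nearest fingerprints instead of taking only the single nearest one, which usually smooths the non-linearity a little (the fingerprint structure and k = 3 are assumptions):

```swift
import Foundation

struct Fingerprint {
    let x, y: Double         // grid position in cm
    let rssi: [Double]       // one averaged reading per access point
}

/// Euclidean distance between two RSSI vectors.
func rssiDistance(_ a: [Double], _ b: [Double]) -> Double {
    sqrt(zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +))
}

/// Average position of the k fingerprints closest to the live reading.
func knnEstimate(reading: [Double], database: [Fingerprint], k: Int = 3) -> (x: Double, y: Double) {
    precondition(!database.isEmpty)
    let nearest = database
        .sorted { rssiDistance($0.rssi, reading) < rssiDistance($1.rssi, reading) }
        .prefix(k)
    let n = Double(nearest.count)
    return (nearest.map { $0.x }.reduce(0, +) / n,
            nearest.map { $0.y }.reduce(0, +) / n)
}
```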
Check out Is rssi a reliable parameter in sensor localization algorithms: An experimental study
While I don't completely agree with the way those tests were executed and analyzed (they miss a lot of details, such as differentiating RSSI readings over the different BLE channels, or measurements of antenna characteristics and orientation), I consider the core statement quite on point:
RSSI cannot be used as a reliable metric in localization algorithms
Besides the difficulty posed by reflection, shadowing, antenna characteristics, etc., the BLE specification itself adds to the problem: RSSI is not defined as an absolute value, but is specified for use in a relative manner, relative to the golden receive power range, i.e. the RX power that would be neither too weak nor too strong for the receiver to achieve the best receive quality. On top of that, the reported RSSI is allowed to vary ±6 dB from the real value.
This means we can hardly rely on the same readings across different devices, and, secondly, the accuracy is allowed to vary a lot according to the specification.
For that reason, projects relying too heavily on that accuracy are doomed to fail one way or another. However, there are still applications that can get something positive out of these RSSI readings, i.e. not relying completely on them, but instead using them as an indicator in a supportive way.
If you are interested in more on these matters, search for "indoor localization RSSI", e.g. on Google Scholar.

iBeacons: bearing to beacon?

Partly a coding problem, partly math problem.
Q1. I have an iOS device with the compass active. If it knows I'm moving through the field of an iBeacon, or the beacon is moving through my detection range, would it be possible for the phone to work out (roughly) the relative direction/bearing of that beacon from a series of readings by comparing signal strengths? Has anyone had a try at this?
Q2. Would it be possible to change the Major and Minor values of a beacon regularly (e.g. every second) to pass small pieces of info, such as a second user's bearing and course?
Q1. It MIGHT be possible but you would need a controlled environment. Either the beacon or the phone needs to be fixed. You also need to be in an area with no obstructions or sources of radio interference.
Then you'd need to use the signal strength (which is sloppy and varies by a fair amount) as one input, and the device's heading info (which is also grossly inaccurate), and do some pretty gnarly math on it.
Assuming you could work out the math, the slop in the input readings might make the results too iffy to be useful. (For example, how would you distinguish moving directly towards the beacon from moving 30 degrees to one side or the other? The signal strength would still increase, just not as quickly.)
And your algorithm would have to deal with edge cases like moving along a circle around the beacon. In that case the signal strength should not change.
My gut is that even with clever algorithms that input data is just too unreliable to make much sense out of it, beyond "getting warmer" and "getting colder."
As mentioned above, you'd have to track your device's movement within the field, including distance covered and direction; then, with multiple readings of signal strength, you could theoretically calculate the relative direction to the beacon to some degree of accuracy.
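In practice that reduces to something like this warmer/colder estimator, pairing a smoothed RSSI trend with the heading logged at each sample (the 3 dB threshold is an arbitrary assumption):

```swift
// One smoothed observation: device heading plus averaged RSSI.
struct BearingSample {
    let heading: Double   // degrees, e.g. from CLHeading.trueHeading
    let rssi: Double      // smoothed dBm
}

func trend(_ samples: [BearingSample]) -> String {
    guard let first = samples.first, let last = samples.last else { return "unknown" }
    let delta = last.rssi - first.rssi
    if delta > 3 { return "getting warmer while heading \(Int(last.heading))°" }
    if delta < -3 { return "getting colder while heading \(Int(last.heading))°" }
    return "no clear trend"
}
```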
As to your second question about changing the major and minor values, I have not seen any beacon APIs that allow that, either from the beacon manufacturers or in Apple's implementation.
However, a typical beacon is an ARM or other low-power processor with a BLE transceiver, running a program. In theory it should be possible to create your own iBeacon transmitter that changes one of the parameters in order to transmit changing information. You'd have to set up the iOS device with a beacon region specifying only the UUID, or the UUID and major ID, depending on whether you want to change just the minor value or both the major and minor IDs.
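A sketch of the receiving side of such a scheme: range with a UUID-only region so the changing major/minor values keep arriving (the UUID and the bit-packing here are made-up placeholders):

```swift
import CoreLocation

final class BeaconDataReceiver: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // UUID only: major and minor are left as wildcards so any values match.
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
        identifier: "dataChannel")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for b in beacons {
            // Reassemble 32 bits from the two 16-bit fields (made-up scheme).
            let payload = (UInt32(truncating: b.major) << 16) | UInt32(truncating: b.minor)
            print("received payload:", payload)
        }
    }
}
```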
Note, too, that iBeacons are a special case of BLE, and the BLE standard does support the sending of arbitrary, changing data. You might be better off implementing your own BLE scheme either instead of or in addition to iBeacons.
