Timestamp information in BLE beacon packets - iBeacon
I have a requirement to send timestamp information within BLE beacon packets. My questions are:
1. Is it possible to include a timestamp in BLE beacon packets?
2. If so, where exactly in the payload should I store this information? In the UUID?
You do not have many bytes to work with in a BLE beacon packet. The maximum data payload is about 20-25 bytes, but if you are using iBeacon on iOS this drops drastically to 4 bytes, because the 16-byte UUID portion of that beacon format takes up 16 of the 24 readable data bytes, and the full UUID must be fixed and specified to the OS up front.
So on iOS you can use the 4-byte major/minor combination to store a timestamp. That is enough to hold a time value in seconds since 1970 that would not roll over until about the year 2106.
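As an illustration, here is a minimal packing/unpacking sketch in Swift (the variable names are just for the example, and it assumes you control both the beacon configuration and the receiving app):

    import Foundation

    // Pack a Unix timestamp (whole seconds since 1970) into the two 16-bit fields.
    let timestamp = UInt32(Date().timeIntervalSince1970)   // fits in 32 bits until ~2106
    let major = UInt16(timestamp >> 16)                     // high 16 bits
    let minor = UInt16(timestamp & 0xFFFF)                  // low 16 bits

    // On the receiving side (e.g. from CLBeacon.major / CLBeacon.minor):
    let recovered = (UInt32(major) << 16) | UInt32(minor)
    let decodedDate = Date(timeIntervalSince1970: TimeInterval(recovered))

Keep in mind the beacon transmitter has to re-advertise with updated major/minor values as time passes, otherwise the receiver only ever sees the timestamp from the last configuration.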
Related
Synchronisation for audio decoders
There's the following setup (it's basically a pair of TWS earbuds and a smartphone): two audio sink devices (the buds), both connected to the same source device. One of these devices is the primary (and is responsible for handling the connection), the other is the secondary (and simply sniffs the data). The source device transmits a stream of encoded data, and the sink devices need to decode it and play it in sync with each other. The problem is that there's a considerable delay between the two receivers (~5 ms at 300 kbps, ~10 ms at 600 kbps and at 900 kbps). The synchronisation mechanism that is already implemented simply doesn't seem to work, so it seems that my only option is to implement another one.

It's possible to send messages between the buds (but because this uses the same radio interface as the sink-to-source communication, only a small number of bytes can be transferred at relatively long intervals, i.e. 48 bytes per 300 ms; maybe a few times more, but probably not by much) and to control the decoder library. I tried the following simple algorithm, sketched below: the secondary sends a message to the primary every 50 milliseconds containing the number of decoded packets. The primary receives it and updates the state of its decoder accordingly: it only decodes if the difference between the number of frames it has already decoded and the number received from its peer is between 0 and 100 (each frame is 2.(6) ms), and the cycle continues. This actually only makes things worse: the latency is now about 200 ms or even higher.

Is there something that can be done to improve my synchronisation method, or would I be better off using something else? If so, what would work best in such a case? Fixing the existing implementation would probably be the best way, but it seems to be closed-source, so I cannot modify it.
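For reference, here is a minimal sketch of the gating logic described above; the type and method names are hypothetical, and the real implementation would of course live in the bud firmware rather than in a high-level language:

    // Hypothetical sketch of the described scheme: the primary only decodes while it
    // is between 0 and 100 frames ahead of the count last reported by the secondary
    // (reports arrive roughly every 50 ms, each frame is 2.(6) ms of audio).
    struct DecodeGate {
        var localDecodedFrames = 0
        var peerDecodedFrames = 0   // last count received from the secondary bud

        mutating func onPeerReport(frameCount: Int) {
            peerDecodedFrames = frameCount
        }

        func shouldDecodeNextFrame() -> Bool {
            let lead = localDecodedFrames - peerDecodedFrames
            return (0...100).contains(lead)
        }

        mutating func onFrameDecoded() {
            localDecodedFrames += 1
        }
    }

Note that by the time a report arrives it is already up to ~50 ms stale, which may be one reason the gate adds latency instead of removing it.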
iBeacon is receiving abnormal RSSI signal
I developed an iBeacon-based iOS app, but the RSSI it receives jumps between 0 and a normal value during beacon ranging (there is a pattern of roughly one normal RSSI reading for every 4-6 zero readings). I am trying to make my iPhone respond in real time based on the received RSSI, but I can't do anything useful with such an unstable signal. I don't know whether this is a hardware problem, a battery problem, or something else. Any ideas are appreciated.
When ranging for beacons on iOS, if no beacon packets have been received in the last second (but some have been received in the last five seconds), the beacon will still be included in the list of CLBeacon objects in the callback, but it will be given an rssi value of 0. You can confirm this by turning off a beacon: you will notice it keeps showing up in ranging callbacks for about five seconds, but its rssi will always be zero. After those five seconds, it is removed from the list.

If you are seeing the value bounce back and forth between 0 and a normal reading, it indicates that beacon packets are only being received every few seconds. The most likely cause is a beacon transmitter that rarely sends packets (say, every 3 to 5 seconds). Some manufacturers sell beacons that do this to conserve battery life. For best ranging performance, turn the advertising rate up to 10 Hz if your beacon manufacturer allows it, and also increase the transmitter power to maximum. This will use much more battery but will get rid of the dropouts you are seeing.
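For example, a minimal Swift sketch (using the pre-iOS 13 CoreLocation ranging callback) that simply drops those rssi == 0 placeholder entries before doing anything distance-related:

    import CoreLocation

    class BeaconRanger: NSObject, CLLocationManagerDelegate {
        func locationManager(_ manager: CLLocationManager,
                             didRangeBeacons beacons: [CLBeacon],
                             in region: CLBeaconRegion) {
            // rssi == 0 means "seen within the last five seconds, but not within
            // the last second", so skip those samples instead of treating them as data.
            let usable = beacons.filter { $0.rssi != 0 }
            for beacon in usable {
                print("major \(beacon.major), minor \(beacon.minor), rssi \(beacon.rssi)")
            }
        }
    }

Filtering only hides the placeholder values, of course; the underlying fix is still a faster-advertising beacon.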
Is Realterm dropping characters or am I?
I'm using a SAML21 board to accept some data over a serial connection and, at the moment, just mirror it to a serial port on a computer. However, this data is 6 bytes at ~250 Hz (it was closer to 3 kHz before). As far as I can tell I'm tracking the start and end bytes correctly, yet my columnar alignment occasionally gets out of whack in Realterm. I have it set up for 6 bytes in single mode, so all columns should be presenting the same bytes up and down. However, over time, as I increase the rate at which I mirror (I am still receiving the data at a fixed rate), the first column's byte tends to float. I have not used Realterm at speeds this high before, so I am not aware of its limitations.
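In case it helps to rule out the mirroring side, here is a hypothetical resynchronisation sketch for a fixed 6-byte frame; the START/END marker values are made up, since the question does not give them:

    // Hypothetical framing check: emit a frame only when it is exactly 6 bytes,
    // begins with START and ends with END; otherwise wait for the next START byte.
    let START: UInt8 = 0xAA   // placeholder value, not from the question
    let END: UInt8 = 0x55     // placeholder value, not from the question
    var frame: [UInt8] = []

    func onByteReceived(_ byte: UInt8) -> [UInt8]? {
        if frame.isEmpty && byte != START { return nil }   // wait for a frame boundary
        frame.append(byte)
        guard frame.count == 6 else { return nil }
        defer { frame.removeAll() }
        return frame.last == END ? frame : nil             // discard misaligned frames
    }

If frames that pass this kind of check still drift in Realterm's 6-byte single mode, the misalignment is most likely happening on the display side.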
What is the reason for the big overhead when sending data using spidev?
I'm using the spidev driver (embedded Linux) to send data over SPI. I sent 7 bytes of data (at a 1 Mbit/s clock rate, using the "write" command) and noticed that it takes approximately 200 microseconds to complete the operation (I used a scope to verify that the clock rate is correct). Sending that data should take 56 microseconds plus some overhead, but this seems like too much to me. What can be the reason for that overhead? Is it connected to the switch between user space and kernel space, or to the spidev implementation?
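For what it's worth, the expected on-wire time alone works out as follows (a back-of-the-envelope sketch, assuming 8 clocked bits per byte and no inter-byte gaps):

    // 7 bytes at a 1 Mbit/s SPI clock: the bits themselves take only 56 µs.
    let bytes = 7.0
    let clockHz = 1_000_000.0
    let onWireMicros = bytes * 8.0 / clockHz * 1_000_000.0     // = 56 µs
    let measuredMicros = 200.0                                  // observed on the scope
    let overheadMicros = measuredMicros - onWireMicros          // ≈ 144 µs outside the transfer

So the question is really where those ~144 µs go before and after the clocked transfer itself.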
GPS Time synchronisation
I'm parsing NMEA GPS data from a device which sends timestamps without milliseconds. As far as I have heard, these devices use a specific trigger point for sending the sentence with the .000 timestamp - afaik the $ of the GGA sentence. So I'm parsing the GGA sentence and take the timestamp when the $ is received (I compensate for any further characters read in the same operation using the serial port baud rate). From this information I calculate the offset for correcting the system time, but when I compare the corrected time against some NTP servers, I get a constant difference of 250 ms - when I correct this manually, I'm within a deviation of 20 ms, which is OK for my application.

But of course I'm not sure where this offset comes from, and whether it is somehow specific to the GPS mouse I'm using or to my system. Am I using the wrong $ character, or does someone know how exactly this should be handled? I know this question is very fuzzy, but any hints on what could cause this offset would be very helpful!

Here is some sample data from my device, with the $ characters I take as the time reference marked:

$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003538.000,A,5046.8555,N,00606.2913,E,0.00,22.37,160209,,,A*58
-> $ <- GPGGA,003539.000,5046.8549,N,00606.2922,E,1,07,1.5,249.9,M,47.6,M,,0000*5C
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPGSV,3,1,10,09,77,107,17,12,63,243,30,05,51,249,16,14,26,315,20*7E
$GPGSV,3,2,10,30,24,246,25,17,23,045,22,15,15,170,16,22,14,274,24*7E
$GPGSV,3,3,10,04,08,092,22,18,07,243,22*74
$GPRMC,003539.000,A,5046.8549,N,00606.2922,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003540.000,5046.8536,N,00606.2935,E,1,07,1.5,249.0,M,47.6,M,,0000*55
$GPGSA,A,3,17,12,22,18,09,30,14,,,,,,2.1,1.5,1.6*31
$GPRMC,003540.000,A,5046.8536,N,00606.2935,E,0.00,22.37,160209,,,A*56
-> $ <- GPGGA,003541.000,5046.8521,N,00606.2948,E,1,07,1.5,247.8,M,47.6,M,,0000*5E
You have to take into account what is going on inside the GPS device:

1. receive the satellite signal and calculate position, velocity and time
2. prepare the NMEA message and put it into the serial port buffer
3. transmit the message

GPS devices have relatively slow CPUs (compared to modern computers), so the latency you are observing is the result of the processing the device must do between generating a position fix and the moment it begins transmitting the data. Here is one analysis of latency in consumer-grade GPS receivers from 2005; there you can find latency measurements for specific NMEA sentences.
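As a side note, the per-character compensation mentioned in the question is straightforward to compute; a rough sketch, assuming the common NMEA default of 4800 baud and 8N1 framing (adjust for your actual port settings):

    import Foundation

    // One character on the wire is start bit + 8 data bits + stop bit = 10 bits.
    let baud = 4800.0                       // assumed NMEA default, not from the question
    let secondsPerCharacter = 10.0 / baud   // ≈ 2.08 ms per character

    // If N characters were already buffered after the '$' of the GGA sentence when the
    // read completed, the '$' actually arrived about N * secondsPerCharacter earlier.
    func estimatedDollarArrival(readCompletedAt: Date, charactersAfterDollar: Int) -> Date {
        return readCompletedAt.addingTimeInterval(-Double(charactersAfterDollar) * secondsPerCharacter)
    }

The remaining constant ~250 ms is then the receiver-side processing latency described above, which this kind of compensation cannot remove.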