Android Beacon Library - Long Search Time - ibeacon

Update: I would still like advice, as the start-up time is still slow, though it has been reduced from about 2 minutes to about 10 seconds.
As far as I understand, the default scan period for this library is around 1.1 seconds. However, despite setting my beacon broadcast frequency to 10 Hz (iBeacon), and despite 'didDetermineStateForRegion' reporting a detection of beacons coming into range, it takes about 1 minute for 'didEnter/ExitRegion' and the range notifier to alert me that a beacon is in range / give me a list of beacons that are in range. Once it starts giving me alerts of beacons entering range, the response is great, at less than 0.5 seconds for a beacon that is turned on/off.
What are the possible reasons for and solutions to this issue? I am trying to create an iBeacon attendance app. Many thanks.
*I also tried advice given in other posts, like turning off Wi-Fi to minimise interference.
Clement

The time it takes to detect a beacon in the background is largely determined by how the phone scans in low power mode, combined with the beacon transmitter's advertising rate.
Android devices generally put BLE scans into low power mode when the screen is off. The Android Beacon Library does so explicitly when using BackgroundPowerSaver, and the OS enforces this anyway on newer Android versions.
Low power mode means the BLE chip is commanded to use a duty cycle when scans are on. On open source Android, this is set to a 5120 ms interval with only a 512 ms window of active scanning -- a 10% duty cycle. This saves about 90% of the battery vs constant scanning, but it delays detections.
private static final int SCAN_MODE_LOW_POWER_WINDOW_MS = 512;
private static final int SCAN_MODE_LOW_POWER_INTERVAL_MS = 5120;
private static final int SCAN_MODE_BALANCED_WINDOW_MS = 1024;
private static final int SCAN_MODE_BALANCED_INTERVAL_MS = 4096;
private static final int SCAN_MODE_LOW_LATENCY_WINDOW_MS = 4096;
private static final int SCAN_MODE_LOW_LATENCY_INTERVAL_MS = 4096;
See here at AOSP
This is where the transmitter's advertising rate comes in. If the transmitter is advertising at 10 Hz, there should be about 10 packets per second to detect. These are spaced semi-randomly, but on average one arrives every 100 ms. So you might typically be able to detect about 5 packets during the 512 ms active scan window. In practice, you almost never detect that many, as some are lost to noise and collisions in radio space. At close range, an 80 percent receive rate is typical. At further ranges, the receive rate goes down further.
If a packet is detected in one scan interval under this scenario, the OS will deliver a callback in about 5 seconds. If for some reason no packet is detected in the first scan window but one is in the second, the callback will come in about 10 seconds.
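As a rough back-of-the-envelope check on how these numbers combine, here is a small sketch in plain Python (not library code); the scan constants are the AOSP low-power values above, and the 80 percent receive rate is the assumption from the previous paragraph:
SCAN_INTERVAL_MS = 5120   # SCAN_MODE_LOW_POWER_INTERVAL_MS
SCAN_WINDOW_MS = 512      # SCAN_MODE_LOW_POWER_WINDOW_MS
ADV_PERIOD_MS = 100       # 10 Hz advertiser
RECEIVE_RATE = 0.8        # assumed packet delivery ratio at close range

packets_per_window = SCAN_WINDOW_MS / ADV_PERIOD_MS            # ~5 advertisements per window
p_detect = 1 - (1 - RECEIVE_RATE) ** packets_per_window        # chance of catching at least one
expected_delay_s = (1 / p_detect) * SCAN_INTERVAL_MS / 1000.0  # geometric expectation
print("P(detect in one window) = %.4f" % p_detect)
print("Expected first-detection delay = %.1f s" % expected_delay_s)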
Improving this time means making the scan interval smaller. On Android, you can only do this by changing the scan mode to a higher-power one, as the window and interval sizes are fixed by the OS. This usually means having the screen on.
The numbers above are for open source Android (e.g. Pixel phones). Some manufacturers (secretly) customize these settings. My testing suggests most Samsung devices with Android 6+ set the scan interval to 10 seconds with an unknown active scan duration. This means Samsung devices will give you about the results you describe even under the best conditions. Other manufacturers may vary. Getting the value for your manufacturer is impossible without the source code -- the only alternative is experimentation like you are doing.
Finally, do not confuse the Android Beacon Library's scanPeriod and betweenScanPeriod with the scan window/interval described above. While both have similar goals and effects, the OS scan window is not configurable and is enforced at a much lower level, usually by the Bluetooth chip itself on newer devices.

Related

Veins delay does not change with beacon frequency or number of nodes

I'm trying to simulate an emergency braking application using Veins and analyze its performance. Research papers on 802.11p show that as beacon frequency and the number of vehicles increase, delay should increase considerably due to the MAC layer delay of the protocol (for 50 vehicles at 8 Hz, about 300 ms average delay).
But when simulating the application with Veins, the delay values do not show much difference (they range from 1 ms to 4 ms). I've checked the MAC layer functionality and it appears that the channel is idle most of the time. So when a packet reaches the MAC layer, the channel has usually already been idle for more than the DIFS, so the packet gets sent quickly. I tried increasing the packet size and reducing the bitrate. That increases the delay by a certain amount, but the drastic increase in delay due to the backoff process cannot be seen.
Do you know why this happens?
As you use 802.11p, the default data rate on the control channel is 6 Mbit/s (source: ETSI EN 302 663).
6 Mbit/s = 750,000 bytes/s (750 kB/s)
Your beacons contain 500 bytes, so the transmission of one beacon takes about 0.0007 seconds. As you have about 50 cars in your multi-lane scenario and they are sending beacons at, for example, a 10 Hz frequency, it takes about 0.35 s of every second to transmit the 500 beacons.
In my opinion, these are too few cars to create the effect you mention, because the channel is idle for roughly two thirds of the time.
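For reference, the same arithmetic as a small plain-Python sketch (not Veins code; MAC/PHY overhead is ignored, so the real channel occupancy would be somewhat higher):
BITRATE_BPS = 6_000_000   # 802.11p control channel default data rate
BEACON_BYTES = 500
CARS = 50
BEACON_HZ = 10

airtime_s = BEACON_BYTES * 8 / BITRATE_BPS     # ~0.00067 s per beacon
busy_fraction = airtime_s * CARS * BEACON_HZ   # ~0.33 of every second
print("Airtime per beacon: %.2f ms" % (airtime_s * 1000))
print("Channel busy: %.0f%%, idle: %.0f%%" % (busy_fraction * 100, (1 - busy_fraction) * 100))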

iBeacon is receiving abnormal RSSI signal

I developed an iBeacon-based iOS app, but the RSSI it receives jumps between 0 and a normal value during beacon ranging (there is something of a pattern: a normal RSSI value appears after every 4-6 zero-RSSI readings).
I am trying to have my iPhone respond in real time based on the RSSI values received, but I can't do much with such an unstable signal. I don't know whether this is caused by a hardware problem, a battery problem, or something else. Any ideas are appreciated.
When ranging for beacons on iOS, if no beacon packets have been received in the last second (but beacon packets have been received in the last five seconds), the beacon will be included in the list of CLBeacon objects in the callback, but it will be given an rssi value of 0.
You can confirm this is true by turning off a beacon. You will notice that it continues to show up in ranging callbacks for about five seconds, but its rssi will always be zero. After those five seconds, it is removed from the list.
If you are seeing it bounce back and forth between 0 and a normal value, this indicates that beacon packets are only being received every few seconds. The most likely cause is a beacon transmitter that rarely sends packets (say every 3 to 5 seconds). Some manufacturers sell beacons that do this to conserve battery life.
For best ranging performance, turn up the advertising rate to 10 Hz if your beacon manufacturer allows it, and also increase the transmitter power to maximum. This will use much more battery but will alleviate the dropouts you are seeing.

OSX CoreAudio: Getting inNumberFrames in advance - on initialization?

I'm experimenting with writing a simplistic single-AU, play-through-based, (almost) no-latency tracking phase vocoder prototype in C. It's a standalone program. I want to find out how much processing load a single render callback can safely bear, so I prefer keeping off async DSP.
My concept is to have only one pre-determined value, the window step, also known as hop size or decimation factor (different names for the same term used in different literature sources). This number would equal inNumberFrames, which somehow depends on the device sampling rate (and what else?). All other parameters, such as window size and FFT size, would be set in relation to the window step. This seems the simplest method for keeping everything inside one callback.
Is there a safe method to machine-independently guess or query what inNumberFrames will be before the actual rendering starts, i.e. before calling AudioOutputUnitStart()?
The phase vocoder algorithm is mostly standard and very simple, using vDSP functions for FFT, plus custom phase integration and I have no problems with it.
Additional debugging info
This code is monitoring timings within the input callback:
static Float64 prev_stime; //prev. sample time
static UInt64 prev_htime; //prev. host time
printf("inBus: %d\tframes: %d\tHtime: %lld\tStime: %7.2lf\n",
(unsigned int)inBusNumber,
(unsigned int)inNumberFrames,
inTimeStamp->mHostTime - prev_htime,
inTimeStamp->mSampleTime - prev_stime);
prev_htime = inTimeStamp->mHostTime;
prev_stime = inTimeStamp->mSampleTime;
Curiously enough, the argument inTimeStamp->mSampleTime actually shows the number of rendered frames (the name of the argument seems somewhat misleading). This number is always 512, no matter whether another sampling rate has been set through AudioMIDISetup.app at runtime, as if the value had been programmatically hard-coded. On one hand, the
inTimeStamp->mHostTime - prev_htime
interval gets dynamically changed depending on the sampling rate set, in a mathematically clear way. As long as the sampling rate is a multiple of 44100 Hz, actual rendering goes on. On the other hand, multiples of 48 kHz produce the rendering error -10863 (= kAudioUnitErr_CannotDoInCurrentContext). I must have missed a very important point.
The number of frames is usually the sample rate times the buffer duration. There is an Audio Unit API to request a sample rate and a preferred buffer duration (such as 44100 Hz and 5.8 ms resulting in 256 frames), but not all hardware on all OS versions honors all requested buffer durations or sample rates.
Assuming audioUnit is an input audio unit:
UInt32 inNumberFrames = 0;
UInt32 propSize = sizeof(UInt32);
AudioUnitGetProperty(audioUnit,
kAudioDevicePropertyBufferFrameSize,
kAudioUnitScope_Global,
0,
&inNumberFrames,
&propSize);
This number would equal inNumberFrames, which somehow depends on the device sampling rate (and what else?)
It depends on what you attempt to set it to. You can set it.
// attempt to set duration
NSTimeInterval _preferredDuration = ...
NSError* err;
[[AVAudioSession sharedInstance]setPreferredIOBufferDuration:_preferredDuration error:&err];
// now get the actual duration it uses
NSTimeInterval _actualBufferDuration;
_actualBufferDuration = [[AVAudioSession sharedInstance] IOBufferDuration];
It would use a value roughly around the preferred value you set. The actual value used is a time interval based on a power of 2 and the current sample rate.
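As an illustration of that relationship, here is a small plain-Python sketch; the rounding to the nearest power of two is my reading of the behaviour described above, not a documented algorithm:
import math

def frames_for_duration(sample_rate_hz, preferred_duration_s):
    # Convert the preferred duration to a raw frame count, then round to the
    # nearest power of two -- an assumption about how the OS picks the real size.
    raw_frames = sample_rate_hz * preferred_duration_s
    return 2 ** round(math.log2(raw_frames))

print(frames_for_duration(44100, 0.0058))  # 256
print(frames_for_duration(48000, 0.010))   # 512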
If you are looking for consistency across devices, choose a value around 10 ms. The worst-performing reasonable modern device is the 16 GB iPod touch without the rear-facing camera, and even this device can do around 10 ms callbacks with no problem. On some devices you "can" set the duration quite low and get very fast callbacks, but often it will crackle because the processing is not finished before the next callback happens.

A COM Port on a Windows PC indicates the bit rate, or the Baud rate?

If you search around the internet, you can easily find websites, Google image results, and many (YouTube) videos that explain the various properties of COM/serial/RS232 ports. As far as I can tell, most of these state that the COM port dialogue box shows the baud rate (and not just on Windows), such as here, here and even on SparkFun here. And this seems clearly false, since the dialog explicitly states the bit rate. Here's an image from my Windows 8.1 PC as well:
And we know that bit rate isn't the same as baud rate. I've also heard people numerous times, e.g. in YouTube videos, talk about messing around with the "baud rate" on a Windows PC. Now I'm confused. What is going on here? It clearly states the bit rate, doesn't it? Am I missing something?
Despite being labelled "bits per second", that dialog actually displays the baud rate, i.e. symbols per second. (Symbols include data bits but also start, stop, and parity bits; for serial ports these are often all called "bits".)
Besides framing symbols, the other cause for a difference between bit rate and baud would be multilevel signalling -- however, this doesn't apply to PC serial ports, since they only use binary signalling, so one data symbol = one bit. Don't be confused by the fact that many serial-attached modems use a larger signal constellation: the COM port setting refers to the link between the computer and the modem, not to the link between two modems.
The selections shown in the image in the question will result in 9600 baud, but only 960 bytes per second. (1 byte = 8 data bits, but due to the start and stop bits the serial port sends 10 symbols per byte.)
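The arithmetic spelled out as a tiny Python sketch, assuming the 8-N-1 framing shown in the screenshot:
baud = 9600                     # symbols per second, as set in the dialog
symbols_per_byte = 1 + 8 + 1    # start bit + 8 data bits + 1 stop bit
print(baud / symbols_per_byte)  # 960.0 bytes per second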
According to this answer:
What is the difference between baud rate and bit rate?
It looks like it's due to the fact that with early analog modems, bps equalled the baud rate, i.e. 1 symbol = 1 bit. That would suggest a UI designer at some point simply mixed the terms, on the expectation that COM ports were going to be used to plug modems in.
Modems don't use a strict digital transmission method, but instead use techniques such as FSK, which allow a baud (your "symbol") to carry more than one bit of binary data. A phone line has a high-frequency limit of about 3300 Hz. If that were the cutoff, your modem couldn't send more than 2400 baud (bit rate). By shifting the signal within one cycle, it's able to transmit more than 1 bit per baud. Add 4 shifts and you raise the bit rate from 2400 to 9600.
At least that's what I remember from some 20 years ago.

How does Windows calculate remaining time on battery?

I'm testing an app to let users know when to plug and unplug their laptop to get the most life out of their laptop battery. As well as this I'm trying to replicate the tooltip from the Windows power meter.
It's fairly successful so far with a couple of differences.
The Windows time remaining notification, e.g. "X hr XX min (XX%) remaining", doesn't show up until after around a minute.
The Windows time remaining seems more stable under changing battery loads.
These lead me to think that the Windows time remaining algorithm is averaging over the past minute or so but I can't find any documentation of that. Does anyone know exactly what it does so I can reproduce it?
Here's my implementation (in Python but the question is language-agnostic). I'm thinking I'll need to average the most recent x discharge rates from polling every y seconds but need to know the values for x and y.
t = wmi.WMI(moniker = "//./root/wmi")
batts = t.ExecQuery('Select * from BatteryStatus where Voltage > 0')
time_left = 0
for _i, b in enumerate(batts):
    time_left += float(b.RemainingCapacity) / float(b.DischargeRate)
hours = int(time_left)
mins = 60 * (time_left % 1.0)
return '%i hr %i min' % (hours, mins)
Windows follows the ACPI specification, and given that the specification provides a method of calculating remaining battery time, I'd assume this is how they do it.
Edit: Found a somewhat confirming source.
I'm referring specifically to chapter 3.9.3 "Battery Gas Gauge".
Remaining Battery Percentage [%] = Battery Remaining Capacity [mAh/mWh] / Last Full Charged Capacity [mAh/mWh] * 100
If you need that in hours:
Remaining Battery Life [h] = Battery Remaining Capacity [mAh/mWh] / Battery Present Drain Rate [mA/mW]
This essentially divides the remaining charge by the present rate of discharge; you'll need to look at the ACPI spec to see how Windows implements it specifically.
The variables in question I'd assume would have to be queried from the battery controller and I'd let Windows handle all the compatibility issues there.
For this, there exist the Windows Management Instrumentation classes Win32_Battery and (probably more appropriate) Win32_PortableBattery. Upon some further digging, it seems like these classes calculate the remaining time for you and don't expose the current charge of the battery (probably to encourage people to have it calculated only one way, to avoid rounding issues, etc.). The closest "cool" thing you can do is estimate/calculate battery wear from FullChargeCapacity / DesignCapacity.
The next best thing I could find appears to be a lower-level API exposed through IOCTL_BATTERY_QUERY_INFORMATION, but it seems that this also doesn't give the current charge capacity in milliwatt-hours.
tl;dr Use the remaining times and percentages calculated for you by the above classes if possible :/
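As a minimal sketch, the pre-calculated values can be read with the same Python wmi module used in the question; the property names come from the Win32_Battery class and may be empty or unsupported on some hardware:
import wmi

c = wmi.WMI()  # default root\cimv2 namespace, where Win32_Battery lives
for battery in c.Win32_Battery():
    print("Charge remaining: %s%%" % battery.EstimatedChargeRemaining)
    print("Estimated run time: %s minutes" % battery.EstimatedRunTime)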
As an aside, some laptop manufacturers bundle their own tools that calculate time remaining by querying microcontroller implementations specific to their own batteries, and so are able to make a more informed, non-linear guesstimate about remaining battery life.
