Detecting a "zone exit" with iBeacon

After how much time does the iBeacon stack decide that the user has exited a zone covered by an iBeacon?
For example, if my beacon advertises 10 times per second for 5 s and then stops for 15 s, is a "zone exit" event fired?
How long can the signal be absent before an "exit zone" event is sent? 5 s? 2 s?

Short answer: 3 seconds. If iOS goes 3 seconds without seeing an iBeacon in a scan, it will fire a region-exit event.
Long answer: If your app is in the background, or if it is not ranging in the foreground, iOS will not be doing constant Bluetooth scans to look for iBeacons. The period between scans can be up to 15 minutes, so your region-exit event could arrive as much as 15 minutes and 3 seconds after the iBeacon was last seen.
See details on these measurements here: http://developer.radiusnetworks.com/2014/03/12/ios7-1-background-detection-times.html

Veins simulation running very slow

I am running a Veins simulation with 25 cars and 100 RSUs, and it runs awfully slowly. I tried the supplied example and the behaviour is the same. What can I do?
I have tried release mode, switching off animations, command mode, and increasing the number of parallel processes from 1 to 4. Nothing helps; even in express mode it slows down to milliseconds of simulated time per second.
Update: the simulation becomes slower as more messages are sent and received.
With 5 cars and 50 RSUs with a range of 500 meters, this is my simulation speed:
** Event #27359744 t=46.268980990815 Elapsed: 5387.036s (1h 29m) 46% completed
Speed: ev/sec=4094.07 simsec/sec=0.00195494 ev/simsec=2.09421e+006
Messages: created: 21616816 present: 20679 in FES: 20123
Currently using command mode.
I figured out why the simulation was slow: the event density was very high (ev/simsec=2.09421e+006). I debugged the code and found that for every message an RSU receives, it sends a message back. After removing this reply, the simulation runs much faster.

Event and transaction in VHDL (timing diagram)

I tried to solve the problem, but I got a different table than the one Xilinx shows. I attached both my answer and the real answer. Xilinx shows that "out" is 'U' until 36 ns and '1' after 36 ns. Can anyone help me understand why the "out" waveform is not assigned any value before 36 ns? (I think it should first be assigned at 20 ns.)
my answer
question
This turned out to be a really good question. I initially thought you had done something wrong when simulating, but then I ran my own simulation and got the same result.
It turns out that the a <= b after x assignment uses something called the "inertial time model" by default. In this mode scheduled events will be cancelled if b changes again before x time has passed. The purpose is to filter pulses shorter than the specified delay. In your case this is what the simulator will do:
At t=0, out is scheduled to change to 1 at t=20.
At t=12, tem1 or tem2 changes to 0. The scheduled change at t=20 is cancelled and a new change to 0 is scheduled at t=32.
At t=16, tem1 or tem2 changes back to 1. Again the scheduled change is cancelled and a new change is scheduled at t=36.
After this tem1 or tem2 remains at 1, so the change at t=36 is executed and out finally changes from U.
You can change to the "transport delay model" using out <= transport tem1 or tem2 after 20 ns; In this case your drawn waveform will match the simulation.

How to create a resettable timer?

In MIT App Inventor (similar to, but not the same as, Scratch), I need to create a timer that can be reset when an action happens, in order to complete an app. But I have been unable to find a way to make a resettable timer. Is there a way using this piece of software? This is a link to the App Inventor.
The first 4 blocks are the codes for when the player interacts/clicks one of the 4 colored boxes.
The last block is the code outside of the 4 .Click blocks.
By the way, there is a lot of redundancy in your blocks; see Enis' tips here on how to simplify this...
If you want to reset the clock, just set Clock.TimerEnabled = false and then set Clock.TimerEnabled = true again, and the clock will restart.
See also the following example blocks (let's assume you have a clock component and the timer interval is 10 seconds).
In the example I reset the clock after 5 seconds and, as you can see, the clock starts from the beginning...
You can download the test project from here

Failed TWI transaction after sleep on Xmega

We've had some trouble with TWI/I2C after waking up from sleep on the Atmel Xmega256A3. Instead of digging into the details of TWI/I2C, we decided to use the twi_master_driver supplied by Atmel with the AVR1308 application note.
The problem is one or a few failed TWI transactions just after waking up from sleep. On the I2C-bus connected to the XMega we have a few potentiometers, a thermometer and an RTC. The XMega acts as the only master on the bus.
We use the sleep functions found in AVRLIBC:
{code for turning of VCC to all I2C connected devices}
set_sleep_mode(SLEEP_MODE_PWR_DOWN);
sleep_enable();
sleep_cpu();
{code for turning on VCC to all I2C connected devices}
The XMega is woken from sleep by the RTC, which sets a pin high. After the XMega wakes, we want to set a value on one of the potentiometers, but this fails: the TWI transaction result is TWIM_RESULT_NACK_RECEIVED instead of TWIM_RESULT_OK for the first transaction. After that, everything seems to work again.
Have we missed anything here? Are there any known issues with the XMega, sleep and TWI? Do we need to reset the TWI or clear any flags after waking from sleep?
Best regards
Fredrik
There is a common problem on I2C/TWI where the internal state machine gets stuck in an intermediate state if a transaction is not completed fully. The slave then does not respond correctly when addressed on the next transaction. This commonly happens when the master is reset or stops outputting the SCK signal partway through a read or write. A solution is to toggle the SCK line manually 8 or 9 times before starting any data transactions, so that the internal state machines in the slaves are all reset to the start-of-transfer point and they are all then looking for their address byte.

Recording Returns - Voice Msg Too Short

I have an Electronic Workforce (EWF) application that records the caller speaking. The system needs to record for 120 seconds then play a message and hangup. I set a maximum length of 120 seconds and a minimum length of 1 second. I didn't want any input to disrupt the recording, so I checked "Discard Earlier User Input", "Tone Input Stops Recording" (with keys that stop recording = ""), and "Discard the Key".
I also added "VCE.RECORD.beeptime = 0" to the cta.cfg file to remove the beep before the recording. To the cta file I also added "VCE.RECORD.gain = 2" to increase the volume of the recordings and "VCE.RECORD.silencetime = 120000" to allow up to 120 seconds of silence if the user doesn't say anything to be recorded.
These settings all worked fine in my testing in that the only way I was able to get a file shorter than 120 seconds was to hangup early. Now that we have gone live though, customers seem to have found a way to get a file consistently five seconds long. We have about 120 recordings a day and about 10 a day are exactly five seconds long. The exception returned is "Voice Msg Too Short".
My question is how is this happening and what can I do (if anything) to prevent it?
User -BMM- on the Edify/Intervoice/Convergys customer forum gave me a good answer to this question. There are two settings that can cause a recording step to time out with the Voice Msg Too Short error:
VCE.RECORD.novoicetime = 0
VCE.RECORD.silencetime = 0
The value appears to be in milliseconds (the question's silencetime = 120000 corresponds to 120 seconds), and zero disables the timeouts entirely, so silence at the start of a recording and silence at the end no longer cause the exception to be thrown. That would also explain the consistent five-second files, if novoicetime was defaulting to five seconds for callers who stayed silent.
