iBeacon indoor map 'heat map'

I'm sure we have all heard of Apple's iBeacon by now... We've been working on a few projects using the technology and have been wondering about one usage I have seen others promoting: using the Bluetooth LE radios to create a dwell-time heat map of a space.
The concept sounds simple enough: place a BLE beacon in an area and, as people pass by, it 'counts' each person; those counts are then overlaid on a store map to show traffic patterns. That's the claim. I'm trying to figure out how that is possible.
The concept uses the passerby's mobile device as the 'trigger' for the count. There is no way to achieve this without the user having a particular app installed on their device, correct? The only feasible way I can see it working is if the user has an app installed and that app pings a web server every time it sees a beacon, which is then mapped. But that will also use data and battery on the mobile device, which will most likely lead the user to delete the app before long.
This also leaves a large number of passersby unaccounted for, making the results very difficult to quantify.
Am I wrong in this assumption? Is there something that I'm missing?

Your analysis of the possibilities and challenges of the technology is largely correct. My company, Radius Networks, has done similar traffic visualizations for large events.
A few points:
Even if most users do not have an app on their phone, the data are still valuable if there are enough to provide a representative statistical sample.
When using iBeacons for this purpose, you must accept quite coarse-grained locations, for two reasons:
The range of Bluetooth LE is about 50 meters.
Assuming the users will only be passively running the app in the background, beacon detection can take minutes on iOS.
Combining the two challenges above, you can really only use the technology to do this for very large venues.
The battery drain is not really a problem if the phone only wakes up every few minutes to report a beacon detection to a server.
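To make the app-side mechanics concrete, here is a minimal sketch in Swift (CoreLocation plus URLSession) of the reporting flow described above: the app monitors a beacon region in the background and posts entry/exit events to a collection server. The UUID, endpoint URL, and JSON fields are placeholders for illustration, not any particular vendor's API.

    import CoreLocation
    import Foundation

    class DwellReporter: NSObject, CLLocationManagerDelegate {
        let locationManager = CLLocationManager()
        // Placeholder collection endpoint and beacon UUID, for illustration only.
        let endpoint = URL(string: "https://example.com/api/beacon-events")!
        let storeUUID = UUID(uuidString: "2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6")!

        func start() {
            locationManager.delegate = self
            locationManager.requestAlwaysAuthorization()   // background monitoring needs "Always"
            // iOS 13+ initializer; earlier versions use init(proximityUUID:identifier:)
            let region = CLBeaconRegion(uuid: storeUUID, identifier: "store-entrance")
            region.notifyOnEntry = true
            region.notifyOnExit = true
            locationManager.startMonitoring(for: region)   // monitoring survives app suspension
        }

        func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
            report(event: "enter", regionId: region.identifier)
        }

        func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
            report(event: "exit", regionId: region.identifier)
        }

        private func report(event: String, regionId: String) {
            var request = URLRequest(url: endpoint)
            request.httpMethod = "POST"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")
            request.httpBody = try? JSONSerialization.data(withJSONObject: [
                "event": event,
                "region": regionId,
                "timestamp": Date().timeIntervalSince1970
            ] as [String: Any])
            URLSession.shared.dataTask(with: request).resume()
        }
    }

Note that background enter/exit events are coarse (one per region transition, with minutes of latency), which is exactly why the data are best treated as a statistical sample over a large venue rather than a precise per-shopper track.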

Related

Tap/NFC-like Eddystone Experience

(How) is it possible to have the Eddystone-URL provide functionality, similar to NFC, so that only a user in very close proximity can get the URL?
I've successfully tested sending the signal using the eddystone-beacon library on an Intel Bluetooth 4 enabled WiFi card. But I find that I can receive the signal from far away (20+ m), when I'd like to limit it to within one meter.
The library has an option to attenuate the power (txPowerLevel: -22, // override TX Power Level), but I find that changing this only affects the distance calculation, not the ability to receive the signal.
Is this perhaps a limitation of the hardware (maybe a dedicated USB dongle would allow more control)?
Eddystone-URL is not designed to work this way using Google's standard services. However, it is possible to do what you want if you have a dedicated app on the mobile device that detects the beacon.
If this is an option for you, then you won't want to reduce the transmitter power on your hardware device. Even if you get hardware that allows this, sending a very weak signal will lead to unpredictable minimum detection ranges of 3 feet or more on devices with strong receivers, and no detections at all (even when touching the beacon) on devices with weak receivers.
Instead, leave it at the maximum transmission power and then filter for a strong RSSI on the receiving device, showing the detection only when the RSSI meets a threshold. You'll still have trouble with varying strengths of receivers, but it is much more predictable. I have used this technique combined with a device database that tracks the strongest signal level seen for a device model, so I know what RSSI a specific device model will detect when it is right next to the beacon.
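As a rough illustration of that RSSI-threshold filter (a sketch of the general technique, not the Android Beacon Library API), here is a CoreBluetooth scanner that only reacts to an Eddystone-URL frame when the signal is strong enough to imply near-touch range; the -45 dBm cutoff is an assumed value you would tune per receiver model.

    import CoreBluetooth

    class NearOnlyEddystoneScanner: NSObject, CBCentralManagerDelegate {
        let eddystoneService = CBUUID(string: "FEAA")
        let nearThresholdDbm = -45                 // assumed cutoff; tune per device model
        var central: CBCentralManager!

        override init() {
            super.init()
            central = CBCentralManager(delegate: self, queue: nil)
        }

        func centralManagerDidUpdateState(_ central: CBCentralManager) {
            guard central.state == .poweredOn else { return }
            // Allow duplicates so we keep getting fresh RSSI readings as the user moves.
            central.scanForPeripherals(withServices: [eddystoneService],
                                       options: [CBCentralManagerScanOptionAllowDuplicatesKey: true])
        }

        func centralManager(_ central: CBCentralManager,
                            didDiscover peripheral: CBPeripheral,
                            advertisementData: [String: Any],
                            rssi RSSI: NSNumber) {
            // Ignore weak (far away) signals and the occasional bogus positive reading.
            guard RSSI.intValue > nearThresholdDbm, RSSI.intValue < 0 else { return }
            if let serviceData = advertisementData[CBAdvertisementDataServiceDataKey] as? [CBUUID: Data],
               let frame = serviceData[eddystoneService],
               frame.first == 0x10 {               // 0x10 = Eddystone-URL frame type
                // The compressed URL starts at byte 2 of the frame; decode and act on it only here.
                print("Near Eddystone-URL beacon, RSSI \(RSSI)")
            }
        }
    }

Averaging the RSSI over a short window before comparing against the threshold makes the trigger noticeably less jumpy.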
If you are game for this approach, you can use the Android Beacon Library to detect Eddystone-URL for your app on Android devices, and the iOS Beacon tools on iOS devices.

How to modify the TI SensorTag CC2650 Firmware to speed up data transfer?

I'd like to modify the SensorTag software from TI for the CC2650STK kit so that it speeds up both the reading and the transmission of the sensor values.
Do I need to modify only the sensor software (CCS BLE Sensor stack from TI), or also the Android app?
I principally need only one temperature reading, so the other sub-question is: how can the other sensors be deactivated if they are not needed, or if they conflict with the higher speed of the temperature sensor?
What do you mean by "speeding up"?
There are a number of different things you might mean.
Reduce the latency between opening the mobile app and displaying a reading.
Refactor the mobile app to make it simpler to get new readings.
Increase the frequency with which notifications are sent by the device, if you use it in that way.
Change the firmware interaction with the sensors to obtain a reading.
Each of these meanings entails a different approach.
The period for each sensor is described in the User Guide that you reference and is typically between hundreds of milliseconds and one or two seconds. Do you really need readings more frequently than that? Typically each sensor will need an amount of time in order to obtain a reliable reading. This would be described in the sensor data sheet, along with options for working with the sensor.
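If the goal is simply faster temperature notifications, the stock firmware already exposes a period characteristic per sensor that the mobile app can write, so no firmware change is needed for that case. Below is a sketch in Swift/CoreBluetooth; the UUIDs follow TI's published CC2650 SensorTag attribute table and the period register is in 10 ms units, but verify both against your firmware build and the sensor's minimum period in the data sheet.

    import CoreBluetooth

    // UUIDs per TI's CC2650 SensorTag attribute table (verify against your firmware build).
    let irTempService = CBUUID(string: "F000AA00-0451-4000-B000-000000000000")
    let irTempConfig  = CBUUID(string: "F000AA02-0451-4000-B000-000000000000")
    let irTempPeriod  = CBUUID(string: "F000AA03-0451-4000-B000-000000000000")

    // Call this from didDiscoverCharacteristicsFor once the IR temperature service is discovered.
    func configureTemperature(on peripheral: CBPeripheral, service: CBService) {
        for characteristic in service.characteristics ?? [] {
            switch characteristic.uuid {
            case irTempConfig:
                // 0x01 enables the sensor; leaving other sensors' config characteristics at 0x00 keeps them off.
                peripheral.writeValue(Data([0x01]), for: characteristic, type: .withResponse)
            case irTempPeriod:
                // Period register is in 10 ms units; 0x1E = 300 ms, the documented minimum for this sensor.
                peripheral.writeValue(Data([0x1E]), for: characteristic, type: .withResponse)
            default:
                break
            }
        }
    }

This also speaks to the sub-question about the other sensors: on the stock firmware a sensor only samples (and draws power) after its config characteristic is enabled, so simply never enabling the others effectively deactivates them.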
More generally, 'speed' will be a function of the Bluetooth handshake, the throughput available over the physical radio link, the processing within the SensorTag and the processing within the sensors. I would expect the most variable part of this to be the physical link.
It is up to the mobile app to decide which sensor services it wishes to use.
Have you studied the Software Developer's Guide, available at the same page as the BLE Stack?

Are iBeacon advertising IDs unique?

We are discussing a large-scale deployment scenario with iBeacons in several locations across the country. The question was raised as to whether the IDs with which iBeacons advertise their presence are unique, because our client wants to be really sure that the app only responds to specific iBeacons and not to something else that is impersonating them with the same ID (even if inadvertently).
If they are not unique, does the protocol allow iBeacons to advertise any additional authentication information?
It is absolutely possible to impersonate another iBeacon. I went to the Apple Store in Washington DC with a copy of the Android iBeacon Locate app, and used it to scan the identifiers of the iBeacons in Apple's store. I then went back to my office and configured my own iBeacon to transmit this same three-part identifier, and was able to make my iPhone get the same in-store messaging from Apple. You cannot stop other people from doing this if they really want to. But the good news is that for most use cases, there isn't a real motivation for other people to do this.
That said, an inadvertent overlap of iBeacon identifiers is extremely unlikely. If you generate your own ProximityUUID using a standard UUID generator, the odds of another generated ProximityUUID being accidentally the same are infinitesimally small -- less than the odds of being hit by a meteorite.
Standard iBeacons do not have any other authentication mechanism. They are connectionless, transmit-only devices that only send out a three-part identifier (Proximity UUID, Major, Minor) and a transmitter power calibration value.
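For reference, that three-part identifier plus the power calibration byte is the entire payload, laid out in Apple's manufacturer-specific advertisement data. Here is a small Swift sketch that parses it from raw manufacturer data; this is useful on scanners that expose the raw bytes (iOS itself hides iBeacon frames from CoreBluetooth, so on iOS you would use CLBeaconRegion ranging instead).

    import Foundation

    // iBeacon manufacturer data layout: company ID 0x004C (little-endian),
    // type 0x02, length 0x15, 16-byte proximity UUID, 2-byte major, 2-byte minor,
    // 1-byte calibrated TX power (signed dBm measured at 1 m).
    struct IBeaconFrame {
        let proximityUUID: UUID
        let major: UInt16
        let minor: UInt16
        let measuredPower: Int8
    }

    func parseIBeacon(manufacturerData: Data) -> IBeaconFrame? {
        let b = [UInt8](manufacturerData)
        guard b.count >= 25,
              b[0] == 0x4C, b[1] == 0x00,          // Apple company identifier
              b[2] == 0x02, b[3] == 0x15           // iBeacon type and payload length
        else { return nil }
        let uuid = NSUUID(uuidBytes: Array(b[4..<20])) as UUID
        let major = UInt16(b[20]) << 8 | UInt16(b[21])
        let minor = UInt16(b[22]) << 8 | UInt16(b[23])
        return IBeaconFrame(proximityUUID: uuid,
                            major: major,
                            minor: minor,
                            measuredPower: Int8(bitPattern: b[24]))
    }

Anyone within radio range can read these bytes, which is why the identifier alone cannot serve as an authentication token.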
I work on the beacons at Gelo ( http://www.getgelo.com ). Payload confidentiality and anti-spoofing are very large concerns for a few of our customers.
UUIDs themselves are not guaranteed to be unique. It is entirely possible to spoof a UUID and all of its advertisement data (including major/minor). This presents a number of security risks.
There are rotational UUID schemes that some beacon manufacturers employ, in which the UUID itself is changed every X seconds, minutes, or hours. This means that someone wanting to intercept and/or spoof the beacon would either need to be in the same location as the original device and constantly match the new values, or figure out the rotation scheme or algorithm.
The problem with the rotational UUID approach is that it doesn't protect the payload (the advertising message or the scan response), so an attacker could mimic another beacon and change the value(s) being sent. Depending on what the beacon communicates and how it's used by any listening devices (observers or centrals, in BLE terms) or consuming applications, this might not be a problem at all, or it could be a very large one.
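As a sketch of what such a rotation scheme can look like in general (an illustration of the technique, not any particular vendor's implementation), both the beacon and the verifying backend can derive the current identifier from a shared secret and the current time slot, for example as a truncated HMAC:

    import CryptoKit
    import Foundation

    // Placeholder provisioning secret and rotation policy; a real deployment would
    // provision a per-beacon secret and agree on the slot length out of band.
    let sharedSecret = SymmetricKey(data: Data("per-beacon-provisioned-secret".utf8))
    let slotLength: TimeInterval = 300             // rotate every 5 minutes

    // Both sides compute this; the verifier also checks the neighbouring slots
    // to tolerate clock drift between beacon and backend.
    func rotatedIdentifier(at date: Date = Date()) -> Data {
        var slot = UInt64(date.timeIntervalSince1970 / slotLength).bigEndian
        let slotBytes = withUnsafeBytes(of: &slot) { Data($0) }
        let mac = HMAC<SHA256>.authenticationCode(for: slotBytes, using: sharedSecret)
        return Data(mac).prefix(16)                // truncate to a UUID-sized identifier
    }

Note that this only authenticates the identifier itself; as pointed out above, the rest of the advertised payload is still unprotected unless it is covered by a MAC of its own.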
We've spent time researching how to mitigate the risk at all levels while taking power consumption into account, because most BLE beacons run on batteries and you want to extend the battery life as much as possible. We've come up with an approach that successfully mitigates the risk for an international organization with nearly 100k locations.
Solving this problem is possible, and it's something that we've been working on. If this is what you're looking for, give Gelo a call or email. We may be able to help you.
There is definitely no "UUID anti-spoofing" in place in iBeacon technology. In fact, many developers make the situation even worse and just use the default UUID provided by the iBeacon vendor. As a result, whenever you go near, let's say, an Estimote iBeacon, you may see an app that is not valid in the current context, which just adds to users' confusion.
You can help prevent this issue and keep the environment cleaner by using a globally unique proximity UUID generator and catalogue for your deployment.
See our OpenUUID service, which aims to do exactly that...
iBeacon IDs are 20 bytes (a 16-byte UUID plus a 2-byte "Major" number and a 2-byte "Minor" number). The odds that someone will guess or accidentally choose all 20 bytes exactly the same AND be in range of the same beacon at the same time are extremely small. The combination of the near-unique number and the relatively short range of the BLE signal makes an accidental collision pretty unlikely.
In addition to sensing the above-mentioned parameters, you can usually get info about the beacon's MAC address. If it's based on one of the more common circuits, such as the TI CC254x chips, the MAC address is hardcoded and unique to every chip, so that one is less easy to spoof.
One typical idea, if you are both the beacon deployer and the app provider, is to program some custom service/characteristic into the beacon as well, so that your app can connect to it and verify it's a known beacon. BUT if you allow connections at all, the beacon becomes extremely sensitive to a denial-of-service attack. Most beacons are single-tasking and cannot advertise an ID and handle a connection attempt at the same time, so some dark force could install "beacon timewaster modules" in the vicinity that keep your beacons busy talking to a waster rather than radiating the ID you want them to.
Those rotating UUID schemes may be good enough in a hostile environment. For the most part I would say the beacons are likely to work pretty much undisturbed. It is very easy to develop a beacon quality monitoring app or a custom BLE device that keeps listening for the deployed beacons and reports on their uptime. That way the deployer of a farm of beacons will be alerted if a node goes out of service.

Autonomous behaviour via orbBasic or streaming?

The orbBasic language is suggested in this interview as a good way for kids to get hands-on control of the Sphero.
What are the limitations of orbBasic? Does it achieve the same 1 ms granularity as macros?
In what range of time granularity would it be equally acceptable to stream the data versus execute orbBasic?
Can the stabilization of Sphero's motion be programmed with orbBasic? With data streaming?
You can read all about the abilities of orbBasic in our online document here:
https://github.com/orbotix/DeveloperResources/tree/master/docs
But in short, you can run about 9,000 lines of code/sec so it's 9x the density of macros but with more power. You can use print statements to send data back to the Bluetooth client but you have to make sure you don't exceed some rational limits; orbBasic can generate data faster than Bluetooth can transmit it to some devices.
Stabilization can be turned on and off in orbBasic, and when on you can generate your own roll commands that are processed exactly as if they came from a smartphone.
Just to be clear, data streaming is just an automated way of retrieving sensor data from Sphero without having to continually ask for it. You can use it to examine the motion of Sphero but you cannot "control" Sphero with it (since that implies sending commands to the robot; data streaming is just reading).
Dan Danknick
FW Engineer, Orbotix

Average User Download Speeds

Any idea what the average user's download speed is? I'm working on a site that streams video and am trying to figure out what an average download speed is, so as to determine quality.
I know I might be comparing apples with oranges, but I'm just looking for something to use as a basis for where to start.
Speedtest.net has a lot of stats broken down by country, region, city and ISP. Not sure about accuracy, since it's only based on the people using their "bandwidth measurement" service.
It would depend on the geography that you are targeting. For example, in India, you can safely assume it would be a number below 256kbps.
Try attacking it from the other angle. Look at streaming services that cater to the customers you want and have significant volume (maybe YouTube), and see what they're pushing. You'll find there's a pretty direct correlation between Alexa rating (popularity) and quality (minimum bitrate required). Vimeo will always have fewer users than YouTube because the user experience is poor for low-bitrate users.
There are many other factors, and this should only form one small facet of your bandwidth decision, but it's a useful comparison to make.
Keep in mind, however, that you want to degrade gracefully. As more and more sites come online you'll start bumping into ISPs that limit total transfer, and being able to tell your customers how much of their bandwidth your site is consuming is useful, as well as proclaiming that you are a low bandwidth site.
Further, more and more users are using portable cellular connections (iPhone) where limited bandwidth is a big deal. AT&T has oversold many markets so being able to get useful video through a tiny link will enable you to capture market that vimeo and Hulu cannot.
Quite frankly, though, the best thing to do is degrade on the fly gracefully. Measure the bandwidth of the connection continuously and adjust bandwidth as needed for a smooth playback experience with good audio. Then you can take all users across the gamut...
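A rough throughput probe along these lines can seed the initial quality choice, with ongoing measurements during playback refining it; the probe URL and bitrate ladder below are placeholders.

    import Foundation

    // Time a small test download, estimate throughput, and pick the highest
    // rendition that still leaves headroom for dips.
    let renditionsKbps = [300, 700, 1500, 3000]                          // example bitrate ladder
    let probeURL = URL(string: "https://example.com/probe-256kb.bin")!   // placeholder probe file

    func pickStartingBitrate(completion: @escaping (Int) -> Void) {
        let start = Date()
        URLSession.shared.dataTask(with: probeURL) { data, _, _ in
            guard let data = data, !data.isEmpty else {
                completion(renditionsKbps[0])                            // no measurement: start low
                return
            }
            let seconds = Date().timeIntervalSince(start)
            let measuredKbps = Double(data.count) * 8.0 / 1000.0 / seconds
            // Require roughly 1.5x headroom so playback survives short throughput dips.
            let choice = renditionsKbps.last { Double($0) * 1.5 < measuredKbps } ?? renditionsKbps[0]
            completion(choice)
        }.resume()
    }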
-Adam
You could try looking at the lower tier offerings from AT&T and Comcast. Probably 1.5 Mbps for the basic level (which I imagine most people get).
The "test your bandwidth" sites may have some stats on this, too.
There are a lot of factors involved (server bandwidth, local ISP, network in between, etc.) which make it difficult to give a hard answer. With my current ISP, I typically get 200-300 kB/sec, although when the planets align I've gotten as much as 2 MB/sec (the "quoted" peak downlink speed). That was with parallel streams, however. The peak bandwidth I've achieved on a single stream is 1.2 MB/sec.
The best strategy is always to give your users options. Why don't you start the stream at a low bitrate that will work for everyone and provide a "High Quality" link for those of us with FTTH connections? I believe YouTube has started doing this.
According to CWA, the average US resident has a 1.9Mbps download speed. They have data by state, so if you have money then you can probably get a more specific report for your intended audience. Keep in mind, however, that more and more people are sharing this with multiple computers, using VOIP devices, and running background processes that consume bandwidth.
-Adam
Wow.
This is so dependent on the device, connection method, connection type, ISP throttling, etc. involved in the end-to-end link.
Trying to work out an average speed would be nearly impossible.
Think fat pipe at home (8 Gb plus) versus the bad wireless connection provided for free at the airport (9.6 kb), and you start to get an idea of the range of connections you're trying to average over.
Then we move onto variations in screen sizes and device capabilities.
Maybe trawl the UA strings of incoming connections to get an idea of the capabilities of the user devices out there.
Maybe see if you can use some sort of geolocation solution to try and see how people are connecting to your site to get an idea of connection capabilities as well.
Are you offering the video in a fixed format, i.e. X x Y pixel size?
HTH.
cheers,
Rob
If I'm using your site, "average" doesn't matter. All I care about is MY experience, so you either need to make the site adaptive, design for a pretty low speed (iPhone 2G gets you 70-80 kbps if you're lucky, to take one common case), or be very clear about the requirements so I can decide whether my connection-of-the-moment will work.
What you don't want to subject your users to is unpredictably choppy, intermittent video and audio.
