I noticed that a beacon didn't detect its own transmission. Does that mean the beacon alternates between transmitting and scanning, and turns off the transmission while it scans? If so, does that affect the transmission rate, even slightly, if I am using a transmission rate of 10 Hz?
No, transmission does not affect scanning. Bluetooth radios are designed to both scan on multiple channels (slightly different frequencies) and transmit on those same channels. There is a channel hopping mechanism that governs this.
Devices by design are not supposed to pick up their own advertisements in scans. This is a separate issue from the one described above.
I would like to create an indoor positioning system. I hope the system can collect all beacon signals and build a map automatically. However, I know that beacons that are far away may not be detectable. Is it therefore possible to discover far-away beacons through other beacons? That is, can beacons relay their signals via other beacons?
Sorry, but no, it is not possible to use a beacon to relay signals of more distant beacons. Bluetooth beacons are extremely simple devices that just transmit a unique identifier. They are transmit only, and therefore completely unaware of other beacons around them.
I am using Estimote beacons to determine if something moved. This is done by monitoring which beacons are in range (MonitoringListener) and which beacons have moved (TelemetryListener).
The problem is that EstimoteTelemetry has a field UniqueId, but Beacon uses UUID, Major and Minor to identify the unique beacon. EstimoteTelemetry does not broadcast UUID, Major and Minor.
I need to know which beacon is broadcasting the telemetry packets. I can't see any fields that are the same in both. Anyone know how to do this on Android or iOS?
As you mentioned, there are no data fields shared between the iBeacon and telemetry packets. They are completely independent packets and contain different sets of information. It is not possible to include the iBeacon identification in the telemetry packet: it would take too much space, so the telemetry data would be extremely limited.
If you need to collect both packets and keep them together, a look-up table in your app or server is the only solution. Estimote does not provide this kind of functionality.
Each Estimote beacon has a single, non-changing identifier (16 bytes) assigned during production. The telemetry packet contains the first half of it (8 bytes). You need to create a table where these 8 bytes map to the exact iBeacon identification you use.
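For illustration, a minimal version of such a look-up table might look like the C sketch below. The structure name, the identifier bytes and the UUID/major/minor values are placeholders, not real Estimote data.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical mapping entry: the 8-byte prefix of the 16-byte Estimote
     * device identifier (as seen in telemetry packets), keyed to the iBeacon
     * UUID/major/minor configured on the same physical beacon. */
    typedef struct {
        uint8_t  shortId[8];   /* first half of the Estimote identifier */
        char     uuid[37];     /* iBeacon proximity UUID as text        */
        uint16_t major;
        uint16_t minor;
    } BeaconMapping;

    /* Table filled by hand (or downloaded from a server) at deployment time.
     * The entries below are placeholders. */
    static const BeaconMapping kTable[] = {
        { {0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08},
          "B9407F30-F5F8-466E-AFF9-25556B57FE6D", 100, 1 },
        { {0x11,0x12,0x13,0x14,0x15,0x16,0x17,0x18},
          "B9407F30-F5F8-466E-AFF9-25556B57FE6D", 100, 2 },
    };

    /* Returns the iBeacon identity for a telemetry short ID, or NULL. */
    static const BeaconMapping *lookup(const uint8_t shortId[8])
    {
        for (size_t i = 0; i < sizeof kTable / sizeof kTable[0]; i++)
            if (memcmp(kTable[i].shortId, shortId, 8) == 0)
                return &kTable[i];
        return NULL;
    }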
(How) is it possible to have Eddystone-URL provide functionality, similar to NFC, where only a user within close proximity is able to get the URL?
I've been testing with the eddystone-beacon library on an Intel Bluetooth 4 enabled WiFi card, and can send the signal successfully. But I find that I can receive the signal from far (20+ m) away, when I'd like to limit it to within one meter.
The library has an option to attenuate the power (txPowerLevel: -22, // override TX Power Level), but I find that changing this only affects the distance calculation, not the ability to receive the signal.
Is this perhaps an issue with the hardware (maybe a dedicated USB adapter would allow control)?
Eddystone-URL is not designed to work this way using Google's standard services. However, it is possible to do what you want if you have a dedicated app on the mobile device that detects the beacon.
If this is an option for you, then you won't want to reduce the transmitter power on your hardware device. Even if you get hardware that allows this, sending a very weak signal will lead to unpredictable minimum detection ranges of 3 feet or more on devices with strong receivers, and no detections at all (even when touching the beacon) on devices with weak receivers.
Instead, leave it at the maximum transmission power and then filter for a strong RSSI on the receiving device, showing the detection only when the RSSI meets a threshold. You'll still have trouble with varying strengths of receivers, but it is much more predictable. I have used this technique combined with a device database that tracks the strongest signal level seen for a device model, so I know what RSSI a specific device model will detect when it is right next to the beacon.
If you are game for this approach, you can use the Android Beacon Library to detect Eddystone-URL for your app on Android devices and the iOS Beacon tools on iOS devices.
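As a rough sketch of that RSSI-threshold idea, the filter on the receiving side could look something like this in C. The model names, RSSI numbers and function names are made up for illustration; real values would come from the device database mentioned above.

    #include <string.h>
    #include <stdbool.h>

    /* Hypothetical calibration entry: the strongest RSSI (dBm) observed for a
     * given phone model when held directly against the beacon. */
    typedef struct {
        const char *model;
        int         rssiAtContact;
    } RssiCalibration;

    static const RssiCalibration kCalib[] = {
        { "model-a", -35 },   /* placeholder numbers, not measurements */
        { "model-b", -40 },
    };

    /* Show the URL only when the packet is within `margin` dB of the strongest
     * reading this device model can produce, i.e. roughly touching distance. */
    static bool close_enough(const char *model, int rssi, int margin)
    {
        int contact = -45;    /* fallback when the model is unknown */
        for (size_t i = 0; i < sizeof kCalib / sizeof kCalib[0]; i++)
            if (strcmp(kCalib[i].model, model) == 0)
                contact = kCalib[i].rssiAtContact;
        return rssi >= contact - margin;
    }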
I am using a 32-bit AVR microcontroller (AT32UC3A3256) with High speed USB support. I want to stream data regularly from my PC to the device (without acknowledgement of the data), so exactly like a USB audio interface, except the data I want to send isn't audio. Such an interface is described here: http://www.edn.com/design/consumer/4376143/Fundamentals-of-USB-Audio.
I am a bit confused about USB isochronous transfers. I understand how a single transfer works, but how and when is the next subsequent transfer planned? I want a continuous stream of data that is calculated a little ahead of time, but streamed with minimum latency and without interruptions (except some occasional data loss). From my understanding, Windows is not a realtime OS so I think the transfers should not be planned with a timer every x milliseconds, but rather using interrupts/events? Or maybe a buffer needs to be filled continuously with as much data as there is available?
I think my question is still about the concepts of USB and not code-related, but if anyone wants to see my code, I am testing and modifying the "USB Vendor Class" example in the ASF framework of Atmel Studio, which contains the firmware source for the AVR and the source for the Windows EXE as well. The Windows example program uses libusb with a supplied driver.
Stephen -
You say "exactly like USB Audio"; but beware! The USB Audio class is very, very complicated because it implements a closed-loop servo system to establish long-term synchronisation between the PC and the audio device. You probably don't need all of that in your application.
To explain a bit more about long-term synchronisation: The audio codec at one end (e.g. the USB headphones) may run at a nominal 48KHz sampling rate, and the audio file at the other end (e.g. the PC) may be designed to offer 48 thousand samples per second, but the PC and the headphones are never going to run at exactly the same speed. Sooner or later there is going to be a buffer overrun or under-run. So the USB audio class implements a control pipe as well as the audio pipe(s). The control pipe is used to negotiate a slight speed-up or slow-down at one end, usually the Device end (e.g. headphones), to avoid data loss. That's why the USB descriptors for audio device class products are so incredibly complex.
If your application can tolerate a slight error in the speed at which data is delivered to the AVR from the PC, you can dispense with the closed-loop servo. That makes things much, much simpler.
You are absolutely right in assuming the need for long-term buffering when streaming data using isochronous pipes. A single isochronous transfer is pointless - you may as well use a bulk pipe for that. The whole reason for isochronous pipes is to handle data streaming. So a lot of look-ahead buffering has to be set up, just as you say.
I use LibUsbK for my iso transfers in product-specific applications which do not fit any preconceived USB classes. There is reasonably good documentation at libusbk for iso transfers. In short - you decide how many bytes per packet and how many packets per transfer. You decide how many buffers to pre-fill (I use five), and offer the libusbk driver the whole lot to start things going. Then you get callbacks as each of those buffers gets emptied by the driver, so you can fill them with new data. It works well for me, even though I have awkward sampling rates to deal with. In my case I set up a bunch of twenty-one packets where twenty of them carry 40 bytes and the twenty-first carries 44 bytes!
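The callback-driven buffer ring Tony describes might look roughly like the sketch below. The names (submit_iso_transfer, fill_from_source, on_transfer_complete) are placeholders for whatever your driver wrapper and data source provide, not the actual libusbk API.

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_BUFFERS      5    /* buffers pre-filled before streaming starts */
    #define PACKETS_PER_XFER 21   /* as in the example above */
    #define BYTES_PER_PACKET 44   /* worst-case packet size */

    typedef struct {
        uint8_t data[PACKETS_PER_XFER * BYTES_PER_PACKET];
        size_t  length;
    } IsoBuffer;

    static IsoBuffer buffers[NUM_BUFFERS];

    /* Placeholder for whatever produces your stream (look-ahead data). */
    extern size_t fill_from_source(uint8_t *dst, size_t max);

    /* Placeholder for the driver call that queues one iso transfer. */
    extern void submit_iso_transfer(IsoBuffer *b);

    static void prepare(IsoBuffer *b)
    {
        b->length = fill_from_source(b->data, sizeof b->data);
    }

    /* Called each time the driver has emptied one of the queued buffers:
     * refill it and hand it straight back so the pipe never runs dry. */
    void on_transfer_complete(IsoBuffer *b)
    {
        prepare(b);
        submit_iso_transfer(b);
    }

    void start_streaming(void)
    {
        /* Pre-fill and queue every buffer before the stream starts, so the
         * host controller always has data scheduled for upcoming frames. */
        for (int i = 0; i < NUM_BUFFERS; i++) {
            prepare(&buffers[i]);
            submit_iso_transfer(&buffers[i]);
        }
    }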
Hope that helps
- Tony
This question is related to microcontroller programming, but anyone may suggest a good algorithm to handle this situation.
I have one central console and a set of remote sensors. The central console has a receiver, and each sensor has a transmitter that operates on the same frequency. So we can only implement simplex communication.
Since the transmitters work on the same frequency, we cannot have two sensors sending data to the central console at the same time.
Now I want to program the sensors to perform some "polling". The central console should get some idea about the existence of the sensors (whether each sensor is responding or not).
I can imagine several ways.
Use the same interval between poll messages for each sensor and start the sensors at random times, so they do not transmit at the same time.
Use some round-robin mechanism: sensor 1 starts polling at 5 seconds, the second at 10 seconds, etc. A more formal version of method 1.
The maximum data transfer rate is around 4800 bps so we need to consider that as well.
Can someone suggest a good way to resolve this with minimal usage of the communication link? Note that we can use different poll intervals for each sensor if necessary.
I presume what you describe is that the sensors and the central unit are connected to a bus that can deliver only one message at a time.
A normal way to handle this is to have collision detection. This is, for example, how Ethernet operates as far as I know. You try to send a message, then attempt to detect a collision. If you detect a collision, wait a random amount of time (to break ties) and then re-transmit, again with a collision check.
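As a rough illustration, a transmitter-side retry loop along those lines could look like the sketch below. send_and_check_collision and delay_ms are hypothetical firmware primitives, and the doubling of the backoff window is a common refinement (as in Ethernet) rather than a requirement.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical firmware primitives. */
    extern bool send_and_check_collision(const uint8_t *msg, size_t len);
    extern void delay_ms(unsigned ms);

    /* Send with collision detection and random backoff. */
    void send_with_backoff(const uint8_t *msg, size_t len)
    {
        unsigned max_wait = 8;                       /* initial backoff window, ms */
        while (!send_and_check_collision(msg, len)) {
            delay_ms((unsigned)(rand() % max_wait) + 1);
            if (max_wait < 512)
                max_wait *= 2;                       /* widen the window each time */
        }
    }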
If you can't detect collisions, the different sensors could have polling intervals that are all distinct prime numbers. This would guarantee that every sensor has dedicated slots for successful polling. Of course there would still be collisions, but they wouldn't need to be detected. Here is an example with primes 5, 7 and 11:
----|----|----|----|----|----|----|----| (5)
------|------|------|------|------|----- (7)
----------|----------|----------|------- (11)
                                  ^ collision between (5) and (7)
Notably, it doesn't matter whether the sensors start "in phase" or "out of phase".
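To see how the prime intervals behave, here is a tiny stand-alone simulation (plain C, nothing hardware-specific) that counts how many sensors transmit on each tick for the periods 5, 7 and 11.

    #include <stdio.h>

    int main(void)
    {
        const int period[3] = { 5, 7, 11 };

        for (int t = 1; t <= 60; t++) {
            int talkers = 0;
            for (int s = 0; s < 3; s++)
                if (t % period[s] == 0)
                    talkers++;

            if (talkers == 1)
                printf("t=%2d: one sensor heard cleanly\n", t);
            else if (talkers > 1)
                printf("t=%2d: collision (%d sensors)\n", t, talkers);
        }
        return 0;
    }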
I think you need to look into a collision detection system (a la Ethernet). If you have time-based synchronization, you rely on the clocks on the console and sensors never drifting out of sync. This might be OK if they are connected to an external, reliable time reference, or if you go to the expense of adding a battery-backed RTC to each one.
Consider using all or part of an existing protocol, unless protocol design is an end in itself - apart from saving time you reduce the probability that your protocol will have a race condition that causes rare irreproducible bugs.
A lot of protocols for this situation have the sensors keeping quiet until the master specifically asks them for the current value. This makes it easy to avoid collisions, and it makes it easy for the master to request retransmissions if it thinks it has missed a packet, or if it is more interested in keeping up to date with one sensor than with others. This may even give you better performance than a system based on collision detection, especially if commands from the master are much shorter than sensor responses. If you end up with something like Alohanet (see http://en.wikipedia.org/wiki/ALOHAnet#The_ALOHA_protocol) you will find that the tradeoff between not transmitting very often and having too many collisions forces you to use less than 50% of the available bandwidth.
Is it possible to assign a unique address to each sensor?
In that case you can implement a Master/Slave protocol (like Modbus or similar), with all devices sharing the same communication link:
Master is the only device which can initiate communication. It can poll each sensor separately (one by one), by broadcasting its address to all slaves.
Only the slave device which was addressed will reply.
If there is no response after a certain period of time (timeout), the device is not available and the Master can poll the next device.
See also: List of automation protocols
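A minimal master-side sketch of such a poll loop, assuming a small network of addressed slaves and hypothetical send_poll/receive_reply radio primitives, could look like this:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NUM_SENSORS   8
    #define REPLY_TIMEOUT 100   /* ms to wait for the addressed sensor's reply */

    /* Hypothetical radio primitives supplied by the platform. */
    extern void send_poll(uint8_t address);
    extern bool receive_reply(uint8_t address, uint8_t *buf, size_t len,
                              unsigned timeout_ms);

    /* Address each slave in turn; a missing reply within the timeout marks
     * that sensor as unavailable, and the master simply moves on. */
    void poll_all_sensors(bool online[NUM_SENSORS])
    {
        uint8_t reply[16];

        for (uint8_t addr = 0; addr < NUM_SENSORS; addr++) {
            send_poll(addr);
            online[addr] = receive_reply(addr, reply, sizeof reply, REPLY_TIMEOUT);
        }
    }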
I worked with some Zigbee systems a few years back. It only had two sensors, so we just hard-coded them with different wait times and had them always respond to requests. Zigbee has built-in systems for this, but we also considered something along the lines of the following:
Start out with an announcement from the console 'Hey everyone, let's make a network!'
Nodes all attempt to respond with something like 'I'm hardware address x, can I join?'
At first it's crazy, but with some random retry times, eventually the console responds to all nodes: 'Yes hardware address x, you can join. You are node #y and you will have a wait time of z milliseconds from the time you receive your request for data'
Then it should be easy. Every time the console asks for data, the nodes respond in their turn. Assuming transmission of all of the data takes less time than the polling period, you're set. It's best not to acknowledge the messages. If the console fails to respond, then very likely the node will try to retransmit just when another node is trying to send data, messing both of them up. Then it snowballs into complete failure...
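A rough node-side sketch of that join-then-respond scheme, with made-up message types and radio/timer primitives (not a Zigbee API), might look like this:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    enum { MSG_ANNOUNCE, MSG_JOIN_REQUEST, MSG_JOIN_ACCEPT,
           MSG_DATA_REQUEST, MSG_DATA_REPLY };

    /* Hypothetical platform primitives. */
    extern bool     radio_receive(uint8_t *type, uint8_t *payload, unsigned timeout_ms);
    extern void     radio_send(uint8_t type, const void *payload, size_t len);
    extern void     delay_ms(unsigned ms);
    extern unsigned random_ms(unsigned max);
    extern size_t   read_sensor(uint8_t *buf, size_t max);

    /* Join once, then answer every data request after the assigned wait time. */
    void node_main(uint64_t hw_address)
    {
        unsigned my_delay_ms = 0;
        bool joined = false;
        uint8_t type, payload[32];

        while (!joined) {
            if (radio_receive(&type, payload, 1000) && type == MSG_ANNOUNCE) {
                delay_ms(random_ms(500));   /* random retry spreads the join replies */
                radio_send(MSG_JOIN_REQUEST, &hw_address, sizeof hw_address);
                if (radio_receive(&type, payload, 200) && type == MSG_JOIN_ACCEPT) {
                    my_delay_ms = payload[0] | (payload[1] << 8);  /* assigned wait z */
                    joined = true;
                }
            }
        }

        for (;;) {
            if (radio_receive(&type, payload, 10000) && type == MSG_DATA_REQUEST) {
                uint8_t data[16];
                size_t n = read_sensor(data, sizeof data);
                delay_ms(my_delay_ms);      /* wait for this node's slot, then reply */
                radio_send(MSG_DATA_REPLY, data, n);
            }
        }
    }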