How to modify the TI SensorTag CC2650 firmware to speed up data transfer?

I'd like to modify the SensorTag software from TI for the CC2650STK kit so that it speeds up both the reading and the transmission of the sensor values.
Do I need to modify only the sensor software (the CCS BLE sensor stack from TI), or also the Android app?
I principally need only one temperature reading, so a sub-question is: how can the other sensors be deactivated if they are not needed, or if they conflict with a higher sampling rate for the temperature sensor?

What do you mean by "speeding up"? There are a number of different things you might mean:
Reduce the latency between opening the mobile app and displaying a reading.
Refactor the mobile app to make it simpler to get new readings.
Increase the frequency with which notifications are sent by the device, if you use it in that way.
Change the firmware's interaction with the sensors to obtain a reading.
Each of these meanings entails a different approach.
The period for each sensor is described in the User Guide that you reference and is typically between hundreds of milliseconds and one or two seconds. Do you really need readings more frequently than that? Typically each sensor will need an amount of time in order to obtain a reliable reading. This would be described in the sensor data sheet, along with options for working with the sensor.
More generally, 'speed' will be a function of the Bluetooth handshake, the throughput available over the physical radio link, the processing within the SensorTag, and the processing within the sensors. I would expect the most variable part of this to be the physical link.
It is up to the mobile app to decide which sensor services it wishes to use.
Have you studied the Software Developer's Guide, available at the same page as the BLE Stack?
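For the common case of simply raising the temperature sampling rate, the firmware often needs no change at all: the client can write the sensor's period characteristic over GATT, and per TI's User Guide the other sensors stay off unless their configuration characteristics are enabled. Below is a minimal sketch assuming the standard CC2650 SensorTag UUIDs from that guide (the period value is in units of 10 ms, with a sensor-specific lower bound of around 300 ms for the IR temperature sensor) and the Linux bluepy library; the MAC address is a placeholder:

    # Minimal sketch (Linux + bluepy): enable only the IR temperature
    # sensor on a CC2650 SensorTag and shorten its reporting period.
    # UUIDs and the 10 ms period unit follow TI's SensorTag User Guide;
    # the MAC address below is a placeholder.
    from bluepy import btle

    TAG_MAC   = "A0:B1:C2:D3:E4:F5"                      # placeholder address
    IR_CONFIG = "f000aa02-0451-4000-b000-000000000000"   # 0x01 turns sensor on
    IR_PERIOD = "f000aa03-0451-4000-b000-000000000000"   # period, 10 ms units
    IR_DATA   = "f000aa01-0451-4000-b000-000000000000"   # raw measurement

    tag = btle.Peripheral(TAG_MAC)
    tag.getCharacteristics(uuid=IR_CONFIG)[0].write(b"\x01")  # enable sensor
    tag.getCharacteristics(uuid=IR_PERIOD)[0].write(b"\x1e")  # 0x1E = 300 ms
    print(tag.getCharacteristics(uuid=IR_DATA)[0].read())     # raw 4-byte value
    tag.disconnect()

If the app enables notifications on the data characteristic, the tag then pushes readings at the new period; the Android app only needs changing if it hard-codes the old rate.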

Related

ESP32 EEPROM read/write cycles

I am using an ESP32 module for BLE and Wi-Fi functionality, and I am writing data to the EEPROM of the ESP32 module every 2 seconds.
How many read/write cycles are allowed per the ESP32 module's standard specifications? Based on this I need to calculate the EEPROM lifetime and the number of readings (and their frequency) I can store.
The ESP32 doesn’t have an actual EEPROM; instead it uses some of its flash storage to mimic an EEPROM. The specs will depend on the specific SPI flash chip, but they’re likely to be closer to 10,000 cycles than 100,000. Writing to it every couple of seconds will likely wear it out pretty quickly - it’s not a good design choice, especially if you keep rewriting the same location.
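One way to stretch the life of the emulated EEPROM is to stop rewriting the same location and rotate writes across several slots instead (the ESP32's NVS/Preferences layer does something similar internally). A toy sketch of the idea follows; the slot layout is an illustrative assumption, not the ESP32's actual flash format:

    # Toy wear-levelling: rotate writes across N slots so each flash
    # location sees only 1/N of the writes.
    SLOTS = 16        # number of rotation slots
    SLOT_SIZE = 8     # bytes reserved per record

    def slot_offset(write_counter: int) -> int:
        """Byte offset for the Nth write; the newest record has the highest counter."""
        return (write_counter % SLOTS) * SLOT_SIZE

    for n in range(4):
        print(f"write {n} -> offset {slot_offset(n)}")
    # With 16 slots, a 10,000-cycle flash sustains ~160,000 logical writes.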
I'm very late here, but an SD card seems like the ideal option for you. If you only want to save a few bytes, you can use FeRAM (also called FRAM). It's a cross between RAM and ROM: it's fast, and the data stays on it after power-off. It is pretty expensive, so you might want to go with the SD card or web-server option instead. I just wanted to mention that this exists; I only learned about it a few months ago.
At that write rate, even an automotive-grade EEPROM like the 24LC001, which supports at least 1,000,000 writes, would only last about three weeks!
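A quick worked check of these endurance figures at one write every 2 seconds:

    # EEPROM/flash lifetime at one write every 2 seconds.
    def lifetime_days(endurance_cycles: int, write_interval_s: float) -> float:
        return endurance_cycles * write_interval_s / 86_400  # seconds per day

    for cycles in (10_000, 100_000, 1_000_000):
        print(f"{cycles:>9,} cycles -> {lifetime_days(cycles, 2):5.1f} days")
    #    10,000 cycles ->   0.2 days
    #   100,000 cycles ->   2.3 days
    # 1,000,000 cycles ->  23.1 days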
I think Microchip has EERAM, which supports effectively unlimited writes and will not lose its contents on power loss.
Check Microchip's 47L series.

Tap/NFC-like Eddystone Experience

Is it possible (and if so, how) to have the Eddystone-URL provide functionality similar to NFC, where only a user in close proximity can get the URL?
I've been testing with the eddystone-beacon library on an Intel Bluetooth 4 enabled Wi-Fi card and can send the signal successfully. But I find that I can receive the signal from far away (20+ m), when I'd like to limit it to within one meter.
The library has an option to attenuate the power (txPowerLevel: -22, // override TX Power Level), but I find that changing this only affects the distance calculation, not the ability to receive the signal.
Is this perhaps a hardware limitation (maybe a dedicated USB Bluetooth adapter would allow control)?
Eddystone-URL is not designed to work this way using Google's standard services. However, it is possible to do what you want if you have a dedicated app on the mobile device that detects the beacon.
If this is an option for you, then you won't want to reduce the transmitter power on your hardware device. Even if you get hardware that allows this, sending a very weak signal will lead to unpredictable minimum detection ranges of 3 feet or more on devices with strong receivers, and no detections at all (even when touching the beacon) on devices with weak receivers.
Instead, leave it at the maximum transmission power and then filter for a strong RSSI on the receiving device, showing the detection only when the RSSI meets a threshold. You'll still have trouble with varying strengths of receivers, but it is much more predictable. I have used this technique combined with a device database that tracks the strongest signal level seen for a device model, so I know what RSSI a specific device model will detect when it is right next to the beacon.
If you are game for this approach, you can use the Android Beacon Library to detect Eddystone-URL for your app on Android devices, and the iOS beacon tools on iOS devices.
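Here is a minimal sketch of that RSSI-gating logic (the real code would live in the mobile app, e.g. in the Android Beacon Library's ranging callback); the threshold and smoothing window are illustrative assumptions, not calibrated values:

    # Show a detection only while the smoothed RSSI clears a "near" threshold.
    from collections import deque

    class NearDetector:
        def __init__(self, threshold_dbm=-45, window=5):
            self.threshold = threshold_dbm       # assumed touching-distance RSSI
            self.samples = deque(maxlen=window)  # smooth out per-reading jitter

        def update(self, rssi_dbm: int) -> bool:
            """Feed one RSSI reading; return True while the beacon looks near."""
            self.samples.append(rssi_dbm)
            return sum(self.samples) / len(self.samples) >= self.threshold

    detector = NearDetector()
    for rssi in (-80, -44, -42, -43, -41, -42):  # simulated approach to the beacon
        print(rssi, detector.update(rssi))       # True only once the window fills
                                                 # with near readings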

iBeacon indoor map 'heat map'

I'm sure we have all heard of Apple's iBeacon by now. We've been working on a few projects using the technology and have been wondering about one usage I have seen others promoting: using BLE radios to create a dwell-time heat map of a space.
The concept sounds simple enough: place a BLE beacon in an area, and as people pass by it 'counts' each person; the counts are then overlaid on a store map to show traffic patterns. That's the claim. I'm trying to figure out how that can be possible.
The concept uses the passerby's mobile device as the 'trigger' for the count. There is no way at all to achieve this without the user having a certain app installed on their device, correct? The only feasible way I can see it working is if the user has an app installed and that app pings a web server every time it sees a beacon, which is then mapped. But that will also use data and battery on the mobile device, which will most likely result in the user deleting the app before long.
This also leaves a large number of passers-by unaccounted for, making the results very difficult to quantify.
Am I wrong in this assumption? Is there something that I'm missing?
Your analysis of the possibilities and challenges of the technology is largely correct. My company, Radius Networks, has done similar traffic visualizations for large events.
A few points:
Even if most users do not have an app on their phone, the data are still valuable if there are enough to provide a representative statistical sample.
When using iBeacons for this purpose, you must accept quite coarse-grained locations, for two reasons:
The range of Bluetooth LE is about 50 meters.
Assuming the users will only be passively running the app in the background, beacon detection can take minutes on iOS.
Combining the two challenges above, you can really only use the technology to do this for very large venues.
The battery drain is not really a problem if the phone only wakes up every few minutes to report a beacon detection to a server.
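To make the reporting side concrete, here is a toy server-side aggregation, assuming each phone posts a (device_id, beacon_id, timestamp) tuple when its background app detects a beacon; the names and schema are illustrative:

    # Count unique devices seen per beacon zone; overlaying these counts
    # on a venue map gives the "heat map".
    from collections import defaultdict

    reports = [                          # (device_id, beacon_id, timestamp)
        ("phone1", "entrance", 0),
        ("phone1", "aisle3", 240),
        ("phone2", "entrance", 30),
        ("phone2", "entrance", 300),     # repeat visits count once per device
    ]

    visitors = defaultdict(set)
    for device, beacon, _ts in reports:
        visitors[beacon].add(device)

    for beacon, devices in sorted(visitors.items()):
        print(f"{beacon}: {len(devices)} unique visitors")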

Autonomous behaviour via orbBasic or streaming?

The orbBasic language is suggested as a good way for kids to get hands-on control of the Sphero in this interview.
What are the limitations of orbBasic? Does it achieve the same 1 ms granularity as macros?
For what range of time granularity would streaming data and executing orbBasic be equally acceptable?
Can the stabilization of Sphero's motion be programmed with orbBasic? With data streaming?
You can read all about the abilities of orbBasic in our online document here:
https://github.com/orbotix/DeveloperResources/tree/master/docs
But in short, you can run about 9,000 lines of code per second, so it's 9x the density of macros but with more power. You can use print statements to send data back to the Bluetooth client, but you have to make sure you don't exceed some rational limits; orbBasic can generate data faster than Bluetooth can transmit it to some devices.
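A back-of-the-envelope check of that last point; the payload size and link throughput below are illustrative assumptions, not Orbotix figures:

    # How easily print statements can outrun the Bluetooth link.
    LINES_PER_SEC   = 9_000   # orbBasic execution rate, from the answer above
    BYTES_PER_PRINT = 16      # assumed payload per print statement
    LINK_BYTES_SEC  = 2_000   # assumed effective Bluetooth throughput

    max_prints = LINK_BYTES_SEC / BYTES_PER_PRINT
    print(f"link sustains ~{max_prints:.0f} prints/s "
          f"out of {LINES_PER_SEC} executed lines/s")
    # Printing on more than ~1-2% of executed lines saturates the link.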
Stabilization can be turned on and off in orbBasic, and when on you can generate your own roll commands that are processed exactly as if they came from a smartphone.
Just to be clear, data streaming is just an automated way of retrieving sensor data from Sphero without having to continually ask for it. You can use it to examine the motion of Sphero but you cannot "control" Sphero with it (since that implies sending commands to the robot; data streaming is just reading).
Dan Danknick
FW Engineer, Orbotix

What are some practical applications of an FPGA?

I'm super excited about my program powering a little seven-segment display, but when I show it off to people not in the field, they always ask, "Well, what can you do with it?" I'm never able to give them a concise answer. Can anyone help me out?
First: they don't need to have volatile memory.
Indeed, the big players (Xilinx, Altera) usually hold their configuration on-chip in SRAM, so you need an additional EEPROM/Flash/WhatEver(TM) to store it externally.
But there are others - Actel is one big player that comes to mind - with non-volatile configuration storage on their FPGAs. (This also has other advantages: SRAM is usually not very radiation tolerant, so you have to take special measures when you go into orbit.)
There are two big things that justify FPGAs:
Price - They are not cheap. But sometimes you can't do something in software and you need hardware for it. And when you are below a certain volume (e.g. because it's just a small series, or a prototype), an FPGA is MUCH cheaper than an ASIC. FPGAs also allow much faster turn-around while developing an ASIC, before the final design is reached.
Reconfiguration - You can reconfigure your FPGA, which is something a processor or an ASIC can't do. There are several applications for this. One is when you need the ability to fix something in the design but can't get physically to the device: the Mars orbiters/rovers used Xilinx FPGAs, and when someone finds a mistake (or wants to switch to a different coding for transmitting data, or whatever), you can't replace the chip, as it is simply not reachable - but with an FPGA you can just reconfigure and apply your changes. Another scenario is a single chip that performs different accelerations depending on the situation. Imagine a smartphone: when telephoning, the FPGA can be configured for audio en-/decoding; when surfing, it can work as a compression engine; when playing videos, it can be configured as an H.264 decoder/accelerator. You can also match your hardware to your problem instance. E.g. Cisco uses many FPGAs in their hardware: you need hardware to perform switching/routing/packet inspection at the required speed, and you can generate matching engines from the actual configuration directly into hardware.
Another thing which might come up soon (I know some car manufacturers have thought about it) is for devices which include a lot of different electronics and have a big supply chain. It's more or less a combination of price and reconfiguration: 10 FPGAs are more expensive than 10 ASICs performing the same tasks, but it's cheaper to have 10 FPGAs from a single supplier, holding just one type of chip for service and supply, than to deal with 10 suppliers and manage 10 different chips in supply and service.
True story.
They allow you to fix design flaws in the custom data-acquisition boards for a multi-million dollar particle physics experiment that become obvious only after you have everything installed and are doing integration work and detector characterization.
You can evolve circuits. This is a bit old-school evolutionary algorithms, but starting from a set of random individuals you select the circuits that score higher in a fitness function than the rest and breed them to create a new population, ad infinitum. Read up about evolvable hardware; I think this book covers FPGAs: http://www.amazon.co.uk/Introduction-Evolvable-Hardware-Self-Adaptive-Computational/dp/0471719773/ref=sr_1_1?ie=UTF8&qid=1316308403&sr=8-1
Say, for example, you wanted a DSP circuit: you have an input signal and a desired output signal. Starting with a random population, you select perhaps only the fittest (bad) or perhaps a mixture of fit ones and odd ones to create the next generation. After a number of generations you can open the lid and discover that, lo and behold, evolution has taken place, and you have a circuit that may even outperform your initial expectations!
Also read the Field Guide to Genetic Programming; it's free on the web somewhere.
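A toy genetic algorithm in that spirit - bitstrings stand in for circuit configurations and the fitness function stands in for "how close is the output to the desired signal"; everything here is illustrative:

    # Evolve a bitstring toward a target "behaviour" by selection + breeding.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]           # assumed desired behaviour

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def breed(a, b):
        cut = random.randrange(1, len(a))       # single-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:               # occasional mutation
            i = random.randrange(len(child))
            child[i] ^= 1
        return child

    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(50):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):      # stop once behaviour matches
            break
        parents = pop[:10]                      # keep the fittest half
        pop = parents + [breed(*random.sample(parents, 2)) for _ in range(10)]

    print(f"generation {generation}: best {pop[0]}, fitness {fitness(pop[0])}")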
There are limitations to software. In software you run at the CPU's clock rate, executing roughly one instruction per clock cycle, and everything is high level: you don't control the details at the low level, and you're always constrained by the operating system or the development board you are programming. This is true for popular development boards such as Arduinos and the Raspberry Pi.
In FPGA hardware, you can precisely program and control what happens on each clock cycle. Your computation runs directly in hardware, limited by signal propagation and gate delays rather than by an instruction pipeline.
So we have FPGA = hardware, running as fast as the logic allows, versus CPU = software, roughly one instruction per clock cycle.
So why use an FPGA when we could design our own board at the printed-circuit-board, transistor level?
Because FPGAs are programmable hardware! They are built so that you can program the connections on the chip instead of wiring it up for one specific application. This also explains why FPGAs are expensive: they are a sort of 'general' or programmable hardware.
To argue why you should pick FPGAs despite their cost, the programmable-hardware component allows:
A longer product cycle: you can update the programmable hardware in customers' products containing your FPGA simply by letting them program your updated HDL code into it.
Recovery from hardware bugs: you simply let customers download the corrected configuration onto their FPGA. (Note: you cannot do this with fixed hardware designs; you would have to spend millions to recall your products, build new ones, and ship them back to customers.)
For examples of the cool things an FPGA can do, refer to Cornell's well-known ECE 5760 course:
http://people.ece.cornell.edu/land/courses/ece5760/FinalProjects/
Hope this helps!
Soon Chee Loong,
University of Toronto
FPGAs are also used to test/research circuit designs before they go to mass production. This is happening in several sectors: image processing, signal processing, etc.
Edit - a few years later, we can now see more practical applications, including finance and machine learning:
aerospace
emulation
automotive
broadcast
high-performance computing
medical
machine learning
finance (including cryptocoins)
I like this article: http://www.hpcwire.com/hpcwire/2011-07-13/jp_morgan_buys_into_fpga_supercomputing.html
My feeling is that FPGAs can sit directly in your streaming data at the point where it enters the systems under your control. You can then crunch that data without going through the steps a GPGPU would require (bringing the data in off the network, passing it across the PCI Express bus, and crunching it a GB at a time).
There are good reasons for both, but I think the notion of whether you mind buffering the data is a good bellwether.
Here's another cool FPGA application:
https://ehsm.eu/m-labs.hk/m1.html
Automotive image processing is one interesting domain:
Providing lane-keeping support to the driver (disclosure: I wrote this page!):
http://www.conekt.co.uk/capabilities/50-fpga-for-ldw
Providing an aerial view of a car from 4 fisheye-lens cameras (with video):
http://www.logicbricks.com/Solutions/Surround-View-DA-System/Xylon-Test-Vehicle.aspx
