Autonomous behaviour via orbBasic or streaming? - sphero-api

The orbBasic language is suggested in this interview as a good way for kids to get hands-on control of the Sphero.
What are the limitations of orbBasic? Does it achieve the same 1 ms granularity as macros?
Over what range of time granularity would streaming the data and executing orbBasic be equally acceptable?
Can the stabilization of Sphero's motion be programmed with orbBasic? With data streaming?

You can read all about the abilities of orbBasic in our online document here:
https://github.com/orbotix/DeveloperResources/tree/master/docs
But in short, you can run about 9,000 lines of code/sec, so roughly 9x the density of macros (which execute one command per millisecond, i.e. 1,000/sec) but with more power. You can use print statements to send data back to the Bluetooth client, but you have to make sure you don't exceed some rational limits: orbBasic can generate data faster than Bluetooth can transmit it to some devices.
Stabilization can be turned on and off in orbBasic, and when on you can generate your own roll commands that are processed exactly as if they came from a smartphone.
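(As a sketch of what such a roll command looks like on the wire, here is a minimal C routine that frames one, following the packet layout described in the API document in that repo; treat the exact byte order as an assumption and verify it against the spec before use.)

```c
#include <stdint.h>
#include <stdio.h>

/* Build a Sphero "roll" client command packet (DID 0x02, CID 0x30).
   Layout assumed from the public API spec: SOP1 SOP2 DID CID SEQ DLEN
   <data> CHK, where CHK is the bit-inverted modulo-256 sum of DID..data. */
static int build_roll_packet(uint8_t *out, uint8_t seq,
                             uint8_t speed, uint16_t heading, uint8_t state)
{
    uint8_t data[4] = { speed, (uint8_t)(heading >> 8),
                        (uint8_t)(heading & 0xFF), state };
    int n = 0;
    uint32_t sum;

    out[n++] = 0xFF;                 /* SOP1                         */
    out[n++] = 0xFF;                 /* SOP2: answer expected        */
    out[n++] = 0x02;                 /* DID: sphero device           */
    out[n++] = 0x30;                 /* CID: roll                    */
    out[n++] = seq;                  /* sequence number              */
    out[n++] = sizeof data + 1;      /* DLEN: data plus checksum     */
    sum = 0x02 + 0x30 + seq + (sizeof data + 1);
    for (size_t i = 0; i < sizeof data; i++) {
        out[n++] = data[i];
        sum += data[i];
    }
    out[n++] = (uint8_t)(~sum & 0xFF);  /* checksum */
    return n;                           /* total packet length */
}

int main(void)
{
    uint8_t pkt[16];
    int len = build_roll_packet(pkt, 0x01, 0x80, 90, 1); /* half speed, heading 90 */
    for (int i = 0; i < len; i++)
        printf("%02X ", pkt[i]);
    printf("\n");
    return 0;
}
```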
Just to be clear, data streaming is just an automated way of retrieving sensor data from Sphero without having to continually ask for it. You can use it to examine the motion of Sphero but you cannot "control" Sphero with it (since that implies sending commands to the robot; data streaming is just reading).
Dan Danknick
FW Engineer, Orbotix

Related

ESP32 EEPROM read/write cycles

I am using an ESP32 module for BLE & WiFi functionality, and I am writing data to the EEPROM of the ESP32 module every 2 seconds.
How many read/write cycles does the ESP32's EEPROM support? I need this to calculate the EEPROM's lifetime and the number of readings (at a given frequency) I can store.
The ESP32 doesn’t have an actual EEPROM; instead it uses some of its flash storage to mimic an EEPROM. The specs will depend on the specific SPI flash chip, but they’re likely to be closer to 10,000 cycles than 100,000. Writing to it every couple of seconds will likely wear it out pretty quickly - it’s not a good design choice, especially if you keep rewriting the same location.
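If you're using ESP-IDF, a somewhat gentler approach is the NVS (non-volatile storage) API, which spreads key/value writes across a flash partition instead of rewriting one fixed location. A minimal sketch, assuming the default partition table; the "storage" namespace and "count" key are placeholder names:

```c
#include "nvs_flash.h"
#include "nvs.h"

/* Persist a counter via NVS, which wear-levels across its partition
   instead of rewriting a fixed EEPROM-style address. */
void save_reading(int32_t value)
{
    nvs_handle_t h;
    if (nvs_open("storage", NVS_READWRITE, &h) != ESP_OK) /* placeholder namespace */
        return;
    nvs_set_i32(h, "count", value);  /* placeholder key  */
    nvs_commit(h);                   /* flush to flash   */
    nvs_close(h);
}

void app_main(void)
{
    /* Initialise NVS once at boot; erase and retry if the partition
       was truncated or holds a newer format. */
    esp_err_t err = nvs_flash_init();
    if (err == ESP_ERR_NVS_NO_FREE_PAGES || err == ESP_ERR_NVS_NEW_VERSION_FOUND) {
        nvs_flash_erase();
        nvs_flash_init();
    }
    save_reading(42);
}
```

Note that this stretches the wear budget rather than removing it; at one write every 2 seconds you should still reconsider the design.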
I'm very late here, but an SD card seems like the ideal option for you. If you want to save just a few bytes, you can use FeRAM (also called FRAM). It's a cross between RAM and ROM: it's fast, and the data persists after power-off. It is pretty expensive, though, so you might want to go with the SD card or web server option. I just wanted to mention that this exists; I only learned about it a few months ago.
At that write rate, even an automotive-grade EEPROM like the 24LC001, which supports at least 1,000,000 writes, will only last about three weeks: one write every 2 seconds is 43,200 writes per day, and 1,000,000 ÷ 43,200 ≈ 23 days.
I think Microchip has EERAM, which supports unlimited writes and will not lose its contents on power loss.
Check Microchip's 47L series.

Real-time indoor streaming and music mixing

I am working on a project where we are doing a live performance with about 6 musicians placed away from each other in a big space. The audience will be wearing headphones, and as they move around we want them to hear different kinds of effects in different areas of the space. For calculating the position of users we are using Bluetooth beacons. We're expecting around 100 users, and we can't have a latency of more than 2 seconds.
Is such kind of a setup possible?
The current way we're thinking of implementing this is that we'll divide the place into about 30 different sections.
For the server we'll take the input from all the musicians and mix a different stream for every section and stream it on a local WLAN using the RTP protocol.
We'll have Android and iOS apps that will locate the users using Bluetooth beacons and switch the live streams accordingly.
Presonus Studio One music mixer - Can have multiple channels that can be output to devices. 30 channels.
Virtual Audio Cable - Used to create virtual devices that will get the output from the channels. 30 devices.
FFMpeg streaming - Used to create an RTP stream for each of the devices. 30 streams.
Is this a good idea? Are there other ways of doing this?
Any help will be appreciated.
Audio Capture and Mixing
First, you need to capture those six channels of audio into something you can use. I don't think your idea of virtual audio cables is sustainable. In my experience, once you get more than a few, they don't work so great. You need to be able to go from your mixer directly to what's doing the encoding for the stream, which means you need something like JACK audio.
There are two ways to do this. One is to use a digital mixer to create those 30 mixes for you, and send you the resulting stream. Another is to simply capture the 6 channels of audio and then do the mixing in software. Normally I think the flexibility of mixing externally is what you want, and typically I'd recommend the Behringer X32 series for you. I haven't tried it with JACK audio myself, but I've heard it can work and the price point is good. You can get just a rackmount package of it for cheap which has all the functionality without the surface for control (cheaper, and sufficient for what you need). However, the X32 only has 16 buses so you would need two of them to get the number of mixes you need. (You could get creative with the matrix mixes, but that only gets you 6 more, a total of 22.)
I think what you'll need to do is capture that audio and mix in software. You'll probably want to use Liquidsoap for this. It can programmatically mix audio streams pulled in via JACK, and create internet radio style streams on the output end.
Streaming
You're going to need a server. There are plenty of RTP/RTSP servers available, but I'd recommend Icecast. It's going to be easier to set up, and clients are more compatible. (Rather than making an app, for example, you could easily play back these streams in HTML5 audio tags on a web page.) Liquidsoap can send streams directly to Icecast.
Latency
Keeping latency under 2 seconds is going to be a problem. You'll want to lower the buffers everywhere you can, particularly on your Icecast server, since every second of buffered audio is a second of added latency. This is on the fringe of what is reasonably possible, so you'll want to test to ensure the latency meets your requirements.
Network
100 clients on the same spectrum is also problematic. What you need depends on the specifics of your space, but you're right on the line of what you can get away with using regular consumer access points. (As a rough illustration, at 128 kbps per listener, 100 unicast streams is already ~13 Mbps of sustained traffic before Wi-Fi overhead.) Given your latency and bandwidth requirements, I'd recommend getting some commercial access points with built-in sector antennas and multiple radios. There are many manufacturers of such gear.
Best of luck with this unique project! Please post some photos of your setup once you've done it.

How to modify the TI SensorTag CC2650 Firmware to speed up data transfer?

I'd like to modify the SensorTag software from TI for the CC2650STK kit so that it speeds up the reading and also the transmission of the sensor values.
Do I need to modify only the sensor software (the CCS BLE sensor stack from TI), or also the Android app?
I principally need only one temperature reading, so another sub-question is: how can the other sensors be deactivated if they are not needed, or if they conflict with the higher speed of the temperature sensor?
What do you mean by "speeding up"? There are a number of different things you might mean:
Reduce the latency between opening the mobile app and displaying a reading.
Refactor the mobile app to make it simpler to get new readings.
Increase the frequency with which notifications are sent by the device, if you use it in that way.
Change the firmware interaction with the sensors to obtain a reading.
Each of these meanings entails a different approach.
The period for each sensor is described in the User Guide that you reference and is typically between hundreds of milliseconds and one or two seconds. Do you really need readings more frequently than that? Each sensor typically needs a certain amount of time to obtain a reliable reading; this is described in the sensor's data sheet, along with options for working with the sensor.
More generally, 'speed' will be a function of the Bluetooth handshake, the throughput available over the physical radio link, the processing within the SensorTag, and the processing within the sensors. I would expect the most variable part of this to be the physical link.
It is up to the mobile app to decide which sensor services it wishes to use; per the User Guide, each sensor service has a configuration characteristic, and writing 0 to it disables that sensor, which addresses your sub-question about deactivating the sensors you don't need.
Have you studied the Software Developer's Guide, available at the same page as the BLE Stack?

(libusb) Confusion about continous isochronous USB streams

I am using a 32-bit AVR microcontroller (AT32UC3A3256) with High Speed USB support. I want to stream data regularly from my PC to the device (without acknowledgement of the data), so exactly like a USB audio interface, except the data I want to send isn't audio. Such an interface is described here: http://www.edn.com/design/consumer/4376143/Fundamentals-of-USB-Audio.
I am a bit confused about USB isochronous transfers. I understand how a single transfer works, but how and when is the next subsequent transfer planned? I want a continuous stream of data that is calculated a little ahead of time, but streamed with minimum latency and without interruptions (except some occasional data loss). From my understanding, Windows is not a realtime OS so I think the transfers should not be planned with a timer every x milliseconds, but rather using interrupts/events? Or maybe a buffer needs to be filled continuously with as much data as there is available?
I think my question is still about the concepts of USB and not code-related, but if anyone wants to see my code, I am testing and modifying the "USB Vendor Class" example in the ASF framework of Atmel Studio, which contains the firmware source for the AVR and the source for the Windows EXE as well. The Windows example program uses libusb with a supplied driver.
Stephen -
You say "exactly like USB Audio"; but beware! The USB Audio class is very, very complicated because it implements a closed-loop servo system to establish long-term synchronisation between the PC and the audio device. You probably don't need all of that in your application.
To explain a bit more about long-term synchronisation: The audio codec at one end (e.g. the USB headphones) may run at a nominal 48KHz sampling rate, and the audio file at the other end (e.g. the PC) may be designed to offer 48 thousand samples per second, but the PC and the headphones are never going to run at exactly the same speed. Sooner or later there is going to be a buffer overrun or under-run. So the USB audio class implements a control pipe as well as the audio pipe(s). The control pipe is used to negotiate a slight speed-up or slow-down at one end, usually the Device end (e.g. headphones), to avoid data loss. That's why the USB descriptors for audio device class products are so incredibly complex.
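To put rough numbers on that drift (illustrative figures, not from any spec: ±100 ppm crystal tolerance at each end, 48 kHz nominal rate, a 1024-sample FIFO), a quick calculation shows why an open-loop stream must eventually slip:

```c
#include <stdio.h>

/* Illustrative only: how fast two "48 kHz" clocks diverge.
   Assumed numbers: +/-100 ppm crystals at each end (200 ppm
   worst-case relative drift) and a 1024-sample FIFO. */
int main(void)
{
    double rate = 48000.0;            /* nominal sample rate, Hz        */
    double ppm  = 200.0;              /* worst-case relative drift, ppm */
    double fifo = 1024.0;             /* buffer depth, samples          */

    double drift = rate * ppm / 1e6;  /* samples gained/lost per second */
    printf("drift: %.1f samples/s\n", drift);               /* 9.6 */
    printf("half-full FIFO over/under-runs in %.0f s\n",
           (fifo / 2.0) / drift);                           /* ~53 */
    return 0;
}
```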
If your application can tolerate a slight error in the speed at which data is delivered to the AVR from the PC, you can dispense with the closed-loop servo. That makes things much, much simpler.
You are absolutely right in assuming the need for long-term buffering when streaming data using isochronous pipes. A single isochronous transfer is pointless - you may as well use a bulk pipe for that. The whole reason for isochronous pipes is to handle data streaming. So a lot of look-ahead buffering has to be set up, just as you say.
I use LibUsbK for my iso transfers in product-specific applications which do not fit any preconceived USB classes. There is reasonably good documentation at libusbk for iso transfers. In short - you decide how many bytes per packet and how many packets per transfer. You decide how many buffers to pre-fill (I use five), and offer the libusbk driver the whole lot to start things going. Then you get callbacks as each of those buffers gets emptied by the driver, so you can fill them with new data. It works well for me, even though I have awkward sampling rates to deal with. In my case I set up a bunch of twenty-one packets where twenty of them carry 40 bytes and the twenty-first carries 44 bytes!
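If you stay with plain libusb on the PC side (as in the ASF example), the libusb-1.0 asynchronous API supports the same pattern: pre-fill several iso transfers, then refill and resubmit each one from its completion callback. A minimal sketch; the VID/PID and endpoint address are placeholders for your device:

```c
#include <stdio.h>
#include <string.h>
#include <libusb-1.0/libusb.h>

#define NUM_TRANSFERS     5    /* transfers pre-filled up front      */
#define PKTS_PER_TRANSFER 21   /* iso packets per transfer           */
#define PKT_SIZE          40   /* bytes per packet                   */
#define EP_OUT            0x02 /* placeholder: your iso OUT endpoint */

/* Placeholder: generate the next chunk of your stream here. */
static void next_chunk(unsigned char *buf, int len)
{
    memset(buf, 0, len);
}

/* Completion callback: refill the buffer and resubmit, so data is
   always queued. (On error the stream simply stops in this sketch.) */
static void LIBUSB_CALL on_done(struct libusb_transfer *xfer)
{
    if (xfer->status == LIBUSB_TRANSFER_COMPLETED) {
        next_chunk(xfer->buffer, xfer->length);
        libusb_submit_transfer(xfer);
    }
}

int main(void)
{
    libusb_context *ctx;
    libusb_init(&ctx);

    /* Placeholder VID/PID: substitute your device's IDs. */
    libusb_device_handle *dev = libusb_open_device_with_vid_pid(ctx, 0x03EB, 0x2404);
    if (!dev) { fprintf(stderr, "device not found\n"); return 1; }
    libusb_claim_interface(dev, 0);

    static unsigned char bufs[NUM_TRANSFERS][PKTS_PER_TRANSFER * PKT_SIZE];
    for (int i = 0; i < NUM_TRANSFERS; i++) {
        struct libusb_transfer *xfer = libusb_alloc_transfer(PKTS_PER_TRANSFER);
        next_chunk(bufs[i], sizeof bufs[i]);
        libusb_fill_iso_transfer(xfer, dev, EP_OUT, bufs[i], sizeof bufs[i],
                                 PKTS_PER_TRANSFER, on_done, NULL, 1000);
        libusb_set_iso_packet_lengths(xfer, PKT_SIZE);
        libusb_submit_transfer(xfer);
    }

    for (;;)                     /* event loop drives the callbacks */
        libusb_handle_events(ctx);
}
```

The five pre-queued transfers play the same role as my five buffers above: the host controller always has data in flight while you refill the one that just completed.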
Hope that helps
- Tony

iBeacon indoor map 'heat map'

I'm sure we have all heard of Apple's iBeacon by now... We've been working on a few projects using the technology and have been wondering about one usage that I have seen others promoting: using the LE Bluetooth radios to create a dwell-time heat map in a space.
The concept sounds simple enough: place an LE beacon in an area, and as people pass by it 'counts' each person, which is then overlaid on a store map to create traffic patterns.. that's the claim. I'm trying to figure out how that can be possible.
The concept uses the mobile device of the passer-by as the 'trigger' for the count. There is no way at all to achieve this without the user having a certain app downloaded on their device, correct? The only feasible way I can see it working is if the user has an app downloaded on their device and that app pings a web server every time it sees a beacon, which is then mapped.. but that will also use data and battery resources on the mobile device, which will most likely result in the user deleting the app before long...
This also leaves a large number of passers-by unaccounted for, making the results very difficult to quantify.
Am I wrong in this assumption? Is there something that I'm missing?
Your analysis of the possibilities and challenges of the technology is largely correct. My company, Radius Networks, has done similar traffic visualizations for large events.
A few points:
Even if most users do not have an app on their phone, the data are still valuable if there are enough to provide a representative statistical sample.
When using iBeacons for this purpose, you must use quite coarse-grained locations, for two reasons:
The range of Bluetooth LE is about 50 meters.
Assuming the users will only be passively running the app in the background, beacon detection can take minutes on iOS.
Combining the two challenges above, you can really only use the technology to do this for very large venues.
The battery drain is not really a problem if the phone only wakes up every few minutes to report a beacon detection to a server.
