Maximum number of addressable LEDs to be controlled by an Arduino - memory management

I'm working on a project that involves building a big 'light wall' (h × w = 3.5 m × 7 m). The wall is covered in semi-transparent fabric, and behind it strips of addressable RGB LEDs will be mounted. The strips will run vertically and function as a big display for showing primitive graphics and visuals. I plan to control the diodes using an Arduino. The reason for choosing the Arduino as the controller is that it must be easily programmable, (relatively) cheap, and should work as part of different interaction design setups.
Current version of the light wall (without addressable LEDs)
My main concern is now this:
How many individual diodes can I control before the Arduino becomes inadequate?
As I see it, this comes down to two points:
When does the update rate of the diodes become too slow? I want to update the picture at a minimum rate of 10 Hz.
When will the Arduino run out of memory? The state of all the diodes on the light wall has to be stored for the next frame, so at some point the memory will run out. (Note that the 256 KB on the Arduino Mega 2560 is flash program memory; runtime state has to fit in its 8 KB of SRAM.) How many diodes can I control before this happens?
I realize that both questions are related to how the Arduino is programmed. I am very open to any tips regarding speed optimization and memory conservation.
Thanks in advance!
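As a rough illustration of both limits, here is a back-of-the-envelope sketch, assuming WS2812-style strips driven with the FastLED library (the pin number and LED count below are placeholders): each LED needs 3 bytes of frame buffer, so the Mega 2560's 8 KB of SRAM caps the buffer at roughly 2,700 LEDs in theory (less once other variables are accounted for), and at the WS2812's ~30 µs per LED a single data line can refresh about 3,300 LEDs inside a 100 ms (10 Hz) frame budget.

// Back-of-the-envelope sketch: frame-buffer size and refresh budget for a
// WS2812-style wall on an Arduino Mega 2560 using FastLED.
// NUM_LEDS and DATA_PIN are placeholders for illustration.
#include <FastLED.h>

#define DATA_PIN 6
#define NUM_LEDS 1500                 // 3 bytes/LED -> 4500 B of the Mega's 8 KB SRAM

CRGB leds[NUM_LEDS];                  // the frame buffer: one CRGB (3 bytes) per LED

void setup() {
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);
}

void loop() {
  // Draw the next frame into leds[] here (the graphics/visuals).

  // Pushing one WS2812 frame costs ~30 us per LED, so 1500 LEDs take ~45 ms,
  // which still fits inside the 100 ms budget needed for 10 Hz updates.
  FastLED.show();
  delay(100);                         // pace the animation at roughly 10 Hz
}

If the wall ends up needing more LEDs than one board can buffer or refresh, the usual ways out are splitting the wall across several controllers, driving several data lines in parallel, or moving to a board with more RAM.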

Related

ESP32 EEPROM read/write cycles

I am using an ESP32 module for BLE & WiFi functionality, and I am writing data to the EEPROM of the ESP32 module every 2 seconds.
How many read/write cycles are allowed per the ESP32 module's standard specifications? Based on this I need to calculate the EEPROM lifetime and the number of readings (and the frequency) I can store.
The ESP32 doesn’t have an actual EEPROM; instead it uses some of its flash storage to mimic an EEPROM. The specs will depend on the specific SPI flash chip, but they’re likely to be closer to 10,000 cycles than 100,000. Writing to it every couple of seconds will likely wear it out pretty quickly - it’s not a good design choice, especially if you keep rewriting the same location.
I'm very late here, but an SD card seems like the ideal option for you. If you only want to save a few bytes, you can use FeRAM (also called FRAM). It is a cross between RAM and ROM: it is fast, has very high write endurance, and the data stays on it after power-off. It is pretty expensive, so you might want to go with the SD card or web server option. I just wanted to point out that this exists; I myself only learned about it a few months ago.
At that write rate, even an automotive-grade EEPROM like the 24LC001, which supports at least 1,000,000 writes, would only last about three weeks!
I think Microchip has EERAM, which supports unlimited writes and will not lose its contents on power loss.
Check Microchip's 47L series.
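If the data has to stay on the ESP32 itself, one way to soften the wear problem is to persist it through the NVS-backed Preferences library (which wear-levels writes across a flash partition) and to write only when the value actually changes, and no more often than some minimum interval. A minimal sketch of that idea follows; the namespace/key names and the sensor read are placeholders.

// Reduce flash wear on an ESP32: persist through the NVS-backed Preferences
// library, and only write when the value changed and at most once per minute.
// The namespace "app", key "lastVal" and analogRead(34) are placeholders.
#include <Preferences.h>

Preferences prefs;

uint32_t lastSaved = 0;
unsigned long lastWriteMs = 0;
const unsigned long MIN_WRITE_INTERVAL_MS = 60UL * 1000UL;  // at most one write per minute

uint32_t readSensor() {
  return analogRead(34);                    // placeholder for the real measurement
}

void setup() {
  prefs.begin("app", false);                // open NVS namespace read/write
  lastSaved = prefs.getUInt("lastVal", 0);  // restore the last persisted value
}

void loop() {
  uint32_t value = readSensor();

  // Only touch flash when the value actually changed AND enough time has passed.
  if (value != lastSaved && millis() - lastWriteMs >= MIN_WRITE_INTERVAL_MS) {
    prefs.putUInt("lastVal", value);
    lastSaved = value;
    lastWriteMs = millis();
  }

  delay(2000);                              // keep the original 2 s measurement cadence
}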

CR2032 Battery Percentage

We are working on a project right now where we're using coin cell batteries (3 × CR2032). The device is connected to the application via Bluetooth, and we're showing the battery percentage in the app (we take the reading by turning the ADC on only for the measurement and turning it off afterwards, which saves battery life).
My question is: how do we display the percentage in the application throughout the life of the battery?
Eg. 3.2 V - 100%
3.0 V - 80%
2.8 V - 60%
These values are exaggerated, just to illustrate what I'm trying to get at here.
Coin cell batteries discharge quickly from 3.2 V to 2.9 V and then discharge very slowly. We want the displayed readings to reflect this behaviour. E.g. from 3.2 V to 2.9 V we might show a reduction of only 4-5%, and then do the rest of the calculation according to the slower rate.
Please suggest calculations we can implement in our code.
We are currently following this but it doesn't make sense to me.
https://devzone.nordicsemi.com/question/37130/battery-level-discharge-curve-how-can-i-read/
At 2.9 V, if we show less than half of the battery in the app, the user would be confused as to why the battery drained so quickly even though they hardly used the device.
If the device consumes the same current all the time, you could build the discharge curve yourself (in your link the discharge curve is built for a specific load, a 15 kΩ resistor).
After that, you could fit a polynomial to get a function that describes how the battery capacity depends on time. Use the inverse function to predict how much operating time is left.
More generally, you could collect some information about battery life in the field: if your device talks to a remote server, you could send statistics to it and analyze the received data later.
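One concrete way to implement that 'flat curve' behaviour is a piecewise-linear lookup over a discharge curve, interpolating between breakpoints. A minimal sketch follows; the breakpoints are purely illustrative and should be replaced with points measured (or taken from a datasheet curve) under your device's real load.

#include <cstdio>
#include <cstdint>

struct Breakpoint { uint16_t mV; uint8_t percent; };

// Breakpoints must be sorted from high voltage to low voltage.
// These numbers are illustrative only - measure your own curve under load.
static const Breakpoint curve[] = {
    {3200, 100},   // fresh cell
    {2900,  95},   // the steep initial drop maps to only a few percent
    {2750,  60},   // long, flat plateau
    {2600,  30},
    {2400,  10},
    {2000,   0},   // treat as empty
};

uint8_t batteryPercent(uint16_t mV) {
    const int n = sizeof(curve) / sizeof(curve[0]);
    if (mV >= curve[0].mV) return 100;
    if (mV <= curve[n - 1].mV) return 0;
    for (int i = 1; i < n; ++i) {
        if (mV >= curve[i].mV) {
            // Linear interpolation between curve[i-1] and curve[i].
            uint32_t span_mV  = curve[i - 1].mV - curve[i].mV;
            uint32_t span_pct = curve[i - 1].percent - curve[i].percent;
            uint32_t above    = mV - curve[i].mV;
            return curve[i].percent + (above * span_pct) / span_mV;
        }
    }
    return 0;  // not reached
}

int main() {
    // Example: 2.83 V falls on the plateau and maps to roughly 78%.
    std::printf("%u%%\n", (unsigned)batteryPercent(2830));
}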

Rendering of a large MATLAB figure is slow

I'm using MATLAB R2014b on Windows 8.1. I have a figure with two sub-plots. The data for the first sub-plot is about 700,000 points; the second is about 50,000 points. When I display it or manipulate it in any way (zooming, say), there's a huge lag, up to about 30 seconds. Obviously I'd like to improve performance. Here's what I know:
If I break it into 4 plots, each covering 1/4 of the data, performance is fast. Much more than 4 times as fast. The slowdown seems to grow far faster than linearly with the number of points.
A colleague (running R2014a, I believe) has a machine that should be slower, but in fact the figure displays at near-real-time speed.
The problem is perhaps how the figure is being rendered. I ran MATLAB's "opengl info" and it reports that the Software flag is false, which should mean it's using the display adapter's hardware rendering.
So maybe the display adapter isn't set quite right. My machine (it's a Lenovo laptop) has two display adapters: Intel HD Graphics 3000 and NVIDIA NVS 4200M. I don't know why there are both or whether there are any relevant settings.
Any thoughts on how to proceed?
It could be that you're running it through your integrated graphics processor (Intel HD Graphics 3000) rather than your dedicated graphics processor (NVIDIA NVS 4200M). If your Lenovo has "switchable graphics" enabled, you should be able to switch to the NVIDIA card, or at least check that you are indeed rendering through it. Right-click on your power manager in the taskbar; if you see a menu item that says "switchable graphics," you can change it to your NVIDIA. Note that you'll have to close MATLAB to do the switch.
It does sound like a slowdown caused by the rendering configuration. When you run opengl info in MATLAB, what device is listed as "Renderer"?
If you don't need to manipulate it (say you only want an image file), you can always create your figure with figure('Visible','Off') and save it without actually showing the figure on screen.
MATLAB releases ever since R2014b use a new graphics engine which is known to be extremely slow with large data sets; see for example
http://www.mathworks.com/matlabcentral/newsreader/view_thread/337755
The solution has nothing to do with graphics drivers etc. Revert back to MATLAB R2014a and stay there.
I wrote a function, plotECG, that enables you to show plots with millions of samples. It includes sliders for quick scrolling and zooming.
If you have multiple time series and want them to be displayed in a synchronized way, you can pass them all at once as a matrix and set the key 'AutoStackSignals', followed by a cell array of strings with the signals' names. Then the signals are shown one below the other in the same axes, with the corresponding name as the YTickLabel.
https://de.mathworks.com/matlabcentral/fileexchange/59296

Linux driver real-time constraints

I need to build a platform for logging some sensor data, and possibly later doing some calculations on this logged data.
The Raspberry Pi seems like an interesting (and cheap!) device for this.
I have a gyroscope that can sample at 800 Hz which is equivalent to one sample every 1.25 ms.
The gyroscope has a built-in FIFO that can store 32 samples.
This means that the FIFO has to be emptied at least every 32 * 1.25 = 40 ms, otherwise samples will be dropped.
So my question is: Can I be 100% certain that my kernel driver will be able to extract the data from this FIFO within the specified time?
The gyroscope communicates with the host via I2C, and it can also trigger an interrupt pin on an "almost full" event if that would make things simpler.
But it would be easiest if I could just have a loop in the driver that retrieves the data at regular intervals.
I can live with storing the data in kernel space, and move it to user space more infrequently (no constraint on time).
I can also live with sampling the gyroscope at lower sample rates (400 or 200 Hz is acceptable).
This is with regards to the stock kernel, and not the special real-time kernel as it seems like this is currently not supported for the Raspberry Pi.
You will need a real-time Linux environment for tight timing:
You could try Xenomai on Raspberry Pi:
http://diy.powet.eu/2012/07/25/raspberry-pi-xenomai/
However, following along this blog:
http://linuxcnc.mah.priv.at/rpi/rpi-rtperf.html (dead, and I could not find it in wayback or google cache)
It seems he was getting repeatable ±20 µs timing out of the stock kernel. As your timing requirement is 1250 µs per sample (and 40 ms per FIFO drain), you may be fine with the stock kernel if you are willing to lose a sample once in a blue moon. YMMV.
I have not tested this yet myself but I have been reading up in an attempt to try to drive a ws2811 LED controller with the Raspberry Pi and this was looking the most promising to me.
There is also the RT linux patch: https://rt.wiki.kernel.org/index.php/Main_Page
which has at least one Pi version: https://github.com/licaon-kter/raspi-rt
However, I have run into a bunch of naysayers when looking deeper into this patch.
Your best bet is to read the millisecond timer, log it or light an LED if you miss an interval, and then try some of the solutions. Happy hacking.
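To make the 'log it or light an LED if you miss an interval' idea concrete, here is a minimal user-space sketch (not a kernel driver) that drains the FIFO every 20 ms (half the 40 ms fill time, for margin), sleeps to an absolute deadline, and counts overruns. drainGyroFifo() is a placeholder for the real I2C burst read.

#include <cstdio>
#include <time.h>

// Placeholder: the real implementation would burst-read up to 32 samples
// from the gyro's FIFO registers over /dev/i2c-1 here.
static void drainGyroFifo() {
}

static void addMs(timespec &t, long ms) {
    t.tv_nsec += ms * 1000000L;
    while (t.tv_nsec >= 1000000000L) {
        t.tv_nsec -= 1000000000L;
        t.tv_sec += 1;
    }
}

int main() {
    const long PERIOD_MS = 20;   // FIFO fills in 40 ms at 800 Hz; poll twice as fast
    long missed = 0;

    timespec next{};
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        addMs(next, PERIOD_MS);
        // Sleep until an absolute deadline so the period doesn't drift with work time.
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);

        timespec now{};
        clock_gettime(CLOCK_MONOTONIC, &now);
        long late_ms = (now.tv_sec - next.tv_sec) * 1000L +
                       (now.tv_nsec - next.tv_nsec) / 1000000L;
        if (late_ms > PERIOD_MS) {
            // Overran by more than a whole period: the FIFO may have overflowed.
            ++missed;
            std::printf("missed deadline #%ld (late by %ld ms)\n", missed, late_ms);
        }

        drainGyroFifo();
    }
}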

What are some practical applications of an FPGA?

I'm super excited about my program powering a little seven-segment display, but when I show it off to people not in the field, they always say "well what can you do with it?" I'm never able to give them a concise answer. Can anyone help me out?
First: They don't need to have volatile memory.
Indeed the big players (Xilinx, Altera) usually have their configuration on-chip in SRAM, so you need additional EEPROM/Flash/WhatEver(TM) to store it outside.
But there are others; e.g. Actel is one big player that comes to mind that has non-volatile configuration storage on its FPGAs (by the way, this also has other advantages, as SRAM is usually not very radiation tolerant, and you have to take special measures when you go into orbit).
There are two big things that justify FPGAs:
Price - They are not cheap. But sometimes you can't do something in software and you need hardware for it. And when you are below a certain point in your required volume (e.g. because it's just a small series, or a prototype), an FPGA is MUCH cheaper than an ASIC. Also, while developing ASICs this allows much faster turnaround before the final design is reached.
Reconfiguration - You can reconfigure your FPGA; that is something a processor or an ASIC can't do. There are several applications where you can use this. E.g. when you need the ability to fix something in the design but you can't physically get to the device. Example: the Mars orbiters/rovers used Xilinx FPGAs. When someone finds a mistake there (or wants to switch to a different coding for transmitting data, or whatever), you can't replace the chip, as it is simply not reachable, but with an FPGA you can just reconfigure it and apply your changes. Another scenario is that a single chip can perform different accelerations depending on the situation. Imagine a smartphone: while you're on a call the FPGA can be configured for audio encoding/decoding, while browsing it can work as a compression engine, and when playing videos it can be configured as an H.264 decoder/accelerator. Another thing you can do is match your hardware to your problem instance. E.g. Cisco uses many FPGAs in their hardware: you need hardware to perform switching/routing/packet inspection at the required speed, and you can generate matching engines directly in hardware from the actual configuration.
Another thing which might come up soon (I know some car manufacturers have thought about it) is for devices which include a lot of different electronics and have a big supply chain. It's more or less a combination of price and reconfiguration. Having 10 different ASICs is more expensive than having 10 FPGAs that perform the same tasks, and moreover it's cheaper to have 10 FPGAs from just one supplier, with only one type of chip to hold for service and supply, than to have 10 suppliers and the need to hold and manage 10 different chips in supply and service.
True story.
They allow you to fix design flaws in the custom data-acquisition boards for a multi-million dollar particle physics experiment that become obvious only after you have everything installed and are doing integration work and detector characterization.
You can evolve circuits. This is a bit old-school evolutionary algorithms, but starting from a set of random individuals you can select the circuits that score higher on a fitness function than the rest and breed them to create a new population, ad infinitum. Read up on evolvable hardware; I think this book covers FPGAs: http://www.amazon.co.uk/Introduction-Evolvable-Hardware-Self-Adaptive-Computational/dp/0471719773/ref=sr_1_1?ie=UTF8&qid=1316308403&sr=8-1
Say, for example, you want a DSP circuit: you have an input signal and a desired output signal. Starting with a random population, you select perhaps only the fittest (a bad idea), or perhaps a mixture of fit ones and odd ones, to create the next generation. After a number of generations you can open the lid and discover that, lo and behold, evolution has taken place and you have a circuit that may even outperform your initial expectations!
Also read the Field Guide to Genetic Programming; it's free on the web somewhere.
There are limitations to software. In software you're tied to the CPU's clock rate and can only execute roughly one instruction per clock cycle. In software everything is high level: you do not control the details that happen at the low level, and you'll always be limited by the operating system or the development board you are programming. This is true for popular development boards such as the Arduino and the Raspberry Pi.
In FPGA hardware, you can precisely program and control what happens on every clock cycle, so your computation runs as dedicated logic, limited only by how fast signals propagate through the circuit (there is no instruction fetch or decode overhead).
So, roughly speaking, an FPGA gives you hardware running at circuit speed, which is much better than
a CPU, which gives you software executing roughly one instruction per clock cycle.
So why use an FPGA when we could design our own boards at the printed-circuit-board, transistor level?
Because FPGAs are programmable hardware! They are built so that you can program the connections instead of wiring a board up for one specific application. This also explains why FPGAs are expensive: they are a sort of 'general-purpose', programmable hardware.
To argue why you should pick FPGAs despite their cost, the programmable hardware allows:
Longer product life cycle (you can update the programmable hardware in products that are already in your customers' hands simply by letting them program your updated HDL code into their FPGA).
Recovery from hardware bugs: you simply let customers download the corrected configuration onto their FPGA. (Note: you cannot do this with fixed hardware designs, as you would have to spend millions to recall your products, build new ones, and ship them back to customers.)
For examples of the cool things an FPGA can do, refer to Cornell's well-known ECE 5760 course:
http://people.ece.cornell.edu/land/courses/ece5760/FinalProjects/
Hope this helps!
Soon Chee Loong,
University of Toronto
FPGAs are also used to test/research circuit designs before they go into mass production. This happens in several sectors: image processing, signal processing, etc.
Edit - after a few years we can now see more practical applications, including finance and machine learning:
aerospace
emulation
automotive
broadcast
high performance computers
medical
machine learning
finance (including cryptocoins)
I like this article: http://www.hpcwire.com/hpcwire/2011-07-13/jp_morgan_buys_into_fpga_supercomputing.html
My feeling is that FPGAs can sit directly in your streaming data path at the point where it enters the systems under your control. You can then crunch that data without going through the steps a GPGPU would require (bringing the data in off the network, passing it across the PCI Express bus, and crunching it a Gb at a time).
There are good reasons for both, but I think the notion of whether you mind buffering the data is a good bellwether.
Here's another cool FPGA application:
https://ehsm.eu/m-labs.hk/m1.html
Automotive image processing is one interesting domain:
Providing lane-keeping support to the driver (disclosure: I wrote this page!):
http://www.conekt.co.uk/capabilities/50-fpga-for-ldw
Providing an aerial view of a car from 4 fisheye-lens cameras (with video):
http://www.logicbricks.com/Solutions/Surround-View-DA-System/Xylon-Test-Vehicle.aspx
