I am using an ESP32-CAM and the CameraWebServer example from the standard Arduino IDE package.
It works fine, but the image I receive in the browser is noisy: colored lines appear randomly across the picture. Any idea what causes this and how to fix it?
There could be a number of reasons for this behaviour, and it is possibly down to a combination of issues, each of which affects picture quality.
Power supply quality. ESP32s draw a lot of current under certain conditions, and this can cause a brownout condition to occur. This could be down to your USB port not being able to supply enough current. Check the serial terminal for messages; if you see brownout error messages on your serial monitor, try a powered USB hub or a better-quality USB cable.
Power supply noise. If you have access to an oscilloscope, check the 3.3 V and 5 V rails for noise. If it is excessive, try adding two 1000 µF capacitors, one on each rail.
RF interference. The ribbon cable between the camera and the board is not shielded. Try lifting it away from the board, or even wrapping it in a thin layer of foil and some insulating tape, ensuring no shorts occur. If the ribbon cable is long, try a camera with a shorter cable.
Lighting. With fluorescent and LED lighting, some forms of illumination seem noisier than others. Try increasing the amount of natural daylight.
Interface settings. The defaults in the web server example are not ideal for certain lighting conditions. Try disabling lens correction and manually adjusting the gain control, AE level, and exposure. Tweaking these settings will eliminate much of the background noise.
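If you would rather set these from the sketch than through the web interface, the esp32-camera driver exposes the same controls via its sensor interface. A minimal sketch, to be called after esp_camera_init(); the specific values are just starting points to experiment with, not recommendations:

```cpp
#include "esp_camera.h"

// Apply manual tuning instead of the web UI defaults.
void tuneSensor() {
  sensor_t *s = esp_camera_sensor_get();
  s->set_lenc(s, 0);           // disable lens correction
  s->set_gain_ctrl(s, 0);      // disable auto gain control (AGC)
  s->set_agc_gain(s, 2);       // manual gain, range 0..30
  s->set_ae_level(s, -1);      // AE level, range -2..2
  s->set_exposure_ctrl(s, 0);  // disable auto exposure (AEC)
  s->set_aec_value(s, 300);    // manual exposure, range 0..1200
}
```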
I found that all of these small improvements make a big difference to picture quality. In my scenario, low light and noisy power seemed to be the worst culprits, but YMMV. By implementing these changes I managed to improve the picture quality from unwatchable to reasonable.
I am getting my first Tango in the next day or so; I've worked a little bit with Occipital's Structure Sensor, which is where my background in depth-perceiving cameras comes from.
Has anyone used multiple Tangos at once (let's say 6-10) looking at the same part of a room, using depth for identification and placement of 3D characters/content? I have been told that multiple devices looking at the same part of a room will confuse each Tango, as they will see the other Tangos' IR dots.
Thanks for your input.
Grisly
I have not tried to use several Tangos, but I have tried to use my Tango in a room where I had a Kinect 2 sensor, which caused the Tango to go bananas. The Tango does seem to have a lower-intensity IR projector in comparison, but I would still say it is a reasonable assumption that it will not work.
It might work at certain angles, but I doubt you will be able to find a configuration of that many cameras in which none of them interfere with each other. If you do make it work, however, I would be very interested to know how.
You could lower the depth camera rate (it defaults to 5 frames per second, I believe) to avoid conflicts, but that might not be desirable given what you're using the system for.
Alternatively, only enable the depth camera when placing your 3D models on surfaces, and disable it when it is not needed. This can also help conserve CPU and battery power.
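For reference, a rough sketch of what that could look like against the Tango C API. The config key names and the runtime-config call are written from memory, so treat them as assumptions and verify them against your SDK headers:

```cpp
#include <tango_client_api.h>

// Connect-time setup: enable the depth pipeline.
// ("config_enable_depth" is the key name as I remember it.)
TangoConfig makeConfig() {
  TangoConfig config = TangoService_getConfig(TANGO_CONFIG_DEFAULT);
  TangoConfig_setBool(config, "config_enable_depth", true);
  return config;  // pass this to TangoService_connect()
}

// Runtime toggle: 0 fps while idle, a low rate (e.g. 1-5) only while the
// user is placing content. ("config_runtime_depth_framerate" and the
// TANGO_CONFIG_RUNTIME path are also assumptions from memory.)
void setDepthRate(int32_t fps) {
  TangoConfig runtime = TangoService_getConfig(TANGO_CONFIG_RUNTIME);
  TangoConfig_setInt32(runtime, "config_runtime_depth_framerate", fps);
  TangoService_setRuntimeConfig(runtime);
}
```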
It did not work. The Occipital Structure Sensor, on the other hand, did work (multiple devices in one place)!
I have kind of a proof-of-concept project ahead. I want to transmit a very short message to a smartphone via light.
What I have is:
an LED strip (NOT individually addressable)
an Arduino
a typical smartphone
the 8 bytes I want to transmit
The obstacles:
not every smartphone camera works well under all light conditions (the recorded color is sometimes not the same as the one the human eye perceives)
I don't have complete darkness around, but harsh daylight :D.
I want to encode a message in a sequence of light, for example by varying color or pulse duration.
Are there any suitable encoding algorithms or libraries around that you can recommend and that I should take a look at?
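One minimal sketch of a workable approach: ignore color entirely (which sidesteps the color-rendition problem) and send the 8 bytes as intensity pulses with Manchester coding, which puts a transition in every bit so the receiver can recover the bit clock. The pin number, bit rate, preamble, and payload below are arbitrary choices, not from the question:

```cpp
const int STRIP_PIN = 9;               // MOSFET gate driving the strip (assumption)
const unsigned long HALF_BIT_MS = 50;  // 10 bit/s: several camera frames per half-bit

void sendHalfBit(bool on) {
  digitalWrite(STRIP_PIN, on ? HIGH : LOW);
  delay(HALF_BIT_MS);
}

// Manchester coding (IEEE 802.3 convention): 0 = high-to-low, 1 = low-to-high.
void sendBit(bool bit) {
  sendHalfBit(!bit);
  sendHalfBit(bit);
}

void sendByte(uint8_t b) {
  for (int8_t i = 7; i >= 0; i--) sendBit((b >> i) & 1);
}

void sendMessage(const uint8_t *msg, uint8_t len) {
  for (uint8_t i = 0; i < 8; i++) sendBit(i & 1);  // preamble: 01010101
  for (uint8_t i = 0; i < len; i++) sendByte(msg[i]);
}

void setup() {
  pinMode(STRIP_PIN, OUTPUT);
}

void loop() {
  const uint8_t payload[8] = {'H', 'E', 'L', 'L', 'O', '1', '2', '3'};
  sendMessage(payload, sizeof(payload));
  delay(2000);                         // dark gap between repetitions
}
```

On the phone side you would threshold the average brightness of the strip region in each video frame. At 10 bit/s the preamble plus 64 data bits take about seven seconds, which is slow but robust against the lighting and color issues you describe; add a checksum byte if you need to detect corrupted reads.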
I am trying to create a small project wherein I need to capture/read the video frame buffer and calculate the average RGB value of the screen.
I don't need to write anything on the screen. I'm doing this in Windows.
Can anyone help me with any Windows API which will read the video frame buffer and calculate the average RGB value?
From what I have found so far, I would need to write a kernel driver that has access to read the frame buffer.
Is this the only solution?
Is there any other way of reading frame buffer?
Is there an algorithm to calculate the average RGB value from the frame buffer data?
If you want really good performance, you might have to use DirectX and capture the back buffer to a texture. Using mipmaps, it will automatically create downsamples all the way down to 1x1. Just grab the color of that one pixel and you're good to go.
Good luck, though. I'm working on implementing this as we speak. I'm creating an ambient light control for my room. I was getting about 15 FPS using device contexts and StretchBlt, and only got decent performance if I grabbed one pixel with GetPixel(). That's on an i5 3570K @ 4.5 GHz.
But with the DirectX method, you could technically get hundreds if not thousands of frames per second. (When I render a spinning triangle, my 660 gets about 24,000 FPS. Capturing couldn't be TOO much slower, minus the CPU calls.)
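For reference, the device-context route mentioned above can at least be reduced to a handful of calls: a HALFTONE StretchBlt straight onto a 1x1 bitmap makes GDI do the averaging itself. A minimal sketch of that slow path (everything here is a documented Win32 call, but expect GDI-level frame rates, not DirectX-level):

```cpp
#include <windows.h>
#include <stdio.h>

int main() {
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screen = GetDC(NULL);                      // DC for the whole screen
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, 1, 1);
    HGDIOBJ old = SelectObject(mem, bmp);

    SetStretchBltMode(mem, HALFTONE);              // average, don't just subsample
    SetBrushOrgEx(mem, 0, 0, NULL);                // required after setting HALFTONE
    StretchBlt(mem, 0, 0, 1, 1, screen, 0, 0, w, h, SRCCOPY);

    COLORREF avg = GetPixel(mem, 0, 0);            // the screen's mean color
    printf("average RGB = %d %d %d\n",
           GetRValue(avg), GetGValue(avg), GetBValue(avg));

    SelectObject(mem, old);
    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);
    return 0;
}
```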
Setup
I have a couple hundred Sparkfun LED pixels (similar to https://www.sparkfun.com/products/11020) connected to an Arduino Uno and want to control the pixels from a PC using the built-in Serial-over-USB connection of the Arduino.
The pixels are individually addressable, and each has 24 bits for the color (RGB). Since I want to be able to change the color of each pixel very quickly, the transmission of the data from the PC to the Arduino has to be very efficient (the onward transmission from the Arduino to the pixels is already very fast).
Problem
I've tried simply sending the desired RGB values directly as-is to the Arduino, but this leads to a visible delay when I want to, for example, turn on all LEDs at the same time. My straightforward idea for minimizing the amount of data is to reduce the available colors from 24-bit to 8-bit, which is more than enough for my application.
If I do this, I have to expand the 8-bit values from the PC into 24-bit values on the Arduino to set the actual colors on the pixels. The obvious solution would be a palette that maps each available 8-bit value to its corresponding 24-bit color. I would like a solution without a palette, though, mostly for memory space reasons.
Question
What is an efficient way to expand an 8-bit color to a 24-bit one, preferably one that preserves the color information as accurately as possible? Are there standard algorithms for this task?
Possible solution
I was considering a format with 2 bits each for R and B and 3 bits for G. These values would be packed into a single byte, transmitted to the Arduino, unpacked using bit-shifting, and scaled using the map() function (http://arduino.cc/en/Reference/Map).
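To make that concrete, here is a sketch of one plausible bit layout (R in bits 7-6, G in bits 5-3, B in bits 2-1, bit 0 spare), unpacking by bit replication instead of map(). Replication fills each channel's low bits from its high bits, so 0 maps to 0 and the maximum code maps to exactly 255, with no division involved:

```cpp
#include <stdint.h>

// Pack 24-bit RGB into one byte: RRGGGBB0 (bit 0 left spare).
uint8_t packRGB(uint8_t r, uint8_t g, uint8_t b) {
  return (uint8_t)(((r >> 6) << 6) | ((g >> 5) << 3) | ((b >> 6) << 1));
}

// Unpack on the Arduino by replicating each channel's bits downward.
void unpackRGB(uint8_t p, uint8_t &r, uint8_t &g, uint8_t &b) {
  uint8_t r2 = (p >> 6) & 0x03;  // 2-bit red
  uint8_t g3 = (p >> 3) & 0x07;  // 3-bit green
  uint8_t b2 = (p >> 1) & 0x03;  // 2-bit blue
  r = (uint8_t)((r2 << 6) | (r2 << 4) | (r2 << 2) | r2);
  g = (uint8_t)((g3 << 5) | (g3 << 2) | (g3 >> 1));
  b = (uint8_t)((b2 << 6) | (b2 << 4) | (b2 << 2) | b2);
}
```

Green gets the extra bit because the eye is most sensitive to green; the spare bit could carry a per-pixel flag in your protocol.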
Any thoughts on that solution? What would be a better way to do this?
R2B2G3 would give you very few colors (and there's actually one more bit left over). I don't know if that would be enough for your application. You can use a dithering technique to make 8-bit images look a little better.
Alternatively, if you have a preferred set of colors, you can store a known palette on your device and never send it over the wire. You can also store multiple palettes for different situations and specify which one to use with a small integer index.
On top of that, it's possible to implement a simple compression algorithm like RLE or LZW and decompress after receiving; a toy RLE decoder is sketched below.
And there are some very fast compression libraries with a small footprint you could use: Snappy, miniLZO.
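If you want to try RLE before pulling in a library, the decoder side is only a few lines. A toy sketch assuming the PC sends plain (count, value) byte pairs; a real stream would also need framing and a way to handle incompressible data:

```cpp
#include <stdint.h>
#include <stddef.h>

// Decode (count, value) pairs into out; returns the number of decoded bytes.
size_t rleDecode(const uint8_t *in, size_t inLen, uint8_t *out, size_t outCap) {
  size_t o = 0;
  for (size_t i = 0; i + 1 < inLen; i += 2) {
    uint8_t count = in[i];
    uint8_t value = in[i + 1];
    for (uint8_t k = 0; k < count && o < outCap; k++) out[o++] = value;
  }
  return o;
}
```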
Regarding your question "What would be a better way to do this?", one of the first things to do (if not yet done) is to increase the serial data rate. An Arduino Forum thread suggests using 115200 bps as a standard rate and trying 230400 bps. At those rates you would need to write the receiving software so that it quickly transfers data from the relatively small hardware receive buffer into a larger buffer, instead of trying to work on the data directly out of the small receive buffer.
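A sketch of that receiving pattern on the Arduino side; the buffer size is an arbitrary choice sized against the Uno's 2 KB of RAM:

```cpp
const uint16_t BUF_SIZE = 512;   // leave RAM free for the rest of the sketch
uint8_t buf[BUF_SIZE];
uint16_t head = 0, tail = 0;     // head: write index, tail: read index

void setup() {
  Serial.begin(115200);          // try 230400 if your USB-serial link is clean
}

void loop() {
  // Drain the small (64-byte) hardware receive buffer as fast as possible
  // so it never overflows at high baud rates.
  while (Serial.available() > 0) {
    uint16_t nxt = (head + 1) % BUF_SIZE;
    if (nxt == tail) break;      // ring buffer full; consume some data first
    buf[head] = (uint8_t)Serial.read();
    head = nxt;
  }
  // ... consume bytes from buf[tail], advancing tail, to update the pixels ...
}
```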
A second possibility is to put activation times into your data packets. Suppose F1, F2, F3... are a series of frames you will display on the LED array. Send those frames from the PC ahead of time, or during idle or wait times, and let the Arduino buffer them until they are scheduled to appear. When the activation time arrives for a given frame, have the Arduino turn it on. If you know in advance the frames but not the activation times, send and buffer the frames and send just activation codes at appropriate times.
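A rough sketch of that scheduling idea; the Frame layout, the pixel count, and the showFrame() helper are hypothetical, just to show the buffering and the activation-time check:

```cpp
// Hypothetical frame: 100 packed one-byte pixels plus an activation time.
struct Frame {
  uint32_t showAtMs;             // when to display, in millis() time
  uint8_t  pixels[100];
};

const uint8_t MAX_FRAMES = 4;    // ~420 bytes: about what a 2 KB Uno can spare
Frame frames[MAX_FRAMES];
uint8_t pending = 0;             // frames buffered but not yet shown
uint8_t nextFrame = 0;           // index of the next frame due

void showFrame(const uint8_t *pixels);  // hypothetical: pushes a frame to the strip

void loop() {
  // ... receive frames over serial into frames[] during idle time ...

  // Signed subtraction handles millis() wraparound correctly.
  if (pending > 0 && (int32_t)(millis() - frames[nextFrame].showAtMs) >= 0) {
    showFrame(frames[nextFrame].pixels);
    nextFrame = (nextFrame + 1) % MAX_FRAMES;
    pending--;
  }
}
```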
Third, you can have multiple palettes and dynamic palettes that change on the fly and can use pixel addresses or pixel lists as well as pixel maps. That is, you might use different protocols at different times. Protocol 3 might download a whole palette, 4 might change an element of a palette, 5 might send a 24-bit value v, a time t, a count n, and a list of n pixels to be set to v at time t, 6 might send a bit map of pixel settings, and so forth. Bit maps can be simple 1-bit-per-pixel maps indicating on or off, or can be k-bits-per-pixel maps, where a k-bit entry could specify a palette number or a frame number for a pixel. This is all a bit vague because there are so many possibilities; but in short, define protocols that work well with whatever you are displaying.
Fourth, given the ATmega328P's small (2KB) RAM but larger (32KB) flash memory, consider hard-coding several palettes, frames, and macros into the program. By macros, I mean routines that generate graphic elements like arcs, lines, open or filled rectangles. Any display element that is known in advance is a candidate for flash instead of RAM storage.
Your (2, 3, 2) bit idea is used "in the wild." It should be extremely simple to try out. The quality will be pretty low, but try it out and see if it meets your needs.
It seems unlikely that any other solution could save much memory compared to a 256-color lookup table, if the lookup table stays constant over time. I think anything successful would have to exploit a pattern in the kind of images you are sending to the pixels.
Any way you look at it, what you're really going for is image compression. So, I would recommend looking at the likes of PNG and JPG compression, to see if they're fast enough for your application.
If not, then you might consider rolling your own. There's only so far you can go with per-pixel compression; size-wise, your (2,3,2) idea is about as good as you can expect to get. You could try a quadtree-type format instead: take the average of a 4-pixel block, transmit a compressed (lossy) representation of the differences, then apply the same operation to the half-resolution image of averages...
As others point out, dithering will make your images look better at (2,3,2). Perhaps the easiest way to dither for your application is to choose a different (random or quasi-random) fixed quantization threshold offset for each color of each pixel. Both the PC and the Arduino would have a copy of this threshold table; the distribution of thresholds would prevent posterization, and the Arduino-side table would help maintain accuracy.
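A sketch of that shared threshold table, shown for the 3-bit green channel. The PRNG is a plain LCG written out explicitly so the PC and the Arduino produce identical sequences from the same seed (rand() would not, since its implementation differs between C libraries):

```cpp
#include <stdint.h>

static uint32_t lcgState;

// Tiny LCG (Numerical Recipes constants); top 5 bits give a threshold 0..31.
static uint8_t nextThreshold() {
  lcgState = lcgState * 1664525u + 1013904223u;
  return (uint8_t)(lcgState >> 27);
}

// Fill the table identically on both sides by using the same seed.
void buildThresholds(uint8_t *table, uint16_t n, uint32_t seed) {
  lcgState = seed;
  for (uint16_t i = 0; i < n; i++) table[i] = nextThreshold();
}

// Dithered 8-bit -> 3-bit quantization (green; the step size is 32).
// The per-pixel threshold shifts the rounding point instead of truncating.
uint8_t quantize3(uint8_t value, uint8_t threshold) {
  uint16_t q = (uint16_t)(value + threshold) >> 5;
  return q > 7 ? 7 : (uint8_t)q;
}
```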