I am using an ESP32-CAM and the CameraWebServer example from the standard Arduino IDE package.
It works fine, but the image I receive in the browser is noisy: colored lines appear randomly over the picture. Any idea what causes this and how to fix it?
There could be a number of reasons for this behaviour, and it is possibly down to a combination of issues which together affect the picture quality.
Power supply quality. ESP32s draw a lot of current under certain conditions, and this can cause a brownout condition to occur. This could be down to your USB port not being able to supply enough current. Check the serial terminal for messages; if you see brownout error messages on your serial monitor, try a powered USB hub or a better-quality USB cable.
Power supply noise. If you have access to an oscilloscope, check the 3.3 V and 5 V rails for garbage. If excessive, try adding 2 × 1000 µF capacitors on each rail.
RF interference. The ribbon cable between the camera and the board is not shielded. Try lifting it away from the board, or even wrapping it in a thin layer of foil and some insulating tape, ensuring no shorts occur. If the ribbon cable is long, try a camera with a shorter cable
Lighting. With fluorescent and LED lighting, some forms of illumination seem noisier than others. Try increasing the amount of natural daylight.
Interface settings. The defaults in the web server example are not ideal for certain lighting conditions. Try disabling lens correction and manually adjusting the gain control, AE level and exposure. Tweaking these settings will eliminate much of the background noise; see the sketch below for where these knobs live in code.
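If you'd rather bake these settings into the sketch than poke at the web UI, the esp32-camera sensor API used by CameraWebServer exposes the same knobs. A minimal sketch; the specific values here are just starting points, not recommendations:

#include "esp_camera.h"

// Call after esp_camera_init(), e.g. at the end of setup()
void tune_sensor() {
    sensor_t *s = esp_camera_sensor_get();
    s->set_lenc(s, 0);           // disable lens correction
    s->set_gain_ctrl(s, 0);      // disable auto gain (AGC)...
    s->set_agc_gain(s, 2);       // ...and set a low manual gain (0-30)
    s->set_exposure_ctrl(s, 0);  // disable auto exposure (AEC)...
    s->set_aec_value(s, 300);    // ...and set a manual exposure (0-1200)
    s->set_ae_level(s, 0);       // AE level (-2..2), only used with AEC on
}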
I found that all of these small improvements make a big difference to picture quality. In my scenario, low light and noisy power seemed to be the worst culprits, but YMMV. By implementing these I managed to improve the picture quality from unwatchable to reasonable.
I am writing an OS and want it to have a GUI. I can't find a good tutorial for drawing pixels on the screen.
I'd like to have some assembly + C example which I can build and run on an emulator like Bochs or v86.
The basic idea is:
1) The boot loader uses the firmware (VBE on BIOS, GOP or UGA on UEFI) to set a graphics mode that is supported by the monitor, the video card and the OS. While doing this it gets the relevant information about the frame buffer from the firmware (physical address of the frame buffer, horizontal and vertical resolution, pixel format, bytes between horizontal lines) and passes it to the OS, so that the OS can use this information during "early initialisation" (before a native video driver is started), and can continue using it (as a kind of "limp mode") if there is no suitable native video driver.
2) The OS uses the information to figure out how to write to the frame buffer. This may be a calculation like physical_address = base_address + y * bytes_between_lines + x * bytes_per_pixel (where bytes_per_pixel is determined from the pixel format).
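To make point 2 concrete, here is a minimal C sketch of that calculation. The struct and field names are purely illustrative, and it assumes the frame buffer has already been mapped into the address space at fb->base:

#include <stdint.h>

/* Frame buffer information the boot loader collected from the firmware */
struct fb_info {
    uint8_t *base;              /* mapped address of the frame buffer */
    uint32_t width, height;     /* resolution in pixels */
    uint32_t pitch;             /* bytes between horizontal lines */
    uint32_t bytes_per_pixel;   /* determined from the pixel format */
};

/* Write one raw pixel value at (x, y) */
void put_pixel(struct fb_info *fb, uint32_t x, uint32_t y, uint32_t value)
{
    uint8_t *p = fb->base + y * fb->pitch + x * fb->bytes_per_pixel;
    switch (fb->bytes_per_pixel) {
        case 2: *(uint16_t *)p = (uint16_t)value; break;  /* e.g. 5:6:5 */
        case 4: *(uint32_t *)p = value; break;            /* e.g. 8:8:8:8 */
        /* other pixel sizes handled similarly */
    }
}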
Notes for "early initialisation":
for performance reasons, it's better to draw everything in a buffer in RAM and then copy ("blit") the data from the buffer in RAM to the frame buffer.
for performance reasons, the code to copy ("blit") the data from the buffer in RAM to the frame buffer can/should use some tricks to avoid copying data that didn't change since last time
to support many different pixel formats, it's possible to use a "standard" pixel format for the buffer in RAM (e.g. maybe "8-bit red, 8-bit green, 8-bit blue, 8-bit padding") and convert that to whichever pixel format the video card happens to want (e.g. maybe "5-bit blue, 6-bit green, 5-bit red, no padding") while copying data from the buffer in RAM to the frame buffer. This allows you to have a single version of all the functions to draw things (characters, lines, rectangles, icons, ...) instead of having multiple different versions of many different functions (one for each possible pixel format).
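For example, here is a sketch of one such conversion, assuming the buffer in RAM holds 0x00RRGGBB pixels and the video card happens to want 16-bit 5:6:5 (just one of many possible target formats):

#include <stdint.h>

/* Convert one line of "8:8:8 + padding" pixels to 5:6:5 while copying it */
void blit_line_rgb565(uint16_t *dest, const uint32_t *src, uint32_t width)
{
    uint32_t x;
    for (x = 0; x < width; x++) {
        uint32_t c = src[x];
        uint16_t r = (c >> 19) & 0x1F;   /* top 5 bits of red */
        uint16_t g = (c >> 10) & 0x3F;   /* top 6 bits of green */
        uint16_t b = (c >> 3)  & 0x1F;   /* top 5 bits of blue */
        dest[x] = (uint16_t)((r << 11) | (g << 5) | b);
    }
}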
Notes for "middle initialisation":
eventually the OS will try to find and start suitable device drivers for all the different devices. This includes trying to find a suitable driver for video card/s (e.g. that supports things like vertical sync, GPU, GPGPU, etc).
you will need to design a video driver interface that native video drivers can use that (ideally) supports modern features (e.g. full 3D graphics and shaders maybe).
when there is no native video driver, the OS can/should start a "generic frame buffer" driver that implements the same video driver interface (the one designed to support hardware acceleration) but does everything in software, without the benefit of hardware acceleration.
when video driver/s are started, the OS needs to have some kind of "hand off" where ownership of the frame buffer is passed from the earlier boot code to the video driver. After this "hand off" the earlier boot code (which was designed to draw things directly to the frame buffer) should not touch the frame buffer and should ask the video driver to do the "convert pixel data and copy to frame buffer" work.
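As a rough illustration of what "the same interface for native drivers and the software fallback" can look like in C (the names are invented, and a real interface would be much richer, covering 3D, shaders, etc.):

#include <stdint.h>

/* Function table that every video driver (native or generic) implements */
struct video_driver {
    const char *name;
    int  (*set_mode)(uint32_t width, uint32_t height, uint32_t bpp);
    void (*blit)(const void *pixels, uint32_t x, uint32_t y,
                 uint32_t width, uint32_t height);
    void (*wait_vsync)(void);
};

The generic frame buffer driver fills this table with software implementations; a native driver fills it with hardware-accelerated ones, and the rest of the OS never needs to care which one it is talking to.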
Notes for "after initialisation":
For a traditional "2D GUI", you typically have one buffer (or "canvas" or "texture" or whatever) for the background/desktop, plus more buffers/canvases for each window or dialog box, and possibly more buffers/canvases for smaller things (e.g. mouse pointer, drop-down menus, "widgets", etc.); such that applications can modify their own buffer/canvas but are prevented from directly or indirectly accessing any other buffer/canvas, for security reasons. Then the GUI tells the video driver where each of these buffers/canvases should be drawn; and the video driver (using hardware acceleration if it's a native video driver) combines these pieces together ("composes") to get pixel data for the whole frame, then does the pixel format conversion (using the GPU, hopefully) to get raw pixel data to display/to send to the monitor. This means various actions (moving windows around the screen, "alt-tabbing" between windows, moving the mouse around, etc.) become extremely fast when there's a native video driver, because the CPU does nothing and the video card itself does all the work.
ideally there would be a way (e.g. OpenGL) for the application to ask the video driver to draw stuff in the application's buffer/canvas; such that more work can be done by the video card (and not done by the CPU). This is especially important for 3D games, but there's no reason why normal 2D applications can't benefit from using the same approach for 2D graphics.
Note that most beginners do everything wrong (they don't have a well-designed native video driver interface) and therefore will never have any native video drivers, because none of their software could use a native video driver anyway. These people will probably try to convince you that it's not worth the hassle (because in their experience native video drivers won't ever exist). The reality is that most native video drivers are extremely hard to write, but some of them (for virtual machines) aren't; and your goal should be to allow other people to write drivers eventually (by designing suitable interfaces and providing adequate documentation) rather than writing all the drivers yourself.
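For completeness, here is a crude sketch of the software compositing path described above (the canvas layout and names are invented for illustration; a native driver would do this work on the GPU instead):

#include <stdint.h>

struct canvas {
    uint32_t *pixels;               /* 0x00RRGGBB, width * height entries */
    uint32_t width, height;
    int32_t screen_x, screen_y;     /* where the GUI wants this canvas drawn */
};

/* Compose canvases back to front (list[0] = background) into one frame in RAM */
void compose(uint32_t *frame, uint32_t fw, uint32_t fh,
             struct canvas **list, int count)
{
    int i;
    for (i = 0; i < count; i++) {
        struct canvas *c = list[i];
        uint32_t x, y;
        for (y = 0; y < c->height; y++) {
            int32_t fy = c->screen_y + (int32_t)y;
            if (fy < 0 || fy >= (int32_t)fh) continue;      /* clip vertically */
            for (x = 0; x < c->width; x++) {
                int32_t fx = c->screen_x + (int32_t)x;
                if (fx < 0 || fx >= (int32_t)fw) continue;  /* clip horizontally */
                frame[(uint32_t)fy * fw + (uint32_t)fx] = c->pixels[y * c->width + x];
            }
        }
    }
}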
The top answer did a very good job of explaining. You did ask for some example code, though, so here's a code snippet from my GitHub; a detailed explanation follows.
1. bios_setup:
2. mov ah, 00h ; BIOS function 00h: set video mode
3. mov al, 13h ; mode 13h: 320x200, 256 colors
4. int 10h ; call the BIOS
5. mov ah, 0Ch ; BIOS function 0Ch: write graphics pixel
6. mov bh, 0 ; display page 0
7. mov al, 0 ; set initial color to 0 (black)
8. mov cx, 0 ; x = 0
9. mov dx, 0 ; y = 0
10. int 10h ; BIOS interrupt
Line 2 is where the fun begins. Firstly, we move the value 0 into the ah register. At line 3, we move 13 hex into al - now we're ready for our BIOS call.
Line 4 calls the BIOS with interrupt vector 10 hex. BIOS now checks ah and al.
AH:
- tells BIOS to set video mode
AL:
- selects video mode 13h (320x200 pixels with 256 colors).
Now that we called the interrupt on line 4, we're ready to move new values into some registers.
At line 5, we put 0C hex into the ah register.
This tells BIOS that we want to write a graphics pixel.
At line 6, we throw 0 into the bh register, which tells BIOS which display page we'll be drawing to - page 0 here.
And next all we have to do is set our color. So let's start with 0, which is black.
That's all nice, but where do we want to actually draw this black pixel to?
That's where lines 8-9 come in, where registers cx and dx store the x,y coordinates of the pixel to draw, respectively.
Once they are set, we call the BIOS with interrupt 10 hex, and the pixel is drawn.
After reading Brendan's elaborate and informative answer, this code will make much more sense. Certain values must be in certain registers before calling the BIOS simply because those are the registers the corresponding interrupt handler checks. Everything else is pretty straightforward. If you want another color, simply change the value in al. You want to blit your pixel somewhere else? Mess around with the x and y values in cx and dx. Again, this isn't very efficient for graphics-intensive programs, as each pixel costs a BIOS call. For educational purposes, however, it beats writing your own graphics driver ;) You can still gain some efficiency by drawing everything into a buffer in RAM before blitting it to the screen, as Brendan said, but I'd rather keep my example simple.
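For the curious, here is roughly what that "buffer in RAM" approach looks like in C for mode 13h. This is a sketch only: it assumes a 16-bit real-mode compiler (e.g. Open Watcom or Turbo C) with a memory model that allows a 64000-byte array, and the far-pointer syntax is compiler-specific:

#define SCREEN_W 320U
#define SCREEN_H 200U
#define SCREEN_SIZE (SCREEN_W * SCREEN_H)   /* 64000 bytes, one per pixel */

static unsigned char backbuf[SCREEN_SIZE];  /* draw everything here first */
static unsigned char far *vga = (unsigned char far *)0xA0000000L; /* A000:0000 */

void put_pixel(unsigned int x, unsigned int y, unsigned char color)
{
    backbuf[y * SCREEN_W + x] = color;      /* one byte per pixel in mode 13h */
}

void blit(void)
{
    unsigned int i;
    for (i = 0; i < SCREEN_SIZE; i++)       /* copy the whole frame at once */
        vga[i] = backbuf[i];
}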
Check out the full - free - example on my GitHub. I've also included a README and a Makefile, but they are Linux-only. If you're running Windows, some googling will yield all the information necessary to assemble the OS onto a bootable floppy image, and just about any virtual machine host will do. Also, feel free to ask me about anything that's unclear. Cheers!
PS: I did not write a full tool, just a small NASM program that is meant to be assembled onto a floppy image and run as a kernel (in a VM, if you will).
I'm working on a school project, an "automatic railway system".
My project is supposed to close the gate when the train approaches the station, with the buzzer on, a 90-second countdown shown on the 7-segment display, and an LED flashing.
After the train leaves the station, the gate opens and the buzzer and the LED turn off.
I tried to use a DC motor to open and close the gate, but it didn't give me the accurate angle that I need, so I'm trying to use a servo motor instead.
I need it to open the gate at position zero and close it at position 90.
All the code I found on the internet uses PWM and timers, which I haven't covered in my course, so can anyone help me do this with simple code, please?
I'm using an ATmega32 running at 16 MHz.
It depends on the frequency specification of your analog servo (which is controlled by PWM). Once you have learned the servo's specification, you can set up the PWM using the built-in features of the CodeVisionAVR (cvavr) compiler, or do some research on the PWM registers.
Here is an example of a PWM setup:
//using OC0 (B.3)
DDRB.3 = 1;         //set B.3 as output (CodeVisionAVR bit-access syntax)
TCCR0 = 0b01110001; //Phase Correct PWM mode, no prescaler, inverted output
TCNT0 = 0;          //reset the counter

//to assign a value to your PWM
OCR0 = 127;         //about 50% duty cycle, since the timer is 8-bit
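Note, though, that a standard hobby servo expects a ~50 Hz signal with a 1-2 ms pulse, which the 8-bit Timer0 can't comfortably generate at 16 MHz. Here is a sketch for the ATmega32 using the 16-bit Timer1 instead, written in plain avr-gcc style rather than cvavr syntax; the exact pulse widths for 0 and 90 degrees vary from servo to servo, so treat the values below as starting points:

#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

/* Timer1, Fast PWM mode 14 (TOP = ICR1), prescaler 8:
   16 MHz / 8 = 2 MHz -> 0.5 us per tick; TOP = 39999 -> 20 ms period (50 Hz). */
void servo_init(void)
{
    DDRD  |= (1 << PD5);                                /* OC1A pin as output */
    TCCR1A = (1 << COM1A1) | (1 << WGM11);              /* non-inverting PWM  */
    TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS11); /* mode 14, /8        */
    ICR1   = 39999;                                     /* 20 ms period       */
}

int main(void)
{
    servo_init();
    for (;;) {
        OCR1A = 2000;   /* ~1.0 ms pulse: gate open (about 0 degrees)    */
        _delay_ms(3000);
        OCR1A = 3000;   /* ~1.5 ms pulse: gate closed (about 90 degrees) */
        _delay_ms(3000);
    }
}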
I have kind of a proof-of-concept project ahead. I want to transmit a very short message to a smartphone via light.
What I have is:
an LED strip (NOT individually addressable)
an Arduino
a typical smartphone
8 Bytes I want to transmit
The obstacles:
not every smartphone camera works well under all light conditions (the recorded color is sometimes not the same as the one the human eye perceives)
I don't have complete darkness around, but harsh daylight :D.
I want to encode a message in a sequence of light, for example by varying color or pulse duration.
Are there any suitable encoding algorithms or libraries around that can be recommended and I should take a look at?
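To make the "pulse duration" idea concrete, here is a rough sketch of the transmitter side I have in mind: on-off keying with Manchester encoding, so every bit contains a transition the receiver can lock onto. The pin, the bit time and the message are placeholders:

const int LED_PIN = 9;        // pin driving the strip (via a MOSFET); placeholder
const int HALF_BIT_MS = 50;   // half-bit time; must suit the camera's frame rate

const byte message[8] = { 'H', 'E', 'L', 'L', 'O', '1', '2', '3' };

// Manchester: 1 = low-to-high transition, 0 = high-to-low, within one bit period
void sendBit(bool bit) {
  digitalWrite(LED_PIN, bit ? LOW : HIGH);
  delay(HALF_BIT_MS);
  digitalWrite(LED_PIN, bit ? HIGH : LOW);
  delay(HALF_BIT_MS);
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  for (int i = 0; i < 8; i++)           // 8 bytes, MSB first
    for (int b = 7; b >= 0; b--)
      sendBit((message[i] >> b) & 1);
  delay(1000);                          // idle gap so the receiver can find the start
}

Would something like this be reliably decodable from a phone camera in daylight, or are there better-established encodings for this?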
What happens during a display mode change (resolution, depth) on an ordinary computer? (classic desktops and laptops)
It might not be so trivial since video cards are so different, but one thing is common to all of them:
The screen goes black (understandable since the signal is turned off)
It takes many seconds for the signal to return with the new mode
and if it is under D3D or GL:
The graphics device is lost and all VRAM objects must be reloaded, making the mode change take even longer
Can someone explain the underlying nature of this, and specifically why a display mode change is not a trivial reallocation of the backbuffer(s) and takes such a "long" time?
The only thing that actually changes are the settings of the so-called RAMDAC (a digital-to-analog converter directly attached to the video RAM); well, today with digital connections it's more like a RAMTX (a DVI/HDMI/DisplayPort transmitter attached to the video RAM). DOS graphics programmer veterans probably remember the fights between the RAMDAC, the specification and the woes of one's own code.
It actually doesn't take seconds until the signal returns. This is a rather quick process, but most display devices take their time to synchronize with the new signal parameters. Actually, with well-written drivers the change happens almost immediately, between vertical blanks. A few years ago, when displays were, errr, stupider and analogue, one could see the picture going berserk for a short moment after changing the video mode settings, until the display resynchronized (maybe I should take a video of this while I still own equipment capable of it).
Since what actually goes on is just a change of RAMDAC settings, there's also no data lost as long as the basic parameters stay the same: number of bits per pixel, number of components per pixel and pixel stride. And in fact OpenGL contexts usually don't lose their data with a video mode change. Of course visible framebuffer layouts change, but that also happens when moving a window around.
DirectX Graphics is a bit of a different story, though. There is device-exclusive access, and whenever you switch between Direct3D fullscreen mode and the regular desktop mode, all graphics objects are swapped out, which is the reason DirectX Graphics is so laggy when switching between a game and the Windows desktop.
If the pixel data format changes, it usually requires a full reinitialization of the visible framebuffer, but today's GPUs are exceptionally good at mapping heterogeneous pixel formats onto a target framebuffer, so no delays are necessary there either.