I am trying to record a week-long video with 6 Raspberry Pi 3 cameras. Each Raspberry Pi is powered from a regular phone charger; each charger has two USB ports and powers two of the Pis. I start the shoot over SSH from my computer, and I can see in the terminal log when each camera captures a frame. The problem is that every time I start a run longer than several hours, some of the cameras simply stop shooting; it's a different camera each time, with no particular pattern. Are there common reasons for cameras to stop shooting, such as a memory problem or a power problem?
I would love to hear about any lead I could check.
Thanks in advance.
I am writing an OS and want it to have a GUI. I can't find a good tutorial for drawing pixels on the screen.
I'd like an assembly + C example that I can build and run on an emulator like BOCHS or v86.
The basic idea is:
1) The bootloader uses the firmware (VBE on BIOS; GOP or UGA on UEFI) to set a graphics mode that is supported by the monitor, video card and OS. While doing this it gets the relevant information about the frame buffer from the firmware (physical address of the frame buffer, horizontal and vertical resolution, pixel format, bytes between horizontal lines) and passes it to the OS, so that the OS can use this information during "early initialisation" (before a native video driver is started), and can continue using it (as a kind of "limp mode") if there is no suitable native video driver.
2) The OS uses the information to figure out how to write to the frame buffer. This may be a calculation like physical_address = base_address + y * bytes_between_lines + x * bytes_per_pixel (where bytes_per_pixel is determined from the pixel format).
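For example, a minimal sketch in C (the struct and function names are just illustrative; whether the frame buffer is already mapped into your address space depends on your OS):

#include <stdint.h>

/* Frame buffer information collected by the boot loader (illustrative names). */
typedef struct {
    uintptr_t base_address;         /* physical (or already mapped) address of the frame buffer */
    uint32_t  horizontal_resolution;
    uint32_t  vertical_resolution;
    uint32_t  bytes_between_lines;  /* "pitch"; may be larger than width * bytes_per_pixel */
    uint32_t  bytes_per_pixel;      /* determined from the pixel format */
} frame_buffer_info_t;

/* Write one raw pixel value using the calculation described in 2). */
static void put_pixel(const frame_buffer_info_t *fb, uint32_t x, uint32_t y, uint32_t raw_value)
{
    uint8_t *address = (uint8_t *)fb->base_address
                     + (uintptr_t)y * fb->bytes_between_lines
                     + (uintptr_t)x * fb->bytes_per_pixel;

    switch (fb->bytes_per_pixel) {
        case 1: *address = (uint8_t)raw_value; break;
        case 2: *(uint16_t *)address = (uint16_t)raw_value; break;
        case 3: address[0] = raw_value & 0xFF;
                address[1] = (raw_value >> 8) & 0xFF;
                address[2] = (raw_value >> 16) & 0xFF; break;
        case 4: *(uint32_t *)address = raw_value; break;
    }
}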
Notes for "early initialisation":
for performance reasons, it's better to draw everything in a buffer in RAM and then copy ("blit") the data from the buffer in RAM to the frame buffer.
for performance reasons, the code that copies ("blits") the data from the buffer in RAM to the frame buffer can/should use some tricks (e.g. keeping track of which areas changed, "dirty rectangles") to avoid copying data that didn't change since the last blit.
to support many different pixel formats, it's possible to use a "standard" pixel format for the buffer in RAM (e.g. maybe "8-bit red, 8-bit green, 8-bit blue, 8-bit padding") and convert that to whichever pixel format the video card happens to want (e.g. maybe "5-bit blue, 6-bit green, 5-bit red, no padding") while copying data from the buffer in RAM to the frame buffer. This allows you to have a single version of all the functions to draw things (characters, lines, rectangles, icons, ...) instead of having multiple different versions of many different functions (one for each possible pixel format).
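A rough sketch of that "convert while blitting" idea in C, assuming the buffer in RAM uses the 8:8:8 format (with 8 bits of padding) and the video card happens to want 16-bit 5:6:5 (both purely for illustration):

#include <stdint.h>
#include <stddef.h>

/* Convert one pixel from the "standard" RAM format (0x00RRGGBB) to the format this
   example assumes the video card wants (5-bit red, 6-bit green, 5-bit blue in 16 bits). */
static inline uint16_t convert_pixel(uint32_t rgb)
{
    uint32_t r = (rgb >> 16) & 0xFF;
    uint32_t g = (rgb >> 8) & 0xFF;
    uint32_t b = rgb & 0xFF;
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Blit one horizontal line from the buffer in RAM to the frame buffer, converting the
   pixel format on the way. A real OS would have one such routine per supported frame
   buffer format and pick the right one once during boot. */
static void blit_line(uint16_t *frame_buffer_line, const uint32_t *ram_buffer_line, size_t pixels)
{
    for (size_t i = 0; i < pixels; i++) {
        frame_buffer_line[i] = convert_pixel(ram_buffer_line[i]);
    }
}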
Notes for "middle initialisation":
eventually the OS will try to find and start suitable device drivers for all the different devices. This includes trying to find a suitable driver for the video card/s (e.g. one that supports things like vertical sync, GPU acceleration, GPGPU, etc).
you will need to design a video driver interface that native video drivers can implement, one that (ideally) supports modern features (e.g. full 3D graphics and shaders); a minimal sketch of such an interface follows these notes.
when there is no native video driver, the OS can/should start a "generic frame buffer" driver that implements the same video driver interface (the one designed with hardware acceleration in mind) but does everything in software, without the benefit of hardware acceleration.
when video driver/s are started, the OS needs some kind of "hand off" where ownership of the frame buffer is passed from the earlier boot code to the video driver. After this "hand off", the earlier boot code (which was designed to draw things directly to the frame buffer) should not touch the frame buffer and should instead ask the video driver to do the "convert pixel data and copy to the frame buffer" work.
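To make the "same interface for native drivers and the generic frame buffer driver" idea a bit more concrete, here's one possible (heavily simplified, purely illustrative) shape for such an interface in C; a real interface would need far more than this (mode enumeration, vertical sync, 3D, etc):

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Where the GUI wants a canvas placed on the screen. */
typedef struct {
    int      canvas;        /* handle returned by create_canvas() */
    int32_t  x, y;          /* position on the screen */
} canvas_placement_t;

/* One possible video driver interface. A native driver implements these with hardware
   acceleration; the "generic frame buffer" driver implements the same table in software. */
typedef struct video_driver {
    const char *name;

    bool (*set_mode)(struct video_driver *drv, uint32_t width, uint32_t height);

    int  (*create_canvas)(struct video_driver *drv, uint32_t width, uint32_t height);
    void (*destroy_canvas)(struct video_driver *drv, int canvas);
    void (*upload_pixels)(struct video_driver *drv, int canvas, const uint32_t *pixels);

    /* Combine the listed canvases (back to front) into a frame, convert the pixel
       format, and display it. */
    void (*compose_and_display)(struct video_driver *drv,
                                const canvas_placement_t *placements, size_t count);
} video_driver_t;

With something like this, the "hand off" just means that once a driver has registered its table, the boot-time code stops writing to the frame buffer itself and routes everything through compose_and_display() instead.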
Notes for "after initialisation":
For a traditional "2D GUI", you typically have one buffer (or "canvas" or "texture" or whatever) for the background/desktop, plus more buffers/canvases for each window or dialog box, and possibly more for smaller things (e.g. mouse pointer, drop-down menus, "widgets", etc); such that applications can modify their own buffer/canvas but are prevented from directly or indirectly accessing any other buffer/canvas, for security reasons. The GUI then tells the video driver where each of these buffers/canvases should be drawn; and the video driver (using hardware acceleration if it's a native video driver) combines ("composes") these pieces to get pixel data for the whole frame, then does the pixel format conversion (using the GPU, hopefully) to get the raw pixel data to display/send to the monitor. This means various actions (moving windows around the screen, "alt-tabbing" between windows, moving the mouse around, etc) become extremely fast when there's a native video driver, because the CPU does nothing and the video card itself does all the work. A minimal software version of this "composing" step is sketched after these notes.
ideally there would be a way (e.g. OpenGL) for the application to ask the video driver to draw stuff in the application's buffer/canvas; such that more work can be done by the video card (and not done by the CPU). This is especially important for 3D games, but there's no reason why normal 2D applications can't benefit from using the same approach for 2D graphics.
Note that most beginners do everything wrong (they don't have a well-designed native video driver interface) and therefore will never have any native video drivers, because none of their software could use one anyway. These people will probably try to convince you that it's not worth the hassle (because in their experience native video drivers won't ever exist). The reality is that most native video drivers are extremely hard to write, but some of them (for virtual machines) aren't hard to write; and your goal should be to allow other people to write drivers eventually (by designing suitable interfaces and providing adequate documentation) rather than writing all the drivers yourself.
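And here's a very rough sketch of what the software-only "compose" step mentioned in the notes above might look like in the generic, no-hardware-acceleration case, reusing the 32-bit "standard" pixel format from earlier (illustrative names; no transparency or damage tracking, just copying each canvas at its position with naive clipping):

#include <stdint.h>
#include <stddef.h>

/* A canvas: a rectangle of pixels in the "standard" 32-bit format, plus where the
   GUI wants it placed on the screen. */
typedef struct {
    const uint32_t *pixels;
    uint32_t width;
    uint32_t height;
    int32_t  screen_x;
    int32_t  screen_y;
} canvas_t;

/* Compose a list of canvases (back to front) into one frame-sized buffer in RAM.
   The result would then go through the "convert pixel format and copy to the frame
   buffer" code shown earlier. */
static void compose_frame(uint32_t *frame, uint32_t frame_width, uint32_t frame_height,
                          const canvas_t *canvases, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        const canvas_t *c = &canvases[i];
        for (uint32_t y = 0; y < c->height; y++) {
            int32_t fy = c->screen_y + (int32_t)y;
            if (fy < 0 || fy >= (int32_t)frame_height) continue;
            for (uint32_t x = 0; x < c->width; x++) {
                int32_t fx = c->screen_x + (int32_t)x;
                if (fx < 0 || fx >= (int32_t)frame_width) continue;
                frame[(size_t)fy * frame_width + (size_t)fx] = c->pixels[(size_t)y * c->width + x];
            }
        }
    }
}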
The top answer did a very good job of explaining. You did ask for some example code, so here's a snippet from my GitHub; a detailed explanation follows.
1. bios_setup:
2. mov ah, 00h ; BIOS function 00h: set video mode
3. mov al, 13h ; mode 13h: 320x200, 256 colors
4. int 10h ; call the BIOS
5. mov ah, 0Ch ; BIOS function 0Ch: write graphics pixel
6. mov bh, 0 ; display page 0
7. mov al, 0 ; initial color: 0 = black
8. mov cx, 0 ; x = 0
9. mov dx, 0 ; y = 0
10. int 10h ; call the BIOS again to draw the pixel
Line 2 is where the fun begins: we move the value 0 into the ah register. At line 3, we move 13 hex into al - now we're ready for our BIOS call.
Line 4 calls the BIOS with interrupt vector 10 hex. The BIOS now checks the values in ah and al.
AH:
- tells the BIOS which function to run; 00h means "set video mode".
AL:
- tells the BIOS which video mode to set; 13h is 320x200 with 256 colors.
Now that we've called the interrupt on line 4, we're ready to move new values into some registers.
At line 5, we put 0C hex into the ah register.
This tells the BIOS that we want to write a graphics pixel.
At line 6, we put 0 into the bh register, which tells the BIOS which display page to draw on; page 0 is the one being displayed.
Next, all we have to do is set our color. Let's start with 0, which is black.
That's all nice, but where do we actually want to draw this black pixel?
That's where lines 8-9 come in: registers cx and dx store the x and y coordinates of the pixel to draw, respectively.
Once they are set, we call the BIOS with interrupt 10 hex, and the pixel is drawn.
After reading Brendan's elaborate and informative answer, this code will make much more sense. Certain values must be in certain registers before calling the BIOS simply because those are the registers the corresponding interrupt routine checks. Everything else is pretty straightforward. If you want another color, simply change the value in al. Want to put your pixel somewhere else? Mess around with the x and y values in cx and dx. Again, this isn't very efficient for graphics-intensive programs, as it is pretty slow. For educational purposes, however, it beats writing your own graphics driver ;) You can still gain some efficiency by drawing everything in a buffer in RAM before blitting to the screen, as Brendan said, but I'd rather keep my example simple.
Check out the full - free - example on my GitHub. I've also included a README and a Makefile, but they are Linux-exclusive. If you're running Windows, some googling will turn up everything you need to assemble the OS to a bootable floppy image, and just about any virtual machine host will do. Also, feel free to ask me about anything that's unclear. Cheers!
PS: I did not write a full tool, just a small NASM program that is meant to be assembled to a floppy image and run as a kernel (in a VM, if you will).
On a Raspberry Pi, I'm using omxplayer to show 3 IP camera streams on the screen. When I add a 4th, the screen goes blank and we have to start over.
GPU memory is set to 512 MB, and the log says there is over 700 MB of GPU memory remaining.
So what do I do?
I am a PhD student in the Faculty of Agriculture in Turkey. We have to measure the leaf area of some plants in our studies. I have been using a method that gives fairly realistic results.
I cut leaves off the plants and lay them on an A4 sheet of paper so that they don't touch each other. Then I take a picture of the A4 paper with the leaves from directly above. Later, I open the pictures in Photoshop, select the leaves with the Color Range tool, and read the leaves' pixel count from the histogram. Since I know the A4 paper's real size and its pixel count, and I have the leaves' pixel count, I can calculate a realistic leaf area: leaf area ≈ (leaf pixels / paper pixels) × 623.7 cm², because an A4 sheet is 21.0 cm × 29.7 cm.
This manual method is more effective than older methods, but if you have to measure a lot of samples it takes a lot of time. I want to build my own Arduino-based device that takes the pictures and analyses the leaf area with the pixel-counting method I explained above.
Is it possible to do this with an Arduino? Any ideas would be helpful.
Best regards.
I would suggest using a Raspberry Pi and a Pi camera to capture the image; then, using Python and libraries like NumPy, pandas, and OpenCV (an open-source image processing library), you can perform the analysis.
With a Raspberry Pi it is easy to capture the image, store it on a server, and then retrieve it and perform the processing.
But if you still want to use an Arduino for this, you will have to use a camera shield and then use Python for the processing on another machine (though I recommend a Raspberry Pi and a Pi camera).
Is it possible to get the average color (or more detail) of the computer screen in real time on a Mac?
For example, if I'm looking at this site, it will probably be white. I'm trying to make a very simple ambient light with an Arduino.
I'm not a Mac developer, so I may be off the mark here, but I doubt Mac OS X provides a method specifically to get the average color of the screen. What you could do is take a screenshot and resize it down to one pixel using Core Image, which should efficiently (using hardware acceleration where available) calculate the average.
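I can't test this, but something along these lines might work as a starting point. Instead of Core Image, it grabs the main display with the CoreGraphics C API and averages the raw pixel bytes itself (the BGRA channel order is an assumption, and recent macOS versions will ask for screen recording permission):

/* Untested sketch. Build with: cc avg.c -framework CoreGraphics -o avg */
#include <stdio.h>
#include <stdint.h>
#include <CoreGraphics/CoreGraphics.h>

int main(void)
{
    /* Grab a screenshot of the main display. */
    CGImageRef image = CGDisplayCreateImage(CGMainDisplayID());
    if (image == NULL) {
        fprintf(stderr, "could not capture the display\n");
        return 1;
    }

    size_t width         = CGImageGetWidth(image);
    size_t height        = CGImageGetHeight(image);
    size_t bytes_per_row = CGImageGetBytesPerRow(image);

    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
    const uint8_t *pixels = CFDataGetBytePtr(data);

    uint64_t sum_r = 0, sum_g = 0, sum_b = 0;
    for (size_t y = 0; y < height; y++) {
        const uint8_t *row = pixels + y * bytes_per_row;
        for (size_t x = 0; x < width; x++) {
            /* Assuming 32-bit BGRA pixels; verify on your machine. */
            sum_b += row[x * 4 + 0];
            sum_g += row[x * 4 + 1];
            sum_r += row[x * 4 + 2];
        }
    }

    uint64_t count = (uint64_t)width * height;
    printf("average RGB: %llu %llu %llu\n",
           (unsigned long long)(sum_r / count),
           (unsigned long long)(sum_g / count),
           (unsigned long long)(sum_b / count));

    CFRelease(data);
    CGImageRelease(image);
    return 0;
}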