Is there a way to install a lighter version of OpenCV on the Pi Pico? If not, is there a way to install the OpenCV library on an SD card and have the Pico fetch those library files from the SD card?
I am trying to record video using an OV7670 camera module and save it to an SD card. Later I need to upload the video to a custom AWS server. I have modules to capture images in MicroPython, but I cannot find any modules or libraries to record video.
No. OpenCV is a huge library with many moving parts. Even if a minimal build of OpenCV existed, I highly doubt that the 2MB of flash memory on the RP2040 would be enough for your use case. Combine that with the CPU's limited core count, internal RAM, etc., and you will probably end up with nothing. From what I know, you can use TinyML with MicroPython.
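If all you actually need is to get footage onto the SD card for later upload, one workaround is to skip video encoding on the Pico entirely and just write individual frames to files, then assemble them into a video on the server where resources are not a constraint. A rough MicroPython sketch of that idea is below; the ov7670 driver and its capture() call are placeholders for whatever image-capture module you already have, while machine and the sdcard.py driver are standard MicroPython pieces (pin numbers and frame rate are board-specific assumptions).

    # Rough sketch: dump raw frames from the camera to SD card on an RP2040.
    # The ov7670 module and its capture() call are hypothetical placeholders
    # for your existing camera driver; sdcard.py is the standard MicroPython
    # SD card driver (copy it to the board first).
    import os
    import time
    from machine import Pin, SPI

    import sdcard
    import ov7670  # hypothetical: your existing image-capture driver

    # Mount the SD card over SPI (pin numbers are board-specific assumptions).
    spi = SPI(0, baudrate=1000000, sck=Pin(2), mosi=Pin(3), miso=Pin(4))
    sd = sdcard.SDCard(spi, Pin(5))
    os.mount(sd, "/sd")

    cam = ov7670.OV7670()  # hypothetical constructor

    # Record roughly 5 seconds at about 5 fps by saving numbered frame files.
    for i in range(25):
        frame = cam.capture()  # hypothetical: returns a bytes buffer
        with open("/sd/frame_%04d.raw" % i, "wb") as f:
            f.write(frame)
        time.sleep_ms(200)

    os.umount("/sd")

On the AWS side you can then stitch the frame files into a proper video with ffmpeg or OpenCV, where memory and CPU are not an issue.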
I am working on a GPU server from my college with compute capability less than 3.0, Windows 7 Professional (64-bit), and 48GB RAM. I tried to install TensorFlow earlier, but then found out that my GPU cannot support it.
I now want to work with Keras, but since TensorFlow is not there, will it work or not, given that I am also not able to import it?
I have to do video processing and work on big video datasets for dynamic sign language recognition. Can anyone suggest what I can do to get going in the field of deep learning with such a GPU server? Or, if I want to work on CPU only, will there be any problem in this field of video processing?
I also have an HP ProBook 440 G4 laptop with Windows 10 Pro, so is it better than the GPU server I have or not?
I am totally new to this field and cannot find a way to work properly in it.
Your opinions are needed right now!
The 'dxdiag' information for my laptop is shown in the attached screenshots.
Thanks in advance!
For Keras to work you need either TensorFlow or Theano. Your laptop seems to have a GeForce 930M GPU. This card has a compute capability of 5.0 according to the NVIDIA documentation (https://developer.nvidia.com/cuda-gpus). So you are better off with your laptop, if my research is right.
I guess you will use CNNs for your video processing, and therefore I would advise you to use a GPU. You can also run your code on a CPU, but training will be much slower, since GPUs are made for parallel computing and CPUs are not (the big matrix multiplications profit a lot from parallel computing).
Maybe you could try a cloud computing provider if you think training is too slow on your laptop.
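If you want to verify what you actually have before committing to a machine, a quick check like the sketch below (assuming a TensorFlow 2.x install, where Keras is bundled as tf.keras) will tell you whether any supported GPU is visible. On the old server with compute capability below 3.0, the list should simply come back empty and training will fall back to the CPU.

    # Quick check (assumes TensorFlow 2.x, which bundles Keras as tf.keras):
    # list the GPUs TensorFlow can actually use.
    import tensorflow as tf

    print("TensorFlow version:", tf.__version__)
    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs visible to TensorFlow:", gpus)

    if not gpus:
        print("No supported GPU found - Keras will train on the CPU, "
              "which works but is much slower for CNNs on video data.")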
I'm trying to record timed video from a Thorlabs DCC CMOS 1280x1024 camera with code in Matlab, but Matlab doesn't recognize it as a video device in imaqhwinfo, i.e. with the
a = imaqhwinfo('winvideo', 1) command.
How can I fix it?
It's not supported hardware; see the Supported Hardware page.
Your options are to either request for your hardware to be added to the supported hardware list (if you're prepared to wait a few years...) or use the adaptor kit to write a communication layer between the Image Acquisition Toolbox acquisition engine and the third-party SDK and drivers for your hardware. However, this is a very advanced manoeuvre for experienced users and requires extensive knowledge of the hardware vendor's SDK (and is not something I can help you with).
I'm looking for an audio processing library that I can use to do some on-the-fly audio editing in my program, such as turning a knob to increase the pitch of the audio file being played, without saving the change to the song file itself. I plan to make this program for Windows and Mac, so I would need a cross-platform library. I don't have much money to spare, so it can't cost too much either. My program will be commercially available, if that changes anything. Thanks in advance for any help.
SoX at http://sox.sourceforge.net/
Wavesurfer at http://www.speech.kth.se/wavesurfer/
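To give a feel for SoX, here is a rough Python sketch that renders a pitch-shifted temporary copy of a song using SoX's pitch effect (the amount is in cents, 100 cents = one semitone), leaving the original file untouched. This is an offline command-line call purely for illustration, and the file names are placeholders; for genuinely live, knob-driven changes you would link a library such as libsox into your own program instead.

    # Sketch: use the SoX command-line tool to make a pitch-shifted temp copy
    # of a song without ever modifying the original file.
    import os
    import subprocess
    import tempfile

    def pitch_shifted_copy(song_path, cents):
        """Render a pitch-shifted temporary file with SoX and return its path."""
        tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
        tmp.close()
        subprocess.run(["sox", song_path, tmp.name, "pitch", str(cents)],
                       check=True)
        return tmp.name

    shifted = pitch_shifted_copy("song.wav", 200)  # up two semitones
    print("Play this file instead of the original:", shifted)
    os.remove(shifted)  # clean up when done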
I am writing a video using OpenCV on a Linux machine. I want to read the same video using OpenCV on a Windows machine. I am not able to do this using the standard codecs provided in OpenCV.
Can anybody suggest how I can read/write videos across the two platforms?
The OpenCV Wiki directly addresses this issue. See http://opencv.willowgarage.com/wiki/VideoCodecs and specifically the heading "Compatibility list."
Unfortunately the only codecs supported on all three platforms (Linux, Windows and OSX) are 'DIB', 'I420' and 'IYUV', which are all uncompressed video codecs and thus make for really huge file sizes.
The wiki also lists some codecs to try that may work on any two platforms but not on all three.
If you decide to use uncompressed video files, you can convert them to something with a smaller file size once they are on your Windows machine, using a program like VirtualDub.
Edit: FYI, On Windows I have OpenCV output in Motion-JPEG and then I use VirtualDub in directstream copy mode to resave the file which corrects a bug with the movie's index. These M-JPEG video files then play by default on Mac and Windows.
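To make the codec choice explicit, here is a small sketch using OpenCV's Python bindings (the C++ API is analogous): pick the FOURCC yourself, e.g. the uncompressed 'I420' for maximum cross-platform safety or 'MJPG' for the Motion-JPEG workflow described above, and read the result back the same way on the other machine. The file name, frame size and frame rate are just example values.

    # Sketch: write a video with an explicitly chosen FOURCC, then read it back.
    # 'I420' is uncompressed (portable but huge); 'MJPG' gives Motion-JPEG.
    import cv2
    import numpy as np

    fourcc = cv2.VideoWriter_fourcc(*"I420")
    writer = cv2.VideoWriter("test.avi", fourcc, 25.0, (640, 480))

    for i in range(50):
        # Dummy frames; replace with your real BGR frames.
        frame = np.full((480, 640, 3), i * 5 % 255, dtype=np.uint8)
        writer.write(frame)
    writer.release()

    # Reading back works the same way on Linux, Windows or OSX,
    # provided the codec is available there.
    cap = cv2.VideoCapture("test.avi")
    ok, frame = cap.read()
    print("Read first frame:", ok, frame.shape if ok else None)
    cap.release()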
If I am trying to read video into OpenCV, I often first convert my video to Cinepak (using VirtualDub, QuickTime, etc.) and then feed it into OpenCV. I use Cinepak because, for some reason, Cinepak encoders seem more prevalent than MJPEG encoders.
I don't think the problem is with OpenCV, I think it is with codecs, as you mentioned. I also don't think OpenCV comes with codecs... double check that you have the proper codecs installed under Windows.
Did you look at the documentation on video codecs?
I have this garden variety USB video camera, and it came with two mini-apps, one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that that's not how Windows is going to present the API. Not knowing what function names Windows might use for a video device API, or even if it has one, makes it difficult to look up, at least with the win32 api search capabilities that I have at my disposal.
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case you have three possible Windows APIs for capture:
DirectShow
VFW. This has more or less been replaced by DirectShow.
MediaFoundation. This is the newest API, intended to replace DirectShow. AFAIK it is not fully implemented yet and is only available in Vista.
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
Another possibility is to use OpenCV. OpenCV is an image processing library that you can also use to process the captured images. OpenCV has an image capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
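For example, a minimal capture loop with OpenCV looks like the sketch below (shown in Python for brevity; the C/C++ interface follows the same pattern). Each grabbed frame comes back as an ordinary height x width x 3 array of pixel values, which is exactly the kind of buffer you want for experimenting with image processing algorithms.

    # Minimal sketch: grab live frames from the first USB camera with OpenCV.
    # Each frame arrives as a height x width x 3 array of BGR pixel values.
    import cv2

    cap = cv2.VideoCapture(0)          # 0 = first camera the OS exposes
    if not cap.isOpened():
        raise RuntimeError("No camera found via the OS capture drivers")

    while True:
        ok, frame = cap.read()         # frame is a numpy array (BGR)
        if not ok:
            break

        # Toy processing step: convert to grayscale before displaying.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow("camera", gray)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
            break

    cap.release()
    cv2.destroyAllWindows()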
Using a standard capture API is the way to go.
A good indication of whether the camera requires a bespoke API is to see if it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know what compression codec has been used for the AVI file, you could unpack it.
Ideally, you would capture the video in a native format (YUV, RGB15 or similar), as you can then work on compression as well as manipulation.