How to save images captured by my industrial camera quickly to an SSD

I have an industrial camera which captures 120 frames per second. It connects to my PC through a USB 3.0 cable, and the frames are transferred stably to my PC at 120 fps as long as I only display them. However, when I try to save the frames to my SSD (M.2 PCIe interface) in JPEG format, I can only save 30 frames per second. Each JPEG file is about 80 KB, so saving all 120 frames per second would be about 10 MB/s. I have tested my SSD and it sustains at least 100 MB/s for writing. So where does it go wrong?
Btw, the API provided by the camera vendor is a Windows C++ API, and I am using Visual Studio for the capture-and-write program. One way I can think of is to buffer the images in memory for a certain time, then stop capturing and dump them to disk. However, my application needs to keep the camera capturing all the time, so I am wondering whether there is a way to save those images to the SSD continuously. My PC has an i7 and 32 GB of memory.
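One way to keep the capture loop from ever blocking on disk I/O is a producer/consumer queue with a dedicated writer thread: the camera callback only copies the frame into the queue and returns, while the writer thread drains the queue to the SSD continuously. The following is only a sketch of that idea; the callback signature, frame handling and file naming are assumptions, not the vendor SDK's API.

    // Illustrative sketch only: the camera callback signature, frame contents and
    // file naming are assumptions, not the vendor SDK's API.
    #include <condition_variable>
    #include <cstddef>
    #include <cstdio>
    #include <deque>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct Frame { std::vector<unsigned char> data; };

    static std::deque<Frame> g_queue;        // frames waiting to be written
    static std::mutex g_mutex;
    static std::condition_variable g_cv;
    static bool g_done = false;

    // Called from the camera callback: copy the frame into the queue and return
    // immediately, so the capture loop never waits on the disk.
    void on_frame(const unsigned char* buf, std::size_t len)
    {
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            g_queue.push_back(Frame{std::vector<unsigned char>(buf, buf + len)});
        }
        g_cv.notify_one();
    }

    // Runs on its own thread: drains the queue and writes each frame to the SSD.
    // With 32 GB of RAM the queue can absorb long bursts if the disk stalls briefly.
    void writer_thread()
    {
        int index = 0;
        for (;;) {
            Frame f;
            {
                std::unique_lock<std::mutex> lock(g_mutex);
                g_cv.wait(lock, [] { return !g_queue.empty() || g_done; });
                if (g_queue.empty()) return;   // finished and nothing left to write
                f = std::move(g_queue.front());
                g_queue.pop_front();
            }
            char name[64];
            std::snprintf(name, sizeof(name), "frame_%06d.raw", index++);  // hypothetical name
            if (FILE* fp = std::fopen(name, "wb")) {
                std::fwrite(f.data.data(), 1, f.data.size(), fp);  // bytes exactly as received
                std::fclose(fp);
            }
        }
    }

    int main()
    {
        std::thread writer(writer_thread);
        // ... start the camera here; its callback calls on_frame() for every frame ...
        { std::lock_guard<std::mutex> lock(g_mutex); g_done = true; }
        g_cv.notify_one();
        writer.join();
    }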

The bottleneck is not I/O but the raw-to-JPEG conversion, which is done by the SDK on the PC. Try saving the raw frames in real time and converting them to JPEG offline.
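If the frames are dumped as raw buffers at capture time, the JPEG conversion can then be done offline in a second pass. A minimal sketch of such a pass, assuming each raw file holds one 8-bit monochrome frame of known resolution and using OpenCV purely as an example encoder (neither detail is stated in the question):

    // Sketch: convert one raw frame file to JPEG offline. Pixel format (8-bit
    // mono) and the use of OpenCV are assumptions for illustration.
    #include <opencv2/imgcodecs.hpp>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    bool raw_to_jpeg(const char* raw_path, const char* jpg_path, int width, int height)
    {
        std::vector<unsigned char> buf(static_cast<std::size_t>(width) * height);
        std::FILE* fp = std::fopen(raw_path, "rb");
        if (!fp) return false;
        std::size_t got = std::fread(buf.data(), 1, buf.size(), fp);
        std::fclose(fp);
        if (got != buf.size()) return false;

        cv::Mat img(height, width, CV_8UC1, buf.data());  // wrap the buffer, no copy
        return cv::imwrite(jpg_path, img);                 // encode and write the JPEG
    }

Keep in mind that the raw stream is larger than the JPEG stream (width x height x bytes-per-pixel x 120 per second instead of ~10 MB/s), so it is worth checking that the SSD's sustained write rate covers it.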

Related

LabVIEW - Store very large 3D array (array of images)

I am working on a LabVIEW project in which I have to process some video (for example 5000 images of 640*480 pixels, so a lot of data to process). Using a for loop I process one image at a time, so that side is fine. On the other side, though, I have to store the results so that I can visualise the result for whichever image is requested after processing. Until now I have always worked with arrays, but here LabVIEW does not have enough memory to do the job (which is quite normal).
Is there a better way to deal with the data, using another solution such as clusters, saving the images to the local disk, etc.?
For information, the processing is quite slow (several minutes for a single image) and I don't have to save the results until the user asks for them, so I am anticipating the case where the whole video is processed without the results being saved.
Thank you in advance.
How much RAM do you have? Assuming 4 bytes per pixel, 5000 images at 640 x 480 would take about 6 GB, so if you have 16 GB of RAM or more you might be able to handle this data in RAM, as long as you're using 64-bit LabVIEW and you're careful about how memory is allocated - read through the VI Memory Usage topic in the help, for a start.
Alternatively, you can look at storing the data on disk in a format that lets you access an arbitrary chunk of the file. I haven't used it much myself, but HDF5 seems to be the obvious choice; if you're on Windows you can install the LiveHDF5 library from the VI Package Manager.
Did you consider storing the images as files in the system temporary directory and deleting them afterwards? Since the processing takes a long time per image, it should easily be possible to keep an "image queue" of 5 images loaded in memory at all times (to avoid any performance drop from loading a file right before processing it) while the rest sits on disk.
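For what it's worth, the same bounded prefetch idea can be sketched in code (shown here in C++ since a LabVIEW block diagram can't be pasted as text; in LabVIEW this would be a producer/consumer loop pair sharing a queue). The loader stub and queue depth are illustrative assumptions:

    // Illustrative only: keep at most 5 images loaded ahead of the processing
    // loop so the minutes-long processing never waits on disk, while memory
    // use stays bounded.
    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    struct Image { std::string path; std::vector<unsigned char> pixels; };

    // Stub: real code would read the image file from the temporary directory.
    Image load_from_disk(const std::string& path) { return Image{path, {}}; }

    void process_all(const std::vector<std::string>& paths)
    {
        std::deque<Image> ready;              // the small in-memory "image queue"
        std::mutex m;
        std::condition_variable cv;
        const std::size_t max_ahead = 5;

        std::thread loader([&] {
            for (const auto& p : paths) {
                Image img = load_from_disk(p);
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return ready.size() < max_ahead; });  // back-pressure
                ready.push_back(std::move(img));
                cv.notify_all();
            }
        });

        for (std::size_t i = 0; i < paths.size(); ++i) {
            Image img;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return !ready.empty(); });
                img = std::move(ready.front());
                ready.pop_front();
                cv.notify_all();
            }
            // ... run the slow per-image processing on img here ...
        }
        loader.join();
    }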

Scaling Application for video decoding using ffmpeg

I am building an application to read multiple IP camera streams (RTSP) and run different machine learning algorithms over them in real time. For each camera stream:
- I spawn an ffmpeg process which continuously breaks the RTSP stream into frames and stores them as images (JPEG). The streams use H.264 encoding, and I take 1 frame every second as output.
- Message queues corresponding to the models are given a message containing the location of the file.
- The models keep picking up the files and drawing inferences.
The problem I am facing is the CPU usage of the ffmpeg decoding processes. For real-time inference without any loss of frames, I have to add roughly one server core for every 2 camera streams. Is there any optimization I am missing for ffmpeg?
I am using an Intel Xeon Gold processor with Ubuntu 18.04.
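For reference, a per-stream extraction command of the kind described above might look like the following; the URL, JPEG quality and output path are placeholders rather than the actual invocation:

    ffmpeg -rtsp_transport tcp -i rtsp://CAMERA_URL -vf fps=1 -q:v 2 frames/cam1_%06d.jpg

Here -vf fps=1 keeps one frame per second and -q:v sets the JPEG quality; note that the full H.264 stream still has to be decoded before frames can be dropped, which is where the per-stream CPU cost comes from.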

How to record 720p videos without dropping frames in Android

I compiled the ffmpeg and h264 libraries for Android using the NDK.
I am recording videos using the muxing.c example from the ffmpeg library. Everything works correctly (I still haven't worked on the audio), but the camera is dropping frames and it takes around 100 ms to save each frame, which is unacceptable.
I have also tried building a queue and saving the frames from another thread (let's call it B), but at the end I need to wait around 120 seconds because the background thread (B) is still saving the queued frames.
Is there a workaround for this issue, besides reducing the video size? Ideally I would like to save the frames in real time, or at least reduce the saving time. Is it just that Android is incapable of doing this?
First of all, check whether you can be better served by the hardware encoder (via MediaRecorder or MediaCodec in Java, or using OpenMAX from native code).
If for some reason you must encode in software, and your device is multi-core, you can gain a lot by compiling x264 to use sliced multithreading. Let me cite my post from two years ago:
We are using x264 directly (no ffmpeg code involved), and with the ultrafast/zerolatency preset we get 30 FPS for VGA on a Samsung Galaxy Note 10.1 (http://www.gsmarena.com/samsung_galaxy_note_10_1_n8000-4573.php) with a quad-core 1.4 GHz Cortex-A9 Exynos 4412 CPU, which is on paper weaker than the Droid DNA's quad-core 1.5 GHz Krait Qualcomm MDM615m/APQ8064 (http://www.gsmarena.com/htc_droid_dna-5113.php).
Note that the x264 build scripts do not enable pthreads for Android (because the NDK does not include libpthread.a), but you can build the library with multithread support (very nice for a quad-core CPU) if you simply create a dummy libpthread.a; see https://mailman.videolan.org/pipermail/x264-devel/2013-March/009941.html.
Note that encoder setup is only one part of the problem. If you work with the deprecated camera API, you should use preallocated buffers and a background thread for the camera callbacks, as I explained elsewhere.
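A minimal sketch of such an encoder setup, assuming a VGA I420 (planar YUV 4:2:0) input; the resolution, frame rate and thread count are illustrative values, not taken from the original post:

    // Minimal sketch (not the original poster's code): x264 configured with the
    // ultrafast/zerolatency preset and sliced threads. Resolution, frame rate,
    // thread count and colorspace are assumptions for illustration.
    #include <cstdint>
    #include <x264.h>

    x264_t* open_encoder(int width = 640, int height = 480, int fps = 30)
    {
        x264_param_t param;
        x264_param_default_preset(&param, "ultrafast", "zerolatency");
        param.i_width          = width;
        param.i_height         = height;
        param.i_fps_num        = fps;
        param.i_fps_den        = 1;
        param.i_csp            = X264_CSP_I420;  // planar YUV 4:2:0 from the camera
        param.i_threads        = 4;              // e.g. one thread per core on a quad-core
        param.b_sliced_threads = 1;              // slice-based threading: parallel, low latency
        param.b_repeat_headers = 1;              // emit SPS/PPS with every keyframe
        param.b_annexb         = 1;              // Annex-B byte stream output
        x264_param_apply_profile(&param, "baseline");
        return x264_encoder_open(&param);        // NULL on failure
    }

b_sliced_threads splits each frame into slices that are encoded in parallel, which keeps latency low compared with frame-level threading, at a small cost in compression efficiency.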

Saving highest quality video from video-capture card

I have a machine with two 3 GHz dual-core Xeons and four 10k rpm SCSI-320 disks in RAID 0.
The capture card is an Osprey 560 64-bit PCI card.
The operating system is currently Windows Server 2003.
The video stream that I can open with VLC using DirectShow is of rather nice quality.
However, trying to save this video stream without loss of quality has proven quite difficult.
Using the H.264 codec I can achieve a satisfying quality, but all 4 cores jump to 100% load after a few seconds and it starts dropping frames; the machine is not powerful enough for real-time encoding. I've not been able to achieve satisfying MPEG-1 or MPEG-4 quality, no matter which bitrate I set.
The thing is, the disks in this machine are pretty fast even by today's standards, and they are bored. I don't care about disk usage, I want quality.
I have searched in vain for a way to pump that beautiful video stream that I see in VLC onto the disk for later encoding. I reckon the disks would be fast enough, or perhaps something could apply a light compression, enough that the disks can keep up but not so much as to lose visible quality.
I have tried FFmpeg, as it seems capable of streaming a YUV4 stream down to the disk, but of course FFmpeg is unable to open the DirectShow device (same error as this person: Ffmpeg streaming from capturing device Osprey 450e fails).
Please recommend capable software which can do this.
I finally found it out, and it is deceptively simple:
Just uncheck the "transcode" option in the window where you select an encoding preset.
2 gigabytes per minute is a low price to pay for finally getting those precious memories off old videotapes in the quality they deserve.

Fast way to save images from 3 different webcams

I have 3 webcams and I would like to store all the frames on my HDD in Delphi. I have done this, but the problem is that it's quite slow. I was also thinking about storing the data in one big file (like an ISO); I tried that with BlockWrite, and it is about two times slower than saving the frames as bitmaps under different names in a folder.
Edit: I attached a new screenshot where you can see its performance. In this test it had only one HD webcam at 15 frames/sec, saving the frames as JPG (using the Delphi XE2 native JPEG library) into a folder on the HDD. I could see that the software actually writes only about 2 MB of I/O output per second to my HDD from just one high-resolution 3D camera, yet in one minute the software loses 70-80 frames.
Any suggestions, solutions? Thanks
If you want to write video, you can use the TAVIRecorder component of GLScene.
I recorded four HD (1280*720), 25 fps videos from IP cams and got good results with it and the x264 codec, using less than 40% of an i7 4770 processor.
So, after writing is complete, you can play the file with any video player and grab the needed picture.
