I am building an application to read multiple IP camera (RTSP) streams and run different machine learning algorithms over them in real time. For each camera stream,
I spawn an ffmpeg process that continuously breaks the RTSP stream
into frames and stores them as JPEG images. The streams use H.264
encoding. I take one frame per second as output.
Message queues corresponding to the models receive messages
containing the file locations,
and the models keep picking up the files and drawing inferences.
The problem I am facing is the CPU usage of the ffmpeg decoding processes. For real-time inference without any loss of frames, I have to beef up my server by one core for every two camera streams. Is there any ffmpeg optimization I am missing?
I am using an Intel Xeon Gold processor running Ubuntu 18.04.
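Since only one frame per second is needed, one optimization worth trying is to have ffmpeg decode only keyframes instead of every frame. A minimal sketch, assuming the cameras can be configured with a keyframe interval of roughly one second (the URL and output path are placeholders):

# -skip_frame nokey makes the H.264 decoder skip everything but keyframes,
# which cuts decode CPU dramatically; -vsync 0 passes frames through as-is.
ffmpeg -rtsp_transport tcp -skip_frame nokey -i rtsp://camera-1/stream \
       -vsync 0 -q:v 2 frames/cam1_%06d.jpg

If the keyframe interval cannot be controlled, a plain fps=1 filter gives the same output rate, but it still decodes every frame and saves no CPU.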
Related
I am using libvlc for RTSP H.264 bitstream decoding and display on a PC. For the best experience (i.e. low latency), I use the following options:
:file-caching=0
:tcp-caching=0
:rtsp-caching=0
:network-caching=0
:clock-jitter=0
:avcodec-fast
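For reference, these are the same options one would pass on a VLC 2.x command line; a minimal sketch with a placeholder URL:

# Command-line equivalent of the per-media options above.
cvlc --file-caching=0 --tcp-caching=0 --rtsp-caching=0 \
     --network-caching=0 --clock-jitter=0 --avcodec-fast \
     rtsp://camera/stream

In libvlc code, the same strings are typically applied per media with libvlc_media_add_option().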
With these parameters, the latency is acceptable in comparison to the open-source project "ONVIF Device Manager" (ODM), where FFmpeg is used for decoding.
However, when the RTSP server delivers a low frame-rate stream (1 frame/second), this configuration freezes after displaying a few frames (3-5).
I have tried two different approaches:
Disable synchronization with
:clock-synchro=0
This lets the decoding process continue; however, an accumulating lag can be observed.
Use network-cache
My experiments show that
:network-caching=1200
will make decoding go smoothly; however, the latency is 1-2 seconds higher than ODM's.
Is there a way to handle the low frame-rate issue in libvlc without introducing such high latency?
I compiled ffmpeg and the h264 libraries for Android using the NDK.
I am recording videos using the muxing.c example from the ffmpeg library. Everything works correctly (I still haven't worked on the audio), but the camera is dropping frames and it takes around 100 ms to save each frame, which is unacceptable.
I have also tried making a queue and saving the frames in another thread (let's call it B), but at the end I need to wait around 120 seconds because the background thread (B) is still recording the frames.
Is there a workaround for this issue, besides reducing the video size? Ideally I would like to save the frames in real time, or at least reduce the saving time. Is it just that Android is incapable of doing this?
First of all, check whether you can be better served by the hardware encoder (via MediaRecorder or MediaCodec in Java, or using OpenMAX from native code).
If for some reason you must encode in software, and your device is multicore, you can gain a lot by compiling x264 to use sliced multithreading. Let me cite my post from 2 years ago:
We are using x264 directly (no ffmpeg code involved), and with the ultrafast/zerolatency preset we get 30 FPS for VGA on a Samsung Note 10.1 (http://www.gsmarena.com/samsung_galaxy_note_10_1_n8000-4573.php) with a quad-core 1.4 GHz Cortex-A9 Exynos 4412 CPU, which on paper is weaker than the Droid DNA's quad-core 1.5 GHz Krait Qualcomm MDM615m/APQ8064 (http://www.gsmarena.com/htc_droid_dna-5113.php).
Note that the x264 build scripts do not enable pthreads for Android (because the NDK does not include libpthread.a), but you can build the library with multithread support (very nice for a quad-core CPU) if you simply create a dummy libpthread.a; see https://mailman.videolan.org/pipermail/x264-devel/2013-March/009941.html.
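Once the library is built with pthread support, sliced multithreading is just an encoder setting. A hedged sketch using the x264 command-line tool (the library equivalent is setting b_sliced_threads in x264_param_t; the input file and resolution are placeholders):

# Sliced threading splits each frame into slices encoded in parallel,
# so per-frame latency drops on a multicore CPU. Note that the
# zerolatency tune already implies sliced threads.
x264 --preset ultrafast --tune zerolatency --sliced-threads --threads 4 \
     --input-res 640x480 --fps 30 -o out.264 input.yuv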
Note that encoder setup is only one part of the problem. If you work with the deprecated camera API, you should use preallocated buffers and a background thread for camera callbacks, as I explained elsewhere.
I'm programming a video capture app and need to record from 2 input sources (USB cams) at the same time.
When I record only the raw footage simultaneously, without compression, it works quite well (low CPU load, no video lag), but when compression is turned on the CPU load is very high and the footage lags.
How can I solve this? Or how can I tune the settings so that it can be accomplished?
Note: the raw streams are too big and thus cannot be used; otherwise I would not bother with compression at all and would just leave it as it is.
The AVFoundation framework in its current configuration is set up to provide HW acceleration for only one source at a time. For multiple accelerated sources, one needs to go deeper, to the VideoToolbox framework and below.
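To illustrate what offloading to the hardware encoder can look like, here is a hedged sketch using ffmpeg's h264_videotoolbox encoder with two avfoundation capture devices. The device indices and bitrates are assumptions, and whether two sessions are actually accelerated concurrently depends on the hardware:

# List devices first: ffmpeg -f avfoundation -list_devices true -i ""
ffmpeg -f avfoundation -framerate 30 -i "0" \
       -f avfoundation -framerate 30 -i "1" \
       -map 0:v -c:v h264_videotoolbox -b:v 4M cam0.mp4 \
       -map 1:v -c:v h264_videotoolbox -b:v 4M cam1.mp4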
I have 3 webcams and I would like to store all the frames on my HDD in Delphi. I have done this, but the problem is that it's quite slow. I was thinking about storing the data in one big file (like an ISO). I tried that with BlockWrite, and it is about two times slower than saving the frames with different names in a folder as bitmaps.
Edit: I attached a new screenshot where you can see its performance. In this test it had only one HD webcam at 15 frames/sec, saving the frames as JPEGs (using the Delphi XE2 native JPEG library) in an HDD folder. I could see that the software actually writes only 2 MB of I/O output per second to my HDD from just one high-resolution 3D camera, yet in one minute the software loses 70-80 frames.
Any suggestions or solutions? Thanks.
If you want to write video, you can use the TAVIRecorder component of GLScene.
I recorded four HD (1280x720) 25 fps videos from IP cams with it and the x264 codec, with good results and less than 40% CPU usage on an i7 4770.
So, after writing completes, you can play the file with any video player and get the needed picture.
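For comparison outside Delphi, the same idea (encode frames on the fly with x264 instead of writing them individually) can be tested with a hedged ffmpeg command on Windows; the DirectShow device name is a placeholder:

# List real device names with: ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f dshow -framerate 15 -i video="USB Camera" \
       -c:v libx264 -preset veryfast -crf 23 capture.mkv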
I'm looking for the fastest way to encode a webcam stream that will be viewable in an html5 video tag. I'm using a Pandaboard: http://www.digikey.com/product-highlights/us/en/texas-instruments-pandaboard/686#tabs-2 for the hardware. I can use gstreamer, cvlc, or ffmpeg. I'll be using it to drive a robot, so I need the least amount of lag in the video stream. Quality doesn't have to be great and it doesn't need audio. Also, this is only for one client, so bandwidth isn't an issue. The best solution so far is ffmpeg with mpjpeg, which gives me around a 1-second delay. Anything better?
I have been asked this many times, so I will try to answer it a bit generically and not just for mjpeg. Getting very low delays in a system requires a bit of system-engineering effort and an understanding of the components.
Some simple top-level tweaks I can think of are:
Ensure the codec is configured for the lowest delay. Codecs (especially embedded-system codecs) will have a low-delay configuration; enable it. If you are using H.264, this matters most. Most people don't realize that by the standard's requirements, H.264 decoders need to buffer frames before displaying them. This can be up to 16 frames for QCIF and up to 5 frames for 720p. That is a lot of delay in getting the first frame out. Even if you do not use H.264, still ensure you do not have B pictures enabled, as they add delay to getting the first picture out.
Since you are using mjpeg, I don't think this applies much to you.
Encoders also have a rate-control delay (called init delay or VBV buffer size). Set it to the smallest value that gives you acceptable quality; that will also reduce the delay. Think of this as the bitstream buffer between encoder and decoder. If you are using x264, this is the VBV buffer size.
Some other simple configurations: use as few I pictures as possible (a large intra period).
I pictures are huge and add delay when sent over the network. This may not be very visible in systems where end-to-end delay is in the range of 1 second or more, but when you are designing systems that need end-to-end delay of 100 ms or less, this and several other aspects come into play. Also ensure you are using a low-latency audio codec, AAC-LC (and not HE-AAC).
In your case, to get to lower latencies I would suggest moving away from mjpeg and using at least MPEG-4 without B pictures (Simple Profile), or better, H.264 Baseline Profile (x264 gives a zerolatency option). The simple reason you will get lower latency is that the encoded bitrate is lower, so there is less data to send out, and you can go to full frame rate. If you must stick to mjpeg, you are close to what you can get without more advanced feature support from the codec and system, using the open-source components as is.
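Putting these points together, a hedged sketch of such a pipeline with ffmpeg/x264 (the capture device, bitrate, and client address are assumptions):

# zerolatency disables B-frames and frame-level buffering, a small
# -bufsize keeps the VBV/rate-control delay low, a large GOP (-g)
# keeps I pictures rare, and MPEG-TS over UDP avoids TCP buffering.
ffmpeg -f v4l2 -framerate 30 -i /dev/video0 \
       -c:v libx264 -preset ultrafast -tune zerolatency -profile:v baseline \
       -b:v 1M -maxrate 1M -bufsize 100k -g 300 \
       -f mpegts udp://client:1234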
Another aspect is the transmission of the content to the display unit. If you can use UDP, it will reduce latency quite a lot compared to TCP, though it can be lossy at times depending on network conditions. You mentioned html5 video; I am curious how you are doing live streaming to an html5 video tag.
There are other aspects that can also be tweaked, which I would put in the advanced category; they require the system engineer to try various things out:
What is the network buffering in the OS? The OS buffers data before sending it out, for performance reasons. Tweak this to get a good balance between performance and speed.
Are you using CBR or VBR encoding? While CBR is great for low jitter, you can also use capped VBR if the codec provides it.
Can your decoder start decoding partial frames? Then you don't have to worry about framing the data before providing it to the decoder; just keep pushing data to the decoder as soon as possible.
Can you do field encoding? It halves the time to get the first picture out, compared to frame encoding.
Can you do sliced encoding, with callbacks whenever a slice is available, to send it over the network immediately? (A sketch follows below.)
In the sub-100 ms latency systems that I have worked on, all of the above are used. Some of the features may not be available in open-source components, but if you really need them and are enthusiastic, you could go ahead and implement them.
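For the sliced-encoding point, a hedged x264 command-line sketch (on the library side, x264 can hand out each slice through the nalu_process callback as soon as it is encoded; the input file and resolution are placeholders):

# Capping slice size at ~1400 bytes lets each slice go out in a single
# UDP packet as soon as it is ready, instead of waiting for the frame.
x264 --tune zerolatency --slice-max-size 1400 \
     --input-res 1280x720 --fps 30 -o out.264 input.yuv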
EDIT:
I realize you cannot do a lot of the above for an iPad streaming solution, and HLS also limits the latency you can achieve. But I hope it will prove useful in other cases when you need a low-latency system.
We had a similar problem; in our case it was necessary to time external events and sync them with the video stream. We tried several solutions, but the one described here solved the problem and has extremely low latency:
Github Link
It uses gstreamer to transcode to mjpeg, which is then sent to a small Python streaming server. This has the advantage that it uses the <img> tag instead of <video>, so it can be viewed by most modern browsers, including on the iPhone.
As you want the <video> tag, a simple solution is to use http-launch. That had the lowest latency of all the solutions we tried, so it might work for you. Be warned that Ogg/Theora will not work on Safari or IE, so those wishing to target Mac or Windows will have to modify the pipe to use MP4 or WebM.
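A hedged sketch of the kind of GStreamer MJPEG pipeline involved (the element names are standard GStreamer 1.x; the exact pipeline in the linked project may differ):

# Webcam -> JPEG frames -> multipart stream served on TCP port 8080.
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! jpegenc quality=60 \
    ! multipartmux boundary=frame ! tcpserversink host=0.0.0.0 port=8080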
Another solution that looks promising is gst-streaming-server. We simply couldn't find enough documentation to make it worth pursuing. I'd be grateful if somebody could ask a Stack Overflow question about how it should be used!