FFmpeg crashes on decoding MJPEG

I'm working with FFmpeg to decode MJPEG streams.
Recently I've run into access violation exceptions from FFmpeg. After investigating, I found that, due to network packet drops, I'm passing FFmpeg frames that may have "gaps" in them.
FFmpeg probably crashes because it jumps to a marker payload that doesn't exist in the frame's memory.
Any idea where I can find an MJPEG structure validator?
Is there any way to configure FFmpeg to perform such validation by itself?
Thanks.

I would be inclined to use GStreamer here instead of ffmpeg and set the "max-errors" property of the jpegdec plugin to -1:
gst-launch -v souphttpsrc location="http://[ip]:[port]/[dir]/xxx.cgi" do-timestamp=true is-live=true ! multipartdemux ! jpegdec max-errors=-1 ! ffmpegcolorspace ! autovideosink
This takes care of the corrupt JPEG frames and keeps the stream going.
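On GStreamer 1.x the same idea should carry over; a hedged equivalent, assuming the 1.x jpegdec keeps the max-errors property (ffmpegcolorspace is replaced by videoconvert):
gst-launch-1.0 -v souphttpsrc location="http://[ip]:[port]/[dir]/xxx.cgi" do-timestamp=true is-live=true ! multipartdemux ! jpegdec max-errors=-1 ! videoconvert ! autovideosink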

Didn't really find an answer to the question.
Apparently, ffmpeg doesn't handle corrupted frames very well.
Decided to try a different third-party decoder instead of ffmpeg. For now, at least for JPEG, it works faster and is much more robust.
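For anyone hitting the same crash: ffmpeg does expose an error-detection switch that makes the decoder treat bitstream inconsistencies as errors rather than pressing on (the library-level equivalent is the err_recognition field on AVCodecContext). This is only a sketch, with <input> a placeholder for the captured MJPEG data, and it is not guaranteed to prevent the access violations described above:
ffmpeg -err_detect +crccheck+bitstream+buffer+explode -i <input> -f null -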

Related

Live Streaming Using Red5 with H.265 Raw Incoming Video

How do you stream raw incoming H.265 video using Red5?
I've seen this example for streaming an FLV file, this one for the client side, and examples for H.264 with or without ffmpeg.
Basically the question can be split into two:
1. How do you stream it from a .h265 file? If streaming from a .265 file is not possible, how do you do it from a file that contains H.265 video? Any example?
2. How do you stream it from an incoming RTP session? I can get the UDP/RTP session unpacked into raw H.265 NAL packets. I'm assuming some conversion is needed; are there any libraries available for that purpose? Examples?
If I can get an answer to question 1, I know I can redirect the incoming stream to a named pipe, which can serve as an indirect solution to question 2. Streaming directly from the incoming UDP session is preferred, though.
This is a preliminary idea, though surely not the best solution.
A previous article, "How to stream in h265 using gstreamer", has a solution that adds the x265enc element to GStreamer.
From an AWS Kinesis Video Streams examples page, a command line can be used to take RTSP/RTP input in H.265 and convert it to H.264 format:
gst-launch-1.0 rtspsrc location="rtsp://192.168.1.<x>:8554/h265video" ! \
decodebin ! x264enc ! <video-sink>
The <video-sink> should be something specific to the Red5 server. Or, if streaming in H.265 format, which Red5 might or might not accept:
gst-launch-1.0 rtspsrc location="rtsp://192.168.1.<x>:8554/h265video" \
short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! \
<video-sink>
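For the <video-sink> itself, if Red5 is ingesting over RTMP, one plausible choice is an flvmux/rtmpsink pair. This is only a sketch, assuming a Red5 application named "live" and a stream name of "stream" (both placeholders), built on the H.264 transcoding pipeline above:
gst-launch-1.0 rtspsrc location="rtsp://192.168.1.<x>:8554/h265video" ! \
decodebin ! x264enc tune=zerolatency ! h264parse ! flvmux streamable=true ! \
rtmpsink location="rtmp://<red5-host>/live/stream"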

How do I add a delay to a live stream sourced from webcam (v4l2) with FFMPEG?

How can I use FFMPEG to add a delay to a stream being sent from a (v4l2) webcam to a media server?
The use case here is something like a security camera where I want to be able to stream video to a server when something is detected in the video. The easiest way to ensure the event of interest is captured on video is to use FFMPEG to stream from the camera to a virtual loopback device with an added delay. That loopback device can then be used to initiate live streaming when an event of interest occurs.
In GStreamer, I would accomplish a delay of this sort with the queue element's min-threshold-time parameter. For example, the following (much-simplified) pipeline adds a 2-second delay to the output coming from a v4l2 webcam before displaying it:
gst-launch-1.0 v4l2src device=/dev/video1 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=2000000000 ! xvimagesink
How do I accomplish the same thing with FFMPEG? There are some technical challenges that prevent us from using GStreamer for this.
I have investigated the itsoffset option for this, but as far as I can tell it is only usable for already-recorded files, and it is not clear what a good alternative would be.
With a recent git build of ffmpeg, the basic template is
ffmpeg -i input -vf tpad=start_duration=5 -af "adelay=5000|5000" stream-out
The tpad filter adds 5 seconds of black at the start of the video stream, and the adelay filter adds 5000 milliseconds of silence to the first two audio channels.
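Applied to the question's v4l2 setup, the template might look something like the following. This is only a sketch, assuming /dev/video1 is the physical camera and /dev/video2 is a v4l2loopback device (both placeholders), with a 2-second video-only delay:
ffmpeg -f v4l2 -i /dev/video1 -vf tpad=start_duration=2 -pix_fmt yuv420p -f v4l2 /dev/video2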

How to decode a video at a certain fps using ffmpeg

I am working on video decoding using FFmpeg.
When I try to decode a video encoded in H.265 at a certain fps (e.g. fps=25), the result is a decoded video, but at a different fps.
How can I decode a video at exactly fps=25, even if I have a high miss rate or dropped frames?
I use this command to decode:
ffmpeg -benchmark -i <input> -f null /dev/null
I am running the above command on Odroid-XU3 board that contains 8 cores. The OS is Ubuntu 14.04 LTS.
Please, any help is welcome.
Thank you in advance.
You can add ‘-re’ to the ffmpeg command line to process in real time. FFmpeg will not drop frames, though, so if it can't decode that fast, you will still fall behind.
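Placement matters: -re is an input option, so it goes before -i. Based on the command above, with <input> still a placeholder:
ffmpeg -benchmark -re -i <input> -f null /dev/null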

Stabilize ffmpeg rtsp to hls transmuxing

I'm using ffmpeg to convert a RTSP stream (from a security camera) into a HLS stream which I then play on a website using hls.js.
I start the transmuxing with: ffmpeg -i rtsp:<stream> -fflags flush_packets -max_delay 1 -an -flags -global_header -hls_time 1 -hls_list_size 3 -hls_wrap 3 -vcodec copy -y <file>.m3u8
I can get the stream to play, but the quality isn't good at all... Sometimes the stream jumps in time or freezes for a while. If I open it in VLC I get the same kinds of problems.
Any idea why? Or how can I stabilize it?
I've had a similar issue once, and it ended up being not enough bandwidth, whether an issue with whatever means the camera uses to stream, the connection to the server, etc. In my case, I had a bandwidth limit set as an FFMPEG argument that I simply had to increase. I also know that really low frame rates set on the camera can cause oddities, where you may have to add the -framerate <frames per second> argument, depending on how the page is set up.
If it is a bandwidth issue, the only way to resolve it, as far as I'm aware, is to increase the bandwidth somehow, or to make sure you aren't limiting yourself in some way, which could come down to exactly how you are hosting the website/server and verifying speeds from each point as best you can. If you can't find the oddity in the connection yourself or need additional help, comment and I will help further.
This is an old question, so I don't know if the OP will see this, but I'll leave it here as something to troubleshoot for anyone else having the same or a similar issue, since this is what helped me with a very similar problem.

Extract frames as images from an RTMP stream in real-time

I am streaming short videos (4 or 5 seconds), encoded in H.264 at 15 fps in VGA quality, from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a one-second delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, causing the last second not to get transcoded, which means I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza say that it is not possible to prevent the buffer from getting flushed. It is also not the ideal solution because there is a one-second delay before I start getting frames, and I have to re-encode the video when using the transcoder, which is computationally expensive and unnecessary for my needs.
I have also tried piping a video in real-time to FFmpeg and getting it to produce the PNG images but unfortunately it waits until it receives the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real-time as possible? I don’t mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
Since you linked this question from the Red5 user list, I'll add my two cents. You can certainly grab the video frames on the server side, but the issue you'll run into is transcoding from H.264 into PNG. The easiest way would be to use ffmpeg / avconv after getting the VideoData object. Here is a post that gives some details about getting the VideoData: http://red5.5842.n7.nabble.com/Snapshot-Image-from-VideoData-td44603.html
Another option is on the player side using one of Dan Rossi's FlowPlayer plugins: http://flowplayer.electroteque.org/snapshot
I finally found a way to do this with FFmpeg. The trick was to disable audio, use a different FLV metadata analyser, and reduce the duration that FFmpeg waits before processing. My FFmpeg command now starts like this:
ffmpeg -an -flv_metadata 1 -analyzeduration 1 ...
This starts producing frames within a second of receiving input from a pipe, so it writes the streamed frames pretty close to real time.
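For reference, a sketch of what the full command might look like with those options, assuming the FLV stream arrives on stdin and one PNG per frame is wanted (the pipe:0 input and the frame-%04d.png output pattern are assumptions, not the original command):
ffmpeg -an -flv_metadata 1 -analyzeduration 1 -i pipe:0 -f image2 frame-%04d.png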
