Grabbing a series of frames from an RTSP stream - ffmpeg

I'm looking for a way to continuously grab frames, as JPEGs, from an RTSP stream. I've stumbled upon ffmpeg, but the delay between starting it and grabbing the first frame seems quite long. Is there a good tool for doing this?
Regards

I've used the GStreamer libraries in the past to extract frames from mobile video.
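For the ffmpeg route, a minimal sketch (the stream URL and output pattern are placeholders): -rtsp_transport tcp avoids UDP packet loss, -fflags nobuffer cuts down the startup buffering the question complains about, and fps=1 grabs one frame per second.

    # grab one JPEG per second from the stream, numbered sequentially
    ffmpeg -fflags nobuffer -rtsp_transport tcp -i rtsp://camera.example/stream \
        -vf fps=1 -q:v 2 frame_%05d.jpg

Lower -q:v values mean higher JPEG quality (2 is near the top of the scale).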

Related

Why does it take forever just to add audio to an mp4?

I am currently using Kdenlive, but have also used ffmpeg when I have the simple task of adding audio to a video that does not yet have any. Since it is just a matter of putting the video file together with the audio, it seems like it ought to be simple. Is there something about encoding MP4s that means it must take a lot of processing to complete?
I have good hardware (an i7-6700K and a GTX 1080), but Kdenlive currently estimates 2.5 hours to add audio to a 10-minute video.
Without more info (encoder, settings, video width x height, instructions to reproduce the behavior, etc.) we can only guess: it's probably re-encoding the video instead of only muxing it. Encoding is CPU-intensive and takes a long time. Even so, 2.5 hours for 10 minutes seems excessive, but there is not enough info in the question to say why it takes that long.
If you want to add audio with ffmpeg, see How to add a new audio into a video using ffmpeg? This lets you mux the video (and optionally the audio) without re-encoding it: like a copy and paste.
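A sketch of that copy-style mux, with placeholder file names; -c copy copies both streams without touching a single video frame, so it finishes in seconds rather than hours:

    # take the video from the first input and the audio from the second,
    # copying both streams without re-encoding
    ffmpeg -i video.mp4 -i audio.m4a -map 0:v -map 1:a -c copy -shortest output.mp4

If the audio codec isn't MP4-compatible, re-encode just the audio with -c:a aac and keep -c:v copy.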

Why is the live video stream choppy while the audio stream plays normally in a Flash RTMP player after encoding?

My video stream is encoded with H.264 and the audio stream with AAC. I get these streams by reading an FLV file. I decode only the video stream in order to get all the video frames, do something to them with ffmpeg before re-encoding (for example, changing some pixels), and finally push the video and audio streams to crtmpserver. When I pull the live stream from this server, the video is choppy but the audio is normal. However, when I change gop_size from 12 to 3, everything is fine. What causes this problem? Can anyone explain it to me?
Either the CPU or the bandwidth is insufficient for your usage. RTMP always processes audio before video, so if ffmpeg or the network cannot keep up with the live stream, video frames will be dropped. Because audio is so much smaller and cheaper to encode, even a very slow CPU or a congested network will usually have no problem keeping up with it.
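For reference, the gop_size in the question corresponds to the -g option on the ffmpeg command line (or the gop_size field of AVCodecContext in code). A sketch with placeholder names, assuming an FLV input pushed back out over RTMP:

    # a shorter GOP (-g 3) inserts keyframes more often, so a player
    # can resynchronize faster after dropped frames, at some bitrate cost
    ffmpeg -re -i input.flv -c:v libx264 -g 3 -c:a copy -f flv rtmp://server/live/stream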

Timing Issues When Muxing Audio and Video with libav

I have a series of encoded packets of H.264 video and AAC audio. As they come in, I'm writing them to a video file using av_write_frame.
Suppose the stream contains, in order:
10 seconds of video, then
10 seconds of video and audio, then
10 seconds of video.
Everything muxes fine, and when played back in VLC or QuickTime it all looks good. In Windows Media Player, however, the audio plays immediately instead of starting at the 10-second mark.
It seems I'm doing something wrong, yet when I check the PTS of the audio packets, they are set to 10 seconds in the audio stream's time base, as expected.
It seems best to inject empty (silent) audio packets at the beginning of the stream; this was the only way to get video playback working in WMP. Every player handles the streams differently, and this is the most reliable way to ensure compatibility across players.
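The same idea can be sketched with the ffmpeg CLI (file names are placeholders): the adelay filter prepends silence to each audio channel so the audio timeline starts at zero instead of at 10 seconds.

    # pad both audio channels with 10 s of silence; the video is copied untouched
    ffmpeg -i input.mp4 -c:v copy -af "adelay=10000|10000" -c:a aac output.mp4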

Can the frame size be cropped during decoding using libavcodec?

I've followed Dranger's tutorial for displaying video using libav and FFmpeg: http://dranger.com/ffmpeg/
avcodec_decode_video2 seems to be the slowest part of the video decoding process. I will occasionally have two videos decoding simultaneously, but with only half of each video displayed side by side; in other words, half of each frame is off-screen. To speed up decoding, is there a way to decode only a portion of each frame?
No.
Codecs using interframe prediction need whole reference frames, so there's no way this could possibly work.
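The usual fallback is to decode the full frame and crop afterwards, which saves nothing on decoding but at least avoids pushing off-screen pixels through the rest of the pipeline. A sketch with ffmpeg's crop filter (the 1920x1080 source size and file names are assumptions):

    # keep only the left half (crop takes width:height:x:y) of each decoded frame
    ffmpeg -i input.mp4 -vf "crop=960:1080:0:0" -c:a copy left_half.mp4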

Video Slideshow from png files + mp3 audio

I have a bunch of .png frames and an .mp3 audio file which I would like to combine into a video. Unfortunately, the frames do not correspond to a constant frame rate; for instance, one frame may need to be displayed for 1 second, whereas another may need to be displayed for 3 seconds.
Is there any open-source software (something like ffmpeg) which would help me accomplish this? Any feedback would be greatly appreciated.
Many thanks!
This is not an elegant solution, but it will do the trick: duplicate frames as necessary so that you end up with some (fairly high) constant frame rate, say 30 or 60 fps (or higher if you need finer time resolution). You simply switch to the next source frame at the constant-rate slot closest to the exact timestamp you want. Frames that are exact duplicates will be encoded to a tiny size (a few bytes) by any decent codec, so the result stays fairly compact. Then just encode with ffmpeg as usual, as sketched below.
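A rough sketch of that duplication step in shell, assuming a hypothetical durations.txt listing "<frame> <seconds>" pairs and an arbitrary 30 fps target:

    # copy each source frame into enough 30 fps slots to cover its duration
    n=0
    while read -r frame secs; do
        copies=$(echo "$secs * 30 / 1" | bc)   # truncate to a whole frame count
        for _ in $(seq 1 "$copies"); do
            cp "$frame" "$(printf 'padded_%06d.png' "$n")"
            n=$((n + 1))
        done
    done < durations.txt
    # then encode the padded sequence at a constant rate, muxing in the audio
    ffmpeg -framerate 30 -i padded_%06d.png -i audio.mp3 \
        -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest slideshow.mp4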
If you have a whole lot of these and need to do it the "right" way: you can indicate the timing either in the container (MP4, MKV, etc.) or in the codec. For example, in an H.264 stream you would insert SEI messages of type pic_timing to specify the timing of each frame. Alternatively, you can write your own muxer, relying on a container library such as libmatroska (MKV) or GPAC (MP4), to indicate the timing in the container. Note that not all codecs/containers support arbitrarily variable frame rates, and only a few codecs support timing at the codec level. Also, if timing is specified in both the container and the codec, the container timing is used (though when muxing a stream into a container, the muxer should pick up the individual frame timestamps from the codec).
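In practice, ffmpeg's concat demuxer offers the container-timing route directly: a list file gives each image its own duration, so no frames need duplicating. A sketch with hypothetical file names; note the documented quirk that the last image is listed twice so its duration is honored.

    # frames.txt
    ffconcat version 1.0
    file frame0001.png
    duration 1
    file frame0002.png
    duration 3
    file frame0002.png

    # mux with the audio; -vsync vfr preserves the variable frame timing
    ffmpeg -f concat -i frames.txt -i audio.mp3 -vsync vfr \
        -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest slideshow.mp4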
