I would like to pass raw audio and video buffers from ffmpeg to gstreamer. For video, ffmpeg is producing 1920x1080 25 fps RGB output.
What is the best method to pass this from ffmpeg to gstreamer on the same hardware?
The end goal is that ffmpeg is not blocked from outputting if gstreamer cannot take a frame, and that gstreamer is not blocked if no frames are available.
For this we have looked at sockets and the tcp/udp plugins. However, if we get any buffering issues, the gstreamer pipeline will block until the buffer is clear/full.
We will have multiple TX/RX pairs running on the same Linux instance, so stdin/stdout will not work.
Is there a current preferred method for this type of transfer?
How can I use FFMPEG to add a delay to a stream being sent from a (v4l2) webcam to a media server?
The use case here is something like a security camera where I want to be able to stream video to a server when something is detected in the video. The easiest way to ensure the event of interest is captured on the video is to use FFMPEG to stream from the camera to a virtual loopback device with an added delay. That loopback device can then be used to initiate live streaming when an event of interest occurs.
In GStreamer, I would accomplish a delay of this sort with the queue element's min-threshold-time parameter. For example, the following (much-simplified) pipeline adds a 2-second delay to the output coming from a v4l2 webcam before displaying it:
gst-launch-1.0 v4l2src device=/dev/video1 ! queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=2000000000 ! xvimagesink
How do I accomplish the same thing with FFMPEG? There are some technical challenges that prevent us from using GStreamer for this.
I have investigated the itsoffset option for this, but as far as I can tell it is only usable for already-recorded files, and it is not clear what a good alternative would be.
With a recent git build of ffmpeg, the basic template is:
ffmpeg -i input -vf tpad=start_duration=5 -af "adelay=5000|5000" stream-out
The tpad filter will add 5 seconds of black at the start of the video stream, and the adelay filter will delay the first two audio channels by 5000 milliseconds (inserting silence at the start).
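Applied to the webcam-to-loopback case in the question, a sketch could look like the following (the device paths, the 2-second delay, and the assumption that the webcam feed has no audio are mine, not from the answer above):
ffmpeg -f v4l2 -i /dev/video1 -vf tpad=start_duration=2,format=yuv420p -f v4l2 /dev/video2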
I am working on video decoding using FFmpeg.
When I try to decode a video encoded with H.265 at a certain frame rate (e.g. 25 fps), the result is a decoded video, but at a different fps.
How can I decode a video at exactly fps=25, even if I have a high miss rate or dropped frames?
I use this command to decode:
ffmpeg -benchmark -i input -f null /dev/null
I am running the above command on an Odroid-XU3 board, which has 8 cores. The OS is Ubuntu 14.04 LTS.
Please, any help is welcome.
Thank you in advance.
You can add ‘-re’ to the ffmpeg command line to process in real time. FFmpeg will not drop frames, though, so if it can’t decode that fast, you will still fall behind.
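For example, using the same placeholder input as above, the command becomes:
ffmpeg -re -benchmark -i input -f null /dev/null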
I am developing a player based on ffmpeg.
Now I am trying to decode an HLS video. The video stream has several programs (AVProgram) separated by quality. I want to select one specific program with the desired quality, but ffmpeg reads packets from all programs (all streams).
How can I tell ffmpeg which streams to read?
Solved by using the discard field in the AVStream structure:
_stream->discard = AVDISCARD_ALL;
I am streaming short videos (4 or 5 seconds) encoded in H.264 at 15 fps in VGA quality from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a one-second delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, causing the last second not to get transcoded, meaning I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza say that it is not possible to prevent the buffer getting flushed. It is also not the ideal solution, because there is a one-second delay before I start getting frames, and I have to re-encode the video when using the transcoder, which is computationally expensive and unnecessary for my needs.
I have also tried piping a video in real-time to FFmpeg and getting it to produce the PNG images but unfortunately it waits until it receives the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real-time as possible? I don’t mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
Since you linked this question from the red5 user list, I'll add my two cents. You may certainly grab the video frames on the server side, but the issue you'll run into is transcoding from H.264 into PNG. The easiest way would be to use ffmpeg / avconv after getting the VideoData object. Here is a post that gives some details about getting the VideoData: http://red5.5842.n7.nabble.com/Snapshot-Image-from-VideoData-td44603.html
Another option is on the player side using one of Dan Rossi's FlowPlayer plugins: http://flowplayer.electroteque.org/snapshot
I finally found a way to do this with FFmpeg. The trick was to disable audio, use a different FLV metadata analyser, and reduce the duration that FFmpeg waits before processing. My FFmpeg command now starts like this:
ffmpeg -an -flv_metadata 1 -analyzeduration 1 ...
This starts producing frames within a second of receiving input from a pipe, so it writes the streamed frames pretty close to real time.
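For reference, a hypothetical complete invocation could look like this (only the three leading flags come from the answer above; the pipe input and the PNG output pattern are assumptions):
ffmpeg -an -flv_metadata 1 -analyzeduration 1 -i pipe:0 -f image2 frame-%04d.png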
I have an application wherein I have H.264 frames from an RTSP stream stored in a proprietary database. I need to be able to present a frame to an H.264 decoder (frames in sequence, of course) and get back the decoded frame (bitmap, whatever) output. I cannot use the traditional DirectShow streams because I don't have a stream. Is there any codec that can be used in this manner? Later I will need to go the other way as well (given bitmaps or other format images, create an H.264 stream). Any help you can give would be greatly appreciated.
Create a DirectShow source filter that assembles the H.264 stream from the database; then you can pass it to the standard DirectShow H.264 decoder. Look into the DirectShow samples for example source code.
As Isso mentioned already, you can push the H.264 data into a DirectShow pipeline and have the frames decoded. In addition to this, there is the H.264 Video Decoder MFT (Windows 7 and more recent only), which might be an easier way to use the decoder and apply it to an individual "frame". You can use other decoders as well, such as FFmpeg/libavcodec; however, you would still need to interface with decoders that are typically designed for stream processing.
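For the FFmpeg/libavcodec route, a minimal sketch could look like the following (assumptions: each database record holds one complete Annex-B H.264 access unit, and the function and variable names are hypothetical):

#include <libavcodec/avcodec.h>

/* One-time setup (error handling omitted):
 *   const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
 *   AVCodecContext *dec  = avcodec_alloc_context3(codec);
 *   avcodec_open2(dec, codec, NULL);
 */

/* Feed one encoded frame from the database to the decoder and try to fetch a
 * decoded picture into out (an av_frame_alloc()'d AVFrame). Returns 0 on
 * success, AVERROR(EAGAIN) if the decoder needs more input before it can
 * emit a frame. */
static int decode_one(AVCodecContext *dec, const uint8_t *data, int size, AVFrame *out)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;

    pkt->data = (uint8_t *)data;  /* one H.264 access unit from the database */
    pkt->size = size;

    ret = avcodec_send_packet(dec, pkt);
    if (ret >= 0)
        ret = avcodec_receive_frame(dec, out);  /* decoded picture */

    av_packet_free(&pkt);
    return ret;
}

Note that the decoded frame comes back in the decoder's native pixel format (typically YUV), so a conversion step (e.g. libswscale) would still be needed to get an RGB bitmap.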