I'm using FFmpeg to stream raw PCM data from an internet radio stream, which I then run through some processing.
FFmpeg buffers around 10 seconds of data before sending any output to stdout, and I've been trying to get it to send data at more frequent intervals so I can process it in smaller chunks.
I've looked at various FFmpeg command-line options, but could not find one that decreases the internal buffering.
Looking at the various format options, I've tried -fflags nobuffer and -avioflags direct on both input and output, to no avail.
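For reference, an invocation along the lines of what I've been trying looks like this (the stream URL and PCM parameters are placeholders for my real ones):
ffmpeg -fflags nobuffer -avioflags direct -i http://example.com/stream -f s16le -ar 44100 -ac 2 -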
Thanks.
I have an input consisting of a sequence of WebP images concatenated (for various reasons) into a single file. I have full control over this file's format and can potentially reformat it as a container (IVF etc.) if a suitable one exists.
I would like ffmpeg to consume this input, time each individual frame properly (say the first is displayed for 5 seconds, the next for 3 seconds, then 7, 12, etc.), and output a video (mp4).
My current approach uses image2pipe or webp_pipe followed by a list of loop filters, but I am curious whether there are any solid alternatives, potentially a simple format/container I could use, that would reduce or completely avoid the ffmpeg filter instructions, as there might be hundreds or more in total.
ffmpeg -filter_complex "...movie=input.webps:f=webp_pipe,loop=10:1:20,loop=10:1:10..." -y out.mp4
I am aware of the concat demuxer, but having a separate file for each input image is not an option in my case.
I have tried the IVF format, which works OK for VP8 frames but doesn't seem to accept WebP. An alternative would be welcome, but far too many formats exist for me to study every single one, so help would be appreciated.
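For clarity, this is the concat demuxer syntax I'm ruling out, since it needs one file on disk per image (the file names and durations here are just examples):
file 'frame1.webp'
duration 5
file 'frame2.webp'
duration 3
With that list saved as frames.txt, the video would be produced by:
ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr out.mp4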
I'm trying to save a stream to a video file. If the input stream goes down, FFmpeg automatically stops encoding, but I want to somehow still cover those seconds during which the input is down (with a black frame or by freezing the last frame).
What I have tried:
ffmpeg -i udp://x.x.x.x:y -c:v copy output.mp4
I wonder if it is possible to keep writing the mp4 file even if the input goes down.
You need to code a special application for this.
It will take the input (re-encoding it if necessary) and output to ffmpeg.
In the special app, you can check whether the source is offline and act accordingly.
The crucial thing here is that the PCR values must be continuous; this is why this kind of thing is hard to do or code in general. But it can be done.
I was reading about the -re option in ffmpeg.
From the docs:
-re (input)
Read input at the native frame rate. Mainly used to simulate a grab device, or live input stream (e.g. when reading from a file). Should not be used with actual grab devices or live input streams (where it can cause packet loss). By default ffmpeg attempts to read the input(s) as fast as possible. This option will slow down the reading of the input(s) to the native frame rate of the input(s). It is useful for real-time output (e.g. live streaming).
My doubt is about the apparently contradictory parts of the description above: it suggests not using the option with live input streams, but at the end it says the option is useful for real-time output.
Considering a situation where both the input and output are in rtmp format, should I use it or not?
Don't use it. It's useful for real-time output when ffmpeg is able to process a source at a speed faster than real-time. In that scenario, ffmpeg may send output at that faster rate, and the receiver may be unable or unwilling to buffer and queue its input.
It (-re) is suitable for streaming from offline files, reading them at their native speed (e.g. 25 fps); otherwise FFmpeg may output hundreds of frames per second, and this may cause problems.
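As a concrete illustration of that intended use, streaming a local file to an RTMP server at its native rate looks like this (the file name and server URL are placeholders):
ffmpeg -re -i input.mp4 -c copy -f flv rtmp://server/live/streamkey
With a live RTMP input the data already arrives at real-time speed, so throttling the reader gains nothing and can cause packet loss.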
I have 1-5 input streams, each uploading at a slightly different time offset.
With rtmp and ffmpeg, I can reliably encode a single stream into an HLS playlist that plays seamlessly on iOS, my target delivery platform.
I know that you can accept multiple input streams into ffmpeg, and I want to switch between the input streams to create a consistent, single, seamless output.
So I want to switch between
rtmp://localhost/live/stream1 .. rtmp://localhost/live/stream5 at a regular interval. Sometimes there will be multiple streams, and sometimes there won't.
Is there any way for ffmpeg to rotate between input streams while generating an HLS playlist? My goal is to avoid running duplicate instances of ffmpeg for server cost reasons, and I think connecting disparately encoded input streams for playback would be difficult if not impossible.
Switching on each segment is the ideal behavior, but I also need to keep the streams in time sync. Is this possible?
Switching live stream inputs can cause delays due to the initial connection time and buffering (rtmp_buffer).
There's no straightforward way to do it with ffmpeg. Since it's an open-source project, you could add the functionality yourself. It shouldn't be very complicated if all your inputs share the same codecs, number of tracks, frame sizes, etc.
Some people have suggested using other software to do the switching, such as MLT, or using filters such as zmq (ZeroMQ) to make ffmpeg accept commands.
One way to do it would be to re-stream the source as mpegts on a local port and use the local address as the input in the command that outputs the HLS:
Stream switcher (60 seconds of each stream, one at a time); you can make a script with your own logic, as in the sketch after these commands. This is for illustrative purposes:
ffmpeg -re -i rtmp://.../stream1 -t 60 -f mpegts udp://127.0.0.1:10000
ffmpeg -re -i rtmp://.../stream2 -t 60 -f mpegts udp://127.0.0.1:10000
[...]
ffmpeg -re -i rtmp://.../stream5 -t 60 -f mpegts udp://127.0.0.1:10000
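A minimal shell loop around the commands above (my own sketch; the stream URLs are the ones from the question and the 60-second interval is from the example) could look like:
while true; do
  for i in 1 2 3 4 5; do
    ffmpeg -re -i rtmp://localhost/live/stream$i -t 60 -f mpegts udp://127.0.0.1:10000
  done
done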
Use the local address as the source for the HLS stream; it'll wait for input if there's none and fix your DTS/PTS, but you will probably introduce some delay when switching:
ffmpeg -re -i udp://127.0.0.1:10000 /path/to/playlist.m3u8
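If you need explicit control over segmenting, the HLS muxer options can be spelled out; the segment length and list size here are arbitrary examples:
ffmpeg -re -i udp://127.0.0.1:10000 -f hls -hls_time 6 -hls_list_size 10 /path/to/playlist.m3u8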
I am streaming short videos (4 or 5 seconds) encoded in H264 at 15 fps in VGA quality from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a one-second delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, so the last second does not get transcoded, meaning I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza say that it is not possible to prevent the buffer from being flushed. It is also not the ideal solution: there is a one-second delay before I start getting frames, and the transcoder forces me to re-encode the video, which is computationally expensive and unnecessary for my needs.
I have also tried piping a video in real-time to FFmpeg and getting it to produce the PNG images but unfortunately it waits until it receives the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real-time as possible? I don’t mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
Since you linked this question from the red5 user list, I'll add my two cents. You may certainly grab the video frames on the server side, but the issue you'll run into is transcoding from h.264 into PNG. The easiest way would be to use ffmpeg / avconv after getting the VideoData object. Here is a post that gives some details about getting the VideoData: http://red5.5842.n7.nabble.com/Snapshot-Image-from-VideoData-td44603.html
Another option is on the player side using one of Dan Rossi's FlowPlayer plugins: http://flowplayer.electroteque.org/snapshot
I finally found a way to do this with FFmpeg. The trick was to disable audio, use a different FLV metadata analyser, and reduce the duration FFmpeg waits before processing. My FFmpeg command now starts like this:
ffmpeg -an -flv_metadata 1 -analyzeduration 1 ...
This starts producing frames within a second of receiving input from a pipe, so it writes the streamed frames pretty close to real-time.
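For completeness, a full invocation along these lines might be (the pipe input and PNG output pattern are assumptions standing in for my actual setup):
ffmpeg -an -flv_metadata 1 -analyzeduration 1 -i pipe:0 -f image2 frame%04d.png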