ffmpeg output to multiple files simultaneously

What format/syntax is needed for ffmpeg to output the same input to several different "output" files? For instance different formats/different bitrates? Does it support parallelism on the output?

The ffmpeg documentation has been updated with lots more information about this and options depend on the version of ffmpeg you use: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs

From the FFmpeg documentation: FFmpeg writes to an arbitrary number of output "files".
Just make sure each output file (or stream) is preceded by the proper output options.
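For example, a minimal sketch (hypothetical input and output names) where every output is preceded by its own set of options:
# one decode, three independent encodes; each output file gets the options that precede it
ffmpeg -i input.mp4 \
-c:v libx264 -b:v 1000k -c:a aac -b:a 128k out_1000k.mp4 \
-c:v libx264 -b:v 500k -c:a aac -b:a 96k out_500k.mp4 \
-c:v libvpx-vp9 -b:v 500k -c:a libopus out_500k.webm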

I use
ffmpeg -f lavfi -re -i 'life=s=300x200:mold=10:r=25:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16' \
-f lavfi -re -i sine=frequency=1000:sample_rate=44100 -pix_fmt yuv420p \
-c:v libx264 -b:v 1000k -g 30 -keyint_min 60 -profile:v baseline -preset veryfast -c:a aac -b:a 96k \
-f flv "rtmp://yourname.com:1935/live/stream1" \
-f flv "rtmp://yourname.com:1935/live/stream2" \
-f flv "rtmp://yourname.com:1935/live/stream3"

Is there any reason you can't just run more than one instance of ffmpeg? I've had great results with that.
Generally what I've done is run ffmpeg once on the source file to get it to a sort of base standard (say, a higher-quality H.264 MP4 file). This makes sure the other jobs run more quickly, since any issues in the source file will have been cleaned up in this first pass.
Then use that new source/input file to run X number of ffmpeg jobs, for example in bash (where you see "..." is where you'd put all your encoding options):
# create 'base' file
ffmpeg -loglevel error -er 4 -i $INPUT_FILE ... INPUT.mp4 >> $LOG_FILE 2>&1
# the command above will run and then move to start 3 background jobs
# text output will be sent to a log file
echo "base file done!"
# note & at the end to send job to the background
ffmpeg ... -i INPUT.mp4 ... FILENAME1.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME2.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME3.mp4 ... >/dev/null 2>&1 &
# wait until you have no more background jobs running
wait
echo "done!"
Each of the background jobs will run in parallel and will be (essentially) balanced across your CPUs, so you can make the most of each core.

Based on http://sonnati.wordpress.com/2011/08/30/ffmpeg-–-the-swiss-army-knife-of-internet-streaming-–-part-iv/ and http://ffmpeg-users.933282.n4.nabble.com/Multiple-output-files-td2076623.html:
ffmpeg -re -i rtmp://server/live/high_FMLE_stream \
-acodec copy -vcodec libx264 -s 640x360 -b 500k -vpre medium -vpre baseline rtmp://server/live/baseline_500k \
-acodec copy -vcodec libx264 -s 480x272 -b 300k -vpre medium -vpre baseline rtmp://server/live/baseline_300k \
-acodec copy -vcodec libx264 -s 320x200 -b 150k -vpre medium -vpre baseline rtmp://server/live/baseline_150k \
-acodec libfaac -vn -ab 48k rtmp://server/live/audio_only_AAC_48k
Or you could pipe the output to a "tee" and send it to "X" other processes to actually do the encoding, like
ffmpeg -i input - | tee ...
which might save CPU, since it could enable more output parallelism, which is apparently otherwise unavailable.
See http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
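A concrete sketch of that idea, assuming bash (it relies on process substitution) and made-up filenames and bitrates:
# decode once, fan the raw video out with tee, encode in two parallel processes
# (yuv4mpegpipe carries video only, so audio would have to be handled separately)
ffmpeg -i input.mp4 -f yuv4mpegpipe - | tee \
>(ffmpeg -y -f yuv4mpegpipe -i - -c:v libx264 -b:v 500k out_500k.mp4) \
>(ffmpeg -y -f yuv4mpegpipe -i - -c:v libx264 -b:v 250k out_250k.mp4) \
> /dev/null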

I have done it like this:
ffmpeg -re -i nameoffile.mp4 -vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 \
-f flv rtmp://rtmp.1.com/code \
-f flv rtmp://rtmp.2.com/code \
-f flv rtmp://rtmp.3.com/code \
-f flv rtmp://rtmp.4.com/code \
-f flv rtmp://rtmp.5.com/code
but it is not working as well as I was expecting when restreaming with nginx.
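One likely culprit, going by the note above that every output must be preceded by its own options: here the libx264/AAC settings only apply to the first RTMP output, so the remaining outputs fall back to the default FLV encoders. A sketch of the same command with the options repeated per output (first two destinations shown; the tee muxer described on the wiki page linked above avoids encoding the stream once per destination):
# sketch: every RTMP output gets its own codec options (this encodes once per output)
ffmpeg -re -i nameoffile.mp4 \
-c:v libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv rtmp://rtmp.1.com/code \
-c:v libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv rtmp://rtmp.2.com/code
# ...and so on for the remaining destinations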

Related

How to add a hard code of subs to this filter_complex

ffmpeg -ss 00:11:47.970 -t 3.090 -i "file.mkv" \
-ss 00:11:46.470 -t 1.500 -i "file" \
-ss 00:11:51.060 -t 0.960 -i "file.mkv" \
-an -c:v libvpx -crf 31 -b:v 10000k -y \
-filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[outv][outa];[outv]scale='min(960,iw)':-1[outv];[outv]subtitles='file.srt'[outv]" \
-map [outv] file_out.webm -map [outa] file.mp3
I have a filter that takes three different points in a file, concats them together, and scales them down; this part works.
I'm looking to see how to add a subtitle burn-in step to the filter_complex, rendering the subs at the exact timings using a file that I specify. When I use the above code it doesn't work.
The subtitles filter is receiving a concatenated stream. It does not contain the timestamps from the original segments. So the subtitles filter starts from the beginning. I'm assuming this is the problem when you said, "it doesn't work".
The simple method to solve this is to make temporary files then concatenate them.
Output segments
ffmpeg -ss 00:11:47.970 -t 3.090 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp1.webm
ffmpeg -ss 00:11:46.470 -t 1.500 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp2.webm
ffmpeg -ss 00:11:51.060 -t 0.960 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp3.webm
The timestamps are reset when fast seek is used (-ss before -i). -copyts will preserve the timestamps so the subtitles filter knows where to start the subtitles.
Make input.txt:
file 'temp1.webm'
file 'temp2.webm'
file 'temp3.webm'
Concatenate with the concat demuxer:
ffmpeg -f concat -i input.txt -c copy output.webm
-c copy enables stream copy mode so it avoids re-encoding to concatenate.

ffmpeg youtube livestream not working

Today I tried using ffmpeg on my Debian 8.3 server to livestream 24/7. However, it doesn't work.
#! /bin/bash
INRES="1280x1024" # input resolution (The resolution of the program you want to stream!)
OUTRES="1024x790" # Output resolution (The resolution you want your stream to be at)
FPS="60" # target FPS
QUAL="ultrafast"
# one of the many FFMPEG presets that can be used
# If you have low bandwidth, put the qual preset on 'ultrafast' (upload bandwidth)
# If you have medium bandwidth put it on normal to medium or fast
STREAM_KEY="hidden" # this is your streamkey
ffmpeg -f "file.avi" -s "$INRES" -r "$FPS" -i :0.0 \
-f alsa -ac 2 -i pulse -vcodec libx264 -s "$OUTRES" \
-acodec libmp3lame -ab 128k -ar 44100 -threads 0 \
-f flv "rtmp://a.rtmp.youtube.com/live2"
it gives me the output
Unknown input format: 'file.avi'
This
ffmpeg -f "file.avi" ...
should be
ffmpeg -i "file.avi" ...
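Assuming the intent is to stream the file itself (rather than capture a screen), a minimal corrected sketch could look like this; note the stream key also has to be appended to the RTMP URL, which the original script never does:
# sketch only: stream a file to YouTube; the stream key goes at the end of the URL
ffmpeg -re -i "file.avi" \
-c:v libx264 -preset "$QUAL" -s "$OUTRES" -r "$FPS" -pix_fmt yuv420p \
-c:a aac -b:a 128k -ar 44100 \
-f flv "rtmp://a.rtmp.youtube.com/live2/$STREAM_KEY"
(AAC audio is used here because that is what YouTube generally expects; the original script used libmp3lame.)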

ffmpeg says "at least one output file must be specified" when piping from another process

To force ffmpeg to read, decode, and scale only once, in order to bring CPU usage down, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding.
This improved the overall processing time by 15–20%.
INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
$INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
I then tried the syntax below:
ffmpeg -i c:\sample.mp4 -threads auto -f yuv4mpegpipe - | ffmpeg -y -f yuv4mpegpipe -i -vcodec libx264 -b:v 250k -threads auto c:\out-250.mp4 -vcodec libx264 -b:v 260k -threads auto c:\out-260.mp4
… but this error appears:
At least one output file must be specified
But I have specified the output file, which is C:\out-260.mp4. Still it doesn't work.
What's wrong?
ffmpeg -y -f yuv4mpegpipe -i -vcodec …
You didn't specify any input file. To read from stdin, use -:
ffmpeg -y -f yuv4mpegpipe -i - -vcodec …
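With the missing - added, the second command from the question becomes:
ffmpeg -i c:\sample.mp4 -threads auto -f yuv4mpegpipe - | ffmpeg -y -f yuv4mpegpipe -i - -vcodec libx264 -b:v 250k -threads auto c:\out-250.mp4 -vcodec libx264 -b:v 260k -threads auto c:\out-260.mp4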

shell script ffmpeg stops after 2 jobs

I have a pretty simple shell script, and after doing the first two jobs it just stops and sits there, doing nothing. It doesn't seem to matter what the third job is; if I switch the order etc., it will not finish it.
Any ideas would be great...
Here is my shell script
for f in "$@"
do
name=$(basename "$f")
dir=$(dirname "$f")
/opt/local/bin/ffmpeg -i "$f" -y -b 250k -deinterlace -vcodec vp8 -acodec libvorbis -nostdin "$dir/webm/${name%.*}.webm"
/opt/local/bin/ffmpeg -i "$f" -y -b 250k -strict experimental -deinterlace -vcodec h264 -acodec aac -nostdin "$dir/mp4/${name%.*}.mp4"
/opt/local/bin/ffmpeg -i "$f" -y -ss 00:00:15.000 -deinterlace -vcodec mjpeg -vframes 1 -an -f rawvideo -s 720x480 "$dir/img/${name%.*}.jpg"
done
Your final ffmpeg line needs -nostdin.
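That is, the thumbnail line becomes (the same command with -nostdin added, matching the other two lines):
/opt/local/bin/ffmpeg -i "$f" -y -ss 00:00:15.000 -deinterlace -vcodec mjpeg -vframes 1 -an -f rawvideo -s 720x480 -nostdin "$dir/img/${name%.*}.jpg"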

How to optimize ffmpeg w/ x264 for multiple bitrate output files

The goal is to create multiple output files that differ only in bitrate from a single source file. The solutions for this that were documented worked, but had inefficiencies. The solution that I discovered to be most efficient was not documented anywhere that I could see. I am posting it here for review and asking if others know of additional optimizations that can be made.
Source file: MPEG-2 video (letterboxed), 1920x1080 @ >10 Mbps
MPEG-1 audio @ 384 Kbps
Destination files: H.264 video, 720x400 @ multiple bitrates
AAC audio @ 128 Kbps
Machine: multi-core processor
The video quality at each bitrate is important, so we are running in 2-pass mode with the 'medium' preset:
VIDEO_OPTIONS_P2="-vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -vf scale=720:-1,crop=720:400"
The first approach was to encode them all in parallel processes
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4 &
The obvious inefficiencies are that the source file is read, decoded, scaled, and cropped identically for each process. How can we do this once and then feed the encoders with the result?
The hope was that generating all the encodes in a single ffmpeg command would optimize out the duplicate steps.
ffmpeg -y -i $INPUT_FILE \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4
However, the encoding time was nearly identical to the previous multi-process approach. This leads me to believe that all the steps are again being performed in duplicate.
To force ffmpeg to read, decode, and scale only once, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding. This improved the overall processing time by 15%-20%.
INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
$INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
Does anyone see potential problems with doing it this way, or know of a better method?
If you apply the audio/video options to the piped output of the first process, you could save some CPU, since it would exchange three encodings for a single one.
ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -f yuv4mpegpipe - \
| ffmpeg -y -f yuv4mpegpipe -i - \
-b:v 250k out-250.mp4 \
-b:v 500k out-500.mp4 \
-b:v 700k out-700.mp4
This is the recommended way for older versions of ffmpeg. There's a newer method (didn't test it) available since earlier this month: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
I think what the OP really wants is to use the filters once and encode several times. The method used is good, though you might get more speed with the "tee" filter; see also the recent addition at the bottom of http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs, "Multiple encodings for same input".
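For reference, a sketch of the "filter once, encode several times" idea inside a single ffmpeg process, using the split filter (bitrates and variable names follow the example above; a reasonably recent ffmpeg is assumed):
# scale/crop once, split the filtered video, encode each branch at a different bitrate
ffmpeg -y -i $INPUT_FILE -filter_complex "scale=720:-1,crop=720:400,split=3[v1][v2][v3]" \
-map "[v1]" -map 0:a -c:v libx264 -b:v 250k -c:a aac -b:a 128k out-250.mp4 \
-map "[v2]" -map 0:a -c:v libx264 -b:v 500k -c:a aac -b:a 128k out-500.mp4 \
-map "[v3]" -map 0:a -c:v libx264 -b:v 700k -c:a aac -b:a 128k out-700.mp4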
