How to optimize ffmpeg with x264 for multiple bitrate output files

The goal is to create multiple output files that differ only in bitrate from a single source file. The documented solutions for this worked, but had inefficiencies. The most efficient solution I found was not documented anywhere that I could see. I am posting it here for review, and to ask whether others know of additional optimizations.
Source file: MPEG-2 video (letterboxed), 1920x1080 @ >10 Mbps; MPEG-1 audio @ 384 Kbps
Destination files: H.264 video, 720x400, at multiple bitrates; AAC audio @ 128 Kbps
Machine: multi-core processor
The video quality at each bitrate is important, so we are running in 2-pass mode with the 'medium' preset:
VIDEO_OPTIONS_P2="-vcodec libx264 -preset medium -profile:v main -g 72 -keyint_min 24 -vf scale=720:-1,crop=720:400"
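The post never shows AUDIO_OPTIONS_P2. Given the AAC @ 128 Kbps target above, it would presumably be something like the following (an assumption on my part; ffmpeg builds of that era also needed -strict experimental to enable the native AAC encoder):
AUDIO_OPTIONS_P2="-acodec aac -strict experimental -b:a 128k"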
The first approach was to encode them all in parallel processes:
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 &
ffmpeg -y -i $INPUT_FILE $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4 &
The obvious inefficiencies are that the source file is read, decoded, scaled, and cropped identically for each process. How can we do this once and then feed the encoders with the result?
The hope was that generating all the encodes in a single ffmpeg command would optimize-out the duplicate steps.
ffmpeg -y -i $INPUT_FILE \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto -f mp4 out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto -f mp4 out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto -f mp4 out-700.mp4
However, the encoding time was nearly identical to the previous multi-process approach. This leads me to believe that all the steps are again being performed in duplicate.
To force ffmpeg to read, decode, and scale only once, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding. This improved the overall processing time by 15%-20%.
INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
$INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
Does anyone see potential problems with doing it this way, or know of a better method?

If you apply the audio/video options to the piped output of the first process, you could save some CPU, since it would replace the three encodings with a single one.
ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 $AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -f yuv4mpegpipe - \
| ffmpeg -y -f yuv4mpegpipe -i - \
-b:v 250k out-250.mp4 \
-b:v 500k out-500.mp4 \
-b:v 700k out-700.mp4
This is the recommended way for older versions of ffmpeg. There's a newer method (I haven't tested it) available since earlier this month: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs

I think what the OP really wants is to apply the filters once and encode several times. The method used is good, though you might get more speed with "tee"; see also the recent addition at the bottom of http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs, "Multiple encodings for same input".
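For reference, that "Multiple encodings for same input" recipe boils down to decoding once and fanning the raw yuv4mpegpipe stream out to several encoder processes with the shell's tee. A minimal sketch using bash process substitution (untested; note that yuv4mpegpipe carries video only, just like the OP's pipe):
ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -f yuv4mpegpipe - | tee \
>(ffmpeg -y -f yuv4mpegpipe -i - -vcodec libx264 -preset medium -b:v 250k out-250.mp4) \
>(ffmpeg -y -f yuv4mpegpipe -i - -vcodec libx264 -preset medium -b:v 500k out-500.mp4) \
>(ffmpeg -y -f yuv4mpegpipe -i - -vcodec libx264 -preset medium -b:v 700k out-700.mp4) \
>/dev/null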

Related

How to get playing audio file data in ffmpeg stream?

Dear experts of the wonderful ffmpeg utility! Please tell me, whoever knows:
I want to make a 24/7 stream on YouTube of music from looped video and audio tracks.
I do it like this:
ffmpeg -loglevel info -stream_loop -1 -y -re \
-i video.mp4 \
-f concat -safe 0 -i playlist.txt \
-c:v libx264 -preset veryfast -b:v 3000k -maxrate 3000k -bufsize 6000k \
-framerate 25 -video_size 1280x720 -vf "format=yuv420p" -g 50 -shortest -strict experimental \
-c:a aac -b:a 128k -ar 44100 \
-f flv rtmp://localhost/live/my-stream
i.e. video.mp4 loops continuously, and the MP3s from playlist.txt play in turn.
Everything works fine this way. But I also want to show the title of the currently playing track.
As on some YouTube radio streams, for example. With cover art it would be perfect!
Any ideas how this can be implemented?
I know that it is possible to display text through drawtext: you can output text from a file, which you can update separately yourself. But how do I get the data of the currently playing file? ffmpeg does not expose such information, only stream parameters: fps, framerate... Or is it possible to get it after all?
Or are there better and easier ways?
Thanks in advance for your help!
You can use ffprobe to extract metadata from files.
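For example, something along these lines could write the current track's title tag to a text file that drawtext re-reads every frame (a sketch; current.mp3 and track.txt are placeholder names, and it assumes your MP3s carry a title tag):
ffprobe -v quiet -show_entries format_tags=title -of default=noprint_wrappers=1:nokey=1 current.mp3 > track.txt
The overlay in the streaming command would then use something like -vf "drawtext=textfile=track.txt:reload=1:fontcolor=white:x=20:y=20", with reload=1 making drawtext re-read the file each frame; a wrapper script would need to refresh track.txt whenever the playlist advances.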

what filters affect ffmpeg encoding speed

What are the options in this command that would cause my encoding speed to be 0.999x instead of 1.0x or higher?
ffmpeg -y \
-loop 1 -framerate 30 -re \
-i ./1280x720.jpg \
-stream_loop -1 -re \
-i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 2500k -maxrate 2500k -bufsize 10000k \
-preset slow -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-g 60 \
-f flv tmp.flv
I am trying to figure out why this only encodes at 0.999x speed. Is there anything I could do to speed it up? Two-pass encoding? I cannot understand why the encoding speed is so slow.
Also, please note I've tried presets from slow to ultrafast; the encoding speed stays relatively unchanged.
The -re flag is the rate-limiting factor: it feeds the input in real time, so the encoder can't progress any faster.
Remove the -re before the inputs. It is needed only when simulating a real-time input, or when streaming to an output that expects its input in real time.
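With the -re flags removed, the same command can encode as fast as the CPU allows:
ffmpeg -y \
-loop 1 -framerate 30 \
-i ./1280x720.jpg \
-stream_loop -1 \
-i ./audio.mp3 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 2500k -maxrate 2500k -bufsize 10000k \
-preset slow -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-af "dynaudnorm=f=150:g=15" \
-g 60 \
-f flv tmp.flv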

Adding multiple audio tracks and subtitles to dash manifest (mpd) with ffmpeg

I'm trying to create a website to stream some videos. For each video, I extract the video, audio, and subtitles into 3 different folders. Some videos have multiple audio tracks and multiple subtitles, and despite a lot of research I don't know how to add all of them to the manifest. Right now, I use this command:
ffmpeg -f webm_dash_manifest \
-i video1.mp4 -f webm_dash_manifest \
-i video2.mp4 -f webm_dash_manifest \
-i audio1.webm -f webm_dash_manifest \
-i audio2.webm -f webm_dash_manifest \
-i subtitles.vtt \
-c copy -map 0 -map 1 -map 2 -map 3 \
-f webm_dash_manifest -adaptation_sets "id=0,streams=v id=1,streams=a" manifest.mpd
My two videos have different resolutions and bitrates, and that part works perfectly. But I don't get any subtitles, and my two audio tracks are treated as a single audio track available at two different bitrates (just like the videos). I think I need several adaptation_sets, but I don't know how to create them.
How can I create that manifest the right way?
After a few days, I found the solution.
My goal was to convert a video into MPEG-DASH content, which is really great for streaming.
I encode video to H.264, audio to AAC, and subtitles to WebVTT.
These settings give good compatibility across browsers.
VP9 is really nice too, but it takes too long to encode for me.
Tools required:
ffmpeg: https://www.ffmpeg.org/download.html
mp4dash & mp4fragment: https://www.bento4.com/downloads/
Let's suppose we have a 1080p video file "video.mkv" with these streams:
0: video stream
1: audio stream, it language
2: audio stream, en language
3: subtitle stream, it language
4: subtitle stream, en language
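Before extracting anything, you can confirm the actual stream indexes and language tags with ffprobe (a quick sketch):
ffprobe -v error -show_entries stream=index,codec_type:stream_tags=language -of csv=p=0 video.mkv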
1. Extracting the different streams
1.1 Video
I extract the video stream and transcode it to different resolutions and bitrates:
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 5300k -maxrate 5300k -bufsize 2650k -vf 'scale=-1:1080' tmp/video/video-1080.mp4
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 2400k -maxrate 2400k -bufsize 1200k -vf 'scale=-1:720' tmp/video/video-720.mp4
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 600k -maxrate 600k -bufsize 300k -vf 'scale=-1:360' tmp/video/video-360.mp4
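Since only the height, bitrate, and buffer size change between those three commands, you could equally drive them from a small bash loop (a sketch; adjust the triplets to taste):
for spec in 1080:5300k:2650k 720:2400k:1200k 360:600k:300k; do
  # split each triplet into height, bitrate, and buffer size
  IFS=: read -r h br buf <<< "$spec"
  ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' \
    -b:v "$br" -maxrate "$br" -bufsize "$buf" -vf "scale=-1:$h" "tmp/video/video-$h.mp4"
done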
1.2 Audio
ffmpeg -i video.mkv -map 0:1 -ac 2 -ab 192k -vn -sn tmp/audio/audio-it.mp4
ffmpeg -i video.mkv -map 0:2 -ac 2 -ab 192k -vn -sn tmp/audio/audio-en.mp4
1.3 Subtitle
ffmpeg -i video.mkv -map 0:3 -vn -an tmp/subtitle/subtitle-it.vtt
ffmpeg -i video.mkv -map 0:4 -vn -an tmp/subtitle/subtitle-en.vtt
You can use the "-loglevel warning" option to see less information.
2. Fragment video and audio
2.1 Video
mp4fragment tmp/video/video-1080.mp4 tmp/video/f-video-1080.mp4
mp4fragment tmp/video/video-720.mp4 tmp/video/f-video-720.mp4
mp4fragment tmp/video/video-360.mp4 tmp/video/f-video-360.mp4
2.2 Audio
mp4fragment tmp/audio/audio-it.mp4 tmp/audio/f-audio-it.mp4
mp4fragment tmp/audio/audio-en.mp4 tmp/audio/f-audio-en.mp4
3. Split files and create the dash manifest
mp4dash --mpd-name=manifest.mpd tmp/video/f-video-1080.mp4 tmp/video/f-video-720.mp4 tmp/video/f-video-360.mp4 tmp/audio/f-audio-it.mp4 tmp/audio/f-audio-en.mp4 \[+format=webvtt,+language=it\]tmp/subtitle/subtitle-it.vtt \[+format=webvtt,+language=en\]tmp/subtitle/subtitle-en.vtt
You can now delete the tmp folder
rm -rf tmp
(and your source file if you don't need it anymore)
You now have MPEG-DASH content that can be streamed. You still have to serve the files yourself (allow CORS and enable byte-range requests).
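A quick way to check that your server honors byte-range requests (the URL here is a placeholder) is to ask for a partial response and look for HTTP status 206:
curl -s -o /dev/null -w "%{http_code}\n" -H "Range: bytes=0-99" https://example.com/videos/manifest.mpd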
I use Angular with rx-player as the player. I can switch language and subtitles, and the video quality adapts to the client's bandwidth!
Rx-player: https://github.com/canalplus/rx-player

ffmpeg says "at least one output file must be specified" when piping from another process

To force ffmpeg to read, decode, and scale only once in order to bring CPU usage down, I put those steps in one ffmpeg process and piped the result into another ffmpeg process that performed the encoding.
This improved the overall processing time by 15–20%.
INPUT_STREAM="ffmpeg -i $INPUT_FILE -vf scale=720:-1,crop=720:400 -threads auto -f yuv4mpegpipe -"
$INPUT_STREAM | ffmpeg -y -f yuv4mpegpipe -i - \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 250k -threads auto out-250.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 500k -threads auto out-500.mp4 \
$AUDIO_OPTIONS_P2 $VIDEO_OPTIONS_P2 -b:v 700k -threads auto out-700.mp4
I then tried the syntax below:
ffmpeg -i c:\sample.mp4 -threads auto -f yuv4mpegpipe - | ffmpeg -y -f yuv4mpegpipe -i -vcodec libx264 -b:v 250k -threads auto c:\out-250.mp4 -vcodec libx264 -b:v 260k -threads auto c:\out-260.mp4
… but this error appears:
At least one output file must be specified
But I have specified the output file, which is C:\out-260.mp4. Still it doesn't work.
What's wrong?
ffmpeg -y -f yuv4mpegpipe -i -vcodec …
You didn't specify any input file. To read from stdin, use -:
ffmpeg -y -f yuv4mpegpipe -i - -vcodec …
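Applied to the full command above, the corrected pipeline becomes:
ffmpeg -i c:\sample.mp4 -threads auto -f yuv4mpegpipe - | ffmpeg -y -f yuv4mpegpipe -i - -vcodec libx264 -b:v 250k -threads auto c:\out-250.mp4 -vcodec libx264 -b:v 260k -threads auto c:\out-260.mp4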

ffmpeg output to multiple files simultaneously

What format/syntax is needed for ffmpeg to output the same input to several different "output" files? For instance different formats/different bitrates? Does it support parallelism on the output?
The ffmpeg documentation has been updated with lots more information about this, and the options depend on the version of ffmpeg you use: http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
From the FFmpeg documentation: FFmpeg writes to an arbitrary number of output "files". Just make sure each output file (or stream) is preceded by the proper output options.
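For example, one input fanned out to two differently encoded files, each with its own options (a minimal sketch):
ffmpeg -i input.mp4 \
-c:v libx264 -b:v 1000k -c:a aac -b:a 128k out-high.mp4 \
-c:v libx264 -b:v 500k -c:a aac -b:a 96k out-low.mp4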
I use
ffmpeg -f lavfi -re -i 'life=s=300x200:mold=10:r=25:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16' \
-f lavfi -re -i sine=frequency=1000:sample_rate=44100 -pix_fmt yuv420p \
-c:v libx264 -b:v 1000k -g 30 -keyint_min 60 -profile:v baseline -preset veryfast -c:a aac -b:a 96k \
-f flv "rtmp://yourname.com:1935/live/stream1" \
-f flv "rtmp://yourname.com:1935/live/stream2" \
-f flv "rtmp://yourname.com:1935/live/stream3"
Is there any reason you can't just run more than one instance of ffmpeg? I've had great results with that.
Generally what I've done is run ffmpeg once on the source file to bring it to a sort of base standard (say, a higher-quality H.264 MP4 file). This makes sure the other jobs run more quickly, since any issues in the source file get cleaned up in this first pass.
Then use that new source/input file to run X number of ffmpeg jobs, for example in bash as below.
Where you see "..." is where you'd put all your encoding options.
# create 'base' file
ffmpeg -loglevel error -er 4 -i $INPUT_FILE ... INPUT.mp4 >> $LOG_FILE 2>&1
# the command above will run and then move to start 3 background jobs
# text output will be sent to a log file
echo "base file done!"
# note & at the end to send job to the background
ffmpeg ... -i INPUT.mp4 ... FILENAME1.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME2.mp4 ... >/dev/null 2>&1 &
ffmpeg ... -i INPUT.mp4 ... FILENAME3.mp4 ... >/dev/null 2>&1 &
# wait until there are no more background jobs running
wait
echo "done!"
Each of the background jobs will run in parallel and will be (essentially) balanced over your cpus, so you can maximize each core.
based on http://sonnati.wordpress.com/2011/08/30/ffmpeg-–-the-swiss-army-knife-of-internet-streaming-–-part-iv/ and http://ffmpeg-users.933282.n4.nabble.com/Multiple-output-files-td2076623.html
ffmpeg -re -i rtmp://server/live/high_FMLE_stream \
-acodec copy -vcodec x264lib -s 640x360 -b 500k -vpre medium -vpre baseline rtmp://server/live/baseline_500k \
-acodec copy -vcodec x264lib -s 480x272 -b 300k -vpre medium -vpre baseline rtmp://server/live/baseline_300k \
-acodec copy -vcodec x264lib -s 320x200 -b 150k -vpre medium -vpre baseline rtmp://server/live/baseline_150k \
-acodec libfaac -vn -ab 48k rtmp://server/live/audio_only_AAC_48k
Or you could pipe the output to a tee and send it to "X" other processes to actually do the encoding, like:
ffmpeg -i input - | tee ...
which might save CPU, since it can enable more output parallelism that is apparently otherwise unavailable.
See http://ffmpeg.org/trac/ffmpeg/wiki/Creating%20multiple%20outputs
I have done it like this:
ffmpeg -re -i nameoffile.mp4 -vcodec libx264 -c:a aac -b:a 160k -ar 44100 -strict -2 \
-f flv rtmp://rtmp.1.com/code \
-f flv rtmp://rtmp.2.com/code \
-f flv rtmp://rtmp.3.com/code \
-f flv rtmp://rtmp.4.com/code \
-f flv rtmp://rtmp.5.com/code \
but it is not working quite as well as I was expecting and getting when restreaming with nginx.
