ffmpeg: overlay a .mov onto an .mp4, any way to make it faster without changing the preset?

I have the following command, which puts an overlay.mov file on top of a .mp4 file. It works great, but I'm wondering if the command I'm using can be sped up.
ffmpeg -loglevel quiet -i 'assets/videos/background.mp4' -i 'assets/videos/overlay.mov' -filter_complex '[1:v][0:v]scale2ref[ua][b];[ua]setsar=1,format=yuva444p,colorchannelmixer=aa=0.5[u];[b][u]overlay=eof_action=pass[v]' -map [v] -map 0:a -preset medium -y 'assets/videos/output.mov'
I know I can change -preset to ultrafast, but is there any other way I can improve the above command? Anything obsolete?

If you don't want to re-encode the video, you can keep the same codec: ffmpeg just remuxes the MOV container into MP4, and that is very fast.
ffmpeg -i input.mov -c:v copy out.mp4
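Note that stream copy only works when no filtering is needed; the overlay filter in the question forces the video to be re-encoded. As a hedged alternative (an untested sketch, assuming an NVIDIA GPU and an ffmpeg build with NVENC support; rate-control options are left at their defaults), offloading the encode to a hardware encoder can speed things up without touching the libx264 preset:
ffmpeg -loglevel quiet -i 'assets/videos/background.mp4' -i 'assets/videos/overlay.mov' -filter_complex '[1:v][0:v]scale2ref[ua][b];[ua]setsar=1,format=yuva444p,colorchannelmixer=aa=0.5[u];[b][u]overlay=eof_action=pass[v]' -map '[v]' -map 0:a -c:v h264_nvenc -y 'assets/videos/output.mov'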

Related

Hardcoding subtitles from DVD or VOB file with ffmpeg

I have some DVDs that I would like to encode so that I can play them on a Chromecast, with subtitles. It seems that Chromecast only supports text-based subtitle formats, while DVD subtitles are in a bitmap format, so I need to hardcode the subtitles onto the video stream.
First I use vobcopy to create a VOB file:
vobcopy -I /dev/sr0
Next I want to use ffmpeg to encode it as a video stream in a format that is supported by the Chromecast. This is the closest I've come so far (based on the ffmpeg documentation):
ffmpeg -analyzeduration 100M -probesize 100M -i in.vob \
-filter_complex "[0:v:0][0:s:0]overlay[vid]" -map "[vid]" \
-map 0:3 -codec:v libx264 -crf 20 -codec:a copy out.mkv
The -filter_complex "[0:v:0][0:s:0]overlay[vid]" parameter should overlay the first subtitle stream on the first video stream (-map 0:3 is for the audio). This partially works, but the subtitles are only shown for a fraction of a second (I'm guessing one frame).
How can I make the subtitles display for the correct duration?
I'm using ffmpeg 4.4.1 on Linux, but I've also tried the latest snapshot version, and tried gstreamer and vlc (but didn't get far).
The only solution I found that worked perfectly was a tedious multi-stage process.
Copy the DVD with vobcopy
vobcopy -I /dev/sr0
Extract the subtitles in vobsub format using mencoder. This command will write subs.idx and subs.sub. The idx file can be edited if necessary to tweak the appearance of the subtitles.
mencoder *.vob -nosound -ovc frameno -o /dev/null \
-vobsuboutindex 0 -sid 0 -vobsubout subs
Copy the audio and video from the VOB into an mkv file. ffprobe can be used to identify the relevant video and audio stream numbers.
ffmpeg -fflags genpts -i *vob -map 0:1 -map 0:3 \
-codec:v copy -codec:a copy copied_av.mkv
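For the ffprobe step mentioned above, a minimal invocation (a sketch; file.vob is a placeholder for the copied VOB) that lists each stream's index, MPEG id, and type would be:
ffprobe -v error -analyzeduration 100M -probesize 100M -show_entries stream=index,id,codec_type,codec_name file.vob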
Merge the subtitles with the audio/video stream.
mkvmerge -o merged.mkv copied_av.mkv subs.sub subs.idx
Then ffmpeg will work reliably with the mkv file to write hardcoded subtitles to the video stream.
ffmpeg -i merged.mkv -filter_complex "[0:v:0][0:s:0]overlay[vid]" \
-map [vid] -map 0:1 -codec:v libx264 -codec:a copy hardcoded.mkv

Is there a way to pipe input video into ffmpeg?

ffmpeg -f avfoundation -i "1:0" -vf "crop=1920:1080:0:0" -pix_fmt yuv420p -y -r 30 -c:a aac -b:a 128k -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY
Hello guys, the above command works pretty well. It records the audio/video of the computer. But what I want to do is pipe in a repeating video or image (png/jpeg/gif), so that there is no live video feed from the computer, just the image on the stream together with the audio.
How would you go about doing this?
Also, if you know of any programming interfaces that can do the same thing, please give suggestions, because I would rather not use a CLI.
I think you should be able to achieve this by using -loop and some -map options. I can't test with avfoundation myself, but something like this works for me:
ffmpeg -loop 1 -i image.png -i file_to_take_audio_from.mp4 -vf "scale=1920:1080:0:0" -pix_fmt yuv420p -r 30 -c:a aac -b:a 128k -map 0:v -map 1:a output.mp4
Replace -i file_to_take_audio_from.mp4 with -f avfoundation -i "1:0" and output.mp4 with -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY.
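Putting those substitutions together, the full command might look like this (an untested sketch: the avfoundation device index and RTMP URL are the placeholders from the question, -c:v libx264 is added on the assumption that the RTMP endpoint expects H.264, and the stray :0:0 is dropped from the scale filter):
ffmpeg -loop 1 -i image.png -f avfoundation -i "1:0" -vf "scale=1920:1080" -pix_fmt yuv420p -r 30 -c:v libx264 -c:a aac -b:a 128k -map 0:v -map 1:a -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY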
Also, you might be able to skip -vf if the image already has the correct resolution.
Hope that helps!
Use "none" or no value at all (:0) for the video device index and provide a secondary input:
ffmpeg -f avfoundation -i :0 -i image.png ...
There's a loop option for images such as animated GIFs and -stream_loop for input streams.
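For a looping video instead of a still image, a rough sketch (loop_clip.mp4 is a placeholder; -re paces the file input at realtime, which matters for live output) could be:
ffmpeg -re -stream_loop -1 -i loop_clip.mp4 -f avfoundation -i :0 -map 0:v -map 1:a -pix_fmt yuv420p -r 30 -c:v libx264 -c:a aac -b:a 128k -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY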
You can use the FFmpeg APIs directly instead of CLI.

FFMPEG: How to avoid audio/video desync in output of crossfaded clips when input is variable frame rate video

I'm doing screen recordings of gameplay (Dota 2) using my NVIDIA graphics card's GeForce Experience hardware recording (NVENC encoder). This creates a variable frame rate output video. My NVIDIA settings are 60 fps, 15000 kbps. I have paid a guy to make a program that generates scripts which, given start/stop timepoints, extract clips from the video and merge them with a crossfade. See the example code below. The script works for many input recordings but often fails: the audio and video are desynchronized (usually an audio delay) in many of the clips, by about 0.5 seconds. I think it fails more when the frame rate dropped further during recording. He does not know how to fix the problem, so I wonder if anyone could point out whether anything in the script (example below) could be fixed.
Processing speed is quite important (making a 10 min 'highlight' video currently takes about 7-10 min). Solutions that increase that time much further are unfortunately not of much interest. His approach has been to work with audio and video separately and merge them at the end. He already has a program that generates ffmpeg code for different scenarios (also adding overlays, adding music, intro/outro), so some easy fixes to his code would be preferable to a dramatic redesign of the logic. But if nothing else can fix the problem, a redesign of the logic is OK. Using tools other than ffmpeg is also OK, but it should be automatable (scripts/CLI) and not increase processing times too much.
Running mediainfo on the input video shows that the frame rate dropped quite low:
Frame rate mode    : Variable
Frame rate         : 60.000 FPS
Minimum frame rate : 3.059 FPS
Maximum frame rate : 63.739 FPS
Full report here: https://pastebin.com/TX061Wih
The input video can be downloaded from dropbox here (6 GB):
https://www.dropbox.com/s/ftwdgapazbi62pr/fullgame.mp4?dl=0
Here is an example of a script asked to extract two clips from the input video, at 9:57 (41 s long) and 15:45 (28 s long), and merge them with a 0.5 s crossfade. There might be some code remnants from options that are not used in this example (overlays, music, intro/outro). With the input video above, this creates audio/video desync.
Six commands executed in sequence:
ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -filter_complex "[0:a]afade=t=out:st=40.5:d=0.5[a1]" -map "[a1]" -y out_temp_00.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:09:57 -t 00:00:41 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_00.mp4.ts
ffmpeg.exe -loglevel warning -ss 00:15:45 -i fullgame.mp4 -t 00:00:28 -filter_complex "[0:a]afade=t=in:st=0:d=0.5[a1]" -map "[a1]" -y out_temp_01.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_01.mp4.ts
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.wav -i out_temp_01.mp4.wav -y -filter_complex "[0:a]adelay=0|0[a0];[1:a]adelay=40500|40500[a1];[a0][a1]amix=inputs=2:dropout_transition=68.5,atrim=duration=68.5[outa0];[outa0]loudnorm[outa]" -map "[outa]" -ar 48000 -acodec aac -strict -2 fullgame_Output.mp4.aac
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.ts -i out_temp_01.mp4.ts -y -i fullgame_Output.mp4.aac -filter_complex "[0:v]trim=start=0.5,setpts=PTS-STARTPTS[0c];[1:v]trim=start=0.5,setpts=PTS-STARTPTS[1c];[0:v]trim=40.5:41,setpts=PTS-STARTPTS[fo];[1:v]trim=0:0.5[fi];[fi]format=pix_fmts=yuva420p,fade=t=in:st=0:d=0.5:alpha=1[z];[fo]format=pix_fmts=yuva420p,fade=t=out:st=0:d=0.5:alpha=1[x];[z]fifo[w];[x]fifo[q];[q][w]overlay[r];[0c][r][1c]concat=n=3[outv]" -map "[outv]" -map 2:a -shortest -acodec copy -vcodec libx264 -preset ultrafast -b 15000k -aspect 1920:1080 fullgame_Output.mp4
P.S.
I already asked for help in an ffmpeg chat room. One guy said he knew what the problem was, but didn't know how to fix it(?):
[00:10] <kepstin> oh, wait, you're using -vcodec copy
[00:10] <kepstin> that explains everything.
[00:10] <kepstin> when you're using -vcodec copy, the start time (set with -ss) is rounded to the nearest keyframe
[00:10] <kepstin> it's not exact
[00:11] <kepstin> depending on the keyframe interval, this will result in possibly quite large shifts
[00:11] <kepstin> (also, your commands are applying audio filters on commands with -an, which is confusing/contradictory)
[00:12] <birdboy88> so the problem is that the audio temporary clips are not being extracted from the same excat timepoints?
[00:13] <kepstin> birdboy88: yeah, your audio is being re-encoded to wav so it's being cut sample-accurate, but the video's not being precisely cut.
[00:16] <birdboy88> kepstin: so I need to use slow seek (?) to extract video accurately? Or somehow extract audio only where there are video keyframes?
[00:17] <kepstin> birdboy88: i don't know how to extract audio starting at video keyframes with ffmpeg cli. You're already doing slow seek, which doesn't help (you should move the -ss option to before the -i option to speed it up)
[00:17] <kepstin> if you want accurate video cutting when saving to a file, you have to re-encode the video
[00:18] <kepstin> (doing this in a single ffmpeg command means you don't have to save to a file, so you can avoid the issue)
[00:18] * kepstin is off for a bit now
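For reference, kepstin's suggestion to move -ss in front of -i (input seeking, which speeds up the seek but still snaps to a keyframe while stream-copying) would turn the second command into something like this sketch:
ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_00.mp4.ts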
EDIT:
Everything is done with the latest ffmpeg version.
I was unable to get Gyan's code to work. It always loses some of the audio (the audio is either 40.5 s or 27.5 s, so only one audio stream is used). This is the only version working for me (the changes were adelay=40500|40500 and amix=inputs=2[a0];[a0]loudnorm):
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=2[vpre][vpost];
[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=40500|40500[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]amix=inputs=2[a0];[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Then I tried a similar setup but with 3 clips, but on one machine I got the error "Error while filtering: Cannot allocate memory", and on my 16 GB machine the processing speed is 0.02x! Any way to avoid this? This is the code I tried:
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=3[vpre][vpost][v3];
[0]asplit=3[apre][apost][a3];
[vpre]trim=start=357:duration=41,setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start=357:duration=41,asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start=795:duration=28,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,fade=t=out:st=40.5:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start=795:duration=28,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,afade=t=out:st=27.5:d=0.5,adelay=40500|40500[apost-t];
[v3]trim=start=95:duration=30,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5,setpts=PTS+41+28-0.5/TB[v3-t];
[a3]atrim=start=95:duration=30,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=68500|68500[a3-t];
[vpre-t][vpost-t]overlay[v1];
[v1][v3-t]overlay[v];
[apre-t][apost-t][a3-t]amix=inputs=3[a0];
[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Just do it in one command.
Besides the keyframe seek issue, which is true, your present sequence has an error in the last command. You have [0:v]trim=start=0.5...[0c] which trims out the first 0.5 seconds and will cause a desync of its own. Since this is the first clip, it should be [0:v]trim=0:40.5.
The full single command should be
ffmpeg -i fullgame.mp4 -filter_complex
"[0]split=2[vpre][vpost];[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Your original sequence had -strict -2 for audio AAC encoding. That hasn't been needed since Dec 2015. You have a very old version of ffmpeg if your ffmpeg throws an error without it. Upgrade first.
I did not test the above with your file, as it will take too long to filter 16 min of Full HD 60 fps video, but I tested the below faster command and it works fine with the latest git build of ffmpeg:
ffmpeg -ss 00:09:57 -t 00:00:41 -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -i fullgame.mp4 -filter_complex
"[0]afade=t=out:st=40.5:d=0.5[apre-t];
[1]format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[1]afade=t=in:st=0:d=0.5[apost-t];
[0][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000:ocl=stereo[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4

Fastest way to add -movflags +faststart to an MP4 using FFMPEG, leaving everything else as-is

I want to add -movflags +faststart to an mp4 file. Basically that is all I want to do; nothing else should be changed. I am using ffmpeg.
What's the fastest way to do this? Do I have to re-encode the whole video? Or is there a better/easier way?
As simple as:
ffmpeg -i in.mp4 -c copy -map 0 -movflags +faststart out.mp4
Or if you can compile FFmpeg from source, make the tool qt-faststart in the tools/ directory and run it:
qt-faststart in.mp4 out.mp4
You can also use mp4box, which lets you move the MOOV atom to the start via this command:
mp4box -inter 0 in.mp4 -out out.mp4
Or if you want to fully optimize for streaming by also interleaving the audio/video data so that the file can be easily streamed in realtime:
mp4box -inter 500 in.mp4 -out out.mp4
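To check whether the relocation worked, one rough approach (a sketch that relies on ffmpeg's trace-level demuxer log; any tool that shows the top-level atom order would do) is to see whether moov is listed before mdat:
ffprobe -v trace out.mp4 2>&1 | grep -m 2 -e "'moov'" -e "'mdat'"
If moov appears on the first matched line, the atom is at the front of the file.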

ffmpeg rtmp and local file output

I have a problem with ffmpeg.
I receive an RTSP stream from a grabbing device (a camera) and restream it to RTMP (YouTube Live).
I want to keep a copy of the stream on my computer, so I write to a local file at the same time.
I use this command:
ffmpeg -y -i 'RTSP_SOURCE' -c:v copy -c:a libvo_aacenc -map 0:v -bsf:v dump_extra -fflags +genpts -flags +global_header -movflags +faststart
-map_metadata 0 -metadata title= -f tee -filter_complex aevalsrc=0 '[f=mp4]/tmp/backup.mp4|[f=mpegts]/tmp/backup.ts|[f=flv]rtmp://a.rtmp.youtube.com/live2/STREAM_ID'
The problem is that when there are disconnections, ffmpeg exits and stops recording.
Is there any flag or option to tell ffmpeg to continue recording to the local files even when there is no internet connection?
Thank you very much for your help =)
You can try:
ffmpeg -f tee "[onfail=ignore] ...
A longer description is available in the ffmpeg documentation for the tee muxer.
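Applied to the command from the question, onfail=ignore goes on the RTMP slave inside the tee output specification (a sketch; it requires an ffmpeg version whose tee muxer supports onfail, and only the flv entry is changed so that a failing RTMP connection no longer aborts the local files):
ffmpeg -y -i 'RTSP_SOURCE' -c:v copy -c:a libvo_aacenc -map 0:v -bsf:v dump_extra -fflags +genpts -flags +global_header -movflags +faststart -map_metadata 0 -metadata title= -f tee -filter_complex aevalsrc=0 '[f=mp4]/tmp/backup.mp4|[f=mpegts]/tmp/backup.ts|[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/STREAM_ID'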