ffmpeg: convert WMV to MP4 and add a logo image in the same command

The script I use to add a logo:
ffmpeg -i input.mp4 -framerate 30000/1001 -loop 1 -i test.png \
-filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; \
[0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a \
-c:v libx264 -c:a copy -shortest output.mp4
The command I use to convert the video (it produces the WebM, the MP4, and a thumbnail PNG in one run):
ffmpeg -i input.wmv -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis \
outputwebm.webm -c:v libx264 -crf 35 outputmp4.mp4 \
-vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png
I want to add the logo overlay within that same single command.
The command I tried:
ffmpeg -i input.wmv -c:v libvpx -crf 10 -b:v 1M \
-c:a libvorbis outputwebm.webm -c:v libx264 \
-crf 35 -framerate 30000/1001 -loop 1 -i test.png \
-filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; \
[0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a \
-c:v libx264 -c:a copy -shortest outputmp4.mp4 \
-vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png
Group all inputs at the front of the command and remove the encoding for the temp MP4 file.
ffmpeg -i input.wmv -framerate 30000/1001 -loop 1 -i test.png -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis outputwebm.webm -filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; [0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a -c:v libx264 -c:a copy -shortest outputmp4.mp4 -vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png
If your PNG has a greater resolution than the WMV, you'll need to explicitly map the video for the WebM and PNG outputs, since ffmpeg's automatic stream selection prefers the highest-resolution video input.
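A minimal sketch of that explicit mapping, assuming the same filenames as above; `-map 0:v` forces the WMV's video stream onto the WebM and PNG outputs so the looped PNG input is never auto-selected for them:

```shell
# -map 0:v pins each non-overlay output to the WMV's video stream
ffmpeg -i input.wmv -framerate 30000/1001 -loop 1 -i test.png \
-map 0:v -map 0:a -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis outputwebm.webm \
-filter_complex "[1:v] fade=out:st=30:d=1:alpha=1 [ov]; \
[0:v][ov] overlay=10:10 [v]" -map "[v]" -map 0:a \
-c:v libx264 -c:a copy -shortest outputmp4.mp4 \
-map 0:v -vf "thumbnail,scale=640:360" -frames:v 1 outputpng.png
```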

Related

FFMPEG stream to multiples servers using the same -filter_complex options

I want to stream a video to two RTMP servers, with some options such as scaling the resolution from 1080p to 576p and adding a logo. These options are applied on the first RTMP server the signal is sent to, but the second RTMP server receives the 1080p stream without any of these options. What am I doing wrong?
ffmpeg -reconnect_at_eof 1 -reconnect_streamed 1 -reconnect 1 -reconnect_delay_max 4 -i video.mp4 -i hello.jpg -filter_complex "overlay=1650:950,scale=1024:576" -vcodec libx264 -preset veryfast -b:v 1300k -acodec aac -b:a 128k -f flv rtmp://test -vcodec libx264 -preset veryfast -b:v 1300k -acodec aac -b:a 128k -f flv rtmp://test2
Unlike input streams, a filtergraph output stream can only be consumed once, and the first RTMP output is snatching it up. If you want to use it on both outputs, split the output of the filter:
ffmpeg -reconnect_at_eof 1 -reconnect_streamed 1 -reconnect 1 -reconnect_delay_max 4 \
-i video.mp4 -i hello.jpg \
-filter_complex "overlay=1650:950,scale=1024:576,split=2[v1][v2]" \
-map [v1] -map 0:a -vcodec libx264 -preset veryfast -b:v 1300k -acodec aac -b:a 128k \
-f flv rtmp://test \
-map [v2] -map 0:a -vcodec libx264 -preset veryfast -b:v 1300k \
-acodec aac -b:a 128k -f flv rtmp://test2
Another, likely preferred, option if you are outputting identical streams is to use the tee muxer. It should look something like this:
ffmpeg -reconnect_at_eof 1 -reconnect_streamed 1 -reconnect 1 -reconnect_delay_max 4 \
-i video.mp4 -i hello.jpg \
-filter_complex "overlay=1650:950,scale=1024:576[vout]" \
-map [vout] -map 0:a -vcodec libx264 -preset veryfast -b:v 1300k -acodec aac -b:a 128k \
-f tee "[f=flv]rtmp://test|[f=flv]rtmp://test2"
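A related sketch (untested, same URLs as above): the tee muxer's per-output `onfail=ignore` option keeps the remaining stream alive if one RTMP server drops, which matters once a single encode feeds both destinations:

```shell
# One encode fanned out via tee; onfail=ignore stops a failing output
# from killing the whole process
ffmpeg -i video.mp4 -i hello.jpg \
-filter_complex "overlay=1650:950,scale=1024:576[vout]" \
-map [vout] -map 0:a -vcodec libx264 -preset veryfast -b:v 1300k -acodec aac -b:a 128k \
-f tee "[f=flv:onfail=ignore]rtmp://test|[f=flv:onfail=ignore]rtmp://test2"
```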

FFmpeg silent audio

I have a command that outputs four MP4 files.
I would like to add a silent audio track to the outputs.
I have tried to insert anullsrc=cl=mono:sample_rate=48000, but I don't really know where to insert it; it gives me an error.
ffmpeg -hwaccel_output_format cuda -i test.mxf -filter_complex "[0:v]yadif=1,format=yuv420p,split=4[vid1][vid2][vid3][vid4];[vid1]scale=-2:1080[1080];[vid2]scale=-2:432[432];[vid3]scale=-2:288[288];[vid4]scale=-2:216[216]" -map "[1080]" -map "[432]" -map "[288]" -map "[216]" -map 0:a:0 -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -preset slow -rc vbr_hq -b:v:0 4.5M -b:v:1 1.5M -b:v:2 1.0M -b:v:3 0.5M -c:a aac -b:a 192k -f tee "[select=\'v:0,a\']1080.mp4|[select=\'v:1,a\']432.mp4|[select=\'v:2,a\']288.mp4|[select=\'v:3,a\']216.mp4"
You would add the anullsrc as a lavfi input and then map it.
You then either have to add -shortest or add -t X where X is the duration of the video.
ffmpeg -hwaccel_output_format cuda -i test.mxf -f lavfi -i "anullsrc=cl=mono:sample_rate=48000" -filter_complex "[0:v]yadif=1,format=yuv420p,split=4[vid1][vid2][vid3][vid4];[vid1]scale=-2:1080[1080];[vid2]scale=-2:432[432];[vid3]scale=-2:288[288];[vid4]scale=-2:216[216]" -map "[1080]" -map "[432]" -map "[288]" -map "[216]" -map 0:a:0? -map 1:a -c:v h264_nvenc -force_key_frames "expr:gte(t,n_forced*10)" -preset slow -rc vbr_hq -b:v:0 4.5M -b:v:1 1.5M -b:v:2 1.0M -b:v:3 0.5M -c:a aac -b:a 192k -shortest -f tee "[select=\'v:0,a\']1080.mp4|[select=\'v:1,a\']432.mp4|[select=\'v:2,a\']288.mp4|[select=\'v:3,a\']216.mp4"

FFmpeg processing speed

I have made this ffmpeg command but it is very slow to process. The backgroundvideo.mp4 is 4K but the final output is 960x540. Is ffmpeg processing the effects in 4K and then scaling the video? Should I write the script in another order, or should I downscale the video first and then apply the other filters?
ffmpeg -t 00:00:09 -i "backgroundvideo.mp4" -i "photo.jpg" -i logo.png \
-filter_complex "[0]boxblur=20[video];[1][video]scale2ref=w=oh*mdar:h=ih/1.2[photo][video];\
[video][photo]overlay=(W-w)/2:(H-h)/2:format=auto[bg];\
[bg][2]overlay=0:0,subtitles=subtitle.ass:force_style='WrapStyle=0,format=yuv420p" \
-i "audio.wav" -map 0:v:0 -map 3:a:0 -vcodec h264_nvenc \
-s 960x540 -shortest -r 25 -crf 17 -aspect 16/9 output.mp4
thanks
Downscale before adding more filters:
ffmpeg -t 00:00:09 -i "backgroundvideo.mp4" -i "photo.jpg" -i logo.png -i "audio.wav" -filter_complex "[0]scale=960:-2,boxblur=20[video];[1][video]scale2ref=w=oh*mdar:h=ih/1.2[photo][vid];[vid][photo]overlay=(W-w)/2:(H-h)/2:format=auto[bg];[bg][2]overlay=0:0,subtitles=aegisub.ass:force_style='WrapStyle=0',format=yuv420p[v]" -map "[v]" -map 3:a:0 -vcodec h264_nvenc -shortest -r 25 -crf 17 output.mp4

ffmpeg: overlay multiple images to a video

In order to overlay a single image to a video, I can do:
ffmpeg -i vid00.mp4 -i img00.png -filter_complex "[0:v][1:v]overlay=0:0:enable='between(t, 1, 2)'" -c:v libx264 -preset ultrafast -qp 20 -c:a copy -y vid01.mp4
How can I overlay multiple images to a video in a single ffmpeg call?
I've tried stuff like:
ffmpeg -i vid00.mp4 -i img00.png -i img01.png -filter_complex "\
[0:v][1:v]overlay=0:0:enable='between(t, 1, 2)'[v0]; \
[2:v][3:v]overlay=0:0:enable='between(t, 3, 4)'[v1]; \
[v0][v1]concat=n=2:v=1:a=0,format=yuv420p[v]" -map "[v]" -map 0:a -c:v libx264 -preset ultrafast -qp 20 -c:a copy -y vid01.mp4
and variations thereof (by messing with the [0:v][1:v] indices), but to no avail.
Combined command:
ffmpeg -i vid00.mp4 -i img00.png -i img01.png -filter_complex "[0:v][1:v]overlay=0:0:enable='between(t, 1, 2)'[v0];[v0][2:v]overlay=0:0:enable='between(t, 3, 4)'" -c:v libx264 -preset ultrafast -qp 20 -c:a copy -y vid01.mp4
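The same chaining pattern extends to any number of images: each overlay consumes the previous overlay's labeled output, gated by its own enable window. A sketch (untested) with a hypothetical third image img02.png:

```shell
# Chain one overlay per image; each stage reads the previous stage's label
ffmpeg -i vid00.mp4 -i img00.png -i img01.png -i img02.png -filter_complex "\
[0:v][1:v]overlay=0:0:enable='between(t,1,2)'[v0]; \
[v0][2:v]overlay=0:0:enable='between(t,3,4)'[v1]; \
[v1][3:v]overlay=0:0:enable='between(t,5,6)'" \
-c:v libx264 -preset ultrafast -qp 20 -c:a copy -y vid01.mp4
```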

Cutting audio and adding overlay in single command FFMPEG

How to add overlay and cut audio from a particular time in any type of video?
Here is what I am trying
ffmpeg -ss 5 -t 30 -i Happier.mp4 -i Watermark.png -filter_complex "[0:v][1:v] overlay=0:0:enable='between(t,5,30)'" -preset ultrafast -pix_fmt yuv420p -c:a copy output.mp4
Use
ffmpeg -i Happier.mp4 -i Watermark.png \
-filter_complex "[0:v][1:v] overlay=0:0:enable='between(t,5,30)'[v]; \
[0]volume=0:enable='between(t,5,30)'[a]" \
-map "[v]" -map "[a]" -preset ultrafast -pix_fmt yuv420p output.mp4
