NOTE: The ffmpeg command lines below are pseudo commands, not 100% final.
From a single source I need to stream 2 different udp:// streams.
And I need to overlay a different PNG image on each.
Base sample line:
ffmpeg -r 25 -f dshow -i "video=VideoCaptureDevice:audio=AudioCaptureDevice" -codec:v libx264 -codec:a libfaac -f mpegts "udp://224.1.1.1:1234?pkt_size=1316" -codec:v libx264 -s:v 720x480 -codec:a libfaac -f mpegts "udp://224.1.1.1:1235?pkt_size=1316"
Now I need to overlay a separate image on each output.
I tried the line below and it failed.
Overlay sample line that I tried:
ffmpeg -r 25 -f dshow -i "video=VideoCaptureDevice:audio=AudioCaptureDevice" -i "C:\Image1.png" -filter_complex "overlay=100:100" -codec:v libx264 -codec:a libfaac -f mpegts "udp://224.1.1.1:1234?pkt_size=1316" -i "C:\Image2.png" -filter_complex "overlay=500:100" -codec:v libx264 -s:v 720x480 -codec:a libfaac -f mpegts "udp://224.1.1.1:1235?pkt_size=1316"
Use
ffmpeg -r 25 -f dshow -i "video=VideoCaptureDevice:audio=AudioCaptureDevice" -i "C:\Image1.png" -i "C:\Image2.png" -filter_complex "split[a][b];[a][1]overlay=100:100[v1];[b][2]overlay=500:100,scale=720:480[v2]" -map "[v1]" -map 0:a -codec:v libx264 -codec:a libfaac -f mpegts "udp://224.1.1.1:1234?pkt_size=1316" -map "[v2]" -map 0:a -codec:v libx264 -codec:a libfaac -f mpegts "udp://224.1.1.1:1235?pkt_size=1316"
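To sanity-check both multicast outputs from another terminal, something like this should work, assuming ffplay is available and the host can receive the multicast group:
ffplay "udp://224.1.1.1:1234"
ffplay "udp://224.1.1.1:1235"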
Related
I need to combine these 2 ffmpeg commands:
"-i videoFile.mp4 -c:a copy -c:v libx264 -vf pad=$length:height=$length:x=-1:y=-1:color=#195766 resultFile"
and
"-i videoFile.mp4 -i waterMark.png -filter_complex 'overlay=10:main_h-overlay_h-10' resultFile.mp4"
Is it possible? What will the resulting command look like?
I hope I have got it right...
The combined command is:
ffmpeg -y -i videoFile.mp4 -i waterMark.png -c:a copy -c:v libx264 -filter_complex "[0:v]pad=384:height=216:x=-1:y=-1:color=#195766[t];[t][1:v]overlay=10:main_h-overlay_h-10[v]" -map "[v]" -map 0:a resultFile.mp4
For mobile FFmpeg on Android (according to OP's comment):
"-y -i ${videoFile.absolutePath} -i $waterMarkPath -c:a copy -c:v libx264 -filter_complex pad=$length:height=$length:x=-1:y=-1:color=#195766[t];[t][1:v]overlay=10:main_h-overlay_h-10[v] -map [v] -map 0:a ${resultFile.absolutePath}"
I used the following post as reference: Create video with 5 images with fadeIn/out effect in ffmpeg.
Testing:
Creating a sample video file (with audio):
ffmpeg -y -r 25 -f lavfi -i testsrc=size=192x108:rate=30 -f lavfi -i sine=frequency=400 -f lavfi -i sine=frequency=1000 -filter_complex amerge -vcodec libx265 -crf 17 -pix_fmt yuv420p -acodec aac -ar 22050 -t 30 videoFile.mp4
Creating a sample PNG image file:
ffmpeg -y -f lavfi -i mandelbrot=rate=1:size=192x108 -t 1 waterMark.png
Executing the combined command:
ffmpeg -y -i videoFile.mp4 -i waterMark.png -c:a copy -c:v libx264 -filter_complex "[0:v]pad=384:height=216:x=-1:y=-1:color=#195766[t];[t][1:v]overlay=10:main_h-overlay_h-10[v]" -map "[v]" -map 0:a resultFile.mp4
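To confirm the padded geometry, a quick ffprobe check can be run (the expected output here is 384,216, matching pad=384:height=216):
ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=p=0 resultFile.mp4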
Result (first frame of the output of the test):
I have made this ffmpeg command but it is very slow to process. The backgroundvideo.mp4 is 4K but the final output is 960x540. Is ffmpeg processing the effects in 4K and then scaling the video? Should I write the script in a different order, or should I downscale the video first and then apply the other filters?
ffmpeg -t 00:00:09 -i "backgroundvideo.mp4" -i "photo.jpg" -i logo.png \
-filter_complex "[0]boxblur=20[video];[1][video]scale2ref=w=oh*mdar:h=ih/1.2[photo][video];\
[video][photo]overlay=(W-w)/2:(H-h)/2:format=auto[bg];\
[bg][2]overlay=0:0,subtitles=subtitle.ass:force_style='WrapStyle=0',format=yuv420p" \
-i "audio.wav" -map 0:v:0 -map 3:a:0 -vcodec h264_nvenc \
-s 960x540 -shortest -r 25 -crf 17 -aspect 16/9 output.mp4
thanks
Downscale before adding more filters:
ffmpeg -t 00:00:09 -i "backgroundvideo.mp4" -i "photo.jpg" -i logo.png -i "audio.wav" -filter_complex "[0]scale=960:-2,boxblur=20[video];[1][video]scale2ref=w=oh*mdar:h=ih/1.2[photo][vid];[vid][photo]overlay=(W-w)/2:(H-h)/2:format=auto[bg];[bg][2]overlay=0:0,subtitles=subtitle.ass:force_style='WrapStyle=0',format=yuv420p[v]" -map "[v]" -map 3:a:0 -vcodec h264_nvenc -shortest -r 25 -crf 17 output.mp4
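To compare the cost of the two filter orders without writing a file, you can time just the decode/filter path with the null muxer and -benchmark. A rough sketch (run once per variant and compare the reported rtime):
ffmpeg -benchmark -t 00:00:09 -i "backgroundvideo.mp4" -filter_complex "[0]scale=960:-2,boxblur=20[v]" -map "[v]" -f null -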
I have this command to add a watermark to an mp4
ffmpeg -i junai-blvaz.mp4 -i evercam-logo-white.png -filter_complex "[1]scale=iw/2:-1[wm];[0][wm]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10" -codec:a copy output.mp4
But I am creating the video using
ffmpeg -r 6 -i /tmp/%d.jpg -c:v h264_nvenc -r 6 -preset slow -bufsize 1000k -pix_fmt yuv420p -y junai-blvaz.mp4
Is there any way to merge this command for adding the watermark
-i evercam-logo-white.png -filter_complex '[1]scale=iw/2:-1[wm];[0][wm]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10'
into the very first command, with which the mp4 video is created?
Combine the two commands:
ffmpeg -y -framerate 6 -i /tmp/%d.jpg -i evercam-logo-white.png -filter_complex "[1]scale=iw/2:-1[wm];[0][wm]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10,format=yuv420p" -c:v h264_nvenc -preset slow -bufsize 1000k junai-blvaz.mp4
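If you want to verify that the watermark pass kept the intended size and pixel format, ffprobe can report them (an optional check):
ffprobe -v error -select_streams v:0 -show_entries stream=width,height,pix_fmt -of default=noprint_wrappers=1 junai-blvaz.mp4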
I am attempting to have multiple output files in ffmpeg map to multiple inputs. However, instead of each input mapping to a unique output, I get the first input mapping to the first video, and the remaining videos never get created at all. I will describe exactly what I am trying to achieve below.
I need to:
create multiple video output files
from multiple audio input files
which all use the same common image file to create the video
I will post my command below; any help would be greatly appreciated. Thanks.
-y -i input1.mp3 -i input2.mp3 -f image2 -loop 1 -r 2 -i imagefile.png -shortest -c:a aac -c:v mpeg4 -crf 18 -preset veryfast -movflags faststart -map 0 output1.mp4 -map1 output2.mp4
Basic command
ffmpeg -i input1.mp3 -i input2.mp3 -loop 1 -framerate 10 -i imagefile.png -map 2:v -map 0:a -vf format=yuv420p -shortest output1.mp4 -map 2:v -map 1:a -vf format=yuv420p -shortest output2.mp4
The video is filtered and encoded once per output.
With the split filter
ffmpeg -i input1.mp3 -i input2.mp3 -loop 1 -framerate 10 -i imagefile.png -filter_complex "[2:v]format=yuv420p,split=outputs=2[v0][v1]" -map "[v0]" -map 0:a -shortest -movflags +faststart output1.mp4 -map "[v1]" -map 1:a -shortest -movflags +faststart output2.mp4
The video is filtered once total, and encoded separately for each output.
Pipe
ffmpeg -y -v error -loop 1 -framerate 10 -i imagefile.png -filter_complex "[0:v]format=yuv420p[v]" -map "[v]" -c:v libx264 -f nut - | ffmpeg -y -i - -i input1.mp3 -i input2.mp3 -map 0:v -map 1:a -c:v copy -shortest -movflags +faststart output1.mp4 -map 0:v -map 2:a -c:v copy -shortest -movflags +faststart output2.mp4
The video is filtered and encoded only once and each output stream copies it.
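Whichever variant you use, you can confirm that each output ended up with one video and one audio stream (assuming a POSIX shell):
for f in output1.mp4 output2.mp4; do ffprobe -v error -show_entries stream=codec_type -of csv=p=0 "$f"; done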
Hello, I am trying to use ffmpeg to live stream content to YouTube as well as output an MP4. The issue is that I have a complex filter and do not know how to apply it to both outputs.
Here is the code
ffmpeg -re -i "https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8"
-i ./public/images/ACE.png
-i ./public/images/logo2.jpg
-filter_complex "[1]scale=40:40[ovrl1], [0:v][ovrl1] overlay=580:10:enable='between(t,1,5)'[v1];[2]scale=40:40[ovrl2], [v1][ovrl2] overlay=580:10:enable='between(t,5,15)'[v2];[v2] drawtext=fontfile=/System/Library/Fonts/Keyboard.ttf: text='VideoGami':fontcolor=white: fontsize=24: x=(w-text_w)/2: y=(h-text_h)/1.05: enable='between(t,1,10)'"
-acodec aac -vcodec libx264 -f flv "rtmp://a.rtmp.youtube.com/live2/moo"
-acodec aac -vcodec libx264 trial.mp4
Use the tee muxer. Untested example:
ffmpeg -re -i "https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8" -i ./public/images/ACE.png -i ./public/images/logo2.jpg -filter_complex "[1]scale=40:40[ovrl1], [0:v:6][ovrl1] overlay=580:10:enable='between(t,1,5)'[v1];[2]scale=40:40[ovrl2], [v1][ovrl2] overlay=580:10:enable='between(t,5,15)',drawtext=fontfile=/System/Library/Fonts/Keyboard.ttf: text='VideoGami':fontcolor=white: fontsize=24: x=(w-text_w)/2: y=(h-text_h)/1.05: enable='between(t,1,10)'[v]" -map "[v]" -map 0:a:6 -c:v libx264 -c:a aac -f tee "[f=flv]rtmp://a.rtmp.youtube.com/live2/moo|trial.mp4"
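Note that if one tee slave fails (e.g. the RTMP connection drops), the whole command aborts by default; tee's per-slave onfail option can keep the MP4 going. Untested sketch, with "..." standing for the same inputs, filter, and maps as above:
ffmpeg ... -f tee "[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/moo|trial.mp4"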