I've been trying to create a video with ffmpeg's showwaves filter and have cobbled together the below command, which I mostly understand. I'm wondering if it is possible to set the color of the waveform using hex colors (e.g. #F3ECDA instead of "blue").
Also, feel free to tell me if there's any unneeded garbage in the command as is. Thanks.
ffmpeg -i audio.mp3 -loop 1 -i picture.jpg -filter_complex \
"[0:a]showwaves=s=960x202:mode=cline:colors=blue[fg]; \
[1:v]scale=960:-1,crop=iw:540[bg]; \
[bg][fg]overlay=shortest=1:main_h-overlay_h-30,format=yuv420p[out]" \
-map "[out]" -map 0:a -c:v libx264 -preset fast -crf 18 -c:a libopus output.col.mkv
See https://ffmpeg.org/ffmpeg-utils.html#Color for the syntax. In short, it is colors=0xRRGGBB or colors=#RRGGBB. The rest looks fine.
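Applied to your command, with the hex value from your question, the showwaves chain would become:
[0:a]showwaves=s=960x202:mode=cline:colors=0xF3ECDA[fg]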
So I'm trying to overlay a video on top of an image and then add text over the image in ffmpeg. I found that I'm able to do each of these separately, but combining them gives me the error:
Cannot find a matching stream for unlabeled input pad 0 on filter Parsed_drawtext_2
The command:
ffmpeg -loop 1 -i overlay.png -re
-i overlay.mp4
-filter_complex "[1]scale=1660:934[inner];[0][inner]overlay=0:0:shortest=1[out];
drawtext=fontsize=40:fontfile=FreeSerif.ttf:textfile=text.txt:x=(w-text_w)/2:y=(h-text_h)/2:reload=1"
-map "[out]" -map 1:a -c:a copy -y -s 1280x800 output.mp4
Can anyone help me with this?
You almost got it; it only needs some rearranging. Your drawtext sits on its own filterchain with no labeled input, which is what the "unlabeled input pad" error is complaining about. Chain it after the overlay instead (and do the resize with a scale filter in the graph rather than -s):
ffmpeg -loop 1 -i overlay.png
-i overlay.mp4
-filter_complex "[1]scale=1660:934[inner];[0][inner]overlay=0:0:shortest=1,scale=1280:800,drawtext=fontsize=40:fontfile=FreeSerif.ttf:textfile=text.txt:x=(w-text_w)/2:y=(h-text_h)/2:reload=1[out]"
-map "[out]" -map 1:a -c:a copy output.mp4
See FFmpeg filtering intro for a quick explanation of the syntax.
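For reference, drawtext can also stay on a separate chain as long as that chain has labeled input and output pads; the intermediate labels [base] and [sized] below are arbitrary names I picked for this sketch:
ffmpeg -loop 1 -i overlay.png
-i overlay.mp4
-filter_complex "[1]scale=1660:934[inner];[0][inner]overlay=0:0:shortest=1[base];[base]scale=1280:800[sized];[sized]drawtext=fontsize=40:fontfile=FreeSerif.ttf:textfile=text.txt:x=(w-text_w)/2:y=(h-text_h)/2:reload=1[out]"
-map "[out]" -map 1:a -c:a copy output.mp4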
I would like to apply multiple blurs to my video (with the audio copied), each with different coordinates and durations. Here is what I have tried:
ffmpeg -i test.mp4 -filter_complex \
"[0:v]crop=w=100:h=100:x=20:y=40,boxblur=10:enable='between(t,5,8)'[c1];
[0:v]crop=w=100:h=100:x=40:y=60,boxblur=10:enable='between(t,10,13)'[c2];
[0:v][c1]overlay=x=20:y=40[v];
[0:v][c2]overlay=x=40:y=60[v]" \
-map "[v]" -movflags +faststart output.mp4
However, this results in a "Filter overlay has an unconnected output" error. I would like to know if there is a good way to solve this. Thanks for your attention.
In your graph, the first overlay's output label [v] is never consumed (the label is reused by the second overlay), which is why it is reported as unconnected. The 2nd overlay should use the output of the first overlay as its base input:
ffmpeg -i test.mp4 -filter_complex \
"[0:v]crop=w=100:h=100:x=20:y=40,boxblur=10:enable='between(t,5,8)'[c1];
[0:v]crop=w=100:h=100:x=40:y=60,boxblur=10:enable='between(t,10,13)'[c2];
[0:v][c1]overlay=x=20:y=40:enable='between(t,5,8)'[v0];
[v0][c2]overlay=x=40:y=60:enable='between(t,10,13)'[v]" \
-map "[v]" -movflags +faststart output.mp4
I need to resize input 3 (logo.gif) to 360x360, but using scale=360:360 just made my video quality really bad. Here's my command:
ffmpeg -y -hide_banner -safe 0 -f concat -i "concat.txt" -i "overlay.png" -i "audio.mp3" -ignore_loop 0 -i "logo.gif" -filter_complex \
"[0]scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.25,max(1.001,zoom-0.0012))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':fps=20:d=200:s=1920x1080[p]; \
[p][1]overlay,scale=1920:1080,drawtext=fontfile=Heathergreen.otf:text='TITLE':fontcolor=black:fontsize=62:x=135:y=940,drawtext=fontfile=voxbox.ttf:text='TEXT':fontcolor=white:fontsize=70:x=120:y=885[v]; \
[2:a]showwaves=mode=cline:s=178x56:r=20:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w]; \
[v][3]overlay=20:500[z]; \
[z][w]overlay=108:740[outv]" \
-map "[outv]" -map 2:a -pix_fmt yuv420p -c:v libx264 -c:a aac -preset veryfast -shortest -movflags faststart -fflags genpts -r 20 "output.mp4"
UPDATE: I've simply resized the image and used that as input rather than resizing during the encode. It works fine, but if anyone has an answer to this I'd be curious to know where I was going wrong.
Instead of [v][3]overlay=20:500[z], use [3]scale=360:360[3v];[v][3v]overlay=20:500[z], so that only the GIF is resized rather than the whole frame. Your GIF should be square-shaped to begin with, to avoid distorting it.
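In context, the tail of the filtergraph then reads:
[3]scale=360:360[3v];[v][3v]overlay=20:500[z];[z][w]overlay=108:740[outv]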
I want to add one or more resized images anywhere over a video using ffmpeg. It works for some positions, but it does not add the image at the exact position I want. I have tested it in the console; it is embedded in PHP with dynamic variables.
ffmpeg -y -i vid_1561454052.mp4 -i Penguins.jpg -filter_complex
"[0:v][1:v] overlay=221:127:enable='between(t,0,5)'" -pix_fmt yuv420p
-aspect 16:9 -c:a copy vid_1562740969.mp4
Please help me out.
Use the loop option so the image remains available as an input for the full duration:
ffmpeg -y -i vid_1561454052.mp4 -loop 1 -i Penguins.jpg \
-filter_complex "[0:v][1:v] overlay=221:127:enable='between(t,0,5)'" \
-pix_fmt yuv420p -aspect 16:9 -c:a copy vid_1562740969.mp4
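As a general caveat (your command may not need it): a looped image never ends on its own, so if the encode runs past the end of the main video, cap it by adding shortest=1 to the overlay:
[0:v][1:v] overlay=221:127:shortest=1:enable='between(t,0,5)'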
RenPy uses the notion of an alpha-mask video: https://www.renpy.org/doc/html/movie.html#movie-displayables-and-movie-sprites
I can convert a bunch of PNGs with an alpha channel to WebM (per http://wiki.webmproject.org/howtos/convert-png-frames-to-webm-video). I was wondering how to do the same sort of thing without creating another set of PNG files containing just the alpha frames.
I'd be okay with something that uses ImageMagick in the middle if needed.
You can use ffmpeg to create both files at once.
ffmpeg -i img%d.png -filter_complex "alphaextract[a]" \
-map 0:v -pix_fmt yuv420p -c:v libvpx -b:v 0 -crf 20 color.webm \
-map "[a]" -pix_fmt yuv420p -c:v libvpx -b:v 0 -crf 20 alpha.webm
Depending on your shell, you may need to enclose the -map argument in single quotes.
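If you'd rather end up with a single file, RenPy's Movie also accepts a side-by-side mask (side_mask=True; per the doc linked above, the mask occupies the right half, if I'm reading it correctly). An untested sketch of that variant, with format conversions added because hstack requires matching pixel formats and heights:
ffmpeg -i img%d.png -filter_complex \
"[0:v]format=rgba,split[c][m]; \
[m]alphaextract,format=yuv420p[a]; \
[c]format=yuv420p[cc]; \
[cc][a]hstack[out]" \
-map "[out]" -c:v libvpx -b:v 0 -crf 20 sidebyside.webm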