RenPy uses the notion of an Alpha Mask video: https://www.renpy.org/doc/html/movie.html#movie-displayables-and-movie-sprites
I can convert a bunch of PNGs with an alpha channel to WebM following http://wiki.webmproject.org/howtos/convert-png-frames-to-webm-video, but I was wondering how to do the same sort of thing without creating a second set of PNG files containing just the alpha frames.
I'd be okay with something that uses ImageMagick in the middle if needed.
You can use ffmpeg to create both files at once.
ffmpeg -i img%d.png -filter_complex "alphaextract[a]" \
-map 0:v -pix_fmt yuv420p -c:v libvpx -b:v 0 -crf 20 color.webm \
-map "[a]" -pix_fmt yuv420p -c:v libvpx -b:v 0 -crf 20 alpha.webm
Depending on your shell, you may need to wrap the -map argument in single quotes.
I previously generated color.webm and alpha.webm files with ffmpeg using PNG images.
However, playing the generated videos in my application consumes quite a lot of resources, so I have decided to create a standard video instead (no alpha needed). I cannot re-record the videos, as there are a lot of them at this point.
I am generating the alpha and color video using the following command:
ffmpeg -framerate 30 -i img%d.png -filter_complex "alphaextract[a]" \
-map 0:v -c:v libvpx-vp9 -pix_fmt yuv420p -auto-alt-ref 0 -crf 15 -b:v 0 color.webm \
-map "[a]" -c:v libvpx-vp9 -pix_fmt yuv420p -auto-alt-ref 0 -crf 15 -b:v 0 alpha.webm
I am looking to get a final video in which the alpha is replaced with a png/jpg image. It's much like replacing a green screen with a background, except in my case the green screen is the alpha channel.
Could anyone let me know how to combine alpha.webm and color.webm with a jpg/png image with minimal or no quality loss?
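One possible approach (a sketch only, assuming both webm files have identical dimensions and frame counts, and using a hypothetical background image bg.png of the same size) is to rebuild the transparency with the alphamerge filter and then overlay the result onto the background:
ffmpeg -i color.webm -i alpha.webm -loop 1 -i bg.png -filter_complex \
"[1:v]format=gray[a];[0:v][a]alphamerge[fg];[2:v][fg]overlay=shortest=1:format=auto,format=yuv420p[out]" \
-map "[out]" -c:v libx264 -crf 15 output.mp4
The low CRF keeps the generation loss small, and the final format=yuv420p keeps the H.264 output widely playable.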
I need to resize input 3 (logo.gif) to 360x360, but using scale=360:360 just made my video quality really bad. Here's my code:
ffmpeg -y -hide_banner -safe 0 -f concat -i "concat.txt" -i "overlay.png" -i "audio.mp3" -ignore_loop 0 -i "logo.gif" -filter_complex "[0]scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.25,max(1.001,zoom-0.0012))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':fps=20:d=200:s=1920x1080[p];[p][1]overlay, scale=1920:1080, drawtext=fontfile=Heathergreen.otf:text='TITLE':fontcolor=black:fontsize=62:x=135:y=940, drawtext=fontfile=voxbox.ttf:text='TEXT':fontcolor=white:fontsize=70:x=120:y=885[v];[2:a]showwaves=mode=cline:s=178x56:r=20:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w];[v][3]overlay=20:500[z];[z][w]overlay=108:740[outv]" -map "[outv]" -map 2:a -pix_fmt yuv420p -c:v libx264 -c:a aac -preset veryfast -shortest -movflags faststart -fflags genpts -r 20 "output.mp4"
UPDATE: I've simply resized the image and used that as input rather than resizing during the encode. It works fine, but if anyone has an answer to this I'd be curious to know where I was going wrong.
Instead of [v][3]overlay=20:500[z] you would use [3]scale=360:360[3v];[v][3v]overlay=20:500[z]. Your GIF should be square-shaped to begin with, to avoid distorting it.
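For illustration, a minimal standalone sketch of the same idea (base.mp4 and the 20:500 position are placeholders):
ffmpeg -i base.mp4 -ignore_loop 0 -i logo.gif -filter_complex \
"[1]scale=360:360[logo];[0][logo]overlay=20:500:shortest=1" \
-c:v libx264 -pix_fmt yuv420p out.mp4
Scaling only the GIF branch leaves the main video untouched, which is likely why scaling inside the main chain hurt your output quality.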
I'm playing with ffmpeg to generate a pretty video out of an mp3 + jpg.
I've managed to generate a video that takes a jpg as a background and adds a waveform on top of it via a complex filter (keying out the waveform's black background before overlaying).
This works:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg -filter_complex "[0:a]showwaves=s=1280x720:mode=cline,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
I've been trying to add text somewhere in the generated video too, using the drawtext filter. I can't get this to work, however, so it seems I don't understand the syntax, or how to combine filters.
This doesn't work:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg -filter_complex "[0:a]showwaves=s=1280x720:mode=line,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" -filter_complex "[v]drawtext=text='My custom text test':fontcolor=White#0.5: fontsize=30:font=Arvo:x=(w-text_w)/5:y=(h-text_h)/5[out]" -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
Would love some pointers!
Filters operating in series should be chained together:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg \
-filter_complex "[0:a]showwaves=s=1280x720:mode=line,colorkey=0x000000:0.01:0.1,\
format=yuva420p[v];\
[1:v][v]overlay,\
drawtext=text='My custom text test':fontcolor=White@0.5:\
fontsize=30:font=Arvo:x=(w-text_w)/5:y=(h-text_h)/5[outv]" \
-map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
(You applied the drawtext onto the output of showwaves; it can be applied directly to the output of the overlay instead. Note also that fontcolor uses @ for alpha, not #.)
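For reference, in filtergraph syntax a comma chains filters in series within one chain, while a semicolon separates chains that connect through named pads, so the two forms below are equivalent:
[1:v][v]overlay,drawtext=text='hi'[outv]
[1:v][v]overlay[tmp];[tmp]drawtext=text='hi'[outv]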
I need to convert many videos in such a way that I take 2 different crops from each frame of a single video, stack them one over the other and scale down the result, creating a new smaller video.
I want to convert this full-HD frame (the two crop areas are marked in red) into this small stacked frame.
Right now I use the following code:
ffmpeg -i "video.mkv" -filter:v "crop=560:416:0:0" out1.mp4
ffmpeg -i "video.mkv" -filter:v "crop=560:384:1060:128" out2.mp4
ffmpeg -i out1.mp4 -vf "movie=out2.mp4[inner]; [in][inner] overlay=0:32,scale=280:208[out]" -c:v libx264 -preset veryfast -crf 30 result.mp4
It works, but it is very inefficient and requires temporary files (out1 and out2). And the problem is that I have over 100,000 such videos (they are big and stored on a NAS, not directly on my computer's HDD). Converting all of them with a Windows batch script (a for loop) would take about 48 days. Can you help me optimize the script?
Use the crop, vstack, and scale filters:
ffmpeg -i input.mkv -filter_complex "[0:v]crop=560:32:0:0[top];[0:v]crop=560:384:1060:128[bottom];[top][bottom]vstack,scale=280:-2[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 -movflags +faststart result.mp4
If you want to complicate it somewhat for faster filtering (maybe) then you can try scaling first:
ffmpeg -i input.mkv -filter_complex "[0:v]scale=iw/2:-1,split[v0][v1];[v0]crop=560/2:32/2:0:0[top];[v1]crop=560/2:384/2:1060/2:128/2[bottom];[top][bottom]vstack[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 -movflags +faststart result.mp4
You'll have to experiment to see which is fastest for you.
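One way to compare them (a sketch, using a hypothetical representative file sample.mkv) is FFmpeg's -benchmark flag together with the null muxer, which reports elapsed time without writing an output file:
ffmpeg -benchmark -i sample.mkv -filter_complex \
"[0:v]crop=560:32:0:0[top];[0:v]crop=560:384:1060:128[bottom];[top][bottom]vstack,scale=280:-2[out]" \
-map "[out]" -c:v libx264 -preset veryfast -crf 30 -f null -
Keeping the encoder in the command means the timing includes x264, not just decoding and filtering; repeat with the second filter graph and compare the reported times.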
This code works fine for some audio files (it makes a slideshow of JPG pictures with a PNG watermark and MP3 audio, while maintaining aspect ratio), but for this audio file the pictures do not show for the first two seconds or so of the video:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" -filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" -map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p -c:a copy -map_metadata -1 "media/video.mkv" -report
I tried converting the audio into different formats of MP3, tried changing bitrates, changed audio to stereo, and even tried converting it to a WAV. None of these things worked.
Here are the report results for when I run this command.
If it makes a difference, I'm using Ubuntu 14.04 and FFmpeg version N-77455-g4707497 (latest version).
This command should work, though I consider this bizarre behaviour, as FFmpeg should automatically pad frames to match the output spec:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" -filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2,fps=24[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" -map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p -c:a copy -map_metadata -1 "media/video.mkv"