Merge/Blend/Composite transparent webm files with a jpg/png using ffmpeg

I previously generated color.webm and alpha.webm files with ffmpeg from png images.
However, playing the generated videos in my application consumes quite some resources, so I have decided to create standard videos (no alpha needed). I cannot re-record the videos, as there are a lot of them at this point.
I am generating the alpha and color videos using the following command:
ffmpeg -framerate 30 -i img%d.png -filter_complex "alphaextract[a]" -map 0:v -c:v libvpx-vp9 -pix_fmt yuv420p -auto-alt-ref 0 -crf 15 -b:v 0 color.webm -map "[a]" -c:v libvpx-vp9 -pix_fmt yuv420p -auto-alt-ref 0 -crf 15 -b:v 0 alpha.webm
I am looking for a final video output in which the transparent (alpha) regions are replaced with the png/jpg image. It is much like replacing a green screen with a background, except that in my case the green screen is the alpha channel.
Could anyone let me know how to combine alpha.webm and color.webm with a jpg/png image with minimal or no quality loss?
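One way to do this (a sketch, untested against these exact files; background.png and output.mp4 are placeholder names) is to restore the alpha channel with the alphamerge filter and then overlay the result on the looped background image:
ffmpeg -i color.webm -i alpha.webm -loop 1 -i background.png -filter_complex "[0:v][1:v]alphamerge[fg];[2:v][fg]overlay=shortest=1[out]" -map "[out]" -c:v libx264 -crf 15 -pix_fmt yuv420p output.mp4
Here alphamerge takes the grayscale alpha.webm as the alpha of color.webm, and overlay=shortest=1 stops the output when the merged video ends instead of looping the still image forever.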

Related

Make an Alpha Mask video from PNG files

RenPy uses the notion of an Alpha Mask video: https://www.renpy.org/doc/html/movie.html#movie-displayables-and-movie-sprites
I can convert a bunch of PNGs with an alpha channel to webm following http://wiki.webmproject.org/howtos/convert-png-frames-to-webm-video. I was wondering how to do the same sort of thing without creating another set of PNG files containing just the alpha frames.
I'd be okay with something that uses ImageMagick in the middle if needed.
You can use ffmpeg to create both files at once.
ffmpeg -i img%d.png -filter_complex "alphaextract[a]" \
-map 0:v -pix_fmt yuv420p -c:v libvpx -b:v 0 -crf 20 color.webm \
-map "[a]" -pix_fmt yuv420p -c:v libvpx -b:v 0 -crf 20 alpha.webm
Depending on your shell, you may need to use single quotes around the -map argument.

FFMPEG : Fill/Change (part of) audio waveform color as per actual progress with respect to time progress

I am trying to build a command that generates a waveform from an mp3 file, shows it over a background image, and plays the audio.
In addition, I want the waveform color to change from left to right (something like a progress bar) as the video time elapses.
I have created the following command, which shows a progress bar by using drawbox to fill a box according to the current time position:
ffmpeg -y -loop 1 -threads 0 -i sample_background.png -i input.mp3 -filter_complex "color=red#0.5:s=1280x100[Color];[0:v]drawbox=0:155:1280:100:gray#1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[waveform]; [baserect][waveform] overlay=0:155 [v1];[v1][Color] overlay=x='if(gte(t,0), -W+(t)*64, NAN)':y=155:format=yuv444[v2]" -map "[v2]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 20 -c:a copy -shortest -pix_fmt yuv420p -threads 0 output_withwave_and_progresbar.mp4
But I want to show the progress inside the generated audio waveform itself instead of filling a rectangle with drawbox.
So I tried making two waveforms in two different colors and overlaying them, so that the top waveform is displayed only up to the x position (from the left) corresponding to the current time:
ffmpeg -y -loop 1 -threads 0 -i sample_background.png -i input.mp3 -filter_complex "[0:v]drawbox=0:155:1280:100:gray#1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xff0000[waveform];[1:a]aformat=channel_layouts=mono,showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[waveform2]; [baserect][waveform] overlay=0:155 [v1];[v1][waveform2] overlay=x='if(gte(t,0), -W+(t)*64, NAN)':y=155:format=yuv444[v2]" -map "[v2]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 20 -c:a copy -shortest -pix_fmt yuv420p -threads 0 test.mp4
But I cannot find a way to produce a wipe effect from left to right; currently the top waveform slides, because I am changing the x of the overlay.
It could presumably be done with an alpha merge, making every pixel transparent except those left of the current x position, but I cannot find out how to do this.
(Background image attached.) Any mp3 file can be used; I have currently set a 20-second duration.
Can someone please explain how to do this? Thanks.
Use the blend filter, e.g.:
ffmpeg -y -loop 1 -threads 0 -i sample_background.png -i input.mp3 -filter_complex "[0:v]drawbox=0:155:1280:100:gray#1:t=fill[baserect];[1:a]aformat=channel_layouts=mono,asplit[red][white];[red]showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xff0000[red];[white]showwaves=s=1280x100:rate=7:mode=cline:scale=sqrt:colors=0xffffff[white];[red][white]blend=all_expr='if(lte(X/W,T/64),A,B)'[waveform];[baserect][waveform]overlay=0:155:format=yuv444[v]" -map "[v]" -map 1:a -c:v libx264 -crf 35 -ss 0 -t 20 -c:a copy -shortest -pix_fmt yuv420p -threads 0 test.mp4
where 64 is the total duration of the audio in seconds: the expression if(lte(X/W,T/64),A,B) picks each pixel from the first (red) waveform while its horizontal position X/W lies left of the elapsed fraction T/64, and from the second (white) waveform otherwise, which produces the wipe.
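If you do not want to hard-code the duration, you can query it beforehand with ffprobe and substitute the result into the blend expression, for example:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp3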

Efficient command line to crop a video, overlay another crop from it and scale the result with ffmpeg

I need to convert many videos in such a way that I take 2 different crops from each frame of a single video, stack them one over the other and scale down the result, creating a new smaller video.
I want to convert a fullHD frame (with two crop areas marked in red) into a small stacked frame.
Right now I use the following code:
ffmpeg -i "video.mkv" -filter:v "crop=560:416:0:0" out1.mp4
ffmpeg -i "video.mkv" -filter:v "crop=560:384:1060:128" out2.mp4
ffmpeg -i out1.mp4 -vf "movie=out2.mp4[inner]; [in][inner] overlay=0:32,scale=280:208[out]" -c:v libx264 -preset veryfast -crf 30 result.mp4
It works, but it is very inefficient and requires temporary files (out1 and out2). And the problem is that I have over 100,000 such videos (they are big and stored on a NAS, not directly on my computer's HDD). Converting all of them with a Windows batch script (for loop) would take about 48 days. Can you help me optimize the script?
Use the crop, vstack, and scale filters:
ffmpeg -i input.mkv -filter_complex "[0:v]crop=560:24:0:0[top];[0:v]crop=560:384:1076:128[bottom];[top][bottom]vstack,scale=280:-2[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 -movflags +faststart result.mp4
If you want to complicate it somewhat for faster filtering (maybe) then you can try scaling first:
ffmpeg -i input.mkv -filter_complex "[0:v]scale=iw/2:-1,split[v0][v1];[v0]crop=560/2:24/2:0:0[top];[v1]crop=560/2:384/2:1076/2:128/2[bottom];[top][bottom]vstack[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 -movflags +faststart result.mp4
You'll have to experiment to see which is fastest for you.
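Since a single command now does everything with no temporary files, it can be dropped straight into a batch loop. A minimal sketch for a .bat script, assuming the .mkv files sit in the current folder and a _small.mp4 output name is acceptable (both assumptions; %%~nF expands to the file name without extension, and you would use single % if typing directly at the prompt):
for %%F in (*.mkv) do ffmpeg -i "%%F" -filter_complex "[0:v]crop=560:24:0:0[top];[0:v]crop=560:384:1076:128[bottom];[top][bottom]vstack,scale=280:-2[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 "%%~nF_small.mp4"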

Make video from images with ffmpeg/imagemagick

I want to convert jpg images to an mp4 video without resizing the images (keeping the original image size and a well-formed video).
I have tried lots of ffmpeg and ImageMagick solutions (links given below), but both crop the images after converting to video, and I want a video made from the images at their original size.
A solution with ffmpeg or ImageMagick will be appreciated. :)
slow ffmpeg's images per second when creating video from images
image to video ffmpegf
FFMPEG An Intermediate Guide/image sequence
How can I create a video file from a set of jpg images? [duplicate]
How to create a video from images with FFmpeg?
FFmpeg
Make video from still image sequence
Combining images with ImageMagick
Imagemagick.org
ffmpeg -framerate 1/5 -i na%03d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4
On a large image (1600x1200) it executes successfully but does not generate a smooth video.
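A likely cause of the choppiness: -framerate 1/5 also makes the output itself run at 0.2 fps, which many players handle badly. A common remedy (a sketch, not tested on these images) is to force a normal output frame rate so each image is duplicated across frames:
ffmpeg -framerate 1/5 -i na%03d.jpg -c:v libx264 -r 30 -profile:v high -crf 20 -pix_fmt yuv420p output.mp4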
On a small image (300x168) it shows an error. I also tried this command on the small image:
ffmpeg -framerate 1/5 -i abc%03d.jpg -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
This worked for me (it scales the image to fit within 1280x720, preserving aspect ratio, and pads it to center on a 1280x720 canvas); I use it in a loop:
ffmpeg -loop 1 -i na002.jpg -c:a copy -c:v libx264 -strict 1 -shortest -vf "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" test.mp4
ffmpeg -y -i video.mp4 -vf "scale='min(1280,iw)':'min(720,ih)':force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" out1.media
ffmpeg -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp0.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp1.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp2.mp4" -i "C:\xampp\htdocs\social_media\public\uploads\stories\temp_media\81/temp3.mp4" -filter_complex "[0]setdar=16/9[a];[1]setdar=16/9[b];[2]setdar=16/9[c];[3]setdar=16/9[d];[a][0:a][b][1:a][c][2:a][d][3:a] concat=n=4:v=1:a=1[v][a]" -map "[outv]" -map "[outa]"

encoding jpeg as h264 video

I am using the following command to encode an AVI to an H264 video for use in an HTML5 video tag:
ffmpeg -y -i "test.avi" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this works just fine. But I also want to create a placeholder video (long story) from a single still image, so I do this:
ffmpeg -y -i "test.jpg" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this doesn't work. What gives?
EDIT: After trying LordNeckbeard's answer, here is my full output: http://pastebin.com/axhKpkLx
Example for a 10 second output:
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4
Same thing but with audio. The output duration will match the input audio duration:
ffmpeg -loop 1 -framerate 24 -i input.jpg -i audio.mp3 -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -c:a aac -shortest -movflags +faststart output.mp4
-loop 1 loops the image input.
-framerate sets the frame rate of the image input. Default is 25. Some players have issues with low frame rates so a value over 6 or so is recommended.
-i input.jpg the input.
-c:v libx264 the H.264 video encoder.
-preset x264 encoding preset. Use the slowest one you can.
-tune x264 tuning for various adjustments to fit specific situations.
-crf for quality. A lower value results in higher quality. Use the highest value that still provides an acceptable quality to you. Default is 23.
-vf format=yuv420p outputs the pixel format as yuv420p. This ensures the output uses a widely acceptable chroma sub-sampling scheme. Recommended for libx264 when encoding from images.
-c:a aac the AAC audio encoder. If your input is already AAC or M4A then use -c:a copy to stream copy instead of re-encoding.
-t 10 (in the first example) makes a 10 second output. Needed because the image is looping indefinitely.
-shortest (in the second example) makes the output the same duration as the shortest input. In this case it is the audio since the image is looping indefinitely.
-movflags +faststart relocates the moov atom to the beginning of the file after encoding finishes. This allows playback to begin sooner during progressive download; otherwise the whole video must be downloaded before it can play.
-profile:v main (optional) some devices can't handle High profile.
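For example, a variant of the first command above with the optional profile flag added:
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -profile:v main -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4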
See FFmpeg Wiki: H.264 for more info.
