I'm creating waveforms for my audio player with this command:
ffmpeg -i source.wav -filter_complex "aformat=channel_layouts=mono,showwavespic=s=1280x90:colors=#000000" -frames:v 1 output.png
Sometimes the waveform comes out looking bad, like here:
For other songs it looks good, like here:
So the first waveform is tiny. How can I normalize/scale the output waveform so it fills the 90px height of the output image?
There's a filter you can add called compand, which compresses/expands the audio's dynamic range and effectively scales the waveform vertically. You could update your ffmpeg command to be:
ffmpeg -i source.wav -filter_complex "compand,aformat=channel_layouts=mono,showwavespic=s=1280x90:colors=#000000" -frames:v 1 output.png
You can check out the documentation here: https://trac.ffmpeg.org/wiki/Waveform
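If the default compand curve still leaves a quiet track looking small, another option is to normalize the audio before drawing it, for example with the dynaudnorm filter (a sketch with dynaudnorm at its default settings, not part of the original answer; adjust to taste):
ffmpeg -i source.wav -filter_complex "dynaudnorm,aformat=channel_layouts=mono,showwavespic=s=1280x90:colors=#000000" -frames:v 1 output.png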
I have this example video, recorded by Kazam:
https://user-images.githubusercontent.com/1997316/178513325-98513d4c-49d4-4a45-bcb2-196e8a76fa5f.mp4
It's a 1022x728 video.
I need to add a drop shadow identical to the one generated by the "Drop shadow (legacy)" filter of Gimp with the default settings. So, I generated with Gimp a PNG containing only the drop shadow. It's a 1052x758 image:
Now I want to put the video over the image to get a new video with the drop shadow. The desired effect for the first frame is:
So, the video must be placed over the image. The top-left corner of the video must be in the position 11x11 of the background image.
How can I achieve this result?
I tried without success the following command. What's wrong?
ffmpeg -i shadow.png -i example.mp4 -filter_complex "[0:v][1:v] overlay=11:11'" -pix_fmt yuv420p output.mp4
About the transparency of the PNG background image: if it can't be maintained, it's fine for the shadow to sit on a white background. But if transparency can be preserved by using an animated GIF as the output format, that's even better.
The solution is to remove the transparency from shadow.png. Then:
ffmpeg -i example.mp4 -filter_complex "[0:v] palettegen" palette.png
ffmpeg -loop 1 -i shadow.png -i example.mp4 -i palette.png -filter_complex "[1:v] fps=1,scale=1022:-1[inner];[0:v][inner]overlay=11:11:shortest=1[new];[new][2:v] paletteuse[out]" -map '[out]' -y output.gif
The result is exactly what I wanted:
This solution is inspired by the answer https://stackoverflow.com/a/66318325 and by the article https://www.baeldung.com/linux/convert-videos-gifs-ffmpeg
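As a side note, the two commands can probably be collapsed into a single pass by generating and applying the palette inside one filtergraph with split (a sketch reusing the same file names and offsets, untested against this exact clip):
ffmpeg -loop 1 -i shadow.png -i example.mp4 -filter_complex "[1:v]fps=1,scale=1022:-1[inner];[0:v][inner]overlay=11:11:shortest=1[new];[new]split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse[out]" -map '[out]' -y output.gif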
ffmpeg can draw a waveform with showwavespic which works fine. But in my case, I have a file with multiple audio tracks and I want to specify which audio track to use to draw the waveform.
The default example is: ffmpeg -i input -filter_complex "showwavespic=s=640x120" -frames:v 1 output.png
I tried to add -map 0:a:0 in between, but that gives strange ffmpeg errors.
Does anybody know how I can set the track index to use without first extracting the desired audio track?
This can be achieved using filtergraph link labels (see https://ffmpeg.org/ffmpeg-filters.html#Filtergraph-syntax-1) to select the relevant input, e.g.:
ffmpeg -i input -filter_complex "[0:a:6]showwavespic=s=640x240" -frames:v 1 output.png
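For example, to draw the waveform of the first audio track (the 0:a:0 stream mentioned in the question), the label would be [0:a:0]:
ffmpeg -i input -filter_complex "[0:a:0]showwavespic=s=640x120" -frames:v 1 output.png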
I am trying to crop a video so that I can remove a chunk of the content from the sides of a 360-degree video file using FFmpeg.
I used the following command and it does part of the job:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:0,pad=3840:1920:384:0 output.mp4
This will remove the sides of the video, and that was initially exactly what I wanted (A). Now I'm wondering if it is possible to crop in the same way but keep the top third of the video. As such, A is what I have, B is what I want:
I thought I could simply do this:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:640,pad=3840:1920:384:640 output.mp4
But that doesn't seem to work.
Any input would be very helpful.
Use the drawbox filter to fill the cropped portions with the default colour, black.
ffmpeg -i testVideo.mp4 -vf drawbox=w=384:h=1280:x=0:y=640:t=fill,drawbox=w=384:h=1280:x=3840-384:y=640:t=fill -c:a copy output.mp4
The first drawbox acts on the left side, and the second on the right.
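If you'd rather not hard-code the 3840x1920 frame size, drawbox also accepts iw/ih expressions, so an equivalent sketch (assuming the same geometry as the question) would be:
ffmpeg -i testVideo.mp4 -vf "drawbox=w=384:h=ih-640:x=0:y=640:t=fill,drawbox=w=384:h=ih-640:x=iw-384:y=640:t=fill" -c:a copy output.mp4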
I have a square video from Snap Spectacles (1088x1088) that I want to overlay on itself zoomed in and blurred.
Example input frame:
Generated zoomed in and blurred background:
Desired output:
I think I can do this with ffmpeg's maskedmerge, but I'm having trouble finding examples.
There's an example of maskedmerge that merges two videos of the same size and dynamically removes a green screen, and another that merges videos with transparency.
Here's the closest I've been able to get:
ffmpeg -i background.jpg -vf "movie=input.jpg[inner];[in][inner] overlay=#{offset}:0 [out]" -c:a copy output.jpg
tl;dr: given the first two frames, how could I generate the third frame (as video)?
Got it!
As @Mulvya recommended, I needed a circular mask:
Given that mask snapmask.png, a blurred square background video background.mov, and the original video 65B6354F61B4AF02_HD.MOV, they can be merged like this:
ffmpeg -i background.mov -loop 1 -i snapmask.png -filter_complex " \
[1:v]alphaextract, scale=1080:1080 [mask];\
movie=65B6354F61B4AF02_HD.MOV, scale=1080:1080 [original];\
[original][mask] alphamerge [masked];\
[0:v][masked] overlay=420:0" \
-c:a copy output.mov
You can do one better, though, which is generating the blurred background video on the fly in the same command. Now the only inputs are the original Spectacles round video and the circular mask:
ffmpeg -i 65B6354F61B4AF02_HD.MOV -loop 1 -i snapmask.png -filter_complex "\
[0:v]split[a][b];\
[1:v]alphaextract, scale=1080:1080[mask];\
[a]scale=1080:1080 [ascaled];\
[ascaled][mask]alphamerge[masked];\
[b]crop=946.56:532:70.72:278, boxblur=10:5,scale=1920:1080[background];\
[background][masked]overlay=420:0" \
-c:a copy 65B6354F61B4AF02_HD_sq.MOV
That crop=946.56:532:70.72:278 bit is what I found worked best to crop out a rectangular portion of the circular video to zoom into.
It took me a while to wrap my head around the ffmpeg filter system and how to do this, but it's not as scary as I'd initially thought. The basic syntax is [input]filter=args[output], and filters can be chained without explicitly naming their intermediate outputs (like in [1:v]alphaextract, scale=1080:1080[mask]).
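For instance, a minimal graph in that style (purely illustrative, unrelated to the videos above) scales a video and then flips it through a named intermediate pad:
ffmpeg -i in.mp4 -filter_complex "[0:v]scale=640:-2[small];[small]hflip[out]" -map "[out]" out.mp4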
I need something that can be scripted on Windows 7. This image will be used in banners.
Simon P Stevens' answer almost got me there:
ffmpeg -f image2 -i image%d.jpg video.avi
ffmpeg -i video.avi -pix_fmt rgb24 -loop_output 0 out.gif
Let's see if we can neaten this up.
Going via an avi is unnecessary. A -pix_fmt of rgb24 is invalid, and the -loop_output option prevents looping, which I don't want. We get:
ffmpeg -f image2 -i image%d.jpg out.gif
My input pictures are labeled with a zero-padded 3-digit number and I have 30 of them (image_001.jpg, image_002.jpg, ...), so I need to fix the format specifier:
ffmpeg -f image2 -i image_%003d.jpg out.gif
My input pictures are from my phone camera, so they are way too big! I need to scale them down:
ffmpeg -f image2 -i image_%003d.jpg -vf scale=531x299 out.gif
I also need to rotate them 90 degrees clockwise:
ffmpeg -f image2 -i image_%003d.jpg -vf scale=531x299,transpose=1 out.gif
This gif will play with zero delay between frames, which is probably not what we want. Specify the framerate of the input images:
ffmpeg -f image2 -framerate 9 -i image_%003d.jpg -vf scale=531x299,transpose=1 out.gif
The image is just a tad too big, so I'll crop out 100 pixels of sky. The transpose makes this tricky; I use the post-rotated x and y values:
ffmpeg -f image2 -framerate 9 -i image_%003d.jpg -vf scale=531x299,transpose=1,crop=299:431:0:100 out.gif
The final result - I get to share my mate's awesome facial expression with the world:
You can do this with ffmpeg.
First convert the images to a video:
ffmpeg -f image2 -i image%d.jpg video.avi
(This will convert the images from the current directory (named image1.jpg, image2.jpg...) to a video file named video.avi.)
Then convert the avi to a gif:
ffmpeg -i video.avi -pix_fmt rgb24 -loop_output 0 out.gif
You can get Windows binaries for ffmpeg here.
You can also do a similar thing with mplayer. See Encoding from multiple input image files.
I think the command line would be something like:
mplayer mf://*.jpg -mf w=800:h=600:type=jpg -vf scale=160:120 -vo gif89a:fps=3:output=out.gif
(Where 800 and 600 are your source width and height, 160 and 120 are the target width and height, and out.gif is your target file name.)
I've just tested both of these and they both work fine. However, I got much better results from mplayer, as I was able to specify the resolution and framerate. Your mileage may vary, and I'm sure you could find more options for ffmpeg if you looked.
With ImageMagick:
convert *.png a.gif
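If you need to control frame timing and looping, ImageMagick's standard options can be added (a sketch; -delay is in hundredths of a second and -loop 0 loops forever):
convert -delay 20 -loop 0 *.png a.gif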
The ffmpeg-to-.avi and .avi-to-.gif recipe worked, but the one thing to note is that your images must be named in strictly increasing numeric order, with no gaps. I cooked up a quick Python script to rename all of my images accordingly so that this ffmpeg recipe would work:
import os

# Collect all .jpg files in the current directory and rename them to a
# gapless, zero-padded sequence: image000.jpg, image001.jpg, ...
files = [f for f in os.listdir('.') if os.path.isfile(os.path.join('.', f)) and f.endswith('.jpg')]
for i, file in enumerate(sorted(files)):
    os.rename(file, 'image%03d.jpg' % i)
And then I stumbled upon a much simpler approach than ffmpeg for doing the conversion, which is simply using ImageMagick's command-line convert tool like this:
convert image%03d.jpg[0-198] animated_gif.gif
Doesn't get much simpler than that folks.
Gist here: https://gist.github.com/3289840
Based on the answers of Simon P Stevens and dwurf I came up with this simplified solution:
ffmpeg -f image2 -framerate 1 -i image%d.jpg video.gif
This results in a rate of 1 second per image. Adjust the framerate value according to your needs.
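For example, to show each image for two seconds instead (just the same command with a different rate):
ffmpeg -f image2 -framerate 0.5 -i image%d.jpg video.gif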
I'd just like to add to dwurf's answer that this will generate a GIF with the standard 256-color palette, which does not look very visually pleasing. I found two blog posts and adapted them to my needs, in order to improve the visual quality by using a custom palette for the animation:
Generate the color palette:
ffmpeg -f image2 -i image%d.jpg -vf scale=900:-1:sws_dither=ed,palettegen palette.png
Convert the images into a regular video with the desired framerate, because the third command only worked with a single input video and not with a bunch of images:
ffmpeg -f image2 -framerate 1.2 -i image%d.jpg video.flv
Now convert the generated video with the generated palette into a more beautiful gif:
ffmpeg -i video.flv -i palette.png -filter_complex "fps=1.2,scale=900:-1:flags=lanczos[x];[x][1:v]paletteuse" video.gif