I hope one of you can tell me why this ffmpeg command of mine does not draw the desired text; the produced video doesn't show it. Here it is:
ffmpeg -f image2 -thread_queue_size 64 -framerate 15.1 -i /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/frames/%d.webp -y -an -vcodec libvpx -filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2 -vf scale=trunc(iw/2)*2:trunc(ih/2)*2 -crf 12 -deadline realtime -cpu-used 4 -pix_fmt yuv420p -loglevel warning -movflags +faststart /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/videomail_preview.webm
The crucial part is this video filter:
-filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2
Does it seem correct to you? If so, why am I not seeing any text in the videomail_preview.webm video file?
I am using ffmpeg v2.8.6 with --enable-libfreetype, --enable-libfontconfig, and --enable-libfribidi enabled.
Furthermore, the above command was produced with fluent-ffmpeg.
So, any ideas?
Combine all filters into a single graph. When -vf / -filter:v is given more than once for the same stream, only the last instance takes effect, so your scale filter is silently replacing the drawtext filter. So
-filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2 -vf scale=trunc(iw/2)*2:trunc(ih/2)*2
becomes
-filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2,scale=trunc(iw/2)*2:trunc(ih/2)*2
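Applied to your full command, a sketch of the fix looks like this (the filtergraph is quoted, which matters if you ever run it in a shell; fluent-ffmpeg passes arguments directly, so quoting is not your underlying problem here):
ffmpeg -f image2 -thread_queue_size 64 -framerate 15.1 -i /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/frames/%d.webp -y -an -vcodec libvpx -vf "drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2,scale=trunc(iw/2)*2:trunc(ih/2)*2" -crf 12 -deadline realtime -cpu-used 4 -pix_fmt yuv420p -loglevel warning -movflags +faststart /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/videomail_preview.webm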
I am trying to use ffmpeg to replace the video track in a video file with a still image. I tried some commands I got from other questions, such as the ones here:
ffmpeg -i x.png -i orig.mp4 final.mp4
ffmpeg -r 1/5 -i x.png -r 30 -i orig.mp4 final.mp4
But these didn't work, and I'm not sure which of these arguments are required. The output should be accepted by YouTube as a valid video. I was able to simply remove the video track, but apparently they don't let you upload a video without one.
You can try looping the still image like this:
ffmpeg -loop 1 -i x.png -i orig.mp4 final.mp4
Then you can tweak the encoding process by introducing the following quality parameters:
ffmpeg -loop 1 -i x.png -i orig.mp4 -crf 22 -preset slow final.mp4
They are described here.
If YouTube rejects your colorspace, you can try adding -pix_fmt yuv420p.
Solution: A final solution is something like this:
ffmpeg -loop 1 -i x.png -i orig.mp4 -map 0 -map 1:a -c:v libx264 -pix_fmt yuv420p -crf 22 -preset slow -c:a copy -shortest final.mp4
Here -shortest ends the output when the shorter input (the audio) runs out; alternatively, -t 30 would set an explicit duration of 30 seconds. Using -c:a copy copies the original audio directly without re-encoding it (which is faster).
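As a quick sanity check, you can compare the output duration against the audio with a standard ffprobe call:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 final.mp4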
I need to resize input 3 (logo.gif) to 360x360, but using scale=360:360 just made my video quality really bad. Here's my code:
ffmpeg -y -hide_banner -safe 0 -f concat -i "concat.txt" -i "overlay.png" -i "audio.mp3" -ignore_loop 0 -i "logo.gif" -filter_complex "[0]scale=3840x2160,zoompan=z='if(lte(zoom,1.0),1.25,max(1.001,zoom-0.0012))':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':fps=20:d=200:s=1920x1080[p];[p][1]overlay, scale=1920:1080, drawtext=fontfile=Heathergreen.otf:text='TITLE':fontcolor=black:fontsize=62:x=135:y=940, drawtext=fontfile=voxbox.ttf:text='TEXT':fontcolor=white:fontsize=70:x=120:y=885[v];[2:a]showwaves=mode=cline:s=178x56:r=20:scale=sqrt:colors=0x222222,colorkey=0x000000:0.01:0.1,format=yuva420p[w];[v][3]overlay=20:500[z];[z][w]overlay=108:740[outv]" -map "[outv]" -map 2:a -pix_fmt yuv420p -c:v libx264 -c:a aac -preset veryfast -shortest -movflags faststart -fflags genpts -r 20 "output.mp4"
UPDATE: I've simply resized the image and used that as input rather than resizing during the encode. It works fine, but if anyone has an answer to this I'd be curious to know where I was going wrong.
Instead of [v][3]overlay=20:500[z], use [3]scale=360:360[3v];[v][3v]overlay=20:500[z]. That scales only the logo input before it is overlaid, leaving the main video untouched; a bare scale=360:360 placed later in the chain would shrink the whole composited frame, which would explain the quality drop. Your GIF should be square-shaped to begin with, to avoid distorting it.
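Outside of your full graph, a minimal sketch of the same idea (with hypothetical filenames main.mp4 and logo.gif) would be:
ffmpeg -i main.mp4 -ignore_loop 0 -i logo.gif -filter_complex "[1]scale=360:360[logo];[0][logo]overlay=20:500" -c:v libx264 out.mp4
The overlay ends when the main input ends, so the endlessly looping GIF does not keep the encode running.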
I need to convert many videos in such a way that I take 2 different crops from each frame of a single video, stack them one over the other and scale down the result, creating a new smaller video.
I want to convert this fullHD frame (two crop areas are marked red) to this small stacked frame.
Right now I use the following code:
ffmpeg -i "video.mkv" -filter:v "crop=560:416:0:0" out1.mp4
ffmpeg -i "video.mkv" -filter:v "crop=560:384:1060:128" out2.mp4
ffmpeg -i out1.mp4 -vf "movie=out2.mp4[inner]; [in][inner] overlay=0:32,scale=280:208[out]" -c:v libx264 -preset veryfast -crf 30 result.mp4
It works, but it is very inefficient and requires temporary files (out1 and out2). And the problem is that I have over 100,000 such videos (they are big and stored on a NAS, not directly on my computer's HDD). Converting all of them with a Windows batch script (for loop) will take 48 days. Can you help me optimize the script?
Use the crop, vstack, and scale filters:
ffmpeg -i input.mkv -filter_complex "[0:v]crop=560:32:0:0[top];[0:v]crop=560:384:1060:128[bottom];[top][bottom]vstack,scale=280:-2[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 -movflags +faststart result.mp4
If you want to complicate it somewhat for possibly faster filtering, you can try scaling first:
ffmpeg -i input.mkv -filter_complex "[0:v]scale=iw/2:-1,split[v0][v1];[v0]crop=560/2:32/2:0:0[top];[v1]crop=560/2:384/2:1060/2:128/2[bottom];[top][bottom]vstack[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 -movflags +faststart result.mp4
You'll have to experiment to see which is fastest for you.
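Since you mention driving this from a Windows batch script, a minimal sketch of a .bat loop (assuming the .mkv files are in the current directory and an out subfolder already exists; %%~nF expands to the filename without its extension) could be:
for %%F in (*.mkv) do ffmpeg -i "%%F" -filter_complex "[0:v]crop=560:32:0:0[top];[0:v]crop=560:384:1060:128[bottom];[top][bottom]vstack,scale=280:-2[out]" -map "[out]" -c:v libx264 -preset veryfast -crf 30 -movflags +faststart "out\%%~nF.mp4"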
This code works fine for some audio files (it makes a slideshow of JPG pictures with a PNG watermark and MP3 audio, while maintaining aspect ratio), but for this audio file the pictures do not show for the first two seconds or so of the video:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" -filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" -map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p -c:a copy -map_metadata -1 "media/video.mkv" -report
I tried converting the audio into different MP3 formats, changing bitrates, switching the audio to stereo, and even converting it to WAV. None of these things worked.
Here are the report results for when I run this command.
If it makes a difference, I'm using Ubuntu 14.04 and FFmpeg version N-77455-g4707497 (latest version).
This command should work, although I consider this bizarre behaviour, as FFmpeg should automatically pad frames to match the output spec. The added fps=24 filter normalizes the slideshow to 24 fps inside the filtergraph, before the overlay, rather than relying on the output -r to pad frames afterwards:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" -filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2,fps=24[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" -map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p -c:a copy -map_metadata -1 "media/video.mkv"
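To confirm that video frames now start at t=0, you can inspect the first few frame timestamps; for example (the exact field name can vary slightly between ffprobe versions):
ffprobe -v error -select_streams v:0 -show_entries frame=pkt_pts_time -of csv=p=0 media/video.mkv | head -n 5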
Hi, I am new to FFmpeg.
I have made a video from a slideshow of sequential images (img001.jpg, img002.jpg, img003.jpg, ...) using the following command on Ubuntu 14.04:
ffmpeg -framerate 1/5 -i img%03d.jpg -c:v libx264 -r 30 -pix_fmt yuv420p -vf scale=320:240 out.mp4
But now I want to add animations like fade-in and fade-out between the sequential images. Can anybody help me with how to do this? I have searched a lot but could not find a solution.
The best way to do this is to create intermediate MPEGs for each image and then concatenate them all into a video. For example, say you have 5 images; you would run this for each one of the images to create the intermediate MPEGs with a fade-in at the beginning and a fade-out at the end:
ffmpeg -y -loop 1 -i image -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" -c:v mpeg2video -t 5 -q:v 1 image-1.mpeg
where -t is the duration of each image (here 5 seconds, with the fade-out starting at 4.5 seconds so it ends exactly at the cut). Once you have all of these MPEGs, you use ffmpeg's concat filter to combine them all into an MP4, as sketched below.
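To generate the intermediates, a minimal shell loop (assuming the source images are named image-1.jpg through image-5.jpg) might look like:
for i in 1 2 3 4 5; do
  ffmpeg -y -loop 1 -i image-$i.jpg -vf "fade=t=in:st=0:d=0.5,fade=t=out:st=4.5:d=0.5" -c:v mpeg2video -t 5 -q:v 1 image-$i.mpeg
done
Then concatenate them: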
ffmpeg -y -i image-1.mpeg -i image-2.mpeg -i image-3.mpeg -i image-4.mpeg -i image-5.mpeg -filter_complex '[0:v][1:v][2:v][3:v][4:v] concat=n=5:v=1 [v]' -map '[v]' -c:v libx264 -s 1280x720 -aspect 16:9 -q:v 1 -pix_fmt yuv420p output.mp4
This gives you the desired video and is the simplest and highest-quality solution with ffmpeg. Let me know if you have any questions about how the above command works.