Combine Two Commands (Get Video from Images) - ffmpeg

I have 300 images and I want to generate a video from them.
I am new to FFmpeg, so right now I am using two commands to generate the video from the images.
Command to generate the video from the images, which also adds a logo to the video:
ffmpeg -framerate 24 -i img_%d.jpg -i logo.png -filter_complex \
"[0:v][1:v] overlay=25:25:enable='between(t,0,20)'" \
-vcodec libx264 -crf 25 -pix_fmt yuv420p test_video.mp4
After running the above command I get the video. To add audio to this video I use the command below:
ffmpeg -i test_video.mp4 -i inputfile.mp3 -c:v libx264 -c:a libvorbis -shortest final_video.mp4
which generates the video, but I get the message below:
MPEG-4 AAC decoder is required to play the file
Help me combine both commands. If possible, can we add sound in a way that does not require an extra decoder?
Log for command 1: https://drive.google.com/file/d/1zS7gvrPy69VK_MkyE4127FpX2kEziJHq/view?usp=sharing
and log for command 2: https://drive.google.com/file/d/1rHqVGzj7f003aWP6eISiyUjsES8_EWuw/view?usp=sharing

Try the following command:
ffmpeg -framerate 24 -i img_%d.jpg -i logo.png -i inputfile.mp3 -filter_complex \
"[0:v][1:v] overlay=25:25:enable='between(t,0,20)'" \
-vcodec libx264 -crf 25 -map 2:a -c:a copy -pix_fmt yuv420p -shortest test_video.mp4
-map 2:a is needed to skip the image stream in case the audio track has an embedded cover image.
With -c:a copy the audio track will not be re-encoded, so you will have the MP3 inside your video file.
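If the player still complains about the MP3 track, a hedged variant is to re-encode the audio to AAC instead of copying it, since AAC in MP4 is decodable by essentially every player (the bitrate here is only an example):
ffmpeg -framerate 24 -i img_%d.jpg -i logo.png -i inputfile.mp3 -filter_complex \
"[0:v][1:v] overlay=25:25:enable='between(t,0,20)'" \
-vcodec libx264 -crf 25 -map 2:a -c:a aac -b:a 192k -pix_fmt yuv420p -shortest test_video.mp4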

Related

FFMPEG: Combine "Create video from images" + scale to x + add audio + overlay logo

I'm working on a webcam project for generating timelapse videos of sunset/sundown.
I'm using a Raspberry Pi with gphoto2 and a DSLR to capture the images.
At the end of the day the images should be turned into a video, with audio and an overlay logo,
and it should be scaled to 1920 pixels wide.
I have a solution that works in two steps.
Producing the timelapse video and scaling it:
ffmpeg -y -framerate 25 -start_number 0000001 -i /var/www/html/webcam/2020-01-05_bilder/%7d.jpg -vf scale=1920:-1 -pix_fmt yuv420p /var/www/html/webcam/2020-01-05-tag-output-1920.mp4
Taking the output of (1) and adding an overlay logo and audio:
ffmpeg -y -i '/var/www/html/webcam/2020-01-05-tag-output-1920.mp4' \
-i '/var/www/html/webcam-scripts/graphics/logo.png' \
-i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' \
-shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)' \
-c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2 \
'/var/www/html/webcam/2020-01-05-tag-1920.mp4'
I tried to combine both actions, but I get an error:
ffmpeg -y -framerate 25 -start_number 0000001 -i '/var/www/html/webcam/2020-01-05_bilder/%7d.jpg' -vf scale=1920:-1 -pix_fmt yuv420p -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -shortest -filter_complex '[1][0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01)' -c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -strict -2 '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
Error: Filtergraph 'scale=720:-1' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
Isn't it possible to combine these inputs and scale them? Or... where is my misunderstanding?
Don't mix -vf and -filter_complex. Do all filtering in one filtergraph.
ffmpeg -y -framerate 25 -i '/var/www/html/webcam/2020-01-05_bilder/%7d.jpg' -i '/var/www/html/webcam-scripts/graphics/logo.png' -i '/var/www/html/webcam-scripts/sounds/chill_time_5.mp3' -filter_complex '[0]scale=1920:-2[v0];[1][v0]scale2ref=h=ow/mdar:w=iw/6[#A logo][liebfrauen]; [#A logo]format=argb,colorchannelmixer=aa=0.95[#B logo transparent]; [liebfrauen][#B logo transparent] overlay=(main_w-w)-(main_w*0.05):(main_h-h)-(main_h*0.01),format=yuv420p' -c:v libx264 -crf 18 -preset slow -c:a aac -shortest '/var/www/html/webcam/2020-01-05-tag-1920.mp4'
No need for -strict -2. It does nothing for modern ffmpeg.
I replaced -pix_fmt yuv420p with format=yuv420p so it is more organized.
-start_number 0000001 is not needed because 1 is the default.
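If you want to confirm what ended up in the combined file, a quick check with ffprobe (output path taken from the command above) lists the streams and the scaled resolution:
ffprobe -v error -show_entries stream=index,codec_type,codec_name,width,height -of default=noprint_wrappers=1 '/var/www/html/webcam/2020-01-05-tag-1920.mp4'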

FFMPEG Add watermark to MP4

I have this command to add watermark to an mp4
ffmpeg -i junai-blvaz.mp4 -i evercam-logo-white.png -filter_complex "[1]scale=iw/2:-1[wm];[0][wm]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10" -codec:a copy output.mp4
But I am creating the video using
ffmpeg -r 6 -i /tmp/%d.jpg -c:v h264_nvenc -r 6 -preset slow -bufsize 1000k -pix_fmt yuv420p -y junai-blvaz.mp4
Is there any way to merge this watermark step
-i evercam-logo-white.png -filter_complex '[1]scale=iw/2:-1[wm];[0][wm]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10'
into the very first command, through which the MP4 video is created?
Combine the two commands:
ffmpeg -y -framerate 6 -i /tmp/%d.jpg -i evercam-logo-white.png -filter_complex "[1]scale=iw/2:-1[wm];[0][wm]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10,format=yuv420p" -c:v h264_nvenc -preset slow -bufsize 1000k junai-blvaz.mp4
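Note that h264_nvenc needs an NVIDIA GPU and an ffmpeg build with NVENC enabled. A hedged way to check for it, plus a software fallback using libx264 if it is missing (the CRF value is only a suggestion):
ffmpeg -hide_banner -encoders | grep nvenc
ffmpeg -y -framerate 6 -i /tmp/%d.jpg -i evercam-logo-white.png -filter_complex "[1]scale=iw/2:-1[wm];[0][wm]overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10,format=yuv420p" -c:v libx264 -crf 23 -preset slow junai-blvaz.mp4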

Correct syntax for ffmpeg filter combination?

I'm playing with ffmpeg to generate a pretty video out of an mp3 + jpg.
I've managed to generate a video that takes a jpg as a background, and adds a waveform complex filter on top of it (and removes the black bg as an overlay).
This works:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg -filter_complex "[0:a]showwaves=s=1280x720:mode=cline,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
I've been trying to add text somewhere in the generated video too. I'm trying the drawtext filter. I can't get this to work however, so it seems I don't understand the syntax, or how to combine filters.
This doesn't work:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg -filter_complex "[0:a]showwaves=s=1280x720:mode=line,colorkey=0x000000:0.01:0.1,format=yuva420p[v];[1:v][v]overlay[outv]" -filter_complex "[v]drawtext=text='My custom text test':fontcolor=White#0.5: fontsize=30:font=Arvo:x=(w-text_w)/5:y=(h-text_h)/5[out]" -map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
Would love some pointers!
Filters operating in series should be chained together:
ffmpeg -y -i 1.mp3 -loop 1 -i 1.jpg \
-filter_complex "[0:a]showwaves=s=1280x720:mode=line,colorkey=0x000000:0.01:0.1,
format=yuva420p[v];
[1:v][v]overlay,
drawtext=text='My custom text test':fontcolor=White#0.5:
fontsize=30:font=Arvo:x=(w-text_w)/5:y=(h-text_h)/5[outv]" \
-map "[outv]" -pix_fmt yuv420p -map 0:a -c:v libx264 -c:a copy -shortest output.mp4
(You applied the drawtext to the output of showwaves; it should be applied to the overlay output instead, as above.)
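If drawtext cannot find the font by name (font=Arvo relies on fontconfig being available), pointing it at a font file usually works; the path below is only an example and depends on your system. Also note that alpha in ffmpeg colour syntax is written with @, e.g. White@0.5:
drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='My custom text test':fontcolor=White@0.5:fontsize=30:x=(w-text_w)/5:y=(h-text_h)/5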

FFmpeg concat and then add soundtrack results in a soundtrack that stutters

I am trying to create a video composed of clips of images and videos. For the clips of images, I use ffmpeg to create a video file and then I add a silent audio stream through these two steps:
ffmpeg.exe -loop 1 -i MyImage.png -codec:v libx264 -t 4.0 -profile:v high -preset slow -r 25 -b:v 500k -maxrate 500k -pix_fmt yuv420p -vf scale=1280:720 MyImageMovie.mp4
ffmpeg.exe -f lavfi -i anullsrc=r=48000 -i MyImageMovie.mp4 -shortest -c:v copy -c:a aac -strict experimental -y MyImageMovieWithSilentAudioStream.mp4
Then I combine my video clips and image clips with
ffmpeg.exe -f concat -i videoList.txt -c copy -y concatVideo.mp4
At this point the video looks good, and any video clips that have audio streams seem well synced to the video.
Now I add a soundtrack:
ffmpeg.exe -i concatVideo.mp4 -i soundtrack.mp3 -ar 48000 -filter_complex "[1:a]apad [b] ; [0:a][b]amerge=inputs=2[a]" -map 0:v -map "[a]" -c:v copy -ac 2 -shortest -y FinalVideo.mp4
The problem is that the soundtrack on FinalVideo.mp4 stutters at some (not all) of the concatenation joints.
I suspect it has to do with the audio stream and the video stream of the Image clips not being perfectly aligned. The aac has .0231s resolution and the video has 0.04s resolution. When I ffprobe the MyImageMovieWithSilentAudioStream.mp4 the duration is 4.00s but the start is 0.0213.
If my concatenated video has several of these image clips, the error can start to accumulate.
What can I do to keep the video and audio in sync and add a soundtrack that doesn't stutter?
Also, this is a little interesting, I don't hear the stutter when the final video is played on Windows Media Player, but it is there if I play it on VLC or via the html native video element.
Try adding the soundtrack in the same step as the concat.
ffmpeg -f concat -i videoList.txt -i soundtrack.mp3 \
-filter_complex "[1:a]apad[b];[0:a][b]amerge=inputs=2[a]" \
-map 0:v -map "[a]" \
-c:v copy -c:a aac -ac 2 -ar 48000 -shortest -y FinalVideo.mp4
As an aside, you can also combine the image and silent audio stream generation:
ffmpeg -loop 1 -i MyImage.png -f lavfi -i anullsrc=r=48000 \
-vf scale=1280:720 \
-c:v libx264 -profile:v high -preset slow -r 25 -b:v 500k -maxrate 500k -pix_fmt yuv420p \
-c:a aac -strict experimental -t 4 -y MyImageMovieWithSilentAudioStream.mp4
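To see whether the image clips are introducing the drift, a hedged check with ffprobe of each clip's container start time and duration (filename from the steps above) before concatenation can help:
ffprobe -v error -show_entries format=start_time,duration -of default=noprint_wrappers=1 MyImageMovieWithSilentAudioStream.mp4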

FFmpeg First 2 Seconds of Video Not Showing

This code works fine for some audio files (it makes a slideshow of JPG pictures with a PNG watermark and MP3 audio, while maintaining aspect ratio), but for this audio file the pictures do not show for the first two seconds or so of the video:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" -filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" -map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p -c:a copy -map_metadata -1 "media/video.mkv" -report
I tried converting the audio into different formats of MP3, tried changing bitrates, changed audio to stereo, and even tried converting it to a WAV. None of these things worked.
Here are the report results for when I run this command.
If it makes a difference, I'm using Ubuntu 14.04 and FFmpeg version N-77455-g4707497 (latest version).
This command should work, but I consider this bizarre behaviour, as FFmpeg should automatically pad frames to match the output spec:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" -filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2,fps=24[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" -map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p -c:a copy -map_metadata -1 "media/video.mkv"
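To verify the fix, a hedged check of the first few video frame timestamps in the output can be run; they should now start at or very near 0 (on older ffprobe builds the field is called pkt_pts_time instead of pts_time):
ffprobe -v error -select_streams v:0 -show_entries frame=pts_time -of csv=p=0 "media/video.mkv" | head -n 5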
