Merge videos and images using ffmpeg

I'm trying to compile one .webm file that contains this:
10 seconds showing image1.jpg
Show a movie (an .mp4 file), which lasts about 20 seconds
10 seconds showing image2.jpg
10 seconds showing image3.jpg
I was unable to find out how (or if) the concat functionality of ffmpeg could do such a thing. Any clues?

You can use the concat filter.
Without audio
ffmpeg \
-loop 1 -framerate 24 -t 10 -i image1.jpg \
-i video.mp4 \
-loop 1 -framerate 24 -t 10 -i image2.jpg \
-loop 1 -framerate 24 -t 10 -i image3.jpg \
-filter_complex "[0][1][2][3]concat=n=4:v=1:a=0" out.mp4
Match -framerate with frame rate from video.mp4.
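If you're not sure what that frame rate is, ffprobe (installed alongside ffmpeg) can report it; video.mp4 here stands for your input file:

```shell
# Print the frame rate of the first video stream, e.g. "24/1" or "30000/1001"
ffprobe -v error -select_streams v:0 \
  -show_entries stream=r_frame_rate \
  -of default=noprint_wrappers=1:nokey=1 video.mp4
```

A value like 30000/1001 is NTSC ~29.97 fps; you can pass the fraction to -framerate as-is.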
With audio
If there is audio in video.mp4, you'll need to provide audio for the images as well, so that every concat segment has both a video and an audio stream. Example of generating silence:
ffmpeg \
-loop 1 -framerate 24 -t 10 -i image1.jpg \
-i video.mp4 \
-loop 1 -framerate 24 -t 10 -i image2.jpg \
-loop 1 -framerate 24 -t 10 -i image3.jpg \
-f lavfi -t 0.1 -i anullsrc=channel_layout=stereo:sample_rate=44100 \
-filter_complex "[0:v][4:a][1:v][1:a][2:v][4:a][3:v][4:a]concat=n=4:v=1:a=1" out.mp4
Match channel_layout with audio channel layout (stereo, mono, 5.1, etc) from video.mp4.
Match sample_rate with audio sample rate from video.mp4.
No need to match the -t duration from anullsrc with any associated video input: the concat filter will automatically pad it to match the video duration.
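To find those values for your own file, ffprobe can print the first audio stream's sample rate and channel layout (video.mp4 again stands for your input):

```shell
# Print sample_rate and channel_layout of the first audio stream
ffprobe -v error -select_streams a:0 \
  -show_entries stream=sample_rate,channel_layout \
  -of default=noprint_wrappers=1 video.mp4
```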

Related

ffmpeg can loop png but not audio

I'm using the following to stream an image to YouTube:
ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 -thread_queue_size 1080 \
-loop 1 -re -i ./image.png \
-i ./track.mp3 \
-pix_fmt yuv420p -c:v libx264 -qp:v 19 -profile:v high -rc:v cbr_ld_hq -level:v 4.2 -r:v 60 -g:v 120 -bf:v 3 -refs:v 16 -preset fast -f flv rtmp://a.rtmp.youtube.com/live2/xxx
And the looping for the image (to keep it streaming over) works, but not the sound.
Remember that FFmpeg input options apply per input. So -loop 1 is only specified for the -i ./image.png input, and -i ./track.mp3 has no input options at all. To loop the audio track, you need to use the -stream_loop input option, like this:
ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 -thread_queue_size 1080 \
-loop 1 -re -i ./image.png \
-stream_loop -1 -i ./track.mp3 \
...

FFmpeg make video from figures and speed up a chosen part

Make a video from a series of 100 figures
ffmpeg -framerate 10 -i input_figure%01d.png out.mp4
How can I only make figure numbers from [0-49] with a slower speed like -framerate 5?
My try is
ffmpeg -start_number 1 -framerate 5 -i input_figure%01d.png -vframes 49 \
-start_number 50 -framerate 10 -i input_figure%01d.png \
out.mp4
Doesn't work
The naive method is to create videos in parts, then concat them together:
ffmpeg -framerate 5 -i input_fig%01d.png -vframes 49 part_1.mp4
ffmpeg -start_number 50 -framerate 10 -i input_fig%01d.png part_2.mp4
ffmpeg -f concat -safe 0 \
-i <(for f in ./part_*.mp4; do echo "file '$PWD/$f'"; done) \
-c copy out.mp4
rm part_*.mp4
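If you'd rather skip the intermediate files, the same result can probably be had in one command by reading the figure sequence twice, trimming the first (slow) pass to its first 50 frames, and concatenating. This is a sketch I'm adding, not part of the original answer, and it assumes the figures are numbered from 0:

```shell
# Pass 1 reads all figures at 5 fps; trim keeps frames 0-49.
# Pass 2 reads figures 50+ at 10 fps. concat joins the two.
ffmpeg \
-framerate 5 -start_number 0 -i input_fig%01d.png \
-framerate 10 -start_number 50 -i input_fig%01d.png \
-filter_complex "[0:v]trim=end_frame=50,setpts=PTS-STARTPTS[slow];[slow][1:v]concat=n=2:v=1:a=0" \
out.mp4
```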

FFMPEG images to video + overlay video

I am trying to make a 15 second video where the background layer is a video made up of 2 images; the first line creates a 15 second video from 2 images.
I chose a small framerate so it renders an mp4 quickly. I then overlay a webm video (which has transparency) over the images. The final video seems to keep the framerate of 2, but I would rather keep the 24 framerate of the webm video.
Is this possible? And is it also possible to turn the below into 1 statement?
ffmpeg -loop 1 -framerate 2 -t 11 -i image1.png -loop 1 -framerate 2 -t 4 -i image2.png -filter_complex "[0][1]concat=n=2" backgroundvideo.mp4;
ffmpeg -i backgroundvideo.mp4 -c:v libvpx-vp9 -i overlayvideo.webm -filter_complex overlay newvid.mp4
You can use the fps filter to adjust your background's framerate, and do it all in one command. Note the overlay order: the background must be the first (bottom) input so the transparent webm is drawn on top.
ffmpeg \
-loop 1 -framerate 2 -t 11 -i image1.png \
-loop 1 -framerate 2 -t 4 -i image2.png \
-c:v libvpx-vp9 -i overlayvideo.webm \
-filter_complex '[0][1]concat,fps=24[bg];[bg][2]overlay' \
newvid.mp4

ffmpeg combining three images into a looped video

So I have three images that I need to combine into a 10 second video. I have searched around and in the end came up with the following command; it generates a video, but not what I expected.
ffmpeg.exe -loop 1 -framerate 1 -i 20180627124135055101050.JPG -i 20180627124135056101051.JPG -i 20180627124135057101052.JPG -i 20180627124135056101051.JPG -vf framestep=4,setpts=N/FRAME_RATE/TB -c:v mpeg4 -t 10 video.mp4
It currently generates a video 12 seconds long, but it also only shows the first image. I have played around with the framerate and framestep values, but all I can see changing is the length of the video; it never actually shows the images in the order I need. What I basically need as a result is a GIF of the 3 images in the order image 1, image 2, image 3, image 2, but as an mp4 10 seconds long.
Any assistance would be greatly appreciated
EDIT 1
So I have made some progress, I am now using the following command to get the correct video output, but now I need it to loop so the total video length can be 10 seconds
ffmpeg -loop 1 -framerate 25 -t 0.25 -i 20180627124135055101050.JPG -loop 1 -framerate 24 -t 0.25 -i 20180627124135056101051.JPG -loop 1 -framerate 24 -t 0.25 -i 20180627124135057101052.JPG -loop 1 -framerate 24 -t 0.25 -i 20180627124135056101051.JPG -filter_complex "[0][1][2][3]concat=n=4:v=1:a=0" video.mp4
Ok so I finally figured it out; the command I had to use to get this working (with a semicolon, not a comma, separating the two filter chains) was:
ffmpeg -loop 1 -framerate 30 -t 0.16 -i 20180627124135055101050.JPG -loop 1 -framerate 30 -t 0.16 -i 20180627124135056101051.JPG -loop 1 -framerate 30 -t 0.16 -i 20180627124135057101052.JPG -loop 1 -framerate 30 -t 0.16 -i 20180627124135056101051.JPG -filter_complex "[0][1][2][3]concat=n=4:v=1:a=0[v1];[v1]loop=20:32767:0" video.mp4
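A variant of that command, assuming the same filenames: let the loop filter repeat forever (loop=-1) and cut the output at exactly 10 seconds with -t, instead of computing a loop count by hand.

```shell
# loop=-1 repeats the buffered frames indefinitely; -t 10 stops the output
ffmpeg \
-loop 1 -framerate 30 -t 0.16 -i 20180627124135055101050.JPG \
-loop 1 -framerate 30 -t 0.16 -i 20180627124135056101051.JPG \
-loop 1 -framerate 30 -t 0.16 -i 20180627124135057101052.JPG \
-loop 1 -framerate 30 -t 0.16 -i 20180627124135056101051.JPG \
-filter_complex "[0][1][2][3]concat=n=4:v=1:a=0,loop=loop=-1:size=32767:start=0" \
-t 10 video.mp4
```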

FFmpeg: show an image for multiple seconds before a video without re-encoding

I've been looking all around for this. Problem is that most google searches end up with being about creating a video from solely PNG files.
I've found this command, which does the job:
ffmpeg -y -loop 1 -framerate 60 -t 5 -i firstimage.jpg -t 5 -f lavfi -i aevalsrc=0 -loop 1 -framerate 60 -t 5 -i secondimage.png -t 5 -f lavfi -i aevalsrc=0 -loop 1 -framerate 60 -t 5 -i thirdimage.png -t 5 -f lavfi -i aevalsrc=0 -i "shadowPlayVid.mp4" -filter_complex "[0:0][1:0][2:0][3:0][4:0][5:0][6:0][6:1] concat=n=4:v=1:a=1 [v] [a]" -map [v] -map [a] output.mp4 >> log_file1.txt 2>&1
But it seems to reencode the whole video, the input video is H.264 without CFR, but it seems to me that putting just some images before the video shouldn't take too long.
Because it ends up encoding the whole thing, this takes about 2 hours for a 30-minute video on a strong computer, while without re-encoding it should be much quicker. How do I make sure it doesn't re-encode, while still showing every image for 5 seconds first?
Generate your playervid.mp4 via
ffmpeg -y -loop 1 -framerate 60 -t 5 -i sample-out3.jpg -f lavfi -t 5 -i aevalsrc=0 -vf settb=1/60000 -video_track_timescale 60000 -c:v libx264 -pix_fmt yuv420p playervid.mp4
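The answer above only generates one of the image clips. To finish without re-encoding the main video, the usual approach (my sketch; the clip names below are placeholders, not from the original answer) is to build one such clip per image with the same codec, resolution, pixel format, frame rate and timebase as the main video, then join everything with the concat demuxer in stream-copy mode:

```shell
# list.txt names the clips in playback order (hypothetical filenames)
cat > list.txt <<'EOF'
file 'firstimage_clip.mp4'
file 'secondimage_clip.mp4'
file 'thirdimage_clip.mp4'
file 'shadowPlayVid.mp4'
EOF

# -c copy avoids re-encoding; this only works if all clips share
# identical codec parameters
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4
```

Only the short image clips get encoded; the 30-minute video is copied through, so the join takes seconds rather than hours.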
