How do I add 2 more pictures to this ffmpeg slideshow?

This is the command I use to make the slideshow with ffmpeg:
ffmpeg -y -i audio.wav -framerate 1/4 -t 60 -loop 1 -i first.png -framerate 1/4 -t 600 -loop 1 -i Test.png -framerate 1/4 -t 600 -loop 1 -i test-ceinture-running-flip-belt.png -framerate 1/4 -t 600 -loop 1 -i Wikimedia_Outreach_test_logo.png -filter_complex "[1:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v0]; [2:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v1]; [3:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v2]; [4:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v3]; [v0][v1][v2][v3]concat=n=4:v=1:a=0 [out]" -map "[out]" -map 0:0 -c:a libvo_aacenc -b:a 128k -vcodec mpeg4 -qscale:v 20 -keyint_min 100 -f mp4 -r 10 -pix_fmt yuv420p out_024.mp4
I would like to add 2 more pictures that last 600 seconds each.
Could you please help me?

Well, why don't you experiment with your current command? Everything you need is already there. Following your approach, you can achieve this as follows.
ffmpeg -y -i audio.wav -framerate 1/4 -t 60 -loop 1 -i first.png -framerate 1/4 -t 600 -loop 1 -i Test.png -framerate 1/4 -t 600 -loop 1 -i test-ceinture-running-flip-belt.png -framerate 1/4 -t 600 -loop 1 -i Wikimedia_Outreach_test_logo.png -framerate 1/4 -t 600 -loop 1 -i new_image_1.png -framerate 1/4 -t 600 -loop 1 -i new_image_2.png -filter_complex "
[1:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v0];
[2:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v1];
[3:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v2];
[4:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v3];
[5:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v4];
[6:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v5];
[v0][v1][v2][v3][v4][v5]concat=n=6:v=1:a=0 [out]" -map "[out]" -map 0:0 -c:a libvo_aacenc -b:a 128k -vcodec mpeg4 -qscale:v 20 -keyint_min 100 -f mp4 -r 10 -pix_fmt yuv420p out_024.mp4
But the way you have done this is not efficient. You may want to read the relevant documentation first. If you rename the image files that share common settings (-framerate 1/4 -t 600) to a numbered sequence such as img%03d.png, you can pass them as a single input, which shortens the command and also helps performance at execution time.
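As a sketch of that suggestion (filenames img001.png through img005.png are assumed, not from the question; the 60-second first.png would still need its own input, since one -framerate applies to the whole sequence). The built-in aac encoder is used here because libvo_aacenc has been removed from current FFmpeg builds:

```shell
# Sketch, not the original command: each sequence frame is shown for
# 600 s (-framerate 1/600); scaling/padding is applied once to the
# whole sequence. In pad, iw/ih refer to the already-scaled frame,
# so the offsets center it.
ffmpeg -y -i audio.wav -framerate 1/600 -i img%03d.png \
  -filter_complex "[1:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:(1280-iw)/2:(720-ih)/2[out]" \
  -map "[out]" -map 0:a -c:a aac -b:a 128k -c:v mpeg4 -qscale:v 20 \
  -r 10 -pix_fmt yuv420p -shortest out.mp4
```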
Hope this helps!

Related

ffmpeg - Sync Audio with image position (Audio Slideshow)

How can I start the audio files at the same position as the pictures? (This is for an image slideshow with changing audio.)
ffmpeg -loop 1 -t 19 -i 1.jpg -loop 1 -t 19 -i 2.jpg -i 1.mp3 -i 2.mp3
-filter_complex "
[0:a]adelay=19s:all=1[1a];
[1:a]adelay=24s:all=1[2a];
[0:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[0p];
[1:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[1p];
[0p][1p]xfade=transition=fade:duration=1:offset=19[1x]" \
-map [1x] -c:v libx264 -c:a copy -t 39 out.mp4
OK, I found a solution for positioning audio files by the second alongside images: just use this structure, with the adelay parameter for an audio offset.
ffmpeg -loop 1 -t 10 -i "1.jpg" -loop 1 -t 10
-i "2.jpg" -t 5 -ss 0 -i "audio1.mp3"
-t 10 -ss 10 -i "audio2.mp3"
-filter_complex "[2]adelay=1000:all=1[a1];
[3]adelay=2000:all=1[a2];
[0:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[0p];
[1:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[1p];
[0p][1p]xfade=transition=fade:duration=1:offset=5.485[1x];
[a1][a2]amix=inputs=2[aout]" -map [1x] -map [aout] _out.mp4 -y 2>&1

FFMpeg - Freeze First Frame for X seconds

I need to pause/freeze the first frame of the video for 2 seconds before proceeding to the scroll effect. Here's what I have:
ffmpeg -y -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -loop 1 -i "temp.jpg" -i "floating.png" -filter_complex "[1:v]fps=fps=30,crop=1280:720:0:'t*(ih-oh)/120',overlay,scale=1280x720,drawtext=fontfile='font.ttf':text='text here':x=20:y=675:fontsize=60:fontcolor=white:shadowcolor=black:shadowx=2:shadowy=2,drawtext=fontfile='font.ttf':text='more text':x=w-tw-20:y=670:fontsize=70:fontcolor=white:shadowcolor=black:shadowx=2:shadowy=2[out]" -t 10 -map "[out]" -map "0:a" -shortest -c:v h264_qsv -c:a aac -ac 2 -ar 44100 -vb 30M -r 30 "video.mp4"
Any ideas?
I figured it out using:
loop=60:1:0,setpts=N/FRAME_RATE/TB
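Put in context, a minimal sketch (filenames are placeholders, not from the question): the loop filter clones the first frame 60 times, which at 30 fps freezes it for 2 seconds, and setpts rebuilds the timestamps so the cloned frames actually occupy time:

```shell
# Sketch with placeholder filenames: freeze the first frame for 2 s at 30 fps.
# loop=60:1:0 = loop 60 times over a section of size 1 starting at frame 0;
# setpts=N/FRAME_RATE/TB regenerates presentation timestamps.
ffmpeg -y -i input.mp4 -vf "loop=60:1:0,setpts=N/FRAME_RATE/TB" -r 30 frozen.mp4
```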

FFmpeg make video from figures and speed up a chosen part

Make a video from a series of 100 figures
ffmpeg -framerate 10 -i input_figure%01d.png out.mp4
How can I only make figure numbers from [0-49] with a slower speed like -framerate 5?
My try is
ffmpeg -start_number 1 -framerate 5 -i input_figure%01d.png -vframes 49 \
-start_number 50 -framerate 10 -i input_figure%01d.png \
out.mp4
It doesn't work.
The naive method is to create the video in parts, then concatenate them together:
ffmpeg -framerate 5 -i input_fig%01d.png -vframes 50 part_1.mp4
ffmpeg -start_number 50 -framerate 10 -i input_fig%01d.png part_2.mp4
ffmpeg -f concat -safe 0 \
-i <(for f in ./part_*.mp4; do echo "file '$PWD/$f'"; done) \
-c copy out.mp4
rm part_*.mp4
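A note on that command: the <(...) process substitution is a bash/zsh feature. If the script runs under plain sh, an explicit list file does the same job; a sketch:

```shell
# Same concat-demuxer approach with an explicit list file instead of
# process substitution, for shells that lack <(...).
for f in ./part_*.mp4; do echo "file '$PWD/$f'"; done > list.txt
ffmpeg -y -f concat -safe 0 -i list.txt -c copy out.mp4
```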

FFMPEG images to video with reverse sequence with other filters

Similar to this ffmpeg - convert image sequence to video with reversed order
But I was wondering if I can create a video loop by specifying the image range and have the reverse order appended in one command.
Ideally I'd like to combine it with this Make an Alpha Mask video from PNG files
What I am doing now is generating the reverse using https://stackoverflow.com/a/43301451/242042 and combining the video files together.
However, I am thinking it would be similar to Concat a video with itself, but in reverse, using ffmpeg
My current attempt assumes 60 images, which makes -vframes twice that (120):
ffmpeg -y -framerate 20 -f image2 -i \
running_gear/%04d.png -start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
-filter_complex alphaextract[a] \
-map 0:v -b:v 5M -crf 20 running_gear.webm \
-map [a] -b:v 5M -crf 20 running_gear-alpha.webm
Without the alpha masking I can get it working using
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [v]" \
-map "[v]" -b:v 5M -crf 20 running_gear.webm
With just the alpha masking I can do
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]alphaextract[a]" \
-map [a] -b:v 5M -crf 20 alpha.webm
So I am trying to do it so the alpha mask is done at the same time.
Although my ultimate ideal would be to take the images, reverse it get an alpha mask and put it side-by-side so it can be used in Ren'py
Got it after some trial and error. It's not quite my ultimate goal, but it works.
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]split[v][av];[av]alphaextract[a]" \
-map [v] -b:v 5M -crf 20 running_gear.webm \
-map [a] -b:v 5M -crf 20 running_gear-alpha.webm
After checking some of the other filters (having learned where to look from concat), I found hstack. The command that puts the video and its alpha mask side by side, which works better with Ren'Py, is:
ffmpeg -y -framerate 20 -f image2 -i running_gear/%04d.png \
-start_number 0 -vframes 120 \
-filter_complex "[0:v]reverse,fifo[r];[0:v][r] concat=n=2:v=1 [vc];[vc]split[v][av];[av]alphaextract[a];[v][a]hstack[m]" \
-map [m] -b:v 5M -crf 20 running_gear.webm

Using ffmpeg output to HLS and Image Stills

I want to combine the output from an RTSP stream into both an HLS stream and several image stills. I can do each fine separately (obviously), but I'm having trouble combining them. Can I get a quick hand?
Here are my outputs (that works):
Outputting HLS streams:
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v copy -b:v 64K -f flv rtmp://localhost/hls/stream_low \
-c:v copy -b:v 512K -f flv rtmp://localhost/hls/stream_high
Outputting image stills:
ffmpeg -hide_banner -i "$RTSP_URL" -y \
-vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
-vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
-vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
-vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg
Any help is appreciated (I also posted a bounty ^_^)
Thanks guys!
Simply
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v copy -b:v 64K -f flv rtmp://localhost/hls/stream_low \
-c:v copy -b:v 512K -f flv rtmp://localhost/hls/stream_high \
-vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
-vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
-vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
-vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg
Note that your "HLS" streams are actually RTMP streams, as the output protocol shows. Also, with -c:v copy there is no video encoding, so -b:v has no effect.
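If distinct bitrates per output are actually wanted, the video must be re-encoded instead of copied. A sketch, assuming a libx264-enabled build ($RTSP_URL is a placeholder, as in the question):

```shell
# Sketch: re-encode each output so the per-output -b:v settings take effect
# (with -c:v copy the bitrate options are silently ignored).
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
  -c:v libx264 -b:v 64K -pix_fmt yuv420p -f flv rtmp://localhost/hls/stream_low \
  -c:v libx264 -b:v 512K -pix_fmt yuv420p -f flv rtmp://localhost/hls/stream_high
```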
