ffmpeg can loop PNG but not audio

I'm using the following to stream an image to YouTube:
ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 -thread_queue_size 1080 \
-loop 1 -re -i ./image.png \
-i ./track.mp3 \
-pix_fmt yuv420p -c:v libx264 -qp:v 19 -profile:v high -rc:v cbr_ld_hq -level:v 4.2 -r:v 60 -g:v 120 -bf:v 3 -refs:v 16 -preset fast -f flv rtmp://a.rtmp.youtube.com/live2/xxx
And the looping for the image (to keep it streaming over) works, but not the sound.

Remember that FFmpeg input options apply per input. So -loop 1 is specified only for the -i ./image.png input, and -i ./track.mp3 has no input options at all. To loop the audio track, use the -stream_loop input option like this:
ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 -thread_queue_size 1080 \
-loop 1 -re -i ./image.png \
-stream_loop -1 -i ./track.mp3 \
...
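Spelled out in full, reusing the output options from the question, that would look like the command below. (Note: -rc:v cbr_ld_hq is an NVENC rate-control option and does not apply to libx264, so it is dropped here; -stream_loop -1 loops the input indefinitely, while a positive number loops it that many times.)
ffmpeg -threads:v 2 -threads:a 8 -filter_threads 2 -thread_queue_size 1080 \
-loop 1 -re -i ./image.png \
-stream_loop -1 -i ./track.mp3 \
-pix_fmt yuv420p -c:v libx264 -qp:v 19 -profile:v high -level:v 4.2 -r:v 60 -g:v 120 -bf:v 3 -refs:v 16 -preset fast -f flv rtmp://a.rtmp.youtube.com/live2/xxx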

Related

ffmpeg - Sync Audio with image position (Audio Slideshow)

How can I start the audio files at the same position as the pictures? (This is for an image slideshow with changing audio.)
ffmpeg -loop 1 -t 19 -i 1.jpg -loop 1 -t 19 -i 2.jpg -i 1.mp3 -i 2.mp3 \
-filter_complex "
[0:a]adelay=19s:all=1[1a];
[1:a]adelay=24s:all=1[2a];
[0:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[0p];
[1:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[1p];
[0p][1p]xfade=transition=fade:duration=1:offset=19[1x]" \
-map "[1x]" -c:v libx264 -c:a copy -t 39 out.mp4
OK, I found a solution for positioning audio files by the second alongside images.
Just use this structure,
with the adelay parameter for an audio offset:
ffmpeg -loop 1 -t 10 -i "1.jpg" -loop 1 -t 10 \
-i "2.jpg" -t 5 -ss 0 -i "audio1.mp3" \
-t 10 -ss 10 -i "audio2.mp3" \
-filter_complex "[2]adelay=1000:all=1[a1];
[3]adelay=2000:all=1[a2];
[0:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[0p];
[1:v]scale=1280:720,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1[1p];
[0p][1p]xfade=transition=fade:duration=1:offset=5.485[1x];
[a1][a2]amix=inputs=2[aout]" -map "[1x]" -map "[aout]" _out.mp4 -y 2>&1

Using ffmpeg output to HLS and Image Stills

I want to combine the output from an RTSP stream into both an HLS stream and several image stills. I can do each fine separately (obviously), but I'm having trouble combining them. Can I get a quick hand?
Here are my two commands (each works on its own):
Outputting HLS streams:
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v copy -b:v 64K -f flv rtmp://localhost/hls/stream_low \
-c:v copy -b:v 512K -f flv rtmp://localhost/hls/stream_high
Outputting image stills:
ffmpeg -hide_banner -i "$RTSP_URL" -y \
-vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
-vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
-vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
-vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg
Any help is appreciated (I also posted a bounty ^_^)
Thanks guys!
Simply
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v copy -b:v 64K -f flv rtmp://localhost/hls/stream_low \
-c:v copy -b:v 512K -f flv rtmp://localhost/hls/stream_high \
-vframes 1 -vf "scale=1920:-1" -q:v 10 out/screenshot_1920x1080.jpeg \
-vframes 1 -vf "scale=640:-1" -q:v 10 out/screenshot_640x360.jpeg \
-vframes 1 -vf "scale=384:-1" -q:v 10 out/screenshot_384x216.jpeg \
-vframes 1 -vf "scale=128:-1" -q:v 10 out/screenshot_128x72.jpeg
Note that your "HLS" streams are actually RTMP streams, as the output protocol shows. Also, with -c:v copy there is no video encoding, so -b:v has no effect.
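If you actually want ffmpeg itself to produce HLS, rather than pushing RTMP to a server that repackages it, you can use the hls muxer; a minimal sketch (the segment settings and output path here are illustrative, adjust to taste):
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" \
-c:v copy -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments out/stream.m3u8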

FFMPEG images to video + overlay video

I am trying to make a 15-second video where the background layer is a video made up of 2 images; the first command below creates a 15-second video from the 2 images.
I chose a small framerate so it renders the mp4 quickly. I then overlay a webm video (which has transparency) over the images. The final video keeps the framerate of 2, but I would rather keep the 24 fps of the webm video.
Is this possible? And is it also possible to turn the two commands below into one?
ffmpeg -loop 1 -framerate 2 -t 11 -i image1.png -loop 1 -framerate 2 -t 4 -i image2.png -filter_complex "[0][1]concat=n=2" backgroundvideo.mp4;
ffmpeg -i backgroundvideo.mp4 -c:v libvpx-vp9 -i overlayvideo.webm -filter_complex overlay newvid.mp4
You can use the fps filter to adjust your background's framerate, and do it all in one command:
ffmpeg \
-loop 1 -framerate 2 -t 11 -i image1.png \
-loop 1 -framerate 2 -t 4 -i image2.png \
-c:v libvpx-vp9 -i overlayvideo.webm \
-filter_complex '[0][1]concat,fps=24[bg];[2][bg]overlay' \
newvid.mp4

Adding splash screen using FFMPEG

Hi everyone!
I'm trying to add a splash screen that fades out after 2 seconds to the start of a video using FFmpeg.
I'm using the following command:
ffmpeg -loop 1 -framerate 2 -t 2 -i image.png \
-i video.mp4 \
-filter_complex "[0:v]fade=t=in:st=0:d=0.500000,fade=t=out:st=4.500000:d=0.500000,setsar=1; \
[0:0] [1:0] concat=n=2:v=1:a=0" \
-c:v libx264 -crf 23 output.mp4
but it generates a video whose duration is correct, yet it plays for just 2 seconds, exactly the splash screen's duration.
Since I don't have much experience with FFmpeg and got this command from the internet, I don't know where the problem is...
Use
ffmpeg -i video.mp4 -loop 1 -t 2 -i image.png \
-filter_complex \
"[1]fade=t=in:st=0:d=0.500000,fade=t=out:st=1.500000:d=0.500000,setsar=1[i]; \
[i][0]concat=n=2:v=1:a=0" \
-c:v libx264 -crf 23 output.mp4
The image should have the same resolution as the video. It will fade in for 0.5 seconds, remain for 1 second, then fade out for 0.5 seconds.
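If the image is not already the right size, you can scale and pad it inside the same filtergraph rather than editing the file beforehand; a sketch, assuming a 1920x1080 video:
ffmpeg -i video.mp4 -loop 1 -t 2 -i image.png \
-filter_complex \
"[1]scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2, \
fade=t=in:st=0:d=0.5,fade=t=out:st=1.5:d=0.5,setsar=1[i]; \
[i][0]concat=n=2:v=1:a=0" \
-c:v libx264 -crf 23 output.mp4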

How do I add 2 more pictures to this ffmpeg slideshow?

This is the command I use to make the slideshow with ffmpeg:
ffmpeg -y -i audio.wav -framerate 1/4 -t 60 -loop 1 -i first.png -framerate 1/4 -t 600 -loop 1 -i Test.png -framerate 1/4 -t 600 -loop 1 -i test-ceinture-running-flip-belt.png -framerate 1/4 -t 600 -loop 1 -i Wikimedia_Outreach_test_logo.png -filter_complex "[1:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v0]; [2:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v1]; [3:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v2]; [4:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v3]; [v0][v1][v2][v3]concat=n=4:v=1:a=0 [out]" -map "[out]" -map 0:0 -c:a libvo_aacenc -b:a 128k -vcodec mpeg4 -qscale:v 20 -keyint_min 100 -f mp4 -r 10 -pix_fmt yuv420p out_024.mp4
I would like to add 2 more pictures that lasts 600 seconds each.
Could you please help me?
Well, why not experiment by manipulating your current command? Everything you need is already there. Following your approach, you can achieve this as follows:
ffmpeg -y -i audio.wav -framerate 1/4 -t 60 -loop 1 -i first.png -framerate 1/4 -t 600 -loop 1 -i Test.png -framerate 1/4 -t 600 -loop 1 -i test-ceinture-running-flip-belt.png -framerate 1/4 -t 600 -loop 1 -i Wikimedia_Outreach_test_logo.png -framerate 1/4 -t 600 -loop 1 -i new_image_1.png -framerate 1/4 -t 600 -loop 1 -i new_image_2.png -filter_complex "
[1:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v0];
[2:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v1];
[3:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v2];
[4:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v3];
[5:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v4];
[6:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:0+(1280-iw*min(1280/iw\,720/ih))/2:0+(720-ih*min(1280/iw\,720/ih))/2 [v5];
[v0][v1][v2][v3][v4][v5]concat=n=6:v=1:a=0 [out]" -map "[out]" -map 0:0 -c:a libvo_aacenc -b:a 128k -vcodec mpeg4 -qscale:v 20 -keyint_min 100 -f mp4 -r 10 -pix_fmt yuv420p out_024.mp4
But the way you have done this is not efficient; you may want to read the relevant documentation first. If you rename the image files that share the same settings (-framerate 1/4 -t 600) to a numbered sequence like img%03d.png, you can feed them as a single input, which shortens the command and improves performance.
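To illustrate, if the three 600-second images are renamed img001.png, img002.png and img003.png (filenames here are hypothetical, and the images must share the same dimensions for the image-sequence demuxer), they can be read as one input showing each frame for 600 seconds, while the 60-second first.png keeps its own looped input:
ffmpeg -y -i audio.wav \
-framerate 1/4 -t 60 -loop 1 -i first.png \
-framerate 1/600 -i img%03d.png \
-filter_complex "[1:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:(1280-iw*min(1280/iw\,720/ih))/2:(720-ih*min(1280/iw\,720/ih))/2 [v0]; [2:v]scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:(1280-iw*min(1280/iw\,720/ih))/2:(720-ih*min(1280/iw\,720/ih))/2 [v1]; [v0][v1]concat=n=2:v=1:a=0 [out]" \
-map "[out]" -map 0:0 -c:a libvo_aacenc -b:a 128k -vcodec mpeg4 -qscale:v 20 -keyint_min 100 -f mp4 -r 10 -pix_fmt yuv420p out_024.mp4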
Hope this helps!
