Why does the mp4 from ffmpeg freeze during the last 3 seconds?

I'm trying to generate a perfectly looping mp4 from three inputs:
A background png image
An image sequence of transparent png images with the number of particles increasing
Another image sequence of transparent png images with the number of particles decreasing
I'm currently trying to achieve this with two commands (I have to use 'overlay' twice). The problem is that after the second command the video (test2.mp4) freezes for the last 3 seconds. Why does this happen? Are there any other commands I could try?
First command:
ffmpeg -framerate 30 \
-pattern_type glob -i 'images/increase/*.png' \
-framerate 30 \
-i screens/Background.png \
-i audio/50-White-Noise-10min.mp3 \
-filter_complex "[1:v][0:v] overlay" \
-preset slow -c:a copy -shortest -c:v libx264 -pix_fmt yuv420p test.mp4
Second command:
ffmpeg -framerate 30 \
-pattern_type glob -i 'images/decrease/*.png' \
-i test.mp4 \
-filter_complex "[1:v][0:v] overlay" \
-preset slow -c:a copy -shortest -c:v libx264 -pix_fmt yuv420p test2.mp4

The solution was to do what Rajib commented: chain the overlay filters and do it all in one command (the freeze was presumably overlay's default eof_action=repeat holding the last frame of the shorter input until -shortest finally cut the output):
ffmpeg -framerate 30 \
-i screens/Background.png \
-framerate 30 \
-pattern_type glob -i 'images/increase/*.png' \
-framerate 30 \
-pattern_type glob -i 'images/decrease/*.png' \
-i audio/50-White-Noise-10min.mp3 \
-filter_complex "[0][1]overlay[out];[out][2]overlay" \
-c:a copy -shortest -c:v libx264 -pix_fmt yuvj420p loop.mp4
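To verify the fix, you can have ffprobe count the decoded frames and report the duration (a quick sanity check, assuming ffprobe ships alongside your ffmpeg build):
ffprobe -v error -select_streams v:0 -count_frames \
-show_entries stream=nb_read_frames,duration \
-of default=noprint_wrappers=1 loop.mp4
At -framerate 30 the duration should be close to nb_read_frames / 30, and both should match the length of your image sequence; anything longer points to repeated frames at the tail.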

Related

How to optimize encoding and packaging videos using ffmpeg and shaka-packager

I'm trying to encode and package uploaded videos for an LMS website where video size may differ. How can I write an sh script that converts and packages a given video based on its size? (For example, if the given video's resolution is larger than 720p and smaller than 1080p, FFmpeg should convert it to two sizes [360p, 720p], then shaka-packager should package them.)
So far I have this script, assuming the input video resolution is 1080p (or 1080p <= size < 4K):
#!/bin/sh
pwd
URL="$1"
ID="$2"
FOLDER="$3"
if [ -z "$URL" ]; then
echo "Must input a file"
$SHELL
exit
fi
DIR="$FOLDER/$ID"
OUTDIR="$DIR/cmaf"
mkdir -p -v "$DIR"
mkdir -p -v "$OUTDIR"
GOP_SIZE=50
FPS=25
CRF=28
INPUT="$DIR/input"
wget -c -O "$INPUT" "$URL" &&
if [ ! -f "$INPUT" ]; then
echo "$INPUT does not exist"
$SHELL
exit
fi
ffmpeg -i "$INPUT" -y \
-threads 1 \
-c:v libx264 -crf $CRF -profile:v high -pix_fmt yuv420p \
-keyint_min $GOP_SIZE -g $GOP_SIZE -sc_threshold 0 \
-color_primaries 1 -color_trc 1 -colorspace 1 -movflags +faststart \
-c:a aac -b:a 128k -ar 44100 \
-r $FPS \
"$DIR/input.mp4" &&
ffmpeg -i "$DIR/input.mp4" -y \
-threads 1 \
-vn -acodec copy "$DIR/a.mp4" \
-vf scale=640:360 -an "$DIR/360p.mp4" \
-vf scale=1280:720 -an "$DIR/720p.mp4" \
-vf scale=1920:1080 -an "$DIR/1080p.mp4" &&
rm -R "$OUTDIR"
packager \
in="$DIR/a.mp4",stream=audio,output="$OUTDIR/a.mp4",drm_label=AUDIO \
in="$DIR/360p.mp4",stream=video,output="$OUTDIR/360p.mp4",drm_label=SD \
in="$DIR/720p.mp4",stream=video,output="$OUTDIR/720p.mp4",drm_label=HD \
in="$DIR/1080p.mp4",stream=video,output="$OUTDIR/1080p.mp4",drm_label=HD \
--enable_raw_key_encryption \
--keys label=AUDIO:key_id=f3c5e0761e6654b28f8049c778b23947:key=a4637a153a443df9eed0593043db7517,label=SD:key_id=abba277e8bcf552bbd2e86a434a9a5d7:key=69eaa807a6763af979e8d1940fb88397,label=HD:key_id=6d76f25cb17f5e76b8eaef6b7f582d87:key=cb541784c99737aef4fff74500c12ea7 \
--pssh 000000377073776800000000EDEF8BA979D64ACEA3C877DCD51D21ED00000071220F7465737420636F6E74656E74206967 \
--mpd_output "$OUTDIR/h264.mpd" \
--hls_master_playlist_output "$OUTDIR/h264_master.m3u8"
The above script first downloads a video from a given URL, then converts it to an appropriate format before resizing and packaging. I assumed that converting the video once before scaling would be more performant than converting and resizing it every time. I also assumed that resizing to all resolutions in one command would be much faster, but I think that is not how FFmpeg works. I'm stuck in the world of FFmpeg, not knowing how to write an sh (or bash) script that encodes and packages videos for online streaming in a better, cleaner, and more dynamic way. I think others have the same problem or use case, so any help, fix, or recommendation is appreciated.
For the sake of clarity, I stripped some arguments from your commands (yuv420p and -profile:v high are defaults, and you're not changing the frame rate):
ffmpeg -i <input> -y \
-c:v libx264 -crf 28 -g 50 \
-c:a aac -b:a 128k -ar 44100 \
-movflags +faststart \
<output> &&
ffmpeg -i <output> -y \
-vn -c:a copy "$DIR/a.mp4" \
-vf scale=640:360 -an "$DIR/360p.mp4" \
-vf scale=1280:720 -an "$DIR/720p.mp4" \
-vf scale=1920:1080 -an "$DIR/1080p.mp4"
The first run will decode your input and re-encode it using libx264 with quality-target 28 and a keyframe every 50 frames.
The second instance will decode it again, choosing an encoder based on the .mp4 extension (libx264 by default), and re-encode everything three times using the default values -g 250 -crf 23 (I'm not sure about -movflags +faststart).
So you are (1) overwriting your settings from the first run, (2) adding an extra decode step, and (3) losing some quality to the multiple lossy encodings.
What you want is to combine these into one invocation:
ffmpeg -i <input> -y \
-vn -c:a aac -b:a 128k -ar 44100 "$DIR/a.mp4" \
-c:v libx264 -crf 28 -g 50 -s 640x360 -movflags +faststart -an "$DIR/360p.mp4" \
-c:v libx264 -crf 28 -g 50 -s 1280x720 -movflags +faststart -an "$DIR/720p.mp4" \
-c:v libx264 -crf 28 -g 50 -s 1920x1080 -movflags +faststart -an "$DIR/1080p.mp4"
Additionally, I would stay away from special arguments unless you really know what and why you are choosing them.
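The question also asks how to pick the output sizes from the input resolution. One way (a sketch, not part of the answer above: it probes the height with ffprobe and only emits the rungs the source can fill; it assumes $DIR and $INPUT contain no spaces):
HEIGHT=$(ffprobe -v error -select_streams v:0 \
-show_entries stream=height -of csv=p=0 "$INPUT")
ARGS=""
[ "$HEIGHT" -ge 360 ] && ARGS="$ARGS -c:v libx264 -crf 28 -g 50 -s 640x360 -an $DIR/360p.mp4"
[ "$HEIGHT" -ge 720 ] && ARGS="$ARGS -c:v libx264 -crf 28 -g 50 -s 1280x720 -an $DIR/720p.mp4"
[ "$HEIGHT" -ge 1080 ] && ARGS="$ARGS -c:v libx264 -crf 28 -g 50 -s 1920x1080 -an $DIR/1080p.mp4"
ffmpeg -i "$INPUT" -y -vn -c:a aac -b:a 128k -ar 44100 "$DIR/a.mp4" $ARGS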
P.s.
This is a command that runs with 15 % CPU utilization on my laptop.
ffmpeg \
-hwaccel qsv -c:v h264_qsv -i 'rtsp://109.98.78.106' \
-an -c:v h264_qsv -global_quality 30 -vf "scale_qsv=h=360:w=-1" "/tmp/360p.mp4" \
-an -c:v h264_qsv -global_quality 30 -vf "scale_qsv=h=720:w=-1" "/tmp/720p.mp4" \
-an -c:v h264_qsv -global_quality 30 -vf "scale_qsv=h=1080:w=-1" "/tmp/1080p.mp4"
It might have some color and / or quality issues but this is a performance trade-off.

FFMPEG zoom-pan multiple images

I am trying to stitch multiple images with some zoom-pan happening on the images to create a video.
Command:-
ffmpeg -f lavfi -r 30 -t 10 -i \
color=#000000:1920x1080 \
-f lavfi \
-r 30 -t 10 \
-i aevalsrc=0 \
-i "image-1.png" \
-i "image-2.png" \
-y -filter_complex \
"[0:v]fifo[bg];\
[2:v]setpts=PTS-STARTPTS+0/TB,scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v2];\
[bg][v2]overlay=0:0:enable='between(t,0, 5)'[bg];\
[3:v]setpts=PTS-STARTPTS+5.07/TB,scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v3];\
[bg][v3]overlay=0:0:enable='between(t,5, 10)'[bg];\
[1:a]amix=inputs=1:duration=first:dropout_transition=0" \
-map "[bg]" -vcodec "libx264" -preset "veryfast" -crf "15" "output.mp4"
The output is not as expected: it only zooms on the first image; the second image is just static.
FFMPEG version - 4.1
Use
ffmpeg -f lavfi -i color=#000000:1920x1080:r=30:d=10 \
-f lavfi -t 10 -i anullsrc \
-i "image-1.png" \
-i "image-2.png" \
-filter_complex \
"[2:v]scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080'[v2];\
[0:v][v2]overlay=0:0:enable='between(t,0,5)'[bg];\
[3:v]scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080',setpts=PTS+5/TB[v3];\
[bg][v3]overlay=0:0:enable='between(t,5,10)'[bg]" \
-map "[bg]" -map 1:a -vcodec libx264 -preset veryfast -crf 15 -y "output.mp4"
For lavfi sources, it's best to set frame rate and duration where applicable within the filter.
Since you're not looping the images, -t has no effect. Since zoompan sets the fps of its output, you can skip the input rate setting. And since each input is a single image, setpts before zoompan has no relevance; it should be applied after the zoompan whose timestamps need to be shifted.
Since you've only one audio, no point sending it to amix - there's nothing to mix with! Just map it directly.

FFmpeg add a text to last image only

I managed to create a video from a set of non-sequential images and attached an audio track to it. I also added a "Copyright" text in the top right-hand corner so that it appears throughout the video. However, I would like the text to appear only on the last image. How should I change my code below to achieve this?
ffmpeg \
-thread_queue_size 512 -f image2 -pattern_type glob -framerate 1/3 \
-i '*.jpg' \
-i 'audio.mp3' \
-c:a aac -c:v libx264 \
-vf "scale=640:480,format=yuv420p,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5" \
-preset medium \
video.mp4
Isolate the last image from the glob and then concat it:
ffmpeg \
-pattern_type glob -framerate 1/3 -i '*.jpg' -framerate 1/3 -loop 1 -t 5 -i last/img.jpg -i audio.mp3 \
-filter_complex \
"[0:v]scale=640:480,setsar=1[v0]; \
[1:v]scale=640:480,setsar=1,drawtext=text='Copyright':fontcolor=white:box=1:boxcolor=black@0.5:boxborderw=5:x=w-tw-5:y=5[v1]; \
[v0][v1]concat=n=2:v=1:a=0,fps=25,format=yuv420p[v]" \
-map "[v]" -map 2:a -c:v libx264 -c:a aac -shortest -movflags +faststart video.mp4
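If your images sort by filename, one way to populate last/ is to move the final image out of the glob before running the command (a sketch that assumes simple filenames without spaces or newlines):
mkdir -p last
mv "$(ls *.jpg | tail -n 1)" last/img.jpg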

Combining multiple image files into a video while using filter_complex to apply a watermark

I'm trying to combine two ffmpeg operations into a single one.
Currently I have two sets of ffmpeg commands that first generate a video from existing images, then runs that video through ffmpeg again to apply a watermark.
I'd like to see if it's possible to combine these into a single operation.
# Create the source video
ffmpeg -y \
-framerate 1/1 \
-i layer-%d.png \
-r 30 -vcodec libx264 -preset ultrafast -crf 23 -pix_fmt yuv420p \
output.mp4
# Apply the watermark and render the final output
ffmpeg -y \
-i output.mp4 \
-i logo.png \
-filter_complex "[1:v][0:v]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
Use
ffmpeg -y \
-framerate 1/1 -i layer-%d.png \
-i logo.png \
-filter_complex "[0:v]fps=30[img];
[1:v][img]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
(The use of scale2ref doesn't make sense since you're scaling to a fixed size).
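If the logo should always end up 40x40, a plain scale does the same job (a hedged alternative, not the original poster's command):
ffmpeg -y \
-framerate 1/1 -i layer-%d.png \
-i logo.png \
-filter_complex "[0:v]fps=30[img];[1:v]scale=40:40[wm];[img][wm]overlay=80:main_h-200-80" \
final.mp4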

Add different filter parameters to multiple images

I have this command to generate a slideshow with zoompan from a list of images, but it applies the same zoompan to all pictures.
ffmpeg -r 1/5 -i img%03d.jpg -i 1.mp3 -c:a aac -c:v libx264 -r 25 -pix_fmt yuv420p -vf "zoompan=z='if(lte(zoom,1.0),1.2,max(1.001,zoom-0.0015))':d=100" out.mp4
How can I get it to have different zoompan parameters for each image?
Input each image individually and provide a separate zoompan per image. Then concatenate with the concat filter.
ffmpeg \
-i img001.jpg \
-i img002.jpg \
-i img003.jpg \
-i audio.mp3 \
-filter_complex \
"[0:v]zoompan[v0]; \
[1:v]zoompan[v1]; \
[2:v]zoompan[v2]; \
[v0][v1][v2]concat=n=3:v=1:a=0,format=yuv420p[v]" \
-map "[v]" -map 3:a -shortest out.mp4
You'll need to adapt this example to use whatever zoompan values you want.
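For example (values are illustrative only), giving the first image a zoom-in and the second a zoom-out would mean replacing the first two chains with something like:
[0:v]zoompan=z='min(zoom+0.0015,1.5)':d=125:s=1280x720[v0]; \
[1:v]zoompan=z='if(lte(zoom,1.0),1.5,max(1.001,zoom-0.0015))':d=125:s=1280x720[v1]; \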
