FFmpeg avoid_negative_ts makes video start not from keyframe

Let's say I want to cut part of an MP4 video and resize it from 1280x720 to 854x480.
My command looks like this:
ffmpeg -ss 45 -i source.mp4 -ss 10 -to 20 \
-acodec aac -ar 44100 -ac 2 -c:v libx264 \
-crf 26 -vf scale=854:480:force_original_aspect_ratio=decrease,pad=854:480:0:0,setsar=1/1,setdar=16/9 \
-video_track_timescale 29971 -pix_fmt yuv420p \
-map_metadata 0 -avoid_negative_ts 1 -y dest.mp4
The problem is that when I don't use the avoid_negative_ts option, the resulting video has timestamp issues (time bases etc.), so it can't later be processed by other libraries, for example Swift's AVFoundation.
But when I use this option, the video does not start with a keyframe.
Using ffprobe I see start_time=0.065997 or other non-zero values.
How can I use avoid_negative_ts and still get a video that starts with a keyframe?
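For reference, this is the kind of ffprobe check I'm using to read the start time (a minimal sketch; dest.mp4 is the output of the command above):
ffprobe -v error -show_entries format=start_time -of default=noprint_wrappers=1 dest.mp4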

ffmpeg youtube livestream stops after a while

My ffmpeg version:
ffmpeg -version
ffmpeg version 4.3.1-4ubuntu1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 10 (Ubuntu 10.2.0-9ubuntu2)
I run this command to stream to YouTube:
ffmpeg -y -threads 12 \
-loop 1 -framerate 30 -re \
-i ./1280x720.jpg \
-i ./audio.mp3 \
-video_size 1280x720 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -crf 23 -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-r 30 -g 60 \
-f flv rtmp://a.rtmp.youtube.com/live2/xxxx 2>&1 | tee _LOG
The stream runs fine for 45-53 minutes, then I get an error like this from ffmpeg:
[flv # 0x56077027cd80] Delay between the first packet and last packet in the muxing queue is 10034000 > 10000000: forcing output
Then YouTube starts saying no data is being received and that the stream will end, which it does.
This is the full log: http://0x0.st/-zUH.txt
Your MP3's duration is 00:49:57.42, so the stream breaks after it ends. Loop the audio with -stream_loop -1 and add -re for real-time reading of the input:
ffmpeg -y \
-loop 1 -framerate 30 -re -i ./1280x720.jpg \
-re -stream_loop -1 -i ./audio.mp3 \
-c:v libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -c:a aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-g 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxxx
Alternatively, remove -re -stream_loop -1 and add the output option -shortest if you want the stream to end when the audio ends.
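A minimal sketch of that variant (same inputs as above; the loop options are removed and -shortest is added as an output option):
ffmpeg -y \
-loop 1 -framerate 30 -re -i ./1280x720.jpg \
-i ./audio.mp3 \
-c:v libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -c:a aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-g 60 -shortest -f flv rtmp://a.rtmp.youtube.com/live2/xxxx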
Unrelated changes:
No need to set -threads; let ffmpeg choose automatically.
-video_size 1280x720 is an input option for certain demuxers and does nothing in your command, so it's removed. Your input is already 1280x720 anyway; otherwise, see Resizing videos with ffmpeg to fit a specific size.
-b:v and -crf are mutually exclusive, and in your case -b:v is being ignored. For streaming you probably want -b:v, so -crf is removed.
You already set the frame rate with -framerate 30, so -r 30 is not needed. Removed.
I recommend using the slowest -preset that still encodes fast enough.

FFmpeg split and concatenate issues

I am trying to use FFmpeg to split different clips, concatenate them, and then re-encode the concatenated stream. Here is the command line I would like to use, with 2 input clips as an example (I actually want to use more than 2, but 2 suffice to illustrate the problem):
./ffmpeg -y -noautorotate -ss 4.9 -i in0.ts -noautorotate -i in1.ts \
-threads 0 -map_chapters -1 -write_tmcd 0 \
-metadata location= -max_muxing_queue_size 2000 -f mp4 \
-movflags faststart -filter_complex "[0:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v0];[1:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v1];[v0][0:a:0][v1][1:a:0]concat=n=2:v=1:a=1[cat_v][cat_a]" \
-map "[cat_a]" -acodec aac -ac 2 -ar 44100 -b:a 160k -async 1 \
-sn -map "[cat_v]" -vcodec libx264 -profile:v baseline -level 4 -b:v \
5400k -preset medium -x264opts ref=3:keyint=90 \
-r 30000/1001 -vsync 1 -metadata:s:v rotate= -pix_fmt yuv420p outputfile01.mp4
But FFmpeg hangs, stuck at frame 0. in0.ts has its last keyframe at 4s. If I change -ss 4.9 to -ss X where X <= 4.0, there is no issue.
My FFmpeg version is 3.3. I am aware that this problem does not exist from FFmpeg 4.0.x onwards or in FFmpeg 3.2.x, but it exists in 3.3.x and 3.4.x. Can someone help me understand exactly what bug introduced in 3.3.x and 3.4.x causes this problem?
-ss before -i relies on seeking via the demuxer. For files with inter-coded video streams, the seek target will be a keyframe. The MPEG-TS demuxer's seek callback returns the first keyframe after the specified point.
BTW, I can reproduce the effect with the latest builds. Why do you say the behaviour doesn't occur in 4.0 or 3.2?
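As a quick check (a sketch assuming in0.ts as above), you can list the keyframe timestamps with ffprobe; keyframe packets carry a K in their flags:
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,flags -of csv in0.ts | grep ',K'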
To achieve the intended result, you can use the trim filters:
./ffmpeg -y -noautorotate -i in0.ts -noautorotate -i in1.ts \
-filter_complex "[0:v:0]yadif=deint=interlaced,trim=4.9,setpts=PTS-STARTPTS,scale=1280:720:flags=bicubic,setdar=1.7777778[v0];[1:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v1];[0:a:0]atrim=4.9,asetpts=PTS-STARTPTS[a0];[v0][a0][v1][1:a:0]concat=n=2:v=1:a=1[cat_v][cat_a]" \
-sn -map "[cat_a]" -async 1 -ac 2 -ar 44100 -c:a aac -b:a 160k \
-map "[cat_v]" -r 30000/1001 -vsync 1 -pix_fmt yuv420p -c:v libx264 \
-threads 0 -profile:v baseline -level:v 4 -b:v 5400k -preset medium \
-x264opts ref=3:keyint=90 -map_chapters -1 -metadata location= \
-metadata:s:v rotate= -max_muxing_queue_size 2000 -f mp4 \
-write_tmcd 0 -movflags faststart outputfile01.mp4

FFmpeg transparent PNG black outline issue

I'm encoding a video with a transparent PNG using ffmpeg. I noticed there's a slight black outline surrounding the image. Is there any way to remove it?
Output image:
Transparent PNG sample:
My ffmpeg command:
ffmpeg -hide_banner -y -ss 0.0 -t 8.5 -i C:\Users\Admin\Desktop\test_movies\6.mp4 -i C:\Users\Admin\Desktop\test_movies\text_and_emoji.png -filter_complex [0:v]setpts=PTS-STARTPTS,scale=640:640:force_original_aspect_ratio=decrease,pad=640:640:(ow-iw)/2:(oh-ih)/2:color=#18ffff[0v];[1:v]scale=556.24744:141.41884[1v];[0v][1v]overlay=(W-w)/2-(W/2-325.33328):(H-h)/2-(H/2-567.7075):enable='between(t,0.0,8.5)' -ac 2 -ar 44100 -vcodec libx264 -g 75 -r 20 -preset ultrafast -strict experimental C:\Users\Admin\Desktop\test_movies\test.mp4
Last edit 1:
I tried it without [1:v]scale=556.24744:141.41884[1v]; the output still has the slight outline.
Sample output:
Sample code:
ffmpeg -hide_banner -y -ss 0.0 -t 8.5 -i C:\Users\Admin\Desktop\test_movies\white.mp4 -i C:\Users\Admin\Desktop\test_movies\text_and_emoji.png -filter_complex [0:v]scale=640:640:force_original_aspect_ratio=decrease,pad=640:640:(ow-iw)/2:(oh-ih)/2:color=#18ffff[0v];[0v][1:v]overlay=(W-w)/2-(W/2-325.33328):(H-h)/2-(H/2-567.7075):enable='between(t,0.0,8.5)' -ac 2 -ar 44100 -vcodec libx264 -preset ultrafast -strict experimental C:\Users\Admin\Desktop\test_movies\test.mp4
Last edit 2:
I tried another run with alpha=premultiplied added, using the latest ffmpeg version. It removed the outline, but the picture quality dropped a lot, to the point of looking pixelated. Plus, there's an unexpected white layer behind the image.
Output video
Sample code:
C:\Users\Admin\Downloads\ffmpeg-20180102-57d0c24-win64-static\bin\ffmpeg -y -ss 0.0 -t 8.5 -i C:\Users\Admin\Desktop\test_movies\white.mp4 -i C:\Users\Admin\Desktop\test_movies\text_and_emoji.png -filter_complex [0:v]scale=640:640:force_original_aspect_ratio=decrease,pad=640:640:(ow-iw)/2:(oh-ih)/2:color=#00ffff[0v];[1:v]scale=480:120[1v];[0v][1v]overlay=(W-w)/2-(W/2-325.33328):(H-h)/2-(H/2-567.7075):alpha=premultiplied:enable='between(t,0.0,8.5)' -ac 2 -ar 44100 -vcodec libx264 C:\Users\Admin\Desktop\test_movies\test.mp4
Last edit 3:
As suggested by @Mulvya, I combined his code with alpha=premultiplied and it looks a lot better now, with only a very slight black outline (almost invisible).
Output video:
Sample code:
C:\Users\Admin\Downloads\ffmpeg-20180102-57d0c24-win64-static\bin\ffmpeg -y -ss 0.0 -t 8.5 -i C:\Users\Admin\Desktop\test_movies\white.mp4 -i C:\Users\Admin\Desktop\test_movies\text_and_emoji.png -filter_complex [0:v]setpts=PTS-STARTPTS,scale=640:640:force_original_aspect_ratio=decrease,pad=640:640:(ow-iw)/2:(oh-ih)/2:color=#18ffff[0v];[1:v]premultiply=inplace=1,scale=480:120[1v];[0v][1v]overlay=(W-w)/2-(W/2-325.33328):(H-h)/2-(H/2-567.7075):alpha=premultiplied:enable='between(t,0.0,8.5)':format=rgb,format=yuv420p -ac 2 -ar 44100 -vcodec libx264 C:\Users\Admin\Desktop\test_movies\test.mp4
This is due to a bug in the overlay filter, since fixed. Alter your filtergraph to this:
"[0:v]setpts=PTS-STARTPTS,scale=640:640:force_original_aspect_ratio=decrease,pad=640:640:(ow-iw)/2:(oh-ih)/2:color=#18ffff[0v];[1:v]premultiply=inplace=1,scale=556.24744:141.41884[1v];[0v][1v]overlay=(W-w)/2-(W/2-325.33328):(H-h)/2-(H/2-567.7075):enable='between(t,0.0,8.5)':format=rgb,format=yuv420p"
You'll need an FFmpeg version from after Dec 16, 2017 for this.
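Put together with the rest of your command, it would look like this (a sketch reusing the paths and options from your last edit):
C:\Users\Admin\Downloads\ffmpeg-20180102-57d0c24-win64-static\bin\ffmpeg -y -ss 0.0 -t 8.5 -i C:\Users\Admin\Desktop\test_movies\white.mp4 -i C:\Users\Admin\Desktop\test_movies\text_and_emoji.png -filter_complex "[0:v]setpts=PTS-STARTPTS,scale=640:640:force_original_aspect_ratio=decrease,pad=640:640:(ow-iw)/2:(oh-ih)/2:color=#18ffff[0v];[1:v]premultiply=inplace=1,scale=556.24744:141.41884[1v];[0v][1v]overlay=(W-w)/2-(W/2-325.33328):(H-h)/2-(H/2-567.7075):enable='between(t,0.0,8.5)':format=rgb,format=yuv420p" -ac 2 -ar 44100 -vcodec libx264 C:\Users\Admin\Desktop\test_movies\test.mp4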

ffmpeg - Convert MP4 to WebM, poor results

I am trying to encode a video to WebM for playback through an HTML5 video tag. I have these settings...
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:a 128k -b:v 1M -c:a libopus output.webm
The results aren't great; the video has lost a lot of its sharpness. Looking at the original file I can see the bitrate is 1694 kb/s.
Are there any settings I can add or change to improve the output? Would a two-pass encode improve things?
Try with
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 30 -b:v 0 -b:a 128k -c:a libopus output.webm
Adjust the CRF value until the quality/size tradeoff is acceptable. Lower values produce bigger but better-quality files.
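If you also need to cap the bitrate, libvpx-vp9 has a constrained-quality mode where a non-zero -b:v acts as a ceiling alongside -crf; the values below are just illustrative:
ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 30 -b:v 1500k -b:a 128k -c:a libopus output.webm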
Try running two passes:
ffmpeg -i file.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 1 -an -f webm -y /dev/null
ffmpeg -i file.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 2 -c:a libopus output.webm
From https://trac.ffmpeg.org/wiki/Encode/VP9
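Note that on Windows the first pass should write to NUL instead of /dev/null:
ffmpeg -i file.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 1 -an -f webm -y NUL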

FFMPEG image not updating

THE INPUT FILES
An overlay image that is being updated every 5 seconds by a Python script
A small MP4 file that will be looped by a concat input
An MP3 file as audio source
THE COMMAND (UPDATED)
This is the command I'm currently using to combine and stream the inputs.
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt
-r 1 -loop 1 -f image2 -i overlay.png
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100
-filter_complex "[1:v][2:v] overlay=0:0" -map 0:a -strict -2
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
Also tried using -framerate 1 instead of -r 1.
THE ISSUE
So the issue is that the image doesn't always update. Sometimes it updates every couple of seconds at the start but stops after 10-20 seconds, with no difference in the log output; other times it just doesn't update at all.
I can confirm, however, that the Python script is updating the image; FFmpeg is just not picking it up.
I read that setting the image's input format to image2 should allow it to update, so I'm not sure what's wrong or what I can do to improve it.
I'm working on the same task, and I think I've finally found the answer.
Because the streams differ from each other, we must reset their timestamps with setpts=PTS-STARTPTS so they all begin at the same zero timestamp. Also, try using image2pipe instead of image2.
This is your command with the timestamp reset:
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt
-r 1 -loop 1 -f image2pipe -i overlay.png
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100
-filter_complex "[1:v]setpts=PTS-STARTPTS[out_main]; [2:v]setpts=PTS-STARTPTS[out_overlay]; [out_main][out_overlay]overlay=0:0" -map 0:a -strict -2
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
P.S. I think there is no need for -r or -framerate anymore.
