FFmpeg split and concatenate issues

I am trying to use FFmpeg to split different clips, concatenate them, and then re-encode the concatenated stream. Here is the command line I would like to use, with 2 input clips as an example (I actually want to use more than 2, but 2 suffice to illustrate the problem):
./ffmpeg -y -noautorotate -ss 4.9 -i in0.ts -noautorotate -i in1.ts \
-threads 0 -map_chapters -1 -write_tmcd 0 \
-metadata location= -max_muxing_queue_size 2000 -f mp4 \
-movflags faststart \
-filter_complex "[0:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v0];[1:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v1];[v0][0:a:0][v1][1:a:0]concat=n=2:v=1:a=1[cat_v][cat_a]" \
-map "[cat_a]" -acodec aac -ac 2 -ar 44100 -b:a 160k -async 1 \
-sn -map "[cat_v]" -vcodec libx264 -profile:v baseline -level 4 \
-b:v 5400k -preset medium -x264opts ref=3:keyint=90 \
-r 30000/1001 -vsync 1 -metadata:s:v rotate= -pix_fmt yuv420p outputfile01.mp4
But FFmpeg hangs, stuck at frame 0. in0.ts has its last keyframe at 4s; if I change the -ss 4.9 to -ss X where X <= 4.0, there is no issue.
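The demuxer-level keyframe timestamps of in0.ts can be listed with something along these lines (a sketch; keyframe packets carry a K in their flags):
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,flags -of csv in0.ts | grep ",K"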
My FFmpeg version is 3.3. I am aware that this problem does not exist in FFmpeg 4.0.x onwards or in FFmpeg 3.2.x, but it exists in 3.3.x and 3.4.x. Can someone help me understand exactly what bug was introduced in 3.3.x and 3.4.x that causes this problem?

-ss before -i relies on seeking using the demuxer. For files with inter-coded video streams, the seek target will be a keyframe. The seek callback in the MPEG-TS demuxer returns the first keyframe after the specified point.
BTW, I can reproduce the effect with the latest builds. Why do you say the behaviour doesn't occur in 4.0 or 3.2?
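For contrast, output seeking (placing -ss after the -i) decodes from the beginning and discards frames before the requested point, so it is frame-accurate regardless of keyframe placement. A minimal sketch on the first clip alone, which does not combine directly with the concat filtergraph, hence the filter-based approach below:
ffmpeg -i in0.ts -ss 4.9 -c:v libx264 -c:a aac cut0.mp4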
To achieve the intended result, you can use the trim filters, which drop the unwanted leading frames after decoding instead of relying on the demuxer seek:
./ffmpeg -y -noautorotate -i in0.ts -noautorotate -i in1.ts \
-filter_complex "[0:v:0]yadif=deint=interlaced,trim=4.9,setpts=PTS-STARTPTS,scale=1280:720:flags=bicubic,setdar=1.7777778[v0];[1:v:0]yadif=deint=interlaced,scale=1280:720:flags=bicubic,setdar=1.7777778[v1];[0:a:0]atrim=4.9,asetpts=PTS-STARTPTS[a0];[v0][a0][v1][1:a:0]concat=n=2:v=1:a=1[cat_v][cat_a]" \
-sn -map "[cat_a]" -async 1 -ac 2 -ar 44100 -c:a aac -b:a 160k \
-map "[cat_v]" -r 30000/1001 -vsync 1 -pix_fmt yuv420p -c:v libx264 -threads 0 \
-profile:v baseline -level:v 4 -b:v 5400k -preset medium -x264opts ref=3:keyint=90 \
-map_chapters -1 -metadata location= -metadata:s:v rotate= \
-max_muxing_queue_size 2000 -f mp4 -write_tmcd 0 -movflags faststart outputfile01.mp4


FFmpeg - How can I create HLS streams in multiple languages and multiple qualities?

Preface
I'm working on converting videos from 4K to multiple qualities with multiple languages, but I'm having issues with overlaying the multiple languages: they sometimes lose quality and sometimes go out of sync (this is less of a problem for the German audio, as it is a voice-over anyhow).
As a team we are complete noobs in terms of video/audio + HLS. I'm a front-end developer with no experience in this area, so apologies if my question is poorly phrased.
Videos
I have the video in a 4K format and have removed the original sound, as I have English and German audio files that need to be overlaid. I am then taking these files and muxing them together into a .ts file like this:
$ ffmpeg -i ep03-ns-4k.mp4 -i nkit-ep3-de-output.m4a -i nkit-ep3-en-output.m4a \
> -threads 0 -muxdelay 0 -y \
> -map 0:v -map 1 -map 2 -movflags +faststart -refs 1 \
> -vcodec libx264 -acodec aac -profile:v baseline -level 30 -ar 44100 -ab 64k -f mpegts out.ts
This outputs a 4k out.ts video, with both audio tracks playing.
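To double-check the resulting track layout of out.ts, a quick ffprobe listing along these lines shows each stream's index, codec and type:
ffprobe -v error -show_entries stream=index,codec_name,codec_type -of compact out.ts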
The hard part
This is where I'm finding it tricky: I now need to convert this single file into multiple quality levels (480, 720, 1080, 1920), which I attempt with the following command:
ffmpeg -hide_banner -y -i out.ts \
-crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -ar 48000 \
-map 0:v:0 -map 0:v:0 -map 0:v:0 -map 0:v:0 \
-c:v:0 h264 -profile:v:0 main -filter:v:0 "scale=w=848:h=480:force_original_aspect_ratio=decrease" -b:v:0 1400k -maxrate:v:0 1498k -bufsize:v:0 2100k \
-c:v:1 h264 -profile:v:1 main -filter:v:1 "scale=w=1280:h=720:force_original_aspect_ratio=decrease" -b:v:1 2800k -maxrate:v:1 2996k -bufsize:v:1 4200k \
-c:v:2 h264 -profile:v:2 main -filter:v:2 "scale=w=1920:h=1080:force_original_aspect_ratio=decrease" -b:v:2 5600k -maxrate:v:2 5992k -bufsize:v:2 8400k \
-c:v:3 h264 -profile:v:3 main -filter:v:3 "scale=w=3840:h=1920:force_original_aspect_ratio=decrease" -b:v:3 11200k -maxrate:v:3 11984k -bufsize:v:3 16800k \
-var_stream_map "v:0 v:1 v:2 v:3" \
-master_pl_name master.m3u8 \
-f hls -hls_time 4 -hls_playlist_type vod -hls_list_size 0 \
-hls_segment_filename "%v/episode-%03d.ts" "%v/episode.m3u8"
This creates the required qualities, but I'm now at a loss as to how this might work with the audio.
Audio
For the audio I run this command:
ffmpeg -i out.ts -threads 0 -muxdelay 0 -y -map 0:a:0 -codec copy \
-f segment -segment_time 4 -segment_list_size 0 \
-segment_list audio-de/audio-de.m3u8 -segment_format mpegts audio-de/audio-de_%d.aac
ffmpeg -i out.ts -threads 0 -muxdelay 0 -y -map 0:a:1 -codec copy \
-f segment -segment_time 4 -segment_list_size 0 \
-segment_list audio-en/audio-en.m3u8 -segment_format mpegts audio-en/audio-en_%d.aac
This creates the required audio segments.
The question
I realise this is quite an ask, but is there anything wrong with our inputs? Is there a way this could be done in a more streamlined way?
Any answers are greatly appreciated.
Let's say you have:
VideoA
AudioB -> Language 1
AudioC -> Language 2
AudioD -> Language 3
Although it can be done all together, it is better to use different commands for each language instance.
Note that the following are schematics only: some values and parameters will need to be filled in by you. However, this provides a scheme for how to connect the entities. Also, I have simply set the size and NOT used a scale filter. You can use a scale filter instead; filters would go in place of the size parameter (-s 1280x720 etc.).
ffmpeg -i VideoA -i AudioB -map 0:v -map 1:a -s 1280x720 -acodec aac -b:a 128k \
-vcodec libx264 -pix_fmt yuv420p [your other parameters go here] -movflags +faststart \
OutputAB_720p.mp4 -map 0:v -map 1:a -s 1920x1080 -acodec aac -b:a 128k -vcodec \
libx264 -pix_fmt yuv420p [your other parameters go here] -movflags +faststart \
OutputAB_1080p.mp4
The above shows a scheme for 2 resolutions, 720p and 1080p, merging VideoA with AudioB. To do the same scheme for AudioC you would repeat:
ffmpeg -i VideoA -i AudioC -map 0:v -map 1:a -s 1280x720 -acodec aac -b:a 128k \
-vcodec libx264 -pix_fmt yuv420p [your other parameters go here] -movflags +faststart \
OutputAC_720p.mp4 -map 0:v -map 1:a -s 1920x1080 -acodec aac -b:a 128k -vcodec \
libx264 -pix_fmt yuv420p [your other parameters go here] -movflags +faststart \
OutputAC_1080p.mp4
You could put all the inputs together:
ffmpeg -i VideoA -i AudioB -i AudioC -i AudioD
and accordingly map each for every language:
-map 0:v -map 1:a
-map 0:v -map 2:a
-map 0:v -map 3:a
etc.
But I feel the long commands that would result make it difficult to read, maintain and correct them.
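For completeness: recent ffmpeg versions can also emit the audio renditions from the hls muxer itself, by tying audio and video variants together with agroup in -var_stream_map. A minimal, untested sketch, assuming the two audio tracks of out.ts from the question (check the HLS muxer documentation for your version):
ffmpeg -hide_banner -y -i out.ts \
-map 0:v:0 -map 0:v:0 -map 0:a:0 -map 0:a:1 \
-filter:v:0 "scale=w=1280:h=720:force_original_aspect_ratio=decrease" -c:v:0 libx264 -b:v:0 2800k \
-filter:v:1 "scale=w=1920:h=1080:force_original_aspect_ratio=decrease" -c:v:1 libx264 -b:v:1 5600k \
-c:a copy \
-var_stream_map "v:0,agroup:aud v:1,agroup:aud a:0,agroup:aud,language:de a:1,agroup:aud,language:en,default:yes" \
-master_pl_name master.m3u8 \
-f hls -hls_time 4 -hls_playlist_type vod -hls_list_size 0 \
-hls_segment_filename "%v/episode-%03d.ts" "%v/episode.m3u8"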

ffmpeg youtube livestream stops after a while

I'll update this question
ffmpeg -version
ffmpeg version 4.3.1-4ubuntu1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 10 (Ubuntu 10.2.0-9ubuntu2)
I run this command to use ffmpeg to stream to YouTube:
ffmpeg -y -threads 12 \
-loop 1 -framerate 30 -re \
-i ./1280x720.jpg \
-i ./audio.mp3 \
-video_size 1280x720 \
-vcodec libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -crf 23 -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -acodec aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-r 30 -g 60 \
-f flv rtmp://a.rtmp.youtube.com/live2/xxxx 2>&1 | tee _LOG
The stream is excellent for 45-53 minutes, then I get an error like this from ffmpeg:
[flv @ 0x56077027cd80] Delay between the first packet and last packet in the muxing queue is 10034000 > 10000000: forcing output
then YouTube starts to say that no data is being received and that the stream will end, which it does.
This is the full log: http://0x0.st/-zUH.txt
Your MP3 duration is 00:49:57.42 so the stream messes up after it ends. Loop the audio with -stream_loop -1 and add -re for real-time reading of the input:
ffmpeg -y \
-loop 1 -framerate 30 -re -i ./1280x720.jpg \
-re -stream_loop -1 -i ./audio.mp3 \
-c:v libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -c:a aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-g 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxxx
Alternatively, remove -re -stream_loop -1 and add the output option -shortest if you want the stream to end when the audio ends.
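That variant might look like this (a sketch of the command above with the loop flags dropped and -shortest added, so the stream ends when the MP3 does):
ffmpeg -y \
-loop 1 -framerate 30 -re -i ./1280x720.jpg \
-i ./audio.mp3 \
-c:v libx264 -pix_fmt yuv420p \
-b:v 4500k -maxrate 5500k -bufsize 22000k \
-preset ultrafast -tune stillimage \
-b:a 128k -ar 44100 -ac 2 -c:a aac \
-filter_complex "dynaudnorm=f=150:g=15" \
-g 60 -shortest -f flv rtmp://a.rtmp.youtube.com/live2/xxxx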
Unrelated changes:
No need to set -threads. Let ffmpeg auto choose.
-video_size 1280x720 is an input option for certain demuxers and does nothing in your command. Removed. Your input is already 1280x720 anyway: otherwise, see Resizing videos with ffmpeg to fit a specific size.
-b:v and -crf are mutually exclusive. In your case -b:v is being ignored. For streaming you probably want to use -b:v. Removed -crf.
You already set the frame rate with -framerate 30 so -r 30 is not needed. Removed.
Recommend using the slowest -preset that still encodes fast enough.

FFmpeg image not updating

THE INPUT FILES
An overlay image that is being updated every 5 seconds by a Python script
A small MP4 file that will be looped by a concat input
An MP3 file as audio source
THE COMMAND (UPDATED)
This is the command I'm currently using to combine and stream the inputs.
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt \
-r 1 -loop 1 -f image2 -i overlay.png \
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p \
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100 \
-filter_complex "[1:v][2:v] overlay=0:0" -map 0:a -strict -2 \
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
Also tried using -framerate 1 instead of -r 1.
THE ISSUE
So the issue is that the image doesn't always update. Sometimes it does update every couple of seconds at the start, but it stops updating after 10-20 seconds without any difference in the log output, and sometimes it just doesn't update at all.
I can however confirm that the image is being updated by the Python script but FFmpeg is just not picking this up.
I read that setting the input format of the image to image2 should allow it to update, so I am not sure what is wrong or what I can do to improve it.
I'm working on the same task and finally, I think, I found the answer.
Because the streams differ from each other, we must reset their timestamps with setpts=PTS-STARTPTS so that they all begin at the same zero timestamp. Also, try using image2pipe instead of image2.
This is your code with the timestamp reset:
ffmpeg -re -i music.mp3 -f concat -i videoincludes.txt \
-r 1 -loop 1 -f image2pipe -i overlay.png \
-c:v libx264 -c:a aac -shortest -crf 23 -pix_fmt yuv420p \
-maxrate 2500k -bufsize 2500k -preset ultrafast -r 30 -g 60 -b:v 2000k -b:a 192k -ar 44100 \
-filter_complex "[1:v]setpts=PTS-STARTPTS[out_main]; [2:v]setpts=PTS-STARTPTS[out_overlay]; [out_main][out_overlay]overlay=0:0" -map 0:a -strict -2 \
-f flv rtmp://a.rtmp.youtube.com/live2/{key}
P.S. And I think there is no need for -r or -framerate anymore.
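One more thing worth checking, on the writer's side rather than ffmpeg's: if the Python script rewrites overlay.png in place, ffmpeg can occasionally read a half-written file. Writing to a temporary name and renaming over the target avoids this, since a rename on the same filesystem is atomic. A sketch, where render_overlay stands in for whatever actually produces the image:
render_overlay > overlay.png.tmp   # hypothetical command that writes the new PNG
mv overlay.png.tmp overlay.png     # atomic replace; ffmpeg never sees a partial file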

FFmpeg First 2 Seconds of Video Not Showing

This code works fine for some audio files (it makes a slideshow of JPG pictures with a PNG watermark and MP3 audio, while maintaining aspect ratio), but for this audio file the pictures do not show for the first two seconds or so of the video:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" \
-filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" \
-map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p \
-c:a copy -map_metadata -1 "media/video.mkv" -report
I tried converting the audio into different formats of MP3, tried changing bitrates, changed audio to stereo, and even tried converting it to a WAV. None of these things worked.
Here are the report results for when I run this command.
If it makes a difference, I'm using Ubuntu 14.04 and FFmpeg version N-77455-g4707497 (the latest version at the time of writing).
This command should work, but I consider this bizarre behaviour, as FFmpeg should be automatically padding frames as per the output spec:
ffmpeg -y -framerate 1/12 -i "media/%03d.jpg" -i "media/audio.mp3" -loop 1 -i "media/watermark.png" \
-filter_complex "[0:v]scale=iw*min(3840/iw\,2160/ih):ih*min(3840/iw\,2160/ih), pad=3840:2160:(3840-iw)/2:(2160-ih)/2,fps=24[ss]; [ss][2:v] overlay=main_w-overlay_w-10:main_h-overlay_h-10:shortest=1[out]" \
-map "[out]" -map 1:a -c:v libx264 -r 24 -preset veryfast -tune stillimage -pix_fmt yuv420p \
-c:a copy -map_metadata -1 "media/video.mkv"
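Presumably the explicit fps=24 materialises the duplicated frames inside the filtergraph, before overlay with shortest=1 is evaluated, instead of leaving the frame padding to the output-side -r 24.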

FFmpeg Live Stream - Loop Video?

I am trying to stream a video loop to justin.tv using FFmpeg. I have managed to loop an image sequence and combine it with line-in audio:
ffmpeg -loop 1 -i imageSequence%04d.jpg -f alsa -ac 2 -ar 22050 -ab 64k \
-i pulse -acodec adpcm_swf -r 10 -vcodec flv \
-f flv rtmp://live.justin.tv/app/<yourStreamKeyHere>
Is it possible to do this with a video file?
Definitely possible. Recent versions of ffmpeg include a -stream_loop flag that lets you loop the input as many times as required.
The gotcha is that if you don't regenerate the PTS from the source, ffmpeg will drop frames after the first loop (as the timestamps suddenly go back in time). To avoid this, you need to tell ffmpeg to generate the PTS so that you get increasing timestamps between loops. This is done with the +genpts flag (it has to come before the -i argument).
Here's an example ffmpeg call (replace $F with your input file). This example generates two output streams, and the -stream_loop -1 argument tells ffmpeg to loop the input continuously. The output in this case targets a similar stream broadcast ingest (MetaCDN); adjust to your requirements accordingly.
ffmpeg -threads 2 -re -fflags +genpts -stream_loop -1 -i $F \
-s 640x360 -ac 2 -f flv -vcodec libx264 -profile:v baseline -b:v 600k -maxrate 600k -bufsize 600k -r 24 -ar 44100 -g 48 -c:a libfdk_aac -b:a 64k "rtmp://publish.live.metacdn.com/2050C7/dfsdfsd/lowquality_664?hello&adbe-live-event=lowquality_" \
-s 1920x1080 -ac 2 -f flv -vcodec libx264 -profile:v baseline -b:v 2000k -maxrate 2000k -bufsize 2000k -r 24 -ar 44100 -g 48 -c:a libfdk_aac -b:a 64k "rtmp://publish.live.metacdn.com/2050C7/dfsdfsd/highquality_2064?mate&adbe-live-event=highquality_"
Sinclair Media found a solution using lavfi's movie source, appending :loop=0 to the file name:
This is untested:
ffmpeg -f lavfi -re -i movie=StreamTest.avi:loop=0 \
-acodec libfaac -b:a 64k -pix_fmt yuv420p -vcodec libx264 \
-x264opts level=41 -r 25 -profile:v baseline -b:v 1500k \
-maxrate 2000k -force_key_frames 50 -s 640x360 -map 0 -flags \
-global_header -f segment -segment_list index_1500.m3u8 \
-segment_time 10 -segment_format mpegts \
-segment_list_type m3u8 segmented.ts
But it should create a local "index_1500.m3u8" file that streams the video in "StreamTest.avi".
I just reused Rob's answer with a few modifications in order to send a file to a live stream:
ffmpeg -threads 2 -re -fflags +genpts -stream_loop -1 -i gvf.mp4 -c copy \
-f mpegts -mpegts_service_id 102 -metadata service_name=My_channel -metadata service_provider=My_Self \
-max_interleave_delta 0 -use_wallclock_as_timestamps 1 -flush_packets 1 \
"udp://233.0.0.1:1001?localaddr=10.60.4.237&pkt_size=188"
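To verify the multicast output from another machine on the same network, something like this should work (ffmpeg's udp protocol joins the multicast group automatically):
ffplay udp://233.0.0.1:1001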
