HLS implementation with FFmpeg

I am trying to implement HLS using FFmpeg for transcoding + segmenting but have been facing a couple of issues that have been bugging me for the past week.
Issue
The webserver receives live MP4 fragments recorded on the go and needs to take care of transcoding and segmentation.
As MP4 fragments are received, they need to be encoded and then segmented. If I run a segmenter (be it ffmpeg or Apple's mediastreamsegmenter), every MP4 fragment is treated as a standalone VOD, and I am unable to integrate them as part of a larger live event implementation.
I thought of a solution where every time I receive an MP4 fragment, I first use ffmpeg to concatenate it with the previous ones to form the larger MP4 that I then pass to be segmented for HLS. That did not work either, because the entire stream has to be re-segmented every single time, with existing TS fragments replaced by new ones that are similar yet shifted in time.
Implementation 1
ffmpeg -re -i fragmentX.mp4 -b:v 118k -b:a 32k -vcodec copy -preset:v veryfast -acodec aac -strict -2 -ac 2 -f mpegts -y fragmentX.ts
I manage the m3u8 manifest on my own, deleting old fragments and appending new ones.
When validating the stream, I find it stacked with EXT-X-DISCONTINUITY tags, making it unwatchable.
Implementation 2
First, combine the latest fragment with overall.mp4:
ffmpeg -i "concat:newfragment.mp4|existing.mp4" -c copy overall.mp4
Then pass the combination to ffmpeg for HLS segmentation:
ffmpeg -re -i overall.mp4 -ac 2 -r 20 -vcodec libx264 -b:v 318k -preset:v veryfast -acodec aac -strict -2 -b:a 32k -hls_time 2 -hls_list_size 3 -hls_allow_cache 0 -hls_base_url /Users/JosephKalash/Desktop/test/350/ -hls_segment_filename '350/fragment%03d.ts' -hls_flags delete_segments 350/index.m3u8
Concatenation is not perfect and there are noticeable glitches where the fragments are supposed to be stitched. Segmentation replaces older fragments and the manifest is rewritten as if it's a new HLS stream every time ffmpeg is called.
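Worth noting: the concat protocol performs raw byte-level concatenation, which the MP4 container does not support; the documented way to join MP4 files losslessly is the concat demuxer. A sketch of that variant, where list.txt is a hypothetical text file containing file 'existing.mp4' and file 'newfragment.mp4' on separate lines:
ffmpeg -f concat -safe 0 -i list.txt -c copy overall.mp4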
I cannot figure out how to get this to work properly.
Any ideas?

Solved by relying on the nginx-rtmp module, which turned out to be well suited to the above implementation.
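For reference, a minimal nginx-rtmp configuration along these lines handles the ingest and HLS segmentation in one place (the application name and paths are illustrative):
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            hls on;                    # emit an HLS playlist plus TS segments
            hls_path /tmp/hls;         # where index.m3u8 and fragments are written
            hls_fragment 2s;           # target segment duration
            hls_playlist_length 10s;   # sliding window kept in the manifest
        }
    }
}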

Related

How to continue saving to an existing m3u8 playlist using ffmpeg

I'm using the nginx-rtmp module to run a live streaming server that encodes an RTMP stream to an HLS playlist. Is there a way to continue with an existing m3u8 file instead of creating a new playlist when I start ffmpeg? Streams can sometimes be disconnected, and I want to keep a single playlist when a user resumes streaming.
Here's the ffmpeg command I'm running:
ffmpeg -i rtmp://localhost/live/$name -c:v libx264 -x264opts keyint=60:no-scenecut -s 720x1280 -r 30 -b:v 2000k -profile:v high -preset veryfast -c:a libfdk_aac -sws_flags bilinear -hls_list_size 0 /tmp/hls/$name_720p_.m3u8
You have to add the flag "append_list" to the hls_flags option:
ffmpeg -i in.nut -hls_flags append_list out.m3u8
For more information, see https://ffmpeg.org/ffmpeg-formats.html
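Applied to the command in the question, that would look something like this (untested sketch):
ffmpeg -i rtmp://localhost/live/$name -c:v libx264 -x264opts keyint=60:no-scenecut \
-s 720x1280 -r 30 -b:v 2000k -profile:v high -preset veryfast -c:a libfdk_aac \
-sws_flags bilinear -hls_list_size 0 -hls_flags append_list /tmp/hls/$name_720p_.m3u8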
Use the hls_cleanup off directive, which stops old HLS fragments and files from being removed; note that you must then let nginx-rtmp generate the HLS output itself instead of creating it with ffmpeg.
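In the nginx-rtmp config that looks something like this (application name and path are illustrative):
application live {
    live on;
    hls on;
    hls_path /tmp/hls;
    hls_cleanup off;   # keep old fragments and playlist entries instead of deleting them
}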
There is an option append_list, whose explanation reads: "append the new segments into old hls segment list". It might work for you.

Can't fix timestamp of WebM video with dynamic resolution using FFmpeg

I'm developing a platform for 1-to-1 video calls with recording. For my purposes, I work with the following stack: WebRTC, Kurento Media Server, FFmpeg.
It works perfectly in an ideal environment, but if my users have a poor connection, after recording I see a lot of problems with out-of-sync audio and video tracks.
As I understand it, the problem appears due to incorrect timestamps, so I'm doing a bit of post-processing where I generate new timestamps, and it helps!
Here is an example command:
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "fps=30, setpts=PTS-STARTPTS" \
-acodec libvorbis -vcodec libvpx \
-vsync 1 -async 1 -r 30 -threads 4 out.webm
After that, I faced one more problem. If the user has a poor connection, WebRTC can dynamically change the video resolution. After post-processing such videos (with different resolutions over the course of the video), I see a frozen image from the moment the resolution was dynamically changed until the end of the video. There are no errors in the FFmpeg logs, just information about the resolution change:
[libvpx @ 0x559335713440] dimension change! 480x270 -> 320x180
-async is forwarded to lavfi similarly to -af aresample=async=1:min_hard_comp=0.100000.
After analyzing the logs, I realized that the problem was due to the STARTPTS parameter, which, after the automatic resolution change, became very large (equal to the number of frames that came before it). I tried to remove STARTPTS and leave only PTS.
After that, the video started to work well, but only until the video resolution dynamically changes; from then on, the audio and video tracks are out of sync again.
I've tried scaling the videos to a static resolution before fixing the timestamps, and it helps. But it's a bit of extra work. Example command:
ffmpeg -acodec libopus -vcodec libvpx \
-i in.webm \
-vf scale=640:480 \
-acodec libvorbis -vcodec libvpx \
-threads 4 out.webm
I've also tried to combine both commands into one using filter_complex, but it didn't work.
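For reference, the combined version I would expect chains the scale into the same filter graph, something like this sketch (it may still run into the dynamic-resolution problem described above):
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "scale=640:480,fps=30,setpts=PTS-STARTPTS" \
-acodec libvorbis -vcodec libvpx \
-vsync 1 -async 1 -r 30 -threads 4 out.webm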
I haven't worked with FFmpeg for very long, so maybe I'm doing something wrong? Maybe there is an easier way to do this?
Since Kurento uses GStreamer for the video recording, maybe it would be a better option to reconfigure Kurento to fix the timestamps during recording?
I can provide any videos and commands which I use.
I'm using:
Kurento Media Server 6.9.0,
FFmpeg 4.1

ffmpeg dts_delta_threshold and aresample=async=1

I am using ffmpeg to encode live streams for use in a Tvheadend server. ffmpeg doesn't handle HLS discontinuities, but I've fixed that by using streamlink to read the HLS stream and pipe it into ffmpeg.
Sometimes the audio has gaps in the live stream and goes out of sync from that point on. I have managed to fix this using aresample=async=1: ffmpeg inserts silence for the gaps and the audio stays synced.
Tvheadend doesn't like DTS discontinuities, and the stream will freeze whenever one is encountered. I have also fixed this with -dts_delta_threshold 1. With this option the stream plays seamlessly without any freezes.
Here is where my problem comes in: when using -dts_delta_threshold 1, the aresample filter no longer works, I assume because there are no more gaps, so it can't insert the silence. I've tried various combinations and orderings of options.
Is there any way to apply aresample=async=1 first and then -dts_delta_threshold 1 afterwards?
This is my current command:
streamlink -l warning --ringbuffer-size 64M --hls-timeout 100000000 --hls-live-restart hls://192.168.10.1/play/$1.$2.m3u8 best -O | \
ffmpeg -loglevel fatal -err_detect ignore_err \
-f mpegts -i - \
-filter_complex "eq=contrast=${3:-1.0}" \
-c:v libx264 -crf 18 -preset superfast -tune zerolatency -pix_fmt yuv420p -force_key_frames "expr:gte(t,n_forced*2)" \
-c:a aac -b:a 256k -ac 2 -af aresample=async=1 \
-metadata service_provider=$1 -metadata service_name="$1.$2" -f mpegts pipe:1
I've tried putting -dts_delta_threshold before and after the input; same thing, the audio goes out of sync if there is a gap in the audio. I've tried putting -async 1 before the input, but that doesn't work either.
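One possibility might be to split the work across two ffmpeg processes in the pipeline, so the audio is resampled in the first pass and the DTS fix-up is applied when the second process reads the intermediate stream. An untested sketch based on the command above:
streamlink -l warning --ringbuffer-size 64M --hls-timeout 100000000 --hls-live-restart hls://192.168.10.1/play/$1.$2.m3u8 best -O | \
ffmpeg -loglevel fatal -err_detect ignore_err \
-f mpegts -i - \
-filter_complex "eq=contrast=${3:-1.0}" \
-c:v libx264 -crf 18 -preset superfast -tune zerolatency -pix_fmt yuv420p -force_key_frames "expr:gte(t,n_forced*2)" \
-c:a aac -b:a 256k -ac 2 -af aresample=async=1 \
-f mpegts - | \
ffmpeg -loglevel fatal -dts_delta_threshold 1 -f mpegts -i - -c copy \
-metadata service_provider=$1 -metadata service_name="$1.$2" -f mpegts pipe:1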

Transcoding FLV to MP4 with ffmpeg very slow

I am trying to support the recording of webcam video on our website, which I then need to transcode to MP4 and WebM to support HTML5 playback. I have ffmpeg 1.2 installed on our server, and have the whole process running fairly well.
The one problem I do have, though, is transcoding FLV to MP4. It is unacceptably slow; e.g. an 8-second FLV takes about 2.5 minutes to transcode!
The ffmpeg command I am using is:
ffmpeg -y -i webcam.flv -c:a libfaac -ac 2 -b:a 64k -ar 44100 -c:v libx264 \
-b:v 350k webcam.mp4
There are so many ffmpeg params that I am a bit lost as to the best way forward with this issue. You can download a test FLV from here:
dropbox.com/s/hhd6uhdiuhk800w/webcam.flv
By comparison, transcoding to WebM takes about 5 seconds:
ffmpeg -y -i webcam.flv -c:a libvorbis -ac 2 -b:a 64k -ar 44100 -c:v libvpx \
-b:v 350k -metadata:s:v:0 rotate=0 webcam.webm
OK, I found the answer. I had a closer look at the ffmpeg output and noticed:
[mp4 @ 0xa0060c0] Frame rate very high for a muxer not efficiently supporting it.
Please consider specifying a lower framerate, a different muxer or -vsync 2
Doh. So I added "-vsync 2" as the last parameter before the output file and it worked a charm, taking transcoding time down to about 10 seconds! Very happy.
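For completeness, the full command then becomes:
ffmpeg -y -i webcam.flv -c:a libfaac -ac 2 -b:a 64k -ar 44100 -c:v libx264 \
-b:v 350k -vsync 2 webcam.mp4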
Working out "generalised" ffmpeg settings for all types of A/V input still seems like black magic to me...

Changing resolution mid-video with FFMPEG

I have a source video (mpeg2video) which I'm transcoding to x264. The source contains 2 different programs recorded from TV. One is in 4:3 AR and the other 16:9 AR. When I play the source file through VLC, the player correctly changes size to show the video at the correct AR. So far so good.
When I transcode, the conversion process auto-detects the AR from the first few frames and then transcodes the whole video using this AR. If the 16:9 section comes first, then the whole conversion is done in 16:9 and the 4:3 section looks stretched horizontally. If the 4:3 section is at the start of the source file, then the whole transcode is done in 4:3 and the 16:9 section looks squashed horizontally.
No black bars are ever visible.
Here's my command:
nice -n 17 ffmpeg -i source.mpg -acodec libfaac -ar 48000 -ab 192k -async 1 -copyts -vcodec libx264 -b 1250k -threads 2 -level 31 -map 0:0 -map 0:1 -map 0:2 -scodec copy -deinterlace output.mkv
I don't fully understand what's going on. How do I get the same 'change in AR' mid-video in the output file that I have in the input video?
I don't think ffmpeg is designed to change aspect ratio midway. You would have to write your own application using libav for that. The simpler way would be to create two chunks of video that you then combine.
EDIT:
The best way to deal with it is to detect the change of AR yourself, transcode the two segments separately, and join them.
EDIT2:
Use ffmpeg itself to chunk the video, demux anything you want and mux it back again. It should work fine. You needn't use avidemux.
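A sketch of that approach, assuming the AR change happens at a known timestamp (00:30:00 here is purely illustrative):
# split the source into its two programmes without re-encoding
ffmpeg -i source.mpg -t 00:30:00 -c copy part1.mpg
ffmpeg -i source.mpg -ss 00:30:00 -c copy part2.mpg
# transcode each part separately so each keeps its own aspect ratio
ffmpeg -i part1.mpg -c:v libx264 -b:v 1250k -c:a libfaac -b:a 192k part1.mkv
ffmpeg -i part2.mpg -c:v libx264 -b:v 1250k -c:a libfaac -b:a 192k part2.mkv
# rejoin with the concat demuxer; parts.txt lists file 'part1.mkv' then file 'part2.mkv'
ffmpeg -f concat -safe 0 -i parts.txt -c copy output.mkv
Whether a single output file with an AR switch plays back correctly still depends on the container and player, just as with the source file.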
