Can't fix timestamp of WebM video with dynamic resolution using FFmpeg

I'm developing a platform for 1-1 video calls with recording. For my purposes, I work with the following stack: WebRTC, Kurento Media Server, FFmpeg.
It works perfectly in an ideal environment, but if my users have a poor connection, after the recording I see a lot of problems with out-of-sync audio and video tracks.
As I understand it, the problem appears due to incorrect timestamps, so I'm doing a bit of post-processing where I generate new timestamps, and it helps!
Here is an example command:
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "fps=30, setpts=PTS-STARTPTS" \
-acodec libvorbis -vcodec libvpx \
-vsync 1 -async 1 -r 30 -threads 4 out.webm
After that, I faced one more problem. If the user has a poor connection, WebRTC can dynamically change the video resolution. After post-processing such videos (those whose resolution changes partway through), I see a frozen image from the moment the resolution changed until the end of the video. There are no errors in the FFmpeg logs, just information about the resolution change:
[libvpx @ 0x559335713440] dimension change! 480x270 -> 320x180
-async is forwarded to lavfi similarly to -af aresample=async=1:min_hard_comp=0.100000.
After analyzing the logs, I realized that the problem was due to the STARTPTS parameter, which, after the resolution changed automatically, became very large (equal to the number of frames that came before it). I tried removing STARTPTS and leaving only PTS.
After that, the video worked well, but only until the resolution changed dynamically again; then the audio and video tracks went back out of sync.
I've tried scaling the videos to a static resolution before fixing the timestamps, and it helps. But it's a bit of extra work. Command example:
ffmpeg -acodec libopus -vcodec libvpx \
-i in.webm \
-vf scale=640:480 \
-acodec libvorbis -vcodec libvpx \
-threads 4 out.webm
I've also tried combining both commands using filter_complex, but it didn't work.
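For reference, a single-pass variant might look like the sketch below: scale first, so the filter graph sees a fixed resolution, then regenerate the timestamps. This is untested against Kurento recordings; the 640:480 target and 30 fps are just carried over from the commands above:
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "[0:v]scale=640:480,fps=30,setpts=PTS-STARTPTS[v];[0:a]asetpts=PTS-STARTPTS[a]" \
-map "[v]" -map "[a]" -vcodec libvpx -acodec libvorbis -threads 4 out.webm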
I haven't worked with FFmpeg for very long, so maybe I'm doing something wrong? Maybe there are easier ways to do this?
Since Kurento uses GStreamer for video recording, maybe it would be a better option to reconfigure Kurento to fix the timestamps during recording?
I can provide any videos and commands that I use.
I'm using:
Kurento Media Server 6.9.0,
FFmpeg 4.1

Related

FFMPEG screen capture outputting very poor and inconsistent framerate as webm with no audio

I've been testing different parameters to capture my desktop video and audio (desktop audio, not mic), and I find that no matter what settings I use, the resulting webm file's framerate is around 5 fps and horribly inconsistent. It starts at around 20 fps and slowly drops over time until about 4-5 fps. I'm not really sure what I'm doing wrong, but here is the basic command I'm using:
ffmpeg -y -video_size 1920x1080 -f gdigrab -framerate 60 -i desktop -c:v libvpx-vp9 -acodec libvorbis -c:a libopus -b:v 2M -threads 4 output.webm
I've tried anywhere between 30 and 60 fps and tested different bitrates, but nothing seems to affect the output framerate.
Also, I know that acodec and c:a are for audio, but I'm not sure how to specify the audio device to use.
So my issues are the horrible framerate for webm and how to include desktop audio in the recording.
You can use arecord and pipe its output through stdout; ffmpeg can read it from stdin. See, for example, aplay piping to arecord using a file instead of stdin and stdout.
Replace the aplay command with your ffmpeg command, and don't forget to add '-i -' to ffmpeg.
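A rough sketch of that pipe, assuming a Linux box (so x11grab stands in for gdigrab; the display, size, and codecs are placeholders):
# arecord captures ALSA audio to stdout; ffmpeg reads it from stdin via -i -
arecord -f cd -t wav - | ffmpeg -f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 \
-i - -c:v libx264 -preset ultrafast -c:a libopus output.mkv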
A doubt: why are you defining the audio encoder twice?
It's impossible to say from the question why the video frame rate is low. It could be an issue with the encoder, or an issue reading the input. Remove the video encoding option and see if the issue persists. If it works fine, try some other encoders.
Use -c:v libx264 instead of -c:v libvpx-vp9. libvpx-vp9's realtime encoding quality is really bad; even regular libvpx (i.e. VP8) is much better. If you insist on using libvpx, use options like -deadline realtime and -cpu-used -4.
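On Windows, combining both suggestions might look like this sketch; the dshow audio device name is a placeholder for whatever "ffmpeg -list_devices true -f dshow -i dummy" reports on your machine:
ffmpeg -f gdigrab -video_size 1920x1080 -framerate 30 -i desktop \
-f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" \
-c:v libx264 -preset ultrafast -tune zerolatency -c:a aac output.mp4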

FFmpeg recording audio from several sources

Hi everyone.
I'm trying to use FFmpeg to record video and 3 audio sources, and to generate 3 different video files - each file should contain the same video stream but a different audio stream. The problem is that I get audio sync issues. The first audio stream is synced perfectly, but the second one has a 1 s lag, and the third one has about a 2 s lag.
I've made a few tests so far, and it seems the root cause of the issue is the initialization time of the video/audio devices: one device is already recording while the second is still being opened, and so on. I've tried changing the order of the input devices; the audio streams still have the same issue, BUT whereas before the 2nd and 3rd audio streams were some time ahead of the video, after reordering they began to lag behind it (audio for the same event appears with some delay). So this test confirms my theory about device initialization times.
But the question remains: why is the first audio stream synchronized properly while the other two are not? And how could I overcome this issue? Any workarounds and ideas are highly appreciated.
Here is the FFmpeg command I'm using and its output.
ffmpeg.exe -f dshow -video_size 1920x1080 -i video="Logitech HD Webcam C615"
-f dshow -i audio="Microphone (HD Webcam C615)"
-f dshow -i audio="Microphone Array (Realtek High Definition Audio)"
-filter_complex "[1:a]volume=1[a1];[2:a]volume=1[a2]"
-vf scale=h=1080:force_original_aspect_ratio=decrease
-vcodec libx264 -pix_fmt yuv420p -crf 23 -preset ultrafast
-acodec aac -vbr 5 -threads 0 -map v:0 -map [a1] -map [a2]
-f tee "[select=\'v,a:0\']C:/Users/vshevchu/Desktop/123/111/111_jjj1.avi|[select=\'v,a:1\']C:/Users/vshevchu/Desktop/123/111/111_jjj2.avi"
OUTPUT
PS. Actually, the issue is exactly the same when I'm not using the "tee" muxer but writing all the audio streams to one container. So "tee" isn't a suspect.
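If the per-device offsets are roughly constant, one workaround might be to measure them once (a clap on camera works) and compensate inside the existing filter_complex. The 1000/2000 ms values below are made up; also note that adelay can only push audio later, so if a stream is behind instead, you would trim its start with atrim rather than delay it:
-filter_complex "[1:a]adelay=1000|1000,volume=1[a1];[2:a]adelay=2000|2000,volume=1[a2]"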

HLS implementation with FFmpeg

I am trying to implement HLS using FFmpeg for transcoding + segmenting, but I have been facing a couple of issues that have been bugging me for the past week.
Issue
The web server currently receives live MP4 fragments recorded on-the-go and needs to take care of transcoding and segmentation.
As MP4 fragments are received, they need to be encoded, then segmented. If I run a segmenter (be it ffmpeg or Apple's mediastreamsegmenter), every MP4 fragment is treated as a VOD by itself, and I'm not able to integrate them as part of a larger live event implementation.
I thought of a solution where, every time I receive an MP4 fragment, I first use ffmpeg to concatenate it with the previous ones to form a larger MP4, which I then pass to be segmented for HLS. That did not work either, because the entire stream has to be re-segmented every time, and the existing TS fragments get replaced by new ones that are similar yet shifted in time.
Implementation 1
ffmpeg -re -i fragmentX.mp4 -b:v 118k -b:a 32k -vcodec copy -preset:v veryfast -acodec aac -strict -2 -ac 2 -f mpegts -y fragmentX.ts
I manage the m3u8 manifest on my own, deleting old fragments and appending new ones.
When validating the stream, I find it stacked with EXT-X-DISCONTINUITY tags, making the stream unwatchable.
Implementation 2
First, combine the latest fragment with overall.mp4:
ffmpeg -i "concat:newfragment.mp4|existing.mp4" -c copy overall.mp4
Then pass the combination to ffmpeg for HLS segmentation:
ffmpeg -re -i overall.mp4 -ac 2 -r 20 -vcodec libx264 -b:v 318k -preset:v veryfast -acodec aac -strict -2 -b:a 32k -hls_time 2 -hls_list_size 3 -hls_allow_cache 0 -hls_base_url /Users/JosephKalash/Desktop/test/350/ -hls_segment_filename '350/fragment%03d.ts' -hls_flags delete_segments 350/index.m3u8
The concatenation is not perfect, and there are noticeable glitches where the fragments are supposed to be stitched. Segmentation replaces older fragments, and the manifest is rewritten as if it were a new HLS stream every time ffmpeg is called.
I cannot figure out how to get this to work properly.
Any ideas?
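One direction worth trying might be the hls muxer's append flags, so that each per-fragment invocation continues the existing playlist instead of starting a new one. A sketch (untested for this per-fragment workflow; bitrates carried over from above):
ffmpeg -i fragmentX.mp4 -vcodec libx264 -b:v 318k -acodec aac -strict -2 -b:a 32k \
-f hls -hls_time 2 -hls_flags append_list+omit_endlist \
-hls_segment_filename '350/fragment%03d.ts' 350/index.m3u8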
Solved by relying on the nginx rtmp module, which turned out to be well suited for the above implementation.
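For reference, with the rtmp module doing the segmentation (hls on; hls_path; hls_fragment in nginx.conf), each incoming fragment can simply be pushed to the server, and the module maintains one continuous playlist. Roughly, with placeholder application/stream names, and assuming the fragments are already H.264/AAC so they can be copied into FLV:
ffmpeg -re -i fragmentX.mp4 -c copy -f flv rtmp://localhost/live/stream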

How to add scale in my ffmpeg command

I want to convert video from any format to mp4, so I am using this command:
ffmpeg -i ttt.mp4 -vcodec copy -acodec copy test.mp4
This is working perfectly, but now I also want to add scaling to it (-s 320:240).
There are also many other commands for converting, like:
ffmpeg -i inputfile.avi -s 320x240 outputfile.avi
but after converting with this command, the video does not play in an HTML5 player.
So this did not work either; please tell me how to add scaling to my command.
Thanks in advance.
You have several problems:
In your command you have -vcodec copy; you cannot scale video without re-encoding.
In the command you randomly found on the Internet, they are using AVI, which is not HTML5-compatible.
What you should do is:
ffmpeg -i INPUT -s 320x240 -acodec copy OUT.mp4
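If you would rather preserve the aspect ratio than force 320x240, a variant worth trying (the -2 lets ffmpeg pick a matching even height, which H.264 requires):
ffmpeg -i INPUT -vf scale=320:-2 -acodec copy OUT.mp4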
Adding to Timothy_G:
Video copy will ignore ffmpeg's video filter chain, so no scaling is available (man ffmpeg is a great source of information that you will not find on Google). Notice that once you start decoding-filtering-encoding (i.e., no copy), the process will be much slower (100x slower or even more). libx264 is recommended if you want compatibility with all browsers.
$ ffmpeg -i INPUT -s 320x240 -threads 4 -c:a copy -c:v libx264 OUT.mp4
vp9 will provide nearly 50% extra bandwidth savings, but only for supported browsers (Firefox/Chrome), and the encoding will be much slower compared to libx264 (which itself is much slower than -c:v copy):
$ ffmpeg -i INPUT -s 320x240 -c:a copy -c:v vp9 OUT.webm
Notice that there is a set of formats (containers) accepted by browsers (most accept mp4, some also webm, ...), and for each format there is a set of accepted audio/video codecs. For example, you can use mp3 or aac with an mp4 file (container), but not with webm files.
http://en.wikipedia.org/wiki/HTML5_video#Supported_video_formats
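Concretely, the two safest pairings are roughly these (codec choices per the compatibility table linked above; INPUT is a placeholder):
# H.264 + AAC in mp4: plays almost everywhere
ffmpeg -i INPUT -c:v libx264 -c:a aac OUT.mp4
# VP9 + Opus in webm: Firefox/Chrome
ffmpeg -i INPUT -c:v libvpx-vp9 -c:a libopus OUT.webm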

FFMpeg creates MP4 which no browser can decode, but it can be played in VLC [duplicate]

This question already has an answer here:
FFmpeg converting image sequence to video results in blank video [closed]
How can I debug what happened? I've tried variations of this to generate a short video from a single image:
ffmpeg -loop 1 -i black.png -vcodec libx264 -b 1500k -s 640x360 -t 1 out.mp4
I tried:
Changing the aspect ratio (or omitting it).
Using -image2 instead of -loop.
Omitting the bitrate.
Creating longer videos.
Different syntax for specifying the video codec: -c:v libx264.
Trying mpeg instead of libx264.
In every case the effect is the same: the video plays in VLC, but not in the browser.
Browsers require certain metadata about the movie to be at the front of the file in order to start playing right away. ffmpeg can achieve this using the -movflags faststart option.
Try:
ffmpeg -loop 1 -i black.png -vcodec libx264 -b 1500k -s 640x360 -t 1 -movflags faststart out.mp4
Note that this does a second pass and will increase encoding time. Also be sure to use a newer version of ffmpeg that supports this flag. More documentation can be found here.
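One more thing worth ruling out, given the duplicate this was closed against: when encoding from a PNG, libx264 may keep a 4:4:4 pixel format that browsers cannot decode, so forcing -pix_fmt yuv420p is a common companion fix. A sketch combining both:
ffmpeg -loop 1 -i black.png -vcodec libx264 -pix_fmt yuv420p -b:v 1500k -s 640x360 -t 1 -movflags faststart out.mp4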
