FFmpeg recording audio from several sources - Windows

Hi everyone.
I'm trying to use FFmpeg to record video plus 3 audio sources and generate 3 different video files: each file should contain the same video stream but a different audio stream. The problem is that I'm getting audio sync issues. The first audio stream is synced perfectly, but the second one lags by about 1 second and the third by about 2 seconds.
I've run a few tests so far, and it seems the root cause is the initialization time of the video/audio devices: one device is already recording while the next is still being opened, and so on. I've tried changing the input device order; the audio streams still have the same issue, BUT whereas the 2nd and 3rd audio streams were previously slightly ahead of the video, after reordering they began to lag behind it (audio for the same event appears with some delay). So this test confirms my theory about device initialization times.
But the question remains: why is the first audio stream synchronized properly while the other 2 are not? And how could I overcome this issue? Any workarounds and ideas are highly appreciated.
Here is the FFmpeg command I'm using and its output.
ffmpeg.exe -f dshow -video_size 1920x1080 -i video="Logitech HD Webcam C615" ^
  -f dshow -i audio="Microphone (HD Webcam C615)" ^
  -f dshow -i audio="Microphone Array (Realtek High Definition Audio)" ^
  -filter_complex "[1:a]volume=1[a1];[2:a]volume=1[a2]" ^
  -vf scale=h=1080:force_original_aspect_ratio=decrease ^
  -vcodec libx264 -pix_fmt yuv420p -crf 23 -preset ultrafast ^
  -acodec aac -vbr 5 -threads 0 -map v:0 -map [a1] -map [a2] ^
  -f tee "[select=\'v,a:0\']C:/Users/vshevchu/Desktop/123/111/111_jjj1.avi|[select=\'v,a:1\']C:/Users/vshevchu/Desktop/123/111/111_jjj2.avi"
OUTPUT
PS. Actually, the issue is exactly the same when I'm not using the "tee" muxer but writing all the audio streams to one container. So "tee" isn't a suspect.
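The thread doesn't include a confirmed fix, but one workaround sketch (my addition, not from the original post): if each device starts with a roughly constant delay, measure the per-stream offset once and compensate with -itsoffset on the lagging audio inputs (-itsoffset applies to the input that follows it; a negative value shifts that stream earlier). The -1.0 value below is a hypothetical placeholder to calibrate; a third, even later device would get roughly -2.0:

:: The -1.0 second offset is hypothetical; measure your real per-device delay first.
ffmpeg.exe -f dshow -video_size 1920x1080 -i video="Logitech HD Webcam C615" ^
  -f dshow -i audio="Microphone (HD Webcam C615)" ^
  -itsoffset -1.0 -f dshow -i audio="Microphone Array (Realtek High Definition Audio)" ^
  -filter_complex "[1:a]volume=1[a1];[2:a]volume=1[a2]" ^
  -vf scale=h=1080:force_original_aspect_ratio=decrease ^
  -vcodec libx264 -pix_fmt yuv420p -crf 23 -preset ultrafast ^
  -acodec aac -threads 0 -map v:0 -map [a1] -map [a2] ^
  -f tee "[select=\'v,a:0\']C:/Users/vshevchu/Desktop/123/111/111_jjj1.avi|[select=\'v,a:1\']C:/Users/vshevchu/Desktop/123/111/111_jjj2.avi"

Whether -itsoffset fully cancels dshow startup skew would need testing; streams that run ahead instead of behind could be shifted with the adelay filter.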

Related

FFMPEG reduce fps for live h264 stream with direct copy

I found various articles on changing the fps with ffmpeg, but none of them matches my exact purpose.
There is an ffmpeg command like below:
ffmpeg -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -f mp4
This will remux my camera stream to fragmented MP4 perfectly.
Is there a way to force ffmpeg to lower the FPS to save bandwidth?
E.g. the camera streams 30 fps and needs 1 Mbps as fragmented MP4 (sample numbers!).
I'd like to know whether it's possible to lower the FPS and get an output stream for which 500 kbps (50% of the original) is enough, without re-encoding.
ffmpeg -r 1 -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -f mp4
and
ffmpeg -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -r 1 -f mp4
do not seem to work.
A temporally coded video stream (such as one using the H.264 codec) cannot have intermediate packets dropped arbitrarily, because later frames reference earlier ones, so this is not possible with a stream copy. Only whole GOPs, or the trailing part of a GOP, may be dropped.
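For completeness, if re-encoding turns out to be acceptable after all, the usual route is the fps filter plus a fresh encode. A sketch (my addition; the fps value, x264 settings, and output name are illustrative):

# Re-encodes, so it gives up -c:v copy; fps/preset/crf/out.mp4 are illustrative values
ffmpeg -i RTSPCAMERAPRODUCEH264 -vf fps=5 -an -c:v libx264 -preset veryfast -crf 28 -movflags +frag_keyframe+empty_moov -f mp4 out.mp4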

The audio disappears from 2/3 of the video after speeding up with ffmpeg

I have tried to speed up a video file (with its audio stream) 8 times with ffmpeg using the script below. All works well, in that from a 50-hour video I get a 7-hour video with the audio sped up as well; yet in the resulting file the audio lasts for just over 2 hours and goes silent after that, i.e. there is video without audio.
ffmpeg -i video.mp4 -filter_complex "[0:v]setpts=0.5*PTS,setpts=0.5*PTS,setpts=0.5*PTS[v];[0:a]atempo=2.0,atempo=2.0,atempo=2.0[a]" -map "[v]" -map "[a]" video_x8.mp4
EDIT:
video.mp4 file
video_x8.mp4 file (the naming differs just for clarity)
EDIT 1.
Here are the full 100MB logs. https://gofile.io/?c=L0Au2e
EDIT 2: Thank you, Gyan. But could you please help me write it as one command so that it works in one go?
As far as I can tell, atempo (whether due to the chaining or otherwise) is not updating timestamps correctly, so the remedy is to insert asetpts after it.
ffmpeg -i video.mp4 -vf "setpts=0.125*PTS" -af "atempo=8.0,asetpts=N/SR/TB" video_x8.mp4
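To verify the fix, one could compare the per-stream durations with ffprobe (my addition):

# Prints codec_type and duration for each stream; audio and video should now match
ffprobe -v error -show_entries stream=codec_type,duration -of compact video_x8.mp4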

FFMPEG screen capture outputting very poor and inconsistent framerate as webm with no audio

I've been testing different parameters to capture my desktop video and audio (desktop audio, not mic), and I find that no matter what settings I use, the resulting webm file's framerate is around 5 fps and horribly inconsistent: it starts at around 20 fps and slowly drops over time to about 4-5 fps. I'm not really sure what I'm doing wrong, but here is the basic command I'm using:
ffmpeg -y -video_size 1920x1080 -f gdigrab -framerate 60 -i desktop -c:v libvpx-vp9 -acodec libvorbis -c:a libopus -b:v 2M -threads 4 output.webm
I've tried anywhere between 30-60 fps and tested different bitrates, but nothing seems to affect the output framerate.
Also, I know that -acodec and -c:a are for audio, but I'm not sure how to specify the audio device to use.
So my issues are the horrible framerate of the webm and how to include desktop audio in the recording.
You can use arecord and pipe it through stdout; ffmpeg can then read it from stdin. See "aplay piping to arecord using a file instead of stdin and stdout", replacing the aplay command with your ffmpeg command. Don't forget to add '-i -' to ffmpeg.
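A minimal sketch of that piping idea (my addition; note arecord is ALSA, i.e. Linux, whereas the question's gdigrab capture is Windows-only):

# 'default' ALSA device and CD-quality WAV are assumptions; ffmpeg reads the pipe via '-i -'
arecord -D default -f cd -t wav - | ffmpeg -i - -c:a libopus audio.webm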
A doubt: why are you defining the audio encoder twice?
It's impossible to say from the question why the video frame rate is low. It could be an issue with the encoder, or with reading the input. Remove the video encoding option and see if the issue persists. If it then works fine, try some other encoders.
Use -c:v libx264 instead of -c:v libvpx-vp9. libvpx-vp9's realtime encoding quality is really bad; even regular libvpx (i.e. VP8) is much better. If you insist on using libvpx, use options like -deadline realtime and -cpu-used -4.
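For the desktop-audio half of the question on Windows, one common route is a DirectShow loopback device. A sketch, assuming the third-party "virtual-audio-capturer" device (from the screen-capture-recorder project) is installed; the device name and encoder settings are assumptions, not part of stock FFmpeg:

:: Assumes "virtual-audio-capturer" (screen-capture-recorder project) is installed
ffmpeg -y -f gdigrab -framerate 30 -video_size 1920x1080 -i desktop ^
  -f dshow -i audio="virtual-audio-capturer" ^
  -c:v libx264 -preset ultrafast -pix_fmt yuv420p -c:a aac output.mp4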

Can't fix timestamp of WebM video with dynamic resolution using FFmpeg

I'm developing a platform for 1-1 video calls with recording. For my purposes, I work with the following stack: WebRTC, Kurento Media Server, FFmpeg.
It works perfectly in an ideal environment, but if my users have a poor connection, after the recording I see a lot of problems with out-of-sync audio and video tracks.
As I understand it, the problem appears due to incorrect timestamps, so I'm doing a bit of post-processing where I generate new timestamps, and it helps!
Here is the command example:
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "fps=30, setpts=PTS-STARTPTS" \
-acodec libvorbis -vcodec libvpx \
-vsync 1 -async 1 -r 30 -threads 4 out.webm
After that, I faced one more problem. If the user has a poor connection, WebRTC can dynamically change the video resolution. After post-processing such videos (with different resolutions within one video), I see a frozen image from the moment the resolution was dynamically changed until the end of the video. There are no errors in the FFmpeg logs, just information about the resolution change:
[libvpx @ 0x559335713440] dimension change! 480x270 -> 320x180
-async is forwarded to lavfi similarly to -af aresample=async=1:min_hard_comp=0.100000.
After analyzing the logs, I realized that the problem was due to the STARTPTS parameter, which, after the automatic resolution change, became very large (equal to the number of frames that came before it). I tried removing STARTPTS and leaving only PTS.
After that, the video worked well, but only until the video resolution changed dynamically; then the audio and video tracks went out of sync again.
I've tried scaling the videos to a static resolution before fixing the timestamps, and it helps, but it's a bit of extra work. Command example:
ffmpeg -acodec libopus -vcodec libvpx \
-i in.webm \
-vf scale=640:480 \
-acodec libvorbis -vcodec libvpx \
-threads 4 out.webm
Also, I've tried to combine both commands using filter_complex, but it didn't work.
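A combined attempt might look like the sketch below (an untested reconstruction from the two commands above, not the exact command from the post, so it may fail in the same way):

# Untested sketch: scale first so the graph sees one resolution, then fix timestamps
ffmpeg -acodec libopus -vcodec libvpx -i in.webm \
  -filter_complex "[0:v]scale=640:480,fps=30,setpts=PTS-STARTPTS[v];[0:a]asetpts=PTS-STARTPTS[a]" \
  -map "[v]" -map "[a]" -acodec libvorbis -vcodec libvpx \
  -r 30 -threads 4 out.webm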
I haven't worked with FFmpeg for long, so maybe I'm doing something wrong? Maybe there are easier ways to do this?
Since Kurento uses GStreamer for the video recording, maybe it would be a better option to reconfigure Kurento to fix the timestamps during recording?
I can provide any videos and commands that I use.
I'm using:
Kurento Media Server 6.9.0,
FFmpeg 4.1

Sending BlackMagic DeckLink Studio 4K over RTMP streams with FFmpeg

I'm trying to send a stream of video that's coming into a BlackMagic DeckLink Studio 4K capture card over a few different RTMP streams at once with FFmpeg. The command I'm using is this:
ffmpeg -re -format_code Hi59 -f decklink -i 'DeckLink Studio 4K' -map 0 -flags +global_header -vcodec libx264 -crf 25 -preset medium -pix_fmt yuv422p -acodec aac -f tee "[f=flv]rtmp://ip1/live/test|[f=flv]rtmp://ip2/live/test"
However, whenever I send this video out, I just get color bars when looking at the stream. I tried a different video source (the testsrc supplied by FFmpeg), and that streams fine over RTMP to multiple destinations.
Is there something weird with how tee and the decklink stuff work in FFmpeg? Or is there an issue with my command?
If you see color bars, that means ffmpeg is connecting to the card and streaming fine, but the card itself is outputting bars. Your command (-format_code Hi59) tells ffmpeg to expect 1920x1080 interlaced at 29.97 fps; make sure that is the format going into the DeckLink. You can also try explicitly setting the connection type, for example:
ffmpeg -re -format_code Hi59 -video_input sdi -f decklink -i 'DeckLink Studio 4K' -map 0 -flags +global_header -vcodec libx264 -crf 25 -preset medium -pix_fmt yuv422p -acodec aac -f tee "[f=flv]rtmp://ip1/live/test|[f=flv]rtmp://ip2/live/test"
If you are still running into issues, make sure that the BlackMagic software can see the video signal and that it's the format you expect.
One last thing to check: if it's an HDMI input, make sure it is not HDCP-protected; HDCP is not supported.
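To double-check what formats the card will accept before streaming, FFmpeg's decklink input can list the supported display modes:

# Prints the display modes (e.g. Hi59) the DeckLink reports as supported
ffmpeg -f decklink -list_formats 1 -i 'DeckLink Studio 4K'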
