FFMPEG screen capture outputting very poor and inconsistent framerate as webm with no audio - cmd

I've been testing different parameters to capture my desktop video and audio (desktop audio, not mic), and I find that no matter what settings I use, the resulting webm file's framerate is around 5 fps and horribly inconsistent. It starts at around 20 fps and slowly drops over time to about 4-5 fps. I'm not really sure what I'm doing wrong, but here is the basic command I'm using:
ffmpeg -y -video_size 1920x1080 -f gdigrab -framerate 60 -i desktop -c:v libvpx-vp9 -acodec libvorbis -c:a libopus -b:v 2M -threads 4 output.webm
I've tried anywhere between 30 and 60 fps and tested different bitrates, but nothing seems to affect the output framerate.
Also, I know that -acodec and -c:a are for audio, but I'm not sure how to specify the audio device to use.
So my issues are the horrible framerate of the webm output and how to include desktop audio in the recording.

You can use arecord and pipe its output through stdout; ffmpeg can then read it from stdin. See:
aplay piping to arecord using a file instead of stdin and stdout
Replace the aplay command with your ffmpeg command, and don't forget to add '-i -' to ffmpeg.
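For example, on Linux (the arecord suggestion assumes ALSA; the question itself is on Windows, where gdigrab lives), the pipe could look roughly like this sketch, with x11grab standing in for the screen grab and the default ALSA capture device assumed:
arecord -f cd -t wav | ffmpeg -f x11grab -framerate 30 -i :0.0 -f wav -i - -c:v libx264 -c:a aac out.mp4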
A question: why are you defining the audio encoder twice?
It's impossible to say from the question why the video frame rate is low. It could be an issue with the encoder, or an issue reading the input. Remove the video encoding option and see if the issue persists; if it then works fine, try some other encoders.
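For instance, a quick capture-only test might look like this (my sketch, not from the comment: lossless ultrafast x264, so the encoder is very unlikely to be the bottleneck):
ffmpeg -y -f gdigrab -framerate 60 -video_size 1920x1080 -i desktop -c:v libx264 -preset ultrafast -qp 0 grab_test.mkv
If grab_test.mkv holds a steady 60 fps, the capture side is fine and the VP9 encoder is the culprit.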

Use -c:v libx264 instead of -c:v libvpx-vp9. libvpx-vp9's realtime encoding quality is really bad; even regular libvpx (i.e. VP8) is much better. If you insist on using libvpx, use options like -deadline realtime and -cpu-used 4.
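Putting that together with an audio device for the original question, a sketch could be (the DirectShow device name "virtual-audio-capturer" is an assumption, it ships with the screen-capture-recorder project; list your own devices with ffmpeg -list_devices true -f dshow -i dummy; note the container change, since libx264 output cannot go into WebM):
ffmpeg -y -f gdigrab -framerate 30 -video_size 1920x1080 -i desktop -f dshow -i audio="virtual-audio-capturer" -c:v libx264 -preset ultrafast -crf 23 -c:a libopus output.mkv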

Related

FFmpeg raw h.264 set pts value

I am currently using ffmpeg to convert a custom container media format to mp4. It is straightforward to dump all the h.264 frames to one file and the aac audio to another. Then I can combine the two and create an mp4 file with ffmpeg.
The problem is that the video source isn't always perfect. From time to time frames are dropped or arrive late, etc. This causes an A/V sync issue, since the PTS is generated at a constant rate by ffmpeg. The source format I am using has the PTS values, but I can't figure out a way to pass them to ffmpeg along with the raw h.264 frames.
I suppose it would be possible to create a demuxer for the custom format, but it seems like a lot of effort. I looked into ffmpeg's .nut container format, thinking that I might be able to convert from the custom container to .nut first. Unfortunately it seems more complex than it looks on the surface.
It seems like there should be an easy way to pass a frame and its PTS value to ffmpeg, but I haven't come across it yet. Any help would be appreciated.
Here is the ffmpeg command I am using:
ffmpeg -f s16le -ac 1 -ar 48k -i source.audio -framerate 20 -i source.video -c:a aac -b:a 64k -r 20 -c:v h264_nvenc -rc:v vbr_hq -cq:v 19 -n out.mp4
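One workaround I've seen (not from this thread, so treat it as a sketch): mux the raw H.264 through mkvmerge, which can attach an explicit per-frame timestamp file, then hand the result to ffmpeg with -c:v copy. The file names here are hypothetical, and mkvmerge may need the elementary stream renamed to a .h264 extension to detect it:
# timestamps.txt starts with the line "# timestamp format v2", followed by one PTS in milliseconds per line
mkvmerge -o video.mkv --timestamps 0:timestamps.txt source.h264
ffmpeg -f s16le -ac 1 -ar 48k -i source.audio -i video.mkv -c:a aac -b:a 64k -c:v copy -n out.mp4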

Streaming RTMP to JANUS-Gateway only showing bitrate but no video

I'm currently using the streaming plugin as follows.
Fancy architecture here:
OBS--------RTMP--------->NGINX-Server------FFMPEG(input RTMP output RTP)--------->JANUS---------webrtc-------->Client
When using the ffmpeg command below, the Janus streaming interface only shows the bitrate (matching the ffmpeg output in the console), but we don't see any video.
ffmpeg -i rtmp://localhost/live/test -an -c:v copy -flags global_header -bsf dump_extra -f rtp rtp://localhost:8004
(using "-c:v copy" so that no encoding is used and hence reducing the
latency)
The video shows fine if I use "-c:v libx264"; the only issue is that it is CPU-intensive and adds latency.
Previously I had tried using RTSP as the input for FFmpeg, and in that case the video showed fine with almost no latency, even though I used "-c:v copy".
So I don't really get why the copy works fine for RTSP, but for RTMP I have to use the libx264 codec. If anyone has an idea about this, I am all ears :)
I had a similar issue, and my problem was that the stream/video I used had a large GOP size.
For WebRTC, latency is sub-second, so the input source should have I-frames at short intervals. It is also better to remove B-frames, since they reference both backward and forward.
Here are commands you could use to get a small GOP size (4) and remove B-frames (see also the low-latency sketch after the second command).
Using RTMP streaming src:
ffmpeg -i rtmp://<your_src> -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
Using an mp4 file:
ffmpeg -re -i test.mp4 -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
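If libx264's CPU cost is the remaining concern, the standard low-latency knobs can be bolted on (my addition, not part of the original answer; -tune zerolatency already disables B-frames):
ffmpeg -i rtmp://<your_src> -an -c:v libx264 -preset ultrafast -tune zerolatency -g 4 -bf 0 -f rtp rtp://<your_dst>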
-c:v copy does not reduce latency. It merely tells ffmpeg not to transcode.

Can't fix timestamp of WebM video with dynamic resolution using FFmpeg

I'm developing the platform for 1-1 video calls with recording. For my purposes, I work with the following stack: WebRTC, Kurento Media Server, FFmpeg.
It works perfectly in an ideal environment, but if my users have a poor connection, after the recording I see a lot of problems with out-of-sync audio and video tracks.
As I understand it, the problem appears due to incorrect timestamps, so I'm doing a bit of post-processing where I generate new timestamps, and it helps!
Here is the command example:
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "fps=30, setpts=PTS-STARTPTS" \
-acodec libvorbis -vcodec libvpx \
-vsync 1 -async 1 -r 30 -threads 4 out.webm
After that, I faced one more problem. If the user has a poor connection, WebRTC can dynamically change the video resolution. After post-processing such videos (with different resolutions over the course of the video), I see a frozen image from the moment the resolution changed until the end of the video. There are no errors in the FFmpeg logs, just information about the resolution change:
[libvpx @ 0x559335713440] dimension change! 480x270 -> 320x180
-async is forwarded to lavfi similarly to -af aresample=async=1:min_hard_comp=0.100000.
After analyzing the logs, I realized that the problem was due to the STARTPTS parameter, which, after the automatic resolution change, became very large (equal to the number of frames that came before it). I tried removing STARTPTS and leaving only PTS.
After that, the video worked well, but only until the resolution changed dynamically; then the audio and video tracks went out of sync again.
I've tried scaling videos to a static resolution before fixing the timestamps, and it helps. But it's a bit of extra work. Command example:
ffmpeg -acodec libopus -vcodec libvpx \
-i in.webm \
-vf scale=640:480 \
-acodec libvorbis -vcodec libvpx \
-threads 4 out.webm
I've also tried to combine both commands using filter_complex, but it didn't work; the combined form I mean is sketched below.
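(For reference, a single-pass combination would look roughly like this; whether it survives the dynamic resolution change is exactly the open question.)
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "[0:v]scale=640:480,fps=30,setpts=PTS-STARTPTS[v]" \
-map "[v]" -map 0:a \
-acodec libvorbis -vcodec libvpx \
-r 30 -threads 4 out.webm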
I haven't worked with FFmpeg for long, so maybe I'm doing something wrong? Maybe there is an easier way to do this?
Since Kurento uses GStreamer for video recording, maybe it would be a better option to reconfigure Kurento to fix the timestamps during recording?
I can provide any videos and commands which I use.
I'm using:
Kurento Media Server 6.9.0,
FFmpeg 4.1

FFmpeg recording audio from several sources

Hi everyone.
I'm trying to use FFmpeg to record video plus 3 audio sources and generate 3 different video files - each file should contain the same video stream but a different audio stream. The problem is that I get audio sync issues: the first audio stream is synced perfectly, but the second one lags by 1 second, and the third by about 2 seconds.
I've run a few tests so far, and the root cause of the issue seems to be the initialization time of the video/audio devices: one device is already recording while the next is still being opened, and so on. I've tried changing the input device order; the audio streams still have the same issue, BUT whereas before the 2nd and 3rd audio streams ran somewhat ahead of the video, after reordering they lag behind it (audio for the same event appears with some delay). So this test confirms my theory about device initialization times.
But the question remains: why is the first audio stream synchronized properly while the other two are not? And how could I overcome this issue? Any workarounds and ideas are highly appreciated.
Here is the FFmpeg command I'm using and its output.
ffmpeg.exe -f dshow -video_size 1920x1080 -i video="Logitech HD Webcam C615" -f dshow -i audio="Microphone (HD Webcam C615)" -f dshow -i audio="Microphone Array (Realtek High Definition Audio)" -filter_complex "[1:a]volume=1[a1];[2:a]volume=1[a2]" -vf scale=h=1080:force_original_aspect_ratio=decrease -vcodec libx264 -pix_fmt yuv420p -crf 23 -preset ultrafast -acodec aac -vbr 5 -threads 0 -map v:0 -map [a1] -map [a2] -f tee "[select=\'v,a:0\']C:/Users/vshevchu/Desktop/123/111/111_jjj1.avi|[select=\'v,a:1\']C:/Users/vshevchu/Desktop/123/111/111_jjj2.avi"
OUTPUT
PS: the issue is actually exactly the same when I'm not using the tee muxer but writing all the audio streams to one container. So tee isn't a suspect.
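One workaround worth trying (a sketch of my own, not a confirmed fix; the 1s/2s offsets are placeholders for whatever lag you actually measure): compensate per stream in the filter graph, using adelay to push a stream later or atrim plus asetpts to pull it earlier. For example, the two volume filters above could become:
-filter_complex "[1:a]atrim=start=1,asetpts=PTS-STARTPTS[a1];[2:a]atrim=start=2,asetpts=PTS-STARTPTS[a2]"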

How to add scaling to my ffmpeg command

I want to convert video from any format to mp4, so I am using this command:
ffmpeg -i ttt.mp4 -vcodec copy -acodec copy test.mp4
This works perfectly, but now I also want to add scaling to it (-s 320x240).
There are also many other commands for converting, like:
ffmpeg -i inputfile.avi -s 320x240 outputfile.avi
but after converting with this command, the video does not play in an HTML5 player.
Since that is not working, how do I add scaling to my command?
Please provide me a solution for this.
Thanks in advance.
You have several problems:
In your command you have -vcodec copy, but you cannot scale video without re-encoding.
In the command you randomly found on the Internet, they are using AVI, which is not HTML5-compatible.
What you should do is:
ffmpeg -i INPUT -s 320x240 -acodec copy OUT.mp4
Adding to Timothy_G:
A video copy will ignore ffmpeg's video filter chain, so no scaling is available (man ffmpeg is a great source of information that you will not find on Google). Notice that once you start decoding-filtering-encoding (i.e., no copy), the process will be much slower (100x slower or even more). libx264 is recommended if you want compatibility with all browsers.
$ ffmpeg -i INPUT -s 320x240 -threads 4 -c:a copy -c:v libx264 OUT.mp4
VP9 will provide nearly 50% extra bandwidth savings, but only for supporting browsers (Firefox/Chrome), and the encoding will be much slower compared to libx264 (which itself is much slower than -c:v copy):
$ ffmpeg -i INPUT -s 320x240 -c:a libopus -c:v libvpx-vp9 OUT.webm
Notice that there is a set of formats (containers) accepted by browsers (most accept mp4, some also webm, ...), and for each format there is a set of accepted audio/video codecs. For example, you can use mp3 or aac in an mp4 file (container), but not in webm files.
http://en.wikipedia.org/wiki/HTML5_video#Supported_video_formats
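For example, a WebM file has to pair VP8/VP9 video with Vorbis or Opus audio; ffmpeg will refuse to mux mp3 or aac into it. A sketch:
$ ffmpeg -i INPUT -s 320x240 -c:v libvpx -c:a libvorbis OUT.webm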
