I'm having trouble capturing and encoding audio+video on-the-fly on macOS.
I tried two options:
ffmpeg
ffmpeg -threads 0 -f avfoundation -s 1920x1080 -framerate 25 -i "0:0" -async 441 -c:v libx264 -preset medium -pix_fmt yuv420p -crf 22 -c:a libfdk_aac -aq 95 -y out.mp4
gstreamer
gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4 osxaudiosrc device=0 ! audio/x-raw ! faac midside=false ! queue ! mux.
The ffmpeg option works, but only at lower resolutions. At higher resolutions the Mac mini (2018) can't do the heavy lifting. Is that because I installed ffmpeg with brew, so it wasn't compiled on my machine, meaning it doesn't use the Mac's H.264 hardware encoder?
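For what it's worth: libx264 is a software encoder no matter how the binary was built, so hardware encoding on macOS means switching to h264_videotoolbox. A minimal sketch, assuming the brew build includes VideoToolbox support (the 6M bitrate is a placeholder; this encoder is bitrate-driven rather than CRF-driven):
ffmpeg -f avfoundation -framerate 25 -video_size 1920x1080 -i "0:0" -c:v h264_videotoolbox -b:v 6M -pix_fmt yuv420p -c:a aac -b:a 192k out.mp4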
The gstreamer option works as well, but there's a slight audio/video sync issue (audio is 100ms ahead of the video). I can't seem to add delay to the GStreamer queue (it ignores it):
queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=100000000
Does anyone have experience with this? Thanks!
That change to the queue affects internal buffering only; it has no impact on the timestamps of the buffers traveling through the pipeline, and it is those timestamps that determine how audio and video are synced.
Try inserting an identity element on either the video or the audio path and setting a timestamp offset via its ts-offset property.
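For example, a sketch of the pipeline above with the audio branch delayed by the 100 ms in question (ts-offset is in nanoseconds; untested against your setup, so the sign and value may need tweaking):
gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4 osxaudiosrc device=0 ! audio/x-raw ! identity ts-offset=100000000 ! faac midside=false ! queue ! mux.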
Related
I am on Fedora 37, though the version seems irrelevant, as I had the same issue on Fedora 36.
I am using the following command to send video from my USB video capture card to v4l2loopback, and the CPU usage goes to 100% so I can't do much else on my computer (i5-8600k, Nvidia RTX 2060, 16 GB RAM).
ffmpeg -nostdin -threads 1 -f video4linux2 -video_size 1920x1080 -input_format mjpeg -i /dev/video1 -f v4l2 -preset fast -c:v copy /dev/video0
Previously the command looked like the following; the extra arguments above were added to see if they would make any difference, but they don't seem to change much:
ffmpeg -f video4linux2 -video_size 1920x1080 -input_format mjpeg -i /dev/video1 -f v4l2 -c copy /dev/video0
To be honest, I don't know much about ffmpeg or video in general. I was using the same command a few months ago on the same hardware with almost zero performance impact; I could play games and watch videos at the same time, whereas now I get 15 FPS in my games while ffmpeg is running. Did anything change in ffmpeg? Are there any improvements I could make to my command? Shouldn't ffmpeg avoid encoding entirely if I am using a pixel format and frame rate supported by my USB capture card, and if so, shouldn't it be super light?
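One way to check that assumption (a sketch; the device path is taken from the question) is to ask the v4l2 demuxer what the card actually exposes, since a silent fallback to a different pixel format would force a software conversion:
# List the pixel formats, resolutions and frame rates the device offers
ffmpeg -f video4linux2 -list_formats all -i /dev/video1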
I'm trying to record from a microphone and webcam on macOS with the following command:
ffmpeg -f avfoundation -framerate 30 -i "0:0" ~/recorded.mp4
My result has crackling in the audio.
I'm familiar with this problem from DAWs: you solve it by increasing the sample buffer. The idea is that audio samples coming from your interface/mic do not arrive at a consistent or fast enough rate, and the missing samples get filled with zeroes, which causes the crackling sound. To avoid missing samples, you want the recording software to wait longer while samples accumulate in a buffer before they're processed.
How can you configure such a buffer for ffmpeg?
Version 4.3 seems to have this issue. Try with 4.2.
It seems to me that passing the -aq 0 audio quality parameter reduces this issue:
ffmpeg -f avfoundation -i 0:0 -acodec pcm_f32le -ar 48000 -aq 0 output.wav
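If the crackling comes from input-side overruns, another generic knob worth trying (an assumption; this is ffmpeg's per-input packet queue, not an avfoundation-specific sample buffer like a DAW's) is -thread_queue_size:
ffmpeg -f avfoundation -thread_queue_size 4096 -framerate 30 -i "0:0" ~/recorded.mp4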
I'm currently using the streaming plugin as follows
Fancy architecture here:
OBS--------RTMP--------->NGINX-Server------FFMPEG(input RTMP output RTP)--------->JANUS---------webrtc-------->Client
When using the ffmpeg command below, the Janus streaming interface shows a bitrate that matches ffmpeg's console output, but we don't see any video.
ffmpeg -i rtmp://localhost/live/test -an -c:v copy -flags global_header -bsf dump_extra -f rtp rtp://localhost:8004
(using "-c:v copy" so that no encoding is used and hence reducing the
latency)
The video shows fine if I use "-c:v libx264"; the only issue is that it is CPU-intensive and adds latency.
Previously I tried using RTSP as the input for ffmpeg, and in that case the video showed fine with almost no latency, even though I used "-c:v copy".
So I don't really get why the copy works fine for RTSP but for RTMP I have to use the libx264 codec. If anyone has an idea about this, I am all ears :)
I had a similar issue, and my problem was that the stream/video I used had a large GOP size.
For WebRTC, latency is sub-second, so the input source should produce I-frames at short intervals. It is also better to remove B-frames, since they reference both earlier and later frames.
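To confirm that GOP size is the culprit (a sketch; substitute your source URL, and note that on a live stream this prints until you interrupt it), ffprobe can dump each frame's picture type so you can count the distance between I-frames:
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv rtmp://<your_src>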
Here are commands you could use to get a small GOP size (4) and drop B-frames.
Using RTMP streaming src:
ffmpeg -i rtmp://<your_src> -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
Using an mp4 file:
ffmpeg -re -i test.mp4 -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
-c:v copy does not reduce latency. It merely tells ffmpeg not to transcode.
I am trying to play some audio on my Linux server and stream it to multiple internet browsers. I have a loopback device that I'm specifying as input to ffmpeg, whose output is then streamed via RTP to a WebRTC server (Janus). It works, but the sound that comes out is horrible.
Here's the command I'm using to stream from ffmpeg to janus over rtp:
nice --20 sudo ffmpeg -re -f alsa -i hw:Loopback,1,0 -c:a libopus -ac 1 -b:a 64K -ar 8000 -vn -rtbufsize 250M -f rtp rtp://127.0.0.1:17666
The WebRTC server (Janus) requires that the audio codec be Opus. If I try to use two-channel audio or increase the sampling rate, the stream slows down or sounds worse. The "nice" command gives the process higher priority.
Using gstreamer instead of ffmpeg works and sounds great!
Here's the cmd I'm using on CentOS 7:
sudo gst-launch-1.0 alsasrc device=hw:Loopback,1,0 ! rawaudioparse ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=127.0.0.1 port=14365
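For comparison, an ffmpeg invocation closer to what that pipeline does (a sketch, untested; opusenc encodes at Opus's native 48 kHz, so the -ar 8000 in the original ffmpeg command may be part of the problem):
sudo ffmpeg -re -f alsa -i hw:Loopback,1,0 -vn -ac 1 -ar 48000 -c:a libopus -b:a 64k -f rtp rtp://127.0.0.1:14365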
I've been testing different parameters to capture my desktop video and audio (desktop audio, not mic), and I find that no matter what settings I use, the resulting webm file's framerate is around 5 fps and horribly inconsistent. It starts at around 20 fps and slowly drops over time to about 4-5 fps. I'm not really sure what I'm doing wrong, but here is the basic command I'm using:
ffmpeg -y -video_size 1920x1080 -f gdigrab -framerate 60 -i desktop -c:v libvpx-vp9 -acodec libvorbis -c:a libopus -b:v 2M -threads 4 output.webm
I've tried anywhere between 30-60 fps and tested different bitrates but nothing seems to affect the output framerate.
Also, I know that acodec and c:a are for audio but I'm not sure how to specify the audio device to use.
So my issues are horrible framerate for webm and how to include desktop audio in the recording.
You can use arecord and pipe its stdout to ffmpeg, which reads it from stdin; see "aplay piping to arecord using a file instead of stdin and stdout". Replace the aplay command there with your ffmpeg command, and don't forget to add '-i -' so ffmpeg reads from stdin.
One doubt: why are you defining the audio encoder twice?
It's impossible to tell from the question why the video frame rate is low. It could be an issue with the encoder, or an issue reading the input. Remove the video encoding option and see if the issue persists; if it works fine then, try some other encoders.
Use -c:v libx264 instead of -c:v libvpx-vp9. libvpx-vp9's realtime encoding quality is really bad; even regular libvpx (i.e. VP8) is much better. If you insist on using libvpx, use options like -deadline realtime and -cpu-used -4.
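Putting that together with the missing audio input, here is a sketch (the dshow device name is an assumption; list your own with: ffmpeg -list_devices true -f dshow -i dummy), switching the container to mp4 since H.264 can't be muxed into webm:
ffmpeg -y -f gdigrab -framerate 30 -video_size 1920x1080 -i desktop -f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" -c:v libx264 -preset ultrafast -tune zerolatency -pix_fmt yuv420p -c:a aac -b:a 160k output.mp4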