I am trying to play audio on my Linux server and stream it to multiple internet browsers. I have a loopback device I'm specifying as input to ffmpeg. ffmpeg's output is then streamed via RTP to a WebRTC server (Janus). It works, but the sound that comes out is horrible.
Here's the command I'm using to stream from ffmpeg to janus over rtp:
nice --20 sudo ffmpeg -re -f alsa -i hw:Loopback,1,0 -c:a libopus -ac 1 -b:a 64K -ar 8000 -vn -rtbufsize 250M -f rtp rtp://127.0.0.1:17666
The WebRTC server (Janus) requires that the audio codec be Opus. If I try to use 2-channel audio or increase the sampling rate, the stream slows down or sounds worse. The "nice" command is there to give the process higher priority.
Using gstreamer instead of ffmpeg works and sounds great!
Here's the cmd I'm using on CentOS 7:
sudo gst-launch-1.0 alsasrc device=hw:Loopback,1,0 ! rawaudioparse ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=127.0.0.1 port=14365
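For comparison, here is what an ffmpeg invocation might look like if it mirrors what the GStreamer pipeline does implicitly (audioconvert/audioresample feed opusenc at Opus's native 48 kHz rate). This is only a hedged sketch reusing the device and port from the question, not a verified fix:
nice --20 sudo ffmpeg -re -f alsa -i hw:Loopback,1,0 -vn -af aresample=48000 -c:a libopus -b:a 64k -f rtp rtp://127.0.0.1:17666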
Related
The situation is kind of complex. I was archiving several CCTV camera feeds (RTSP, H.264, no audio) through OpenCV, which worked, but the CPU utilization was too high and it started to lose some frames from time to time.
To reduce the CPU utilization, I started to use FFmpeg to skip the decoding and encoding processes, which worked perfectly on my home machine. However, when I connected to my university VPN and tried to deploy it on our lab server, FFmpeg couldn't read any frames, and ffplay couldn't get anything either. However, OpenCV, VLC Player and IINA Player could still read and display the feed.
In summary:
1 FFmpeg/ffplay
1.1 can only read the feed from my home network (Wi-Fi, Optimum)
1.2 from the other two networks, the error message says: "Could not find codec parameters for stream 0 (Video: h264, none): unspecified size. Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options"
2 IINA/VLC Player, OpenCV
These tools can get the video all the time.
I'm wondering whether it's related to some specific port access that ffmpeg requires but the others don't. I'd appreciate it if anyone can provide any suggestions.
As references, the tested ffplay command is simple:
ffplay 'the rtsp address'
Thanks
Update
More tests have been performed.
By specifying rtsp_transport as TCP, ffplay can play the video, but FFmpeg still can't access it. (In the beginning, when both FFmpeg and ffplay worked through my home network, the transport was UDP.)
The FFmpeg command is as follows:
ffmpeg -i rtsp://the_ip_address/axis-media/media.amp -hide_banner -c:v copy -s 1920x1080 -segment_time 00:30:00 -f segment -strftime 1 -reset_timestamps 1 -rtsp_transport tcp "%Y-%m-%d-%H-%M-%S_Test.mp4"
Please help...
Solved by forcing TCP with "-rtsp_transport tcp" placed right before -i. ffmpeg applies an option to the input or output that follows it, so in the earlier command the flag came after the input and was treated as an output option, leaving the RTSP input on UDP.
ffmpeg -rtsp_transport tcp -i rtsp://the_ip_address/axis-media/media.amp -hide_banner -c:v copy -s 1920x1080 -segment_time 00:30:00 -f segment -strftime 1 -reset_timestamps 1 "%Y-%m-%d-%H-%M-%S_Test.mp4"
I'm currently using the streaming plugin as follows
Fancy architecture here:
OBS--------RTMP--------->NGINX-Server------FFMPEG(input RTMP output RTP)--------->JANUS---------webrtc-------->Client
When using the ffmpeg command (below), the Janus streaming interface only shows a bitrate that corresponds to that of the ffmpeg output in the console, but we don't see any video.
ffmpeg -i rtmp://localhost/live/test -an -c:v copy -flags global_header -bsf dump_extra -f rtp rtp://localhost:8004
(using "-c:v copy" so that no encoding is used and hence reducing the
latency)
The video shows fine if I use "-c:v libx264"; the only issue is that it is CPU-intensive and adds latency.
Previously I had tried using RTSP as input for FFmpeg, and in that case the video showed fine with almost no latency, even though I used "-c:v copy".
So I don't really get why copy works fine for RTSP, but for RTMP I have to use the libx264 codec. If anyone has an idea about this, I am all ears :)
I had a similar issue, and my problem was that the stream/video I used had a large GOP size.
For WebRTC, latency is sub-second, so the input source should have I-frames at short intervals. It is also better to remove B-frames, since they reference both earlier and later frames.
Here are commands you could use to get a small GOP size (4) and remove B-frames.
Using RTMP streaming src:
ffmpeg -i rtmp://<your_src> -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
Using an MP4 file:
ffmpeg -re -i test.mp4 -c:v libx264 -g 4 -bf 0 -f rtp -an rtp://<your_dst>
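To verify a source's GOP structure (how often I-frames occur, and whether B-frames are present), an ffprobe one-liner along these lines should work; the stream URL is a placeholder, and -read_intervals limits reading to roughly the first five seconds:
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv -read_intervals "%+5" rtmp://<your_src>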
-c:v copy does not reduce latency. It merely tells ffmpeg not to transcode.
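If transcoding is unavoidable, x264's low-latency settings can cut both the CPU cost and the delay. A hedged variant of the command from the question; the preset, tune and GOP values are assumptions to be tuned, not known-good settings:
ffmpeg -i rtmp://localhost/live/test -an -c:v libx264 -preset ultrafast -tune zerolatency -g 4 -bf 0 -f rtp rtp://localhost:8004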
I'm trying to stream .wav audio files via RTP multicast. I'm using the following command:
ffmpeg -re -i Melody_file.wav -f rtp rtp://224.0.1.211:5001
It successfully initiates the stream. However, the audio comes out very choppy. Any ideas how I can make the audio stream clean? I do not need any video at all. Below is a screenshot of my output:
Here are some examples expanding upon the useful comments between #Ralf and #Ahmed about setting asetnsamples and aresample, and also those mentioned in the Snom wiki. Basically, one can get smoother multicast transmission/playback using these approaches for G.711/mulaw audio (160 samples at 8 kHz corresponds to the standard 20 ms RTP packet duration):
ffmpeg -re -i Melody_file.wav -filter_complex 'aresample=8000,asetnsamples=n=160' -acodec pcm_mulaw -ac 1 -f rtp rtp://224.0.1.211:5001
Or using higher quality G722 audio codec:
ffmpeg -re -i Melody_file.wav -filter_complex 'aresample=16000,asetnsamples=n=160' -acodec g722 -ac 1 -f rtp rtp://224.0.1.211:5001
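To sanity-check the result locally, ffplay should be able to join the multicast group directly, since PCMU and G722 use static RTP payload types that the demuxer can recognize without an SDP (a hedged sketch reusing the question's address):
ffplay rtp://224.0.1.211:5001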
I'm having trouble capturing and encoding audio+video on-the-fly on macOS.
I tried two options:
ffmpeg
ffmpeg -threads 0 -f avfoundation -s 1920x1080 -framerate 25 -i 0:0 -async 441 -c:v libx264 -preset medium -pix_fmt yuv420p -crf 22 -c:a libfdk_aac -aq 95 -y out.mp4
gstreamer
gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4 osxaudiosrc device=0 ! audio/x-raw ! faac midside=false ! queue ! mux.
The ffmpeg option works, but only for lower resolutions. With higher resolutions, the Mac mini (2018 gen) can't do the heavy lifting. I suspect it's because I installed ffmpeg with brew, so it wasn't compiled on my machine, meaning it may not be using the Mac's H.264 hardware encoder?
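For what it's worth, hardware H.264 encoding on macOS is exposed through ffmpeg's h264_videotoolbox encoder rather than through a machine-specific build, and Homebrew's ffmpeg normally ships with it. A hedged sketch: the bitrate is an assumption (VideoToolbox is bitrate-driven rather than CRF-driven), and the stock aac encoder stands in for libfdk_aac, which Homebrew builds usually omit:
ffmpeg -f avfoundation -video_size 1920x1080 -framerate 25 -i 0:0 -c:v h264_videotoolbox -b:v 8M -pix_fmt yuv420p -c:a aac -y out.mp4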
The gstreamer option works as well, but there's a slight audio/video sync issue (audio is 100ms ahead of the video). I can't seem to add delay to the GStreamer queue (it ignores it):
queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=100000000
Anyone who has any experience with this? Thanks!
That change in the queues affects internal flow only. It has no impact on the timestamps of the buffers traveling through the pipeline, and the timestamps are what define the sync between audio and video.
Try to use the identity element on either the video or audio path and set some timestamp offset via the ts-offset property.
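For example, the original pipeline with an identity element delaying the audio timestamps by the reported 100 ms (ts-offset is in nanoseconds) might look like this; a hedged sketch, and the exact offset will likely need tuning:
gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4 osxaudiosrc device=0 ! identity ts-offset=100000000 ! audio/x-raw ! faac midside=false ! queue ! mux.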
I am trying to stream my desktop with as little latency as possible. I am using this command to stream:
ffmpeg -sn -f avfoundation -i '1' -r 10 -vf scale=1920x1080 -tune zerolatency -f rawvideo udp://224.3.0.11:5000
and this command on the client side:
ffplay -f rawvideo -pixel_format uyvy422 -framerate 10 -video_size 1920x1080 -fs -i udp://224.3.0.11:5000
The issue I am having is shown in this screenshot from the client side. Does anyone know what I can do to stop this issue?
Because of UDP.
UDP is an unordered protocol, so the video decoder does not always receive the video packets in the order they were sent, which causes the glitches in your video stream.
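One common mitigation (a hedged sketch reusing the question's addresses, not a verified fix) is to send RTP instead of bare raw video over UDP: RTP adds sequence numbers and timestamps, so the receiver can detect loss and reorder packets. ffmpeg prints an SDP description on startup; save it to a file (the name stream.sdp here is arbitrary) and point ffplay at it:
ffmpeg -f avfoundation -i '1' -r 10 -vf scale=1920x1080 -c:v libx264 -tune zerolatency -f rtp rtp://224.3.0.11:5000
ffplay -protocol_whitelist file,rtp,udp stream.sdp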