FFmpeg: Microphone capturing too much noise while using ffmpeg command

I am receiving audio from a source IP address, trying to encode it into the Speex format, and then sending it to the destination IP address using ffmpeg.
My ffmpeg command is:
ffmpeg -protocol_whitelist file,rtp,udp -i temp.sdp -c:a libspeex -f rtp rtp://<dest_ip>:<port>
The SDP file (temp.sdp) contains:
v=0
c=IN IP4 <source_IP>
t=0 0
m=audio <port> RTP/AVP 98
a=rtpmap:98 L16/8000
Issue: whenever I run this command, I get a lot of background noise on the speaker.
I can hear music (not clearly), but no human voice.
I have also tried highpass and lowpass filters, as follows:
ffmpeg -protocol_whitelist file,rtp,udp -i temp.sdp -af "highpass=f=200, lowpass=f=3000" -c:a libspeex -f rtp rtp://<dest_ip>:<port>
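For what it's worth, noise like this with an L16 payload is often a mismatch between what the SDP declares and what the source actually sends (sample rate, channel count, or byte order). A sketch that resamples to mono 8 kHz before the Speex encoder, assuming the receiver expects narrowband mono (the -ar/-ac values are illustrative):
ffmpeg -protocol_whitelist file,rtp,udp -i temp.sdp -af "highpass=f=200, lowpass=f=3000" -ar 8000 -ac 1 -c:a libspeex -f rtp rtp://<dest_ip>:<port>
If the source is actually stereo or sampled at a different rate, the a=rtpmap:98 L16/8000 line in temp.sdp would need to be corrected to match the source first rather than worked around in the filter chain.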

Related

No output when creating an RTMP stream from an RTP stream with FFmpeg

I am building a service that needs to convert RTP streams into HLS streams. The requirements recently shifted, and I now have to create an RTMP stream from the RTP stream and then convert the RTMP to HLS using two separate FFmpeg processes. The problem is that the RTP to RTMP process doesn't actually output anything to the specified RTMP URL.
Going directly from RTP to HLS with the following command (some options removed for brevity) works as expected:
ffmpeg -f sdp \
-protocol_whitelist file,udp,rtp \
-i example.sdp \
-g 2 \
-hls_time 2.0 \
-hls_list_size 5 \
-vcodec libx264 \
-acodec aac \
-f hls chunks/test-master.m3u8
However, converting RTP to RTMP with the following command yields no output, nor does it seem to be receiving any input despite the use of an identical SDP file:
ffmpeg -f sdp \
-protocol_whitelist pipe,udp,rtp \
-i example.sdp \
-g 2 \
-vcodec libx264 \
-acodec aac \
-f flv rtmp://localhost/test-stream
This is an example of what the SDP file looks like:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=Test
c=IN IP4 127.0.0.1
t=0 0
m=audio 37000 RTP/AVPF 97
a=rtpmap:97 opus/48000/2
a=fmtp:97 minptime=10;useinbandfec=1
m=video 37002 RTP/AVPF 96
a=rtpmap:96 H264/90000
a=fmtp:96 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=64001E
MediaSoup generates the RTP stream, and the ports and host match up. I've verified that there is actually a stream of data coming through the ports in question using nc. There are no error messages. Am I missing something obvious here?
MediaSoup is built as an SFU (Selective Forwarding Unit). I can see that you're using:
m=video 37002 RTP/AVPF 96
a=rtpmap:96 H264/90000
a=fmtp:96 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=64001E
You need to make sure the consumer for the video stream is using the same video codec; this can be checked with console.log(consumer.rtpParameters.codecs);
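Another difference worth noting, separate from the codec parameters: the working HLS command whitelists file,udp,rtp, while the failing RTMP command whitelists pipe,udp,rtp, and reading example.sdp from disk generally needs file on the whitelist. As a sanity check that the SDP and the incoming RTP can be read at all, independent of the RTMP output, something like this (a sketch) can be pointed at the same file:
ffprobe -protocol_whitelist file,udp,rtp example.sdp
If ffprobe reports the audio and video streams, the input side is fine and the problem is on the RTMP/flv side; if it hangs or errors, the whitelist or the codec mismatch described above is the more likely culprit.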

Convert RTP (OPUS) stream to HLS (AAC) stream using ffmpeg or gstreamer

I currently have an RTP stream using the Opus codec playing locally on rtp://127.0.0.1:5006.
I would like to convert this stream to HLS with the AAC codec (or another codec if easier) so that it is more accessible to devices with just a browser.
I know that ffmpeg and gstreamer are capable of this but I'm just lost among the various arguments/parameters.
Currently, I have an SDP file that describes my stream (I'm unsure if this is correct; I wrote it after just googling and reading the spec):
v=0
t=0 0
m=audio 5006 RTP/AVP 98
c=IN IP4 127.0.0.1
a=recvonly
a=rtpmap:98 opus/48000/2
a=fmtp:98 stereo=0; sprop-stereo=0; useinbandfec=1
Any ideas?
I was able to get this to work by using the below command. The SDP file seemed to work without issues as well.
ffmpeg -protocol_whitelist file,udp,rtp -i input.sdp -c:a aac -b:a 128k -ac 2 -f hls -hls_time 4 -hls_playlist_type event outputstream.m3u8
If anyone else has trouble understanding the arguments, as I did, just take the time to look each one up at https://ffmpeg.org/ffmpeg.html. Everything becomes much more straightforward after that.
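One follow-up note: -hls_playlist_type event keeps every segment in the playlist, which is fine for a bounded recording but grows without limit for a long-running live source. A rolling-window variant (a sketch, with illustrative segment settings) would look like:
ffmpeg -protocol_whitelist file,udp,rtp -i input.sdp -c:a aac -b:a 128k -ac 2 -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments outputstream.m3u8
Here -hls_list_size limits how many segments the playlist advertises, and delete_segments removes old segment files from disk.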

ffmpeg is really slow at decoding h264 stream over RTP

I need some help getting ffplay to receive and decode a real-time stream encoded in H.264.
I'm trying to make a point-to-point stream between computer A receiving video frames from a Kinect and computer B running ffplay to show the livestream.
These are the commands I'm running on both computers.
Computer A (RPI 3)
ffmpeg -f rawvideo -vcodec rawvideo -pix_fmt rgb24 -s 640x480 -i - -threads 4 -preset ultrafast -codec:v libx264 -an -f rtp rtp://192.168.0.100:2000
This is what ffmpeg outputs:
fps= 14 q=12.0 size=856kB time=00:00:05.56 bitrate=1261.4kbits/s speed=0.54x
The output stream runs at between 10 and 20 fps. It's not great, but I can work with that.
Computer B
ffplay -protocol_whitelist "file,udp,rtp" -probesize 32 -sync ext -i streaming.sdp
streaming.sdp
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.0.100
t=0 0
a=tool:libavformat 57.56.100
m=video 2000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
I'm getting the stream, but at about 0.0001 fps, which is clearly bad. My guess is that I'm missing something in the ffplay command, since ffmpeg shows a more stable and faster stream, but I can't seem to find what I'm missing.
The problem wasn't in ffmpeg, but in the code I wrote that was grabbing the data from the device. I was receiving the same frame multiple times and blocking the thread capturing the data, which made most of the frames duplicates of the first one.
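For reference, when the receiving side really is the bottleneck, the ffplay flags most commonly tried for low-latency RTP playback are the no-buffer/low-delay ones; a sketch along the lines of the command above:
ffplay -protocol_whitelist "file,udp,rtp" -fflags nobuffer -flags low_delay -framedrop -probesize 32 -sync ext -i streaming.sdp
In this particular case none of that would have helped, since the duplicate frames were produced upstream of ffmpeg.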

RTP streaming with FFmpeg audio and video out of sync

I am streaming a webcam/audio with the command:
ffmpeg.exe -f dshow -framerate 30 -i video="xxx" -c:v libx264 -an -f rtp rtp://localhost:50041 -f dshow -i audio="xxx" -c:a aac -vn -f rtp rtp://localhost:50043
This outputs the following sdp info:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
t=0 0
a=tool:libavformat 57.65.100
m=video 50041 RTP/AVP 96
c=IN IP6 ::1
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
m=audio 50043 RTP/AVP 97
c=IN IP6 ::1
b=AS:128
a=rtpmap:97 MPEG4-GENERIC/44100/2
a=fmtp:97 profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3; config=121056E500
And I read the stream with the command:
ffmpeg.exe -protocol_whitelist file,udp,rtp -i D:\test.sdp -c:v libx264 -c:a aac d:\out.mp4
In the resulting file, the audio is slightly ahead of the video. I have read that RTCP runs on the RTP port + 1, and contains synchronization information. I don't see any RTCP information in the SDP file though.
Do I need to specify something to include RTCP?
If that's not the issue, what else can I do to sync the audio and video?
I'm not sure if RTCP is your issue, but I would start by trying to use one DirectShow input and splitting it into two outputs, like this:
ffmpeg.exe -f dshow -framerate 30 -i video="XX":audio="YY" -an -vcodec libx264 -f rtp rtp://localhost:50041 -acodec aac -vn -f rtp rtp://localhost:50043
The ffmpeg DirectShow documentation mentions synchronization issues when multiple inputs are used. It also mentions trying the -copyts flag to resolve sync issues if you want to keep the inputs separate.
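If a fixed offset remains even with a single DirectShow input, one blunt workaround on the receiving side is to delay the audio with the adelay filter; a sketch, where the 300 ms value is purely illustrative and would have to be measured against your recording:
ffmpeg.exe -protocol_whitelist file,udp,rtp -i D:\test.sdp -af "adelay=300|300" -c:v libx264 -c:a aac d:\out.mp4
adelay takes one delay per channel in milliseconds, so "300|300" covers both channels of the stereo AAC stream.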

VLC: Unable to open SDP file for H265 using FFMPEG

I am streaming live video over RTP with ffmpeg, using this command:
ffmpeg -re -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 -c:v libx265 -tune zerolatency -s 320x240 -preset ultrafast -pix_fmt yuv420p -r 10 -strict experimental -f rtp rtp://127.0.0.1:49170 > ffmpeg.sdp
The generated sdp file is:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 56.36.100
m=video 49170 RTP/AVP 96
a=rtpmap:96 H265/90000
VLC gives the following error:
The format of 'file:///home/username/ffmpeg.sdp' cannot be detected. Have a look at the log for details.
Terminal gives the following error:
[0xaf801010] ps demux error: cannot peek
[0xaf801010] mjpeg demux error: cannot peek
[0xaf801010] mpgv demux error: cannot peek
[0xaf801010] ps demux error: cannot peek
[0xb6d00618] main input error: no suitable demux module for `file/:///home/username/ffmpeg.sdp'
If I simply change libx265 -> libx264 in the command (and H265 -> H264 accordingly), the stream runs perfectly fine.
However, I need to stream H.265. Any suggestions?
I guess the problem is that VLC (or ffplay) doesn't get the VPS, SPS, and PPS frames.
In order to start decoding an H.265 stream, you need a VPS, an SPS, a PPS, and an IDR frame.
To ask libx265 to repeat these configuration frames before each IDR frame, you can add the following to your streaming command:
-x265-params keyint=30:repeat-headers=1
The command then becomes:
ffmpeg -re -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 -c:v libx265 -tune zerolatency -x265-params keyint=30:repeat-headers=1 -s 320x240 -preset ultrafast -pix_fmt yuv420p -r 10 -strict experimental -f rtp rtp://127.0.0.1:49170 > ffmpeg.sdp
It generates the following ffmpeg.sdp file:
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 56.36.100
m=video 49170 RTP/AVP 96
a=rtpmap:96 H265/90000
I was able to display the stream with ffplay ffmpeg.sdp and with VLC ffmpeg.sdp (after removing the first line, SDP:, from ffmpeg.sdp).
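One small note, depending on the FFmpeg build: reading a local SDP file that references RTP/UDP usually also needs the protocol whitelist on the player side, so the ffplay invocation may have to be:
ffplay -protocol_whitelist file,udp,rtp ffmpeg.sdp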
Don't shoot me down in flames, as I do not use VLC for this type of thing, but I recall that to get GStreamer working with H.265 I had to install:
libde265 and gstreamer1.0-libde265
There is also a vlc-plugin-libde265 package listed in the Ubuntu repositories.
See: https://github.com/strukturag/libde265
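For reference, on Ubuntu the plugin route above would presumably be installed with something like the following (a sketch using the package names mentioned in this answer; exact names and availability vary by release):
sudo apt-get install gstreamer1.0-libde265 vlc-plugin-libde265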
