To start: I am trying to establish a connection via WebSocket and RTSP, using a fork of node-rtsp-stream.
I have a problem with huge latency, about 10-15 seconds, and I have come to the conclusion that ffmpeg is to blame.
My observations:
When I use
ffmpeg -rtsp_transport tcp \
-fflags discardcorrupt \
-f mpeg1video \
-i rtsp://{ip-address}
at the start there is a delay of about 6-8 seconds, after which the stream gently accelerates until the delay settles at 1-2 seconds. However, with the newest version of jsmpeg, all I get in the frontend player is "possible garbage data. skipping".
When I use
ffmpeg -rtsp_transport tcp \
-fflags discardcorrupt \
-i rtsp://{ip-address} \
-f mpegts \
-codec:v mpeg1video \
-b:v 1000k \
-bf 0
I have a latency of about 10-18 seconds at startup and I can't go below that, even if I try to speed the stream up with startup arguments.
What's wrong?
Related
Trying to transcode two streams into one gives me poor/unstable encoding speeds, from x0.400 to x0.988, sometimes above x1.
ffmpeg \
-thread_queue_size 15 -rtbufsize 100M -i "https://.../stream.m3u8" \
-thread_queue_size 15 -rtbufsize 100M -i "http://.../video.mjpg" \
-filter_complex \
"[0:v]setpts=PTS-STARTPTS [bg]; \
[1:v]scale=200:-1,setpts=PTS-STARTPTS [fg]; \
[bg][fg]overlay=W-w-10:10" \
-c:v mpeg1video \
-b:v 1000k \
-r 25 \
-threads 1 \
-f mjpeg udp://127.0.0.1:1235?pkt_size=1316
Hardware specs:
CPU is Intel Core 2 Duo
Mechanical hard drive
I chose the mpeg1video encoder because of its low CPU usage. It seems that my Core 2 Duo can't keep up with libx264.
I played with output bitrates, fps, threads, and -re, but nothing seems to improve and stabilize the encoding speed at x1. Which parameters do I need to change/add/remove to achieve a reliable x1 encoding speed?
Input streams are not reliable, download internet connection is slow and unreliable.
I'd like to take an RTSP webcam, downsample the video to a lower rate (say, one frame every 5 seconds) and serve the result as an RTSP stream.
Is it possible to configure ffmpeg (or libffmpeg) to do such a thing?
Yes, all we have to do is add the -r 0.2 argument and re-encode the video.
It is also recommended to add -tune zerolatency or -g 1 to make sure every frame is a key frame (required in case video latency is relevant).
Example:
Receiving RTSP stream from localhost, and streaming at 0.2fps (to localhost with different port):
ffmpeg -rtsp_flags listen -rtsp_transport tcp -stimeout 1000000 -i rtsp://127.0.0.1:10000/live.stream -r 0.2 -vcodec libx264 -tune zerolatency -pix_fmt yuv420p -rtsp_transport tcp -f rtsp rtsp://127.0.0.1:20000/live.stream
Testing:
For testing I simulated the RTSP camera with FFmpeg (streaming synthetic video at 25fps).
The RTSP stream is captured by another FFmpeg process that reduces the rate to 0.2fps.
The 0.2fps video is captured and displayed using FFprobe.
The test is implemented as a batch file:
::Play the video for testing
start ffplay -rtsp_flags listen -rtsp_transport tcp -flags low_delay -vf setpts=0 -listen_timeout 1000000 rtsp://127.0.0.1:20000/live.stream
::Wait 5 seconds
ping 127.0.0.1 -n 5 > nul
::Capture the RTSP camera at 25fps, convert to 0.2fps (with re-encoding)
start ffmpeg -rtsp_flags listen -rtsp_transport tcp -stimeout 1000000 -i rtsp://127.0.0.1:10000/live.stream -r 0.2 -vcodec libx264 -tune zerolatency -pix_fmt yuv420p -rtsp_transport tcp -f rtsp rtsp://127.0.0.1:20000/live.stream
::Wait 5 seconds
ping 127.0.0.1 -n 5 > nul
::Simulate an RTSP camera at 25fps
ffmpeg -re -f lavfi -i testsrc=size=192x108:rate=25 -vcodec libx264 -pix_fmt yuv420p -g 30 -rtsp_transport tcp -f rtsp -muxdelay 0.1 rtsp://127.0.0.1:10000/live.stream
It starts awkwardly and gets stable after a few frames.
(We may use the select filter to solve this.)
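A hedged sketch of that select-filter idea: drop the first few decoded frames before the rate conversion so the output starts cleanly. The frame threshold of 5 is an assumed value, not something tested here:

```shell
# Sketch only: skip the first 5 decoded frames (assumed threshold),
# rebuild timestamps with setpts, then apply the 0.2fps rate drop as before.
ffmpeg -rtsp_flags listen -rtsp_transport tcp -i rtsp://127.0.0.1:10000/live.stream \
       -vf "select='gte(n\,5)',setpts=N/FRAME_RATE/TB" -r 0.2 \
       -vcodec libx264 -tune zerolatency -pix_fmt yuv420p \
       -rtsp_transport tcp -f rtsp rtsp://127.0.0.1:20000/live.stream
```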
I'm working on a robot (Raspberry Pi 4 based) that is accessible from anywhere. My robot currently has a 3-second latency. I also use OvenMediaEngine (RTMP to WebRTC) to transmit my stream to the client (on a website). Here is my command:
raspivid -n -t 0 -w 1280 -h 720 -fps 25 -b 3500000 -g 50 -fl -o - | ffmpeg -thread_queue_size 1024 -i - -itsoffset 6 -f alsa -channels 1 -thread_queue_size 1024 -i hw:2 -preset ultrafast -tune zerolatency -vcodec libx264 -r 25 -b:v 512k -s 1280x720 -acodec aac -ac 2 -ab 32k -ar 44100 -f flv rtmp://xxxxxxxx:1935/app/stream
Does anyone know why it won't stream at subsecond latency?
Thanks in advance!
I am not exactly sure where you are incurring latency, but it usually happens either during transport or encoding.
If possible, I would see if you can avoid re-encoding to H264. You are going to pay the penalty of decoding (or just parsing?) and then encoding.
I would also see if you can ingest into OME with something other than RTMP. WebRTC and RTSP will both give you better latency.
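Concretely, since raspivid already emits H.264, the first suggestion amounts to replacing the libx264 re-encode in the question's command with a stream copy. A hedged, untested sketch using the same placeholders as the question:

```shell
# Sketch: pass the camera's H.264 through untouched; only the audio is encoded.
raspivid -n -t 0 -w 1280 -h 720 -fps 25 -b 3500000 -g 50 -fl -o - | \
ffmpeg -thread_queue_size 1024 -i - \
       -itsoffset 6 -f alsa -channels 1 -thread_queue_size 1024 -i hw:2 \
       -c:v copy \
       -c:a aac -ac 2 -b:a 32k -ar 44100 \
       -f flv rtmp://xxxxxxxx:1935/app/stream
```

Note that with `-c:v copy` the 3.5 Mbps set in raspivid becomes the stream bitrate; the original `-b:v 512k` reduction no longer applies.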
I have a Raspberry Pi IP camera on my network broadcasting to a web browser, and I want to save video clips 10 minutes long. This is the line:
raspivid -t -0 -w 1080 -h 720 -awb auto -fps 30 -b 1200000 -o - |ffmpeg -loglevel quiet -i - -vcodec copy -an -f flv -metadata streamName=myStream tcp://0.0.0.0:6666&
Following a YouTube tutorial I managed to watch my RPi IP camera in the browser, and the command above works fine! I want to record myself sleeping to detect any breath interruptions, so I only want to add recording to 10-minute video files (in chronological order, if possible).
You can use the segment muxer to save the recording in 10 minute segments.
ffmpeg -loglevel quiet -i - -c copy -an -f flv -metadata streamName=myStream tcp://0.0.0.0:6666 -c copy -an -f segment -segment_time 600 -reset_timestamps 1 vid%d.mp4
This will generate, in addition to streaming, vid1.mp4, vid2.mp4, vid3.mp4...
Due to keyframe placement, segments may not be exactly 10 minutes long.
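If exact 10-minute boundaries matter more than CPU, one hedged option is to give up the cheap stream copy and re-encode while forcing a keyframe at every segment boundary (the libx264 choice below is an assumption; any encoder works):

```shell
# Sketch: force a keyframe every 600 s so the segment muxer can cut exactly there.
ffmpeg -loglevel quiet -i - \
       -c:v libx264 -force_key_frames "expr:gte(t,n_forced*600)" -an \
       -f segment -segment_time 600 -reset_timestamps 1 vid%d.mp4
```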
Another way, building on @Gyan's suggestion: you can combine segment and the strftime format to record files whose names are the time recording started, like:
video_2019-08-04-12.00.00.flv
video_2019-08-04-12.10.00.flv
video_2019-08-04-12.20.00.flv
...
Use the following command:
ffmpeg -loglevel quiet -i - -vcodec copy -an -f flv -metadata streamName=myStream tcp://0.0.0.0:6666 \
-f segment -strftime 1 \
-segment_time 00:10:00 \
-segment_format flv \
-an -vcodec copy \
-reset_timestamps 1 \
video_%Y-%m-%d-%H.%M.%S.flv
I am trying to convert a source VBR SPTS MPEG-2 TS file into CBR using ffmpeg. The code I am using is the following:
#!/bin/bash
pkill ffmpeg
ffmpeg \
-re -i source.ts -c copy \
-muxrate 18000K \
-f mpegts \
udp://destination_ip:1234?pkt_size=1316
The source VPID bitrate is ~ 10Mbps and the APID is 296Kbps. So according to my understanding this code should deliver 18Mbps CBR where the difference between the muxrate and the bitrate of all the PIDs is filled with null packets.
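That expectation can be written down as a quick back-of-the-envelope check (TS packet overhead and PSI tables are ignored here, so the real null-packet share is somewhat lower):

```shell
# Rough arithmetic: the mux pads the gap between the elementary
# streams and the -muxrate target with null packets.
muxrate=18000   # kbps, the -muxrate target
vpid=10000      # kbps, approximate video bitrate
apid=296        # kbps, audio bitrate
echo "expected null padding: $((muxrate - vpid - apid)) kbps"   # prints 7704
```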
The problem is that the output is far from perfect. The overall bitrate is semi-CBR at best. It ranges between 12Mbps and 15Mbps and I see a lot of PCR accuracy and PCR repetition errors along with CC errors both on the VPID and APID.
Some ideas:
make sure you have a recent version of ffmpeg, because at some point there was a bug that messed up PCR insertion when stream copying
if you want constant UDP output you must use the bitrate option like:
-flush_packets 0 -f mpegts "udp://destination_ip:1234?pkt_size=1316&bitrate=18000000"
UDP is an unreliable protocol and you might experience packet loss (unfortunately the bitrate option only works for UDP for now AFAIK)
if you have a dedicated connection but still experience CC errors check the destination OS max UDP buffer sizes and make sure it can handle 18 Mbps
Specify -minrate and -maxrate too.
Use a -bufsize bigger than the bitrate.
Set -muxrate to a value similar to bufsize.
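The destination buffer-size check from the suggestions above can be sketched as follows; the sysctl names are Linux-specific and the 25 MiB figure is an assumed value, not a recommendation:

```shell
# On the destination host (Linux): inspect and raise the UDP receive ceiling.
sysctl net.core.rmem_max                      # current maximum, in bytes
sudo sysctl -w net.core.rmem_max=26214400     # ~25 MiB, assumed value

# Then ask for a large socket buffer explicitly when receiving:
ffplay "udp://0.0.0.0:1234?buffer_size=26214400"
```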
The final command:
ffmpeg \
-re -i source.ts \
-b:v 10500k \
-minrate 10500k \
-maxrate 10500k \
-bufsize 18000k \
-muxrate 18000k \
-f mpegts \
udp://destination_ip:1234?pkt_size=1316
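To verify the result, one hedged option is to run ffprobe on the destination host and check the overall bitrate it reports for the incoming TS (fifo_size is the udp protocol's receive FIFO, set here to an assumed value):

```shell
# Bind the port on the receiving host and inspect the incoming transport stream.
ffprobe -hide_banner "udp://0.0.0.0:1234?fifo_size=50000"
```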