ffmpeg video stream delay in playback?

I'm trying to capture and stream video from a 5MP USB camera using ffmpeg 3.2.2 on Windows. Here's the command line that I'm using:
ffmpeg -f dshow -video_size 320x240 -framerate 30 -i video="HD USB Camera" -vcodec libx264 -preset ultrafast -tune zerolatency -g 60 -f mpegts udp://192.168.1.100:10000
The destination for my stream (an Ubuntu box on the same subnet) is running ffplay via:
ffplay -i udp://127.0.0.1:10000
This works, but the video stream seems to be delayed by 8-10 seconds. It's my understanding that the destination can't begin displaying the stream until it receives an I-frame, so I tried specifying a GOP value of 60, thinking that this would cause an I-frame to be inserted every 2 seconds (at 30 FPS).
The Windows machine that's doing the transcoding is running an i7-3840QM @ 2.80GHz and has 32 GB RAM. FFmpeg appears to be using very little CPU (around 2%), so it doesn't seem to be CPU bound. Just as a test, I tried ingesting an MP4 file and not doing any transcoding (ffmpeg -re -i localFile.mp4 -c copy -f mpegts udp://192.168.1.100:10000), but it still takes several seconds before the stream is displayed on the Ubuntu system.
On a related note, I'm also evaluating a trial version of the Wowza Streaming Engine server, and when I direct my ffmpeg stream to Wowza, I get the same 8-10 second delay before the Wowza test player starts playing it back. For what it's worth, once the stream starts playing, it seems to run fine (other than everything being "behind" by several seconds).
I'm new to video streaming, so I might be missing something obvious here, but can anyone tell me what might be causing this delay or suggest how I might troubleshoot it further? Thank you!

Try setting these values:
analyzeduration integer (input)
Specify how many microseconds are analyzed to probe the input. A higher value will enable detecting more accurate information, but will increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
probesize integer (input)
Set probing size in bytes, i.e. the size of the data to analyze to get stream information. A higher value will enable detecting more information in case it is dispersed into the stream, but will increase latency. Must be an integer not less than 32. It defaults to 5,000,000.
FFmpeg docs
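Applied to the playback side of the question, that might look like this (a sketch; these near-minimum values trade probing accuracy for startup speed, so tune them for your stream):
ffplay -analyzeduration 0 -probesize 32 -i udp://127.0.0.1:10000
The same two options can also be set as input options on the sending ffmpeg to shave probing delay off the capture side.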

Related

Real time livestreaming - RPI FFmpeg and H5 Player

I work at a telehealth company and we use connected medical devices to provide the doctor with real-time information from this equipment; the equipment is operated by a trained health professional.
These devices produce video and audio. Right now we are using them with peerjs (so a peer-to-peer connection), but we are trying to move away from that and have an RPI whose only job is to stream the data (so streaming audio and video).
Because the equipment is meant to be used under instructions from a doctor, we need the doctor to receive the data in real time.
But we also need the trained health professional to see what they are doing (so we need a local feed from the equipment).
How do we capture audio and video
We are using ffmpeg with a Go client that is in charge of managing the ffmpeg processes and streaming them to an SRS server.
This works, but we are seeing a 2-3 second delay when streaming the data (RTMP from ffmpeg and FLV on the front end).
ffmpeg settings:
("ffmpeg", "-f", "v4l2", "-i", "/dev/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "-g", "0", "rtmp://srs-url")
My questions
Is there a way for this setup to achieve low latency (<1 sec) for both the nurse and the doctor?
Is my approach to this sound? Is there a better way?
Flow schema
[Diagram omitted: data exchange and use case flow.]
Note: the nurse and doctor use HTTP-FLV to play the live stream, for low latency.
In your scenario, the latency comes from two places:
The audio/video encoding by FFmpeg on the RPI.
The player consuming and buffering the live stream.
FFmpeg on the RPI
I noticed that you have already set some args; you can see the full help with ffmpeg --help full to check these params.
keyint is equivalent to -g, so remove keyint and set the fps (-r) as well. Setting -r 15 -g 15 makes the GOP 1 s, i.e. 15 frames at 15 fps:
-g <int> set the group of picture (GOP) size (from INT_MIN to INT_MAX) (default 12)
-r rate set frame rate (Hz value, fraction or abbreviation)
The x264 options preset and tune are useful for low latency, but you also need to set one more, profile, to turn off B-frames. Set -profile:v baseline -preset ultrafast -tune zerolatency for lower latency:
-preset <string> Set the encoding preset (cf. x264 --fullhelp) (default "medium")
-tune <string> Tune the encoding params (cf. x264 --fullhelp)
-profile <string> Set profile restrictions (cf. x264 --fullhelp)
You've set -fflags nobuffer, but that's the wrong flag here: nobuffer is for the decoder (player). For the encoder you should use -fflags flush_packets instead:
-fflags <flags> (default autobsf)
flush_packets E.......... reduce the latency by flushing out packets immediately
nobuffer .D......... reduce the latency introduced by optional buffering
Note that E marks encoder options while D marks decoder/player options.
The FFmpeg CLI then looks like this (convert it into your arg list):
-vcodec libx264 \
-r 15 -g 15 \
-profile:v baseline -preset ultrafast -tune zerolatency \
-fflags flush_packets
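Merged into one command line matching the question's setup (a sketch with a placeholder device path and the question's SRS URL, not tested on an RPI):
ffmpeg -f v4l2 -i /dev/video0 -vcodec libx264 -r 15 -g 15 -profile:v baseline -preset ultrafast -tune zerolatency -fflags flush_packets -f flv rtmp://srs-url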
However, these settings only help once you also change your player settings, because the bottleneck is now in the player (1-3 s of latency).
Player
For HTTP-FLV, use conf/realtime.conf for the SRS server, and use ffplay to test the latency:
ffplay -fflags nobuffer -flags low_delay -i "http://your_server/live/stream.flv"
The latency should be <1 s, better than an H5 player, which uses MSE; you can compare the latency of the two.
However, you can't ask your users to run ffplay; it's only for testing during development. So we need a low-latency H5 player, and that means WebRTC.
Configure SRS with conf/rtmp2rtc.conf, which lets you publish from FFmpeg over RTMP with low latency and play the stream back over WebRTC.
When SRS is started it serves a WebRTC player, for example http://localhost:8080/players/rtc_player.html; see the SRS documentation to read more about WebRTC.
The URLs are very similar:
RTMP: rtmp://ip/live/livestream
FLV: http://ip/live/livestream.flv
HLS: http://ip/live/livestream.m3u8
WebRTC: webrtc://ip/live/livestream
If you use the WebRTC player, the latency should be ~500 ms and very stable.
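A minimal local smoke test of that path might look like this (assuming a stock SRS build and any test clip; names are illustrative):
./objs/srs -c conf/rtmp2rtc.conf
ffmpeg -re -i test.mp4 -c:v libx264 -profile:v baseline -preset ultrafast -tune zerolatency -c:a aac -f flv rtmp://localhost/live/livestream
Then open http://localhost:8080/players/rtc_player.html and play webrtc://localhost/live/livestream.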

Speed up the FFmpeg process time in Android

I want to loop a video until the audio stops. Everything works, but it takes too much time.
If my audio file is 4 minutes long, it takes roughly 4 minutes to process, and the output file is also far too large. Here is my command:
String[] cmd = new String[]{"-i",audioFile.getAbsolutePath(),"-filter_complex","movie="+videoFile.getAbsolutePath()+":loop=0,setpts=N/(FRAME_RATE*TB)","-c","copy","-y",createdFile.getAbsolutePath()};
We see many "encoding with ffmpeg on Android is too slow" questions here. Assuming you're encoding with libx264, add -preset ultrafast and -crf 26, or whatever value looks acceptable to you (see FFmpeg Wiki: H.264).
There's not much else you can do if you want to use software-based encoding via ffmpeg & x264. FFmpeg does not yet support MediaCodec hardware encoding as far as I know. It does support MediaCodec video decoding of H.264, HEVC, MPEG-2, MPEG-4, VP8, and VP9, but decoding is not the bottleneck here.
You can try to make sure x264 uses your CPU's capabilities, such as avoiding an x264 build compiled with --disable-asm, but I don't know if that is possible with your hardware.
Note that stream copying (re-muxing) with -c copy is not possible when filtering the same stream, so change it to the more specific -c:a copy since you are not filtering the audio.
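Putting both suggestions together on the command from the question might look like this (a sketch; -crf 26 is only a starting point):
String[] cmd = new String[]{"-i", audioFile.getAbsolutePath(), "-filter_complex", "movie=" + videoFile.getAbsolutePath() + ":loop=0,setpts=N/(FRAME_RATE*TB)", "-c:v", "libx264", "-preset", "ultrafast", "-crf", "26", "-c:a", "copy", "-y", createdFile.getAbsolutePath()};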
Try the command with "-preset", "ultrafast" added:
String[] cmd = new String[]{"-i",audioFile.getAbsolutePath(),"-preset", "ultrafast","-filter_complex","movie="+videoFile.getAbsolutePath()+":loop=0,setpts=N/(FRAME_RATE*TB)","-c","copy","-y",createdFile.getAbsolutePath()};

FFMPEG: RTSP to HLS restream stops with "No more output streams to write to, finishing."

I'm trying to do a live restream of an RTSP feed from a webcam using ffmpeg, but the stream repeatedly stops with the error:
"No more output streams to write to, finishing."
The problem seems to get worse at higher bitrates (256kbps is mostly reliable) and is pretty random in its occurrence. At 1mbps, sometimes the stream will run for several hours without any trouble, on other occasions the stream will fail every few minutes. I've got a cron job running which restarts the stream automatically when it fails, but I would prefer to avoid the continued interruptions.
I have seen this problem reported in a handful of other forums, so this is not a unique problem, but none of those reports had a solution attached. My ffmpeg command looks like this:
ffmpeg -loglevel verbose -r 25 -rtsp_transport tcp -i rtsp://user:password@camera.url/live/ch0 -reset_timestamps 1 -movflags frag_keyframe+empty_moov -bufsize 7168k -stimeout 60000 -hls_flags temp_file -hls_time 5 -hls_wrap 180 -acodec copy -vcodec copy streaming.m3u8 > encode.log 2>&1
What gets me is that the error makes no sense: this is a live stream, so output is always wanted until I shut the stream off. Having it shut down because output supposedly isn't wanted is downright odd. If ffmpeg were complaining about a problem with the input, it would make more sense.
I'm running version 3.3.4, which I believe is the latest.
Update 13 Oct 17:
After extensive testing I've established that the "No more outputs" error message generated by FFmpeg is very misleading. The error seems to be generated if the data coming in over RTSP is delayed, e.g. by other activity on the router the camera is connected through. I've got a large buffer and timeout set, which should be sufficient for 60 seconds, but I can still deliberately trigger this error with far shorter interruptions, so clearly the buffer and timeout aren't having the desired effect. This might be fixed by setting a QoS policy on the router and by checking that the TCP packets from the camera have a suitably high priority set; it's possible this isn't the case.
However, I would still like to improve the robustness of the input stream when it is briefly interrupted. Is there any way to persuade FFmpeg to tolerate this, or to actually make use of the buffer it seems to be ignoring? Can FFmpeg be persuaded to simply stop writing output and wait for input to become available rather than bailing out? Or could I get FFmpeg to duplicate the last complete frame until it's able to get more data? I can live with the stream stuttering a bit, but I've got to significantly reduce the current behaviour, where the stream drops at the slightest hint of a problem.
Further update 13 Oct 2017:
After more tests, I've found that the problem actually seems to be that HLS is incapable of coping with a discontinuity in the incoming video stream. If I deliberately cut the network connection between the camera and FFmpeg, FFmpeg will wait quite a long time for the connection to be re-established. If the interruption was long (>10 seconds), the stream will drop with the "No More Outputs" error the instant the connection is re-established. If the interruption is short, RTSP will actually start pulling data from the camera again, but the stream will then drop with the same error a few seconds later. So it seems clear that the gap in the input data is causing the HLS encoder to have a fit and give up once the stream is resumed, but the size of the gap affects whether the drop is instant or not.
I had a similar problem. In my case the stream stopped without any errors after a few minutes. I fixed this by switching from FreeBSD to Linux. Maybe the problem is bad package dependencies or the ffmpeg version, so my suggestion is to try an older or newer version of ffmpeg, or another OS.
Update: Actually this doesn't solve the problem. I've tested a bit more and the stream stopped after 15 minutes.
I've been facing the same problem. After extended trial and error I found that the problem resided in my CCTV camera parameters: specifically, I adjusted the key frame interval parameter to match the frame rate of the recording camera.
My syntax (Windows):
SET cam1_rtsp="rtsp://192.168.0.93:554/11?timeout=30000000&listen_timeout=30000000&recv_buffer_size=30000000"
ffmpeg -rtsp_transport tcp -vsync -1 -re -i %cam1_rtsp% -vcodec copy -af apad -shortest -async 1 -strftime 1 -reset_timestamps 1 -metadata title="Cam" -map 0 -f segment -segment_time 300 -segment_atclocktime 1 -segment_format mp4 CCTV\%%Y-%%m-%%d_%%H-%%M-%%S.mp4 -loglevel verbose
After this correction I got a smooth 120-hour input stream with no errors.
Hope this helps.

Pushing on-the-fly transcoded video to embedded HTTP results in no seek bar

I'm trying to achieve a simple home-based solution for streaming/transcoding video to a low-end machine that is unable to play the file properly.
I'm trying to do it with ffmpeg (as ffserver will be discontinued).
I found out that ffmpeg has a built-in HTTP server that can be used for this.
The application I'm testing with (for the seek bar) is VLC.
I'm probably doing something wrong here (or trying to do something that others do with other applications).
The ffmpeg command I use is:
d:\ffmpeg\bin\ffmpeg.exe -r 24 -i "D:\test.mkv" -threads 2 -vf scale=1280:720 -c:v libx264 -preset medium -crf 20 -maxrate 1000k -bufsize 2000k -c:a ac3 -seekable 1 -movflags faststart -listen 1 -f mpegts http://127.0.0.1:8080/test.mpegts
This also gives me the ability to start watching whenever I want (as opposed to using RTMP via UDP, which would start the video as soon as it's transcoded).
I read about moving the moov atom to the beginning of the file, which should be handled by -movflags faststart.
I also checked the -re option without any luck; -r 25 is just there to suppress the "Past duration 0.xx too large" warning, which I read is a normal thing.
The test file is one of many with different encoder settings etc.
The settings above give me a seek bar, but it doesn't work, and there is no overall duration (and no progress bar); when I switch from mpegts to matroska/mkv I see the duration of the video (and progress) but no seek bar.
If its possible with only ffmpeg I would prefer to stick to it as standalone solution without extra rtmp/others servers.
After some time I got to the point where:
The seek bar is a player-side thing; HLS v6 supports pointing to a start item, whereas v3 starts wherever it wants (no more than 3 items from the end of the playlist).
Playback and seeking depend on the player (Safari on iOS supports it, others don't); also, ffserver is not needed to push the content.
In the end it works fine without seeking, and if seeking is needed, support it on your end with a player/JS player or via middleware like a proxy video server.
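If you do end up on the HLS route, a minimal sketch (paths and values illustrative; -hls_playlist_type event keeps every segment in the playlist so capable players can seek back through the live stream):
ffmpeg -re -i "D:\test.mkv" -vf scale=1280:720 -c:v libx264 -preset medium -crf 20 -c:a aac -f hls -hls_time 4 -hls_playlist_type event stream.m3u8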

ffmpeg publishing VP8 to Janus Gateway 100% CPU MBP

I'm looking to use Janus Gateway to stream at very low latency to a thousand viewers from a single source.
I'm aiming for VP8 video streaming, since H.264 support hasn't landed in Chrome yet.
My config is
[gst-rpwc]
type = rtp
id = 1
description = Test Stream
audio = no
video = yes
videoport = 8004
videopt = 100
videortpmap = VP8/90000
I'm testing initially on OS X with the built-in webcam. This is the pipeline:
ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0" -b:v 800k -c:v libvpx rtp://x.x.x.x:8004
But the CPU on my Retina MacBook Pro is at 100% the entire time, and I'm only getting a few frames every few seconds on the client end. I believe the conversion from the built-in iSight camera to VP8 is too intensive. Is there a way to make this conversion more efficient?
I'm no expert on Janus, but for a WebRTC VP8 stream the videofmtp you have doesn't make sense, as that string is for H.264; and, to a lesser extent, the videopt isn't what I've seen for VP8 (that value should be 100). The biggest issue here is that ffmpeg can't do DTLS, so even with the mods I've specified, this will probably not work.
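That said, for the CPU question itself, libvpx does have realtime-oriented encoder options; a sketch of the question's pipeline with them added (-deadline realtime and -cpu-used trade quality for encoding speed; the value 4 is illustrative):
ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0" -c:v libvpx -deadline realtime -cpu-used 4 -b:v 800k rtp://x.x.x.x:8004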
