I work at a telehealth company and we are using connected medical devices to provide the doctor with real-time information from this equipment; the equipment is operated by a trained health professional.
Those devices produce video and audio. Right now we are using them with peerjs (so a peer-to-peer connection), but we are trying to move away from that and have an RPi whose only job is to stream the data (so streaming audio and video).
Because the equipment is supposed to be used under instructions from a doctor, we need the doctor to receive the data in real time.
But we also need the trained health professional to see what he is doing (so we need a local feed from the equipment).
How we capture audio and video
We are using ffmpeg with a Go client that is in charge of managing the ffmpeg processes and streaming them to an SRS server.
This works, but we are seeing a 2-3 second delay when streaming the data (RTMP from ffmpeg, FLV on the front end).
FFmpeg settings:
("ffmpeg", "-f", "v4l2", `-i`, "*/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "-g", "0", "rtmp://srs-url")
My questions
Is there a way for this setup to achieve low latency (<1 sec), for both the nurse and the doctor?
Is the way I want to achieve this sound? Is there a better way?
Flow schema
Data exchange and use case flow:
Note: The nurse and doctor use HTTP-FLV to play the live stream, for low latency.
In your scenario, the latency is introduced by two parts:
The audio/video encoding by FFmpeg on the RPi.
The player that consumes and renders the live stream.
FFmpeg on the RPi
I noticed that you have already set some args; you can see the full help with ffmpeg --help full to check these params.
The keyint option is equivalent to -g, so please remove keyint and set the fps (-r) instead. Set -r 15 -g 15, which makes the GOP 1 s at 15 fps:
-g <int> set the group of picture (GOP) size (from INT_MIN to INT_MAX) (default 12)
-r rate set frame rate (Hz value, fraction or abbreviation)
The x264 options preset and tune are useful for low latency, but you also need to set profile to turn off B-frames. Set -profile baseline -preset ultrafast -tune zerolatency for lower latency:
-preset <string> Set the encoding preset (cf. x264 --fullhelp) (default "medium")
-tune <string> Tune the encoding params (cf. x264 --fullhelp)
-profile <string> Set profile restrictions (cf. x264 --fullhelp)
You set -fflags nobuffer, which is wrong here because it is for the decoder (player); for the encoder you should use -fflags flush_packets instead:
-fflags <flags> (default autobsf)
flush_packets E.......... reduce the latency by flushing out packets immediately
nobuffer .D......... reduce the latency introduced by optional buffering
Note that the E means encoder while D means decoder/player.
The CLI for FFmpeg is below; please convert it to your params format:
-vcodec libx264 \
-r 15 -g 15 \
-profile baseline -preset ultrafast -tune zerolatency \
-fflags flush_packets
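Putting those changes together with the args from the question, the full encoder command would look roughly like this (the device path */video0 and rtmp://srs-url are kept as the placeholders from the question, and profile is written with the :v stream specifier so it only applies to the video encoder):
# assembled sketch from the suggestions above; placeholders are from the question
ffmpeg -f v4l2 -i */video0 \
  -vcodec libx264 \
  -r 15 -g 15 \
  -profile:v baseline -preset ultrafast -tune zerolatency \
  -fflags flush_packets \
  -b:a 160k -threads 0 \
  -f flv rtmp://srs-url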
However, I think these settings only help once you also change your player settings, because the bottleneck is now in the player (latency 1~3s).
Player
For HTTP-FLV, please use conf/realtime.conf for the SRS server, and use ffplay to test the latency:
ffplay -fflags nobuffer -flags low_delay -i "http://your_server/live/stream.flv"
I think the latency should be <1s, better than an H5 player, which uses MSE; you can compare their latencies.
However, you can't ask your users to use ffplay; it's only for testing during development. So we must use a low-latency H5 player, which means WebRTC.
Please configure SRS with conf/rtmp2rtc.conf, which allows you to publish from FFmpeg over RTMP with low latency and play the stream via WebRTC.
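Assuming SRS was built from source in the usual layout (so the binary is at ./objs/srs), starting it with that config is simply:
# assumes an in-tree build of SRS
./objs/srs -c conf/rtmp2rtc.conf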
When SRS has started, there is a WebRTC player, for example http://localhost:8080/players/rtc_player.html; please read more about WebRTC here.
The URLs are very similar:
RTMP: rtmp://ip/live/livestream
FLV: http://ip/live/livestream.flv
HLS: http://ip/live/livestream.m3u8
WebRTC: webrtc://ip/live/livestream
If you use the WebRTC player, the latency should be ~500ms and very stable.
Related
I can download http://www.w6rz.net/adv8dvbt23.ts.
And there are many sample DVB-T TS files there.
But I want to convert my own video file to a TS file for DVB-T.
First, I checked on Google, but I could not find any answer.
I think this does not make sense, or my way of thinking may be wrong.
Can FFmpeg be used for this?
But there is no parameter for transmit mode, QAM / 64QAM, or guard interval.
FFmpeg can be used for this? But there is no parameter for transmit mode, QAM / 64QAM, or guard interval.
As I explained already:
ffmpeg doesn't know anything about RF things like constellation type; it is just a tool to transcode between different video formats. .ts stands for "transport stream", and it's the video container format that DVB uses. The GNU Radio transmit flowgraphs, on the other hand, know nothing about video things; all they do is take the bits from a file. So that file needs to be in a format that the receiver would understand, and that's why I instructed you to use FFmpeg with the parameters you need. Since I don't know which bitrate you're planning on transmitting, I can't help you with how to use ffmpeg.
So, you need to generate video data that your DVB-T receiver understands, but even more importantly, you need to put it in a container that ensures a constant bitrate.
As pointed out in a different comment on your ham.stackexchange.com question about the topic, your prime source of examples would be GNU Radio's own gr-dtv module; when you look into gnuradio/gr-dtv/examples/README.dvbt, you'll find a link to https://github.com/drmpeg/dtv-utils, W6RZ's own tooling :)
There you'll find the tools necessary to calculate the exact bitrate your MPEG transport stream needs to have. Remember, a DVB-T transmitter has to transmit at a constant number of bits per second, so your video container must be constant-bitrate. That's why a transport stream pads the video data to achieve a constant rate.
Then, you'll use ffmpeg to transcode your video and put it into the transport stream container:
# -s 720x576: resolution; this is a good choice, since most TVs will deal with it
# -r 25: frames per second; use 25
# -flags cgop+ilme -sc_threshold 1000000000: MPEG codec options
# -b:v 2M: video *codec data* bit rate (defines video quality); must be lower than
#          the stream bit rate, so < muxrate - (audio bitrate)
# -minrate:v 2M -maxrate:v 2M: enforce a constant video bit rate
# -acodec mp2 -ac 2 -b:a 192k: audio codec, channels and bitrate
# -muxrate: the constant stream rate calculated with the dtv-utils tools
# -f mpegts: specify that you want an MPEG transport stream container as output
ffmpeg -re -i inputvideo.mpeg \
  -vcodec mpeg2video \
  -s 720x576 \
  -r 25 \
  -flags cgop+ilme -sc_threshold 1000000000 \
  -b:v 2M \
  -minrate:v 2M -maxrate:v 2M \
  -acodec mp2 -ac 2 -b:a 192k \
  -muxrate "${RATE_FROM_TOOL}" \
  -f mpegts \
  outputfile.ts
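As a quick sanity check that the mux came out at roughly the rate you asked for, you can read the container bit rate back with ffprobe; this is purely illustrative, not a substitute for the rate calculation above:
# illustrative check of the resulting container bit rate
ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1 outputfile.ts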
I'm trying to put together a reliable, reasonably low (<2s) latency desktop window share to browser solution. Currently I have:
Client sender using FFmpeg:
ffmpeg -f gdigrab -i "title=notepad.exe" -r 10 -framerate 10 -c:v libx264 -g 50 -preset fast -tune zerolatency -f rtp rtp://192.168.1.85:1234
Server re-stream to HTTP using VLC:
vlc -vv test.sdp --sout=#transcode{vcodec=theo,vb=1600,scale=1,channels=1,acodec=none}:http{dst=:8080/webcam.ogg} :no-sout-rtp-sap :no-sout-standard-sap :sout-keep
where the SDP file is generated from the output of the ffmpeg command.
Client browser:
<video id="video" autoplay loop muted preload="auto">
<source src="http://192.168.1.85:8080/webcam.ogg" type="video/ogg"/>
</video>
This works and gives good quality. But the latency is terrible (around 10s) and I'm at a loss to know how to tune it. I know that the latency is in the VLC transcoding/restreaming - displaying the RTP stream from the client on the server only has around 1s lag.
I guess there are two questions - can this approach be sensibly tuned, or is the approach wrong to start with?
Sub-2-second latency over HTTP is near impossible. Latency can be reduced, but you probably need to change out your HTTP origin software, switch delivery to chunked transfer, optimize your encoding pipeline, and manage your player buffer. Even then, I doubt you will get to 2 seconds.
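For reference, "chunked transfer" here usually means something like fragmented MP4 written out as it is produced and served by an HTTP origin that supports chunked responses. A rough sketch of the encoder side only, reusing the capture from the question; the preset, GOP and movflags values are illustrative choices and the origin server itself is not shown:
# encoder side only; preset/GOP/movflags are illustrative, the HTTP origin is not shown
ffmpeg -f gdigrab -i "title=notepad.exe" \
  -c:v libx264 -preset veryfast -tune zerolatency -g 20 \
  -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof \
  pipe:1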
I'm trying to achieve a simple home-based solution for streaming/transcoding video to a low-end machine that is unable to play the file properly.
I'm trying to do it with ffmpeg (as ffserver is being discontinued).
I found out that ffmpeg has a built-in HTTP server that can be used for this.
The application I'm testing with (for the seek bar) is VLC.
I'm probably doing something wrong here (or trying to do something that others do with other applications).
The ffmpeg command I use is:
d:\ffmpeg\bin\ffmpeg.exe -r 24 -i "D:\test.mkv" -threads 2 -vf scale=1280:720 ^
  -c:v libx264 -preset medium -crf 20 -maxrate 1000k -bufsize 2000k ^
  -c:a ac3 -seekable 1 -movflags faststart -listen 1 -f mpegts http://127.0.0.1:8080/test.mpegts
This also gives me the ability to start watching when I want (as opposed to using RTMP via UDP, which would start the video as soon as it is transcoded).
I read about moving the moov atom to the beginning of the file, which should be handled by -movflags faststart.
I also tried the -re option without any luck; -r 25 is just to suppress the "Past duration 0.xx too large" warning, which I read is a normal thing.
The test file is one of many with different encoder settings, etc.
The settings above give me a seek bar, but it doesn't work and there is no overall duration (and no progress bar); when I switch from mpegts to matroska/mkv I see the video duration (and progress) but no seek bar.
If it's possible with only ffmpeg, I would prefer to stick with it as a standalone solution without extra RTMP or other servers.
After some time I got to the point where:
The seek bar is a player-side thing; HLS protocol version 6 supports pointing at a start item, whereas v3 starts wherever it wants (not more than 3 items from the end of the list).
Playback and seeking depend on the player (Safari on iOS supports it, others don't); also, ffserver is not needed to push the content.
In the end it works fine without seeking, and if seeking is needed, support it on your end with the player / a JS player, or via middleware like a proxy video server.
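For example, a rough sketch of an ffmpeg-only HLS setup that keeps every segment in the playlist, so a player that supports it can seek back; the segment length, output path, and AAC audio are assumptions, not a tested recipe:
# illustrative only: segment length, output path and AAC audio are assumptions
ffmpeg -i "D:\test.mkv" -vf scale=1280:720 \
  -c:v libx264 -preset medium -crf 20 \
  -c:a aac \
  -f hls -hls_time 4 -hls_list_size 0 -hls_playlist_type event \
  D:\hls\test.m3u8
The resulting .m3u8 and segment files still need to be served by some static HTTP server; ffmpeg's -listen option serves a single stream, not a directory of HLS files.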
I'm trying to capture and stream video from a 5MP USB camera using ffmpeg 3.2.2 on Windows. Here's the command line that I'm using:
ffmpeg -f dshow -video_size 320x240 -framerate 30 -i video="HD USB Camera" -vcodec libx264 -preset ultrafast -tune zerolatency -g 60 -f mpegts udp://192.168.1.100:10000
The destination for my stream (an Ubuntu box on the same subnet) is running ffplay via:
ffplay -i udp://127.0.0.1:10000
This works but the video stream seems like it's delayed by 8 - 10 seconds. It's my understanding that the destination can't begin displaying the stream until it receives an I-frame so I tried specifying a GOP value of 60 thinking that this would cause an I-frame to be inserted every 2 seconds (@ 30 FPS).
The Windows machine that's doing the transcoding is running an i7-3840QM @ 2.80GHz and has 32 GB RAM. FFmpeg appears to be using very little CPU (like 2%) so it doesn't seem like it's CPU bound. Just as a test, I tried ingesting an MP4 file and not doing any transcoding (ffmpeg -re -i localFile.mp4 -c copy -f mpegts udp://192.168.1.100:10000) but it still takes several seconds before the stream is displayed on the Ubuntu system.
On a related note, I'm also evaluating a trial version of the Wowza Streaming Engine server and when I direct my ffmpeg stream to Wowza, I get the same 8 - 10 second delay before the Wowza test player starts playing it back. For what it's worth, once the stream starts playing, it seems to be running fine (other than the fact that everything is "behind" by several seconds).
I'm new to video streaming so I might be missing something obvious here but can anyone tell me what might be causing this delay or suggest how I might further troubleshoot the problem? Thank you!
Try setting these values:
analyzeduration integer (input)
Specify how many microseconds are analyzed to probe the input. A
higher value will enable detecting more accurate information, but will
increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
probesize integer (input)
Set probing size in bytes, i.e. the size of the data to analyze to get
stream information. A higher value will enable detecting more
information in case it is dispersed into the stream, but will increase
latency. Must be an integer not lesser than 32. It is 5000000 by
default.
FFmpeg docs
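Applied to the receiving side from the question, lowering both values well below their 5,000,000 defaults looks something like this; the exact numbers are illustrative, and -fflags nobuffer is a separate, commonly combined option:
# values below the defaults are illustrative; nobuffer is a separate low-latency flag
ffplay -fflags nobuffer -analyzeduration 1000000 -probesize 1000000 -i udp://127.0.0.1:10000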
I'm looking to use Janus Gateway to stream with very low latency to a thousand viewers from a single source.
I'm aiming for VP8 video streaming since H.264 support hasn't landed in Chrome yet.
My config is:
[gst-rpwc]
type = rtp
id = 1
description = Test Stream
audio = no
video = yes
videoport = 8004
videopt = 100
videortpmap = VP8/90000
I'm testing initially on OS X with the built-in webcam. This is the pipeline:
ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0" -b:v 800k -c:v libvpx rtp://x.x.x.x:8004
But my CPU on a Retina MacBook Pro is at 100% the entire time, and I'm only getting a few frames every few seconds on the client end. I believe the conversion from the built-in iSight camera to VP8 is too intensive. Is there a way to make this conversion more efficient?
I'm no expert on Janus, but for a WebRTC VP8 stream the videofmtp you have doesn't make sense, as that string is for H.264; and, to a lesser extent, the videopt isn't what I've seen for VP8: that value should be 100. The biggest issue here is that ffmpeg can't do DTLS, so even with the mods I've specified, this will probably not work.
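On the encoder-CPU side of the original question (and separate from the DTLS problem above), libvpx does expose a realtime mode through ffmpeg that trades quality for encoding speed; this is a sketch under that assumption, not something suggested in the answer:
# illustrative libvpx realtime settings; not from the answer above
ffmpeg -f avfoundation -video_size 640x480 -framerate 30 -i "0" \
  -c:v libvpx -b:v 800k \
  -deadline realtime -cpu-used 4 \
  -f rtp rtp://x.x.x.x:8004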