How to live stream from multiple webcams - ffmpeg

I have five webcams whose content I want to live stream to an m3u8 (HLS) stream, so I can use an HTML web player to play it.
My current setup:
I have five systems, each with a webcam connected to it, and I am using RTSP to stream data from each system to AWS.
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam1
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam2
....
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam5
In the cloud, I want to set up a server. I Googled and learned about GStreamer, with which I can set up an RTSP server. The command below has an error (I can't figure out how to set up one server for multiple webcam streams):
gst-launch-1.0 udpsrc port=10000 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! \
mpegtsmux ! hlssink target-duration=2 location="output_%05d.ts" \
playlist-root=http://localhost:8080/hls/stream/ playlists-max=3
My question is how I can set up the RTSP server to differentiate between multiple webcam streams using one server (or do I have to create a server for each webcam stream)?

This might not be a canonical answer, as there are no details about the camera streams, the OS, or your programming language, but you may try the following:
1. Install prerequisites
You would need the gst-rtsp-server library and its Python bindings (and maybe the GStreamer dev packages as well if you want to try from C++).
Assuming a Linux Ubuntu host, you would use:
sudo apt-get install libgstrtspserver-1.0-0 gir1.2-gst-rtsp-server-1.0 python3-gi libgstreamer1.0-dev
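To quickly check that the Python bindings can see the library (assuming the python3-gi package above is installed), the following should print your GStreamer version without raising:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer

Gst.init(None)
print(Gst.version_string())  # e.g. "GStreamer 1.20.3"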
2. Get information about the received streams
You may use various tools for that; with GStreamer you may use:
gst-discoverer-1.0 rtsp://awsurl.com:10000/cam1
For example, if you see:
Topology:
unknown: application/x-rtp
video: H.264 (Constrained Baseline Profile)
Then it is H264-encoded video sent over RTP, so the rtph264depay element can receive it.
You would get more details by adding the verbose flag (-v).
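If you prefer probing from Python rather than the command line, a minimal sketch using GstPbutils.Discoverer (part of gst-plugins-base; the URL is your cam1 stream) might look like:
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstPbutils', '1.0')
from gi.repository import Gst, GstPbutils

Gst.init(None)
# Give the discoverer up to 5 seconds to analyze the stream.
discoverer = GstPbutils.Discoverer.new(5 * Gst.SECOND)
info = discoverer.discover_uri('rtsp://awsurl.com:10000/cam1')
for stream in info.get_video_streams():
    print(stream.get_caps().to_string())  # e.g. video/x-h264, ...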
If you want your RTSP server to stream with H264 encoding and the incoming stream is also H264, then you can just forward it.
If the received stream has a different encoding than the one you want to serve, then you have to decode the video and re-encode it.
3. Run the server:
This Python script runs an RTSP server streaming two cams with H264 encoding (expanding to five should be straightforward; see the loop sketch after the script).
Assuming here that the first cam is H264 encoded, it is just forwarded. For the second camera, the stream is decoded and re-encoded into H264 video.
In the latter case, it is difficult to give a canonical answer, because the decoder and encoder plugins depend on your platform. Some also use a special memory space (NVMM for Nvidia, D3D11 for Windows, ...); in such a case you may have to copy to system memory for encoding with x264enc, or better, use another encoder that works in the same memory space as its input.
import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import GLib, Gst, GstRtspServer

Gst.init(None)
mainloop = GLib.MainLoop()

# The server listens on port 8554 by default.
server = GstRtspServer.RTSPServer()
mounts = server.get_mount_points()

# cam1 is already H264: just depayload and re-payload, no re-encoding.
factory1 = GstRtspServer.RTSPMediaFactory()
factory1.set_launch('( rtspsrc location=rtsp://awsurl.com:10000/cam1 latency=500 ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )')
mounts.add_factory("/cam1", factory1)

# cam2 has some other encoding: decode with uridecodebin, re-encode to H264.
factory2 = GstRtspServer.RTSPMediaFactory()
factory2.set_launch('( uridecodebin uri=rtsp://awsurl.com:10000/cam2 source::latency=500 ! queue ! x264enc key-int-max=15 insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 )')
mounts.add_factory("/cam2", factory2)

server.attach(None)
print("stream ready at rtsp://127.0.0.1:8554/{cam1,cam2,...}")
mainloop.run()
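Expanding to five cams could be a loop inserted before server.attach(None), replacing the two per-cam blocks above. This sketch assumes all five feeds are already H264 (pure forwarding) and keep your /camN naming:
# Mount /cam1 .. /cam5, forwarding each H264 feed without re-encoding.
for i in range(1, 6):
    factory = GstRtspServer.RTSPMediaFactory()
    factory.set_launch(
        '( rtspsrc location=rtsp://awsurl.com:10000/cam%d latency=500 '
        '! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )' % i)
    factory.set_shared(True)  # share one upstream connection between clients of this mount
    mounts.add_factory("/cam%d" % i, factory)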
If you want to use C++ instead of Python, check out the test-launch sample for your GStreamer version (which you can get with gst-launch-1.0 --version); it is similar to this script and can be adapted.
4. Test
Note that it may take a few seconds to start before displaying.
gst-play-1.0 rtsp://[Your AWS IP]:8554/cam1
gst-play-1.0 rtsp://[Your AWS IP]:8554/cam2
I have no experience with AWS; be sure that no firewall or security group blocks TCP port 8554 (RTSP) or the UDP ports negotiated for the RTP media.
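Since your end goal is an m3u8 for an HTML player, you could also segment any of the restreamed cams into HLS with the same GStreamer install. A minimal, untested sketch (the output directory and URL are assumptions to adapt):
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
# Pull cam1 from the RTSP server above and write HLS segments plus playlist.
pipeline = Gst.parse_launch(
    'rtspsrc location=rtsp://127.0.0.1:8554/cam1 latency=500 '
    '! rtph264depay ! h264parse ! mpegtsmux '
    '! hlssink target-duration=2 '
    'location=/var/www/hls/cam1_%05d.ts '
    'playlist-location=/var/www/hls/cam1.m3u8')
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()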

rtsp-simple-server might be a good choice for presenting and broadcasting live streams in various formats/protocols, such as HLS over HTTP.
Even on a meager and old configuration, it has given me decent latency.
If you are looking for reduced latency, you might be surprised by cam2ip. Unfortunately this isn't HLS; it's actually MJPEG, and thus without sound, but with far better latency.

Related

How to extract a single frame from a video stream using GStreamer / closing the stream

I need to take one frame from a video stream from a web camera and write it to a file.
In ffmpeg I could do it in this way:
ffmpeg -i rtsp://10.6.101.40:554/video.3gp -t 1 img.png
My GStreamer command:
gst-launch-1.0 rtspsrc location="rtsp://10.6.101.40:554/video.3gp" is_live=true ! decodebin ! jpegenc ! filesink location=img.jpg
The problem is that the GStreamer process keeps running and does not end. How can I take only one frame and force the stream to close after the file is written?
Is it possible to do this from the command line, or should I code it in C/Python, etc.?
Thanks a lot.
I was able to do this with:
! jpegenc snapshot=TRUE
See the jpegenc snapshot property.
But my source is different, so your mileage may vary.
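Put together with the pipeline from the question, a Python sketch (untested; the URL is taken from the question) could be:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# snapshot=TRUE makes jpegenc emit one frame and then post EOS.
pipeline = Gst.parse_launch(
    'rtspsrc location=rtsp://10.6.101.40:554/video.3gp ! decodebin '
    '! videoconvert ! jpegenc snapshot=TRUE ! filesink location=img.jpg')
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the single frame is written (EOS) or something fails.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)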
Try restricting the number of buffers to 1, e.g. with the num-buffers property (available on source elements). That should hopefully give you a single frame.

Using ffserver to do UDP multicast streaming

Here's the deal. I'm working with IPTV hardware and I need to output a bunch of demo streams. These are MPEG-2 transport streams that need to be straight-up UDP multicast streams. I have an ffmpeg command that works great:
ffmpeg -re -i /Volumes/Data/DemoVideos/GRAILrpsp.ts -acodec copy -vcodec copy -f mpegts udp://239.192.1.82:12000[ttl=1,buffer_size=2097157]
What I would like to do is convert this into an ffserver config file instead of having to start a whole bunch of ffmpeg streams and then figure out how to get them to loop. I'm sure I could do it with the right scripting, but what a pain; isn't that what ffserver is for? I can't find any documentation on doing UDP streaming with ffserver, though. You can set a multicast address and port, but it goes to RTP, which this hardware isn't designed for. Any help would be greatly appreciated.
At the time of this post, according to the ffserver documentation, it doesn't support MPEG-TS directly over UDP:
ffserver receives prerecorded files or FFM streams from some ffmpeg instance as input, then streams them over RTP/RTSP/HTTP.
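That said, if the goal is mainly looping without hand-written shell scripts, newer ffmpeg builds have a -stream_loop input option that removes the need for the while loop. A small launcher sketch (paths and address copied from the question; UDP options passed in ffmpeg's ?key=value URL form):
import subprocess

# -stream_loop -1 repeats the input file forever; everything else is stream copy.
subprocess.run([
    'ffmpeg', '-re', '-stream_loop', '-1',
    '-i', '/Volumes/Data/DemoVideos/GRAILrpsp.ts',
    '-c', 'copy', '-f', 'mpegts',
    'udp://239.192.1.82:12000?ttl=1&buffer_size=2097157',
])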

Setting up rtsp stream on Windows

I am trying to set up an RTSP stream that can be accessed from an application. I have been experimenting with ffmpeg to realize that. I have succeeded insofar as I was able to stream from ffmpeg to ffplay, but I could not load the stream in VLC, for example. Here are the calls that I made from two different shells on the same machine:
ffmpeg.exe -y -loop 1 -r 24 -i test_1.jpg -vcodec libx264 -tune stillimage -f rtsp rtsp://127.0.0.1:1234/stream.sdp
ffplay.exe -rtsp_flags listen rtsp://127.0.0.1:1234/stream.sdp
Can anybody explain to me what I would have to do to load the stream as a network stream in VLC? Any help is appreciated.
I have done this before and I'm not sure what was wrong with the RTSP output of ffmpeg. What I can say right now is: please consider using the Live555 library for any streaming scenario, because the ffmpeg code for the RTP muxer is not good and is buggy. ffmpeg has another streaming-server solution called ffserver, which prepares an ffmpeg pipe for VLC or another third-party application, and that is badly written and buggy too. The libav group (the fork of the libav* libraries) never used the ffserver code and I'm not sure they have any plan to adopt it; they have ffplay (avplay), ffmpeg (avconv) and ffprobe, but no ffserver.
If you want to use Live555, which is really easy, just go to their website (www.live555.com), download the source code and build the MediaServer application (it is in the 'mediaServer' folder). If you read the code's documentation, I'm sure you won't have any problem. It's a basic RTSP server for streaming any (supported) file accessible on your HDD via an RTSP URL on your server.
If you have any problem with the code, just comment here, and I can help you further with Live555.

How to stream video from webcam using Gstreamer?

How can I stream video (and, if possible, audio too) from a webcam using GStreamer? I have already tried to stream video from a source, but I can't stream video from the webcam on Windows. How can I do this?
Client:
VIDEO_CAPS="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998"
DEST=localhost
VIDEO_DEC="rtph263pdepay ! avdec_h263"
VIDEO_SINK="videoconvert ! autovideosink"
LATENCY=100
gst-launch-1.0 -v rtpbin name=rtpbin latency=$LATENCY \
udpsrc caps="$VIDEO_CAPS" port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! $VIDEO_DEC ! $VIDEO_SINK \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! udpsink host=$DEST port=5005 sync=false async=false
Server:
DEST=127.0.0.1
VOFFSET=0
AOFFSET=0
VELEM="ksvideosrc is-live=1"
VCAPS="video/x-raw,width=352,height=288,framerate=15/1"
VSOURCE="$VELEM ! $VCAPS"
VENC="avenc_h263p ! rtph263ppay"
VRTPSINK="udpsink port=5000 host=$DEST ts-offset=$VOFFSET name=vrtpsink"
VRTCPSINK="udpsink port=5001 host=$DEST sync=false async=false name=vrtcpsink"
VRTCPSRC="udpsrc port=5005 name=vrtpsrc"
gst-launch-1.0 rtpbin name=rtpbin \
$VSOURCE ! $VENC ! rtpbin.send_rtp_sink_2 \
rtpbin.send_rtp_src_2 ! $VRTPSINK \
rtpbin.send_rtcp_src_2 ! $VRTCPSINK \
$VRTCPSRC ! rtpbin.recv_rtcp_sink_2
You will have to use GStreamer 1.3.90 or newer and the ksvideosrc element that is available only since that version.
And then you can stream it just like any other input; the details depend on which codecs, container format, streaming protocol and network protocol you want to use. The same goes for audio, which works basically the same way as video.
http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp
Here you can find some examples that use RTP for streaming: server-side and client-side examples, audio-only, video-only, or both, including streaming from real audio/video capture sources (written for Linux, but on Windows it works exactly the same, just with the Windows-specific capture elements).
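As a minimal starting point, the send side of the scripts above can also be driven from Python with the same elements (untested sketch; adjust host, port and caps to your setup):
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
# Windows webcam -> H.263+ -> RTP over UDP, mirroring the server script above.
pipeline = Gst.parse_launch(
    'ksvideosrc ! video/x-raw,width=352,height=288,framerate=15/1 '
    '! videoconvert ! avenc_h263p ! rtph263ppay '
    '! udpsink host=127.0.0.1 port=5000')
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()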

Is it possible to pull a RTMP stream from one server and broadcast it to another?

I essentially have a situation where I need to pull a stream from one Wowza media server and publish it to a Red5 or Flash Media Server instance with FFmpeg. Is there a command to do this? I'm looking for something like this:
while [ true ]; do
ffmpeg -i rtmp://localhost:2000/vod/streamName.flv rtmp://localhost:1935/live/streamName
done
Is this currently possible with FFmpeg? I remember reading something like this, but I can't remember exactly how to do it.
Yes. An example (pulling from a local server, publishing to a local server):
$ ffmpeg -analyzeduration 0 -i "rtmp://localhost/live/b live=1" -f flv rtmp://localhost:1936/live/c
analyzeduration is set to 0 to make it start faster. You can also add other parameters there to re-encode, etc., if desired.
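If you also want the relay to restart automatically when the source drops (like the while loop in the question), a small supervisor sketch (same command line as above, with stream copy assumed):
import subprocess
import time

# Restart the relay whenever ffmpeg exits (e.g. the source went away).
CMD = ['ffmpeg', '-analyzeduration', '0',
       '-i', 'rtmp://localhost/live/b live=1',
       '-c', 'copy', '-f', 'flv', 'rtmp://localhost:1936/live/c']
while True:
    subprocess.run(CMD)
    time.sleep(1)  # brief backoff before reconnecting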
Try this pattern:
$ ffmpeg -i "[InputSourceAddress]" -f [OutputFileFormat] "[OutputSourceAddress]"
The input source address can be rtmp, or rtsp/m3u8/etc.
