I need to take one frame from a web camera's video stream and write it to a file.
In ffmpeg I could do it this way:
ffmpeg -i rtsp://10.6.101.40:554/video.3gp -t 1 img.png
My GStreamer command:
gst-launch-1.0 rtspsrc location="rtsp://10.6.101.40:554/video.3gp" is-live=true ! decodebin ! jpegenc ! filesink location=img.jpg
The problem is that the GStreamer process keeps running and never exits. How can I grab just one frame and force the stream to close after the file is written?
Is it possible to do this from the command line, or should I code it in C/Python, etc.?
Thanks a lot.
I was able to do this with:
! jpegenc snapshot=TRUE
See the jpegenc snapshot property.
My source is different, though, so your mileage may vary.
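For reference, a full pipeline using that property might look like this (untested against your camera, so treat it as a sketch):
gst-launch-1.0 rtspsrc location="rtsp://10.6.101.40:554/video.3gp" ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=img.jpg
With snapshot=true, jpegenc sends EOS after the first encoded frame, so the pipeline stops once the file is written.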
Try restricting the number of buffers to 1 with the num-buffers property; this will hopefully give you a single frame. (num-buffers comes from GstBaseSrc, so it has to be set on a source element; see the example below.)
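For instance, with a local camera on Linux (a sketch; it does not apply directly to rtspsrc, which is a bin rather than a GstBaseSrc):
gst-launch-1.0 v4l2src num-buffers=1 ! videoconvert ! jpegenc ! filesink location=img.jpg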
I have five webcams whose content I want to live-stream as m3u8 (an HLS stream), so I can use an HTML web player to play it.
My current setup:
I have five systems each has a webcam connected to it, so I am using RTSP to stream data from the system to AWS.
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam1
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam2
....
./ffmpeg -f avfoundation -s 640x480 -r 30 -i "0" -f rtsp rtsp://awsurl.com:10000/cam5
On the cloud, I want to set up a server. I Googled and learned about GStreamer, with which I can set up an RTSP server. The command below has an error (I can't figure out how to set up one server for multiple webcam streams):
gst-launch-1.0 udpsrc port=10000 ! rtph264depay ! h264parse ! video/x-h264,stream-format=avc ! \
mpegtsmux ! hlssink target-duration=2 location="output_%05d.ts" \
playlist-root=http://localhost:8080/hls/stream/ playlists-max=3
My question is how I can set up the RTSP server to differentiate between multiple webcam streams using one server (or do I have to create a server for each webcam stream)?
This might not be a canonical answer, as there are no details about the camera streams, the OS, or your programming language, but you may try the following:
1. Install prerequisites
You will need the gst-rtsp-server library (and possibly the GStreamer development packages as well, if you want to try from C++).
Assuming a Linux Ubuntu host, you would use:
sudo apt-get install libgstrtspserver-1.0 libgstreamer1.0-dev
2. Get information about the received streams
You may use various tools for that; with GStreamer you may use:
gst-discoverer-1.0 rtsp://awsurl.com:10000/cam1
For example, if you see:
Topology:
  unknown: application/x-rtp
    video: H.264 (Constrained Baseline Profile)
Then it is H.264-encoded video sent over RTP (hence RTP/H264).
You would get more details by adding the verbose flag (-v).
If you want your RTSP server to stream with H264 encoding and the incoming stream is also H264, then you would just forward.
If the received stream has a different encoding than the one you want to serve, then you have to decode the video and re-encode it.
3. Run the server:
This Python script runs an RTSP server streaming 2 cams with H264 encoding (expanding it to 5 should be straightforward).
Assuming here that the first cam is H264 encoded, its stream is just forwarded. For the second camera, the stream is decoded and re-encoded into H264 video.
In the latter case it is difficult to give a canonical answer, because the decoder and encoder plugins depend on your platform. Some also use a special memory space (NVMM for Nvidia, D3D11 for Windows, ...); in such a case you may have to copy to system memory for encoding with x264enc, or better, use another encoder working in the same memory space as the input (see the illustration after the script).
import gi
gi.require_version('Gst','1.0')
gi.require_version('GstVideo','1.0')
gi.require_version('GstRtspServer','1.0')
from gi.repository import GObject, GLib, Gst, GstVideo, GstRtspServer

Gst.init(None)
mainloop = GLib.MainLoop()
server = GstRtspServer.RTSPServer()
mounts = server.get_mount_points()

# /cam1: the incoming stream is already H264, so just depayload,
# parse and re-payload it (no transcoding).
factory1 = GstRtspServer.RTSPMediaFactory()
factory1.set_launch('( rtspsrc location=rtsp://awsurl.com:10000/cam1 latency=500 ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )')
mounts.add_factory("/cam1", factory1)

# /cam2: decode whatever encoding comes in and re-encode to H264.
factory2 = GstRtspServer.RTSPMediaFactory()
factory2.set_launch('( uridecodebin uri=rtsp://awsurl.com:10000/cam2 source::latency=500 ! queue ! x264enc key-int-max=15 insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 )')
mounts.add_factory("/cam2", factory2)

# Attach to the default main context; the server listens on port 8554 by default.
server.attach(None)
print("stream ready at rtsp://127.0.0.1:8554/{cam1,cam2,...}")
mainloop.run()
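For illustration only (an assumption, not part of the answer above): on an Nvidia Jetson, where the decoder outputs into NVMM memory, the /cam2 launch string could copy frames back to system memory before x264enc with nvvidconv:
factory2.set_launch('( uridecodebin uri=rtsp://awsurl.com:10000/cam2 source::latency=500 ! nvvidconv ! video/x-raw ! queue ! x264enc key-int-max=15 insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 )')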
If you want to use C++ instead of Python, check out the test-launch sample for your GStreamer version (you can get the version with gst-launch-1.0 --version), which is similar to this script, and adapt it.
4. Test
Note that it may take a few seconds to start before displaying.
gst-play-1.0 rtsp://[Your AWS IP]:8554/cam1
gst-play-1.0 rtsp://[Your AWS IP]:8554/cam2
I have no experience with AWS; be sure that no firewall blocks port 8554 (RTSP control runs over TCP, and the media itself may additionally use UDP ports).
rtsp-simple-server might be a good choice for presenting and broadcasting live streams through various formats/protocols such as HLS over HTTP.
Even on a meager and old configuration, it has still given me decent latency.
If you are looking for reduced latency, you might be surprised by cam2ip. Unfortunately this isn't HLS; it's actually MJPEG, and thus without sound, but with far better latency.
The wavparse documentation provides this example to play a .wav audio file through the speakers on Linux with ALSA audio.
gst-launch-1.0 filesrc location=sine.wav ! wavparse ! audioconvert ! alsasink
I have tried to adapt this for use on Windows with the wasapisink or the autoaudiosink:
gst-launch-1.0.exe -v filesrc location=1.wav ! wavparse ! audioconvert ! autoaudiosink
gst-launch-1.0.exe -v filesrc location=1.wav ! wavparse ! audioconvert ! wasapisink
Both attempts result in an error:
ERROR: from element /GstPipeline:pipeline0/GstWavParse:wavparse0: Internal data stream error.
The full logs look like this:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstWavParse:wavparse0.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, channels=(int)2, channel-mask=(bitmask)0x0000000000000003, rate=(int)44100
ERROR: from element /GstPipeline:pipeline0/GstWavParse:wavparse0: Internal data stream error.
Additional debug info:
../gst/wavparse/gstwavparse.c(2308): gst_wavparse_loop (): /GstPipeline:pipeline0/GstWavParse:wavparse0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
ERROR: from element /GstPipeline:pipeline0/GstAutoAudioSink:autoaudiosink0/GstWasapiSink:autoaudiosink0-actual-sink-wasapi: The stream is in the wrong format.
Additional debug info:
../gst-libs/gst/audio/gstaudiobasesink.c(1117): gst_audio_base_sink_wait_event (): /GstPipeline:pipeline0/GstAutoAudioSink:autoaudiosink0/GstWasapiSink:autoaudiosink0-actual-sink-wasapi:
Sink not negotiated before eos event.
ERROR: pipeline doesn't want to preroll.
Freeing pipeline ...
I have tried with multiple .wav files from various sources. Always the same result.
I have confirmed that autoaudiosink works on my PC because both of these commands generated an audible tone:
gst-launch-1.0.exe -v audiotestsrc samplesperbuffer=160 ! audioconvert ! autoaudiosink
gst-launch-1.0.exe -v audiotestsrc samplesperbuffer=160 ! autoaudiosink
I have also confirmed that playbin can play the file through my speakers, but this doesn't work for me because ultimately I will need to split up the pipeline a bit more.
gst-launch-1.0.exe -v playbin uri=file:///C:/1.wav
I am using GStreamer 1.18.0 on Windows 10. How do I play the contents of a .wav file through my speakers using a filesrc and autoaudiosink?
Maybe try audioresample before or after audioconvert too. I'm not entirely sure about current Windows audio subsystems, but nowadays hardware tends to require a sample rate of 48000 Hz. If the audio subsystem does not take care of the conversion, you need to take care of it yourself.
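For example (a guess, assuming the device wants a resampled rate):
gst-launch-1.0.exe -v filesrc location=1.wav ! wavparse ! audioconvert ! audioresample ! autoaudiosink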
In a GStreamer pipeline, I'm trying to figure out if there's a way to specify that I only want key frames from an RTSP stream.
In ffmpeg you can do this with the -skip_frame nokey flag. E.g.:
ffmpeg -skip_frame nokey -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov -qscale 0 -r 1/1 frame%03d.jpg
The corresponding gstreamer command to read the RTSP feed looks like this:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov ! decodebin ! videorate ! "video/x-raw,framerate=1/1" ! videoconvert ! autovideosink
Does anyone know if it is possible to ask gstreamer to only return keyframes?
I think you could try adding a GST_PAD_PROBE_TYPE_BUFFER pad probe and returning GST_PAD_PROBE_DROP for buffers that have the GST_BUFFER_FLAG_DELTA_UNIT flag set.
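A minimal sketch of that idea in Python (the pipeline string and the element name "depay" are assumptions; adapt them to your stream):
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
# Hypothetical pipeline; the depayloader is named so the probe can be attached to it
pipeline = Gst.parse_launch(
    'rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov '
    '! rtph264depay name=depay ! h264parse ! avdec_h264 '
    '! videoconvert ! autovideosink')

def drop_delta_units(pad, info):
    buf = info.get_buffer()
    # Non-keyframes carry GST_BUFFER_FLAG_DELTA_UNIT; drop them
    if buf.has_flags(Gst.BufferFlags.DELTA_UNIT):
        return Gst.PadProbeReturn.DROP
    return Gst.PadProbeReturn.OK

depay_src = pipeline.get_by_name('depay').get_static_pad('src')
depay_src.add_probe(Gst.PadProbeType.BUFFER, drop_delta_units)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()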
After spending days looking for a complete answer to this question, I eventually ended up with a solution that gave me the RTSP processing boost I was looking for.
Here is the diff of a pipeline in Python that transitioned from processing every RTSP frame to only processing key frames.
https://github.com/ambianic/ambianic-edge/pull/171/files#diff-f89415777c559bba294250e788230c5e
First register for the stream start bus event:
Gst.MessageType.STREAM_START
This is triggered when stream processing starts. When this event occurs, request a seek to the next keyframe.
When the request completes, the pipeline triggers the next bus event we need to listen for:
Gst.MessageType.ASYNC_DONE
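A minimal sketch of that bus wiring (the handler name is an assumption; self.gst_pipeline and _gst_seek_next_keyframe match the snippet below):
def _on_gst_bus_message(self, bus, msg):
    # STREAM_START fires once processing begins; ASYNC_DONE fires each time
    # a previous seek completes. Both trigger a new seek to the next keyframe.
    if msg.type in (Gst.MessageType.STREAM_START, Gst.MessageType.ASYNC_DONE):
        self._gst_seek_next_keyframe()

# registered e.g. in __init__:
#   bus = self.gst_pipeline.get_bus()
#   bus.add_signal_watch()
#   bus.connect('message', self._on_gst_bus_message)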
Finally, here is the keyframe seek request itself:
def _gst_seek_next_keyframe(self):
    found, pos_int = self.gst_pipeline.query_position(Gst.Format.TIME)
    if not found:
        log.warning('Gst current pipeline position not found.')
        return
    rate = 1.0  # keep rate close to real time
    flags = \
        Gst.SeekFlags.FLUSH | Gst.SeekFlags.KEY_UNIT | \
        Gst.SeekFlags.TRICKMODE | Gst.SeekFlags.SNAP_AFTER | \
        Gst.SeekFlags.TRICKMODE_KEY_UNITS | \
        Gst.SeekFlags.TRICKMODE_NO_AUDIO
    is_event_handled = self.gst_pipeline.seek(
        rate,
        Gst.Format.TIME,
        flags,
        Gst.SeekType.SET, pos_int,
        Gst.SeekType.END, 0)
You can use a new seek event (gst_event_new_seek) with the GstSeekFlags for trick mode (GST_SEEK_FLAG_TRICKMODE), frame skipping (GST_SEEK_FLAG_SKIP), and keyframes only (GST_SEEK_FLAG_TRICKMODE_KEY_UNITS).
You can also use identity and its property drop-buffer-flags to filter for GST_BUFFER_FLAG_DELTA_UNIT and maybe GST_BUFFER_FLAG_DROPPABLE.
See trick modes, seeking, and GstSeekFlags in the documentation for the seek approach, and identity:drop-buffer-flags and GstBufferFlags for identity.
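For example, something along these lines (untested) would drop every non-keyframe before the decoder:
gst-launch-1.0 rtspsrc location=rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov ! rtph264depay ! h264parse ! identity drop-buffer-flags=delta-unit ! avdec_h264 ! videoconvert ! autovideosink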
I am having trouble playing the audio from an RTSP server. Video playback works fine, but an error occurs when I try to play audio.
The following is the command used to play video:
C:\gstreamer\1.0\x86_64\bin>gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 ! decodebin ! autovideosink
However, when I change the autovideosink to autoaudiosink, as follows:
C:\gstreamer\1.0\x86_64\bin>gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 ! decodebin ! autoaudiosink
I get the errors below:
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1: Internal data flow error.
Additional debug info:
gstbasesrc.c(2933): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1:
streaming task paused, reason not-linked (-1)
I am new to both Stack Overflow and GStreamer; any help would be much appreciated.
Thanks to thiagoss's reply, I had my first success playing both video and audio using the following pipeline:
gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 name=src src. ! decodebin ! videoconvert ! autovideosink src. ! decodebin ! audioconvert ! autoaudiosink
IIRC, rtspsrc will output one pad for each stream (video and audio might be separate), so you could be linking your video output to an audio sink.
You can run with -v to see the caps on each pad and verify this. Then you can properly link by using pad names in gst-launch-1.0:
Something like:
gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 name=src src.stream_0 ! decodebin ! autovideosink
Check the correct stream_%u number to use for each stream to have it linked correctly.
You could also simply be missing a videoconvert before the video sink. I'd test that too.
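And correspondingly for the audio branch, something like this (assuming audio is the second stream; verify the stream numbers with -v first):
gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 name=src src.stream_1 ! decodebin ! audioconvert ! autoaudiosink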
How can I stream video (and, if possible, audio too) from a webcam using GStreamer? I have already tried streaming video from a source, but I can't stream video from a webcam on Windows. How can I do this?
Client:
VIDEO_CAPS="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998"
DEST=localhost
VIDEO_DEC="rtph263pdepay ! avdec_h263"
VIDEO_SINK="videoconvert ! autovideosink"
LATENCY=100
gst-launch-1.0 -v rtpbin name=rtpbin latency=$LATENCY \
udpsrc caps=$VIDEO_CAPS port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! $VIDEO_DEC ! $VIDEO_SINK \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! udpsink host=$DEST port=5005 sync=false async=false
Server:
DEST=127.0.0.1
VOFFSET=0
AOFFSET=0
VELEM="ksvideosrc is-live=1"
VCAPS="video/x-raw,width=352,height=288,framerate=15/1"
VSOURCE="$VELEM ! $VCAPS"
VENC="avenc_h263p ! rtph263ppay"
VRTPSINK="udpsink port=5000 host=$DEST ts-offset=$VOFFSET name=vrtpsink"
VRTCPSINK="udpsink port=5001 host=$DEST sync=false async=false name=vrtcpsink"
VRTCPSRC="udpsrc port=5005 name=vrtpsrc"
gst-launch-1.0 rtpbin name=rtpbin \
$VSOURCE ! $VENC ! rtpbin.send_rtp_sink_2 \
rtpbin.send_rtp_src_2 ! $VRTPSINK \
rtpbin.send_rtcp_src_2 ! $VRTCPSINK \
$VRTCPSRC ! rtpbin.recv_rtcp_sink_2
You will have to use GStreamer 1.3.90 or newer and the ksvideosrc element, which is only available since that version.
Then you can stream it just like any other input; the details depend on which codecs, container format, streaming protocol and network protocol you want to use. The same goes for audio, which works basically the same as video.
http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp
Here you can find some examples that use RTP for streaming: server-side and client-side, audio-only, video-only, or both, and also streaming from real audio/video capture sources (these are for Linux, but on Windows it works exactly the same, just with the Windows-specific capture elements).
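For a minimal concrete starting point on Windows, something along these lines might work (a sketch; codec, port and host are arbitrary choices, not taken from the examples above):
Sender: gst-launch-1.0 ksvideosrc ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=127.0.0.1 port=5000
Receiver: gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=(string)video,encoding-name=(string)H264,clock-rate=(int)90000" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink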