I am new to GStreamer.
I tried to send a video over UDP using udpsink. While running the following command,
gst-launch-1.0 videotestsrc ! udpsink port=5200
I get warnings as follows.
WARNING: from element /GstPipeline:pipeline0/GstUDPSink:udpsink0: Attempting to send a UDP packets larger than maximum size (115200 > 65507)
Additional debug info:
gstmultiudpsink.c(715): gst_multiudpsink_send_messages (): /GstPipeline:pipeline0/GstUDPSink:udpsink0:
Reason: Error sending message: A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
0:00:56.934530706 4912 0000000002F09640 WARN multiudpsink gstmultiudpsink.c:715:gst_multiudpsink_send_messages:<udpsink0> warning: Attempting to send a UDP packets larger than maximum size (115200 > 65507)
0:00:56.939093412 4912 0000000002F09640 WARN multiudpsink gstmultiudpsink.c:715:gst_multiudpsink_send_messages:<udpsink0> warning: Reason: Error sending message: A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
What is wrong with the GStreamer parameters? Is anything missing?
You need RTP payloading before transmitting video over UDP.
gst-launch-1.0 videotestsrc ! rtpvrawpay ! udpsink port=5200
But transmitting raw video over UDP is not recommended. A better approach is to encode the video first to reduce its size; I'd prefer H.264 encoding for a good size/quality trade-off.
gst-launch-1.0 videotestsrc ! x264enc ! video/x-h264, stream-format=byte-stream ! rtph264pay ! udpsink port=5200
You can receive this stream with:
gst-launch-1.0 udpsrc port=5200 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink
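The 115200 in the warning is simply the size of one raw video frame: videotestsrc defaults to 320x240 in I420 format, which uses 1.5 bytes per pixel, and a whole frame exceeds the 65507-byte maximum UDP payload. A quick back-of-the-envelope check (plain Python, no GStreamer required):

```python
# Why udpsink rejects the buffer: one raw I420 frame from videotestsrc
# (default 320x240) is larger than the maximum UDP payload.

UDP_MAX_PAYLOAD = 65507            # 65535 - 8 (UDP header) - 20 (IP header)

def i420_frame_size(width, height):
    """I420 stores a full-resolution Y plane plus quarter-resolution
    U and V planes: 1.5 bytes per pixel overall."""
    return width * height * 3 // 2

frame = i420_frame_size(320, 240)
print(frame)                       # 115200, matching the warning
print(frame > UDP_MAX_PAYLOAD)     # True: the frame must be split into
                                   # MTU-sized packets, which is what an
                                   # RTP payloader does
```

This is why adding a payloader (or, better, an encoder plus payloader) fixes the warning: each RTP packet then fits within the UDP limit.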
I have the following pipelines
sender
gst-launch-1.0 videotestsrc ! "video/x-raw,width=1280, height=720,framerate=30/1" ! shmsink socket-path=/tmp/stream sync=true wait-for-connection=false shm-size=100000000
receiver
gst-launch-1.0 shmsrc socket-path=/tmp/stream is-live=1 ! video/x-raw,width=1280,height=720,framerate=30/1,format=BGR ! videoconvert ! queue ! autovideosink
Is there a way I can include the caps data in the shared memory instead of specifying it on the receiver side?
What I want is something like:
gst-launch-1.0 shmsrc socket-path=/tmp/stream is-live=1 ! video/x-raw ! videoconvert ! queue ! autovideosink
This currently shows
ERROR: from element /GstPipeline:pipeline0/GstCapsFilter:capsfilter0: Filter caps do not completely specify the output format
I can modify the sender as much as possible but want to keep the receiver more general to my needs.
The above is possible with encoded video such as H.264, but not with raw video, because raw caps must be fully specified before the stream can be negotiated.
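The underlying reason is that shmsink/shmsrc only move raw buffers through the shared-memory segment; no caps travel with them, so a raw-video receiver has to restate every field (format, width, height, framerate) itself. If the goal is just to keep sender and receiver in sync without retyping the caps, a small helper that builds the caps string from one set of numbers is one option (hypothetical helper, plain Python):

```python
def raw_video_caps(fmt, width, height, fps_num, fps_den=1):
    """Build a fully specified video/x-raw caps string, suitable for
    pasting into both the shmsink and shmsrc pipelines."""
    return (f"video/x-raw,format={fmt},width={width},height={height},"
            f"framerate={fps_num}/{fps_den}")

caps = raw_video_caps("BGR", 1280, 720, 30)
print(caps)  # video/x-raw,format=BGR,width=1280,height=720,framerate=30/1
```

This doesn't transmit the caps through the shared memory (shmsrc cannot do that for raw video); it only centralizes the one place where they are defined.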
The wavparse documentation provides this example to play a .wav audio file through the speakers on Linux with Alsa audio.
gst-launch-1.0 filesrc location=sine.wav ! wavparse ! audioconvert ! alsasink
I have tried to adapt this for use on Windows with the wasapisink or the autoaudiosink:
gst-launch-1.0.exe -v filesrc location=1.wav ! wavparse ! audioconvert ! autoaudiosink
gst-launch-1.0.exe -v filesrc location=1.wav ! wavparse ! audioconvert ! wasapisink
Both attempts result in an error:
ERROR: from element /GstPipeline:pipeline0/GstWavParse:wavparse0: Internal data stream error.
The full logs look like this:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstWavParse:wavparse0.GstPad:src: caps = audio/x-raw, format=(string)S16LE, layout=(string)interleaved, channels=(int)2, channel-mask=(bitmask)0x0000000000000003, rate=(int)44100
ERROR: from element /GstPipeline:pipeline0/GstWavParse:wavparse0: Internal data stream error.
Additional debug info:
../gst/wavparse/gstwavparse.c(2308): gst_wavparse_loop (): /GstPipeline:pipeline0/GstWavParse:wavparse0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
ERROR: from element /GstPipeline:pipeline0/GstAutoAudioSink:autoaudiosink0/GstWasapiSink:autoaudiosink0-actual-sink-wasapi: The stream is in the wrong format.
Additional debug info:
../gst-libs/gst/audio/gstaudiobasesink.c(1117): gst_audio_base_sink_wait_event (): /GstPipeline:pipeline0/GstAutoAudioSink:autoaudiosink0/GstWasapiSink:autoaudiosink0-actual-sink-wasapi:
Sink not negotiated before eos event.
ERROR: pipeline doesn't want to preroll.
Freeing pipeline ...
I have tried with multiple .wav files from various sources. Always the same result.
I have confirmed that autoaudiosink works on my PC because both of these commands generated an audible tone:
gst-launch-1.0.exe -v audiotestsrc samplesperbuffer=160 ! audioconvert ! autoaudiosink
gst-launch-1.0.exe -v audiotestsrc samplesperbuffer=160 ! autoaudiosink
I have also confirmed that playbin can play the file through my speakers, but this doesn't work for me because ultimately I will need to split up the pipeline a bit more.
gst-launch-1.0.exe -v playbin uri=file:///C:/1.wav
I am using gstreamer 1.18.0 with Windows 10. How do I play the contents of a .wav file through my speakers using a filesrc and autoaudiosink?
Maybe try adding audioresample before or after audioconvert too. I'm not entirely sure about current Windows audio subsystems, but nowadays hardware tends to require a sample rate of 48000 Hz. If the audio subsystem does not take care of the conversion, you need to take care of it yourself.
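To illustrate what a rate conversion from 44100 Hz to 48000 Hz involves, here is a naive linear-interpolation resampler in plain Python (illustrative only; GStreamer's audioresample element band-limits the signal properly rather than interpolating like this):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naively resample a mono signal by linear interpolation.
    Real resamplers filter the signal first; this only shows the
    rate conversion itself."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional source index
        j = int(pos)
        frac = pos - j
        nxt = samples[min(j + 1, len(samples) - 1)]
        out.append(samples[j] * (1 - frac) + nxt * frac)
    return out

# One second of 44.1 kHz input becomes 48000 output samples:
out = resample_linear([0.0, 1.0, 0.0, -1.0] * 11025, 44100, 48000)
print(len(out))  # 48000
```

In the pipeline itself this whole job is done by inserting audioresample, e.g. between audioconvert and the sink.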
I successfully streamed my webcam's image with GStreamer using gst-launch this way:
SERVER
./gst-launch-1.0 -v -m autovideosrc ! video/x-raw,format=BGRA ! videoconvert ! queue ! x264enc pass=qual quantizer=20 tune=zerolatency ! rtph264pay ! udpsink host=XXX.XXX.XXX.XXX port=7480
CLIENT
./gst-launch-1.0 udpsrc port=7480 ! "application/x-rtp, payload=127" ! rtph264depay ! decodebin ! glimagesink
Now I am trying to reproduce the client side in my app using this pipeline (I don't post the code, as I made an Objective-C wrapper around my pipeline and elements):
udpsrc with caps:"application/x-rtp,media=video,payload=127,encoding-name=H264"
rtph264depay
decodebin
glimagesink (for testing) or a custom appsink (in pull-mode) that converts image to CVPixelBufferRef (tested: it works with videotestsrc / uridecodebin / etc.)
It doesn't work, even though the pipeline's state messages look quite normal. I see messages in the console about SecTaskLoadEntitlements failed error=22, but I get those when running from the command line too.
I'm wondering what gst-launch does under the hood that I'm missing. I couldn't find any example on the web of a udpsrc-based pipeline in code.
My questions are :
Does anybody know what's actually happening when we launch gst-launch, or a way to find out?
Are there some examples of working pipelines in code with udpsrc?
EDIT
Here is the image of my pipeline. As you can see, the GstDecodeBin element doesn't create a src pad, since it isn't receiving, or processing, anything (I set a 'timeout' property of 10 seconds on the udpsrc element, and that timeout fires). Could it be an OS X sandboxing problem?
Now my pipeline looks like this:
udpsrc
queue
h264 depay
decode bin
video converter
caps filter
appsink / glimagesink
Tested with the method in this question, the app does actually receive something on this port.
Found out why it wasn't receiving anything: the GstUdpSrc element must be in GST_STATE_NULL when you assign the port to listen on, otherwise it silently keeps listening on the default port (5004).
Everything works fine now.
For the record, setting the environment variable GST_DEBUG to udpsrc:5 helped a lot.
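For reference, GST_DEBUG accepts an optional default level plus comma-separated category:level overrides (levels run 0-9; 5 is DEBUG), so the udpsrc setting can be combined with a global default. A configuration fragment only, which needs a GStreamer install to actually run:

```shell
# WARNING (2) everywhere, DEBUG (5) for the udpsrc category only:
GST_DEBUG=2,udpsrc:5 gst-launch-1.0 udpsrc port=5200 ! fakesink
```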
I am having trouble playing the audio from an RTSP server. I have no problem with video playback, but errors occur when I try to play the audio.
The following is the command used to play video:
C:\gstreamer\1.0\x86_64\bin>gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 ! decodebin ! autovideosink
However, when I change the autovideosink to autoaudiosink, as follows:
C:\gstreamer\1.0\x86_64\bin>gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 ! decodebin ! autoaudiosink
I get the errors below:
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1: Internal data flow error.
Additional debug info:
gstbasesrc.c(2933): gst_base_src_loop (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0/GstUDPSrc:udpsrc1:
streaming task paused, reason not-linked (-1)
I am new to both Stack Overflow and GStreamer; any help would be much appreciated.
Thanks to thiagoss's reply, I had my first success playing both video and audio using the following pipeline:
gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 name=src src. ! decodebin ! videoconvert ! autovideosink src. ! decodebin ! audioconvert ! autoaudiosink
IIRC rtspsrc will output one pad for each stream (video and audio may be separate), so you could be linking your video output to an audio sink.
You can run with -v to see the caps on each pad and verify this. Then you can properly link by using pad names in gst-launch-1.0:
Something like:
gst-launch-1.0 rtspsrc location=rtsp://192.168.2.116/axis-media/media.amp latency=0 name=src src.stream_0 ! decodebin ! autovideosink
Check the correct stream_%u number to use for each stream to have it linked correctly.
You can also just be missing a videoconvert before the videosink. I'd also test that.
How do I stream video (and, if possible, audio too) from a webcam using GStreamer? I have already streamed video from a test source, but I can't stream video from a webcam on Windows. How can I do this?
Client:
VIDEO_CAPS="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998"
DEST=localhost
VIDEO_DEC="rtph263pdepay ! avdec_h263"
VIDEO_SINK="videoconvert ! autovideosink"
LATENCY=100
gst-launch-1.0 -v rtpbin name=rtpbin latency=$LATENCY \
udpsrc caps=$VIDEO_CAPS port=5000 ! rtpbin.recv_rtp_sink_0 \
rtpbin. ! $VIDEO_DEC ! $VIDEO_SINK \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! udpsink host=$DEST port=5005 sync=false async=false
Server:
DEST=127.0.0.1
VOFFSET=0
AOFFSET=0
VELEM="ksvideosrc is-live=1"
VCAPS="video/x-raw,width=352,height=288,framerate=15/1"
VSOURCE="$VELEM ! $VCAPS"
VENC="avenc_h263p ! rtph263ppay"
VRTPSINK="udpsink port=5000 host=$DEST ts-offset=$VOFFSET name=vrtpsink"
VRTCPSINK="udpsink port=5001 host=$DEST sync=false async=false name=vrtcpsink"
VRTCPSRC="udpsrc port=5005 name=vrtpsrc"
gst-launch-1.0 rtpbin name=rtpbin \
$VSOURCE ! $VENC ! rtpbin.send_rtp_sink_2 \
rtpbin.send_rtp_src_2 ! $VRTPSINK \
rtpbin.send_rtcp_src_2 ! $VRTCPSINK \
$VRTCPSRC ! rtpbin.recv_rtcp_sink_2
You will have to use GStreamer 1.3.90 or newer, and the ksvideosrc element, which is only available since that version.
And then you can stream it just like any other input; the details depend on which codec, container format, streaming protocol and network protocol you want to use. The same goes for audio, which works basically the same way as video.
http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/tests/examples/rtp
Here you can find examples that use RTP for streaming: server-side and client-side, audio-only, video-only, or both, as well as streaming from real audio/video capture sources (written for Linux, but on Windows it works exactly the same, just with the Windows-specific source elements).