Convert FFmpeg command to GStreamer pipeline to extract an image from an RTSP stream

I would like to convert the following working FFmpeg command to a GStreamer pipeline, in order to extract an image from an RTSP stream.
ffmpeg -hide_banner -v error -rtsp_transport tcp -stimeout 10000000 -i 'rtsp://{domain}/Streaming/tracks/101?starttime=20220831T103000Z&endtime=20220831T103010Z' -vframes 1 -y image.jpg
Here is my attempt at converting it to a GStreamer pipeline:
gst-launch-1.0 rtspsrc location="rtsp://{domain}/Streaming/tracks/101?starttime=20220831T103000Z&endtime=20220831T103010Z" max-rtcp-rtp-time-diff=0 latency=0 is_live=true drop-on-latency=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location="/mnt/c/images/frame3.jpg"
I couldn't manage to get it working: it produces an image with the wrong timestamp, and the pipeline never stops after extracting the image; it just keeps running like an infinite loop.
The FFmpeg command, on the other hand, works perfectly: it extracts the correct image and exits as soon as it has done so.

You may try adding imagefreeze with num-buffers=1, so that exactly one frame is pushed downstream and the element posts EOS afterwards, letting the pipeline shut down:
gst-launch-1.0 rtspsrc protocols=tcp location="rtsp://{domain}/Streaming/tracks/101?starttime=20220831T103000Z&endtime=20220831T103010Z" max-rtcp-rtp-time-diff=0 latency=0 is-live=true drop-on-latency=true ! decodebin ! videoconvert ! imagefreeze num-buffers=1 ! jpegenc snapshot=true ! filesink location="/mnt/c/images/frame3.jpg"
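As an aside (my addition, not part of the original answer): the FFmpeg command also sets a 10-second socket timeout via -stimeout, which is given in microseconds. rtspsrc has a comparable tcp-timeout property, likewise in microseconds, if you want the GStreamer pipeline to fail rather than hang on a dead connection:
# tcp-timeout is in microseconds, mirroring FFmpeg's -stimeout 10000000
gst-launch-1.0 rtspsrc protocols=tcp tcp-timeout=10000000 location="rtsp://{domain}/Streaming/tracks/101?starttime=20220831T103000Z&endtime=20220831T103010Z" ! decodebin ! videoconvert ! imagefreeze num-buffers=1 ! jpegenc snapshot=true ! filesink location="/mnt/c/images/frame3.jpg"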

Related

Push video to SRT source with Gstreamer

I'm trying to push a stream to an SRT service. I can do it with FFmpeg as below:
ffmpeg -re -i {INPUT} -vcodec libx264 -profile:v baseline -g 60 -acodec aac -f mpegts srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1
The above command pushes to the Ant Media Server SRT service. But if I try GStreamer's SRT elements, GStreamer tries to create its own SRT endpoint, so it cannot send to Ant Media Server, because the SRT server has already been created by Ant Media Server. Please let me know which part I'm missing. I have tried:
gst-launch-1.0 -v videotestsrc ! video/x-raw, height=1080, width=1920 ! videoconvert ! x264enc tune=zerolatency ! video/x-h264, profile=baseline ! mpegtsmux ! srtsink uri="srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1"
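A hedged pointer, since this question carries no answer here: srtsink has a mode property (caller, listener, rendezvous). Setting mode=caller explicitly should make GStreamer connect out to the already-running Ant Media Server listener, which is what FFmpeg does; whether this fixes the rest of the pipeline is untested:
# mode=caller tells srtsink to connect to the remote listener instead of opening its own endpoint
gst-launch-1.0 -v videotestsrc ! video/x-raw, height=1080, width=1920 ! videoconvert ! x264enc tune=zerolatency ! video/x-h264, profile=baseline ! mpegtsmux ! srtsink mode=caller uri="srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1"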

Keeping video file playable while recording to it

I am trying to stream a file, starting from an arbitrary position, that I am recording to at the same time. But until I stop recording, the file does not seem to be playable.
Recording
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux ! filesink location=test.mp4
Streaming over UDP, starting from minute 1:
ffmpeg -i test.mp4 -re -ss 00:01:00 -f mpegts udp://127.0.0.1:1453
FFmpeg says moov atom not found and just quits.
After I stop the recording pipeline, it works as expected.
Thank you all in advance.
You need to record in fragments to make this work, i.e. by setting a reasonable fragment-duration (in ms) on mp4mux:
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux fragment-duration=2000 ! filesink location=test.mp4
To play it with gstreamer (while recording):
gst-launch-1.0 filesrc location=test.mp4 ! decodebin ! videoconvert ! xvimagesink
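With fragmentation enabled, mp4mux writes the moov atom up front and then appends moof/mdat fragments as it records, so the original FFmpeg streaming command should now find the moov while the file is still being written:
ffmpeg -i test.mp4 -re -ss 00:01:00 -f mpegts udp://127.0.0.1:1453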
Even if this post is outdated: did you try using the AVI container? With the AVI container I succeeded in reading the video while it was still being recorded.
Try this, for example:
ffmpeg -i rtsp_link -c:v libx264 -preset ultrafast -tune zerolatency -f avi output.avi
But I would be glad to see the history of the comments on your post; it seems to contain another potential answer.

Error while displaying RTP stream packets from udp port on Gstreamer

I am streaming a live webcam with VLC to a Darwin Streaming Server.
I then tried to play this live webcam feed in an RTSP client using the following:
GST_DEBUG=2 gst-launch -vvv playbin uri=rtsp://172.19.91.21/channel.sdp
Everything works fine; the output shows up in the GStreamer window.
I have reflected all the packets from DSS to the RTSP client as well as to a UDP port. But when I tried to play the RTP stream using the following command
GST_DEBUG=2 gst-launch-0.10 -vvv udpsrc port=5000 multicast-iface="lo" multicast-group="172.19.91.20" buffer-size=1000000 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264" do-timestamp=false ! rtph264depay ! decodebin ! autovideosink
I am getting the following errors:
0:00:07.108734201 7874 0x89d2a90 ERROR ffmpeg :0:: non-existing PPS referenced
0:00:07.108803500 7874 0x89d2a90 ERROR ffmpeg :0:: non-existing PPS 0 referenced
0:00:07.108824183 7874 0x89d2a90 ERROR ffmpeg :0:: decode_slice_header error
0:00:07.108840903 7874 0x89d2a90 ERROR ffmpeg :0:: no frame!
0:00:07.108859244 7874 0x89d2a90 WARN ffmpeg gstffmpegdec.c:2299:gst_ffmpegdec_frame: ffdec_h264: decoding error (len: -1, have_data: 0)
Please guide me on how to solve this problem.
It works fine after using the following command:
GST_DEBUG=2 gst-launch-0.10 -v udpsrc port=5000 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)\"Z2QAHqzZQKA9sBEAAAMAAQAAAwAyjxYtlg\=\=\,aOvjyyLA\=\"' ! rtph264depay ! ffdec_h264 ! xvimagesink sync=false
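For context (an aside of mine, not from the original thread): sprop-parameter-sets carries the H.264 SPS/PPS that the decoder needs before it can decode anything; the server advertises it in the SDP's a=fmtp line, which is where the value above comes from. You can fish it out of the session description, e.g.:
# expect a line like: a=fmtp:96 ...;sprop-parameter-sets=Z2QAHqzZQKA9sBEAAAMAAQAAAwAyjxYtlg==,aOvjyyLA=
grep sprop-parameter-sets channel.sdp
Without SPS/PPS the decoder keeps reporting non-existing PPS until (unless) the parameter sets arrive in-band.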

One-liner to create HLS stream

IIUC with HLS or DASH, I can create a manifest and serve the segments straight from my httpd, e.g. python -m http.server.
I have a UVC video feed coming in on /dev/video1 and I'm battling to create a simple m3u8 in either gstreamer or ffmpeg.
I got as far as:
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc ! mpegtsmux ! hlssink max-files=5
Any ideas?
Video
To list video1 device capabilities:
ffmpeg -f v4l2 -list_formats all -i /dev/video1
Audio (ALSA example)
To list ALSA devices:
arecord -L
HLS
Use two inputs:
ffmpeg -f alsa -i <alsa_device> -f v4l2 -i /dev/video1 [...] /path/to/docroot/playlist.m3u8
You can find the various HLS parameters in the FFmpeg documentation.
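For example, a minimal sketch of the full command (the ALSA device, segment length, and docroot path here are placeholders and assumptions, not values from the thread):
# "default" is ALSA's default capture device; adjust -hls_time / -hls_list_size to taste
ffmpeg -f alsa -i default -f v4l2 -i /dev/video1 -c:v libx264 -tune zerolatency -c:a aac -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /path/to/docroot/playlist.m3u8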
Further reading:
FFmpeg H.264 Encoding Guide
FFmpeg Webcam Capture
I found that the option tune=zerolatency was what I needed to keep it from stalling. Still need to figure out how to bring in the audio too.
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc tune=zerolatency ! mpegtsmux ! hlssink max-files=5
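For the audio, one possible sketch is a second branch feeding the same muxer (untested; it assumes the default ALSA capture device and the avenc_aac encoder from gst-libav):
# the audio branch joins the video at mpegtsmux via the name=mux / mux. syntax
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc tune=zerolatency ! queue ! mpegtsmux name=mux ! hlssink max-files=5 alsasrc device=default ! audioconvert ! avenc_aac ! aacparse ! queue ! mux.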
Sadly my Thinkpad X220 is overheating at > 96C.
Would be nice to get the ffmpeg version.

Changing container using FFMPEG produces NAL unit error

Background:
My current video file sits on a Linux-based system that streams content (RTP) to other users. I'm filming and sending the content to the server, but after I change the container and make sure the encoding is correct, I stumble upon issues.
I've tried doing this using ffmpeg, however the system I'm injecting this file in won't recognize it and stream it to another device.
I'm doing all the transcoding and such on a Windows system
C:\Users\mazdak\Documents\Projects\ffmpeg\bin>ffmpeg -y -i input.mp4 -pix_fmt yuv420p -c:v libx264 -profile:v main -level:v 4.1 -color_range 0 -colorspace bt709 -x264opts colorprim=bt709:transfer=bt709:bframes=1 -an output.mkv
Error:
What I'm getting is
StreamMedia exception ry: Unexpected NAL unit type: 9
(...)
StreamMedia exception ry: First media frame must be sync point
Maybe I'm not preparing it for RTSP? Is that the issue? What I see is that the files that do stream successfully were encoded using GStreamer.
So I thought: perhaps ffmpeg does not do that? Well, let's give gst-launch a try.
I need pointers as to how to go about this.
What I have:
OSSBuild of GStreamer
ffmpeg utils
input.mp4 - H264 Main profile L3.1 - Pixel format yuvj420p
Audio in container
What I need (probably):
output.mkv - H264 Main profile L4.1 - Pixel format yuv420p - RTP prepared (rtph264pay module)
Audio removed
I have h264_analyze output from the movie I filmed, from the movie that is successfully streamed, and from the movies produced by my attempts with ffmpeg.
So this question can go in a whole bunch of different directions depending on what you're trying to do. Here is a very basic pipeline that just re-muxes h264 video data in an mp4 file into an mkv file. It ignores the audio. No re-encoding is necessary.
gst-launch-0.10 filesrc location="bbb.mp4" ! qtdemux ! video/x-h264 ! h264parse ! matroskamux ! filesink location=/tmp/bbb.mkv
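If you are on GStreamer 1.0 rather than 0.10, the equivalent remux would be (my translation, untested):
# 1.0 drops the explicit video/x-h264 caps filter; qtdemux's video pad links to h264parse directly
gst-launch-1.0 filesrc location="bbb.mp4" ! qtdemux ! h264parse ! matroskamux ! filesink location=/tmp/bbb.mkv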
Here is another pipeline that demuxes an mp4 file, re-encodes it using the out-of-the-box x264 settings, and re-muxes it into an mkv file.
gst-launch-0.10 filesrc location="bbb.mp4" ! decodebin2 ! ffmpegcolorspace ! x264enc ! h264parse ! matroskamux ! filesink location=/tmp/bbb2.mkv
Video formats are usually more like a bundle of data than an individual file. At the top level you have your container formats (mp4, mkv, etc.); within those containers you have video and audio data stored in various formats (h264 video, AAC audio, etc.); then at the streaming level you have protocols like RTP (RTSP is a sort of wrapper protocol for negotiating one or more RTP streams) and MPEG-TS.
You may also want to double check what your camera is producing. You can run ffprobe on it:
ffprobe whatever.mp4
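If you want just the fields that matter here (profile, level, pixel format), a more targeted invocation with standard ffprobe options is:
# show the first video stream's codec, profile, level and pixel format
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,level,pix_fmt whatever.mp4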
You can also try creating simple test videos from scratch to see if GStreamer can even make anything your server can understand.
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=main ! h264parse ! matroskamux ! filesink location=/tmp/main.mkv
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=baseline ! h264parse ! matroskamux ! filesink location=/tmp/baseline.mkv
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=high ! h264parse ! matroskamux ! filesink location=/tmp/high.mkv
My guess is that input.mp4 contains NALs of type 9 (as the error message points out).
"Access unit delimiters" (NAL type 9) should not be in an mp4.
To me it looks like your camera is muxing an illegal h.264 bitstream format into input.mp4.
MP4s should contain size-prefixed NALs and no SPS (type 7), PPS (type 8), or AUD (type 9) NALs; the parameter sets belong in the avcC configuration box instead.
Now the question is how to filter out the AUDs or just pass them through.
I would try a stream copy - dropping the audio - see: https://ffmpeg.org/ffmpeg.html#Stream-copy
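A plain stream copy dropping the audio would be:
ffmpeg -i input.mp4 -an -c:v copy output.mkv
If your FFmpeg is 4.0 or newer, the filter_units bitstream filter can additionally strip the access unit delimiters during the copy (this bsf suggestion is mine, not from the original answer):
# remove_types=9 drops the AUD NALs while stream-copying the video
ffmpeg -i input.mp4 -an -c:v copy -bsf:v filter_units=remove_types=9 output.mkv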
