IIUC with HLS or DASH, I can create a manifest and serve the segments straight from my httpd, e.g. python -m http.server.
I have a UVC video feed coming in on /dev/video1 and I'm battling to create a simple m3u8 in either gstreamer or ffmpeg.
I got as far as:
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc ! mpegtsmux ! hlssink max-files=5
Any ideas?
Video
To list video1 device capabilities:
ffmpeg -f v4l2 -list_formats all -i /dev/video1
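If the device offers several formats, you can force one of the listed ones on the input side (the mjpeg format, frame size and rate below are only placeholders, pick whatever your device actually reports):
ffmpeg -f v4l2 -input_format mjpeg -video_size 1280x720 -framerate 30 -i /dev/video1 [...]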
Audio (ALSA example)
To list ALSA devices:
arecord -L
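If arecord -L shows, for example, a device called hw:CARD=C920,DEV=0 (the name is just an illustration and will differ on your machine), that string is what goes in place of <alsa_device> below:
ffmpeg -f alsa -i hw:CARD=C920,DEV=0 [...]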
HLS
Use two inputs:
ffmpeg -f alsa -i <alsa_device> -f v4l2 -i /dev/video1 [...] /path/to/docroot/playlist.m3u8
You can find the various HLS parameters in the FFmpeg documentation.
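As a rough, untested sketch (segment length and playlist size picked arbitrarily here), something along these lines should produce a playlist plus a rolling window of segments in the docroot:
ffmpeg -f alsa -i <alsa_device> -f v4l2 -i /dev/video1 -c:v libx264 -preset veryfast -tune zerolatency -c:a aac -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /path/to/docroot/playlist.m3u8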
Further reading:
FFmpeg H.264 Encoding Guide
FFmpeg Webcam Capture
I found the option tune=zerolatency was what I needed to keep it from stalling. Still need to figure out how to bring in the audio too.
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc tune=zerolatency ! mpegtsmux ! hlssink max-files=5
Sadly my ThinkPad X220 is overheating at > 96 °C.
It would be nice to have the ffmpeg version of this pipeline too.
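For what it's worth, my untested guess at a video-only ffmpeg equivalent of the pipeline above (reusing the HLS flags from the earlier answer) would be:
ffmpeg -f v4l2 -i /dev/video1 -c:v libx264 -tune zerolatency -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments playlist.m3u8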
Related
I'm trying to push a stream to an SRT endpoint. I am able to do it with FFmpeg as below:
ffmpeg -re -i {INPUT} -vcodec libx264 -profile:v baseline -g 60 -acodec aac -f mpegts srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1
The above command pushes to the Ant Media Server SRT service. But if I try with the GStreamer SRT elements, GStreamer tries to create an SRT server of its own, so it cannot send to Ant Media Server, because the SRT server is already created by Ant Media Server. Please let me know which part I'm missing. I have tried:
gst-launch-1.0 -v videotestsrc ! video/x-raw, height=1080, width=1920 ! videoconvert ! x264enc tune=zerolatency ! video/x-h264, profile=baseline ! mpegtsmux ! srtsink uri="srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1"
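I'm not sure this is the missing piece, but one thing worth checking (assuming a GStreamer version where srtsink has a mode property; older releases split this into srtclientsink/srtserversink) is forcing srtsink into caller mode so it connects out instead of listening:
gst-launch-1.0 -v videotestsrc ! video/x-raw, height=1080, width=1920 ! videoconvert ! x264enc tune=zerolatency ! video/x-h264, profile=baseline ! mpegtsmux ! srtsink mode=caller uri="srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1"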
I would like to convert the working FFmpeg command to a GStreamer pipeline to extract an image from the RTSP stream.
ffmpeg -hide_banner -v error -rtsp_transport tcp -stimeout 10000000 -i 'rtsp://{domain}/Streaming/tracks/101?starttime=20220831T103000Z&endtime=20220831T103010Z' -vframes 1 -y image.jpg
Here is the GStreamer pipeline I tried to convert:
gst-launch-1.0 rtspsrc location="rtsp://{domain}/Streaming/tracks/101?starttime=20220831T103000Z&endtime=20220831T103010Z" max-rtcp-rtp-time-diff=0 latency=0 is_live=true drop-on-latency=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location="/mnt/c/images/frame3.jpg"
I couldn't manage to get it working. It grabs an image with the wrong timestamp, and the GStreamer pipeline never stops after extracting the image; it just keeps running like an infinite loop.
The FFmpeg command, on the other hand, works perfectly: it extracts the correct image and quits after successfully extracting it.
You may try adding imagefreeze with num-buffers=1, which outputs a single buffer and then sends EOS so the pipeline can stop on its own:
gst-launch-1.0 rtspsrc protocols=tcp location="rtsp://{domain}/Streaming/tracks/101?starttime=20220831T103000Z&endtime=20220831T103010Z" max-rtcp-rtp-time-diff=0 latency=0 is-live=true drop-on-latency=true ! decodebin ! videoconvert ! imagefreeze num-buffers=1 ! jpegenc snapshot=true ! filesink location="/mnt/c/images/frame3.jpg"
I am trying to play some audio on my Linux server and stream it to multiple web browsers. I have a loopback device I'm specifying as input to ffmpeg, and the ffmpeg output is then streamed via RTP to a WebRTC server (Janus). It works, but the sound that comes out is horrible.
Here's the command I'm using to stream from ffmpeg to janus over rtp:
nice --20 sudo ffmpeg -re -f alsa -i hw:Loopback,1,0 -c:a libopus -ac 1 -b:a 64K -ar 8000 -vn -rtbufsize 250M -f rtp rtp://127.0.0.1:17666
The WebRTC server (Janus) requires that the audio codec be Opus. If I try to do 2-channel audio or increase the sampling rate, the stream slows down or sounds worse. The "nice" command is to give the process higher priority.
Using gstreamer instead of ffmpeg works and sounds great!
Here's the cmd I'm using on CentOS 7:
sudo gst-launch-1.0 alsasrc device=hw:Loopback,1,0 ! rawaudioparse ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=127.0.0.1 port=14365
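If you want to sanity-check the RTP stream locally before pointing it at Janus, a quick listener along these lines (the payload type and clock-rate are assumptions based on rtpopuspay defaults; adjust the port to match) should play it back:
gst-launch-1.0 udpsrc port=14365 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS,payload=96" ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink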
I am trying to stream a file, starting from an arbitrary position, that I am recording to at the same time. But until I stop recording, the file does not seem to be playable.
Recording
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux ! filesink location=test.mp4
Streaming over UDP, starting from minute 1:
ffmpeg -i test.mp4 -re -ss 00:01:00 -f mpegts udp://127.0.0.1:1453
ffmpeg says moov atom not found and just quits.
After I stop the recording pipeline, it works as expected.
Thank you all in advance.
You need to record in fragments to make this work, i.e. set a reasonable fragment-duration (in ms) on mp4mux:
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux fragment-duration=2000 ! filesink location=test.mp4
To play it with gstreamer (while recording):
gst-launch-1.0 filesrc location=test.mp4 ! decodebin ! videoconvert ! xvimagesink
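With fragmentation turned on, mp4mux writes an initial moov followed by incremental moof fragments, so as far as I can tell the original ffmpeg command should also stop complaining about the missing moov atom while the recording is still running:
ffmpeg -i test.mp4 -re -ss 00:01:00 -f mpegts udp://127.0.0.1:1453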
Even though this post is old: did you try using the AVI container? With the AVI container I managed to read the video while it is still being recorded.
Try this, for example:
ffmpeg -i rtsp_link -c:v libx264 -preset ultrafast -tune zerolatency -f avi output.avi
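To check the file while it is still being written, something as simple as this (or the filesrc-based playback pipeline from the earlier answer) should be enough:
ffplay output.avi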
But I would be glad to see the comment history of your post; it seems to contain another potential answer.
Background:
My current video file is put into a Linux-based system that streams content (RTP) to other users. I'm filming and sending the content to the server, but after transcoding it to make sure the encoding is correct, I stumble upon issues.
I've tried doing this using ffmpeg; however, the system I'm injecting this file into won't recognize it and stream it to another device.
I'm doing all the transcoding and such on a Windows system:
C:\Users\mazdak\Documents\Projects\ffmpeg\bin>ffmpeg -y -i input.mp4 -pix_fmt yuv420p -c:v libx264 -profile:v main -level:v 4.1 -color_range 0 -colorspace bt709 -x264opts colorprim=bt709:transfer=bt709:bframes=1 -an output.mkv
Error:
What I'm getting is
StreamMedia exception ry: Unexpected NAL unit type: 9
(...)
StreamMedia exception ry: First media frame must be sync point
Maybe I'm not preparing it for RTSP? Is that the issue? Because what I see is that the files that are able to stream were encoded using GStreamer.
So I thought: perhaps ffmpeg does not do that? Well, let's give gst-launch a try.
I need pointers as to how to go about this.
What I have:
OSSBuild of GStreamer
ffmpeg utils
input.mp4 - H264 Main profile L3.1 - Pixel format yuvj420p
Audio in container
What I need (probably):
output.mkv - H264 Main profile L4.1 - Pixel format yuv420p - RTP prepared (rtph264pay module)
Audio removed
I have h264_analyze output from both: from the movie that is successfully streamed, and from the movies produced by my attempts with ffmpeg.
So this question can go in a whole bunch of different directions depending on what you're trying to do. Here is a very basic pipeline that just re-muxes h264 video data in an mp4 file into an mkv file. It ignores the audio. No re-encoding is necessary.
gst-launch-0.10 filesrc location="bbb.mp4" ! qtdemux ! video/x-h264 ! h264parse ! matroskamux ! filesink location=/tmp/bbb.mkv
Here is another pipeline that demuxes an mp4 file, re-encodes it using the out-of-the-box x264 settings, and re-muxes it into an mkv file.
gst-launch-0.10 filesrc location="bbb.mp4" ! decodebin2 ! ffmpegcolorspace ! x264enc ! h264parse ! matroskamux ! filesink location=/tmp/bbb2.mkv
Video formats are usually more like a bundle of data than an individual file. At the top level you have your container formats (mp4, mkv, etc.); within those containers you have video and audio data stored in various formats (H.264 video, AAC audio, etc.). Then at the streaming level you have protocols like RTP (RTSP is a sort of wrapper protocol for negotiating one or more RTP streams) and MPEG-TS.
You may also want to double check what your camera is producing. You can run ffprobe on it:
ffprobe whatever.mp4
You can also try creating simple test videos from scratch to see if GStreamer can even make anything your server can understand.
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=main ! h264parse ! matroskamux ! filesink location=/tmp/main.mkv
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=baseline ! h264parse ! matroskamux ! filesink location=/tmp/baseline.mkv
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=high ! h264parse ! matroskamux ! filesink location=/tmp/high.mkv
My guess is that input.mp4 contains NALs of type 9 (as the error message points out).
"Access unit delimiters" (NAL type 9) should not be in an mp4.
To me it looks like your camera is muxing an illegal h.264 bitstream format into input.mp4.
MP4s should contain size-prefixed NALs and no in-band SPS (type 7), PPS (type 8), or AU delimiter (type 9) NALs.
Now the question is how to filter out the AUs or just pass them through.
I would try a stream copy - dropping the audio - see: https://ffmpeg.org/ffmpeg.html#Stream-copy
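A minimal sketch of that stream copy (untested against this particular file, keeping the mkv container from above) would be:
ffmpeg -i input.mp4 -c:v copy -an output.mkv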