Changing container using FFmpeg produces NAL unit error

Background:
My video file goes onto a Linux-based system that streams content (RTP) to other users. I film the content, change the container and make sure the encoding is correct, then send it to the server, and that's where I stumble upon issues.
I've tried doing this with ffmpeg, but the system I'm injecting the file into won't recognize it and stream it to another device.
I'm doing all the transcoding and such on a Windows system:
C:\Users\mazdak\Documents\Projects\ffmpeg\bin>ffmpeg -y -i input.mp4 -pix_fmt yuv420p -c:v libx264 -profile:v main -level:v 4.1 -color_range 0 -colorspace bt709 -x264opts colorprim=bt709:transfer=bt709:bframes=1 -an output.mkv
Error:
What I'm getting is
StreamMedia exception ry: Unexpected NAL unit type: 9
(...)
StreamMedia exception ry: First media frame must be sync point
Maybe I'm not preparing it for RTSP? Is that the issue? What I do see is that the files that stream successfully were encoded using GStreamer.
So I thought: perhaps ffmpeg doesn't do that? Well, let's give gst-launch a try.
I need pointers on how to go about this.
What I have:
OSSBuild of GStreamer
ffmpeg utils
input.mp4 - H264 Main profile L3.1 - Pixel format yuvj420p
Audio in container
What I need (probably):
output.mkv - H264 Main profile L4.1 - Pixel format yuv420p - RTP prepared (rtph264pay module)
Audio removed
I have h264_analyze output from the movie I filmed, from the movie that streams successfully, and from the movies produced by my attempts with ffmpeg.

So this question can go in a whole bunch of different directions depending on what you're trying to do. Here is a very basic pipeline that just re-muxes the h264 video data from an mp4 file into an mkv file. It ignores the audio. No re-encoding is necessary.
gst-launch-0.10 filesrc location="bbb.mp4" ! qtdemux ! video/x-h264 ! h264parse ! matroskamux ! filesink location=/tmp/bbb.mkv
Here is another pipeline that demuxes an mp4 file, re-encodes it using the out-of-the-box x264 settings, and re-muxes it into an mkv file.
gst-launch-0.10 filesrc location="bbb.mp4" ! decodebin2 ! ffmpegcolorspace ! x264enc ! h264parse ! matroskamux ! filesink location=/tmp/bbb2.mkv
Video formats are usually more like a bundle of data than an individual file. At the top level you have your container formats (mp4, mkv, etc.); within those containers you have video and audio data stored in various formats (h264 video, AAC audio, etc.); and at the streaming level you have protocols like RTP (RTSP is a sort of wrapper protocol for negotiating one or more RTP streams) and MPEG-TS.
You may also want to double check what your camera is producing. You can run ffprobe on it:
ffprobe whatever.mp4
You can also try creating simple test videos from scratch to see if GStreamer can even make anything your server can understand.
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=main ! h264parse ! matroskamux ! filesink location=/tmp/main.mkv
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=baseline ! h264parse ! matroskamux ! filesink location=/tmp/baseline.mkv
gst-launch-0.10 videotestsrc num-buffers=120 ! ffmpegcolorspace ! x264enc profile=high ! h264parse ! matroskamux ! filesink location=/tmp/high.mkv

My guess is that input.mp4 contains NAL units of type 9 (as the error message points out).
"Access unit delimiters" (NAL type 9) should not be in an mp4.
To me it looks like your camera is muxing an illegal h.264 bitstream format into input.mp4.
MP4s should contain size-prefixed NAL units and no SPS (type 7), PPS (type 8) or AUD (type 9) NALs in the stream itself.
Now the question is how to filter out the AUDs or just pass them through.
I would try a stream copy - dropping the audio - see: https://ffmpeg.org/ffmpeg.html#Stream-copy
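A minimal sketch of that stream copy, dropping the audio; the second variant additionally strips the type-9 NAL units, assuming your ffmpeg build is recent enough to include the filter_units bitstream filter:
ffmpeg -i input.mp4 -an -c:v copy output.mkv
ffmpeg -i input.mp4 -an -c:v copy -bsf:v "filter_units=remove_types=9" output.mkv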

Related

Push video to SRT source with Gstreamer

I'm trying to push a stream to an SRT source. I'm able to do it with FFmpeg like this:
ffmpeg -re -i {INPUT} -vcodec libx264 -profile:v baseline -g 60 -acodec aac -f mpegts srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1
The above command pushes to the Ant Media Server SRT service. But if I try with the GStreamer SRT element, GStreamer tries to create an SRT source of its own, so it cannot send to Ant Media Server, because the SRT server has already been created by Ant Media Server. Please let me know which part I'm missing. I have tried:
gst-launch-1.0 -v videotestsrc ! video/x-raw, height=1080, width=1920 ! videoconvert ! x264enc tune=zerolatency ! video/x-h264, profile=baseline ! mpegtsmux ! srtsink uri="srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1"
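One thing worth checking (an assumption, not a verified fix for Ant Media Server): srtsink can run as a caller, a listener or in rendezvous mode, and if it ends up listening it will wait for incoming connections instead of pushing to the already-running server. Forcing caller mode would look like:
gst-launch-1.0 -v videotestsrc ! video/x-raw, height=1080, width=1920 ! videoconvert ! x264enc tune=zerolatency ! video/x-h264, profile=baseline ! mpegtsmux ! srtsink mode=caller uri="srt://test.antmedia.io:4200?streamid=WebRTCAppEE/stream1"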

Capture camera + mic and encode to h264/aac on macOS

I'm having trouble capturing and encoding audio+video on-the-fly on macOS.
I tried two options:
ffmpeg
ffmpeg -threads 0 -f avfoundation -s 1920x1080 -framerate 25 -i "0:0" -async 441 -c:v libx264 -preset medium -pix_fmt yuv420p -crf 22 -c:a libfdk_aac -aq 95 -y
gstreamer
gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4 osxaudiosrc device=0 ! audio/x-raw ! faac midside=false ! queue ! mux.
The ffmpeg option works, but only for lower resolutions. With higher resolutions, the Mac mini (2018 gen) can't do the heavy lifting. I suspect that's because I installed ffmpeg with brew, so it wasn't compiled on my machine and perhaps doesn't use the Mac's h264 hardware encoder?
The gstreamer option works as well, but there's a slight audio/video sync issue (audio is 100ms ahead of the video). I can't seem to add delay to the GStreamer queue (it ignores it):
queue max-size-buffers=0 max-size-time=0 max-size-bytes=0 min-threshold-time=100000000
Anyone who has any experience with this? Thanks!
That change to the queue affects internal buffering only. It has no impact on the timestamps of the buffers traveling through the pipeline, and those timestamps are what define the sync between audio and video.
Try to use the identity element on either the video or audio path and set some timestamp offset via the ts-offset property.
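A sketch of that suggestion with the offset on the audio branch (the 100 ms value and its sign are assumptions to illustrate the idea; ts-offset is in nanoseconds and may need tuning):
gst-launch-1.0 -ve avfvideosrc device-index=0 ! video/x-raw,width=1920,height=1080,framerate=25/1 ! vtenc_h264 ! queue ! mp4mux name=mux ! filesink location=out.mp4 osxaudiosrc device=0 ! audio/x-raw ! identity ts-offset=100000000 ! faac midside=false ! queue ! mux.
For the ffmpeg side, it may also be worth checking whether the brew build exposes the VideoToolbox hardware encoder (most builds do) and selecting it explicitly instead of libx264; a rough, untested variant of the capture command (the bitrate and output name are placeholders):
ffmpeg -encoders | grep videotoolbox
ffmpeg -f avfoundation -s 1920x1080 -framerate 25 -i "0:0" -c:v h264_videotoolbox -b:v 8M -c:a aac -y capture.mp4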

Keeping video file playable while recording to it

I am trying to stream a file, starting from an arbitrary position, that I am recording to at the same time. But until I stop recording, the file does not seem to be playable.
Recording
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux ! filesink location=test.mp4
Streaming to udp, starting from minute 1:
ffmpeg -i test.mp4 -re -ss 00:01:00 -f mpegts udp://127.0.0.1:1453
ffmpeg says moov atom not found and just quits.
After I stop the recording pipeline, it works as expected.
Thank you all in advance.
You need to record in fragments to make this work, i.e. by setting a reasonable fragment-duration (in ms) on mp4mux:
gst-launch-1.0 -e videotestsrc ! x264enc ! mp4mux fragment-duration=2000 ! filesink location=test.mp4
To play it with gstreamer (while recording):
gst-launch-1.0 filesrc location=test.mp4 ! decodebin ! videoconvert ! xvimagesink
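For what it's worth, ffmpeg can produce a similarly playable-while-growing file by writing a fragmented MP4; a sketch using a test source (the -movflags values are the relevant part, the rest is placeholder):
ffmpeg -f lavfi -i testsrc=size=1280x720:rate=30 -c:v libx264 -movflags +frag_keyframe+empty_moov test.mp4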
Even if this post is outdated: did you try the AVI container? With the AVI container I managed to read the video while it was still recording.
Try this, for example:
ffmpeg -i rtsp_link -c:v h264 -preset ultrafast -tune zerolatency -f avi output.avi
But I would be glad to see the history of the comments on your post; it seems to contain another potential answer.

Errors when streaming h264 video from gstreamer to ffmpeg

Hi, I am trying to receive a UDP/RTP stream with ffmpeg on the client side, but I'm having trouble.
Server side pipeline:
gst-launch-1.0 -v filesrc location=video2.mp4 ! decodebin ! x264enc ! rtph264pay ! udpsink host=127.0.0.1 port=5006
On the client side, I can play the video with the following pipeline:
gst-launch-1.0 -e udpsrc uri=udp://0.0.0.0:5006 ! application/x-rtp, clock-rate=90000, payload=96 ! rtph264depay ! decodebin ! autovideosink
However, since I want to convert the stream into an RTSP/HTTP stream, I tried to receive the RTP stream with ffmpeg and do something like:
ffmpeg -i udp://127.0.0.1:5006 -acodec copy -vcodec copy http://localhost:8090/feed1.ffm
But before doing that, I was testing this approach by saving the stream into a mp4 file with:
ffmpeg -f h264 -i udp://127.0.0.1:5006 -strict -2 -f mp4 stream.mp4
But this did not work, it gave me the following error:
[h264 @ 0x11d5100] missing picture in access unit with size 15019525
[h264 @ 0x11f06c0] no frame!
[h264 @ 0x11f06c0] decoding for stream 0 failed
[h264 @ 0x11f06c0] Could not find codec parameters for stream 0 (Video: h264): unspecified size
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Estimating duration from bitrate, this may be inaccurate
udp://127.0.0.1:5006: could not find codec parameters
Has anyone tried such an approach before or experienced a similar problem? I would like some direction on how to solve it. Thanks!
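One approach that generally works for feeding a GStreamer RTP stream into ffmpeg (a hedged suggestion, not verified against this exact pipeline) is to describe the stream in a small SDP file rather than pointing ffmpeg at the bare UDP port, since -f h264 expects a raw Annex-B bytestream, not RTP packets. Something like stream.sdp:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=GStreamer H264 stream
c=IN IP4 127.0.0.1
t=0 0
m=video 5006 RTP/AVP 96
a=rtpmap:96 H264/90000
and then:
ffmpeg -protocol_whitelist file,udp,rtp -i stream.sdp -c:v copy -f mp4 stream.mp4
If the copied stream still fails for lack of SPS/PPS, setting config-interval on rtph264pay so the parameter sets are repeated in-band is worth a try.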

One liner to create HLS stream

IIUC with HLS or DASH, I can create a manifest and serve the segments straight from my httpd, e.g. python -m http.server.
I have a UVC video feed coming in on /dev/video1 and I'm battling to create a simple m3u8 in either gstreamer or ffmpeg.
I got as far as:
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc ! mpegtsmux ! hlssink max-files=5
Any ideas?
Video
To list video1 device capabilities:
ffmpeg -f v4l2 -list_formats all -i /dev/video1
Audio (ALSA example)
To list ALSA devices:
arecord -L
HLS
Use two inputs:
ffmpeg -f alsa -i <alsa_device> -f v4l2 -i /dev/video1 [...] /path/to/docroot/playlist.m3u8
You can find the various HLS parameters in the FFmpeg documentation.
Further reading:
FFmpeg H.264 Encoding Guide
FFmpeg Webcam Capture
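A concrete sketch of such a two-input command (the ALSA device, encoder settings and segment options are placeholders to adapt):
ffmpeg -f alsa -i default -f v4l2 -i /dev/video1 -c:v libx264 -preset veryfast -tune zerolatency -c:a aac -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments /path/to/docroot/playlist.m3u8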
I found that the option tune=zerolatency was what I needed to keep it from stalling. Still need to figure out how to bring in the audio too.
gst-launch-1.0 -e v4l2src device=/dev/video1 ! videoconvert ! x264enc tune=zerolatency ! mpegtsmux ! hlssink max-files=5
Sadly my Thinkpad X220 is overheating at > 96C.
It would be nice to have the ffmpeg version of this too.
