How do you stream an incoming raw H.265 video using Red5?
I've seen this example for streaming an FLV file, this one for the client side, and these for H.264 with or without ffmpeg.
Basically the question can be split into two:
How do you stream it from a .h265 file? If streaming from a .h265 file is not possible, how do you do it from a container file that holds H.265 video? Any example?
How do you stream it from an incoming RTP session? I can unpack the UDP/RTP session and extract raw H.265 NAL packets. I'm assuming some conversion is needed; are any libraries available for that purpose? Examples?
If I can get an answer to the first split question, I know I can redirect the incoming stream to a named pipe, which can serve as an indirect solution to the second split question. Streaming from the incoming UDP session directly is preferred, though.
Here is a preliminary idea, though surely not the best solution.
From a previous article, "How to stream in h265 using gstreamer", there is a solution that adds the x265enc element to gstreamer.
From an AWS Kinesis Video Streams examples page, a command line can be used to take RTSP/RTP input in H.265 and convert it to H.264:
gst-launch-1.0 rtspsrc location="rtsp://192.168.1.<x>:8554/h265video" ! \
decodebin ! x264enc ! <video-sink>
The <video-sink> should be something specific to the Red5 server. Or, if streaming in H.265 format, which Red5 may or may not accept:
gst-launch-1.0 rtspsrc location="rtsp://192.168.1.<x>:8554/h265video" \
short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! \
<video-sink>
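For the first split question, here is a sketch of serving a raw .h265 file (a named pipe created with mkfifo would work the same way through filesrc). Since Red5 ingests RTMP, and standard FLV/RTMP does not carry H.265, one assumption worth testing is transcoding to H.264 and publishing with flvmux and rtmpsink; the file name, host and stream name below are placeholders:
gst-launch-1.0 filesrc location=video.h265 ! h265parse ! avdec_h265 ! \
x264enc tune=zerolatency ! h264parse ! flvmux streamable=true ! \
rtmpsink location="rtmp://<red5-host>/live/mystream"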
Related
What I want to do is send a live camera stream, encoded in H.264, to gstreamer. I have already seen many examples which send it over the network using RTP and MPEG-TS. But the problem is that all those examples assume the input is served from a fixed file or from a live stream that is already wrapped in a transport protocol, like below.
client :
gst-launch-1.0 videotestsrc horizontal-speed=5 ! x264enc tune="zerolatency" threads=1 ! mpegtsmux ! tcpserversink host=192.168.0.211 port=8554
server : gst-launch-1.0 tcpclientsrc port=8554 host=192.168.0.211 ! tsdemux ! h264parse ! avdec_h264 ! xvimagesink
But my camera offers the interface below (written in Java, actually running on Android). The interface provides just live raw H.264 blocks.
mReceivedVideoDataCallBack = new DJIReceivedVideoDataCallBack() {
    @Override
    public void onResult(byte[] videoBuffer, int size)
    {
    }
};
I can create a TCP session to send those data blocks. But how can I turn this data, which is not packed in any transport protocol, into a format that is understandable by a gstreamer TCP client?
Transcoding the original stream into TS format on the camera side could be a solution, but I have no clue how to transcode from non-file, non-transport-format data. I have searched gstreamer and ffmpeg, but so far I could not find a way to handle an H.264 block stream using the supported interfaces.
Or, is there any way to make gstreamer directly accept those simple raw H.264 blocks?
I think the best solution is to create your own element for your video source and then construct a pipeline using your element and mpegtsmux.
However, you can use appsrc + mpegtsmux and feed your appsrc through JNI with the buffers you get from the callback, as sketched below.
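A minimal sketch of what the pipeline string could look like, assuming the camera delivers Annex B byte-stream H.264 (the caps, host and port are assumptions to adjust for your setup):
appsrc name=videosrc is-live=true format=time \
  caps="video/x-h264,stream-format=byte-stream" ! \
  h264parse ! mpegtsmux ! tcpserversink host=192.168.0.211 port=8554
In the application you would build this pipeline programmatically, look up the appsrc by name, and push each videoBuffer from onResult into it with gst_app_src_push_buffer().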
I have two video streaming units capable of streaming live video inputs:
AXIS Q7424-R Video Encoder
EPIPHAN VGADVI Broadcaster 99460
I am using gstreamer to view these streams on client terminals running linux. I am interested in the h264, rtp multicast streams (which both units support).
I can stream the Epiphan video using the following gstreamer pipeline:
gst-launch-0.10 udpsrc multicast-group=ADDRESS port=PORT caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! ffdec_h264 ! autovideosink
However, this pipeline does not work for the Axis unit as I get the following error repeatedly:
ffmpeg:0:: non-existing PPS referenced
ffmpeg:0:: non-existing PPS 0 referenced
ffmpeg:0:: decode_slice_header error
ffmpeg:0:: no frame!
ffdec_h264: decoding error (len:-1, have_data: 0)
I have read that this error means that the ffmpeg decoder is missing the SPS/PPS information provided by a keyframe. The axis unit has a GOV parameter which is the interval at which i-frames are sent; it is set to 32.
Note that I can view both units' rtp streams in unicast with the following:
gst-launch-0.10 rtspsrc location=rtsp://ADDRESS:PORT/... latency=100 ! rtph264depay ! ffdec_h264 ! autovideosink
Since unicast works and the unicast and multicast pipelines are the same (except for the source), my guess is either:
My udpsrc caps are simply incorrect for the axis stream (and I don't really know where/how to verify it)
or, the axis multicast format/encoding is different and requires a modification to the pipeline (I find this unlikely since unicast is working and I don't understand why the encoding would change between unicast/multicast).
Any suggestions are appreciated as I am limited by my knowledge of gstreamer and media formats in terms of what to try next.
As stated in szatmary's comments, the Axis hardware does not seem to stream the SPS/PPS information. I have contacted AXIS support about this issue and received a response stating that my question is "outside of the scope of support staff".
The solution I found is to include the "sprop-parameter-sets" in the caps of the receiving pipeline (a sketch is shown after this list). This parameter seems to be unique per stream and can be obtained by either:
starting a unicast receiver with rtsp (example provided in question above) which will provide the set of caps that can be copied, or
accessing the .sdp file from the hardware, which is usually available through http (for example, http://<USERNAME:PASSWORD>#<ADDRESS:PORT>/axis-cgi/alwaysmulti.sdp?camera=1)
Note that accessing the sdp file is per stream (hence the camera=1), so if your hardware has multiple inputs then be sure to grab the right one.
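A minimal sketch of the receiving pipeline with the parameter sets supplied in the udpsrc caps; the sprop-parameter-sets value is a placeholder to be replaced with the string taken from the .sdp file or from the unicast caps:
gst-launch-0.10 udpsrc multicast-group=ADDRESS port=PORT caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"<VALUE_FROM_SDP>"' ! rtph264depay ! ffdec_h264 ! autovideosink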
I have an application that sends raw H.264 NALUs generated by encoding on the fly with x264's x264_encoder_encode. I receive them over plain TCP, so I am not missing any frames.
I need to be able to decode such a stream in the client using hardware acceleration in Windows (DXVA2). I have been struggling to find a way to get this to work using FFmpeg. Perhaps it would be easier to try Media Foundation or DirectShow, but they won't take raw H.264.
I either need to:
Change the code in the server application to give back an MP4 stream. I am not that experienced with x264. I was able to get raw H.264 by calling x264_encoder_encode, by following the answer to this question: How does one encode a series of images into H264 using the x264 C API? How can I go from this to something that is wrapped in MP4 while still being able to stream it in real time?
At the receiver, I could wrap it with MP4 headers and feed it into something that can play it using DXVA, but I wouldn't know how to do this.
I could find another way to accelerate it using DXVA with FFmpeg or something else that takes it in raw format.
An important restriction is that I need to be able to pre-process each decoded frame before displaying it. Any solution that does decoding and displaying in a single step would not work for me.
I would be fine with any of these solutions.
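For the third option, a quick sanity check is possible with the ffmpeg command-line tool, which can read a raw Annex B H.264 elementary stream and decode it with DXVA2. This is only a sketch, assuming a Windows build of ffmpeg with DXVA2 support, with placeholder file names; it does not by itself solve the per-frame pre-processing requirement, which would need the libavcodec API:
ffmpeg -hwaccel dxva2 -f h264 -i input.h264 -f rawvideo decoded.yuv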
I believe you should be able to use H.264 packets off the wire with Media Foundation. There's an example on page 298 of this book http://www.docstoc.com/docs/109589628/Developing-Microsoft-Media-Foundation-Applications# that uses an HTTP stream with Media Foundation.
I'm only learning Media Foundation myself and am trying to do something similar to you; in my case I want to use H.264 payloads from RTP packets, and from my understanding that will require a custom IMFSourceReader. Accessing the decoded frames should also be possible from what I've read, since there seems to be complete flexibility in chaining components together into topologies.
I'm working with FFmpeg for decoding MJPEG streams.
Recently I've bumped into access violation exceptions from FFmpeg. After investigating, I found that, due to network packet drops, I'm passing FFmpeg a frame that might have "gaps" in it.
FFmpeg probably crashes because it jumps to a marker payload which doesn't exist in the frame's memory.
Any idea where I can find an MJPEG structure validator?
Is there any way to configure FFmpeg to perform such validations by itself?
Thanks.
I would be inclined to use GStreamer here instead of ffmpeg and set the "max-errors" property of the jpegdec plugin to -1.
gst-launch -v souphttpsrc location="http://[ip]:[port]/[dir]/xxx.cgi" do-timestamp=true is_live=true ! multipartdemux ! jpegdec max-errors=-1 ! ffmpegcolorspace ! autovideosink
This takes care of the corrupt jpeg frames and continues the stream.
Didn't really find an answer to the question.
Apparently, ffmpeg doesn't handle corrupted frames very well.
Decided to try a different third-party decoder instead of ffmpeg. For now, at least for JPEG, it works faster and is much more robust.
Problem:
I have to save live video stream data which comes as RTP packets from an RTSP server.
The data comes in two formats: MPEG-4 and H.264.
I do not want to encode/decode the input stream.
I just want to write it to a file which is playable with the proper codecs.
Any advice?
Best Wishes
History:
My Solutions and their problems:
First attempt: FFmpeg
I used the FFmpeg library to get the audio and video RTP packets.
But in order to write the packets I have to use av_write_frame,
which seems to imply that decoding/encoding takes place.
Also, when I set the output format to MP4 (av_guess_format("mp4", NULL, NULL)),
the output file is unplayable.
[Anyway, FFmpeg has poor documentation; it is hard to find what is wrong.]
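What I am effectively trying, expressed as a command line rather than library calls, would be roughly the following; -c copy remuxes without re-encoding, and the address and file name are placeholders:
ffmpeg -rtsp_transport tcp -i rtsp://<ADDRESS>/stream -c copy output.mkv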
Second attempt: DirectShow
Then I decided to use DirectShow. I found an RTSP source filter,
then a mux and a file writer,
and created a single graph:
RTSP Source --> MPEG Mux --> File Writer
It worked, but the problem is that the output file is not playable
if the graph is not stopped. If something happens, for example the graph crashes,
the output file is not playable.
Also, I am able to write H.264 data, but the video is completely unplayable.
The MP4 file format has an index that is required for correct playback, and the index can only be created once you've finished recording. So any solution using MP4 container files (and other indexed files) is going to suffer from the same problem. You need to stop the recording to finalise the file, or it will not be playable.
One solution that might help is to break the graph up into two parts, so that you can keep recording to a new file while stopping the current one. There's an example of this at www.gdcl.co.uk/gmfbridge.
If you try the GDCL MP4 multiplexor and you are having problems with H.264 streams, see the related question GDCL Mpeg-4 Multiplexor Problem.