Real-time decoding of SHVC bit streams - ffmpeg

Does anyone know of an open-source decoder that can perform real-time SHVC bit stream decoding? OpenHEVC states that it has the capability to decode scalable HEVC bit streams, but I was not able to decode an SHVC bit stream generated by the SHM 7.0 reference encoder.
Also, does ffmpeg support the scalable extension of HEVC?
Thanks.

The current version of openHEVC seems to support only SHM 4.1 bit streams. All layers of bit streams generated by the SHM 4.1 reference encoder are decodable using the current openHEVC version.

Related

Free and open-source lib to decode x.265 (HEVC) stream in a C project?

I'm doing a project in C which requires playing an incoming stream of HEVC content to the user. My understanding is that I need a library that gives me an API to an HEVC decoder (not an encoder, but a decoder). Here are my options so far:
x265 looks perfect, but it is all about the encoding part (and nothing about decoding!). I'm not interested in an API to an HEVC encoder; what I want is the decoder part.
There are libde265 and OpenHEVC, but I'm not sure they have what I want. I couldn't find anywhere in their docs an API that I can use to decode the content, but since there are players out there using those libraries, I'm assuming it must be there somewhere; I just couldn't find it.
There is the FFmpeg project with its own decoders (HEVC included), but I'm not sure it is the right choice since I only want the HEVC decoder and nothing else.
Cheers
Just go with FFmpeg; you should only need to link against the libavcodec library and use its API. And yes, the machine where your code runs will need FFmpeg installed (or maybe not; just the library might work).
Anyway, even that shouldn't be a problem unless the machine is an embedded system with tight space constraints (which is unlikely, since it's H.265, which implies abundant resources are needed anyway).
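For what it's worth, here is a minimal sketch of what decoding with nothing but libavcodec can look like, using FFmpeg's send/receive packet API (error handling trimmed; treat it as a starting point, not a complete player):

    /* Minimal sketch: decode one HEVC Annex-B access unit with libavcodec. */
    #include <libavcodec/avcodec.h>

    int decode_hevc_unit(const uint8_t *data, int size)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_HEVC);
        AVCodecContext *ctx  = avcodec_alloc_context3(codec);
        AVPacket *pkt        = av_packet_alloc();
        AVFrame *frame       = av_frame_alloc();

        if (!codec || !ctx || avcodec_open2(ctx, codec, NULL) < 0)
            return -1;

        pkt->data = (uint8_t *)data;   /* one Annex-B access unit */
        pkt->size = size;

        if (avcodec_send_packet(ctx, pkt) == 0)
            while (avcodec_receive_frame(ctx, frame) == 0)
                ;  /* frame->data[] now holds the decoded picture */

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return 0;
    }

Linking is just the one library plus its utility dependency, e.g. gcc player.c -lavcodec -lavutil.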

Build an encoder on Android (FFmpeg)

I need to build an encoder on Android, to encode the video stream captured by the camera to H.264.
I've got the libffmpeg.so file, but I don't know how to use it.
I'm new on this. Could anyone give some suggestions?
To use the FFmpeg libraries on Android, you would have to integrate them as OMX components.
For ffmpeg compilation and OMX generation, you could refer to this link: FFmpeg on Android
Once you have the OMX component ready, you will have to integrate it into Android by including it in media_codecs.xml. If you want your specific encoder to always be invoked, please ensure that your codec is the first codec registered in the list.
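For illustration, the media_codecs.xml entry could look roughly like the following; the component name OMX.myvendor.video.encoder.avc is a placeholder for your own OMX component:

    <MediaCodecs>
        <Encoders>
            <!-- hypothetical vendor component, listed first so it is
                 picked ahead of the default software encoder -->
            <MediaCodec name="OMX.myvendor.video.encoder.avc" type="video/avc" />
        </Encoders>
    </MediaCodecs>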
For the encoder, you will have to consider a couple of important points.
One, if you wish to optimize your system, you may want to avoid copying frames from the source (camera, surface, or some other source) to the input port of your OMX encoder component. Hence, your codec will have to support passing buffers through metadata (reference: http://androidxref.com/4.2.2_r1/xref/frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp#1413). If you require more information on this topic, please raise a separate question.
Two, the encoder will have to support the standard OMX indices and some new ones. For example, for Miracast, a new index, prependSPSPPStoIDRFrames, was introduced, which is supported through getExtensionIndex. For reference, see http://androidxref.com/4.2.2_r1/xref/frameworks/av/media/libstagefright/ACodec.cpp#891 .
In addition to the aforementioned index, the encoder will also get a new request, enableGraphicBuffers, with a FALSE boolean value. The most important point is to ensure that the OMX component doesn't fail when these two indices are invoked.
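As a rough illustration of the second point, the component's GetExtensionIndex handler might look like the sketch below. The extension string is the one queried by ACodec; the vendor index value is made up for the example, and the essential behavior is failing gracefully on unknown strings:

    #include <string.h>
    #include <OMX_Core.h>
    #include <OMX_Index.h>

    OMX_ERRORTYPE MyGetExtensionIndex(OMX_HANDLETYPE hComp,
                                      OMX_STRING name,
                                      OMX_INDEXTYPE *pIndex)
    {
        (void)hComp;
        if (!strcmp(name, "OMX.google.android.index.prependSPSPPSToIDRFrames")) {
            /* illustrative value in the vendor index range */
            *pIndex = (OMX_INDEXTYPE)(OMX_IndexVendorStartUnused + 1);
            return OMX_ErrorNone;
        }
        /* unknown extension: report it as unsupported instead of failing hard */
        return OMX_ErrorUnsupportedIndex;
    }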
With these modifications, you should be able to integrate your encoder into the Stagefright framework.

Decode H.264 without a stream

I have an application wherein I have H.264 frames from an RTSP stream stored in a proprietary database. I need to be able to present a frame to the H.264 decoder (frames in sequence, of course) and get back the decoded output (bitmap, whatever). I cannot use the traditional DirectShow streams because I don't have a stream. Is there any codec that can be used in this manner? Later I will need to go the other way as well (given bitmaps or other format images, create an H.264 stream). Any help you can give would be greatly appreciated.
Create a DirectShow source filter that assembles the H.264 stream from the database; then you can pass it to the standard DirectShow H.264 decoder. Look into the DirectShow samples for example source code.
As Isso already mentioned, you can push the H.264 data into a DirectShow pipeline and have the frames decoded. Additionally, there is the H.264 Video Decoder MFT (Windows 7 and more recent only), which might be an easier way to use the decoder and to apply it to an individual "frame". You can use other decoders as well, such as FFmpeg/libavcodec; however, you would still need to interface with decoders that are typically designed for stream processing.
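If you try the FFmpeg/libavcodec route, that interfacing is roughly the following (a sketch with current API names; error handling trimmed). You open the h264 decoder once, then hand it your stored frames one at a time as AVPackets:

    #include <libavcodec/avcodec.h>

    static AVCodecContext *ctx;
    static AVFrame *frame;

    int decoder_init(void)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        ctx   = avcodec_alloc_context3(codec);
        frame = av_frame_alloc();
        return (codec && ctx && avcodec_open2(ctx, codec, NULL) >= 0) ? 0 : -1;
    }

    /* Feed one stored Annex-B frame; returns 0 and fills 'frame' on success.
       Early calls may return AVERROR(EAGAIN) while the decoder buffers. */
    int decoder_feed(const uint8_t *buf, int size)
    {
        AVPacket *pkt = av_packet_alloc();
        pkt->data = (uint8_t *)buf;
        pkt->size = size;
        int ret = avcodec_send_packet(ctx, pkt);
        if (ret == 0)
            ret = avcodec_receive_frame(ctx, frame); /* YUV out; convert to a
                                                        bitmap with libswscale */
        av_packet_free(&pkt);
        return ret;
    }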

Wrap a stream of raw H264 NALUs into a container like MP4

I have an application that sends raw H.264 NALUs generated on the fly by encoding with x264's x264_encoder_encode. I am getting them through plain TCP, so I am not missing any frames.
I need to be able to decode such a stream in the client using hardware acceleration on Windows (DXVA2). I have been struggling to find a way to get this to work using FFmpeg. It may be easier to try Media Foundation or DirectShow, but they won't take raw H.264.
I either need to:
Change the code in the server application to give back an MP4 stream. I am not that experienced with x264; I was able to get raw H.264 by calling x264_encoder_encode, following the answer to this question: How does one encode a series of images into H264 using the x264 C API? How can I go from this to something that is wrapped in MP4 while still being able to stream it in real time?
Wrap it with MP4 headers at the receiver and feed it into something that can play it using DXVA. I wouldn't know how to do this (see the muxing sketch after this list).
Find another way to accelerate it using DXVA, with FFmpeg or something else that takes it in raw format.
An important restriction is that I need to be able to pre-process each decoded frame before displaying it, so any solution that does decoding and displaying in a single step would not work for me.
I would be fine with any of these solutions.
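For what it's worth, option 2 is essentially what libavformat's muxing API does. Below is a rough sketch of wrapping the incoming stream into fragmented MP4, which can be written out progressively and so still streams in real time. The function names are current FFmpeg API, but the stream parameters and extradata handling are simplified:

    #include <libavformat/avformat.h>

    int open_mp4_muxer(const char *url, int w, int h, AVFormatContext **out)
    {
        AVFormatContext *oc = NULL;
        avformat_alloc_output_context2(&oc, NULL, "mp4", url);

        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_H264;
        st->codecpar->width      = w;
        st->codecpar->height     = h;
        /* Real code must also put the SPS/PPS from x264 (see
           x264_encoder_headers) into st->codecpar->extradata. */

        avio_open(&oc->pb, url, AVIO_FLAG_WRITE);

        /* frag_keyframe+empty_moov writes fragmented MP4, suitable for live
           streaming; a plain MP4 moov could only be written at the end. */
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov", 0);
        int ret = avformat_write_header(oc, &opts);
        av_dict_free(&opts);

        *out = oc;
        return ret;
    }

Each encoded access unit then goes out as an AVPacket with pts/dts set, via av_interleaved_write_frame(), and av_write_trailer() finishes the stream. Depending on the FFmpeg version, you may need to convert the Annex-B start codes to length-prefixed form before muxing.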
I believe you should be able to use H.264 packets off the wire with Media Foundation. There's an example on page 298 of this book http://www.docstoc.com/docs/109589628/Developing-Microsoft-Media-Foundation-Applications# that uses an HTTP stream with Media Foundation.
I'm only learning Media Foundation myself and am trying to do a similar thing to you; in my case I want to use H.264 payloads from RTP packets, and from my understanding that will require a custom IMFSourceReader. Accessing the decoded frames should also be possible from what I've read, since there seems to be complete flexibility in chaining components together into topologies.

Adding an audio channel using ffmpeg

I am working with ffmpeg and trying to add an audio stream on the fly. I am using AudioQueues and I get raw audio buffers. I am encoding the audio as linear PCM, so the audio I get is in raw format, which I know ffmpeg accepts; but I cannot figure out how. I have looked into AVStream, where we have to create a new stream for this audio channel, but how do I encode it into a video that is already initialized in another AVStream structure?
Overall, I would like to get an idea of the architecture of ffmpeg. I have found it difficult to work with since it is so sparsely documented. Any pointers or details are appreciated.
Thanks and Regards,
Raj Pawan G
If you want to use Java, you'll find a much better documented API wrapper for FFmpeg in Xuggler.
That said, FFmpeg does support raw PCM, but not all containers can contain it. Use the PCM codecs (see avcodec.h) and find the one that has the sample size and attributes you want. To add the audio to the same container, take the AVFormatContext object that you use for your existing video stream and use av_new_stream(...) to add a new stream. Then attach your PCM encoder, 'encode' to it, and write the resulting packets. See output_example.c in the FFmpeg source for examples of this API in action.
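As a rough illustration of that flow with current API names (av_new_stream has since been replaced by avformat_new_stream; the field names below assume a reasonably recent FFmpeg, and everything is simplified):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Add a 16-bit little-endian PCM stream to a muxer that already
       carries your video stream. */
    AVStream *add_pcm_stream(AVFormatContext *oc, int sample_rate, int channels)
    {
        AVStream *st = avformat_new_stream(oc, NULL);
        st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
        st->codecpar->codec_id    = AV_CODEC_ID_PCM_S16LE;
        st->codecpar->sample_rate = sample_rate;
        st->codecpar->channels    = channels; /* ch_layout in the newest FFmpeg */

        /* For PCM, "encoding" is a pass-through: wrap each raw buffer from
           your AudioQueue in an AVPacket, set its pts and stream_index to
           this stream's index, then call av_interleaved_write_frame(). */
        return st;
    }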
