I'm using libavcodec with hardware support through QSV for encoding (e.g., hevc_qsv), and trying to determine the actual hardware support during runtime.
The encoder declares its supported formats in libavcodec/qsvenc_hevc.c; as of FFmpeg 5.1 it declares support for the following formats:
.p.pix_fmts = (const enum AVPixelFormat[]) {
    AV_PIX_FMT_NV12,
    AV_PIX_FMT_P010,
    AV_PIX_FMT_YUYV422,
    AV_PIX_FMT_Y210,
    AV_PIX_FMT_QSV,
    AV_PIX_FMT_BGRA,
    AV_PIX_FMT_X2RGB10,
    AV_PIX_FMT_NONE
},
I've been using this list to determine whether there is hardware support for a specific format. However, since these values are hardcoded in FFmpeg, they do not necessarily reflect what the hardware actually supports at runtime. How can the actual hardware support be queried?
I suppose I may have to interface with libmfx directly, or somehow obtain the QSV version to determine supported formats. Older QSV hardware may only support NV12 and P010, for example.
Related
I'm doing a project in C which requires playing an incoming stream of HEVC content to the user. My understanding is that I need a library that gives me an API to an HEVC decoder (not an encoder, but a decoder). Here are my options so far:
x265 looks perfect, but it's all about the encoding part (and nothing about decoding!). I'm not interested in an API to an HEVC encoder; what I want is the decoder part.
There are libde265 and OpenHEVC, but I'm not sure they have what I want. I couldn't find anywhere in their docs an API that I can use to decode the content, but since there are players out there using those libs, I'm assuming it must be in there somewhere... I couldn't find it, though!
There is the FFmpeg project with its own decoders (HEVC included), but I'm not sure this is the right thing, since I only want the HEVC decoder and nothing else.
Cheers
Just go with FFmpeg; I'm guessing you'll only need to link with the libavcodec library and use its API/interfaces. And yes, the machine where your code runs may have the whole of FFmpeg installed (or maybe not; just the library might work).
Anyway, even that shouldn't be a problem unless the machine is an embedded system with tight space constraints (which is unlikely, since it's H.265, which implies abundant resources are needed).
ffmpeg filter minterpolate (motion interpolation) does not work in MPV.
(Nevertheless, the file then plays normally, just without minterpolate.)
(I researched using search engines and throughout the documentation, troubleshot trying to make use of OpenGL, and generally tried everything short of asking for help or digging into the source code to understand more; I'm not a programmer.)…
--gpu-context=angle --gpu-api=opengl also does not make OpenGL work. (I'm guessing OpenGL could help, judging from its use in the documentation.)
Note
To get a full list of available video filters, see --vf=help and
http://ffmpeg.org/ffmpeg-filters.html .
Also, keep in mind that most actual filters are available via the
lavfi wrapper, which gives you access to most of libavfilter's
filters. This includes all filters that have been ported from MPlayer
to libavfilter.
Most builtin filters are deprecated in some ways, unless they're only
available in mpv (such as filters which deal with mpv specifics, or
which are implemented in mpv only).
If a filter is not builtin, the lavfi-bridge will be automatically
tried. This bridge does not support help output, and does not verify
parameters before the filter is actually used. Although the mpv syntax
is rather similar to libavfilter's, it's not the same. (Which means
not everything accepted by vf_lavfi's graph option will be accepted by
--vf.)
You can also prefix the filter name with lavfi- to force the wrapper.
This is helpful if the filter name collides with a deprecated mpv
builtin filter. For example --vf=lavfi-scale=args would use
libavfilter's scale filter over mpv's deprecated builtin one.
I expect MPV to play with minterpolate (one of several filters that MPV can use, listed in http://ffmpeg.org/ffmpeg-filters.html) enabled. But this is what happens:
Input: "--vf=lavfi=[minterpolate=fps=60000/1001:mi_mode=mci]"
Output:
cplayer: (+) Video --vid=1 (*) (h264 1280x720 29.970fps)
cplayer: (+) Audio --aid=1 (*) (aac 2ch 44100Hz)
vd: Using hardware decoding (d3d11va).
ffmpeg: Impossible to convert between the formats supported by the filter 'mpv_src_in0' and the filter 'auto_scaler_0'
lavfi: failed to configure the filter graph
vf: Disabling filter lavfi.00 because it has failed.
(Interestingly, --gpu-api=opengl does not work either, even though, according to the specification, my (not to brag) HD Graphics 400 Braswell supports OpenGL 4.2. aresample also seems to have no effect, and with a few of the audio filters selected, playback often neither starts nor outputs errors.)
The problem is that you're using hardware decoding WITHOUT copying the decoded video back to system memory. This means your video filter can't access it. The fix is simple but that error message makes it very hard to figure this out.
To fix this, just pass in --hwdec=no. --hwdec=auto-copy also fixes it, but minterpolate in mci mode is so CPU-intensive that, for most video sources, there's not much point in also using hardware decoding.
All together:
mpv input.mkv --hwdec=no --vf=lavfi="[minterpolate=fps=60000/1001:mi_mode=mci]"
Explanation: the most efficient hardware decoding modes don't copy the video data back to system memory after decoding, but you need it there to run CPU-based filtering on the decoded frames. You were asking mpv to do video filtering on data it didn't have access to.
More details from the mpv docs:
auto-copy selects only modes that copy the video data back to system memory after decoding. This selects modes like vaapi-copy (and so on). If none of these work, hardware decoding is disabled. This mode is usually guaranteed to incur no additional quality loss compared to software decoding (assuming modern codecs and an error free video stream), and will allow CPU processing with video filters. This mode works with all video filters and VOs.
Because these copy the decoded video back to system RAM, they're often less efficient than the direct modes, and may not help too much over software decoding.
I have a series of videos with custom information encoded in the SEI message NAL. Is it possible to read that information when decoding using the NVIDIA hardware decoder? If it is not supported, should I use FFmpeg compiled with NVENC support instead?
UPDATE:
I want to decode the media and read the SEI messages. I am streaming live video and including post-processing info in the SEI message. The client has to use that info to apply effects to the decoded media.
Decoding the media as quickly as possible is important, and I want to do it in hardware. I assume that the Nvidia decoder must parse the NAL units to decode them. I would like to avoid duplicating work if possible.
Does anyone know an open-source decoder that can perform real-time SHVC bit stream decoding? openHEVC states that it has the capability to decode HEVC scalable bit streams, but I was not able to decode an SHVC bit stream generated by the SHM 7.0 reference encoder.
Also, does FFmpeg support the scalable extension of HEVC?
Thanks.
The current version of openHEVC seems to support only SHM 4.1 bit streams. All layers of bit streams generated by the SHM 4.1 reference encoder are decodable using the current openHEVC version.
I'm trying to configure a "Windows Media Audio Standard" DMO codec to compress in single-pass, constant bit-rate (CBR) mode. Unfortunately, I cannot find in the MSDN documentation how to pass the desired bit rate to the encoder object.
In other words, I'm looking for the equivalent of MFPKEY_RMAX, which appears to be the desired bit-rate setting for two-pass variable bit-rate (VBR) encoding, but for single-pass CBR encoding.
Finally found it.
The key I required is MF_MT_AUDIO_AVG_BYTES_PER_SECOND and is documented here:
Choose the encoding bit rate.
For CBR encoding, you must know the bit rate at which you want to encode the stream before the encoding session begins. You must set the bit rate while you are configuring the encoder. To do this, while you are performing media type negotiation, check the MF_MT_AUDIO_AVG_BYTES_PER_SECOND attribute (for audio streams) or the MF_MT_AVG_BITRATE attribute (for video streams) of the available output media types and choose an output media type that has the average bit rate closest to the target bit rate you want to achieve. For more information, see Media Type Negotiation on the Encoder.