FFMPEG Transcode H.265 video from 10-bit to 8-bit

I'm trying to convert my library from various formats into HEVC 8-bit mainly to shrink my library down. This is generally working but I've run into an issue when trying to convert an existing file from 10-bit H.265 to 8-bit H.265.
My processor, an Intel Celeron J3455, supports hardware decoding/encoding H.265 at 8-bit but only hardware decoding for 10-bit.
It seems that ffmpeg is trying to keep the video at 10-bit to match the source rather than letting me convert to 8-bit, and this is causing an error.
Here is a sample command that I'm using:
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.10bit.x265.mkv -map 0:0 -c:v:0 hevc_vaapi -vf "scale_vaapi=w=-1:h=1080" -b:v 4027047 -map 0:1 -c:a:0 aac -b:a 384000 -ac 6 -map 0:s -scodec copy -map_metadata:g -1 -metadata JBDONEVERSION=1 -metadata JBDONEDATE=2020-06-06T20:52:36.072Z -map_chapters 0 output.8bit.x265.mkv
The error I get is:
[hevc_vaapi @ 0x5568b27fb1c0] No usable encoding entrypoint found for profile VAProfileHEVCMain10 (18).
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
From what I can tell, ffmpeg looks at the source and selects VAProfileHEVCMain10 instead of VAProfileHEVCMain. I'd like to force it to output 8-bit.
I've tried adding -pix_fmt yuv420p but that gives me this error:
Incompatible pixel format 'yuv420p' for codec 'hevc_vaapi', auto-selecting format 'vaapi_vld'
I've also tried making this change to the command: "scale_vaapi=w=-1:h=1080,format=yuv420p"
However that gives me the error:
Impossible to convert between the formats supported by the filter 'Parsed_scale_vaapi_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Any suggestions?

I've just been figuring this out as well. Your problem is (most likely) with -hwaccel_output_format vaapi: it outputs frames in the VAAPI hardware format rather than the format you need (read more here; I've also quoted the relevant section at the end of this comment). So you need to adjust for 8-bit there: -hwaccel_output_format yuv420p (a full example follows after the quoted documentation).
In my case I'm also using -filter_hw_device vaapi0 -vf format=nv12|vaapi,hwupload (specified before -c:v hevc_vaapi). The vaapi0 here is a named device I've initialised with init_hw_device. You're directly using a path with -hwaccel_device so I'm not sure what the name of your device is, but you may not need these extra arguments.
The hardware codecs used by VAAPI are not able to access frame data in arbitrary memory. Therefore, all frame data needs to be uploaded to hardware surfaces connected to the appropriate device before being used. All VAAPI hardware surfaces in ffmpeg are represented by the vaapi pixfmt (the internal layout is not visible here, though).
The hwaccel decoders normally output frames in the associated hardware format, but by default the ffmpeg utility downloads the output frames to normal memory before passing them to the next component. This allows the decoder to work standalone and speed up decoding without any additional options:
ffmpeg -hwaccel vaapi ... -i input.mp4 -c:v libx264 ... output.mp4
For other outputs, the option -hwaccel_output_format can be used to specify the format to be used. This can be a software format (which formats are usable depends on the driver), or it can be the vaapi hardware format to indicate that the surface should not be downloaded.
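Putting both suggestions together, here is a rough sketch of how the original command might look (untested; the device name va is just a label I chose for -init_hw_device, the software scale replaces scale_vaapi because the frames are downloaded to system memory before hwupload, and whether the driver can download 10-bit frames straight to yuv420p depends on your hardware and ffmpeg build):
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 \
  -hwaccel vaapi -hwaccel_device va -hwaccel_output_format yuv420p -filter_hw_device va \
  -i input.10bit.x265.mkv \
  -map 0:0 -vf "scale=-1:1080,format=nv12,hwupload" -c:v:0 hevc_vaapi -b:v 4027047 \
  -map 0:1 -c:a:0 aac -b:a 384000 -ac 6 \
  -map 0:s -scodec copy -map_metadata:g -1 -map_chapters 0 output.8bit.x265.mkv
If the decoder refuses to download to yuv420p, dropping -hwaccel_output_format and letting the format filter do the 10-bit to 8-bit conversion in software should also get you an 8-bit hevc_vaapi encode.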

Related

ffmpeg h264_nvenc force level=41

I have some videos at 1080p 60fps.
These videos are encoded at level=50 and my TV only plays videos up to level=41, so I want to convert them using ffmpeg with hardware acceleration.
I have a Windows 10 machine with ffmpeg and a GeForce 2060, so I try to run the command below:
ffmpeg -i video.mp4 -vcodec h264_nvenc -preset slow -level 4.1 output.mp4
but I get this error:
[h264_nvenc @ 000001dd43cd07c0] InitializeEncoder failed: invalid param (8): Invalid Level.
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
When I run the same command with "-level 4.2" or above, the encode works, but that is useless to me because I really need "-level 4.1".
I noticed that if I use libx264 instead of h264_nvenc, it accepts "-level 4.1", but it takes too long to complete (I want to take advantage of hardware acceleration instead of the CPU).
How can I force h264_nvenc to accept "level=4.1"?

FFmpeg raw h.264 set pts value

I am currently using ffmpeg to convert a custom container media format to mp4. It is straightforward to dump all the h.264 frames to one file and the aac audio to another. Then I can combine the two and create an mp4 file with ffmpeg.
The problem is that the video source isn't always perfect. From time to time frames are dropped or arrive late, etc. This causes an A/V sync issue because the PTS is generated by ffmpeg at a constant rate. The source format I am using has the PTS values, but I can't figure out a way to pass them to ffmpeg along with the raw h.264 frames.
I suppose it would be possible to create a demuxer for the custom format, but it seems like a lot of effort. I looked into ffmpeg's .nut container format, thinking that I might be able to convert from the custom container to .nut first. Unfortunately it seems more complex than it looks on the surface.
It seems like there should be an easy way to pass a frame and its PTS value to ffmpeg, but I haven't come across it yet. Any help would be appreciated.
Here is the ffmpeg command I am using
ffmpeg -f s16le -ac 1 -ar 48k -i source.audio -framerate 20 -i source.video -c:a aac -b:a 64k -r 20 -c:v h264_nvenc -rc:v vbr_hq -cq:v 19 -n out.mp4

Watermark video by ffmpeg and hardware accelerator

I use a P4000 card and ffmpeg with all the requirements in place (driver, toolkit, ffmpeg compiled with CUDA).
I want to put a watermark on the video with this command:
./bin/ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -i input.png -filter_complex "overlay=10:10" -c:v h264_nvenc output.mp4
but I encounter this error
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #1:0
Thanks for the advice
-hwaccel cuvid
means that decoded frames go directly to the hardware encoder (they are not copied to system memory), so at that point it is impossible to apply filters to the decoded frames. Try removing -hwaccel cuvid; it will be slower, but the overlay filter will work.
https://trac.ffmpeg.org/wiki/HWAccelIntro#CUDACUVIDNVDEC
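Following that suggestion, the command would look something like this (a sketch based on the original command, untested):
./bin/ffmpeg -c:v h264_cuvid -i input.mp4 -i input.png -filter_complex "overlay=10:10" -c:v h264_nvenc output.mp4
Decoding (h264_cuvid) and encoding (h264_nvenc) still run on the GPU; only the overlay is done in system memory, which is where the slowdown comes from.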

FFMPEG Hwaccel error with hwupload

I'm currently trying to use VAAPI hardware acceleration with FFMPEG.
In my command I have -hwaccel set to vaapi, -hwaccel_output_format set to vaapi, -hwaccel_device set to /dev/dri/renderD128, -vf set to format=nv12,hwupload, and the video codec -c:v set to h264_vaapi.
When I now try to start it, I'm getting the error
grep stderr: [hwupload @ 0x30bb660] A hardware device reference is required to upload frames to.
[Parsed_hwupload_1 @ 0x30bb560] Query format failed for 'Parsed_hwupload_1': Invalid argument
Can I define a hardware device reference somewhere? I thought that's what -hwaccel_device does, but apparently not. So what can I do to get this working?
You'll need to initialize your hardware accelerator correctly, as shown in the documentation below (perhaps we should create a wiki entry for this in time?):
Assume the following snippet:
ffmpeg -re -threads 4 -loglevel debug \
-init_hw_device vaapi=intel:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device intel -filter_hw_device intel \
-i 'udp://$ingest_ip:$ingest_port?fifo_size=9000000' \
-vf 'format=nv12|vaapi,hwupload' \
-c:v h264_vaapi -b:v $video_bitrate$unit -maxrate:v $video_bitrate$unit -qp:v 21 -sei +identifier+timing+recovery_point -profile:v main -level 4 \
-c:a aac -b:a $audio_bitrate$unit -ar 48000 -ac 2 \
-flags -global_header -fflags +genpts -f mpegts 'udp://$feed_ip:$feed_port'
Where:
(a). VAAPI is available, and we will bind the DRM node /dev/dri/renderD128 to the encode session, and
(b). We are taking a udp input, where $ingest_ip:$ingest_port corresponds to a known UDP input stream, matching the IP and port pairing respectively, with a defined fifo size (as indicated by the '?fifo_size=n' parameter).
(c). Encoding to an output udp stream packaged as an MPEG Transport stream (see the muxer in use, mpegts), with the necessary parameters matching the output IP and port pairing respectively.
(d). Defined video bitrates ($video_bitrate$unit, where $unit can be either K or M, as you see fit) and audio bitrates ($audio_bitrate$unit, where $unit should be K for AAC LC-based encodings) as shown above, with appropriate encoder settings passed to the VAAPI encoders. For your reference, there are four VAAPI video encoders available in FFmpeg at the time of writing, namely:
i. h264_vaapi
ii. hevc_vaapi
iii. vp8_vaapi
iv. vp9_vaapi
The mjpeg encoder is omitted, as it's not of interest in this context. Each of these encoders' documentation can be accessed via:
ffmpeg -hide_banner -h encoder=$encoder_name
Where $encoder_name matches the encoders on the list above.
For VAAPI, the following notes apply:
VAAPI-based encoders can only take input as VAAPI surfaces, so they will typically need to be preceded by a hwupload instance to convert a normal frame into a vaapi-format frame. Note that the internal format of the surface will be derived from the format of the hwupload input, so additional format filters may be required to make everything work, as shown in the snippet above:
i. -init_hw_device vaapi=intel:/dev/dri/renderD128 initializes a VAAPI hardware device named intel (which can be called up later via -hwaccel_device and -filter_hw_device, as demonstrated above), bound to the DRM render node /dev/dri/renderD128. The intel: name prefix can be dropped, but it's often useful to identify which render node is in use by a vendor name in an environment where more than one VAAPI-capable device exists, such as a rig with an Intel IGP and an AMD GPU.
ii. Take note of the format constraint defined by -hwaccel_output_format vaapi. This is needed to satisfy the surface-format constraint described above.
iii. We then enable the vaapi hardware acceleration implementation (-hwaccel vaapi) and point both the accelerator device (-hwaccel_device) and the device used by the hwupload filter (-filter_hw_device) at the named device, intel. Omitting the latter will result in the encoder initialization failure you observed.
iv. Now, inspect the video filter syntax closely:
-vf 'format=nv12|vaapi,hwupload'
This filter chain constrains incoming frames to a known format (nv12, or frames that are already vaapi surfaces) before uploading them to the device via hwupload. This is done for safety reasons; you cannot assume that the decoded format will be accepted by the encoder. Performance in this mode will vary based on the source, the decoder device and the VAAPI driver in use.
v. Now, for the video encoder (defined by -c:v $encoder_name), pass your arguments as needed. You can modify the example I provided in the snippet above (a filled-in sketch follows below), though it's wise to refer to the encoder documentation as explained earlier should you need further tuning.
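For example, a simple file-to-file variant of the snippet above with the placeholders filled in might look like this (the bitrates and file names are illustrative, not recommendations):
ffmpeg -init_hw_device vaapi=intel:/dev/dri/renderD128 \
  -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device intel -filter_hw_device intel \
  -i input.mp4 \
  -vf 'format=nv12|vaapi,hwupload' \
  -c:v h264_vaapi -b:v 6M -maxrate:v 6M -qp:v 21 -profile:v main -level 4 \
  -c:a aac -b:a 128k -ar 48000 -ac 2 \
  output.mp4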
Bonus: Dealing with the Intel-based QSV encoders:
I'm including this section for future reference, for those who use Intel's open source MSDK for FFmpeg's QSV enablement and the associated encoders. See the snippet below:
ffmpeg -re -threads 4 -loglevel debug \
-init_hw_device qsv=qsv:MFX_IMPL_hw_any -hwaccel qsv -filter_hw_device qsv \
-i 'udp://$ingest_ip:$ingest_port?fifo_size=9000000' \
-vf 'hwupload=extra_hw_frames=10,vpp_qsv=deinterlace=2,format=nv12' \
-c:v h264_qsv -b:v $video_bitrate$unit -rdo 1 -pic_timing_sei 1 -recovery_point_sei 1 -profile high -aud 1 \
-c:a aac -b:a $audio_bitrate$unit -ar 48000 -ac 2 \
-flags -global_header -fflags +genpts -f mpegts 'udp://$feed_ip:$feed_port'
You can see the similarities.
The QSV encoders use VAAPI-style mappings (as explained above), but with an extra constraint placed for the hwupload filter: The hwupload=extra_hw_frames=10 parameter must be used, or the encoder's initialization will fail.
One of the reasons I cannot recommend QSV's encoders, despite their supposedly better output quality, is their fragile mapping, which often fails with some of the most unhelpful errors, often unrelated to how the encoder actually failed.
Where possible, stick to VAAPI. QSV's usefulness (where applicable) is for low power encoding, as is the case with Intel's Apollolake and anemic Cannonlake initial offerings.
Hope this documentation will be of use to you.

Using FFMPEG to losslessly convert YUV to another format for editing in Adobe Premiere

I have a raw YUV video file that I want to do some basic editing to in Adobe CS6 Premiere, but it won't recognize the file. I thought to use ffmpeg to convert it to something Premiere would take in, but I want this to be lossless because afterwards I will need it in YUV format again. I thought of avi, mov, and prores but I can't seem to figure out the proper command line to ffmpeg and how to ensure it is lossless.
Thanks for your help.
Yes, this is possible. It is normal that you can't open that raw video file, since it is just raw data in one giant file, without any headers, so Adobe Premiere doesn't know the frame size, frame rate, etc.
First make sure you have downloaded the FFmpeg command-line tool. After installing it, you can start converting by running a command with the right parameters. There are some parameters you have to fill in yourself before starting to convert:
Which YUV pixel format are you using? The most common format is YUV 4:2:0 planar 8-bit (yuv420p). You can type ffmpeg -pix_fmts to get a list of all available formats.
What is the framerate? In my example I will use -r 25 fps.
What encoder do you want to use? The libx264 (H.264) encoder is a great one for lossless compression.
What is your framesize? In my example I will use -s 1920x1080
Then we get this command to do your compression.
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i inputfile.yuv -c:v libx264 -preset ultrafast -qp 0 output.mp4
A little explanation of all other parameters:
With -f rawvideo you set the input format to a raw video container
With -vcodec rawvideo you set the input file as not compressed
With -i inputfile.yuv you set your input file
With -c:v libx264 you set the encoder to encode the video to libx264.
The -preset ultrafast setting only speeds up the compression, so your file size will be bigger than if you set it to veryslow.
With -qp 0 you set the quality to lossless: 0 is the best, 51 the worst quality on this scale.
Then output.mp4 is your new container to store your data in.
After you are done in Adobe Premiere, you can convert it back to a YUV file by inverting almost all of the parameters. FFmpeg recognizes what's inside the mp4 container, so you don't need to provide parameters for the input.
ffmpeg -i input.mp4 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 rawvideo.yuv
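If you want to verify that the round trip really was lossless, you can compare per-frame hashes of the original and the re-extracted raw files using ffmpeg's built-in framemd5 muxer (file names here are just examples):
ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 -i inputfile.yuv -f framemd5 original.framemd5
ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 -i rawvideo.yuv -f framemd5 roundtrip.framemd5
If the two output files are identical, no video information was lost.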
