I'm currently trying to use VAAPI hardware acceleration in FFmpeg.
In my command, I have -hwaccel set to vaapi, -hwaccel_output_format set to vaapi, -hwaccel_device set to /dev/dri/renderD128, -vf set to format=nv12,hwupload, and the video codec -c:v set to h264_vaapi.
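For reference, the command looks roughly like this (a reconstruction from the options above; the input and output paths are placeholders):
ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device /dev/dri/renderD128 \
-i input.mp4 -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4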
When I try to start it, I get the error:
grep stderr: [hwupload # 0x30bb660] A hardware device reference is required to upload frames to.
[Parsed_hwupload_1 # 0x30bb560] Query format failed for 'Parsed_hwupload_1': Invalid argument
Can I define a hardware device reference somewhere? I thought that's what -hwaccel_device does, but apparently not. So what can I do to get this working?
You'll need to initialize your hardware accelerator correctly, as shown in the documentation below (perhaps we should create a wiki entry for this in time?):
Assume the following snippet:
ffmpeg -re -threads 4 -loglevel debug \
-init_hw_device vaapi=intel:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device intel -filter_hw_device intel \
-i 'udp://$ingest_ip:$ingest_port?fifo_size=9000000' \
-vf 'format=nv12|vaapi,hwupload' \
-c:v h264_vaapi -b:v $video_bitrate$unit -maxrate:v $video_bitrate$unit -qp:v 21 -sei +identifier+timing+recovery_point -profile:v main -level 4 \
-c:a aac -b:a $audio_bitrate$unit -ar 48000 -ac 2 \
-flags -global_header -fflags +genpts -f mpegts 'udp://$feed_ip:$feed_port'
Where:
(a). VAAPI is available, and we will bind the DRM node /dev/dri/renderD128 to the encode session, and
(b). We are taking a UDP input, where $ingest_ip:$ingest_port corresponds to a known UDP input stream, matching the IP and port pairing respectively, with a defined fifo size (as indicated by the '?fifo_size=n' parameter).
(c). Encoding to an output udp stream packaged as an MPEG Transport stream (see the muxer in use, mpegts), with the necessary parameters matching the output IP and port pairing respectively.
(d). Defined video bitrates ($video_bitrate$unit, where $unit can be either K or M, as you see fit) and audio bitrates ($audio_bitrate$unit, where $unit should be K for AAC-LC-based encodings) as shown above, with appropriate encoder settings passed to the VAAPI encoders. For your reference, there are four VAAPI video encoders available in FFmpeg as of this writing, namely:
i. h264_vaapi
ii. hevc_vaapi
iii. vp8_vaapi
iv. vp9_vaapi
This omits the mjpeg encoder, as it's not of interest in this context. Each of these encoders' documentation can be accessed via:
ffmpeg -hide_banner -h encoder=$encoder_name
Where $encoder_name matches one of the encoders in the list above.
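For example, to inspect the H.264 encoder's options:
ffmpeg -hide_banner -h encoder=h264_vaapi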
For VAAPI, the following notes apply:
VAAPI-based encoders can only take input as VAAPI surfaces, so the encoder will typically need to be preceded by a hwupload instance to convert a normal frame into a VAAPI-format frame. Note that the internal format of the surface will be derived from the format of the hwupload input, so additional format filters may be required to make everything work, as shown in the snippet above:
i. -init_hw_device vaapi=intel:/dev/dri/renderD128 initializes a VAAPI hardware device named intel (which can be referenced later via -hwaccel_device and -filter_hw_device, as demonstrated above), bound to the DRM render node /dev/dri/renderD128. The intel name can be dropped, but it's often useful to identify the render node by a vendor name in an environment where more than one VAAPI-capable device exists, such as a rig with an Intel IGP and an AMD GPU.
ii. Take note of the format constraint defined by -hwaccel_output_format vaapi. This is needed to satisfy the constraint noted above: the VAAPI encoders only accept VAAPI surfaces as input.
iii. We then pick up the named device, intel, and use it both as the hardware accelerator device (-hwaccel_device) and as the device to which we will upload frames via the hwupload filter (-filter_hw_device). Omitting the latter will result in the encoder initialization failure you observed.
iv. Now, inspect the video filter syntax closely:
-vf 'format=nv12|vaapi,hwupload'
This video filter chain constrains any incoming software frames to nv12 (while passing through frames already in the vaapi hardware format) before uploading them to the device via hwupload. This is done for safety reasons; you cannot assume that the decoded format will be accepted by the encoder. Performance in this mode will vary, based on the source, the decoder device and the VAAPI driver in use.
v. Now, for the video encoder (defined by -c:v $encoder_name), pass your arguments as needed. You can modify the example I provided in the snippet above, though it's wise to refer to the encoder documentation as explained earlier should you need further tuning.
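Putting points i through v together, a stripped-down file-to-file variant of the snippet above would look something like this (input, output and the QP value are placeholders; adjust them to your needs):
ffmpeg -init_hw_device vaapi=intel:/dev/dri/renderD128 \
-hwaccel vaapi -hwaccel_output_format vaapi -hwaccel_device intel -filter_hw_device intel \
-i input.mkv \
-vf 'format=nv12|vaapi,hwupload' \
-c:v h264_vaapi -qp 23 \
-c:a copy output.mkv
Note how the same device name (intel) is referenced by both the decoder and the filter graph; the missing -filter_hw_device is exactly what produced the "A hardware device reference is required" error above.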
Bonus: Dealing with the Intel-based QSV encoders:
I'm including this section for future reference, for those who use Intel's open-source Media SDK (MSDK) for FFmpeg's QSV enablement and the associated encoders. See the snippet below:
ffmpeg -re -threads 4 -loglevel debug \
-init_hw_device qsv=qsv:MFX_IMPL_hw_any -hwaccel qsv -filter_hw_device qsv \
-i 'udp://$ingest_ip:$ingest_port?fifo_size=9000000' \
-vf 'hwupload=extra_hw_frames=10,vpp_qsv=deinterlace=2,format=nv12' \
-c:v h264_qsv -b:v $video_bitrate$unit -rdo 1 -pic_timing_sei 1 -recovery_point_sei 1 -profile high -aud 1 \
-c:a aac -b:a $audio_bitrate$unit -ar 48000 -ac 2 \
-flags -global_header -fflags +genpts -f mpegts 'udp://$feed_ip:$feed_port'
You can see the similarities.
The QSV encoders use VAAPI-style mappings (as explained above), but with an extra constraint placed for the hwupload filter: The hwupload=extra_hw_frames=10 parameter must be used, or the encoder's initialization will fail.
One of the reasons I cannot recommend the QSV encoders, despite their supposedly better output quality, is their fragile mapping, which often fails with some of the most unhelpful errors, frequently unrelated to how the encoder actually failed.
Where possible, stick to VAAPI. QSV's usefulness (where applicable) is in low-power encoding, as is the case with Intel's Apollo Lake parts and the anemic initial Cannon Lake offerings.
Hope this documentation will be of use to you.
Related
I'm trying to convert my library from various formats into HEVC 8-bit mainly to shrink my library down. This is generally working but I've run into an issue when trying to convert an existing file from 10-bit H.265 to 8-bit H.265.
My processor, an Intel Celeron J3455, supports hardware decoding/encoding H.265 at 8-bit but only hardware decoding for 10-bit.
It seems that ffmpeg is attempting to keep the video as 10-bit to match the source rather than allowing me to convert to 8-bit and this is creating an error.
Here is a sample command that I'm using:
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input.10bit.x265.mkv -map 0:0 -c:v:0 hevc_vaapi -vf "scale_vaapi=w=-1:h=1080" -b:v 4027047 -map 0:1 -c:a:0 aac -b:a 384000 -ac 6 -map 0:s -scodec copy -map_metadata:g -1 -metadata JBDONEVERSION=1 -metadata JBDONEDATE=2020-06-06T20:52:36.072Z -map_chapters 0 output.8bit.x265.mkv
The error I get is:
[hevc_vaapi # 0x5568b27fb1c0] No usable encoding entrypoint found for profile VAProfileHEVCMain10 (18).
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
From what I can tell, ffmpeg looks at the source and selects VAProfileHEVCMain10 instead of VAProfileHEVCMain. I'd like to force it to output 8-bit.
I've tried adding -pix_fmt yuv420p but that gives me this error:
Incompatible pixel format 'yuv420p' for codec 'hevc_vaapi', auto-selecting format 'vaapi_vld'
I've also tried making this change to the command: "scale_vaapi=w=-1:h=1080,format=yuv420p"
However that gives me the error:
Impossible to convert between the formats supported by the filter 'Parsed_scale_vaapi_0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Any suggestions?
I've just been figuring this out as well. Your problem is (most likely) with -hwaccel_output_format vaapi. It's outputting frames in the VAAPI format and not the format you need (read more here; I've also quoted a section at the end of this comment). So you need to adjust for 8-bit there: -hwaccel_output_format yuv420p.
In my case I'm also using -filter_hw_device vaapi0 -vf format=nv12|vaapi,hwupload (specified before -c:v hevc_vaapi). The vaapi0 here is a named device I've initialised with init_hw_device. You're directly using a path with -hwaccel_device so I'm not sure what the name of your device is, but you may not need these extra arguments.
The hardware codecs used by VAAPI are not able to access frame data in arbitrary memory. Therefore, all frame data needs to be uploaded to hardware surfaces connected to the appropriate device before being used. All VAAPI hardware surfaces in ffmpeg are represented by the vaapi pixfmt (the internal layout is not visible here, though).
The hwaccel decoders normally output frames in the associated hardware format, but by default the ffmpeg utility downloads the output frames to normal memory before passing them to the next component. This allows the decoder to work standalone to speed up decoding without any additional options:
ffmpeg -hwaccel vaapi ... -i input.mp4 -c:v libx264 ... output.mp4
For other outputs, the option -hwaccel_output_format can be used to specify the format to be used. This can be a software format (which formats are usable depends on the driver), or it can be the vaapi hardware format to indicate that the surface should not be downloaded.
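Putting the above together with your original command, an adjusted version might look roughly like this. This is an untested sketch: va0 is just a device name I picked, whether your driver can download 10-bit surfaces to yuv420p may vary, and I've dropped your -map/-metadata/subtitle options for brevity (keep them as they were):
ffmpeg -init_hw_device vaapi=va0:/dev/dri/renderD128 \
-hwaccel vaapi -hwaccel_device va0 -hwaccel_output_format yuv420p -filter_hw_device va0 \
-i input.10bit.x265.mkv \
-vf "format=nv12,hwupload,scale_vaapi=w=-1:h=1080" \
-c:v hevc_vaapi -b:v 4027047 \
-c:a aac -b:a 384000 -ac 6 \
output.8bit.x265.mkv
The idea is that the decoder downloads to an 8-bit software format, and format=nv12,hwupload re-uploads 8-bit surfaces, so hevc_vaapi should initialise with the Main (8-bit) profile instead of Main 10.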
I'm developing the platform for 1-1 video calls with recording. For my purposes, I work with the following stack: WebRTC, Kurento Media Server, FFmpeg.
It works perfectly in an ideal environment, but if my users have a poor connection, after the recording I see a lot of problems with out-of-sync audio and video tracks.
As I understand it, the problem appears due to incorrect timestamps, so I'm doing a bit of post-processing where I generate new timestamps, and it helps!
Here is the command example:
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "fps=30, setpts=PTS-STARTPTS" \
-acodec libvorbis -vcodec libvpx \
-vsync 1 -async 1 -r 30 -threads 4 out.webm
After that, I faced one more problem. If the user has a poor connection, WebRTC can dynamically change the video resolution. After post-processing such videos (with different resolutions within the same video), I see a frozen image from the moment the resolution changed until the end of the video. There are no errors in the FFmpeg logs, just information about the resolution change:
[libvpx # 0x559335713440] dimension change! 480x270 -> 320x180
-async is forwarded to lavfi similarly to -af aresample=async=1:min_hard_comp=0.100000.
After analyzing the logs, I realized that the problem was due to the STARTPTS parameter, which became very large after the resolution changed automatically (equal to the number of frames that came before the change). I tried removing STARTPTS and leaving only PTS.
After that, the video started to work well, but only until the video resolution is dynamically changed; then the audio and video tracks go out of sync again.
I've tried scaling the videos to a static resolution before fixing the timestamps, and it helps, but it's a bit of extra work. Command example:
ffmpeg -acodec libopus -vcodec libvpx \
-i in.webm \
-vf scale=640:480 \
-acodec libvorbis -vcodec libvpx \
-threads 4 out.webm
Also I've tried to combine both commands using filter_complex, but it didn't work.
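Roughly, the combined attempt looked something like this (simplified):
ffmpeg -fflags +genpts -acodec libopus -vcodec libvpx \
-i in.webm \
-filter_complex "scale=640:480, fps=30, setpts=PTS-STARTPTS" \
-acodec libvorbis -vcodec libvpx \
-vsync 1 -async 1 -r 30 -threads 4 out.webm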
I haven't worked with FFmpeg for very long, so maybe I'm doing something wrong? Maybe there are easier ways to do this?
Since Kurento uses GStreamer for the video recording, maybe it would be a better option to reconfigure Kurento to fix the timestamps during recording?
I can provide any videos and commands which I use.
I'm using:
Kurento Media Server 6.9.0,
FFmpeg 4.1
Hello everyone.
I'm trying to use FFmpeg to record video and 3 audio sources and generate 3 different video files: each file should contain the same video stream but a different audio stream. The problem is that I'm getting audio sync issues. The first audio stream is synced perfectly, but the second one lags by about 1 second, and the third one by about 2 seconds.
I've made a few tests so far, and it seems that the root cause of the issue is the initialization time of the video/audio devices: one device is already recording while the next one is still being opened, and so on. I've tried changing the order of the input devices; the audio streams still have the same issue, BUT whereas before the 2nd and 3rd audio streams were slightly ahead of the video, after reordering they lag behind it (audio for the same event appears with some delay). So this test confirms my theory about device initialization times.
But the question remains: why is the first audio stream synchronized properly while the other two are not? And how can I overcome this issue? Any workarounds and ideas are highly appreciated.
Here is the FFmpeg command I'm using and its output.
ffmpeg.exe -f dshow -video_size 1920x1080 -i video="Logitech HD Webcam C615"
-f dshow -i audio="Microphone (HD Webcam C615)"
-f dshow -i audio="Microphone Array (Realtek High Definition Audio)"
-filter_complex "[1:a]volume=1[a1];[2:a]volume=1[a2]"
-vf scale=h=1080:force_original_aspect_ratio=decrease
-vcodec libx264 -pix_fmt yuv420p -crf 23 -preset ultrafast
-acodec aac -vbr 5 -threads 0
-map v:0 -map [a1] -map [a2] -f tee
"[select=\'v,a:0\']C:/Users/vshevchu/Desktop/123/111/111_jjj1.avi|[select=\'v,a:1\']C:/Users/vshevchu/Desktop/123/111/111_jjj2.avi"
OUTPUT
PS. Actually, the issue is exactly the same when I'm not using the "tee" muxer but writing all the audio streams to one container. So, "tee" isn't a suspect.
Hello, I need to have two versions of the same file stored on my server, medium and HD quality. The thing is that I don't really know ffmpeg that well, so I'm just trying code at random. I'm using the code below, but I end up with a much larger file. However, it works; it plays.
ffmpeg -i inputfile.wmv -vcodec libx264 -ar 44100 -b 200 -ab 56 -crf 22 -s 360x288 -vpre medium -f flv tmp.flv
I just need the two commands to create the two different files.
You need to give more information about what bitrate, quality or target file size you are aiming for and the size and quality of your source material preferably including codecs used and relevant parameters.
You should read the manual or ffmpeg -h or both. There are several problems with your command line:
You are using constant rate factor, crf = 22, while still trying to limit the bitrate using -b 200.
Bitrate is specified in bits/s (unless you are using a very old ffmpeg), and 200 bps is not usable for anything, add k to get kilobits/s.
You have not specified an audio codec, but you have specified an audio bitrate. ffmpeg will try to guess the audio codec for you, but I don't know which codec is the default for .flv files.
I'm assuming that the command line you posted is supposed to be for the 'medium' quality file.
Some suggestions that you can try:
Try this first: specify audio codec, e.g. -acodec libmp3lame, or if the audio is already in a good format you can just copy it without modification using -acodec copy
Try a different rate factor, e.g. -crf 30, higher numbers mean uglier picture quality, but also smaller file size.
Try a different encoder preset, e.g. -vpre slow. In general, the slower presets enable features that require more CPU cycles when encoding but result in better picture quality; see x264 --fullhelp or this page to see what each preset contains.
Do a 2-pass encode (a rough sketch is shown below).
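A rough sketch of a 2-pass encode, using the old-style options from this answer (the bitrate is just an example; on Windows write NUL instead of /dev/null):
ffmpeg -i inputfile.wmv -vcodec libx264 -vpre medium -b 512k -pass 1 -an -f flv -y /dev/null
ffmpeg -i inputfile.wmv -vcodec libx264 -vpre medium -b 512k -pass 2 -acodec libmp3lame -ab 56k -ar 44100 -f flv tmp.flv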
If you don't want to read all the documentation for ffmpeg and the codec parameters that you need I suggest you look at this cheat sheet, although the command line switches have changed over the different versions of ffmpeg so the examples might not work.
An example command line:
ffmpeg -i inputfile.wmv -vcodec libx264 -crf 25 -s 360x288 -vpre veryslow -acodec libmp3lame -ar 44100 -ab 56k -f flv tmp.flv
The parameter -s [size] is the size of the output video in pixels. For the HD file you probably want something around 1280x720; if your material has a 5:4 ratio (as 360x288 does), you'll want to try 1280x1024, 960x768 or 900x720. Don't set a size larger than the source material, as that will simply upscale the video and you will (probably) end up with a larger file without any noticeable improvement in quality. The -ab parameter is the audio bitrate; you'll probably want to increase this on the HD version as well.
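For the HD file you would then use the same kind of command with a larger frame size and a higher audio bitrate, for example (assuming a 5:4 source as discussed; adjust -crf, -s and -ab to taste):
ffmpeg -i inputfile.wmv -vcodec libx264 -crf 23 -s 1280x1024 -vpre slow -acodec libmp3lame -ar 44100 -ab 128k -f flv tmp_hd.flv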
I'm dealing with a very big issue with bit rate. ffmpeg provides the -b option for the bit rate, and for adjustment it provides -minrate, -maxrate and -bufsize, but it doesn't work properly. If I give 256kbps with the -b option, when the transcoding finishes the result is 380kbps. How can we achieve a constant bit rate using ffmpeg? A deviation of +-10kbps would be acceptable, but the video bit rate always exceeds the target by 50-100kbps.
I'm using following command
ffmpeg -i "demo.avs" -vcodec libx264 -s 320x240 -aspect 4:3 -r 15 -b 256kb \
-minrate 200kb -maxrate 280kb -bufsize 256kb -acodec libmp3lame -ac 2 \
-ar 22050 -ab 64kb -y "output.mp4"
When transcoding is done, MediaInfo shows an overall bit rate of 440kb (it should be 320kb).
Is there something wrong with the command, or do I have to use some other parameter? Please provide your suggestions; it's very important.
Those options don't do what you think they do. From the FFmpeg FAQ:
3.18 FFmpeg does not adhere to the -maxrate setting, some frames are bigger than
maxrate/fps.
Read the MPEG spec about video buffer verifier.
3.19 I want CBR, but no matter what I do frame sizes differ.
You do not understand what CBR is, please read the MPEG spec. Read
about video buffer verifier and constant bitrate. The one sentence
summary is that there is a buffer and the input rate is constant, the
output can vary as needed.
Let me highlight a sentence for you:
The one sentence summary is that there is a buffer and the input rate is constant, the output can vary as needed.
That means, in essence, that -maxrate and the other settings don't control the output stream rate the way you thought they did.
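As an illustration of what -maxrate and -bufsize actually control (the VBV buffer), a rate-capped encode typically sets maxrate equal to the target bitrate and gives the buffer an explicit size, for example (the numbers here are only an example):
ffmpeg -i "demo.avs" -vcodec libx264 -s 320x240 -r 15 -b 256k -maxrate 256k -bufsize 512k \
-acodec libmp3lame -ac 2 -ar 22050 -ab 64k -y "output.mp4"
Even then, individual frame sizes and the short-term rate will vary; the constraint only guarantees that a decoder buffer of -bufsize bits filled at -maxrate never underflows, exactly as the FAQ describes.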