CLI for ffmpeg and multiple GPU cards - ffmpeg

I have multiple GPU video cards. How do I tell ffmpeg to use a certain video card for video compression? Right now I type ffmpeg -i infile.mp4 -c:v libx264 -x264opts opencl; does ffmpeg allocate the first device that supports OpenCL?

There is some info here https://trac.ffmpeg.org/wiki/HWAccelIntro, specifically the -hwaccel_device option, but it is CUDA related.
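For reference, the CUDA route described on that wiki page selects a device roughly like this (a hedged sketch assuming an NVIDIA-enabled ffmpeg build; -hwaccel_device picks the decoding device and the NVENC encoders accept a -gpu index, but neither option controls x264's OpenCL lookahead):
ffmpeg -hwaccel cuda -hwaccel_device 1 -i infile.mp4 -c:v h264_nvenc -gpu 1 outfile.mp4
Here 1 is the index of the second GPU as reported by the driver tools (e.g. nvidia-smi).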

Related

fastest way to re-encode x264 videos using ffmpeg

Hello, I am looking for the fastest way to re-encode x264 mp4 files and insert keyframes while using as few resources as possible.
Thank you.
Add -preset ultrafast:
ffmpeg -i input.mp4 -codec:v libx264 -preset ultrafast -force_key_frames "expr:gte(t,n_forced*4)" -hls_time 4 -hls_playlist_type vod -hls_segment_type mpegts vid.m3u8
In general it is recommended to use the slowest -preset you have patience for, since slower presets compress more efficiently; ultrafast trades efficiency for speed. See FFmpeg Wiki: H.264 for more info.
You can also add -codec:a copy to stream copy the audio to avoid re-encoding it.
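For example, the same command as above with the audio stream copied instead of re-encoded:
ffmpeg -i input.mp4 -codec:v libx264 -preset ultrafast -force_key_frames "expr:gte(t,n_forced*4)" -codec:a copy -hls_time 4 -hls_playlist_type vod -hls_segment_type mpegts vid.m3u8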
If you have access to a hardware accelerated H.264 encoder then you can look into using that instead, but none are as efficient as x264 (quality per bit).
fastest way to re-encode [...] while using minimal resource as possible
These goals are mutually exclusive. Video encoding is resource intensive: you have to compromise between encoding speed, efficiency, and resource use. If you want to use fewer resources, the result is slower encoding and/or lower quality per bit.

Optimally using hevc_videotoolbox and ffmpeg on OSX

I'm using ffmpeg 4.3.1 to convert videos from H.264 to H.265, and initially I was excited to discover that I can use my Mac's GPU to speed up the conversion with the hevc_videotoolbox encoder.
My Mac hardware is a 10th generation Intel i5 with an AMD Radeon Pro 5300.
I'm using this command:
ffmpeg -i input_h264.mp4 -c:v hevc_videotoolbox -b:v 6000K -c:a copy -crf 19 -preset veryslow output_h265.mp4
The conversion speed increased from 0.75x to 4x, almost a 500% improvement!
But then I noticed large file sizes and slightly fuzzy results. I also noticed that changing the CRF or the preset makes no difference; ffmpeg seems to ignore those settings. The only setting that seems to have an effect is the video bit rate (-b:v).
So I started to google around to see how I could get better results.
But except for a few posts here and there, I'm mostly coming up blank.
Where can I get documentation on how to get better results using hevc_videotoolbox?
How can I find out what settings work and which ones are ignored?
Use the constant quality mode of VideoToolbox on Apple Silicon to achieve high speed, high quality, and small size. This works with FFmpeg 4.4 and higher; it's based on this commit.
Note that this does not work with Rosetta 2.
Compile ffmpeg for macOS or use ffmpeg from Homebrew (brew install ffmpeg)
Run with -q:v 65. The value range is 1-100; the higher the number, the better the quality. 65 seems to be acceptable.
For example:
ffmpeg -i in.avi -c:v hevc_videotoolbox -q:v 65 -tag:v hvc1 out.mp4
Listing options
Run ffmpeg -h encoder=hevc_videotoolbox to list options specific to hevc_videotoolbox.
Use -b:v to control quality. -crf only applies to libx264, libx265, libvpx, and libvpx-vp9; it is ignored by other encoders. hevc_videotoolbox also ignores -preset.
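For example, a bitrate-controlled VideoToolbox encode (a sketch; pick a bitrate that suits your content):
ffmpeg -i in.mp4 -c:v hevc_videotoolbox -b:v 6000k -tag:v hvc1 -c:a copy out.mp4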
hevc_videotoolbox isn't as good as libx265, but it is fast
Like most hardware accelerated encoders, hevc_videotoolbox is not as efficient as libx265. So you may have to give it a significantly higher bitrate to match an equivalent quality compared to libx265. This may defeat the purpose of re-encoding from H.264 to HEVC/H.265.
Avoid re-encoding if you can
Personally, I would avoid re-encoding to prevent generation loss unless the originals were encoded very inefficiently and drive space was more important.
VideoToolbox can only use the -b:v setting; the CRF is ignored. You can run a few test encodes to get an idea of what video bitrate is "equivalent" to the CRF you desire, then use that bitrate.
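A quick way to run such tests is to encode a short excerpt at a few different bitrates and compare the results (a sketch; the -ss/-t values and the bitrate are arbitrary):
ffmpeg -ss 60 -t 30 -i input.mp4 -c:v hevc_videotoolbox -b:v 4000k -tag:v hvc1 -c:a copy test_4000k.mp4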

How to improve the output video quality with ffmpeg and h264_videotoolbox flag?

Currently I am using this command to convert
ffmpeg -i <srcfile> -vcodec libx264 -profile:v main -level 3.1 -preset slower -crf 18 -x264-params ref=4 -acodec copy -movflags +faststart <outfile>
to convert some dashcam footage for viewing on an iOS device.
The above command took about 30 minutes to complete on a 2017 MacBook Pro with 16 GB of RAM.
I want to speed it up. One thing I tried is to harness the GPU in the computer, so I added the flag -c:v h264_videotoolbox.
It sped up by a lot. I can complete the conversion in 1 min.
However, when I inspected the output, the GPU version suffers from banding and blurriness.
Here is a screenshot, CPU version on the left and GPU version on the right. The parts of the videos that highlight the difference most clearly are trees in reflections and a corrugated iron sheet wall.
Is there any switch that I can manipulate to make the GPU version clearer?
This is a simplistic H.264 encoder compared to x264, so you're not going to get the same quality per bitrate. h264_videotoolbox is optimized for speed and does not support -crf.
You can view some options specific to this encoder with ffmpeg -h encoder=h264_videotoolbox, but as they are probably already set to "auto" (I didn't confirm via source code and I don't have the hardware to try it) these additional options may not make much of a difference.
So you'll just have to increase the bitrate, such as with -b:v 8000k.
Or continue to use libx264 with a faster -preset.
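For example, a sketch of the dashcam command using the hardware encoder at a higher bitrate (the <srcfile>/<outfile> placeholders are from the question; increase -b:v further if banding persists):
ffmpeg -i <srcfile> -c:v h264_videotoolbox -b:v 8000k -profile:v main -acodec copy -movflags +faststart <outfile>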
I see the question was answered nearly two years ago; I'm jumping in for others who might stumble on this thread. I get great results with VideoToolbox as the encoder, using either the GPU or software acceleration, depending on which machine I am using.
As already mentioned, setting a constant bitrate and adjusting it upward is key to producing a result that is nearly indistinguishable from a large source file. When paired with the other key parameters below, a constant bitrate is as effective as two-pass encoding for high-quality output, and is much quicker than two-pass.
It may seem counter-intuitive, but a computer running on all threads at full throttle to encode a video won't give you the best results. Several researchers have demonstrated that quality actually goes down if all CPU threads are engaged in encoding; it is better to use fewer threads, and even to throttle ffmpeg with a third-party app (encoding does not slow down significantly, in my experience). So limit threads on newer multithread desktops and laptops.
Common practice for target bitrates (as seen on Netflix, Amazon) varies with resolution, naturally: at least 5,000 kbps for 1080p and 3,500 kbps for 720p. For a noticeable improvement in video quality, the encoder bitrate should be set to at least 1.5 times those common-practice bitrates: i.e., 7,500 for 1080p, 5,250 for 720p. Similarly for 4K GoPros or dash cams.
Often I work with large movie files from my Blu-ray library and create slimmed-down versions that are 1/3 to 1/2 the size of the original (a 20 GB original gives way to a file of 8-10 GB with no perceptible loss of quality). Also: framerate. Maintaining the same framerate from source to slimmed-down file is essential, so that parameter is important. Framerate is either 24 fps, 25 fps, or 30 fps for theatrical film, European TV, and North American TV, respectively. (Except that in transferring film to a TV screen, 24 fps becomes 23.976 fps in most cases.) Of course 60 fps is common for GoPro-like cameras, but there 30 fps would be a reasonable choice.
It is this control of framerate and bitrate that keeps ffmpeg in check and gives you predictable, repeatable results, not an errant, gigantic file that is larger than the one you started with.
I work on a Mac, so there may be slight differences on the command line, and here I use VideoToolbox as a software encoder, but a typical command reads:
ffmpeg -loglevel error -stats -i source.video -map 0:0 -filter:v fps=24000/1001 -c:v h264_videotoolbox -b:v 8500k -profile 3 -level 41 -coder cabac -threads 4 -allow_sw:v 1 -map 0:1 -c:a:0 copy -disposition:a:0 default -map 0:6 -c:s:0 copy -disposition:s:0 0 -metadata:g title="If you want the file title in the metadata, it goes here" -default_mode passthrough outfile.mkv
-loglevel error (show only error messages, useful for troubleshooting)
-stats (provides progress status in the terminal window)
-i infile (source video to transcode)
-map 0:0 (specify each stream in the original to map to output)
-filter:v fps\=24000/1001 (framerate of 23.976, like source)
-c:v h264_videotoolbox (encoder)
-b:v (set bitrate, here I chose 8500k)
-profile 3 -level 41 (h264 profile high, level 4.1)
-coder cabac (cabac coder chosen)
-threads 4 (limit of 4 cpu threads, of 8 on this laptop)
-allow_sw:v 1 (use VideoToolbox software encoding for acceleration; the GPU is not enabled)
-map 0:1 -c:a:0 copy -disposition:a:0 default (copies audio stream over, unchanged, as default audio)
-map 0:6 -c:s:0 copy -disposition:s:0 0 (copies subtitle stream over, not as default, i.e., subtitles will not play automatically)
-metadata:g (global metadata, you can reflect filename in metadata)
-default_mode passthrough (allow audio w/o further processing)
outfile (NOTE: no dash precedes the filename/path. I chose the mkv format to hold my multiple streams; mp4 or other formats work just fine, as long as the contents are appropriate for the format.)
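A simplified sketch of the same approach for a source with only one video and one audio stream (assumed stream layout; adjust -b:v and the fps value to match your source):
ffmpeg -loglevel error -stats -i source.mkv -map 0:v:0 -filter:v fps=24000/1001 -c:v h264_videotoolbox -b:v 8500k -allow_sw:v 1 -map 0:a:0 -c:a copy outfile.mkv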
In addition to llogan's answer, I'd recommend setting the 'realtime' property to zero (this can increase quality in motion scenes); see the combined example below.
As llogan says, the bitrate option is a good parameter in this situation.
ffmpeg -i input.mov -c:v h264_videotoolbox -b:v {bitrate} -c:a aac output.mp4
If you want to set a 1000 kb/s bitrate, the command is like this:
ffmpeg -i input.mov -c:v h264_videotoolbox -b:v 1000k -c:a aac output.mp4
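Combining both suggestions (hedged: -realtime is listed by ffmpeg -h encoder=h264_videotoolbox on VideoToolbox-enabled builds):
ffmpeg -i input.mov -c:v h264_videotoolbox -b:v 1000k -realtime 0 -c:a aac output.mp4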

Enable QSV for FFmpeg with directshow input and JPEG image sequence output

I'm using FFmpeg with a DirectShow input. The output is a series of single JPEG images. FFmpeg itself maps the stream to mjpeg and uses image2 for the output.
Is it possible to increase performance by using the Intel QuickSync Video (QSV) hardware acceleration for this process? The FFmpeg QuickSync Wiki actually lists JPEG encoding since Braswell.
This is what I tried so far:
ffmpeg -init_hw_device qsv=hw -filter_hw_device hw -f dshow -video_size 3840x2160 -framerate 25 -i "video=My Webcam" -vf hwupload=extra_hw_frames=64,format=qsv -vcodec mjpeg_qsv "C:\out\%d.jpg"
The command works and images are generated, but the GPU load seems to be the same as without any QSV options?
Thanks!
I compared it for an H.264-encoded video. First with the QSV JPEG encoder:
ffmpeg -c:v h264_qsv -i myvideo.mkv -vcodec mjpeg_qsv images/%06d.jpg
and afterwards without mjpeg_qsv:
ffmpeg -c:v h264_qsv -i myvideo.mkv images/%06d.jpg
The example is minimalistic and can be improved in many ways. In the picture you can see the load of the GPU: the red box is with mjpeg_qsv and the blue box is without mjpeg_qsv. The execution time was also better with mjpeg_qsv: speed=3.34x vs speed=1.84x. Since you are using your webcam as a source, your pipeline is limited by the frame rate of the live video, so your hardware will only process 25 frames per second (-framerate 25). Depending on the resolution and your hardware, this might be an easy job for your GPU in both cases. Also, make sure you look at the Intel GPU and not your Nvidia/AMD GPU. If you still have the impression that you can't see a performance gain, please let us know both of your commands for comparison.

How to get the smallest possible delay on a live HLS stream to a Google Cast device?

I am streaming a live HLS buffer with ffmpeg and I want to play it back on a Chromecast device with the lowest latency possible.
The best result I have so far is with this command:
ffmpeg -y -f x11grab -video_size 1280x720 -i :99 -f alsa -ac 2 -i pulse -fflags nobuffer -vcodec libx264 -r 24 -preset superfast -pix_fmt yuv420p -g 6 -hls_list_size 5 -hls_time 0 -strict -2 video/test.m3u8
The main issue I have is that my Google Cast seems to have a bigger buffer than VLC configured with a buffer size of 0; there is about a 3-second difference. Is there a way to make sure the device uses the smallest possible buffer?
I looked at the Cast reference and I haven't found anything yet.
Based on this blog, if videos are choppy or suffer from constant buffering interruptions, it is recommended that you reduce your video playback settings. This can be done in the Chromecast options in the upper-right-hand corner of your Chrome browser. Click the box, select Options, and reduce your streaming to Standard (480p). The video quality will take a small hit, but it should be watchable with little to no interruption. You may also check this page for more recommendations.

Resources