FFmpeg streaming: low CRF results in low quality

I am streaming (live chat) with FFmpeg using the following parameters:
ffmpeg -f dshow -rtbufsize 100M -i video="device":audio="device" \
-c:v libx264 -crf 30 -preset ultrafast -tune zerolatency \
-c:a aac -f mpegts udp://127.0.0.1:1234
Unexpectedly, when I lower the CRF step by step from 30 to 20, the stream quality decreases dramatically. At around CRF 20, sudden changes in the picture (like a head movement) make the image turn green, gray, or very distorted. With CRF 30 the problem seems to be gone. Why is this happening?
I don't think it is a bandwidth issue, given that I am streaming to localhost. I also didn't change any I-frame-related settings.
Edit: I checked the file sizes for CRF 20 and CRF 30 (with libx265) on a 10-second video:
CRF 20: 1.7 MB
CRF 30: 350 kB
Is 1.7 MB really so much for a 10-second stream that localhost, or any other live streaming service, can't handle it?

I don't know what your resolution is, but this looks like an encoding performance issue.
With CRF 20, even 720p can peak at up to roughly 20 Mbps.
My suggestion is not to use CRF for live streaming, but to specify a proper bitrate according to resolution, as in the sketch below:
720p -> 2.5 Mbps
1080p -> 4 Mbps
Also, 'dshow' is not a very fast capture method; you have to take that into consideration. I recommend a DX10 swap chain, which is a much faster capturer.
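For example, a minimal sketch of the original command with a capped bitrate instead of CRF (the 2.5 Mbps target and 5 Mbps buffer are illustrative values for 720p, not taken from the question):
ffmpeg -f dshow -rtbufsize 100M -i video="device":audio="device" \
-c:v libx264 -preset ultrafast -tune zerolatency \
-b:v 2500k -maxrate 2500k -bufsize 5000k \
-c:a aac -f mpegts udp://127.0.0.1:1234
The -maxrate/-bufsize pair keeps bitrate spikes bounded, which is what matters for a live stream.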

Related

How to see default NVENC hevc options in ffmpeg

I am trying to use the hevc_nvenc encoder in ffmpeg to re-encode an old video I have. Obviously software encoding using libx265 would be better, but I want to make it fast. I am trying to optimize for video quality, so I am using these options:
-profile:v main -b_ref_mode 0 -preset p7 -tune hq -rc vbr
I use -b_ref_mode 0 since my GPU doesn't support B-frame reference mode.
This gives results with an average bitrate of around 2M, so I am guessing that is the default bitrate setting for NVENC. Increasing -b:v increases the average bitrate, but around 6.5M it stops: even -b:v 50M gives the same video bitrate. I have also tried setting -cq 1 to force the best quality possible, but that actually decreases the bitrate to around 4.5M. The only way I found to get the desired bitrate is using -cq 1 with -maxrate set to a big value. This removes the restriction, and the -b:v option is then no longer needed; it seems to have no effect at all.
All of this behaviour seems very strange to me, with some hidden default values for bitrate and maxrate, so the question is: where can I see these values? I tried running ffmpeg with -loglevel debug but didn't see these values being passed, and the documentation I found says the maxrate default in ffmpeg is 0 (what does that mean?).
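One place to start (a sketch of how to inspect defaults, not a full answer to where NVENC's internal fallbacks live) is ffmpeg's own help output, which lists the encoder's private options together with their default values:
ffmpeg -h encoder=hevc_nvenc
The generic rate-control options (-b:v, -maxrate, -bufsize) and their defaults are listed by ffmpeg -h full; whether the NVENC driver substitutes its own values when these are left at 0 is not visible from that output.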

How can I convert these Blender video settings to ffmpeg please?

Whenever I use the ffmpeg commands, the video is larger and the seeking/keyframe interval is not the same as in the video rendered from Blender. How can I convert the following Blender settings to ffmpeg, please?
Blender Settings:
Frame rate: 30
Codec: h.264
Output .mp4
Keyframe interval: 1
Output quality: Medium
Encoding speed: Good
Here's my current command; however, the seeking and file size are different:
ffmpeg -framerate 30 -i %04d.jpg -g 1 -vcodec libx264 video.mp4

Try something closer to this:
ffmpeg -r 30 -i %04d.jpg -vcodec libx264 -crf 25 -x264-params keyint=30:scenecut=0 -preset veryslow video.mp4
Explanation:
-r 30 before the input tells ffmpeg to use 30 pictures per second
-vcodec libx264 lets ffmpeg encode in plain old H.264
-crf 25 lets the encoder decide on the bitrate for a medium quality (lower it for better quality / higher file size, increase it for worse quality / lower file size; you'll need to find the right setting through testing)
-x264-params keyint=30:scenecut=0 tells the x264 encoder to set a keyframe every 30 frames (here, one per second) and to disable scene detection. Be aware that this increases the file size a lot; you should not use a keyframe every second except for livestreaming. Modern video encoders like AV1 will usually set a keyframe only every 10-20 seconds, based on scene detection.
-preset veryslow uses the slowest libx264 preset available to make the file as small as possible with H.264 (but needs more time to encode). If you want a faster encode at the cost of a larger file, set it to slow.
Some general opinions from me:
If you don't need compatibility with very old devices, rather encode with libx265 or 2-pass libvpx-vp9. This will save you a lot of space without quality loss. libx265 slow is even faster than libx264 veryslow for me.
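As a rough sketch of that suggestion (the CRF values are illustrative starting points, not tuned for this footage), the same image sequence could be encoded with libx265, or with 2-pass libvpx-vp9:
# libx265, keyframe interval set via -g (equivalent to keyint=30 above)
ffmpeg -r 30 -i %04d.jpg -c:v libx265 -crf 28 -preset slow -g 30 video.mp4
# 2-pass libvpx-vp9 in constant-quality mode (use NUL instead of /dev/null on Windows)
ffmpeg -r 30 -i %04d.jpg -c:v libvpx-vp9 -b:v 0 -crf 32 -g 30 -pass 1 -an -f null /dev/null
ffmpeg -r 30 -i %04d.jpg -c:v libvpx-vp9 -b:v 0 -crf 32 -g 30 -pass 2 video.webm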

How to improve the output video quality with ffmpeg and h264_videotoolbox flag?

Currently I am using this command
ffmpeg -i <srcfile> -vcodec libx264 -profile:v main -level 3.1 -preset slower -crf 18 -x264-params ref=4 -acodec copy -movflags +faststart <outfile>
to convert some dashcam footage for viewing on an iOS device.
The above command took about 30 minutes to complete on a 2017 MacBook Pro with 16 GB of RAM.
I want to speed it up. One thing I tried is to harness the GPU in the computer, so I added the flag -c:v h264_videotoolbox.
It sped things up by a lot: the conversion now completes in about 1 minute.
However, when I inspected the output, the GPU version suffers from banding and blurriness.
Here is a screenshot (CPU version on the left, GPU version on the right). To highlight the difference, the most telling parts of the videos are the trees in reflections and a corrugated iron sheet wall.
Is there any switch that I can manipulate to make the GPU version clearer?
h264_videotoolbox is a simplistic H.264 encoder compared to x264, so you're not going to get the same quality per bitrate. It is optimized for speed and does not support -crf.
You can view some options specific to this encoder with ffmpeg -h encoder=h264_videotoolbox, but as they are probably already set to "auto" (I didn't confirm via source code and I don't have the hardware to try it) these additional options may not make much of a difference.
So you'll just have to increase the bitrate, such as with -b:v 8000k.
Or continue to use libx264 with a faster -preset.
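For example, a minimal sketch of the bitrate approach, keeping the rest of the original command (8000k is the value suggested above; adjust to taste):
ffmpeg -i <srcfile> -c:v h264_videotoolbox -b:v 8000k -acodec copy -movflags +faststart <outfile>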
I see the question was answered nearly two years ago; I'm jumping in for others who might stumble on this thread. I get great results with VideoToolbox as the encoder, using either GPU or software acceleration, depending on which machine I am using.
As already mentioned, setting a constant bitrate and adjusting it upward is key to producing a result that is nearly indistinguishable from a large source file. When paired with the other key parameters below, a constant bitrate is as effective as two-pass encoding for high-quality output, and is much quicker than two-pass.
It may seem counter-intuitive, but a computer running on all threads at full throttle to encode a video won't give you the best results. Several researchers have demonstrated that quality actually goes down if all CPU threads are engaged in encoding; it's better to use fewer threads, and even to throttle ffmpeg with a third-party app (encoding does not slow down significantly, in my experience). So limit threads on newer multi-threaded desktops and laptops.
Common practice for target bitrates (as seen on Netflix and Amazon) naturally varies with resolution: at least 5,000 kbps for 1080p and 3,500 kbps for 720p. For a noticeable improvement in video quality, the encoder bitrate should be set to at least 1.5 times those common-practice values, i.e. 7,500 kbps for 1080p and 5,250 kbps for 720p. The same applies to 4K GoPro or dashcam footage.
Often I work with large movie files from my Blu-ray library and create slimmed-down versions that are 1/3 to 1/2 the size of the original (a 20 GB original gives way to a file of 8-10 GB with no perceptible loss of quality). Also: framerate. Maintaining the same framerate from source to slimmed-down file is essential, so that parameter is important. Framerate is 24 fps, 25 fps, or 30 fps for theatrical film, European TV, and North American TV, respectively. (Except that when transferring film to a TV screen, 24 fps becomes 23.976 fps in most cases.) 60 fps is common for GoPro-like cameras, but here 30 fps would be a reasonable choice.
It is this control of framerate and bitrate that keeps ffmpeg in check and gives you predictable, repeatable results, not an errant, gigantic file that is larger than the one you started with.
I work on a Mac, so there may be slight differences on the command line, and here I use VideoToolbox as software encoder, but a typical command reads:
ffmpeg -loglevel error -stats -i source.video -map 0:0 -filter:v fps\=24000/1001 -c:v h264_videotoolbox -b:v 8500k -profile 3 -level 41 -coder cabac -threads 4 -allow_sw:v 1 -map 0:1 -c:a:0 copy -disposition:a:0 default -map 0:6 -c:s:0 copy -disposition:s:0 0 -metadata:g title\="If you want file title in the metadata, goes here" -default_mode passthrough 'outfile.mkv'
-loglevel error (show only errors, which helps with troubleshooting)
-stats (provides progress status in the terminal window)
-i infile (source video to transcode)
-map 0:0 (specify each stream in the original to map to output)
-filter:v fps\=24000/1001 (framerate of 23.976, like source)
-c:v h264_videotoolbox (encoder)
-b:v (set bitrate, here I chose 8500k)
-profile 3 -level 41 (h264 profile high, level 4.1)
-coder cabac (cabac coder chosen)
-threads 4 (limit of 4 cpu threads, of 8 on this laptop)
-allow_sw:v 1 (using VideoToolbox software encoding for acceleration; the GPU is not enabled)
-map 0:1 -c:a:0 copy -disposition:a:0 default (copies audio stream over, unchanged, as default audio)
-map 0:6 -c:s:0 copy -disposition:s:0 0 (copies the subtitle stream over, not as default, i.e. subtitles will not play automatically)
-metadata:g (global metadata, you can reflect filename in metadata)
-default_mode passthrough (Matroska muxer option: pass the default-track dispositions set above through unchanged)
outfile (NOTE: no dash precedes the output filename/path. I chose the mkv format to hold my multiple streams; mp4 or other formats work just fine, as long as the contents are appropriate for the format.)
In addition to llogan's answer, I'd recommend setting the 'realtime' property to zero (this can increase quality in motion scenes).
As llogan says, the bitrate option is the right parameter in this situation.
ffmpeg -i input.mov -c:v h264_videotoolbox -b:v {bitrate} -c:a aac output.mp4
If you want to set a 1000 kb/s bitrate, the command looks like this:
ffmpeg -i input.mov -c:v h264_videotoolbox -b:v 1000k -c:a aac output.mp4
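Putting both suggestions together, a sketch with the realtime property disabled and a higher, purely illustrative bitrate:
ffmpeg -i input.mov -c:v h264_videotoolbox -b:v 8000k -realtime 0 -c:a aac output.mp4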

ffmpeg to create a slideshow results in very large file

I have a series of JPEG images named 0000.jpg 0001.jpg 0002.jpg...
I used the following command to turn these into a slideshow video, each frame playing for 1 second.
ffmpeg -framerate 1 -i %04d.jpg -c:v libx264 -vf fps=1 -pix_fmt yuv420p out.mp4
This works fine except that the resulting video is 6x larger than what I get if I encode the exact same frames at normal framerate (e.g. 25 FPS)
I'm looking for a way to effectively get the same efficient encoding as when encoding 25fps but with each frame showing for 1 second.
H.264 is an inter-coded video codec, i.e. frames rely on other frames in order to be decoded. What x264, an H.264 encoder, does at a basic level is observe the changes in picture content between frames and then save only that difference.
In the CRF ratecontrol mode, which is used by default when no rate control mode is expressly specified (-b:v, -crf, -x264opts qp), x264 is sensitive to framerate. When given an input at 25 fps, each frame is displayed for 40 milliseconds, so the viewer isn't that sensitive to the image quality of each individual frame, and x264 compresses it quite a bit. But when that input is encoded at 1 fps, each output frame will be on display for a whole second, so x264 is much less aggressive with its compression.
The workaround is to encode at 25 fps and then remux at the lower fps.
#1
ffmpeg -framerate 25 -i %04d.jpg -c:v libx264 -pix_fmt yuv420p out.264
#2 Using mp4box*, part of GPAC, remux with the lower rate:
mp4box -add out.264:fps=1 -new out.mp4
*normally, this would be possible using ffmpeg, but it does not correctly remux H.264 raw streams with out-of-order frame storage i.e. those with B-frames.

FFmpeg - Convert MP4 to Webm very slow

I need to convert MP4 to WebM with ffmpeg.
So I use:
ffmpeg -i input.mp4 -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis output.webm
But it takes very long.
Is there a faster way?
libvpx is a relatively slow encoder. According to the VP8 Encode Parameter Guide: Encode Quality vs. Speed, you can use the -cpu-used option to increase encoding speed. A higher value results in faster encoding but lower quality:
Setting a value of 0 will give the best quality output but is extremely slow. Using 1 (default) or 2 will give further significant boosts to encode speed, but will start to have a more noticeable impact on quality and may also start to effect the accuracy of the data rate control. Setting a value of 4 or 5 will turn off "rate distortion optimisation" which has a big impact on quality, but also greatly speeds up the encoder.
Alternatively, it appears that VA-API can be utilized for hardware accelerated VP8 encoding, but I have no experience with this.
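For example, a sketch of the original command with a faster -deadline/-cpu-used setting (the values are illustrative; see the quoted guide for the quality trade-offs):
ffmpeg -i input.mp4 -c:v libvpx -crf 10 -b:v 1M -deadline good -cpu-used 4 -c:a libvorbis output.webm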
