FFmpeg ignores quality parameter - ffmpeg

This is how I use FFmpeg:
ffmpeg -f dshow -i video="UScreenCapture" -vcodec libx264 -q 26 -f flv output.flv
The thing is, the quality is always 28; ffmpeg ignores that option. How do I fix this? I need a "flash" (FLV) output anyway, to stream to Twitch.

The -q option (and its alias -qscale) is ignored by libx264. If you want to control the quality, use -crf:
ffmpeg -i input -c:v libx264 -crf 22 output.flv
Or set the bitrate with -b:v
ffmpeg -i input -c:v libx264 -b:v 555k output.flv
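Since the end goal here is streaming to Twitch, a rate-capped CRF encode is a common pattern. The following is only a rough sketch: the maxrate/bufsize numbers, keyframe interval, and the RTMP URL/stream key are placeholders to replace with the values from Twitch's own documentation:
ffmpeg -f dshow -i video="UScreenCapture" -c:v libx264 -preset veryfast -crf 23 -maxrate 3500k -bufsize 7000k -pix_fmt yuv420p -g 120 -f flv rtmp://live.twitch.tv/app/YOUR_STREAM_KEY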

According to the documentation, "the meaning of q is codec-dependent" and apparently libx264 ignores that option. Use -crf (and a -preset if you want) instead. The higher the CRF value, the lower the quality.

If you wish to generate a CQP (constant QP) stream, e.g. constant QP=20, I suggest using the following parameters:
-x264-params qp=20:ipratio=1.0:pbratio=1.0:qpstep=0
Example:
ffmpeg -s 1920x1080 -i test.yuv -vcodec libx264 -x264-params qp=20:ipratio=1.0:pbratio=1.0:qpstep=0 -y test.h264
Notice that ipratio=1.0 makes x264 encode P-frames with the same QP as I-frames, and pbratio=1.0 makes x264 encode B-frames with the same QP as P-frames.

The -b options, -q, and -crf seem to do nothing for video quality (at least for my install of ffmpeg version 9), so I am posting a result from another post that gets right to the point.
If you want high quality, setting bitrate is a poor way to achieve that. There are many other settings with far bigger influence on quality than bitrate. I would leave the bitrate setting out entirely unless you are having to meet hardware requirements of some sort.
If you are trying to get higher quality, try something like
ffmpeg -i sourcefile.mov -target pal-dvd -qscale 2 -trellis 2 outputfile.mpg
The output video size goes from 13 MB for a 2-minute video to 130 MB, but it gets the job done.

Related

FFmpeg NVENC encoder on GPU does not compress files as much as libx264

I wanted to re-encode a video file that was initially encoded by libx264 on a non-GPU machine with the ultrafast preset and CRF 23. I typically re-encode it with the medium preset and get good compression, but the process is very slow, so I am considering a GPU-based solution.
My current command, using ffmpeg on an NVIDIA Turing GPU:
ffmpeg -y -vsync passthrough -hwaccel cuda -i a.mp4 -max_muxing_queue_size 9999 -pix_fmt yuv420p -c:v h264_nvenc -preset medium -tune ll -b:v 4M -bufsize 4M -maxrate 10M -qmin 0 b.mp4
The usual command I use to do the same:
ffmpeg -i a.mp4 -max_muxing_queue_size 9999 -pix_fmt yuv420p -c:v libx264 -preset medium b.mp4
How can I make this command do a better job at reducing file size? I am okay with compromising on video quality for a good reduction in size.
I would highly recommend reading this H.264 Video Encoding Guide.
On the surface, these variants can help you (a combined sketch follows the list):
Decrease your bitrate
Add the -cq option with a suitable value in the 0-51 range (-cq for h264_nvenc is roughly the same idea as -crf for libx264)
Change the -tune option value to hq
Try two-pass encoding (if you know the desired output file size), although the benefit here is small
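For example, a sketch that applies the -cq and -tune hq suggestions to your original command; the CQ value of 28 is only a starting point to experiment with, and -rc vbr with -b:v 0 is the usual (assumed) way to let the quality target drive the bitrate instead of a fixed -b:v:
ffmpeg -y -vsync passthrough -hwaccel cuda -i a.mp4 -max_muxing_queue_size 9999 -pix_fmt yuv420p -c:v h264_nvenc -preset medium -tune hq -rc vbr -cq 28 -b:v 0 b.mp4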
If you struggle with the available options for h264_nvenc, you can see the whole list of them by executing the following command:
ffmpeg -hide_banner -h encoder=h264_nvenc
Most of them are self-descriptive or similar to their libx264 counterparts.

ffmpeg fps lower than expected

I'm trying to run this command.
ffmpeg -i out_frames/frame%08d.jpg -i input.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4
It takes a folder of frames (of the input.mp4, but at higher resolution) and makes a video with those frames and also takes the audio from the input.mp4.
The problem is that output.mp4 has a lower/higher frame rate (depending on the clip used) than it should, resulting in the audio being out of sync.
Any help?
You should use a more precise framerate -r 24000/1001 instead of -r 23.98.
In your case it is also better to put this option before the declaration of the input frames.
If this does not help, then your problem is probably not a frame rate issue.
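A sketch of the adjusted command, with the rational rate given as an input option for the image sequence (for image inputs, -framerate is the input-side counterpart of -r):
ffmpeg -framerate 24000/1001 -i out_frames/frame%08d.jpg -i input.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 24000/1001 -pix_fmt yuv420p output.mp4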

FFMPEG: How to avoid audio/video desync in output of crossfaded clips when input is variable frame rate video

I'm doing screen recordings of gameplay (Dota 2) using my NVIDIA graphics card's GeForce Experience hardware recording (NVENC encoder). This creates a variable frame rate output video. My NVIDIA settings are 60 fps, 15000 kbps. I have paid a guy to make a program that generates scripts which, given start/stop timepoints, can extract clips from the video and merge them with a crossfade. See the example code below. The script works for many input recordings but often fails: the audio and video are desynchronized (usually an audio delay) in many of the clips, by about 0.5 seconds. I think it fails more when the frame rate dropped more during recording. He does not know how to fix the problem, and I wonder if anyone could point out whether anything could be fixed in the script (example below)?
Processing speed is quite important (making a 10 min 'highlight' video currently takes about 7-10 min). Solutions that increase that time very much are unfortunately not of much interest. His approach has been to work separately with audio and video and merge them at the end. He already has a program to generate ffmpeg code for different scenarios (also adding overlays, adding music, intro/outro), so some easy fixes to his code would be preferable to a dramatic redesign of the logic. But if nothing else can fix the problem, a redesign of the logic is OK. Using tools other than ffmpeg is also OK, but it should be automatable (scripts/CLI) and not increase processing times too much.
Running the program "mediainfo" on the input video shows that framerate dropped quite low for this input video:
Frame rate mode: Variable
Frame rate : 60.000 FPS
Minimum frame rate: 3.059 FPS
Maximum frame rate: 63.739 FPS
Full report here: https://pastebin.com/TX061Wih
The input video can be downloaded from dropbox here (6 GB):
https://www.dropbox.com/s/ftwdgapazbi62pr/fullgame.mp4?dl=0
Here is an example of a script asked to extract two clips from the input video at 9:57 (41 sec long) and 15:45 (28 sec long) and crossfade-merge them with a 0.5 s crossfade time. There might be some code remnants from options that are not used in this example (overlays, music, intro/outro). Using the input video above, this creates audio/video desync.
6 commands executed in sequence:
ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -filter_complex "[0:a]afade=t=out:st=40.5:d=0.5[a1]" -map "[a1]" -y out_temp_00.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:09:57 -t 00:00:41 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_00.mp4.ts
ffmpeg.exe -loglevel warning -ss 00:15:45 -i fullgame.mp4 -t 00:00:28 -filter_complex "[0:a]afade=t=in:st=0:d=0.5[a1]" -map "[a1]" -y out_temp_01.mp4.wav
ffmpeg.exe -loglevel warning -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -an -vcodec copy -f mpegts -avoid_negative_ts make_zero -y out_temp_01.mp4.ts
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.wav -i out_temp_01.mp4.wav -y -filter_complex "[0:a]adelay=0|0[a0];[1:a]adelay=40500|40500[a1];[a0][a1]amix=inputs=2:dropout_transition=68.5,atrim=duration=68.5[outa0];[outa0]loudnorm[outa]" -map "[outa]" -ar 48000 -acodec aac -strict -2 fullgame_Output.mp4.aac
ffmpeg.exe -loglevel warning -i out_temp_00.mp4.ts -i out_temp_01.mp4.ts -y -i fullgame_Output.mp4.aac -filter_complex "[0:v]trim=start=0.5,setpts=PTS-STARTPTS[0c];[1:v]trim=start=0.5,setpts=PTS-STARTPTS[1c];[0:v]trim=40.5:41,setpts=PTS-STARTPTS[fo];[1:v]trim=0:0.5[fi];[fi]format=pix_fmts=yuva420p,fade=t=in:st=0:d=0.5:alpha=1[z];[fo]format=pix_fmts=yuva420p,fade=t=out:st=0:d=0.5:alpha=1[x];[z]fifo[w];[x]fifo[q];[q][w]overlay[r];[0c][r][1c]concat=n=3[outv]" -map "[outv]" -map 2:a -shortest -acodec copy -vcodec libx264 -preset ultrafast -b 15000k -aspect 1920:1080 fullgame_Output.mp4
P.S.
I already asked for help in an ffmpeg chat room. One guy said he knew what the problem was, but didn't know how to fix it(?):
[00:10] <kepstin> oh, wait, you're using -vcodec copy
[00:10] <kepstin> that explains everything.
[00:10] <kepstin> when you're using -vcodec copy, the start time (set with -ss) is rounded to the nearest keyframe
[00:10] <kepstin> it's not exact
[00:11] <kepstin> depending on the keyframe interval, this will result in possibly quite large shifts
[00:11] <kepstin> (also, your commands are applying audio filters on commands with -an, which is confusing/contradictory)
[00:12] <birdboy88> so the problem is that the audio temporary clips are not being extracted from the same exact timepoints?
[00:13] <kepstin> birdboy88: yeah, your audio is being re-encoded to wav so it's being cut sample-accurate, but the video's not being precisely cut.
[00:16] <birdboy88> kepstin: so I need to use slow seek (?) to extract video accurately? Or somehow extract audio only where there are video keyframes?
[00:17] <kepstin> birdboy88: i don't know how to extract audio starting at video keyframes with ffmpeg cli. You're already doing slow seek, which doesn't help (you should move the -ss option to before the -i option to speed it up)
[00:17] <kepstin> if you want accurate video cutting when saving to a file, you have to re-encode the video
[00:18] <kepstin> (doing this in a single ffmpeg command means you don't have to save to a file, so you can avoid the issue)
[00:18] * kepstin is off for a bit now
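In other words, the fix kepstin hints at would be to re-encode the video during extraction so the -ss cut is frame-accurate instead of snapping to a keyframe. A rough sketch for the first clip only (the crf/preset values are placeholders, and this is of course slower than -vcodec copy):
ffmpeg.exe -loglevel warning -ss 00:09:57 -i fullgame.mp4 -t 00:00:41 -an -c:v libx264 -preset ultrafast -crf 18 -f mpegts -avoid_negative_ts make_zero -y out_temp_00.mp4.ts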
EDIT:
Everything is done with the latest ffmpeg version.
I was unable to get Gyan's code to work. It always loses some audio (the audio lasts either 40.5 s or 27.5 s, so only one audio clip is used). This is the only version working for me (the changes were adelay=40500|40500 and amix=inputs=2[a0];[a0]loudnorm):
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=2[vpre][vpost];
[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=40500|40500[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]amix=inputs=2[a0];[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Then I tried a similar setup but with 3 clips. On one machine I got the error "Error while filtering: Cannot allocate memory", and on my machine with 16 GB of memory the processing speed is 0.02x! Any way to avoid this? This is the code I tried:
ffmpeg -i fullgame.mp4 -filter_complex "[0]split=3[vpre][vpost][v3];
[0]asplit=3[apre][apost][a3];
[vpre]trim=start=357:duration=41,setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start=357:duration=41,asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start=795:duration=28,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,fade=t=out:st=40.5:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start=795:duration=28,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,afade=t=out:st=27.5:d=0.5,adelay=40500|40500[apost-t];
[v3]trim=start=95:duration=30,setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5,setpts=PTS+41+28-0.5/TB[v3-t];
[a3]atrim=start=95:duration=30,asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5,adelay=68500|68500[a3-t];
[vpre-t][vpost-t]overlay[v1];
[v1][v3-t]overlay[v];
[apre-t][apost-t][a3-t]amix=inputs=3[a0];
[a0]loudnorm[a]" -map "[v]" -map "[a]" -y -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Just do it in one command.
Besides the keyframe seek issue, which is true, your present sequence has an error in the last command. You have [0:v]trim=start=0.5...[0c] which trims out the first 0.5 seconds and will cause a desync of its own. Since this is the first clip, it should be [0:v]trim=0:40.5.
The full single command should be
ffmpeg -i fullgame.mp4 -filter_complex
"[0]split=2[vpre][vpost];[0]asplit=2[apre][apost];
[vpre]trim=start='00:09:57':duration='00:00:41',setpts=PTS-STARTPTS[vpre-t];
[apre]atrim=start='00:09:57':duration='00:00:41',asetpts=PTS-STARTPTS,afade=t=out:st=40.5:d=0.5[apre-t];
[vpost]trim=start='00:15:45':duration='00:00:28',setpts=PTS-STARTPTS,format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[apost]atrim=start='00:15:45':duration='00:00:28',asetpts=PTS-STARTPTS,afade=t=in:st=0:d=0.5[apost-t];
[vpre-t][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4
Your original sequence had -strict -2 for audio AAC encoding. That hasn't been needed since Dec 2015. You have a very old version of ffmpeg if your ffmpeg throws an error without it. Upgrade first.
I did not test the above with your file, as it will take too long to filter 16 min of Full HD 60 fps video, but I tested the below faster command and it works fine with the latest git build of ffmpeg:
ffmpeg -ss 00:09:57 -t 00:00:41 -i fullgame.mp4 -ss 00:15:45 -t 00:00:28 -i fullgame.mp4 -filter_complex
"[0]afade=t=out:st=40.5:d=0.5[apre-t];
[1]format=yuva420p,fade=t=in:st=0:d=0.5:alpha=1,setpts=PTS+40.5/TB[vpost-t];
[1]afade=t=in:st=0:d=0.5[apost-t];
[0][vpost-t]overlay[v];
[apre-t][apost-t]acrossfade=d=0.5,loudnorm,aresample=48000:ocl=stereo[a]"
-map "[v]" -map "[a]" -c:v libx264 -preset ultrafast -b:v 15000k -aspect 1920:1080 -c:a aac fullgame_Output.mp4

encoding jpeg as h264 video

I am using the following command to encode an AVI to an H264 video for use in an HTML5 video tag:
ffmpeg -y -i "test.avi" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this works just fine. But I also want to create a placeholder video (long story) from a single still image, so I do this:
ffmpeg -y -i "test.jpg" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this doesn't work. What gives?
EDIT: After trying LordNeckbeard's answer, here is my full output: http://pastebin.com/axhKpkLx
Example for a 10 second output:
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4
Same thing but with audio. The output duration will match the input audio duration:
ffmpeg -loop 1 -framerate 24 -i input.jpg -i audio.mp3 -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -c:a aac -shortest -movflags +faststart output.mp4
-loop 1 loops the image input.
-framerate sets the frame rate of the image input. Default is 25. Some players have issues with low frame rates so a value over 6 or so is recommended.
-i input.jpg the input.
-c:v libx264 the H.264 video encoder.
-preset x264 encoding preset. Use the slowest one you can.
-tune x264 tuning for various adjustments to fit specific situations.
-crf for quality. A lower value results in higher quality. Use the highest value that still provides an acceptable quality to you. Default is 23.
-vf format=yuv420p outputs the pixel format as yuv420p. This ensures the output uses a widely acceptable chroma sub-sampling scheme. Recommended for libx264 when encoding from images.
-c:a aac the AAC audio encoder. If your input is already AAC or M4A then use -c:a copy instead to stream copy instead of re-encode.
-t 10 (in the first example) makes a 10 second output. Needed because the image is looping indefinitely.
-shortest (in the second example) makes the output the same duration as the shortest input. In this case it is the audio since the image is looping indefinitely.
-movflags +faststart relocates the moov atom to the beginning of the file after encoding is finished. Allows playback to begin faster in progressive download playing; otherwise the whole video must be downloaded before playing.
-profile:v main (optional) some devices can't handle High profile.
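For instance, the first example above with the optional profile added would look like this (same placeholder file names as before):
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -profile:v main -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4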
See FFmpeg Wiki: H.264 for more info.

FFMPEG sensible defaults

I'm using ffmpeg to watermark videos with a PNG file using vfilters like so:
ffmpeg -i 26.wmv -vcodec libx264 -acodec copy -vf "movie=logo.png [watermark]; [in][watermark] overlay=10:10 [out]" 26_w.mkv
However, the input files are all of different quality/bitrates, and I want the output files to be of similar quality/bitrate to the input files. How would I achieve this?
Also, I know almost nothing about ffmpeg, so are there any options which would be sensible to set to give a good quality:filesize ratio?
Usually wanting the output to be the "same quality" as the input is an assumed thing that people will always want. Unfortunately, this is not possible when using a lossy encoder, and even lossless encoders may not provide the same quality due to colorspace conversion, chroma subsampling, and other issues. However, you can achieve visually lossless (or nearly so) outputs when using a lossy encoder; meaning that it may look as if the output is the same quality to your eyes, but technically it is not. Also, attempting to use the same bitrate and other parameters as the input will most likely not achieve what you want.
Example:
ffmpeg -i input -codec:v libx264 -preset medium -crf 24 -codec:a copy output.mkv
The two options for you to adjust are -crf and -preset. CRF (constant rate factor) is your quality level. A lower value is higher quality. The preset is a collection of options that gives a particular encoding speed vs. compression tradeoff. A slower preset will encode more slowly, but will achieve higher compression (compression being quality per file size). The basic usage is:
Use the highest crf value that still gives you the quality you want.
Use the slowest preset you have patience for (see x264 --help for a preset list and ignore the placebo preset as it is a joke).
Use these settings for the rest of your videos.
Other notes:
You do not have to encode the whole video to test quality. You can use the -ss and -t options to select a section to encode, such as -ss 30 -t 60, which will skip the first 30 seconds and create a 60-second output (see the sketch after these notes).
In this example the audio is stream copied instead of re-encoded.
Remember that every encoder is different, and what works for x264 will not apply to other encoders.
Add -pix_fmt yuv420p if the output does not play in dumb players like QuickTime.
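Putting the test tip together with the watermark command from the question, a quick section test might look like the sketch below. Note that this uses overlay with the logo as a second input instead of the movie source filter, and 26_test.mkv is just a placeholder output name:
ffmpeg -ss 30 -i 26.wmv -i logo.png -t 60 -filter_complex "overlay=10:10" -codec:v libx264 -preset medium -crf 24 -codec:a copy 26_test.mkv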
Also see:
FFmpeg and x264 Encoding Guide
FFmpeg and AAC Encoding Guide
Here is a set of very good examples: http://ffmpeg.org/ffmpeg.html#Examples
Here is a script I made for converting files to FLV video and also adding a preview image:
<?php
// Build the destination path from the upload name and the original file's extension
$name     = basename($_GET['name']);  // basename() avoids path traversal
$ext      = substr(strrchr($_FILES['Filedata']['name'], '.'), 1);
$filename = "./upload/" . $name . "." . $ext;
move_uploaded_file($_FILES['Filedata']['tmp_name'], $filename);
chmod($filename, 0777);
// Convert the upload to FLV: 22050 Hz audio, ~200 kb/s video, 12 fps, 500x374
exec("ffmpeg -i " . escapeshellarg($filename) . " -ar 22050 -b:v 200k -r 12 -f flv -s 500x374 " . escapeshellarg("upload/" . $name . ".flv"));
// Grab one frame at 3 seconds as a 300x200 JPEG preview image
exec("ffmpeg -i " . escapeshellarg($filename) . " -ss 00:00:03 -an -r 1 -s 300x200 -vframes 1 -y -pix_fmt rgb24 " . escapeshellarg("upload/" . $name . "%d.jpg"));
?>