I am working on re-encoding some footage (x264), including some grainy footage. I am interested in CRF-only bitrate management (I want to avoid artifacts during demanding scenes).
Which parameters are recommended to set manually instead of leaving them at their defaults?
Here is what I got so far, pretty simple:
ffmpeg -i in.mkv -vf unsharp=3:3:1 -c:v libx265 -tune:v grain -crf 24 -c:a copy out.mkv
(This example uses the grain tune because many of the files are grainy; without it the grain gets washed out and all the "detail by noise" is lost. I am also applying a light sharpening filter, since I find there is always room to sharpen a bit without causing noticeable sharpening artifacts.)
If I am not mistaken, all the parameters one would normally consider are the ones covered by the presets, but is there any other parameter, or one of those, that is good practice to adjust manually to achieve a better result? I was wondering specifically about P/I/B-frames and AQ (but I guess there are others as well).
The defaults are what the developers recommend, but every video is different and could be improved with custom settings. There is no “better default”, because a setting that helps one file could make a different file worse. Nobody can know without the video file and the preferences of the viewer.
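If you do want to experiment with the frame-type and AQ settings you mention, libx265 exposes them through -x265-params. As a sketch of the syntax only (the values are placeholders, not recommendations, and the grain tune already adjusts some of these internally, so check the x265 info log to see what actually took effect):
ffmpeg -i in.mkv -vf unsharp=3:3:1 -c:v libx265 -tune:v grain -crf 24 -x265-params "aq-mode=3:aq-strength=0.8:bframes=5" -c:a copy out.mkv
Encode a short, representative clip with a couple of variations and compare them before committing to a full run.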
I am a newer user of ffmpeg, but I have a slightly complicated use case for it. I need to be able to cut multiple sections out of a video and/or multiple sections out of the audio, with the actual length of the video and audio files remaining intact (e.g. the audio would cut out but the video continues, or the video continues but the audio cuts out). I have been slowly learning about complex filtergraphs, but a little help would be VERY much appreciated.
This is currently my super basic "test script" to see if I can get it to work (in its actual use case, the timestamps will be variables in a Python program):
ffmpeg -i bdt.mkv -filter_complex \
"[0:v]trim=start=10.0:end=15.0,setpts=PTS-STARTPTS[0v];\
[0:a]atrim=start=10.0:end=15.0,asetpts=PTS-STARTPTS[0a];\
[0:v]trim=start=65.0:end=70.0,setpts=PTS-STARTPTS[1v];\
[0:a]atrim=start=65.0:end=70.0,asetpts=PTS-STARTPTS[1a];\
[0v][0a][1v][1a]concat=n=2:v=1:a=1[outv][outa]" \
-map "[outv]" -map "[outa]" out.mp4
Use the timeline editing option enable with the between() expression. For video, you can use the overlay filter:
-filter_complex "color=black[b_];
[b_][0:v]scale2ref[b][in];
[in][b]overlay=shortest=1:enable='between(t,1,2)'"
For audio, use the volume filter:
-af volume=0:enable='between(t,1,2)'
You'll need to escape the quotes depending on your shell. If you want to do more complex on/off switching, build up the enable option using additional expressions (see the timeline editing section of the filter documentation).
These aren't the only ways to achieve the effect, but they're the easiest I could think of at the moment.
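For reference, here is a sketch of how the two snippets might be combined into a single command for the test script above, using the question's 10-15 s and 65-70 s windows (this blanks the picture and mutes the audio rather than removing anything, so both streams keep their full length):
ffmpeg -i bdt.mkv -filter_complex \
"color=black[b_];\
[b_][0:v]scale2ref[b][in];\
[in][b]overlay=shortest=1:enable='between(t,10,15)'[outv];\
[0:a]volume=0:enable='between(t,65,70)'[outa]" \
-map "[outv]" -map "[outa]" out.mp4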
I am making a datamoshing program in C++, and I need to find a way to remove one frame from a video (specifically, the p-frame right after a sequence jump) without re-encoding the video. I am currently using h.264 but would like to be able to do this with VP9 and AV1 as well.
I have one way of going about it, but it doesn't work for one frustrating reason (mentioned later). I can turn the original video into two intermediate videos - one with just the i-frame before the sequence jump, and one with the p-frame that was two frames later. I then create a concat.txt file with the following contents:
file video.mkv
file video1.mkv
And run ffmpeg -y -f concat -i concat.txt -c copy output.mp4. This produces the expected output, although it is of course not as efficient as I would like, since it requires creating intermediate files and reading the .txt file from disk (performance is very important in this project).
But worse yet, I couldn't generate the intermediate videos with ffmpeg; I had to use Avidemux. I tried all sorts of variations on ffmpeg -y -ss 00:00:00 -i video.mp4 -t 0.04 -codec copy video.mkv, but that command seems to really bug out on videos 1-2 frames long, while it works for longer videos with no problem. My best guess is that there is some internal check to ensure the output video is not corrupt (which, unfortunately, is exactly what I want it to be!).
Maybe there's a way to do it this way that gets around that problem, or better yet, a more elegant solution to the problem in the first place.
Thanks!
If you know the PTS or data offset or packet index of the target frame, then you can use the noise bitstream filter. This is codec-agnostic.
ffmpeg -copyts -i input -c copy -enc_time_base -1 -bsf:v:0 "noise=drop=eq(pos\,11291)" out
This will drop the packet from the first video stream stored at offset 11291 in the input file. See other available variables at http://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise
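If you don't already know the offset, one way to find it is to dump per-packet information with ffprobe and read off the pos value of the packet you want to drop (input.mp4 is a placeholder name here):
ffprobe -v error -select_streams v:0 -show_entries packet=pos,pts_time,flags -of csv input.mp4
Keyframe packets are flagged with K, so the packet to drop is typically the first non-K entry after the keyframe at your jump point.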
I'm attempting to copy videos from a site. They are stored in 6 different resolutions, as an HLS stream. When I use the command ffmpeg -i http://c.brightcove.com/services/mobile/streaming/index/master.m3u8?videoId=5506754630001 -c copy output.ts I get the highest quality (1280x720). However, when I wget the .m3u8 I can see there are other qualities, but I am having trouble working out how to copy one of those (i.e. 640x380). The original link is http://www.sportsnet.ca/hockey/nhl/analyzing-five-potential-trade-destinations-matt-duchene/.
I'm hoping someone can help me out with this. Thank you.
I don't know if it's of any help but
ffmpeg -i http(s)://link/to/input.m3u8 -map m:variant_bitrate:BITRATE -c copy output.ts
is a valid approach for selecting the quality.
The variant_bitrate meta tag is documented here: FFmpeg Formats Documentation#applehttp.
The stream specifiers that can be used via the -map option are documented here:
ffmpeg Documentation#5.1 Stream specifiers
This means you need to know the BITRATE of the master, which can be a bit more complicated...
If it's still of interest, I can get back with a python 3.6 script that would require an external module...
or you have to manually check which bitrate you need
(in a browser or with ffprobe -i http(s)://link/to/input.m3u8).
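As a rough sketch of that manual check (the URL is a placeholder), ffprobe can list the streams of each variant together with their variant_bitrate tags:
ffprobe -v error -show_entries stream=index,codec_type,width,height:stream_tags=variant_bitrate -of compact http(s)://link/to/input.m3u8
The variant_bitrate printed next to the resolution you want is the value to use as BITRATE in the -map m:variant_bitrate:BITRATE specifier above.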
If anyone knows more about this, it would be nice to know when this variant_bitrate meta tag was implemented, as I'm quite sure that this wasn't always possible...
I am using FFmpeg to mix an MP3 file containing a commentary track into the soundtrack of a multimedia file. So far I have had great success using FFmpeg's sidechaincompress filter to auto-duck the soundtrack stream before mixing in the commentary. You can hear the commentary clearly, even when there's loud music or explosions going on in the film.
Awesome.
However, the issue I have now is during the very quiet scenes. When the soundtrack is very quiet, the commentary seems far too loud. If I adjust the volume of the entire commentary track so that it sounds right during the quiet scenes, it's too hard to hear during the loud scenes.
My current idea is to somehow use the sidechaincompress filter to duck the commentary track as well, before finally mixing it into the soundtrack. The problem though is that sidechaincompress compresses the target's volume when the source is loud, but I need the volume to be compressed when the source is quiet.
I have to admit that I am quite the newbie in this domain, so I may be coming at this entirely wrong. I'm happy for any advice you can provide!
I've run into a similar problem recently and it seems like sidechaincompress does exactly what we need in this situation.
I used the following command to merge the commentary track [0:2] with the main track [0:1]:
ffmpeg -i "multi-audio-track-source.mkv" -filter_complex "\
[0:1]aformat=sample_fmts=s16:channel_layouts=stereo[main];\ # these eliminate warnings about mismatching tracks.
[0:2]aformat=sample_fmts=s16:channel_layouts=stereo[commentarytmp];\ # these eliminate warnings about mismatching tracks.
[commentarytmp]asplit=2[commentarycmpr][commentary];\ # same connecting pin cannot be used in 2 filters so we have to make a copy of the commentary audio track for later mixing.
[main][commentarycmpr]sidechaincompress[cmpr]; \
[cmpr][commentary]amix[final]" \
-map "0:0" -map "[final]" \
-c:v copy -c:a aac single-audio-track-output.mkv
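If, as in the question, the commentary is a separate MP3 rather than a second track in the same file, the same graph can be fed from two inputs; a sketch with hypothetical filenames:
ffmpeg -i movie.mkv -i commentary.mp3 -filter_complex \
"[0:a]aformat=sample_fmts=s16:channel_layouts=stereo[main];\
[1:a]aformat=sample_fmts=s16:channel_layouts=stereo[commentarytmp];\
[commentarytmp]asplit=2[commentarycmpr][commentary];\
[main][commentarycmpr]sidechaincompress[cmpr];\
[cmpr][commentary]amix[final]" \
-map 0:v -map "[final]" -c:v copy -c:a aac output.mkv
This keeps the video untouched and produces a single AAC track with the ducked mix.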
I'm using ffmpeg to convert a RTSP stream (from a security camera) into a HLS stream which I then play on a website using hls.js.
I start the transmuxing with: ffmpeg -i rtsp:<stream> -fflags flush_packets -max_delay 1 -an -flags -global_header -hls_time 1 -hls_list_size 3 -hls_wrap 3 -vcodec copy -y <file>.m3u8
I can get the stream to play, but the quality isn't good at all... Sometimes the stream jumps in time or freezes for a while. If I open it using VLC I get the same kind of problems.
Any idea why? Or how can I stabilize it?
I've had a similar issue once, and it ended up being not enough bandwidth, whether that was an issue with whatever means the camera uses to stream, the connection to the server, etc. In my case, I had a bandwidth limit set as an FFmpeg argument that I simply had to increase. I also know that really low frame rates set on the camera can sometimes cause oddities, where you may have to add the "-framerate (frames per second)" argument (no quotes) depending on how the page is set up.
If it is a bandwidth issue, the only way I'm aware of to resolve it is to increase the bandwidth somehow, or to make sure you aren't limiting yourself in some way, which could come down to exactly how you are hosting the website/server and verifying speeds from each point as best you can. If you can't find the oddity in the connection yourself or need additional help, comment and I will help further.
This is an old question, so I don't know if the OP will see this, but I'll leave it here to give anyone else with the same or a similar issue something to troubleshoot, since this is what helped me on a very similar problem.