Can I recycle ffmpeg2pass-0.log - ffmpeg

Can I re-use 2 pass logs?
That is to say: I am doing this, but I wonder if I should be. i.e.: does a -pass 1 run with different crf/b:v parameters from the -pass 2 output produce the same results as uniquely encoding both passes for every input?
I have a feeling I shouldn't be reusing pass1.
Say I am doing tests, and for the same input file produce multiple 2-pass outputs with varying constrained bitrate/crf variants...
eg:
constrainedQ-br9M-crf12.webm
constrainedQ-br12M-crf18.webm
constrainedQ-br14M-crf18.webm
constrainedQ-br16M-crf18.webm
Is it ok to detect the prior log file, check that it was produced for the same input file, and re-use it by skipping -pass 1 for subsequent renders? (in which case ffmpeg finds the existing log and appears to use it for pass 2)
Or
Should I be re-generating the pass 1 log whenever the bitrate or crf changes?
[edit] Everyone loves a bit of context code:
f_rm2passFilesVP9() {
    rm \
        "${input%/*}/ffmpeg2pass-0.log" \
        "${input%/*}/ffmpeg2pass-0.log.temp" \
        "${input%/*}/ffmpeg2pass-0.log.mbtree.temp" &> /dev/null
}
...
f_2passLogForThisInputExists() {
    if [[ "$input" == $(cat "${input%/*}/.priorInput" 2> /dev/null) ]]; then
        echo 1
    else
        echo 0
    fi
}
...
if [[ 0 == $(f_2passLogForThisInputExists) ]]; then
    echo " ENCODING CONSTRAINED QUALITY br:$br crf:$CRF - PASS 1/2"
    trap "f_rm2passFilesVP9" 1 2 3 6
    ffmpeg -hide_banner -y -i "${input}" \
        -c:v libvpx-vp9 -pass 1 -b:v "$br" -crf "$CRF" -threads 4 \
        -tile-columns 6 -frame-parallel 1 \
        -an -f webm /dev/null
    echo "$input" > "${input%/*}/.priorInput"
    trap "" 1 2 3 6
else
    echo "REUSING - PASS 1 FOR THIS INPUT - PASS 1/2"
fi
echo "ENCODING CONSTRAINED QUALITY br:$br crf:$CRF - PASS 2/2"
ffmpeg -hide_banner -y -i "${input}" \
    -c:v libvpx-vp9 -pass 2 -b:v "$br" -crf "$CRF" -threads 4 -speed 2 \
    -tile-columns 6 -frame-parallel 1 -auto-alt-ref 1 -lag-in-frames 25 \
    -c:a libopus -b:a 64k -f webm \
    "${exportName}"

This should be OK if you are simply giving it more or less bitrate. I do this sometimes, but I also use a lot of zones, so I usually just rerun the first pass because I know zones will definitely make a difference in the first-pass file. If I add a zone after the first-pass file is created, then the bitrate difference is applied across the whole file; if I add it before, the zone's bitrate modifier is applied only to the frames specified. Since you are simply giving the file more or less bitrate, the distribution should be pretty much the same. I would say only rerun the first pass if your second pass is going to a much higher bitrate, say 20% or more. Better still, run the first pass at the highest bitrate you intend to encode, if possible.
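A minimal sketch of the reuse pattern described above (the file name and bitrate/crf pairs are hypothetical placeholders): pass 1 runs once at the highest target bitrate, then each pass-2 encode reuses the same ffmpeg2pass-0.log. The pass-2 invocations are only assembled and printed here.

```shell
#!/usr/bin/env bash
# Sketch: run -pass 1 once at the highest bitrate you plan to test, then
# reuse its log for several -pass 2 encodes. Names are hypothetical.
input="source.mkv"

# Run once: first pass at the highest intended bitrate.
# ffmpeg -y -i "$input" -c:v libvpx-vp9 -pass 1 -b:v 16M -crf 18 \
#        -an -f webm /dev/null

# Each second pass reuses the existing log with its own bitrate/crf pair.
pass2_cmds=()
for v in "9M 12" "12M 18" "14M 18" "16M 18"; do
  set -- $v  # split into $1=bitrate $2=crf
  pass2_cmds+=("ffmpeg -y -i $input -c:v libvpx-vp9 -pass 2 -b:v $1 -crf $2 -an -f webm constrainedQ-br$1-crf$2.webm")
done
printf '%s\n' "${pass2_cmds[@]}"
```

Dropping the printf and executing each command (or un-commenting the pass-1 line) turns the sketch into a runnable batch.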


FFmpeg - divide single .webm video into multiple videos (each representing part of the screen) specifying rows and columns count

Can FFmpeg divide one big rectangular video into x smaller rectangular ones?
What would be a command for it?
Can we parametrize the command with number of rows and columns we desire?
Can we somehow prevent losing pixel precision when the command is provided with an improper rows/columns count for the source video resolution?
This script does the job:
inputFile=$1
rows=$2
columns=$3
counter=0
for ((r=0; r<rows; r++)); do
    for ((c=0; c<columns; c++)); do
        ffmpeg -i "$inputFile" \
            -filter:v "crop=iw/$columns:ih/$rows:$c*(iw/$columns):$r*(ih/$rows)" \
            -vcodec libvpx -crf 5 -b:v 10M -an \
            "$(dirname "$inputFile")/$(basename "$inputFile" ".webm")$counter.webm"
        ((counter++))
    done
done
There is a single-call solution to do single-in multiple-out like this:
ffmpeg -i inputFile \
-vsync vfr \
-filter_complex "[0:v]untile=${ncols}x${nrows},select=mod(n\,$nout)+1:$nout[v1][v2][v3]...[v$nout]" \
-map [v1] -r $fps output1.webm \
-map [v2] -r $fps output2.webm \
...
-map [v$nout] -r $fps output$nout.webm
Here, $nout = $ncols * $nrows and you need to set the output framerate $fps explicitly (or it defaults to $input_fps * $nout).
Accordingly, you can run your nested loops to form the FFmpeg command argument string and call FFmpeg once after the loop. Note that my use of pseudo-variables $xxx does not adhere to any particular language, so please make the necessary adjustments.
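As one concrete reading of the pseudo-command above, a bash loop could assemble the output labels and the per-tile -map arguments like this (a sketch; the input name, 2x2 grid, and fps are placeholders, and the finished command is echoed rather than executed):

```shell
#!/usr/bin/env bash
# Sketch: assemble the single-call untile command in a loop.
inputFile="input.webm"
ncols=2 nrows=2 fps=30
nout=$((ncols * nrows))

labels=""   # becomes [v1][v2]...[vN], the filter's output pads
maps=()     # one "-map [vi] -r fps outputi.webm" group per tile
for ((i = 1; i <= nout; i++)); do
  labels+="[v$i]"
  maps+=(-map "[v$i]" -r "$fps" "output$i.webm")
done

filter="[0:v]untile=${ncols}x${nrows},select=mod(n\,$nout)+1:$nout$labels"
echo ffmpeg -i "$inputFile" -vsync vfr -filter_complex "$filter" "${maps[@]}"
```

Replacing echo with a real invocation runs the whole split in one FFmpeg call.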

Problem with escaped quotes in variable (I think) [duplicate]

This question already has an answer here:
Pass dynamically generated parameters to command inside script
(1 answer)
Closed 11 months ago.
I am trying to write a parameter driven routine to extract parts of audio files using ffmpeg.
Because the routine is parameter driven, I end up with a number of options in variables (a technique I have used successfully before in simpler examples), and for some reason this time it isn't working. Having stared at it and tried various experiments for hours, I give up and hope the helpful experts can sort me out.
This is a simplified version with the variables set directly
...
#!/bin/bash
a="a b c.mp3"
b="out-$a"
trackstring="-metadata track=\"07/93\""
echo "trackstring=$trackstring"
titlestring="-metadata title=\"$a\""
echo "titlestring=$titlestring"
startpoint="-ss 0"
echo "startpoint=$startpoint"
endpoint="-to 300"
echo "endpoint=$endpoint"
coverstring="-c:v copy"
echo "coverstring=$coverstring"
audiostring="-c:a libmp3lame -ab 32k -ac 1"
echo "audiostring=$audiostring"
echo "ffmpeg $startpoint $endpoint -i \"$a\" -hide_banner -loglevel warning $coverstring $audiostring $titlestring $trackstring \"$b\""
ffmpeg $startpoint $endpoint -i "$a" -hide_banner -loglevel warning $coverstring $audiostring $titlestring $trackstring "$b"
...
The resulting output from my script looks like this:
trackstring=-metadata track="07/93"
titlestring=-metadata title="a b c.mp3"
startpoint=-ss 0
endpoint=-to 300
coverstring=-c:v copy
audiostring=-c:a libmp3lame -ab 32k -ac 1
ffmpeg -ss 0 -to 300 -i "a b c.mp3" -hide_banner -loglevel warning -c:v copy -c:a libmp3lame -ab 32k -ac 1 -metadata title="a b c.mp3" -metadata track="07/93" "out-a b c.mp3"
Which gives me exactly what I am expecting and I think all valid BUT....
Then ffmpeg gives me:
[mp3 @ 0x55ae679e4640] Estimating duration from bitrate, this may be inaccurate
[NULL @ 0x55ae679ea0c0] Unable to find a suitable output format for 'b'
b: Invalid argument
A bad one!
But interestingly, putting the whole command into a string and then using an explicit sub-shell works exactly as expected. So, starting from the last of the assignments in the original post:
...
audiostring="-c:a libmp3lame -ab 32k -ac 1"
echo "audiostring=$audiostring"
cmd="ffmpeg $startpoint $endpoint -i \"$1\" -hide_banner -loglevel warning $coverstring $audiostring $titlestring $trackstring \"$2\""
echo "$cmd"
bash -c "$cmd"
Frankly, although I now have "working code", I think I am more confused than before.
The output is unchanged (except there is no error from ffmpeg) and the file(s) are generated exactly as expected.
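What goes wrong in the original script is that quote characters stored inside a variable are just data: when $titlestring is expanded unquoted, bash word-splits it into -metadata, title="a, b, and c.mp3", and ffmpeg treats the stray b as an output file name. The usual fix is a bash array, where each element survives expansion as exactly one argument. A sketch with the same option values as above:

```shell
#!/usr/bin/env bash
# Sketch: store each ffmpeg argument as one array element instead of
# embedding escaped quotes in a string (same option values as above).
a="a b c.mp3"
opts=(
  -c:v copy
  -c:a libmp3lame -ab 32k -ac 1
  -metadata "title=$a"     # one element, spaces included
  -metadata "track=07/93"
)
# ffmpeg -ss 0 -to 300 -i "$a" -hide_banner -loglevel warning \
#        "${opts[@]}" "out-$a"
printf '%s\n' "${opts[@]}"
```

"${opts[@]}" expands to one word per element, so no bash -c sub-shell is needed.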

Trying to change extension of filename on an ffmpeg script

(first time posting a question here)
So I'm looking to write an ffmpeg script to automate encoding my files to VP9.
The problem I'm having is when I try to strip the extension and add a new one.
For example
Demo.mp4
Should change to
Demo.webm
I'm running this on Ubuntu 16.04 (Server, non-GUI version)
I've tried a few different ways to accomplish this (using Google and other posts on Stack Overflow) but I can't seem to make it work.
This is the error I keep getting:
line 31: Demo.mp4+.vp9: syntax error: invalid arithmetic operator (error token is ".mp4+.vp9")
I've also commented (in the code below) where the syntax error is pointing to.
#!/bin/bash
# Welcome Message
clear
printf "====================================\n"
printf "FFMPEG Encoder\n"
printf "(Using HDR-4k Profile)\n"
printf "====================================\n\n"
printf " Loading Files in Current Directory...\n\n"
sleep 3s
# Variables
i=1
ext=".webm"
vadd=4000000
vsub=2000000
# Iterate through files in current directory
for j in *.{mp4,mkv};
do
echo "$i.$j"
file[i]=$j
i=$(( i + 1 ))
done
# Select File & Bitrate
printf "Enter file number\n"
read fselect
printf "${file[$fselect]}: Selected for encoding\n\n"
printf "Enter Average Bitrate (Eg: 8000000)\n\n"
read bselect
# ***THIS IS WHERE THE PROBLEM IS***
# Prepare output file, strip trailing extension (eg .mkv) and add .webm
ftemp1="${file[$fselect]}"
ftemp2="${ftemp1::-4}"
fout="$(($ftemp2+$ext))"
printf "Output file will be: $fout"
printf "Preparing to encode..."
sleep 5s
# Encode with User-Defined Parameters
ffmpeg -y -report -i ${file[$fselect]} -b:v $bselect -speed 4 -pass 1 \
-pix_fmt yuv420p10le \
-color_primaries 9 -color_trc 16 -colorspace 9 -color_range 1 \
-maxrate "$(($bselect+$vadd))" -minrate "$(($bselect-$vsub))" \
-profile:v 2 -vcodec libvpx-vp9 -f webm /dev/null && \
ffmpeg -y -report -i ${file[$fselect]} -b:v $bselect -pass 2 \
-pix_fmt yuv420p10le \
-color_primaries 9 -color_trc 16 -colorspace 9 -color_range 1 \
-maxrate "$(($bselect+$vadd))" -minrate "$(($bselect-$vsub))" \
-profile:v 2 -vcodec libvpx-vp9 \
$fout
I'm certain there is a much cleaner way to do this - but I'm not expecting help with that :P
My suspicion is that I'm trying to add two different types of variables? But I thought I defined them as strings... I could be wrong.
Please Help... lol
You are trying to do arithmetic evaluation ($((...))), but you just need to concatenate two strings:
fout="$ftemp2$ext"
BTW, you can replace this three-line transformation with a single line:
fout="${file[$fselect]/%.mp4/$ext}"
This uses bash pattern substitution, where an .mp4 string found at the end (the % anchor) is replaced by the contents of $ext.
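Since the loop at the top also matches .mkv files, a variant that strips whatever extension is present may be handier. A sketch using the ${var%.*} suffix-stripping expansion (file names here are just examples):

```shell
#!/usr/bin/env bash
# Sketch: swap any trailing extension for .webm using ${var%.*}, which
# removes the shortest suffix matching ".*" (i.e. from the last dot on).
ext=".webm"
for f in "Demo.mp4" "Show.S01E01.mkv"; do
  fout="${f%.*}$ext"   # drop from the last dot onward, then append $ext
  echo "$fout"
done
```

Because %.* matches the shortest suffix, only the final extension is removed, so dotted names like Show.S01E01.mkv survive intact.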

Batch avconv re-encode videos halfway through the list

As I have a low-end computer running Linux, I often need to re-encode HD videos, lowering the quality to be able to watch them on my machine. A typical case is when I download several episodes of a series: I can't convert them all at once, and I need to start re-encoding halfway through the series.
I typically use this line to convert a single episode to a lower quality:
avconv -i anime_episode_01.mkv -map 0 -c copy -c:v libx264 -crf 31 anime_01.mkv
If I were to batch-convert them at once I would use something like:
for i in *.mkv;do avconv -i "$i" -map 0 -c copy -c:v libx264 -crf 31 "encoded/$i";done
Where encoded is a subdirectory.
But what if I need to start re-encoding at, say, episode 5?
I have no idea.
There are probably lots of ways to do this, here are a couple of options.
Option 1: Use seq
Use seq to generate a sequence, loop over this and encode.
A sequence from 5 to 15:
seq 5 15
If you need to format the numbers, e.g. to get a 0 prefix for one-digit numbers, you can
use the -f switch, which takes a printf-style formatting argument for a float/double.
seq -f %02.0f 5 15
This can be used in a loop, e.g. something like this:
for i in $(seq -f %02.0f 5 15); do
filename="anime_episode${i}.mkv"
echo "Encoding episode $i: $filename"
avconv -i "$filename" -map 0 -c copy -c:v libx264 -crf 31 "encoded/$filename"
done
Option 2: Check whether encoded file exists
Do pretty much the same as you do in your current loop, but only perform encoding if
the encoded file does not already exist.
for i in *.mkv; do
if [ ! -f "encoded/$i" ]; then
echo "Encoding file: $i"
avconv -i "$i" -map 0 -c copy -c:v libx264 -crf 31 "encoded/$i"
else
echo "Skipped file: $i"
fi
done
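If your shell is bash 4 or newer, brace expansion with zero-padding can stand in for seq in Option 1 (the episode range is hypothetical):

```shell
#!/usr/bin/env bash
# Sketch of Option 1 using bash brace expansion instead of seq; {05..15}
# zero-pads in bash 4+. The avconv call is commented out so the loop only
# shows which files it would process.
for i in {05..15}; do
  filename="anime_episode${i}.mkv"
  echo "Would encode: $filename"
  # avconv -i "$filename" -map 0 -c copy -c:v libx264 -crf 31 "encoded/$filename"
done
```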

How to set a video's duration in FFMPEG?

How can I limit the video duration for a given video? For example, if we are uploading one video that should not exceed more than 5 minutes, I need a command in FFMPEG.
Use the -t option to specify a time limit:
`-t duration'
Restrict the transcoded/captured video sequence to the duration specified in seconds. hh:mm:ss[.xxx] syntax is also supported.
http://www.ffmpeg.org/ffmpeg.html
Just to elaborate a bit further for more detailed use and examples.
As Specified in the FFMpeg Docs
-t duration (input/output)
When used as an input option (before -i),
limit the duration of data read from the input file.
e.g. ffmpeg -t 5 -i input.mp3 testAsInput.mp3
Will stop reading the input automatically after 5 seconds
When used as an output option (before an output url),
stop writing the output after its duration reaches duration.
e.g. ffmpeg -i input.mp3 -t 5 testAsOutput.mp3
Will stop writing automatically after 5 seconds
Effectively, in this use case the result is the same. See below for a more extended use case.
-to position (input/output)
Stop writing the output or reading the input at position.
e.g. same as above but with to instead of t
duration or position must be a time duration specification, as specified in the ffmpeg-utils(1) manual.
[-][HH:]MM:SS[.m...] or [-]S+[.m...][s|ms|us]
-to and -t are mutually exclusive and -t has priority.
Example use as input option with multiple inputs
Note: -f pulse -i 1 is my system audio, -f pulse -i 2 is my microphone input
Let's imagine I want to record both my microphone and speakers at the same time indefinitely (until I force a stop with Ctrl+C).
I could use the amix filter for example.
ffmpeg \
-f pulse -i 1 \
-f pulse -i 2 \
-filter_complex "amix=inputs=2" \
testmix.mp3
Now let's imagine I only want to record the first 5 seconds of my system audio but all of my microphone (again, until I kill the process with Ctrl+C).
ffmpeg \
-t 5 -f pulse -i 1 \
-f pulse -i 2 \
-filter_complex "amix=inputs=2:duration=longest" \
testmix.mp3
Note: the :duration=longest amix option is the default anyway, so it's not really needed explicitly
Now let's assume I want the same as above but limit the recording to 10 seconds. The following examples would satisfy that requirement:
ffmpeg \
-t 5 -f pulse -i 1 \
-t 10 -f pulse -i 2 \
-filter_complex "amix=inputs=2:duration=longest" \
testmix.mp3
ffmpeg \
-t 5 -f pulse -i 1 \
-f pulse -i 2 \
-filter_complex "amix=inputs=2:duration=longest" \
-t 10 testmix.mp3
Note: With regards to start position searching/seeking this answer with a bit of investigation I did, may also be of interest.
An example:
ffmpeg -f lavfi -i color=s=1920x1080 -loop 1 -i "input.png" -filter_complex "[1:v]scale=1920:-2[fg]; [0:v][fg]overlay=y=-'t*h*0.02'[v]" -map "[v]" -t 00:00:03 output.mp4
This sets the max time to 3 seconds. Note that the -t has to be just before the output file; if you set it at the start of this command, i.e. ffmpeg -t ..., it will NOT work (there it would be parsed as an input option applying to the first input instead).
