I've been working through FFmpeg, but I haven't been able to get a rotation running from the examples on their site. I am trying to "wiggle" a video back and forth around a fixed point at the bottom - think of a head moving left to right (and so on).
I am attempting to do this with the filter "rotate" (https://ffmpeg.org/ffmpeg-filters.html#rotate). Attempting to use their examples, I get an error.
This is what I have so far:
ffmpeg -i vid1.mp4 -i vid2.mov -loop 1 -i image.png -filter_complex "\
[2:v]alphaextract, scale=240x160[mask];\
[0:v] scale=240x160, rotate=A*sin(2*PI/T*t) [ascaled];\
[ascaled][mask]alphamerge[masked];\
[1:v]scale=480x360[background];\
[background][masked]overlay=120:20"\
-c:a copy 65B6354F61B4AF02_HD_sq.MOV
I am using "rotate" directly from an example in an attempt to get something to run at all.
The error I get back is:
[Parsed_rotate_3 @ 0x7ff4476045e0] [Eval @ 0x7fff5b3e3f00] Undefined constant or missing '(' in 'T*t)'
[Parsed_rotate_3 @ 0x7ff4476045e0] Error occurred parsing angle expression 'A*sin(2*PI/T*t)'
[Parsed_rotate_3 @ 0x7ff4476045e0] Failed to configure output pad on Parsed_rotate_3
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #1:0
If I remove 'A', 'T', 'sin', etc., rotate does actually work, but far from the desired behavior.
Am I missing something to expose those params?
In the expression,
rotate=A*sin(2*PI/T*t)
A and T aren't built-in constants. You are meant to replace them with numbers representing the amplitude in radians and the period in seconds, respectively.
e.g.
rotate=2*sin(2*PI/3*t)
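Plugged into a full command, that looks something like this (a minimal sketch assuming an input vid1.mp4; for a gentler head-wiggle, use a smaller amplitude such as 0.1):
ffmpeg -i vid1.mp4 -vf "rotate=2*sin(2*PI/3*t)" output.mp4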
Give the transpose filter a try. For example, to rotate 90° clockwise:
ffmpeg -i input.mp4 -vf "transpose=1" output.mp4
For the transpose parameter you can pass:
0 - 90° counterclockwise and vertical flip (default)
1 - 90° clockwise
2 - 90° counterclockwise
3 - 90° clockwise and vertical flip
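For example, to rotate 90° counterclockwise instead, just swap the value:
ffmpeg -i input.mp4 -vf "transpose=2" output.mp4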
I'm using FFmpeg to extract frames of a video, and I first want to print the video's metadata to a text file (to get the scene value of each frame).
This already works for me with something like:
ffmpeg -i input.mp4 -vf "select='gte(scene,0)',metadata=print:file=scenescores.txt" -an -f null -
Because I'm using all this inside a Java application, I want to pass an absolute path (of a temp directory) to print:file= instead of the current relative one, which writes the file to the project's root directory.
But when I try to specify an absolute path like D:\scenescores.txt I get the following error:
[metadata @ 00000203282ff0c0] Unable to parse option value "scenescores.txt" as boolean
[metadata @ 00000203282ff0c0] Error setting option direct to value scenescores.txt.
[Parsed_metadata_1 @ 00000203269bdf00] Error applying options to the filter.
[AVFilterGraph @ 0000020328020840] Error initializing filter 'metadata' with args 'print:file=D:\scenescores.txt'
Is there any way to achieve printing to an absolute path? Am I missing some escape rules or something?
I played a lot with escaping different things and in the end it worked for me like this:
ffmpeg -i input.mp4 -vf "select='gte(scene,0)',metadata=print:file=\'D:\scenescores.txt\'" -an -f null -
The difference is that the path is surrounded by \'.
Also, I read that escaping \ as \\, or replacing it with /, can sometimes help.
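For instance, this variant keeps the \' quoting from above but uses forward slashes in the path (an untested sketch; the path is illustrative):
ffmpeg -i input.mp4 -vf "select='gte(scene,0)',metadata=print:file=\'D:/scenescores.txt\'" -an -f null -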
I'm trying to rotate a video and increase its volume, as well as change its frame rate:
ffplay -i C:/Users/thota/OneDrive/Desktop/VET/Input.mp4 -af "volume="10.0",atempo="10.0" -vf "transpose=2,transpose=2,setpts=1/"10.0"*PTS,scale="3840:2160",fps="5.0"
I'm using FFmpeg because I'm building a video editing application, so I need to combine many operations. When I try to use the above command I get the following error (I'm using ffplay in the command because I just want to see the output):
[atempo @ 000001fdd50c7c40] [Eval @ 00000047b79fe770] Undefined constant or missing '(' in
'vftranspose=2'
[atempo @ 000001fdd50c7c40] Unable to parse option value "10.0 -vf transpose=2"
[atempo @ 000001fdd50c7c40] [Eval @ 00000047b79fe770] Undefined constant or missing '(' in
'vftranspose=2'
[atempo @ 000001fdd50c7c40] Unable to parse option value "10.0 -vf transpose=2"
[atempo @ 000001fdd50c7c40] Error setting option tempo to value 10.0 -vf transpose=2.
[Parsed_atempo_1 @ 000001fdd50c7b40] Error applying options to the filter.
Error initializing filter 'atempo' with args '10.0 -vf transpose=2'
Please help me solve this issue, and suggest the best way to combine multiple operations. It's been tough doing it this way, so is there another way? If yes, please let me know.
Thank you.
It looks like you are missing the closing quotation mark (") in two places.
The following command works (weird audio, but no errors):
ffplay -i C:/Users/thota/OneDrive/Desktop/VET/Input.mp4 -af "volume="10.0",atempo="10.0"" -vf "transpose=2,transpose=2,setpts=1/"10.0"*PTS,scale="3840:2160",fps="5.0""
You don't need all the quotation marks, and you may improve readability by using '' instead of nested "".
The following command is equivalent:
ffplay -i C:/Users/thota/OneDrive/Desktop/VET/Input.mp4 -af "volume=10.0,atempo=10.0" -vf "transpose=2,transpose=2,setpts=1/10.0*PTS,scale='3840:2160',fps=5.0"
How do I cut a section out of a video with ffmpeg?
Imagine I have a 60-second mp4, A.
I want to remove all the stuff from 0:15 to 0:45.
The result should be a 30-second mp4, which is composed of the first 15 seconds of A directly followed by the last 15 seconds of A.
How can I do this without using concat?
I know how I could do it by creating two intermediary files and then using ffmpeg to concat them. I don't want to have to perform so much manual work for this (simple?) operation.
I have also seen the trim filter used for removing multiple parts from a video. All the usages I've found seem very verbose, and I haven't found an example for a case as simple as mine (just a single section removed).
Do I have to use trim for this operation? Or are there other less verbose solutions?
The ideal would of course be something at least as simple as -ss 0:15 -to 0:45, which removes the ends of a video (a -cut 0:15-0:45 option, for example).
I started from
https://stackoverflow.com/a/54192662/3499840 (currently the only answer to "FFmpeg remove 2 sec from middle of video and concat the parts. Single line solution").
Working from that example, the following works for me:
# In order to keep <start-15s> and <45s-end>, you need to
# keep all the frames which are "not between 15s and 45s":
ffmpeg -i input.mp4 \
-vf "select='not(between(t,15,45))', setpts=N/FRAME_RATE/TB" \
-af "aselect='not(between(t,15,45))', asetpts=N/SR/TB" \
output.mp4
This is a one-line linux command, but I've used the bash line-continuation character ('\') so that I can vertically align the equals-signs as this helps me to understand what is going on.
I had never seen ffmpeg's not and between operators before, but I found their documentation here.
Regarding the usual ffmpeg "copy vs re-encode" dichotomy, I was hoping to be able to use ffmpeg's "copy" "codec" (yeah, I know that it's not really a codec) so that ffmpeg would not re-encode my video, but if I specify "copy", then ffmpeg starts and stops at the nearest keyframes which are not sufficiently close to my desired start and stop points. (I want to remove a piece of video that is approximately 20 seconds long, but my source video only has one keyframe every 45 seconds!). Hence I am obliged to re-encode. See https://trac.ffmpeg.org/wiki/Seeking#Seekingwhiledoingacodeccopy for more info.
The setpts/asetpts filters set the timestamps on each frame to the correct values so that your media player will play each frame at the correct time.
HTH.
If you want to use the copy "codec", consider the following approach:
ffmpeg -i input.mp4 -t "$start_cut_section" -c copy part1.mp4&
ffmpeg -i input.mp4 -ss "$end_cut_section" -c copy part2.mp4&
echo "file 'part1.mp4'" > filelist;
echo "file 'part2.mp4'" >> filelist;
wait;
ffmpeg -f concat -i filelist -c copy output.mp4;
rm filelist;
This creates two files from before and after the cut, then combines them into a new trimmed final video. Obviously, this can be used to create as many cuts as you like. It may seem like a longer approach than the accepted answer, but it likely will execute much faster because of the use of the copy codec.
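For instance, to remove two sections you would keep three segments (a sketch; the timestamps and filenames are illustrative):
ffmpeg -i input.mp4 -to 10 -c copy part1.mp4
ffmpeg -i input.mp4 -ss 20 -to 30 -c copy part2.mp4
ffmpeg -i input.mp4 -ss 40 -c copy part3.mp4
echo "file 'part1.mp4'" > filelist
echo "file 'part2.mp4'" >> filelist
echo "file 'part3.mp4'" >> filelist
ffmpeg -f concat -i filelist -c copy output.mp4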
In Bash, I am trying to match an image to a frame in ffmpeg. I also want to exit the ffmpeg process when the match is found. Here is a simplified version of the current code:
ffmpeg -hide_banner -ss 0 -to 60 \
-i "video.mp4" -i "image.jpg" -filter_complex \
"blend=difference, blackframe" -f null - </dev/null 2>log.txt &
pid=$!
trap "kill $pid 2>/dev/null" EXIT
while kill -0 $pid 2>/dev/null; do
# (grep command to monitor log file)
# if grep finds blackframe match, return blackframe time
done
To my understanding, if the video actually contains a black frame I will get a false positive. How can I effectively mitigate this?
While this is unnecessary to answer the question, I would also like to exit the ffmpeg process without having to use grep to constantly monitor the log file, instead using pure ffmpeg.
Edit: I say this because while I understand the blend filter is computing the difference, I am getting a false positive on a blackframe in my video and I don't know why.
Edit: A possible solution to this issue is to not use blackframe at all but psnr (peak signal-to-noise ratio); however, its normal usage is comparing two videos frame by frame, and I don't know how to effectively use it with an image as input.
Use
ffmpeg -ss 0 -t 60 -copyts -i video.mp4 -i image.jpg -filter_complex "[0]extractplanes=y[v];[1]extractplanes=y[i];[v][i]blend=difference,blackframe=0,metadata=select:key=lavfi.blackframe.pblack:value=100:function=equal,trim=duration=0.0001,metadata=print:file=-" -an -v 0 -vsync 0 -f null -
If a match is found, it will print to stdout a line of the form,
frame:179 pts:2316800 pts_time:6.03333
lavfi.blackframe.pblack=100
else no lines will be printed. It will exit after the first match, if found, or else run until the whole input has been processed.
Since blackframe only looks at luma, I use extractplanes both to speed up blend and to avoid any unexpected format conversions blend may request.
The blackframe threshold is set to 0, so every frame gets tagged with blackframe metadata. False positives are not possible, since blend computes the difference: the difference between a black input frame and the reference frame is equal to the reference frame, so it won't register as black unless the reference is itself a black frame, in which case it is a true match rather than a false positive.
The first metadata filter only passes through frames with blackframe value of 100. The trim filter stops a 2nd frame from passing through (except if your video's fps is greater than 10000). The 2nd metadata filter prints the selected frame's metadata.
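If you want to drive this from Bash without tailing a log file, you can read the match time straight from stdout, something like this (a sketch reusing the command above; the filenames are illustrative):
match_time=$(ffmpeg -ss 0 -t 60 -copyts -i video.mp4 -i image.jpg -filter_complex "[0]extractplanes=y[v];[1]extractplanes=y[i];[v][i]blend=difference,blackframe=0,metadata=select:key=lavfi.blackframe.pblack:value=100:function=equal,trim=duration=0.0001,metadata=print:file=-" -an -v 0 -vsync 0 -f null - | grep -o 'pts_time:[0-9.]*' | head -n1 | cut -d: -f2)
echo "Match at: $match_time"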
I am trying to overlay one video on top of another using ffmpeg, but I couldn't quite understand the error.
I based it on the existing command from here.
More specifically, I want to replace all colors close to a specific color (say brown, r:82 g:44 b:11) and then have them set as transparent.
ffmpeg -i moonmen.mp4 -i transparent_overlay.mp4 -filter_complex
"[1]split[m][a];
[a]geq='if(between(r(X,Y), 77, 87)*between(g(X,Y), 39, 49)*between(b(X,Y), 06, 16) ,255:255:255,0:0:0)';
[m][al]alphamerge[ovr];
[0][ovr]overlay"
output.mp4
but I got error:
[Parsed_geq_1 @ 0x7fc8e2e08400] Either YCbCr or RGB but not both must be specified
[AVFilterGraph @ 0x7fc8e2e07c60] Error initializing filter 'geq' with args 'if(between(r(X,Y), 77, 87)*between(g(X,Y), 39, 49)*between(b(X,Y), 06, 16) ,255:255:255,0:0:0)'
Error initializing complex filters.
Invalid argument
The geq filter can work on both RGB and YUV input, so the expressions have to be named to indicate which mode is intended. But more importantly, your input may not be RGB, or may not have alpha, so that has to be fixed first.
ffmpeg -i moonmen.mp4 -i transparent_overlay.mp4 -filter_complex
"[1]format=rgba,geq=r='r(X,Y)':a='if(between(r(X,Y),77,87)*between(g(X,Y),39,49)*between(b(X,Y),6,16),0,255)'[ovr];
[0][ovr]overlay"
output.mp4
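As a side note, the colorkey filter is designed for exactly this kind of "close to a specific color" keying, so something like the following might work as well (an untested sketch; 0x522C0B is r:82 g:44 b:11, and the similarity value is a starting point to tune):
ffmpeg -i moonmen.mp4 -i transparent_overlay.mp4 -filter_complex "[1]colorkey=0x522C0B:similarity=0.1[ovr];[0][ovr]overlay" output.mp4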