I want to take the input, blend every N frames into one, drop the rest, and use the blended frames for the output at the fps of my choice.
I used this line:
ffmpeg -y -i input.mp4 -vf tmix=frames=15:weights="1",select='not(mod(n\,15))' -vsync vfr frames/output-%05d.tif
That generated images, which I combined into the video. So far, so good.
But I'd like to skip the image output and go straight to video, so I tried this:
ffmpeg -y -i input.mp4 -vf tmix=frames=15:weights="1",select='not(mod(n\,15))' -vsync vfr -r 30 -c:v prores_ks -profile:v 3 -vendor apl0 -bits_per_mb 8000 -pix_fmt yuv422p10le output.mov
That produces 1.62 fps video, instead of 30 fps.
I'm at a loss on how to get it to output 30fps without the intermediate step of outputting images.
Thanks
I think the simplest way to achieve this is to feed the input at 15 times the desired rate and drop all the intermediate frames with -r 30:
ffmpeg -y -r 450 -i input.mp4 \
-vf tmix=frames=15:weights="1" \
-r 30 sandbox/out.mp4
However, a tmix solution is somewhat inefficient for your use case because it mixes at every frame, including the ones that get dropped. If you don't mind a longer expression, you can try:
ffmpeg -i in.mp4 \
-vf "setpts='floor(N/15)/(30*TB)',select='mod(n,15)+1':n=15[v0][v1][v2][v3][v4][v5][v6][v7][v8][v9][v10][v11][v12][v13][v14];[v0][v1][v2][v3][v4][v5][v6][v7][v8][v9][v10][v11][v12][v13][v14]mix=inputs=15:weights=1" \
-r 30 sandbox/out.mp4
[edit] The setpts expression should be floor(N/15)/(30*TB), not mod(n,15)+1, so that each run of 15 successive frames has the same pts.
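The original symptom (the very low reported fps) can also be addressed while keeping the tmix+select chain from the question: the frames that survive select keep timestamps spaced 15 input frames apart, so appending a setpts step that re-stamps them 1/30 s apart lets the output come out at 30 fps. This is only a sketch along those lines, reusing the question's encoder settings and dropping -vsync vfr since the re-stamped timestamps are already regular (it keeps the inefficiency mentioned above, since tmix still runs on every frame):
ffmpeg -y -i input.mp4 -vf tmix=frames=15:weights="1",select='not(mod(n\,15))',setpts='N/(30*TB)' -r 30 -c:v prores_ks -profile:v 3 -vendor apl0 -bits_per_mb 8000 -pix_fmt yuv422p10le output.mov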
Related
I am trying to extract frames from a video at the points where someone starts to speak, at 5 frames per second while they are speaking. I already have the list of frame numbers I want, e.g. 0, 6, 12, 18 and 200, 206, 212, ....
I tried scripting this by running the command below once per frame, but it's pretty slow. Is there a quick way to get the desired frames from the list, with the frame number as the image name?
ffmpeg -i 1.mp4 -vf select='eq(n,0)' -vsync 0 -an -y -q:v 16 0.png
ffmpeg -i 1.mp4 -vf select='eq(n,6)' -vsync 0 -an -y -q:v 16 6.png
....
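One way to avoid launching ffmpeg once per frame is to put all the wanted frame numbers into a single select expression. This is a sketch using the example numbers 0, 6, 12 and 18 from above; note the output files are numbered sequentially (frame_1.png, frame_2.png, ...), so they would still need to be renamed to the original frame numbers afterwards:
ffmpeg -i 1.mp4 -vf select='eq(n\,0)+eq(n\,6)+eq(n\,12)+eq(n\,18)' -vsync 0 -an -q:v 16 frame_%d.png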
Hi, I am trying to speed up and trim clips with FFmpeg version 4.2.2. Is there a limit to how fast you can speed up a clip? If I try to speed up a clip beyond a certain point, the output file cannot be opened.
I have tried two methods without any luck: 1. using the setpts filter, and 2. inputting the file at a faster frame rate.
1.
ffmpeg -i GH012088.MP4 -y -ss 18 -t 0.48 -an -filter:v "setpts=0.096*PTS" -r 25 output.MP4
2.
ffmpeg -r 312.1875 -i GH012088.MP4 -y -ss 18 -t 0.48 -r 25 -an output.MP4
I am trying to create a clip from the input that starts at 1 second into the original clip, plays at 10.4166x speed and lasts 0.48 seconds.
What am I doing wrong?
Thanks
Use
ffmpeg -ss 1 -i GH012088.MP4 -y -t 0.48 -an -filter:v "setpts=0.096*PTS" -r 25 output.MP4
The seek has to be on the input side, before frames are retimed. The -t has to be on the output side, after frames are retimed.
Does the movie have sound?
If yes, then we have to speed up the audio and video in sync by combining filters:
ffmpeg -i video.avi -filter_complex "[0:v]setpts=0.5*PTS[v];[0:a]atempo=2.0[a]" -map "[v]" -map "[a]" -f avi video1.avi
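Note that, depending on the FFmpeg version, atempo may only accept factors between 0.5 and 2.0, so for larger speed-ups you can chain several atempo instances. For example, a sketch of a 4x speed-up of the same hypothetical video.avi:
ffmpeg -i video.avi -filter_complex "[0:v]setpts=0.25*PTS[v];[0:a]atempo=2.0,atempo=2.0[a]" -map "[v]" -map "[a]" -f avi video1.avi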
I'm creating a video that:
uses a still image as a source
has a text overlay
fades in and out
has a silent stereo audio track.
So far, I have this, and it (almost) works correctly:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
The only problem is that the fade-out doesn't seem to go to black, even though this is a 150-frame video and I believe I am following the ffmpeg documentation correctly.
The resulting video is here:
http://video.blivenyc.com/vid-from-image/turtle11.mp4
Any thoughts?
Well, I'm not sure why, but this works, even though it appears to be equivalent:
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -i turtle-2.jpg -c:v libx264 -t 5 -r 30 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=t=in:st=0:d=1,fade=t=out:st=4:d=1 -acodec aac turtle12.mp4
Basically, frame-based syntax:
fade=in:0:60,fade=out:90:60
gets substituted with the time-based syntax:
fade=t=in:st=0:d=1,fade=t=out:st=4:d=1
And somehow it works. Not sure why this is.
The video stream on which the fade filter operates is not 150 frames long. Input and output framerates are different here. The use of -r to set output rate happens after all filtering is done. At that stage, ffmpeg will drop or duplicate frames to obtain the output rate.
The input rate for an image or image sequence is 25, unless expressly set otherwise. In your command, since there is no override, it's 25. So the stream the filters see is only 125 frames long (5 seconds x 25), and a fade-out of 60 frames starting at frame 90 would need to run to frame 150, so it is cut short before reaching black. ffmpeg then duplicates 5 frames per input second to bring the output up to 30 fps.
To get the desired result, use
ffmpeg -f lavfi -i "aevalsrc=0|0" -loop 1 -framerate 30 -i turtle-2.jpg -c:v libx264 -t 5 -s 1920x1080 -aspect 16:9 -pix_fmt yuv420p -filter:v drawtext="fontsize=130:fontfile=comic.ttf:text='hello world':x=(w-text_w)*.25:y=(h-text_h)*.75",fade=in:0:60,fade=out:90:60 -acodec aac turtle11.mp4
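As a quick sanity check (assuming the same output name as above), counting the decoded frames of the result should now report 150:
ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 turtle11.mp4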
I'm scaling a video and applying a watermark like so:
ffmpeg -ss 0:0:0.000 -i video.mp4 -y -an -t 0:0:10.000
-vf \"[in]scale=400:316[middle]\" -b:v 2000k -r 20
-vf 'movie=watermark.png,pad=400:316:0:0:0x00000000 [watermark];[middle] [watermark]overlay=0:0[out]'
out.flv
However, the applied watermark seems to be scaled to the original video size rather than the smaller scaled video size.
This command line worked on ffmpeg version 0.8.6.git and now behaves differently after an upgrade to version N-52381-g2288c77.
How do I get it to work again?
Update 2013-04-26:
I have now tried to use the overlay filter's x and y parameters instead of padding, without success.
Answered by ubitux on the FFmpeg IRC:
Use scale and overlay in a single -filter_complex chain, like so:
ffmpeg -y -ss 0 -t 0:0:30.0 -i 'video.mp4' -i '/watermark.png'
-filter_complex "[0:0] scale=400:225 [wm]; [wm][1:0] overlay=305:0 [out]"
-map "[out]" -b:v 896k -r 20 -an
'out.flv'
Also load the watermark via -i rather than the movie filter.
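As a usage note, the overlay x/y values can also be expressions, so the watermark can be anchored to a corner of the scaled video regardless of its size. A sketch along the same lines (the 10-pixel margin is just an example):
ffmpeg -y -ss 0 -t 0:0:30.0 -i 'video.mp4' -i '/watermark.png' \
-filter_complex "[0:0] scale=400:225 [vid]; [vid][1:0] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" \
-map "[out]" -b:v 896k -r 20 -an 'out.flv'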
I want to compress a raw .y4m video to mpg, and then extract the frames from the compressed video. I need the GOP of the compression to be IBBPBBPBBPBBPBB IBBP..., i.e. a 15:2 GOP.
I used these commands:
ffmpeg -i video.y4m -vcodec libx264 -sameq -y -r 30 output.avi 2>list.txt
ffmpeg -i output.avi -vcodec libx264 -y -sameq -vf showinfo -y -f image2 image%3d.jpeg -r 30 2>list1.txt
The output contains only 2 I-frames, 100 P-frames and 198 B-frames, so it is not a 15:2 GOP. What can I do?
I need one I-frame every 15 frames, and the pattern to be IBBPBBP...
Sorry, I'm new to ffmpeg. Please help me; this is the input to my project and it is an important step for me.
Try (according to http://ffmpeg.org/ffmpeg.html#Video-Encoders)
ffmpeg -i video.y4m -vcodec libx264 -g 15 -y -r 30 output.avi
I think the -sameq option (it means "same quantizers") is not needed in your case.
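If -g 15 alone does not give the exact pattern (by default x264 may insert extra I-frames at scene changes and vary the number of consecutive B-frames), a stricter variant to try, together with an ffprobe call to inspect the resulting frame types, could look like this (a sketch; the exact behaviour depends on your libx264 build):
ffmpeg -i video.y4m -vcodec libx264 -g 15 -keyint_min 15 -sc_threshold 0 -bf 2 -b_strategy 0 -y -r 30 output.avi
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv output.avi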