I would like to extract, say, 10 video frames from a given video file using ffmpeg, ideally uniformly distributed throughout the video. I know this can be done in a few ways, for example
ffmpeg -i input.mp4 -vf fps=1/10 out%03d.jpg
will output one image every 10 seconds. However, this is too slow for my liking and scales proportionally with the length of the video. I have read a bit about ffmpeg's seeking capability, for example
ffmpeg -ss 00:00:05 -i input.mp4 -frames:v 1 out001.jpg
will very quickly seek to the 5th second of the video and extract one frame. However, I haven't come across a way to seek to multiple locations in the video without calling the above command repeatedly at various times.
Is there a quicker way of accomplishing this?
This can be done with a single long command, e.g.:
ffmpeg -ss 00:00:05 -i input.mp4 \
-ss 00:01:05 -i input.mp4 \
-ss 00:03:05 -i input.mp4 \
-ss 00:40:05 -i input.mp4 \
-map 0:v -frames:v 1 out001.jpg \
-map 1:v -frames:v 1 out002.jpg \
-map 2:v -frames:v 1 out003.jpg \
-map 3:v -frames:v 1 out004.jpg
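If you'd rather not type all of that out, here is an untested sketch of a small shell script that builds the same kind of command for N uniformly spaced frames. It assumes ffprobe is available, that input.mp4 reports a duration, and that the filename contains no spaces:
# Untested sketch: generate the multi-seek command for N evenly spaced frames.
N=10
IN=input.mp4
# Total duration in seconds (decimal), as reported by the container.
DUR=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$IN")
args=""
maps=""
i=0
while [ "$i" -lt "$N" ]; do
    # Seek to (i + 0.5)/N of the duration so the frames are spread evenly.
    TS=$(awk -v d="$DUR" -v i="$i" -v n="$N" 'BEGIN { printf "%.3f", d * (i + 0.5) / n }')
    args="$args -ss $TS -i $IN"
    maps="$maps -map $i:v -frames:v 1 $(printf 'out%03d.jpg' $((i + 1)))"
    i=$((i + 1))
done
# Relies on word splitting, which is fine here because no value contains spaces.
ffmpeg $args $maps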
Related
I am trying to extract the frames between certain frame numbers as images.
Currently I am doing it like this:
ffmpeg \
-i input.mp4 \
-vf select='eq(n\,34885)+eq(n\,34886)+eq(n\,34887)+eq(n\,34888)+eq(n\,34889)+eq(n\,34890)+eq(n\,34891)' \
-vsync 0 output_frames_%d.png
Not only is the command above very cumbersome, it also takes a lot of time to run. Is there an easier and faster way to do this?
You may try this:
ffmpeg -i input.mp4 -vf "select='between(n\,34885\,34891)'" -vsync 0 -start_number 34885 out-%02d.png
You could try this:
ffmpeg -i input.mp4 -vf select='between(n\,34885\,34891)' -vsync passthrough -frames 7 frame_%d.png
It's less cumbersome and stops once it has hit the target number of frames (7).
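Untested idea: both commands above still decode everything from frame 0, which is what makes them slow when the frames sit deep in the file. Since you already know the frame numbers, you could fast-seek close to them first and only then start grabbing frames. This assumes a constant frame rate of 25 fps (check yours with ffprobe), so frame 34885 starts at roughly 34885/25 = 1395.40 s; the boundary may be off by a frame, so verify the first image:
ffmpeg -ss 1395.40 -i input.mp4 -frames:v 7 -start_number 34885 out-%05d.png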
I am trying to extract particular frames from a video: whenever someone in the video starts to speak, I want 5 frames per second for as long as they are speaking. I already have the list of frame numbers I want, e.g. 0, 6, 12, 18 and 200, 206, 212, ...
I tried to create a script that does it automatically, running the command below once per frame, but it's pretty slow. Is there a quick way to get the desired frames from the list, with the frame number as the image name?
ffmpeg -i 1.mp4 -vf select='eq(n\,0)' -vsync 0 -an -y -q:v 16 0.png
ffmpeg -i 1.mp4 -vf select='eq(n\,6)' -vsync 0 -an -y -q:v 16 6.png
....
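Untested sketch: rather than running ffmpeg once per frame, you could build a single select expression from your frame list so the video is decoded only once, then rename the sequentially numbered outputs back to the original frame numbers. FRAMES, the tmp_ prefix and the rename loop are just illustrative:
# Untested: OR together one eq() per frame number so one pass selects them all.
FRAMES="0 6 12 18 200 206 212"
expr=""
for f in $FRAMES; do
    if [ -z "$expr" ]; then
        expr="eq(n\,$f)"
    else
        expr="$expr+eq(n\,$f)"
    fi
done
# Single decode pass; outputs come out numbered 1, 2, 3, ... in list order.
ffmpeg -i 1.mp4 -vf "select='$expr'" -vsync 0 -an -y -q:v 16 tmp_%d.png
# Map the sequential numbers back to the requested frame numbers.
i=1
for f in $FRAMES; do
    mv "tmp_$i.png" "$f.png"
    i=$((i + 1))
done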
I want to convert video to gif with audio. The two should match when played at the same time.
The command I use somehow generates results that are a bit off.
To create gif:
ffmpeg -ss 00:00:10 -i input.mp4 -t 4 -pix_fmt rgb24 -r 10 -filter:v "scale=-1:300" out.gif
To create mp3:
ffmpeg -ss 00:00:10 -i input.mp4 -t 4 out.mp3
I'm guessing this has something to do with the slicing.
Untested: You could try one of the two options below. If it still doesn't work, please provide a short clip (MP4 link) that can be tested to give you the required solution...
Option 1) Try using -itsoffset instead of -t...
ffmpeg -ss 00:00:10 -itsoffset 4 -i input.mp4 -pix_fmt rgb24 -r 10 -filter:v "scale=-1:300" out.gif
ffmpeg -ss 00:00:10 -itsoffset 4 -i input.mp4 out.mp3
Option 2) Avoid the issue of non-matching times for keyframes (of the video track vs the audio track)...
First trim the video (this grabs whatever audio is available at the video keyframe nearest to your time):
ffmpeg -ss 00:00:10 -i input.mp4 -t 4 trimmed.mp4
Then use the trimmed MP4 (which will have synced audio) as the source for your output GIF and MP3:
ffmpeg -i trimmed.mp4 -pix_fmt rgb24 -r 10 -filter:v "scale=-1:300" out.gif
ffmpeg -i trimmed.mp4 out.mp3
I'm trying to extract keyframes from a large video I have. The problem I'm seeing is that it is extracting far too many, leaving me with many very similar images.
Below is what I am currently using (from terminal)
ffmpeg -i video.mov -vf "select=eq(pict_type\,I)" -vsync vfr thumb%04d.png -hide_banner
What would be great is if there were a way to make it output only 1 in every 5 keyframes. Even better would be a way to make it output a frame only if it is over x% different from the previous one.
1 in 5 keyframes:
ffmpeg -i video.mov -vf "select=eq(pict_type\,I),select='not(mod(n\,5))'" -vsync vfr thumb%04d.png
frame is over x% different from the previous one:
ffmpeg -i video.mov -vf "select=eq(pict_type\,I),select='gt(scene\,x/100)'" -vsync vfr thumb%04d.png
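For example (untested), to keep only those keyframes whose scene-change score against the previously kept keyframe exceeds 0.3 (roughly "30% different"), substitute x = 30:
ffmpeg -i video.mov -vf "select=eq(pict_type\,I),select='gt(scene\,0.3)'" -vsync vfr thumb%04d.png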
I am trying to create a video output from multiple video cameras.
Following the example given here Presenting more than 2 videos using FFmpeg
and other similar examples.
but I'm getting the error
Output pad "default" for the filter "src" of type "buffer" not connected to any destination
when I run
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih[a];[a][1:0]overlay=w[b];[b][2:0]overlay=w:h" -shortest output.mp4
I'm not really sure what this means or how to fix it.
Any help would be greatly appreciated!
Thanks.
When using the pad filter, you have to specify the size of the output image and also where within it you want to place the input image:
[0:0]pad=iw*2:ih:0:0
Tested under Windows 7 with files of the same size:
ffmpeg -i out.avi -i out.avi -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
And with a webcam capture (vfwcap) plus a still picture (as I have only one webcam). BTW, you can see how to scale one of the sources to fit the target (just in case your sources have different resolutions):
ffmpeg -y -f vfwcap -r 10 -i 0 -loop 1 -i photo.jpg -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[1:0]scale=640:480[b];[a][b]overlay=w" -shortest output.mp4
Under Linux:
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
If it doesn't work, test a simple recording of video1 and then of video0, and check their properties (type, resolution, fps):
ffmpeg -i /dev/video1 -shortest output1.mp4
ffmpeg -i output1.mp4
If you still have issues, update your question with the ffmpeg console output (as text) for the video1 and video0 captures, and also for the call with the overlay.
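An untested alternative for the property check: ffprobe can print the codec, resolution and frame rate of the video stream directly, e.g.
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height,r_frame_rate -of default=noprint_wrappers=1 output1.mp4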