I'm trying to create a timeline of images for a video using FFMPEG.wasm. The times I want to pass in are at intervals of 1/10 of the video's total duration.
I tried:
const duration = video.current.duration;
const interval = duration / 10; // capture one frame every `interval` seconds
// this is the command I used on the CLI for a 1-hour video:
ffmpeg -skip_frame nokey -i temp.mp4 -vf "scale=180:-1,fps=1/600" out%d.png
but the output is really slow for long videos, because FFmpeg has to step through the entire file.
Is there a command that's like:
ffmpeg -ss "600, 1200, 1800 ... 6000" -i temp.mp4 -vf scale=180:-1 -frames:v 1 out%d.png
where it outputs an image only at those specific times, without having to run the FFmpeg command 10 times in a loop?
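There is no form of -ss that accepts a list of times, but a loop does not have to be slow: when -ss is placed before -i, FFmpeg seeks by keyframe instead of decoding everything up to that point, so ten short runs are typically much faster than one full pass. A minimal sketch, assuming a POSIX shell and the ten timestamps from the example:
i=1
for t in 600 1200 1800 2400 3000 3600 4200 4800 5400 6000; do
  ffmpeg -ss "$t" -i temp.mp4 -vf scale=180:-1 -frames:v 1 "out$i.png"
  i=$((i+1))
done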
Related
I have a folder of exactly 300 images in png format (labelled 1.png, 2.png, ..., 300.png), which I'm trying to convert to a video. I would like the video to be in the webm format, but there seems to be an issue:
using the following command:
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 300 -r 30 out.webm
does generate an out.webm file, and, according to ffprobe -select_streams v -count_frames -show_entries stream=nb_read_frames,r_frame_rate out.webm (presumably an inefficient way to get that information, but that's beside the point), it does contain 300 frames at a frame rate of exactly 30/1. However, instead of the expected 10 seconds (300 frames played at 30 fps), the video lasts slightly longer (about 12 seconds).
This discrepancy seems to scale with video length: 900 frames converted the same way, at the same frame rate, yield a 36-second (instead of 30-second) video.
For testing, I also tried generating an mp4 file instead of a webm one, using the same command but with out.mp4 instead of out.webm, and that worked exactly as expected: out.mp4 was a 10-second video.
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 300 -r 30 out.mp4
How do I fix this? Is my ffmpeg command off, or is this a bug in the tool?
The documentation (https://www.ffmpeg.org/ffmpeg.html) has an example:
For creating a video from many images:
ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
and
To force the frame rate of the input file (valid for raw formats only) to 1 fps and the frame rate of the output file to 24 fps:
ffmpeg -r 1 -i input.m2v -r 24 output.avi
and also
As an input option, ignore any timestamps stored in the file and
instead generate timestamps assuming constant frame rate fps. This is
not the same as the -framerate option used for some input formats like
image2 or v4l2 (it used to be the same in older versions of FFmpeg).
If in doubt use -framerate instead of the input option -r.
For your case:
ffmpeg -framerate 30 -i ./frames/%d.png output.webm
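To confirm the fix, ffprobe can print the container duration directly; with 300 frames at 30 fps it should now report roughly 10 seconds:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 output.webm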
I have more than a thousand images that I want to turn into a 3-minute video. I tried using this line:
ffmpeg -r 30 -i "E:/White-box-Cartoonization/test_code/cartoonized_images/$flower%03d.bmp" -c:v libx264 -pix_fmt yuv420p out.mp4
It worked, but it creates only a 5-second video. What do I need to do to turn it into a full-length 3-minute video?
If you have 1250 images and want an output duration of 180 seconds:
ffmpeg -framerate 1250/180 -i input%03d.bmp -c:v libx264 -vf format=yuv420p output.mp4
This example results in a frame rate of about 6.94 fps. Some players can't handle such low frame rates, so if your player does not like it, add the -r output option to get a normal output frame rate. ffmpeg will duplicate frames, but the output will look the same.
ffmpeg -framerate 1250/180 -i input%03d.bmp -c:v libx264 -vf format=yuv420p -r 25 output.mp4
For 3 minutes of video at 30 frames per second (-r parameter) you'd need 30*60*3 images: 5400 images.
Your source parameter specifies there would be only 3 digits, so you have a maximum of 1000 source images:
$flower%03d.bmp => $flower000.bmp .. $flower999.bmp
1000 images at 30 frames per second should give about 30 seconds of video ... if you actually have $flowerxxx.bmp files.
You might need a 4th digit in there somewhere.
$flower%04d.bmp
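Putting the two answers together, a sketch (not from the original posts) that spreads roughly 1250 four-digit-numbered images over the full 180 seconds; the 1250/180 rate comes from the answer above, so substitute your actual image count:
ffmpeg -framerate 1250/180 -i '$flower%04d.bmp' -c:v libx264 -pix_fmt yuv420p -r 25 out.mp4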
I'm capturing video from 4 cameras connected over HDMI through a capture card. I'm using ffmpeg to save the video feed from the cameras to multiple jpeg files (30 jpegs per second per camera).
I want to be able to save the images with the capture time. Currently I'm using this command for one camera:
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -i /dev/video0 -c:v mjpeg -t 60 -ts_from_file 2 camera0-%5d.jpeg
It saves the files with the names camera0-00001.jpeg, camera0-00002.jpeg, etc.
Then I rename each file to camera0-HH-mm-ss-(1-30).jpeg based on the file's modified time.
So in the end I have 4 files with the same time and same frame like this:
camera0-12-00-00-1.jpeg
camera1-12-00-00-1.jpeg
camera2-12-00-00-1.jpeg
camera3-12-00-00-1.jpeg
My issue is that the files may be offset by one or two frames: they may have the same name, but sometimes one or two of the cameras show a different frame.
Is there a way to be sure that the captured frames carry the actual capture time rather than the file's creation time?
You can use the mkvtimestamp_v2 muxer
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -copyts -i /dev/video0 \
-vf setpts=PTS-STARTPTS -vsync 0 -vframes 1800 camera0-%5d.jpeg \
-c copy -vsync 0 -vframes 1800 -f mkvtimestamp_v2 timings.txt
timings.txt will have output like this
# timecode format v2
1521177189530
1521177189630
1521177189700
1521177189770
1521177189820
1521177189870
1521177189920
1521177189970
...
where each reading is the Unix epoch time in milliseconds.
I've switched to a frame-count limit (-vframes 1800) to stop the process instead of -t 60. You can use -t 60 for the first output, since timestamps are reset there, but not for the second. If you do that, remember to use only the first N entries from the text file, where N is the number of images produced.
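To get from timings.txt back to the time-based filenames the question asks for, one option is to pair line N of the file with frame N of the jpeg sequence. A minimal sketch, assuming bash and the zero-padded camera0-%5d.jpeg names described above:
n=1
grep -v '^#' timings.txt | while read -r ts; do
  src=$(printf 'camera0-%05d.jpeg' "$n")
  [ -f "$src" ] || break
  mv "$src" "camera0-$ts.jpeg"  # ts is the Unix epoch time in milliseconds
  n=$((n+1))
done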
I would like ffmpeg to do the following:
read an input mp4 (-i movie.mp4)
skip the first 5 seconds (-ss 5)
find scene changes and display the frame numbers (-vf "select='gt(scene\,0.4)',showinfo")
output #1 - a gif file (output.gif)
output #2 - a contact sheet with all the thumbnails (-vf "scale=320:-1,tile=12x200" thumbnails.png)
This will generate the thumbnails:
ffmpeg -hide_banner -i d:/Test/movie01.mp4 -ss 5 -vf "select=gt(scene\,0.4), showinfo, scale=320:-1, tile=12x200" -vsync 0 thumbnails%03d.png
this will generate the gif:
ffmpeg -hide_banner -i d:/Test/movie01.mp4 -ss 5 -vf "select='not(mod(n,60))',setpts='N/(30*TB)', scale=320:-1" -vsync 0 output.gif
I would like to do both at once with 2 more features:
set fps and resolution for the gif; I would like the gif to represent the whole movie in X seconds, at Y fps (I know the duration of the input movie so I can calculate how often a frame needs to be captured)
set the width only for the thumbnail picture (tile=12 for example) and let ffmpeg determine the appropriate height
I have tried to compose a command line from what I read on this page: https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs, using the split/map commands, but I couldn't get it to work.
Use
ffmpeg -ss 5 -i input.mp4 \
-vf "select='not(mod(n,60))',setpts=N/Y/TB,scale=320:-1" -r Y output.gif \
-vf "select='gt(scene\,0.4)',showinfo,scale=320:-1,tile=12x200" -vsync 0 thumbnails%03d.png
tile requires both W and H to be set.
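For example, with a hypothetical Y of 10 fps and the every-60th-frame sampling from the question, the combined command becomes:
ffmpeg -ss 5 -i input.mp4 \
-vf "select='not(mod(n,60))',setpts=N/10/TB,scale=320:-1" -r 10 output.gif \
-vf "select='gt(scene\,0.4)',showinfo,scale=320:-1,tile=12x200" -vsync 0 thumbnails%03d.png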
I'd like to programmatically create a video file that is composed of a series of images. However, I'd also like to be able to specify a duration for each image. I often see ffmpeg examples suggested for similar tasks, but they always assume the same duration for each image. Is there an efficient way to accomplish this? (An inefficient solution might be setting the frame rate to something high and repeatedly copying each image until it matches the intended duration)
I will be dynamically generating each of the images as well, so if there is a way to encode the image data into video frames without writing each image to disk, that's even better. This, however, is not a requirement.
Edit: To be clear, I don't necessarily need to use ffmpeg. Other free command-line tools are fine, as are video-processing libraries. I'm just looking for a good solution.
I was able to solve the exact same problem with the following commands.
-vframes is set to the number of seconds * fps.
In the example the first video has 100 frames (100 frames / 25 fps = 4 seconds) and the second one has 200 frames (8 seconds).
ffmpeg -f image2 -loop 1 -r 25 -i a.jpg -vframes 100 -vcodec mpeg4 a.avi
ffmpeg -f image2 -loop 1 -r 25 -i b.jpg -vframes 200 -vcodec mpeg4 b.avi
mencoder -ovc copy -o out.avi a.avi b.avi
The mencoder step is the same as in d33pika's answer below.
You can use the concat demuxer to manually order images and to provide a specific duration for each image.
ffmpeg -f concat -i input.txt -vsync vfr -pix_fmt yuv420p output.mp4
Your input.txt should look like this.
file '/path/to/dog.png'
duration 5
file '/path/to/cat.png'
duration 1
file '/path/to/rat.png'
duration 3
file '/path/to/tapeworm.png'
duration 2
file '/path/to/tapeworm.png'
The last file is listed a second time because of a concat demuxer quirk: otherwise the duration of the final image is not honored. You can write this txt file dynamically according to your needs and execute the command.
For more info refer to https://trac.ffmpeg.org/wiki/Slideshow
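As a sketch of the "write this txt file dynamically" step, assuming a recent bash and hypothetical parallel lists of files and per-image durations:
files=(/path/to/dog.png /path/to/cat.png /path/to/rat.png)
durs=(5 1 3)
: > input.txt
for i in "${!files[@]}"; do
  printf "file '%s'\nduration %s\n" "${files[$i]}" "${durs[$i]}" >> input.txt
done
# the concat demuxer ignores the duration on the final entry,
# so list the last file once more
printf "file '%s'\n" "${files[-1]}" >> input.txt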
It seems like there is no way to have different durations for different images using ffmpeg. I would create separate videos for each of the images and then concat them using mencoder like this:
ffmpeg -f image2 -loop 1 -i a.jpg -vframes 30 -vcodec libx264 -r 1 a.mp4
ffmpeg -f image2 -loop 1 -i b.jpg -vframes 10 -vcodec libx264 -r 1 b.mp4
mencoder -ovc copy -o out.mp4 a.mp4 b.mp4
mencoder needs all the videos being concatenated to have the same resolution, framerate, and codec.
Here a.mp4 has 30 frames lasting 30 seconds (at 1 fps) and b.mp4 has 10 frames lasting 10 seconds.