I'm trying to generate videos with high-precision durations, but the resulting files always end up with an approximated duration that loses the milliseconds.
E.g. ffmpeg -y -loop 1 -i "blank.png" -tune stillimage -t 9.983002003392 test.mkv
This example is supposed to give the file a duration of 9.983002003392 seconds, but the fractional part gets lost: when I check the duration using ffprobe test.mkv -show_entries format=duration -v quiet -of csv="p=0", it shows 10.000000 instead.
This doesn't happen when generating audio files instead, like ffmpeg -y -f lavfi -i anullsrc=sample_rate=48000 -t 9.983002003392 test.wav
When I check its duration it shows 9.98, which is not the full precision I asked for, but at least it isn't rounded to whole seconds.
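For reference, checking the wav file uses the same ffprobe invocation as for the video above, just pointed at test.wav:
ffprobe test.wav -show_entries format=duration -v quiet -of csv="p=0"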
The durations then get further mangled when I concat the videos and mix audio files with the videos, but I suppose that's a separate problem related to the differing settings of each file.
My question is: How can I generate a video with a high precision duration without approximating it to just seconds?
Related
I have a folder of exactly 300 images in png format (labelled 1.png, 2.png, ..., 300.png), which I'm trying to convert to a video. I would like the video to be in the webm format, but there seems to be an issue:
using the following command:
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 300 -r 30 out.webm
does generate an out.webm file, and according to ffprobe -select_streams v -count_frames -show_entries stream=nb_read_frames,r_frame_rate out.webm (presumably quite an inefficient way to get that information, but that's beside the point), it does contain 300 frames and has a framerate of exactly 30/1. However, instead of the expected exactly 10 seconds (300 frames played at 30 fps), the video lasts slightly longer (about 12 seconds).
This discrepancy does seem to scale with video length; 900 frames converted the same way at the same frame rate yield a 36-second (instead of 30-second) video.
For testing, I also tried generating an mp4 file instead of a webm one, with the following command (exactly the same as above, but with out.mp4 instead of out.webm), and that worked exactly as expected: out.mp4 was a 10-second-long video.
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 300 -r 30 out.mp4
How do I fix this? Is my ffmpeg command off, or is this a bug in the tool?
The documentation (https://www.ffmpeg.org/ffmpeg.html) has these examples:
For creating a video from many images: ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
and
To force the frame rate of the input file (valid for raw formats only) to 1 fps and the frame rate of the output file to 24 fps: ffmpeg -r 1 -i input.m2v -r 24 output.avi
and also
As an input option, ignore any timestamps stored in the file and
instead generate timestamps assuming constant frame rate fps. This is
not the same as the -framerate option used for some input formats like
image2 or v4l2 (it used to be the same in older versions of FFmpeg).
If in doubt use -framerate instead of the input option -r.
For your case, the result is:
ffmpeg -framerate 30 -i ./frames/%d.png output.webm
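To confirm the fix, you can reuse the ffprobe duration query that appears elsewhere in this thread (a sketch; it assumes the output file is named output.webm as above):
ffprobe -v error -show_entries format=duration -of default=nokey=1:noprint_wrappers=1 output.webm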
I want to use ffmpeg to convert a sequence of images to a video. The images arrive in real time, and the interval between them varies: the next image may arrive in 1 second or even 1 millisecond.
I want the target video at a specific fps (like 100). My current implementation is a loop that feeds ffmpeg the last image and then sleeps (for 10 ms, say).
Do you guys know of any options that would let ffmpeg fill in frames automatically?
If such an option exists, I also wonder whether it's possible to make the video's real fps half of what it claims.
My ffmpeg command looks like this:
ffmpeg -f image2pipe -r 100 -i pipe:0 -f flv -r 100 pipe:1
You can use
ffmpeg -f image2pipe -use_wallclock_as_timestamps 1 -i pipe:0 -f flv -vsync cfr -r 100 pipe:1
FFmpeg will set each incoming frame's timestamp to the time it is received. Since the output rate is set and the mode is constant frame rate, ffmpeg will duplicate the last frame until the next input frame is received, or drop frames if two input frames arrive less than 10 ms apart. Change -r to 1000 to keep frames only a millisecond apart.
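As a side note, recent FFmpeg releases (5.1 and later) deprecate -vsync in favour of -fps_mode, so on those builds the equivalent command would presumably be (an untested sketch, same assumptions as above):
ffmpeg -f image2pipe -use_wallclock_as_timestamps 1 -i pipe:0 -f flv -fps_mode cfr -r 100 pipe:1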
To get a thumbnail image from halfway through the video I can do ffmpeg -ss 100 -i /tmp/video.mp4 -frames:v 1 -s 200x100 image.jpg. Using -ss 100, it grabs a thumbnail at 100 seconds (which would be halfway through, assuming the video is 200 seconds long).
But if I don't know the exact length of the video, in my application code I would need to use something like ffprobe to first determine the length of the video, and then divide it by 2 to get the thumbnail time.
Is there a way to have ffmpeg grab the thumbnail at a percentage of the video? So instead of specifying -ss 100, something like -ss 50% or -ss 20% to get a thumbnail from halfway or 20% into the file?
I know I can do this through application code, but it would be more efficient if there's a way for ffmpeg to handle this itself.
It's not pretty but:
ffmpeg -y -i logo-13748357.mp4 -vf "select=gte(n\,$(shuf -i 1-$(ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 logo-13748357.mp4) -n 1))" -vframes 1 out_img.png
The above command is a one-liner that grabs the number of frames in the video and passes it to the shuf command (https://linux.die.net/man/1/shuf), which randomly selects a number in that range; the result is then passed to ffmpeg's select filter.
So the above can randomly select a frame from a video in a single line.
You might be able to adapt this approach to your specific goal. Presuming of course it still matters seven years down the track.
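If the one-liner is hard to follow, the same pipeline can be split into steps (a sketch; assumes bash, and that the container reports nb_frames for the same input file):
# count the frames in the video
frames=$(ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 logo-13748357.mp4)
# pick a random frame number in that range
pick=$(shuf -i 1-"$frames" -n 1)
# extract that frame as an image
ffmpeg -y -i logo-13748357.mp4 -vf "select=gte(n\,$pick)" -vframes 1 out_img.png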
I want to convert video to images, do some image processing and convert images back to video.
Here are my commands:
./ffmpeg -r 30 -i $VIDEO_NAME "image%d.png"
./ffmpeg -r 30 -y -i "image%d.png" output.mpg
But the output.mpg video has some artefacts, like the ones you see in JPEGs.
Also, I don't know how to determine the fps; I set fps=30 (-r 30).
When I use the first command above without -r, it produces a huge number of images (over a million), but when I use the -r 30 option it produces the same number of images as this frame-counting command:
FRAME_COUNT=`./ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 $VIDEO_NAME`
So my questions are:
How do I determine the frame rate?
How do I convert images to video without reducing the initial quality?
UPDATE:
This seems to have helped, after I removed the -r option:
Image sequence to video quality
so the resulting command is:
./ffmpeg -y -i "image%d.png" -vcodec mpeg4 -b $BITRATE output_$BITRATE.avi
but I'm still not sure how to select a bitrate.
How can I see the bitrate of the original .mp4 file?
You can use the qscale parameter instead of bitrate e.g.
ffmpeg -y -i "image%d.png" -vcodec mpeg4 -q:v 1 output_1.avi
q:v is short for qscale:v. A value of 1 may produce files that are too large; 4-6 is a decent range to use.
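As for the two remaining questions above (the original file's bitrate, and its frame rate), ffprobe can report both directly; a sketch, assuming the original file is named input.mp4:
ffprobe -v error -show_entries format=bit_rate -of default=nokey=1:noprint_wrappers=1 input.mp4
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=nokey=1:noprint_wrappers=1 input.mp4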
I have a bunch of videos which are rather long, so I take screenshots at the 10th second (-ss 00:00:10). Sometimes the videos are very short, like 5 seconds, and -ss 00:00:10 fails.
I don't have a way to compute the video length, as I don't have a way to download the files in full (the videos are hosted on S3 and consumed as streams through CloudFront).
Maybe there are some built-in options that I overlooked?
What I really don't want to do is gradually shorten the -ss value on failures, so that would be the last resort.
One-liner:
ffprobe -show_entries format=filename,duration -of default=noprint_wrappers=1:nokey=1 /path/to/input/file -loglevel 0 | awk 'BEGIN {RS="";FS="\n"}{system("ffmpeg -ss "$2/2" -i "$1" -vframes 1 out.png") }'
meaning:
use ffprobe to get the file duration in seconds, then pipe to awk, which runs the ffmpeg frame-extraction command with a seek time equal to duration/2.
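The same logic may be easier to read with a shell variable instead of awk (a sketch; assumes a shell with bc available, and the same input path as above):
# half the duration, computed in the shell rather than in awk
duration=$(ffprobe -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 -loglevel 0 /path/to/input/file)
ffmpeg -ss "$(echo "$duration / 2" | bc -l)" -i /path/to/input/file -vframes 1 out.png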