Sync file timestamps with ffmpeg

I'm capturing video from 4 cameras connected over HDMI through a capture card. I'm using ffmpeg to save the video feed from the cameras to multiple JPEG files (30 JPEGs per second per camera).
I want to be able to save the images with the capture time. Currently I'm using this command for one camera:
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -i /dev/video0 -c:v mjpeg -t 60 -ts_from_file 2 camera0-%05d.jpeg
It saves the files with names camera0-00001.jpeg, camera0-00002.jpeg, etc.
Then I rename each file to camera0-HH-mm-ss-(1-30).jpeg based on the file's modification time.
So in the end I have 4 files with the same time and the same frame number, like this:
camera0-12-00-00-1.jpeg
camera1-12-00-00-1.jpeg
camera2-12-00-00-1.jpeg
camera3-12-00-00-1.jpeg
My issue is that the files may be offset by one or two frames. They may have the same name, but sometimes one or two cameras show a different frame.
Is there a way to make sure the captured frames carry the actual capture time rather than the file creation time?

You can use the mkvtimestamp_v2 muxer
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -copyts -i /dev/video0 \
-vf setpts=PTS-STARTPTS -vsync 0 -vframes 1800 camera0-%05d.jpeg \
-c copy -vsync 0 -vframes 1800 -f mkvtimestamp_v2 timings.txt
timings.txt will have output like this:
# timecode format v2
1521177189530
1521177189630
1521177189700
1521177189770
1521177189820
1521177189870
1521177189920
1521177189970
...
where each reading is the Unix epoch time in milliseconds.
I've switched to an output frame count limit (-vframes 1800) to stop the process instead of -t 60. You can use -t 60 for the first output, since we are resetting timestamps there, but not for the second. If you do that, remember to use only the first N entries from the text file, where N is the number of images produced.
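For example, a minimal renaming sketch along those lines, assuming bash, GNU date, and the camera0-%05d.jpeg names from above (the trailing index here is the global frame number, not the question's per-second 1-30 counter):
n=1
tail -n +2 timings.txt | while read -r ms; do      # skip the "# timecode format v2" header
  img=$(printf 'camera0-%05d.jpeg' "$n")
  [ -e "$img" ] || break                           # use only the first N entries
  ts=$(date -d "@$((ms / 1000))" +%H-%M-%S)        # epoch milliseconds -> HH-MM-SS
  mv "$img" "camera0-$ts-$n.jpeg"
  n=$((n + 1))
done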

Related

webm files created with ffmpeg are too long

I have a folder of exactly 300 images in png format (labelled 1.png, 2.png, ..., 300.png), which I'm trying to convert to a video. I would like the video to be in the webm format, but there seems to be an issue:
using the following command:
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 300 -r 30 out.webm
does generate an out.webm file, and according to ffprobe -select_streams v -count_frames -show_entries stream=nb_read_frames,r_frame_rate out.webm (presumably an inefficient way to get that information, but that's beside the point) it contains 300 frames at a frame rate of exactly 30/1. However, instead of the expected 10 seconds (300 frames played at 30 fps), the video lasts slightly longer (about 12 seconds).
This discrepancy seems to scale with video length: 900 frames converted the same way at the same frame rate yield a 36-second (instead of 30-second) video.
For testing, I also tried generating an mp4 file instead of a webm one, with the following command (exactly the same as above, but out.mp4 instead of out.webm), and that worked exactly as expected: out.mp4 was a 10-second video.
ffmpeg -start_number 1 -i ./frames/%d.png -frames:v 300 -r 30 out.mp4
How do I fix this? Is my ffmpeg command off, or is this a bug in the tool?
The documentation ( https://www.ffmpeg.org/ffmpeg.html ) has an example:
For creating a video from many images:
ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi
and
To force the frame rate of the input file (valid for raw formats only) to 1 fps and the frame rate of the output file to 24 fps:
ffmpeg -r 1 -i input.m2v -r 24 output.avi
and also
As an input option, ignore any timestamps stored in the file and instead generate timestamps assuming constant frame rate fps. This is not the same as the -framerate option used for some input formats like image2 or v4l2 (it used to be the same in older versions of FFmpeg). If in doubt use -framerate instead of the input option -r.
For your case:
ffmpeg -framerate 30 -i ./frames/%d.png output.webm
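If the fix works, ffprobe (which the question already uses) should report the expected duration; a quick check, assuming the out.webm name from above:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 out.webm
For 300 frames at 30 fps this should print a duration of 10 seconds.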

How to make ffmpeg automatically fill frames?

I want to use ffmpeg to convert a sequence of images to a video. The images arrive in real time, and the interval between them varies; the next image may arrive in 1 second or even 1 millisecond.
I want the target video at a specific fps (like 100). My current implementation is a loop that feeds ffmpeg the last image and then sleeps (for about 10 ms).
Is there an option that lets ffmpeg fill in frames automatically?
If such an option exists, I also wonder whether it's possible to make the video's real fps half of what it claims.
My ffmpeg command looks like this:
ffmpeg -f image2pipe -r 100 -i pipe:0 -f flv -r 100 pipe:1
You can use
ffmpeg -f image2pipe -use_wallclock_as_timestamps 1 -i pipe:0 -f flv -vsync cfr -r 100 pipe:1
FFmpeg will set each incoming frame's timestamp to the time it is received. Since the output rate is set and the mode is constant frame rate, ffmpeg will duplicate the last frame until the next input frame is received, or drop frames if two arrive less than 10 ms apart. Change -r to 1000 to keep frames only a millisecond apart.
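For example, a minimal sketch of the producing side of that pipe, where grab_frame stands in for whatever real-time capture step produces each image (a hypothetical placeholder, not a real command):
while :; do
  grab_frame latest.jpg   # hypothetical: writes one JPEG whenever a frame is ready
  cat latest.jpg          # push it into the pipe; ffmpeg stamps it with the wallclock time
done | ffmpeg -f image2pipe -use_wallclock_as_timestamps 1 -i pipe:0 -f flv -vsync cfr -r 100 pipe:1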

FFmpeg Slideshow issues

Trying to get my head around ffmpeg to create a slideshow where each image is displayed for ~5 seconds with some audio. I created a bat file to run the following so far:
ffmpeg -f image2 -i image-%%03d.jpg -i music.mp3 output.mpg
It shows all the images very quickly in the first second of the video, then plays out the rest of the audio while showing the last image.
I want the images to stay up longer (about 5 seconds) and the video to stop after the last frame (not playing the rest of the song). Are either of these things possible? I could hack it, I guess, by having hundreds of copies of the same image in order to keep each one up longer, but this is far from ideal!
Thanks
The default encoder for mpg output, mpeg1video, is strict about the allowed frame rates, so an input and an output -r are required:
ffmpeg -r 1/5 -i image-%03d.jpg -i music.mp3 -r 25 -qscale:v 2 -shortest -codec:a copy output.mpg
The input images will have a frame rate of 1 frame every 5 seconds and the output will duplicate frames to reach 25 frames per second.
-f image2 is generally not required.
-qscale:v can control output quality. A sane range is 2-5.
-shortest will make the output duration the same as the shortest input duration.
-codec:a copy copies your MP3 audio instead of re-encoding it.
MPEG-1 video has more modern alternatives. See the FFmpeg and x264 Encoding Guide for more info.
Also see:
* FFmpeg FAQ: How do I encode single pictures into movies?
* FFmpeg Wiki: Create a video slideshow from images
You could use the fps filter instead of an output frame rate:
ffmpeg -r 1/5 -i img%03d.png -i musicfile -c:v libx264 -vf fps=25 -pix_fmt yuv420p out.mp4
Strangely, though, this skips the last image for me.
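A common workaround for that dropped last image is the concat demuxer with explicit per-image durations, listing the final file twice (a sketch, assuming three png slides and the same music file):
list.txt:
file 'img001.png'
duration 5
file 'img002.png'
duration 5
file 'img003.png'
duration 5
file 'img003.png'
ffmpeg -f concat -i list.txt -i musicfile -c:v libx264 -vf fps=25 -pix_fmt yuv420p -shortest out.mp4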

ffmpeg frame grabbing slow on mp4 files

The following ffmpeg frame-grab command takes a long time to grab an image from an mp4 file.
ffmpeg.exe -itsoffset -200 -i C:\93844428.mp4 -vcodec mjpeg -vframes 1 -y -an -f rawvideo -s 640x360 C:\test\out1.jpg
For a 20 MB file (about 2 minutes of video) it takes up to about 6 seconds to find the image, depending on what offset (in seconds) you ask to grab from.
For a 100 MB file it can take many minutes if you request a large offset.
This only appears to be an issue with mp4 files.
Is there anything that can be done to improve this?
This logic is an inefficient way to do a frame grab. Don't use -itsoffset. If you want the frame at a particular location, use the -ss switch to set the time offset you want the frame from.
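For example, the same grab with input seeking (placing -ss before -i seeks by keyframe, which is fast; paths and size taken from the question):
ffmpeg.exe -ss 200 -i C:\93844428.mp4 -frames:v 1 -q:v 2 -s 640x360 -y C:\test\out1.jpg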

ffmpeg images-to-video script anyone? [duplicate]

I'm wanting to take a bunch of images and make a video slideshow out of them. There'll be an app for that, right? Yup, quite a few it seems. The problem is I want the slides synced to a piece of music, and all the apps I've seen only allow you to show each slide for a multiple of a whole second. I want them to show for multiples of 1.714285714 seconds to fit with 140 bpm.
The tools I've seen generally seem to have ffmpeg under the hood, so presumably this kind of thing could be done with a script. But ffmpeg has sooo many options...I'm hoping someone will have something close.
I'll have up to about 100 slides; the ones that have to show for 3.428571428 secs or whatever, I guess I can simply show twice.
For very recent versions of ffmpeg (roughly from the end of year 2013)
The following will create a video slideshow (using the libx264 video codec, or the WebM format) from all the png images in the current directory. The command accepts image names numbered and ordered in a series (img001.png, img002.png, img003.png) as well as a random bunch of images.
(each image will have a duration of 5 seconds)
ffmpeg -r 1/5 -pattern_type glob -i '*.png' -c:v libx264 out.mp4 # x264 video
ffmpeg -r 1/5 -pattern_type glob -i '*.png' out.webm # WebM video
For older versions of ffmpeg
This will create a video slideshow (using the libx264 video codec, or the WebM format) from a series of png images named img001.png, img002.png, img003.png, …
(each image will have a duration of 5 seconds)
ffmpeg -f image2 -r 1/5 -i img%03d.png -vcodec libx264 out.mp4 # x264 video
ffmpeg -f image2 -r 1/5 -i img%03d.png out.webm # WebM video
You may have to slightly modify the following commands if you have a very recent version of ffmpeg
This will create a slideshow in which each image has a duration of 15 seconds:
ffmpeg -f image2 -r 1/15 -i img%03d.png out.webm
If you want to create a video out of just one image, this will do (output video duration is set to 30 seconds):
ffmpeg -loop 1 -f image2 -i img.png -t 30 out.webm
If you don't have images numbered and ordered in a series (img001.jpg, img002.jpg, img003.jpg) but rather a random bunch of images, you might try this:
cat *.jpg | ffmpeg -f image2pipe -r 1 -vcodec mjpeg -i - out.webm
or for png images:
cat *.png | ffmpeg -f image2pipe -r 1 -vcodec png -i - out.webm
That will read all the jpg/png images in the current directory and write them, one by one, through the pipe to ffmpeg's input, which will produce the video out of them.
Important: All images in a series need to be of the same size (x and y dimensions) and format.
Explanation: By telling FFmpeg to set the input file's FPS option (frames per second) to some very low value, we make FFmpeg duplicate frames at the output, and thus each image is displayed on screen for some time. As you have seen, you can set any fraction as the frame rate; 140 beats per minute would be -r 140/60.
Source: The FFmpeg wiki
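Applied to this question's 140 bpm requirement, with four beats per slide (240/140 ≈ 1.714285714 seconds each), a sketch assuming glob-ordered png slides and a music.mp3 track:
ffmpeg -r 140/240 -pattern_type glob -i '*.png' -i music.mp3 -c:v libx264 -r 25 -pix_fmt yuv420p -shortest out.mp4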
For creating images from a video use
ffmpeg -i video.mp4 img%03d.png
This will create images named img001.png, img002.png, img003.png, …
You can extract images from a video, or create a video from many images:
For extracting images from a video:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will output them in files named 'foo-001.jpeg', 'foo-002.jpeg', etc. Images will be rescaled to fit the new WxH values. If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option, or in combination with -ss to start extracting from a certain point in time.
For creating a video from many images:
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
The syntax foo-%03d.jpeg specifies to use a decimal number composed of three digits padded with zeroes to express the sequence number. It is the same syntax supported by the C printf function, but only formats accepting a normal integer are suitable.
This is an excerpt from the documentation, for more info check on the documentation page of ffmpeg.
I wound up using this:
mencoder "mf://html/*.png" -ovc x264 -mf fps=1.16666667 -o output.avi
and changing the sample rate afterwards in LiVES.
A load more details (and the end result video) at: http://hyperdata.org/hackit/
