I am creating a dashcam library where video files are constantly written to 2 buffers. When an event happens, the most recent buffer is returned. Everything works fine, except that when I try to customize the FPS I see inconsistent behavior.
This is the ffmpeg command I use:
ffmpeg -y -i rtsp://admin:admin@192.168.1.200 -f segment -segment_time 3 -segment_wrap 2 out_%d.mp4
This works as expected and constantly spits out two three-second files, out_0.mp4 and out_1.mp4. The default FPS of the streaming device is 100. When I add the fps parameter like so:
ffmpeg -y -i rtsp://admin:admin@192.168.1.200 -f segment -segment_time 3 -segment_wrap 2 -r 60 out_%d.mp4
I see that one or both of the files are 4 s long and all the frames are identical. When I drop the FPS to 30, the files are at least 8 s long.
What am I doing wrong? How can I ensure that the dumped video files are valid and as long as specified by -segment_time?
The segment muxer, by default, only splits at keyframes, and the default keyframe interval is around 250 frames.
Add -g X, where X is segment_time * fps, to set an appropriate keyframe interval.
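For example, applied to the 60 fps command from the question, X would be 3 * 60 = 180 (a sketch, not a tested command):
ffmpeg -y -i rtsp://admin:admin@192.168.1.200 -r 60 -g 180 -f segment -segment_time 3 -segment_wrap 2 out_%d.mp4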
I'm capturing video from 4 cameras connected with HDMI through a capture card. I'm using ffmpeg to save the video feed from the cameras to multiples jpeg files (30 jpeg per second per camera).
I want to be able to save the images with the capture time. Currently I'm using this command for one camera:
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -i /dev/video0 -c:v mjpeg -t 60 -ts_from_file 2 camera0-%5d.jpeg
It saves my file with the names camera0-00001.jpg, camera0-00002.jpg, etc.
Then I rename my files to camera0-HH-mm-ss-(1-30).jpeg based on the modified time of each file.
So in the end I have 4 files with the same time and the same frame, like this:
camera0-12-00-00-1.jpeg
camera1-12-00-00-1.jpeg
camera2-12-00-00-1.jpeg
camera3-12-00-00-1.jpeg
My issue is that the files may be offset by one or two frames. They may have the same name, but sometimes one or two cameras show a different frame.
Is there a way to be sure that the captured frames carry the actual capture time, and not the creation time of the file?
You can use the mkvtimestamp_v2 muxer:
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -copyts -i /dev/video0 \
-vf setpts=PTS-STARTPTS -vsync 0 -vframes 1800 camera0-%5d.jpeg \
-c copy -vsync 0 -vframes 1800 -f mkvtimestamp_v2 timings.txt
timings.txt will contain output like this:
# timecode format v2
1521177189530
1521177189630
1521177189700
1521177189770
1521177189820
1521177189870
1521177189920
1521177189970
...
where each reading is the Unix epoch time in milliseconds.
I've switched to an output frame-count limit (-vframes 1800) to stop the process, instead of -t 60. You can use -t 60 for the first output, since we reset timestamps there, but not for the second. If you do that, remember to use only the first N entries from the text file, where N is the number of images produced.
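If you then want to stamp the capture time into the filenames, here is a minimal shell sketch. This is my assumption, not part of the answer above: it presumes one line in timings.txt per saved image and zero-padded names like camera0-00001.jpeg.
# Hypothetical helper: rename each frame to its epoch-millisecond timestamp.
# The first line of timings.txt is the "# timecode format v2" header, so skip it.
n=1
tail -n +2 timings.txt | while read -r ts; do
  src=$(printf 'camera0-%05d.jpeg' "$n")
  [ -f "$src" ] && mv "$src" "camera0-${ts}.jpeg"
  n=$((n+1))
done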
I'm trying to overlay a 15-second video transition on the beginning of an image sequence (a PNG sequence with an alpha to reveal the image below), which I can do fine with the overlay filter. But I want to hold the first frame of the image sequence for 5 seconds before playing the animation. I've tried trim/select, but I can't seem to get it to last 5 seconds, and I also can't seem to concat it back with the other video to do the transition. So my questions are:
How do I get the first frame and hold it for 5 seconds? The method below works, but it doesn't seem like the cleanest option:
-framerate 30 -t 60.0 -i input1.%04d.jpg -framerate 30 -t 15.0 -i transition1_%03d.png -filter_complex "color=c=red:d=5:s=480x270:r=30[bg]; [bg][1:v]overlay[transhold]; [0:v][transhold]overlay=repeatlast=0[out]"
How can I concat that with the original before I overlay it on the main video? I can do it with two overlays, offsetting the start of the actual transition by the length of the hold, using the command below, but it seems a bit clunky:
-framerate 30 -t 60.0 -i input1.%04d.jpg -framerate 30 -t 15.0 -i transition1_%03d.png -filter_complex "color=c=red:d=5:s=480x270:r=30[bg]; [1:v]split[trans][transhold]; [trans]setpts=PTS+5/TB[trans]; [transhold]select=eq(n\,0)[transhold]; [bg][transhold]overlay[transhold]; [0:v][transhold]overlay=repeatlast=0[tmp1]; [tmp1][trans]overlay[out]"
This is all part of a larger command where I'm compiling four HD images into a 4K feed, each with its own transition, so the cleaner I can be the better. I'd also like to be able to vary the duration of the hold for the different HD inputs. If I need to, I could bring in the first image as a separate input, but I would still need to concat them. I thought there must be a way to do this with filters...
This was answered in another post:
https://video.stackexchange.com/questions/23551/ffmpeg-extract-first-frame-and-hold-for-5-seconds
-framerate 30 -t 60.0 -i input1.%04d.jpg
-framerate 30 -t 15.0 -i transition1_%03d.png
-filter_complex
"[1]loop=149:1:0[trans];
[0][trans]overlay=eof_action=pass" out.mp4
The first frame of the second input is repeated 149 times, so that there are 150 instances (30 fps x 5 s). The 0 at the end of loop is the starting index of the frame(s) to loop. The middle 1 is the number of frames to loop, starting at the index given in the third argument.
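To vary the hold per input, as the question asks, set the loop count to fps * hold - 1. For example, a 7-second hold at 30 fps needs 210 frames, so loop=209 (a sketch derived from the command above, not a tested variant):
-framerate 30 -t 60.0 -i input1.%04d.jpg -framerate 30 -t 15.0 -i transition1_%03d.png -filter_complex "[1]loop=209:1:0[trans];[0][trans]overlay=eof_action=pass" out.mp4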
I tried everything, so this is my last chance.
I need to make a slideshow using ffmpeg of the following files:
image-1.jpg
image-2.jpg
image-3.jpg
image-4.jpg
image-5.jpg
The input rate will be 1 fps (these are images, so that's normal), but I would like the output rate to be 3 fps. Each image must last 1 minute, for example.
My aim is to have the smallest output file size for this slideshow. I can use any codec I want.
I tried this:
ffmpeg -f image2 -r 1/30 -i image-%d.jpg video.mpg
But ffmpeg says:
MPEG1/2 does not support 5/1 fps
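For reference, one way around this error is to set the per-image duration on the input with -framerate and the 3 fps output rate with -r, using a codec without MPEG-1/2's frame-rate restrictions. A sketch, assuming x264 is acceptable (the question allows any codec); -tune stillimage is an optional x264 tuning for static content:
ffmpeg -framerate 1/60 -i image-%d.jpg -r 3 -c:v libx264 -tune stillimage -pix_fmt yuv420p video.mp4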
I was wondering whether it is at all possible to change the metadata of video segments in ffmpeg as the segments are being created. I know that with the -metadata option you can change the metadata of the -i input video, but if that input video is being split into segments by the -f segment option, how do you change the metadata of the resulting segments while the input is being segmented? I know it's possible to change the metadata after segmenting has completed, but that isn't useful here, since I'm looking to stream the segments live as the input video is being segmented. To give a better description:
ffmpeg -f video4linux2 -s wvga -t ${CAPTURE_DURATION} -i "/dev/video0" -r 30 \
-vcodec ${VID_CODEC} -s:v 640x480 -b:v 80k -keyint_min 30 -g 30 \
-sc_threshold 0 -map 0 -flags -global_header -qcomp 0.8 \
-qmin 10 -qmax 51 -qdiff 4 -f segment -segment_time ${SEG_TIME} \
-segment_format ${SEG_FORMAT} -metadata START=0 -y "${LOCATE}${OUTPUT}%01d.${EXTENSION}"
Essentially what I'm doing is taking video from the standard video input and segmenting it. Once the video segments are created, I can test them by throwing them all into a VLC playlist. When the segment format is "mp4", there is a notable delay between segments: VLC won't start a segment until it has waited out the time at which that segment occurred in the original video. For example, if I have a 30-second video split into 5-second segments, VLC plays the 1st segment immediately, but waits 5 seconds before playing the 2nd segment after the 1st has finished. It does this because the 2nd segment has a start-time metadata of 5 seconds, so VLC thinks it has to wait 5 seconds before playing it. What I'm wondering is whether there's a way to tell ffmpeg to set each segment's start-time metadata to 0 seconds as the segments are being created. Any help would be greatly appreciated.
According to the source code, there is a flag that should do what you want:
{ "reset_timestamps", "reset timestamps at the begin of each segment",
OFFSET(reset_timestamps), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 1, E }
Instead of -metadata START=0, use -reset_timestamps 1, and all your segments will start playing immediately.
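Applied to the command from the question, that would look like this (a sketch reusing the same shell variables):
ffmpeg -f video4linux2 -s wvga -t ${CAPTURE_DURATION} -i "/dev/video0" -r 30 \
-vcodec ${VID_CODEC} -s:v 640x480 -b:v 80k -keyint_min 30 -g 30 \
-sc_threshold 0 -map 0 -flags -global_header -qcomp 0.8 \
-qmin 10 -qmax 51 -qdiff 4 -f segment -segment_time ${SEG_TIME} \
-segment_format ${SEG_FORMAT} -reset_timestamps 1 -y "${LOCATE}${OUTPUT}%01d.${EXTENSION}"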
According to the docs (see below), '-vf thumbnail' should handle batches of N frames and pick 1 frame from each batch, but it doesn't. Am I doing something wrong? I also tried various options with "-vframes 5" and "out%d.png", but I got the same frame repeated many times, even though it did process multiple batches of N frames.
8.37 thumbnail
Select the most representative frame in a given sequence of consecutive frames.
It accepts as argument the frames batch size to analyze (default N=100); in a set of N frames, the filter will pick one of them, and then handle the next batch of N frames until the end.
Since the filter keeps track of the whole frames sequence, a bigger N value will result in a higher memory usage, so a high value is not recommended.
The following example extracts one picture every 50 frames:
thumbnail=50
Complete example of a thumbnail creation with ffmpeg:
ffmpeg -i in.avi -vf thumbnail,scale=300:200 -frames:v 1 out.png
You need to set one more parameter, -vsync (set it to 0 or 2); otherwise the muxer gets the wrong frames, because the default is -vsync 1.
For example, a correct command is:
ffmpeg -i INPUT_FILE -vsync 0 -vf thumbnail,scale=300:200 -frames:v 20 -f image2 img-%04d.jpg
Personally, instead of the thumbnail filter I use an I-frame selector; it generates somewhat more files, but it is more accurate for my purposes.
Here is an example with a timestamp. First, we must find the correct fps of the file (this is the Mac OS X grep dialect) to set the value of r=:
ffmpeg -i INPUT_FILE 2>&1 | grep -Po "[^\s]+\sfps"
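A more portable alternative, if ffprobe is available (a sketch; r_frame_rate is printed as a fraction such as 24000/1001):
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 INPUT_FILE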
You also need to select your own fontfile; I use Mac OS X fonts.
Now everything is ready (e.g., to save the first 20 I-frames):
ffmpeg -i INPUT_FILE -qscale:v 2 -vsync 0 -vf \
drawtext="fontfile=/Library/Fonts/Courier\ New.ttf: \
timecode='00\:00\:00\:00':r=23.98: fontcolor=0xFFFFFFFF:fontsize=18:\
shadowcolor=0x000000EE:shadowx=1:shadowy=1",select='eq(pict_type\,I)'\
-frames:v 20 -f image2 img-%04d.jpg
(Strangely, I get an error with the backslash-split lines, but everything works on a single line.)