I am stuck on a frame-extraction problem with ffmpeg. I pick a given frame's timestamp in video editors like Filmora and Shotcut (both agree up to two decimal places of the second), and then I use that timestamp in ffmpeg to extract all the frames at the native framerate. However, I don't get perfect coherence: the first extracted frame and the corresponding image in the editors and players (VLC, Wondershare Filmora, all showing the same thing) are different.
Please find an example of the command below:
ffmpeg -i "/mnt/sda1/Downloaded_Videos/25mm_Videos/24-12_21/0840.mp4" -ss 00:09:50.18 -to 00:10:49.22 /mnt/sda1/ExtractedFrames/25mm/24Dec_test/frame%5d_0r_0840_00095018_00095018.png
The extracted frame frame_0_0r_0840_00095018_00095018.png is different from the image shown in the editors and in VLC when seeking to timestamp 00:09:50.18.
Thanks for the valuable comment, @Gyan. The frame count does the trick. For example:
00:00:10:20 -> signifies the 20th frame of the 10th second of the video,
whereas
00:00:10.20 -> signifies the frame at the 200th millisecond of the 10th second of the video.
Conversion (you need to know the video's fps): 00:00:10:20 = 00:00:10.((20 * 1000) / fps)
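As a concrete sketch of that conversion (assuming a hypothetical 25 fps source), timecode 00:00:10:20 becomes (20 * 1000) / 25 = 800 milliseconds past the 10-second mark, i.e. timestamp 00:00:10.800, which you can then hand to ffmpeg:
# hypothetical 25 fps video: timecode 00:00:10:20 -> timestamp 00:00:10.800
ffmpeg -ss 00:00:10.800 -i input.mp4 -frames:v 1 frame.png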
I want to record video from a camera, save it to file, and at the same time have access to the last frame recorded.
One idea would be to use ffmpeg's Multiple Outputs functionality where I split the stream into two, one gets saved to file, one spits out the last recorded frame (ideally, the frames won't need to be written to disk, but piped onwards for processing).
What I don't know is how to get ffmpeg to spit out "the last frame" of a stream.
Any ideas?
Output a video and continuously update an image every second
ffmpeg -i input.foo outputvideo.mp4 -r 1 -update 1 image.jpg
Output a video and output a new image every second
ffmpeg -i input.foo outputvideo.mp4 -r 1 images_%04d.jpg
Output will be named images_0001.jpg, images_0002.jpg, etc.
Also see
FFmpeg image muxer documentation for more info and options.
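If, as the question mentions, the frames should be piped onwards for processing rather than written to disk, a hedged variant using the image2pipe muxer could look like this (frame_consumer stands for a hypothetical downstream program):
# save the recording while piping one PNG-encoded frame per second to stdout
ffmpeg -i input.foo outputvideo.mp4 -r 1 -f image2pipe -vcodec png - | frame_consumer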
I have two problems with FFmpeg when I use it to join a DNG file sequence into an mp4 video file. I also need to downscale the video from 6016x3200 to 2030x1080.
First of all, I got an almost black screen in the resulting video. I had to play with the gamma and brightness options, but it was not enough!
New problems:
Something strange happens with the aspect ratio in the resulting video file: in the first frame the aspect is normal, just like in the original picture, but all the remaining frames get squeezed. I can't figure out why this happens (see the attached picture).
The colors are desaturated, despite the fact that I set the saturation option to the maximum value. Also, the first frame of the video is different from the rest (even though the DNG files are all similar, the first being no exception).
I tried the prores codec as well, with the same result.
The command I use is simple:
ffmpeg.exe -start_number 1 -i "K:\video\copter_R%5d.dng" -c:v libx264 -vf "fps=25,format=yuv420p, eq=gamma=3.2:brightness=0.2:contrast=1.6:saturation=3, scale=w=2030:h=1080" e:\output.mp4
I tried different variants of the scale parameter as well, such as scale=-1:1080.
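For reference, that variant dropped into the same command looks like this (only the scale argument changes; -1 tells ffmpeg to derive the width from the 1080 height, which for a 6016x3200 source also comes out at 2030):
ffmpeg.exe -start_number 1 -i "K:\video\copter_R%5d.dng" -c:v libx264 -vf "fps=25,format=yuv420p, eq=gamma=3.2:brightness=0.2:contrast=1.6:saturation=3, scale=-1:1080" e:\output.mp4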
UPDATE: ffmpeg log report for the operation:
https://drive.google.com/file/d/1H6bdpU0Eo4WfR3h-SRtgf7WBNYVFRwz2/view?usp=sharing
As OBS Studio lacks a visual indicator to show how far a video has progressed (and when you need to advance to the next scene), I was wondering if there is a command-line option (or other solution) to get FFmpeg to re-encode the video and draw a progress bar at the bottom that shows how long the video has been playing so far.
Is there such a feature?
Here's a simple example using an animated overlay:
ffmpeg -i input.mp4 -filter_complex "color=c=red:s=1280x10[bar];[0][bar]overlay=-w+(w/10)*t:H-h:shortest=1" -c:a copy output.mp4
What you will have to change:
In the color filter I used 1280 as an example to match the width of input.mp4. You can use ffprobe to get the width, or use the scale2ref filter to resize the bar to match input.mp4.
In the overlay filter I used 10 as an example for the total duration in seconds of input.mp4. You can use ffprobe to get the duration.
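For example, hedged ffprobe one-liners for both values (with input.mp4 as the assumed file name):
# width of the first video stream
ffprobe -v error -select_streams v:0 -show_entries stream=width -of csv=p=0 input.mp4
# container duration in seconds
ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4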
I am trying to make a copy of one of my mp4 movies with the audio intact, but with the video frames blacked out only during the last few minutes. Basically, I want to keep the end-credits music but lose the artifacted video.
I found this answer: which works perfectly for an entire mp4 file (including a test fragment I made of the above ending-credits sequence), but I need it applied, as stated above, to just the end of the whole copied mp4.
In this case I don't want to start blanking the video stream frames until after 2h 7m 30s. I messed around with combinations of the -ss, -start_time and -timecode 02:07:31 params, but I'm an ffmpeg noob and couldn't get it to produce anything but cut-out sections or a fully blanked copy.
Any guidance would be greatly appreciated!
You can use the drawbox filter to black those frames out.
ffmpeg -i in -vf drawbox=c=black:t=fill:enable='gt(t\,7650)' -c:a copy out
This will black out the frames from 7650 seconds (2 × 3600 + 7 × 60 + 30 = 7650, i.e. 2h 7m 30s) onwards.
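If you'd rather not precompute the seconds, the same arithmetic can go directly inside the enable expression, since ffmpeg evaluates it (a sketch with hypothetical in.mp4/out.mp4 names):
ffmpeg -i in.mp4 -vf "drawbox=c=black:t=fill:enable='gt(t\,2*3600+7*60+30)'" -c:a copy out.mp4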
I have some images that were taken from a video via screen capture. I would like to know when in the video these images appear (timestamps). Is there a way to programmatically match an image with a specific frame in a video using ffmpeg or some other tool?
I am very open to different technologies as I'm eager to automate this. It would be extremely time consuming to do this manually.
You can compute the PSNR between that image and each frame in the video; the match is the frame with the highest PSNR. ffmpeg ships a tool to calculate PSNR in tests/tiny_psnr, which you can use to script this together, or there is also a psnr filter in the libavfilter module of ffmpeg if you prefer to code rather than script.
Scripting, you'd basically decode the video to a FIFO, decode the image to a file, and then repeatedly match the FIFO frames against the image file using tiny_psnr, selecting the frame number of the frame with the highest PSNR. The output will be a frame number, which (using the fps output on the command line) you can approximately convert to a timestamp.
Programming-wise, you'd decode the video and the image to AVFrames, use the psnr filter to compare the two, look at the output frame metadata to record the PSNR value in your program, search for the frame with the highest PSNR metadata value, and for that frame AVFrame->pkt_pts would be the timestamp.
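A command-line sketch of the filter route, with hypothetical video.mp4 and capture.png names: loop the still image alongside the video, scale it to the video's frame size with scale2ref, and let the psnr filter write one line per frame to a log file. The line with the highest psnr_avg is the likely match, and its frame number n converts to a timestamp as approximately (n - 1) / fps:
# log per-frame PSNR of the looped still image against every video frame
ffmpeg -i video.mp4 -loop 1 -i capture.png -filter_complex "[1:v][0:v]scale2ref[img][vid];[vid][img]psnr=stats_file=psnr_log.txt:shortest=1" -f null -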