is it possible to use ffmpeg to convert a movie with variable framerate to a sequence of still pics without duplicates?
I tried something like:
ffmpeg -i vid.avi pic%d.png
but this generates thousands of pictures, most of them duplicates. I also tried:
ffmpeg -i vid.avi -r 10 pic%d.png
but I still have lots of duplicates AND some frames are missing
is it possible to specify something like "-r natural"???
TIA
A bit late, but...
ffmpeg -vsync 2 -i <input movie> <output with %d>
will do the trick. For example:
ffmpeg -vsync 2 -i "C:\Test.mp4" "C:\Thumbnails\Test%d.jpg"
will create exactly one jpeg per frame in the C:\Thumbnails folder, and each jpeg will be sequentially numbered with no gaps or duplicates in numbering.
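Note: on recent ffmpeg builds -vsync is deprecated in favour of -fps_mode, and the numeric value 2 has the named equivalent vfr, so the same command in modern spelling should be:
ffmpeg -i "C:\Test.mp4" -fps_mode vfr "C:\Thumbnails\Test%d.jpg"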
I want to edit a small video frame by frame (image by image) without losing quality (or without losing much of it). I have used ffmpeg to split it into images using the following line:
ffmpeg -i test.mp4 $filename%%03d.bmp
This worked fine. I tried merging the images back using several lines including:
ffmpeg -re -f image2 -framerate 30 -i $filename%%03d.bmp -c:v prores_aw -pix_fmt yuv422p10le test.mkv
However, this results in a difference in brightness/contrast between the original and merged videos. The merged file is a bit darker (you have to look closely) than the original file. What can I do to fix this?
Thanks for your time.
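A likely culprit, though only a guess without the files at hand, is the RGB-to-YUV conversion picking an unexpected color matrix or range on the way back; pinning both explicitly when re-encoding may remove the brightness shift:
ffmpeg -framerate 30 -i $filename%%03d.bmp -vf "scale=in_range=pc:out_range=tv:out_color_matrix=bt709" -c:v prores_aw -pix_fmt yuv422p10le -colorspace bt709 -color_range tv test.mkv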
The need
Hello, I need to extract two regions of a .h264 video file via the crop filter into two files. The output videos need to be monochrome, with extension .mp4. The encoding (or format?) should guarantee that video frames are organized monotonically. Finally, I need to get the timestamps for both files (which, I'd bet, are the same timestamps I would get from the input file; see below).
In the end I will be happy to do everything in one command via an elegant one-liner (via a complex filter, I guess), but I start by doing it in multiple steps to break it down into simpler problems.
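For reference, the kind of one-liner I have in mind would split the decoded stream once and crop each branch, roughly like this (I don't know yet whether it solves the problems below):
ffmpeg -y -hide_banner -i inVideo.h264 -filter_complex "[0:v]split=2[l][r];[l]crop=400:ih:260:0,format=gray[lo];[r]crop=400:ih:1280:0,format=gray[ro]" -map "[lo]" outL.mp4 -map "[ro]" outR.mp4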
Along this path I run into many difficulties, and despite having searched in many places I don't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search and the more details I discover, the less I solve.
Below you find some of my attempts to work with the following options:
-filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
-vf showinfo possibly combined with -vsync 0 or -copyts to get the timestamps via stderr redirection &> filename
-c:v mjpeg to force monotonic frame timestamps (are there other ways?)
1. cropping each region and obtaining monochrome videos
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4
The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't say whether it comes from the input file).
EDIT. Maybe it is not frames but packets, as returned by PyAV's .demux() method, that are not monotonic (see below, "instructions how to reproduce this").
I got the advice to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixelated (at least when playing them with ffplay), despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.
EDIT. I acknowledge the advice to specify -q:v 1; this fixes the pixelation effect but produces an even bigger file, ~12x in size. Is that necessary? (see below, "instructions how to reproduce this")
2. getting the timestamps
I found this piece of advice, but I don't want to generate hundreds of image files, so I tried the following:
$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt
The issue here is that I don't get any output because ffmpeg claims it needs an output file.
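It seems the null muxer can serve as that output, so something along these lines should dump the showinfo log without writing a real file (assuming showinfo logs to stderr as usual):
$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -f null - 2>tsL.txt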
The need to produce an output file, and the doubt that the timestamps might have been lost in the previous conversions, lead me back to a first attempt at a one-liner, in which I am also testing the -copyts option and forcing the encoding with -c:v mjpeg as per the advice mentioned above (I don't know if it is in the right position, though):
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt
This does not work because, surprisingly, the output .mp4 I get is the same as the input. If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt
In this case I get the desired timestamp output (too much of it: I will need some way to grab only the pts and pts_time data out of it), but I have to produce a big dummy file. The worst thing is, anyway, that the mjpeg encoding again produces a low-resolution, very pixelated video.
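Paring that log down to just the pts and pts_time fields should be a grep one-liner, assuming the usual showinfo log format:
$ grep -oE 'pts: *[0-9]+|pts_time: *[0-9.]+' tsR.txt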
I admit that the logic of how to place the options and the output files on the command line is obscure to me. The possible combinations are many, and the more options I try, the more complicated it gets, and I am not getting much closer to the solution.
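A possible explanation for the failures above: -vf is just an alias of -filter:v, so a later -vf showinfo replaces the crop filter instead of adding to it. Appending showinfo to the same filter chain should avoid the clash:
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray,showinfo" -c:v mjpeg eyeL.mp4 2>tsL.txt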
3. [EDIT] instructions how to reproduce this
get a .h264 video
turn it into .mp4 with the ffmpeg command $ ffmpeg -i inVideo.h264 out.mp4
run the following python cell in a jupyter-notebook
see that the packet timestamps have diffs both greater than and less than zero
%matplotlib inline
import av                      # PyAV
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "outL.direct", "mp4"

# demux() consumes the container, so open it once per pass
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])

# reopen for the decode pass
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])

print(pk_pts.shape, fm_pts.shape)

# a negative diff means a non-monotonic timestamp; note that packets come in
# decode order (with B-frames their pts is expected to jump around), while
# decoded frames come in presentation order
mpl.subplot(211)
mpl.plot(np.diff(pk_pts))  # packet pts diffs
mpl.subplot(212)
mpl.plot(np.diff(fm_pts))  # frame pts diffs
finally, also create the mjpeg-encoded files in various ways, and check packet monotonicity with the same script (also compare file sizes):
$ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4
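As an alternative to the notebook script, ffprobe (assuming a reasonably recent build) can dump the packet timestamps directly for the same monotonicity check:
$ ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of csv=p=0 out.c_mjpeg.mp4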
Finally, the question
What is a working way / the right way to do it?
Any hints, even about single steps and how to combine them correctly, will be appreciated. Also, I am not limited to the command line: I would be able to try a more programmatic solution in Python (jupyter notebook) instead, if someone points me in that direction.
I'm using ffmpeg to take screenshots from an online video stream. I want to seek to multiple points on the timeline. I've used the following command to capture one screenshot at a given seek point:
ffmpeg -ss 00:02:10 -i "stream-url" -frames:v 1 out1.jpg
How can I take multiple screenshots at multiple seek times? I've searched for a solution, but with no success.
I've used the following command to take multiple screenshots:
ffmpeg -noaccurate_seek -ss 00:01:10 -i "stream-url" -map 0:v:0 -vframes 1 -f mpeg "thumb/output_01.jpg" -ss 00:02:10 -i "stream-url" -map 1:v:0 -vframes 1 -f mpeg "thumb/output_02.jpg"
Is there any way to generate the screenshots from the same input via the seek command? How can I make it faster? How can I avoid repeating the input (-i) parameter? I've also tried other commands, but they are even slower. Can anyone help me?
There's no easy way I know to specify a number of arbitrary seek points from which to extract frames (similar question here).
However, seeking is very fast the way you specified it. Instead of constructing a complex command, you could just download the YouTube video using youtube-dl (if you haven't done that already) and generate the commands like this:
ffmpeg -ss 00:01:10 -i input -frames:v 1 out1.jpg
ffmpeg -ss 00:02:05 -i input -frames:v 1 out2.jpg
ffmpeg -ss 00:03:20 -i input -frames:v 1 out3.jpg
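If a shell is available, a small loop can generate those commands (a sketch assuming Bash; the timestamps are placeholders):
for t in 00:01:10 00:02:05 00:03:20; do
  ffmpeg -ss "$t" -i input -frames:v 1 "out_${t//:/-}.jpg"
done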
Note that exporting JPG might lead to low quality. Using PNG is preferred; you will get lossless frames that you can handle with another program later (e.g. to resize or compress).
If you want to get frames from regular intervals, use the fps filter to drop the framerate:
ffmpeg -i input -filter:v fps=1/60 out%02d.jpg
This will output a frame every minute (1/60 frames per second = 1 frame per minute), with two zero-padded digits as output numbers. You could additionally offset the start by providing a -ss option before the input file.
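For example, to skip the first 30 seconds and then grab one frame per minute:
ffmpeg -ss 00:00:30 -i input -filter:v fps=1/60 out%02d.jpg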
I am trying to create a video out of a sequence of images and various audio files using FFmpeg. While it is no problem to create a video containing the sequence of images with the following command:
ffmpeg -f image2 -i image%d.jpg video.mpg
I haven't found a way yet to add audio files at specific points to the generated video.
Is it possible to do something like:
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 AT 10s -i audio2.mp3 AT 15s video.mpg
Any help is much appreciated!
EDIT:
The solution in my case was to use sox, as suggested by blahdiblah in the answer below. You first have to create an empty audio file as a starting point, like this:
sox -n -r 44100 -c 2 silence.wav trim 0.0 20.0
This generates a 20 sec empty WAV file. After that you can mix the empty file with other audio files.
sox -m silence.wav "|sox sound1.mp3 -p pad 0" "|sox sound2.mp3 -p pad 2" out.wav
The final audio file has a duration of 20 seconds and plays sound1.mp3 right at the beginning and sound2.mp3 after 2 seconds.
To combine the sequence of images with the audio file we can use FFmpeg.
ffmpeg -i video_%05d.png -i out.wav -r 25 out.mp4
See this question on adding a single audio input with some offset. The -itsoffset bug mentioned there is still open, but see users' comments for some cases in which it does work.
If it works in your case, that would be ideal:
ffmpeg -i in%d.jpg -itsoffset 10 -i audio1.mp3 -itsoffset 15 -i audio2.mp3 out.mpg
If not, you should be able to combine all the audio files with sox, overlaying or inserting silence to produce the correct offsets and then use that as input to FFmpeg. Not as convenient, but guaranteed to work.
One approach I can think of is to create the audio file for the whole duration of the video first, and then mux the audio with the video file.
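If you'd rather stay inside FFmpeg, recent builds should also be able to do the delay-and-mix step themselves with the adelay and amix filters. A sketch with the 10 s and 15 s offsets from the question (adelay takes milliseconds, one value per audio channel, so stereo inputs are assumed here):
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 -i audio2.mp3 -filter_complex "[1:a]adelay=10000|10000[a1];[2:a]adelay=15000|15000[a2];[a1][a2]amix=inputs=2[a]" -map 0:v -map "[a]" video.mpg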
I'm wanting to take a bunch of images and make a video slideshow out of them. There'll be an app for that, right? Yup, quite a few it seems. The problem is I want the slides synced to a piece of music, and all the apps I've seen only allow you to show each slide for a multiple of a whole second. I want them to show for multiples of 1.714285714 seconds to fit with 140 bpm.
The tools I've seen generally seem to have ffmpeg under the hood, so presumably this kind of thing could be done with a script. But ffmpeg has sooo many options...I'm hoping someone will have something close.
I'll have up to about 100 slides, the ones that have to show for 3.428571428 secs or whatever I guess I can simply show twice.
For very recent versions of ffmpeg (roughly from the end of year 2013)
The following will create a video slideshow (encoded with libx264, or as WebM) from all the png images in the current directory. The command accepts image names numbered and ordered in series (img001.jpg, img002.jpg, img003.jpg) as well as a random bunch of images.
(each image will have a duration of 5 seconds)
ffmpeg -r 1/5 -pattern_type glob -i '*.png' -c:v libx264 out.mp4 # x264 video
ffmpeg -r 1/5 -pattern_type glob -i '*.png' out.webm # WebM video
For older versions of ffmpeg
This will create a video slideshow (encoded with libx264, or as WebM) from a series of png images, named img001.png, img002.png, img003.png, …
(each image will have a duration of 5 seconds)
ffmpeg -f image2 -r 1/5 -i img%03d.png -vcodec libx264 out.mp4 # x264 video
ffmpeg -f image2 -r 1/5 -i img%03d.png out.webm # WebM video
You may have to slightly modify the following commands if you have a very recent version of ffmpeg
This will create a slideshow in which each image has a duration of 15 seconds:
ffmpeg -f image2 -r 1/15 -i img%03d.png out.webm
If you want to create a video out of just one image, this will do (output video duration is set to 30 seconds):
ffmpeg -loop 1 -f image2 -i img.png -t 30 out.webm
If you don't have images numbered and ordered in series (img001.jpg, img002.jpg, img003.jpg) but rather a random bunch of images, you might try this:
cat *.jpg | ffmpeg -f image2pipe -r 1 -vcodec mjpeg -i - out.webm
or for png images:
cat *.png | ffmpeg -f image2pipe -r 1 -vcodec png -i - out.webm
That will read all the jpg/png images in the current directory and write them, one by one, via the pipe, to ffmpeg's input, which will produce the video out of them.
Important: All images in a series need to be of the same size (x and y dimensions) and format.
Explanation: by telling FFmpeg to set the input file's framerate to some very low value, we make FFmpeg duplicate frames at the output, and thus each image is displayed on screen for some time. As you have seen, you can set any fraction as the framerate: 140 beats per minute would be -r 140/60 (one image per beat), or -r 140/240 for the question's 1.714285714-second slides (one image every four beats).
Source: The FFmpeg wiki
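Applied to the 140 bpm case from the question, a sketch (assuming a recent ffmpeg with glob support and png slides in the current directory; -pix_fmt yuv420p is added only for player compatibility):
ffmpeg -r 140/240 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p out.mp4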
For creating images from a video use
ffmpeg -i video.mp4 img%03d.png
This will create images named img001.png, img002.png, img003.png, …
You can extract images from a video, or create a video from many images:
For extracting images from a video:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will output them in files named 'foo-001.jpeg', 'foo-002.jpeg', etc. Images will be rescaled to fit the new WxH values. If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option, or in combination with -ss to start extracting from a certain point in time.
For creating a video from many images:
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
The syntax foo-%03d.jpeg specifies to use a decimal number composed of three digits padded with zeroes to express the sequence number. It is the same syntax supported by the C printf function, but only formats accepting a normal integer are suitable.
This is an excerpt from the documentation; for more info, check the ffmpeg documentation page.
I wound up using this:
mencoder "mf://html/*.png" -ovc x264 -mf fps=1.16666667 -o output.avi
and changing the sample rate afterwards in LiVES.
a load more details (and the end result video) at: http://hyperdata.org/hackit/ (mirror)