I know it’s possible to get a part of a file using the command:
ffmpeg -ss 00:02:00.000 -i input.mp4 -t 00:00:05.000 out.mp4
But is it possible to combine multiple videos with text and other effects?
I want to create an output from the following:
File1.mp4:
Read from 00:02:00.000 to 00:02:05.000
File2.mp4:
Read from 00:00:00.000 to 00:01:30.000
Insert overlay image “logo.png” for 20 seconds
File3.mp4:
Insert the whole file
Insert text from 00:00:10.000 to 00:00:30.000
It can be done with FFmpeg, but FFmpeg isn't really an 'editor', so the command gets long, unwieldy and prone to execution errors as the number of input clips and effects grows.
That said, one way to do this is using the concat filter.
ffmpeg -i file1.mp4 -i file2.mp4 -i file3.mp4 -loop 1 -t 20 -i logo.png \
-filter_complex "[0:v]trim=120:125,setpts=PTS-STARTPTS[v1];
[1:v]trim=duration=90,setpts=PTS-STARTPTS[vt2];
[vt2][3:v]overlay=eof_action=pass[v2];
[2:v]drawtext=enable='between(t,10,30)':fontfile=font.ttf:text='Hello World'[v3];
[0:a]atrim=120:125,asetpts=PTS-STARTPTS[a1];
[1:a]atrim=duration=90,asetpts=PTS-STARTPTS[a2];
[v1][a1][v2][a2][v3][2:a]concat=n=3:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4
I haven't specified any encoding parameters like codec or bitrate, etc., assuming you're familiar with those. I also haven't specified arguments for overlay or drawtext, such as position; consult the documentation for a guide to those.
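For example, overlay takes x/y expressions and drawtext accepts x, y, fontsize and fontcolor, so a top-right logo and bottom-centered text might look like this (a sketch; positions and sizes are placeholder values):
overlay=eof_action=pass:x=W-w-10:y=10
drawtext=enable='between(t,10,30)':fontfile=font.ttf:text='Hello World':x=(w-text_w)/2:y=h-text_h-20:fontsize=36:fontcolor=white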
I am trying to add an overlay to a video using the following command
ffmpeg -y -i "$videoPath" -i "$overlayPath" -filter_complex "[0:v] [1:v] overlay=$overlayPosition" -pix_fmt yuv420p -c:a copy "$outputPath"
However, I would like to be able to resize the overlay to some arbitrary resolution before applying it (no care for keeping proportions). Although I followed a couple of similar solutions from SO (like FFMPEG - How to resize an image overlay?), I am not quite sure about the meaning of the parameters or what I need to add in my case.
I would need to add something like (?)
[1:v]scale=360:360[z] [1:v]overlay=$overlayPosition[z]
This doesn't seem to work so I'm not sure what I should be aiming for.
I would appreciate any assistance, perhaps with some explanation.
Thanks!
You have found all the parts. Let's bring them together:
ffmpeg -i "$videoPath" -i "$overlayPath" -filter_complex "[1:v]scale=360:360[z];[0:v][z]overlay[out]" -map "[out]" -map "0:a" "$outputPath"
For explanation:
We're executing two filters within the "filter_complex" parameter, separated by a semicolon (";").
First, we scale the second video input ([1:v]) to a new resolution and store the output under the label "z" (you can use any name here).
Second, we bring the first input video ([0:v]) and the scaled overlay ([z]) together and store the output under the label "out".
Now it's time to tell ffmpeg what it should pack into our output file:
-map "[out]" (for the video)
-map "0:a" (for the audio of the first input file)
I have just started using ffmpeg for one of my projects and have very limited knowledge of it.
I need help with the problem below. Thanks in advance.
I have two files:
Audio File
Video File
I want to generate a single file after performing the operations below:
trim the audio file to custom start and stop points.
merge the audio and video files into a single file (the video file is of the same size)
apply a speed filter to the generated file.
I am able to achieve the output, but with three different ffmpeg commands, which takes a lot of time. I want to achieve all three tasks in a single ffmpeg command.
Thanks.
Use the setpts and atempo (or rubberband) filters. This example will double the speed:
ffmpeg -i video.mp4 -ss 3 -t 10 -i audio.mp3 -filter_complex "[0:v]setpts=0.5*PTS[v];[1:a]atempo=2[a]" -map "[v]" -map "[a]" -shortest output.mp4
-ss 3 will skip the first 3 seconds of audio.mp3.
-t 10 will limit audio.mp3 to a duration of 10 seconds.
-shortest will make output.mp4's duration the same as the shortest output stream's duration.
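To halve the speed instead, invert the factors; a sketch (note that a single atempo instance is limited to the 0.5–2.0 range in older FFmpeg versions, so chain several atempo filters for larger changes):
ffmpeg -i video.mp4 -ss 3 -t 10 -i audio.mp3 -filter_complex "[0:v]setpts=2*PTS[v];[1:a]atempo=0.5[a]" -map "[v]" -map "[a]" -shortest output.mp4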
I am cropping and adding subtitles to a video using the following:
ffmpeg -i inputfile.mov -lavfi "crop=720:720:280:360,subtitles=subs.srt:force_style='OutlineColour=&H100000000,BorderStyle=3,Outline=1,Shadow=0,MarginV=20,Fontsize=18'" -crf 1 -c:a copy output.mov
I have another video called credits.mp4 which has the same dimensions as output.mov (after cropping). Can I append it during the above process, or would I have to use something like concat afterwards?
Using bash in Terminal on a Mac
Use the concat filter:
ffmpeg -i inputfile.mov -i credits.mp4 -lavfi "[0]crop=720:720:280:360,subtitles=subs.srt:force_style='OutlineColour=&H100000000,BorderStyle=3,Outline=1,Shadow=0,MarginV=20,Fontsize=18',setpts=PTS-STARTPTS[v0];[1]setpts=PTS-STARTPTS[v1];[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4
Because no info was provided about your inputs I made some assumptions:
The attributes of both input files are the same as they are fed to concat. If not then perform additional filtering to conform them to a common set of parameters.
credits.mp4 has audio. If not, then add an audio file or use the anullsrc filter as an input to create silent/dummy/filler audio for proper concatenation.
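If credits.mp4 turns out to have no audio, a silent filler track can be generated inline with anullsrc; a sketch, where the -t 10 should match the credits' duration and the stereo/44100 settings are assumptions that should match your main input (force_style omitted for brevity):
ffmpeg -i inputfile.mov -i credits.mp4 -f lavfi -t 10 -i anullsrc=channel_layout=stereo:sample_rate=44100 -lavfi "[0]crop=720:720:280:360,subtitles=subs.srt,setpts=PTS-STARTPTS[v0];[1]setpts=PTS-STARTPTS[v1];[v0][0:a][v1][2:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4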
I am fiddling with ffmpeg, extracting jpg pictures from videos. I am splitting the input stream into two output streams with -filter_complex, because I process my videos from a direct http link (free space is scarce on my VPS) and I don't want to read through the whole video twice (traffic quota is also scarce). Furthermore, I need two series of pictures: one for applying some filters (fps change, crop, scale, unsharp) and then selecting from by eye, and the other untouched (except for the fps change and cropping the black borders), used for further processing after selecting from the first series. I call my ffmpeg command from a Ruby script, so it contains some string interpolation / substitution in the form #{}. My working command line looked like:
ffmpeg -y -fflags +genpts -loglevel verbose -i #{url} -filter_complex "[0:v]fps=fps=#{new_fps.round(5).to_s},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -f #{format} -c copy #{options} -map_chapters -1 - -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
#{options} is set when the output is MP4; its value is then "-movflags frag_keyframe+empty_moov", so I can send the output to standard output without seeking capability and upload the stream somewhere without making huge temporary video files.
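A minimal standalone sketch of this kind of piped output (the upload command here is just a placeholder):
ffmpeg -i input.mp4 -c copy -movflags frag_keyframe+empty_moov -f mp4 - | curl -T - http://example.com/upload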
So I get two series of pictures: one of them filtered and sharpened, the other in fact untouched. I also get an output stream of the video on standard output, which is handled by the Open3.popen3 library, connecting the output stream to the input of two other commands.
The problem arises when I want to seek to a given point in the video while omitting the streamed video output on STDOUT. I try to apply combined seeking: a fast seek to just before the given time code, then a slow seek to the exact time code, given in floating-point seconds:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss #{(seek_to-seek_before).to_s} -i #{url} -ss #{seek_before.to_s} -t #{t_duration.to_s} -filter_complex "[0:v]fps=fps=#{pics_per_sec},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Running this command I get the needed two series of pictures, but they contain different numbers of images: 233 vs. 484.
The actual values can be read from this interpolated / substituted command line:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss 1619.0443599999999 -i fabf.avi -ss 50.0 -t 46.505879999999934 -filter_complex "[0:v]fps=fps=5,split=2[in1][in2];[in1]crop=iw-0:ih-0:0:0,scale=280:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(0.526316)[out1];[in2]crop=iw-0:ih-0:0:0[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Detailed log can be found here: http://www.filefactory.com/file/1yih17k2hrmp/ffmpeg-20160610-223820.txt
Before the last line, it shows 188 duplicated frames.
I also tried passing the "-vsync 0" option, but it didn't help. When I generate the two series of images in two consecutive steps, with two different command lines, no problem arises; I get the same number of pictures in both series, of course. So my question is: how can I use the latter command line, generating the two series of images with only one read / parse of the remote video file?
You have to replicate the -ss and -t options for the 2nd output as well, i.e.
...-f image2 -q 1 %06d.jpg -map '[out2]' -ss 50 -t 46.5 -f image2 -q 1 big_%06d.jpg
Each output option (those not placed before -i) applies only to the output that immediately follows it.
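A minimal illustration of that rule (filenames are placeholders): each -ss/-t pair below applies only to the output file that follows it.
ffmpeg -i in.mp4 -ss 5 -t 10 clip1.mp4 -ss 60 -t 10 clip2.mp4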
I'm looking to create a video from a set of png images that have transparency, merged with a static background.
After doing a lot of digging, it seems like it's definitely possible using the filters library.
My initial video-making command, without including the background, is:
ffmpeg -y -qscale 1 -r 1 -b 9600 -loop -i bg.png -i frame_%d.png -s hd720 testvid.mp4
Using -vf I can apply the background as an overlay:
ffmpeg -y -qscale 1 -r 1 -b 9600 -i frame_%d.png -vf "movie=bg.png [wm];[in][wm] overlay=0:0 [out]" -s hd720 testvid.mp4
However, the problem is that it's overlaying the background over the input. According to libavfilter, I can split the input and play with its content. I'm wondering if I can somehow change the overlay order?
Any help greatly appreciated!
UPDATE 1:
I'm trying to make the following filter work, but I'm getting the movie without the background:
ffmpeg -y -qscale 1 -r 1 -b 9600 -i frame_%d.png -vf "movie=bg.png [bg]; [in] split [T1], fifo, [bg] overlay=0:0, [T2] overlay=0:0 [out]; [T1] fifo [T2]" -s hd720 testvid.mp4
UPDATE 2:
Got video making working using the -vf option: just piped the input, split it, applied the image over it, and overlaid the two split feeds! Probably not the most efficient way... but it worked!
ffmpeg -y -r 1 -b 9600 -i frame_%d.png -vf "movie=bg.png, scale=1280:720:0:0 [bg]; [in] format=rgb32, split [T1], fifo, [bg] overlay=0:0, [T2] overlay=0:0 [out]; [T1] fifo [T2]" -s hd720 testvid.mp4
The overlay order is controlled by the order of the inputs; from the ffmpeg docs:
[...] takes two inputs and one output, the first input is the "main" video on which the second input is overlayed.
Your second command thus becomes:
ffmpeg -y -loop 1 -qscale 1 -r 1 -b 9600 -i frame_%d.png -vf "movie=bg.png [wm];[wm][in] overlay=0:0" -s hd720 testvid.mp4
With the latest versions of ffmpeg the new -filter_complex command makes the same process even simpler:
ffmpeg -loop 1 -i bg.png -i frame_%d.png -filter_complex overlay -shortest testvid.mp4
A complete working example:
The source of our transparent input images (apologies for dancing):
Exploded to frames with ImageMagick:
convert dancingbanana.gif -define png:color-type=6 over.png
(Setting png:color-type=6 (RGB-Matte) is crucial because ffmpeg doesn't handle indexed transparency correctly.) Inputs are named over-0.png, over-1.png, over-2.png, etc.
Our background image (scaled to banana):
Using ffmpeg version N-40511-g66337bf (a git build from yesterday), we do:
ffmpeg -loop 1 -i bg.png -r 5 -i over-%d.png -filter_complex overlay -shortest out.avi
-loop loops the background image input so that we don't just have one frame, crucial!
-r slows down the dancing banana a bit, optional.
-filter_complex is a very recently added ffmpeg feature making handling of multiple inputs easier.
-shortest ends encoding when the shortest input ends, which is necessary as looping the background means that that input will never end.
Using a slightly less cutting-edge build, ffmpeg version 0.10.2.git-d3d5e84:
ffmpeg -loop 1 -r 5 -i back.png -vf 'movie=over-%d.png [over], [in][over] overlay' -frames:v 8 out.avi
movie doesn't allow setting a rate, so we slow down the background instead, which gives the same effect. Because the overlaid movie isn't a proper input, we can't use -shortest, and instead explicitly set the number of frames to output to the number of overlaid input frames we have.
The final result (output as a gif for embedding):
For future reference: as of 17/02/2015, the command line is:
ffmpeg -loop 1 -i images/background.png -i images/video_overlay%04d.png -filter_complex overlay=shortest=1 testvid.mp4
Thanks to llogan, who took the time to reply here: https://trac.ffmpeg.org/ticket/4315#comment:1