Where I made a mistake - FFmpeg (Linux) basic problem

I just started learning FFmpeg. I have the command below, but it does nothing.
fmpeg -i videoplayback.mp4 -filter_complex "[1:v]trim=start=0:end=1,setpts=PTS-STARTPTS,scale=480x360,setsar=sar=16/9[intro1];
[1:v]trim=start=1:end=123.39,setpts=PTS-STARTPTS,scale=480x360,setsar=sar=16/9[main1];
[1:v]trim=start=123.39:end=124.39,setpts=PTS-STARTPTS,scale=480x360,setsar=sar=16/9[end1];
[intro1]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[intro1];
[end1]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[end1];
[intro1][main1][end1][output];
[a:1][audio]; -vcodec libx264 -map "[output]" -map"[audio]" "output.mp4"

fmpeg should be ffmpeg.
You only have one input, so [1:v] should be [0:v] (input indexing starts at 0).
No need for alpha for fading because you are not overlapping or blending frames.
Ending fade needs to be a fade out (not fade in).
You can't re-use filter output labels within the filtergraph.
Some of your filterchains can be combined.
Some of your labels are not associated with a filter (it appears you forgot to use the concat filter).
You can add scale and setsar at the end instead of using them for each segment.
Replace the last ; with ".
You didn't map the audio properly.
Stream copy (re-mux) the audio.
Example:
ffmpeg -i videoplayback.mp4 -filter_complex "[0:v]trim=end=1,setpts=PTS-STARTPTS,fade=t=in:d=1[intro];[0:v]trim=start=1:end=123.39,setpts=PTS-STARTPTS[main];[0:v]trim=start=123.39,setpts=PTS-STARTPTS,fade=t=out:d=1[end];[intro][main][end]concat=n=3:v=1:a=0,scale=480x360,setsar=16/9[v]" -map "[v]" -map 0:a -c:a copy output.mp4
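If you're unsure of the exact timestamps to use for the trims, you can query the input's duration first. A quick check, assuming ffprobe from the same FFmpeg installation:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 videoplayback.mp4
This prints the duration in seconds, which is how you would derive values like the 123.39 above (duration minus the 1-second fade).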

Related

FFmpeg: split a side-by-side video in two

I have an .m2ts video that contains a 3D video and therefore includes the left and right components. Is there a smart way to split the video into two streams (with, for example, ffmpeg)?
My current solution is to convert the video to mp4 and then crop it in two, but I suppose that is not the smartest approach.
Thanks
Split video into 2 streams; both into 1 output file
Use the crop filter:
ffmpeg -i input.m2ts -filter_complex "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]" -map "[left]" -map "[right]" -map 0:a output.mp4
Split video into 2 separate output files
Use the crop filter:
ffmpeg -i input.m2ts -filter_complex "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]" -map "[left]" -map 0:a left.mp4 -map "[right]" -map 0:a right.mp4
Convert between stereoscopic formats
Such as above-below, side-by-side, alternating, interleaved, anaglyph, etc.
Use the stereo3d filter and also see FFmpeg Wiki: Stereoscopic.
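For example, assuming a full-width side-by-side source with the left view first, converting to a red/cyan (Dubois) anaglyph might look like this; the sbsl and arcd values are assumptions about your layout and target, so adjust them to match:
ffmpeg -i input.m2ts -vf stereo3d=sbsl:arcd -c:a copy output.mp4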

ffmpeg: append a video with different dimensions

I am cropping and adding subtitles to a video using the following:
ffmpeg -i inputfile.mov -lavfi "crop=720:720:280:360,subtitles=subs.srt:force_style='OutlineColour=&H100000000,BorderStyle=3,Outline=1,Shadow=0,MarginV=20,Fontsize=18'" -crf 1 -c:a copy output.mov
I have another video called credits.mp4 which has the same dimensions as the output.mov (after cropping). Can I do this during the above process, or would I have to use something like concat afterwards?
Using bash in Terminal on a Mac
Use the concat filter:
ffmpeg -i inputfile.mov -i credits.mp4 -lavfi "[0]crop=720:720:280:360,subtitles=subs.srt:force_style='OutlineColour=&H100000000,BorderStyle=3,Outline=1,Shadow=0,MarginV=20,Fontsize=18',setpts=PTS-STARTPTS[v0];[1]setpts=PTS-STARTPTS[v1];[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4
Because no info was provided about your inputs, I made some assumptions:
The attributes of both inputs match by the time they are fed to concat. If not, perform additional filtering first to conform them to a common set of parameters.
credits.mp4 has audio. If not, add an audio file or use the anullsrc filter as an input to create silent/dummy/filler audio for proper concatenation, as sketched below.
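A minimal sketch of the anullsrc variant, assuming credits.mp4 is 10 seconds long and silent (the duration, channel layout, and sample rate are placeholders; subtitle styling omitted for brevity):
ffmpeg -i inputfile.mov -i credits.mp4 -f lavfi -t 10 -i anullsrc=channel_layout=stereo:sample_rate=44100 -lavfi "[0]crop=720:720:280:360,subtitles=subs.srt,setpts=PTS-STARTPTS[v0];[1]setpts=PTS-STARTPTS[v1];[v0][0:a][v1][2:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" output.mp4
The anullsrc input has index 2, so its audio is referenced as [2:a] in the concat inputs.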

How to apply multiple cropped blurs?

I would like to apply multiple blurs to my video (with the audio copied), each with different coordinates and durations. Here is what I have tried:
ffmpeg -i test.mp4 -filter_complex \
"[0:v]crop=w=100:h=100:x=20:y=40,boxblur=10:enable='between(t,5,8)'[c1];
[0:v]crop=w=100:h=100:x=40:y=60,boxblur=10:enable='between(t,10,13)'[c2];
[0:v][c1]overlay=x=20:y=40[v];
[0:v][c2]overlay=x=40:y=60[v]" \
-map "[v]" -movflags +faststart output.mp4
However, this results in a "Filter overlay has an unconnected output" error. Is there a good way to solve this? Thanks for your attention.
The 2nd overlay should use the output of the first overlay as its base input. Note the enable conditions added to the overlays below, so each blurred crop is only composited during its time window:
ffmpeg -i test.mp4 -filter_complex \
"[0:v]crop=w=100:h=100:x=20:y=40,boxblur=10:enable='between(t,5,8)'[c1];
[0:v]crop=w=100:h=100:x=40:y=60,boxblur=10:enable='between(t,10,13)'[c2];
[0:v][c1]overlay=x=20:y=40:enable='between(t,5,8)'[v0];
[v0][c2]overlay=x=40:y=60:enable='between(t,10,13)'[v]" \
-map "[v]" -movflags +faststart output.mp4

Video file is hanging after concatenating video files and drawtext to output

I'm trying to concat 3 video files and add text to the output using ffmpeg.
Each part is 10 seconds long.
I ended up with this command:
ffmpeg -i output3.mp4 -i output2.mp4 -i output1.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[v][a]; [0:v:0]drawtext=fontfile=tahoma.ttf:text=Sample text:fontcolor=white:fontsize=40:box=1:boxcolor=black#0.7:boxborderw=5:x=100:y=100" -map "[v]" -map "[a]" output.mp4
The resulting video is 30 seconds long, but it hangs after the 1st part (10s). When I remove the drawtext part (just concat), the video is fine, but without text...
Can anyone help?
Use
ffmpeg -i output3.mp4 -i output2.mp4 -i output1.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[v][a]; [v]drawtext=fontfile=tahoma.ttf:text=Sample text:fontcolor=white:fontsize=40:box=1:boxcolor=black@0.7:boxborderw=5:x=100:y=100[v]" -map "[v]" -map "[a]" output.mp4
Your existing syntax applied the text on top of the video stream of the first input file instead of the video resulting from the concat filter. Note also boxcolor=black@0.7: FFmpeg's color syntax uses @ to attach an alpha value, not #.
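If you still have the output from the broken command, you can check this: the unlabeled drawtext chain was likely written to the file as an extra video stream (a plausible cause of the apparent hang), and ffprobe will list the streams:
ffprobe -v error -show_entries stream=index,codec_type -of csv=p=0 output.mp4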

How to combine a video and an image using "reverse" vstack?

I have an image and a video (same width). I now want to use ffmpeg to add the image above the video. Google and other SO threads suggest the vstack filter, which works great - except that it puts the image under the video.
I've tried putting the image first and then the video, but this doesn't work. I've also tried giving vstack reversed inputs, but that didn't work either!
The video may also contain audio which I would need to keep.
See code below:
// Works, but puts image below video (instead of above)
ffmpeg -i test.mp4 -i text.png -filter_complex vstack result.mp4
// Doesn't work at all
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v][0:v]vstack' result.mp4
// Doesn't work at all
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v][0:v]vstack=inputs=2[v]' -map '[v]' -map 0:a result.mp4
Google / SO did not yield any tips on how to achieve this so far. Do you know a solution?
Use
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v][0:v]vstack' -c:a copy -pix_fmt yuv420p result.mp4
Videos and images can have different pixel formats. When the inputs to a stack filter don't share a format, the filter picks the format of the first input and converts the other inputs to it. However, some video players don't support a wide variety of formats; yuv420p is the most widely supported, so the command above forces the output to it. Audio, if present in the MP4, will be carried over by -c:a copy.
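If you want to see which formats are involved, you can inspect each input's pixel format first, e.g.:
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of default=noprint_wrappers=1:nokey=1 test.mp4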
Alternatively, convert the image to a YUV pixel format before stacking, so the video isn't first converted to the image's RGB format:
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v]format=yuv444p[img];[img][0:v]vstack' -c:a copy -pix_fmt yuv420p result.mp4

Resources