FFmpeg: split a side-by-side 3D video in two

I have an .m2ts file containing a 3D video, which consequently has left and right components side by side. Is there a smart way to split the video into two stimuli (with, for example, ffmpeg)?
My current solution is to convert the video to mp4 and then crop it in two, but I suppose that is not the smartest approach.
Thanks

Split video into 2 streams; both into 1 output file
Use the crop filter:
ffmpeg -i input.m2ts -filter_complex "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]" -map "[left]" -map "[right]" -map 0:a output.mp4
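The crop arguments are width:height:x:y; iw and ih refer to the input dimensions, and ow/oh to the crop output, so the right half starts at x=ow. For top-and-bottom (over/under) 3D the same idea applies vertically (a sketch, assuming the same input file):
ffmpeg -i input.m2ts -filter_complex "[0]crop=iw:ih/2:0:0[top];[0]crop=iw:ih/2:0:oh[bottom]" -map "[top]" -map "[bottom]" -map 0:a output.mp4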
Split video into 2 separate output files
Use the crop filter:
ffmpeg -i input.m2ts -filter_complex "[0]crop=iw/2:ih:0:0[left];[0]crop=iw/2:ih:ow:0[right]" -map "[left]" -map 0:a left.mp4 -map "[right]" -map 0:a right.mp4
Convert between stereoscopic formats
Such as above-below, side-by-side, alternating, interleaved, anaglyph, etc.
Use the stereo3d filter and also see FFmpeg Wiki: Stereoscopic.
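For example, to keep only the left view of a side-by-side input in one step (a sketch; sbsl means side-by-side left-eye-first, ml means mono output from the left view):
ffmpeg -i input.m2ts -vf "stereo3d=sbsl:ml" left.mp4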

Related

ffmpeg: overlaying two videos is deleting audio from one

I'm trying to overlay two videos with ffmpeg, but in the output there is only the audio of the first video.
Using
ffmpeg -i screen.mkv -vf "movie=webcam.mp4, scale=600: -1 [inner]; [in][inner] overlay = main_w - (overlay_w + 10) : main_h - (overlay_h + 10)" output.mp4
from cmd, the final output only has the audio from the first video specified (screen.mkv).
How can I solve this?
Use 2 inputs, a complex filtergraph, and -map output options:
ffmpeg -i webcam.mp4 -i screen.mkv -filter_complex "[0:v]scale=600:-1[inner];[1:v][inner]overlay=main_w-(overlay_w+10):main_h-(overlay_h+10)[vout]" -map "[vout]" -map 0:a output.mp4
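If you want the audio from both inputs rather than just one, the two streams can be mixed instead (a sketch using the amix filter, assuming both files actually contain audio):
ffmpeg -i webcam.mp4 -i screen.mkv -filter_complex "[0:v]scale=600:-1[inner];[1:v][inner]overlay=main_w-(overlay_w+10):main_h-(overlay_h+10)[vout];[0:a][1:a]amix=inputs=2[aout]" -map "[vout]" -map "[aout]" output.mp4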

FFMPEG script to merge multiple videos and a background image

I have 30 clips with different aspect ratios (some are 1080x1920, i.e. vertical, and some are 1280x720, i.e. horizontal). I want to merge all of them, but also have a static 1920x1080 background image. The result should have all the clips concatenated over the background image (just like those TikTok compilation videos on YouTube). Can someone please help me with this?
Example using 3 videos. It can easily be expanded to 30 videos. I broke the command into multiple lines so you can see the syntax better. Make it one line before executing.
ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -i image.png -filter_complex
"[0:v]scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720,setsar=1,fps=25,format=yuv420p[v0];
[1:v]scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720,setsar=1,fps=25,format=yuv420p[v1];
[2:v]scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720,setsar=1,fps=25,format=yuv420p[v2];
[0:a]aformat=sample_rates=44100:channel_layouts=stereo[a0];
[1:a]aformat=sample_rates=44100:channel_layouts=stereo[a1];
[2:a]aformat=sample_rates=44100:channel_layouts=stereo[a2];
[v0][a0][v1][a1][v2][a2]concat=n=3:v=1:a=1[vid][a];
[3][vid]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2[v]"
-map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4
References:
How to concatenate videos in ffmpeg with different attributes?
How to center overlay in ffmpeg?
Resizing videos with ffmpeg to fit specific size

Video file is hanging after concatenating video files and drawtext to output

I'm trying to concatenate 3 video files and add text to the output using ffmpeg.
Each part is 10 seconds long.
I've ended up with this code:
ffmpeg -i output3.mp4 -i output2.mp4 -i output1.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[v][a]; [0:v:0]drawtext=fontfile=tahoma.ttf:text=Sample text:fontcolor=white:fontsize=40:box=1:boxcolor=black#0.7:boxborderw=5:x=100:y=100" -map "[v]" -map "[a]" output.mp4
The resulting video is 30 seconds long, but it hangs after the 1st part (10 s). When I remove the drawtext filter part (just concat), the video is fine, but without text...
Can anyone help?
Use
ffmpeg -i output3.mp4 -i output2.mp4 -i output1.mp4 -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[v][a]; [v]drawtext=fontfile=tahoma.ttf:text='Sample text':fontcolor=white:fontsize=40:box=1:boxcolor=black@0.7:boxborderw=5:x=100:y=100[v]" -map "[v]" -map "[a]" output.mp4
Your existing syntax applied the text on top of the video stream of the first input file, instead of the resultant video from the concat filter.

Where I made a mistake - FFmpeg (Linux) basic problem

I just started learning FFmpeg. I have code (like below), but it's doing nothing.
fmpeg -i videoplayback.mp4 -filter_complex "[1:v]trim=start=0:end=1,setpts=PTS-STARTPTS,scale=480x360,setsar=sar=16/9[intro1];
[1:v]trim=start=1:end=123.39,setpts=PTS-STARTPTS,scale=480x360,setsar=sar=16/9[main1];
[1:v]trim=start=123.39:end=124.39,setpts=PTS-STARTPTS,scale=480x360,setsar=sar=16/9[end1];
[intro1]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[intro1];
[end1]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[end1];
[intro1][main1][end1][output];
[a:1][audio]; -vcodec libx264 -map "[output]" -map"[audio]" "output.mp4"
fmpeg should be ffmpeg.
You only have one input so [1:v] should be [0:v] (it starts counting from 0).
No need for alpha for fading because you are not overlapping or blending frames.
Ending fade needs to be a fade out (not fade in).
You can't re-use filter output labels within the filtergraph.
Some of your filterchains can be combined.
Some of your labels are not associated with a filter (it appears you forgot to use the concat filter).
You can add scale and setsar at the end instead of using them for each segment.
Replace the last ; with ".
You didn't map the audio properly.
Stream copy (re-mux) the audio.
Example:
ffmpeg -i videoplayback.mp4 -filter_complex "[0:v]trim=end=1,setpts=PTS-STARTPTS,fade=t=in:d=1[intro];[0:v]trim=start=1:end=123.39,setpts=PTS-STARTPTS[main];[0:v]trim=start=123.39,setpts=PTS-STARTPTS,fade=t=out:d=1[end];[intro][main][end]concat=n=3:v=1:a=0,scale=480x360,setsar=16/9[v]" -map "[v]" -map 0:a -c:a copy output.mp4
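Since the segments are only split to apply the fades, a simpler route (a sketch, assuming the clip really ends at 124.39 seconds as the original timestamps suggest) is to skip trim/concat entirely and apply both fades to the whole stream:
ffmpeg -i videoplayback.mp4 -vf "fade=t=in:d=1,fade=t=out:st=123.39:d=1,scale=480:360,setsar=16/9" -c:a copy output.mp4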

How to combine a video and an image using "reverse" vstack?

I have an image and a video (same width). I now want to use ffmpeg to add the image above the video. Google and other SO threads suggest using the vstack filter (via -filter_complex), which works great - except that it puts the image under the video.
I've tried putting the image first and then the video, but this doesn't work. I've also tried giving the vstack command reversed inputs, but that didn't work either!
The video may also contain audio which I would need to keep.
See code below:
// Works, but puts image below video (instead of above)
ffmpeg -i test.mp4 -i text.png -filter_complex vstack result.mp4
// Doesn't work at all
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v][0:v]vstack' result.mp4
// Doesn't work at all
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v][0:v]vstack=inputs=2[v]' -map '[v]' -map 0:a result.mp4
Google / SO did not yield any tips on how to achieve this so far. Do you know a solution?
Use
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v][0:v]vstack' -c:a copy -pix_fmt yuv420p result.mp4
Videos and images can have different pixel formats. When the various inputs to a stack filter don't have the same format, the filter picks the format of the first input and converts all other inputs to that format. However, some video players don't support a wide variety of formats. yuv420p is the most widely supported format, so the command above forces the output to that one. Audio, if present in the MP4, will get carried over.
Alternatively, convert the image to a YUV format yourself before stacking, so the intermediate format is chosen explicitly rather than left to the filter:
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v]format=yuv444p[img];[img][0:v]vstack' -c:a copy -pix_fmt yuv420p result.mp4
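Note that vstack requires its inputs to have the same width. If the image ever differs in width, scale it first (a sketch, assuming a 1280-pixel-wide video; -2 keeps the height even, as yuv420p requires):
ffmpeg -i test.mp4 -i text.png -filter_complex '[1:v]scale=1280:-2[img];[img][0:v]vstack' -c:a copy -pix_fmt yuv420p result.mp4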
