I have two videos with the same video and audio quality but different lengths.
Let's say the video resolution is 1920x1080 pixels.
I want to merge both videos side by side, keeping the longer of the two durations.
What I found so far is not what I need :(
On the internet I found many examples, but they give an output of 3840x1080 pixels.
What I want:
Outcome: video with 1920x1080 pixels
use from video1: the left half, i.e. pixels 1 to 960
use from video2: the right half, i.e. pixels 961 to 1920
audio is merged, i.e. I can hear both audio tracks simultaneously where available
What I want - optional:
Between the two videos there is a visible divider, like |
Is there a single ffmpeg command line I can use?
Many Thanks,
BM
Using crop and stack:
ffmpeg -i test10.mkv -i test06.mkv -filter_complex "
color=red:2x1080:24000/1001:1[c];
[0]crop=iw/2-1:ih:0:0[v0];
[1]crop=iw/2+1:ih:iw-ow:0[v1];
[v0][c][v1]xstack=inputs=3:grid=3x1;
[0][1]amix
" output.mkv
or, replacing the xstack line with:
[v0][c][v1]hstack=inputs=3;
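In both variants, each input is cropped to roughly half its width, the color source generates a 2x1080 red strip to serve as the divider, and xstack (or hstack) places the three parts side by side. amix mixes the two audio tracks into one; it defaults to duration=longest, so the output keeps the longer duration as requested, and xstack/hstack likewise run until the longest video input ends.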
If the inputs have different framerates etc., you can try to use overlay instead:
ffmpeg -i test10.mkv -i test06.mkv -filter_complex "
[0]crop=iw/2-1:ih:0:0[v0];
[1]crop=iw/2+1:ih:iw-ow:0[v1];
color=red:1920x1080[bg];
[bg][v0]overlay=shortest=1[b0];
[b0][v1]overlay=x=W-w;
[0][1]amix
" output.mkv
Exporting all the frames in a single Nx1 tile can be done like this:
ffmpeg -i input.mp4 -vf "fps=5,tile=100x1" output.jpg
The problem is that I don't know up front how many frames are there going to be, so I specify a much higher number than expected (based on the movie length and fps). Ideally I would like something like this:
ffmpeg -i input.mp4 -vf "fps=5,tile=Nx1" output.jpg
Where Nx1 would tell ffmpeg to create an image as wide as the number of exported frames.
I know there is a showinfo filter that might come in handy, but I was never able to integrate it so that its output is used as input for tile.
Also, I tried pre-calculating the number of frames based on the movie duration and fps, but this was never very accurate. Even for an exactly 3.000 s movie at 3 fps it produced 8 frames.
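One possible workaround (a sketch, not a built-in feature): read the container duration with ffprobe, estimate the frame count from it, and substitute the result into the filter. As noted, the estimate can still be off by a frame, in which case tile just fills the leftover cell with its padding color.
# Estimate how many frames fps=5 will emit from the clip duration, then
# build the tile filter with that count; rounding up by one leaves at
# worst a single blank tile at the end of the strip.
d=$(ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4)
n=$(awk -v d="$d" 'BEGIN { print int(d * 5) + 1 }')
ffmpeg -i input.mp4 -vf "fps=5,tile=${n}x1" output.jpg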
I've been trying to get this to work on and off for the past month and am very frustrated, so I'm hoping someone here can help me. What I'm trying to do is very simple, but I struggle with ffmpeg. I basically just want to take a folder of pictures, each of which has a different size and may be in horizontal or vertical orientation, and put them into a video slideshow where each shows for maybe 5-10 seconds. No matter what I try, it winds up stretching the pictures out of their aspect ratio, and they just look funny. I noticed the Windows 10 Photos app does this perfectly, but I want a programmatic approach and I don't think it has a command-line feature. Can someone help me tweak this ffmpeg command line to work the way I need it to? The desired video output is 1920x1080 in this case. Thanks!
ffmpeg -r 1/5 -start_number 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -c:v libx264 -vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" "F:\Destination_Output\Test_Output.mp4"
Use a combination of scale and pad to generate proportionally resized images centered onto a 1080p frame.
Use:
ffmpeg -framerate 1/5 -start_number 0 -reinit_filter 0 -i "C:\Source_Directory_Pictures\Image_%d.jpg" -vf "scale=1920:1080:force_original_aspect_ratio=decrease:eval=frame,pad=1920:1080:-1:-1:eval=frame" -r 25 -c:v libx264 "F:\Destination_Output\Test_Output.mp4"
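Here -reinit_filter 0 keeps one filter graph alive across input images of different sizes, eval=frame makes the scale and pad expressions get re-evaluated for every frame, force_original_aspect_ratio=decrease shrinks each image to fit inside 1920x1080 without distortion, and pad=1920:1080:-1:-1 centers it on the 1080p canvas.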
For example, this command line:
ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov -vf "scale=w=416:h=234:force_original_aspect_ratio=decrease" -an -f rawvideo -pix_fmt yuv420p -r 15 -
works fine, except that if the source video is 360x240, the output will be 351x234, which kinda sucks, as yuv420p video with odd dimensions is difficult to handle due to the way the colour data is stored.
Is there a way I could force ffmpeg to give the nearest possible even values?
If you're resizing, use an absolute value for just one of the dimensions, for example:
Change:
-vf "scale=w=416:h=234:force_original_aspect_ratio=decrease"
To:
-vf "scale=w=416:h=-2"
This should scale to a width of 416 and pick a height that keeps the aspect ratio the same.
-2 = scale using mod 2
-4 = scale using mod 4, etc.
You can achieve that by using force_divisible_by=2 in your filter, like this:
-vf scale=w=852:h=480:force_original_aspect_ratio=decrease:force_divisible_by=2
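Applied to the command from the question, that might look like this (a sketch; force_divisible_by needs a reasonably recent ffmpeg build):
ffmpeg -i rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov -vf "scale=w=416:h=234:force_original_aspect_ratio=decrease:force_divisible_by=2" -an -f rawvideo -pix_fmt yuv420p -r 15 -
This keeps the bounding-box behavior of the original filter while rounding both dimensions to even values.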
I know the question is old, but I hope this helps someone.
I have been trying to use ffmpeg to create a waveform image from an Opus file. So far I have found three different methods, but I cannot seem to determine which one is best.
The end result should hopefully be a sound wave that is only approx. 55px in height. The image will become part of a CSS background-image.
Adapted from Generating a waveform using ffmpeg:
ffmpeg -i file.opus -filter_complex \
"showwavespic,colorbalance=bs=0.5:gm=0.3:bh=-0.5,drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=black#0.5" \
file.png
which produces this image:
Next, I found this one (my favorite because of its simplicity):
ffmpeg -i test.opus -lavfi showwavespic=split_channels=1:s=1024x800 test.png
And here is what that one looks like:
Finally, this one from the FFmpeg Wiki: Waveform, though it seems less efficient, since it uses a second utility (gnuplot) rather than just ffmpeg:
ffmpeg -i file.opus -ac 1 -filter:a aresample=4000 -map 0:a -c:a pcm_s16le -f data - | \
gnuplot -e "set terminal png size 525,050; set output 'file.png'; unset key; unset tics; unset border; \
set lmargin 0; set rmargin 0; set tmargin 0; set bmargin 0; \
plot '<cat' binary filetype=bin format='%int16' endian=little array=1:0 with lines"
Option two is my favorite, but I don't like the margins on the top and bottom of the waveforms.
Option three (using gnuplot) makes the best 'shaped' image for our needs, since otherwise the initial spike in sound makes the rest almost too small to use (the lines tend to almost disappear) when the image is sized at only 50 pixels high.
Any suggestions on how best to approach this? I really understand very little about any of the options I see, except of course for the size. Note too that I have tens of thousands of these to process, so naturally I want to make a wise choice at the very beginning.
Original and manipulated waveforms.
You can use the compand filter to adjust the dynamic range. drawbox is then used to make the horizontal line.
ffmpeg -i test.opus -filter_complex \
"compand=gain=-6,showwavespic=s=525x50, \
drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
-vframes 1 output.png
It won't be quite as accurate a representation of your audio as the original waveform, but it may be an improvement visually, especially at such a wide scale.
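If the loudest peak still dominates at 50 pixels high, the compand transfer curve can be made more aggressive. The points below are hypothetical values to experiment with, not tuned ones:
ffmpeg -i test.opus -filter_complex \
"compand=attacks=0:decays=0:points=-80/-80|-40/-20|0/-10,showwavespic=s=525x50, \
drawbox=x=(iw-w)/2:y=(ih-h)/2:w=iw:h=1:color=white" \
-vframes 1 output.png
This boosts quiet passages and attenuates the loudest ones before the waveform is drawn, so the thin lines stay visible.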
Also see FFmpeg Wiki: Waveform.
I have already found out how to scale the thumbnail to stay within specified bounding dimensions while maintaining the aspect ratio. For example, to get the frame shown 6 seconds into the input.mp4 video file and scale it to fit into 96x60 (16:10 aspect ratio):
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf scale="'if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)'" output.png
This is fine, it works.
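In the scale expression, a is the input aspect ratio (iw/ih): if the video is wider than 16:10, the width is pinned to 96 and the height follows via -1; otherwise the height is pinned to 60.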
Next, I would like to do the same, but if the video's aspect ratio is not exactly 16:10, then I would like to force the output image to an aspect ratio of 16:10 by taking the above transformation and padding the remaining space with white. That is, I want the output to be as if I took, say, a 96x48 image and laid it over a 96x60 white background, resulting in white bars above and below the 96x48 image.
Ideally, I do not want to resort to using another tool or library, such as ImageMagick. It would be best if ffmpeg could do this on its own.
Here's what I went with. For the -vf argument:
-vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)', pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white"
This applies two filters in sequence, separated by a comma.
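Put together with the original command, the full invocation becomes:
ffmpeg -y -i input.mp4 -ss 6 -vframes 1 -vf "scale='if(gt(a,16/10),96,-1)':'if(gt(a,16/10),-1,60)',pad=w=96:h=60:x=(ow-iw)/2:y=(oh-ih)/2:color=white" output.png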
The same approach works for arbitrary target dimensions, e.g. target_W = 1124 and target_H = 2436:
ffmpeg -i 1.mp4 -ss 1 -vframes 1 -vf "scale=min(iw*2436/ih\,1124):min(2436\,ih*1124/iw),pad=1124:2436:(1124-iw)/2:(2436-ih)/2:green" output.png