I'm trying to concatenate two mp4 video files using ffmpeg (with the command below): a main video and a secondary one. The main video always has a 1080x1920 resolution, and the resulting video should have the same resolution.
val concat = "-i ${mainVideoPath} -i ${secondVideoPath} -filter_complex [0:v]scale=1080:1920:force_original_aspect_ratio=decrease,setsar=1:1,pad=1080:1920:(ow-iw)/2:(oh-ih)/2[s0];[1:v]scale=1080:1920:force_original_aspect_ratio=decrease,setsar=1:1,pad=1080:1920:(ow-iw)/2:(oh-ih)/2[s1];[s0][s1]concat=n=2:v=1[v] -map [v] $resultVideoPath"
The concat works fine, but the main part of my resulting video always loses quality, even though the resulting video has the same resolution.
Any help will be appreciated.
I am investigating the possibility of storing video streams coming from a few sources, already encoded in h264, without video transcoding, as the device I would like to use for this project won't be capable of transcoding the combined video on the fly.
What I am looking for is two or more pictures side by side (not video concatenation) packed into mp4/avi/mkv.
I believe the mkv container supports this kind of packaging, but I've not been able to find appropriate options for ffmpeg or any other tool to store it this way. What it does instead is very slow video transcoding into one big h264 stream.
If your player can handle it, just make it perform the side-by-side view. No encoding or muxing required.
mpv video player
Example using mpv:
mpv --lavfi-complex="[vid1][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
The above example assumes each input has the same height. Otherwise you will have to add the scale, scale2ref, pad, and/or crop filters. A simple example using the crop filter to remove 20 pixels from the height:
mpv --lavfi-complex="[vid1]crop=iw:ih-20[c];[c][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
See the mpv documentation and FFmpeg Filters for more info.
Just specify multiple inputs.
ffmpeg -i [input 1] -i [input 2] ... -map 0 -map 1 ... -codec copy -f matroska [output]
As for the "side-to-side" part, it's up to the player to determine the presentation. If you don't control the player and you need a specific layout or presentation, then you must "burn" all these video streams into a new one and encode it as a new single stream.
I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this
For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why, when using showwaves, the beats go past so quickly, but using showspectrum produces a video where I can see each individual beat.
ShowWaves
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4
This link will download the output of that command.
ShowSpectrum
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4
This link will download the output of that command.
I posted the simple examples because I didn't want to confuse the issue by adding all the variations I have tried.
In practice I suppose I can get away with the output from showspectrum, but I'd like to understand where/how I am thinking about this incorrectly. Thanks for any advice.
Here is a link to the source audio file.
What showwaves does is show the waveform in realtime, and the display window is 1/framerate, i.e. if the video output is 25 fps, then each frame shows the waveform of 40 ms of audio. There's no 'history' or 'memory', so you can't (directly) get a scrolling output like the one your reference video seems to show.
The workaround for this is to use the showwavespic filter to produce a single frame showing the entire waveform at a high enough horizontal resolution. Then do a scrolling overlay of that picture over a desired background, at a speed such that the scroll lasts as long as the audio.
Basic command template would be:
ffmpeg -loop 1 -i bg.png -loop 1 -i wavespic.png -i audio.mp3 \
  -filter_complex "[0][1]overlay=W-w*t/mp3dur:y=SOMEFIXEDVALUE" -shortest waves.mp4
mp3dur above should be replaced with the duration of the audio file.
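To produce the waveform image and look up the duration, something along these lines should work (a sketch; the 4000x100 showwavespic size and the filenames are assumptions):
ffmpeg -i audio.mp3 -filter_complex "showwavespic=s=4000x100" -frames:v 1 wavespic.png
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 audio.mp3
The wider the showwavespic image, the more waveform detail survives the scroll.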
I am a beginner programmer and am trying to implement ffmpeg. I am trying to convert a bunch of images to a video and add an audio background. Can anybody help me and tell me how to loop the audio as required by the length of the generated video?
P.S. The number of images varies, so can we implement something that dynamically loops the audio as required?
Use
ffmpeg -i images%d.jpg -f lavfi -i amovie=audio.mp3:loop=0,asetpts=N/SR/TB -shortest out.mp4
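If your ffmpeg build supports it, -stream_loop is an alternative (a sketch; note it must come before the input it applies to, and the 25 fps image rate is an assumption):
ffmpeg -framerate 25 -i images%d.jpg -stream_loop -1 -i audio.mp3 -shortest -c:v libx264 -pix_fmt yuv420p out.mp4
Here -stream_loop -1 repeats the audio indefinitely, and -shortest cuts the output when the image sequence ends.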
I have a generic intro sequence (no audio) and a main video clip. I want the audio from the main clip to play while the intro sequence is playing, then the video to switch from the finished intro sequence to the main video. So it's almost like playing both videos at the same time but hiding one until the other is finished. Is this possible with ffmpeg? Almost like a send-to-back function for the video of the main clip (but keeping its audio rolling so it's in sync when it shows as the intro clip finishes).
Looks like you want a J-cut. This can be done using the overlay filter.
ffmpeg -i main.mp4 -i intro.mp4 -filter_complex "[1][0]scale2ref[intro][base]; \
[base][intro]overlay=eof_action=pass[v]" -map "[v]" -map 0:a -c:a copy out.mp4
The scale2ref filter ensures that the intro is the same resolution as the main video. Then the intro is overlaid on top of the main video, in sync, and vanishes when it ends, leaving the main video on display. The audio is copied over - no processing required.
I'm attempting to write a script that will merge 2 separate video files into 1 wider one, in which both videos play back simultaneously. I have it mostly figured out, but when I view the final output, the video that I'm overlaying is extremely slow.
Here's what I'm doing:
Expand the left video to the final video dimensions
ffmpeg -i left.avi -vf "pad=640:240:0:0:black" left_wide.avi
Overlay the right video on top of the left one
ffmpeg -i left_wide.avi -vf "movie=right.avi [mv]; [in][mv] overlay=320:0" combined_video.avi
In the resulting video, the playback on the right video is about half the speed of the left video. Any idea how I can get these files to sync up?
As user 65Fbef05 said, both videos must have the same framerate;
use the -r option to force the framerate, and it must be the same in both videos.
To find the framerate use:
ffmpeg -i video1
ffmpeg -i video2
and look for the line which contains "Stream #0.0: Video:"
on that line you'll find the fps of the movie.
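A more script-friendly way to read the framerate is ffprobe (a sketch, assuming a reasonably recent build; it prints the rate as a fraction such as 25/1):
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 video1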
Also, I don't know what problems you'll encounter by mixing the 2 audio tracks. For my part, I would use the audio from the movie that is overlaid on top and discard the rest.
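Putting it together, a one-pass sketch that forces a common framerate with the fps filter (the 25 fps value and the original pad/overlay geometry are assumptions; -map 1:a keeps only the overlaid clip's audio, per the suggestion above):
ffmpeg -i left.avi -i right.avi -filter_complex "[0:v]fps=25,pad=640:240:0:0:black[l];[1:v]fps=25[r];[l][r]overlay=320:0[v]" -map "[v]" -map 1:a combined_video.avi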