ffmpeg overlay duration issue

I am creating a preset (using AnotherGUI) to burn in an overlay movie:
-i "<SourceFileName>"
-i "Pathto\overlay1080.mov"
-filter_complex overlay=shortest=0
-y
-vcodec prores -profile:v 3
-qscale:v 1
-acodec pcm_s16le -ar 48000 -ac 2
"<OutputPath>\forCompFolder\<OutputFileName>.mov"
The compositing is fine, but the overlay video is 999 frames long and the first input video is shorter. When I render, the output takes on the length of the overlay video. What am I doing wrong?
Thanks in advance.
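A likely fix, going by the overlay filter's documented shortest option: shortest=0 (the default) keeps the output running until the longest video input ends, while shortest=1 ends the composite when the shortest input, i.e. your first one, ends:
-filter_complex overlay=shortest=1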

Related

ffmpeg duplicates image overlay (place it twice to the output)

I need a GIF overlay (an equaliser line) on a PNG background, mixed with audio into a video output.
The last remaining problem is that the GIF overlay is placed twice in the output video: once at coordinates 0:0 and a second time at 50:262, which is the position specified in the one overlay.
ffmpeg -i "audio.mp3" -ignore_loop 0 -i "anim-eq.gif" -loop 1 -i "bg.png" -filter_complex "[2]scale=w=1080:h=608,overlay=0:0[vt]; [1]scale=w=350:h=84,[vt]overlay=50:262" -c:a aac -ab 64k -ac 2 -ar 44100 -c:v libx264 -shortest "output.mp4"
Thank you for help.
You have two overlay filters in your graph. There should be only one, taking the scaled background and the scaled GIF as its two inputs:
-filter_complex "[2]scale=w=1080:h=608[vt];[1]scale=w=350:h=84[eq];[vt][eq]overlay=50:262"

ffmpeg: concatenation images and videos

I am trying to combine the files img1.png and video1.ts into a single movie. Everything works correctly except the audio: if the first file in the movie is img1.png, there is no audio for video1.ts. If the first file is video1.ts, everything works as expected.
What I do:
1) create a video file from img1.png:
ffmpeg -loop 1 -i img1.png -c:v libx264 -t 30 -pix_fmt yuv420p img.ts
2) concatenation:
ffmpeg -i "concat:img.ts|video1.ts" -c copy -bsf:a aac_adtstoasc res.mp4
What should I do to keep the audio for video1.ts?
Thanks in advance!
You'll need to add a dummy audio stream with the same properties as the audio stream in the video file.
So, if the main audio is AAC, stereo, 44100 Hz, you would use
ffmpeg -loop 1 -i img1.png -f lavfi -i anullsrc -pix_fmt yuv420p -c:v libx264 -c:a aac -ar 44100 -ac 2 -t 30 img.ts
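anullsrc defaults to stereo at 44100 Hz; if the main file's audio differs, you can pin the dummy stream's properties explicitly (a variant sketch):
ffmpeg -loop 1 -i img1.png -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -pix_fmt yuv420p -c:v libx264 -c:a aac -ar 44100 -ac 2 -t 30 img.ts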

ffmpeg audio watermark at specific time

I'm looking for a way to add an audio watermark, at a specific time, to a video file (with existing audio). Something like:
ffmpeg -i mainAVfile.mov -i audioWM.wav -filter_complex "[0:a][1:a] amix=inputs=2:enable='between(t,9,10)' [aud]; [0:v][aud]" -c:v libx264 -vf "scale=1280:720:sws_dither=ed:flags=lanczos, setdar=16:9" -c:a libfdk_aac -ac 2 -ab 96k -ar 48000 -af "aformat=channel_layouts=stereo, aresample=async=1000" -threads 0 -y output.mp4
The above command gives me this error: Timeline ('enable' option) not supported with filter 'amix'. amerge didn't work either. I get a bit lost with the filter_complex syntax, specifically with the following conditions:
On the main AV file, both the audio and video tracks are filtered
The watermark should sit between the 9th and 10th second (I already generated a 1-second, 10k tone file)
The watermark needs to survive the subsequent audio transcode
Use
ffmpeg -i mainAVfile.mov -i audioWM.wav
-filter_complex
"[0:a]aformat=channel_layouts=stereo,aresample=async=1000[main];
[1:a]atrim=0:1,adelay=9000|9000[wm];[main][wm]amix=inputs=2"
-vf "scale=1280:720:sws_dither=ed:flags=lanczos,setdar=16:9" -c:v libx264
-c:a libfdk_aac -ac 2 -ar 48000 -b:a 96k
-threads 0 -y output.mp4
It's preferable to perform all filtering in a single filtergraph, but I've kept the video filter as-is.
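For reference, the single-filtergraph form would look like this (a sketch; the stream labels [v] and [a] are my own):
ffmpeg -i mainAVfile.mov -i audioWM.wav
-filter_complex
"[0:v]scale=1280:720:sws_dither=ed:flags=lanczos,setdar=16:9[v];
[0:a]aformat=channel_layouts=stereo,aresample=async=1000[main];
[1:a]atrim=0:1,adelay=9000|9000[wm];[main][wm]amix=inputs=2[a]"
-map "[v]" -map "[a]" -c:v libx264
-c:a libfdk_aac -ac 2 -ar 48000 -b:a 96k
-threads 0 -y output.mp4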

Creating video from audio and resized image using FFMPEG

I'm trying to create an MP4 video from an MP3 and an image with ffmpeg. The video should be 640x360 with a black background, and the image should be resized to fit within those dimensions, centred in the middle. The video's length must match the MP3's length.
It's basically video creation for YouTube from a song and an artwork.
For now I was able to achieve this with 3 steps:
resize image:
-i %image% -vf scale='if(gt(a,4/3),640,-1)':'if(gt(a,4/3),-1,360)' %resized_image%
create a music video with black background:
-f lavfi -i color=s=640x360 -i %audio_file% -c:v libx264 -s:v 640x360 -c:a aac -strict experimental -b:a 320k -shortest -pix_fmt yuv420p %video%
put the resized image centred in the video:
-i %video% -i %resized_image% -filter_complex "overlay=(W-w)/2:(H-h)/2" -codec:a copy %final_video%
Is it possible to achieve all this with one ffmpeg command?
Single command would be
ffmpeg -loop 1 -i image -i audio
-vf scale='if(gt(a,4/3),640,-1)':'if(gt(a,4/3),-1,360)',pad=640:360:(ow-iw)/2:(oh-ih)/2,format=yuv420p
-c:v libx264 -c:a aac -b:a 320k -strict -2 -shortest final.mp4
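To verify the output length matches the source mp3, a quick check with ffprobe (run the same command on the mp3; the two should agree to within a frame or so, since -shortest ends the video with the audio):
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 final.mp4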

Use FFmpeg to combine dshow audio capture with still image

On Windows I am trying to capture DirectShow audio and combine it with a still image. I have come up with the following command:
ffmpeg -f dshow -i audio="Microphone (Conexant SmartAudio HD)" -loop 1 -i black2.png -b:a 30k -ac 1 -acodec libfdk_aac -vcodec libx264 -b:v 60k -shortest test.mp4
This nearly works: a video with the image and audio is produced, but the video output is created at a much faster rate than the captured audio. So if the audio is captured for 1 minute, a 5-minute video is produced instead of a 1-minute one. The audio plays for the first minute and there is no audio for the remaining 4 minutes; the image displays throughout the video.
Any help appreciated, Thank you.
Try
ffmpeg -f dshow -i audio="Microphone (Conexant SmartAudio HD)" -loop 1 -re -i black2.png -b:a 30k -ac 1 -acodec libfdk_aac -vcodec libx264 -b:v 60k -shortest test.mp4
The -re flag throttles the speed at which FFmpeg reads the looped image input to realtime, so the video track stays in step with the live audio capture.
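Note that libfdk_aac is absent from most prebuilt FFmpeg binaries for licensing reasons; if your build lacks it, the built-in AAC encoder should be a drop-in substitute here (a sketch, same device name assumed):
ffmpeg -f dshow -i audio="Microphone (Conexant SmartAudio HD)" -loop 1 -re -i black2.png -b:a 30k -ac 1 -acodec aac -vcodec libx264 -b:v 60k -shortest test.mp4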
