ffmpeg watermark: scale2ref output can't be used in second overlay

I was able to add a watermark at two positions (top left and bottom right) of a video, scaling the image height to a tenth of the video height, in one command:
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[1:v][flag]scale2ref=oh*mdar:ih/10[logo-out2][video-out2];[video-out2][logo-out2]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
But the above command is redundant, so I removed the second scale2ref:
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[flag][logo-out]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
But sadly, an error occurs:
[mov,mp4,m4a,3gp,3g2,mj2 # 0x7fb195013c00] Invalid stream specifier: logo-out.
Last message repeated 1 times
Stream specifier 'logo-out' in filtergraph description [1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[flag][logo-out]overlay=W-w-10:H-h-10 matches no streams
I know the error occurs because the first overlay doesn't set an image output specifier, but it seems that isn't possible; as far as I know, overlay only outputs a video stream.
How can I reuse the [logo-out] stream that scale2ref outputs in the second overlay?

An output generated inside a filtergraph can only be consumed once. To reuse it, run it through the split filter first:
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[logo-out]split=2[logo-left][logo-right];[video-out][logo-left]overlay=10:10[flag];[flag][logo-right]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
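As a sketch, the corrected filtergraph can be assembled programmatically, which makes the one-consumer rule visible: every labeled pad is produced once and consumed once, so the scaled logo must pass through split=2 before feeding both overlays. The helper name below is hypothetical.

```python
# Sketch: build the corrected filter_complex string from the answer above.
# Inside a filtergraph each labeled pad may be consumed only once, so the
# scaled logo is duplicated with split=2 before the two overlay filters.

def build_watermark_graph() -> str:
    steps = [
        # Scale the logo to a tenth of the video height, preserving aspect ratio.
        "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out]",
        # Duplicate the scaled logo so both overlays can consume a copy.
        "[logo-out]split=2[logo-left][logo-right]",
        # Top-left watermark.
        "[video-out][logo-left]overlay=10:10[flag]",
        # Bottom-right watermark.
        "[flag][logo-right]overlay=W-w-10:H-h-10",
    ]
    return ";".join(steps)

graph = build_watermark_graph()
```

Each of the split outputs appears exactly twice in the resulting string: once where it is produced and once where an overlay consumes it.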

Related

ffmpeg make video from audio, img and vtt subtitles

I'm trying to make a video from an image, an audio file, and a VTT subtitle file. This is my command:
ffmpeg -i F:\speech\media/waves/to_be_translated/python_example_test_GUyqHnh.wav -loop 1 -i F:\speech\waves/img.jpg -vf -filter_complex subtitles=F:\\speech\\media/typedVideos/combinedVideoTyped/zcjgtmrdlscqzina\\subtitles.vtt -map -shortest F:\speech\media/typedVideos/combinedVideoTyped/zcjgtmrdlscqzina\exported-video.mp4
but it gives this error:
Output #0, webvtt, to 'subtitles=F:\\speech\\media/typedVideos/combinedVideoTyped/zcjgtmrdlscqzina\\subtitles.vtt':
Output file #0 does not contain any stream
what am I doing wrong?
You have to tell ffmpeg what to do with the inputs.
There are many ways to skin a cat; here is one simplistic way:
ffmpeg -i input.jpg -f lavfi -i color=size=640x480:color=black -i 'input.wav' -filter_complex "[1][0]overlay[out];[out]subtitles='input.srt'[vid]" -map [vid] -map 2 -shortest -preset ultrafast output.mp4
We specify 3 inputs: the image, a Libavfilter virtual input device, and the audio.
The virtual device generates a black video of the specified size.
The image is overlaid on top of that video, and the subtitles are burned into the resulting output.
Finally we map the finished video together with the audio into the final output file, an .mp4 that ends when its shortest input ends. In this case that is the audio, since the image and the generated video have no inherent length.
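As a sketch, the answer's command can be assembled as a subprocess-style argument list, which makes the three inputs and the two -map options explicit. The helper name is hypothetical and the file names are the answer's placeholders.

```python
# Sketch: the answer's command as a subprocess-style argument list.
# input.jpg, input.wav, input.srt and output.mp4 are placeholder names.

def build_cmd(image: str, audio: str, subs: str, out: str) -> list[str]:
    filter_complex = (
        "[1][0]overlay[out];"            # image over the lavfi black canvas
        f"[out]subtitles='{subs}'[vid]"  # burn subtitles into the result
    )
    return [
        "ffmpeg",
        "-i", image,                                            # input 0: image
        "-f", "lavfi", "-i", "color=size=640x480:color=black",  # input 1: canvas
        "-i", audio,                                            # input 2: audio
        "-filter_complex", filter_complex,
        "-map", "[vid]",   # the filtered video
        "-map", "2",       # the audio input
        "-shortest",
        "-preset", "ultrafast",
        out,
    ]

cmd = build_cmd("input.jpg", "input.wav", "input.srt", "output.mp4")
# To actually run it: subprocess.run(cmd, check=True)
```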

ffmpeg filter_complex trim out of sync

Command line:
ffmpeg -i <INPUT> -filter_complex "<FILTER_COMPLEX>" -map "[ofa]" -map "[ofv]" -acodec aac -vcodec libx264 test.mp4
FILTER_COMPLEX content:
[0:v]split=3[sv1][sv2][sv3];
[0:a]asplit=3[sa1][sa2][sa3];
[sv1]trim=start=200:duration=5,setpts=PTS-STARTPTS[ov1];[sa1]atrim=start=200:duration=5[oa1];
[sv2]trim=start=300:duration=5,setpts=PTS-STARTPTS[ov2];[sa2]atrim=start=300:duration=5[oa2];
[sv3]trim=start=400:duration=5,setpts=PTS-STARTPTS[ov3];[sa3]atrim=start=400:duration=5[oa3];
[ov1][ov2][ov3]concat=n=3:v=1:a=0[ofv];
[oa1][oa2][oa3]concat=n=3:v=0:a=1[ofa]
As a result, the output's audio is out of sync: the video is reset to 00:00:00, but the sound remains at its original time position.
So, how do I use ffmpeg to extract several clips from a video, recombine them into a new video file, and keep sound and picture synchronized?
I tried [sa1]atrim=start=200:duration=5,setpts=PTS-STARTPTS[oa1], but got an error:
Media type mismatch between the 'Parsed_atrim_4' filter output pad 0 (audio) and the 'Parsed_setpts_5' filter input pad 0 (video)
Cannot create the link atrim:0 -> setpts:0
Error initializing complex filters.
Invalid argument

ffmpeg -vf and -filter_complex: create gif with color key and limited fps

I need to create a gif file with a color key (greenscreen), 10 FPS, and a specified size. I tried to combine -vf and -filter_complex:
ffmpeg -i testdatei-c.avi -vf "fps=10,scale=320:-1:flags=lanczos" -filter_complex "[0:v]chromakey=0xFFFFFF,split[v0][v1];[v0]palettegen[p];[v1][p]paletteuse" output.gif
I get the error:
Filtergraph 'fps=10,scale=320:-1:flags=lanczos' was specified through the -vf/-af/-filter option for output stream 0:0, which is fed from a complex filtergraph.
-vf/-af/-filter and -filter_complex cannot be used together for the same stream.
All filters for a stream should be within the same filtergraph, so put everything inside -filter_complex:
ffmpeg -i testdatei-c.avi -filter_complex "[0:v]chromakey=0xFFFFFF,fps=10,scale=320:-1:flags=lanczos,split[v0][v1];[v0]palettegen[p];[v1][p]paletteuse" output.gif

I can't overlay and center a video on top of an image with ffmpeg. The output is 0 seconds long

I have an mp4 that I want to overlay on top of a jpeg. The command I'm using is:
ffmpeg -y -i background.jpg -i video.mp4 -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy output.mp4
But for some reason the output is 0 seconds long, although the thumbnail does show the first frame of the video centred on the image properly.
I have tried using -t 4 to set the output's length to 4 seconds but that does not work.
I am doing this on Windows.
You need to loop the image. Since it then loops indefinitely, you must use the shortest option in overlay so the output ends when video.mp4 ends.
ffmpeg -loop 1 -i background.jpg -i video.mp4 -filter_complex \
"overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2:shortest=1" \
-codec:a copy -movflags +faststart output.mp4
See overlay documentation for more info.
You should loop the image for the duration of the video. To do that, add -loop 1 before the input image; the image input then has an infinite duration. To bound it, specify -shortest before the output file, which trims all streams to the shortest duration among them. Alternatively, you can use -t to limit the image input to the video's length. This will do what you want.
Hope this helps!
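The two variants described above can be sketched as argument lists (helper names are hypothetical; the 4-second duration in the -t variant is an example value, not the actual length of video.mp4):

```python
# Sketch: the two variants as subprocess-style argument lists.
# background.jpg / video.mp4 are the question's file names.

BASE_FILTER = "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2"

def loop_shortest_cmd() -> list[str]:
    # Loop the image forever; overlay's shortest=1 stops when the video ends.
    return [
        "ffmpeg", "-loop", "1", "-i", "background.jpg", "-i", "video.mp4",
        "-filter_complex", BASE_FILTER + ":shortest=1",
        "-codec:a", "copy", "output.mp4",
    ]

def loop_t_cmd(seconds: int) -> list[str]:
    # Alternatively, bound the looped image input itself with -t.
    return [
        "ffmpeg", "-loop", "1", "-t", str(seconds), "-i", "background.jpg",
        "-i", "video.mp4",
        "-filter_complex", BASE_FILTER,
        "-codec:a", "copy", "output.mp4",
    ]
```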

ffmpeg audio property is blank in output video

Here's what I have to do:
I want to convert two different images into two video files (e.g. a.jpg into a.avi and b.jpg into b.avi). The videos are generated successfully, but when I right-click a file and check the Details tab, there are no audio properties.
Here's my command to convert an image to a video:
ffmpeg -loop 1 -i bCopy.jpg -t 30 -q:v 0 -r 24 output_a.avi
Then, since I already have one video file (middle.avi), I use ffmpeg's concat protocol to concatenate the three video files (a.avi, middle.avi, b.avi).
Here's my concat command:
ffmpeg -i "concat:a.avi|middle.avi|b.avi" -vcodec copy 103_n4_2.avi
After this I get the output file, but there is no audio in it, even though middle.avi contains audio.
PS: a.avi and b.avi (which I generated from images) contain no audio; only middle.avi contains audio.
I think the audio track is completely omitted. I was not able to test it, but it seems you need to map the audio stream manually to the output and delay it by the length of your first image.
ffmpeg -itsoffset 1 -i middle.avi -i "concat:a.avi|middle.avi|b.avi" -map 1:0 -map 0:1 -vcodec copy -acodec copy 103_n4_2.avi
Thanks for the help, but I have found the solution to this particular issue.
Here are the steps:
1. Generate a blank MP3 of the first slide's duration.
- ffmpeg -f lavfi -i aevalsrc=0 -t 31 -q:a 9 -acodec libmp3lame out.mp3
2. Extract the audio from the middle slide.
- ffmpeg -i middle.avi -acodec copy middle.mp3
3. Concatenate these two audio files.
- ffmpeg -i "concat:out.mp3|middle.mp3" -acodec copy 103_n4_2.mp3
4. Now mux the audio with the video (that we generated by concatenation).
- ffmpeg -i 103_n4_3.avi -i 103_n4_2.mp3 -c:v copy -c:a aac -strict experimental 103_n4_5.avi
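The four steps above can be sketched as an ordered list of ffmpeg invocations to run in sequence (file names are taken verbatim from the steps; the helper name is hypothetical):

```python
# Sketch: the four-step workaround above as an ordered list of ffmpeg
# commands, run one after another.
import subprocess

STEPS = [
    # 1. Generate a silent MP3 covering the first slide's duration.
    ["ffmpeg", "-f", "lavfi", "-i", "aevalsrc=0", "-t", "31",
     "-q:a", "9", "-acodec", "libmp3lame", "out.mp3"],
    # 2. Extract the audio from the middle slide.
    ["ffmpeg", "-i", "middle.avi", "-acodec", "copy", "middle.mp3"],
    # 3. Concatenate the two audio files.
    ["ffmpeg", "-i", "concat:out.mp3|middle.mp3", "-acodec", "copy",
     "103_n4_2.mp3"],
    # 4. Mux the concatenated audio with the concatenated video.
    ["ffmpeg", "-i", "103_n4_3.avi", "-i", "103_n4_2.mp3",
     "-c:v", "copy", "-c:a", "aac", "-strict", "experimental",
     "103_n4_5.avi"],
]

def run_all(steps=STEPS):
    for cmd in steps:
        subprocess.run(cmd, check=True)  # stop on the first failure
```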