FFmpeg not moving text

So I'm trying to add scrolling text to a video using ffmpeg (using Go to execute the ffmpeg command).
I am using this command to test how text moves:
"ffmpeg -y -i %s -filter_complex \"[0]split[txt][orig];[txt]drawtext=fontfile=font/Amejo.ttf:fontsize=20:fontcolor=white:x=(w-text_w)/2+20:y=t:text='" + strings.Join(strings.Split(verse.Translation, " "), "\n") + "':bordercolor=black:line_spacing=20:borderw=3[txt];[orig]crop=iw:50:0:0[orig];[txt][orig]overlay\" -c:v libx264 -y -preset ultrafast %s"
I have set y=t, expecting it to move the text. However, the text just stays stationary, and I have no idea why.
This is an example output (it's a video, but this single frame is all it shows for the entire video):
Thanks
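One thing worth checking: in drawtext, t is the elapsed time in seconds, so y=t moves the text down by only one pixel per second, which over a short clip is easy to mistake for stationary; the 50-pixel crop overlaid at the default position 0:0 will also cover the text while y is still small. A minimal scroll test, with in.mp4 and out.mp4 as placeholder names (y=h-100*t scrolls the text upward at 100 px/s):
ffmpeg -y -i in.mp4 -vf "drawtext=fontfile=font/Amejo.ttf:fontsize=20:fontcolor=white:x=(w-text_w)/2:y=h-100*t:text='hello world'" -c:v libx264 -preset ultrafast out.mp4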

Related

ffmpeg watermark: scale2ref output can't be used in second overlay

I was able to add a watermark at two positions (top left and bottom right) of a video, scaling the image height to a tenth of the video height, in one command:
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[1:v][flag]scale2ref=oh*mdar:ih/10[logo-out2][video-out2];[video-out2][logo-out2]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
But the above command is redundant, so I removed the second scale2ref:
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[flag][logo-out]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
But sadly, an error occurs:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fb195013c00] Invalid stream specifier: logo-out.
Last message repeated 1 times
Stream specifier 'logo-out' in filtergraph description [1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[video-out][logo-out]overlay=10:10[flag];[flag][logo-out]overlay=W-w-10:H-h-10 matches no streams
I know the error occurs because the first overlay didn't set a separate image output specifier, but it seems that can't be done? As far as I know, overlay can only set a video stream specifier.
How can I use the [logo-out] stream output by scale2ref in the second overlay?
An output generated inside a filtergraph can only be consumed once. To reuse it, split it first.
ffmpeg -hide_banner -i /path/to/input.mp4 -i /path/to/watermark.jpg -filter_complex "[1:v][0:v]scale2ref=oh*mdar:ih/10[logo-out][video-out];[logo-out]split=2[logo-left][logo-right];[video-out][logo-left]overlay=10:10[flag];[flag][logo-right]overlay=W-w-10:H-h-10" -c:a copy /path/to/output.mp4
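The same split pattern applies to any filtergraph label that is needed twice, not just scale2ref outputs. A minimal sketch, assuming a watermark.png that already has the right size:
ffmpeg -i input.mp4 -i watermark.png -filter_complex "[1:v]split=2[logo1][logo2];[0:v][logo1]overlay=10:10[tmp];[tmp][logo2]overlay=W-w-10:H-h-10" -c:a copy output.mp4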

ffmpeg - adding Dynamic logo & random position watermark to video?

I am making a tutorial to send to my users by mail, and to discourage piracy I thought of putting a watermark (logo.png) at random positions on the videos, changing at intervals.
I tried using the command from ffmpeg - Dynamic letters and random position watermark to video?:
ffmpeg -i input.mp4 \
-vf \
"drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow#0.5:text='studentname': \
x=if(eq(mod(t\,30)\,0)\,rand(0\,(W-tw))\,x): \
y=if(eq(mod(t\,30)\,0)\,rand(0\,(H-th))\,y)" \
-c:v libx264 -crf 23 -c:a copy output.mp4
But it gave me an error:
[NULL @ 0x55c812525600] Unable to find a suitable output format for '\'
\: Invalid argument
Remove the backslashes (\) at the end of each line and make the command one line:
ffmpeg -i input.mp4 -vf "drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow@0.5:text='studentname':x=if(eq(mod(t\,30)\,0)\,rand(0\,(W-tw))\,x):y=if(eq(mod(t\,30)\,0)\,rand(0\,(H-th))\,y)" -c:v libx264 -crf 23 -c:a copy output.mp4
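If you do want to keep the multi-line form in a POSIX shell, each backslash has to be the very last character on its line, with no trailing spaces; a sketch of the same command split only between options (note fontcolor=yellow@0.5 for 50% alpha):
ffmpeg -i input.mp4 \
-vf "drawtext=fontfile=font.ttf:fontsize=80:fontcolor=yellow@0.5:text='studentname':x=if(eq(mod(t\,30)\,0)\,rand(0\,(W-tw))\,x):y=if(eq(mod(t\,30)\,0)\,rand(0\,(H-th))\,y)" \
-c:v libx264 -crf 23 -c:a copy output.mp4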

concatenate audio files with an image

I am trying to concatenate multiple audio files and a single image into one video file using one command.
I have a list of MP3 files and a playlist file (.m3u) in a directory.
I managed to do this, but my solution is bad:
1. reading the playlist file and creating a new .txt in the format ffmpeg requires
2. concatenating the audio files into one .mp3 using the .txt
3. concatenating the large audio file and the static image into a video
This creates two unnecessary files that I have to delete.
I tried a different command
ffmpeg -loop 1 -framerate 1 -i myImage.jpg -i file1.mp3 -i file2.mp3 -i file3.mp3 -filter_complex '[0:0][1:0][2:0]concat=n=3:v=0:a=1' -tune stillimage -shortest output.mp4
however I'm getting an error:
Error initializing complex filters.
Invalid argument
Another kick in the nuts is that the system I'm working on has spaces in the folder names.
I tried using -i "concat:file1.mp3|file2.mp3|..."; however, I cannot use double quotes to quote out the path, so I get an invalid argument error.
Thank you very much for your help.
Method 1: concat demuxer
Make input.txt containing the following:
file 'file1.mp3'
file 'file2.mp3'
file 'file3.mp3'
Run ffmpeg:
ffmpeg -loop 1 -framerate 1 -i myImage.jpg -f concat -i input.txt -filter_complex "[0]scale='iw-mod(iw,2)':'ih-mod(ih,2)',format=yuv420p[v]" -map "[v]" -r 15 -tune stillimage -map 1:a -shortest -movflags +faststart output.mp4
All MP3 files being input to the concat demuxer must have the same channel layout and sample rate. If they do not, first convert them with the -ac and -ar options so they all match, as sketched below.
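A sketch of conforming one file, assuming 44.1 kHz stereo as the common target (use whatever rate and layout the other files share):
ffmpeg -i file1.mp3 -ar 44100 -ac 2 file1-conformed.mp3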
Method 2: concat filter
Update: There seems to be a bug with -shortest not working with the concat filter (I keep forgetting about that). See the method above using the concat demuxer, or replace -shortest with -t; the value for -t should equal the total duration of all three MP3 files (see the sketch after the command below).
ffmpeg -loop 1 -framerate 1 -i myImage.jpg -i file1.mp3 -i file2.mp3 -i file3.mp3 -filter_complex "[0]scale='iw-mod(iw,2)':'ih-mod(ih,2)',format=yuv420p[v];[1:a][2:a][3:a]concat=n=3:v=0:a=1[a]" -map "[v]" -r 15 -map "[a]" -tune stillimage -shortest -movflags +faststart output.mp4
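As a sketch of the -t workaround mentioned above, assuming the three MP3 files total 300 seconds (a made-up value; substitute the real total):
ffmpeg -loop 1 -framerate 1 -i myImage.jpg -i file1.mp3 -i file2.mp3 -i file3.mp3 -filter_complex "[0]scale='iw-mod(iw,2)':'ih-mod(ih,2)',format=yuv420p[v];[1:a][2:a][3:a]concat=n=3:v=0:a=1[a]" -map "[v]" -r 15 -map "[a]" -tune stillimage -t 300 -movflags +faststart output.mp4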
Option descriptions
scale filter makes the image width and height even, which is required when outputting YUV 4:2:0 with libx264.
format filter sets chroma subsampling to 4:2:0; otherwise libx264 will try to minimize subsampling, and most players can only handle 4:2:0.
concat filter accepts file1.mp3, file2.mp3, and file3.mp3 as inputs. Your original command was trying to concatenate the video with the audio, resulting in Invalid argument.
-map "[v]" chooses the video output from -filter_complex.
-r 15 sets the output frame rate to 15 because most players can't handle 1 fps. This is faster than setting the input -framerate to 15.
-map "[a]" chooses the audio output from -filter_complex.
-map 1:a chooses the audio from input #1 (the second input as counting starts from 0).
-movflags +faststart moves some data from the end of the MP4 output file to the beginning after encoding finishes. This allows playback to begin sooner; otherwise the complete file would have to be downloaded first.
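Finally, to find the value for -t in Method 2, ffprobe can print each file's duration in seconds; a minimal sketch (sum the three results):
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 file1.mp3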

Video Merging Issue using ffmpeg in C#

I am using FFmpeg to merge more than two videos, but I am facing an issue during the merging. The scenario is that I create a video from images and then merge that video with a mobile-recorded video. The merging works, but the sound of the mobile video does not. The video-merging command is as follows:
" -f concat -safe 0 -i " + Path + " -c copy " + output
Path is the .txt file in which the file paths are written.
The command to make the video from images is as follows:
" -y -f image2 -s 1920x1080 -r 1/3 -i " + filepath + " -vf fps=25 -pix_fmt yuv420p " + output;
filepath is the folder containing the images, named img000.png, img001.png, and so on.
I am adding a sound file to the video that I create from the images. The command to add the sound is as follows:
" -i inputFile -i audioiFile -c:a copy -codec:a Aac -b:a 192 k -strict experimental -shortest output"

FFmpeg video overlay

I am trying to create a video output from multiple video cameras.
Following the example given in Presenting more than 2 videos using FFmpeg and other similar examples, I'm getting the error
Output pad "default" for the filter "src" of type "buffer" not connected to any destination
when I run
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih[a];[a][1:0]overlay=w[b];[b][2:0]overlay=w:h" -shortest output.mp4
I'm not really sure what this means or how to fix it.
Any help would be greatly appreciated!
Thanks.
When using the pad filter, you have to specify the size of the output image and where you want to place the input image:
[0:0]pad=iw*2:ih:0:0
Tested under Windows 7 with files of the same size:
ffmpeg -i out.avi -i out.avi -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
And with a webcam capture (vfwcap) plus a still picture (as I have only one webcam). BTW, you can see how to scale one of the sources to fit the target (in case your sources have different resolutions):
ffmpeg -y -f vfwcap -r 10 -i 0 -loop 1 -i photo.jpg -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[1:0]scale=640:480[b];[a][b]overlay=w" -shortest output.mp4
Under Linux:
ffmpeg -i /dev/video1 -i /dev/video0 -filter_complex "[0:0]pad=iw*2:ih:0:0[a];[a][1:0]overlay=w" -shortest output.mp4
If it doesn't work, make a simple recording of video1 and then of video0, and check their properties (type, resolution, fps):
ffmpeg -i /dev/video1 -shortest output1.mp4
ffmpeg -i output1.mp4
If you still have issues, update your question with the ffmpeg console output (as text) for the video1 and video0 captures, and also for the call with the overlay.
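Under Linux it can also help to list what each capture device actually supports before debugging the filtergraph; a sketch assuming the v4l2 input device:
ffmpeg -f v4l2 -list_formats all -i /dev/video0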
