FFMPEG delay multiple overlay videos (with different delays) - ffmpeg

I'm trying to overlay a video (for example a report) with several other videos or images, such as a hint to the Facebook page or a hint to the website. These other videos and images are smaller than the original and sometimes transparent (RGBA).
I already tried to overlay multiple videos, which works pretty well:
ffmpeg -i 30fps_fhd.mp4 -i sample.mp4 -i timer.webm -i logo.jpg -filter_complex "overlay=x=100:y=1000,overlay=x=30:y=66:eof_action=pass,overlay=x=0:y=0" -acodec copy -t 70 out.mp4
But now I want some of the videos or images to start not at the beginning of the video, but after a period of time.
I found options like 'itsoffset' and 'setpts', but I don't know how to apply them to this multiple video/image overlay command.
Best regards, Bamba

Okay, I found out how it works. You take an input with [index:v], apply effects to it (separated by commas), and store the result under a label (append [label-name] at the end of the effect chain). Then you can apply the overlay effect by putting two labels next to each other (if a stream is unedited, use its input label, i.e. [index:v]) and writing the overlay's parameters after them.
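For illustration, a minimal sketch of one delayed overlay (the file names, the 5-second delay, and the coordinates are placeholders): setpts shifts the overlay's timestamps forward, and the overlay filter's enable option keeps it hidden until that time:
ffmpeg -i main.mp4 -i hint.mp4 -filter_complex "[1:v]setpts=PTS+5/TB[hint];[0:v][hint]overlay=x=100:y=1000:eof_action=pass:enable='gte(t,5)'[out]" -map "[out]" -map 0:a? -c:a copy out.mp4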

Related

Watermarking with scale2ref conundrum - video resizes to watermark rather than watermark scaling to fit video

Hope you can help me figure out where I am misleading myself. I am trying to watermark a bunch of videos of varying resolutions with a .png watermark file that is 1200x600. The videos range from as large as 2480x1280 to as small as 360x141.
I had originally thought that ffmpeg could handle it, but I am having issues with the conversion, and I am pretty sure it is my misunderstanding of how to use the scale2ref filter properly. The scale2ref documentation gives this example:
Scale a subtitle stream (b) to match the main video (a) in size before overlaying
'scale2ref[b][a];[a][b]overlay'
I understand that stream[b] is my watermark file, and stream[a] is my video file.
My ffmpeg command is this:
while read -r line || [[ -n "$line" ]]; do
  fn_out="w$line"
  /u2/vidmarktest/scripttesting/files_jeud8334j/dioffmpeg/ffmpeg -report -nostdin \
    -i /u2/vidmarktest/scripttesting/files_jeud8334j/"$line" \
    -i /u2/vidmarktest/mw1.1200x600.png \
    -filter_complex "scale2ref[b][a];[b][a]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2" \
    -c:v libx265 /u2/vidmarktest/scripttesting/files_jeud8334j/converted/"$fn_out"
done < videos.txt
To explain: this is going to be part of a bash script that will be cron-ed, so we can make sure all the latest submissions to the directory get watermarked.
The problem is this:
All of my converted videos are now scaled to 1200x600, rather than keeping their original resolution with the watermark being the part that is scaled to fit the video.
To note, in this section: [b][a]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2, many will say I need to switch the [a] and [b] values. When you do that, the watermark is obscured by the video, not blended in. Switching those two values puts the [b] value (the watermark) over the [a] value (the video).
Any feedback will be highly appreciated, welcomed, and graded :)
I am expecting to get the watermark to adjust to the video resolution, but am failing miserably. What do I not know about ffmpeg and scale2ref that is causing my problem?
In your command, the video is the first input and the watermark the second. scale2ref's inputs aren't explicitly set, so ffmpeg picks streams in input order, whereas you need the watermark (the stream that gets scaled) to be the first input.
Use
[1:v][0:v]scale2ref[wm][v];[v][wm]overlay=...
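Put into a complete command, that might look like this sketch (file names are placeholders; the expression w=oh*mdar:h=ih/4 follows the pattern from the scale2ref documentation and sizes the watermark to a quarter of the video height while preserving its aspect ratio). Note that -c:v libx265 must come before the output file name, otherwise ffmpeg treats it as a trailing option:
ffmpeg -i input.mp4 -i mw1.1200x600.png -filter_complex "[1:v][0:v]scale2ref=w=oh*mdar:h=ih/4[wm][v];[v][wm]overlay=x=(main_w-overlay_w)/2:y=(main_h-overlay_h)/2" -c:v libx265 out.mp4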

Changing Text on Slideshow with ffmpeg

I have a directory full of *.jpg images which I want to concatenate into a video. This works fine with the concat demuxer:
ffmpeg -f concat -safe 0 -i files.txt -c:v libx264 -pix_fmt yuv420p out.mp4
files.txt contains the list of absolute pathnames of the images. This list is created with the find command in a Linux bash shell.
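For reference, the concat demuxer expects the list in this format, one file directive per line (the paths here are placeholders):
file '/home/user/images/img001.jpg'
file '/home/user/images/img002.jpg'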
Now I want to add a text overlay, where every image shows a text representing the creation date.
I found the drawtext video filter, as in this answer: Text on video ffmpeg
However, I think I cannot set a video filter per file when using the concat demuxer; as I understand it, I can only set one filter for the whole ffmpeg call.
Is there any other way to concatenate the files to a video and add text individually to each image?
EDIT: A trivial solution is to add the text to the images first, iterating over the images. This would either irreversibly change the images or create copies and thus double the disk space requirement, even if only temporarily. It would be preferable to add the text on the fly for each frame, so that no additional disk space is required.
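One possible on-the-fly approach (a sketch, assuming each image contributes exactly one output frame): let the same script that builds files.txt also generate a chain of drawtext filters, each enabled only for its frame number n. The dates below are placeholders, and depending on your build you may need to add a fontfile= option:
ffmpeg -f concat -safe 0 -i files.txt -vf "drawtext=text='2021-03-01':x=10:y=10:fontsize=24:fontcolor=white:enable='eq(n,0)',drawtext=text='2021-03-02':x=10:y=10:fontsize=24:fontcolor=white:enable='eq(n,1)'" -c:v libx264 -pix_fmt yuv420p out.mp4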

Generate thumbnails from the middle of every scene change in a video using ffmpeg or other software

Is there a way to generate thumbnails from scene changes using ffmpeg? Instead of picking the I-frames, can it pick the middle between two I-frames? This assumes the middle of two major changes is usually the best moment to take a thumbnail from.
Yes, there is a way!
Execute this command:
ffmpeg -i yourvideo.mp4 -vf select="eq(pict_type\,I)" -vsync 0 -an thumb%03d.png
It is really simple: ffmpeg's select filter does the work and writes out one thumbnail per I-frame (note that this picks the I-frames themselves, which usually sit at scene changes, not the midpoint between them).
The items to change:
yourvideo.mp4 (change this to your video input)
thumb%03d.png (this is your output pattern: thumb001.png, thumb002.png, ...)
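If you want the thumbnails driven by detected scene changes rather than by I-frames, a variant using the select filter's scene-detection score may be closer to what the question asks for (the 0.4 threshold is an assumption; tune it for your material):
ffmpeg -i yourvideo.mp4 -vf "select='gt(scene,0.4)'" -vsync vfr thumb%03d.png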

FFmpeg fade effects between frames

I want to create a slideshow of my images with fade-in and fade-out transitions between them, and I am using the FFmpeg fade filter.
If I use command:
ffmpeg -i input.mp4 -vf "fade=in:5:8" output.mp4
to create the output video with a fade effect, then I get an output video whose first 5 frames are black, after which the images are shown with a fade-in effect; but I want a fade-in/fade-out effect between frame changes.
How can I do that?
Please suggest a solution for a CentOS server, because I am using FFmpeg only on that server.
To create a video with fade effects, break the video into parts and create a separate video for each image. For instance, if you have 5 images: first create 50-60 copies of each image and turn each set into a video:
$command = "ffmpeg -r 20 -i images/%d.jpg -y -s 320x240 -aspect 4:3 slideshow/frame.mp4";
exec($command." 2>&1", $output);
This gives you 5 different videos. Then make 10-12 copies of each of those five images and again create separate videos, this time with fade effects:
ffmpeg -i input.mp4 -vf "fade=in:5:8" output.mp4
After this you will have videos in order: the video for image 1 and its fade effect, then image 2 and its fade effect, and so on. Now combine those videos in that order to get the whole video.
For combining the videos you need:
$command = "cat pass.mpg slideshow/frame.mpg > final.mpg";
This joins the videos using cat: you convert the parts to mpg, join them, and then re-encode the result to mp4 or avi to view it properly. The intermediate mpg videos may not play correctly, so do not worry about that; once converted to mp4, the result will work fine.
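As an aside, a more robust way to join the parts than cat is ffmpeg's own concat demuxer (a sketch; list.txt is a placeholder list of the part files in order, and -c copy assumes all parts were encoded with identical settings):
ffmpeg -f concat -safe 0 -i list.txt -c copy final.mp4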
You can make a slideshow with crossfading between the pictures by using the framerate filter. In the following example, 0.25 is the framerate used for reading in the pictures, i.e. 4 seconds for each picture. The fps parameter sets the output framerate.
The parameters interp_start and interp_end control the fading effect:
interp_start=128:interp_end=128 means no fading at all.
interp_start=0:interp_end=255 means continuous fading: when one picture has faded out and the next has fully faded in, the third immediately begins to fade in. There is no pause for showing the second picture.
interp_start=64:interp_end=191 means half of the time is a pause for showing the pictures and the other half is fading. Unfortunately it won't be a full fade from 0 to 100%, but only from 25% to 75%. That's not exactly what you might want, but better than no fading at all.
ffmpeg -framerate 0.25 -i IMG_%3d.jpg -vf "framerate=fps=30:interp_start=64:interp_end=192:scene=100" test.mp4
You can use gifblender to create the blended intermediate frames from your images and then convert those to a movie with ffmpeg.

Overlaying video with ffmpeg

I'm attempting to write a script that will merge 2 separate video files into 1 wider one, in which both videos play back simultaneously. I have it mostly figured out, but when I view the final output, the video that I'm overlaying is extremely slow.
Here's what I'm doing:
Expand the left video to the final video dimensions
ffmpeg -i left.avi -vf "pad=640:240:0:0:black" left_wide.avi
Overlay the right video on top of the left one
ffmpeg -i left_wide.avi -vf "movie=right.avi [mv]; [in][mv] overlay=320:0" combined_video.avi
In the resulting video, the playback on the right video is about half the speed of the left video. Any idea how I can get these files to sync up?
As user 65Fbef05 said, both videos must have the same framerate.
Force the same framerate on both inputs (for example with the -r input option); it must be identical for both videos.
To find the framerate use:
ffmpeg -i video1
ffmpeg -i video2
and look for the line which contains "Stream #0.0: Video:"; on that line you'll find the movie's fps.
Also, I don't know what problems you'll encounter by mixing the 2 audio tracks. For my part, I would use the audio from the movie that is overlaid on top and discard the rest.
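A sketch of how both steps could be collapsed into one command that also forces a common framerate (25 fps is an assumption; use whatever rate fits your sources):
ffmpeg -i left.avi -i right.avi -filter_complex "[0:v]pad=640:240:0:0:black,fps=25[bg];[1:v]fps=25[ov];[bg][ov]overlay=320:0" combined_video.avi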
