I'm currently trying to learn everything related to videos and encountered a problem that I need help with.
The question is: how can I save the difference between two videos to a separate file with ffmpeg?
For example, here is the ffplay command I'm experimenting with:
(Source: https://superuser.com/questions/854543/how-to-compare-the-difference-between-2-videos-color-in-ffmpeg)
ffplay -f lavfi "movie=left.mp4,setpts=PTS-STARTPTS,split=3[a0][a1][a2];
movie=right.mp4,setpts=PTS-STARTPTS,split[b0][b1];
[a0][b0]blend=c0_mode=difference[y];
[a1]lutyuv=y=val:u=128:v=128[uv];
[y][uv]mergeplanes=0x001112:yuv420p,pad=2*iw:ih:0:0[down];
[a2][b1]hstack[up];[up][down]vstack"
In this case, I would want the bottom-left video (the difference) saved to a new file.
Can someone help me put together the right ffmpeg filtergraph and explain how ffmpeg processes it?
Your command, modified to output only the bottom-left (difference) video:
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0][1]blend=c0_mode=difference[y];[0]lutyuv=y=val:u=128:v=128[uv];[y][uv]mergeplanes=0x001112:yuv420p[v]" -map "[v]" output.mp4
See the documentation for the blend, lutyuv, and mergeplanes filters.
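Since the question also asks how ffmpeg processes this, here is the same command as a sketch with the graph assembled step by step in a shell variable (the labels y, uv, and v are arbitrary):
# difference the two inputs; c0_mode applies the mode to plane 0 (luma) only
graph="[0][1]blend=c0_mode=difference[y]"
# keep the first input's luma unchanged, set its chroma planes to neutral gray (128)
graph="$graph;[0]lutyuv=y=val:u=128:v=128[uv]"
# assemble the output: plane 0 (Y) from [y], planes 1 and 2 (U/V) from [uv]
graph="$graph;[y][uv]mergeplanes=0x001112:yuv420p[v]"
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "$graph" -map "[v]" output.mp4
The grayscale result comes from pairing blend's luma difference with the neutral chroma produced by lutyuv.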
I am trying to add an overlay to a video using the following command:
ffmpeg -y -i "$videoPath" -i "$overlayPath" -filter_complex "[0:v] [1:v] overlay=$overlayPosition" -pix_fmt yuv420p -c:a copy "$outputPath"
However, I would like to resize the overlay to some arbitrary resolution (without keeping proportions). Although I followed a couple of similar solutions from SO (like FFMPEG - How to resize an image overlay?), I am not quite sure about the meaning of the parameters or what I need to add in my case.
I would need to add something like (?)
[1:v]scale=360:360[z] [1:v]overlay=$overlayPosition[z]
This doesn't seem to work so I'm not sure what I should be aiming for.
I would appreciate any assistance, perhaps with some explanation.
Thanks!
You have found all the parts. Let's bring them together:
ffmpeg -i "$videoPath" -i "$overlayPath" -filter_complex "[1:v]scale=360:360[z];[0:v][z]overlay[out]" -map "[out]" -map "0:a" "$outputPath"
Explanation:
Here we execute two filters within the -filter_complex parameter, separated by a semicolon (;).
First, we scale the second video input ([1:v]) to a new resolution and store the output under the link label [z] (you can use any name here).
Second, we place the overlay ([z]) on top of the first input video ([0:v]) at $overlayPosition and store the output under the label [out].
Now it's time to tell ffmpeg what it should pack into our output file:
-map "[out]" (for the video)
-map "0:a" (for the audio of the first input file)
I'm currently using this command:
ffmpeg -f concat -safe 0 -i info.txt -s 1280x720 -crf 24 output.mp4
to join all the videos in a folder. Before running the command I entered a file '...' line for each video into info.txt. This works perfectly, but I would like a fade effect (crossfade) between all the videos. How is this possible? I tried adding the following argument, which I found online in an old post, but it didn't work.
-filter_complex xfade=offset=4.5:duration=1
If anyone has a simple way of doing it, please let me know. I'm using the latest FFmpeg, so all features should be available.
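One thing to note: xfade takes exactly two video inputs, so it cannot simply be attached to the single stream the concat demuxer produces; the clips have to be chained pairwise with -filter_complex instead. A minimal two-file sketch (hypothetical names; both clips are assumed to have the same resolution and frame rate, and the fade starts 4.5 seconds into the first clip and lasts 1 second):
ffmpeg -i clip1.mp4 -i clip2.mp4 -filter_complex "[0:v][1:v]xfade=transition=fade:duration=1:offset=4.5[v]" -map "[v]" output.mp4
Audio would need a separate acrossfade filter if the clips have sound.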
Is there some way to use libav/avconv to duplicate the effect of the tile filter in FFmpeg?
I'm trying to create a strip of images from left to right with one image for every ten seconds of video input.
My plan is to first generate the images and then create the image strip. Preferably I want to use libav over ffmpeg. So far I have created this:
avconv -i video.mp4 -vf scale=320:-1,fps=1/10 -q:v 6 img%03d.jpg
which creates the images. But then I only know how to create the image strip with ffmpeg, using:
ffmpeg -i img%03d.jpg -filter_complex tile=6x1 output.jpg
So if anyone has any tips on how to rewrite just the second command, or both, to use avconv, I welcome any advice :)
As libav/avconv did not have any filters supporting my requirements in an easy way, switching to a static build of ffmpeg was the simplest solution.
The commands then became:
ffmpeg -i video.mp4 -vf scale=320:-1,fps=1/10 -q:v 6 img%03d.jpg
and
ffmpeg -i img%03d.jpg -filter_complex tile=6x1 output.jpg
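The two steps can also be collapsed into a single command (a sketch assuming about a minute of input so that six sampled frames fill the strip; tile emits one output frame per six input frames, and -frames:v 1 keeps just the first strip):
ffmpeg -i video.mp4 -vf "scale=320:-1,fps=1/10,tile=6x1" -q:v 6 -frames:v 1 output.jpg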
Hi everyone,
I want to add a watermark (a picture) to a video, but I'm running into a problem. This is my command:
c:\ffmpeg.exe -y -i c:\ffmpeg\input\walk.mp4 -acodec copy -b 300k -vf "movie=w1.jpg [watermark];[in][watermark] overlay=5:5 [out]" c:\ffmpeg\output\walk.mp4
What am I doing wrong?
You can use the overlay filter, but first you need to use a recent build because the version you are using is considered to be absolutely ancient due to how active the FFmpeg project is. You can get builds for Windows at Zeranoe FFmpeg builds.
Now that you are not using a graybeard ffmpeg, here is the most basic example:
ffmpeg -i background.avi -i watermark.jpg -filter_complex overlay output.mp4
The overlay filter documentation will show how to position the watermark. This example will place the watermark 10 pixels from the bottom right corner of the main video and copy your audio as in your example:
ffmpeg -i background.avi -i watermark.jpg -filter_complex overlay=main_w-overlay_w-10:main_h-overlay_h-10 -codec:a copy output.mp4
Novice user of ffmpeg here, but going through whatever docs I can find online.
For a current project I will need to composite two videos together to create a .flv file.
Does anyone know the commands to do this?
This works for me:
ffmpeg -i background_file.flv -i file_to_overlay.flv -filter_complex overlay=0:0 -acodec aac -strict -2 out.flv
See http://ffmpeg.org/ffmpeg.html#overlay-1 for more details.
Also, you can add the scale filter to the filter chain to resize things appropriately.
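For instance, to shrink the overlay to 320 pixels wide (keeping its aspect ratio) before compositing, a sketch reusing the same hypothetical file names:
ffmpeg -i background_file.flv -i file_to_overlay.flv -filter_complex "[1:v]scale=320:-1[ovr];[0:v][ovr]overlay=0:0" -acodec aac -strict -2 out.flv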
Do a ffmpeg -filters to see the filters available.