ffmpeg: does drawbox safely erase content of video?

I want to make a box in a video completely unrecognizable, with no way of reconstructing its content. Is
ffmpeg -y -i input_video -vf "drawbox=x=10:y=10:w=100:h=100:color=black@1.0:t=fill" output_video
sufficient for that?
The command draws a black box at the desired location. I am just not sure whether it can be undone.
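A sanity check I could run (a sketch on my part; output_video is the file written above, and the crop geometry assumes the same 100x100 box at x=10, y=10): crop out just the boxed region and print its pixel statistics with signalstats, so every frame should report a constant black level with no variation if the fill really replaced the pixels:
ffmpeg -i output_video -vf "crop=100:100:10:10,signalstats,metadata=print" -f null -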

Related

Different brightness merging multiple BMP files into MP4 using ffmpeg

I want to edit a small video image by image without losing quality (or without losing much of it). I have used ffmpeg to split it into images using the following line:
ffmpeg -i test.mp4 $filename%%03d.bmp
This worked fine. I tried merging the images back using several lines including:
ffmpeg -re -f image2 -framerate 30 -i $filename%%03d.bmp -c:v prores_aw -pix_fmt yuv422p10le test.mkv
However, this results in a difference in brightness/contrast between the original and merged videos. The merged file is a bit darker than the original file (you have to look closely). What can I do to fix this?
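One thing I have been considering (a sketch, not verified: bt709 and limited "tv" range are assumptions that should be matched to whatever ffprobe reports for the source) is pinning the YUV/RGB conversion matrix and range explicitly on both legs with the scale filter, in case the shift comes from that conversion being chosen differently on the way out and back in:
ffmpeg -i test.mp4 -vf "scale=in_color_matrix=bt709:in_range=tv" $filename%%03d.bmp
ffmpeg -framerate 30 -i $filename%%03d.bmp -vf "scale=out_color_matrix=bt709:out_range=tv" -c:v prores_aw -pix_fmt yuv422p10le test.mkv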
Thanks for your time.

Adding two overlays in FFMPEG

I am adding an animated PNG overlay to an existing video, with some text written right below the image, using the command:
.\ffmpeg.exe -i vid.mp4 -ignore_loop 0 -i elephant.png -filter_complex "overlay=x=(main_w-overlay_w-10):y=(main_h-overlay_h-30):shortest=1, drawtext=fontfile=seguiemj.ttf:text='This is a test': fontcolor=white: fontsize=48: x=(w-text_w-250): y=(h-text_h-10)" output.mp4
Now, instead of text, I want to add another PNG in the same position as the text, using the same relative expressions, but I'm not sure how to write the variables for the second overlay. Since I'm already using overlay_w and overlay_h for the first image overlay, how can I refer to the dimensions of the other overlay images? A chained attempt is sketched below.
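For reference, this is the kind of chained command I have in mind (a sketch; second.png is a placeholder name, and the offsets mirror the drawtext position above). As far as I understand, each overlay instance has its own main_w/main_h and overlay_w/overlay_h, referring to the two inputs of that particular overlay, so the same variable names can be reused in the second overlay:
.\ffmpeg.exe -i vid.mp4 -ignore_loop 0 -i elephant.png -i second.png -filter_complex "[0:v][1:v]overlay=x=(main_w-overlay_w-10):y=(main_h-overlay_h-30):shortest=1[base];[base][2:v]overlay=x=(main_w-overlay_w-250):y=(main_h-overlay_h-10)" output.mp4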

FFMPEG - crop and pad a video (keep 3840x1920 but with black borders)

I am trying to crop a video so that I can remove a chunk of the content from the sides of a 360-degree video file using FFmpeg.
I used the following command and it does part of the job:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:0,pad=3840:1920:384:0 output.mp4
This will remove the sides of the video, and that was initially exactly what I wanted (A). Now I'm wondering if it is possible to crop in the same way but keep the top third of the video. As such, A is what I have and B is what I want:
I thought I could simply do this:
ffmpeg -i testVideo.mp4 -vf crop=3072:1920:384:640,pad=3840:1920:384:640 output.mp4
But that doesn't seem to work.
Any input would be very helpful.
Use the drawbox filter to fill the cropped portions with the default colour, black.
ffmpeg -i testVideo.mp4 -vf drawbox=w=384:h=1280:x=0:y=640:t=fill,drawbox=w=384:h=1280:x=3840-384:y=640:t=fill -c:a copy output.mp4
The first drawbox covers the left strip and the second the right strip; with a 3840x1920 frame, each 384-pixel-wide box starts at y=640 (one third of the height) and runs 1280 pixels down, so the top third of the sides stays visible.
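If you prefer not to hard-code the frame size, drawbox also accepts expressions (a sketch; iw and ih are the input width and height, and the fractions reproduce the fixed 3840x1920 numbers above):
ffmpeg -i testVideo.mp4 -vf "drawbox=w=384:h=2*ih/3:x=0:y=ih/3:t=fill,drawbox=w=384:h=2*ih/3:x=iw-384:y=ih/3:t=fill" -c:a copy output.mp4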

ffmpeg subtitle background

I am trying to add subtitles in the Marathi language to a video. To make it user friendly, I have added a background to the subtitles; however, some black lines appear in the background. Please help me remove those lines.
ffmpeg -i output_mp4.mp4 -vf "subtitles=transcript_srt.srt: force_style='OutlineColour=&H80000000,BorderStyle=3,Outline=1,Shadow=1'" -c:a copy output_srt.mp4
This is the command I have used.
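One variation I plan to try (an assumption on my part, not a verified fix) is that the dark lines come from the per-line shadow boxes and from the semi-transparent boxes overlapping where lines meet, so making the box colour fully opaque and dropping the shadow might avoid them:
ffmpeg -i output_mp4.mp4 -vf "subtitles=transcript_srt.srt:force_style='OutlineColour=&H00000000,BorderStyle=3,Outline=1,Shadow=0'" -c:a copy output_srt.mp4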

Overlay circular video with transparency with maskedmerge

I have a square video from Snap Spectacles (1088x1088) that I want to overlay on itself zoomed in and blurred.
Example input frame:
Generated zoomed in and blurred background:
Desired output:
I think I can do this with ffmpeg's maskedmerge, but I'm having trouble finding examples.
There's an example of maskedmerge that merges two videos of the same size and dynamically removes a green screen, and another that merges videos with transparency.
Here's the closest I've been able to get:
ffmpeg -i background.jpg -vf "movie=input.jpg[inner];[in][inner] overlay=#{offset}:0 [out]" -c:a copy output.jpg
tl;dr: given the first two frames, how could I generate the third frame (as video)?
Got it!
Like @Mulvya recommended, I needed a circular mask:
Given that mask snapmask.png, a blurred square background video background.mov, and the original video 65B6354F61B4AF02_HD.MOV, they can be merged like this:
ffmpeg -i background.mov -loop 1 -i snapmask.png -filter_complex " \
[1:v]alphaextract, scale=1080:1080 [mask];\
movie=65B6354F61B4AF02_HD.MOV, scale=1080:1080 [original];\
[original][mask] alphamerge [masked];\
[0:v][masked] overlay=420:0"\
-c:a copy output.mov
You can do one better, though, which is generating the blurred background video on the fly in the same command. Now the only inputs are the original spectacles round video and the circular mask:
ffmpeg -i 65B6354F61B4AF02_HD.MOV -loop 1 -i snapmask.png -filter_complex "\
[0:v]split[a][b];\
[1:v]alphaextract, scale=1080:1080[mask];\
[a]scale=1080:1080 [ascaled];\
[ascaled][mask]alphamerge[masked];\
[b]crop=946.56:532:70.72:278, boxblur=10:5,scale=1920:1080[background];\
[background][masked]overlay=420:0"\
-c:a copy 65B6354F61B4AF02_HD_sq.MOV
That crop=946.56:532:70.72:278 bit is what I found worked best to crop out a rectangular portion of the circular video to zoom into.
It took me a while to wrap my head around the ffmpeg filter system for how to do this, but it's not as scary as I'd initially thought. The basic syntax is [input]filter=args[output], and filters can be chained with commas without explicitly naming the intermediate outputs (as in [1:v]alphaextract, scale=1080:1080[mask]).
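As an aside, the circular mask itself could also be generated with ffmpeg instead of supplying snapmask.png (a sketch on my part; the 1080x1080 geometry matches the scaled video above, and since this produces a grayscale mask directly, the [1:v]alphaextract step would be dropped):
ffmpeg -f lavfi -i "color=white:s=1080x1080,format=gray" -vf "geq=lum='if(lte((X-540)*(X-540)+(Y-540)*(Y-540),540*540),255,0)'" -frames:v 1 snapmask_generated.png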
