Overlay text on video with ffmpeg

It is described here how to burn an SRT file into a video.
However, I want to add a semi-transparent background behind the subtitles so that the text can be read more easily. How can I do that?

ASS subtitles can have a semi-transparent background for the text.
With Aegisub
The easiest way to do this is with Aegisub.
Open your subtitle file with Aegisub.
Click Subtitle → Styles manager.
Under Current Script choose Default, then press the Edit button.
Experiment with the Outline and Shadow values. Check the Opaque box.
Under Colors, click the color under Outline or Shadow. A window will appear. Adjust the value in the Alpha box to change the transparency.
Save the subtitles as an .ass file.
Now you can use the ASS file to make hardsubs or softsubs with ffmpeg.
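For example (a minimal sketch; the file names are placeholders, and an MKV container is assumed for the softsub case so the ASS styling is preserved):
ffmpeg -i input.mp4 -vf "subtitles=subs.ass" -c:a copy hardsub.mp4
ffmpeg -i input.mp4 -i subs.ass -map 0 -map 1 -c copy softsub.mkv
The first command burns the styled subtitles into the video; the second muxes the ASS file as a separate subtitle stream without re-encoding.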
Without Aegisub
If you want hardsubs you can use the subtitles filter to add the transparent background with the force_style option.
ffmpeg -i input -filter_complex "subtitles=subs.ass:force_style='OutlineColour=&H80000000,BorderStyle=3,Outline=1,Shadow=0,MarginV=20'" output
This will work with any text based subtitles supported by FFmpeg because the filter will automatically convert them to ASS.
See SubStation Alpha (ASS) style fields for formatting options.
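For example, a sketch combining a few common style fields (the values here are only illustrative):
ffmpeg -i input -filter_complex "subtitles=subs.srt:force_style='Fontname=Arial,Fontsize=24,PrimaryColour=&H00FFFFFF,OutlineColour=&H80000000,BorderStyle=3'" output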
Issue with multiple lines
If a subtitle contains multiple lines, due to auto-wrapping of a long line or an intentional line break, the backgrounds will overlap and can look ugly.
You can avoid this by:
Changing the Outline and Shadow sizes to 0.
The shadow color's alpha then controls the transparency of the background box: click the shadow color and adjust its Alpha to the desired transparency level.
Edit the ASS file in a text editor. In the Style line, change the value corresponding to BorderStyle to 4. This fills in the background of each subtitle event's bounding box. Example Style line:
Style: Default,Arial,20,&H00FFFFFF,&H000000FF,&H80000000,&H80000000,-1,0,0,0,100,100,0,0,4,0,0,2,10,10,10,1
Note that BorderStyle=4 is a non-standard value, so it may not work properly in all players.
Thanks to sup and wm4 for the BorderStyle suggestion.
Using drawbox
The drawbox filter can be used to create a background box.
This may be useful if you want the box to span the full width of the frame.
ffmpeg -i input -filter_complex "drawbox=w=iw:h=24:y=ih-28:t=max:color=black@0.4,subtitles=subs.ass" output
The downside is that you have to account for line breaks or word wrapping in long subtitles. Simply making the box taller to compensate will work, but it can look uneven because the subtitle baseline stays fixed: single-line subtitles will have more padding above the text than below.
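For example, making the box tall enough for two lines (a rough sketch; the 48-pixel height and 52-pixel offset are assumptions you would tune to your font size):
ffmpeg -i input -filter_complex "drawbox=w=iw:h=48:y=ih-52:t=max:color=black@0.4,subtitles=subs.ass" output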

Create a PNG with a semi-transparent box and an alpha channel at your preferred size. You can use e.g. GIMP or Photoshop.
Then use this command:
ffmpeg -i video.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=10:10" \
-codec:a copy out.mp4
where 10:10 is the distance from the upper left corner.
After that you can insert your subtitles.
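For example, a sketch that chains the subtitles filter after the overlay so everything happens in one pass (subs.srt is a placeholder for your subtitle file):
ffmpeg -i video.mp4 -i logo.png -filter_complex "[0:v][1:v]overlay=10:10,subtitles=subs.srt" \
-codec:a copy out.mp4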

ffmpeg -i input.mp4 -filter_complex "subtitles=input.srt:force_style='BackColour=&H80000000,BorderStyle=4,Fontsize=11'" output.mp4
BackColour=&H80000000 means a 50% opaque black background.
It's a hex representation of the color, in AABBGGRR order.
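For reference, a few alpha values with the same black color (the BBGGRR part stays 000000); the alpha byte runs from 00 (fully opaque) to FF (fully transparent), and the percentages below are approximate:
BackColour=&H00000000   fully opaque black
BackColour=&H40000000   about 25% transparent
BackColour=&H80000000   about 50% transparent
BackColour=&HC0000000   about 75% transparent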

You can use this Aegisub script. It automatically generates a transparent background for every line of the subtitles.

With the current version of libass (0.15) and the current version of ffmpeg (N-100402-g0320dab265, compiled from source, probably the same as version 4.2), you can use this bash script:
INFILE="movie.mp4"
SUBS="subtitles.srt"
OUTFILE="result.mp4"
ffmpeg -i "${INFILE}" -vf "subtitles=${SUBS}:force_style='BorderStyle=4,Fontsize=16,BackColour=&H80000000'" "${OUTFILE}"
to burn subtitles.srt into movie.mp4 and save it as result.mp4.
The subtitles will appear correctly boxed in a 50% transparent rectangle,
even when there are 2 lines in a subtitle.

Related

ffmpeg blend=screen makes background look green or foreground green

When I apply a screen blend to the foreground asset (Pikachu) over the background asset (a white circle on a black background), GIMP and Adobe Photoshop make the circle asset look white and keep the RGB colors of the foreground, which is how it should look.
However, if we take the same input assets and use this ffmpeg command
ffmpeg -i circle_rgb_50.png -i pikachu_rgb.png -filter_complex "[0:v][1:v]blend=screen" pikachu_screened_over_circle_rgb_just_blend.png
we get an incorrect result,
and if I reverse the blend so that the inputs to the blend function are the other way around, like this:
ffmpeg -i circle_rgb_50.png -i pikachu_rgb.png -filter_complex "[1:v][0:v]blend=screen" pikachu_screened_over_circle_rgb_just_blend_other_way_around.png
we get a different but still incorrect result.
Why doesn't FFmpeg do blending the same way as GIMP or Adobe Photoshop?
Or is there another parameter I need to pass so that blends look as they should?
screen must be passed as the value of the all_mode option of the blend filter. Correct your filter_complex:
-filter_complex "[0:v][1:v]blend=all_mode=screen"
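Applied to the command from the question, that would look like this (same inputs and output name):
ffmpeg -i circle_rgb_50.png -i pikachu_rgb.png -filter_complex "[0:v][1:v]blend=all_mode=screen" pikachu_screened_over_circle_rgb_just_blend.png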

Detect scene change on part of the frame

I have a video file of an online lecture consisting of a slideshow with audio in the background.
I want to save images of each slide as well as the timestamp of that slide.
I do this using the scene and metadata filters:
ffmpeg -i week-01.mp4 -filter_complex "select='gt(scene,0.011)',metadata=print:file=frames/time.txt" -vsync vfr frames/img%03d.jpg
This works fine except for one thing: there is a timer onscreen on the right side of the video.
If I set the threshold small enough to pick up all the slide changes, it also picks up the timer changes.
So here is my question. Can I ask ffmpeg to:
analyze only part of the frame (roughly the left 75%, excluding the right side where the timer is), and
then, on detecting a scene change in this area, save the entire frame and the timestamp?
I thought of making a script that:
crops the video and saves it alongside the original,
analyzes the cropped video for scene changes and saves the timestamps,
extracts the frames from the original video using the timestamps.
Is there a better/faster/shorter way to do this?
Thanks in advance!
You can do it in one command, like this:
ffmpeg -i week-01.mp4 -filter_complex "[0]split=2[full][no_timer];[no_timer]drawbox=w=0.25*iw:h=ih:x=0.75*iw:y=0[no_timer];[no_timer]select='gt(scene,0.011)',metadata=print:file=frames/time.txt[no_timer];[no_timer][full]overlay" -vsync vfr frames/img%03d.jpg
Basically, make two copies of the video, use drawbox on one copy to paint solid black over the quarter of the screen on the right, analyze scene change and record scores to file; then overlay the full unpainted frame on top of the painted ones. Due to how overlay syncs frames, only the full frames with corresponding timestamps will be used to overlay on top of the base selected frames.

Using FFMPEG to create animated GIF from series of images and insert text for each image

I am generating an animated GIF from a series of PNGs labeled img_00.png, img_01.png, etc. I want to insert text in the top-right corner of the animated GIF, for each frame that is generated from a PNG, to display some specific information. For example, say I have 3 PNGs, img_00, img_01, and img_02; what I want from the GIF is:
For frame generated from img_00, display "This is from img_00".
For frame generated from img_01, display "This is from img_01".
For frame generated from img_02, display "This is the last image generated from img_02!".
So far I have been messing around with the drawtext filter (assuming framerate=1):
ffmpeg -f image2 -framerate 1 -i img_%02d.png -filter_complex "drawtext=enable='between(t,0,1)':text='word1':fontsize=24:fontcolor=white:x=w-tw:y=0,drawtext=enable='between(t,1,2)':text='word2':fontsize=24:fontcolor=white:x=w-tw:y=0" out.gif
But I am getting "word1" and "word2" overlapped on top of each other. Is there a better way of doing this or someway to fix drawtext so the overlap doesn't happen?
between(t,0,1) and between(t,1,2) will overlap for t=1. Either the end time for the first range or the start time for the second range should be adjusted e.g. you can make the first range between(t,0,0.9).
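For example, adjusting only the first range, the command from the question becomes (a sketch, everything else unchanged):
ffmpeg -f image2 -framerate 1 -i img_%02d.png -filter_complex "drawtext=enable='between(t,0,0.9)':text='word1':fontsize=24:fontcolor=white:x=w-tw:y=0,drawtext=enable='between(t,1,2)':text='word2':fontsize=24:fontcolor=white:x=w-tw:y=0" out.gif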

Set a semi transparent box when burning vobsubs to MP4 with ffmpeg

I'm using ffmpeg to convert some old videos. Some have VobSub subtitles, so I am hardcoding them in. Is there a way to add a semi-transparent box behind the subtitles for easier reading?

ffmpeg animated gif is blotchy

I am generating an animated gif from an mp4 ... but due (I think) to color reduction (gif requires -pix_fmt rgb24) the result is somewhat ... blotchy? like running an image through an oil paint (or maybe "posterize") special effect filter. I think that the quality could be better, but I don't know what to tweak.
Not sure about this ... but looking at the color palette of the resulting gif in an image editor, it does not even appear to have attempted to create a color palette specific to this clip, but instead is attempting to use a generic palette ... which wastes a lot of palette space. That is, if I am interpreting this correctly.
Any tips on preserving the original video image instead of getting a "posterized" animated gif?
To get better-looking GIFs, you can use generated palettes. The palettegen filter will generate a PNG palette to use with the paletteuse filter.
ffmpeg -i input.mkv -vf palettegen palette.png
ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif
You can try using -vf format=rgb8,format=rgb24.
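If you prefer a single pass, a sketch that generates and applies the palette in one command using the split filter (same file names as above):
ffmpeg -i input.mkv -filter_complex "[0:v]split[a][b];[a]palettegen[p];[b][p]paletteuse" output.gif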
