Scale watermark in ffmpeg based on video size [duplicate]

I have some videos and I want to add a watermark to them.
The problem is that the watermark ends up a different size in every video
(in some videos the watermark is smaller and in some it is bigger; I think this is because the input video size differs).
Here is my ffmpeg command (only the link differs):
ffmpeg -i "http://VIDEO-LINK" -i "/var/www/logo/logo.png" -filter_complex 'overlay=17:17' -vcodec h264 -crf 25 -preset veryfast -maxrate 600k -bufsize 600k -aspect '640:360' -s '640:360' -acodec libfdk_aac -hls_time 10 -hls_wrap 10 -start_number 1 -y "1.m3u8"
Is there a way to make the watermark a fixed size or percentage based on the output, which is 640x360?
If the input video is 640x360, this command shows a big watermark;
if the input is 1280x720, the watermark is very small.

You can use the scale2ref filter.
-filter_complex "[1][0]scale2ref=iw/8:ih/8[wm][vid];[vid][wm]overlay=17:17[out]"
If the aspect ratio of your watermark is not the same as that of your video inputs, scale2ref will distort your logo. It's best to perform a one-time operation that pads the logo so the image has the same aspect ratio as your videos, for example:
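A minimal one-time padding sketch, assuming the logo is narrower than 16:9, that 16:9 matches your videos, and that the PNG has (or can carry) an alpha channel; the output file name is an assumption:
ffmpeg -i "/var/www/logo/logo.png" -vf "format=rgba,pad=ih*16/9:ih:(ow-iw)/2:0:color=black@0" "/var/www/logo/logo_16x9.png"
The transparent padding (color=black@0) widens the canvas without making the added area visible in the overlay, so the scale2ref expressions above no longer distort the logo.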

Related

While ffmpeg is recording, I want it to create a smaller and lower quality video

Currently I am using this...
ffmpeg -video_size 1920x1080 -framerate 1 -f x11grab -i :0.0+0,0 -f pulse -ac 2 -i default -t 00:00:10 Output.mkv
While ffmpeg is recording a video, I want it to significantly reduce both the size and quality compared to the ffmpeg command above.
In case you are curious, I am recording brief quality assurance videos to ensure a simple little web scraper I wrote in Python is scraping data properly (specifically, that it is clicking on a particular button, at a particular time, on a particular web page). My Python script triggers the command above to start recording my screen a few seconds before my Python script is supposed to click on that button.
Of course, to verify that a button on a web page has been clicked, a low-quality, low-resolution video would normally suffice.
For libx264/libx265 the most important option to reduce both the size and quality is -crf. This option controls quality. A value of 51 provides the worst quality. If it's too terrible then use a lower number.
ffmpeg -video_size 1920x1080 -framerate 1 -f x11grab -i :0.0+0,0 -f pulse -channels 2 -i default -t 00:00:10 -c:v libx264 -crf 51 -c:a libopus Output.mkv
See FFmpeg Wiki: H.264.
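If a smaller frame size is also acceptable, a scale filter can be appended; the 640-pixel width below is only an assumption (-2 keeps the aspect ratio while forcing an even height):
ffmpeg -video_size 1920x1080 -framerate 1 -f x11grab -i :0.0+0,0 -f pulse -channels 2 -i default -t 00:00:10 -c:v libx264 -crf 51 -vf scale=640:-2 -c:a libopus Output.mkv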
To significantly reduce both the size and quality of Output.mkv you can use the following ffmpeg options:
crop: iw-(cut width in pixels):ih-(cut height in pixels)
scale: sets the ratio and resolution (for example 1700x800)
crf: sets the quality, where 0 is lossless, 23 is the default, and 51 is the worst possible. A lower value means higher quality; a subjectively sane range is 18-28, and 18 can be considered visually lossless.
bitrate values: -b:v, -minrate and -maxrate (for example -b:v 4000K -minrate 2000K -maxrate 6000K)
preset: in theory slow gives the best quality-to-size ratio; you can also try ultrafast, superfast, veryfast, faster, fast, medium (the default), slow, slower or veryslow.
Crop part of the video, change the resolution, and set the crf and bitrate. Do the video first, on its own, and handle the audio in a separate pass; do not mix them. This is very important: encode/decode audio and video separately, then mux everything together afterwards, starting with the video as in this example:
ffmpeg -i Output.mkv -map 0:v -vf crop=iw-150:ih-85,scale=ih*16/9:ih,scale=1072:732,setsar=1 -c:v libx264 -crf 17 -b:v 4000K -maxrate 6000K -bufsize 4M -movflags +faststart -preset veryfast -dn video-new.mkv
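A possible follow-up for the separate audio pass and the final mux, as described above (the file names and the libopus audio codec are assumptions):
ffmpeg -i Output.mkv -map 0:a -c:a libopus audio-new.mka
ffmpeg -i video-new.mkv -i audio-new.mka -map 0:v -map 1:a -c copy final.mkv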

Why are images combined (ffmpeg)?

I want to create video from images (one image per frame).
ffmpeg -framerate 21.533 -i %d.bmp -i z.wav -r 21.533 -t 120 -map 0:v:0 -map 1:a:0 -c:v libx265 -c:a aac -b:a 128k z.mp4
When I watch the resulting video I see (at least at the end of the video) that frames are blended with each other (two images overlap in each frame with different transparency ratios). It looks like what happens when the source and destination frame rates mismatch.
Removing the -framerate and -r options doesn't help; the result is the same (at 25 fps).
What's the problem?
How to fix it?
The problem turned out to be that KMPlayer plays the file with frame mixing/overlapping.
The video itself is fine; other players play it correctly.

FFMPEG, any video to 16:9

Help me find a command or script that will convert any video to 16:9, H.264 and ~2500kbps. I have a server where people upload videos of different quality, size and length. It can be either 640x480 or 1216x2160. Ultimately, I need to get any resolution to 16:9 (with black borders, if needed) and a bitrate without visible loss of quality that is acceptable for online broadcasting.
I have this command, but it does not check the resolution of the video. If the video was 560x448 at 1000kbps and 700 MB, after conversion it becomes 1280x720 at 3000kbps and 1.5 GB, which is not right.
ffmpeg -i 5.avi -vcodec libx264 -crf 23 -preset veryfast -vf scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1 -tune zerolatency highoutput.mp4
Please try the following as a starting point:
ffmpeg -i "5.avi" -vcodec libx264 -crf 23 -vf "scale=w=trunc(ih*dar/2)*2:h=trunc(ih/2)*2, setsar=1/1, scale=w=1920:h=1080:force_original_aspect_ratio=1, pad=w=1920:h=1080:x=(ow-iw)/2:y=(oh-ih)/2:color=#000000" "output.mp4"
Please tweak the crf value depending on the picture quality.
Use
-vf scale=iw*sar:ih,setsar=1,pad='max(iw+mod(iw,2),2*trunc(ih*16/9/2))':'max(ih+mod(ih,2),2*trunc(iw*9/16/2))':-1:-1
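Combined with a bitrate cap for broadcasting, a complete command could look like this (a sketch; the ~2500kbps target comes from the question, while the veryfast preset and the bufsize value are assumptions):
ffmpeg -i 5.avi -vcodec libx264 -preset veryfast -vf "scale=iw*sar:ih,setsar=1,pad='max(iw+mod(iw,2),2*trunc(ih*16/9/2))':'max(ih+mod(ih,2),2*trunc(iw*9/16/2))':-1:-1" -b:v 2500k -maxrate 2500k -bufsize 5000k output.mp4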

Add unique color watermark per frame with FFMpeg

I've been able to add a random color watermark with this code:
ffmpeg -y -r 100 -i "N%3d.tif" -c:v libx264 -vf "drawbox=y=0:color=random#1:width=8:height=ih:t=fill,scale=1920:1080" -crf 30 -g 10 -profile:v high -level 4.1 -pix_fmt yuv420p test.mp4
And I know that it's doable with a script and processing each input frame individually, but I would really like to find a way with FFMpeg to add the watermark during the actual video encoding. It needs to be a unique color per frame. Any ideas on how to accomplish this?
Thanks!
The drawbox expression is only evaluated once. But the hue filter can be used to vary the color.
In the command below, a small portion from the left side of the frame is cropped off, a color is drawn once, and then its hue varied. This is then overlaid on the full frame.
ffmpeg -y -framerate 100 -i "N%3d.tif"
-filter_complex "[0]split=2[wm][vid];[wm]crop=8:ih,drawbox=color=random#1:t=fill,
hue=n*random(1234)[wm];[vid][wm]overlay,scale=1920:1080"
-c:v libx264 -crf 30 -g 10 -profile:v high -level 4.1 -pix_fmt yuv420p test.mp4

encoding jpeg as h264 video

I am using the following command to encode an AVI to an H264 video for use in an HTML5 video tag:
ffmpeg -y -i "test.avi" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this works just fine. But I also want to create a placeholder video (long story) from a single still image, so I do this:
ffmpeg -y -i "test.jpg" -vcodec libx264 -vpre slow -vpre baseline -g 30 "out.mp4"
And this doesn't work. What gives?
EDIT: After trying LordNeckbeard's answer, here is my full output: http://pastebin.com/axhKpkLx
Example for a 10 second output:
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4
Same thing but with audio. The output duration will match the input audio duration:
ffmpeg -loop 1 -framerate 24 -i input.jpg -i audio.mp3 -c:v libx264 -preset slow -tune stillimage -crf 24 -vf format=yuv420p -c:a aac -shortest -movflags +faststart output.mp4
-loop 1 loops the image input.
-framerate sets the frame rate of the image input. Default is 25. Some players have issues with low frame rates so a value over 6 or so is recommended.
-i input.jpg the input.
-c:v libx264 the H.264 video encoder.
-preset x264 encoding preset. Use the slowest one you can.
-tune x264 tuning for various adjustments to fit specific situations.
-crf for quality. A lower value results in higher quality. Use the highest value that still provides an acceptable quality to you. Default is 23.
-vf format=yuv420p outputs the pixel format as yuv420p. This ensures the output uses a widely acceptable chroma sub-sampling scheme. Recommended for libx264 when encoding from images.
-c:a aac the AAC audio encoder. If your input is already AAC or M4A then use -c:a copy to stream copy instead of re-encoding.
-t 10 (in the first example) makes a 10 second output. Needed because the image is looping indefinitely.
-shortest (in the second example) makes the output the same duration as the shortest input. In this case it is the audio since the image is looping indefinitely.
-movflags +faststart relocates the moov atom to the beginning of the file after encoding is finished. Allows playback to begin faster in progressive download playing; otherwise the whole video must be downloaded before playing.
-profile:v main (optional) some devices can't handle High profile; see the example below.
See FFmpeg Wiki: H.264 for more info.
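For example, adding the optional Main profile to the first command (only the -profile:v flag differs from the example above):
ffmpeg -loop 1 -framerate 24 -i input.jpg -c:v libx264 -profile:v main -preset slow -tune stillimage -crf 24 -vf format=yuv420p -t 10 -movflags +faststart output.mp4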
