generate video containing scrolling image - ffmpeg

I want to generate a video [let's say 800x600] from an 800x10000 still image.
The image has to scroll from top to bottom, as if someone were actually scrolling a page.
If it could scroll faster over some portions and slower over others, that would be great; if not, I think I could just make a few separate videos and then stitch them together.
I cannot find any documentation on this subject; could anyone give me a hint? Thanks for your time!

Use the scroll filter. The crop filter is optional: it trims large image inputs down to a reasonable output width and height. You can consider using the scale filter too. The format filter outputs a widely compatible pixel format / chroma subsampling scheme.
Vertical
ffmpeg -loop 1 -i input.png -vf "scroll=vertical=0.01,crop=iw:600:0:0,format=yuv420p" -t 10 output.mp4
Horizontal
ffmpeg -loop 1 -i input.png -vf "scroll=horizontal=0.01,crop=800:600:0:0,format=yuv420p" -t 10 output.mp4
Scroll filter options
horizontal, h Set the horizontal scrolling speed. Default is 0. Allowed range is from -1 to 1. Negative values change the scrolling direction.
vertical, v Set the vertical scrolling speed. Default is 0. Allowed range is from -1 to 1. Negative values change the scrolling direction.
hpos Set the initial horizontal scrolling position. Default is 0. Allowed range is from 0 to 1.
vpos Set the initial vertical scrolling position. Default is 0. Allowed range is from 0 to 1.
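The scroll filter's speed is constant, so for the asker's "faster over some portions" wish, one option is a moving crop instead, since the crop filter accepts per-frame expressions of t. A minimal sketch, assuming the 800x10000 input from the question and a 10-second output, that starts slow and accelerates toward the bottom:
ffmpeg -loop 1 -i input.png -vf "crop=800:600:0:'(ih-600)*pow(min(t/10,1),2)',format=yuv420p" -t 10 output.mp4
The y expression runs from 0 to ih-600 along a quadratic curve; any expression of t can be substituted to shape the speed profile.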

Related

ffmpeg scale pixels differ from Android UI pixels

I have the following simple command that overlays an image on top of a video:
String command = '-y -i $videoPath -i $imagePath -filter_complex "[1]scale=300:300[logo1];[0:v][logo1] overlay=300:300" -qscale 0 $outPutPath';
I need to understand why the width and height in scale=300:300 are completely different from UI pixels such as those on an Android phone screen.
In other words: I am building an app and I set the width and height of a Container (or any widget) to, let's say, 300 and 300, but the scale I get in ffmpeg pixels is bigger than that, although the pixel values are the same!
As shown in the image, I use the same pixel values, yet the result is different!
My scenario is to let the user pick values in the UI and build the ffmpeg scale command from those values, but this makes the width and height inconsistent.
Is there more than one type of pixel, or what explains this difference?

Trying to convert multiple images into a video [duplicate]

I am trying to encode an .mp4 video from a set of frames using FFmpeg with the libx264 codec.
This is the command I am running:
/usr/local/bin/ffmpeg -r 24 -i frame_%05d.jpg -vcodec libx264 -y -an video.mp4
I sometimes get the following error:
[libx264 @ 0xa3b85a0] height not divisible by 2 (520x369)
After searching around a bit it seems that the issue has something to do with the scaling algorithm and can be fixed by adding a -vf argument.
However, in my case I don't want to do any scaling. Ideally, I want to keep the dimensions exactly the same as the frames. Any advice? Is there some sort of aspect ratio that h264 enforces?
The answer to the original question should not scale the video but instead fix the height not divisible by 2 error. This can be achieved with this filter:
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2"
Full command:
ffmpeg -i frame_%05d.jpg -vcodec libx264 \
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2" -r 24 \
-y -an video.mp4
Basically, H.264 needs even dimensions, so this filter will:
Divide the original height and width by 2
Round it up to the nearest pixel
Multiply it by 2 again, thus making it an even number
Add black padding pixels up to this number
You can change the color of the padding by adding the filter parameter :color=white. See the documentation of pad.
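For example, a sketch of that variant with white padding (x and y keep their defaults of 0):
-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2:color=white"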
For width and height
Make width and height divisible by 2 with the crop filter:
ffmpeg -i input.mp4 -vf "crop=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
If you want to scale instead of crop, change crop to scale.
For width or height
Use the scale filter. This will make the width 1280. The height will be automatically calculated to preserve the aspect ratio, and -2 ensures the calculated height is divisible by 2:
ffmpeg -i input.mp4 -vf scale=1280:-2 output.mp4
Similar to above, but make height 720 and automatically calculate width:
ffmpeg -i input.mp4 -vf scale=-2:720 output.mp4
You can't use -2 for both width and height, but if you already specified one dimension then using -2 is a simple solution.
If you want to set an output width and keep the same aspect ratio as the original, note that
scale=720:-1
can run into this very problem, since the computed height may be odd. To avoid it, use
scale="720:trunc(ow/a/2)*2"
(Just for people searching for how to do that with scaling.)
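As a full command, that would presumably look like:
ffmpeg -i input.mp4 -vf "scale=720:trunc(ow/a/2)*2" output.mp4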
The problem with the scale solutions here is that they distort the source image/video, which is almost never what you want.
Instead, I've found the best solution is to add a 1-pixel pad to the odd dimension. (By default, the padding is black and hard to notice.)
The problem with the other pad solutions is that they do not generalize over arbitrary dimensions, because they always pad.
This solution only adds a 1-pixel pad to height and/or width if they are odd:
-vf pad="width=ceil(iw/2)*2:height=ceil(ih/2)*2"
This is ideal because it always does the right thing even when no padding is necessary.
It's likely due to the fact that H.264 video is usually converted from RGB to YUV color space with 4:2:0 chroma subsampling before compression is applied (although the format conversion itself is a lossy step, resulting in 50% space savings).
YUV 4:2:0 starts with an RGB (Red Green Blue) picture and converts it into YUV (basically one intensity channel and two "hue" channels). The hue channels are then subsampled by creating one hue sample for every 2x2 square of pixels.
If you have an odd number of RGB pixels either horizontally or vertically, you will have incomplete data for the last pixel column or row in the subsampled hue space of the YUV frame.
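As a quick sanity check of that 50% figure, take the 800x600 frame from the first question: 4:2:0 stores 800x600 = 480,000 luma samples plus 2 x (400x300) = 240,000 chroma samples, i.e. 720,000 samples in total, versus 3 x 480,000 = 1,440,000 for RGB, exactly half.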
LordNeckbeard has the right answer, very fast:
-vf scale=1280:-2
For Android, don't forget to add
"-preset ultrafast" and/or "-threads n"
You may also use the bitand function instead of trunc:
bitand(x, 65534)
will do the same as trunc(x/2)*2, and it is more transparent in my opinion.
(Consider 65534 a magical number here ;) It is 0xFFFE, every bit set except the lowest one, so the AND clears the last bit and rounds x down to an even number, for any x below 65536.)
My task was to automatically scale a lot of video files to half resolution.
scale=-2:ih/2 led to slightly blurred images
reason:
the input videos had their display aspect ratio (DAR) set
scale scales the real frame dimensions
during playback the new videos' sizes have to be corrected using the DAR, which in the case of quite low-resolution video (360x288, DAR 16:9) may lead to blurring
solution:
-vf "scale='bitand(oh*dar, 65534)':'bitand(ih/2, 65534)', setsar=1"
explanation:
output_height = input_height / 2
output_width = output_height * original_display_aspect_ratio
both output_width and output_height are now rounded down to the nearest number divisible by 2
setsar=1 means the output dimensions are now final and no aspect ratio correction should be applied
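Worked through for the 360x288, DAR 16:9 example above: output_height = bitand(288/2, 65534) = 144 and output_width = bitand(144*16/9, 65534) = 256, so the output is 256x144 with square pixels.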
Someone might find this helpful.

How can I pan, right to left, in a "wide" video output to 1080p? [duplicate]

What would be the most efficient way to create a video from a panoramic image that would, for example, have the size 5000x600 px (width x height)?
I created this GIF image to explain things a bit better. Imagine that the video would be inside the red border. So the video would potentially be panning from left to right.
A moving crop is the most convenient way to achieve this in ffmpeg.
ffmpeg -loop 1 -i in.jpg -vf "crop=500:ih:'min((iw/10)*t,9*iw/10)':0" -t 10 pan.mp4
The crop filter crops to a size of 500 x ih, i.e. 500x600. The top-left coordinate of the cropping window is fixed at Y=0. For X, the expression is min((iw/10)*t,9*iw/10), i.e. each second the cropping window slides across 10% of the image width. So, at t=9, the cropping window covers (4500,0) to (5000,600) for the example image. From that time on, the min function returns the other value, 9*iw/10 = 4500, and the sliding stops.
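Since the question title asks for a right-to-left pan, the same idea with the expression reversed should work (a sketch under the same assumptions):
ffmpeg -loop 1 -i in.jpg -vf "crop=500:ih:'max(9*iw/10-(iw/10)*t,0)':0" -t 10 pan.mp4
Here X starts at 9*iw/10 = 4500 and decreases by 10% of the width per second until max clamps it at 0.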
Dynamically crop a panorama video to 1280x720 with timestamp setpoints from a detector:
ffmpeg filter: crop=1280:ih:'func_cropstartx':0
where func_cropstartx is built from a sorted list of setpoints (location[i].time, location[i].x):
cropstartx[0] = location[0].x
for every location 0 < i <= N:
cropstartx[i] = if(gte(t\, location[i].time)\, location[i].x\, cropstartx[i-1])
Example:
crop=1280:ih:'if(gte(t\,10)\,600\,if(gte(t\,8)\,400\,if(gte(t\,6)\,300\,if(gte(t\,4)\,200\,100))))':0
Note that the nested conditions run from the latest setpoint down to the earliest, so the first matching gte wins.

ffmpeg delogo does not work well when the logo is at the boundary

I am trying to use the ffmpeg delogo filter to hide a logo, but I found that when the logo appears at the frame boundary, the delogo filter does not seem to work well. Please check the following images.
I also read a bit of the source; it seems the algorithm needs to leave a 1-pixel boundary around the logo area. In the second image the delogo result looks strange: the bottom is white. I want to make the logo invisible; is it possible?
thanks
Try
ffmpeg -i video -filter_complex \
"[0]split[m][b];[b]crop=iw:144:0:174,vflip[a];[m][a]vstack,delogo=794:689:134:40:1,crop=iw:720:0:0" \
out.mp4
The video is split in two. The 2nd feed is cropped to a 20%-high band ending at a height just above the logo. This is then flipped and vertically stacked under the main stream. delogo is applied, making sure that the height of the delogo area covers the whole logo. Then the excess portion at the bottom is cropped off.

ffmpeg resize down larger video to fit desired size and add padding

I'm trying to resize a larger video to fit an area that I have. To achieve this I first calculate the dimensions of the resized video so that it fits my area, and then I add padding to this video so that the final result has the desired dimensions, keeping the aspect ratio as well.
So let's say that I have the original video dimensions of 1280x720 and to fit my area of 405x320 I need first to resize the video to 405x227. I do that. Everything is fine at this point. I do some math and I find out that I have to add 46 pixels of padding at the top and the bottom.
So the padding parameter of the command for that would be -vf "pad=405:320:0:46:black". But each time I run the command I get an error like Input area 0:46:405:273 not within the padded area 0:0:404:226.
The only docs for padding that I found are these: http://ffmpeg.org/libavfilter.html#pad.
I don't know what I'm doing wrong. Anyone had this problem before? Do you have any suggestions?
try -vf "scale=iw*min(405/iw\,320/ih):ih*min(405/iw\,320/ih),pad=405:320:(405-iw)/2:(320-ih)/2"
Edit to clarify what's going on in that line: you are asking how to scale one box to fit inside another box. The boxes might have different aspect ratios. If they do, you want to fill one dimension, and center along the other dimension.
# you defined the max width and max height in your original question
max_width = 405
max_height = 320
# first, scale the image to fit along one dimension
scale = min(max_width/input_width, max_height/input_height)
scaled_width = input_width * scale
scaled_height = input_height * scale
# then, position the scaled image centered on the padded background
padding_ofs_x = (max_width - scaled_width) / 2
padding_ofs_y = (max_height - scaled_height) / 2
(In the ffmpeg filter above, the iw and ih that pad sees are already the scaled dimensions, since pad runs after scale in the chain, which is why the expression can use (405-iw)/2 directly.)
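Plugging in the numbers from the question as a sanity check: scale = min(405/1280, 320/720) ≈ 0.3164, so the scaled size comes out to 405x227 (rounding down), and padding_ofs_y = (320 - 227) / 2 ≈ 46, matching the 46 pixels of top/bottom padding computed in the question.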
Here is a generic filter expression for scaling (maintaining aspect ratio) and padding any source size to any target size:
-vf "scale=min(iw*TARGET_HEIGHT/ih\,TARGET_WIDTH):min(TARGET_HEIGHT\,ih*TARGET_WIDTH/iw),
pad=TARGET_WIDTH:TARGET_HEIGHT:(TARGET_WIDTH-iw)/2:(TARGET_HEIGHT-ih)/2"
Replace TARGET_WIDTH and TARGET_HEIGHT with your desired values. I use this to pull a 200x120 padded thumbnail from any video. Props to davin for his nice overview of the math.
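For example, the 200x120 thumbnail mentioned above would substitute to:
-vf "scale=min(iw*120/ih\,200):min(120\,ih*200/iw),pad=200:120:(200-iw)/2:(120-ih)/2"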
Try this:
-vf 'scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:x=(640-iw)/2:y=(480-ih)/2:color=black'
According to FFmpeg documentation, the force_original_aspect_ratio option is useful to keep the original aspect ratio when scaling:
force_original_aspect_ratio
Enable decreasing or increasing output video width or height if necessary to keep the original aspect ratio. Possible values:
disable
Scale the video as specified and disable this feature.
decrease
The output video dimensions will automatically be decreased if needed.
increase
The output video dimensions will automatically be increased if needed.
