With ffmpeg, you can:
create a video from a list of images
create an image with tiles representing frames of a video
But how is it possible to do the reverse: create a video from the tiles of a picture representing frames of a video?
if I have this command line:
ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" out.png
I will get an image (out.png) made of 12x25 tiles of 320x240 pixels each.
I am trying to reverse the process and, from that image, generate a video.
Is it possible?
Edit with more details:
What I am really trying to achieve is to convert a video into a GIF preview. In order to make an acceptable GIF, I need to build a common palette. So either I scan the movie twice, which would take very long since I have to do it for a large batch, or I make a tiled image with all the frames in a single image and then make the GIF with a palette computed from all the frames, which would be significantly faster... if possible.
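For reference, the palette idea can be sketched like this (filenames are illustrative): compute the palette from the single tiled image instead of scanning the movie a second time, then map the video through it in one pass.

ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" tiles.png
ffmpeg -i tiles.png -vf palettegen palette.png
ffmpeg -i test.mp4 -i palette.png -lavfi "[0:v]scale=320:-1[v];[v][1:v]paletteuse" out.gif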
I have some videos taken of a display, with the camera not perfectly oriented, so that the result shows a strong trapezoidal effect.
I know that there is a perspective filter in ffmpeg (https://ffmpeg.org/ffmpeg-filters.html#perspective), but I can't figure out how it works from the docs, and I cannot find a single example.
Can somebody show me how it works?
The following example extracts a trapezoidal perspective section from an input Matroska video to an output video.
An estimated coordinate had to be inserted to complete the trapezoidal pattern (out-of-frame coordinate x2=-60,y2=469).
The input video frame was 1280x720. Pixel interpolation was specified as linear, though that is the default when not specified at all. Cubic interpolation bloats the output with no apparent improvement in video quality. The output frame size will be the same as the input's.
The video output was viewable, but of rough quality due to sampling error.
ffmpeg -hide_banner -i input.mkv -lavfi "perspective=x0=225:y0=0:x1=715:y1=385:x2=-60:y2=469:x3=615:y3=634:interpolation=linear" output.mkv
You can also make use of ffplay (or any player which lets you access ffmpeg filters, like mpv) to preview the effect, or if you want to keystone-correct a display surface.
For example, if you have your TV above your fireplace mantle and you're sitting on the floor looking up at it, this will un-distort the image to a large extent:
ffplay video.mkv -vf 'perspective=W*.1:0:W*.9:0:-W*.1:H:W*1.1:H'
The above samples the top of the frame from the central 80% of its width (cropping the sides) and the bottom from 120% of its width (infilling beyond the edges with edge pixels), which stretches the top and compresses the bottom by roughly 20% each.
Also handy for playing back video of a building you're standing in front of with the camera pointed up around 30 degrees.
We have some videos with different scales and aspect ratios, and we'd like to convert them to a fixed 640x480 size (4:3 aspect ratio, with letterbox padding if necessary).
Two sizes occur very often: 853x480 and 1280x720.
I did some research and made several attempts before writing this question, but didn't get the expected result.
For example:
ffmpeg -i video.mp4 -vf "scale=640:480,pad=640:480:(ow-iw)/2:(oh-ih)/2,setdar=4/3" -c:a copy output.mp4
setdar=4/3 seems to be required, because if I omit it the result keeps the original aspect ratio.
Is there a solution that works for the different input sizes?
The generic filterchain for fitting a video in a WxH canvas is
"scale=iw*sar:ih,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:-1:-1"
The first scale filter makes sure the video is not kept anamorphic; if you know the video has square pixels, you can skip it. The second scale filter fits the video within a 640x480 canvas using the force_original_aspect_ratio option, and the pad filter (with -1:-1) centers it on that canvas.
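Applied to the question's command, that gives something like this (a sketch; setsar=1 is an optional addition so the output is explicitly flagged as square-pixel):

ffmpeg -i video.mp4 -vf "scale=iw*sar:ih,scale=640:480:force_original_aspect_ratio=decrease,pad=640:480:-1:-1,setsar=1" -c:a copy output.mp4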
How to compare images to find the least blurry image?
I want to automatically generate an image/thumbnail from a video.
I use ffmpeg for that.
However, once in a while the image is totally blurred, and I want to get rid of the blurry images.
My idea was to create multiple images per video and then compare the images to each other.
Now the question:
Is there a way to compare the blurriness of images?
I had a similar problem when choosing a thumbnail for a video.
My solution was to take 10 screenshots, each a second apart, and choose the one with the highest file size: sharper frames contain more detail and therefore tend to compress to larger JPEG files than blurry ones.
Making screenshots with ffmpeg:
ffmpeg -y -hide_banner -loglevel panic -ss *secondsinmovie* -i movie.mp4 -frames:v 1 -q:v 2 screenshot.jpg
Then, to fine-tune it, take that second, iterate over its frames, and again choose the highest file size.
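A minimal shell sketch of the first pass (the movie name, output names, and the 10-second window are illustrative):

best=""
best_size=0
# One screenshot per second for the first 10 seconds; keep the largest file.
for s in $(seq 0 9); do
    ffmpeg -y -hide_banner -loglevel panic -ss "$s" -i movie.mp4 -frames:v 1 -q:v 2 "shot_$s.jpg"
    # stat -c is GNU coreutils; the fallback covers BSD/macOS.
    size=$(stat -c %s "shot_$s.jpg" 2>/dev/null || stat -f %z "shot_$s.jpg")
    if [ "$size" -gt "$best_size" ]; then
        best_size=$size
        best="shot_$s.jpg"
    fi
done
cp "$best" thumbnail.jpg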
I have an image that represents a short animation as 40 frames arranged in 5 rows and 8 columns. How can I use ffmpeg to generate a video from this?
I've read that answer about generating a video from a list of images, but I'm unsure how to tell ffmpeg to read parts of a single image in sequence.
As far as I know, there is no built-in way of doing this with ffmpeg alone. But you could first extract all the frames using an ImageMagick crop (with two nested for loops, or in a single call), then use ffmpeg to generate the video from the extracted files, as sketched below.
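A rough sketch of that approach (filenames are illustrative; ImageMagick's WxH@ crop geometry splits the sheet into an 8x5 grid in one call, so the nested loops aren't strictly needed):

convert sheet.png -crop 8x5@ +repage frame_%02d.png
ffmpeg -framerate 25 -i frame_%02d.png -pix_fmt yuv420p out.mp4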
You can use an animated crop to do this. Basic template is,
ffmpeg -loop 1 -i image -vf "crop=iw/8:ih/5:mod(n,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4
Basically, the crop extracts a window of iw/8 x ih/5 each frame, and the coordinates of the crop window's top-left corner are animated by the third and fourth arguments, where n is the frame index (starting from 0): mod(n,8) steps across the 8 columns and trunc(n/8) steps down the 5 rows.
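Note that the looped image input is read at the image2 demuxer's default of 25 fps; to change the animation speed, set the input rate (a variant of the template above, with 10 fps chosen arbitrarily):

ffmpeg -framerate 10 -loop 1 -i image -vf "crop=iw/8:ih/5:mod(n,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4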
I was given two inputs: one is an image (a frame taken from an .mp4 video file) and the other is a video (mostly in .ts format). The video is usually lossily encoded, so I can't compare the raw frames of the video and the image byte for byte, since the encodings differ. To my knowledge, I need to find the most similar image/frame in the video with respect to the image. Are there any tools/APIs to find the image in the video?
Detect features in the image and in each candidate video frame, and try to establish a homography between them.
Then pick the frame with the most homography inliers (the cv::findHomography function has an output parameter named mask, which marks the inliers).
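A rough sketch of that approach using OpenCV's Python bindings (the filenames, the ORB detector, and all thresholds are illustrative assumptions; in practice you would probably sample frames rather than scan every one):

import cv2
import numpy as np

# Detect keypoints/descriptors in the query image once.
query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=1000)
q_kp, q_des = orb.detectAndCompute(query, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
cap = cv2.VideoCapture("input.ts")

best_frame, best_inliers = -1, 0
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    f_kp, f_des = orb.detectAndCompute(gray, None)
    if f_des is not None and len(f_kp) >= 4:
        matches = matcher.match(q_des, f_des)
        if len(matches) >= 4:
            src = np.float32([q_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([f_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            # mask marks the RANSAC inliers; more inliers = better geometric match.
            H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            inliers = int(mask.sum()) if mask is not None else 0
            if inliers > best_inliers:
                best_frame, best_inliers = idx, inliers
    idx += 1

cap.release()
print("best match: frame", best_frame, "with", best_inliers, "inliers")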