How to compare images to find the least blurry image?
I want to automatically generate an image/thumbnail from a video.
I use ffmpeg for that.
However, once in a while the image is totally blurred, and I want to get rid of the blurry images.
My idea was to create multiple images per video and then compare the images to each other.
Now the question:
Is there a way to compare the blurriness of images?
I had a similar problem when choosing a thumbnail for a video.
My solution was to take 10 screenshots, each a second apart, and choose the one with the highest file size.
Making screenshots with ffmpeg:
ffmpeg -y -hide_banner -loglevel panic -ss *secondsinmovie* -i movie.mp4 -frames:v 1 -q:v 2 screenshot.jpg
Then, to fine-tune it, take that second, iterate over its frames, and again choose the highest file size.
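For illustration, a minimal shell sketch of that approach, assuming a POSIX shell (movie.mp4 and the ten one-second offsets are placeholders):
#!/bin/sh
# Take 10 screenshots, one second apart, and keep the largest JPEG,
# on the assumption that the largest file is the least blurry frame.
movie="movie.mp4"
best=""
best_size=0
for sec in 1 2 3 4 5 6 7 8 9 10; do
    out="shot_$sec.jpg"
    ffmpeg -y -hide_banner -loglevel panic -ss "$sec" -i "$movie" -frames:v 1 -q:v 2 "$out"
    size=$(wc -c < "$out" | tr -d ' ')
    if [ "$size" -gt "$best_size" ]; then
        best_size=$size
        best="$out"
    fi
done
echo "Keeping $best"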
I have an image that represents a short animation as 40 frames, laid out in 5 rows and 8 columns. How can I use ffmpeg to generate a video from this?
I've read that answer about generating a video from a list of images, but I'm unsure how to tell ffmpeg to read parts of a single image in sequence.
As far as I know, there is no built-in way of doing this with ffmpeg alone. But you could first extract all the images using two nested for loops and an ImageMagick crop, and then use ffmpeg to generate the video from the extracted files.
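A hedged sketch of that idea, assuming the sheet is called sheet.png and the frames run left to right, top to bottom. Instead of explicit nested loops it uses ImageMagick's WxH@ crop geometry, which slices the image into an 8x5 grid of equal tiles in one call:
#!/bin/sh
# Slice the 8-column x 5-row sprite sheet into 40 numbered frames,
# then let ffmpeg assemble the numbered frames into a video.
convert sheet.png -crop 8x5@ +repage +adjoin frame_%02d.png
ffmpeg -framerate 25 -i frame_%02d.png -c:v libx264 -pix_fmt yuv420p out.mp4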
You can use an animated crop to do this. The basic template is:
ffmpeg -loop 1 -i image -vf "crop=iw/8:ih/5:mod(n,8)*iw/8:trunc(n/8)*ih/5" -vframes 40 out.mp4
Basically, the crop extracts a window of iw/8 x ih/5 each frame, and the coordinates of the top-left corner of the crop window are animated by the 3rd and 4th arguments, where n is the frame index (starting from 0).
With ffmpeg, you can:
create a video from a list of images
create an image with tiles representing frames of a video
But how is it possible to create a video from the tiles in a picture representing frames of a video?
If I have this command line:
ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" out.png
I will get an image (out.png) made of 12x25 tiles of 320x240 pixels each.
I am trying to reverse the process and, from that image, generate a video.
Is it possible?
Edit with more details:
What I am really trying to achieve is to convert a video into a GIF preview. But in order to make an acceptable GIF, I need to build a common palette. So either I scan the movie twice, which would take very long since I have to do it for a large batch, or I make a tiled image with all the frames in a single image, then make a GIF with a palette computed from all the frames, which would be significantly faster... if possible.
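For what it's worth, here is a hedged sketch of that single-scan idea, combining the tile command above with the animated crop shown in the earlier answer and a palette computed from the tiled image (filenames and the 12x25 / 320x240 layout are just the ones from the example):
ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" tiles.png
ffmpeg -i tiles.png -vf palettegen palette.png
ffmpeg -loop 1 -i tiles.png -i palette.png -lavfi "crop=320:240:mod(n,12)*320:trunc(n/12)*240 [x]; [x][1:v] paletteuse" -frames:v 300 out.gif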
I want to add some GIF files to my README.
Please help me create some GIF files:
merge PNG images.
convert a video to GIF.
I have seen some 60fps 4k gif files, but I know they are fake.
For example, 9gag uses a <video> tag with an mp4 video source.
I am not able to embed video directly to my README.
I want only short (2-5 seconds) videos.
What is the best way to add animation to a github README file?
If you want a tool to record your desktop screen to a GIF directly, I strongly recommend ScreenToGif, which is really easy to use and very useful.
You can use FFMPEG with this method for converting videos to reasonable quality GIFs - http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html
quote:
#!/bin/sh
palette="/tmp/palette.png"
filters="fps=15,scale=320:-1:flags=lanczos"
ffmpeg -v warning -i $1 -vf "$filters,palettegen" -y $palette
ffmpeg -v warning -i $1 -i $palette -lavfi "$filters [x]; [x][1:v] paletteuse" -y $2
...which can be used like this:
% ./gifenc.sh video.mkv anim.gif
PNG files can be converted and merged into a GIF with ImageMagick:
convert -loop 0 -delay 100 in1.png in2.png out.gif
Or with some online tools like this one.
But please keep in mind that GIF is not really intended or suited for large, high-quality animations. With some trickery you can get it to display more than 256 colors, but that dramatically increases file size. 4k 60fps GIFs will be very large to download and will most likely cause performance problems. If you plan to add multiple such GIFs to a single page, they will probably crash the browser or slow it down significantly for some visitors. That's why some sites are now using videos for what they call "GIFs".
For maximum visual quality there's the gifski encoder.
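A hedged example of the gifski route (frames are dumped as PNG first; the 15 fps and the filenames are arbitrary):
ffmpeg -i video.mp4 -vf "fps=15,scale=320:-1:flags=lanczos" frame%04d.png
gifski --fps 15 -o anim.gif frame*.png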
GIF theoretically supports a maximum of 100 fps, but for backwards-compatibility reasons browsers won't play it at that rate. Some play at 33 fps, some at 25 fps max.
GIF compression is awfully bad. Even the ideal case of solid color compresses poorly. The only inter-frame optimization GIF supports is encoding a rectangle of pixels that changed since the previous frame, so if it's a screencast where only the mouse or a text cursor moves, the file size may be tolerable (you can optimize that with gifsicle/giflossy), but otherwise avoid high resolutions.
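As a concrete, hedged example of that optimization step (--lossy requires a gifsicle build with the giflossy feature, roughly version 1.92 and later):
gifsicle -O3 --lossy=80 -o smaller.gif anim.gif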
I'm trying to use FFMPEG to create a video with one video overlayed on top another.
I have 2 MP4s. I need to make all BLACK pixels in the overlay video transparent so that I can see the main video underneath it.
I found two ways to overlay one video on another:
First, the following positions the overlay in the center, and therefore, hides that portion of the main video beneath it:
ffmpeg -i 1.mp4 -vf "movie=2.mp4 [a]; [in][a] overlay=352:0 [b]" combined.mp4 -y
And this one places the overlay video on the left, but its opacity is set to 50% so at least the one beneath it is visible:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]setpts=PTS-STARTPTS[top]; [1:v]setpts=PTS-STARTPTS, format=yuva420p,colorchannelmixer=aa=0.5[bottom]; [top][bottom]overlay=shortest=0" -acodec libvo_aacenc -vcodec libx264 out.mp4 -y
My goal is simply to make all black pixels in the overlay (2.mp4) completely transparent. How can this be done?
The notional way to do this is to chroma-key the black out and then overlay. But as @MoDJ said, this likely won't produce satisfactory results. Neither will the method I suggest below, but it's worth a try.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex
"[1]split[m][a];
[a]geq='if(gt(lum(X,Y),16),255,0)',hue=s=0[al];
[m][al]alphamerge[ovr];
[0][ovr]overlay"
output.mp4
Above, I duplicate the overlay video stream, then use the geq filter to manipulate the luma values so that any pixel with luma greater than 16 (i.e. not pure black) has its luma set to white, else zero. Since I haven't provided expressions for the two color channels, geq falls back on the luma expression. We don't want that, so I use the hue filter to nullify those channels. Then I use the alphamerge filter to merge this as an alpha channel with the first copy of the overlay video. Then, the overlay. Like I said, this may not produce satisfactory results. You can tweak the value 16 in the geq filter to change the black threshold. Suggested range is 16-24 for limited-range (Y: 16-235) video files.
You will not be able to get a "replace black pixels" approach to work properly. What you actually want is a foreground video with a real alpha channel that can be manipulated and tested before doing an overlay on a background. For an extended example that describes the problems, please take a look at my blog post on the subject. When using FFmpeg, an easy way to import alpha-channel video is to use QuickTime Animation codec video at 32 BPP.
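For instance, once the foreground clip really carries an alpha channel (e.g. a QuickTime Animation / qtrle .mov), the overlay itself is straightforward; a hedged sketch with placeholder filenames:
ffmpeg -i background.mp4 -i foreground_alpha.mov -filter_complex "[0:v][1:v]overlay=shortest=1" -c:v libx264 -pix_fmt yuv420p out.mp4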
When extracting still frames from a video at a specific time mark, like this:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 example.png
I noticed that using PNG or JPG results in different colors. (Note that -q:v 1 indicates maximum image quality.)
Here are some examples (three JPG vs PNG comparison screenshots, not reproduced here).
In general, the JPG shots seem to be slightly darker and less saturated than the PNGs.
When checking with exiftool or imagemagick's identify, both images use sRGB color space and no ICC profile.
Any idea what's causing this? Or which of these two would be 'correct'?
I also tried saving screenshots with my video player (MPlayerX), in both JPG and PNG. In that case, the frame dumps in either format look exactly the same, and they look mostly like ffmpeg's JPG stills.
This is related to the video range or levels. Video stores color as luma and chroma, i.e. brightness and color difference, and for legacy reasons dating back to analogue signals, black and white are not represented as 0 and 255 in an 8-bit encoding but as 16 and 235 respectively. The video stream should normally be flagged to say which convention it uses, since one can also store video where 0 and 255 are black and white respectively. If the file isn't flagged, or is flagged wrongly, some rendering or conversion functions can produce wrong results. But we can force FFmpeg to interpret the input one way or the other.
Use
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 -src_range 0 -dst_range 1 example.png/jpg
This tells FFmpeg to assume studio (limited) range input and to output full range. The colours still won't be identical due to the colour-encoding conversion, but the major difference should disappear.
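An equivalent, hedged way to spell the same fix is with the scale filter's range options:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -vf "scale=in_range=tv:out_range=pc" example.png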
I don't know about ffmpeg specifically. But in general, JPEG images use compression that lowers the quality slightly in exchange for a large reduction in file size. Most programs that can write JPEG files have a switch (or however they take options) that sets the "quality" or "compression" or something like that. I don't seem to have ffmpeg on any of the half-dozen machines I have open, or I'd tell you which one I think it is.