I want to add some GIF files to my README.
Please help me create some GIF files in two ways:
merge several PNG images into one GIF, and
convert a video to a GIF.
I have seen some 60fps 4K "GIF" files, but I know they are fake.
For example, 9gag uses a <video> tag with an MP4 video source.
I am not able to embed video directly in my README.
I only want short (2-5 second) clips.
What is the best way to add animation to a github README file?
If you want a tool to record your desktop screen to gif directly, I strongly recommend ScreenToGif which is really easy and very useful.
You can use FFMPEG with this method for converting videos to reasonable quality GIFs - http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html
quote:
#!/bin/sh
palette="/tmp/palette.png"
filters="fps=15,scale=320:-1:flags=lanczos"
ffmpeg -v warning -i "$1" -vf "$filters,palettegen" -y "$palette"
ffmpeg -v warning -i "$1" -i "$palette" -lavfi "$filters [x]; [x][1:v] paletteuse" -y "$2"
...which can be used like this:
% ./gifenc.sh video.mkv anim.gif
PNG files can be merged into an animated GIF with ImageMagick (-delay is in hundredths of a second, so 100 means one frame per second, and -loop 0 loops forever):
convert -loop 0 -delay 100 in1.png in2.png out.gif
Or with one of the many online converter tools.
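If ImageMagick isn't available, ffmpeg can do the same merge. This is a sketch; the two demo frames are generated with lavfi test sources here only so the snippet is self-contained, so substitute your real in1.png/in2.png:

```shell
# Generate two demo frames (stand-ins for real in1.png / in2.png).
ffmpeg -y -loglevel error -f lavfi -i testsrc2=duration=1:size=160x120:rate=1 -frames:v 1 in1.png
ffmpeg -y -loglevel error -f lavfi -i smptebars=duration=1:size=160x120:rate=1 -frames:v 1 in2.png
# Merge the numbered PNGs into a GIF at 1 frame per second, looping forever.
ffmpeg -y -loglevel error -framerate 1 -i in%d.png -loop 0 out.gif
```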
But please keep in mind that GIF is not really intended or suited for large, high-quality animations. With some trickery you can get it to display more than 256 colors, but that dramatically increases file size. 4K 60fps GIFs will be very large downloads and will most likely cause performance problems. If you plan to add multiple such GIFs to a single page, they will probably crash the browser or slow it down significantly for some visitors. That's why some sites now use videos for what they call "GIFs".
For maximum visual quality there's gifski encoder.
GIF theoretically supports a maximum of 100fps (frame delays are stored in hundredths of a second), but for backwards-compatibility reasons browsers won't play it that fast. Some cap playback at 33fps, some at 25fps.
GIF compression is awfully bad. Even the ideal case of a solid color compresses poorly. GIF can encode just the small rectangle that differs between frames, so for a screencast where only the mouse or a text cursor moves the file size may be tolerable (and you can optimize it further with gifsicle/giflossy), but otherwise avoid high resolutions.
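As a sketch of that optimization step (assuming gifsicle >= 1.92, which merged giflossy's --lossy option; the input GIF is synthesized here only so the snippet is self-contained):

```shell
# Make a small test GIF to stand in for a real screencast capture.
ffmpeg -y -loglevel error -f lavfi -i testsrc2=duration=2:size=320x240:rate=10 anim.gif
# Optimize: -O3 tries hardest; --lossy trades visual quality for size.
if command -v gifsicle >/dev/null 2>&1; then
  gifsicle -O3 --lossy=80 anim.gif -o anim-opt.gif
else
  echo "gifsicle not installed; skipping the optimization pass"
fi
```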
My videos are 1920x1080, recorded on a smartphone at high ISO (3200) to get a bright image (backlight scene mode). This produces a lot of noise. I have tried many video filters, but all of them produce blur, similar to halving the resolution and then scaling it back up again.
Is there a good video denoise filter that removes noise without introducing blur?
If it introduces blur, I would prefer not to do any filtering at all.
I have tried these video filters:
nlmeans=s=30:r=3:p=1
vaguedenoiser=threshold=22:percent=100:nsteps=4
owdenoise=8:6:6
hqdn3d=100:0:50:0
bm3d=sigma=30:block=4:bstep=8:group=1:range=8:mstep=64:thmse=0:hdthr=5:estim=basic:planes=1
dctdnoiz=sigma=30:n=4
fftdnoiz=30:1:6:0.8
All of them produce blur, some even worse. I have to use strong settings just to get the noise moderately removed. I ended up halving the resolution, applying removegrain, and then scaling back up. For me this works much better than all of the methods above (the pp filter is used to reduce file size without reducing image detail):
scale=960:540,removegrain=3:0:0:0,pp=dr/fq|8,scale=1920:1080
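As a sketch, here is that whole chain applied with ffmpeg. The noisy input is synthesized with lavfi here so the snippet runs standalone; substitute your real footage (and add -c:a copy if it has audio):

```shell
# Synthesize a noisy 1080p clip to stand in for the real recording.
ffmpeg -y -loglevel error -f lavfi -i testsrc2=duration=1:size=1920x1080:rate=25 \
       -vf "noise=alls=20:allf=t" noisy.mp4
# Downscale, remove grain, postprocess, then scale back up to 1080p.
ffmpeg -y -loglevel error -i noisy.mp4 \
       -vf "scale=960:540,removegrain=3:0:0:0,pp=dr/fq|8,scale=1920:1080" denoised.mp4
```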
Code example (Windows batch):
FOR %%G IN (*.jpg) DO "ffmpeg.exe" -y -i "%%G" -vf "nlmeans=s=30:r=3:p=1" -qmin 1 -qmax 1 -q:v 1 "%%G.jpg"
To counteract the blur, I always apply unsharp to sharpen the image after nlmeans. Below are the parameters I find work best on old grainy movies, or on 4K transfers of old movies that exhibit unacceptable grain. It seems to work quite well; for 4K movies, it almost makes them as good as the 1080p Blu-ray versions.
nlmeans=s=1:p=7:pc=5:r=3:rc=3
unsharp=7:7:2.5
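A sketch of both filters in a single pass. The duplicate p= in the original nlmeans line is presumably meant to be rc= (the chroma research size), and grainy.mp4 is a synthetic stand-in generated here; substitute your real source:

```shell
# Synthesize a small grainy clip to stand in for real footage.
ffmpeg -y -loglevel error -f lavfi -i testsrc2=duration=1:size=640x360:rate=25 \
       -vf "noise=alls=15:allf=t" grainy.mp4
# Denoise with nlmeans, then sharpen with unsharp, in one pass.
ffmpeg -y -loglevel error -i grainy.mp4 \
       -vf "nlmeans=s=1:p=7:pc=5:r=3:rc=3,unsharp=7:7:2.5" sharpened.mp4
```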
I create lots of 4K 60fps 3D animations, and every frame of these animations is exported as a separate PNG file on my disk drive. PNG's compression is lossless, but the file sizes are still quite large (a 30-second animation can take anywhere between 4 and 18 GB). I'm interested in alternative lossless compression formats to reduce the file sizes even further.
The reason I'm interested in lossless compression is because I create a LARGE variety of animations, and lossy algorithms are not always consistent in terms of visual fidelity (what doesn't create visible artifacts for one animation might for another).
Do you have good recommendations for general purpose lossless video codecs that can achieve superior performance to storing the PNG frames individually?
So far, I have attempted to use h.265 lossless using ffmpeg:
ffmpeg -r 60 -i out%04d.png -c:v libx265 -preset ultrafast -x265-params lossless=1 OUTPUT.mp4
But the result was a 15.4GB file when the original PNG files themselves only took up 5.77 GB in total. I assume this was because, for this particular animation, interframe compression was far worse than intraframe compression, but I don't really know.
I understand that this is highly dependent on the content I'm attempting to compress, but I'm just hoping that I can find something that's better than storing the frames individually.
For lossless archival of RGB inputs, I suggest you try x264's RGB encoder:
ffmpeg -framerate 60 -i out%04d.png -c:v libx264rgb -qp 0 OUTPUT.mp4
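A variant worth trying (a sketch, with demo frames generated here so it runs standalone): a slower x264 preset usually shrinks lossless output further, and an MKV container sidesteps player quirks with RGB video in MP4:

```shell
# Generate a few demo frames matching the question's out%04d.png pattern.
ffmpeg -y -loglevel error -f lavfi -i testsrc2=duration=0.1:size=320x240:rate=60 out%04d.png
# Lossless RGB H.264 (-qp 0); veryslow trades encode time for a smaller file.
ffmpeg -y -loglevel error -framerate 60 -i out%04d.png \
       -c:v libx264rgb -qp 0 -preset veryslow OUTPUT.mkv
```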
How to compare images to find the least blurry image?
I want to automatically generate an image/thumbnail from a video.
I use ffmpeg for that.
However, once in a while the image is totally blurred, and I want to get rid of the blurry images.
My idea was to create multiple images per video and then compare the images to each other.
Now the question:
Is there a way to compare the blurryness of images?
I had a similar problem when choosing a thumbnail for a video.
My solution was to take 10 screenshots, each a second apart, and choose the one with the highest file size.
Making screenshots with ffmpeg:
ffmpeg -y -hide_banner -loglevel panic -ss *secondsinmovie* -i movie.mp4 -frames:v 1 -q:v 2 screenshot.jpg
Then, to fine-tune it, take that second, iterate over its frames, and again choose the one with the highest file size.
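That approach might look like the sketch below. Since JPEG file size roughly tracks image detail, a blurry frame tends to compress smaller. movie.mp4 is synthesized here so the snippet is self-contained; substitute the real video:

```shell
# Synthetic 10-second stand-in for the real video.
ffmpeg -y -hide_banner -loglevel error -f lavfi \
       -i testsrc=duration=10:size=320x240:rate=25 movie.mp4
# One screenshot per second; keep the one with the largest file size.
best=""; bestsize=0
for s in 0 1 2 3 4 5 6 7 8 9; do
  ffmpeg -y -hide_banner -loglevel error -ss "$s" -i movie.mp4 \
         -frames:v 1 -q:v 2 "shot_$s.jpg"
  size=$(wc -c < "shot_$s.jpg")
  if [ "$size" -gt "$bestsize" ]; then bestsize=$size; best="shot_$s.jpg"; fi
done
echo "picked $best"
```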
When extracting still frames from a video at a specific time mark, like this:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 example.png
I noticed that using PNG or JPG results in different colors. (Note that -q:v 1 requests maximum image quality.)
Here are some example comparisons (JPG vs PNG):
In general, the JPG shots seem to be slightly darker and less saturated than the PNGs.
When checking with exiftool or imagemagick's identify, both images use sRGB color space and no ICC profile.
Any idea what's causing this? Or which of these two would be 'correct'?
I also tried saving screenshots with my video player (MPlayerX), in both JPG and PNG. In that case, the frame dumps in either format look exactly the same, and they look mostly like ffmpeg's JPG stills.
This is related to the video range, or levels. Video stores color as luma and chroma, i.e. brightness and color difference, and for legacy reasons from the days of analogue signals, black and white are not represented as 0 and 255 in an 8-bit encoding but as 16 and 235 respectively. The video stream should normally be flagged that this is the case, since one can also store video where 0 and 255 are black and white respectively. If the file isn't flagged, or is flagged wrongly, some rendering or conversion functions can produce the wrong results. But we can force FFmpeg to interpret the input one way or the other.
Use
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 -src_range 0 -dst_range 1 example.png/jpg
This tells FFmpeg to assume studio or limited range and to output to full range. The colours still won't be identical due to color encoding conversion but the major change should disappear.
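A filter-based alternative (a sketch) does the same conversion with the scale filter's range options. The demo input is generated here and grabbed at -ss 0; substitute the question's foobar.mp4 and time mark:

```shell
# Synthetic stand-in for foobar.mp4; substitute the real file and time mark.
ffmpeg -y -loglevel error -f lavfi -i testsrc2=duration=1:size=320x240:rate=25 foobar.mp4
# Expand limited (tv) range to full (pc) range while grabbing the frame.
ffmpeg -y -loglevel error -ss 0 -i foobar.mp4 -vframes 1 \
       -vf "scale=in_range=tv:out_range=pc" example.png
```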
I don't know about ffmpeg specifically. But, in general, JPEG images can have compression that lowers the quality slightly in exchange for a large reduction in file size. Most programs that can write JPEG files will have a switch (or however they take options) which sets the "quality" or "compression" or something like that. I don't seem to have an ffmpeg on any of the half dozen machines I have open, or I'd tell you what I thought the right one was.
Hey, I want to split a video which is one second long (25fps) into 25 separate video files. I know I can split it up into JPEGs, but I need to retain the audio, so that when I recompile it the audio is still there.
This is what I tried to grab the first frame only (with audio):
ffmpeg -i 1.mov -vcodec mjpeg -qscale 1 -an -ss 00:00:00:00 -t 00:00:00:1 frame1.mov
But it doesn't seem to work. Am I wrong in assuming ffmpeg supports time stamps in this format? hh:mm:ss:f?
Thanks
You are wrong in assuming ffmpeg supports timestamps in that format, but that's not the only problem.
ffmpeg does not support the time format you're using. Options are either the time in seconds, or hh:mm:ss[.xxx] (two colons and a dot instead of three colons).
Your example code specifically strips the audio. That's what -an does.
Splitting by time when you actually want frames is not a great idea, though since the audio frames are unlikely to match 1:1 with the video frames, it might be the best option.
Most importantly, most video and audio codecs are lossy and thus do not take well to being broken up into lots of pieces and then put back together. To do this without horribly mangling the quality, you need to first convert both video and audio into raw formats, manipulate those, and then re-transcode to compressed formats once you've done whatever you want to do with the pieces. You'll still lose some quality, but not as much as otherwise.
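A sketch of that workflow for one segment, using intra-only MJPEG video and PCM audio as near-raw intermediates. One frame at 25 fps is 1/25 = 0.04 s, and the time format is hh:mm:ss.xxx, not hh:mm:ss:ff. The input 1.mov is synthesized here so the snippet is self-contained:

```shell
# Synthetic 1-second, 25 fps stand-in for 1.mov (test pattern + 440 Hz tone).
ffmpeg -y -loglevel error -f lavfi -i testsrc2=duration=1:size=320x240:rate=25 \
       -f lavfi -i sine=frequency=440:duration=1 -shortest 1.mov
# Cut the first frame's worth of video and audio (0.04 s at 25 fps).
ffmpeg -y -loglevel error -i 1.mov -ss 00:00:00.000 -t 0.04 \
       -c:v mjpeg -q:v 1 -c:a pcm_s16le frame1.mov
```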