I've tried WebP converter to convert images to WebP format, but it takes about 1-2 seconds per image. I have around 70 images and I'd like to convert them in less than a minute. Is there a quicker way to do it?
'cwebp -lossless -q 0 -m 1' is used in https://developers.google.com/speed/webp/docs/webp_lossless_alpha_study for fast lossless compression. It averages 19 ms per image over a web corpus of 1000 randomly sampled PNG images.
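With around 70 images, the easiest win is to run several conversions in parallel. A minimal sketch, assuming cwebp is on your PATH; adjust -P to your core count (filenames are placeholders):

# run up to 4 cwebp processes at a time; foo.png becomes foo.webp
printf '%s\0' *.png | xargs -0 -P4 -I{} sh -c 'cwebp -lossless -q 0 -m 1 "$1" -o "${1%.png}.webp"' _ {}

At ~20 ms per image the encoder itself is already fast; parallelism mainly helps when per-image time is dominated by larger inputs or slower settings.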
How big are these images? What kind of computer do you use to convert? Do you have an alpha-channel in the images? Do you convert to lossless or lossy? Which version of cwebp are you using?
I am seeing the following image in a paper:
However, when I download the associated dataset with the paper, the images are like this:
How can I make the almost-black images in the dataset look like the one in the paper?
link to dataset: http://www.cs.bu.edu/~betke/research/HRMF2/
link to paper: http://people.bu.edu/breslav/084.pdf
The images in this dataset are saved as 16-bit PNG data, but an ordinary display has only an 8-bit dynamic range, so they cannot be displayed without windowing. Please try ImageJ; it can map the 16-bit data into visible 8-bit data.
https://imagej.nih.gov/ij/index.html
It displays the image as follows.
As the other answers rightly say, the images are in 16-bit PNG format. You can convert one to a conventionally scaled, viewable JPEG with ImageMagick, which is installed on most Linux distros and is available for macOS and Windows.
So, in Terminal:
magick 183.png -auto-level 183.jpg
If you would like to convert all 800 images to JPEGs in one go, you can use ImageMagick's mogrify like this:
magick mogrify -format JPEG -auto-level *png
Note that if your ImageMagick is the older v6 (as opposed to the v7 commands I used) the two commands become:
convert 183.png -auto-level 183.jpg
mogrify -format JPEG -auto-level *png
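If you are not sure which version you have, the version banner will tell you:

magick -version     # present on v7 only
convert -version    # the v6 command (many v7 installs keep it as an alias)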
The images in this dataset have a high dynamic range of 16 bits per pixel. The image viewer you are using maps the 16-bit pixels to 8-bit ones in order to display them (most displays can only effectively handle 8 or 10 bits of brightness). Most image viewers simply truncate the least significant 8 bits, which in your case produces a nearly black image. To get better results, use ImageJ (the Fiji distribution is the easiest way to get started), which displays this:
I have this image (a photo taken by me on an SGS 9 plus): Uncompressed JPG image. Its dimensions are 4032 x 3024 and its file size is around 3MB. I compressed it with TinyJPG Compressor and the result was 1.3MB. For PNG images I used Online-Convert, and there the WebP files came out much smaller, even smaller than PNGs compressed with TinyPNG. I expected something similar here, especially since I read the article JPG to WebP – Comparing Compression Sizes, where WebP comes out much smaller than compressed JPG.
But when I convert my JPG to WebP in various online image conversion tools, I get a 1.5-2MB file, so it is bigger than my compressed JPG. Am I missing something? Shouldn't WebP be much smaller than a compressed JPG? Thank you in advance for every answer.
These are lossy codecs, so their file size mostly depends on quality setting used. Comparing just file sizes from various tools doesn't say anything without ensuring images have the same quality (otherwise they're incomparable).
There are a couple of possibilities:
JPEG may compress better than WebP. WebP tends to blur out fine detail, store color at lower resolution, and use less than the full 8 bits of the color space. At the higher end of the quality range, a well-optimized JPEG can be similar to or better than WebP.
However, most of the file size difference between modern lossy codecs comes down to quality settings. The typical difference between JPEG and WebP at the same quality is 15%-25%, but the file sizes produced by a single codec can easily differ by 10× between a low-quality and a high-quality encode. So most of the time when you see a huge difference in file sizes, it's probably because the tools chose different quality settings (and/or recompression lost fine details in the image, which also greatly affects file size). Even a visual difference too small for the human eye to notice can cause a noticeable difference in file size.
My experience is that lossy WebP is superior below quality 70 (in libjpeg terms) and JPEG is often better than WebP at quality 90 and above. In between these qualities it doesn't seem to matter much.
I believe WebP quality numbers are inflated by about 7 points, i.e., to match JPEG quality 85 one needs to use WebP quality 92 (when using the cwebp tool). I didn't measure this rigorously; it is based on rather ad hoc experiments and some butteraugli runs.
Lossy WebP has difficulty compressing complex textures densely, such as the leaves of trees, whereas JPEG's difficulties are with thin lines against flat backgrounds, like a telephone line hanging against the sky, or computer graphics.
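If you want to verify this on your own photo, encode both formats from the same source at roughly matched qualities and measure the error against the original instead of trusting the tools' labels. A minimal sketch, assuming ImageMagick v7 plus cwebp/dwebp are installed, with photo.png as a placeholder source (the 85/92 pairing follows the rough offset suggested above):

magick photo.png -quality 85 out.jpg
cwebp -q 92 photo.png -o out.webp
dwebp out.webp -o out_webp.png                             # decode so ImageMagick can read it
magick compare -metric RMSE photo.png out.jpg null:        # error introduced by the JPEG
magick compare -metric RMSE photo.png out_webp.png null:   # error introduced by the WebP
ls -l out.jpg out.webp                                     # only now are the sizes comparable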
I'm using ImageMagick version 7.0.5-4 to perform image processing operations like crop and resize with the go-graphics library. I also manage a pool of MagickWand objects.
Features: Cipher DPC HDRI Modules
Delegates (built-in): bzlib freetype jng jpeg ltdl lzma png tiff xml zlib
The read time of an image into a MagickWand object via magickWand.ReadImage(<url>) is much higher for PNG images than for JPG images. For images around 22KB in size, reading a JPG file takes around 300ms, while a PNG takes around 1-2 minutes.
Edited:
When a single request is sent to the server, the read operation takes around 20ms, but under a load of 100 rps it climbs to 2-4 minutes. This trend shows up only with PNG images, not with JPG.
Any ideas on what can be done differently when reading PNG files, and how it can be made performant? It's fine to reduce the quality of the images to around 60%. I tried options like SetImageDepth but it made no difference.
The compression quality parameter has a different effect and meaning when dealing with PNG files from when dealing with JPEG files.
PNG compression is always lossless and the appearance is never affected by the quality. As I cannot see your images, I would suggest you either don't bother compressing since it will happen anyway, or that you use a quality of 75. If you tell me you are saving cartoons or line drawings, I might advise differently.
Please have a read here and do some experiments yourself with the tradeoff between time and filesize.
I have made you some plots to show the effect on time to compress and compressed size for different quality settings using two different kinds of images - cartoons and photos - so you can see the effect.
Here is a cartoon:
Look at how the quality setting (0-100) affects time and size with JPEG output:
Now look what happens if you use those same quality settings (0-100) when generating PNG output:
Now let's look at compressing an iPhone photo to JPEG:
And when compressing an iPhone photo to a PNG:
Hopefully you can see that using one quality setting from your config file for both PNG and JPEG and with photos and cartoons/line drawings is not ideal.
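If you would like to reproduce these measurements on your own images, a rough sketch (GNU/Linux shell assumed; photo.png is a placeholder):

for q in 10 25 50 75 90; do
  start=$(date +%s%N)
  magick photo.png -quality $q out-$q.png
  end=$(date +%s%N)
  echo "quality=$q time=$(( (end - start) / 1000000 ))ms size=$(wc -c < out-$q.png) bytes"
done

Run it once writing JPEGs and once writing PNGs and you should see shapes similar to the plots above.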
I'm having problems opening up certain jpeg files (ones from Facebook and Instagram, and some Samsung phones) in Photoshop. I've read that if I use mogrify -comment test insert-image-here.jpg, it will "handle" the file and somehow it'll open up in Photoshop. And surprisingly, it works very well.
However, I recently "mogrified" an image using the same command, only to have the file size go down by 0.71MB, which was alarming, as I don't want to recompress my JPEGs. I then mogrified it ten more times, but I didn't see any obvious visual losses. I tried "mogrifying" a small 170KB image 20 times, and the file size initially decreased, then increased with every subsequent iteration. I compared the files by swapping between them quickly, but didn't see any quality loss.
What is mogrify doing that is decreasing the filesize, seemingly without reducing the quality? Is there a quality option that I can add to make mogrify not reduce the filesize?
This is a similar question from another user: Why does the size of my image decrease when I add a comment to an image? But I cannot discern any quality loss whatsoever, and if my image were being recompressed at 95 or 70% quality 20 times, it would be immediately noticeable.
Here is the link to the image that I am using as a test: http://ocicat.wildrain.tripod.com/sitebuildercontent/sitebuilderpictures/aragon.jpg
Edit:
I ran two more images through the mogrify command 1000 times (one of white noise, which I didn't include). I still don't see any quality loss. Is JPEG recompression loss really that unnoticeable (or maybe my eyes are failing me)? Interestingly, the final file size and the original file size of these two images are the same.
Zero iterations:
1000 iterations:
I do not understand why you would have trouble opening a JPG from those sources, although some viewers will not handle CMYK JPG files properly; I would be surprised if Photoshop had that problem.

If you use ImageMagick to add a comment, it will decompress your file and recompress it. ImageMagick will use the quality value stored in the file if it can find one and will write the output at that same value. If it cannot find a quality value, it will compress at 92. That could cause a decrease in file size if the input was quality 100 but was recompressed at 92; the next time you do the same, it will continue to use 92. There might be some loss of effective quality each pass because JPG is lossy, but at 92 it probably will not be visually noticeable.

You could try convert in place of mogrify and see if that is any different. Also, there is no need to add a comment: just reading the input and saving it again will decompress and recompress it in ImageMagick. See http://www.imagemagick.org/script/command-line-options.php#quality
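A quick way to watch this on your own file is to ask ImageMagick what quality it detects before and after a round trip; the %Q format escape prints the estimated JPEG quality:

identify -format "%Q\n" aragon.jpg    # quality of the original
mogrify -comment test aragon.jpg
identify -format "%Q\n" aragon.jpg    # quality after the round trip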
Your image is sRGB and has a quality of 75 according to Imagemagick
identify -verbose Aragon.jpg
Format: JPEG (Joint Photographic Experts Group JFIF format)
Mime type: image/jpeg
Class: DirectClass
Geometry: 9600x14400+0+0
Resolution: 2400x2400
Print size: 4x6
Units: PixelsPerInch
Type: TrueColor
Endianess: Undefined
Colorspace: sRGB
Depth: 8-bit
...
Compression: JPEG
Quality: 75
Orientation: Undefined
Properties:
date:create: 2017-08-18T21:26:37-07:00
date:modify: 2017-08-18T21:26:37-07:00
jpeg:colorspace: 2
jpeg:sampling-factor: 2x2,1x1,1x1
signature: bad15aa674dc45312d47627b620c895ee76b1fa4457b55bf1acca85883de5963
Artifacts:
filename: aragon.jpg
verbose: true
So the CMYK issue is not present.
Is this file before or after you processed it with Imagemagick mogrify?
If this file was originally quality 75 and was recompressed at 92 or some higher value than 75, then it might increase in file size.
If you do not want the file size to decrease then recompress at 75 or higher. 100 would give you the least compression, but may increase the file size.
Other factors could be a change in -sampling-factor for the JPG. See http://www.imagemagick.org/script/command-line-options.php#sampling-factor. There could also be a difference in the compression tables: ImageMagick uses libjpeg to read and decompress JPGs, while the original JPG may have been compressed using other tools.
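If you suspect the sampling factor, you can pin it (along with the quality) to match what identify reported for the input. A sketch using the values shown above; the output name is a placeholder:

convert aragon.jpg -quality 75 -sampling-factor 2x2,1x1,1x1 aragon_same.jpg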
Another factor might be the introduction or removal of a color profile. Imagemagick should not be changing that automatically.
My best suggestion is to check the quality of the input and what quality is assigned to the output. Also check the input colorspace. Use identify -verbose yourimage to see what may have changed.
Unfortunately, I do not know exactly what is happening. I can only tell you some of the factors that may be involved.
I used Imagemagick 6.9.9.7 Q16 Mac OSX to convert your file.
convert aragon.jpg aragon2.jpg
The input has Filesize: 5.94157MiB. The output has Filesize: 5.23264MiB. Both files have the same quality, 75, so the slight change in file size comes from the decompression and recompression, probably from an actual loss of quality (JPG compression is lossy), or perhaps from a change in the compression tables used. Doing it once more yields a Filesize: 5.23289MiB, a very slight increase. Doing
convert aragon.jpg -quality 100 aragon4.jpg
Yields a Filesize: 13.968MiB. Since we asked for a higher quality (100) than the input (75), the file size increases dramatically even though the compression is still lossy.
In my application I need to resize and make the quality on PNG files poorer.
At full size the PNGs are 3100x4400px and use 2.20MB of disk space.
When running the following command:
convert -resize 1400 -quality 10 input.png output.png
the images are resized to 1400x2000 but use 5.33MB of disk space.
So my question is: How can I reduce the file size?
You can further reduce the quality of a PNG by using posterization:
https://github.com/pornel/mediancut-posterizer (Mac GUI)
This is a lossy operation that allows zlib to compress better.
Convert the image to PNG8 using pngquant.
It reduces images to 256 colors, so quality depends on the type of image, but pngquant makes very good palettes, so you might be surprised how often it works.
Use Zopfli-png or AdvPNG to re-compress images better.
This is lossless and recommended for all images if you have CPU cycles to spare.
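A minimal sketch chaining the resize, the palette reduction, and the lossless recompression, assuming pngquant and advpng are installed (filenames and the quality range are placeholders):

convert input.png -resize 1400 resized.png
pngquant --quality 40-60 --force --output small.png resized.png   # lossy: reduce to a 256-color palette
advpng -z -4 small.png   # lossless: recompress the deflate stream harder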
After using imagemagick to resize, you can compress the image using pngquant.
On Mac (with Homebrew), brew install pngquant, then:
pngquant <filename.png>
This will create a new image filename-fs8.png that is normally much smaller in size.
The help page says that the -quality option used with PNG sets the compression level for zlib, where (roughly) 0 is the worst compression and 100 is the best (the default is 75). So try setting -quality to 100, or even removing the option.
Another method is to specify PNG:compression-level=N, PNG:compression-strategy=N and PNG:compression-filter=N to achieve even better results.
http://www.imagemagick.org/script/command-line-options.php#quality
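For example, a sketch of that syntax (level 9 is the maximum zlib effort, strategy 1 is zlib's filtered strategy, and filter 5 is adaptive PNG filtering; whether it beats the defaults depends on the image):

convert input.png -define png:compression-level=9 -define png:compression-strategy=1 -define png:compression-filter=5 output.png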
For lazy people that arrived here wanting to paste in a one liner:
mogrify -resize 50% -quality 50 *.png && pngquant *.png --ext .png --force
This modifies all of the PNGs in the current directory in place, so make sure you have a backup. Adjust the resize and quality parameters to suit your needs. I did a quick experiment, and running mogrify first and then pngquant resulted in a significantly smaller image size.
The Ubuntu package for pngquant is called "pngquant", but I already had it installed on 20.04 LTS, so it may be there by default.
I found that the best way was to use the -density [value] parameter.