I want to resize some PNG images with imagemagick, but it's making the files 5 times larger when I convert them:
$ convert -resize 50% -quality 80 01.png 01_half.png
$ ls -hal 01*.*
-rw-rw-r-- 1 3.3M Sep 9 09:05 01_half.png
-rwxr-xr-x 1 651K Jan 13 2011 01.png
From 651KB to 3.3MB! Can anyone suggest how to stop this happening?
Please note that ImageMagick's -quality option behaves differently for PNGs than it does for e.g. JPEGs, and 80 looks like a pretty odd value for a PNG. As explained in the manual, for PNG output the quality value is split into two digits: the first (tens) digit controls the zlib compression level, and the second (ones) digit controls the filter type.
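So if you do keep -quality, a value like 95 (zlib level 9 plus adaptive filtering) should suit PNG output better; a sketch reusing the files from the question:
$ convert -resize 50% -quality 95 01.png 01_half.png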
Related
I want to find the files whose size is less than 1M, so I wrote find . -size -1M. But it doesn't seem to work:
find . -size -1M | xargs ls -lh
-rw-rw-r-- 1 xyz xyz 0 Apr 2 14:48 ./test/score
-rw-rw-r-- 1 xyz xyz 0 Apr 2 14:48 ./test/ir1
On the contrary, it's surprising that find . -size 1M does work.
From man find:
The + and - prefixes signify greater than and less than, as usual; i.e., an exact size of n units does not match. Bear in mind that the size is rounded up to the next unit. Therefore -size -1M is not equivalent to -size -1048576c. The former only matches empty files, the latter matches files from 0 to 1,048,575 bytes.
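So, to match files strictly smaller than 1MiB, give the size in bytes with the c suffix instead; for example, reusing the pipeline from the question:
find . -size -1048576c | xargs ls -lh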
I'm using the convert utility from ImageMagick to convert raw image bytes to a usable image format such as PNG. My raw files are generated by code, so there are no headers, just pure pixels.
In order to convert my image I'm using this command:
$ convert -depth 1 -size 576x391 -identify gray:image.raw image.png
gray:image.raw=>image.raw GRAY 576x391 576x391+0+0 1-bit Gray 28152B 0.010u 0:00.009
The width is fixed and known to me. However, I have to work out the height of the image from the file size each time, which is annoying.
Without a height specified, or with a wrong height, the utility complains:
$ convert -depth 1 -size 576 -identify gray:image.raw image.png
convert-im6.q16: must specify image size `image.raw' # error/gray.c/ReadGRAYImage/143.
convert-im6.q16: no images defined `image.png' # error/convert.c/ConvertImageCommand/3258.
$ convert -depth 1 -size 576x390 -identify gray:image.raw image.png
convert-im6.q16: unexpected end-of-file `image.raw': No such file or directory # error/gray.c/ReadGRAYImage/237.
convert-im6.q16: no images defined `image.png' # error/convert.c/ConvertImageCommand/3258.
So I wonder is there a way to automatically detect the image height based on the file/blob size?
A couple of ideas...
You may not be aware of the NetPBM format, but it is very simple, and you may be able to change the software that creates your raw images so that it directly generates PBM format images, which are readable and usable by OpenCV, Photoshop, GIMP, feh, eog and, of course, ImageMagick. It would not require any libraries or extra dependencies in your software; all you need to do is put a textual PBM header on the front, so your file looks like this:
P4
576 391
... YOUR EXISTING BINARY DATA ...
Do not forget to put newlines (i.e. linefeed characters) after the P4 and after the 391.
You can try it for yourself: add a header onto one of your files like this, then view it with GIMP or another tool:
printf "P4\n576 391\n" > image.pbm
cat image.raw >> image.pbm
If you prefer a one-liner, just use a bash command grouping like this - which is equivalent to the 2 lines above:
{ printf "P4\n576 391\n"; cat image.raw; } > image.pbm
Be careful to have all the spaces and semi-colons exactly as I have them!
Another idea, just putting some meat on Fred's answer, is the following one-liner, which uses a bash arithmetic context and a command substitution to derive the height from the file size:
convert -depth 1 -size "576x$(($(stat -c "%s" image.raw)*8/576))" gray:image.raw image.png
Note that if you are on macOS, stat is a little different, so you may prefer the slightly less efficient, but more portable:
convert -depth 1 -size "576x$(($(wc -c < image.raw)*8/576))" gray:image.raw image.png
You have to know the -depth and the width to compute the height for ImageMagick's raw formats. If the depth is 1, then your image is binary (black and white) and each byte holds 8 pixels, so height = 8 * file size (in bytes) / width: 28152 * 8 / 576 = 391.
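Building on that, a minimal sketch (assuming bash, plus the 576-pixel width and filename from the question) that also checks the file size divides into whole rows before converting:
width=576                              # known image width in pixels
bits=$(( $(wc -c < image.raw) * 8 ))   # depth 1 packs 8 pixels per byte
if (( bits % width == 0 )); then
  convert -depth 1 -size "${width}x$(( bits / width ))" gray:image.raw image.png
else
  echo "image.raw does not contain a whole number of ${width}-pixel rows" >&2
fi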
I'm actually creating tiled TIFFs from JPEG files using ImageMagick. My aim is to work with the IIPImage server.
I can generate the files easily, but I have to deal with a large warehouse of images, so it's crucial to minimize the space occupied by my TIFF files.
Thus, using JPEG quality 45 (and 256x256 tiles) I obtain acceptable quality, and that's the best level of optimization I know of.
With that configuration, my TIFF files end up a little larger than the original JPEG files.
For example, if a JPEG weighs 10MB, the resulting TIFF weighs 11.4MB. That's good, but not enough: if my initial warehouse weighs 2TB, I have to plan at least 4TB for my project.
So I want to know whether there is a way to optimize the size of my TIFF files further, without losing more quality than quality 45 already does... using ImageMagick or another tool.
For information, I'm using this command to generate the TIFFs.
convert <jpeg file> -quality 45 -depth 8 +profile '*' -define tiff:tile-geometry=256x256 -compress jpeg 'ptif:<tiff file>'
Thanks!
I thought I'd just add a note to #mark-setchell's excellent answer, but it came out too long, so I've made a separate one, sorry.
Your problem is that imagemagick (at least on current Ubuntu) saves pyramidal JPEG TIFFs as RGB rather than YCbCr, so they are huge. For example, wtc.jpg is a 10,000 x 10,000 pixel JPEG image saved with the default Q75:
$ time convert wtc.jpg -quality 45 -depth 8 +profile '*' -define tiff:tile-geometry=256x256 -compress jpeg 'ptif:x-convert.tif'
real 0m27.553s
user 1m10.903s
sys 0m1.129s
$ ls -l wtc.jpg x-convert.tif
-rw-r--r-- 1 john john 15150881 Mar 16 08:55 wtc.jpg
-rw-r--r-- 1 john john 37346722 Mar 30 20:17 x-convert.tif
You can see the compression type like this:
$ tiffinfo x-convert.tif | grep -i interp
Photometric Interpretation: RGB color
Perhaps there's some way to make it use YCbCr instead? I'm not sure how, unfortunately.
I would use libvips instead. It's more than 10x faster (on this laptop anyway), uses much less memory, and it enables YCbCr mode correctly, so you get much smaller files:
$ time vips tiffsave wtc.jpg x-vips.tif --compression=jpeg --tile --tile-width=256 --tile-height=256 --pyramid
real 0m2.180s
user 0m2.595s
sys 0m0.082s
$ ls -l x-vips.tif
-rw-r--r-- 1 john john 21188074 Mar 30 20:27 x-vips.tif
$ tiffinfo x-vips.tif | grep -i interp
Photometric Interpretation: YCbCr
If you set Q lower, you can get the size down more:
$ vips tiffsave wtc.jpg x-vips.tif --compression=jpeg --tile --tile-width=256 --tile-height=256 --pyramid --Q 45
$ ls -l x-vips.tif
-rw-r--r-- 1 john john 12664900 Mar 30 22:01 x-vips.tif
Though I'd stick at the default Q75 myself.
I am not familiar with the IIPImage server, so my thoughts may be inappropriate. If you store a pyramidal tiled TIFF, you are storing the same image at multiple resolutions, and all but the highest resolution is redundant - so could you maybe just store the highest resolution and generate the lower ones on demand?
The "PalaisDuLouvre.tif" image is 2MB as a tiled TIF:
ls -lhr PalaisDuLouvre.tif
-rw-r--r--# 1 mark staff 1.9M 30 Mar 11:24 PalaisDuLouvre.tif
and it contains the same image at 6 different resolutions:
identify PalaisDuLouvre.tif
PalaisDuLouvre.tif[0] TIFF 4000x828 4000x828+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[1] TIFF 2000x414 2000x414+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[2] TIFF 1000x207 1000x207+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[3] TIFF 500x103 500x103+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[4] TIFF 250x51 250x51+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[5] TIFF 125x25 125x25+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
Yet, I can store it in better quality (90%) than your tiled TIFF like this:
convert PalaisDuLouvre.tif[0] -quality 90 fullsize.jpg
with size 554kB:
-rw-r--r-- 1 mark staff 554K 30 Mar 13:44 fullsize.jpg
and generate a tiled TIF the same as yours in under 1 second on demand with:
convert fullsize.jpg -define tiff:tile-geometry=256x256 -compress jpeg ptif:tiled.tif
Alternatively, you could use vips to make your TIFF pyramid even faster. The following takes 0.2secs on my iMac, i.e. nearly 5x faster than ImageMagick:
vips tiffsave fullsize.jpg vips.tif --compression=jpeg --Q=45 --tile --tile-width=256 --tile-height=256 --pyramid
I have a high-res baseline JPEG which I want to compress from 6MB to roughly 300kB for my website, and I also want to make it progressive.
Now, I know how to do both: progressive with Photoshop, and compression with an online tool or a gulp/grunt task.
I am wondering what the best order is for the image (best quality):
First, compress the original image and then make it progressive.
First, make it progressive and then compress the image.
doesn't matter :)
As regards the quality, that is a difficult call since it is dependent on the images - which you don't show. And if you are going for a 20x reduction in size, you must expect some loss of quality. So, I'll leave you to assess quality. As regards the processing...
You can do both at once with ImageMagick which is installed on most Linux distros and is available for macOS and Windows.
Check input image size is 6MB:
ls -lrht input.jpg
-rw-r--r-- 1 mark staff 6.0M 2 Dec 16:09 input.jpg
Check input image is not interlaced:
identify -verbose input.jpg | grep -i interlace
Interlace: None
Convert to progressive/interlaced JPEG and 300kB in size:
convert input.jpg -interlace plane -define jpeg:extent=300KB result.jpg
Check size is now under 300kB:
ls -lhrt result.jpg
-rw-r--r--# 1 mark staff 264K 2 Dec 16:11 result.jpg
Check now interlaced:
identify -verbose result.jpg | grep -i interlace
Interlace: JPEG
You can also use jpegtran, which is lighter-weight than ImageMagick. Note that jpegtran only performs lossless transformations, so the command below makes the JPEG progressive but won't recompress it towards a 300kB target:
jpegtran -copy none -progressive input.jpg output.jpg
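If you prefer jpegtran for the final step but still want the size target, one possible combination (a sketch; temp.jpg is just a hypothetical scratch filename) is to let ImageMagick do the size reduction and jpegtran the progressive re-encode:
convert input.jpg -define jpeg:extent=300KB temp.jpg
jpegtran -copy none -progressive temp.jpg output.jpg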
Is there any lightweight command-line batch image-cropping tool (Linux or Windows) which can handle a variety of formats?
In Linux you can use
mogrify -crop {Width}x{Height}+{X}+{Y} +repage image.png
for CLI image manipulation
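For example, a sketch that crops every PNG in the current directory to a hypothetical 640x480 region offset 10 pixels from the left and 20 from the top (note that mogrify overwrites its input files in place):
mogrify -crop 640x480+10+20 +repage *.png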
Imagemagick's convert does the trick for me (and much more than cropping):
convert -crop +100+10 in.jpg out.jpg
crops 100 pixels off the left border, 10 pixels from the top.
convert -crop -100+0 in.jpg out.jpg
crops 100 pixels off the right, and so on. The Imagemagick website knows more:
http://www.imagemagick.org/Usage/crop/#crop
Imagemagick is what you want -- tried and true.
I found nconvert pretty handy so far.
shopt -s globstar   # needed so ** recurses into sub-folders
for f in final/**/*.jpg;
do
  convert -crop 950x654+0+660 "$f" "$f"   # crops and overwrites the file in place
done
This script loops through all the sub-folders and crops the .jpg files in place; bash's globstar option must be enabled for ** to recurse.
macOS has sips image processing tool integrated. Cropping functions available are:
-c, --cropToHeightWidth pixelsH pixelsW
--cropOffset offsetY offsetH
Easy with sips: just set the offset to start the cropping:
sips --cropOffset 1 1 -c <height> <width> -o output.png input.png
I have scanned some pages, and all ~130 pages need the lower ~1/8 of the page cut off.
Using mogrify didn't work for me:
a#a-NC210-NC110:/media/a/LG/AC/Learn/Math/Calculus/Workshop/clockwise/aa$ mogrify -quality 100 -crop 2592×1850+0+0 *.jpg
mogrify.im6: invalid argument for option `2592×1850+0+0': -crop # error/mogrify.c/MogrifyImageCommand/4232.
(Note that the geometry above contains a Unicode multiplication sign '×' instead of an ASCII 'x', which by itself is enough for ImageMagick to reject the argument.)
However convert did:
a#a-NC210-NC110:~/Pictures/aa$ convert '*.jpg[2596x1825+0+0]' letter%01d.jpg
a#a-NC210-NC110:~/Pictures/aa$
I learnt this here under the Inline Image Crop section.
Notice my syntax: I had to put my geometry in brackets: [].
Using the successful syntax above but with mogrify simply didn't work, producing:
a#a-NC210-NC110:~/Pictures/aa$ mogrify '*.jpg[2596x1825+0+0]' letter%01d.jpg
mogrify.im6: unable to open image `letter%01d.jpg': No such file or directory # error/blob.c/OpenBlob/2638.
(That is because mogrify modifies files in place and treats every non-option argument as an input image, so letter%01d.jpg is read as an input filename rather than used as an output name.)
Linux a-NC210-NC110 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux
Lubuntu 14.04 LTS