JPEG compression and progressive JPEG

I have a high-res baseline JPEG which I want to compress from 6 MB to roughly 300 kB for my website and make progressive.
I know how to do both: progressive with Photoshop, and compression with an online tool or a gulp/grunt task.
I am wondering which order gives the best quality:
1. First compress the original image, then make it progressive.
2. First make it progressive, then compress the image.
3. It doesn't matter :)

As regards quality, that is a difficult call since it depends on the images, which you don't show. And if you are going for a 20x reduction in size, you must expect some loss of quality, so I'll leave you to assess that. As regards the processing...
You can do both at once with ImageMagick which is installed on most Linux distros and is available for macOS and Windows.
Check input image size is 6MB:
ls -lrht input.jpg
-rw-r--r-- 1 mark staff 6.0M 2 Dec 16:09 input.jpg
Check input image is not interlaced:
identify -verbose input.jpg | grep -i interlace
Interlace: None
Convert to progressive/interlaced JPEG and 300kB in size:
convert input.jpg -interlace plane -define jpeg:extent=300KB result.jpg
Check size is now under 300kB:
ls -lhrt result.jpg
-rw-r--r--@ 1 mark staff 264K 2 Dec 16:11 result.jpg
Check now interlaced:
identify -verbose result.jpg | grep -i interlace
Interlace: JPEG
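If you want a rough, objective idea of how much quality the 20x reduction cost, ImageMagick's compare tool can report a metric such as PSNR between the original and the recompressed file (just a sketch using the filenames above; higher numbers mean less degradation, and the figure is printed on stderr):
compare -metric PSNR input.jpg result.jpg null: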
You can also use jpegtran, which is lighter-weight than ImageMagick; it converts to progressive losslessly, but note that it cannot compress to a target file size:
jpegtran -copy none -progressive input.jpg output.jpg
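If you would rather keep the two-step workflow from your question, a sensible order (just a sketch, using a hypothetical intermediate filename) is to do the lossy size reduction first and then the progressive conversion, since the jpegtran step is lossless and barely changes the file size:
convert input.jpg -define jpeg:extent=300KB smaller.jpg
jpegtran -copy none -progressive smaller.jpg result.jpg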

Related

Optimize size of tiled TIFF with ImageMagick

I'm currently creating tiled TIFFs from JPEG files using ImageMagick. My aim is to work with the IIPImage server.
I can generate the files easily, but my problem is that I have to deal with a large warehouse of images, and it's crucial to optimize the space occupied by my TIFF files.
Using a compression quality of 45 (and 256x256 tiles) I obtain acceptable quality, and that is the maximum level of optimization I know of.
With that configuration, my TIFF files end up a little larger than the original JPEG files.
For example, if a JPEG weighs 10 MB, the resulting TIFF weighs 11.4 MB. That's good, but not enough, because if my initial warehouse weighs 2 TB, I have to plan for at least 4 TB for my project.
So I want to know whether there is a way to optimize the size of my TIFF files further, without losing more quality than at 45, using ImageMagick or another tool.
For information, I'm using this command to generate the TIFFs:
convert <jpeg file> -quality 45 -depth 8 +profile '*' -define tiff:tile-geometry=256x256 -compress jpeg 'ptif:<tiff file>'
Thanks!
I thought I'd just add a note to Mark Setchell's excellent answer, but it came out too long, so I've made a separate one, sorry.
Your problem is that ImageMagick (at least on current Ubuntu) saves pyramidal JPEG TIFFs as RGB rather than YCbCr, so they are huge. For example, wtc.jpg is a 10,000 x 10,000 pixel JPEG image saved with the default Q75:
$ time convert wtc.jpg -quality 45 -depth 8 +profile '*' -define tiff:tile-geometry=256x256 -compress jpeg 'ptif:x-convert.tif'
real 0m27.553s
user 1m10.903s
sys 0m1.129s
$ ls -l wtc.jpg x-convert.tif
-rw-r--r-- 1 john john 15150881 Mar 16 08:55 wtc.jpg
-rw-r--r-- 1 john john 37346722 Mar 30 20:17 x-convert.tif
You can see the compression type like this:
$ tiffinfo x-convert.tif | grep -i interp
Photometric Interpretation: RGB color
Perhaps there's some way to make it use YCbCr instead? I'm not sure how, unfortunately.
I would use libvips instead. It's more than 10x faster (on this laptop anyway), uses much less memory, and it enables YCbCr mode correctly, so you get much smaller files:
$ time vips tiffsave wtc.jpg x-vips.tif --compression=jpeg --tile --tile-width=256 --tile-height=256 --pyramid
real 0m2.180s
user 0m2.595s
sys 0m0.082s
$ ls -l x-vips.tif
-rw-r--r-- 1 john john 21188074 Mar 30 20:27 x-vips.tif
$ tiffinfo x-vips.tif | grep -i interp
Photometric Interpretation: YCbCr
If you set Q lower, you can get the size down more:
$ vips tiffsave wtc.jpg x-vips.tif --compression=jpeg --tile --tile-width=256 --tile-height=256 --pyramid --Q 45
$ ls -l x-vips.tif
-rw-r--r-- 1 john john 12664900 Mar 30 22:01 x-vips.tif
Though I'd stick at the default Q75 myself.
I am not familiar with the IIPImage server, so my thoughts may not apply. If you store a tiled pyramidal TIFF, you are storing multiple resolutions, and all but the highest resolution are redundant, so could you maybe just store the highest resolution and generate the lower ones on demand?
The "PalaisDuLouvre.tif" image is 2MB as a tiled TIF:
ls -lhr PalaisDuLouvre.tif
-rw-r--r--@ 1 mark staff 1.9M 30 Mar 11:24 PalaisDuLouvre.tif
and it contains the same image at 6 different resolutions:
identify PalaisDuLouvre.tif
PalaisDuLouvre.tif[0] TIFF 4000x828 4000x828+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[1] TIFF 2000x414 2000x414+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[2] TIFF 1000x207 1000x207+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[3] TIFF 500x103 500x103+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[4] TIFF 250x51 250x51+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
PalaisDuLouvre.tif[5] TIFF 125x25 125x25+0+0 8-bit sRGB 1.88014MiB 0.000u 0:00.000
Yet, I can store it in better quality (90%) than your tiled TIFF like this:
convert PalaisDuLouvre.tif[0] -quality 90 fullsize.jpg
with size 554kB:
-rw-r--r-- 1 mark staff 554K 30 Mar 13:44 fullsize.jpg
and generate a tiled TIF the same as yours in under 1 second on demand with:
convert fullsize.jpg -define tiff:tile-geometry=256x256 -compress jpeg ptif:tiled.tif
Alternatively, you could use vips to make your TIFF pyramid even faster. The following takes 0.2 seconds on my iMac, i.e. nearly 5x faster than ImageMagick:
vips tiffsave fullsize.jpg vips.tif --compression=jpeg --Q=45 --tile --tile-width=256 --tile-height=256 --pyramid

How to shrink and optimize images?

I'm currently using jpegoptim on CentOS 6. It lets you set a quality and file size benchmark. However, it doesn't let you resize the images.
I have 5000 images of all file sizes and dimensions that I want to shrink to a max width and max file size.
For example, I'd like to get all images down to a maximum width of 500 pixels and 50 KB.
How can I shrink and optimize all of these images?
You can do this with ImageMagick, but it is hard to say explicitly which way to do it, as it depends on whether all the files are in the same directory and also whether you have, or can use, GNU Parallel.
Generally, you can reduce the size of a single image to a specific width of 500 like this:
# Resize image to 500 pixels wide
convert input.jpg -resize 500x result.jpg
where input.jpg and result.jpg are permitted to be the same file. If you wanted to do the height, you would use:
# Resize image to 500 pixels high
convert input.jpg -resize x500 result.jpg
since dimensions are specified as width x height.
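If you give both dimensions, ImageMagick treats them as a bounding box and preserves the aspect ratio, so, for example (filenames are placeholders):
# Fit the image within a 500x500 box, preserving aspect ratio
convert input.jpg -resize 500x500 result.jpg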
If you only want to reduce files that are larger than 500 pixels, and not do any up-resing (increasing resolution), you add > to the dimension, quoting it so the shell does not treat it as a redirection:
# Resize images wider than 500 pixels down to 500 pixels wide
convert image.jpg -resize '500x>' image.jpg
If you want to reduce the file size of the result, you must use a -define to guide the JPEG encoder as follows:
# Resize image to no more than 500px wide and keep output file size below 50kB
convert image.jpg -resize '500x>' -define jpeg:extent=50KB result.jpg
So, now you need to put a loop around all your files:
#!/bin/bash
# nullglob: expand to nothing rather than a literal '*.jpg' if no files match
# nocaseglob: also match .JPG, .Jpg, etc.
shopt -s nullglob
shopt -s nocaseglob
for f in *.jpg; do
    # Shrink to at most 500 pixels wide and cap the output at roughly 50kB, overwriting in place
    convert "$f" -resize '500x>' -define jpeg:extent=50KB "$f"
done
If you like thrashing all your CPU cores, do that using GNU Parallel to get the job done faster.
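For example, with GNU Parallel installed, something along these lines should work (a sketch, assuming all the JPEGs are in the current directory; the -q flag stops the > in the geometry being treated as a shell redirection):
# Run one convert per CPU core across all JPEGs in the current directory
parallel -q convert {} -resize '500x>' -define jpeg:extent=50KB {} ::: *.jpg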
Note that if you have a file that is smaller than 500px wide, ImageMagick will not process it, so if it is smaller than 500 pixels wide and also larger than 50kB, it will not get reduced in terms of file size. To catch that unlikely edge case, you may need to run another find afterwards to locate any files still over 50kB and run them through convert again, but without the -resize, something like this:
find . -type f -iname "*.jpg" -size +51200c -exec convert {} -define jpeg:extent=50KB {} \;
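If the files are spread across sub-directories rather than all in one place, a recursive variant using find could look like this (a sketch):
# Recursively resize and cap the size of every JPEG under the current directory, in place
find . -type f -iname '*.jpg' -exec convert {} -resize '500x>' -define jpeg:extent=50KB {} \;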

ImageMagick error with montage command

I'm stitching 8 images of 8k by 8k pixels in a row using the montage command.
This is what I enter in:
montage -mode concatenate -limit area 0 -tile x1 image1.png image2.png image3.png image4.png image5.png image6.png image7.png image8.png out1.png
This is the error I get out:
montage: magick/quantum.c:215: DestroyQuantumInfo: Assertion `quantum_info->signature == 0xabacadabUL' failed.
Abort
Can anyone help? Thanks
You may get on better with this command which does what I think you are trying to do:
convert +append image{1..8}.png out.png
As you can see from the following identify command, the images have been laid out side-by-side to make an image 64k pixels wide as a result of the +append command. Just FYI, use -append to lay them out one above the other in a 64k pixel tall stack.
identify out.png
out.png PNG 64000x8000 64000x8000+0+0 8-bit sRGB 2c 62.4KB 0.000u 0:00.000
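The -append variant mentioned above would stack them vertically instead, for example (a sketch with the same input names):
convert -append image{1..8}.png stacked.png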
Your originally posted command also works fine with my ImageMagick version:
ImageMagick 6.8.9-5 Q16 x86_64 2014-07-29

Command line batch image cropping tool

Is there any lightweight command-line batch image cropping tool (Linux or Windows) which can handle a variety of formats?
In Linux you can use mogrify for CLI image manipulation:
mogrify -crop {Width}x{Height}+{X}+{Y} +repage image.png
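For example, to cut out a 640x480 region whose top-left corner is 10 pixels from the left and 20 pixels from the top (the filename is a placeholder), you might run:
mogrify -crop 640x480+10+20 +repage image.png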
ImageMagick's convert does the trick for me (and does much more than cropping):
convert -crop +100+10 in.jpg out.jpg
crops 100 pixels off the left border, 10 pixels from the top.
convert -crop -100+0 in.jpg out.jpg
crops 100 pixels off the right, and so on. The ImageMagick website has more detail:
http://www.imagemagick.org/Usage/crop/#crop
ImageMagick is what you want -- tried and true.
I found nconvert pretty handy so far.
for f in final/**/*.jpg;
do
    # Crop a 950x654 region starting 660 pixels down from the top of each JPEG
    convert -crop 950x654+0+660 "$f" "${f%.jpg}.jpg"
done
This loops through all the sub-folders (bash's globstar option must be enabled with shopt -s globstar) and crops each .jpg file in place.
macOS has the sips image processing tool built in. The cropping functions available are:
-c, --cropToHeightWidth pixelsH pixelsW
--cropOffset offsetY offsetH
Easy with sips: just set the offset to start the cropping:
sips --cropOffset 1 1 -c <height> <width> -o output.png input.png
I have scanned some pages and all ~130 of them need the lower ~1/8 of the page cut off.
Using mogrify didn't work for me (in hindsight, the × in my geometry should have been a plain x):
a#a-NC210-NC110:/media/a/LG/AC/Learn/Math/Calculus/Workshop/clockwise/aa$ mogrify -quality 100 -crop 2592×1850+0+0 *.jpg
mogrify.im6: invalid argument for option `2592×1850+0+0': -crop # error/mogrify.c/MogrifyImageCommand/4232.
However convert did:
a#a-NC210-NC110:~/Pictures/aa$ convert '*.jpg[2596x1825+0+0]' letter%01d.jpg
a#a-NC210-NC110:~/Pictures/aa$
I learnt this here under the Inline Image Crop section.
Notice my syntax: I had to put my geometry in brackets: [].
Using the successful syntax above but with mogrify simply didn't work (mogrify edits files in place and doesn't take an output filename, so it tried to read letter%01d.jpg as another input), producing:
a#a-NC210-NC110:~/Pictures/aa$ mogrify '*.jpg[2596x1825+0+0]' letter%01d.jpg
mogrify.im6: unable to open image `letter%01d.jpg': No such file or directory # error/blob.c/OpenBlob/2638.
Linux a-NC210-NC110 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux
Lubuntu 14.04 LTS

Can ImageMagick return the image size?

I'm using ImageMagick from the command line to resize images:
convert -size 320x240 image.jpg
However, I don't know how to determine the size of the final image. Since the scaling is proportional, it's very possible that the new image is 100x240 or 320x90 in size (not 320x240).
Can I call the 'convert' command to resize the image and return the new image dimensions? For example, pseudo code:
convert -size 320x240 -return_new_image_dimension image.jpg // returns the new resized image dimensions
-ping option
This option is also recommended, as it prevents the entire image from being loaded into memory, as mentioned at https://stackoverflow.com/a/22393926/895245:
identify -ping -format '%w %h' image.jpg
man identify says:
-ping efficiently determine image attributes
We can test it out, for example, with one of the humongous images in Wikimedia's "Large image" category, e.g. this ultra-high-resolution image of Van Gogh's Starry Night, which Wikimedia says is 29,696 × 29,696 pixels, file size 175.67 MB:
wget -O image.jpg https://upload.wikimedia.org/wikipedia/commons/e/e8/Van_Gogh_-_Starry_Night_-_Google_Art_Project-x0-y0.jpg
time identify -ping -format '%w %h' image.jpg
time identify -format '%w %h' image.jpg
However, I observed that -ping made no difference to the time, at least in this case; maybe it only matters for other image formats?
Tested on ImageMagick 6.9.10, Ubuntu 20.04.
See also: Fast way to get image dimensions (not filesize)
You could use an extra call to identify:
convert -size 320x240 image.jpg; identify -format "%[fx:w]x%[fx:h]" image.jpg
I'm not sure about the %w and %h format escapes. While Photoshop says my picture is 2678x3318 (and I really trust Photoshop), identify gives me:
identify -ping -format '=> %w %h' image.jpg
=> 643x796
(so does [fx:w] and [fx:h])
I had to use
identify -ping -format '=> %[width] %[height]' image.jpg
=> 2678x3318
I don't know what's going on here, but you can see both values on standard output (where the width and height before the => are the correct ones)
identify -ping image.jpg
image.jpg PAM 2678x3318=>643x796 643x796+0+0 16-bit ColorSeparation CMYK 2.047MB 0.000u 0:00.000
The documentation says %w is the current width and %[width] is original width. Confusing.
%w and %h may be correct for most uses, but not for every picture.
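A quick way to see both sets of escapes side by side on the same file is something like this (a sketch, reusing image.jpg):
identify -ping -format 'current: %wx%h original: %[width]x%[height]\n' image.jpg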
If you specify the -verbose option, convert prints:
original.jpg=>scaled.jpg JPEG 800x600=>100x75 100x75+0+0 8-bit sRGB 4.12KB 0.020u 0:00.009
(the 100x75 after the => is the size of the resized output)
