Commands for ImageMagick to create thumbnails

Given a photograph uploaded by a user, what is the best approach to creating a number of various-sized thumbnails using ImageMagick (or GraphicsMagick)? My guess at the steps:
Create a supersample of the image, maintaining the original aspect ratio
Apply a watermark to the supersample
Create the various sized thumbnails from the watermarked supersample
Additional requirements:
Best quality possible (does this mean PNG over JPG?)
Smallest file size possible (does this mean JPG over PNG?)
Use a density of 72x72, with units in ppi
Since I am not that familiar with the intricacies of IM (or GM), some guidance on the best commands that meet my objectives would be highly appreciated. Thanks.

Check out the ImageMagick documentation:
For a specific size http://www.imagemagick.org/Usage/thumbnails/#fit
For watermarks http://www.imagemagick.org/Usage/annotating/#watermarking
The best quality possible is complicated, since different images compress differently. I'm partial to PNG since it has a variety of compression techniques available to allow for experimentation.
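As a rough sketch of the workflow from the question (the 1024px supersample bound, the 30% watermark opacity, the thumbnail sizes and the filenames are all assumptions to adjust), the ImageMagick commands could look like this:

    # 1. Build a supersample, preserving aspect ratio, tagged as 72 ppi
    convert original.jpg -resize 1024x1024 -units PixelsPerInch -density 72 supersample.jpg

    # 2. Overlay the watermark on the supersample at 30% opacity
    composite -dissolve 30 -gravity southeast watermark.png supersample.jpg watermarked.jpg

    # 3. Cut each thumbnail size from the watermarked supersample
    for size in 400x400 200x200 100x100; do
        convert watermarked.jpg -thumbnail "$size" -quality 85 "thumb-${size}.jpg"
    done

GraphicsMagick exposes largely the same options behind a gm prefix (gm convert, gm composite). On the format question: PNG output gives the best quality, JPEG the smallest files, and the -quality value is the main knob for trading one against the other.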

Related

If I have an iPad app with lots of images, is PNG still the best option?

I am working on an iPad application which has hundreds of photo-quality images. I would have naturally assumed to store these images as JPEGs so as to optimize the app file size. However, Apple's guidelines state:
Use the PNG format for images. The PNG format provides lossless image content, meaning that saving image data to a PNG format and then reading it back results in the exact same pixel values. PNG also has an optimized storage format designed for faster reading of the image data. It is the preferred image format for iOS.
However, if I store the same images as JPEGs at 100% quality, the size of them drops to about half that of the PNG lossless versions.
Is there really that much of a performance hit to use JPEG instead of PNG? If I am viewing these images in a carousel or gallery style, do I really need to worry about the performance and use PNGs instead?
Thanks!
Regarding quality: PNG is good for application-style images (icons, UI artwork), but JPEG is preferred for photos. Choose the lowest JPEG quality that gives good enough results for your images.
Regarding speed, size also matters. I have no iPad to test with, but the smaller file size to read from flash or network might very well outweigh any additional decompression cost. The only way to find out is to measure on your actual device.
There is a performance consideration, but while PNG is preferred for quality, given your application I'd suggest JPEG would be the better choice.
Pure performance isn't the only factor of interest or concern; an iPad has only a finite amount of storage, and spending a little more computational power on decoding seems preferable to filling that space with image data most users are never going to need or want.
One other thing to consider - on a gallery, you are strongly recommended to generate thumbnails which give you the best of both worlds: the smaller, more accessible image for general use and the full original image for 'best'.
If in doubt, benchmark with both and see how big the difference is in your application - and if the difference is something you can live with versus the space saving, go with JPEG.
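For the size half of that benchmark, one quick sketch (assuming ImageMagick is installed and photo.png is one of your assets) is to batch out JPEGs at a few quality levels and compare byte counts before looking at them on the device:

    # Write JPEGs at several quality levels and compare sizes against the PNG
    for q in 60 75 85 95; do
        convert photo.png -quality "$q" "photo-q${q}.jpg"
    done
    ls -l photo.png photo-q*.jpg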

Quality for images in LaTeX documents

What are some of the points I need to follow if I want good quality images in a LaTeX document? These images are mostly screenshots of a software application or flow charts.
Below are two such images.
Flow Chart
Screenshot
Thanx
For diagrams, the rule is to use vector formats as much as you can — PDF, EPS or native LaTeX packages. When using vector graphics, the picture does not lose resolution and can be scaled freely. For a flow chart, I would either export it from the drawing application as a PDF, or use PGF/TikZ to produce it from LaTeX (see also examples). If your drawing application does not have a PDF export, consider using one that does — e.g., UMLet.
If you can't use vector graphics (e.g., because it is a screenshot), make sure you use high-enough resolution to begin with. If it is an academic paper, the publisher usually has guidelines for this.
If you use pdflatex you can use PNG images, and in those cases you definitely should use PNG over JPEG. PNG compression is not lossy, so you get the best quality at the expense of file size.
The second important point is to create the images with sufficient resolution: for printing it should be about 300-600 dpi; higher is better, but the file size of the images and the resulting document will increase. For documents that will only be viewed on a screen you can use a lower resolution; about 72-100 dpi should be enough.
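If ImageMagick is handy, a small sketch for checking and setting a screenshot's declared print resolution (the 300 dpi target and filenames are assumptions, and this only changes the metadata, it does not add pixels):

    # Show the current resolution metadata, then tag the file as 300 dpi
    identify -format "%x x %y %U\n" screenshot.png
    convert screenshot.png -units PixelsPerInch -density 300 screenshot-300dpi.png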
For diagrams you should create vector graphics (eps or pdf) if possible, that way you do not lose any quality.
For screenshots, there is not much to do, but for flow charts, I'd suggest to create them in PDF format (vectorized) and to compile your LaTeX source with pdflatex.
For the flowchart I'd suggest TikZ, then your chart is directly typeset in TeX. Here's an example: http://www.texample.net/tikz/examples/simple-flow-chart/
Screenshots are pretty much a lost cause. I've had a good experience saving them as PDF and then embedding them, but you want to make sure you're on a high-res capture to begin with.
Charts are very easy. Most graphics programs (e.g., Visio, OmniGraffle) will let you save them as EPS or PDF, and scaling works fairly well.

Self-describing file format for gigapixel images?

In medical imaging, there appear to be two ways of storing huge gigapixel images:
Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.
In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata.
Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata?
The main issues are:
Storing images in a way that allows for rapid random access (tiling)
Storing downsampled images for rapid zooming (pyramid)
Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
Storing important metadata, including associated images like a slide's label and thumbnail
Support for lossy storage
What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.
It seems like starting with TIFF or BigTIFF and defining a useful subset of tags + XMP metadata might be the way to go. FITS is no good since it is basically for lossless data and doesn't have a very appropriate metadata mechanism.
The problem with TIFF is that it just allows too much flexibility, but a subset of TIFF should be acceptable.
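As one concrete (and hedged) example of such a subset, ImageMagick can already write a tiled, pyramidal TIFF through its ptif: writer; the 256x256 tile size and JPEG compression here are only illustrative choices, and genuinely gigapixel inputs may need a BigTIFF-capable build or a tool such as libvips:

    # Tiled, pyramidal TIFF with JPEG-compressed levels
    convert huge-slide.tif -define tiff:tile-geometry=256x256 -compress jpeg ptif:huge-slide-pyramid.tif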
The solution may very well be http://ome-xml.org/ and http://ome-xml.org/wiki/OmeTiff.
It looks like DICOM now has support:
ftp://medical.nema.org/MEDICAL/Dicom/Final/sup145_ft.pdf
You probably want FITS.
Arbitrary size
1--3 dimensional data
Extensive header
Widely used in astronomy and endorsed by NASA and the IAU
I'm a pathologist (and hobbyist programmer), so virtual slides and digital pathology are a huge interest of mine. You may be interested in the OpenSlide project. They have characterized a number of the proprietary formats from the large vendors (Aperio, BioImagene, etc.). Most seem to consist of pyramidal zoom levels (scanned at different microscope objectives, of course) stored in large TIFF files containing multiple tiled TIFFs or compressed (JPEG or JPEG2000) images.
The industry standard is DICOM Sup 145; vendor adoption has been sluggish, but inventing yet another format would probably not be helpful.
PNG might work for you. It can handle large images, metadata, and the PNG format can have some interlacing, so you can get up to (down to?) an n/8 x n/8 downsampled image pretty easily.
I'm not sure if PNG can do rapid random access. It is chunked, but that might not be enough.
You could represent sparse data with the transparency channel.
JPEG2000 might be worth a look; there have been some interesting efforts from national libraries in this space.

How to detect subjective image quality

For an image-upload tool I want to detect the (subjective) quality of an image automatically, resulting in a rating of the quality.
I have the following idea to realize this heuristically:
Obviously incorporate the resolution into the rating.
Compress it to JPEG (quality 75), decompress it, and compare the JPEG size to the decompressed size to get a ratio; the blurrier the image is, the higher the ratio (see the sketch just after this question).
Obviously my approach would use up a lot of cycles and memory if large images are rated, although this would do in my scenario (fat server, not many uploads), and I could always build in a "short circuit" around the more expensive steps if the image exceeds a certain resolution.
Is there something else I can try, or is there a way to do this more efficiently?
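A minimal sketch of the ratio idea above with ImageMagick (the quality of 75 comes from the question; the filename and whatever threshold you compare the ratio against are assumptions):

    # Ratio of decompressed bytes to JPEG-75 bytes; blurrier images compress
    # better, so they come out with a higher ratio.
    raw_bytes=$(convert upload.jpg RGB:- | wc -c)
    jpeg_bytes=$(convert upload.jpg -quality 75 JPG:- | wc -c)
    echo "scale=2; $raw_bytes / $jpeg_bytes" | bc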
Assessing image quality (the same goes for sound or video) is not an easy task, and there are numerous publications tackling the problem.
Much depends on the nature of the image - a different set of criteria is appropriate for artificially created images (e.g., diagrams) than for natural images (e.g., photographs). There are subtle effects that have to be taken into consideration - like color masking, luminance masking and contrast perception. For some images a given compression ratio is perfectly adequate, while for others it will result in significant loss of quality.
Here is a free-access publication giving a brief introduction to the subject of image quality evaluation.
The method you mentioned - compressing the image and comparing the result with the original - is far from perfect. What metric do you plan to use? MSE? MSE per block? It is certainly not too difficult to implement, but the results will be difficult to interpret (consider images with high-frequency components and without them).
And if you want to delve further into the area of image quality assessment, there is also a lot of research done by the machine learning community.
You could try looking in the EXIF tags of the image (using something like exiftool); what you get will vary a lot. On my SLR, for example, you even get which of the focus points were active when the image was taken. There may also be something about compression quality.
The other thing to check is the image histogram - watch out for images biased to the left, which suggests under-exposure or lots of saturated pixels.
For image blur you could look at the high frequency components of the Fourier transform, this is probably accessing parameters relating to the JPG compression anyway.
This is a bit of a tricky area because most "rules" you might be able to implement could arguably be broken for artistic effect.
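For what it's worth, both of those checks can be scripted; exiftool dumps whatever the camera recorded, and ImageMagick's fx expressions give rough exposure statistics (the interpretation hints in the comments are only illustrative guesses):

    # Camera metadata: focus points, ISO, compression settings, etc.
    exiftool upload.jpg

    # Rough histogram statistics; a mean near 0 suggests underexposure,
    # a mean near 1 suggests heavy clipping/saturation.
    convert upload.jpg -colorspace Gray -format "mean=%[fx:mean] stddev=%[fx:standard_deviation]\n" info: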
I'd like to shoot down the "obviously incorporate resolution" idea. Resolution tells you nothing. I can scale an image by a factor of 2, quadrupling the number of pixels. This adds no information whatsoever, nor does it improve quality.
I am not sure about the "compress to JPG" idea. JPG is a photo-oriented algorithm. Not all images are photos. Besides, a blue sky compresses quite well. Uniformly grey even better. Do you think exact cloud types determine the image quality?
Sharpness is a bad idea, for similar reasons. Depth of Field is not trivially related to image quality. Items photographed against a black background will have a lot of pixels with quite low intensity, intentionally. Again, this does not signal underexposure, so the histogram isn't a good quality indicator by itself either.
But what if the photos are "commercial"? Does the existing technology still work if the photos are of everyday objects and purposely non-artistic?
If I hire hundreds of people to take pictures of park benches I want to quickly know which pictures are of better quality (in-focus, well-lit) and which aren't. I don't want pictures of kittens, people, sunsets, etc.
Or what if the pictures are supposed to be of items for a catalog? No models, just garments. Would image-quality processing help there?
I'm also really interested in working out how blurry a photograph is.
What about this:
measure the byte size of the image when compressed as JPEG
downscale the image to 1/4th
upscale it 4x, using some kind of basic interpolation
compress that version using JPEG
compare the sizes of the two compressed images.
If the size did not go down a lot (past some percentage threshold), then downscaling and upscaling did not lose much information, therefore the original image is the same as something that has been zoomed.
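A hedged ImageMagick sketch of those steps (the 25%/400% factors come from the list above; the quality level, the interpolation filter and the "a lot" threshold are up to you):

    # Size of the original compressed as JPEG-75
    orig=$(convert photo.jpg -quality 75 JPG:- | wc -c)
    # Size after throwing away detail (downscale to 1/4, upscale 4x) and recompressing
    blurred=$(convert photo.jpg -resize 25% -filter Triangle -resize 400% -quality 75 JPG:- | wc -c)
    echo "original: $orig bytes, down/up-scaled: $blurred bytes"
    # If the two sizes are close, the original carried little real detail.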

Ruthlessly compressing large images for the web

I have a very large background image (about 940x940 pixels) and I'm wondering if anyone has tips for compressing a file this large further than Photoshop can handle? The best compression without serious loss of quality from Photoshop is PNG 8 (250 KB); does anyone know of a way to compress an image down further than this (maybe compress a PNG after it's been saved)?
I don't normally deal with optimizing images this large, so I was hoping someone would have some pointers.
It will first depend on what kind of image you are trying to compress. The two basic categories are:
Picture
Illustration
For pictures (such as photographs), a lossy compression format like JPEG will be best, as it will remove details that aren't easily noticed by human visual perception. This will allow very high compression rates for the quality. The downside is that excessive compression will result in very noticeable compression artifacts.
For illustrations that contain large areas of the same color, using a lossless compression format like PNG or GIF will be the best approach. Although not technically correct, you can think of PNG and GIF as compressing runs of the same color very well, similar to run-length encoding (RLE).
Now, as you've mentioned PNG specifically, I'll go into that discussion from my experience of using PNGs.
First, compressing a PNG further is not a viable option, as it's not possible to compress data that has already been compressed. This is true with any data compression; removing the entropy from the source data (basically, repeating patterns which can be represented in more compact ways) leads to the decrease in the amount of space needed to store the information. PNG already employs methods to efficiently compress images in a lossless fashion.
That said, there is at least one possible way to drop the size of a PNG further: by reducing the number of colors stored in the image. By using "indexed colors" (basically embedding a custom palette in the image itself), you may be able to reduce the size of the file. However, if the image has many colors to begin with (such as having color gradients or a photographic image) then you may not be able to reduce the number of colors used in a image without perceptible loss of quality.
Basically it will come down to some trial-and-error to see what the changes to the image do to image quality and file size.
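As a concrete starting point for that trial-and-error (the 256-colour target and the filenames are assumptions), ImageMagick can quantise to a palette and write an indexed 8-bit PNG directly:

    # Reduce to a 256-colour palette and force an indexed (PNG8) output
    convert background.png -colors 256 PNG8:background-indexed.png
    ls -l background.png background-indexed.png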
The comment by Paul Fisher reminded me that I also probably wouldn't recommend using GIF either. Paul points out that PNG compresses static line art better than GIF for nearly every situation.
I'd also point out that GIF only supports 8-bit images, so if an image has more than 256 colors, you'll have to reduce the colors used.
Also, as Kent Fredric's comment notes, reducing the color depth has, in some situations, caused an increase in file size. Although this is speculation, it may be that dithering makes the image less compressible (dithering introduces pixels of different colors to simulate another color, a bit like mixing different paints to end up with a new color), thereby introducing more entropy into the image.
Have a look at http://www.irfanview.com/; it's an oldie but a goodie.
I've found it does multi-pass PNG compression pretty well, and its batch processing is much faster than Photoshop's.
There is also PNGOUT available here http://advsys.net/ken/utils.htm, which is apparently very good.
Here's a point the other posters may not have noticed, which I found out experimentally:
On some installations, the default behaviour is to save a full copy of the image's colour profile along with the image.
That is, the device calibration map, usually sRGB or something similar, that tells user agents how best to map the colours to real-world colours instead of device-dependent ones.
This profile, however, is quite large, and can make files you would expect to be very small turn out very large; for instance, a 1px by 1px image consuming a massive 25 KB. Even an uncompressed BMP can represent 1 pixel in less.
This profile is generally not needed for the web, so when saving your Photoshop images, make sure to export them without this profile, and you'll notice a marked size improvement.
You can strip this data using another tool such as GIMP, but it can be a little time-consuming if there are many files.
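If a command line is acceptable, ImageMagick's -strip removes profiles and other metadata in bulk (the filenames here are assumptions, and -strip also drops EXIF data, so keep the originals):

    # Write profile-free copies of every PNG in the current directory
    for f in *.png; do
        convert "$f" -strip "stripped-$f"
    done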
pngcrush can further compress PNG files without any data loss; it applies different combinations of the encoding and compression options to see which one works best.
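A typical invocation looks like this; the -brute switch tries every filter/strategy combination, which is slow but occasionally finds a smaller result:

    pngcrush -brute input.png output.png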
If the image is photographic in nature, JPEG will compress it far better than PNG8 for the same loss in quality.
Smush.It claims to go "beyond the limitations of Photoshop". And it's free and web-based.
It depends a lot on the type of image. If it has a lot of solid colors and patterns, then PNG or GIF are probably your best bet. But if it's a photo-realistic image then JPG will be better - and you can crank down the quality of JPG to the point where you get the compression / quality tradeoff you're looking for (Photoshop is very good at showing you a preview of the final image as you adjust the quality).
The "compress a PNG after it's been saved" part looks like a deep misunderstanding to me. You cannot magically compress beyond a certain point without information loss.
First point to consider is whether the resolution has to be this big. Reducing the resolution by 10% in both directions reduces the pixel count (and roughly the file size) by 19%.
Next, try several different compression algorithms with different grades of compression versus information/quality loss. If the image is sketchy, you might get away with quite rigorous JPEG compression.
I would tile it, unless you are absolutely sure that your audience has the bandwidth.
Next up is JPEG 2000.
To get more out of a JPEG file you can use the 'Modified Quality Setting' of the "Save for Web" dialog.
Create a mask/selection that contains white where you want to keep the most detail, e.g. around text. You can use Quick Mask to draw the mask with a brush. It helps to feather the selection; this results in a nice white-to-black transition in the next step.
Save this mask/selection as a channel and give the channel a name.
Use File -> Save for Web.
Select JPEG as the file format.
Next to the Quality box there is a small button with a circle on it. Click that, select the channel saved in step 2, and play with the quality setting for the white and black parts of the channel content.
http://www.jpegmini.com is a new service that creates standard jpgs with an impressively small filesize. I've had good success with it.
For best quality single images, I highly recommend RIOT. You can see the original image side by side with the changed one.
The tool is free and really worth trying out.
JPEG2000 gives compression ratios on photographic quality images that are significantly higher than JPEG (or PNG). Also, JPEG2000 has both "lossy" and "lossless" compression options that can be tuned quite nicely to your individual needs.
I've always had great luck with JPEG. Make sure to configure Photoshop not to automatically save thumbnails in JPEGs. In my experience I get the greatest bang/buck ratio from 3-pass progressive compression, though baseline optimized works pretty well. Choose very low quality levels (e.g. 2 or 3) and experiment until you've found a good trade-off.
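Outside Photoshop, a roughly comparable experiment can be run with ImageMagick (its 0-100 quality scale does not map directly onto Photoshop's 0-12 scale, so the value below is only a starting point):

    # Progressive JPEG at an aggressively low quality
    convert background.png -interlace Plane -quality 40 background.jpg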
PNG images are already compressed internally, in a manner that doesn't benefit much from further compression (and may actually expand if you try to compress them again).
You can:
Reduce the resolution from 940x940 to something smaller like 470x470.
Reduce the color depth
Compress using a lossy compression tool like JPEG
edit: Of course 250KB is large for a web background. You might also want to rethink the graphic design that requires this.
Caesium is the best tool I have ever seen.
