Reduce image file size but keep quality, with sample image

I am trying to put an image on a website.
The image has a transparent background and I want it to be in very good quality.
I saved it in .PNG format and in high quality, but the problem is that it is really heavy and takes a long time to load.
How can I show the picture at the same size and quality, with a transparent background, but with a smaller file size so it loads quickly?
I'm talking about the image in the center of this website, with two cordless drills:
http://www.tigertools.co.il

ImageAlpha (pngquant) can substantially reduce size of transparent PNGs.
Whether it reduces quality depends on the image. Usually loss is not noticeable.

Dithering to 256 colors (optimized palette) and saving as PNG seems to bring the file size down to 96 KB. This is using IrfanView.
However, not all dithering software handles the semi-opaque pixels near the object boundary correctly.
With regard to the quality loss, it's better to do a double-blind test to get an unbiased subjective opinion. Keep in mind that the reduced website loading time will make users happier, which may compensate for the hypothetical slight loss in quality.
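For reference, a minimal Pillow sketch of the same palette-quantization idea (file names are hypothetical, and this assumes a recent Pillow with the Quantize enum):

    from PIL import Image

    # Open the truecolor RGBA PNG (hypothetical file name).
    img = Image.open("drills.png").convert("RGBA")

    # FASTOCTREE is one of the Pillow quantizers that accepts RGBA input,
    # so the semi-transparent edge pixels survive the palette reduction.
    reduced = img.quantize(colors=256, method=Image.Quantize.FASTOCTREE)

    # The result is a palette ("P" mode) PNG, usually far smaller on disk.
    reduced.save("drills-256.png", optimize=True)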

Related

When to interlace an image?

As a general rule of thumb, when is it appropriate to make a GIF interlaced, a PNG interlaced and a JPEG progressive?
Especially when publishing the image on the web.
JPEG: YES — use progressive scan. It makes files smaller (each pass gets its own Huffman table), and partial rendering looks quite good.
GIF: NO — it's unlikely to make the file smaller, partial rendering is poor, and it's pointless for animGIFs. It's best not to use GIF at all (yes, even for anims).
PNG: NO — it hurts compression (as data from each pass is statistically quite different). If the image is large, use high-quality JPEG or lossy PNG if possible, as these may load quicker than a pixelated preview of a large lossless PNG.
ImageOptim will automatically change progressive/interlaced formats when it makes files smaller.
Disclaimers for nitpickers:
In case of small and medium-sized images the progressive preview of each image is not going to be visible long enough for the user to appreciate it. Some browsers don't even bother rendering anything until the whole file is downloaded, so it's better to focus on saving bandwidth to get the whole page loaded ASAP.
Non-progressive JPEG is a bit more efficient when the files are tiny (small thumbnails), but then the savings are tiny, too.
iOS Safari has a higher maximum allowed image size for baseline JPEG than progressive, but the right solution there is to serve images at sizes reasonable for mobile in the first place.
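For what it's worth, a minimal Pillow sketch of the progressive-JPEG advice above (file names and the quality value are my own assumptions):

    from PIL import Image

    # Load a photo and make sure it has no alpha channel (JPEG cannot store one).
    img = Image.open("photo.png").convert("RGB")

    # Progressive scan plus optimized Huffman tables, as suggested above.
    img.save("photo.jpg", format="JPEG",
             quality=80,        # adjust to taste
             optimize=True,
             progressive=True)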
My general rule of thumb: don't ever use interlacing. Interlaced formats typically occupy more space, have (slightly) more complexity and less support in decoders, and the alleged advantages for the user experience are at least debatable. Some arguments for PNG, and in general.
Some people like interlaced or "progressive" images, which load
gradually. The theory behind these formats is that the user can at
least look at a fuzzy full-size proxy for the image while all the bits
are loading. In practice, the user is forced to look at a fuzzy
full-size proxy for the image while all the bits are loading. Is it
done? Well, it looks kind of fuzzy. Oh wait, the top of the image
seems to be getting a little more detail. Maybe it is done now. It is
still kind of fuzzy, though. Maybe the photographer wasn't using a
tripod. Oh wait, it seems to be clearing up now ...
Interlaced images are slightly less efficient, but show up after a shorter delay on the client side when transported over the network. IMHO they should be used when the expected download time for the image is long enough to be perceived by the user (say, above 1 second). The difference in file size is really quite small, so it's better to be over-cautious and use interlacing too much rather than too little.
With common broadband internet as of 2012, I'd just use it for every image > 100 KB.
These points may be useful.
Interlacing (more generally, progressive display) is a method of encoding an image so that it can be displayed progressively while it loads.
When to use it? Your decision should be based on these factors:
• Non-interlaced images are smaller than interlaced images.
• Interlaced images cause less flickering than non-interlaced ones.
• Interlaced images are much more easily viewable while still loading.
Interlacing lets you see the picture before all the data has been transmitted (making it appear faster and better-looking) and gives you the "feeling" that it is downloading faster.
TIP: Interlacing is not recommended for small images but is a must if the viewer uses a slow connection.
This is just a copy from Yahoo Answers that I thought could help with understanding.
The original answer can be found at: https://answers.yahoo.com/question/index?qid=20090211121956AAz7Xz8
Just to throw my two penn'orth into the argument: interlacing was introduced years ago when internet speeds were slow, the idea being that the image would present itself in a gradually more defined manner, still giving an overall look and feel of the image without having to wait for the entire thing to load.
Interlacing today is basically unnecessary, and should be used based on the overall size of the image being transferred.
Progressive scans on JPEG images do provide a more refined image while attempting to reduce the overall file size (i.e. it is an actual compression mode rather than a streaming method for the bits making up the image).
PNGs use a more complex algorithm than GIF.
There is an interesting related post on Webmasters:
https://webmasters.stackexchange.com/questions/574/progressive-jpeg-why-do-many-web-sites-avoid-rendering-jpegs-that-way-pros
Ultimately it depends on how they are going to be used.
The post suggests that there is limited genuine support for progressive images, and that they may sometimes cause issues with plugins which don't support the progressive format.
Hope that helps.

How to estimate the size of JPEG image which will be scaled down

For example, I have a 1024*768 JPEG image. I want to estimate the size of the image after it is scaled down to 800*600 or 640*480. Is there any algorithm to calculate the size without generating the scaled image?
I took a look at the resize dialog in Photoshop. The size it shows is basically (width pixels * height pixels * bits/pixel), which shows a huge gap from the actual file size.
I have a mobile image browser application which allows users to send an image through email, with options to scale down the image. We provide check boxes for the user to choose a down-scale resolution along with the estimated size. For large images (> 10 MB), we have three down-scale sizes to choose from. If we generate a cached image for each option, it may hurt memory usage. We are trying to find the best solution that avoids that memory consumption.
I have successfully estimated the scaled size based on the DQT - the quality factor.
I conducted some experiments and found that if we use the same quality factor as in the original JPEG image, the scaled image will have a size roughly equal to (scale factor * scale factor) times the original image size. The quality factor can be estimated from the DQT defined in every JPEG image. An algorithm to estimate the quality factor from the standard quantization tables shown in Annex K of the JPEG spec is well defined.
Although other factors like color subsampling, different compression algorithms and the image content itself will contribute to error, the estimation is pretty accurate.
P.S. Examining JPEGSnoop and its source code helped me a lot :-)
Cheers!
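A small Python sketch of that (scale factor * scale factor) estimate, using example file names and dimensions:

    import os

    def estimate_scaled_jpeg_size(path, orig_px, new_px):
        # Assumes the scaled image is re-encoded with the same quality factor
        # as the original, so size scales roughly with the pixel count.
        scale = (new_px[0] * new_px[1]) / (orig_px[0] * orig_px[1])
        return int(os.path.getsize(path) * scale)

    # Example: estimate the 800*600 size of a 1024*768 JPEG.
    print(estimate_scaled_jpeg_size("photo.jpg", (1024, 768), (800, 600)))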
Like everyone else said, the best algorithm to determine what sort of JPEG compression you'll get is the JPEG compression algorithm.
However, you could also calculate the Shannon entropy of your image, in order to try and understand how much information is actually present. This might give you some clues as to the theoretical limits of your compression, but is probably not the best solution for your problem.
This concept will help you measure the difference in information between an all-white image and that of a crowd, which is related to its compressibility.
-Brian J. Stinar-
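If you want to try the entropy idea, here is a rough Python/numpy sketch of Shannon entropy in bits per pixel (the file name is just an example):

    import numpy as np
    from PIL import Image

    def shannon_entropy_bits_per_pixel(path):
        # Grayscale histogram -> probability distribution -> Shannon entropy.
        gray = np.asarray(Image.open(path).convert("L"))
        hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        p = p[p > 0]                           # skip empty bins (log2(0) undefined)
        return float(-(p * np.log2(p)).sum())  # ~0 for flat white, up to 8 for noise

    print(shannon_entropy_bits_per_pixel("crowd.jpg"))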
Why estimate what you can measure?
In essence, it's impossible to provide any meaningful estimate due to the fact that different types of images (in terms of their content) will compress very differently using the JPEG algorithm. (A 1024x768 pure white image will be vastly smaller than a photograph of a crowd scene for example.)
As such, if you're after an accurate figure it would make sense to simply carry out the re-size.
Alternatively, you could just provide a range such as "40KB to 90KB", based on an "average" set of images.
I think what you want is something weird and difficult to do. Depending on the JPEG compression level, some images are heavier than others in terms of size.
My hunch for JPEG images: given two images at the same resolution, compressed at the same quality ratio, the image taking up less memory will compress more (in general) when its resolution is reduced.
Why? From experience: many times when working with a set of images, I have seen that if a thumbnail occupies significantly more memory than most others, reducing its resolution causes almost no change in its size (memory). On the other hand, reducing the resolution of one of the average-size thumbnails reduces its size significantly. (All parameters, like original/final resolution and JPEG quality, being the same in the two cases.)
Roughly speaking: the higher the entropy, the less the impact on image size from changing the resolution (at the same JPEG quality).
If you can verify this with experiments, maybe you can use this as a quick method to estimate the size. If my language is confusing, I can explain with some mathematical notation / pseudo-formula.
An 800*600 image file should be roughly (800*600)/(1024*768) times as large as the 1024*768 image file it was scaled down from. But this is really a rough estimate, because the compressibility of original and scaled versions of the image might be different.
Before I attempt to answer your question, I'd like to join the ranks of people that think it's simpler to measure rather than estimate. But it's still an interesting question, so here's my answer:
Look at the block DCT coefficients of the input JPEG image. Perhaps you can find some sort of relationship between the number of higher frequency components and the file size after shrinking the image.
My hunch: all other things (e.g. quantization tables) being equal, the more higher-frequency components you have in your original image, the bigger the difference in file size between the original and shrunk image will be.
I think that by shrinking the image, you will reduce some of the higher frequency components during interpolation, increasing the possibility that they will be quantized to zero during the lossy quantization step.
If you go down this path, you're in luck: I've been playing with JPEG block DCT coefficients and put some code up to extract them.
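For anyone who wants to experiment, here is a rough Python/scipy sketch of the block-DCT idea; the energy split and the file name are my assumptions, not the poster's code:

    import numpy as np
    from PIL import Image
    from scipy.fft import dctn

    def high_frequency_energy_ratio(path, block=8):
        # Work on the luma channel, cropped to a multiple of the block size.
        y = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        h, w = y.shape[0] - y.shape[0] % block, y.shape[1] - y.shape[1] % block
        y = y[:h, :w]

        hf = total = 0.0
        for r in range(0, h, block):
            for c in range(0, w, block):
                coeffs = dctn(y[r:r + block, c:c + block], norm="ortho")
                energy = coeffs ** 2
                total += energy.sum()
                # Everything outside the low-frequency (top-left) quadrant.
                hf += energy.sum() - energy[:block // 2, :block // 2].sum()
        return hf / total

    print(high_frequency_energy_ratio("photo.jpg"))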

How to scale JPEG image down so that text is clear as possible?

I have some JPEG images that I need to scale down to about 80% of the original size. The original image dimensions are about 700px × 1000px. The images contain computer-generated text and possibly some graphics (similar to what you would find in corporate Word documents).
How can I scale the image so that the text is as legible as possible? Currently we are scaling the image down using bicubic interpolation, but that makes the text blurry and foggy.
Two options:
Use a different resampling algorithm. Lanczos gives you a much less blurry result.
You might use an advanced JPEG library that resamples the 8x8 blocks to 6x6 pixels.
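A minimal Pillow sketch of the first option (Lanczos resampling), assuming a recent Pillow; the 80% target comes from the question, while the file names and JPEG quality are examples:

    from PIL import Image

    img = Image.open("page.jpg")
    target = (int(img.width * 0.8), int(img.height * 0.8))

    # Lanczos keeps thin strokes noticeably crisper than bicubic here.
    img.resize(target, resample=Image.Resampling.LANCZOS).save("page-80.jpg", quality=90)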
If you are not set on exactly 80% you can try getting and building djpeg from http://www.ijg.org/ as it can decompress your jpeg to 6/8ths (75%) or 7/8ths (87.5%) size and the text quality will still be pretty good:
(Example images at original, 7/8 and 3/4 scale were shown inline here; SO decided to scale the images when displaying them.)
There may be a scaling algorithm out there that works similarly, but this is an easy off the shelf solution.
There is always a loss involved in scaling down, but again it depends on your trade-offs.
Blurring and artifact generation is normal for JPEG images, so it's recommended that you generate images at the correct size the first time.
Lanczos is a fine solution, but it has its trade-offs.
If it's just the text you are concerned about, you could try a dilation filter over the resampled image. This would correct some blurriness but may also affect the graphics. If you can live with that, it's good. Alternatively, if you can identify the areas of text, you can apply the dilation just over those areas.
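And a hedged sketch of the dilation suggestion, assuming the text is dark on a light background (for light-on-dark text you would use MaxFilter instead); file names and the 80% scale are examples:

    from PIL import Image, ImageFilter

    img = Image.open("page.jpg")
    target = (int(img.width * 0.8), int(img.height * 0.8))
    small = img.resize(target, resample=Image.Resampling.LANCZOS)

    # A 3x3 minimum filter thickens dark strokes on a light background
    # (effectively a grayscale dilation of the dark text).
    small.filter(ImageFilter.MinFilter(3)).save("page-80-dilated.jpg", quality=90)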

How to detect subjective image quality

For an image-upload tool I want to detect the (subjective) quality of an image automatically, resulting in a rating of the quality.
I have the following idea to realize this heuristically:
Obviously incorporate the resolution into the rating.
Compress it to JPG (75%), decompress it and compare jpg-size vs. decompressed size to gain a ratio. The blurrier the image is, the higher the ratio.
Obviously my approach would use up a lot of cycles and memory if large images are rated, although this would do in my scenario (fat server, not many uploads), and I could always build in a "short circuit" around the more expensive steps if the image exceeds a certain resolution.
Is there something else I can try, or is there a way to do this more efficiently?
Assessing image quality (the same goes for sound or video) is not an easy task, and there are numerous publications tackling the problem.
Much depends on the nature of the image - a different set of criteria is appropriate for artificially created images (e.g. diagrams) than for natural images (e.g. photographs). There are subtle effects that have to be taken into consideration - like color masking, luminance masking and contrast perception. For some images a given compression ratio is perfectly adequate, while for others it will result in a significant loss of quality.
Here is a free-access publication giving a brief introduction to the subject of image quality evaluation.
The method you mentioned - compressing the image and comparing the result with the original - is far from perfect. What metric do you plan to use? MSE? MSE per block? It is certainly not too difficult to implement, but the results will be difficult to interpret (consider images with and without high-frequency components).
And if you want to delve more into the area of image quality assessment, there is also a lot of research done by the machine learning community.
You could try looking in the EXIF tags of the image (using something like exiftool), what you get will vary a lot. On my SLR, for example, you even get which of the focus points were active when the image was taken. There may also be something about compression quality.
The other thing to check is the image histogram - watch out for images biased to the left (which suggests under-exposure) or with lots of saturated pixels.
For image blur you could look at the high frequency components of the Fourier transform, this is probably accessing parameters relating to the JPG compression anyway.
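A rough Python sketch combining the histogram and Fourier ideas above; the thresholds, the frequency cutoff and the file name are guesses rather than anything from the answers:

    import numpy as np
    from PIL import Image

    def quick_quality_checks(path):
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)

        # Histogram bias: share of pixels in the darkest quarter of the range.
        dark_fraction = float((gray < 64).mean())

        # Blur check: fraction of spectral energy outside the lowest frequencies.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
        h, w = spectrum.shape
        low = np.zeros_like(spectrum, dtype=bool)
        low[h // 2 - h // 8:h // 2 + h // 8, w // 2 - w // 8:w // 2 + w // 8] = True
        high_freq_ratio = float(spectrum[~low].sum() / spectrum.sum())

        return {"dark_fraction": dark_fraction, "high_freq_ratio": high_freq_ratio}

    print(quick_quality_checks("upload.jpg"))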
This is a bit of a tricky area because most "rules" you might be able to implement could arguably be broken for artistic effect.
I'd like to shoot down the "obviously incorporate resolution" idea. Resolution tells you nothing. I can scale an image by a factor of 2, quadrupling the number of pixels. This adds no information whatsoever, nor does it improve quality.
I am not sure about the "compress to JPG" idea. JPG is a photo-oriented algorithm. Not all images are photos. Besides, a blue sky compresses quite well. Uniformly grey even better. Do you think exact cloud types determine the image quality?
Sharpness is a bad idea, for similar reasons. Depth of Field is not trivially related to image quality. Items photographed against a black background will have a lot of pixels with quite low intensity, intentionally. Again, this does not signal underexposure, so the histogram isn't a good quality indicator by itself either.
But what if the photos are "commercial"? Does the existing technology still provide value if the photos are of every-day objects and purposefully non-artistic?
If I hire hundreds of people to take pictures of park benches I want to quickly know which pictures are of better quality (in-focus, well-lit) and which aren't. I don't want pictures of kittens, people, sunsets, etc.
Or what if the pictures are supposed to be of items for a catalog? No models, just garments. Would image-quality processing help there?
I'm also really interested in working out how blurry a photograph is.
What about this:
measure the byte size of the image when compressed as JPEG
downscale the image to 1/4th
upscale it 4x, using some kind of basic interpolation
compress that version using JPEG
compare the sizes of the two compressed images.
If the size did not go down a lot (past some percentage threshold), then downscaling and upscaling did not lose much information, therefore the original image is the same as something that has been zoomed.
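Here is a sketch of that heuristic with Pillow; the JPEG quality, the bilinear interpolation and the 0.85 threshold are assumptions, not part of the original suggestion:

    from io import BytesIO
    from PIL import Image

    def jpeg_size(img, quality=75):
        buf = BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=quality)
        return buf.tell()

    def looks_already_blurry(path, threshold=0.85):
        img = Image.open(path)
        original = jpeg_size(img)

        # Downscale to 1/4, then upscale back with basic interpolation.
        small = img.resize((max(1, img.width // 4), max(1, img.height // 4)))
        roundtrip = small.resize(img.size, resample=Image.Resampling.BILINEAR)

        # If the round-trip barely shrank the JPEG, little information was
        # lost, i.e. the original was effectively already "zoomed".
        return jpeg_size(roundtrip) > threshold * original

    print(looks_already_blurry("upload.jpg"))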

Ruthlessly compressing large images for the web

I have a very large background image (about 940x940 pixels) and I'm wondering if anyone has tips for compressing a file this large further than Photoshop can handle? The best compression without serious loss of quality from Photoshop is PNG 8 (250 KB); does anyone know of a way to compress an image down further than this (maybe compress a PNG after it's been saved)?
I don't normally deal with optimizing images this large, so I was hoping someone would have some pointers.
It will first depend on what kind of image you are trying to compress. The two basic categories are:
Picture
Illustration
For pictures (such as photographs), a lossy compression format like JPEG will be best, as it will remove details that aren't easily noticed by human visual perception. This will allow very high compression rates for the quality. The downside is that excessive compression will result in very noticeable compression artifacts.
For illustrations that contain large areas of the same color, using a lossless compression format like PNG or GIF will be the best approach. Although not technically correct, you can think of PNG and GIF as compressing runs of the same color very well, similar to run-length encoding (RLE).
Now, as you've mentioned PNG specifically, I'll go into that discussion from my experience of using PNGs.
First, compressing a PNG further is not a viable option, as it's not possible to usefully compress data that has already been compressed. This is true of any data compression: removing the redundancy from the source data (basically, repeating patterns which can be represented in more compact ways) is what decreases the amount of space needed to store the information. PNG already employs methods to efficiently compress images in a lossless fashion.
That said, there is at least one possible way to drop the size of a PNG further: by reducing the number of colors stored in the image. By using "indexed colors" (basically embedding a custom palette in the image itself), you may be able to reduce the size of the file. However, if the image has many colors to begin with (such as having color gradients or a photographic image), then you may not be able to reduce the number of colors used in an image without perceptible loss of quality.
Basically it will come down to some trial and error to see how the changes to the image affect image quality and file size.
The comment by Paul Fisher reminded me that I also probably wouldn't recommend using GIF either. Paul points out that PNG compresses static line art better than GIF for nearly every situation.
I'd also point out that GIF only supports 8-bit images, so if an image has more than 256 colors, you'll have to reduce the colors used.
Also, as Kent Fredric's comment points out, reducing the color depth has, in some situations, caused an increase in file size. Although this is speculation, it may be possible that dithering makes the image less compressible (as dithering introduces pixels of different colors to simulate another color, kind of like mixing pigments of differently colored paint to end up with another color) by introducing more entropy into the image.
Have a look at http://www.irfanview.com/ - it's an oldie but a goodie.
I have found it is able to do multi-pass PNG compression pretty well, and it does batch processing way faster than Photoshop.
There is also PNGOUT available here http://advsys.net/ken/utils.htm, which is apparently very good.
Here's a point the other posters may not have noticed, which I found out experimentally:
On some installations, the default behaviour is to save a full copy of the image's colour profile along with the image.
That is, the device calibration map, usually sRGB or something similar, that tells user agents how best to map the colours to real-world colours instead of device-independent ones.
This profile is however quite large, and can make some files you would expect to be very small turn out very large; for instance, a 1px by 1px image consuming a massive 25 KB. Even an uncompressed BMP can represent 1 pixel in less space.
This profile is generally not needed for the web, so when saving your Photoshop images, make sure to export them without this profile, and you'll notice a marked size improvement.
You can strip this data using another tool such as GIMP, but it can be a little time consuming if there are many files.
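If you'd rather script the stripping than use GIMP, a minimal Pillow sketch that re-saves a PNG without its embedded profile (file names are examples; passing icc_profile=None is what keeps the profile out):

    from PIL import Image

    img = Image.open("background.png")

    # icc_profile=None keeps the embedded colour profile out of the output
    # file; optimize=True asks Pillow for its best zlib settings.
    img.save("background-noprofile.png", optimize=True, icc_profile=None)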
pngcrush can further compress PNG files without any data loss; it applies different combinations of the encoding and compression options to see which one works best.
If the image is photographic in nature, JPEG will compress it far better than PNG8 for the same loss in quality.
Smush.It claims to go "beyond the limitations of Photoshop". And it's free and web-based.
It depends a lot on the type of image. If it has a lot of solid colors and patterns, then PNG or GIF are probably your best bet. But if it's a photo-realistic image then JPG will be better - and you can crank down the quality of JPG to the point where you get the compression / quality tradeoff you're looking for (Photoshop is very good at showing you a preview of the final image as you adjust the quality).
The "compress a PNG after it's been saved" part looks like a deep misunderstanding to me. You cannot magically compress beyond a certain point without information loss.
The first point to consider is whether the resolution has to be this big. Reducing the resolution by 10% in both directions reduces the pixel count (and roughly the file size) by 19%.
Next, try several different compression algorithms with different grades of compression versus information/quality loss. If the image is sketchy, you might get away with quite rigorous JPEG compression.
I would tile it, unless you are absolutely sure that your audience has bandwidth.
The next option is JPEG 2000.
To get more out of a JPEG file you can use the "Modify Quality Setting" option of Photoshop's "Save for Web" dialog:
1. Create a mask/selection that contains white where you want to keep the most detail, e.g. around text. You can use Quick Mask to draw the mask with a brush. It helps to feather the selection; this results in a nice white-to-black transition in the next step.
2. Save this mask/selection as a channel and give the channel a name.
3. Use File -> Save for Web.
4. Select JPEG as the file format.
5. Next to the Quality box there is a small button with a circle on it. Click it, select the channel saved in step 2, and play with the quality setting for the white and black parts of the channel content.
http://www.jpegmini.com is a new service that creates standard JPEGs with an impressively small file size. I've had good success with it.
For best quality single images, I highly recommend RIOT. You can see the original image alongside the changed one.
The tool is free and really worth trying out.
JPEG2000 gives compression ratios on photographic quality images that are significantly higher than JPEG (or PNG). Also, JPEG2000 has both "lossy" and "lossless" compression options that can be tuned quite nicely to your individual needs.
I've always had great luck with JPEG. Make sure to configure Photoshop not to automatically save thumbnails in JPEGs. In my experience I get the greatest bang-for-buck ratio by using 3-pass progressive compression, though baseline optimized works pretty well. Choose very low quality levels (e.g. 2 or 3) and experiment until you've found a good trade-off.
PNG images are already compressed internally, in a manner that doesn't benefit from more compression much (and may actually expand if you try to compress it).
You can:
Reduce the resolution from 940x940 to something smaller like 470x470.
Reduce the color depth
Compress using a lossy compression tool like JPEG
edit: Of course 250KB is large for a web background. You might also want to rethink the graphic design that requires this.
Caesium is the best tool I have ever seen.
