I'm trying to optimize my images for the Google PageSpeed test. I have an image with dimensions of 1200x393. When I optimize the image with Photoshop its size is approximately 250 KB, and with Corel it becomes 100 KB. Google doesn't accept either; it says "Compressing and resizing ... .jpg could save 92.6KiB (90% reduction)".
How can I pass the PageSpeed test?
From Image Optimization:
Image optimization boils down to two criteria: optimizing the number of bytes used to encode each image pixel, and optimizing the total number of pixels: the filesize of the image is simply the total number of pixels times the number of bytes used to encode each pixel. Nothing more, nothing less.
As a result, one of the simplest and most effective image optimization techniques is to ensure that we are not shipping any more pixels than needed to display the asset at its intended size in the browser. Sounds simple, right? Unfortunately, most pages fail this test for many of their image assets: typically, they ship larger assets and rely on the browser to rescale them - which also consumes extra CPU resources - and display them at a lower resolution. ...
you should ensure that the number of unnecessary pixels is minimal, and that your large assets in particular are delivered as close as possible to their display size
A common error is to have a big image in the source and scale it down with width and height attributes in the UI.
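For example, here is a minimal Pillow sketch of that idea: pre-resize the file to the largest width it will actually be displayed at instead of letting the browser downscale it. The 360 px display width and quality=75 below are illustrative values I chose, not anything PageSpeed prescribes.

```python
from PIL import Image

def resize_for_display(src_path, dst_path, display_width):
    """Resize the asset so it is never wider than its display size,
    then re-encode it as an optimized progressive JPEG."""
    img = Image.open(src_path)
    if img.width > display_width:
        display_height = round(img.height * display_width / img.width)
        img = img.resize((display_width, display_height), Image.LANCZOS)
    img.save(dst_path, "JPEG", quality=75, optimize=True, progressive=True)

# Hypothetical usage: the banner is only ever shown 360 px wide.
resize_for_display("banner-1200x393.jpg", "banner-360.jpg", 360)
```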
Related
We're building an online video editing service. One of the features allows users to export a short segment of their video as an animated GIF. Imgur has a file size limit of 2 MB per uploaded animated GIF.
GIF file size depends on the number of frames, the color depth and the image content itself: a solid flat color results in a very lightweight GIF, while a random-color TV-noise animation is quite heavy.
First I export each video frame as a PNG of the final GIF frame size (fixed, 384x216).
Then, to maximize gif quality I undertake several gif render attempts with slightly different parameters - varying number of frames and number of colors in the gif palette. The render that has the best quality while staying under the file size limit gets uploaded to Imgur.
Each render takes time and CPU resources, and this is what I am looking to optimize.
Question: what could be a smart way to estimate the best render settings based on the actual images, so the result fits as close as possible to the file size limit, or at least to keep the number of render attempts down to 2-3?
The GIF image format uses LZW compression, infamous because Unisys, the owner of the algorithm patent, aggressively pursued royalty payments just as the format got popular. That turned out well in the end: we have PNG to thank for it.
How much LZW can compress an image is very hard to predict and depends greatly on the image content. At best you can provide the user with a heuristic that estimates the final file size, displaying, say, a prediction with a colored bar. You can compute it fairly quickly by converting just the first frame; that won't take long on a 384x216 image, it runs in human time, a fraction of a second.
Then extrapolate the effective compression rate of that first frame to the subsequent frames, which ought to encode only small differences from the first frame and so should have comparable compression rates.
You can't truly know whether the result exceeds the site's size limit until you've encoded the entire sequence, so be sure to emphasize in your UI design that the prediction is only an estimate, so your user isn't too disappointed. And of course provide the tools to bring the size down, something like nearest-neighbor interpolation that makes the pixels in the image bigger. Focusing on making the later frames smaller can pay off handsomely as well; GIF encoders don't normally do this well by themselves. YMMV.
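A minimal sketch of that first-frame heuristic, assuming Pillow and a list of decoded frames; the per-frame delta_factor is an invented starting point that you would calibrate against real renders, not a measured constant.

```python
from PIL import Image
import io

def estimate_gif_size(frames, colors=256, delta_factor=0.4):
    """Rough estimate: encode only the first frame, then assume each
    additional frame costs a fraction (delta_factor) of the first one,
    since later frames mostly encode small differences.
    delta_factor is a guess; calibrate it on real renders."""
    buf = io.BytesIO()
    first = frames[0].convert("P", palette=Image.ADAPTIVE, colors=colors)
    first.save(buf, format="GIF")
    first_size = buf.tell()
    return int(first_size * (1 + delta_factor * (len(frames) - 1)))

# Usage: frames is a list of PIL.Image objects (e.g. the 384x216 video frames).
# if estimate_gif_size(frames) > 2 * 1024 * 1024: reduce colors or drop frames.
```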
There's no simple answer to this. Single-frame GIF size mainly depends on image entropy after quantization, and you could try the standard deviation as an estimator, using e.g. ImageMagick:
identify -format "%[fx:standard_deviation]" imagename.png
You can very probably get better results by running a smoothing kernel on the image in order to eliminate some high-frequency noise that's unlikely to be informational, and very likely to mess up compression performance. This goes much better with JPEG than with GIF, anyway.
Then, in general, you want to run a great many samples in order to come up with something of the kind (let's say you have a single compression parameter Q)
stddev    size at Q=1    size at Q=2    size at Q=3    ...
value1    v1,1           v1,2           v1,3
After running several dozen tests (but you need to do this only once, not "at runtime"), you will get both an estimate of, say, the expected size as a function of stddev and Q, and a measurement of its error. You'll then see that an image with stddev 0.45 that compresses to 108 KB at Q=1 will compress to 91 KB plus or minus 5 at Q=2, 88 KB plus or minus 3 at Q=3, and so on.
At that point you take an unknown image, get its stddev and its compressed size at Q=1, and you can interpolate the probable size at, say, Q=4 without actually running the encoding.
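A sketch of that interpolation step in Python, assuming the calibration table has already been built offline; the numbers in it are placeholders for illustration, not real measurements.

```python
import numpy as np

# Calibration table built offline from many test renders (placeholder values):
# for each stddev bucket, the measured output size in KB at Q = 1, 2, 3, 4.
QS = np.array([1, 2, 3, 4])
CALIBRATION = {
    0.30: np.array([80.0, 70.0, 64.0, 60.0]),
    0.45: np.array([108.0, 91.0, 88.0, 84.0]),
    0.60: np.array([140.0, 122.0, 115.0, 110.0]),
}

def predict_size(stddev, measured_size_q1, q):
    """Predict the output size at quality level q, given the image's stddev
    and its measured size at Q=1, by scaling the nearest calibration row."""
    nearest = min(CALIBRATION, key=lambda s: abs(s - stddev))
    row = CALIBRATION[nearest]
    scale = measured_size_q1 / row[0]      # anchor the curve on the real Q=1 size
    return float(np.interp(q, QS, row) * scale)

print(predict_size(0.45, 108.0, 4))        # ~84 KB with these placeholder numbers
```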
While your service is active, you can store statistical data (i.e., after you really do the encoding, you store the actual results) to further improve estimation; after all you'd only store some numbers, not any potentially sensitive or personal information that might be in the video. And acquiring and storing those numbers would come nearly for free.
Backgrounds
It might be worthwhile to recognize images with a fixed background; in that case you can run some adaptations to make all the frames identical in those areas, so the GIF encoder doesn't have to store that information again. When you get such a video (e.g. a talking head), this can lead to huge savings, but it throws the parameter estimation off completely unless you can also estimate the extent of the background area. In that case, with background area B and frame area A, the compressed "image" size for five frames would be roughly A + (A - B) * (5 - 1) instead of A * 5, and you can apply this correction factor to the estimate.
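A tiny sketch of that correction factor, generalized from 5 frames to n; the 60% background share in the example is made up.

```python
def corrected_area_factor(frame_area, background_area, n_frames):
    """Effective 'pixels to encode' when a static background is stored only once:
    the first frame costs the full area A, every later frame only A - B.
    Returned as a fraction of the naive A * n_frames estimate."""
    a, b = frame_area, background_area
    effective = a + (a - b) * (n_frames - 1)
    return effective / (a * n_frames)

# Example: 384x216 frames, 60% static background, 40 frames.
factor = corrected_area_factor(384 * 216, int(384 * 216 * 0.6), 40)
print(factor)   # multiply the naive size estimate by this (~0.41 here)
```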
Compression optimization
Then there are optimization techniques that slightly modify the image and adapt it for a better compression, but we'd stray from the topic at hand. I had several algorithms that worked very well with paletted PNG, which is similar to GIF in many regards, but I'd need to check out whether and which of them may be freely used.
Some thoughts: the LZW algorithm works through the pixel stream line by line. So whenever a sequence of N pixels is "less than X%" different (perceptually or arithmetically) from an already encountered sequence, rewrite the sequence:
018298765676523456789876543456787654
987678656755234292837683929836567273
here the 656765234 sequence in the first row is almost matched by the 656755234 sequence in the second row. By changing the mismatched 5 to 6, the LZW algorithm is likely to pick up the whole sequence and store it with one symbol instead of three (6567,5,5234) or more.
Also, LZW works with bits, not bytes. This means, very roughly speaking, that the more the 0's and 1's are balanced, the worse the compression will be. The more unpredictable their sequence, the worse the results.
So if we can find out a way of making the distribution more **a**symmetrical, we win.
And we can do it, and we can do it losslessly (the same works with PNG). We choose the most common colour in the image, once we have quantized it. Let that color be color index 0. That's 00000000, eight fat zeroes. Now we choose the most common colour that follows that one, or the second most common colour; and we give it index 1, that is, 00000001. Another seven zeroes and a single one. The next colours will be indexed 2, 4, 8, 16, 32, 64 and 128; each of these has only a single bit 1, all others are zeroes.
Since colors will be very likely distributed following a power law, it's reasonable to assume that around 20% of the pixels will be painted with the first nine most common colours; and that 20% of the data stream can be made to be at least 87.5% zeroes. Most of them will be consecutive zeroes, which is something that LZW will appreciate no end.
Best of all, this intervention is completely lossless; the reindexed pixels will still be the same colour, it's only the palette that will be shifted accordingly. I developed such a codec for PNG some years ago, and in my use case scenario (PNG street maps) it yielded very good results, ~20% gain in compression. With more varied palettes and with LZW algorithm the results will be probably not so good, but the processing is fast and not too difficult to implement.
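A rough Pillow/NumPy sketch of that reindexing idea (not the original PNG codec mentioned above): remap a paletted image so the most frequent colors get the indices 0, 1, 2, 4, 8, 16, 32, 64, 128, leaving every pixel's actual color untouched.

```python
from PIL import Image
import numpy as np

def reindex_palette(img):
    """Losslessly remap a paletted ('P' mode) image so the most frequent
    colors get indices with as few 1-bits as possible
    (0, 1, 2, 4, 8, 16, 32, 64, 128, then everything else).
    Pixels keep their exact colors; only the index assignment changes."""
    assert img.mode == "P"
    pixels = np.asarray(img)                          # current palette indices
    counts = np.bincount(pixels.ravel(), minlength=256)
    order = np.argsort(-counts)                       # old indices, most frequent first

    preferred = [0, 1, 2, 4, 8, 16, 32, 64, 128]
    rest = [i for i in range(256) if i not in preferred]
    new_slots = preferred + rest                      # new index for rank 0, 1, 2, ...

    remap = np.zeros(256, dtype=np.uint8)             # old index -> new index
    for rank, old_idx in enumerate(order):
        remap[old_idx] = new_slots[rank]

    new_pixels = remap[pixels]
    old_palette = (img.getpalette() + [0] * 768)[:768]   # flat [r,g,b, r,g,b, ...]
    new_palette = [0] * 768
    for old_idx in range(256):
        n = remap[old_idx]
        new_palette[3 * n:3 * n + 3] = old_palette[3 * old_idx:3 * old_idx + 3]

    out = Image.fromarray(new_pixels, mode="P")
    out.putpalette(new_palette)
    return out

# out = reindex_palette(Image.open("frame.png").convert("P", palette=Image.ADAPTIVE))
# out.save("frame-reindexed.png", optimize=True)
```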
I have a page with many large fluid images. http://altarjewelry.com/gallery
I want to get a smooth 60fps webapp feel while scrolling. The Chrome DevTools tell me my paint times are the biggest problem (which you can check for yourself while scrolling). I'm assuming this is due to my many large fluid images.
I've read every article on HTML5Rocks about performance. I found many good tips on JS performance but no help optimizing large-image paint times, other than using small fixed-size images, which is not an option for me as I'm building a responsive site.
I'm already serving up responsive images depending on the client.
Thank you for your help.
Not really sure about how your gallery looks because it never loaded from the URL in your post, and I don't know if that's a javascript issue or what--but I'll take a stab at helping you come up with a solution. Image optimization is image optimization, regardless of whether or not you're building a responsive site.
Approach and Design Considerations
Do you really need one large, high resolution image for each item, at the same DPI/PPI and compression, that should be responsive?
Or, should you serve appropriately sized images at differing DPI/PPI and compression, to different displays, all of which are still used in a responsive application?
Popular Convention
You're showing a gallery, and typically, you want smaller representations of the actual image--thumbnails or placeholders, generally of lower resolution, which link to the actual image at a higher resolution. This is an accepted design approach, and if you're going to vary from it, be sure it's with good reason.
The Lowest Common Denominator
If you're building a responsive site, some users will obviously be on mobile devices which may have resolutions as small as 320 pixels wide. Consider things like that, and this: even if someone shows up on a desktop, are you going to have huge, full width images loading? They will take forever to load, and visitors will never see your gallery. How is your gallery to look on a wide screen desktop? If your intention is to have one image full width across the entire page, and load the same image regardless of the device accessing your site, you may be using responsive design, but you'll find that's far away from best or even good practice.
The Flip-Side, Large/Wide Screens
Why not have four gallery images going across a desktop? Or more? And if that's the case, they're likely to have a maximum size in any case. I honestly don't know, because I've tried to load your site a few times and get nothing. But consider: if there is a practical maximum size for your gallery images in the initial display, say 6 images at 200 pixels each across a 1200-pixel max layout width, solutions begin to emerge. (Or are you using a percentage-based framework and using 100% of the display width? Even responsive sites often limit the max width of the content area, and these things would all help in determining a more appropriate answer.)
Since no image needs to be larger than 200 pixels wide in that case, and on a phone, where your columns might display only one image that you want full width, you can work with a maximum initial width of 480px for your images.
Higher Quality, Smaller Files
We'll assume you want them high quality. That's fine. You still need to reduce file size, and you do that with compression. Now, you may feel compressing a photo to 50% or even more makes it blurry, and it certainly will at low ppi (pixels per inch) settings.
The Secret To Better Compression
What you need to do is change default image editor settings from traditional defaults like 72 or 90 ppi, and crank them up to 300, 400, 500, or more--and THEN apply compression. If that image is 480px wide, and you've only got 72ppi, compression will quickly erode quality. However, having several hundred extra pixels per inch will allow more information to be stored. Then, you can apply much higher compression rates, and shrink file sizes down quite a bit more.
The Oversized Image Approach
Another trick is to do the same thing, and slightly oversize the image. If 480px is the max size for your thumbnails/small pics, make them actually 540-600 px wide, with 400-500ppi and compress them at really high settings. The browser will resize to the max width of 480 px...but then you have a performance hit there. Everything is a trade off. You can blur backgrounds in images as well, allowing the foreground/main focus of the photo to be of higher quality while the background requires less information, yielding smaller file sizes.
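Here is a sketch of that oversized-image ("compressive image") trick with Pillow; the oversize factor and quality value are starting points I picked to tune per image, not fixed rules from the answer above.

```python
from PIL import Image

def compressive_thumbnail(src_path, dst_path, display_width,
                          oversize_factor=2.0, quality=35):
    """Save the file somewhat larger than its display size but with
    aggressive JPEG compression, and let the browser scale it down.
    oversize_factor and quality are per-image tuning knobs."""
    img = Image.open(src_path).convert("RGB")
    target_w = int(display_width * oversize_factor)
    if img.width > target_w:
        target_h = round(img.height * target_w / img.width)
        img = img.resize((target_w, target_h), Image.LANCZOS)
    img.save(dst_path, "JPEG", quality=quality, optimize=True, progressive=True)

# Hypothetical usage: a gallery thumbnail displayed 240 px wide,
# saved at 480 px and compressed hard.
# compressive_thumbnail("gallery-item.jpg", "gallery-item-480.jpg", display_width=240)
```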
Not Suitable For Batch Processing
This should be done individually for each image; batch editing does not generally get the most out of this technique, because the color information is so different in each photo. One photo might be best in quality and smallest in size for your purposes at 300ppi and 50% quality, another at 500ppi and 35% quality. You'll want to do this not just for your gallery thumbnails, but for multiple sizes of each image. There's no point in serving up a 1400px wide full-size desktop image to someone who's browsing your site on a 480px wide display, after all. Use media queries to serve up the appropriate ballpark-sized image, and have a small, medium and large variant. Done right, you don't even need to be serving larger images to those browsing with phones... the gallery images they are viewing are good enough.
As the number of pixels goes up, the compression setting matters less for the final image quality: the more pixels you have to work with, the better the quality you keep at higher compression settings.
Design Considerations and Smart Image Loading
Break It Up Into Smaller Content Chunks
Also, consider the process/design of your gallery. Do you have 20 items? 100? 400? Are you trying to show them all on one page? Break it up into small numbers...12-20 per page. Smaller and fewer images will load faster, and can remain responsive, with links for those who want a larger or higher quality image. No need to show a huge, high quality image to someone browsing with their phone.
Pre-fetching and Loading
Server side scripting, and even some javascript solutions can help with this. You might do things like limiting each gallery page to four rows of four images, and then after page load, have a javascript that pre-fetches the first four images that will display on page 2. If your visitor goes to page two after scrolling through page one, the first four images are loaded in cache, and display quickly while the others load normally, giving the experience of a faster page load.
If the visitor goes to another page of the site instead, you haven't wasted bandwidth on 12 images, only spent it on four. Smart design might be to reuse those first four images from page two of the gallery elsewhere on the site, so that first gallery page visit actually speeds up a page load elsewhere and doesn't in fact give up bandwidth to 4 unnecessary images. Think the process through, and solutions will suggest themselves.
Resources
Anyway, here are relevant articles/posts/links you may find helpful in understanding all of this:
Are Compressive Images A Good Solution For High Resolution Displays?
http://www.vanseodesign.com/web-design/compressive-image-tests/
Reducing image sizes (ResponsiveDesign.is)
https://responsivedesign.is/articles/reducing-image-sizes
Search benfrain dot com for this post:
How to serve high-resolution website images for retina displays
And a tool you might find useful...
adaptive-images dot com
I am trying to put an image on a website.
The image has a transparent background and I want it to be of very good quality. I saved it in PNG format at high quality, but the problem is that the file is really heavy and takes a long time to load.
How can I show the picture at the same size and quality, with a transparent background, but with a smaller file size so it loads quickly?
I'm talking about the image in the center of this website, with the two cordless drills:
http://www.tigertools.co.il
ImageAlpha (pngquant) can substantially reduce the size of transparent PNGs.
Whether it reduces quality depends on the image. Usually the loss is not noticeable.
Dithering to 256 color (optimized palette) and saving as PNG seems to bring down file size to 96KB. This is using IrfanView.
However, not all dithering software handles the semi-opaque pixels near the object boundary correctly.
With regard to the quality loss, it's better to do a double-blind test to get an unbiased subjective opinion. Keep in mind that the reduced website loading time will make users happier, which may compensate for the hypothetical slight loss in quality.
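For a rough idea of what these tools do, here is a Pillow sketch that quantizes a transparent PNG down to a 256-color palette; dedicated tools like pngquant/ImageAlpha or IrfanView's dithering generally pick better palettes than this quick version, so treat it as an approximation.

```python
from PIL import Image

def quantize_png(src_path, dst_path, colors=256):
    """Reduce a truecolor+alpha PNG to a 256-color palette PNG.
    FASTOCTREE is the Pillow quantizer that supports an alpha channel,
    so the transparent background survives the conversion."""
    img = Image.open(src_path).convert("RGBA")
    pal = img.quantize(colors=colors, method=Image.FASTOCTREE)
    pal.save(dst_path, optimize=True)

# quantize_png("drills.png", "drills-256.png")
```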
I am building a map system that requires a large image (native 13K pixels wide by 20K pixels tall) to be overlaid onto an area of the US covering about 20 kilometers or so. I have the file size of the image in JPG format down to 23 MB and it loads onto the map fairly quickly. I can zoom in and out and it looks great. It's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory! I am using the Memory Restart extension for Firefox, and without the image overlay the memory usage is about 360 MB to 400 MB, which seems to be about the norm for regular usage, browsing other websites etc. But when I add the image layer, the memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew
The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as FireFox) that wants to actually render the image has to uncompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260M pixels. Figure at least 3 bytes of color info per pixel, that's 780 MB. It might be using 4 bytes, to have each pixel aligned at a word boundary, which would be 1040 MB.
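That arithmetic as a quick sketch:

```python
def decoded_size_mb(width, height, bytes_per_pixel):
    """Memory needed to hold the decompressed bitmap (in decimal megabytes),
    regardless of how small the JPEG file on disk is."""
    return width * height * bytes_per_pixel / 1_000_000

print(decoded_size_mb(13_000, 20_000, 3))   # 780.0  -> the ~780 MB figure above
print(decoded_size_mb(13_000, 20_000, 4))   # 1040.0 -> ~1 GB with word-aligned pixels
```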
As for how to fix it, well, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in some format that uses indexed colors, and then FireFox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.
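A sketch of the tiling idea as a one-time, offline preprocessing step, here with Pillow rather than ImageMagick; the 256 px tile size is just the conventional web-map size, and the file names are my own invention.

```python
import os
from PIL import Image

Image.MAX_IMAGE_PIXELS = None   # the source is ~260 MP, above Pillow's bomb guard

def make_tiles(src_path, out_dir, tile=256):
    """Cut the full-resolution image into fixed-size tiles so the viewer only
    has to load the tiles covering the current viewport."""
    os.makedirs(out_dir, exist_ok=True)
    img = Image.open(src_path)
    for top in range(0, img.height, tile):
        for left in range(0, img.width, tile):
            box = (left, top, min(left + tile, img.width), min(top + tile, img.height))
            img.crop(box).save(f"{out_dir}/tile_{top // tile}_{left // tile}.jpg",
                               "JPEG", quality=75)

# make_tiles("overlay_13000x20000.jpg", "tiles")
```

Decoding the full image once on the server for this step still needs the ~1 GB discussed above, but it happens offline instead of in every visitor's browser.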
For example, I have a 1024*768 JPEG image. I want to estimate the size of the image after it is scaled down to 800*600 or 640*480. Is there any algorithm to calculate the size without generating the scaled image?
I took a look at the resize dialog in Photoshop. The size it shows is basically (width pixels * height pixels * bits/pixel), which differs hugely from the actual file size.
I have a mobile image-browser application which allows users to send an image through email, with options to scale down the image. We provide checkboxes for the user to choose a down-scaled resolution along with an estimated size. For large images (> 10MB), we have 3 down-scale sizes to choose from. If we generate a cached image for each option, it may hurt memory usage. We are trying to find the best solution that avoids this memory consumption.
I have successfully estimated the scaled size based on the DQT, i.e. the quality factor.
I conducted some experiments and found that if we use the same quality factor as in the original JPEG image, the scaled image will have a size roughly equal to (scale factor * scale factor) times the original image size. The quality factor can be estimated from the DQT defined in every JPEG image; an algorithm to estimate it from the standard quantization tables shown in Annex K of the JPEG spec can be defined.
Although other factors like chroma subsampling, different compression algorithms and the image content itself contribute to the error, the estimation is pretty accurate.
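A minimal sketch of that rule of thumb (my own illustration, not the poster's code): at the same quality factor, the output size scales roughly with the pixel-count ratio.

```python
import os

def estimate_scaled_jpeg_size(src_path, src_w, src_h, dst_w, dst_h):
    """Estimate the size of a JPEG re-encoded at the same quality factor
    but at a smaller resolution: roughly (pixel ratio) times the original
    file size. Subsampling, content and encoder differences add error."""
    original_bytes = os.path.getsize(src_path)
    scale = (dst_w * dst_h) / (src_w * src_h)
    return int(original_bytes * scale)

# estimate_scaled_jpeg_size("photo.jpg", 1024, 768, 800, 600)
# -> about 0.61 x the original file size
```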
P.S. Examining JPEGSnoop and its source code helped me a lot :-)
Cheers!
Like everyone else said, the best algorithm to determine what sort of JPEG compression you'll get is the JPEG compression algorithm.
However, you could also calculate the Shannon entropy of your image, in order to try and understand how much information is actually present. This might give you some clues as to the theoretical limits of your compression, but is probably not the best solution for your problem.
This concept will help you measure the difference in information between an all-white image and a photo of a crowd, which is related to their compressibility.
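A quick sketch of that entropy computation on the grayscale histogram, using Pillow and NumPy; it is a crude proxy for information content, not a file-size predictor.

```python
import numpy as np
from PIL import Image

def shannon_entropy(path):
    """Entropy (bits per pixel) of the grayscale intensity histogram.
    An all-white image gives ~0 bits; a noisy crowd photo approaches 8."""
    hist = np.bincount(np.asarray(Image.open(path).convert("L")).ravel(),
                       minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# print(shannon_entropy("crowd.jpg"), shannon_entropy("white.png"))
```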
-Brian J. Stinar-
Why estimate what you can measure?
In essence, it's impossible to provide any meaningful estimate due to the fact that different types of images (in terms of their content) will compress very differently using the JPEG algorithm. (A 1024x768 pure white image will be vastly smaller than a photograph of a crowd scene for example.)
As such, if you're after an accurate figure it would make sense to simply carry out the re-size.
Alternatively, you could just provide a range such as "40KB to 90KB", based on an "average" set of images.
I think what you want is something weird and difficult to do. Depending on the JPEG compression level, some images are heavier than others in terms of size.
My hunch for JPEG images: Given two images at same resolution, compressed at the same quality ratio - the image taking smaller memory will compress more (in general) when its resolution is reduced.
Why? From experience: many times when working with a set of images, I have seen that if a thumbnail occupies significantly more memory than most others, reducing its resolution causes almost no change in its size. On the other hand, reducing the resolution of one of the average-size thumbnails reduces the size significantly (all parameters, like original/final resolution and JPEG quality, being the same in the two cases).
Roughly speaking: the higher the entropy, the smaller the impact on image size from changing resolution (at the same JPEG quality).
If you can verify this with experiments, maybe you can use it as a quick method to estimate the size. If my language is confusing, I can explain with some mathematical notation/pseudo-formula.
An 800*600 image file should be roughly (800*600)/(1024*768) times as large as the 1024*768 image file it was scaled down from. But this is really a rough estimate, because the compressibility of original and scaled versions of the image might be different.
Before I attempt to answer your question, I'd like to join the ranks of people that think it's simpler to measure rather than estimate. But it's still an interesting question, so here's my answer:
Look at the block DCT coefficients of the input JPEG image. Perhaps you can find some sort of relationship between the number of higher frequency components and the file size after shrinking the image.
My hunch: all other things (e.g. quantization tables) being equal, the more high-frequency components you have in your original image, the bigger the difference in file size between the original and the shrunk image will be.
I think that by shrinking the image, you will reduce some of the higher frequency components during interpolation, increasing the possibility that they will be quantized to zero during the lossy quantization step.
If you go down this path, you're in luck: I've been playing with JPEG block DCT coefficients and put some code up to extract them.
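Since that extraction code isn't shown here, a rough stand-in: the sketch below recomputes 8x8 block DCTs from the decoded pixels with SciPy and measures the share of energy outside the low-frequency corner. It only approximates reading the JPEG's own stored coefficients, so treat the ratio as a relative indicator.

```python
import numpy as np
from PIL import Image
from scipy.fft import dctn

def high_freq_energy_ratio(path, cutoff=2):
    """Take 8x8 block DCTs of the luma channel and return the fraction of
    energy outside the low-frequency corner (u < cutoff and v < cutoff).
    Works on decoded pixels, not on the JPEG's stored coefficients."""
    luma = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    h, w = (luma.shape[0] // 8) * 8, (luma.shape[1] // 8) * 8
    luma = luma[:h, :w]
    low_mask = np.zeros((8, 8), dtype=bool)
    low_mask[:cutoff, :cutoff] = True
    total = high = 0.0
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            energy = dctn(luma[y:y + 8, x:x + 8], norm="ortho") ** 2
            total += energy.sum()
            high += energy[~low_mask].sum()
    return high / total

# print(high_freq_energy_ratio("photo.jpg"))
```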