This is not a question about a specific programming problem; it's about comparing different concepts. If the moderators don't feel this is OK, delete my question.
I need to display about 100 PNG images inside table cells, and each image is a 75x16 PNG. To reduce the number of HTTP requests, I grouped all 166 images (only roughly 100 are shown at any one time) into one big spritesheet and used an IMG tag to display the output, one image at a time. This is the code:
CSS:
.sprites {background-image:url('folder/spritesheet.png');background-color:transparent;background-repeat:no-repeat;display:inline;}
#png3 {height:16px;width:75px;background-position:0px 0;}
#png5 {height:16px;width:75px;background-position:-75px 0;}
PHP:
$classy = "png" . $db_field['imageid'];
echo "<td>" , "<img src='transparent.gif' class='sprites' id='$classy' alt='alt' align='absmiddle'/>" , "</td>";
$classy is a variable that selects the correct image based on the SQL query output, and transparent.gif is a 1px transparent GIF. This is the result: the images are shown correctly inside the table.
The page loading speed improved significantly, maybe by 50-60%. In one of my earlier questions, concerns were raised over whether this is good practice or not. But it works.
The only other solution I've found is using jar compression, but that concept works for Firefox only. This is the code which is used for displaying these same images using jar compression (PHP, no CSS):
$logo = "jar:http://www.website.com/logos.jar!/" . $db_field['imageid'] . ".png";
echo "<tr>" , "<td>" , "<img src='$logo' alt='alt' width='75' height='16'/>" , "</td>" , "</tr>";
All 166 images are compressed into a jar archive and uploaded to the server, and since jar is a non-solid archive, only the requested image is extracted, not all of them. This solution is lightning fast; I've never seen a faster way of displaying that many images. The concept is here and deserves a link. Another advantage over CSS sprites is that with jar each image can be individually optimized for size (e.g. one image optimized to 64 colors, another to 128 or 32, depending on the image), whereas a large spritesheet cannot be optimized that way because it contains too many colors.
So, does anyone know of a solution that would be as fast as jar? If using CSS sprites to display content is bad practice, what is the good practice that gives the same result? The key here is the loading speed of the website, with as few HTTP requests as possible.
Not really an expert on this, but I've had my share in these things too.
HTTP Requests
Ever heard of the "2 concurrent connections" limit? (Recent browsers allow around 6-8 per host.) Loading a lot of files means that while 2 are loading at the same time, the others have to wait in line. Loading everything in one big chunk is better. This is the main reason spriting is used. Aside from the connection limit, you also get to manage all those "same purpose" images in one file.
Cache
Now, one big chunk, I say, but you might ask "Doesn't that make it even worse?". Nope, because I have an ace up my sleeve, and that's where the cache comes into play. One page load is all you need; after that, every page that needs the image loads it at the speed of light and saves you another HTTP request. Never underestimate the power of the cache.
Images
Another thing you can do is optimize your images. I have used Fireworks and I really love its image optimization tools. To optimize, here are the personal guidelines I follow, which you can use in your situation:
GIFs for icons, JPGs for photos, PNGs for transparent stuff.
Remove unused colors. Yes, some tools can do this, and it cuts down the file size.
Never resize images in the HTML. Have separately resized versions instead.
Lose quality. Yes, there is such a thing: lower your image quality to reasonable limits. Losing too much makes your image too "cloudy" or "blocky".
Use progressive-loading images. They "fast-load" a blurred image and then sharpen it up later.
Avoid animated images. They are bloat, not to mention annoying.
Server Tricks
There are connection limits, but that does not prevent you from using other domains or even subdomains! Distribute your content across other domains or subdomains to increase the number of parallel connections. For images, dedicate a subdomain or two, like say img1.mysite.com and img2.mysite.com, or another domain like mysite2.com. Not only is it beneficial for your users, it also helps distribute server load.
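To make that concrete, here is a minimal PHP sketch of the idea; only the img1/img2 subdomains come from the paragraph above, while the hashing scheme and the file name are my own illustrative assumptions:

// Deterministic "sharding": the same file always maps to the same
// subdomain, so the browser cache stays valid across pages.
function image_url( $filename ) {
    $shard = ( abs( crc32( $filename ) ) % 2 ) + 1;   // picks img1 or img2
    return "http://img{$shard}.mysite.com/images/{$filename}";
}

echo "<img src='" . image_url( 'logo3.png' ) . "' width='75' height='16' alt='alt'/>";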
Another method is using a Content Delivery Network (CDN). A CDN has a global network of servers that hold "cached" versions of your website's resources. Say I'm in Asia: when I view your site with CDN'ed resources, the browser fetches each resource from the server nearest to Asia.
Mark-up
Not necessarily related to speed, more to semantics: the use of id should be reserved for more important purposes. If you use an ID to mark images for their styles, what happens when another element needs the same image? IDs need to be unique; they can't be used twice. So I suggest using multiple classes instead.
Also, IDs take precedence over classes. To avoid unexpected overrides, use classes. Learn more about CSS specificity.
.sprites {
    background-image: url('folder/spritesheet.png');
    background-color: transparent;
    background-repeat: no-repeat;
    display: inline;
    height: 16px; /* same widths and heights? place them here instead */
    width: 75px;
}
.png3 {
    height: 16px; /* if you need a different dimension, this will override */
    width: 75px;
    background-position: 0 0;
}
.png5 {
    background-position: -75px 0;
}
$classy = "png" . $db_field['imageid'];
echo "<img src='transparent.gif' class='sprites {$classy}' alt='alt' align='absmiddle'/>";
I embed small images/icons in the style sheet:
.someicon{
background-image:url('data:image/png;base64,iVBORw0KGg....');
}
This works with all modern browsers and does not require me to create sprites (plus it even saves one additional file to load).
In development, the images are defined in the style sheet normally like so:
.someicon{
background-image:url('../images/someicon.png');
}
and I have a system that generates the final style sheet (including consolidating all CSS into one, minifying and replacing the image reference with the data: url) automatically whenever I make a change to a style sheet.
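A stripped-down sketch of what such a build step can look like, assuming PHP 5.3+; the file names and the regex are illustrative, and the real system also does the consolidating and minifying mentioned above:

// Replace image references in a stylesheet with base64 data: URIs.
$css = file_get_contents( 'development.css' );
$css = preg_replace_callback(
    "~url\\('\\.\\./images/([\\w.-]+\\.png)'\\)~",
    function ( $m ) {
        $data = base64_encode( file_get_contents( __DIR__ . '/../images/' . $m[1] ) );
        return "url('data:image/png;base64,{$data}')";
    },
    $css
);
file_put_contents( 'production.css', $css );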
This works well and saves me a ton of work. When compressed with gzip, the CSS file is not much bigger than the individual files added together. After optimizing the PNG/JPG files, the CSS for my start page is 63K uncompressed. Even with a slightly smaller sprite file, I would probably not save more than a fraction of a second in load time for the average user, so I do not bother with sprites.
I'm planning to create a blog and I'm thinking about how to serve images in the best possible quality without affecting loading times on mobile devices too much.
My content column is approx. 768 display points wide and I'm posting screenshots taken on a Retina MacBook, so the images will be at least 1500px wide. Other Retina/HiDPI users should see the version that is best for them.
Is there a recommended best practice for this case? I'm talking only about content images (<img />); no CSS backgrounds. A true WordPress solution that serves images based on the uploaded hi-res originals would be great.
I have used Adaptive Images in the past, but it does not distinguish between HiDPI and normal clients. Other solutions serve Retina versions but do not handle different screen sizes.
No, there is no good method for doing this. This isn't a fault of WordPress, but of the state of the web in general. HiDPI on the web is currently a big hack, basically, and the web has not yet adapted to it properly.
Basically, there's no good way to specify multiple resolution images in the HTML in a way that the browser can then use to determine the proper image to show. CSS works, but only if you're using images as backgrounds. If you're dealing with IMGs, then you have to rely on Javascript to do the job, and that has various downsides depending on your methodology.
In the future, when browsers have started to adopt methods for specifying multiple resolution images, then this won't be as much of a problem.
Edit: This deserves a bit more explanation.
The basic problem with using JS to do this is that it relies on a technique of image-replacement. The JS code, through whatever logic it has, finds the IMGs on the page, then replaces them with higher resolution versions from some other file. No matter what method is used to do this, it has two major downsides:
First, the initial image is loaded in low resolution, then loaded again in high resolution when it is replaced. This means extra bandwidth usage, and considering that most HiDPI devices are currently mobile ones, it makes little sense to waste that sort of bandwidth on exactly those devices.
Second, the image is generally displayed in low resolution first and then replaced with the high-resolution version. This means a second or two of showing a fuzzy image on HiDPI screens before it "pops" into focus as the replacement occurs. This is a really crappy effect to have.
Other methods exist by which you can simply serve the proper image in the first place, but they have downsides as well.
Server-side browser detection via User-Agent is a pretty crappy way to do things in the first place, and best avoided, simply because there are so many different agents out there in the wild that maintaining a list of them is virtually impossible. However, if you want to go through the pain of it, this method can work.
Alternatively, it's possible to have Javascript in the browser detect the device-pixel-ratio of the browser and set that in a Cookie to return to the server. The server can then use this cookie to serve the proper resolution image. This works, but is unreliable because many mobile user agents fail to properly set and return the cookie, and even on those that do, the high-resolution images would only be shown on the second visit to the page. First impressions are low-res, which isn't good in general.
As you can see, it's just hacks all the way down. Until the browser can be told, for a specific IMG, that multiple versions exist along with their parameters, and is then allowed to choose for itself, these are the only ways to do it. But if things like the proposed HTML5 srcset attribute or the PICTURE element get implemented, we'll have better choices.
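For reference, the proposed srcset attribute would let the markup itself declare the variants, roughly like this (syntax as proposed at the time, file names purely illustrative; this is not yet usable in production):

echo "<img src='screenshot.png' srcset='screenshot@2x.png 2x' width='768' alt='screenshot'/>";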
After thinking a while about this, I can offer two ways to approach parts of this problem. Still: read the answer from @Otto carefully. It has a lot of in-depth details for a reason.
Javascript detection
I wrote an answer about »How to add default images« for attachments. You can alter the filter callback to always add a simple PNG/GIF (containing just the background color) instead of the normal image or thumbnail. This would let you serve an extremely small placeholder image (maybe below 1kB - Smush.it to the rescue) that the user agent/browser caches on first load and then simply reuses for every other image call.
Then, simply detect the display pixel density with…
var dpr = 1;
if ( window.devicePixelRatio !== undefined )
    dpr = window.devicePixelRatio;
if ( dpr > 1 ) {
    // High pixel density displays: swap the placeholders for the real,
    // hi-res files. Assumption: the real URL sits in a data-hires attribute.
    var imgs = document.querySelectorAll( 'img[data-hires]' );
    for ( var i = 0; i < imgs.length; i++ )
        imgs[i].src = imgs[i].getAttribute( 'data-hires' );
}
…and exchange the images once the DOM has fully loaded.
CSS routing
Adding stylesheets for different pixel density displays isn't that hard:
// For default/low density pixel displays only:
wp_enqueue_style(
'low_density_styles'
,trailingslashit( get_stylesheet_directory_uri() ).'low_density.css'
,array()
,filemtime( locate_template( 'low_density.css' ) )
,'screen'
);
// For high density pixel displays only:
$media = 'only screen and (min--moz-device-pixel-ratio: 2), ';
$media .= 'only screen and (-o-min-device-pixel-ratio: 2/1), ';
$media .= 'only screen and (-webkit-min-device-pixel-ratio: 2), ';
$media .= 'only screen and (min-device-pixel-ratio: 2)';
wp_enqueue_style(
'high_density_styles'
,trailingslashit( get_stylesheet_directory_uri() ).'high_density.css'
,array()
,filemtime( locate_template( 'high_density.css' ) )
,$media
);
Now you can simply switch between high/low density images for your default images (header, logo, etc.). This doesn't work for post-type attachments (again: refer to @Otto's answer).
Up until now, when a user has uploaded an image, I have been saving several different versions of it for use throughout my site. As the site has grown, so has the number of sizes needed.
At the moment each uploaded image is resized into about 6 new versions and saved on the server.
The downside is that every time I need to create a new size (right now, for instance, I'm making a new size for an image gallery), I have to cycle through all the thousands of images and re-cut a new size for each.
Whereas when I started it was a nice, quick way to avoid resizing images on the fly, it's now starting to turn into a nightmare.
Is it better to continue saving different sizes, and just deal with the overhead, or is it better at this point to get maybe 3 general sizes, and resize them on the fly as needed?
"Resizing" images using html/css (e.g., specifying height & width) is generally not what you want to do - it results in poorly scaled images with artifacts from the resize, and is inefficient as the user is potentially downloading a much larger file than they actually need.
Rather, some kind of server-side solution that allows on-the-fly resizing is probably what you want. I'd recommend ImageMagick: combined with the implementation for your favorite language and some web-server voodoo (e.g., .htaccess rewrites on Apache), you can easily have /path/to/yourimage.png?50x50 fire a call to a resize script that resizes the image, saves it in a cache folder, and outputs the resized file to the browser. This is better all around: you get proper resizing, your user only downloads the exact file they need, and the end result is cached so the resize only happens once. Check out Image::Magick::Thumbnail for an example (in Perl).
Edit - if you respond to this with what server-side language/framework you are using, I would be happy to point you in the direction of a thumbnail/resizing implementation of ImageMagick or something else for your platform.
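For instance, in PHP a minimal resize script could look like this (assuming the Imagick extension; the URL scheme, directory layout, and PNG content type are illustrative assumptions):

// resize.php - sketch of an on-the-fly resizer with a file cache.
// Call as e.g. /resize.php?img=yourimage.png&w=50&h=50
$name  = basename( $_GET['img'] );                 // basename() blocks path traversal
$w     = (int) $_GET['w'];
$h     = (int) $_GET['h'];
$cache = __DIR__ . "/cache/{$w}x{$h}-{$name}";

if ( ! file_exists( $cache ) ) {
    $image = new Imagick( __DIR__ . '/images/' . $name );
    $image->thumbnailImage( $w, $h, true );        // true = preserve aspect ratio
    $image->writeImage( $cache );
}
header( 'Content-Type: image/png' );
readfile( $cache );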
Multiple versions.
Some browsers simply don't scale these things well, and you end up with choppy, nasty artifacts in the image, bad pixelation, etc...
The exception could be if you know all the images are photographic. Then have pre-cut versions for your larger sizes; shrinking those in the browser could be OK. But if the images contain illustration or text, the effect will be noticeable. If you do let the browser scale, constrain only one dimension and leave the other on auto:
/* Use one of these, not both: duplicate selectors would override each other. */
.resize {
    width: 200px;  /* constrain by width; height scales automatically */
    height: auto;
}
.resize {
    width: auto;
    height: 300px; /* or constrain by height instead */
}
I'm building a website and I want to create a translucent menu. I know that the .gif image type supports transparency but, in my experience, not translucency (anything between fully transparent and fully opaque): by default it seems to set the opacity to 100%, i.e. a solid image without any translucency/transparency.
I'm not sure if the issue is with the file type, or with how I'm exporting my menu. If it's worth anything, I'm using Fireworks to create and export my menu.
As is, I'm exporting the separate files for my menu as .pngs, which do seem to support translucent images, but I know that I'll want to reduce the file size of these images soon. So is there a better alternative for getting a semi-transparent image other than using the .png file type?
Thanks,
Patrick
I'd say PNG is probably the best bet. The more modern browsers (read: not IE6) understand the 8-bit alpha channel it provides, whereas GIFs just have the transparency key.
Often these days, the bottleneck on sites isn't the size of any single image (either in dimensions or in data) but rather the number of requests it takes to load a page. More modern website designs try to pack as many images as possible into one using techniques like CSS spriting (woot.com, most of Google). The other common bottleneck is not setting caching up correctly, forcing return visitors to re-download a bunch of stuff.
You'll see Google's various pages caching everything they can and reducing the number of things a single page needs to download (combining all JavaScript into one file, all CSS stylesheets into one), so that the browser makes 2 or 3 requests instead of 15-20.
I'd go with PNGs, and look into CSS sprites and caching as an alternative optimization.
See here for an example of an image sprite used on google's homepage.
This may well be a bit of an open-ended question.
The site I am working on needs to be optimised for performance. One of the key areas is optimising the file sizes of the images used on the site.
Unfortunately these images are being created by employees who do not have the required knowledge of creating images for the web, and it is my job to produce a set of guidelines for them to use.
I was wondering whether there is any resource/guideline/literature regarding typical file sizes for images of different dimensions, as I would like to include something like this to help ensure their images are created properly.
Any info would be greatly appreciated.
Thanks in advance
I can't answer the opinion question, but I can suggest some guidelines that will keep your images smaller.
First off, if they're using Photoshop to edit their images, it's likely they're storing a whole bunch of crap in the headers (digital paper trail, EXIF data, and such). Also, folks frequently save at too high a bit depth.
For novice users, trying to explain why they need to use "save for web" is more likely to confuse them. Instead, just point them at:
http://www.smushit.com/ysmush.it/
This site is rather handy - it will compress all the images on a page you specify, or you can upload the images.
You should strongly consider writing some guidelines about where images are stored as well. It's frequently very beneficial to have your static image content served from several servers, apart from your dynamic content. Most browsers will only download a limited number of files at a time from any given website (usually it's 2).
Unless there's a good reason not to, all your images should be cached using one of the HTTP cache techniques (Expires, ETags, etc.).
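A minimal PHP sketch of such headers, in case you serve images through a script (the one-year lifetime and the PNG content type are arbitrary choices of mine):

// Serve an image with far-future cache headers.
$file = __DIR__ . '/images/' . basename( $_GET['img'] );   // basename() blocks path traversal
header( 'Content-Type: image/png' );
header( 'Cache-Control: public, max-age=31536000' );       // one year
header( 'Expires: ' . gmdate( 'D, d M Y H:i:s', time() + 31536000 ) . ' GMT' );
header( 'ETag: "' . md5_file( $file ) . '"' );
readfile( $file );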
Good luck.
72 dpi as a resolution and either JPEG or PNG formats work best.
Try to use images at the exact pixel dimensions they will be displayed at, as specified by the image's height and width attributes.
You can set the output quality of a JPEG image, which will also save file size, although there is a trade-off against image quality.
I hope this is of use.
We use large background images (hi-res photos, up to 700 KB) for our page design.
It's part of the experience of the site that as you browse around, you see different images.
At the moment a different (random) image is loaded on each page request, from a pool of ~15 images, which could grow over time.
I'm looking for a sane way to optimize this:
To avoid the user having to download a big image file on every page view
To reduce load on the server (is this an issue, will the server keep the images in memory?)
The ideas I have so far include:
A timer which loads a different image at set intervals
Progressively loading other images in the background with ajax
Associating images with specific content (pages, tags)
The question is, how to keep it feeling somewhat random, while minimizing page load times and server hit?
I usually avoid sites with huge images; I am very impatient. I would rethink your design.
As a first step you should make sure that the images can be properly cached (a small sketch follows the list):
Use sane URLs (no session IDs, etc.)
Set appropriate HTTP caching headers (Expires, ETag)
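A minimal PHP sketch combining the "sane URLs" point with your randomness requirement (the session-based choice and the paths are my assumptions, not the only option): pick one image per visit, so repeat page views within the session hit the browser cache.

// Pick one random background per visit; every page in the session then
// reuses the same file, so the browser cache serves it after the first load.
session_start();
if ( ! isset( $_SESSION['bg_image'] ) ) {
    $pool = glob( __DIR__ . '/backgrounds/*.jpg' );
    $_SESSION['bg_image'] = basename( $pool[ array_rand( $pool ) ] );
}
echo "<body style=\"background-image:url('/backgrounds/{$_SESSION['bg_image']}');\">";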
Firstly, hearing that the background images alone are up to 700 KB astounds me. In addition to the content on screen... that is a pretty heavy site.
For starters, I would try image compression tools. Two tools come to mind: ImageMagick and PNGCrush. PNGCrush is excellent at removing all the extraneous metadata attached to photos without compromising photo quality.
I recommend this because compressing the images lets the user download a smaller quantity of content, which means quicker load times, which... at the end of the day... is what users want.
I would also cache the images, such that when a user re-visits the site, the image is already cached on their end. This minimises the HTTP requests that are made each time a user visits your site.
An example of where this technique is used on a commercial site is www.reactive.com. If you look at the /js/headerImages.js file, they make use of image caching. Funnily enough, you will find the same source code at: http://javascript.internet.com/miscellaneous/random-image.html
Considering that you have mentioned that images are randomly loaded, I am assuming you are using a Javascript library such as jQuery to create the effect.
If you are, you can minimize page load times by loading the jQuery lib from a CDN as opposed to referencing a local copy stored on your server. I performance-tested a site I made for a client and, over an average of 20 hits, saved 1.6 seconds through this technique!
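For example, referencing the Google-hosted copy instead of a local one (the version number here is only illustrative):

// Protocol-relative reference to the CDN-hosted jQuery library.
echo "<script src='//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js'></script>";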
Hope that helps for now :)