Are modern browsers good at scaling images down nowadays?

Back in the old days, one would always need to scale images in Photoshop, because the browsers did a terrible job of it. Firefox now seems to scale images smaller quite nicely. Do the other browsers do a nice job too?

Internet Explorer 7 doesn't do so well, but IE8 does. You can change IE7 to use the better method from IE8 with a simple line of CSS.
img { -ms-interpolation-mode: bicubic; }
P.S. I found this out when working on an intranet page where bandwidth wasn't a problem. For something on the internet I would seriously consider resizing the image to reduce the number of bytes.

If you're asking whether or not you're safe to display large images in small areas on a webpage, I wouldn't risk it: resize the image prior to posting/let your framework resize the image. You'll save the client from downloading a huge file, and you'll know exactly how the image will display.

Most modern browsers are good at scaling images, although none will be as good as a proper graphics application.
However, if you are scaling an image down, the full-size file still has to be downloaded before it can be scaled, so it is better to serve a small one if you can, especially if you have mobile clients.

Related

Performance gains from an image-less web design

As browsers are finally starting to agree on CSS3, many web developers are rubbing their hands in excitement over their new, image-less web designs. They're clean. They're scaleable. They're powerful.
With border-radii, box-shadows, font-faces and the like, now we can convert our designer's beautiful design into lines of code, instead of packing our pages with img tags.
Two related questions:
Is there a point when the stylesheet(s) get so large that they actually [negatively] affect performance to a noticeable degree?
In a web application with lots of icons (in the range of 16px to 48px size), how noticeable would the performance boost be by using an icon font?
Is there a point when the stylesheet(s) get so large that they actually [negatively] affect performance to a noticeable degree?
It's just common sense really. If your stylesheet(s) become quite large then that will have just as negative an effect as having lots of images. In general terms a stylesheet (with lots of CSS3 and fancy bits) will be faster to download than a load of images.
I'd recommend taking it case-by-case and deciding whether CSS or images provides the better solution taking into account download speeds, browser support requirements, desktop vs mobile and so on.
In a web application with lots of icons (in the range of 16px to 48px size), how noticeable would the performance boost be by using an icon font?
Unless you were talking about hundreds/thousands of icons then there really isn't going to be a hugely noticeable difference in performance. Remember with icon fonts you're also probably having the user download some custom fonts.
Again it's really a case of using what is best for the current project.
I don't think there is a definitive answer to your question but hopefully what I've said clears it up a little for you.
I don't think there is a single "point", and performance is related less to the size of the stylesheet than to the style rules themselves. The main issues I've seen are related to CSS gradients (especially radial gradients). This was a year or so back, but I remember testing pages that used a mixture of gradients, and on (WebKit) mobile devices the page's display was noticeably lagging. Removing the gradients and adding images removed the lag. For all I know, most devices may now have newer WebKit display engines which fix that issue, but I do think it's still a valid point to consider.
The size of the CSS files does affect performance in a few cases. If the CSS is especially large, it will cause some issues, but as someone else mentioned, it is text, so it's probably a pretty small file. If that CSS is split across many files, that becomes a bigger issue. I have seen some sites with 10-20 CSS files, all located on a single server. Depending on the browser, it can only open 2-6 (maybe 8) connections at a time to each server. If you have 10 CSS files, plus another 10 JS files, plus 100 other assets, that's going to take a relatively long time to open and close the various connections. One way to solve this is to concatenate the CSS files as part of your development process. Some tools I like for this are Yeoman and CodeKit. Those tools will also automatically minify the CSS, making the one file they generate that much smaller.
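As a rough illustration of that concatenation step (a minimal sketch in Node.js; the file names and directory are hypothetical, and this is not how Yeoman or CodeKit work internally):
// build.js - naive concatenate-and-minify sketch
var fs = require('fs');
var files = ['reset.css', 'layout.css', 'icons.css']; // hypothetical source files
var combined = files.map(function (name) {
  return fs.readFileSync('css/' + name, 'utf8');
}).join('\n');
// crude minification: strip comments and collapse whitespace
combined = combined.replace(/\/\*[\s\S]*?\*\//g, '').replace(/\s+/g, ' ');
fs.writeFileSync('css/site.min.css', combined);
The page then links a single css/site.min.css file instead of ten separate stylesheets.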
Using an icon font is good because icon fonts are scalable and use a single file. As you zoom in on a page, the icons continue to look great, while the PNGs you might use look terrible. The icon font is also one file, so for the same reason as above, it's one file versus tens or hundreds. If you do need to use PNG files for some parts that are not icons, look into using sprites, often referred to as CSS sprites. This is a technique for combining several images into a single PNG file, which can then be used on the site with some creative CSS.
The whole point of CSS is that the styles cascade upon one another. Extremely large style sheets would lead me to believe the CSS is not written properly and therefore not cascading down. That being said, I suppose it's possible to have large, properly cascading style sheets, in which case I would recommend using a minifier to eliminate white space and compress your code to make the load time faster.
As far as the icons are concerned, you could possibly create a sprite (a collection of multiple images in one file) and then use CSS positioning to show only the icon you need, as in the sketch below. That way the browser fetches only one large image instead of a lot of smaller ones. Granted, the one image is larger than each of the others, but nothing kills load time and drags down a page's performance like a ton of separate image requests to the server.
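A minimal sketch of the sprite technique (the icons.png file and the offsets are hypothetical; each icon occupies one 16x16 tile in the combined image):
.icon {
  display: inline-block;
  width: 16px;
  height: 16px;
  background-image: url('icons.png'); /* one combined image */
  background-repeat: no-repeat;
}
.icon-save  { background-position: 0 0; }     /* first tile */
.icon-print { background-position: -16px 0; } /* second tile, shifted left by one icon width */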

What's the best way to cycle through a large number of fixed position images in WebKit efficiently?

I'm currently working on a little site for my family. One of the things I wanted to do was to make a basic 'making of' stop-motion video. I could assemble it and upload it to Vimeo or something but I thought it was a perfect opportunity to use nothing but HTML, CSS, and Javascript.
I've got everything styled and my JS is working, etc. except that it performs atrociously in Chrome and Safari. Interestingly, it works great in Firefox and I'm not supporting it yet in IE. I'm hoping for 8 to 12 frames per second, with music playing, which I haven't bothered trying yet due to this. Bad performance is anything less than that. Currently I'm getting roughly 3 fps in Firefox (acceptable, but not what I was looking for) and in Chrome and Safari I'm getting roughly .6795 fps.
When running the Chrome Profiler, I get the following (relevant) output.
99.96% 99.96% (program)
0.03% 0.03% (garbage collector)
0.01% 0.01% script.js:5:nextSlide
I've never used the Profiler before but I believe this is showing me that my JS is not what's hitting the performance so hard.
I've published a test page that documents the performance differences that you can visit with Chrome and Firefox.
I've also discovered that this seems to be related to the images cycled. Cycling different, simpler images seems to work just fine in both Chrome and Firefox, despite the fact that Chrome is still a little more power hungry than Firefox.
Further proof of this conclusion (though the result is entirely unacceptable) is demonstrated here, after running the images through convert -compress JPEG -quality 1: they cycle much more efficiently, but of course the quality is terrible.
I have run these test pages in Chrome (16.0.912.63), Safari (5.1.2 (6534.52.7)), WebKit nightly (Version 5.1.2 (6534.52.7, r102985)), and Mobile Safari (latest as of 2011/12/28), and only Mobile Safari performs as well as Firefox. The desktop browsers were tested on a MacBook Pro.
2.7 GHz Intel Core i7
8 GB 1333 MHz DDR3
Interestingly, Mobile Safari on an iPad 2 performs as well as Firefox when rendering the test page. Though Mobile Safari is based on WebKit, in this instance it performs entirely differently.
Decreasing the setTimeout delay from 244 to 144 also doesn't seem to do anything. I arrived at 244 entirely arbitrarily, as it became clear early on that the display timing didn't correspond directly to the timeout value. This leads me to believe that I'm rendering the slide show as quickly as I can in each browser.
So my question is, how can I make this performant in WebKit?
You can debug the page performance in Chrome using the Timeline tab under the Chrome developer tools. The problem with your script is that your repaint cycle is simply too expensive, it currently takes 1.35s to repaint every frame.
The bad performance has nothing to do with the quality of the JPEG images (although the image quality also affects the page render time). The problem is that you are updating the z-index, which causes Chrome to repaint all images instead of just the next frame (you have an O(n) image slider!).
Browsers try to do the minimum possible work in response to a change; e.g. changing an element's color will cause a repaint of only that element.
Changing the element z-index property is basically the same as removing a node from the tree and adding another node to it. This will cause layout and repaint of the element, its children and possibly siblings. My guess is that in Chrome, the siblings are being repainted too, this explains the horrible performance.
A way to fix this problem is to update the opacity property instead of the z-index. Unlike the z-index, the opacity does not modify the DOM tree. It only tells the renderer to ignore that element; the element is still 'physically' present in the DOM. That means that only one element gets repainted, and not all the siblings and children.
These simple changes in your CSS should do the trick:
.making-of .slide#slide-top {
opacity: 1;
/* z-index: 5000; */
}
.making-of .slide {
position: fixed;
/* z-index: 4000; */
opacity: 0;
....
}
And this is the result: the repaint went from 1.35s to 1ms.
EDIT:
Here is a jsfiddle using the opacity solution, I also added CSS3 transitions (just for fun!)
http://jsfiddle.net/KN7Y5/3/
More info on how the browser rendering works:
http://www.html5rocks.com/en/tutorials/internals/howbrowserswork/
I took a look at the code on your site and found two things that are limiting the speed.
1) In the JavaScript, you have a timeout of approximately 1/4 second (244 milliseconds). This means that your best-case frame rate is about 4 FPS (frames per second). This can be fixed by simply reducing the delay to match the frame rate that you actually want. I see that your most recent edit addresses this point, but I didn't want to ignore it since it is ultimately critical to achieving the higher frame rates that you want.
2) You are using z-index to control which image is visible. In the general case, z-index handling allows for objects that have different sizes and positions to be ordered so that you can control which object is visible at locations where two or more objects overlap. In your case, all of the objects overlap perfectly, and the z-index approach works fine except for one major problem: browsers don't optimize z-index processing for this case and therefore they are actually processing every image on every frame. I verified this by creating a modified version of your demo which used twice as many images -- the FPS was reduced by nearly a factor of 2 (in other words, it took 4 times as long to display the entire set).
I hacked together an alternative approach that achieved a much higher FPS (60 or more) under both Chrome and Firefox. The gist of it was that I used the display property instead of manipulating z-index:
.making-of .slide#other {
display: none;
}
.making-of .slide#slide-top {
display: inline;
}
and the JavaScript:
function nextSlide() {
...
topSlide.id='other';
nextTopSlide.id='slide-top';
...
setTimeout(nextSlide, 1);
...
}
I made some changes in the HTML too, notably including id="other" in the tag for each image.
So why is WebKit so slow? As has been pointed out in other comments, the extra-poor performance that you are seeing on Webkit seems to be Mac specific. My best guess about this is that the Mac version of WebKit is not actually using the "turbo" version of libjpeg (despite the fact that it is listed in the credits). In your test, JPEG decompression could very well be the gating factor if it is actually decompressing every image on every frame (as is likely the case). Benchmarking of libjpeg-turbo has shown about a 5x improvement in decompression speed. This roughly matches the difference that you are seeing between Firefox and Chrome (3 FPS vs. 0.6795 FPS).
For more notes on libjpeg-turbo and how this hypothesis explains some of your other results, see my other answer.
The key, in my experience, is to keep as few images as possible in the DOM and in JavaScript arrays, so don't load all of them at once; keep it to a minimum. Also make sure you destroy already-used DOM elements as well as the JavaScript objects holding images, as a kind of manual garbage collection. This will improve performance.
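A minimal sketch of that clean-up idea (the element id and the frames array are hypothetical):
// Remove a slide that has already been shown and drop the reference to it.
var frames = [];                                 // imagine this holds preloaded Image objects
var currentIndex = 0;
var used = document.getElementById('slide-old'); // hypothetical id of the finished slide
if (used) {
  used.parentNode.removeChild(used);             // take it out of the DOM
}
frames[currentIndex] = null;                     // let the Image object be garbage collected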
Random guess: GPU acceleration. It is device-dependent, and there is a big race among browsers now.
You could try with a more recent Chrome like the canaries, http://tools.google.com/dlpage/chromesxs (it's 18.x now), just to get more data.
about:version in Chrome should give you the version of WebKit.
Also, have you tried existing slideshow solutions like http://jquery.malsup.com/cycle/ ? I wonder if playing with the z-index is the bottleneck here... maybe having only 1-2 images displayed (all the rest using display:none) would help. This is again a guess.
The best way to achieve better performance when it comes to graphics is to compress them as much as you can while keeping the quality you want.
If you are using Linux, I have used the JPEG compression tool at http://linuxpoison.blogspot.com/2011/01/utility-to-optimize-compress-jpeg-files.html before. It doesn't hurt quality as much as the ImageMagick example you gave.
Also http://trimage.org/ has JPG support, and would be my first recommendation!
If you are on Windows, maybe something like this:
http://www.trans4mind.com/personal_development/convertImage/index.html
I have not tested the Windows method, and I'm not even sure it supports batch processing.
Hope that helps!
P.S. For PNGs I sometimes use http://pmt.sourceforge.net/pngcrush/, with or without http://trimage.org/.
There has been some relatively recent work on the JPEG image compression library that is used in many applications including browsers such as Firefox and Chrome. This new library achieves a significant speed increase by using special media-processing instructions available in modern CPUs. It may simply be that your version of Chrome doesn't use the new library.
Your question requests a way to fix your images, but that shouldn't be necessary -- after all, some other browsers work fine. Therefore, the fix should be in the browser (and browsers are constantly being improved).
You said that you improved Chrome's speed by dramatically reducing the quality or complexity of your images. This could be explained by the fact that for areas of very low detail, the JPEG decompression algorithm can bypass a lot of the work that it would normally need to perform. If an 8x8 tile of pixels can be reduced to a single color, then decompression of that tile becomes a very simple matter.
This Wikipedia article provides some additional info and sources. It says that Chrome version 11 has the new library. You can enter "chrome://credits" in your location bar and see if it references "libjpeg-turbo". "libjpeg" is the original library and "libjpeg-turbo" is the optimized version.
One other possibility is that libjpeg-turbo isn't supported in Webkit on the Mac (although I don't specifically know that). There is a hint as to why that might be the case posted here.
P.S. You may get better decompression speed by compressing with a different algorithm, such as PNG (although your compression ratios will likely suffer). On the other hand, maybe you should use HTML5 video, probably with the WebM format.
I tested it in Opera and it ran slow as hell. I noticed that Opera had queued 150+ images to download; it could be worth a try to download ~20 at a time, as in the sketch below.
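Something along these lines might work (the urls array and the batch size are assumptions, not code from the site):
// Load ~20 images at a time instead of queuing all 150+ at once.
var urls = []; // hypothetical list of image URLs
function loadBatch(start, size) {
  var batch = urls.slice(start, start + size);
  var done = 0;
  batch.forEach(function (url) {
    var img = new Image();
    img.onload = function () {
      done++;
      if (done === batch.length && start + size < urls.length) {
        loadBatch(start + size, size); // start the next batch once this one has finished
      }
    };
    img.src = url;
  });
}
loadBatch(0, 20);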
An alternative approach would be to render this content as a video - it is ideal for this kind of thing and can easily contain audio and subtitles. You can access each pixel from each frame using JavaScript if you want to get funky.
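For the pixel-access part, a minimal sketch (the element ids are hypothetical) that copies the current video frame onto a canvas and reads its pixels:
var video = document.getElementById('making-of-video');  // hypothetical <video> element
var canvas = document.getElementById('scratch-canvas');  // hypothetical <canvas> element
var ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // copy the current frame
var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data; // RGBA bytes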

Browser Repaint/Reflow performance: using CSS3 Gradients vs PNG Gradients

I am working on an app that causes lots of browser reflows, and performance is a key issue. From a performance point of view, is it better to use a CSS3 gradient or an image gradient for some DOM elements? Will a page that uses CSS text shadows and gradients have slower reflows than a page that uses images to achieve those visual effects?
Also, are there any reflow tests out there I can use?
For drawing, CSS gradients and shadows do task the CPU more than images. Performance used to be pretty bad; these days it is acceptable. If you have a ton of gradients/shadows, you should just implement them and do the tests in your real-world setting. If you just have a few, I wouldn't worry about it.
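For reference, the two approaches being compared look roughly like this (the class names and the gradient.png file are hypothetical, and older browsers may need additional prefixed gradient syntax):
/* CSS3 gradient: computed by the browser when the element is painted */
.panel {
  background: -webkit-linear-gradient(top, #ffffff, #dddddd);
  background: linear-gradient(to bottom, #ffffff, #dddddd);
}
/* Image gradient: decoded once, then simply tiled */
.panel-image {
  background: url('gradient.png') repeat-x;
}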
It depends a lot on how the browser renders it, but for the most part those things will render slower. In addition, you'll have a less pixel-perfect display in older browsers. However, this also serves to segment your audience, as generally those with updated browsers also have updated computers. So, it's a trade-off that can work in your favor to essentially serve a stripped down version of your site to those that wouldn't be able to handle it. It's not guaranteed, but I've found it usually balances out pretty well.
Overall, real-world testing is the way to go. Build it, see if it works, and fix performance issues once you find them. I wouldn't hesitate just because there's a chance it might not work. If it works just fine and you don't try it, you'll never know!

CSS - Optimizing rounded corners for speed

I'm trying to optimize my site for speed. I used images for the rounded corners before but now I've changed them with border-radius and -moz-border-radius css rules. Which way is the best for speed? I used to think that css rules are faster but I've seen a lot of sites talking about css sprites and I'm a bit confused now. Oh and I don't care about IE compatibility so you can suggest any method you want.
The speed goes like this: CSS > sprites > separate images.
Sprites are when, instead of having separate images for the corners, you use a single image and slice/position it with CSS. It's faster, because you only download one image. CSS is the fastest, because it doesn't need to download anything.
For those browsers that support radius CSS properties, use those. They are definitely faster, because no image needs to be loaded and they can be rendered by the browser's native engine.
For those (older) browsers that don't, apply an image-based workaround.
Don't worry too much about this stuff, though. The speed improvements reachable through optimizations in this area are very, very minuscule.
Both are exactly the same, except that because the CSS3 specification has yet to be finalized, Mozilla implemented border-radius with the -moz- vendor prefix. You'll need that, and the -webkit- version, for rounded corners to work in WebKit (Chrome, Safari) and Mozilla (Firefox) browsers.
As for speed, it is unclear whether you are talking about transfer or rendering speed. In either case I would suggest that the difference is negligible, and you should use all three for maximum browser support (minus IE, of course).
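The three rules mentioned look like this in practice (the class name and radius are arbitrary):
.rounded-box {
  -moz-border-radius: 8px;    /* Firefox */
  -webkit-border-radius: 8px; /* Safari, Chrome */
  border-radius: 8px;         /* the standard property */
}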
I would recommend CSS Sprites. This is a good tutorial: http://bavotasan.com/tutorials/simple-rounded-corners-with-a-css-sprite/

What is the best screen resolution to develop web pages?

I am creating web pages on a daily basis. My preferred development screen resolution is 1600x1200, but what is yours?
And do you use any other plug-ins?
I use Window Resizer 1.0 for Firefox, but are there better options?
In my opinion, if you're using a window resizer, you're already on top of the game. I try to aim for pages that work well on a 1024x768 screen, accounting for scrollbars and toolbars and whatnot. It may be worth resizing your screen to 1024x768 (or whatever the minimum is that you support) every once in a while just to fully understand that user experience, but in general the window resizer keeps you aware enough.
I disagree - fixed size layouts are just fine. In fact, Stackoverflow.com uses a fixed size layout, as do a great many professional sites out there. The reason? Predictability.
A few things:
Never have horizontal scroll bars
Try to avoid vertical scroll bars when it's reasonable to do so
Remember, AJAX and other newer technologies can help you save space on your page with popups and other niceties.
My 2 cents,
-Doug
It shouldn't matter: design your web pages to be flexible and fluid so that they degrade gracefully at any reasonable screen resolution. Cater both for mobile devices with very minimal screen space and for massive displays.
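A minimal sketch of a fluid container along those lines (the class name and widths are arbitrary):
.content {
  width: 90%;        /* scale with the window */
  max-width: 960px;  /* but don't stretch indefinitely on huge displays */
  margin: 0 auto;    /* keep it centred */
}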
I develop with a 2x1440x900 setup, but I leave Firefox as a window at 1024x768 using Web Developer Toolbar.
The dual monitor setup is really useful when you have the code on one screen and Firefox on the other.
I wouldn't go over 800x600. However, ideally your layout is not fixed to a screen size, and can resize and still look right.
I stick with 1024x768. It's usually big enough for what you need to display, and not everyone is quite at 1600x1200 yet. Maybe in a few years. I'd stick with a smaller display; that way it may force you to be more design conscious.
Two screens are invaluable regardless of screen size. One screen to run your editor, and one screen to run your browser. It's amazing how much smoother development becomes.
With my stats showing 1024x768 as my users' dominant resolution, I certainly wouldn't go below that. Beyond that, I agree with, apparently, everyone else here that fixed size layouts are just a bad idea, and your design should adapt to render context.
For the love of Pete don't use pixel sized fonts. Use em or pt sizing instead.
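For example (the sizes are arbitrary):
body { font-size: 100%; } /* respect the user's default size */
h1 { font-size: 1.5em; }  /* scale relative to it */
p { font-size: 1em; }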
It all really depends on what kind of page you are designing. I would try to design with the ability for the page to scale in mind. There is nothing I hate more than having to zoom in on a page that was designed for 800x600 on a 1920x1200 display.
I think the best advice given here is just to try it at different resolutions instead of your native one, and try to make it look good at a variety of sizes.

Resources