What does Website Performance refer to exactly?

I was reading various articles about Responsive Web Design and came across how important website performance is for a good user experience. However, I cannot understand what exactly is meant by website performance, apart from a fast loading time.

With responsive design, being responsible means only loading the resources that a particular device needs. You wouldn't necessarily send very large images to a small mobile device, for example, nor would you load heavy JavaScript for features that don't apply to a particular device.
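For example, a minimal sketch of loading a heavy script only where it applies; the breakpoint and file name below are illustrative assumptions, not something from the article:

    // Only fetch the heavy, desktop-only bundle on larger screens.
    if (window.matchMedia('(min-width: 1024px)').matches) {
      const script = document.createElement('script');
      script.src = '/js/desktop-enhancements.js'; // hypothetical desktop-only bundle
      script.async = true;
      document.head.appendChild(script);
    }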
What Causes Poor Performance?
Most poor performance is our fault: the average page in 2012 weighs over a megabyte. Much of this weight comes from blocking assets like JavaScript and CSS that prevent the page from being displayed.
The average weight of images on a web page is 788 KB. That's a lot to send down to mobile devices.
JavaScript, on average, accounts for 211 KB per transfer. This comes from the libraries and code we choose to include from third-party networks. This cost is always transferred to our users. We need to stop building things for developer convenience and instead build them for user experience.
86% of responsive designs send the same assets to all devices.
http://www.lukew.com/ff/entry.asp?1684=

Website performance, from a user's point of view, generally means loading time + display time + fast response to user actions, but in fact it is much more complex. From a designer's point of view you only need to worry about a limited part of the problem: just try to make your designs less resource-consuming (data size, number of requests, CPU, memory, user actions).
There's a lot of knowledge out there - this article might be interesting for you:
https://developers.google.com/speed/docs/best-practices/rules_intro

Ilya Grigorik's talk "Breaking the 1000ms Time to Glass Mobile Barrier" explains that latency on mobile is a big issue. Every additional request has a dramatic impact compared to non-mobile.
On top of the loading time, I assume that the slower CPU speed on mobile must be considered. If the page feels sluggish due to heavy JavaScript use, the mobile user will not use your page. Also, a few pages I visited did not work on my Android tablet; since the market share of mobile is increasing, every page that does not take this into account will lose visitors.

Related

What are the performance costs imposed on a browser to download images?

Images don't block initial render. I believe this to be mostly true. This means requesting/downloading the image from the network does not happen on the main thread. I'm assuming decoding/rasterizing an image also happens off the main thread for some browsers (but I might be wrong).
Often I've heard people simply say "just let the images download in the background". However, taking the next reasonable step with this information alone, images should have zero impact on the performance of your web app when looking at Time to Interactive or Time to First Meaningful Paint. From my experience, that isn't the case: performance improves by 2-4 seconds when lazy loading images (using IntersectionObserver) on an image-heavy page versus "just letting them download in the background".
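For reference, a minimal sketch of the lazy-loading approach described above, using IntersectionObserver; the selector and data-src attribute are illustrative assumptions, not from the original post:

    // Swap in the real source only when an image approaches the viewport.
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src;   // start the actual download now
        obs.unobserve(img);          // each image only needs to be handled once
      }
    }, { rootMargin: '200px' });     // begin loading slightly before the image scrolls in

    document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));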
What specific browser internals/steps to decode/paint an image cause performance regressions when loading a web page? Which steps take resources from the main thread?
That's a bit broad; there are many things that will affect the rest of the page, all depending on a lot of different factors.
Network requests are not handled on the main thread, so there should be little overhead here.
Parsing the metadata is implementation-dependent; it could happen in the same process or a dedicated one, but generally it's not a heavy call.
Decoding the image data and rasterizing it are implementation-dependent too: some browsers will do it directly when they get the data, while others will wait until it's really needed, though there are ways to force it to happen synchronously (on the same thread).
Painting the image may be hardware-accelerated (done on the GPU) or done in software (on the CPU); in the latter case it could impact overall performance, but modern renderers will probably skip images that are not in the current viewport.
And finally, having your <img> elements resized by their content will cause a complete reflow of your page. That's usually the most notable performance hit when loading images in a web page, so be sure to set the sizes of your images through CSS in order to prevent that reflow.
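A small sketch of those last two points; the sizes, selector and URL are illustrative assumptions. It reserves layout space up front so the arriving image cannot trigger a reflow, and uses img.decode() to keep decoding off the insertion path:

    const img = new Image();
    img.src = '/photos/hero.jpg';      // hypothetical URL
    img.style.width = '640px';         // reserving the box via CSS prevents a later reflow
    img.style.height = '480px';
    img.decode()                       // resolves once the bitmap has been decoded
      .catch(() => {})                 // decode() can reject (e.g. on a broken image); append regardless
      .then(() => document.querySelector('.hero').appendChild(img));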

Browser getting more responsive after a while on heavy web page

When we load a very heavy web page with a huge HTML form and lots of event handler code on it, the page gets very laggy for some time, responding to any user interaction (like changing input values) with a 1-2 second delay.
Interestingly, after a while (depending on the size of the page and the code to parse, but around 1-2 minutes) it gets as snappy as it normally is with average-sized web pages. We tried to use the profiler in the dev tools to see what could be running in the background, but nothing surprising is happening.
No network traffic takes place after the page load, there is no blocking code running, and HTML parsing is long finished by that point according to the profiler.
My questions are:
do browsers do any kind of indexing on the DOM in the background to speed up queries of elements?
any other type of optimization like caching repeated function call results?
what causes it to "catch up" after a while?
Note: it is obvious that our frontend is quite outdated and inefficient but we'd like to squeeze out everything from it before a big rewrite.
Yes, modern browsers, or rather modern JavaScript runtimes, perform many optimisations during load and, more importantly, during the page lifecycle. One of them is lazy / just-in-time (JIT) compilation, which in general means that the runtime observes demanding or frequently executed patterns and translates them into a faster, "closer to the metal" format, often at the cost of higher memory consumption. An amusing fact is that such optimisations often make "seemingly ugly but predictable" code faster than well-thought-out, complex, "hand-crafted optimised" code.
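As a toy illustration of that "predictable code wins" point (assuming an engine with inline caches, as V8 and similar runtimes use), a call site that always sees the same object shape can stay on the optimised fast path:

    function area(rect) {
      return rect.width * rect.height;
    }

    // Every call passes the same object shape, so the engine can compile a fast path.
    let total = 0;
    for (let i = 0; i < 1e6; i++) {
      total += area({ width: i % 10, height: 2 });
    }

    // Mixing shapes (e.g. sometimes passing { w, h } objects) would force the same
    // function back onto a slower, generic path.
    console.log(total);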
But I'm not completely sure this is the main cause of the phenomenon you are describing. Initial slowness and unresponsiveness are more often caused by the combined load of network requests, blocking code, HTML and CSS parsing and CPU/GPU rendering, i.e. the wire/cache -> memory -> CPU/GPU loop, which is not that dependent on the JavaScript optimisations mentioned before.
Further reading:
http://creativejs.com/2013/06/the-race-for-speed-part-3-javascript-compiler-strategies/
https://developers.google.com/web/tools/chrome-devtools/profile/evaluate-performance/timeline-tool

Speed test of a web page: how to get the theoretical loading speed

There are many tools online to measure the speed of a web page.
They provide data such as the loading time of a page.
This loading time depends on the number of files downloaded at the same time and the connection speed (and many other things such as the network state, the content providers, and so on).
However, because it is based on the speed of the connection, we don't have the theoretical loading time.
A browser downloads several resources at the same time, within a certain limit (often around five or six per host), so it is optimized to load resources faster.
If we could set the connection speed to a fixed amount, the loading time of a page would "never" change.
So does anyone know a tool which computes this theoretical loading time of a web page?
I'd like to get this kind of result:
Theoretical loading time: 56 * t
where t equals the amount of time needed to download 1 KB of data.
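For what it's worth, here is a minimal sketch of the kind of calculation the question seems to be after, under strong simplifying assumptions (a fixed per-kilobyte time t, at most five parallel downloads, and no latency, server time, parsing or rendering cost):

    // Greedy simulation: hand each resource to whichever connection frees up first.
    function theoreticalLoadTime(resourceSizesKb, maxParallel = 5, t = 1) {
      const connections = new Array(maxParallel).fill(0);
      for (const sizeKb of resourceSizesKb) {
        const earliest = connections.indexOf(Math.min(...connections));
        connections[earliest] += sizeKb * t;
      }
      return Math.max(...connections);
    }

    // e.g. a 20 KB HTML page plus five 10 KB assets, expressed in multiples of t:
    console.log(theoreticalLoadTime([20, 10, 10, 10, 10, 10])); // 20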
What do you mean by "theoretical loading time"? Such a formula would have a huge number of variables (bandwidth, round-trip time, processor speed, antenna wake-up time, load on the server, packet loss, whether TCP connections are already open, ...). Which ones would you put into your theoretical calculation?
And what is the problem you are trying to solve? If you just want a more objective measure of site speed, you can use a tester like http://www.webpagetest.org/ which allows you to choose the network speed and then run many tests to find the distribution of load times.
Note also that there is not even agreement on when a page is finished loading! Most people measure the time until the onload handler is called, but that can easily be gamed by reaching onload prematurely and then doing the actual loading of resources with JavaScript afterwards. Waiting until all resources are loaded could also be a bad measure, because many modern pages continually update themselves and load new resources.
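If all you need is a consistent, scriptable measurement of those milestones, a hedged sketch using the Navigation Timing API (whichever milestone you decide counts as "finished"):

    window.addEventListener('load', () => {
      // Defer one tick so loadEventEnd has been recorded by the time we read it.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType('navigation');
        console.log('Time to first byte :', nav.responseStart, 'ms');
        console.log('DOMContentLoaded   :', nav.domContentLoadedEventEnd, 'ms');
        console.log('load event finished:', nav.loadEventEnd, 'ms');
      }, 0);
    });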
You can use any of these three tools: SpeedPage, DevTools, or WebPageTest.

LAMP stack performance under heavy traffic loads

I know the title of my question is rather vague, so I'll try to clarify as much as I can. Please feel free to moderate this question to make it more useful for the community.
Given a standard LAMP stack with more or less default settings (a bit of tuning is allowed, client-side and server-side caching turned on), running on modern hardware (16 GB RAM, 8-core CPU, unlimited disk space, etc.), deploying a reasonably complicated CMS service (a Drupal or WordPress project, for argument's sake) - what amounts of traffic, SQL queries and user requests can I reasonably expect to accommodate before I have to start thinking about performance?
NOTE: I know that the specifics will greatly depend on the details of the project, i.e. optimizing MySQL queries, indexing stuff, minimizing filesystem hits - assuming the web developers did a professional job - I'm really looking for a very rough figure in terms of visits per day, traffic during peak visiting times, how many records before (transactional) MySQL fumbles, and so on.
I know the only way to really answer my question is to run load testing on a real project, and I'm concerned that my question may be treated as partly off-topic.
I would like to get a set of figures from people with first-hand experience, e.g. "we ran such and such a set-up and it handled at least this much load [problems started surfacing after such and such]". I'm also greatly interested in any condensed (I'm short on time at the moment) reading I can do to get a better understanding of the matter.
P.S. I'm meeting a client tomorrow to talk about his project, and I want to be prepared to reason about performance if his project turns out to be akin to Foursquare.
Very tricky to answer without specifics, as you have noted. If I were tasked with what you have to do, I would take each component in turn (network interface, CPU/memory, physical IO load, SMP locking, etc.), get the maximum capacity available, and divide by a rough estimate of use per request.
For example, network IO. You might have one 1 Gbit card, which might achieve maybe 100 MB/sec (I tend to use 80% of the theoretical max). How big will a typical 'hit' be? Perhaps 3 KB on average, for HTML, images, etc. That means you can achieve roughly 33k requests per second before you bottleneck at the physical level. These numbers are absolute maximums; depending on tools and skills you might not get anywhere near them, but nobody can exceed them.
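Spelled out, that back-of-the-envelope estimate looks like this (all numbers are the illustrative assumptions above, not measurements):

    const nicBytesPerSec = (1e9 / 8) * 0.8;       // 1 Gbit NIC at ~80% of theoretical max ≈ 100 MB/s
    const bytesPerHit = 3 * 1000;                 // assumed ~3 KB average response
    const maxRequestsPerSec = nicBytesPerSec / bytesPerHit;
    console.log(Math.round(maxRequestsPerSec));   // ≈ 33,333 requests/sec, an absolute ceiling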
Repeat the above for every component, perhaps varying your numbers a little, and you will build a quick picture of what is likely to be a concern. Then consider how you can quickly get more capacity in each component: can you just throw money at it and gain more performance (e.g. use SSDs instead of hard disks), or will you hit a limit that cannot be moved without rearchitecting? Also take into account what resources you have available: do you have lots of skilled programmer time, DBAs, or wads of cash? If you have plenty of a resource, you can tend to reduce those constraints more easily and quickly as you move along the experience curve.
Do not forget external components either: firewalls may have limits that are lower than expected for sustained traffic.
Sorry I cannot give you real numbers; our workloads use custom servers, heavy in-memory caching and other tricks, and don't use all the products you list. However, I would concentrate most on IO/SQL queries and possibly network IO, as these tend to be harder limits than CPU/memory, although I'm sure others will have a different opinion.
Obviously, the question is one that does not have a "proper" answer, but I'd like to close it and give some feedback. The client meeting has taken place; performance was indeed a big concern, and their hosting platform turned out to be on the Amazon cloud :)
From research I've done independently:
Memcache is a must;
MySQL (or whatever persistent storage you're running) is usually the first to go. Solutions include running multiple virtual instances and replicating data between them, distributing the load;
http://highscalability.com/ is a good read :)

HTTP request cost vs. page size cost?

I know it's good practice to minimize the number of requests each page needs. For example, combining JavaScript files and using CSS sprites will greatly reduce the number of requests needed to render your page.
Another approach I've seen is to keep JavaScript embedded in the page itself, especially for JavaScript specific to that page and not really shared across other pages.
But my question is this:
At what point does my JavaScript grow so large that it becomes more efficient to pull the script into a separate file and accept the additional request for that file?
In other words, how do I measure how many bytes equate to the cost of one request?
Since successive requests are served from cache, the only cost of referencing the same JS file is the cost of the request, whereas keeping the JS in the page will always incur the cost of additional page size, but not the cost of an additional request.
Of course, I know several factors go into this: the speed of the client, bandwidth, latency. But there has to be a turning point where it makes more sense to do one over the other.
Or is bandwidth so cheap (in terms of speed, not money) these days that it takes many more bytes than it used to in order to exceed the cost of a request? It seems to be the trend that page size is becoming less of a factor, while the cost of a request has plateaued.
Thoughts?
If you just look at the numbers and assume an average round-trip time of 100 ms per request and an average connection speed of 5 Mbps, you arrive at a figure saying that up to 62.5 KB can be added to a page before breaking it out into a separate file becomes worthwhile. Assuming that gzip compression is enabled on your server, the real amount of JavaScript you can add is even larger still.
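For the curious, the arithmetic behind that 62.5 KB figure (both inputs are the assumptions stated above):

    const rttSeconds = 0.100;                  // assumed round-trip cost of one extra request
    const bytesPerSec = 5e6 / 8;               // assumed 5 Mbps connection ≈ 625,000 bytes/s
    const breakEvenKb = (rttSeconds * bytesPerSec) / 1000;
    console.log(breakEvenKb);                  // 62.5 KB: below this, inlining avoids the request more cheaply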
But, this number ignores a number of important considerations. For instance, if you move your JavaScript to a separate file, the user's browser can cache it more effectively such that a user that hits your page 100 times might only download the JavaScript file once. If you don't do this, and assuming that your webpage has any dynamic content whatsoever, then the same user would have to download the entire script every single time.
Another issue to consider is the maintainability of the page. As a general rule, the more JavaScript you add, the more difficult it becomes to maintain your page and make changes and updates without introducing bugs and other problems. So even if you don't have quite 62.5 KB of JavaScript and even if you don't care about the caching side of things, you have to ask yourself whether or not having a separate JavaScript file would improve maintainability and if so, whether it's worth sacrificing that maintainability for a slightly faster page load.
So there really isn't an exact answer here, but as a general rule I think that if the JavaScript is stuff that is truly intrinsic to the page (onclick handlers, effects/animations, other things that interface directly with elements on the page) then it belongs with the page. But if you have a bunch of other code that your handlers, effects, and other things use more like a library/helper utility, then that code can be moved to a separate file. Favor maintainability of your code over both page size and load times. That's my recommendation, anyways.
This is a huge topic - you are indirectly asking about many different aspects of web performance, so there are a few tricks, some of which wevals mentions.
From my own experience, I think it comes down partially to modularization and making tradeoffs. So, for instance, it makes sense to pack together JavaScript that's common across your entire site. If you serve the assets from a CDN and set the correct HTTP headers (Cache-Control, ETag, Expires), you can get a big performance boost.
It's true that you will incur the cost of the browser making a request and receiving a 304 Not Modified from the server, but that response at least will be fast to go across the wire. However, you will (typically) still incur the cost of the server processing your request and deciding that the asset is unchanged. This is where web proxies like Squid, Varnish and CDNs in general shine.
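As a concrete but hedged illustration, assuming a Node/Express origin server (which this answer does not specify), far-future caching headers for versioned static assets might look like this:

    const express = require('express');
    const app = express();

    app.use('/static', express.static('public', {
      maxAge: '365d',     // Cache-Control: max-age of one year for fingerprinted files
      etag: true,         // allow cheap 304 Not Modified revalidations
      immutable: true     // hint that a versioned file will never change
    }));

    app.listen(3000);

A CDN or caching proxy in front of this would then answer most of those revalidations without ever touching the origin.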
On the topic of CDNs, especially with respect to JavaScript, it makes sense to pull libraries like jQuery from one of the public CDNs. For example, Google makes many of the most popular libraries available via its CDN, which is almost always going to be faster than serving them from your own server.
I also agree with wevals that page size is still very important, particularly for international sites. There are many countries where you get charged by how much data you download and so if your site is enormous there's a real benefit to your visitors when you serve them small pages.
But, to really boil it down, I wouldn't worry too much about "byte cost of request" vs "total download size in bytes" - you'd have to be running a really high-traffic website to worry about that stuff. And it's usually not an issue anyway since, once you get to a certain level, you really can't sustain any amount of traffic without a CDN or other caching layer in front of you.
It's funny, but I've noticed that with a lot of performance issues, if you design your code in a sensible and modular way, you will tend to find the natural separations more easily. So, bundle together things that make sense and keep one-offs by themselves as you write.
Hope this helps.
With the correct headers set (far-future Expires headers, see [1]), pulling the JS into a separate file is almost always the best bet, since all subsequent requests for the page will not make any request or connection at all for the JS file.
The only exception to this rule is for static websites where it's safe to use a far-future header on the actual HTML page itself, so that it can be cached indefinitely.
As for what byte size equates to the cost of an HTTP connection, this is hard to determine because of the variables you mentioned as well as many others. HTTP resource requests can be cached at nodes along the way to a user, they can be parallelized in a lot of situations, and a single connection can be reused for multiple requests (see [2]).
Page size is still extremely important on the web. Mobile browsers are becoming much more popular, and with them come flaky connections through mobile providers. Try to keep file sizes small.
[1] http://developer.yahoo.com/performance/rules.html
[2] http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Persistent_connections
Addition: it's worth noting that major page-size reductions can be achieved through minification and gzip, which are simple to enable through good build tools and web servers respectively.
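For instance, a hedged sketch assuming a Node/Express server (not something the answer specifies): gzip can be switched on with the compression middleware, while minification itself happens in the build step that produces the served files:

    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression());              // gzip/deflate compressible responses
    app.use(express.static('public'));   // serve the already-minified build output
    app.listen(3000);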
