HTTP request cost vs. page size cost? - performance

I know it's a good practice to minimize the number of requests each page needs. For example, combining javascript files and using css sprites will greatly reduce the number of requests needed to render your page.
Another approach I've seen is to keep javascript embedded in the page itself, especially for javascript specific to that page and not really shared across other pages.
But my question is this:
At what point does my JavaScript grow so large that it becomes more efficient to pull the script into a separate file and accept the additional request for that separate JS file?
In other words, how do I measure how many bytes equate to the cost of one request?
Since the separate file is cached after the first request, the only cost of subsequent calls to that same JS file is the cost of the request itself. Keeping the JS in the page, on the other hand, always incurs the cost of additional page size, but never the cost of an additional request.
Of course, I know several factors go into this: speed of the client, bandwidth speed, latency. But there has to be a turning point to where it makes more sense to do one over the other.
Or is bandwidth so cheap (in speed, not money) these days that it takes many more bytes than it used to in order to exceed the cost of a request? The trend seems to be that page size is becoming less of a factor, while the cost of a request has plateaued.
Thoughts?

If you just look at the numbers and assume an average round-trip time of 100 ms per request and an average connection speed of 5 Mbps, you arrive at a figure of about 62.5 KB that can be added to a page before breaking it out into a separate file becomes worthwhile. With gzip compression enabled on your server, the real amount of JavaScript you can add is larger still.
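For reference, the arithmetic behind that figure is just bandwidth multiplied by round-trip time (using the assumed numbers above):

```python
# Back-of-the-envelope break-even point between "one extra request"
# and "extra bytes inlined in the page" (assumed numbers from above).
rtt_seconds = 0.100          # assumed average round-trip time per request
bandwidth_bps = 5_000_000    # assumed 5 Mbps connection

# Bytes that can be transferred in the time one extra round trip would cost.
break_even_bytes = bandwidth_bps / 8 * rtt_seconds
print(break_even_bytes / 1000)  # -> 62.5 (KB)
```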
But, this number ignores a number of important considerations. For instance, if you move your JavaScript to a separate file, the user's browser can cache it more effectively such that a user that hits your page 100 times might only download the JavaScript file once. If you don't do this, and assuming that your webpage has any dynamic content whatsoever, then the same user would have to download the entire script every single time.
Another issue to consider is the maintainability of the page. As a general rule, the more JavaScript you add, the more difficult it becomes to maintain your page and make changes and updates without introducing bugs and other problems. So even if you don't have quite 62.5 KB of JavaScript and even if you don't care about the caching side of things, you have to ask yourself whether or not having a separate JavaScript file would improve maintainability and if so, whether it's worth sacrificing that maintainability for a slightly faster page load.
So there really isn't an exact answer here, but as a general rule I think that if the JavaScript is stuff that is truly intrinsic to the page (onclick handlers, effects/animations, other things that interface directly with elements on the page) then it belongs with the page. But if you have a bunch of other code that your handlers, effects, and other things use more like a library/helper utility, then that code can be moved to a separate file. Favor maintainability of your code over both page size and load times. That's my recommendation, anyways.

This is a huge topic - you are indirectly asking about many different aspects of web performance, so there are a few tricks, some of which wevals mentions.
From my own experience, I think it comes down partially to modularization and making tradeoffs. So for instance, it makes sense to pack together javascript that's common across your entire site. If you serve the assets from a CDN and set correct HTTP headers (Cache-Control, Etag, Expires), you can get a big performance boost.
It's true that you will incur the cost of the browser making a request and receiving a 304 Not Modified from the server, but that response at least will be fast to go across the wire. However, you will (typically) still incur the cost of the server processing your request and deciding that the asset is unchanged. This is where web proxies like Squid, Varnish and CDNs in general shine.
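For illustration, that revalidation round trip looks roughly like this (a sketch using Python's requests library; the asset URL is hypothetical):

```python
import requests

url = "https://cdn.example.com/static/site.js"  # hypothetical asset URL

# First request: the server returns the full body plus a validator (ETag).
first = requests.get(url)
etag = first.headers.get("ETag")

# Later request: send the validator back. If the asset is unchanged, the
# server answers "304 Not Modified" with no body, so only the round trip
# and the server-side check are paid, not the transfer.
headers = {"If-None-Match": etag} if etag else {}
revalidated = requests.get(url, headers=headers)
print(revalidated.status_code)  # 304 if unchanged, 200 with a full body otherwise
```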
On the topic of CDNs, especially with respect to JavaScript, it makes sense to pull libraries like jQuery from one of the public CDNs. For example, Google makes many of the most popular libraries available via its CDN, which is almost always going to be faster than serving them from your own server.
I also agree with wevals that page size is still very important, particularly for international sites. There are many countries where you get charged by how much data you download and so if your site is enormous there's a real benefit to your visitors when you serve them small pages.
But, to really boil it down, I wouldn't worry too much about "byte cost of request" vs "total download size in bytes" - you'd have to be running a really high-traffic website to worry about that stuff. And it's usually not an issue anyway since, once you get to a certain level, you really can't sustain any amount of traffic without a CDN or other caching layer in front of you.
It's funny, but I've noticed that with a lot of performance issues, if you design your code in a sensible and modular way, you will tend to find the natural separations more easily. So, bundle together things that make sense and keep one-offs by themselves as you write.
Hope this helps.

With the correct headers set (far-future cache headers; see [1] below), pulling the JS into a separate file is almost always the best bet, since subsequent requests for the page will not make any request or connection at all for the JS file.
The only exception to this rule is static websites, where it's safe to use a far-future header on the actual HTML page itself so that it can be cached indefinitely.
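As a minimal sketch of what a far-future header means in practice (using Python's built-in http.server; the one-year max-age is the usual convention, and you would rename/version the file to bust the cache):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class FarFutureHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Cache versioned static assets for a year; change the file name
        # (e.g. app.v2.js) whenever the contents change.
        if self.path.endswith((".js", ".css")):
            self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), FarFutureHandler).serve_forever()
```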
As for what byte size equates to the cost of an HTTP connection, this is hard to determine because of the variables you mentioned as well as many others. HTTP resource requests can be cached at nodes along the way to a user, they can be parallelized in a lot of situations, and a single connection can be reused for multiple requests (see [2]).
Page size is still extremely important on the web. Mobile browsers are becoming much more popular, and with them come flaky connections through mobile providers. Try to keep file sizes small.
1. http://developer.yahoo.com/performance/rules.html
2. http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Persistent_connections
Addition: it's worth noting that major page-size reductions can be achieved through minification and gzip, which are super simple to enable through good build tools and web servers respectively.
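As a rough illustration of what gzip alone buys you on repetitive text like JavaScript (the snippet is made up):

```python
import gzip

# A made-up, repetitive chunk of JavaScript standing in for a real bundle.
js_source = (
    "function handleClick(el) { el.classList.toggle('active'); }\n" * 200
).encode()

compressed = gzip.compress(js_source, compresslevel=6)
print(len(js_source), "bytes raw")
print(len(compressed), "bytes gzipped")  # a small fraction of the raw size
```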

Related

What is the relationship between request content size and request duration

At the company I work for, all our APIs send and expect requests/responses that follow the JSON:API standard, making the structure of the request/response content very regular.
Because of this regularity and the fact that we can have hundreds or thousands of records in one request, I think it would be fairly doable and worthwhile to start supporting compressed requests (every record would be something like < 50% of the size of its JSON:API counterpart).
To make a well-informed judgement about whether this would actually be worthwhile, I would have to know more about the relationship between request size and duration, but I cannot find any good resources on this. Anybody care to share their expertise/resources?
Bonus 1: If you were to have request performance issues, would you look at compression as a solution first, second, last?
Bonus 2: How does transmission overhead scale with size? (If I cut the size by 50%, by what percentage will the transmission overhead be cut?)
Request and response compression adds a time and CPU penalty on both the sender's and the receiver's side. The saving is in transmission time.
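For a concrete feel of both sides of that tradeoff, here is a sketch of a client gzip-compressing a JSON:API request body before sending it (Python with requests; the endpoint is hypothetical, and the server or a proxy in front of it must be set up to accept Content-Encoding: gzip):

```python
import gzip
import json
import time

import requests

# A made-up JSON:API payload with many small, regular records.
payload = {"data": [{"type": "records", "attributes": {"value": i}} for i in range(10_000)]}
raw = json.dumps(payload).encode()

start = time.perf_counter()
body = gzip.compress(raw)  # the CPU penalty, paid on the sender's side
elapsed = time.perf_counter() - start
print(f"raw={len(raw)} bytes, gzipped={len(body)} bytes, compression took {elapsed:.3f}s")

resp = requests.post(
    "https://api.example.com/records",  # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/vnd.api+json", "Content-Encoding": "gzip"},
)
```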
How the tradeoff weighs up depends a lot on the consumers of the API: when they make requests, how much they request, what is requested, where they are located, the type of device/OS and its capabilities, and so on.
If the data is static (e.g. a REST query apihost/resource/idxx returning a static resource), there are standard web approaches, like caching of static resources, that clients and proxies will be able to assist with.
If the data is dynamic, there are architectural patterns that could be used.
If the data is huge (e.g. big scientific data sets, video, etc.), you will almost always find it served statically, with a metadata service providing the dynamic layer. For example, MPEG-DASH or HLS is just a collection of files.
I would choose compression as a last option relative to the other architectural options.
There are also implementation optimizations that would precede compressing requests/responses. For example:
Are your services using all the resources at their disposal (cores, memory, I/O)?
Does the architecture allow scale-up and scale-out, and can the problem be handled effectively that way (remembering the client-side penalties of compression)?
Can you use queueing, caching or other mechanisms to make things appear faster?
If you have explored all of these, the answer is that your system is optimal, and you are looking at the most granular unit of service where data volume is an issue, then by all means go after compression. Keep in mind that you also need to budget compute resources for compression on the server side (for a fixed workload).
Your question #2 on transmission overhead vs. size is a question about bandwidth and latency. Bandwidth determines how much you can push through the pipe; latency governs the perceived response time. Whether the payload is 10 bytes or 10 MB, a client across the world going through multiple hops will see higher latency than a client one or two hops away, and that is bound by the round-trip time. So a solution may be to distribute your servers and place them closer to your clients around the world rather than compressing data. That is another reason why compression isn't the first thing to look at.
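A crude way to see this: transfer time ≈ round-trip latency + payload size / bandwidth, so halving the payload only shrinks the second term. A sketch with assumed numbers:

```python
def transfer_time(payload_bytes, rtt_seconds, bandwidth_bps):
    # Crude model: one round trip of latency plus serialization time.
    # Ignores TCP/TLS handshakes, slow start, and multiple round trips.
    return rtt_seconds + payload_bytes * 8 / bandwidth_bps

# Assumed numbers: 200 ms round trip to a far-away client, 20 Mbps link.
for size in (10, 100_000, 10_000_000):
    print(size, "bytes:", round(transfer_time(size, 0.200, 20_000_000), 3), "s")
# Small payloads are dominated by latency; only the large one gains much
# from cutting its size in half.
```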
Baseline your performance and benchmark your experiments for a representative user base.
I think what you are weighing here is the speed of your processor/CPU vs. the speed of your network connection.
The network connection can be impacted by things like distance, signal strength, DNS provider, etc.; whereas your computer hardware is only limited by how much power you've put into it.
I'd wager that compressing your data before sending would result in shorter response times, yes, but it's probably going to be a very small amount. If you are sending JSON, the text usually isn't all that large to begin with, so you would probably only see a change in performance at the millisecond level.
If that's what you are looking for, I'd go ahead and implement it, set some timing before and after, and check your results.

Batching requests over HTTP2

Is it possible to get better throughput from our servers if we make one large HTTP request, as opposed to multiple smaller HTTP requests over HTTP/2?
As per my understanding it should not produce any significant difference in performance, since with HTTP/2 we can have multiple requests multiplexed over one TCP connection.
Yes, at a network level one large request will be more efficient than multiple small requests. This is due to the overhead of making each network request.
This is also why concatenating CSS and JavaScript and spriting images were recommended under HTTP/1.1: the amount of data sent was the same, but the number of requests was considerably lower. In fact, due to the way compression like gzip works, the amount of data was often smaller when sent as fewer, larger files.
HTTP/2 was designed to make the cost of an HTTP request a lot smaller through the reuse of a single TCP connection using multiplexing. In theory this would allow us to give up concatenation and spriting. The reality has been a little less than perfect though, usually due to browser inefficiencies rather than HTTP/2's fault: the bottleneck has just moved, and we need to optimise browsers for the new world. So, for now, some level of concatenation and spriting is still recommended.
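For what it's worth, you can see single-connection multiplexing from the client side with something like this sketch (Python's httpx with its optional HTTP/2 support, installed via "pip install httpx[http2]"; the URLs are hypothetical):

```python
import httpx

# Twenty small hypothetical assets fetched over one HTTP/2 connection.
urls = [f"https://static.example.com/chunk-{i}.js" for i in range(20)]

with httpx.Client(http2=True) as client:
    for url in urls:
        resp = client.get(url)
        print(resp.http_version, resp.status_code, len(resp.content))
# These requests run sequentially but reuse a single connection; issuing them
# concurrently (e.g. with httpx.AsyncClient) is what exercises multiplexing fully.
```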
Getting back to your question: yes, batching should help at that network level, and in fact HTTP/1.1 and HTTP/2 may even be similar in performance if you do this.
However, beyond the network level you may discover other reasons not to bundle into fewer files. If you have one large JavaScript file, for example, then the browser must wait for all of it to be downloaded before it can be parsed, compiled and run. You may be better off getting smaller, more important JavaScript downloaded first. Similarly, with image spriting you may be waiting for the entire sprite file to download before a single image is displayed.
Then there are the caching implications. Changing a single line of JS or adding a single image to the sprite requires creating a whole new large file, meaning the old one cannot be reused and the whole thing needs to be downloaded again in its entirety.
Plus, large bundled files can be more complicated to implement and manage. They require a build step (maybe not a big deal, as many sites have one), and creating and managing image sprites through CSS is often more difficult.
Also, if you are using this as a reason to stick with HTTP/1.1, then you may be missing out on the other benefits of HTTP/2, including HPACK header compression and HTTP/2 push (though that has also proven trickier to get right than initially thought/hoped!).
It’s a fascinating topic that I’ve spent a lot of time on, and best advice (as always!) is to understand the technology and test, test, test!

Browser getting more responsive after a while on heavy web page

When we load in a very heavy web page with a huge html form and lots of event handler code on it, the page gets very laggy for some time, responding to any user interaction (like changing input values) with a 1-2 second delay.
Interestingly, after a while (depending on the size of the page and code to parse, but around 1-2 minutes) it gets as snappy as it normally is with average size web pages. We tried to use the profiler in the dev tools to see what could be running in the background but nothing surprising is happening.
No network traffic is taking place after the page load, neither is there any blocking code running and HTML parsing is long gone at the time according to the profiler.
My questions are:
do browsers do any kind of indexing on the DOM in the background to speed up queries of elements?
any other type of optimization like caching repeated function call results?
what causes it to "catch up" after a while?
Note: it is obvious that our frontend is quite outdated and inefficient but we'd like to squeeze out everything from it before a big rewrite.
Yes, modern browsers, or rather modern JavaScript runtimes, perform many optimisations during load and, more importantly, during the page lifecycle. One of them is lazy / just-in-time (JIT) compilation, which in general means that the runtime observes demanding or frequently executed patterns and translates them into a faster, "closer to the metal" format, often at the cost of higher memory consumption. An amusing fact is that such optimisations often make "seemingly ugly but predictable" code faster than well-thought-out, complex, "hand-crafted optimised" code.
But I'm not completely sure this is the main cause of the phenomenon you are describing. Initial slowness and unresponsiveness is more often caused by the battle between network requests, blocking code, HTML and CSS parsing, and CPU/GPU rendering, i.e. the wire/cache -> memory -> CPU/GPU loop, which is not that dependent on the JavaScript optimisations mentioned above.
Further reading:
http://creativejs.com/2013/06/the-race-for-speed-part-3-javascript-compiler-strategies/
https://developers.google.com/web/tools/chrome-devtools/profile/evaluate-performance/timeline-tool

What does Website Performance refer to exactly?

I was reading various articles about responsive web design and came across how important website performance is for a good user experience; however, I cannot understand what exactly is meant by website performance, other than a fast loading time.
With responsive design, being responsible means only loading resources that a particular device needs. You wouldn't necessarily send very large images to a small mobile device, for example, nor would you load heavy JavaScript for apps that don't apply on a particular device.
What Causes Poor Performance?
Most poor performance is our fault: the average page in 2012 weighs over a megabyte. Much of this weight comes from blocking assets like JavaScript and CSS that prevent the page from being displayed.
The average size of images on a web page is 788 KB. That's a lot to send down to mobile devices.
JavaScript, on average, is 211 KB per transfer. This comes from the libraries and code we choose to include from third-party networks. This cost is always transferred to our users. We need to stop building things for developer convenience and instead build them for user experience.
86% of responsive designs send the same assets to all devices.
http://www.lukew.com/ff/entry.asp?1684=
Website performance from a user's point of view generally means loading time + display time + fast response to user actions, but in fact it is much more complex. From a designer's point of view, you need to worry about a limited part of the problem: just try to make your designs less resource-consuming (data size, number of requests, CPU, memory, user actions).
There's a lot of knowledge out there - this article might be interesting for you:
https://developers.google.com/speed/docs/best-practices/rules_intro
Ilya Grigorik's talk "Breaking the 1000ms Time to Glass Mobile Barrier" explains that latency on mobile is a big issue. Every additional request has a dramatic impact compared to non-mobile.
On top of the loading time, I assume the lower CPU speed on mobile must also be considered. If the page feels sluggish due to heavy JavaScript use, the mobile user will not use your page. Also, a few pages I visited did not work on my Android tablet; since the market share of mobile is increasing, every page that does not take this into account will lose visitors.

Accelerated downloads with HTTP byte range headers

Has anybody got any experience of using HTTP byte ranges across multiple parallel requests to speed up downloads?
I have an app that needs to download fairly large images from a web service (1 MB+) and then send the modified files (resized and cropped) out to the browser. There are many of these images, so it is likely that caching will be ineffective, i.e. the cache may well be empty. In that case we are hit by fairly large latency while waiting for the image to download, 500 ms+, which is over 60% of our app's total response time.
I am wondering if I could speed up the download of these images by using a group of parallel HTTP Range requests, e.g. each thread downloads 100kb of data and the responses are concatenated back into a full file.
Does anybody out there have any experience of this sort of thing? Would the overhead of the extra downloads negate the speed increase, or might this technique actually work? The app is written in Ruby, but experiences/examples from any language would help.
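Roughly what I have in mind, sketched in Python rather than Ruby (the URL, chunk size, and worker count are made up):

```python
import concurrent.futures

import requests

URL = "https://images.example.com/large-photo.jpg"  # hypothetical image URL
CHUNK = 100 * 1024  # 100 KB per range request

def fetch_range(start, end):
    # Ask for one slice of the file; a server that honours ranges replies 206.
    resp = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
    resp.raise_for_status()
    return start, resp.content

total = int(requests.head(URL).headers["Content-Length"])
ranges = [(offset, min(offset + CHUNK, total) - 1) for offset in range(0, total, CHUNK)]

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(lambda r: fetch_range(*r), ranges))

# Reassemble the slices in order into the full file.
image_bytes = b"".join(content for _, content in sorted(parts))
```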
A few specifics about the setup:
There are no bandwidth or connection restrictions on the service (it's owned by my company)
It is difficult to pre-generate all the cropped and resized images, there are millions with lots of potential permutations
It is difficult to host the app on the same hardware as the image disk boxes (political!)
Thanks
I found your post by Googling to see if someone had already written a parallel analogue of wget that does this. It's definitely possible and would be helpful for very large files over a relatively high-latency link: I've gotten >10x improvements in speed with multiple parallel TCP connections.
That said, since your organization runs both the app and the web service, I'm guessing your link is high-bandwidth and low-latency, so I suspect this approach will not help you.
Since you're transferring large numbers of small files (by modern standards), I suspect you are actually getting burned by the connection setup more than by the transfer speeds. You can test this by loading a similar page full of tiny images. In your situation you may want to go serial rather than parallel: see if your HTTP client library has an option to use persistent HTTP connections, so that the three-way handshake is done only once per page or less instead of once per image.
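For example, something along these lines keeps one connection alive for a whole batch of images (a sketch with Python's requests; the URLs are hypothetical):

```python
import requests

# One Session reuses the underlying TCP (and TLS) connection between
# requests, so only the first image pays the connection-setup cost.
with requests.Session() as session:
    for image_id in range(1, 101):
        url = f"https://images.example.com/full/{image_id}.jpg"  # hypothetical
        resp = session.get(url, timeout=10)
        resp.raise_for_status()
        with open(f"image-{image_id}.jpg", "wb") as fh:
            fh.write(resp.content)
```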
If you end up getting really fanatical about TCP latency, it's also possible to cheat, as certain major web services like to.
(My own problem involves the other end of the TCP performance spectrum, where a long round-trip time is really starting to drag on my bandwidth for multi-TB file transfers, so if you do turn up a parallel HTTP library, I'd love to hear about it. The only tool I found, called "puf", parallelizes by files rather than byteranges. If the above doesn't help you and you really need a parallel transfer tool, likewise get in touch: I may have given up and written it by then.)
I've written the backend and services for the sort of place you're pulling images from. Every site is different so details based on what I did might not apply to what you're trying to do.
Here's my thoughts:
If you have a service agreement with the company you're pulling images from (which you should because you have a fairly high bandwidth need), then preprocess their image catalog and store the thumbnails locally, either as database blobs or as files on disk with a database containing the paths to the files.
Doesn't that service already have the images available as thumbnails? They're not going to send a full-sized image to someone's browser either... unless they're crazy or sadistic and their users are crazy and masochistic. We preprocessed our images into three or four different thumbnail sizes so it would have been trivial to supply what you're trying to do.
If your request is something they expect then they should have an API or at least some resources (programmers) who can help you access the images in the fastest way possible. They should actually have a dedicated host for that purpose.
As a photographer I also need to mention that there could be copyright and/or terms-of-service issues with what you're doing, so make sure you're above board by consulting a lawyer AND the site you're accessing. Don't assume everything is ok, KNOW it is. Copyright laws don't fit the general public's conception of what copyrights are, so involving a lawyer up front can be really educational, plus give you a good feeling you're on solid ground. If you've already talked with one then you know what I'm saying.
I would guess that using any P2P network would be useless, as there are more permutations than frequently used files.
Downloading a few parts of a file in parallel can give an improvement only on slow networks (slower than 4-10 Mbps).
To get any improvement from parallel downloads you need to ensure there is enough server power. From your current problem (waiting over 500 ms for a connection) I assume you already have a problem with your servers:
you should add/improve load balancing,
you should think about changing the server software for something with more performance.
And again, if 500 ms is 60% of the total response time then your servers are overloaded; if you think they are not, you should look for the bottleneck in connection/server performance.
