HTTP Keep-Alive benefits on an AJAX site

We have a reasonably heavy AJAX site at http://www.beckworthemporium.com/index.php?option=com_rsappt_pro2&view=booking_screen_gad&Itemid=58
Currently each page request triggers 5-6 AJAX requests to return the various pieces of the page, and these are fairly MySQL-intensive. We expect a steady increase in traffic in the run-up to Christmas. Would we see any benefit from using Keep-Alive?

How much traffic are you talking about? If you plan to use Keep-Alive, make sure you have enough memory and lower the Keep-Alive timeout as far as possible; otherwise, if you end up getting a lot of traffic, the idle connections could hurt you. Sites with very heavy traffic usually have Keep-Alive disabled.
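For illustration, the relevant Apache directives look like this (a sketch only; tune the values against your own memory and traffic profile):

    # httpd.conf sketch: keep connections open, but only briefly
    KeepAlive On
    # Reclaim idle connections quickly so they don't tie up workers/memory
    KeepAliveTimeout 2
    # Cap the requests served per connection (0 = unlimited)
    MaxKeepAliveRequests 100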
Also take a look at the article "HTTP keep-alive in the modern age".
I ran a report on your page at http://www.webpagetest.org/result/121008_TX_KB9/
As for your AJAX calls, I would improve those whether you use Keep-Alive or not. I would cache the responses. For example, after you run your MySQL queries and generate your output, cache it to disk for a few hours (or longer if possible); then on subsequent calls pull the data from disk if it has not expired. This will save a lot of database work and speed things up overall.
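A minimal sketch of that pattern in PHP (the function names here are hypothetical; a Joomla component would more likely use Joomla's built-in cache API):

    <?php
    // Disk-cache sketch: serve a cached copy if it is fresh,
    // otherwise regenerate and store it.
    function cached_response($key, $ttl, $generate)
    {
        $file = sys_get_temp_dir() . '/ajax_cache_' . md5($key);

        if (is_file($file) && (time() - filemtime($file)) < $ttl) {
            return file_get_contents($file);
        }

        $output = $generate();               // run the expensive MySQL queries
        file_put_contents($file, $output, LOCK_EX);
        return $output;
    }

    // Usage: wrap the expensive part of each AJAX handler.
    echo cached_response('booking_screen', 3 * 3600, function () {
        return build_booking_screen();       // hypothetical query/render step
    });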
Also, if you're concerned about speed, I would use image sprites for many of your image resources. I notice some of your images are placeholders and are 100% transparent; consider using CSS alone for those. This will reduce your overall request count dramatically.
I would also enable mod_expires and add some Expires headers. For an example .htaccess file using these and other good-practice features, look at:
https://github.com/h5bp/html5-boilerplate/blob/master/dist/.htaccess
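As a minimal illustration of the mod_expires part (the boilerplate file above is far more complete; mod_expires must be enabled):

    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png  "access plus 1 month"
        ExpiresByType image/jpeg "access plus 1 month"
        ExpiresByType text/css   "access plus 1 week"
        ExpiresByType application/javascript "access plus 1 week"
    </IfModule>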
EDIT
Jeepstone, I would recommend you don't enable Keep-Alive; instead, maybe use a CDN and parallelize your resources. You may also want to look at your database configuration. For example, MySQL's default max connections is low, and you might want to optimize slow queries and connection timeouts and ensure that you do not use any persistent connections.
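A sketch of the kind of MySQL settings meant here (illustrative values only; tune them against your own workload):

    # my.cnf sketch
    [mysqld]
    max_connections = 150   # the default is low; raise only as needed
    wait_timeout    = 60    # reclaim idle connections sooner
    slow_query_log  = 1     # log queries worth optimising
    long_query_time = 1     # anything slower than one second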
Also, if you're really concerned about the state of your web stack, you can test it right now instead of finding out about problems down the line when there are a lot more real customers knocking. I'm talking about stress/load testing.
Check out Best way to stress test a website
and http://blazemeter.com/
and maybe http://loadimpact.com/
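For a quick first measurement, ApacheBench (ab), which ships with Apache, can generate simple load from the command line (numbers and URL below are placeholders):

    # 1,000 requests, 50 concurrent, with keep-alive (-k)
    ab -n 1000 -c 50 -k "http://www.example.com/"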

There are two big issues on large sites: concurrent connections and latency.
Looking at your Christmas numbers, I don't think your site counts as large, so I would not expect a big benefit from using Keep-Alive. Enabling it might just increase the number of idle connections, which would not help.
I would go the route of load testing your website: find the biggest bottleneck, fix it, and iterate. You might find that in your case the database is the main culprit. There are free (or semi-free) load-testing tools out there: the Visual Studio Test Suite, for example, is free for MSDN subscribers for up to 250 virtual concurrent users, which is more than you need.

Related

CDN server with HTTP/1.1 vs. web server with HTTP/2

I have a hosted web server with HTTP/2 (medium fast), and additionally I have space on a fast CDN server that supports only HTTP/1.1.
Is it recommended to load some resources from the CDN, or should I use only the web server because of HTTP/2?
Could loading too many resources from the CDN become a bottleneck due to HTTP/1.1?
I'd be glad to get some hints.
You need to test. It really depends on your app, your users and your servers.
Under HTTP/1.1 you are limited to six connections to a domain. So hosting content on a separate domain (e.g. static.example.com) or loading it from a CDN was a way to raise that limit beyond six. These separate domains are also often cookie-less, which is good for performance and security. And finally, if you load jQuery from code.jquery.com, the user may already have downloaded it for another site, saving that download completely (though with the number of library versions and CDNs in use, the chance of a commonly used library already sitting in the browser cache is questionable, in my opinion).
However, a separate domain requires setting up a separate connection: a DNS lookup, a TCP connection, and usually an HTTPS handshake too. This all takes time, and especially if you are downloading just one asset (e.g. jQuery), those setup costs can eat up any benefit of hosting it on a separate site! This is in fact why browsers limit connections to six: there were diminishing returns in raising the limit beyond that. I've questioned the value of sharded domains for a while because of this; people shouldn't just assume they will be faster.
HTTP/2 aims to remove the need for separate (aka sharded) domains by allowing multiplexing over a single connection, effectively lifting the limit of six "connections" without the downsides of separate connections. It also allows HTTP header compression, reducing the performance cost of sending large cookies back and forth.
So in that sense I would recommend just serving everything from your local server. Not everyone will be on HTTP/2, of course, but support is incredibly strong, so most users should be.
However, the other benefit of a CDN is that it is usually globally distributed, so a user on the other side of the world can connect to a nearby CDN server rather than come all the way back to your server. This helps with connection time (as the TCP and HTTPS handshakes travel shorter distances), and content can also be cached there. Though if the CDN has to refer back to the origin server for a lot of content, there is still a lag (the TCP and HTTPS setup benefits remain, however).
So in that sense I would advise using a CDN. However, I would put all the content through the CDN rather than just some of it as you are suggesting; you are right that HTTP/1.1 could limit the usefulness of that. That is odd, though, as most commercial CDNs support HTTP/2, and you also say you have a "CDN server" (rather than a network of servers, plural), so perhaps you mean a static domain rather than a true CDN?
Either way, it all comes down to testing because, as stated at the beginning of this answer, it really depends on your app, your users and your servers; there is no one true, definite answer here.
Hopefully that gives you some idea of the things to consider. If you want to know more (because Stack Overflow really isn't the place for some of this, and this answer is already long enough), I've just written a book which spends large parts discussing all this: https://www.manning.com/books/http2-in-action

Is the improvement from switching on HTTP/2 in CloudFront for an SPA noticeable for the average user of a large site during bootstrap?

We have a large SPA in Backbone and Angular that calls out to a set of Java APIs for a financial system with a large number of users.
One person said:
Switching on HTTP/2 will make a massive difference for our users in terms of page-load time, due to the nature of the protocol.
Another person said:
Browsers like Chrome are actually pretty good even without HTTP/2. Switching it on won't make a noticeable difference to the end user.
We made the change and measured static page-load times before and after, with 48 hours of data on each side. We didn't see a difference (measured both by browser tests and by page-load timings reported from the browser to our application logs).
My question is: is the improvement from switching on HTTP/2 in CloudFront for an SPA noticeable for the average user of a large site during bootstrap?
Way too vague a question to answer I’m afraid.
Some of the things to consider:
Is your site so highly optimised with HTTP/1 workarounds (e.g. concatenation, spriting, sharding) that HTTP/2, which removes the need for those, provides no real noticeable performance benefit?
Is your site so full of crappy JavaScript that HTTP downloads (which HTTP/2 makes more efficient) are a tiny, almost unnoticeable part of the performance problem in the grand scheme of things?
Is your site bandwidth bound (e.g. full of print quality images) so that bandwidth rather than HTTP queuing is the problem?
Is your backend and/or web server so slow at generating pages that the HTTP transfer is again a tiny, almost unnoticeable part of the problem?
Is your site a really small site with just one HTML page and one JavaScript load?
Could your site be optimised further for HTTP/2 (e.g. hosting everything on a single domain, potentially using HTTP/2 push, etc.) so you get more out of HTTP/2 than you currently do?
All of these things affect whether switching to HTTP/2 makes a noticeable difference. Google found that a sample of sites got a 27%-60% performance improvement with SPDY (which HTTP/2 is based upon), but it really does depend on the site in question.
Ultimately, HTTP/2 aims to make downloading many assets more efficient, as this is inefficient under HTTP/1, particularly under high-latency conditions. If you don't have many assets, or downloading them is not the problem, then you may not notice much difference with HTTP/2.
I've written a blog post showing the problems in HTTP/1 that HTTP/2 looks to address (including an analysis of a real-world example, Amazon.com), which may help you examine your site for the same potential issues (full disclosure: it's part of a book I'm writing on the subject).

Minimising number of requests vs Browser Caching & Multiple domains

I have recently been working on improving the front end performance of our website and have been employing a number of best practices.
However, I recently hit an example where some of these practices are slightly at odds with each other:
Minimise HTTP requests
In order to "trick" the browser into making more concurrent requests have some assets served from a different domain
Leverage browser caching
Why?
We used to bundle almost all of our JavaScript into one file to minimise HTTP requests. This included jQuery and jQuery UI.
I thought this was silly, as many users are likely to have jQuery cached in their browser already, so I decided we should remove it from our all.js and instead serve it from Google's CDN. This saves those users from downloading the code again, and because it is on a different domain it can be downloaded in parallel with resources from our own domains.
(Waterfall graph omitted: it showed jQuery from Google's CDN downloading in parallel with assets from our own domain.)
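For reference, the resulting tag order looks roughly like this (the middle line is a common local-fallback safeguard in case the CDN is unreachable; paths and version number are illustrative):

    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
    <script>window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');</script>
    <script src="/js/all.js"></script>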
Of course, this has raised the number of requests for people without jQuery already cached, which isn't great.
So my question is this:
Is the change a sensible one? Do the benefits of leveraging caching and allowing concurrent requests outweigh a slight increase in the number of requests?
That is a very good question.
You have explained your reasoning well, and those are all good reasons for making this change.
But there are still benefits to both approaches.
Keeping everything combined in one file
Reduces the number of HTTP requests, which lessens the impact of round-trip latency on the user's connection.
All libraries/plugins are downloaded at once, and should remain cached for when they are later needed.
Reduces dependency on other services (although Google is going to be quite reliable).
Separate files spread across domains
Increases parallelisation of downloads, which reduces the negative effects of bandwidth shaping on the user's connection. (Note that most browsers no longer limit concurrent per-domain requests to 2.)
Increases granularity: separate parts can be downloaded on demand as needed, i.e. if a particular plugin is not needed on the first page hit, it isn't downloaded.
Personally, I'd normally lean a little towards the former (reducing HTTP requests by combining them into one big file), as most of my audience is likely to be on a fairly high-bandwidth connection and combining files reduces latency. Remember to use Google's and Yahoo!'s page-speed tools to find other ways of speeding things up.

What kind of performance gain will I get from ditching Apache for NGINX?

What kind of performance gain will I get from ditching Apache for NGINX if I have a very low-traffic web site (e.g. 1,000 unique visitors a day, approx. 5 requests/sec at peak load, and approx. 50 MB of traffic per day, since lots of photos are displayed)?
Specifically, what gains (if any) would I have for:
Loading speed of the web site from the web user perspective
Server load
Concurrency
Again, this is for a low traffic web site and I'm running on a VPS.
With such low traffic, I am not sure you need to go through the trouble of changing your web server: it looks like "premature optimisation" to me.
Well, at least as long as those 1,000 visitors don't visit too many pages and don't all arrive at exactly the same time.
You'd probably see far better gains for your users (and that's what matters!) by activating gzip compression for JS/CSS/HTML, and/or combining your JS/CSS files into one instead of several, for instance.
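For example, gzip compression in Apache takes only a few lines of mod_deflate configuration (a sketch; the module must be enabled):

    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/css application/javascript text/javascript
    </IfModule>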
On that note, running YSlow on your website and following some of the advice it gives will probably bring more speed to your users than changing servers.
Just to be clear: I'm not saying you shouldn't optimise your server, but with such low traffic it is probably more rewarding to make pages display faster first.
Is your Apache server using too much CPU or RAM? I switched from Apache to Nginx to save memory, especially for serving static files: I seem to be using about 75% less memory with Nginx.
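For what it's worth, a minimal Nginx server block for the static side looks like this (names and paths are placeholders):

    server {
        listen 80;
        server_name www.example.com;
        root /var/www/example;

        # Serve static assets directly with a long cache lifetime
        location ~* \.(png|jpe?g|gif|css|js)$ {
            expires 30d;
        }
    }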
As the other answers said, are you sure Apache is the bottleneck? If you are not swapping, you have enough memory. I don't think you will save any significant server-side latency.

HTTPS on Apache; Will it slow Apache?

Our company runs a website which currently supports only http traffic.
We plan to support HTTPS traffic too, as some of the customers who link to our pages want us to.
Our website gets a moderate amount of traffic, which is expected to increase over time.
So my question is this:
Is it a good idea to make our website HTTPS-only (redirect all HTTP traffic to HTTPS)?
Will this hurt the website's performance?
Has anyone done any sort of measurement?
PS: I am a developer who also doubles as the Apache admin.
Yes, it will impact performance, but it's usually not too bad compared to running all the DB queries that go into a typical dynamically generated page.
Of course, the real answer is: don't guess, benchmark it. Try it both ways and measure the difference. You can use tools like siege and ab to simulate traffic.
Also, I think you may have more luck with this question over at http://www.serverfault.com/
I wouldn't worry about the load on the server; unless you are serving high volumes of static content, the encryption itself won't create much of a burden, in my experience.
However, using SSL can dramatically slow down web sites by adding a lot more latency to connection setup.
An encrypted session requires about* three times as much time to set up as an unencrypted one, and the exact time depends on the latency.
Even on low-latency connections it is noticeable to the end user, but on higher-latency links (e.g. between continents, especially Australasia, where latency to America/Europe is quite high) it makes a dramatic difference and will severely impact the user experience.
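To put rough numbers on it: with a 150 ms round trip, a plain TCP connection costs one round trip (about 150 ms) before the first request can be sent, while a classic TLS handshake adds roughly two more round trips (about 450 ms in total), and that cost recurs for every new connection unless keep-alives and TLS session reuse are in place.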
There are things you can do to mitigate it, such as ensuring keep-alives are on (but don't turn them on without understanding exactly what the impact is), minimising the number of requests, and maximising use of the browser cache.
Using HTTPS also affects browser behaviour in some cases. Certain optimisations tend to get turned off for security reasons, and some web browsers don't store objects loaded over HTTPS in the disc cache, which means they'll need to get them again in a later session, further impacting the user experience.
* An estimate based on some informal measurement
"Is it a good idea to make our website https only? (redirect all http traffic to https) Will this bring down the website's performance?"
I'm not sure if you really mean all HTTP traffic or just page traffic. A lot of sites unnecessarily encrypt images, JavaScript and plenty of other content that doesn't need to be hidden. This kind of content makes up most of the data transferred in a request, so if you do find that HTTPS is taking too much out of the system, you can ask the programmers to separate the content that needs to be secured from the content that does not.
Most web servers, unless severely underpowered, don't use even a fraction of their CPU power serving content. Most production servers I've seen sit under 10% even when handling some SSL traffic. I would check where your current CPU usage is, then do some of your own benchmarking to see how much extra CPU an SSL request costs. My guess is: not much.
No, it is not a good idea to make any website HTTPS-only. Page-loading speed might be a little slower, because your server has to perform an unnecessary redirect for each HTTP page request. It is better to serve over HTTPS only the pages that may contain secure, personal or sensitive information of users or the organisation; whenever user information passes through a page, you can use HTTPS there too. Pages whose content can be shown to the whole world can normally use HTTP. Finally, it is up to your requirements: if all pages contain secure information, you may make the website HTTPS-only.
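For completeness, if you do decide to redirect everything, the Apache side is small (a mod_rewrite sketch; test it before deploying):

    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]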
