Website Speed 'Connect Time' - performance

After noticing a drastically slow load time on one of my websites, I started running some tests on Pingdom - http://tools.pingdom.com/
I've been comparing 2 sites, and the drastic difference is the 'Connect' time. On the slower site it's around 2.5 seconds, whereas on my other sites it's down around 650ms. I suppose it's worth mentioning the slower site is hosted by a different company.
The only definition Pingdom offers is "The web browser is connecting to the server". I was hoping someone could elaborate on this a little for me, and point me in the direction of resolving it.
Thanks in advance

Every new TCP connection goes through a three-way handshake before the client can issue a request (e.g. a GET) to the web server.
The client sends a SYN to the server, the server responds with SYN-ACK, and the client responds with ACK and then sends the request.
How long this process takes is latency-bound: if the round-trip time to the server is 100ms, the full handshake takes 150ms, but since the client sends the request immediately after its ACK, you can treat the connect as costing one round-trip.
Congestion and other factors can also affect the TCP connect time.
Connect times should be in the milliseconds range not in the seconds range - my round-trip time from the UK to a server in NY is 100ms so that's roughly what I'd expect the TCP connect time to be if I was requesting something from a server there.
See @igrigorik's High Performance Browser Networking for a really in-depth discussion / explanation - http://chimera.labs.oreilly.com/books/1230000000545/ch02.html#TCP_HANDSHAKE
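One way to see the connect time in isolation is to time just the TCP handshake yourself. Below is a rough sketch using Python's standard library; the local listener only makes the snippet self-contained - to diagnose the slow site, point the call at that site's host and port 80 instead:

```python
import socket
import time

def measure_connect_ms(host, port, timeout=5.0):
    """Time socket.create_connection, i.e. the TCP three-way handshake."""
    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=timeout)
    elapsed_ms = (time.perf_counter() - start) * 1000
    sock.close()
    return elapsed_ms

# Demo against a local listener so the snippet runs anywhere;
# substitute the slow site's host name to measure the real thing.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0 = pick any free port
listener.listen(5)
port = listener.getsockname()[1]

ms = measure_connect_ms("127.0.0.1", port)
print(f"TCP connect took {ms:.2f} ms")
listener.close()
```

Over loopback this should be well under a millisecond; against a remote host it should roughly track your ping time, so a 2.5-second connect points at something other than plain latency (overloaded host, SYN retransmits, or packet loss).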

Related

HTTPS connection keep alive performance

Does anyone know what’s the difference, in milliseconds and percentage, between the total time it takes to make an HTTPS request that is allowed to use keep alive vs one that doesn’t? For the sake of this question, let’s assume a web server that has one GET endpoint called /time that simply returns the server’s local time, and that clients call this endpoint on average once a minute.
My guess is that, with the server on my home LAN, calling /time from my laptop on the LAN would take 200ms. With keep-alive it's probably going to be 150ms. So that's a 50ms difference and a 25% improvement.
My second question is similar, but only considers server processing time. Let’s say the server takes 100ms to process a GET /time request, but only 50ms to process the same with keep-alive. That’s 50ms faster, but a 50% performance gain, which is very meaningful as it increases the server’s capacity.
I think you have confused a lot of things here. The Keep-Alive header in HTTP tells the client that the server is willing to accept multiple requests over the same connection.
A connection is a concept from the underlying TCP protocol, and there is overhead (the three-way handshake) in establishing one. On the other hand, too many simultaneous connections hurt a server's performance. That's why these options exist.
HTTPS adds a security workflow on top of the HTTP protocol, and I suspect it bears no relevance in the context of your question whatsoever.
So if you are talking about a request a minute, there is no noticeable difference. The overhead of connection establishment is on the order of dozens of milliseconds, so you would only notice a difference starting at hundreds of requests a second.
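The difference is easy to measure directly. Here is a small sketch using only Python's standard library, with a throwaway local HTTP server standing in for the /time endpoint from the question (the handler and request count are arbitrary choices for illustration):

```python
import http.client
import http.server
import threading
import time

# Minimal local stand-in for the GET /time endpoint described above.
class TimeHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 enables keep-alive by default
    def do_GET(self):
        body = time.ctime().encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), TimeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
N = 30

# Fresh connection per request: pays the TCP handshake every time.
start = time.perf_counter()
for _ in range(N):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/time")
    conn.getresponse().read()
    conn.close()
fresh = time.perf_counter() - start

# One persistent (kept-alive) connection reused for every request.
start = time.perf_counter()
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(N):
    conn.request("GET", "/time")
    conn.getresponse().read()
conn.close()
reused = time.perf_counter() - start

print(f"fresh: {fresh*1000:.1f} ms, keep-alive: {reused*1000:.1f} ms")
server.shutdown()
```

On loopback the handshake is cheap, so the gap is small; over a real network link (and especially over HTTPS, where each new connection also pays the TLS handshake) the per-connection cost grows with latency.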
Here is my experiment. It's not HTTP, but it illustrates well the benefits of keeping the connection alive.
My setup is a network of servers that create their own secure connections.
I wrote a stress test that creates 100 threads on Server1. Each thread opens a TCP connection and establishes a secure channel with Server2. The thread on Server2 sends the numbers 1..1000, and the thread on Server1 simply reads them and sends "OK" to Server2. The TCP connections and secure channels are "kept alive".
First run:
100 threads are created on Server1
100 TCP connections are established between Server1 and Server2
100 threads are created on Server2 (to serve the Server1 requests)
100 secure channels are established, one per thread
total runtime : 10 seconds
Second run:
100 threads are created on Server1 (but those might have been reused by the JVM from the previous runs)
No new TCP connections are needed. The old ones are reused.
No threads are created on Server2. They are still waiting for requests.
No secure channels are established
total runtime : 1 second

Slow HTTP vs Web Sockets - Resource utilization

If a bunch of "Slow HTTP" connection to a server can consume so much resources so as to cause a denial of service, why wouldn't a bunch of web sockets to a server cause the same problem?
The accepted answer to a different SO question says that it is almost free to maintain an idle connection.
If it costs nothing to maintain an open TCP connection, why does a "Slow HTTP" cause denial of service?
A WebSocket and a "slow" HTTP connection both use an open connection. The difference is in expectations of the server design.
Typical HTTP servers do not need to handle a large number of open connections and are designed around the assumption that the number of open connections is small. If the server does not protect against slow clients, then an attacker can force a server designed around this assumption to hit a resource limit.
Here are a couple of examples showing how the different expectations can impact the design:
If you only have a few HTTP requests in flight at a time, then it's OK to use a thread per connection. This is not a good design for a WebSocket server.
The default file descriptor limits are often adequate for typical HTTP scenarios, but not for a large number of connections.
It is possible to design an HTTP server to handle a large number of open connections and several servers do so out of the box.
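The design difference can be sketched concretely. A thread-per-connection server pays an OS thread (and its stack, often around a megabyte of reserved address space) for every open socket, so many idle "slow" clients exhaust it; an event-loop server pays only a small task object per connection. Below is a toy illustration using Python's asyncio - the echo protocol and client count are arbitrary, the point is that all connections are served by a single thread:

```python
import asyncio

async def handle(reader, writer):
    # Echo until the client hangs up; an idle connection just sits in
    # this await, costing a task object rather than an OS thread.
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()
    writer.close()

async def main(n_clients=100):
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def client():
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(b"ping")
        await writer.drain()
        assert await reader.readexactly(4) == b"ping"
        writer.close()
        await writer.wait_closed()

    # All clients are in flight concurrently, on one event-loop thread.
    await asyncio.gather(*(client() for _ in range(n_clients)))
    server.close()
    await server.wait_closed()
    return n_clients

handled = asyncio.run(main())
print(f"served {handled} concurrent connections on one thread")
```

A thread-per-connection design would need 100 server threads for the same workload; scale the client count up to the thousands and the per-connection thread cost is exactly the resource an attacker with slow connections exploits.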

TCP connection time in Windows

I am doing some performance testing with a large number of threads. Each thread is sending HTTP requests to another IP. It looks like at some stages the connections are closed (because there are too many threads) and then of course have to be reopened.
I am looking to get some ballpark figures for how long it takes Windows to open TCP connections.
Is there any way I can get this?
Thanks.
This is highly dependent on the endpoints you're trying to connect to, is it not?
As an extreme best case, you can test it yourself by targeting an IIS on localhost.
I wouldn't be surprised if routers and servers that you are connecting through may drop connections as a measure against what could be perceived as connection storms or even denial-of-service attacks.
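Following the localhost suggestion above, a quick way to get a ballpark figure is to time many connects in a loop and take the median. A sketch (a local listener stands in for the endpoint; for realistic numbers, point it at the actual server you are testing, e.g. a local IIS):

```python
import socket
import statistics
import time

# Throwaway local listener; substitute the real host/port to measure
# connect cost to the server you actually care about.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(128)
HOST, PORT = listener.getsockname()

samples = []
for _ in range(50):
    start = time.perf_counter()
    sock = socket.create_connection((HOST, PORT))   # three-way handshake
    samples.append((time.perf_counter() - start) * 1000)
    sock.close()
listener.close()

print(f"median connect: {statistics.median(samples):.3f} ms, "
      f"max: {max(samples):.3f} ms")
```

The max is worth watching as well as the median: if intermediate devices are dropping SYNs under load, the retransmitted handshakes show up as multi-second outliers rather than a shifted average.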

Understanding HTTPS connection setup overhead

I'm building a web-based chat app which will need to make an AJAX request for every message sent or received. I'd like the data to be encrypted and am leaning towards running AJAX (with long-polling) over HTTPS.
However, since the frequency of requests here is a lot higher than with basic web browsing, I'd like to get a better understanding of the overhead (network usage, time, server CPU, client CPU) in setting up the encrypted connection for each HTTPS request.
Aside from any general info/advice, I'm curious about:
As a very rough approximation, how much extra time does an HTTPS request take compared to HTTP? Assume content length of 1 byte and an average PC.
Will every AJAX request after the first have anything significant cached, allowing it to establish the connection quicker? If so, how much quicker?
Thank you in advance :-)
Everything in HTTPS is slower. Personal information shouldn't be cached, you have encryption on both ends, and an SSL handshake is relatively slow.
Long-polling will help. Long keep-alives are good. Enabling SSL sessions on your server will avoid a lot of the overhead as well.
The real trick is going to be doing load-balancing or any sort of legitimate caching. Not sure how much that will come into play in your system, being a chat server, but it's something to consider.
You'll get more information from this article.
Most of the overhead is in the handshake (exchanging certificates, checking for their revocation, ...). Session resumption and the recent false start extension helps in that respect.
In my experience, the worst-case scenario happens when using client-certificate authentication and advertising too many CAs (the CertificateRequest message sent by the server can even become too big); this is quite rare since in practice, when you use client-certificate authentication, you would only accept client certificates from a limited number of CAs.
If you configure your server properly (for resources for which it's appropriate), you can also enable browser caching for resources served over HTTPS, using Cache-Control: public.
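The round-trip arithmetic behind these answers can be made explicit. A back-of-the-envelope sketch, assuming TLS 1.2 (2 round-trips for a full handshake, 1 for an abbreviated resumed handshake) and an illustrative 100ms RTT - real times also depend on CPU, certificate chain size, and OCSP checks:

```python
RTT_MS = 100  # assumed round-trip time to the server

tcp_connect = 1 * RTT_MS   # TCP three-way handshake: ~1 RTT before data flows
tls_full = 2 * RTT_MS      # full TLS 1.2 handshake: 2 additional round-trips
tls_resumed = 1 * RTT_MS   # abbreviated handshake via session resumption
request = 1 * RTT_MS       # the HTTP request/response exchange itself

cold_start = tcp_connect + tls_full + request   # brand-new HTTPS connection
resumed = tcp_connect + tls_resumed + request   # new connection, resumed session
kept_alive = request                            # reusing an open connection

print(cold_start, resumed, kept_alive)  # 400 300 100
```

This is why long keep-alives plus session resumption matter so much for a chatty app: the cold-start request costs 4x a kept-alive one before any server processing happens at all.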

What do the colored bars in the Firefox net panel represent?

In the Firefox developer tools, under the "Net" panel, resources that are loaded have their load time split into different colors/categories. These are:
DNS Lookup
Connecting
Blocking
Sending
Waiting
Receiving
What do each of these represent, and more specifically, does any of them accurately represent the amount of time that the server is thinking (accessing the database, running algorithms, etc)?
Thanks.
You couldn't accurately determine what the server is doing as such, I'm afraid.
You can discount most of them except Waiting, however, as the rest occur before and after the server handles your request. What it actually does while you wait will be a 'black box'.
There may be some asynchronous operations taking place during Sending and Receiving, so again it's hard to be accurate but you can get a ballpark figure of the time the server is working and the time the request spends travelling back and forth.
EDIT
Rough Definitions:
DNS Lookup: Translating the web address into a destination IP address by using a DNS server
Connecting: Establishing a connection with the web server
Blocking: Previously known as 'queueing', this is explained in more detail here
Sending: Sending your HTTP Request to the server
Waiting: Waiting for a response from the server - this is where it's probably doing all the work
Receiving: Getting the HTTP response back from the server
The Firebug wiki also explains these (see the Timeline section):
Blocking: Time spent in a browser queue waiting for a network connection (formerly called Queueing). For SSL connections this includes the SSL handshake and the OCSP validation step.
DNS Lookup: DNS resolution time
Connection: Elapsed time required to create a TCP connection
Waiting: Waiting for a response from the server
Receiving: Time required to read the entire response from the server (and/or time required to read from cache)
'DOMContentLoaded' (event): Point in time when the DOMContentLoaded event was fired, measured from the beginning of the request (can be negative if the request was started after the event)
'load' (event): Point in time when the page load event was fired, measured from the beginning of the request (can be negative if the request was started after the event)
There's a pretty good article with time charts and a protocol-level explanation of what's happening at each stage here. I found it pretty helpful, as it also visually demonstrates the impact of using persistent and parallel connections versus serial connections.
