Understanding HTTPS connection setup overhead - ajax

I'm building a web-based chat app which will need to make an AJAX request for every message sent or received. I'd like the data to be encrypted and am leaning towards running AJAX (with long-polling) over HTTPS.
However, since the frequency of requests here is a lot higher than with basic web browsing, I'd like to get a better understanding of the overhead (network usage, time, server CPU, client CPU) in setting up the encrypted connection for each HTTPS request.
Aside from any general info/advice, I'm curious about:
As a very rough approximation, how much extra time does an HTTPS request take compared to HTTP? Assume content length of 1 byte and an average PC.
Will every AJAX request after the first have anything significant cached, allowing it to establish the connection quicker? If so, how much quicker?
Thank you in advance :-)

Everything in HTTPS is slower. Personal information shouldn't be cached, you have encryption on both ends, and an SSL handshake is relatively slow.
Long-polling will help. Long keep-alives are good. Enabling SSL sessions on your server will avoid a lot of the overhead as well.
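If you want to verify that session resumption is actually kicking in against your own server, here is a minimal sketch using Python's standard ssl module (the hostname is a placeholder); the second connection offers the saved session and reports whether the server accepted it:

```python
# A minimal check (Python standard-library ssl; the hostname is a placeholder)
# that your server actually resumes TLS sessions. Note that with TLS 1.3 the
# session ticket may only arrive after some application data is exchanged, so
# results can depend on server configuration.
import socket
import ssl

HOST = "chat.example.com"  # placeholder
ctx = ssl.create_default_context()

# First connection: full handshake; remember the negotiated session.
with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                     server_hostname=HOST) as s1:
    saved_session = s1.session

# Second connection: offer the saved session for an abbreviated handshake.
with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                     server_hostname=HOST, session=saved_session) as s2:
    print("session reused:", s2.session_reused)
```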
The real trick is going to be doing load-balancing or any sort of legitimate caching. Not sure how much that will come into play in your system, being a chat server, but it's something to consider.

You'll get more information from this article.
Most of the overhead is in the handshake (exchanging certificates, checking for their revocation, ...). Session resumption and the relatively recent False Start extension help in that respect.
In my experience, the worst-case scenario happens when using client-certificate authentication and advertising too many CAs (the CertificateRequest message sent by the server can even become too big); this is quite rare, since in practice, when you use client-certificate authentication, you would only accept client certificates from a limited number of CAs.
If you configure your server properly (for resources for which it's appropriate), you can also enable browser caching for resources served over HTTPS, using Cache-Control: public.
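In practice you would set that header in the web server configuration, but as a rough illustration, here is a minimal Python sketch (standard-library http.server; TLS wrapping omitted, port and max-age arbitrary) of marking responses cacheable with Cache-Control: public:

```python
# A rough illustration (Python standard-library http.server; TLS wrapping
# omitted, port and max-age are arbitrary) of adding Cache-Control: public so
# browsers will cache resources even when they are served over HTTPS. In a
# real deployment you would set this in the web server configuration instead.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CacheableHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Mark the response as publicly cacheable; tune max-age per resource.
        self.send_header("Cache-Control", "public, max-age=3600")
        super().end_headers()

HTTPServer(("", 8080), CacheableHandler).serve_forever()
```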


SSL performance implications [duplicate]

Possible Duplicate:
How much overhead does SSL impose?
I recently had a conversation with a developer who told me that having SSL implemented site-wide puts 300 times the load on the server. Is this really credible? I currently use SSL across all pages and we have several thousand users accessing the system daily without any noticeable lag. We are using an IIS 7 server.
His solution was to only use SSL on the login page to secure the transmission of the login credentials. Then redirect them back to HTTP...Is this good practice?
What's costly in HTTPS is the handshake, both in terms of CPU (the asymmetric cryptographic operations are more expensive) and network round trips (not just for the handshake itself, but also for checking the certificate revocation). After this, the encryption is done using symmetric cryptography, which shouldn't impose a big overhead on a modern CPU. There are ways to reduce the overhead due to the handshake (in particular, via session resumption, if supported and configured).
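To get a feel for that asymmetric-vs-symmetric gap on your own hardware, here is a rough, illustrative benchmark (it assumes the third-party cryptography package is installed); the absolute numbers will vary, but the ratio between one RSA private-key operation and bulk AES encryption is the interesting part:

```python
# A rough, illustrative benchmark (requires the third-party 'cryptography'
# package) contrasting one RSA-2048 private-key operation, which is the kind
# of work that dominates the handshake, with bulk AES encryption used after
# the handshake. Absolute numbers depend on the machine; the ratio is the
# point.
import os
import timeit
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
msg = os.urandom(32)

def rsa_private_op():
    key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

aes_key, nonce, block = os.urandom(32), os.urandom(16), os.urandom(16 * 1024)

def aes_encrypt_16kb():
    enc = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).encryptor()
    _ = enc.update(block) + enc.finalize()

print("RSA-2048 sign (per op) :", timeit.timeit(rsa_private_op, number=200) / 200)
print("AES-256, 16 KB (per op):", timeit.timeit(aes_encrypt_16kb, number=200) / 200)
```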
In a number of cases, it's useful to configure the static content to be cacheable on the client-side too (see Cache-Control: public). Some browsers don't cache HTTPS content by default.
Increasing the server's CPU load by a factor of 300 when using HTTPS sounds like something isn't configured appropriately.
His solution was to only use SSL on the login page to secure the transmission of the login credentials. Then redirect them back to HTTP... Is this good practice?
A number of sites do this (including Stack Overflow). It depends on how much security is required. If you do this, only the credentials will be secured. An attacker could intercept the cookie (or similar authentication token) passed over plain HTTP and use it to impersonate the authenticated user.
Great care needs to be taken when switching from HTTP to HTTPS or the other way around. For example, the authentication token coming from the login page should be considered compromised once it has been passed over plain HTTP. In particular, you can't assume that subsequent HTTPS requests that still use that authentication token come from the legitimate user (e.g. don't allow it to edit 'My Account' details, or anything similar).
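One common way to keep the high-value token off plain HTTP is a "two cookie" pattern; the sketch below (Python standard-library http.cookies, cookie names and values purely illustrative) marks the sensitive cookie Secure so the browser will never send it over plain HTTP:

```python
# A minimal sketch (Python standard-library http.cookies; the cookie names
# and values are purely illustrative) of the "two token" pattern: a low-value
# cookie that may travel over plain HTTP, and a separate Secure cookie that
# the browser only ever sends over HTTPS and which gates sensitive actions.
from http.cookies import SimpleCookie

cookies = SimpleCookie()

cookies["session"] = "browse-only-token"        # may leak over plain HTTP
cookies["session"]["httponly"] = True

cookies["secure_session"] = "high-value-token"  # never sent over plain HTTP
cookies["secure_session"]["secure"] = True
cookies["secure_session"]["httponly"] = True

print(cookies.output())  # emits the corresponding Set-Cookie headers
```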
He is making it up. Surely it occurred to you that 300 is a suspiciously round number? Ask him to prove it. Test and measure.
It certainly puts more load on the server, most of which can be offloaded to a hardware crypto accelerator or a front-end box if you really have a problem, but in my experience it is negligible. See here for more information.
His suggestion about reverting to HTTP after the login only makes sense if the login page is the only page in the site that you want transport security for. This is unlikely to be the case.
Frankly he doesn't appear to know much about any of this.
I did a large experiment about 15 years ago which showed that over the Internet the time overhead of SSL is about 30%.
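On the "test and measure" point, here is a quick-and-dirty sketch (third-party requests library; the URLs are placeholders) that times a cold request and a few warm requests over a reused connection, for both schemes:

```python
# A quick-and-dirty way to test and measure rather than guess (third-party
# 'requests' library; the URLs are placeholders): time one cold request and a
# few warm requests over a reused connection, for HTTP and HTTPS.
import time
import requests

def time_requests(url, n=5):
    timings = []
    with requests.Session() as s:          # reuses the TCP/TLS connection
        for _ in range(n):
            start = time.perf_counter()
            s.get(url)
            timings.append(time.perf_counter() - start)
    return timings

for url in ("http://www.example.com/", "https://www.example.com/"):
    cold, *warm = time_requests(url)
    print(f"{url}  cold: {cold:.3f}s  warm avg: {sum(warm) / len(warm):.3f}s")
```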

What are the most widespread low bandwidth alternatives to HTTPS?

I'm porting a web app to a mobile device and working with the major carriers to minimize our bandwidth use, but need to maintain security.
The SSL handshake overhead associated with HTTPS currently accounts for more than 50% of our bandwidth. Can someone recommend a lightweight, low bandwidth alternative to HTTPS?
The payload is HTTP/XML, but can be modified to any format. I'm using Ruby on Rails so something with a Ruby library is ideal.
It sounds like your connections are short-lived and your payloads small. Would it be possible to hold the connection open and send multiple "messages" through it? That way, as more responses are sent, the SSL overhead becomes a smaller portion of the cumulative data transfer, and you avoid repeating the handshake. HTTP has keep-alive capabilities; hopefully those can be applied in Ruby to an SSL connection.
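The same idea sketched in Python for illustration (third-party requests library; the URL is a placeholder; Ruby's Net::HTTP can keep connections open in a similar way): only the first message pays the TCP and TLS handshake, the rest reuse the open connection.

```python
# The same idea sketched in Python for illustration (third-party 'requests'
# library; the URL is a placeholder). Only the first message pays the TCP and
# TLS handshake; the following messages reuse the open connection, so the
# per-message overhead drops to the TLS record framing.
import requests

with requests.Session() as session:            # one TCP + TLS connection
    for i in range(10):
        payload = {"msg": f"message {i}"}      # illustrative payload
        session.post("https://api.example.com/messages", json=payload)
```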

Are there any downsides of running your full website in https

I have a website that makes heavy use of AJAX. There is an almost constant transfer of sensitive data.
Because of this I was thinking of running my full website in HTTPS, making it secure throughout your stay.
I was wondering if there are any downsides doing this. Performance is a huge issue for me, the faster the app runs the better. I can safely say that speed is a larger issue than the security.
On the security side, I already generate a new session id when sensitive data is transferred, so there is no real need to make it all HTTPS, but if there are no downsides, why not use it.
Can someone please explain to me what the downsides are of using https for everything.
Well, there is obviously the overhead of encrypting everything all the time. It's probably not a huge problem for the client (since it's only encrypting data for a single connection) but it can become a bottleneck on the server (since it has to encrypt everything for every connection).
You could implement an SSL proxy where you have a front-end web server that talks SSL to clients and then forwards requests to the "backend" webservers for real processing. The backend webservers would be firewalled and not use SSL.
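In production that front-end role is usually filled by something like nginx, HAProxy, stunnel or a hardware load balancer, but just to make the architecture concrete, here is a toy Python asyncio sketch (certificate paths, ports and backend address are placeholders) that terminates SSL and forwards plain traffic to the backend:

```python
# A toy illustration (Python asyncio/ssl; certificate paths, ports and the
# backend address are placeholders) of the architecture: the front end
# terminates SSL and forwards plain traffic to a firewalled backend. In
# production this role is usually filled by nginx, HAProxy, stunnel or a
# hardware load balancer.
import asyncio
import ssl

BACKEND = ("127.0.0.1", 8080)   # plain-HTTP backend, not reachable directly

async def pump(reader, writer):
    # Copy bytes in one direction until the peer closes.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
    await asyncio.gather(pump(client_reader, backend_writer),
                         pump(backend_reader, client_writer))

async def main():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")      # placeholder paths
    server = await asyncio.start_server(handle, "0.0.0.0", 8443, ssl=ctx)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```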

SSL Client Cert Verification optimisation

We currently have a group of web-services exposing interfaces to a variety of different client types and roles.
Background:
Authentication is handled through SSL Client Certificate Verification. This is currently being done in web-service code (not by the HTTP server). We don't want to use any scheme less secure than this. This post is not talking about Authorisation, only Authentication.
The web-services talk both SOAP and REST(JSON) and I'm definitely not interested in starting a discussion about the merits of either approach.
All operations exposed via the web-services are stateless.
My problem is that verifying the client certificate on each request is very heavyweight and easily dominates CPU time on the application server. I've already tried separating the Authentication & Application portions onto different physical servers to reduce load, but that doesn't improve dispatch speed overall - the request still takes a constant time to authenticate, no matter where that is done.
I'd like to try limiting the number of authentications by generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped (though still talking over SSL). I'd also like to time-limit the sessions, and make the processes as transparent as possible from a client perspective.
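To make the proposal concrete, here is a minimal sketch (Python standard library only; the secret, TTL and token layout are purely illustrative) of issuing and checking such a time-limited token after a successful certificate verification:

```python
# A minimal sketch of the proposed scheme (Python standard library only; the
# secret, TTL and token layout are purely illustrative): after one successful
# client-certificate verification, hand out an HMAC-signed, time-limited
# token; requests presenting a valid token skip the expensive certificate
# check (but still travel over SSL).
import hashlib
import hmac
import time

SECRET = b"server-side-secret"          # placeholder; never sent to clients
TTL = 15 * 60                           # session lifetime in seconds

def issue_token(cert_fingerprint: str) -> str:
    expires = str(int(time.time()) + TTL)
    body = f"{cert_fingerprint}|{expires}"
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def verify_token(token: str) -> bool:
    try:
        fingerprint, expires, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    body = f"{fingerprint}|{expires}"
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()
```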
My questions:
Is this still as secure? (and how can we optimise for security and pragmatism?)
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Given the above, should we continue to do Authentication in-application, or move to in-server?
generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped
Is this still as secure? (and how can we optimise for security and pragmatism?)
It's not quite as secure in theory, because the server can no longer prove to itself that there's not a man-in-the-middle.
When the client presents a client-side certificate, the server can trust it cryptographically. The client and server should be encrypting data (well, the session key) based on the client's key. Without a client-side cert, the server can only hope that the client has done a good job of validating the server's certificate (as perceived by the client) and, by doing so, eliminated the possibility of Mr. MitM.
An out-of-the-box Windows client trusts over 200 root CA certificates. In the absence of a client-side cert, the server ends up trusting them by extension.
Here's a nice writeup of what to look for in a packet capture to verify that a client cert is providing defense against MitM:
http://www.carbonwind.net/ISA/ACaseofMITM/ACaseofMITMpart3.htm
Explanation of this type of MitM.
http://www.networkworld.com/community/node/31124
This technique is actually used by some firewall appliances to perform deep inspection of the SSL traffic.
MitM used to seem like a big Mission Impossible-style production that took a lot to pull off. Really though it doesn't take any more than a compromised DNS resolver or router anywhere along the way. There are a lot of little Linksys and Netgear boxes out there in the world and probably two or three of them don't have the latest security updates.
In practice, this seems to be good enough for major financial institutions' sites, although recent evidence suggests that their risk assessment strategies are somewhat less than ideal.
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Just a client-side cookie, right? That seems to be a pretty standard part of every web app framework.
Given the above, should we continue to do Authentication in-application, or move to in-server?
Hardware crypto accelerators (either an SSL proxy front end or an accelerator card) can speed this stuff up dramatically.
Moving the cert validation into the HTTP server might help. You may be duplicating some of the crypto math anyway.
See if you would benefit from a cheaper algorithm or smaller key size on the client certs.
Once you validate a client cert, you could try caching a hash digest of it (or even the whole thing) for a short time. That might save you from having to repeat the signature validations all the way up the chain of trust on every hit.
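A sketch of that caching idea (Python standard library; the TTL is made up): key a small in-memory cache on the SHA-256 fingerprint of the DER-encoded certificate so repeat hits within the window skip the full chain and signature validation.

```python
# A sketch of short-lived caching of already-validated client certificates
# (Python standard library; the TTL is made up): key the cache on the SHA-256
# fingerprint of the DER certificate so repeat hits skip the full validation.
import hashlib
import time

_validated = {}          # fingerprint -> expiry timestamp
CACHE_TTL = 300          # seconds

def remember_validated(der_cert: bytes) -> None:
    fp = hashlib.sha256(der_cert).hexdigest()
    _validated[fp] = time.time() + CACHE_TTL

def is_recently_validated(der_cert: bytes) -> bool:
    fp = hashlib.sha256(der_cert).hexdigest()
    expiry = _validated.get(fp)
    return expiry is not None and expiry > time.time()
```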
How often do your clients transact? If the ones making up the bulk of your transactions are hitting you frequently, you may be able to convince them to combine multiple transactions in a single SSL negotiation/authentication. Look into setting the HTTP Keep-Alive header. They may be doing that already to some extent. Perhaps your app is doing client cert validation on every HTTP request/response, or just once at the beginning of each session?
Anyway, those are some ideas, best of luck!

Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections?

When dealing with mobile clients it is very common to have multisecond delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5ms. I am looking for a HTTP server, balancer or proxy server that supports the following:
1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
2. The proxy stops buffering the request when:
   - a size limit has been reached (say, 4 KB), or
   - the request has been received completely, headers and body.
3. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
4. The backend sends back the response. Again, the proxy starts buffering it immediately (up to a more generous size, say 64 KB).
5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy in a matter of milliseconds, and the backend process/thread is free to handle more requests. The backend connection is closed immediately.
6. The proxy sends the response back to the mobile client, as fast or as slow as the client can take it, without a connection to the backend tying up resources.
I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Is there one that is not a dinosaur like Squid and that supports a lean, single-process, asynchronous, event-based execution model?
(Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)
What about using both nginx and Squid (client — Squid — nginx — backend)? When returning data from a backend, Squid converts it from Transfer-Encoding: chunked to a regular stream with Content-Length set, so maybe it can normalize POSTs as well.
Nginx can do everything you want. The configuration parameters you are looking for are client_body_buffer_size (http://wiki.codemongers.com/NginxHttpCoreModule#client_body_buffer_size) and proxy_buffer_size (http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffer_size).
Fiddler, a free tool from Telerik, does at least some of the things you're looking for.
Specifically, go to Rules | Custom Rules... and you can add arbitrary Javascript code at all points during the connection. You could simulate some of the things you need with sleep() calls.
I'm not sure this method gives you the fine buffering control you want, however. Still, something might be better than nothing?
Squid 2.7 can support 1-3 with a patch:
http://www.squid-cache.org/Versions/v2/HEAD/changesets/12402.patch
I've tested this and found it to work well, with the proviso that it only buffers to memory, not disk (unless it swaps, of course, and you don't want this), so you need to run it on a box that's appropriately provisioned for your workload.
Chunked POSTs are a problem for most servers and intermediaries. Are you sure you need to support them? Usually clients should retry the request when they get a 411.
Unfortunately, I'm not aware of a ready-made solution for this. In the worst case scenario, consider developing it yourself, say, using Java NIO -- it shouldn't take more than a week.
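For what it's worth, the core buffering behaviour (points 1-6 above) can be sketched in a few dozen lines of Python asyncio rather than Java NIO; this toy version assumes the backend closes the connection after each response and handles neither chunked bodies nor pipelining:

```python
# A bare-bones sketch of points 1-6 in Python asyncio (rather than Java NIO;
# ports and the backend address are placeholders). It buffers the whole
# request in memory before touching the backend, relays it, buffers the whole
# response, closes the backend connection, and only then dribbles the response
# to the (possibly slow) client. It assumes the backend closes the connection
# after responding (Connection: close) and handles neither chunked bodies nor
# pipelining.
import asyncio

BACKEND = ("127.0.0.1", 8080)            # placeholder backend address

async def read_full_request(reader):
    # Buffer the headers, then as much body as Content-Length announces.
    head = await reader.readuntil(b"\r\n\r\n")
    length = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":", 1)[1].strip())
    body = await reader.readexactly(length) if length else b""
    return head + body

async def handle_client(client_reader, client_writer):
    try:
        request = await read_full_request(client_reader)

        # Only now open (and quickly release) the backend connection.
        backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
        backend_writer.write(request)
        await backend_writer.drain()
        response = await backend_reader.read(-1)   # buffer the full response
        backend_writer.close()
        await backend_writer.wait_closed()

        # Relay to the client at whatever pace it can handle.
        client_writer.write(response)
        await client_writer.drain()
    finally:
        client_writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```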
