SSL Client Cert Verification optimisation - performance

Background:
We currently have a group of web-services exposing interfaces to a variety of different client types and roles.
Authentication is handled through SSL Client Certificate Verification. This is currently being done in web-service code (not by the HTTP server). We don't want to use any scheme less secure than this. This post is not talking about Authorisation, only Authentication.
The web-services talk both SOAP and REST (JSON), and I'm definitely not interested in starting a discussion about the merits of either approach.
All operations exposed via the web-services are stateless.
My problem is that verifying the client certificate on each request is very heavyweight and easily dominates CPU time on the application server. I've already tried separating the Authentication & Application portions onto different physical servers to reduce load, but that doesn't improve dispatch speed overall - the request still takes a constant time to authenticate, no matter where that is done.
I'd like to try limiting the number of authentications by generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped (though still talking over SSL). I'd also like to time-limit the sessions and make the process as transparent as possible from a client perspective.
My questions:
Is this still as secure? (and how can we optimise for security and pragmatism?)
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Given the above, should we continue to do Authentication in-application, or move to in-server ?

generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped

Is this still as secure? (and how can we optimise for security and pragmatism?)
It's not quite as secure in theory, because the server can no longer prove to itself that there's no man-in-the-middle.
When the client presents a client-side certificate, the server can trust it cryptographically, because the client proves possession of the matching private key during the handshake. Without a client-side cert, the server can only hope that the client has done a good job of validating the server's certificate (as perceived by the client) and by doing so eliminated the possibility of Mr. MitM.
An out-of-the-box Windows client trusts over 200 root CA certificates. In the absence of a client-side cert, the server ends up trusting every one of those CAs by extension.
Here's a nice writeup of what to look for in a packet capture to verify that a client cert is providing defense against MitM:
http://www.carbonwind.net/ISA/ACaseofMITM/ACaseofMITMpart3.htm
Here's an explanation of this type of MitM:
http://www.networkworld.com/community/node/31124
This technique is actually used by some firewall appliances to perform deep inspection of SSL traffic.
MitM used to seem like a big Mission Impossible-style production that took a lot to pull off. Really, though, it doesn't take any more than a compromised DNS resolver or router anywhere along the way. There are a lot of little Linksys and Netgear boxes out there in the world, and probably two or three of them don't have the latest security updates.
In practice, this seems to be good enough for major financial institutions' sites, although recent evidence suggests that their risk assessment strategies are somewhat less than ideal.
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Just a client-side cookie, right? That seems to be a pretty standard part of every web app framework.
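If you want a concrete starting point, here's a minimal Sinatra-flavoured sketch of the scheme you describe. The SSL_CLIENT_CERT variable and verify_client_cert are assumptions, standing in for however your front end exposes the certificate and for your existing (expensive) verification code:

```ruby
require 'sinatra'
require 'securerandom'

SESSIONS = {}  # token => expiry; use a shared store across app servers in practice
TTL = 900      # time-limit sessions to 15 minutes

helpers do
  # Stand-in for your existing, expensive certificate verification.
  def verify_client_cert(pem)
    !pem.nil?
  end

  def session_valid?(token)
    expiry = token && SESSIONS[token]
    expiry && Time.now < expiry
  end
end

before do
  # A valid, unexpired session cookie skips certificate verification entirely.
  next if session_valid?(request.cookies['auth_token'])

  halt 401 unless verify_client_cert(request.env['SSL_CLIENT_CERT'])
  token = SecureRandom.hex(32)
  SESSIONS[token] = Time.now + TTL
  response.set_cookie('auth_token', value: token, secure: true, httponly: true)
end
```

The secure flag matters here: the cookie is now a bearer credential, so it must never travel over plain HTTP.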
Given the above, should we continue to do Authentication in-application, or move to in-server ?
Hardware crypto accelerators (either a SSL proxy front end or an accelerator card) can speed this stuff up dramatically.
Moving the cert validation into the HTTP server might help. You may be doing some duplication in the crypto math anyway.
See if you would benefit from a cheaper algorithm or smaller key size on the client certs.
Once you validate a client cert, you could try caching a hash digest of it (or even the whole thing) for a short time. That might save you from having to repeat the signature validations all the way up the chain of trust on every hit.
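For instance, something along these lines (full_chain_verify is hypothetical, standing in for your existing validation path):

```ruby
require 'openssl'

CERT_CACHE = {}  # SHA-256 digest of the DER-encoded cert => expiry Time
CACHE_TTL = 300  # seconds; tune to your risk tolerance

def cert_verified?(cert)
  key = OpenSSL::Digest::SHA256.hexdigest(cert.to_der)
  expiry = CERT_CACHE[key]
  return true if expiry && Time.now < expiry  # recently verified; skip the math

  return false unless full_chain_verify(cert) # hypothetical expensive path
  CERT_CACHE[key] = Time.now + CACHE_TTL
  true
end
```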
How often do your clients transact? If the ones making up the bulk of your transactions are hitting you frequently, you may be able to convince them to combine multiple transactions in a single SSL negotiation/authentication. Look into setting the HTTP Keep-Alive header; they may be doing that already to some extent. Is your app doing client cert validation on every HTTP request/response, or just once at the beginning of each session?
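As a quick illustration of connection reuse from a Ruby client (host and paths are placeholders): several requests share one TCP+TLS connection, so the client-certificate handshake happens once rather than per request.

```ruby
require 'net/http'

Net::HTTP.start('api.example.com', 443, use_ssl: true) do |http|
  %w[/op/one /op/two /op/three].each do |path|
    response = http.get(path)           # all three ride the same connection
    puts "#{path}: #{response.code}"
  end
end
```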
Anyway, those are some ideas, best of luck!

Related

how to reduce ssl time of website

I have an HTTPS website and I want to reduce the SSL time of this website. The SSL certificate has been installed on AWS ELB.
If I access the site from Netherlands, the SSL Time is high but if I access the same site from other countries then the SSL time is low. Why is that?
I am basically trying to minimize the time shown on this page:
http://tools.pingdom.com/fpt/#!/ed9oYJ/https://www.google.com/index.html
Many things influence the SSL time including:
Infrastructure (this won't affect just SSL but ALL network traffic):
Standard network issues (how far away your server is from the client, how fast the network in between is, etc.), since the SSL/TLS handshake takes several round trips. You have little control over these except changing hosting provider and/or using a CDN. AWS is, in my experience, fast, and you are only asking to improve SSL rather than general access times, so maybe skip this one for now.
Server response time. Is the server underpowered in CPU, RAM, or disk? Are you sharing this host? Again, a general issue, so maybe skip past this, but SSL/TLS does take some processing power, though with modern servers it is barely noticeable nowadays.
Server OS. Newer is better. So if you are running Red Hat Linux 4, for example, expect it to be considerably slower than the latest Red Hat Linux 7, which has an improved networking stack and newer versions of key software like OpenSSL.
SSL set up (run your site through https://www.ssllabs.com/ssltest and you should get a state of health of this):
Ciphers used. There are older, slower ciphers and newer, faster ones. This can get complicated really quickly, but generally you should be looking for ECDHE ciphers for most clients (and preferably ECDHE...GCM ones), and you should specify that server order be used, so that you, rather than the client, pick the cipher.
Certificate used. You'll want an RSA 2048 cert. Anything more is overkill and slow. Some sites (and some scanning tools) choose RSA 4096 certificates, but these do have a noticeable impact on speed with no real increase in security (at this time - that may change). There are newer ECDSA certs (usually shown as a 256 EC cert in the ssllabs report), but these faster ECDSA certs are not supplied by all CAs and are not universally supported by all clients, so visitors on older hardware and software may not be able to connect with them. Apache (and, very recently, Nginx from v1.11.0) supports dual certs to get the best of both worlds, at the expense of having two certs and some complexity in setting them up. See the key-generation sketch after this list.
Certificate chain. You'll want a short certificate chain (ideally 3 certs long: your server's cert, an intermediate, and the CA's root certificate). Your server should return everything but the last cert (which is already in the browser's certificate store). If any of the chain is missing, some browsers will attempt to look up the missing ones, but this takes time.
Reliable cert provider. As well as shorter cert chains and better OCSP responders, their intermediates are usually already cached in users' browsers, as they are likely to be used by other sites.
OCSP stapling saves a network trip to check the cert is valid, using OCSP or CRL. Turning it on won't make a difference for Chrome, as it mostly doesn't check for revocation (EV certificates do get checked). It can make a noticeable difference to IE, so it should be turned on if your server supports it, but do be aware of some implementation issues - particularly that nginx's first request after a restart always fails when OCSP stapling is turned on.
TLSv1.2 should be used, and possibly TLSv1.0 for older clients, but no SSLv2 or SSLv3. TLSv1.1 is kind of pointless (pretty much everyone that supports it also supports the newer and better TLSv1.2). TLSv1.3 is currently being worked on and has some good performance improvements, but it has not been fully standardised yet, as there are some known compatibility issues. Hopefully these will be resolved soon so it can be used. Note that PCI compliance (if taking credit cards on your site) demands TLSv1.2 or above on new sites, and on all sites by 30th June 2018.
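On the ECDSA point above: if you want to experiment, generating a P-256 key and a CSR for it is straightforward. A sketch using Ruby's OpenSSL bindings (assuming a reasonably recent openssl gem; the subject name is a placeholder):

```ruby
require 'openssl'

key = OpenSSL::PKey::EC.generate('prime256v1')  # the "256 EC" curve
csr = OpenSSL::X509::Request.new
csr.subject = OpenSSL::X509::Name.parse('/CN=www.example.com')
csr.public_key = key
csr.sign(key, OpenSSL::Digest.new('SHA256'))

File.write('server.key', key.to_pem)  # keep this private
File.write('server.csr', csr.to_pem)  # send this to the CA
```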
Repeat visits - while the above will help with the initial connection, most sites require several resources to be downloaded, and with a bad setup the client can have to go through the whole handshake each time (this should be obvious if you're seeing repeated SSL connection set-ups for each request when running things like webpagetest.org):
HTTP Keep-Alives should be turned on so the connection is not dropped after each HTTP Request (this should be the default for HTTP/1.1 implementations).
SSL session caching and tickets should be on, in my opinion. Some disagree for some fairly obscure security reasons that should be fixed in TLSv1.3, but for performance reasons they should be on (see the resumption check sketched after this list). Sites with highly sensitive information may choose the more complete security over performance, but in my opinion the security issues are quite complex to exploit, and the performance gain is noticeable.
HTTP/2 should be considered, as it only opens one connection (and hence only one SSL/TLS setup) and has other performance improvements.
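To check resumption from the client side without a full test suite, here's a small Ruby sketch (the host is a placeholder; this exercises TLSv1.2-style session reuse):

```ruby
require 'socket'
require 'openssl'

host = 'www.example.com'
ctx = OpenSSL::SSL::SSLContext.new
ctx.session_cache_mode = OpenSSL::SSL::SSLContext::SESSION_CACHE_CLIENT

first = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(host, 443), ctx)
first.hostname = host    # SNI
first.sync_close = true
first.connect
session = first.session  # the session (ID or ticket) handed out by the server
first.close

second = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(host, 443), ctx)
second.hostname = host
second.sync_close = true
second.session = session     # offer the old session on reconnect
second.connect
puts second.session_reused?  # true => abbreviated handshake; resumption works
second.close
```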
I would really need to know your site to see which of the above (if any) can be improved. If you're not willing to give that, then I suggest you run the ssllabs test and ask for help with anything it raises that you don't understand, as it can require a lot of detailed knowledge to interpret.
I run a personal blog explaining some of these concepts in more detail if that helps: https://www.tunetheweb.com/security/https/
You can try ECDSA certificates: https://scotthelme.co.uk/ecdsa-certificates/
But the cost of HTTPS is only visible on the first request: session tickets avoid that cost for all subsequent requests. Are they activated? (You can check with ssllabs.com.)
If you can, you should use SPDY or HTTP/2; it may improve the speed too.
ECDSA keys, SPDY, and HTTP/2 reduce the number of round trips necessary, so they should reduce the difference between the two locations.
You say that you're not using a CDN, but I believe you should be. Here's why:
Connecting via TLS/SSL involves handshaking a secure connection, and that requires extra communication between the client and server before any data can begin flowing. This link has a handy diagram of the SSL handshake, and this link explains the first few milliseconds of an HTTPS connection.
Jordan Sissel wrote about his experiences with SSL handshake latency:
I started investigating the latency differences for similar requests between HTTP and HTTPS.
...
It's all in the handshake.
...
The point is, no matter how fast your SSL accelerators (hardware loadbalancer, etc), if your SSL end points aren't near the user, then your first connect will be slow.
If you use a CDN, then the handshaking can be done between the client and the nearest edge location, dramatically improving the latency.

SSL performance implications [duplicate]

Possible Duplicate:
How much overhead does SSL impose?
I recently had a conversation with a developer who told me that having SSL implemented site-wide puts 300 times the load on the server. Is this really credible? I currently use SSL across all pages and we have several thousand users accessing the system daily without any noticeable lag. We are using an IIS 7 server.
His solution was to only use SSL on the login page to secure the transmission of the login credentials. Then redirect them back to HTTP...Is this good practice?
What's costly in HTTPS is the handshake, both in terms of CPU (the asymmetric cryptographic operations are more expensive) and network round trips (not just for the handshake itself, but also for checking the certificate revocation). After this, the encryption is done using symmetric cryptography, which shouldn't impose a big overhead on a modern CPU. There are ways to reduce the overhead due to the handshake (in particular, via session resumption, if supported and configured).
In a number of cases, it's useful to configure the static content to be cacheable on the client-side too (see Cache-Control: public). Some browsers don't cache HTTPS content by default.
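For example, the idea sketched in Ruby/Sinatra for brevity (route and file name are placeholders):

```ruby
require 'sinatra'

get '/assets/app.css' do
  cache_control :public, max_age: 86_400  # "Cache-Control: public, max-age=86400"
  send_file File.join(settings.public_folder, 'app.css')
end
```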
Increasing the server's CPU load by a factor of 300 when using HTTPS sounds like something isn't configured appropriately.
His solution was to only use SSL on the login page to secure the transmission of the login credentials. Then redirect them back to HTTP...Is this good practice?
A number of sites do this (including StackOverflow). It depends on how much security is required. If you do this, only the credentials will be secured. An attacker could eavesdrop on the cookie (or similar authentication token) passed in plain HTTP and use it to impersonate the authenticated user.
Great care needs to be taken when switching from HTTP to HTTPS or the other way around. For example, the authentication token coming from the login page should be considered as "compromised" once passed to plain HTTP. In particular, you can't assume that subsequent HTTPS requests that still use that authentication token come from the legitimate user (e.g. don't allow it to edit 'My Account' details, or anything similar).
He is making it up. Surely it occurred to you that 300 is a suspiciously round number? Ask him to prove it. Test and measure.
It certainly puts more load on the server, most of which can be offloaded to a hardware crypto accelerator or a front-end box if you really have a problem, but in my experience it is negligible. See here for more information.
His suggestion about reverting to HTTP after the login only makes sense if the login page is the only page in the site that you want transport security for. This is unlikely to be the case.
Frankly he doesn't appear to know much about any of this.
I did a large experiment about 15 years ago which showed that over the Internet the time overhead of SSL is about 30%.

Send password to website safely with Ruby

Minecraft uses a launcher to reduce theft of the game: anyone can download it without charge, but the user must provide credentials for a premium account to be able to update the game. I want to build a similar launcher (in Ruby) for a project, but I'm having trouble figuring out how to securely send the password over to an HTTP server (written with Sinatra, if it matters). Obviously, putting it as a parameter in a URL is a really bad idea.
Also, I've thought about somehow sending it using password fields, but I don't know how they work (I don't usually do HTTP stuff). This is still a possibility.
Shorter summary: In Ruby, I want to send sensitive info over an HTTP request to a Ruby/Sinatra server.
Thanks for reading this!
Using password fields is no help, not even when sent via POST. They are sent in plaintext, no matter how hard you try to hide them - that's the very nature of HTTP.
You should definitely use HTTPS (HTTP over TLS) instead. The stdlib gives you Net::HTTP for that, but you may use any HTTP client that supports HTTPS.
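A minimal sketch with the stdlib (the URL and form fields are placeholders for your Sinatra endpoint):

```ruby
require 'net/http'
require 'uri'

uri = URI('https://example.com/login')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true  # certificate verification is on by default

request = Net::HTTP::Post.new(uri.path)
request.set_form_data('username' => 'alice', 'password' => 'secret')

response = http.request(request)  # the body travels encrypted inside TLS
puts response.code
```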
If there is money/value involved in this scheme, don't settle for anything less! Inventing your own protocol is
way more work (I admit TLS is not always easy to set up, but still a lot less work)
not secure in 99.9% of the cases
completely broken in the rest of the cases
No, honestly, inventing secure protocols is probably one of the hardest jobs out there. So be lame and stick to the mainstream (https), it will pay off in the end.
Edit:
You asked whether TLS costs money because of the need for a certificate. That's only an issue on the server side: in one-way authenticated TLS, only the server needs to present a certificate, so clients connecting to that server will not have to buy one. If you operate the server too, however, then you will need such a certificate. If you don't want to spend the money, you may look into hosting that gives you HTTPS for free. Heroku offers such a free service that I know of, and I assume there are other providers as well.
As @Len said, use HTTPS, or, if that is not an option, encrypt just the password with:
At the very least, XOR encryption
Maybe DES
Or RSA (total overkill unless your game is worth the attention of serious attackers trying to break your encryption, as RSA is banking- and military-grade)
You would then distribute the public RSA key with your launcher, keep the private key on your server, and use the pair to encrypt the password (or to encrypt a symmetric encryption key).
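A sketch of that flow with Ruby's OpenSSL bindings (file names are placeholders; note this still leaves you open to replay attacks, which HTTPS would handle for you):

```ruby
require 'openssl'

# Client/launcher side: encrypt with the shipped public key.
public_key = OpenSSL::PKey::RSA.new(File.read('public.pem'))
ciphertext = public_key.public_encrypt(
  'the-password',
  OpenSSL::PKey::RSA::PKCS1_OAEP_PADDING  # prefer OAEP over PKCS1 v1.5 padding
)

# Server side: decrypt with the private key kept on the server.
# private_key = OpenSSL::PKey::RSA.new(File.read('private.pem'))
# password = private_key.private_decrypt(ciphertext,
#                                        OpenSSL::PKey::RSA::PKCS1_OAEP_PADDING)
```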

Preventing man in the middle attack while using https

I am writing a little app similar to Omegle. I have an HTTP server written in Java and a client which is an HTML document. The main means of communication is HTTP requests (long polling).
I've implemented some sort of security by using the HTTPS protocol, and I have a securityid for every client that connects to the server. When the client connects, the server gives it a securityid, which the client must send back with every request.
I am afraid of a man-in-the-middle attack here; do you have any suggestions for how I could protect the app from such an attack?
Note that this app is built for theoretical purposes; it won't ever be used in practice, so your solutions don't have to be practical.
HTTPS does not only do encryption, but also authentication of the server. When a client connects, the server shows it has a valid and trusted certificate for its domain. This certificate cannot simply be spoofed or replayed by a man-in-the-middle.
Simply enabling HTTPS is not good enough because the web brings too many complications.
For one thing, make sure you set the secure flag on the cookies, or else they can be stolen.
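Setting it is a one-liner in most frameworks; here's a minimal Rack sketch (the cookie name mirrors the question, the value is a placeholder):

```ruby
require 'rack'

response = Rack::Response.new
response.set_cookie('securityid',
                    value: 'abc123',
                    secure: true,    # the browser will only send it over HTTPS
                    httponly: true)  # and page JavaScript cannot read it
puts response.headers['Set-Cookie'] # => "securityid=abc123; secure; HttpOnly"
```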
It's also a good idea to ensure users only access the site by typing https://<yourdomain> in the address bar; this is the only way to ensure an HTTPS session is made with a valid certificate. When you type https://<yourdomain>, the browser will refuse to let you on the site unless the server provides a valid certificate for <yourdomain>.
If you just type <yourdomain> without https:// in front, the initial request goes out over plain HTTP, and the browser won't care what happens. This has two implications I can think of off the top of my head:
The attacker redirects to some Unicode domain with a similar name (i.e., it looks the same but is a different byte string and is thus a different domain), and then provides a valid certificate for that domain (since he owns it); the user probably wouldn't notice this...
The attacker could emulate the server, but without HTTPS. He would make his own secure connection to the real server and become a cleartext proxy between you and the server; he can now capture all your traffic and do anything he wants, because he owns your session.

How much data is leaked from SSL connection?

Say I was trying to access https://www.secretplace.com/really/really/secret.php, what's actually sent in plain text before the SSL session is established?
Does the browser intervene, see that I want HTTPS, initiate an SSL session with secretplace.com (i.e., without passing the path in plain text), and only after the SSL session is set up pass the path?
Just curious.
HTTP Secure
The level of protection depends on the correctness of the implementation of the web browser and the server software and the actual cryptographic algorithms supported.
Also, HTTPS is vulnerable when applied to publicly-available static content. The entire site can be indexed using a web crawler, and the URI of the encrypted resource can be inferred by knowing only the intercepted request/response size. This allows an attacker to have access to the plaintext (the publicly-available static content), and the encrypted text (the encrypted version of the static content), permitting a cryptographic attack.
Because SSL operates below HTTP and has no knowledge of higher-level protocols, SSL servers can only strictly present one certificate for a particular IP/port combination. This means that, in most cases, it is not feasible to use name-based virtual hosting with HTTPS. A solution called Server Name Indication (SNI) exists which sends the hostname to the server before encrypting the connection, although many older browsers don't support this extension. Support for SNI is available since Firefox 2, Opera 8, and Internet Explorer 7 on Windows Vista.
In general, the name of the server you are talking to is leaked ("stackoverflow.com"). This was probably leaked via DNS before SSL/TLS could begin connecting, though.
The server's certificate is leaked. Any client certificate you sent (not a common configuration) may or may not have been sent in-the-clear. An active attacker (man-in-the-middle) could probably just ask your browser for it and receive it anyway.
The path portion of the URL ("/questions/2146863/how-much-data-is-leaked-from-ssl-connection") should NOT be leaked. It is passed encrypted and secure (assuming the client and server are set up correctly and you didn't click-through any certificate errors).
The other poster is correct that there are possible traffic-analysis attacks which may be able to deduce some things about static content. If the site is very large and dynamic (say, stackoverflow.com), I suspect it would be quite difficult to get much useful information out of it. However, if there are only a few files with distinctive sizes, which files were downloaded could be obvious.
Form POST data should NOT be leaked. Although usual caveats apply if you are transmitting objects of known sizes.
Timing attacks may reveal some information. For example, an attacker could put stress on various parts of the application (e.g., a certain database table) or pre-load some static files from the disk and watch how your connection slows down or speeds up in response.
This is an information "leak" but probably not a big deal for most sites.
The connection is made by your browser to port 443 on the server, and that much (the host and port) is in the clear. Then the server and client will negotiate a ciphersuite to protect the data. Normally, this will include a symmetric encryption algorithm (e.g., 3DES, RC4, or AES) and a message authentication code (such as HMAC-SHA1) to detect tampering. Note that technically both of these are optional; it IS possible to have SSL with no encryption, but that's unlikely today.
Once the client (your browser) and the web server have agreed on a ciphersuite and keys are determined, the rest of the conversation is protected.
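You can watch the outcome of that negotiation from Ruby, too (the host is a placeholder):

```ruby
require 'socket'
require 'openssl'

host = 'www.example.com'
ssl = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(host, 443))
ssl.hostname = host    # SNI, so the right certificate comes back
ssl.sync_close = true
ssl.connect

puts ssl.ssl_version   # negotiated protocol, e.g. "TLSv1.2"
p ssl.cipher           # [cipher name, protocol, key bits, algorithm bits]
ssl.close
```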
To be honest, I would hook up a protocol analyzer and watch it all unfold before your eyes!!
Remember that SSL sits just above the transport layer of the TCP/IP stack, below the browser's data, so all of that data is protected.
Hope that helps.
