Performance difference: Kerberos versus NTLM

I understand that Kerberos has better performance than NTLM.
But does anyone have any figures or any experience of how much better it is?

Kerberos is better when it comes to performance, mainly because it is a lot less chatty than NTLM. For more details refer to:
http://technet.microsoft.com/en-us/magazine/ee914605.aspx

Kerberos performance and security is far better than NTLMv1 or NTLMv2.
It's not even up for debate.
Every third packet needs to be sent to the domain controller for challenge/response when using NTLM. That slows down your domain controllers and causes cascading performance issues for all the other services a DC performs.
NTLMv1 hashes can be cracked in seconds on modern hardware (they are always the same length and are not salted). NTLMv2 is a little better, but not by much (variable length and a salted hash).
Microsoft has been strongly advising everyone to switch to Kerberos and stop using NTLM wherever possible ever since Windows 2000 was released.


Does SSL affect performance nowadays?

I remember years ago, one of the reasons for not using SSL was that it used a lot of resources, so it affected the performance of applications.
Nowadays, with the current technologies, is this still a point to bear in mind?
This question arose as a workmate is concerned that using SSL will hinder the performance of his application.
Why? The idea is that there will be thousands of clients opening short-lived connections at some fixed interval (I think it's set to 1 minute). So he's concerned that the authentication process for all those clients is going to be very resource-intensive and will affect the performance of his application. The other alternative is to use a persistent connection so the authentication is done only once, but the CTO still hasn't decided which method we'll be using (the last word was temporary connections, hence this question).
The question is ill-formed. If you need security, you have little choice but to use SSL, and so comparing it to plaintext is completely pointless. If on the other hand you don't need security, you don't need SSL.
However I did an extensive experiment over the Internet some years ago, which showed that SSL was roughly 3x as slow as plaintext.
Over the last 4 years I have benchmarked iPhone AES encryption speeds increasing 13x. Speeds also depend on the data length, since there are two parts: the setup and the data encryption/decryption.
As usual benchmark your usage and judge if there is a performance issue.
As @EJP states, if you need security you need to use HTTPS (TLS) encryption.
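To put actual numbers behind "benchmark your usage", a minimal sketch along these lines can be used (Python, against a hypothetical www.example.com that serves the same page over both plain HTTP and HTTPS; the host, path and run count are placeholders). The absolute figures will vary widely with hardware, TLS version, network latency and keep-alive behaviour, so treat it as a template, not a verdict.

```python
import http.client
import time

# Hypothetical host serving the same page over both plain HTTP and HTTPS.
HOST, PATH, RUNS = "www.example.com", "/", 50

def average_request_time(conn_class, port):
    start = time.perf_counter()
    for _ in range(RUNS):
        # A fresh connection per request, so HTTPS pays the full handshake each time.
        conn = conn_class(HOST, port, timeout=10)
        conn.request("GET", PATH)
        conn.getresponse().read()
        conn.close()
    return (time.perf_counter() - start) / RUNS

plain = average_request_time(http.client.HTTPConnection, 80)
secure = average_request_time(http.client.HTTPSConnection, 443)
print(f"HTTP  average: {plain * 1000:.1f} ms per request")
print(f"HTTPS average: {secure * 1000:.1f} ms per request (~{secure / plain:.1f}x slower)")
```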

Verifying clients when using interprocess communication

I'm building an application that will provide a service to other applications (let's pretend like it solves differential equations). So my DifEq service will be running all the time and a client application can send it requests to solve DifEqs at any point.
This would be trivial using sockets or pipes.
The problem is some applications nefariously want to send linear equations instead of differential equations, so I want to register applications that I know are sending proper DifEqs to my application.
Traditional sockets break down here, as far as I know.
Ideally, I'd like to be able to look at some information about the application that is making a request of me and (either through some metadata on that application, through communication with my web site, or through some other, unknown method) determine that it is an acceptable DifEq app. Furthermore, this ideal method would not be spoofable without a root/admin-level compromise of the underlying OS. If the linear equation app is also a rootkit, I'll concede to being broken. :)
I need to be able to do this on Windows, OS X, and Linux (and maybe Android); but I recognize that it may not be the same solution on all platforms. So, how would you accomplish this (specify the platform you are focusing on, if appropriate)? I've done a lot of server-side development, but it's been way too many years since I've done any client-side development outside the browser and the world is very different today than it was then.
I think your question is a little confusing when it comes to talking about DifEQ vs LinearEQ.
It sounds to me like you are just looking for a routine way to verify that clients are authorized to connect. There is a lot to read on this subject. Common methods would be to use SSL certificates to verify the identity of clients. You can also tunnel over SSH, or use OAuth, etc.
You'll have to do some more digging around the web to see what kind of authentication fits your scenario. You mention 'not spoofable'. I think that people generally end up compiling a certificate or private key into their application. This will stop all but the very dedicated and experienced hackers.
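To make the "compiled-in certificate" idea concrete, here is a minimal mutual-TLS sketch using only Python's standard ssl module (the file names, address and port are placeholders, not anything from the question). The DifEq service only completes the handshake with clients presenting a certificate signed by a private CA you control; an unauthorized app is rejected before it can send a single equation, and defeating it requires extracting the embedded client key, i.e. roughly the root-level compromise conceded above.

```python
import socket
import ssl

# Placeholder paths: the server's own cert/key, plus the private CA that
# signed the certificates issued to trusted DifEq client applications.
SERVER_CERT, SERVER_KEY, CLIENT_CA = "server.pem", "server.key", "difeq_clients_ca.pem"

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=SERVER_CERT, keyfile=SERVER_KEY)
context.load_verify_locations(cafile=CLIENT_CA)
context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid certificate

with socket.create_server(("127.0.0.1", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            try:
                conn, addr = tls_listener.accept()  # handshake verifies the client cert
            except ssl.SSLError:
                continue  # unauthenticated client, dropped during the handshake
            with conn:
                print("Authenticated client:", conn.getpeercert().get("subject"))
                conn.sendall(b"ready for differential equations\n")
```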

Good practice or bad practice to force entire site to HTTPS?

I have a site that works very well when everything is in HTTPS (authentication, web services etc). If I mix http and https it requires more coding (cross domain problems).
I don't seem to see many web sites that are entirely in HTTPS so I was wondering if it was a bad idea to go about it this way?
Edit: Site is to be hosted on Azure cloud where Bandwidth and CPU usage could be an issue...
EDIT 10 years later: The correct answer is now to use https only.
You lose a lot of features with HTTPS (mainly related to performance):
Proxies cannot cache pages
You cannot use a reverse proxy for performance improvement
You cannot host multiple domains on the same IP address
Obviously, the encryption consumes CPU
Maybe that's no problem for you though, it really depends on the requirements
HTTPS decreases server throughput so may be a bad idea if your hardware can't cope with it. You might find this post useful. This paper (academic) also discusses the overhead of HTTPS.
If you have HTTP requests coming from an HTTPS page you'll force the user to confirm the loading of insecure data. Annoying on some websites I use.
This question and especially the answers are OBSOLETE. This question should be tagged: <meta name="robots" content="noindex"> so that it no longer appears in search results.
To make THIS answer relevant:
Google is now penalizing website search rankings when they fail to use TLS/https. You will ALSO be penalized in rankings for duplicate content, so be careful to serve a page EITHER as http OR https BUT NEVER BOTH (Or use accurate canonical tags!)
Google is also aggressively flagging insecure connections, which has a negative impact on conversions by frightening off would-be users.
This is in pursuit of a TLS-only web/internet, which is a GOOD thing. TLS is not just about keeping your passwords secure — it's about keeping your entire world-facing environment secure and authentic.
The "performance penalty" myth is really just based on antiquated obsolete technology. This is a comparison that shows TLS being faster than HTTP (however it should be noted that page is also a comparison of encrypted HTTP/2 HTTPS vs Plaintext HTTP/1.1).
It is fairly easy and free to implement using LetsEncrypt if you don't already have a certificate in place.
If you DO have a certificate, then batten down the hatches and use HTTPS everywhere.
TL;DR, here in 2019 it is ideal to use TLS site-wide, and advisable to use HTTP/2 as well.
</soapbox>
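If you do go all-HTTPS, the usual companion is a blanket permanent redirect on the plain-HTTP side, so every page lives at exactly one canonical URL (which also takes care of the duplicate-content penalty mentioned above). Below is a minimal sketch using only the Python standard library; a real deployment would normally do this in the web server or load balancer instead, and would send a Strict-Transport-Security header from the HTTPS side.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    """Answer every plain-HTTP request with a permanent redirect to HTTPS."""

    def do_GET(self):
        # Preserve the requested host and path so each URL has one HTTPS home.
        host = self.headers.get("Host", "www.example.com").split(":")[0]
        self.send_response(301)
        self.send_header("Location", f"https://{host}{self.path}")
        self.end_headers()

    do_HEAD = do_GET

if __name__ == "__main__":
    # The HTTPS side of the site would additionally send
    # Strict-Transport-Security so browsers stop asking for HTTP at all.
    HTTPServer(("", 80), RedirectToHTTPS).serve_forever()
```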
If you've no side effects then you are probably okay for now and might be happy not to create work where it is not needed.
However, there is little reason to encrypt all your traffic; login credentials and other sensitive data certainly need it, the rest arguably does not. One of the main things you would be losing out on is downstream caching: your servers, the intermediate ISPs and the users cannot cache HTTPS content. This may not be completely relevant, as it sounds like you are only providing services. Still, it completely depends on your setup, on whether there is any opportunity for caching, and on whether performance is an issue at all.
It is a good idea to use all-HTTPS - or at least provide knowledgeable users with the option for all-HTTPS.
If there are certain cases where HTTPS is completely useless and in those cases you find that performance is degraded, only then would you default to or permit non-HTTPS.
I hate running into pointlessly all-HTTPS sites that handle nothing that really requires encryption, mainly because they all seem to be 10x slower than every other site I visit. For example, most of the documentation pages on developer.mozilla.org force you to view them over HTTPS, for no reason whatsoever, and they always take a long time to load.

Will it ever be possible to run all web traffic via HTTPS?

I was considering what it would take (technologically) to move all web traffic to HTTPS. I thought that computers are getting faster and faster, so some time from now it will be possible to run all traffic via HTTPS without any noticeable cost.
But then again, I thought, encryption strength will have to evolve to counter the loss of security. If computers get 10x faster, encryption will have to be 10x stronger, or it will be 10x easier to break.
So, will we ever be able to encrypt all web traffic "for free"?
Edit: I'm asking only about the logic of performance increases in computing vs encryption. If we can use the same crypto algorithms and keys in 20 years, they will consume a far lower percentage of the overall computing capacity of a server (or client), and in effect, that will make it "free" to encrypt and sign everything that we transmit over networks.
One of the big issues with using HTTPS is that it's considered secure, so most web browsers don't do any caching, or at least do very limited caching.
Without the cache, you'll notice that HTTPS pages load significantly slower than a non-encrypted page would.
HTTPS should be used to protect sensitive information.
I have no idea about the CPU impact of running everything through SSL. I would say that on the client side, the CPU isn't an issue, since most workstations are running idle most of the time anyway. The big problem would be on the web server side, due to the sheer number of concurrent requests being handled.
In order to get to the point that SSL is basically 'free', you'd have to have dedicated hardware for encryption (which already exists today).
EDIT: Based on the comments, the question's author suggests this is the answer he was looking for:
"Using crypto is already pretty fast, particularly considering that we're using CPU cycles vs. data transmission. Crypto keys do not need to get longer. I don't think there's any technical reason why this is impractical." - David Thornley
UPDATE: I just read that Google's SPDY protocol (designed to replace HTTP) looks like it will use SSL on every connection. So, it looks like Google thinks that it's possible!
"To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken."
Chris Thompson mentions browser caching, but that's easily fixable in the browser. What isn't fixable on switching everything to HTTPS is proxy caching. Because HTTPS is encrypted end-to-end, transparent HTTP proxies don't work. There are a lot of places where transparent proxying can speed things up (for instance at NAT boundaries).
Dealing with the additional bandwidth from losing transparent proxying is probably doable - allegedly HTTP traffic is trivial compared with p2p anyway, so it's not as if transparent proxies are the only thing keeping the internet online. It will hit latency irrevocably, and make a slashdotting even worse than it is currently. But then with cloud hosting, both those might be dealt with by tech. Of course "secure server" takes on a different meaning with cloud hosting, or even with other forms of de-centralisation of content across the network like akamai.
I don't think the CPU overhead is that significant. Sure, if your server is currently CPU-bound at least some of the time, then switching all traffic from HTTP to HTTPS will kill it stone dead. Some servers may decide that HTTPS is not worth the monetary cost of a CPU that can handle the load, and that will prevent literally everyone from adopting it. But I doubt it will be a major barrier for long. For instance, Google has crossed it already and happily serves apps (although not searches) over HTTPS without fuss. And the more work servers are doing per connection, the less proportional extra work is required to SSL-secure that connection. SSL can be, and is, hardware accelerated where necessary.
There's also the management/economic problem that HTTPS relies on trusted CAs, and trusted CAs cost money. There are other ways to design a PKI than the one SSL actually uses, but there are reasons SSL works how it does. For example SSH places the responsibility on the user to obtain a key fingerprint from the server by a secure side-channel, and this is the result: some users don't think that level of inconvenience is justified by its security purpose. If users don't want security, then they won't get it unless it's impossible for them to avoid it.
If users just auto-click "accept" for untrusted SSL certificates, then you pretty much might as well not have it, since these days a man-in-the-middle attack is not significantly more difficult than plain eavesdropping. So, again, there's a significant block of servers which just aren't interested in paying for (working) HTTPS.
Encryption would not have to get 10x stronger in the sense that you would not need to use 10x more bits. The difficulty of brute force cracking increases exponentially with an increasing key length. At most key lengths would have to get slightly longer.
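A quick back-of-the-envelope check of that point (my sketch, not from the thread): offsetting a 10x faster attacker needs only about log2(10) ≈ 3.3 extra key bits, because every added bit doubles the brute-force search space.

```python
import math

# A machine that is 10x faster can try 10x more keys per second.
# Restoring the original brute-force cost needs a key space 10x larger,
# which takes only log2(10) extra bits -- not 10x more bits.
extra_bits = math.log2(10)
print(f"Extra key bits to offset a 10x speedup: {extra_bits:.2f}")  # ~3.32

# e.g. going from a 128-bit key to a 132-bit key already gives a 16x larger space
print(2 ** 132 // 2 ** 128)  # 16
```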
What would be the point of running all traffic through SSL, even stuff where there is obviously no advantage? This seems incredibly wasteful. For example, it seems ridiculous to download a Linux distro through SSL.
The cost isn't that great nowadays.
Also...having a computer that is 10x faster will in no way make it necessary to change encryption. AES (a common encryption for SSL) is strong enough that it would take a very very long time to break.
Will it be possible? YES
Will it be advisable? NO
For a few reasons:
Extra CPU cycles on the server and client would use more power, which incurs cost and emissions.
SSL certificates would be required for every server.
It's useless to encrypt data that doesn't need to be hidden.
IMO, the answer is no. The main reason is that many pages include items from multiple sources, each of which would have to use HTTPS and have a valid certificate; I don't think that would work for some of the big companies that would have to change all their links.
It isn't a bad idea and maybe some Web x.0 would have more secure communications by default, but I don't think http will be that protocol.
Just to give a couple of examples, though I am from Canada which may affect how these sites render:
www.msn.com :
atdmt.com
s-msn.com
live.com
www.cnn.com :
revsci.net
cnn.net
turner.com
dl-rms.com
Those were listed through "NoScript", which notes that this page has code from "google-analytics.com" and "quantserve.com" besides stackoverflow.com, as a third example of this.
A major difference with https is that a session is kept open until you close it. Saves a lot of hassle with session cookies but puts a load on the server.
How long should google keep the https session with you alive after you send a query?
Do you want a persistent connection to every popup ad?

SSL impact on web server

Many of us have web and application servers that use plain TCP.
Some of us have web and other servers that use a secure layer such as SSL.
My understanding of SSL is that the handshaking is very computationally intensive, and the encryption of an ongoing connection is (relatively) cheap.
My assumption, for you to correct: an average hosting box (and info on what counts as average at cloud hosting would be cool too) might easily be expected to saturate its network connections with AES-encrypted packets, but have difficulty doing a thousand RSA handshakes per second. Client authentication with certificates is substantially more expensive for the server than anonymous clients, too.
What kind of rules of thumb for the number of session setups per second for SSL are there?
Why not just measure? It will give you real numbers on the exact software and hardware that you are using. You'll also be able to measure the impact of changes in the server infrastructure (adding more boxes, SSL accelerators, tweaking parameters, what have you).
You are correct that you would be hard pressed to get to a thousand SSL handshakes per second on a single box. In fact, I'd say it's probably impossible. A few dozen per second, not a problem. A thousand, not without a lot of $$$.
It's also likely that you don't really need 1000 handshakes per second. That's quite a lot, and you'd already need quite a lot of traffic to need something like that: See this: What do I need in SSL TPS Performance?
Remember that normally you won't be doing new SSL handshakes all the time. Browsers do the handshake once, and keep the connection open over a number of requests and/or page views, so your needs for handshakes per second may be much lower than you think.
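For a rough client-side view of handshake cost, a sketch like the following can help (Python's standard ssl module; HOST and PORT are placeholders, and it should be pointed at a test server you control). It deliberately opens a brand-new connection each iteration, so every loop pays a full handshake with no session resumption or keep-alive; a single sequential client will understate what the server can handle in aggregate, so for capacity planning you would run several of these in parallel.

```python
import socket
import ssl
import time

HOST, PORT, HANDSHAKES = "test.example.com", 443, 100  # placeholders
context = ssl.create_default_context()

start = time.perf_counter()
for _ in range(HANDSHAKES):
    with socket.create_connection((HOST, PORT), timeout=5) as raw_sock:
        # wrap_socket performs the full TLS handshake before returning
        with context.wrap_socket(raw_sock, server_hostname=HOST):
            pass  # close immediately; only the handshake cost matters here
elapsed = time.perf_counter() - start
print(f"{HANDSHAKES / elapsed:.1f} handshakes/second (single client, sequential)")
```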
As Ville said, there is no real option other than to try it out on your configuration. But don't underestimate the symmetric encryption of data after establishing a link. It might be less expensive, but if you are going to download a lot of data over the encrypted channel then it might cost a lot more than the initial negotiation.
So for this you have to build a realistic scenario for the usage of your site and then stress test it.
