What is the performance difference of PKI compared to symmetric encryption?

We have demanding security requirements on our project, and we need to do a lot of encryption that is highly performant.
I believe that PKI is much slower and more complex than symmetric encryption, but I can't find the numbers to back up my impression.

Yes, purely asymmetric encryption is much slower than symmetric ciphers (like DES or AES), which is why real applications use hybrid cryptography: the expensive public-key operations are performed only to encrypt (and exchange) a key for the symmetric algorithm that will encrypt the real message.
The problem that public-key cryptography solves is that there is no shared secret. With symmetric encryption you have to trust all involved parties to keep the key secret. That issue should be a much bigger concern than performance (which can be mitigated with a hybrid approach).
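The hybrid pattern can be sketched in a few lines. This is a deliberately insecure toy (textbook RSA with tiny hard-coded primes, a hand-rolled XOR stream cipher, and a hypothetical `xor_keystream` helper); it only illustrates the structure: the asymmetric operation touches just the short session key, while the symmetric cipher handles the bulk data.

```python
import hashlib
import secrets

# Textbook RSA keypair from tiny hard-coded primes (toy values, NOT secure).
p, q = 61, 53
n, e = p * q, 17                      # n = 3233
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def xor_keystream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR the data against a SHA-256-derived keystream.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# The expensive asymmetric operation encrypts only the short session key...
session_key = secrets.randbelow(n)              # toy symmetric key, < n
encrypted_key = pow(session_key, e, n)          # "RSA-encrypt" the key
message = b"the actual bulk data, of any length"
ciphertext = xor_keystream(session_key.to_bytes(2, "big"), message)

# ...while the cheap symmetric cipher handles the message itself.
recovered_key = pow(encrypted_key, d, n)        # "RSA-decrypt" the key
plaintext = xor_keystream(recovered_key.to_bytes(2, "big"), ciphertext)
```

In a real system the toy pieces would be replaced by RSA-OAEP (or ECDH) for the key and AES-GCM or similar for the data, via a vetted library.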

On a MacBook running OS X 10.5.5 and a stock build of OpenSSL, "openssl speed" clocks AES-128-CBC at 46,000 1024-bit blocks per second. That same box clocks 1024-bit RSA at 169 signatures per second. AES-128-CBC is the "textbook" block encryption algorithm, and RSA-1024 is the "textbook" public-key algorithm. It's apples-to-oranges, but the answer is: RSA is much, much slower.
That's not why you shouldn't be using public-key encryption, however. Here are the real reasons:
Public-key crypto operations aren't intended for raw data encryption. Algorithms like Diffie-Hellman and RSA were devised as a way of exchanging keys for block crypto algorithms. So, for instance, you'd use a secure random number generator to generate a 128-bit random key for AES, and encrypt those 16 bytes with RSA.
Algorithms like RSA are much less "user-friendly" than AES. With a random key, a plaintext block you feed to AES is going to come out random to anyone without the key. That is actually not the case with RSA, which, more so than AES, is just a math equation. So in addition to storing and managing keys properly, you have to be extremely careful with the way you format your RSA plaintext blocks, or you end up with vulnerabilities.
Public key doesn't work without a key management infrastructure. If you don't have a scheme to verify public keys, attackers can substitute their own keypairs for the real ones to launch "man in the middle" attacks. This is why SSL forces you to go through the rigamarole of certificates. Block crypto algorithms like AES do suffer from this problem too, but without a PKI, AES is no less safe than RSA.
Public-key crypto operations are susceptible to more implementation vulnerabilities than AES. For example, both sides of an RSA transaction have to agree on parameters, which are numbers fed to the RSA equation. There are evil values attackers can substitute in to silently disable encryption. The same goes for Diffie-Hellman, and even more so for elliptic curves. Another example is the RSA signature forgery vulnerability that occurred 2 years ago in multiple high-end SSL implementations.
Using public key is evidence that you're doing something "out of the ordinary". Out of the ordinary is exactly what you never want to be with cryptography; beyond just the algorithms, crypto designs are audited and tested for years before they're considered safe.
To our clients who want to use cryptography in their applications, we make two recommendations:
For "data at rest", use PGP. Really! PGP has been beat up for more than a decade and is considered safe from dumb implementation mistakes. There are open source and commercial variants of it.
For "data in flight", use TLS/SSL. No security protocol in the world is better understood and better tested than TLS; financial institutions everywhere accept it as a secure method to move the most sensitive data.
Here's a decent writeup [matasano.com] that Nate Lawson, a professional cryptographer, and I wrote a few years back. It covers these points in more detail.

Use the OpenSSL speed subcommand to benchmark the algorithms and see for yourself.
[dave@hal9000 ~]$ openssl speed aes-128-cbc
Doing aes-128 cbc for 3s on 16 size blocks: 26126940 aes-128 cbc's in 3.00s
Doing aes-128 cbc for 3s on 64 size blocks: 7160075 aes-128 cbc's in 3.00s
...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc    139343.68k   152748.27k   155215.70k   155745.61k   157196.29k
[dave@hal9000 ~]$ openssl speed rsa2048
Doing 2048 bit private rsa's for 10s: 9267 2048 bit private RSA's in 9.99s
Doing 2048 bit public rsa's for 10s: 299665 2048 bit public RSA's in 9.99s
...
                   sign     verify    sign/s  verify/s
rsa 2048 bits  0.001078s  0.000033s    927.6   29996.5

Practical PKI-based encryption systems use asymmetric encryption to encrypt a symmetric key, and then symmetric encryption with that key to encrypt the data (having said that, someone will point out a counter-example).
So the additional overhead imposed by asymmetric crypto algorithms over that of symmetric is fixed - it doesn't depend on the data size, just on the key sizes.
Last time I tested this, validating a chain of 3 or so X.509 certificates [edit to add: and the data they were signing] was taking a fraction of a second on an ARM running at 100MHz or so (averaged over many repetitions, obviously). I can't remember how small - not negligible, but well under a second.
Sorry I can't remember the exact details, but the summary is that unless you're on a very restricted system or doing a lot of encryption (say, accepting as many SSL connections per second as possible), NIST-approved asymmetric encryption methods are fast.

Apparently it is 1000x worse. (http://windowsitpro.com/article/articleid/93787/symmetric-vs-asymmetric-ciphers.html). But unless you're really working through a lot of data it isn't going to matter. What you can do is use asymmetric encryption to exchange a symmetric encryption key.

Perhaps you can add some details about your project so that you get better quality answers. What are you trying to secure? From whom? If you could explain the requirements of your security, you'll get a much better answer. Performance doesn't mean much if the encryption mechanism isn't protecting what you think it is.
For instance, X.509 certs are an industry-standard way of securing client/server endpoints. PGP armoring can be used to secure license files. For simplicity, cipher block chaining with Blowfish (and a host of other ciphers) is easy to use in Perl or Java, if you control both endpoints.
Thanks.

Yes, the hybrid encryption offered by standardized cryptographic schemes like PGP, TLS, and CMS does impose a fixed performance cost on each message or session. How big that impact is depends on the algorithms selected and which operation you are talking about.
For RSA, decryption and signing operations are relatively slow, because they require modular exponentiation with a large private exponent. RSA encryption and signature verification, on the other hand, are very fast, because they use the small public exponent. This difference scales quadratically with the key length.
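That asymmetry is easy to observe with Python's built-in modular exponentiation. The numbers below are random stand-ins for an RSA modulus and exponents (not a real keypair), so this only illustrates how cost scales with exponent size, not actual RSA performance:

```python
import secrets
import time

# Random stand-ins for RSA values (NOT a real keypair; timing illustration only).
n = secrets.randbits(2048) | (1 << 2047) | 1   # 2048-bit odd "modulus"
m = secrets.randbits(2040)                     # "message" below the modulus
e = 65537                                      # typical small public exponent
d = secrets.randbits(2048) | (1 << 2047)       # full-size stand-in private exponent

def time_op(exp: int, reps: int = 5) -> float:
    # Average a few modular exponentiations with the given exponent.
    t0 = time.perf_counter()
    for _ in range(reps):
        pow(m, exp, n)
    return (time.perf_counter() - t0) / reps

t_pub, t_priv = time_op(e), time_op(d)
print(f"small-exponent op: {t_pub * 1e6:.0f} us, "
      f"large-exponent op: {t_priv * 1e6:.0f} us")
```

The large-exponent operation is typically around two orders of magnitude slower, which mirrors why RSA decryption and signing dominate the cost.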
Under ECC, because peers are doing the same math with keys of similar size, operations are more balanced than with RSA. In an integrated encryption scheme, an ephemeral EC key can be generated and used in a key-agreement algorithm; that requires a little extra work for the message sender. ECDH key agreement is much, much slower than RSA encryption, but much faster than RSA decryption.
In terms of relative numbers, decrypting with AES might be 100,000x faster than decrypting with RSA. In terms of absolute numbers, depending heavily on hardware, AES might take a few nanoseconds per block, while RSA takes a millisecond or two. And that prompts the question, why would anyone use asymmetric algorithms, ever?
The answer is that these algorithms are used together, for different purposes, in hybrid encryption schemes. Fast, symmetric algorithms like AES are used to protect the message itself, and slow, asymmetric algorithms like RSA are used in turn to protect the keys needed by the symmetric algorithms. This is what allows parties that have never previously shared any secret information, like you and your search engine, to communicate securely with each other.

Related

Can we generate BCrypt / SCrypt / Argon2 hash password using CNG (Windows Cryptography API)?

Is it possible with CNG (Windows Cryptography API: Next Generation) to generate BCrypt / SCrypt / Argon2 password hashes?
BCrypt is a computationally difficult algorithm designed to store passwords by way of a one-way hashing function. You input your password to the algorithm and, after significant (relative) computation, an output is produced. Bcrypt has been around since the late 90s and has withstood significant scrutiny from the information security/cryptography community. It has proven reliable and secure over time.
Scrypt is an update to the same model from which Bcrypt arose. Scrypt is designed to rely on high memory requirements as opposed to high computational requirements. The realization that led to this was that specialized computer chips (FPGAs/ASICs/GPUs) could be purchased at scale by an attacker more easily than huge amounts of memory for a traditional computer.
Short Answer
No.
Long Answer
Neither CryptoAPI nor Cryptography API: Next Generation (CNG) supports bcrypt, scrypt, or argon2.
bcrypt is a customized version of the Blowfish encryption algorithm. Blowfish is not supported by CNG. And even if it were, bcrypt uses a version of Blowfish with a custom "expensive" key setup.
scrypt is (nearly) PBKDF2, which is supported by CNG:
Byte[] Scrypt(String password, int desiredNumberOfBytes, ...)
{
    // All the memory-hard work happens inside this salt derivation
    Byte[] salt = SpecialScryptSaltGeneration(password, ...);
    return PBKDF2(password, salt, desiredNumberOfBytes, 1);
}
but SpecialScryptSaltGeneration uses a primitive not included in CNG (the Salsa20/8 core).
Argon2 uses custom primitives (based on BLAKE2b) that don't exist in CNG either.
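For comparison, Python's standard library does expose both PBKDF2 and (when built against OpenSSL 1.1+) scrypt, which can stand in when a platform API lacks them. The work-factor parameters below are illustrative, not a tuning recommendation:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# PBKDF2-HMAC-SHA256; the iteration count is the tunable cost.
pbkdf2_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)

# scrypt: n is the CPU/memory cost, r the block size, p the parallelism.
scrypt_key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
```

Both are deterministic for a given password and salt, which is what lets you verify a stored hash later.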

ANSI X9.31 PRNG successor

I want to know if there is a successor for X9.31 (AES) based generators in crypto++ library, since the X9.31 cannot be used after December 2015 (NIST SP800-131A)?
If you're looking for an approved CSPRNG, then you want either the HMAC DRBG or the Hash DRBG, both of which are approved by NIST and are in Crypto++. You will want to seed it using the OS CSPRNG, which is also available in Crypto++.
The HMAC DRBG has been found to be slightly more likely to resist attacks, but the Hash DRBG is going to be faster in a typical implementation. There are no known practical attacks on either when used correctly.
If you don't have a strong need for compliance, I'd just use the OS PRNG and stick with that. The OS PRNG is going to be robust and secure, and even if you picked another algorithm, you'd still need to seed it from the OS PRNG anyway.
If you do need to pick an approved algorithm for compliance reasons but don't otherwise have a strong preference, use an HMAC DRBG with SHA-512, seeded from the OS RNG, which provides the best possible security and the best possible performance for an HMAC DRBG on 64-bit systems.

Does winapi's bcrypt.h actually support bcrypt hashing?

This may sound like a strange question, and it feels a bit bizarre that I actually have to ask this, but after spending a couple hours looking over the MSDN documentation for the bcrypt routines that were added in Vista, I've almost reached the conclusion that there is no actual bcrypt support!
According to Wikipedia:
bcrypt is an adaptive cryptographic hash function for passwords
... based on the Blowfish cipher ... Besides incorporating a
salt to protect against rainbow table attacks, bcrypt is an adaptive
hash: over time it can be made slower and slower so it remains
resistant to specific brute-force search attacks against the hash and
the salt.
However, from the documentation on MSDN, the "bcrypt" library is apparently actually a generic interface for encryption and hashing. You have to obtain a handle to an "algorithm provider" via the BCryptOpenAlgorithmProvider function, which has several built-in algorithms to choose from. But the word "blowfish" does not appear anywhere in the list.
So am I missing something? Am I reading this wrong? Or does Windows's "bcrypt" library not actually support bcrypt at all?
In the context of MSDN, BCrypt is short for "BestCrypt", but the PR name for it is:
Cryptography API: Next Generation (Cng)
It is implemented in bcrypt.dll.
BestCrypt/BCrypt/Cng is the successor to the older CryptoAPI.
Microsoft is slowly removing references to "BestCrypt" from their site, but you can still see it in some pages like:
SHA256Cng Class
This algorithm is for hashing only and does not provide any encryption or decryption. It uses the BCrypt (BestCrypt) layer CNG.
It's interesting (to me anyway) that the .NET Framework can generally provide three implementations of each kind of crypto algorithm. For example, for SHA-2 hashing, there is:
SHA256Managed: an implementation written purely in managed code
SHA256CryptoServiceProvider: a wrapper around the native Cryptographic Service Provider (CSP) implementation
SHA256Cng: a wrapper around Cryptography Next Gen (Cng) implementation
Short version
No, BCrypt is short for BestCrypt. And no, it doesn't support bcrypt (Blowfish crypt) password hashing.
The BCrypt APIs are generic and support various cryptographic hash algorithms, but bcrypt is not one of them. The "B" prefix seems to be just a way to distinguish between the older APIs and the Next Generation ones.

Can the Diffie-Hellman protocol be used as a base for digital signatures?

I am implementing a custom crypto library using the Diffie-Hellman protocol (yes, I know about RSA/SSL/and the like; I am using it for specific purposes), and so far it has turned out better than I originally expected: using GMP, it's very fast.
My question is, besides the obvious key exchange part, if this protocol can be used for digital signatures as well.
I have looked at quite a few resources online, but so far my search has been fruitless.
Is this at all possible?
Any (serious) ideas are welcome.
Update:
Thanks for the comments. And for the more curious people:
My DH implementation is meant, among other things, to distribute encrypted "resources" to client-side applications. Both are, for the most part, my own code.
Every client has a DH key pair, and I use it along with my server's public key to generate the shared keys. In turn, I use them for HMACs and symmetric encryption.
DH keys are built anywhere from 128 up to 512 bits, using safe primes as the modulus.
I realize that "pure" D-H alone can't be used for signatures; I was hoping for something close to it (or as simple).
It would appear this is feasible: http://www.quadibloc.com/crypto/pk050302.htm.
I would question why you are doing this, though. The first rule of implementing crypto is: don't implement crypto. There are plenty of libraries that already exist, and you would probably be better off leveraging them; crypto code is notoriously hard to get right even if you understand the science behind it.
DSA is the standard way to make digital signatures based on the discrete logarithm problem.
And to answer a potential future question, Ephemeral-static Diffie-Hellman is the standard way to implement asymmetric encryption (to send messages where you know and trust the recipients public key (for example through a certificate), but the recipient does not know your key).
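The key-agreement half that DH does give you can be sketched over the classic textbook group (p = 23, g = 5). These toy parameters are for illustration only; a real deployment needs a large safe-prime or elliptic-curve group:

```python
import secrets

# Toy Diffie-Hellman over the textbook group (insecure, illustration only).
p, g = 23, 5                       # tiny safe prime and generator
a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent
A, B = pow(g, a, p), pow(g, b, p)  # public values, exchanged in the clear
shared_a = pow(B, a, p)            # Alice's view of the shared secret
shared_b = pow(A, b, p)            # Bob's view; both sides agree
```

Note that nothing here authenticates either party, which is exactly why plain DH cannot serve as a signature: anyone can run this exchange, so agreement proves nothing about identity.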

Recommended key size and type for GnuPG?

A Practical Introduction to GNU Privacy Guard in Windows recommends DSA and ElGamal, but I would like to know if RSA is good enough to use these days, and if so, what minimum key size should I use? Is it ok to use SHA-256 for signing (for compatibility with e-mail clients)?
Also, beside e-ignite: Key Types, can you point to other sources for this subject?
The RSA/DSA minimum today is actually 1024 bits, so elliptic curves are coming into wider use, since they are faster and use shorter keys.
To get security comparable to AES-128 you need at least a 3072-bit (384-byte) RSA key; matching AES-256 would take roughly 15360 bits.
Email clients use certificates nowadays, so that's a separate thing (X.509); but for use with RSA/DSA the most common digest option is SHA-1 (now considered somewhat weak).
I recommend study of:
http://en.wikipedia.org/wiki/Digital_signature
http://en.wikipedia.org/wiki/X.509
http://www.rsa.com/rsalabs/node.asp?id=2264
I know the topic is old, but at this time, DSA 1024 is considered to be too weak, as is SHA-1.
You should use RSA 2048 (for signing and encryption) and SHA256 (for digest). Normally, the symmetric algorithm used is AES256, which is good enough.
When encrypting, GPG gzips the data, creates an AES-256 key, and encrypts the data with it. It then encrypts the AES key with the recipient's RSA or ElGamal public key and sends the encrypted AES key plus the encrypted data together as one package.
RSA 2048 is said to protect data until 2015 or so, and RSA 4096 until about 2020, based on the predicted computing power at that time. (I'm not totally sure about the dates, but it is logical that a 4096-bit key would be harder to crack than a 2048-bit one.)
SHA-1 is weak, but not fully broken. SHA-256 belongs to the same design family as SHA-1, and there has been concern that similar weaknesses might affect the whole SHA family, but no practical attacks are known, and finding a collision still requires an enormous amount of computing power.
Anyway, for digital signatures this is less of a concern, because the hash is just the final step before the asymmetric signing operation, which an attacker would still have to defeat as well.
As for key size, whether RSA or ElGamal/DSA, I would recommend 2048-bit keys now.
The difference is that RSA is based on the difficulty of factoring, while ElGamal/DSA are based on the discrete logarithm problem; neither can necessarily be considered better or worse (note, though, that elliptic-curve schemes are closely related to the discrete-log side).
I would recommend RSA/RSA 4096 with AES256 and SHA512
GPG can only use RSA for signing, not encryption. The default is DSA/Elgamal 1024/2048. The Elgamal default key length used to be 1024, but someone must have decided that was not secure enough. People on the GPG mailing list say that most people shouldn't need more than 2048.
I'm less clear on the various signing algorithms. I know there are issues with SHA-1, but how does this relate to DSA/RSA?
I've had the same key for years that uses the above default values. I don't use it much, but am wondering whether generating a new one is justified.
If you don't know, you should use the GPG defaults! (This is how the authors have intended it.)
