High-cost encryption but low-cost decryption - algorithm

I want the user (or attacker) to encrypt data and send it to the server. I'm looking for an algorithm that is the complete opposite of standard algorithms (which are fast to use, hard to break): it should be very hard to encrypt data such as passwords using a key sent by the server, to protect against brute-force attacks, but very easy to decrypt, so the server spends very little time validating a user while an attacker pays a high cost to encrypt every new trial password with the key the server sends.
Once again I am not talking about SSL.

It sounds as if you're looking for a proof-of-work scheme -- one of the applications for such a scheme is just what you describe: To force a client to do a certain amount of work and so prevent it from flooding a server with requests.

One silly idea that might work really well is to attach a "puzzle" to the encryption scheme. Make it so that in order to post encrypted data to the server, you have to solve some known NP-hard problem (for example, finding a satisfying assignment to a large Boolean formula) and send the answer along with it. The server can then easily verify the solution, but under the assumption that P ≠ NP, clients trying to post data have to do a superpolynomial amount of extra work (in the worst case), preventing them from flooding your server.
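For a concrete flavor of this, here is a minimal sketch in Python. It swaps the SAT puzzle for a hashcash-style partial hash preimage, which is how proof-of-work is usually instantiated in practice; it has the same asymmetry (one hash to verify, roughly 2^DIFFICULTY hashes to solve). All names here are illustrative, not from any particular library:

```python
import hashlib
import os

DIFFICULTY = 20  # client must find a hash with 20 leading zero bits

def make_challenge() -> bytes:
    """Server: issue a fresh random challenge with each request."""
    return os.urandom(16)

def solve(challenge: bytes) -> int:
    """Client: expensive search, roughly 2**DIFFICULTY hash attempts."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Server: a single hash suffices to check the client's work."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0
```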
Hope this helps!

You can use RSA signatures (e.g. PKCS#1).
Your server may accept answers only if they are signed with a certain RSA key whose private part you distributed earlier. The server uses the public part.
RSA has the property that verification is much quicker than signing when you select a small public exponent (typically called e, for instance e=3), by a factor of 10x to 100x depending on the key length and on whether your clients are smart enough to use CRT.
RSA key generation is extremely slow, but it is probably sufficient to use one key for all clients, as long as you include a challenge in your protocol to foil replay attacks.
Finally, RSA signing does not give you confidentiality.
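To make that asymmetry concrete, here is a rough sketch using Python's `cryptography` package (my choice of library, not implied by the answer): the client performs the expensive private-key signing, and the server does the cheap public-key verification with e=3. Exact timings vary with key length and implementation:

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# e=3 makes verification (one modular cube) far cheaper than signing.
key = rsa.generate_private_key(public_exponent=3, key_size=2048)
pub = key.public_key()
message = b"challenge-response payload"

t0 = time.perf_counter()
sig = key.sign(message, padding.PKCS1v15(), hashes.SHA256())   # client side
t1 = time.perf_counter()
pub.verify(sig, message, padding.PKCS1v15(), hashes.SHA256())  # server side
t2 = time.perf_counter()
print(f"sign: {t1 - t0:.5f}s  verify: {t2 - t1:.5f}s")
```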

Related

Is there a cryptographic disadvantage to applying bcrypt to an already hashed password?

Imagine a scenario where a client application is sending a password to a backend server so that the server can validate that the user entered the correct password when being compared to a stored variation of the password.
The transport mechanism is HTTPS, with the server providing HSTS and HPKP to the user agent and preferring strong cryptographic ciphers, scoring A+ on the SSL Labs test. Nonetheless, we may wish to avoid sending the original user-provided password from the user agent to the server. Instead, perhaps we'd send a hash after a number of rounds of SHA-256 on the client.
On the server-side, for the storage of passwords we are using bcrypt with a large number of rounds.
From a cryptographic point of view, is there any disadvantage to performing bcrypt on the already SHA-256-hashed value as opposed to directly on the plaintext password? Does the fixed-length nature of the input when using hashes somehow undermine the strength of the algorithm?
EDIT: I'm not asking about performance such as the memory, CPU, storage requirements or wall clock time required to calculate, store, send, or compare values. I'm purely interested in whether applying a hash prior to applying bcrypt could weaken the strength of bcrypt in the case of a disclosure of the full list of stored values.
For anyone interested in this, I followed advice and asked on security.stackexchange.com here
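For illustration, here is roughly what the construction in question looks like, sketched with Python's `hashlib` and `bcrypt` packages (the function names are mine). Hex-encoding the SHA-256 digest keeps the input printable, free of NUL bytes, and well under bcrypt's 72-byte input limit:

```python
import hashlib
import bcrypt

def client_prehash(password: str) -> bytes:
    # Client side: pre-hash the password before sending it over HTTPS.
    return hashlib.sha256(password.encode("utf-8")).hexdigest().encode("ascii")

def server_store(prehash: bytes) -> bytes:
    # Server side: bcrypt the received pre-hash for storage.
    return bcrypt.hashpw(prehash, bcrypt.gensalt(rounds=12))

def server_check(prehash: bytes, stored: bytes) -> bool:
    return bcrypt.checkpw(prehash, stored)
```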

Is there a good symmetric encryption algorithm that changes ciphertext if password or plaintext changes?

I want the encrypted data to always change when the original data or the password changes (either one of them, or both). In other words, once the encrypted data is fixed, both the original data and the password are determined, although they are not known to those who don't have the password.
Is there any good symmetric encryption algorithm that fits my specific need?
I assume that you use the password to derive the key for the cipher.
Changing key
Every modern encryption algorithm produces different ciphertexts when a different key is used. That's just how encryption works; if it didn't have this property, everything would be broken. All the usual suspects like AES, Blowfish, and 3DES have this property.
Changing plaintext
The other property is a little harder to do. This runs under the umbrella of semantic security.
Take for example any modern symmetric cipher in ECB mode. If only a single block of plaintext changes then only the same block changes in the ciphertext. So if you encrypt many similar plaintexts, an attacker who observes the ciphertexts can infer relationships between those. ECB mode is really bad.
Ok, now take a cipher in CBC mode. If you use the same IV over and over again, then an attacker may infer similar relationships as in ECB mode. If the 10th block of plaintext changes, then the previous 9 blocks will be the same in both ciphertexts. So, if you use a new random IV for every encryption, then there is nothing an attacker can deduce besides the length without breaking the underlying cipher.
In other words, once the encrypted data is certain, then both the original data and the password will be certain
The previous paragraph may not be exactly what you wanted: with a random IV, encrypting the same plaintext with the same key twice gives different results (your desired deterministic behavior is the weaker security property). Since you derive the key from a password, you may also derive the IV from the same password. If you use, for example, PBKDF2, you can set the number of output bits to the size of key + IV. You will need to use a static salt value.
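A minimal sketch of that recipe, assuming Python's `cryptography` package and AES-256-CBC (the static salt and iteration count here are illustrative): PBKDF2 emits 48 bytes, split into a 32-byte key and a 16-byte IV, so identical (password, plaintext) pairs always yield identical ciphertexts.

```python
from cryptography.hazmat.primitives import hashes, padding as sym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

STATIC_SALT = b"app-specific-static-salt"  # deliberate: determinism requires it

def encrypt_deterministic(password: bytes, plaintext: bytes) -> bytes:
    # Derive 32 key bytes + 16 IV bytes from the password in one call.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=48,
                     salt=STATIC_SALT, iterations=600_000)
    material = kdf.derive(password)
    key, iv = material[:32], material[32:]
    padder = sym_padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()
```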
If you don't need that last property, then I suggest you use an authenticated mode like GCM or EAX. When you transmit ciphertext or give the attacker an encryption oracle then there are possible attack vectors when no integrity checks are used. An authenticated mode solves this for you without the need to use an encrypt-then-MAC scheme.
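A sketch of that authenticated-mode alternative with AES-GCM, again assuming the `cryptography` package (the key would come from a password KDF as above, but with a random salt):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_gcm(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(12)        # fresh nonce per message; never reuse
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ct             # ciphertext carries its own auth tag

def decrypt_gcm(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None)  # raises InvalidTag if tampered
```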
If all you care about is detecting when either the data or the password changes, create a file with the data and then append the password. Then use a cryptographic hash like SHA-2 on the file. If either the data or password changes, the hash will change.
Encryption algorithms are generally used to keep data private or to verify identities. Hashes are for detecting when two data objects are different.
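A minimal sketch of that approach in Python; the length prefix is my addition, to make the split between data and password unambiguous:

```python
import hashlib

def fingerprint(data: bytes, password: bytes) -> str:
    # Length-prefix the data so (data, password) splits unambiguously;
    # any change to either input changes the digest.
    blob = len(data).to_bytes(8, "big") + data + password
    return hashlib.sha256(blob).hexdigest()
```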
The usual reason for encrypting data is that it will be placed in an insecure environment. If the user's encrypted data is in an insecure environment, an opponent can copy it and use software the opponent obtained or wrote, instead of your software, to try to decrypt it. This is similar to some of the controls put on PDFs in Adobe software, such as not being able to cut and paste from the document. Other brands of software may not enforce the no-cut-and-paste restriction.
If the opponent discerns the encryption algorithm, but uses the wrong password, the chances of getting the correct plain text are very small; for the popular modern algorithms, the chance is small enough to neglect compared to other everyday risks we must endure, such as life on earth being destroyed by a comet.

Encryption algorithm/hardware that can sign/verify natural time?

So here is my stupid question:
PGP/GPG can be used to sign text; others use the public key to verify it.
Let's say asymmetric cryptographic algorithms deal with space.
Is there an algorithm that can deal with time?
E.g. sign a document at 2011-10-10 10:10:10, so that others can verify the document was really, truly signed at 2011-10-10 10:10:10.
I know there are maybe solutions based on a common trusted time source, but is it possible that a pure algorithm solution exists?
Or perhaps some amazing dedicated piece of hardware can do this? (E.g. half-life of certain well-known material, etc.)
This is outside the scope of a cryptographic algorithm, because there's no such thing as absolute time (and if there were, there would be no way to measure it from inside a computer). What you're asking for requires a trusted third party; the Wikipedia article on Trusted Timestamping describes this. Another approach is the one used in Bitcoin, where distributed agreement is reached on a sequence of events; the order of events is decided by the order in which they became part of the consensus. This provides a relative measure of time as a sequence of events, not a wall-clock time.
OpenPGP signs data (not text), as does any other signing mechanism. Data has no special meaning besides the one humans put into it. "3.1415926" can be treated as the value of Pi expressed in decimals, or as a randomizer salt, or as the password to one's home computer; you choose.
Consequently, time is just some large number, relative to a starting point agreed on by people (e.g. 30 Dec 1899 in OLE time). You can sign time as you would sign any other data. The question is where you get the value. As Nick Johnson pointed out, you need a trusted time source. This can be an NTP server (then you need to sign the value yourself) or a TSP server (which produces signed values, if you trust it).
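To illustrate "time is just data you sign", here is a sketch using an Ed25519 key from Python's `cryptography` package (my choice; the scheme works with any signature algorithm). Note it proves only what time the signer claimed, which is exactly why the trusted time source matters:

```python
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()

# The timestamp is just more bytes in the signed payload; the signature
# proves only what time the signer *claimed* the signing happened.
document = b"contract text"
ts = int(time.time())  # in practice this value must come from a trusted source
signature = key.sign(document + ts.to_bytes(8, "big"))

# Anyone with the public key checks the document and claimed time together.
try:
    key.public_key().verify(signature, document + ts.to_bytes(8, "big"))
    print("signed at", ts)
except InvalidSignature:
    print("document or timestamp was altered")
```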
In general, you can read RFC 3161 on the timestamping protocol, which relies on trusted third parties. Other schemes exist and are great, but the TSP protocol described in RFC 3161 has become the most widely used approach.

How secure is a digital signature?

A digital signature, if I understand it correctly, means sending the message in the clear along with a hash of the message that is encrypted using a private key.
The recipient of the message calculates the hash, decrypts the received hash using the public key, then compares the two hashes for a match.
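(As an aside, here is a minimal sketch of that flow with RSA PKCS#1 v1.5 in Python's `cryptography` package; the library performs the hashing and the "encryption of the hash" internally in one call.)

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Hello world!"

# Hash the message, pad, and apply the private-key operation.
signature = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

public = key.public_key()
public.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())  # passes
try:
    public.verify(signature, b"Hello w0rld!", padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("any change to the message invalidates the signature")
```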
How safe is this? I mean, you can obtain the hash of the message easily and you also have the encrypted hash. How easy is it to find the private key used to create the Encrypted_hash?
Example:
Message        Hash   Encrypted_hash
------------------------------------
Hello world!   1234   abcd
Hi there       5678   xyzt
Bla bla        0987   gsdj
...
Given the Hash and the Encrypted_hash values, and enough of these messages, how easy/hard is it to find out the private key?
Because of the algorithms used to generate the keys (RSA is the typical one), the answer is essentially "impossible in any reasonable amount of time", assuming the key is of sufficient bit length. As long as the private key is not stolen or given away, you won't be able to recover it from just the public key and messages signed with it.
As linked to in Henk Holterman's answer, the RSA algorithm is built on the assumption that the computations needed to recover the private key, prime factorization among them, are hard problems that cannot be solved in any reasonable amount of time (as far as we currently know). In other words, the underlying problem (prime factorization) is in NP: no polynomial-time algorithm is known for solving it (cracking the private key), but a solution can be verified in polynomial time (decrypting using the public key).
Ciphers developed before electronic computers were often vulnerable to "known plain-text" attack, which is essentially what is described here: if an attacker had the cipher-text and the corresponding plain-text, he could discover the key. World War II-era codes were sometimes broken by guessing at plain-text words that had been encrypted, like the locations of battles, ranks, salutations, or weather conditions.
However, the RSA algorithm as used most often for digital signatures is invulnerable even to a "chosen plain-text attack" when proper padding is used (PSS for signatures, OAEP for encryption). Chosen plain-text means that the attacker can choose a message and trick the victim into signing or encrypting it; it's usually even more dangerous than a known plain-text attack.
Anyway, a digital signature is safe by any standard. Any compromise would be due to an implementation flaw, not a weakness in the algorithm.
A digital signature says nothing about how the actual message is transferred. Could be clear text or encrypted.
And current asymmetric algorithms (public+private key) are very secure, how secure depends on the key-size.
An attacker does have enough information to attempt to crack it. But it is part of the 'proof' of asymmetric encryption that doing so takes an impractical amount of CPU time: the method is computationally safe.
What you're talking about is known as a "known plaintext" attack. With any reasonably secure modern encryption algorithm known plaintext is of essentially no help in an attack. When you're designing an encryption algorithm, you assume that an attacker will have access to an arbitrary amount of known plaintext; if that assists the attacker, the algorithm is considered completely broken by current standards.
In fact, you normally take for granted that the attacker will not only have access to an arbitrary amount of known plaintext, but even an arbitrary amount of chosen plaintext (i.e., they can choose some text, somehow get you to encrypt it, and compare the result to the original). Again, any modern algorithm needs to be immune to this to be considered secure.
Given the Hash and the Encrypted_hash values, and enough of these messages, how easy/hard is it to find out the private key?
This is the scenario of a Known-plaintext attack: you are given many plaintext messages (the hash) and corresponding cipher texts (the encrypted hash) and you want to find out the encryption key.
Modern cryptographic algorithms are designed to withstand this kind of attack, like the RSA algorithm, which is one of the algorithms currently in use for digital signatures.
In other words, it is still extremely difficult to find out the private key. You'd either need an impossible amount of computing power, or you'd need to find a really fast algorithm for factorizing integers, but that would guarantee you lasting fame in the history of mathematics, and hence is even more difficult.
For a more detailed and thorough understanding of cryptography, have a look at the literature, like the Wikipedia pages or Bruce Schneier's Applied Cryptography.
For a perfectly designed hash it is impossible (or rather, there is no easier way than trying every possible key).

Why do you need lots of randomness for effective encryption?

I've seen it mentioned in many places that randomness is important for generating keys for symmetric and asymmetric cryptography and when using the keys to encrypt messages.
Can someone provide an explanation of how security could be compromised if there isn't enough randomness?
Randomness means unguessable input. If the input is guessable, then the output can be easily calculated. That is bad.
For example, Debian had a long-standing bug in its SSL implementation that failed to gather enough randomness when creating a key. This resulted in the software generating one of only 32k possible keys. It is thus easily possible to decrypt anything encrypted with such a key by simply trying all 32k possibilities, which is very fast given today's processor speeds.
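Here is a toy model of that failure mode in Python (hypothetical, not Debian's actual code): if the only entropy feeding the generator is a 15-bit process ID, only 32768 distinct keys can ever come out of it, no matter how long the keys look.

```python
import random

def weak_key(pid: int) -> bytes:
    # Toy model of the flaw: the only entropy is a 15-bit process ID,
    # so at most 32768 distinct "128-bit" keys can ever be produced.
    rng = random.Random(pid)
    return rng.getrandbits(128).to_bytes(16, "big")

all_keys = {weak_key(pid) for pid in range(2**15)}
print(len(all_keys))  # 32768 -- an attacker just tries them all
```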
The important feature of most cryptographic operations is that they are easy to perform if you have the right information (e.g. a key) and infeasible to perform if you don't have that information.
For example, symmetric cryptography: if you have the key, encrypting and decrypting is easy. If you don't have the key (and don't know anything about its construction) then you must embark on something expensive like an exhaustive search of the key space, or a more-efficient cryptanalysis of the cipher which will nonetheless require some extremely large number of samples.
On the other hand, if you have any information on likely values of the key, your exhaustive search of the keyspace is much easier (or the number of samples you need for your cryptanalysis is much lower). For example, it is (currently) infeasible to perform 2^128 trial decryptions to discover what a 128-bit key actually is. If you know the key material came out of a time value that you know within a billion ticks, then your search just became 340282366920938463463374607431 times easier.
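A quick sanity check of that figure:

```python
# 2**128 possible keys, but only 10**9 candidate seed values to try:
print(2**128 // 10**9)  # 340282366920938463463374607431
```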
To decrypt a message, you need to know the right key.
The more possible keys an attacker has to try, the harder it is to decrypt the message.
Taking an extreme example, let's say there's no randomness at all. When I generate a key to use in encrypting my messages, I'll always end up with the exact same key. No matter where or when I run the keygen program, it'll always give me the same key.
That means anyone who has access to the program I used to generate the key can trivially decrypt my messages. After all, they just have to ask it to generate a key too, and they'll get one identical to the one I used.
So we need some randomness to make it unpredictable which key you end up using. As David Schmitt mentions, Debian had a bug which made it generate only a small number of unique keys, which means that to decrypt a message encrypted by the default OpenSSL implementation on Debian, I just have to try this smaller number of possible keys. I can ignore the vast number of other valid keys, because Debian's SSL implementation will never generate those.
On the other hand, if there was enough randomness in the key generation, it's impossible to guess anything about the key. You have to try every possible bit pattern. (and for a 128-bit key, that's a lot of combinations.)
It has to do with some of the basic reasons for cryptography:
Make sure a message isn't altered in transit (Immutable)
Make sure a message isn't read in transit (Secure)
Make sure the message is from who it says it's from (Authentic)
Make sure the message isn't the same as one previously sent (No Replay)
etc
There's a few things you need to include, then, to make sure that the above is true. One of the important things is a random value.
For instance, if I encrypt "Too many secrets" with a key, it might come out with "dWua3hTOeVzO2d9w"
There are a few problems with this: an attacker might be able to break the encryption more easily since I'm using a very limited set of characters. Further, if I send the same message again, it's going to come out exactly the same. Lastly, an attacker could record it and send the message again, and the recipient wouldn't know that I didn't send it, even if the attacker never broke the encryption.
If I add some random garbage to the string each time I encrypt it, then not only does it make it harder to crack, but the encrypted message is different each time.
The other features of cryptography in the bullets above are fixed using means other than randomness (seed values, two way authentication, etc) but the randomness takes care of a few problems, and helps out on other problems.
A bad source of randomness limits the character set again, so it's easier to break, and if it's easy to guess, or otherwise limited, then the attacker has fewer paths to try when doing a brute force attack.
A common pattern in cryptography is the following (sending text from Alice to Bob):
Take plaintext p
Generate random k
Encrypt p with k using symmetric encryption, producing ciphertext c
Encrypt k with Bob's public key, using asymmetric encryption, producing x
Send c+x to Bob
Bob reverses the process, decrypting x using his private key to obtain k, then decrypting c with k to recover p
The reason for this pattern is that symmetric encryption is much faster than asymmetric encryption. Of course, it depends on a good random number generator to produce k, otherwise the bad guys can just guess it.
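A sketch of this hybrid pattern using AES-GCM and RSA-OAEP from Python's `cryptography` package (the specific algorithm choices are mine; the pattern itself is what matters):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Alice: random symmetric key k, fast symmetric encryption of p,
# then wrap k for Bob with his *public* key.
k = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
c = AESGCM(k).encrypt(nonce, b"plaintext p", None)
x = bob_key.public_key().encrypt(k, OAEP)

# Bob: unwrap k with his private key, then decrypt c.
k2 = bob_key.decrypt(x, OAEP)
assert AESGCM(k2).decrypt(nonce, c, None) == b"plaintext p"
```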
Here's a "card game" analogy: Suppose we play several rounds of a game with the same deck of cards. The shuffling of the deck between rounds is the primary source of randomness. If we didn't shuffle properly, you could beat the game by predicting cards.
When you use a poor source of randomness to generate an encryption key, you significantly reduce the entropy (or uncertainty) of the key value. This could compromise the encryption because it makes a brute-force search over the key space much easier.
Work out this problem from Project Euler, and it will really drive home what "lots of randomness" will do for you. When I saw this question, that was the first thing that popped into my mind.
Using the method he talks about there, you can easily see what "more randomness" would gain you.
A pretty good paper that outlines why not being careful with randomness can lead to insecurity:
http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html
This describes how, back in 1995, the Netscape browser's SSL implementation was vulnerable to having its SSL keys guessed because of a problem seeding the PRNG.
