I am planning on generating a set of public/private keys from a deterministic identifying piece of information from a person and was planning on using fingerprints.
My question, therefore, is: what is the output of a fingerprint scanner? Is there any deterministic output I could use, or is it always going to be a matter of "confidence level"? That is, do I always get a number which, if it matches the stored value in the database exactly, allows access, or do I instead get a number which allows access if it is "close enough" to the stored value, based on a high degree of confidence rather than an exact match?
I am quite sure the second option is the answer but just wanted to double-check. Is there any way to get some sort of deterministic output? My hope was to re-generate keys every time rather than actually storing fingerprint data. That way a wrong fingerprint would simply generate a new and useless key.
Any suggestions?
Thanks in advance.
I would advise against it for several reasons.
Fingerprint readings are not entirely deterministic. As suggested in ImSimplyAnna's answer, you might "round" the results in order to have a better chance of obtaining a deterministic value. But that would significantly reduce the number of possible/plausible fingerprints, and thus not meet the search-space size requirement for a cryptographic algorithm. On top of that, I suspect the entropy of such a result would be rather low compared to the requirements of modern algorithms, which are always based on high-quality random numbers.
Fingerprints are not secret: we expose them to everyone all the time, they can be revealed to an attacker at any moment, and they can be captured in a picture with a simple camera. A key must be a secret, and the only place we know we can store secrets without exposing them is our brain (which is why we use passwords).
An important feature of cryptographic keys is the possibility of generating new ones if there is reason to believe the current ones might be compromised. This is not possible with fingerprints.
That is why I would advise against it. More generally, I discourage anyone (myself included) from writing their own cryptographic algorithm, because it is so easy to screw up. It might be the easiest thing to screw up out of all the things you could write, because attackers are so vicious!
The only good approach, if you're not a skilled specialist, is to use libraries that are used all around, because they've been written by experts on the matter, and they've been subject to many attacks and attempts to break them, so the ones still standing offer much better protection than anything a non-specialist could write (or, really, anything a single human could write).
You can also have a look at this question on the Crypto Stack Exchange. They also discourage the OP from using anything other than a battle-hardened algorithm or protocol.
Edit:
I am planning on generating a set of public/private keys from a
deterministic identifying piece of information
Actually, it did not strike me at first (it should have), but keys MUST NOT be generated from anything which is not random. NEVER.
You have to generate them randomly. If you don't, you are already giving the attacker more information than you should. Being a programmer does not make you a cryptographer. Your users' information is at stake; do not take any chances (and if you're not a cryptographer, you don't stand any).
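For contrast with the fingerprint idea, here is a minimal sketch of doing it the boring, correct way, assuming the Python cryptography package (not something mentioned in the question): the key pair comes entirely from the operating system's CSPRNG, and there is deliberately no way to feed in a user-derived seed.

```python
# Sketch: generate an RSA key pair from OS randomness (via the
# third-party "cryptography" package), never from user-derived data.
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization

# The library seeds key generation from the OS CSPRNG internally;
# there is intentionally no parameter for a "seed" such as a fingerprint.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Export the public half; the private half should be stored (encrypted, or
# in a hardware token), not re-derived from biometric data.
pem = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem.decode())
```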
A fingerprint scanner looks for features where the lines on the fingerprint either split or end. It then calculates the distances and angles between such features in an attempt to find a match.
Here's some more reading on the subject:
https://www.explainthatstuff.com/fingerprintscanners.html
in the section "How fingerprints are stored and compared".
The source is the best explanation I can find, but looking around some more it seems that all fingerprint scanners use some variety of that algorithm to generate data that can be matched.
Storing raw fingerprints would not only take up way more space on a database but also be a pretty significant security risk if that information was ever leaked, so it's not really done unless absolutely necessary.
Judging by that algorithm, I would assume that there is always some "confidence level". The angles and distances will never be 100% equal between scans, so there has to be some leeway to make sure a match is still found even if the finger is pressed against the scanner a bit harder or the finger is at a slightly different angle.
Based on this, I'd assume that generating a key pair from a fingerprint would be possible if you can figure out a way to make similar scans result in the same information. Simply rounding the angles and distances might work, but it may introduce cases where two different people generate the same key pair, or where different scans of the same fingerprint still have a high chance of generating several different keys.
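To illustrate why plain rounding is fragile, here is a purely hypothetical sketch (the minutiae format, bin sizes, and helper names are all made up, not taken from any real scanner): it quantizes (distance, angle) pairs into bins and hashes them, and a small amount of measurement noise near a bin boundary is enough to change the resulting seed.

```python
# Illustrative sketch only: quantize made-up minutiae features so that
# "similar" scans map to the same bytes. Bin sizes are arbitrary.
import hashlib

def quantize(minutiae, dist_bin=5.0, angle_bin=15.0):
    """minutiae: list of (distance, angle) pairs from a hypothetical scanner."""
    bins = sorted((round(d / dist_bin), round(a / angle_bin)) for d, a in minutiae)
    return repr(bins).encode()

def seed_from_scan(minutiae):
    # Hash the quantized features to get fixed-length seed material.
    return hashlib.sha256(quantize(minutiae)).hexdigest()

scan_a = [(23.2, 31.0), (40.1, 88.0)]
scan_b = [(23.9, 33.0), (39.7, 86.5)]   # same finger, slight noise, same bins
scan_c = [(25.1, 38.0), (39.7, 86.5)]   # noise crosses a bin boundary, new seed

print(seed_from_scan(scan_a) == seed_from_scan(scan_b))  # True: keys match
print(seed_from_scan(scan_a) == seed_from_scan(scan_c))  # False: key changes
```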
I am following the ETSFS encryption algorithm here.
To give some context, ETSFS is an encryption algorithm that encrypts a 4x4 data matrix by putting it through a series of data-changing functions.
The step-by-step I/O examples provided make the before and after quite clear, but the shifting step of the algorithm is somewhat confusing to me (see page 5/10). Here is the shifting I/O example from that page:
Please note that the allowed symbols in this algorithm are, in this order:
* - . / : # _
['*','-','.','/',':','#','_']
a-z and A-Z are labeled 0-25
It says that the shifting is based on the position of the element in the array. Unfortunately, I cannot draw much information out of the given image. In the third line, it is not clear to me how v became s, since I cannot see a correlation between 3 and 2. The same goes for the last line of the image above, where 4 becomes 2.
How does this shifting work?
As far as I can tell, the information you're seeking (i.e. the content of the arrayAlpha, arrayNumber and arraySymbol arrays used in the shifting step) is simply not specified anywhere in the paper you cite, nor do they appear to be derived from the key. If you wanted to know what those arrays should contain, you'd have to contact the authors and ask.
In any case, I'd advise you not to bother. The paper you cite does not appear to be written by professional cryptographers, and the algorithm it describes does not seem to be a secure encryption scheme in the modern sense. It clearly does not provide semantic security, much less non-malleability.
Certainly, it would never have been published in any actual crypto conference. The authors appear to have fallen into the common amateur cryptographer's trap of designing a cipher just complex enough that they themselves can't think of a way to break it. Also, they don't appear to be particularly experienced at breaking (or designing) ciphers.
I'd suggest using the time you would've spent working on this algorithm to instead familiarize yourself with the actual state of the art and best practices in modern cryptography. For that, I'd recommend any decent introductory crypto book, such as Katz & Lindell's Introduction to Modern Cryptography or Ferguson, Schneier & Kohno's Cryptography Engineering.
Oh, and if you can avoid it, don't try to write your own low-level crypto code (except as a learning exercise). Instead, find an existing reputable crypto library written and reviewed by professional cryptographers, such as NaCl, and use it.
In fact, the "ETSFS" encryption scheme described in the paper looks pretty weak even for an amateur cipher. As far as I can tell, the whole thing seems to amount to nothing more than a polyalphabetic substitution cipher with 16 distinct key-dependent alphabets, combined with a key-independent(!) transposition cipher. (That's not how it's specified, but that's what it appears to work out to, if you trace out how each data character is affected by the iterated steps.)
As such, given that the transposition part of the cipher is fixed and publicly known, a few dozen 16-character chosen plaintext / ciphertext pairs (specifically, one for each character of the alphabet) encrypted with the same key should be sufficient to fully determine the substitution part, thus allowing decryption of any data encrypted with that particular key. If a chosen-plaintext attack is not possible, a slightly larger sample of known plaintext should be sufficient to recover most of the substitution table, if not all of it.
Also, given that there appears to be no mixing between the ciphertext letters, even if the transposition part was made key-dependent, breaking it would still only require 16 more chosen 16-character plaintext / ciphertext pairs. (Exercise: Figure out a set of 16 plaintexts to choose so that, given the corresponding ciphertexts, you can fully determine the transposition part of the cipher without any prior knowledge of the substitution part. Then figure out which additional plaintexts you need to also determine the substitution part.)
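Since the paper doesn't fully specify the cipher, here is a toy stand-in (my own construction, not the paper's ETSFS) that shows the attack idea from the previous paragraphs: with 16 secret per-position substitution tables and a fixed, public transposition, one chosen plaintext block per alphabet symbol recovers the whole key.

```python
# Toy stand-in cipher (NOT the paper's ETSFS): 16 secret per-position
# substitution tables over a small alphabet, followed by a fixed,
# publicly known transposition of the 16 positions.
import random
import string

ALPHABET = string.ascii_lowercase                 # 26 symbols
BLOCK = 16
PERM = [5, 0, 11, 3, 14, 8, 1, 15, 2, 9, 12, 6, 4, 13, 10, 7]  # public

def make_key(rng):
    # One secret substitution table per block position.
    tables = []
    for _ in range(BLOCK):
        t = list(ALPHABET)
        rng.shuffle(t)
        tables.append(dict(zip(ALPHABET, t)))
    return tables

def encrypt(block, tables):
    subbed = [tables[i][c] for i, c in enumerate(block)]
    return "".join(subbed[PERM[i]] for i in range(BLOCK))

# Attacker: ask for one encryption per alphabet symbol, with the symbol
# repeated across the whole block, then read the tables straight off.
secret = make_key(random.Random(1))
recovered = [dict() for _ in range(BLOCK)]
for sym in ALPHABET:
    ct = encrypt(sym * BLOCK, secret)
    for i in range(BLOCK):
        recovered[PERM[i]][sym] = ct[i]           # undo the public transposition

assert recovered == secret                        # full key recovery
print("recovered all", BLOCK, "substitution tables")
```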
I have a project that needs to do validation on the frontend for an American Social Security Number (format ddd-dd-dddd). One suggestion would be to use a hash algorithm, but given the tiny input space (nine digits from [0-9], so at most a billion values), a plain hash could be brute-forced trivially and would be disastrous. It would be acceptable to validate with some high probability that a number is correct and allow the backend to do a final == check, but I need to do far better than "has nine digits", etc.
In my search for better alternatives, I came upon the validation checksums for ISBN numbers and UPC. These look like a great alternative with a high probability of success on the frontend.
Given those constraints, I have three questions:
Is there a way to prove that an algorithm like ISBN13 will work with a different category of data like SSN, or whether it is more or less fit to the purpose from a security perspective? The checksum seems reasonable for my quite large sample of one real SSN, but I'd hate to find out that they aren't generally applicable for some reason.
Is this a solved problem somewhere, so that I can simply use a pre-existing validation scheme to take care of the problem?
Are there any such algorithms that would also easily accommodate validating the last 4 digits of an SSN without giving up too much extra information?
Thanks as always,
Joe
UPDATE:
In response to a question below, a little more detail. I have the customer's SSN as previously entered, stored securely on the backend of the app. What I need to do is verification (to the maximum extent possible) that the customer has entered that same value again on this page. The issue is that I need to prevent the information from being incidentally revealed to the frontend in case some non-authorized person is able to access the page.
That is why an MD5/SHA1 hash is inappropriate: namely that it can be used to derive the complete SSN without much difficulty. A checksum (say, modulo 11) provides nearly no information to the frontend while still allowing a high degree of accuracy for the field validation. However, as stated above I have concerns over its general applicability.
Wikipedia is not the best source for this kind of thing, but given that caveat, http://en.wikipedia.org/wiki/Social_Security_number says
Unlike many similar numbers, no check digit is included.
But before that it mentions some widely used filters:
The SSA publishes the last group number used for each area number. Since group numbers are allocated in a regular (if unusual) pattern, it is possible to identify an unissued SSN that contains an invalid group number. Despite these measures, many fraudulent SSNs cannot easily be detected using only publicly available information. In order to do so there are many online services that provide SSN validation.
Restating your basic requirements:
A reasonably strong checksum to protect against simple human errors.
"Expected" checksum is sent from server -> client, allowing client-side validation.
Checksum must not reveal too much information about SSN, so as to minimize leakage of sensitive information.
I might propose using a cryptographic hash (SHA-1, etc.), but do not send the complete hash value to the client. For example, send only the lowest 4 bits of the 160-bit hash result[1]. By sending 4 bits of checksum, your chance of detecting a data entry error is 15/16, meaning that you'll detect mistakes roughly 94% of the time. The flip side, though, is that you have "leaked" enough info to reduce their SSN to 1/16 of the search space. It's up to you to decide if the convenience of client-side validation is worth this leakage.
By tuning the number of "checksum" bits sent, you can adjust between convenience to the user (i.e. detecting mistakes) and information leakage.
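A minimal sketch of that idea in Python (SHA-1 and the 4-bit truncation are just the values from this answer; in practice you'd want a salted, slower hash, and the helper name is mine):

```python
# Sketch: send only the lowest 4 bits of a hash of the SSN to the client,
# so the client can catch most typos without learning much about the SSN.
import hashlib

def ssn_check_bits(ssn: str, bits: int = 4) -> int:
    digits = ssn.replace("-", "")
    digest = hashlib.sha1(digits.encode()).digest()   # SHA-1 per this answer
    return digest[-1] & ((1 << bits) - 1)             # keep only the low bits

stored = ssn_check_bits("123-45-6789")     # computed server-side, sent to the page
typed  = ssn_check_bits("123-45-6798")     # user transposes two digits

# A mismatch is definitely a typo; a match is only probably correct
# (about 1-in-16 of typos slip through with 4 bits), so the backend
# still performs the real equality check.
print(stored == typed)
```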
Finally, given your requirements, I suspect this convenience / leakage tradeoff is an inherent problem: Certainly, you could use a more sophisticated crypto challenge / response algorithm (as Nick ODell astutely suggests). However, doing so would require a separate round-trip request-- something you said you were trying to avoid in the first place.
[1] In a good crypto hash function, all output digits are well randomized due to avalanche effect, so the specific digits you choose don't particularly matter-- they're all effectively random.
Simple solution: take the number mod 100001 as your checksum. There is a 1/100,000 chance that you'll accidentally get the checksum right with the wrong number (and it will be very resistant to one- or two-digit mistakes canceling out), and there are roughly 10,000 possible SSNs that share it, so you have not revealed the SSN to an attacker.
The only drawback is that those 10,000 other possible SSNs are easy to enumerate. If an attacker can get the last 4 digits of the SSN from elsewhere, they can probably figure out the full SSN. If you are concerned about this, then you should take the user's SSN, add a salt, and hash it, deliberately using an expensive hash algorithm (you can iterate a cheaper algorithm, like MD5, a fixed number of times to increase the cost), and then use only a certain number of bits of the result. The point is that while someone can certainly go through all billion possible SSNs to come up with a limited list of possibilities, it will cost them more to do so. Hopefully enough that they don't bother.
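Here is a rough sketch of both variants (the salt, iteration count, output size, and function names are placeholders; I've used PBKDF2 as the "iterated hash" rather than hand-iterating MD5):

```python
# Sketch of the two variants above; salt, iteration count, and output
# size are placeholder values, not recommendations.
import hashlib

def plain_checksum(ssn: str) -> int:
    # "Number mod 100001": cheap and catches typos, but the ~10,000
    # matching SSNs are trivial to enumerate.
    return int(ssn.replace("-", "")) % 100001

def slow_checksum(ssn: str, salt: bytes = b"per-app-salt", bits: int = 17) -> int:
    # Deliberately expensive salted hash (PBKDF2 standing in for the
    # "iterated hash"), truncated to a few bits so little of the SSN leaks.
    dk = hashlib.pbkdf2_hmac("sha256", ssn.replace("-", "").encode(), salt, 200_000)
    return int.from_bytes(dk, "big") & ((1 << bits) - 1)

print(plain_checksum("123-45-6789"))
print(slow_checksum("123-45-6789"))
```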
Digital signature, if I understood right, means sending the message in clear along with a hash of the message which is encrypted using a private key.
The recipient of the message calculates the hash, decrypts the received hash using the public key, then compares the two hashes for a match.
How safe is this? I mean, you can obtain the hash of the message easily and you also have the encrypted hash. How easy is it to find the private key used to create the Encrypted_hash?
Example:
Message        Hash    Encrypted_hash
--------------------------------------
Hello world!   1234    abcd
Hi there       5678    xyzt
Bla bla        0987    gsdj
...
Given the Hash and the Encrypted_hash values, and enough of these messages, how easy/hard is it to find out the private key?
Because of the algorithms used to generate the keys (RSA is the typical one), the answer is essentially "impossible in any reasonable amount of time", assuming that the key is of a sufficient bit length. As long as the private key is not stolen or given away, you won't be able to recover it from just the public key and messages signed with it.
As linked to in Henk Holterman's answer, the RSA algorithm is built on the fact that the computations needed to recover the private key, prime factorization chief among them, are hard problems which cannot be solved in any reasonable amount of time (as far as we currently know). In other words, nobody knows an efficient algorithm for the underlying problem (factoring the modulus, which would crack the private key), while the corresponding public-key operations, such as verifying a signature, remain fast and easy to perform.
Ciphers developed before electronic computers were often vulnerable to "known plain-text" attack, which is essentially what is described here: if an attacker had the cipher-text and the corresponding plain-text, he could discover the key. World War II-era codes were sometimes broken by guessing at plain-text words that had been encrypted, like the locations of battles, ranks, salutations, or weather conditions.
However, the RSA-based schemes used most often for digital signatures and encryption are invulnerable even to a "chosen plain-text attack" when proper padding is used (PSS for signatures, OAEP for encryption). Chosen plain-text means that the attacker can choose a message and trick the victim into encrypting or signing it; it's usually even more dangerous than a known plain-text attack.
Anyway, a digital signature is safe by any standard. Any compromise would be due to an implementation flaw, not a weakness in the algorithm.
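For concreteness, here is a hedged sketch of the sign/verify flow using the Python cryptography package; the library hashes the message and applies RSA-PSS padding internally, so the "encrypt the hash with the private key" step from the question is handled for you.

```python
# Sketch: RSA signing and verification with the "cryptography" package.
# The library hashes the message and applies PSS padding internally.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Hello world!"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# The message travels in the clear; only the signature proves its origin.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature INVALID")
```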
A digital signature says nothing about how the actual message is transferred. Could be clear text or encrypted.
And current asymmetric algorithms (public+private key) are very secure, how secure depends on the key-size.
An attacker does, in principle, have enough information to mount an attack, but it is part of the "proof" of asymmetric encryption that doing so takes an impractical amount of CPU time: the method is computationally secure.
What you're talking about is known as a "known plaintext" attack. With any reasonably secure modern encryption algorithm known plaintext is of essentially no help in an attack. When you're designing an encryption algorithm, you assume that an attacker will have access to an arbitrary amount of known plaintext; if that assists the attacker, the algorithm is considered completely broken by current standards.
In fact, you normally take for granted that the attacker will not only have access to an arbitrary amount of known plaintext, but even an arbitrary amount of chosen plaintext (i.e., they can choose some text, somehow get you to encrypt it, and compare the result to the original). Again, any modern algorithm needs to be immune to this to be considered secure.
Given the Hash and the Encrypted_hash values, and enough of these messages, how easy/hard is it to find out the private key?
This is the scenario of a Known-plaintext attack: you are given many plaintext messages (the hash) and corresponding cipher texts (the encrypted hash) and you want to find out the encryption key.
Modern cryptographic algorithms are designed to withstand this kind of attack, like the RSA algorithm, which is one of the algorithms currently in use for digital signatures.
In other words, it is still extremely difficult to find out the private key. You'd either need an impossible amount of computing power, or you'd need to find a really fast algorithm for factorizing integers, but that would guarantee you lasting fame in the history of mathematics, and hence is even more difficult.
For a more detailed and thorough understanding of cryptography, have a look at the literature, like the Wikipedia pages or Bruce Schneier's Applied Cryptography.
For a perfectly designed scheme it is impossible (or rather, there is no easier way than trying every possible key).
I've seen it mentioned in many places that randomness is important for generating keys for symmetric and asymmetric cryptography and when using the keys to encrypt messages.
Can someone provide an explanation of how security could be compromised if there isn't enough randomness?
Randomness means unguessable input. If the input is guessable, then the output can be easily calculated. That is bad.
For example, Debian had a long-standing bug in its SSL implementation that failed to gather enough randomness when creating a key. This resulted in the software generating only one of about 32k possible keys. It is thus easy to decrypt anything encrypted with such a key by trying all 32k possibilities, which is very fast given today's processor speeds.
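To get a feel for how small 32k is, here is a toy sketch (the key-derivation function is made up, just to mimic a generator seeded with only 15 bits of entropy):

```python
# Toy sketch: if key material depends on only 15 bits of "randomness"
# (as in the Debian bug), brute force is essentially instant.
import hashlib

def weak_key(seed: int) -> bytes:
    # Stand-in for "PRNG seeded with almost no entropy".
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

target = weak_key(12345)          # victim's key; the seed is unknown to us
for seed in range(2**15):         # try all ~32k possibilities
    if weak_key(seed) == target:
        print("recovered seed:", seed)
        break
```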
The important feature of most cryptographic operations is that they are easy to perform if you have the right information (e.g. a key) and infeasible to perform if you don't have that information.
For example, symmetric cryptography: if you have the key, encrypting and decrypting is easy. If you don't have the key (and don't know anything about its construction) then you must embark on something expensive like an exhaustive search of the key space, or a more-efficient cryptanalysis of the cipher which will nonetheless require some extremely large number of samples.
On the other hand, if you have any information on likely values of the key, your exhaustive search of the keyspace is much easier (or the number of samples you need for your cryptanalysis is much lower). For example, it is (currently) infeasible to perform 2^128 trial decryptions to discover what a 128-bit key actually is. If you know the key material came out of a time value that you know within a billion ticks, then your search just became 340282366920938463463374607431 times easier.
To decrypt a message, you need to know the right key.
The more possible keys you have to try, the harder it is to decrypt the message.
Taking an extreme example, let's say there's no randomness at all. When I generate a key to use in encrypting my messages, I'll always end up with the exact same key. No matter where or when I run the keygen program, it'll always give me the same key.
That means anyone who has access to the program I used to generate the key can trivially decrypt my messages. After all, they just have to ask it to generate a key too, and they'll get one identical to the one I used.
So we need some randomness to make it unpredictable which key you end up using. As David Schmitt mentions, Debian had a bug which made it generate only a small number of unique keys, which means that to decrypt a message encrypted by the default OpenSSL implementation on Debian, I just have to try this smaller number of possible keys. I can ignore the vast number of other valid keys, because Debian's SSL implementation will never generate those.
On the other hand, if there is enough randomness in the key generation, it's impossible to guess anything about the key: you have to try every possible bit pattern (and for a 128-bit key, that's a lot of combinations).
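A toy contrast of the two situations described above, using only Python's standard library (the fixed seed stands in for "no randomness at all"):

```python
# Toy contrast: key bytes from a fixed, guessable seed vs. the OS CSPRNG.
import random
import secrets

def key_no_randomness() -> bytes:
    # Same "key" every single run, on every machine: trivially guessable.
    rng = random.Random(0)
    return bytes(rng.getrandbits(8) for _ in range(16))

def key_proper() -> bytes:
    # 128 bits from the operating system's CSPRNG: unpredictable.
    return secrets.token_bytes(16)

print(key_no_randomness().hex())   # identical every time
print(key_no_randomness().hex())
print(key_proper().hex())          # different (and unguessable) every time
```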
It has to do with some of the basic reasons for cryptography:
Make sure a message isn't altered in transit (Immutable)
Make sure a message isn't read in transit (Secure)
Make sure the message is from who it says it's from (Authentic)
Make sure the message isn't the same as one previously sent (No Replay)
etc
There are a few things you need to include, then, to make sure that the above is true. One of the important ones is a random value.
For instance, if I encrypt "Too many secrets" with a key, it might come out with "dWua3hTOeVzO2d9w"
There are a few problems with this. An attacker might be able to break the encryption more easily since I'm using a very limited set of characters. Further, if I send the same message again, it's going to come out exactly the same. Lastly, an attacker could record it and send the message again, and the recipient wouldn't know that I didn't send it, even if the attacker never broke the encryption.
If I add some random garbage to the string each time I encrypt it, then not only does it make it harder to crack, but the encrypted message is different each time.
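That "random garbage" is essentially what the nonce/IV does in modern ciphers. A rough sketch with AES-GCM via the Python cryptography package (the helper names and framing are mine): the same plaintext encrypts differently every time because a fresh random nonce is used, and the authentication tag also catches tampering.

```python
# Sketch: a fresh random nonce makes identical plaintexts encrypt
# differently, and the GCM tag lets the receiver detect tampering.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
plaintext = b"Too many secrets"

def seal(message: bytes) -> bytes:
    nonce = os.urandom(12)                      # fresh randomness per message
    return nonce + aead.encrypt(nonce, message, None)

def unseal(box: bytes) -> bytes:
    nonce, ct = box[:12], box[12:]
    return aead.decrypt(nonce, ct, None)        # raises if the box was altered

print(seal(plaintext).hex())
print(seal(plaintext).hex())                    # same plaintext, different ciphertext
print(unseal(seal(plaintext)))
```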
The other features of cryptography in the bullets above are fixed using means other than randomness (seed values, two way authentication, etc) but the randomness takes care of a few problems, and helps out on other problems.
A bad source of randomness limits the character set again, so it's easier to break, and if it's easy to guess, or otherwise limited, then the attacker has fewer paths to try when doing a brute force attack.
-Adam
A common pattern in cryptography is the following (sending text from alice to bob):
Take plaintext p
Generate random k
Encrypt p with k using symmetric encryption, producing crypttext c
Encrypt k with Bob's public key, using asymmetric encryption, producing x
Send c+x to bob
Bob reverses the process: he decrypts x using his private key to obtain k, then decrypts c with k
The reason for this pattern is that symmetric encryption is much faster than asymmetric encryption. Of course, it depends on a good random number generator to produce k, otherwise the bad guys can just guess it.
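A hedged sketch of that pattern with the Python cryptography package (RSA-OAEP wrapping a fresh AES-GCM key; note the wrap uses Bob's public key, and Bob unwraps with his private key):

```python
# Sketch of the hybrid pattern above: a random symmetric key k encrypts the
# message; Bob's PUBLIC key wraps k; Bob's private key unwraps it.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice's side
p = b"plaintext from alice"
k = AESGCM.generate_key(bit_length=128)          # random k (needs a good RNG!)
nonce = os.urandom(12)
c = nonce + AESGCM(k).encrypt(nonce, p, None)    # fast symmetric encryption of p
x = bob_public.encrypt(k, oaep)                  # slow asymmetric wrap of k only

# Bob's side: unwrap k with the private key, then decrypt the bulk message
k2 = bob_private.decrypt(x, oaep)
print(AESGCM(k2).decrypt(c[:12], c[12:], None))
```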
Here's a "card game" analogy: Suppose we play several rounds of a game with the same deck of cards. The shuffling of the deck between rounds is the primary source of randomness. If we didn't shuffle properly, you could beat the game by predicting cards.
When you use a poor source of randomness to generate an encryption key, you significantly reduce the entropy (or uncertainty) of the key value. This could compromise the encryption because it makes a brute-force search over the key space much easier.
Work out this problem from Project Euler, and it will really drive home what "lots of randomness" will do for you. When I saw this question, that was the first thing that popped into my mind.
Using the method he talks about there, you can easily see what "more randomness" would gain you.
A pretty good paper that outlines why not being careful with randomness can lead to insecurity:
http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html
This describes how, back in 1995, the Netscape browser's SSL implementation was vulnerable: the SSL keys could be guessed because of a problem seeding the PRNG.