For a long time I have been trying to detect a block of data encrypted with the Vigenère algorithm. I want to determine which encryption algorithm was used on the data I have; I do not want to recover the original version of the data.
In the first case, since I have the original version of the data, a little comparison tells me that it is encrypted, but, as I said, I am stuck at the point of determining which cryptographic function was used.
The function I have in hand (assuming that I have the data in an unencrypted state) should, on its own, be able to say that the block is encrypted with the Vigenère algorithm.
What kind of method should I follow here, and what should I do? I am waiting for your help and ideas. Thank you very much in advance.
(I looked at the Kasiski method, but it is used to find the key of the cipher and to decrypt it, not to detect which algorithm was used.)
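One statistical heuristic that points at a Vigenère-style cipher without recovering the key is the index of coincidence (IC): the IC of the whole ciphertext is flattened toward that of random letters (~0.038), yet splitting the text into columns by the right key length restores English-like statistics (~0.065) in each column. Below is a minimal Python sketch of that idea; the function names and thresholds are my own illustrative choices, not a standard tool:

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    """Probability that two randomly chosen letters of `text` are equal.
    Roughly 0.065 for English, roughly 0.038 for uniformly random letters."""
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return sum(v * (v - 1) for v in counts.values()) / (n * (n - 1))

def looks_like_vigenere(ciphertext: str, max_key_len: int = 16) -> bool:
    """Heuristic: the overall IC is flattened (polyalphabetic substitution),
    but splitting the text into columns by some key length restores an
    English-like IC in each column (each column is a simple Caesar shift)."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    overall = index_of_coincidence("".join(letters))
    if overall > 0.055:          # already English-like: probably not polyalphabetic
        return False
    for key_len in range(2, max_key_len + 1):
        columns = ["".join(letters[i::key_len]) for i in range(key_len)]
        avg = sum(index_of_coincidence(col) for col in columns) / key_len
        if avg > 0.06:
            return True
    return False
```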
Related
As part of a non-graded holiday school assignment, I was given the text "a2hxcGJ3ZXdsIHdqdWF3bGttbiBtYnVqcnRlcWNmIHV0ciBtbSB2c2p3IHh6IGxzdWd3aQ==" and told that it was encrypted with a symmetric encryption algorithm using a weak password (up to 8 characters) and then hashed. I am asked to identify the algorithm, state the process, and decrypt the text.
The text ends with the two ==, from which I suspected it was Base64-encoded; after running it through a validator I confirmed this, so the output is "khqpbwewl wjuawlkmn mbujrteqcf utr mm vsjw xz lsugwi".
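For reference, that Base64 check is a one-liner; this is just a sketch of what the validator does, using Python's standard library:

```python
import base64
import binascii

s = "a2hxcGJ3ZXdsIHdqdWF3bGttbiBtYnVqcnRlcWNmIHV0ciBtbSB2c2p3IHh6IGxzdWd3aQ=="
try:
    decoded = base64.b64decode(s, validate=True)   # raises binascii.Error if not Base64
    print(decoded)  # b'khqpbwewl wjuawlkmn mbujrteqcf utr mm vsjw xz lsugwi'
except binascii.Error:
    print("not valid Base64")
```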
The problem is that I really don't know where to go from here and I am not able to find resources regarding how I am going to do it.
Can you give me some advice, guidelines, methodology, tools etc in order to investigate the solution?
P.S. The text above does not contain sensitive information; it is for educational purposes only.
I want that when the original data or the password changes (I mean, any one of them changes, or both of them change), the encrypted data will always change. In other words, once the encrypted data is certain, then both the original data and the password will be certain, although they are not known to those who don't have the password.
Is there any good symmetric encryption algorithm that fits my specific need?
I assume that you use the password to derive the key for the cipher.
Changing key
Every modern encryption algorithm produces different ciphertexts when a different key is used. That is simply how encryption works: if an algorithm did not have this property, everything would be broken. All the usual suspects, like AES, Blowfish, and 3DES, have this property.
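As a quick illustration (using Python's `cryptography` package and AES-GCM, which is just one example of such an algorithm), encrypting the same plaintext under two different keys gives completely unrelated ciphertexts; the fixed nonce here only serves to isolate the effect of the key:

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

plaintext = b"the exact same message"
nonce = bytes(12)                      # fixed nonce, only to isolate the effect of the key
key1 = AESGCM.generate_key(bit_length=128)
key2 = AESGCM.generate_key(bit_length=128)

print(AESGCM(key1).encrypt(nonce, plaintext, None).hex())
print(AESGCM(key2).encrypt(nonce, plaintext, None).hex())  # completely different ciphertext
```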
Changing plaintext
The other property is a little harder to do. This runs under the umbrella of semantic security.
Take, for example, any modern symmetric cipher in ECB mode. If only a single block of plaintext changes, then only the corresponding block changes in the ciphertext. So if you encrypt many similar plaintexts, an attacker who observes the ciphertexts can infer relationships between them. ECB mode is really bad.
OK, now take a cipher in CBC mode. If you use the same IV over and over again, an attacker may infer similar relationships as in ECB mode: if only the 10th block of plaintext changes, the first 9 blocks will be identical in both ciphertexts. If instead you use a new random IV for every encryption, there is nothing an attacker can deduce besides the length without breaking the underlying cipher.
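A small sketch of that difference, assuming Python's `cryptography` package (AES-128; the plaintexts are whole blocks so no padding is needed): in ECB the repeated and unchanged blocks are visible in the ciphertext, while in CBC with a fresh random IV the two ciphertexts share nothing recognisable.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def ecb_encrypt(pt: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(pt) + enc.finalize()

def cbc_encrypt(pt: bytes) -> bytes:
    iv = os.urandom(16)                       # fresh random IV for every message
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(pt) + enc.finalize()

block = b"A" * 16
pt1 = block * 3                 # three identical 16-byte blocks
pt2 = block * 2 + b"B" * 16     # only the last block differs

# ECB: the repeated blocks are visible, and pt1/pt2 agree on the first two blocks.
print(ecb_encrypt(pt1).hex())
print(ecb_encrypt(pt2).hex())

# CBC with a random IV: the two ciphertexts share nothing recognisable.
print(cbc_encrypt(pt1).hex())
print(cbc_encrypt(pt2).hex())
```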
In other words, once the encrypted data is certain, then both the original data and the password will be certain
The previous paragraph may not be completely what you wanted, because now, if you encrypt the same plaintext with the same key twice, you get different results due to the random IV. Since you derive the key from a password, you may also derive the IV from the same password (note that making encryption deterministic like this is a weaker security property). If you use, for example, PBKDF2, you can set the number of output bits to the size of key + IV. You will need to use a static salt value.
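A sketch of that derivation with Python's `cryptography` package; the password, salt, and iteration count are placeholders. PBKDF2 is asked for 48 bytes, which are then split into a 256-bit key and a 128-bit IV:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

password = b"correct horse battery staple"   # placeholder password
salt = b"application-static-salt"            # static salt, as described above

kdf = PBKDF2HMAC(algorithm=hashes.SHA256(),
                 length=32 + 16,             # 256-bit key + 128-bit IV
                 salt=salt,
                 iterations=600_000)
material = kdf.derive(password)
key, iv = material[:32], material[32:]
```

Because both the key and the IV now depend only on the password and a fixed salt, encrypting the same plaintext with the same password twice yields the same ciphertext again, which restores the deterministic behaviour described above at the cost of revealing when a message repeats.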
If you don't need that last property, then I suggest you use an authenticated mode like GCM or EAX. When you transmit ciphertext or give the attacker an encryption oracle then there are possible attack vectors when no integrity checks are used. An authenticated mode solves this for you without the need to use an encrypt-then-MAC scheme.
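For example, with AES-GCM (again via Python's `cryptography` package, purely as an illustration), any modification of the ciphertext is rejected at decryption time:

```python
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ct = AESGCM(key).encrypt(nonce, b"attack at dawn", None)

tampered = ct[:-1] + bytes([ct[-1] ^ 0x01])   # flip one bit of the ciphertext
try:
    AESGCM(key).decrypt(nonce, tampered, None)
except InvalidTag:
    print("tampering detected, decryption refused")
```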
If all you care about is detecting when either the data or the password changes, create a file with the data and then append the password. Then use a cryptographic hash like SHA-2 on the file. If either the data or password changes, the hash will change.
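A minimal sketch of that with Python's standard `hashlib`; the separator byte is my own addition to avoid ambiguity at the data/password boundary (an HMAC over the data, keyed with the password, would serve the same purpose):

```python
import hashlib

def fingerprint(data: bytes, password: bytes) -> str:
    # Any change to either the data or the password changes the digest.
    # The separator avoids ambiguity at the data/password boundary.
    return hashlib.sha256(data + b"\x00" + password).hexdigest()

print(fingerprint(b"my data", b"hunter2"))
print(fingerprint(b"my data", b"hunter3"))   # different password -> different digest
```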
Encryption algorithms are generally used to keep data private or to verify identities. Hashes are for detecting when two data objects are different.
The usual reason for encrypting data is that it will be placed in an insecure environment. If the user's encrypted data is in an insecure environment, an opponent can copy it and use software the opponent obtained or wrote, instead of your software, to try to decrypt it. This is similar to some of the controls put on PDFs in Adobe software, such as not being able to cut and paste from the document. Other brands of software may not enforce the no-cut-and-paste restriction.
If the opponent discerns the encryption algorithm, but uses the wrong password, the chances of getting the correct plain text are very small; for the popular modern algorithms, the chance is small enough to neglect compared to other everyday risks we must endure, such as life on earth being destroyed by a comet.
If I have an encrypted file, is there any methodology to determine which algorithm was used to encrypt it, so that I can decrypt it? Is there any unique pattern that algorithms leave while encrypting? Please be kind enough to help me out. Thank you.
The output from any good encryption algorithm is indistinguishable from truly random data. Hence there is no way to determine the algorithm itself.
A digital signature, if I understood it right, means sending the message in the clear along with a hash of the message, which is encrypted using a private key.
The recipient of the message calculates the hash, decrypts the received hash using the public key, then compares the two hashes for a match.
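As a concrete illustration of that flow, here is a sketch using RSA-PSS from Python's `cryptography` package; no specific scheme is named above, so the algorithm and padding here are just one common choice:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

message = b"Hello world!"
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# "Encrypt the hash with the private key" corresponds to signing:
signature = private_key.sign(message, pss, hashes.SHA256())

# The recipient re-hashes the message and checks the signature with the public key;
# verify() raises InvalidSignature if the message or signature was altered.
public_key.verify(signature, message, pss, hashes.SHA256())
```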
How safe is this? I mean, you can obtain the hash of the message easily and you also have the encrypted hash. How easy is it to find the private key used to create the Encrypted_hash?
Example:
Message         Hash    Encrypted_hash
---------------------------------------
Hello world!    1234    abcd
Hi there        5678    xyzt
Bla bla         0987    gsdj
...
Given the Hash and the Encrypted_hash values, and enough of these messages, how easy/hard is it to find out the private key?
Because of the algorithms used to generate the keys (RSA is the typical one), the answer is essentially "impossible in any reasonable amount of time", assuming the key is of sufficient bit length. As long as the private key is not stolen or given away, you will not be able to recover it from just the public key and messages signed with it.
As noted in Henk Holterman's answer, the RSA algorithm is built on the fact that the computations needed to recover the private key, prime factorization being the central one, are hard problems that cannot be solved in any reasonable amount of time (as far as we currently know). In other words, no polynomial-time algorithm is known for the underlying problem (prime factorization), so cracking the private key is infeasible, while the legitimate operations, such as verifying a signature with the public key, can be done in polynomial time.
Ciphers developed before electronic computers were often vulnerable to "known plain-text" attack, which is essentially what is described here: if an attacker had the cipher-text and the corresponding plain-text, he could discover the key. World War II-era codes were sometimes broken by guessing at plain-text words that had been encrypted, like the locations of battles, ranks, salutations, or weather conditions.
However, the RSA-based schemes used most often for digital signatures are resistant even to a "chosen plain-text attack" when proper padding is used (such as PSS for signatures or OAEP for encryption). Chosen plain-text means that the attacker can choose a message and trick the victim into signing or encrypting it; it is usually even more dangerous than a known plain-text attack.
Anyway, a digital signature is safe by any standard. Any compromise would be due to an implementation flaw, not a weakness in the algorithm.
A digital signature says nothing about how the actual message is transferred. Could be clear text or encrypted.
And current asymmetric algorithms (public+private key) are very secure, how secure depends on the key-size.
An attacker does have enough information to attempt to crack it. But it is part of the 'proof' of asymmetric encryption that doing so takes an impractical amount of CPU time: the method is computationally safe.
What you're talking about is known as a "known plaintext" attack. With any reasonably secure modern encryption algorithm known plaintext is of essentially no help in an attack. When you're designing an encryption algorithm, you assume that an attacker will have access to an arbitrary amount of known plaintext; if that assists the attacker, the algorithm is considered completely broken by current standards.
In fact, you normally take for granted that the attacker will not only have access to an arbitrary amount of known plaintext, but even an arbitrary amount of chosen plaintext (i.e., they can choose some text, somehow get you to encrypt it, and examine the result). Again, any modern algorithm needs to be immune to this to be considered secure.
Given the Hash and the Encrypted_hash values, and enough of these messages, how easy/hard is it to find out the private key?
This is the scenario of a known-plaintext attack: you are given many plaintext messages (the hashes) and the corresponding ciphertexts (the encrypted hashes), and you want to find out the encryption key.
Modern cryptographic algorithms are designed to withstand this kind of attack, like the RSA algorithm, which is one of the algorithms currently in use for digital signatures.
In other words, it is still extremely difficult to find out the private key. You'd either need an impossible amount of computing power, or you'd need to find a really fast algorithm for factorizing integers, but that would guarantee you lasting fame in the history of mathematics, and hence is even more difficult.
For a more detailed and thorough understanding of cryptography, have a look at the literature, like the Wikipedia pages or Bruce Schneier's Applied Cryptography.
For a perfectly designed hash it is impossible (or rather - there is no easier way than trying every possible input key)
I've seen it mentioned in many places that randomness is important for generating keys for symmetric and asymmetric cryptography and when using the keys to encrypt messages.
Can someone provide an explanation of how security could be compromised if there isn't enough randomness?
Randomness means unguessable input. If the input is guessable, then the output can be easily calculated. That is bad.
For example, Debian had a long-standing bug in its SSL implementation that failed to gather enough randomness when creating a key. This resulted in the software generating only one of 32k possible keys. It is thus easy to decrypt anything encrypted with such a key by simply trying all 32k possibilities, which is very fast given today's processor speeds.
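To see why such a small key space is fatal, here is a toy sketch in Python (the "key derivation" below is made up purely for illustration): when only about 32k seeds are possible, an attacker recovers the key by trying them all in a fraction of a second.

```python
import hashlib

def derive_key(seed: int) -> bytes:
    # Hypothetical "key generation" that depends on a tiny seed,
    # mimicking a PRNG seeded with too little entropy.
    return hashlib.sha256(b"seed-%d" % seed).digest()[:16]

victim_key = derive_key(12345)        # unknown to the attacker

# The attacker simply tries every one of the ~32k possible seeds.
recovered = next(s for s in range(32_768) if derive_key(s) == victim_key)
print(recovered)                      # 12345
```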
The important feature of most cryptographic operations is that they are easy to perform if you have the right information (e.g. a key) and infeasible to perform if you don't have that information.
For example, symmetric cryptography: if you have the key, encrypting and decrypting is easy. If you don't have the key (and don't know anything about its construction) then you must embark on something expensive like an exhaustive search of the key space, or a more-efficient cryptanalysis of the cipher which will nonetheless require some extremely large number of samples.
On the other hand, if you have any information on likely values of the key, your exhaustive search of the keyspace is much easier (or the number of samples you need for your cryptanalysis is much lower). For example, it is (currently) infeasible to perform 2^128 trial decryptions to discover what a 128-bit key actually is. If you know the key material came out of a time value that you know within a billion ticks, then your search just became 340282366920938463463374607431 times easier.
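For the record, that factor is just the ratio of the two search spaces:

```python
# 2^128 possible keys versus ~10^9 possible seed values
print(2**128 // 10**9)   # 340282366920938463463374607431
```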
To decrypt a message, you need to know the right key.
The more possible keys there are to try, the harder it is to decrypt the message.
Taking an extreme example, let's say there's no randomness at all. When I generate a key to use in encrypting my messages, I'll always end up with the exact same key. No matter where or when I run the keygen program, it'll always give me the same key.
That means anyone who has access to the program I used to generate the key can trivially decrypt my messages. After all, they just have to ask it to generate a key too, and they will get one identical to the one I used.
So we need some randomness to make it unpredictable which key you end up using. As David Schmitt mentions, Debian had a bug which made it generate only a small number of unique keys, which means that to decrypt a message encrypted by the default OpenSSL implementation on Debian, I just have to try this smaller number of possible keys. I can ignore the vast number of other valid keys, because Debian's SSL implementation will never generate those.
On the other hand, if there was enough randomness in the key generation, it's impossible to guess anything about the key. You have to try every possible bit pattern. (and for a 128-bit key, that's a lot of combinations.)
It has to do with some of the basic reasons for cryptography:
Make sure a message isn't altered in transit (Immutable)
Make sure a message isn't read in transit (Secure)
Make sure the message is from who it says it's from (Authentic)
Make sure the message isn't the same as one previously sent (No Replay)
etc
There are a few things you need to include, then, to make sure that the above is true. One of the important things is a random value.
For instance, if I encrypt "Too many secrets" with a key, it might come out with "dWua3hTOeVzO2d9w"
There are a few problems with this. An attacker might be able to break the encryption more easily, since I'm using a very limited set of characters. Further, if I send the same message again, it will come out exactly the same. Lastly, an attacker could record it and send the message again, and the recipient wouldn't know that I didn't send it, even if the attacker never broke the encryption.
If I add some random garbage to the string each time I encrypt it, then not only does it make it harder to crack, but the encrypted message is different each time.
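A sketch of that effect, using AES-GCM from Python's `cryptography` package as one concrete example: the same plaintext encrypted three times under the same key produces three different ciphertexts, because a fresh random nonce is used each time (the nonce is sent along with the ciphertext so the recipient can still decrypt):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
plaintext = b"Too many secrets"

for _ in range(3):
    nonce = os.urandom(12)            # the "random garbage", fresh every time
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    print((nonce + ct).hex())         # different output each time, same plaintext and key
```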
The other features of cryptography in the bullets above are fixed using means other than randomness (seed values, two way authentication, etc) but the randomness takes care of a few problems, and helps out on other problems.
A bad source of randomness limits the character set again, so it's easier to break, and if it's easy to guess, or otherwise limited, then the attacker has fewer paths to try when doing a brute force attack.
-Adam
A common pattern in cryptography is the following (sending text from Alice to Bob):
Take plaintext p
Generate random k
Encrypt p with k using symmetric encryption, producing ciphertext c
Encrypt k with Bob's public key, using asymmetric encryption, producing x
Send c+x to Bob
Bob reverses the process, decrypting x using his private key to obtain k
The reason for this pattern is that symmetric encryption is much faster than asymmetric encryption. Of course, it depends on a good random number generator to produce k, otherwise the bad guys can just guess it.
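A sketch of that pattern in Python using the `cryptography` package (RSA-OAEP for the key wrap and AES-GCM for the bulk data are my choices for illustration; the important point is that k is random and is encrypted to Bob's public key):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Bob's key pair; Alice only needs the public half.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

# Alice: generate a random symmetric key k and encrypt the plaintext p with it...
p = b"the actual message"
k = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
c = AESGCM(k).encrypt(nonce, p, None)

# ...then encrypt k itself with Bob's public key, and send c + x (plus the nonce).
x = bob_public.encrypt(k, oaep)

# Bob: recover k with his private key, then decrypt c with k.
k_recovered = bob_private.decrypt(x, oaep)
assert AESGCM(k_recovered).decrypt(nonce, c, None) == p
```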
Here's a "card game" analogy: Suppose we play several rounds of a game with the same deck of cards. The shuffling of the deck between rounds is the primary source of randomness. If we didn't shuffle properly, you could beat the game by predicting cards.
When you use a poor source of randomness to generate an encryption key, you significantly reduce the entropy (or uncertainty) of the key value. This could compromise the encryption because it makes a brute-force search over the key space much easier.
Work out this problem from Project Euler, and it will really drive home what "lots of randomness" will do for you. When I saw this question, that was the first thing that popped into my mind.
Using the method he talks about there, you can easily see what "more randomness" would gain you.
A pretty good paper that outlines why not being careful with randomness can lead to insecurity:
http://www.cs.berkeley.edu/~daw/papers/ddj-netscape.html
This describes how, back in 1995, the Netscape browser's SSL implementation was vulnerable to having its SSL keys guessed because of a problem seeding the PRNG.