I'm using BCrypt in Spring and it's giving me different hashes than some online tools, such as https://bcrypt-generator.com/
Any ideas why?
I've tried setting the strength to 12 in Spring and setting rounds to 12 on bcrypt-generator.com, but the hashes still don't match.
DaoAuthenticationProvider provider = new DaoAuthenticationProvider();
provider.setPasswordEncoder(new BCryptPasswordEncoder(12));
provider.setUserDetailsService(bettingBotUserDetailsService);
For the raw password "admin" I get these results:
bcrypt-generator.com with 12 rounds:
$2y$12$15h6Idq/TwfcuJu6H1VXie/ao7P4AKlLgIrC5yxbwlEUdJjx9Sl5S
Spring (captured from debug mode):
$2a$10$ED5wQChpxzagbvhlqEqD2.iIdIKv9ddvJcX0WKrQzSOckgc3RHFLW
BCrypt generates a different salt for each hash, even for the same input.
BCrypt returns a different hash each time because it incorporates a different random value, known as a "salt", into the hash. The salt prevents people from attacking your hashed passwords with a "rainbow table", a pre-generated table mapping password hashes back to their passwords: instead of there being one hash for a given password, there are 2^128 of them (bcrypt uses a 128-bit salt).
We can check the hash against a plain-text password as follows:
boolean isMatch = passwordEncoder().matches(currentPassword, dbPassword);
@Bean
public PasswordEncoder passwordEncoder() {
    return new BCryptPasswordEncoder();
}
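To see both points in one place, here is a minimal sketch (assuming the same BCryptPasswordEncoder as above): encoding the same raw password twice yields two different hashes because of the random salt, yet matches() accepts both, since the salt is stored inside the hash string itself.
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;

public class BCryptDemo {
    public static void main(String[] args) {
        PasswordEncoder encoder = new BCryptPasswordEncoder(12);

        // Two encodings of the same raw password embed different random salts...
        String first = encoder.encode("admin");
        String second = encoder.encode("admin");
        System.out.println(first);   // differs on every run
        System.out.println(second);  // a different hash again

        // ...but both verify, because the salt is part of the stored hash.
        System.out.println(encoder.matches("admin", first));   // true
        System.out.println(encoder.matches("admin", second));  // true
    }
}
This is also why comparing your Spring hash to the bcrypt-generator.com output character by character will never match: each tool embeds its own random salt.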
I'm playing around with Elliptic Curves using the Ruby 2.5.x OpenSSL library. I can easily generate a private and public key pair using
curve = OpenSSL::PKey::EC.new('secp256k1')
curve.generate_key
But given a private key I want to regenerate the public key.
I know OpenSSL can do it because the command line allows you to do it, and also the Ruby Bitcoin project does it. But the Ruby Bitcoin project has its own interface to OpenSSL using FFI rather than the one provided by Ruby.
Does the Ruby 2.5.x openssl library not expose enough of the OpenSSL interface to be able to generate an elliptic curve public key from a private key, or can it do this but it's just not documented?
In case someone is interested in getting the public key in PEM format too :)
example_key = OpenSSL::PKey::EC.new('secp256k1').generate_key
puts example_key.to_pem
pkey = OpenSSL::PKey::EC.new(example_key.public_key.group)
pkey.public_key = example_key.public_key
puts pkey.to_pem
The Ruby OpenSSL bindings don't allow you to get the public key directly from the private key as far as I can tell, but they do expose enough to do the calculation yourself, which is straightforward.
Given a private key as an OpenSSL::BN object, which for this example we can generate like this:
example_key = OpenSSL::PKey::EC.new('secp256k1').generate_key
private_key = example_key.private_key
We can calculate the public key by multiplying the group base point (i.e. the generator) by the private key:
group = OpenSSL::PKey::EC::Group.new('secp256k1')
public_key = group.generator.mul(private_key)
The public key is an OpenSSL::PKey::EC::Point. You can compare it with the original to see that it is the same:
puts example_key.public_key == public_key # => true
I'm learning about the OpenSSL ruby module.
Shown below is a pry session where I generate a key using the RSA asymmetric public key algorithm. I also call the #private? and #public? instance methods:
[1] pry(main)> require 'openssl'
=> true
[2] pry(main)> alices_key = OpenSSL::PKey::RSA.new 2048
=> #<OpenSSL::PKey::RSA:0x007fc0751cb028>
[3] pry(main)> alices_key.public?
=> true
[4] pry(main)> alices_key.private?
=> true
Why is the #<OpenSSL::PKey::RSA:0x007fc0751cb028> object both public and private?
Usually the data structure of the private key also contains the public exponent; both are produced by the same key-pair generation in the first place.
It is easy to store them together, as the public key is just the modulus plus the public exponent (usually the value 0x10001, i.e. 65537, the Fermat prime F4). The modulus is of course also part of the private key, so it doesn't need to be duplicated.
The public key may also be used to protect against some side channel attacks although that's not such a big issue in software.
Whether the private key can also be used as a public key, and whether the public exponent is stored with the private key, depends on the software. But it is quite common: for example, a private key object in PKCS#11 (used for software, smart cards and HSMs) also contains the public exponent. On the other hand, Java has separate PrivateKey and PublicKey classes, where the PrivateKey doesn't contain the public exponent (or at least doesn't expose it through the public API).
In the end we cannot answer the question without consulting the original OpenSSL authors (Mr. Young and Mr. Hudson, I suppose), but there are good reasons for storing the public exponent as well, and since the public key is public it doesn't hurt either.
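To make the "stored together" point concrete, here is a hedged Java sketch (Java, since the answer already mentions it): with the default provider the generated private key is in CRT form and carries the public exponent, even though the base RSAPrivateKey interface doesn't expose it.
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPrivateCrtKey;
import java.security.interfaces.RSAPublicKey;

public class RsaExponentDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // With the default provider the private key is in CRT form,
        // which includes the public exponent...
        RSAPrivateCrtKey priv = (RSAPrivateCrtKey) pair.getPrivate();
        System.out.println(priv.getPublicExponent()); // typically 65537 (0x10001)

        // ...and it is the same value the public key exposes.
        RSAPublicKey pub = (RSAPublicKey) pair.getPublic();
        System.out.println(pub.getPublicExponent());
    }
}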
In a Hadoop Reducer, I would like to create and emit new keys under specific conditions, and I'd like to ensure that these keys are unique.
The pseudo-code for what I want goes like:
@Override
protected void reduce(WritableComparable key, Iterable<Writable> values, Context context)
        throws IOException, InterruptedException {
    // do stuff:
    // ...
    // write original key:
    context.write(key, data);
    // write extra key:
    if (someConditionIsMet) {
        WritableComparable extraKey = createNewKey();
        context.write(extraKey, moreData);
    }
}
So I now have two questions:
Is it possible at all to emit more than one different key in reduce()? I know that keys won't be resorted but that is ok for me.
The extra key has to be unique across all reducers - both for application reasons and because I think it would otherwise violate the contract of the reduce stage.
What is a good way to generate a key that is unique across reducers (and possibly across jobs?)
Maybe get reducer/job IDs and incorporate that into key generation?
Yes, you can output any number of keys.
You can incorporate the task attempt information into your key to make it unique within the job (across the reducers, and even handling speculative execution if you want). You can acquire this information from the reducer's Context.getTaskAttemptID() method and then pull out the reducer ID number with TaskAttemptID.getTaskID().getId().
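A minimal sketch of that idea, sticking close to the pseudo-code above (the Text types, the key format and the extraKeyCounter field are my own illustrative assumptions, not from the original code):
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UniqueKeyReducer extends Reducer<Text, Text, Text, Text> {

    // Counts extra keys emitted by this reducer instance; combined with the
    // task ID it makes each extra key unique across the whole job.
    private long extraKeyCounter = 0;

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        Text data = new Text();      // placeholder for whatever "do stuff" produces
        Text moreData = new Text();  // placeholder for the extra payload

        // write original key:
        context.write(key, data);

        // write extra key:
        boolean someConditionIsMet = true; // placeholder condition
        if (someConditionIsMet) {
            // The task ID is unique per reducer within the job.
            int reducerId = context.getTaskAttemptID().getTaskID().getId();
            Text extraKey = new Text("extra_" + reducerId + "_" + extraKeyCounter++);
            context.write(extraKey, moreData);
        }
    }
}
If you also need uniqueness across jobs, you could mix the job ID (context.getJobID()) into the extra key as well.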
I'd like to implement the CRC32 and MD5 algorithms on my own, but I'm still trying to wrap my head around the different sources I've found on the subject. Could someone point me to a resource that explains the algorithms in a simple format, or post a bullet list of the different steps so I can attempt to fill them in? TIA.
Here are the respective Wikipedia pages on each. I understand part of what's being done, but bitwise operations are something I have difficulty with. That, and mathematics isn't my forte.
http://en.wikipedia.org/wiki/Cyclic_redundancy_check
http://en.wikipedia.org/wiki/MD5
The RFC 1321 spec on MD5 also contains a detailed explanation of the algorithm. The Wikipedia article about CRC is clear enough.
After all, your main problem is apparently unfamiliarity with the binary system and the bitwise operators. Here are several excellent guides about the binary system and the operators involved:
Guide: The Binary System
Wikipedia: Bitwise operation
Javaranch: Bit Shifting
This should get you started.
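To make the bitwise side concrete, here is a minimal bit-at-a-time CRC-32 sketch (using the reflected polynomial 0xEDB88320 that zip and PNG use); it is a learning aid rather than an optimized implementation, and the main method just checks it against java.util.zip.CRC32:
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class Crc32Demo {

    /** Bit-at-a-time CRC-32 (reflected polynomial 0xEDB88320). */
    public static int crc32(byte[] data) {
        int crc = 0xFFFFFFFF;                        // initial value: all ones
        for (byte b : data) {
            crc ^= (b & 0xFF);                       // mix the next byte into the low 8 bits
            for (int i = 0; i < 8; i++) {
                if ((crc & 1) != 0) {
                    crc = (crc >>> 1) ^ 0xEDB88320;  // shifting out a 1 bit: also XOR the polynomial
                } else {
                    crc >>>= 1;                      // shifting out a 0 bit
                }
            }
        }
        return ~crc;                                 // final XOR with all ones
    }

    public static void main(String[] args) {
        byte[] input = "hello".getBytes(StandardCharsets.UTF_8);
        CRC32 reference = new CRC32();
        reference.update(input);
        // Both lines should print the same value.
        System.out.printf("%08x%n", crc32(input));
        System.out.printf("%08x%n", (int) reference.getValue());
    }
}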
Edit: if the actual reason you want to homegrow an MD5 function is that you can't seem to find an existing one in Java, then you may find this snippet useful:
import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/**
 * Generate MD5 hash for the given String.
 * @param string The String to generate the MD5 hash for.
 * @return The 32-char hexadecimal MD5 hash of the given String.
 */
public static String hashMD5(String string) {
    byte[] hash;
    try {
        hash = MessageDigest.getInstance("MD5").digest(string.getBytes("UTF-8"));
    } catch (NoSuchAlgorithmException e) {
        // Unexpected exception. "MD5" is just hardcoded and supported.
        throw new RuntimeException("MD5 should be supported?", e);
    } catch (UnsupportedEncodingException e) {
        // Unexpected exception. "UTF-8" is just hardcoded and supported.
        throw new RuntimeException("UTF-8 should be supported?", e);
    }
    StringBuilder hex = new StringBuilder(hash.length * 2);
    for (byte b : hash) {
        if ((b & 0xff) < 0x10) hex.append("0"); // pad single hex digits with a leading zero
        hex.append(Integer.toHexString(b & 0xff));
    }
    return hex.toString();
}
According to DRY, you should just do something like this:
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public final class Security {
    public static synchronized String MD5(String msg) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            md.update(msg.getBytes());
            byte[] digest = md.digest();
            // Pad to 32 hex chars; BigInteger.toString(16) alone would drop leading zeros.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException ex) {
            return "" + msg.hashCode();
        }
    }
}
But if you really want to figure out what is going on with MD5 / SHA-1 etc., you should probably take a security course, which I tried but failed :( Good luck to you!
For those who are interested, the first link is the RFC document on MD5. The second is a link to download an implementation for Java:
http://www.ietf.org/rfc/rfc1321.txt
http://www.freevbcode.com/ShowCode.Asp?ID=741
I have written a Service Provider implementation for OAuth, and one of the devs found a bug in the way the implementation was ordering query parameters. I totally missed the lexicographical ordering requirement in the OAuth spec and was just doing a basic string sort on the name/value parameters.
Given the following URI request from the consumer:
http://api.com/v1/People/Search?searchfor=fl&communication=test@test.com&Include=addresses
The resulting signature base should order the parameters as:
Include=addresses, communication=test@test.com, searchfor=fl
Given the following URI request from the consumer:
http://api.com/v1/People/Search?searchfor=fl&communication=test@test.com&include=addresses
The resulting signature base should order the parameters as:
communication=test@test.com, include=addresses, searchfor=fl
Note the case difference in the querystring parameter "include". From what I understand, lexicographical byte value ordering orders parameters by their ASCII byte values, ascending.
Since I = 73 and i = 105, the capital I should be ordered before the lowercase i.
I have the following so far:
IEnumerable<QueryParameter> queryParameters = parameters
    .OrderBy(parm => parm.Key)
    .ThenBy(parm => parm.Value)
    .Select(
        parm => new QueryParameter(parm.Key, UrlEncode(parm.Value)));
But that will not cover an ASCII character-by-character sort (IncLude=test&Include=test will not sort properly).
Any thoughts on how to make an efficient algorithm that solves this problem? Or how to make the sort case-sensitive via IComparer?
I solved the problem by creating a custom comparer, however, it seems clunky and I feel there must be a better way:
public class QueryParameterComparer : IComparer<QueryParameter> {
    public int Compare(QueryParameter x, QueryParameter y) {
        if (x.Key == y.Key) {
            return string.Compare(x.Value, y.Value, StringComparison.Ordinal);
        }
        else {
            return string.Compare(x.Key, y.Key, StringComparison.Ordinal);
        }
    }
}
Using the Ordinal string comparison is what did it for me. It does byte comparisons which is exactly what I needed.
Nick, as you've discovered, StringComparison.Ordinal is the way to go. I just wanted to call out a caution: make sure you sort AFTER you URI-encode each of the keys and values.
BTW, there are already a few OAuth libraries out there, DotNetOpenAuth being my favorite (disclaimer: for biased reasons). Are you sure you want to build/maintain this one?
I had a problem with the same expression (though it is no longer misspelled in the OAuth documentation, if that's where it was misspelled).
Wikipedia says that "lexicographical ordering" has meaning when an ordering of the letters is given: it specifies the ordering of sequences of elements (letters) once the ordering of the individual elements is already specified.
It sounds reasonable to me. :)