I'm learning about the OpenSSL ruby module.
Shown below is a pry session where I generate a key using the RSA asymmetric public key algorithm. I also call the #private? and #public? instance methods:
[1] pry(main)> require 'openssl'
=> true
[2] pry(main)> alices_key = OpenSSL::PKey::RSA.new 2048
=> #<OpenSSL::PKey::RSA:0x007fc0751cb028>
[3] pry(main)> alices_key.public?
=> true
[4] pry(main)> alices_key.private?
=> true
Why is the #<OpenSSL::PKey::RSA:0x007fc0751cb028> object both public and private?
Usually the data structure of the private key also contains the public exponent; the two are produced by the same key pair generation in the first place.
It is easy to store them together, as the public key is just the modulus + the public exponent (usually the value 0x10001, the Fermat prime F4). The modulus is of course also part of the private key, so that doesn't need to be duplicated.
The public key may also be used to protect against some side channel attacks, although that's not such a big issue in software.
It depends on the software whether the private key can also be used as a public key and whether the public exponent is stored with the private key. But it is quite common: a private key object in PKCS#11 (used for software, smart cards and HSMs), for example, also contains the public exponent. Java, on the other hand, has separate PrivateKey and PublicKey classes, and the base RSAPrivateKey interface doesn't expose the public exponent (although the RSAPrivateCrtKey subinterface does).
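As a quick Java illustration of that last point (a minimal sketch; the cast assumes the provider hands back a CRT-form key, which the standard JDK provider does):

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPrivateCrtKey;
import java.security.interfaces.RSAPublicKey;

public class RsaKeyContents {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // The CRT form of the private key carries the public exponent e
        // alongside the private parts (d, p, q, ...).
        RSAPrivateCrtKey priv = (RSAPrivateCrtKey) pair.getPrivate();
        RSAPublicKey pub = (RSAPublicKey) pair.getPublic();

        System.out.println(priv.getPublicExponent()); // usually 65537 (0x10001)
        // The modulus is shared between the public and private halves.
        System.out.println(priv.getModulus().equals(pub.getModulus())); // true
    }
}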
In the end we cannot answer the question without consulting the original OpenSSL authors (Mr. Young and Mr. Hudson, I suppose), but there are good reasons for storing the public exponent as well, and as the public key is public, it doesn't hurt either.
I'm trying to sign a Tron transaction offline in Ruby.
TronGrid has an endpoint to sign transactions, but it requires sending them the account's private key, which feels like a potential risk, so I'd like to sign the transaction locally to avoid having the private key leave the server.
Specifically, I'm trying to convert this JavaScript method to Ruby: https://github.com/tronprotocol/tronweb/blob/master/src/utils/crypto.js#L218
I've tried using both OpenSSL and a gem to do this, without much success.
This is what I've got so far:
bn = OpenSSL::BN.new(hex_private_key, 16)
ec = OpenSSL::PKey::EC.new('secp256k1')
ec.private_key = bn
ec.dsa_sign_asn1(transaction_id).unpack1('H*')
and
bn = OpenSSL::BN.new(hex_private_key, 16)
private_key = EC::PrivateKey.new(bn.to_i)
signature = private_key.sign(transaction_id)
The latter gives me the r and s values that are then used in the JavaScript function (even though they don't match what I'd get in JS), and I'm not sure where I could get that recoveryParam.
And the former doesn't return the signature I was expecting.
I'm kinda lost on how to sign those transactions.
Did you find out how to do it?
In this example, the method takes the serialized transaction, extracts its raw_data, and signs the hash of that raw_data:
private static byte[] signTransaction2Byte(byte[] transaction, byte[] privateKey)
        throws InvalidProtocolBufferException {
    ECKey ecKey = ECKey.fromPrivate(privateKey);
    // Parse the serialized transaction and pull out its raw_data part.
    Transaction transaction1 = Transaction.parseFrom(transaction);
    byte[] rawdata = transaction1.getRawData().toByteArray();
    // Sign the hash of raw_data, not the transaction ID string.
    byte[] hash = Sha256Sm3Hash.hash(rawdata);
    byte[] sign = ecKey.sign(hash).toByteArray();
    return transaction1.toBuilder().addSignature(ByteString.copyFrom(sign)).build().toByteArray();
}
`Sha256Sm3Hash.hash` returns a `sha256` or `sm3` digest, depending on the private key.
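If the chain uses plain SHA-256, the digest step needs nothing beyond the standard library. A small sketch, where rawDataHex is a hypothetical variable holding the transaction's hex-encoded raw_data:

import java.security.MessageDigest;

public class RawDataDigest {
    // Minimal hex decoder so the sketch is self-contained.
    static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical hex-encoded raw_data taken from the transaction JSON;
        // the signature is computed over this digest, not over the txID string.
        String rawDataHex = "0a02b00f";
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(hexToBytes(rawDataHex));
        System.out.println(hash.length); // 32 bytes, ready for the ECDSA signer
    }
}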
I'm using BCrypt in Spring and it's giving me different hashes than some online tools like https://bcrypt-generator.com/
Any ideas why?
I've tried setting the strength to 12 in Spring and setting rounds to 12 on bcrypt-generator.com, and the hashes still don't match.
DaoAuthenticationProvider provider = new DaoAuthenticationProvider();
provider.setPasswordEncoder(new BCryptPasswordEncoder(12));
provider.setUserDetailsService(bettingBotUserDetailsService);
For the raw password "admin" I get these results:
bcrypt-generator.com with 12 rounds:
$2y$12$15h6Idq/TwfcuJu6H1VXie/ao7P4AKlLgIrC5yxbwlEUdJjx9Sl5S
Spring (captured from debug mode):
$2a$10$ED5wQChpxzagbvhlqEqD2.iIdIKv9ddvJcX0WKrQzSOckgc3RHFLW
BCrypt generates a different salt for the same input on every call (see the Bcrypt algorithm).
BCrypt returns a different hash each time because it incorporates a different random value into the hash. This is known as a "salt". It prevents people from attacking your hashed passwords with a "rainbow table", a pre-generated table mapping password hashes back to their passwords. The salt means that instead of there being one hash for a password, there are 2^128 of them (the salt is 128 bits). Note also that a bcrypt hash string embeds its version ($2a and $2y are compatible revisions of the format), cost, and salt, so two correct hashes of the same password will almost never be string-equal; verify with matches() instead, as shown below.
We can check a raw password against the stored hash as follows:
boolean isMatch = passwordEncoder().matches(currentPassword, dbPassword);

@Bean
public PasswordEncoder passwordEncoder() {
    return new BCryptPasswordEncoder();
}
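A minimal sketch with Spring Security's BCryptPasswordEncoder shows both effects: two encodes of the same password produce different hashes (different salts), yet both verify.

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class BcryptDemo {
    public static void main(String[] args) {
        BCryptPasswordEncoder encoder = new BCryptPasswordEncoder(12); // strength 12

        String first = encoder.encode("admin");
        String second = encoder.encode("admin");

        // Different random salts -> different hashes for the same password.
        System.out.println(first.equals(second)); // false

        // matches() re-reads the cost and salt embedded in the stored hash.
        System.out.println(encoder.matches("admin", first));  // true
        System.out.println(encoder.matches("admin", second)); // true
    }
}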
I'm playing around with Elliptic Curves using the Ruby 2.5.x OpenSSL library. I can easily generate a private and public key pair using
curve = OpenSSL::PKey::EC.new('secp256k1')
curve.generate_key
But given a private key I want to regenerate the public key.
I know OpenSSL can do it because the command line allows you to do it, and also the Ruby Bitcoin project does it. But the Ruby Bitcoin project has its own interface to OpenSSL using FFI rather than the one provided by Ruby.
Does the Ruby 2.5.x openssl library not expose enough of the OpenSSL interface to generate an elliptic curve public key from a private key, or can it do it but it's just not documented?
In case someone is interested in getting the public key in PEM format too :)
example_key = OpenSSL::PKey::EC.new('secp256k1').generate_key
puts example_key.to_pem
pkey = OpenSSL::PKey::EC.new(example_key.public_key.group)
pkey.public_key = example_key.public_key
puts pkey.to_pem
The Ruby OpenSSL bindings don’t allow you to directly get the public key from a PKey::EC object as far as I can tell, but they do expose enough to do the calculation yourself, which is straightforward.
Given a private key as an OpenSSL::BN object, which for this example we can generate like this:
example_key = OpenSSL::PKey::EC.new('secp256k1').generate_key
private_key = example_key.private_key
We can calculate the public key by multiplying the group base point (i.e. the generator) by the private key:
group = OpenSSL::PKey::EC::Group.new('secp256k1')
public_key = group.generator.mul(private_key)
The public key is an OpenSSL::PKey::EC::Point. You can compare it with the original to see that it is the same:
puts example_key.public_key == public_key # => true
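For comparison, the same priv * G calculation sketched in Java with the BouncyCastle library (an assumed dependency; the key below is just an example scalar):

import java.math.BigInteger;

import org.bouncycastle.jce.ECNamedCurveTable;
import org.bouncycastle.jce.spec.ECNamedCurveParameterSpec;
import org.bouncycastle.math.ec.ECPoint;

public class EcPubFromPriv {
    public static void main(String[] args) {
        // Hypothetical hex private key; replace with your own scalar.
        BigInteger priv = new BigInteger(
                "1e99423a4ed27608a15a2616a2b0e9e52ced330ac530edcc32c8ffc6a526aedd", 16);

        // Public key = priv * G, where G is the secp256k1 generator point.
        ECNamedCurveParameterSpec spec = ECNamedCurveTable.getParameterSpec("secp256k1");
        ECPoint pub = spec.getG().multiply(priv).normalize();

        System.out.println(pub.getAffineXCoord().toBigInteger().toString(16));
        System.out.println(pub.getAffineYCoord().toBigInteger().toString(16));
    }
}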
I have a performance problem in my Spring Boot application when it's communicating with Redis that I was hoping someone with expertise on the topic could shed some light on.
Explanation of what I'm trying to do
In short, in my application I have 2 nested maps and 3 maps of lists which I want to save to Redis and load back into the application when the data is needed. The data in the first nested map is fairly big, with several levels of non-primitive data types (and lists of these). At the moment I have structured the data in Redis using repositories and Redis hashes, with repositories A, B, and C, and two different ways of looking up the primary datatype (MyClass) in A by id. B and C hold data that is referenced from values in A (via the @Reference annotation).
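For context, the a.findOne() calls discussed below go through a Spring Data Redis repository along these lines (a sketch; the interface name is hypothetical, and in current Spring Data versions the derived method is findById rather than findOne):

import org.springframework.data.repository.CrudRepository;

// Hypothetical repository for MyClass ("A" above). Spring Data Redis
// implements it at runtime; a findOne(id) call triggers the HGETALL
// commands, including the @Reference lookups into B and C.
public interface MyClassRepository extends CrudRepository<MyClass, String> {
}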
Performance analysis
Using JProfiler, I have found that the bottleneck is somewhere between my call to a.findOne() and the end of reading the response from Redis (before any conversion from byte[] to MyClass has taken place). I have looked at the slowlog on my Redis server to check for any slow, blocking actions and found none. Each HGETALL command in Redis takes 400μs on average (for a complete hash in A, including finding the referenced hashes in B and C). What strikes me as weird is that timing the a.findOne() call shows 5-20ms for one single instance of MyClass, depending on how big the hashes in B and C are. A single instance has on average ~2500 hash fields in total when references to B and C are included. When this is done ~900 times for the first nested map, I have to wait 10s to get all my data, which is way too long. In comparison, the other nested map, which has no references to C (the biggest part of the data), is timed at ~10μs in Redis and <1ms in Java.
Does this analysis seem like normal behavior when the Redis instance runs locally on the same 2015 MacBook Pro as the Spring Boot application? I understand that the complete findOne() method will take longer than the actual HGETALL command in Redis, but I don't get why the difference is this big. If anyone could shed some light on the performance of what goes on under the hood in the Jedis connection code, I'd appreciate it.
Examples of my data structure in Java
@RedisHash("myClass")
public class MyClass {
    @Id
    private String id;
    private Date date;
    private Integer someValue;
    @Reference
    private Set<C> cs;
    private someClass someObject;
    private int somePrimitive;
    private anotherClass anotherObject;
    @Reference
    private B b;
Excerpt of class C (a few primitives removed for clarity):
@RedisHash("c")
public class C implements Comparable<BasketValue>, Serializable {
    @Id
    private String id;
    private EnumClass someEnum;
    private aClass anObject;
    private int aNumber;
    private int anotherNumber;
    private Date someDate;
    private List<CounterClass> counterObjects;
Excerpt of class B:
@RedisHash("b")
public class B implements Serializable {
    @Id
    private int code;
    private String productCodes;
    private List<ListClass> listObject;
    private yetAnotherClass yetAnotherObject;
    private Integer someInteger;
I was reading, in SO threads and also in an article, that there are many reasons for making a class final.
Two of which were:
1. To remove extensibility
2. To make the class immutable
Does making a class immutable also carry with it the characteristic of being final (its methods)? I don't see the difference between the two.
An immutable object does not allow its state to be changed. A final class does not allow itself to be inherited. For example, class Foo (see below) is immutable (the state, i.e. _name, is never changed) and class Bar is mutable (the rename method allows the state to be changed):
final class Foo
{
    private String _name;

    public Foo(String name)
    {
        _name = name;
    }

    public String getName()
    {
        return _name;
    }
}
final class Bar
{
    private String _name;

    public Bar(String name)
    {
        _name = name;
    }

    public String getName()
    {
        return _name;
    }

    public void rename(String newName)
    {
        _name = newName;
    }
}
It can sometimes be useful to recognize types as "verifiably deeply immutable", meaning that static analysis can demonstrate that (1) once an instance is constructed, none of its properties will ever change, and (2) every object instance to which it holds a reference is verifiably deeply immutable. Classes which are open to extension cannot be verifiably deeply immutable, because a static analyzer would have no way of knowing whether a mutable subclass might be created, and a reference to that mutable subclass stored within what's supposed to be a verifiably deeply immutable object.
On the other hand, it can sometimes be useful to have abstract (and thus extensible) classes which are specified to be deeply immutable. The abstract class would have no way of forcing derived classes to be immutable, but any mutable derived classes should be considered "broken". The situation would be somewhat analogous to the requirement that two object instances which report themselves as "equal" to each other should report the same hash code. It's possible to design classes which violate that requirement, but any errant hash-table behavior that results is the fault of the broken hash-code function, rather than the hash table.
For example, one might have an abstract ImmutableMatrix class with a method to read the element at a given (row,column) location. One possible implementation would be to back an NxM ImmutableMatrix with an array of N*M elements. On the other hand, it may also be useful to define some subclasses like ImmutableDiagonalMatrix, backed by an array of just N elements, where Value(R,C) would yield 0 for R!=C and Arr[R] for R==C. If a significant fraction of the matrices one is using are diagonal, one could save a lot of memory for each such instance. While leaving the class extensible would leave open the possibility that someone might extend it in a fashion which is open to mutation, it would also leave open the possibility that a programmer who knew that many of the matrices a program used would fit some particular form could design a class to optimally store that form.
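Here is a small Java sketch of that design; all names are illustrative rather than taken from any particular library:

abstract class ImmutableMatrix {
    public abstract int rows();
    public abstract int cols();
    public abstract double value(int row, int col);
}

// Dense backing: an NxM matrix stored as N*M elements.
final class ImmutableArrayMatrix extends ImmutableMatrix {
    private final int rows;
    private final int cols;
    private final double[] cells; // row-major; copied so no caller can mutate it

    ImmutableArrayMatrix(int rows, int cols, double[] cells) {
        this.rows = rows;
        this.cols = cols;
        this.cells = cells.clone();
    }

    public int rows() { return rows; }
    public int cols() { return cols; }
    public double value(int row, int col) { return cells[row * cols + col]; }
}

// Specialized backing: an NxN diagonal matrix stored as just N elements.
final class ImmutableDiagonalMatrix extends ImmutableMatrix {
    private final double[] diagonal;

    ImmutableDiagonalMatrix(double[] diagonal) {
        this.diagonal = diagonal.clone();
    }

    public int rows() { return diagonal.length; }
    public int cols() { return diagonal.length; }
    public double value(int row, int col) { return row == col ? diagonal[row] : 0.0; }
}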