How do I sign a curve25519 key in golang?

I am trying to implement the X3DH algorithm from Signal in Go, but I got stuck on how to sign the Public Signed PreKey.
Per the specification it is supposed to be an X25519 key. Previous implementations on GitHub generate a [32]byte key from the curve25519 package, convert it to an ed25519 key, and then sign it.
However, the package they used for the conversion is deprecated (github.com/agl/ed25519). Therefore, I either need a way to convert the keys to ed25519 so I can sign them with the current ed25519 package (golang.org/x/crypto/ed25519), or I need to implement sign and verify functions for curve25519 keys.

Ed25519 keys can be converted to X25519 keys easily: the twisted Edwards curve used by Ed25519 and the Montgomery curve used by X25519 are birationally equivalent.
Points on the Edwards curve are usually referred to as (x, y), while points on the Montgomery curve are usually referred to as (u, v).
You don't need a library to do the conversion, it's really simple...
(u, v) = ((1+y)/(1-y), sqrt(-486664)*u/x)
(x, y) = (sqrt(-486664)*u/v, (u-1)/(u+1))
Here is a great blog post by Filippo Valsorda, the Go security lead at Google, discussing this topic.
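Not part of the original answer, but as a concrete sketch of the Ed25519 → X25519 direction: the standalone filippo.io/edwards25519 package (the exported version of the stdlib's internal curve arithmetic) can decode the Edwards point in an Ed25519 public key and re-encode it as its Montgomery u-coordinate. The function name below is mine:

package main

import (
    "crypto/ed25519"
    "crypto/rand"
    "fmt"

    "filippo.io/edwards25519"
)

// ed25519PublicKeyToCurve25519 decodes the Edwards point inside an Ed25519
// public key and re-encodes it as its Montgomery u-coordinate, i.e. an
// X25519 public key.
func ed25519PublicKeyToCurve25519(pk ed25519.PublicKey) ([]byte, error) {
    p, err := new(edwards25519.Point).SetBytes(pk)
    if err != nil {
        return nil, err
    }
    return p.BytesMontgomery(), nil
}

func main() {
    pub, _, _ := ed25519.GenerateKey(rand.Reader)
    xpub, err := ed25519PublicKeyToCurve25519(pub)
    if err != nil {
        panic(err)
    }
    fmt.Printf("X25519 form of the Ed25519 public key: %x\n", xpub)
}

One way to use this is to generate the signed prekey as an Ed25519 key, sign and verify it with crypto/ed25519 as usual, and derive the X25519 form for the Diffie-Hellman steps of X3DH.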

This takes in a curve25519 public key, converts it to an ed25519 public key, and verifies the signature with it. I did not write this code, but it appears to do what Woodstock above is stating. Further information would be welcome:
func Verify(publicKey [32]byte, message []byte, signature *[64]byte) bool {
    publicKey[31] &= 0x7F

    /* Convert the Curve25519 public key into an Ed25519 public key. In
       particular, convert Curve25519's "montgomery" x-coordinate into an
       Ed25519 "edwards" y-coordinate:

           ed_y = (mont_x - 1) / (mont_x + 1)

       NOTE: mont_x=-1 is converted to ed_y=0 since fe_invert is mod-exp.
       Then move the sign bit into the pubkey from the signature.
    */
    var edY, one, montX, montXMinusOne, montXPlusOne FieldElement
    FeFromBytes(&montX, &publicKey)
    FeOne(&one)
    FeSub(&montXMinusOne, &montX, &one)
    FeAdd(&montXPlusOne, &montX, &one)
    FeInvert(&montXPlusOne, &montXPlusOne)
    FeMul(&edY, &montXMinusOne, &montXPlusOne)

    var A_ed [32]byte
    FeToBytes(&A_ed, &edY)

    // Move the sign bit from the signature into the recovered Ed25519 key,
    // then clear it in the signature (note: this mutates the caller's
    // signature through the pointer).
    A_ed[31] |= signature[63] & 0x80
    signature[63] &= 0x7F

    var sig = make([]byte, 64)
    var aed = make([]byte, 32)
    copy(sig, signature[:])
    copy(aed, A_ed[:])

    return ed25519.Verify(aed, message, sig)
}
This uses the FieldElement functions (FeFromBytes, FeOne, FeSub, FeAdd, FeInvert, FeMul, FeToBytes) from "golang.org/x/crypto/ed25519/internal/edwards25519".

Related

How to decrypt AES-256-GCM created with Ruby in sjcl.js

I'm trying to decrypt an AES ciphertext generated by Ruby with the sjcl.js library.
I'm getting a "corrupt" error for an unknown reason, and I want to fix the problem.
For reference, when encryption and decryption were attempted in CBC mode, decryption was successful.
Ruby Code:
cipher = OpenSSL::Cipher.new('aes-256-gcm')
cipher.encrypt
iv = cipher.random_iv
cipher.key = Digest::SHA256.digest(password)
ciphertext = cipher.update(plaintext) + cipher.final
return Base64.strict_encode64(iv) + Base64.strict_encode64(ciphertext)
Javascript Code:
var iv = sjcl.codec.base64.toBits(IV_BASE64);
var ciphertext = sjcl.codec.base64.toBits(CIPHERTEXT_BASE64);
var key = sjcl.hash.sha256.hash(KEY_UTF8);
var decrypted = sjcl.mode.gcm.decrypt(new sjcl.cipher.aes(key), ciphertext, iv);
AES-GCM is an authenticated encryption algorithm. It automatically generates an authentication tag during encryption, which is used for authentication during decryption. This tag is not considered in the current Ruby code. It is 16 bytes by default, can be retrieved with cipher.auth_tag and must be added, e.g.:
ciphertext = cipher.update(plaintext) + cipher.final + cipher.auth_tag
Regarding the nonce/IV, note that Base64 encoding should actually be done after concatenation (though this is not critical for the 12-byte nonce/IV commonly used with GCM).
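Putting both fixes together on the Ruby side could look like the following sketch (the method name and requires are mine, not from the answer):
require 'openssl'
require 'digest'
require 'base64'

def encrypt_gcm(password, plaintext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm')
  cipher.encrypt
  iv = cipher.random_iv                              # 12 bytes by default for GCM
  cipher.key = Digest::SHA256.digest(password)
  ciphertext = cipher.update(plaintext) + cipher.final
  # Concatenate first, then Base64-encode: IV | ciphertext | tag
  Base64.strict_encode64(iv + ciphertext + cipher.auth_tag)
end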
On the JavaScript side, the separation of the nonce/IV is missing. Ciphertext and tag do not need to be separated, because sjcl processes the concatenation of both (ciphertext|tag):
const GCM_NONCE_LENGTH = 12 * 8
const GCM_TAG_LENGTH = 16 * 8
// Separate IV and ciphertext/tag combination
let ivCiphertextTagB64 = "2wLsVLuOJFX1pfwwjoLhQrW7f/86AefyZ7FwJEhJVIpU+iG2EITzushCpDRxgqK2cwVYvfNt7KFZ39obMMmIqhrDCIeifzs="
let ivCiphertextTag = sjcl.codec.base64.toBits(ivCiphertextTagB64)
let iv = sjcl.bitArray.bitSlice(ivCiphertextTag, 0, GCM_NONCE_LENGTH)
let ciphertextTag = sjcl.bitArray.bitSlice(ivCiphertextTag, GCM_NONCE_LENGTH)
// Derive key via SHA256
let key = sjcl.hash.sha256.hash("my password")
// Decrypt
let cipher = new sjcl.cipher.aes(key)
let plaintext = sjcl.mode.gcm.decrypt(cipher, ciphertextTag, iv, null, GCM_TAG_LENGTH)
//let plaintext = sjcl.mode.gcm.decrypt(cipher, ciphertextTag, iv) // works also; here the defaults for the AAD ([]) and the tag size (16 bytes) are applied
console.log(sjcl.codec.utf8String.fromBits(plaintext))
<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/sjcl/1.0.8/sjcl.min.js"></script>
The ciphertext used in the above code was generated with the Ruby code considering the authentication tag and is successfully decrypted.
Note that key derivation with a digest is insecure. Instead, a reliable key derivation function like PBKDF2 should be used.
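For instance, on the Ruby side the key could be derived along these lines (a hedged sketch; the salt handling and iteration count are illustrative only). On the JavaScript side, sjcl.misc.pbkdf2 provides a counterpart.
require 'openssl'

salt = OpenSSL::Random.random_bytes(16)   # must be stored/sent along with the ciphertext
key  = OpenSSL::PKCS5.pbkdf2_hmac(password, salt, 100_000, 32, OpenSSL::Digest.new('SHA256'))
cipher.key = key                          # instead of Digest::SHA256.digest(password)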

Random bytes and encodings

I have a function that needs a random sequence of bytes as input (e.g. a salt for hashing a password). I generate those bytes with a CSPRNG and then encode them to Base64.
Now I pass that string to the function that needs it, but that function works with bytes, so when it receives a string it turns it into a byte buffer by reading the string as UTF-8. The bytes it ends up with are therefore not the sequence generated by the CSPRNG but the UTF-8 encoding of its Base64 representation, so N generated bytes become 4/3*N bytes. Can I assume that these expanded bytes are still random after the transformations? Are there any security implications?
Here's a pseudo code to make it more clear:
function needsRandBytes(rand) {
    if (typeof rand == 'string') {
        rand = Buffer.from(rand, 'utf8'); // here's the expansion
    }
    // use the rand bytes...
}

randBytes = generateRandomBytes(N); // cryptographically secure function
randString = randBytes.toString('base64');
needsRandBytes(randString);
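As a concrete illustration of the expansion (a Node.js sketch of mine, not from the question): 16 random bytes become a 24-character Base64 string, whose UTF-8 encoding is 24 bytes.
const crypto = require('crypto');

const randBytes = crypto.randomBytes(16);          // N = 16 random bytes
const randString = randBytes.toString('base64');   // 24 Base64 characters
const asUtf8 = Buffer.from(randString, 'utf8');    // 24 bytes (4/3 * N), restricted to the Base64 alphabet

console.log(randBytes.length, randString.length, asUtf8.length); // 16 24 24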

How to convert a char to a libc::c_char?

I have a C function:
Node * first_element_by_path(const Node * node, const char * path, char delimiter);
And a Rust glue function:
pub fn first_element_by_path(node: *mut CNode, path: *const c_char, delimiter: c_char) -> *mut CNode;
It expects a c_char as the delimiter. I want to send a char to it, but c_char is an i8 and not a char. How can I convert a Rust char to i8 or c_char in this case?
You are asking the question:
How do I fit a 32-bit number into an 8-bit value?
Which has the immediate answer: "throw away most of the bits":
let c = rust_character as libc::c_char;
However, that should cause you to stop and ask the questions:
Are the remaining bits in the right encoding?
What about all those bits I threw away?
Rust chars allow encoding all Unicode scalar values. What is your desired behavior for this code:
let c = '💩' as libc::c_char;
It's probably not to create the value -87, a non-ASCII value! Or this less-silly and perhaps more realistic variant, which is -17:
let c = 'ï' as libc::c_char;
You then have to ask: what does the C code mean by a character? What encoding does the C code think strings are? How does the C code handle non-ASCII text?
The safest thing may be to assert that the value is within the ASCII range:
let c = 'ï';
let v = c as u32;
assert!(v <= 127, "Invalid C character value");
let v = v as libc::c_char;
Instead of asserting, you could also return a Result type that indicates that the value was out of range.
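A minimal sketch of that Result-based variant (the function name and error type are mine, assuming the libc crate already used in the question):
fn ascii_c_char(c: char) -> Result<libc::c_char, char> {
    let v = c as u32;
    if v <= 127 {
        Ok(v as libc::c_char) // fits in ASCII, so the narrowing cast loses nothing
    } else {
        Err(c)                // hand the offending character back to the caller
    }
}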
should I change my function (the one that will call the glue function) to receive a c_char instead of a char?
That depends. That may just be pushing the problem further up the stack; now every caller has to decide how to create the c_char and worry about the values between 128 and 255. If the semantics of your code are such that the value has to be an ASCII character, then encode that in your types. Specifically, you can use something like the ascii crate.
In either case, you push the possibility for failure into someone else's code, which makes your life easier at the potential expense of making the caller more frustrated.

In etcd v3.0.x, how do I request all keys with a given prefix?

In etcd 3.0.x, a new API was introduced, and I'm just reading up on it. One thing is unclear to me, in the RangeRequest object. In the description of the property range_end, it says:
If the range_end is one bit larger than the given key,
then the range requests get the all keys with the prefix (the given key).
Here is the complete text, to provide some context:
// key is the first key for the range. If range_end is not given, the request only looks up key.
bytes key = 1;
// range_end is the upper bound on the requested range [key, range_end).
// If range_end is '\0', the range is all keys >= key.
// If the range_end is one bit larger than the given key,
// then the range requests get the all keys with the prefix (the given key).
// If both key and range_end are '\0', then range requests returns all keys.
bytes range_end = 2;
My question is: What is meant by
If the range_end is one bit larger than the given key
? Does it mean that range_end is 1 bit longer than key? Does it mean it must be key+1 when interpreted as an integer? If the latter, in which encoding?
There's a PR which resolves this confusion.
If range_end is key plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"),
then the range request gets all keys prefixed with key.
UPDATE:
var key = "/aaa";
var range_end = "/aa" + String.fromCharCode("a".charCodeAt(0) + 1); // "/aab"
In other words, range_end is the key with its last byte incremented by one.
For example, if key is "09903x", then range_end should be "09903y".
Only a byte stream is sent to the etcd server, so you need to take your client driver's serialization into account when determining the value of range_end.
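For completeness, the same "increment the last byte" idea in Go (my own sketch, not from the answers; the official Go client also offers helpers such as clientv3.WithPrefix() so you rarely have to compute this by hand):
package main

import "fmt"

// prefixRangeEnd returns the range_end for a prefix query: it increments the
// last byte that is below 0xff and truncates everything after it. If every
// byte is 0xff, it falls back to "\x00", which etcd treats as "all keys >= key".
func prefixRangeEnd(prefix []byte) []byte {
    end := make([]byte, len(prefix))
    copy(end, prefix)
    for i := len(end) - 1; i >= 0; i-- {
        if end[i] < 0xff {
            end[i]++
            return end[:i+1]
        }
    }
    return []byte{0}
}

func main() {
    fmt.Printf("%q\n", prefixRangeEnd([]byte("/aaa")))  // "/aab"
    fmt.Printf("%q\n", prefixRangeEnd([]byte("a\xff"))) // "b"
}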
A great TypeScript example here: https://github.com/mixer/etcd3/blob/7691f9bf227841e268c3aeeb7461ad71872df878/src/util.ts#L25
A working JS example with TextEncoder/TextDecoder:
function endRangeForPrefix(value) {
    let textEncoder = new TextEncoder();
    let encodeValue = textEncoder.encode(value);
    for (let i = encodeValue.length - 1; i >= 0; i--) {
        if (encodeValue[i] < 0xff) {
            encodeValue[i]++;
            encodeValue = encodeValue.slice(0, i + 1);
            let textDecoder = new TextDecoder();
            let decode = textDecoder.decode(encodeValue);
            return decode;
        }
    }
    return '';
}
I am using the Python client aioetcd3. I ran into the same problem, but I found a solution in its source code.
aioetcd3/utils.py line 14
def increment_last_byte(byte_string):
    s = bytearray(to_bytes(byte_string))
    for i in range(len(s) - 1, -1, -1):
        if s[i] < 0xff:
            s[i] += 1
            return bytes(s[:i+1])
    else:
        return b'\x00'
usage:
await Client().delete([db_key, increment_last_byte(db_key)], prev_kv=True)

Need a reality check: Is my analysis of this VB6 Blowfish bug correct?

Recently I had cause to compare Blowfish algorithms. I was comparing outputs from DI Management's library and PHP's mcrypt. I could not get them to agree in any way.
This led me on an interesting chase. According to this posting on Bruce Schneier's website, there was a sign extension bug in early versions of the Blowfish code, and it would seem that the DI Management code implements the pre-bug-report code.
The blurb in the bug report says, in part,
bfinit(char *key, int keybytes)
{
    unsigned long data;
    ...
    j = 0;
    ...
    data = 0;
    for (k = 0; k < 4; k++) {
        data = (data << 8) | key[j]; /* choke */
        j += 1;
        if (j == keybytes)
            j = 0;
    }
    ...
}
It chokes whenever the most significant bit
of key[j] is a '1'. For example, if key[j]=0x80,
key[j], a signed char, is sign extended to 0xffffff80
before it is ORed with data.
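To see the sign extension concretely, here is a small standalone C check (my sketch, not part of the bug report), contrasting the choking line with the masked fix:
#include <stdio.h>

int main(void) {
    signed char key = (signed char)0x80;  /* high bit set: -128 on a two's-complement machine */
    unsigned long data = 0;

    unsigned long buggy = (data << 8) | key;                         /* key is sign-extended before the OR */
    unsigned long fixed = (data << 8) | ((unsigned long)key & 0xff); /* cast and mask keep only 0x80 */

    printf("buggy: %lx\n", buggy); /* ...ffffff80 */
    printf("fixed: %lx\n", fixed); /* 80 */
    return 0;
}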
The equivalent code in the blf_Initialise function in basBlowfish.bas is
wData = &H0
For k = 0 To 3
    wData = uw_ShiftLeftBy8(wData) Or aKey(j)
    j = j + 1
    If j >= nKeyBytes Then j = 0
The bug-report suggests the following fix to the C code:
data<<=8;
data|=(unsigned long)key[j]&0xff;
which I've implemented in VB6 as
wData = uw_ShiftLeftBy8(wData)
wData = wData Or ( aKey(j) And &HFF )
In fact, I've written it so that both methods are used and then put in an assertion to check whether the values are the same or not, viz:
wData = uw_ShiftLeftBy8(wData)
wData = wData Or (aKey(j) And &HFF)
wDCheck = uw_ShiftLeftBy8(wData) Or aKey(j)
Debug.Assert wData = wDCheck
When aKey(j) contains 255, I get an assertion error.
Am I reading this situation aright? Is a sign-extension error occurring or am I seeing bugs that aren't there?
Strangely, the tests that come with the DI Management code appear to work correctly both with and without this change (which may mean that my search for equivalence between the two algorithms may depend on something else.)
If I'm reading that right (certainly not guaranteed at this hour), you do have a bug. Maybe even two. Remember that in C, type casts have a higher precedence than bitwise operations. The C code casts the signed char to an unsigned long before &ing it with 0xFF. Written verbosely:
data = (data << 8) | ( ((unsigned long)key[j]) & 0xFF );
However, the VB code you posted is equivalent to:
wData = (wData << 8) | (unsigned long)(aKey[j] & 0xFF);
Hello, sign extension.
Also, did you mean to write this?
wDCheck = uw_ShiftLeftBy8(wDCheck) Or aKey(j)
Otherwise, you're setting wDCheck using the new value of wData.
