Include Key AND salt for signing JWT in NestJS

I have a component written with the Phoenix Framework (it's Elixir under the hood),
which I am rewriting in NestJS.
JWTs are used as a means of authentication.
There is a part where, after a user registers, an e-mail containing a confirmation link is sent.
Phoenix signs the JWT by using the sign method.
It has the following signature: sign(context, salt, data, opts \\ [])
Here's the thing:
The context is where Elixir finds the secret key base, which can be seen as a symmetric key for the signing process. But you also pass another "secret": the salt.
Now, in NestJS, all I have found so far is the @nestjs/jwt utility module, which uses the jsonwebtoken Node module.
In there, all you can do is the following: jwt.sign(payload, secretOrPrivateKey, [options, callback])
Under options, there is no "salt" option.
My main question: How do I include a salt in here?
My other side-questions:
1) If I don't use plain SHA-256 but HMAC-SHA256 (or whatever it's called), do I even need a salt?
2) The sign method from @nestjs/jwt takes no parameter for the secret key:
jwtService.sign(payload: string | Object | Buffer, options?: SignOptions): string
It is a wrapper around jsonwebtoken's sign().
How do I even pass the secret key there? Or is it used automatically after I configured it at the beginning:
JwtModule.registerAsync({
  imports: [ConfigModule],
  useFactory: async (configService: ConfigService) => ({
    secret: configService.get<string>('SECRET'),
  }),
  inject: [ConfigService],
}),

I can't tell you how to include a salt in this particular case, but as far as I know it is quite common to use only a secret for signing the header and payload.
The secret from the JwtModule configuration is used automatically when you call the sign() function.
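Phoenix's salt is not a JWT concept: under the hood, Plug.Crypto derives a separate signing key from the secret key base and the salt (PBKDF2 with 1000 iterations, a 32-byte key and SHA-256 by default). If you want to emulate that, here is a minimal sketch in NestJS, assuming Node's built-in crypto module and the per-call secret override that @nestjs/jwt's JwtSignOptions supports (signWithSalt is a hypothetical helper, not part of any library):

import { pbkdf2Sync } from 'crypto';
import { JwtService } from '@nestjs/jwt';

// Derive a per-purpose key from the shared secret and a salt, mirroring
// Plug.Crypto's key-generator defaults, then sign with the derived key.
function signWithSalt(
  jwtService: JwtService,
  payload: object,
  secretKeyBase: string,
  salt: string,
): string {
  const derivedKey = pbkdf2Sync(secretKeyBase, salt, 1000, 32, 'sha256');
  // The `secret` option overrides the module-level secret for this call only.
  return jwtService.sign(payload, { secret: derivedKey });
}

As for side-question 1: HS256, the jsonwebtoken default, already is HMAC-SHA256, so no salt is needed for the signature itself to be secure. The salt in Phoenix provides key separation (different salts yield different keys for e-mail confirmation, sessions, etc.), not password-hash-style protection.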

Related

Why is NEAR requesting authorization when I call a change method on the contract?

I have a change method on my NEAR AssemblyScript Contract, which I'm calling from my front end (browser) code. The user is already signed in and can read the view methods on the contract. It's only after trying to call the change method that the user is prompted with the NEAR Wallet requesting authorization.
How can I call the change method without receiving the prompt to request permission again?
You can also see some examples here:
basic github.com/Learn-NEAR/starter--near-api-js
advanced github.com/Learn-NEAR/starter--near-api-js
There are basically two interfaces here:
// pass the contract name (as you did)
wallet.requestSignIn('dev-1634098284641-40067785396400');
// pass an object with contract and empty list of methods
wallet.requestSignIn({
  contractId: 'dev-1634098284641-40067785396400',
  methodNames: []
});
// add a list of methods to constrain what you can call
wallet.requestSignIn({
  contractId: 'dev-1634098284641-40067785396400',
  methodNames: ['write']
});
I'm answering my own question because I spent some time trying to figure it out.
When I was calling requestSignIn() on the wallet, I also had to specify the contract name.
import {
  connect,
  WalletConnection
} from 'near-api-js';
...
const near = await connect(nearConfig);
const wallet = new WalletConnection(near);
// Pass the correct contract name to be able to call change methods on behalf
// of the user without requesting permission again.
wallet.requestSignIn('dev-xxxx-yyyy');

How to integrate Google Picker with new Google Identity Services JavaScript library

Because of the known issue described here (https://developers.google.com/identity/sign-in/web/troubleshooting), I want to update my application to use the new GSI sign-in, which uses fewer cookies than the previous versions and therefore might solve the mentioned error.
My problem is that there's little to no documentation on how to integrate google picker with the new gsi.
I used to use gapi for some picker-related code, even for loading the library: gapi.load('picker', () => {}). The migration doc says to replace apis.google.com/js/api.js with the new GSI URL, and a lot of other methods such as googleAuth.signIn or gapi.client.init are to be deprecated by 2023. But then:
How to load picker without gapi available? Or gapi still needs to be imported but will not contain any sign-in related methods?
How will I pass apiKey and scopes to be able to init googlePicker?
For methods such as GoogleAuth.isSignedIn, the docs simply state: "Remove. A user's current sign-in status on Google is unavailable. Users must be signed-in to Google for consent and sign-in moments." What does that even mean? I need to check whether the user is signed in so I don't show the popup again every time they want to upload a file from the Picker...
Before, we used to get an access_token in the callback of a reloadAuthResponse or a signIn; now how do we get the token?
Sorry for the question being confusing, I'm very confused with everything. Any input helps, thanks!
I came across https://developers.google.com/identity/oauth2/web/guides/use-token-model through: How to use scoped APIs with (GSI) Google Identity Services
I changed our code to load this script: https://accounts.google.com/gsi/client, and then modified our "authorize" function (see below) to use window.google.accounts.oauth2.initTokenClient instead of window.gapi.auth2.authorize to get the access_token.
Note that the callback has moved from the second argument of the window.gapi.auth2.authorize function to the callback property of the first argument of the window.google.accounts.oauth2.initTokenClient function.
After calling tokenClient.requestAccessToken() (see below), the callback (the same one that used to be passed to window.gapi.auth2.authorize) is called with an object containing access_token.
const authorize = () =>
-  new Promise(res => window.gapi.auth2.authorize({
-    client_id: GOOGLE_CLIENT_ID,
-    scope: GOOGLE_DRIVE_SCOPE
-  }, res));
+  new Promise(res => {
+    const tokenClient = window.google.accounts.oauth2.initTokenClient({
+      client_id: GOOGLE_CLIENT_ID,
+      scope: GOOGLE_DRIVE_SCOPE,
+      callback: res,
+    });
+    tokenClient.requestAccessToken();
+  });
The way access_token is used was not changed:
new window.google.picker.PickerBuilder().setOAuthToken(access_token)
@piannone is correct; adding to their answer:
You'll still need to load the 'client' code, as you're using authentication. That means you'll still include https://apis.google.com/js/api.js in your list of scripts; just don't load 'auth2'. So, while you won't do:
gapi.load('auth2', onAuthApiLoad);
gapi.load('picker', onPickerApiLoad);
you will need to:
gapi.load('client', onAuthApiLoad);
gapi.load('picker', onPickerApiLoad);
(this is instead of directly loading https://accounts.google.com/gsi/client.js I guess.)
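Putting the two answers together, a minimal sketch of the whole flow might look like the following. This is an assumption-laden outline rather than official sample code: GOOGLE_CLIENT_ID, GOOGLE_API_KEY and GOOGLE_DRIVE_SCOPE are placeholder constants, and both api.js (for the Picker) and the GSI client script are assumed to be already loaded.

// Load the Picker library via gapi, fetch an access token via GSI,
// then build the Picker with that token.
gapi.load('picker', () => {
  const tokenClient = google.accounts.oauth2.initTokenClient({
    client_id: GOOGLE_CLIENT_ID,
    scope: GOOGLE_DRIVE_SCOPE,
    callback: (response) => {
      // response.access_token replaces the old auth2 access_token.
      const picker = new google.picker.PickerBuilder()
        .addView(google.picker.ViewId.DOCS)
        .setOAuthToken(response.access_token)
        .setDeveloperKey(GOOGLE_API_KEY)
        .setCallback((data) => console.log(data))
        .build();
      picker.setVisible(true);
    },
  });
  // Triggers the token flow; shows a consent popup only when needed.
  tokenClient.requestAccessToken();
});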

How to use S3 SSE C (Server Side Encryption with Client Provided Keys) with Ruby

I'm trying to upload a file to S3 and have it encrypted using the SSE-C encryption options. I can upload without the SSE-C options, but when I supply the sse_customer_key options I'm getting the following error:
ArgumentError: header x-amz-server-side-encryption-customer-key has field value "QkExM0JGRTNDMUUyRDRCQzA5NjAwNEQ2MjRBNkExMDYwQzBGQjcxODJDMjM0\nnMUE2MTNENDRCOTcxRjA2Qzk1Mg=", this cannot include CR/LF
I'm not sure if the problem is with the key I'm generating or with the encoding. I've played around with different options here, but the AWS documentation is not very clear. The general SSE-C documentation says you need to supply an x-amz-server-side-encryption-customer-key header, which is described as this:
Use this header to provide the 256-bit, base64-encoded encryption key
for Amazon S3 to use to encrypt or decrypt your data.
However, if I look at the Ruby SDK documentation for uploading a file, the three options have slightly different descriptions:
:sse_customer_algorithm (String) — Specifies the algorithm to use to when encrypting the object (e.g.,
:sse_customer_key (String) — Specifies the customer-provided encryption key for Amazon S3 to use in
:sse_customer_key_md5 (String) — Specifies the 128-bit MD5 digest of the encryption key according to RFC
(I didn't copy that wrong, the AWS documentation is literally half-written like that)
So the SDK documentation makes it seem like you supply the raw sse_customer_key and that it would base64-encode it on your behalf (which makes sense to me).
So right now I'm building the options like this:
sse_customer_algorithm: :AES256,
sse_customer_key: sse_customer_key,
sse_customer_key_md5: Digest::MD5.hexdigest(sse_customer_key)
I previously tried doing Base64.encode64(sse_customer_key) but that gave me a different error:
Aws::S3::Errors::InvalidArgument: The secret key was invalid for the
specified algorithm
I'm not sure if I'm generating the key incorrectly or if I'm supplying the key incorrectly (or if it's a different problem altogether).
This is how I'm generating the key:
require "openssl"
OpenSSL::Cipher.new("AES-256-CBC").random_key
Oh, did you notice that your key contains '\n'? That's most probably why you get the CR/LF error:
QkExM0JGRTNDMUUyRDRCQzA5NjAwNEQ2MjRBNkExMDYwQzBGQjcxODJDMjM0(\n)nMUE2MTNENDRCOTcxRjA2Qzk1Mg=
As mentioned by the colleague in the comments, strict_encode64 is an option, as it complies with RFC 4648 and adds no line feeds (plain encode64 follows RFC 2045 and inserts a newline every 60 characters).
By the way, I got this insight from here: https://bugs.ruby-lang.org/issues/14664
Hope it helps! :)
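To make the difference concrete, here is a small illustration (using the key generation from the question):

require "base64"
require "openssl"

key = OpenSSL::Cipher.new("AES-256-CBC").random_key # raw 32-byte binary key

Base64.encode64(key)        # RFC 2045: appends "\n" (and one every 60 chars) => the CR/LF header error
Base64.strict_encode64(key) # RFC 4648: single line, no "\n"

Whichever form your SDK version expects (raw or base64-encoded), the value handed to the sse_customer_key option must not contain line breaks.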
First of all, please make sure that you are using the latest version of the SDK (2.2.2.2) from here.
As I understand it, when we generate the presigned URL we have to specify the SSECustomerMethod, and when consuming the URL the x-amz-server-side-encryption-customer-key header is set with the customer key; you also need to set the x-amz-server-side-encryption-customer-algorithm header.
var getPresignedUrlRequest = new GetPreSignedUrlRequest
{
    BucketName = bucketName,
    Key = "EncryptedObject",
    SSECustomerMethod = SSECustomerMethod.AES256,
    Expires = DateTime.Now.AddMinutes(5)
};
var url = AWSClients.S3.GetPreSignedURL(getPresignedUrlRequest);

var webRequest = HttpWebRequest.Create(url);
webRequest.Headers.Add("x-amz-server-side-encryption-customer-algorithm", "AES256");
webRequest.Headers.Add("x-amz-server-side-encryption-customer-key", base64Key);

using (var response = webRequest.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    var contents = reader.ReadToEnd();
}

Verifying signature of non-hashed data with Ruby OpenSSL

I have an RSA public key, some data and a signature of that data. I need to verify the signature. However, the signature is not of a digest of the data, but of the entire data. (The data itself is only 16 bytes, so the signer doesn't bother to hash the data before signing it.) I can verify the signature in C by specifying a NULL engine when initializing the context:
EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(verify_key, NULL);
However, I have been unable to find an equivalent in Ruby's OpenSSL::PKey::PKey verify method. That method requires a Digest object, and there is no Digest that I can find that doesn't actually hash but just returns the data as-is. I tried creating my own Digest subclass, but I don't believe that can work, since the underlying OpenSSL library won't know about the existence of a custom digest type.
Am I stuck, or is there a way to solve this problem given that I cannot modify the code run by the signer?
Summarizing the answer from the comments in order to remove this question from the "Unanswered" filter...
owlstead:
Have you tried to find a function like public_decrypt? It may work, as normally you should not encrypt with a private key and decrypt with a public key. With a bit of luck it will accept the signature version of PKCS#1 padding (note that the padding used for encryption and signing is different in PKCS#1).
Wammer:
Of course - decrypting the signature with the public key and verifying that it matches the data works fine. So far this is working fine with the standard PKCS#1 padding, but I'll do some more research to see if the differing encryption and signing paddings are a problem in practice. Thanks.
owlstead:
After a decrypt and validation of the padding, all that is left is a (if possible, secure) compare. So that would replace the verification function pretty well. Most of the security is in the modular arithmetic and padding.
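A minimal sketch of that approach (assuming verify_key is an OpenSSL::PKey::RSA public key and data and signature are binary strings; the names are illustrative):

require "openssl"

# "Decrypt" the signature with the public key. With the default PKCS#1 v1.5
# padding, public_decrypt checks and strips the signature-style padding and
# raises OpenSSL::PKey::RSAError if it is malformed.
recovered = verify_key.public_decrypt(signature)

# Compare against the original 16-byte message. Prefer a constant-time
# comparison over plain == for anything security-sensitive.
valid = OpenSSL.secure_compare(recovered, data)

OpenSSL.secure_compare ships with recent versions of the openssl gem; on older versions a plain recovered == data works, just not in constant time.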

How can I asymmetrically encrypt data using OpenPGP with Ruby?

This feels like it should be dead simple, yet I'm not having any luck.
The scenario is this: I have a public *.asc key file. I want to use this file (not my personal keyring) to encrypt data on a server, so that I can decrypt it locally with a secret key.
From the command line I can achieve this using gpg, but I'd prefer to use a Ruby library that isn't just a wrapper around the CLI (i.e., presumably one that provides bindings to the C library). I've looked at the GPGME and OpenPGP gems and haven't been able to figure out how to use them. The documentation (especially for OpenPGP) is quite sparse.
Here, for example, is something I've tried using GPGME, without any luck:
key = GPGME::Data.new(File.open(path_to_file))
data = GPGME::Data.new("I want to encrypt this string.")
# Raises GPGME::Error::InvalidValue
GPGME::Ctx.new do |ctx|
  e = ctx.encrypt(key, data)
end
Has anyone been through this already? Surely this can't be that complicated?
I believe I've now got this figured out. It was actually just a few simple pieces I was missing:
1) Initializing the GPGME::Ctx object with a keylist_mode of GPGME::KEYLIST_MODE_EXTERN.
2) Importing the public key file using GPGME::Ctx#import.
3) Using GPGME::Crypto#encrypt to perform the encryption and specifying the correct recipient.
So my solution now looks like this:
key = GPGME::Data.new(File.open(path_to_file))
data = GPGME::Data.new("I want to encrypt this string.")
GPGME::Ctx.new(GPGME::KEYLIST_MODE_EXTERN) do |ctx|
  ctx.import(key)
  crypto = GPGME::Crypto.new(:armor => true, :always_trust => true)
  e = crypto.encrypt(data, :recipients => "recipient@domain.com")
end
