Keycloak fails to authenticate openid-connect token in cluster mode - session

I'm running two Keycloak Docker instances and configured clustering as specified here: https://hub.docker.com/r/jboss/keycloak/
I can see clustering-related logs and two records in the JGROUPSPING table. Authentication (openid-connect) through Host1 also works: I get an access token/refresh token, and I can retrieve a new access_token using that refresh token via Host2, so I believe the clustering setup is working.
But I'm getting a 401 error when I make an API call to Host2 using either the access token I received from Host1 or an access_token obtained from Host1's refresh token. It works only when I use an access_token received from the same host.
My understanding is that these access_tokens are not tied to a cookie, so this should work seamlessly. But it fails.

I had a problem with the verification of the access token signature.
The access tokens are signed by Keycloak with a keystore. If you don't have a certificate and key mounted into the Docker container, this keystore will be different on each node in your cluster, and a token generated by one node will not be valid on another node.
So you have to follow the "Setting up TLS(SSL)" part of the image's documentation.
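One way to confirm this mismatch is to compare the `kid` (key ID) in the unverified JWT header of tokens issued by each node: different `kid` values mean the nodes are signing with different keys. A minimal stdlib sketch (the token variables in the trailing comment are hypothetical):

```python
import base64
import json

def jwt_kid(token: str) -> str:
    """Return the key ID (kid) from a JWT's header without verifying it."""
    header_b64 = token.split(".")[0]
    # Restore the padding that base64url encoding strips.
    header_b64 += "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    return header.get("kid", "")

# Compare tokens obtained from each node (hypothetical variables):
# if jwt_kid(token_from_host1) != jwt_kid(token_from_host2):
#     print("Nodes are signing with different keystores")
```

If the two `kid` values differ, the nodes are not sharing a keystore and cross-node token verification will fail exactly as described.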

Related

Use gMSA for Hashicorp Vault mssql credential rotation

I want to start using Vault to rotate credentials for mssql databases, and I need to be able to use a gMSA in my mssql connection string. My organization currently only uses Windows servers and will only provide gMSAs for service accounts.
Specifying the gMSA as the user id in the connection string returns a 400 error: error creating database object: error verifying connection: InitialBytes InitializeSecurityContext failed 8009030c.
I also tried transitioning my Vault services to use the gMSA as their log-on user, but this made nodes unable to become the leader, even though they were able to join the cluster and forward requests.
My setup:
I have a Vault cluster running across a few Windows servers. I use nssm to run Vault as a Windows service, since there is no native Windows service support.
nssm is configured to run vault server -config="C:\vault\config.hcl" and runs under the Local System account.
When I change the user, the node is able to start up and join the raft cluster as a follower, but it cannot obtain leader status, which causes my cluster to become unresponsive once the Local System nodes are off.
The servers run Windows Server 2022 and Vault v1.10.3, using integrated raft storage. I have 5 Vault nodes in my cluster.
I tried running the following command to configure my database secret engine:
vault write database/config/testdb \
connection_url='server=myserver\testdb;user id=domain\gmsaUser;database=mydb;app name=vault;' \
allowed_roles="my-role"
which caused the error message I mentioned above.
I then tried to change the log on user for the service. I followed these steps to rotate the user:
1. Updated the directory permissions everywhere Vault touches (configs, certificates, storage) to include my gMSA user. I gave it read permissions for the config and certificate files and read/write for storage.
2. Stopped the service.
3. Removed the node as a peer from the cluster using vault operator raft remove-peer instanceName.
4. Deleted the old storage files.
5. Changed the service user by running sc.exe --% config "vault" obj="domain\gmsaUser" type= own.
6. Started the service back up and waited for replication.
When I completed the last step, I could see the node reappear as a voter in the Vault UI. I was able to directly hit the node using the cli and ui and get a response. This is not an enterprise cluster, so this should have just forwarded the request to the leader, confirming that the clustering portion was working.
Before I got to the last node, I tried running vault operator step-down and was never able to get the leader to rotate. Turning off the last node made the cluster unresponsive.
I did not expect changing the log-on user to cause any issues with a node's ability to operate. I reviewed the logs, but there was nothing out of the ordinary, even with the log level set to trace. They do show a successful unseal, standby mode, and joining the raft cluster.
Most of the documentation I have found for the mssql secret engine includes creating a user/pass at the sql server for Vault to use, which is not an option for me. Is there any way I can use the gMSA in my mssql config?
When you put user id into the SQL connection string, the driver attempts SQL authentication and no longer tries Windows authentication (while gMSA is Windows-authentication based).
When setting up the gMSA account, did you specify the correct parameter for who is allowed to retrieve the password? (Correct: PrincipalsAllowedToRetrieveManagedPassword; incorrect, but the first suggestion offered by tab completion: PrincipalsAllowedToDelegateToAccount.)
Maybe you also need to run Install-ADServiceAccount ... on the machine you're running Vault on.
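The checks above can be sketched with the ActiveDirectory PowerShell module; the gMSA name gmsaUser and the host computer account VAULTHOST01$ are hypothetical placeholders:

```powershell
# Allow the Vault host's computer account to retrieve the managed password.
# Note the parameter name: PrincipalsAllowedToDelegateToAccount is a different,
# delegation-related setting.
Set-ADServiceAccount gmsaUser -PrincipalsAllowedToRetrieveManagedPassword "VAULTHOST01$"

# Install and verify the gMSA on the machine running Vault.
Install-ADServiceAccount -Identity gmsaUser
Test-ADServiceAccount -Identity gmsaUser   # should return True if the host can use the account
```

If Test-ADServiceAccount returns False, the node's service account cannot retrieve the gMSA password, which would explain authentication failures independent of Vault itself.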

Spring Boot - Generate JWT on one server and authenticate on another server

I have a distributed system: a user connects to a server, and that server assigns them to a specific node/server for their API calls.
I want to generate a JWT on the first server the client connects to; when the user is redirected to the new server, that server will authorize them based on their username and password pulled from a local database and check that the JWT is valid (i.e., to make sure they were redirected from the first server and nowhere else).
This might be a bad question, but I can't find any resources on this. How can I generate a JWT on one server and authenticate it on another?
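The usual pattern is for both servers to share the signing key: a symmetric secret for HMAC-signed tokens, or the issuing server's public key for RSA-signed ones. The second server then verifies the signature locally, without calling the first server. A minimal sketch of the symmetric variant using only the Python standard library (the secret value is a hypothetical placeholder; in a Spring Boot app you would use a JWT library with the same shared secret rather than hand-rolling this):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret-known-to-both-servers"  # hypothetical; load from config

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def issue_token(claims: dict) -> str:
    """Server A: sign the claims with the shared secret (HS256-style JWT)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str):
    """Server B: recompute the signature and accept the claims only on a match."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Server A calls issue_token when redirecting; server B calls verify_token and rejects the request if it returns None. The only shared state is the secret, which is exactly what makes this work across servers.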

Databricks and Azure Blob Storage

I am running this on databricks notebook
dbutils.fs.ls("/mount/valuable_folder")
I am getting this error
Caused by: StorageException: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
I tried using dbutils.fs.refreshMounts() to pick up any updates in Azure Blob Storage, but I'm still getting the above error.
Such errors most often arise when the credentials you used for mounting have expired: for example, the SAS token is expired, the storage key has been rotated, or the service principal secret has expired. You need to unmount the storage using dbutils.fs.unmount and mount it again with dbutils.fs.mount. dbutils.fs.refreshMounts() just refreshes the list of mounts in the backend; it doesn't recheck the credentials.
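A remount fragment for the account-key case, runnable only inside a Databricks notebook (where dbutils is predefined); the container, storage account, and secret scope/key names are hypothetical placeholders for your own values:

```python
mount_point = "/mount/valuable_folder"

dbutils.fs.unmount(mount_point)
dbutils.fs.mount(
    source="wasbs://mycontainer@myaccount.blob.core.windows.net",
    mount_point=mount_point,
    extra_configs={
        "fs.azure.account.key.myaccount.blob.core.windows.net":
            dbutils.secrets.get(scope="my-scope", key="storage-key")
    },
)
dbutils.fs.ls(mount_point)  # should now succeed with the fresh credentials
```

Fetching the key from a secret scope at mount time means a future key rotation only requires updating the secret and remounting, not editing the notebook.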

How to override DefaultAWSCredentialsProviderChain with our own implementation of a credential provider with assume role

I am trying to use Spring Config Server with cross-account access, deploying the config server in Kubernetes with an AWS backend.
But due to DefaultAWSCredentialsProviderChain I am unable to connect to the S3 bucket and get a 403 error.
According to the logs, WebIdentityTokenCredentialsProvider in the DefaultAWSCredentialsProviderChain gets a 403 error when it tries to fetch credentials.
But when I connect with my own AWS S3 client using STSAssumeRoleSessionCredentialsProvider, it connects fine.
Is there any way I can provide STSAssumeRoleSessionCredentialsProvider instead of DefaultAWSCredentialsProviderChain?

How to get new Client Secret once the old one expires in Azure App?

My Azure App's client secret was set to expire in 3 months; it has now expired and the application has stopped. My questions are:
How can I get a new client secret for the same Azure App to replace the expired client secret in my NodeJS application?
Also, is there a way to get a warning or message/mail before the client secret expires?
How can I check the expiry of client credentials without using the Azure portal (that is, by using REST requests, if any)?
Screenshot showing expiry in the Azure portal. Can we get this expiry somehow by REST requests?
How can I check the expiry of client credentials without using the Azure portal (that is, by using REST requests, if any)?
You should be able to use the Microsoft Graph API to get this information. The operation you would want to invoke is List applications, which returns a list of application objects. The property you would want to check is passwordCredentials, whose endDateTime gives the credential expiry.
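Once you have the List applications response body (from GET https://graph.microsoft.com/v1.0/applications; authentication and the HTTP call are omitted here), filtering for soon-to-expire secrets is simple. A stdlib sketch; the field names follow the Graph application resource, and the sample data is illustrative:

```python
from datetime import datetime, timedelta, timezone

def expiring_secrets(applications: list, within_days: int = 30) -> list:
    """Return (displayName, endDateTime) pairs for secrets expiring soon."""
    cutoff = datetime.now(timezone.utc) + timedelta(days=within_days)
    results = []
    for app in applications:
        for cred in app.get("passwordCredentials", []):
            # Graph returns ISO 8601 timestamps like 2024-01-31T00:00:00Z.
            end = datetime.fromisoformat(cred["endDateTime"].replace("Z", "+00:00"))
            if end <= cutoff:
                results.append((app["displayName"], cred["endDateTime"]))
    return results

# Illustrative (truncated) shape of a List applications response body:
sample = {
    "value": [
        {"displayName": "my-app",
         "passwordCredentials": [{"endDateTime": "2000-01-01T00:00:00Z"}]}
    ]
}
print(expiring_secrets(sample["value"]))  # prints [('my-app', '2000-01-01T00:00:00Z')]
```

The same function could be the body of the daily timer-triggered Azure Function described below the next quoted question.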
Also, is there a way to get a warning or message/mail before the client secret expires?
AFAIK, there is no automated way to do this. I believe I read somewhere that the Graph API team is working on it, but no ETA was provided. For now you have to roll out your own solution. You may write a timer-triggered Azure Function that runs daily; it can get the list of applications, filter out the applications whose credentials are expiring soon, and take action on those.
How can I get a new client secret for the same Azure App to replace the expired client secret in my NodeJS application?
Based on your comment, since you are currently doing this process manually, I assume you can continue to do so. Once you know that a secret is expiring soon, you can create a new application secret and, at an appropriate time, replace the old secret with the new one.
