Spring Cloud Security JWT: Distribute Public Key using Config Server / Key Rotation

How do you manage your Private / Public Keys for signing / validating JWTs in a Spring Cloud environment?
The "problem":
At the moment I generate a Key Pair, then copy the Private + Public Key to my auth-server application, and also copy the Public Key to each and every Resource Server.
When I now want to implement "Key Rotation", I have to somehow propagate the new keys to every service.
The idea:
Maybe I could use the spring-cloud-config-server to store and distribute the Key Pairs?
The config server already provides database login credentials. So why not store even more sensitive information there?
Question(s):
If this is the way to go: How would you implement the key pair distribution with spring-cloud-config-server?
Do you have any security concerns?
How did you solve this problem? I guess there are better solutions.
EDIT:
Maybe there's some solution using Spring OAuth's security.oauth2.resource.jwt.keyUri property for JWKS?

First of all, I would add a gateway to hide the JWT mechanism. It allows you to revoke tokens at the gateway: if a user knows his token, you can't revoke it without revoking the public key.
It's easy to implement with Zuul's filters and session-scoped beans.
Secondly, as you said in the comments, you can simply create a new private key to generate new tokens. But all your resource servers must be able to read all the previously generated tokens, so you need to keep a list of public keys on each resource server, and each time you receive a request you must try to verify it with each public key. Maybe you can add a public key ID (and put the ID in each generated token) to avoid a brute-force lookup; see the sketch below.
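For illustration, a minimal sketch of that key-ID lookup. It assumes the jjwt library and a "kid" header set by the auth server; neither is named in the answer, so treat both as assumptions:

    import io.jsonwebtoken.Claims;
    import io.jsonwebtoken.Jwts;
    import io.jsonwebtoken.JwsHeader;
    import io.jsonwebtoken.SigningKeyResolverAdapter;
    import java.security.Key;
    import java.security.PublicKey;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical verifier: keeps every still-valid public key and picks one by "kid".
    public class RotatingJwtVerifier {

        // key id -> public key; updated whenever a new key pair is distributed
        private final Map<String, PublicKey> publicKeys = new ConcurrentHashMap<>();

        public void addKey(String keyId, PublicKey key) {
            publicKeys.put(keyId, key);
        }

        public Claims verify(String token) {
            return Jwts.parser()
                    .setSigningKeyResolver(new SigningKeyResolverAdapter() {
                        @Override
                        public Key resolveSigningKey(JwsHeader header, Claims claims) {
                            // The auth server is assumed to put the key id into the "kid" header.
                            Key key = publicKeys.get(header.getKeyId());
                            if (key == null) {
                                throw new IllegalArgumentException("Unknown key id: " + header.getKeyId());
                            }
                            return key;
                        }
                    })
                    .parseClaimsJws(token)
                    .getBody();
        }
    }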
For key distribution, using Spring Cloud Bus and RabbitMQ seems right to me.
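A hedged sketch of what the distribution side could look like: the public keys come from the config server as properties, and a bus-triggered refresh re-binds them. The property prefix and class names are made up for the example:

    import java.util.HashMap;
    import java.util.Map;
    import org.springframework.boot.context.properties.ConfigurationProperties;
    import org.springframework.cloud.context.config.annotation.RefreshScope;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // Hypothetical holder for the verification keys served by the config server.
    // A refresh event sent over Spring Cloud Bus (RabbitMQ) rebuilds this bean
    // with the newly distributed keys, without restarting the resource server.
    @Configuration
    public class JwtKeyDistributionConfig {

        @Bean
        @RefreshScope
        @ConfigurationProperties(prefix = "jwt.verification") // e.g. jwt.verification.keys.key-2021-01=<PEM>
        public JwtVerificationKeys jwtVerificationKeys() {
            return new JwtVerificationKeys();
        }

        public static class JwtVerificationKeys {
            // key id -> PEM-encoded public key
            private Map<String, String> keys = new HashMap<>();

            public Map<String, String> getKeys() { return keys; }
            public void setKeys(Map<String, String> keys) { this.keys = keys; }
        }
    }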

You should consider the use of Spring Cloud Consul Config instead:
Consul provides a Key/Value Store for storing configuration and other metadata. Spring Cloud Consul Config is an alternative to the Config Server and Client. Configuration is loaded into the Spring Environment during the special "bootstrap" phase. Configuration is stored in the /config folder by default. Multiple PropertySource instances are created based on the application’s name and the active profiles, which mimics the Spring Cloud Config order of resolving properties.
You can POST to /refresh to update your key, or watch for changes:
The Consul Config Watch takes advantage of the ability of Consul to watch a key prefix. The Config Watch makes a blocking Consul HTTP API call to determine if any relevant configuration data has changed for the current application. If there is new configuration data a Refresh Event is published.
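As an illustration only (nothing Consul-specific is required in code), a small listener that reacts when the Config Watch, or a POST to /refresh, changes the environment. The property name "jwt.public-key" is an assumption for this sketch:

    import org.springframework.cloud.context.environment.EnvironmentChangeEvent;
    import org.springframework.context.event.EventListener;
    import org.springframework.stereotype.Component;

    // When a refresh detects changed configuration, Spring Cloud publishes an
    // EnvironmentChangeEvent listing the changed property keys.
    @Component
    public class JwtKeyChangeListener {

        @EventListener
        public void onEnvironmentChange(EnvironmentChangeEvent event) {
            if (event.getKeys().contains("jwt.public-key")) {
                // Reload the rotated public key here; @RefreshScope beans are rebuilt
                // automatically on next use, so this hook is only needed for eager reloading.
                System.out.println("JWT public key changed, reloading verifier");
            }
        }
    }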

Related

HashiCorp Vault dynamic secrets and Spring Boot

I am confused about the use case where HashiCorp Vault is used to provide database secrets dynamically for Spring Boot. Let's say you have two microservices: one containing the application logic and one running a database engine. The first obviously needs to authenticate to the database, and this is where dynamic secrets come into play. Vault can provide such credentials to the first microservice so you don't have to use e.g. ENV variables in a docker-compose file managing both microservices.
The app could be a Spring Boot microservice relying on Spring Cloud Vault to handle communication with HashiCorp Vault for credentials management. The microservice asks Vault for temporary DB credentials (in this case they last for one hour) when it is started. During this one-hour interval, the app can connect to the database and do whatever needs to be done. After one hour, the credentials expire and no communication is allowed.
The Spring Cloud Vault documentation mentions:
Spring Cloud Vault does not support getting new credentials and configuring your DataSource with them when the maximum lease time has been reached. That is, if max_ttl of the Database role in Vault is set to 24h that means that 24 hours after your application has started it can no longer authenticate with the database.
In other words, after one hour the connection is lost and there seems to be no way to get new DB credentials other than by restarting the microservice.
So I have the following questions:
What is the added value of using Vault in this particular example if you are (seemingly) forced to restart your entire application each time the TTL expires?
Does the same apply when you use static secrets instead?
Can this issue be solved without changing microservice code? (K8S, Istio, etc.?)
My guess is the intended use of Vault with Spring Boot is different compared to my understanding.
This article describes four possible solutions to mitigate the issue described in the question. While these are valid approaches, a solution that is more generic (like the 'heavy rotation of dynamic secrets' approach) and less aggressive (than the 'restart the service when connectivity is lost' approaches) should be in place.
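To make the 'heavy rotation of dynamic secrets' idea concrete, here is a rough sketch, assuming Spring Cloud Vault's lease container and a Hikari connection pool; the Vault path, role name, and field names are invented for the example:

    import com.zaxxer.hikari.HikariDataSource;
    import javax.annotation.PostConstruct;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.vault.core.lease.SecretLeaseContainer;
    import org.springframework.vault.core.lease.domain.RequestedSecret;
    import org.springframework.vault.core.lease.event.SecretLeaseCreatedEvent;

    // Sketch: request the database secret as a rotating secret and push the new
    // username/password into the pool whenever Vault issues fresh credentials.
    @Configuration
    public class VaultCredentialsRotation {

        private final SecretLeaseContainer leaseContainer;
        private final HikariDataSource dataSource;

        public VaultCredentialsRotation(SecretLeaseContainer leaseContainer, HikariDataSource dataSource) {
            this.leaseContainer = leaseContainer;
            this.dataSource = dataSource;
        }

        @PostConstruct
        public void setupRotation() {
            // Ask Vault to keep rotating the secret instead of letting it expire.
            leaseContainer.addRequestedSecret(RequestedSecret.rotating("database/creds/my-role"));
            leaseContainer.addLeaseListener(event -> {
                if (event instanceof SecretLeaseCreatedEvent) {
                    SecretLeaseCreatedEvent created = (SecretLeaseCreatedEvent) event;
                    // New connections taken from the pool will use the rotated credentials.
                    dataSource.getHikariConfigMXBean().setUsername((String) created.getSecrets().get("username"));
                    dataSource.getHikariConfigMXBean().setPassword((String) created.getSecrets().get("password"));
                }
            });
        }
    }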

Spring Cloud GCP Starter Authentication Issues

I am using spring-cloud-gcp-starter, spring-cloud-gcp-starter-pubsub, and spring-cloud-gcp-starter-data-datastore for the auto-configuration of my GCP dependencies.
It fetches the key from the system property spring.cloud.gcp.credentials.encoded-key, which I am setting in my configuration class as System.setProperty("spring.cloud.gcp.credentials.encoded-key", "privatevalue");
There is a case where my key will be rotated every x days, and I want to ensure that my application stays authorized when the key rotates.
One way I have thought of is to overwrite the system property when my key rotates, but how do we make sure GCP uses the latest key for authentication, and will this approach even work?
I looked at the CredentialsProvider class and it seems it only has a getter method; setting it is handled via auto-configuration.
You are right that the CredentialsProvider bean in spring-cloud-gcp is created by auto-configuration.
In the Spring Cloud ecosystem you can refresh configuration by using @RefreshScope: all beans in this scope get refreshed when the /refresh endpoint is hit. Read more in the Spring documentation here.
For rotating the keys, you can override the CredentialsProvider bean in your configuration with @RefreshScope, so that you can refresh your keys without restarting the application.
You can refer to how it is done in this sample application.
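In case it helps, a rough sketch of that idea (not taken from the sample application). It reuses the encoded-key property from the question and relies on /refresh rebuilding the bean; the class name is made up:

    import com.google.api.gax.core.CredentialsProvider;
    import com.google.auth.oauth2.GoogleCredentials;
    import java.io.ByteArrayInputStream;
    import java.util.Base64;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.cloud.context.config.annotation.RefreshScope;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // Overrides the auto-configured CredentialsProvider with a refreshable one.
    // After rotating the key, hitting /refresh (or /actuator/refresh) rebuilds this bean.
    @Configuration
    public class RefreshableGcpCredentialsConfig {

        @Bean
        @RefreshScope
        public CredentialsProvider credentialsProvider(
                @Value("${spring.cloud.gcp.credentials.encoded-key}") String encodedKey) throws Exception {
            byte[] json = Base64.getDecoder().decode(encodedKey);
            GoogleCredentials credentials = GoogleCredentials.fromStream(new ByteArrayInputStream(json));
            return () -> credentials;
        }
    }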

Pub/Sub Implementation in Spring Boot

Currently in our project we have already implemented the Firebase messaging service (FCM), and we already have a service account created for it. Now we need to implement Pub/Sub with a different Google project and service account.
When I try to implement this, it picks up the default credentials.
How can we configure different service account credentials for FCM and Pub/Sub?
Kindly let me know how we can fix this.
To explicitly provide credentials for Spring Cloud GCP Pub/Sub, use the spring.cloud.gcp.pubsub.credentials.location or spring.cloud.gcp.pubsub.credentials.encoded-key property.
Documentation available here.
The error you have is unrelated to GCP authentication, though: the issue is that two different starters are defining a JWT parsing bean. If you don't need to extract identity from Firebase, it can be turned off with spring.cloud.gcp.security.firebase.enabled=false. If you do need it, and com.magilhub is under your control, follow the Spring Boot suggestion and use @Qualifier to get the specific bean you need.
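A hedged sketch of both suggestions. The credentials file path is illustrative, and the bean type (JwtDecoder) and bean name ("jwtDecoder") are assumptions, since the error screenshot isn't reproduced here:

    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.security.oauth2.jwt.JwtDecoder;
    import org.springframework.stereotype.Component;

    // application.properties (per the answer above; the file path is illustrative):
    //   spring.cloud.gcp.pubsub.credentials.location=file:/path/to/pubsub-service-account.json
    //   spring.cloud.gcp.security.firebase.enabled=false   (only if Firebase identity is not needed)
    //
    // If both JWT beans must stay, pick one explicitly by name with @Qualifier.
    @Component
    public class TokenInspector {

        private final JwtDecoder jwtDecoder;

        public TokenInspector(@Qualifier("jwtDecoder") JwtDecoder jwtDecoder) {
            this.jwtDecoder = jwtDecoder;
        }
    }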

Automatically renew AWS credentials in a Spring Boot application using Spring Cloud Vault

I'm trying to create a Spring Boot application that regularly fetches data from AWS S3.
The AWS S3 credentials are fetched from Vault using Spring Cloud Vault when the application starts.
My issue is that the AWS S3 credentials have a limited lifespan due to the Vault policy, so I have to restart my application from time to time to obtain new credentials from Vault.
Is there a way to automatically restart the beans using those credentials?
TL;DR
No, there is no automatism, but you can do this yourself.
The longer read
Spring Boot and Spring Cloud aren't really intended for applying continuous updates to the configuration without interruption. Spring Cloud Config ships with Refresh Scope support that allows you to annotate beans with @RefreshScope and trigger a refresh so that those beans get re-initialized. This approach requires either integration with a message bus or triggering the refresh endpoint.
The other alternative, which is limited to AWS functionality, is providing your own AWSCredentialsProvider implementation that is backed by a Vault PropertySource and applies rotation to your credentials. This requires you to provide a bit of code that integrates with VaultConfigurer, or even directly with SecretLeaseContainer, to get secret lifecycle event callbacks. See here for an integration example.
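For illustration, a rough sketch of that second alternative (not the linked example itself). The Vault path "aws/creds/s3-role" and the field names are assumptions, and the wiring is simplified:

    import com.amazonaws.auth.AWSCredentials;
    import com.amazonaws.auth.AWSCredentialsProvider;
    import com.amazonaws.auth.BasicAWSCredentials;
    import org.springframework.vault.core.lease.SecretLeaseContainer;
    import org.springframework.vault.core.lease.domain.RequestedSecret;
    import org.springframework.vault.core.lease.event.SecretLeaseCreatedEvent;

    // An AWSCredentialsProvider kept up to date from Vault lease events, so the
    // AWS SDK always sees the most recently issued credentials.
    public class VaultAwsCredentialsProvider implements AWSCredentialsProvider {

        private volatile AWSCredentials current;

        public VaultAwsCredentialsProvider(SecretLeaseContainer leaseContainer) {
            leaseContainer.addRequestedSecret(RequestedSecret.rotating("aws/creds/s3-role"));
            leaseContainer.addLeaseListener(event -> {
                if (event instanceof SecretLeaseCreatedEvent) {
                    SecretLeaseCreatedEvent created = (SecretLeaseCreatedEvent) event;
                    current = new BasicAWSCredentials(
                            (String) created.getSecrets().get("access_key"),
                            (String) created.getSecrets().get("secret_key"));
                }
            });
        }

        @Override
        public AWSCredentials getCredentials() {
            return current;
        }

        @Override
        public void refresh() {
            // Credentials are pushed by the lease listener; nothing to pull here.
        }
    }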
There is a ticket asking the same question that contains background on why this pattern isn't widely applicable.

Basic authentication required while accessing the Hazelcast REST API

I am trying to use the Hazelcast REST API (Hazelcast version 3.9.1) to gather caching information. I expose a REST endpoint in my application (e.g. http://localhost:8080/cache/info) that collects the caching information (using the Hazelcast REST API, e.g. /cache/localinfo), but every time I hit the endpoint it pops up an "Authentication Required" dialog, and entering the same credentials I used to set the group config name and password doesn't work.
I am wondering how to disable authentication in the first place (if possible).
If not, what credentials is it looking for? Shouldn't they be the same ones used to set up the group config name and password while configuring Hazelcast? e.g.

    Config config = new Config();
    config.getGroupConfig().setName("hazel-instance");
    config.getGroupConfig().setPassword("password");
Hazelcast doesn't offer the possibility to secure the REST API with credentials. Hazelcast is not designed to be open to the public internet. If you want internal authentication, we recommend putting nginx in front of the Hazelcast REST API and using it as a proxy.
Anyhow, the REST API is considered a legacy API for situations where the programming language doesn't have a native client. The REST API doesn't know about the internal partitioning and therefore will not offer the best possible performance.
