Token authentication not working when Hashicorp vault is sealed - spring-boot

I'm working on a sample application where I want to connect to HashiCorp Vault to get the DB credentials. Below is the bootstrap.yml of my application:
spring:
  application:
    name: phonebook
  cloud:
    config:
      uri: http://localhost:8888/
    vault:
      uri: http://localhost:8200
      authentication: token
      token: s.5bXvCP90f4GlQMKrupuQwH7C
  profiles:
    active:
      - local,test
The application builds properly when the Vault server is unsealed, and Maven fetches the database username from Vault correctly. When I run the build after sealing the Vault, it fails with the error below.
org.springframework.vault.VaultException: Status 503 Service Unavailable [secret/application]: error performing token check: Vault is sealed; nested exception is org.springframework.web.client.HttpServerErrorException$ServiceUnavailable: 503 Service Unavailable: [{"errors":["error performing token check: Vault is sealed"]}
How can I resolve this? I want Maven to get the DB username and password from the Vault during the build without any issues, even though it is sealed.

It's a benefit of Vault that it's not simple static storage: on any change in the environment, you need to perform some actions to get back to a stable, workable system.
My advice: create a script (or scripts) to automate the process.
For example, I have a multi-service system, and some of my services use Vault to get their configuration.
init.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
# Unseal the Vault (repeat until the key threshold is reached; three keys here)
vault operator unseal <token1>
vault operator unseal <token2>
vault operator unseal <token3>
# Log in with the administration token
vault login <main token>
# Enable the KV secrets engine and AppRole auth, then upload the base policy
vault secrets enable -path=<path>/ -description="secrets for My projects" kv
vault auth enable approle
vault policy write application-policy-dev ./application-policy-DEV.hcl
application.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault login <main token>
# Delete the old secret, policy, and role so they can be recreated cleanly
vault delete <secret>/<app_path>
vault delete sys/policy/<app>-policy
vault delete auth/approle/role/<app>-role
# Load the application configuration into the KV store (yq v3 merge syntax)
vault kv put <secret>/<app_path> - < <(yq m ./application.yaml)
vault policy write <app>-policy ./<app>-policy.hcl
vault write auth/approle/role/<app>-role token_policies="<app>-policy"
# Fetch the role ID, generate a secret ID, and log in via AppRole to get a client token
role_id=$(vault read auth/approle/role/<app>-role/role-id -format="json" | jq -r '.data.role_id')
secret_id=$(vault write -f auth/approle/role/<app>-role/secret-id -format="json" | jq -r '.data.secret_id')
token=$(vault write auth/approle/login role_id="${role_id}" secret_id="${secret_id}" -format="json" | jq -r '.auth.client_token')
echo "Token: ${token}"
where <app> is the name of your application, application.yaml is the file with the configuration, and <app>-policy.hcl is the file with the policy.
Of course, none of these files should be public; they are for Vault administration only.
On any change in the environment, or after a Vault restart (which leaves it sealed), just run init.sh. To get a token for the application, run application.sh. Likewise, if you need to change a configuration parameter, change it in application.yaml, run application.sh, and use the resulting token.
Script result (for one of my services):
Key                  Value
---                  -----
token                *****
token_accessor       *****
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Success! Data deleted (if it existed) at: <secret>/<app>
Success! Data deleted (if it existed) at: sys/policy/<app>-policy
Success! Data deleted (if it existed) at: auth/approle/role/<app>-role
Success! Data written to: <secret>/<app>
Success! Uploaded policy: <app>-policy
Success! Data written to: auth/approle/role/<app>-role
Token: s.dn2o5b7tvxHLMWint1DvxPRJ
Process finished with exit code 0
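As a small refinement, you can gate a build on the seal status and unseal only when needed. This is a minimal sketch, assuming jq is installed and init.sh is the script above:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
# vault status reports "sealed": true/false; jq -e exits 0 only for a truthy value
if vault status -format=json | jq -e '.sealed' >/dev/null; then
  ./init.sh
fi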

Related

Unable to create Azure-keyvault-backed secret scope on Azure Databricks

I am not able to create a secret scope on Azure Databricks from the Databricks CLI. I run a command like this:
databricks secrets "create-scope" --scope "edap-dev-kv" --scope-backend-type AZURE_KEYVAULT --resource-id "/subscriptions/ba426b6f-65cb-xxxx-xxxx-9a1e1656xxxx/resourceGroups/edap-dev-rg/providers/Microsoft.KeyVault/vaults/edap-dev-kv" --profile profile_edap_dev2_dbx --dns-name "https://edap-dev-kv.vault.azure.net/"
I get error msg:
Error: b'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>\n<title>
Error 400 io.jsonwebtoken.IncorrectClaimException:
Expected aud claim to be: 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d, but was: https://management.core.windows.net/.
</title>\n</head>\n<body><h2>HTTP ERROR 400</h2>\n<p>
Problem accessing /api/2.0/secrets/scopes/create.
Reason:\n<pre> io.jsonwebtoken.IncorrectClaimException:
Expected aud claim to be: 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d,
but was: https://management.core.windows.net/.</pre></p>\n</body>\n</html>\n'
I have tried doing it with both a user (personal) AAD token and a service principal's AAD token. (I've found somewhere that it should be an AAD token of a user account.)
I am able to do it with the GUI using the same parameters.
In your case, the access token was issued for the wrong service: it was issued for https://management.core.windows.net/, but it must be issued for the resource ID of Azure Databricks, which is 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d.
The simplest way to do that is to use az-cli with the following command:
az account get-access-token -o tsv --query accessToken \
--resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d
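As a follow-up sketch (assuming the legacy databricks-cli; <workspace-url> is a placeholder for your workspace URL), you can feed that token to the CLI through the DATABRICKS_AAD_TOKEN environment variable and then re-run the original create-scope command:
# export the AAD token issued for the Databricks resource ID
export DATABRICKS_AAD_TOKEN=$(az account get-access-token -o tsv --query accessToken \
    --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d)
export DATABRICKS_HOST="https://<workspace-url>"
# configure the CLI to authenticate with the AAD token, then retry create-scope
databricks configure --aad-token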

AWS Secret Manager with Spring Boot Application

I tried to get a Secrets Manager value using this answer:
How to integrate AWS Secret Manager with Spring Boot Application
But my application fetches secrets twice: first, as I want, with the local profile, but then a second time without any profile. Why does the application go to Secrets Manager a second time, and how can I turn this off?
2021-08-19 11:40:01.214 INFO 9141 --- [ restartedMain] s.AwsSecretsManagerPropertySourceLocator : Loading secrets from AWS Secret Manager secret with name: secret/test_local, optional: false
2021-08-19 11:40:02.702 INFO 9141 --- [ restartedMain] s.AwsSecretsManagerPropertySourceLocator : Loading secrets from AWS Secret Manager secret with name: secret/test, optional: false
2021-08-19 11:40:02.956 ERROR 9141 --- [ restartedMain] o.s.boot.SpringApplication : Application run failed
my config in bootstrap.yaml
aws:
  secretsmanager:
    prefix: secret
    defaultContext: application
    profileSeparator: _
    name: test
I start the application with -Dspring.profiles.active=local.
Update: if I create a secret for secret/test, I then get this next one:
s.AwsSecretsManagerPropertySourceLocator : Loading secrets from AWS Secret Manager secret with name: secret/application_local, optional: false
Currently there is no way to disable the prefix/defaultContext lookup.
If you take a look here, you will see that the prefix/ + defaultContext location is always added and used.
You can check the docs as well to get a clearer picture of what is being loaded and in what order.
My recommendation is to switch to spring.config.import, since that is the direction we are taking Secrets Manager importing. The big difference is that it gives users much more control over which secrets they want to import, since you can specify each key individually. spring.config.import is explained in the docs, or you can check the project sample.
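A minimal sketch of that style (assuming Spring Cloud AWS 2.3+; adjust the key to your secret's exact name):
spring:
  config:
    import: aws-secretsmanager:secret/test_local
With spring.config.import, only the secrets you list are loaded, so nothing is pulled in implicitly for the default context.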

${VAULT_SCHEME} not working in bootstrap.properties

I have configured my Spring Boot application to take properties from my environment, but strangely I am facing an error while starting the application.
I added the properties to my ~/.bash_profile and also ran source ~/.bash_profile after adding them.
This is what my bootstrap.properties looks like:
spring.application.name=gamification
spring.cloud.vault.enabled=${VAULT_ENABLE:true}
spring.cloud.vault.fail-fast=false
spring.cloud.vault.token=${VAULT_TOKEN}
spring.cloud.vault.scheme=${VAULT_SCHEME}
spring.cloud.vault.host=${VAULT_HOST}
spring.cloud.vault.port=${VAULT_PORT:8200}
I am getting this error:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.cloud.vault.config.VaultReactiveBootstrapConfiguration]: Constructor threw exception; nested exception is java.lang.IllegalArgumentException: Scheme must be http or https
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:216) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:117) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:310) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
... 30 common frames omitted
Caused by: java.lang.IllegalArgumentException: Scheme must be http or https
at org.springframework.util.Assert.isTrue(Assert.java:118) ~[spring-core-5.2.4.RELEASE.jar:5.2.4.RELEASE]
at org.springframework.vault.client.VaultEndpoint.setScheme(VaultEndpoint.java:167) ~[spring-vault-core-2.2.0.RELEASE.jar:2.2.0.RELEASE]
at org.springframework.cloud.vault.config.VaultConfigurationUtil.createVaultEndpoint(VaultConfigurationUtil.java:91) ~[spring-cloud-vault-config-2.2.2.RELEASE.jar:2.2.2.RELEASE]
at org.springframework.cloud.vault.config.VaultReactiveBootstrapConfiguration.<init>(VaultReactiveBootstrapConfiguration.java:110) ~[spring-cloud-vault-config-2.2.2.RELEASE.jar:2.2.2.RELEASE]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_231]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_231]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_231]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_231]
at org.springframework.beans.BeanUtils.instantiateClass(BeanUtils.java:203) ~[spring-beans-5.2.4.RELEASE.jar:5.2.4.RELEASE]
... 32 common frames omitted
I added a debug point in VaultEndpoint and found this:
Here, as you can see, VAULT_HOST is being taken literally as the string "VAULT_HOST" instead of the value of that environment variable, and the same goes for VAULT_SCHEME.
[EDIT]
Adding bash_profile vault configuration:
export VAULT_ENABLE=true
export VAULT_SCHEME=http
export VAULT_HOST=vault-1.dev.lokal
export VAULT_PORT=8200
export VAULT_TOKEN=5F97X
[EDIT #2]
Tried out the solution suggested by @Gopinath:
I am getting the environment as null when trying to autowire it.
The root cause of the problem can be found in this error message:
org.springframework.core.convert.ConverterNotFoundException:
No converter found capable of converting
from type [java.lang.String]
to type [org.springframework.cloud.vault.config.VaultProperties$Config]
The above message indicates that the VaultProperties object could not be initialized using the string parameter supplied.
Here is the link to documentation and instructions on configuring VaultProperties:
https://spring.io/guides/gs/vault-config/
Some more information to help understand vault:
References:
Spring Cloud Vault: https://cloud.spring.io/spring-cloud-vault/
Hashicorp Vault: https://www.vaultproject.io
What is a Vault?
A vault is a secure storage space meant for storing secret information.
Hashicorp Vault is one tool that offers vault functionality for cloud applications.
What is Spring Boot Vault?
Spring Boot applications commonly require secret information in order to work.
Some examples of secret information are:
Database password
Private key
API key
Usually, input parameters are passed to a Spring Boot application through the "application.properties" or "bootstrap.properties" file.
Such a properties file poses a security risk if secret data is mentioned in it directly.
Spring Boot Vault addresses this risk: it pulls secret information from the vault and supplies it to the application at start-up time.
The .properties file then only tells the application the names of the parameters it can expect from Vault.
The actual values of the parameters are taken from the vault.
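For instance (a hedged sketch with a hypothetical secret key db.password stored under the application's path in Vault), the code only references the property name, and Spring resolves the value from Vault at startup:
// db.password is never written in a .properties file; Vault supplies it at startup
@Value("${db.password}")
private String databasePassword;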
How to set up Vault?
Step 1: Install and launch HashiCorp Vault from
https://www.vaultproject.io/downloads.html:
Step 2: After installing Vault, test whether it works by launching it in a console window.
> vault server --dev --dev-root-token-id="spring-boot-vault-demo"
==> Vault server configuration:
Api Address: http://127.0.0.1:8200
Cgo: disabled
Cluster Address: https://127.0.0.1:8201
Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
Log Level: info
Mlock: supported: false, enabled: false
Recovery Mode: false
Storage: inmem
Version: Vault v1.4.1
WARNING! dev mode is enabled!
.....
You may need to set the following environment variable:
PowerShell:
$env:VAULT_ADDR="http://127.0.0.1:8200"
cmd.exe:
set VAULT_ADDR=http://127.0.0.1:8200
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: +Dihvgj/oRN2zo6/97ZqpWt086/CFRZEPkuauDu4uQo=
Root Token: spring-boot-vault-demo
Step 3: Store some secret data in the vault,
by running these commands in a separate command window:
> set VAULT_ADDR=http://127.0.0.1:8200
> set VAULT_TOKEN=spring-boot-vault-demo
> vault kv put secret/spring-boot-vault-demo password=££££$$$$%%%%
Key              Value
---              -----
created_time     2020-05-02T09:59:41.2233332Z
deletion_time    n/a
destroyed        false
version          1
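To verify that the secret was stored, you can read it back:
> vault kv get secret/spring-boot-vault-demo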
I did this:
I made a shell script called setenv.sh and put this in it:
#!/bin/bash
launchctl setenv VAULT_ENABLE true
launchctl setenv VAULT_SCHEME http
launchctl setenv VAULT_HOST vault-1.dev.lokal
launchctl setenv VAULT_PORT 8200
launchctl setenv VAULT_TOKEN 5F97X
And then, before starting the application, I ran the shell script with
sudo sh setenv.sh
And the application seems to work fine without any errors. Strangely, if I use my previous approach of adding the env variables inside the .bash_profile, it doesn't work.
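A likely explanation (an assumption, not verified against your setup): variables exported in ~/.bash_profile are only visible to processes started from a login shell, whereas launchctl setenv on macOS makes them visible to launchd-spawned processes such as a GUI-launched IDE. As an alternative sketch (assuming the Maven wrapper ./mvnw is present), you can also pass the variables inline when starting the app from a terminal:
VAULT_ENABLE=true VAULT_SCHEME=http VAULT_HOST=vault-1.dev.lokal \
VAULT_PORT=8200 VAULT_TOKEN=5F97X ./mvnw spring-boot:run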

Azure devops terraform pipeline generate client id and secret

I am using this terraform manifest to deploy AKS on Azure. I can do this via the command line fine and it works, as I have the Azure CLI configured on my machine to generate the client id and secret:
https://github.com/anubhavmishra/terraform-azurerm-aks
However, I am now building this on Azure Devops Pipeline
So far I have managed to run terraform init and plan with backend storage on Azure, using this Azure DevOps extension:
https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
Question: How do I get the client id and secret in the Azure DevOps pipeline and set them as environment variables for terraform? I tried creating a bash az command in the pipeline:
> az ad sp create-for-rbac --role="Contributor"
> --scopes="/subscriptions/YOUR_SUBSCRIPTION_ID"
but failed with this error
> 2019-03-27T10:41:58.1042923Z
2019-03-27T10:41:58.1055624Z Setting AZURE_CONFIG_DIR env variable to: /home/vsts/work/_temp/.azclitask
2019-03-27T10:41:58.1060006Z Setting active cloud to: AzureCloud
2019-03-27T10:41:58.1069887Z [command]/usr/bin/az cloud set -n AzureCloud
2019-03-27T10:41:58.9004429Z [command]/usr/bin/az login --service-principal -u *** -p *** --tenant ***
2019-03-27T10:42:00.0695154Z [
2019-03-27T10:42:00.0696915Z {
2019-03-27T10:42:00.0697522Z "cloudName": "AzureCloud",
2019-03-27T10:42:00.0698958Z "id": "88bfee03-551c-4ed3-98b0-be68aee330bb",
2019-03-27T10:42:00.0704752Z "isDefault": true,
2019-03-27T10:42:00.0705381Z "name": "Visual Studio Enterprise",
2019-03-27T10:42:00.0706362Z "state": "Enabled",
2019-03-27T10:42:00.0707434Z "tenantId": "***",
2019-03-27T10:42:00.0716107Z "user": {
2019-03-27T10:42:00.0717485Z "name": "***",
2019-03-27T10:42:00.0718161Z "type": "servicePrincipal"
2019-03-27T10:42:00.0718675Z }
2019-03-27T10:42:00.0719185Z }
2019-03-27T10:42:00.0719831Z ]
2019-03-27T10:42:00.0728173Z [command]/usr/bin/az account set --subscription 88bfee03-551c-4ed3-98b0-be68aee330bb
2019-03-27T10:42:00.8569816Z [command]/bin/bash /home/vsts/work/_temp/azureclitaskscript1553683312219.sh
2019-03-27T10:42:02.4431342Z ERROR: Directory permission is needed for the current user to register the application. For how to configure, please refer 'https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal'. Original error: Insufficient privileges to complete the operation.
2019-03-27T10:42:02.5271752Z [command]/usr/bin/az account clear
2019-03-27T10:42:03.3092558Z ##[error]Script failed with error: Error: /bin/bash failed with return code: 1
2019-03-27T10:42:03.3108490Z ##[section]Finishing: Azure CLI
Here is how I do it with Azure Pipelines.
Create a Service Principal for Terraform.
Create the following variables in your pipeline
ARM_CLIENT_ID
ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID
ARM_TENANT_ID
If you choose to store ARM_CLIENT_SECRET as a secret in Azure DevOps, you will need to map it explicitly under the Environment Variables section of the task so that it is decrypted and terraform can read it, as sketched below.
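A minimal sketch (the script body is illustrative; adapt it to your terraform task):
- script: terraform plan
  env:
    ARM_CLIENT_ID: $(ARM_CLIENT_ID)
    # secret variables are not exposed to scripts unless mapped explicitly
    ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
    ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
    ARM_TENANT_ID: $(ARM_TENANT_ID)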
You just need to grant your service connection the rights to create service principals. But I'd generally advise against that: pre-create a service principal and use it in your pipeline instead. Creating a new service principal on each run seems excessive.
You can use build/release variables and populate those with the client id/secret.
The approach defined in the post https://medium.com/@maninder.bindra/creating-a-single-azure-devops-yaml-pipeline-to-provision-multiple-environments-using-terraform-e6d05343cae2 can be considered as well. Here the Key Vault task is used to fetch the secrets from Azure Key Vault (these include terraform backend access secrets as well as AKS sp secrets):
#KEY VAULT TASK
- task: AzureKeyVault@1
  inputs:
    azureSubscription: '$(environment)-sp'
    KeyVaultName: '$(environment)-pipeline-secrets-kv'
    SecretsFilter: 'tf-sp-id,tf-sp-secret,tf-tenant-id,tf-subscription-id,tf-backend-sa-access-key,aks-sp-id,aks-sp-secret'
  displayName: 'Get key vault secrets as pipeline variables'
And then you can use the secrets as variables in the rest of the pipeline. For instance, aks-sp-id can be referred to as $(aks-sp-id). So the bash/azure-cli task can be something like:
# AZ LOGIN USING TERRAFORM SERVICE PRINCIPAL
- script: |
    az login --service-principal -u $(tf-sp-id) -p $(tf-sp-secret) --tenant $(tf-tenant-id)
    cd $(System.DefaultWorkingDirectory)/tf-infra-provision
Followed by terraform init and plan (plan shown below; see the post for complete pipeline details):
# TERRAFORM PLAN
echo '#######Terraform Plan########'
terraform plan -var-file=./tf-vars/$(tfvarsFile) -var="client_id=$(tf-sp-id)" -var="client_secret=$(tf-sp-secret)" -var="tenant_id=$(tf-tenant-id)" -var="subscription_id=$(tf-subscription-id)" -var="aks_sp_id=$(aks-sp-id)" -var="aks_sp_secret=$(aks-sp-secret)" -out="out.plan"
Hope this helps.

Executing maven unit tests on a Google Cloud SQL environment

I have a Jenkins pod running in GCP's Kubernetes Engine and I'm trying to run a Maven unit test that connects to a Google Cloud SQL database. My application.yaml for the project looks like this:
spring:
  cloud:
    gcp:
      project-id: <my_project_id>
      sql:
        database-name: <my_database_name>
        instance-connection-name: <my_instance_connection_name>
  jpa:
    database-platform: org.hibernate.dialect.MySQL55Dialect
    hibernate:
      ddl-auto: create-drop
  datasource:
    continue-on-error: true
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: <my_cloud_sql_username>
    password: <my_cloud_sql_password>
The current Jenkinsfile associated with this project is:
Jenkinsfile:
pipeline {
    agent any
    tools {
        maven 'Maven 3.5.2'
        jdk 'jdk8'
    }
    environment {
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
        DEV_DB_USER = "${env.DEV_DB_USER}"
        DEV_DB_PASSWORD = "${env.DEV_DB_PASSWORD}"
    }
    stages {
        stage('Build docker image') {
            steps {
                sh 'mvn -Dmaven.test.skip=true clean package'
                script {
                    docker.build '$IMAGE:$VERSION'
                }
            }
        }
        stage('Run unit tests') {
            steps {
                withEnv(['GCLOUD_PATH=/var/jenkins_home/google-cloud-sdk/bin']) {
                    withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
                        sh 'mvn test'
                    }
                }
            }
        }
    }
}
My problem is that when the pipeline actually tries to run mvn test using the above configuration (in my application.yaml), I get this error:
Caused by:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403
Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "Insufficient Permission: Request had insufficient authentication scopes.",
    "reason" : "insufficientPermissions"
  } ],
  "message" : "Insufficient Permission: Request had insufficient authentication scopes."
}
I have two Google Cloud projects:
One that has the Kubernetes Cluster where the Jenkins pod is running.
Another project where the K8s Cluster contains my actual Spring Boot Application and the Cloud SQL database that I'm trying to access.
I also created a service account, only in my Spring Boot project, for Jenkins to use, with three roles: Cloud SQL Editor, Kubernetes Engine Cluster Admin, and Project Owner (the last one to verify that the service account is not at fault).
I enabled the Cloud SQL, Cloud SQL Admin, and Kubernetes APIs in both projects, and I double-checked my Cloud SQL credentials: they are OK. In addition, I authenticated the Jenkins pipeline using the JSON file generated when I created the service account, following the recommendations discussed here:
Jenkinsfile (extract):
...
withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
    sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
    sh 'mvn test'
}
...
I don't believe the GCP Java SDK relies on the gcloud CLI at all. Instead, it looks for an environment variable GOOGLE_APPLICATION_CREDENTIALS that points to your service account key file, and for GCLOUD_PROJECT (see https://cloud.google.com/docs/authentication/getting-started).
Try setting both of these. Note that each sh step runs in its own shell, so separate sh("export ...") calls would not survive into the later sh 'mvn test'; wrapping the test step keeps the variables in scope:
withEnv(["GOOGLE_APPLICATION_CREDENTIALS=${GC_KEY}", "GCLOUD_PROJECT=<project_id>"]) {
    sh 'mvn test'
}
There are a couple of different things you should verify to get this working. I'm assuming you are using the Cloud SQL JDBC SocketFactory for Cloud SQL.
You should create a testing service account and give it whatever permissions are needed to execute the tests. To connect to Cloud SQL, it needs at a minimum the "Cloud SQL Client" role for the same project as the Cloud SQL instance.
The Cloud SQL SF uses the Application Default Credentials (ADC) strategy for determining what authentication to use. This means the first place it looks for credentials is the GOOGLE_APPLICATION_CREDENTIALS env var, which should be a path to the key for the testing service account.
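For example, a minimal sketch in the Jenkinsfile (assuming a hypothetical credentialsId 'test-sa-key' holding the testing service account's JSON key):
withCredentials([file(credentialsId: 'test-sa-key', variable: 'TEST_SA_KEY')]) {
    // ADC finds the key through GOOGLE_APPLICATION_CREDENTIALS before mvn runs
    withEnv(["GOOGLE_APPLICATION_CREDENTIALS=${TEST_SA_KEY}"]) {
        sh 'mvn test'
    }
}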
