AWS Secrets Manager with Spring Boot Application

I tried to fetch a Secrets Manager value using this answer:
How to integrate AWS Secret Manager with Spring Boot Application
But my application fetches secrets twice: first, as intended, with the local profile, but then a second time without any profile. Why does the application go to Secrets Manager a second time, and how can I turn this off?
2021-08-19 11:40:01.214 INFO 9141 --- [ restartedMain] s.AwsSecretsManagerPropertySourceLocator : Loading secrets from AWS Secret Manager secret with name: secret/test_local, optional: false
2021-08-19 11:40:02.702 INFO 9141 --- [ restartedMain] s.AwsSecretsManagerPropertySourceLocator : Loading secrets from AWS Secret Manager secret with name: secret/test, optional: false
2021-08-19 11:40:02.956 ERROR 9141 --- [ restartedMain] o.s.boot.SpringApplication : Application run failed
My config in bootstrap.yaml:
aws:
  secretsmanager:
    prefix: secret
    defaultContext: application
    profileSeparator: _
    name: test
I start the application with -Dspring.profiles.active=local.
Update: if I create a secret for secret/test, I then get the following:
s.AwsSecretsManagerPropertySourceLocator : Loading secrets from AWS Secret Manager secret with name: secret/application_local, optional: false

Currently there is no way to disable the prefix/defaultContext lookup.
If you take a look here, you will see that prefix/ + defaultContext is always added and loaded.
You can check the docs as well for a clearer picture of what is loaded and in what order.
My recommendation is to switch to spring.config.import, since that is the direction we are taking Secrets Manager importing. The big difference is that it gives users much more control over which secrets they import, since you can specify each key individually. spring.config.import is explained in the docs, or you can check the project sample.
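As an illustration, a minimal sketch of the spring.config.import approach, assuming Spring Cloud AWS 2.4+ (where the aws-secretsmanager: import prefix is supported); the secret name secret/test_local is taken from the question, the second entry is a hypothetical example:

```yaml
# application.yml -- import only the secrets you actually want
spring:
  config:
    import:
      - aws-secretsmanager:secret/test_local
      # the standard Spring Boot "optional:" prefix keeps startup
      # from failing if the secret does not exist
      - optional:aws-secretsmanager:secret/some-other-secret
```

With this approach there is no implicit prefix/defaultContext resolution: only the listed secrets are loaded.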

Related

Spring Boot app in Docker container not starting in Cloud Run after building successfully - cannot access jarfile

I've set up continuous deployment to Cloud Run from GitHub for my Spring Boot project, and while it's successfully building in Cloud Build, when I go over to Cloud Run, I get the following error under Creating Revision:
The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable.
When I go over to the Logs, I see the following errors:
2022-09-23 09:42:47.881 BST
Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar
{
  insertId: "632d7187000d739d29eb84ad"
  labels: {5}
  logName: "projects/educity-manager/logs/run.googleapis.com%2Fstderr"
  receiveTimestamp: "2022-09-23T08:42:47.883252595Z"
  resource: {2}
  textPayload: "Error: Unable to access jarfile /app/target/educity-manager-0.0.1-SNAPSHOT.jar"
  timestamp: "2022-09-23T08:42:47.881565Z"
}
2022-09-23 09:43:48.800 BST
run.googleapis.com
…ager/revisions/educity-manager-00011-fod
Ready condition status changed to False for Revision educity-manager-00011-fod with message: Deploying Revision.
{
  insertId: "w6ptr6d20ve"
  logName: "projects/educity-manager/logs/cloudaudit.googleapis.com%2Fsystem_event"
  protoPayload: {
    #type: "type.googleapis.com/google.cloud.audit.AuditLog"
    resourceName: "namespaces/educity-manager/revisions/educity-manager-00011-fod"
    response: {6}
    serviceName: "run.googleapis.com"
    status: {2}
  }
  receiveTimestamp: "2022-09-23T08:43:49.631015104Z"
  resource: {2}
  severity: "ERROR"
  timestamp: "2022-09-23T08:43:48.800371Z"
}
Dockerfile is as follows (and looking at the build log all of the commands in it completed successfully):
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY . /app
ENTRYPOINT [ "java","-jar","/app/target/educity-manager-0.0.1-SNAPSHOT.jar" ]
I've read that Cloud Run defaults to exposing Port 8080, but just to be on the safe side I've put server.port=${PORT:8080} in my application.properties file (but it seems to make no difference one way or the other).
I have run into similar issues in the past. Usually, I am able to resolve them by:
specifying the port in the application itself (as you indicated in your post), and
exposing the required port in my Dockerfile, e.g. EXPOSE 8080
Oh my good god I have done it. After two full days of digging, I realised that because I was doing it through github, my .gitignore file was excluding the /target folder containing the jar file, so Cloud Build never got the jar file mentioned in the Dockerfile.
I am going to have a cry and then go to the pub.
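Since the root cause was .gitignore excluding the /target folder (so the jar never reached Cloud Build), one way to sidestep this whole class of problem is to build the jar inside the container with a multi-stage Dockerfile. A sketch, assuming a standard Maven layout; the image tags are illustrative:

```dockerfile
# Stage 1: build the jar inside the container, so the local target/
# directory (and .gitignore) no longer matter
FROM maven:3.8-openjdk-17 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: runtime image with a non-root user, as in the original Dockerfile
FROM openjdk:17-jdk-alpine
RUN addgroup -S spring && adduser -S spring -G spring
USER spring:spring
COPY --from=build /app/target/educity-manager-0.0.1-SNAPSHOT.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

This also keeps build artifacts out of the final image, which stays small and reproducible regardless of what the repository contains.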

Spring Cloud Config Server spams "Adding property source" during health check [Spring Boot 2.6+]?

For a Spring Cloud Config Server project, we recently migrated from Spring Boot 2.1.8.RELEASE to 2.6.6. However, the application is now flooded with the logs below, which eventually leads to the k8s pod crashing/restarting. The INFO log is generated each time /actuator/health is called by kube-probe.
2022-08-30 19:20:19.182 INFO [config-server,5bd83ee81e7d3ccb,e17a13026d9c85ee] 1 --- [nio-8888-exec-5] o.s.c.c.s.e.NativeEnvironmentRepository : Adding property source: Config resource 'file [{spring.cloud.config.server.git.basedir}/application.yml]' via location 'file:{spring.cloud.config.server.git.basedir}'
2022-08-30 19:20:19.543 INFO [config-server,7557d9d04d71f6c7,a3d5954fe6ebbab1] 1 --- [nio-8888-exec-8] o.s.c.c.s.e.NativeEnvironmentRepository : Adding property source: Config resource 'file [{spring.cloud.config.server.git.basedir}/application.yml]' via location 'file:{spring.cloud.config.server.git.basedir}'
...
Note that I have replaced the actual file path to config repo in the container with spring.cloud.config.server.git.basedir.
Is there something we missed about how Spring Cloud Config Server behaves differently since the update? Or how can we stop the health check endpoint from adding a property source? EnvironmentController.java seems to be the culprit.
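Two settings worth trying (hedged: both are standard Spring Cloud Config Server / Spring Boot properties, but verify against your exact version): disabling the config server's own health indicator, which resolves an environment on every /actuator/health call, and quieting the logger that emits the "Adding property source" line:

```yaml
spring:
  cloud:
    config:
      server:
        health:
          enabled: false   # stop the health indicator from resolving environments

logging:
  level:
    # silence the "Adding property source" INFO lines if you keep the indicator
    org.springframework.cloud.config.server.environment: WARN
```

Disabling the indicator means /actuator/health no longer exercises the config repository; whether that is acceptable depends on what your probes are meant to verify.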

JHipster H2 DB Non-admin User

I am trying to run my Spring Boot/Liquibase/H2 database with a non-admin user and am having some trouble understanding how to do this.
First off, I have seen some information here and tried to set up my application.yml this way:
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:h2:mem:test
  username: USERLIMITED
  password: user_limited_password
liquibase:
  contexts: dev, faker
  user: THELIQUIBASEUSER
  password: THELIQUIBASEPASSWORD
I also put these SQL statements in the changelog so that the user I want is created and given the proper access controls:
<sql>DROP USER IF EXISTS USERLIMITED</sql>
<sql>CREATE USER USERLIMITED PASSWORD 'user_limited_password'</sql>
<sql>GRANT ALL ON APP TO USERLIMITED</sql>
When trying to start up the app, I get the following error:
2020-10-21 14:41:18.532 DEBUG 8704 --- [ restartedMain] c.c.config.LiquibaseConfiguration : Configuring Liquibase
2020-10-21 14:41:18.617 WARN 8704 --- [ test-task-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase asynchronously, your database might not be
ready at startup!
2020-10-21 14:41:20.226 ERROR 8704 --- [ restartedMain] com.zaxxer.hikari.pool.HikariPool : Hikari - Exception during pool initialization.
org.h2.jdbc.JdbcSQLInvalidAuthorizationSpecException: Wrong user name or password [28000-200]
Interestingly, if I change the LiquibaseConfiguration file to use synchronous DB configuration instead of the default asynchronous one, I do not get an error:
// If you don't want Liquibase to start asynchronously, substitute by this:
SpringLiquibase liquibase = SpringLiquibaseUtil.createSpringLiquibase(liquibaseDataSource.getIfAvailable(), liquibaseProperties, dataSource.getIfUnique(), dataSourceProperties);
// SpringLiquibase liquibase = SpringLiquibaseUtil.createAsyncSpringLiquibase(this.env, executor, liquibaseDataSource.getIfAvailable(), liquibaseProperties, dataSource.getIfUnique(), dataSourceProperties);
Then, if I go to the H2 console and run a query to see my users, I have only the one admin user (which should be non-admin). The liquibase user that I set up in the yml,
user: THELIQUIBASEUSER
password: THELIQUIBASEPASSWORD
is not there, and when I try to log in as it I get the Wrong user name or password [28000-200] error.
This leads me to believe it is something to do with how the application starts up and asynchronous task execution priority.
Any help is very much appreciated!

Token authentication not working when Hashicorp vault is sealed

I'm working on a sample application where I want to connect to Hashicorp Vault to get the DB credentials. Below is the bootstrap.yml of my application:
spring:
  application:
    name: phonebook
  cloud:
    config:
      uri: http://localhost:8888/
    vault:
      uri: http://localhost:8200
      authentication: token
      token: s.5bXvCP90f4GlQMKrupuQwH7C
  profiles:
    active:
      - local,test
The application builds properly when the Vault server is unsealed, and Maven fetches the database username from Vault correctly. When I run the build after sealing the Vault, it fails with the error below.
org.springframework.vault.VaultException: Status 503 Service Unavailable [secret/application]: error performing token check: Vault is sealed; nested exception is org.springframework.web.client.HttpServerErrorException$ServiceUnavailable: 503 Service Unavailable: [{"errors":["error performing token check: Vault is sealed"]}
How can I resolve this? I want Maven to get the DB username and password during the build without any issues from Vault, even when it is sealed.
It's a feature of Vault that it is not simple static storage: after any change in the environment, you need to perform some actions to get back to a stable, working system.
My advice: create a script (or scripts) to automate the process.
Example: I have a multi-service system, and some of my services use Vault to get their configuration.
init.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault operator unseal <token1>
vault operator unseal <token2>
vault operator unseal <token3>
vault login <main token>
vault secrets enable -path=<path>/ -description="secrets for My projects" kv
vault auth enable approle
vault policy write application-policy-dev ./application-policy-DEV.hcl
application.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault login <main token>
vault delete <secret>/<app_path>
vault delete sys/policy/<app>-policy
vault delete auth/approle/role/<app>-role
vault kv put <secret>/<app_path> - < <(yq m ./application.yaml)
vault policy write <app>-policy ./<app>-policy.hcl
vault write auth/approle/role/<app>-role token_policies="application-policy"
role_id=$(vault read auth/approle/role/<app>-role/role-id -format="json" | jq -r '.data.role_id')
secret_id=$(vault write auth/approle/role/<app>-role/secret-id -format="json" | jq -r '.data.secret_id')
token=$(vault write auth/approle/login role_id="${role_id}" secret_id=${secret_id} -format="json" | jq -r '.auth.client_token')
echo 'Token:' ${token}
where <app> is the name of your application, application.yaml is the file with the configuration, and <app>-policy.hcl is the file with the policy.
Of course, none of these files should be public; they are for Vault administration only.
On any change in the environment, or when the Vault period expires, just run init.sh. To get a token for the application, run application.sh. Likewise, if you need to change a configuration parameter, change it in application.yaml, run application.sh, and use the resulting token.
Script result (for one of my services):
Key                  Value
---                  -----
token                *****
token_accessor       *****
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Success! Data deleted (if it existed) at: <secret>/<app>
Success! Data deleted (if it existed) at: sys/policy/<app>-policy
Success! Data deleted (if it existed) at: auth/approle/role/<app>-role
Success! Data written to: <secret>/<app>
Success! Uploaded policy: <app>-policy
Success! Data written to: auth/approle/role/<app>-role
Token: s.dn2o5b7tvxHLMWint1DvxPRJ
Process finished with exit code 0

Stream deployment failing with CloudFoundryAppDeployer: Error: Organization does not exist

I have deployed the Data Flow server and Skipper successfully on Cloud Foundry, but when I try to deploy a stream with all the deployer properties configured, it complains that the org doesn't exist.
I have tried configuring different properties from the Data Flow web UI, but when I deploy the stream it fails with the error: org doesn't exist. I provided all the Cloud Foundry credentials, the same ones I provided to Skipper and the server, which work fine; the stream inside Data Flow just does not seem to pick up the Cloud Foundry app deployer properties.
(screenshot: deployment properties entered in the Data Flow web UI)
I am getting the error message below:
2019-07-23T09:48:37.50-0400 [APP/PROC/WEB/0] OUT 2019-07-23 13:48:37.509 INFO 9 --- [eTaskExecutor-3] o.s.c.s.s.s.StateMachineConfiguration : Entering state ObjectState [getIds()=[INSTALL_INSTALL], getClass()=class org.springframework.statemachine.state.ObjectState, hashCode()=444730043, toString()=AbstractState [id=INSTALL_INSTALL, pseudoState=org.springframework.statemachine.state.DefaultPseudoState#49b9c289, deferred=[], entryActions=[org.springframework.cloud.skipper.server.statemachine.InstallInstallAction#6981f8f3], exitActions=[], stateActions=[], regions=[], submachine=null]]
2019-07-23T09:48:38.44-0400 [APP/PROC/WEB/0] OUT 2019-07-23 13:48:38.440 INFO 9 --- [eTaskExecutor-3] o.s.c.d.s.c.AbstractCloudFoundryDeployer : Preparing to push an application from org.springframework.cloud.stream.app:log-sink-rabbit:jar:2.1.1.RELEASE. This may take some time if the artifact must be downloaded from a remote host.
2019-07-23T09:48:41.70-0400 [APP/PROC/WEB/0] OUT 2019-07-23 13:48:41.708 ERROR 9 --- [eTaskExecutor-3] o.s.c.d.s.c.CloudFoundryAppDeployer : Error: Organization RE-Pheonix-DataFlow-NonProd does not exist creating app DAu4sEO-MyStream1-log-v1
2019-07-23T09:48:41.72-0400 [APP/PROC/WEB/0] OUT 2019-07-23 13:48:41.719 ERROR 9 --- [eTaskExecutor-3] o.s.c.d.s.c.AbstractCloudFoundryDeployer : Failed to deploy DAu4sEO-MyStream1-log-v1
2019-07-23T09:48:41.72-0400 [APP/PROC/WEB/0] OUT java.lang.IllegalArgumentException: Organization RE-Pheonix-DataFlow-NonProd does not exist
2019-07-23T09:48:41.72-0400 [APP/PROC/WEB/0] OUT at org.cloudfoundry.util.ExceptionUtils.illegalArgument(ExceptionUtils.java:45) ~[cloudfoundry-util-3.15.0.RELEASE.jar!/:na]
Following are my deploy properties:
(screenshots: deployer properties and Skipper configuration: https://i.stack.imgur.com/cVXKf.png)
For the stream deployment, the Cloud Foundry connection properties (org, space, url, username, password, skipSslValidation) are obtained either from the global Skipper configuration properties for the chosen platform or from the deployment properties set when deploying the stream.
You should be able to provide the org connection property as a deployment property when deploying the stream.
If you don't specify it as a deployer property when deploying the stream, then the Skipper configuration for the corresponding platform is used.
Can you share how you configured your Skipper Cloud Foundry connection properties for the chosen platform?
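For reference, the Skipper platform account configuration for Cloud Foundry typically has the shape sketched below (all values are placeholders); note that the org must exactly match an org that the given user can see, including case:

```yaml
spring:
  cloud:
    skipper:
      server:
        platform:
          cloudfoundry:
            accounts:
              default:
                connection:
                  url: https://api.run.example.com
                  org: my-org        # must match the CF org name exactly
                  space: my-space
                  username: cf-user
                  password: cf-password
                  skipSslValidation: true
```

If the org in this configuration (or in the overriding deployment property) differs from what `cf orgs` shows for that user, the deployer fails with exactly the "Organization ... does not exist" error from the logs above.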
