Getting Kubernetes ConfigMap values into application.properties in Spring Boot

In our project we deploy our Spring Boot based microservices to Azure using Azure Kubernetes Service, and we have a Jenkins job that creates ConfigMaps with the appropriate DB names using the Azure CLI. Now I want to get the DB name values from the ConfigMap created by Jenkins into my Spring Boot application.properties.
The Jenkins job uses the following code to create a ConfigMap in AKS:
sh '''
kubectl --kubeconfig ./temp-config create configmap ${PSQL_CONFIG} -n "${HEC_NAMESPACE}" \
--from-literal=hec.postgres.host=${PSQL_SERVER} \
--from-literal=hec.postgres.dbNames=[${DB_NAMES}] \
--dry-run=client -o yaml | kubectl --kubeconfig ./temp-config apply ${DRYRUN} -f -
'''
Now, to get the value of the DB_NAMES variable in my Spring Boot app:
1. Should I create a ConfigMap inside the Spring Boot project and load the DB values from it?
2. Or should I reference the DB_NAMES variable in application.properties, like
hec.postgres.db-name={DB_NAMES}
Once the DB_NAMES values are populated, I can use them however I want in my code.
Please let me know which approach is better.

Yes, define the DB_NAMES key and its value in the ConfigMap object, expose the ConfigMap values as environment variables inside the container, and then reference those environment variables in the Spring Boot properties file.
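A minimal sketch of how those pieces connect (a Deployment fragment; the Deployment, container, and environment variable names are placeholders, and psql-config stands in for whatever ${PSQL_CONFIG} resolves to):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hec-service                  # placeholder name
spec:
  template:
    spec:
      containers:
        - name: hec-service          # placeholder name
          image: hec-service:latest  # placeholder image
          env:
            - name: HEC_POSTGRES_DBNAMES   # assumed env var name
              valueFrom:
                configMapKeyRef:
                  name: psql-config        # whatever ${PSQL_CONFIG} resolves to
                  key: hec.postgres.dbNames

The environment variable can then be referenced from application.properties with the normal placeholder syntax, e.g. hec.postgres.db-names=${HEC_POSTGRES_DBNAMES}.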

Related

Environment variables for Spring Cloud Config in Docker

So, I am learning about microservices (I am a beginner) and I'm facing an issue. I've gone through the Spring Cloud Config and Docker docs hoping to find a solution, but I didn't find one.
I have an app with 3 microservices (Spring Boot) and 1 config server (Spring Cloud Config). I'm using a private Github repository for storing config files and this is my application.properties file for config server:
spring.profiles.active=git
spring.cloud.config.server.git.uri=https://github.com/username/microservices-config.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.default-label=master
spring.cloud.config.server.git.username=${GIT_USERNAME}
spring.cloud.config.server.git.password=${GIT_ACCCESS_TOKEN}
I have a Dockerfile based on which I have created a Docker image for config server (with no problems). I created a docker-compose.yml which I use to create and run containers, but it fails because of an exception in my cloud config app. The exception is:
org.eclipse.jgit.api.errors.TransportException: https://github.com/username/microservices-config.git: not authorized
Which basically means that my environment variables GIT_USERNAME and GIT_ACCCESS_TOKEN (which I set up in IntelliJ's "Edit Configurations" and use in application.properties) are not available to the config server in a container.
The question is: do I need to somehow add those environment variables to the .jar, to the Docker image, or to the Docker container? I'm not sure how to make them available to the config server when it runs in a container.
Any help or explanation is welcomed :)
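One common way to handle this (a sketch, assuming the config server runs as a docker-compose service; the service and image names are placeholders) is to pass the variables through docker-compose rather than relying on the IDE run configuration:

services:
  config-server:
    image: config-server:latest              # placeholder image name
    environment:
      - GIT_USERNAME=${GIT_USERNAME}             # resolved from the host shell or an .env file
      - GIT_ACCCESS_TOKEN=${GIT_ACCCESS_TOKEN}   # variable name kept as in the question

docker-compose substitutes ${GIT_USERNAME} and ${GIT_ACCCESS_TOKEN} from the shell that runs it (or from an .env file next to docker-compose.yml), and the container then sees them as ordinary environment variables that the ${...} placeholders in application.properties can resolve.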

How can I natively load docker secrets mounted as a volume in spring boot

How can my spring boot application running inside a container access docker secrets mounted as a volume inside that same container?
Commonly suggested methods I DO NOT want to use:
Echo the secret into an environment variable - this is insecure.
Pass the secret in as a command line argument - this is messy with multiple secrets and hurts the local dev experience
Manually read the secret from a file by implementing my own property loader - Spring must have a native way to do this
Spring Boot 2.4 introduced this feature as part of volume-mounted config trees, via the spring.config.import property.
To read Docker secrets mounted as a volume in the default location, set:
spring.config.import = configtree:/run/secrets/
The property name will be derived from the secret filename and the value from the file contents.
This can then be accessed like any other spring property.
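For example (a sketch; db-password is a hypothetical secret name), if a secret is mounted as the file /run/secrets/db-password, the config tree exposes a property named db-password whose value is the file contents, so it can be wired up like any other placeholder:

spring.config.import=configtree:/run/secrets/
spring.datasource.password=${db-password}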

GCP Cloud Run Spring App - How to specify flyway volume

In my Spring app, I externalized the Flyway migration location:
spring.flyway.locations=filesystem:/var/migration
When using docker run, I specify:
docker run --name papp -p 9000:2000 -v "C:...\db\migration":/var/migration
How can I specify this in Cloud Run >> Variables and Secrets?
What values do I need to supply for -v "C:...\db\migration":/var/migration?
(I created a bucket and uploaded the files to Cloud Storage, assuming the files should go there.)
The form you have there is described as:
Mount each secret as a volume, which makes the secret available to the container as files.
If you cannot explain how /var/migration would be a "secret", this is probably the wrong approach. You likely cannot mount that volume as a secret; with plain Docker you would just mount it with docker run -v.
But there are secrets for Flyway, which you could mount, e.g. as a flyway.properties file:
flyway.user=databaseUser
flyway.password=databasePassword
flyway.schemas=schema1,schema2
And generally all the other configuration parameters, as well:
https://flywaydb.org/documentation/configuration/parameters/
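If the goal is to provide such a flyway.properties to a Cloud Run service as a secret file, a rough sketch of the CLI form (the service name, image, region, mount path, and the Secret Manager secret name flyway-config are all assumptions, and the exact flag syntax should be checked against the current gcloud docs) might look like:

gcloud run deploy my-service \
  --image gcr.io/my-project/my-app \
  --region us-central1 \
  --set-secrets=/config/flyway.properties=flyway-config:latest

The SQL migration files themselves are not secrets, though; they are usually baked into the container image (for example under classpath:db/migration, or copied to /var/migration in the Dockerfile) rather than mounted at runtime.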

How to do Spring-boot configuration for many clients

I am wondering how to resolve a problem: I have a Spring Boot app in Docker that connects to a DB and some other service.
Some clients will probably have the DB at different URLs than others.
I use the spring.datasource.url property to connect to the DB. Should I add it to the program args and use:
Properties properties = new Properties();
properties.put("spring.datasource.url", args[1]);
application.setDefaultProperties(properties);
Would something like that override it? But then every run would require passing the DB URL. Or should I use something else?
The datasource URL could be read as a variable from the docker-compose file.
Assume this is your docker-compose file:
version: '2'
services:
  db:
    image: customimage_mysql
    restart: always
    ports:
      - "3306:3306"
  application:
    build: .
    ports:
      - "9111:9111"
    depends_on:
      - db
    links:
      - db
    environment:
      - database.url=jdbc:mysql://mysql-docker-container:3306/spring_app_db?
Now you have two options:
1. set different values for database.url inside the docker-compose file and build an image for each app accordingly, or
2. set different variables (database1.url, database2.url, database3.url, ...) inside the docker-compose file and reference them from application.properties:
spring.datasource.url=${database.url}
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.database-platform=org.hibernate.dialect.MySQLDialect
spring.jpa.hibernate.ddl-auto=update
spring.jpa.generate-ddl=true
spring.jpa.show-sql=true
server.port=9111
According to the information you have provided here, the database link should be a configuration item of your application. Basically you need a configuration file:
application.properties
When you want to change the URL, just change it in the configuration file and rebuild.
You can find the documentation here.
Moreover, if you are using a DevOps environment like Kubernetes, you would have a ConfigMap, and your deployments would get their configuration from those ConfigMaps, which act like application.properties files.
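A sketch of that shape (all names are placeholders): a ConfigMap can carry the whole application.properties and be mounted into the pod as a file, so the same image serves every client:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                     # placeholder name
data:
  application.properties: |
    spring.datasource.url=jdbc:mysql://client-db-host:3306/spring_app_db

The Deployment then mounts app-config as a volume (for example at /config) and the app is pointed at it with spring.config.additional-location=/config/ (or the SPRING_CONFIG_ADDITIONAL_LOCATION environment variable), so each client's cluster supplies its own ConfigMap while the image stays identical.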
If you've got lots of deployments, each with their own database, then that will take some management one way or another. But you don't want it to require lots of builds of your app; you want to externalise that configuration.
Spring Boot has features for externalising configuration (https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-external-config.html). The simplest for your case would be to use environment variables, which override properties via relaxed binding of names (https://github.com/spring-projects/spring-boot/wiki/Relaxed-Binding-2.0). If the app is started with an environment variable named SPRING_DATASOURCE_URL, that value will override whatever you have in your properties for spring.datasource.url. Your properties file effectively sets a default value that you can override. This is out-of-the-box behaviour for Spring Boot and applies to other properties too (including all the DB ones, though if you've got databases of different types you'll want to include all the relevant driver jars in your build).
Since you're using docker you can set environment variables in the container at deploy/startup time using a -e parameter. So you can override at deploy time for each deployed instance.
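For example (the image name and URL are placeholders):

docker run -e SPRING_DATASOURCE_URL=jdbc:mysql://client-db-host:3306/spring_app_db my-app:latest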
You might well use further layers on top of Docker, like docker-compose or Kubernetes. There you might set the environment variables in the deployment descriptor files that describe your deployment configuration. But that configuration-management question sits at a different layer/stage and is no longer part of the build step once your config is externalised.

Where to set env variables for local Spring Cloud Dataflow?

For development, I'm using the local Spring Cloud Dataflow server on my Mac, though we plan to deploy to a Kubernetes cluster for integration testing and production. The SCDF docs say you can use environment variables to configure various things, like the database configuration. I'd like my registered app to use these env variables, but it doesn't seem to be able to see them. That is, I start the SCDF server by running its jar from a terminal window, which can see a set of environment variables. I then configure a stream using some Spring Cloud Stream starter apps and one custom Spring Boot app. I have the custom app logging System.getenv(), and it's not showing the env variables I need. I set them in my ~/.bashrc file, which I also source from ~/.bash_profile. That works for my terminal windows and most other things that need the environment, but not here. Where should I be defining them?
To the points in the first answer and comments: they sound good, but nothing works for me. I have an SQS Source that gets its connection via:
return AmazonSQSAsyncClientBuilder.standard()
        .withRegion(Regions.US_WEST_2.getName())
        .build();
When I deploy to a Minikube environment, I edit the sqs app's deployment and set the AWS credentials in the env section. Then it works. For a local deployment, I've now tried:
stream deploy --name greg1 --properties "deployer.sqs.AWS_ACCESS_KEY_ID=<id>,deployer.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "deployer.sqs.aws_access_key_id=<id>,deployer.sqs.aws_secret_access_key=<secret>"
stream deploy --name greg1 --properties "app.sqs.AWS_ACCESS_KEY_ID=<id>,app.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "app.sqs.aws_access_key_id=<id>,app.sqs.aws_secret_access_key=<secret>"
All fail with the error message I get when the credentials are wrong, which is "The specified queue does not exist for this wsdl version." I've read the links and don't really see anything else to try. Where am I going wrong?
You can pass environment variables to the apps that are deployed via SCDF using application properties or deployment properties. Check the docs for a description of each type.
For example:
dataflow:> stream deploy --name ticktock --properties "deployer.time.local.javaOpts=-Xmx2048m -Dtest=foo"
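For the AWS credentials specifically, one possible route (a sketch that assumes the SQS source app is built on Spring Cloud AWS, so the property names below are an assumption rather than something confirmed here) is to pass them as application properties instead of raw environment variable names:

dataflow:> stream deploy --name greg1 --properties "app.sqs.cloud.aws.credentials.accessKey=<id>,app.sqs.cloud.aws.credentials.secretKey=<secret>"

The app then binds these like any other Spring property source, which sidesteps the question of whether the deployer propagates shell environment variables to locally launched apps.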
