Best practices for storing passwords when using Spring Boot

We are working on a Java Spring Boot application that needs access to a database; the password is stored in the application.properties file.
Our main issue is that the passwords might be viewable when uploaded to GitLab/GitHub.
I found that we can use Jasypt to encrypt the data, but from what I read, I would need to supply the decryption key at runtime, and that key would also be stored in Git in order to deploy with Kubernetes.
Is there some way to secure our passwords in such a case? We are using AWS if that makes any difference, and we are trying to use the EKS service, but until now we have had a VM with K8s installed.

Do not store passwords in application.properties: as you mention, it is insecure, but you may also have different versions of your application (dev, staging, prod) that use different databases and different passwords.
What you can do in this case is keep the password empty in the source files and externalize this configuration, i.e. use an environment variable in your k8s deployment file or on the VM where the application runs; Spring Boot will load it as a property value if it has the right format. From the Spring documentation:
Spring Boot lets you externalize your configuration so that you can work with the same application code in different environments. You can use a variety of external configuration sources, including Java properties files, YAML files, environment variables, and command-line arguments.
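For example, Spring Boot's relaxed binding maps an environment variable such as SPRING_DATASOURCE_PASSWORD onto the spring.datasource.password property. A minimal sketch of the container section of a deployment (the Secret name db-credentials is illustrative, not something from the question) could be:

containers:
  - name: myapp
    image: myapp:latest
    env:
      - name: SPRING_DATASOURCE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials   # Secret created outside of Git
            key: password

This way the repository only contains the reference to the Secret, never the password itself.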

You should use environment variables in your application.properties file for this:
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}
Or with a default value (for development):
spring.datasource.username=${SPRING_DATASOURCE_USERNAME:admin}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD:admin}
Then you can add a Kubernetes Secret to your namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: mynamespace
data:
  SPRING_DATASOURCE_PASSWORD: YWRtaW4=
  SPRING_DATASOURCE_USERNAME: YWRtaW4=
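The values under data must be base64-encoded (YWRtaW4= is "admin"). If you prefer not to encode them by hand, an equivalent way to create the same Secret is:

kubectl create secret generic mysecret -n mynamespace \
  --from-literal=SPRING_DATASOURCE_USERNAME=admin \
  --from-literal=SPRING_DATASOURCE_PASSWORD=admin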
And assign it to your Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  # omitted...
  containers:
    - name: mycontainer
      envFrom:
        - secretRef:
            name: mysecret
        - configMapRef:
            name: myconfigmap
  # omitted...
Another alternative would be to store the entire application.properties file in your Secret or ConfigMap and mount it into your container as a file.
Both scenarios are explained in further detail here:
https://developers.redhat.com/blog/2017/10/03/configuring-spring-boot-kubernetes-configmap

Related

How to inject deployment yaml env variable in springboot application yaml

I am trying to read an environment variable declared in the deployment yaml of Kubernetes into a Spring Boot application.yaml.
Below is a sample from deployment.yaml:
spec:
  containers:
    env:
      - name: SECRET_IN
        value: dev
Below is a sample from application.yaml:
innovation:
  in: ${SECRET_IN:demo}
But on localhost, when I try to print innovation.in (the @Configuration is created correctly), I am not getting "dev" in the output; it always prints "demo". It appears the link between the deployment and the application yaml is not happening. Could someone please help?
You can store the whole application.yaml config file in a ConfigMap or Secret and inject it with the deployment only.
For example :
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yaml: |-
    pool:
      size:
        core: 1
        max: 16
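The deployment then mounts that ConfigMap as a file. A sketch of the relevant part (container name and mount path are illustrative) could be:

spec:
  containers:
    - name: demo-app
      image: demo-app:latest
      volumeMounts:
        - name: app-config
          # Spring Boot picks up ./config/application.yaml relative to its working
          # directory, or you can pass --spring.config.additional-location=file:/config/
          mountPath: /config
  volumes:
    - name: app-config
      configMap:
        name: demo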
If your application.properties is something like:
spring.datasource.url=jdbc:mysql://localhost:3306/dbname
spring.datasource.username=user
spring.datasource.password=password
you can replace the hardcoded host with a placeholder that is resolved from the environment:
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:3306/dbname
Deployment.yaml will be something like
spec:
  containers:
    - name: demowebapp
      image: registry.gitlab.com/unicorn/unicornapp:1.0
      ports:
        - containerPort: 8080
      imagePullPolicy: Always
      env:
        - name: MYSQL_HOST
          value: mysql-prod
You can put more configuration into the ConfigMap and Secret as required.
Read more at : https://pushbuildtestdeploy.com/spring-boot-application.properties-in-kubernetes/
I think you did everything right; I have a similar working setup, although without a default value like 'demo'.
A couple of clarifications from the Spring Boot standpoint that might help.
application.yml can contain placeholders that can be resolved from the environment variables indeed.
Make sure that this application.yml is not "changed" (rewritten, filtered by Maven, etc.) while the Spring Boot artifact is being built.
Most important: Spring Boot knows nothing about the k8s setup. If the environment variable exists, it will pick it up. So the same can be checked locally: define the environment variable on your local machine and run the Spring Boot application.
The chances are that somehow, when the application runs (with its user/group), the environment variables are not accessible. Check this by printing the environment variables (or this specific one) right before starting the Spring Boot application, or do it in Java in the main method:
Map<String, String> env = System.getenv();
env.entrySet().forEach(System.out::println);
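As a quick local check (assuming a Maven wrapper and a Unix-like shell), you can set the variable inline when starting the application and verify that the placeholder resolves to it instead of the default:

SECRET_IN=dev ./mvnw spring-boot:run
# innovation.in should now print "dev" instead of the fallback "demo"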

passing application configuration using K8s configmaps

How can I pass the application.properties to a Spring Boot application using ConfigMaps? Since the application.yml file contains sensitive information, this requires passing in both Secrets and ConfigMaps. In this case, what options do we have to pass both the sensitive and the non-sensitive configuration data to the Spring Boot pod?
I am currently using Spring Cloud Config Server, which can encrypt the sensitive data using encrypt.key and decrypt it again.
ConfigMaps, as described by @paltaa, would do the trick for non-sensitive information. For sensitive information I would use a SealedSecret.
Sealed Secrets is composed of two parts:
A cluster-side controller / operator
A client-side utility: kubeseal
The kubeseal utility uses asymmetric crypto to encrypt secrets that only the controller can decrypt.
These encrypted secrets are encoded in a SealedSecret resource, which you can see as a recipe for creating a secret.
Once installed, you create your Secret as normal and can then run:
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml
You can safely push your sealedSecret to github etc.
This normal kubernetes secret will appear in the cluster after a few seconds and you can use it as you would use any secret that you would have created directly (e.g. reference it from a Pod).
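A minimal sketch of the whole flow (names and values are illustrative; it assumes kubeseal and the controller are installed) could be:

# generate a regular Secret manifest without applying it to the cluster
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password=changeit \
  --dry-run=client -o yaml > secret.yaml

# encrypt it so that only the in-cluster controller can decrypt it
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml

# sealed-secret.yaml is safe to commit; applying it makes the controller create db-secret
kubectl apply -f sealed-secret.yaml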
You can mount Secrets as volumes, the same as ConfigMaps. For example:
Create the secret.
kubectl create secret generic ssh-key-secret --from-file=application.properties
Then mount it as volume:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: ssh-key-secret
  containers:
    - name: ssh-test-container
      image: mySshImage
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/etc/secret-volume"
More information in https://kubernetes.io/docs/concepts/configuration/secret/
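Note that with the file mounted this way, Spring Boot still needs to be told where to find it. One option (a sketch; app.jar stands for your own artifact) is to add the mounted file as an additional config location at startup:

java -jar app.jar --spring.config.additional-location=file:/etc/secret-volume/application.properties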

Understanding sourcing secrets in kubernetes spring boot app

I am following this guide to consume secrets: https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#secrets-propertysource.
It says roughly:
1. save secrets
2. reference secrets in the deployment.yml file:
containers:
  - env:
      - name: DB_USERNAME
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: username
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-secret
            key: password
Then it says "You can select the Secrets to consume in a number of ways:" and gives 3 examples. However, without doing any of these steps I can still see the secrets in my env perfectly. Furthermore, the operations in step 1 and step 2 operate independently of Spring Boot (save secrets and move them into environment variables).
My questions:
If I make the changes suggested in step 3 what changes/improvements does it make for my container/app/pod?
Is there no way to be able to avoid all the mapping in step 1 and put all secrets in an env?
They write -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets to source all secrets; how did they know the secrets were mounted under a folder called /etc/secrets?
You can expose all entries of a Secret as environment variables in the following way:
containers:
  - name: app
    envFrom:
      - secretRef:
          name: db-secret
As for where Spring gets secrets from - I'm not an expert in Spring but it seems there is already an explanation in the link you provided:
When enabled, the Fabric8SecretsPropertySource looks up Kubernetes for
Secrets from the following sources:
Reading recursively from secrets mounts
Named after the application (as defined by spring.application.name)
Matching some labels
So it takes secrets from the secrets mount (if you mount them as volumes). It also scans the Kubernetes API for Secrets (I guess in the same namespace the app is running in). It can do this by using the Kubernetes serviceaccount token, which by default is always mounted into the pod. What it is allowed to read depends on the RBAC permissions given to the pod's serviceaccount.
So it searches for Secrets through the Kubernetes API and matches them against the application name or application labels.
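For the volume-based approach, a rough sketch (property names are from the spring-cloud-kubernetes documentation; the mount path is simply whatever you choose in your Deployment) would be:

# application.properties (or -D system properties / command-line arguments)
spring.cloud.kubernetes.secrets.enabled=true
spring.cloud.kubernetes.secrets.paths=/etc/secrets

with the Deployment mounting db-secret at that same path:

volumeMounts:
  - name: db-secret
    mountPath: /etc/secrets
    readOnly: true
volumes:
  - name: db-secret
    secret:
      secretName: db-secret

So /etc/secrets in the docs is not special; it is just where that example chose to mount the Secret.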

Running application with Kubernetes Secrets locally

I have an application.yaml file whose database properties are fetched from a Secrets object in the Kubernetes cluster in a separate deployment environment. However, when I try to run that application locally (a Spring Boot application), it fails to load, for the obvious reason that it can't find the datasource, since there are no actual values in the application.yaml file.
Does anyone have any idea how to start the application locally without hardcoding database credentials in the yaml file?
url: ${DB_URL}
username: ${DB_USER}
password: ${DB_PASSWORD}
I don't have Kubernetes cluster locally.
I don't have Kubernetes cluster locally.
You will need something to run the .yaml files locally, probably minikube. Add secrets to that environment using another file (local-secrets.yaml) or directly using kubectl.
See here how to add secrets.
The object will look something like this (base64'ed)
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
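For example (illustrative values matching the base64 strings above), you can create the same Secret directly in the minikube cluster without committing any credentials:

kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=1f2d1e2e67df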
without hardcoding database credentials in yaml file
You may use Helm charts for that, because you can provide values with the --set parameter when installing the chart.
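A rough sketch of that idea (a hypothetical chart template; the value names db.user and db.password are illustrative) keeps the credentials out of Git and injects them at install time:

# templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-db
type: Opaque
data:
  username: {{ .Values.db.user | b64enc }}
  password: {{ .Values.db.password | b64enc }}

installed with something like:

helm install --name myapp --set db.user=admin --set db.password=s3cret ./chart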

config-map kubernetes multiple environments

I am trying to deploy a Spring Boot application using configuration data from a Kubernetes cluster. I have a simple RestController that prints a message read from a Kubernetes ConfigMap.
private String message = "Message not coming from Kubernetes config map";

@RequestMapping(value="/echo", method=GET)
public String printKubeConfig() {
    return message;
}
I specified the name of the config map in my application.yml:
spring:
  application:
    name: echo-configmap
echo-configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: echo-configmap
data:
  application.properties: |-
    message=Hello from dev Kubernetes Configmap
  application_qa.properties: |-
    message=Hello from qa Kubernetes Configmap
I have several environments like qa, int, test, etc.
What's the best way to specify environment-specific properties in the ConfigMap? And how do I access them in the Spring Boot application?
For example, if the application is deployed in qa, my service should return the message "Hello from qa Kubernetes Configmap".
We also have plans to read these configuration files from Git in the future. How do we handle that use case?
Let me try and provide an answer which I think gives you what you need, without using any tools beyond what you'll have installed on most boxes. Maybe try this first, and if you find the approach becomes difficult to manage and scale, move on to something more sophisticated.
Step 1: Version control configmaps per environment
Create a folder like k8s/configmaps or something, and create one configmap per environment:
k8s/configmaps/properties.dev.yaml
k8s/configmaps/properties.qa.yaml
k8s/configmaps/properties.sit.yaml
k8s/configmaps/properties.uat.yaml
Each configmap should contain your environment specific settings.
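For example, k8s/configmaps/properties.dev.yaml might look like this (the contents are illustrative, reusing the message from the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: echo-configmap
data:
  application.properties: |-
    message=Hello from dev Kubernetes Configmap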
Step 2: Have a namespace per environment
Create a k8s namespace per environment, such as:
application-dev
application-qa
application-sit
application-uat
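These only need to be created once per environment, for example:

kubectl create namespace application-dev
kubectl create namespace application-qa
kubectl create namespace application-sit
kubectl create namespace application-uat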
Step 3: Create the configmap per environment
A little bash will help here:
#!/usr/bin/env bash
# apply-configmaps.sh
namespace="application-${ENVIRONMENT}"
for configmap in ./k8s/configmaps/*.${ENVIRONMENT}.yml; do
echo "Processing ConfigMap $configmap"
kubectl apply -n ${namespace} -f $configmap
done
Now all you need to do to create or update configmaps for any environment is:
ENVIRONMENT=dev ./apply-configmaps.sh
Step 4: Finish the job with CI/CD
Now you can create a CI/CD pipeline - if your configmap source changes just run the command shown above.
Summary
Based on primitive commands and no special tools you can:
Version control config
Manage config per environment
Update or create config when the config code changes
Easily apply the same approach in a CI/CD pipeline if needed
I would strongly recommend you follow this basic 'first principles' approach before jumping into more sophisticated tools to solve the same problems; in many cases you can do it yourself without much effort, learn the key concepts, and save the more sophisticated tooling for later if you really need it.
Hope that helps!
This sounds like a good use case for Helm. You could deploy your application as a Helm Chart, which would basically allow you to generate your Kubernetes resources (like ConfigMaps, Deployments, and whatever else you need) from templates.
You can use the documentation on Helm Charts to get started with Helm. After having created a Chart with helm create, you will get a templates/ directory, in which you might place the following YAML template for your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-%s" .Release.Name .Chart.Name }}
  labels:
    app: {{ .Chart.Name | trunc 63 | trimSuffix "-" }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  application.properties: |-
    message={{ .Values.properties.message }}
You can add a second YAML template for your Deployment object (actually, helm create will already create a sensible default deployment). Simply add your ConfigMap as a volume there and mount it into the container:
containers:
  - name: {{ .Chart.Name }}
    # [...]
    volumeMounts:
      - name: property-volume
        mountPath: /etc/your-app/properties
volumes:
  - name: property-volume
    configMap:
      name: {{ printf "%s-%s" .Release.Name .Chart.Name }}
Each Helm chart has a values.yaml file, in which you can define default values that are then used to fill in your templates. This default file might look like this (remember that the ConfigMap template above contained a {{ .Values.properties.message }} expression):
replicaCount: 1
image:
  repository: your-docker-image
  tag: your-docker-tag
properties:
  message: Hello!
Next, use this Helm chart and the helm install command to deploy your application as many times as you want with different configurations. You can supply different YAML files in which you override specific values from your values.yaml file, or override individual values using --set:
$ helm install --name dev --set image.tag=latest --set replicaCount=1 path/to/chart
$ helm install --name prod --set image.tag=stable --set replicaCount=3 --set properties.message="Hello from prod" path/to/chart
As to your second question: Of course you should put your Helm Chart into version control. You can then use the helm upgrade command to apply changes to an already deployed application.
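For example, following the release names used above, a configuration change for prod could be rolled out with something like:

$ helm upgrade prod path/to/chart --set properties.message="Hello from prod, v2"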
I would use this tool for each of your Git projects for non-secret data:
https://github.com/kubernetes-sigs/kustomize
In the metadata you can add labels to filter your pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mycomponent
    env: dev
    tier: backend
  name: mycomponent
  namespace: myapplication
kubectl get pods -n myapplication -l env=dev,tier=backend,app=mycomponent
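Since the snippet above only shows labels, here is a rough sketch of how kustomize itself is commonly used for per-environment configuration (directory layout and names are illustrative; kustomize appends a content hash to generated ConfigMap names):

# overlays/dev/kustomization.yaml
resources:
  - ../../base
configMapGenerator:
  - name: echo-configmap
    literals:
      - message=Hello from dev Kubernetes Configmap

Applied with:

kubectl apply -k overlays/dev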
Typically you want applications running in qa not to interfere with applications running in production. Using Kubernetes, you can get this kind of isolation by using different namespaces for different environments.
That way, objects in the prod namespace are separate from objects in the qa namespace. Another, more expensive approach would be to use different k8s clusters for different environments.
Having this setup, you would deploy your application in the namespace for the specific environment where you want to deploy to, creating the Deployment object on that namespace.
This Deployment would make use of a ConfigMap object containing your Spring Boot properties. Let's call this ConfigMap echo-properties for example.
That way, every namespace would have a unique copy of the echo-properties ConfigMap. Each containing the specific configuration for the environment where it belongs.
The Deployment object consumes the ConfigMap properties by either using environment variables or reading files. The important bit here is that if you change the echo-properties ConfigMap data, your application won't see those new values, by default. Kubernetes doesn't have this feature so far. You can't compare ConfigMaps to Spring Cloud Config, which is a dynamic configuration solution.
An approach that would get you a similar behaviour (but not quite the same) would be using the fabric8 ConfigMap Controller on your cluster. This controller is a process that runs on your cluster, and it would restart your application whenever the ConfigMap changes, so that it reads the new configuration values.
If you don't want to restart your application whenever a configuration changes, you should probably stick to Spring Cloud Config for values that will potentially change, and use ConfigMaps for other properties that won't change, like application name or port.
Your use case sounds very much like you should take a look at spring-cloud-config - https://cloud.spring.io/spring-cloud-config/
The config-server is an infrastructure component that serves configuration that could be located in a git repository.
A config-client application would connect to config-server at startup and loads the configuration applicable to the current profiles.
You could have different branches for different environments, or use profiles per environment. In your Kubernetes deployment manifest you could set the profile by setting the SPRING_PROFILES_ACTIVE environment variable.
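A rough sketch of the client side (the config server URL is illustrative; with older Spring Cloud versions this goes in bootstrap.yml, newer ones use spring.config.import instead):

# bootstrap.yml of the config-client application
spring:
  application:
    name: echo-service
  cloud:
    config:
      uri: http://config-server:8888

and in the Kubernetes deployment manifest:

env:
  - name: SPRING_PROFILES_ACTIVE
    value: qa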
