Running application with Kubernetes Secrets locally - spring-boot

I have an application.yaml file whose database properties are fetched from a Secrets object in the Kubernetes cluster of a separate deployment environment. However, when I try to run that Spring Boot application locally, it fails to start for the obvious reason that it can't configure the datasource: the placeholders in application.yaml have no actual values.
Does anyone have any idea how to start the application locally without hardcoding the database credentials in the yaml file?
url: ${DB_URL}
username: ${DB_USER}
password: ${DB_PASSWORD}
I don't have Kubernetes cluster locally.

I don't have Kubernetes cluster locally.
You will need something to run the .yaml files locally, most likely minikube. Add the secrets to that environment using another file (e.g. local-secrets.yaml) or directly with kubectl.
See the Kubernetes documentation for how to add Secrets.
The Secret object will look something like this (values are base64-encoded):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
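If you prefer to create the same Secret directly with kubectl instead of writing the manifest, a rough equivalent (the literal values here are simply the decoded forms of the base64 strings above) would be:
$ kubectl create secret generic mysecret --from-literal=username=admin --from-literal=password=1f2d1e2e67df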

without hardcoding database credentials in yaml file
You may use Helm charts for that, because you can provide values with the --set parameter when installing the chart.
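A minimal sketch of that, assuming your chart exposes db.user and db.password values (the chart path, release name, and value keys here are hypothetical):
$ helm install ./mychart --name myapp --set db.user=admin --set db.password=s3cr3t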

Related

Best practices for storing passwords when using Spring Boot

We are working on a Java Spring Boot application that needs access to a database; the password is stored in the application.properties file.
Our main issue is that the passwords might be viewable when uploaded to GitLab/GitHub.
I found that we can use Jasypt to encrypt the data, but from what I read I would need the decryption key at execution time, which is also stored in Git so the app can be deployed using Kubernetes.
Is there some way to secure our passwords in such a case? We are using AWS if that makes any difference, and we are trying to use the EKS service, but until now we have had a VM with K8s installed.
Do not store passwords in application.properties. As you mention, it is insecure, and you may also have different versions of your application (dev, staging, prod) that use different databases and different passwords.
What you can do in this case is leave the password empty in the source files and externalize this configuration, i.e. use an environment variable in your k8s deployment file or on the VM where the application will run; Spring Boot will load it as a property value if it has the right format. From the Spring documentation:
Spring Boot lets you externalize your configuration so that you can work with the same application code in different environments. You can use a variety of external configuration sources, including Java properties files, YAML files, environment variables, and command-line arguments.
You should use environment variables in your application.properties file for this:
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}
Or with a default value (for development):
spring.datasource.username=${SPRING_DATASOURCE_USERNAME:admin}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD:admin}
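For a purely local run (no Kubernetes at all), the same variables can simply be set in your shell before starting the app; a sketch assuming a Maven-built executable jar (the jar path is an assumption):
$ export SPRING_DATASOURCE_USERNAME=admin
$ export SPRING_DATASOURCE_PASSWORD=admin
$ java -jar target/myapp.jar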
Then you can add a Kubernetes Secret to your namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: mynamespace
data:
  SPRING_DATASOURCE_PASSWORD: YWRtaW4=
  SPRING_DATASOURCE_USERNAME: YWRtaW4=
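The values under data must be base64 encoded; YWRtaW4= above is simply the encoding of admin:
$ echo -n 'admin' | base64
YWRtaW4=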
And assign it to your Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  # omitted...
      containers:
        - name: mycontainer
          envFrom:
            - secretRef:
                name: mysecret
            - configMapRef:
                name: myconfigmap
      # omitted...
Another alternative would be to store the entire application.properties file in your Secret or ConfigMap and mount it into your container as a file.
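A rough sketch of that alternative, assuming the Secret carries the whole file (the names myapp-properties and mycontainer are made up). Spring Boot automatically reads config/application.properties relative to its working directory, so mounting at /config works when the container's working directory is /:
apiVersion: v1
kind: Secret
metadata:
  name: myapp-properties
stringData:
  application.properties: |
    spring.datasource.username=admin
    spring.datasource.password=admin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  # omitted...
      containers:
        - name: mycontainer
          volumeMounts:
            - name: app-config
              mountPath: /config
              readOnly: true
      volumes:
        - name: app-config
          secret:
            secretName: myapp-properties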
Both scenarios are explained in further detail here:
https://developers.redhat.com/blog/2017/10/03/configuring-spring-boot-kubernetes-configmap

How to inject deployment yaml env variable in springboot application yaml

I am trying to read an environment variable declared in the Kubernetes deployment yaml into the Spring Boot application.yaml.
Below is a sample from deployment.yaml:
spec:
  containers:
    - env:
        - name: SECRET_IN
          value: dev
Below is a sample from application.yaml:
innovation:
  in: ${SECRET_IN:demo}
But on localhost, when I try to print innovation.in (the @Configuration is created correctly), I am not getting "dev" in the output; it always prints "demo". It appears the link between the deployment and the application yaml is not happening. Could someone please help?
You can store the whole application.yaml config file in a ConfigMap or Secret and inject it with the deployment only.
For example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yaml: |-
    pool:
      size:
        core: 1
        max: 16
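To actually inject it, the deployment can then mount that ConfigMap as a file; a minimal sketch (the mount path is an assumption, and Spring Boot picks up config/application.yaml relative to its working directory):
spec:
  containers:
    - name: demowebapp
      volumeMounts:
        - name: app-config
          mountPath: /config
  volumes:
    - name: app-config
      configMap:
        name: demo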
If your application.properties is something like this example:
spring.datasource.url=jdbc:mysql://localhost:3306/dbname
spring.datasource.username=user
spring.datasource.password=password
you can replace the hard-coded host with:
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:3306/dbname
Deployment.yaml will be something like
spec:
  containers:
    - name: demowebapp
      image: registry.gitlab.com/unicorn/unicornapp:1.0
      ports:
        - containerPort: 8080
      imagePullPolicy: Always
      env:
        - name: MYSQL_HOST
          value: mysql-prod
You can save more config in the ConfigMap and Secret as required.
Read more at : https://pushbuildtestdeploy.com/spring-boot-application.properties-in-kubernetes/
I think you did everything right, I have a similar working setup, although without a default 'demo'.
A couple of clarifications from the Spring Boot standpoint that might help.
application.yml can indeed contain placeholders that are resolved from environment variables.
Make sure that this application.yml is not "changed" (rewritten, filtered by Maven, etc.) during the build of the Spring Boot artifact.
Most important: Spring Boot knows nothing about the k8s setup. If the environment variable exists, it will pick it up. So the same thing can be checked locally: define the environment variable on your local machine and run the Spring Boot application.
Chances are that when the application runs (with its user/group) the environment variables are not accessible. Check this by printing the environment variables (or this specific one) right before starting the Spring Boot application, or do it in Java in the main method:
Map<String, String> env = System.getenv();
env.entrySet().forEach(System.out::println);
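To verify the same thing locally, set the variable in your shell and start the app; a sketch assuming an executable jar (the jar path is an assumption):
$ export SECRET_IN=dev
$ java -jar target/myapp.jar
# innovation.in should now resolve to "dev" instead of the "demo" default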

passing application configuration using K8s configmaps

How do I pass application.properties to a Spring Boot application using ConfigMaps? Since the application.yml file contains sensitive information, this requires passing in both Secrets and ConfigMaps. In this case, what options do we have to pass both the sensitive and non-sensitive configuration data to the Spring Boot pod?
I am currently using Spring Cloud Config Server, which can encrypt the sensitive data using encrypt.key and decrypt it again.
ConfigMaps, as described by @paltaa, would do the trick for non-sensitive information. For sensitive information I would use a SealedSecret.
Sealed Secrets is composed of two parts:
A cluster-side controller / operator
A client-side utility: kubeseal
The kubeseal utility uses asymmetric crypto to encrypt secrets that only the controller can decrypt.
These encrypted secrets are encoded in a SealedSecret resource, which you can see as a recipe for creating a secret.
Once installed you create your secret as normal and you can then:
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml
You can safely push your SealedSecret to GitHub etc.
Once the SealedSecret is applied, the normal Kubernetes Secret will appear in the cluster after a few seconds, and you can use it as you would any Secret you had created directly (e.g. reference it from a Pod).
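The generated sealed-secret.yaml has roughly this shape (the ciphertext below is a placeholder, not real kubeseal output):
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: default
spec:
  encryptedData:
    password: AgB3K2Vw...   # only the controller in the cluster can decrypt this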
You can mount Secret as volumes, the same as ConfigMaps. For example:
Create the secret.
kubectl create secret generic ssh-key-secret --from-file=application.properties
Then mount it as volume:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: ssh-key-secret
  containers:
    - name: ssh-test-container
      image: mySshImage
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/etc/secret-volume"
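For a Spring Boot image, you would then point the application at the mounted file at startup; a sketch, assuming the mount path above:
java -jar app.jar --spring.config.additional-location=/etc/secret-volume/application.properties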
More information in https://kubernetes.io/docs/concepts/configuration/secret/

Run MySQL cluster backed, HTTPS-enabled Spring boot app on AWS (EKS)

I was looking for a step-by-step tutorial on how to run my Spring Boot, MySQL-backed app using AWS EKS (Elastic Container Service for Kubernetes) with an existing SSL wildcard certificate, and wasn't able to find a complete solution.
The app is a standard Spring Boot self-contained application backed by a MySQL database, running on port 8080. I need to run it with high availability and high redundancy, including a MySQL db that needs to handle a large number of writes as well as reads.
I decided to go with an EKS-hosted cluster, saving a custom Docker image to AWS's own ECR private Docker repo, going against an EKS-hosted MySQL cluster, and using an AWS-issued SSL certificate to communicate over HTTPS. Below is my solution, but I'll be very curious to see how it can be done differently.
This a step-by-step tutorial. Please don't proceed forward until the previous step is complete.
CREATE EKS CLUSTER
Follow the standard tutorial to create an EKS cluster. Don't do step 4. When you are done you should have a working EKS cluster, and you must be able to use the kubectl utility to communicate with it. From the command line you should see the worker nodes and other cluster elements using the
kubectl get all --all-namespaces command
INSTALL MYSQL CLUSTER
I used Helm to install the MySQL cluster, following the steps from this tutorial. Here are the steps:
Install helm
Since I'm using a MacBook Pro with Homebrew, I used the brew install kubernetes-helm command.
Deploy MySQL cluster
Note that "cluster" refers to two different things in "MySQL cluster" and "Kubernetes (EKS) cluster". Basically you are installing a cluster into a cluster, like a Russian Matryoshka doll, so your MySQL cluster ends up running on EKS cluster nodes.
I used the 2nd part of this tutorial (ignore the kops part) to prepare the Helm chart and install the MySQL cluster. Quoting the Helm configuration:
$ kubectl create serviceaccount -n kube-system tiller
serviceaccount "tiller" created
$ kubectl create clusterrolebinding tiller-crule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io "tiller-crule" created
$ helm init --service-account tiller --wait
$HELM_HOME has been configured at /home/presslabs/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
$ helm repo add presslabs https://presslabs.github.io/charts
"presslabs" has been added to your repositories
$ helm install presslabs/mysql-operator --name mysql-operator
NAME: mysql-operator
LAST DEPLOYED: Tue Aug 14 15:50:42 2018
NAMESPACE: default
STATUS: DEPLOYED
I run all commands exactly as quoted above.
Before creating a cluster, you need a secret that contains the ROOT_PASSWORD key.
Create a file named example-cluster-secret.yaml and copy into it the following YAML code
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  # root password is required to be specified
  ROOT_PASSWORD: Zm9vYmFy
But what is that ROOT_PASSWORD? It turns out this is the base64-encoded password that you are planning to use with your MySQL root user. Say you want root/foobar (please don't actually use foobar). The easiest way to encode the password is to use one of the websites such as https://www.base64encode.org/, which encodes foobar into Zm9vYmFy.
When ready execute kubectl apply -f example-cluster-secret.yaml which will create a new secret
Then you need to create a file named example-cluster.yaml and copy into it the following YAML code:
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret
Note how the secretName matches the secret name you just created. You can change it to something more meaningful as long as it matches in both files. Now run kubectl apply -f example-cluster.yaml to finally create a MySQL cluster. Test it with
$ kubectl get mysql
NAME         AGE
my-cluster   1m
Note that I did not configure a backup, as described in the rest of the article. You don't need one for the database to operate. But how do you access your db? At this point the mysql service is there, but it doesn't have an external IP. In my case I don't even want one, as long as my app, which will run on the same EKS cluster, can access it.
However, you can use kubectl port forwarding to access the db from your dev box that runs kubectl. Type in this command: kubectl port-forward services/my-cluster-mysql 8806:3306. Now you can access your db at 127.0.0.1:8806 using the user root and the non-encoded password (foobar). Type this into a separate command prompt: mysql -u root -h 127.0.0.1 -P 8806 -p. With this you can also use MySQL Workbench to manage your database; just don't forget to keep port-forward running. And of course you can change 8806 to another port of your choosing.
PACKAGE YOUR APP AS A DOCKER IMAGE AND DEPLOY
To deploy your Spring Boot app into the EKS cluster you need to package it into a Docker image and push it to a Docker repo. Let's start with the Docker image. There are plenty of tutorials on this, like this one, but the steps are simple:
Put your generated, self-contained Spring Boot jar file into a directory, create a text file named exactly Dockerfile in the same directory, and add the following content to it:
FROM openjdk:8-jdk-alpine
MAINTAINER me@mydomain.com
LABEL name="My Awesome Docker Image"
# Add spring boot jar
VOLUME /tmp
ADD myapp-0.1.8.jar app.jar
EXPOSE 8080
# Database settings (maybe different in your app)
ENV RDS_USERNAME="my_user"
ENV RDS_PASSWORD="foobar"
# Other options
ENV JAVA_OPTS="-Dverknow.pypath=/"
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
Now simply run a Docker command from the same folder to create an image. Of course that requires Docker client installed on your dev box.
$ docker build -t myapp:0.1.8 --force-rm=true --no-cache=true .
If all goes well you should see your image listed with the docker images command.
Deploy to the private ECR repo
Deploying your new image to the ECR repo is easy, and ECR works with EKS right out of the box. Log in to the AWS console and navigate to the ECR section. I found it confusing that you apparently need one repository per image, but when you click the "Create repository" button, put your image name (e.g. myapp) into the text field. Now you need to copy the ugly URL for your image and go back to the command prompt.
Tag and push your image. I'm using a fake URL as an example: 901237695701.dkr.ecr.us-west-2.amazonaws.com; you need to copy your own from the previous step.
$ docker tag myapp:0.1.8 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
$ docker push 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
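If the push is rejected as unauthorized, you most likely need to log Docker in to ECR first; a sketch using the AWS CLI v2, with the same region and registry URL as in the example above:
$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 901237695701.dkr.ecr.us-west-2.amazonaws.com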
At this point the image should show up at ECR repository you created
Deploy your app to EKS cluster
Now you need to create a Kubernetes deployment for your app's Docker image. Create a myapp-deployment.yaml file with the following content
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
          name: myapp
          ports:
            - containerPort: 8080
              name: server
          env:
            # optional
            - name: RDS_HOSTNAME
              value: "10.100.98.196"
            - name: RDS_PORT
              value: "3306"
            - name: RDS_DB_NAME
              value: "mydb"
      restartPolicy: Always
status: {}
Note how I'm using the full URL for the image parameter. I'm also using the private CLUSTER-IP of the mysql cluster, which you can get with the kubectl get svc my-cluster-mysql command. This will differ for your app, including any env names, but you do have to provide this info to your app somehow. Then in your app you can set something like this in the application.properties file:
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}?autoReconnect=true&zeroDateTimeBehavior=convertToNull
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}
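If you would rather not bake RDS_USERNAME and RDS_PASSWORD into the image the way the Dockerfile above does, the same Deployment can pull them from a Kubernetes Secret instead; a sketch (the Secret name myapp-db-credentials and its keys are made up):
env:
  - name: RDS_USERNAME
    valueFrom:
      secretKeyRef:
        name: myapp-db-credentials
        key: username
  - name: RDS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: myapp-db-credentials
        key: password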
Once you save myapp-deployment.yaml you need to run this command:
kubectl apply -f myapp-deployment.yaml
which will deploy your app into the EKS cluster. This will create 2 pods in the cluster, which you can see with the kubectl get pods command.
And rather than try to access one of the pods directly we can create a service to front the app pods. Create a myapp-service.yaml with this content:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  ports:
    - port: 443
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: myapp
  type: LoadBalancer
That's where the magic happens! Just by setting the port to 443 and type to LoadBalancer the system will create a Classic Load Balancer to front your app.
BTW if you don't need to run your app over HTTPS you can set port to 80 and you will be pretty much done!
After you run kubectl apply -f myapp-service.yaml the service in the cluster will be created, and if you go to the Load Balancers section in the EC2 section of the AWS console you will see that a new balancer has been created for you. You can also run the kubectl get svc myapp-service command, which will give you the EXTERNAL-IP value, something like bl3a3e072346011e98cac0a1468f945b-8158249.us-west-2.elb.amazonaws.com. Copy that, because we need to use it next.
It is worth mentioning that if you are using port 80 then simply pasting that URL into the browser should display your app.
Access your app over HTTPS
The following section assumes that you have AWS-issued SSL certificate. If you don't then go to AWS console "Certificate Manager" and create a wildcard certificate for your domain
Before your load balancer can work you need to access AWS console -> EC2 -> Load Balancers -> My new balancer -> Listeners and click on "Change" link in SSL Certificate column. Then in the pop up select the AWS-issued SSL certificate and save.
Go to the Route 53 section in the AWS console and select a hosted zone for your domain, say myapp.com. Then click "Create Record Set" and create a CNAME (Canonical name) record with Name set to whatever alias you want, say cluster.myapp.com, and Value set to the EXTERNAL-IP from above. After you "Save Record Set", go to your browser and type in https://cluster.myapp.com. You should see your app running.

Can't access Google Cloud Datastore from Google Kubernetes Engine cluster

I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be
configured with a couple of extra scopes. These can't be added to
existing GCE instances, but you can create a new one with the
following Cloud SDK command:
gcloud compute instances create hello-datastore --project <project> --zone <zone> --scopes datastore,userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (most services are disabled by default). I suppose that's what's causing the issue.
Strangely, I can use CloudSQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.
If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key and, in the case of Datastore, setting the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
Deployment YAML snippet:
...
containers:
  - image: foo
    name: foo
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /auth/credentials.json
    volumeMounts:
      - name: foo-service-account
        mountPath: "/auth"
        readOnly: true
volumes:
  - name: foo-service-account
    secret:
      secretName: foo-service-account
After struggling for some hours, I was also able to connect to the Datastore. Here are my results, most of it from the Google docs:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give owner access to the project for the service account
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit deployment file
deployment.yaml:
...
spec:
  containers:
    - name: app
      image: eu.gcr.io/google_project_id/springapplication:v1
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/credentials.json
      ports:
        - name: http-server
          containerPort: 8080
  volumes:
    - name: google-cloud-key
      secret:
        secretName: app-key
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to the GCP Datastore. After LOTS of playing around I figured out that the scratch base image might be missing certain environment tools / variables / libraries which the Google Cloud library requires. Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library seems to automatically understand where it's running (maybe from context.Background()?) and automatically uses a default service account which Google creates for you when you create your cluster on GKE.
