Correctly override "settings.xml" in a Jenkinsfile Maven build on Kubernetes?

We are setting up a Jenkins-based CI pipeline on our Kubernetes cluster (Rancher, if that matters), and so far we have used the official maven:3-jdk-11-slim image for experiments. Unfortunately, it does not provide any built-in way of overriding the default settings.xml to use a mirror, which we need, preferably just by setting an environment variable. I am not very familiar with Kubernetes, so I may be missing something simple.
Is there a simple way to add a file to the image? Should I use another image with this functionality built in?
pipeline {
    agent {
        kubernetes {
            yaml """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
  - name: kaniko
    .... etc

Summary: you can mount your settings.xml file into the pod at some specific path and use that file with the command mvn -s /my/path/to/settings.xml.
Crou's ConfigMap approach is one way to do it. However, since the settings.xml file usually contains credentials, I would treat it as a Secret.
You can create a Secret in Kubernetes with command:
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml
The pod definition will be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: mvn-settings-vol
      mountPath: /my/path/to
  volumes:
  - name: mvn-settings-vol
    secret:
      secretName: mvn-settings
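With that mount in place, a pipeline stage can point Maven at the mounted file; a minimal sketch (the stage name and goals are illustrative, not from the original question):
stage('Build') {
    steps {
        container('maven') {
            // the Secret is mounted at /my/path/to/settings.xml by the volumeMount above
            sh 'mvn -s /my/path/to/settings.xml clean install'
        }
    }
}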
Advanced/Optional: If you practice "Infrastructure as Code", you might want to save the manifest file for that Secret for recovery. This can be achieved with this command after the Secret has been created:
$ kubectl get secrets mvn-settings -o yaml
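The output looks roughly like this (value shortened for illustration); note that the data is only base64-encoded, not encrypted:
apiVersion: v1
kind: Secret
metadata:
  name: mvn-settings
type: Opaque
data:
  settings.xml: PHNldHRpbmdzPi4uLjwvc2V0dGluZ3M+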
You can keep that output as a secrets.yml file, but do not check it into any VCS/GitHub repo, since this version of secrets.yml contains unencrypted data.
Some k8s administrators may have kubeseal installed. In that case, I'd recommend using kubeseal to get an encrypted version of secrets.yml.
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml --dry-run=client -o json | kubeseal --controller-name=controller --controller-namespace=k8s-sealed-secrets --format=yaml > secrets.yml
# Actually create secrets
$ kubectl apply -f secrets.yml
The controller-name and controller-namespace should be obtained from k8s administrators.
This secrets.yml contains encrypted data of your settings.xml and can be safely checked into VCS/Github repo.
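For reference, the generated secrets.yml is a SealedSecret manifest of roughly this shape (ciphertext shortened; only the sealed-secrets controller in the cluster holds the key to decrypt it back into a regular Secret):
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mvn-settings
  namespace: default
spec:
  encryptedData:
    settings.xml: AgBy...
  template:
    metadata:
      name: mvn-settings
      namespace: default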

If you want to override a file inside a pod, you can use a ConfigMap to store the changed file and mount it in place of the original one.
You can create the ConfigMap from a file using
kubectl create configmap settings-xml --from-file=settings.xml
Your pod definition might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: config-settings
      mountPath: /usr/share/maven/ref/settings.xml
      # subPath mounts just this one key as a file; without it the mount
      # would shadow the whole target directory
      subPath: settings.xml
  volumes:
  - name: config-settings
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: settings-xml
...

This worked for me:
Install the Config File Provider Plugin.
Go to Manage Jenkins > Config File Management > Add a new config, and paste your settings.xml there.
In your Jenkinsfile, put your rtMavenRun inside a configFileProvider block, using the same fileId as the Jenkins config file you created before:
stage('Build Maven') {
    steps {
        configFileProvider([configFile(fileId: 'MavenArtifactorySettingId', variable: 'MAVEN_SETTINGS_XML')]) {
            retry(count: 3) {
                rtMavenRun(
                    tool: "Maven 3.6.2", // id specified in Global Tool Configuration
                    pom: 'pom.xml',
                    goals: '-U -s $MAVEN_SETTINGS_XML clean install',
                )
            }
        }
    }
}
This is exactly the pipeline I used, if you want to see more: https://gist.github.com/robertobatts/42da9069e13b61a238f51c36754de97b

If the settings.xml of the project is versioned with the code, it makes sense to build with mvn install -s settings.xml using an sh step. That is what I did at work. If settings.xml is not versioned with the project, it indeed makes sense to mount the file with Crou's solution.
To answer your question "Should I use another image with this functionality built in?": I would recommend avoiding custom images as much as possible, because you will end up having to maintain them.

Related

OpenShift: set environment variable from a file

I have a mounted volume with a file urls.txt containing a database source URL, like
databasesource: mysql://xxxx
My Spring Boot application runs as a container in an OpenShift pod, and I need to set SPRING_DATASOURCE_URL from the value in that file. Here is what I want to achieve in my template file:
env:
- name: SPRING_DATASOURCE_URL
  valueFrom:
    mount:
      name: my-volume
      key: databasesource
volumeMounts:
- name: my-volume
  mountPath: /someDir
I know we can use valueFrom with a configMap or secret, but I want to achieve this via a volumeMount.
If you can use the format below in
urls.txt
databasesource=mysql://xxxx
then as part of your container start you can run
source /someDir/urls.txt
which will load the key/value pairs into the environment, where they can be used further.
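A minimal sketch of that idea as a container entrypoint (the script and the application jar path are assumptions, not from the original setup):
#!/bin/sh
# entrypoint.sh: export every variable defined while sourcing the mounted file,
# then hand off to the application
set -a
. /someDir/urls.txt
set +a
exec java -jar /app/app.jar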
The problem is resolved by a Spring Boot feature for importing extensionless config files: https://docs.spring.io/spring-boot/docs/current/reference/html/features.html#features.external-config.files.importing-extensionless
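For illustration, with Spring Boot 2.4+ that import could look roughly like this in application.properties (the [.properties] hint tells Spring how to parse a file whose extension it does not recognize; treat this as a sketch, not a verified config):
spring.config.import=optional:file:/someDir/urls.txt[.properties]
spring.datasource.url=${databasesource}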

How to overwrite a file in a pod's container in a Kubernetes deployment file?

I want to overwrite a file in the pod's container. Right now I have elasticsearch.yml at the location /usr/share/elasticsearch/config.
I was trying to achieve that with an initContainer in the Kubernetes deployment file, so I added something like:
- name: disabled-the-xpack-security
  image: busybox
  command:
  - /bin/sh
  - -c
  - |
    sleep 20
    rm /usr/share/elasticsearch/config/elasticsearch.yml
    cp /home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml /usr/share/elasticsearch/config/
  securityContext:
    privileged: true
But this doesn't work, error looks like:
rm: can't remove '/usr/share/elasticsearch/config/elasticsearch.yml': No such file or directory
cp: can't stat '/home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml': No such file or directory
I was trying to use something like echo "some yaml config" >> elasticsearch.yml, but this kind of workaround doesn't work, because I wasn't able to keep proper YAML formatting.
Do you have any suggestions, how can I do this?
As stated by Arman in the comments, you can create a ConfigMap with the contents of /home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml and mount it as a volume in the deployment.
To create the config map from your file you can run:
kubectl create configmap my-es-config --from-file=/home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml
This will create a ConfigMap inside your kubernetes cluster with the yaml file.
You can then use that and add the volume mount to your deployment as:
containers:
- name: elasticsearch
  image: k8s.gcr.io/busybox
  .
  .
  .
  volumeMounts:
  - name: config-volume
    mountPath: /usr/share/elasticsearch/config/
volumes:
- name: config-volume
  configMap:
    name: my-es-config
Notes
It is recommended to create your ConfigMap as yaml as well. More information here
Mounting a ConfigMap directly on /usr/share/elasticsearch/config/ will replace everything inside that path with the files from the ConfigMap. If that causes an issue, you might want to mount it at another location and then copy it.
Note that if you don't want to override everything in the mounted directory, you can mount only the file itself, using subPath, in whatever directory you want.
https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath
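A sketch of that subPath variant, which mounts only the single file and leaves the rest of the directory untouched:
volumeMounts:
- name: config-volume
  mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
  subPath: elasticsearch.yml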

How to inject secret from Google Secret Manager into Kubernetes Pod as environment variable with Spring Boot?

For the life of Bryan, how do I do this?
Terraform is used to create an SQL Server instance in GCP.
Root password and user passwords are randomly generated, then put into the Google Secret Manager.
The DB's IP is exposed via private DNS zone.
How can I now get the username and password to access the DB into my K8s cluster? Running a Spring Boot app here.
This was one option I thought of:
In my deployment I add an initContainer:
- name: secrets
  image: gcr.io/google.com/cloudsdktool/cloud-sdk
  args:
  - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> super_secret.env
Okay, what now? How do I get it into my application container from here?
There are also options like bitnami/sealed-secrets, which I don't like since the setup is using Terraform already and saving the secrets in GCP. When using sealed-secrets I could skip using the secrets manager. Same with Vault IMO.
On top of the other answers and the suggestions in the comments, I would like to suggest two tools that you might find interesting.
First one is secrets-init:
secrets-init is a minimalistic init system designed to run as PID 1
inside container environments, and it is integrated with
multiple secrets manager services, e.g. Google Secret Manager.
Second one is kube-secrets-init:
kube-secrets-init is a Kubernetes mutating admission webhook
that mutates any K8s Pod that is using specially prefixed environment
variables, directly or from a Kubernetes Secret or ConfigMap.
It also supports integration with Google Secret Manager:
You can put a Google secret name (prefixed with gcp:secretmanager:) as an environment variable value. secrets-init will resolve any environment value, using the specified name, to the referenced secret value.
Here's a good article about how it works.
How do I get it into my application container from here?
You could use a volume to store the secret and mount the same volume in both the init container and the main container, so the init container shares the secret with the main container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: secrets
    image: gcr.io/google.com/cloudsdktool/cloud-sdk
    # run through a shell so the $(gcloud ...) substitution happens,
    # and write to the shared volume so the main container can read it
    command: ["/bin/sh", "-c"]
    args:
    - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> /data/super_secret.env
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
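The main container can then load the shared file before starting the app; a sketch of that container entry (the shell wrapper and jar path are assumptions):
  - name: my-app
    image: my-app:latest
    command: ["/bin/sh", "-c"]
    # export the variables written by the init container, then start the app
    args:
    - set -a && . /data/super_secret.env && set +a && exec java -jar /app/my-app.jar
    volumeMounts:
    - name: config-data
      mountPath: /data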
You can use spring-cloud-gcp-starter-secretmanager to load secrets from the Spring application itself.
Documentation - https://cloud.spring.io/spring-cloud-gcp/reference/html/#secret-manager
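For illustration, with that starter on the classpath, secrets can be referenced directly as property placeholders (the secret name here is a placeholder):
# application.properties
spring.datasource.password=${sm://db-password}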
Use an emptyDir volume with medium: Memory to guarantee that the secret will not be persisted to disk.
...
volumes:
- name: scratch
  emptyDir:
    medium: Memory
    sizeLimit: "1Gi"
...
If one has control over the image, it's possible to change the entry point and use berglas.
Dockerfile:
# ...or whatever base image you need
FROM adoptopenjdk/openjdk8:jdk8u242-b08-ubuntu
# Install berglas, see https://github.com/GoogleCloudPlatform/berglas
RUN mkdir -p /usr/local/bin/
ADD https://storage.googleapis.com/berglas/main/linux_amd64/berglas /usr/local/bin/berglas
RUN chmod +x /usr/local/bin/berglas
ENTRYPOINT ["/usr/local/bin/berglas", "exec", "--"]
Now we build the container and test it:
docker build -t image-with-berglas-and-your-app .
docker run \
-v /host/path/to/credentials_dir:/root/credentials \
--env GOOGLE_APPLICATION_CREDENTIALS=/root/credentials/your-service-account-that-can-access-the-secret.json \
--env SECRET_TO_RESOLVE=sm://your-google-project/your-secret \
-ti image-with-berglas-and-your-app env
This should print the environment variables with the sm:// substituted by the actual secret value.
In K8s we run it with Workload Identity, so the K8s service account on behalf of which the pod is scheduled needs to be bound to a Google service account that has the right to access the secret.
In the end your pod description would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: your-app
spec:
  containers:
  - name: your-app
    image: image-with-berglas-and-your-app
    command: [start-sql-server]
    env:
    - name: AXIOMA_PASSWORD
      value: sm://your-google-project/your-secret

k8s: Use parameterized image tag when creating deployment

I want to run a kubernetes deployment in the likes of the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: my-app
        image: our-own-registry.com/somerepo/my-app:${IMAGE_TAG}
        env:
        - name: FOO
          value: "BAR"
This will be delivered to the developers so that they can perform on-demand deployments using the image tag of their preference.
What is the best way / recommended pattern to pass the tag variable?
Performing an export on the command line, to make it available as an env var in the shell from which the kubectl command will run?
Unfortunately, this is impossible via native Kubernetes tools. From here:
kubectl will never support variable substitution.
But that issue also has some good workarounds. The best way is to deploy your apps via Helm charts, using templates.
For simple use cases envsubst will do just fine:
IMAGE_TAG=1.2 envsubst < deployment.yaml | kubectl apply -f -

How can I use environment variable in kubernetes replication controller yaml file

How can I read environment variables in a Kubernetes YAML file?
For example, I want to change the Docker image tag without rewriting the file, like this:
apiVersion: v1
kind: ReplicationController
...
spec:
  containers:
  - name: myapp
    image: myapp:${VERSION}
...
With this I can do kubectl rolling-update without updating the yaml file.
thanks
If you want a simple, lightweight approach, you might try using envsubst. So assuming your example is in a file called example.yaml, in a bash shell you'd execute:
export VERSION=69
envsubst < example.yaml | kubectl apply -f -
Also recent versions of Kustomize can do it too.
Helm should solve your config issues - https://github.com/kubernetes/helm
You should use a Deployment coupled with kubectl set image like this:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
I would highly recommend using Helm: https://github.com/kubernetes/helm
You can install Helm using the information contained in the above link. That will make the helm command available to you.
Running helm create YOUR_APP_NAME will create a directory structure like the following.
YOUR_APP_NAME/
  Chart.yaml          # A YAML file containing information about the chart
  LICENSE             # OPTIONAL: A plain text file containing the license for the chart
  README.md           # OPTIONAL: A human-readable README file
  values.yaml         # The default configuration values for this chart
  charts/             # OPTIONAL: A directory containing any charts upon which this chart depends.
  templates/          # OPTIONAL: A directory of templates that, when combined with values,
                      # will generate valid Kubernetes manifest files.
  templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
In the values.yaml file you can set some default values, like:
container:
  name: "nginx"
  version: "latest"
In your ReplicationController file you can reference the variables using:
apiVersion: v1
kind: ReplicationController
...
spec:
  containers:
  - name: myapp
    image: {{ .Values.container.name }}:{{ .Values.container.version }}
...
The YAML file for your replication controller should be placed in the templates directory.
You can then run helm package YOUR_PACKAGE_NAME. To install the package on your K8s cluster, run helm install PACKAGE_NAME.
NOTE: I would suggest you switch to using Deployments instead of ReplicationController. See: https://kubernetes.io/docs/user-guide/deployments/
Maybe you mean this?
- name: PUBLIC_URL
  value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)"
This is what their docs specify, but it doesn't work for me anymore.
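For what it's worth, the $(VAR) syntax only expands variables defined earlier in the same env list, so a working variant might look like this (a sketch; the port is hard-coded for brevity):
env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: PUBLIC_URL
  value: "http://gitserver.$(POD_NAMESPACE):8080"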
