How can I use an environment variable in a Kubernetes ReplicationController YAML file?

How can I read environment variables in a Kubernetes YAML file?
For example, I want to change the Docker image tag without rewriting the file, like this:
apiVersion: v1
kind: ReplicationController
...
spec:
  containers:
    - name: myapp
      image: myapp:${VERSION}
...
With this I could do a kubectl rolling-update without updating the YAML file.
Thanks

If you want a simple, lightweight approach, you might try using envsubst. Assuming your example is in a file called example.yaml, in a bash shell you'd execute:
export VERSION=69
envsubst < example.yaml | kubectl apply -f -
Recent versions of Kustomize can do this too.
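As a rough sketch of the Kustomize route (the file names and tag value below are placeholders, and it assumes the manifest pins some concrete default tag), a kustomization.yaml next to your manifest can rewrite the tag without editing the manifest itself:
# kustomization.yaml
resources:
  - example.yaml       # the manifest that contains image: myapp:<some-default-tag>
images:
  - name: myapp        # image name to match
    newTag: "69"       # tag to substitute
Build and apply it with kubectl apply -k . (or kustomize build . | kubectl apply -f -).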

Helm should solve your config issues - https://github.com/kubernetes/helm

You should use a Deployment coupled with kubectl set image like this:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
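If useful, you can then watch the rollout that the image change triggers (assuming the same deployment name as above):
kubectl rollout status deployment/nginx-deployment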

I would highly recommend using Helm. https://github.com/kubernetes/helm
You can install Helm using the information in the link above. That will make the helm command available to you.
Running helm create YOUR_APP_NAME will create a directory structure like the following:
YOUR_APP_NAME/
  Chart.yaml            # A YAML file containing information about the chart
  LICENSE               # OPTIONAL: A plain text file containing the license for the chart
  README.md             # OPTIONAL: A human-readable README file
  values.yaml           # The default configuration values for this chart
  charts/               # OPTIONAL: A directory containing any charts upon which this chart depends
  templates/            # OPTIONAL: A directory of templates that, when combined with values,
                        #           will generate valid Kubernetes manifest files
  templates/NOTES.txt   # OPTIONAL: A plain text file containing short usage notes
In the values.yaml file you can set default values such as:
container:
  name: "nginx"
  version: "latest"
In your ReplicationController template you can reference these values using:
apiVersion: v1
kind: ReplicationController
...
spec:
  containers:
    - name: myapp
      image: {{.Values.container.name}}:{{.Values.container.version}}
...
The YAML file for your ReplicationController should be placed in the templates directory.
You can then run helm package YOUR_APP_NAME to package the chart. To install the package on your K8S cluster, run helm install PACKAGE_NAME.
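A rough sketch of overriding the image version at install or upgrade time without editing any chart file (the release name and chart path are placeholders, using the Helm syntax of the era of the repository linked above):
helm install ./YOUR_APP_NAME --set container.version=1.2.3
helm upgrade YOUR_RELEASE_NAME ./YOUR_APP_NAME --set container.version=1.2.4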
NOTE: I would suggest you switch to using Deployments instead of ReplicationController. See: https://kubernetes.io/docs/user-guide/deployments/

Maybe you mean this?
- name: PUBLIC_URL
  value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)"
This is what their docs specify, but it no longer works for me.
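For what it's worth, that $(VAR) expansion only resolves variables defined earlier in the same env list; here is a minimal sketch of what the docs appear to assume (the service name and port value are placeholders):
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: SERVICE_PORT
    value: "8080"
  - name: PUBLIC_URL
    value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)"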

Related

How to overwrite a file in a pod's container in a Kubernetes deployment file?

I want to overwrite a file in the pod's container. Right now I have elasticsearch.yml at /usr/share/elasticsearch/config.
I was trying to achieve this with an initContainer in the Kubernetes deployment file, so I added something like:
- name: disabled-the-xpack-security
  image: busybox
  command:
    - /bin/sh
    - -c
    - |
      sleep 20
      rm /usr/share/elasticsearch/config/elasticsearch.yml
      cp /home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml /usr/share/elasticsearch/config/
  securityContext:
    privileged: true
But this doesn't work; the error looks like:
rm: can't remove '/usr/share/elasticsearch/config/elasticsearch.yml': No such file or directory
cp: can't stat '/home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml': No such file or directory
I was trying to use something like echo "some yaml config" >> elasticsearch.yml, but this kind of workaround doesn't work because I wasn't able to keep proper YAML formatting.
Do you have any suggestions how I can do this?
As stated by Arman in the comments, you can create a ConfigMap with the contents of /home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml and mount it as a volume in the deployment.
To create the config map from your file you can run:
kubectl create configmap my-es-config --from-file=/home/x/IdeaProjects/BD/infra/istio/kube/elasticsearch.yml
This will create a ConfigMap inside your kubernetes cluster with the yaml file.
You can then use that and add the volume mount to your deployment as:
containers:
  - name: elasticsearch
    image: k8s.gcr.io/busybox
    ...
    volumeMounts:
      - name: config-volume
        mountPath: /usr/share/elasticsearch/config/
volumes:
  - name: config-volume
    configMap:
      name: my-es-config
Notes
It is recommended to create your ConfigMap as yaml as well. More information here
Mounting a configmap directly on /usr/share/elasticsearch/config/ will replace everything inside that path with the config file from the configmap. If that causes an issue, you might want to mount it at another location and then copy it.
Note that if you don't want to override everything in the mounted directory, you can mount only the file, using subPath, into whatever directory you want.
https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath
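A minimal sketch of that subPath variant, assuming the same ConfigMap name as above, so that only elasticsearch.yml is replaced and the rest of the config directory stays intact:
volumeMounts:
  - name: config-volume
    mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
    subPath: elasticsearch.yml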

Is it possible to do parameter substitution in k8s configMaps?

I have a ConfigMap that loads properties files for my Spring Boot application.
My ConfigMap is mounted as a volume and my Spring Boot app reads from that volume.
My typical property files are:
application-dev1.yml:
integrations-queue-name=integration-dev1
search-queue-name=searchindex-dev1

application-dev2.yml:
integrations-queue-name=integration-dev2
search-queue-name=searchindex-dev1

application-dev3.yml:
integrations-queue-name=integration-dev3
search-queue-name=searchindex-dev1
My goal is to have 1 properties file
application-env.yml
integrations-queue-name=integration-{env}
search-queue-name=searchindex-{env}
I want to do parameter substitution of {env} with the profile that is active for my service.
Is it possible to do parameter substitution in ConfigMaps from my Spring Boot application running in the pod? I am looking for something similar to the maven-resources-plugin, but done at run time.
If it's just those two, then you will likely get more mileage out of the SPRING_APPLICATION_JSON environment variable, which should supersede anything in the ConfigMap:
containers:
  - name: my-spring-app
    image: whatever
    env:
      - name: ENV_NAME
        value: dev2
      - name: SPRING_APPLICATION_JSON
        value: |
          {"integrations-queue-name": "integration-$(ENV_NAME)",
           "search-queue-name": "searchindex-$(ENV_NAME)"}
materializes as:
$ kubectl exec my-spring-pod -- printenv
ENV_NAME=dev2
SPRING_APPLICATION_JSON={"integrations-queue-name": "integration-dev2",
"search-queue-name": "searchindex-dev2"}

Correctly override "settings.xml" in Jenkinsfile Maven build on kubernetes?

We are setting up a Jenkins-based CI pipeline on our Kubernetes cluster (Rancher, if that matters), and up to now we have used the official maven:3-jdk-11-slim image for experiments. Unfortunately it does not provide any built-in way of overriding the default settings.xml to use a mirror, which we need - preferably just by setting an environment variable. I am not very familiar with Kubernetes, so I may be missing something simple.
Is there a simple way to add a file to the image? Should I use another image with this functionality built in?
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
    - name: maven
      image: maven:3-jdk-11-slim
      command:
        - cat
      tty: true
    - name: kaniko
      .... etc
Summary: you can mount your settings.xml file into the pod at some specific path and use it with mvn -s /my/path/to/settings.xml.
Crou's ConfigMap approach is one way to do it. However, since the settings.xml file usually contains credentials, I would treat it as a Secret.
You can create a Secret in Kubernetes with the command:
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml
The pod definition will be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
    - name: maven
      image: maven:3-jdk-11-slim
      command:
        - cat
      tty: true
      volumeMounts:
        - name: mvn-settings-vol
          mountPath: /my/path/to
  volumes:
    - name: mvn-settings-vol
      secret:
        secretName: mvn-settings
Advanced/Optional: If you practice "Infrastructure as Code", you might want to save the manifest file for that Secret for recovery. This can be achieved with the following command after the Secret has been created:
$ kubectl get secrets mvn-settings -o yaml
You can keep the secrets.yml file, but do not check it into any VCS/GitHub repo, since this version of secrets.yml contains unencrypted data.
Some k8s administrators may have kubeseal installed. In that case, I'd recommend using kubeseal to get an encrypted version of secrets.yml.
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml --dry-run -o json | kubeseal --controller-name=controller --controller-namespace=k8s-sealed-secrets --format=yaml >secrets.yml
# Actually create secrets
$ kubectl apply -f secrets.yml
The controller-name and controller-namespace should be obtained from your k8s administrators.
This secrets.yml contains the encrypted data of your settings.xml and can be safely checked into a VCS/GitHub repo.
If you want to override a file inside a pod, you can use a ConfigMap to store the changed file and mount it instead of the previous one.
You can create the ConfigMap from a file using
kubectl create configmap settings-xml --from-file=settings.xml
Your pod definition might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
    - name: maven
      image: maven:3-jdk-11-slim
      command:
        - cat
      tty: true
      volumeMounts:
        - name: config-settings
          mountPath: /usr/share/maven/ref/settings.xml
  volumes:
    - name: config-settings
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: settings-xml
...
This worked for me:
Install the Config File Provider Plugin.
Go to Manage Jenkins > Config File Management > Add a new config, and insert your settings.xml there.
In your Jenkinsfile, put your rtMavenRun inside a configFileProvider block, using the fileId of the Jenkins config file you created before:
stage('Build Maven') {
  steps {
    configFileProvider([configFile(fileId: 'MavenArtifactorySettingId', variable: 'MAVEN_SETTINGS_XML')]) {
      retry(count: 3) {
        rtMavenRun(
          tool: "Maven 3.6.2", // id specified in Global Tool Configuration
          pom: 'pom.xml',
          goals: '-U -s $MAVEN_SETTINGS_XML clean install',
        )
      }
    }
  }
}
This is exactly the pipeline I used; if you want to see more: https://gist.github.com/robertobatts/42da9069e13b61a238f51c36754de97b
If the project's settings.xml is versioned with the code, it makes sense to build with mvn install -s settings.xml using an sh step. That is what I did at work. If settings.xml is not versioned with the project, it indeed makes sense to mount the file with Crou's solution.
To answer your question "Should I use another image with this functionality built in?": I would recommend avoiding custom images as much as possible, because you will end up having to maintain them.

k8s: Use parameterized image tag when creating deployment

I want to run a Kubernetes deployment along the lines of the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-app
          image: our-own-registry.com/somerepo/my-app:${IMAGE_TAG}
          env:
            - name: FOO
              value: "BAR"
This will be delivered to the developers so that they can perform on-demand deployments using the image tag of their preference.
What is the best way / recommended pattern to pass the tag variable?
Performing an export on the command line to make it available as an env var in the shell from which the kubectl command will run?
Unfortunately, it's impossible via native Kubernetes tools. From here:
kubectl will never support variable substitution.
But that issue also has some good workarounds. The best way is to deploy your apps via Helm charts, using templates.
For simple use cases envsubst will do just fine:
IMAGE_TAG=1.2 envsubst < deployment.yaml | kubectl apply -f -

config-map kubernetes multiple environments

I am trying to deploy a Spring Boot application using configuration data from a Kubernetes ConfigMap. I have a simple RestController that prints a message read from that ConfigMap.
private String message = "Message not coming from Kubernetes config map";

@RequestMapping(value = "/echo", method = GET)
public String printKubeConfig() {
    return message;
}
I specified the name of the config map in my application.yml:
spring:
  application:
    name: echo-configmap
echo-configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: echo-configmap
data:
  application.properties: |-
    message=Hello from dev Kubernetes Configmap
  application_qa.properties: |-
    message=Hello from qa Kubernetes Configmap
I have several environments like qa, int, test, etc.
What's the best way to specify environment-specific properties in the config map? And how do I access them in the Spring Boot application?
For example: if the application is deployed in qa, my service should return the message "Hello from qa Kubernetes Configmap".
We also plan to read these configuration files from Git in the future. How should that use case be handled?
Let me try and provide an answer which I think gives you what you need, without using any tools beyond what you'll have installed on most boxes. Maybe try this first, and if you find the approach becomes difficult to manage and scale, move onto something more sophisticated.
Step 1: Version control configmaps per environment
Create a folder like k8s/configmaps or something, and create one configmap per environment:
k8s/configmaps/properties.dev.yaml
k8s/configmaps/properties.qa.yaml
k8s/configmaps/properties.sit.yaml
k8s/configmaps/properties.uat.yaml
Each configmap should contain your environment specific settings.
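For illustration, one of those files might look like the sketch below, reusing the property names from the question (the ConfigMap name echo-properties is an assumption):
# k8s/configmaps/properties.qa.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: echo-properties
data:
  application.properties: |-
    integrations-queue-name=integration-qa
    search-queue-name=searchindex-qa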
Step 2: Have a namespace per environment
Create a k8s namespace per environment, such as:
application-dev
application-qa
application-sit
application-uat
Step 3: Create the configmap per environment
A little bash will help here:
#!/usr/bin/env bash
# apply-configmaps.sh
namespace="application-${ENVIRONMENT}"
for configmap in ./k8s/configmaps/*.${ENVIRONMENT}.yaml; do
    echo "Processing ConfigMap $configmap"
    kubectl apply -n ${namespace} -f $configmap
done
Now all you need to do to create or update the configmaps for any environment is:
ENVIRONMENT=dev ./apply-configmaps.sh
Step 4: Finish the job with CI/CD
Now you can create a CI/CD pipeline - if your configmap source changes just run the command shown above.
Summary
Based on primitive commands and no special tools you can:
Version control config
Manage config per environment
Update or create config when the config code changes
Easily apply the same approach in a CI/CD pipeline if needed
I would strongly recommend you follow this basic 'first principles' approach before jumping into more sophisticated tools to solve the same problems, in many cases you can do it yourself without much effort, learn the key concepts and save the more sophisticated tooling till later if you really need it.
Hope that helps!
This sounds like a good use case for Helm. You could deploy your application as a Helm Chart, which would basically allow you to generate your Kubernetes resources (like ConfigMaps, Deployments, and whatever else you need) from templates.
You can use the documentation on Helm Charts to get started with Helm. After having created a Chart with helm create, you will get a templates/ directory, in which you might place the following YAML template for your ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-%s" .Release.Name .Chart.Name }}
  labels:
    app: {{ .Chart.Name | trunc 63 | trimSuffix "-" }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  application.properties: |-
    message={{ .Values.properties.message }}
You can add a second YAML template for your Deployment object (actually, helm create will already create a sensible default deployment). Simply add your ConfigMap as a volume there:
containers:
  - name: {{ .Chart.Name }}
    # [...]
    volumeMounts:
      - name: property-volume
        mountPath: /etc/your-app/properties
volumes:
  - name: property-volume
    configMap:
      name: {{ printf "%s-%s" .Release.Name .Chart.Name }}
Each Helm chart has a values.yaml file, in which you can define default values that are then used to fill in your templates. This default file might look like this (remember that the ConfigMap template above contained a {{ .Values.properties.message }} expression):
replicaCount: 1
image:
  repository: your-docker-image
  tag: your-docker-tag
properties:
  message: Hello!
Next, use this Helm chart and the helm install command to deploy your application as many times as you want with different configurations. You can supply different YAML files in which you override specific values from your values.yaml file, or override individual values using --set:
$ helm install --name dev --set image.tag=latest --set replicaCount=1 path/to/chart
$ helm install --name prod --set image.tag=stable --set replicaCount=3 --set properties.message="Hello from prod" path/to/chart
As to your second question: Of course you should put your Helm Chart into version control. You can then use the helm upgrade command to apply changes to an already deployed application.
For non-secret data, I would use this tool for each of your Git projects:
https://github.com/kubernetes-sigs/kustomize
Using labels in the metadata, you can filter your pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mycomponent
    env: dev
    tier: backend
  name: mycomponent
  namespace: myapplication
kubectl get pods -n myapplication -l env=dev,tier=backend,app=mycomponent
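As a hedged sketch of how Kustomize could also generate the per-environment ConfigMap (the overlay layout, file paths, and ConfigMap name are assumptions, reusing names from the question):
# overlays/qa/kustomization.yaml
namespace: myapplication
resources:
  - ../../base
generatorOptions:
  disableNameSuffixHash: true   # keep a stable name so the Deployment can reference it
configMapGenerator:
  - name: echo-configmap
    files:
      - application.properties=config/application-qa.properties
Apply it with kubectl apply -k overlays/qa.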
Typically you want applications running on qa not to interfere with applications running in production. Using Kubernetes, you can get this kind of isolation by using different namespaces for different environments.
That way, objects in the prod namespace are separate from objects in the qa namespace. Another, more expensive approach would be to use different k8s clusters for different environments.
Having this setup, you would deploy your application in the namespace for the specific environment where you want to deploy to, creating the Deployment object on that namespace.
This Deployment would make use of a ConfigMap object containing your Spring Boot properties. Let's call this ConfigMap echo-properties for example.
That way, every namespace would have a unique copy of the echo-properties ConfigMap. Each containing the specific configuration for the environment where it belongs.
The Deployment object consumes the ConfigMap properties by either using environment variables or reading files. The important bit here is that if you change the echo-properties ConfigMap data, your application won't see those new values, by default. Kubernetes doesn't have this feature so far. You can't compare ConfigMaps to Spring Cloud Config, which is a dynamic configuration solution.
An approach that would get you a similar behaviour (but not quite the same) would be using the fabric8 ConfigMap Controller on your cluster. This controller is a process that runs on your cluster, and it would restart your application whenever the ConfigMap changes, so that it reads the new configuration values.
If you don't want to restart your application whenever a configuration changes, you should probably stick to Spring Cloud Config for values that will potentially change, and use ConfigMaps for other properties that won't change, like application name or port.
Your use case sounds very much like you should take a look at spring-cloud-config - https://cloud.spring.io/spring-cloud-config/
The config-server is an infrastructure component that serves configuration that could be located in a git repository.
A config-client application would connect to config-server at startup and loads the configuration applicable to the current profiles.
You could have different branches for different environments - or use profiles per environment. In your kubernetes deployment manifest you could set the profile by setting SPRING_PROFILES_ACTIVE environment variable.
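For example, a minimal sketch of setting the active profile in the container spec of your deployment manifest (the profile value is a placeholder):
env:
  - name: SPRING_PROFILES_ACTIVE
    value: "qa"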
