k8s: Use parameterized image tag when creating deployment - shell

I want to run a kubernetes deployment in the likes of the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: my-app
        image: our-own-registry.com/somerepo/my-app:${IMAGE_TAG}
        env:
        - name: FOO
          value: "BAR"
This will be delivered to the developers so that they can perform on-demand deployments using the image tag of their preference.
What is the best way / recommended pattern to pass the tag variable?
Should I perform an export on the command line so it is available as an env var in the shell from which the kubectl command will run?

Unfortunately, it's impossible via native Kubernetes tools. From here:
kubectl will never support variable substitution.
But that issue also has some good workarounds. The best approach is to deploy your apps via Helm charts, using templates, as sketched below.
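A minimal sketch (the chart layout and value names are illustrative, not taken from the question): the tag becomes a chart value that developers override at deploy time.

# values.yaml
image:
  tag: latest

# templates/deployment.yaml (relevant line only)
        image: our-own-registry.com/somerepo/my-app:{{ .Values.image.tag }}

# on-demand deployment with the tag of choice
helm upgrade --install my-app ./my-app-chart --set image.tag=1.2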

For simple use cases envsubst will do just fine:
IMAGE_TAG=1.2 envsubst < deployment.yaml | kubectl apply -f -

Related

Job that executes command inside a pod

What I'd like to ask is whether it is possible to create a Kubernetes Job that runs a bash command inside another Pod.
apiVersion: batch/v1
kind: Job
metadata:
  namespace: dev
  name: run-cmd
spec:
  ttlSecondsAfterFinished: 180
  template:
    spec:
      containers:
      - name: run-cmd
        image: <IMG>
        command: ["/bin/bash", "-c"]
        args:
        - <CMD> $POD_NAME
      restartPolicy: Never
  backoffLimit: 4
I considered using :
Environment variable to define the pod name
Using Kubernetes SDK to automate
But if you have better ideas I am open to them, please!
The Job manifest you shared seems like a valid idea.
However, you need to take the following points into consideration:
Running a command inside another pod (i.e. inside one of its containers) requires interacting with the Kubernetes API server, so you need a Kubernetes client (e.g. kubectl) installed in the Job's container image.
The Job's pod's service account must have permission to use the pods/exec resource. See the docs and this answer, and the RBAC sketch below.
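A hedged sketch of the RBAC part (the Role/RoleBinding names and the use of the default service account are assumptions for illustration):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-exec
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: pod-exec
subjects:
- kind: ServiceAccount
  name: default   # or a dedicated service account set via serviceAccountName in the Job spec
  namespace: dev
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io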

Kubernetes bash to POD after creation

I tried to create the Pod using the command
kubectl run --generator=run-pod/v1 mypod --image=myimage:1 -it bash
and after successful pod creation it drops me into a bash prompt inside the container.
Is there any way to achieve the same using a YAML file? I tried the YAML below, but it does not go to bash directly after the Pod is created. I had to manually run kubectl exec -it POD_NAME bash, but I want to avoid using the exec command. I want my YAML to take me into the container as soon as the Pod is created. Is there any way to achieve this?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - args:
    - bash
    name: mypod
    image: myimage:1
    stdin: true
    stdinOnce: true
    tty: true
This is a community wiki answer. Feel free to expand it.
As already mentioned by David, it is not possible to drop into bash directly after a Pod is created using YAML syntax alone. You have to use a kubectl command such as kubectl exec in order to get a shell to a running container.
The key is to have a pod that will not exit.
Here is an example for you.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - command:
    - bash
    - -c
    - yes > /dev/null
    name: mypod
    image: myimage:1
The command yes will continue to output the string yes until it is killed.
The part > /dev/null will make sure that you won't have a ton of garbage logs.
Then you can access your pod with these commands.
kubectl apply -f my-pod.yaml
kubectl exec -it mypod bash
Remember to remove the pod after you finish all the operations.

Correctly override "settings.xml" in Jenkinsfile Maven build on kubernetes?

We are setting up a Jenkins-based CI pipeline on our Kubernetes cluster (Rancher, if that matters) and up to now we have used the official maven:3-jdk-11-slim image for experiments. Unfortunately it does not provide any built-in way of overriding the default settings.xml to use a mirror, which we need, preferably just by setting an environment variable. I am not very familiar with Kubernetes, so I may be missing something simple.
Is there a simple way to add a file to the image? Should I use another image with this functionality built in?
pipeline {
  agent {
    kubernetes {
      yaml """
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
  - name: kaniko
    .... etc
Summary: you can mount your settings.xml file into the pod at a specific path and use it with the command mvn -s /my/path/to/settings.xml.
Crou's ConfigMap approach is one way to do it. However, since the settings.xml file usually contains credentials, I would treat it as a Secret.
You can create a Secret in Kubernetes with command:
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml
The pod definition will be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: mvn-settings-vol
      mountPath: /my/path/to
  volumes:
  - name: mvn-settings-vol
    secret:
      secretName: mvn-settings
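Inside the pipeline the mounted file is then passed to Maven explicitly; a minimal sketch (the goals are illustrative):

container('maven') {
  sh 'mvn -s /my/path/to/settings.xml clean install'
}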
Advanced/optional: if you practice "Infrastructure as Code", you might want to save the manifest file for that Secret for recovery. This can be achieved with the following command after the Secret has been created:
$ kubectl get secrets mvn-settings -o yaml
You can keep the secrets.yml file, but do not check it into any VCS/GitHub repo, since this version of secrets.yml contains unencrypted data.
Some k8s administrators may have kubeseal installed. In that case, I'd recommend using kubeseal to get an encrypted version of secrets.yml:
$ kubectl create secret generic mvn-settings --from-file=settings.xml=./settings.xml --dry-run -o json | kubeseal --controller-name=controller --controller-namespace=k8s-sealed-secrets --format=yaml >secrets.yml
# Actually create secrets
$ kubectl apply -f secrets.yml
The controller-name and controller-namespace should be obtained from your k8s administrators.
This secrets.yml contains the encrypted data of your settings.xml and can safely be checked into a VCS/GitHub repo.
If you want to override a file inside a pod, you can use a ConfigMap to store the changed file and mount it in place of the original one.
You can create the ConfigMap from a file using
kubectl create configmap settings-xml --from-file=settings.xml
Your pod definition might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11-slim
    command:
    - cat
    tty: true
    volumeMounts:
    - name: config-settings
      mountPath: /usr/share/maven/ref/settings.xml
      # mount only this key as a single file instead of shadowing the directory
      subPath: settings.xml
  volumes:
  - name: config-settings
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: settings-xml
...
This worked for me:
Install the Config File Provider Plugin.
Go to Manage Jenkins > Config File Management > Add a new config and insert your settings.xml there.
In your Jenkinsfile, put your rtMavenRun inside a configFileProvider block, using the fileId of the Jenkins config file you created before:
stage('Build Maven') {
  steps {
    configFileProvider([configFile(fileId: 'MavenArtifactorySettingId', variable: 'MAVEN_SETTINGS_XML')]) {
      retry(count: 3) {
        rtMavenRun(
          tool: "Maven 3.6.2", // id specified in Global Tool Configuration
          pom: 'pom.xml',
          goals: '-U -s $MAVEN_SETTINGS_XML clean install',
        )
      }
    }
  }
}
This is exactly the pipeline that I used; if you want to see more: https://gist.github.com/robertobatts/42da9069e13b61a238f51c36754de97b
If the settings.xml of the project is versioned with the code, it makes sense to build with mvn install -s settings.xml using an sh step; this is what I did at work. If settings.xml is not versioned with the project, it indeed makes sense to mount the file with Crou's solution.
To answer your question "Should I use another image with this functionality built in?": I would recommend avoiding custom images as much as possible, because you will end up having to maintain them.

Openshift - Variables in Config for different Environments

I am currently trying to make deployments on two different OpenShift clusters, but I only want to use one DeploymentConfig file. Is there a good way to overcome the following problem?
apiVersion: v1
kind: DeploymentConfig
metadata:
  labels:
    app: my-app
    deploymentconfig: my-app
  name: my-app
spec:
  selector:
    app: my-app
    deploymentconfig: my-app
  strategy:
    type: Rolling
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
        deploymentconfig: my-app
    spec:
      containers:
      - name: my-app-container
        image: 172.0.0.1:5000/int-myproject/my-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: ROUTE_PATH
          value: /my-app
        - name: HTTP_PORT
          value: "8080"
        - name: HTTPS_PORT
          value: "8081"
      restartPolicy: Always
      dnsPolicy: ClusterFirst
Now if you look at spec.template.spec.containers[0].image, there are two problems with this.
Nr.1
172.0.0.1:5000/int-myproject/my-app:latest
The IP of the internal registry will differ between the two environments
Nr.2
172.0.0.1:5000/int-myproject/my-app:latest
The namespace will also not be the same. In this scenario I want it to be int-myproject or prod-myproject, depending on the environment I want to deploy to. I was thinking maybe there is a way to use parameters in the YAML and pass them to OpenShift, similar to this:
oc create -f deploymentconfig.yaml --namespace=int-myproject
and have a parameter like ${namespace} in my yaml file. Is there a good way to achieve this?
Firstly, to answer your question: yes, you can use parameters with OpenShift templates and pass the values at creation time.
To do this, add the required template parameters to your YAML file and, instead of oc create, use oc new-app -f deploymentconfig.yaml --param=SOME_KEY=someValue. Check out oc new-app --help for more info, and see the template sketch below.
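A minimal sketch of wrapping the DeploymentConfig in a Template (the parameter names and the truncated object are illustrative, not from the question):

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: my-app-template
parameters:
- name: REGISTRY
  required: true
- name: NAMESPACE
  required: true
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: my-app
  spec:
    template:
      spec:
        containers:
        - name: my-app-container
          image: ${REGISTRY}/${NAMESPACE}/my-app:latest

oc new-app -f deploymentconfig.yaml --param=REGISTRY=172.0.0.1:5000 --param=NAMESPACE=int-myproject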
Some other points to note, though: if you are referencing images from the internal registry, you might be better off using ImageStreams. These provide an abstraction over images pulled from the internal Docker registry on OpenShift, as in the case you have outlined.
Finally, the namespace value is available via the downward API in every Pod, so you should not (typically) need to inject it manually.
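For example, a minimal env entry exposing the namespace through the downward API:

env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace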

How can I use environment variable in kubernetes replication controller yaml file

How do I read environment variables in a Kubernetes YAML file?
For example, I want to change the Docker image tag without rewriting the file, like this:
apiVersion: v1
kind: ReplicationController
...
spec:
  containers:
  - name: myapp
    image: myapp:${VERSION}
...
With this I can do kubectl rolling-update without updating the YAML file.
Thanks.
If you want a simple, lightweight approach, you might try using envsubst. Assuming your example is in a file called example.yaml, in a bash shell you'd execute:
export VERSION=69
envsubst < example.yaml | kubectl apply -f -
Also, recent versions of Kustomize can do it too; see the sketch below.
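A minimal Kustomize sketch (file and image names are illustrative); the images transformer rewrites the tag without touching the manifest:

# kustomization.yaml
resources:
- example.yaml
images:
- name: myapp
  newTag: "69"

# or change the tag on the fly, then apply
kustomize edit set image myapp=myapp:70
kubectl apply -k .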
Helm should solve your config issues - https://github.com/kubernetes/helm
You should use a Deployment coupled with kubectl set image like this:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
I would highly recommend using Helm: https://github.com/kubernetes/helm
You can install Helm using the information in the link above; that will make the helm command available to you.
Running helm create YOUR_APP_NAME will create a directory structure like the following:
YOUR_APP_NAME/
  Chart.yaml          # A YAML file containing information about the chart
  LICENSE             # OPTIONAL: A plain text file containing the license for the chart
  README.md           # OPTIONAL: A human-readable README file
  values.yaml         # The default configuration values for this chart
  charts/             # OPTIONAL: A directory containing any charts upon which this chart depends.
  templates/          # OPTIONAL: A directory of templates that, when combined with values,
                      #           will generate valid Kubernetes manifest files.
  templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
In the values.yaml file you can set some default values, like:
container:
  name: "nginx"
  version: "latest"
In your ReplicationController template you can reference these values using:
apiVersion: v1
kind: ReplicationController
...
spec:
  containers:
  - name: myapp
    image: {{ .Values.container.name }}:{{ .Values.container.version }}
...
The YAML file for your replication controller should be placed in the templates directory.
You can then run helm package YOUR_PACKAGE_NAME. To install the package on your K8s cluster, run helm install PACKAGE_NAME.
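To deploy a specific tag, the chart values can be overridden on the command line; a hedged example (the exact invocation depends on your Helm version):

helm install PACKAGE_NAME --set container.version=1.2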
NOTE: I would suggest you switch to using Deployments instead of ReplicationController. See: https://kubernetes.io/docs/user-guide/deployments/
Maybe you mean this?
- name: PUBLIC_URL
  value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)"
This is something their docs specify, but it doesn't work for me anymore.
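A likely reason is that $(VAR) references are only expanded when the referenced variables are defined earlier in the same container's env list; otherwise the literal string is kept. A hedged sketch (the port value is illustrative):

env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: SERVICE_PORT
  value: "8080"
- name: PUBLIC_URL
  value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)"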
