helm cronjob multiple containers - yaml

I need to run multiple containers in one cronjob execution. Currently I have the following cronjob.yaml template:
jobTemplate:
  spec:
    template:
      metadata:
        labels:
          app: {{ .Release.Name }}
          cron: {{ .Values.filesjob.jobName }}
      spec:
        containers:
          - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
            env:
              - name: FILE_MASK
                value: "{{ .Values.filesjob.fileMask }}"
              - name: ID
                value: "{{ .Values.filesjob.id }}"
            imagePullPolicy: {{ .Values.image.pullPolicy }}
            name: {{ .Values.filesjob.jobName }}
            volumeMounts:
              - mountPath: /data
                name: path-to-clean
              - name: path-logfiles
                mountPath: /log
          - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
            env:
              - name: FILE_MASK
                value: "{{ .Values.filesjob.fileMask }}"
              - name: ID
                value: "{{ .Values.filesjob.id }}"
            imagePullPolicy: {{ .Values.image.pullPolicy }}
            name: {{ .Values.filesjob.jobName2 }}
            volumeMounts:
              - mountPath: /data
                name: path2-to-clean
              - name: path2-logfiles
                mountPath: /log
The above generates a CronJob that runs two containers with different environment variables. Can I generate the same result from values.yaml by iterating over a variable?

I was able to solve this based on the example from this article - https://nikhils-devops.medium.com/helm-chart-for-kubernetes-cronjob-a694b47479a
Here is my template:
{{- if .Values.cleanup.enabled -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: registry-cleanup-job
  namespace: service
  labels:
    {{- include "some.labels" . | nindent 4 }}
spec:
  schedule: {{ .Values.cleanup.schedule }}
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            {{- range .Values.cleanup.crons }}
            - name: {{ .name | quote }}
              image: {{ $.Values.cleanup.imageName }}
              imagePullPolicy: {{ $.Values.cleanup.pullPolicy }}
              args:
                - {{ .command }}
            {{- end}}
          restartPolicy: Never
{{- end }}
And related values:
cleanup:
  enabled: true
  schedule: "0 8 * * 5"
  imageName: digitalocean/doctl:1.60.0
  pullPolicy: IfNotPresent
  crons:
    - command0:
      name: "cleanup0"
      command: command0
    - command1:
      name: "cleanup1"
      command: command1
    - command2:
      name: "cleanup2"
      command: command2
    - command3:
      name: "cleanup3"
      command: command3
    - command4:
      name: "cleanup4"
      command: command4
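Applying the same idea to the original two-container CronJob above, the per-container env vars and volume mounts can also come from a values list. This is only a minimal sketch; the .Values.filesjob.containers list and its field names (fileMask, id, dataVolume, logVolume) are hypothetical and would need to match your chart:

# values.yaml (hypothetical structure and example values)
filesjob:
  containers:
    - name: files-job-1
      fileMask: "*.log"
      id: "1"
      dataVolume: path-to-clean
      logVolume: path-logfiles
    - name: files-job-2
      fileMask: "*.log"
      id: "2"
      dataVolume: path2-to-clean
      logVolume: path2-logfiles

# containers section of the CronJob template
containers:
  {{- range .Values.filesjob.containers }}
  - name: {{ .name }}
    image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag }}"
    imagePullPolicy: {{ $.Values.image.pullPolicy }}
    env:
      - name: FILE_MASK
        value: {{ .fileMask | quote }}
      - name: ID
        value: {{ .id | quote }}
    volumeMounts:
      - name: {{ .dataVolume }}
        mountPath: /data
      - name: {{ .logVolume }}
        mountPath: /log
  {{- end }}

Inside the range, $ refers to the chart's root context, so chart-wide values such as the image stay accessible while the per-container fields come from the list item.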

Related

How can I evaluate a field in a range?

I am trying to create a default template for many similar applications. I need to share the same PVC between two or more pods, and I need to modify the chart so it creates the PVC or not, depending on whether it already exists.
This is my portion of values.yml about volumes:
persistence:
  enabled: true
  volumeMounts:
    - name: vol1
      mountPath: /opt/vol1
    - name: vol2
      mountPath: /opt/vol2
  volumes:
    - name: vol1
      create: true
      claimName: claim-vol1
      storageClassName: gp2
      accessModes: ReadWriteOnce
      storage: 1Gi
    - name: vol2
      create: false
      claimName: claim-vol2
      storageClassName: gp2
      accessModes: ReadWriteOnce
      storage: 1Gi
And this is my pvclaim.yml:
{{- if .Values.persistence.enabled }}
{{- if .Values.volumes.create }}
{{- range .Values.volumes }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .claimName }}
spec:
  storageClassName: {{ .storageClassName }}
  accessModes:
    - {{ .accessModes }}
  resources:
    requests:
      storage: {{ .storage }}
{{- end }}
{{- end }}
{{- end }}
I thought I'd add the create field inside the range over volumes to manage creation of the PVCs (assuming in this example that the PVC vol2 already exists from another Helm chart).
I would like Helm to read the create field inside the range, but this way I get an error:
evaluate field create in type interface {}
If you have any other ideas they are welcome, thanks!
volumes is an array; it does not have a create field.
The elements of volumes have that field, so .Values.volumes.create does not make sense. Inside the range you may check the create field of the current element using .create, e.g.
{{- range .Values.volumes }}
{{if .create}}do something here{{end}}
{{- end}}
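Putting the pieces together, a minimal sketch of pvclaim.yml (assuming the volumes list stays nested under persistence, as in the values above) could look like this:

{{- if .Values.persistence.enabled }}
{{- range .Values.persistence.volumes }}
{{- if .create }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .claimName }}
spec:
  storageClassName: {{ .storageClassName }}
  accessModes:
    - {{ .accessModes }}
  resources:
    requests:
      storage: {{ .storage }}
{{- end }}
{{- end }}
{{- end }}

With the example values this renders a PersistentVolumeClaim only for vol1 (create: true) and skips vol2.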

Is there a way to give different parameters to a single job in helm charts having parallelism

This is my job configuration when parallelism: 2
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  parallelism: 2
Two pods for the same Job will come up. How do I give two different parameters to the two pods using Helm charts?
I have tried to do this as follows. Below is my Helm template for the Job.
{{- range $index := until (.Values.replicaCount | int) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: alpine-base-{{ index $.Values.ip_range $index }}
  labels:
    app: alpine-app
    chart: alpine-chart
  annotations:
    "helm.sh/hook-delete-policy": "hook-succeeded"
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  completionMode: Indexed
  parallelism: {{ $.Values.replicaCount }}
  completions: {{ $.Values.replicaCount }}
  template:
    metadata:
      labels:
        app_type: base
    spec:
      containers:
        - name: alpine-base
          image: "{{ $.Values.global.imageRegistry }}/{{ $.Values.global.image.repository }}:{{ $.Values.global.image.tag }}"
          imagePullPolicy: "{{ $.Values.global.imagePullPolicy }}"
          volumeMounts:
            - mountPath: /scripts
              name: scripts-vol
          env:
            - name: IP
              value: {{ index $.Values.slave_producer.ip_range $index }}
            - name: SLEEP_TIME
              value: {{ $.Values.slave_producer.sleepTime }}
          command: ["/bin/bash"]
          args: ["/scripts/slave_trigger_script.sh"]
          ports:
            - containerPort: 1093
            - containerPort: 5000
            - containerPort: 3445
          resources:
            requests:
              memory: "1024Mi"
              cpu: "1024m"
            limits:
              memory: "1024Mi"
              cpu: "1024m"
      volumes:
        - name: scripts-vol
          configMap:
            name: scripts-configmap
      restartPolicy: Never
{{- end }}
But both pods get the same IP in the environment variable
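One possible approach (a sketch, not from the original post): because the Job already uses completionMode: Indexed, each pod receives its completion index in the JOB_COMPLETION_INDEX environment variable, so the container can pick its own IP from a list passed in once. The IP_LIST variable and the wrapper command below are hypothetical, and assume slave_producer.ip_range is a plain list of strings:

          env:
            - name: IP_LIST
              # Hypothetical: the whole list in one variable, e.g. "10.0.0.1,10.0.0.2"
              value: {{ join "," $.Values.slave_producer.ip_range | quote }}
          command: ["/bin/bash", "-c"]
          args:
            - |
              # JOB_COMPLETION_INDEX (0, 1, ...) is injected by Kubernetes for Indexed Jobs
              export IP="$(echo "$IP_LIST" | cut -d',' -f$((JOB_COMPLETION_INDEX + 1)))"
              exec /bin/bash /scripts/slave_trigger_script.sh

With this, a single Job with parallelism: 2 would give each pod a different IP, and the outer range over replicaCount would no longer be needed for that purpose.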

Multiple expression RegEx in Ansible

Note: I have next to zero experience with Ansible.
I need to be able to conditionally modify the configuration of a Kubernetes cluster control plane service. To do this I need to be able to find a specific piece of information in the file and if its value matches a specific pattern, change the value to something else.
To illustrate, consider the following YAML file:
apiVersion: v1
kind: Pod
metadata:
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
...
In this scenario, the line I'm interested in is the line containing --bind-address. If that field's value is "127.0.0.1", it needs to be changed to "0.0.0.0". If it's already "0.0.0.0", nothing needs to be done. (I could also approach it from the other direction: if it's not "0.0.0.0", then it needs to change to that.)
The initial thought that comes to mind is: just search for "--bind-address=127.0.0.1" and replace it with "--bind-address=0.0.0.0". Simple enough, eh? No, not that simple. What if, for some reason, there is another piece of configuration in this file that also matches that pattern? Which one is the right one?
The only way I can think of to ensure I find the right text to change, is a multiple expression RegEx match. Something along the lines of:
find spec:
if found, find containers: "within" or "under" spec:
if found, find - command: "within" or "under" containers: (Note: there can be more than one "command")
if found, find - kube-controller-manager "within" or "under" - command:
if found, find - --bind-address "within" or "under" - kube-controller-manager
if found, get the value after the =
if 127.0.0.1 change it to 0.0.0.0, otherwise do nothing
How could I write an Ansible playbook to perform these steps, in sequence and only if each step returns true?
Read the data from the file into a dictionary
- include_vars:
    file: conf.yml
    name: conf
gives
conf:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=127.0.0.1
Update the containers
- set_fact:
    containers: []
- set_fact:
    containers: "{{ containers +
                    (update_candidate is all)|ternary([_item], [item]) }}"
  loop: "{{ conf.spec.containers|d([]) }}"
  vars:
    update_candidate:
      - item is contains 'command'
      - item.command is contains 'kube-controller-manager'
      - item.command|select('match', '--bind-address')|length > 0
    update: "{{ item.command|map('regex_replace',
                                 '--bind-address=127.0.0.1',
                                 '--bind-address=0.0.0.0') }}"
    _item: "{{ item|combine({'command': update}) }}"
gives
containers:
- command:
  - kube-controller-manager
  - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --bind-address=0.0.0.0
Update conf
conf_update: "{{ conf|combine({'spec': spec}) }}"
spec: "{{ conf.spec|combine({'containers': containers}) }}"
give
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0

conf_update:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=0.0.0.0
Write the update to the file
- copy:
    dest: /tmp/conf.yml
    content: |
      {{ conf_update|to_nice_yaml(indent=2) }}
gives
shell> cat /tmp/conf.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
Example of a complete playbook for testing
- hosts: localhost

  vars:
    conf_update: "{{ conf|combine({'spec': spec}) }}"
    spec: "{{ conf.spec|combine({'containers': containers}) }}"

  tasks:

    - include_vars:
        file: conf.yml
        name: conf
    - debug:
        var: conf

    - set_fact:
        containers: []
    - set_fact:
        containers: "{{ containers +
                        (update_candidate is all)|ternary([_item], [item]) }}"
      loop: "{{ conf.spec.containers|d([]) }}"
      vars:
        update_candidate:
          - item is contains 'command'
          - item.command is contains 'kube-controller-manager'
          - item.command|select('match', '--bind-address')|length > 0
        update: "{{ item.command|map('regex_replace',
                                     '--bind-address=127.0.0.1',
                                     '--bind-address=0.0.0.0') }}"
        _item: "{{ item|combine({'command': update}) }}"
    - debug:
        var: containers
    - debug:
        var: spec
    - debug:
        var: conf_update

    - copy:
        dest: /tmp/conf.yml
        content: |
          {{ conf_update|to_nice_yaml(indent=2) }}
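To write the result back to the real manifest instead of /tmp, the final copy task could target the original file. The destination below is only an assumption (the usual kubeadm static-pod location), and keeping a backup of the previous version is advisable:

    - copy:
        # Hypothetical target path; adjust to wherever the manifest actually lives
        dest: /etc/kubernetes/manifests/kube-controller-manager.yaml
        content: |
          {{ conf_update|to_nice_yaml(indent=2) }}
        backup: true
      become: true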

helm to iterate a list of lists

Given the follow values.yaml
configurations:
  endpoints:
    - firstEndpoint
      - firstPath
      - secondPath
    - secondEndpoint
      - thirdPath
      - fourthPath
I need to generate different resources from those values in the following way:
- name: firstEndpoint
  paths:
    - firstPath
    - secondPath
- name: secondEndpoint
  paths:
    - thirdPath
    - fourthPath
I could do this if "endpoints" were a map instead of a list/array, but in this case I need "endpoints" to be a list of endpoints and "paths" to be a list of paths for each endpoint.
How could this be achieved?
As David Maze said, your proposed endpoints: isn't valid YAML.
You may try this:
values.yaml
configurations:
  endpoints:
    - firstEndpoint:
        - firstPath
        - secondPath
    - secondEndpoint:
        - thirdPath
        - fourthPath
(Note the : after firstEndpoint and secondEndpoint)
templates/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  test: |-
    {{- range $_, $item := .Values.configurations.endpoints }}
    {{- range $k, $v := . }}
    - name: {{ $k }}
      path:
      {{- range $_, $path := $v }}
        - {{ $path }}
      {{- end }}
    {{- end }}
    {{- end }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  test: |-
    - name: firstEndpoint
      path:
        - firstPath
        - secondPath
    - name: secondEndpoint
      path:
        - thirdPath
        - fourthPath
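Another option (a sketch, not part of the original answer) is to make each endpoint an explicit map with name and paths keys. endpoints stays a list, and the template no longer has to range over single-key maps:

configurations:
  endpoints:
    - name: firstEndpoint
      paths:
        - firstPath
        - secondPath
    - name: secondEndpoint
      paths:
        - thirdPath
        - fourthPath

and in the template:

  test: |-
    {{- range .Values.configurations.endpoints }}
    - name: {{ .name }}
      paths:
      {{- range .paths }}
        - {{ . }}
      {{- end }}
    {{- end }}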

How to properly set up Health and Liveness Probes in Helm

As a stepping stone to a more complicated problem, I have been following this example step by step: https://blog.gopheracademy.com/advent-2017/kubernetes-ready-service/. The next step I have been trying to learn is using Helm charts to deploy the Golang service instead of a Makefile. I am trying to convert this deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
      - name: {{ .ServiceName }}
        image: docker.io/<my Dockerhub name>/{{ .ServiceName }}:{{ .Release }}
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8000
        resources:
          limits:
            cpu: 10m
            memory: 30Mi
          requests:
            cpu: 10m
            memory: 30Mi
      terminationGracePeriodSeconds: 30
to a helm deployment.yaml that looks like
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
{{- include "mychart.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "mychart.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8000
readinessProbe:
httpGet:
path: /readyz
port: 8000
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
However, when I run the Helm chart, the probes (which work perfectly fine when not using Helm) fail. Specifically, when describing the pod I get the error "Warning Unhealthy 16s (x3 over 24s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 503".
I have obviously set up the probes wrong in the Helm chart. How do I convert these probes from one system to the other?
Solution:
The solution I found was that the probes in the Helm chart needed initial time delays. When I replaced
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
with
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 15
Because the probes were running before the container was fully started, they automatically concluded that the service was unhealthy.
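A common follow-up (a sketch, not part of the original answer) is to make the probes configurable from values.yaml so the delays can be tuned per environment without editing the template; the livenessProbe and readinessProbe value keys below are assumptions:

# values.yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  initialDelaySeconds: 15

# templates/deployment.yaml, inside the container spec
          livenessProbe:
            {{- toYaml .Values.livenessProbe | nindent 12 }}
          readinessProbe:
            {{- toYaml .Values.readinessProbe | nindent 12 }}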
