Note: I have next to zero experience with Ansible.
I need to be able to conditionally modify the configuration of a Kubernetes cluster control plane service. To do this I need to find a specific piece of information in the file and, if its value matches a specific pattern, change the value to something else.
To illustrate, consider the following YAML file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
...
In this scenario, the line I'm interested in is the one containing --bind-address. If that field's value is "127.0.0.1", it needs to be changed to "0.0.0.0". If it's already "0.0.0.0", nothing needs to be done. (I could also approach it from the other direction: if it's not "0.0.0.0", it needs to change to that.)
The initial thought that comes to mind is: just search for "--bind-address=127.0.0.1" and replace it with "--bind-address=0.0.0.0". Simple enough, eh? No, not that simple. What if, for some reason, there is another piece of configuration in this file that also matches that pattern? Which one is the right one?
The only way I can think of to ensure I find the right text to change is a multiple-expression regex match. Something along the lines of:
find spec:
if found, find containers: "within" or "under" spec:
if found, find - command: "within" or "under" containers: (Note: there can be more than one "command")
if found, find - kube-controller-manager "within" or "under" - command:
if found, find - --bind-address "within" or "under" - kube-controller-manager
if found, get the value after the =
if 127.0.0.1 change it to 0.0.0.0, otherwise do nothing
How could I write an Ansible playbook to perform these steps, in sequence and only if each step returns true?
Read the data from the file into a dictionary
- include_vars:
    file: conf.yml
    name: conf
gives
conf:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=127.0.0.1
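As a hedged aside (not part of the original answer): include_vars reads conf.yml on the controller. If the manifest lives on the managed node instead, one way to build the same conf dictionary is slurp plus the from_yaml filter; the path below is only an assumption about a kubeadm-style layout.
- slurp:
    src: /etc/kubernetes/manifests/kube-controller-manager.yaml  # assumed path
  register: manifest_raw
- set_fact:
    # slurp returns the file content base64-encoded
    conf: "{{ manifest_raw.content | b64decode | from_yaml }}"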
Update the containers
- set_fact:
    containers: []
- set_fact:
    containers: "{{ containers +
                    (update_candidate is all)|ternary([_item], [item]) }}"
  loop: "{{ conf.spec.containers|d([]) }}"
  vars:
    update_candidate:
      - item is contains 'command'
      - item.command is contains 'kube-controller-manager'
      - item.command|select('match', '--bind-address')|length > 0
    update: "{{ item.command|map('regex_replace',
                                 '--bind-address=127.0.0.1',
                                 '--bind-address=0.0.0.0') }}"
    _item: "{{ item|combine({'command': update}) }}"
gives
containers:
- command:
  - kube-controller-manager
  - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --bind-address=0.0.0.0
Update conf
conf_update: "{{ conf|combine({'spec': spec}) }}"
spec: "{{ conf.spec|combine({'containers': containers}) }}"
give
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
conf_update:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=0.0.0.0
Write the update to the file
- copy:
    dest: /tmp/conf.yml
    content: |
      {{ conf_update|to_nice_yaml(indent=2) }}
gives
shell> cat /tmp/conf.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
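A hedged variation, not taken from the question: on a kubeadm-style cluster the live static pod manifest could be targeted directly (the path below is an assumption). copy only reports a change when the rendered content differs, and the kubelet picks up edits to files under its static pod path.
- copy:
    dest: /etc/kubernetes/manifests/kube-controller-manager.yaml  # assumed path
    content: |
      {{ conf_update|to_nice_yaml(indent=2) }}
    backup: true  # keep a copy of the previous manifest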
Example of a complete playbook for testing
- hosts: localhost
  vars:
    conf_update: "{{ conf|combine({'spec': spec}) }}"
    spec: "{{ conf.spec|combine({'containers': containers}) }}"
  tasks:
    - include_vars:
        file: conf.yml
        name: conf
    - debug:
        var: conf
    - set_fact:
        containers: []
    - set_fact:
        containers: "{{ containers +
                        (update_candidate is all)|ternary([_item], [item]) }}"
      loop: "{{ conf.spec.containers|d([]) }}"
      vars:
        update_candidate:
          - item is contains 'command'
          - item.command is contains 'kube-controller-manager'
          - item.command|select('match', '--bind-address')|length > 0
        update: "{{ item.command|map('regex_replace',
                                     '--bind-address=127.0.0.1',
                                     '--bind-address=0.0.0.0') }}"
        _item: "{{ item|combine({'command': update}) }}"
    - debug:
        var: containers
    - debug:
        var: spec
    - debug:
        var: conf_update
    - copy:
        dest: /tmp/conf.yml
        content: |
          {{ conf_update|to_nice_yaml(indent=2) }}
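To try it, assuming the playbook above is saved as pb.yml next to conf.yml (both filenames are assumptions):
shell> ansible-playbook pb.yml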
Related
This is my Job configuration with parallelism: 2:
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  parallelism: 2
Two pods will come up for the same Job. How do I give two different parameters to the two pods using Helm charts?
I have tried to do it like this.
Below is my Helm template for the Job.
{{- range $index := until (.Values.replicaCount | int) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: alpine-base-{{ index $.Values.ip_range $index }}
  labels:
    app: alpine-app
    chart: alpine-chart
  annotations:
    "helm.sh/hook-delete-policy": "hook-succeeded"
spec:
  ttlSecondsAfterFinished: 60
  backoffLimit: 0
  completionMode: Indexed
  parallelism: {{ $.Values.replicaCount }}
  completions: {{ $.Values.replicaCount }}
  template:
    metadata:
      labels:
        app_type: base
    spec:
      containers:
      - name: alpine-base
        image: "{{ $.Values.global.imageRegistry }}/{{ $.Values.global.image.repository }}:{{ $.Values.global.image.tag }}"
        imagePullPolicy: "{{ $.Values.global.imagePullPolicy }}"
        volumeMounts:
        - mountPath: /scripts
          name: scripts-vol
        env:
        - name: IP
          value: {{ index $.Values.slave_producer.ip_range $index }}
        - name: SLEEP_TIME
          value: {{ $.Values.slave_producer.sleepTime }}
        command: ["/bin/bash"]
        args: ["/scripts/slave_trigger_script.sh"]
        ports:
        - containerPort: 1093
        - containerPort: 5000
        - containerPort: 3445
        resources:
          requests:
            memory: "1024Mi"
            cpu: "1024m"
          limits:
            memory: "1024Mi"
            cpu: "1024m"
      volumes:
      - name: scripts-vol
        configMap:
          name: scripts-configmap
      restartPolicy: Never
{{- end }}
But both pods get the same IP in the environment variable.
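As a hedged sketch only (not the question's code): since the spec above already uses completionMode: Indexed, each pod's completion index can be exposed through the downward API and used inside the container to pick its own value, for example:
env:
  - name: JOB_COMPLETION_INDEX
    valueFrom:
      fieldRef:
        # annotation set by the Job controller for Indexed Jobs
        fieldPath: metadata.annotations['batch.kubernetes.io/job-completion-index']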
I need to run multiple containers in one cronjob execution. Currently I have the following cronjob.yaml template:
jobTemplate:
  spec:
    template:
      metadata:
        labels:
          app: {{ .Release.Name }}
          cron: {{ .Values.filesjob.jobName }}
      spec:
        containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          env:
          - name: FILE_MASK
            value: "{{ .Values.filesjob.fileMask }}"
          - name: ID
            value: "{{ .Values.filesjob.id }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          name: {{ .Values.filesjob.jobName }}
          volumeMounts:
          - mountPath: /data
            name: path-to-clean
          - name: path-logfiles
            mountPath: /log
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          env:
          - name: FILE_MASK
            value: "{{ .Values.filesjob.fileMask }}"
          - name: ID
            value: "{{ .Values.filesjob.id }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          name: {{ .Values.filesjob.jobName2 }}
          volumeMounts:
          - mountPath: /data
            name: path2-to-clean
          - name: path2-logfiles
            mountPath: /log
The above generates a CronJob that executes two containers, passing different env variables to each. Can I generate the same thing from values.yaml by iterating over a variable?
I was able to solve this based on the example from this article: https://nikhils-devops.medium.com/helm-chart-for-kubernetes-cronjob-a694b47479a
Here is my template:
{{- if .Values.cleanup.enabled -}}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: registry-cleanup-job
  namespace: service
  labels:
    {{- include "some.labels" . | nindent 4 }}
spec:
  schedule: {{ .Values.cleanup.schedule }}
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          {{- range .Values.cleanup.crons }}
          - name: {{ .name | quote }}
            image: {{ $.Values.cleanup.imageName }}
            imagePullPolicy: {{ $.Values.cleanup.pullPolicy }}
            args:
            - {{ .command }}
          {{- end}}
          restartPolicy: Never
{{- end }}
And related values:
cleanup:
  enabled: true
  schedule: "0 8 * * 5"
  imageName: digitalocean/doctl:1.60.0
  pullPolicy: IfNotPresent
  crons:
    - command0:
      name: "cleanup0"
      command: command0
    - command1:
      name: "cleanup1"
      command: command1
    - command2:
      name: "cleanup2"
      command: command2
    - command3:
      name: "cleanup3"
      command: command3
    - command4:
      name: "cleanup4"
      command: command4
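A quick way to preview what this range renders, assuming the chart lives in the current directory (the chart path is an assumption):
shell> helm template .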
Using Ansible's YAML format, I would like to add some metadata to a task definition (similar to the Kubernetes format):
tasks:
  - name: Bootstrap
    block:
      - shell: "helm repo add {{ item.name }} {{ item.url }}"
        with_items:
          - {name: "elastic", url: "https://helm.elastic.co"}
          - {name: "stable", url: "https://kubernetes-charts.storage.googleapis.com/"}
    metadata:
      kind: utils
    tags:
      - always
  - name: Stack Djobi > Metrics
    include_tasks: "./tasks/metrics.yaml"
    tags:
      - service_metrics_all
    metadata:
      kind: stack
      stack: metrics
  - name: Stack Djobi > Tintin
    include_tasks: "./tintin/tasks.yaml"
    tags:
      - tintin
    metadata:
      kind: stack
      stack: tintin
Use case:
Add semantic information to YAML
Better debugging
Could be used to filter tasks, much like tags (ansible-playbook ./playbook.yaml --filter kind=stack)
Is that possible? (currently ERROR! 'metadata' is not a valid attribute for a Block)
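A hedged workaround sketch, since arbitrary task attributes are rejected (so this is not a real metadata keyword): the vars keyword accepts any mapping on a task or block, so the same information can be carried there and inspected with when or debug, for example:
- name: Stack Djobi > Metrics
  include_tasks: "./tasks/metrics.yaml"
  tags:
    - service_metrics_all
  vars:
    metadata:        # carried as an ordinary variable, not a task attribute
      kind: stack
      stack: metrics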
I have the following yaml file.
resources:
  - apiVersion: v1
    kind: Deployment
    metadata:
      labels:
        app: test
      name: test-cluster-operator
      namespace: destiny001
    spec:
      selector:
        matchLabels:
          name: test-cluster-operator
          test.io/kind: cluster-operator
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            name: test-cluster-operator
            test.io/kind: cluster-operator
        spec:
          containers:
            - args:
                - /path/test/bin/cluster_operator_run.sh
              env:
                - name: MY_NAMESPACE
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
              imagePullPolicy: IfNotPresent
              livenessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthy
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 30
                successThreshold: 1
                timeoutSeconds: 1
              name: test-cluster-operator
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /ready
                  port: 8080
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 30
                successThreshold: 1
                timeoutSeconds: 1
              resources:
                limits:
                  cpu: '1'
                  memory: 256Mi
                requests:
                  cpu: 200m
                  memory: 256Mi
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /var/data
                  name: data-cluster-operator
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          serviceAccount: test-cluster-operator
          serviceAccountName: test-cluster-operator
          terminationGracePeriodSeconds: 30
          volumes:
            - name: data-cluster-operator
              persistentVolumeClaim:
                claimName: data-cluster-operator
I am trying to get the value of the env variable called MY_NAMESPACE.
This is what I tried in Ansible to get to the env tree path.
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | json_query(\"containers[?name=='test-cluster-operator'].env\") }}"
- name: "debug"
debug:
msg: "{{ myresult }}"
This produces an empty list; however, the first json_query on its own works well.
How do I use json_query correctly in this case?
Can I achieve this with just one json_query?
EDIT:
I seem to be closer to a solution, but the result is a list and not a string, which I find annoying.
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | json_query(\"[].containers[?name=='test-cluster-operator']\") | json_query(\"[].env[?name=='MY_NAMESPACE'].name\") }}"
This prints - - MY_NAMESPACE instead of just MY_NAMESPACE.
Do I have to use the first filter every time after json_query? I know for sure that there is only one containers element. I don't understand why json_query returns a list.
This is finally working, but I have no idea whether it's the correct way to do it.
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | first | json_query(\"containers[?name=='test-cluster-operator']\") | first | json_query(\"env[?name=='MY_NAMESPACE'].valueFrom \") | first }}"
json_query uses JMESPath, and JMESPath always returns a list. This is why your first example isn't working: the first query returns a list, but the second is trying to query a key. You've corrected that in the second attempt with [].
You're also missing the JMESPath pipe expression |, which works pretty much as you might expect: the result of the first query is piped into a new one. Note that this is separate from Ansible filters, which use the same character.
This query:
resources[?metadata.name=='test-cluster-operator'].spec.template.spec | [].containers[?name=='test-cluster-operator'][].env[].valueFrom
Should give you the following output:
[
    {
        "fieldRef": {
            "apiVersion": "v1",
            "fieldPath": "metadata.namespace"
        }
    }
]
Your task should look like this:
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec | [].containers[?name=='test-cluster-operator'][].env[].valueFrom\") | first }}"
To answer your other question: yes, you'll need the first filter. As mentioned, JMESPath always returns a list, so if you just want the value of a key you'll need to pull it out.
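For comparison, a hedged alternative (not part of the answer above): the same element can be reached with core Jinja2 filters instead of json_query; these also return lists, which is why first shows up here as well.
- set_fact:
    # walk resources -> matching container -> env entry, then keep only valueFrom
    myresult: "{{ (yaml_file_variable.resources
                   | selectattr('metadata.name', '==', 'test-cluster-operator')
                   | first).spec.template.spec.containers
                  | selectattr('name', '==', 'test-cluster-operator')
                  | map(attribute='env') | first
                  | selectattr('name', '==', 'MY_NAMESPACE')
                  | map(attribute='valueFrom') | first }}"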
I am currently using a k8s lookup to search for resources with a certain label attached to them (in this case branch). This branch is a variable that changes regularly. The problem is that I can't seem to find the correct syntax for adding a variable into the lookup, since the lookup itself already uses Jinja syntax.
This works:
- name: delete the replicaset
  k8s:
    state: absent
    api_version: v1
    kind: ReplicaSet
    namespace: default
    name: "{{ replicaset.metadata.name }}"
    kubeconfig: /var/lib/awx/.kube/config
  vars:
    replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector='branch=testing' ) }}"
However, when trying to use the branch variable, nothing I try seems to work. Here is one example that does not work:
- name: delete the replicaset
  k8s:
    state: absent
    api_version: v1
    kind: ReplicaSet
    namespace: default
    name: "{{ replicaset.metadata.name }}"
    kubeconfig: /var/lib/awx/.kube/config
  vars:
    replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector='branch={{ branch }}' ) }}"
You can either add a helper variable:
- name: delete the replicaset
  k8s:
    state: absent
    api_version: v1
    kind: ReplicaSet
    namespace: default
    name: "{{ replicaset.metadata.name }}"
    kubeconfig: /var/lib/awx/.kube/config
  vars:
    replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector=my_selector ) }}"
    my_selector: branch={{ branch }}
or use Jinja2 string concatenation:
replicaset: "{{ lookup('k8s', kind='ReplicaSet', namespace='default', label_selector='branch='+branch ) }}"