How to retrieve/parse results from ansible k8s modules - ansible

I am trying to retrieve and parse results from the Ansible k8s modules, using with_items over multiple fields. I get information from the k8s_info module and register it in a variable, list_object. From this variable I would then like to get the name of each object and its kind.
This is my variable output:
results:
- ansible_loop_var: item
  changed: false
  failed: false
  invocation:
    module_args:
      api: v1
      api_key: null
      api_version: v1
      ca_cert: null
      client_cert: null
      client_key: null
      context: null
      field_selectors: []
      host: null
      kind: ConfigMap
      kubeconfig: /home//.kubeconfig
      label_selectors:
      - app = appsmy
      name: null
      namespace: my
      password: null
      persist_config: null
      proxy: null
      username: null
      validate_certs: false
      wait: false
      wait_condition: null
      wait_sleep: 5
      wait_timeout: 120
  item: ConfigMap
  resources:
  - apiVersion: v1
    data:
      FOO: BAR
    kind: ConfigMap
    metadata:
      creationTimestamp: '2020-07-04T12:02:16Z'
      labels:
        app: myapp
      managedFields:
      - apiVersion: v1
        fieldsType: FieldsV1
        fieldsV1:
          f:data:
            .: {}
            f:FOO: {}
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
        manager: OpenAPI-Generator
        operation: Update
        time: '2022-07-04T12:04:16Z'
      name: myconfigmap
      namespace: myapp
      resourceVersion: '11'
      uid: 12312312312312
I try to extract the data like this:
- debug:
    msg: "{{ item.kind }} {{ item.metadata.name }}"
  loop: "{{ list_object.results.resources }}"
But I receive:
output:
'''list object'' has no attribute ''resources'''
Expected output:
kind: Configmap name: myconfigmap
Do you have any advice or tips on why it doesn't work?
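A minimal sketch of a loop that would match the structure shown above (assuming list_object is the registered k8s_info result): list_object.results is itself a list, one element per loop item, and each element carries its own resources list, so the nested lists need to be flattened before looping over the resources.
# Sketch only: flatten the per-item resources lists, then loop over the resources
- debug:
    msg: "{{ item.kind }} {{ item.metadata.name }}"
  loop: "{{ list_object.results | map(attribute='resources') | flatten }}"
The subelements filter would be an alternative if each resource also needs to stay paired with the original loop item (here, the queried kind).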

Related

Multiple expression RegEx in Ansible

Note: I have next to zero experience with Ansible.
I need to be able to conditionally modify the configuration of a Kubernetes cluster control plane service. To do this I need to be able to find a specific piece of information in the file and if its value matches a specific pattern, change the value to something else.
To illustrate, consider the following YAML file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
...
In this scenario, the line I'm interested in is the line containing --bind-address. If that field's value is "127.0.0.1", it needs to be changed to "0.0.0.0". If it's already "0.0.0.0", nothing needs to be done. (I could also approach it from the other direction: if it's not "0.0.0.0", then it needs to change to that.)
The initial thought that comes to mind is: just search for "--bind-address=127.0.0.1" and replace it with "--bind-address=0.0.0.0". Simple enough, eh? No, not that simple. What if, for some reason, there is another piece of configuration in this file that also matches that pattern? Which one is the right one?
The only way I can think of to ensure I find the right text to change is a multiple-expression RegEx match. Something along the lines of:
find spec:
if found, find containers: "within" or "under" spec:
if found, find - command: "within" or "under" containers: (Note: there can be more than one "command")
if found, find - kube-controller-manager "within" or "under" - command:
if found, find - --bind-address "within" or "under" - kube-controller-manager
if found, get the value after the =
if 127.0.0.1 change it to 0.0.0.0, otherwise do nothing
How could I write an Ansible playbook to perform these steps, in sequence and only if each step returns true?
Read the data from the file into a dictionary
- include_vars:
    file: conf.yml
    name: conf
gives
conf:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=127.0.0.1
Update the containers
- set_fact:
    containers: []
- set_fact:
    containers: "{{ containers +
                    (update_candidate is all)|ternary([_item], [item]) }}"
  loop: "{{ conf.spec.containers|d([]) }}"
  vars:
    update_candidate:
      - item is contains 'command'
      - item.command is contains 'kube-controller-manager'
      - item.command|select('match', '--bind-address')|length > 0
    update: "{{ item.command|map('regex_replace',
                                 '--bind-address=127.0.0.1',
                                 '--bind-address=0.0.0.0') }}"
    _item: "{{ item|combine({'command': update}) }}"
gives
containers:
- command:
  - kube-controller-manager
  - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --bind-address=0.0.0.0
Update conf
conf_update: "{{ conf|combine({'spec': spec}) }}"
spec: "{{ conf.spec|combine({'containers': containers}) }}"
gives
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
conf_update:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      component: kube-controller-manager
      tier: control-plane
    name: kube-controller-manager
    namespace: kube-system
  spec:
    containers:
    - command:
      - kube-controller-manager
      - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      - --bind-address=0.0.0.0
Write the update to the file
- copy:
    dest: /tmp/conf.yml
    content: |
      {{ conf_update|to_nice_yaml(indent=2) }}
gives
shell> cat /tmp/conf.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
Example of a complete playbook for testing
- hosts: localhost
  vars:
    conf_update: "{{ conf|combine({'spec': spec}) }}"
    spec: "{{ conf.spec|combine({'containers': containers}) }}"
  tasks:
    - include_vars:
        file: conf.yml
        name: conf
    - debug:
        var: conf
    - set_fact:
        containers: []
    - set_fact:
        containers: "{{ containers +
                        (update_candidate is all)|ternary([_item], [item]) }}"
      loop: "{{ conf.spec.containers|d([]) }}"
      vars:
        update_candidate:
          - item is contains 'command'
          - item.command is contains 'kube-controller-manager'
          - item.command|select('match', '--bind-address')|length > 0
        update: "{{ item.command|map('regex_replace',
                                     '--bind-address=127.0.0.1',
                                     '--bind-address=0.0.0.0') }}"
        _item: "{{ item|combine({'command': update}) }}"
    - debug:
        var: containers
    - debug:
        var: spec
    - debug:
        var: conf_update
    - copy:
        dest: /tmp/conf.yml
        content: |
          {{ conf_update|to_nice_yaml(indent=2) }}

yq - is it possible to create a new key/value pair using an existing value whilst returning the whole yaml object?

Currently using yq (mikefarah/yq, version 4.27.2) and having trouble modifying an existing YAML file inline.
What I'm trying to do:
select the labels field
if labels contains a field named monitored_item, create a key named affectedCi and use the value of monitored_item
if labels does not contain a field named monitored_item, create a key named affectedCi with the value {{ $labels.affected_ci }}
return the whole yaml object with changes made inline
jobs-prometheusRule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: jobs
    prometheus: k8s
    role: alert-rules
  name: jobs-rules
  namespace: ops
spec:
  groups:
  - name: job-detailed
    rules:
    - alert: JobInstanceFailed
      annotations:
        description: Please check the status of the {{ $labels.app_name }} job {{ $labels.job_name }} as it has failed.
        summary: Job has failed
      expr: (process_failed{context="job_failed"} + on(app_name, job_name) group_left(severity)(topk by(app_name, job_name) (1, property{context="job_max_allowed_failures"}))) == 1
      for: 1m
      labels:
        monitored_item: '{{ $labels.app_name }} job {{ $labels.job_name }}'
        severity: '{{ $labels.severity }}'
I've scoured the docs and Stack Overflow with no luck; below is as far as I've been able to get.
yq command:
yq '(.spec.groups[].rules[] | select(.labels | has("monitored_item")) | .labels.affectedCi) |= .labels.monitored_item ' jobs-prometheusRule.yaml
The output returns the whole YAML object, but with the field affectedCi: null instead of the specified values.
Anyone able to help?
You could use with to update, and // for the fall-back. (In your attempt, |= evaluates its right-hand side relative to the left-hand side target, .labels.affectedCi, so .labels.monitored_item resolves to null there.)
yq 'with(
.spec.groups[].rules[] | select(.labels).labels;
.affectedCi = .monitored_item // "{{ $labels.affected_ci }}"
)' jobs-prometheusRule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: jobs
    prometheus: k8s
    role: alert-rules
  name: jobs-rules
  namespace: ops
spec:
  groups:
  - name: job-detailed
    rules:
    - alert: JobInstanceFailed
      annotations:
        description: Please check the status of the {{ $labels.app_name }} job {{ $labels.job_name }} as it has failed.
        summary: Job has failed
      expr: (process_failed{context="job_failed"} + on(app_name, job_name) group_left(severity)(topk by(app_name, job_name) (1, property{context="job_max_allowed_failures"}))) == 1
      for: 1m
      labels:
        monitored_item: '{{ $labels.app_name }} job {{ $labels.job_name }}'
        severity: '{{ $labels.severity }}'
        affectedCi: '{{ $labels.app_name }} job {{ $labels.job_name }}'

Metadata or labels for Ansible task YAML definition

Using the Ansible YAML format, I would like to add some metadata to task definitions (similar to the Kubernetes format):
tasks:
  - name: Boostrap
    block:
      - shell: "helm repo add {{ item.name }} {{ item.url }}"
        with_items:
          - {name: "elastic", url: "https://helm.elastic.co"}
          - {name: "stable", url: "https://kubernetes-charts.storage.googleapis.com/"}
    metadata:
      kind: utils
    tags:
      - always
  - name: Stack Djobi > Metrics
    include_tasks: "./tasks/metrics.yaml"
    tags:
      - service_metrics_all
    metadata:
      kind: stack
      stack: metrics
  - name: Stack Djobi > Tintin
    include_tasks: "./tintin/tasks.yaml"
    tags:
      - tintin
    metadata:
      kind: stack
      stack: tintin
Use case:
Add semantic information to the YAML
Better debugging
Could be used to filter tasks, like with tags (ansible-playbook ./playbook.yaml --filter kind=stack)
Is that possible? (Currently this fails with ERROR! 'metadata' is not a valid attribute for a Block.)
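One possible workaround, sketched here rather than an official feature (as the error shows, Ansible has no metadata attribute for tasks or blocks), is to hang the extra information off vars, which is a valid keyword on both tasks and blocks, and keep selection itself on the built-in tags mechanism:
tasks:
  - name: Stack Djobi > Metrics
    include_tasks: "./tasks/metrics.yaml"
    tags:
      - service_metrics_all
      - kind_stack          # encode the "kind" as a tag so it can be selected
    vars:
      metadata:             # arbitrary data: ignored by Ansible's task schema,
        kind: stack         # but available as a variable inside the included tasks
        stack: metrics
ansible-playbook has no --filter option, so filtering would still have to go through --tags/--skip-tags (for example --tags kind_stack with the tag sketched above).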

Why does to_nice_yaml produce quotes around the result in Jinja2 (Ansible)?

I have the following setup.
my_var has the following value.
ansible_facts:
  discovered_interpreter_python: /usr/bin/python
invocation:
  module_args:
    api_key: null
    api_version: v1
    ca_cert: null
    client_cert: null
    client_key: null
    context: null
    field_selectors: []
    host: null
    kind: Secret
    kubeconfig: null
    label_selectors: []
    password: null
    proxy: null
    username: null
    validate_certs: null
resources:
- apiVersion: v1
  data:
    a: blah
    b: blah
    c: blah
  kind: Secret
  metadata:
    name: my_name
  type: Opaque
I am using this in a task with a template like this.
- name: "doh"
k8s:
state: present
namespace: "doh"
definition: "{{ lookup('template', 'template.j2') }}"
My template looks like this.
apiVersion: v1
data: "{{ my_var | json_query("resources[?metadata.name=='" + my_name + ".my_string." + some_var + "'].data") | first | to_nice_yaml }}"
kind: Secret
metadata:
  name: "blah"
type: Opaque
Unfortunately I get this as a result. It is a string, but it should be plain YAML.
apiVersion: v1
data: "a: blah   <-- quote, why?
  b: blah
  c: blah
  "              <-- quote, why?
kind: Secret
metadata:
  name: "blah"
type: Opaque
Why am I getting quotes around my yaml in Jinja2 and how do I avoid it?
In your template, there are quotes around the yaml:
data: "{{ ... | to_nice_yaml }}"
These quotes are part of your template and will be part of the rendered output.
I think you're confusing Ansible syntax with jinja2 template syntax (likely based on this gotcha from the docs).
That gotcha does not apply to Jinja2 templates: everything that is not inside a Jinja2 delimiter ({%, {{, etc.) is copied verbatim into the rendered output.
If you don't want the quotes in the rendered value, just take them out of the template.
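As a sketch of what that could look like with the template above (same my_var, my_name and some_var; the indent filter is just one way to keep the rendered block valid YAML), the data can be emitted as a nested mapping rather than a quoted scalar:
apiVersion: v1
data:
  {{ my_var | json_query("resources[?metadata.name=='" + my_name + ".my_string." + some_var + "'].data") | first | to_nice_yaml | indent(2) }}
kind: Secret
metadata:
  name: "blah"
type: Opaque
indent(2) indents every line after the first, so the remaining keys line up under data: without any surrounding quotes.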
I have the same issue: even if you leave the quotes off, to_nice_yaml adds them back in, since in YAML they are technically a string.

Multiple json_query in Ansible?

I have the following yaml file.
resources:
- apiVersion: v1
  kind: Deployment
  metadata:
    labels:
      app: test
    name: test-cluster-operator
    namespace: destiny001
  spec:
    selector:
      matchLabels:
        name: test-cluster-operator
        test.io/kind: cluster-operator
    strategy:
      type: Recreate
    template:
      metadata:
        labels:
          name: test-cluster-operator
          test.io/kind: cluster-operator
      spec:
        containers:
        - args:
          - /path/test/bin/cluster_operator_run.sh
          env:
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthy
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 1
          name: test-cluster-operator
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: '1'
              memory: 256Mi
            requests:
              cpu: 200m
              memory: 256Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /var/data
            name: data-cluster-operator
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: test-cluster-operator
        serviceAccountName: test-cluster-operator
        terminationGracePeriodSeconds: 30
        volumes:
        - name: data-cluster-operator
          persistentVolumeClaim:
            claimName: data-cluster-operator
I am trying to get the value of the env variable called MY_NAMESPACE.
This is what I tried in Ansible to get to the env tree path.
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | json_query(\"containers[?name=='test-cluster-operator'].env\") }}"
- name: "debug"
debug:
msg: "{{ myresult }}"
This produces an empty list; the first json_query on its own, however, works well.
How do I use json_query correctly in this case?
Can I achieve this with just one json_query?
EDIT:
I seem to be closer to a solution, but the result is a list and not a string, which I find annoying.
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | json_query(\"[].containers[?name=='test-cluster-operator']\") | json_query(\"[].env[?name=='MY_NAMESPACE'].name\") }}"
This prints - - MY_NAMESPACE instead of just MY_NAMESPACE.
Do I have to use the first filter every time after json_query? I know for sure that there is only one containers element. I don't understand why json_query returns a list.
This finally works, but I have no idea whether it's the correct way to do it.
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec\") | first | json_query(\"containers[?name=='test-cluster-operator']\") | first | json_query(\"env[?name=='MY_NAMESPACE'].valueFrom \") | first }}"
json_query uses JMESPath, and JMESPath always returns a list. This is why your first example isn't working: the first query returns a list, but the second query is trying to address a key. You've corrected that in your second attempt with [].
You're also missing the JMESPath pipe expression, |, which works pretty much as you might expect: the result of the first query can be piped into a new one. Note that this is separate from Ansible filters, which use the same character.
This query:
resources[?metadata.name=='test-cluster-operator'].spec.template.spec | [].containers[?name=='test-cluster-operator'][].env[].valueFrom
Should give you the following output:
[
    {
        "fieldRef": {
            "apiVersion": "v1",
            "fieldPath": "metadata.namespace"
        }
    }
]
Your task should look like this:
- name: "set test fact"
set_fact:
myresult: "{{ yaml_file_variable | json_query(\"resources[?metadata.name=='test-cluster-operator'].spec.template.spec | [].containers[?name=='test-cluster-operator'][].env[].valueFrom\") | first }}"
To answer your other question: yes, you'll need the first filter. As mentioned, JMESPath will always return a list, so if you just want the value of a key you'll need to pull it out.
