Ansible Jinja2 template - Remove trailing whitespace

I am trying to load an Ansible Vault file into a Kubernetes ConfigMap YAML file using an Ansible Jinja2 template, but I am facing an issue with trailing whitespace being added at the end of the contents of the YAML file. This causes errors like the one below:
Vault format unhexlify error: Odd-length string
A sample of the Ansible template I am using is:
Playbook main.yml -
- name: display multiple files
  shell: cat /tmp/test.yml
  register: test
Ansible Jinja Template
apiVersion: v1
data:
  test.yml: |-
    {{ test.stdout.splitlines()|indent(4, false)|trim|replace(' ','') }}
kind: ConfigMap
metadata:
  name: test
  namespace: test-namespace
test.yml example:
$ANSIBLE_VAULT;1.1;AES256
62313365396662343061393464336163383764373764613633653634306231386433626436623361
6134333665353966363534333632666535333761666131620a663537646436643839616531643561
63396265333966386166373632626539326166353965363262633030333630313338646335303630
3438626666666137650a353638643435666633633964366338633066623234616432373231333331
6564
Output YAML created from Jinja Template is below
apiVersion: v1
data:
  test.yml:
$ANSIBLE_VAULT;1.1;AES256
62313365396662343061393464336163383764373764613633653634306231386433626436623361
6134333665353966363534333632666535333761666131620a663537646436643839616531643561
63396265333966386166373632626539326166353965363262633030333630313338646335303630
3438626666666137650a353638643435666633633964366338633066623234616432373231333331
6564
kind: ConfigMap
metadata:
  name: test
  namespace: test-namespace
Can you please let me know what I may be missing in my Ansible template file to fix the above trailing whitespace issue?

I am trying to load an Ansible Vault encrypted file into a ConfigMap using Jinja2 templating
Then you are solving the wrong problem; let the to_yaml filter do all that escaping for you, rather than trying to jinja your way through it.
- command: cat /tmp/test.yml
  register: tmp_test
- set_fact:
    cm_skeleton:
      apiVersion: v1
      data:
      kind: ConfigMap
      metadata:
        name: test
        namespace: test-namespace
- copy:
    content: >-
      {{ cm_skeleton | combine({"data":{"test.yml": tmp_test.stdout}}) | to_yaml }}
    dest: /tmp/test.configmap.yml
If you have other things you are trying to template into that ConfigMap, fine, you can still do so, but deserialize it into a dict so you can insert the literal contents of test.yml into the dict and then re-serialize using the to_yaml filter:
- set_fact:
    cm_skeleton: '{{ lookup("template", "cm.j2") | from_yaml }}'
- copy:
    content: '{{ cm_skeleton | combine({"data"...}) | to_yaml }}'
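Spelled out with the same data key as in the first snippet (illustrative only; cm.j2 is the skeleton template loaded above and tmp_test is the registered cat output), that might look like:
- set_fact:
    cm_skeleton: '{{ lookup("template", "cm.j2") | from_yaml }}'
- copy:
    content: >-
      {{ cm_skeleton | combine({"data": {"test.yml": tmp_test.stdout}}) | to_yaml }}
    dest: /tmp/test.configmap.yml
to_nice_yaml can be substituted for to_yaml if you prefer block-style, more readable output; either way the serializer handles the quoting and escaping, so no trailing-whitespace surgery is needed.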

Related

yq - is it possible to create a new key/value pair using an existing value whilst returning the whole yaml object?

Currently using yq (mikefarah/yq, version 4.27.2) and having trouble modifying an existing YAML file inline.
What I'm trying to do:
select the labels field
if labels contains a field named monitored_item, create a key named affectedCi and use the value of monitored_item
if labels does not contain a field named monitored_item, create a key named affectedCi with the value {{ $labels.affected_ci }}
return the whole yaml object with changes made inline
jobs-prometheusRule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: jobs
    prometheus: k8s
    role: alert-rules
  name: jobs-rules
  namespace: ops
spec:
  groups:
    - name: job-detailed
      rules:
        - alert: JobInstanceFailed
          annotations:
            description: Please check the status of the {{ $labels.app_name }} job {{ $labels.job_name }} as it has failed.
            summary: Job has failed
          expr: (process_failed{context="job_failed"} + on(app_name, job_name) group_left(severity)(topk by(app_name, job_name) (1, property{context="job_max_allowed_failures"}))) == 1
          for: 1m
          labels:
            monitored_item: '{{ $labels.app_name }} job {{ $labels.job_name }}'
            severity: '{{ $labels.severity }}'
I've scoured through the docs and Stack Overflow with no luck - below is as far as I've been able to get.
yq command:
yq '(.spec.groups[].rules[] | select(.labels | has("monitored_item")) | .labels.affectedCi) |= .labels.monitored_item ' jobs-prometheusRule.yaml
The output returns the whole yaml object with the field affectedCi: null instead of the specified values
Anyone able to help?
You could use with to update, and // for fall-back:
yq 'with(
  .spec.groups[].rules[] | select(.labels).labels;
  .affectedCi = .monitored_item // "{{ $labels.affected_ci }}"
)' jobs-prometheusRule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: jobs
    prometheus: k8s
    role: alert-rules
  name: jobs-rules
  namespace: ops
spec:
  groups:
    - name: job-detailed
      rules:
        - alert: JobInstanceFailed
          annotations:
            description: Please check the status of the {{ $labels.app_name }} job {{ $labels.job_name }} as it has failed.
            summary: Job has failed
          expr: (process_failed{context="job_failed"} + on(app_name, job_name) group_left(severity)(topk by(app_name, job_name) (1, property{context="job_max_allowed_failures"}))) == 1
          for: 1m
          labels:
            monitored_item: '{{ $labels.app_name }} job {{ $labels.job_name }}'
            severity: '{{ $labels.severity }}'
            affectedCi: '{{ $labels.app_name }} job {{ $labels.job_name }}'
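As a side note on the expression above: select(.labels).labels restricts the update to rules that actually have a labels map, and // is yq's alternative operator, which returns its left-hand side unless that evaluates to null or false. A minimal, illustrative sketch of the fall-back behaviour on a toy document:
$ echo 'a: 1' | yq '.a // "fallback"'
1
$ echo 'a: 1' | yq '.b // "fallback"'
fallback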

Call variable in Ansible

Apologies for this simple question, but I have tried various approaches without success.
This is my vars file
---
preprod:
  name: nginx
prod:
  name: apache
I am trying to pass the value of name based on the environment name the user provides (preprod, prod, etc.).
This is my template
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ env.name }}
  name: {{ env.name }}
  namespace: default
spec:
  selector:
    matchLabels:
      app: {{ env.name }}
  template:
    metadata:
      labels:
        app: {{ env.name }}
    spec:
      containers:
        - image: {{ env.name }}
          imagePullPolicy: Always
          name: {{ env.name }}
          resources: {}
However, when I try this using the following command:
ansible-playbook playbook.yaml -e env=preprod
I am getting the following error.
fatal: [localhost]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'str object' has no attribute 'name'"}
My expectation is that {{ env.name }} should have been replaced with the value of preprod.name, i.e. nginx in this case.
I want users to provide the value for env via -e on the command line. If I use preprod.name directly in the template it seems to work, but I don't want that.
I hope this clarifies what I am trying to do.
May I know what I am missing?
This error message indicates that the extra var passed on the command line with -e is a string, and not the dict (keyed by that name) that we load from the vars file.
I'm making up an example playbook, as you have not shown how you load your vars file. I'm using include_vars since it lets us name the variable to load the dict into.
# include vars with a name, so that we can access testvars[env]
- include_vars:
    file: testvars.yml
    name: testvars
- template:
    src: test/deployment.yaml.j2
    dest: /tmp/deployment.yaml
  vars:
    _name: "{{ testvars[env]['name'] }}"
With this approach, the prod and preprod keys will be available under testvars, and can be referenced with a variable such as env.
Then the template should use the _name variable, like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ _name }}
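If you'd rather not introduce the intermediate _name variable, the same dict lookup can also be done directly in the template (same assumptions as above: the vars file was loaded under testvars and env is supplied with -e), for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ testvars[env]['name'] }}
  name: {{ testvars[env]['name'] }}
  namespace: default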
Given the variables are in a place where the play can find them, e.g. group_vars/all. Optionally, add default_env:
shell> cat group_vars/all
preprod:
  name: nginx
prod:
  name: apache
default_env:
  name: lighttpd
Use vars lookup plugin to "Retrieve the value of an Ansible variable". See
shell> ansible-doc -t lookup vars
For example, the playbook
shell> cat playbook.yml
- hosts: test_11
  vars:
    env: "{{ lookup('vars', my_env|default('default_env')) }}"
    app: "{{ env.name }}"
  tasks:
    - debug:
        var: app
by default displays
shell> ansible-playbook playbook.yml
...
app: lighttpd
Now you can select the environment by declaring the variable my_env, e.g.
shell> ansible-playbook playbook.yml -e my_env=prod
...
app: apache
and
shell> ansible-playbook playbook.yml -e my_env=preprod
...
app: nginx

Ansible - How to include/import a playbook based on a when condition result

I have a folder with many sub-folders, and those sub-folders contain YAML files. Using the code below, I have found all the YAML files and registered them:
tasks:
  - name: Find .yml files
    find:
      paths: /temp/
      patterns: '*.yml,*.yaml'
      recurse: yes
    register: yaml_path_files
The content of one of the YAML files is below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
I want to check whether a YAML file contains kind: ConfigMap; such a file then needs to be passed to a role/another playbook as a parameter. I tried the below code, but it is not working:
- debug:
    var: item.path
  when: lookup('file', item.path) | from_yaml_all | list | selectattr('kind', '==', 'ConfigMap') | list | length > 0
  with_items: "{{ yaml_path_files.files }}"
I am facing the below error:
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'lookup('file',item.path ) | from_yaml | list | selectattr('kind', '==', 'ConfigMap') | list | length > 0' failed. The error was: error while evaluating conditional (lookup('file',{{ filename }} ) | from_yaml | list | selectattr('kind', '==', 'ConfigMap') | list | length > 0): 'dict object' has no attribute 'kind'\n\nThe error appears to be in '/cygdrive/c/senthil/Ansible/Ansibleusecase4/EKSartifactvalidationV2.yml': line 14, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n # with_items: "{{ yaml_path_files.files }}"\n - debug:\n ^ here\n"}
Kindly help me to achieve this in Ansible.
Preliminary note: Your question is missing a complete playbook to be 100% sure of the problem. Edit your question with more info, possibly with an MCVE, if this is not your exact issue.
Your find task looks for files on your remote target. The file lookup, like all other lookups (see the notes in the lookup plugins documentation), reads files locally on your Ansible controller. In other words, you're looking for a file which does not exist.
You have to fetch or slurp the files from the remote target to get their content.
Here is a quickly written, untested sample to put you on track. Please note in the last task that item.item.path is not a typo: item.item references the item of the loop that was running when the new var yaml_files_contents was registered (i.e. each yaml_path_files.files[*] from the preceding task; debug the full item to see for yourself).
- hosts: all
  tasks:
    - name: Find .yml files
      find:
        paths: /temp/
        patterns: '*.yml,*.yaml'
        recurse: yes
      register: yaml_path_files
    - name: Slurp file contents from remote
      slurp:
        src: "{{ item.path }}"
      loop: "{{ yaml_path_files.files }}"
      register: yaml_files_contents
    - name: Show path for files containing ConfigMaps
      debug:
        var: item.item.path
      loop: "{{ yaml_files_contents.results }}"
      when: item.content | b64decode | from_yaml_all | list | selectattr('kind', '==', 'ConfigMap') | list | length > 0
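To actually hand the matching files over to a role or another task file, as asked, the same condition can drive an include. A minimal, untested sketch along the same lines (process_configmap.yml and the configmap_path variable are hypothetical names):
- name: Process each file containing ConfigMaps
  include_tasks: process_configmap.yml
  vars:
    configmap_path: "{{ item.item.path }}"
  loop: "{{ yaml_files_contents.results }}"
  when: item.content | b64decode | from_yaml_all | list | selectattr('kind', '==', 'ConfigMap') | list | length > 0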

Metadata or labels for Ansible task YAML definition

Using the Ansible YAML format, I would like to add some metadata to task definitions (similar to the Kubernetes format):
tasks:
  - name: Boostrap
    block:
      - shell: "helm repo add {{ item.name }} {{ item.url }}"
        with_items:
          - {name: "elastic", url: "https://helm.elastic.co"}
          - {name: "stable", url: "https://kubernetes-charts.storage.googleapis.com/"}
    metadata:
      kind: utils
    tags:
      - always
  - name: Stack Djobi > Metrics
    include_tasks: "./tasks/metrics.yaml"
    tags:
      - service_metrics_all
    metadata:
      kind: stack
      stack: metrics
  - name: Stack Djobi > Tintin
    include_tasks: "./tintin/tasks.yaml"
    tags:
      - tintin
    metadata:
      kind: stack
      stack: tintin
Use case:
Add semantic information to YAML
Better debugging
Could be used to filter tasks like with tag (ansible-playbook ./playbook.yaml --filter kind=stack)
Is that possible? (currently ERROR! 'metadata' is not a valid attribute for a Block)
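There is no metadata attribute in the task or block schema, hence the ERROR! above: arbitrary keys are rejected by the parser. One near-equivalent, sketched here as an untested suggestion rather than an official metadata feature, is to carry such data under vars (which does accept arbitrary keys); task filtering would still have to go through --tags:
- name: Stack Djobi > Metrics
  include_tasks: "./tasks/metrics.yaml"
  tags:
    - service_metrics_all
  vars:
    metadata:
      kind: stack
      stack: metrics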

Why does to_nice_yaml produce quotes around the result in Jinja2 (Ansible)?

I have the following setup.
my_var has the following value.
ansible_facts:
  discovered_interpreter_python: /usr/bin/python
invocation:
  module_args:
    api_key: null
    api_version: v1
    ca_cert: null
    client_cert: null
    client_key: null
    context: null
    field_selectors: []
    host: null
    kind: Secret
    kubeconfig: null
    label_selectors: []
    password: null
    proxy: null
    username: null
    validate_certs: null
resources:
  - apiVersion: v1
    data:
      a: blah
      b: blah
      c: blah
    kind: Secret
    metadata:
      name: my_name
    type: Opaque
I am using this in a task with a template like this.
- name: "doh"
  k8s:
    state: present
    namespace: "doh"
    definition: "{{ lookup('template', 'template.j2') }}"
My template looks like this.
apiVersion: v1
data: "{{ my_var | json_query("resources[?metadata.name=='" + my_name + ".my_string." + some_var + "'].data") | first | to_nice_yaml }}"
kind: Secret
metadata:
  name: "blah"
type: Opaque
Unluckily, I get this as a result. It is a string, but it should be plain YAML.
apiVersion: v1
data: "a: blah <-- quote, why?
  b: blah
  c: blah
" <-- quote, why?
kind: Secret
metadata:
  name: "blah"
type: Opaque
Why am I getting quotes around my yaml in Jinja2 and how do I avoid it?
In your template, there are quotes around the yaml:
data: "{{ ... | to_nice_yaml }}"
These quotes are part of your template and will be part of the rendered output.
I think you're confusing Ansible syntax with jinja2 template syntax (likely based on this gotcha from the docs).
This gotcha does not apply to Jinja2 templates: everything that is not inside a Jinja2-delimited block ({%, {{, etc.) is carried over verbatim into the rendered output.
If you don't want the quotes in the rendered value, just take them out of the template.
I have the same issue; even if you leave the quotes off, to_nice_yaml adds them back in, since in YAML the value is technically a string.
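For completeness, one way to get block YAML instead of a quoted scalar is to drop the surrounding quotes and indent the rendered block under data with Jinja2's indent filter. A sketch under the same assumptions as the question, with the json_query expression abbreviated to a hypothetical query variable for readability:
apiVersion: v1
data:
  {{ my_var | json_query(query) | first | to_nice_yaml | indent(2) }}
kind: Secret
metadata:
  name: "blah"
type: Opaque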
