Use a dynamic set of variables for a template - Ansible

My objective is to write a generic role for nginx configuration.
I want to allow defining many snippets based on templates. Depending on the template, the set of variables can differ.
The list of templates is defined in host_vars:
nginx_templated_snippets:
  - template: whitelist_open.j2
    file: whitelist_iprestrict.conf
    options:
      allow: "{{ default_whitelist + [ansible_facts.default_ipv4.address] }}"
The role looks like this:
- name: Add templated snippets
  template:
    src: "{{ item.template }}"
    dest: "/etc/nginx/snippets/{{ item.file }}"
    owner: root
    group: root
    mode: '0644'
  vars:
    "{'{{ nginx_template_options }}'}"
  loop: "{{ nginx_templated_snippets }}"
  notify: Test nginx configuration and reload
In this example, the variable allow is required by the template whitelist_open.j2, but depending on the template the variable names can change.
I didn't manage to set a dynamic set of variables:
ERROR! Vars in a Task must be specified as a dictionary, or a list of dictionaries
I tried many syntaxes, but I'm not sure this is even possible.
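A possible workaround (sketched here; snippet_options is a made-up fixed name) would be to expose the per-snippet options under a single fixed variable and let each template read the keys it needs, e.g. {{ snippet_options.allow }} in whitelist_open.j2:

- name: Add templated snippets
  template:
    src: "{{ item.template }}"
    dest: "/etc/nginx/snippets/{{ item.file }}"
    owner: root
    group: root
    mode: '0644'
  vars:
    # snippet_options is a placeholder name; each template reads the keys it expects,
    # e.g. {{ snippet_options.allow }} in whitelist_open.j2
    snippet_options: "{{ item.options | default({}) }}"
  loop: "{{ nginx_templated_snippets }}"
  notify: Test nginx configuration and reload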


Load Ansible vars in specific tasks

I feel I must be missing this answer as it seems obvious but I've read a number of posts and have not been able to get this working.
Currently I am loading and then templating vars from files depending on inventory hostnames, like so:
- name: load unique dev vars from file
  include_vars:
    file: ~/ansible/env-dev.yml
  when: inventory_hostname in groups[ 'devs' ]

- name: load unique prod vars from file
  include_vars:
    file: ~/ansible/env-prod.yml
  when: inventory_hostname == 'prod'

- name: copy .env dev file with templated vars
  ansible.builtin.template:
    src: ~/ansible/env-dev.j2
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'
  when: inventory_hostname in groups[ 'devs' ]
This works fine, but ultimately it requires me to create a ton of .yml files when I would rather include some variables in certain steps instead.
Is it possible to load vars for a specific task? I've tried a number of solutions but haven't been able to make it work yet. See below for one method I tried using vars at the end of the task.
- name: copy .env dev file with templated vars
  ansible.builtin.template:
    src: ~/ansible/env-dev.j2
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'
  when: inventory_hostname in groups[ 'devs' ]
  vars:
    NODE_ENV: development
    PORT: 66
The key to organizing your Ansible code is to rely on group vars.
This feature lets you load variables according to the group a host belongs to. There are several ways to do that; one of the clearest is to use YAML files named after each group in the group_vars folder (plus an all.yaml matching all hosts). Ansible will pick them up automatically for you, so you can get rid of your first two include_vars tasks. You can combine them with variables specific to the role and/or the playbook, so you end up with a set of variables coming from the host (the target) and from the role/playbook (the task to achieve).
To replace the hardcoded src: ~/ansible/env-dev.j2 you could, for example, define a variable in each group file.
---
# dev.yaml
template_name: "env-dev.j2"
---
# prod.yaml
template_name: "env-prod.j2"
And then use it in your playbook/role: src: "{{ template_name }}".
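For illustration, a minimal sketch of how the task from the question could look once the per-group template_name is in place (assuming the dev.yaml and prod.yaml files above live in the inventory's group_vars folder):

- name: copy .env file with templated vars
  ansible.builtin.template:
    # template_name comes from group_vars (dev.yaml or prod.yaml above)
    src: "~/ansible/{{ template_name }}"
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'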

Ansible - How to use lookup on remote servers

I have a file called values.txt on each server, at /tmp/values.txt, and it has some values. I have a Jinja template and I am substituting some values from values.txt.
But the problem is that when I use the lookup plugin it looks on the controller and not on the remote servers.
Here's what I tried:
- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ lookup('file', '/tmp/values.txt').split('\n') }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja|zip(value_from_file)) }}"
How can I run this task against the remote file? Or is there another workaround to substitute the values into the Jinja template?
PS: I can't fetch values.txt to the controller because the content of values.txt on each server is slightly different.
Can someone please help me?
The lookup finds the file (or stream) on the controller. If your file is on the remote node, you can't use the lookup.
Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all templating, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources.
Try cat on the file instead:
- name: read the values.txt
  shell: cat /tmp/values.txt
  register: data

- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ data.stdout_lines }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja|zip(value_from_file)) }}"
Use slurp:
- name: read remote values.txt
  register: values
  ansible.builtin.slurp:
    src: /tmp/values.txt

- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ (values.content | b64decode).split('\n') }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja|zip(value_from_file)) }}"
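For context, a hypothetical excerpt of how etcd.service.j2 might consume my_dict built as above (the unit file contents are illustrative only; the keys come from vars_from_jinja):

# etcd.service.j2 (hypothetical excerpt)
[Unit]
Description=etcd on {{ my_dict['SELF_NAME'] }}

[Service]
ExecStart=/usr/local/bin/etcd \
  --name {{ my_dict['SELF_NAME'] }} \
  --initial-advertise-peer-urls http://{{ my_dict['SELF_IP'] }}:2380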

Ansible: how to load variables from another role, without executing it?

I have a task to create a one-off cleanup playbook which uses variables from a role, but I don't need to execute that role. Is there a way to provide a role name and get everything from its defaults and vars, without hardcoding paths to it? I also want vars defined in group_vars or host_vars to take higher precedence than the ones included from the role.
Example task:
- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ kafka_service_name }}"
    - "{{ zookeeper_service_name }}"
  ignore_errors: true
where kafka_service_name and zookeeper_service_name are defined in the kafka role, but may also be present in, for example, group_vars.
I came up with a fairly hacky solution, which looks like this:
- name: save old host_vars
  set_fact:
    old_host_vars: "{{ hostvars[inventory_hostname] }}"

- name: load kafka role variables
  include_vars:
    dir: "{{ item.root }}/{{ item.path }}"
  vars:
    params:
      files:
        - kafka
      paths: "{{ ['roles'] + lookup('config', 'DEFAULT_ROLES_PATH') }}"
  with_filetree: "{{ lookup('first_found', params) }}"
  when: item.state == 'directory' and item.path in ['defaults', 'vars']

- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ old_host_vars['kafka_service_name'] | default(kafka_service_name) }}"
    - "{{ old_host_vars['zookeeper_service_name'] | default(zookeeper_service_name) }}"
The include_vars task finds the first kafka role folder in ./roles and in the default role locations, then includes files from the defaults and vars directories, in the correct order.
I had to save the old hostvars because include_vars has higher precedence than anything except extra vars (as per the Ansible docs), and then use the included var only if old_host_vars returned nothing.
If you don't have a requirement to load group_vars, include_vars alone works quite nicely in one task and looks much better.
UPD: Here is the regexp that I used to rewrite the vars for the old_host_vars hack.
This was tested with VS Code search/replace, but can be adjusted for any other editor.
Search for vars that start with kafka_:
\{\{ (kafka_\w*) \}\}
Replace with:
{{ old_host_vars['$1'] | default($1) }}
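Concretely (using the service task from the question as the example), the replacement turns a plain reference into the fallback form:

# before
name: "{{ kafka_service_name }}"
# after
name: "{{ old_host_vars['kafka_service_name'] | default(kafka_service_name) }}"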

Ansible items in separate files

Is it possible to have a few .yml files and then read them as separate items for a task?
Example:
- name: write templates
  template: src=template.j2 dest=/some/path
  with_items: ./configs/*.yml
I have found a pretty elegant solution:
---
- hosts: localhost
  vars:
    my_items: "{{ lookup('fileglob', './configs/*.yml', wantlist=True) }}"
  tasks:
    - name: write templates
      template: src=template.j2 dest=/some/path/{{ (item | from_yaml).name }}
      with_file: "{{ my_items }}"
And then in the template you have to add {% set item = (item | from_yaml) %} at the beginning.
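To make the (item | from_yaml).name part concrete, one of the config files could look like this hypothetical example (only the name key is required by the dest path above; the other key is purely illustrative):

# ./configs/site_a.yml (hypothetical example)
name: site_a.conf
listen_port: 8080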
Well, yes and no. You can loop over files and even use their content as variables, but the template module does not take parameters. There is an ugly workaround using an include statement: includes do take parameters, and if the template task is inside the included file, it will have access to them.
Something like this should work:
- include: other_file.yml parameters={{ lookup('file', item) | from_yaml }}
  with_fileglob: ./configs/*.yml
And then in other_file.yml the template task:
- name: write template
  template: src=template.j2 dest=/some/path
The ugly part here, besides the additional include, is that the include statement only takes parameters in the key=value format. That's what you see in the task above as parameters=.... parameters here has no special meaning; it is just the name of the variable under which the content of the file will be available inside the include.
So if your vars files have a variable foo defined, you would be able to access it in the template as {{ parameters.foo }}.
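For example, with a hypothetical ./configs/example.yml containing:

foo: bar

the template can render the value as {{ parameters.foo }}.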

Tag Name from EC2-group in Ansible

This is another question coming from the following post...:
loops over the registered variable to inspect the results in ansible
So basically having:
- name: EC2Group | Creating an EC2 Security Group inside the Mentioned VPC
  local_action:
    module: ec2_group
    name: "{{ item.sg_name }}"
    description: "{{ item.sg_description }}"
    region: "{{ vpc_region }}"  # Change the AWS region here
    vpc_id: "{{ vpc.vpc_id }}"  # vpc is the register name; you can also set it manually
    state: present
    rules: "{{ item.sg_rules }}"
  with_items: ec2_security_groups
  register: aws_sg

- name: Tag the security group with a name
  local_action:
    module: ec2_tag
    resource: "{{ item.group_id }}"
    region: "{{ vpc_region }}"
    state: present
    tags:
      Name: "{{ vpc_name }}-group"
  with_items: aws_sg.results
I wonder how it is possible to give the tag Name
tags:
  Name: "{{ item.sg_name }}"
the same value as the primary name definition of the security group:
local_action:
  module: ec2_group
  name: "{{ item.sg_name }}"
I am trying to make that possible but I am not sure how to do it. Is it also possible to retrieve that item?
Thanks!
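A possible sketch (separate from the answer that follows): when a looped task is registered, each entry in aws_sg.results keeps its original loop item under item, so the tagging task may be able to reuse the group name via item.item.sg_name:

- name: Tag the security group with its own name
  local_action:
    module: ec2_tag
    resource: "{{ item.group_id }}"
    region: "{{ vpc_region }}"
    state: present
    tags:
      # item.item is the original element of ec2_security_groups for this result
      Name: "{{ item.item.sg_name }}"
  with_items: aws_sg.results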
Tags are available after the ec2.py inventory script is run. They always take the form tag_key_value, where 'key' is the name of the tag and 'value' is the value within that tag, i.e. if you create a tag called 'Application' and give it a value of 'AwesomeApplication', you would get 'tag_Application_AwesomeApplication'.
That said, if you have just created instances and want to run some commands against those new instances, parse the output from the create-instance command to get a list of the IP addresses, add them to a temporary group, and then you can run commands against that group within the same playbook:
...
- name: add hosts to temporary group
  add_host: name="{{ item }}" groups=temporarygroup
  with_items: parsedipaddresses

- hosts: temporarygroup
  tasks:
    - name: awesome script to do stuff goes here
...
