Ansible: how to load variables from another role, without executing it?

I have a task to create a one-off cleanup playbook which uses variables from a role, but I don't need to execute that role. Is there a way to provide a role name and pull in everything from its defaults and vars, without hardcoding paths to it? I also want vars defined in group_vars or host_vars to take higher precedence than the ones included from the role.
Example task:
- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ kafka_service_name }}"
    - "{{ zookeeper_service_name }}"
  ignore_errors: true
where kafka_service_name and zookeeper_service_name are defined in the kafka role, but may also be present in, for example, group_vars.

I came up with a fairly hacky solution, which looks like this:
- name: save old host_vars
  set_fact:
    old_host_vars: "{{ hostvars[inventory_hostname] }}"

- name: load kafka role variables
  include_vars:
    dir: "{{ item.root }}/{{ item.path }}"
  vars:
    params:
      files:
        - kafka
      paths: "{{ ['roles'] + lookup('config', 'DEFAULT_ROLES_PATH') }}"
  with_filetree: "{{ lookup('first_found', params) }}"
  when: item.state == 'directory' and item.path in ['defaults', 'vars']
- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ old_host_vars['kafka_service_name'] | default(kafka_service_name) }}"
    - "{{ old_host_vars['zookeeper_service_name'] | default(zookeeper_service_name) }}"
The include_vars task finds the first kafka role folder in ./roles and in the default role locations, then includes files from the defaults and vars directories, in the correct order.
I had to save the old hostvars because include_vars has higher precedence than anything except extra vars (as per the Ansible docs), and then use the included var only if old_host_vars yielded nothing.
If you don't have a requirement to load group_vars, include_vars works quite nicely as a single task and looks way better.
UPD: Here is the regexp that I used to apply the old_host_vars fallback to existing vars.
This was tested with VS Code search/replace, but can be adjusted for any other editor.
Search for vars that start with kafka_:
\{\{ (kafka_\w*) \}\}
Replace with:
{{ old_host_vars['$1'] | default($1) }}

Related

Load ansible vars in specific tasks

I feel I must be missing this answer as it seems obvious but I've read a number of posts and have not been able to get this working.
Currently I am loading and then templating vars from files depending on inventory hostnames, like so:
- name: load unique dev vars from file
  include_vars:
    file: ~/ansible/env-dev.yml
  when: inventory_hostname in groups['devs']

- name: load unique prod vars from file
  include_vars:
    file: ~/ansible/env-prod.yml
  when: inventory_hostname == 'prod'

- name: copy .env dev file with templated vars
  ansible.builtin.template:
    src: ~/ansible/env-dev.j2
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'
  when: inventory_hostname in groups['devs']
This works fine, but ultimately it requires me to create a ton of .yml files when I would rather include some variables in certain steps instead.
Is it possible to load vars for a specific task? I've tried a number of solutions but haven't been able to make it work yet. See below for one method I tried using vars at the end of the task.
- name: copy .env dev file with templated vars
  ansible.builtin.template:
    src: ~/ansible/env-dev.j2
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'
  when: inventory_hostname in groups['devs']
  vars:
    NODE_ENV: development
    PORT: 66
The key to organizing your Ansible code is to rely on group vars.
This feature loads variables according to the groups a host belongs to. There are several ways to use it; one of the clearest is YAML files named after each group inside the group_vars folder (plus an all.yaml matching all hosts). Ansible picks them up automatically, so you can get rid of your first two include_vars tasks. You can combine them with variables specific to the role and/or the playbook, so you end up with a set of variables coming from the host (the target) and from the role / playbook (the task to achieve).
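For instance, a layout along these lines (a sketch; it assumes your inventory defines dev and prod groups):

inventory            # hosts grouped into [dev] and [prod]
group_vars/
    all.yaml         # variables applied to every host
    dev.yaml         # variables for hosts in the dev group
    prod.yaml        # variables for hosts in the prod group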
To replace the hardcoded src: ~/ansible/env-dev.j2 you could, for example, define a variable in each group file.
---
# dev.yaml
template_name: "env-dev.j2"
---
# prod.yaml
template_name: "env-prod.j2"
And then use it in your playbook / role: src: "{{ template_name }}".
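With that in place, the two templating tasks from the question collapse into one group-agnostic task (a sketch reusing the question's paths and permissions):

- name: copy .env file with templated vars
  ansible.builtin.template:
    src: "~/ansible/{{ template_name }}"
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'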

How to iterate across Ansible inventory whilst referencing hostvars in add_host

I want to dynamically create an in-memory inventory which is a filter of a standard inventory including only the host where a specific service is installed. The filtered inventory is to be used in a subsequent play.
So I identify the IP address of the host where the service is installed.
- name: find where the service is installed
  win_service:
    name: "{{ service }}"
  register: service_info
This returns a boolean 'exists' value. Using this value as a condition, an attempt is made to add the host where the service is running.
- name: create filtered in memory inventory
  add_host:
    name: "{{ ansible_host }}"
  when: service_info.exists
The add_host module bypasses the play host loop and only runs once for all the hosts in the play; as such, this only works if the host that add_host happens to run against is the one that has the service installed.
Below is an attempt to force add_host to iterate across the hosts in the inventory; however, it appears that the hostvars, and therefore service_info.exists, are not passed through to add_host, so the conditional 'when' check always returns false.
- name: create filtered in memory inventory
  add_host:
    name: "{{ ansible_host }}"
  when: service_info.exists
  with_items: "{{ ansible_play_batch }}"
Is there a way to pass the hosts, with their hostvars, to add_host as an iterator?
I suggest creating a task before add_host that writes a temporary file on the control node with the list of servers matching the condition, and then looping over that file in the add_host module.
Example taken from Improving use of add_host in ansible, a question I asked before:
---
- hosts: servers
  tasks:
    - name: find where the service is installed
      win_service:
        name: "{{ service }}"
      register: service_info

    - name: write server name in file on control node
      lineinfile:
        path: /tmp/servers_foo.txt
        state: present
        line: "{{ inventory_hostname }}"
      delegate_to: 127.0.0.1
      when: service_info.exists

    - name: assign target to group
      add_host:
        name: "{{ item }}"
        groups:
          - foo
      with_lines: cat /tmp/servers_foo.txt
      delegate_to: 127.0.0.1
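A variation worth noting (my sketch, not part of the original answer): registered results are stored per host in hostvars, so you can skip the temporary file and loop over the play's hosts directly, checking each host's copy of service_info:

- name: create filtered in memory inventory
  add_host:
    name: "{{ item }}"
    groups:
      - foo
  when: hostvars[item].service_info.exists | default(false)
  with_items: "{{ ansible_play_hosts }}"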

Ansible - how to conditionally invert variables in a playbook

I needed to be able to invert variables stored in a JSON file that is passed to the playbook from the command line.
These are the tasks that I set up (they are identical except for the vars); this is a fragment of a playbook:
- name: Prepare a .sql file
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ source['database']['db_name'] }}"
    state: dump
    login_host: "{{ source['database']['host'] }}"
    login_user: "{{ source['database']['user'] }}"
    login_password: "{{ source['database']['password'] }}"
    target: test_db.sql
  when: invert is not defined

- name: Prepare a .sql file (inverted)
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ target['database']['db_name'] }}"
    state: dump
    login_host: "{{ target['database']['host'] }}"
    login_user: "{{ target['database']['user'] }}"
    login_password: "{{ target['database']['password'] }}"
    target: test_db.sql
  when: invert is defined
So consequently, when I execute

ansible-playbook -i hosts playbook.yml --extra-vars "@dynamic_vars.json"

the first task is executed. If I execute

ansible-playbook -i hosts playbook.yml --extra-vars "@dynamic_vars.json" --extra-vars "invert=yes"

the second task is executed; it takes the same hash as parameters but swaps source for target (which essentially becomes the source in my playbook).
As you can see, this is a very simplistic approach with a lot of unnecessary duplication, and I just do not like it. However, I cannot think of a better way to invert variables at the command line without building some more complex include logic.
Perhaps you can advise me on how to do it better? Thanks!
I'm a big fan of YAML's anchors and references when it comes to avoiding repetition. Since the content is dynamic, you can take advantage of with_items, which can be used to pass a parameter like so:
- &sqldump
  name: Prepare a .sql file
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ item['database']['db_name'] }}"
    state: dump
    login_host: "{{ item['database']['host'] }}"
    login_user: "{{ item['database']['user'] }}"
    login_password: "{{ item['database']['password'] }}"
    target: test_db.sql
  when: invert is not defined
  with_items:
    - "{{ source }}"

- <<: *sqldump
  name: Prepare a .sql file (inverted)
  when: invert is defined
  with_items:
    - "{{ target }}"
The second task is a perfect clone of the first; you then override the name, the condition, and the with_items loop to pass the target instead of the source.
After reading your answer to @ydaetskcoR, it sounds like you have quite a few places where you need the data from one or the other dict. In that case it might make sense to define the variable globally, depending on the invert parameter. Your vars file could look like this:
---
source:
  database:
    db_name: ...
target:
  database:
    db_name: ...
data: "{{ target if invert is defined else source }}"
You can then simply use data in all your tasks without dealing with conditions any further.
- name: Prepare a .sql file
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ data['database']['db_name'] }}"
    state: dump
    login_host: "{{ data['database']['host'] }}"
    login_user: "{{ data['database']['user'] }}"
    login_password: "{{ data['database']['password'] }}"
    target: test_db.sql
Of course, this way you have a fixed task name which does not change with the parameter you pass.
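If the name matters to you, task names can themselves contain Jinja expressions (a sketch; this relies on invert being known when the name is templated, which holds for extra vars):

- name: "Prepare a .sql file ({{ 'inverted' if invert is defined else 'normal' }})"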
If you are attempting to do the same thing but just want to specify different variables depending on the host/group, then a better approach may be to simply set these as host/group vars and run it as a single task.
If we set up our inventory file a bit like this:
[source_and_target-nodes:children]
source-nodes
target-nodes

[source-nodes]
source database_name='source_db' database_login_user='source_user' database_login_pass='source_pass'

[target-nodes]
target database_name='target_db' database_login_user='target_user' database_login_pass='target_pass'
Then we can target the task at the source_and_target-nodes like so:
- name: Prepare a .sql file
  hosts: source_and_target-nodes
  tasks:
    - mysql_db:
        name: "{{ database_name }}"
        state: dump
        login_host: "{{ inventory_hostname }}"
        login_user: "{{ database_login_user }}"
        login_password: "{{ database_login_pass }}"
        target: test_db.sql
You won't be able to access the host vars of a different host this easily if you need to use delegate_to as in your question, but if you simply need to run the play locally you can instead set ansible_connection to local in your host/group vars, or set connection: local in the play.
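For example, a one-line group_vars file would do it (a sketch; the filename must match your group name):

# group_vars/source_and_target-nodes.yml
ansible_connection: local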

How can Ansible "register" in a variable the result of including a playbook?

How can an Ansible playbook register in a variable the result of including another playbook?
For example, would the following register the result of executing tasks/foo.yml in result_of_foo?
tasks:
  - include: tasks/foo.yml
  - register: result_of_foo
How else can Ansible record the result of a task sequence?
The short answer is that this can't be done.
The register statement is used to store the output of a single task into a variable. The exact contents of the registered variable can vary widely depending on the type of task (for example a shell task will include stdout & stderr output from the command you run in the registered variable, while the stat task will provide details of the file that is passed to the task).
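A quick illustration of that difference (a sketch):

- shell: echo hello
  register: cmd_result      # cmd_result.stdout is "hello", cmd_result.stderr is ""

- stat:
    path: /etc/passwd
  register: file_result     # file_result.stat.exists, file_result.stat.size, etc.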
If you have an include file with an arbitrary number of tasks within it then Ansible would have no way of knowing what to store in the variable in your example.
Each individual task within your include file can register variables, and you can reference those variables elsewhere, so there's really no need to even do something like this.
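In other words, register inside the included file and reference the variable after the include (a minimal sketch; the file and variable names are illustrative):

# tasks/foo.yml
- command: /usr/bin/something
  register: foo_result

# main playbook
tasks:
  - include: tasks/foo.yml
  - debug:
      var: foo_result.stdout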
I was able to do this by passing the name to register as a variable into the included tasks. My main.yaml and the included cgw.yaml files are below.
main.yaml:

- name: Create App A CGW
  include: cgw.yaml
  vars:
    bgp_asn: "{{ asn_spoke }}"
    ip_address: "{{ eip_app_a.public_ip }}"
    name: cgw-app-a
    region: "{{ aws_region }}"
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    register: cgw_app_a
cgw.yaml:

- name: "{{ name }}"
  ec2_customer_gateway:
    bgp_asn: "{{ bgp_asn }}"
    ip_address: "{{ ip_address }}"
    name: "{{ name }}"
    region: "{{ region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: "{{ register }}"

Tag Name from EC2-group in Ansible

This is another question following on from this post:
loops over the registered variable to inspect the results in ansible
So basically, having:
- name: EC2Group | Creating an EC2 Security Group inside the Mentioned VPC
  local_action:
    module: ec2_group
    name: "{{ item.sg_name }}"
    description: "{{ item.sg_description }}"
    region: "{{ vpc_region }}"    # Change the AWS region here
    vpc_id: "{{ vpc.vpc_id }}"    # vpc is the register name, you can also set it manually
    state: present
    rules: "{{ item.sg_rules }}"
  with_items: ec2_security_groups
  register: aws_sg

- name: Tag the security group with a name
  local_action:
    module: ec2_tag
    resource: "{{ item.group_id }}"
    region: "{{ vpc_region }}"
    state: present
    tags:
      Name: "{{ vpc_name }}-group"
  with_items: aws_sg.results
I wonder how it is possible to give the tag Name

tags:
  Name: "{{ item.sg_name }}"

the same value as the primary name definition on the security groups:
local_action:
  module: ec2_group
  name: "{{ item.sg_name }}"
I am trying to make that work, but I am not sure how to do it. Is it also possible to retrieve that item?
Thanks!
Tags are available after the ec2.py inventory script is run. They always take the form tag_key_value, where 'key' is the name of the tag and 'value' is the value within that tag; e.g. if you create a tag called 'Application' and give it a value of 'AwesomeApplication', you would get 'tag_Application_AwesomeApplication'.
That said, if you have just created instances and want to run some commands against them, parse the output from the create-instance command to get a list of the IP addresses, add them to a temporary group, and then run commands against that group within the same playbook:
...
- name: add hosts to temporary group
  add_host: name="{{ item }}" groups=temporarygroup
  with_items: parsedipaddresses

- hosts: temporarygroup
  tasks:
    - name: awesome script to do stuff goes here
...
