YAML code repetition - Ansible

I have several blocks of code repeated all over my codebase, like this:
- name: Retrieve VM information
  os_server_facts:
    validate_certs: False                         ### From this line
    api_timeout: 300
    timeout: 600
    auth:
      auth_url: "{{ cloudstack_auth_url }}"
      username: "{{ cloudstack_login }}"
      password: "{{ cloudstack_password }}"
      project_name: "{{ cloudstack_project }}"
    auth_type: v2password                         ### To this line
    server: "{{ vm_hostname }}"
Within the same file I could use anchors, but I don't know how to factor this piece of code out across different files. Any ideas?

If you have tasks that are used in many places, you can always include them in your playbook from a common .yml file using an include:
- include: ../common/tasks/mytasks.yml
However! Ansible really wants you to use roles for this type of common-task reuse, so I would consider putting these into a very simple role and using it in your plays with include_role. It's really a better and more scalable way to do this.
- name: Include my tasks as a role
  include_role:
    name: reusedTasks
    tasks_from: simple_role
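If the repetition is module parameters rather than whole tasks (as in the question's os_server_facts example), module_defaults (available since Ansible 2.7) is also worth a look. A minimal sketch reusing the question's parameters; every os_server_facts call in the play then inherits the defaults:

- hosts: all
  module_defaults:
    os_server_facts:
      validate_certs: False
      api_timeout: 300
      timeout: 600
      auth:
        auth_url: "{{ cloudstack_auth_url }}"
        username: "{{ cloudstack_login }}"
        password: "{{ cloudstack_password }}"
        project_name: "{{ cloudstack_project }}"
      auth_type: v2password
  tasks:
    - name: Retrieve VM information
      os_server_facts:
        # only the per-call parameter remains; the rest comes from module_defaults
        server: "{{ vm_hostname }}"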


Ansible - Is it possible to loop over a list of objects in input within a playbook

I am trying to create a playbook that manages the creation of some load balancers.
The playbook takes a configuration YAML as input, which is formatted like so:
-----configuration.yml-----
virtual_servers:
  - name: "test-1.local"
    type: "standard"
    vs_port: 443
    description: ""
    monitor_interval: 30
    ssl_flag: true
(omissis)
As you can see, this defines a list of load-balancing objects with their respective specifications.
If I want to create, for example, a monitor instance, which depends on these definitions, I use this task, defined within a playbook:
-----Playbook snippet-----
...
- name: "Creator | Create new monitor"
  include_role:
    name: vs-creator
    tasks_from: pool_creator
  with_items: "{{ virtual_servers }}"
  loop_control:
    loop_var: monitor_item
...
-----Monitor Task-----
- name: "Set monitor facts - Site 1"
  set_fact:
    monitor_name: "{{ monitor_item.name }}"
    monitor_vs_port: "{{ monitor_item.vs_port }}"
    monitor_interval: "{{ monitor_item.monitor_interval }}"
    monitor_partition: "{{ hostvars['localhost']['vlan_partition'] | first }}"
...
(omissis)
- name: "Create HTTP monitor - Site 1"
  bigip_monitor_http:
    state: present
    name: "{{ monitor_name }}_{{ monitor_vs_port }}.monitor"
    partition: "{{ monitor_partition }}"
    interval: "{{ monitor_interval }}"
    timeout: "{{ monitor_interval | int * 3 + 1 }}"
    provider:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
  delegate_to: localhost
  when:
    - site == 1
    - monitor_item.name | regex_search(regex_site_1) != None
...
As you can probably already see, I have a few problems with this code; the main one I would like to optimize is the following:
The creation of a load balancer (virtual_server) involves multiple tasks (creation of a monitor, pool, etc.), and I need to treat each list element in the configuration as an object to create, with all the necessary definitions.
I also need to do this for the different sites that make up our datacenters, for which I use regex_site_1 and site == 1 to pick the correct one... though I realize that this is not ideal.
The script, as of now, does that, but it's not well managed, I believe, and I'm at a loss as to what approach I should take in developing this playbook. I was thinking about looping over the playbook with each element from the configuration list, but apparently this is not possible, and I'm wondering if there's any way to do this, ideally with an example.
Thanks in advance for any input you might have.
If you can influence the input data, I advise turning the elements of virtual_servers into hosts.
In this case the inventory will look like this:
virtual_servers:
  hosts:
    test-1.local:
      vs_port: 443
      description: ""
      monitor_interval: 30
      ssl_flag: true
And all the code becomes blissfully simple:
- hosts: virtual_servers
  tasks:
    - name: Do something
      delegate_to: other_host
      debug: msg=done
...
Ansible will create all the loops for you for free (no need for include_role or odd loops), and most things involving variables become very easy: each host has its own set of variables, which you just... use.
The 'we are doing configuration on a real host, not on this virtual one' part is handled by delegate_to.
This is idiomatic Ansible, and it's better to follow this way. Every time you have an include_role within a loop, you have almost certainly made a mistake in designing the inventory.
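For the question's use case, a sketch of the monitor creation against such an inventory; the bigip_monitor_http parameters are copied from the question, while bigip_server is a hypothetical variable naming the real F5 device to configure:

- hosts: virtual_servers
  gather_facts: false
  tasks:
    - name: Create HTTP monitor
      bigip_monitor_http:
        state: present
        name: "{{ inventory_hostname }}_{{ vs_port }}.monitor"   # host vars come straight from the inventory
        interval: "{{ monitor_interval }}"
        timeout: "{{ monitor_interval | int * 3 + 1 }}"
        provider:
          server: "{{ bigip_server }}"   # hypothetical var: the real device to configure
          user: "{{ username }}"
          password: "{{ password }}"
      delegate_to: localhost             # run against the real host, not the virtual one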

Load ansible vars in specific tasks

I feel I must be missing an obvious answer here; I've read a number of posts but have not been able to get this working.
Currently I am loading and then templating vars from files depending on inventory hostnames, like so:
- name: load unique dev vars from file
  include_vars:
    file: ~/ansible/env-dev.yml
  when: inventory_hostname in groups['devs']

- name: load unique prod vars from file
  include_vars:
    file: ~/ansible/env-prod.yml
  when: inventory_hostname == 'prod'

- name: copy .env dev file with templated vars
  ansible.builtin.template:
    src: ~/ansible/env-dev.j2
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'
  when: inventory_hostname in groups['devs']
This works fine, but ultimately it requires me to create a ton of .yml files when I would rather include some variables in certain steps instead.
Is it possible to load vars for a specific task? I've tried a number of solutions but haven't been able to make it work yet. See below for one method I tried, using vars at the end of the task.
- name: copy .env dev file with templated vars
  ansible.builtin.template:
    src: ~/ansible/env-dev.j2
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'
  when: inventory_hostname in groups['devs']
  vars:
    NODE_ENV: development
    PORT: 66
The key to organizing your Ansible code is to rely on group vars.
This feature loads variables according to the groups a host belongs to. There are several ways to do this; one of the clearest is to use YAML files named after each group inside the group_vars folder (plus an all.yaml matching all hosts). Ansible will pick them up automatically, so you can get rid of your first two include_vars tasks. You can combine them with variables specific to the role and/or the playbook, so you end up with a set of variables coming from the host (the target) and from the role/playbook (the task to achieve).
To replace the hardcoded src: ~/ansible/env-dev.j2 you could, for example, define a variable in each group file.
---
# dev.yaml
template_name: "env-dev.j2"
---
# prod.yaml
template_name: "env-prod.j2"
And then use it in your playbook / role as src: "{{ template_name }}".
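Putting it together, a sketch of the layout and the resulting single task, assuming groups named dev and prod (the group_vars file names must match the group names):

inventory/
  group_vars/
    all.yaml
    dev.yaml     # contains: template_name: "env-dev.j2"
    prod.yaml    # contains: template_name: "env-prod.j2"

- name: copy .env file with templated vars
  ansible.builtin.template:
    src: "~/ansible/{{ template_name }}"   # resolved per group, no when needed
    dest: /home/{{ inventory_hostname }}/.env
    owner: '{{ inventory_hostname }}'
    group: '{{ inventory_hostname }}'
    mode: '0600'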

ansible how to load variables from another role, without executing it?

I have a task to create a one-off cleanup playbook which uses variables from a role, but I don't need to execute that role. Is there a way to provide a role name and get everything from its defaults and vars, without hardcoding paths to it? I also want vars defined in group_vars or host_vars to take higher precedence than the ones included from the role.
Example task:
- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ kafka_service_name }}"
    - "{{ zookeeper_service_name }}"
  ignore_errors: true
where kafka_service_name and zookeeper_service_name are contained in the kafka role, but may also be present in, e.g., group_vars.
I came up with a fairly hacky solution, which looks like this:
- name: save old host_vars
  set_fact:
    old_host_vars: "{{ hostvars[inventory_hostname] }}"

- name: load kafka role variables
  include_vars:
    dir: "{{ item.root }}/{{ item.path }}"
  vars:
    params:
      files:
        - kafka
      paths: "{{ ['roles'] + lookup('config', 'DEFAULT_ROLES_PATH') }}"
  with_filetree: "{{ lookup('first_found', params) }}"
  when: item.state == 'directory' and item.path in ['defaults', 'vars']

- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ old_host_vars['kafka_service_name'] | default(kafka_service_name) }}"
    - "{{ old_host_vars['zookeeper_service_name'] | default(zookeeper_service_name) }}"
The include_vars task finds the first kafka role folder in ./roles or in the default role locations, then includes files from its defaults and vars directories, in the correct order.
I had to save the old hostvars because include_vars has higher precedence than anything but extra vars (as per the Ansible docs), and then use the included var only if old_host_vars returned nothing for it.
If you don't have a requirement to honor group_vars, include_vars works quite nicely as a single task and looks way better.
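For reference, a minimal sketch of that single-task variant, assuming the role lives under ./roles next to the playbook:

- name: load kafka role variables
  include_vars:
    dir: "{{ playbook_dir }}/roles/kafka/defaults"   # includes every file in the role's defaults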
UPD: here is the regexp that I used to replace plain vars with the old_host_vars hack.
This was tested in VS Code search/replace, but can be adjusted for any other editor.
Search for vars that start with kafka_:
\{\{ (kafka_\w*) \}\}
Replace with:
{{ old_host_vars['$1'] | default($1) }}

Pass nested Ansible vars into playbook inventory file

I am wondering whether it is possible to pass --extra-vars when running ansible-playbook in order to inject variables into the inventory file I am using to run my playbook.
sample playbook

- name: "Create CI pipeline"
  hosts: all
  tasks:
    - name: "Create PreCodeReview jobs"
      tags:
        - jenkins
        - jenkins-jobs
      when: jenkins is defined
      local_action:
        module: jenkins_job
        url: "{{ jenkins.url }}"
        user: "{{ jenkins.username }}"
        token: "{{ jenkins.access_token }}"
        name: "{{ jenkins.component.name }}_PreCodeReview"
        config: "{{ lookup('template', '../templates/jenkins/add-pre-code-config.xml') }}"

    - name: "Create Release jobs"
      tags:
        - jenkins
        - jenkins-jobs
      when: jenkins is defined
      local_action:
        module: jenkins_job
        url: "{{ jenkins.url }}"
        user: "{{ jenkins.username }}"
        token: "{{ jenkins.access_token }}"
        name: "{{ jenkins.component.name }}_Release"
        config: "{{ lookup('template', '../templates/jenkins/add-release-config.xml') }}"
I am looking to pass in jenkins.component.name at run time. I attempted this with jenkins.component.name=<name> and with "{'jenkins':{'component':{'name':<name>}}}".
Neither worked.
Here is the inventory I am using to run the playbook
sample inventory

all:
  hosts:
    local:
      ansible_host: 127.0.0.1
      ansible_connection: local
      project_name: magic_proj
      jenkins:
        url: https://my/jenkins
        username: admin
        access_token: f96hjfg54354b3e8512d491fb471fd
        keep_builds: 20
        components:
          - name: <repo_name>
            repository: <repo_url>
You were very close: --extra-vars wants either key=value pairs, JSON, YAML, or #./some/file, as specified in the fine manual.
Regrettably, what you provided was Python syntax, not JSON syntax; it should work if you change your command line to --extra-vars '{"jenkins":{"component":{"name":"<name>"}}}'.
update:
However, even that has a problem: it appears that for dict structures ansible does not merge inventory dicts and extra-var dicts, so you will need to either choose a "flat" extra-var name (such as almost what you also attempted: --extra-vars '{"jenkins_component_name": ""}') or manually merge the structures together in your playbook (perhaps via pre_tasks: or similar).
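A sketch of the manual-merge route via pre_tasks, using the standard combine filter; jenkins_component_name is the flat extra-var name suggested above:

- name: "Create CI pipeline"
  hosts: all
  pre_tasks:
    - name: "Merge the flat extra-var into the nested jenkins dict"
      set_fact:
        jenkins: "{{ jenkins | combine({'component': {'name': jenkins_component_name}}, recursive=True) }}"
      when: jenkins_component_name is defined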

Ansible - how to conditionally invert variables in a playbook

I needed to be able to invert variables stored in a JSON file that is passed to the playbook from the command line.
These are the tasks that I set up (they are identical except for the vars); this is a fragment of a playbook:
- name: Prepare a .sql file
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ source['database']['db_name'] }}"
    state: dump
    login_host: "{{ source['database']['host'] }}"
    login_user: "{{ source['database']['user'] }}"
    login_password: "{{ source['database']['password'] }}"
    target: test_db.sql
  when: invert is not defined

- name: Prepare a .sql file (inverted)
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ target['database']['db_name'] }}"
    state: dump
    login_host: "{{ target['database']['host'] }}"
    login_user: "{{ target['database']['user'] }}"
    login_password: "{{ target['database']['password'] }}"
    target: test_db.sql
  when: invert is defined
So consequently, when I execute
ansible-playbook -i hosts playbook.yml --extra-vars "#dynamic_vars.json"
the first task is executed. If I execute
ansible-playbook -i hosts playbook.yml --extra-vars "#dynamic_vars.json" --extra-vars "invert=yes"
the second task is executed; it takes the same hash as parameters but swaps source for target (which essentially becomes the source in my playbook).
As you can see, this is a very simplistic approach with a lot of unnecessary duplication, and I just do not like it. However, I cannot think of a better way to invert variables from the command line without building some more complex include logic.
Perhaps you can advise me on how to do this better? Thanks!
I'm a big fan of YAML's anchors and references when it comes to avoiding repetition. Since the content here is dynamic, you can take advantage of with_items, which can be used to pass a parameter like so:
- &sqldump
  name: Prepare a .sql file
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ item['database']['db_name'] }}"
    state: dump
    login_host: "{{ item['database']['host'] }}"
    login_user: "{{ item['database']['user'] }}"
    login_password: "{{ item['database']['password'] }}"
    target: test_db.sql
  when: invert is not defined
  with_items:
    - "{{ source }}"

- <<: *sqldump
  name: Prepare a .sql file (inverted)
  when: invert is defined
  with_items:
    - "{{ target }}"
The second task is a perfect clone of the first; you then only override its name, its condition, and the with_items loop to pass the target instead of the source.
After reading your answer to @ydaetskcoR, it sounds like you have quite a few cases where you need to use the data from one or the other dict. In that case it might make sense to simply define the var globally depending on the invert parameter. Your vars file could look like this:
---
source:
  database:
    db_name: ...
target:
  database:
    db_name: ...
data: "{{ target if invert is defined else source }}"
You can then simply use data in all your tasks without dealing with conditions any further.
- name: Prepare a .sql file
  delegate_to: 127.0.0.1
  mysql_db:
    name: "{{ data['database']['db_name'] }}"
    state: dump
    login_host: "{{ data['database']['host'] }}"
    login_user: "{{ data['database']['user'] }}"
    login_password: "{{ data['database']['password'] }}"
    target: test_db.sql
Of course, this way you have a fixed task name which does not change with the param you pass.
If you are attempting to do the same thing but just want to specify different variables depending on the host/group, then a better approach may be to simply set these as host/group vars and run it as a single task.
If we set up our inventory file a bit like this:
[source_and_target-nodes:children]
source-nodes
target-nodes
[source-nodes]
source database_name='source_db' database_login_user='source_user' database_login_pass='source_pass'
[target-nodes]
target database_name='target_db' database_login_user='target_user' database_login_pass='target_pass'
Then we can target the play at source_and_target-nodes like so:

- name: Prepare a .sql file
  hosts: source_and_target-nodes
  tasks:
    - mysql_db:
        name: "{{ database_name }}"
        state: dump
        login_host: "{{ inventory_hostname }}"
        login_user: "{{ database_login_user }}"
        login_password: "{{ database_login_pass }}"
        target: test_db.sql
You won't be able to access the host vars of a different host this easily if you need to use delegate_to as in your question, but if you simply need to run the play locally you can instead set ansible_connection to local in your host/group vars, or set connection: local on the play.
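For the group-vars route, that is a single line in a group file (the file name below is hypothetical, matching the inventory group above):

# group_vars/source_and_target-nodes.yml
ansible_connection: local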
