Ansible: increment a variable globally for all hosts

I have two servers in my inventory (hosts):

[server]
10.23.12.33
10.23.12.40

and a playbook (play.yml):

---
- hosts: all
  roles:
    - web

Inside the web role's vars directory I have main.yml:

---
file_number: 0

Inside the web role's tasks directory I have main.yml:

---
- name: Increment variable
  set_fact: file_number={{ file_number | int + 1 }}

- name: create file
  command: 'touch file{{ file_number }}'
Now I expect to get file1 on the first machine and file2 on the second machine, but I get file1 on both machines.
So this variable is local to every machine; how can I make it global for all machines?
My file structure is:
hosts
play.yml
roles/
  web/
    tasks/
      main.yml
    vars/
      main.yml

You need to keep in mind that variables in Ansible aren't global. Variables (aka 'facts') are applied uniquely to each host, so file_number for host1 is different than file_number for host2. Here's an example based loosely on what you posted:
roles/test/vars/main.yml:

---
file_number: 0

roles/test/tasks/main.yml:

---
- name: Increment variable
  set_fact: file_number={{ file_number | int + 1 }}

- name: debug
  debug: msg="file_number is {{ file_number }} on host {{ inventory_hostname }}"
Now suppose you have just two hosts defined, and you run this role multiple times in a playbook that looks like this:
---
- hosts: all
  roles:
    - { role: test }

- hosts: host1
  roles:
    - { role: test }

- hosts: all
  roles:
    - { role: test }
So in the first play the role is applied to both host1 & host2. In the second play it's only run against host1, and in the third play it's again run against both host1 & host2. The output of this playbook is:
PLAY [all] ********************************************************************
TASK: [test | Increment variable] *********************************************
ok: [host1]
ok: [host2]
TASK: [test | debug] **********************************************************
ok: [host1] => {
"msg": "file_number is 1 on host host1"
}
ok: [host2] => {
"msg": "file_number is 1 on host host2"
}
PLAY [host1] **************************************************
TASK: [test | Increment variable] *********************************************
ok: [host1]
TASK: [test | debug] **********************************************************
ok: [host1] => {
"msg": "file_number is 2 on host host1"
}
PLAY [all] ********************************************************************
TASK: [test | Increment variable] *********************************************
ok: [host1]
ok: [host2]
TASK: [test | debug] **********************************************************
ok: [host1] => {
"msg": "file_number is 3 on host host1"
}
ok: [host2] => {
"msg": "file_number is 2 on host host2"
}
So as you can see, the value of file_number is different for host1 and host2 since the role that increments the value ran against host1 more times than it did host2.
Unfortunately there really isn't a clean way of making a variable global within Ansible. The entire nature of Ansible's ability to run tasks in parallel against large numbers of hosts makes something like this very tricky. Unless you're extremely careful with global variables in a parallel environment, you can easily trigger a race condition, which will likely result in unpredictable (inconsistent) results.
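That said, if you only need every host to see a value that was computed once, one pattern (a sketch, not a general counter solution; shared_number is a made-up fact name) is to set a fact on a single host with run_once and let the other hosts read it through hostvars:

- name: compute the value once, on the first host of the play
  set_fact:
    shared_number: 42
  run_once: true

- name: any host can read it via hostvars
  debug:
    msg: "{{ hostvars[ansible_play_hosts[0]]['shared_number'] }}"

(ansible_play_hosts is the list of active hosts in the current play; older versions expose it as play_hosts.)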

I haven't found a pure Ansible solution, but I made a workaround that uses the shell to keep a global counter for all hosts:
Create a temporary file in /tmp on localhost and put the starting count in it.
For every host, read the file and increment the number inside it.
I created and initialized the file in the playbook (play.yml):
- name: Manage localhost working area
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Create localhost tmp file
      file: path={{ item.path }} state={{ item.state }}
      with_items:
        - { path: '/tmp/file_num', state: 'absent' }
        - { path: '/tmp/file_num', state: 'touch' }
    - name: Managing tmp files
      lineinfile: dest=/tmp/file_num line='0'
Then in the web role's tasks/main.yml I read the file and increment it:
- name: Get file number
  local_action: shell file=$((`cat /tmp/file_num` + 1)); echo $file | tee /tmp/file_num
  register: file_num

- name: Set file name
  command: 'touch file{{ file_num.stdout }}'
Now I have file1 on the first host and file2 on the second host.

You can use Matt Martz's solution from here.
Basically your task would be like:
- name: Set file name
  command: 'touch file{{ play_hosts.index(inventory_hostname) }}'
And you can remove all of the code for maintaining the global variable and the external file.
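Note that index() is zero-based, so with two hosts this creates file0 and file1; if the numbering should start at 1, a small variation of the task above does it:

- name: Set file name
  command: 'touch file{{ play_hosts.index(inventory_hostname) + 1 }}'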

Related

How to pass additional environment variables to the imported ansible playbook

I have a main_play.yml Ansible playbook in which I am importing a reusable playbook a.yml.
main_play.yml
- import_playbook: "reusable_playbooks/a.yml"
a.yml
---
- name: my_playbook
  hosts: "{{ HOSTS }}"
  force_handlers: true
  gather_facts: false
  environment:
    APP_DEFAULT_PORT: "{{ APP_DEFAULT_PORT }}"
  tasks:
    - name: Print Msg
      debug:
        msg: "hello"
My question is: how can I pass an additional environment variable from my main_play.yml playbook to my reusable playbook a.yml (if needed), so that the environment variables become:
environment:
  APP_DEFAULT_PORT: "{{ APP_DEFAULT_PORT }}"
  SPRING_PROFILE: "{{ SPRING_PROFILE }}"
import_playbook is not really a module but a core feature. It does not allow any parameters to be passed to the imported playbook. You can see this keyword as a simple convenience to facilitate playing several playbooks in a row, exactly as if they were defined in the same file.
So your problem comes down to:
How do I pass additional environment variables to a play?
Here is one solution, with illustrations of using it with extra vars or with a fact set in a previous play. This is far from exhaustive, but I hope it will guide you to your own best solution.
To ease readability:
I used the APP_ prefix for all environment variables in the examples below and filtered the results on that prefix.
I truncated the playbook output to only the relevant debug task.
We can define the following reusable.yml playbook containing a single play
---
- hosts: localhost
  gather_facts: false

  vars:
    default_env:
      APP_DEFAULT_PORT: "{{ APP_DEFAULT_PORT | d(8080) }}"

  environment: "{{ default_env | combine(additionnal_env | d({})) }}"

  tasks:
    - name: get the output of env for APP_* vars
      shell: env | grep -i app_
      register: env_cmd
      changed_when: false

    - name: debug the output of env
      debug:
        var: env_cmd.stdout_lines
We can directly run this playbook as-is which will give
$ ansible-playbook reusable.yml
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
"env_cmd.stdout_lines": [
"APP_DEFAULT_PORT=8080"
]
}
We can override the default port with
$ ansible-playbook reusable.yml -e APP_DEFAULT_PORT=1234
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
"env_cmd.stdout_lines": [
"APP_DEFAULT_PORT=1234"
]
}
We can pass additional environment variables with:
$ ansible-playbook reusable.yml -e '{"additionnal_env":{"APP_SPRING_PROFILE": "/toto/pipo"}}'
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
"env_cmd.stdout_lines": [
"APP_SPRING_PROFILE=/toto/pipo",
"APP_DEFAULT_PORT=8080"
]
}
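The same additional variables can also be kept in a file and passed with -e @file (a sketch; extra_env.yml is a made-up file name):

# extra_env.yml
additionnal_env:
  APP_SPRING_PROFILE: /toto/pipo

$ ansible-playbook reusable.yml -e @extra_env.yml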
Now if we want to do this from a parent playbook, we can set the needed variable for the given host in a previous play. We can define a parent.yml playbook:
---
- hosts: localhost
  gather_facts: false

  tasks:
    - name: define additionnal env vars for this host to be used in next play(s)
      set_fact:
        additionnal_env:
          APP_WHATEVER: some_value
          APP_VERY_IMPORTANT: "ho yes!"

- import_playbook: reusable.yml
which will give:
$ ansible-playbook parent.yml
[... truncated ...]
TASK [define additionnal env vars for this host to be used in next play(s)] ************************************************************************************************************************
ok: [localhost]
[... truncated ...]
TASK [debug the output of env] ************************************************************************************************************************************************************************************
ok: [localhost] => {
"env_cmd.stdout_lines": [
"APP_WHATEVER=some_value",
"APP_VERY_IMPORTANT=ho yes!",
"APP_DEFAULT_PORT=8080"
]
}
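Applied to the original question, a sketch (assuming a.yml adopts the default_env | combine(additionnal_env | d({})) pattern shown above, and that both plays target the same hosts so the fact carries over):

# main_play.yml
- hosts: "{{ HOSTS }}"
  gather_facts: false
  tasks:
    - name: pass an extra environment variable to the next play
      set_fact:
        additionnal_env:
          SPRING_PROFILE: "{{ SPRING_PROFILE }}"

- import_playbook: "reusable_playbooks/a.yml"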

debug module is not showing the expected output

I wrote a simple playbook to check the crontab entries for a bunch of servers, which I'm passing in an inventory file.
- hosts: all
  ignore_unreachable: yes
  vars_files:
    - ansible-vault-file
  become: true
  remote_user: userID
  tasks:
    - name: checkingCron
      shell: crontab -l | grep abcd
      become: true
      register: crondata
      ignore_errors: yes

    - name: show cron results
      debug: msg="{{ crondata.stdout_lines }}"
      ignore_errors: yes
And then I run the playbook using ansible-playbook cronCheck.yml -i serversfile.
I expect it to show the crontab entries for all servers. However, all it shows is this:
PLAY [all] ***
TASK [Gathering Facts] ***
ok: [server1]
ok: [server2]
ok: [server3]
and so on...
TASK [debug] *******
ok: [server_1] => {
"msg": "0"
}
ok: [server_2] => {
"msg": "0"
and so on...
Looks like I'm missing something here.
The requirement is straightforward and I've done something like this before, but I can't figure out what I'm doing wrong this time.

Use dynamic variable name

I'm trying to get the value of ip_address from the following YAML that I'm including as variables in Ansible:
common:
  ntp:
    - time.google.com
node1:
  default_route: 10.128.0.1
  dns:
    - 10.128.0.2
  hostname: ip-10-128-5-17
  device_interface: ens5
  cluster_interface: ens5
  interfaces:
    ens5:
      ip_address: 10.128.5.17
      nat_ip_address: 18.221.63.178
      netmask: 255.255.240.0
version: 2
However the network interface (ens5 here) may be named something else, such as eth0. My ansible code is this:
- hosts: all
  tasks:
    - name: Read configuration from the yaml file
      include_vars: "{{ config_yaml }}"

    - name: Dump Interface Settings
      vars:
        msg: node1.interfaces.{{ cvp_device_interface }}.ip_address
      debug:
        msg: "{{ msg }}"
      tags: debug_info
Running the code like this I can get the key's name:
TASK [Dump Interface Settings] *************************************************
│ ok: [18.221.63.178] => {
│ "msg": "node1.interfaces.ens5.ip_address"
│ }
But what I actually need is the value (i.e: something like {{ vars[msg] }}, which should expand into {{ node1.interfaces.ens5.ip_address }}). How can I accomplish this?
Use square brackets.
Example: a minimal playbook which defines a variable called "device" and uses it to return the active status of that device.
- hosts: localhost
  connection: local
  vars:
    device: enx0050b60c19af
  tasks:
    - debug: var=device
    - debug: var=hostvars.localhost.ansible_facts[device].active
Output:
$ ansible-playbook example.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] *******************************************************************
TASK [Gathering Facts] *************************************************************
ok: [localhost]
TASK [debug] ***********************************************************************
ok: [localhost] => {
"device": "enx0050b60c19af"
}
TASK [debug] ***********************************************************************
ok: [localhost] => {
"hostvars.localhost.ansible_facts[device].active": true
}
PLAY RECAP *************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
See comment:

- hosts: all
  tasks:
    - name: Read configuration from the yaml file
      include_vars: "{{ config_yaml }}"

    - name: Dump Interface Settings
      debug:
        msg: "{{ node1['interfaces'][cvp_device_interface]['ip_address'] }}"
      tags: debug_info
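The dot and bracket notations can also be mixed; only the dynamic part needs brackets (a variation on the task above, using the question's variable names):

- name: Dump Interface Settings
  debug:
    msg: "{{ node1.interfaces[cvp_device_interface].ip_address }}"
  tags: debug_info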

Avoid unused and undefined variable in playbook

I have the following data in a variables file:
data: [
  {
    service: "ServiceName",
    var2: "file/path/value",
    var3: "{{ check_var }}",
  }, {
    service: "ServiceName",
    var2: "file/path/value",
    var3: "{{ check_var }}",
  }
]
I have two playbooks which require the same data. However one playbook does not require var3.
- debug: msg="{{ item['service'] }} uses - {{ item['var2'] }}"
  with_items: "{{ data }}"
This gives the error "'check_var' is undefined".
TRIED:
I don't want to pollute the playbook with bad practices and use

when: check_var is undefined

or put fake dummy data in the playbook's vars attribute. Is there any way around this while maintaining good standards? Also, the actual data is quite large, so I don't want to duplicate it for each playbook.
In Ansible, data is assigned to hosts, not to playbooks.
You have to create two host groups. Hosts that need just the two variables go into the first group, and hosts that need all three variables go into both groups. You can include the hosts of the first group in the second group.
Then you create two group_vars files. In the first you put the two variables and in the second the third variable.
This way each host gets the right amount of information: playbook 1 uses three variables and playbook 2 uses just two.
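A sketch of that layout (group and file names are made up for illustration):

# inventory
[basic]
host1
host2
host3

[extended]
host3

# group_vars/basic.yml: the two variables every host needs
service: "ServiceName"
var2: "file/path/value"

# group_vars/extended.yml: the third variable, only for hosts in 'extended'
check_var: "real value"
var3: "{{ check_var }}"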
Update: Minimal example
$ diff -N -r none .
diff -N -r none/check_var.yaml ./check_var.yaml
0a1,4
> ---
> - hosts: localhost
> tasks:
> - debug: var=check_var
diff -N -r none/group_vars/myhosts.yaml ./group_vars/myhosts.yaml
0a1
> check_var: "Hello World!"
diff -N -r none/inventory ./inventory
0a1,2
> [myhosts]
> localhost
$ ansible-playbook -i inventory check_var.yaml
PLAY [localhost] ***************************************************************************
TASK [Gathering Facts] *********************************************************************
ok: [localhost]
TASK [debug] *******************************************************************************
ok: [localhost] => {
"check_var": "Hello World!"
}
PLAY RECAP *********************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

In combination with when and run_once in Ansible, the task is unexpectedly skipped

My inventory_hosts is as follows:
# inventory
[kafka]
192.168.1.1
192.168.1.2
[mysql]
192.168.1.3
My playbook (site.yml) is as follows:

- name: test
  hosts: all
  roles:
    - kafka
The kafka role's tasks:
# main.yml
- include: test.yml
  when: "'kafka' in group_names"

# test.yml
---
- name: get kafka groups length
  shell: echo "{{ groups['kafka']|length }}"
  run_once: true
  delegate_to: localhost

- name: Get the main control machine ip
  shell: ip addr show `ip route |awk '$2=="via" {print $5}' |head -1` | awk '$1=="inet" {print $2}'| head -1 | cut -d '/' -f 1
  run_once: true
  delegate_to: localhost
EXPECTED RESULTS
Both get kafka groups length and Get the main control machine ip are executed once and delegated to localhost.
ACTUAL RESULTS
TASK [Gathering Facts] **********************************************************************************************************
ok: [192.168.1.1]
ok: [192.168.1.2]
ok: [192.168.1.3]
META: ran handlers
TASK [kafka : get kafka groups length] ***************************************************************************************
skipping: [192.168.1.3] => changed=false
skip_reason: Conditional result was False
TASK [kafka : Get the main control machine ip] ************************************************************************************
skipping: [192.168.1.3] => changed=false
skip_reason: Conditional result was False
It doesn't end there. The puzzling part is that when I commented out the mysql group and its IP, I found that the tasks were executed.
# inventory
[kafka]
192.168.1.1
192.168.1.2
#[mysql]
#192.168.1.3
New run result (this is the result I want):
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [192.168.1.1]
ok: [192.168.1.2]
TASK [kafka : get kafka groups length] ************************************************************************************************************************
changed: [192.168.1.1 -> localhost]
TASK [kafka : Get the main control machine ip] ******************************************************************************************************************************
changed: [192.168.1.1 -> localhost]
Why is this so, and how can we avoid this unpredictable behavior?
Q: "Why is this so how can we avoid this uncertain problem?"
- hosts: all
  tasks:
    - include: test.yml
      when: "'kafka' in group_names"
TASK ...
skip_reason: Conditional result was False
A: Use include_tasks. The inheritance should work as expected.
- include_tasks: test.yml
  when: "'kafka' in group_names"
(tested with ansible 2.7.9)
Details
Quoting from Special Variables
group_names: List of groups the current host is part of
With the inventory
[kafka]
192.168.1.1
192.168.1.2
the variable of all hosts is group_names: [ "kafka" ] and the play works as expected.
With the inventory
[kafka]
192.168.1.1
192.168.1.2
[mysql]
192.168.1.3
the variable of one host is group_names: [ "mysql" ]. From what you've found, it seems that with a static include a run_once task will be skipped if the conditional fails for any host.
skipping: [192.168.1.3] => changed=false
skip_reason: Conditional result was False
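An alternative sketch (not from the answer above) that sidesteps the interaction entirely: target the kafka group in the play, so the conditional is no longer needed.

# site.yml
- name: test
  hosts: kafka
  roles:
    - kafka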
