About enabling/disabling variables by conditional values in an Ansible YAML manifest

I want a variable in Ansible to be active or inactive depending on a condition.
I am using the role "davidwittman.redis" to set up a Redis cluster. I need to run a single YAML file that is deployed to 3 servers at the same time; I can't distribute separate YAML files.
I have the simple configuration below. Here, I want the "redis_slaveof" value to be activated on all servers except the master server, depending on the condition.
In other words, the "redis_slaveof" value I selected should not be active on the master server, but it should be active on the other 2 servers.
How can I do this, do you have any suggestions?
My main Ansible configuration:
- name: Redis Replication Slave Installations
  hosts: localhost

  pre_tasks:
    - name: check redis server1
      shell: "cat /etc/hosts |grep redis |awk '{print $2}' |sed -n 1p"
      register: server01

    - name: check redis server2
      shell: "cat /etc/hosts |grep redis |awk '{print $2}' |sed -n 2p"
      register: server02

    - name: check redis server3
      shell: "cat /etc/hosts |grep redis |awk '{print $2}' |sed -n 3p"
      register: server03

  roles:
    - role: davidwittman.redis

  vars:
    - redis_version: 6.2.6
    - redis_bind: 0.0.0.0
    - redis_port: 6379
    - redis_service_name: redis-service
    - redis_protected_mode: "no"
    - redis_config_file_name: "redis.conf"

    # - redis_slaveof: "{{ server01.stdout }} 6379"
I tried the following:
I added an extra task with "import_role" and put a when condition on it: if my server01 value and the server's local hostname are not equal, enable this role. I thought this would activate the value on all servers except the master. But it didn't work; it was applied on all servers, and I saw the following error in the logs:
"[WARNING]: flush_handlers task does not support when conditional"
tasks:
  - name: test
    import_role:
      name: davidwittman.redis
    vars:
      redis_slaveof: "{{ server01.stdout }} 6379"
    when: '"{{ (server01.stdout) }}" != "{{ ansible_hostname }}"'
I also added the when condition to the main role entry as above, splitting it into two different roles in a single YAML file. This method failed as well.
- role: davidwittman.redis
  when: '"{{ (server01.stdout) }}" != "{{ ansible_hostname }}"'
I also defined an "if/else" condition in the variable itself, as follows. Although it seemed to work, it still gave the "does not support when conditional" error and was added on all servers as well.
redis_slaveof: "{%if '{{ (server01.stdout) }} != {{ ansible_hostname }}' %} {{ '{{ (server01.stdout) }} 6379' }} {% else %} {{ '' }} {% endif %}"

So you want to run a role differently, based on variables.
In your playbook do not use
roles:
  - role: davidwittman.redis
but rather do
tasks:
  - name: Pass variables to role
    include_role:
      name: myrole
    vars:
      rolevar1: value from task
This is executed after roles but now you can add when expressions:
tasks:
  - name: Pass variables to role
    include_role:
      name: myrole
    vars:
      rolevar1: value from task
    when: ansible_hostname == "server1"

  - name: Pass variables to role
    include_role:
      name: myrole
    vars:
      rolevar1: value from task
      additionalrolevar: whatever
    when: ansible_hostname != "server1"
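Applied to the Redis case in the question, a minimal sketch could look like the following. It assumes server01.stdout really holds the master's hostname (as registered in the pre_tasks above) and that facts are gathered so ansible_hostname is available; adjust the names to your setup.

tasks:
  - name: Configure Redis on the master (no slaveof)
    include_role:
      name: davidwittman.redis
    when: server01.stdout == ansible_hostname

  - name: Configure Redis replicas pointing at the master
    include_role:
      name: davidwittman.redis
    vars:
      # only the non-master hosts get redis_slaveof
      redis_slaveof: "{{ server01.stdout }} 6379"
    when: server01.stdout != ansible_hostname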

Related

How to change and use a global variable for each host in Ansible?

I am trying to write a playbook to setup mysql master-slave replication with multiple slave servers. For each slave server, I need access to a variable called next_id that should be incremented before use for each host. For example, for the first slave server, next_id should be 2 and for the second slave server it should be 3 and so on. What is the way to achieve this in Ansible?
This is the yaml file I use to run my role.
- name: Setup the environment
  hosts: master, slave_hosts
  serial: 1
  roles:
    - setup-master-slave
  vars:
    master_ip_address: "188.66.192.11"
    slave_ip_list:
      - "188.66.192.17"
      - "188.66.192.22"
This is the yaml file where I use the variable.
- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    - ip_address: "{{ master_ip_address }}"
    - server_id: "{{ next_id }}"
I can suggest a way that will work for your requirement with any number of slave servers; it is not based on any particular module, just my own approach.
Here my master_ip_address is 10.x.x.x, and input is the starting value of next_id that you want incremented for every slave server:
- hosts: master,slave1,slave2,slave3,slave4
  serial: 1
  gather_facts: no
  tasks:
    - shell: echo "{{ input }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname == '10.x.x.x'

    - shell: cat yes.txt
      register: var
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "$(({{ var.stdout }}+1))"
      register: next_id
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "{{ next_id.stdout }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - name: Replace conf file with template
      template:
        src: masterslave.j2
        dest: 50-server.cnf
      when: inventory_hostname != '10.x.x.x'
      vars:
        - ip_address: "{{ master_ip_address }}"
        - server_id: "{{ next_id.stdout }}"
[ansibleserver@172 test_increment]$ cat masterslave.j2
- {{ ip_address }}
- {{ server_id }}
Now, if you run
ansible-playbook increment.yml -e 'master_ip_address=10.x.x.x input=1'
slave1 server
[root@slave1 ~]# cat 50-server.cnf
- 10.x.x.x
- 2
slave2 server
[root@slave2 ~]# cat 50-server.cnf
- 10.x.x.x
- 3
slave3 server
[root@slave3 ~]# cat 50-server.cnf
- 10.x.x.x
- 4
and so on
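A simpler alternative, not from the original answer, is to derive server_id from the host's position in the inventory group instead of maintaining a counter file on the controller. This is a sketch assuming the slaves are in a group named slave_hosts, as in the question's playbook:

- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    # master is id 1, so the first slave gets 2, the second 3, and so on
    server_id: "{{ groups['slave_hosts'].index(inventory_hostname) + 2 }}"

Because the id comes from the group order, it stays stable across runs without any shared state on the Ansible host.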
"serial" is available in a playbooks only and not working in roles
therefore, for fully automatic generation of server_id for MySQL in Ansible roles, you can use the following:
roles/my_role/defaults/main.yml
---
cluster_alias: test_cluster
mysql_server_id_config: "settings/mysql/{{ cluster_alias }}/server_id.ini"
roles/my_role/tasks/server-id.yml
---
- name: Ensures '{{ mysql_server_id_config | dirname }}' dir exists
  file:
    path: '{{ mysql_server_id_config | dirname }}'
    state: directory
    owner: root
    group: root
    mode: 0755
  delegate_to: localhost

- name: Ensure mysql server id config file exists
  copy:
    content: "0"
    dest: "{{ mysql_server_id_config }}"
    force: no
    owner: root
    mode: '0755'
  delegate_to: localhost

- name: server-id
  include_tasks: server-id-tasks.yml
  when: inventory_hostname == item
  with_items: "{{ ansible_play_hosts }}"
  tags:
    - server-id
roles/my_role/tasks/server-id-tasks.yml
# tasks file
---
- name: get last server id
  shell: >
    cat "{{ mysql_server_id_config }}"
  register: _last_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: get new server id
  shell: >
    echo "$(({{ _last_mysql_server_id.stdout }}+1))"
  register: _new_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: save new server id
  shell: >
    echo -n "{{ _new_mysql_server_id.stdout }}" > "{{ mysql_server_id_config }}"
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- debug:
    var: _new_mysql_server_id.stdout
  tags:
    - server-id

- name: Replace conf file with template
  template:
    src: server-id.j2
    dest: server-id.cnf
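The answer does not show server-id.j2; a minimal hypothetical template that would pair with the tasks above (the path and the [mysqld] section are assumptions, not part of the original) could be:

# roles/my_role/templates/server-id.j2 (hypothetical example)
[mysqld]
server_id = {{ _new_mysql_server_id.stdout }}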

saving variables from playbook run to ansible host local file

I'm sort of trying to build an inventory file from an Ansible playbook run.
I'm trying to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if successful, virsh list --all, and to store the values in a file on the Ansible host.
I've tried a few different playbook structures, but none have been successful in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not(libvirtd_status.rc == 0)
      ignore_errors: true

    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true

    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
I was able to cobble together something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]

- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost

    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml

    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out false positives (machines where libvirtd isn't running, or that have an empty list of VMs), because the above doesn't really handle those.
But at least there is output from all the KVM hosts!
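One way to do that filtering, sketched under the assumption that the virt module's list_vms command returns the guest names under a list_vms key in the registered result (worth verifying against your Ansible version), is to gate the lineinfile task on a non-empty list:

- name: saving cumulative result
  lineinfile:
    line: "{{ ansible_hostname }} has {{ all_vms.list_vms | join(', ') }}"
    dest: /tmp/libvirtd_hosts
    insertafter: EOF
  delegate_to: localhost
  when:
    - libvirtd_status.rc == 0
    - (all_vms.list_vms | default([])) | length > 0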

How to set a fact which is visible on all hosts in an Ansible role

I'm setting a fact in a role:
- name: Check if manager already configured
  shell: >
    docker info | perl -ne 'print "$1" if /Swarm: (\w+)/'
  register: swarm_status

- name: Init cluster
  shell: >-
    docker swarm init
    --advertise-addr "{{ ansible_default_ipv4.address }}"
  when: "'active' not in swarm_status.stdout_lines"

- name: Get worker token
  shell: docker swarm join-token -q worker
  register: worker_token_result

- set_fact:
    worker_token: "{{ worker_token_result.stdout }}"
Then I want to access worker_token on other hosts. Here's my main playbook; the fact is defined in the swarm-master role:
- hosts: swarm_cluster
  become: yes
  roles:
    - docker

- hosts: swarm_cluster:&manager
  become: yes
  roles:
    - swarm-master

- hosts: swarm_cluster:&node
  become: yes
  tasks:
    - debug:
        msg: "{{ worker_token }}"
I'm getting undefined variable. How to make it visible globally?
Of course it works perfectly if I run debug on the same host.
If your goal is just to access worker_token from another host, you can use the hostvars variable and iterate through the group where you've defined your variable, like this:
- hosts: swarm_cluster:&node
  tasks:
    - debug:
        msg: "{{ hostvars[item]['worker_token'] }}"
      with_items: "{{ groups['manager'] }}"
If your goal is to define the variable globally, you can add a step to define a variable on all hosts like this:
- hosts: all
  tasks:
    - set_fact:
        worker_token_global: "{{ hostvars[item]['worker_token'] }}"
      with_items: "{{ groups['manager'] }}"

- hosts: swarm_cluster:&node
  tasks:
    - debug:
        var: worker_token_global
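If there is only one manager, a shorter variant of the first approach is to index the group directly instead of looping; this is a sketch assuming the group is still named manager:

- hosts: swarm_cluster:&node
  tasks:
    - debug:
        msg: "{{ hostvars[groups['manager'][0]]['worker_token'] }}"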

Return Variable from Included Ansible Playbook

I have seen how to register variables within tasks in an ansible playbook and then use those variables elsewhere in the same playbook, but can you register a variable in an included playbook and then access those variables back in the original playbook?
Here is what I am trying to accomplish:
This is my main playbook:
- include: sub-playbook.yml job_url="http://some-jenkins-job"

- hosts: localhost
  roles:
    - some_role
sub-playbook.yml:
---
- hosts: localhost
  tasks:
    - name: Collect info from Jenkins Job
      script: whatever.py --url "{{ job_url }}"
      register: jenkins_artifacts
I'd like to be able to access the jenkins_artifacts results back in main_playbook if possible. I know you can access it from other hosts in the same playbook like this: "{{ hostvars['localhost']['jenkins_artifacts'].stdout_lines }}"
Is it the same idea for sharing across playbooks?
I'm confused what this question is about. Just use the variable name jenkins_artifacts:
- include: sub-playbook.yml job_url="http://some-jenkins-job"

- hosts: localhost
  tasks:
    - debug:
        var: jenkins_artifacts
This might seem complicated but I love doing this in my Playbooks:
rc defines the name of the variable which contains the return value
ar gives the arguments to the include tasks
master.yml:
- name: verify_os
  include_tasks: "verify_os/main.yml"
  vars:
    verify_os:
      rc: "isos_present"
      ar:
        image: "{{ os.ar.to_os }}"
verify_os/main.yml:
---
- name: check image on device
  ios_command:
    commands:
      - "sh bootflash: | inc {{ verify_os.ar.image }}"
  register: image_check

- name: check if available
  shell: "printf '{{ image_check.stdout_lines[0][0] }}\n' | grep {{ verify_os.ar.image }} | wc -l"
  register: image_available
  delegate_to: localhost

- set_fact: { "{{ verify_os.rc }}": "{{ true if image_available.stdout == '1' else false }}" }
...
I can now use the isos_present variable anywhere in the master.yml to access the returned value.
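For example, a later task in master.yml could branch on the returned value; this is a sketch reusing the isos_present name and the os.ar.to_os variable from above, with a | bool filter added in case the fact ends up stored as a string:

- name: copy image to device only when it is not already present
  debug:
    msg: "image {{ os.ar.to_os }} is missing, starting transfer"
  when: not (isos_present | bool)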

Ansible recursive checks in playbooks

We need to go through this structure
Zone spec
https://gist.github.com/git001/9230f041aaa34d22ec82eb17d444550c
I was able to run the following snippet, but now I'm stuck at the error checking.
playbook
---
- hosts: all
  gather_facts: no
  vars_files:
    - "../doc/application-zone-spec.yml"
  roles:
    - { role: ingress_add, customers: "{{ application_zone_spec }}" }
role
- name: check if router exists
  shell: "oc get dc -n default {{ customers.zone_name }}-{{ item.type }}"
  with_items: "{{ customers.ingress }}"
  ignore_errors: True
  register: check_router

- name: Print ingress hostnames
  debug: var=check_router

- name: create new router
  shell: "echo 'I will create a router'"
  with_items: "{{ customers.ingress }}"
  when: check_router.rc == 1
Output of a ansible run
https://gist.github.com/git001/dab97d7d12a53edfcf2a69647ad543b7
The problem is that I need to go through the ingress items and map the errors for the different types from the "check_router" register.
It would be nice to have something like the following pseudo-code:
iterate through "customers.ingress",
check in "check_router" whether the rc is != 0,
and if so, execute the command.
We use:
ansible-playbook --version
ansible-playbook 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
You can replace the second loop with:
- name: create new router
  shell: "echo 'I will create a router with type {{ item.item }}'"
  with_items: "{{ check_router.results }}"
  when: item.rc == 1
This will iterate over every step of the check_router loop, and you can access the original items via item.item.
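Putting both steps together, here is a sketch of the full pattern; the rc != 0 test follows the pseudo-code above rather than the literal == 1, and item.item.type is used to echo which ingress type is being created:

- name: check if router exists
  shell: "oc get dc -n default {{ customers.zone_name }}-{{ item.type }}"
  with_items: "{{ customers.ingress }}"
  ignore_errors: True
  register: check_router

- name: create new router
  shell: "echo 'I will create a router with type {{ item.item.type }}'"
  with_items: "{{ check_router.results }}"
  when: item.rc != 0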
