Ansible: using group vars in tasks

In Ansible, I want to trigger selected tasks based on the host's group.
My inventory:
[appserver]
ip1
ip2
[webserver]
ip3
ip4
And my main.yml file will be like this
- name: Playbook
  hosts: all
  user: username
  become: yes
  gather_facts: True
  tasks:
    - name: Task 1
      replace:
        path: "somepath"
        regexp: "{{ oldvalue }}"
        replace: "{{ newvalue }}"
        backup: yes
    - name: Task 2
      replace:
        path: "somepath"
        regexp: "{{ oldvalue }}"
        replace: "{{ newvalue }}"
        backup: yes
How do I run Task 1 only on the appserver group and Task 2 only on the webserver group? Is there a good way to achieve this without using Ansible tags?

- hosts: appserver
  user: username
  become: yes
  gather_facts: True
  tasks:
    - name: Task 1
      replace:
        path: "somepath"
        regexp: "{{ oldvalue }}"
        replace: "{{ newvalue }}"
        backup: yes

- hosts: webserver
  user: username
  become: yes
  gather_facts: True
  tasks:
    - name: Task 2
      replace:
        path: "somepath"
        regexp: "{{ oldvalue }}"
        replace: "{{ newvalue }}"
        backup: yes
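If a single play is preferred, an alternative worth noting (a sketch, not part of the answer above) is to condition each task on the `group_names` magic variable, which holds the list of inventory groups the current host belongs to:

```yaml
- name: Playbook
  hosts: all
  user: username
  become: yes
  tasks:
    - name: Task 1
      replace:
        path: "somepath"
        regexp: "{{ oldvalue }}"
        replace: "{{ newvalue }}"
        backup: yes
      when: "'appserver' in group_names"

    - name: Task 2
      replace:
        path: "somepath"
        regexp: "{{ oldvalue }}"
        replace: "{{ newvalue }}"
        backup: yes
      when: "'webserver' in group_names"
```

The trade-off is that every host still iterates over every task, producing skipped results for the non-matching group.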
After posting, I saw @Zeitounator's comment.

Related

Deploy multiple VMs from an OVF file

I am trying to develop code for creating multiple VMs using the vmware_deploy_ovf module in Ansible. I've tried other solutions, but they didn't work out. Here you can see my playbook:
- hosts: localhost
  become: yes
  gather_facts: false
  vars_files:
    - vars.yml
  tasks:
    - name: deploy ovf
      vmware_deploy_ovf:
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        validate_certs: "{{ validate_certs }}"
        datacenter: "{{ datacenter }}"
        name: "{{ vm_name }}"
        ovf: "{{ ovf_path }}"
        cluster: "{{ cluster }}"
        wait_for_ip_address: true
        inject_ovf_env: false
        power_on: no
        datastore: "{{ datastore }}"
        networks: "{{ vcen_network }}"
        disk_provisioning: thin
In the variables file, I set "vm_name" as a list.
vars.yml
vm_name:
  - vm-01
  - vm-02
So I ran the playbook with extra variables like this:
ansible-playbook main.yml -e "vm_name=vm-01" -e "vm_name=vm-02"
It only creates vm-02, not both. I also tried "loop" and "with_items", but they didn't work out either.
Please assist, thank you.
Yeah. Don't do that. Have the hosts in your inventory, and let Ansible do its thing:
- hosts: all
  become: no
  gather_facts: false
  vars_files:
    - vars.yml
  tasks:
    - name: deploy ovf
      vmware_deploy_ovf:
        # hostname: "{{ hostname }}" # use environment variables
        # username: "{{ username }}" # use environment variables
        # password: "{{ password }}" # use environment variables
        validate_certs: "{{ validate_certs }}"
        datacenter: "{{ datacenter }}"
        name: "{{ inventory_hostname }}"
        ovf: "{{ ovf_path }}"
        cluster: "{{ cluster }}"
        wait_for_ip_address: true
        inject_ovf_env: false
        power_on: no
        datastore: "{{ datastore }}"
        networks: "{{ vcen_network }}"
        disk_provisioning: thin
      delegate_to: localhost
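For this approach the VM names must exist as inventory hosts. A hypothetical static inventory (the vm-01/vm-02 names are taken from the question's vars) could look like:

```yaml
# inventory.yml -- hypothetical; host names become the VM names
all:
  children:
    vms:
      hosts:
        vm-01:
        vm-02:
```

Then run the play with `ansible-playbook -i inventory.yml main.yml`, and `name: "{{ inventory_hostname }}"` picks up each VM name automatically.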

Ansible Tower how to pass inventory to my playbook variables

I am setting up a VMware job in Ansible Tower to snapshot a list of VMs; ideally, this list should be generated by AWX/Tower from the vSphere dynamic inventory. The inventory is named "lab_vm" in AWX and uses either the hostname or the UUID of the VM.
How do I pass this through in my playbook variables file?
---
vars:
  vmware:
    host: '{{ lookup("env", "VMWARE_HOST") }}'
    username: '{{ lookup("env", "VMWARE_USER") }}'
    password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
  vcenter_datacenter: "dc1"
  vcenter_validate_certs: false
  vm_name: "EVE-NG"
  vm_template: "Win2019-Template"
  vm_folder: "Network Labs"
my playbook
---
- name: vm snapshot
  hosts: localhost
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        # hostname: "{{ host }}"
        # username: "{{ user }}"
        # password: "{{ password }}"
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ vm_name }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
You're going about it backward. Ansible loops through the inventory for you. Use that feature, and delegate the task to localhost:
---
- name: vm snapshot
  hosts: all
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
      delegate_to: localhost
I've not used this particular module before, but don't you want snapshot_name to be unique for each guest?
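One way to make it unique, if the suggestion above is wanted, is to template the host name into the snapshot name, e.g.:

```yaml
snapshot_name: "Ansible Managed Snapshot - {{ inventory_hostname }}"
```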

How to change and use a global variable for each host in Ansible?

I am trying to write a playbook to setup mysql master-slave replication with multiple slave servers. For each slave server, I need access to a variable called next_id that should be incremented before use for each host. For example, for the first slave server, next_id should be 2 and for the second slave server it should be 3 and so on. What is the way to achieve this in Ansible?
This is the yaml file I use to run my role.
- name: Setup the environment
  hosts: master,slave_hosts
  serial: 1
  roles:
    - setup-master-slave
  vars:
    master_ip_address: "188.66.192.11"
    slave_ip_list:
      - "188.66.192.17"
      - "188.66.192.22"
This is the yaml file where I use the variable.
- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    server_id: "{{ next_id }}"
I can suggest an approach that will work, per your requirement, for any number of slave servers; it is not based on any particular module, just shell commands.
Here my master_ip_address is 10.x.x.x, and input is the starting value of next_id that you want incremented for every slave server:
- hosts: master,slave1,slave2,slave3,slave4
  serial: 1
  gather_facts: no
  tasks:
    - shell: echo "{{ input }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname == '10.x.x.x'

    - shell: cat yes.txt
      register: var
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "$(({{ var.stdout }}+1))"
      register: next_id
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "{{ next_id.stdout }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - name: Replace conf file with template
      template:
        src: masterslave.j2
        dest: 50-server.cnf
      when: inventory_hostname != '10.x.x.x'
      vars:
        ip_address: "{{ master_ip_address }}"
        server_id: "{{ next_id.stdout }}"
[ansibleserver@172 test_increment]$ cat masterslave.j2
- {{ ip_address }}
- {{ server_id }}
Now, if you run
ansible-playbook increment.yml -e 'master_ip_address=10.x.x.x input=1'
slave1 server
[root@slave1 ~]# cat 50-server.cnf
- 10.x.x.x
- 2
slave2 server
[root@slave2 ~]# cat 50-server.cnf
- 10.x.x.x
- 3
slave3 server
[root@slave3 ~]# cat 50-server.cnf
- 10.x.x.x
- 4
and so on
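As a side note, the same numbering can be computed without a temp file from each slave's position in its group; this is a sketch assuming the `slave_hosts` group from the question, not part of the answer above:

```yaml
- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: 50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    # index() is 0-based, so the first slave gets server_id 2
    server_id: "{{ groups['slave_hosts'].index(inventory_hostname) + 2 }}"
```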
"serial" is available in a playbooks only and not working in roles
therefore, for fully automatic generation of server_id for MySQL in Ansible roles, you can use the following:
roles/my_role/defaults/main.yml
---
cluster_alias: test_cluster
mysql_server_id_config: "settings/mysql/{{ cluster_alias }}/server_id.ini"
roles/my_role/tasks/server-id.yml
---
- name: Ensures '{{ mysql_server_id_config | dirname }}' dir exists
  file:
    path: '{{ mysql_server_id_config | dirname }}'
    state: directory
    owner: root
    group: root
    mode: 0755
  delegate_to: localhost

- name: Ensure mysql server id config file exists
  copy:
    content: "0"
    dest: "{{ mysql_server_id_config }}"
    force: no
    owner: root
    mode: '0755'
  delegate_to: localhost

- name: server-id
  include_tasks: server-id-tasks.yml
  when: inventory_hostname == item
  with_items: "{{ ansible_play_hosts }}"
  tags:
    - server-id
roles/my_role/tasks/server-id-tasks.yml
# tasks file
---
- name: get last server id
  shell: >
    cat "{{ mysql_server_id_config }}"
  register: _last_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: get new server id
  shell: >
    echo "$(({{ _last_mysql_server_id.stdout }}+1))"
  register: _new_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: save new server id
  shell: >
    echo -n "{{ _new_mysql_server_id.stdout }}" > "{{ mysql_server_id_config }}"
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- debug:
    var: _new_mysql_server_id.stdout
  tags:
    - server-id

- name: Replace conf file with template
  template:
    src: server-id.j2
    dest: server-id.cnf

In Ansible loop, test existence of files from registered results

I have several files in different directories that I need to back up. I have tried the code below, and it is not working for me.
vars:
  file_vars:
    - { name: /file1 }
    - { name: /etc/file2 }
    - { name: /etc/file/file3 }
tasks:
  - name: "Checking if config files exist"
    stat:
      path: "{{ item.name }}"
    with_items: "{{ file_vars }}"
    register: stat_result

  - name: Backup Files
    copy:
      src: "{{ item.name }}"
      dest: "{{ item.name }}{{ ansible_date_time.date }}.bak"
      remote_src: yes
    with_items: "{{ file_vars }}"
    when: stat_result.stat.exists == True
The problem is the condition
when: stat_result.stat.exists == True
There is no attribute stat_result.stat. Instead, the attribute stat_result.results is a list of the results from the loop. It's possible to create a dictionary of files and their statuses. For example
- set_fact:
    files_stats: "{{ dict(my_files|zip(my_stats)) }}"
  vars:
    my_files: "{{ stat_result.results|json_query('[].item.name') }}"
    my_stats: "{{ stat_result.results|json_query('[].stat.exists') }}"
Then simply use this dictionary in the condition
when: files_stats[item.name]
Below is a shorter version which creates the dictionary more efficiently
- set_fact:
    files_stats: "{{ dict(stat_result.results|
                    json_query('[].[item.name, stat.exists]')) }}"
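To make the result concrete: with the `file_vars` list from the question, and assuming only /etc/file2 exists on the host, `files_stats` would come out as a mapping along these lines:

```yaml
files_stats:
  /file1: false
  /etc/file2: true
  /etc/file/file3: false
```

which is why `when: files_stats[item.name]` works directly as a per-file condition.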
Please try the below, which worked for me:
---
- name: Copy files
  hosts: localhost
  become: yes
  become_user: root
  vars_files:
    - files.yml
  tasks:
    - name: "Checking if config files exist"
      stat:
        path: "{{ item }}"
      with_items: "{{ files }}"
      register: stat_result

    - name: Ansible
      debug:
        msg: "{{ stat_result }}"

    - name: Backup Files
      copy:
        src: "{{ item.item }}"
        dest: "{{ item.item }}.bak"
      with_items: "{{ stat_result.results }}"
      when: item.stat.exists
and files.yml will look like:
---
files:
- /tmp/file1
- /tmp/file2
You can check your playbook syntax using the command below:
ansible-playbook copy.yml --syntax-check
Also, you can dry-run your playbook before actual execution:
ansible-playbook -i localhost copy.yml --check

Ansible: How to use multiple vCenter?

I use Ansible to take snapshots of VMs in VMware vCenter.
With only one vCenter it works fine, but what if there are many vCenters?
How do I describe multiple VMware vCenters in the current role?
My current example for one vcenter:
# cat snapshot.yml
- name: ROLE_update
  hosts: all
  gather_facts: False
  become: yes
  become_method: su
  become_user: root
  vars_files:
    - /ansible/vars/privileges.txt
  roles:
    - role: snapshot
# cat tasks/main.yml
- block:
    - name: "Create vmware snapshot of {{ inventory_hostname }}"
      vmware_guest_snapshot:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        folder: find_folder.yml
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: snap check
        validate_certs: False
      delegate_to: localhost
      register: vm_facts
# cat find_folder.yml
- name: Find folder path of an existing virtual machine
  hosts: all
  gather_facts: False
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
  tasks:
    - name: "Find folder for VM - {{ vm_name }}"
      vmware_guest_find:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter_name }}"
        validate_certs: False
        name: "{{ vm_name }}"
      delegate_to: localhost
      register: vm_facts
# cat vars/main.yml (variables declared)
vcenter_hostname: <secret>
vcenter_username: <secret>
vcenter_password: <secret>
Thanks!
Is having the variables for the different vCenters in different ".yml" files an option? If so, you could easily pass those variables at runtime like this:
ansible-playbook your_playbook.yml --extra-vars "@your_vcenter_1.yml"
There are other ways I could think of, like using tags in your playbook for each task and invoking your playbook with a specific tag. On each tagged task, on the import, you could define the new variables like this:
tasks:
  - import_tasks: your_role.yml
    vars:
      vcenter_hostname: your_vcenter_1
    tags:
      - vcenter1
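A third option, sketched here as an assumption rather than a tested change to the role, is to keep a list of vCenter endpoints in vars and loop the snapshot task over it:

```yaml
# vars: hypothetical list of vCenter endpoints
vcenters:
  - hostname: vcenter1.example.com
    username: "{{ vcenter1_username }}"
    password: "{{ vcenter1_password }}"
  - hostname: vcenter2.example.com
    username: "{{ vcenter2_username }}"
    password: "{{ vcenter2_password }}"

# tasks/main.yml: take the snapshot against each vCenter in turn
- name: "Create vmware snapshot of {{ inventory_hostname }}"
  vmware_guest_snapshot:
    hostname: "{{ item.hostname }}"
    username: "{{ item.username }}"
    password: "{{ item.password }}"
    datacenter: "{{ datacenter_name }}"
    name: "{{ inventory_hostname }}"
    state: present
    snapshot_name: snap check
    validate_certs: False
  loop: "{{ vcenters }}"
  delegate_to: localhost
```

Each VM only exists in one vCenter, so in practice you would also need a condition or per-group vars to match hosts to the right endpoint.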
