Ansible: Task control

I am working with Ansible and familiarizing myself with task control in a playbook. I am struggling with the fail module and the failed_when statement. Here is a lab I worked on, which seems to work, but I would like to see how this could be handled using the fail module or failed_when, if it is needed.
Here is the task I struggled with:
Install packages only if the current operating system is CentOS or RHEL version 8 or later. If that is not the case, the playbook should fail with the error message "Host hostname does not meet the minimal requirements", where hostname is replaced with the current host name.
Here are my issues:
Using ansible_facts in the fail module does not seem to work out well
I do not understand how I would use failed_when on this task
Here is my solution:
---
- name: Install packages
  hosts: all
  vars_files:
    - vars/pack.yml
  tasks:
    - name: Install packages
      block:
        - name: Install packages and loop
          yum:
            name: "{{ item.package }}"
            state: "{{ item.state }}"
          loop: "{{ packages }}"
          when:
            ( ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_version'] == "8" )
            or
            ( ansible_facts['distribution'] == "RedHat" and ansible_facts['distribution_version'] >= "8.1" )
        - name: Copy file to /tmp
          copy:
            content: "Welcome to my webserver"
            dest: /tmp/index.html
          notify: restart web
        - name: Check for firewalld
          yum:
            name: firewalld
            state: latest
        - name: verify firewalld is started
          service:
            name: firewalld
            state: started
        - name: open firewall ports for http and https
          firewalld:
            service: "{{ item.service }}"
            state: "{{ item.state }}"
            immediate: yes
            permanent: yes
          loop: "{{ firewall }}"
      rescue:
        - name: fail if any task fails
          fail:
            msg: did not meet the requirements
  handlers:
    - name: restart web
      service:
        name: httpd
        state: restarted
I am using the RHCE exam book by Sander van Vugt, btw. This is lab 7-1. His GitHub is a bit lacking on the labs.
Here is the better-optimized playbook:
---
- name: End of chapter lab 7 final
  hosts: all
  become: true
  vars_files:
    - vars/pack.yml
  tasks:
    - name: Install httpd and mod_ssl packages
      yum:
        name: "{{ item.package }}"
        state: "{{ item.state }}"
      loop: "{{ packages }}"
      when:
        ( ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_version'] >= "8" )
        or
        ( ansible_facts['distribution'] == "RedHat" and ansible_facts['distribution_version'] >= "8" )
    - name: Fail if the following is not met
      fail:
        msg: "Host {{ ansible_facts['hostname'] }} does not meet the minimal requirements"
      when:
        not (( ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_version'] >= "8" )
        or
        ( ansible_facts['distribution'] == "RedHat" and ansible_facts['distribution_version'] >= "8" ))
    - name: Copy tmp file
      copy:
        content: "Welcome to my webserver"
        dest: /tmp/index.html
    - name: Configure Firewalld for http and https rules
      firewalld:
        service: "{{ item.service }}"
        state: "{{ item.state }}"
        immediate: yes
        permanent: yes
      loop: "{{ firewall }}"

Use the fail module to stop the script with a custom error message. In your case you could use it like this:
- name: fail if wrong OS
  fail:
    msg: "Wrong OS: {{ ansible_facts['distribution'] }}"
  when: not (( ansible_facts['distribution'] == "CentOS" and ansible_facts['distribution_version'] == "8" )
        or
        ( ansible_facts['distribution'] == "RedHat" and ansible_facts['distribution_version'] >= "8.1" ))
The failed_when keyword, by contrast, defines when a task result is to be considered a failure. It does not let you set a custom error message. Use it like this:
- name: check if file exists and fail if it does
  stat:
    path: /etc/hosts
  register: result
  failed_when: result.stat.exists
In your script above you used when. That keyword executes the task if the when expression is true and skips it if the expression is false, but it does not stop play execution.
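If you want the condition check and the custom message in one task, the assert module is another option. A minimal sketch, assuming facts have been gathered (fail_msg and success_msg are assert parameters; the message simply mirrors the lab requirement):
- name: Assert minimal OS requirements
  assert:
    that:
      - ansible_facts['distribution'] in ['CentOS', 'RedHat']
      - ansible_facts['distribution_major_version'] | int >= 8
    fail_msg: "Host {{ ansible_facts['hostname'] }} does not meet the minimal requirements"
    success_msg: "OS check passed"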

Related

How to stop VM and Validate that VM is stopped using Ansible?

For my project, I need to create an Ansible script that does the following:
Shut down the VMs
Check that the VMs are not reachable
Create messages that say which VMs are still up and which are down.
I wrote the following script for this:
---
# tasks file for start-stop-vm
- name: Stop all VM
  community.general.shutdown:
  # delay: 60
  when: ('webservers' in group_names)
- name: check what vm is still up
  ansible.builtin.ping:
  register: result
# - name: Output result
#   debug:
#     var: result
- name: Return the vm that still running
  debug:
    msg: "Following VM are still up {{ inventory_hostname }}"
  when: (result.failed == false)
- name: Return the vm that still running
  debug:
    msg: "Following VM are down {{ inventory_hostname }}"
  when: (result.failed != false)
My question is, is there any better way to do this?
Is this a VMware-based VM? If that's the case, you can use the VMware modules:
- name: "Shuting down using guest tools"
vmware_guest_powerstate:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
uuid: "{{ vm_facts.instance.hw_product_uuid }}"
state: shutdown-guest
state_change_timeout: 240
force: yes
delegate_to: "localhost"
register: newState
ignore_errors: yes
when: vm_facts.instance.guest_tools_status == "guestToolsRunning"
- name: "Power off from vCenter"
vmware_guest_powerstate:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: False
uuid: "{{ vm_facts.instance.hw_product_uuid }}"
state: powered-off
force: yes
delegate_to: "localhost"
ignore_errors: yes
register: newState
The first task tries to power off the VM using the guest tools.
The second task tries to power off the VM using the power-off option from VMware.
After these two steps, you can get the current status of the VM to confirm that the actual state is powered off.
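A hedged sketch of that final status check, assuming a datacenter_name variable alongside the vCenter credentials above (vmware_guest_info needs a datacenter, and the hw_power_status key should be verified against your community.vmware version):
- name: "Get current power state from vCenter"
  community.vmware.vmware_guest_info:
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    datacenter: "{{ datacenter_name }}"   # assumed variable, not from the original answer
    uuid: "{{ vm_facts.instance.hw_product_uuid }}"
  delegate_to: "localhost"
  register: vm_state
- name: "Report VMs that are still powered on"
  debug:
    msg: "{{ inventory_hostname }} is still powered on"
  when: vm_state.instance.hw_power_status != "poweredOff"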

saving variables from playbook run to ansible host local file

I'm sort of trying to build an inventory file from an ansible playbook run.
I'm trying to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if that succeeds, virsh list --all, and storing the values in a file on the Ansible control host.
I've tried a few different playbook structures, but none have been successful in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not(libvirtd_status.rc == 0)
      ignore_errors: true
    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true
    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
Was able to cobble something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]
- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost
    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml
    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out false positives (machines that have no libvirtd service, or an empty list of VMs), because the above doesn't really do that.
But at least there is output from all the KVM hosts!
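One hedged way to filter those false positives, assuming the registered all_vms from the virt task above (its list_vms return key holds the guest names; adjust if your registered structure differs):
- name: saving cumulative result (only hosts that actually run guests)
  lineinfile:
    line: "{{ ansible_hostname }} has {{ all_vms.list_vms | join(', ') }}"
    dest: /tmp/libvirtd_hosts
    insertafter: EOF
  delegate_to: localhost
  when:
    - libvirtd_status.rc == 0
    - all_vms.list_vms is defined
    - all_vms.list_vms | length > 0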

Integrate a dictionary to all tasks and loop through it in Ansible

Hey, still kind of new to Ansible.
I made a test playbook that performs a rolling update for a MariaDB Galera cluster that uses HAProxy as a load balancer.
I have no idea how to use the dictionary (bottom of the code) across all the tasks in the play. It also has to loop in order: first server 1, then server 3, then server 2, and then server 4. The idea is that if a host or IP changes, you only have to change it in the dictionary.
For example, task 1 needs to use the key/value of host1, the same goes for task 2, and when that host is done, loop on to the next one.
I tried to use vars, but that only worked per task. I was thinking of using the vars folder, but I'm not using the roles architecture.
- hosts: DBserver
  become: yes
  tasks:
    - name: disable the haproxy server
      replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server "{{}}" "{{}}" check weight 0'
      with_items:
        - 'server "{{}}" "{{}}" check weight 1'
- hosts: "{{}}"
  become: yes
  tasks:
    - name: stop the mariadb
      service:
        name: mariadb
        state: stopped
- hosts: DBserver
  become: yes
  tasks:
    - replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server "{{}}" "{{}}" check weight 1'
      with_items:
        - 'server "{{}}" "{{}}" check weight 0'
dictionary:
  - { 'name': 'host1', 'key': 'ipxxx' }
  - { 'name': 'host2', 'key': 'ipxxxx' }
  - { 'name': 'host3', 'key': 'ipxxx' }
  - { 'name': 'host4', 'key': 'ipxxx' }
I solved my problem with this play.
---
- hosts: clients
  become: yes
  serial: 1
  tasks:
    - name: disable the haproxy server
      replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server {{ name }} {{ ip }} check weight 0'
      with_items:
        - 'server {{ name }} {{ ip }} check weight 1'
      delegate_to: deploy
    - name: stop the mariadb
      service:
        name: mariadb
        state: stopped
    - name: update mariadb
      apt:
        name: "{{ packages }}"
        state: latest
        update_cache: yes
      vars:
        packages:
          - mariadb-server
          - mariadb-client
          - mariadb-common
    - name: start the mariadb
      service:
        name: mariadb
        state: started
    - name: status mariadb
      shell: systemctl status mariadb
      register: server_status
    - name: debug
      debug:
        msg: "{{ server_status['stdout_lines'] }}"
    - name: enable proxy
      replace:
        path: /etc/haproxy/haproxy.cfg
        regexp: "{{ item }}"
        replace: 'server {{ name }} {{ ip }} check weight 1'
      with_items:
        - 'server {{ name }} {{ ip }} check weight 0'
      delegate_to: deploy
I used the host_vars and that did the trick in combination with delegate_to.
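For illustration, a hedged sketch of what such host_vars could look like: one file per inventory host, defining the name and ip variables the tasks reference (file names and addresses are made up, not from the original post):
# host_vars/host1.yml (illustrative)
name: host1
ip: 192.0.2.11

# host_vars/host2.yml (illustrative)
name: host2
ip: 192.0.2.12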

Ansible rollback: run a group of tasks over list of hosts even when one of hosts failed

I have a playbook with multiple roles, hosts, and groups. I am trying to develop a rollback functionality that would run over all hosts. My current obstacle is that I see no way to delegate a role, block, or set of tasks to a group of hosts.
I tried looking up delegation to a group without loops, so it would work on a block.
import_role doesn't accept loops
include_role doesn't accept delegate_to
same with import_tasks/include_tasks
Here is what I have now as a playbook file (shortened version):
- hosts: all
  any_errors_fatal: true
  vars_prompt:
    - name: "remote_user_p"
      prompt: "Remote user running the playbook"
      default: "root"
      private: no
    - name: "service_user_p"
      prompt: "Specify user to run non-root tasks"
      default: "user"
      private: no
  tasks:
    - set_fact:
        playbook_type: "upgrade"
    - import_role:
        name: 0_pre_check
      run_once: true
      remote_user: "{{ remote_user_p }}"
      become_user: "{{ service_user_p }}"
      become_method: su
      become: yes
    - block:
        - import_role:
            name: 1_os
        - import_role:
            name: 2_mysql
          when: inventory_hostname in groups['mysql'] | default("")
        - import_role:
            name: 3_web
          when: inventory_hostname in groups['web'] | default("")
        ...
      rescue:
        - block:
            - name: run rollback
              import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
              remote_user: "{{ remote_user }}"
              become_user: "{{ service_user }}"
              become_method: su
              become: yes
This is some example code from rollback.yml:
- block:
    - name: rollback symlinks to config dir
      file:
        src: "{{ current_config_path }}"
        dest: "{{ install_dir }}/static/cfg"
        owner: "{{ service_user }}"
        group: "{{ service_user_primary_group }}"
        state: link
      when: current_new_configs | default("N") == "Y"
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"
- block:
    - name: return config files
      shell: test -f '{{ item.1.current_ver_file_path }}' && cp -p {{ item.1.current_ver_file_path }} {{ item.1.old_config_location }}
      args:
        warn: false
      register: return_config_files
      failed_when: return_config_files.rc >= 2
      when:
        - roolback_moved_cfg | default('N') == "Y"
        - inventory_hostname in groups[item.0.group]
        - item.1.old_config_location != ""
        - item.1.current_ver_file_path != ""
      with_subelements:
        - "{{ config_files }}"
        - files
      become_user: root
      become_method: sudo
      become: yes
    - name: systemctl daemon-reload
      shell: systemctl daemon-reload
      failed_when: false
      when: root_rights == "Y"
      args:
        warn: false
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"
  when: root_rights == "Y"
  become_user: root
  become_method: sudo
  become: yes
- fail:
    msg: "Upgrade failed. Symbolic links were set to the previous version. Fix the issues and try again. If you wish to cancel the upgrade, restore the database backup manually."
As you can see, right now I use a lame workaround by introducing
delegate_to: "{{ item }}"
with_items:
  - "{{ ansible_play_hosts }}"
after every task.
There are two problems here:
1. I can't use the same approach on the return config files task, because it already uses a loop
2. This is generally lame code duplication, and I hate it
Why I need it at all: if playbook execution fails somewhere in the mysql role, for example, the rescue block is executed only over the hosts of that mysql group (and, by the way, execution of tasks from the next role continues while the rescue block runs, same amount of tasks despite all my efforts), while I would like it to run over all hosts instead.
I finally was able to solve this with an ugly, ugly hack: I used plays instead of just roles, so now there are more than 10 plays. Don't judge me, I spent lots of effort trying to make it nice ):
Here is an example play followed by a check, the same pattern as for every other play.
- hosts: mysql
  any_errors_fatal: true
  tasks:
    - block:
        - import_role:
            name: 2_mysql
          when: not rollback | default(false)
      rescue:
        - block:
            - name: set fact for rollback
              set_fact:
                rollback: "yes"
              delegate_to: "{{ item }}"
              delegate_facts: true
              with_items: "{{ groups['all'] }}"
- hosts: all
  any_errors_fatal: true
  tasks:
    - name: run rollback
      import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
      when: rollback | default(false)
include_role doesn't accept delegate_to
Actually, it does.
With Ansible 2.8:
- name: "call my/role with host '{{ansible_hostname}}' for hosts in '{{ansible_play_hosts}}'"
include_role:
name: my/role
apply:
delegate_to: "{{current_host}}"
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With Ansible 2.5 to 2.7, see "2.5: delegate_to, include_role with loops" from George Shuklin, mentioned in ansible/ansible issue 35398:
- name: "call my/role with host '{{ansible_hostname}}' for items in '{{ansible_play_hosts}}'"
include_tasks: loop.yml
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With loop.yml being another task file of its own:
- name: "Import my/role for '{{current_host}}'"
import_role: name=my/role
delegate_to: "{{current_host}}"
So with two files (Ansible 2.5 to 2.7) or one file (2.8), you can make a whole role and its tasks run on a delegated server.

Ansible Idempotence for elasticsearch.yml

I'm a new Ansible user (v2.3) who pulled a playbook from GitHub, and I'm not understanding idempotence. Once Elasticsearch is installed and started with the playbook, I want to be able to just modify or add an es_config parameter and rerun the playbook, expecting an updated elasticsearch.yml config file and a restarted Elasticsearch. However, that include_role for elasticsearch is skipped on rerun, and I'm not sure what to modify to change that.
ansible-playbook --ask-become-pass -vv elk.yml
elk.yml
---
#
# Playbook to install the ELK stack
#
- hosts: servers
  strategy: debug
  remote_user: ansible
  become: yes
  become_user: root
  tasks:
    - debug: msg="elk main"
    - include_role:
        name: java
        name: elasticsearch
        name: kibana
        name: nginx
        name: logstash
      vars:
        es_instance_name: "node1"
        es_config: {
          discovery.zen.ping.unicast.hosts: "logstash-2, logstash-3",
          network.host: _eno1_,
          cluster.name: logstash,
          node.name: "{{ ansible_hostname }}",
          http.port: 9200,
          transport.tcp.port: 9300,
          node.data: true,
          node.master: true,
          bootstrap.memory_lock: true,
          index.number_of_shards: 3,
          index.number_of_replicas: 1
        }
        es_major_version: "6.x"
        es_version: "6.2.1"
        es_heap_size: "26g"
        es_cluster_name: "logstash"
roles/elasticsearch/tasks/main.yml
---
- name: os-specific vars
  include_vars: "{{ ansible_os_family }}.yml"
  tags:
    - always
- debug: msg="es parms"
- name: check-set-parameters
  # include: elasticsearch-parameters.yml
  include_tasks: elasticsearch-parameters.yml
  tags:
    - always
#- include: java.yml
#  when: es_java_install
#  tags:
#    - java
#- include: elasticsearch.yml
- include_tasks: elasticsearch.yml
roles/elasticsearch/tasks/elasticsearch.yml
---
- name: Include optional user and group creation.
  when: (es_user_id is defined) and (es_group_id is defined)
  include_tasks: elasticsearch-optional-user.yml
- name: Include specific Elasticsearch
  include_tasks: elasticsearch-Debian.yml
  when: ansible_os_family == 'Debian'
roles/elasticsearch/tasks/elasticsearch-Debian.yml
---
- set_fact: force_install=no
- set_fact: force_install=yes
  when: es_allow_downgrades
- name: Debian - Install apt-transport-https to support https APT downloads
  become: yes
  apt: name=apt-transport-https state=present
  when: es_use_repository
- name: Debian - Add Elasticsearch repository key
  become: yes
  apt_key: url="{{ es_apt_key }}" state=present
  when: es_use_repository and es_apt_key
- name: Debian - Add elasticsearch repository
  become: yes
  apt_repository: repo={{ item.repo }} state={{ item.state }}
  with_items:
    - { repo: "{{ es_apt_url_old }}", state: "absent" }
    - { repo: "{{ es_apt_url }}", state: "present" }
  when: es_use_repository
- name: Debian - Include versionlock
  include: elasticsearch-Debian-version-lock.yml
  when: es_version_lock
- name: Debian - Ensure elasticsearch is installed
  become: yes
  apt: name=elasticsearch{% if es_version is defined and es_version != "" %}={{$
  when: es_use_repository
  register: debian_elasticsearch_install_from_repo
  notify: restart elasticsearch
- name: Debian - Download elasticsearch from url
  get_url: url={% if es_custom_package_url is defined %}{{ es_custom_package_ur$
  when: not es_use_repository
- name: Debian - Ensure elasticsearch is installed from downloaded package
  become: yes
  apt: deb=/tmp/elasticsearch-{{ es_version }}.deb
  when: not es_use_repository
  register: elasticsearch_install_from_package
  notify: restart elasticsearch
roles/elasticsearch/tasks/elasticsearch-parameters.yml
# Check for mandatory parameters
- fail: msg="es_instance_name must be specified and cannot be blank"
  when: es_instance_name is not defined or es_instance_name == ''
- fail: msg="es_proxy_port must be specified and cannot be blank when es_proxy_$
  when: (es_proxy_port is not defined or es_proxy_port == '') and (es_proxy_hos$
- debug: msg="WARNING - It is recommended you specify the parameter 'http.port'"
  when: es_config['http.port'] is not defined
- debug: msg="WARNING - It is recommended you specify the parameter 'transport.$
  when: es_config['transport.tcp.port'] is not defined
- debug: msg="WARNING - It is recommended you specify the parameter 'discovery.$
  when: es_config['discovery.zen.ping.unicast.hosts'] is not defined
#If the user attempts to lock memory they must specify a heap size
- fail: msg="If locking memory with bootstrap.memory_lock a heap size must be s$
  when: es_config['bootstrap.memory_lock'] is defined and es_config['bootstrap.$
#Check if working with security we have an es_api_basic_auth_username and es_ap$
- fail: msg="Enabling security requires an es_api_basic_auth_username and es_ap$
  when: es_enable_xpack and ("security" in es_xpack_features) and es_api_basic_$
- set_fact: file_reserved_users={{ es_users.file.keys() | intersect (reserved_x$
  when: es_users is defined and es_users.file is defined and (es_users.file.key$
- fail:
    msg: "ERROR: INVALID CONFIG - YOU CANNOT CHANGE RESERVED USERS THROUGH THE$
  when: file_reserved_users | default([]) | length > 0
- set_fact: instance_default_file={{default_file | dirname}}/{{es_instance_name$
- set_fact: instance_init_script={{init_script | dirname }}/{{es_instance_name}$
- set_fact: conf_dir={{ es_conf_dir }}/{{es_instance_name}}
- set_fact: m_lock_enabled={{ es_config['bootstrap.memory_lock'] is defined and$
#TODO - if transport.host is not local maybe error on boostrap checks
#Use systemd for the following distributions:
#Ubuntu 15 and up
#Debian 8 and up
#Centos 7 and up
#Relies on elasticsearch distribution installing a serviced script to determine$
- set_fact: use_system_d={{(ansible_distribution == 'Debian' and ansible_distri$
- set_fact: instance_sysd_script={{sysd_script | dirname }}/{{es_instance_name}$
  when: use_system_d
#For directories we also use the {{inventory_hostname}}-{{ es_instance_name }} $
- set_fact: instance_suffix={{inventory_hostname}}-{{ es_instance_name }}
- set_fact: pid_dir={{ es_pid_dir }}/{{instance_suffix}}
- set_fact: log_dir={{ es_log_dir }}/{{instance_suffix}}
- set_fact: data_dirs={{ es_data_dirs | append_to_list('/'+instance_suffix) }}
It turned out to be a formatting issue with include_role. I fixed the problem by using one role per include, as follows:
- include_role:
    name: java
- include_role:
    name: elasticsearch
and so forth.
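On the idempotence part of the question: roles like this typically regenerate elasticsearch.yml from a template and notify a restart handler, so a changed es_config value shows up as a changed template task and triggers a restart on rerun. A minimal hedged sketch of that pattern (the paths, template name, and handler name are illustrative, not taken from the role above):
# in a role's tasks file (illustrative)
- name: Render elasticsearch.yml from es_config
  template:
    src: elasticsearch.yml.j2              # illustrative template name
    dest: /etc/elasticsearch/elasticsearch.yml
    owner: elasticsearch
    group: elasticsearch
    mode: "0660"
  notify: restart elasticsearch

# in the role's handlers file (illustrative)
- name: restart elasticsearch
  service:
    name: elasticsearch
    state: restarted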
