I am trying to use Ansible to check if SELinux is enabled (set to Enforcing), and if not, enable it. The play to enable SELinux must be invoked only if SELinux is disabled.
The playbook looks like so:
- hosts: all
  # root should execute this.
  remote_user: root
  become: yes
  tasks:
    # Check if SELinux is enabled.
    - name: check if selinux is enabled
      tags: selinuxCheck
      register: selinuxCheckOut
      command: getenforce

    - debug: var=selinuxCheckOut.stdout_lines

    - name: enable selinux if not enabled already
      tags: enableSELinux
      selinux: policy=targeted state=enforcing
      when: selinuxCheckOut.stdout_lines == "Enforcing"

    - debug: var=enableSELinuxOut.stdout_lines
When I run this, the task enableSELinux fails with the reason, "Conditional check failed". The output is:
TASK [debug] *******************************************************************
task path: /root/ansible/playbooks/selinuxConfig.yml:24
ok: [localhost] => {
    "selinuxCheckOut.stdout_lines": [
        "Enforcing"
    ]
}
TASK [enable selinux if not enabled already] ***********************************
task path: /root/ansible/playbooks/selinuxConfig.yml:26
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
My questions:
1. Is this the correct way to get a play to execute depending on the output from another play?
2. How do I get this to work?
Your approach is correct, but stdout_lines is a list, and comparing a list against a string will never be true. You have to compare the first element of that list. Try this:
when: selinuxCheckOut.stdout_lines[0] == "Enforcing"
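For completeness, a minimal sketch of the full corrected task (note that stdout, unlike stdout_lines, is a plain string, so when: selinuxCheckOut.stdout == "Enforcing" would also work; and if the goal is to run the task only when SELinux is not already enforcing, the comparison should be inverted to !=):

- name: enable selinux if not enabled already
  tags: enableSELinux
  selinux:
    policy: targeted
    state: enforcing
  when: selinuxCheckOut.stdout_lines[0] == "Enforcing"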
I'm trying to set up an NFS share between two nodes using Ansible. I'm using the role nfs, which is executed by both nodes, and I separated client and server tasks using the when condition.
After templating the NFS /etc/exports on the server node I want to restart the NFS server using a handler, but the notify doesn't seem to work.
This is the task I'm using:
- name: Template NFS /etc/exports
  ansible.builtin.template:
    src: exports
    dest: /etc/exports
  notify:
    - Restart NFS server
  when: inventory_hostname == groups['nfs-servers'][0]
I tried to restart the nfs server using this handler:
- name: Restart NFS server
  ansible.builtin.systemd:
    name: nfs-kernel-server
    state: restarted
However, the handler never executes, even when exports is actually templated.
I think that's because the task always results in changed = false for the client node (node2), due to the when condition. This is the output of the task:
TASK [nfs : Template NFS /etc/exports]
skipping: [node2] => changed=false
  skip_reason: Conditional result was False
changed: [node1] => changed=true
  checksum: db864f9235e5afc0272896c4467e244d3b9c4147
  dest: /etc/exports
  gid: 0
  group: root
  md5sum: 447bf4f5557e3a020c40f4e729c90a62
  mode: '0644'
  owner: root
  size: 94
  src: /home/test/.ansible/tmp/ansible-tmp-1673949188.2970977-15410-64672518997352/source
  state: file
  uid: 0
Any suggestions on how to use when and notify together? Is there a way to make the notify ignore the skipping result?
Everything works fine if I use two roles and I remove the when condition.
This is the expected behaviour: handlers run operations on change, and if there is no change on a given host, no handler runs on that host.
So you will have to trigger the handler from a separate task in order to have it react to a change on another host. Doing it this way, you will have to register the result of the template task so that its changed state can be inspected, with the help of hostvars, from other hosts.
This can be done by forcing a changed state on any task you like, for example a debug task.
From the task you provided, I also see some other possible improvements:
Do not use when to limit a task to a single host; prefer delegation and run_once: true. This also has the advantage that results registered with run_once: true are propagated to all hosts, avoiding the need for hostvars.
Watch out for warnings: your current group name contains a dash, which Ansible flags as problematic:
[WARNING]: Invalid characters were found in group names but not replaced,
use -vvvv to see details
and in verbose mode:
Not replacing invalid character(s) "{'-'}" in group name (nfs-servers)
Use underscores instead of dashes.
So, with all this:
- hosts: nfs_servers
  gather_facts: no

  tasks:
    - name: Template NFS /etc/exports
      ansible.builtin.template:
        src: exports
        dest: /etc/exports
      delegate_to: "{{ groups.nfs_servers.0 }}"
      run_once: true
      register: templated_exports

    - name: Propagation of the change to notify handler
      ansible.builtin.debug:
        msg: Notify all handlers on /etc/exports change
      notify:
        - Restart NFS server
      changed_when: true
      when: templated_exports is changed

  handlers:
    - name: Restart NFS server
      ansible.builtin.debug:
        msg: Restarting NFS server now
When run with a file change:
PLAY [nfs_servers] *******************************************************

TASK [Template NFS /etc/exports] *****************************************
changed: [node1]

TASK [Propagation of the change to notify handler] ***********************
changed: [node1] =>
  msg: Notify all handlers on /etc/exports change
changed: [node2] =>
  msg: Notify all handlers on /etc/exports change

RUNNING HANDLER [Restart NFS server] *************************************
ok: [node1] =>
  msg: Restarting NFS server now
ok: [node2] =>
  msg: Restarting NFS server now
When run with no change:
PLAY [nfs_servers] *******************************************************

TASK [Template NFS /etc/exports] *****************************************
ok: [node1]

TASK [Propagation of the change to notify handler] ***********************
skipping: [node1]
skipping: [node2]
I am able to create a user on a Windows server as part of a playbook, but when the playbook is re-run, the create task fails.
I'm trying to work out if I am missing something.
playbook:
---
# vim: set filetype=ansible ff=unix ts=2 sw=2 ai expandtab :
#
# Playbook to configure the environment
- hosts: createuser
  tasks:
    - name: create user
      run_once: true
      win_user:
        name: gary
        password: 'B0bP4ssw0rd123!^'
        password_never_expires: true
        account_disabled: no
        account_locked: no
        password_expired: no
        state: present
        groups:
          - Administrators
          - Users
If I run the playbook when the user does not exist, the create works fine.
When I re-run it, I get:
PLAY [createuser] *******************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************************************************************
ok: [dsy-demo-mssql02]
TASK [create user] ******************************************************************************************************************************************************************************************************************
fatal: [dsy-demo-mssql02]: FAILED! => {"changed": false, "failed": true, "msg": "Exception calling \"ValidateCredentials\" with \"2\" argument(s): \"The network path was not found.\r\n\""}
I have verified that I can logon to the server using the created user credentials.
Anyone seen this before, or understand what can be happening?
It looks to me like the problem might be the
run_once: true
which only tells the task to run once, on a single host. For the Ansible documentation on that delegation feature, see https://docs.ansible.com/ansible/playbooks_delegation.html#run-once
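Under that assumption, a minimal sketch of the fix is simply to drop run_once so the task runs (idempotently) on every targeted host:

- name: create user
  win_user:
    name: gary
    password: 'B0bP4ssw0rd123!^'
    password_never_expires: true
    state: present
    groups:
      - Administrators
      - Users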
I have the following task to print out the current version of jenkins that is installed on some servers:
---
- hosts: all
  remote_user: user
  tasks:
    - name: Printing the Jenkins version running on the masters
      yum:
        name: jenkins
      register: version

    - debug: var=version
I am trying to avoid using the -v option when running the playbook with hopes to keep the output as clean as possible.
If the playbook is run without the -v option the output looks like this:
TASK [Printing the jenkins version that is installed on each of the servers]***************
ok: [Server1]
ok: [Server2]
ok: [Server3]
TASK [debug] *******************************************************************
ok: [Server1] => {
    "changed": false,
    "version": "VARIABLE IS NOT DEFINED!"
}
ok: [Server1] => {
    "changed": false,
    "version": "VARIABLE IS NOT DEFINED!"
}
ok: [Server1] => {
    "changed": false,
    "version": "VARIABLE IS NOT DEFINED!"
}
However, it returns that version is not defined. I am confused as to why this is happening, because I have done the printing the same way for a bunch of other tasks without any problems. Any suggestions are greatly appreciated.
You can achieve this using the shell and debug modules:

---
- hosts: all
  remote_user: user
  become: True
  become_method: sudo
  tasks:
    - name: Printing the Jenkins version running on the masters
      shell: cat /var/lib/jenkins/config.xml | grep '<version>'
      register: version

    - debug: var=version.stdout

Note that debug's var option expects a bare variable name that is templated implicitly, so var=version.stdout (rather than var={{ version['stdout'] }}) is the correct form.
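Alternatively, a sketch that stays within the yum module: its list parameter queries packages instead of installing them, and, assuming the registered results entries carry the yumstate and version fields the module documents, the installed version can be extracted without shelling out:

- name: Query the Jenkins package
  yum:
    list: jenkins
  register: jenkins_pkg

- debug:
    msg: "{{ jenkins_pkg.results | selectattr('yumstate', 'equalto', 'installed') | map(attribute='version') | list }}"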
You can also create an Ansible callback plugin, or use one available on the net, e.g. human_log.
I want to run an Ansible role conditionally, i.e. only when some binary does NOT exist (which for me implies absence of some particular app installation).
Something like the pattern used here.
Using the following code in my playbook:
- hosts: my_host
  tasks:
    - name: check app existence
      command: /opt/my_app/somebinary
      register: myapp_exists
      ignore_errors: yes

  roles:
    - { role: myconditional_role, when: myapp_exists|failed }
    - another_role_to_be_included_either_way
Here is the output:
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [my_host]
TASK [myconditional_role : create temporary installation directory] ***********************
fatal: [my_host]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'myapp_exists|failed' failed. The error was: ERROR! |failed expects a dictionary"}
Why is the conditional check failing?
Using ansible 2.0.0.2 on Ubuntu 16.04.01
BTW: "create temporary installation directory" is the name of the first task of the conditionally included role.
Tasks are executed after roles, so myapp_exists is undefined when the role's when condition is evaluated.
Use pre_tasks instead.
Also keep in mind that when does not actually make the role conditional; it just attaches that when statement to every task in the role.
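A minimal sketch of that reordering, keeping the original names (myapp_exists is failed is the modern spelling of the deprecated |failed filter):

- hosts: my_host
  pre_tasks:
    - name: check app existence
      command: /opt/my_app/somebinary
      register: myapp_exists
      ignore_errors: yes

  roles:
    - { role: myconditional_role, when: myapp_exists is failed }
    - another_role_to_be_included_either_way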
I am running an Ansible play and would like to list all the hosts targeted by it. The Ansible docs mention that this is possible, but their method doesn't seem to work with a complex targeted group (targeting like hosts: web_servers:&data_center_primary).
I'm sure this is doable, but I can't seem to find any further documentation on it. Is there a variable with all the currently targeted hosts?
You are looking for the play_hosts variable:
---
- hosts: all
  tasks:
    - name: Create a group of all hosts by app_type
      group_by: key={{app_type}}

    - debug: msg="groups={{groups}}"
      run_once: true

- hosts: web:&some_other_group
  tasks:
    - debug: msg="play_hosts={{play_hosts}}"
      run_once: true
would result in
TASK: [Create a group of all hosts by app_type] *******************************
changed: [web1] => {"changed": true, "groups": {"web": ["web1", "web2"], "load_balancer": ["web3"]}}

TASK: [debug msg="play_hosts={{play_hosts}}"] *********************************
ok: [web1] => {
    "msg": "play_hosts=['web1']"
}
inventory:
[proxy]
web1 app_type=web
web2 app_type=web
web3 app_type=load_balancer
[some_other_group]
web1
web3
You can use the --list-hosts option to only list the hosts a playbook would affect, without actually running it.
Also, there is the dict hostvars, which holds all hosts currently known to Ansible. But I think the setup module has to be run on all hosts first, so you cannot skip that step via gather_facts: no.
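For example (the playbook name here is a placeholder):

ansible-playbook site.yml --list-hosts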