Ansible: disable service only if present - ansible

Is there a nice way to disable and stop a service, but only if it is installed on the server? Something like this:
- service: name={{ item }} enabled=no state=stopped only_if_present=yes
  with_items:
    - avahi-daemon
    - abrtd
    - abrt-ccpp
Note that "only_if_present" is a keyword that doesn't exist right now in Ansible, but I suppose my goal is obvious.

I don't know what the package name is in your case, but you can do something similar to this:
- shell: dpkg-query -W 'avahi'
  ignore_errors: True
  register: is_avahi_deb
  when: ansible_distribution == 'Debian' or ansible_distribution == 'Ubuntu'

- shell: rpm -q 'avahi'
  ignore_errors: True
  register: is_avahi_rpm
  when: ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat'

- service: name=avahi-daemon enabled=no state=stopped
  when: (is_avahi_deb.rc | default(1)) == 0 or (is_avahi_rpm.rc | default(1)) == 0

Note the two distinct register names: a skipped task still registers its variable, so a single shared name would be overwritten with a skipped result by whichever task didn't run. The service task fires only when one of the package queries actually succeeded (rc 0, i.e. the package is present).
Update: I have added conditions so that the playbook works when you have multiple different distros; you might need to adapt it to your requirements.
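On Ansible 2.5 and later, a sketch using the package_facts module avoids shelling out to dpkg/rpm entirely (the package name avahi-daemon is an assumption here; adjust it for your distro):

```yaml
- name: Gather installed packages into ansible_facts.packages
  package_facts:
    manager: auto

- name: Disable avahi-daemon only if the package is installed
  service:
    name: avahi-daemon
    enabled: no
    state: stopped
  when: "'avahi-daemon' in ansible_facts.packages"
```

This is also distro-agnostic, so the per-distribution conditions above are no longer needed.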

Universal solution for systemd services:
- name: Disable services if enabled
  shell: if systemctl is-enabled --quiet {{ item }}; then systemctl disable {{ item }} && echo disable_ok; fi
  register: output
  changed_when: "'disable_ok' in output.stdout"
  loop:
    - avahi-daemon
    - abrtd
    - abrt-ccpp
It produces three states:
- service is absent or already disabled — ok
- service exists and was disabled — changed
- service exists and disabling it failed — failed
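A module-only variant of the same idea can be sketched with the service_facts module (Ansible 2.5+); note that ansible_facts.services is keyed by unit name including the .service suffix:

```yaml
- name: Gather service facts
  service_facts:

- name: Disable services that exist on this host
  service:
    name: "{{ item }}"
    enabled: no
    state: stopped
  loop:
    - avahi-daemon
    - abrtd
    - abrt-ccpp
  when: (item ~ '.service') in ansible_facts.services
```

Services missing from the facts are simply skipped, giving the same three-state behavior without a shell task.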

Related

How do I avoid registering a variable when a “when” condition is Not met? [duplicate]

I have the following Ansible Playbook code:
- name: Users | Generate password for user (Debian/Ubuntu)
  shell: makepasswd --chars=20
  register: make_password
  when: ansible_distribution in ['Debian', 'Ubuntu']

- name: Users | Generate password for user (Fedora)
  shell: makepasswd -m 20 -M 20
  register: make_password
  when: ansible_distribution in ['Fedora', 'Amazon']

- name: Users | Generate password for user (CentOS)
  shell: mkpasswd -l 20
  register: make_password
  when: ansible_distribution in ['CentOS']

- name: debug
  debug: var=make_password
Which outputs:
TASK: [users | debug]
ok: [127.0.0.1] => {
    "var": {
        "make_password": {
            "changed": false,
            "skipped": true
        }
    }
}
... Because every register block gets executed regardless of the when condition.
How would I fix this so make_password doesn't get overwritten when the when condition isn't met?
Or if this is the wrong approach for what you can see that I'm trying to accomplish, let me know of a better one.
Unfortunately, this is the expected behavior. From Ansible Variables:

Note: If a task fails or is skipped, the variable still is registered with a failure or skipped status; the only way to avoid registering a variable is using tags.
I do not know how to use tags to solve your issue.
EDIT: I found a way, albeit a crude one: store the result so that it is not overwritten.

- set_fact: mypwd="{{ make_password }}"
  when: make_password.changed
So your code will look like:
- name: Users | Generate password for user (Debian/Ubuntu)
  shell: makepasswd --chars=20
  register: make_password
  when: ansible_distribution in ['Debian', 'Ubuntu']

- set_fact: mypwd="{{ make_password }}"
  when: make_password.changed

- name: Users | Generate password for user (Fedora)
  shell: makepasswd -m 20 -M 20
  register: make_password
  when: ansible_distribution in ['Fedora', 'Amazon']

- set_fact: mypwd="{{ make_password }}"
  when: make_password.changed

- name: Users | Generate password for user (CentOS)
  shell: mkpasswd -l 20
  register: make_password
  when: ansible_distribution in ['CentOS']

- set_fact: mypwd="{{ make_password }}"
  when: make_password.changed

- name: debug
  debug: var=mypwd
Typically, for tasks that run differently on different distros, I tend to put the distro-specific tasks in their own file and conditionally include it from main.yml.
So an example might look something like this:
main.yml:
- include: tasks/Debian.yml
  when: ansible_distribution in ['Debian', 'Ubuntu']

- include: tasks/Fedora.yml
  when: ansible_distribution in ['Fedora', 'Amazon']

- include: tasks/Centos.yml
  when: ansible_distribution in ['CentOS']

- name: debug
  debug: var=make_password
Debian.yml
- name: Users | Generate password for user (Debian/Ubuntu)
  shell: makepasswd --chars=20
  register: make_password
And obviously repeat for the other two distros.
This way main.yml runs only the generic tasks for the role that work on any distro, while anything distro-specific lives in its own playbook. Because the include is conditional, the tasks aren't even loaded when the condition isn't met, so the variable is never registered.
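The same pattern can be collapsed into a single include with the first_found lookup, a sketch (the file naming convention, one task file per ansible_distribution plus a default.yml fallback, is my own assumption):

```yaml
- include_tasks: "{{ lookup('first_found', include_params) }}"
  vars:
    include_params:
      files:
        - "{{ ansible_distribution }}.yml"
        - default.yml
      paths:
        - tasks
```

The first matching file wins, so adding support for a new distro is just dropping in a new task file.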
How about defining a dict in a vars file?
cat vars.yml
make_password: {
    'Debian': 'makepasswd --chars=20',
    'Ubuntu': 'makepasswd --chars=20',
    'Fedora': 'makepasswd -m 20 -M 20',
    'Amazon': 'makepasswd -m 20 -M 20',
    'CentOS': 'mkpasswd -l 20'
}
cat test.yml
---
- hosts: "{{ host }}"
  remote_user: root
  vars_files:
    - vars.yml
  tasks:
    - name: get mkpasswd
      debug: var="{{ make_password[ansible_distribution] }}"
run result:
TASK: [get mkpasswd]
ok: [10.10.10.1] => {
    "mkpasswd -l 20": "mkpasswd -l 20"
}
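To actually run the selected command rather than just print it, a sketch (the result variable name password_result is my own):

```yaml
- name: Users | Generate password for user
  shell: "{{ make_password[ansible_distribution] }}"
  register: password_result

- name: debug
  debug: var=password_result.stdout
```

Because the command is looked up per host, no per-distro tasks or conditions are needed and the variable is registered exactly once.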
Maybe it makes sense to put all the variants into a shell script and then just run that script instead of multiple Ansible tasks.
The script can detect the OS or simply react to a passed parameter.
#!/bin/bash
case "$1" in
    "Debian" | "Ubuntu")
        makepasswd --chars=20
        ;;
    "Fedora" | "Amazon")
        makepasswd -m 20 -M 20
        ;;
    "CentOS")
        mkpasswd -l 20
        ;;
    *)
        echo "Unexpected distribution" 1>&2
        exit 1
        ;;
esac
Throw this in the scripts folder of your role as make_password.sh and then call it as:
- name: Users | Generate password for user
  script: make_password.sh {{ ansible_distribution }}
  register: make_password
Another idea: you seem to generate a password remotely, register the output, and then use it later in other tasks. If you can guarantee the Ansible control host is always of the same type, and not every team member uses a different distribution, you could simply run the task locally.
Let's say you use Ubuntu:
- name: Users | Generate password for user
  shell: makepasswd --chars=20
  delegate_to: localhost
  register: make_password
With delegate_to, the task is executed locally on the host you ran Ansible from.
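A fully distro-independent alternative is a sketch with Ansible's built-in password lookup, which generates the password on the control node with no external tool at all:

```yaml
- set_fact:
    make_password: "{{ lookup('password', '/dev/null length=20 chars=ascii_letters,digits') }}"
```

Using /dev/null as the path means the password is not cached to disk; point it at a real file if you want the same password reused across runs.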

Ansible task continuation only when finding strings in a registered variable

I have Ansible code below
- name: Try to delete file
  shell: rm -rf /tmp/"{{ file_Del }}"
  register: result
  ignore_errors: True

- name: Display result
  debug:
    var: result.stdout
result.stderr can be either

"result.stderr": "Error from server (NotFound): file /tmp/somefile not found"

OR

"result.stderr": "" <= on success

Both of these are valid, but I want Ansible to fail on anything else in result.stderr, e.g. "result.stderr": "rm -rf Command not found".
How do I do that with "end_play"?

- meta: end_play
  when: "*Error from server (NotFound): file.*.*not found" not in result.stderr OR result.stderr == ""
Looking at this question, it seems you're working in a bit of a mess and trying to continue that way. I think it's best not to move further in that direction, but to work in a more structured fashion instead.
For now, a more elegant solution using only Ansible modules:
---
- hosts: local_test
  # gather_facts: False
  vars:
    file_Del: test
  tasks:
    - name: find some files at a location, recursively
      find:
        paths: "/tmp/{{ file_Del }}"
        recurse: true
      register: found_files

    - name: display files found, to be deleted, could be empty
      debug:
        msg: "Files found are {{ item.path }}"
      with_items:
        - "{{ found_files.files }}"

    - name: delete files when found
      file:
        path: "{{ item.path }}"
        state: absent
      with_items: "{{ found_files.files | default([]) }}"

    - name: display files found, to be deleted, could be empty
      debug:
        msg: "{{ found_files }}"

    - name: end play when no files are found
      meta: end_play
      when: found_files.matched == 0

    - name: this task is skipped when there are no files found, but executed when files were deleted
      shell: echo hi
Regarding your "rm -rf Command not found" case: this looks out of the ordinary, e.g. you're executing the playbook on both a Linux and a Windows host, where the task would fail on the Windows host. Every Linux host I know of ships with rm, as do most Docker containers and even OpenBSD hosts.
It could be wise to separate these tasks per environment.
Perhaps a bit of context could help us, and eventually you, out.
rm isn't very chatty. See rm -rf return codes. But it's possible to use find and xargs. For example the play
- hosts: localhost
  vars:
    file_del: xxx
  tasks:
    - shell: "find /tmp/{{ file_del }} | xargs /bin/rm"
      ignore_errors: True
      register: result

    - meta: end_play
      when: result.rc == 123

    - debug:
        msg: /tmp/{{ file_del }} found and deleted. Continue play.
gives (when /tmp/{{ file_del }} was found and deleted)

"msg": "/tmp/xxx found and deleted. Continue play."

and ends the play when the file is not found (result.rc == 123).
Notes:
- Make sure rc 123 is reported when no file is found
- It's possible to test result.stderr instead
- If possible, write a custom command. For example:
$ cat my_custom_command.sh
#!/bin/sh
if [ "$#" -eq "0" ]; then
    echo "[ERR 123] No files found."
    exit 123
fi
echo "Here I can do whatever I want with $#"
exit 0
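The failure condition from the question can also be expressed directly on the original task with failed_when (a sketch, keeping the question's file_Del variable; list items under failed_when are ANDed together):

```yaml
- name: Try to delete file
  shell: rm -rf /tmp/"{{ file_Del }}"
  register: result
  failed_when:
    - result.stderr != ""
    - "'Error from server (NotFound)' not in result.stderr"
```

The task then fails only when stderr is non-empty AND does not contain the expected NotFound message, exactly the "anything else in result.stderr" case.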

Ansible: how to loop over ip-addresses until first success shell output?

I'm creating playbook which will be applied to new Docker swarm manager(s). Server(s) is/are not configured before playbook run.
We already have some Swarm managers. I can find all of them (include new one) with:
- name: 'Search for SwarmManager server IPs'
  ec2_instance_facts:
    region: "{{ ec2_region }}"
    filters:
      vpc-id: "{{ ec2_vpc_id }}"
      "tag:aws:cloudformation:logical-id": "AutoScalingGroupSwarmManager"
  register: swarmmanager_instance_facts_result
Now I can use something like this to get join-token:
- set_fact:
    swarmmanager_ip: "{{ swarmmanager_instance_facts_result.instances[0].private_ip_address }}"

- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ swarmmanager_ip }}"
  run_once: true
A successful shell output looks like this, just one line starting with "SWMTKN-1":
SWMTKN-1-11xxxyyyzzz-xxxyyyzzz
But I see some possible problems with swarmmanager_ip here:
- it can be the new instance, which is still unconfigured,
- it can be an instance without a working Swarm manager.
So I decided to loop over the results until I get a join-token. But none of the code variants I've tried work. For example, this one runs over the whole list without breaking:
- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  # ignore_errors: true
  # until: docker_swarm_token_result.stdout_lines|length == 1
  when: docker_swarm_token_result is not defined or docker_swarm_token_result.stdout_lines is not defined or docker_swarm_token_result.stdout_lines|length == 1
  run_once: true
  check_mode: false
Do you know how to iterate over a list until the first successful shell output?
I use Ansible 2.6.11; an answer for 2.7 is also fine.
P.S.: I've already read How to break `with_lines` cycle in Ansible?; it doesn't work for modern Ansible versions.
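One robust sketch sidesteps the break problem entirely: query every candidate while ignoring failures, then pick the first successful output afterwards (variable names are taken from the question; this trades a true early break for simplicity, since skipped loop iterations would otherwise overwrite the registered result):

```yaml
- name: Try to get the join-token from every candidate manager
  shell: docker swarm join-token -q manager
  changed_when: False
  ignore_errors: true
  register: docker_swarm_token_results
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  run_once: true

- name: Keep the first successful token
  set_fact:
    docker_swarm_token: "{{ docker_swarm_token_results.results
                            | selectattr('rc', 'defined')
                            | selectattr('rc', 'equalto', 0)
                            | map(attribute='stdout')
                            | first }}"
  run_once: true
```

Unconfigured or broken managers simply contribute a failed result that the filter chain discards; the set_fact fails loudly (first on an empty list) if no manager responded at all.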


ansible : how to use the variable ${item} from with_items in notify?

I am new to Ansible and I am trying to create several virtual environments (one for each project, the list of projects being defined in a variable).
The task works well (I get all the folders); however, the handler does not work: it does not initialize each folder as a virtual environment. The ${item} variable in the handler does not work.
How can I use a handler when I use with_items?
tasks:
  - name: create virtual env for all projects ${projects}
    file: state=directory path=${virtualenvs_dir}/${item}
    with_items: ${projects}
    notify: deploy virtual env

handlers:
  - name: deploy virtual env
    command: virtualenv ${virtualenvs_dir}/${item}
Handlers are just 'flagged' for execution once whatever (itemized sub-)task requests it (i.e. had changed: yes in its result).
At that point handlers run just like regular tasks and don't know about the itemized loop.
A possible solution is not a handler but an extra task plus a conditional.
Something like:
- hosts: all
  gather_facts: false
  tasks:
    - action: shell echo {{ item }}
      with_items:
        - 1
        - 2
        - 3
        - 4
        - 5
      register: task

    - debug: msg="{{ item.item }}"
      with_items: "{{ task.results }}"
      when: item.changed == True
To sum up the previous discussion, adjusted for modern Ansible...
- hosts: localhost
  gather_facts: false
  tasks:
    - action: shell echo {{ item }} && exit {{ item }}
      with_items:
        - 1
        - 2
        - 3
        - 4
        - 5
      register: task
      changed_when: task.rc == 3
      failed_when: no
      notify: update service
  handlers:
    - name: update service
      debug: msg="updated {{ item }}"
      with_items: >
        {{
          task.results
          | selectattr('changed')
          | map(attribute='item')
          | list
        }}
