I am facing the problem that I need to define a set of similar handlers in Ansible that differ only in name.
Let me give an example. Here are some tasks:
# tasks/main.yml
- name: Install config of OpenVPN instance 1
  notify: restart openvpn-1
  ...
- name: Install config of OpenVPN instance 2
  notify: restart openvpn-2
  ...
# Multiple more of that pattern.
You can assume that each instance has a slightly different configuration that is handled here.
All right. Now the handlers:
# handlers/main.yml
- name: restart openvpn-1
  systemd:
    name: openvpn-server#instance1
    state: restarted

- name: restart openvpn-2
  systemd:
    name: openvpn-server#instance2
    state: restarted
# ...
As you can see, this is quite a lot of duplicated code (not good).
I was thinking of doing something like:
# Handler template or so
- name: restart-openvpn-{{ item }}
  systemd:
    name: openvpn-server#instance{{ item }}
    state: restarted
  loop:
    - "1"
    - "2"
# ...
This is not working; I tried it.
I found this post, but it does not work well here, as it assumes the task is running in a loop. Instead, I have a set of individual tasks that each trigger a single instance of the handler. Also, this is already inside a role.
So the short question is: How can I create a handler template to avoid code redundancy?
Your code works.
E.g. I took two services and told them to restart using your approach:
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: restart-{{ item }}
      systemd:
        name: "{{ item }}.service"
        state: restarted
      loop:
        - "whoopsie"
        - "wpa_supplicant"
The output is:
PLAY RECAP **********
localhost : ok=2
The main difference from your code: add quotes.
As per Ansible Documentation on Using Variables:
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax documentation.
This won’t work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
Do it like this and you’ll be fine:
- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"
Related
I have a task that checks the redis service status on the hosts listed below:
- hosts: 192.168.0.1, 192.168.0.2, 192.168.0.3
  tasks:
    - command:
        cmd: service redis-server status
      register: result
    - debug:
        var: result
After checking, I need to access the hosts where the service does not exist.
They should be accessible as a variable so I can proceed with them in the next tasks.
Can someone please help?
Similar to Ansible facts, it is also possible to gather service_facts. For example:
- name: Set facts SERVICE
  set_fact:
    SERVICE: "redis-server.service"

- name: Gathering Service Facts
  service_facts:

- name: Show ansible_facts.services
  debug:
    msg:
      - "{{ ansible_facts.services[SERVICE].status }}"
If you would like to perform tasks afterwards on a service whose status you don't know, you can check the state with Conditionals.
If the service is not installed at that time, the specific variable (key) will not be defined. You would then use Conditionals based on variables:
when: ansible_facts.services[SERVICE] is defined
when: ansible_facts.services['redis-server.service'] is defined
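Put together, a minimal sketch (reusing the redis-server.service name from the question; the debug message is only illustrative) could look like:
- name: Gathering Service Facts
  service_facts:

- name: Report hosts where redis-server is not installed
  debug:
    msg: "redis-server is not present on {{ inventory_hostname }}"
  when: ansible_facts.services['redis-server.service'] is not defined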
It is also recommended to use the Ansible service module to perform tasks on the service
- name: Start redis-server, if not started
  service:
    name: redis-server
    state: started
instead of using the command module.
Further service-related Q&A:
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
Ansible: How to start stopped services?
Ansible: How to get disabled but running services?
How to list only the running services with ansible_facts?
Finally, I found a solution that matches perfectly.
- name: Check that redis service exists
  systemd:
    name: "redis"
  register: redis_status
  changed_when: redis_status.status.ActiveState == "inactive"

- set_fact:
    _dict: "{{ dict(ansible_play_hosts|zip(
           ansible_play_hosts|map('extract', hostvars, 'redis_status'))) }}"
  run_once: true

- set_fact:
    _changed: "{{ (_dict|dict2items|json_query('[?value.changed].key'))| join(',') }}"
  run_once: true
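The _changed variable can then be consumed in later tasks; for example, a sketch of a follow-up debug loop (how you actually proceed with those hosts is an assumption):
- name: Show hosts where the redis service is inactive
  debug:
    msg: "redis is inactive on {{ item }}"
  loop: "{{ _changed.split(',') }}"
  run_once: true
  when: _changed | length > 0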
I have a list of systemd services defined as:
vars:
  systemd_scripts: ['a.service', 'b.service', 'c.service']
Now I want to stop only a.service from the above list. How can this be achieved using the systemd module?
What are you trying to achieve? As written, you could just do:
- name: Stop service A
  systemd:
    name: a.service
    state: stopped
If instead you mean "the first service", use the first filter or an index:
- name: Stop first service
  systemd:
    name: "{{ systemd_scripts | first }}"
    state: stopped
OR
- name: Stop first service
  systemd:
    name: "{{ systemd_scripts[0] }}"
    state: stopped
Your question is very vague and unspecific. However, parts of it could be answered with the following approach:
- name: Loop over service list and stop one service
  systemd:
    name: "{{ item }}"
    state: stopped
  when:
    - item == 'a.service'
  with_items: "{{ systemd_scripts }}"
You may need to extend and change it for your needs and required functionality.
Regarding discovering, starting, and stopping services via Ansible facts (service_facts) and systemd, you may have a look into the links below.
Further readings
Ansible: How to get disabled but running services?
How to declare a variable for service_facts?
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
I am using ansible 2.5.4 and I need to share variables between hosts.
I tried many examples that I saw online (sharing with set_fact or using a dummy host) and none of them work.
Maybe I am doing something differently.
This is my playbook:
---
- hosts: master[0]
  tasks:
    - name: generate kubernetes BootrapToken
      command: kubeadm token generate
      register: generate_token_result
    - set_fact: token="{{generate_token_result}}"

- hosts: new # requires creating new group in inventory.cfg named new
  tasks:
    - name: include docker-host role
      include_role:
        name: docker-host
      when: not skip_nodes_setup
    - name: include kubernetes-host role
      include_role:
        name: kubernetes-host
      when: not skip_nodes_setup
    - name: include kubernetes-operator role
      include_role:
        name: kubernetes-operator
      when: not skip_nodes_setup
    - name: join node to kubernetes cluster
      command: "kubeadm join --token {{ hostvars['master[0]']['token']['stdout'] }} --discovery-token-unsafe-skip-ca-verification {{ hostvars['kubernetes_machines']['kube_apiserver'] }}"
I am getting the following error:
The task includes an option with an undefined variable. The error was: "hostvars['master[0]']" is undefined
The first task is able to run on master[0], but the second play does not recognize that host.
Please help.
Thanks.
Here is the inventory.cfg:
[kubernetes_machines:vars]
kube_apiserver=10.82.72.54:6443
[kubernetes_machines:children]
masters
nodes
new
[masters]
srv12
[nodes]
srv13
[new]
prd4
If you write hostvars['master[0]'], the entire master[0] is inside quotes, so you are referring to a host with the literal name master[0]. If you mean the first member of the masters host group, you need a variable reference, not a string, and you'll need to use the groups variable (and remember that your host group is named masters, not master):
hostvars[groups.masters.0]
You can find relevant documentation here.
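Applied to the playbook in the question, the join task would then look roughly like this (a sketch; only the hostvars lookup is changed, and kube_apiserver is referenced directly on the assumption that hosts in the new group inherit it as a kubernetes_machines group variable):
- hosts: new
  tasks:
    - name: join node to kubernetes cluster
      command: "kubeadm join --token {{ hostvars[groups.masters.0].token.stdout }} --discovery-token-unsafe-skip-ca-verification {{ kube_apiserver }}"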
Quoting from Playbook Basics
The hosts line is a list of one or more groups or host patterns
The pattern master[0] doesn't match the hostname master[0]. If the hostname is master0, then the hostvars reference should be
hostvars['master0']
It's not clear why hosts: master[0] works. It should not, according to the documentation. hosts: master.0, which should be the same, doesn't work.
I'm trying to run the Ansible modules junos_cli and junos_rollback and I get the following error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/home/quake/network-ansible/roles/junos-rollback/tasks/main.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
---
- name: I've made a huge mistake
^ here
This is the role in question:
---
- name: I've made a huge mistake
  junos_rollback:
    host={{ inventory_hostname }}
    user=ansible
    comment={{ comment }}
    confirm={{ confirm }}
    rollback={{ rollback }}
    logfile={{ playbook_dir }}/library/logs/rollback.log
    diffs_file={{ playbook_dir }}/configs/{{ inventory_hostname }}
Here is the Juniper page:
http://junos-ansible-modules.readthedocs.io/en/1.3.1/junos_rollback.html
Their example's syntax is a little odd: host uses a colon, while the rest use = signs. I've tried mixing both and using only one or the other. I keep getting errors.
I also confirmed that my junos-eznc version is higher than 1.2.2 (I have 2.0.1).
I've been able to use junos_cli before; I don't know if a version mismatch happened. In the official Ansible documentation, there is no mention of junos_cli or junos_rollback. Perhaps they're not supported anymore?
http://docs.ansible.com/ansible/list_of_network_modules.html#junos
Thanks,
junos_cli & junos_rollback are part of Galaxy and not core modules. You can find them at
https://galaxy.ansible.com/Juniper/junos/
Is the content posted here the whole content of your playbook? If yes, you need to define other items in your playbook too, such as roles and connection: local. For example,
refer to https://github.com/Juniper/ansible-junos-stdlib#example-playbook
```
---
- name: rollback example
  hosts: all
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  tasks:
    - name: I've made a huge mistake
      junos_rollback:
        host = {{inventory_hostname}}
        ----
        ----
```
Where have you saved the content of the juniper.junos modules? Can you post the content of your playbook and the output of the tree command, to see your file structure? That could help.
I had a similar problem where Ansible was not finding my modules, so I copied the juniper.junos folder to my roles folder and then added a tasks folder within it to execute the main.yaml from there.
Something like this:
/Users/macuared/Ansible_projects/roles/Juniper.junos/tasks
---
- name: "TEST 1 - Gather Facts"
  junos_get_facts:
    host: "{{ inventory_hostname }}"
    user: "uuuuu"
    passwd: "yyyyyy"
    savedir: "/Users/macuared/Ansible_projects/Ouput/Facts"
  ignore_errors: True
  register: junos

- name: Checking Device Version
  debug: msg="{{ junos.facts.serialnumber }}"
Additionally, I would add quotes ("") around the string values in your YAML. Something like this:
---
- name: I've made a huge mistake
  junos_rollback:
    host="{{ inventory_hostname }}"
    user=ansible
    comment="{{ comment }}"
    confirm={{ confirm }}
    rollback={{ rollback }}
    logfile="{{ playbook_dir }}/library/logs/rollback.log"
    diffs_file="{{ playbook_dir }}/configs/{{ inventory_hostname }}"
Regarding this "I've tried mixing both and only using one or the other. I keep getting errors."
I've used just colons and mine works fine, even though the documentation suggests = signs. See junos_get_facts.
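For reference, the rollback task from the question rewritten with colon syntax would look something like this (a sketch built from the parameters already posted; untested against a device):
---
- name: I've made a huge mistake
  junos_rollback:
    host: "{{ inventory_hostname }}"
    user: ansible
    comment: "{{ comment }}"
    confirm: "{{ confirm }}"
    rollback: "{{ rollback }}"
    logfile: "{{ playbook_dir }}/library/logs/rollback.log"
    diffs_file: "{{ playbook_dir }}/configs/{{ inventory_hostname }}"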
I would like to run a handler only once in an entire playbook.
I attempted using an include statement in the playbook file, as follows, but this resulted in the handler being run multiple times, once for each play:
- name: Configure common config
  hosts: all
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: common }
  handlers:
    - include: handlers/main.yml

- name: Configure metadata config
  hosts: metadata
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: metadata }
  handlers:
    - include: handlers/main.yml
Here is the content of handlers/main.yml:
- name: restart autofs
  service:
    name: autofs.service
    state: restarted
Here is an example of one of the tasks that notifies the handler:
- name: Configure automount - /opt/local/xxx in /etc/auto.direct
  lineinfile:
    dest: /etc/auto.direct
    regexp: "^/opt/local/xxx"
    line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
  notify: restart autofs
How can I get the playbook to only execute the handler once for the entire playbook?
The answer
The literal answer to the question in the title is: no.
A playbook is a list of plays. A playbook has no namespace, no variables, no state. All the configuration, logic, and tasks are defined in plays.
A handler is a task with a different calling schedule (not sequential, but conditional: once at the end of a play, or when triggered by the meta: flush_handlers task).
A handler belongs to a play, not a playbook, and there is no way to trigger it outside of the play (i.e. at the end of the playbook).
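For reference, pending handlers can be forced to run in the middle of a play with the meta: flush_handlers task mentioned above; a minimal sketch (reusing the autofs handler from the question, with a placeholder task):
- hosts: all
  tasks:
    - name: a task that notifies the handler
      command: /bin/true
      notify: restart autofs

    - name: run pending handlers now, instead of at the end of the play
      meta: flush_handlers

  handlers:
    - name: restart autofs
      service:
        name: autofs.service
        state: restarted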
Solution
The problem can be solved without referring to handlers at all.
You can use the group_by module to create an ad-hoc group based on the result of the tasks at the bottom of each play.
Then you can define a separate play at the end of the playbook restarting the service on targets belonging to the above ad-hoc group.
Refer to the below stub for the idea:
- hosts: all
  roles:
    # roles declaration
  tasks:
    - # an example task modifying Nginx configuration
      register: nginx_configuration

    # ... other tasks ...

    - name: the last task in the play
      group_by:
        key: hosts_to_restart_{{ 'nginx' if nginx_configuration is changed else '' }}

# ... other plays ...

- hosts: hosts_to_restart_nginx
  gather_facts: no
  tasks:
    - service:
        name: nginx
        state: restarted
Possible solution
Use handlers to add hosts to the in-memory inventory. Then add a play that restarts the service only for those hosts.
See this example:
If a task reports changed, it notifies mark to restart, which sets a fact indicating that the host needs a service restart.
The second handler, add host, is quite special, because the add_host task only runs once for the whole play, even in a handler (see also the documentation). But if notified, it will run after the marking is done, as implied by the handlers' order.
The handler loops over the hosts on which tasks were run and checks whether each host needs a service restart; if yes, it adds the host to the special hosts_to_restart group.
Because facts are persistent across plays, a third handler, clear mark, is notified to clear the mark for the affected hosts.
You can hide a lot of these lines by moving the handlers to a separate file and including them.
inventory file
10.1.1.[1:10]
[primary]
10.1.1.1
10.1.1.5
test.yml
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Random change to notify trigger
      debug: msg="test"
      changed_when: "1|random == 1"
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: primary
  gather_facts: no
  tasks:
    - name: Change to notify trigger
      debug: msg="test"
      changed_when: true
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: hosts_to_restart
  gather_facts: no
  tasks:
    - name: Restart service
      debug: msg="Service restarted"
      changed_when: true
A handler triggered in post_tasks will run after everything else. And the handler can be set to run_once: true.
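A sketch of that idea, reusing the task and handler from the question (the arrangement of the pieces is an assumption):
- hosts: all
  become: true
  post_tasks:
    - name: Configure automount - /opt/local/xxx in /etc/auto.direct
      lineinfile:
        dest: /etc/auto.direct
        regexp: "^/opt/local/xxx"
        line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
      notify: restart autofs
  handlers:
    - name: restart autofs
      run_once: true
      service:
        name: autofs.service
        state: restarted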
It's not clear to me what your handler should do. Anyway, as per the official documentation, handlers
are triggered at the end of each block of tasks in a play,
and will only be triggered once even if notified by multiple different
tasks [...] As of Ansible 2.2, handlers can also “listen” to generic topics, and tasks can notify those topics as follows:
So handlers are notified / executed once for each block of tasks.
Maybe you can reach your goal just by keeping the handlers after the "all" target hosts, but it doesn't seem like a clean use of handlers.
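As an illustration of the "listen" feature quoted above, tasks can notify a generic topic instead of a specific handler name (a sketch; the topic name below is just an example):
- hosts: all
  tasks:
    - name: a task that changes something
      command: /bin/true
      notify: "restart storage services"   # notify the topic, not a handler name

  handlers:
    - name: restart autofs
      service:
        name: autofs.service
        state: restarted
      listen: "restart storage services"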