I use an "eye" as a supervisor and on changes in templates have to runs something like this:
eye load service.rb
eye restart service.rb
I want to define this as a single handler for all the apps and call it like
eye reload appname
And in a handler operate like this:
- name: reload eye service
  command: eye load /path/{{ service }}.rb && eye restart {{ service }}
But I can't find a way to pass a variable to a handler. Is it possible?
Don't do this. I understand your desire to use Ansible as a programming tool, where a 'handler' is a 'function' you 'call', but it's not.
You can invent a dozen tricks to do what you want, but the result would be a total mess, hard to read and even harder to debug.
The key issue is that Ansible does not support 'argument passing' to anything (except modules). All the tricks you read about or invent yourself end up changing a global variable. If you have ever written even a bit of code in any language, you know that a program where every function uses global variables (to read, to write, and to pass arguments) is fundamentally flawed.
So, how do you do this in good, readable Ansible?
Just write a separate handler for each service. It's the cleanest and simplest Ansible: easy to read, easy to change.
BTW: if you have two actions in a chain, do not join them with '&&'.
Use two separate handlers:
- foo:
  notify:
    - eye reload
    - eye restart foo
(Note that the order in which handlers run is defined by the handlers list, not by the 'notify' list.)
BTW, if you have a few services you will save on multiple reload operations: 'eye reload' would be called only once.
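For illustration, here is a rough sketch of such per-service handlers, reusing the commands from the question (the service names foo and bar and the /path prefix are made up for this example):

# handlers/main.yml -- one pair of handlers per service (hypothetical names and paths)
# Handlers run in the order they appear here, regardless of notify order.
- name: eye reload foo
  command: eye load /path/foo.rb

- name: eye restart foo
  command: eye restart foo

- name: eye reload bar
  command: eye load /path/bar.rb

- name: eye restart bar
  command: eye restart bar

A task that changes foo's template would then notify 'eye reload foo' and 'eye restart foo', and likewise for bar.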
handlers/main.yml:
- name: restart my service
  shell: eye load /path/{{ service }}.rb && eye restart {{ service }}
So you can set up the variable through defaults
defaults/main.yml:
service: "service"
or you can define {{ service }} through the command line:
ansible-playbook -i xxx path/to/playbook -e "service=service"
http://docs.ansible.com/ansible/playbooks_variables.html
PS: http://docs.ansible.com/ansible/playbooks_intro.html#playbook-language-example
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum: name=httpd state=latest
    - name: write the apache config file
      template: src=/srv/httpd.j2 dest=/etc/httpd.conf
      notify:
        - restart apache
    - name: ensure apache is running (and enable it at boot)
      service: name=httpd state=started enabled=yes
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
http://docs.ansible.com/ansible/playbooks_intro.html#handlers-running-operations-on-change
If you ever want to flush all the handler commands immediately though, in 1.2 and later, you can:
tasks:
  - shell: some tasks go here
  - meta: flush_handlers
  - shell: some other tasks
You cannot do this, but you can use set_fact in order to set facts that can be accessed by the handler.
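A minimal sketch of that idea, building on the eye commands from the question (the service name myapp and the template file are hypothetical):

- hosts: all
  tasks:
    - name: remember which service this play manages
      set_fact:
        service: myapp                  # hypothetical service name

    - name: deploy the eye config template
      template:
        src: myapp.rb.j2                # hypothetical template
        dest: /path/{{ service }}.rb
      notify: reload eye service

  handlers:
    - name: reload eye service
      shell: eye load /path/{{ service }}.rb && eye restart {{ service }}

The handler reads the fact at runtime, so it still relies on a play-wide variable rather than a true argument, which is exactly the caveat raised above.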
I poked around a bit here but didn't see anything that quite matched up to what I am trying to accomplish, so here goes.
So I've put together my first Ansible playbook which opens or closes one or more ports on the firewall of one or more hosts, for one or more specified IP addresses. Works great so far. But what I want to do is restart the firewall service after all the tasks for a given host are complete (with no errors, of course).
NOTE: The hostvars/localhost references just hold vars_prompt input from the user in a task list above this one. I store prompted data in hosts: localhost, build a dynamic host list based on what the user entered, and then have a separate task list to actually do the work.
So:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    - set_fact:
        hostList: "{{hostvars['localhost']['hostList']}}"
    - set_fact:
        portList: "{{hostvars['localhost']['portList']}}"
    - set_fact:
        portStateRequested: "{{hostvars['localhost']['portStateRequested']}}"
    - set_fact:
        portState: "{{hostvars['localhost']['portState']}}"
    - set_fact:
        remoteIPs: "{{hostvars['localhost']['remoteIPs']}}"
    - name: Invoke firewall-cmd remotely
      firewalld:
        .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      register: requestStatus
In my original version of the script, which only did 1 port for 1 host for 1 IP, I just did:
- name: Reload firewalld
  when: requestStatus.changed
  systemd:
    name: firewalld
    state: reloaded
But I don't think that will work as easily here because of the nesting. For example, let's say I want to open port 9999 for a remote IP address of 1.1.1.1 on 10 different hosts, and the 5th host has an error for some reason. I may not want to restart the firewall service at that point.
Actually, now that I think about it, in that scenario there would be 4 new entries in the firewall config and 6 that didn't take because of the error. Now I'm wondering if I need to track the successes and have a rescue block within the playbook to back out those entries that did go through.
Grrr.... any ideas? Sorry, new to Ansible here. Plus, I hate YAML for things like this. :D
Thanks in advance for any guidance.
It looks to me like what you are looking for is what Ansible calls handlers.
As we’ve mentioned, modules should be idempotent and can relay when
they have made a change on the remote system. Playbooks recognize this
and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks
in a play, and will only be triggered once even if notified by
multiple different tasks.
For instance, multiple resources may indicate that apache needs to be
restarted because they have changed a config file, but apache will
only be bounced once to avoid unnecessary restarts.
Note that handlers are simply a pair of:
- a notify attribute on one or multiple tasks
- a handler, with a name matching the notify attribute mentioned above
So your playbook should look like
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    # set_fact removed for concision
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      notify: Reload firewalld
  handlers:
    - name: Reload firewalld
      systemd:
        name: firewalld
        state: reloaded
In Ansible, I want to restart a service only if its configuration was changed.
Here is an example:
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
    - shell: service autofs reload
As you can see, this code will always restart autofs, even when the configuration file is not updated.
How can I improve this so it will restart only when the configuration file is changed?
Note: this is a generic question that is not specific to autofs; it could apply to any service where I want to execute something if a configuration file was changed, probably via the lineinfile or ini_file core modules.
For a start, you should be using the service module for controlling running services rather than shelling out. In general, if there's a proper module for something and it does what you need, then you should use that and only resort to shelling out for edge cases.
Also, when an Ansible task runs it returns a series of facts that you can register to use directly. This nearly always includes a changed attribute, a boolean saying whether Ansible thinks it changed something (it can't always know: if a shell task returns something in stdout, it assumes something changed unless you directly override it with changed_when).
So you could go with something like this:
- hosts: workers
  tasks:
    - name: Set autofs options
      lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      register: result

    - name: reload autofs if autofs options are changed
      service: name=autofs state=reloaded
      when: result.changed
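As a side note on the changed_when override mentioned above, here is a rough sketch of suppressing a false "changed" report on a read-only shell check (the grep check is an invented example, not part of the original answer):

- name: check whether the soft option is already present (read-only)
  shell: grep -c '^OPTIONS=.*-O soft' /etc/default/autofs
  register: grep_result
  changed_when: false                       # a pure check should never report "changed"
  failed_when: grep_result.rc not in [0, 1] # grep exits 1 when there is no match, which is not an error here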
If you create a role instead of using loose tasks right in your playbook, you can work with handlers. Also see Best Practices: Task and Handler Organization for a Role.
Tasks file of your role:
- name: Change autofs config
  lineinfile: dest=/etc/default/autofs
              regexp=^OPTIONS=
              line="OPTIONS=\"-O soft\""
              backup=yes
  notify:
    - Restart autofs
Then in your handlers file of the same role:
- name: Restart autofs
  service: name=autofs
           state=restarted
The handler gets notified whenever the config task has a changed state.
PS: I used the service module for managing the service. You should only use the shell module if no specific module for your task is available.
Regarding roles:
One thing you will definitely want to do though, is use the “roles” organization feature, which is documented as part of the main playbooks page. See Playbook Roles and Include Statements. You absolutely should be using roles. Roles are great. Use roles. Roles! Did we say that enough? Roles are great.
I know this is old, but I landed on this and felt there was more to add to what others have already answered. You should probably use a handler, but you do not need a discrete role to use one. Here's an example of handlers used directly in the playbook.
---
- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      notify: reload autofs

  handlers:
    - name: reload autofs
      service: name=autofs state=reloaded
One thing to note: handlers fire off at the end of the play. So if you have multiple tasks and you're expecting the handler to fire right after the task that notified it, you might be in for a bad time. You can use a 'meta' task to flush_handlers if you need that handler to fire in sequential order with your tasks, or you can go the register/conditional way like ydaetskcoR provided. In your play, with a single task, it doesn't really matter.
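A quick sketch of that meta task in this playbook's context (the debug task is only a placeholder to show the ordering):

- hosts: workers
  tasks:
    - lineinfile: 'dest=/etc/default/autofs regexp=^OPTIONS= line="OPTIONS=\"-O soft\"" backup=yes'
      notify: reload autofs

    # run any pending handlers (e.g. reload autofs) right now,
    # instead of waiting until the end of the play
    - meta: flush_handlers

    - debug: msg="autofs has already been reloaded at this point (if the config changed)"

  handlers:
    - name: reload autofs
      service: name=autofs state=reloaded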
I'm new to Ansible. The following are my requirements:
1. Transfer files (.tar.gz) from one host to many machines (38+ nodes) under /tmp as user1
2. Log in to each machine as user2 and switch to the root user using sudo su - (with password)
3. Extract it to another directory (/opt/monitor)
4. Change a configuration in the file (/opt/monitor/etc/config -> host= )
5. Start the process under /opt/monitor/init.d
For this, should I use playbooks or ad-hoc commands ?
I'd be happy to use ad-hoc mode in Ansible, as I'm afraid of playbooks.
Thanks in advance
You'd have to write several ad hoc commands to accomplish this. I don't see any good reason not to use a playbook here. You will want to learn about playbooks, but it's not much more to learn than the ad hoc commands. The sudo parts are taken care of for you by using the -b option to "become" root using sudo. Ansible takes care of logging in for you via SSH.
The modules you'll want to make use of are common for this type of setup where you're installing something from source: yum, get_url, unarchive, service. As an example, here's a pretty similar process to what you need, demonstrating installing redis from source on a RedHat-family system:
- name: install yum dependencies for redis
  yum: name=jemalloc-devel ... state=present

- name: get redis from file server
  get_url: url={{s3uri}}/common/{{redis}}.tar.gz dest={{tmp}}

- name: extract redis
  unarchive: copy=no src={{tmp}}/{{redis}}.tar.gz dest={{tmp}} creates={{tmp}}/{{redis}}

- name: build redis
  command: chdir={{tmp}}/{{redis}} creates=/usr/local/bin/redis-server make install

- name: copy custom systemd redis.service
  copy: src=myredis.service dest=/usr/lib/systemd/system/

# and logrotate, redis.conf, etc

- name: enable myredis service
  service: name=myredis state=started enabled=yes
You could define custom variables like tmp and redis in a group_vars/all.yaml file. You'll also want a site.yaml file to define your hosts and role(s).
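A minimal sketch of what those two files might contain (the host group monitor_nodes and the variable values are assumptions for illustration):

# group_vars/all.yaml -- shared variables (example values)
tmp: /tmp
redis: redis-3.2.8

# site.yaml -- top-level playbook mapping hosts to a role
- hosts: monitor_nodes          # hypothetical inventory group
  become: yes
  roles:
    - redis                     # role containing tasks like those shown above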
You’d invoke the playbook with something like:
ansible-playbook site.yaml -b --ask-become-pass -v
This can operate on your 38+ nodes as easily as on one.
You'll want a playbook to do this. At the simplest level, since you mention unpacking, it might look something like this:
- name: copy & unpack the file
  unarchive: src=/path/to/file/on/local/host
             dest=/path/to/target
             copy=yes

- name: copy custom config
  copy: src=/path/to/src/file
        dest=/path/to/target

- name: Enable service
  service: name=foo enabled=yes state=started
I'm working on a new Opsware agent service check on AIX; its agent path is /etc/rc.d/init.d/opsware-agent.
First, please let me know how to define this path as a variable and reference it in the service task.
Second, it should run the commands only if the Opsware agent service has been restarted. How do I do that, since the version below is not working?
- name: Ensure Opsware agent is running on AIX
  service: name={{ aix_service_path }} state=started enabled=yes
  register: aix_status

- name: Opsware AIX Notify only if it failed
  when: aix_status|success
  notify:
    - hardware refresh
    - software refresh

- name: hardware refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_hardware

- name: software refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_Software
Let me assume the YML formatting is correct and just got broken in your post. Otherwise you first need to indent your lines correctly.
Then make sure your handlers are inside handlers/main.yml. In your post it looks like everything is in the same file which then would of course get executed on every play.
Finally, you can trigger your handlers from the service task itself; there is no need for the dummy task, which additionally wouldn't work because it has no action defined.
So this should work:
your_role/tasks/main.yml:
---
- name: Ensure Opsware agent is running on AIX
  service: name={{ aix_service_path }} state=started enabled=yes
  notify:
    - hardware refresh
    - software refresh
...
your_role/handlers/main.yml:
---
- name: hardware refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_hardware

- name: software refresh
  command: chdir=/opt/opsware/agent/pylibs/cog/ ./bs_Software
...
The handlers will be notified only when the service status is changed.
How you define aix_service_path depends on what you want to achieve. You can define a default value in your_role/defaults/main.yml:
---
aix_service_path: foo
...
Or force it by defining it in your_role/vars/main.yml - same format as defaults above.
You can pass parameters in the role calls in your playbook, e.g.
roles:
  - role: your_role
    aix_service_path: foo
A parameter passed like this would override a definition in defaults/main.yml, but not those defined in vars/main.yml.
You can define it in a vars section in the playbook.
You can pass it on command-line when calling your playbook.
ansible-playbook ... --extra-vars "aix_service_path=foo"
Or define it as a host or group var. You can also define variables in the inventory... There really are a ton of options for defining variables; you have to decide which fits your needs. Check out the variables section in the Ansible docs for more details.
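For the playbook vars section and the group-var option, a minimal sketch (the group name aix_hosts and the value opsware-agent are assumptions):

# In the playbook itself, via a vars section:
- hosts: aix_hosts                    # hypothetical group name
  vars:
    aix_service_path: opsware-agent   # assumed service name
  roles:
    - your_role

# Or as a group variable, in group_vars/aix_hosts.yml:
aix_service_path: opsware-agent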
Can I notify a handler in another role? What should I do to make Ansible find it?
The use case is, e.g., that I want to configure some service and then restart it if the configuration changed. Different OSes probably have different files to edit, and even the file format can be different, so I would like to put them into different roles (because the file format can differ, this can't be handled just by setting group_vars). But the way to restart the service is the same, using the service module, so I'd like to put the handler into a common role.
Is there any way to achieve this? Thanks.
You can also call handlers of a dependency role. This may be cleaner than including files or explicitly listing roles in a playbook just for the purpose of a role-to-role relationship. E.g.:
roles/my-handlers/handlers/main.yml
---
- name: nginx restart
  service: >
    name=nginx
    state=restarted
roles/my-other/meta/main.yml
---
dependencies:
  - role: my-handlers
roles/my-other/tasks/main.yml
---
- copy: >
    src=nginx.conf
    dest=/etc/nginx/
  notify: nginx restart
You should be able to do that if you include the handler file.
Example:
handlers:
  - include: someOtherRole/handlers/main.yml
But I don't think it's elegant.
A more elegant way is to have a play that manages both roles, something like this:
- hosts: all
  roles:
    - role1
    - role2
This will make both roles able to call each other's handlers.
But again, I would suggest making it all in one role with separate task files and using a conditional include: http://docs.ansible.com/playbooks_conditionals.html#conditional-imports. A rough sketch of that is below.
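For what that single-role, conditional-include layout might look like (the role name myservice, the task file names, and the Debian/RedHat split are assumptions):

# roles/myservice/tasks/main.yml -- one role, OS-specific task files
- include: configure-debian.yml
  when: ansible_os_family == "Debian"

- include: configure-redhat.yml
  when: ansible_os_family == "RedHat"

# roles/myservice/handlers/main.yml -- the shared restart handler
- name: restart myservice
  service: name=myservice state=restarted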
Hope that helps
You may import additional handlers into your role's handlers/main.yml file by using import_tasks.
So, if MyRole needs to call handlers in some OtherRole, roles/MyRole/handlers/main.yml will look like this:
- import_tasks: roles/OtherRole/handlers/main.yml
Of course roles/MyRole/handlers/main.yml may include additional handlers as well.
This way, if I want to run MyRole without running tasks from OtherRole, Ansible will still be able to correctly import and run the handlers from OtherRole.
I had a similar issue, but needed to take many actions in the other dependent roles.
So rather than invoking the handler, we set a fact like so:
- name: install mylib to virtualenv
  pip: requirements=/opt/mylib/requirements.txt virtualenv={{ mylib_virtualenv_path }}
  sudo_user: mylib
  register: mylib_wheel_upgraded

- name: set variable if source code was upgraded
  set_fact:
    mylib_source_upgraded: true
  when: mylib_wheel_upgraded.changed
Then elsewhere in another role:
- name: restart services if source code was upgraded
  command: /bin/true
  notify: restart mylib server
  when: mylib_source_upgraded
Currently I'm using Ansible v2.10.3, and it supports calling handlers in different roles. This is because handlers are visible at the play level, as the Ansible docs say. You can see the docs mention this in the bottom-most point.
handlers are play scoped and as such can be used outside of the role they are defined in.
FYI, I tested the solution, i.e. calling another role's handlers, and it works! No need to import anything; just make sure that the roles are in the same playbook execution.
To illustrate:
roles/vm/handlers/main.yaml
---
- name: rebootvm
  ansible.builtin.reboot:
    reboot_timeout: 600
    test_command: whoami
roles/config-files/tasks/main.yaml
---
- name: Copy files from local to remote
  ansible.builtin.copy:
    dest: /home/ubuntu/config.conf
    src: config.conf
    backup: yes
    force: yes
  notify:
    - rebootvm
So when the config file (i.e. config.conf) changes, Ansible will copy it to the remote location and notify the handler rebootvm, and the VM will then be rebooted.
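To tie it together, a sketch of a playbook that puts both roles in the same play, so that config-files can notify the handler defined in vm (the playbook name and host pattern are assumptions):

# site.yaml -- hypothetical top-level playbook
- hosts: all
  become: yes
  roles:
    - vm             # defines the rebootvm handler
    - config-files   # notifies rebootvm when config.conf changes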
P.S. I don't know exactly which Ansible version started supporting this.