In many of my Ansible roles I have a handler to restart a service and a task to make sure the service is enabled and started. On a first run, Ansible will start my service (Ensure mongodb is started and enabled) and then, at the end of the play, run the restart handler. How do I tell Ansible to only start it once?
Sample play:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
Insert a flush_handlers meta task just before the service start task to run any pending restart handler. Like this:
- name: flush handlers
  meta: flush_handlers

## your service start task
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
If handlers are notified, the handler will restart the services (I assume you have handlers that restart the services).
If handlers are not notified, the Ensure service started task will start the service if it is not already started.
It's been a while since I asked this but anywho, if it helps, the solution I ended up with is below...
The trick is to:
1. Have the service task in the tasks/main.yml file with the parameters state: started and enabled: yes split out into their own tasks
2. For the Ensure mongodb is started task, register a mongodb_service_started variable
3. Use the newly registered variable mongodb_service_started in the when clause of the Restart mongodb handler, so that it only runs when the Ensure mongodb is started task did NOT change

This ensures that your application does not get started and then subsequently restarted in the same play, and prevents your application from being stopped while it is running some critical function on startup, for example database migrations. Restarting a service right after it has been started can cause an inconsistent state.
tasks/main.yml
---
- name: Install MongoDB package
  yum:
    name: "mongodb-org-{{ mongodb_version }}"
    state: present
  notify: Restart mongodb

- name: Configure mongodb
  template:
    src: mongod.conf.j2
    dest: "/etc/{{ mongodb_config_file }}"
    owner: root
    group: root
    mode: 0644
  notify: Restart mongodb

- name: Ensure mongodb is started
  service:
    name: "{{ mongodb_daemon_name }}"
    state: started
  register: mongodb_service_started

- name: Ensure mongodb is enabled
  service:
    name: "{{ mongodb_daemon_name }}"
    enabled: yes
handlers/main.yml
---
- name: Restart mongodb
  service:
    name: "{{ mongodb_daemon_name }}"
    state: restarted
  when: not mongodb_service_started.changed
Maybe you are looking for something like this:
---
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present
  register: mongodb_install

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} state=started enabled=yes
  when: mongodb_install.changed
I can think of three ways.
Option #1. Remove the state from the task that enables the service and move it to a handler.
According to the Ansible documentation, "notify handlers are always run in the same order they are defined." Since started is idempotent and restarted always starts the service, the second handler (start) effectively becomes a no-op whenever the first one (restart) has already run.
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start

# roles/mongodb/handlers/main.yml
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted

- name: mongodb start
  service: name={{ mongodb_daemon_name }} state=started
However, handlers are only triggered if the task is changed, so this doesn't ensure that the service is running if neither of those tasks is considered changed. For instance, if the service was already installed, enabled, and configured earlier, but someone has logged on to the host and shut it down manually, then it won't be started when this playbook is run.
Option #2. Add a post_tasks task that always runs and ensures the service has been started.
Handlers are flushed before post tasks, so if the configuration has changed, whether during first installation or later, the service will either be started or restarted. However, if nothing has changed and the service isn't running for some reason, it will explicitly be started during the post_tasks phase.
# roles/mongodb/tasks/main.yml
- name: Install MongoDB package
  yum: name="mongodb-org-{{ mongodb_version }}" state=present

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart

- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes

# In your playbook, e.g., site.yml
- hosts: all
  roles:
    - mongodb
  post_tasks:
    - name: Ensure mongodb is running
      service: name={{ mongodb_daemon_name }} state=started
Option #3. Like Option #1, except always trigger the "start" handler
# roles/mongodb/tasks/main.yml
# ...same as Option #1 above...
- name: Ensure mongodb is started and enabled
  service: name={{ mongodb_daemon_name }} enabled=yes
  notify: mongodb start
  changed_when: yes
As long as seeing a "change" in your output doesn't bother you, this will work the way you want.
If you have a handler (in the handlers directory of your role), then you do not need an additional task to start/enable mongo. In your handler you should not start but restart (state=restarted). When notified for the first time, it will start and enable your service; in the future, when the service is already running, it will restart it.
- name: mongodb restart
  service: name={{ mongodb_daemon_name }} state=restarted enabled=yes
No additional task required.
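For example, with the question's tasks, only the notify line is needed on each task that changes configuration (a sketch reusing the template task from the question):

- name: Configure mongodb
  template: src=mongod.conf.j2 dest=/etc/{{ mongodb_config_file }} owner=root group=root mode=0644
  notify: mongodb restart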
I added some rules to firewalld on CentOS 7 with Ansible, but I must reload the firewalld daemon so that the rules work properly. Any ideas?
Here is my Ansible code:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
  loop:
    - 8080/tcp
    - 8000/tcp
    - 8090/tcp
    - 8040/tcp
First of all, use a loop for the list of ports, as below:
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
  when: ansible_distribution == 'CentOS' or ansible_distribution == 'Red Hat Enterprise Linux'
  loop:
    - 8080/tcp
    - 8000/tcp
    - 8090/tcp
    - 8040/tcp
You can also use the code below to enter ports at runtime if they are not fixed, and use them as a variable:
- hosts: localhost
  gather_facts: no
  vars_prompt:
    - name: ports
      prompt: "Enter port(s) number"
      private: no
  tasks:
    - name: add port
      firewalld:
        port: "{{ item }}"
        permanent: yes
        immediate: yes
        state: enabled
      with_items: "{{ ports.split(',') }}"
Regarding reloading firewalld: as mentioned here, we can't reload firewalld using the state parameter, so use the systemd module as below:
- name: reload service firewalld
  systemd:
    name: firewalld
    state: reloaded
The firewalld module has an immediate option, which performs the same reload as the firewall-cmd CLI tool.
- name: Add port to firewalld
  firewalld:
    port: "{{ item }}"
    permanent: yes
    state: enabled
    immediate: true
You can use the service or systemd module.

# Supports init systems including BSD init, OpenRC, SysV, Solaris SMF, systemd, and upstart.
- name: Restart service
  service:
    name: firewalld
    state: restarted

# Controls systemd services on remote hosts.
- name: Restart service
  systemd:
    name: firewalld
    state: restarted
    daemon_reload: yes
I'm a bit late, but given that all previous answers seem to just speculate, I will give another input. Firewalld is not reloaded with 'service' or 'systemctl' commands but rather with its own specific command:
firewall-cmd --reload
That way, you can load new rules without interrupting any active network connections, as would happen when using iptables directly.
Given this, I think using service or systemctl is not a good solution.
So if you just want to create a task, I suggest using Ansible's command module to execute this command. Or you could write a handler like so:
- name: reload firewalld
  command: firewall-cmd --reload
Just put the handler in the handlers/main.yml file inside your role. Then in your tasks you can call that handler with:
notify: reload firewalld
That way Ansible only executes the handler at the end of your Ansible run. I successfully tested this on RHEL7.
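For completeness, a minimal sketch of how the pieces fit together in a role (the role name and port value are illustrative, not from the question):

# roles/firewall/tasks/main.yml
- name: Add port to firewalld
  firewalld:
    port: 8080/tcp
    permanent: yes
    state: enabled
  notify: reload firewalld

# roles/firewall/handlers/main.yml
- name: reload firewalld
  command: firewall-cmd --reload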
If you are using the permanent option, you can also use the immediate option.
Example:
- name: Apply Base Services
  ansible.posix.firewalld:
    service: "{{ item }}"
    state: enabled
    zone: public
    permanent: yes
    immediate: true
  loop:
    - http
    - https
After this rule is applied, firewalld will reload automatically.
You already got a number of excellent answers. There is yet another possible approach (although the reloading part is the same as in cstoll's answer).
If you are certain that nobody and nothing else but Ansible will ever manipulate firewalld rules, you can use a template to directly generate the zone XML files in /etc/firewalld/zones. You will still need to add
notify: reload firewalld
and the corresponding handler, as in cstoll's answer.
The main advantage of this approach is that it can be dramatically faster and simpler than adding the rules one at a time.
The drawback of this approach is that it will not preserve any rules added to firewalld outside of Ansible. A second drawback is that it will not do any error checking; you can create invalid zone files easily. The firewall-cmd command (and thus the firewalld module) will verify the validity of each rule. For instance, it checks that zones do not overlap.
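A minimal sketch of the templating task, assuming a hypothetical public.xml.j2 template in the role and the reload firewalld handler from above:

- name: Deploy firewalld public zone definition
  template:
    src: public.xml.j2    # hypothetical template rendering the zone XML
    dest: /etc/firewalld/zones/public.xml
    owner: root
    group: root
    mode: 0644
  notify: reload firewalld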
I am creating a systemd service using the template module:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
The contents of sonarqube.service can of course change. On change, I want to restart the service. How can I do this?
There are two solutions.
Register + When changed
You can register the template module's output (which includes its changed status):

register: service_conf

and then use a when clause:

when: service_conf.changed
For example:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  register: service_conf

- name: restart service
  service:
    name: sonarqube
    state: restarted
  when: service_conf.changed
Handler + Notify
You define your restart-service task as a handler, and then in your template task you notify the handler.
tasks:
  - name: Add Sonarqube to Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    when: "ansible_service_mgr == 'systemd'"
    notify: Restart Sonarqube
  - …

handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted
More info can be found in the Ansible documentation.
Difference between the two?
In the first case, the service will restart directly. In the case of the handler, the restart will happen at the end of the play.
Another difference: if you have several tasks whose changes require a restart of your service, you simply add the notify to all of them.
The handler will run if any of those tasks reports a changed status. With the first solution, you would have to register several return values, and end up with a longer when: clause_1 or clause_2 or …
The handler will run only once even if notified several times.
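As a sketch of that last point, two tasks notifying the same handler (the second config file is hypothetical):

tasks:
  - name: Add Sonarqube to Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    notify: Restart Sonarqube
  - name: Configure Sonarqube
    template:
      src: sonar.properties.j2                   # hypothetical second config file
      dest: /opt/sonarqube/conf/sonar.properties
    notify: Restart Sonarqube
# Restart Sonarqube runs at most once, at the end of the play,
# if either task reported a change.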
This calls for a handler
---
- name: Testplaybook
  hosts: all

  handlers:
    - name: restart_service
      service:
        name: <servicename>
        state: restarted

  tasks:
    - template:
        src: ...
        dest: ...
      notify:
        - restart_service
The handler will automatically get notified by the module when something changed. See the documentation for further information on handlers.
Since you are using systemd, you will also need to execute daemon-reload because you updated the service file.
The task just templates the service file and notifies a handler:
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  notify: restart sonarqube systemd
Based on the presence of your specific when clause above, I'm assuming you might want to specify separate handlers in the case that systemd is not in use. The handler for the systemd case would look like the following:
- name: restart sonarqube systemd
  systemd:
    name: sonarqube
    state: restarted
    daemon_reload: yes
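For the non-systemd case alluded to above, a fallback handler might look like the following sketch, using the generic service module (daemon_reload does not apply there):

- name: restart sonarqube
  service:
    name: sonarqube
    state: restarted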
I'm trying to set up a role in Ansible to provision some servers with needed applications. One of the apps is Docker.
docker-ce installs successfully. Now I'm trying to tell the system to start docker.service and enable it at boot.
When I create the list inline with with_items it works fine, but when I try to use a list from my defaults/main.yml file, Ansible tells me that it can't find the service docker. Now I'm wondering, is it maybe just some spelling problem?
This one works fine
- name: Start and enable needed services
  systemd:
    name: "{{ item }}"
    state: started
    enabled: yes
    daemon_reload: yes
  with_items:
    - docker
This one doesn't work
- name: Start and enable needed services
  systemd:
    name: "{{ clientonline }}"
    state: started
    enabled: yes
    daemon_reload: yes

# in defaults/main.yml
clientonline:
  - docker
Ansible can't find the docker service when I'm using my list from defaults/main.yml
[WARNING]: The value ['docker'] (type list) in a string field was converted to u"['docker']" (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
Also this example doesn't work:
- name: Start and enable needed services
  systemd:
    name: "{{ item }}"
    state: started
    enabled: yes
    daemon_reload: yes
  with_items:
    - clientonline
That brings this error:
failed: [fgi_appdeploy_server] (item=clientonline) => {"ansible_loop_var": "item", "changed": false, "item": "clientonline", "msg": "Could not find the requested service clientonline: host"}
This will work:
- name: Start and enable needed services
  systemd:
    name: "{{ item }}"
    state: started
    enabled: yes
    daemon_reload: yes
  with_items:
    - "{{ clientonline }}"
Since clientonline is a list, you need to loop through it.
OK; this now works fine for systemd:
- name: Start and enable needed services
  systemd:
    name: '{{ item }}'
    state: started
    enabled: yes
    daemon_reload: yes
  with_items:
    - '{{ clientonline }}'
But the same style for the yum module brings this deprecation warning, which is nice to know:
- name: Install needed packages
  yum:
    name: '{{ item }}'
    state: latest
  with_items:
    - '{{ clientpackages }}'
[DEPRECATION WARNING]: Invoking "yum" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying `name: "{{ item }}"`, please use `name: ['{{ serverpackages }}']` and remove the loop. This feature will
be removed in version 2.11. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
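Following the warning's advice, the loop can be dropped and the list passed directly to name (a sketch, assuming clientpackages is the list from defaults/main.yml):

- name: Install needed packages
  yum:
    name: '{{ clientpackages }}'
    state: latest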
Nice to know - thanks to all!
I have an Ansible playbook for deploying a Java app as an init.d daemon.
Being a beginner in both Ansible and Linux, I'm having trouble conditionally executing tasks on a host based on the host's status.
Namely, some hosts already have the service present and running, and there I want to stop it before doing anything else. Then there might be new hosts which don't have the service yet. So I can't simply use service: name={{service_name}} state=stopped, because this will fail on new hosts.
How can I achieve this? Here's what I have so far:
- name: Check if Service Exists
  shell: "if chkconfig --list | grep -q my_service; then echo true; else echo false; fi;"
  register: service_exists

# This should only execute on hosts where the service is present
- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_exists
  register: service_stopped

# This too
- name: Remove Old App Folder
  command: rm -rf {{app_target_folder}}
  when: service_exists

# This should be executed on all hosts, but only after the service has stopped, if it was present
- name: Unpack App Archive
  unarchive: src=../target/{{app_tar_name}} dest=/opt
See the service_facts module, new in Ansible 2.5.
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker' in services"
Of course I could also just check if the wrapper script exists in /etc/init.d. So this is what I ended up with:
- name: Check if Service Exists
  stat: path=/etc/init.d/{{service_name}}
  register: service_status

- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_status.stat.exists
  register: service_stopped
I modified Florian's answer to use only the return code of the service command (this worked on Mint 18.2):
- name: Check if Logstash service exists
  shell: service logstash status
  register: logstash_status
  failed_when: not (logstash_status.rc == 3 or logstash_status.rc == 0)

- name: Stop Logstash service
  service:
    name: logstash
    state: stopped
  when: logstash_status.rc == 0
It would be nice if the "service" module could handle "unrecognized service" errors.
This is my approach, using the service command instead of checking for an init script:
- name: check for apache
  shell: "service apache2 status"
  register: _svc_apache
  failed_when: >
    _svc_apache.rc != 0 and ("unrecognized service" not in _svc_apache.stderr)

- name: disable apache
  service: name=apache2 state=stopped enabled=no
  when: "_svc_apache.rc == 0"
Check the exit code of "service status", and accept a non-zero exit code when the output contains "unrecognized service".
If the exit code was 0, the service is installed (stopped or running).
Another approach for systemd (from Jakuje):
- name: Check if cups-browsed service exists
  command: systemctl cat cups-browsed
  check_mode: no
  register: cups_browsed_exists
  changed_when: False
  failed_when: cups_browsed_exists.rc not in [0, 1]

- name: Stop cups-browsed service
  systemd:
    name: cups-browsed
    state: stopped
  when: cups_browsed_exists.rc == 0
Building on Maciej's answer for RedHat 8, and combining it with the comments made on it, this is how I managed to stop Celery only if it was already installed:
- name: Populate service facts
  service_facts:

- debug:
    msg: httpd installed!
  when: ansible_facts.services['httpd.service'] is defined

- name: Stop celery.service
  service:
    name: celery.service
    state: stopped
    enabled: true
  when: ansible_facts.services['celery.service'] is defined
You can drop the debug statement; it's there just to confirm that ansible_facts is working.
This way, using only the service module, has worked for us:
- name: Disable *service_name*
  service:
    name: *service_name*
    enabled: no
    state: stopped
  register: service_command_output
  failed_when: >
    service_command_output|failed
    and 'unrecognized service' not in service_command_output.msg
    and 'Could not find the requested service' not in service_command_output.msg
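Note that the failed filter used above was deprecated and later removed; on Ansible 2.9+ the same condition would be written with the is failed test (a sketch, otherwise unchanged):

failed_when: >
  service_command_output is failed
  and 'unrecognized service' not in service_command_output.msg
  and 'Could not find the requested service' not in service_command_output.msg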
My few cents: the same approach as above, but for Kubernetes.
Check if the kubelet service is running:
- name: "Obtain state of kublet service"
command: systemctl status kubelet.service
register: kubelet_status
failed_when: kubelet_status.rc > 3
Display a debug message if the kubelet service is not running:
- debug:
    msg: "{{ kubelet_status.stdout }}"
  when: "'running' not in kubelet_status.stdout"
You can use the service_facts module since Ansible 2.5. But you need to know that the output contains the real names of the services, like docker.service or something#.service. So you have different options, like:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker.service' in services"
Or you can search for the string beginning with the service name; that is much more reliable, because the service names are different between the distributions:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "services.keys()|list|select('search', '^docker')|length > 0"
New to Ansible, I was considering the following creation of services on the fly and how best to manage this. I have described below how this doesn't work, but hopefully it's enough to describe the problem.
Any pointers appreciated. Thanks.
I'm using a template file to deploy a bunch of near-identical application servers. During the deployment of the application servers, a corresponding init script is placed using the variable:

/etc/init.d/{{ application_instance }}
Next I'd like to enable it and ensure it's started:
- name: be sure app_xyz is running and enabled
  service: name={{ application_instance }} state=started enabled=yes
Further on, I'd like to trigger a restart of the application when configuration files are updated:
- name: be sure app_xyz is configured
  template: src=xyz.conf dest=/opt/application/{{ application_server }}.conf
  notify:
    - restart {{ application_server }}
With the handler looking like this:
- name: restart {{ application_server }}
  service: name={{ application_server }} state=restarted
You don't need a dynamic handler name for it. What about a static handler name:
# handler
- name: restart application server
  service: name={{ application_server }} state=restarted

# task
- name: be sure app_xyz is configured
  template: src=xyz.conf dest=/opt/application/{{ application_server }}.conf
  notify:
    - restart application server