I created a playbook that uses ansible_facts.services to restart specific services on multiple RHEL servers. The services that get restarted begin with a specific name and may or may not exist on the different hosts where the playbook is run.
I have this working correctly, but I would also like to add a follow-up task that checks the services that were restarted in the previous task to make sure they are running. The task would need to check only the services that were restarted, then start any service whose status is stopped.
Please recommend the best way to do this. Include code if possible. Thanks!
It sounds odd, but you can try to use handlers (to react to 'changes') and use check_mode on the module to verify the state.
Add 'notify' to the restarting task and check_mode: true to the handler. The handler is basically the same systemd module, but without the ability to change anything.
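A minimal sketch of that idea; the host group, the 'myapp' service-name prefix, and the handler name are only illustrative:

- hosts: rhel_servers
  tasks:
    - name: Gather service facts
      service_facts:

    - name: Restart the matching services
      systemd:
        name: "{{ item }}"
        state: restarted
      loop: "{{ ansible_facts.services | select('match', '^myapp') | list }}"
      notify: verify services are running

  handlers:
    - name: verify services are running
      systemd:
        name: "{{ item }}"
        state: started
      check_mode: true   # reports whether a start would be needed, changes nothing
      loop: "{{ ansible_facts.services | select('match', '^myapp') | list }}"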
... Or, if you want to test your infrastructure, you may like testinfra (pytest-testinfra) as a tool to check it.
I understand your question to mean that you want to make sure a list of services is running.
- name: Loop over all services and print name if not running
  debug:
    msg: "{{ item }}"
  when:
    - ansible_facts.services[item].state != 'running'
  with_items: "{{ ansible_facts.services }}"
Based on that example you could start the service if the status is stopped.
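A sketch that builds on the example above and starts anything that is currently stopped; in practice you would filter the names down to the services you actually restarted, so the '^myapp' prefix here is an assumption to adapt:

- name: Start the matching services that are currently stopped
  service:
    name: "{{ item }}"
    state: started
  when: ansible_facts.services[item].state == 'stopped'
  with_items: "{{ ansible_facts.services | select('match', '^myapp') | list }}"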
Further Q&A
Ansible: How to get disabled but running services?
How to list only the running services with ansible_facts?
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
We are patching Windows Servers with Ansible. For some reason a lot of "Automatic" Windows services are not starting automatically, so I'm doing this with Ansible after startup:
- name: start services
  when: services is defined
  ansible.windows.win_service:
    name: "{{ item }}"
    state: started
  with_items:
    - "{{ services }}"
In host_vars there is, for example, mssql01.yml with services: "SQL Server (MSSQLSERVER)".
This works as expected, but it results in a lot of host_vars, which is not as clean as I would like. I thought about registering all services that were running before, and if a service was running and its startup type is Automatic/Automatic (Delayed), starting it. Sadly, ansible.windows.win_service needs a service name, so it can't just register all services without knowing them.
Does anybody have a smart idea how to ensure that every service set to Automatic gets started, or do we need to troubleshoot why this is not happening on its own? Even if that were solved, you can't be 100% sure the services start, so a solution would be good in any case.
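One way to avoid the per-host lists would be to enumerate the services on the target first and filter by startup type. A rough sketch of that idea; the field names and values (start_mode, state) come from ansible.windows.win_service_info and should be verified against your Ansible version:

- name: gather information about all Windows services
  ansible.windows.win_service_info:
  register: svc_info

- name: start stopped services whose startup type is automatic
  ansible.windows.win_service:
    name: "{{ item.name }}"
    state: started
  loop: "{{ svc_info.services
            | selectattr('start_mode', 'in', ['auto', 'delayed'])
            | selectattr('state', 'equalto', 'stopped')
            | list }}"
  loop_control:
    label: "{{ item.name }}"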
You might not understand what I am trying to say in the question, so before stating the problem I will give an example.
Let's take Terraform as the example.
Imagine we use Terraform to spin up an EC2 instance in AWS, and we are currently storing the state locally. Let's say we need to add a tag to the EC2 instance.
What we do is add this small block to achieve that:
tags = {
  Name = "HelloWorld"
}
Nothing complex, just as simple as that. As we add a tag, Terraform looks for changes against the state and, since it finds that only a tag has been added, it applies just the tag without recreating the whole instance. (There can be other scenarios where it needs to recreate the instance, but let's leave those aside for now.)
So, as I mentioned, Terraform doesn't recreate the instance just to add a tag. If we want to add another tag, it will add that tag without recreating the instance. As simple as that.
No matter how many times we run terraform apply, it doesn't make any changes to existing resources unless we made some in the Terraform files.
So, let's now come to the real question.
Let's say we want to install httpd using Ansible, so I write an Ansible playbook. It uses the yum module to install the package and then starts and enables the service.
Okay, it is as simple as that.
Let's say we run the same playbook a second time. Will it try to execute the same commands from scratch, without first checking whether it has executed this playbook before?
I have had a couple of experiences where I installed some services and, when I executed the same playbook again, it failed.
Is it possible to preserve state in Ansible just like we do in Terraform, so it only executes the newer changes rather than executing everything from the start on the second run?
You seem to be describing idempotence:
Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application.
It's up to you to make sure your playbooks are idempotent in ansible. You could do this for your example using the package and service modules.
- name: Setup Apache
  hosts: webservers
  tasks:
    - name: Install Apache
      package:
        name: httpd
        state: present
    - name: Start and enable Apache
      service:
        name: httpd
        state: started
        enabled: yes
Ansible doesn't achieve idempotence by retaining state on the controller. Each module should check the state of the host it's operating on to determine what changes it needs to make to achieve the state specified by the playbook. If it determines that it doesn't need to make any changes, then that's what it reports back to the controller.
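That also means tasks which shell out are not idempotent by themselves. A small sketch, with illustrative paths, using the command module's creates argument so the second run skips the task:

- name: Unpack the application only if it hasn't been unpacked yet
  command: tar -xzf /tmp/app.tar.gz -C /opt/app
  args:
    creates: /opt/app/bin/start.sh   # skip the task when this path already exists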
I am trying to query a list of EC2 instances via Ansible using the ec2 plugin for dynamic inventories.
I can see the utility of using dynamic inventories: if new machines are added, Ansible will automatically execute a play against them. But I also saw on the net that it is possible to spawn instances with Ansible and manually add the new hosts to a static list of hosts.
So my question is: what are the use cases where we would use dynamic inventories vs. static inventories? I'm new to the realm of DevOps, so I don't know how often we need to spawn instances automatically vs. doing it manually via the AWS console, for example.
Thanks!
In case you use auto scaling groups, you have to use dynamic inventories.
If you launch EC2 instances temporarily as part of a build pipeline, use dynamic inventories, e.g. you just want to test the deployment of your software and terminate the machine after that test.
If you want to disable Ansible plays on some machines, you can create dynamic inventories based on EC2 tags, e.g. you have a security play that runs on all web servers each hour, but a developer wants to test something on his machine. He can tag his machine to be skipped and doesn't need access to the inventory file (and you can run another play once at midnight to enable the security play again, so it won't be forgotten).
By the way: you can use ec2_instance_facts with filter options and add_host to create dynamic inventories during the playbook's run time.
E.g. you have three types of server: "web", "app", "db". You tag the EC2 instances during launch with servertype: [web|app|db]. You can filter these instances with:
- name: collect ec2s
  ec2_instance_facts:
    region: "{{ region }}"
    filters:
      "tag:servertype": "{{ servertype_list }}"
  register: ec2_list
and run your play selectively on a server group with an external variable: ansible-playbook test.yml -e servertype_list=['web','app'] or ansible-playbook test.yml -e servertype_list=['db'].
So by tagging the machines you avoid having to maintain a static inventory.
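To complete the picture, the registered result can be fed to add_host to build an in-memory group that a later play targets. A sketch under the assumption that ec2_instance_facts returns an instances list with a private_ip_address field, and with dynamic_ec2 as an illustrative group name:

- name: build an in-memory group from tagged instances
  hosts: localhost
  gather_facts: false
  tasks:
    - name: collect ec2s
      ec2_instance_facts:
        region: "{{ region }}"
        filters:
          "tag:servertype": "{{ servertype_list }}"
      register: ec2_list

    - name: add the matching instances to a run-time group
      add_host:
        name: "{{ item.private_ip_address }}"   # or public_dns_name, depending on how you reach the hosts
        groups: dynamic_ec2
      loop: "{{ ec2_list.instances }}"

- name: run against the group that was just built
  hosts: dynamic_ec2
  tasks:
    - name: example task
      ping: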
I want to write a playbook to synchronize a source file to a destination host and restart tomcat/apache if the file changed. The documentation on synchronize does not give any example of whether this is possible. Can anyone provide some pointers?
If you're only changing one file, you probably want to use copy instead of synchronize. However, this approach should work either way.
The handler system is designed for this sort of thing. The documentation there provides an example of bouncing memcached after a configuration file change:
Here’s an example of restarting two services when the contents of a
file change, but only if the file changes:
- name: template configuration file
  template: src=template.j2 dest=/etc/foo.conf
  notify:
    - restart memcached
    - restart apache
The things listed in the notify section of a task are called handlers.
Handlers are lists of tasks, not really any different from regular
tasks, that are referenced by a globally unique name, and are notified
by notifiers. If nothing notifies a handler, it will not run.
Regardless of how many tasks notify a handler, it will run only once,
after all of the tasks complete in a particular play.
Here’s an example handlers section:
handlers:
  - name: restart memcached
    service: name=memcached state=restarted
  - name: restart apache
    service: name=apache state=restarted
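Adapted to the question, a sketch could look like the following; the service names (tomcat, httpd) and file paths are assumptions, and because synchronize (like copy) only reports changed when the file actually changed, the handlers only fire in that case:

- name: sync the application file to the destination host
  synchronize:
    src: files/app.war                  # illustrative source
    dest: /opt/tomcat/webapps/app.war   # illustrative destination
  notify:
    - restart tomcat
    - restart apache

handlers:
  - name: restart tomcat
    service:
      name: tomcat
      state: restarted
  - name: restart apache
    service:
      name: httpd
      state: restarted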
Let's imagine a playbook with the following roles: base, monitoring, nginx, and another playbook with only base and nginx.
Now I want a task in the monitoring role to run only if the playbook includes the nginx role, because for monitoring nginx I have to pass slightly different configuration to the monitoring service.
How do I execute a task that depends on the presence of another role?
While my workaround in the comments might have worked for you, I feel it's still not the best approach. It's not modular: in a situation where you change your monitoring system, you'd need to go into each role, check whether it has a monitoring component, and update it... not the most optimal way.
Perhaps a better way would be to still keep a separate monitoring role, but execute specific tasks there using playbook conditionals. For example, the nginx monitoring task would execute only when the server is part of your [webservers] group, or when a certain variable is set to a specific value, or some other appropriate conditional is met.
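A short sketch of the group-based conditional; the group name webservers matches the example above, and the monitoring task itself is only illustrative:

- name: configure nginx monitoring only on hosts in the webservers group
  template:
    src: nginx_monitoring.conf.j2             # illustrative template name
    dest: /etc/monitoring/conf.d/nginx.conf   # illustrative destination
  when: "'webservers' in group_names"   # group_names holds the groups the current host belongs to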
It is possible to set a fact with set_fact in the nginx role (set_fact: nginx=True) and then check it in the monitoring role, executing the task only when the fact is defined and true (when: nginx is defined and nginx).
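A sketch of that approach, assuming the nginx role runs before the monitoring role in the play (otherwise the flag is not yet set when monitoring checks it); the variable name and file paths are illustrative:

# roles/nginx/tasks/main.yml
- name: flag that the nginx role was applied to this host
  set_fact:
    nginx_present: true

# roles/monitoring/tasks/main.yml
- name: push nginx-specific monitoring configuration
  template:
    src: monitor_nginx.conf.j2
    dest: /etc/monitoring/conf.d/nginx.conf
  when: nginx_present | default(false)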