Stop Tomcat Service based on environment using Ansible

I am working on Ansible scripts to stop the Tomcat service on remote hosts.
We have different environments; in one environment the service name is different, and in the remaining environments it is the same.
Below is the task I am using:
- name: stop WEB service
  win_service:
    name: "{{ item }}"
    state: stopped
  loop: "{{ web_service }}"
group vars:
web_service:
  - 'Tomcat9.0.45'
In one environment the service name is Tomcat9.0.45, and in the remaining environments it is Tomcat9.
Can anyone help me update the task to stop the service based on the environment?

There are a couple of ways to solve this. One robust option would be to add a variable to every group/host with the relevant version. For example, configure this on the relevant group/host:
tomcat_version: 9.0.45
And on the second group / hosts:
tomcat_version: 9
and in your task, you won't need the loop at all
- name: stop WEB service
  win_service:
    name: "Tomcat{{ tomcat_version }}"
    state: stopped
This way, every group/host has its own version. It is also easier to change in the future (what if tomorrow you need 9.0.60?).
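For instance, a minimal sketch of the corresponding group_vars files, assuming two hypothetical inventory group names env_full and env_major:
# group_vars/env_full.yml -- hosts where the service is named Tomcat9.0.45
tomcat_version: 9.0.45
# group_vars/env_major.yml -- hosts where the service is named Tomcat9
tomcat_version: 9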


How to get a list of all hosts specified by ansible-playbook --limit

The situation
I have a playbook whose execution I sometimes want to limit to specific hosts, both web servers and application servers, e.g. --limit mywebserver,appserver1,appserver2. I have two groups:
webservers contains the web servers, including mywebserver
appservers contains the application servers, including appserver1 and appserver2
The playbook includes a play that runs for each of my web servers:
- name: Install and prepare Webservers
  hosts: webservers
  roles:
    - role: webserver_setup
In that play, I execute tasks, e.g., setting up a website for each application server that is stored in a group variable:
- name: Configure website for all application servers
  notify: Reload webserver
  loop: "{{ groups['webserver_' + inventory_hostname] }}"
  ansible.builtin.template:
    src: ./templates/website.y2
    dest: "/etc/nginx/sites-available/{{ hostvars[item]['app_server'] }}.conf"
    mode: '0640'
    owner: webserver
    group: webserver
That works fine if I run the script without --limit. However, if I specify a host for --limit, then the loop in the play is still executed for all application servers, because it doesn't take the limit option into account. (No surprise here.)
My question
My problem is that I have a lot of application servers, and I would like to limit the play's loop to only the hosts specified by --limit. How do I do that?
Thoughts on solutions
There are special variables provided by Ansible, ansible_play_hosts, ansible_play_batch, but they only provide the hosts limited by --limit for the current play. In my case, these variables would contain only mywebserver, because the play is called for hosts: webservers.
I've thought about refactoring the play so that it is called for each application server instead, and using delegate_to to execute the task above on the associated web server. However, I think this would have some drawbacks (SSH login overhead for each app server, a web server config reload for each app server, and possible conflicts due to parallel task execution on the web server).
ansible_limit provides the limit that was passed on the command line, and can be expanded into a list of hostnames using the inventory_hostnames lookup. You could just use this in a condition; it's not entirely clear from the question what your actual structure is, but one of the following is probably right:
when: item in query('inventory_hostnames', ansible_limit | default('all'))
# OR
when: hostvars[item]['app_server'] in query('inventory_hostnames', ansible_limit | default('all'))
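Applied to the task from the question, it could look like this (a minimal sketch, assuming the first when variant matches your inventory layout; the condition is then driven by whatever you pass with --limit, e.g. --limit mywebserver,appserver1,appserver2):
- name: Configure website for all application servers
  ansible.builtin.template:
    src: ./templates/website.y2
    dest: "/etc/nginx/sites-available/{{ hostvars[item]['app_server'] }}.conf"
    mode: '0640'
    owner: webserver
    group: webserver
  notify: Reload webserver
  loop: "{{ groups['webserver_' + inventory_hostname] }}"
  # Skip application servers that fall outside the --limit expression
  when: item in query('inventory_hostnames', ansible_limit | default('all'))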
You might also be able to eliminate items from the loop entirely, though that makes the code a bit more complex and it's helpful to use intermediate variables to make it more readable:
- name: Configure website for all application servers
  ansible.builtin.template:
    src: website.y2  # Relative paths in template actions use `templates/` automatically, so you shouldn't specify it.
    dest: "/etc/nginx/sites-available/{{ hostvars[item]['app_server'] }}.conf"
    mode: "0640"
    owner: webserver
    group: webserver
  notify: Reload webserver
  loop: "{{ webserver_group | intersect(limit_hosts) }}"
  vars:
    webserver_group: "{{ groups['webserver_' ~ inventory_hostname] }}"
    limit_hosts: "{{ query('inventory_hostnames', ansible_limit | default('all')) }}"

Start only specific systemd service from list of services using Ansible

I have a list of systemd services defined as:
vars:
  systemd_scripts: ['a.service', 'b.service', 'c.service']
Now I want to stop only a.service from the above list. How can this be achieved with the systemd module?
What are you trying to achieve? As written, you could just do:
- name: Stop service A
  systemd:
    name: a.service
    state: stopped
If instead you mean "the first service", use the first filter or an index:
- name: Stop first service
  systemd:
    name: "{{ systemd_scripts | first }}"
    state: stopped
OR
- name: Stop first service
  systemd:
    name: "{{ systemd_scripts[0] }}"
    state: stopped
Your question is very vague and unspecific. However, parts of it could be answered with the following approach:
- name: Loop over service list and stop one service
  systemd:
    name: "{{ item }}"
    state: stopped
  when:
    - item == 'a.service'
  with_items: "{{ systemd_scripts }}"
You may need to extend and change it for your needs and required functionality.
Regarding discovering, starting and stopping services via Ansible facts (service_facts) and systemd, you may have a look at the further readings below; a minimal sketch follows the list.
Further readings
Ansible: How to get disabled but running services?
How to declare a variable for service_facts?
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
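For instance, a minimal sketch combining service_facts with a conditional stop (assuming the service in question is a.service; the keys used are those populated by ansible.builtin.service_facts under ansible_facts.services):
- name: Gather service facts
  ansible.builtin.service_facts:

- name: Stop a.service only if it exists and is running
  ansible.builtin.systemd:
    name: a.service
    state: stopped
  when:
    - "'a.service' in ansible_facts.services"
    - ansible_facts.services['a.service'].state == 'running'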

Restarting a service after looped commands on multiple servers

I poked around a bit here but didn't see anything that quite matched up to what I am trying to accomplish, so here goes.
So I've put together my first Ansible playbook, which opens or closes one or more ports on the firewall of one or more hosts, for one or more specified IP addresses. It works great so far. But what I want to do is restart the firewall service after all the tasks for a given host are complete (with no errors, of course).
NOTE: The hostvars['localhost'] references just hold vars_prompt input from the user in a task list above this one. I store the prompted data on hosts: localhost, build a dynamic host list based on what the user entered, and then have a separate task list to actually do the work.
So:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    - set_fact:
        hostList: "{{ hostvars['localhost']['hostList'] }}"
    - set_fact:
        portList: "{{ hostvars['localhost']['portList'] }}"
    - set_fact:
        portStateRequested: "{{ hostvars['localhost']['portStateRequested'] }}"
    - set_fact:
        portState: "{{ hostvars['localhost']['portState'] }}"
    - set_fact:
        remoteIPs: "{{ hostvars['localhost']['remoteIPs'] }}"
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      register: requestStatus
In my original version of the script, which only did 1 port for 1 host for 1 IP, I just did:
- name: Reload firewalld
  when: requestStatus.changed
  systemd:
    name: firewalld
    state: reloaded
But I don't think that will work as easily here because of the nesting. For example, let's say I want to open port 9999 for a remote IP address of 1.1.1.1 on 10 different hosts, and the 5th host errors out for some reason. I may not want to restart the firewall service at that point.
Actually, now that I think about it, I guess that in that scenario there would be 4 new entries in the firewall config and 6 that didn't take because of the error. Now I'm wondering if I need to track the successes and have a rescue block within the playbook to back out the entries that did go through.
Grrr.... any ideas? Sorry, new to Ansible here. Plus, I hate YAML for things like this. :D
Thanks in advance for any guidance.
It looks to me like what you are looking for is what Ansible calls handlers.
As we’ve mentioned, modules should be idempotent and can relay when they have made a change on the remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a play, and will only be triggered once even if notified by multiple different tasks.
For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config file, but apache will only be bounced once to avoid unnecessary restarts.
Note that handlers are simply a pair of:
a notify attribute on one or more tasks
a handler, with a name matching the notify attribute mentioned above
So your playbook should look like this:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    # set_fact removed for concision
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      notify: Reload firewalld
  handlers:
    - name: Reload firewalld
      systemd:
        name: firewalld
        state: reloaded
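Handlers only fire for tasks that actually report changed, and by default they run at the end of the play; hosts that fail drop out of the play, so their firewalld will not be reloaded. If you ever need the pending reload to run earlier than the end of the play, a flush_handlers meta task can trigger it at that point (a minimal sketch, not required for the setup above):
- name: Run any pending handlers immediately
  ansible.builtin.meta: flush_handlers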

How to run plays using Ansible AWX on all hosts in a group using localhost

I'm trying to create a playbook that creates EC2 snapshots of some AWS Windows servers. I'm having a lot of trouble understanding how to get the correct directives in place to make sure things run where they should. What I need to do is:
run the AWS commands locally, i.e. on the AWX host (as this is preferable to having to configure credentials on every server)
run the commands against each host in the inventory
I've done this in the past with a different group of Linux servers with no issue. But the fact that I'm having these issues makes me think that's not working as I think it is (I'm pretty new to all things Ansible/AWX).
The first step is I need to identify instances that are usually turned off and turn them on, then to take snapshots, then to turn them off again if they are usually turned off. So this is my main.yml:
---
- name: start the instance if the default state is stopped
  import_playbook: start-instance.yml
  when: default_state is defined and default_state == 'stopped'

- name: run the snapshot script
  import_playbook: instance-snapshot.yml

- name: stop the instance if the default state is stopped
  import_playbook: stop-instance.yml
  when: default_state is defined and default_state == 'stopped'
And this is start-instance.yml
---
- name: make sure instances are running
hosts: all
gather_facts: false
connection: local
tasks:
- name: start the instances
ec2:
instance_id: "{{ instance_id }}"
region: "{{ aws_region }}"
state: running
wait: true
register: ec2
- name: pause for 120 seconds to allow the instance to start
pause: seconds=120
When I run this, I get the following error:
fatal: [myhost,mydomain.com]: UNREACHABLE! => {
    "changed": false,
    "msg": "ssl: HTTPSConnectionPool(host='myhost,mydomain.com', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f99045e6bd0>, 'Connection to myhost,mydomain.com timed out. (connect timeout=30)'))",
    "unreachable": true
}
I've learnt enough to know that this means it is, indeed, trying to connect to each Windows host, which is not what I want. However, I thought having connection: local resolved this, as it seemed to with another template I have which uses an identical playbook.
If I instead change start-instance.yml to say connection: localhost, then the play runs, but skips the steps because it determines that no hosts meet the condition (i.e. default_state is defined and default_state == 'stopped'). This tells me that the play is using localhost to run the play, but it is also running it against localhost instead of the list of hosts in the AWX inventory.
My hosts have variables defined in AWX such as instance ID, region, default state. I know in normal Ansible use we would have the instance IDs in the playbook and that's how Ansible would know which AWS instances to start up, but in this case I need it to get this information from AWX.
Is there any way to do this? So, run the tasks (start the instances, pause for 120 seconds) against all hosts in my AWX inventory, using localhost?
In the end I used delegate_to for this. I changed the code to this:
---
- name: make sure instances are running
  hosts: all
  gather_facts: false
  tasks:
    - name: start the instances
      delegate_to: localhost
      ec2:
        instance_id: "{{ instance_id }}"
        region: "{{ aws_region }}"
        state: running
        wait: true
      register: ec2
    - name: pause for 120 seconds to allow the instance to start
      pause: seconds=120
And AWX correctly used localhost to run the tasks I added delegation to.
Worth noting I got stuck for a while before realising my template needed two sets of credentials: one IAM user with the correct permissions to run the AWS commands, and one set of machine credentials for the Windows instances.
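A side note: the ec2 module used above is an older one that has since been replaced in newer Ansible releases. A rough sketch of the equivalent task using amazon.aws.ec2_instance (assuming the amazon.aws collection is installed and the same host variables are defined in AWX) would be:
- name: start the instances
  delegate_to: localhost
  amazon.aws.ec2_instance:
    instance_ids:
      - "{{ instance_id }}"
    region: "{{ aws_region }}"
    state: running
    wait: true
  register: ec2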

Ansible: What would be the preferred way of running several tasks on different groups?

Let's say we have 3 services and 5 servers, where:
Service 1 should run on all servers.
Service 2 should run on server 1, 2 and 3.
Service 3 should run on server 4 and 5.
I could think of a few dirty ways to accomplish that, but I'd like to know what the best-practice for that scenario would be.
I was thinking that it might be a good idea to first define a group for each service that specifies the hosts that it should be deployed on in the inventory file:
[service-1]
server-[01:05]
[service-2]
server-[01:03]
[service-3]
server-[04:05]
To get the services on the target machines I would use copy or template.
And for actually starting services I found this.
But how would I make the services start on only those machines that they are supposed to run on?
I guess I could create multiple playbooks to achieve that, but I would really prefer to put everything into one playbook.
A playbook can have multiple plays, so you can indeed use those groups to achieve your goal and still have only one playbook:
- name: Setup service 1
  hosts: service-1
  tasks/roles: ...

- name: Setup service 2
  hosts: service-2
  tasks/roles: ...

- name: Setup service 3
  hosts: service-3
  tasks/roles: ...
Alternatively, you can filter your roles/tasks with conditions, like so:
- name: Setup all services
  hosts: all
  roles:
    - role: service1
      when: "'service-1' in group_names"
    - role: service2
      when: "'service-2' in group_names"
    - role: service3
      when: "'service-3' in group_names"
group_names is a magic variable and holds all the groups the current host belongs to.
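For example, a quick way to inspect this (a minimal sketch using the built-in debug module) is to print group_names for every host:
- name: Show group membership
  hosts: all
  gather_facts: false
  tasks:
    - name: Print the groups this host belongs to
      ansible.builtin.debug:
        var: group_names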
