Restart the apache service on a server based on the varnish health status - ansible

I have 3 instances running on centos 7
1. Ansible Server
2. Varnish Server
3. Apache httpd server
I want to restart the apache service when the varnish service is up but the varnish health status shows "Sick" because the apache service is stopped.
I have already created a playbook and defined both hosts, but it is not working:
- name: Check Backend Nodes
  shell: varnishadm backend.list | awk 'FNR==2{ print $4 }'
  register: status1

- name: print backend status
  debug:
    msg: "{{ status1.stdout_lines }}"

#tasks:
- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: status1.stdout_lines == 'Sick'

This is most likely because your when condition has a glitch: as you can see in the documentation, stdout_lines is always a list of strings, while your condition compares it to a single string.
So your fix could actually be as simple as checking whether the string Sick is among the lines of stdout_lines:
- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: "'Sick' in status1.stdout_lines"
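Putting it together, the whole play could look like the sketch below. The varnish and apache group names, the httpd service name, and the delegation to the Apache host are assumptions based on the setup described in the question:

```yaml
- hosts: varnish        # assumed group containing the Varnish server
  tasks:
    - name: Check backend nodes
      shell: varnishadm backend.list | awk 'FNR==2{ print $4 }'
      register: status1
      changed_when: false

    - name: Restart apache when the backend is sick
      service:
        name: httpd                                 # assumed service name on CentOS 7
        state: restarted
      delegate_to: "{{ groups['apache'][0] }}"      # assumed group for the httpd host
      become: yes
      when: "'Sick' in status1.stdout_lines"
```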

Can't start adminctl because of ## placeholders in admin.conf from Connections 6.5 IHS

I made a Connections 6.5 headless installation which itself works, but I couldn't start adminctl in the bin directory:
# cd /opt/IBM/HTTPServer/bin/
# ./adminctl start
Syntax error on line 7 of /opt/IBM/HTTPServer/conf/admin.conf:
Port must be specified
Line 7 seems to be a variable that didn't get parsed properly when the IHS was configured:
# grep Listen ../conf/admin.conf
Listen ##AdminPort##
There are also other such ## variables in the config file:
# grep ## ../conf/admin.conf
Listen ##AdminPort##
User ##SetupadmUser##
Group ##SetupadmGroup##
ServerName cnx65.internal:##AdminPort##
Why are those values not correctly replaced? For example to Listen 8008 (default IHS admin port).
How I configure the IHS
The machine was provisioned using Ansible, where the following shell command runs for the IHS plugin configuration:
./wctcmd.sh -tool pct -createDefinition -defLocPathname /opt/IBM/WebSphere/Plugins -response /tmp/plugin-response-file.txt -defLocName webserver1
Response file /tmp/plugin-response-file.txt:
configType=remote
enableAdminServerSupport=true
enableUserAndPass=true
enableWinService=false
ihsAdminCreateUserAndGroup=true
ihsAdminPassword=adminihs
ihsAdminPort=8008
ihsAdminUnixUserGroup=ihsadmin
ihsAdminUnixUserID=ihsadmin
mapWebServerToApplications=true
wasMachineHostname=cnx65.internal
webServerConfigFile1=/opt/IBM/HTTPServer/conf/httpd.conf
webServerDefinition=webserver1
webServerHostName=cnx65.internal
webServerOS=Linux
webServerPortNumber=80
webServerSelected=IHS
As you can see, all required variables for substitution were present, so the tool should have been able to replace ##AdminPort## with the value 8008.
wctcmd.sh just creates the WAS definition for the IHS, but doesn't prepare the admin server. We need to do this manually with postinst and setupadm as documented here. This seems to be required not only for zip installations: my installation was done using Installation Manager, and the admin server doesn't work without those steps.
I automated it in Ansible like this:
- name: Check if admin config is properly parsed
  become: yes
  # the placeholder is quoted so that YAML doesn't treat ## as a comment
  shell: grep '##AdminPort##' {{ http_server.target }}/conf/admin.conf
  register: admin_conf_check
  # rc = 0: placeholder found, rc = 1: not found but file exists, rc = 2: file not found
  failed_when: admin_conf_check.rc != 0 and admin_conf_check.rc != 1
  changed_when: False

- set_fact:
    admin_conf_is_configured: "{{ admin_conf_check.rc == 1 }}"

- name: Parse IHS admin config
  become: yes
  # plugin_config_file is defined in http-plugin.yml
  shell: |
    ./bin/postinst -i $PWD -t setupadm -v ADMINPORT={{ http_server.admin_port }} -v SETUPADMUSER=nobody -v SETUPADMGROUP=nobody
    ./bin/setupadm -usr nobody -grp nobody -cfg conf/httpd.conf -plg {{ plugin_config_file }} -adm conf/admin.conf
  args:
    chdir: "{{ http_server.target }}"
  environment:
    LANG: "{{ system_language }}"
  register: ihs_setup
  # setupadm returns 90 if it was successful: "Script Completed RC(90)"
  failed_when: ihs_setup.rc != 90
  when: not admin_conf_is_configured

- name: Create htpasswd for admin config
  become: yes
  shell: ./bin/htpasswd -c conf/admin.passwd adminihs
  args:
    chdir: "{{ http_server.target }}"
    creates: "{{ http_server.target }}/conf/admin.passwd"
  environment:
    LANG: "{{ system_language }}"
http_server.target is the IHS base path, e.g. /opt/IBM/HTTPServer
http_server.admin_port is the IBM default value 8008
plugin_config_file is set to /opt/IBM/WebSphere/Plugins/config/{{ http_server.name }}/plugin-cfg.xml where http_server.name matches the definition name in WAS (webserver1 in my example)
system_language is set to en_US.utf8 to make sure that we get English error messages for output validation (when required), independent of the configured OS language
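For reference, the variables used above could be defined together like this (a sketch assembled from the values listed in this answer):

```yaml
http_server:
  target: /opt/IBM/HTTPServer
  admin_port: 8008
  name: webserver1               # matches the WAS definition name
plugin_config_file: "/opt/IBM/WebSphere/Plugins/config/{{ http_server.name }}/plugin-cfg.xml"
system_language: en_US.utf8
```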
After running those configuration tools, we can see that all placeholders were replaced by their corresponding values:
# grep -i listen ../conf/admin.conf
Listen 8008
Running the admin server by executing ./adminctl start in the bin directory now works as expected.
I heard from folks in the lab at IBM that webServerSelected=IHS is not being recognized and it must be webServerSelected=ihs (lowercase):
https://www.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/tins_pctcl_using.html
webServerSelected
  Specifies the web server to be configured. Specify only one web server to configure.
  apache22
    Apache Web Server Version 2.2 (64-bit configuration not supported on Windows)
  apache24
    Apache Web Server Version 2.4 (64-bit configuration not supported on Windows)
  ihs
    IBM® HTTP Server (64-bit configuration not supported on Windows)
  ...
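If that hint is correct, the fix would be a one-character-case change to the line in the response file shown above:

```
webServerSelected=ihs
```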

Error handling with "failed_when" doesn't work

I wish to create an ansible playbook to run on multiple servers in order to check for conformity to the compliance rules. Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
I do know that not all the services will be running or enabled on all machines. Therefore I wish to handle certain return codes within the playbook.
I tried the failed_when statement for this task. It seems to be the way to go, as it allows specifying which RCs to handle.
- hosts: server_group1
  remote_user: ansible
  tasks:
    - name: Stop services
      command: /bin/systemctl stop "{{ item }}"
      with_items:
        - cups
        - avahi
        - slapd
        - isc-dhcp-server
        - isc-dhcp-server6
        - nfs-server
        - rpcbind
        - bind9
        - vsftpd
        - dovecot
        - smbd
        - snmpd
        - squid
      register: output
      failed_when: "output.rc != 0 and output.rc != 1"
However, the loop only works if I use ignore_errors: True, which is not shown in the example code.
I do know that I wish to catch RCs of 0 and 1 from the command executed by the playbook. But no matter what, failed_when always generates a fatal error and my playbook fails.
I also tried the failed_when line without the quotes, but that doesn't change a thing.
Something is missing, but I don't see what.
Any advice?
Therefore the playbook is supposed to check whether several services are disabled and whether they are stopped.
Your playbook isn't doing either of these things. If a service exists, whether or not it's running, then systemctl stop <service> will return successfully in most cases. The only time you'll get a non-zero exit code is if either (a) the service does not exist or (b) systemd is unable to stop the service for some reason.
Note that calling systemctl stop has no effect on whether or not a service is disabled; with your current playbook, all those services would start back up the next time the host boots.
If your goal is not simply to check that services are stopped and disabled, but rather to ensure that services are stopped and disabled, you could do something like this:
- name: "Stop services"
  service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  ignore_errors: true
  register: results
  loop:
    - cups
    - avahi
    - slapd
    - isc-dhcp-server
    - isc-dhcp-server6
    - nfs-server
    - rpcbind
    - bind9
    - vsftpd
    - dovecot
    - smbd
    - snmpd
    - squid
You probably want ignore_errors: true here, because you want to run the task for every item in your list.
A different way of handling errors might be with something like:
failed_when: >-
  results is failed and
  "Could not find the requested service" not in results.msg|default('')
This would only fail the task if stopping the service failed for a reason other than the fact that there was no matching service on the target host.
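Putting the pieces together, the whole task might look like this sketch (the shortened service list is just illustrative):

```yaml
- name: "Stop and disable services"
  service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  register: results
  # fail only on errors other than a missing service
  failed_when: >-
    results is failed and
    "Could not find the requested service" not in results.msg|default('')
  loop:
    - cups
    - avahi
    - squid
```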

Is there an option on Ansible for getting squid service status?

I'm setting up a new playbook, and want to support both service status and systemctl status.
I want to monitor a service called Squid on Ubuntu 14.04, which has only service, and on 16.04, which has both service and systemctl.
I had tried using
How to get service status by Ansible?
But it gives me an answer only for service or systemctl.
- hosts: servers
  gather_facts: true
  tasks:
    - name: Run the specified shell command
      shell: 'service squid status'
      register: result

    - name: Display the outcome
      debug:
        msg: '{{ result }}'
I'm trying to get both service status from service and systemctl in the same playbook run.
But the actual result from that .yml file only shows either the systemctl status or the service status.
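One way to get a uniform answer independent of service vs. systemctl might be the service_facts module (available since Ansible 2.5), which abstracts over the init system. A sketch, assuming Squid registers as squid or squid.service depending on the release:

```yaml
- hosts: servers
  tasks:
    - name: Populate service facts
      service_facts:

    - name: Display the squid state
      debug:
        # the exact fact key varies between init systems, hence the fallbacks
        msg: "{{ ansible_facts.services['squid.service'] | default(ansible_facts.services['squid']) | default('squid not found') }}"
```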

ansible check non-existent service [duplicate]

I have an Ansible playbook for deploying a Java app as an init.d daemon.
Being a beginner in both Ansible and Linux, I'm having trouble conditionally executing tasks on a host based on the host's status.
Namely I have some hosts having the service already present and running where I want to stop it before doing anything else. And then there might be new hosts, which don't have the service yet. So I can't simply use service: name={{service_name}} state=stopped, because this will fail on new hosts.
How can I achieve this? Here's what I have so far:
- name: Check if Service Exists
  shell: "if chkconfig --list | grep -q my_service; then echo true; else echo false; fi;"
  register: service_exists

# This should only execute on hosts where the service is present
- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_exists
  register: service_stopped

# This too
- name: Remove Old App Folder
  command: rm -rf {{app_target_folder}}
  when: service_exists

# This should be executed on all hosts, but only after the service has stopped, if it was present
- name: Unpack App Archive
  unarchive: src=../target/{{app_tar_name}} dest=/opt
See the service_facts module, new in Ansible 2.5.
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker' in services"
Of course I could also just check if the wrapper script exists in /etc/init.d. So this is what I ended up with:
- name: Check if Service Exists
  stat: path=/etc/init.d/{{service_name}}
  register: service_status

- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_status.stat.exists
  register: service_stopped
I modified Florian's answer to only use the return code of the service command (this worked on Mint 18.2):
- name: Check if Logstash service exists
  shell: service logstash status
  register: logstash_status
  failed_when: not(logstash_status.rc == 3 or logstash_status.rc == 0)

- name: Stop Logstash service
  service:
    name: logstash
    state: stopped
  when: logstash_status.rc == 0
It would be nice if the "service" module could handle "unrecognized service" errors.
This is my approach, using the service command instead of checking for an init script:
- name: check for apache
  shell: "service apache2 status"
  register: _svc_apache
  failed_when: >
    _svc_apache.rc != 0 and ("unrecognized service" not in _svc_apache.stderr)

- name: disable apache
  service: name=apache2 state=stopped enabled=no
  when: "_svc_apache.rc == 0"
Check the exit code of "service status" and accept a non-zero exit code when the output contains "unrecognized service".
If the exit code was 0, the service is installed (stopped or running).
Another approach for systemd (from Jakuje):
- name: Check if cups-browsed service exists
  command: systemctl cat cups-browsed
  check_mode: no
  register: cups_browsed_exists
  changed_when: False
  failed_when: cups_browsed_exists.rc not in [0, 1]

- name: Stop cups-browsed service
  systemd:
    name: cups-browsed
    state: stopped
  when: cups_browsed_exists.rc == 0
Building on Maciej's answer for RedHat 8, and combining it with the comments made on it.
This is how I managed to stop Celery only if it has already been installed:
- name: Populate service facts
  service_facts:

- debug:
    msg: httpd installed!
  when: ansible_facts.services['httpd.service'] is defined

- name: Stop celery.service
  service:
    name: celery.service
    state: stopped
    enabled: true
  when: ansible_facts.services['celery.service'] is defined
You can drop the debug statement--it's there just to confirm that ansible_facts is working.
This approach, using only the service module, has worked for us:
- name: Disable *service_name*
  service:
    name: *service_name*
    enabled: no
    state: stopped
  register: service_command_output
  failed_when: >
    service_command_output|failed
    and 'unrecognized service' not in service_command_output.msg
    and 'Could not find the requested service' not in service_command_output.msg
My few cents. The same approach as above, but for Kubernetes.
Check if the kubelet service is running:
- name: "Obtain state of kubelet service"
  command: systemctl status kubelet.service
  register: kubelet_status
  failed_when: kubelet_status.rc > 3
Display a debug message if the kubelet service is not running:
- debug:
    msg: "{{ kubelet_status.stdout }}"
  when: "'running' not in kubelet_status.stdout"
You can use the service_facts module since Ansible 2.5. But you need to know that the output contains the real service names, like docker.service or something#.service. So you have different options, like:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker.service' in services"
Or you can search for a string beginning with the service name; that is much more reliable, because service names differ between distributions:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "services.keys()|list|select('search', '^docker')|length > 0"

Using Ansible to stop service that might not exist

I am using Ansible 2.6.1.
I am trying to ensure that a certain service is not running on target hosts.
The problem is that the service might not exist at all on some hosts. If this is the case, Ansible fails with an error because of the missing service. The services are run by systemd.
Using service module:
- name: Stop service
  service:
    name: '{{ target_service }}'
    state: stopped
This fails with the error Could not find the requested service SERVICE: host.
Trying with command module:
- name: Stop service
  command: service {{ target_service }} stop
This gives the error: Failed to stop SERVICE.service: Unit SERVICE.service not loaded.
I know I could use ignore_errors: yes, but it might hide real errors too.
Another solution would be to have two tasks: one checking for the existence of the service, and another that runs only when the first task found the service. But that feels complex.
Is there a simpler way to ensure that a service is stopped while avoiding errors if the service does not exist?
I'm using the following steps:
- name: Get the list of services
  service_facts:

- name: Stop service
  systemd:
    name: <service_name_here>
    state: stopped
  when: "'<service_name_here>.service' in services"
service_facts could be called once in the gathering facts phase.
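For example, the fact gathering can be hoisted into pre_tasks so it runs once before the rest of the play (the play layout is an assumption):

```yaml
- hosts: all
  pre_tasks:
    - name: Get the list of services
      service_facts:

  tasks:
    - name: Stop service
      systemd:
        name: <service_name_here>
        state: stopped
      when: "'<service_name_here>.service' in services"
```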
The following will register the module output in service_stop; the task fails only when the execution's standard output does not contain "Could not find the requested service" AND the service fails to stop based on the return code. Since you did not include the entire stack trace, I am assuming the error you posted is in the standard output; you may need to adjust this slightly based on your error.
- name: Stop service
  register: service_stop
  failed_when:
    - '"Could not find the requested service" not in service_stop.stdout'
    - service_stop.rc != 0
  service:
    name: '{{ target_service }}'
    state: stopped
IMHO there isn't a simpler way to ensure that a service is stopped. Ansible's service module doesn't check for the service's existence. Either (1) more than one task, or (2) a command that checks the service's existence is needed. The command would be OS specific. For example, for FreeBSD:
shell: "service -e | grep {{ target_service }} && service {{ target_service }} stop"
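Wrapped in a full task, that one-liner might look like the sketch below. shell is used because the line contains a pipe, and the changed_when/failed_when handling is an assumption:

```yaml
- name: Stop {{ target_service }} if it exists (FreeBSD)
  shell: "service -e | grep {{ target_service }} && service {{ target_service }} stop"
  register: stop_result
  # grep exits non-zero when the service is not enabled; treat that as "nothing to do"
  changed_when: stop_result.rc == 0
  failed_when: false
```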
Same solution as Vladimir's, but for Ubuntu (systemd) and with better state handling:
- name: restart {{ target_service }} if exists
  shell: if systemctl is-enabled --quiet {{ target_service }}; then systemctl restart {{ target_service }} && echo restarted ; fi
  register: output
  changed_when: "'restarted' in output.stdout"
It produces 3 states:
service is absent or disabled — ok
service exists and was restarted — changed
service exists and restart failed — failed
Same solution as ToughKernel's, but using systemd to manage the service:
- name: disable ntpd service
  systemd:
    name: ntpd
    enabled: no
    state: stopped
  register: stop_service
  failed_when:
    - stop_service.failed == true
    - '"Could not find the requested service" not in stop_service.msg'
  # the order is important: only when failed == true will there be
  # a 'msg' attribute in the result
When the service module fails, check whether the service that needs to be stopped is installed at all. That is similar to this answer, but avoids the rather lengthy gathering of service facts unless necessary.
- name: Stop a service
  block:
    - name: Attempt to stop the service
      service:
        name: < service name >
        state: stopped
  rescue:
    - name: Get the list of services
      service_facts:

    - name: Verify that the service is not installed
      assert:
        that:
          - "'< service name >.service' not in services"
