How to save ansible debug logs to a single file - ansible

Goal to achieve - check the status of the "filebeat" and "telegraf" services with Ansible on 20 production servers, so that I get an alert if either service is stopped on any server.
---
- hosts: ALL
  tasks:
    - name: checking service status
      command: systemctl status "{{ item }}"
      with_items:
        - filebeat
        - telegraf
      register: result
      ignore_errors: yes

    - debug:
        var: result
I got the output below:
ok: [10.5.10.10] => {
"result.results[0].stdout": "* filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.\n Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)\n Active: active (running) since Tue 2019-08-06 11:07:34 IST; 3 weeks 6 days ago\n Docs: https://www.elastic.co/products/beats/filebeat\n Main PID: 102961 (filebeat)\n CGroup: /system.slice/filebeat.service\n `-102961 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat\n\nWarning: Journal has been rotated since unit was started. Log output is incomplete or unavailable."
}
ok: [10.5.10.11] => {
"result.results[0].stdout": "* filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.\n Loaded: loaded (/usr/lib/systemd/system/filebeat.service; disabled; vendor preset: disabled)\n Active: inactive (dead)\n Docs: https://www.elastic.co/products/beats/filebeat"
}
How can I store these outputs in a file on my Ansible server, so that I can raise an alert if any service is not running on any host?

Why don't you parse the output and raise a red flag when one of the services is not active?
Answering your question:
What I would do is save the output to a file on each host, copy that file back to your Ansible server, and then append all the results.
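A minimal sketch of that idea, building on the playbook above. The /tmp/ansible_status directory, and writing directly on the control node with delegate_to instead of copying files back, are my assumptions, not part of the question:

```yaml
# Sketch: collect each host's systemctl output on the Ansible control node.
# Assumes the "result" variable registered by the status task above.
- name: Ensure the status directory exists on the control node
  file:
    path: /tmp/ansible_status          # hypothetical location
    state: directory
  delegate_to: localhost
  run_once: true

- name: Save each host's service status on the control node
  copy:
    content: "{{ result.results | map(attribute='stdout') | join('\n\n') }}"
    dest: "/tmp/ansible_status/{{ inventory_hostname }}.log"
  delegate_to: localhost
```

A grep for "inactive" or "dead" over those per-host files can then drive the alert.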

Related

ansible use tags but register variable is empty

I want to execute a playbook with tags, because I only want to run part of the script, but the variable stored in register is empty.
---
- hosts: node
  vars:
    service_name: apache2
  become: true
  tasks:
    - name: Start if Service Apache is stopped
      shell: service apache2 status | grep Active | awk -v N=2 '{print $N}'
      args:
        warn: false
      register: res
      tags:
        - toto
    - name: Start Apache because service was stopped
      service:
        name: "{{service_name}}"
        state: started
      when: res.stdout == 'inactive'
      tags:
        - toto
    - name: Check for apache status
      service_facts:
    - debug:
        var: ansible_facts.services.apache2.state
      tags:
        - toto2
$ ansible-playbook status.yaml -i hosts --tags="toto,toto2"
PLAY [nodeOne] ***************************************************************************
TASK [Start if Service Apache is stopped] ************************************************
changed: [nodeOne]
TASK [Start Apache because service was stopped] ******************************************
skipping: [nodeOne]
TASK [debug] *****************************************************************************
ok: [nodeOne] => {
"ansible_facts.services.apache2.state": "VARIABLE IS NOT DEFINED!"
}
At the end of the script, I don't get the output of apache status.
Q: "The variable stored in the register is empty."
A: A tag is missing on the service_facts task. As a result, that task is skipped and ansible_facts.services is not defined. Fix it, for example:
- name: Check for apache status
  service_facts:
  tags: toto2
Notes
1) With a single tag, it's not necessary to declare a list.
2) The concept of Idempotency makes the condition redundant.
An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions.
The module service is idempotent. For example the task below
- name: Start Apache
  service:
    name: "{{ service_name }}"
    state: started
  tags:
    - toto
will make changes only if the service has not been started yet. The result of the task will be a started service. Once the service is started, the task will report [ok] and not touch the service. In this respect, it does not matter what the previous state of the service was, i.e. there is no reason to run the task conditionally.
3) The module service_facts works as expected. For example
- service_facts:
- debug:
    var: ansible_facts.services.apache2.state
gives
"ansible_facts.services.apache2.state": "running"

Restart the apache service on a server based on the varnish health status

I have 3 instances running on centos 7
1. Ansible Server
2. Varnish Server
3. Apache httpd server
I want to restart the apache service when the varnish service is up but the varnish health status shows "Sick" because the apache service is stopped.
I have already created a playbook and defined both hosts, but it is not working:
- name: Check Backend Nodes
  shell: varnishadm backend.list | awk 'FNR==2{ print $4 }'
  register: status1

- name: print backend status
  debug:
    msg: "{{status1.stdout_lines}}"

#tasks:
- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: status1.stdout_lines == 'Sick'
This is most likely because your when condition has a glitch: as you can see in the documentation, stdout_lines is always a list of strings, while your condition compares it to a string.
So the fix could be as simple as checking whether the string Sick is among the list stdout_lines:
- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: "'Sick' in status1.stdout_lines"
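Alternatively, since the awk filter above prints just a single field, comparing the plain stdout also works. A sketch based on the question's task; the exact field printed depends on your varnishadm version:

```yaml
# Sketch: stdout holds the single word printed by awk, e.g. "Sick".
- include_tasks: /etc/ansible/apache/tasks/main.yml
  when: status1.stdout == 'Sick'
```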

Ansible: Start service in next host after service finished starting on previous host

I have three hosts on which I want to start a service in a rolling fashion: host 2 needs to wait for the service to finish starting on host 1, and host 3 needs to wait for host 2.
Host 1 has finished starting the service when a line with an instruction like:
Starting listening for CQL clients
is written to a file.
How can I instruct Ansible (service module preferably) only to start the following host, when the service on the previous host writes the line to that file?
You'll probably need to break your playbook down a bit. For example, your restart.yml contents:
- service:
    name: foobar
    state: restarted

- wait_for:
    search_regex: "Starting listening for CQL clients"
    path: /tmp/foo
and then your main.yml contents:
- include_tasks: restart.yml
  with_items:
    - host1
    - host2
    - host3
https://docs.ansible.com/ansible/latest/modules/wait_for_module.html
It seems it's not possible to serialize at the task level, so I had to build a separate playbook specifically to start the service and used serial: 1 in the playbook YAML.
My files now look like this:
roles/start-cluster/tasks/main.yaml
- name: Start Cassandra
  become: yes
  service:
    name: cassandra
    state: started

- name: Wait for node to join cluster
  wait_for:
    search_regex: "Starting listening for CQL clients"
    path: /var/log/cassandra/system.log
start-cluster.yaml
- hosts: all
  serial: 1
  gather_facts: False
  roles:
    - start-cluster

ansible check non-existent service [duplicate]

I have an Ansible playbook for deploying a Java app as an init.d daemon.
Being a beginner in both Ansible and Linux, I'm having trouble conditionally executing tasks on a host based on the host's status.
Namely I have some hosts having the service already present and running where I want to stop it before doing anything else. And then there might be new hosts, which don't have the service yet. So I can't simply use service: name={{service_name}} state=stopped, because this will fail on new hosts.
How can I achieve this? Here's what I have so far:
- name: Check if Service Exists
  shell: "if chkconfig --list | grep -q my_service; then echo true; else echo false; fi;"
  register: service_exists

# This should only execute on hosts where the service is present
- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_exists
  register: service_stopped

# This too
- name: Remove Old App Folder
  command: rm -rf {{app_target_folder}}
  when: service_exists

# This should be executed on all hosts, but only after the service has stopped, if it was present
- name: Unpack App Archive
  unarchive: src=../target/{{app_tar_name}} dest=/opt
See the service_facts module, new in Ansible 2.5.
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker' in services"
Of course I could also just check if the wrapper script exists in /etc/init.d. So this is what I ended up with:
- name: Check if Service Exists
  stat: path=/etc/init.d/{{service_name}}
  register: service_status

- name: Stop Service
  service: name={{service_name}} state=stopped
  when: service_status.stat.exists
  register: service_stopped
I modified Florian's answer to only use the return code of the service command (this worked on Mint 18.2)
- name: Check if Logstash service exists
  shell: service logstash status
  register: logstash_status
  failed_when: not (logstash_status.rc == 3 or logstash_status.rc == 0)

- name: Stop Logstash service if it exists
  service:
    name: logstash
    state: stopped
  when: logstash_status.rc == 0
It would be nice if the "service" module could handle "unrecognized service" errors.
This is my approach, using the service command instead of checking for an init script:
- name: check for apache
  shell: "service apache2 status"
  register: _svc_apache
  failed_when: >
    _svc_apache.rc != 0 and ("unrecognized service" not in _svc_apache.stderr)

- name: disable apache
  service: name=apache2 state=stopped enabled=no
  when: "_svc_apache.rc == 0"
Check the exit code of "service status", and accept a non-zero exit code when the output contains "unrecognized service". If the exit code was 0, the service is installed (stopped or running).
Another approach for systemd (from Jakuje):
- name: Check if cups-browsed service exists
  command: systemctl cat cups-browsed
  check_mode: no
  register: cups_browsed_exists
  changed_when: False
  failed_when: cups_browsed_exists.rc not in [0, 1]

- name: Stop cups-browsed service
  systemd:
    name: cups-browsed
    state: stopped
  when: cups_browsed_exists.rc == 0
Building on Maciej's answer for RedHat 8, and combining it with the comments made on it.
This is how I managed to stop Celery only if it has already been installed:
- name: Populate service facts
  service_facts:

- debug:
    msg: httpd installed!
  when: ansible_facts.services['httpd.service'] is defined

- name: Stop celery.service
  service:
    name: celery.service
    state: stopped
    enabled: true
  when: ansible_facts.services['celery.service'] is defined
You can drop the debug statement; it's there just to confirm that ansible_facts is working.
This approach, using only the service module, has worked for us:
- name: Disable *service_name*
  service:
    name: *service_name*
    enabled: no
    state: stopped
  register: service_command_output
  failed_when: >
    service_command_output|failed
    and 'unrecognized service' not in service_command_output.msg
    and 'Could not find the requested service' not in service_command_output.msg
My few cents: the same approach as above, but for Kubernetes.
Check if the kubelet service is running:
- name: "Obtain state of kubelet service"
  command: systemctl status kubelet.service
  register: kubelet_status
  failed_when: kubelet_status.rc > 3
Display a debug message if the kubelet service is not running:
- debug:
    msg: "{{ kubelet_status.stdout }}"
  when: "'running' not in kubelet_status.stdout"
You can use the service_facts module since Ansible 2.5, but you need to know that the output contains the real unit names, like docker.service or something#.service. So you have different options, like:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "'docker.service' in services"
Or you can search for a string beginning with the service name; that is much more reliable, because service names differ between distributions:
- name: Populate service facts
  service_facts:

- debug:
    msg: Docker installed!
  when: "services.keys()|list|select('search', '^docker')|length > 0"

How to run command after error occurs in ansible handler

I have a simple playbook that copies some config files for an nginx server and then issues a service nginx reload command:
---
- hosts: mirrors
  tasks:
    - name: Installs nginx web server
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - start nginx

    - name: Copy vhost file
      copy: src=vhost/test.vhost dest=/etc/nginx/sites-available/ mode=0644

    - name: Symlink vhost file to enabled sites
      file: src=/etc/nginx/sites-available/test.vhost dest=/etc/nginx/sites-enabled/test.vhost state=link
      notify:
        - reload nginx

  handlers:
    - name: start nginx
      service: name=nginx state=started

    - name: reload nginx
      service: name=nginx state=reloaded
Now, the problem is that it might happen that the test.vhost file contains an error and nginx won't reload properly. If that happens, I'd like to run systemctl status nginx.service and see its output when the Ansible playbook executes. How can I add this "error handler"?
If the handler output tells you about the failure, you can put together a nice bit of error handling with the ignore_errors and register keywords.
Set your handler not to fail and end the play when it gets an error code back, but instead to register the output as a variable:
handlers:
  - name: reload nginx
    service: name=nginx state=reloaded
    ignore_errors: True
    register: nginx_reloaded
Then, back in the tasks section, call flush_handlers to execute all the queued handlers that would otherwise wait until the end of the play. Add a task that asks the server about the status of nginx when your nginx_reloaded variable has a non-zero return code, and then print that information with the debug module:
- name: Symlink vhost file to enabled sites
  file: src=/etc/nginx/sites-available/test.vhost dest=/etc/nginx/sites-enabled/test.vhost state=link
  notify:
    - reload nginx

- meta: flush_handlers

- name: Get the nginx service status if it failed to reload
  command: "systemctl status nginx.service"
  register: nginx_status
  when: nginx_reloaded.rc != 0

- debug: var=nginx_status
  when: nginx_reloaded.rc != 0
Of course, a better solution would be to fix those errors in your configuration files; figuring them out through Ansible output has limitations compared to SSHing onto the target host to check it out.
Why not use the Ansible modules' ability to validate configuration?
http://docs.ansible.com/ansible/template_module.html
http://docs.ansible.com/ansible/replace_module.html
http://docs.ansible.com/ansible/lineinfile_module.html
For example:
- template: src=/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
- lineinfile: dest=/etc/sudoers state=present regexp='^%ADMIN ALL\=' line='%ADMIN ALL=(ALL) NOPASSWD:ALL' validate='visudo -cf %s'
- replace: dest=/etc/apache/ports regexp='^(NameVirtualHost|Listen)\s+80\s*$' replace='\1 127.0.0.1:8080' validate='/usr/sbin/apache2ctl -f %s -t'
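Applied to the nginx playbook from the question, the idea looks like the sketch below. Note that nginx -t validates a complete configuration, so this fits the main nginx.conf rather than a vhost fragment that is only valid when included; the nginx.conf source file name is my assumption:

```yaml
# Sketch: let Ansible reject a broken nginx config before it ever lands,
# so the reload handler never fires against a bad file.
- name: Copy nginx configuration, validating it before install
  copy:
    src: nginx.conf                    # hypothetical source file
    dest: /etc/nginx/nginx.conf
    validate: 'nginx -t -c %s'
  notify:
    - reload nginx
```

If validation fails, the task fails and the old configuration stays in place.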
