Validating log file before continuing the playbook execution - Ansible

I want to look for a particular sentence, '*****--Finished Initialization**', in the log before starting the application on the next host. The tail command will never stop printing data, since the application is used by some process immediately after it starts and data keeps being logged.
How can I validate that before restarting the application on the second host?
As of now, I have skipped the tail command and given a timeout of 3 minutes.
- name: playbook to restart the application on hosts
  hosts: host1, host2
  tags: ddapp
  connection: ssh
  gather_facts: no
  tasks:
    - name: start app and validate before proceeding
      shell: |
        sudo systemctl start tomcat#application
        #tail -f application_log.txt
        #other shell commands
      args:
        chdir: /path/to/files/directory
    # current workaround: fixed 3-minute wait instead of tailing the log
    - wait_for: timeout=180

Use the wait_for module:
- name: playbook to restart the application on hosts
  hosts: host1, host2
  tags: ddapp
  connection: ssh
  gather_facts: no
  tasks:
    - name: start app
      become: yes
      service:
        name: tomcat#application
        state: started
    - name: validate before proceeding
      wait_for:
        path: /path/to/files/directory/application_log.txt
        search_regex: Finished Initialization
Note: if the log is not cleared between app restarts and therefore contains multiple Finished Initialization strings, refer to this question.
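If emptying the log is acceptable in your setup, one workaround (a sketch, not taken from the linked question) is to truncate the log right before the restart, so that wait_for only ever matches the marker written by the current start:
# Sketch only: assumes the application log can safely be emptied before a restart.
- name: clear the old log so only the new start marker is matched
  become: yes
  copy:
    content: ""
    dest: /path/to/files/directory/application_log.txt

- name: start app
  become: yes
  service:
    name: tomcat#application
    state: started

- name: validate before proceeding
  wait_for:
    path: /path/to/files/directory/application_log.txt
    search_regex: Finished Initialization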

You have to use the wait_for module to look for a particular string with a regex:
- name: start app
  service:
    state: started
    name: tomcat#application
  become: true

- name: Wait for application to be ready
  wait_for:
    search_regex: '\*\*\*\*\*--Finished Initialization\*\*'
    path: /your/path/to/application_log.txt
wait_for can also be used to detect the appearance of a file (such as a pid file) or a network port being opened (or not).
Also, always prefer a native module over a shell script for handling deployments or actions; that is why I replaced the shell task with the service module.
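For example, a minimal sketch of both cases (the path, port, and timeouts below are made up for illustration):
# Wait for a pid file to appear after the service starts
- name: wait for the application pid file
  wait_for:
    path: /var/run/application.pid   # illustrative path
    timeout: 120

# Wait for a TCP port to be open and accepting connections
- name: wait for port 8080 to open
  wait_for:
    port: 8080        # illustrative port
    delay: 5
    timeout: 120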

Related

Ansible wait_for_connection until the hosts are ready for ansible?

I am using Ansible to configure some VMs.
The problem I am facing is that I can't execute Ansible commands right after the VMs are started; I get a connection timeout error. This happens when I run Ansible right after the VMs are spun up in GCP.
Commands work fine when I execute the playbook 60 seconds later, but I am looking for a way to do this automatically, without manually waiting 60 seconds before executing, so I can run the playbook right after the VMs are spun up and Ansible will wait until they are ready. I also don't want to add a fixed delay to the Ansible tasks.
I am looking for a dynamic way where Ansible tries to execute the playbook and, when it can't connect yet, doesn't show an error but waits until the VMs are ready.
I used this, but it still doesn't work (it fails):
---
- hosts: all
  tasks:
    - name: Wait for connection
      wait_for_connection: # but this still fails, am I doing this wrong?
    - name: Ping all hosts for connectivity check
      ping:
Can someone please help me?
I had the same issue on my side.
I fixed it with the wait_for task.
The basic way is to wait for the SSH connection, like this:
- name: Wait 300 seconds for port 22 to become open and contain "OpenSSH"
  wait_for:
    port: 22
    host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
    search_regex: OpenSSH
    delay: 10
  connection: local
I guess your VM launches an application/service, so you can watch a log file on the VM for the point where the application has started, for example like this (here for a Nexus container):
- name: Wait until container is started and running
  become: yes
  become_user: "{{ ansible_nexus_user }}"
  wait_for:
    path: "{{ ansible_nexus_directory_data }}/log/nexus.log"
    search_regex: ".*Started Sonatype Nexus.*"
I believe what you are looking for is to postpone gather_facts until the server is up, since it would otherwise time out, as you experienced. Your file could work as follows:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Wait for connection (600s default)
      ansible.builtin.wait_for_connection:
    - name: Gather facts manually
      ansible.builtin.setup:
I have these under pre_tasks instead of tasks, but it should probably work if they are first in your file.
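For completeness, the pre_tasks variant mentioned above could look roughly like this (an untested sketch; the ping task is only the connectivity check from the question):
---
- hosts: all
  gather_facts: no
  pre_tasks:
    - name: Wait for connection (600s default)
      ansible.builtin.wait_for_connection:
    - name: Gather facts manually
      ansible.builtin.setup:
  tasks:
    - name: Ping all hosts for connectivity check
      ansible.builtin.ping: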

Ansible: Trigger the task only when previous task is successful and the output is created

I am deploying a VM in Azure using Ansible and then using the public IP it creates in the subsequent tasks. But the time taken to create the public IP is too long, so when the subsequent task is executed, it fails. The time to create the IP also varies; it's not fixed. I want to introduce some logic so the next task only runs once the IP has been created.
- name: Deploy Master Node
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: testvm10
    admin_username: chouseknecht
    admin_password: <your password here>
    image:
      offer: CentOS-CI
      publisher: OpenLogic
      sku: '7-CI'
      version: latest
Can someone assist me here? It's greatly appreciated.
I think the wait_for module is a bad choice because, while it can test for port availability, it will often give you false positives: the port is open before the service is actually ready to accept connections.
Fortunately, the wait_for_connection module was designed for exactly the situation you are describing: it will wait until Ansible is able to successfully connect to your target.
This generally requires that you register your Azure VM with your Ansible inventory (e.g. using the add_host module). I don't use Azure, but if I were doing this with OpenStack I might write something like this:
- hosts: localhost
  gather_facts: false
  tasks:
    # This is the task that creates the vm, much like your existing task
    - os_server:
        name: larstest
        cloud: kaizen-lars
        image: 669142a3-fbda-4a83-bce8-e09371660e2c
        key_name: default
        flavor: m1.small
        security_groups: allow_ssh
        nics:
          - net-name: test_net0
        auto_ip: true
      register: myserver

    # Now we take the public ip from the previous task and use it
    # to create a new inventory entry for a host named "myserver".
    - add_host:
        name: myserver
        ansible_host: "{{ myserver.openstack.accessIPv4 }}"
        ansible_user: centos

# Now we wait for the host to finish booting. We need gather_facts: false here
# because otherwise Ansible will attempt to run the `setup` module on the target,
# which will fail if the host isn't ready yet.
- hosts: myserver
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10

# We could add additional tasks to the previous play, but we can also start a
# new play with implicit fact gathering.
- hosts: myserver
  tasks:
    - ...other tasks here...
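For Azure, the same register/add_host/wait_for_connection pattern should translate roughly as below. This is only a sketch: the public IP is left as a placeholder variable (master_public_ip), because the exact key under which azure_rm_virtualmachine returns it varies by module version, so inspect the registered result with debug and substitute the real expression.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Deploy Master Node
      azure_rm_virtualmachine:
        resource_group: myResourceGroup
        name: testvm10
        admin_username: chouseknecht
        admin_password: <your password here>
        image:
          offer: CentOS-CI
          publisher: OpenLogic
          sku: '7-CI'
          version: latest
      register: master_vm

    # Inspect the registered result to find where the public IP lives,
    # then replace the placeholder below with the real expression.
    - debug:
        var: master_vm

    - add_host:
        name: master
        ansible_host: "{{ master_public_ip }}"  # placeholder, see comment above
        ansible_user: chouseknecht

- hosts: master
  gather_facts: false
  tasks:
    - wait_for_connection:
        delay: 10
        timeout: 600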

Make ansible wait for server to start, without logging in

When I provision a new server, there is a lag between the time I create it and it becomes available. So I need to wait until it's ready.
I assumed that was the purpose of the wait_for task:
hosts:
[servers]
42.42.42.42
playbook.yml:
---
- hosts: all
  gather_facts: no
  tasks:
    - name: wait until server is up
      wait_for: port=22
This fails with Permission denied. I assume that's because nothing is set up yet.
I expected it to open an SSH connection and wait for the prompt, just to see if the server is up. But what actually happens is that it tries to log in.
Is there some other way to perform a wait that doesn't try to log in?
As you correctly stated, this task executes on the "to be provisioned" host, so Ansible tries to connect to it (via SSH) first and only then waits for the port to be up. This works for other ports/services, but not for port 22 on a given host, since 22 is a prerequisite for executing any task on that host.
What you could do is delegate this task to the Ansible control host (the one you run the playbook from) with delegate_to and add the host parameter to the wait_for task.
Example:
- name: wait until server is up
  wait_for:
    port: 22
    host: <the IP of the host you are trying to provision>
  delegate_to: localhost
Hope it helps.
Q: "Is there some other way to perform a wait that doesn't try to login?"
A: It is possible to wait_for_connection. For example
- hosts: all
  gather_facts: no
  tasks:
    - name: wait until server is up
      wait_for_connection:
        delay: 60
        timeout: 300

Ansible: Start service in next host after service finished starting on previous host

I have three hosts on which I want to start a service in a rolling fashion. Host 2 needs to wait for the service to finish starting on host 1, and host 3 needs to wait for the service on host 2.
Host 1 has finished starting the service when a line with an instruction like:
Starting listening for CQL clients
is written to a file.
How can I instruct Ansible (preferably with the service module) to only start the service on the next host once the service on the previous host has written that line to the file?
You'll probably need to break your playbook down a bit. For example:
Your restart.yml contents:
- service:
    name: foobar
    state: restarted

- wait_for:
    search_regex: "Starting listening for CQL clients"
    path: /tmp/foo
and then your main.yml contents:
- include_tasks: restart.yml
  with_items:
    - host1
    - host2
    - host3
https://docs.ansible.com/ansible/latest/modules/wait_for_module.html
It seems it's not possible to serialize at the task level, so I had to build a separate playbook specifically for starting the service and used serial: 1 in the playbook YAML.
My files now look like this:
roles/start-cluster/tasks/main.yaml
- name: Start Cassandra
  become: yes
  service:
    name: cassandra
    state: started

- name: Wait for node to join cluster
  wait_for:
    search_regex: "Starting listening for CQL clients"
    path: /var/log/cassandra/system.log
start-cluster.yaml
- hosts: all
  serial: 1
  gather_facts: False
  roles:
    - start-cluster

How to run an ansible task only once regardless of how many targets there are

Consider the following Ansible play:
- name: stop tomcat
  gather_facts: false
  hosts: pod1
  pre_tasks:
    - include_vars:
        dir: "vars/{{ environment }}"
  vars:
    hipchat_message: "stop tomcat pod1 done."
    hipchat_notify: "yes"
  tasks:
    - include: tasks/stopTomcat8AndClearCache.yml
    - include: tasks/stopHttpd.yml
    - include: tasks/hipchatNotification.yml
This stops Tomcat on any number of servers. What I want it to do is send a HipChat notification when it's done. However, this code sends a separate HipChat message for each server the task runs on, which floods the HipChat window with redundant messages. Is there a way to make the HipChat task happen once, after the stop tomcat/stop httpd tasks have been completed on all the targets? I want the play to shut down Tomcat on all the servers, then send one HipChat message saying "tomcat stopped on pod 1".
You can conditionally run the hipchat notification task on only one of the pod1 hosts.
- include: tasks/hipChatNotification.yml
  when: inventory_hostname == groups.pod1[0]
Alternatively, you could run it only on localhost if you don't need any of the variables from the previous play.
- name: Run notification
  gather_facts: false
  hosts: localhost
  tasks:
    - include: tasks/hipchatNotification.yml
You could also use the run_once flag on the task itself.
- name: Do a thing on the first host in a group.
  debug:
    msg: "Yay only prints once"
  run_once: true

- name: Run this block only once per host group
  block:
    - name: Do a thing on the first host in a group.
      debug:
        msg: "Yay only prints once"
  run_once: true
Ansible handlers are made for this type of problem, where you want to run a task once at the end of an operation even though it may have been triggered multiple times in the play.
You can define a handlers section in your playbook and notify it from your tasks; handlers will not run unless notified by a task, and they run at most once per host at the end of the play regardless of how many times they are notified (combine this with run_once if the notification should fire on only one host in the group).
handlers:
  - name: hipchat notify
    hipchat:
      room: someroom
      msg: tomcat stopped on pod 1
In your play's tasks, just add a notify to the tasks that should trigger the handler; if they report a change, the handler will run after all tasks have executed.
- name: Stop service httpd, if started
  service:
    name: httpd
    state: stopped
  notify:
    - hipchat notify
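Putting the two snippets together, a minimal sketch of a complete play could look like this (room name and message are the same placeholders as above, and the HipChat auth token is omitted as in the snippets; run_once on the handler keeps the notification to a single host in the group):
- name: stop tomcat
  hosts: pod1
  gather_facts: false
  tasks:
    - name: Stop service httpd, if started
      service:
        name: httpd
        state: stopped
      notify:
        - hipchat notify
  handlers:
    - name: hipchat notify
      hipchat:
        room: someroom
        msg: tomcat stopped on pod 1
      run_once: true   # fire the notification on only one host in the group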
