With the wait_for module in Ansible, if I use search_regex='foo' on a file, it seems to start at the beginning of the file, which means it will match on old data. When restarting a process/app (Java) that appends to a file rather than starting a new file, the wait_for module will exit true for old data, but I would like to check from the tail of the file.
The regular expression in search_regex of the wait_for module is set to multiline by default.
You can register the contents of the last line and then search for the string appearing after that line (this assumes there are no duplicate lines in the log file, i.e. each one contains a time stamp):
vars:
  log_file_to_check: <path_to_log_file>
  wanted_pattern: <pattern_to_match>

tasks:
  - name: Get the contents of the last line in {{ log_file_to_check }}
    shell: tail -n 1 {{ log_file_to_check }}
    register: tail_output

  - name: Create a variable with a meaningful name, just for clarity
    set_fact:
      last_line_of_the_log_file: "{{ tail_output.stdout }}"

  ### do some other tasks ###

  - name: Match "{{ wanted_pattern }}" appearing after "{{ last_line_of_the_log_file }}" in {{ log_file_to_check }}
    wait_for:
      path: "{{ log_file_to_check }}"
      search_regex: "{{ last_line_of_the_log_file }}\r(.*\r)*.*{{ wanted_pattern }}"
techraf's answer would work if every line inside the log file is time stamped. Otherwise, the log file may have multiple lines that are identical to the last one.
A more robust/durable approach would be to check how many lines the log file currently has, and then search for the regex/pattern occurring after the 'nth' line.
vars:
  log_file: <path_to_log_file>
  pattern_to_match: <pattern_to_match>

tasks:
  - name: "Get contents of log file: {{ log_file }}"
    command: "cat {{ log_file }}"
    changed_when: false  # Do not show that state was "changed", since we are simply reading the file!
    register: cat_output

  - name: "Create variable to store line count (for clarity)"
    set_fact:
      line_count: "{{ cat_output.stdout_lines | length }}"

  ##### DO SOME OTHER TASKS (LIKE DEPLOYING APP) #####

  - name: "Wait until '{{ pattern_to_match }}' is found inside log file: {{ log_file }}"
    wait_for:
      path: "{{ log_file }}"
      search_regex: "^{{ pattern_to_skip_preexisting_lines }}{{ pattern_to_match }}$"
      state: present
    vars:
      pattern_to_skip_preexisting_lines: "(.*\\n){% raw %}{{% endraw %}{{ line_count }},{% raw %}}{% endraw %}"  # i.e. if line_count=100, this equals "(.*\\n){100,}"
One more method, using an intermediate temp file for tailing new records:
- name: Create tempfile for log tailing
  tempfile:
    state: file
  register: tempfile

- name: Asynchronous tail log to temp file
  shell: tail -n 0 -f /path/to/logfile > {{ tempfile.path }}
  async: 60
  poll: 0

- name: Wait for regex in log
  wait_for:
    path: "{{ tempfile.path }}"
    search_regex: 'some regex here'

- name: Remove tempfile
  file:
    path: "{{ tempfile.path }}"
    state: absent
Actually, if you can force a log rotation on your Java app's log file, then a straightforward wait_for will achieve what you want, since there won't be any historical log lines to match.
I am using this approach with a rolling upgrade of MongoDB, waiting for "waiting for connections" in the mongod logs before proceeding.
Sample tasks:
tasks:
  - name: Rotate mongod logs
    shell: kill -SIGUSR1 $(pidof mongod)
    args:
      executable: /bin/bash

  - name: Wait for mongod to be ready
    wait_for:
      path: /var/log/mongodb/mongod.log
      search_regex: 'waiting for connections'
I solved my problem with @Slezhuk's answer above, but I found an issue: the tail command won't stop after the completion of the playbook; it will run forever. I had to add some more logic to stop the process:
# The playbook doesn't stop the async job automatically, so we need to stop it.
- name: Get the PID of the async tail
  shell: ps -ef | grep {{ tempfile.path }} | grep tail | grep -v grep | awk '{print $2}'
  register: pid_grep

- set_fact:
    tail_pid: "{{ pid_grep.stdout }}"

# The tail command shell has two processes, so we need to kill both.
- name: Kill the async tail command
  shell: ps -ef | grep " {{ tail_pid }} " | grep -v grep | awk '{print $2}' | xargs kill -15

- name: Wait for the tail process to be killed
  wait_for:
    path: /proc/{{ tail_pid }}/status
    state: absent
Came across this page when trying to do something similar, and Ansible has come a long way since the above, so here's my solution building on it. I also used the Linux timeout command with async, so it's doubling up a bit.
It would be much easier if the application rotated logs on startup - but some apps aren't nice like that!
As per this page: https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
The async tasks will run until they either complete, fail or timeout according to their async value.
Hope it helps someone
- name: Set some common vars
  set_fact:
    tmp_tail_log_file: /some/path/tail_file.out  # I did this as I didn't want to create a temp file every time, and I put it in the same spot.
    timeout_value: 300  # async will terminate the task, but to be doubly sure I used the Linux timeout command as well.
    log_file: /var/logs/my_log_file.log

- name: "Asynchronous tail log to {{ tmp_tail_log_file }}"
  shell: timeout {{ timeout_value }} tail -n 0 -f {{ log_file }} > {{ tmp_tail_log_file }}
  async: "{{ timeout_value }}"
  poll: 0

- name: "Wait for xxxxx to finish starting"
  wait_for:
    timeout: "{{ timeout_value }}"
    path: "{{ tmp_tail_log_file }}"
    search_regex: "{{ pattern_search }}"
  vars:
    pattern_search: (.*Startup process completed.*)
  register: waitfor

- name: "Display log file entry"
  debug:
    msg:
      - "xxxxxx startup completed - the following line was matched in the logs"
      - "{{ waitfor['match_groups'][0] }}"
I'm trying to write a role to install MySQL 8, and I'm running into a problem with this:
- name: Extract root password from logs into {{ mysql_root_old_password }} variable
  ansible.builtin.slurp:
    src: "{{ mysql_logfile_path }}"
  register: mysql_root_old_password
  #when: "'mysql' in ansible_facts.packages"

- name: Extract root password from logs into {{ mysql_root_old_password }} variable
  set_fact:
    mysql_root_old_password: "{{ mysql_root_old_password.content | b64decode | regex_findall('generated for root@localhost: (.*)$', multiline=true) }}"
  #when: "'mysqld' in ansible_facts.packages"

- name: Get Server template
  ansible.builtin.template:
    src: "{{ item.name }}.j2"
    dest: "{{ item.path }}"
  loop:
    - { name: "my.cnf", path: "/root/.my.cnf" }
  notify:
    - Restart mysqld
In the .my.cnf I get the password with quotes and brackets:
[client]
user=root
password=['th6k(gZeJSt4']
How to trim that?
What I tried:
- name: trim password
  set_fact:
    mysql_root_old_password2: "{{ mysql_root_old_password | regex_findall('[a-zA-Z0-9,()!@#$%^&*]{12}') }}"
Thanks.
The result of regex_findall is a list because there might be multiple matches. Take the last item:
- set_fact:
    mysql_root_old_password: "{{ mysql_root_old_password.content |
                                 b64decode |
                                 regex_findall('generated for root@localhost: (.*)$', multiline=true) |
                                 last }}"
From your description
on the .my.cnf I get password with quotes and brackets ... How to trim that
I understand that you would like to read an INI file like my.cnf.ini
[client]
user=root
password=['A1234567890B']
where the value of the key password looks like a YAML list with one element, the structure doesn't change, and you are interested only in the value without the leading and trailing square brackets and single quotes.
To do so there are several possibilities.
Via Ansible Lookup plugins
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - name: Extract root password from INI file
      debug:
        msg: "{{ lookup('ini', 'password section=client file=my.cnf.ini') }}"
      register: result

    - name: Show result with type
      debug:
        msg:
          - "{{ result.msg }}"
          - "result.msg is of type {{ result.msg | type_debug }}"
          - "Show password only {{ result.msg[0] }}"  # the first list element
Such an approach will work on the Control Node only:
"Like all templating, lookups execute and are evaluated on the Ansible control machine."
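If the file lives on a managed host rather than the controller, one workaround (my sketch, not from the original answer; the path and regex are assumptions based on the question) is to slurp the file and extract the value with regex_findall:

- name: Read remote my.cnf
  ansible.builtin.slurp:
    src: /root/.my.cnf
  register: cnf

- name: Extract the password between ['...']
  debug:
    msg: "{{ cnf.content | b64decode | regex_findall(\"password=\\['(.*?)'\\]\") | first }}"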
Further Q&A
How to read a line from a file into an Ansible variable
What is the difference between .ini and .conf?
Further Documentation
ini lookup – read data from an INI file
Via Ansible shell module, sed and cut.
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - name: Extract root password from INI file
      shell:
        cmd: echo $(sed -n 's/^password=//p' my.cnf.ini | cut -d "'" -f 2)
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
Please note, regarding "I get password with quotes and brackets ... ['<pass>'] ... How to trim that?", that from the perspective of the SQL service reading the .cnf file, [' and '] may actually be part of the password!
- name: Server template
  template:
    src: "my.cnf.ini"
    dest: "/root/.my.cnf"
If that is correct, they will be necessary for the proper functioning of the service.
I have a playbook that controls a clustered application. The issue is that this playbook can be called/executed a few different ways (manually on the command line [multiple SREs working], as a scheduled task, or programmatically via a 3rd-party system).
The problem is that if the playbook executes simultaneously, it could cause issues for the application (due to the nature of the application).
Question:
Is there a way to prevent the same playbook from running concurrently on the same Ansible server?
Environment:
ansible [core 2.11.6]
  config file = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/ansible.cfg
  configured module search path = ['/etc/ansible/library/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  ansible collection location = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.7 (default, Nov 1 2021, 11:34:21) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
  jinja version = 3.0.2
  libyaml = True
You could test whether a file exists at the start of the playbook and stop the play with meta if it does; if not, you create the file to block another launch:
- name: lock_test
  hosts: all
  vars:
    lock_file_path: /tmp/ansible-playbook.lock

  pre_tasks:
    - name: Check if some file exists
      delegate_to: localhost
      stat:
        path: "{{ lock_file_path }}"
      register: lock_file

    - block:
        - name: end play
          debug:
            msg: "playbook already launched, ending play"
        - meta: end_play
      when: lock_file.stat.exists

    - name: create lock_file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: touch

  # ****************** tasks start
  tasks:
    - name: debug
      debug:
        msg: "something to do"
  # ****************** tasks end

  post_tasks:
    - name: delete the lock file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: absent
Note that this works only if there is a single play in your playbook: meta: end_play ends only the current play, so a second play would still be launched unless you repeat the same test in it.
There is a small lapse of time between the test and the creation of the file, so the probability of launching the same playbook twice within the same second is very low. Still, this solution will always be better than what you currently have.
Another solution is to lock an existing file and test whether it is locked or not, but be careful with this option; see the Unix lock/flock commands (e.g. flock -n /tmp/ansible-playbook.lock ansible-playbook site.yml, which holds the lock for the whole run; site.yml here is just an illustrative name).
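To illustrate making the acquisition atomic without flock (my own sketch, not from the answer above; the lock path is an assumption): mkdir fails if the directory already exists, so the test and the creation become a single step:

pre_tasks:
  - name: Atomically acquire the lock (mkdir fails if the directory already exists)
    delegate_to: localhost
    command: mkdir /tmp/ansible-playbook.lock.d
    register: lock_acquired
    failed_when: false
    changed_when: lock_acquired.rc == 0

  - name: End play if another run holds the lock
    meta: end_play
    when: lock_acquired.rc != 0

The directory would then be removed in post_tasks, exactly like the lock file above.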
You can create a lockfile on the controller with the PID of the ansible-playbook process.
- delegate_to: localhost
  vars:
    lockfile: /tmp/thisisalockfile
    my_pid: "{{ lookup('pipe', 'cut -d\" \" -f4 /proc/$PPID/stat') }}"
    lock_pid: "{{ lookup('file', lockfile) }}"
  block:
    - name: Lock file
      copy:
        dest: "{{ lockfile }}"
        content: "{{ my_pid }}"
      when: lockfile is not exists
            or ('/proc/' ~ lock_pid) is not exists
            or 'ansible-playbook' not in lookup('file', '/proc/' ~ lock_pid ~ '/cmdline')

    - name: Make sure we won the lock
      assert:
        that: lock_pid == my_pid
        fail_msg: "{{ lockfile }} is locked by process {{ lock_pid }}"
Finding the current PID is the trickiest part; $PPID in the lookup is still the PID of a child, so we're grabbing the grandparent out of /proc/.
I wanted to post this here but do not consider it a final/perfect answer; it does work for general purposes.
I put this 'playbook_lock.yml' at the root of my playbook and call it before any roles.
playbook_lock.yml:
# ./playbook_lock.yml
#
## NOTES:
## - Uses '/tmp/' on Ansible server as lock file directory
## - Format of lock file: E.g. 129416_20211103094638_playbook_common_01.lock
## -- Detailed explanation further down
## - Race-condition:
## -- Assumption playbooks will not run within 10sec of each other
## -- Assumption lockfiles were not deleted within 10sec
## -- If running the playbook manually with manual input of Ansible Vault
## --- Enter creds within 10 sec or the playbook will consider this run legacy
## - Built logic to only use ansible.builtin modules to not add additional requirements
##
#
---
## Build a transaction ID from year/month/day/hour/min/sec
- name: debug_transactionID
  debug:
    msg: "{{ transactionID }}"
  vars:
    filter: "{{ ansible_date_time }}"
    transactionID: "{{ filter.year + filter.month + filter.day + filter.hour + filter.minute + filter.second }}"
  run_once: true
  delegate_to: localhost
  register: reg_transactionID

## Find current playbook PID
## Race-condition => assumption playbooks will not run within 10sec of each other
## If playbook is already running >10secs, this return will be empty
- name: debug_current_playbook_pid
  ansible.builtin.shell:
    ## Search ps for any command matching the name of the playbook | remove the 'grep' result | return only the 1st one (if etime < 10sec)
    cmd: "ps -e -o 'pid,etimes,cmd' | grep {{ ansible_play_name }} | grep -v grep | awk 'NR==1{if($2<10) print $1}'"
  changed_when: false
  run_once: true
  delegate_to: localhost
  register: reg_current_playbook_pid

## Check for existing lock files
- name: find_existing_lock_files
  ansible.builtin.find:
    paths: /tmp
    patterns: "*_{{ ansible_play_name }}.lock"
    age: 1s
  run_once: true
  delegate_to: localhost
  register: reg_existing_lock_files

## Check and verify existing lock files
- name: block_discovered_existing_lock_files
  block:
    ## Build fact of all lock files discovered
    - name: fact_existing_lock_files
      ansible.builtin.set_fact:
        fact_existing_lock_files: "{{ fact_existing_lock_files | default([]) + [item.path] }}"
      loop: "{{ reg_existing_lock_files.files }}"
      run_once: true
      delegate_to: localhost
      when:
        - reg_existing_lock_files.matched > 0

    ## Build fact of all discovered lock files
    - name: fact_playbook_lock_file_dict
      ansible.builtin.set_fact:
        fact_playbook_lock_file_dict: "{{ fact_playbook_lock_file_dict | default([]) + [data] }}"
      vars:
        ## E.g. lockfile => 129416_20211103094638_playbook_common_01.lock
        var_pid: "{{ item.split('/')[2].split('_')[0] }}"  ## Extract the 1st portion = PID
        var_transid: "{{ item.split('/')[2].split('_')[1] }}"  ## Extract the 2nd portion = TransactionID
        var_playbook: "{{ item.split('/')[2].split('_')[2:] | join('_') }}"  ## Extract the remainder and join back together = playbook file
        data:
          {pid: "{{ var_pid }}", transid: "{{ var_transid }}", playbook: "{{ var_playbook }}"}
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost

    ## Check each discovered lock file
    ## Verify the PID is still operational
    - name: shell_verify_pid_is_active
      ansible.builtin.shell:
        cmd: "ps -p {{ item.pid }} | awk 'NR==2{print $1}'"
      loop: "{{ fact_playbook_lock_file_dict }}"
      changed_when: false
      delegate_to: localhost
      register: reg_verify_pid_is_active

    ## Build fact of discovered previous playbook PIDs
    - name: fact_previous_playbook_pids
      ansible.builtin.set_fact:
        fact_previous_playbook_pids: "{{ fact_previous_playbook_pids | default([]) + [item.stdout | int] }}"
      loop: "{{ reg_verify_pid_is_active.results }}"
      run_once: true
      delegate_to: localhost

    ## Build fact: is a playbook already operational?
    ## Add PIDs together
    ## If SUM == 0 => no PIDs found (no previous playbooks running)
    ## If SUM != 0 => a previous playbook is still operational
    - name: fact_previous_playbook_operational
      ansible.builtin.set_fact:
        fact_previous_playbook_operational: "{{ ((fact_previous_playbook_pids | sum) | int) != 0 }}"
  when:
    - reg_existing_lock_files.matched > 0
    - reg_current_playbook_pid.stdout is defined

## Continue with playbook, as no previous instances running
- name: block_continue_playbook_operations
  block:
    ## Cleanup legacy lock files, as the PIDs are not operational
    - name: stat_cleanup_legacy_lock_files
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost
      when: fact_existing_lock_files | length >= 1

    ## Create lock file for current playbook
    - name: stat_create_playbook_lock_file
      ansible.builtin.file:
        path: "/tmp/{{ var_playbook_lock_file }}"
        state: touch
        mode: '0644'
      vars:
        var_playbook_lock_file: "{{ reg_current_playbook_pid.stdout }}_{{ reg_transactionID.msg }}_{{ ansible_play_name }}.lock"
      run_once: true
      delegate_to: localhost
  when:
    - reg_current_playbook_pid.stdout is defined

## Fail & exit playbook, as previous playbook is still operational
- name: block_playbook_already_operational
  block:
    - name: fail
      fail:
        msg: 'Playbook "{{ ansible_play_name }}" is already operational! This playbook will now exit without any modifications!!!'
      run_once: true
      delegate_to: localhost
  when: (fact_previous_playbook_operational is true) or
        (reg_current_playbook_pid.stdout is not defined)
...
I have a file with two lines:

first line: /u01/app/oracle/oradata/TEST/
second line: /u02/

How do I read both lines into the same variable, and, using that same variable, how do I find the present working directory through shell commands in Ansible?
You can use command to read a file from disk:
- name: Read a file into a variable
  command: cat /path/to/your/file
  register: my_variable
Then do something like the following to loop over the lines in the file:
- debug: msg="line: {{ item }}"
  loop: "{{ my_variable.stdout_lines }}"
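If you'd rather not shell out, a similar result is possible with the slurp module (a sketch using the same placeholder path; slurp reads the file from the managed host and returns it base64-encoded):

- name: Read a file into a variable without running cat
  ansible.builtin.slurp:
    src: /path/to/your/file
  register: my_file

- debug: msg="line: {{ item }}"
  loop: "{{ (my_file.content | b64decode).splitlines() }}"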
The task below creates a list of the lines from a file:
- set_fact:
    lines_list: "{{ lines_list | default([]) + [item] }}"
  with_lines: cat /path/to/file
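Since with_lines runs cat on the control node anyway, the same list can be built in one step with the file lookup (a sketch; the path is a controller-side placeholder):

- set_fact:
    lines_list: "{{ lookup('file', '/path/to/file').splitlines() }}"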
It's possible to create both a list
"lines_list": [
"/u01/app/oracle/oradata/TEST/",
"/u02/"
]
and a dictionary
"lines_dict": {
"0": "/u01/app/oracle/oradata/TEST/",
"1": "/u02/"
}
with the combine filter
- set_fact:
    lines_dict: "{{ lines_dict | default({}) | combine({idx: item}) }}"
  with_lines: cat /path/to/file
  loop_control:
    index_var: idx
"Present working directory through shell commands in ansible" can be printed from the registered variable. For example
- shell: echo $PWD  # use shell, not command, so that $PWD is expanded
  register: result

- debug:
    var: result.stdout
(not tested)
I have a playbook below:
- hosts: localhost
  vars:
    folderpath:
      folder1/des
      folder2/sdf
  tasks:
    - name: Create a symlink
      shell: "echo {{ folderpath }} | awk -F'/' '{system(\"mkdir \" $1$2 );}'"
      register: result
      #- debug:
      #    msg: "{{ result.stdout }}"
      with_items:
        - " {{ folderpath }} "
However, when I run the playbook, 2 folders get made:

1- folder1des (as expected)
2- folder2 (this should ideally be folder2sdf)

I have tried many combinations and it still doesn't want to work. What do I need to make it work properly?
I do not have an Ansible environment at the moment, but the following should work:
- hosts: localhost
  tasks:
    - name: Create a symlink
      shell: "echo {{ item }} | awk -F'/' '{system(\"mkdir \" $1$2 );}'"
      register: result
      #- debug:
      #    msg: "{{ result.stdout }}"
      with_items:
        - folder1/des
        - folder2/sdf
Reference: Ansible Loops Example
Explanation:
You were passing a single object to with_items, so it found only one item to iterate over; hence it ran only once. What I have done is pass a list of items to with_items, so that it can iterate over each of the multiple items present. See also the sketch below.
Hope this helps!
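Equivalently (my sketch, keeping the names from the question), you can keep the folderpath variable but make it a proper YAML list and iterate with {{ item }}:

- hosts: localhost
  vars:
    folderpath:
      - folder1/des
      - folder2/sdf
  tasks:
    - name: Create the combined directories
      shell: "echo {{ item }} | awk -F'/' '{system(\"mkdir \" $1$2 );}'"
      with_items: "{{ folderpath }}"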
Maybe:
- hosts: localhost
  vars:
    folderpath:
      folder1/des
      folder2/sdf
  tasks:
    - name: Create a symlink
      file:
        state: link
        path: "{{ item | regex_replace('[0-9]/','_') }}"
        src: "{{ item }}"
      with_items: " {{ folderpath }} "
Nothing in your given code creates symlinks. Is that really what you meant to do?
I am running several shell commands in an ansible playbook that may or may not modify a configuration file.
One of the items in the playbook is to restart the service. But I only want to do this if a variable is set.
I am planning on registering a result in each of the shell tasks, but I do not want to overwrite the variable if it is already set to 'restart_needed' or something like that.
The idea is that the restart should be the last task to run: if any of the commands set the restart variable, the restart happens; if none of them did, the service is not restarted. Here is an example of what I have so far...
tasks:
  - name: Make a backup copy of file
    copy: src={{ file_path }} dest={{ file_path }}.{{ date }} remote_src=true owner=root group=root mode=644 backup=yes

  - name: get list of items
    shell: |
      grep <file>
    register: result

  - name: output will be 'restart_needed'
    shell: |
      NUM=14"s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: output

  - name: output will be 'nothing_changed'
    shell: |
      NUM="s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: output

  - name: Restart service
    service: name=myservice enabled=yes state=restarted
In the above example, the variable output will be set to 'restart_needed' after the first task, but will then be changed to 'nothing_changed' in the second task.
I want to keep the variable at 'restart_needed' if it is already there, and kick off the restart service task only if the variable is set to 'restart_needed'.
Thanks!
For triggering restarts, you have two options: the when statement or handlers.
When statement example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result

  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
    when: result.rc == 0
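One caveat worth adding (my note, not part of the original answer): grep exits with code 1 when the string is absent, which fails the shell task by default and aborts the play before the when: condition is ever evaluated. A small tweak avoids that (the same applies to the handlers example below):

- name: check if string "foo" exists in somefile
  shell: grep -q foo somefile
  register: result
  failed_when: result.rc not in [0, 1]  # rc 1 just means "not found"; treat only rc >= 2 as a real error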
Handlers example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result
    changed_when: "result.rc == 0"
    notify: restart service

handlers:
  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
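Note that handlers run once at the end of the play. If the restart must happen earlier, Ansible can flush pending handlers at any point:

- meta: flush_handlers  # run any notified handlers immediately instead of at the end of the play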