Need syntax to add Ansible meta module to an existing playbook

I wish to search for a string ("AC245") in all files with the extension *.db under the /home/examples directory. Below is what I tried.
---
- name: "Find the details here"
  hosts: localhost
  any_errors_fatal: true
  serial: 1
  tasks:
    - name: Ansible find files multiple patterns examples
      find:
        paths: /home/examples
        patterns: "*.db"
        recurse: yes
      register: files_matched

    - name: Search for String in the matched files
      command: grep -i {{ myString }} {{ item.path }}
      register: command_result
      failed_when: command_result.rc == 0
      with_items:
        - "{{ files_matched.files }}"
I run the above find.yml using this command:
ansible-playbook find.yml -e "myString=AC245"
My requirement is that if the string is found, I wish to abort the play immediately using "meta: end_play", marking the playbook as FAILED.
Can you suggest how I can update my current code to end the play as soon as the string is found in any *.db file?

One possible solution is to move the task into a separate tasks file and loop over it. That will allow discrete control over each iteration of the task. For example:
playbook.yml:
---
- name: "Find the details here"
  hosts: localhost
  any_errors_fatal: true
  serial: 1
  tasks:
    - name: Ansible find files multiple patterns examples
      find:
        paths: /home/mparkins/bin/playbooks_sandpit/outputs/dir
        patterns: "*.db"
        recurse: yes
      register: files_matched

    - name: Search for String in the matched files
      include_tasks: tasks.yml
      with_items:
        - "{{ files_matched.files }}"
tasks.yml:
- command: grep -i {{ myString }} {{ item.path }}
  register: command_result
  failed_when: command_result.rc == 0
You can add additional tasks in the tasks file if you wish to fail in a different way, for example using the fail module or meta: end_play.
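For example, a sketch of such an extended tasks.yml (grep returns 0 when the string is found and 1 when it is not):

- command: grep -i {{ myString }} {{ item.path }}
  register: command_result
  failed_when: false   # grep's return code is evaluated in the next task instead
- name: Abort the play when the string is found
  fail:
    msg: "{{ myString }} found in {{ item.path }}"
  when: command_result.rc == 0

Because fail marks the host as failed and any_errors_fatal is set in the play, the run stops immediately with a FAILED status, which matches the stated requirement.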

Q: "The requirement is that if the string is found I wish to abort the play immediately using "meta: end_play" marking the playbook as FAILED."
(ansible 2.7.9)
A: It's not possible.
1) It's not possible to execute meta: end_play in a loop on a condition (string is found)
2) It's not possible to execute both meta: end_play and fail/assert in one play
1) It is possible to write a filter_plugin (Python2)
$ cat filter_plugins/file_filters.py
import mmap

def file_search(file, string):
    # True if the string is found anywhere in the file
    f = open(file)
    s = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    found = s.find(string) != -1
    s.close()
    f.close()
    return found

def file_list_search(list, string):
    # True as soon as the string is found in any file of the list
    for file in list:
        if file_search(file, string):
            return True
    return False

class FilterModule(object):
    ''' Ansible filters for operating on files '''

    def filters(self):
        return {
            'file_search': file_search,
            'file_list_search': file_list_search
        }
and use it in a play. For example
- hosts: localhost
  vars:
    myString: "test string"
  tasks:
    - find:
        paths: /home/examples
        patterns: "*.db"
      register: files_matched

    - name: End of play when myString found
      meta: end_play
      when: "files_matched.files|
             json_query('[*].path')|
             file_list_search(myString)"

    - debug:
        msg: continue
2) It is possible to use set_stats. The play below
- name: End of play when myString found
  block:
    - set_stats:
        data:
          FAILED: 1
    - meta: end_play
  when: "files_matched.files|
         json_query('[*].path')|
         file_list_search(myString)"

- debug:
    msg: continue

- set_stats:
    data:
      FAILED: 0
gives the output below if the string is found
PLAY RECAP **********************************************************************************
127.0.0.1 : ok=4 changed=0 unreachable=0 failed=0
CUSTOM STATS: *******************************************************************************
RUN: { "FAILED": 1}
Enable show_custom_stats in ansible.cfg:
$ grep stats ansible.cfg
show_custom_stats = True
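The same setting can also be enabled for a single run through the corresponding environment variable:

$ ANSIBLE_SHOW_CUSTOM_STATS=true ansible-playbook find.yml -e "myString=AC245"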


How to search for a string in a remote file using Ansible?

Based on questions like
How to search for a string in a file using Ansible?
Ansible: How to pull a specific string out of the contents of a file?
Can slurp be used as a direct replacement for lookup?
and considerations like the following:
By using the slurp module, the whole file is transferred from the Remote Node to the Control Node over the network just to look up a string. Log files can be several MB, while one is mostly interested only in whether the file on the Remote Node contains a specific string, so one would only need to transfer that piece of information, true or false.
How to execute a script on a Remote Node using Ansible?
I was wondering how this can be solved without using the shell module.
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    SEARCH_STRING: "test"
    SEARCH_FILE: "test.file"
  tasks:
    - name: Search for string in file
      command:
        cmd: "grep '{{ SEARCH_STRING }}' {{ SEARCH_FILE }}"
      register: result
      # Since it is a reporting task
      # which needs to deliver a result in any case
      failed_when: result.rc != 0 and result.rc != 1
      check_mode: false
      changed_when: false
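A reporting task could then interpret grep's return code (0 means found, 1 means not found); a minimal sketch:

    - name: Report whether the string was found
      debug:
        msg: "SEARCH_STRING {{ (result.rc == 0) | ternary('FOUND', 'NOT FOUND') }}"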
Or instead of using a workaround with the lineinfile module?
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    SEARCH_STRING: "test"
    SEARCH_FILE: "test.file"
  tasks:
    - name: Search for string
      lineinfile:
        path: "{{ SEARCH_FILE }}"
        regexp: "{{ SEARCH_STRING }}"
        line: "SEARCH_STRING FOUND"
        state: present
      register: result
      # Since it is a reporting task
      changed_when: false
      failed_when: "'replaced' not in result.msg" # as it means SEARCH_STRING NOT FOUND
      check_mode: true # to prevent changes and to do a dry-run only

    - name: Show result, if not found
      debug:
        var: result
      when: "'added' in result.msg" # as it means SEARCH_STRING NOT FOUND
- name: Show result, if not found
debug:
var: result
when: "'added' in result.msg" # as it means SEARCH_STRING NOT FOUND
Since I am looking for a more generic approach, could this be a feasible case for "Should you develop a module?"
Following Developing modules and Creating a module, I've found the following simple solution with
Custom Module library/pygrep.py
#!/usr/bin/python

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.module_utils.basic import AnsibleModule


def run_module():
    module_args = dict(
        path=dict(type='str', required=True),
        search_string=dict(type='str', required=True)
    )

    result = dict(
        changed=False,
        found_lines='',
        found=False
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    with open(module.params['path'], 'r') as f:
        for line in f.readlines():
            if module.params['search_string'] in line:
                result['found_lines'] = result['found_lines'] + line
                result['found'] = True

    result['changed'] = False

    module.exit_json(**result)


def main():
    run_module()


if __name__ == '__main__':
    main()
Playbook pygrep.yml
---
- hosts: localhost
  become: false
  gather_facts: false
  vars:
    SEARCH_FILE: "test.file"
    SEARCH_STRING: "test"
  tasks:
    - name: Grep string from file
      pygrep:
        path: "{{ SEARCH_FILE }}"
        search_string: "{{ SEARCH_STRING }}"
      register: search

    - name: Show search
      debug:
        var: search
      when: search.found
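If, as in the first question, the play should be marked as failed when the string is present, a short follow-up task (a sketch, not part of the original answer) could use the registered result:

    - name: Fail when the string was found
      fail:
        msg: "'{{ SEARCH_STRING }}' found in {{ SEARCH_FILE }}"
      when: search.found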
For a simple test.file
NOTEST
This is a test file.
It contains several test lines.
321tset
123test
cbatset
abctest
testabc
test123
END OF TEST
it will result in the following output:
TASK [Show search] ******************
ok: [localhost] =>
  search:
    changed: false
    failed: false
    found: true
    found_lines: |-
      This is a test file.
      It contains several test lines.
      123test
      abctest
      testabc
      test123
Some Links
What is the grep equivalent in Python?
cat, grep and cut - translated to Python
What is the Python code equivalent to Linux command grep -A?

Ansible - Prevent playbook executing simultaneously

I have a playbook that controls a clustered application. The issue is that this playbook can be called/executed in a few different ways (manually on the cmd line by multiple SREs, as a scheduled task, or programmatically via a 3rd party system).
The problem is if the playbook tries to execute simultaneously, it could cause some issues to the application (nature of the application).
Question:
Is there a way to prevent the same playbook from running concurrently on the same Ansible server?
Environment:
ansible [core 2.11.6]
config file = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/ansible.cfg
configured module search path = ['/etc/ansible/library/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Nov 1 2021, 11:34:21) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
You could test whether a lock file exists at the start of the playbook and stop the play with meta if it does; if not, you create the file to block another launch:
- name: lock_test
  hosts: all
  vars:
    lock_file_path: /tmp/ansible-playbook.lock
  pre_tasks:
    - name: Check if some file exists
      delegate_to: localhost
      stat:
        path: "{{ lock_file_path }}"
      register: lock_file

    - block:
        - name: end play
          debug:
            msg: "playbook already launched, ending play"
        - meta: end_play
      when: lock_file.stat.exists

    - name: create lock_file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: touch

  # ****************** tasks start
  tasks:
    - name: debug
      debug:
        msg: "something to do"
  # ****************** tasks end

  post_tasks:
    - name: delete the lock file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: absent
Note that this guards a single playbook: even if the first playbook stops, a second one can still be launched unless it performs the same test. There is also a small window between the test and the creation of the file, so the probability of launching the same playbook twice within the same second is low but not zero; still, this is better than having no guard at all.
Another solution is to lock an existing file and test whether it is locked, but be careful with this option; see the Unix flock(1) command.
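As a sketch of that flock-based variant (the playbook name is a placeholder), wrapping the ansible-playbook invocation serializes runs at the shell level; -n makes a second invocation fail immediately instead of waiting for the lock:

$ flock -n /tmp/ansible-playbook.lock ansible-playbook site.yml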
You can create a lockfile on the controller with the PID of the ansible-playbook process.
- delegate_to: localhost
  vars:
    lockfile: /tmp/thisisalockfile
    my_pid: "{{ lookup('pipe', 'cut -d\" \" -f4 /proc/$PPID/stat') }}"
    lock_pid: "{{ lookup('file', lockfile) }}"
  block:
    - name: Lock file
      copy:
        dest: "{{ lockfile }}"
        content: "{{ my_pid }}"
      when: lockfile is not exists
            or ('/proc/' ~ lock_pid) is not exists
            or 'ansible-playbook' not in lookup('file', '/proc/' ~ lock_pid ~ '/cmdline')

    - name: Make sure we won the lock
      assert:
        that: lock_pid == my_pid
        fail_msg: "{{ lockfile }} is locked by process {{ lock_pid }}"
Finding the current PID is the trickiest part; $PPID in the lookup is still the PID of a child, so we're grabbing the grandparent out of /proc/.
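A matching release step is not shown in the answer; a sketch of one, assuming the same lockfile path, would be:

- name: Release the lock
  delegate_to: localhost
  file:
    path: /tmp/thisisalockfile
    state: absent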
I wanted to post this here, though I do not consider it a final/perfect answer; it does work for general purposes.
I put this 'playbook_lock.yml' at the root of my playbook directory and call it before any roles.
playbook_lock.yml:
# ./playbook_lock.yml
#
## NOTES:
## - Uses '/tmp/' on the Ansible server as lock file directory
## - Format of lock file: e.g. 129416_20211103094638_playbook_common_01.lock
## -- Detailed explanation further down
## - Race-condition:
## -- Assumption: playbooks will not run within 10sec of each other
## -- Assumption: lockfiles were not deleted within 10sec
## -- If running the playbook manually with manual input of Ansible Vault
## --- Enter creds within 10 sec or the playbook will consider this run legacy
## - Built logic to only use ansible.builtin modules to not add additional requirements
##
#
---
## Build a transaction ID from year/month/day/hour/min/sec
- name: debug_transactionID
  debug:
    msg: "{{ transactionID }}"
  vars:
    filter: "{{ ansible_date_time }}"
    transactionID: "{{ filter.year + filter.month + filter.day + filter.hour + filter.minute + filter.second }}"
  run_once: true
  delegate_to: localhost
  register: reg_transactionID

## Find current playbook PID
## Race-condition => assumption playbooks will not run within 10sec of each other
## If playbook is already running >10secs, this return will be empty
- name: debug_current_playbook_pid
  ansible.builtin.shell:
    ## search PS for any command matching the name of the playbook | remove the 'grep' result | return only the 1st one (if etime < 10sec)
    cmd: "ps -e -o 'pid,etimes,cmd' | grep {{ ansible_play_name }} | grep -v grep | awk 'NR==1{if($2<10) print $1}'"
  changed_when: false
  run_once: true
  delegate_to: localhost
  register: reg_current_playbook_pid

## Check for existing lock files
- name: find_existing_lock_files
  ansible.builtin.find:
    paths: /tmp
    patterns: "*_{{ ansible_play_name }}.lock"
    age: 1s
  run_once: true
  delegate_to: localhost
  register: reg_existing_lock_files

## Check and verify existing lock files
- name: block_discovered_existing_lock_files
  block:
    ## Build fact of all lock files discovered
    - name: fact_existing_lock_files
      ansible.builtin.set_fact:
        fact_existing_lock_files: "{{ fact_existing_lock_files | default([]) + [item.path] }}"
      loop: "{{ reg_existing_lock_files.files }}"
      run_once: true
      delegate_to: localhost
      when:
        - reg_existing_lock_files.matched > 0

    ## Build fact of all discovered lock files
    - name: fact_playbook_lock_file_dict
      ansible.builtin.set_fact:
        fact_playbook_lock_file_dict: "{{ fact_playbook_lock_file_dict | default([]) + [data] }}"
      vars:
        ## E.g. lockfile => 129416_20211103094638_playbook_common_01.lock
        var_pid: "{{ item.split('/')[2].split('_')[0] }}"                   ## extract the 1st portion = PID
        var_transid: "{{ item.split('/')[2].split('_')[1] }}"               ## extract the 2nd portion = TransactionID
        var_playbook: "{{ item.split('/')[2].split('_')[2:] | join('_') }}" ## extract the remainder and join back together = playbook file
        data:
          {pid: "{{ var_pid }}", transid: "{{ var_transid }}", playbook: "{{ var_playbook }}"}
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost

    ## Check each discovered lock file
    ## Verify the PID is still operational
    - name: shell_verify_pid_is_active
      ansible.builtin.shell:
        cmd: "ps -p {{ item.pid }} | awk 'NR==2{print $1}'"
      loop: "{{ fact_playbook_lock_file_dict }}"
      changed_when: false
      delegate_to: localhost
      register: reg_verify_pid_is_active

    ## Build fact of discovered previous playbook PIDs
    - name: fact_previous_playbook_pids
      ansible.builtin.set_fact:
        fact_previous_playbook_pids: "{{ fact_previous_playbook_pids | default([]) + [item.stdout | int] }}"
      loop: "{{ reg_verify_pid_is_active.results }}"
      run_once: true
      delegate_to: localhost

    ## Build fact whether a playbook is already operational
    ## Add PIDs together
    ## If SUM == 0 => no PIDs found (no previous playbooks running)
    ## If SUM != 0 => previous playbook is still operational
    - name: fact_previous_playbook_operational
      ansible.builtin.set_fact:
        fact_previous_playbook_operational: "{{ ((fact_previous_playbook_pids | sum) | int) != 0 }}"
  when:
    - reg_existing_lock_files.matched > 0
    - reg_current_playbook_pid.stdout is defined

## Continue with playbook, as no previous instances running
- name: block_continue_playbook_operations
  block:
    ## Cleanup legacy lock files, as the PIDs are not operational
    - name: stat_cleanup_legacy_lock_files
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop: "{{ fact_existing_lock_files | default([]) }}"
      run_once: true
      delegate_to: localhost
      when: fact_existing_lock_files | default([]) | length >= 1

    ## Create lock file for current playbook
    - name: stat_create_playbook_lock_file
      ansible.builtin.file:
        path: "/tmp/{{ var_playbook_lock_file }}"
        state: touch
        mode: '0644'
      vars:
        var_playbook_lock_file: "{{ reg_current_playbook_pid.stdout }}_{{ reg_transactionID.msg }}_{{ ansible_play_name }}.lock"
      run_once: true
      delegate_to: localhost
  when:
    - reg_current_playbook_pid.stdout is defined

## Fail & exit playbook, as previous playbook is still operational
- name: block_playbook_already_operational
  block:
    - name: fail
      fail:
        msg: 'Playbook "{{ ansible_play_name }}" is already operational! This playbook will now exit without any modifications!!!'
      run_once: true
      delegate_to: localhost
  when: (fact_previous_playbook_operational is true) or
        (reg_current_playbook_pid.stdout is not defined)
...
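To use it, per the note above, the file is included before any roles; a minimal sketch (the role name is a placeholder):

- hosts: all
  pre_tasks:
    - name: Guard against concurrent runs
      ansible.builtin.import_tasks: playbook_lock.yml
  roles:
    - my_role   # placeholder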

detect file difference (change) with `ansible`

In this task I found a roundabout method to compare two files (dconfDump and dconfDumpLocalCurrent) and to set a variable (previously defined as false) to true if the two files differ.
The solution seems to work, but it looks ugly, and as a beginner with Ansible I have the impression that a better solution must exist.
---
# vars file for dconfLoad
local_changed: false
target_changed: false
---
- name: local changed is true when previous target different than local current
  shell: diff /home/frank/dconfDump /home/frank/dconfDumpLocalCurrent
  register: diff_oldtarget_localCurrent
  register: local_changed
  ignore_errors: true

- debug:
    msg: CHANGED LOCALLY
  when: local_changed
Some background to the task, which is an attempt to synchronize files: a file LocalCurrent is compared with LocalOld and CurrentTarget to determine whether LocalCurrent has changed and whether it differs from CurrentTarget. If LocalCurrent is unchanged and CurrentTarget has changed, apply the change (and set LocalOld to CurrentTarget); if LocalCurrent has changed, upload it to the controller.
What is the appropriate approach with Ansible? Thank you for the help!
You can use stat to get the checksum and then compare it. Please see below.
tasks:
  - name: Stat of dconfDump
    stat:
      path: "/tmp/dconfDump"
    register: dump

  - name: SHA1 of dconfDump
    set_fact:
      dump_sha1: "{{ dump.stat.checksum }}"

  - name: Stat of dconfDumpLocalCurrent
    stat:
      path: "/tmp/dconfDumpLocalCurrent"
    register: dump_local

  - name: SHA1 of dconfDumpLocalCurrent
    set_fact:
      local_sha1: "{{ dump_local.stat.checksum }}"

  - name: Same
    set_fact:
      val: "False"
    when: dump_sha1 == local_sha1

  - name: Different
    set_fact:
      val: "True"
    when: dump_sha1 != local_sha1

  - name: Print
    debug:
      msg: "{{ val }}"
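Since the comparison itself is a single expression, the middle tasks could arguably be collapsed into one; a sketch reusing the dump and dump_local registers from above (and assuming, as the answer does, that both files exist):

  - name: Set val to true when the files differ
    set_fact:
      val: "{{ dump.stat.checksum != dump_local.stat.checksum }}"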
Use stat and create a dictionary of checksums. For example
- stat:
    path: "{{ item }}"
  loop:
    - LocalOld
    - LocalCurrent
    - CurrentTarget
  register: result

- set_fact:
    my_files: "{{ dict(paths|zip(chkms)) }}"
  vars:
    paths: "{{ result.results|map(attribute='stat.path')|list }}"
    chkms: "{{ result.results|map(attribute='stat.checksum')|list }}"

- debug:
    var: my_files
gives (abridged) if all files are the same
my_files:
  CurrentTarget: 7c73e9f589ca1f0a1372aa4cd6944feec459c4a8
  LocalCurrent: 7c73e9f589ca1f0a1372aa4cd6944feec459c4a8
  LocalOld: 7c73e9f589ca1f0a1372aa4cd6944feec459c4a8
Then use the dictionary to compare the checksums and copy files. For example
# If LocalCurrent is not changed and CurrentTarget is changed,
# then apply the change (and set LocalOld to CurrentTarget)
- debug:
    msg: Set LocalOld to CurrentTarget
  when:
    - my_files['LocalCurrent'] == my_files['LocalOld']
    - my_files['LocalCurrent'] != my_files['CurrentTarget']

- debug:
    msg: Do not copy anything
  when:
    - my_files['LocalCurrent'] == my_files['LocalOld']
    - my_files['LocalCurrent'] == my_files['CurrentTarget']
gives
TASK [debug] ****
skipping: [localhost]

TASK [debug] ****
ok: [localhost] =>
  msg: Do not copy anything

ansible "with_lines: cat somefile" fails in a skipped block

I have a playbook with a block that has a when condition. Inside is a task with a loop. How can I change this loop so that when the condition is false the skipped task doesn't fail?
block:
  - name: create a file
    lineinfile:
      line: "Hello World"
      path: "{{ my_testfile }}"
      create: yes

  - name: use the file
    debug:
      msg: "{{ item }}"
    with_lines: cat "{{ my_testfile }}"
when: false
TASK [create a file] ************************************************************************************************************************************************************
TASK [use the file] *************************************************************************************************************************************************************
cat: files/my/testfile: No such file or directory
fatal: [ipad-icpi01]: FAILED! => {"msg": "lookup_plugin.lines(cat \"files/mytestfile\") returned 1"}
Change your failing task to the following, which will always be able to run, even if the file does not exist, and will not use shell or command where there is no need to:
- name: use the file
  debug:
    msg: "{{ item }}"
  loop: "{{ (lookup('file', my_testfile, errors='ignore') | default('', true)).split('\n') }}"
The key points:
use the file lookup plugin with errors='ignore' so that it returns the file content, or None rather than an error, when the file does not exist.
use the default filter with the second option set to true so that it returns the default value if the variable exists but is null or empty.
split the result on newlines to get a list of lines (effectively empty if the file does not exist).
Note: as reported by @Vladimir, I corrected your var name, which is not valid in Ansible.
Test the existence of the file. For example
- block:
    - name: create a file
      lineinfile:
        line: "Hello World"
        path: "{{ my_testfile }}"
        create: yes

    - name: use the file
      shell: '[ -f "{{ my_testfile }}" ] && cat {{ my_testfile }}'
      register: result

    - name: use the file
      debug:
        msg: "{{ item }}"
      loop: "{{ result.stdout_lines }}"
  when: false
The lookup plugin file should be preferred.
I ended up with a mix of the provided answers. These tasks will be skipped without failing or creating a warning.
- block:
    - name: create a file
      lineinfile:
        line: "Hello World"
        path: "{{ my_testfile }}"
        create: yes

    - name: get the file
      slurp:
        src: "{{ my_testfile }}"
      register: result

    - name: use the file
      debug:
        msg: "{{ item }}"
      loop: "{{ (result['content'] | b64decode).split('\n') }}"
  when: false

How to write variables/hard-coded values in nested JSON in Ansible?

I'm trying to create a JSON file with hard-coded values as output, in nested JSON, but the second set_fact overwrites the value from the first. Is there a better way to do this?
I have tried the to_nice_json template to copy the variable to a JSON file, but I am not able to keep multiple variable values in imported_var when copying to the JSON file.
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: load var from file
      include_vars:
        file: /tmp/var.json
        name: imported_var

    - name: Checking mysqld status
      shell: service mysqld status
      register: mysqld_stat
      ignore_errors: true

    - name: Checking httpd status
      shell: service httpd status
      register: httpd_stat
      ignore_errors: true

    - name: append mysqld status to output json
      set_fact:
        imported_var: "{{ imported_var | combine({'status_checks': [{'mysqld_status': (mysqld_stat.rc == 0)|ternary('good', 'bad')}]}) }}"

    # - name: write var to file
    #   copy:
    #     content: "{{ imported_var | to_nice_json }}"
    #     dest: /tmp/final.json

    - name: append httpd status to output json
      set_fact:
        imported_var: "{{ imported_var | combine({'status_checks': [{'httpd_status': (httpd_stat.rc == 0)|ternary('good', 'bad')}]}) }}"

    # - debug:
    #     var: imported_var

    - name: write var to file
      copy:
        content: "{{ imported_var | to_nice_json }}"
        dest: /tmp/final.json
Expected result:
{
    "status_checks": [
        {
            "mysqld_status": "good",
            "httpd_status": "good"
        }
    ]
}
Actual result:
{
    "status_checks": [
        {
            "httpd_status": "good"
        }
    ]
}
You're trying to perform the sort of data manipulation that Ansible really isn't all that good at. Any time you attempt to modify an existing variable -- especially if you're trying to set a nested value -- you're making life complicated. Having said that, it is possible to do what you want. For example:
---
- hosts: localhost
  gather_facts: false
  vars:
    imported_var: {}
  tasks:
    - name: Checking sshd status
      command: systemctl is-active sshd
      register: sshd_stat
      ignore_errors: true

    - name: Checking httpd status
      command: systemctl is-active httpd
      register: httpd_stat
      ignore_errors: true

    - set_fact:
        imported_var: "{{ imported_var|combine({'status_checks': []}) }}"

    - set_fact:
        imported_var: >-
          {{ imported_var|combine({'status_checks':
             imported_var.status_checks + [{'sshd_status': (sshd_stat.rc == 0)|ternary('good', 'bad')}]}) }}

    - set_fact:
        imported_var: >-
          {{ imported_var|combine({'status_checks':
             imported_var.status_checks + [{'httpd_status': (httpd_stat.rc == 0)|ternary('good', 'bad')}]}) }}

    - debug:
        var: imported_var
On my system (which is running sshd but not httpd), this will output:
TASK [debug] **********************************************************************************
ok: [localhost] => {
    "imported_var": {
        "status_checks": [
            {
                "sshd_status": "good"
            },
            {
                "httpd_status": "bad"
            }
        ]
    }
}
You could dramatically simplify the playbook by restructuring your data. Make status_checks a top level variable, and instead of having it be a list, have it be a dictionary that maps a service name to the corresponding status. Combine this with some loops and you end up with something that is dramatically simpler:
---
- hosts: localhost
  gather_facts: false
  tasks:
    # We can use a loop here instead of writing a separate task
    # for each service.
    - name: Checking service status
      command: systemctl is-active {{ item }}
      register: services
      ignore_errors: true
      loop:
        - sshd
        - httpd

    # Using a loop in the previous task means we can use a loop
    # when creating the status_checks variable, which again removes
    # a bunch of duplicate code.
    - name: set status_checks variable
      set_fact:
        status_checks: "{{ status_checks|default({})|combine({item.item: (item.rc == 0)|ternary('good', 'bad')}) }}"
      loop: "{{ services.results }}"

    - debug:
        var: status_checks
The above will output:
TASK [debug] **********************************************************************************
ok: [localhost] => {
    "status_checks": {
        "httpd": "bad",
        "sshd": "good"
    }
}
If you really want to add this information to your imported_var, you can do that in a single task:
- set_fact:
    imported_var: "{{ imported_var|combine({'status_checks': status_checks}) }}"
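To also write the merged result to a file, the copy task from the question can be reused unchanged:

- name: write var to file
  copy:
    content: "{{ imported_var | to_nice_json }}"
    dest: /tmp/final.json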
