Playbook where item.stat.exists is not working - Ansible

I have created a playbook which runs on a remote host and checks whether files exist or not. I want to extract only the files that are not present on the remote host, but my playbook returns all paths, whether they are present or not.
Playbook:
- name: Playbook for files not present on remote hosts
  hosts: source
  gather_facts: false
  vars:
    Filepath: /opt/webapps/obiee/oracle_common/inventory/ContentsXML/comps.xml
  tasks:
    - name: Getting files location path
      # extract file paths from comps.xml
      shell: grep -i "COMP NAME" {{ Filepath }} | sed 's/^.*INST_LOC="//' | cut -f1 -d'"' | sed '/^$/d;s/[[:blank:]]//g'
      register: get_element_attribute

    - name: check path present or not
      stat:
        path: "{{ item }}"
      with_items:
        - "{{ get_element_attribute.stdout_lines }}"
      register: path_output

    - name: path exists or not
      set_fact:
        path_item: "{{ item }}" # here I am getting the output as expected, i.e. the files not present on the remote host
      with_items: "{{ path_output.results }}"
      register: final_output
      when: item.stat.exists == False

    - debug:
        var: final_output # gives both outputs, i.e. files present and absent

    - name: Create a fact list
      set_fact:
        paths: "{{ final_output.results | map(attribute='item.item') | list }}" # I need to add the condition "item.stat.exists == False" inside this statement

    - name: Print Fact
      debug:
        var: paths

The issue was resolved with the task below (set_fact registered over a loop records a result for every item, including skipped ones, which is why final_output contained both present and absent files):
- name: Create a fact list
  set_fact:
    paths: "{{ final_output.results | selectattr('item.stat.exists', 'equalto', false) | map(attribute='item.item') | list }}"
  register: config_facts

The following query should get all the file names which don't exist on the remote host and store them in the fact 'paths':
- name: Create a fact list
  set_fact:
    paths: "{{ final_output | json_query(query) }}"
  vars:
    query: "results[?(@._ansible_item_label.stat.exists==`false`)]._ansible_item_label.item"

Related

Ansible - Prevent playbook executing simultaneously

I have a playbook that controls a clustered application. The issue is that this playbook can be called/executed a few different ways (manually on the command line [multiple SREs working], as a scheduled task, or programmatically via a 3rd-party system).
The problem is that if the playbook executes simultaneously, it could cause some issues for the application (due to the nature of the application).
Question:
Is there a way to prevent the same playbook from running concurrently on the same Ansible server?
Environment:
ansible [core 2.11.6]
config file = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/ansible.cfg
configured module search path = ['/etc/ansible/library/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Nov 1 2021, 11:34:21) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
You could test whether a lock file exists at the start of the playbook and end the play with meta if it does; if not, create the file to block another launch:
- name: lock_test
  hosts: all
  vars:
    lock_file_path: /tmp/ansible-playbook.lock
  pre_tasks:
    - name: Check if some file exists
      delegate_to: localhost
      stat:
        path: "{{ lock_file_path }}"
      register: lock_file

    - block:
        - name: end play
          debug:
            msg: "playbook already launched, ending play"
        - meta: end_play
      when: lock_file.stat.exists

    - name: create lock_file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: touch

  # ****************** tasks start
  tasks:
    - name: debug
      debug:
        msg: "something to do"
  # ****************** tasks end

  post_tasks:
    - name: delete the lock file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: absent
Note that you must have only one play per playbook for this to work: even if the first play stops, the next play would still be launched unless you repeat the same test in it.
There is a small window of time between the test and the creation of the file, so the probability of launching the same playbook twice within the same second is very low.
This solution will always be better than what you have currently.
Another solution is to lock an existing file and test whether the file is locked or not, but be careful with this option; see the Unix lock/flock commands.
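A hedged sketch of that probe as an Ansible task (the lock path is an assumption, and something longer-lived, e.g. a wrapper script that runs ansible-playbook under flock, must actually hold the lock; the probe itself releases it immediately):
- name: Fail fast if the lock file is currently held by another process
  command: flock --nonblock --conflict-exit-code 200 /tmp/ansible-playbook.lock true
  register: lock_probe
  failed_when: lock_probe.rc == 200  # 200 means flock could not take the lock
  changed_when: false
  delegate_to: localhost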
You can create a lockfile on the controller with the PID of the ansible-playbook process.
- delegate_to: localhost
  vars:
    lockfile: /tmp/thisisalockfile
    my_pid: "{{ lookup('pipe', 'cut -d\" \" -f4 /proc/$PPID/stat') }}"
    lock_pid: "{{ lookup('file', lockfile) }}"
  block:
    - name: Lock file
      copy:
        dest: "{{ lockfile }}"
        content: "{{ my_pid }}"
      when: lockfile is not exists
        or ('/proc/' ~ lock_pid) is not exists
        or 'ansible-playbook' not in lookup('file', '/proc/' ~ lock_pid ~ '/cmdline')

    - name: Make sure we won the lock
      assert:
        that: lock_pid == my_pid
        fail_msg: "{{ lockfile }} is locked by process {{ lock_pid }}"
Finding the current PID is the trickiest part; $PPID in the lookup is still the PID of a child, so we're grabbing the grandparent out of /proc/
I wanted to post this here, though I do not consider it a final/perfect answer; it does work for general purposes.
I put this 'playbook_lock.yml' at the root of my playbook and call it before any roles.
playbook_lock.yml:
# ./playbook_lock.yml
#
## NOTES:
## - Uses '/tmp/' on the Ansible server as the lock file directory
## - Format of lock file, e.g.: 129416_20211103094638_playbook_common_01.lock
## -- Detailed explanation further down
## - Race condition:
## -- Assumption: playbooks will not run within 10 sec of each other
## -- Assumption: lock files were not deleted within 10 sec
## -- If running the playbook manually with manual input of Ansible Vault
## --- Enter creds within 10 sec or the playbook will consider this run legacy
## - Built logic to only use ansible.builtin modules so as not to add additional requirements
##
#
---
## Build a transaction ID from year/month/day/hour/min/sec
- name: debug_transactionID
  debug:
    msg: "{{ transactionID }}"
  vars:
    filter: "{{ ansible_date_time }}"
    transactionID: "{{ filter.year + filter.month + filter.day + filter.hour + filter.minute + filter.second }}"
  run_once: true
  delegate_to: localhost
  register: reg_transactionID

## Find current playbook PID
## Race condition => assumption playbooks will not run within 10 sec of each other
## If playbook is already running >10 secs, this return will be empty
- name: debug_current_playbook_pid
  ansible.builtin.shell:
    ## search ps for any command matching the name of the playbook | remove the 'grep' result | return only the 1st one (if etimes < 10 sec)
    cmd: "ps -e -o 'pid,etimes,cmd' | grep {{ ansible_play_name }} | grep -v grep | awk 'NR==1{if($2<10) print $1}'"
  changed_when: false
  run_once: true
  delegate_to: localhost
  register: reg_current_playbook_pid

## Check for existing lock files
- name: find_existing_lock_files
  ansible.builtin.find:
    paths: /tmp
    patterns: "*_{{ ansible_play_name }}.lock"
    age: 1s
  run_once: true
  delegate_to: localhost
  register: reg_existing_lock_files

## Check and verify existing lock files
- name: block_discovered_existing_lock_files
  block:
    ## Build fact of all lock files discovered
    - name: fact_existing_lock_files
      ansible.builtin.set_fact:
        fact_existing_lock_files: "{{ fact_existing_lock_files | default([]) + [item.path] }}"
      loop: "{{ reg_existing_lock_files.files }}"
      run_once: true
      delegate_to: localhost
      when:
        - reg_existing_lock_files.matched > 0

    ## Build fact of all discovered lock files
    - name: fact_playbook_lock_file_dict
      ansible.builtin.set_fact:
        fact_playbook_lock_file_dict: "{{ fact_playbook_lock_file_dict | default([]) + [data] }}"
      vars:
        ## E.g. lockfile => 129416_20211103094638_playbook_common_01.lock
        var_pid: "{{ item.split('/')[2].split('_')[0] }}" ## extract the 1st portion = PID
        var_transid: "{{ item.split('/')[2].split('_')[1] }}" ## extract 2nd portion = TransactionID
        var_playbook: "{{ item.split('/')[2].split('_')[2:] | join('_') }}" ## extract the remainder and join back together = playbook file
        data:
          {pid: "{{ var_pid }}", transid: "{{ var_transid }}", playbook: "{{ var_playbook }}"}
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost

    ## Check each discovered lock file
    ## Verify the PID is still operational
    - name: shell_verify_pid_is_active
      ansible.builtin.shell:
        cmd: "ps -p {{ item.pid }} | awk 'NR==2{print $1}'"
      loop: "{{ fact_playbook_lock_file_dict }}"
      changed_when: false
      delegate_to: localhost
      register: reg_verify_pid_is_active

    ## Build fact of discovered previous playbook PIDs
    - name: fact_previous_playbook_pids
      ansible.builtin.set_fact:
        fact_previous_playbook_pids: "{{ fact_previous_playbook_pids | default([]) + [item.stdout | int] }}"
      loop: "{{ reg_verify_pid_is_active.results }}"
      run_once: true
      delegate_to: localhost

    ## Build fact: is playbook already operational
    ## Add PIDs together
    ## If SUM == 0 => no PIDs found (no previous playbooks running)
    ## If SUM != 0 => previous playbook is still operational
    - name: fact_previous_playbook_operational
      ansible.builtin.set_fact:
        fact_previous_playbook_operational: "{{ ((fact_previous_playbook_pids | sum) | int) != 0 }}"
  when:
    - reg_existing_lock_files.matched > 0
    - reg_current_playbook_pid.stdout is defined

## Continue with playbook, as no previous instances are running
- name: block_continue_playbook_operations
  block:
    ## Cleanup legacy lock files, as the PIDs are not operational
    - name: stat_cleanup_legacy_lock_files
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost
      when: fact_existing_lock_files | length >= 1

    ## Create lock file for current playbook
    - name: stat_create_playbook_lock_file
      ansible.builtin.file:
        path: "/tmp/{{ var_playbook_lock_file }}"
        state: touch
        mode: '0644'
      vars:
        var_playbook_lock_file: "{{ reg_current_playbook_pid.stdout }}_{{ reg_transactionID.msg }}_{{ ansible_play_name }}.lock"
      run_once: true
      delegate_to: localhost
  when:
    - reg_current_playbook_pid.stdout is defined

## Fail & exit playbook, as previous playbook is still operational
- name: block_playbook_already_operational
  block:
    - name: fail
      fail:
        msg: 'Playbook "{{ ansible_play_name }}" is already operational! This playbook will now exit without any modifications!!!'
      run_once: true
      delegate_to: localhost
  when: (fact_previous_playbook_operational is true) or
        (reg_current_playbook_pid.stdout is not defined)
...
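For reference, a minimal sketch of how this include might be wired in before any roles (the play target and role name are placeholders, not from the original):
- hosts: all
  pre_tasks:
    - name: Take the playbook lock first
      import_tasks: playbook_lock.yml
  roles:
    - some_role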

How to register and capture only the output that meets failed-condition criteria in an Ansible playbook?

I am trying to set up a playbook as follows, where I need to send an email notification when my task fails. The alert with email notification as such works as expected; however, I am unable to capture the output just for the failed conditions. Currently, the way I am registering gives me all results, including the variables (file paths in this example) where the condition does not fail.
---
- hosts: test-box
  tasks:
    - name: alert on failure
      block:
        - name: generate a list of desired file paths
          find:
            paths: /base/sdir
            recurse: yes
            file_type: file
            patterns: "abrn.*.dat"
            use_regex: yes
          register: file_paths

        - name: check if file stopped updating
          vars:
            msg: |
              "{{ item }}"
              "{{ ansible_date_time.epoch }}"
              "{{ item.stat.mtime|int }}"
              "{{ ( (ansible_date_time.epoch|int - item.stat.mtime|int) / 60 ) | int }} min"
          with_items: "{{ ts.results }}"
          fail:
            msg: |
              "{{ msg.split('\n') }}"
          register: failed_items  ### -> HOW TO REGISTER ONLY THE FILE PATHS (RESULTS) WHERE THIS FAIL CONDITION IS MET??
          when: ( (ansible_date_time.epoch|int - item.stat.mtime|int) / 60 ) | int > 2
      rescue:
        - name: email notification
          mail:
            host: localhost
            port: 25
            from: A
            to: B
            subject: TASK FAILED
            body: |
              Failed subdirs: {{ failed_items }}  ## This gives me all results including those where the failed condition is not met
          delegate_to: localhost
...
In the body of the email, I want to capture only the file paths where the mtime condition is met, but currently I get all file paths.
Any suggestions on how I can filter the output to capture only the matching condition?
Thanks.
You should use set_fact; this way, the list variable is assigned only when your when condition is true.
---
- hosts: localhost
  become: false
  vars:
    path: "{{ '%s/bin' | format(lookup('env', 'HOME')) }}"
    time: "{{ 2 * 60 }}"  # 2 mins
  tasks:
    - name: Find files in a defined path
      find:
        paths: "{{ path }}"
      register: _result

    - name: Add files not modified for longer than the defined time to a list
      set_fact:
        stale_files: "{{ stale_files | default([]) + [item.path] }}"
      loop: "{{ _result.files }}"
      when: ((ansible_date_time.epoch | float) - item.mtime) > (time | float)

    - name: Show stale files
      debug:
        msg: "{{ stale_files }}"
You could also use another approach, i.e. a loop with filtering (for me the selectattr filter did not work with the lt test, so a json_query filter should work instead; see "Can't compare attribute to a number. Error: "not supported between instances of 'AnsibleUnsafeText' and 'int'"").
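A minimal sketch of that json_query variant, reusing the variables from the play above (the query string itself is my assumption, not taken from the linked question):
- name: Build the stale file list with json_query instead of a set_fact loop
  set_fact:
    stale_files: "{{ _result.files | json_query(query) }}"
  vars:
    cutoff: "{{ (ansible_date_time.epoch | float) - (time | float) }}"
    query: "[?mtime < `{{ cutoff }}`].path"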

In Ansible, can we loop through a list of files and match content in each file using the lineinfile module?

I have to find all config.xml files on a server and produce the list for a given server.
Once the file list is registered, I have to check the content of each file on the list using Ansible.
I tried to derive the paths for all config.xml files, register them, and print the list, then added the registered variable into the lineinfile path.
## Derive config.xml path
- name: Find the location of xml file
  shell: find {{ wlx_mount_point }} -maxdepth 1 -name {{ xml_file }} | rev | cut -d '/' -f3- | rev
  register: wlx_domain_home
  ignore_errors: yes

- debug:
    msg: "{{ wlx_domain_home.stdout_lines | list }}"

- name: check domain home directory exists
  stat:
    path: "{{ wlx_domain_home | list }}"
  ignore_errors: true

- debug:
    msg: "{{ wlx_domain_home.stdout_lines | list }}"

- name: "Ensure Logging Settings in config.xml"
  lineinfile:
    line: "{{ item.line }}"
    regexp: "{{ item.regexp }}"
    path: "{{ wlx_domain_home.stdout_lines | list }}/config/config.xml"
    state: present
    backrefs: yes
  register: config.xml.Logging
  with_fileglob: "{{ wlx_domain_home.stdout_lines | list }}/config/config.xml"
  with_items:
    - line: "<logger-severity>Info</logger-severity>"
      regexp: "^logger-severity.*"
The expected result is that it looks for the lines in each file, looping through the list. Instead, it prints the list and is not able to find the content:
"_ansible_ignore_errors": true, "msg": "Destination
[u'/appl/cmpas9/user_projects/pte-ipscasws',
u'/appl/bbb/user_projects/qa-ucxservices_bkp',
u'/appl/app/user_projects/weiss_apps12',
u'appl/view/user_projects/weiss_apps12_oldbkp',
u'appl/voc/user_projects/qa-voc']/config/config.xml does not exist !"
}
This is how I fixed the issue. Now it gives output:
- name: find all weblogic domain paths
  shell: find /tech/appl -type f -name config.xml
  register: wlx_domain_config

- debug:
    var: wlx_domain_config.stdout_lines

- name: "Ensure allow-unencrypted-null-cipher Encryption Settings in config.xml"
  shell: grep -i "allow-unencrypted-null-cipher" {{ item }}
  with_items: "{{ wlx_domain_config.stdout_lines }}"
  register: allowunencrypted

- debug:
    var: allowunencrypted.stdout_lines
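Since the original question asked about lineinfile specifically, here is a hedged sketch of checking each discovered file with lineinfile in check mode instead of grep (the regexp/line values are assumptions based on the settings discussed above; a `changed` result indicates the line is missing):
- name: Check each config.xml for the logging setting without modifying it
  lineinfile:
    path: "{{ item }}"
    regexp: '<logger-severity>.*</logger-severity>'
    line: '<logger-severity>Info</logger-severity>'
  check_mode: yes
  register: logging_check
  with_items: "{{ wlx_domain_config.stdout_lines }}"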

Ansible: access same variables from multiple JSON files

I have multiple .json files on the local host where I place my playbook:
json-file-path/{{ testName }}.json
{{ testName }}.json are: testA.json, testB.json, testC.json ... etc.
All .json files have the same keys with different values, like this:
json-file-path/testA.json:
{
  "a_key": "a_value1",
  "b_key": "b_value1"
}
json-file-path/testB.json:
{
  "a_key": "a_value2",
  "b_key": "b_value2"
}
json-file-path/testC.json:
{
  "a_key": "a_value3",
  "b_key": "b_value3"
}
.....
I need to access the key/value variables from all .json files, and if the values meet some condition, I will perform a task on the target host. For example, I have:
a_value1=3
a_value2=4
a_value3=1
I go through my .json files one by one; if a_key's value > 3, I will copy this .json file to the target host, otherwise skip the task. In this case, I will only copy testC.json to the target host.
How would I achieve this? I was thinking of re-constructing my .json files using {{ testName }} as a dynamic key of the dict, like this:
{
  "testA": {
    "a_key": "a_value1",
    "b_key": "b_value1"
  }
}
So I can access my variables as {{ testName }}.a_key. So far I haven't been able to achieve this.
I have tried the following in my playbook:
---
- hosts: localhost
  tasks:
    - name: construct json files
      vars:
        my_vars:
          a_key: "{{ a_value }}"
          b_key: "{{ b_value }}"
      with_dict: "{{ testName }}"
      copy:
        content: "{{ my_vars | to_nice_json }}"
        dest: /json-file-path/{{ testName }}.json
My updated playbooks are:
/mypath/tmp/include.yaml:
---
- hosts: remote_hostName
  tasks:
    - name: load json files
      set_fact:
        json_data: "{{ lookup('file', item) | from_json }}"

    - name: copy json file if condition meets
      copy:
        src: "{{ item }}"
        dest: "{{ /remote_host_path/tmp }}/{{ item | basename }}"
      delegate_to: "{{ remote_hostName }}"
      when: json_data.a_key|int > 5

/mypath/test.yml:
---
- hosts: localhost
  vars:
    local_src_dir: /mypath/tmp
    remote_host: remote_hostName
    remote_dest_dir: /remote_host_path/tmp
  tasks:
    - name: looping
      include: include.yaml
      with_fileglob:
        - "{{ local_src_dir }}/*json"

All json files on the localhost are under /mypath/tmp/.
Latest version of the playbook, which is working now:
/mypath/tmp/include.yaml:
---
- name: loading json files
  include_vars:
    file: "{{ item }}"
    name: json_data

- name: copy json file to remote if condition meets
  copy:
    src: "{{ item }}"
    dest: '/remote_host_path/tmp/{{ item | basename }}'
  delegate_to: "{{ remote_host }}"
  when: json_data.a_key > 5

/mypath/test.yml:
---
- hosts: localhost
  vars:
    local_src_dir: /mypath/tmp
    remote_host: remote_hostName
    remote_dest_dir: /remote_host_path/tmp
  tasks:
    - name: looping json files
      include: include.yaml
      with_fileglob:
        - "{{ local_src_dir }}/*json"
I am hoping that I have understood your requirements correctly, and that this helps move you forward.
Fundamentally, you can load each of the JSON files so you can query the values as native Ansible variables. Therefore you can loop through all the files, read each one, compare the value you are interested in and then conditionally copy to your remote host via a delegated task. Therefore, give this a try:
Create an include file include.yaml:
---
# 'item' contains a path to a local JSON file on each pass of the loop
- name: Load the json file
  set_fact:
    json_data: "{{ lookup('file', item) | from_json }}"

- name: Delegate a copy task to the remote host conditionally
  copy:
    src: "{{ item }}"
    dest: "{{ remote_dest_dir }}/{{ item | basename }}"
  delegate_to: "{{ remote_host }}"
  when: json_data.a_key > value_threshold
then in your playbook:
---
- hosts: localhost
  connection: local
  # Set some example vars, though these could be placed in a variety of places
  vars:
    local_src_dir: /some/local/path
    remote_host: <some_inventory_hostname>
    remote_dest_dir: /some/remote/path
    value_threshold: 3
  tasks:
    - name: Loop through all *json files, passing matches to include.yaml
      include: include.yaml
      loop: "{{ lookup('fileglob', local_src_dir + '/*json').split(',') }}"
Note: As you are running an old version of Ansible, you may need older alternate syntax for all of this to work:
In your include file:
- name: Load the json file
  include_vars: "{{ item }}"

- name: Delegate a copy task to the remote host conditionally
  copy:
    src: "{{ item }}"
    dest: "{{ remote_dest_dir }}/{{ item | basename }}"
  delegate_to: "{{ remote_host }}"
  when: a_key > value_threshold
and in your playbook:
- name: Loop through all *json files, passing matches to include.yaml
  include: include.yaml
  with_fileglob:
    - "{{ local_src_dir }}/*json"

Assign item to a var with with_items in Ansible

I am trying to create a playbook to find out which OpenStack server a VM is running on. I have created a list of OpenStack servers in vars and used delegate_to with with_items to iterate through them until the VM is found. I am using wc -l at the end of the command, and 1 indicates success. The aim is, once the OS server is found, to store the server name in a var so it can be used for the rest of the tasks in the playbook. I am unable to get the OS server name into a var from the list. I am not an Ansible expert. Can anyone help me achieve this? Thanks
- hosts: localhost
  vars:
    openstack:
      - reg1
      - reg2
      - reg3
      - reg4
  tasks:
    - name: Command to find os server where vm exists
      shell: somecommand-to-check-if-vm-exist | wc -l
      delegate_to: "{{ item }}"
      with_items: "{{ openstack }}"
      register: found_server
      retries: 1
      delay: 1
      until: found_server.stdout != "1"

    - debug: var=found_server

    - name: set fact
      set_fact: os-server = "{{ item.item }}"
      when: item.stdout == "1"
      with_items: "{{ found_server.results }}"
      register: var2

    - name: debug var
      debug: var=var2

    - debug: var=os-server
There's no need for retries/until here, or for the second loop.
Try this:
- hosts: localhost
  vars:
    openstack: [reg1, reg2, reg3, reg4]
  tasks:
    - name: Command to find os server where vm exists
      shell: somecommand-to-check-if-vm-exist | wc -l
      delegate_to: "{{ item }}"
      with_items: "{{ openstack }}"
      register: vm_check

    - name: set fact
      set_fact:
        os_server: "{{ (vm_check.results | selectattr('stdout', 'equalto', '1') | list | first).item }}"

    - name: debug var
      debug:
        msg: "{{ os_server }}"
This will register the results from every server into vm_check.results, then select the elements with stdout set to 1, take the first element (I suppose you always have exactly one server with the VM), and get .item of this element, which contains the item of the original loop (in our case, the server's name).
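If there is a chance that no server matches, taking first of an empty selection yields an undefined value; a hedged variant with a fallback (the 'not-found' marker is my placeholder):
- name: set fact (guarded against no match)
  set_fact:
    os_server: "{{ vm_check.results | selectattr('stdout', 'equalto', '1') | map(attribute='item') | first | default('not-found') }}"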
