I am trying to check the group ownership of every directory named "deployments" under a given path. To do this I use a for loop with the find command inside a shell task, collecting all the deployments directories, and then use grep to check for any deviation. Below is my task. The problem is that it is not working: even when the group ownership differs, it is not detected.
How can I check that the command I am passing to the shell module is actually being run correctly by the shell module?
---
- name: deployment dir group ownership check
  shell: for i in `find /{{ path }} -name deployments -type d -print`;do ls -ld $i | grep -v 'sag';done > /dev/null 2>&1; echo $?
  register: find_result
  delegate_to: "{{ pub_server }}"
  changed_when: False
  ignore_errors: true
  tags:
    - deployment_dir

- debug: var=find_result

- name: status of the group in deployments dir
  shell: echo "Success:Deployments dir is owned by sag group in {{ path }} in server {{ pub_server }}" >> groupCheck.log
  when: find_result.stdout == "1"
  changed_when: False
  tags:
    - deployment_dir

- name: status of the group in deployments dir
  shell: echo "Fail:Deployments dir is NOT owned by sag group in {{ path }} in server {{ pub_server }}" >> groupCheck.log
  changed_when: False
  when: find_result.stdout == "0"
  tags:
    - deployment_dir
Why use that shell construct with find?
You can stat the folder and then read its UID and GID, and set those permissions from there...
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - stat:
        path: /tmp
      register: tmp

    - debug: msg="Uid {{ tmp.stat.uid }} - Guid {{ tmp.stat.gid }}"
Playbook run:
PLAY ***************************************************************************
TASK [stat] ********************************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Uid 0 - Guid 0"
}
Tmp folder:
14:17:17 kelson#local:/# ls -la /
drwxrwxrwt 23 root root 4096 Mai 14 14:17 tmp
Following that line of thinking, you can use the find module, register the output, and run stat over the results to inspect every folder's permissions, or set the ones you want...
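For example, a minimal sketch of that idea (assuming the expected group is sag and that path and pub_server are defined as in the question; the gr_name field comes from the stat module):

- name: Find all deployments directories
  find:
    paths: "/{{ path }}"
    patterns: deployments
    file_type: directory
    recurse: yes
  register: deploy_dirs
  delegate_to: "{{ pub_server }}"

- name: Stat every deployments directory found
  stat:
    path: "{{ item.path }}"
  loop: "{{ deploy_dirs.files }}"
  register: dir_stats
  delegate_to: "{{ pub_server }}"

- name: Report any directory whose group is not sag
  debug:
    msg: "{{ item.stat.path }} is owned by group {{ item.stat.gr_name }}"
  loop: "{{ dir_stats.results }}"
  when: item.stat.gr_name != 'sag'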
I have a directory that contains files like der_azureData_Linux_x64_24.1.0.0.7.tar. I need to find out the latest file in that directory and split the file name into name and version.
Expected output
name: der_azureData_Linux_x64
version: 24.1.0.0.7
Playbook
I tried with the below playbook but it is not displaying the file name.
- hosts: localhost
  gather_facts: no
  tasks:
    - find:
        paths: "/root/ansible.temp/"
        patterns: "*.tar"
        recurse: yes
      register: files_matched

    - set_fact:
        latest_files: "{{ latest_files | default([]) + [item.path] }}"
      loop: "{{ files_matched.files | sort(attribute='mtime', reverse=true) }}"
      when: "item.path | dirname not in latest_files | default([]) | map('dirname')"
      ##
      # The loop_control is just there for validation purposes
      ##
      loop_control:
        label: "{{ '%Y-%m-%d %H:%M:%S' | strftime(item.mtime) }} {{ item.path }}"

    - debug:
        var: latest_files

    - name: check the directories and locations
      shell: |
        f = {{ latest_files }} # loop over each file in current directory
        name="${f%_*}"         # trim from right to first '_'
      register: name

    - name: print values
      debug:
        msg: "filename {{ name.stdout }} "
Output as follow
TASK [check the directories and locations] **************************
changed: [localhost]
TASK [print values] *************************************************
ok: [localhost] => {
    "msg": "filename "
}
How to find new files in a directory? ... I need to find out the latest file in that directory.
Given the test files
tree test
test
├── newest.yml
├── test_1.2.3.4.tar
└── test_2.3.4.5.tar
ls -al test*
-rw-r--r--. 1 user users 0 Jan 16 12:00 test_1.2.3.4.tar
-rw-r--r--. 1 user users 0 Jan 16 12:00 test_2.3.4.5.tar
a minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:

    - name: Get files in a folder
      find:
        paths: "/home/{{ ansible_user }}/test"
        patterns: "*.tar"
      register: result

    - name: Get newest file
      set_fact:
        LATEST: "{{ result.files | sort(attribute='mtime',reverse=true) | first }}"

    - name: Show LATEST
      debug:
        msg: "{{ LATEST.path | basename }}"
will result in an output of
TASK [Show LATEST] ****
ok: [localhost] =>
msg: test_2.3.4.5.tar
Similar Q&A
Getting the newest filename in a directory in Ansible
How to divide or split file name with the version name at the filename?
You may then proceed further with
- name: Show VERSION only
  debug:
    msg: "{{ LATEST.path | basename | split('_') | last | splitext | first }}"
resulting in an output of
TASK [Show VERSION only] ****
ok: [localhost] =>
msg: 2.3.4.5
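The name portion is not shown in the thread, but a sketch along the same lines (assuming the version is always the part after the last underscore) can trim it off with regex_replace:

- name: Show NAME only
  debug:
    msg: "{{ LATEST.path | basename | splitext | first | regex_replace('_[^_]+$', '') }}"

With the question's file this yields der_azureData_Linux_x64.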
Similar Q&A
Extract file names without extension - Ansible
Filters Used
basename
first
last
split
splitext
Further Documentation
Using filters to manipulate data - Managing file names and path names
I have the following scenario:
The inventory file is laid out as below:
[my_host]
host
server
[host:children]
hostm
hosts
[hostm]
host01
[hosts]
host02
host03
The group_vars file(s) is as below:
host.yml
user: user01
ansible_python_interpreter: /usr/bin/python3
The host_vars file(s) is as below:
host01.yml
id: ABC
nr: 00
host02.yml
id: DEF
nr: 20
host03.yml
id: GHI
nr: 02
Now using the above, I'm trying to run a playbook as described below:
custom.yml
- hosts: "{{ v_host | default([]) }}"
remote_user: root
tasks:
- name: Run the shell script
become: true
become_user: "{{ user }}"
become_method: su
become_exe: "su -"
ansible.builtin.shell: cleanipc {{ item.nr }} remove
with_items:
- "{{ v_host }}"
register: shell_result
no_log: false
changed_when: false
- name: print message
ansible.builtin.debug:
var: shell_result.stdout_lines
To run the playbook, I use the below command:
>ansible-playbook -i /path-to-inventory-file/file custom.yml -e 'v_host=host'
I'm trying to get the playbook to run the shell command on all child nodes of 'host', i.e., 'host01', 'host02' and 'host03', with the value of the variable 'nr' automatically substituted for each host.
I tried changing the lookup of the variable using hostvars as below:
ansible.builtin.shell: cleanipc {{ hostvars[item]['nr'] }} remove
But this didn't work either. Thank you for any help or guidance you can provide!
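For reference, a play targeted at a group already runs its tasks once per member host, so host_vars values such as nr resolve per host without any loop; a minimal sketch of the shell task under that assumption (not from the original thread):

- name: Run the shell script
  become: true
  become_user: "{{ user }}"
  become_method: su
  become_exe: "su -"
  ansible.builtin.shell: cleanipc {{ nr }} remove
  register: shell_result
  changed_when: false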
I have a playbook that controls a clustered application. The issue is that this playbook can be called/executed a few different ways: manually on the command line (multiple SREs working), as a scheduled task, or programmatically via a 3rd-party system.
The problem is that if the playbook executes simultaneously, it could cause issues for the application (due to the nature of the application).
Question:
Is there a way to prevent the same playbook from running concurrently on the same Ansible server?
Environment:
ansible [core 2.11.6]
config file = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/ansible.cfg
configured module search path = ['/etc/ansible/library/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /app/ansible/ansible_linux_playbooks/playbooks/scoutam_client_configs_playbook/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Nov 1 2021, 11:34:21) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.2
libyaml = True
You could test whether a lock file exists at the start of the playbook and stop the play with meta if it does; if not, you create the file to block another launch:
- name: lock_test
  hosts: all
  vars:
    lock_file_path: /tmp/ansible-playbook.lock
  pre_tasks:

    - name: Check if some file exists
      delegate_to: localhost
      stat:
        path: "{{ lock_file_path }}"
      register: lock_file

    - block:
        - name: "end play "
          debug:
            msg: "playbook already launched, ending play"
        - meta: end_play
      when: lock_file.stat.exists

    - name: create lock_file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: touch

  # ****************** tasks start
  tasks:
    - name: debug
      debug:
        msg: "something to do"
  # ****************** tasks end

  post_tasks:
    - name: delete the lock file {{ lock_file_path }}
      delegate_to: localhost
      file:
        path: "{{ lock_file_path }}"
        state: absent
But note this guards a single playbook per play: even if the first playbook stops, a second playbook will still be launched unless you do the same test in that next playbook.
There is a small lapse of time between the test and the creation of the file... so the probability of launching the same playbook twice within the same second is very low.
This solution will always be better than what you currently have.
Another solution is to lock an existing file and test whether the file is locked or not, but be careful with this option; see the lock and flock Unix commands.
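As a sketch of that flock idea (not from the original answer; the lock file path and playbook name are placeholders), the wrapper can be applied to the ansible-playbook invocation itself, so the kernel enforces mutual exclusion:

# -n makes flock fail immediately instead of waiting if another run already holds the lock
flock -n /tmp/myplaybook.lock ansible-playbook site.yml || echo "playbook already running"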
You can create a lockfile on the controller with the PID of the ansible-playbook process.
- delegate_to: localhost
  vars:
    lockfile: /tmp/thisisalockfile
    my_pid: "{{ lookup('pipe', 'cut -d\" \" -f4 /proc/$PPID/stat') }}"
    lock_pid: "{{ lookup('file', lockfile) }}"
  block:
    - name: Lock file
      copy:
        dest: "{{ lockfile }}"
        content: "{{ my_pid }}"
      when: lockfile is not exists
        or ('/proc/' ~ lock_pid) is not exists
        or 'ansible-playbook' not in lookup('file', '/proc/' ~ lock_pid ~ '/cmdline')

    - name: Make sure we won the lock
      assert:
        that: lock_pid == my_pid
        fail_msg: "{{ lockfile }} is locked by process {{ lock_pid }}"
Finding the current PID is the trickiest part; $PPID in the lookup is still the PID of a child, so we're grabbing the grandparent out of /proc/
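To release the lock when the play finishes, a matching cleanup might look like this (a sketch, assuming the same lockfile variable is in scope):

- name: Release the lock
  delegate_to: localhost
  file:
    path: "{{ lockfile }}"
    state: absent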
I wanted to post this here, but do not consider it a final/perfect answer; it does work for general purposes.
I put this 'playbook_lock.yml' at the root of my playbook and include it before any roles.
playbook_lock.yml:
# ./playbook_lock.yml
#
## NOTES:
## - Uses '/tmp/' on the Ansible server as the lock file directory
## - Format of lock file: e.g. 129416_20211103094638_playbook_common_01.lock
## -- Detailed explanation further down
## - Race condition:
## -- Assumption: playbooks will not run within 10sec of each other
## -- Assumption: lockfiles were not deleted within 10sec
## -- If running the playbook manually with manual input of Ansible Vault
## --- Enter creds within 10sec or the playbook will consider this run legacy
## - Built the logic to only use ansible.builtin modules so as not to add additional requirements
##
#
---
## Build a transaction ID from year/month/day/hour/min/sec
- name: debug_transactionID
  debug:
    msg: "{{ transactionID }}"
  vars:
    filter: "{{ ansible_date_time }}"
    transactionID: "{{ filter.year + filter.month + filter.day + filter.hour + filter.minute + filter.second }}"
  run_once: true
  delegate_to: localhost
  register: reg_transactionID

## Find current playbook PID
## Race condition => assumption playbooks will not run within 10sec of each other
## If the playbook has already been running >10sec, this return will be empty
- name: debug_current_playbook_pid
  ansible.builtin.shell:
    ## Search ps for any command matching the name of the playbook | remove the 'grep' result | return only the 1st one (if etime < 10sec)
    cmd: "ps -e -o 'pid,etimes,cmd' | grep {{ ansible_play_name }} | grep -v grep | awk 'NR==1{if($2<10) print $1}'"
  changed_when: false
  run_once: true
  delegate_to: localhost
  register: reg_current_playbook_pid

## Check for existing lock files
- name: find_existing_lock_files
  ansible.builtin.find:
    paths: /tmp
    patterns: "*_{{ ansible_play_name }}.lock"
    age: 1s
  run_once: true
  delegate_to: localhost
  register: reg_existing_lock_files

## Check and verify existing lock files
- name: block_discovered_existing_lock_files
  block:
    ## Build fact of all lock files discovered
    - name: fact_existing_lock_files
      ansible.builtin.set_fact:
        fact_existing_lock_files: "{{ fact_existing_lock_files | default([]) + [item.path] }}"
      loop: "{{ reg_existing_lock_files.files }}"
      run_once: true
      delegate_to: localhost
      when:
        - reg_existing_lock_files.matched > 0

    ## Build fact of all discovered lock files
    - name: fact_playbook_lock_file_dict
      ansible.builtin.set_fact:
        fact_playbook_lock_file_dict: "{{ fact_playbook_lock_file_dict | default([]) + [data] }}"
      vars:
        ## E.g. lockfile => 129416_20211103094638_playbook_common_01.lock
        var_pid: "{{ item.split('/')[2].split('_')[0] }}"                    ## Extract the 1st portion = PID
        var_transid: "{{ item.split('/')[2].split('_')[1] }}"                ## Extract the 2nd portion = TransactionID
        var_playbook: "{{ item.split('/')[2].split('_')[2:] | join('_') }}"  ## Extract the remainder and join back together = playbook file
        data:
          {pid: "{{ var_pid }}", transid: "{{ var_transid }}", playbook: "{{ var_playbook }}"}
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost

    ## Check each discovered lock file
    ## Verify the PID is still operational
    - name: shell_verify_pid_is_active
      ansible.builtin.shell:
        cmd: "ps -p {{ item.pid }} | awk 'NR==2{print $1}'"
      loop: "{{ fact_playbook_lock_file_dict }}"
      changed_when: false
      delegate_to: localhost
      register: reg_verify_pid_is_active

    ## Build fact of discovered previous playbook PIDs
    - name: fact_previous_playbook_pids
      ansible.builtin.set_fact:
        fact_previous_playbook_pids: "{{ fact_previous_playbook_pids | default([]) + [item.stdout | int] }}"
      loop: "{{ reg_verify_pid_is_active.results }}"
      run_once: true
      delegate_to: localhost

    ## Build fact: is the playbook already operational?
    ## Add PIDs together
    ## If SUM == 0 => no PIDs found (no previous playbooks running)
    ## If SUM != 0 => previous playbook is still operational
    - name: fact_previous_playbook_operational
      ansible.builtin.set_fact:
        fact_previous_playbook_operational: "{{ ((fact_previous_playbook_pids | sum) | int) != 0 }}"
  when:
    - reg_existing_lock_files.matched > 0
    - reg_current_playbook_pid.stdout is defined

## Continue with the playbook, as no previous instances are running
- name: block_continue_playbook_operations
  block:
    ## Clean up legacy lock files, as the PIDs are not operational
    - name: stat_cleanup_legacy_lock_files
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop: "{{ fact_existing_lock_files }}"
      run_once: true
      delegate_to: localhost
      when: fact_existing_lock_files | length >= 1

    ## Create lock file for the current playbook
    - name: stat_create_playbook_lock_file
      ansible.builtin.file:
        path: "/tmp/{{ var_playbook_lock_file }}"
        state: touch
        mode: '0644'
      vars:
        var_playbook_lock_file: "{{ reg_current_playbook_pid.stdout }}_{{ reg_transactionID.msg }}_{{ ansible_play_name }}.lock"
      run_once: true
      delegate_to: localhost
  when:
    - reg_current_playbook_pid.stdout is defined

## Fail & exit playbook, as a previous playbook is still operational
- name: block_playbook_already_operational
  block:
    - name: fail
      fail:
        msg: 'Playbook "{{ ansible_play_name }}" is already operational! This playbook will now exit without any modifications!!!'
      run_once: true
      delegate_to: localhost
  when: (fact_previous_playbook_operational is true) or
        (reg_current_playbook_pid.stdout is not defined)
...
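Since playbook_lock.yml is a plain task file, one way to run it before any roles is pre_tasks with import_tasks (a sketch; the hosts pattern and role name are placeholders):

- hosts: all
  pre_tasks:
    - name: Acquire playbook lock
      ansible.builtin.import_tasks: playbook_lock.yml
  roles:
    - my_role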
EDIT: I resolved this a different way, by getting a checksum of the 7z contents and checking
a) if the directory existed
b) if it did, whether its contents' checksum matched
I have an Ansible playbook which uses a 7zip shell command, but I want to check if the 7z has already been inflated, so I have the following:
- name: Get zip listing
  shell: '7z l {{ sz_file }} | tail -n +21 | head -n -2 | cut -c 54-'
  register: sz_contents

- name: Compare zip listing to file contents
  stat:
    path: '{{ extract_dir }}/{{ item }}'
  register: result
  with_items: '{{ sz_contents.stdout_lines }}'

- name: Inflate 7z file if needed
  shell: 7z x {{ sz_file }}
  when: ???
I want the following to happen:
Stop the Compare task the first time results.stat.exists == False (the 7z has many files and continuing the comparison after that is pointless)
Register if the file needs inflating and do so as needed
It sounds like you want to make the extract task conditional on
whether the compare task succeeds or fails, and you want the compare
task to fail as soon as it finds a file that doesn't exist.
We can get most of the way there.
Normally, the stat module doesn't trigger a failure when you point
it at a path that doesn't exist. For example, the following playbook:
- hosts: localhost
  gather_facts: false
  tasks:
    - stat:
        path: /does-not-exist
      register: result

    - debug:
        var: result
Yields:
TASK [stat] ***********************************************************************************
ok: [localhost]
TASK [debug] **********************************************************************************
ok: [localhost] => {
    "result": {
        "changed": false,
        "failed": false,
        "stat": {
            "exists": false
        }
    }
}
Ansible provides us with the failed_when directive to control when a
task fails. This means we can rewrite your compare task to fail on a
missing file like this:
- name: Compare zip listing to file contents
  stat:
    path: '{{ extract_dir }}/{{ item }}'
  register: result
  failed_when: not result.stat.exists
  ignore_errors: true
  with_items: '{{ sz_contents.stdout_lines }}'
The failed_when directive tells Ansible to consider the task
"failed" if the file passed to stat doesn't exist, and the
ignore_errors directive tells Ansible to continue executing the
playbook rather than aborting when the task fails.
We can make the extract task conditional on this one with a simple
when directive:
- name: Inflate 7z file if needed
  shell: 7z x {{ sz_file }}
  when: result is failed
The only problem with this solution is that Ansible won't exit a loop
when an individual item causes a failure, so it's going to check
through all of sz_contents.stdout_lines regardless.
Update
I was discussing this issue on IRC, and #bcoca pointed out that when
is evaluated before register, so we can actually get the behavior
you want by writing the compare task like this:
- name: Compare zip listing to file contents
  stat:
    path: '{{ extract_dir }}/{{ item }}'
  register: result
  when: result is not defined or result is success
  failed_when: not result.stat.exists
  ignore_errors: true
  with_items: '{{ sz_contents.stdout_lines }}'
The when statement will cause all loop iterations after the first
failure to be skipped.
I am running several shell commands in an ansible playbook that may or may not modify a configuration file.
One of the items in the playbook is to restart the service. But I only want to do this if a variable is set.
I am planning on registering a result in each of the shell tasks, but I do not want to overwrite the variable if it is already set to 'restart_needed' or something like that.
The idea is the restart should be the last thing to go, and if any of the commands set the restart variable, it will go, and if none of them did, the service will not be restarted. Here is an example of what I have so far...
tasks:
  - name: Make a backup copy of file
    copy: src={{ file_path }} dest={{ file_path }}.{{ date }} remote_src=true owner=root group=root mode=644 backup=yes

  - name: get list of items
    shell: |
      grep <file>
    register: result

  - name: output will be 'restart_needed'
    shell: |
      NUM=14"s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed" ; fi
    with_items: "{{ result.stdout_lines }}"
    register: output

  - name: output will be 'nothing_changed'
    shell: |
      NUM="s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed" ; fi
    with_items: "{{ result.stdout_lines }}"
    register: output

  - name: Restart service
    service: name=myservice enabled=yes state=restarted
In the above example, the variable output will be set to restart_needed after the first task but then will be changed to 'nothing_changed' in the second task.
I want to keep the variable at 'restart_needed' if it is already there and then kick off the restart service task only if the variable is set to restart_needed.
Thanks!
For triggering restarts, you have two options: the when statement or handlers.
When statement example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result
    # grep exits 1 when the string is absent; treat only rc > 1 as a real error
    failed_when: result.rc > 1

  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
    when: result.rc == 0
Handlers example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result
    # grep exits 1 when the string is absent; treat only rc > 1 as a real error
    failed_when: result.rc > 1
    changed_when: "result.rc == 0"
    notify: restart service

handlers:
  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
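One caveat with the handlers variant: handlers normally run at the end of the play. If later tasks depend on the restarted service, the pending notification can be flushed early (a minimal sketch):

# Place this task after the notifying task; it forces any pending handlers to run immediately
- meta: flush_handlers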