I'm creating a playbook to verify that all NFS mounts present in fstab are mounted and that they are RW (read/write).
Here is the first block of my playbook, which is actually working.
It compares the mount output with the current fstab.
tasks:
  - name: Get mounted devices
    shell: /usr/bin/mount | /usr/bin/sed -n '/nfs / {/#/d;p}' | /bin/awk '{print $3}'
    register: current_mount

  - set_fact:
      block_devices: "{{ current_mount.stdout_lines }}"

  - shell: /usr/bin/sed -n '/nfs/ {/#/d;p}' /root/testtab | /usr/bin/awk '{print $2}'
    register: fstab

  - set_fact:
      fstab_devices: "{{ fstab.stdout_lines }}"

  - name: Device not mounted
    fail:
      msg: "ONE OR MORE NFS ARE NOT MOUNTED, PLEASE VERIFY"
    when: block_devices != fstab_devices
Now, to verify these are read/write ready, I need to actually create a file inside the paths stored in {{ current_mount }}:
1.1) If it succeeds, we move forward and delete the newly created files.
1.2) If there is a failure when creating one or all of the files, we need to fail the playbook with a message saying which of them is not read/write (that is, if we can't touch [create] a file inside the FS, then it is not RW).
I tried the below, but it seems that it does not work like that.
- file:
    path: "{{ current_mount }}"/ansibletestfile
    state: touch

- file:
    path: "{{ current_mount }}"/ansibletestfile
    state: absent
This is a sample of what's inside {{ current_mount }}:
/testnfs
/testnfs2
/testnfs3
After doing some research, it seems that to create more than one file within the different filesystems listed in a variable I would have to use a loop (with_items), but I had no luck.
Please let me know if there is a way to perform this task.
To achieve: "create a file inside the paths stored on {{ current_mount }}", you need to loop over current_mount.stdout_lines (or block_devices), like:

- file:
    path: "{{ item }}/ansibletestfile"
    state: "touch"
  loop: "{{ current_mount.stdout_lines }}"
For 1.1 and 1.2, we can use blocks. So, overall, the code can look like:
- block:
    # Try to touch the file in each mount point
    - file:
        path: "{{ item }}/ansibletestfile"
        state: "touch"
      loop: "{{ current_mount.stdout_lines }}"
  rescue:
    # Runs if a file couldn't be touched
    - fail:
        msg: "Could not create/touch file"
  always:
    # Remove the test file regardless of the outcome
    - file:
        path: "{{ item }}/ansibletestfile"
        state: "absent"
      loop: "{{ current_mount.stdout_lines }}"
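To also report which mounts are not writable (point 1.2), one option is to register the loop results and inspect them afterwards. A minimal sketch, assuming Ansible 2.5+ so the failed test can be applied to loop results (touch_results is a name made up here):

- name: Try to touch a test file in each mount point
  file:
    path: "{{ item }}/ansibletestfile"
    state: touch
  loop: "{{ current_mount.stdout_lines }}"
  register: touch_results   # hypothetical variable name
  ignore_errors: yes

- name: Fail and name the mounts that are not read/write
  fail:
    msg: "NOT RW: {{ touch_results.results | select('failed') | map(attribute='item') | join(', ') }}"
  when: touch_results.results | select('failed') | list | length > 0

The always-style cleanup from the block above still applies; this variant just trades the rescue for an explicit failure message listing the offending mount points.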
How to register a variable when using loop with stat module?
I am working on a project where I wish to run comparisons against the known values (checksums) of a collection of files, and then take action if a change is detected (e.g. notify someone; I have not written this part yet).
If this were purely a CLI matter, I would have this sorted with some easy SH scripting.
That said, I have Ansible (2.7.5) available within my environment and am keen to use it!
In reading the vendor documents, using the stat module felt like the "Ansible way" to go on this one.
Currently just *NIX servers (Linux, Solaris, and possibly AIX) are in scope, but eventually this might also apply to Windows, where I expect I would use win_stat instead with suitable parameters.
At present I plan to dump the results of the scan to a file (e.g. CSV), which I would then iterate / match against for the purposes of a comparison (to detect if a file has somehow been changed).
This is another part I have not written yet (the read-a-file-and-compare portion), but I expect to hit that once I get this present matter sorted.
My current challenge, is that I can get "one-off" stat checks to work fine.
However, I expect to be targeting a whole directory worth of files, and thus want to presumably:
- "discover" the contents of the target directory, and retain this in memory
- iterate (loop) through the list in memory
- perform a stat check upon each file
- retain the checksum of each file
- build some sort of dict or list?
- write the collective results (or one line at a time) out to a log file of sorts (CSV.log: file_path,file_checksum)
I would welcome your feedback on what I might be missing (aside from some hair at this point)?
I have tried a few different approaches to looping within the playbook (loop, with_items, etc.); however, the challenge remains the same.
The stat loop runs fine, but the trailing register statement fails to commit the output to memory (resulting in a variety of "undefined variable" errors).
Am I somehow missing something in my loop definition?
Looking at the vendor docs on "Using register with a loop", it would appear I am doing this correctly (in my view anyway).
Simple "target files" I am checking against within a directory.
/tmp/app/targets/file1.txt
Some text.
/tmp/app/targets/file2.cfg
cluster=0
cluster_id=app_pool_00
/tmp/app/targets/file3.sh
#!/bin/sh
printf "Hello world\n"
exit 0
My prototyping playbook as it exists currently.
---
- name: check file integrity
  hosts: localhost
  become: no
  vars:
    TARGET: /tmp/app/targets
    LOG: /tmp/app/archive/scan_results.log
  tasks:
    - name: discover target files
      find:
        paths: "{{ TARGET }}"
        recurse: yes
        file_type: file
      register: TARGET_FILES

    - name: scan target
      stat:
        path: "{{ item.path }}"
        get_checksum: yes
      loop: "{{ TARGET_FILES.files }}"
      register: TARGET_RESULTS

    - name: DEBUG
      debug:
        var: "{{ TARGET_RESULTS }}"

    - name: write findings to log
      copy:
        content: "{{ TARGET_RESULTS.stat.path }},{{ TARGET_RESULTS.stat.checksum }}"
        dest: "{{ LOG }}"
...
My "one-off" playbook that worked.
---
- name: check file integrity
  hosts: localhost
  become: no
  vars:
    TARGET: /tmp/app/targets/file1.txt
    LOG: /tmp/app/archive/scan_results.log
  tasks:
    - name: scan target
      stat:
        path: '{{ TARGET }}'
        checksum_algorithm: sha1
        follow: no
        get_attributes: yes
        get_checksum: yes
        get_md5: no
        get_mime: yes
      register: result

    - name: write findings to log
      copy:
        content: "{{ result.stat.path }},{{ result.stat.checksum }}"
        dest: "{{ LOG }}"
...
The output was not exciting, but useful.
I would expect to build this up into multi-line output (one line per file stat-checked) if I could figure out how to loop / register the loop output correctly.
/tmp/app/archive/scan_results.log
/tmp/app/targets/file1.txt,8d06cea05d408d70c59b1dbc5df3bda374d869a4
You can use the set_fact module to register a variable the way you want.
I don't use it in my test below, as it may be unnecessary in your case:
---
- name: check file integrity
  hosts: localhost
  vars:
    TARGET: /tmp/app/targets
    LOG: /tmp/app/archive/scan_results.log
  tasks:
    - name: 'discover target files'
      find:
        paths: "{{ TARGET }}"
        recurse: yes
        file_type: file
      register: TARGET_FILES

    - debug:
        var: TARGET_FILES

    - name: 'scan target'
      stat:
        path: "{{ item.path }}"
        get_checksum: yes
      loop: "{{ TARGET_FILES.files }}"
      register: TARGET_RESULTS

    - debug:
        var: TARGET_RESULTS

    - name: 'write findings to log'
      lineinfile:
        line: "{{ item.stat.path }},{{ item.stat.checksum }}"
        path: "{{ LOG }}"
        create: yes
      loop: '{{ TARGET_RESULTS.results }}'
Result:
# cat /tmp/app/archive/scan_results.log
/tmp/app/targets/file3.sh,bb4b0ffe4b5d26551785b250c38592b6f482cab4
/tmp/app/targets/file1.txt,8d06cea05d408d70c59b1dbc5df3bda374d869a4
/tmp/app/targets/file2.cfg,fb23292e06f91a0e0345f819fdee34fac8a53e59
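If you do want the set_fact approach mentioned at the top, a minimal sketch that accumulates the pairs into a single variable before writing (scan_lines is a name made up here):

- name: build csv lines from the registered loop results
  set_fact:
    scan_lines: "{{ (scan_lines | default([])) + [item.stat.path ~ ',' ~ item.stat.checksum] }}"
  loop: "{{ TARGET_RESULTS.results }}"

- name: write all findings in one shot
  copy:
    content: "{{ scan_lines | join('\n') }}\n"
    dest: "{{ LOG }}"

Note that copy replaces the whole log on each run, whereas the lineinfile variant above appends missing lines to an existing file.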
Best Regards
I'm creating a deployment playbook for our web services. Each web service is in its own directory, such as:
/webapps/service-one/
/webapps/service-two/
/webapps/service-three/
I want to check whether the service directory exists, and if so, run a shell script that stops the service gracefully. Currently, I am able to complete this step by using ignore_errors: yes.
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{item}}"
  ignore_errors: yes
While this works, the output is very messy if one of the directories doesn't exist or a service is being deployed for the first time. I effectively want to do something like one of these:
This:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{item}}"
  when: shell: [ -d /webapps/{{item}} ]
or this:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{item}}"
  stat:
    path: /webapps/{{item}}
  register: path
  when: path.stat.exists == True
I'd collect facts first and then do only necessary things.
- name: Check existing services
  stat:
    path: "/webapps/{{ item }}"
  with_items: "{{ services_to_stop }}"
  register: services_stat

- name: Stop existing services
  with_items: "{{ services_stat.results | selectattr('stat.exists') | map(attribute='item') | list }}"
  shell: "/webapps/scripts/stopService.sh {{ item }}"
Also note that bare variables in with_items don't work since Ansible 2.2, so you should template them.
This will let you get a list of existing directory names into the list variable dir_names (use recurse: no to read only the first level under webapps):
---
- hosts: localhost
  connection: local
  vars:
    dir_names: []
  tasks:
    - find:
        paths: "/webapps"
        file_type: directory
        recurse: no
      register: tmp_dirs

    - set_fact: dir_names="{{ dir_names + [item['path']] }}"
      no_log: True
      with_items:
        - "{{ tmp_dirs['files'] }}"

    - debug: var=dir_names
You can then use dir_names in your "Stop services" task via with_items. It looks like you're intending to use only the name of the directory under /webapps, so you probably want to use the | basename Jinja2 filter to get that, something like this:
- name: Stop services
  with_items: "{{ dir_names }}"
  shell: "/webapps/scripts/stopService.sh {{ item | basename }}"
I am getting the below error:
ERROR: expecting dict; got: shell:"ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt" register:the_file
I want to check whether the folder exists:
if it exists, I need to specify the path /mule/acc;
if the path does not exist, I need to point to /mule/bcc.
The following is the Ansible playbook:
---
# get name of the .txt file
- stat: path=/mule/ansiple/mule-enterprise-standalone-3.4.1
  register:the_file
  when: the_file.stat.exists == True
- shell:"ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt"
  register:the_file
- debug: msg="{{ the_file }}"
- set_fact: app_folder="{{ the_file.stdout | replace('-anchor.txt','') }}"
- debug: msg="{{ app_folder }}"
- debug: msg="{{ the_file.stdout }}"
# delete the .txt file
- name: Delete the anchor.txt file
  file: path="{{ the_file.stdout }}" state=absent
# wait until the app folder disappears
- name: Wait for the folder to disappear
  wait_for: path="{{ app_folder }}" state=absent
# copy the zip file
- name: Copy the zip file
  copy: src="../p" dest="/c"
You almost have the YAML right, but there is one YAML error in your file, and that is on the line:
register:the_file
because the preceding line starts the first element of the top-level sequence as a mapping (by defining a key-value pair through the colon ':' followed by a space), and that cannot be followed by a scalar (or a sequence). If that file was your input, you would have gotten a syntax error from the YAML parser, with part of the message looking like:
register:the_file
^
could not find expected ':'
This:
- shell:"ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt"
register:the_file
is perfectly fine YAML. It defines the second element of the top-level sequence to be a scalar string:
shell:"ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt" register:the_file
(you can break scalar strings over multiple lines in YAML).
Now Ansible doesn't like that, and expects that element, and probably all top-level sequence elements, to be mappings. And for a mapping you need a key-value pair, which (as with register:the_file) is separated/indicated by a colon-space pair of characters in YAML. So Ansible (not YAML) probably wants:

shell: ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt
register: the_file

Please note that the quotes around the scalar value ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt are unnecessary in YAML.
There might be other things Ansible expects from the structure of the file, but I would start with the above change and see if Ansible throws further errors on:

# get name of the .txt file
- stat: path=/mule/ansiple/mule-enterprise-standalone-3.4.1
  register: the_file
  when: the_file.stat.exists == True
- shell: ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt
  register: the_file
- debug: msg="{{ the_file }}"
- set_fact: app_folder="{{ the_file.stdout | replace('-anchor.txt','') }}"
- debug: msg="{{ app_folder }}"
- debug: msg="{{ the_file.stdout }}"
# delete the .txt file
- name: Delete the anchor.txt file
  file: path="{{ the_file.stdout }}" state=absent
# wait until the app folder disappears
- name: Wait for the folder to disappear
  wait_for: path="{{ app_folder }}" state=absent
# copy the zip file
- name: Copy the zip file
  copy: src="../analytic-core-services-mule-3.0.0-SNAPSHOT.zip" dest="/mule/ansiple/mule-enterprise-standalone-3.4.1"
You need to learn YAML syntax. After every colon you are required to add a whitespace. The message comes from the YAML parser, telling you it expected a dictionary:
key1: value1
key2: value2
Instead it found no key-value-pair:
key1:value1
Specifically it complains about this line:
- shell:"ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt"
which should be
- shell: "ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt"
But you have the same issue in two more lines which look like:
register:the_file
and should be:
register: the_file
If in doubt whether an error comes from your Ansible tasks or simply from a YAML parsing error, paste your YAML definition into any online YAML parser.
So much for the format. Now to the logical problems:
- stat: path=/mule/ansiple/mule-enterprise-standalone-3.4.1
  register: the_file
  when: the_file.stat.exists == True
Unless you have the_file already registered from a previous task you didn't show, this can't work. when is the condition that decides whether the task should run. You cannot execute a task depending on its own outcome: it first has to run before the result is available. At the time the condition is evaluated, the_file simply won't exist, and this should result in an error complaining that a None object does not have a key stat, or something similar.
Then in the next task you register the result again under the same name:
- shell: "ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt"
  register: the_file
This simply overrides the previously registered result. Maybe you meant to put the condition from the first task on the second, but even that would not work: a result is still registered for skipped tasks, simply stating the task was skipped. You either need to store the results in unique vars or check all possible file locations in a single task.
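For the specific case in the question (use /mule/acc if the folder exists, otherwise /mule/bcc), a minimal sketch of this check-first-then-act pattern (dir_check and mule_path are made-up names):

- name: Check whether the folder exists
  stat:
    path: /mule/ansiple/mule-enterprise-standalone-3.4.1
  register: dir_check

- name: Pick the path based on the result
  set_fact:
    mule_path: "{{ '/mule/acc' if dir_check.stat.exists else '/mule/bcc' }}"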
Cleaned up, your playbook would look like this:
---
# get name of the .txt file
- stat:
    path: /mule/ansiple/mule-enterprise-standalone-3.4.1
  register: the_file
  when: the_file.stat.exists

- shell: ls /mule/ansiple/mule-enterprise-standalone-3.4.1/*.txt
  register: the_file

- debug:
    msg: "{{ the_file }}"

- set_fact:
    app_folder: "{{ the_file.stdout | replace('-anchor.txt','') }}"

- debug:
    msg: "{{ app_folder }}"

- debug:
    msg: "{{ the_file.stdout }}"

- name: Delete the anchor.txt file
  file:
    path: "{{ the_file.stdout }}"
    state: absent

- name: Wait for the folder to disappear
  wait_for:
    path: "{{ app_folder }}"
    state: absent

- name: Copy the zip file
  copy:
    src: ../analytic-core-services-mule-3.0.0-SNAPSHOT.zip
    dest: /mule/ansiple/mule-enterprise-standalone-3.4.1
...
I'm using the ec2 module with ansible-playbook, and I want to set a variable to the contents of a file. Here's how I'm currently doing it:
- a var with the filename
- a shell task to cat the file
- use the result of the cat to pass to the ec2 module
Example contents of my playbook:
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"
tasks:
  - name: user_data_contents
    shell: cat {{ user_data_file }}
    register: user_data_action
  - name: launch ec2-instance
    local_action:
      ...
      user_data: "{{ user_data_action.stdout }}"
I assume there's a much easier way to do this, but I couldn't find it while searching Ansible docs.
You can use lookups in Ansible in order to get the contents of a file, e.g.
user_data: "{{ lookup('file', user_data_file) }}"
Caveat: This lookup will work with local files, not remote files.
Here's a complete example from the docs:
- hosts: all
  vars:
    contents: "{{ lookup('file', '/etc/foo.txt') }}"
  tasks:
    - debug: msg="the value of foo.txt is {{ contents }}"
You can use the slurp module to fetch a file from the remote host (thanks to @mlissner for suggesting it):
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"
tasks:
  - name: Load data
    slurp:
      src: "{{ user_data_file }}"
    register: slurped_user_data

  # You can skip this if you want to use the right-hand side directly...
  - name: Decode data and store as fact
    set_fact:
      user_data: "{{ slurped_user_data.content | b64decode }}"
You can use the fetch module to copy files from remote hosts to the local machine, and the file lookup to read the content of the fetched files.
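A sketch of that combination; fetch and the file lookup are real, but the dest path and the assumption that user_data_file holds an absolute path on the remote host are mine:

- name: Fetch the remote file to the control machine
  fetch:
    src: "{{ user_data_file }}"
    dest: /tmp/fetched
  # by default the file lands in /tmp/fetched/<inventory_hostname>/<remote path>

- name: Read the fetched copy on the control machine
  set_fact:
    user_data: "{{ lookup('file', '/tmp/fetched/' + inventory_hostname + user_data_file) }}"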
lookup only works on localhost. If you want to retrieve variables from a variables file you made remotely, use include_vars: {{ varfile }}. The contents of {{ varfile }} should be a dictionary of the form {"key":"value"}; you will find Ansible gives you trouble if you include a space after the colon.
I want to recursively copy over a directory and render all .j2 files in there as templates. For this I am currently using the following lines:
- template: >
    src=/src/conf.d/{{ item }}
    dest=/dest/conf.d/{{ item|replace('.j2','') }}
  with_lines: find /src/conf.d/ -type f -printf "%P\n"
Now I'm looking for a way to remove unmanaged files from this directory. For example, if I remove a file/template from /src/conf.d/, I want Ansible to remove it from /dest/conf.d/ as well.
Is there some way to do this? I tried fiddling around with rsync --delete, but there I ran into a problem with the templates, which get their .j2 suffix removed.
I'd do it like this, assuming a variable defined as 'managed_files' up top that is a list.
- shell: ls -1 /some/dir
  register: contents

- file: path=/some/dir/{{ item }} state=absent
  with_items: contents.stdout_lines
  when: item not in managed_files
We do this with our nginx files. We want them to be in a special order and come from templates, but also to remove unmanaged ones; this works:
# loop through the nginx sites array and create a conf for each file in order
# files will be named 01_file.conf, 02_file.conf, etc.
- name: nginx_sites conf
  template: >
    src=templates/nginx/{{ item.1.template }}
    dest={{ nginx_conf_dir }}/{{ '%02d' % item.0 }}_{{ item.1.conf_name|default(item.1.template) }}
    owner={{ user }}
    group={{ group }}
    mode=0660
  with_indexed_items: nginx_sites
  notify:
    - restart nginx
  register: nginx_sites_confs

# flatten and map the results into a simple list
# unchanged files have the attribute dest, changed files have the attribute path
- set_fact:
    nginx_confs: "{{ nginx_sites_confs.results|selectattr('dest', 'string')|map(attribute='dest')|list + nginx_sites_confs.results|selectattr('path', 'string')|map(attribute='path')|select|list }}"
  when: nginx_sites

# get contents of conf dir
- shell: ls -1 {{ nginx_conf_dir }}/*.conf
  register: contents
  when: nginx_sites

# so we can delete the ones we don't manage
- name: empty old confs
  file: path="{{ item }}" state=absent
  with_items: contents.stdout_lines
  when: nginx_sites and item not in nginx_confs
The trick (as you can see) is that changed and unchanged files carry different attributes in the registered template results. You then turn them into a list of the files you manage, get a listing of the directory, and remove the ones not in that list.
This could be done with less code if you already have a list of files (see the sketch below), but in this case I'm creating an indexed list, so I need to build the managed list with map as well.
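For that simpler case, where you already have a flat list of managed paths (here a hypothetical nginx_conf_files variable), the cleanup collapses to a sketch like:

- shell: ls -1 {{ nginx_conf_dir }}/*.conf
  register: contents

- name: remove confs we don't manage
  file: path="{{ item }}" state=absent
  with_items: contents.stdout_lines
  when: item not in nginx_conf_files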
I want to share my experience with this case.
Since 2.2, Ansible has the with_filetree loop, which provides a simple way to upload dirs, links, static files and even (!) templates. It's the best way I know to keep my config dir synchronized.
- name: etc config - Create directories
  file:
    path: "{{ nginx_conf_dir }}/{{ item.path }}"
    state: directory
    mode: 0755
  with_filetree: etc/nginx
  when: item.state == 'directory'

- name: etc config - Create configuration files from templates
  template:
    src: "{{ item.src }}"
    dest: "{{ nginx_conf_dir }}/{{ item.path | regex_replace('\\.j2$', '') }}"
    mode: 0644
  with_filetree: etc/nginx
  when:
    - item.state == "file"
    - item.path | match('.+\.j2$') | bool

- name: etc config - Create static configuration files
  copy:
    src: "{{ item.src }}"
    dest: "{{ nginx_conf_dir }}/{{ item.path }}"
    mode: 0644
  with_filetree: etc/nginx
  when:
    - item.state == "file"
    - not (item.path | match('.+\.j2$') | bool)

- name: etc config - Recreate symlinks
  file:
    src: "{{ item.src }}"
    dest: "{{ nginx_conf_dir }}/{{ item.path }}"
    state: link
    force: yes
    mode: "{{ item.mode }}"
  with_filetree: etc/nginx
  when: item.state == "link"
Next we may want to delete unused files from the config dir. It's simple:
we gather the list of uploaded files and the files that exist on the remote server, then remove the difference.
But we may want to keep some unmanaged files in the config dir, so
I've used the -prune functionality of find to avoid clearing folders that hold unmanaged files.
- name: etc config - Gather managed files
  set_fact:
    __managed_file_path: "{{ nginx_conf_dir }}/{{ item.path | regex_replace('\\.j2$', '') }}"
  with_filetree: etc/nginx
  register: __managed_files

- name: etc config - Convert managed files to a list
  set_fact: managed_files="{{ __managed_files.results | map(attribute='ansible_facts.__managed_file_path') | list }}"

- name: etc config - Gather existing files (excluding .ansible_keep-content dirs)
  shell: find /etc/nginx -mindepth 1 -type d -exec test -e '{}/.ansible_keep-content' \; -prune -o -print
  register: exist_files
  changed_when: False

- name: etc config - Delete unmanaged files
  file: path="{{ item }}" state=absent
  with_items: "{{ exist_files.stdout_lines }}"
  when:
    - item not in managed_files
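To protect a directory of unmanaged files under this scheme, you only need to drop the marker file there; a sketch with a hypothetical conf.d/manual directory:

- name: keep unmanaged content in conf.d/manual
  file:
    path: /etc/nginx/conf.d/manual/.ansible_keep-content
    state: touch

The find command above then prunes that directory, so nothing inside it is ever listed for deletion.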
Here's something I came up with:
- template: src=/source/directory/{{ item }}.j2 dest=/target/directory/{{ item }}
  register: template_results
  with_items:
    - a_list.txt
    - of_all.txt
    - templates.txt

- set_fact:
    managed_files: "{{ template_results.results|selectattr('invocation', 'defined')|map(attribute='invocation.module_args.dest')|list }}"

- debug:
    var: managed_files
    verbosity: 0

- find:
    paths: "/target/directory/"
    patterns: "*.txt"
  register: all_files

- set_fact:
    files_to_delete: "{{ all_files.files|map(attribute='path')|difference(managed_files) }}"

- debug:
    var: all_files
    verbosity: 0

- debug:
    var: files_to_delete
    verbosity: 0

- file: path={{ item }} state=absent
  with_items: "{{ files_to_delete }}"
This generates the templates (however you want) and records the results in template_results.
The results are then mangled to get a simple list of the "dest" of each template. Skipped templates (due to a when condition, not shown) have no "invocation" attribute, so they're filtered out.
"find" is then used to get a list of all files that should be absent unless specifically written.
This is in turn mangled to get a raw list of the files present, from which the "supposed to be there" files are removed.
The remaining "files_to_delete" are then removed.
Pros: You avoid multiple 'skipped' entries showing up during deletes.
Cons: You'll need to concatenate each template task's results if you want to do multiple template tasks before doing the find/delete.
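For that multi-task case, concatenating the registered results before the selectattr mangling should work; a sketch with hypothetical first_results and second_results variables registered by two separate template tasks:

- set_fact:
    managed_files: "{{ (first_results.results + second_results.results) | selectattr('invocation', 'defined') | map(attribute='invocation.module_args.dest') | list }}"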
There might be a couple of ways to handle this, but would it be possible to entirely empty the target directory in a task before the template step? Or maybe drop the templated files into a temporary directory and then delete+rename in a subsequent step?
Usually I do not remove files; instead I add an -unmanaged suffix to their names.
Sample Ansible tasks:
- name: Get sources.list.d files
  shell: grep -r --include=\*.list -L '^# Ansible' /etc/apt/sources.list.d || true
  register: grep_unmanaged
  changed_when: grep_unmanaged.stdout_lines

- name: Add '-unmanaged' suffix
  shell: rename 's/$/-unmanaged/' {{ item }}
  with_items: grep_unmanaged.stdout_lines
EXPLANATION
The grep command uses:
- -r to do a recursive search
- --include=\*.list to only take files with the .list extension during the recursive search
- -L '^# Ansible' to display the names of files that do not have a line starting with '# Ansible'
- || true to ignore errors. Ansible's ignore_errors also works, but before ignoring the error Ansible shows it in red during the ansible-playbook run, which is undesired (at least for me).
Then I register the output of the grep command in a variable. When grep displays any output, I mark the task as changed (the changed_when line is responsible for this).
In the next task I iterate over the grep output (i.e. the file names returned by grep) and run the rename command to add the suffix to each file.
That's all. The next time you run the playbook, the first task should be green and the second skipped.
I am using Ansible version 2.9.20
---
# tasks file for delete_unmanaged_files
- name: list files in dest
  shell: ls -1 dest/conf.d
  register: files_in_dest

- name: list files in src
  shell: ls -1 src/conf.d
  register: files_in_src

- name: Managed files - dest
  command: echo {{ item|replace('.j2','') }}
  with_items: "{{ files_in_dest.stdout_lines }}"
  register: managed_files_dest

- name: Managed files - src
  command: echo {{ item|replace('.j2','') }}
  with_items: "{{ files_in_src.stdout_lines }}"
  register: managed_files_src

- name: Convert src managed files to list
  set_fact: managed_files_src_list="{{ managed_files_src.results | map(attribute='stdout') | list }}"

- name: Delete unmanaged files in dest
  file: path=dest/conf.d/{{ item.stdout }} state=absent
  with_items: "{{ managed_files_dest.results }}"
  when: item.stdout not in managed_files_src_list
Depending on your use case, the above solution might help you. Here, I have created six tasks.
Explanation:
Tasks 1 & 2 store the file names in the variables files_in_dest and files_in_src.
Tasks 3 & 4 take the output from tasks 1 & 2 and strip the .j2 extension (required for this use case), storing the output in the managed_files_dest and managed_files_src variables.
Task 5 converts the output of managed_files_src to a list, so that all files currently present in the src directory are held in a single list we can use in the next task to identify the unmanaged files in dest.
Task 6 deletes the unmanaged files in dest.
Apparently this isn't possible with Ansible at the moment. I had a conversation with mdehaan on IRC, and it boils down to Ansible not having a directed acyclic graph for resources, making things like this very hard.
Asking mdehaan for an example of, e.g., authoritatively managing a sudoers.d directory, he came up with these things:
14:17 < mdehaan> Robe: http://pastebin.com/yrdCZB0y
14:19 < Robe> mdehaan: HM
14:19 < Robe> mdehaan: that actually looks relatively sane
14:19 < mdehaan> thanks :)
14:19 < Robe> the problem I'm seeing is that I'd have to gather the managed files myself
14:19 < mdehaan> you would yes
14:19 < mdehaan> ALMOST
14:20 < mdehaan> you could do a fileglob and ... well, it would be a little gross
[..]
14:32 < mdehaan> eh, theoretical syntax, nm
14:33 < mdehaan> I could do it by writing a lookup plugin that filtered a list
14:34 < mdehaan> http://pastebin.com/rjF7QR24
14:34 < mdehaan> if that plugin existed, for instance, and iterated across lists in A that were also in B
Building on @user2645850's answer, I came up with this improved version; in this case it manages the vhost configuration of Apache. It doesn't use shell and thus also works in --check mode.
# Remove unmanaged vhost configs left over from renaming or removing apps.
# All managed configs need to be added to "managed_sites" in advance.
- find:
    paths: /etc/apache2/sites-available
    patterns: '*.conf'
  register: sites_available_contents

- name: Remove unmanaged vhost config files
  file:
    path: /etc/apache2/sites-available/{{ item }}
    state: absent
  with_items: "{{ sites_available_contents.files | map(attribute='path') | map('basename') | list }}"
  when: item not in managed_sites

# links may differ from files, therefore we need our own find task for them
- find:
    paths: /etc/apache2/sites-enabled
    file_type: any
  register: sites_enabled_contents

- name: Remove unmanaged vhost config links
  file:
    path: /etc/apache2/sites-enabled/{{ item }}
    state: absent
  with_items: "{{ sites_enabled_contents.files | map(attribute='path') | map('basename') | list }}"
  when: item not in managed_sites
Examples on how to build managed_sites:

# Add a single conf and handle managed_sites being unset
- set_fact:
    managed_sites: "{{ (managed_sites | default([])) + [ '000-default.conf' ] }}"

# Add a list of vhosts, appending ".conf" to each entry of vhosts
- set_fact:
    managed_sites: "{{ managed_sites + ( vhosts | map(attribute='app') | product(['.conf']) | map('join') | list ) }}"