ansible - delete unmanaged files from directory?
I want to recursively copy over a directory and render all .j2 files in there as templates. For this I am currently using the following lines:
- template: >
    src=/src/conf.d/{{ item }}
    dest=/dest/conf.d/{{ item|replace('.j2','') }}
  with_lines: find /src/conf.d/ -type f -printf "%P\n"
Now I'm looking for a way to remove unmanaged files from this directory. For example, if I remove a file/template from /src/conf.d/, I want Ansible to remove it from /dest/conf.d/ as well.
Is there some way to do this? I tried fiddling around with rsync --delete, but there I ran into a problem with the templates, which have their .j2 suffix stripped on the destination side.
I'd do it like this, assuming a variable defined as 'managed_files' up top that is a list.
- shell: ls -1 /some/dir
  register: contents

- file: path=/some/dir/{{ item }} state=absent
  with_items: contents.stdout_lines
  when: item not in managed_files
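A minimal sketch of how that managed_files list might be defined (the file names here are placeholders I've made up, not anything required by Ansible):

vars:
  managed_files:
    - app.conf
    - logging.conf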
We do this with our nginx files: we want them applied in a specific order, generated from templates, and we want unmanaged ones removed. This works:
# loop through the nginx_sites array and create a conf for each entry, in order
# files will be named 01_file.conf, 02_file.conf, etc.
- name: nginx_sites conf
  template: >
    src=templates/nginx/{{ item.1.template }}
    dest={{ nginx_conf_dir }}/{{ '%02d' % item.0 }}_{{ item.1.conf_name|default(item.1.template) }}
    owner={{ user }}
    group={{ group }}
    mode=0660
  with_indexed_items: nginx_sites
  notify:
    - restart nginx
  register: nginx_sites_confs
# flatten and map the results into a simple list
# unchanged files have attribute dest, changed have attribute path
- set_fact:
    nginx_confs: "{{ nginx_sites_confs.results|selectattr('dest', 'string')|map(attribute='dest')|list + nginx_sites_confs.results|selectattr('path', 'string')|map(attribute='path')|select|list }}"
  when: nginx_sites
# get contents of conf dir
- shell: ls -1 {{ nginx_conf_dir }}/*.conf
  register: contents
  when: nginx_sites

# so we can delete the ones we don't manage
- name: empty old confs
  file: path="{{ item }}" state=absent
  with_items: contents.stdout_lines
  when: nginx_sites and item not in nginx_confs
The trick (as you can see) is that changed and unchanged files end up with different attributes in the registered results. You turn those results into a list of the files you manage, list the contents of the directory, and remove anything not in that list.
This could be done with less code if you already had a plain list of files, but in this case I'm building an indexed list, so the managed list has to be constructed with map as well.
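To make the selectattr filtering above concrete, this is roughly what the registered results look like (fields trimmed for brevity; the exact structure varies between Ansible versions, so treat it as an illustration rather than a guaranteed schema):

nginx_sites_confs:
  results:
    - changed: false
      dest: /etc/nginx/conf.d/00_default.conf   # unchanged result: has 'dest'
    - changed: true
      path: /etc/nginx/conf.d/01_api.conf       # changed result: has 'path'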
I want to share my experience with this case.
Since version 2.2, Ansible has had the with_filetree loop, which provides a simple way to upload directories, links, static files and even (!) templates. It's the best way I've found to keep my config dir synchronized.
- name: etc config - Create directories
  file:
    path: "{{ nginx_conf_dir }}/{{ item.path }}"
    state: directory
    mode: 0755
  with_filetree: etc/nginx
  when: item.state == 'directory'
- name: etc config - Creating configuration files from templates
  template:
    src: "{{ item.src }}"
    dest: "{{ nginx_conf_dir }}/{{ item.path | regex_replace('\\.j2$', '') }}"
    mode: 0644
  with_filetree: etc/nginx
  when:
    - item.state == "file"
    - item.path | match('.+\.j2$') | bool
- name: etc config - Creating static configuration files
  copy:
    src: "{{ item.src }}"
    dest: "{{ nginx_conf_dir }}/{{ item.path }}"
    mode: 0644
  with_filetree: etc/nginx
  when:
    - item.state == "file"
    - not (item.path | match('.+\.j2$') | bool)
- name: etc config - Recreate symlinks
  file:
    src: "{{ item.src }}"
    dest: "{{ nginx_conf_dir }}/{{ item.path }}"
    state: link
    force: yes
    mode: "{{ item.mode }}"
  with_filetree: etc/nginx
  when: item.state == "link"
Next we may want to delete unused files from the config dir. It's simple: we gather the list of uploaded files and the list of files that exist on the remote server, then remove the difference. But we may want to keep some unmanaged files in the config dir, so I've used the -prune functionality of find to avoid clearing folders that hold unmanaged files (any directory containing a .ansible_keep-content marker is left alone).
- name: etc config - Gathering managed files
  set_fact:
    __managed_file_path: "{{ nginx_conf_dir }}/{{ item.path | regex_replace('\\.j2$', '') }}"
  with_filetree: etc/nginx
  register: __managed_files

- name: etc config - Convert managed files to list
  set_fact: managed_files="{{ __managed_files.results | map(attribute='ansible_facts.__managed_file_path') | list }}"

- name: etc config - Gathering existing files (excluding .ansible_keep-content dirs)
  shell: find /etc/nginx -mindepth 1 -type d -exec test -e '{}/.ansible_keep-content' \; -prune -o -print
  register: exist_files
  changed_when: False

- name: etc config - Delete unmanaged files
  file: path="{{ item }}" state=absent
  with_items: "{{ exist_files.stdout_lines }}"
  when:
    - item not in managed_files
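To protect a directory of unmanaged files from this cleanup, all you need is the marker file that the find command above tests for; a minimal sketch (the directory path is only an example):

- name: etc config - Protect a directory of unmanaged files
  copy:
    content: ""
    dest: /etc/nginx/conf.d/local-overrides/.ansible_keep-content
    force: no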
Here's something I came up with:
- template: src=/source/directory/{{ item }}.j2 dest=/target/directory/{{ item }}
  register: template_results
  with_items:
    - a_list.txt
    - of_all.txt
    - templates.txt

- set_fact:
    managed_files: "{{ template_results.results|selectattr('invocation', 'defined')|map(attribute='invocation.module_args.dest')|list }}"

- debug:
    var: managed_files
    verbosity: 0

- find:
    paths: "/target/directory/"
    patterns: "*.txt"
  register: all_files

- set_fact:
    files_to_delete: "{{ all_files.files|map(attribute='path')|difference(managed_files) }}"

- debug:
    var: all_files
    verbosity: 0

- debug:
    var: files_to_delete
    verbosity: 0

- file: path={{ item }} state=absent
  with_items: "{{ files_to_delete }}"
This generates the templates (however you like) and records the results in 'template_results'.
The results are then mangled into a simple list of each template's "dest". Skipped templates (due to a when condition, not shown) have no "invocation" attribute, so they're filtered out.
"find" is then used to list everything in the target directory, i.e. all files that should be absent unless they were specifically written.
That list is mangled into a raw list of files present, and the "supposed to be there" files are removed from it.
The remaining "files_to_delete" are then removed.
Pros: You avoid multiple 'skipped' entries showing up during deletes.
Cons: You'll need to concatenate each template_results.results if you want to do multiple template tasks before doing the find/delete.
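A sketch of that concatenation, assuming two template tasks were registered as conf_results and cron_results (both names invented for illustration):

- set_fact:
    # conf_results and cron_results are hypothetical registered results of two template tasks
    managed_files: "{{ (conf_results.results + cron_results.results)
                       | selectattr('invocation', 'defined')
                       | map(attribute='invocation.module_args.dest')
                       | list }}"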
There might be a couple of ways to handle this, but would it be possible to entirely empty the target directory in a task before the template step? Or maybe drop the templated files into a temporary directory and then delete+rename in a subsequent step?
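A rough sketch of the first idea, using the paths from the question: wipe /dest/conf.d and re-render everything. Note that this removes all files for a moment, which may not be acceptable for configs that can be read at any time.

- file:
    path: /dest/conf.d
    state: absent

- file:
    path: /dest/conf.d
    state: directory

- template: >
    src=/src/conf.d/{{ item }}
    dest=/dest/conf.d/{{ item|replace('.j2','') }}
  with_lines: find /src/conf.d/ -type f -printf "%P\n"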
Usually I do not remove files; instead I add an '-unmanaged' suffix to their names.
Sample Ansible tasks:
- name: Get sources.list.d files
  shell: grep -r --include=\*.list -L '^# Ansible' /etc/apt/sources.list.d || true
  register: grep_unmanaged
  changed_when: grep_unmanaged.stdout_lines

- name: Add '-unmanaged' suffix
  shell: rename 's/$/-unmanaged/' {{ item }}
  with_items: grep_unmanaged.stdout_lines
EXPLANATION
The grep command uses:
-r to search recursively
--include=\*.list to only consider files with a .list extension during the recursive search
-L '^# Ansible' to print the names of files that do not contain a line starting with '# Ansible'
|| true to ignore errors. Ansible's ignore_errors would also work, but before ignoring the error Ansible prints it in red during the ansible-playbook run, which is undesired (at least for me).
Then I register the output of the grep command as a variable. When grep produces any output, I mark the task as changed (the changed_when line is responsible for this).
In the next task I iterate over the grep output (i.e. the file names returned by grep) and run the rename command to add the suffix to each file.
That's all. The next time you run the playbook, the first task should be green and the second skipped.
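For this to work, every file that Ansible deploys must contain a line starting with '# Ansible'; a minimal sketch of such a template (with a default configuration, ansible_managed renders to a string beginning with "Ansible", but check your ansible.cfg if you have customized it):

# {{ ansible_managed }}
deb http://repo.example.com/debian stable main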
I am using Ansible version 2.9.20
---
# tasks file for delete_unmanaged_files
- name: list files in dest
  shell: ls -1 dest/conf.d
  register: files_in_dest

- name: list files in src
  shell: ls -1 src/conf.d
  register: files_in_src

- name: Managed files - dest
  command: echo {{ item|replace('.j2','') }}
  with_items: "{{ files_in_dest.stdout_lines }}"
  register: managed_files_dest

- name: Managed files - src
  command: echo {{ item|replace('.j2','') }}
  with_items: "{{ files_in_src.stdout_lines }}"
  register: managed_files_src

- name: Convert src managed files to list
  set_fact: managed_files_src_list="{{ managed_files_src.results | map(attribute='stdout') | list }}"

- name: Delete unmanaged files in dest
  file: path=dest/conf.d/{{ item.stdout }} state=absent
  with_items: "{{ managed_files_dest.results }}"
  when: item.stdout not in managed_files_src_list
Depending on the use case, the solution above might help you. Here I have created six tasks.
Explanation:
Tasks 1 and 2 store the file names in the variables files_in_dest and files_in_src.
Tasks 3 and 4 take the output of tasks 1 and 2 and strip the .j2 suffix (required for this use case), storing the result in the variables managed_files_dest and managed_files_src.
Task 5 converts the output of managed_files_src to a list, so that all files currently present in the src directory are available as a single list which the next task can use to identify unmanaged files in dest.
Task 6 deletes the unmanaged files in dest.
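The echo/register round-trip in tasks 3-5 could probably be avoided by building the list directly with Jinja filters; a sketch of that idea (untested, same directory layout assumed):

- name: Build the managed list directly with Jinja filters
  set_fact:
    managed_files_src_list: "{{ files_in_src.stdout_lines | map('replace', '.j2', '') | list }}"

- name: Delete unmanaged files in dest
  file:
    path: "dest/conf.d/{{ item }}"
    state: absent
  with_items: "{{ files_in_dest.stdout_lines }}"
  when: (item | replace('.j2','')) not in managed_files_src_list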
Apparently this isn't possible with Ansible at the moment. I had a conversation with mdehaan on IRC and it boils down to Ansible not having a directed acyclic graph for resources, making things like this very hard.
When I asked mdehaan for an example, e.g. authoritatively managing a sudoers.d directory, he came up with this:
14:17 < mdehaan> Robe: http://pastebin.com/yrdCZB0y
14:19 < Robe> mdehaan: HM
14:19 < Robe> mdehaan: that actually looks relatively sane
14:19 < mdehaan> thanks :)
14:19 < Robe> the problem I'm seeing is that I'd have to gather the managed files myself
14:19 < mdehaan> you would yes
14:19 < mdehaan> ALMOST
14:20 < mdehaan> you could do a fileglob and ... well, it would be a little gross
[..]
14:32 < mdehaan> eh, theoretical syntax, nm
14:33 < mdehaan> I could do it by writing a lookup plugin that filtered a list
14:34 < mdehaan> http://pastebin.com/rjF7QR24
14:34 < mdehaan> if that plugin existed, for instance, and iterated across lists in A that were also in B
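The pastebin links are long dead, but the "items in list A that are not in list B" idea is easy to express with today's filters; a rough sketch (not mdehaan's original plugin, just an illustration, and managed_sudoers_files is a hypothetical list of full paths you manage):

- name: Collect files currently in sudoers.d
  find:
    paths: /etc/sudoers.d
  register: sudoers_present

- name: Remove files that are not in the managed list
  file:
    path: "{{ item }}"
    state: absent
  # managed_sudoers_files is a hypothetical variable holding the managed paths
  with_items: "{{ sudoers_present.files | map(attribute='path') | list | difference(managed_sudoers_files) }}"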
Building on #user2645850's answer, I came up with this improved version; in this case it manages the vhost configuration of Apache. It doesn't use shell and thus also works in --check mode.
# Remove unmanaged vhost configs left over from renaming or removing apps
# all managed configs need to be added to "managed_sites" in advance
- find:
    paths: /etc/apache2/sites-available
    patterns: '*.conf'
  register: sites_available_contents

- name: Remove unmanaged vhost config files
  file:
    path: /etc/apache2/sites-available/{{ item }}
    state: absent
  with_items: "{{ sites_available_contents.files | map(attribute='path') | map('basename') | list }}"
  when: item not in managed_sites

# links may differ from files, therefore we need our own find task for them
- find:
    paths: /etc/apache2/sites-enabled
    file_type: any
  register: sites_enabled_contents

- name: Remove unmanaged vhost config links
  file:
    path: /etc/apache2/sites-enabled/{{ item }}
    state: absent
  with_items: "{{ sites_enabled_contents.files | map(attribute='path') | map('basename') | list }}"
  when: item not in managed_sites
Examples on how to build managed_sites:
# Add single conf and handle managed_sites being unset
- set_fact:
    managed_sites: "{{ (managed_sites | default([])) + [ '000-default.conf' ] }}"

# Add a list of vhosts appending ".conf" to each entry of vhosts
- set_fact:
    managed_sites: "{{ managed_sites + ( vhosts | map(attribute='app') | product(['.conf']) | map('join') | list ) }}"