So, this is Ansible 2.10, and I'm trying to copy multiple files using with_items, but it does not seem to work. My attempts are more or less of the following form (this is a task):
---
- name: Sync md tools
  ansible.builtin.copy:
    dest: /root/bin/
    src: "{{ playbook_dir }}/../additions/mdraid/{{ item }}"
    with_items:
      - mdadm_readd_dev
      - md_check
    owner: root
    group: root
    mode: '0700'
    force: yes
I'm aware of ansible.posix.synchronize and of the src_dir/ dst_dir/ format, but I would like to be able to specify a list. Is there a way to copy a specific list of multiple files with ansible.builtin.copy?
Bad syntax, i.e. bad indentation. ansible-lint or ansible-playbook <playbook> -C would tell you more.
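Assuming the problem is that with_items ended up indented under the module arguments (as in the task above), the fix is to move it out to the task level and keep only copy parameters under ansible.builtin.copy:

- name: Sync md tools
  ansible.builtin.copy:
    src: "{{ playbook_dir }}/../additions/mdraid/{{ item }}"
    dest: /root/bin/
    owner: root
    group: root
    mode: '0700'
    force: yes
  with_items:
    - mdadm_readd_dev
    - md_check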
In my files directory I have various files, with a similar name structure:
data-example.zip
data-precise.zip
data-arbitrary.zip
data-collected.zip
I would like to transfer all of these files in the /tmp directory of my remote machine using Ansible without specifying each file name explicitly.
In other words, I would like to transfer every file that starts with "data-".
What is the correct way to do that? In a similar thread, someone suggested the with_fileglob keyword, but I couldn't get that to work.
Can someone provide me with an example of how to accomplish this task?
Method 1: Find all files, store them in a variable, and copy them to the destination.
- hosts: lnx
  tasks:
    - find:
        paths: /source/path
        recurse: yes
        patterns: "data*"
      register: files_to_copy

    - copy:
        src: "{{ item.path }}"
        dest: /dear/dir
        owner: root
        mode: '0775'
      with_items: "{{ files_to_copy.files }}"
Use remote_src: yes to copy files on the remote machine from one path to another.
Ansible documentation
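As a sketch of that remote_src variant: if the data files already live on the managed host under /source/path, only remote_src needs to be added to the copy task from Method 1 (paths reused from the example above):

- copy:
    src: "{{ item.path }}"
    dest: /dear/dir
    owner: root
    mode: '0775'
    remote_src: yes
  with_items: "{{ files_to_copy.files }}"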
Method 2: Fileglob
- name: Copy each file over that matches the given pattern
  copy:
    src: "{{ item }}"
    dest: "/etc/fooapp/"
    owner: "root"
    mode: 0600
  with_fileglob:
    - "/playbooks/files/fooapp/*"
Ansible documentation
Shortly after posting the question I actually figured it out myself. The with_fileglob keyword is the way to do it.
- name: "Transferring all data files"
  copy:
    src: "{{ item }}"
    dest: /tmp/
  with_fileglob: "data-*"
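If the glob does not appear to match anything, the fileglob lookup that powers with_fileglob can be inspected directly with a throwaway debug task; a minimal sketch using the same pattern:

- debug:
    msg: "{{ lookup('fileglob', 'data-*', wantlist=True) }}"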
Please bear with me, I am new to Ansible. I have a tasks/main.yml file like this as part of an effort to enhance it. For now, I have to execute a playbook for each file separately to copy A.jar and B.jar one at a time. Instead, I am trying to define the A and B jar files in a list up front and process them one by one, copying each to two different destination folders in this playbook, but I am struggling with the syntax. I am hoping to re-use with_items.
- name: Copy
  copy:
    src: "/somePath/{{ name }}.jar"
    dest: "{{ item }}"
    remote_src: yes
  with_items:
    - "/pathTo/foo/"
    - "/pathTo/bar/"
The desired result:

# /pathTo/foo
A.jar
B.jar

# /pathTo/bar
A.jar
B.jar
You can use with_nested for looping:
- name: Copy
  copy:
    src: "/somePath/{{ item[0] }}"
    dest: "{{ item[1] }}"
    remote_src: yes
  with_nested:
    - [ "A.jar", "B.jar" ]
    - [ "/pathTo/foo/", "/pathTo/bar/" ]
This iterates over the cross product of the two lists, copying every source file (A.jar and B.jar) from the first list into every destination directory listed in the second list.
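On newer Ansible versions the same cross product can also be written with loop and the product filter; a sketch equivalent to the task above:

- name: Copy
  copy:
    src: "/somePath/{{ item.0 }}"
    dest: "{{ item.1 }}"
    remote_src: yes
  loop: "{{ ['A.jar', 'B.jar'] | product(['/pathTo/foo/', '/pathTo/bar/']) | list }}"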
I'm currently trying to get used to Ansible but I'm failing to achieve what seems to be a common use-case:
Let's say I have a role nginx in roles/nginx, and one task is to set up a custom default page:
- name: install nginx default page
  copy:
    src: "index.html"
    dest: /var/www/html/
    owner: root
    mode: 0644
Ansible will look for the file in:
roles/nginx/files
roles/nginx
roles/nginx/tasks/files
roles/nginx/tasks
files
./
Now for some reason a single host should receive a completely different file.
I know I could alter the file src path to src: "{{ inventory_hostname }}/index.html" but then it would search in host-specific directories only.
Is there a way to alter the search paths so that Ansible will look for files in host-specific directories first but fall back to common directories?
I don't want to decide whether files might need to be host-specific while writing roles. I would rather be able to override a role's default files without altering the base role at all.
Q: "Is there a way to alter the search paths so that Ansible will look for files in host-specific directories first but fall back to common directories?"
A: In general, it is not possible to change the search paths. But, with first_found it is possible to define how a specific file shall be searched. For example,
- copy:
    src: "{{ lookup('first_found', findme) }}"
    dest: /scratch/tmp/
    owner: root
    mode: 0644
  vars:
    findme:
      - "{{ inventory_hostname }}/index.html"
      - "{{ role_path }}/files/index.html"
      - "{{ role_path }}/files/defaults/index.html"
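The same idea can also be written as a with_first_found loop, where relative paths are resolved against the usual role search locations (files/, the role directory, and so on). A minimal sketch:

- copy:
    src: "{{ item }}"
    dest: /scratch/tmp/
    owner: root
    mode: 0644
  with_first_found:
    - "{{ inventory_hostname }}/index.html"
    - index.html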
In my Ansible Playbook I'd like to have a task that removes old files and folders from the application's directory. The twist to this otherwise simple task is that a few files or folders need to remain. Imagine something like this:
/opt/application
  - /config
    - *.properties
    - special.yml
  - /logs
  - /bin
  - /var
    - /data
  - /templates
Let's assume I'd like to keep /logs completely, /var/data and from /config I want to keep special.yml.
(I cannot provide exact code at the moment because I left work frustrated by this and, after cooling down, I am now writing up this question at home)
My idea was to have two lists of exclusions, one holding the folders and one the files. Then I would use the find module to first get the folders in the application's directory into a variable, and likewise the files into another variable. Afterwards I wanted to remove, using the file module, every folder and file that is not in the exclusion lists.
(Pseudo-YAML, because I'm not yet fluent enough in Ansible to whip up a properly structured example; it should be close enough though.)
file:
  path: "{{ item.path }}"
  state: absent
with_items: "{{ found_files_list.files }}"
when: well, that is the big question
What I can't figure out is how to properly construct the when clause. Is it even possible like this?
I don't believe there is a when clause with the file module.
But you can probably achieve what you need as follows:
- name: Find everything under /opt/application, excluding logs, data, and config
  find:
    paths: /opt/application
    excludes: 'logs,data,config'
  register: files_to_delete

- name: Remove the files found above
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ files_to_delete.files }}"
I hope this is what you need.
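Note that find returns only regular files by default (file_type: file), so leftover empty directories would survive the task above. If directories should be removed as well, the find module's file_type option can be set to any; a sketch based on the task above:

- name: Find files and directories to delete, excluding logs, data, and config
  find:
    paths: /opt/application
    excludes: 'logs,data,config'
    file_type: any
  register: files_to_delete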
First use the find module like you said to get a total list of all files and directories. Register to a variable like all_objects.
- name: Get list of all files recursively
  find:
    path: /opt/application/
    recurse: yes
  register: all_objects
Then manually make a list of things you want to keep.
vars:
  keep_these:
    # absolute paths, so they can be compared against item.path returned by find
    - /opt/application/logs
    - /opt/application/var/data
    - /opt/application/config/special.yml
Then this task should delete everything except things in your list:
- name: Delete all files and directories except exclusions
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ all_objects.files }}"
  when: item.path not in keep_these
I think this general strategy should work... the only thing I'm not sure about is the exact nesting hierarchy of the variable registered by the find module. You might have to play around with the debug module to get it exactly right.
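For example, a throwaway debug task shows exactly what find registered before anything is deleted:

- name: Inspect the registered find results
  debug:
    var: all_objects.files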
For some weird reason I'm having trouble with a simple task: copying the contents of the folder myfiles (a few files in there) to the dist/myfiles location. The task looks like this:
- name: Deploy config files like there is no tomorrow
  copy:
    src: "{{ item }}"
    dest: "/home/{{ ansible_user_id }}/dist/{{ item }}"
  with_items:
    - 'config'
    - 'myfiles/'
The myfiles folder exists under dist, and the config file is copied to the dist folder.
Is this possible in Ansible or I should copy each file separately? Am I doing it completely wrong?
Your task copies both the config file and the myfiles directory properly on Debian and CentOS targets.
If for some reason you have a problem, you might have a look at Looping over Fileglobs.
You need to split the task into two, with the second one looking like:
- name: Deploy multiple config files
  copy:
    src: "{{ item }}"
    dest: "/home/{{ ansible_user_id }}/dist/myfiles/{{ item | basename }}"
  with_fileglob:
    - /path/to/myfiles/*
For a recursive copy, check this question on ServerFault.
Alternatively, you could use the synchronize module, but pay special attention when using become. See this question on SuperUser.
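A minimal synchronize sketch for the same directory (the local source path is an assumption):

- name: Sync the myfiles directory to the remote host
  ansible.posix.synchronize:
    src: /path/to/myfiles/   # trailing slash copies the directory contents
    dest: "/home/{{ ansible_user_id }}/dist/myfiles/"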