I'm writing an Ansible role where some templates must be present multiple times, with different names, in a single destination directory. To avoid handling each of these files separately, I need to be able to apply templating, or some other form of placeholder substitution, to their names as well. To give a concrete example, I might have a file named
{{ Client }}DataSourceContext.xml
which I need to change into, say,
AcmeDataSourceContext.xml
I have many files of this kind that have to be installed in different directories, but all copies of a single file go to the same directory. If I didn't need to change their names or duplicate them, I could handle a whole bunch of such files with something like:
- name: Process a whole subtree of templates
  template:
    src: "{{ item.src }}"
    dest: "/path/to/{{ item.path }}"
  with_filetree: ../templates/my-templates/
  when: item.state == 'file'
I guess what I'd like is a magic consider_filenames_as_templates toggle that turned on filename preprocessing. Is there any way to approximate this behaviour?
Pretty much anywhere you can put a literal value in Ansible, you can instead substitute the value of a variable. So for example, you could do something like this:
- template:
    src: sometemplate.xml
    dest: "/path/to/{{ item }}DataSourceContext.xml"
  loop:
    - client1
    - client2
This would end up creating /path/to/client1DataSourceContext.xml and /path/to/client2DataSourceContext.xml.
Update 1
For the question you've posed in your update:
I guess what I'd like is a magic consider_filenames_as_templates toggle that turned on filename preprocessing. Is there any way to approximate this behaviour?
It seems like you could just do something like:
- name: Process a whole subtree of templates
  template:
    src: "{{ item.src }}"
    dest: "/path/to/{{ item.path.replace('__client__', client_name) }}"
  with_filetree: ../templates/my-templates/
  when: item.state == 'file'
That is, replace the string __client__ in your filenames with the value of the client_name variable.
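To see the renaming in isolation, here is a minimal sketch; it assumes a client_name variable defined elsewhere (for example in group_vars), which is not part of the original task:

- debug:
    msg: "{{ '__client__DataSourceContext.xml' | replace('__client__', client_name) }}"

With client_name set to Acme, this prints AcmeDataSourceContext.xml, matching the renaming asked about in the question.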
I'm not sure how to describe the title or my question properly, feel free to edit.
I'll jump right in. I have this working piece of Ansible code:
- file:
    path: "{{ item.item.value.my_folder }}/{{ item.item.value.filename }}"
    state: absent
  loop: "{{ my_stat.results }}"
  when: item.stat is defined and item.stat.exists and item.stat.islnk
If Ansible is run, the task is executed properly, and the file is removed from the system.
Now, the issue. What I want is for Ansible to loop over multiple items described in "path", so that I won't have to create a separate task for each filename I want deleted.
Example:
- file:
    path:
      - "{{ item.item.value.my_folder }}/{{ item.item.value.filename }}"
      - "{{ item.item.value.my_folder }}/{{ item.item.value.other_filename }}"
    state: absent
  loop: "{{ my_stat.results }}"
  when: item.stat is defined and item.stat.exists and item.stat.islnk
But Ansible doesn't process the items in the list described in 'path', so the filenames will not be deleted.
I see I cannot use 'loop', since it is already in use for another value.
Question: How would I configure Ansible so that I can have multiple items in the path, let Ansible delete those files, and keep the current loop intact?
-- EDIT --
Output of the tasks:
I've removed the pastebin url since I believe it has no added value for the question, and the answer has been given.
As described in the documentation, path is of type path, so Ansible will only accept a valid path in there, not a list.
What you can do, though, is slightly modify your loop and take the product of your existing list and a list of the filename properties you want to remove, then use those as the key to access item.item.value (or item.0.item.value now, since the product filter is applied).
For example:
- file:
    path: "{{ item.0.item.value.my_folder }}/{{ item.0.item.value[item.1] }}"
    state: absent
  loop: "{{ my_stat.results | product(['filename', 'other_filename']) }}"
  when:
    - item.0.stat is defined
    - item.0.stat.exists
    - item.0.stat.islnk
PS: a list in a when is the same as joining the conditions with and in that same when.
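To illustrate that equivalence, here is a minimal sketch with placeholder conditions (the variables are just examples, not taken from the answer above):

# The two tasks below behave identically: the list entries are AND-ed together.
- debug:
    msg: "all conditions hold"
  when:
    - inventory_hostname is defined
    - ansible_facts is defined

- debug:
    msg: "all conditions hold"
  when: inventory_hostname is defined and ansible_facts is defined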
I'm using Ansible to push config files for various apps (based on group_names) and need to loop through the config .j2 templates from a list variable. If I use a known list of config templates, I can use a standard with_nested like this...
template:
  src: '{{ playbook_dir }}/templates/{{ item[1] }}/configs/{{ item[0] }}.j2'
  dest: /path/to/{{ item[1] }}/configs/{{ item[0] }}
with_nested:
  - ['file.1', 'file.2', 'file.3', 'file.4']
  - '{{ group_names }}'
However, since each app will have its own configs, I can't use a common list for a with_nested. Every attempt to somehow nest a with_filetree fails. Is there any way to nest a with_filetree? Am I missing something painfully obvious?
The most straightforward way to deal with this is probably to nest the loops through an include. I assume your app directories only contain .j2 files; adapt if this is not the case.
In, e.g., push_templates.yml:
---
- name: Copy templates for group {{ current_group }}
  template:
    src: "{{ item.src }}"
    dest: /path/to/{{ current_group }}/configs/{{ (item.src | splitext).0 | basename }}
  with_filetree: "{{ playbook_dir }}/templates/{{ current_group }}"
  # Or using the latest loop syntax
  # loop: "{{ query('filetree', playbook_dir + '/templates/' + current_group) }}"
  when: item.src is defined
Note: on the dest line, I am removing the last extension of the file and keeping only its name, without the leading directory path. Check the Ansible documentation on the splitext and basename filters for more info.
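For example, this is roughly what the dest expression evaluates to for one filetree entry; the path below is made up purely for illustration:

- debug:
    msg: "{{ ('/playbook/templates/app1/file.1.j2' | splitext).0 | basename }}"
  # prints "file.1": splitext strips the trailing ".j2", basename drops the directories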
Then, in your main.yml, e.g.:
- name: Copy templates for all groups
  include_tasks: push_templates.yml
  loop: "{{ group_names }}"
  loop_control:
    loop_var: current_group
Note the loop_var in the loop_control section: it avoids a clash on item between this loop and the loop inside the included file. The variable name is of course aligned with the one I used in the included file above. See the Ansible loops documentation for more info.
An alternative approach would be to construct your own data structure: loop over your groups with set_fact, call the filetree lookup on each iteration (see the newer loop syntax in the example above), and then loop over that custom data structure to do the job; a rough sketch follows.
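A sketch of that alternative, assuming every interesting entry under templates/<group>/ is a regular file and using an illustrative variable name (group_templates) that is not part of the original answer:

- name: Build a flat list of group/template pairs
  set_fact:
    group_templates: "{{ (group_templates | default([]))
                         + (query('filetree', playbook_dir + '/templates/' + item)
                            | selectattr('state', 'equalto', 'file')
                            | map('combine', {'group': item}) | list) }}"
  loop: "{{ group_names }}"

- name: Push all templates in a single task
  template:
    src: "{{ item.src }}"
    dest: /path/to/{{ item.group }}/configs/{{ (item.src | splitext).0 | basename }}
  loop: "{{ group_templates }}"

This trades the extra include file for a denser set_fact expression; which one reads better is largely a matter of taste.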
I'm currently trying to get used to Ansible, but I'm failing to achieve what seems to be a common use case:
Let's say I have a role nginx in roles/nginx, and one task is to set up a custom default page:
- name: install nginx default page
  copy:
    src: "index.html"
    dest: /var/www/html/
    owner: root
    mode: 0644
Ansible will look for the file in:
roles/nginx/files
roles/nginx
roles/nginx/tasks/files
roles/nginx/tasks
files
./
Now, for some reason, a single host should receive a completely different file.
I know I could alter the file src path to src: "{{ inventory_hostname }}/index.html" but then it would search in host-specific directories only.
Is there a way to alter the search paths so that Ansible will look for files in host-specific directories first but fall-back to common directories?
I don't want to decide whether files might need to be host-specific when writing roles. I'd rather be able to override a role's default files without altering the base role at all.
Q: "Is there a way to alter the search paths so that Ansible will look for files in host-specific directories first but fall back to common directories?"
A: In general, it is not possible to change the search paths. But with the first_found lookup it is possible to define how a specific file is searched for. For example:
- copy:
    src: "{{ lookup('first_found', findme) }}"
    dest: /scratch/tmp/
    owner: root
    mode: 0644
  vars:
    findme:
      - "{{ inventory_hostname }}/index.html"
      - "{{ role_path }}/files/index.html"
      - "{{ role_path }}/files/defaults/index.html"
In my Ansible Playbook I'd like to have a task that removes old files and folders from the application's directory. The twist to this otherwise simple task is that a few files or folders need to remain. Imagine something like this:
/opt/application
  - /config
    - *.properties
    - special.yml
  - /logs
  - /bin
  - /var
    - /data
  - /templates
Let's assume I'd like to keep /logs completely, keep /var/data, and from /config I want to keep only special.yml.
(I cannot provide exact code at the moment because I left work frustrated by this and, after cooling down, I am now writing up this question at home)
My idea was to have two lists of exclusions, one holding the folders and one the files. Then I would use the find module to get the folders in the application's directory into one variable, and likewise the remaining files into another variable. Afterwards I wanted to remove every folder and file that is not in the lists of exclusions, using the file module.
(Pseudo-YML because I'm not yet fluent enough in Ansible that I can whip up a properly structured example; it should be close enough though)
file:
  path: "{{ item.path }}"
  state: absent
with_items: "{{ found_files_list.files }}"
when: well, that is the big question
What I can't figure out is how to properly construct the when clause. Is it even possible like this?
I don't believe a when clause on the file module is the right tool for this.
But you can probably achieve what you need as follows:
- name: Find /opt/application all directories, exclude logs, data, and config
  find:
    paths: /opt/application
    excludes: 'logs,data,config'
  register: files_to_delete

- name: Ansible remove file glob
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ files_to_delete.files }}"
I hope this is what you need.
First, use the find module like you said to get a total list of all files and directories. Register the result in a variable like all_objects.
- name: Get list of all files recursively
  find:
    path: /opt/application/
    recurse: yes
  register: all_objects
Then manually make a list of things you want to keep.
vars:
  keep_these:
    # use full paths, since find returns absolute paths
    - /opt/application/logs
    - /opt/application/var/data
    - /opt/application/config/special.yml
Then this task should delete everything except things in your list:
- name: Delete all files and directories except exclusions
  file:
    path: "{{ item.path }}"
    # state: absent removes directories recursively on its own;
    # the file module only accepts recurse together with state: directory
    state: absent
  with_items: "{{ all_objects.files }}"
  when: item.path not in keep_these
I think this general strategy should work... the only thing I'm not sure about is the exact nesting hierarchy of the variable registered from the find module. You might have to play around with the debug module to get it exactly right.
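If you want to be careful, a quick way to inspect what find actually returned before deleting anything is something like:

- name: Show the paths that would be deleted
  debug:
    msg: "{{ all_objects.files | map(attribute='path') | list }}"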
Is it possible to have a few .yml files and then read them as separate items for a task?
Example:
- name: write templates
  template: src=template.j2 dest=/some/path
  with_items: ./configs/*.yml
I have found a pretty elegant solution:
---
- hosts: localhost
  vars:
    my_items: "{{ lookup('fileglob', './configs/*.yml', wantlist=True) }}"
  tasks:
    - name: write templates
      template: src=template.j2 dest=/some/path/{{ (item | from_yaml).name }}
      with_file: "{{ my_items }}"
And then, in the template itself, you have to add {% set item = (item | from_yaml) %} at the beginning.
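For illustration, a hypothetical ./configs/acme.yml that this setup would pick up might look like this (the keys are made up); (item | from_yaml).name would then evaluate to acme:

name: acme
listen_port: 8080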
Well, yes and no. You can loop over files and even use their content as variables. But the template module does not take parameters. There is an ugly workaround by using an include statement. Includes do take parameters and if the template task is inside the included file it will have access to them.
Something like this should work:
- include: other_file.yml parameters={{ lookup('file', item) | from_yaml }}
  with_fileglob: ./configs/*.yml
And in other_file.yml, the template task:
- name: write template
  template: src=template.j2 dest=/some/path
The ugly part here, besides the additional include, is that the include statement only takes parameters in the key=value format. That's what you see in the task above as parameters=.... parameters has no special meaning here; it is just the name of the variable under which the content of the file will be available inside the include.
So if your vars files have a variable foo defined, you would be able to access it in the template as {{ parameters.foo }}.
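As a concrete, made-up example: with a ./configs/acme.yml like the one below, the template could refer to {{ parameters.name }} and {{ parameters.foo }}:

name: acme
foo: bar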