I'd like to copy a file from local to remote using Ansible with the copy module. It fails because it cannot find the file.
I've tried both relative paths (from the Ansible root) and environment variables (which would be the preferred way).
I don't think Ansible supports environment variables here; at least, it cannot find the file. This is how I've done it:
- name: Ensure test file
  copy:
    src: $DNM_TOOLS_HOME/ch/testfile.txt
    dest: /tmp/testfile.txt
    owner: root
    group: root
    mode: 0644
Is there a way I can make use of Environment variables? If not, from which folder is Ansible doing the relative path lookup?
YAML does not interpolate environment variables the way you are trying to, and neither does Ansible, which uses Jinja2 templating.
In this case, you will have to use an Ansible lookup, more specifically the env lookup:
src: "{{ lookup('env', 'DNM_TOOLS_HOME') }}/ch/testfile.txt"
Note that lookups always run on the controller machine. If you ever need to get an environment variable from a remote host inside a task, those are available in the ansible_env hash (e.g. some_yaml_key: "{{ ansible_env.MY_REMOTE_ENV_VAR }}").
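For example, a minimal sketch contrasting the two, reusing the variable names from above (note that ansible_env is only populated when fact gathering is enabled):

- name: Expand a controller-side environment variable in src
  copy:
    src: "{{ lookup('env', 'DNM_TOOLS_HOME') }}/ch/testfile.txt"
    dest: /tmp/testfile.txt

- name: Read an environment variable from the remote host instead
  debug:
    msg: "{{ ansible_env.MY_REMOTE_ENV_VAR }}"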
Related
I zipped up an Ansible playbook and a configuration file, pushed the .zip file to S3, and I'm triggering the Ansible playbook from AWS SSM.
I'm getting an AnsibleFileNotFound error: AnsibleFileNotFound: Could not find or access '/path/to/my_file.txt' on the Ansible Controller.
Here is my playbook:
- name: Copies a configuration file to a machine.
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Copy the configuration file.
      copy:
        src: /path/to/my_file.txt
        dest: /etc/my_file.txt
        owner: root
        group: root
        mode: '0644'
      become: true
my_file.txt exists in the .zip file that I uploaded to S3, and I've verified that it's being extracted (via the AWS SSM output). Why wouldn't I be able to copy that file over? What do I need to do to get Ansible to save this file to /etc/ on the target machine?
EDIT:
Using remote_src: true makes sense because the .zip file is presumably unpacked by AWS SSM to somewhere on the target machine. The problem is that this is unpacked to a random temp directory, so the file isn't found anyway.
I tried a couple of different absolute paths - I am assuming the path here is relative to the .zip root.
The solution here is a bit horrendous:
The .zip file is extracted to the machine into an ephemeral directory with a random name which is not known in advance of the AWS SSM execution.
remote_src must be true. Yeah, it's your file that you've uploaded to S3, but Ansible isn't really smart enough to know that in this context.
A path relative to the playbook has to be used if you're bundling configuration files with the playbook.
That relative path has to be interpolated.
So using src: "{{ playbook_dir | dirname }}/path/to/my_file.txt" solved the problem in this case.
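Putting those points together, the working task looked roughly like this (a sketch; the actual path under the playbook directory will differ):

- name: Copy the configuration file.
  copy:
    src: "{{ playbook_dir | dirname }}/path/to/my_file.txt"
    dest: /etc/my_file.txt
    owner: root
    group: root
    mode: '0644'
    remote_src: true
  become: true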
Note that this approach should not be used if configuration files contain secrets, but I'm not sure what approach AWS SSM offers for that type of scenario when you are using it in conjunction with Ansible.
I am trying to include a file which contains tasks, but it fails as below:
ERROR! Unable to retrieve file contents
Could not find or access '/roles/k8s/tasks/Get_volumes.yaml' on the Ansible Controller. If you are using a module and expect the file to exist on the remote, see the remote_src option
Below is my Ansible script. As I am going to test only a few tasks, I have included the tasks file, which is inside the roles folder. I am not sure whether it fails because of the roles folder, but I am unable to resolve it.
---
- hosts: localhost
  vars:
    local_kubectl: kubectl
    local_array_api_url: https://Mystoragearray:5392/v1
    array_username: admin
    array_password: admin
  tasks:
    #- import_tasks: /roles/k8s/tasks/GetVolume_token.yaml
    - include_tasks: /roles/k8s/tasks/Get_volumes.yaml
Q: How to include_tasks with relative path?
A: Place the file relative to the playbook base directory playbook_dir. For example
- include_tasks: Get_volumes.yaml
is the same as
- include_tasks: "{{ playbook_dir }}/Get_volumes.yaml"
If the directory /roles is configured in DEFAULT_ROLES_PATH, then include_role might be a better option:
- include_role:
    name: k8s
    tasks_from: Get_volumes.yaml
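For reference, both variants assume a conventional role layout relative to the playbook (a hypothetical reconstruction):

playbook.yml
roles/
  k8s/
    tasks/
      Get_volumes.yaml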
Please verify the path again. Is the path "/roles/k8s/tasks/Get_volumes.yaml" correct? It should be something like "roles/k8s/tasks/Get_volumes.yaml".
If the path is correct (it seems to start from the root directory "/"), please give the Ansible user the necessary permissions.
tl;dr
Ansible's template module treats a template destination as a directory instead of as a file. What am I missing?
Situation:
I am using paulfantom's ansible-restic role in another role, named backups-role. I am testing my role with a dummy playbook in a local lxc environment, which installs a database service and then uses ansible-restic to create a cron job to back it up. For testing purposes, the backup destination is a local restic repo: /var/backup/backups-test. I am using only one restic repo, named backups-test. EDIT: Wrong! The variable was actually set to "{{ ansible_hostname }}/{{ ansible_date_time.epoch }}", which evaluated to something like myhost.local/1551799877. See the answer below.
Problem
This Ansible task from a stable Ansible role (ansible-restic):
- name: Deploy cron script
  template:
    src: 'restic.cron.j2'
    dest: '/etc/cron.d/restic-{{ item.name }}'
    mode: '0640'
  no_log: false
  with_items: '{{ restic_repos }}'
  register: _cronfiles
... fails complaining with:
Destination directory /etc/cron.d/restic-backups-test does not exist
Discussion
What ansible-restic should do here is deploy a cron script based on a template, named restic-backups-test, inside the directory /etc/cron.d/. However, it looks like Ansible interprets the directory as /etc/cron.d/restic-backups-test and the file name as just the epoch timestamp, like 1551799877, contrary to what the Ansible docs themselves suggest:
# Example from Ansible Playbooks
- template:
    src: /mytemplates/foo.j2
    dest: /etc/file.conf
    owner: bin
    group: wheel
    mode: 0644
I'm afraid it has to do with my environment instead, but I don't know what could make Ansible change this behaviour, and my playbook doesn't do any black magic.
Some more details
I am running Ansible version 2.7.8 with Python 3.5.3 from a Debian Stretch machine, against a Linux container with guest OS Ubuntu Bionic, which has Python 2.7.15rc1 and Python 3.6.7. The python symlink points to python2.7. I have also tried with python3.6, with the same result.
Request
Can you help me make sense of this? In the end I just want it to work without having to modify the upstream role. The timestamp in the file name may be a hint that you understand.
I am self-answering because I solved it with hints from @TheLastProject on GitHub, who simply asked:
What's the value of your restic_repos variable?
And it turned out that the info I provided in the question was wrong. I was setting the variable with a slash / in the middle of the name; I still don't know why. Therefore, when ansible-restic was trying to make sure that /etc/cron.d/{{ repo_name }} existed, it was actually checking /etc/cron.d/myhost.local/1551799877, which clearly didn't exist, and it didn't get created because the module only creates files, not intermediary parent directories.
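For reference, a hypothetical reconstruction of the misconfigured variable:

restic_repos:
  - name: "{{ ansible_hostname }}/{{ ansible_date_time.epoch }}"  # the slash turns the dest into a nested path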
So no magic and no bugs in Ansible!
I am putting together an Ansible Playbook designed to build webservers. However I am stuck when trying to use with_fileglob because Ansible keeps reporting that it's skipping the copy of nginx vhost files.
My script looks like this:
- name: Nginx | Copy vhost files
  copy: src={{ item }} dest=/etc/nginx/sites-available owner=root group=root mode=600
  with_fileglob:
    - "{{ templates_dir }}/nginx/sites-available/*"
  notify:
    - nginx-restart
{{ templates_dir }} has been defined elsewhere as roles/common/templates. In this directory I have a file called webserver1 that I'm hoping Ansible will copy into /etc/nginx/sites-available/.
I have found other people discussing this issue but no responses have helped me solve this problem. Why would Ansible be skipping files?
Edit: I should point out that I want to use with_fileglob (rather than straight copy) as I want to iterate over other virtual hosts in the future.
Look at http://docs.ansible.com/playbooks_loops.html#looping-over-fileglobs, Note 1:
When using a relative path with with_fileglob in a role, Ansible resolves the path relative to the roles/<rolename>/files directory.
So to access a file in the templates directory, you can start the relative path with ../templates.
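Applied to the task above, a minimal sketch (assuming the vhost files live under roles/common/templates/nginx/sites-available/):

- name: Nginx | Copy vhost files
  copy:
    src: "{{ item }}"
    dest: /etc/nginx/sites-available
    owner: root
    group: root
    mode: '0600'
  with_fileglob:
    - ../templates/nginx/sites-available/*
  notify:
    - nginx-restart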
I wonder if there is a way for Ansible to access local environment variables.
The documentation references accessing variables on the target machine:
{{ lookup('env', 'SOMEVAR') }}
Is there a way to access environment variables on the source machine?
I have a Linux VM running on OSX, and for me:
lookup('env', 'HOME') returns "/Users/Gonzalo" (the HOME variable from OSX), while ansible_env.HOME returns "/root" (the HOME variable from the VM).
Worth mentioning: ansible_env.VAR fails if the variable does not exist, while lookup('env', 'VAR') does not fail.
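To illustrate the difference (DOES_NOT_EXIST is a hypothetical unset variable; the default filter is one way to avoid the undefined-variable error):

- debug:
    msg: "{{ lookup('env', 'DOES_NOT_EXIST') }}"  # unset variable, returns an empty string

- debug:
    msg: "{{ ansible_env.DOES_NOT_EXIST | default('fallback') }}"  # without default, this task would fail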
Use an Ansible lookup:
- set_fact: env_var="{{ lookup('env','ENV_VAR') }}"
Those variables are on the management machine, which I suppose is the source machine in your case.
Check this: https://docs.ansible.com/ansible/devel/collections/ansible/builtin/env_lookup.html
Basically, if you just need to access existing variables, use the env lookup plugin. For example, to access the value of the HOME environment variable on the management machine, use {{ lookup('env','HOME') }}.
Now, if you need to access it on the remote machine, you can just run your Ansible script locally on the remote machine.
Or you could just use the Ansible facts variables. If it's not in the Ansible facts, you can run a shell command to get it, as in the sketch below.
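For example, a sketch of reading a remote environment variable with a shell command when it is not among the gathered facts (MY_REMOTE_ENV_VAR is a hypothetical variable name):

- name: Read an environment variable on the remote host
  ansible.builtin.shell: echo "$MY_REMOTE_ENV_VAR"
  register: remote_var
  changed_when: false

- ansible.builtin.debug:
    msg: "{{ remote_var.stdout }}"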
Use delegate_to to run it on any machine you want:
- name: get running ansible user
  ansible.builtin.set_fact:
    local_ansible_user: "{{ lookup('env', 'USER') }}"
  delegate_to: localhost