I have 2 remote servers (Prod and Demo) and I would like to copy the latest file from a particular folder in Prod to another folder in Demo. Only one file is to be copied.
I can find the latest file in Prod using:
- name: Get files in folder
  find:
    paths: "/path_in_prod/arch/"
  register: found_files
  become: true
  become_user: root
  delegate_to: "{{ prod_server }}"
  when: copy_content_from_prod is defined

- name: Get latest file
  set_fact:
    latest_file: "{{ found_files.files | sort(attribute='mtime', reverse=true) | first }}"
  become: true
  become_user: root
  delegate_to: "{{ prod_server }}"
  when: copy_content_from_prod is defined
I can check that I have the correct file (with debug).
When I try to copy the file with:
- name: Fetch the file from prod
  fetch: src= {{ latest_file.path }} dest=buffer/ flat=yes
  delegate_to: "{{ prod_server }}"

- name: Copy the file to demo
  copy: src=buffer/{{ latest_file.path | basename }} dest=/path_in_demo/in
I get a "File not found" error. But if I look for the file it is there (latest_file.path on Prod).
this is the error message
fatal: [demoServerHost -> ProdServerHost ]: FAILED! => {"changed": false, "msg": "file not found: "}
I do not know if I am interpreting the error message correctly but it seems to be looking in Demo in order to copy onto Prod?
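One detail worth checking first, though this is my reading of the error rather than a confirmed diagnosis: in the k=v syntax above there is a space after src=, so Ansible parses an empty src, which would explain the empty path in "file not found: ". A minimal corrected fetch, reusing the latest_file fact from the earlier tasks:

- name: Fetch the file from prod
  fetch:
    src: "{{ latest_file.path }}"   # no space after the key, or use YAML syntax as here
    dest: buffer/
    flat: yes
  delegate_to: "{{ prod_server }}"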
In such a case, the synchronize module might be the solution.
- name: Synchronize file from PROD to DEMO
  synchronize:
    src: "/tmp/test.txt"
    dest: "/tmp/test.txt"
    mode: push
  delegate_to: "{{ prod_server }}"
  when: "{{ demo_server }}"
which is "copying" a file from the production node to the demo node.
There are also a lot of answers under How to copy files between two nodes using Ansible.
I have faced a similar issue, where the copy task hangs indefinitely. Here is my example, which is not site-specific (it identifies the site and user using the options).
The easiest solution I have found is to scp directly using the shell module:
- name: scp files onto '{{ target_destination }}' looping for each file on '{{ target_source }}'
  shell: 'scp {{ hostvars[target_source].ansible_user }}@{{ hostvars[target_source].ansible_host }}:/opt/{{ hostvars[target_source].ansible_user }}/{{ item }} /opt/{{ destuser.stdout }}'
  loop: '{{ diffout.stdout_lines }}'
  when: diffout.stdout != ""
Some notes:
- "target_source" and "target_destination" are defined using the extra-vars option
- diffout is an earlier task comparing the folders on "Prod" and "Demo", and it lists any new files to copy
- this task is run on the "target_destination" (in my case Prod)
- hostvars[target_source] will look at the variables for the "target_source" host in the inventory
- this serves as a "pull" from Demo to Prod in my case; if your "Demo" doesn't have permissions, you could delegate the task to "Prod" and rearrange the scp to look for the "Demo" vars to push from "Prod" (see the sketch after these notes)
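One possible shape for that push variant, as a hedged sketch: it reuses the same extra-vars and the diffout register from above, and the exact paths would need adapting to your layout:

- name: scp files from '{{ target_source }}' to '{{ target_destination }}' by pushing from the source
  shell: 'scp /opt/{{ hostvars[target_source].ansible_user }}/{{ item }} {{ hostvars[target_destination].ansible_user }}@{{ hostvars[target_destination].ansible_host }}:/opt/{{ hostvars[target_destination].ansible_user }}/'
  loop: '{{ diffout.stdout_lines }}'
  when: diffout.stdout != ""
  delegate_to: "{{ target_source }}"   # run the scp on the source host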
Related
Using Ansible 2.9.12
Question: How do I configure Ansible to ensure the contents of a file are equal amongst at least 3 hosts, when the file is present on at least one host?
Imagine there are 3 hosts.
Host 1 does not have /file.txt.
Host 2 has /file.txt with contents hello.
Host 3 has /file.txt with contents hello.
Before the play is run, I am unaware whether the file is present or not. So the file could exist on host1, host2, or host3. But the file exists on at least one of the hosts.
How would I ensure that each time Ansible runs, the files across the hosts are equal? So in the end, Host 1 has the same file with the same contents as Host 2 or Host 3.
I'd like this to be dynamically set, instead of specifying the host names or group names, e.g. when: inventory_hostname == host1.
I am not expecting a check to see whether the contents of host 2 and host 3 are equal.
I do, however, want this to be set up in an idempotent fashion.
The play below does the job, I think
shell> cat pb.yml
- hosts: all
  tasks:
    - name: Get status.
      stat:
        path: /file.txt
      register: status

    - block:
        - name: Create dictionary status.
          set_fact:
            status: "{{ dict(keys|zip(values)) }}"
          vars:
            keys: "{{ ansible_play_hosts }}"
            values: "{{ ansible_play_hosts|
                        map('extract', hostvars, ['status','stat','exists'])|
                        list }}"

        - name: Fail. No file exists.
          fail:
            msg: No file exists
          when: status.values()|list is not any

        - name: Set reference to first host with file present.
          set_fact:
            reference: "{{ status|dict2items|
                           selectattr('value')|
                           map(attribute='key')|
                           first }}"

        - name: Fetch file.
          fetch:
            src: /file.txt
            dest: /tmp
          delegate_to: "{{ reference }}"
          run_once: true

        - name: Copy file if not exist
          copy:
            src: "/tmp/{{ reference }}/file.txt"
            dest: /file.txt
          when: not status[inventory_hostname]
But this doesn't check that the existing files are in sync. It would be safer to synchronize all of the hosts, I think:
    - name: Synchronize file
      synchronize:
        src: "/tmp/{{ reference }}/file.txt"
        dest: /file.txt
      when: inventory_hostname != reference
Q: "FATAL. could not find or access '/tmp/test-multi-01/file.txt on the Ansible controller. However, folder /tmp/test-multi-03 is present with the file.txt in it."
A: There is a problem with the fetch module when the task is delegated to another host. When TASK [Fetch file.] is delegated to test-multi-01, which is localhost in this case (changed: [test-multi-03 -> 127.0.0.1]), the file will be fetched from test-multi-01 but will be stored in /tmp/test-multi-03/file.txt. The conclusion is that the fetch module ignores delegate_to when it comes to creating host-specific directories (not reported yet).
As a workaround, it's possible to set flat: true and store the files in a specific directory. For example, add the variable sync_files_dir with the directory, set flat: true in fetch, and use the directory both to fetch and to copy the file:
- hosts: all
  vars:
    sync_files_dir: /tmp/sync_files
  tasks:
    - name: Get status.
      stat:
        path: /file.txt
      register: status

    - block:
        - name: Create dir for files to be fetched and synced
          file:
            state: directory
            path: "{{ sync_files_dir }}"
          delegate_to: localhost

        - name: Create dictionary status.
          set_fact:
            status: "{{ dict(keys|zip(values)) }}"
          vars:
            keys: "{{ ansible_play_hosts }}"
            values: "{{ ansible_play_hosts|
                        map('extract', hostvars, ['status','stat','exists'])|
                        list }}"

        - debug:
            var: status

        - name: Fail. No file exists.
          fail:
            msg: No file exists
          when: status.values()|list is not any

        - name: Set reference to first host with file present.
          set_fact:
            reference: "{{ status|dict2items|
                           selectattr('value')|
                           map(attribute='key')|
                           first }}"

        - name: Fetch file.
          fetch:
            src: /file.txt
            dest: "{{ sync_files_dir }}/"
            flat: true
          delegate_to: "{{ reference }}"
          run_once: true

        - name: Copy file if not exist
          copy:
            src: "{{ sync_files_dir }}/file.txt"
            dest: /file.txt
          when: not status[inventory_hostname]
We can achieve this by fetching the file from the hosts where it exists. The file(s) will then be available on the control machine. However, if the source file exists on more than one node, there will be no single source of truth.
Consider an inventory:
[my_hosts]
host1
host2
host3
Then the play below can fetch the file and use it to copy to all nodes.
# Fetch the file from the remote host if it exists
- hosts: my_hosts
  tasks:
    - stat:
        path: /file.txt
      register: my_file

    - fetch:
        src: /file.txt
        dest: /tmp/
      when: my_file.stat.exists

    - find:
        paths:
          - /tmp
        patterns: file.txt
        recurse: yes
      register: local_file
      delegate_to: localhost

    - copy:
        src: "{{ local_file.files[0].path }}"
        dest: /tmp
If multiple hosts have this file, it will be fetched to /tmp/{{ ansible_host }}/. Then, as we won't have a single source of truth, our best estimate is to use the first file found and apply it to all hosts.
Well, I believe the get_url module is pretty versatile: it allows local file paths or paths from a web server. Try it and let me know.
- name: Download files on all hosts
  hosts: all
  tasks:
    - name: Download file from a file path
      get_url:
        url: file:///tmp/file.txt
        dest: /tmp/
Edited answer:
(From documentation: For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to)
- name: Check that the file exists
  stat:
    path: /etc/file.txt
  register: stat_result

- name: Copy the file to other hosts by delegating the task to the source host
  synchronize:
    src: path/host
    dest: path/host
  delegate_to: my_source_host
  when: stat_result.stat.exists
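For instance, with the stat result from above and a concrete path filled in (the path and host name here are illustrative placeholders, not from the original answer), the second task could read:

- name: Copy /etc/file.txt to the other hosts from the source host
  synchronize:
    src: /etc/file.txt     # path on the delegated source host
    dest: /etc/file.txt    # path on each inventory host
  delegate_to: my_source_host
  when: stat_result.stat.exists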
I am trying to build an rsync-type backup on multiple servers. I would like to create a backup directory per server locally on my laptop and then back them up. If the directory does not exist, create it.
I start off by calling the playbook locally, so that I can create the directories locally, then change the playbook to the backup group. The issue is that I don't know how to populate the hostnames in the backup group. When I run the playbook below, the only directory that gets created is localhost. I need each host in the backup group to create a local directory and back it up. What would be the easiest way to make this work?
- hosts: localhost
  become: yes
  #strategy: free
  pre_tasks:
  vars:
    - backupDir: "/Users/user1/Desktop/Fusion/backups/{{ inventory_hostname }}/"
  roles:
  tasks:
    - name: Check if Backup Folder Exists.
      stat:
        path: "{{ backupDir }}"
      register: my_folder

    - name: "Ansible Create directory if not exists"
      file:
        path: "{{ backupDir }}"
        state: directory
      when: my_folder.stat.exists == false

- hosts: backup
  tasks:
    - name: Rsync Directories from Remote to Local
      synchronize:
        mode: pull
        src: "{{ item }}"
        dest: "{{ backupDir }}/{{ ansible_date_time.date }}.back"
      with_items:
        - "/home/user1/"
        - "/var/www/html/"
        - "/root/"
      when: my_folder.stat.exists
  handlers:
In that case, I think you're looking for the loop keyword.
Something like this:
- name: "Ansible Create directory if not exists"
file:
path: "{{ backupDir }}"
state: directory
when: my_folder.stat.exists == false
loop: {{ inventory_hostname }}
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html
In your inventory file you can create groups that tie back to your hosts you're calling on.
[localhost]
127.0.0.1
[backup]
host1
host2
host3
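Putting that together, a rough sketch of how the two pieces could combine (this assumes the [backup] group above and the asker's backupDir pattern; it is untested):

- hosts: backup
  tasks:
    - name: Create a local backup directory for each host in the play
      file:
        path: "/Users/user1/Desktop/Fusion/backups/{{ inventory_hostname }}/"
        state: directory
      delegate_to: localhost   # the directory is created on the laptop, one per host

    - name: Rsync directories from remote to local
      synchronize:
        mode: pull
        src: "{{ item }}"
        dest: "/Users/user1/Desktop/Fusion/backups/{{ inventory_hostname }}/{{ ansible_date_time.date }}.back"
      with_items:
        - /home/user1/
        - /var/www/html/
        - /root/

Because the play targets the backup group, inventory_hostname expands per host, so the local directories are created per server without an explicit loop.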
With mode=pull, I want to fetch and save remote files to the "dest" directory per hostname under the same top-level directory tree.
This is what I want:
src=/proc/cpuinfo (of every Ansible inventory host)
dest=/tmp/host1/cpuinfo, /tmp/host2/cpuinfo, /tmp/host3/cpuinfo, etc. (of the Ansible master)
If I do,
ansible all -m synchronize -a 'src=/proc/cpuinfo dest=/tmp/cpuinfo mode=pull'
the /tmp/cpuinfo file on the Ansible master (= dest) gets overwritten by every remote host's cpuinfo file, and I get to see only the very last one.
That is, I want a similar behavior as if I run
ansible all -m fetch -a 'src=/proc/cpuinfo dest=/tmp/cpuinfo'
Thank you in advance!
Steve
I doubt you can do this with a single ad-hoc command.
ansible all -m synchronize -a 'src=/proc/cpuinfo dest=/tmp/{{inventory_hostname}}/cpuinfo mode=pull'
could do the thing, but you must create the /tmp/<hostname> directories in advance, because rsync doesn't create non-existent directories for you. And you can't use Ansible facts (like ansible_hostname and ansible_fqdn) as parameters for ad-hoc module execution, only "predefined" variables (like inventory_hostname).
Update: playbook code
- file:
    path: "/tmp/{{ inventory_hostname }}"
    state: directory
  delegate_to: localhost

- synchronize:
    src: /proc/cpuinfo
    dest: "/tmp/{{ inventory_hostname }}/cpuinfo"
    mode: pull
(Original poster)
Another way to do it using the synchronize module only:
- synchronize:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    mode: pull
  with_items:
    - { src: '/proc/cpuinfo', dest: '/tmp/testing/{{ inventory_hostname }}/proc' }
    - { src: '/proc/meminfo', dest: '/tmp/testing/{{ inventory_hostname }}/proc' }
    - { src: '/etc/services', dest: '/tmp/testing/{{ inventory_hostname }}/etc' }
I'm trying to write a playbook that will rsync the folders from source to target after a database refresh. Our PeopleSoft HR application also requires a filesystem refresh along with the database. I'm new to Ansible and not an expert with Python. I've written this, but my playbook fails if any of the with_items entries doesn't exist. I'd like to use this playbook for all apps, and the folders may differ between apps. How can I skip the folders that don't exist in the source? I'm passing {{ target }} at the command line.
---
- hosts: '<hostname>'
  remote_user: <user>
  tasks:
    - shell: ls -l /opt/custhome/prod/
      register: folders

    - name: "Copy PROD filesystem to target"
      synchronize:
        src: "/opt/custhome/prod/{{ item }}"
        dest: "/opt/custhome/dev/"
        delete: yes
      when: "{{ folders == item }}"
      with_items:
        - 'src/cbl/'
        - 'sqr/'
        - 'bin/'
        - 'NVISION/'
In this case, NVISION doesn't exist in the HR app but it does in the FIN app. The playbook is failing because that folder doesn't exist in the source.
You can use the find module to find and store the paths to the source folders and then iterate over the results. Example playbook:
- hosts: '<hostname>'
  remote_user: <user>
  tasks:
    - name: find all directories
      find:
        file_type: directory
        paths: /opt/custhome/prod/
        patterns:
          - "src"
          - "sqr"
          - "bin"
      register: folders

    # debug to understand the contents of the {{ folders }} variable
    # - debug: msg="{{ folders }}"

    - name: "Copy PROD filesystem to target"
      synchronize:
        src: "{{ item.path }}"
        dest: "/opt/custhome/dev/"
        delete: yes
      with_items: "{{ folders.files }}"
You may want to use recurse to descend into subdirectories and use_regex to use the power of Python regex instead of shell globbing.
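For instance, a sketch of the same find task with those two options turned on (the regex pattern is illustrative, not from the original answer):

- name: find matching directories recursively
  find:
    file_type: directory
    paths: /opt/custhome/prod/
    recurse: yes        # descend into subdirectories
    use_regex: yes      # patterns are Python regexes matched against the name
    patterns:
      - '^(src|sqr|bin|NVISION)$'
  register: folders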
I am trying to copy the contents of the dist directory to the nginx directory.
- name: copy html file
  copy: src=/home/vagrant/dist/ dest=/usr/share/nginx/html/
But when I execute the playbook, it throws an error:
TASK [NGINX : copy html file] **************************************************
fatal: [172.16.8.200]: FAILED! => {"changed": false, "failed": true, "msg": "attempted to take checksum of directory:/home/vagrant/dist/"}
How can I copy a directory that has another directory and a file inside?
You could use the synchronize module. The example from the documentation:

# Synchronize two directories on one remote host.
- synchronize:
    src: /first/absolute/path
    dest: /second/absolute/path
  delegate_to: "{{ inventory_hostname }}"
This has the added benefit that it will be more efficient for large/many files.
EDIT: This solution worked when the question was posted. Later Ansible deprecated recursive copying with remote_src
The Ansible copy module by default copies files/dirs from the control machine to the remote machine. If you want to copy files/dirs within the remote machine and you have Ansible 2.0 or later, set remote_src to yes:

- name: copy html file
  copy: src=/home/vagrant/dist/ dest=/usr/share/nginx/html/ remote_src=yes directory_mode=yes
To copy a directory's content to another directory you CAN use Ansible's copy module:

- name: Copy content of directory 'files'
  copy:
    src: files/   # note the '/' <-- !!!
    dest: /tmp/files/
From the docs about the src parameter:
If (src!) path is a directory, it is copied recursively...
... if path ends with "/", only inside contents of that directory are copied to destination.
... if it does not end with "/", the directory itself with all contents is copied.
Resolved answer:
To copy a directory's content to another directory I use the following:

- name: copy consul_ui files
  command: cp -r /home/{{ user }}/dist/{{ item }} /usr/share/nginx/html
  with_items:
    - "index.html"
    - "static/"
It copies both items to the other directory. In the example, one of the items is a directory and the other is not. It works perfectly.
The simplest solution I've found to copy the contents of a folder without copying the folder itself is to use the following:

- name: Move directory contents
  command: cp -r /<source_path>/. /<dest_path>/
This resolves #surfer190's follow-up question:
Hmmm what if you want to copy the entire contents? I noticed that * doesn't work – surfer190 Jul 23 '16 at 7:29
* is a shell glob, in that it relies on your shell to enumerate all the files within the folder before running cp, while the . directly instructs cp to get the directory contents (see https://askubuntu.com/questions/86822/how-can-i-copy-the-contents-of-a-folder-to-another-folder-in-a-different-directo)
Ansible remote_src does not support recursive copying; see the remote_src description in the Ansible copy docs.
To recursively copy the contents of a folder and to make sure the task stays idempotent, I usually do it this way:
- name: get file names to copy
  command: "find /home/vagrant/dist -type f"
  register: files_to_copy

- name: copy files
  copy:
    src: "{{ item }}"
    dest: "/usr/share/nginx/html"
    owner: nginx
    group: nginx
    remote_src: True
    mode: 0644
  with_items:
    - "{{ files_to_copy.stdout_lines }}"
The downside is that the find command still shows up as 'changed'.
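If that 'changed' status bothers you, a common remedy (my suggestion, not part of the original answer) is to mark the read-only command as never changing anything:

- name: get file names to copy
  command: "find /home/vagrant/dist -type f"
  register: files_to_copy
  changed_when: false   # a read-only command, so never report it as changed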
The Ansible doc is quite clear (https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html); for the parameter src it says the following:
Local path to a file to copy to the remote server.
This can be absolute or relative.
If path is a directory, it is copied recursively. In this case, if path ends with "/",
only inside contents of that directory are copied to destination. Otherwise, if it
does not end with "/", the directory itself with all contents is copied. This behavior
is similar to the rsync command line tool.
So what you need is to skip the / at the end of your src path:
- name: copy html file
  copy: src=/home/vagrant/dist dest=/usr/share/nginx/html/
I found a workaround for recursive copying from remote to remote:
- name: List files in /usr/share/easy-rsa
  find:
    path: /usr/share/easy-rsa
    recurse: yes
    file_type: any
  register: find_result

- name: Create the directories
  file:
    path: "{{ item.path | regex_replace('/usr/share/easy-rsa','/etc/easy-rsa') }}"
    state: directory
    mode: "{{ item.mode }}"
  with_items:
    - "{{ find_result.files }}"
  when:
    - item.isdir

- name: Copy the files
  copy:
    src: "{{ item.path }}"
    dest: "{{ item.path | regex_replace('/usr/share/easy-rsa','/etc/easy-rsa') }}"
    remote_src: yes
    mode: "{{ item.mode }}"
  with_items:
    - "{{ find_result.files }}"
  when:
    - item.isdir == False
I was stuck on this for a whole day too, and finally found the solution in a shell command instead of copy: or command:, as below:
- hosts: remote-server-name
  gather_facts: no
  vars:
    src_path: "/path/to/source/"
    des_path: "/path/to/dest/"
  tasks:
    - name: Ansible copy files remote to remote
      shell: 'cp -r {{ src_path }}/. {{ des_path }}'
Strictly note that:
1. src_path and des_path end with the / symbol
2. in the shell command, src_path ends with ., which selects all the contents of the directory
3. I used my remote-server-name both in hosts: and in the Execute Shell section of Jenkins, instead of the remote_src: specifier in the playbook.
I guess it is good advice to run the command below in the Execute Shell section in Jenkins:
ansible-playbook copy-payment.yml -i remote-server-name
Below worked for me:

- name: Upload html app directory to Deployment host
  copy: src=/var/lib/jenkins/workspace/Demoapp/html dest=/var/www/ directory_mode=yes
I found this an ideal solution for copying a file from the Ansible server to a remote host.
The playbook YAML:
- hosts: localhost
  user: "{{ user }}"
  connection: ssh
  become: yes
  gather_facts: no
  tasks:
    - name: Creation of directory on remote server
      file:
        path: /var/lib/jenkins/.aws
        state: directory
        mode: 0755
      register: result

    - debug:
        var: result

    - name: get file names to copy
      command: "find conf/.aws -type f"
      register: files_to_copy

    - name: copy files
      copy:
        src: "{{ item }}"
        dest: "/var/lib/jenkins/.aws"
        owner: "{{ user }}"
        group: "{{ group }}"
        remote_src: True
        mode: 0644
      with_items:
        - "{{ files_to_copy.stdout_lines }}"
How to copy a directory with its subdirectories and files from the Ansible server to a remote host:
- name: copy nmonchart39 directory to {{ inventory_hostname }}
  copy:
    src: /home/ansib.usr.srv/automation/monitoring/nmonchart39
    dest: /var/nmon/data

Where:
- to copy the entire directory: src: /automation/monitoring/nmonchart39
- to copy only the directory contents: src: nmonchart39/
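And a sketch of the contents-only variant, where the trailing slash is the only change from the task above:

- name: copy nmonchart39 directory contents to {{ inventory_hostname }}
  copy:
    src: /home/ansib.usr.srv/automation/monitoring/nmonchart39/   # trailing '/' copies only the contents
    dest: /var/nmon/data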