I would like to copy local files to remote hosts. The local files sit in folders named after the remote hosts (as shown in the screenshot). How can I send them to the corresponding remote host (the *.pem files in centos8-8 should go to centos8-8 only, etc.)? I tried using group['clients'] as the loop, but it does not work.
Thanks.
Use the copy module with the magic variable inventory_hostname:
- name: Copy dir
  ansible.builtin.copy:
    src: path/{{ inventory_hostname }}
    dest: pathdest
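For example, assuming the per-host folders live next to the playbook (a hypothetical layout such as files/centos8-8/*.pem), a fuller version of that task could look like the sketch below; the trailing slash on src copies only the folder's contents rather than the folder itself:

- name: Copy the host-specific *.pem files to their own host
  ansible.builtin.copy:
    src: "files/{{ inventory_hostname }}/"   # hypothetical layout: files/centos8-8/, files/centos8-9/, ...
    dest: /etc/pki/tls/certs/                # hypothetical destination, adjust as needed

No loop over groups['clients'] is needed: inventory_hostname resolves to the current target host, so each host only receives its own files.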
I'm trying to move everything under /opt/* to a new location on the remote server. I've tried this using command to run rsync directly, as well as using both the copy and the synchronize Ansible modules. In all cases I get the same error message saying:
"msg": "rsync: link_stat \"/opt/*\" failed: No such file or directory
If I run the command listed in the "cmd" part of the Ansible error message directly on my remote server, it works without error. I'm not sure why Ansible is failing.
Here is the current attempt using synchronize:
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/*
    dest: /mnt/opt/*
  delegate_to: "{{ inventory_hostname }}"
You should drop the wildcards; that is a common mistake:
UPDATE
Thanks to the user Zeitounator, I managed to do it with synchronize.
The advantage of using synchronize instead of the copy module is performance; it is much faster if you have a lot of files to copy.
- name: move /opt to new partition
  become: true
  synchronize:
    src: /opt/
    dest: /mnt/opt
  delegate_to: "{{ inventory_hostname }}"
So basically the initial answer was right, but you need to remove the wildcards "*" and the trailing slash on the dest path.
Also, you should add a task that deletes the old files under /opt/ once the sync has finished; see the sketch below.
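A minimal, untested sketch of that cleanup, assuming everything directly under /opt/ should be removed after a successful sync:

- name: List the old contents of /opt
  become: true
  ansible.builtin.find:
    paths: /opt
    file_type: any
  register: opt_contents

- name: Remove the old contents of /opt
  become: true
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ opt_contents.files }}"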
I'm trying to figure out how to copy or write the contents of a slurped variable to a file, preferably on the remote host. If this is not possible, what's the cleanest way to do it in steps?
I have something like this:
- name: Load r user public key
  slurp:
    src: *path*
  register: slurped_r_key

- name: Decode r key
  set_fact:
    r_content: "{{ slurped_r_key.content | b64decode }}"
I want to get the contents of {{ r_content }} into a file in the remote machines that are part of an inventory group. If I cannot do that directly, what's the best way? Should I copy the contents to a local file and then scp the file over to the remote machines?
Thanks in advance!
To copy the variable to a file on the remote host, you can use the copy module's content parameter:
- name: copy
  copy:
    content: "{{ r_content }}"
    dest: /tmp/testing
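As a side note, if the key file lives on the controller anyway, the slurp/b64decode steps can be skipped entirely by reading it with the file lookup (the path below is only a placeholder):

- name: Write the public key read from the controller
  copy:
    content: "{{ lookup('file', '/path/on/controller/r_user.pub') }}"
    dest: /tmp/testing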
I am trying to emulate the scenario of copying a local file from one directory to another directory on the same machine, but the Ansible copy module always looks for a remote server.
Here is the code I am using:
- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: /opt/scripts/{ customer_folder }}
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}
I am running this playbook like this:
ansible-playbook ab_deploy.yml --extra-vars "customer=ab"
So there are two problems I am facing:
It should create a directory called ab under /opt/scripts/, but it creates a folder literally named { customer_folder }}; it is not using ab as the directory name.
Second, as I read the documentation, copy only works for copying files from the local machine to a remote machine, but what I want is simply to copy from local to local.
How can I achieve this? It might be silly; I am just trying things out.
Please suggest.
I solved it: I used cmd under the shell module and then it worked.
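For reference, a corrected sketch of the original playbook (paths taken from the question): the first problem was simply the missing opening brace in {{ customer_folder }}, and with connection: local the copy module reads the src and writes the dest on the same machine, so a shell workaround is not strictly required:

- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: "/opt/scripts/{{ customer_folder }}"
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: "/opt/scripts/{{ customer_folder }}/"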
I need to analyze nginx log files from multiple hosts.
First, I want to copy them to a directory on a single host.
For example, I want to copy the nginx error log file from 6 hosts to a directory on a destination host.
The 6 hosts' IPs are 192.168.0.2 - 192.168.0.7. The nginx error log path is /var/log/nginx/nginx_error.log. I want to copy them to the /var/log/nginx_error directory on the destination host 192.168.0.10, with every file named after its source host IP. How can I write the playbook with Ansible?
[serverB]
192.168.0.10
[serverA]
192.168.0.2
192.168.0.3
192.168.0.4
192.168.0.5
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote (from serverA to serverB)
      synchronize: src=/var/log/nginx/nginx_error.log dest=/var/log/nginx_error/
      delegate_to: serverA
The problem is that I don't know how to name the dest file using the source IP address.
I don't have an elegant solution, just a workaround. To name the dest file after the source IP, you can use the ansible_hostname or inventory_hostname variable defined by Ansible.
Make sure that you've created the /tmp/nginx_logs/ directory on the Ansible controller before executing.
---
- name: Copy Remote
  hosts: serverA, serverB
  tasks:
    - name: Copy the files from the Source Machine (serverA) to Ansible Controller
      synchronize:
        mode: pull
        src: /var/log/nginx/nginx_error.log
        dest: "/tmp/nginx_logs/test_{{ ansible_hostname }}.txt"
      when: "inventory_hostname in groups['serverA']"

    - name: Copy log files to the destination server
      copy:
        src: /tmp/nginx_logs/
        dest: /var/log/nginx_error/
      when: "inventory_hostname in groups['serverB']"
I have to copy files that are target-specific. I have stored these files on my machine, named after their target hostnames.
Example:
/tmp/Server1.cfg
/tmp/Server2.cfg
host file has
Server1
Server2
When my playbook runs for Server1 it should copy Server1.cfg.
When my playbook runs for Server2 it should copy Server2.cfg.
How can I achieve this ?
Thanks.
PS: Please be explicit as I am still a toddler in ansible
You may want to read some chapters at docs.ansible.com:
Additionally, inventory_hostname is the name of the hostname as configured in Ansible’s inventory host file. This can be useful for when you don’t want to rely on the discovered hostname ansible_hostname or for other mysterious reasons.
So, in your case:
- copy:
    src: "{{ inventory_hostname }}"
    dest: "/tmp/{{ inventory_hostname }}.cfg"