How to synchronize a file between two remote servers in Ansible?

The end goal is for me to copy file.txt from Host2 over to Host1. However, I keep getting the same error whenever I run the playbook. I have triple-checked my spacing and made sure I spelled everything correctly, but nothing seems to work.
Command to start the playbook:
ansible-playbook playbook_name.yml -i inventory/inventory_name -u username -k
My Code:
- hosts: Host1
  tasks:
    - name: Synchronization using rsync protocol on delegate host (pull)
      synchronize:
        mode: pull
        src: rsync://Host2.linux.us.com/tmp/file.txt
        dest: /tmp
      delegate_to: Host2.linux.us.com
Expected Result:
Successfully working
Actual Result:
fatal: [Host1.linux.us.com]: FAILED! => {"changed": false, "cmd": "sshpass", "msg": "[Errno 2] No such file or directory", "rc": 2}

I had the same problem as you. Installing sshpass on the target host makes it work normally:
yum install -y sshpass
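If you would rather let Ansible install it, here is a minimal sketch (assuming the delegate host is yum-based and already reachable with a working connection; this play is an assumption, not part of the original question):

- hosts: Host2.linux.us.com
  become: yes
  tasks:
    - name: Install sshpass so synchronize can drive password-based rsync over ssh
      yum:
        name: sshpass
        state: present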

Related

Ansible synchronize module returning 127

I'm finding the Ansible synchronize module keeps failing with error 127. It blames Python, but other commands are having no issue, and I've got the latest module from ansible-galaxy.
fatal: [HostA]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
In the playbook I have
- ansible.posix.synchronize:
    archive: yes
    compress: yes
    delete: yes
    recursive: yes
    dest: "{{ libexec_path }}"
    src: "{{ libexec_path }}/"
    rsync_opts:
      - "--exclude=check_dhcp"
      - "--exclude=check_icmp"
ansible.cfg
[defaults]
timeout = 10
fact_caching_timeout = 30
host_key_checking = false
ansible_ssh_extra_args = -R 3128:127.0.0.1:3128
interpreter_python = auto_legacy_silent
forks = 50
I've tried removing the ansible_ssh_extra_args without success; I use it when running apt to tunnel back out to the internet, because the remote hosts have no internet access.
I can run the sync manually without an issue; before Ansible I used to call rsync with:
sudo rsync -e 'ssh -ax' -avz --timeout=20 --delete -i --progress --exclude-from '/opt/openitc/nagios/bin/exclude.txt' /opt/openitc/nagios/libexec/* root@" . $ip . ":/opt/openitc/nagios/libexec"
I'm synchronising from Ubuntu 20.04 to Ubuntu 14.04
Can anyone see what I'm doing wrong, a way to debug the synchronize or a way to call rsync manually?
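The rc=127 with "/usr/bin/python: not found" usually means the configured interpreter path does not exist on that host, as the error message itself hints. A hedged thing to try is pointing ansible_python_interpreter at a Python that actually exists on the Ubuntu 14.04 box, for example in the inventory (the group name below is a placeholder and the interpreter path is an assumption; adjust it to whatever is really installed there):

[nagios_hosts]
HostA ansible_python_interpreter=/usr/bin/python3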

Failed to open session error

I am trying to use Ansible to connect to my switches and just do a show version. For some reason, when I run the Ansible playbook I keep getting the error "Failed to open session", and I don't know why. I am able to ssh directly to the box with no issues.
[Ansible.cfg]
enable_task_debugger=True
hostfile=inventory
transport=paramiko
host_key_checking=False
[inventory/hosts]
127.0.0.1 ansible_connection=local
[routers]
192.168.10.1
[test.yaml]
---
- hosts: routers
  gather_facts: true
  connection: paramiko
  tasks:
    - name: show run
      ios_command:
        commands:
          - show version
Then I try to run it like this:
ansible-playbook -vvv -i inventory test.yaml -u username -k
And this is the last part of the error:
EXEC /bin/sh -c 'echo ~ && sleep 0'
fatal: [192.168.10.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to open session",
"unreachable": true
}
Ansible version is 2.4.2.0
Please use:
connection: local
and change - hosts: routers to - hosts: localhost
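A sketch of the play rewritten along those lines for Ansible 2.4-era ios_command (the provider block and the credential variables are assumptions, added only so the local connection still knows which device to talk to):

---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: show version
      ios_command:
        commands:
          - show version
        provider:
          host: 192.168.10.1
          username: "{{ ios_user }}"
          password: "{{ ios_pass }}"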

Ansible synchronize always prepends username@host

I'm running Ansible 2.3.1.0 on my local machine (macOS) and trying to achieve:
connecting to user1@host1
copying a file from user2@host2:/path/to/file to user1@host1:/tmp/path/to/file
I'm on my local machine, with host1 as hosts and user1 as remote_user:
- synchronize: mode=pull src=user2@host2:/path/to/file dest=/tmp/path/to/file
Wrong output:
/usr/bin/rsync (...) user1@host1:user2@host2:/path/to/file /tmp/path/to/file
Conclusion
I've been trying different options. I've debugged ansible. I can't understand what's wrong.
Help!
Edit 1
I've also tried adding delegate_to:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: host2
It gives:
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password,keyboard-interactive).\r\n", "unreachable": true}
And also:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: user2@host2
Which gives:
fatal: [host1 -> host2]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L host1:/path/to/file /tmp/path/to/file", "failed": true, "msg": "Permission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n", "rc": 255}
NB: ssh user1@host1 and then ssh user2@host2 works with ssh keys (no password required)
Please pay attention to these notes from the module's docs:
For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to.
The “local host” can be changed to a different host by using delegate_to. This enables copying between two remote hosts or entirely on one remote machine.
I guess, you may want to try (assuming Ansible can connect to host2):
- synchronize:
    src: /path/to/file
    dest: /tmp/path/to/file
  delegate_to: host2
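For context, a minimal sketch of how that task could sit inside a play targeting host1 (host names and paths follow the question; everything else is assumed):

- hosts: host1
  remote_user: user1
  tasks:
    - name: Copy the file from host2 onto host1
      synchronize:
        src: /path/to/file
        dest: /tmp/path/to/file
      delegate_to: host2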

Ansible Local connection script argument path failed to detect

I have an Ansible playbook which calls 2 roles. Role 1 runs on local and has a script that takes a file path argument, /tmp/inputfile/input.csv. The playbook looks like:
- hosts: "{{my_extra_var_IP}}"
  connection: local
  roles:
    - prereq
Roles task:
- name: Copy script to local
  copy:
    src: files/csv_to_files.sh
    dest: /tmp/input_dir/
    mode: 0777

- command: ls -ltr /tmp/input_dir

- command: cat /tmp/input_dir/inputFile.csv

#- name: run csv to yml script
#  script: /tmp/input_dir/csv_to_files.sh /tmp/input_dir/inputFile.csv
#  become_user: niceha
The output of the first 2 tasks is successful and as expected, but on the 3rd and 4th steps I get this error:
FAILED! => {"changed": true, "cmd": ["cat", "/tmp/input_dir/inputFile.csv"], "delta": "0:00:00.007141", "end": "2017-06-09 15:53:58.673450", "failed": true, "rc": 1, "start": "2017-06-09 15:53:58.666309", "stderr": "cat: /tmp/input_dir/inputFile.csv: No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
I am running this job from Tower, which uses userA. I also tried to change the users, but no luck.
The indenting does not look right; it should be:
- name: Copy script to local
  copy:
    src: files/csv_to_files.sh
    dest: /tmp/input_dir/
    mode: 0777
OK, so after much reading I got to know the code is fine: it runs from the console but not from Ansible Tower, and just to cross-check, it also worked from other directory paths.
Ansible Tower actually uses the /tmp/ directory as its staging area, so any changes/tasks in the playbook that are meant to run in /tmp won't take effect.
Changing my input file path from /tmp to /home/user did the trick for me.
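A minimal sketch of the adjusted tasks with the files moved out of /tmp (the /home/user/input_dir location is an assumption that simply follows the answer above):

- name: Copy script outside of Tower's /tmp staging area
  copy:
    src: files/csv_to_files.sh
    dest: /home/user/input_dir/
    mode: 0777

- name: Run csv to yml script
  script: /home/user/input_dir/csv_to_files.sh /home/user/input_dir/inputFile.csv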

Ansible synchronize module permissions issue

Remote Server's "/home" (screenshot omitted).
Remote Server User
1. bitnami
2. take02
3. take03
4. take04
But the local host only has the ubuntu user.
I would like to copy the "/home" directory of the REMOTE HOST with Ansible, keeping the owner information.
This is my playbook:
---
- hosts: discovery_bitnami
  gather_facts: no
  become: yes
  tasks:
    - name: "Creates directory"
      local_action: >
        file path=/tmp/{{ inventory_hostname }}/home/ state=directory

    - name: "remote-to-local sync test"
      become_method: sudo
      synchronize:
        mode: pull
        src: /home/
        dest: /tmp/{{ inventory_hostname }}/home
        rsync_path: "sudo rsync"
Playbook result is:
PLAY [discovery_bitnami] *******************************************************
TASK [Creates directory] *******************************************************
ok: [discovery_bitnami -> localhost]
TASK [remote-to-local sync test] ***********************************************
fatal: [discovery_bitnami]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_LightsailDefaultPrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"bitnami@54.236.34.197:/home/\" \"/tmp/discovery_bitnami/home\"", "failed": true, "msg": "rsync: failed to set times on \"/tmp/discovery_bitnami/home/.\": Operation not permitted (1)\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/bitnami\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take02\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take03\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take04\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [generator=3.1.1]\n", "rc": 23}
to retry, use: --limit @/home/ubuntu/work/esc_discovery/ansible_test/ansible_sync_test.retry
PLAY RECAP *********************************************************************
discovery_bitnami : ok=1 changed=0 unreachable=0 failed=1
But the failed "cmd" works fine when run with sudo on the console:
$ sudo /usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_PrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' bitnami@54.236.34.197:/home/ /tmp/discovery_bitnami/home
How do I run the task with sudo?
PS: if I remove become: yes, then all permissions are "ubuntu".
I guess you are out of options for the synchronize module. It runs locally without sudo and it's hardcoded.
On the other hand, in the first task you create a directory under /tmp as root, so the permissions are limited to the root user. As a result you get a "Permission denied" error.
Either:
refactor the code so that you don't need root permissions for the local destination (or add become: no to the "Creates directory" task); since you use the archive option, which implies permission preservation, this might not be an option;
or:
create your own version of the synchronize module and add sudo to the front of the cmd variable;
or:
use the command module with sudo /usr/bin/rsync as the call (see the sketch below).
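A rough sketch of that last option (the paths, key file, and host follow the question; the exact flag set and passwordless sudo for the user on the control machine are assumptions):

- name: Pull /home from the remote host with rsync running under sudo locally
  command: >
    sudo /usr/bin/rsync --archive --compress
    --rsh 'ssh -i /home/ubuntu/.ssh/red_LightsailDefaultPrivateKey.pem -o StrictHostKeyChecking=no'
    --rsync-path='sudo rsync'
    bitnami@54.236.34.197:/home/ /tmp/{{ inventory_hostname }}/home
  delegate_to: localhost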
Mind that the synchronize module is a non-standard one; there have been changes in the past regarding the accounts it uses, as well as requests for further changes.
On top of everything, the current documentation for the module is pretty confusing. On one hand it states strongly:
The user and permissions for the synchronize dest are those of the remote_user on the destination host or the become_user if become=yes is active.
But in another place it only hints that the source and destination meaning is reversed when using pull mode:
In pull mode the remote host in context is the source.
So for the case from this question, the following passage is relevant, even though it incorrectly states the "src":
The user and permissions for the synchronize src are those of the user running the Ansible task on the local host (or the remote_user for a delegate_to host when delegate_to is used).
