Sync files between two remote hosts using ansible sync module - ansible

When I try to use the synchronize module in push mode, I get the error below:
fatal: [node-master]: FAILED! => {"changed": false, "cmd": "sshpass -d20 /usr/bin/rsync --delay-updates -F --compress --archive --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L /home/hadoop/test rsync://node1/home/hadoop/test", "msg": "#ERROR: Unknown module 'home'\nrsync error: error starting client-server protocol (code 5) at main.c(1649) [sender=3.1.2]\n", "rc": 5}
Sample YAML for the push task:
- name: Copy the file from node-master to node1 using Method Push
  ansible.posix.synchronize:
    mode: push
    src: /home/hadoop/test
    dest: rsync://node1/home/hadoop/test
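The rsync://node1/... destination tells rsync to contact an rsync daemon on node1 and to treat the first path component (home) as a daemon module, which is what produces "Unknown module 'home'". Unless you actually run a configured rsync daemon, a plain path (letting synchronize transfer over SSH) is the usual fix — a minimal sketch:

```yaml
- name: Copy the file from node-master to node1 using Method Push
  ansible.posix.synchronize:
    mode: push
    src: /home/hadoop/test
    dest: /home/hadoop/test   # plain path: transfer goes over SSH, no rsync daemon required
```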


Ansible synchronize module returning 127

I'm finding that the ansible synchronize module keeps failing with error 127. It blames Python, but other commands have no issue, and I've got the latest module from ansible-galaxy.
fatal: [HostA]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
In the playbook I have:
- ansible.posix.synchronize:
    archive: yes
    compress: yes
    delete: yes
    recursive: yes
    dest: "{{ libexec_path }}"
    src: "{{ libexec_path }}/"
    rsync_opts:
      - "--exclude=check_dhcp"
      - "--exclude=check_icmp"
ansible.cfg
[defaults]
timeout = 10
fact_caching_timeout = 30
host_key_checking = false
ansible_ssh_extra_args = -R 3128:127.0.0.1:3128
interpreter_python = auto_legacy_silent
forks = 50
I've tried removing the ansible_ssh_extra_args without success; I use this with apt to tunnel back out to the internet, because the remote hosts have no internet access.
I can run rsync manually without an issue. Pre-Ansible, I used to call rsync with:
sudo rsync -e 'ssh -ax' -avz --timeout=20 --delete -i --progress --exclude-from '/opt/openitc/nagios/bin/exclude.txt' /opt/openitc/nagios/libexec/* root@" . $ip . ":/opt/openitc/nagios/libexec"
I'm synchronising from Ubuntu 20.04 to Ubuntu 14.04
Can anyone see what I'm doing wrong, suggest a way to debug synchronize, or a way to call rsync manually?
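One thing worth noting: synchronize executes its module code on the task's "local host" (the controller, unless delegated), and Ubuntu 20.04 ships only python3 with no /usr/bin/python. A hedged fix, assuming /usr/bin/python3 exists on whichever host the module actually runs on, is to pin the interpreter explicitly in ansible.cfg:

```
[defaults]
interpreter_python = /usr/bin/python3
```

The same can be set per host or group via the ansible_python_interpreter inventory variable.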

How to synchronize a file between two remote servers in Ansible?

The end goal is for me to copy file.txt from Host2 over to Host1. However, I keep getting the same error whenever I run the task. I have triple-checked my spacing and made sure I spelled everything correctly, but nothing seems to work.
Command to start the playbook:
ansible-playbook playbook_name.yml -i inventory/inventory_name -u username -k
My Code:
- hosts: Host1
  tasks:
    - name: Synchronization using rsync protocol on delegate host (pull)
      synchronize:
        mode: pull
        src: rsync://Host2.linux.us.com/tmp/file.txt
        dest: /tmp
      delegate_to: Host2.linux.us.com
Expected Result:
Successfully working
Actual Result:
fatal: [Host1.linux.us.com]: FAILED! => {"changed": false, "cmd": "sshpass", "msg": "[Errno 2] No such file or directory", "rc": 2}
I had the same problem as you. Installing sshpass on the target host made it work normally:
yum install -y sshpass
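If you'd rather have the playbook guarantee the dependency, a hedged sketch (assuming the delegate host uses a package manager the generic package module supports) that installs sshpass up front:

```yaml
- name: Ensure sshpass is available where rsync is launched
  package:
    name: sshpass
    state: present
  become: yes
  delegate_to: Host2.linux.us.com   # the delegate is the "local host" that invokes rsync/sshpass
```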

Ansible synchronize always prepends username@host

I'm running Ansible 2.3.1.0 on my local machine (macOS) and trying to achieve:
connecting to user1@host1
copying a file from user2@host2:/path/to/file to user1@host1:/tmp/path/to/file
I'm on my local machine, with host1 as hosts and user1 as remote_user:
- synchronize: mode=pull src=user2@host2:/path/to/file dest=/tmp/path/to/file
Wrong output:
/usr/bin/rsync (...) user1@host1:user2@host2:/path/to/file /tmp/path/to/file
Conclusion
I've been trying different options. I've debugged ansible. I can't understand what's wrong.
Help!
Edit 1
I've also tried adding delegate_to:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: host2
It gives:
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password,keyboard-interactive).\r\n", "unreachable": true}
And also:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: user2@host2
Which gives:
fatal: [host1 -> host2]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L host1:/path/to/file /tmp/path/to/file", "failed": true, "msg": "Permission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n", "rc": 255}
NB: ssh user1@host1 and then ssh user2@host2 works with SSH keys (no password required)
Please pay attention to these notes from the module's docs:
For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to.
The “local host” can be changed to a different host by using delegate_to. This enables copying between two remote hosts or entirely on one remote machine.
I guess, you may want to try (assuming Ansible can connect to host2):
- synchronize:
    src: /path/to/file
    dest: /tmp/path/to/file
  delegate_to: host2
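Spelling the semantics out: with delegate_to: host2, host2 becomes the "local host", so the default push mode copies from host2 to the inventory host (host1). A commented sketch of the full play:

```yaml
- hosts: host1             # inventory host = "destination host"
  remote_user: user1
  tasks:
    - synchronize:         # default mode is push: delegate -> inventory host
        src: /path/to/file
        dest: /tmp/path/to/file
      delegate_to: host2   # host2 is now the "local host" the task originates on
```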

Ansible synchronize module permissions issue

Remote Server's "/home"
Remote server users:
1. bitnami
2. take02
3. take03
4. take04
But the local host has only the ubuntu user.
I would like to copy the remote host's /home directory with Ansible, keeping the owner information.
This is my playbook:
---
- hosts: discovery_bitnami
  gather_facts: no
  become: yes
  tasks:
    - name: "Creates directory"
      local_action: >
        file path=/tmp/{{ inventory_hostname }}/home/ state=directory
    - name: "remote-to-local sync test"
      become_method: sudo
      synchronize:
        mode: pull
        src: /home/
        dest: /tmp/{{ inventory_hostname }}/home
        rsync_path: "sudo rsync"
Playbook result is:
PLAY [discovery_bitnami] *******************************************************
TASK [Creates directory] *******************************************************
ok: [discovery_bitnami -> localhost]
TASK [remote-to-local sync test] ***********************************************
fatal: [discovery_bitnami]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_LightsailDefaultPrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"bitnami@54.236.34.197:/home/\" \"/tmp/discovery_bitnami/home\"", "failed": true, "msg": "rsync: failed to set times on \"/tmp/discovery_bitnami/home/.\": Operation not permitted (1)\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/bitnami\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take02\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take03\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take04\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [generator=3.1.1]\n", "rc": 23}
to retry, use: --limit @/home/ubuntu/work/esc_discovery/ansible_test/ansible_sync_test.retry
PLAY RECAP *********************************************************************
discovery_bitnami : ok=1 changed=0 unreachable=0 failed=1
But the failed "cmd" works fine when run with sudo on the console:
$ sudo /usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_PrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' bitnami@54.236.34.197:/home/ /tmp/discovery_bitnami/home
How do I run the task with sudo?
P.S. If I remove become: yes, then everything is owned by "ubuntu".
I guess you are out of options for the synchronize module. It runs locally without sudo and it's hardcoded.
On the other hand, in the first task you create a directory under /tmp as root, so the permissions are limited to the root user. As a result you get the "permission denied" errors.
Either:
refactor the code so that you don't need root permissions for the local destination (or add become: no to the "Creates directory" task); since you use the archive option, which implies permission preservation, this might not be an option;
or:
create your own version of the synchronize module and add sudo to the front of the cmd variable;
or:
use the command module with sudo /usr/bin/rsync as the call.
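A minimal sketch of the first option, assuming root ownership isn't actually required on the controller side: create the destination without become, so the local user running rsync can write into it:

```yaml
- name: Creates directory
  file:
    path: /tmp/{{ inventory_hostname }}/home/
    state: directory
  delegate_to: localhost
  become: no   # directory stays owned by the connecting user, so the pull can write into it
```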
Mind that the synchronize module is a non-standard one; there have been changes in the past regarding the accounts used, and requests for further changes.
On top of everything, the current documentation for the module is pretty confusing. On the one hand it states strongly:
The user and permissions for the synchronize dest are those of the remote_user on the destination host or the become_user if become=yes is active.
But in another place it only hints that the source and destination meanings are reversed when using pull mode:
In pull mode the remote host in context is the source.
So for the case in this question, the following passage is the relevant one, even though it incorrectly says "src":
The user and permissions for the synchronize src are those of the user running the Ansible task on the local host (or the remote_user for a delegate_to host when delegate_to is used).

Ansible script module - Control socket permission denied

I'm new to Ansible and trying to run a local script on a remote node using the script module. My task is defined as follows:
- name: Initial setup
  script: ../../../initial_setup.sh
  become: yes
When I run the playbook I get the error below but I'm not clear on what the actual problem is. Does this indicate a problem connecting to the node or a problem transferring the script?
fatal: [default]: FAILED! => {
    "changed": true,
    "failed": true,
    "invocation": {
        "module_args": {
            "_raw_params": "../../../initial_setup.sh"
        },
        "module_name": "script"
    },
    "rc": 127,
    "stderr": "Control socket connect(/tmp): Permission denied\r\nControlSocket /tmp already exists, disabling multiplexing\r\nConnection to 127.0.0.1 closed.\r\n",
    "stdout": "/bin/sh: 1: /home/ubuntu/.ansible/tmp/ansible-tmp-1482161914.64-107588947758469/initial_setup.sh: not found\r\n",
    "stdout_lines": [
        "/bin/sh: 1: /home/ubuntu/.ansible/tmp/ansible-tmp-1482161914.64-107588947758469/initial_setup.sh: not found"
    ]
}
tl;dr
Ensure -o ControlMaster=auto is defined in ssh_args in ansible.cfg:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
The following error is related to SSH connection multiplexing:
Control socket connect(/tmp): Permission denied
ControlSocket /tmp already exists, disabling multiplexing
Connection to 127.0.0.1 closed
It tried to create a socket directly at /tmp, not inside /tmp; some other SSH parameter defined somewhere could play a role here.
Setting the value of ControlMaster to auto causes SSH to create a new master connection should the existing one not exist (or have problems, as here?).
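Relatedly, pointing the control socket at a proper per-connection path (rather than at /tmp itself) avoids the collision. A hedged sketch using Ansible's control_path setting (the exact template is an assumption; check the defaults for your Ansible version):

```
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
control_path = %(directory)s/%%h-%%r
```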
