Ansible synchronize module returning 127

I'm finding that the Ansible synchronize module keeps failing with error 127. It blames Python, but other commands have no issue, and I've got the latest module from ansible-galaxy:
fatal: [HostA]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
In the playbook I have
- ansible.posix.synchronize:
    archive: yes
    compress: yes
    delete: yes
    recursive: yes
    dest: "{{ libexec_path }}"
    src: "{{ libexec_path }}/"
    rsync_opts:
      - "--exclude=check_dhcp"
      - "--exclude=check_icmp"
ansible.cfg
[defaults]
timeout = 10
fact_caching_timeout = 30
host_key_checking = false
ansible_ssh_extra_args = -R 3128:127.0.0.1:3128
interpreter_python = auto_legacy_silent
forks = 50
I've tried removing the ansible_ssh_extra_args without success. I use it when running apt to tunnel back out to the internet, because the remote hosts have no internet access.
I can run the sync manually without an issue; pre-Ansible I used to call rsync with:
sudo rsync -e 'ssh -ax' -avz --timeout=20 --delete -i --progress --exclude-from '/opt/openitc/nagios/bin/exclude.txt' /opt/openitc/nagios/libexec/* root@" . $ip . ":/opt/openitc/nagios/libexec"
I'm synchronising from Ubuntu 20.04 to Ubuntu 14.04
Can anyone see what I'm doing wrong, suggest a way to debug synchronize, or suggest a way to call rsync manually?
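For what it's worth, rc=127 with "/usr/bin/python: not found" points at the interpreter rather than at rsync: synchronize executes its module code on the delegated host (the controller by default), and Ubuntu 20.04 ships python3 but no /usr/bin/python. A minimal sketch of pinning the interpreter explicitly, assuming python3 is acceptable on both ends (the group and path here are illustrative):

# inventory
[all:vars]
ansible_python_interpreter=/usr/bin/python3

Running the play with -vvv should also show the full rsync command line that synchronize builds, which gives you a way to re-run it by hand.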

Related

Provision Ubuntu WSL Environment Using Ansible

I am able to provision Windows 10 using Ansible/Chocolatey by running Ansible in Ubuntu WSL. I am now trying to provision the Ubuntu WSL environment using that same Ansible instance. It seems to authenticate properly but I'm getting the following permission error when I try to provision Ubuntu WSL from Ubuntu WSL itself:
fatal: [localhost-wsl]: UNREACHABLE! => {"changed": false, "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /tmp/.ansible-${USER}/tmp/ansible-tmp-1594006839.9280272-267367995921233 `\" && echo ansible-tmp-1594006839.9280272-267367995921233=\"` echo /tmp/.ansible-${USER}/tmp/ansible-tmp-1594006839.9280272-267367995921233 `\" ), exited with result 1, stdout output: ansible-tmp-1594006839.9280272-267367995921233=/tmp/.ansible-***/tmp/ansible-tmp-1594006839.9280272-267367995921233\n", "unreachable": true}
[WARNING]: Failure using method (v2_runner_on_unreachable) in callback plugin
(<ansible.plugins.callback.mail.CallbackModule object at 0x7feccbade550>): [Errno
111] Connection refused
Here's my inventory.yml:
all:
  children:
    ubuntu-wsl:
      hosts:
        localhost-wsl:
          ansible_port: 22
          ansible_host: localhost
          ansible_password: "{{ passwordd }}"
          ansible_user: "{{ usernamee }}"
And here's my ansible.cfg:
[defaults]
inventory = inventory.yml
forks = 50
transport = ssh
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/factcachingconnection
callback_whitelist = mail
fact_caching_timeout = 60480000
hash_behavior = merge
retry_files_enable = False
pipelining = True
host_key_checking = False
remote_tmp = /tmp/.ansible-${USER}/tmp
[winrm_connection]
server_cert_validation = ignore
transport = credssp,ssl
[ssh_connection]
transfer_method = piped
Can anyone spot an error or suggest a possible solution? I was unable to get it working using the local connection type either (the above uses SSH).
Thanks
The solution to this was upgrading the Ubuntu WSL environment to WSL 2. See https://learn.microsoft.com/en-us/windows/wsl/install-win10#update-to-wsl-2
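For reference, the upgrade itself is a one-liner from PowerShell; a sketch, assuming the distro is registered under the name "Ubuntu" (check with wsl -l -v):

wsl --set-version Ubuntu 2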

How to synchronize a file between two remote servers in Ansible?

The end goal is for me to copy file.txt from Host2 over to Host1. However, I keep getting the same error whenever I run the task. I have triple-checked my spacing and made sure I spelled everything correctly, but nothing seems to work.
Command to start the playbook:
ansible-playbook playbook_name.yml -i inventory/inventory_name -u username -k
My Code:
- hosts: Host1
  tasks:
    - name: Synchronization using rsync protocol on delegate host (pull)
      synchronize:
        mode: pull
        src: rsync://Host2.linux.us.com/tmp/file.txt
        dest: /tmp
      delegate_to: Host2.linux.us.com
Expected Result:
Successfully working
Actual Result:
fatal: [Host1.linux.us.com]: FAILED! => {"changed": false, "cmd": "sshpass", "msg": "[Errno 2] No such file or directory", "rc": 2}
I had the same problem as you; installing sshpass on the target host makes it work normally:
yum install -y sshpass
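If the delegate host is itself managed by Ansible, the same fix can be expressed as a task; a sketch, assuming the generic package module is usable on the target (it maps to yum here):

- name: Ensure sshpass is present on the delegate host
  become: yes
  package:
    name: sshpass
    state: present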

ansible behavior to specific sudo commands on managed nodes

Here to discuss Ansible's behavior when the user on managed nodes is given sudo privileges for specific commands only.
I have sudo privileges on the remote managed host [rm-host.company.com] for specific commands. Two of them are:
/bin/mkdir /opt/somedir/unit*
/bin/chmod 2775 /opt/somedir/unit*
PS: /opt/somedir already exists at the remote nodes.
My ansible control machine version:
ansible 2.7.10
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
The YAML code fails when I use the Ansible "file" module, even though I have sudo privileges for chmod and mkdir as listed above.
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  become: yes
  become_method: sudo
  file: path="/opt/somedir/{{ ENV_CHOSEN }}" state=directory mode=2775
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateWebAgentEnvDir is defined
    - CreateWebAgentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Runtime error:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
fatal: [rm-host.company.com]: FAILED! => {"changed": false, "module_stderr": "FIPS mode initialized\r\nShared connection to rm-host.company.com closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit @/u/joker/scripts/Ansible/playbooks/agent/plays/agent_Install.retry
PLAY RECAP ***************************************************************************************************************************************************************************************************************************************************
rm-host.company.com : ok=9 changed=2 unreachable=0 failed=1
But it succeeds when I use the command module, like so:
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  command: sudo /bin/chmod 2775 "/opt/somedir/{{ ENV_CHOSEN }}"
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateagentEnvDir is defined
    - CreateagentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Successful runtime debug output captured:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo
changed: [rm-host.company.com]
TASK [debug] *************************************************************************************************************************************************************************************************************************************************
ok: [rm-host.company.com] => {
    "ChangeDirPermission": {
        "changed": true,
        "cmd": [
            "sudo",
            "/bin/chmod",
            "2775",
            "/opt/somedir/unitc"
        ],
        "delta": "0:00:00.301570",
        "end": "2019-06-22 13:20:17.300266",
        "failed": false,
        "rc": 0,
        "start": "2019-06-22 13:20:16.998696",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "",
        "stdout_lines": [],
        "warnings": [
            "Consider using 'become', 'become_method', and 'become_user' rather than running sudo"
        ]
    }
}
Question:
How can I make this work without using the command module? I want to stick to Ansible core modules, using 'become' and 'become_method' rather than running sudo in the command module.
Note:
It works when sudo is enabled for ALL commands, but [ user ALL=(ALL) NOPASSWD: ALL ] cannot be granted on the remote host; it is not allowed by company policy for the group I am in.
The short answer is you can't. The way Ansible works is by executing Python scripts on the remote host (except for the raw, command and shell modules). See the docs.
The file module executes such a script with a long line of parameters. But Ansible will first become the required user, in this case root, by running sudo -H -S -n -u root /bin/sh in the remote SSH session (bear in mind that this command might be slightly different in your case).
Once the remote user has become root, Ansible uploads and executes the file.py script.
It looks like in your case you'll need to fall back to raw, command or shell for the steps that need the privileged commands.
To understand this a bit better, and to see the detail and order of the commands being executed, run ansible-playbook with the -vvvv parameter.
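To make the limitation concrete: for become to work, the sudoers rule would have to whitelist the command Ansible actually runs, which is the shell, not mkdir or chmod. Purely as an illustration (not a recommendation), it would have to look something like:

user ALL=(root) NOPASSWD: /bin/sh

and since a shell can run anything, that rule is equivalent to unrestricted root, which is exactly what the company policy forbids.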
I solved this issue by removing become_method and become_user from my playbook.
First, I specified the user in the inventory file using ansible_user=your_user. Then I removed become_method and become_user from my playbook, leaving just become: yes.
For more details about this answer, see this other answer.
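A sketch of the layout that answer describes (the host, user and variable names are placeholders):

# inventory
rm-host.company.com ansible_user=your_user

# playbook
- hosts: all
  become: yes
  tasks:
    - name: Ensure directory permissions are 2775
      file:
        path: "/opt/somedir/{{ ENV_CHOSEN }}"
        state: directory
        mode: "2775"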

Failed to open session error

I am trying to use Ansible to connect to my switches and just do a show version. For some reason, when I run the playbook I keep getting the error "Failed to open session", and I don't know why. I am able to SSH directly to the box with no issues.
[Ansible.cfg]
enable_task_debugger=True
hostfile=inventory
transport=paramiko
host_key_checking=False
[inventory/hosts]
127.0.0.1 ansible_connection=local
[routers]
192.168.10.1
[test.yaml]
---
- hosts: routers
  gather_facts: true
  connection: paramiko
  tasks:
    - name: show run
      ios_command:
        commands:
          - show version
Then I try to run it like this:
ansible-playbook -vvv -i inventory test.yaml -u username -k
And then this is the last line of the error
EXEC /bin/sh -c 'echo ~ && sleep 0'
fatal: [192.168.10.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to open session",
"unreachable": true
}
Ansible version is 2.4.2.0
Please use:
connection: local
and change - hosts: routers to - hosts: localhost
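A sketch of the playbook after that change; note that with connection: local, 2.4-era ios_command usually takes the device details in a provider dict (the values below are placeholders):

---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: show version
      ios_command:
        provider:
          host: 192.168.10.1
          username: username
          password: "{{ ansible_password }}"
        commands:
          - show version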

Ansible synchronize always prepends username@host

I'm running Ansible 2.3.1.0 on my local machine (macOS) and trying to achieve:
connecting to user1@host1
copying a file from user2@host2:/path/to/file to user1@host1:/tmp/path/to/file
I'm on my local machine, with host1 as hosts and user1 as remote_user:
- synchronize: mode=pull src=user2@host2:/path/to/file dest=/tmp/path/to/file
Wrong output:
/usr/bin/rsync (...) user1@host1:user2@host2:/path/to/file /tmp/path/to/file
Conclusion
I've been trying different options. I've debugged Ansible. I can't understand what's wrong.
Help!
Edit 1
I've also tried adding delegate_to:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: host2
It gives:
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password,keyboard-interactive).\r\n", "unreachable": true}
And also:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: user2@host2
Which gives:
fatal: [host1 -> host2]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L host1:/path/to/file /tmp/path/to/file", "failed": true, "msg": "Permission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n", "rc": 255}
NB: ssh user1@host1 and then ssh user2@host2 works with ssh keys (no password required)
Please pay attention to these notes from the module's docs:
For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to.
The “local host” can be changed to a different host by using delegate_to. This enables copying between two remote hosts or entirely on one remote machine.
I guess you may want to try (assuming Ansible can connect to host2):
- synchronize:
    src: /path/to/file
    dest: /tmp/path/to/file
  delegate_to: host2
