Provision Ubuntu WSL Environment Using Ansible

I am able to provision Windows 10 using Ansible/Chocolatey by running Ansible in Ubuntu WSL. I am now trying to provision the Ubuntu WSL environment using that same Ansible instance. It seems to authenticate properly but I'm getting the following permission error when I try to provision Ubuntu WSL from Ubuntu WSL itself:
fatal: [localhost-wsl]: UNREACHABLE! => {"changed": false, "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /tmp/.ansible-${USER}/tmp/ansible-tmp-1594006839.9280272-267367995921233 `\" && echo ansible-tmp-1594006839.9280272-267367995921233=\"` echo /tmp/.ansible-${USER}/tmp/ansible-tmp-1594006839.9280272-267367995921233 `\" ), exited with
result 1, stdout output: ansible-tmp-1594006839.9280272-267367995921233=/tmp/.ansible-***/tmp/ansible-tmp-1594006839.9280272-267367995921233\n", "unreachable": true}
[WARNING]: Failure using method (v2_runner_on_unreachable) in callback plugin
(<ansible.plugins.callback.mail.CallbackModule object at 0x7feccbade550>): [Errno
111] Connection refused
Here's my inventory.yml:
all:
  children:
    ubuntu-wsl:
      hosts:
        localhost-wsl:
          ansible_port: 22
          ansible_host: localhost
          ansible_password: "{{ passwordd }}"
          ansible_user: "{{ usernamee }}"
And here's my ansible.cfg:
[defaults]
inventory = inventory.yml
forks = 50
transport = ssh
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/factcachingconnection
callback_whitelist = mail
fact_caching_timeout = 60480000
hash_behavior = merge
retry_files_enable = False
pipelining = True
host_key_checking = False
remote_tmp = /tmp/.ansible-${USER}/tmp
[winrm_connection]
server_cert_validation = ignore
transport = credssp,ssl
[ssh_connection]
transfer_method = piped
Can anyone spot an error or suggest a possible solution? I was unable to get it working using the local connection type either (the above uses SSH).
Thanks

The solution to this was upgrading the Ubuntu WSL environment to WSL 2. See https://learn.microsoft.com/en-us/windows/wsl/install-win10#update-to-wsl-2
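For reference, checking and converting the distribution from the Windows side looks roughly like this (run from PowerShell or cmd; the distro name "Ubuntu" is whatever wsl -l -v reports for your install):
wsl -l -v                      # list installed distros and the WSL version each one uses
wsl --set-version Ubuntu 2     # convert the named distro to WSL 2
wsl --set-default-version 2    # optional: make WSL 2 the default for future distros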

Related

Ansible synchronize module returning 127

I'm finding that the Ansible synchronize module keeps failing with error 127. It blames Python, but other commands have no issue, and I've got the latest module from ansible-galaxy:
fatal: [HostA]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
In the playbook I have
- ansible.posix.synchronize:
    archive: yes
    compress: yes
    delete: yes
    recursive: yes
    dest: "{{ libexec_path }}"
    src: "{{ libexec_path }}/"
    rsync_opts:
      - "--exclude=check_dhcp"
      - "--exclude=check_icmp"
ansible.cfg
[defaults]
timeout = 10
fact_caching_timeout = 30
host_key_checking = false
ansible_ssh_extra_args = -R 3128:127.0.0.1:3128
interpreter_python = auto_legacy_silent
forks = 50
I've tried removing ansible_ssh_extra_args without success. I use it when running apt so the remote hosts can tunnel back out to the internet, because they have no direct internet access.
I can run the sync manually without an issue. Pre-Ansible, I used to call rsync with:
sudo rsync -e 'ssh -ax' -avz --timeout=20 --delete -i --progress --exclude-from '/opt/openitc/nagios/bin/exclude.txt' /opt/openitc/nagios/libexec/* root@" . $ip . ":/opt/openitc/nagios/libexec"
I'm synchronising from Ubuntu 20.04 to Ubuntu 14.04
Can anyone see what I'm doing wrong, suggest a way to debug synchronize, or a way to call rsync manually?
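For reference, the interpreter hint in that error message is normally addressed by pointing ansible_python_interpreter at a Python that actually exists on the target. A minimal inventory sketch for the HostA from the error above (the python3 path is an assumption; on an Ubuntu 14.04 host it may be /usr/bin/python2.7 instead, or may need installing first):
all:
  hosts:
    HostA:
      ansible_python_interpreter: /usr/bin/python3   # assumed path; verify on the 14.04 host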

Ansible from a Mac to a remote DataDomain which has a special filesystem

Folks, I have an ansible.cfg:
[defaults]
remote_user = sysadmin
inventory = hosts.yaml
host_key_checking = False
local_tmp = /Users/juergen/Documents/DPSCodeAcademy/ansible/#dev/ddve-aws/ddve6-7.4
Further down, a playbook:
---
-
  hosts: ddve
  gather_facts: False
  tasks:
    - name: net show all
      command: net show all
...
The ddve host is a very special Linux box with its own command set, so regular Linux operations do not work. What I was trying to do is redirect the tmp dir to a local dir on my Mac and just fire a valid command on that ddve host, but this fails with:
fatal: [3.126.251.125]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp `\"&& mkdir \"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp/ansible-tmp-1611501684.866448-10774-109898410031575 `\" && echo ansible-tmp-1611501684.866448-10774-109898410031575=\"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp/ansible-tmp-1611501684.866448-10774-109898410031575 `\" ), exited with result 40, stdout output: That doesn't look like a valid command, displaying help...\n\nHelp is available on the following topics:\n\n adminaccess ddboost ntp\n alerts disk qos\n alias elicense quota\n authentication enclosure replication\n autosupport filesys smt\n cifs ifgroup snapshot\n client-group log snmp\n cloud migration storage\n compression mtree support\n config net system\n data-movement nfs user\n\nType \"help <topic>\" to view help for the given topic.\n\nType \"help <keyword>\" to search the commands for a specific keyword.\nFor example, \"help timezone\" shows all commands relating to timezones.\n\n", "unreachable": true}
PLAY RECAP ************************************************************************************************************************************************************************************
3.126.251.125 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
but an SSH login works:
❯ ssh sysadmin@3.126.251.125
EMC Data Domain Virtual Edition
Last login: Sun Jan 24 07:21:24 PST 2021 from 95.91.249.86 on ssh
Welcome to Data Domain OS 7.4.0.5-671629
----------------------------------------
sysadmin@ip-172-31-16-174# net show all
Active Network Configuration:
ethV0 Link encap:Ethernet HWaddr 02:C9:AF:87:AC:7C
inet addr:172.31.16.1
Can you help me understand what the error is telling me?
Ansible relies on being able to run python on the remote host. If "regular linux operations" won't work, this is probably the problem.
The simplest workaround is to use the raw module, which simply executes commands via ssh. This is the only module you would be able to use to target the remote host.
- name: net show all
  raw: net show all
It looks like the remote system is some sort of networking device. There are a number of Ansible modules designed to work with switches and other network devices that don't support regular Linux commands, or Python, etc. See the documentation for Ansible Network Automation for more information. Possibly there is a module for the device you are managing?
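A slightly fuller sketch of the raw approach, registering the output so it can be inspected afterwards (gather_facts must stay off, since fact gathering also needs Python on the target; the play reuses the ddve host group from the question):
- hosts: ddve
  gather_facts: False
  tasks:
    - name: net show all
      raw: net show all
      register: net_output

    - name: Print what the appliance returned
      debug:
        var: net_output.stdout_lines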

Failed to open session error

I am trying to use Ansible to connect to my switches and just do a show version. For some reason, when I run the Ansible playbook I keep getting the error "Failed to open session", and I don't know why. I am able to SSH directly to the box with no issues.
[Ansible.cfg]
enable_task_debugger=True
hostfile=inventory
transport=paramiko
host_key_checking=False
[inventory/hosts]
127.0.0.1 ansible_connection=local
[routers]
192.168.10.1
[test.yaml]
---
- hosts: routers
  gather_facts: true
  connection: paramiko
  tasks:
    - name: show run
      ios_command:
        commands:
          - show version
Then I try to run it like this:
ansible-playbook -vvv -i inventory test.yaml -u username -k
And this is the tail end of the error output:
EXEC /bin/sh -c 'echo ~ && sleep 0'
fatal: [192.168.10.1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to open session",
"unreachable": true
}
Ansible version is 2.4.2.0.
Please use:
connection: local
change - hosts: routers to - hosts: localhost
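Applied to the playbook above, that suggestion would look roughly like this; with connection: local, one common 2.4-era pattern is to hand the device address and credentials to the module through its provider argument (the credential variables here are placeholders):
---
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: show version
      ios_command:
        commands:
          - show version
        provider:
          host: 192.168.10.1
          username: "{{ cli_user }}"       # placeholder variable
          password: "{{ cli_password }}"   # placeholder variable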

Synchronize files/folders between two remote hosts

I have two web servers behind a load balancer that use Let's Encrypt for automatic SSL. web1 will handle the creation and renewal of the SSL keys and then synchronize those keys onto web2. Trying to use a variant of this isn't working for me:
- name: Sync SSL files from master to slave(s)
  synchronize:
    src: "{{ item }}"
    dest: "{{ item }}"
  when: inventory_hostname != 'web1'
  delegate_to: web1
  with_items:
    - /etc/nginx/ssl/letsencrypt/
    - /var/lib/letsencrypt/csrs/
    - /var/lib/letsencrypt/account.key
    - /etc/ssl/certs/lets-encrypt-x3-cross-signed.pem
That returns an immediate error of:
Permission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]\n"
Why isn't the ssh forwarding working once ansible logs into web1 or web2? When I execute this manually, it works fine:
ssh -A user@web1
#logged into web1 successfully
ssh user@web2
#logged into web2 successfully
Here is my ansible.cfg
[defaults]
filter_plugins = ~/.ansible/plugins/filter_plugins/:/usr/share/ansible_plugins/filter_plugins:lib/trellis/plugins/filter
host_key_checking = False
force_color = True
force_handlers = True
inventory = hosts
nocows = 1
roles_path = vendor/roles
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
pipelining = True
retries = 1
What I think is happening is that I am trying to copy the contents of a folder with root-only permissions, so sudo is being used, which switches my user; that is why I get permission denied, because the SSH key belongs to the non-root user. So it seems I need a way to read the contents of a root-only folder and send them across as a regular user. I might create a few steps: copy and change permissions as root, sync as non-root, then use sudo to fix permissions on the remote host.
That seems like a lot of steps, but I'm not sure synchronize can handle my use case.
UPDATED: Added more relevant error
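For what it's worth, one way to sidestep both the agent forwarding and the root-only permissions is to stage the files through the control machine with fetch and copy, both running with become; a rough sketch (the certificate file name is illustrative):
- name: Stage a cert file from web1 on the control machine
  fetch:
    src: /etc/nginx/ssl/letsencrypt/fullchain.pem    # illustrative file name
    dest: /tmp/le-staging/
    flat: yes
  become: yes
  when: inventory_hostname == 'web1'

- name: Push the staged file to the other web server(s)
  copy:
    src: /tmp/le-staging/fullchain.pem
    dest: /etc/nginx/ssl/letsencrypt/fullchain.pem
    owner: root
    group: root
    mode: "0600"
  become: yes
  when: inventory_hostname != 'web1'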

Ansible synchronize always prepends username@host

I'm running Ansible 2.3.1.0 on my local machine (macOS) and trying to achieve the following:
connecting to user1@host1
copying a file from user2@host2:/path/to/file to user1@host1:/tmp/path/to/file
On my local machine, with host1 as hosts and user1 as remote_user, I have:
- synchronize: mode=pull src=user2@host2:/path/to/file dest=/tmp/path/to/file
Wrong output:
/usr/bin/rsync (...) user1@host1:user2@host2:/path/to/file /tmp/path/to/file
Conclusion
I've been trying different options. I've debugged ansible. I can't understand what's wrong.
Help!
Edit 1
I've also tried adding delegate_to:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: host2
It gives:
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password,keyboard-interactive).\r\n", "unreachable": true}
And also:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: user2@host2
Which gives:
fatal: [host1 -> host2]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L host1:/path/to/file /tmp/path/to/file", "failed": true, "msg": "Permission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n", "rc": 255}
NB: ssh user1@host1 and then ssh user2@host2 works with SSH keys (no password required)
Please pay attention to these notes from the module's docs:
For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to.
The “local host” can be changed to a different host by using delegate_to. This enables copying between two remote hosts or entirely on one remote machine.
I guess you may want to try (assuming Ansible can connect to host2):
- synchronize:
    src: /path/to/file
    dest: /tmp/path/to/file
  delegate_to: host2
