Ansible error: Failed to connect to the host via ssh

I have read other posts about the same error, but their solutions didn't work for me.
I'm quite new to Ansible and I'm trying to learn it with simple playbooks.
With the playbook below I want to install FoxitReader via win_chocolatey on a Windows host:
---
- hosts: 192.168.2.123
  gather_facts: no
  tasks:
    - name: manage foxitreader
      win_chocolatey:
        name: foxitreader
        state: present
But when I run this playbook with the command below:
ansible-playbook test_choco.yaml -i 192.168.2.123,
I get this error:
fatal: [192.168.2.123]: UNREACHABLE! => {"changed": false, "msg": "Failed
to connect to the host via ssh: ssh: connect to host 192.168.2.123 port 22:
Connection refused", "unreachable": true}
Any help will be appreciated.
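"Connection refused" on port 22 suggests Ansible is using its default SSH transport, but win_chocolatey targets a Windows host, and Windows hosts are typically managed over WinRM rather than SSH. A minimal inventory sketch (the group name and credentials are placeholders, not from the question):

```ini
# Sketch: point Ansible at WinRM instead of SSH for the Windows host.
# ansible_user / ansible_password are placeholders -- substitute your own.
[windows]
192.168.2.123

[windows:vars]
ansible_user=Administrator
ansible_password=ChangeMe
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```

This assumes the WinRM HTTPS listener (port 5986) is enabled on the Windows host and the pywinrm package is installed on the controller.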

Related

Ansible SSH key mismatch

I wrote the following Ansible playbook:
---
- name: Create VLAN
  hosts: exos_device
  connection: ansible.netcommon.network_cli
  vars:
    ansible_user: admin
    ansible_password: password
    ansible_network_os: community.network.exos
  tasks:
    - name: Create VLAN 4050
      community.network.exos_config:
        lines:
          - create vlan TESTVLAN tag 4050
        match: exact
        save_when: always
With it I'm trying to create a new VLAN on an Extreme Networks switch (ExtremeXOS version 16.2.5.4), but when I execute it I keep getting the following error:
fatal: [10.12.2.10]: FAILED! => {
"changed": false,
"module_stderr": "ssh connection failed: ssh connect failed: kex error : no match for method server host key algo: server [ssh-rsa], client [rsa-sha2-512,rsa-sha2-256,ssh-ed25519,ecdsa-sha2-nistp521,ecdsa-sha2-nistp384,ecdsa-sha2-nistp256]",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
I think this error indicates that there is a mismatch between the SSH key algorithms that the client (Ansible controller) and the server (the EXOS machine) support.
What is the best way to resolve this issue?
I've already tried specifying an algorithm inside the ansible.cfg file like this:
[defaults]
inventory = inventory.ini
ssh_args = -oKexAlgorithms=ssh-rsa
But with no success.
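The error message compares host key algorithms (the server offers ssh-rsa; the client accepts only rsa-sha2-*, ed25519, and ecdsa variants), so the relevant OpenSSH option would be HostKeyAlgorithms rather than KexAlgorithms, which controls key exchange. One approach worth trying, assuming the connection honors the controller's SSH client configuration, is to re-enable ssh-rsa for just that device in ~/.ssh/config:

```
# ~/.ssh/config on the Ansible controller -- sketch; host IP taken from the error output
Host 10.12.2.10
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedKeyTypes +ssh-rsa
```

Note that network_cli may connect through libssh (ansible-pylibssh) or paramiko instead of the OpenSSH binary, in which case this file may not be consulted; which transport is in use depends on how ansible.netcommon is configured.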

ad-hoc ping command for ansible

I am very new to Ansible; I'm trying to learn it by automating ACI.
I installed Ansible on my MacBook; the version is 2.10.
My inventory is below:
[APIC]
sandbox ansible_host=sandboxapicdc.cisco.com username=admin password=XXX
mpod ansible_host=10.1.1.100 ansible_ssh_user=admin ansible_ssh_pass=cisco
The second ansible_host is pingable from my machine, but I receive the following when I try to ping it via the Ansible ping module.
ansible all -m ping -i inventory
mpod | FAILED! => {
"msg": "to use the 'ssh' connection type with passwords, you must install the sshpass
program"
}
sandbox | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: connect to host
sandboxapicdc.cisco.com port 22: Operation timed out",
"unreachable": true
}
Try adding these to your variables to treat the hosts like NX-OS devices:
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: nxos
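Applying that suggestion, a sketch of the inventory with the variables set for the whole group (note that username=/password= in the original are not variable names Ansible recognizes; ansible_user/ansible_password are):

```ini
[APIC]
sandbox ansible_host=sandboxapicdc.cisco.com ansible_user=admin ansible_password=XXX
mpod ansible_host=10.1.1.100 ansible_user=admin ansible_password=cisco

[APIC:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=nxos
```

Separately, the first error is independent of the connection type: when ansible_ssh_pass is used, the sshpass program must be installed on the controller for password-based SSH to work.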

Ansible synchronize module delegate_to could not resolve hostname

I am using the synchronize module to transfer a file from serverA to serverB. My serverA and serverB hosts are:
[serverB]
172.20.13.201 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.202 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.203 ansible_user=root ansible_ssh_pass="hi12#$"
[serverA]
172.20.5.121 ansible_user=root ansible_ssh_pass="hi12#$"
My playbook is:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: serverA
But it errors with:
TASK [Copy Remote-To-Remote] ***************************************************
fatal: [172.20.13.201]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.202]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.203]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
Why synchronize can't resolve hostname 'servera'? In my host file, it's serverA.
serverA is the name of a group, not a host, so there is no host named "serverA" to resolve. (A group might also contain several hosts, in which case it would be unclear which one to delegate to.)
Try delegating to 172.20.5.121 directly:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: 172.20.5.121
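Alternatively (a sketch; the alias servera1 is made up for illustration), give the single host in the serverA group an inventory alias and delegate to that alias, which keeps the IP out of the playbook:

```ini
# 'servera1' is a hypothetical inventory alias for the host in the serverA group
[serverA]
servera1 ansible_host=172.20.5.121 ansible_user=root ansible_ssh_pass="hi12#$"
```

The task can then use delegate_to: servera1.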

get ansible to wait for ssh connection

How do you get Ansible to wait for or retry SSH connections? I have an Ansible task that runs govc to upload a VM into vCenter, and right after that I SSH into the machine to run commands, like this:
- hosts: vcenter
  gather_facts: false
  tasks:
    - name: Download GOVC
      get_url:
        url: https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_linux_amd64.gz
        dest: /home/gkeadmin/govc_linux_amd64.gz
But running it right after, I get this: fatal: [139.178.66.91]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 1.2.3.4 port 22: Operation timed out", "unreachable": true}
If I rerun it with the --retry option, it then continues. It seems the host just needs some time before SSH is available. How do I wait for an SSH connection to be established in Ansible?
Ansible supports retries, but at the task level (together with until), not at the play level. Maybe this can help you:
---
- name: test
  hosts: <hosts_name>
  tasks:
    - name: task
      <module_name>:
        ...
      register: result
      retries: 2
      until: result is succeeded
Note that retries/until covers failing tasks; it does not retry a host that is unreachable at connection time.
You can add a play at the top of your playbook to wait for it, for example:
---
- name: wait for ssh
  hosts: vcenter
  gather_facts: false
  tasks:
    - wait_for:
        port: 22
        host: '{{ inventory_hostname }}'
      delegate_to: localhost

- name: my playbook
  hosts: vcenter
  gather_facts: false
  tasks:
    - name: Download GOVC
      [ ... etc ... ]
https://docs.ansible.com/ansible/latest/modules/wait_for_module.html#examples
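The same module documentation also covers wait_for_connection, which waits until Ansible's own transport (SSH here) can actually run a module on the host, rather than only checking that the TCP port is open. A sketch for the same play (the timeout values are arbitrary):

```yaml
- name: my playbook
  hosts: vcenter
  gather_facts: false
  tasks:
    - name: Wait until the host is reachable over the configured transport
      wait_for_connection:
        delay: 10      # seconds to wait before the first check
        timeout: 300   # give up after five minutes
    - name: Download GOVC
      get_url:
        url: https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_linux_amd64.gz
        dest: /home/gkeadmin/govc_linux_amd64.gz
```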

Ansible tries to connect to VM IP before executing the role creating the VM

I'm trying to develop an Ansible playbook to generate a VM. I wrote a myvm role containing the tasks that orchestrate vmware_guest; these tasks use delegate_to: localhost, which vmware_guest requires.
Then, I added my yet-to-be-created VM to the hosts file:
[myvms]
myvm1
and extended site.yml with:
- hosts: myvms
  roles:
    - myvm
Now, when I run:
ansible-playbook site.yml -i hosts --limit myvm1
it fails with:
fatal: [myvm1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection reset by 192.168.10.13 port 22\r\n", "unreachable": true}
It seems Ansible tries to connect to the VM's IP before it ever reads the role that creates the VM and delegates to localhost. Adding delegate_to to site.yml fails, however.
How can I fix my Ansible scripts to properly generate the VM for me?
Add gather_facts: false to the play.
- hosts: myvms
  gather_facts: false
  roles:
    - myvm
By default, Ansible connects to the target machines at the start of a play and runs a script that collects data (facts), which is why the connection attempt happens before any role runs.
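If later tasks do need facts from the newly created VM, they can be gathered explicitly once it exists; a sketch, under the assumption that the VM accepts SSH connections by the time the role finishes:

```yaml
- hosts: myvms
  gather_facts: false
  roles:
    - myvm            # creates the VM (vmware_guest, delegated to localhost)
  post_tasks:
    - name: Gather facts now that the VM exists and is reachable
      setup:
```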
