Ansible SSH key mismatch

I wrote the following Ansible playbook:
---
- name: Create VLAN
  hosts: exos_device
  connection: ansible.netcommon.network_cli
  vars:
    ansible_user: admin
    ansible_password: password
    ansible_network_os: community.network.exos
  tasks:
    - name: Create VLAN 4050
      community.network.exos_config:
        lines:
          - create vlan TESTVLAN tag 4050
        match: exact
        save_when: always
with which I'm trying to create a new VLAN on an Extreme Networks switch (ExtremeXOS version 16.2.5.4). But when I execute it, I keep getting the following error:
fatal: [10.12.2.10]: FAILED! => {
"changed": false,
"module_stderr": "ssh connection failed: ssh connect failed: kex error : no match for method server host key algo: server [ssh-rsa], client [rsa-sha2-512,rsa-sha2-256,ssh-ed25519,ecdsa-sha2-nistp521,ecdsa-sha2-nistp384,ecdsa-sha2-nistp256]",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
I think this error indicates that there is a mismatch between the SSH key algorithms that the client (Ansible controller) and the server (the EXOS machine) support.
What is the best way to resolve this issue?
I've already tried specifying an algorithm inside the ansible.cfg file like this:
[defaults]
inventory = inventory.ini
ssh_args = -oKexAlgorithms=ssh-rsa
But with no success.
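Two details stand out here. The mismatch in the error is on the server host key algorithm, not the key exchange: the device only offers the legacy ssh-rsa host key algorithm, which newer clients disable by default. So KexAlgorithms is the wrong option (ssh-rsa is not a kex method), and with connection: ansible.netcommon.network_cli the transport is libssh or paramiko rather than the OpenSSH binary, so ssh_args is never consulted anyway. A commonly suggested workaround - a sketch, assuming the transport honours ~/.ssh/config - is to re-accept the legacy algorithm for this one device:

Host 10.12.2.10
    HostKeyAlgorithms +ssh-rsa

Alternatively, forcing the paramiko transport (ansible_network_cli_ssh_type: paramiko) is sometimes suggested, since paramiko still negotiates ssh-rsa host keys on its own.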

Related

Ansible SSH and Playbook

I'm running Ubuntu 20.10 and Ansible 2.9.9.
I have EVE-NG with Cisco VIRL routers on IOS 15.6.
First I ran into Ubuntu being unable to SSH to the Cisco routers, due to "no matching key exchange method found. Their offer: diffie-hellman-group1-sha1". I found a workaround using ~/.ssh/config, following a link I found.
~/.ssh/config file:
Host 192.168.100.2
    KexAlgorithms +diffie-hellman-group1-sha1
Host 192.168.100.3
    KexAlgorithms +diffie-hellman-group1-sha1
Now I am trying to deploy my first playbook.
When I try to run the playbook I get the following error:
fatal: [CSR-1]: FAILED! => {"changed": false, "msg": "Connection type ssh is not valid for this module"}
fatal: [CSR-2]: FAILED! => {"changed": false, "msg": "Connection type ssh is not valid for this module"}
I can SSH from Ubuntu to each router now that I use ~/.ssh/config, but I don't know how to make Ansible use the ~/.ssh/config file.
I tried ssh_args = -F /home/a/.ssh/config (the location of the SSH config file) in ansible.cfg, but cannot seem to get it working.
I have spent several hours Googling around, but cannot find a fix.
ansible.cfg:
[defaults]
inventory = ./host
host_key_checking = False
retry_files_enabled = False
gathering = explicit
interpreter_python = /usr/bin/python3
ssh_args = -F /home/n/etc/ssh/ssh_config.d/*.conf
Playbook:
- hosts: CSR_Routers
  tasks:
    - name: Show Version
      ios_command:
        commands: show version
all.yml:
ansible_user: "cisco"
ansible_ssh_pass: "cisco"
ansible_connection: "ssh"
ansible_network_os: "iso"
ansbile_connection: "network_cli"
If you look into the documentation, you shouldn't use ssh as the connection type for this module, but network_cli. So you don't talk to the device via default SSH, but via network_cli. Put that as a host-specific var into your inventory.
all:
  hosts:
    CSR_01:
      ansible_host: 192.168.100.2
      ansible_connection: "network_cli"
      ansible_network_os: "ios"
      ansible_user: "cisco"
      ansible_password: "cisco"
      ansible_become: yes
      ansible_become_method: enable
      ansible_become_password: "cisco"
  children:
    CSR_Routers:
      hosts:
        CSR_01:
Based on your playbook, this inventory contains a group "CSR_Routers", and the only device in it is CSR_01 with IP 192.168.100.2. The connection type of that device is not ssh but network_cli.
Remove the ssh_args from your ansible.cfg.
Remove ansible_ssh_pass, ansible_connection, ansible_user, ansible_network_os, and the misspelled ansbile_connection from your all.yml. These settings should be host-specific (be aware of other devices in your inventory that are not IOS devices).
So you call your playbook with:
ansible-playbook -i inventory.yaml playbook.yml
Also - have a look at the IOS-specific documentation in Ansible.
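If you prefer to keep these settings out of the inventory file itself, a possible layout (a sketch of mine, not part of the answer above) is a group variables file scoped to just the IOS routers, so other device types in the inventory are unaffected:

group_vars/CSR_Routers.yml:
ansible_connection: network_cli
ansible_network_os: ios
ansible_user: cisco
ansible_password: cisco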
SSH FIX - found after posting on Reddit:
nano /etc/ssh/ssh_config
KexAlgorithms +diffie-hellman-group1-sha1
Ciphers +aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc
systemctl restart ssh
nano /etc/ansible/ansible.cfg
[defaults]
host_key_checking=False
timeout = 30
Video with details
https://www.youtube.com/playlist?app=desktop&list=PLov64niDpWBId50D_wuraYWuQ-d02PiR1

Ansible error: Failed to connect to the host via ssh

I have read other links with the same error but they didn't work for me.
I'm quite new to Ansible and I'm trying to learn it with simple codes.
With the playbook below I want to install FoxitReader with win_chocolatey on a Windows host:
---
- hosts: 192.168.2.123
  gather_facts: no
  tasks:
    - name: manage foxitreader
      win_chocolatey:
        name: foxitreader
        state: present
but when I run this playbook with the command below:
ansible-playbook test_choco.yaml -i 192.168.2.123,
I get this error:
fatal: [192.168.2.123]: UNREACHABLE! => {"changed": false, "msg": "Failed
to connect to the host via ssh: ssh: connect to host 192.168.2.123 port 22:
Connection refused", "unreachable": true}
Any help will be appreciated.
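No fix was posted for this one, but the error itself points at the likely cause: win_* modules target Windows hosts, which Ansible normally manages over WinRM rather than SSH, and "Connection refused ... port 22" shows the default SSH connection was attempted. A minimal sketch, assuming WinRM is enabled on the Windows host and pywinrm is installed on the controller (the file name and credentials are placeholders):

windows.ini:
[win]
192.168.2.123

[win:vars]
ansible_connection=winrm
ansible_user=Administrator
ansible_password=secret
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore

Then run the playbook against that inventory: ansible-playbook test_choco.yaml -i windows.ini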

Run playbook against Openstack with Ansible Tower

I am trying to run a simple playbook against Openstack in admin tenant using Ansible Tower, both running on localhost. Here is the script:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example
I have configured the credentials, the job template, and a test inventory in Tower (screenshots omitted).
With this configuration, I am getting this error:
TASK [Security Group] **********************************************************
13:35:48
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Any idea what this can be? It looks like a credential problem.
Untick Enable Privilege Escalation in the job template - it's not necessary. Your OpenStack privilege/authorisation will be tied to your OpenStack credentials (admin in this case), not to the user running the Ansible task.
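For reference, a sketch of the playbook-level equivalent (my addition, not part of the answer): the sudo error appears because the job template wraps module execution in privilege escalation, which the play can also disable explicitly. Unticking the option in Tower remains the cleaner fix, though.

- hosts: localhost
  gather_facts: no
  connection: local
  become: false   # same effect as leaving "Enable Privilege Escalation" unticked
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example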

Ansible tries to connect to VM IP before executing the role creating the VM

I'm trying to develop an Ansible script to generate a VM. I wrote a myvm role that contains the tasks orchestrating vmware_guest; those tasks use delegate_to: localhost, which vmware_guest requires.
Then I added my VM-to-be to the hosts file:
[myvms]
myvm1
and extended site.yml with:
- hosts: myvms
  roles:
    - myvm
Now, when I run:
ansible-playbook site.yml -i hosts --limit myvm1
it fails with:
fatal: [myvm1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection reset by 192.168.10.13 port 22\r\n", "unreachable": true}
It seems Ansible tries to connect to the VM's IP before running the role that creates the VM, where the tasks delegate to localhost. Adding delegate_to to site.yml fails, however.
How can I fix my Ansible scripts to properly generate the VM for me?
Add gather_facts: false to the play.
- hosts: myvms
  gather_facts: false
  roles:
    - myvm
By default, Ansible connects to the target machines and runs a script that collects data (facts) before any play tasks run; disabling fact gathering skips that initial connection.
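For context, a hypothetical sketch of the role task in question (the vCenter variables and parameters are illustrative, not taken from the question): the module runs on the controller against vCenter via delegate_to: localhost, while myvm1 remains the inventory target.

roles/myvm/tasks/main.yml:
- name: Create the VM in vCenter
  vmware_guest:
    hostname: "{{ vcenter_host }}"   # hypothetical connection variables
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    name: "{{ inventory_hostname }}"
    template: base-template
    state: poweredon
  delegate_to: localhost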

SSH-less LXC containers using Ansible

I am new to Ansible, and I am trying to use Ansible on some LXC containers.
My problem is that I don't want to install SSH on my containers.
What I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After understanding that chifflier's connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dove into the code, and I understood that the plugin never gets the information that the host I am talking to is a container (the code never reaches that point).
My current setup:
{Ansible host}---|ssh|---{vm}---|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
my group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
"unreachable": true
}
Ansible version is: 2.3.1.0
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using the lxc connection plugin.
I updated my inventory and it now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "192.168.28.12 is not running"
}
This is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg, and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
I'm trying something similar.
I want to configure a host over SSH using Ansible, and run LXC containers on that host which are also configured using Ansible:

ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers; there is no way to get it working through SSH.
At the moment the only way seems to be a direct SSH connection, or an SSH connection through the first host:

ansible control node --ssh--> container-a

or

ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd installed in the container, but the second way doesn't need port forwarding or multiple IP addresses.
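A sketch of how the second variant is often wired up (my assumption, not from this thread): OpenSSH's ProxyJump (OpenSSH 7.3+) lets the controller reach the container's sshd through host-a, configured per host via ansible_ssh_common_args:

host_vars/container-a.yml:
ansible_host: container-a   # as resolvable from host-a's network
ansible_ssh_common_args: '-o ProxyJump=user@host-a'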
Did you get a working solution?
