Ansible SSH and Playbook

I'm running Ubuntu 20.10 with Ansible 2.9.9.
I have EVE-NG with Cisco VIRL routers on IOS 15.6.
First I found that Ubuntu was unable to SSH to the Cisco routers, due to "no matching key exchange method found. Their offer: diffie-hellman-group1-sha1". I found a workaround using the ~/.ssh/config file, based on a guide I found online.
~/.ssh/config file:
Host 192.168.100.2
    KexAlgorithms=+diffie-hellman-group1-sha1
Host 192.168.100.3
    KexAlgorithms=+diffie-hellman-group1-sha1
Now I am trying to deploy my first playbook.
When I try to run the playbook I get the following error:
fatal: [CSR-1]: FAILED! => {"changed": false, "msg": "Connection type ssh is not valid for this module"}
fatal: [CSR-2]: FAILED! => {"changed": false, "msg": "Connection type ssh is not valid for this module"}
I can SSH from Ubuntu to each router thanks to ~/.ssh/config, but I don't know how to make Ansible use that ~/.ssh/config file.
In ansible.cfg I tried ssh_args = -F /home/a/.ssh/config (the location of the SSH config file), but I cannot get it working.
I have spent several hours Googling around, but cannot find a fix.
ansible.cfg:
[defaults]
inventory = ./host
host_key_checking = False
retry_files_enabled = False
gathering = explicit
interpreter_python = /usr/bin/python3
ssh_args = -F /home/n/etc/ssh/ssh_config.d/*.conf
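Note that when the plain ssh connection plugin is in use, ssh_args is read from the [ssh_connection] section of ansible.cfg, not from [defaults]; a minimal sketch, with an illustrative path:
[ssh_connection]
# tell the OpenSSH client which client config file to load
ssh_args = -F /home/a/.ssh/config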
Playbook:
- hosts: CSR_Routers
  tasks:
    - name: Show Version
      ios_command:
        commands: show version
all.yml:
ansible_user: "cisco"
ansible_ssh_pass: "cisco"
ansible_connection: "ssh"
ansible_network_os: "iso"
ansbile_connection: "network_cli"

As the documentation says, don't use ssh as the connection type for this module, use network_cli. In other words, you don't talk to the device via the default ssh connection plugin but via network_cli. Put that as a host-specific variable into your inventory.
all:
  hosts:
    CSR_01:
      ansible_host: 192.168.100.2
      ansible_connection: "network_cli"
      ansible_network_os: "ios"
      ansible_user: "cisco"
      ansible_password: "cisco"
      ansible_become: yes
      ansible_become_method: enable
      ansible_become_password: "cisco"
  children:
    CSR_Routers:
      hosts:
        CSR_01:
Based on your playbook, this inventory contains a group "CSR_Routers", and the only device in it is CSR_01 with IP 192.168.100.2. The connection type of that device is not ssh but network_cli.
Remove the ssh_args from your ansible.cfg.
Remove ansible_ssh_pass, ansible_connection, ansible_user, ansible_network_os and the misspelled ansbile_connection from your all.yml. These should be host-specific (be aware of other devices in your inventory that are not IOS devices).
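If several routers share the same values, one common option is to scope them to the group used by the playbook instead of repeating them per host; a minimal sketch (file name assumed, not from the question):
# group_vars/CSR_Routers.yml -- applies only to hosts in the CSR_Routers group
ansible_connection: network_cli
ansible_network_os: ios
ansible_user: cisco
ansible_password: cisco
ansible_become: yes
ansible_become_method: enable
ansible_become_password: cisco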
So you call your playbook with:
ansible-playbook -i inventory.yaml playbook.yml
Also, have a look at the IOS-specific documentation in Ansible.

SSH FIX - after posting on Reddit:
nano /etc/ssh/ssh_config
    KexAlgorithms +diffie-hellman-group1-sha1
    Ciphers +aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc
systemctl restart ssh
nano /etc/ansible/ansible.cfg
    [defaults]
    host_key_checking=False
    timeout = 30
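For the interactive OpenSSH client, the same legacy options can also be kept scoped to the lab subnet in ~/.ssh/config instead of being enabled system-wide, along the lines of the per-host entries shown earlier:
Host 192.168.100.*
    KexAlgorithms +diffie-hellman-group1-sha1
    Ciphers +aes256-cbc,aes192-cbc,aes128-cbc,3des-cbc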
Video with details
https://www.youtube.com/playlist?app=desktop&list=PLov64niDpWBId50D_wuraYWuQ-d02PiR1

Related

Ansible: Host localhost is unreachable

At my job there is a playbook, developed in the following way, that is executed by Ansible Tower.
This is the file that Ansible Tower executes, which in turn calls the fusion role:
report.yaml:
- hosts: localhost
  gather_facts: false
  connection: local
  tasks:
    - name: "Execute"
      include_role:
        name: 'fusion'
main.yaml from the fusion role:
- name: "hc fusion"
  include_tasks: "hc_fusion.yaml"
hc_fusion.yaml from the fusion role:
- name: "FUSION"
  shell: ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars 'fusion_ip_ha={{item.ip}} fusion_user={{item.username}} fusion_pass={{item.password}} fecha="{{fecha.stdout}}" fusion_ansible_become_user={{item.ansible_become_user}} fusion_ansible_become_pass={{item.ansible_become_pass}}'
fusion.yaml from the fusion role:
- hosts: localhost
  vars:
    ansible_become_user: "{{fusion_ansible_become_user}}"
    ansible_become_pass: "{{fusion_ansible_become_pass}}"
  tasks:
    - name: Validate
      ignore_unreachable: yes
      shell: service had status
      delegate_to: "{{fusion_user}}#{{fusion_ip_ha}}"
      become: True
      become_method: su
This is a summary of the entire run.
Previously it worked, but now it throws the following error:
stdout: PLAY [localhost]
TASK [Validate]
fatal: [localhost -> gandalf#10.66.173.14]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '10.66.173.14' (RSA) to the list of known hosts.\ngandalf#10.66.173.14: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)", "skip_reason": "Host localhost is unreachable"}
When I execute ansible-playbook roles/fusion/tasks/fusion.yaml --extra-vars XXXXXXXX from the command line as the awx user, it works.
I also validated the connection with the ssh command from the server where Ansible Tower is running to the host I want to reach, and it lets me connect without asking for a password as the awx user.
fusion.yaml does not explicitly specify a connection plugin, so the default ssh type is being used. For localhost this approach usually brings a number of related problems (SSH keys, known_hosts, loopback interfaces, etc.). If you need to run tasks on localhost, you should define the local connection plugin, just like in your report.yaml playbook.
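A minimal sketch of that change in fusion.yaml (the rest of the play from the question left unchanged):
- hosts: localhost
  connection: local        # run the play itself without SSH, just like report.yaml
  vars:
    ansible_become_user: "{{ fusion_ansible_become_user }}"
    ansible_become_pass: "{{ fusion_ansible_become_pass }}"
  tasks:
    # ... tasks from the question unchanged ...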
Additionally, as Zeitounator mentioned, running one Ansible playbook from another with the shell module is really bad practice. Please avoid this. Ansible has a number of mechanisms for code re-use (includes, imports, roles, etc.).
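For example, instead of shelling out to ansible-playbook, hc_fusion.yaml could include a task file in a loop and pass the per-item values as variables; a rough sketch, assuming a hypothetical instances list with the same fields the question passes via --extra-vars:
- name: "FUSION"
  include_tasks: fusion_tasks.yaml   # hypothetical task file replacing the nested playbook
  loop: "{{ instances }}"            # hypothetical list of fusion endpoints
  loop_control:
    loop_var: instance
  vars:
    fusion_ip_ha: "{{ instance.ip }}"
    fusion_user: "{{ instance.username }}"
    fusion_pass: "{{ instance.password }}"
    fusion_ansible_become_user: "{{ instance.ansible_become_user }}"
    fusion_ansible_become_pass: "{{ instance.ansible_become_pass }}"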

How can I use Ansible to push playbooks with SSH key authentication

I am new to Ansible and am trying to push playbooks to my nodes. I would like to push via SSH keys. Here is my playbook:
- name: nginx install and start services
  hosts: <ip>
  vars:
    ansible_ssh_private_key_file: "/path/to/.ssh/id_ed25519"
  become: true
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: latest
    - name: start service nginx
      service:
        name: nginx
        state: started
Here is my inventory:
<ip> ansible_ssh_private_key_file=/path/to/.ssh/id_ed25519
Before I push, I check whether it works:
ansible-playbook -i /home/myuser/.ansible/hosts nginx.yaml --check
It gives me:
fatal: [ip]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: user#ip: Permission denied (publickey,password).", "unreachable": true}
On that server I don't have root privileges and I can't use sudo; that's why I use my own inventory in my home directory. I can open an SSH connection to the target node where I want to push the nginx playbook and log in. The public key is on the remote server in /home/user/.ssh/id_ed25119.pub
What am I missing?
Copy /etc/ansible/ansible.cfg into the directory from which you are running the nginx.yaml playbook, or somewhere else per the documentation: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings-locations
Then edit that file to change this line:
#private_key_file = /path/to/file
to read:
private_key_file = /path/to/.ssh/id_ed25519
Also check the remote_user entry.
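A minimal sketch of the relevant part of that local ansible.cfg (paths and the user name are placeholders, not taken from the question):
[defaults]
inventory = /home/myuser/.ansible/hosts
private_key_file = /path/to/.ssh/id_ed25519
remote_user = myuser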

Ansible local_action on host without local ssh daemon

How can I run a local command on an Ansible control server if that control server does not have an SSH daemon running?
If I run the following playbook:
- name: Test commands
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Test local action
      local_action: command echo "hello world"
I get the following error:
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host localhost port 22: Connection refused", "unreachable": true}
It seems that local_action is the same as delegate_to: 127.0.0.1, so Ansible tries to SSH to localhost. However, there is no SSH daemon running on the local controller host (only on the remote machines).
So my immediate question is how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
Crucial addition, not in the original question:
My host_vars contained the following line:
ansible_connection: ssh
how to run a specific command from Ansible, without Ansible first trying to SSH to localhost.
connection: local is sufficient to make the tasks run on the controller without using SSH.
Try:
- name: Test commands
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Test local action
      command: echo "hello world"
I'll answer the details myself, perhaps it is useful to someone.
In my case:
ansible_connection was set to ssh in the host_vars.
ansible_host was set to localhost by local_action.
This combination led to an SSH connection to localhost, which failed.
Further considerations:
delegate_to and local_action set ansible_host and ansible_connection, but any setting in the host_vars, playbook or task overrides that.
connection: local only sets ansible_connection (ansible_host is unmodified), but any setting of ansible_connection in the host_vars, playbook or task overrides it.
So my solution was to either remove ansible_connection from the host_vars, or set the ansible_connection variable on the task.
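A minimal sketch of the second option, overriding the connection on the task itself so the stray host_vars value no longer wins:
- name: Test local action
  command: echo "hello world"
  vars:
    ansible_connection: local   # task vars take precedence over the ssh value from host_vars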
That looks wrong to me.
- name: import profiles of VMs
  connection: local
  hosts: localhost
  gather_facts: false
  tasks:
    - name: list files
      find:
        paths: .
        recurse: no
      delegate_to: localhost
It still asks me for an SSH password:
❯ ansible-playbook playbooks/import_vm_profiles.yml -i localhost, -k
[WARNING]: Unable to parse the plugin filter file /Users/fredericclement/devops/ansible_refactored/etc/Plugin_filters.yml as module_blacklist is not a list. Skipping.
SSH password:

Ansible tries to connect to VM IP before executing the role creating the VM

I'm trying to develop an Ansible playbook to generate a VM. I wrote a myvm role that contains the tasks that orchestrate vmware_guest; these tasks use delegate_to: localhost, which vmware_guest requires.
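For context, a task inside such a myvm role typically looks roughly like this (the module parameters are illustrative, not taken from the question):
- name: Create the VM in vCenter
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"   # assumed variables
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    name: "{{ inventory_hostname }}"
    template: my-vm-template             # assumed template name
    state: poweredon
  delegate_to: localhost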
Then I added my new-to-be VM to the hosts file with the following entry:
[myvms]
myvm1
and extended site.yml with:
- hosts: myvms
  roles:
    - myvm
Now, when I run:
ansible-playbook site.yml -i hosts --limit myvm1
it fails with:
fatal: [myvm1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection reset by 192.168.10.13 port 22\r\n", "unreachable": true}
It seems Ansible tries to connect to the VM's IP before it even gets to the role that creates the VM and delegates to localhost. Adding delegate_to to site.yml fails, however.
How can I fix my Ansible scripts to properly generate the VM for me?
Add gather_facts: false to the play.
- hosts: myvms
  gather_facts: false
  roles:
    - myvm
By default, Ansible connects to the target machines at the start of the play and runs a script that collects data (facts); disabling fact gathering avoids that initial connection to the not-yet-existing VM.
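If facts about the new VM are needed later in the same play, they can still be gathered explicitly once the VM exists and is reachable; a minimal sketch:
- name: Gather facts now that the VM is up
  setup: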

SSH-less LXC containers using Ansible

I am new to Ansible, and I am trying to use it on some LXC containers.
My problem is that I don't want to install SSH in my containers. So:
What I tried:
I tried to use this connection plugin, but it seems that it does not work with Ansible 2.
After realising that the chifflier connection plugin doesn't work, I tried to use the connection plugin from OpenStack.
After some failed attempts I dived into the code, and I understood that the plugin never gets the information that the host I am talking to is a container (the code never reaches that point).
My current setup:
{Ansible host} --ssh--> {vm} --ansible connection plugin--> {container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
my group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
"unreachable": true
}
The Ansible version is 2.3.1.0.
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using this connection plugin.
I updated my inventory and it now looks like this:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "192.168.28.12 is not running"
}
This is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg and the connection plugin from the playbook, and I got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
"failed": true,
"msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
I'm trying something similar.
I want to configure a host over SSH using Ansible and run LXC containers on that host, which are also configured using Ansible:
ansible control node --ssh--> host-a --lxc-attach--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers; there is no way to get it working through SSH.
At the moment the only way seems to be either a direct SSH connection or an SSH connection through the first host:
ansible control node --ssh--> container-a
or
ansible control node --ssh--> host-a --ssh--> container-a
Both require sshd installed in the container, but the second way doesn't need port forwarding or multiple IP addresses.
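For the second variant, a common way to express the hop in an INI inventory is OpenSSH's ProxyJump via ansible_ssh_common_args; a minimal sketch with assumed names and addresses:
[containers]
container-a ansible_host=10.0.3.10 ansible_user=containeruser

[containers:vars]
ansible_ssh_common_args='-o ProxyJump=user@host-a'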
Did you get a working solution?
