Ad-hoc ping command for Ansible

I am very new to Ansible and am trying to learn it by automating ACI.
I installed Ansible on my MacBook; the version is 2.10.
My inventory looks like this:
[APIC]
sandbox ansible_host=sandboxapicdc.cisco.com username=admin password=XXX
mpod ansible_host=10.1.1.100 ansible_ssh_user=admin ansible_ssh_pass=cisco
The second ansible_host is pingable from my machine, but I get the following when I try the Ansible ping module:
ansible all -m ping -i inventory
mpod | FAILED! => {
    "msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"
}
sandbox | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: connect to host sandboxapicdc.cisco.com port 22: Operation timed out",
    "unreachable": true
}

Try adding these variables to treat the hosts like NX-OS devices:
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: nxos
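Applied to the inventory above, that could look like the following sketch. Note that Ansible's recognized variable names are ansible_user and ansible_password, not the bare username/password keys from the original inventory, and that the ansible.netcommon collection ships with Ansible 2.10:

[APIC]
sandbox ansible_host=sandboxapicdc.cisco.com ansible_user=admin ansible_password=XXX
mpod ansible_host=10.1.1.100 ansible_user=admin ansible_password=cisco

[APIC:vars]
ansible_connection=ansible.netcommon.network_cli
ansible_network_os=nxos

As a side benefit, network_cli does not shell out to the ssh binary, so the sshpass error for mpod should disappear as well.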

Ansible error: Failed to connect to the host via ssh

I have read other posts about the same error, but their solutions didn't work for me.
I'm quite new to Ansible and I'm trying to learn it with simple playbooks.
With the playbook below I want to install FoxitReader via win_chocolatey on a Windows host:
---
- hosts: 192.168.2.123
  gather_facts: no
  tasks:
    - name: manage foxitreader
      win_chocolatey:
        name: foxitreader
        state: present
but when I run this playbook with the command below:
ansible-playbook test_choco.yaml -i 192.168.2.123,
I get this error:
fatal: [192.168.2.123]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.2.123 port 22: Connection refused", "unreachable": true}
Any help will be appreciated.
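For context on the error: win_chocolatey targets Windows, and Ansible manages Windows hosts over WinRM rather than SSH by default, which is why the connection to port 22 is refused. A minimal inventory sketch, assuming WinRM is already enabled on the Windows host (the user and password are placeholders):

[windows]
192.168.2.123

[windows:vars]
ansible_user=<user>
ansible_password=<password>
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

The playbook would then be run with ansible-playbook test_choco.yaml -i inventory instead of passing the bare IP on the command line.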

I get the "Failed to connect to the host via ssh" error when Ansible tries to connect to a machine via ssh using private key

I am trying to provision a machine using Ansible. I must connect to it via SSH using a private key instead of a password.
This is the content of my inventory.txt file:
target ansible_host=<ip_address> ansible_ssh_private_key_file=~/.ssh/<private_key_name>.pem
This is the content of my playbook.yaml file:
-
  name: Playbook name
  hosts: target
  tasks:
    <task_list>
When I execute the command ansible-playbook <playbook_name>.yaml -i inventory.txt, I get the following error:
fatal: [target]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey).\r\n", "unreachable": true}
I also tried executing the following command: ansible-playbook <playbook_name>.yaml --private-key=~/.ssh/<private_key_name>.pem -i inventory.txt, without the ansible_ssh_private_key_file property inside the inventory.txt file.
Note: I can connect to the machine using the command ssh -i <private_key_name>.pem <username>@<ip_address>.
How can I resolve this issue?
I suspect you are connecting as a different user. In the example above you use <username>@<ip_address> for your manual SSH check, but you don't have an ansible_user=... field configured. Try providing the username in the hosts file, as shown below.
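For example, reusing the placeholders from the question:

target ansible_host=<ip_address> ansible_user=<username> ansible_ssh_private_key_file=~/.ssh/<private_key_name>.pem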

Ansible tries to connect to VM IP before executing the role creating the VM

I'm trying to develop an Ansible script to generate a VM. I wrote a myvm role containing the tasks that drive vmware_guest; these tasks use delegate_to: localhost, which vmware_guest requires.
Then I added my soon-to-be VM to the hosts file:
[myvms]
myvm1
and extended site.yml with:
- hosts: myvms
  roles:
    - myvm
Now, when I run:
ansible-playbook site.yml -i hosts --limit myvm1
it fails with:
fatal: [myvm1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Connection reset by 192.168.10.13 port 22\r\n", "unreachable": true}
It seems Ansible tries to connect to the VM's IP before running the role that creates the VM (where the tasks delegate to localhost). Adding delegate_to to site.yml fails, however.
How can I fix my Ansible scripts to properly generate the VM for me?
Add gather_facts: false to the play.
- hosts: myvms
  gather_facts: false
  roles:
    - myvm
By default, Ansible connects to every target machine before the first task and runs a script that collects data (facts); disabling fact gathering skips that initial connection, so the play can reach the task that actually creates the VM.
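For reference, here is a minimal sketch of what the role task described in the question might look like. This is a reconstruction; the vCenter connection details and template name are placeholder variables, not taken from the original post:

# roles/myvm/tasks/main.yml (sketch)
- name: Create the VM from a template
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ datacenter_name }}"
    name: "{{ inventory_hostname }}"
    template: "{{ vm_template }}"
    state: poweredon
  delegate_to: localhost

With gather_facts: false in the play, Ansible never opens an SSH connection to myvm1 before this task runs on the control machine via delegate_to.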

SSH-less LXC containers using Ansible

I am new to Ansible, and I am trying to use it on some LXC containers. My problem is that I don't want to install SSH in my containers.
What I tried:
I tried to use this connection plugin, but it seems it does not work with Ansible 2.
After realizing that the chifflier connection plugin doesn't work, I tried the connection plugin from OpenStack.
After some failed attempts I dug into the code, and I found that the plugin never learns that the host it is talking to is a container (the code never reaches that point).
My current setup:
{Ansible host}---|ssh|---{vm}---|ansible connection plugin|---{container1}
My ansible.cfg:
[defaults]
connection_plugins = /home/jkarr/ansible-test/connection_plugins/ssh
inventory = inventory
My inventory:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm container_name=mailserver
My group vars:
ansible_host: "{{ physical_hostname }}"
ansible_ssh_extra_args: "{{ container_name }}"
ansible_user: containeruser
container_name: "{{ inventory_hostname }}"
physical_hostname: "{{ hostvars[physical_host]['ansible_host'] }}"
My testing playbook:
- name: Test Playbook
  hosts: containers
  gather_facts: true
  tasks:
    - name: testfile
      copy:
        content: "Test"
        dest: /tmp/test
The output is:
fatal: [mailserver]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname mailserver: No address associated with hostname\r\n",
    "unreachable": true
}
The Ansible version is 2.3.1.0.
So what am I doing wrong? Any tips?
Thanks in advance!
Update 1:
Based on Eric's answer I am now using this connection plugin.
I updated my inventory, which now looks like:
[hosts]
vm ansible_host=192.168.28.12
[containers]
mailserver physical_host=vm ansible_connection=lxc
After running my playbook I got:
<192.168.28.12> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "192.168.28.12 is not running"
}
This is weird, because 192.168.28.12 is the VM and the container is called mailserver. I also verified that the container is running.
Also, why does it say that 192.168.28.12 is a local LXC dir?
Update 2:
I removed my group_vars, my ansible.cfg, and the connection plugin from the playbook, and got this error:
<mailserver> THIS IS A LOCAL LXC DIR
fatal: [mailserver]: FAILED! => {
    "failed": true,
    "msg": "mailserver is not running"
}
You should take a look at this lxc connection plugin. It might fit your needs.
Edit: the lxc connection plugin is actually part of Ansible.
Just add ansible_connection=lxc in your inventory or group vars.
I'm trying something similar.
I want to configure a host over SSH using Ansible and run LXC containers on that host, which are also configured using Ansible:
ansible control node --(ssh)--> host-a --(lxc-attach)--> container-a
The issue with the lxc connection plugin is that it only works for local LXC containers; there is no way to get it working through SSH.
At the moment the only options seem to be a direct SSH connection or an SSH connection through the first host:
ansible control node --(ssh)--> container-a
or
ansible control node --(ssh)--> host-a --(ssh)--> container-a
Both require sshd installed in the container, but the second way doesn't need port forwarding or extra IP addresses.
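One way to implement the second path (my suggestion, not part of the original answer) is to treat host-a as an SSH jump host via ansible_ssh_common_args, assuming OpenSSH 7.3 or newer on the control node:

[containers:vars]
ansible_ssh_common_args='-o ProxyJump=admin@host-a'

Ansible then tunnels each SSH connection through host-a, so the container needs sshd but only has to be reachable from host-a, not from the control node.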
Did you get a working solution?

Ansible test hosts fails

Just starting out with Ansible. I configured the hosts file like this:
[webserver]
<remote-server-ip> ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
When I run:
ansible all -m ping
I get:
<remote-server-ip> | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Couldn't read packet: Connection reset by peer\r\n",
    "unreachable": true
}
I can connect with no issues if I run:
ssh -i <full-path-to-private-ssh-key> <user>@<remote-server-ip>
Notes:
There is no password on the SSH key.
The project is located at ~/my_project_name.
I also tried using ansible_connection=local; ansible all -m ping then appeared to work, but in reality it only let me execute tasks that modify the machine Ansible itself runs on.
The ansible.cfg file has not been modified, though it is in a different directory: /etc/ansible/ansible.cfg.
Ansible by default tries to connect through SSH, even to localhost. For a local connection, set ansible_connection to local in your hosts file as shown below.
<remote-server-ip> ansible_connection=local ansible_user=<user> ansible_private_key_file=<full-path-to-private-ssh-key>
Refer to the documentation for more details.
Hope this helps!
I think I saw this earlier; can you try adding the line below to the hosts file and see if that works:
ansible_connection=ssh ansible_port=22
I figured out that this is an issue with the version of Ansible I was using (2.3.1). Using version 2.2.0.0 works with no problems.
