Unable to make a PyEZ connection: ConnectUnknownHostError - ansible

I am trying to use juniper_junos_facts from the Ansible Junos module to query some VMs that I provisioned using Vagrant. However, I am getting the following error:
fatal: [r1]: FAILED! => {"changed": false, "msg": "Unable to make a PyEZ connection: ConnectUnknownHostError(r1)"}
fatal: [r2]: FAILED! => {"changed": false, "msg": "Unable to make a PyEZ connection: ConnectUnknownHostError(r2)"}
I see in this document on juniper.net that this error occurs when the host isn't defined correctly in the inventory file. I don't believe this is an issue with my inventory file, because when I run ansible-inventory --host everything appears to be in order:
~/vagrant-projects/junos$ ansible-inventory --host r1
{
    "ansible_ssh_host": "127.0.0.1",
    "ansible_ssh_port": 2222,
    "ansible_ssh_private_key_file": ".vagrant/machines/r1/virtualbox/private_key",
    "ansible_ssh_user": "root"
}
~/vagrant-projects/junos$ ansible-inventory --host r2
{
    "ansible_ssh_host": "127.0.0.1",
    "ansible_ssh_port": 2200,
    "ansible_ssh_private_key_file": ".vagrant/machines/r2/virtualbox/private_key",
    "ansible_ssh_user": "root"
}
My playbook is copied from this document on juniper.net.
My Inventory File
[vsrx]
r1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=.vagrant/machines/r1/virtualbox/private_key
r2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=.vagrant/machines/r2/virtualbox/private_key
[vsrx:vars]
ansible_ssh_user=root
My Playbook
---
- name: show version
  hosts: vsrx
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  tasks:
    - name: retrieve facts
      juniper_junos_facts:
        host: "{{ inventory_hostname }}"
        savedir: "{{ playbook_dir }}"
    - name: print version
      debug:
        var: junos.version

As you're using connection: local, you need to give the module full connection details (usually packaged in a provider dictionary at the play level to reduce repetition):
- name: retrieve facts
  juniper_junos_facts:
    host: "{{ ansible_ssh_host }}"
    port: "{{ ansible_ssh_port }}"
    user: "{{ ansible_ssh_user }}"
    passwd: "{{ ansible_ssh_pass }}"
    ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
    savedir: "{{ playbook_dir }}"
Full docs are here (watch out for the correct role version in the URL): https://junos-ansible-modules.readthedocs.io/en/2.1.0/juniper_junos_facts.html where you can also see what the defaults are.
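One caveat with the snippet above: your inventory sets a private key but no ansible_ssh_pass, so the passwd line would trip over an undefined variable. A minimal sketch of one way around that (assuming you want key-only auth) is to default the password to omit:
- name: retrieve facts
  juniper_junos_facts:
    host: "{{ ansible_ssh_host }}"
    port: "{{ ansible_ssh_port }}"
    user: "{{ ansible_ssh_user }}"
    passwd: "{{ ansible_ssh_pass | default(omit) }}"   # dropped entirely when no password is defined
    ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
    savedir: "{{ playbook_dir }}"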
To fully explain the "provider" method, your playbook should look something like this:
---
- name: show version
  hosts: vsrx
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  vars:
    connection_info:
      host: "{{ ansible_ssh_host }}"
      port: "{{ ansible_ssh_port }}"
      user: "{{ ansible_ssh_user }}"
      passwd: "{{ ansible_ssh_pass }}"
      ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
  tasks:
    - name: retrieve facts
      juniper_junos_facts:
        provider: "{{ connection_info }}"
        savedir: "{{ playbook_dir }}"
    - name: print version
      debug:
        var: junos.version

This answer is for people who find this question by the error message.
If you use a connection plugin other than local, this error can be, and usually is, caused by this bug related to variable ordering.
The bug is fixed in release 2.2.1 and later, so try updating the role from Galaxy.
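For example (the version pin is only an illustration; any release at or above 2.2.1 carries the fix):
ansible-galaxy install Juniper.junos --force        # or pin a release: ansible-galaxy install Juniper.junos,2.2.1 --force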

Related

Ansible & Juniper Junos - Unable to make a PyEZ connection: ConnectError() [duplicate]

Ansible Tower how to pass inventory to my playbook variables

I am setting up a VMware job in Ansible Tower to snapshot a list of VMs. Ideally, this list should be generated by AWX/Tower from the vSphere dynamic inventory. The inventory is named "lab_vm" in AWX and uses either the hostname or the UUID of the VM.
How do I pass this through in my playbook variables file?
---
vars:
  vmware:
    host: '{{ lookup("env", "VMWARE_HOST") }}'
    username: '{{ lookup("env", "VMWARE_USER") }}'
    password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
  vcenter_datacenter: "dc1"
  vcenter_validate_certs: false
  vm_name: "EVE-NG"
  vm_template: "Win2019-Template"
  vm_folder: "Network Labs"
My playbook:
---
- name: vm snapshot
  hosts: localhost
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        # hostname: "{{ host }}"
        # username: "{{ user }}"
        # password: "{{ password }}"
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ vm_name }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
You're going about it backward. Ansible loops through the inventory for you. Use that feature, and delegate the task to localhost:
---
- name: vm snapshot
  hosts: all
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
      delegate_to: localhost
I've not used this particular module before, but don't you want snapshot_name to be unique for each guest?
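For instance, a hedged tweak of the task above that folds the guest name into the snapshot name (the naming scheme is purely illustrative):
    - name: create snapshot
      vmware_guest_snapshot:
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot - {{ inventory_hostname }}"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
      delegate_to: localhost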

Ansible how to remove groups value by key

I have a play where I collect available host names before running a task; I am using this for a purpose.
My play code:
---
- name: check reachable side A hosts
  hosts: ????ha???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{inventory_hostname}}. Skipping OS Install"
When I print this using
- debug:
    msg: "group: {{ groups['reachable_other_pairs'] }}"
I get the result below:
"this group : ['testha1', 'testha2', 'testha3']",
Now if I call the same play again with different hosts, grouping with the same key, the new host names are appended to the existing values, like below:
- name: check reachable side B hosts
  hosts: ????hb???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{inventory_hostname}}. Skipping OS Install"
If I print reachable_other_pairs I get the results below:
"msg": " new group: ['testhb1', 'testhb2', 'testhb3', 'testha1', 'testha2', 'testha3']"
All I want is the first 3 entries: ['testhb1', 'testhb2', 'testhb3']
Can someone let me know how to achieve this?
Add this as a task just before your block. It will refresh your inventory and clean up all groups that are not defined there:
- meta: refresh_inventory
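A minimal sketch of where that sits in your side B play (only the new task is added; the existing block/rescue stays exactly as you have it):
- name: check reachable side B hosts
  hosts: ????hb???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  tasks:
    # drop any runtime groups (e.g. reachable_other_pairs) left over from the side A play
    - meta: refresh_inventory
    # ... your existing block with juniper_junos_ping and group_by follows here unchanged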

Multiple variables in with_items made error

I created the following playbook to set ufw settings.
---
- name: setup ufw for multi ports
  hosts: db
  become: yes
  tasks:
    - name: 'Allow all access for multi ports'
      community.general.ufw:
        rule: allow
        port: "{{ item.port_num }}"
        src: "{{ item.dest_ip }}"
      with_items:
        - { port_num: "33787", dest_ip: "{{web_ip_band}}" }
And this is my group_vars file.
web_ip_band:
  - '192.168.101.13/24'
  - '192.168.101.44/24'
When I execute this playbook, I get this error:
failed: [dbserver01] (item={'port_num': '33787', 'dest_ip': ['192.168.101.13/24', '192.168.101.44/24']}) => {"ansible_loop_var": "item", "changed": false, "commands": ["/usr/sbin/ufw status verbose", "/bin/grep -h '^### tuple' /lib/ufw/user.rules /lib/ufw/user6.rules /etc/ufw/user.rules /etc/ufw/user6.rules /var/lib/ufw/user.rules /var/lib/ufw/user6.rules", "/usr/sbin/ufw --version", "/usr/sbin/ufw allow from ['192.168.101.13/24', '192.168.101.44/24'] to any port 33787"], "item": {"dest_ip": ["192.168.101.13/24", "192.168.101.44/24"], "port_num": "33787"}, "msg": "ERROR: Wrong number of arguments\n"}
Is there a syntax error in my playbook?
From the community.general.ufw module documentation (extract rearranged to fit in an SO answer):
from_ip (aliases: from, src)
string - Default: "any"
You are passing a list of IPs, which explains your error message:
ERROR: Wrong number of arguments
You have to run that task for each combination of port_num and the individual entries in dest_ip. What you need here is a subelements loop:
- name: 'Allow all access for multi ports'
  community.general.ufw:
    rule: allow
    port: "{{ item.0.port_num }}"
    src: "{{ item.1 }}"
  vars:
    my_rules:
      - { port_num: "33787", dest_ip: "{{web_ip_band}}" }
  loop: "{{ my_rules | subelements('dest_ip') }}"
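If it helps to see what the loop expands to, a throwaway debug task (purely illustrative) prints one line per port/source pair:
- name: show expanded port/source pairs
  debug:
    msg: "allow port {{ item.0.port_num }} from {{ item.1 }}"
  vars:
    my_rules:
      - { port_num: "33787", dest_ip: "{{ web_ip_band }}" }
  loop: "{{ my_rules | subelements('dest_ip') }}"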

How to transfer variables to include playbook?

I have a playbook which includes another playbook. It also has a vars_prompt "name_VM", and I need to transfer that variable into the included playbook "new-vm.yml", but I get this error:
TASK [hostname] ****************************************************************
fatal: [192.168.250.102]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: {{ name_VM }}: 'name_VM' is undefined\n\nThe error appears to have been in '/etc/ansible/playbooks/tasks/new-vm.yml': line 7, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n pre_tasks:\n - hostname:\n ^ here\n"}
How do I transfer variables to the pre_tasks of the included playbook?
Main playbook:
- hosts: localhost
  gather_facts: false
  connection: local
  become: true
  vars_files:
    - ../roles/vm-create/vars/am-default.yml
  vars_prompt:
    - name: "name_VM"
      prompt: "VM name:"
      private: no
      default: "vm001"
    - name: "size_hard"
      prompt: "Size hard disk (Gb)"
      private: no
      default: "16"
    - name: "size_memory"
      prompt: "Size memory (Mb)"
      private: no
      default: "2048"
    - name: "count_CPU"
      prompt: "Count CPU:"
      private: no
      default: "2"
  roles:
    - vm-create
  tasks:
    - include: tasks/check-ip.yml
    - include: tasks/new-vm.yml
new-vm playbook:
- hosts: temp
  vars:
    ldap_server: ldap://ldap.example.com
    agent_server: zabbix.aexample.com
  pre_tasks:
    - hostname:
        name: "{{ name_vm }}"
  roles:
    - { role: zabbix-agent, tags: [ 'zabbix' ] }
    - { role: ldap-client, tags: [ 'ldap' ] }
    - { role: motd, tags: [ 'motd' ] }
  tasks:
    - telegram:
        token: 'bot12345:XXXXXX'
        chat_id: XXXXX
        msg: "New VM {{ ansible_hostname }} ({{ ansible_all_ipv4_addresses }}) is created and has been configured."
      tags:
        - telegram
check_ip.yml, in which I add the host:
- vsphere_guest:
    vcenter_hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    guest: "{{ name_VM }}"
    vmware_guest_facts: yes
    validate_certs: no
  register: vsphere_facts
  until: vsphere_facts.ansible_facts.hw_eth0.ipaddresses[0] | match("192.168.250.")
  retries: 6
  delay: 10

- name: Ensure virtual machine is in the dynamic inventory
  add_host:
    name: "{{ vsphere_facts.ansible_facts.hw_eth0.ipaddresses[0] }}"
    ansible_user: root
    ansible_ssh_pass: pass
    groups: temp
In your case name_VM is play-bound and will not be visible from the second play.
You need to assign a fact to the temp host (I guess you use add_host somewhere inside the vm-create role; just add a name_vm: "{{ name_VM }}" host fact there).
Then in the second play you can access the {{ name_vm }} host fact.
Update: an example, based on the question edit.
- name: Ensure virtual machine is in the dynamic inventory
  add_host:
    name: "{{ vsphere_facts.ansible_facts.hw_eth0.ipaddresses[0] }}"
    name_vm: "{{ name_VM }}"
    ansible_user: root
    ansible_ssh_pass: pass
    groups: temp
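To sanity-check that the fact made it across, a quick (purely illustrative) debug in the second play would be:
- hosts: temp
  gather_facts: false
  tasks:
    - name: confirm the fact passed via add_host is visible
      debug:
        var: name_vm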
