I have a play that collects the reachable host names before running a task; I use this group later in the run.
My play code:
---
- name: check reachable side A hosts
  hosts: ????ha???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
When I print this group using
- debug:
    msg: "group: {{ groups['reachable_other_pairs'] }}"
I get the following result:
"this group : ['testha1', 'testha2', 'testha3']",
Now, if I run a similar play against different hosts but group them with the same key, the new host names are appended to the existing values, like below:
- name: check reachable side B hosts
  hosts: ????hb???
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  vars:
    credentials:
      host: "{{ loopback_v4 }}"
      username: "test"
      ssh_keyfile: "/id_rsa"
      port: "{{ port }}"
      timeout: 60
  tasks:
    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"
If I print reachable_other_pairs again I get the following result:
"msg": " new group: ['testhb1', 'testhb2', 'testhb3', 'testha1', 'testha2', 'testha3']"
All I want are the first 3 entries: ['testhb1', 'testhb2', 'testhb3'].
Can someone let me know how to achieve this?
Add this as a task just before your block. It will refresh your inventory and clean up all groups that are not in there:
- meta: refresh_inventory
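For example, in the side B play it would sit at the top of the tasks list, before the block (a minimal sketch based on the plays above):

  tasks:
    # Re-read the inventory; groups built in memory by group_by in the
    # earlier play (e.g. reachable_other_pairs from side A) are discarded
    - meta: refresh_inventory

    - block:
        - name: "Check netconf connectivity with switches"
          juniper_junos_ping:
            provider: "{{ credentials }}"
            dest: "{{ loopback_v4 }}"
        - name: Add devices with connectivity to the "reachable" group
          group_by:
            key: "reachable_other_pairs"
      rescue:
        - debug: msg="Cannot ping to {{ inventory_hostname }}. Skipping OS Install"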
I am trying to use juniper_junos_facts from the Ansible Junos module to query some VMs that I provisioned using Vagrant. However, I am getting the following error:
fatal: [r1]: FAILED! => {"changed": false, "msg": "Unable to make a PyEZ connection: ConnectUnknownHostError(r1)"}
fatal: [r2]: FAILED! => {"changed": false, "msg": "Unable to make a PyEZ connection: ConnectUnknownHostError(r2)"}
I see in a document on juniper.net that this error occurs when the host is not defined correctly in the inventory file. I don't believe my inventory file is the issue, because when I run ansible-inventory --host everything appears to be in order:
~/vagrant-projects/junos$ ansible-inventory --host r1
{
    "ansible_ssh_host": "127.0.0.1",
    "ansible_ssh_port": 2222,
    "ansible_ssh_private_key_file": ".vagrant/machines/r1/virtualbox/private_key",
    "ansible_ssh_user": "root"
}
~/vagrant-projects/junos$ ansible-inventory --host r2
{
    "ansible_ssh_host": "127.0.0.1",
    "ansible_ssh_port": 2200,
    "ansible_ssh_private_key_file": ".vagrant/machines/r2/virtualbox/private_key",
    "ansible_ssh_user": "root"
}
My playbook is copied from a document on juniper.net.
My Inventory File
[vsrx]
r1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_private_key_file=.vagrant/machines/r1/virtualbox/private_key
r2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=.vagrant/machines/r2/virtualbox/private_key
[vsrx:vars]
ansible_ssh_user=root
My Playbook
---
- name: show version
  hosts: vsrx
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  tasks:
    - name: retrieve facts
      juniper_junos_facts:
        host: "{{ inventory_hostname }}"
        savedir: "{{ playbook_dir }}"
    - name: print version
      debug:
        var: junos.version
As you're using connection: local, you need to give the module full connection details (usually packaged in a provider dictionary at the play level to reduce repetition):
- name: retrieve facts
  juniper_junos_facts:
    host: "{{ ansible_ssh_host }}"
    port: "{{ ansible_ssh_port }}"
    user: "{{ ansible_ssh_user }}"
    passwd: "{{ ansible_ssh_pass }}"
    ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
    savedir: "{{ playbook_dir }}"
Full docs are here (watch out for the correct role version in the URL): https://junos-ansible-modules.readthedocs.io/en/2.1.0/juniper_junos_facts.html where you can also see what the defaults are.
To fully explain the "provider" method, your playbook should look something like this:
---
- name: show version
  hosts: vsrx
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  vars:
    connection_info:
      host: "{{ ansible_ssh_host }}"
      port: "{{ ansible_ssh_port }}"
      user: "{{ ansible_ssh_user }}"
      passwd: "{{ ansible_ssh_pass }}"
      ssh_private_key_file: "{{ ansible_ssh_private_key_file }}"
  tasks:
    - name: retrieve facts
      juniper_junos_facts:
        provider: "{{ connection_info }}"
        savedir: "{{ playbook_dir }}"
    - name: print version
      debug:
        var: junos.version
This answer is for people who find this question by the error message.
If you use a connection plugin other than local, this error can also occur, and it is usually caused by this bug related to variable ordering.
The bug is fixed in release 2.2.1 and later, so try updating the role from Galaxy.
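If you manage roles through a requirements file, pinning the fixed release could look something like this (a hypothetical sketch; adjust the version to whatever is current), installed with ansible-galaxy install -r requirements.yml --force:

# requirements.yml - pin the Juniper.junos role to a release containing the fix
- src: Juniper.junos
  version: "2.2.1"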
I am setting up a VMware job in Ansible Tower to snapshot a list of VMs. Ideally, this list should be generated by AWX/Tower from the vSphere dynamic inventory. The inventory is named "lab_vm" in AWX and uses either the hostname or the UUID of the VM.
How do I pass this through in my playbook variables file?
---
vars:
  vmware:
    host: '{{ lookup("env", "VMWARE_HOST") }}'
    username: '{{ lookup("env", "VMWARE_USER") }}'
    password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
  vcenter_datacenter: "dc1"
  vcenter_validate_certs: false
  vm_name: "EVE-NG"
  vm_template: "Win2019-Template"
  vm_folder: "Network Labs"
My playbook
---
- name: vm snapshot
  hosts: localhost
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        # hostname: "{{ host }}"
        # username: "{{ user }}"
        # password: "{{ password }}"
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ vm_name }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
You're going about it backward. Ansible loops through the inventory for you. Use that feature, and delegate the task to localhost:
---
- name: vm snapshot
  hosts: all
  become: false
  gather_facts: false
  collections:
    - community.vmware
  pre_tasks:
    - include_vars: vars.yml
  tasks:
    - name: create snapshot
      vmware_guest_snapshot:
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: present
        snapshot_name: "Ansible Managed Snapshot"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
      delegate_to: localhost
I've not used this particular module before, but don't you want snapshot_name to be unique for each guest?
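For instance (an untested sketch of the same task), you could fold the inventory hostname into the snapshot name:

    - name: create snapshot
      vmware_guest_snapshot:
        datacenter: "{{ vcenter_datacenter }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: present
        # One snapshot name per guest rather than the same name for all
        snapshot_name: "Ansible Managed Snapshot - {{ inventory_hostname }}"
        folder: "{{ vm_folder }}"
        description: "This snapshot is created by Ansible Playbook"
      delegate_to: localhost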
I'm trying to get the server name as user input, and if the server OS is RHEL 7 it should proceed with further tasks. I'm trying with hostvars but it is not helping; please help me find the OS version with a when condition:
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - debug:
        msg: "{{ hostvars['server1'].ansible_distribution_major_version }}"
When I execute the playbook, I get the following error:
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['server1']\" is undefined\n\nThe error appears to be in '/var/lib/awx/projects/pacemaker_RHEL_7_ST/main_2.yml': line 33, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
You need to gather facts on the newly added hosts before you consume the variable. As an example, this will do it with automatic fact gathering:
---
- name: Add hosts
  hosts: localhost
  vars:
    - username: test
      password: test
  vars_prompt:
    - name: server1
      prompt: Server_1 IP or hostname
      private: no
    - name: server2
      prompt: Server_2 IP or hostname
      private: no
  tasks:
    - add_host:
        name: "{{ server1 }}"
        groups:
          - cluster_nodes
          - primary
          - management
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"
    - add_host:
        name: "{{ server2 }}"
        groups:
          - cluster_nodes
          - secondary
        ansible_user: "{{ username }}"
        ansible_password: "{{ password }}"

- name: Gather facts for newly added targets
  hosts: cluster_nodes
  # gather_facts: true <= this is the default

- name: Do <whatever> targeting localhost again
  hosts: localhost
  gather_facts: false # already gathered in play1
  tasks:
    # Warning!! bad practice. Looping on a group usually
    # shows you should have a play targeting that specific group
    - debug:
        msg: "OS version for {{ item }} is 7"
      when: hostvars[item].ansible_distribution_major_version | int == 7
      loop: "{{ groups['cluster_nodes'] }}"
If you don't want to rely on automatic gathering, you can run the setup module manually, e.g. for the second play:
- name: Gather facts for newly added targets
  hosts: cluster_nodes
  gather_facts: false
  tasks:
    - name: get facts from targets
      setup:
I need to add a server to a service group every time I create a new server, using the following task.
Task
- name: Create a service group
  a10_service_group_v3:
    validate_certs: no
    host: "{{ item.0.a10_host }}"
    state: "{{ item.1.service_state }}"
    username: "{{ item.0.user }}"
    password: "{{ item.0.pass }}"
    service_group: "{{ item.1.group_name }}"
    reset_on_server_selection_fail: yes
    servers:
      - name: "{{ item.1.server_name1 }}"
        port: "{{ item.1.server_port1 }}"
    overwrite: yes
    write_config: yes
  ignore_errors: yes
  with_nested:
    - "{{ a10 }}"
    - "{{ service_group }}"
Variables:
service_group:
  - group_name: bif_sg
    service_state: present
    server_name1: bif01
    server_port1: 80
I need help with passing the variables for server_name and server_port. Say I have 3 servers to add to the service group; in the task I then have to list server_name1/server_port1, server_name2/server_port2, and so on, three times.
Every time I add a server I also need to update the task :(
Is there a way to pass server_name and server_port multiple times with a single definition in the task?
If you expect service_group to hold a list of servers, refactor your variable to contain an actual list of servers rather than a bunch of separate subkeys:
service_group:
  - group_name: bif_sg
    service_state: present
    servers:
      - name: bif01
        port: 80
      - name: bif02
        port: 8080
And in your task:
...
servers: "{{ item.1.servers }}"
...
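Put together with the task from the question, that would look something like this (an untested sketch; only the servers parameter changes):

- name: Create a service group
  a10_service_group_v3:
    validate_certs: no
    host: "{{ item.0.a10_host }}"
    state: "{{ item.1.service_state }}"
    username: "{{ item.0.user }}"
    password: "{{ item.0.pass }}"
    service_group: "{{ item.1.group_name }}"
    reset_on_server_selection_fail: yes
    # Pass the whole list from the refactored variable in one go
    servers: "{{ item.1.servers }}"
    overwrite: yes
    write_config: yes
  ignore_errors: yes
  with_nested:
    - "{{ a10 }}"
    - "{{ service_group }}"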