Create VLANs only if they don't exist on a Nexus switch - ansible

I'm trying to create an Ansible playbook that creates the VLANs defined in the file vlans.dat on a Cisco Nexus switch, but only when they don't already exist on the device.
File vlans.dat contains:
---
vlans:
  - { vlan_id: 2, name: TEST }
And Ansible file:
---
- name: Verify and create VLANs
  hosts: switches_group
  gather_facts: no
  vars_files:
    - vlans.dat
  tasks:
    - name: Get Nexus facts
      nxos_facts:
      register: data
    - name: Create new VLANs only
      nxos_vlan:
        vlan_id: "{{ item.vlan_id }}"
        name: "{{ item.name }}"
        state: "{{ item.state | default('present') }}"
      with_items: "{{ vlans }}"
      when: item.vlan_id not in data.ansible_facts.vlan_list
In the when statement I'm trying to limit execution to the case where the vlan_id (defined in the file) doesn't already exist in the vlan_list gathered by the nxos_facts module. Unfortunately, the task gets executed even when the vlan_id already exists in vlan_list, and I don't know why.
PLAY [Verify and create VLANs]
TASK [Get Nexus facts]
ok: [10.1.1.1]
TASK [Create new VLANs only]
ok: [10.1.1.1] => (item={u'name': u'TEST', u'vlan_id': 2})
TASK [debug]
skipping: [10.1.1.1]
PLAY RECAP
10.1.1.1 : ok=2 changed=0 unreachable=0 failed=0
Can you help me with that, or point out what I'm doing wrong here?

It appears you have stumbled upon a side-effect of YAML having actual types: in { vlan_id: 2 } the 2 is an int, but vlan_list contains strings. As you might imagine, {{ 1 in ["1"] }} is False.
There are two ways out of that situation: make the vlan_id a string via - { vlan_id: "2" } or coerce the vlan_id to a string just for testing the list membership:
when: (item.vlan_id|string) not in data.ansible_facts.vlan_list
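The type mismatch is easy to reproduce outside of Ansible; here is a minimal Python sketch of the same membership test (the list contents are made up for illustration):

```python
# nxos_facts reports VLAN IDs as strings, while the YAML file yields an int.
vlan_list = ["1", "2", "1002"]

# The same comparison Jinja2 performs in the `when:` expression:
print(2 in vlan_list)        # False: int 2 is never equal to str "2"
print(str(2) in vlan_list)   # True once coerced, like vlan_id|string
```

The coercion only affects the membership test; the nxos_vlan module still receives the original integer.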

Ansible, how to set a global fact using roles?

I'm trying to use Ansible to deploy a small k3s cluster with just two server nodes at the moment. Deploying the first server node, which I refer to as "master", is easy to set up with Ansible. However, setting up the second server node, which I refer to as "node", is giving me a challenge, because I need to pull the value of the node-token from the master and use it in the k3s install command on the "node" VM.
I'm using Ansible roles, and this is what my playbook looks like:
- hosts: all
  roles:
    - { role: k3sInstall, when: 'server_type is defined' }
    - { role: k3sUnInstall, when: 'server_type is defined' }
This is my main.yml file from the k3sInstall role directory:
- name: Install k3s Server
  import_tasks: k3s_install_server.yml
  tags:
    - k3s_install
This is my k3s_install_server.yml:
---
- name: Install k3s Cluster
  block:
    - name: Install k3s Master Server
      become: yes
      shell: "{{ k3s_master_install_cmd }}"
      when: server_role == "master"
    - name: Get Node-Token file from master server.
      become: yes
      shell: cat {{ node_token_filepath }}
      when: server_role == "master"
      register: nodetoken
    - name: Print Node-Token
      when: server_role == "master"
      debug:
        msg: "{{ nodetoken.stdout }}"
        # msg: "{{ k3s_node_install_cmd }}"
    - name: Set Node-Token fact
      when: server_role == "master"
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"
    - name: Print Node-Token fact
      when: server_role == "node" or server_role == "master"
      debug:
        msg: "{{ nodeToken }}"
    # - name: Install k3s Node Server
    #   become: yes
    #   shell: "{{ k3s_node_install_cmd }}{{ nodeToken }}"
    #   when: server_role == "node"
I've commented out the Install k3s Node Server task because I'm not able to properly reference the nodeToken variable that I set when server_role == "master".
This is the output of the debug:
TASK [k3sInstall : Print Node-Token fact] ***************************************************************************************************************************************************************************************************************************************************************************
ok: [server1] => {
"msg": "K10cf129cfedafcb083655a1780e4be994621086f780a66d9720e77163d36147051::server:aa2837148e402f675604a56602a5bbf8"
}
ok: [server2] => {
"msg": ""
}
My host file:
[p6dualstackservers]
server1 ansible_ssh_host=10.63.60.220
server2 ansible_ssh_host=10.63.60.221
And I have the following host_vars files assigned:
server1.yml:
server_role: master
server2.yml:
server_role: node
I've tried assigning the nodeToken variable inside of k3sInstall/vars/main.yml as well as one level up from the k3sInstall role inside group_vars/all.yml but that didn't help.
I tried searching for a way to use block-level variables but couldn't find anything.
If you set the variable on the master only, it's not available to other hosts, e.g.:
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
    - debug:
        var: nodeToken
gives
ok: [master] =>
nodeToken: K10cf129cfedaf
ok: [node] =>
nodeToken: VARIABLE IS NOT DEFINED!
If you want to "apply all results and facts to all the hosts in the same batch" use run_once: true, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
      run_once: true
    - debug:
        var: nodeToken
gives
ok: [master] =>
nodeToken: K10cf129cfedaf
ok: [node] =>
nodeToken: K10cf129cfedaf
In your case, add run_once: true to the task:
- name: Set Node-Token fact
  set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: server_role == "master"
  run_once: true
The above code works because the condition when: server_role == "master" is evaluated before run_once: true takes effect. Quoting the run_once documentation:
"Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterward apply any results and facts to all active hosts in the same batch."
Safer code would add a standalone set_fact instead of relying on the precedence of when: over run_once, e.g.:
- set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: inventory_hostname == 'master'
- set_fact:
    nodeToken: "{{ hostvars['master'].nodeToken }}"
  run_once: true
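The two-step pattern can be sketched in plain Python; the hostvars dictionary and the token value below are illustrative stand-ins, not real Ansible objects:

```python
# Hypothetical per-host fact store mimicking Ansible's hostvars.
hostvars = {
    "master": {"server_role": "master"},
    "node": {"server_role": "node"},
}

# Step 1: only the master sets the fact (the `when:` condition).
for facts in hostvars.values():
    if facts["server_role"] == "master":
        facts["nodeToken"] = "K10cf129cfedaf"

# Step 2: every host copies the master's fact (the run_once set_fact).
for facts in hostvars.values():
    facts["nodeToken"] = hostvars["master"]["nodeToken"]

print(hostvars["node"]["nodeToken"])  # K10cf129cfedaf
```

Splitting the steps makes the intent explicit: the second loop never depends on which host happens to run first.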
Using when for this use case is probably not the best fit; you would probably be better off delegating some tasks to the so-called master server.
To define which server is the master, based on your inventory variable, you can delegate a fact to localhost, for example.
Then, to get the token from the file on the master server, you can delegate that task and its fact to that server only.
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        master_node: "{{ inventory_hostname }}"
      when: server_role == 'master'
      delegate_to: localhost
      delegate_facts: true
    - set_fact:
        token: 12345678
      run_once: true
      delegate_to: "{{ hostvars.localhost.master_node }}"
      delegate_facts: true
    - debug:
        var: hostvars[hostvars.localhost.master_node].token
      when: server_role != 'master'
This yields the expected:
PLAY [all] ********************************************************************************************************
TASK [set_fact] ***************************************************************************************************
skipping: [node1]
ok: [node2 -> localhost]
skipping: [node3]
TASK [set_fact] ***************************************************************************************************
ok: [node1 -> node2]
TASK [debug] ******************************************************************************************************
skipping: [node2]
ok: [node1] =>
hostvars[hostvars.localhost.master_node].token: '12345678'
ok: [node3] =>
hostvars[hostvars.localhost.master_node].token: '12345678'
PLAY RECAP ********************************************************************************************************
node1 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node2 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node3 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

Use dynamic variable name

I'm trying to get the value of ip_address from the following YAML, which I'm including as variables in Ansible:
common:
  ntp:
    - time.google.com
node1:
  default_route: 10.128.0.1
  dns:
    - 10.128.0.2
  hostname: ip-10-128-5-17
  device_interface: ens5
  cluster_interface: ens5
  interfaces:
    ens5:
      ip_address: 10.128.5.17
      nat_ip_address: 18.221.63.178
      netmask: 255.255.240.0
version: 2
However the network interface (ens5 here) may be named something else, such as eth0. My ansible code is this:
- hosts: all
  tasks:
    - name: Read configuration from the yaml file
      include_vars: "{{ config_yaml }}"
    - name: Dump Interface Settings
      vars:
        msg: node1.interfaces.{{ cvp_device_interface }}.ip_address
      debug:
        msg: "{{ msg }}"
      tags: debug_info
Running the code like this, I only get the key's name:
TASK [Dump Interface Settings] *************************************************
ok: [18.221.63.178] => {
    "msg": "node1.interfaces.ens5.ip_address"
}
But what I actually need is the value (i.e. something like {{ vars[msg] }}, which should expand into {{ node1.interfaces.ens5.ip_address }}). How can I accomplish this?
Use square brackets.
Example: a minimal playbook that defines a variable called "device" and uses it to return the device's active status:
- hosts: localhost
  connection: local
  vars:
    device: enx0050b60c19af
  tasks:
    - debug: var=device
    - debug: var=hostvars.localhost.ansible_facts[device].active
Output:
$ ansible-playbook example.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] *******************************************************************
TASK [Gathering Facts] *************************************************************
ok: [localhost]
TASK [debug] ***********************************************************************
ok: [localhost] => {
"device": "enx0050b60c19af"
}
TASK [debug] ***********************************************************************
ok: [localhost] => {
"hostvars.localhost.ansible_facts[device].active": true
}
PLAY RECAP *************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
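The underlying mechanics are ordinary dictionary indexing; here is a minimal Python sketch (the data is a trimmed-down, hypothetical version of the YAML above):

```python
# Illustrative config mirroring the vars file; only the relevant keys kept.
config = {
    "node1": {
        "interfaces": {
            "ens5": {"ip_address": "10.128.5.17"},
        }
    }
}

cvp_device_interface = "ens5"  # could just as well be "eth0"

# Dot notation needs a literal key; bracket notation accepts a variable.
ip = config["node1"]["interfaces"][cvp_device_interface]["ip_address"]
print(ip)  # 10.128.5.17
```

Jinja2 follows the same rule: node1.interfaces.ens5 only works for a fixed key, while node1['interfaces'][cvp_device_interface] resolves the key at runtime.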
As the comment suggests, apply the same bracket notation in the original playbook:
- hosts: all
  tasks:
    - name: Read configuration from the yaml file
      include_vars: "{{ config_yaml }}"
    - name: Dump Interface Settings
      debug:
        msg: "{{ node1['interfaces'][cvp_device_interface]['ip_address'] }}"
      tags: debug_info

Loops within loops

I've set up some application information in my Ansible group_vars like this:
applications:
  - name: app1
  - name: app2
  - name: app3
  - name: app4
    settings:
      log_dir: /var/logs/app4
      associated_files:
        - auth/key.json
  - name: app5
    settings:
      log_dir: /var/logs/app5
      repo_path: new_apps/app5
I'm struggling to get my head around how I can use these "sub loops".
My tasks for each application are:
1. Create some folders based purely on the name value
2. Create a log folder if a settings/log_dir value exists
3. Copy associated files over, if specified
The syntax for these tasks isn't the problem here, I'm comfortable with those - I just need to know how to access the information from this applications variable. Number 3 in particular seems troublesome to me - I need to loop within a loop.
To debug this, I've been trying to run the following task:
- debug:
    msg: "{{ item }}"
  with_subelements:
    - "{{ applications }}"
    - settings
Here's what happens with each loop style:
- with_items: I get the error "with_items expects a list or a set"
- with_nested: I can see the top-level information (e.g. msg: {{ item }} outputs an array of app1, app2, etc.)
- with_subelements: I get the error "subelements lookup expects a dictionary, got 'None'"
It's possible/probable that the way I've set the variable up in the first instance is wrong. If there's a better way to do this, it's not a problem to change it.
You can't use with_subelements because settings is a dictionary, not a list. If you were to restructure your data so that settings is a list, like this:
applications:
  - name: app1
  - name: app2
  - name: app3
  - name: app4
    settings:
      - name: log_dir
        value: /var/logs/app4
      - name: associated_files
        value:
          - auth/key.json
  - name: app5
    settings:
      - name: log_dir
        value: /var/logs/app5
      - name: repo_path
        value: new_apps/app5
You could then write something like the following to iterate over each setting for each application:
---
- hosts: localhost
  gather_facts: false
  vars_files:
    - applications.yml
  tasks:
    - debug:
        msg: "set {{ item.1.name }} to {{ item.1.value }} for {{ item.0.name }}"
      loop: "{{ applications|subelements('settings', skip_missing=true) }}"
      loop_control:
        label: "{{ item.0.name }}.{{ item.1.name }} = {{ item.1.value }}"
(I'm using loop_control here just to make the output nicer.)
Using the sample data you posted in applications.yml, this will produce as output:
PLAY [localhost] *********************************************************************
TASK [debug] *************************************************************************
ok: [localhost] => (item=app4.log_dir = /var/logs/app4) => {
"msg": "set log_dir to /var/logs/app4 for app4"
}
ok: [localhost] => (item=app4.associated_files = ['auth/key.json']) => {
"msg": "set associated_files to ['auth/key.json'] for app4"
}
ok: [localhost] => (item=app5.log_dir = /var/logs/app5) => {
"msg": "set log_dir to /var/logs/app5 for app5"
}
ok: [localhost] => (item=app5.repo_path = new_apps/app5) => {
"msg": "set repo_path to new_apps/app5 for app5"
}
PLAY RECAP ***************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
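What subelements produces can be sketched in plain Python as a nested loop (the data below is a trimmed version of the restructured example):

```python
# Trimmed version of applications.yml: one app without settings, one with.
applications = [
    {"name": "app1"},
    {"name": "app4", "settings": [
        {"name": "log_dir", "value": "/var/logs/app4"},
        {"name": "associated_files", "value": ["auth/key.json"]},
    ]},
]

# subelements('settings', skip_missing=true) pairs each application with
# each of its settings, skipping applications that have none.
pairs = [
    (app, setting)
    for app in applications
    for setting in app.get("settings", [])
]

for app, setting in pairs:
    print(f"set {setting['name']} to {setting['value']} for {app['name']}")
```

Each loop iteration then sees the pair as item.0 (the application) and item.1 (the setting).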

Check for Value for a Key in Dict within Ansible

I want to run an Ansible task only if a string (vlan) is found within a key's (name) value, i.e.:
dict:
interfaces_l3:
  - name: vlan101
    ipv4: 192.168.1.100/24
    state: present
task:
- name: Enable Features
  nxos_feature:
    feature: interface-vlan
    state: enabled
  when: vlan in interfaces_l3.values()
This is what I have, but it's currently not working.
There are a few problems with your expression:
- interfaces_l3.values() should just blow up, because interfaces_l3 is a list, and lists don't have a .values() method.
- You are referring to a variable named vlan rather than the string "vlan".
- What you want to ask is whether any item in the interfaces_l3 list contains the string "vlan" in the value of its name attribute. You could do something like this:
---
- hosts: localhost
  gather_facts: false
  vars:
    interfaces_l3_with_vlan:
      - name: vlan101
        ipv4: 192.168.1.100/24
        state: present
    interfaces_l3_without_vlan:
      - name: something else
        ipv4: 192.168.1.100/24
        state: present
  tasks:
    - name: this should run
      debug:
        msg: "enabling features"
      when: "interfaces_l3_with_vlan|selectattr('name', 'match', 'vlan')|list"
    - name: this should be skipped
      debug:
        msg: "enabling features"
      when: "interfaces_l3_without_vlan|selectattr('name', 'match', 'vlan')|list"
Which produces the following output:
PLAY [localhost] ******************************************************************************************************************************************************************************
TASK [this should run] ************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "enabling features"
}
TASK [this should be skipped] *****************************************************************************************************************************************************************
skipping: [localhost]
PLAY RECAP ************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0
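The selectattr/match test maps directly onto a Python list comprehension with re.match; here is a minimal sketch using the same sample data:

```python
import re

interfaces_l3 = [
    {"name": "vlan101", "ipv4": "192.168.1.100/24", "state": "present"},
    {"name": "something else", "ipv4": "192.168.1.100/24", "state": "present"},
]

# selectattr('name', 'match', 'vlan') | list, expressed in Python.
# re.match anchors at the start of the string, like the Jinja2 'match' test.
matches = [i for i in interfaces_l3 if re.match("vlan", i["name"])]

# A non-empty list is truthy (task runs); an empty list is falsy (task skips).
print(bool(matches))  # True
```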

How to generate single reusable random password with ansible

That is to say: how can the password lookup be evaluated only once?
- name: Demo
  hosts: localhost
  gather_facts: False
  vars:
    my_pass: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
  tasks:
    - debug:
        msg: "{{ my_pass }}"
    - debug:
        msg: "{{ my_pass }}"
    - debug:
        msg: "{{ my_pass }}"
Each debug statement prints a different value, e.g.:
PLAY [Demo] *************
TASK [debug] ************
ok: [localhost] => {
"msg": "ZfyzacMsqZaYqwW"
}
TASK [debug] ************
ok: [localhost] => {
"msg": "mKcfRedImqxgXnE"
}
TASK [debug] ************
ok: [localhost] => {
"msg": "POpqMQoJWTiDpEW"
}
PLAY RECAP ************
localhost : ok=3 changed=0 unreachable=0 failed=0
ansible 2.3.2.0
Use set_fact to assign a permanent fact:
- name: Demo
  hosts: localhost
  gather_facts: False
  vars:
    pwd_alias: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
  tasks:
    - set_fact:
        my_pass: "{{ pwd_alias }}"
    - debug:
        msg: "{{ my_pass }}"
    - debug:
        msg: "{{ my_pass }}"
    - debug:
        msg: "{{ my_pass }}"
I've been doing it this way and never had an issue.
- name: Demo
  hosts: localhost
  gather_facts: False
  tasks:
    - set_fact:
        my_pass: "{{ lookup('password', '/dev/null length=15 chars=ascii_letters') }}"
    - debug:
        msg: "{{ my_pass }}"
The password lookup is nice, but what if you have a password specification, e.g. it has to contain specific characters, or must not contain uppercase? The lookup also does not guarantee that the password will contain special characters when required.
I ended up with a custom Jinja filter, which might help somebody (works fine for me :) ):
https://gitlab.privatecloud.sk/vladoportos/custom-jinja-filters
The problem is that you are using the password lookup wrong, or at least not according to the latest documentation (maybe this is a new feature in 2.5):
Generates a random plaintext password and stores it in a file at a given filepath.
By definition, the password lookup generates a random password AND stores it at the specified path for subsequent lookups. The first time, it checks whether the specified path exists; if not, it generates a random password and stores it there. Subsequent lookups just retrieve it. Because you are using /dev/null as the store path, you force Ansible to generate a new random password each time, since whenever it checks for existence it finds nothing.
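That generate-once, read-thereafter behaviour can be sketched in plain Python; this is a rough approximation of what the lookup does, not its actual implementation:

```python
import pathlib
import secrets
import string

def lookup_password(path, length=15, chars=string.ascii_letters):
    """Generate a password once and cache it at `path`, like the lookup."""
    p = pathlib.Path(path)
    if p.exists():
        return p.read_text()  # subsequent lookups: just retrieve it
    password = "".join(secrets.choice(chars) for _ in range(length))
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(password)    # first lookup: generate and store
    return password
```

With a real file path, every call after the first returns the stored value; with /dev/null the "does the file exist" check never finds a stored password, so a new one is generated on every call.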
If you want a random password per host + client or whatever, all you need to do is use some templating and set the store path based on those parameters.
For example:
---
- name: Password test
  connection: local
  hosts: localhost
  tasks:
    - name: create a mysql user with a random password
      ansible.builtin.debug:
        msg: "{{ lookup('password', 'credentials/' + item.host + '/' + item.user + '/mysqlpassword length=15') }}"
      with_items:
        - user: joe
          host: atlanta
        - user: jim
          host: london
    - name: Another task that uses the password of joe
      ansible.builtin.debug:
        msg: "{{ lookup('password', 'credentials/atlanta/joe/mysqlpassword length=15') }}"
    - name: Another task that uses the password of jim
      ansible.builtin.debug:
        msg: "{{ lookup('password', 'credentials/london/jim/mysqlpassword length=15') }}"
And this is the task execution; as you can see, all three tasks get the right generated passwords:
TASK [Gathering Facts] ***********************************************************************************************
ok: [localhost]
TASK [create a mysql user with a random password] ********************************************************************
ok: [localhost] => (item={'user': 'joe', 'host': 'atlanta'}) => {
"msg": "niwPf4tk9HWHhNc"
}
ok: [localhost] => (item={'user': 'jim', 'host': 'london'}) => {
"msg": "dHJdg,OjOEqdyrW"
}
TASK [Another task that uses the password of joe] ********************************************************************
ok: [localhost] => {
"msg": "niwPf4tk9HWHhNc"
}
TASK [Another task that uses the password of jim] ********************************************************************
ok: [localhost] => {
"msg": "dHJdg,OjOEqdyrW"
}
This has the advantage that, even if your play fails and you have to re-execute it, you will still get the same previously generated random passwords; you can then store them in a key-chain or just delete the files.
