Ansible playbook for Cisco Nexus fails with nxapi in provider

I have not been able to get this playbook working. I am trying to use the 'nxapi' provider method and have nxapi set up on the target devices. I have a group_vars file with the following settings:
ansible_connection: local
ansible_network_os: nxos
ansible_user: user
ansible_password: password
The playbook code is:
- name: nxos_facts module
  hosts: "{{ myhosts }}"
  vars:
    ssh:
      host: "{{ myhosts }}"
      username: "{{ ansible_user }}"
      password: "{{ ansible_password }}"
      transport: cli
    nxapi:
      host: "{{ myhosts }}"
      username: "{{ ansible_user }}"
      password: "{{ ansible_password }}"
      transport: nxapi
      use_ssl: yes
      validate_certs: no
      port: 8443
  tasks:
    - name: nxos_facts nxapi
      nxos_facts:
        provider: "{{ nxapi }}"
When I debug the failed playbook, I see this line:
ESTABLISH LOCAL CONNECTION FOR USER: < MY USER NAME >
It seems like the playbook is not using the variables in the group_vars file, which specify a specific user to connect to the nxapi service on the switch.

Make sure your group_vars file matches a child group in your inventory file. (Can't tell from your question.)
Inventory file:
all:
  hosts:
    SWITCH1:
      ansible_host: 192.168.1.1
  children:
    group1:
      hosts:
        SWITCH1:
You can then put your group variables in one of two files: all.yml or group1.yml.
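For instance, assuming the inventory above is saved as hosts.yml, a group_vars/group1.yml next to it would apply the question's connection settings to every host in group1 (layout assumed):
# group_vars/group1.yml -- picked up automatically for hosts in group1
ansible_connection: local
ansible_network_os: nxos
ansible_user: user
ansible_password: password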

Related

Ansible Playbook with include_tasks

I have an Ansible playbook called main.yml that builds a Windows VM using an OVF template. It then configures the VM using variables that I pass. Typical Ansible stuff.
What I would like to do is have another task inside main.yml run at the end that configures a couple of things on the VM (configureserver.yml). I pass "{{ vmName }}" as a variable in main.yml, and want the second playbook (configureserver.yml) to run against that "{{ vmName }}" variable.
Main.yml (removed some of the customization to shorten this post)
- name: Deploy Virtual Machine from an OVF template in content library
  community.vmware.vmware_content_deploy_ovf_template:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    template: "{{ templateName }}"

- name: Configure the VM
  community.vmware.vmware_guest:
    name: "{{ vmName }}"
    hostname: "{{ vcenterName }}"
    username: "{{ vmwareUser }}"
    password: "{{ vmwarePassword }}"
    datacenter: "{{ datacenterName }}"
    cluster: "{{ clusterName }}"
    datastore: "{{ datastoreclusterName }}"

# THIS IS WHAT I WANT TO ADD
- name: Configure the D Drive
  include_tasks:
    file: configureserver.yml
configureserver.yml
- hosts: "{{ vmName }}"
  gather_facts: no
  become: yes
  vars_files:
    - passwords.yml
    - vars.test.yml
  vars:
    - ansible_connection: ssh
    - ansible_shell_type: cmd
    - ansible_become_method: runas
    - ansible_become_user: System
In the code above, I want to run a separate task in a separate YAML file that will configure the server. I want to run the configureserver.yml playbook against the "{{ vmName }}" variable as the inventory or host.
Glad to provide more details if needed.
I figured it out. I changed the include_tasks to import_playbook and that did the trick.
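For reference, a hedged sketch of that pattern: import_playbook is a play-level statement, so it cannot sit inside a task list; a small top-level file (name assumed) can chain the two playbooks instead:
# site.yml (hypothetical) -- runs main.yml, then configureserver.yml against "{{ vmName }}"
- import_playbook: main.yml
- import_playbook: configureserver.yml
Running it as ansible-playbook site.yml -e vmName=myserver01 makes the same vmName available to both playbooks, since extra-vars apply globally.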

Using Ansible to manage ESXi inventory

I am trying to use Ansible to modify the DNS settings on a group of ESXi servers. I've been able to get my playbook to change the settings on a single server, like this:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: 'myesxiserver.domain.local'
        username: 'username'
        password: 'password'
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
How can I get this to work for multiple servers? The Ansible documentation provides this example:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: '{{ esxi_hostname }}'
        username: '{{ esxi_username }}'
        password: '{{ esxi_password }}'
        change_hostname_to: esx01
        domainname: foo.org
        dns_servers:
          - 8.8.8.8
          - 8.8.4.4
      delegate_to: localhost
I'm not clear on how to iterate through a list of hosts and pass the correct value into the '{{ esxi_hostname }}' variable for each of my servers. I'm assuming the values can be passed using an inventory file, but I haven't found any good examples of how to do this for ESXi servers.
So I did get this working.
---
- hosts: localhost
  vars_files:
    - vars.yml
    - vars2.yml
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: "{{ item }}"
        username: 'myadmin'
        password: "{{ Password }}"
        validate_certs: no
        change_hostname_to: "{{ item }}"
        domainname: foo.org
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
      loop: "{{ esxihost }}"
I had to pass a list of host names via vars_files and iterate through it using the loop keyword. I tried to use the {{ inventory_hostname }} variable along with a standard inventory file, but because SSH is generally not enabled by default on ESXi servers, I would get an SSH connection error.
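For completeness, a sketch of the vars file feeding that loop (file name from the play above; host names assumed):
# vars.yml -- the list consumed by loop: "{{ esxihost }}"
esxihost:
  - esx01.foo.org
  - esx02.foo.org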

Setting Ansible vars with set_fact results

I'm running Ansible 2.9.18 on RHEL 7.
I am using hvac to retrieve usernames and passwords from a HashiCorp Vault.
vars:
  - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
tasks:
  - name: set Cisco creds
    set_fact:
      cisco: "{{ creds['data'] }}"

  - name: Get nxos facts
    nxos_command:
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"
      commands: show ver
      timeout: 30
    register: ver_out

  - debug: msg="{{ ver_out.stdout }}"
But username and password are deprecated, and I am trying to figure out how to pass them as a "provider" variable instead. This code doesn't work:
vars:
  asa_api:
    - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
  set_fact:
    cisco: "{{ creds['data'] }}"
  username: "{{ cisco['username'] }}"
  password: "{{ cisco['password'] }}"
tasks:
  - name: show run
    asa_command:
      commands: show run
      provider: "{{ asa_api }}"
    register: run
    become: yes
    tags:
      - show_run
I cannot figure out the syntax to make this work. I would greatly appreciate any help.
Thanks,
Steve
Disclaimer: this is a generic answer. I do not have any network device to test this fully, so you might have to adapt a bit after reading the documentation.
You are taking this the wrong way. You don't need set_fact at all, and both methods you are trying to use (user/pass options or a provider dict) are actually deprecated. Ansible treats your network device like any other host and will use the configured user and password if they exist.
In the following example, I'm assuming your playbook only targets network devices and that the login/pass stored in your vault is the same on all devices.
- name: Untested network device connection configuration demo
  hosts: my_network_device_group
  vars:
    # This indicates which connection plugin to use. Default is ssh.
    # Another possible value is httpapi. See the documentation link above.
    ansible_connection: network_cli
    vault_secret: tst2/data/cisco
    vault_token: verysecret
    vault_url: http://10.80.23.81:8200
    vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
    creds: "{{ lookup('hashi_vault', vault_options).data }}"
    # These are the user and pass used for the connection.
    ansible_user: "{{ creds.username }}"
    ansible_password: "{{ creds.password }}"
  tasks:
    - name: Get nxos version
      nxos_command:
        commands: show ver
        timeout: 30
      register: ver_cmd

    - name: show version
      debug:
        msg: "NXOS version on {{ inventory_hostname }} is {{ ver_cmd.stdout }}"

    - name: Another task to play on targets
      debug:
        msg: "Task played on {{ inventory_hostname }}"
Rather than vars at play level, you can store this information in your inventory for all hosts, for a specific group, or even per host. See how to organize your group and host variables if you want to use that feature.
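As a hedged example, the same settings moved into a group vars file (path assumed to follow the standard group_vars layout, matching the group name in the play above):
# group_vars/my_network_device_group.yml
ansible_connection: network_cli
vault_secret: tst2/data/cisco
vault_token: verysecret
vault_url: http://10.80.23.81:8200
vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
creds: "{{ lookup('hashi_vault', vault_options).data }}"
ansible_user: "{{ creds.username }}"
ansible_password: "{{ creds.password }}"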

Ansible | Deploy VM then run additional playbooks against new host?

I am fairly new to Ansible. I have created an Ansible role that contains the following task files, which deploy a VM from a template and then configure the VM with custom OS settings:
create-vm.yml
configure-vm.yml
I can successfully deploy a VM from a template via the "create-vm" task. After that completes, I would like to continue with the "configure-vm" task. But since the playbooks/role-vm-deploy.yml file targets "localhost", as shown here...
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local
... the next task doesn't run successfully, because it attempts to run against "localhost" rather than the new VM's hostname. I have since added the following to the end of the "create-vm" task...
- name: Add host to group 'just_created'
  add_host:
    name: '{{ hostname }}.{{ domain }}'
    groups: just_created
...but I'm not quite sure what to do with it. I can't quite wrap my head around what else I need to do and how to target the new hostname in the "configure-vm" task instead of localhost.
I am executing the playbook via CLI
# ansible-playbook playbooks/role-vm-deploy.yml
I saw this post, which was kind of helpful
I also saw the dynamic inventory documentation, but it's a bit over my head at this juncture. Any help would be appreciated. Thank you!
Here are the contents for the playbooks and tasks
### playbooks -> role-vm-deploy.yml
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local
### roles -> vm-deploy -> tasks -> main.yml
- name: Deploy VM
  include: create-vm.yml
  tags:
    - create-vm

- name: Configure VM
  include: configure-vm.yml
  tags:
    - configure-vm
### roles -> vm-deploy -> tasks -> create-vm.yml
- name: Clone the template
  vmware_guest:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_pwd }}'
    validate_certs: False
    name: '{{ hostname }}'
    template: '{{ template_name }}'
    datacenter: '{{ datacenter }}'
    folder: '/'
    hardware:
      memory_mb: '{{ memory }}'
      num_cpus: '{{ num_cpu }}'
    networks:
      - label: "Network adapter 1"
        state: present
        connected: True
        name: '{{ vlan }}'
    state: poweredon
    wait_for_ip_address: yes
### roles -> vm-deploy -> tasks -> configure-vm.yml
### This task is what I need to execute on the new hostname, but it attempts to execute on "localhost" ###
# Configure Networking
- name: Configure IP Address
  lineinfile:
    path: '{{ network_conf_file }}'
    regexp: '^IPADDR='
    line: 'IPADDR={{ ip_address }}'

- name: Configure Gateway Address
  lineinfile:
    path: '{{ network_conf_file }}'
    regexp: '^GATEWAY='
    line: 'GATEWAY={{ gw_address }}'
### roles -> vm-deploy -> defaults -> main.yml
# All of the variables reside here, including "{{ hostname }}.{{ domain }}"
You got so close! The trick is to observe that a playbook is actually a list of plays (the YAML objects of the form {"hosts": "...", "tasks": []}), and the targets of subsequent plays don't have to exist when the playbook starts -- presumably for this very reason. Thus:
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local
  # or wherever you were executing this -- it wasn't obvious from your question
  post_tasks:
    - name: Add host to group 'just_created'
      add_host:
        name: '{{ hostname }}.{{ domain }}'
        groups: just_created

- hosts: just_created
  tasks:
    - debug:
        msg: hello from the newly created {{ inventory_hostname }}
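One follow-up detail, hedged: the host added via add_host must still be reachable by name before the second play can connect. If '{{ hostname }}.{{ domain }}' is not yet in DNS, one option is to also pass the VM's IP (reusing the role's existing ip_address variable) so Ansible connects by address:
- name: Add host to group 'just_created'
  add_host:
    name: '{{ hostname }}.{{ domain }}'
    groups: just_created
    # assumption: ip_address is the same variable configure-vm.yml writes into the interface file
    ansible_host: '{{ ip_address }}'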

Ansible can't connect to windows host defined in in-memory inventory

I am trying to create a new Windows host on Azure, add it to Ansible's in-memory inventory, and run a play against it. However, it seems Ansible is trying to use SSH to connect to my Windows host.
Below is my playbook
- name: Windows VM Playbook
  hosts: localhost
  vars:
    nicName: "Blue-xxxx"
    vmName: "Blue-xxxx"
    vmPubIp: "51.141.x.x"
  tasks:
    - name: Create Windows VM
      azure_rm_virtualmachine:
        resource_group: AnsibleVMxxx
        name: "{{ vmName }}"
        vm_size: Standard_B2ms
        storage_account: vmstoragedisksxxx
        admin_username: xxxxxxxx
        admin_password: xxxxxxxx
        network_interface_names: "{{ nicName }}"
        managed_disk_type: Standard_LRS
        data_disks:
          - lun: 0
            disk_size_gb: 64
            managed_disk_type: Standard_LRS
            storage_container_name: vhd
            storage_account_name: AnsibleVMxxxxtorageaccount
        os_type: Windows
        image:
          offer: "WindowsServer"
          publisher: MicrosoftWindowsServer
          sku: '2012-R2-Datacenter'
          version: latest
        location: 'uk west'

    - name: Custom Script Extension
      azure_rm_deployment:
        state: present
        location: 'uk west'
        resource_group_name: 'AnsibleVMxxxx'
        template: "{{ lookup('file', '/etc/ansible/playbooks/ConfigureAnsibleForPowershellRemoting.json') | from_json }}"
        deployment_mode: incremental
        parameters:
          vmName:
            value: "{{ vmName }}"

    - name: Add machine to in-memory inventory
      add_host:
        name: 51.141.x.x
        groups: webservers
        ansible_user: adminUser
        ansible_password: xxxxxxxx
        ansible_port: 5986
        ansible_connection: winrm
        ansible_winrm_server_cert_validation: ignore
        ansible_winrm_scheme: https

    - name: Ping
      hosts: webservers
      gather_facts: false
      connection: local
      win_ping:
I also have a webserver.yml file under the /playbooks/group_vars folder with the following content:
ansible_user: adminUser
ansible_password: xxxxxxxx
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_scheme: https
Can anyone please advise me on what I am doing wrong here?
Figured it out.
The problem was occurring because I had hosts: localhost at the beginning of the playbook. I had to move the plays relying on the in-memory inventory out of the localhost play.
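For reference, a minimal sketch of that restructuring (same names as above): the Ping block becomes its own play at the top level, so the winrm variables set via add_host apply instead of the localhost play's connection settings.
- name: Windows VM Playbook
  hosts: localhost
  tasks:
    # ... azure_rm_virtualmachine, azure_rm_deployment and add_host tasks as above ...

- name: Ping
  hosts: webservers
  gather_facts: false
  tasks:
    - win_ping: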
