Error handling for updating IOS via Ansible

I'm a beginner with Ansible: more of a network engineer, less of a scripter/programmer, but trying to learn a new skill.
I'm attempting to write a playbook to automate updating our fleet of Cisco switch stacks, but I think I'm both lost in the syntax and unsure whether this is the 'right' way to go about it.
---
- name: Update Cisco switch stack
  hosts: Cisco2960
  vars:
    upgrade_ios_version: "15.2(7)E5"
  tasks:
    - name: Check current IOS version / Determine if update is needed...
      ios_facts:

    - debug:
        msg:
          - "Current image is {{ ansible_net_version }}"
          - "Current compliant image is {{ upgrade_ios_version }}"

    - name: Fail if versions match.
      ansible.builtin.fail: msg="IOS versions match. Stopping update."
      when: "{{ ansible_net_version }} = {{ upgrade_ios_version }}"
At first I thought each variable needed its own quotation marks, but that appears to be incorrect syntax as well, as below.
when: "{{ ansible_net_version }}" = "{{ upgrade_ios_version }}"
A couple of questions:
Is there an easier, plain-English way of achieving the type of error handling I am looking for? The Ansible documentation is great on options, but light on practical applications/examples.
Why am I receiving this specific syntax error in this case?

You can use the playbook below.
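But first, to answer the syntax question: when: already evaluates its content as a raw Jinja2 expression, so you must not wrap the variables in {{ }} (recent Ansible versions even print a warning about templating delimiters in conditionals), and equality is tested with ==, not a single =. The corrected task would be:

- name: Fail if versions match.
  ansible.builtin.fail:
    msg: "IOS versions match. Stopping update."
  when: ansible_net_version == upgrade_ios_version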
Ansible Playbook to upgrade Cisco IOS
- name: Upgrade CISCO IOS
  hosts: SWITCHES
  vars:
    upgrade_ios_version: 15.2(7)E5
  tasks:
    - name: CHECK CURRENT VERSION
      ios_facts:

    - debug:
        msg:
          - "Current version is {{ ansible_net_version }}"
          - "Current compliant image is {{ upgrade_ios_version }}"

    - debug:
        msg:
          - "Image is not compliant and will be upgraded"
      when: ansible_net_version != upgrade_ios_version
Create backup folder for today
- hosts: localhost
  tasks:
    - name: Get ansible date/time facts
      setup:
        filter: "ansible_date_time"
        gather_subset: "!all"

    - name: Store DTG as fact
      set_fact:
        DTG: "{{ ansible_date_time.date }}"

    - name: Create Directory {{ hostvars.localhost.DTG }}
      file:
        path: ~/network-programmability/backups/{{ hostvars.localhost.DTG }}
        state: directory
  run_once: true
Backup Running Config
- hosts: SWITCHES
  tasks:
    - name: Backup Running Config
      ios_command:
        commands: show run
      register: config

    - name: Save output to ~/network-programmability/backups/
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "~/network-programmability/backups/{{ hostvars.localhost.DTG }}/{{ inventory_hostname }}-{{ hostvars.localhost.DTG }}-config.txt"
SAVE the Running Config
    - name: Save running config
      ios_config:
        save_when: always
Copy software to target device
    - name: Copy Image // This could take up to 4 minutes
      net_put:
        src: "~/network-programmability/images/c2960l-universalk9-mz.152-7.E5.bin"
        dest: "flash:/c2960l-universalk9-mz.152-7.E5.bin"
      vars:
        ansible_command_timeout: 600
Change the Boot Variable to the new image
    - name: Change Boot Variable to new image
      ios_config:
        commands:
          - "boot system flash:c2960l-universalk9-mz.152-7.E5.bin"
        save_when: always
Reload the device
    - name: Reload the Device
      cli_command:
        command: reload
        prompt:
          - confirm
        answer:
          - 'y'
Wait for Reachability to the device
    - name: Wait for device to come back online
      wait_for:
        host: "{{ inventory_hostname }}"
        port: 22
        delay: 90
      delegate_to: localhost
Check current image
    - name: Check Image Version
      ios_facts:

    - debug:
        msg:
          - "Current version is {{ ansible_net_version }}"

    - name: ASSERT THAT THE IOS VERSION IS CORRECT
      vars:
        upgrade_ios_version: 15.2(7)E5
      assert:
        that:
          - upgrade_ios_version == ansible_net_version

    - debug:
        msg:
          - "Software Upgrade has been completed"

Related

The plugin community.general.dnf_versionlock doesn't seem to work as intended

I'm trying to write an Ansible playbook that updates all packages on a Red Hat machine and version-locks them. Once that is done, the list of locked packages should be exported to a different machine.
It seems to work until the "lock all packages" stage; then it just freezes there.
What am I doing wrong?
Thank you in advance.
---
- name: Update Red Hat system and lock version
  hosts: "{{ host }}"
  become: true
  vars:
    locked_packages: "{{ locked_packages | default([]) + [item.name] }}"
  tasks:
    - name: debug host
      debug:
        msg: "{{ host }}"

    - name: unlock all packages
      community.general.dnf_versionlock:
        name: "*"
        state: absent

    - name: Update all packages
      dnf:
        name: '*'
        state: latest
      register: updated_packages_main

    - name: print updated_packages_main variable
      debug:
        var: updated_packages_main

    - name: lock all packages
      community.general.dnf_versionlock:
        name: "*"
        state: present
      register: locked_packages

    - name: debug locked_packages
      debug:
        var: locked_packages

    - name: send list of locked packages to remote host
      fetch:
        src: /etc/dnf/plugins/versionlock.list
        dest: /opt/locked_packages.txt
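As a side note on the last task: fetch copies files from the managed host to the Ansible controller, not to an arbitrary remote machine, and with the default flat: no it appends the hostname and the full source path under dest. If the goal is one predictable file per host on the controller, a sketch along these lines should be closer (the destination path here is just an example):

    - name: fetch list of locked packages to the controller
      ansible.builtin.fetch:
        src: /etc/dnf/plugins/versionlock.list
        dest: "/opt/locked_packages-{{ inventory_hostname }}.txt"
        flat: yes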

Setting Ansible vars with set_fact results

I'm running Ansible 2.9.18 on RHEL 7.
I am using hvac to retrieve usernames and passwords from a HashiCorp Vault.
vars:
  - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
tasks:
  - name: set Cisco creds
    set_fact:
      cisco: "{{ creds['data'] }}"

  - name: Get nxos facts
    nxos_command:
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"
      commands: show ver
      timeout: 30
    register: ver_out

  - debug: msg="{{ ver_out.stdout }}"
But username and password are deprecated, and I am trying to figure out how to pass them as a "provider" variable instead. This code doesn't work:
vars:
  asa_api:
    - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
      set_fact:
        cisco: "{{ creds['data'] }}"
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"
tasks:
  - name: show run
    asa_command:
      commands: show run
      provider: "{{ asa_api }}"
    register: run
    become: yes
    tags:
      - show_run
I cannot figure out the syntax to make this work. I would greatly appreciate any help.
Thanks,
Steve
Disclaimer: this is a generic answer. I do not have any network device to test this fully, so you might have to adapt a bit after reading the documentation.
You are taking this the wrong way. You don't need set_fact at all, and both methods you are trying to use (user/pass options and the provider dict) are actually deprecated. Ansible treats your network device like any other host and will use the user and password you have configured, if they exist.
In the following example, I'm assuming your playbook only targets network devices and that the login/pass stored in your vault is the same on all devices.
- name: Untested network device connection configuration demo
  hosts: my_network_device_group
  vars:
    # This indicates which connection plugin to use. Default is ssh.
    # Another possible value is httpapi. See the documentation linked above.
    ansible_connection: network_cli
    vault_secret: tst2/data/cisco
    vault_token: verysecret
    vault_url: http://10.80.23.81:8200
    vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
    creds: "{{ lookup('hashi_vault', vault_options).data }}"
    # These are the user and pass used for connection.
    ansible_user: "{{ creds.username }}"
    ansible_password: "{{ creds.password }}"
  tasks:
    - name: Get nxos version
      nxos_command:
        commands: show ver
        timeout: 30
      register: ver_cmd

    - name: show version
      debug:
        msg: "NXOS version on {{ inventory_hostname }} is {{ ver_cmd.stdout }}"

    - name: Another task to play on targets
      debug:
        msg: "Task played on {{ inventory_hostname }}"
Rather than defining vars at play level, you can store this information in your inventory for all hosts, for a specific group, or even per host. See how to organize your group and host variables if you want to use that feature.
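For example, the same variables could live in a group_vars file instead of play vars. A minimal sketch, assuming the path and group name below (the vault token is left out, as in your example):

    # inventory/group_vars/my_network_device_group.yml
    ansible_connection: network_cli
    vault_options: "secret=tst2/data/cisco token= url=http://10.80.23.81:8200"
    creds: "{{ lookup('hashi_vault', vault_options).data }}"
    ansible_user: "{{ creds.username }}"
    ansible_password: "{{ creds.password }}"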

Ansible Playbook : variables

I am having an issue with my playbook when I run it as follows:
ansible-playbook -i inventory junos_config_new.yml --check -vvv
- name: Juniper SRX configuration compliance checks
  hosts: juniper
  gather_facts: false
  tasks:
    - set_fact:
        config_directory: '{{ "/home/myfolder/ansible_junos/files/" }}'

    - name: Syslog server check
      junipernetworks.junos.junos_config:
        src: '{{ config_directory }}/syslog_config.txt'
        src_format: set
        comment: Ensure that appropriate Syslog server configured
      register: junos_output
      diff: true

    - debug:
        msg: Syslog server check - This check has failed with the following output({{ junos_output.diff.prepared }})
      when: junos_output.changed

    - debug:
        msg: Syslog server check - This check has failed with the following output({{ junos_output.diff.prepared }})
      when: junos_output.changed

    - name: Admin credentials check
      junipernetworks.junos.junos_config:
        src: '{{ config_directory }}/admin_user.txt'
        comment: Ensure that Admin user have been created
        diff: true
        register: junos_output1

    - debug:
        var: junos_output1   # <-- failed section

    - debug:
        msg: Admin credentials check - This check has passed with the following output({{ junos_output1.diff.prepared }})
      when: not junos_output1.changed

    - debug:
        msg: Admin credentials check - This check has failed with the following output({{ junos_output1.diff.prepared }})
      when: junos_output1.changed

    - name: NTP Server check
      junipernetworks.junos.junos_config:
        src: '{{ config_directory }}/NTP_server.txt'
        comment: Ensure that correct NTP servers has been configured
      diff: true

    - debug:
        var: junos_output2

    - debug:
        msg: NTP Server check - This check has passed with the following output({{ junos_output2.diff.prepared }})
      when: not junos_output.changed

    - debug:
        msg: NTP Server check - This check has failed with the following output({{ junos_output2.diff.prepared }})
      when: junos_output.changed

    - name: Idle timeout check
      junipernetworks.junos.junos_config:
        src: '{{ config_directory }}/idle_timeout.txt'
        comment: Ensure that idle timeout has been configured
      diff: true

    - debug:
        var: junos_output3

    - debug:
        msg: Idle timeout check - This check has passed with the following output({{ junos_output3.diff.prepared }})
      when: not junos_output.changed

    - debug:
        msg: Idle timeout check - This check has failed with the following output({{ junos_output3.diff.prepared }})
      when: junos_output.changed
The error appears to be in '/home/gefelas/ansible_junos/junos_config_new.yml': line 30, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
register: junos_output1
- debug:
^ here
Please let me know what I can change.
The main cause of the error seems to be that you are using diff with the junos_config module. According to the documentation, this module does not support diff mode: https://docs.ansible.com/ansible/latest/modules/junos_config_module.html
So you'll need to remove diff: true from your junos_config tasks.
On another note, you also seem to have multiple issues with indentation. For example, this is not correct:
- name: Admin credentials check
  junipernetworks.junos.junos_config:
    src: '{{ config_directory }}/admin_user.txt'
    comment: Ensure that Admin user have been created
    register: junos_output1
Make sure you properly indent the task name and the additional task parameters, like so, for all of your tasks:
- name: Admin credentials check
  junipernetworks.junos.junos_config:
    src: '{{ config_directory }}/admin_user.txt'
    comment: Ensure that Admin user have been created
  register: junos_output1
So I would recommend going over your file and checking the indentation of all tasks (note that some of this might also be due to Stack Overflow formatting).
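Putting both fixes together, the first check would look something like this (diff removed, register aligned with the module keyword rather than nested inside its options):

    - name: Syslog server check
      junipernetworks.junos.junos_config:
        src: '{{ config_directory }}/syslog_config.txt'
        src_format: set
        comment: Ensure that appropriate Syslog server configured
      register: junos_output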

Ansible-Playbook - Save output to a remote server

I'm new to all the Ansible stuff, so most of the time I'm in "trial and error" mode.
Now I'm facing a challenge with a playbook and do not know where to look further.
The main task of this playbook should be to get a "show run" from a Cisco device and save it in a text file on a backup server (which is a remote server).
The only task that is not working is the backup task.
Here is my playbook:
- hosts: IOSGATEWAY
  gather_facts: no
  connection: local
  tasks:
    - name: GET CREDENTIALS
      include_vars: path/to/all/all.yml

    - name: DEFINE CONNECTION TO GW
      set_fact:
        connection:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"

    - name: GET SHOW RUN
      ios_command:
        provider: "{{ connection }}"
        commands:
          - show run
      register: show_run

    - name: SAVE TO BACKUP SERVER
      copy:
        content: "{{ show_run.stdout[0] }}"
        dest: "path/to/Directory/{{ inventory_hostname }}.txt"
      delegate_to: BACKUPSERVER
Can someone hint me in the right direction?
You set connection: local for the playbook, so everything you do is executed locally (which is correct for the ios_* modules, but not what you actually want for the copy module).
I'd recommend defining the ansible_connection variable in your inventory per group of hosts/devices, so Ansible will use a local connection for your IOS devices and SSH for the backup server.
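A minimal sketch of that idea, assuming an INI inventory and placeholder hostnames:

    [IOSGATEWAY]
    gw01.example.com

    [IOSGATEWAY:vars]
    ansible_connection=local

    [BACKUPSERVER]
    backup01.example.com

    [BACKUPSERVER:vars]
    ansible_connection=ssh

With this in place you can drop connection: local from the play, and the copy task delegated to BACKUPSERVER will run over SSH.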

Best practice for ansible apt-key when considering cross-platform

I'm looking into making all my Ansible roles cross-platform compatible, but I'm starting with Darwin (macOS). I'm almost done, but I've hit a stumbling block that I'm not entirely sure how to get around without resorting to command, shell, raw, or separate tasks per distribution...
- name: "Ensure key is present"
become: yes
become_user: root
apt_key:
keyserver: "{{ role_keyserver }}"
id: "{{ role_id }}"
state: present
How would I make the above Ansible task compatible with Darwin without the use of command, shell, or raw tasks?
Normally you put the genuinely cross-platform tasks in your role's tasks/main.yml and then have a task that includes OS-specific task lists at the appropriate point.
So a quick example might look like this:
tasks/main.yml

- name: os specific vars
  include_vars: "{{ ansible_os_family | lower }}"

- name: os specific tasks
  include: "{{ ansible_os_family | lower }}.yml"

- name: enable service
  service:
    name: "{{ service_name }}"
    state: started
    enabled: true
And then you might have OS-specific files such as these:
vars/redhat

service_name: foobar
service_yum_package: "{{ service_name }}"

tasks/redhat.yml

- name: install service
  yum:
    name: "{{ service_yum_package }}"

vars/debian

service_name: foobaz
service_apt_package: "{{ service_name }}"

tasks/debian.yml

- name: install service
  apt:
    name: "{{ service_apt_package }}"
Apt is only the package manager for Debian-based distributions, so you probably want to add a when condition to that task:
- name: "Ensure key is present"
become: yes
become_user: root
apt_key:
keyserver: "{{ role_keyserver }}"
id: "{{ role_id }}"
state: present
when: ansible_os_family == "Debian"
This will skip the task when run on a Darwin host.
If you have some similar task you need to run, but for Darwin, then you can similarly condition it on a fact so it only runs on Darwin hosts.
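For instance, if the role installs the same tool from Homebrew on macOS, a Darwin-only counterpart could look like the sketch below. role_tap and role_package are hypothetical variables, and note that Homebrew has no direct equivalent of an apt keyserver, so this only mirrors the install step:

    - name: "Ensure tap is present (Darwin)"
      community.general.homebrew_tap:
        name: "{{ role_tap }}"        # hypothetical variable
      when: ansible_os_family == "Darwin"

    - name: "Ensure package is present (Darwin)"
      community.general.homebrew:
        name: "{{ role_package }}"    # hypothetical variable
        state: present
      when: ansible_os_family == "Darwin"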
