Best practice for ansible apt-key when considering cross-platform

I'm looking into making all my Ansible roles cross-compatible, starting with Darwin (macOS). I'm almost done, but I've hit a stumbling block that I'm not sure how to get around without resorting to command, shell, raw, or distribution-specific tasks...
- name: "Ensure key is present"
become: yes
become_user: root
apt_key:
keyserver: "{{ role_keyserver }}"
id: "{{ role_id }}"
state: present
How would I make the above Ansible task compatible with Darwin without using command, shell, or raw tasks?

Normally you put the genuinely cross-platform tasks in your role's tasks/main.yml and then have a task that includes OS-specific task lists at an appropriate point.
So a quick example might be to have something like this:
tasks/main.yml
- name: os specific vars
  include_vars: "{{ ansible_os_family | lower }}"

- name: os specific tasks
  include: "{{ ansible_os_family | lower }}.yml"

- name: enable service
  service:
    name: "{{ service_name }}"
    state: started
    enabled: true
And then you might have your OS specific files such as this:
vars/redhat
service_name: foobar
service_yum_package: "{{ service_name }}"
tasks/redhat.yml
- name: install service
  yum:
    name: "{{ service_yum_package }}"
vars/debian
service_name: foobaz
service_apt_package: "{{ service_name }}"
tasks/debian.yml
- name: install service
  apt:
    name: "{{ service_apt_package }}"

apt is the package manager only for Debian-based distributions, so you probably want to add a when condition to that task:
- name: "Ensure key is present"
become: yes
become_user: root
apt_key:
keyserver: "{{ role_keyserver }}"
id: "{{ role_id }}"
state: present
when: ansible_os_family == "Debian"
This will skip the task when run on a Darwin host.
If you have a similar task that needs to run on Darwin instead, you can likewise condition it on a fact so it only runs on Darwin hosts.
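For example, a Darwin-only counterpart might be guarded the same way (a sketch; the homebrew module and the hypothetical role_package variable stand in for whatever your role actually installs):
- name: "Ensure package is present"
  homebrew:
    name: "{{ role_package }}"
    state: present
  when: ansible_os_family == "Darwin"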

Related

Error handling for updating IOS via Ansible

I'm a beginner to Ansible. More of a network engineer, less of a scripter/programmer, but trying to learn a new skill.
I'm attempting to write a playbook to automate updating our fleet of Cisco switch stacks, but I think I'm both lost in the syntax and unsure whether this is the 'right' way to go about it.
---
- name: Update Cisco switch stack
  hosts: Cisco2960
  vars:
    upgrade_ios_version: "15.2(7)E5"
  tasks:
    - name: Check current IOS version / Determine if update is needed...
      ios_facts:
    - debug:
        msg:
          - "Current image is {{ ansible_net_version }}"
          - "Current compliant image is {{ upgrade_ios_version }}"
    - name: Fail if versions match.
      ansible.builtin.fail: msg="IOS versions match. Stopping update."
      when: "{{ ansible_net_version }} = {{ upgrade_ios_version }}"
At first I thought each variable needed its own quotation, but that appears to be incorrect syntax as well, as below.
when: "{{ ansible_net_version }}" = "{{ upgrade_ios_version }}"
A couple of questions:
Is there an easier way to get the kind of error handling I'm describing? The Ansible documentation is great on options, but light on practical applications/examples.
Why am I receiving this specific syntax error in this case?
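To take the syntax error first: when: already evaluates its value as a raw Jinja2 expression, so the {{ }} braces are unnecessary there, and equality is tested with ==, not =. A corrected version of your failing task would look like this (a sketch based on your snippet):
- name: Fail if versions match.
  ansible.builtin.fail:
    msg: "IOS versions match. Stopping update."
  when: ansible_net_version == upgrade_ios_version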
You can use the playbook below.
Ansible Playbook to upgrade Cisco IOS
- name: Upgrade CISCO IOS
  hosts: SWITCHES
  vars:
    upgrade_ios_version: 15.2(7)E5
  tasks:
    - name: CHECK CURRENT VERSION
      ios_facts:
    - debug:
        msg:
          - "Current version is {{ ansible_net_version }}"
          - "Current compliant image is {{ upgrade_ios_version }}"
    - debug:
        msg:
          - "Image is not compliant and will be upgraded"
      when: ansible_net_version != upgrade_ios_version
Create backup folder for today
- hosts: localhost
  tasks:
    - name: Get ansible date/time facts
      setup:
        filter: "ansible_date_time"
        gather_subset: "!all"

    - name: Store DTG as fact
      set_fact:
        DTG: "{{ ansible_date_time.date }}"

    - name: Create Directory {{ hostvars.localhost.DTG }}
      file:
        path: ~/network-programmability/backups/{{ hostvars.localhost.DTG }}
        state: directory
      run_once: true
Backup Running Config
- hosts: SWITCHES
  tasks:
    - name: Backup Running Config
      ios_command:
        commands: show run
      register: config

    - name: Save output to ~/network-programmability/backups/
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "~/network-programmability/backups/{{ hostvars.localhost.DTG }}/{{ inventory_hostname }}-{{ hostvars.localhost.DTG }}-config.txt"
SAVE the Running Config
- name: Save running config
  ios_config:
    save_when: always
Copy software to target device
- name: Copy Image // This could take up to 4 minutes
  net_put:
    src: "~/network-programmability/images/c2960l-universalk9-mz.152-7.E5.bin"
    dest: "flash:/c2960l-universalk9-mz.152-7.E5.bin"
  vars:
    ansible_command_timeout: 600
Change the Boot Variable to the new image
- name: Change Boot Variable to new image
  ios_config:
    commands:
      - "boot system flash:c2960l-universalk9-mz.152-7.E5.bin"
    save_when: always
Reload the device
- name: Reload the Device
  cli_command:
    command: reload
    prompt:
      - confirm
    answer:
      - 'y'
Wait for Reachability to the device
- name: Wait for device to come back online
  wait_for:
    host: "{{ inventory_hostname }}"
    port: 22
    delay: 90
  delegate_to: localhost
Check current image
- name: Check Image Version
  ios_facts:

- debug:
    msg:
      - "Current version is {{ ansible_net_version }}"

- name: ASSERT THAT THE IOS VERSION IS CORRECT
  vars:
    upgrade_ios_version: 15.2(7)E5
  assert:
    that:
      - upgrade_ios_version == ansible_net_version

- debug:
    msg:
      - "Software Upgrade has been completed"

How to pass a files/ folder in ansible include_role option

I am trying to reuse an existing role via the include_role feature in Ansible, but I cannot seem to find a way to pass files (such as files/testrole1.yaml) from the calling role; it always uses the files from the common role.
Here is the structure and code I came up with so far:
---
- name: importing tasks from role1
  include_role:
    name: service-deploy-role1
    tasks_from: "{{ item }}"
  loop:
    - install
    - setup
The above code always uses the testrole1.yaml file. Is it possible to pass testrole2.yml when I call the install task from service-deploy-role1?
I figured out a solution:
---
- name: workaround
  set_fact:
    role_location: "{{ role_path }}"

- name: debug role path
  debug:
    msg: "{{ role_location }}"

- name: importing tasks from role1
  include_role:
    name: service-deploy-role1
    tasks_from: "{{ item }}"
  vars:
    role_dir: "{{ role_location }}"
  loop:
    - install
    - setup
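Inside service-deploy-role1, the included tasks can then build file paths from the passed-in role_dir instead of their own role_path, so they pick up files shipped by the calling role. A sketch (the destination path is made up for illustration):
- name: copy a file from the calling role
  copy:
    src: "{{ role_dir }}/files/testrole2.yml"
    dest: /tmp/testrole2.yml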

Setting Ansible vars with set_fact results

I'm running Ansible 2.9.18 on RHEL 7.
I am using hvac to retrieve usernames and passwords from a Hashicorp vault.
vars:
  - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"

tasks:
  - name: set Cisco creds
    set_fact:
      cisco: "{{ creds['data'] }}"

  - name: Get nxos facts
    nxos_command:
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"
      commands: show ver
      timeout: 30
    register: ver_out

  - debug: msg="{{ ver_out.stdout }}"
But username and password are deprecated, and I am trying to figure out how to pass them as a "provider" variable. This code doesn't work:
vars:
  asa_api:
    - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
      set_fact:
        cisco: "{{ creds['data'] }}"
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"

tasks:
  - name: show run
    asa_command:
      commands: show run
      provider: "{{ asa_api }}"
    register: run
    become: yes
    tags:
      - show_run
I cannot figure out the syntax to make this work. I would greatly appreciate any help.
Thanks,
Steve
Disclaimer: this is a generic answer. I do not have any network device to test this fully, so you might have to adapt a bit after reading the documentation.
You are taking this the wrong way. You don't need set_fact at all, and both methods you are trying to use (user/pass options or a provider dict) are actually deprecated. Ansible treats your network device like any other host and will use the configured user and password if they exist.
In the following example, I'm assuming your playbook only targets network devices and that the login/pass stored in your vault is the same on all devices.
- name: Untested network device connection configuration demo
  hosts: my_network_device_group
  vars:
    # This indicates which connection plugin to use. Default is ssh.
    # Another possible value is httpapi. See above documentation link.
    ansible_connection: network_cli
    vault_secret: tst2/data/cisco
    vault_token: verysecret
    vault_url: http://10.80.23.81:8200
    vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"
    creds: "{{ lookup('hashi_vault', vault_options).data }}"
    # These are the user and pass used for connection.
    ansible_user: "{{ creds.username }}"
    ansible_password: "{{ creds.password }}"
  tasks:
    - name: Get nxos version
      nxos_command:
        commands: show ver
        timeout: 30
      register: ver_cmd

    - name: show version
      debug:
        msg: "NXOS version on {{ inventory_hostname }} is {{ ver_cmd.stdout }}"

    - name: Another task to play on targets
      debug:
        msg: "Task played on {{ inventory_hostname }}"
Rather than vars at the play level, you can store this information in your inventory for all hosts, for a specific group, or even per host. See how to organize your group and host variables if you want to use that feature.
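For example, the connection settings could live in a group_vars file instead of the play (a sketch; the group name matches the play above, and lookups in inventory variables are evaluated lazily, on use):
group_vars/my_network_device_group.yml
ansible_connection: network_cli
creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token=verysecret url=http://10.80.23.81:8200').data }}"
ansible_user: "{{ creds.username }}"
ansible_password: "{{ creds.password }}"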

Install a list of software via apt using a list variable

While it's possible to install a list of software the following way:
- name: Install what I want
  apt:
    name:
      - docker
      - nmap
Is it also possible to use a variable that contains a list of software names instead? Like so:
vars:
  my_list:
    - docker
    - nmap

- name: Install what I want
  apt:
    name: "{{ my_list }}"
Yes, it's possible. The apt module's name parameter accepts "a list of package names", so both versions of the code are equivalent.
vars:
  my_list:
    - docker
    - nmap

tasks:
  - name: Install what I want
    apt:
      name: "{{ my_list }}"
It's also possible to use a loop, but this is less efficient because each item is installed in a separate apt transaction.
vars:
  my_list:
    - docker
    - nmap

tasks:
  - name: Install what I want
    apt:
      name: "{{ item }}"
    loop: "{{ my_list }}"
In recent Ansible versions you can also use the following syntax:
vars:
  my_list: [docker, nmap]

tasks:
  - name: Install APPS
    apt:
      name: "{{ my_list }}"
      state: present
      update_cache: yes

What is the best way to manage unsupported distros in an Ansible role?

An Ansible role supports Debian Stretch and Buster.
It is not able to do the job on Jessie or older versions.
Which is the best way to tell the user that the role cannot be used on a given old version?
- Do nothing in the main.yml file (controlling the distro version using when: declarations)
- Let the role explicitly fail using the fail module
- Do not check for a supported distro version and let the tasks fail themselves
Developers should place the supported/tested versions in the Readme. Then users should always read the Readme. Then, common sense should be used.
But we all know that's not the case.
You could configure the host(s) that are too old to skip the role, ensuring they do not execute any command from it. But the way to go would be to build another role, or update this one, so the playbook supports that OS version.
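Skipping can be done with a condition on the role itself in the playbook, e.g. (a sketch; the role name and version boundary are only examples):
- hosts: all
  roles:
    - role: my_role
      when: ansible_distribution_major_version is version('9', '>=')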
The least desirable method is the last one: not checking for a supported distro version and letting the tasks fail themselves. When you go down this path, some unsupported tasks get executed on the host before anything fails, and you can no longer guarantee the state of the system. In short: you'll create a mess.
To simply prevent the nightmare, indeed, let the play fail:
- name: fail when using older version
  fail:
    msg: "You fail because reason, woohoo"
  when: ansible_distribution == "Ubuntu" and ansible_distribution_version == "10.04"
Q: "What is the best way to manage unsupported distros in an Ansible role?"
A: It's a good idea to end the host or play when the platform and version is not supported. In most cases, this means such a platform and version hasn't been tested yet. It's up to the user to add a new platform and version to the metadata, test it and optionally contribute to the development.
In a role, it's possible to read the variable galaxy_info from the role's file meta/main.yml and test the supported platforms and versions.
$ cat roles/role_1/meta/main.yml
galaxy_info:
  author: your name
  description: your role description
  company: your company (optional)
  license: license (GPL-2.0-or-later, MIT, etc)
  min_ansible_version: 2.9
  platforms:
    - name: Ubuntu
      versions:
        - bionic
        - cosmic
        - disco
        - eoan
  galaxy_tags: []

dependencies: []
For example the tasks in the role below
$ cat roles/role_1/tasks/main.yml
---
- name: Print OS and distro Ansible variables collected by setup
  debug:
    msg:
      - "ansible_os_family: {{ ansible_os_family }}"
      - "ansible_distribution: {{ ansible_distribution }}"
      - "ansible_distribution_major_version: {{ ansible_distribution_major_version }}"
      - "ansible_distribution_version: {{ ansible_distribution_version }}"
      - "ansible_distribution_release: {{ ansible_distribution_release }}"

- name: Include role's meta data
  include_vars:
    file: "{{ role_path }}/meta/main.yml"

- name: Test the distribution is supported. End the host if not.
  set_fact:
    supported_distributions: "{{ galaxy_info.platforms | json_query('[].name') }}"

- debug:
    var: supported_distributions

- block:
    - debug:
        msg: "{{ ansible_distribution }} not supported. End of host."
    - meta: end_host
  when: ansible_distribution not in supported_distributions

- name: Test the release is supported. End the host if not.
  set_fact:
    supported_releases: "{{ (galaxy_info.platforms |
                             selectattr('name', 'match', ansible_distribution) |
                             list | first).versions }}"

- debug:
    var: supported_releases

- block:
    - debug:
        msg: "{{ ansible_distribution_release }} not supported. End of host."
    - meta: end_host
  when: ansible_distribution_release not in supported_releases

- name: The distribution and release are supported. Continue play.
  debug:
    msg: "{{ ansible_distribution }} {{ ansible_distribution_release }} is supported. Continue play."
with the playbook
- hosts: localhost
  gather_facts: true
  roles:
    - role_1
gives
"msg": [
"ansible_os_family: Debian",
"ansible_distribution: Ubuntu",
"ansible_distribution_major_version: 19",
"ansible_distribution_version: 19.04",
"ansible_distribution_release: disco"
]
"supported_distributions": [
"Ubuntu"
]
"supported_releases": [
"bionic",
"cosmic",
"disco",
"eoan"
]
"msg": "Ubuntu disco is supported. Continue play."
