ansible ios_command timeout when doing "show conf" on cisco 3850

I've got a simple ansible playbook that works fine on most ios devices. It fails on some of my 3850 switches with what looks like a timeout when doing a "show conf". How do I specify a longer, non-default timeout for command completion with the ios_command module (and presumably also ios_config)?
Useful details:
Playbook:
---
- hosts: ios_devices
  gather_facts: no
  connection: local

  tasks:
    - name: OBTAIN LOGIN CREDENTIALS
      include_vars: secrets.yaml

    - name: DEFINE PROVIDER
      set_fact:
        provider:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"

    - name: LIST NAME SERVERS
      ios_command:
        provider: "{{ provider }}"
        commands: "show run | inc name-server"
      register: dns_servers

    - debug: var=dns_servers.stdout_lines
successful run:
$ ansible-playbook listnameserver.yaml -i inventory/onehost
PLAY [ios_devices] *****************************************************************************************************************
TASK [OBTAIN LOGIN CREDENTIALS] ****************************************************************************************************
ok: [iosdevice1.example.com]
TASK [DEFINE PROVIDER] *************************************************************************************************************
ok: [iosdevice1.example.com]
TASK [LIST NAME SERVERS] ***********************************************************************************************************
ok: [iosdevice1.example.com]
TASK [debug] ***********************************************************************************************************************
ok: [iosdevice1.example.com] => {
    "dns_servers.stdout_lines": [
        [
            "ip name-server 10.1.1.166",
            "ip name-server 10.1.1.168"
        ]
    ]
}
PLAY RECAP *************************************************************************************************************************
iosdevice1.example.com : ok=4 changed=0 unreachable=0 failed=0
unsuccessful run:
$ ansible-playbook listnameserver.yaml -i inventory/onehost
PLAY [ios_devices] *****************************************************************************************************************
TASK [OBTAIN LOGIN CREDENTIALS] ****************************************************************************************************
ok: [iosdevice2.example.com]
TASK [DEFINE PROVIDER] *************************************************************************************************************
ok: [iosdevice2.example.com]
TASK [LIST NAME SERVERS] ***********************************************************************************************************
fatal: [iosdevice2.example.com]: FAILED! => {"changed": false, "msg": "timeout trying to send command: show run | inc name-server", "rc": 1}
to retry, use: --limit @/home/sample/ansible-playbooks/listnameserver.retry
PLAY RECAP *************************************************************************************************************************
iosdevice2.example.com : ok=2 changed=0 unreachable=0 failed=1

The default timeout is 10 seconds; if the request takes longer than that, ios_command will fail.
You can add the timeout as a key in the provider variable, like this:
- name: DEFINE PROVIDER
  set_fact:
    provider:
      host: "{{ inventory_hostname }}"
      username: "{{ creds['username'] }}"
      password: "{{ creds['password'] }}"
      timeout: 30

If you've already got a timeout value in provider, here's a handy way to update only that key in the variable:
- name: Update existing provider timeout key
  set_fact:
    provider: "{{ provider | combine({'timeout': 180}) }}"

Ansible: How to check multiple servers for a text file value, to decide which servers to run the script on?

I am trying to ask Ansible to check whether a server is passive or active based on the value in a specific file on each server; Ansible will then decide which server it runs the next script on.
For example with 2 servers:
Server1
cat /tmp/currentstate
PASSIVE
Server2
cat /tmp/currentstate
ACTIVE
In Ansible, I want to trigger the next set of jobs on the server where the output was ACTIVE, and once those complete, trigger the next set on the server where the output was PASSIVE.
What I have done so far to grab the state and output the value to Ansible is:
- hosts: "{{ hostname1 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of first server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server1
- debug:
msg: "{{ state_server1.stdout }}"
- hosts: "{{ hostname2 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of second server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server2
- debug:
msg: "{{ state_server2.stdout }}"
What I have done so far to trigger the script:
- hosts: "{{ active_hostname | mandatory }}"
  tasks:
    - name: Run the shutdown on active server first
      shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
      register: run_result

    - debug:
        msg: "{{ run_result.stdout }}"

- hosts: "{{ passive_hostname | mandatory }}"
  tasks:
    - name: Run the shutdown on passive server first
      shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
      register: run_result

    - debug:
        msg: "{{ run_result.stdout }}"
but I don't know how to set the value of active_hostname & passive_hostname based on the value from the script above.
How can I set the Ansible variable of active_hostname & passive_hostname based on the output of the first section?
A better solution that came to my mind is to add the hosts to new groups according to their state.
This is more flexible when there are more than two hosts.
- hosts: all
  gather_facts: no
  vars:
    ans_script_path: /tmp/
  tasks:
    - name: Grab state of server
      shell: |
        cat {{ ans_script_path }}currentstate.log
      register: server_state

    - add_host:
        hostname: "{{ item }}"
        # every host will be added to a new group according to its state
        groups: "{{ 'active' if hostvars[item].server_state.stdout == 'ACTIVE' else 'passive' }}"
        # Shorter, but the new groups will be in capital letters
        # groups: "{{ hostvars[item].server_state.stdout }}"
      loop: "{{ ansible_play_hosts }}"
      changed_when: false

    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"

- hosts: active
  gather_facts: no
  tasks:
    - name: Run the shutdown on active server first
      shell: hostname -f  # changed that for debugging
      register: run_result

    - debug:
        msg: "{{ run_result.stdout }}"

- hosts: passive
  gather_facts: no
  tasks:
    - name: Run the shutdown on passive server first
      shell: hostname -f
      register: run_result

    - debug:
        msg: "{{ run_result.stdout }}"
Given that test-001 is PASSIVE and test-002 is ACTIVE, a run produces:
PLAY [all] ***************************************************************
TASK [Grab state of server] **********************************************
ok: [test-002]
ok: [test-001]
TASK [add_host] **********************************************************
ok: [test-001] => (item=test-001)
ok: [test-001] => (item=test-002)
TASK [show the groups the host(s) are in] ********************************
ok: [test-001] => {
"msg": [
"passive"
]
}
ok: [test-002] => {
"msg": [
"active"
]
}
PLAY [active] *************************************************************
TASK [Run the shutdown on active server first] ****************************
changed: [test-002]
TASK [debug] **************************************************************
ok: [test-002] => {
"msg": "test-002"
}
PLAY [passive] ************************************************************
TASK [Run the shutdown on passive server first] ****************************
changed: [test-001]
TASK [debug] **************************************************************
ok: [test-001] => {
"msg": "test-001"
}
PLAY RECAP ****************************************************************
test-001 : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test-002 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
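
A variant of the same idea: the group_by module builds such groups without looping over hostvars. A minimal sketch, reusing the registered server_state from above:
- name: Put each host into a group named after its state
  group_by:
    key: "{{ server_state.stdout | lower }}"
  changed_when: false
The subsequent plays against hosts: active and hosts: passive stay exactly the same.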
For example, given two remote hosts
shell> ssh admin@test_11 cat /tmp/currentstate.log
ACTIVE
shell> ssh admin@test_13 cat /tmp/currentstate.log
PASSIVE
The playbook below reads the files and runs the commands on active and passive servers
shell> cat pb.yml
- hosts: "{{ host1 }},{{ host2 }}"
gather_facts: false
vars:
server_states: "{{ dict(ansible_play_hosts|
zip(ansible_play_hosts|
map('extract', hostvars, ['server_state', 'stdout'])|
list)) }}"
server_active: "{{ server_states|dict2items|
selectattr('value', 'eq', 'ACTIVE')|
map(attribute='key')|list }}"
server_pasive: "{{ server_states|dict2items|
selectattr('value', 'eq', 'PASSIVE')|
map(attribute='key')|list }}"
tasks:
- command: cat /tmp/currentstate.log
register: server_state
- debug:
var: server_state.stdout
- block:
- debug:
var: server_states
- debug:
var: server_active
- debug:
var: server_pasive
run_once: true
- command: echo 'Shutdown active server'
register: out_active
delegate_to: "{{ server_active.0 }}"
- command: echo 'Shutdown passive server'
register: out_pasive
delegate_to: "{{ server_pasive.0 }}"
- debug:
msg: |
{{ server_active.0 }}: [{{ out_active.stdout }}] {{ out_active.start }}
{{ server_pasive.0 }}: [{{ out_pasive.stdout }}] {{ out_pasive.start }}
run_once: true
shell> ansible-playbook pb.yml -e host1=test_11 -e host2=test_13
PLAY [test_11,test_13] ***********************************************************************
TASK [command] *******************************************************************************
changed: [test_13]
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_state.stdout: ACTIVE
ok: [test_13] =>
server_state.stdout: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_states:
test_11: ACTIVE
test_13: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_active:
- test_11
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_pasive:
- test_13
TASK [command] *******************************************************************************
changed: [test_11]
changed: [test_13 -> test_11]
TASK [command] *******************************************************************************
changed: [test_11 -> test_13]
changed: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: |-
test_11: [Shutdown active server] 2022-10-27 11:16:00.766309
test_13: [Shutdown passive server] 2022-10-27 11:16:02.501907
PLAY RECAP ***********************************************************************************
test_11: ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
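
For reference, the server_states expression works by extracting each host's registered stdout and zipping it back with the host names; with the two hosts above, the intermediate values are roughly:
ansible_play_hosts                                        -> ['test_11', 'test_13']
...|map('extract', hostvars, ['server_state', 'stdout']) -> ['ACTIVE', 'PASSIVE']
dict(zip(hosts, states))                                  -> {'test_11': 'ACTIVE', 'test_13': 'PASSIVE'}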
From the description of your use case I understand that you'd like to perform tasks on certain servers which have a service role installed (annot.: Terracotta Server), based on a certain service state.
Therefore, I'd like to recommend an approach with Custom facts.
Depending on whether you have control over where the currentstate.log is placed or how it is structured, you could use, for example, something like
cat /tmp/ansible/service/terracotta.fact
[currentstate]
ACTIVE = true
PASSIVE = false
or add dynamic facts by adding executable scripts to facts.d ...
Alternatively, you can add the current service state to your host facts by creating and running a script in facts.d which just reads the content of /tmp/currentstate.log.
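A dynamic fact script of that kind could be as small as the sketch below; facts.d executables must be marked executable and print JSON, and the path has to match the fact_path used in the playbook:
#!/bin/sh
# /tmp/ansible/service/terracotta.fact (chmod +x)
echo "{\"currentstate\": \"$(cat /tmp/currentstate.log)\"}"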
Then, a sample playbook like
---
- hosts: localhost
  become: false
  gather_facts: true
  fact_path: /tmp/ansible/service
  gather_subset:
    - "!all"
    - "!min"
    - "local"

  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
      when: ansible_local.terracotta.currentstate.active | bool
will result in output like
TASK [Show Gathered Facts] ******
ok: [localhost] =>
  msg:
    ansible_local:
      terracotta:
        currentstate:
          active: 'true'
          passive: 'false'
    gather_subset:
      - '!all'
      - '!min'
      - local
    module_setup: true
Another approach is to address how the inventory is built and to group the hosts:
[terracotta:children]
terracotta_active
terracotta_passive
[terracotta_active]
terracotta1.example.com
[terracotta_passive]
terracotta2.example.com
You can then easily define where a playbook or task should run, simply by targeting hosts and groups:
ansible-inventory -i hosts --graph
@all:
  |--@terracotta:
  |  |--@terracotta_active:
  |  |  |--terracotta1.example.com
  |  |--@terracotta_passive:
  |  |  |--terracotta2.example.com
  |--@ungrouped:
ansible-inventory -i hosts terracotta_active --graph
@terracotta_active:
  |--terracotta1.example.com
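For instance, a play pinned to the active group only (a sketch; the echo stands in for the real shutdown script):
- hosts: terracotta_active
  gather_facts: no
  tasks:
    - name: Run the shutdown on the active server first
      command: echo 'shutdown'  # placeholder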
or use conditionals based on ansible_facts, for example
when: "'terracotta_active' in group_names"
... from my understanding, both would be minimal and simple solutions that avoid re-implementing functionality which seems to be there already.

Play recap and ignored=1 in Ansible

I am checking whether all services related to a docker-compose.yml are running on a system, using the code snippet shown below.
---
- name:
  shell: docker-compose ps -q "{{ item }}"
  register: result
  ignore_errors: yes
The code above works as expected. I have to ignore errors, otherwise Ansible will not complete. The following result shows ignored=1:
PLAY RECAP *******************************************************************************************************
192.168.50.219 : ok=38 changed=12 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
If this completes successfully I want to run a subsequent playbook but don't know how to specify ignored=1 correctly.
---
- name:
  include_tasks: do_other_things.yml
  when: ignored is false
How do I get the result from PLAY RECAP into something I can test with?
A better idea than trying to cope with the error returned by docker compose ps when the container does not exist would be to use the purpose-built module docker_container_info to achieve the same.
Given the playbook:
- hosts: localhost
  gather_facts: no
  tasks:
    - docker_container_info:
        name: "{{ item }}"
      register: containers
      loop:
        - node1   # exists
        - node404 # does not exist

    - debug:
        msg: "`{{ item.item }}` is not started"
      loop: "{{ containers.results }}"
      loop_control:
        label: "{{ item.item }}"
      when: not item.exists
This would yield:
TASK [docker_container_info] *************************************************
ok: [localhost] => (item=node1)
ok: [localhost] => (item=node404)
TASK [debug] *****************************************************************
skipping: [localhost] => (item=node1)
ok: [localhost] => (item=node404) =>
msg: `node404` is not started
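
As for the literal question: the PLAY RECAP itself isn't exposed as a variable, but the registered result is, so the follow-up can branch on that instead. A sketch against the original shell task (with a loop, the per-item outcomes live in result.results):
- name:
  include_tasks: do_other_things.yml
  when: result is succeeded  # or, for a single run: result.rc == 0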

Ansible playbook is failing for 'become:yes'

The following ansible-playbook works fine with non-sudo access,
but fails when I uncomment become: yes.
---
- hosts: all
  become: yes
  tasks:
    - name: Register the policy file in a variable
      read_csv:
        path: policy.csv
      delegate_to: localhost
      register: csv_file

    - name: Check rules pre-remediation
      command: "{{ item.Compliance_check }}"
      register: output
      with_items: "{{ csv_file.list }}"

    - name: Perform remediation
      command: "{{ item.item.Remediation }}"
      when: item.item.Expected_result != item.stdout
      with_items: "{{ output.results }}"
The inventory file looks like:
10.136.59.110 ansible_ssh_user=username ansible_ssh_pass=password ansible_sudo_pass=password
Error I'm facing is:
user#hostname:~/git-repo/MCI$ ansible-playbook playbook.yaml -i inventory
PLAY [all] *******************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************
ok: [10.136.59.109]
TASK [Register the policy file in a variable] ********************************************************************************************
Sorry, try again.
fatal: [10.136.59.109 -> localhost]: FAILED! => {"changed": false, "module_stderr": "[sudo via ansible, key=mjcqjbcyeemygkxwycgeftiikivnylsj] password:\nsudo: no password was provided\nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *******************************************************************************************************************************
10.136.59.109 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I can't understand why this error is showing. I have tried providing the correct sudo password both in the inventory and at run-time using --ask-become-pass.
Please help me find the cause of the error.
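One thing worth checking (a guess based on the traceback): with become: yes at play level, the read_csv task delegated to localhost also tries to sudo on the control node, where the ansible_sudo_pass from the inventory may not be valid. A sketch that skips escalation for that local task:
- name: Register the policy file in a variable
  read_csv:
    path: policy.csv
  delegate_to: localhost
  become: no  # no privilege escalation needed just to read a local CSV
  register: csv_file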

Use dynamic variable name

I'm trying to get the value of ip_address from the following yaml that I'm including as variables on ansible:
common:
  ntp:
    - time.google.com
node1:
  default_route: 10.128.0.1
  dns:
    - 10.128.0.2
  hostname: ip-10-128-5-17
  device_interface: ens5
  cluster_interface: ens5
  interfaces:
    ens5:
      ip_address: 10.128.5.17
      nat_ip_address: 18.221.63.178
      netmask: 255.255.240.0
version: 2
However, the network interface (ens5 here) may be named something else, such as eth0. My Ansible code is this:
- hosts: all
  tasks:
    - name: Read configuration from the yaml file
      include_vars: "{{ config_yaml }}"

    - name: Dump Interface Settings
      vars:
        msg: node1.interfaces.{{ cvp_device_interface }}.ip_address
      debug:
        msg: "{{ msg }}"
      tags: debug_info
Running the code like this I can get the key's name:
TASK [Dump Interface Settings] *************************************************
ok: [18.221.63.178] => {
    "msg": "node1.interfaces.ens5.ip_address"
}
But what I actually need is the value (i.e: something like {{ vars[msg] }}, which should expand into {{ node1.interfaces.ens5.ip_address }}). How can I accomplish this?
Use square brackets.
Example: a minimal playbook, which defines a variable called "device". This variable is used to return the active status of the device.
- hosts: localhost
  connection: local
  vars:
    device: enx0050b60c19af
  tasks:
    - debug: var=device
    - debug: var=hostvars.localhost.ansible_facts[device].active
Output:
$ ansible-playbook example.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] *******************************************************************
TASK [Gathering Facts] *************************************************************
ok: [localhost]
TASK [debug] ***********************************************************************
ok: [localhost] => {
    "device": "enx0050b60c19af"
}
TASK [debug] ***********************************************************************
ok: [localhost] => {
    "hostvars.localhost.ansible_facts[device].active": true
}
PLAY RECAP *************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
Applied to your playbook (see comment), the square-bracket lookup becomes:
- hosts: all
  tasks:
    - name: Read configuration from the yaml file
      include_vars: "{{ config_yaml }}"

    - name: Dump Interface Settings
      debug:
        msg: "{{ node1['interfaces'][cvp_device_interface]['ip_address'] }}"
      tags: debug_info
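If the variable's name itself is only known at runtime (closer to the vars[msg] idea in the question), the vars lookup can be combined with the same bracket hops; a sketch:
- debug:
    msg: "{{ lookup('vars', 'node1')['interfaces'][cvp_device_interface]['ip_address'] }}"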

Ansible - set var with condition

I'm trying to set a value for a prompted variable with the following code:
- name: "test"
hosts: localhost
connection: local
become: yes
vars_prompt:
- name: https
prompt: Use https(yes/NO)?
private: no
default: no
vars:
protocol: "{{ 'https' if https else 'http' }}"
tasks:
- debug:
msg: "{{ https | bool }}"
- debug:
msg: "{{ protocol }}"
The result is always https... why?
Use https(yes/NO)? [False]: no
PLAY [test] ********************************************************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] *******************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": false
}
TASK [debug] *******************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "https"
}
PLAY RECAP *********************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I have read a lot of questions here, and the correct way to do this seems to be defining the conditional variable like I'm doing... I really don't know if there is any problem with doing it with a prompted variable...
I found the answer to my own question... The correct way to do this is to use " | bool":
- name: "test"
hosts: localhost
connection: local
become: yes
vars_prompt:
- name: https
prompt: Use https(yes/NO)?
private: no
default: no
vars:
protocol: "{{ 'https' if (https | bool) else 'http' }}"
tasks:
- debug:
msg: "{{ https | bool }}"
- debug:
msg: "{{ protocol }}"
