What I am trying to achieve with this Ansible code: the playbook should be triggered when an interface flaps on a Cisco Nexus device. It should fetch the device time (show clock) and then pull the past 15 minutes from the device's "show logging" output.
Ansible experts, I need help manipulating the time to get the past 15 minutes from show logging. Below is the code written so far.
cat interfaceflapping.yml
---
- name: Cisco NXOS
  hosts: all
  connection: network_cli
  gather_facts: false
  vars:
    - cmdlist: sh clock
    - ansible_python_interpreter: /usr/bin/python3
    - ansible_network_os: nxos
  tasks:
    - name: Execute command
      nxos_command:
        commands: "{{ cmdlist }}"
      register: output

    - set_fact:
        arr: "{{ output.stdout_lines[0][1].split() }}"

    - debug:
        msg: "{{ arr[0] }}"

    - name: print variable
      set_fact:
        my_string_var: "{{ arr[0].split(':') | default('') }}"

    - debug:
        msg: "{{ my_string_var }}"

    - name: Execute command
      nxos_command:
        commands: sh logging | include 2/3
      register: output1

    - debug:
        msg: "{{ output1 }}"
      when: output1.stdout is search(my_string_var)
==================
The output is below:
[LABPC@lab-jump-host dow]$ ansible-playbook interfaceflapping.yml -i inventory1.txt --limit nxos --verbose
Using /etc/ansible/ansible.cfg as config file
PLAY [Cisco NXOS] ******************************************************************************************************************************************************************************************
TASK [Execute command] *************************************************************************************************************************************************************************************
ok: [nxos] => {"changed": false, "stdout": ["Time source is NTP\n18:50:15.681 UTC Sat Jul 02 2022"], "stdout_lines": [["Time source is NTP", "18:50:15.681 UTC Sat Jul 02 2022"]]}
TASK [set_fact] ********************************************************************************************************************************************************************************************
ok: [nxos] => {"ansible_facts": {"arr": ["18:50:15.681", "UTC", "Sat", "Jul", "02", "2022"]}, "changed": false}
TASK [debug] ***********************************************************************************************************************************************************************************************
ok: [nxos] => {
"msg": "18:50:15.681"
}
TASK [print variable] **************************************************************************************************************************************************************************************
ok: [nxos] => {"ansible_facts": {"my_string_var": ["18", "50", "15.681"]}, "changed": false}
TASK [debug] ***********************************************************************************************************************************************************************************************
ok: [nxos] => {
"msg": [
"18",
"50",
"15.681"
]
}
TASK [Execute command] *************************************************************************************************************************************************************************************
ok: [nxos] => {"changed": false, "stdout": [""], "stdout_lines": [[""]]}
TASK [debug] ***********************************************************************************************************************************************************************************************
fatal: [nxos]: FAILED! => {"msg": "The conditional check 'output1.stdout is search(my_string_var)' failed. The error was: Unexpected templating type error occurred on ({% if output1.stdout is search(my_string_var) %} True {% else %} False {% endif %}): unhashable type: 'list'\n\nThe error appears to be in '/home/LABPC/gomathi/dow/root/dow/interfaceflapping.yml': line 32, column 8, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - debug:\n ^ here\n"}
PLAY RECAP *************************************************************************************************************************************************************************************************
nxos : ok=6 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
==============
Please help me get the past 15 minutes of output from show logging. I have hardcoded the interface number as 2/3.
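For what it's worth, the conditional above fails because my_string_var is a list (arr[0].split(':') returns one) and the search test needs a string. Below is a minimal, untested sketch of the 15-minute window logic, assuming the show clock output shown above and NX-OS log lines that begin with a timestamp like "2022 Jul  2 18:41:27"; the 900-second offset and the strftime('%s') epoch trick (glibc-specific) are my assumptions, not anything from the original playbook:

    - name: Parse the device clock and compute the epoch 15 minutes ago
      set_fact:
        cutoff_epoch: "{{ ((output.stdout_lines[0][1]
                            | to_datetime('%H:%M:%S.%f %Z %a %b %d %Y')).strftime('%s') | int) - 900 }}"

    - name: Collect log lines from the past 15 minutes
      set_fact:
        recent_logs: "{{ recent_logs | default([]) + [item] }}"
      loop: "{{ output1.stdout_lines[0] }}"
      when:
        - item is match('^\d{4} \w{3}')          # only lines that start with a timestamp
        - ((item.split()[:4] | join(' '))
           | to_datetime('%Y %b %d %H:%M:%S')).strftime('%s') | int >= cutoff_epoch | int

    - debug:
        msg: "{{ recent_logs | default([]) }}"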
The following ansible-playbook works fine for non-sudo access,
but fails when I uncomment become: yes.
---
- hosts: all
  become: yes
  tasks:
    - name: Register the policy file in a variable
      read_csv:
        path: policy.csv
      delegate_to: localhost
      register: csv_file

    - name: Check rules pre-remediation
      command: "{{ item.Compliance_check }}"
      register: output
      with_items:
        "{{ csv_file.list }}"

    - name: Perform remediation
      command: "{{ item.item.Remediation }}"
      when: item.item.Expected_result != item.stdout
      with_items:
        "{{ output.results }}"
The inventory file looks like:
10.136.59.110 ansible_ssh_user=username ansible_ssh_pass=password ansible_sudo_pass=password
Error I'm facing is:
user@hostname:~/git-repo/MCI$ ansible-playbook playbook.yaml -i inventory
PLAY [all] *******************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************
ok: [10.136.59.109]
TASK [Register the policy file in a variable] ********************************************************************************************
Sorry, try again.
fatal: [10.136.59.109 -> localhost]: FAILED! => {"changed": false, "module_stderr": "[sudo via ansible, key=mjcqjbcyeemygkxwycgeftiikivnylsj] password:\nsudo: no password was provided\nsudo: 1 incorrect password attempt\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
PLAY RECAP *******************************************************************************************************************************
10.136.59.109 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I can't understand why this error appears. I have tried providing the correct sudo password both in the inventory and at run time using --ask-become-pass.
Please help me find the cause of the error.
Using Ansible v2.9.12
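One hedged guess at the cause, judging only from the output above: read_csv is delegated to localhost while become: yes applies play-wide, so Ansible attempts to sudo on the control node with the password configured for the remote host. A minimal sketch that turns escalation off for the local task (and uses the current ansible_become_pass spelling in the inventory instead of the older ansible_sudo_pass), assuming the rest of the playbook stays as-is:

    - name: Register the policy file in a variable
      read_csv:
        path: policy.csv
      delegate_to: localhost
      become: no          # reading a local CSV needs no sudo
      register: csv_file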
Question: I'd like Ansible to fail/stop the play when a task fails on any host, when multiple hosts execute the task. In that sense, Ansible should abort the task from further execution. The configuration has to work inside a role, so using serial, or splitting into different plays, is not possible.
Example:
- hosts:
    - host1
    - host2
    - host3
  any_errors_fatal: true
  tasks:
    - name: always fail
      shell: /bin/false
      throttle: 1
This produces:
===== task | always fail =======
host1: fail
host2: fail
host3: fail
Meaning, the task is still executed on the second and third hosts. I'd like the whole play to fail/stop once a task fails on any host. When the task fails on the last host, Ansible should abort as well.
Desired outcome:
===== task | always fail =======
host1: fail
host2: not executed/skipped, because host1 failed
host3: not executed/skipped, because host1 failed
As you can see, I've fiddled around with error handling, but to no avail.
Background info: I've been developing an idempotent Ansible role for mysql. It is possible to setup a cluster with multiple hosts. The role also supports adding an arbiter.
The arbiter does not have the mysql application installed, but the host is still required in the play.
Now, imagine three hosts. Host1 is the arbiter, host2 and host3 have mysql installed, setup in a cluster. The applications are setup by the Ansible role.
Now, Ansible executes the role for a second/third/fourth/whatever time, and changes a config setting of mysql. Mysql needs a rolling restart. Usually, one writes something along the lines of:
- template:
    src: mysql.j2
    dest: /etc/mysql
  register: mysql_config
  when: mysql.role != 'arbiter'

- service:
    name: mysql
    state: restarted
  throttle: 1
  when:
    - mysql_config.changed
    - mysql.role != 'arbiter'
The downside of this Ansible configuration is that if mysql fails to start on host2 for whatever reason, Ansible will still restart mysql on host3. That is undesired, because if mysql fails on host3 as well, the cluster is lost. So, for this specific task, I'd like Ansible to stop/abort/skip the remaining tasks if mysql has failed to start on a single host in the play.
Ok, this works:
# note that test-multi-01 sets host_which_is_skipped: true
---
- hosts:
    - test-multi-01
    - test-multi-02
    - test-multi-03
  tasks:
    - set_fact:
        host_which_is_skipped: "{{ inventory_hostname }}"
      when: host_which_is_skipped

    - shell: /bin/false
      run_once: yes
      delegate_to: "{{ item }}"
      loop: "{{ ansible_play_hosts }}"
      when:
        - item != host_which_is_skipped
        - result is undefined or result is not failed
      register: result

    - meta: end_play
      when: result is failed

    - debug:
        msg: Will not happen
When the shell command is set to /bin/true, the command is executed on host2 and host3.
One way to solve this would be to run the playbook with serial: 1. That way, the tasks are executed serially on the hosts and as soon as one task fails, the playbook terminates:
- name: My playbook
  hosts: all
  serial: 1
  any_errors_fatal: true
  tasks:
    - name: Always fail
      shell: /bin/false
In this case, the task is only executed on the first host. Note that there is also the order clause, with which you can control the order in which hosts are run: https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html#hosts-and-users
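For illustration (a sketch of that keyword, not part of the tested playbook above), order sits at play level:

- name: My playbook
  hosts: all
  serial: 1
  order: sorted    # alphabetical by hostname; 'inventory' is the default
  tasks:
    - name: Always fail
      shell: /bin/false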
Disclaimer: most of the credit for the fully working part of this answer goes to @Tomasz Klosinski's answer on Server Fault.
First, here is a partially working idea that only falls short by one host.
For the demo, I purposely increased my inventory to 5 hosts.
The idea is based on the special variables ansible_play_batch and ansible_play_hosts_all, which are described in the documentation on special variables as:
ansible_play_hosts_all
List of all the hosts that were targeted by the play
ansible_play_batch
List of active hosts in the current play run limited by the serial, aka ‘batch’. Failed/Unreachable hosts are not considered ‘active’.
The idea, coupled with your attempt at using throttle: 1, should work, but falls short by one host, executing on host2 when it should skip it.
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - shell: /bin/false
      when: "ansible_play_batch | length == ansible_play_hosts_all | length"
      throttle: 1
This yields the recap:
PLAY [all] ***********************************************************************************************************
TASK [shell] *********************************************************************************************************
fatal: [host1]: FAILED! => {"changed": true, "cmd": "/bin/false", "delta": "0:00:00.003915", "end": "2020-09-06 22:09:16.550406", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:09:16.546491", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [host2]: FAILED! => {"changed": true, "cmd": "/bin/false", "delta": "0:00:00.004736", "end": "2020-09-06 22:09:16.844296", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:09:16.839560", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
skipping: [host3]
skipping: [host4]
skipping: [host5]
PLAY RECAP ***********************************************************************************************************
host1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host3 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host4 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host5 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Looking further, I landed on this answer on Server Fault, which looks to be the right idea for crafting your solution.
Instead of going the normal way, the idea is to delegate everything from the first host, looping over all targeted hosts of the play, because in a loop you can easily access the result registered on the previous iteration, as long as you register it.
So here is the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - shell: /bin/false
      loop: "{{ ansible_play_hosts }}"
      register: failing_task
      when: "failing_task | default({}) is not failed"
      delegate_to: "{{ item }}"
      run_once: true
This would yield the recap:
PLAY [all] ***********************************************************************************************************
TASK [shell] *********************************************************************************************************
failed: [host1 -> host1] (item=host1) => {"ansible_loop_var": "item", "changed": true, "cmd": "/bin/false", "delta": "0:00:00.003706", "end": "2020-09-06 22:18:23.822608", "item": "host1", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:18:23.818902", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
skipping: [host1] => (item=host2)
skipping: [host1] => (item=host3)
skipping: [host1] => (item=host4)
skipping: [host1] => (item=host5)
NO MORE HOSTS LEFT ***************************************************************************************************
PLAY RECAP ***********************************************************************************************************
host1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
And just to prove it works as intended, here it is altered to make host2 fail specifically, with the help of failed_when:
- hosts: all
  gather_facts: no
  tasks:
    - shell: /bin/false
      loop: "{{ ansible_play_hosts }}"
      register: failing_task
      when: "failing_task | default({}) is not failed"
      delegate_to: "{{ item }}"
      run_once: true
      failed_when: "item == 'host2'"
Yields the recap:
PLAY [all] ***********************************************************************************************************
TASK [shell] *********************************************************************************************************
changed: [host1 -> host1] => (item=host1)
failed: [host1 -> host2] (item=host2) => {"ansible_loop_var": "item", "changed": true, "cmd": "/bin/false", "delta": "0:00:00.004226", "end": "2020-09-06 22:20:38.038546", "failed_when_result": true, "item": "host2", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:20:38.034320", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
skipping: [host1] => (item=host3)
skipping: [host1] => (item=host4)
skipping: [host1] => (item=host5)
NO MORE HOSTS LEFT ***************************************************************************************************
PLAY RECAP ***********************************************************************************************************
host1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
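Applying the same pattern to the rolling mysql restart from the question could look like the sketch below. This is an untested adaptation, assuming mysql_config was registered per host earlier in the role (so it is reachable through hostvars) and that the host facts match the question:

- service:
    name: mysql
    state: restarted
  loop: "{{ ansible_play_hosts }}"
  delegate_to: "{{ item }}"
  run_once: true
  register: mysql_restart
  when:
    - mysql_restart | default({}) is not failed    # stop rolling after the first failed restart
    - hostvars[item].mysql_config is changed       # only restart when the template changed
    - hostvars[item].mysql.role != 'arbiter'       # the arbiter has no mysql to restart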
I want to create a task with a when condition based on stdout.
Here is an example playbook:
---
- hosts: localhost
  gather_facts: false
  ignore_errors: yes
  vars:
    - dev_ip: '192.168.20.192'
  tasks:
    - name: checkking ssh status
      wait_for:
        host: "{{ dev_ip }}"
        port: 22
        timeout: 2
        state: present
      register: ssh_stat

    - name: checkcondition
      debug:
        msg: "{{ ssh_stat }}"
The message output is:
ok: [localhost] => {
    "msg": {
        "changed": false,
        "elapsed": 2,
        "failed": true,
        "msg": "Timeout when waiting for 192.168.20.192:22"
    }
}
I want to add a when condition to a task so that it runs if the string "Timeout when waiting for 192.168.20.192:22" is in ssh_stat.stdout.
Here's what you need:
---
- name: answer stack overflow
  hosts: localhost
  gather_facts: false
  ignore_errors: yes
  tasks:
    - name: checkking ssh status
      wait_for:
        host: 192.168.1.23
        port: 22
        timeout: 2
        state: present
      register: ssh_stat

    - name: do something else when ssh_stat.msg == "Timeout when waiting for 192.168.1.23:22"
      shell: echo "I am doing it"
      when: ssh_stat.msg == "Timeout when waiting for 192.168.1.23:22"
output:
PLAY [answer stack overflow] **************************************************************************************************************************************************************************************
TASK [checkking ssh status] ***************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host localhost is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
fatal: [localhost]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "elapsed": 3, "msg": "Timeout when waiting for 192.168.1.23:22"}
...ignoring
TASK [do something else when ssh_stat.msg == "Timeout when waiting for 192.168.1.23:22"] ************************************************************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
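A side note on the original wording: the timeout text lives in ssh_stat.msg, not ssh_stat.stdout, and if hard-coding the full message is undesirable, a substring test should work too (a hedged variant of the task above, not from the original answer):

    - name: do something else on any wait_for timeout
      shell: echo "I am doing it"
      when: ssh_stat.msg is defined and 'Timeout when waiting for' in ssh_stat.msg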
I created the test YAML file below against test switches to nail down the configs; the error follows. I also tried defining the provider in the last task, with no luck.
---
- hosts: aus2-mdf-testswitches
  gather_facts: no
  connection: local
  tasks:
    - name: OBTAIN LOGIN CREDENTIALS
      include_vars: secret.yml

    - name: DEFINE PROVIDER
      set_fact:
        provider:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"
          auth_pass: "{{ creds['auth_pass'] }}"

    - name: Delete users with aggregate
      ios_user:
        aggregate:
          - name: chase
        state: absent
Here is the error that was presented. Please keep in mind that I am new to Ansible; this problem might be super easy for this group, but I appreciate any help. FYI, I'm reading from https://docs.ansible.com/ansible/2.4/ios_user_module.html
[ansible@dc1netansible automation]$ ansible-playbook -i inventories/prod/hosts playbooks/deleteUsername.yml
PLAY [aus2-mdf-testswitches] ********************************************************************************************************************************************
TASK [OBTAIN LOGIN CREDENTIALS] *****************************************************************************************************************************************
ok: [aus2-mdf-testsw1]
ok: [aus2-mdf-testsw2]
TASK [DEFINE PROVIDER] **************************************************************************************************************************************************
ok: [aus2-mdf-testsw1]
ok: [aus2-mdf-testsw2]
TASK [Delete users with aggregate] **************************************************************************************************************************************
fatal: [aus2-mdf-testsw1]: FAILED! => {"changed": false, "msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"}
fatal: [aus2-mdf-testsw2]: FAILED! => {"changed": false, "msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"}
to retry, use: --limit @/home/ansible/automation/playbooks/deleteUsername.retry
PLAY RECAP **************************************************************************************************************************************************************
aus2-mdf-testsw1 : ok=2 changed=0 unreachable=0 failed=1
aus2-mdf-testsw2 : ok=2 changed=0 unreachable=0 failed=1
****updated error with new yml config****
---
- hosts: aus2-mdf-testswitches
  gather_facts: no
  connection: local
  tasks:
    - name: OBTAIN LOGIN CREDENTIALS
      include_vars: secret.yml

    - name: DEFINE PROVIDER
      set_fact:
        provider:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"
          auth_pass: "{{ creds['auth_pass'] }}"

    - name: Delete users with aggregate
      ios_user:
        users:
          - name: chase
        authorize: yes
        provider: "{{ provider }}"
        state: absent
      register: result
[ansible@dc1netansible automation]$ ansible-playbook -i inventories/prod/hosts playbooks/deleteUsername.yml
PLAY [aus2-mdf-testswitches] ********************************************************************************************************************************************
TASK [OBTAIN LOGIN CREDENTIALS] *****************************************************************************************************************************************
ok: [aus2-mdf-testsw1]
ok: [aus2-mdf-testsw2]
TASK [DEFINE PROVIDER] **************************************************************************************************************************************************
ok: [aus2-mdf-testsw1]
ok: [aus2-mdf-testsw2]
TASK [Delete users with aggregate] **************************************************************************************************************************************
fatal: [aus2-mdf-testsw1]: FAILED! => {"changed": false, "msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"}
fatal: [aus2-mdf-testsw2]: FAILED! => {"changed": false, "msg": "unable to open shell. Please see: https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell"}
to retry, use: --limit @/home/ansible/automation/playbooks/deleteUsername.retry
PLAY RECAP **************************************************************************************************************************************************************
aus2-mdf-testsw1 : ok=2 changed=0 unreachable=0 failed=1
aus2-mdf-testsw2 : ok=2 changed=0 unreachable=0 failed=1
It could be that my IOS version is too old, as I am running the 12.x train on this Cisco switch; Ansible mentions the module is tested on the 15.x train.
****last update****
PLAY [aus2-mdf-testswitches] ********************************************************************************************************************************************
TASK [OBTAIN LOGIN CREDENTIALS] *****************************************************************************************************************************************
ok: [aus2-mdf-testsw1]
ok: [aus2-mdf-testsw2]
TASK [DEFINE PROVIDER] **************************************************************************************************************************************************
ok: [aus2-mdf-testsw1]
ok: [aus2-mdf-testsw2]
TASK [Delete users with aggregate] **************************************************************************************************************************************
fatal: [aus2-mdf-testsw2]: FAILED! => {"changed": false, "msg": "unable to retrieve current config", "stderr": "show running-config | section username\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\naus2-mdf-testsw2#", "stderr_lines": ["show running-config | section username", " ^", "% Invalid input detected at '^' marker.", "", "aus2-mdf-testsw2#"]}
fatal: [aus2-mdf-testsw1]: FAILED! => {"changed": false, "msg": "unable to retrieve current config", "stderr": "show running-config | section username\r\n ^\r\n% Invalid input detected at '^' marker.\r\n\r\naus2-mdf-testsw1#", "stderr_lines": ["show running-config | section username", " ^", "% Invalid input detected at '^' marker.", "", "aus2-mdf-testsw1#"]}
to retry, use: --limit @/home/ansible/automation/playbooks/deleteUsername.retry
The configs listed here do not work on the IOS version running on my Cisco switch.
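If the 12.x train really is the culprit (the last error shows the switch rejecting "show running-config | section username", a filter older IOS lacks), one hedged workaround is to skip ios_user and push the removal as a raw config line; an untested sketch reusing the provider fact defined above:

    - name: Delete the user via ios_config (avoids "| section")
      ios_config:
        lines:
          - no username chase
        provider: "{{ provider }}"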