How to run Cisco NX-OS Bash shell commands in Ansible?

Is there a way to run Cisco NX-OS Bash shell commands in Ansible without a task going into config mode?
I just want to get the command output below, but I keep failing.
bash-4.3# smartctl -a /dev/sda | egrep 'Model|Firmware|Hours'
Device Model: Micron_M600_MTFDDAT064MBF
Firmware Version: MC04
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 17014
The playbook I've used is below.
- name: running the bash commands
  ios_command:
    commands:
      - conf t
      - feature bash
      - run bash sudo su
      - smartctl -a /dev/sda | egrep 'Model|Firmware|Hours'
  register: uptime
- name: output the result
  debug:
    msg: uptime
- name: run the last command
  ios_command:
    commands: smartctl -a /dev/sda | egrep 'Model|Firmware|Hours'
  register: uptime
- name: write to the file
  ansible.builtin.template:
    src: ./templates/9k_uptime.j2
    dest: ./9k_uptime/9k_uptime.txt
    newline_sequence: '\r\n'
(I'm not proficient in Ansible; I just barely know how to get outputs from bulk devices.)
Any help is much appreciated. Thank you!

As I will have a similar use case, and probably more in the future, I've set up a short test in a RHEL 7.9 environment.
As far as I understand, for Cisco Nexus Series NX-OS and Bash, other modules are recommended, which come from the Community Collections. They need to be installed first
ansible-galaxy collection install cisco.nxos # --ignore-certs
Process install dependency map
Starting collection install process
Installing ... to '/home/${USER}/.ansible/collections/ansible_collections/community/cisco ...'
as well as adding the collection path to the library path.
vi ansible.cfg
...
[defaults]
library = /usr/share/ansible/plugins/modules:~/.ansible/plugins/modules:~/.ansible/collections/ansible_collections/
...
Now it is possible to run commands on the remote device:
# Usage of command module
# Doc: https://docs.ansible.com/ansible/latest/collections/cisco/nxos/nxos_command_module.html
- name: Run command on remote device
  cisco.nxos.nxos_command:
    commands: show version
  register: results
- name: Show results
  debug:
    msg: "{{ results.stdout_lines }}"
It is also possible to gather device information, for example the configuration:
# Gather device information
# Doc: https://docs.ansible.com/ansible/latest/collections/cisco/nxos/nxos_facts_module.html
- name: Gather only the config and default facts
  cisco.nxos.nxos_facts:
    gather_subset:
      - config
- name: Show facts
  debug:
    msg: "{{ ansible_facts }}"
If you are only interested in the kernel uptime, the filter
commands: show version | i uptime
would be enough.
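A minimal task sketch using that filter (module as above; the variable name uptime_result is only an example):
- name: Gather kernel uptime only
  cisco.nxos.nxos_command:
    commands: show version | i uptime
  register: uptime_result
- name: Show kernel uptime
  debug:
    msg: "{{ uptime_result.stdout_lines }}"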

Related

Send the output from Ansible to a file

I am trying to gain knowledge in Ansible and solve a few problems.
I am not sure if it is even possible, but can the output be saved locally on the server the playbook is being run from?
In the example below I am just printing to the terminal where I run the playbook. That is not much use when there is a large amount of data; I would like it saved to a file on the server I am running the playbook from instead.
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output
    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc
    - name: Print output to console
      debug:
        msg: "{{ command_output.stdout }}"
I really want the output to go to a file. I can't find anything about whether this is possible.
As you can read in the Ansible documentation, you can create a local configuration file ansible.cfg inside the directory where your playbook lives and then set the proper log file option so that all the playbook output is written there: Ansible output documentation
By default Ansible sends output about plays, tasks, and module arguments to your screen (STDOUT) on the control node. If you want to capture Ansible output in a log, you have three options:
To save Ansible output in a single log on the control node, set the log_path configuration file setting. You may also want to set display_args_to_stdout, which helps to differentiate similar tasks by including variable values in the Ansible output.
To save Ansible output in separate logs, one on each managed node, set the no_target_syslog and syslog_facility configuration file settings.
To save Ansible output to a secure database, use AWX or Red Hat Ansible Automation Platform. You can then review history based on hosts, projects, and particular inventories over time, using graphs and/or a REST API.
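A minimal ansible.cfg sketch for the first option above (the log file path is just an example):
[defaults]
# Write the full playbook output to this file on the control node
log_path = ./ansible.log
# Optionally include module arguments in the logged task lines
display_args_to_stdout = True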
If you just want to write the result of a task to a file, use the copy module delegated to localhost:
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output
    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc
    - name: Create your local file on master node
      ansible.builtin.file:
        path: /your/local/file
        state: touch   # ensure the file exists; the default state fails on a missing path
        owner: foo
        group: foo
        mode: '0644'
      delegate_to: localhost
    - name: Print output to file
      ansible.builtin.copy:
        content: "{{ command_output.stdout }}"
        dest: /your/local/file
      delegate_to: localhost
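When the play targets more than one host, every host would overwrite the same local path. A hedged variation (the file name pattern is only an example) keys the destination on inventory_hostname so each host gets its own file:
- name: Print output to a per-host file
  ansible.builtin.copy:
    content: "{{ command_output.stdout }}"
    dest: "/your/local/file.{{ inventory_hostname }}"
  delegate_to: localhost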

Ansible module lineinfile doesn't properly write all the output

I've written a small playbook to run the sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product command and write the output to a result file, using the following code as a .yml:
# Check if server is vmware
---
- name: Check if server is vmware
  hosts: all
  become: yes
  #ignore_errors: yes
  gather_facts: False
  serial: 50
  #become_flags: -i
  tasks:
    - name: Run uptime command
      #become: yes
      shell: "sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product"
      register: upcmd
    - debug:
        msg: "{{ upcmd.stdout }}"
    - name: write to file
      lineinfile:
        path: /home/myuser/ansible/mine/vmware.out
        create: yes
        line: "{{ inventory_hostname }};{{ upcmd.stdout }}"
      delegate_to: localhost
      #when: upcmd.stdout != ""
When running the playbook against a list of hosts I get different, weird results: even though the debug shows the correct output, when I check the /home/myuser/ansible/mine/vmware.out file I see only part of the entries present. Even weirder, if I run the playbook again it correctly populates the whole list, but only on the second run. I have repeated this several times with some minor tweaks but am not getting the expected result. Running with -v or -vv shows nothing unusual.
You are writing to the same file in parallel on localhost, so I suspect you're hitting a write concurrency issue. The variant below serializes the writes through a single delegated task that loops over every host in the play. Try it and see if it fixes your problem:
- name: write to file
  lineinfile:
    path: /home/myuser/ansible/mine/vmware.out
    create: yes
    line: "{{ host }};{{ hostvars[host].upcmd.stdout }}"
  delegate_to: localhost
  run_once: true
  loop: "{{ ansible_play_hosts }}"
  loop_control:
    loop_var: host
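An alternative sketch under the same assumptions (one delegated writer, upcmd registered on every host) is to render the whole file in one go with the copy module instead of line-by-line lineinfile:
- name: write all results in one go
  ansible.builtin.copy:
    content: |
      {% for host in ansible_play_hosts %}
      {{ host }};{{ hostvars[host].upcmd.stdout }}
      {% endfor %}
    dest: /home/myuser/ansible/mine/vmware.out
  delegate_to: localhost
  run_once: true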
From your described case I understand that you would like to find out how to check whether a server is virtual.
That information is already collected by the setup module.
---
- hosts: linux_host
  become: false
  gather_facts: true
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
For a Linux system virtualized under MS Hyper-V, the output could contain
...
bios_version: Hyper-V UEFI Release v1.0
...
system_vendor: Microsoft Corporation
uptime_seconds: 2908494
...
userspace_architecture: x86_64
userspace_bits: '64'
virtualization_role: guest
virtualization_type: VirtualPC
and the uptime in seconds is already included (uptime_seconds), matching what the uptime command reports:
uptime
... up 33 days ...
For just a virtualization check, one could restrict the fact gathering with
gather_subset:
  - '!all'
  - '!min'
  - virtual
which results in a full output of just
module_setup: true
virtualization_role: guest
virtualization_type: VirtualPC
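As a sketch, such a restricted gathering could look like the following (task names are only examples):
- name: Gather only virtualization facts
  ansible.builtin.setup:
    gather_subset:
      - '!all'
      - '!min'
      - virtual
- name: Show virtualization type and role
  debug:
    msg: "{{ ansible_facts.virtualization_type }} / {{ ansible_facts.virtualization_role }}"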
By Caching facts
... you have access to variables and information about all hosts even when you are only managing a small number of servers
on your Ansible Control Node. In ansible.cfg you can configure where and how they are stored and for how long.
fact_caching = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 86400 # seconds
This would be a minimal and simple solution without re-implementing functionality which is already there.
Further Documentation and Q&A
Ansible facts
What is the exact list of Ansible setup min?

How to output a "show" command result from Cisco ASA in playbook?

I'm relatively new to Ansible, but I'm just wondering what the syntax looks like if I want to run a command on an ASA like show run | i opmanager and then print the output. I have put a pause in because, after the output is printed, I want it to wait before continuing.
I have an ASA I want to configure with the playbook, to see if I can deploy new SNMPv3 credentials whilst also removing an old set.
This task removes any existing ManageEngine config for SNMP:
tasks:
  - name: Show remainging opmanager config
    asa_command:
      commands: show run | i opmanager
    register: ManageEngine
    pause:
      prompt: "Do you want to proceed? (yes/no)"
    register: confirm
Regarding your question
I'm just wondering what the syntax looks like
you may have a look into the Ansible Collections documentation Run arbitrary commands on Cisco ASA devices, the documentation of debug_module to Print statements during execution and the pause_module to Pause playbook execution.
# This task removes any existing ManageEngine config for SNMP
tasks:
  - name: Show remaining opmanager config
    asa_command:
      commands: show run | i opmanager
    register: ManageEngine
  - name: Show result
    debug:
      msg: "{{ ManageEngine }}"
  - name: Pause until confirmation
    pause:
      prompt: "Do you want to proceed? (yes/no)"

Ansible output printing unwanted things: how to format and display only specific data

I am using Ansible 2.4 on CentOS, trying to run the playbook below against remote servers and get the output. The problem is that the yum info output is also shown in JSON format, but I need to display only the command output itself. How can I get rid of the JSON formatting?
---
- hosts: GeneralServer
  tasks:
    - name: Checking the service status
      shell: systemctl status {{ item }}
      with_items:
        - httpd
        - crond
        - postfix
        - sshd
      register: service
    - debug: var=service
    - name: Checking the package info
      shell: yum info {{ item }}
      with_items:
        - httpd
        - postfix
      register: info
    - debug: var=info
    - name: Executing the mysql running scripts in mysql
      shell: mysql -u username --password mysql -Ns -e 'show databases;'
      register: databases
    - debug: var=databases
Also, I am new to callback modules. Please help me to resolve this issue.
Is it possible to display only the stdout_lines values?
You can try to play with different callback plugins to alter your output, e.g.:
$ ANSIBLE_STDOUT_CALLBACK=oneline ansible-playbook myplaybook.yml
$ ANSIBLE_STDOUT_CALLBACK=minimal ansible-playbook myplaybook.yml
But generally you will not avoid JSON, as that is how Ansible represents data.
To reduce the amount of information, you can use different techniques, for example the json_query filter.
Something like this:
- debug:
    msg: "{{ info.results | json_query('[].stdout_lines[]') }}"

Running a Python script via Ansible

I'm trying to run a Python script from an Ansible playbook. I would think this would be an easy thing to do, but I can't figure it out. I've got a project structure like this:
playbook-folder
  roles
    stagecode
      files
        mypythonscript.py
      tasks
        main.yml
  release.yml
I'm trying to run mypythonscript.py within a task in main.yml (which is a role used in release.yml). Here's the task:
- name: run my script!
  command: ./roles/stagecode/files/mypythonscript.py
  args:
    chdir: /dir/to/be/run/in
  delegate_to: 127.0.0.1
  run_once: true
I've also tried ../files/mypythonscript.py. I thought the path for ansible would be relative to the playbook, but I guess not?
I also tried debugging to figure out where I am in the middle of the script, but no luck there either.
- name: figure out where we are
  stat: path=.
  delegate_to: 127.0.0.1
  run_once: true
  register: righthere
- name: print where we are
  debug: msg="{{ righthere.stat.path }}"
  delegate_to: 127.0.0.1
  run_once: true
That just prints out ".". So helpful ...
Try to use the script directive; it works for me.
My main.yml:
---
- name: execute install script
  script: get-pip.py
and the get-pip.py file should be in the files directory of the same role.
If you want to be able to use a relative path to your script rather than an absolute path, then you might be better off using the role_path magic variable to find the path to the role and work from there.
With the structure you are using in the question, the following should work:
- name: run my script!
  command: ./mypythonscript.py
  args:
    chdir: "{{ role_path }}/files"
  delegate_to: 127.0.0.1
  run_once: true
An alternative, straightforward solution:
Let's say you have already built your virtual env under ./env1 and used pip3 to install the needed Python modules.
Now write a playbook task like:
- name: Run a script using an executable in a system path
  script: ./test.py
  args:
    executable: ./env1/bin/python
  register: python_result
- name: Get stdout or stderr from the output
  debug:
    var: python_result.stdout
If you want to execute an inline script without having a separate script file (for example, as a Molecule test), you can write something like this:
- name: Test database connection
  ansible.builtin.command: |
    python3 -c
    "
    import psycopg2;
    psycopg2.connect(
      host='127.0.0.1',
      dbname='db',
      user='user',
      password='password'
    );
    "
You can even insert Ansible variables in this string.
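For instance, a hedged sketch injecting a hypothetical db_host inventory variable into that string:
- name: Test database connection using an inventory variable
  ansible.builtin.command: |
    python3 -c
    "
    import psycopg2;
    psycopg2.connect(host='{{ db_host }}', dbname='db', user='user', password='password');
    "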
