I am trying to learn Ansible and solve a few problems:
I am not sure if this is even possible: can the output be saved locally on the server the playbook is run from?
In the example below, I am just printing the output to the terminal where I am running the playbook. That is not much use when there is a large amount of data. I would like it to be saved to a file on the server I am running the playbook from instead.
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output

    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc

    - name: Print output to console
      debug:
        msg: "{{ command_output.stdout }}"
I really want the output to go to a file, but I can't find anything on whether this is possible.
As you can read in the Ansible documentation, you can create a local configuration file ansible.cfg inside the directory where you have your playbook and then set the log_path option so that all playbook output is written to a file (a minimal ansible.cfg sketch follows the quoted documentation below): Ansible output documentation
By default Ansible sends output about plays, tasks, and module arguments to your screen (STDOUT) on the control node. If you want to capture Ansible output in a log, you have three options:
To save Ansible output in a single log on the control node, set the log_path configuration file setting. You may also want to set display_args_to_stdout, which helps to differentiate similar tasks by including variable values in the Ansible output.
To save Ansible output in separate logs, one on each managed node, set the no_target_syslog and syslog_facility configuration file settings.
To save Ansible output to a secure database, use AWX or Red Hat Ansible Automation Platform. You can then review history based on hosts, projects, and particular inventories over time, using graphs and/or a REST API.
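For reference, a minimal sketch of the first option; [defaults], log_path and display_args_to_stdout are the real setting names, while the file name ./ansible.log is just a placeholder of my choosing:
[defaults]
# write a copy of all playbook output to this file on the control node
log_path = ./ansible.log
# optional: include module arguments in the output to tell similar tasks apart
display_args_to_stdout = True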
If you just want to write the result of a single task to a file, use the copy module delegated to localhost:
---
- name: list os version
  hosts: test
  become: true
  tasks:
    - name: hostname
      command: hostname
      register: command_output

    - name: cat /etc/redhat-release
      command: cat redhat-release chdir=/etc

    - name: Create your local file on the control node
      ansible.builtin.file:
        path: /your/local/file
        owner: foo
        group: foo
        mode: '0644'
        state: touch
      delegate_to: localhost

    - name: Print output to file
      ansible.builtin.copy:
        content: "{{ command_output.stdout }}"
        dest: /your/local/file
      delegate_to: localhost
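If hosts: test matches more than one machine, all of them will write to the same /your/local/file and the last write wins. A small variation of the last task (just a sketch; the path is still your placeholder) keeps one file per inventory host instead:
- name: Print output to one file per host
  ansible.builtin.copy:
    content: "{{ command_output.stdout }}"
    dest: "/your/local/file.{{ inventory_hostname }}"
  delegate_to: localhost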
Related
Relatively new to Ansible here; I'm just wondering what the syntax looks like if I want to run a command on an ASA like show run | i opmanager and then print the output. I have put a pause in because, after the output is printed, I want it to wait before continuing.
I have an ASA I want to configure with the playbook, to see if I can deploy new SNMPv3 credentials to it whilst also removing an old set.
This task removes any existing ManageEngine config for SNMP
tasks:
  - name: Show remaining opmanager config
    asa_command:
      commands: show run | i opmanager
    register: ManageEngine

  - pause:
      prompt: "Do you want to proceed? (yes/no)"
    register: confirm
Regarding your question
I'm just wondering what the syntax looks like
you may have a look at the Ansible collections documentation for asa_command (Run arbitrary commands on Cisco ASA devices), the debug module (Print statements during execution) and the pause module (Pause playbook execution).
# This task removes any existing ManageEngine config for SNMP
tasks:
  - name: Show remaining opmanager config
    asa_command:
      commands: show run | i opmanager
    register: ManageEngine

  - name: Show result
    debug:
      msg: "{{ ManageEngine }}"

  - name: Pause until confirmation
    pause:
      prompt: "Do you want to proceed? (yes/no)"
Ansible Version: 2.8.3
I have the following hosts.yaml file for use in Ansible
I have applications that I want to deploy on potentially both rp_1 and rp_2
---
all:
  vars:
    docker_network_name: devopsNet
    http_protocol: http
    http_host: ansiblenode01_new.example.com
    http_url: "{{ http_protocol }}://{{ http_host }}:{{ http_port }}/{{ http_context }}"
  hosts:
    ansiblenode01_new.example.com:
    ansiblenode02_new.example.com:
  children:
    ##################################################################
    rp_1:
      children:
        httpd:
          hosts:
            ansiblenode01_new.example.com:
          vars:
            number_of_tools: 6
            outside_port: 443
        jenkins:
          hosts:
            ansiblenode01_new.example.com:
          vars:
            http_port: 4444
            http_context: jenkins
        artifactory:
          hosts:
            ansiblenode01_new.example.com:
          vars:
            http_port: 8000
            http_context: artifactory
    rp_2:
      children:
        httpd:
          hosts:
            ansiblenode02_new.example.com:
          vars:
            number_of_tools: 4
            outside_port: 7090
        jenkins:
          hosts:
            ansiblenode02_new.example.com:
          vars:
            http_port: 7990
            http_context: jenkins
        artifactory:
          hosts:
            ansiblenode02_new.example.com:
          vars:
            http_port: 8000
            http_context: artifactory
The following Python wrapper script calls ansible-playbook in a loop to deploy the applications:
#!/usr/bin/python
import yaml
import os
import getpass

with open('hosts.yaml') as f:
    var = yaml.load(f)

sudo_pass = getpass.getpass(prompt="Please enter sudo password: ")

# Running individual ansible-playbook deployment for each application listed and uncommented under 'applications' object.
for network in var['all']['children']:
    for app in var['all']['children'][network]['children']:
        os.system('ansible-playbook deploy.yml --extra-vars "application=' + app + ' ansible_sudo_password=' + sudo_pass + '"')
The problem, as I see it, is that both Ansible and Python use the hosts.yaml file, but not in the way I thought they would, as I'm not too familiar with Ansible.
The hosts.yaml was written in the format required by Ansible.
The Python script opens the YAML file, turns it into a dictionary, and steps through that dictionary looking for application names to pass to the command-line call. The problem is that Python only passes the name of the app as a string when invoking ansible-playbook; the dictionary structure obviously doesn't get passed along. Ansible then opens hosts.yaml as well, but all it does is look for the first occurrence of the app name that was passed as an argument, completely disregarding the structure I've created in the YAML file.
So effectively only the rp_1 group in the YAML file gets executed: Ansible seems to read the YAML top down and stop at the first occurrence, so all or part of the rp_2 group is never processed if it contains some or all of the same apps as rp_1, and the same deployment ends up running twice.
Is there a way to invoke Ansible, or to set the playbooks up, so that Ansible recognizes that my hosts file contains networks (rp_1, rp_2) that I want to set up, and executes the playbooks per the grouping I've created in the YAML file?
Ansible already has this built-in. You do not need a wrapper script.
To run the deploy.yml playbook on all hosts in your hosts.yaml (this is called "inventory" btw.) do this:
ansible-playbook -i hosts.yaml deploy.yml -bK
To only run it on rp_1, do this:
ansible-playbook -i hosts.yaml deploy.yml --limit rp_1 -bK
-b makes ansible become root
-K will make ansible ask for the password to become root
-i <file> specifies the inventory file
--limit <host/group> limits the execution to certain hosts or groups; you can also pass more than one as a comma-separated list (e.g., rp_1,rp_2, as in the example below)
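For example, combining the flags above to deploy to both groups in one run:
ansible-playbook -i hosts.yaml deploy.yml --limit rp_1,rp_2 -bK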
You can also specify a list of hosts/groups in your playbook like this:
- name: do whatever you like
  hosts:
    - rp_1
    - rp_2
  become: yes
  tasks:
    - debug:
        msg: "I'm running on {{ inventory_hostname }}!"
Further reading:
Discovering variables: facts and magic variables
How to build your inventory
Special variables
Using variables
Ansible examples
Accessing variables of "other" hosts: on serverfault and stackoverflow
I want to restrict an Ansible play to a specific host
Here's a cut down version of what I want:
- hosts some_host_group
  tasks:
    - name: Remove existing server files
      hosts: 127.0.0.1
      file:
        dest: /tmp/test_file
        state: present
    - name: DO some other stuff
      file:
        ...
I want to (as an early task) remove a local directory (I've created a file in the example as it's a more easily observed test). I was under the impression that I could limit a play to a set of hosts with the "hosts" parameter on the task,
but I get this error:
ERROR! 'hosts' is not a valid attribute for a Task
$ansible --version
ansible 2.3.1.0
Thanks.
PS I could wrap the ansible in a shell fragment, but that's ugly.
You should use delegate_to or local_action and tell Ansible to run the task only once (otherwise it will try to delete the file as many times as there are target hosts in your play, although that wouldn't really be a problem).
You should also use state: absent, not present, if you want to remove the file, as you stated.
- name: Remove existing server files
  delegate_to: 127.0.0.1
  run_once: true
  file:
    dest: /tmp/test_file
    state: absent
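For completeness, the same task written with the local_action form mentioned above (just a sketch of the equivalent syntax):
- name: Remove existing server files
  local_action:
    module: file
    dest: /tmp/test_file
    state: absent
  run_once: true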
There are syntax errors in your playbook; have a look at Ansible Intro, Local Playbooks and Delegation.
- hosts: localhost
  tasks:
    - name: Remove existing server files
      file:
        dest: /tmp/test_file
        state: present

- hosts: some_host_group
  tasks:
    - name: DO some other stuff
      file:
Is there an easy way to log output from multiple remote hosts to a single file on the server running ansible-playbook?
I have a variable called validate which stores the output of a command executed on each server. I want to take validate.stdout_lines and drop the lines from each host into one file locally.
Here is one of the snippets I wrote, but it did not work:
- name: Write results to logfile
  blockinfile:
    create: yes
    path: "/var/log/ansible/log"
    insertafter: BOF
    block: "{{ validate.stdout }}"
  delegate_to: localhost
When I executed my playbook with the above, it only captured the output from one of the remote hosts. I want to capture the lines from all hosts in that single /var/log/ansible/log file.
One thing you should do is add a marker to blockinfile so that the result from each host is wrapped in its own, uniquely marked block.
The second problem is that the tasks run in parallel (even with delegate_to: localhost, because the per-host loop is handled by the Ansible engine), so one task effectively overwrites another's changes to /var/log/ansible/log.
As a quick workaround you can serialise the whole play:
- hosts: ...
serial: 1
tasks:
- name: Write results to logfile
blockinfile:
create: yes
path: "/var/log/ansible/log"
insertafter: BOF
block: "{{ validate.stdout }}"
marker: "# {{ inventory_hostname }} {mark}"
delegate_to: localhost
The above produces the intended result, but if serial execution is a problem, you might consider writing your own loop for this single task (for ideas refer to support for "serial" on an individual task #12170).
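A rough sketch of such a single, self-looping task, assuming every host in the play has registered validate (ansible_play_hosts is the built-in list of hosts in the current play):
- name: Write results to logfile for all play hosts
  blockinfile:
    create: yes
    path: "/var/log/ansible/log"
    insertafter: BOF
    block: "{{ hostvars[item].validate.stdout }}"
    marker: "# {{ item }} {mark}"
  loop: "{{ ansible_play_hosts }}"
  run_once: true
  delegate_to: localhost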
Speaking of other methods, in two tasks: you can first concatenate the results into a single value (no issue with parallel execution then, but pay attention to delegated facts) and then write it to a file using the copy module (see Write variable to a file in Ansible).
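A minimal sketch of that two-task variant, again assuming each host has registered validate; the fact name all_results is my own choice, while extract and ansible_play_hosts are standard Ansible constructs:
- name: Concatenate results from all hosts into one list
  set_fact:
    all_results: "{{ ansible_play_hosts | map('extract', hostvars, ['validate', 'stdout']) | list }}"
  run_once: true

- name: Write combined results to a single logfile
  copy:
    content: "{{ all_results | join('\n') }}"
    dest: /var/log/ansible/log
  run_once: true
  delegate_to: localhost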
My playbook (/home/user/Ansible/dist/playbooks/test.yml):
- hosts: regional_clients
  tasks:
    - shell: /export/home/user/ansible_scripts/test.sh
      register: shellout
    - debug: var=shellout

- hosts: web_clients
  tasks:
    - shell: /var/www/html/webstart/release/ansible_scripts/test.sh
      register: shellout
    - debug: var=shellout
    - command: echo catalina.sh start
      register: output
    - debug: var=output
The [regional_clients] group is specified in /home/user/Ansible/webproj/hosts and the [web_clients] group is specified in /home/user/Ansible/regions/hosts.
Is there a way I could make the above work? Currently, running the playbook will fail since neither [regional_clients] nor [web_clients] is defined in the default inventory file /home/user/Ansible/dist/hosts.
Yes, you can write a simple shell script:
#!/bin/sh
cat /home/user/Ansible/webproj/hosts /home/user/Ansible/regions/hosts
and call it as a dynamic inventory in Ansible:
ansible-playbook -i my_script test.yml
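For what it's worth, Ansible 2.4 and later also accept multiple inventory sources on the command line, which avoids the helper script entirely; a sketch using the paths from your question:
ansible-playbook -i /home/user/Ansible/webproj/hosts -i /home/user/Ansible/regions/hosts test.yml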
This question, however, looks to me like a problem with your organisation, not a technical one. If your environment is so complex and maintained by different parties, then use some kind of configuration database (and a dynamic inventory in Ansible which would retrieve the data), instead of individual files in user's paths.