ansible: AnsibleModule.log() method does not print anything to stdout/stderr

I am trying to write a custom ansible module.
I need to have some debug info during module execution.
However, the following lines do not print anything, even with the highest verbosity level enabled (-vvvv) and with export ANSIBLE_DEBUG=true:
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(argument_spec={})
module.log(msg="some_message")
The only time I am ever able to see some msg printed is via the following method:
module.exit_json(changed=True, msg=_msg)

It should be emphasized that AnsibleModule.log() does not send its output to stdout or stderr. It sends it to the default system logging facility (the systemd journal where available, otherwise syslog).
In my case this was /var/log/syslog.
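Under the hood this is the same behaviour as the standard library's syslog module. A minimal sketch (Unix-only; the module name is a made-up example) showing that such messages never touch stdout:

```python
# Sketch: AnsibleModule.log() writes via the system logging facility
# (systemd journal or syslog), never stdout/stderr -- the stdlib
# `syslog` module used here behaves the same way.
import io
import sys
import syslog

buf = io.StringIO()
sys.stdout, old_stdout = buf, sys.stdout  # capture anything printed

syslog.openlog("my_custom_module")              # hypothetical module name
syslog.syslog(syslog.LOG_INFO, "some_message")  # lands in /var/log/syslog or the journal
syslog.closelog()

sys.stdout = old_stdout
print("captured on stdout: %r" % buf.getvalue())  # -> captured on stdout: ''
```

Nothing is captured because the message bypasses the process's own streams entirely; you have to look in the system log to find it.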

Related

Ansible job output to json file, while keeping default stdout

I need to run ansible-playbook and output the run results to a json file, and also keep the normal stdout log.
In other words, keep the human readable log stream on stdout, but create a machine readable output to a file.
I can get ansible-playbook to output a json log by setting
[defaults]
log_path = /tmp/log.txt
stdout_callback = json
The problem is that this overrides the stdout callback, so the "normal" human-readable job output no longer appears on stdout. Instead, the JSON is dumped to stdout (as well as the log file) only after the run completes.
What I'm looking for would be some sort of log_callback = json or log_callback = yaml type setting, leaving stdout_callback at its default. However, nothing in the available options seems to fit.
Automated interaction with Ansible has a dedicated library: ansible-runner.
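Another do-it-yourself route: a small notification callback plugin that appends each task result as a JSON line to a file, while leaving the default stdout callback untouched. This is a sketch, not a documented feature; the plugin name and log path are assumptions, and the guarded import only exists so the sketch can load outside Ansible:

```python
import json

try:
    from ansible.plugins.callback import CallbackBase
except ImportError:          # lets this sketch be imported outside Ansible
    CallbackBase = object


def append_json_line(path, host, task, result):
    """Append one machine-readable result record to `path`."""
    with open(path, 'a') as fh:
        fh.write(json.dumps({'host': host, 'task': task, 'result': result}) + '\n')


class CallbackModule(CallbackBase):
    # A notification (non-stdout) callback: enable it via callback_whitelist
    # (callbacks_enabled on newer Ansible) so the default stdout output stays.
    CALLBACK_TYPE = 'notification'
    CALLBACK_NAME = 'json_file'               # hypothetical name

    LOG_PATH = '/tmp/ansible_results.jsonl'   # assumed path

    def v2_runner_on_ok(self, result):
        append_json_line(self.LOG_PATH, result._host.get_name(),
                         result._task.get_name(), result._result)

    v2_runner_on_failed = v2_runner_on_ok
```

Dropped into callback_plugins/ and whitelisted, this gives a machine-readable stream per task alongside the normal human-readable stdout.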

How can I make verbose output for specific task or module in Ansible

I am using AWS ec2 module and I would like to log what userdata is sent to AWS with every command, but I don't want verbose output from all other useless tasks.
Is there any way I can enable verbosity of the ec2 module?
I agree with @techraf that there is no out-of-the-box way to do this.
But Ansible is easily tuneable with plugins!
Drop this code as <playbook_dir>/callback_plugins/verbose_tasks.py:
from ansible.plugins.callback import CallbackBase
import json

try:
    from __main__ import display
except ImportError:
    display = None


class CallbackModule(CallbackBase):
    def v2_runner_on_any(self, result, ignore_errors=False):
        if result._task.action in ['file', 'stat']:
            print('####### DEBUG ########')
            print(json.dumps(result._result, indent=4))
            print('####### DEBUG ########')

    v2_runner_on_ok = v2_runner_on_any
    v2_runner_on_failed = v2_runner_on_any
You can tune which modules' results are printed by changing the ['file', 'stat'] list.
If you need only ec2, replace it with ['ec2'].
I don't think there is a straightforward way.
As a workaround you could run the whole play with no_log: true and add no_log: false explicitly to your task calling ec2 action. Then run the playbook with -v.
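That workaround might look like this in a playbook (the task contents are placeholders):

```yaml
- hosts: all
  no_log: true                 # silence every task by default
  tasks:
    - name: some noisy task
      debug:
        msg: "hidden"

    - name: launch instance
      ec2:
        # ...your ec2 parameters here...
      no_log: false            # only this task's details show up with -v
```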

How can I make Ansible show only errors in execution?

How can I make ansible show only errors inside of some playbook, or even when directly invoked?
I tried suppressing standard output, but that apparently didn't help, because execution errors seem to be written to standard output rather than standard error on Linux.
ansible all -a 'some_command_I_want_to_know_if_it_crashes' 1> /dev/null
I see only errors from Python (exceptions etc.) but not errors from playbook (the red text).
Use the official sample callback plugin called actionable.py.
Put it in the callback_plugins directory and enable it as the stdout callback in ansible.cfg:
[defaults]
stdout_callback = actionable
Just by enabling it you will get much less information in the output, but you can further modify the plugin code to suit your needs.
For example to disable messages on successful tasks completely (regardless if status is ok or changed) change:
def v2_runner_on_ok(self, result):
    if result._result.get('changed', False):
        self.display_task_banner()
        self.super_ref.v2_runner_on_ok(result)

to

def v2_runner_on_ok(self, result):
    pass
As Konstantin Suvorov noted, the above ansible.cfg configuration method works for ansible-playbook.
For ansible output you can save the actionable.py as ./callback_plugins/minimal.py to achieve the same results.
You could run the command with the actionable callback:
ANSIBLE_STDOUT_CALLBACK=actionable ansible all -a 'some_command_I_want_to_know_if_it_crashes'
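If modifying actionable.py feels heavyweight, the same idea can be sketched from scratch: a stdout callback that stays silent on success and skips, and prints only failures. The plugin name and message format below are assumptions; the guarded import only exists so the sketch can load outside Ansible:

```python
try:
    from ansible.plugins.callback import CallbackBase
except ImportError:       # lets this sketch be imported outside Ansible
    CallbackBase = object


class CallbackModule(CallbackBase):
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'errors_only'     # hypothetical name

    def v2_runner_on_ok(self, result):
        pass                          # stay silent on ok and changed

    def v2_runner_on_skipped(self, result):
        pass                          # stay silent on skipped

    def v2_runner_on_failed(self, result, ignore_errors=False):
        print('FAILED [%s]: %s' % (result._host.get_name(),
                                   result._result.get('msg', result._result)))
```

Enabled via stdout_callback (or ANSIBLE_STDOUT_CALLBACK), this leaves only the red text you actually care about.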

Verbose option in SFTP

I'm using pysftp to SFTP put a large number of files, and would love to get some progress output as it runs so I know how it's doing. Is there a verbose option (or equivalent) I can use to get that output?
Thanks.
I may be importing paramiko unnecessarily here, as I believe it's bundled with pysftp anyway, but you should get the gist.
import logging
import paramiko
import pysftp

# As far as I can see you can use DEBUG, INFO, WARNING or ERROR here,
# but there are possibly more levels
paramiko.util.log_to_file('log_debug.txt', level='DEBUG')

with pysftp.Connection([whatever you're doing here normally]) as sftp:
    sftp.put([or whatever you're doing here instead])

with open('log_debug.txt') as logfile:
    log = logfile.read()
print(log)
Obviously you don't have to print it, do what you like...
I've only got this from trying to do something similar, I'm not an expert!
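An alternative that avoids parsing the paramiko log: paramiko's put() (and hence pysftp's) accepts a callback that is invoked repeatedly with (bytes_transferred, total_bytes), so you can print live per-file progress directly. The connection arguments below are placeholders:

```python
# Progress reporting via the transfer callback that pysftp/paramiko's
# put() supports: it is called repeatedly with the running byte count
# and the total file size.
def progress(transferred, total):
    pct = 100.0 * transferred / total if total else 100.0
    print("transferred %d/%d bytes (%.1f%%)" % (transferred, total, pct))

# Usage sketch (connection arguments are placeholders):
# with pysftp.Connection(host, username=user, password=pw) as sftp:
#     sftp.put('bigfile.bin', callback=progress)

progress(512, 1024)   # -> transferred 512/1024 bytes (50.0%)
```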

Redirect Output of Capistrano

I have a Capistrano deploy file (Capfile) that is rather large, contains a few namespaces and generally has a lot of information already in it. My ultimate goal is, using the Tinder gem, paste the output of the entire deployment into Campfire. I have Tinder setup properly already.
I looked into using the Capistrano capture method, but that only works for the first host. Additionally that would be a lot of work to go through and add something like:
output << capture 'foocommand'
Specifically, I am looking to capture the output of any deployment from that file into a variable (in addition to putting it to STDOUT so I can see it), then pass that output in the variable into a function called notify_campfire. Since the notify_campfire function is getting called at the end of a task (every task regardless of the namespace), it should have the task name available to it and the output (which is stored in that output variable). Any thoughts on how to accomplish this would be greatly appreciated.
I recommend not messing with the Capistrano logger. Instead, use what Unix gives you and use pipes:
cap deploy | my_logger.rb
Where your logger reads STDIN, records it, and pipes it back to the appropriate stream.
For an alternative, the EngineYard cap recipes have a logger – this might be a useful reference if you do need to edit the code, but I recommend not doing so.
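A minimal version of that logger, written here in Python rather than the my_logger.rb named above (the log path is an assumption): it copies each line from stdin to both a log file and stdout, so the Capistrano output stays visible while being recorded.

```python
import sys


def tee(src, dst, logfile):
    """Copy src line by line to dst, recording everything in logfile."""
    for line in src:
        logfile.write(line)
        dst.write(line)
    dst.flush()


if __name__ == "__main__" and not sys.stdin.isatty():
    # Usage: cap deploy | python my_logger.py
    with open("deploy.log", "a") as log:   # log path is an assumption
        tee(sys.stdin, sys.stdout, log)
```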
It's sort of a hackish means of solving your problem, but you could try running the deploy task in a Rake task and capturing the output using %x.
# ...in your Rakefile...
task :deploy_and_notify do
  output = %x[ cap deploy ]  # Run your deploy task here.
  notify_campfire(output)
  puts output                # Echo the output.
end