I have an ansible task that fails about 20% of the time. It almost always succeeds if retried a couple of times. I'd like to use until to loop until the task succeeds and store the output of each attempt to a separate log file on the local machine. Is there a good way to achieve this?
For example, my task currently looks like this:
- name: Provision
  command: provision_cmd
  register: prov_ret
  until: prov_ret is succeeded
  retries: 2
I can see how to store the log output from the last retry when it succeeds, but I'd like to store it from each retry. To store from the last attempt to run the command I use:
- name: Write Log
  local_action: copy content={{ prov_ret | to_nice_json }} dest="/tmp/ansible_logs/provision.log"
It's not possible as of Ansible 2.9. The until loop doesn't preserve the result of each attempt the way loop does. Once a task terminates, all variables inside the task are gone except the registered one.
To see what's going on in the loop, have the command write a log on the remote host. For example, if provision_cmd writes a log to /scratch/provision_cmd.log, run the task in a block and display the log in the rescue section:
- block:
    - name: Provision
      command: provision_cmd
      register: prov_ret
      until: prov_ret is succeeded
      retries: 2
  rescue:
    - name: Display registered variable
      debug:
        var: prov_ret

    - name: Read the log
      slurp:
        src: /scratch/provision_cmd.log
      register: provision_cmd_log

    - name: Display log
      debug:
        msg: "{{ msg.split('\n') }}"
      vars:
        msg: "{{ provision_cmd_log.content|b64decode }}"
Related
I'm running a few tasks in a playbook which runs a bash script and registers its output:
playbook.yml:
- name: Compare FOO to BAZ
  shell: . script.sh
  register: output

- name: Print the generated output
  debug:
    msg: "The output is {{ output }}"

- include: Run if BAZ is true
  when: output.stdout == "true"
script.sh:
#!/bin/bash
FOO=$(curl example.com/file.txt)
BAR=$(cat file2.txt)
if [ "$FOO" == "$BAR" ]; then
  export BAZ=true
else
  export BAZ=false
fi
What happens is that Ansible registers the output of FOO=$(curl example.com/file.txt) instead of export BAZ.
Is there a way to register BAZ instead of FOO?
I tried running another task that would get the exported value:
- name: Register value of BAZ
  shell: echo $BAZ
  register: output
But then I realized that every task opens a separate shell on the remote host and doesn't have access to the variables that were exported in previous steps.
Is there any other way to register the right output as a variable?
I've come up with a workaround, but there must be another way to do this...
I added a line to script.sh and cat the file in a separate task.
script.sh:
...
echo $BAZ > ~/baz.txt
then in the playbook.yml:
- name: Check value of BAZ
  shell: cat ~/baz.txt
  register: output
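A lighter-weight variant of this workaround (a sketch, assuming script.sh can be edited) is to print BAZ on the script's stdout instead of writing a file, then read the last registered stdout line:

- name: Source script and capture BAZ via stdout
  shell: . script.sh && echo $BAZ   # echo runs in the same shell, so the exported BAZ is visible
  register: output

- name: Use the last stdout line, which holds BAZ
  debug:
    msg: "BAZ is {{ output.stdout_lines[-1] }}"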
This looks a bit like using a hammer to drive a screw... or a screwdriver to plant a nail. Decide if you want to use nails or screws, then use the appropriate tool.
Your question misses quite a few details, so I hope my answer won't be too far from your requirements. Meanwhile, here is an (untested and quite generic) example using Ansible itself to compare your files and run a task based on the result:
- name: compare files and run task (or not...)
  hosts: my_group
  vars:
    reference_url: https://example.com/file.txt
    compared_file_path: /path/on/target/to/file2.txt
    # Next var will only be defined when the two tasks below have run
    file_matches: "{{ reference.content == (compared.content | b64decode) }}"
  tasks:
    - name: Get reference once for all hosts in play
      uri:
        url: "{{ reference_url }}"
        return_content: true
      register: reference
      delegate_to: localhost
      run_once: true

    - name: slurp file to compare from each host in play
      slurp:
        path: "{{ compared_file_path }}"
      register: compared

    - name: run a task on each target host if compared is different
      debug:
        msg: "compared file is different"
      when: not file_matches | bool
Just in case you are doing all this only to check whether a file needs to be updated, there's no need to bother: just download the file on the target with get_url. It will only be replaced if needed, and you can even launch an action at the end of the playbook if (and only if) the file was actually updated on the target server.
- name: Update file from reference if needed
  hosts: my_group
  vars:
    reference_url: https://example.com/file.txt
    target_file_path: /path/on/target/to/file2.txt
  tasks:
    - name: Update file on target if needed and notify handler if changed
      get_url:
        url: "{{ reference_url }}"
        dest: "{{ target_file_path }}"
      notify: do_something_if_changed
  handlers:
    - name: do whatever task is needed if file was updated
      debug:
        msg: "file was updated: doing some work"
      listen: do_something_if_changed
Some references to go further on above concepts:
uri module
get_url module
slurp module
delegating tasks in ansible
registering output of tasks
run_once
handlers
How to store every output in a file while executing multiple tasks in playbook?
1. I want to store every output in a file.
2. The playbook contains several different tasks.
3. I want to store the output of the playbook execution incrementally.
Store each task's output in a variable and then write the variables to a file.
Just giving an idea below, not tested.
- name: Task1
  ...
  register: task_output1

- name: Task2
  ...
  register: task_output2

# Note: looping copy over the outputs would overwrite dest on every
# iteration, so write both outputs in a single copy task instead.
- name: Write to file
  copy:
    content: "{{ (task_output1.stdout_lines + task_output2.stdout_lines) | join('\n') }}"
    dest: /path/to/destination/file
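If you want the file to grow incrementally as the play proceeds, an append-style sketch with lineinfile (also untested; the destination path is a placeholder) avoids rewriting the whole file:

- name: Append each task's output lines to the log file
  lineinfile:
    path: /path/to/destination/file
    line: "{{ item }}"
    create: true   # create the file on first write
  loop: "{{ task_output1.stdout_lines + task_output2.stdout_lines }}"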
How can I see realtime output from a shell script run by ansible?
I recently refactored a wait script to use multiprocessing and provide realtime status of the various service wait checks across multiple services.
As a standalone script, it works as expected, reporting status for each thread as they wait in parallel for the services to become stable.
Under Ansible, the output pauses until the Python script completes (or terminates) and only then appears. That works, but I'd rather find a way to display output sooner. I've tried setting PYTHONUNBUFFERED prior to running ansible-playbook via the Jenkins withEnv step, but that doesn't seem to accomplish the goal either:
- name: Wait up to 30m for service stability
  shell: "{{ venv_dir }}/bin/python3 -u wait_service_state.py"
  args:
    chdir: "{{ script_dir }}"
What's the standard ansible pattern for displaying output for a long running script?
My guess is that I could follow one of these routes:
Not use Ansible
Execute in a Docker container and report output via Ansible, provided this doesn't hit the identical class of problem
Output to a file from the script and have either an Ansible thread or a Jenkins pipeline thread watch and tail the file (both seem kludgy, as this blurs the separation of concerns and couples my build server to the deploy scripts a little too tightly)
You can use asynchronous actions and polling: https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
main.yml
- name: Run items asynchronously in batch of two items
  vars:
    sleep_durations:
      - 1
      - 2
      - 3
      - 4
      - 5
    durations: "{{ item }}"
  include_tasks: execute_batch.yml
  loop: "{{ sleep_durations | batch(2) | list }}"
execute_batch.yml
- name: Async sleeping for batched_items
  command: sleep {{ async_item }}
  async: 45
  poll: 0
  loop: "{{ durations }}"
  loop_control:
    loop_var: "async_item"
  register: async_results

- name: Check async status
  async_status:
    jid: "{{ async_result_item.ansible_job_id }}"
  loop: "{{ async_results.results }}"
  loop_control:
    loop_var: "async_result_item"
  register: async_poll_results
  until: async_poll_results.finished
  retries: 30
"What's the standard ansible pattern for displaying output for a long running script?"
Standard ansible pattern for displaying output for a long-running script is polling async and loop until async_status finishes. The customization of the until loop's output is limited. See Feature request: until for blocks #16621.
ansible-runner is another route that might be followed.
I want to execute a script on a remote host via Ansible and get the result file from the remote back to the control host.
I wrote a playbook like below:
---
- name: script deploy
  hosts: all
  vars:
    timestamp: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
  become: true
  tasks:
    - name: script deployment
      script: ./exe.sh {{ ansible_nodename }}_{{ timestamp }}
      args:
        chdir: /tmp
exe.sh executes successfully on the remote host and redirects its result to an output file such as remote_20170806065817.data.
Script execution takes a few seconds, and I tried to fetch the result file after the execution was done.
But {{ timestamp }} is re-evaluated and has changed by the time I fetch, so fetch cannot find the file name that the script execution produced.
What I want is to assign an immutable (constant) value in my playbook.
Is there any workaround?
Ansible uses lazy evaluation, so variables are evaluated at the moment they are used.
You should set a fact instead, which is evaluated only once:
---
- name: script deploy
  hosts: all
  become: true
  tasks:
    - set_fact:
        timestamp: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"

    - name: script deployment
      script: ./exe.sh {{ ansible_nodename }}_{{ timestamp }}
      args:
        chdir: /tmp
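With the timestamp fixed by set_fact, fetching the result file back to the controller becomes straightforward. A sketch, assuming exe.sh writes its output under /tmp using the name pattern from the question plus a .data extension:

- name: Fetch the result file produced by exe.sh
  fetch:
    src: "/tmp/{{ ansible_nodename }}_{{ timestamp }}.data"  # assumed output path and extension
    dest: /tmp/results/
    flat: true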
The machine I am targeting should, in theory, have a process running for each individual client, called 'marketaccess {client_name}', and I want to ensure that each of these processes is running. Checking whether processes are running is proving very challenging in Ansible. Below is the playbook I am using to see if the process exists on a given machine. I plan to run a conditional on the stdout and, if it does not contain the customer's name, run a restart script for that customer. The issue is that when I run this playbook it tells me that the dictionary object has no attribute 'stdout', yet when I remove the '.stdout' it runs fine and I can clearly see the stdout value inside service_status.
- name: Check if process for each client exists
  shell: ps aux | grep {{ item.key | lower }}
  ignore_errors: yes
  changed_when: false
  register: service_status
  with_dict: "{{ customers }}"

- name: Report status of service
  debug:
    msg: "{{ service_status.stdout }}"
Your problem is that service_status is the result of a looped task, so it has a service_status.results list containing the result of every iteration.
To see stdout for every iteration, you can use:
- name: Report status of service
  debug:
    msg: "{{ item.stdout }}"
  with_items: "{{ service_status.results }}"
But you may want to read this note about idempotent shell tasks and rewrite your code as a clean single task.
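For the follow-up the question describes (restarting the process for a customer whose name is missing from the output), a hedged sketch along the same lines; the restart script name is hypothetical, and note that ps aux | grep <name> also matches the grep process itself, so pgrep or a bracketed pattern like grep '[m]arketaccess' is more reliable for the check:

- name: Restart process for customers whose process is not running
  # restart_marketaccess.sh is a hypothetical stand-in for the asker's
  # "restart process script"; item.item is the customers dict entry
  command: restart_marketaccess.sh {{ item.item.key }}
  when: item.item.key | lower not in item.stdout
  with_items: "{{ service_status.results }}"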