The condition in my playbook is not working - ansible

I have this simple playbook:
---
- name: JVM Status
  shell: "ps -ef | grep {{ item }} | grep -v grep | grep {{ wasUser }} | wc -l"
  loop: "{{ jvmStatusList }}"
  become: yes
  become_user: "{{ wasUser }}"
  register: statusResult

- debug:
    msg: "{{ item.item }} is started"
  loop: "{{ statusResult.results }}"
  when: item.stdout != 0

- debug:
    msg: "{{ item.item }} is stopped"
  loop: "{{ statusResult.results }}"
  when: item.stdout == 0
...
When I run it, the conditions aren't taken into account.
I have verified what item.stdout returns: two items are started and one is stopped, but all three are reported by the first debug, while the second debug is skipped for every item.
Can anyone help?
I thought the problem might be that my stdout result was not trimmed, so I tried different syntaxes, including item.stdout | trim, but nothing changed. I also tried item.stdout is 1 instead of != 0.
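A likely explanation (my addition, not part of the original thread): a registered stdout is always a string, while 0 is an integer, so item.stdout != 0 is true for every item and item.stdout == 0 is false for every item, which matches the symptom above. Casting to an integer (or comparing against the string "0") makes both conditions behave; a minimal sketch:

- debug:
    msg: "{{ item.item }} is started"
  loop: "{{ statusResult.results }}"
  when: item.stdout | int != 0   # cast, since stdout is a string

- debug:
    msg: "{{ item.item }} is stopped"
  loop: "{{ statusResult.results }}"
  when: item.stdout | int == 0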


How can I capture only the failed items from a list of items and display the output

I am trying to write an Ansible playbook that lists only the failed commands from a list of commands.
Here is the code:
---
- name: Verify sudo access
  become: true
  become_user: "{{ install_user }}"
  shell: sudo -l | grep "{{ item }}"
  register: systemctl_result
  loop:
    - /usr/bin/mv
    - /usr/bin/olcenctl
    - /usr/bin/yum
  when: ansible_hostname in groups['operator']
  ignore_errors: true

- name: Print result
  debug:
    var: systemctl_result['results']
/usr/bin/yum is not in the sudo list, so it fails, and I want to capture only the failed command in the output.
Your problem can be simplified. The become, sudo, ... details are not relevant. What you want is to list the failed items of the iteration. Use ignore_errors: true to take care of the rc: 1 returned by a failed grep. For example,
- shell: "echo ABC | grep {{ item }}"
register: out
ignore_errors: true
loop:
- A
- B
- X
Then, put the declaration below into vars:
failed_items: "{{ out.results|
selectattr('failed')|
map(attribute='item')|
list }}"
This gives what you want:
failed_items:
  - X
Example of a complete playbook for testing:
- hosts: localhost
  vars:
    failed_items: "{{ out.results |
                      selectattr('failed') |
                      map(attribute='item') |
                      list }}"
  tasks:
    - shell: "echo ABC | grep {{ item }}"
      register: out
      ignore_errors: true
      loop:
        - A
        - B
        - X
    - debug:
        var: failed_items
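Mapped back onto the question (an assumption on my part: reusing the register name systemctl_result from the original task), the same filter chain would be:

- name: Print only the failed commands
  debug:
    msg: "{{ systemctl_result.results |
             selectattr('failed') |
             map(attribute='item') |
             list }}"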

ansible playbook access first loop variable in second loop

I have a list of application paths in a file APP_DIR, and I need to loop through the paths and run a start command.
- name: start
  command: "{{ item[1] }}"
  with_nested:
    - "{{ lookup('file', 'APP_DIR').splitlines() }}"
    - [ "chdir={{ item[0] }} ./start",
        "ps -aef | grep httpd | grep -v grep" ]
ERROR: FAILED! => {"msg": "'item' is undefined"}.
Thanks in advance for any help.
You can't reference the item variable inside the nested list definition itself. You can run your commands with a simple with_items and a multi-line shell instead, like this:
- name: start
  shell: |
    ./start
    ps -aef | grep httpd | grep -v grep
  args:
    chdir: "{{ item }}"
  with_items: "{{ lookup('file', 'APP_DIR').splitlines() }}"

ansible: how do I display the output from a task that uses with_items?

Ansible newbie here
Hopefully there is a simple solution to my problem.
I'm trying to run SQL across a number of Oracle databases on one node. I generate a list of databases from ps -ef and use with_items to pass the dbname values.
My question is: how do I display the output from each database running the select statement?
tasks:
  - name: Exa check | find db instances
    become: yes
    become_user: oracle
    shell: |
      ps -ef|grep pmon|grep -v grep|grep -v ASM|awk '{ print $8 }'|cut -d '_' -f3
    register: instance_list_output
    changed_when: false
    run_once: true

  - shell: |
      export ORAENV_ASK=NO; export ORACLE_SID={{ item }}; export ORACLE_HOME=/u01/app/oracle/database/12.1.0.2/dbhome_1; source /usr/local/bin/oraenv; $ORACLE_HOME/bin/sqlplus -s \"/ as sysdba\"<< EOSQL
      select * from v\$instance;
      EOSQL
    with_items:
      - "{{ instance_list_output.stdout_lines }}"
    register: sqloutput
    run_once: true
The loop below might work.
- debug:
    msg: "{{ item.stdout }}"
  loop: "{{ sqloutput.results }}"
If it does not, take a look at the content of the variable and decide how to use it:
- debug: var=sqloutput
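To keep the loop output readable (my addition, assuming the same sqloutput register), loop_control's label shortens the per-item header to the instance name while the message still carries the full query output:

- debug:
    msg: "{{ item.stdout_lines }}"
  loop: "{{ sqloutput.results }}"
  loop_control:
    label: "{{ item.item }}"   # show only the SID, not the whole result dict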

How to set a value in register with set_facts in ansible?

When a task is skipped because of its condition, the registered result differs as well, which causes a subsequent task to fail.
- name: Check if the partition is already mounted
  shell: df | grep "{{ item.partition }}" | wc -l
  with_items:
    - "{{ ebs_vol }}"
  register: ebs_checked
  when: ebs_vol is defined

- name: Make filesystem of the partition
  filesystem: fstype=ext4 dev={{ item.item.partition }} force=no
  when: ( ebs_vol is defined and "{{ item.stdout }} == True" )
  changed_when: True
  with_items:
    - ebs_checked.results
Use the default filter to handle corner cases:
- name: Make filesystem of the partition
  filesystem: fstype=ext4 dev={{ item.item.partition }} force=no
  when: item.stdout | bool
  changed_when: True
  with_items: "{{ ebs_checked.results | default([]) }}"
This will iterate over an empty list (read: "will do nothing") if there are no results in ebs_checked.
Also, you should not check for ebs_vol is defined here: the when statement of a looped task is applied inside the loop, and since the previous task already checked ebs_vol is defined, repeating that check for every item is unnecessary.
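A minimal illustration of that point (my addition): the when of a looped task is evaluated once per item, which is also why item is usable inside the condition.

- debug:
    msg: "{{ item }} is even"
  loop: [1, 2, 3, 4]
  when: item % 2 == 0   # checked separately for each item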

Ansible wait_for module, start at end of file

With the wait_for module in Ansible, if I use search_regex='foo' on a file, it seems to start at the beginning of the file, which means it will match on old data. Thus, when restarting a process/app (Java) which appends to a file rather than starting a new file, the wait_for module will exit true for old data. I would like to check from the tail of the file instead.
The regular expression in search_regex of the wait_for module is set to multiline by default.
You can register the contents of the last line and then search for the string appearing after that line (this assumes there are no duplicate lines in the log file, i.e. each one contains a time stamp):
vars:
  log_file_to_check: <path_to_log_file>
  wanted_pattern: <pattern_to_match>

tasks:
  - name: Get the contents of the last line in {{ log_file_to_check }}
    shell: tail -n 1 {{ log_file_to_check }}
    register: tail_output

  - name: Create a variable with a meaningful name, just for clarity
    set_fact:
      last_line_of_the_log_file: "{{ tail_output.stdout }}"

  ### do some other tasks ###

  - name: Match "{{ wanted_pattern }}" appearing after "{{ last_line_of_the_log_file }}" in {{ log_file_to_check }}
    wait_for:
      path: "{{ log_file_to_check }}"
      search_regex: "{{ last_line_of_the_log_file }}\r(.*\r)*.*{{ wanted_pattern }}"
techraf's answer would work if every line inside the log file is time stamped. Otherwise, the log file may have multiple lines that are identical to the last one.
A more robust/durable approach would be to check how many lines the log file currently has, and then search for the regex/pattern occurring after the 'nth' line.
vars:
  log_file: <path_to_log_file>
  pattern_to_match: <pattern_to_match>

tasks:
  - name: "Get contents of log file: {{ log_file }}"
    command: "cat {{ log_file }}"
    changed_when: false  # Do not show that state was "changed" since we are simply reading the file!
    register: cat_output

  - name: "Create variable to store line count (for clarity)"
    set_fact:
      line_count: "{{ cat_output.stdout_lines | length }}"

  ##### DO SOME OTHER TASKS (LIKE DEPLOYING APP) #####

  - name: "Wait until '{{ pattern_to_match }}' is found inside log file: {{ log_file }}"
    wait_for:
      path: "{{ log_file }}"
      search_regex: "^{{ pattern_to_skip_preexisting_lines }}{{ pattern_to_match }}$"
      state: present
    vars:
      pattern_to_skip_preexisting_lines: "(.*\\n){% raw %}{{% endraw %}{{ line_count }},{% raw %}}{% endraw %}"  # i.e. if line_count=100, then this would equal "(.*\\n){100,}"
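A side note (my addition, not from the original answer): the raw/endraw dance can be avoided by emitting the literal braces as Jinja2 string literals; both forms render to the same regex:

vars:
  pattern_to_skip_preexisting_lines: "(.*\\n){{ '{' }}{{ line_count }},{{ '}' }}"  # also yields "(.*\n){100,}" when line_count=100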
One more method, using an intermediate temp file for tailing new records:
- name: Create tempfile for log tailing
  tempfile:
    state: file
  register: tempfile

- name: Asynchronous tail log to temp file
  shell: tail -n 0 -f /path/to/logfile > {{ tempfile.path }}
  async: 60
  poll: 0

- name: Wait for regex in log
  wait_for:
    path: "{{ tempfile.path }}"
    search_regex: 'some regex here'

- name: Remove tempfile
  file:
    path: "{{ tempfile.path }}"
    state: absent
Actually, if you can force a log rotation on your Java app's log file, then a straightforward wait_for will achieve what you want, since there won't be any historical log lines to match.
I am using this approach with a rolling upgrade of mongodb, waiting for "waiting for connections" in the mongod logs before proceeding.
Sample tasks:
tasks:
  - name: Rotate mongod logs
    shell: kill -SIGUSR1 $(pidof mongod)
    args:
      executable: /bin/bash

  - name: Wait for mongod being ready
    wait_for:
      path: /var/log/mongodb/mongod.log
      search_regex: 'waiting for connections'
I solved my problem with @Slezhuk's answer above, but I found an issue: the tail command won't stop after the completion of the playbook. It will run forever. I had to add some more logic to stop the process:
# the playbook doesn't stop the async job automatically, so we need to stop it
- name: Get the PID of the async tail
  shell: ps -ef | grep {{ tempfile.path }} | grep tail | grep -v grep | awk '{print $2}'
  register: pid_grep

- set_fact:
    tail_pid: "{{ pid_grep.stdout }}"

# the tail command shell has two processes, so we need to kill both
- name: Kill the async tail command
  shell: ps -ef | grep " {{ tail_pid }} " | grep -v grep | awk '{print $2}' | xargs kill -15

- name: Wait for the tail process to be killed
  wait_for:
    path: /proc/{{ tail_pid }}/status
    state: absent
I came across this page when trying to do something similar, and Ansible has come a long way since the above, so here's my solution building on it. I also used the Linux timeout command together with async, so I'm doubling up a bit.
It would be much easier if the application rotated logs on startup - but some apps aren't nice like that!
As per this page:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
The async tasks will run until they either complete, fail or timeout according to their async value.
Hope it helps someone.
- name: set some common vars
  set_fact:
    tmp_tail_log_file: /some/path/tail_file.out  # I did this as I didn't want to create a temp file every time, and I put it in the same spot.
    timeout_value: 300  # async will terminate the task - but to be double sure I used the timeout linux command as well.
    log_file: /var/logs/my_log_file.log

- name: "Asynchronous tail log to {{ tmp_tail_log_file }}"
  shell: timeout {{ timeout_value }} tail -n 0 -f {{ log_file }} > {{ tmp_tail_log_file }}
  async: "{{ timeout_value }}"
  poll: 0

- name: "Wait for xxxxx to finish starting"
  wait_for:
    timeout: "{{ timeout_value }}"
    path: "{{ tmp_tail_log_file }}"
    search_regex: "{{ pattern_search }}"
  vars:
    pattern_search: (.*Startup process completed.*)
  register: waitfor

- name: "Display log file entry"
  debug:
    msg:
      - "xxxxxx startup completed - the following line was matched in the logs"
      - "{{ waitfor['match_groups'][0] }}"
