Ansible: overwrite variable depending on result

I am running several shell commands in an Ansible playbook that may or may not modify a configuration file.
The last task in the playbook restarts the service, but I only want that to happen if a variable is set.
I plan to register a result in each of the shell tasks, but I do not want to overwrite the variable once it has been set to 'restart_needed' or something like that.
The idea is that the restart runs last: if any of the commands set the restart variable, the service is restarted; if none of them did, it is not. Here is an example of what I have so far:
tasks:
  - name: Make a backup copy of file
    copy: src={{ file_path }} dest={{ file_path }}.{{ date }} remote_src=true owner=root group=root mode=644 backup=yes

  - name: get list of items
    shell: |
      grep <file>
    register: result

  - name: output will be 'restart_needed'
    shell: |
      NUM=14"s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: output

  - name: output will be 'nothing_changed'
    shell: |
      NUM="s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: output

  - name: Restart service
    service: name=myservice enabled=yes state=restarted
In the example above, output is set to 'restart_needed' by the first of the two sed tasks, but is then overwritten with 'nothing_changed' by the second one.
I want the variable to stay at 'restart_needed' once it has been set, and the restart task to run only if the variable is 'restart_needed'.
Thanks!

For triggering restarts, you have two options: the when statement or handlers.
When statement example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result
    failed_when: result.rc > 1  # grep exits 1 when the string is absent; only rc > 1 is a real error

  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
    when: result.rc == 0
Handlers example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result
    failed_when: result.rc > 1  # same guard: a missing string should not fail the task
    changed_when: "result.rc == 0"
    notify: restart service

handlers:
  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
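Applied to your playbook, you don't even need to protect a single shared variable: register each sed task under its own name and make the restart conditional on any of them having printed 'restart_needed'. A minimal sketch along these lines (the first_edit/second_edit names are mine; the commands are the ones from your question):
tasks:
  - name: output will be 'restart_needed'
    shell: |
      NUM=14"s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: first_edit

  - name: output will be 'nothing_changed'
    shell: |
      NUM="s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: second_edit

  - name: Restart service only if any edit reported 'restart_needed'
    service: name=myservice enabled=yes state=restarted
    when: >-
      (first_edit.results + second_edit.results)
      | map(attribute='stdout')
      | select('search', 'restart_needed')
      | list | length > 0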

Related

About enable/disable by conditional values in ansible yaml manifest

I want an Ansible variable to be set or left unset depending on a condition.
I am using the role "davidwittman.redis" to set up a Redis cluster. I have to deploy a single yml file to 3 servers at the same time; I cannot distribute separate yml files per server.
My setup configuration is below. I want the "redis_slaveof" value to be applied on all servers except the master, depending on the condition.
In other words, the "redis_slaveof" value I selected should not be set on the master server, but should be active on the other 2 servers.
How can I do this, do you have any suggestions?
My main Ansible configuration:
- name: Redis Replication Slave Installations
  hosts: localhost

  pre_tasks:
    - name: check redis server1
      shell: "cat /etc/hosts |grep redis |awk '{print $2}' |sed -n 1p"
      register: server01

    - name: check redis server2
      shell: "cat /etc/hosts |grep redis |awk '{print $2}' |sed -n 2p"
      register: server02

    - name: check redis server3
      shell: "cat /etc/hosts |grep redis |awk '{print $2}' |sed -n 3p"
      register: server03

  roles:
    - role: davidwittman.redis

  vars:
    - redis_version: 6.2.6
    - redis_bind: 0.0.0.0
    - redis_port: 6379
    - redis_service_name: redis-service
    - redis_protected_mode: "no"
    - redis_config_file_name: "redis.conf"

    # - redis_slaveof: "{{ server01.stdout }} 6379"
I tried the following:
I added an extra task with "import_role" and put a when condition on it: if the server01 value and the server's local hostname are not equal, apply the role. I expected this to set the value on every server except the master, but it did not work; the value was applied on all servers, and I saw the following warning in the logs:
"[WARNING]: flush_handlers task does not support when conditional"
tasks:
  - name: test
    import_role:
      name: davidwittman.redis
    vars:
      redis_slaveof: "{{ server01.stdout }} 6379"
    when: '"{{ (server01.stdout) }}" != "{{ ansible_hostname }}"'
I also tried adding the when condition to the role entry in the main playbook, as below. That method failed as well.
  - role: davidwittman.redis
    when: '"{{ (server01.stdout) }}" != "{{ ansible_hostname }}"'
Finally, I defined an "if/else" condition inside the variable itself, as follows. Although it appeared to work, it still produced the "does not support when conditional" warning, and the value was again applied to all servers.
redis_slaveof: "{%if '{{ (server01.stdout) }} != {{ ansible_hostname }}' %} {{ '{{ (server01.stdout) }} 6379' }} {% else %} {{ '' }} {% endif %}"
So you want to run a role differently, based on variables.
In your playbook do not use
roles:
  - role: davidwittman.redis
but rather do
tasks:
  - name: Pass variables to role
    include_role:
      name: myrole
    vars:
      rolevar1: value from task
This is executed after roles but now you can add when expressions:
tasks:
  - name: Pass variables to role
    include_role:
      name: myrole
    vars:
      rolevar1: value from task
    when: ansible_hostname == "server1"

  - name: Pass variables to role
    include_role:
      name: myrole
    vars:
      rolevar1: value from task
      additionalrolevar: whatever
    when: ansible_hostname != "server1"
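Applied to the Redis setup above, that could look roughly like the sketch below. This is only an illustration: it assumes server01.stdout holds the master's hostname, as populated by the pre_tasks in the question, and that the other role variables stay where they are.
tasks:
  - name: Configure Redis master (no slaveof)
    include_role:
      name: davidwittman.redis
    when: ansible_hostname == server01.stdout

  - name: Configure Redis replicas (slaveof the master)
    include_role:
      name: davidwittman.redis
    vars:
      redis_slaveof: "{{ server01.stdout }} 6379"
    when: ansible_hostname != server01.stdout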

Use registered variables in other roles

Hello guys, I have a problem.
The role that copies the files skips all of them, no matter whether the file containing the filenames is empty or not.
In Role1 I want to save the output of cat for each file. In Role2 the when conditional should skip the task if the registered output is == "".
Role1:
---
- name: copy files
  shell: "cat path{{ item }}files"
  register: checkempty
  loop:
    - test1
    - test2
    - test3
    - test4
Role2:
---
- name: Copy Files
  copy:
    src: "{{ var1 }}{{ var2 }}{{ var3 }}{{ var4 }}{{ item }}/"
    dest: "{{ copy_dest_sys }}" # destination path
  loop: "{{ lookup('file', 'pathtofile/file').split('\n') }}"
  when: hostvars['localhost'].checkempty.results == ""
Playbook:
- name: check emptiness
  hosts: localhost
  become: yes
  vars_files:
    - ../variables/varsfile
  roles:
    - ../variables/role1

- name: Copy Files to prod/stag
  hosts: "{{ hosts_exec }}"
  become: yes
  vars_files:
    - ../vars/recommendation-delta.yml
  roles:
    - ../roles/role2
How can I register a variable while using with_items and compare its output to "" (nothing)?
Can somebody help me with this issue?
When you register a variable, it is set only on the specific host on which that task was executing. So if you are running a role on localhost that does this:
---
- name: Check if sys files Empty
  command: if [ ! -s filenames/"{{ item }}"files ]; then echo "{{ item }}fileempty"; fi
  register: checkempty
  loop:
    - sys
    - wifi
    - recoprop
    - udfprop
Then you would reference it like this when running tasks on another host:
hostvars["localhost"].checkempty
For example:
---
- name: Copy sys Files to prod/stag
  copy:
    src: "{{ git_dest }}{{ git_sys_files }}{{ item }}/"
    dest: "{{ copy_dest_sys }}" # destination path
  loop: "{{ lookup('file', '/home/ansible/repo/hal_ansible/scripts/delta-reco/filenames/sysfiles').split('\n') }}"
  when: 'hostvars["localhost"].checkempty.stdout == "sysfileempty"'
You can read more about this in the "Using Variables" documentation.
I've made some corrections to your when syntax here as well. In general, you should never use {{...}} markers in a when condition because a when condition is always evaluated as a Jinja expression.
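For example (my_var is just a hypothetical variable):
# redundant templating, triggers a templating warning in recent Ansible versions:
when: "{{ my_var }} == 'foo'"
# plain Jinja expression, which is what when expects:
when: my_var == 'foo'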
However, you have another problem:
Your "Check if sys files Empty" task is using the command module, but you're trying to run a shell script. That will always fail. You need to use the shell module instead:
---
- name: Check if sys files Empty
  shell: if [ ! -s filenames/"{{ item }}"files ]; then echo "{{ item }}fileempty"; fi
  register: checkempty
  loop:
    - sys
    - wifi
    - recoprop
    - udfprop
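One detail to keep in mind: because checkempty is registered inside a loop, the variable holds a results list (one entry per loop item) rather than a single stdout. A rough sketch of a when condition that pulls out the entry for the sys item, using standard Jinja2 selectattr/map filters (adapt the comparison to whichever direction you actually want):
when: >-
  hostvars['localhost'].checkempty.results
  | selectattr('item', 'equalto', 'sys')
  | map(attribute='stdout')
  | first == "sysfileempty"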

Ansible looping items in file

I have a file (a_file.txt) like this:
22
23
8080
I need to loop over each item in a_file.txt combined with my host, formatted as host:22, host:23, host:8080, and so on, so that I can use the shell module in a playbook like this:
---
- hosts: host1
  tasks:
    - name: Remote hostname
      shell: hostname
      register: hostname

    - name: Read items from a_file.txt
      shell: cat a_file.txt
      register: item_output

    - name: Run shell command
      shell: someCommand {{ hostname.stdout_line|nice_to_yaml }}:{{ item }}
      with_items: item_output.stdout_lines
However, my someCommand failed because I have:
{{hostname.stdout_line|nice_to_yaml}} = - hostname\n
{{<item in a_file.txt>}} = [u'22', u'23', u'8080']
You have to use:
- name: Run shell command
  shell: someCommand {{ hostname.stdout_line|nice_to_yaml }}:{{ item }}
  with_items: "{{ item_output.stdout_lines }}"

for loop not working with ansible shell module

I am trying to check the group ownership of every directory named "deployments" under a path. To do this I use a for loop over the output of find, store the deployments directories in a variable, and then use grep to check whether there is any deviation. Below is my task. The problem is that it is not working: even when the group ownership is different, it is not detected.
How can I check whether the command I am passing to the shell module is being run correctly?
---
- name: deployment dir group ownership check
  shell: for i in `find /{{ path }} -name deployments -type d -print`; do ls -ld $i | grep -v 'sag'; done > /dev/null 2>&1; echo $?
  register: find_result
  delegate_to: "{{ pub_server }}"
  changed_when: False
  ignore_errors: true
  tags:
    - deployment_dir

- debug: var=find_result

- name: status of the group in deployments dir
  shell: echo "Success:Deployments dir is owned by sag group in {{ path }} in server {{ pub_server }}" >> groupCheck.log
  when: find_result.stdout == "1"
  changed_when: False
  tags:
    - deployment_dir

- name: status of the group in deployments dir
  shell: echo "Fail:Deployments dir is NOT owned by sag group in {{ path }} in server {{ pub_server }}" >> groupCheck.log
  changed_when: False
  when: find_result.stdout == "0"
  tags:
    - deployment_dir
Why are you using shell with find for this?
You can stat the folder and then get the UID and GID in order to check or set those permissions...
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - stat:
        path: /tmp
      register: tmp

    - debug: msg="Uid {{ tmp.stat.uid }} - Guid {{ tmp.stat.gid }}"
Playbook run:
PLAY ***************************************************************************

TASK [stat] ********************************************************************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Uid 0 - Guid 0"
}

Tmp folder:
14:17:17 kelson#local:/# ls -la /
drwxrwxrwt 23 root root 4096 Mai 14 14:17 tmp
Following that line of thinking, you can use the find module, register the output, and use the results to check each folder's permissions or set the ones you want...
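A rough sketch of that approach, combining the find module with the file module (the 'sag' group name, the deployments pattern, and the path/pub_server variables are taken from the question; adjust as needed):
- name: Find all "deployments" directories
  find:
    paths: "/{{ path }}"
    patterns: deployments
    file_type: directory
    recurse: yes
  register: deployment_dirs
  delegate_to: "{{ pub_server }}"

- name: Ensure every deployments dir is group-owned by sag
  file:
    path: "{{ item.path }}"
    group: sag
  loop: "{{ deployment_dirs.files }}"
  delegate_to: "{{ pub_server }}"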

Ansible wait_for module, start at end of file

With the wait_for module in Ansible, if I use search_regex='foo' on a file, it seems to start at the beginning of the file, which means it will match old data. So when restarting a process/app (Java) that appends to a file rather than starting a new one, the wait_for module exits true on old data. I would like to check from the tail of the file instead.
The regular expression in search_regex of the wait_for module is by default set to multiline.
You can register the contents of the last line and then search for the string appearing after that line (this assumes there are no duplicate lines in the log file, i.e. each one contains a time stamp):
vars:
  log_file_to_check: <path_to_log_file>
  wanted_pattern: <pattern_to_match>

tasks:
  - name: Get the contents of the last line in {{ log_file_to_check }}
    shell: tail -n 1 {{ log_file_to_check }}
    register: tail_output

  - name: Create a variable with a meaningful name, just for clarity
    set_fact:
      last_line_of_the_log_file: "{{ tail_output.stdout }}"

  ### do some other tasks ###

  - name: Match "{{ wanted_pattern }}" appearing after "{{ last_line_of_the_log_file }}" in {{ log_file_to_check }}
    wait_for:
      path: "{{ log_file_to_check }}"
      search_regex: "{{ last_line_of_the_log_file }}\r(.*\r)*.*{{ wanted_pattern }}"
techraf's answer would work if every line inside the log file is time stamped. Otherwise, the log file may have multiple lines that are identical to the last one.
A more robust/durable approach would be to check how many lines the log file currently has, and then search for the regex/pattern occurring after the 'nth' line.
vars:
  log_file: <path_to_log_file>
  pattern_to_match: <pattern_to_match>

tasks:
  - name: "Get contents of log file: {{ log_file }}"
    command: "cat {{ log_file }}"
    changed_when: false  # Do not show that state was "changed", since we are simply reading the file!
    register: cat_output

  - name: "Create variable to store line count (for clarity)"
    set_fact:
      line_count: "{{ cat_output.stdout_lines | length }}"

  ##### DO SOME OTHER TASKS (LIKE DEPLOYING APP) #####

  - name: "Wait until '{{ pattern_to_match }}' is found inside log file: {{ log_file }}"
    wait_for:
      path: "{{ log_file }}"
      search_regex: "^{{ pattern_to_skip_preexisting_lines }}{{ pattern_to_match }}$"
      state: present
    vars:
      pattern_to_skip_preexisting_lines: "(.*\\n){% raw %}{{% endraw %}{{ line_count }},{% raw %}}{% endraw %}"  # i.e. if line_count=100, this equals "(.*\\n){100,}"
One more method, using an intermediate temp file to tail only new records:
- name: Create tempfile for log tailing
  tempfile:
    state: file
  register: tempfile

- name: Asynchronous tail log to temp file
  shell: tail -n 0 -f /path/to/logfile > {{ tempfile.path }}
  async: 60
  poll: 0

- name: Wait for regex in log
  wait_for:
    path: "{{ tempfile.path }}"
    search_regex: 'some regex here'

- name: Remove tempfile
  file:
    path: "{{ tempfile.path }}"
    state: absent
Actually, if you can force a log rotation of your Java app's log file, then a straightforward wait_for will achieve what you want, since there won't be any historical log lines to match.
I am using this approach with a rolling upgrade of MongoDB, waiting for "waiting for connections" in the mongod logs before proceeding.
Sample tasks:
tasks:
  - name: Rotate mongod logs
    shell: kill -SIGUSR1 $(pidof mongod)
    args:
      executable: /bin/bash

  - name: Wait for mongod being ready
    wait_for:
      path: /var/log/mongodb/mongod.log
      search_regex: 'waiting for connections'
I solved my problem with @Slezhuk's answer above, but I found an issue: the tail command won't stop when the playbook completes. It will run forever. I had to add some more logic to stop the process:
# the playbook doesn't stop the async job automatically, so we need to stop it ourselves
- name: get the PID of the async tail
  shell: ps -ef | grep {{ tmpfile.path }} | grep tail | grep -v grep | awk '{print $2}'
  register: pid_grep

- set_fact:
    tail_pid: "{{ pid_grep.stdout }}"

# the tail command shell has two processes, so we need to kill both
- name: killing async tail command
  shell: ps -ef | grep " {{ tail_pid }} " | grep -v grep | awk '{print $2}' | xargs kill -15

- name: Wait for the tail process to be killed
  wait_for:
    path: /proc/{{ tail_pid }}/status
    state: absent
I came across this page when trying to do something similar, and Ansible has come a long way since the above, so here's my solution building on it. I also used the Linux timeout command together with async, so it's doubled up a bit.
It would be much easier if the application rotated its logs on startup, but some apps aren't nice like that!
As per this page:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
Async tasks will run until they either complete, fail, or time out according to their async value.
Hope it helps someone.
- name: set some common vars
  set_fact:
    tmp_tail_log_file: /some/path/tail_file.out  # I did this as I didn't want to create a temp file every time, and I put it in the same spot.
    timeout_value: 300  # async will terminate the task, but to be doubly sure I used the timeout linux command as well.
    log_file: /var/logs/my_log_file.log

- name: "Asynchronous tail log to {{ tmp_tail_log_file }}"
  shell: timeout {{ timeout_value }} tail -n 0 -f {{ log_file }} > {{ tmp_tail_log_file }}
  async: "{{ timeout_value }}"
  poll: 0

- name: "Wait for xxxxx to finish starting"
  wait_for:
    timeout: "{{ timeout_value }}"
    path: "{{ tmp_tail_log_file }}"
    search_regex: "{{ pattern_search }}"
  vars:
    pattern_search: (.*Startup process completed.*)
  register: waitfor

- name: "Display log file entry"
  debug:
    msg:
      - "xxxxxx startup completed - the following line was matched in the logs"
      - "{{ waitfor['match_groups'][0] }}"
