Write to file specified by user from command line argument - Ansible

I am trying to write a line to a file using lineinfile.
The name of the file is to be passed to the playbook at run time by the user as a command line argument.
Here is what the task looks like:
# Check for timezone.
- name: check timezone
  tags: timezoneCheck
  register: timezoneCheckOut
  shell: timedatectl | grep -i "Time Zone" | awk --field-separator=":" '{print $2}' | awk --field-separator=" " '{print $1}'

- lineinfile:
    path: {{ output }}
    line: "Did not find { DesiredTimeZone }"
    create: True
    state: present
    insertafter: EOF
  when: timezoneCheckOut.stdout != DesiredTimezone

- debug: var=timezoneCheckOut.stdout
My questions are:
1. How do I specify the command line argument to be the destination file to write to (path)?
2. How do I append the argument DesiredTimeZone (specified in an external variables file) to the line argument?

The following might not be the only solution, but it works.
How to specify the command line argument for the output variable:
ansible-playbook yourplaybook.yml -e output=/path/to/outputfile
How to include the DesiredTimeZone variable from an external file:
vars_files:
  - external.yml
Full playbook for testing locally:
yourplaybook.yml
- name: For testing
  hosts: localhost
  vars_files:
    - external.yml
  tasks:
    - name: check timezone
      tags: timezoneCheck
      register: timezoneCheckOut
      shell: timedatectl | grep -i "Time Zone" | awk -F":" '{print $2}' | awk -F" " '{print $1}'
    - debug: var=timezoneCheckOut.stdout
    - lineinfile:
        path: "{{ output }}"
        line: "Did not find {{ DesiredTimeZone }}"
        create: True
        state: present
        insertafter: EOF
      when: timezoneCheckOut.stdout != DesiredTimeZone
external.yml (place it at the same level as yourplaybook.yml):
---
DesiredTimeZone: "Asia/Tokyo"
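Note that extra vars passed with -e take the highest precedence in Ansible, so the value from external.yml can also be overridden at run time; for example (the output path here is just a placeholder):
ansible-playbook yourplaybook.yml -e output=/tmp/tz_check.out -e DesiredTimeZone=Europe/Berlin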

With Ansible you should define the desired state. Period.
The correct way of doing this is to just use timezone module:
- name: set timezone
  timezone:
    name: "{{ DesiredTimeZone }}"
No need to jump through hoops with shell, register, compare, and print.
If you want to put the system into the desired state, just run the playbook:
ansible-playbook -e DesiredTimeZone=Asia/Tokyo timezone_playbook.yml
Ansible will ensure that all hosts in question have the desired time zone.
If you just want to check whether your systems comply with the desired state, use the --check switch:
ansible-playbook -e DesiredTimeZone=Asia/Tokyo --check timezone_playbook.yml
In this case Ansible will just report what would have to change for the current state to become the desired state, without making any actual changes.
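For completeness, a minimal timezone_playbook.yml that the commands above could run against might look like the sketch below; the host pattern and the use of become are assumptions (changing the system time zone normally requires root):
- name: Ensure desired time zone
  hosts: all
  become: yes
  tasks:
    - name: set timezone
      timezone:
        name: "{{ DesiredTimeZone }}"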

Related

when condition to evaluate list of values inside with_items

My intention with the playbook below is to run a shell command only when it finds any of the values from disk_list. I need help framing the when condition to do this.
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - set_fact:
        disk_list:
          - sda
          - sdb
          - sdc
    - name: Get df -hT output to know XFS file system
      shell: df -hT |grep xfs|awk '{print $1}'
      register: df_result
    - name: Run shell command on each XFS file system
      shell: ls -l {{ item }} | awk '{print $1}'
      with_items: "{{ df_result.stdout_lines }}"
      when: "{{ disk_list.[] }} in {{ item }}"
BTW, in my system, the df_result variable looks like this:
TASK [debug] ***************************************************************************************************************************************
ok: [localhost] => {
    "df_result": {
        "changed": true,
        "cmd": "df -hT |grep xfs|awk '{print $1}'",
        "delta": "0:00:00.017588",
        "end": "2019-03-01 23:55:21.318871",
        "failed": false,
        "rc": 0,
        "start": "2019-03-01 23:55:21.301283",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "/dev/sda3\n/dev/sda1",
        "stdout_lines": [
            "/dev/sda3",
            "/dev/sda1"
        ]
    }
}
Please help!
Some notes on the playbook:
1. For localhost, connection: local is implicit, so there is no technical need to specify it.
2. set_fact is correct, but in this case I believe it is more appropriate to use vars at the play level instead of the set_fact module in a task. This also allows you to use vars_files for easier handling of disk device lists.
3. I would recommend keeping your task names simple, as you may want to use the --start-at-task option at some point.
4. with_items is deprecated in favor of loop.
5. The when conditional works at the task level, so it will execute the task (for all its items) if the condition is met.
Here is a working version of what you need, which also reflects the five recommendations above:
---
- hosts: localhost
  gather_facts: false
  vars:
    disk_list:
      - sda
      - sdb
      - sdc
  tasks:
    - name: cleans previous runs
      shell: cat /dev/null > /tmp/df_result
    - name: creates temporary file
      shell: df -hT | grep ext4 | grep -i {{ item }} | awk '{print $1}' >> /tmp/df_result
      loop: "{{ disk_list }}"
    - name: creates variable
      shell: cat /tmp/df_result
      register: df_result
    - name: shows info
      shell: ls -l {{ item }} | awk '{print $1}'
      loop: "{{ df_result.stdout_lines }}"
I basically just use a temporary file (/tmp/df_result) holding the results you need, already filtered by the grep -i {{ item }} in the loop of the "creates temporary file" task. The loop in "shows info" then iterates over an already clean list of items.
To see the result on screen, you could use the -v option when running the playbook; if you want to save the result to a file, you could append " >> /tmp/df_final" to the shell line in the "shows info" task.
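If you would rather keep a single task with the when condition the question asked about, here is a minimal sketch; it assumes Jinja2 >= 2.10, where the in test can be combined with select to check whether any disk name is a substring of the current item:
- name: Run shell command on each XFS file system that matches disk_list
  shell: ls -l {{ item }} | awk '{print $1}'
  loop: "{{ df_result.stdout_lines }}"
  when: disk_list | select('in', item) | list | length > 0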
I accidentally stumbled on this post, which is kind of old. I'm sure you have already fixed this in your environment by now; maybe you found a better way to do it. Hopefully you did.
Regards,

Ansible: how do I display the output from a task that uses with_items?

Ansible newbie here.
Hopefully there is a simple solution to my problem.
I'm trying to run SQL across a number of Oracle databases on one node. I generate a list of databases from ps -ef and use with_items to pass the dbname values.
My question is: how do I display the output from each database running the select statement?
tasks:
  - name: Exa check | find db instances
    become: yes
    become_user: oracle
    shell: |
      ps -ef|grep pmon|grep -v grep|grep -v ASM|awk '{ print $8 }'|cut -d '_' -f3
    register: instance_list_output
    changed_when: false
    run_once: true
  - shell: |
      export ORAENV_ASK=NO; export ORACLE_SID={{ item }}; export ORACLE_HOME=/u01/app/oracle/database/12.1.0.2/dbhome_1; source /usr/local/bin/oraenv; $ORACLE_HOME/bin/sqlplus -s \"/ as sysdba\"<< EOSQL
      select * from v\$instance;
      EOSQL
    with_items:
      - "{{ instance_list_output.stdout_lines }}"
    register: sqloutput
    run_once: true
The loop below might work.
- debug:
    msg: "{{ item.stdout }}"
  loop: "{{ sqloutput.results }}"
If it does not, take a look at the content of the variable and decide how to use it:
- debug: var=sqloutput
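To label each block of output with the instance it came from, a slightly extended variant can help; it relies on the fact that every entry in a registered loop result keeps the original loop value in item.item:
- name: Show output per instance
  debug:
    msg: "{{ item.item }}: {{ item.stdout_lines }}"
  loop: "{{ sqloutput.results }}"
  loop_control:
    label: "{{ item.item }}"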

Ansible: read (more than one) values from a line in a file

I am trying to read variables from a file into an array of some kind, not into predefined variables.
The goal is a playbook that searches for lines containing item.0 but NOT item.1 and deletes them; later the playbook makes sure that a line with item.0 item.1 is present. That's why I need the values split.
For example, I have a file with lines like this:
Parameter Value
Parameter Value
Parameter Value
I want to use this in a loop until EOF.
Example part of the playbook:
- name: lineinfile loop
  lineinfile:
    path: /myfile
    regexp: '^{{ something like item.0 }}.(?!{{ something like item.1 }})'
    state: absent
  ...
Does anybody know a solution for the lookup and the loop?
Best regards
This should help you.
Input file
cat filein
Parameter1,Value1
Parameter2,Value2
Parameter3,Value3
Playbook:
---
- hosts: test
  tasks:
    - name: reg
      shell: cat filein
      register: myitems
    - name: deb
      debug: var=myitems.stdout_lines
    - name: echo to file
      shell: "echo {{ item.split(',')[1] }} - {{ item.split(',')[0] }} >> fileout"
      with_items: "{{ myitems.stdout_lines }}"
Output file:
cat fileout
Value1 - Parameter1
Value2 - Parameter2
Value3 - Parameter3
This is not state of the art but should work.
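To tie this back to the delete-then-ensure idea from the question, the same split() can feed lineinfile directly. A rough sketch, assuming comma-separated input and values that contain no regex metacharacters:
- name: remove lines that have the parameter with a different value
  lineinfile:
    path: /myfile
    regexp: '^{{ item.split(",")[0] }} (?!{{ item.split(",")[1] }}$)'
    state: absent
  with_items: "{{ myitems.stdout_lines }}"
- name: ensure the wanted parameter/value line is present
  lineinfile:
    path: /myfile
    line: '{{ item.split(",")[0] }} {{ item.split(",")[1] }}'
    state: present
  with_items: "{{ myitems.stdout_lines }}"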
The solution:
There is an Ansible role on Ansible Galaxy that provides exactly what I need:
https://github.com/mkouhei/ansible-role-includecsv
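For reference, pulling a role straight from GitHub and wiring it into a play generally looks like the following; the role's variable names and exact usage should be taken from its README rather than from this sketch:
ansible-galaxy install git+https://github.com/mkouhei/ansible-role-includecsv
- hosts: test
  roles:
    - ansible-role-includecsv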

Ansible wait_for module, start at end of file

With the wait_for module in Ansible, if I use search_regex='foo' on a file, it seems to start at the beginning of the file. This means it will match old data, so when restarting a process/app (Java) that appends to a file rather than starting a new one, wait_for will return true for old data. I would like to check starting from the tail of the file.
The regular expression in search_regex of the wait_for module is by default set to multiline.
You can register the contents of the last line and then search for the string appearing after that line (this assumes there are no duplicate lines in the log file, i.e. each one contains a timestamp):
vars:
  log_file_to_check: <path_to_log_file>
  wanted_pattern: <pattern_to_match>
tasks:
  - name: Get the contents of the last line in {{ log_file_to_check }}
    shell: tail -n 1 {{ log_file_to_check }}
    register: tail_output
  - name: Create a variable with a meaningful name, just for clarity
    set_fact:
      last_line_of_the_log_file: "{{ tail_output.stdout }}"

  ### do some other tasks ###

  - name: Match "{{ wanted_pattern }}" appearing after "{{ last_line_of_the_log_file }}" in {{ log_file_to_check }}
    wait_for:
      path: "{{ log_file_to_check }}"
      search_regex: "{{ last_line_of_the_log_file }}\r(.*\r)*.*{{ wanted_pattern }}"
techraf's answer works if every line inside the log file is timestamped. Otherwise, the log file may contain multiple lines identical to the last one.
A more robust/durable approach is to check how many lines the log file currently has, and then search for the regex/pattern occurring after the nth line:
vars:
  log_file: <path_to_log_file>
  pattern_to_match: <pattern_to_match>
tasks:
  - name: "Get contents of log file: {{ log_file }}"
    command: "cat {{ log_file }}"
    changed_when: false  # Do not report "changed" since we are simply reading the file!
    register: cat_output
  - name: "Create variable to store line count (for clarity)"
    set_fact:
      line_count: "{{ cat_output.stdout_lines | length }}"

  ##### DO SOME OTHER TASKS (LIKE DEPLOYING APP) #####

  - name: "Wait until '{{ pattern_to_match }}' is found inside log file: {{ log_file }}"
    wait_for:
      path: "{{ log_file }}"
      search_regex: "^{{ pattern_to_skip_preexisting_lines }}{{ pattern_to_match }}$"
      state: present
    vars:
      pattern_to_skip_preexisting_lines: "(.*\\n){% raw %}{{% endraw %}{{ line_count }},{% raw %}}{% endraw %}"  # i.e. if line_count=100, this equals "(.*\\n){100,}"
One more method, using an intermediate temp file to capture only new records:
- name: Create tempfile for log tailing
  tempfile:
    state: file
  register: tempfile
- name: Asynchronous tail log to temp file
  shell: tail -n 0 -f /path/to/logfile > {{ tempfile.path }}
  async: 60
  poll: 0
- name: Wait for regex in log
  wait_for:
    path: "{{ tempfile.path }}"
    search_regex: 'some regex here'
- name: Remove tempfile
  file:
    path: "{{ tempfile.path }}"
    state: absent
Actually, if you can force a log rotation on your Java app's log file, then a straightforward wait_for will achieve what you want, since there won't be any historical log lines to match.
I am using this approach during a rolling upgrade of mongodb, waiting for "waiting for connections" in the mongod logs before proceeding.
Sample tasks:
tasks:
  - name: Rotate mongod logs
    shell: kill -SIGUSR1 $(pidof mongod)
    args:
      executable: /bin/bash
  - name: Wait for mongod being ready
    wait_for:
      path: /var/log/mongodb/mongod.log
      search_regex: 'waiting for connections'
I solved my problem with Slezhuk's answer above, but I found an issue: the tail command won't stop when the playbook completes. It will run forever. I had to add some more logic to stop the process:
# the playbook doesn't stop the async job automatically, so we need to stop it
- name: get the PID of the async tail
  shell: ps -ef | grep {{ tempfile.path }} | grep tail | grep -v grep | awk '{print $2}'
  register: pid_grep
- set_fact:
    tail_pid: "{{ pid_grep.stdout }}"
# the tail command shell has two processes, so we need to kill both
- name: killing async tail command
  shell: ps -ef | grep " {{ tail_pid }} " | grep -v grep | awk '{print $2}' | xargs kill -15
- name: Wait for the tail process to be killed
  wait_for:
    path: /proc/{{ tail_pid }}/status
    state: absent
I came across this page when trying to do something similar, and Ansible has come a long way since the above, so here's my solution built on it. I also used the Linux timeout command together with async, so the tail is guarded twice over.
It would be much easier if the application rotated its logs on startup, but some apps aren't nice like that!
As per this page:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
"The async tasks will run until they either complete, fail or timeout according to their async value."
Hope it helps someone.
- name: set some common vars
  set_fact:
    tmp_tail_log_file: /some/path/tail_file.out  # I did this as I didn't want to create a temp file every time, and I put it in the same spot
    timeout_value: 300  # async will terminate the task, but to be doubly sure I used the Linux timeout command as well
    log_file: /var/logs/my_log_file.log
- name: "Asynchronous tail log to {{ tmp_tail_log_file }}"
  shell: timeout {{ timeout_value }} tail -n 0 -f {{ log_file }} > {{ tmp_tail_log_file }}
  async: "{{ timeout_value }}"
  poll: 0
- name: "Wait for xxxxx to finish starting"
  wait_for:
    timeout: "{{ timeout_value }}"
    path: "{{ tmp_tail_log_file }}"
    search_regex: "{{ pattern_search }}"
  vars:
    pattern_search: (.*Startup process completed.*)
  register: waitfor
- name: "Display log file entry"
  debug:
    msg:
      - "xxxxxx startup completed - the following line was matched in the logs"
      - "{{ waitfor['match_groups'][0] }}"

In Ansible, how do I export an environment variable in one task so it can be used by another task?

Say I have the following file
foo.txt
Hello world=1
Hello world=2
...
Hello world=N
I want to use Ansible to insert another Hello world line, like this
foo.txt
Hello world=1
Hello world=2
...
Hello world=N
Hello world=N+1
In other words, I don't know how many Hello world lines there are; I need to find out via a script and get the value of N.
In Ansible, say I have the following shell task
- name: Find the last Hello world in foo.txt, and get the value of N
  shell: >
    # Get last Hello world line in foo.txt. I want to export this as an
    # environment variable
    LAST_HW_LINE=`awk '/Hello world/ {aline=$0} END{print aline}' "foo.txt"`
    # Get left side of equation
    IFS='=' read -ra LS <<< "$LAST_HW_LINE"
    # Get the value of N
    NUM=${LS##*\.}
    # Increment by 1. I want to export this as an environment variable
    NUM=$((NUM+1))
I want to be able to subsequently do this
- name: Add the next Hello world line
  lineinfile:
    dest: foo.txt
    insertafter: "{{ lookup('env', 'LAST_HW_LINE') }}"
    line: "Hello world={{ lookup('env', 'NUM') }}"
Maybe there's a better way to do this than using environment variables?
Register variables:
- shell: /usr/bin/foo
  register: foo_result
Using the registered stdout/stderr as Ansible variables:
- debug: msg="{{ foo_result.stdout }}"
- debug: msg="{{ foo_result.stderr }}"
Including a task with variables:
tasks:
  - include: wordpress.yml wp_user=timmy
Including a role with variables:
- hosts: webservers
  roles:
    - common
    - { role: foo_app_instance, dir: '/opt/a', app_port: 5000 }
For your case:
- name: so question 39041208
  hosts: '{{ target | default("all") }}'
  tasks:
    - name: Find the last Hello world in foo.txt, and get the value of N
      # shell: awk '/Hello world/ {aline=$0} END{print aline}' "/home/ak/ansible/stackoverflow/q39041208.txt"
      shell: awk '/Hello world/ {aline=$0} END{print NR}' "/home/ak/ansible/stackoverflow/q39041208.txt"
      register: last_line
      ignore_errors: true
    - name: debug
      debug: msg="last line {{ last_line.stdout }}"
    - name: debug next number
      debug: msg="next num {{ last_line.stdout | int + 1 }}"
    - name: Add the next Hello world line
      lineinfile:
        dest: /home/ak/ansible/stackoverflow/q39041208.txt
        insertafter: "{{ last_line.stdout }}"
        line: "Hello world={{ last_line.stdout | int + 1 }}"
