ansible shell module fails to run sed for string insert operation - bash

I am trying to insert a string into a docker.conf file and it fails in Ansible, possibly due to my syntax. It works when I run it manually.
vars:
  memlock_value: "16777216:16777216"
  docker_options_file: "/etc/systemd/system/docker.service.d/docker-options.conf"
The task is:
- name: "Set in flag for ulimit in docker conf"
  shell: |
    - "sed '/^Environment=/ s/\"$/ --default-ulimit memlock={{ memlock_value }}\"/' {{ docker_options_file }} -i"
output:
task path: /ansible-managed/jenkins-slave/slave1/workspace/teamd/run_ansible_playbook/k8s/baremetal/roles/team-node-config/tasks/main.yml
23:27:53 Tuesday 07 December 2021 07:27:53 +0000 (0:00:11.047) 0:00:19.487 ******
fatal: [node1]: FAILED! => {"changed": true, "cmd": "- \"sed '/^Environment=/ s/\\\"$/ --default-ulimit memlock=16777216:16777216\\\"/' /etc/systemd/system/docker.service.d/docker-options.conf -i\"\n \n", "delta": "0:00:00.004384", "end": "2021-12-07 07:27:53.586179", "msg": "non-zero return code", "rc": 2, "start": "2021-12-07 07:27:53.581795", "stderr": "/bin/sh: 0: Illegal option - ", "stderr_lines": ["/bin/sh: 0: Illegal option - "], "stdout": "", "stdout_lines": []}
The manual command works:
sed '/^Environment=/ s/"$/ --default-ulimit memlock=16777216:16777216"/' /etc/systemd/system/docker.service.d/docker-options.conf -i
I also tried lineinfile, but it replaces the whole line instead of just inserting at the end:
- name: "Set in flag for ulimit in docker conf"
lineinfile:
path: "{{ docker_options_file_dummy }}"
regexp: '^Environment='
insertafter: '^Environment=.+$'
line: "--default-ulimit memlock={{ memlock_value }}"
ignore_errors: yes
output
=> {"backup": "", "changed": true, "msg": "line replaced"}
[Service]
--default-ulimit memlock=16777216:16777216

I finally figured it out; here it is, but I am still trying to work out how to make it idempotent. Multiple runs keep appending the content to the end of the line.
- name: "add ulimit in tests file to end of line in {{ dummy_file }}"
lineinfile:
dest: "{{ dummy_file }}"
state: present
regexp: '^(Environment="[^"]*)'
backrefs: yes
line: '\1 {{ ulimit_memlock_flag }}"'
file before task
[Service]
Environment="DOCKER_OPTS= --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 --iptables=false"
file after task
[Service]
Environment="DOCKER_OPTS= --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 --iptables=false --default-ulimit memlock=16777216:16777216"

I finally figured out how to make it idempotent; multiple runs do not add the content to the end of the line again.
- name: "with idempotency add ulimit in tests file to end of line in {{ dummy_file }}"
lineinfile:
dest: "{{ dummy_file }}"
state: present
#regexp: '^(DOCKER_OPTS="[^"]*)'
regexp: (.*"DOCKER_OPTS=.*iptables=false)
backrefs: yes
line: '\1 {{ ulimit_memlock_flag }}"'
file before task
[Service]
Environment="DOCKER_OPTS= --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 --iptables=false"
file after task run 1
[Service]
Environment="DOCKER_OPTS= --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 --iptables=false --default-ulimit memlock=16777216:16777216"
file after task run 2
[Service]
Environment="DOCKER_OPTS= --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 --iptables=false --default-ulimit memlock=16777216:16777216"

The text inside a | literal block is passed verbatim; it is not a YAML list, so the leading "- " and the surrounding quotes become part of the command (hence the /bin/sh: 0: Illegal option - error). Also remove the enclosing double quotes: with them the whole string is treated as one word, the command name, instead of being split on spaces into a command and its arguments.
Try:
shell: |
  sed '/^Environment=/ s/"$/ --default-ulimit memlock={{ memlock_value }}"/' "{{ docker_options_file }}" -i
It's odd that the -i option is at the end. I think it would be nicer to:
shell: >
  sed -i -e
  '/^Environment=/ s/"$/ --default-ulimit memlock={{ memlock_value }}"/'
  "{{ docker_options_file }}"
You should prefer command over shell:
command:
  argv:
    - sed
    - -i
    - -e
    - '/^Environment=/ s/"$/ --default-ulimit memlock={{ memlock_value }}"/'
    - "{{ docker_options_file }}"
And you should prefer lineinfile over sed. Something along the lines of:
lineinfile:
  regexp: '^(Environment="[^"]*)"'
  backrefs: yes
  line: '\1 --default-ulimit memlock={{ memlock_value }}"'
  path: "{{ docker_options_file }}"

Related

Newline('\n') is replaced by space(' ') in ansible shell module

I have an ansible task where I make use of shell module to run a custom bash script. This bash script expects an optional parameter which could be a multiline string. Example
~/myScript.sh -e "Each \nword \nin \nnew \nline"
This works perfectly fine when I run it in the bash shell. However, I have tried several ways and haven't managed to make it work via the ansible task, with the value being a variable itself. The task is below:
- name: Sample Task
  shell:
    cmd: "{{ home_dir }}/myScript.sh -e '{{ string_with_escaped_newlines }}'"
Here, the value of variable string_with_escaped_newlines is set to Each \nword \nin \nnew \nline programmatically by other tasks. The output of the task confirms this.
I put an echo in my shell script to debug, and I observed that it prints two different sequences in the two cases (running directly via the shell vs. running via ansible).
Debug output via shell-
Each
word
in
new
line
Debug output via ansible -
Each word in new line
Notice that a space is introduced in the value in place of \n. I do not understand why this happens or how to stop it. I have tried both the shell and the command modules of ansible.
cmd: "{{ home_dir }}/myScript.sh -e '{{ string_with_escaped_newlines }}'"
is first converted (by Jinja2) to
cmd: "<THE HOME>/myScript.sh -e 'Each \nword \nin \nnew \nline'"
Here \n is within double quotes and in YAML double quoted \ns are NEWLINEs (think "\n" in C or JSON).
For your case you can just write:
cmd: {{ home_dir }}/myScript.sh -e '{{ string_with_escaped_newlines }}'
Or to be more safe:
cmd: |
{{ home_dir }}/myScript.sh -e '{{ string_with_escaped_newlines }}'
YAML syntax is quite "confusing". You can take a look at Ansible's own YAML intro.
Given the script
shell> cat ~/myScript.sh
#!/usr/bin/sh
echo ${1}
shell> ~/myScript.sh "Each \nword \nin \nnew \nline"
Each
word
in
new
line
Q: "Newline '\n' is replaced by space ' ' in Ansible shell module."
A: It depends on how you quote and escape the string. The simplest option is single-quoted style because only ' needs to be escaped
string_with_escaped_newlines1: 'Each \nword \nin \nnew \nline'
You have to escape \n in double-quoted style
string_with_escaped_newlines2: "Each \\nword \\nin \\nnew \\nline"
Both variables expand to the same string
- debug:
    var: string_with_escaped_newlines1
- debug:
    var: string_with_escaped_newlines2
gives
string_with_escaped_newlines1: Each \nword \nin \nnew \nline
string_with_escaped_newlines2: Each \nword \nin \nnew \nline
Then, the quotation in the command is not significant. All options below give the same result
- command: ~/myScript.sh "{{ string_with_escaped_newlines1 }}"
- command: ~/myScript.sh '{{ string_with_escaped_newlines1 }}'
- command: ~/myScript.sh "{{ string_with_escaped_newlines2 }}"
- command: ~/myScript.sh '{{ string_with_escaped_newlines2 }}'
Example of a complete playbook for testing
- hosts: localhost
vars:
string_with_escaped_newlines1: 'Each \nword \nin \nnew \nline'
string_with_escaped_newlines2: "Each \\nword \\nin \\nnew \\nline"
tasks:
- command: ~/myScript.sh "{{ string_with_escaped_newlines1 }}"
register: out
- debug:
var: out.stdout_lines
- command: ~/myScript.sh '{{ string_with_escaped_newlines1 }}'
register: out
- debug:
var: out.stdout_lines
- command: ~/myScript.sh "{{ string_with_escaped_newlines2 }}"
register: out
- debug:
var: out.stdout_lines
- command: ~/myScript.sh '{{ string_with_escaped_newlines2 }}'
register: out
- debug:
var: out.stdout_lines
gives (abridged) four times the same result
TASK [debug] *********************************************************
ok: [localhost] =>
out.stdout_lines:
- 'Each '
- 'word '
- 'in '
- 'new '
- line
Notes:
Using shell instead of command gives the same results.
The best practice is to use the command module unless the ansible.builtin.shell module is explicitly required.
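For illustration (the output path is made up, not from the question): command passes its arguments straight to the program, so anything that needs shell features, such as redirection or pipes, has to use shell:
- command: ~/myScript.sh "{{ string_with_escaped_newlines1 }}"   # no shell features needed
- shell: ~/myScript.sh "{{ string_with_escaped_newlines1 }}" > /tmp/myScript.out   # redirection requires shell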
Have you tried \r\n instead of \n? It helped me elsewhere in ansible.

Ansible copy file to specific host based on filename

I am trying to copy files to a list of hosts and each file has the hostname present in the name. Here is my playbook:
---
- hosts: ibank
  become: true
  become_exe: 'sudo su -'
  tasks:
    - name:"copy ibank to {{ inventory_hostname }} qa"
      copy:
        src: "/usr/local/jenkins_workspace/Trunk_Ibank_Build_Ansible/ibank/DIST/ibank_{{ inventory_hostname }}_qa.war"
        dest: "/usr/local/ibank/ibank_{{ inventory_hostname }}_qa.war"
      command:
        chdir: /usr/local/ibank
        cmd: /usr/local/ibank/restart.sh
But when the file is run I get "ERROR! Syntax Error while loading YAML. mapping values are not allowed here"
Not sure what I am doing wrong at the moment.
The error at line 7, column 10, "mapping values are not allowed here", means you forgot the space after name: on line 6; it should be:
name: "copy ibank to {{ inventory_hostname }} qa"
You should move "command" module usage to another task - you can't use 2 modules in one task.
Try something like this:
---
- hosts: ibank
  become: true
  become_exe: 'sudo su -'
  tasks:
    - name: "copy ibank to {{ inventory_hostname }} qa"
      copy:
        src: "/usr/local/jenkins_workspace/Trunk_Ibank_Build_Ansible/ibank/DIST/ibank_{{ inventory_hostname }}_qa.war"
        dest: "/usr/local/ibank/ibank_{{ inventory_hostname }}_qa.war"
    - name: "restart ibank after copy"
      command:
        chdir: /usr/local/ibank
        cmd: /usr/local/ibank/restart.sh
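If the restart should only happen when the .war file actually changes, a handler is the idiomatic way to couple the two tasks; a sketch along these lines (the notify/handler name is illustrative):
---
- hosts: ibank
  become: true
  become_exe: 'sudo su -'
  tasks:
    - name: "copy ibank to {{ inventory_hostname }} qa"
      copy:
        src: "/usr/local/jenkins_workspace/Trunk_Ibank_Build_Ansible/ibank/DIST/ibank_{{ inventory_hostname }}_qa.war"
        dest: "/usr/local/ibank/ibank_{{ inventory_hostname }}_qa.war"
      notify: restart ibank   # fires only when the copy task reports "changed"
  handlers:
    - name: restart ibank
      command:
        chdir: /usr/local/ibank
        cmd: /usr/local/ibank/restart.sh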

ansible overwrite variable depending on result

I am running several shell commands in an ansible playbook that may or may not modify a configuration file.
One of the items in the playbook is to restart the service. But I only want to do this if a variable is set.
I am planning on registering a result in each of the shell tasks, but I do not want to overwrite the variable if it is already set to 'restart_needed' or something like that.
The idea is the restart should be the last thing to go, and if any of the commands set the restart variable, it will go, and if none of them did, the service will not be restarted. Here is an example of what I have so far...
tasks:
  - name: Make a backup copy of file
    copy: src={{ file_path }} dest={{ file_path }}.{{ date }} remote_src=true owner=root group=root mode=644 backup=yes
  - name: get list of items
    shell: |
      grep <file>
    register: result
  - name: output will be 'restart_needed'
    shell: |
      NUM=14"s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: output
  - name: output will be 'nothing_changed'
    shell: |
      NUM="s"; if [ "${NUM}" != "s" ]; then sed -i "${NUM}/no/yes/g" {{ file_path }}; echo "restart_needed"; else echo "nothing_changed"; fi
    with_items: "{{ result.stdout_lines }}"
    register: output
  - name: Restart service
    service: name=myservice enabled=yes state=restarted
In the above example, the variable output will be set to restart_needed after the first task but then will be changed to 'nothing_changed' in the second task.
I want to keep the variable at 'restart_needed' if it is already there and then kick off the restart service task only if the variable is set to restart_needed.
Thanks!
For triggering restarts, you have two options: the when statement or handlers.
When statement example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result
  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
    when: result.rc == 0
Handlers example:
tasks:
  - name: check if string "foo" exists in somefile
    shell: grep -q foo somefile
    register: result
    changed_when: "result.rc == 0"
    notify: restart service
handlers:
  - name: restart service
    service:
      name: myservice
      enabled: yes
      state: restarted
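To match the original intent (several edit tasks, one restart at the end, only if at least one of them changed something), several tasks can notify the same handler, which still runs only once. A sketch with illustrative task names, which also keeps grep's non-zero exit code from failing the play:
tasks:
  - name: enable option A if present in the file
    shell: grep -q optionA {{ file_path }} && sed -i 's/no/yes/' {{ file_path }}
    register: edit_a
    failed_when: edit_a.rc not in [0, 1]   # grep returns 1 on "no match"; don't fail the play
    changed_when: edit_a.rc == 0
    notify: restart myservice
  - name: enable option B if present in the file
    shell: grep -q optionB {{ file_path }} && sed -i 's/no/yes/' {{ file_path }}
    register: edit_b
    failed_when: edit_b.rc not in [0, 1]
    changed_when: edit_b.rc == 0
    notify: restart myservice
handlers:
  - name: restart myservice   # runs once, at the end, only if some task reported "changed"
    service:
      name: myservice
      enabled: yes
      state: restarted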

Ansible wait_for module, start at end of file

With the wait_for module in Ansible, if I use search_regex='foo' on a file, it seems to start at the beginning of the file, which means it will match on old data. So when restarting a process/app (Java) that appends to a file rather than starting a new one, the wait_for module will return true because of old data, but I would like to check from the tail of the file.
The regular expression in search_regex of the wait_for module is by default set to multiline.
You can register the contents of the last line and then search for the string appearing after that line (this assumes there are no duplicate lines in the log file, i.e. each one contains a time stamp):
vars:
  log_file_to_check: <path_to_log_file>
  wanted_pattern: <pattern_to_match>
tasks:
  - name: Get the contents of the last line in {{ log_file_to_check }}
    shell: tail -n 1 {{ log_file_to_check }}
    register: tail_output
  - name: Create a variable with a meaningful name, just for clarity
    set_fact:
      last_line_of_the_log_file: "{{ tail_output.stdout }}"
  ### do some other tasks ###
  - name: Match "{{ wanted_pattern }}" appearing after "{{ last_line_of_the_log_file }}" in {{ log_file_to_check }}
    wait_for:
      path: "{{ log_file_to_check }}"
      search_regex: "{{ last_line_of_the_log_file }}\r(.*\r)*.*{{ wanted_pattern }}"
techraf's answer would work if every line inside the log file is time stamped. Otherwise, the log file may have multiple lines that are identical to the last one.
A more robust/durable approach would be to check how many lines the log file currently has, and then search for the regex/pattern occurring after the 'nth' line.
vars:
  log_file: <path_to_log_file>
  pattern_to_match: <pattern_to_match>
tasks:
  - name: "Get contents of log file: {{ log_file }}"
    command: "cat {{ log_file }}"
    changed_when: false # Do not show that state was "changed" since we are simply reading the file!
    register: cat_output
  - name: "Create variable to store line count (for clarity)"
    set_fact:
      line_count: "{{ cat_output.stdout_lines | length }}"
  ##### DO SOME OTHER TASKS (LIKE DEPLOYING APP) #####
  - name: "Wait until '{{ pattern_to_match }}' is found inside log file: {{ log_file }}"
    wait_for:
      path: "{{ log_file }}"
      search_regex: "^{{ pattern_to_skip_preexisting_lines }}{{ pattern_to_match }}$"
      state: present
    vars:
      pattern_to_skip_preexisting_lines: "(.*\\n){% raw %}{{% endraw %}{{ line_count }},{% raw %}}{% endraw %}" # i.e. if line_count=100, then this would equal "(.*\\n){100,}"
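As a sanity check (illustrative only, not part of the answer above), the composed pattern can be previewed with debug before waiting on it; with line_count=100 it renders to "(.*\n){100,}" followed by the pattern:
- name: Preview the composed search_regex
  debug:
    msg: "^{{ pattern_to_skip_preexisting_lines }}{{ pattern_to_match }}$"
  vars:
    pattern_to_skip_preexisting_lines: "(.*\\n){% raw %}{{% endraw %}{{ line_count }},{% raw %}}{% endraw %}"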
One more method, using intermediate temp file for tailing new records:
- name: Create tempfile for log tailing
  tempfile:
    state: file
  register: tempfile
- name: Asynchronous tail log to temp file
  shell: tail -n 0 -f /path/to/logfile > {{ tempfile.path }}
  async: 60
  poll: 0
- name: Wait for regex in log
  wait_for:
    path: "{{ tempfile.path }}"
    search_regex: 'some regex here'
- name: Remove tempfile
  file:
    path: "{{ tempfile.path }}"
    state: absent
Actually, if you can force a log rotation on your java app log file then straightforward wait_for will achieve what you want since there won't be any historical log lines to match
I am using this approach with rolling upgrade of mongodb and waiting for "waiting for connections" in the mongod logs before proceeding.
sample tasks:
tasks:
  - name: Rotate mongod logs
    shell: kill -SIGUSR1 $(pidof mongod)
    args:
      executable: /bin/bash
  - name: Wait for mongod being ready
    wait_for:
      path: /var/log/mongodb/mongod.log
      search_regex: 'waiting for connections'
I solved my problem with @Slezhuk's answer above, but I found an issue: the tail command won't stop after the completion of the playbook. It will run forever. I had to add some more logic to stop the process:
# the playbook doesn't stop the async job automatically. Need to stop it
- name: get the PID of the async tail
  shell: ps -ef | grep {{ tmpfile.path }} | grep tail | grep -v grep | awk '{print $2}'
  register: pid_grep
- set_fact:
    tail_pid: "{{ pid_grep.stdout }}"
# the tail command shell has two processes. So need to kill both
- name: killing async tail command
  shell: ps -ef | grep " {{ tail_pid }} " | grep -v grep | awk '{print $2}' | xargs kill -15
- name: Wait for the tail process to be killed
  wait_for:
    path: /proc/{{ tail_pid }}/status
    state: absent
I came across this page when trying to do something similar, and Ansible has come a long way since the above, so here's my solution building on it. I also used the Linux timeout command together with async, so I'm doubling up a bit.
It would be much easier if the application rotated logs on startup - but some apps aren't nice like that!
As per this page:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
the async tasks will run until they either complete, fail or time out according to their async value.
Hope it helps someone
- name: set some common vars
  set_fact:
    tmp_tail_log_file: /some/path/tail_file.out # I did this as I didn't want to create a temp file every time, and I put it in the same spot.
    timeout_value: 300 # async will terminate the task - but to be double sure I used the timeout linux command as well.
    log_file: /var/logs/my_log_file.log
- name: "Asynchronous tail log to {{ tmp_tail_log_file }}"
  shell: timeout {{ timeout_value }} tail -n 0 -f {{ log_file }} > {{ tmp_tail_log_file }}
  async: "{{ timeout_value }}"
  poll: 0
- name: "Wait for xxxxx to finish starting"
  wait_for:
    timeout: "{{ timeout_value }}"
    path: "{{ tmp_tail_log_file }}"
    search_regex: "{{ pattern_search }}"
  vars:
    pattern_search: (.*Startup process completed.*)
  register: waitfor
- name: "Display log file entry"
  debug:
    msg:
      - "xxxxxx startup completed - the following line was matched in the logs"
      - "{{ waitfor['match_groups'][0] }}"

Only check whether a line present in a file (ansible)

In ansible, I need to check whether a particular line is present in a file or not. Basically, I need to convert the following command to an ansible task. My goal is only to check.
grep -Fxq "127.0.0.1" /tmp/my.conf
Use check_mode, register and failed_when in concert. This fails the task if the lineinfile module would make any changes to the file being checked. Check_mode ensures nothing will change even if it otherwise would.
- name: "Ensure /tmp/my.conf contains '127.0.0.1'"
lineinfile:
name: /tmp/my.conf
line: "127.0.0.1"
state: present
check_mode: yes
register: conf
failed_when: (conf is changed) or (conf is failed)
- name: Check whether /tmp/my.conf contains "127.0.0.1"
  command: grep -Fxq "127.0.0.1" /tmp/my.conf
  register: checkmyconf
  check_mode: no
  ignore_errors: yes
  changed_when: no
- name: Greet the world if /tmp/my.conf contains "127.0.0.1"
  debug: msg="Hello, world!"
  when: checkmyconf.rc == 0
Update 2017-08-28: Older Ansible versions need to use always_run: yes instead of check_mode: no.
User robo's regexp & absent method is quite clean, so I've fleshed it out here for easy use and added improvements from comments by @assylias and @Olivier:
- name: Ensure /tmp/my.conf contains 127.0.0.1
  lineinfile:
    path: /tmp/my.conf
    regexp: '^127\.0\.0\.1.*whatever'
    state: absent
  check_mode: yes
  changed_when: false
  register: out
- debug:
    msg: "Yes, line exists."
  when: out.found
- debug:
    msg: "Line does NOT exist."
  when: not out.found
With the accepted solution, even though you ignore errors, you will still get ugly red error output on the first task if there is no match:
TASK: [Check whether /tmp/my.conf contains "127.0.0.1"] ***********************
failed: [localhost] => {"changed": false, "cmd": "grep -Fxq \"127.0.0.1\" /tmp/my.conf", "delta": "0:00:00.018709", "end": "2015-09-27 17:46:18.252024", "rc": 1, "start": "2015-09-27 17:46:18.233315", "stdout_lines": [], "warnings": []}
...ignoring
If you want less verbose output, you can use awk instead of grep. awk won't return an error on a non-match, which means the first check task below won't error regardless of a match or non-match:
- name: Check whether /tmp/my.conf contains "127.0.0.1"
  command: awk /^127.0.0.1$/ /tmp/my.conf
  register: checkmyconf
  changed_when: False
- name: Greet the world if /tmp/my.conf contains "127.0.0.1"
  debug: msg="Hello, world!"
  when: checkmyconf.stdout | match("127.0.0.1")
Notice that my second task uses the match filter as awk returns the matched string if it finds a match.
The alternative above will produce the following output regardless of whether the check task has a match or not:
TASK: [Check whether /tmp/my.conf contains "127.0.0.1"] ***********************
ok: [localhost]
IMHO this is a better approach as you won't ignore other errors in your first task (e.g. if the specified file did not exist).
You can use the ansible lineinfile module, but note that it will add the line to the file if it does not exist.
- lineinfile: dest=/tmp/my.conf line='127.0.0.1' state=present
Another way is to use the replace module and then the lineinfile module.
The algorithm is close to the one used when you want to swap the values of two variables.
First, use the replace module to detect whether the line you are looking for is there, changing it to something else (like the same line plus something at the end).
Then, if replace reports a change, your line is there, so replace the marked line with the final line you want.
Otherwise, the line you are looking for is not there.
Example:
# Vars
- name: Set parameters
  set_fact:
    newline: "hello, i love ansible"
    lineSearched: "hello"
    lineModified: "hello you"
# Tasks
- name: Try to replace the line
  replace:
    dest: /dir/file
    replace: '{{ lineModified }} '
    regexp: '{{ lineSearched }}$'
    backup: yes
  register: checkIfLineIsHere
- name: Line is here, change it
  lineinfile:
    state: present
    dest: /dir/file
    line: '{{ newline }}'
    regexp: '{{ lineModified }}$'
  when: checkIfLineIsHere.changed
If the file contains "hello", it will become "hello you" then "hello, i love ansible" at the end.
If the file content doesn't contain "hello", the file is not modified.
With the same idea, you can do something if the lineSearched is here:
# Vars
- name: Set parameters
  set_fact:
    newline: "hello, i love ansible"
    lineSearched: "hello"
    lineModified: "hello you"
# Tasks
- name: Try to replace the line
  replace:
    dest: /dir/file
    replace: '{{ lineModified }} '
    regexp: '{{ lineSearched }}$'
    backup: yes
  register: checkIfLineIsHere
# If the line is here, I want to add something.
- name: If line is here, do something
  lineinfile:
    state: present
    dest: /dir/file
    line: '{{ newline }}'
    regexp: ''
    insertafter: EOF
  when: checkIfLineIsHere.changed
# But I still want this line in the file, then restore it
- name: Restore the searched line.
  lineinfile:
    state: present
    dest: /dir/file
    line: '{{ lineSearched }}'
    regexp: '{{ lineModified }}$'
  when: checkIfLineIsHere.changed
If the file contains "hello", the line will still contain "hello" and "hello, i love ansible" at the end.
If the file content doesn't contain "hello", the file is not modified.
You can use the file lookup plugin for this scenario.
To set a fact you can use in other tasks ... this works.
- name: Check whether /tmp/my.conf contains "127.0.0.1"
  set_fact:
    myconf: "{{ lookup('file', '/tmp/my.conf') }}"
  ignore_errors: yes
- name: Greet the world if /tmp/my.conf contains "127.0.0.1"
  debug: msg="Hello, world!"
  when: "'127.0.0.1' in myconf"
To check the file content as a condition of a task ... this should work.
- name: Greet the world if /tmp/my.conf contains "127.0.0.1"
  debug: msg="Hello, world!"
  when: "'127.0.0.1' in lookup('file', '/tmp/my.conf')"
Another solution, also useful for other purposes, is to loop over the contents of the file, line by line:
- name: get the file
  slurp:
    src: /etc/locale.gen
  register: slurped_file
- name: initialize the matches list
  set_fact:
    MATCHES: []
- name: collect matches in a list
  set_fact:
    MATCHES: "{{ MATCHES + [line2match] }}"
  loop: "{{ file_lines }}"
  loop_control:
    loop_var: line2match
  vars:
    decode_content: "{{ slurped_file.content | b64decode }}"
    file_lines: "{{ decode_content.split('\n') }}"
  when: '"BE" in line2match'
- name: report matches if any
  debug:
    msg: "Found {{ MATCHES | length }} matches\n{{ MATCHES }}"
  when: 'listlen | int > 0'
  vars:
    listlen: "{{ MATCHES | length }}"
This mechanism can be used to get a specific value from a matched line, for example by combining it with the Jinja regex_replace filter.
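A hedged sketch of that idea (the pattern and the LOCALES variable are illustrative, not from the answer above): extract the first whitespace-delimited field, e.g. the locale name, from each collected line:
- name: extract a value from each matched line
  set_fact:
    LOCALES: "{{ MATCHES | map('regex_replace', '^\\s*#?\\s*(\\S+)\\s.*$', '\\1') | list }}"
- debug:
    var: LOCALES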
