Ansible echo into file

I'm struggling with some Ansible/YAML syntax here. How do I echo multiple lines (in append mode) into a file? I also can't use the copy module (with the content arg) because the text has to be appended.
This code:
- name: Write backup script for each app
  shell: echo | '
    line one
    line two
    line three
    ' >> /manager/backup.sh
errors out with this nonsensical message:
"stderr": "/bin/sh: line one line two line three : command not found"
I'm using the pipe because I think it's how you tell Ansible you want multiple lines (with preserved formatting), but maybe it's being used as a shell pipe.

You want something like this:
- name: Write backup script for each app
  shell: |
    echo 'line one
    line two
    line three' >> /manager/backup.sh
or explicitly specifying newlines with printf:
- name: Write backup script for each app
  shell: printf 'line one\nline two\nline three\n' >> /manager/backup.sh
The error message you get makes perfect sense: you tried to pipe (|) the output of the echo command into the line one line two line three command. Since the shell cannot find the latter, it reports that the command does not exist. It's the same as executing the following directly in a shell:
echo | "line one line two line three" >> /manager/backup.sh
YAML uses | to indicate a multi-line block scalar, but only when it appears directly after the key, not anywhere inside the value.
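You can reproduce the misparse outside Ansible: after YAML folds the scalar onto one line, the shell sees echo piped into a quoted string used as a command name (a sketch; the error wording varies slightly between shells):

```shell
# What the shell effectively executed after YAML folding: the quoted string
# becomes the command name on the right side of the pipe.
err=$( { echo | 'line one line two line three'; } 2>&1 )
echo "$err"
```

The captured stderr contains the same "command not found" complaint as the Ansible run.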

I fixed it with:
ansible node -i hosts -m shell -a "echo 'line one\nline two\nline three' | sudo tee -a /tmp/test.file;"
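An alternative that sidesteps shell quoting altogether: Ansible's blockinfile module appends a multi-line block to a file (a sketch; the path and marker values here are illustrative):

```yaml
- name: Append backup lines for each app
  blockinfile:
    path: /manager/backup.sh
    marker: "# {mark} backup lines"
    block: |
      line one
      line two
      line three
```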

Related

Reading the second (or subsequent) line in Korn Shell

My OS is AIX (7200-05-03-2136) and my Korn Shell version is ksh88 (Version M-11/16/88f), but I think my question doesn't depend on versions.
Consider a single-line output of a command. I can easily put this into a variable via "read":
command | read variable
Now, suppose the command has a two-line output. Is there a way to capture only the second line into a variable? It would be easy to use an external program, e.g.:
command | sed '1d' | read variable
But I would like to avoid that and find a pure-shell solution. I have tried the following variations:
command | { read -r junk ; read -r variable }
command | { IFS=\n read junk ; read variable }
command | IFS='\n' read junk variable
But none of these work.
Assign everything to a variable first. Next you can print or assign the second line.
variable=$(echo "line 1
line 2")
echo "${variable#*$'\n'}"
# or
variable="${variable#*$'\n'}"
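The ${variable#*$'\n'} expansion relies on $'…' quoting, which ksh88 predates. A portable sketch that also works in POSIX sh stores a literal newline in a helper variable (nl is mine, not from the original answer):

```shell
# Put a literal newline in a variable, then strip everything up to and
# including the first newline to keep only the second line.
nl='
'
out=$(printf 'line 1\nline 2')
second=${out#*"$nl"}
echo "$second"
```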

WSL2 interop issue causes premature exit from read loop in shell script

Summary
Using a read loop that runs a Windows executable in a WSL shell script causes it to exit the loop after the first iteration.
Details
I've been quite baffled by what appears to be an interoperability issue with running Windows executables from a shell script in WSL2. The following while loop should print 3 lines, but it only prints "line 1". It has been tested on Ubuntu 20.04 in dash, bash, and zsh.
while read -r line; do
    powershell.exe /C "echo \"${line}\""
done << EOF
line 1
line 2
line 3
EOF
This issue also occurs when reading lines from a file instead of a heredoc, even if that file has Windows line endings. Note that if powershell were changed to /bin/bash or any other native executable, this would print 3 lines. Also, powershell could be replaced with any Windows executable (cmd.exe, explorer.exe, etc.) and it would still only run the first iteration. This appears to be a problem with read specifically, since this loop works fine:
for line in "line 1" "line 2" "line 3"
do
    powershell.exe /C "echo \"${line}\""
done
Work-around
Thanks to this post I have discovered that a work-around is to pipe through a dummy command: echo "" | cmd.exe /C "echo \"${line}\"". A note about this fix: only piping seems to work. Redirecting the output or running it through another layer of bash does not: /bin/bash -c "cmd.exe /C \"echo ${line}\"". I am partially posting this for improved visibility for anyone hitting this issue in the future, but I am still curious whether anyone has insight into why this issue exists (perhaps due to line endings?). Thank you!
Short Answer:
A slightly-improved solution over echo "" | is to redirect stdin from /dev/null. This avoids potential newline issues from the echo, but there are other solutions as well:
while read -r line; do
    powershell.exe /C "echo \"${line}\"" < /dev/null
done << EOF
line 1
line 2
line 3
EOF
Explanation:
Well, you already had a solution, but what you really wanted was the explanation.
Jetchisel and MarkPlotnick are on the right track in the comments. This appears to be the same root cause (and solution) as in this question about ssh. To replicate your example with ssh (assuming a key in ssh-agent so that no password prompt is generated):
while read -r line; do
    ssh hostname echo ${line}
done << EOF
line 1
line 2
line 3
EOF
You will see the same results as with PowerShell -- only "line 1" displays.
In both cases, the first line goes to the read statement, but the subsequent lines remain on stdin, where they are consumed by powershell.exe (or ssh) itself.
You can see this "proven" in PowerShell through a slight modification to your script:
while read -r line; do
    powershell.exe -c "echo \"--- ${line} ---\"; \$input"
done << EOF
line 1
line 2
line 3
EOF
Results in:
--- line 1 ---
line 2
line 3
The follow-up question is, IMHO, why bash doesn't have this issue. The answer is that PowerShell seems to always consume whatever stdin is available at the time of invocation and adds it to the $input magic variable. Bash, on the other hand, does not consume the additional stdin until explicitly asked:
while read -r line; do
    bash -c "echo --- \"${line}\" ---; cat /dev/stdin"
done << EOF
line 1
line 2
line 3
EOF
Generates the same results as the previous PowerShell example:
--- line 1 ---
line 2
line 3
Ultimately, the main solution with PowerShell is to provide a separate stdin that is consumed before your desired input. echo "" | can do this, but be careful:
while read -r line; do
    echo "" | powershell.exe -c "echo \"--- ${line} ---\"; \$input"
done << EOF
line 1
line 2
line 3
EOF
Results in:
--- line 1 ---
--- line 2 ---
--- line 3 ---
< /dev/null doesn't have this issue, but you could also handle it with echo -n "" | instead.
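Another work-around, not from the original answer but following the same reasoning: keep the loop's input off stdin entirely by reading from a separate file descriptor, so a stdin-hungry command in the body has nothing to steal. A sketch, with plain cat standing in for powershell.exe/ssh:

```shell
#!/usr/bin/env bash
# Feed the loop on fd 3; the heredoc is attached to fd 3 with `3<<`.
exec </dev/null   # make the demo deterministic: the script's own stdin is empty
seen=""
while read -r line <&3; do
    cat                    # stands in for powershell.exe/ssh; reads (empty) stdin
    echo "--- $line ---"
    seen="$seen$line,"
done 3<<'EOF'
line 1
line 2
line 3
EOF
```

All three lines print, because the stdin-consuming command never sees the heredoc.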

shell process and compare array element

There is a file called test.txt that contains:
ljlkfjdslkfldjfdsajflkjf word:test1
dflkdjflkdfdjls word:test2
dlkdj word:test3
word:test4
word:NewYork
dljfldflkdflkdjf word:test7
djfkd word:young
dkjflke word:lisa
amazonwle word:NewYork
dlksldjf word:test10
Now all we want is to get the string after the colon on each line and, if the same value occurs more than once, print it -- in this case "NewYork".
Here is the script. It lists the elements, but when I try to push them into an array and compare, it fails. Please let me know my mistakes.
#!/usr/bin/sh
input="test.txt"
cat $input | while read line; do output= $(echo $line | cut -d":" -f2); done
for (( i = 0 ; i < "${#output[@]}" ; i++ ))
{
    echo ${output[i]}
}
Error obtained:
./compare.sh
./compare.sh: line 11: test1: command not found
./compare.sh: line 11: test2: command not found
./compare.sh: line 11: test3: command not found
./compare.sh: line 11: test4: command not found
./compare.sh: line 11: NewYork: command not found
./compare.sh: line 11: raghav: command not found
./compare.sh: line 11: young: command not found
./compare.sh: line 11: lisa: command not found
./compare.sh: line 11: NewYork: command not found
./compare.sh: line 11: test10: command not found
output= $(..) first executes the command inside $(..) and grabs its output. It then sets the variable output to an empty string (as if you had written output=""), exports that variable, and executes the output of $(..) as a command. Remove the space after =.
You are setting output on the right side of a pipe, inside a subshell. The changes will not be visible outside -- output is unset once the pipe terminates. Use redirection instead: while ... done < file.
And output is not an array but a normal variable. There is no ${output[i]} (well, except ${output[0]}), as it's not an array (and output is unset, as explained above). Append elements to an array with output+=("$(...)").
#!/usr/bin/sh may not be bash, and sh is not guaranteed to support bash arrays. To use bash extensions, explicitly use bash with a bash shebang: #!/bin/bash.
Now, stylistic issues:
The for ... { ... } form has been supported in bash since forever, but it's an undocumented syntax kept mainly for people used to programming in C, and best avoided. Prefer the standard do ... done.
The expansions $input and ${output[i]} are unquoted.
read ignores leading and trailing whitespace and also interprets \ backslash sequences (see the -r option to read).
echo $line makes $line undergo word splitting -- runs of whitespace are replaced by a single space.
And using cut after read can be written more simply as read alone, with IFS=: doing the splitting. Also, if you're using cut anyway, you could just run cut -d: -f2 file on the whole file instead of cutting one line at a time.
Re-read a basic bash introduction -- note that bash is space-aware; spaces around = matter. Read the BashFAQ entry on reading a file field by field, read about subshells and environments, read the BashFAQ entry "I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?", and an introduction to bash arrays.
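Putting the points above together, here is a sketch of a corrected script (bash shebang, IFS=: splitting on read, redirection instead of a pipe, output+=() to build the array). The sample data is inlined via a heredoc here; with the real file, use done < test.txt instead:

```shell
#!/bin/bash
# Collect the field after ":" from each line into a bash array,
# then print any value that occurs more than once.
output=()
while IFS=: read -r _ value; do
    output+=("$value")
done <<'EOF'
ljlkfjdslkfldjfdsajflkjf word:test1
word:NewYork
amazonwle word:NewYork
dlksldjf word:test10
EOF
dup=$(printf '%s\n' "${output[@]}" | sort | uniq -d)   # keep only duplicates
echo "$dup"
```

With the sample data this prints NewYork, the only repeated value.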

Bash preserve whitespaces and newlines from file content to variable

I have this code
TOKEN=$(cat ./config/token)
echo "$TOKEN"
cat > variables.env <<EOF
TOKEN=`echo "$TOKEN"`
EOF
I am trying to get the content of a file and write it to a new file, prefixed by some text. The first echo in the console prints the output I want, keeping the whitespace and newlines.
However, in the new file the output is just the first line of the original string, while I'd like the same output I see in the console with the first echo.
Use printf %q (in ksh or bash) to escape content in such a way that it will always evaluate back to its literal value:
printf 'TOKEN=%q\n' "$(<./config/token)" >variables.env
$(<file) is a ksh and bash extension which acts as a more efficient replacement for $(cat file) (as the regular command substitution needs to fork off a subprocess, set up a FIFO, and spawn an external copy of /bin/cat, whereas the $(<file) form simply tells the shell to read the file directly).
This way a token containing an otherwise-hostile string such as $(rm -rf ~), or content that could simply be expanded as a variable ($$), will be emitted as literal content.
Providing an explicit example of how this behaves:
printf '%s\n' "first line" "second line" >token # write two lines to the file "token"
printf 'TOKEN=%q\n' "$(<token)" >variables.env # write a shell command which assigns those
# two lines to a variable to variables.env
source variables.env # execute variables.env in the current shell
echo "$TOKEN" # emit the value of TOKEN, as given in the current shell
...when run with bash, will emit the exact output:
first line
second line
...after writing the following (with bash 3.2.48; may vary with other releases) to variables.env:
TOKEN=$'first line\nsecond line'
Useless use of echo
This is what you could write:
cat > variables.env <<EOF
TOKEN=${TOKEN}
EOF
You are doing it in a very convoluted way; there are easier methods:
sed '1s/./TOKEN=&/' file > newfile
will insert TOKEN= at the start of the first line. This has the additional benefit of not modifying empty files (at least one character must exist in the original file). If that's not intended, you can use an unconditional insert.
You can do:
echo "TOKEN=" > newfile && cat ./config/token >> newfile
>> appends to a file.

Printing Nth line of file acting strangely with text files from Windows

I have two files, both of which appear to my eyes as follows.
a
a
The difference is that I created one of them with vim and the other with the wine version of Notepad. I wrote the following script to print each line of these files one at a time, more or less emulating cat (that's not my end goal, of course, but it's the simplest example I've thought of).
#!/usr/bin/env bash
readarray -t list_file < "$1"
for line in "${list_file[@]}"
do
    echo "line content: \"$line\""
done
Expectedly, with the file created by vim (5 bytes as expected: a[newline][newline]a[newline]) as $1, it outputs this.
line content: "a"
line content: ""
line content: "a"
Unexpectedly, with the file created by Notepad (It's 6 bytes; I'm not sure why.) as $1, it outputs this. Why does it do this?
"ine content: "a
"ine content: "
line content: "a"
I also tried doing this completely differently, but the following script has exactly the same problem.
#!/usr/bin/env bash
for line_number in $(eval echo {1..$(wc -l < "$1")})
do
    echo "line content: $(sed -n "${line_number}p" "$1")"
done
What is the matter with these scripts that causes them to behave like this with the Notepad-created file?
The file created by vim has lines ending with LF, as on Unix. The file created with Notepad has lines ending with CR LF, as on DOS/Windows. bash uses LF as its line delimiter, so when it reads from the second file it leaves the CR at the end of each line. When you echo this character, it causes the cursor to return to the left margin, without advancing to the next line.
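A common fix is to strip the trailing CR from each element after readarray, so DOS (CRLF) files behave like Unix ones. A sketch, recreating the 6-byte Notepad-style file in a temp file:

```shell
#!/usr/bin/env bash
# Build the CRLF test file, read it, then remove one trailing CR per line.
tmp=$(mktemp)
printf 'a\r\n\r\na' > "$tmp"          # a[CR][LF][CR][LF]a -- 6 bytes
readarray -t list_file < "$tmp"
clean=()
for line in "${list_file[@]}"; do
    clean+=("${line%$'\r'}")          # strip a trailing CR, if present
done
rm -f "$tmp"
for line in "${clean[@]}"; do
    echo "line content: \"$line\""
done
```

This prints the same three clean lines as the vim-created file.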
