Can't "continue" if I pipe output to tee - bash

I have a bash script that does pretty much what I want using the following structure:
for x in 1 2 3
do
    {
        [[ $x -ne 2 ]] || continue
        echo $x
    } &>> foo.log
done
I need to change it so the output goes both to the terminal and the log file. This, however, doesn't work:
for x in 1 2 3
do
    {
        [[ $x -ne 2 ]] || continue
        echo $x
    } 2>&1 | tee -a foo.log
done
It looks like the pipe, by running the braced group in a subshell, prevents me from using continue.
Of course, I could rewrite the logic of my script without continue, but before I jump into that, I'm wondering if I'm missing a more straightforward way to achieve what I want.
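A quick way to confirm the subshell behavior (with default settings, i.e. without bash's lastpipe option): assignments made inside a piped group don't survive it.
x=1
echo foo | { x=2; }
echo $x   # still prints 1: the braced group ran in a subshell because of the pipe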

You could redirect the output to a process substitution.
for x in 1 2 3
do
    {
        [[ $x -ne 2 ]] || continue
        echo $x
    } 2>&1 > >(tee -a foo.log)
done |
# I suggest piping the output to e.g. `cat`, so that the output
# of the process substitution stays synchronized with the rest of the script
cat
But why not just redirect the output of the whole loop?
for x in 1 2 3; do
    [[ $x -ne 2 ]] || continue
    echo $x
done 2>&1 | tee -a foo.log
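For the loop above (assuming an initially empty foo.log and a hypothetical script name loop.sh), both destinations should end up with the same two lines:
$ ./loop.sh
1
3
$ cat foo.log
1
3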
You could also exit from the subshell. If you do that, I would suggest replacing { } with ( ), just to be safe in case you one day decide to remove the tee.
for x in 1 2 3
do
    {
        [[ $x -ne 2 ]] || exit
        echo $x
    } 2>&1 | tee -a foo.log
done
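One caveat with both tee variants: if you ever need to check for failures inside the braces, note that each pipeline reports tee's exit status, not the group's. Enabling pipefail makes the pipeline report the group's failure as well:
set -o pipefail   # a pipeline now fails if any element of it fails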

Related

Parse multiple echo values in bash script

I am trying to return a value from one script to another. However, in the child script there are multiple echoes, so I am not sure how to retrieve a specific one in the parent script: if I try return_val=$(./script.sh), then return_val will contain multiple lines of output. Any solution here?
script 1:
status=$(script2.sh)
if [ $status == "hi" ]; then
    echo "success"
fi
script 2:
echo "blah"
status="hi"
echo $status
Solution 1) For this specific case, you could take the last line printed by script2, using the tail -1 command. Like this:
script1.sh
#!/bin/bash
status=$( ./script2.sh | tail -1 )
if [ "$status" == "hi" ]; then
    echo "success"
fi
script2.sh
#!/bin/bash
echo "blah"
status="hi"
echo $status
The restriction is that it only works when the string you need to check is the last line printed by the called script.
Solution 2) If the previous solution doesn't apply to your case, you could also use an identifier and prefix the specific string that you want to check with it. Like you can see below:
script1.sh
#!/bin/bash
status=$( ./script2.sh | grep "^IDENTIFIER: " | cut -d':' -f 2 )
if [ $status == "hi" ]; then
    echo "success"
fi
script2.sh
#!/bin/bash
echo "blah"
status="hi"
echo "IDENTIFIER: $status"
The grep "^IDENTIFIER: " command will filter the strings from the called script, and the cut -d':' -f 2 will split the "IDENTIFIER: hi" string and take the second field, separated by the ':' character. Note that the field keeps its leading space (" hi"); the comparison still succeeds only because the unquoted $status is word-split, which strips the space.
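If you ever quote $status in the comparison, that leading space will make it fail; a variant that strips the whole prefix (a sketch using sed instead of grep and cut):
status=$( ./script2.sh | sed -n 's/^IDENTIFIER: //p' )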
You might capture the output of script2 into a bash array and access the element in the array you are interested in.
Contents of script2:
#!/bin/bash
echo "blah"
status="hi"
echo $status
Contents of script1:
#!/bin/bash
output_arr=( $(./script2) )
if [[ "${output_arr[1]}" == "hi" ]]; then
    echo "success"
fi
Output:
$ ./script1
success
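If any line of script2's output may itself contain spaces, the unquoted expansion above will split it across several array elements. Reading line-by-line with mapfile (bash 4+) avoids that; a sketch:
#!/bin/bash
# each element of 'lines' is one full output line of script2
mapfile -t lines < <(./script2)
if [[ "${lines[1]}" == "hi" ]]; then
    echo "success"
fi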
Script 1:
#!/bin/bash
# ed commands: print the first line of the file, then quit
cat > ed1 <<EOF
1p
q
EOF
next () {
    [[ -z $(ed -s status < ed1 | grep "hi") ]] && main
    [[ -n $(ed -s status < ed1 | grep "hi") ]] && end
}
main () {
    sleep 1
    next
}
end () {
    echo $(ed -s status < ed1)
    exit 0
}
# kick off the polling loop; it waits until the first line of 'status' contains "hi"
main
Script 2:
#!/bin/sh
echo "blah"
echo "hi" >> status

How can I pipe output, from a command in an if statement, to a function?

I can't tell if something I'm trying here is simply impossible or if I'm really lacking knowledge in bash's syntax. This is the first script I've written.
I've got a Nextcloud instance that I am backing up daily using a script. I want to log the output of the script as it runs to a log file. This is working fine, but I wanted to see if I could also pipe the Nextcloud occ command's output to the log file too.
I've got an if statement here checking if the file scan fails:
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
This works fine and I am able to handle the error if the system cannot execute the command. The error string above is sent to this function:
Print()
{
    if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
        echo "$1" | tee -a "$log_file"
    elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
        echo "$1" >> "$log_file"
    elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
        echo "$1"
    fi
}
How can I make it so the output of the occ command is also piped to the Print() function so it can be logged to the console and log file?
I've tried piping the command after ! using | Print without success.
Any help would be appreciated, cheers!
The Print function doesn't read standard input so there's no point piping data to it. One possible way to do what you want with the current implementation of Print is:
if ! occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
Print "'occ' output: $occ_output"
Since there is only one line in the body of the if statement you could use || instead:
occ_output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1) \
    || Print "Error: Failed to scan files. Are you in maintenance mode?"
Print "'occ' output: $occ_output"
The 2>&1 causes both standard output and error output of occ to be captured to occ_output.
Note that the body of the Print function could be simplified to:
[[ $quiet_mode == No ]] && printf '%s\n' "$1"
(( logging )) && printf '%s\n' "$1" >> "$log_file"
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why I replaced echo "$1" with printf '%s\n' "$1".
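As a quick sanity check of the simplified body (hypothetical values):
logging=1 quiet_mode=No log_file=/tmp/occ.log   # hypothetical settings
Print "hello"   # prints hello to stdout and appends it to /tmp/occ.log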
How's this? A bit unorthodox perhaps.
Print()
{
    case $# in
        0) cat;;
        *) echo "$@";;
    esac |
    if [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "No" ]; then
        tee -a "$log_file"
    elif [[ "$logging" -eq 1 ]] && [ "$quiet_mode" = "Yes" ]; then
        cat >> "$log_file"
    elif [[ "$logging" -eq 0 ]] && [ "$quiet_mode" = "No" ]; then
        cat
    fi
}
With this, you can either
echo "hello mom" | Print
or
Print "hello mom"
and so your invocation could be refactored to
if ! sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all; then
    echo "Error: Failed to scan files. Are you in maintenance mode?"
fi |
Print
The obvious drawback is that piping into a function loses the exit code of any failure earlier in the pipeline.
For a more traditional approach, keep your original Print definition and refactor the calling code to
if output=$(sudo -u "$web_user" "$nextcloud_dir/occ" files:scan --all 2>&1); then
    : # nothing to do on success
else
    Print "error $?: $output"
    Print "Error: Failed to scan files. Are you in maintenance mode?"
fi
I would imagine that the error message will be printed to standard error, not standard output; hence the addition of 2>&1
I included the error code $? in the error message in case that would be useful.
The sending and receiving ends of a pipe must be processes, typically represented by an executable command. An if statement is not a process. You can of course put such a statement into a process. For example,
echo a | (
    if true
    then
        cat
    fi )
causes cat to write a to stdout, because the parentheses put it into a child process.
UPDATE: As was pointed out in a comment, the explicit subshell is not needed. One can also write
echo a | if true
then
    cat
fi

Exit script with error code based on loop operations in bash

I have a wrapper script for a CI pipeline which works great, but it always returns 0 even when subcommands in a for loop fail. Here is an example:
#!/bin/bash
file_list=("file1 file2 file_nonexistant file3")
for file in $file_list
do
    cat $file
done
>./listfiles.sh
file1 contents
file2 contents
cat: file_nonexistant: No such file or directory
file3 contents
>echo $?
>0
Since the last iteration of the loop is successful, the entire script exits with 0.
What I want is for the loop to continue on failure, and for the script to exit with 1 if any of the loop iterations returned an error.
What I have tried so far:
set -e but it halts the loop and exits when an iteration fails
replaced done with done || exit 1 - no effect
replaced cat $file with cat $file || continue - no effect
Alternative 1
#!/bin/bash
for i in $(seq 1 6); do
    if test $i == 4; then
        z=1
    fi
done
if [[ $z == 1 ]]; then
    exit 1
fi
With files
#!/bin/bash
touch ab c d e   # note: 'ab' is a single file, so 'a' and 'b' do not exist
for i in a b c d e; do
    cat $i
    if [[ $? -ne 0 ]]; then
        fail=1
    fi
done
if [[ $fail == 1 ]]; then
    exit 1
fi
The special parameter $? holds the exit value of the last command. A value above 0 represents a failure. So just store that in a variable and check for it after the loop.
The $? parameter actually holds the exit status of the previous pipeline, if present. If the command is killed with a signal then the value of $? will be 128+SIGNAL. For example 128+2 in case of SIGINT (ctrl+c).
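For example:
sleep 30     # press ctrl+c while this runs
echo $?      # prints 130, i.e. 128+2 for SIGINT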
Overkill solution with trap
#!/bin/bash
trap ' echo X $FAIL; [[ $FAIL -eq 1 ]] && exit 22 ' EXIT
touch ab c d e   # 'a' and 'b' are again missing, so their cat calls fail
for i in c d e a b; do
    cat $i || export FAIL=1
    echo F $FAIL
done
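A compact variant of the same idea, applied to the file list from the question (a sketch):
#!/bin/bash
rc=0
for file in file1 file2 file_nonexistant file3; do
    cat "$file" || rc=1   # remember the failure, keep looping
done
exit "$rc"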

Check if output file is generated and populated with expected logs - BASH

A process (in background) should create a file (e.g. result.txt) and populate it with 5 log lines.
I need to check: 1) that the file exists, and 2) that all the logs (5 lines) are stored.
If these conditions are not satisfied within xxx seconds, the process has failed: print "FAILED" in the terminal; otherwise print "SUCCEED".
I think I need to use a while loop, but I don't know how to implement these conditions
N.B.: the lines are appended to the file (asynchronously) and I don't have to check the content of the logs, just that all of them are stored.
This one checks the log and waits 2 seconds before failing:
#!/bin/bash
log_success() {
    [[ $(tail -n "$2" "$1" 2> /dev/null | wc -l) -eq "$2" ]]
}
log_success 'file.log' 5 || sleep 2
if log_success 'file.log' 5; then
    echo "success"
else
    echo "fail"
fi
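If you need the "within xxx seconds" requirement from the question, you can poll with a deadline instead; a sketch, with timeout=10 as an assumed value:
#!/bin/bash
file=result.txt
timeout=10                          # assumed; adjust as needed
deadline=$(( SECONDS + timeout ))   # SECONDS is maintained by bash
while (( SECONDS < deadline )); do
    # succeed as soon as the file exists and holds at least 5 lines
    if [[ -e $file ]] && (( $(wc -l < "$file") >= 5 )); then
        echo "SUCCEED"
        exit 0
    fi
    sleep 1
done
echo "FAILED"
exit 1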
Well here's my draft for that:
FILE=/path/to/something
for (( ;; )); do
    if [[ -e $FILE ]] && WC=$(wc -l < "$FILE") && [[ WC -ge 5 ]]; then
        : # Valid.
    fi
done
Or
FILE=/path/to/something
for (( ;; )); do
    if [[ ! -e $FILE ]]; then
        : # File doesn't exist. Do something.
    elif WC=$(wc -l < "$FILE") && [[ WC -ge 5 ]]; then
        : # Valid.
    else
        : # File doesn't contain 5 or more lines or is unreadable. Invalid.
    fi
done
This one could still have problems with race conditions though.

Do while loop. Make exception for the 1st record and continue on the loop as usual

I have a do-while loop that gets filenames and runs commands. Pretty standard stuff. What I want to do is sort the files that I feed to the while loop, and then run command1 for the 1st file and command2 for the rest of them.
find $dir -iname "$search*" |<some commands> | sort -nr |
while read filename; do
    # if it's the very 1st file; head -1 would do that
    echo "command1 > log > 2>&1& " >> appendfile
    echo "if [ $? != 0 ] then ; exit 1 fi " >> appendfile
    # for all other files do this
    echo "command2 > log > 2>&1& " >> appendfile
done
Now you see what I am doing too: I am writing stuff to appendfile.ksh, which will be run later on. I am choosing the 1st file, the smallest in size, as a "test file" to run command1 on. If that job abends, exit; else continue processing the rest of the files.
I am trying to find a way to give the 1st file that enters the while loop slightly special treatment.
#!/bin/bash
f=1
find . -name "*.txt" | while IFS= read -r filename
do
    if [ $f -eq 1 ]; then
        echo First file
    else
        echo Subsequent file
    fi
    ((f++))
done
You can do it like this:
first=""
while read filename; do
if [ -z "$first" ]; then
first="$filename"
# if its the very 1st file . head - 1 would do that
echo "command1 > log > 2>&1& " >> appendfile
echo "if [ $? != 0 ] then ; exit 1 fi " >> appendfile
else
# for all other files do this
echo "command2 > log > 2>&1& " >> appendfile
fi
done < <(find "$dir" -iname "$search*" | <some commands> | sort -nr)
Use a compound command to consume the first file name separately, before the while loop starts. This has the benefit of allowing you to append all output of the compound command to appendfile with a single redirection, without needing to redirect each echo command separately. Be sure to note the corrected redirection syntax for each command, and the single quotes, which keep $? from being expanded while appendfile is being generated.
find $dir -iname "$search*" |<some commands> | sort -nr | {
    # This read gets the first line from the pipeline
    read filename
    echo 'command1 > log 2>&1'
    echo 'if [ $? != 0 ]; then exit 1; fi'
    # These reads will get the remaining lines
    while read filename; do
        echo 'command2 > log2 2>&1'
        echo 'if [ $? != 0 ]; then exit 1; fi'
    done
} >> appendfile # If appendfile isn't written to before, you can just use >
And one more bit of unsolicited advice: you can shorten your script by doing
command1 > log 2>&1 || exit 1
instead of using an explicit if statement.
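Applied to the generator above, that collapses each pair of echo lines into one (a sketch):
find $dir -iname "$search*" |<some commands> | sort -nr | {
    read filename
    echo 'command1 > log 2>&1 || exit 1'
    while read filename; do
        echo 'command2 > log2 2>&1 || exit 1'
    done
} >> appendfile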
