I would like to be able to create a bash function that can read the exit code of the command before the pipe. I'm not sure it is possible to have access to that.
echo "1" | grep 2 returns a 1 status code
echo "1" | grep 1 returns a 0 status code
Now I would like to add a third command to read the status, with a pipe:
echo "1" | grep 2 | echo $? will echo "0", even if the status code is 1.
I know I can use the echo "1" | grep 2 && echo "0" || echo "1", but I would prefer to write it using a pipe.
Is there any way to do that? (It would be even better if it worked in most shells, like bash, sh, and zsh.)
You're going to have to get the exit status before the next stage of the pipeline. Something like
exec 3> debug.txt
{ echo "1"; echo "$?" >&3; } | long | command | here
You can't (easily) encapsulate this in a function, since it would require passing a properly quoted string and executing it via eval:
debug () {
eval "$#"
echo $? >&3
}
# It looks easy in this example, but it won't take long to find
# an example that breaks it.
debug echo 1 | long | command | here
You have to write the exit status to a different file descriptor, otherwise it will interfere with the output sent to the next command in the pipeline.
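Reading the status back afterwards might look like this (a minimal sketch reusing the debug.txt scratch file from above; first_status is an illustrative name):
exec 3> debug.txt
{ echo "1"; echo "$?" >&3; } | grep 2
exec 3>&-                     # close fd 3 so the status is flushed to the file
first_status=$(< debug.txt)   # 0 here: the echo succeeded even though grep matched nothing
echo "first command exited with $first_status"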
In bash you can do this with the PIPESTATUS array variable:
echo "1" | grep 1
echo ${PIPESTATUS[0]} # returns 0
echo "1" | grep 2
echo ${PIPESTATUS[0]} # returns 0
echo "1" | grep 2
echo ${PIPESTATUS[1]} # returns 1
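Since the question also asks about other shells: zsh provides the equivalent lowercase pipestatus array, which is 1-indexed rather than 0-indexed:
echo "1" | grep 2
echo $pipestatus[1] # status of echo (0)
echo $pipestatus[2] # status of grep (1)
Plain POSIX sh has neither, so there you need the extra-file-descriptor trick from the other answer.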
Related
The following command works perfectly on the terminal but the same command fails in GitLab CI.
echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
The result is success,
but the same command in GitLab CI
$ echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
Cleaning up file based variables
ERROR: Job failed: command terminated with exit code 1
is simply failing. I have no idea why.
echo $SHELL returns /bin/bash in both.
Source of the issue
The behavior you observe is pretty standard given the "implied" set -e in a CI context.
To be more precise, your code consists of three commands:
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
And the grep "test" command returns a non-zero exit code (namely, 1). As a result, the script immediately exits and the last line is not executed.
Note that this behavior is desirable in a CI context: if some intermediate command fails in a complex script, we'd typically want the job to fail and to avoid running the next commands (which could be "off-topic" given the error).
You can reproduce this locally as well, by writing for example:
bash -e -c "
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
"
which is mostly equivalent to:
bash -c "
set -e
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
"
Relevant manual page
For more insight:
on set -e, see man 1 set
on bash -e, see man 1 bash
How to fix the issue?
You should just adopt another phrasing, avoiding a posteriori [[ $? -eq 0 ]] tests. The commands that may return a non-zero exit code without meaning failure should be "protected" by an if:
echo Hello >> foo.txt
if cat foo.txt | grep "test"; then
echo fail
false # if ever you want to "trigger a failure manually" at some point.
else
echo success
fi
Also, note that grep "test" foo.txt would be more idiomatic than cat foo.txt | grep "test" − which is precisely an instance of UUOC (useless use of cat).
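For example, the test could be written as follows (grep -q, which suppresses the matched output, is an optional extra here):
if grep -q "test" foo.txt; then
echo fail
else
echo success
fi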
I have no idea why.
GitLab executes each command one at a time and checks its exit status. When the exit status is not zero, the job is failed.
There is no "test" string inside foo.txt, so the command cat foo.txt | grep "test" exits with a non-zero status. Thus the job is failed.
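If you merely want the job to keep going past an expected non-zero status, a common idiom in CI scripts is to neutralize it explicitly:
grep "test" foo.txt || true
though the if phrasing from the previous answer is cleaner when you actually branch on the result.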
I'm trying to understand why, whenever I use function 2>&1 | tee -a $LOG, tee creates a subshell in which the function runs and which can't be exited by a simple exit 1 (if I'm not using tee, it works fine). Below is the example:
#!/bin/bash
LOG=/root/log.log
function first()
{
echo "Function 1 - I WANT to see this."
exit 1
}
function second()
{
echo "Function 2 - I DON'T WANT to see this."
exit 1
}
first 2>&1 | tee -a $LOG
second 2>&1 | tee -a $LOG
Output:
[root@linuxbox ~]# ./1.sh
Function 1 - I WANT to see this.
Function 2 - I DON'T WANT to see this.
So, if I remove the | tee -a $LOG part, it works as expected (the script exits in the first function).
Can you please explain how to overcome this and exit properly from the function while still being able to tee the output?
If you create a pipeline, the function is run in a subshell, and if you exit from a subshell, only the subshell will be affected, not the parent shell.
printPid(){ echo $BASHPID; }
printPid #some value
printPid #same value
printPid | tee #an implicit subshell -- different value
( printPid ) #an explicit subshell -- also a different value
If, instead of aFunction | tee you do:
aFunction > >(tee)
it'll be essentially the same, except aFunction won't run in a subshell, and thus will be able to affect the current environment (set variables, call exit, etc.).
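Applied to the script above, a sketch would be (note the order of redirections: stdout is pointed at the process substitution first, then stderr is duplicated onto it):
first > >(tee -a "$LOG") 2>&1
second > >(tee -a "$LOG") 2>&1
Now the exit 1 inside first terminates the script, so second never runs. One caveat: the shell does not wait for the tee process, so its output may arrive slightly after the script exits.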
Use PIPESTATUS to retrieve the exit status of the first command in the pipeline. Save it into a plain variable immediately, because PIPESTATUS is overwritten as soon as the next command runs:
first 2>&1 | tee -a $LOG; status=${PIPESTATUS[0]}; test $status -eq 0 || exit $status
second 2>&1 | tee -a $LOG; status=${PIPESTATUS[0]}; test $status -eq 0 || exit $status
You can tell bash to fail if anything in the pipeline fails with set -e -o pipefail:
$ cat test.sh
#!/bin/bash
LOG=~/log.log
set -e -o pipefail
function first()
{
echo "Function 1 - I WANT to see this."
exit 1
}
function second()
{
echo "Function 2 - I DON'T WANT to see this."
exit 1
}
first 2>&1 | tee -a $LOG
second 2>&1 | tee -a $LOG
$ ./test.sh
Function 1 - I WANT to see this.
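With pipefail, a pipeline's status is the last non-zero exit status among its commands rather than just the status of the last command, which is why the failing first now stops the script under set -e:
set -o pipefail
false | true; echo $? # prints 1 instead of 0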
I have this code:
error(){
time=$( date +"%T %F" )
echo "Start : ${time} : ${1}" 1>&2
result=$( eval "${1}" )
if [ `echo "${PIPESTATUS[@]}" | tr -s ' ' + | bc` -ne 0 ]; then
echo "command ${1} return ERROR" 1>&2
exit
else
if [ "${2}" != "silent" ]; then
echo "${result}"
fi
fi
}
I start testing the command:
error "ifconfig | wc -l" "silent"
Start : 14:41:53 2014-02-19 : ifconfig | wc -l
error "ifconfiggg | wc -l" "silent"
Start : 14:43:13 2014-02-19 : ifconfiggg | wc -l
./install-streamserver.sh: line 42: ifconfiggg: command not found
But I expect a different result. Example:
error "ifconfig" "silent"
Start : 14:44:52 2014-02-19 : ifconfig
Start : 14:45:40 2014-02-19 : ifconfiggg
./install-streamserver.sh: line 42: ifconfiggg: command not found
command ifconfiggg return ERROR (<<<<<<<<<<<< This message)
I don't have it, because when bash runs a command with eval, as in
eval "ifconfiggg | wc -l"
the ${PIPESTATUS[@]} array just contains "0" instead of the expected "1 0".
How can I fix this?
A pipeline's PIPESTATUS[] array only survives until the next command executes, and in your function the eval additionally runs inside the result=$( ... ) command substitution, which is a separate shell context. So the array has to be captured inside the eval'ed string itself. You can communicate it to the surrounding context by assigning it to a variable, say, PIPE, as follows:
$ eval 'ifconfiggg | wc -l; PIPE=${PIPESTATUS[@]}'
bash: ifconfiggg: command not found
0
$ echo $PIPE
127 0
Note the single quotes to prevent ${PIPESTATUS[@]} from expanding too early.
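With double quotes, ${PIPESTATUS[@]} would be expanded by the calling shell before eval ever runs, so PIPE would capture the stale, pre-eval value:
eval "ifconfiggg | wc -l; PIPE=${PIPESTATUS[@]}" # wrong: expands too early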
Wrapping this in yet another result=$(...) does not work, since this creates yet another shell context. I suggest instead something along the lines of
eval "${1}; "'PIPE=${PIPESTATUS[#]}' >result.out 2>result.err
# do something with $PIPE here
# do something with result.out or result.err here
Note the use of double quotes followed by single quotes.
Thanks @Jens for posting this information. I noticed for
eval "${1}; "'PIPE=${PIPESTATUS[@]}' >result.out 2>result.err
that it's better to use parentheses around the PIPESTATUS expansion. Otherwise it is interpreted as a plain string and the complete result ends up in ${PIPE[0]} only. So
eval "${1}; "'PIPE=(${PIPESTATUS[@]})' >result.out 2>result.err
is working as expected.
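Putting the pieces together, a sketch of the error() function rewritten along these lines (keeping the result.out/result.err scratch files from above; a summing loop replaces the original bc pipeline):
error(){
time=$( date +"%T %F" )
echo "Start : ${time} : ${1}" 1>&2
eval "${1}; "'PIPE=(${PIPESTATUS[@]})' >result.out 2>result.err
sum=0
for s in "${PIPE[@]}"; do sum=$(( sum + s )); done   # non-zero if any stage failed
if [ ${sum} -ne 0 ]; then
echo "command ${1} return ERROR" 1>&2
exit 1
elif [ "${2}" != "silent" ]; then
cat result.out
fi
}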
I'm writing a routine that will identify if a process stops running and will do something once the targeted process is gone.
I came up with this code (as a test for my future code):
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
echo $?
done;
echo DONE
My problem is that, for some reason, the loop never stops and keeps echoing 1 after I delete the file "aaa".
0
0 >>> I delete the file at that point (in another terminal)
1
1
1
1
.
.
.
I would expect the output to be "DONE" as soon as I delete the file...
What's the problem?
SOLUTION:
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
done;
echo DONE
The value of $? changes very easily. In the current version of your code, this line:
echo $?
prints the status of the previous command (grep) -- but then it sets $? to 0, the status of the echo command.
Save the value of $? in another variable, one that won't be clobbered next time you execute a command:
#!/bin/bash
value="aaa"
ls | grep $value
status=$?
while [ $status = 0 ];
do
sleep 5
ls | grep $value
status=$?
echo $status
done;
echo DONE
If the ls | grep aaa is intended to check whether a file named aaa exists, this:
while [ -f aaa ] ; ...
is a cleaner way to do it.
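Assuming the file really is named aaa (rather than merely containing aaa in its name), the whole script then shrinks to:
#!/bin/bash
while [ -f aaa ]; do
sleep 5
done
echo DONE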
$? is the return code of the last command executed; in your loop body that is the echo $?, which always succeeds.
You can rewrite that loop in much simpler way like this:
while [ -f aaa ]; do
sleep 5;
echo "sleeping...";
done
You ought not duplicate the command to be tested. You can always write:
while cmd; do ...; done
instead of
cmd
while [ $? = 0 ]; do ...; cmd; done
In your case, you mention in a comment that the command you are testing parses the output of ps. Although there are very good arguments that you ought not do that, and that the follow-on processing should be done by the parent of the command you are waiting for, we'll ignore that issue for the moment. You can simply write:
while ps -ef | grep -v "grep mysqldump" |
grep mysqldump > /dev/null; do sleep 1200; done
Note that I changed the order of your pipe, since grep -v returns success if it lets anything through. In this case I think the change is not strictly necessary, but I believe it is more readable. I've also discarded the output to clean things up a bit.
Presumably your objective is to wait until some filename containing the string $value is present in the local directory, not necessarily one specific filename.
Try:
#!/bin/bash
value="aaa"
while ! ls *$value*; do
sleep 5
done
echo DONE
Your original code failed because $? is filled with the return code of the echo command upon every iteration following the first.
BTW, if you intend to use ps instead of ls in the future, you will pick up your own grep unless you are clever. Use ps -ef | grep [m]ysqldump: the bracketed pattern still matches mysqldump, but not the grep command's own entry in the process list.
I'd like to send the result of a series of commands to a variable:
variable=$(a | few | commands)
However, the command substitution resets PIPESTATUS, so I can't inspect where it went wrong after the fact. One solution would be to use mktemp and put the result there temporarily:
variable_file=$(mktemp) || exit 1
a | few | commands > $variable_file
exit_codes="${PIPESTATUS[*]}"
variable=$(<$variable_file)
Is there a more elegant solution?
Kinda hacky but I think you could fudge it like this.
variable=$(a | few | commands; echo ": ${PIPESTATUS[*]}")
PIPESTATUS=(${variable##*: })
variable=${variable%:*}
variable=${variable%$'\n'}
Building on ephemient's answer, if we need the output of the piped commands stored without them being mixed in with the pipestatus codes, but we don't really care what the exit codes themselves are (just that they all succeeded), we can do:
variable=$(a | few | commands; [[ ${PIPESTATUS[*]} == "0 0 0" ]])
This checks the statuses of all the piped commands in the example above; if any of them is non-zero, $? is set to 1 (false).
If you want to exit with a different code instead of 1, you could capture the contents of PIPESTATUS[@], e.g. r_code=(${PIPESTATUS[@]}), and then use (exit ${r_code[2]}) instead of false.
The snippet below captures all the PIPESTATUS codes, ensures they are all 0, and if not, sets the exit code to the $? value of commands:
declare -a r_code
variable=$(a | few | commands
r_code=(${PIPESTATUS[@]})
[[ ${r_code[*]} == "0 0 0" ]] || (exit ${r_code[2]})
)
echo ${?} # echoes the exit code of `commands`
echo ${variable} # echoes only the output of a | few | commands