I'm trying to assign the exit code of the last command in a pipeline to a variable, but it is not giving the expected result. Essentially I'm grepping a variable to see if it ends with '-SNAPSHOT' or not. So if I try this:
export PROJECT_VERSION=1.0.0
echo ${PROJECT_VERSION} | grep \\-SNAPSHOT$
And then run echo $?, the result is 1 as expected (no match found).
If I then add the echo $? to the end of the pipe:
echo ${PROJECT_VERSION} | grep \\-SNAPSHOT$ | echo $?
The result then becomes 0.
How can I get the exit result of the grep \\-SNAPSHOT so that I can assign it to a variable?
The exit status is in $?.
echo ${PROJECT_VERSION} | grep \\-SNAPSHOT$
variable="$?"
The following command works perfectly on the terminal but the same command fails in GitLab CI.
echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
and the output is success,
but the same command in GitLab CI
$ echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
Cleaning up file based variables
ERROR: Job failed: command terminated with exit code 1
is simply failing. I have no idea why.
echo $SHELL returns /bin/bash in both.
Source of the issue
The behavior you observe is pretty standard given the "implied" set -e in a CI context.
To be more precise, your code consists of three commands:
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
And the grep "test" command returns a non-zero exit code (namely, 1). As a result, the script immediately exits and the last line is not executed.
Note that this feature is typical in a CI context, because if some intermediate command fails in a complex script, we'd typically want to get a failure, and avoid running the next commands (which could potentially be "off-topic" given the error).
You can reproduce this locally as well, by writing for example:
bash -e -c "
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
"
which is mostly equivalent to:
bash -c "
set -e
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
"
Relevant manual page
For more insight:
on set -e, see help set (set is a shell builtin, also documented in man 1 bash)
on bash -e, see man 1 bash
How to fix the issue?
You should just adopt another phrasing, avoiding after-the-fact [[ $? -eq 0 ]] tests: the commands that may return a non-zero exit code without meaning failure should be "protected" by some if:
echo Hello >> foo.txt
if cat foo.txt | grep "test"; then
    echo fail
    false  # if ever you want to "trigger a failure manually" at some point
else
    echo success
fi
Also, note that grep "test" foo.txt would be more idiomatic than cat foo.txt | grep "test", which is precisely an instance of UUOC (useless use of cat).
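Putting both remarks together, a minimal sketch of the errexit-safe version (same foo.txt as above):
echo Hello >> foo.txt
if grep "test" foo.txt; then
    echo fail
else
    echo success
fi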
I have no idea why.
Gitlab executes each command one at a time and checks its exit status. When the exit status is not zero, the job fails.
There is no string test inside foo.txt, so the command cat foo.txt | grep "test" exits with a nonzero status. Thus the job fails.
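If a non-matching grep should not fail the job at all, a common idiom is to absorb the nonzero status explicitly, for example:
grep "test" foo.txt || true   # overall exit status is 0 whether or not a match is found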
I want to extract a matched regex from this file:
abc
de
{my_pattern.global} # want to extract my_pattern.global
# without curly brackets
123
and assign it to a variable in a shell script:
#!/bin/bash
l_config_file="my_file.cfg"
l_extracted_pattern=""
l_match_pattern="(?<={).+\.global(?=})"
l_my_dir=$(pwd)
echo "grep -oP '$l_match_pattern' $l_my_dir/$l_config_file"
echo "debug 1 - exit code: $?"
grep -oP '$l_match_pattern' $l_my_dir/$l_config_file
echo "debug 2 - exit code: $?"
sh -c "grep -oP '$l_match_pattern' $l_my_dir/$l_config_file"
echo "debug 3 - exit code: $?"
$l_extracted_pattern = "$(sh -c "grep -oP '$l_match_pattern' $l_my_dir/$l_config_file")"
echo "debug 4 - exit code: $?"
echo $l_extracted_pattern
Output:
grep -oP '(?<={).+\.global(?=})' /tmp/my_file.cfg
debug 1 - exit code: 0
debug 2 - exit code: 1
my_pattern.global
debug 3 - exit code: 0
./sto.sh: line 14: =: command not found.
debug 4 - exit code: 127
As you can see, the grep command works well (when executed via sh -c) but fails when trying to assign the output to the variable $l_extracted_pattern, with exit code 127. That means the shell doesn't recognise the command. I suspect the regex is the cause of the trouble here, but I couldn't figure out what in particular. What's going wrong?
Even though I had already assigned it before:
l_extracted_pattern=""
and tried to overwrite it later:
$l_extracted_pattern = "$(sh -c "grep -oP '$l_match_pattern' $l_my_dir/$l_config_file")"
That was a mistake. A variable assignment in bash must not have a $ before the variable name, not even when the variable was already assigned earlier, and there must be no spaces around the =. Changed it to:
l_extracted_pattern="$(sh -c "grep -oP '$l_match_pattern' $l_my_dir/$l_config_file")"
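For reference, the sh -c wrapper isn't needed either: the direct grep at debug 2 failed only because the single quotes around $l_match_pattern prevent variable expansion, so grep searched for the literal string. A minimal corrected sketch:
l_match_pattern='(?<={).+\.global(?=})'
l_extracted_pattern="$(grep -oP "$l_match_pattern" "$l_my_dir/$l_config_file")"
echo "$l_extracted_pattern"   # prints: my_pattern.global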
I have a bash shell script doing something like
#!/bin/bash
# construct regex from input
# set FILE according to input file
egrep "${regex}" "${FILE}" | doing stuff | sort
I want this script to write the output (a list of new line separated matches) of the command to stdout if matches are found (which it is doing). If no matches are found it needs to write out an error message to stderr and exit with exit status 3.
I tried this
#!/bin/bash
# construct regex from input
# set FILE according to input file
function check () {
    if ! read > /dev/null
    then
        echo "error message" 1>&2
        exit 3
    fi
}
egrep "${regex}" "${FILE}" | doing stuff |
sort | tee >(check)
Now the correct error message is written out but the exit status "cannot escape the subshell"; the outer script is still exiting with exit status 0.
I also tried something like
#!/bin/bash
# construct regex from input
# set FILE according to input file
if ! egrep "${regex}" "${FILE}" | doing stuff | sort
then
echo "error message" 1>&2
exit 3
fi
But here I have the problem that the exit status of the pipe is that of its last command (sort), which exits with status 0 even when grep found nothing.
How can I get my desired exit status 3 and error message while keeping the output for normal execution and without doing all the stuff twice?
EDIT:
I can solve the problem by using
#!/bin/bash
# construct regex from input
# set FILE according to input file
if ! egrep "${regex}" "${FILE}" | doing stuff | sort | grep .
then
echo "error message" 1>&2
exit 3
fi
However, I am not sure this is the best way, since the commands in a pipe run in parallel...
I would use the PIPESTATUS to check the exit code of egrep:
#!/bin/bash
# construct regex from input
# set FILE according to input file
egrep "${regex}" "${FILE}" | doing stuff | sort
if [[ ${PIPESTATUS[0]} != 0 ]]; then
    echo "error message" 1>&2
    exit 3
fi
Some context:
${PIPESTATUS[@]} is just an array which contains the exit code of every command in the pipe. $? will just give you the exit code of the last command in the pipe.
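For example:
false | true | true
echo "${PIPESTATUS[@]}"   # prints: 1 0 0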
I would like to be able to create a bash function that can read the exit code of the command before the pipe. I'm not sure it is possible to have access to that.
echo "1" | grep 2 returns a 1 status code
echo "1" | grep 1 returns a 0 status code
Now I would like to add a third command to read the status, with a pipe:
echo "1" | grep 2 | echo $? will echo "0", even if the status code is 1.
I know I can use the echo "1" | grep 2 && echo "0" || echo "1", but I would prefer to write it using a pipe.
Is there any way to do that? (It would be even better if it worked in most shells, like bash, sh, and zsh.)
You're going to have to get the exit status before the next stage of the pipeline. Something like
exec 3> debug.txt
{ echo "1"; echo "$?" >&3; } | long | command | here
You can't (easily) encapsulate this in a function, since it would require passing a properly quoted string and executing it via eval:
debug () {
    eval "$@"
    echo $? >&3
}
# It looks easy in this example, but it won't take long to find
# an example that breaks it.
debug echo 1 | long | command | here
You have to write the exit status to a different file descriptor, otherwise it will interfere with the output sent to the next command in the pipeline.
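For instance, here is a concrete run of the same fd-3 trick with a failing first command (reusing debug.txt as the scratch file):
exec 3> debug.txt
{ grep 2 <<< "1"; echo "$?" >&3; } | cat   # grep finds nothing and exits 1
exec 3>&-
read -r status < debug.txt
echo "$status"   # prints: 1 (grep's exit status, not cat's)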
In bash you can do this with the PIPESTATUS variable
echo "1" | grep 1
echo ${PIPESTATUS[0]} # returns 0
echo "1" | grep 2
echo ${PIPESTATUS[0]} # returns 0
echo "1" | grep 2
echo ${PIPESTATUS[1]} # returns 1
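For the zsh part of the question: zsh offers the analogous lowercase array $pipestatus, which is 1-indexed, so the equivalent of the last example would be (zsh only; plain POSIX sh has no such array):
echo "1" | grep 2
echo ${pipestatus[2]} # returns 1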
I'd like to send the result of a series of commands to a variable:
variable=$(a | few | commands)
However, the command substitution resets PIPESTATUS, so I can't inspect where it went wrong after the fact. One solution would be to use mktemp and put the result there temporarily:
variable_file=$(mktemp) || exit 1
a | few | commands > $variable_file
exit_codes="${PIPESTATUS[*]}"
variable=$(<$variable_file)
Is there a more elegant solution?
Kinda hacky but I think you could fudge it like this.
variable=$(a | few | commands; echo ": ${PIPESTATUS[*]}")
PIPESTATUS=(${variable##*: })
variable=${variable%:*}
variable=${variable%$'\n'}
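Note that bash overwrites PIPESTATUS after every subsequent command, so the restored values above are immediately fragile. A variant of the same trick (same placeholder commands) that keeps the codes in an ordinary array avoids that:
out=$(a | few | commands; echo ": ${PIPESTATUS[*]}")
codes=(${out##*: })       # one status per stage, e.g. codes=(0 1 0)
out=${out%:*}
out=${out%$'\n'}
echo "${codes[1]}"        # exit status of `few`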
Building on ephemient's answer, if we need the output of the piped commands stored without them being mixed in with the pipestatus codes, but we don't really care what the exit codes themselves are (just that they all succeeded), we can do:
variable=$(a | few | commands; [[ ${PIPESTATUS[*]} == "0 0 0" ]])
This checks the status of all the piped commands in the above example; if any of their exit codes is not 0, it sets $? to 1 (false).
If you want to exit with a different code instead of 1, you could capture the contents of PIPESTATUS[@], e.g. r_code=(${PIPESTATUS[@]}), and then use (exit ${r_code[2]}) instead of false.
Below captures all the codes of PIPESTATUS, ensures they're all 0, and if not, sets the exit code to be the $? value of commands:
declare -a r_code
variable=$(a | few | commands
r_code=(${PIPESTATUS[@]})
[[ ${r_code[*]} == "0 0 0" ]] || (exit ${r_code[2]})
)
echo ${?} # echoes the exit code of `commands`
echo ${variable} # echoes only the output of a | few | commands