Why does the same command fail in GitLab CI? - bash

The following command works perfectly in the terminal, but the same command fails in GitLab CI.
echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
The result is success, but the same command in GitLab CI
$ echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
Cleaning up file based variables
ERROR: Job failed: command terminated with exit code 1
is simply failing. I have no idea why.
echo $SHELL returns /bin/bash in both.

Source of the issue
The behavior you observe is pretty standard given the "implied" set -e in a CI context.
To be more precise, your code consists of three commands:
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
And the grep "test" command returns a non-zero exit code (namely, 1). As a result, the script immediately exits and the last line is not executed.
Note that this feature is typical in a CI context, because if some intermediate command fails in a complex script, we'd typically want to get a failure, and avoid running the next commands (which could potentially be "off-topic" given the error).
You can reproduce this locally as well, by writing for example:
bash -e -c '
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
'
which is mostly equivalent to:
bash -c '
set -e
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
'
Relevant manual pages
For more insight:
on set -e: set is a shell builtin, so see help set or the SHELL BUILTIN COMMANDS section of man 1 bash
on bash -e: see man 1 bash (any single-character option of set can also be passed when invoking bash)
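To confirm whether errexit is actually in effect inside a job, you can inspect the shell's active option flags. A small probe (relying only on the standard $- parameter, which lists the active single-letter options):
case $- in
  *e*) echo "errexit (-e) is on" ;;
  *)   echo "errexit (-e) is off" ;;
esac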
How to fix the issue?
You should just adopt another phrasing, avoiding after-the-fact [[ $? -eq 0 ]] tests. Commands that may return a non-zero exit code without meaning failure should be "protected" by an if:
echo Hello >> foo.txt
if cat foo.txt | grep "test"; then
echo fail
false # in case you ever want to trigger a failure manually at this point
else
echo success
fi
Also, note that grep "test" foo.txt would be more idiomatic than cat foo.txt | grep "test", which is precisely an instance of UUOC (useless use of cat).
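Putting both points together, a minimal sketch of the fixed step (grep -q is an addition here: it suppresses the matched output and reports the result only through its exit status):
echo Hello >> foo.txt
if grep -q "test" foo.txt; then
  echo fail
else
  echo success
fi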

I have no idea why.
GitLab executes each command one at a time and checks its exit status. When the exit status is non-zero, the job fails.
There is no "test" string inside foo.txt, so the command cat foo.txt | grep "test" exits with a non-zero status. Thus the job fails.
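If you want to keep the one-liner shape, note that a failing command on the left-hand side of && or || does not trigger errexit, so a sketch like this behaves the same locally and in CI (names taken from the question):
echo Hello >> foo.txt
grep "test" foo.txt && echo fail || echo success
Here the whole list exits 0 whether or not the pattern matches, because echo succeeds in both branches.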

Related

`cat <does_not_exist | perl` succeeds

I have a makefile which succeeds where it should fail because a line like this
./preprocess.sh <PARTIAL_SOURCE | perl >FINAL_SOURCE
succeeds even though PARTIAL_SOURCE doesn't exist yet.
This isn't a quirk of preprocess.sh; it seems to be something to do with bash/sh:
$> cat <does_not_exist && echo ok || echo no
bash: does_not_exist: No such file or directory
no
$> cat <does_not_exist | perl && echo ok || echo no
bash: does_not_exist: No such file or directory
ok
Why does the first fail but the second succeed?
The second succeeds because perl succeeds. The exit code of a pipeline is the exit code of the last command in the pipeline. In this example it's perl. It receives empty input, does nothing with it, so it exits with 0, as it should.
As another example, this is successful too:
$ a | b | perl && echo ok || echo no
-bash: a: command not found
-bash: b: command not found
ok
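If you do want the pipeline to reflect the failure of an earlier stage, bash's pipefail option makes the pipeline's exit status the last non-zero status among its stages. A sketch reusing the example above:
set -o pipefail
cat <does_not_exist | perl && echo ok || echo no
# bash: does_not_exist: No such file or directory
# no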
If you don't want perl to get executed before PARTIAL_SOURCE is ready,
you need to test for it before the pipeline:
if [ -f "$input" ]; then
./preprocess.sh < "$input" | perl >FINAL_SOURCE
fi
Or wait for the input to be ready:
while [ ! -f "$input" ]; do sleep 1; done

How to get test command return code in the shell?

Is there any better way to get the return code from a command in one line? e.g.:
$ test $(ls -l) ||echo 'ok'
-bash: test: too many arguments
ok
The above has an error in the test command because test parses the output of ls -l, not its return code.
I know the if syntax works fine, but it needs more than one line:
ls -l
if [ $? -eq 0 ];then
echo 'ok'
fi
You can use && and || to make these things one-liner. For example, in the following:
ls -l && echo ok
echo ok will run only if the command before && (ls -l) returned 0.
On the other hand, in the following:
ls -l || echo 'not ok'
echo 'not ok' will run only if the command before || returned non-zero.
Also, you can make your if..else block one-liner using ;:
if ls -l;then echo ok;else echo 'not ok';fi
But this may make your code hard to read, so it's not recommended.
The if statement catches the return value of a command, for example with ls:
if ls -l; then
echo 'ok'
fi
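If you need the numeric code itself on one line rather than a branch, you can save $? immediately after the command. A trivial sketch:
ls -l; rc=$?; echo "exit code: $rc"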
Another question here is how to make the sample below work:
ls -l xxx || (echo "file xxx not exist" ; exit 1); echo "OK"
In this sample, if file xxx does not exist, the second echo "OK" still runs even though exit 1 ran previously.
I expect the script to exit with return code 1 if file xxx does not exist.
instead of using
test $(ls -l) ||echo 'ok'
you should use
[[ $(ls -l) ]] && echo "ok"
[[ ]] is a test; here it checks that the command's substituted output is non-empty (not the command's return code), and if that test succeeds it executes the command after the &&.
[[ ]] test
$( ) execute command
ls -l command to run
&& run if test is successful
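As for the exit 1 sample: the ( ... ) group runs in a subshell, so exit 1 only terminates that subshell and the script carries on to echo "OK". A brace group runs in the current shell instead, so a minimal rewrite of the sample is:
ls -l xxx || { echo "file xxx not exist"; exit 1; }
echo "OK"
Now the script really exits with code 1 when xxx is missing, and echo "OK" is never reached.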
Hope this helps!

Get exit code from last pipe (stdin)

I would like to be able to create a bash function that can read the exit code of the command before the pipe. I'm not sure it is possible to have access to that.
echo "1" | grep 2 returns a 1 status code
echo "1" | grep 1 returns a 0 status code
Now I would like to add a third command to read the status, with a pipe:
echo "1" | grep 2 | echo $? will echo "0", even if the status code is 1.
I know I can use the echo "1" | grep 2 && echo "0" || echo "1", but I would prefer to write it using a pipe.
Is there any way to do that? (It would be even better if it worked in most shells, like bash, sh, and zsh.)
You're going to have to get the exit status before the next stage of the pipeline. Something like
exec 3> debug.txt
{ echo "1"; echo "$?" >&3; } | long | command | here
You can't (easily) encapsulate this in a function, since it would require passing a properly quoted string and executing it via eval:
debug () {
eval "$#"
echo $? >&3
}
# It looks easy in this example, but it won't take long to find
# an example that breaks it.
debug echo 1 | long | command | here
You have to write the exit status to a different file descriptor, otherwise it will interfere with the output sent to the next command in the pipeline.
In bash you can do this with the PIPESTATUS array variable:
echo "1" | grep 1
echo ${PIPESTATUS[0]} # returns 0
echo "1" | grep 2
echo ${PIPESTATUS[0]} # returns 0
echo "1" | grep 2
echo ${PIPESTATUS[1]} # returns 1
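Note that PIPESTATUS is bash-specific (zsh offers a similar lowercase pipestatus array) and is overwritten by the very next command, so copy it right away if you need it more than once. A small sketch:
echo "1" | grep 2
status=("${PIPESTATUS[@]}")   # copy immediately; any later command resets it
echo "echo exited with ${status[0]}, grep with ${status[1]}"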

Bash Script - Will not completely execute

I am writing a script that will take in 3 arguments and then search all files within a predefined path. However, my grep command seems to be breaking the script with exit code 123. I have been staring at it for a while and cannot really see the error, so I was hoping someone could point it out. Here is the code:
#! /bin/bash -e
#Check if path exists
if [ -z $ARCHIVE ]; then
echo "ARCHIVE NOT SET, PLEASE SET TO PROCEED."
echo "EXITING...."
exit 1
elif [ $# -ne 3 ]; then
echo "Illegal number of arguments"
echo "Please enter the date in yyyy mm dd"
echo "EXITING..."
exit 1
fi
filename=output.txt
#Simple signal handler
signal_handler()
{
echo ""
echo "Process killed or interrupted"
echo "Cleaning up files..."
rm -f out
echo "Finsihed"
exit 1
}
trap 'signal_handler' KILL
trap 'signal_handler' TERM
trap 'signal_handler' INT
echo "line 32"
echo $1 $2 $3
#Search for the TimeStamp field and replace the / and : characters
find $ARCHIVE | xargs grep -l "TimeStamp: $2/$3/$1"
echo "line 35"
fileSize=`wc -c out.txt | cut -f 1 -d ' '`
echo $fileSize
if [ $fileSize -ge 1 ]; then
echo "no"
xargs -n1 basename < $filename
else
echo "NO FILES EXIST"
fi
I added the echo statements to see where it was breaking. My program prints line 32 and the args but never line 35. When I check the exit code I get 123.
Thanks!
Notes:
ARCHIVE is set to a test directory, i.e. /home/'uname'/testDir
$1 $2 $3 == yyyy mm dd (ie a date)
In testDir there are N directories. Inside these directories there are data files that contain data as well as a time tag. The time tag is of the following format: TimeStamp: 02/02/2004 at 20:38:01
The scripts goal is to find all files that have the date tag you are searching for.
Here's a simpler test case that demonstrates your problem:
#!/bin/bash -e
echo "This prints"
true | xargs false
echo "This does not"
The snippet exits with code 123.
The problem is that xargs exits with code 123 if any invocation of the command exits with a status in the range 1-125. When xargs exits with non-zero status, -e causes the script to exit.
The quickest fix is to use || true to effectively ignore xargs' status:
#!/bin/bash -e
echo "This prints"
true | xargs false || true
echo "This now prints too"
The better fix is to not rely on -e, since this option is misleading and unpredictable.
xargs returns exit code 123 when grep returns a non-zero code even just once. Since you're using -e (#!/bin/bash -e), bash exits the script when one of its commands returns a non-zero exit code. Not using -e would allow your script to continue. Disabling it around just that part is a solution too:
set +e ## Disable
find "$ARCHIVE" | xargs grep -l "TimeStamp: $2/$1/$3" ## If one of the files doesn't match the pattern, `grep` would return a nonzero code.
set -e ## Enable again.
Consider placing quotes around your variables as well, like "$ARCHIVE", to prevent word splitting.
-d '\n' may also be required if any of your filenames contain spaces:
find "$ARCHIVE" | xargs -d '\n' grep -l "TimeStamp: $2/$1/$3"

How to get exit status of piped command from inside the pipeline?

Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so it is printed at the very beginning in any case), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have this done using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like it and believe there is a better solution.
The problem with the pipe syntax is that I don't know how to get do-things' exit status from inside the pipeline. $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); would be fine, but in that case I also don't know how to get the exit status of do-things.
I'll be happy to hear of some other neat syntax that achieves the same task without escaping the command to be executed as in my example.
I greatly hope for the community's help.
UPD1: As the question seems not to be clear enough, I'll paraphrase it:
I want a bash function that can be fed a command; the command will execute in parallel to the function, and the function will receive its stdout+stderr, wait for completion, and get its exit status.
Example implementation using eval:
progress_meter() {
local cmd="$2";  # the command string is passed as the second argument
local output;
local errcode;
echo -n -e "$1";
output=$( {
eval "${cmd}";
} 2>&1; );
errcode=$?;
if (( errcode )); then {
echo '[FAIL]';
echo "Output was: ${output}"
} else {
echo '[ OK ]';
}; fi;
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$#" # this just executes a command made up of
# arguments 2, 3, ... of the script
# the real script should actually read its input,
# display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
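A hedged sketch of a progress-meter that consumes this convention, treating the final line of its stdin as the status echoed by the subshell (do-things is the placeholder command from the question):
progress_meter() {
  local all status body
  printf '%s ' "$1"
  all=$(cat)                    # everything the pipeline wrote, status line included
  status=${all##*$'\n'}         # last line: the echoed $?
  if [[ $all == *$'\n'* ]]; then
    body=${all%$'\n'*}          # recorded output, without the status line
  else
    body=''                     # the command printed nothing at all
  fi
  if [ "$status" -eq 0 ]; then
    echo '[ OK ]'
  else
    echo '[FAIL]'
    printf '%s\n' "$body"
  fi
}
# usage: redirect stderr too, since the question wants stdout+stderr recorded
(do-things arg1 arg2 2>&1; echo $?) | progress_meter "Doing things..."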
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
If set, the return value of a pipeline is the
value of the last (rightmost) command to exit
with a non-zero status, or zero if all commands
in the pipeline exit successfully. This option
is disabled by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: one process writes on the blackboard, another reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file (using command-line switches if you like, so as not to hardcode the name of the blackboard file) and report the exit status of the program whose progress it is reporting.
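A minimal sketch of that blackboard idea, with /tmp/do-things.status as an assumed location for the status file:
status_file=/tmp/do-things.status
{ do-things arg1 arg2; echo $? > "$status_file"; } | progress-meter "Doing things..."
# once the pipeline has finished, anyone can read the recorded status:
read -r status < "$status_file"
echo "do-things exited with $status"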
