I have a makefile which succeeds where it should fail because a line like this
./preprocess.sh <PARTIAL_SOURCE | perl >FINAL_SOURCE
succeeds even though PARTIAL_SOURCE doesn't exist yet.
This isn't a quirk of preprocess.sh; it seems to be something to do with bash/sh:
$> cat <does_not_exist && echo ok || echo no
bash: does_not_exist: No such file or directory
no
$> cat <does_not_exist | perl && echo ok || echo no
bash: does_not_exist: No such file or directory
ok
Why does the first fail but the second succeed?
The second succeeds because perl succeeds. The exit code of a pipeline is the exit code of the last command in the pipeline. In this example it's perl. It receives empty input, does nothing with it, so it exits with 0, as it should.
As another example, this is successful too:
$ a | b | perl && echo ok || echo no
-bash: a: command not found
-bash: b: command not found
ok
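If you want the pipeline itself to propagate the failure, bash has a pipefail option (covered in more detail in a later answer below) that makes a pipeline return the last non-zero status instead of the status of the last command:
$> set -o pipefail
$> cat <does_not_exist | perl && echo ok || echo no
bash: does_not_exist: No such file or directory
no
Note that make runs recipes with /bin/sh by default, so to rely on pipefail inside a makefile you would need to point make at bash, e.g. with SHELL := /bin/bash and .SHELLFLAGS := -o pipefail -c in GNU make.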
If you don't want perl to get executed before PARTIAL_SOURCE is ready,
you need to test for it before the pipeline:
if [ -f "$input" ]; then
./preprocess.sh < "$input" | perl >FINAL_SOURCE
fi
Or wait for the input to be ready:
while [ ! -f "$input" ]; do sleep 1; done
The following command works perfectly on the terminal but the same command fails in GitLab CI.
echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
The output is success,
but the same command in GitLab CI
$ echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
Cleaning up file based variables
ERROR: Job failed: command terminated with exit code 1
is simply failing. I have no idea why.
echo $SHELL returns /bin/bash in both.
Source of the issue
The behavior you observe is pretty standard given the "implied" set -e in a CI context.
To be more precise, your code consists of three commands:
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
And the grep "test" command returns a non-zero exit code (namely, 1). As a result, the script immediately exits and the last line is not executed.
Note that this behavior is typical in a CI context: if some intermediate command fails in a complex script, we usually want the job to fail and avoid running the next commands (which could be "off-topic" given the error).
You can reproduce this locally as well, by writing for example:
bash -e -c '
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
'
which is mostly equivalent to:
bash -c '
set -e
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
'
Relevant manual page
For more insight:
on set -e, see help set in bash (or man 1p set where POSIX man pages are installed)
on bash -e, see man 1 bash
How to fix the issue?
You should just adopt another phrasing, avoiding a-posteriori [[ $? -eq 0 ]] tests. The commands that may return a non-zero exit code without meaning failure should be "protected" by some if:
echo Hello >> foo.txt
if cat foo.txt | grep "test"; then
echo fail
false # if ever you want to "trigger a failure manually" at some point.
else
echo success
fi
Also, note that grep "test" foo.txt would be more idiomatic than cat foo.txt | grep "test" − which is precisely an instance of UUOC (useless use of cat).
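If you prefer a one-liner, the same protection works inline, because a command that fails as part of a && / || list does not trigger set -e:
grep -q "test" foo.txt && echo fail || echo success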
I have no idea why.
Gitlab executes each command one at a time and checks its exit status. When the exit status is non-zero, the job fails.
There is no "test" string inside foo.txt, so the command cat foo.txt | grep "test" exits with a non-zero status, and the job fails.
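You can confirm grep's exit status by hand:
$ echo Hello > foo.txt
$ grep "test" foo.txt; echo $?
1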
I am new to bash scripting and want to write a short script that checks whether a certain program is running. If it is running, the script should bring its window to the foreground; if it is not, the script should start it.
#!/bin/bash
if [ "$(wmctrl -l | grep Wunderlist)" = ""]; then
/opt/google/chrome/google-chrome --profile-directory=Default --app-id=ojcflmmmcfpacggndoaaflkmcoblhnbh
else
wmctrl -a Wunderlist
fi
My comparison is wrong, but I am not even sure what I should google to find a solution. My idea is that "$(wmctrl -l | grep Wunderlist)" will return an empty string if the window does not exist. I get this error when I run the script:
~/bin » sh handle_wunderlist.sh
handle_wunderlist.sh: 3: [: =: argument expected
You need a space before the closing argument, ], of the [ (test) command:
if [ "$(wmctrl -l | grep Wunderlist)" = "" ]; then
....
else
....
fi
As a side note, you have used a bash shebang but are running the script with sh (presumably dash, judging from the error message).
Replace:
if [ "$(wmctrl -l | grep Wunderlist)" = ""]; then
With:
if ! wmctrl -l | grep -q Wunderlist; then
grep sets its exit status to true (0) if a match was found and false (1) if it wasn't. Because you want the inverse of that, we place ! at the beginning of the command to invert the exit code.
Normally, grep will send the matching text to standard out. We don't want that text, we just want to know if there was a match or not. Consequently, we added the -q option to make grep quiet.
Example
To illustrate the use of grep -q in an if statement:
$ if ! echo Wunderlist | grep -q Wunderlist; then echo Not found; else echo Found; fi
Found
$ if ! echo Wunderabcd | grep -q Wunderlist; then echo Not found; else echo Found; fi
Not found
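Putting the pieces together, the whole script from the question becomes:
#!/bin/bash
# raise the Wunderlist window if it exists, otherwise launch the app
if ! wmctrl -l | grep -q Wunderlist; then
    /opt/google/chrome/google-chrome --profile-directory=Default --app-id=ojcflmmmcfpacggndoaaflkmcoblhnbh
else
    wmctrl -a Wunderlist
fi
Run it with bash handle_wunderlist.sh (or make it executable) so the bash shebang is honored.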
Is there any better way to get the return code from a command in one line? E.g.:
$ test $(ls -l) ||echo 'ok'
-bash: test: too many arguments
ok
The above has an error in the test command, because test ends up parsing the output of ls -l rather than its return code.
I know the if syntax works fine, but it needs more than one line:
ls -l
if [ $? -eq 0 ];then
echo 'ok'
fi
You can use && and || to make these things one-liners. For example, in the following:
ls -l && echo ok
echo ok will run only if the command before && (ls -l) returned 0.
On the other hand, in the following:
ls -l || echo 'not ok'
echo 'not ok' will run only if the command before || returned non-zero.
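Be careful when chaining both operators, though: cmd && a || b is not a true if/else, because b also runs if a itself fails:
true && false || echo 'not ok'   # prints 'not ok' even though true succeeded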
Also, you can make your if..else block a one-liner using ;:
if ls -l;then echo ok;else echo 'not ok';fi
But this may make your code hard to read, so it's not recommended.
The if statement catches the return value of a command, for example with ls:
if ls -l; then
echo 'ok'
fi
Another question here is how to handle the sample below:
ls -l xxx || (echo "file xxx not exist" ; exit 1);echo "OK"
In this sample, if file xxx does not exist, the second echo ("OK") still runs even though exit 1 ran before it.
I expect the script to exit with return code 1 if file xxx does not exist.
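The reason is that exit 1 inside ( ... ) only exits the subshell the parentheses create, not the outer script. A group command with braces runs in the current shell, so a sketch like this exits as expected (run it in a script; in an interactive shell the exit will close your session):
ls -l xxx || { echo "file xxx does not exist" >&2; exit 1; }; echo "OK"
Note that the spaces after { and the ; before } are required.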
Instead of using
test $(ls -l) ||echo 'ok'
you should use
[[ $(ls -l) ]] && echo "ok"
[[ ]] is the test construct; here it succeeds when the command substitution produces non-empty output (not when ls itself returns zero), and && then executes the command after it.
[[ ]] test
$( ) execute command
ls -l command to run
&& run if test is successful
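Note the difference between testing output and testing the exit status; if the return code is what you care about, test the command directly:
ls /nonexistent >/dev/null 2>&1 && echo "exit 0" || echo "exit non-zero"                # tests the return code
[[ $(ls /nonexistent 2>/dev/null) ]] && echo "output non-empty" || echo "output empty"  # tests the output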
Hope this helps!
I'm writing a routine that will identify when a process stops running and will do something once the targeted process is gone.
I came up with this code (as a test for my future code):
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
echo $?
done;
echo DONE
My problem is that, for some reason, the loop never stops and keeps echoing 1 after I delete the file "aaa":
0
0 >>> I delete the file at that point (in another terminal)
1
1
1
1
.
.
.
I would expect the output to be "DONE" as soon as I delete the file...
What's the problem?
SOLUTION:
#!/bin/bash
value="aaa"
ls | grep $value
while [ $? = 0 ];
do
sleep 5
ls | grep $value
done;
echo DONE
The value of $? changes very easily. In the current version of your code, this line:
echo $?
prints the status of the previous command (grep) -- but then it sets $? to 0, the status of the echo command.
Save the value of $? in another variable, one that won't be clobbered next time you execute a command:
#!/bin/bash
value="aaa"
ls | grep $value
status=$?
while [ $status = 0 ];
do
sleep 5
ls | grep $value
status=$?
echo $status
done;
echo DONE
If the ls | grep aaa is intended to check whether a file named aaa exists, this:
while [ -f aaa ] ; ...
is a cleaner way to do it.
$? is the return code of the last command; in your loop that is the echo at the end of the body, not the grep.
You can rewrite that loop in a much simpler way, like this:
while [ -f aaa ]; do
sleep 5;
echo "sleeping...";
done
You ought not duplicate the command to be tested. You can always write:
while cmd; do ...; done
instead of
cmd
while [ $? = 0 ]; do ...; cmd; done
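Applied to the question's loop, that collapses to:
while ls | grep aaa > /dev/null; do sleep 5; done
echo DONE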
In your case, you mention in a comment that the command you are testing is parsing the output of ps. Although there are very good arguments that you ought not do that, and that the followon processing should be done by the parent of the command for which you are waiting, we'll ignore that issue at the moment. You can simply write:
while ps -ef | grep -v "grep mysqldump" |
grep mysqldump > /dev/null; do sleep 1200; done
Note that I changed the order of your pipe, since grep -v returns true whenever it lets any line through (that is, whenever some line does not match). In this case I think the change is not strictly necessary, but I believe it is more readable. I've also discarded the output to clean things up a bit.
Presumably your objective is to wait until some filename containing the string $value is present in the local directory, not necessarily a single fixed filename. Try:
#!/bin/bash
value="aaa"
while ! ls *$value*; do
sleep 5
done
echo DONE
Your original code failed because $? is filled with the return code of the echo command on every iteration after the first.
BTW, if you intend to use ps instead of ls in the future, you will pick up your own grep unless you are clever. Use ps -ef | grep [m]ysqlplus.
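If pgrep is available, it avoids the self-match problem entirely, since it matches against process names rather than its own command line:
while pgrep mysqldump > /dev/null; do sleep 1200; done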
Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so it will be printed at the very beginning either way), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have this done using progress-meter "Doing things..." "do-things arg1 arg2"; and evaluating the second argument inside, which is clumsy; I don't like it and believe there is a better solution.
The problem with the pipe syntax is that I don't know how I can get do-things' exit status from inside the pipeline; $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution, as in progress-meter "Doing things..." <(do-things arg1 arg2);, would be fine, but in this case I also don't know how I can get the exit status of do-things.
I'll be happy to hear if there is some other neat syntax to achieve the same task without escaping the command to be executed, as in my example.
I greatly hope for the community's help.
UPD1: As the question seems not to be clear enough, I'll paraphrase it:
I want a bash function that can be fed a command; the command will execute in parallel with the function, and the function will receive its stdout+stderr, wait for its completion, and get its exit status.
Example implementation using evals:
progress_meter() {
local cmd="$2";   # the command string passed as the second argument
local output;
local errcode;
echo -n -e "$1";
output=$( {
eval "${cmd}";
} 2>&1; );
errcode=$?;
if (( errcode )); then {
echo '[FAIL]';
echo "Output was: ${output}"
} else {
echo '[ OK ]';
}; fi;
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$#" # this just executes a command made up of
# arguments 2, 3, ... of the script
# the real script should actually read its input,
# display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
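In that case progress-meter has to peel the trailing status line off its input. A minimal sketch, reusing the names from the question's eval version (and assuming the command's own output, if any, ends with a newline, as line-oriented tools produce):
progress_meter() {
    local prompt="$1" data status body
    echo -n "$prompt"
    data=$(cat)                     # slurp everything from the pipe
    status=${data##*$'\n'}          # last line: the exit status echoed by the producer
    if [[ $data == *$'\n'* ]]; then
        body=${data%$'\n'*}         # everything before the status line
    else
        body=''                     # the command produced no output of its own
    fi
    if [[ $status == 0 ]]; then
        echo '[ OK ]'
    else
        echo '[FAIL]'
        echo "Output was: ${body}"
    fi
}

# usage: merge stderr so it is recorded too
{ do-things arg1 arg2; echo "$?"; } 2>&1 | progress_meter "Doing things..."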
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: some guy writes on the blackboard, another guy reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file, using command line switches if you like so as not to hardcode the name of the blackboard file, for reporting the exit status of the program that it is reporting the progress for.
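A rough sketch of that idea (the status file and the --status-file switch are made-up names for illustration):
status_file=$(mktemp)
{ do-things arg1 arg2; echo "$?" > "$status_file"; } 2>&1 |
    progress-meter "Doing things..." --status-file "$status_file"
# inside progress-meter, once its stdin hits EOF the producer has
# finished, so the status file is complete:
# status=$(cat "$status_file")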