I have to compare a file with 3 different golden files using diff.
I need the script to exit with exit 0 if the test file is the same as any of the three golden files.
I tried the following:
#!/bin/sh
one=`diff -q NEW_GOLDEN_OUTPUT_ASYNC_1 /tmp/tmp_last_lines.log`
two=`diff -q NEW_GOLDEN_OUTPUT_ASYNC_2 /tmp/tmp_last_lines.log`
three=`diff -q NEW_GOLDEN_OUTPUT_ASYNC_3 /tmp/tmp_last_lines.log`
if [[ $one || $two || $three ]]; then
exit 0
else
exit 1
fi
But it exits with 0 in all cases. I'm using the /bin/ksh shell. Any suggestions?
Your code looks at the output of diff but you should look at the exit code. Try this instead:
#!/bin/sh
diff -q NEW_GOLDEN_OUTPUT_ASYNC_1 /tmp/tmp_last_lines.log && \
diff -q NEW_GOLDEN_OUTPUT_ASYNC_2 /tmp/tmp_last_lines.log && \
diff -q NEW_GOLDEN_OUTPUT_ASYNC_3 /tmp/tmp_last_lines.log
&& will only execute the next command if the previous one succeeded.
Alternatively, use set -e (Exit immediately if a command exits with a non-zero status.):
#!/bin/sh
set -e
diff -q NEW_GOLDEN_OUTPUT_ASYNC_1 /tmp/tmp_last_lines.log
diff -q NEW_GOLDEN_OUTPUT_ASYNC_2 /tmp/tmp_last_lines.log
diff -q NEW_GOLDEN_OUTPUT_ASYNC_3 /tmp/tmp_last_lines.log
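Note that both variants above exit 0 only when the file matches all three golden files. If, as the question asks, the script should exit 0 when it matches any of them, chain the diffs with || instead; the script's exit status is then that of the last diff that ran, which is 0 as soon as one comparison succeeds:
#!/bin/sh
diff -q NEW_GOLDEN_OUTPUT_ASYNC_1 /tmp/tmp_last_lines.log || \
diff -q NEW_GOLDEN_OUTPUT_ASYNC_2 /tmp/tmp_last_lines.log || \
diff -q NEW_GOLDEN_OUTPUT_ASYNC_3 /tmp/tmp_last_lines.log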
The following command works perfectly on the terminal but the same command fails in GitLab CI.
echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
The output is success,
but the same command in GitLab CI
$ echo Hello >> foo.txt; cat foo.txt | grep "test"; [[ $? -eq 0 ]] && echo fail || echo success
Cleaning up file based variables
ERROR: Job failed: command terminated with exit code 1
is simply failing. I have no idea why.
echo $SHELL returns /bin/bash in both cases.
Source of the issue
The behavior you observe is pretty standard given the "implied" set -e in a CI context.
To be more precise, your code consists of three commands:
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
And the grep "test" command returns a non-zero exit code (namely, 1). As a result, the script immediately exits and the last line is not executed.
Note that this behavior is typical in a CI context: if some intermediate command fails in a complex script, we usually want the job to fail and avoid running the subsequent commands (which could be off-topic given the error).
You can reproduce this locally as well, by writing for example:
bash -e -c '
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
'
which is mostly equivalent to:
bash -c '
set -e
echo Hello >> foo.txt
cat foo.txt | grep "test"
[[ $? -eq 0 ]] && echo fail || echo success
'
Relevant manual pages
For more insight:
on set -e, see man 1 set
on bash -e, see man 1 bash
How to fix the issue?
You should just adopt another phrasing and avoid after-the-fact [[ $? -eq 0 ]] tests. Commands that may return a non-zero exit code without that meaning failure should be "protected" by an if:
echo Hello >> foo.txt
if cat foo.txt | grep "test"; then
  echo fail
  false  # in case you want to trigger a failure manually at this point
else
  echo success
fi
Also, note that grep "test" foo.txt would be more idiomatic than cat foo.txt | grep "test", which is a classic instance of UUOC (useless use of cat).
I have no idea why.
GitLab executes the commands one at a time and checks the exit status of each. When the exit status is not zero, the job fails.
There is no "test" string in foo.txt, so the command cat foo.txt | grep "test" exits with a nonzero status, and the job fails.
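If the grep is genuinely allowed to find nothing, one common shell idiom (besides the if form shown above) is to force a zero exit status on that one command so the CI job keeps going:
grep "test" foo.txt || true   # exit status 0 whether or not the pattern is found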
Problem Description
I am trying to filter out directories that we want to ignore in the test coverage. For this purpose we are using Lcov.
When I put the directories that are to be ignored in a variable __ignoreinput, the command ${__lcov} ${__gcovopts} --remove MYCODE.info "${__ignoreinput}" -o MYCODE_filtered.info > /dev/null 2> /dev/null doesn't work; it doesn't filter anything. Whereas when I use the command without __ignoreinput, as in
${__lcov} ${__gcovopts} --remove MYCODE.info '/opt/*' '/usr/include/*' '*3rdParty/*' '*Input_API/*' '*Grammars/*' -o MYCODE_filtered.info > /dev/null 2> /dev/null
if [[ ${?} -ne 0 ]] ;then echo "Error *** lcov filtrering failed" && exit 1 ;fi
the filter works fine. What am I doing wrong? I don't understand.
Script
#!/bin/bash
__orc=/home/anybody/workspace/project
__buildtype="local"
__output=/home/anybody/workspace/lcov
#does not work
#__ignoreinput="'/opt/*' '/usr/include/*' '*3rdParty/*' '*Input_API/*' '*Grammars/*'"
#__ignoreinput="/opt/* /usr/include/* *3rdParty/* *Input_API/* *Grammars/* "
#__ignoreinput="\"/opt/*\" \"/usr/include/*\" \"*3rdParty/*\" \"*Input_API/*\" \"*Grammars/*\""
__gcovopts=--gcov-tool=/opt/1A/x86_64-2.6.32-v2/bin/gcov
__lcov=lcov
if [[ "${__buildtype}" == "docker" ]] ;then
__build=MYCODE/build_x86_64-2.6.32-v2_Gcov
else
__build=MYCODE/cmake-build-coverage
fi
echo "Filter lcov tracefile"
cd ${__orc}/${__build}
#does not work
#${__lcov} ${__gcovopts} --remove MYCODE.info "${__ignoreinput}" -o MYCODE_filtered.info > /dev/null 2> /dev/null
#works
${__lcov} ${__gcovopts} --remove MYCODE.info '/opt/*' '/usr/include/*' '*3rdParty/*' '*Input_API/*' '*Grammars/*' -o MYCODE_filtered.info > /dev/null 2> /dev/null
if [[ ${?} -ne 0 ]] ;then echo "Error *** lcov filtrering failed" && exit 1 ;fi
echo "Generate HTML reports"
cd ${__orc}/${__build}
genhtml --ignore-errors source -o ${__output}/lcov_"$(git rev-list HEAD -n 1)" MYCODE_filtered.info > /dev/null 2> /dev/null
if [[ ${?} -ne 0 ]] ;then echo "Error *** lcov reports failed" && exit 1 ;fi
One simple way to pass __ignoreinput would be to store the glob expressions in an array, single-quoted, and expand the array when passing it to the lcov command. Write your ignore input as
__ignoreinput=( '/opt/*' '/usr/include/*' '*3rdParty/*' '*Input_API/*' '*Grammars/*' )
and the following should work as expected.
"${__lcov}" "${__gcovopts}" --remove MYCODE.info "${__ignoreinput[@]}" -o MYCODE_filtered.info > /dev/null 2>&1
In all the failure cases in your description, __ignoreinput is set as one whole string under "..", but the command expects the patterns split up, one word per expression. The array expansion "${__ignoreinput[@]}" puts each element of the array into a separate word, as the command expects.
Also, carefully single- or double-quote the words in the array definition: without quotes, the * could undergo pathname expansion and expand to the list of filenames under each of those paths.
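To see the difference, a quick check you can run in bash (the __asonestring name is just for illustration):
__ignoreinput=( '/opt/*' '/usr/include/*' '*3rdParty/*' )
printf '<%s>\n' "${__ignoreinput[@]}"   # three lines: lcov gets three separate patterns
__asonestring="'/opt/*' '/usr/include/*' '*3rdParty/*'"
printf '<%s>\n' "$__asonestring"        # one line: lcov gets a single word, quotes included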
Also see how > /dev/null 2> /dev/null could be shortened to > /dev/null 2>&1 or simply &> /dev/null in bash.
Also, prefixing variable names with __ is bad practice. As in most languages, the underscore character can itself be part of a valid variable identifier. As shown above, fully enclosing the variable name in ${..} is the recommended way.
Is there any better way to get the return code from a command in one line? e.g.:
$ test $(ls -l) ||echo 'ok'
-bash: test: too many arguments
ok
The above script has an error in the test command, because test parses the output of ls -l, not its return code.
I know the "if" syntax works fine, but it needs more than one line.
ls -l
if [ $? -eq 0 ];then
echo 'ok'
fi
You can use && and || to make these things one-liners. For example, in the following:
ls -l && echo ok
echo ok will run only if the command before && (ls -l) returned 0.
On the other hand, in the following:
ls -l || echo 'not ok'
echo 'not ok' will run only if the command before || returned non-zero.
Also, you can make your if..else block a one-liner using ;:
if ls -l;then echo ok;else echo 'not ok';fi
But this may make your code hard to read, so it's not recommended.
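Another one-line option is to capture the status in a variable right after the command and test that; rc here is just an illustrative name:
ls -l > /dev/null; rc=$?; [ "$rc" -eq 0 ] && echo ok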
The if statement catches the return value of a command, for example with ls:
if ls -l; then
echo 'ok'
fi
Another question here is how to handle the sample below.
ls -l xxx || (echo "file xxx not exist" ; exit 1);echo "OK"
In this sample, if file xxx does not exist, the second echo "OK" still runs even though exit 1 ran before it.
I expect the script to exit with return code 1 if file xxx does not exist.
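Note that the parentheses in ( echo ... ; exit 1 ) run that group in a subshell, so exit 1 only terminates the subshell and the script carries on to echo "OK". A brace group runs in the current shell instead; a minimal sketch:
ls -l xxx || { echo "file xxx does not exist"; exit 1; }
echo "OK"
Here, if ls fails, the script exits with status 1 and "OK" is never printed.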
Instead of using
test $(ls -l) ||echo 'ok'
you should use
[[ $(ls -l) ]] && echo "ok"
[[ ]] is a test; here it checks whether the captured output is non-empty, and if the test succeeds, the command after the && is executed.
[[ ]] test
$( ) execute command
ls -l command to run
&& run if test is successful
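A quick way to see that testing the captured output is not the same as testing the exit status:
[[ $(true) ]] && echo "non-empty output"   # prints nothing: true succeeds but produces no output
true && echo "exit status was 0"           # prints the message: the exit status of true is 0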
Hope this helps!
If I have the following bash command:
for i in ./ x ; do ls $i ; done && echo OK
"ls ./" is executed, and then "ls x", which fails (x is missing) and OK is not printed.
If
for i in x ./ ; do ls $i ; done && echo OK
then even though "ls x" fails, OK is printed because the last statement in the for loop succeeded. This is a problem when using shell for loops in makefiles:
x:
for i in $(LIST) ; do \
cmd $$i ;\
done
How can I make make fail if any of the individual executions of cmd fails?
Use the break command to terminate the loop when a command fails
x:
for i in $(LIST) ; do \
cmd $$i || break ;\
done
That won't make the makefile abort, though. You could instead use exit with a non-zero code:
x:
for i in $(LIST) ; do \
cmd $$i || exit 1 ;\
done
After executing the command, check its return value using $?; as it's a makefile, you need to use a double $. If it's non-zero, exit with failure.
x:
for i in $(LIST); do \
cmd $$i; \
if [ $$? -ne 0 ]; then exit 1; fi; \
echo 'done'; \
done
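Alternatively, set -e can do the job, but only if it runs in the same shell invocation as the loop, i.e. joined onto the same recipe line with ; and \ (make runs each recipe line in a separate shell, and recipe lines must be tab-indented in a real makefile). A minimal sketch using the same cmd and $(LIST) as above:
x:
set -e; \
for i in $(LIST); do \
cmd $$i; \
echo 'done'; \
done
With set -e active, the shell exits as soon as cmd $$i fails, and make reports the target as failed.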
How to get the command previously used to start a shell script?
for example:
nohup /script_name.sh &
Inside the script itself, how to check if "nohup" has been used?
Thanks.
You want to use the $_ parameter in your script.
Example: shell.sh
#!/bin/bash
echo $_;
user@server [~]# sh shell.sh
/usr/bin/sh
user@server [~]#
Additionally:
If you want to get rid of the full path (/usr/bin/sh), use the basename command.
#!/bin/bash
echo `basename $_`;
user@server [~]# sh shell.sh
sh
user@server [~]#
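Building on the $_ idea, a minimal sketch of a check for nohup specifically; invoked_by is just an illustrative name, and this relies on the invoking shell exporting _ in the environment, so it is not bulletproof:
#!/bin/bash
# capture $_ first thing: at startup it holds the path used to invoke the script,
# or the wrapper (e.g. /usr/bin/nohup) when the script is started as "nohup ./script.sh &"
invoked_by=$(basename "${_:-unknown}")
if [ "$invoked_by" = "nohup" ]; then
  echo "started with nohup"
else
  echo "apparently not started with nohup (invoked via: $invoked_by)"
fi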
Well, that depends on the script in question. There are many ways to execute a script, like:
./<scriptname> #chmod 700 <scriptname> should be done before executing this script
bash <scriptname> # provided bash is used for executing the script.
Or, if you just want to get the name of script2 inside script1, use sed or awk to parse script1 with a regular expression like /script2/.
Try this:
cat <script1> | awk '{ if( $0 ~ /^[^#].* \/scriptname.sh/ ){ print $1}}'
@codebaus Thanks, doing something like this works, but using strace definitely does not.
#!/bin/bash
# echo $_
# echo $0
if grep "sh" $_ >/dev/null ; then
exit 1
fi ;
echo "string" ;
I believe you want to run this:
#!/bin/bash
# echo $_
# echo $0
if grep "sh" $_ 2> /dev/null ; then
exit 1
fi ;
echo "string";
user@server [~]# sh shell.sh
Binary file /usr/bin/sh matches
user@server [~]#
Not sure what you are trying to accomplish in the end, but $_ should give you what you need based on your initial question.
Additionally:
Apologies for not answering your strace comment earlier. Based on the previous code above:
strace sh shell.sh
wait4(-1, Binary file /usr/bin/strace matches
[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 874
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
--- SIGCHLD (Child exited) @ 0 (0) ---