Bash conditions always evaluate true in script

I have a condition which seems to always be evaluated as true.
#!/bin/bash
checkFolder() {
    echo "[checkFolder]"
    echo "check $1"
    [ -n "$1" ] && [ -d "$1" ] && return 0
    echo "[/checkFolder]"
    return 1
}
rootFolder=$1
echo "check $rootFolder"
checkFolder "$rootFolder"
echo "res: $res" # !! <--- I omitted this test line, as I thought it was irrelevant.
echo "ret: $?"
When I execute my script, any path will give me a return value of 0. Which means that any string I provide seems to be seen as non-empty as well as an existing directory. I tried with:
./myScript.sh "."
./myScript.sh ""
./myScript.sh "wqert"
I will always get a return value of 0. How come?
If I run these commands in my terminal:
param=""
[ -n "$param" ] && [ -d "$param" ] && echo ok
# returns nothing
param="hello"
[ -n "$param" ] && [ -d "$param" ] && echo ok
# returns nothing
param="/home"
[ -n "$param" ] && [ -d "$param" ] && echo ok
# returns "ok"
Why doesn't it work in my script?

$? is the exit code of the last executed command. In your case, the last executed command is echo, not checkFolder.
If you want to execute other commands between running a command and checking its status, save the status in a variable first: myvar=$?
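Applied to the script above, a minimal sketch (ret is just a hypothetical variable name):

checkFolder "$rootFolder"
ret=$?            # capture the status immediately, before any other command overwrites it
echo "res: $res"
echo "ret: $ret"  # now reports checkFolder's actual exit code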

What the return command changes is the "exit code" of the function.
Add this:
checkFolder "$rootFolder"
echo "the exit code was $?"
And see the effect of your return 0 and return 1.

Bash: determine if variable is empty and if so exit

I am trying to perform this:
I have a test file which contains the md5sums of files located on an SFTP server.
Each variable should contain an md5sum (a string); if a variable is empty, it means the file is not on the SFTP server.
I am trying this code, but it does not work:
if [ -z $I_IDOCMD5 ] || [ -z $I_LEGALMD5 ] || [ -z $I_ZIPMD5 ]
then
    echo "ERROR: At least one file not present or checksum missing, no files will be deleted" >>$IN_LOG
    ERRORS=$ERRORS+2
else
    if [[ $I_IDOCMD5 == $($DIGEST -a md5 $SAPFOLDER/inward/idoc/$I_IDOC) ]]
    then
        echo "rm IDOC/$I_IDOC" >/SAP/commands_sftp.in
    else
        echo "problem with checksum"
        ERRORS=$ERRORS+2
    fi
    if [[ $I_LEGALMD5 == $($DIGEST -a md5 $SAPFOLDER/inward/legal/$I_LEGAL) ]]
    then
        echo "rm LEGAL/$I_LEGAL" >>/SAP/commands_sftp.in
    else
        echo "problem with checksum"
        ERRORS=$ERRORS+2
    fi
    if [[ $I_ZIPMD5 == $($DIGEST -a md5 $SAPFOLDER/inward/zip/$I_ZIP) ]]
    then
        echo "rm ZIP/$I_ZIP" >>/SAP/commands_sftp.in
    else
        echo "problem with checksum"
        ERRORS=$ERRORS+2
    fi
fi
The answer I prefer is the following:
[[ -z "$1" ]] && { echo "Parameter 1 is empty" ; exit 1; }
Note: don't forget the ; inside the {} after each instruction.
One way to check if a variable is empty is:
if [ "$var" = "" ]; then
# $var is empty
fi
Another, shorter alternative is this:
[ "$var" ] || echo "var is empty"
In bash you can use set -u which causes bash to exit on failed parameter expansion.
From the bash man page (section about the set builtin):
-u
Treat unset variables and parameters other than the special parameters "@" and "*" as an error when performing parameter expansion. If expansion is attempted on an unset variable or parameter, the shell prints an error message, and, if not interactive, exits with a non-zero status.
For more information I recommend this article:
http://redsymbol.net/articles/unofficial-bash-strict-mode/
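A minimal sketch of the effect (the variable name is hypothetical):

#!/bin/bash
set -u
echo "$DEFINITELY_UNSET_VAR"   # the script aborts here: "DEFINITELY_UNSET_VAR: unbound variable"
echo "never reached"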
You can use a short form:
FNAME="$I_IDOCMD5"
: ${FNAME:="$I_LEGALMD5"}
: ${FNAME:="$I_ZIPMD5"}
: ${FNAME:?"Usage: $0 filename"}
In this case the script will exit if none of the I_... variables is set, printing an error message prefixed with the shell-script line number that triggered it.
See more on this in abs-guide (search for «Example 10-7»).
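A short sketch of how the :? expansion behaves on its own (assuming FNAME ends up unset or empty):

unset FNAME
: ${FNAME:?"Usage: $0 filename"}
# prints something like: ./script.sh: line 2: FNAME: Usage: ./script.sh filename
# and the script exits with a non-zero status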
First test only this (just to narrow it down):
if [ -z "$I_IDOCMD5" ] || [ -z "$I_LEGALMD5" ] || [ -z "$I_ZIPMD5" ]
then
echo "one is missing"
else
echo "everything OK"
fi
echo "\"$I_IDOCMD5\""
echo "\"$I_LEGALMD5\""
echo "\"$I_ZIPMD5\""
"if the variable is empty it means there is no file on the sftp server"
If there is no file on the sftp server, is the variable then really empty ?
No hidden spaces or anything like that ? or the number zero (which counts as non-empty) ?
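One way to reveal hidden whitespace is printf's %q format, which quotes the value so invisible characters become explicit; a small sketch:

printf '%q\n' "$I_IDOCMD5"
# a truly empty variable prints ''
# "hello " would print hello\  (with the trailing space escaped)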

Bash: if [ "echo test" == "test"]; then echo "echo test outputs test on shell" fi; Possible?

Is it possible with bash to execute a command from shell and if it returns a certain value (or an empty one) execute a command?
if [ "echo test" == "test"]; then
echo "echo test outputs test on shell"
fi
Yes, you can use backticks or $() syntax:
if [ $(echo test) = "test" ] ; then
echo "Got it"
fi
You should replace $(echo test) with
"`echo test`"
or
"$(echo test)"
if the output of the command you run can be empty.
And the POSIX "strings are equal" test operator is =.
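A short sketch of why the quotes matter when the output can be empty:

out=$(true)                   # a command that prints nothing
if [ $out = "test" ]; then    # unquoted: becomes [ = "test" ], test complains "unary operator expected"
    echo "matched"
fi
if [ "$out" = "test" ]; then  # quoted: a valid comparison that is simply false
    echo "matched"
fi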
Something like this?
#!/bin/bash
EXPECTED="hello world"
OUTPUT=$(echo "hello world!!!!")
OK="$?" # exit status of the previous command (echo "hello world!!!!")
if [ "$OK" -eq 0 ]; then
    if [ "$OUTPUT" = "$EXPECTED" ]; then
        echo "success!"
    else
        echo "output was: $OUTPUT, not $EXPECTED"
    fi
else
    echo "return value $OK (not ok)"
fi
You can check the exit code of the previous program like this:
someprogram
if [[ $? -eq 0 ]]; then
    someotherprogram
fi
Note: an exit code of 0 normally means the program finished successfully.
You can do it shorter:
someprogram && someotherprogram
With the above someotherprogram only executes if someprogram finished successfully. Or if you want to test for unsuccessful exit:
someprogram || someotherprogram
HTH
Putting the command between $( and ) or backticks (`) substitutes the command's output into the expression. So basically:
if [ `echo test` == "test" ]; then
    echo "echo test outputs test on shell"
fi
or
if [ $(echo test) == "test" ]; then
    echo "echo test outputs test on shell"
fi
will do the trick.

Bash, always echo in conditional statement

This may turn out to be more of a thought exercise, but I am trying to echo a newline after some command I'm executing within a conditional. For example, I have:
if ssh me@host [ -e $filename ] ; then
    echo "File exists remotely"
else
    echo "Does not exist remotely"
fi
And want to throw in an echo after the ssh command regardless of the outcome. The reason is formatting; that way a newline will exist after the prompt for password for ssh.
First try:
if ssh me@host [ -e $filename ] && echo ; then
because && echo would not change the conditional's outcome, but bash will not execute echo if ssh returns false. Similarly,
if ssh me@host [ -e $filename ] || (echo && false) ; then
does not work, because it short-circuits if ssh returns true.
An answer to the problem would be
ssh me@host [ -e $filename ]
result=$?
echo
if [ $result == 0 ] ; then
but was wondering if there was some similar conditional expression to do this.
Thanks.
While this would work
if foo && echo || ! echo; then
I'd prefer putting the whole thing into a function
function addecho() {
    "$@"           # execute the command passed as arguments (including parameters)
    result=$?      # store its return value
    echo
    return $result # return the stored result
}
if addecho foo; then
What about this?
if ssh me@host [ -e $filename ] && echo || echo; then
I have not thought much about the precedence of && and ||, and surely some parentheses would help. Note that as written the trailing || echo returns 0, so the condition becomes true even when ssh fails; negating it (|| ! echo, as in the answer above) fixes that. Either way you get the echo both when ssh fails and when it succeeds.
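A sketch of the grouped, negated form (braces make the precedence explicit; the host and file names are placeholders):

if { ssh me@host [ -e "$filename" ] && echo; } || ! echo; then
    echo "File exists remotely"       # the bare echo above ran and ssh succeeded
else
    echo "Does not exist remotely"    # ! echo still printed the newline but keeps this branch false
fi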
Add the "echo" before the filename test
if ssh me#host "echo; [ -e $filename ]"; then
echo "File exists remotely"
else
echo "Does not exist remotely"
fi

Checking Bash exit status of several commands efficiently

Is there something similar to pipefail for multiple commands, like a 'try' statement but within bash? I would like to do something like this:
echo "trying stuff"
try {
    command1
    command2
    command3
}
And at any point, if any command fails, drop out and echo out the error of that command. I don't want to have to do something like:
command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi

command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi
And so on... or anything like:
pipefail -o
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3
Because the arguments of each command I believe (correct me if I'm wrong) will interfere with each other. These two methods seem horribly long-winded and nasty to me so I'm here appealing for a more efficient method.
You can write a function that launches and tests the command for you. Assume command1 and command2 are environment variables that have been set to a command.
function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}
mytest "$command1"
mytest "$command2"
What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do
set -e # DON'T do this. See commentary below.
at the start of the script (but note warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:
#!/bin/sh
set -e # Use caution. eg, don't do this
command1
command2
command3
and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)
As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.
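For instance, a well-behaved command might be structured like this (process_data is a hypothetical helper):

command2() {
    # report failure on stderr and propagate a non-zero status
    process_data "$1" || { echo "command2: could not process $1" >&2; return 1; }
}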
However, I no longer consider this to be a good practice. set -e has changed its semantics with different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: set -e; foo() { false; echo should not print; } ; foo && echo ok The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:
#!/bin/sh
command1 || exit
command2 || exit
command3 || exit
or
#!/bin/sh
command1 && command2 && command3
I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.
You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.
Usage
step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm
next
step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger
next
step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit
next
Output
Installing XFS filesystem tools: [ OK ]
Configuring udev: [FAILED]
Adding rc.postsysinit hook: [ OK ]
Code
#!/bin/bash
. /etc/init.d/functions
# Use step(), try(), and next() to perform a series of commands and print
# [ OK ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
# step "Remounting / and /boot as read-write:"
# try mount -o remount,rw /
# try mount -o remount,rw /boot
# next
step() {
    echo -n "$@"

    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=

    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && {        shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?

    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$

        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}
            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi

    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]] && echo_success || echo_failure
    echo

    return $STEP_OK
}
For what it's worth, a shorter way to write code to check each command for success is:
command1 || echo "command1 borked it"
command2 || echo "command2 borked it"
It's still tedious but at least it's readable.
An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:
command1 &&
command2 &&
command3
This isn't the syntax you asked for in the question, but it's a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don't have to do so manually (maybe with a -q flag to silence errors when you don't want them). If you have the ability to modify these commands, I'd edit them to yell on failure, rather than wrap them in something else that does so.
Notice also that you don't need to do:
command1
if [ $? -ne 0 ]; then
You can simply say:
if ! command1; then
And when you do need to check return codes, use an arithmetic context instead of [ ... -ne ... ]:
ret=$?
# do something
if (( ret != 0 )); then
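Putting the two together, a minimal sketch (the command names are placeholders):

if ! command1; then
    echo "command1 failed" >&2
    exit 1
fi

command2
ret=$?
# ... other work ...
if (( ret != 0 )); then
    echo "command2 failed with status $ret" >&2
fi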
Instead of creating runner functions or using set -e, use a trap:
trap 'echo "error"; do_cleanup failed; exit' ERR
trap 'echo "received signal to stop"; do_cleanup interrupted; exit' SIGQUIT SIGTERM SIGINT
do_cleanup () { rm tempfile; echo "$1 $(date)" >> script_log; }
command1
command2
command3
The trap even has access to the line number and the command line of the command that triggered it. The variables are $BASH_LINENO and $BASH_COMMAND.
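A sketch of a trap that uses them (following the variables named above; cleanup is omitted):

#!/bin/bash
# $BASH_COMMAND holds the command that triggered the trap,
# $BASH_LINENO the line it was on
trap 'echo "error: \"$BASH_COMMAND\" failed around line $BASH_LINENO" >&2; exit 1' ERR

echo "before"
false            # triggers the ERR trap
echo "never reached"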
Personally I much prefer to use a lightweight approach, as seen here;
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
asuser() { sudo su - "$1" -c "${*:2}"; }
Example usage:
try apt-fast upgrade -y
try asuser vagrant "echo 'uname -a' >> ~/.profile"
I've developed an almost flawless try & catch implementation in bash that allows you to write code like:
try
    echo 'Hello'
    false
    echo 'This will not be displayed'
catch
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
You can even nest the try-catch blocks inside themselves!
try {
    echo 'Hello'

    try {
        echo 'Nested Hello'
        false
        echo 'This will not execute'
    } catch {
        echo "Nested Caught (# $__EXCEPTION_LINE__)"
    }

    false
    echo 'This will not execute too'
} catch {
    echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
}
The code is a part of my bash boilerplate/framework. It further extends the idea of try & catch with things like error handling with backtrace and exceptions (plus some other nice features).
Here's the code that's responsible just for try & catch:
set -o pipefail
shopt -s expand_aliases
declare -ig __oo__insideTryCatch=0

# if try-catch is nested, then set +e before so the parent handler doesn't catch us
alias try="[[ \$__oo__insideTryCatch -gt 0 ]] && set +e;
           __oo__insideTryCatch+=1; ( set -e;
           trap \"Exception.Capture \${LINENO}; \" ERR;"
alias catch=" ); Exception.Extract \$? || "

Exception.Capture() {
    local script="${BASH_SOURCE[1]#./}"

    if [[ ! -f /tmp/stored_exception_source ]]; then
        echo "$script" > /tmp/stored_exception_source
    fi
    if [[ ! -f /tmp/stored_exception_line ]]; then
        echo "$1" > /tmp/stored_exception_line
    fi
    return 0
}

Exception.Extract() {
    if [[ $__oo__insideTryCatch -gt 1 ]]
    then
        set -e
    fi

    __oo__insideTryCatch+=-1

    __EXCEPTION_CATCH__=( $(Exception.GetLastException) )

    local retVal=$1
    if [[ $retVal -gt 0 ]]
    then
        # BACKWARD-COMPATIBLE WAY:
        # export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-1)]}"
        # export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-2)]}"
        export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[-1]}"
        export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[-2]}"
        export __EXCEPTION__="${__EXCEPTION_CATCH__[@]:0:(${#__EXCEPTION_CATCH__[@]} - 2)}"
        return 1 # so that we may continue with a "catch"
    fi
}

Exception.GetLastException() {
    if [[ -f /tmp/stored_exception ]] && [[ -f /tmp/stored_exception_line ]] && [[ -f /tmp/stored_exception_source ]]
    then
        cat /tmp/stored_exception
        cat /tmp/stored_exception_line
        cat /tmp/stored_exception_source
    else
        echo -e " \n${BASH_LINENO[1]}\n${BASH_SOURCE[2]#./}"
    fi

    rm -f /tmp/stored_exception /tmp/stored_exception_line /tmp/stored_exception_source
    return 0
}
Feel free to use, fork and contribute - it's on GitHub.
run() {
    "$@"
    local status=$?   # capture the status before the test below overwrites $?
    if [ $status -ne 0 ]
    then
        echo "$* failed with exit code $status"
        return 1
    else
        return 0
    fi
}

run command1 && run command2 && run command3
Sorry that I cannot add a comment to the first answer, but you should use a new instance to execute the command: cmd_output=$($@)

#!/bin/bash
function check_exit {
    cmd_output=$($@)
    local status=$?
    echo $status
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}

function run_command() {
    exit 1
}

check_exit run_command
For fish shell users who stumble on this thread.
Let foo be a function that does not "return" (echo) a value, but it sets the exit code as usual.
To avoid checking $status after calling the function, you can do:
foo; and echo success; or echo failure
And if it's too long to fit on one line:
foo; and begin
    echo success
end; or begin
    echo failure
end
You can use @john-kugelman's awesome solution found above on non-Red Hat systems by commenting out this line in his code:
. /etc/init.d/functions
Then paste the code below at the end. Full disclosure: this is just a direct copy & paste of the relevant bits of the above-mentioned file, taken from CentOS 7.
Tested on macOS and Ubuntu 18.04.
BOOTUP=color
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \\033[0;39m"

echo_success() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_SUCCESS
    echo -n $" OK "
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 0
}

echo_failure() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_FAILURE
    echo -n $"FAILED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_passed() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"PASSED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_warning() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"WARNING"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}
When I use ssh I need to distinguish between problems caused by connection issues and error codes of the remote command in errexit (set -e) mode. I use the following function:
# prepare environment on calling site:
rssh="ssh -o ConnectionTimeout=5 -l root $remote_ip"
function exit255 {
    local flags=$-
    set +e
    "$@"
    local status=$?
    set -$flags
    if [[ $status == 255 ]]
    then
        exit 255
    else
        return $status
    fi
}
export -f exit255
# callee:
set -e
set -o pipefail
[[ $rssh ]]
[[ $remote_ip ]]
[[ $( type -t exit255 ) == "function" ]]
rjournaldir="/var/log/journal"
if exit255 $rssh "[[ ! -d '$rjournaldir/' ]]"
then
    $rssh "mkdir '$rjournaldir/'"
fi
rconf="/etc/systemd/journald.conf"
if [[ $( $rssh "grep '#Storage=auto' '$rconf'" ) ]]
then
    $rssh "sed -i 's/#Storage=auto/Storage=persistent/' '$rconf'"
fi
$rssh systemctl reenable systemd-journald.service
$rssh systemctl is-enabled systemd-journald.service
$rssh systemctl restart systemd-journald.service
sleep 1
$rssh systemctl status systemd-journald.service
$rssh systemctl is-active systemd-journald.service
Checking status in a functional manner
assert_exit_status() {
    lambda() {
        local val_fd=$(echo $@ | tr -d ' ' | cut -d':' -f2)
        local arg=$1
        shift
        shift
        local cmd=$(echo $@ | xargs -E ':')
        local val=$(cat $val_fd)
        eval $arg=$val
        eval $cmd
    }
    local lambda=$1
    shift

    eval $@
    local ret=$?
    $lambda : <(echo $ret)
}
Usage:
assert_exit_status 'lambda status -> [[ $status -ne 0 ]] && echo Status is $status.' lls
Output
Status is 127
Suppose
alias command1='grep a <<<abc'
alias command2='grep x <<<abc'
alias command3='grep c <<<abc'
either
{ command1 1>/dev/null || { echo "cmd1 fail"; /bin/false; } } && echo "cmd1 succeed" &&
{ command2 1>/dev/null || { echo "cmd2 fail"; /bin/false; } } && echo "cmd2 succeed" &&
{ command3 1>/dev/null || { echo "cmd3 fail"; /bin/false; } } && echo "cmd3 succeed"
or
{ { command1 1>/dev/null && echo "cmd1 succeed"; } || { echo "cmd1 fail"; /bin/false; } } &&
{ { command2 1>/dev/null && echo "cmd2 succeed"; } || { echo "cmd2 fail"; /bin/false; } } &&
{ { command3 1>/dev/null && echo "cmd3 succeed"; } || { echo "cmd3 fail"; /bin/false; } }
yields
cmd1 succeed
cmd2 fail
Tedious it is. But the readability isn't bad.

shell: return values of test on file/directory

I'm not able to check the return values of the function test; man test didn't help me much.
#!/bin/bash
test=$(test -d $1)
if [ $test -eq 1 ]
then
    echo "the file exists and is a directory"
elif [ $test -eq 0 ]
    echo "file does not exist or is not a directory"
else
    echo "error"
fi
Try this instead:
if test -d "$1"
then
    echo 'the file exists and is a directory'
else
    echo "the file doesn't exist or is not a directory"
fi
Every time you use test on the return code of test, God kills a kitten.
if test -d "$1"
or
if [ -d "$1" ]
$(test -d $1) is going to be substituted with what test prints (which is nothing), not its return code. If you want to check the return code, use $?, e.g.
test -d $1
test=$?
if [ $test -eq 1 ]
...
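A complete sketch of that approach (remember that test exits with 0 on success and non-zero on failure):

test -d "$1"
status=$?
if [ $status -eq 0 ]; then
    echo "the file exists and is a directory"
else
    echo "the file does not exist or is not a directory"
fi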
