Bash function not executing the input command

In a bash file s.sh, I have an Executor function to which I pass the commands to be executed. Whenever a command does not work as expected, the function prints the command and exits.
Executor()
{
    if ! $*
    then
        echo "$*"
        exit 2
    fi
}
Now I am invoking this function:
Executor clangPath="Hello" make
(This is used to set the clangPath variable to "Hello" in the makefile.)
This causes an error:
./s.sh: line 5: clangPath=Hello: command not found
[./s.sh] Error: clangPath=Hello make
However, executing the same command directly works fine:
if ! clangPath="Hello" make
then
echo "HelloWorld!"
fi
After looking at the error, I thought there might be a mistake with the string quoting, so I tried:
Executor clangPath='"Hello"' make
Even this resulted in an error:
./s.sh: line 5: clangPath="Hello": command not found
[./s.sh] Error: clangPath="Hello" make
What could be the reason for the error?

If the purpose of the function is to execute some Bash expression and print an error message if the expression fails (returns a non-zero status), then there is a way to implement this via eval:
#!/bin/bash -
function Executor()
{
    eval "$@"
    if [ $? -ne 0 ]
    then
        echo >&2 "Failed to execute command: $@"
        exit 2
    fi
}
The $? variable holds the exit status of the previously executed command, so we check whether it is non-zero.
Also note how we redirect the error message to the standard error descriptor.
Usage:
Executor ls -lh /tmp/unknown-something
ls: cannot access /tmp/unknown-something: No such file or directory
Failed to execute command: ls -lh /tmp/unknown-something
Executor ls -lh /tmp
# some file listing here...
The $@ variable is more appropriate here than $*, as eval interprets things itself. See $* vs $@.
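To see the difference eval has to cope with, a quick illustrative snippet:
set -- 'two words' single
printf '[%s]\n' "$@"   # [two words] [single]  -- each argument kept separate
printf '[%s]\n' "$*"   # [two words single]    -- joined into a single string
An eval-free variant is also possible. The original failure happens because clangPath="Hello" is no longer recognized as a variable assignment once it emerges from parameter expansion, but prefixing the call with env(1) makes env perform the assignment instead of the shell. A minimal sketch, assuming the makefile reads clangPath from the environment:
Executor()
{
    if ! "$@"
    then
        echo >&2 "Failed to execute command: $*"
        exit 2
    fi
}
# env sets clangPath itself, so the shell never tries to run "clangPath=Hello" as a command:
Executor env clangPath=Hello make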

Related

Linux Bash Shell Custom Error Message

I'm trying to spew a custom error message using the following one-liner bash shell command. I'm not getting the "errorMessage" var set. However, if I run the command individually, I'm able to capture the error message into the $errorMessage variable. What am I missing?
Command:
[ "errorMessage=$(mkdir -p /path/to/restricted/folder 2>&1)" ] && echo "Something Went Wrong; Error is: ${errorMessage}"
Trials/Output:
$ [ "errorMessage=$(mkdir -p /path/to/restricted/folder 2>&1)" ] && echo "Something Went Wrong; Error is: ${errorMessage}"
Something Went Wrong; Error is:
$ echo $errorMessage
$ errorMessage=$(mkdir -p /path/to/restricted/folder 2>&1)
$ echo $errorMessage
mkdir: cannot create directory `/path': Permission denied
[ is the command named test; when not given an argument specifying an individual test to run, the default is -n (testing whether a string is non-empty). This code is testing whether the string "errorMessage=" (possibly with a suffix from the stderr of mkdir) is empty or not; since it contains a fixed prefix, it will never be empty, so the echo always runs, whether any error was emitted or not.
If you want to actually assign a value to the variable, that would instead look like:
errorMessage=$(mkdir -p /path/to/restricted/folder 2>&1) \
|| echo "Something Went Wrong; Error is: ${errorMessage}"
This is checking the exit status of mkdir, and running the echo should that be nonzero.
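If the numeric exit status is also of interest, a minimal sketch along the same lines (paths as in the question):
errorMessage=$(mkdir -p /path/to/restricted/folder 2>&1)
status=$?   # exit status of mkdir, captured before any other command overwrites it
if [ "$status" -ne 0 ]; then
    echo "Something Went Wrong (exit ${status}); Error is: ${errorMessage}"
fi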

use of ? in sh script

When I went through some shell scripts, I came across the following lines of code:
FILENAME=/home/user/test.tar.gz
tar -zxvf $FILENAME
RES=$?FILENAME
if [ $RES -eq 0 ]; then
    echo "TAR extract success"
fi
I want to know:
What is the use of the '?' mark in front of the variable (RES=$?FILENAME)?
How do I check whether tar extracted successfully?
In standard (POSIX-ish) shells, $? is a special parameter. Even Bash's parameter expansion doesn't document an alternative meaning.
In the context, $?FILENAME might expand to 0FILENAME if the previous command succeeded, and perhaps 1FILENAME if it failed.
Since a numeric comparison is requested (-eq), the value 0FILENAME might convert to 0 and then compare OK. However, on my system (Mac OS X 10.10.5, Bash 3.2.57), attempting:
if [ 0FILE -eq 0 ]; then echo equal; fi
yields the error -bash: [: 0FILE: integer expression expected.
So, adding the FILENAME after the $? is unorthodox at best (or confusing, or even, ultimately, wrong).
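The raw expansion is easy to inspect without the -eq complication; an illustrative check:
true;  echo "$?FILENAME"   # prints 0FILENAME
false; echo "$?FILENAME"   # prints 1FILENAME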
By default, the exit status of a function is the exit status returned by the last command in the function. After the function executes, you use the standard $? variable to determine the exit status of the function:
#!/bin/bash
# testing the exit status of a function
my_function() {
    echo "trying to display a non-existent file"
    ls -l no_file
}
echo "calling the function: "
my_function
echo "The exit status is: $?"

$ ./test4
calling the function: 
trying to display a non-existent file
ls: no_file: No such file or directory
The exit status is: 1
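A function can also set its exit status explicitly with return instead of relying on its last command; a minimal sketch:
my_function() {
    [ -f no_file ] || return 3   # explicit non-zero status
    cat no_file
}
my_function
echo "The exit status is: $?"   # prints 3 when no_file is missing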
To check whether tar executed successfully, use:
tar xvf "$tar" || exit 1

Shell Script, When executing commands do something if an error is returned

I am trying to automate a lot of our pre-filesystem/database tasks, and one thing that bugs me is not knowing whether a command I issue REALLY happened. I'd like a way to watch for a return code from executing that command, so that if the rm fails because of a permission denied or any other error, I can issue an exit.
If I have a shell script as such:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
pseudo-code could be something similar to:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
if [returncode == 'error']
exit;
fi
How could I, for example, execute that rm command and exit if the file is NOT removed? I will be adapting the answer to other types of commands, such as sed -i -e, cp, and umount.
edit:
Let's suppose I have a write-protected file such as:
$ ls -lrt | grep protectedfile
-rwx------ 1 orasmq sapsys 0 Nov 14 12:39 protectedfile
Running rm on it generates the following error because, obviously, the permissions don't allow it:
rm: remove write-protected regular empty file `/tmp/protectedfile'? y
rm: cannot remove `/tmp/protectedfile': Operation not permitted
Here is what I worked out from your answers. Is this the right way to do something like this? Also, how could I dump the error rm: cannot remove '/tmp/protectedfile': Operation not permitted to a logfile?
#!/bin/bash
function log(){
    # some logging code: simply writes to a file and then echoes its input
    :   # no-op placeholder so the function body is valid
}
function quit(){
    read -p "Failed to remove protected file, permission denied?"
    log "Some log message, and somehow append the returned error message from rm"
    exit 1
}
rm /tmp/protectedfile || quit
If I understand correctly what you want, just use this:
rm blah/blah/blah || exit 1
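To also capture rm's own error text into a logfile, as asked in the edit, a minimal sketch (the logfile path is illustrative; -f avoids the interactive prompt):
logfile=/tmp/cleanup.log
if ! rm -f /tmp/protectedfile 2>>"$logfile"; then
    echo "rm failed; details appended to $logfile" >&2
    exit 1
fi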
A possibility: a 'wrapper' so that you can retrieve the original command's stderr (and stdout?), and maybe also retry it a few times before giving up, etc.
Here is a version that redirects both stdout and stderr.
Of course you could choose not to redirect stdout at all (and usually you shouldn't, I guess, making the "try_to" function a bit more useful in the rest of the script!)
export timescalled=0   # external to the function itself
try_to () {
    let "timescalled += 1"   # let "..." allows white space and simple arithmetic
    try_to_out="/tmp/try_to_${$}.${timescalled}"
    # tries to avoid collisions within the same script and/or if multiple scripts run in parallel
    zecmd="$1" ; shift ;
    "$zecmd" "$@" 2>"${try_to_out}.ERR" >"${try_to_out}.OUT"
    try_to_ret=$?
    # or: "$zecmd" "$@" >"${try_to_out}.ERR" 2>&1 to have both in the same file
    if [ "$try_to_ret" -ne "0" ]
    then
        log "error $try_to_ret while trying to: '${zecmd} $*' ..." "${try_to_out}.ERR"
        # provides a custom error message + the name of the file holding the command's stderr
        rm -f "${try_to_out}.ERR" "${try_to_out}.OUT"   # before we exit, better delete these
        exit 1   # or exit $try_to_ret ?
    fi
    rm -f "${try_to_out}.ERR" "${try_to_out}.OUT"
}
It's ugly, but it could help ^^
Note that there are many things that could go wrong: 'timescalled' could become too high, the tmp file(s) might not be writable, zecmd could contain special characters, etc.
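Usage would then look like, e.g. (assuming a log function such as the one sketched in the question is defined):
try_to rm -f /tmp/protectedfile
try_to cp /etc/hosts /tmp/hosts.bak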
Usually one would use a pattern like this:
doSomething.sh
if [ $? -ne 0 ]
then
echo "oops, i did it again"
exit 1;
fi
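A related shortcut is bash's set -e option, which aborts the script as soon as any command returns a non-zero status; a sketch:
#!/bin/bash
set -e   # exit immediately if any command fails
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl
# ...later sed/cp/umount steps run only if the rm succeeded...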
BTW, searching for 'bash exit status' will already give you a lot of good results.

How to get bash to ignore file-not-founds

In a (ba)sh script, how do I ignore file-not-found errors?
I am writing a script that reads a (partial) filename from stdin, using:
read file; $FILEDIR/$file.sh
I need to give the script functionality to reject filenames that don't exist.
e.g.
$UTILDIR does NOT contain script.sh
User types script
Script tries to access $UTILDIR/script.sh and fails with
./run.sh: line 32: /utiltest/script.sh: No such file or directory
How do I make the script print its own error but continue, without printing the 'normal' shell error?
You can test whether the file exists using the code in @gogaman's answer, but you are probably more interested in knowing whether the file is present and executable. For that, you should use the -x test instead of -e:
if [ -x "$FILEDIR/$file.sh" ]; then
echo file exists
else
echo file does not exist or is not executable
fi
if [ -e "$FILEDIR/$file.sh" ]; then
    echo file exists
else
    echo file does not exist
fi
Here we can define a shell procedure that runs only if the file exists:
run-if-present () {
    echo "$1 is really there"
}
[ -e "$thefile" ] && run-if-present "$thefile"
Depending on what you do with the script, the command will fail with a specific exit code. If you are executing the script, the exit code can be 126 (found but not executable) or 127 (file not found).
command
if (($? == 126 || $? == 127))
then
    echo 'Command not found or not executable' > /dev/stderr
fi
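Putting it together with the read from the question, a sketch (variable names follow the question):
read -r file
if [ -x "$FILEDIR/$file.sh" ]; then
    "$FILEDIR/$file.sh"
else
    echo "No such script: $FILEDIR/$file.sh" >&2   # our own message; the shell's error never appears
fi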

Error when passing argument to a command in a bash script

I would like to execute a command like this:
#!/bin/sh
`which rvmsudo` `which program` argument
but I get this error:
/usr/bin/env: argument: No such file or directory
Make sure all of the which statements return valid paths:
#!/bin/bash
RVMSUDO=`which rvmsudo`
test -z "$RVMSUDO" && exit 1
PROGRAM=`which program`
test -z "$PROGRAM" && exit 2
$RVMSUDO $PROGRAM argument
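A variant using the POSIX builtin command -v instead of which (a sketch; command -v returns non-zero when the name is not found, so the assignments double as existence checks):
#!/bin/bash
RVMSUDO=$(command -v rvmsudo) || exit 1
PROGRAM=$(command -v program) || exit 2
"$RVMSUDO" "$PROGRAM" argument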
