bash: passing entire command (with arguments) to a function

I am essentially trying to implement a function which asserts the failure (non-zero exit code) of another command, and prints a message when it fails.
Here is my function:
function assert_fail () {
    COMMAND=$@
    if [ `$COMMAND; echo $?` -ne 0 ]; then
        echo "$COMMAND failed as expected."
    else
        echo "$COMMAND didn't fail"
    fi
}
# This works as expected
assert_fail rm nonexistent
# This works too
assert_fail rm nonexistent nonexistent2
# This one doesn't work
assert_fail rm -f nonexistent
As soon as I add options to the command, it doesn't work. Here is the output of the above:
rm: cannot remove `nonexistent': No such file or directory
rm nonexistent failed as expected.
rm: cannot remove `nonexistent': No such file or directory
rm: cannot remove `nonexistent2': No such file or directory
rm nonexistent nonexistent2 failed as expected.
rm -f nonexistent didn't fail
I have tried putting double quotes around the commands, to no avail. I would expect the third invocation in the above to produce similar output to the other two.
I appreciate any/all help!

@rici correctly pointed out the issue you're seeing, but there are a couple of real problems with your wrapper function.
First, it doesn't correctly preserve spaces (and some other funny characters) in arguments. COMMAND=$@ (or COMMAND="$@") merges all of the arguments into a single string, losing the distinction between spaces between arguments and spaces within arguments. To keep them straight, either use "$@" directly without storing it in a variable, or store it as an array (COMMAND=("$@"), then execute it as "${COMMAND[@]}").
Second, if the command prints anything to stdout, it'll wreak havoc with your exit status check; just test it directly, as @chepner said. Here's my suggested rewrite:
function assert_fail () {
    if "$@"; then
        echo "$* didn't fail"
    else
        echo "$* failed as expected."
    fi
}
Note that the way I did the echo commands does lose the distinction of spaces within arguments. If that's a problem, replace the echo commands with this:
printf "%q " "$@"
echo "didn't fail"
and
printf "%q " "$@"
echo "failed as expected."

rm -f never fails on non-existent files. It has nothing to do with your wrapper. See man rm:
OPTIONS
-f, --force
ignore nonexistent files, never prompt
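You can verify this at a prompt (output from GNU rm; the exact wording may differ):
$ rm -f nonexistent; echo $?
0
$ rm nonexistent; echo $?
rm: cannot remove 'nonexistent': No such file or directory
1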

Related

How can I use getopts in a script that appends lines from files in a separate directory to a new file?

I am trying to write a bash script that takes in a directory, reads each file in the directory, and then appends the first line of each file in that directory to a new file. When I hard-code the variables in my script, it works fine.
This works:
#!/bin/bash
rm /local/SomePath/multigene.firstline.btab
touch /local/SomePath/multigene.firstline.btab
btabdir=/local/SomePath/test/*
outfile=/local/SomePath/multigene.firstline.btab
for f in $btabdir
do
    head -1 $f >> $outfile
done
This does not work:
#!/bin/bash
while getopts ":d:o:" opt; do
    case ${opt} in
        d) btabdir=$OPTARG;;
        o) outfile=$OPTARG;;
    esac
done
rm $outfile
touch $outfile
for f in $btabdir
do
    head -1 $f >> $outfile
done
Here is how I call the script:
bash /local/SomePath/Scripts/btab.besthits.wBp-q_wBm-r.sh -d /local/SomePath/test/* -o /local/SomePath/out.test/multigene.firstline.btab
And here is what I get when I run it:
rm: missing operand
Try 'rm --help' for more information.
touch: missing file operand
Try 'touch --help' for more information.
/local/SomePath/Scripts/btab.besthits.wBp-q_wBm-r.sh: line 23: $outfile: ambiguous redirect
Any suggestions? I'd like to be able to use getopts so I can make the script more generic. Thanks!
You have to pay extra attention to quoting and globbing when writing bash scripts.
When you call the script with a glob (* here) it gets expanded and split into words by your shell. This happens before your script even gets executed.
If you, for example, run cat *.txt, cat will get all the .txt files in the directory as its arguments. It will be the same as calling cat afile.txt nextfile.txt (and so on). cat will never see the asterisk.
In your script it means that the input -d /local/SomePath/test/* gets expanded to something like -d /local/SomePath/test/someFile /local/SomePath/test/someOtherFile /local/SomePath/test/someThirdFile.
Subsequently getopts takes only the first file after -d as the value for $btabdir, and the -o option doesn't get handled in the case switch.
I suggest you start by quoting every variable, preferably in the "${name}" style, and only invoke the script with quoted input.
It might also be better to send in a directory path, test that it is a directory (test -d), and change your for loop to for f in "${btabdir}"/*, as in the sketch below.
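For example, a minimal corrected version of the script along those lines, assuming -d is now given a directory rather than a glob:
#!/bin/bash
while getopts ":d:o:" opt; do
    case ${opt} in
        d) btabdir="${OPTARG}";;
        o) outfile="${OPTARG}";;
    esac
done
rm -f "${outfile}"    # -f: don't complain if the file doesn't exist yet
touch "${outfile}"
for f in "${btabdir}"/*
do
    head -1 "$f" >> "${outfile}"
done
Called as bash btab.besthits.wBp-q_wBm-r.sh -d /local/SomePath/test -o /local/SomePath/out.test/multigene.firstline.btab (note: no * after the directory).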
This also works:
head -n1 -q /local/SomePath/test/* >> /local/SomePath/out.test/multigene.firstline.btab
I think the right answer here is "don't do it that way." :-)
The reason your current script isn't working may be that the wildcard is expanded by your interactive shell, not by your script. Try running your command with an echo at the beginning of the line for a hint at what's really happening. Once getopts sees the second of the matched files in the glob, it stops processing options, so -o never gets read, and $outfile remains unset. And since you don't quote your variable in rm $outfile, it's as if you're running rm with no arguments at all. Test the difference in your shell between rm alone and rm "".
Also, what happens to your for loop if there's a space in a filename? Since you have bash, you have arrays. And arrays are much better for processing lists of files.
Perhaps use something like this instead:
#!/bin/bash
# initialize an array
files=()
while getopts :d:o: opt; do
    case "$opt" in
        d)
            if [[ ! -d "$OPTARG" ]]; then
                printf 'ERROR: not a directory: %s\n' "$OPTARG" >&2
                exit 65
            fi
            # add to the array
            files+=( "$OPTARG"/* )
            ;;
        o) outfile="$OPTARG" ;;
        *)
            printf 'ERROR: unknown option: %s\n' "$opt" >&2
            exit 64
            ;;
    esac
done
# group the two commands so a failure of either one triggers the error branch
if ! { rm -f "$outfile" && touch "$outfile"; }; then
    printf 'ERROR: cannot create %s\n' "$outfile" >&2
    exit 73
fi
for f in "${files[@]}"; do
    read -r < "$f"
    printf '%s\n' "$REPLY"
done > "$outfile"
Here are some highlights of the changes....
We're using arrays, of course. The array "${files[@]}" will contain one file per record, without relying on whitespace, so with proper quoting you'll avoid problems with special characters in filenames.
We test for more error conditions, and actually show errors and exit if we see them. (The exit values are sysexits.)
Instead of using head, we use read and a single redirect to $outfile. This saves multiple forks to an external program, and multiple fopen() calls to your output file.
Note that the argument to -d should be a directory, not a glob. And you can specify options multiple times. Multiple -d options will be added together, but only the last -o option will be used.
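For instance, a couple of hypothetical invocations (the script name firstlines.sh is made up):
bash firstlines.sh -d /local/SomePath/test -o /local/SomePath/out.test/multigene.firstline.btab
bash firstlines.sh -d /local/SomePath/test -d /local/SomePath/test2 -o combined.firstline.btab
The second call gathers files from both directories into the array before the output file is written.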

Bash unintentionally splitting command

Currently I have an rsync command which fails once every ~15 minutes due to poor network conditions. I have written a script to rerun the rsync; however, the script does not work as intended, because bash is unintentionally breaking up the command I passed in:
$ cat exit-trap.sh
#!/bin/bash
count=1
while :
do
    echo ==============
    echo Run \#$count
    $@
    if [[ $? -eq 0 ]] ; then
        exit
    fi
    echo Run \#$count failed
    let count++
    sleep 15
done
$ ./exit-trap.sh rsync --output-format="# %i %n%L" source::dir target
==============
Run #1
Unexpected remote arg: source::dir
rsync error: syntax or usage error (code 1) at main.c(1348) [sender=3.1.1]
After poking around for a while, I guess what rsync received in argv is ["rsync", "--output-format=#", "%i", "%n%L", "source::dir", "target"]. The output format is apparently split into individual pieces, causing a syntax error. Is there a way to fix this issue?
PS: So far I've also tried sh -c $@, sh -c \"$@\", and
./exit-trap.sh rsync --output-format=\"# %i %n%L\" source::dir target
./exit-trap.sh rsync --output-format=\\\"# %i %n%L\\\" source::dir target
./exit-trap.sh "rsync --output-format=\"# %i %n%L\" source::dir target"
None of these works.
You need to use "$@" as described here https://www.gnu.org/software/bash/manual/html_node/Special-Parameters.html#Special-Parameters:
($@) Expands to the positional parameters, starting from one. When the expansion occurs within double quotes, each parameter expands to a separate word. That is, "$@" is equivalent to "$1" "$2" ….
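Applied to exit-trap.sh, that means quoting the expansion in the loop. A sketch (testing the command directly with if "$@" instead of inspecting $? afterwards is my variation):
#!/bin/bash
count=1
while :
do
    echo ==============
    echo Run \#$count
    if "$@"; then    # quoted: each original argument stays one word
        exit
    fi
    echo Run \#$count failed
    let count++
    sleep 15
done
With this change, rsync receives --output-format=# %i %n%L as a single argument, exactly as it was passed on the command line.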

Shellscript - Show error for specific argument when using mv

I'm currently writing code for a script to move files to a "dustbin".
As it stands - the code works fine but I want it to produce a message when a file has been moved correctly and a message when a specific file has failed to move/doesn't exist.
My code will accept multiple filenames for input, e.g.
# del test1.txt *.html testing.doc
# Successfully moved to Dustbin
However, if only one of these does not exist, it still produces an error message for the whole command.
How do I do this but still allow it to accept arguments as seen in the above example?
My code is as follows:
#!/bin/sh
mv -u "$#" /root/Dustbin 2>/dev/null
# END OF SCRIPT
Sorry for what is probably an obvious question! I'm completely new to shell scripting!
Many thanks
You would have to iterate over the arguments and try to move each one:
for path in "$@"; do
    if mv -u "$path" /root/Dustbin 2>/dev/null; then
        echo "Success"
    else
        printf 'Failed to move %s\n' "$path"
    fi
done
As a shorthand for iterating over the arguments you can omit the in "$@" part, like
for path; do
    if mv -u "$path" /root/Dustbin 2>/dev/null; then
        echo "Success"
    else
        printf 'Failed to move %s\n' "$path"
    fi
done
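If you also want the script's overall exit status to reflect whether every move succeeded, one possible variation (the failed flag is my addition, not part of the original answer):
#!/bin/sh
failed=0
for path; do
    if mv -u "$path" /root/Dustbin 2>/dev/null; then
        echo "Success"
    else
        printf 'Failed to move %s\n' "$path"
        failed=1    # remember that at least one move failed
    fi
done
exit "$failed"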

rm -f won't handle all arguments in bash script

Got a bit of a peculiar issue in bash. I want to author a script that takes a prefix of a file name and deletes all files that begin with that prefix and end in some given suffixes. To do this, I have the following line of code:
#!/bin/bash
if [ $# -lt 1 ]; then
    echo "Please provide an argument";
    exit 1
fi
# Ending dot already included in $1
rm -f $1"aux" $1"bbl" $1"blg" $1"synctex.gz" $1"log"
# But even if you call it wrong...
rm -f $1".aux" $1".bbl" $1".blg" $1".synctex.gz" $1 ".log"
exit 0
Unfortunately, when I call my script (which is called cleanlatex), as intended:
cmd$ cleanlatex lpa_aaai.
only the .aux file is deleted. Clearly rm -f doesn't expand its application to all arguments, which is what would've been done if I explicitly ran rm -f lpa_aaai.aux lpa_aaai.bbl .... What am I doing wrong here?
Edit: To answer @Etan's question, this is what I see when I add those commands:
+ '[' 1 -lt 1 ']'
+ rm -v lpa_aaai.aux lpa_aaai.bbl lpa_aaai.blg lpa_aaai.synctex.gz lpa_aaai.log
removed ‘lpa_aaai.aux’
removed ‘lpa_aaai.bbl’
removed ‘lpa_aaai.blg’
removed ‘lpa_aaai.synctex.gz’
removed ‘lpa_aaai.log’
+ rm -v lpa_aaai..aux lpa_aaai..bbl lpa_aaai..blg lpa_aaai..synctex.gz lpa_aaai. .log
rm: cannot remove ‘lpa_aaai..aux’: No such file or directory
rm: cannot remove ‘lpa_aaai..bbl’: No such file or directory
rm: cannot remove ‘lpa_aaai..blg’: No such file or directory
rm: cannot remove ‘lpa_aaai..synctex.gz’: No such file or directory
rm: cannot remove ‘lpa_aaai.’: No such file or directory
rm: cannot remove ‘.log’: No such file or directory
The second set, consisting of the cannot remove messages, does not concern me: I only have those removals as a fail-safe anyway. This did exactly what I needed. Thank you.
It's not clear what your problem actually is; however, you can reduce your script to two lines of bash:
: ${1?Please provide an argument}
rm -f "$1".{aux,bbl,log,synctex.gz,log}
The first line has the same effect as your if statement, using standard POSIX parameter expansion.
The second line properly quotes $1 and uses brace expansion to produce the sequence of file names with the desired list of suffixes. It makes the simplifying assumption that the user typed the basename of the files without the trailing period: who would type foo. when foo would suffice? You're already making the assumption that there are not files named (for instance) foo.aux and foo..aux, so you might as well make an assumption that makes for less work.
I removed exit 0 because either rm succeeds and your script will exit with status 0 anyway, or rm fails, in which case you shouldn't be exiting with 0 status, but whatever status rm exits with.
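To preview what the brace expansion produces without deleting anything, you can echo it first:
$ set -- lpa_aaai
$ echo "$1".{aux,bbl,blg,synctex.gz,log}
lpa_aaai.aux lpa_aaai.bbl lpa_aaai.blg lpa_aaai.synctex.gz lpa_aaai.log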

Shell Script, When executing commands do something if an error is returned

I am trying to automate a lot of our pre fs/db tasks, and one thing that bugs me is not knowing whether or not a command I issue REALLY happened. I'd like a way to watch for a return code of some sort from executing that command, where if it fails to rm because of a permission denied or any other error, it issues an exit.
If i have a shell script as such:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
pseudo code could be something similar to..
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
if [returncode == 'error']
    exit;
fi
How could I, for example, execute that rm command and exit if it's NOT rm'd? I will be adapting the answer to execute with multiple other types of commands, such as sed -i -e, cp, and umount.
edit:
Lets suppose i have a write protected file such as:
$ ls -lrt | grep protectedfile
-rwx------ 1 orasmq sapsys 0 Nov 14 12:39 protectedfile
And running the below script generates the following error, because obviously there are no permissions..
rm: remove write-protected regular empty file `/tmp/protectedfile'? y
rm: cannot remove `/tmp/protectedfile': Operation not permitted
Here is what I worked out from your answers.. is this the right way to do something like this? Also, how could I dump the error rm: cannot remove '/tmp/protectedfile': Operation not permitted to a logfile?
#! /bin/bash
function log(){
    : # some logging code, simply writes to a file and then echoes out its input
}
function quit(){
    read -p "Failed to remove protected file, permission denied?"
    log "Some log message, and somehow append the returned error message from rm"
    exit 1;
}
rm /tmp/protectedfile || quit;
If I understand correctly what you want, just use this:
rm blah/blah/blah || exit 1
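To also capture rm's error message in a logfile, as asked in the edit, a sketch (reusing the asker's hypothetical log/quit helpers; the logfile path is a placeholder):
# simplest: append stderr to a logfile, then bail out
rm /tmp/protectedfile 2>>/tmp/myscript.log || quit
# or capture stderr into a variable so log() can include it
if ! errmsg=$(rm /tmp/protectedfile 2>&1); then
    log "rm failed: $errmsg"    # $(...) captured rm's stderr via 2>&1
    exit 1
fi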
a possibility: a 'wrapper' so that you can retrieve the original command's stderr [and stdout?], and maybe also retry it a few times before giving up, etc.
Here is a version that redirects both stdout and stderr.
Of course you could choose not to redirect stdout at all (and usually you shouldn't, I guess), making the "try_to" function a bit more useful in the rest of the script!
export timescalled=0 #external to the function itself
try_to () {
    let "timescalled += 1" #let "..." allows white spaces and simple arithmetic
    try_to_out="/tmp/try_to_${$}.${timescalled}"
    #tries to avoid collisions within the same script and/or if multiple scripts run in parallel
    zecmd="$1" ; shift ;
    "${zecmd}" "$@" 2>"${try_to_out}.ERR" >"${try_to_out}.OUT"
    try_to_ret=$?
    #or: "${zecmd}" "$@" >"${try_to_out}.ERR" 2>&1 to have both in the same one
    if [ "$try_to_ret" -ne "0" ]
    then
        log "error $try_to_ret while trying to : '${zecmd} $*' ..." "${try_to_out}.ERR"
        #provides a custom error message + the name of the stderr file from the command
        rm -f "${try_to_out}.ERR" "${try_to_out}.OUT" #before we exit, better delete these
        exit 1 #or exit $try_to_ret ?
    fi
    rm -f "${try_to_out}.ERR" "${try_to_out}.OUT"
}
it's ugly, but could help ^^
note that there are many things that could go wrong: 'timescalled' could become too high, the tmp file(s) could end up not writable, zecmd could contain special characters, etc...
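Hypothetical usage, wrapping the rm from the question:
try_to rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl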
Usually some would use a command like this:
doSomething.sh
if [ $? -ne 0 ]
then
    echo "oops, i did it again"
    exit 1;
fi
B.T.W., searching for 'bash exit status' will already give you a lot of good results.
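The same pattern (or the shorter || exit 1 form above) adapts directly to the other commands mentioned in the question; the paths here are placeholders:
cp /oracle/$SAPSID/origlogA/somefile /backup/ || exit 1
umount /oracle/$SAPSID/mirrlogA || exit 1
sed -i -e 's/old/new/' /tmp/someconfig || exit 1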
