How to handle an error when the exit code is zero in bash

I have the following script at ~/bin/cat that uses pygmentize to display syntax-highlighted files whenever possible, falling back to plain old cat otherwise.
#!/bin/bash
for var; do
    pygmentize "$var" 2> /dev/null
    if [ $? -ne 0 ]; then
        /bin/cat "$var"
    fi
done
This works fine on my work machine but not on my home machine. At home, if pygmentize doesn't recognize a file, it displays the same error message, but the exit status is 0, whereas at work it returns 1, which breaks the script. The only difference is that at work I run Fedora and at home Ubuntu.
$ pygmentize testfile
Error: no lexer for filename 'testfile' found
$ echo $?
0
$ file testfile
testfile: ASCII text
This is strange, as both are the same version:
$ pygmentize -V
Pygments version 1.4, (c) 2006-2008 by Georg Brandl.
I could grep for Error in stderr, but how do I do this without throwing away stdout? How should I handle this?

Well, your best approach is to fix pygmentize to properly return an error code. As Ignacio Vazquez-Abrams mentions, one of the distros has a patch that is either causing or fixing this.
But, here is how to work around it:
If the error message is on stderr
The easiest way is probably to redirect stderr to a temporary file, and leave stdout alone:
pygmentize "$var" 2> "$tmpfile"
then you can grep "$tmpfile". There are other ways, but they're more complicated.
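A self-contained sketch of the whole loop with this workaround; `pygmentize_like` is a hypothetical stand-in that mimics the reported bug (error message on stderr but exit status 0), and matching on a leading "Error:" is an assumption based on the message shown in the question:

```shell
#!/bin/bash
# Workaround sketch: send stderr to a temp file, leave stdout untouched,
# then grep the file. pygmentize_like is a stand-in that mimics the bug
# (error on stderr, exit status 0); swap in the real pygmentize.
pygmentize_like() {
    echo "Error: no lexer for filename '$1' found" >&2
    return 0
}

tmpfile=$(mktemp)
for var; do
    pygmentize_like "$var" 2> "$tmpfile"
    if grep -q '^Error:' "$tmpfile"; then
        /bin/cat "$var"    # fall back to plain cat
    fi
done
rm -f "$tmpfile"
```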
If the error message is on stdout
Yep, that would be another bug in pygmentize; errors should go to stderr. The temporary-file approach still works, however: just cat the temporary file back to stdout if it's OK. Alternatively, you can use tee to duplicate stdout to several destinations.
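A hedged sketch of the tee variant, for the (hypothetical) case where the tool prints its error on stdout; `somecmd` is a made-up stand-in:

```shell
#!/bin/bash
# tee duplicates the stream: the user still sees the output on the
# terminal while a copy lands in a file we can grep afterwards.
somecmd() { echo "Error: something went wrong"; }   # stand-in command

tmpfile=$(mktemp)
somecmd | tee "$tmpfile"
if grep -q '^Error:' "$tmpfile"; then
    echo "detected failure via the copy" >&2
fi
rm -f "$tmpfile"
```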

Related

storing error message of command output into a shell variable [duplicate]

I am trying to store the error message of a copy command in a variable, but it's not happening.
Unix Command
log=`cp log.txt`
cp: missing destination file operand after `log.txt'
Try `cp --help' for more information.
echo $log
<nothing displayed>
I want to store the above error message in a variable so that I can echo it whenever I want.
Just redirect stdout (the normal output) to /dev/null and keep stderr:
a=$(cp log.txt 2>&1 >/dev/null)
See an example:
$ a=$(cp log.txt 2>&1 >/dev/null)
$ echo "$a"
cp: missing destination file operand after ‘log.txt’
Try 'cp --help' for more information.
The >/dev/null is important to keep away the normal output, which in this case we do not want:
$ ls a b
ls: cannot access a: No such file or directory
b
$ a=$(ls a b 2>&1)
$ echo "$a"
ls: cannot access a: No such file or directory
b
$ a=$(ls a b 2>&1 >/dev/null)
$ echo "$a"
ls: cannot access a: No such file or directory
Note the need to quote $a when echoing it, so that the formatting is kept. Also, it is better to use $() rather than backticks, as it is easier to nest, and backticks are deprecated.
What does 2>&1 mean?
1 is stdout. 2 is stderr.
Here is one way to remember this construct (although it is not entirely accurate): at first, 2>1 may look like a good way to redirect stderr to stdout. However, it will actually be interpreted as "redirect stderr to a file named 1". & indicates that what follows is a file descriptor and not a filename. So the construct becomes: 2>&1.

Shell Script, When executing commands do something if an error is returned

I am trying to automate a lot of our pre-filesystem/database tasks, and one thing that bugs me is not knowing whether a command I issue REALLY happened. I'd like a way to watch the return code from executing that command, so that if it fails to rm because of a permission-denied or any other error, I can issue an exit.
If i have a shell script as such:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
pseudocode could be something similar to:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
if [returncode == 'error']
exit;
fi
How could I, for example, execute that rm command and exit if it is NOT rm'd? I will be adapting the answer to other commands such as sed -i -e, cp, and umount.
edit:
Let's suppose I have a write-protected file:
$ ls -lrt | grep protectedfile
-rwx------ 1 orasmq sapsys 0 Nov 14 12:39 protectedfile
And running the below script generates the following error because, obviously, there are no permissions:
rm: remove write-protected regular empty file `/tmp/protectedfile'? y
rm: cannot remove `/tmp/protectedfile': Operation not permitted
Here is what I worked out from your answers; is this the right way to do something like this? Also, how could I dump the error `rm: cannot remove '/tmp/protectedfile': Operation not permitted` to a logfile?
#!/bin/bash
function log(){
    : # some logging code; simply writes to a file and then echoes the input
}
function quit(){
    read -p "Failed to remove protected file, permission denied?"
    log "Some log message, and somehow append the returned error message from rm"
    exit 1
}
rm /tmp/protectedfile || quit
If I understand correctly what you want, just use this:
rm blah/blah/blah || exit 1
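To also capture rm's error text for the logfile, as asked in the edit, here is one possible sketch; `log_and_quit` and the paths are made-up examples, and `2>&1 >/dev/null` puts only stderr into the variable:

```shell
#!/bin/bash
# Capture rm's stderr in a variable (2>&1 after >/dev/null keeps stdout
# out of it), then append it to a logfile on failure.
logfile=/tmp/gr_cleanup.log    # arbitrary example path

log_and_quit() {
    echo "$(date): $1" >> "$logfile"
    # exit 1    # re-enable in a real script; omitted so this demo continues
}

errmsg=$(rm /tmp/no_such_file_here 2>&1 >/dev/null) || log_and_quit "rm failed: $errmsg"
```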
A possibility: a 'wrapper' so that you can retrieve the original command's stderr (and stdout?), and maybe also retry it a few times before giving up, etc.
Here is a version that redirects both stdout and stderr.
Of course, you could choose not to redirect stdout at all (and usually you shouldn't, I guess, making the "try_to" function a bit more useful in the rest of the script!):
export timescalled=0  # external to the function itself
try_to () {
    let "timescalled += 1"  # let "..." allows white space and simple arithmetic
    try_to_out="/tmp/try_to_${$}.${timescalled}"
    # tries to avoid collisions within the same script and/or if multiple scripts run in parallel
    zecmd="$1" ; shift
    "$zecmd" "$@" 2>"${try_to_out}.ERR" >"${try_to_out}.OUT"
    try_to_ret=$?
    # or: "$zecmd" "$@" >"${try_to_out}.OUT" 2>&1 to have both in the same file
    if [ "$try_to_ret" -ne "0" ]
    then log "error $try_to_ret while trying to: '${zecmd} $*' ..." "${try_to_out}.ERR"
         # provides a custom error message + the name of the file holding the command's stderr
         rm -f "${try_to_out}.ERR" "${try_to_out}.OUT"  # before we exit, better delete these
         exit 1  # or exit $try_to_ret ?
    fi
    rm -f "${try_to_out}.ERR" "${try_to_out}.OUT"
}
It's ugly, but could help ^^
Note that there are many things that could go wrong: timescalled could become too high, the tmp file(s) might not be writable, zecmd could contain special characters, etc.
Usually one would use a command like this:
doSomething.sh
if [ $? -ne 0 ]
then
    echo "oops, i did it again"
    exit 1
fi
B.T.W., searching for 'bash exit status' will already give you a lot of good results.
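An alternative sketch to checking $? after every command: bash's set -e aborts on the first failing command, and an ERR trap can report where it happened. The file path below is a made-up example:

```shell
#!/bin/bash
# set -e makes the shell exit as soon as any command fails; the ERR trap
# runs first and reports the line number of the failing command.
set -e
trap 'echo "command failed at line $LINENO" >&2' ERR

touch /tmp/gr_set_e_demo      # example commands; if any of them fails,
rm /tmp/gr_set_e_demo         # the trap fires and the script exits
echo "all steps succeeded"
```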

send bash stderr to logfile, but only if an error exists

I am using the following code to send stderr to a file.
.script >2 "errorlog.$(date)"
The problem is that a blank log file is created every time I run the script, even if no error occurs. I have looked online and in a few books as well, and can't figure out how to create a log file only if errors exist.
Output redirection opens the file before the script is run, so there is no way to tell if the file will receive any output. What you can do, however, is immediately delete the file if it winds up being empty:
logfile="errorlog.$(date)"
# Note your typo; it's 2>, not >2
script 2> "$logfile"; [ -s "$logfile" ] || rm -f "$logfile"
I use -f just in case, as -s can fail if $logfile does not exist, not just if it's empty. I use ; to separate the commands because whether or not $logfile contains anything does not depend on whether or not script succeeds.
You can wrap this up in a function to make it easier to use.
save_log () {
    logfile=${1:-errorlog.$(date)}
    cat - > "$logfile"
    [ -s "$logfile" ] || rm -f "$logfile"
}
script 2> >( save_log )
script 2> >( save_log my_logfile.txt )
Not quite as simple as redirecting to a file, and depends on a non-standard feature (process substitution), but not too bad, either.

Bash syntax error: unexpected end of file

Forgive me for this is a very simple script in Bash. Here's the code:
#!/bin/bash
# june 2011
if [ $# -lt 3 -o $# -gt 3 ]; then
    echo "Error... Usage: $0 host database username"
    exit 0
fi
after running sh file.sh:
syntax error: unexpected end of file
I think file.sh has CRLF line terminators. Run
dos2unix file.sh
and the problem will be fixed.
You can install dos2unix in ubuntu with this:
sudo apt-get install dos2unix
Another thing to check (it just occurred to me):
terminate the bodies of single-line functions with a semicolon.
I.e. this innocent-looking snippet will cause the same error:
die () { test -n "$#" && echo "$#"; exit 1 }
To make the dumb parser happy:
die () { test -n "$#" && echo "$#"; exit 1; }
I also just got this error message by using the wrong syntax in an if clause:
else if (syntax error: unexpected end of file)
elif (correct syntax)
I debugged it by commenting bits out until it worked.
An unclosed if => fi clause will raise this as well.
Tip: use trap to debug if your script is huge, e.g.:
set -x
trap read debug
I got this answer from a similar problem on StackOverflow:
Open the file in Vim and try
:set fileformat=unix
Convert the line endings to Unix endings and see if that solves the issue. If editing in Vim, enter the command :set fileformat=unix and save the file. Several other editors can convert line endings, such as Notepad++ or Atom.
Thanks @lemongrassnginger
This was happening for me when I was trying to call a function using parens, e.g.
run() {
    echo hello
}
run()
should be:
run() {
    echo hello
}
run
I had the problem when I wrote an "if - fi" statement on one line:
if [ -f ~/.git-completion.bash ]; then . ~/.git-completion.bash fi
Writing it across multiple lines solved my problem:
if [ -f ~/.git-completion.bash ]; then
    . ~/.git-completion.bash
fi
So I found this post, and the answers did not help me, but I was able to figure out why it gave me the error. I had a
cat > temp.txt << EOF
some content
EOF
The issue was that I copied the above code into a function and inadvertently tabbed the code. Make sure the last EOF is not tabbed.
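A minimal sketch of a here-doc that stays valid inside a function: the closing delimiter is flush left. (With <<- instead of <<, the body and the terminator may be indented, but only with tabs, never spaces.)

```shell
#!/bin/bash
# The closing EOF must start at column one (or, with <<-EOF, be preceded
# only by TAB characters); an indented terminator is never found and
# bash reports "unexpected end of file".
make_file() {
    cat > /tmp/gr_heredoc_demo.txt << EOF
some content
EOF
}
make_file
```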
On Cygwin I needed:
export SHELLOPTS
set -o igncr
in .bash_profile. This way I didn't need to run unix2dos.
FOR WINDOWS:
In my case, I was working on Windows and got the same error while running autoconf.
I simply opened the configure.ac file in my Notepad++ IDE, then converted the file's line endings to Windows (CR LF) as follows:
EDIT -> EOL CONVERSION -> WINDOWS (CR LF)
Missing a closing brace on a function definition will cause this error, as I just discovered.
function whoIsAnIidiot() {
    echo "you are for forgetting the closing brace just below this line !"
Which of course should be like this...
function whoIsAnIidiot() {
    echo "not you for sure"
}
I was able to cut and paste your code into a file and it ran correctly. If you execute it like this it should work:
Your "file.sh":
#!/bin/bash
# june 2011
if [ $# -lt 3 -o $# -gt 3 ]; then
    echo "Error... Usage: $0 host database username"
    exit 0
fi
The command:
$ ./file.sh arg1 arg2 arg3
Note that "file.sh" must be executable:
$ chmod +x file.sh
You may be getting that error because of how you're doing input (with a pipe, caret, etc.). You could also try splitting the condition in two:
if [ $# -lt 3 ] || [ $# -gt 3 ]; then
    echo "Error... Usage: $0 host database username"
    exit 0
fi
Or, since you're using bash, you could use built-in syntax:
if [[ $# -lt 3 || $# -gt 3 ]]; then
    echo "Error... Usage: $0 host database username"
    exit 0
fi
And, finally, you could of course just check if 3 arguments were given (clean, maintains POSIX shell compatibility):
if [ $# -ne 3 ]; then
    echo "Error... Usage: $0 host database username"
    exit 0
fi
In my case, there was a redundant \ at the end of a line, like the following:
function foo() {
    python tools/run_net.py \
      --cfg configs/Kinetics/X3D_8x8_R50.yaml \
      NUM_GPUS 1 \
      TRAIN.BATCH_SIZE 8 \
      SOLVER.BASE_LR 0.0125 \
      DATA.PATH_TO_DATA_DIR ./afs/kinetics400 \
      DATA.PATH_PREFIX ./afs/kinetics400 \ # Error
}
There should NOT be a \ at the end of the DATA.PATH_PREFIX ./afs/kinetics400 line.
I just cut-and-pasted your example into a file; it ran fine under bash. I don't see any problems with it.
For good measure you may want to ensure it ends with a newline, though bash shouldn't care. (It runs for me both with and without the final newline.)
You'll sometimes see strange errors if you've accidentally embedded a control character in the file. Since it's a short script, try creating a new script by pasting it from your question here on StackOverflow, or by simply re-typing it.
What version of bash are you using? (bash --version)
Good luck!
Make sure the name of the directory containing the .sh file does not contain a space character. E.g., if it is in a folder called 'New Folder', you're bound to come across the error you've cited; just name it 'New_Folder' instead. I hope this helps.
Apparently, some versions of the shell can also emit this message when the final line of your script lacks a newline.
In Ubuntu:
$ gedit ~/.profile
Then, File -> Save as and set end line to Unix/Linux
I know I am too late to the party. Hope this may help someone.
Check your .bashrc file. Perhaps rename or move it.
Discussion here: Unable to source a simple bash script
For people using macOS:
If you received a file in Windows format, want to run it on macOS, and are seeing this error, run these commands:
brew install dos2unix
dos2unix <file.sh>
sh <file.sh>
If the script itself is valid and there are no syntax errors, then some possible causes could be:
Invalid end-of-lines (for example, \r\n instead of \n)
Presence of the byte order mark (BOM) at the beginning of the file
Both can be fixed using vim or vi.
To fix line endings open the file in vim and from the command mode type:
:set ff=unix
To remove the BOM use:
:set nobomb
For those who don't have dos2unix installed (and don't want to install it):
Remove the trailing \r characters that cause this error:
sed -i 's/\r$//' filename
Details in this StackOverflow answer, which was really helpful:
https://stackoverflow.com/a/32912867/7286223

BASH Unexpected EOF

My Mac keeps telling me "unexpected end of file" for this bash script, on the last line. I am not new to programming but am very new to BASH; does anyone see anything wrong with this?
#!/bin/bash
#bootstrapper.sh
PIDD="$5"
while sleep 1; do kill -0 $PIDD || break; done
# Absolute path to this script. /home/user/bin/foo.sh
SCRIPT=$(readlink -f $0)
# Absolute path this script is in. /home/user/bin
SCRIPTPATH=`dirname $SCRIPT`
POSPAR1="$1" #-l
POSPAR2="$2" #location
POSPAR3="$3" #-d
POSPAR4="$4" #directory
cp -r -f $SCRIPTPATH/$4/* $2
rm -r -f $SCRIPTPATH/$4
Thank you in advance!
I copied your code from the question on a Mac (copy'n'paste) and ran the file with:
bash -n -v x.sh
In fact, I did that twice; the first time, I ensured there was a newline at the end of the file, and the second time I ensured that there wasn't a newline. And bash was quite happy both times.
This indicates to me that the problem is not in the visible characters; there are some invisible characters in the file causing grief. You will probably need to scrutinize the file with a tool such as od -c to find the character that is causing the trouble.
Also, FWIW, the readlink command on my Mac gives:
$ readlink -f $0
readlink: illegal option -- f
usage: readlink [-n] [file ...]
$
The Linux version of readlink takes -f. It isn't a POSIX command, so there is no de jure standard to refer to.
Analyzing the file with od -c revealed the line endings were \r\n; I did modify the file on Windows, silly me. Anyway, I am having another issue with the BASH script. This line:
while sleep 1; do kill -0 $PIDD || break; done
is supposed to wait until the PID (stored in the variable $PIDD) no longer exists. It does wait until the PID doesn't exist, but when it finally doesn't, it outputs: kill: 4: No such process. The rest of the script works as intended, but then the script doesn't terminate. Can I make the script terminate properly without that "No such process" message being output?
Sorry for all the newbie questions; I'm awful at BASH and Linux.
Thanks again for all your help.
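The complaint comes from kill itself on stderr, so one likely fix is simply to discard it; a sketch wrapping the original loop (the function name is made up):

```shell
#!/bin/bash
# kill -0 still reports failure via its exit status even when its stderr
# is discarded, so the loop breaks silently once the PID is gone.
wait_for_pid() {
    while sleep 1; do kill -0 "$1" 2>/dev/null || break; done
}
wait_for_pid "$1"
```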
