I'm using a script to modify some mailboxes on a Zimbra server hosted on an Ubuntu server. The script checks whether the mailbox exists and, if so, proceeds with the required change.
I get the error
scriptname.sh: 4: Syntax error: Bad fd number
Here's the script:
#!/bin/bash
email=$1
echo "Looking for $email"
/opt/zimbra/bin/zmprov ga "$email" displayName > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "Mailbox not found on this server"; exit 2;
fi
/opt/zimbra/bin/zmprov ModifyAccount "$email" zimbraMailTransport smtp:server.domain.com:25
if [ $? -ne 0 ]; then
    echo "Error updating Transport.";
    exit 3;
fi
echo "Transport updated";
The error is related to this line:
/opt/zimbra/bin/zmprov ga "$email" displayName > /dev/null 2>&1
I'm quite a newbie at bash, so I don't really know how to debug this.
For an unknown reason, a \r had been added at the end of each line of the script. I removed them with Notepad++, and it worked like a charm. – Ashina
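For reference, the same cleanup can be done from the command line; a minimal sketch, assuming GNU sed on the Ubuntu host and that the script is saved as scriptname.sh:
# Show whether the file has Windows-style line endings
file scriptname.sh          # reports "with CRLF line terminators" if affected
# Strip the trailing \r from every line, in place
sed -i 's/\r$//' scriptname.sh
# Or, if the dos2unix package is installed:
dos2unix scriptname.sh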
Related
I would like to run a bash script on the target server. Whenever any command execution fails, I would like to first send an email with the error details and the reason it failed, and then exit from the program.
I have tried redirecting the whole bash script's output to a log file and emailing that log file.
Right now I only get the email with the file.log content when the script executes successfully, but I would like to receive an email with the details even when it fails. Please help.
exec > file.log 2>&1
case $(hostname) in
    abcd|defg)
        blah
        blah
        ;;
    ghij|klmn)
        blah
        # e.g. a command fails here because a file is not present
        blah
        ;;
    *) echo "Not found"
esac
echo -e "Sending $(cat file.log)" | mailx -s "Status" abcd#abcd.com
Try this:
I don't know what command you are using, so I used ping as an example:
#!/bin/bash

status() {
    for hosts in 192.168.1.1 192.168.1.2 192.168.1.3 192.168.1.250; do
        ping -t 1 -c 1 "$hosts"
        if [[ "$?" -gt "0" ]]; then
            echo -e "Host is down: '$hosts'" >> error.log
        fi
    done
}

failedornot() {
    # Only look at error.log if status() actually created it
    if [[ -f error.log ]] && grep -q "down" error.log; then
        echo -e "Something went wrong, sending e-mail report to: abcd#abcd.com"
        mailx -s "Status" abcd#abcd.com < error.log
        rm error.log
    else
        echo "Everything is fine, executing blabla.sh"
    fi
}

status
failedornot
Output
Host is down: 192.168.1.250
Something went wrong, sending e-mail report to: <username>..
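As an alternative that maps more directly onto the original question (mail the log whenever any command fails), a trap can send file.log on the way out; a minimal sketch, reusing the log file name and the placeholder address from the question, with set -e assumed so that the first failing command aborts the script:
#!/bin/bash
set -e                          # abort on the first failing command
exec > file.log 2>&1            # everything below is captured in file.log

send_report() {
    status=$?                   # exit status of the script at the moment it exits
    if [ "$status" -ne 0 ]; then
        mailx -s "Status: FAILED (exit $status)" abcd#abcd.com < file.log
    else
        mailx -s "Status: OK" abcd#abcd.com < file.log
    fi
}
trap send_report EXIT           # runs on success and on failure alike

# ... the case $(hostname) block from the question goes here ...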
I've been using cronic to silence emails from cron jobs when the job is successful. I'm trying to customize it so that when the response code is 0 and the error output matches the string "mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH.", no email is sent. Below is the script, then the contents of the email, then my attempt at suppressing the email, which is not working. Any idea what I may be doing wrong?
#!/bin/bash
# Cronic v3 - cron job report wrapper
# Copyright 2007-2016 Chuck Houpt. No rights reserved, whatsoever.
# Public Domain CC0: http://creativecommons.org/publicdomain/zero/1.0/
set -eu
TMP=$(mktemp -d)
OUT=$TMP/cronic.out
ERR=$TMP/cronic.err
TRACE=$TMP/cronic.trace
set +e
"$@" >$OUT 2>$TRACE
RESULT=$?
set -e

PATTERN="^${PS4:0:1}\\+${PS4:1}"
if grep -aq "$PATTERN" $TRACE
then
    ! grep -av "$PATTERN" $TRACE > $ERR
else
    ERR=$TRACE
fi

if [ $RESULT -ne 0 -o -s "$ERR" ]
then
    echo "Cronic detected failure or error output for the command:"
    echo "$@"
    echo
    echo "RESULT CODE: $RESULT"
    echo
    echo "ERROR OUTPUT:"
    cat "$ERR"
    echo
    echo "STANDARD OUTPUT:"
    cat "$OUT"
    if [ $TRACE != $ERR ]
    then
        echo
        echo "TRACE-ERROR OUTPUT:"
        cat "$TRACE"
    fi
fi
rm -rf "$TMP"
Here is what the email notification looks like:
Cronic detected failure or error output for the command:
/usr/local/sbin/reg-backup-cronic.sh daily
RESULT CODE: 0
ERROR OUTPUT:
mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH.
STANDARD OUTPUT:
/dev/sde1 on /VessRAID/RH type ext4 (rw,relatime)
Here is my attempt at a wrapper script:
#!/bin/bash
/usr/local/sbin/reg-backup.sh $1
CODE=$?
err=$TRACE
if [[ $CODE -eq 0 && $err = "mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH." ]]
then
exit $CODE
fi
Alas the emails continue.
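For what it's worth, the wrapper above never captures anything: $TRACE is never set in that script, so $err is always empty, and exiting 0 would not help anyway because cronic mails whenever there is error output, not only on a non-zero exit code. A wrapper that really wanted to hide that one message would have to capture stderr itself and strip the known line before cronic sees it; a minimal, untested sketch along those lines (reg-backup.sh and the mount message are taken from the question):
#!/bin/bash
KNOWN="mount: /VessRAID/RH: /dev/sde1 already mounted on /VessRAID/RH."

# Run the real job, capturing its stderr so the benign line can be filtered out
ERRFILE=$(mktemp)
/usr/local/sbin/reg-backup.sh "$1" 2> "$ERRFILE"
CODE=$?

# Re-emit every stderr line except the known harmless one,
# so cronic still reports any real errors
grep -Fxv "$KNOWN" "$ERRFILE" >&2
rm -f "$ERRFILE"

exit $CODE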
Hat tip to the creator of cronic, Chuck Houpt, for cluing me in to an answer, which was to look at the original script and why the error is happening. Case-sensitivity got the best of me:
if mount | grep Vessraid; then
echo starting $1 backup >> /var/log/vessraid.log
Notice the case; it should have been VessRAID:
if mount | grep VessRAID; then
echo starting $1 backup >> /var/log/vessraid.log
Now emails only happen when there really is an error.
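If the casing of the mount point might vary again, a case-insensitive match would sidestep the problem; a small hedged variant of the same check:
# -i makes the match case-insensitive, so VessRAID, Vessraid and vessraid all match
if mount | grep -qi vessraid; then
    echo starting $1 backup >> /var/log/vessraid.log
fi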
I want to run source .env (sourcing a .env file), and if the .env file produces errors while being sourced, I want to show the message "Hey you got errors in your .env" before the error output. If there's no error, I don't want to show anything.
Here's a code sample that needs editing:
#!/bin/zsh
env_auto_sourcing() {
    if [[ -f .env ]]; then
        OUTPUT="$(source .env &> /dev/null)"
        echo "${OUTPUT}"
        if [ -n "$OUTPUT" ]; then
            echo "Hey you got errors in your .env"
            echo "$OUTPUT"
        fi
    fi
}
You could use bash -n (zsh has a -n option as well) to syntax-check your script before sourcing it:
env_auto_sourcing() {
    if [[ -f .env ]]; then
        if errs=$(bash -n .env 2>&1); then
            source .env
        else
            printf '%s\n' "Hey you got errors" "$errs"
        fi
    fi
}
Storing the syntax-check errors in a variable like this is a little cleaner than capturing the output of source in a subshell as your code does.
bash -n has a few pitfalls as seen here:
How do I check syntax in bash without running the script?
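Since the question's script starts with #!/bin/zsh, the same idea works with the zsh syntax check instead of bash -n; a hedged variant of the function above:
env_auto_sourcing() {
    if [[ -f .env ]]; then
        if errs=$(zsh -n .env 2>&1); then
            source .env
        else
            printf '%s\n' "Hey you got errors" "$errs"
        fi
    fi
}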
Why not just use the exit code from the source command?
You don't have to use bash -n for this because ...
Let's say your .env file contains these two invalid lines:
dsadsd
sdss
If you run the currently accepted code against that file, the check
if errs=$(bash -n .env 2>&1);
succeeds, because those lines are syntactically valid even though they are not real commands, so nothing stops the file from being sourced.
So you can use the source command's return code to handle all of this:
#!/bin/bash

# This doesn't actually source it into the current shell.
# It just tests, in a subshell, whether sourcing works.
errs=$(source ".env" 2>&1 >/dev/null)
# Get the return code
retval=$?
#echo "${retval}"

if [ ${retval} -eq 0 ]; then
    # Do another check for any syntax error
    if [ -n "${errs}" ]; then
        echo "The source file returns 0 but you got syntax errors:"
        echo "Error details:"
        printf "%s\n" "${errs}"
        exit 1
    else
        # Success here. We know the command works without error, so we source it for real.
        echo "The source file returns 0 and no syntax errors:"
        source ".env"
    fi
else
    echo "The source command returns an error code ${retval}:"
    echo "Error details:"
    printf "%s\n" "${errs}"
    exit 1
fi
The best thing about this approach is that it checks both bash syntax errors and errors raised by source itself.
Now you can test it with this data in your .env file:
-
~
#
~<
>
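Wrapped back into the function shape from the question, the same return-code check might look like this; a hedged reshuffling of the answer above rather than a tested drop-in:
env_auto_sourcing() {
    [[ -f .env ]] || return 0
    local errs retval
    # Trial run in a subshell: stderr is captured, stdout discarded, nothing is kept
    errs=$(source ".env" 2>&1 >/dev/null)
    retval=$?
    if [ "${retval}" -ne 0 ] || [ -n "${errs}" ]; then
        echo "Hey you got errors in your .env"
        printf '%s\n' "${errs}"
        return 1
    fi
    # The trial run was clean, so source it for real in the current shell
    source ".env"
}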
I have a simple shell script which I want to set up as a periodic Jenkins job rather than a cronjob for visibility and usability for less experienced users.
Here is the script:
#!/bin/bash
outputfile=/opt/jhc/streaming/check_error_output.txt
if [ "grep -sq 'Unable' $outputfile" == "0" ]; then
echo -e "ERROR MESSAGE FOUND\n"
exit 1
else
echo -e "NO ERROR MESSAGES HAVE BEEN FOUND\n"
exit 0
fi
My script always returns "NO ERROR MESSAGES HAVE BEEN FOUND" regardless of whether or not 'Unable' is in $outputfile. What am I doing wrong?
I also need my Jenkins job to class this as a success if 'Unable' isn't found (i.e. if the script returns 0 then pass; everything else is a fail).
Execute the grep command and check the exit status instead:
#!/bin/bash
outputfile=/opt/jhc/streaming/check_error_output.txt

grep -sq 'Unable' $outputfile
if [ "$?" == "0" ]; then
    echo -e "ERROR MESSAGE FOUND\n"
    exit 1
else
    echo -e "NO ERROR MESSAGES HAVE BEEN FOUND\n"
    exit 0
fi
You are comparing two literal strings: the text "grep -sq 'Unable' $outputfile" is never equal to "0", so the test is always false and the else branch is always taken.
Also, no need to explicitly query the status code. Do it like this:
if grep -sq 'Unable' $outputfile
then
....
else
....
fi
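Putting that together with the Jenkins requirement (exit 0 is a pass, anything else is a fail), the whole script could look like this; a hedged sketch using the path from the question:
#!/bin/bash
outputfile=/opt/jhc/streaming/check_error_output.txt

# grep's exit status drives the branch directly:
# 0 = pattern found, 1 = not found, 2 = error such as a missing file (-s only hides the message)
if grep -sq 'Unable' "$outputfile"; then
    echo -e "ERROR MESSAGE FOUND\n"
    exit 1      # Jenkins marks the build as failed
else
    echo -e "NO ERROR MESSAGES HAVE BEEN FOUND\n"
    exit 0      # Jenkins marks the build as passed
fi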
I'm writing a script to download a bunch of files, and I want it to inform when a particular file doesn't exist.
r=`wget -q www.someurl.com`
if [ $r -ne 0 ]
then echo "Not there"
else echo "OK"
fi
But it gives the following error on execution:
./file: line 2: [: -ne: unary operator expected
What's wrong?
Others have correctly posted that you can use $? to get the most recent exit code:
wget_output=$(wget -q "$URL")
if [ $? -ne 0 ]; then
...
This lets you capture both the stdout and the exit code. If you don't actually care what it prints, you can just test it directly:
if wget -q "$URL"; then
...
And if you want to suppress the output:
if wget -q "$URL" > /dev/null; then
...
$r is the text output of wget (which you've captured with backticks). To access the return code, use the $? variable.
$r is empty, and therefore your condition becomes if [ -ne 0 ] and it seems as if -ne is used as a unary operator. Try this instead:
wget -q www.someurl.com
if [ $? -ne 0 ]
...
EDIT: As Andrew explained before me, backticks capture standard output, while $? returns the exit code of the last operation.
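A tiny illustration of that difference, reusing the question's URL:
out=`wget -q http://www.someurl.com`   # backticks capture stdout, which is empty with -q
code=$?                                # $? holds the exit status: 0 on success, non-zero on failure
echo "output='$out' status=$code"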
You could just do:
wget ruffingthewitness.com && echo "WE GOT IT" || echo "Failure"
-(~)----------------------------------------------------------(07:30 Tue Apr 27)
risk#DockMaster [2024] --> wget ruffingthewitness.com && echo "WE GOT IT" || echo "Failure"
--2010-04-27 07:30:56-- http://ruffingthewitness.com/
Resolving ruffingthewitness.com... 69.56.251.239
Connecting to ruffingthewitness.com|69.56.251.239|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `index.html.1'
[ <=> ] 14,252 72.7K/s in 0.2s
2010-04-27 07:30:58 (72.7 KB/s) - `index.html.1' saved [14252]
WE GOT IT
-(~)-----------------------------------------------------------------------------------------------------------(07:30 Tue Apr 27)
risk#DockMaster [2025] --> wget ruffingthewitness.biz && echo "WE GOT IT" || echo "Failure"
--2010-04-27 07:31:05-- http://ruffingthewitness.biz/
Resolving ruffingthewitness.biz... failed: Name or service not known.
wget: unable to resolve host address `ruffingthewitness.biz'
zsh: exit 1 wget ruffingthewitness.biz
Failure
-(~)-----------------------------------------------------------------------------------------------------------(07:31 Tue Apr 27)
risk#DockMaster [2026] -->
The best way to capture the output from wget and also check the call status:
wget -O filename URL
if [[ $? -ne 0 ]]; then
    echo "wget failed"
    exit 1
fi
This way you can check the status of wget as well as keep the downloaded data.
If the call is successful, use the stored output.
Otherwise the script exits with the error "wget failed".
I've been trying all the solutions without luck.
In my case wget was effectively running in the background, so I couldn't catch its return code with $?.
One solution is to use the --server-response option and search the output for the HTTP 200 status code.
Example:
wget --server-response -q -o wgetOut http://www.someurl.com
sleep 5
_wgetHttpCode=`cat wgetOut | gawk '/HTTP/{ print $2 }'`
if [ "$_wgetHttpCode" != "200" ]; then
    echo "[Error] `cat wgetOut`"
fi
Note: wget needs some time to finish its work; for that reason I put "sleep 5". This is not the best way to do it, but it worked OK for testing the solution.
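If the URL involves redirects, wgetOut contains one HTTP status line per hop and the gawk above picks up the first one; a hedged tweak that keeps only the final status, and drops the fixed sleep on the assumption that wget was not started with -b and has therefore already exited when the next line runs:
wget --server-response -q -o wgetOut http://www.someurl.com
# Keep the status code of the last "HTTP/..." line, i.e. the final response after any redirects
_wgetHttpCode=$(gawk '/HTTP\//{ code=$2 } END{ print code }' wgetOut)
if [ "$_wgetHttpCode" != "200" ]; then
    echo "[Error] $(cat wgetOut)"
fi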