I have a bash shell script that should:
1) check for the existence of a file
2) if the file exists, exit the script; otherwise create the file
3) set off a process
4) check the process has run correctly, and send the result to a log file
5) delete the file
6) exit the script
if [ -f "$PROPERTIES_HC" ]
then
    # lockfile/propertiesfile exists so exit the script
    log "--------- lockfile exists so operation cancelled at `date` ---------"
    exit 1
else
    # no lockfile/propertiesfile so continue
    # create the lockfile/propertiesfile
    input="./$PROPERTIES_VAR"
    while IFS= read -r line || [ -n "$line" ]; do
        eval "echo $line" >> "$PROPERTIES_HC"
    done < "$PROPERTIES_VAR"

    #Run Process
    RUN_MY_PROCESS --overridefile "$PROPERTIES_HC" >> "$LOG_FILE"

    #Check Process Ran Okay
    if [ "$?" = "0" ]; then
        echo "RAN WITHOUT ERROR" >> "$LOG_FILE"
    else
        echo "SOME ERROR!" >> "$LOG_FILE"
    fi

    # Remove the lockfile/propertiesfile
    rm -rf "$PROPERTIES_HC"
fi
This script seemed to have been running fine; however, I recently came across a situation where the RUN_MY_PROCESS part of the script failed, and the script seems to have simply exited, leaving the lockfile in place.
As I understand it, unless I set something like #!/bin/sh -e, the script should run on regardless of errors. Have I misunderstood how shell scripts/shell error handling work (I am new to this!), or has my shell script itself crashed, and hence it didn't finish running?
Thanks in advance for any help.
The proper way to handle errors inside your script (i.e. errors that cause your script to crash) is through traps.
You could modify your script as follows:
#your regular script here
#...

#Run Process
trap 'echo "SOME ERROR" >> "$LOG_FILE" && rm -rf "$PROPERTIES_HC"' ERR
RUN_MY_PROCESS --overridefile "$PROPERTIES_HC" >> "$LOG_FILE"

#rest of your script here
#....
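Putting it together, here is a minimal sketch of the whole flow. Bash is assumed (the ERR trap is a bash feature), the paths are placeholders, and touch stands in for your properties-expansion loop:

#!/bin/bash
set -e    # exit on any failure; the ERR trap still fires first

PROPERTIES_HC=/tmp/my_process.lock   # placeholder path, use your own
LOG_FILE=/tmp/my_process.log         # placeholder path, use your own

if [ -f "$PROPERTIES_HC" ]; then
    echo "--------- lockfile exists so operation cancelled at $(date) ---------" >> "$LOG_FILE"
    exit 1
fi

# If anything below fails, log the error and remove the lockfile before exiting
trap 'echo "SOME ERROR!" >> "$LOG_FILE"; rm -f "$PROPERTIES_HC"' ERR

touch "$PROPERTIES_HC"   # stands in for your properties-expansion loop

RUN_MY_PROCESS --overridefile "$PROPERTIES_HC" >> "$LOG_FILE"
echo "RAN WITHOUT ERROR" >> "$LOG_FILE"

rm -f "$PROPERTIES_HC"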
Related
I'm trying to add a condition to a crontab backup script so that it doesn't start another backup if the last one is not yet completed, i.e. if the script is still running (in case a backup runs slower than usual). For this I created something like the snippet below, but Linux first creates the process and then executes the script's commands, so it will always exit with "process is running":
ps auxw | grep backup.sh | grep -v grep > /dev/null
if [ $? = 0 ]; then
    echo "process is running"
    exit 1
else
    ./backup.sh
fi
If the code snippet lives in the backup.sh file itself, you can put the verification into a separate file. Then grep will not match "itself".
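A minimal sketch of that split (the wrapper's name and the path are hypothetical). The [b]ackup.sh pattern never matches the grep process's own command line, which also makes the extra grep -v grep unnecessary:

#!/bin/sh
# check_backup.sh -- hypothetical wrapper; point cron at this instead of backup.sh
if ps auxw | grep '[b]ackup.sh' > /dev/null; then
    echo "process is running"
    exit 1
fi
/path/to/backup.sh   # placeholder path to the real backup script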
Another way is to use an additional in-use file: create the in-use file and, if it already exists, exit 1. Just make sure the in-use file is removed after the script finishes.
#!/usr/bin/env bash
set -o errexit
trap cleanup ERR INT QUIT

cleanup()
{
    rm -f "$INUSE"
}

INUSE=/home/abc/inuse/backup.inuse

if [ -f "$INUSE" ]; then
    echo "process is running"
    exit 1
else
    touch "$INUSE"
fi

# backup starts in here
# end of backup

cleanup
I have a bash script that loops through a folder and processes all *.hql files. Sometimes one of the hive scripts fails (syntax, resource constraints, etc.), but instead of the whole script failing it continues on to the next .hql file.
Is there any way I can stop the bash script from processing the remaining files? Below is my sample bash:
for i in `ls ${layer}/*.hql`; do
    echo "Processing $i ..."
    hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i} &
    if [ $j -le 5 ]; then
        j=$(( j+1 ))
    else
        wait
        j=0
    fi
done
I would check the completion status of the previous command and invoke the exit command to break out of the loop:

(( $? != 0 )) && exit 1

Introduce the above line after the hive command and it should do the trick.
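One caveat the snippet inherits from the question: the hive command there is backgrounded with &, so $? only tells you whether the job was launched. A sketch that drops the & so $? reflects hive's real exit status (at the cost of the parallelism, which the template answer below keeps):

for i in `ls ${layer}/*.hql`; do
    echo "Processing $i ..."
    hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i}   # foreground: $? is hive's status
    (( $? != 0 )) && exit 1                               # stop at the first failure
done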
add
set -e
to the top of your script
Use this template for running parallel processes and waiting for their completion. Add your date, layer, hiveconf_all, and other variables:
#!/bin/bash
set -e

#Run parallel processes and write their logs
log_dir=/tmp/my_script_logs
mkdir -p "$log_dir"
for i in `ls ${layer}/*.hql`; do
    echo "Processing $i ..."
    #Run hive in parallel and redirect to the log file
    hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i} 2>&1 | tee "${log_dir}/$(basename "$i").log" &
done

#Now wait for all processes to complete
FAILED=0
for job in `jobs -p`
do
    echo "job=$job"
    wait $job || let "FAILED+=1"
done

if [ "$FAILED" != "0" ]; then
    echo "Execution FAILED! ($FAILED)"
    #Do something here, log or send message, etc
    exit 1
fi

#All processes are completed successfully!
#Do something here
echo "Done successfully"
Then you will be able to inspect each process log individually.
Here I am again. Today I wrote a little script that is supposed to start an application silently in my Debian env.
Easy as:

silent "npm search 1234556"

It works, but not entirely.
As you can see, I commented out the section where I have some trouble.
This line:
$($cmdLine) &
doesn't hide the application output, but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument

errorsRedirect=""

if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>/dev/null"
fi

# not working
$($cmdLine) &

# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument

errorsRedirect=""

if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>&1"
fi

eval "$cmdLine &"
Rather than building up a command with redirection tacked on the end, you can incrementally apply it:
#!/bin/sh

if [ -z "$1" ]; then
    exit
fi

exec >/dev/null

if [ -n "$2" ]; then
    exec 2>&1
fi

exec $1
This first redirects stdout of the shell script to /dev/null. If the second argument is given, it redirects stderr of the shell script too. Then it runs the command, which inherits stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if it is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
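The incremental redirection is easy to see in isolation (a throwaway example):

#!/bin/sh
echo "visible"
exec >/dev/null            # from here on, the script's stdout is discarded
echo "silently dropped"
echo "still visible" >&2   # stderr is untouched until you also exec 2>&1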
& means that you're running the command in the background, whereas

1 >/dev/null 2>/dev/null

means that you redirect its output to a sort of garbage bin, which is why you don't see anything.
Note, however, that cmdLine="$1 >/dev/null" does not work on its own: when $cmdLine is expanded unquoted, the > is passed to the command as a literal argument instead of being treated as a redirection operator. That is why you need eval or bash -c:
You can build your command line in a variable and run a bash with it in the background:

bash -c "$cmdLine" &

Note that it might be useful to store the program's output (out/err) somewhere instead of throwing it into /dev/null.
In addition, why do you need errorsRedirect?
You can even add a wait at the end, just to be safe... if you want...
#!/bin/bash
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
# (bash, not sh: the += below is a bashism)

[ ! "$1" ] && echo "Please, don't joke me..." && exit 1

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
[ "$2" ] && cmdLine+=" 2>/dev/null"

echo "Running \"$cmdLine\""
bash -c "$cmdLine" &

wait
I've written a script to start and stop my Perforce server. To shut down the server I use the kill -SIGTERM command with the PID of the server daemon. It works as it should, but there are some discrepancies in my script concerning the output behavior.
The script looks as follows:
#!/bin/sh -e

export P4JOURNAL=/var/log/perforce/journal
export P4LOG=/var/log/perforce/p4err
export P4ROOT=/var/local/perforce_depot
export P4PORT=1666

PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"

. /lib/lsb/init-functions

p4start="p4d -d"
p4stop="p4 admin stop"
p4user=perforce

case "$1" in
    start)
        log_action_begin_msg "Starting Perforce Server"
        daemon -u $p4user -- $p4start;
        echo "\n"
        ;;
    stop)
        echo "BLABLA"
        echo "$(pidof /usr/local/bin/p4d)"
        #daemon -u $p4user -- $p4stop;
        p4dPid="$(pidof /usr/local/bin/p4d)"
        echo $p4dPid
        if [ -z "$(pidof /usr/local/bin/p4d)" ]; then
            echo "ERROR: No Perforce Server running!"
        else
            echo "SUCCESS: Found Perforce Server running!\n\t"
            echo "Shutting down Perforce Server..."
            kill -15 $p4dPid;
        fi
        echo "\n"
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: /etc/init.d/perforce (start|stop|restart)"
        exit 1
        ;;
esac

exit 0
When p4d is running, the stop block works as intended, but when no p4d is running, the script run with stop only outputs BLABLA and an empty new line, because of the echo "$(pidof /usr/local/bin/p4d)". The error message stating that no server is running is never printed. What am I doing wrong here?
PS: The part if [ -z "$(pidof /usr/local/bin/p4d)" ]; then has been changed from if [ -z "$p4dPid" ]; then for debugging reasons.
EDIT: I narrowed down the problem. If I don't use the p4dPid variable and comment out the lines p4dPid="$(pidof /usr/local/bin/p4d)" and echo $p4dPid, the if block is processed and the error message is printed. Still, I don't understand what is causing this behavior.
EDIT 2: Problem solved!
The -e in #!/bin/sh -e was causing the shell to exit the script after any statement returning a non-zero return value.
When your service is not running, the command
echo "$(pidof /usr/local/bin/p4d)"
is processed as
echo ""
because pidof did not return any string. So the command outputs an empty line.
If you do not want this empty line, just remove this statement; after all, you print an error message when the process is not running.
Problem solved!
The -e in #!/bin/sh -e was causing the shell to exit after any statement returning a non-zero return value.
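The effect is easy to reproduce with a trivial script (a hypothetical example):

#!/bin/sh -e
pidof some-process-that-is-not-running   # pidof returns non-zero...
echo "never printed"                     # ...so -e aborts the script before this line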
I have a script that runs every 15 minutes, but sometimes, if the box is busy, it hangs, and the next process starts before the first one has finished, creating a snowball effect. How can I add a couple of lines to the bash script to check whether something is already running before starting?
You can use pidof -x if you know the process name, or kill -0 if you know the PID.
Example:
if pidof -x vim > /dev/null
then
    echo "Vim already running"
    exit 1
fi
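And a sketch of the kill -0 variant, assuming the PID was recorded in a pidfile (the path here is hypothetical). kill -0 sends no signal; it only tests whether the process exists:

PIDFILE=/var/run/myscript.pid   # hypothetical pidfile path
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
then
    echo "Already running"
    exit 1
fi
echo $$ > "$PIDFILE"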
Why not set a lock file? Something like

yourapp.lock

Just remove it when your process is finished, and check for it before launching it.
It could be done like this:

if [ -f yourapp.lock ]; then
    echo "The process is already launched, please wait..."
    exit 1
fi
touch yourapp.lock
In lieu of pidfiles, as long as your script has a uniquely identifiable name you can do something like this:
#!/bin/bash

COMMAND=$(basename "$0")   # ps -C matches the command name, not a full path

# exit if I am already running
RUNNING=`ps --no-headers -C "${COMMAND}" | wc -l`
if [ ${RUNNING} -gt 1 ]; then
    echo "Previous ${COMMAND} is still running."
    exit 1
fi
... rest of script ...
pgrep -f yourscript >/dev/null && exit

(Note that if this line lives inside yourscript itself, pgrep will also match the current instance; you would need to exclude your own PID, e.g. by checking pgrep's output against $$.)
This is how I do it in one of my cron jobs
lockfile=~/myproc.lock
minutes=60

if [ -f "$lockfile" ]
then
    filestr=`find $lockfile -mmin +$minutes -print`
    if [ "$filestr" = "" ]; then
        echo "Lockfile is not older than $minutes minutes! Another $0 running. Exiting ..."
        exit 1
    else
        echo "Lockfile is older than $minutes minutes, ignoring it!"
        rm $lockfile
    fi
fi

echo "Creating lockfile $lockfile"
touch $lockfile
and delete the lock file at the end of the script
echo "Removing lock $lockfile ..."
rm $lockfile
For a method that does not suffer from parsing bugs and race conditions, check out:
BashFAQ/045 - How can I ensure that only one instance of a script is running at a time (mutual exclusion)?
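For reference, the flock(1) pattern from that FAQ looks roughly like this (the lock-file path is an arbitrary choice):

#!/bin/bash
exec 9>/tmp/myscript.lock   # open the lock file on file descriptor 9
if ! flock -n 9; then       # try to take an exclusive lock without blocking
    echo "Another instance is running, exiting."
    exit 1
fi
# ... rest of script; the lock is released automatically when the script exits ...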
I recently had the same question and found, from the answers above, that kill -0 is best for my case:
echo "Starting process..."
run-process > $OUTPUT &
pid=$!
echo "Process started pid=$pid"
while true; do
    kill -0 $pid 2> /dev/null || { echo "Process exit detected"; break; }
    sleep 1
done
echo "Done."
To expand on what @bgy says, the safe, atomic way to create a lock file if it doesn't exist yet, and fail if it does, is to create a temp file, then hard-link it to the standard lock file. This protects against another process creating the file in between you testing for it and you creating it.
Here is the lock file code from my hourly backup script:
echo $$ > /tmp/lock.$$
if ! ln /tmp/lock.$$ /tmp/lock ; then
    echo "previous backup in process"
    rm /tmp/lock.$$
    exit
fi
Don't forget to delete both the lock file and the temp file when you're done, even if you exit early through an error.
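An EXIT trap is one way to guarantee that cleanup even on early exits (a sketch reusing the /tmp/lock names from above):

cleanup() {
    rm -f /tmp/lock /tmp/lock.$$
}
trap cleanup EXIT                 # runs on any normal exit
trap 'cleanup; exit 1' INT TERM   # and on interruption/termination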
Use this script:
FILE="/tmp/my_file"
if [ -f "$FILE" ]; then
echo "Still running"
exit
fi
trap EXIT "rm -f $FILE"
touch $FILE
...script here...
This script will create a file and remove it on exit.
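A related variant makes the create-if-absent step atomic using the shell's noclobber option, closing the small window between the -f test and the touch (same hypothetical /tmp/my_file):

FILE="/tmp/my_file"
set -C                                    # noclobber: '>' refuses to overwrite an existing file
if ! { echo $$ > "$FILE"; } 2>/dev/null; then
    echo "Still running"
    exit
fi
trap "rm -f $FILE" EXIT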