Stop Bash Script if Hive Fails - shell

I have a bash script that loops through a folder and processes all *.hql files. Sometimes one of the hive scripts fails (syntax, resource constraints, etc.), but instead of the whole script failing it continues on to the next .hql file.
Is there any way I can stop the bash script from processing the remaining files? Below is my sample bash:
j=0
for i in `ls ${layer}/*.hql`; do
    echo "Processing $i ..."
    hive ${hiveconf_all} -hiveconf DATE=${date} -f ${i} &
    if [ $j -le 5 ]; then
        j=$(( j+1 ))
    else
        wait
        j=0
    fi
done

I would check the exit status of the previous command and invoke the exit command to come out of the loop:
(( $? != 0 )) && exit 1
Introduce the above line after the hive command and it should do the trick. Note that because your hive command is backgrounded with &, $? there only reflects whether the job was launched; run hive in the foreground (or test the status returned by wait) for the check to be meaningful.
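A minimal sketch of that approach (a sequential variant of the question's loop, same assumed variables):
for i in "${layer}"/*.hql; do
    echo "Processing $i ..."
    hive ${hiveconf_all} -hiveconf DATE=${date} -f "$i"
    # stop at the first failing hive script
    (( $? != 0 )) && exit 1
done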

Add
set -e
to the top of your script. Note that set -e does not fire for commands launched in the background with &; it takes effect when a subsequent wait returns that job's non-zero status.
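A tiny sketch of the effect (the .hql names are placeholders), assuming the hive calls run in the foreground:
#!/bin/bash
set -e               # exit immediately when any command returns non-zero
hive -f good.hql     # runs normally
hive -f bad.hql      # if this fails, the script stops right here
echo "only reached if both scripts succeeded"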

Use this template for running parallel processes and waiting for their completion. Add your date, layer, hiveconf_all, and other variables:
#!/bin/bash
set -e
#Without pipefail, the pipeline's status would be tee's, not hive's
set -o pipefail
#Run parallel processes and write their logs
log_dir=/tmp/my_script_logs
mkdir -p "$log_dir"
for i in "${layer}"/*.hql; do
    echo "Processing $i ..."
    #Run hive in parallel and redirect to the log file
    hive ${hiveconf_all} -hiveconf DATE=${date} -f "$i" 2>&1 | tee "${log_dir}/$(basename "$i").log" &
done
#Now wait for all processes to complete
FAILED=0
for job in $(jobs -p)
do
    echo "job=$job"
    wait "$job" || let "FAILED+=1"
done
if [ "$FAILED" != "0" ]; then
    echo "Execution FAILED! ($FAILED)"
    #Do something here, log or send message, etc
    exit 1
fi
#All processes completed successfully!
#Do something here
echo "Done successfully"
Then you will be able to inspect each process log individually.
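If you would rather keep the question's original cap of a few concurrent hive processes, an alternative sketch (this swaps in xargs -P for the manual batching, a different technique than the template above, with the same assumed variables):
printf '%s\n' "${layer}"/*.hql |
    xargs -P5 -I{} hive ${hiveconf_all} -hiveconf DATE=${date} -f {} ||
    { echo "At least one hive script failed"; exit 1; }
xargs caps the parallelism at 5 and exits non-zero if any invocation fails.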

Related

How to display long script logs as a one-liner?

Let's say I have multiple scripts which need to be invoked sequentially for the job.
These scripts have long and lengthy output; below is my bash script.
How can I avoid that output but still be able to tell that the process is running?
Here is an example:
#!/bin/bash
echo "Script to prepare Final BUILD"
rm -vf ./out/module1.out
module1_build_script.sh #FIXME: This script outputs 10000 lines
#module1_build_script.sh &> /dev/null #Not interested, as this makes it hard to tell whether the process hangs or is running.
if [ ! -f ./out/module1.out ]; then
    echo "Module 1 build failed"
    exit 1
fi
.
.
.
rm -vf ./out/module4.out
module4_build_script.sh # This script outputs 5000 lines
if [ ! -f ./out/module4.out ]; then
    echo "Module 4 build failed"
    exit 4
fi
Now I am expecting some code that gives me an effect like the output below, on one line, without scrolling.
example: module1_build_script.sh | "magical code here" #FIXME:
Like the output below:
user@bash# ./myscript
#-------content of myscript ---------------
#!/bin/bash
i=0
while (( i < 10 ))
do
    echo -en "\r Process is running...$i"
    sleep 0.5
    ((i++))
done
#------------------------------------------
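One minimal sketch of such "magical code" (the one_liner helper is a hypothetical name, not an existing tool): read the build script's output line by line and keep overwriting a single status line with \r:
one_liner() {
    while IFS= read -r line; do
        # \r returns to the start of the line; %-80.80s pads/truncates to 80 columns
        printf '\r%-80.80s' "$line"
    done
    printf '\n'
}
module1_build_script.sh 2>&1 | one_liner
Each new log line replaces the previous one, so you can tell the build is still making progress without any scrolling.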

Has my bash script crashed?

I have a bash shell script, that should:
1) check for the existence of a file
2) If file exists exit script, otherwise create file
3) Set off a process
4) Check process has run correctly - and send result to a log file
5) Delete file
6) Exit script
if [ -f $PROPERTIES_HC ]
then
    # lockfile/propertiesfile exists so exit the script
    log --------- lockfile exists so operation cancelled at `date` ---------
    exit 1
else
    # no lockfile/propertiesfile so continue
    # create the lockfile/propertiesfile
    input="./$PROPERTIES_VAR"
    while IFS= read -r line || [ -n "$line" ]; do
        eval "echo $line" >> $PROPERTIES_HC
    done < $PROPERTIES_VAR
    #Run Process
    RUN_MY_PROCESS --overridefile $PROPERTIES_HC >> $LOG_FILE
    #Check Process Ran Okay
    if [ "$?" = "0" ]; then
        echo "RAN WITHOUT ERROR" >> $LOG_FILE
    else
        echo "SOME ERROR!" >> $LOG_FILE
    fi
    # Remove the lockfile/propertiesfile
    rm -rf $PROPERTIES_HC
fi
This script seemed to have been running fine; however, recently I came across a situation where the RUN_MY_PROCESS element of the script failed, and the script seems to have simply exited, leaving the lockfile in place.
As I understand it, unless I set something like #!/bin/sh -e, the script should run on regardless of errors. Have I misunderstood how shell scripts/shell error handling work (I am new to this!), or has my shell script crashed itself, hence it didn't finish running?
Thanks in advance for any help.
The proper way to handle errors inside your script (i.e. errors that cause your script to crash) is through traps.
You could modify your script as follows:
if [ -f $PROPERTIES_HC ]
#your regular script here
#...
#Run Process
trap 'echo "SOME ERROR" >> $LOG_FILE && rm -rf $PROPERTIES_HC' ERR
RUN_MY_PROCESS --overridefile $PROPERTIES_HC >> $LOG_FILE
#rest of your script here
#....
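For reference, a minimal self-contained sketch of how an ERR trap behaves (the commands are placeholders):
#!/bin/bash
trap 'echo "command failed with status $?"' ERR
false   # any simple command exiting non-zero fires the trap
echo "the script keeps running after the trap"
Unlike set -e, the trap does not abort the script by itself; it just runs your handler, which is what lets the answer above log the error and remove the lockfile.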

Bash script: spawning multiple processes issues

So I am writing a script to call a process 365 times, and they should run in 10 batches. This is something I wrote, but there are multiple issues:
1. the log message is not getting written to the log file; I see the error message in the err file
2. there is this "Command not found" error I keep getting from the script for the process line
3. even if the command doesn't succeed, it still prints success rather than FAIL
#!/bin/bash
set -m
FAIL=0
for i in {1..10}
do
    waitPIDS=()
    j=$i
    while [ $j -lt 366 ]; do
        exec 1>logfile
        exec 2>errorfile
        `process $j &`
        waitPIDS[${#waitPIDS[@]}]=$!
        j=$[$j+1]
    done
    for jpid in "${waitPIDS[@]}"
    do
        echo $jpid
        wait $jpid
        if [[ $? != 0 ]] ; then
            echo "fail"
        else
            echo "success"
        fi
    done
done
What is wrong with it?
Thanks!
At the very least, this line:
`process $j &`
Shouldn't have any backticks in it. You probably just want:
process $j &
Besides that, you're overwriting your log files instead of appending to them; is that intended?
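A sketch of the corrected launch lines (appending to the logs instead of truncating them on every iteration):
process $j >> logfile 2>> errorfile &
waitPIDS+=($!)
With the backticks, bash runs `process $j &` in a subshell and then tries to execute its output as a command, which is likely where the "Command not found" errors come from; it also means $! is never set in the parent, so the later wait calls degenerate to a bare wait that returns 0 and prints success.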

Why is my shell script stopped in the background until I bring it back to the foreground?

I have a shell script which is executing a php script (worker for beanstalkd).
Here is the script:
#!/bin/bash
if [ $# -eq 0 ]
then
    echo "You need to specify an argument"
    exit 0;
fi
CMD="/var/webserver/user/bin/console $@";
echo "$CMD";
nice $CMD;
ERR=$?
## Possibilities
# 97 - planned pause/restart
# 98 - planned restart
# 99 - planned stop, exit.
# 0  - unplanned restart (as returned by "exit;")
#    - Anything else is also an unplanned pause/restart
if [ $ERR -eq 97 ]
then
    # a planned pause, then restart
    echo "97: PLANNED_PAUSE - wait 1";
    sleep 1;
    exec $0 $@;
fi
if [ $ERR -eq 98 ]
then
    # a planned restart - instantly
    echo "98: PLANNED_RESTART";
    exec $0 $@;
fi
if [ $ERR -eq 99 ]
then
    # planned complete exit
    echo "99: PLANNED_SHUTDOWN";
    exit 0;
fi
If I execute the script manually, like this:
[user@host]$ ./workers.sh
it works perfectly: I can see the output of my PHP script.
But if I detach the process from the console, like this:
[user@host]$ ./workers.sh &
it's not working anymore. However, I can see the process in the background:
[user@host]$ jobs
[1]+ Stopped ./workers.sh email
The queue jobs server is filling up with jobs, and none of them are processed until I bring the detached script into the foreground, like this:
[user@host]$ fg
At that moment I see all the jobs being processed by my PHP script. I have no idea why this is happening. Could you help, please?
Thanks, Maxime
EDIT:
I've created a shell script to run x workers; I'm sharing it here. Not sure it's the best way to do it, but it's working well at the moment:
#!/bin/bash
WORKER_PATH="/var/webserver/user/workers.sh"
declare -A Queue
Queue[email]=2
Queue[process-images]=5
for key in "${!Queue[@]}"
do
    echo "Launching ${Queue[$key]} instance(s) of $key Worker..."
    CMD="$WORKER_PATH $key"
    for (( l=1; l<=${Queue[$key]}; l++ ))
    do
        INSTANCE="$CMD $l"
        echo "Launching instance $INSTANCE"
        nice $INSTANCE > /dev/null 2> /dev/null &
    done
done
A background process is stopped (SIGTTIN) when it tries to read from the terminal, and, if stty tostop is in effect, stopped (SIGTTOU) when it tries to write to it; your script writes to the terminal with its echo statements. You just need to redirect standard output to a file when you put it in the background:
[user@host]$ ./workers.sh > workers.output 2> workers.error &
(I've redirected standard error as well, just to be safe.)
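If the job still shows up as Stopped, it is usually because something in it reads from standard input; redirecting stdin as well rules that out (a sketch):
[user@host]$ ./workers.sh < /dev/null > workers.output 2> workers.error &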

How to check in a bash script if something is running and exit if it is

I have a script that runs every 15 minutes, but sometimes, if the box is busy, it hangs, and the next process starts before the first one has finished, creating a snowball effect. How can I add a couple of lines to the bash script to check whether something is running first, before starting?
You can use pidof -x if you know the process name, or kill -0 if you know the PID.
Example:
if pidof -x vim > /dev/null
then
echo "Vim already running"
exit 1
fi
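And a sketch of the kill -0 variant, assuming the PID was saved to a pidfile (the path here is hypothetical):
if [ -f /var/run/myscript.pid ] && kill -0 "$(cat /var/run/myscript.pid)" 2> /dev/null
then
    echo "Already running"
    exit 1
fi
echo $$ > /var/run/myscript.pid
kill -0 sends no signal at all; it only tests whether the process exists (and whether you may signal it).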
Why not set a lock file?
Something like
yourapp.lock
Just remove it when your process is finished, and check for it before launching:
if [ -f yourapp.lock ]; then
    echo "The process is already launched, please wait..."
    exit 1
fi
In lieu of pidfiles, as long as your script has a uniquely identifiable name you can do something like this:
#!/bin/bash
COMMAND=$(basename "$0")
# exit if I am already running
RUNNING=$(ps --no-headers -C "${COMMAND}" | wc -l)
if [ ${RUNNING} -gt 1 ]; then
    echo "Previous ${COMMAND} is still running."
    exit 1
fi
... rest of script ...
pgrep -f yourscript >/dev/null && exit
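One caveat: run from inside the script itself, pgrep -f will also match the current instance (and the subshell performing the command substitution), so a common trick is to test whether we are the oldest matching process (a sketch):
[ "$(pgrep -of yourscript)" = "$$" ] || exit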
This is how I do it in one of my cron jobs:
lockfile=~/myproc.lock
minutes=60
if [ -f "$lockfile" ]
then
    filestr=$(find "$lockfile" -mmin +$minutes -print)
    if [ "$filestr" = "" ]; then
        echo "Lockfile is not older than $minutes minutes! Another $0 running. Exiting ..."
        exit 1
    else
        echo "Lockfile is older than $minutes minutes, ignoring it!"
        rm "$lockfile"
    fi
fi
echo "Creating lockfile $lockfile"
touch "$lockfile"
and delete the lock file at the end of the script:
echo "Removing lock $lockfile ..."
rm "$lockfile"
For a method that does not suffer from parsing bugs and race conditions, check out:
BashFAQ/045 - How can I ensure that only one instance of a script is running at a time (mutual exclusion)?
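One of the patterns described there relies on flock(1) from util-linux; a minimal sketch (the lock path and FD number are arbitrary choices):
exec 200>/tmp/myscript.lock
flock -n 200 || { echo "Another instance is running"; exit 1; }
# ... rest of the script; the kernel releases the lock when the script exits ...
Because the lock dies with the process, there is no stale-lockfile problem to handle.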
I recently had the same question and found from the answers above that kill -0 is best for my case:
echo "Starting process..."
run-process > $OUTPUT &
pid=$!
echo "Process started pid=$pid"
while true; do
    kill -0 $pid 2> /dev/null || { echo "Process exit detected"; break; }
    sleep 1
done
echo "Done."
To expand on what @bgy says, the safe atomic way to create a lock file if it doesn't exist yet, and fail if it does, is to create a temp file, then hard-link it to the standard lock file. This protects against another process creating the file in between you testing for it and you creating it.
Here is the lock file code from my hourly backup script:
echo $$ > /tmp/lock.$$
if ! ln /tmp/lock.$$ /tmp/lock ; then
    echo "previous backup in process"
    rm /tmp/lock.$$
    exit
fi
Don't forget to delete both the lock file and the temp file when you're done, even if you exit early through an error.
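One way to guarantee that cleanup even when the script dies early is an EXIT trap, set right after taking the lock (a sketch using the same paths):
trap 'rm -f /tmp/lock /tmp/lock.$$' EXIT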
Use this script:
FILE="/tmp/my_file"
if [ -f "$FILE" ]; then
    echo "Still running"
    exit
fi
trap "rm -f $FILE" EXIT
touch "$FILE"
...script here...
This script will create the file and remove it on exit.
