Problem with pidof in Bash script

I've written a script for me to start and stop my Perforce server. To shut down the server I use the kill -SIGTERM command with the PID of the server daemon. It works as it should, but there are some discrepancies in the script's output behavior.
The script looks as follows:
#!/bin/sh -e
export P4JOURNAL=/var/log/perforce/journal
export P4LOG=/var/log/perforce/p4err
export P4ROOT=/var/local/perforce_depot
export P4PORT=1666
PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"
. /lib/lsb/init-functions
p4start="p4d -d"
p4stop="p4 admin stop"
p4user=perforce
case "$1" in
start)
log_action_begin_msg "Starting Perforce Server"
daemon -u $p4user -- $p4start;
echo "\n"
;;
stop)
echo "BLABLA"
echo "$(pidof /usr/local/bin/p4d)"
#daemon -u $p4user -- $p4stop;
p4dPid="$(pidof /usr/local/bin/p4d)"
echo $p4dPid
if [ -z "$(pidof /usr/local/bin/p4d)" ]; then
echo "ERROR: No Perforce Server running!"
else
echo "SUCCESS: Found Perforce Server running!\n\t"
echo "Shutting down Perforce Server..."
kill -15 $p4dPid;
fi
echo "\n"
;;
restart)
stop
start
;;
*)
echo "Usage: /etc/init.d/perforce (start|stop|restart)"
exit 1
;;
esac
exit 0
When p4d is running, the stop block works as intended, but when no p4d is running, calling the script with stop only outputs BLABLA and an empty new line (because of the echo "$(pidof /usr/local/bin/p4d)"). The error message stating that no server is running is never printed. What am I doing wrong here?
PS: The part if [ -z "$(pidof /usr/local/bin/p4d)" ]; then has been changed from if [ -z "$p4dPid" ]; then for debug reasons.
EDIT: I narrowed down the problem. If I don't use the p4dPid variable and comment out the lines p4dPid="$(pidof /usr/local/bin/p4d)" and echo $p4dPid, the if block is processed and the error message is printed. Still, I don't understand what is causing this behavior.
EDIT 2: Problem solved!
The -e in #!/bin/sh -e was causing the shell to exit the script after any statement returning a non-zero return value.

When your service is not running, the command
echo "$(pidof /usr/local/bin/p4d)"
is processed as
echo ""
because pidof did not output anything, so the command prints an empty line.
If you do not want this empty line, just remove that statement; after all, you already print an error message when the process is not running.

Problem solved!
The -e in #!/bin/sh -e was causing the shell to exit after any statement returning a non-zero return value.
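If you want to keep the -e safety net, one option is to guard the call that is allowed to fail. A minimal sketch of the stop branch, assuming the same paths as above:

# pidof exits non-zero when no matching process exists; "|| true" keeps
# "sh -e" from aborting the script on that status.
p4dPid="$(pidof /usr/local/bin/p4d || true)"
if [ -z "$p4dPid" ]; then
    echo "ERROR: No Perforce Server running!"
else
    echo "Shutting down Perforce Server..."
    kill -15 $p4dPid
fi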

Related

Has my bash script crashed?

I have a bash shell script, that should:
1) check for the existence of a file
2) If file exists exit script, otherwise create file
3) Set off a process
4) Check process has run correctly - and send result to a log file
5) Delete file
6) Exit script
if [ -f $PROPERTIES_HC ]
then
    # lockfile/propertiesfile exists so exit the script
    log --------- lockfile exists so operation cancelled at `date` ---------
    exit 1
else
    # no lockfile/propertiesfile so continue
    # create the lockfile/propertiesfile
    input="./$PROPERTIES_VAR"
    while IFS= read -r line || [ -n "$line" ]; do
        eval "echo $line" >> $PROPERTIES_HC
    done < $PROPERTIES_VAR
    #Run Process
    RUN_MY_PROCESS --overridefile $PROPERTIES_HC >> $LOG_FILE
    #Check Process Ran Okay
    if [ "$?" = "0" ]; then
        echo "RAN WITHOUT ERROR" >> $LOG_FILE
    else
        echo "SOME ERROR!" >> $LOG_FILE
    fi
    # Remove the lockfile/propertiesfile
    rm -rf $PROPERTIES_HC
fi
This script seemed to have been running fine, however recently I came across a situation where the "RUN_MY_PROCESS" element of the script failed, and the script seems to have simply exited leaving the lockfile in place.
As I understand it, unless I set something like #!/bin/sh -e, the script should run on regardless of errors. Have I misunderstood how shell error handling works (I am new to this!), or has my shell script itself crashed, which is why it didn't finish running?
Thanks in advance for any help.
The proper way to handle errors inside your script (i.e. errors that cause your script to crash) is through traps.
You could modify your script as follows:
if [ -f $PROPERTIES_HC ]
#your regular script here
#...
#Run Process
trap 'echo "SOME ERROR" >> $LOG_FILE && rm -rf $PROPERTIES_HC' ERR
RUN_MY_PROCESS --overridefile $PROPERTIES_HC >> $LOG_FILE
#rest of your script here
#....
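A complementary sketch (assuming bash and the same variable names as the question): an EXIT trap fires on any exit, so the lockfile is removed even when the script dies early, and the status of RUN_MY_PROCESS can still be checked directly:

# Remove the lockfile no matter how the script exits.
trap 'rm -f "$PROPERTIES_HC"' EXIT
if RUN_MY_PROCESS --overridefile "$PROPERTIES_HC" >> "$LOG_FILE"; then
    echo "RAN WITHOUT ERROR" >> "$LOG_FILE"
else
    echo "SOME ERROR!" >> "$LOG_FILE"
fi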

Is there any better way to run a script as a daemon which continuously polls a directory to check for the presence of a file?

I have to continuously check if a file is present in a particular directory. I am doing this with the filecopy.sh script:
#!/bin/bash
while true;
do
    if [ -f /var/tmp/*.*cim ]; then
        echo "Checking the file available in the path"
        mv /var/tmp/*.*cim /etc/opt/maptranslator/ss7
        /etc/init.d/ss7-stack restart
    else
        continue;
    fi
done
I want the filecopy.sh script to run as a daemon. I wrote the following script:
#!/bin/bash
case "$1" in
start)
/etc/init.d/filecopy.sh &
echo $!>/var/run/filecopy.pid
;;
stop)
kill `cat /var/run/filecopy.pid`
rm /var/run/filecopy.pid
;;
restart)
$0 stop
$0 start
;;
status)
if [ -e /var/run/filecopy.pid ]; then
echo filecopy.sh is running, pid=`cat /var/run/filecopy.pid`
else
echo filecopy.sh is NOT running
exit 1
fi
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
I would like to know if there is any better way to achieve this.
Write a small C program that calls inotify(7).
See http://man7.org/linux/man-pages/man7/inotify.7.html or man 7 inotify
In this case you are waiting for /var/tmp to change.
However you really really shouldn't be using /var/tmp at all unless you want some random user to hose your protected area. File a security bug against the process that created its pid file in /var/tmp.
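If you would rather stay in shell, the same event-driven idea is available through inotifywait from the inotify-tools package. A sketch, assuming that package is installed and reusing the paths from the question (subject to the /var/tmp caveat above):

#!/bin/bash
# Block until something is created in or moved into the watched directory,
# then handle any matching files. No busy loop, no polling.
while inotifywait -e create -e moved_to /var/tmp; do
    for f in /var/tmp/*.*cim; do
        [ -e "$f" ] || continue
        mv "$f" /etc/opt/maptranslator/ss7/
        /etc/init.d/ss7-stack restart
    done
done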

Shell script create unexpected file "start" on starting process

I've found and modified a simple shell script to start/stop a jar, but when launching the script it creates an extra empty start file.
I cannot understand why. Any clue?
#!/bin/bash
case $1 in
    start)
        if [[ -e myprog.pid ]]
        then
            echo "myprog.pid found. Is myprog already running?"
        else
            exec java -jar myprog-0.0.1-SNAPSHOT.jar 1>/dev/null 2>$1 &
            echo $! > myprog.pid;
        fi
        ;;
    stop)
        kill $(cat myprog.pid);
        rm myprog.pid
        ;;
    *)
        echo "usage: myprog {start|stop}" ;;
esac
exit 0
Your problem is 2>$1. That's a typo.
You meant 2>&1.
What you wrote is expanded by the shell as 2>start and creates your file.
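With the typo fixed, the relevant lines of the start branch would read:

exec java -jar myprog-0.0.1-SNAPSHOT.jar 1>/dev/null 2>&1 &
echo $! > myprog.pid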

Bash command substitution stdout+stderr redirect

Good day. I have a series of commands that I wanted to execute via a function so that I could get the exit code and perform console output accordingly. With that being said, I have two issues here:
1) I can't seem to direct stderr to /dev/null.
2) The first echo line is not displayed until the $1 is executed. It's not really noticeable until I run commands that take a while to process, such as searching the hard drive for a file. Additionally, it's obvious that this is the case, because the output looks like:
sh-3.2# ./runScript.sh
sh-3.2# com.apple.auditd: Already loaded
sh-3.2# Attempting... Enable Security Auditing ...Success
In other words, the stderr was displayed before "Attempting... $2"
Here is the function I am trying to use:
#!/bin/bash
function saveChange {
    echo -ne "Attempting... $2"
    exec $1
    if [ "$?" -ne 0 ]; then
        echo -ne " ...Failure\n\r"
    else
        echo -ne " ...Success\n\r"
    fi
}
saveChange "$(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)" "Enable Security Auditing"
Any help or advice is appreciated.
this is how you redirect stderr to /dev/null
command 2> /dev/null
e.g.
ls -l 2> /dev/null
Your second part (i.e. the ordering of the echo) may be because of this command substitution you have while invoking the script: $(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)
The first echo line is displayed later because it is executed second: the $(...) runs the command while the arguments are being built, before the function body starts. Try the following:
#!/bin/bash
function saveChange {
    echo -ne "Attempting... $2"
    err=$($1 2>&1)
    if [ -z "$err" ]; then
        echo -ne " ...Success\n\r"
    else
        echo -ne " ...Failure\n\r"
        exit 1
    fi
}
saveChange "launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist" "Enable Security Auditing"
EDIT: Noticed that launchctl does not actually set $? on failure, so the code captures STDERR to detect the error instead.
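For commands that do report failure through their exit status (launchctl apparently does not, as noted above), a variant like the following sketch keeps the status check and avoids the word-splitting pitfalls of exec $1: pass the command words as separate arguments and run them with "$@":

#!/bin/bash
saveChange() {
    local label=$1; shift
    echo -n "Attempting... $label"
    # Run the remaining arguments as the command, discarding its stderr.
    if "$@" 2>/dev/null; then
        echo " ...Success"
    else
        echo " ...Failure"
    fi
}

saveChange "Enable Security Auditing" launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist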

How to check in a bash script if something is running and exit if it is

I have a script that runs every 15 minutes, but sometimes, if the box is busy, it hangs, and the next run starts before the first one has finished, creating a snowball effect. How can I add a couple of lines to the bash script to check whether it is already running before starting?
You can use pidof -x if you know the process name, or kill -0 if you know the PID.
Example:
if pidof -x vim > /dev/null
then
    echo "Vim already running"
    exit 1
fi
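The kill -0 variant is useful when you already record the PID in a file (a sketch; the pidfile path is just an example):

# kill -0 sends no signal; it only checks that the PID exists and
# that we are allowed to signal it.
if [ -f /var/run/myscript.pid ] && kill -0 "$(cat /var/run/myscript.pid)" 2>/dev/null; then
    echo "myscript already running"
    exit 1
fi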
Why not set a lock file?
Something like
yourapp.lock
Just remove it when your process is finished, and check for it before launching it.
It could be done using
if [ -f yourapp.lock ]; then
    echo "The process is already launched, please wait..."
fi
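To turn that into a complete pattern you also need to exit when the lock is present, create it before the real work, and remove it afterwards, for example (a sketch):

if [ -f yourapp.lock ]; then
    echo "The process is already launched, please wait..."
    exit 1
fi
touch yourapp.lock
# ... do the real work here ...
rm -f yourapp.lock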
In lieu of pidfiles, as long as your script has a uniquely identifiable name you can do something like this:
#!/bin/bash
COMMAND=$0
# exit if I am already running
RUNNING=`ps --no-headers -C${COMMAND} | wc -l`
if [ ${RUNNING} -gt 1 ]; then
    echo "Previous ${COMMAND} is still running."
    exit 1
fi
... rest of script ...
pgrep -f yourscript >/dev/null && exit
This is how I do it in one of my cron jobs
lockfile=~/myproc.lock
minutes=60
if [ -f "$lockfile" ]
then
filestr=`find $lockfile -mmin +$minutes -print`
if [ "$filestr" = "" ]; then
echo "Lockfile is not older than $minutes minutes! Another $0 running. Exiting ..."
exit 1
else
echo "Lockfile is older than $minutes minutes, ignoring it!"
rm $lockfile
fi
fi
echo "Creating lockfile $lockfile"
touch $lockfile
and delete the lock file at the end of the script
echo "Removing lock $lockfile ..."
rm $lockfile
For a method that does not suffer from parsing bugs and race conditions, check out:
BashFAQ/045 - How can I ensure that only one instance of a script is running at a time (mutual exclusion)?
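On Linux, flock(1) is a common way to get that kind of mutual exclusion; a minimal sketch:

# Take an exclusive, non-blocking lock on a dedicated lock file.
# If another instance already holds it, exit immediately.
exec 200>/var/lock/myscript.lock
flock -n 200 || { echo "Another instance is running."; exit 1; }
# ... rest of the script runs while the lock is held ...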
I recently had the same question and found from the above that kill -0 works best for my case:
echo "Starting process..."
run-process > $OUTPUT &
pid=$!
echo "Process started pid=$pid"
while true; do
    kill -0 $pid 2> /dev/null || { echo "Process exit detected"; break; }
    sleep 1
done
echo "Done."
To expand on what #bgy says, the safe atomic way to create a lock file if it doesn't exist yet, and fail if it already exists, is to create a temp file, then hard link it to the standard lock file. This protects against another process creating the file between your testing for it and your creating it.
Here is the lock file code from my hourly backup script:
echo $$ > /tmp/lock.$$
if ! ln /tmp/lock.$$ /tmp/lock ; then
    echo "previous backup in process"
    rm /tmp/lock.$$
    exit
fi
Don't forget to delete both the lock file and the temp file when you're done, even if you exit early through an error.
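One way to make sure of that is to register the cleanup in a trap right after the ln succeeds (a sketch, using the same paths as above):

# Once the lock is held, remove both files on any exit, including errors.
trap 'rm -f /tmp/lock /tmp/lock.$$' EXIT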
Use this script:
FILE="/tmp/my_file"
if [ -f "$FILE" ]; then
echo "Still running"
exit
fi
trap EXIT "rm -f $FILE"
touch $FILE
...script here...
This script will create a file and remove it on exit.
