How to prevent the same bash script from running more than once when it's called from another script? - bash

I have a script called "upcall" which calls 4 different scripts. In upcall I call them in the way shown below. The first part of the script works when I run the called script directly (bash upload_cloud1), but not when it's called from the script below. I'm sure there is a way to fix this, but I'm just not sure what it is. I currently have it set up in crontab to run every 15 minutes to check for used space.
#!/bin/bash
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then
echo "This script is already running with PID `pidof -x $(basename $0) -o %PPID`"
exit; fi
count=$(</opt/rclone/scripts/upcount)
size=$(df -k /dev/sda2 | tail -1 | awk '{print $3}')
if [ "$size" -gt "234003200" ]; then
bash /opt/rclone/scripts/upload_cloud${count}
else
echo "Not full yet"
fi
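One common way to guarantee a single instance in this setup is to take the lock in the crontab entry itself, so upcall never even starts while the previous run is still going. A sketch, assuming the util-linux flock command is installed and using /tmp/upcall.lock as an arbitrary lock path:
*/15 * * * * /usr/bin/flock -n /tmp/upcall.lock /opt/rclone/scripts/upcall
With the lock held around the whole upcall run, the upload_cloud scripts it calls don't need a guard of their own.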

Related

Detect if script is already running in bash script, and only restart if not

I'm trying to write a script that will check if a script is already running, and not run it from cron if it's still going from the last run. I found another post on here where they suggested using:
echo `pgrep -f $0` . "!=" . "$$";
if [[ `pgrep -f $0` != "$$" ]];
While this seems to work when I run it manually in SSH, it gives weird results when run via cron:
14767 14770 . != . 14770
Is this because there are 2 processes running with 2 different pids?
I have come up with this as an alternative:
if [ -n "$(ps -ef | grep -v grep | grep 'run.sh' | wc -l)" > 2 ];
then
echo "already running"
else
# do some stuff here
fi
Running the command on its own seems to work as expected:
# ps -ef | grep -v grep | grep 'run.sh' | wc -l
2
But when it's in the code, it always shows "already running", even though my condition is not met:
bash run.sh
2
already running
Maybe I'm doing something wrong with the variable as an int?
UPDATE: As suggested, I am trying flock:
#!/bin/bash
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$#" || :
#... rest of code here
But I get:
flock: failed to execute run.sh: No such file or directory
You could write your code like that, but it will be complex and error-prone. Better to use file locking. The flock command exists for this. Its man page provides various examples you can cut and paste, including:
#!/bin/bash
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$#" || :
# ... rest of code ...
This is useful boilerplate code for shell scripts. Put it at the top of the shell script you want to lock and it'll automatically lock itself on the first run. If the env var $FLOCKER is not set to the shell script that is being run, then execute flock and grab an exclusive non-blocking lock (using the script itself as the lock file) before re-execing itself with the right arguments. It also sets the FLOCKER env var to the right value so it doesn't run again.
man flock for details.
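An equivalent pattern, if you'd rather not use the script itself as the lock file, is to lock a dedicated file through a file descriptor. A sketch; the lock path and descriptor number 200 are arbitrary choices, not anything mandated by flock:
#!/bin/bash
exec 200>/tmp/myscript.lock                            # open (or create) the lock file on FD 200
flock -n 200 || { echo "already running"; exit 1; }    # non-blocking: bail out if another instance holds the lock
# ... rest of code; the lock is released automatically when the script exits ...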

How to check if another instance of my shell script is running

GNU bash, version 1.14.7(1)
I have a script called "abc.sh"
I have to check this from the abc.sh script itself...
Inside it I have written the following statement:
status=`ps -efww | grep -w "abc.sh" | grep -v grep | grep -v $$ | awk '{ print $2 }'`
if [ ! -z "$status" ]; then
echo "[`date`] : abc.sh : Process is already running"
exit 1;
fi
I know it's wrong, because it exits every time as it finds its own process in ps.
How do I solve it?
How can I check whether the script is already running from within that script itself?
An easier way to check for a process already executing is the pidof command.
if pidof -x "abc.sh" >/dev/null; then
echo "Process already running"
fi
Alternatively, have your script create a PID file when it executes. It's then a simple exercise of checking for the presence of the PID file to determine if the process is already running.
#!/bin/bash
# abc.sh
mypidfile=/var/run/abc.sh.pid
# Could add check for existence of mypidfile here if interlock is
# needed in the shell script itself.
# Ensure PID file is removed on program exit.
trap "rm -f -- '$mypidfile'" EXIT
# Create a file with current PID to indicate that process is running.
echo $$ > "$mypidfile"
...
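The interlock mentioned in the comment could look something like this sketch (kill -0 only tests whether the recorded PID is still alive, so this is not fully race-free):
#!/bin/bash
mypidfile=/var/run/abc.sh.pid
if [ -s "$mypidfile" ] && kill -0 "$(cat "$mypidfile")" 2>/dev/null; then
    echo "abc.sh is already running with PID $(cat "$mypidfile")"
    exit 1
fi
trap "rm -f -- '$mypidfile'" EXIT
echo $$ > "$mypidfile"
# ... rest of script ...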
Update:
The question has now changed to check from the script itself. In this case, we would expect to always see at least one abc.sh running. If there is more than one abc.sh, then we know that process is still running. I'd still suggest use of the pidof command which would return 2 PIDs if the process was already running. You could use grep to filter out the current PID, loop in the shell or even revert to just counting PIDs with wc to detect multiple processes.
Here's an example:
#!/bin/bash
for pid in $(pidof -x abc.sh); do
if [ $pid != $$ ]; then
echo "[$(date)] : abc.sh : Process is already running with PID $pid"
exit 1
fi
done
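The counting variant mentioned in the update might look like this sketch. The PIDs are captured without a pipe first, because a pipeline inside the command substitution would add an extra subshell to the count:
#!/bin/bash
pids=$(pidof -x abc.sh)         # capture first, with no pipe
count=$(echo $pids | wc -w)     # then count the PIDs
if [ "$count" -gt 1 ]; then
    echo "[$(date)] : abc.sh : Process is already running ($pids)"
    exit 1
fi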
If you want the "pidof" method, here is the trick:
if pidof -o %PPID -x "abc.sh">/dev/null; then
echo "Process already running"
fi
Where the -o %PPID parameter tells pidof to omit the PID of the calling shell or shell script. More info in the pidof man page.
Here's one trick you'll see in various places:
status=`ps -efww | grep -w "[a]bc.sh" | awk -vpid=$$ '$2 != pid { print $2 }'`
if [ ! -z "$status" ]; then
echo "[`date`] : abc.sh : Process is already running"
exit 1;
fi
The brackets around the [a] (or pick a different letter) prevent grep from finding itself. This makes the grep -v grep bit unnecessary. I also removed the grep -v $$ and fixed the awk part to accomplish the same thing.
Working solution:
if [[ `pgrep -f $0` != "$$" ]]; then
echo "Another instance of shell already exist! Exiting"
exit
fi
Edit: I checked out some recent comments, so I tried the same thing with some debugging. I will also explain it.
Explanation:
$0 gives filename of your running script.
$$ gives PID of your running script.
pgrep searches for process by name and returns PID.
pgrep -f $0 searches by filename, $0 being the current bash script filename and returns its PID.
So the check compares the PID that pgrep finds for your script's filename ($0) with the PID of the current script ($$). If they match, the script runs normally. If they don't, there's another PID with the same filename running, so it exits. The reason I used pgrep -f $0 instead of pgrep bash is that you could have multiple instances of bash running, which would return multiple PIDs. By filename, it returns only a single PID.
Exceptions:
Use bash script.sh, not ./script.sh, as the latter doesn't work unless you have a shebang.
Fix: Use a #!/bin/bash shebang at the beginning.
The reason sudo doesn't work is that pgrep returns the PIDs of both bash and sudo, instead of just the PID of bash.
Fix:
#!/bin/bash
pseudopid="`pgrep -f $0 -l`"
actualpid="$(echo "$pseudopid" | grep -v 'sudo' | awk -F ' ' '{print $1}')"
if [[ `echo $actualpid` != "$$" ]]; then
    echo "Another instance of shell already exist! Exiting"
    exit
fi
while true
do
    echo "Running"
    sleep 100
done
The script exits even if it isn't already running. That is because there's another process with that same filename. Try opening vim script.sh and then running bash script.sh; it'll fail because vim is open with the same filename.
Fix: Use a unique filename.
Someone please shoot me down if I'm wrong here
I understand that the mkdir operation is atomic, so you could create a lock directory
#!/bin/sh
lockdir=/tmp/AXgqg0lsoeykp9L9NZjIuaqvu7ANILL4foeqzpJcTs3YkwtiJ0
mkdir $lockdir || {
echo "lock directory exists. exiting"
exit 1
}
# take pains to remove lock directory when script terminates
trap "rmdir $lockdir" EXIT INT KILL TERM
# rest of script here
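One caveat with this approach: the trap never fires if the script is killed with SIGKILL, so a stale lock directory can be left behind. A sketch of a stale-lock check, with an illustrative lock path and a pid file kept inside the lock directory:
#!/bin/sh
lockdir=/tmp/myscript.lock
if ! mkdir "$lockdir" 2>/dev/null; then
    oldpid=$(cat "$lockdir/pid" 2>/dev/null)
    if [ -n "$oldpid" ] && kill -0 "$oldpid" 2>/dev/null; then
        echo "already running as PID $oldpid; exiting"
        exit 1
    fi
    rm -rf "$lockdir"             # owner is gone: clear the stale lock
    mkdir "$lockdir" || exit 1    # and try once more
fi
echo "$$" > "$lockdir/pid"
trap 'rm -rf "$lockdir"' EXIT INT TERM
# rest of script here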
Here's how I do it in a bash script:
if ps ax | grep $0 | grep -v $$ | grep bash | grep -v grep
then
echo "The script is already running."
exit 1
fi
This allows me to use this snippet for any bash script. I needed to grep for bash because when the script is run from cron, another process executes it using /bin/sh.
I find the answer from @Austin Phillips spot on. One small improvement I'd make is to add -o $$ (to ignore the PID of the script itself) and to match the script with basename (i.e. the same code can be put into any script):
if pidof -x "`basename $0`" -o $$ >/dev/null; then
echo "Process already running"
fi
pidof wasn't working for me, so I searched some more and came across pgrep:
for pid in $(pgrep -f my_script.sh); do
if [ $pid != $$ ]; then
echo "[$(date)] : my_script.sh : Process is already running with PID $pid"
exit 1
else
echo "Running with PID $pid"
fi
done
Taken in part from answers above and https://askubuntu.com/a/803106/802276
Use the ps command in a slightly different way to ignore child processes as well:
ps -eaf | grep -v grep | grep $PROCESS | grep -v $$
I create a temporary file during execution.
This is how I do it:
#!/bin/sh
# check if lock file exists
if [ -e /tmp/script.lock ]; then
echo "script is already running"
else
# create a lock file
touch /tmp/script.lock
echo "run script..."
#remove lock file
rm /tmp/script.lock
fi
I have found that using backticks to capture command output into a variable yields one ps aux result too many. For example, for a single running instance of abc.sh:
ps aux | grep -w "abc.sh" | grep -v grep | wc -l
returns "1". However,
count=`ps aux | grep -w "abc.sh" | grep -v grep | wc -l`
echo $count
returns "2"
It seems like the backtick construction somehow temporarily creates another process. That could be the reason why the original poster could not make this work. You just need to decrement the $count variable.
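A literal sketch of that decrement, assuming the check runs from a separate watcher script rather than from inside abc.sh itself (otherwise the running copy of abc.sh would also be counted):
#!/bin/bash
count=`ps aux | grep -w "abc.sh" | grep -v grep | wc -l`
count=$((count - 1))        # discount the extra copy observed with the backtick construction
if [ "$count" -ge 1 ]; then
    echo "abc.sh is already running"
    exit 1
fi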
I didn't want to hardcode abc.sh in the check, so I used the following:
MY_SCRIPT_NAME=`basename "$0"`
if pidof -o %PPID -x $MY_SCRIPT_NAME > /dev/null; then
echo "$MY_SCRIPT_NAME already running; exiting"
exit 1
fi
This is compact and universal
# exit if another instance of this script is running
for pid in $(pidof -x `basename $0`); do
[ $pid != $$ ] && { exit 1; }
done
The cleanest fastest way:
processAlreadyRunning () {
process="$(basename "${0}")"
pidof -x "${process}" -o $$ &>/dev/null
}
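A hypothetical call site for that helper (the function is repeated here so the snippet stands alone; everything besides the function itself is an illustration, not from the original answer):
#!/bin/bash
processAlreadyRunning () {
    process="$(basename "${0}")"
    pidof -x "${process}" -o $$ &>/dev/null
}

if processAlreadyRunning; then
    echo "Another instance of $(basename "$0") is already running; exiting"
    exit 1
fi
# ... rest of the script ...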
For other Unix variants (like AIX) that don't have pidof or pgrep: reliability is greatly improved by taking a "static" snapshot of the process table, as opposed to piping ps directly to grep. Setting IFS to the empty string preserves the newlines when the ps output is assigned to a variable.
#!/bin/ksh93
IFS=""
script_name=$(basename $0)
PSOUT="$(ps ax)"
ANY_TEXT=$(echo $PSOUT | grep $script_name | grep -vw $$ | grep $(basename $SHELL))
if [[ $ANY_TEXT ]]; then
echo "Process is already running"
echo "$ANY_TEXT"
exit
fi
[ "$(pidof -x $(basename $0))" != $$ ] && exit
https://github.com/x-zhao/exit-if-bash-script-already-running/blob/master/script.sh

bash script to monitor myself

I need to develop a shell script that will not start if another instance of itself is already running.
If I build a test.sh that monitors itself, I need to know whether it is already running and then abort; otherwise (if it is not already running) I can run:
#!/bin/bash
loop() {
    while [ 1 ]; do
        echo "run";
        #-- (... omitted ...)
        sleep 30
    done
}
daemon="`/bin/basename $0`"
pidlist=`/usr/bin/pgrep $daemon | grep -v $$`
echo "1:[ $pidlist ]"
pidlist=$(/usr/bin/pgrep $daemon | grep -v $$)
echo "2:[ $pidlist ]"
echo "3:[ `/usr/bin/pgrep $daemon | grep -v $$` ]"
echo "4:["
/usr/bin/pgrep $daemon | grep -v $$
echo "]"
if [ -z "$pidlist" ]; then
    loop &
else
    echo "Process $daemon is already running with pid [ $pidlist ]"
fi
exit 0;
When I run the above script for the first time (no previous instances running) I get this output:
1:[ 20341 ]
2:[ 20344 ]
3:[ 20347 ]
4:[
]
I cannot understand why only the 4th attempt returns nothing (as expected). What's wrong with my script?
Do I have to redirect the output of the 4th command to a temporary file and then query that file in order to decide whether I can run the loop function?
Thanks to anyone who can help me!
Sub-shells...the first three are run in sub-shells and hence $$ has changed to the PID of the sub-shell.
Try using:
PID=$$
pidlist=`/usr/bin/pgrep $daemon | grep -v $PID`
echo "1:[ $pidlist ]"
Etc. Since the value of $PID is established before the sub-shell is run, it should be the same for all of the commands.
Is this process going to be popular enough that other people want to run the same daemon on the machine? Maybe you never have multiple users on the machine, but remember that someone else might be wanting to run the command too.

how to create the option for printing out statements vs executing them in a shell script

I'm looking for a way to create a switch for this bash script so that I have the option of either printing (echoing) the command to stdout or executing it, for debugging purposes. As you can see below, I am currently doing this manually by commenting out one statement or the other.
Code:
#!/usr/local/bin/bash
if [ $# != 2 ]; then
echo "Usage: testcurl.sh <localfile> <projectname>" >&2
echo "sample:testcurl.sh /share1/data/20110818.dat projectZ" >&2
exit 1
fi
echo /usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
#/usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
I'm simply looking for an elegant, better way to create a switch from the command line: print or execute.
One possible trick, though it will only work for simple commands (e.g., no pipes or redirection (a)), is to use a prefix variable like this:
pax> cat qq.sh
${PAXPREFIX} ls /tmp
${PAXPREFIX} printf "%05d\n" 72
${PAXPREFIX} echo 3
What this does is insert your specific variable (PAXPREFIX in this case) before the commands. If the variable is empty, it will not affect the commands, as follows:
pax> ./qq.sh
my_porn.gz copy_of_the_internet.gz
00072
3
However, if it's set to echo, each command will be prefixed with echo and therefore printed rather than executed:
pax> PAXPREFIX=echo ./qq.sh
ls /tmp
printf %05d\n 72
echo 3
(a) The reason why it will only work for simple commands can be seen if you have something like:
${PAXPREFIX} ls -1 | tr '[a-z]' '[A-Z]'
When PAXPREFIX is empty, it will simply give you the list of your filenames in uppercase. When it's set to echo, it will result in:
echo ls -1 | tr '[a-z]' '[A-Z]'
giving:
LS -1
(not quite what you'd expect).
In fact, you can see a problem with even the simple case above, where %05d\n is no longer surrounded by quotes.
If you want a more robust solution, I'd opt for:
if [[ ${PAXDEBUG:-0} -eq 1 ]] ; then
echo /usr/bin/curl -c $PROXY --certkey $CERT --header ...
else
/usr/bin/curl -c $PROXY --certkey $CERT --header ...
fi
and use PAXDEBUG=1 myscript.sh to run it in debug mode. This is similar to what you have now but with the advantage that you don't need to edit the file to switch between normal and debug modes.
For debugging output from the shell itself, you can run it with bash -x or put set -x in your script to turn it on at a specific point (and, of course, turn it off with set +x).
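For instance, to trace just the curl invocation from the question (a sketch; the variables are the ones used in the question above):
#!/usr/local/bin/bash
# ... argument checks as above ...
set -x    # bash now prints each command, prefixed with +, before executing it
/usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL
set +x    # tracing off again for the rest of the script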
#!/usr/local/bin/bash
if [[ "$1" == "--dryrun" ]]; then
    echoquoted() {
        printf "%q " "$@"
        echo
    }
    maybeecho=echoquoted
    shift
else
    maybeecho=""
fi
if [ $# != 2 ]; then
    echo "Usage: testcurl.sh <localfile> <projectname>" >&2
    echo "sample:testcurl.sh /share1/data/20110818.dat projectZ" >&2
    exit 1
fi
$maybeecho /usr/bin/curl "$1" -o "$2"
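Hypothetical invocations, assuming the script above is saved as testcurl.sh:
bash testcurl.sh --dryrun /share1/data/20110818.dat projectZ    # prints the curl command instead of running it
bash testcurl.sh /share1/data/20110818.dat projectZ             # runs curl for real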
Try something like this:
show=echo
$show /usr/bin/curl ...
Then set/unset $show accordingly.
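For example, the toggle could be driven by an environment variable instead of editing the script (DRYRUN is an invented name; the curl line is the one from the question):
#!/usr/local/bin/bash
show=""
[ "${DRYRUN:-0}" = "1" ] && show=echo    # run as DRYRUN=1 ./testcurl.sh ... to print instead of execute
$show /usr/bin/curl -c $PROXY --certkey $CERT --header "Test:'${AUTH}'" -T $localfile $fsProxyURL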
This does not directly answer your specific question, but I guess you're trying to see what command gets executed for debugging. If you replace #!/usr/local/bin/bash with #!/usr/local/bin/bash -x, bash will run and echo the commands in your script.
I do not know of a way for "print vs execute" but I know of a way for "print and execute", and it is using "bash -x". See this link for example.

Continue script if only one instance is running? [duplicate]

This question already has answers here:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
Now this is embarrassing. I'm writing a quick script and I can't figure out why this statement doesn't work.
if [ $(pidof -x test.sh | wc -w) -eq 1 ]; then echo Passed; fi
I also tried using back-ticks instead of $() but it still wouldn't work.
Can you see what is wrong with it? pidof -x test.sh | wc -w returns 1 if I run it inside the script, so I don't see any reason why what is essentially if [ 1 -eq 1 ] wouldn't pass.
Thanks a lot!
Jefromi is correct; here is the logic I think you want:
#!/bin/bash
# this is "test.sh"
if [ $(pidof -x test.sh| wc -w) -gt 2 ]; then
echo "More than 1"
exit
fi
echo "Only one; doing whatever..."
Ah, the real answer: when you use a pipeline, you force the creation of a subshell. This will always cause you to get an increased number:
#!/bin/bash
echo "subshell:"
np=$(pidof -x foo.bash | wc -w)
echo "$np processes" # two processes
echo "no subshell:"
np=$(pidof -x foo.bash)
np=$(echo $np | wc -w)
echo "$np processes" # one process
I'm honestly not sure what the shortest way is to do what you really want to. You could avoid it all by creating a lockfile - otherwise you probably have to trace back via ppid to all the top-level processes and count them.
You don't have to pass the result of pidof to wc to count how many there are; use the shell:
r=$(pidof -x -o $$ test.sh)
set -- $r
if [ "${##}" -eq 1 ];then
echo "passed"
else
echo "no"
fi
If you use the -o option to omit the PID of the script ($$), then only the PID of the subshell and any other instances of the script (and any subshells they might spawn) will be considered, so the test will pass when there's only one instance:
if [ $(pidof -x -o $$ test.sh | wc -w) -eq 1 ]; then echo Passed; fi
Here's how I would do it:
if [ "`pgrep -c someprocess`" -gt "1" ]; then
echo "More than one process running"
else
echo "Multiple processes not running"
fi
If you don't want to use a lockfile ... you can try this:
#!/bin/bash
if [[ "$(ps -N -p $$ -o comm,pid)" =~ $'\n'"${0##*/}"[[:space:]] ]]; then
echo "aready running!"
exit 1
fi
PS: it might need adjustment for a weird ${0##*/}
Just check for the existence of any one (or more) process identified as test.sh; the return code will be 1 if none are found:
pidof -x test.sh >/dev/null && echo "Passed"
