There is a simple cron job:
@reboot /home/user/scripts/run.sh > /dev/null 2>&1
run.sh starts a binary (simple web server):
#!/usr/bin/env bash
NPID=/home/user/server/websrv   # file whose contents are matched against ps below
if [ ! -f "$NPID" ]
then
    echo "Not started"
    echo "Starting"
    nohup /home/user/server/websrv &> my_script.out &
else
    NUM=$(ps ax | grep "$(cat "$NPID")" | grep -v grep | wc -l)
    if [ "$NUM" -lt 1 ]
    then
        echo "Not working"
        echo "Starting"
        nohup /home/user/server/websrv &> my_script.out &
    else
        ps ax | grep "$(cat "$NPID")" | grep -v grep
        echo "All Ok"
    fi
fi
websrv receives JSON from the user and then runs the work.sh script itself.
The problem is that the sh script invoked by websrv "does not see" commands and stops with exit 1.
work.sh looks like this:
#!/bin/sh -e
if [ "$#" -ne 1 ]; then
echo "Usage: $0 INPUT"
exit 1
fi
cd $(dirname $0) #good!
pwd #good!
IN="$1"
echo $IN #good!
KEYFORGIT="/some/path"
eval `ssh-agent -s` #good!
which ssh-add #good! (returns /usr/bin/ssh-add)
ssh-add $KEYFORGIT/openssh #error: exit 1!
git pull #error: exit 1!
cd $(dirname $0) #good!
rm -f somefile #error: exit 1!
#############==========Etc.==============
Using full paths does not help.
If work.sh is executed by itself, it works.
If I run run.sh manually, it also works.
If I run the command nohup /home/user/server/websrv & it works as well.
However, when this whole chain of tools is started by cron on boot, work.sh cannot perform any command except cd, pwd, which, etc. Invoking ssh-add, git, cp, rm, make, etc. forces the script to exit with status 1. Why does it "not see" the commands? Unfortunately, I also cannot get any extended log which might explain the particular errors.
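One way to get the missing log is to trace work.sh and dump the environment cron provides; a minimal sketch (the /tmp log paths are arbitrary choices, not part of the original setup):
#!/bin/sh
# Debugging header (sketch): put this at the top of work.sh to trace every
# command and record the environment cron provides.
exec 2>/tmp/work.trace   # the set -x trace is written to stderr
set -x                   # echo each command before executing it
env > /tmp/work.env      # compare against `env` from an interactive shell
Comparing /tmp/work.env with the environment of an interactive shell usually shows immediately what cron is missing (typically PATH and SSH-related variables).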
Try adding the PATH from a session that runs the script correctly to the cron entry (or set it inside the script).
Get the current path (where the script runs fine) with echo $PATH and add it to the crontab, replacing the placeholder below with that output:
@reboot export PATH=$PATH:<REPLACE_WITH_OUTPUT_FROM_ABOVE>; /home/user/scripts/run.sh > /dev/null 2>&1
You can compare paths with a cron entry like this to see what cron's PATH is:
* * * * * echo $PATH > /tmp/crons_path
Then cat /tmp/crons_path to see what it says.
Example output:
$ crontab -l | grep -v \#
* * * * * echo $PATH >> /tmp/crons_path
# wait a minute or so...
$ cat /tmp/crons_path
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
$ echo $PATH
/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
As the commenter above mentioned, cron doesn't always use the same PATH as your user, so something is likely missing.
Be sure to remove the temp cron entry after testing (crontab -e, etc.)...
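Alternatively, as noted above, you can set PATH inside the script itself rather than in the crontab; a sketch (the directory list is only an example; substitute the output of echo $PATH from a session where the script works):
#!/usr/bin/env bash
# Example only: replace with the PATH from a shell where everything works.
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH"
# ... rest of run.sh ...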
Related
I'm trying to write a script that will check if a script is already running, and not run it from cron if it's still going from the last run. I found another post on here where they suggested using:
echo `pgrep -f $0` . "!=" . "$$";
if [[ `pgrep -f $0` != "$$" ]];
While this seems to work when I run it manually in SSH, it gives weird results when run via cron:
14767 14770 . != . 14770
Is this because there are 2 processes running with 2 different pids?
I have come up with this as an alternative:
if [ -n "$(ps -ef | grep -v grep | grep 'run.sh' | wc -l)" > 2 ];
then
echo "already running"
else
# do some stuff here
fi
Running the command on its own seems to work as expected:
# ps -ef | grep -v grep | grep 'run.sh' | wc -l
2
But when in the code, it always shows "already running" , even though my condition is not met:
bash run.sh
2
already running
Maybe I'm doing something wrong with the variable as an int?
UPDATE: As suggested, I am trying flock:
#!/bin/bash
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$#" || :
#... rest of code here
But I get:
flock: failed to execute run.sh: No such file or directory
You could write your code like that, but it will be complex and error-prone. Better to use file locking. The flock command exists for this. Its man page provides various examples you can cut and paste, including:
#!/bin/bash
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$#" || :
# ... rest of code ...
This is useful boilerplate code for shell scripts. Put it at
the top of the shell script you want to lock and it'll automatically
lock itself on the first run. If the env var $FLOCKER is
not set to the shell script that is being run, then execute
flock and grab an exclusive non-blocking lock (using the script
itself as the lock file) before re-execing itself with the right
arguments. It also sets the FLOCKER env var to the right value
so it doesn't run again.
man flock for details.
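If the boilerplate fails with flock: failed to execute run.sh: No such file or directory (which can happen when $0 is a bare relative name that the re-exec cannot resolve), a variant that locks a fixed file and skips the re-exec sidesteps the problem; a sketch, with the lock path being an arbitrary choice:
#!/bin/bash
# Sketch: hold an exclusive, non-blocking lock on a fixed path instead of $0.
exec 200>/tmp/run.sh.lock            # /tmp/run.sh.lock is an example path
flock -n 200 || { echo "already running"; exit 1; }
# ... rest of code ...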
I have a script called "upcall" which calls 4 different scripts. In upcall I call them in the way shown. The first part of the script works when I run the script directly (bash upload_cloud1), but does not when it's called from the script below. I'm sure there is a way to fix this, but I'm just not sure what it is. I currently have it set up in crontab to run every 15 minutes to check for used space.
#!/bin/bash
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then
echo "This script is already running with PID `pidof -x $(basename $0) -o %PPID`"
exit; fi
count=$(</opt/rclone/scripts/upcount)
size=$(df -k /dev/sda2 | tail -1 | awk '{print $3}')
if [ "$size" -gt "234003200" ]; then
bash /opt/rclone/scripts/upload_cloud${count}
else
echo "Not full yet"
fi
In a shell script, how do I echo all shell commands called and expand any variable names?
For example, given the following line:
ls $DIRNAME
I would like the script to run the command and display the following
ls /full/path/to/some/dir
The purpose is to save a log of all shell commands called and their arguments. Is there perhaps a better way of generating such a log?
set -x or set -o xtrace expands variables and prints a little + sign before the line.
set -v or set -o verbose does not expand the variables before printing.
Use set +x and set +v to turn off the above settings.
On the first line of the script, one can put #!/bin/sh -x (or -v) to have the same effect as set -x (or -v) later in the script.
The above also works with /bin/sh.
See the bash-hackers' wiki on set attributes, and on debugging.
$ cat shl
#!/bin/bash
DIR=/tmp/so
ls $DIR
$ bash -x shl
+ DIR=/tmp/so
+ ls /tmp/so
$
set -x will give you what you want.
Here is an example shell script to demonstrate:
#!/bin/bash
set -x #echo on
ls $PWD
This expands all variables and prints the full commands before the command's output.
Output:
+ ls /home/user/
file1.txt file2.txt
I use a function to echo and run the command:
#!/bin/bash
# Function to display commands
exe() { echo "\$ $@" ; "$@" ; }
exe echo hello world
Which outputs
$ echo hello world
hello world
For more complicated commands (pipes, etc.), you can use eval:
#!/bin/bash
# Function to display commands
exe() { echo "\$ ${@/eval/}" ; "$@" ; }
exe eval "echo 'Hello, World!' | cut -d ' ' -f1"
Which outputs
$ echo 'Hello, World!' | cut -d ' ' -f1
Hello
You can also toggle this for select lines in your script by wrapping them in set -x and set +x, for example,
#!/bin/bash
...
if [[ ! -e $OUT_FILE ]];
then
echo "grabbing $URL"
set -x
curl --fail --noproxy $SERV -s -S $URL -o $OUT_FILE
set +x
fi
shuckc's answer for echoing select lines has a few downsides: you end up with the following set +x command being echoed as well, and you lose the ability to test the exit code with $? since it gets overwritten by the set +x.
Another option is to run the command in a subshell:
echo "getting URL..."
( set -x ; curl -s --fail $URL -o $OUTFILE )
if [ $? -ne 0 ] ; then
    echo "curl failed"
    exit 1
fi
which will give you output like:
getting URL...
+ curl -s --fail http://example.com/missing -o /tmp/example
curl failed
This does incur the overhead of creating a new subshell for the command, though.
According to TLDP's Bash Guide for Beginners: Chapter 2. Writing and debugging scripts:
2.3.1. Debugging on the entire script
$ bash -x script1.sh
...
There is now a full-fledged debugger for Bash, available at SourceForge. These debugging features are available in most modern versions of Bash, starting from 3.x.
2.3.2. Debugging on part(s) of the script
set -x # Activate debugging from here
w
set +x # Stop debugging from here
...
Table 2-1. Overview of set debugging options
Short | Long notation | Result
-------+---------------+--------------------------------------------------------------
set -f | set -o noglob | Disable file name generation using metacharacters (globbing).
set -v | set -o verbose| Prints shell input lines as they are read.
set -x | set -o xtrace | Print command traces before executing command.
...
Alternatively, these modes can be specified in the script itself, by
adding the desired options to the first line shell declaration.
Options can be combined, as is usually the case with UNIX commands:
#!/bin/bash -xv
Another option is to put "-x" at the top of your script instead of on the command line:
$ cat ./server
#!/bin/bash -x
ssh user@server
$ ./server
+ ssh user@server
user@server's password: ^C
$
You can execute a Bash script in debug mode with the -x option.
This will echo all the commands.
bash -x example_script.sh
# Console output
+ cd /home/user
+ mv text.txt mytext.txt
You can also save the -x option in the script. Just specify the -x option in the shebang.
######## example_script.sh ###################
#!/bin/bash -x
cd /home/user
mv text.txt mytext.txt
##############################################
./example_script.sh
# Console output
+ cd /home/user
+ mv text.txt mytext.txt
Type "bash -x" on the command line before the name of the Bash script. For instance, to execute foo.sh, type:
bash -x foo.sh
Combining all the answers, I found this to be the best and simplest:
#!/bin/bash
# https://stackoverflow.com/a/64644990/8608146
exe(){
    set -x
    "$@"
    { set +x; } 2>/dev/null
}
# example
exe go generate ./...
{ set +x; } 2>/dev/null is from https://stackoverflow.com/a/19226038/8608146
If the exit status of the command is needed, as mentioned here, use
{ STATUS=$?; set +x; } 2>/dev/null
and use $STATUS later, e.g. exit $STATUS at the end.
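Putting those pieces together, a version of exe that traces the command but still returns its exit status might look like this (a sketch combining the snippets above):
#!/bin/bash
# Sketch: trace the command, silence the `set +x` line, keep the exit status.
exe() {
    set -x
    "$@"
    { STATUS=$?; set +x; } 2>/dev/null
    return $STATUS
}

exe false
echo "exit status: $?"   # prints: exit status: 1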
A slightly more useful one
#!/bin/bash
# https://stackoverflow.com/a/64644990/8608146
_exe(){
    [ "$1" == on ] && { set -x; return; } 2>/dev/null
    [ "$1" == off ] && { set +x; return; } 2>/dev/null
    echo + "$@"
    "$@"
}
exe(){
    { _exe "$@"; } 2>/dev/null
}
# examples
exe on # turn on same as set -x
echo This command prints with +
echo This too prints with +
exe off # same as set +x
echo This does not
# can also be used for individual commands
exe echo what up!
For zsh, use
setopt VERBOSE
and for debugging,
setopt XTRACE
To allow compound commands to be echoed, I use eval plus Soth's exe function to echo and run the command. This is useful for piped commands that would otherwise show nothing, or only the initial part of the piped command.
Without eval:
exe() { echo "\$ $@" ; "$@" ; }
exe ls -F | grep *.txt
Outputs:
$
file.txt
With eval:
exe() { echo "\$ $@" ; "$@" ; }
exe eval 'ls -F | grep *.txt'
Which outputs
$ eval ls -F | grep *.txt
file.txt
For csh and tcsh, you can set verbose or set echo (or you can even set both, but it may result in some duplication most of the time).
The verbose option prints pretty much the exact shell expression that you type.
The echo option is more indicative of what will be executed through spawning.
http://www.tcsh.org/tcsh.html/Special_shell_variables.html#verbose
http://www.tcsh.org/tcsh.html/Special_shell_variables.html#echo
Special shell variables
verbose
If set, causes the words of each command to be printed, after history substitution (if any). Set by the -v command line option.
echo
If set, each command with its arguments is echoed just before it is executed. For non-builtin commands all expansions occur before echoing. Builtin commands are echoed before command and filename substitution, because these substitutions are then done selectively. Set by the -x command line option.
$ cat exampleScript.sh
#!/bin/bash
name="karthik";
echo $name;
bash -x exampleScript.sh
Output is as follows:
+ name=karthik
+ echo karthik
karthik
I used crontab -e to schedule the execution of a shell script that makes ssh calls to a list of servers, gets information, and prints it to a file. The output of crontab -l is:
SHELL = /bin/sh
PATH = /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh
The script logs the output of echo "Beginning remote connections..." >> $logfile to the file; however, it does not log the output of the following loop:
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
Pastebin of the full script: http://pastebin.com/3vD7Bba0
Additional note: this script pushes the latest version of a management script, then ssh's into the remote server to execute it and capture the output. This works 100% of the time when run manually. Any assistance would be helpful, thanks!
You need to write SHELL=/bin/sh, and the same for PATH. The spaces around = are wrong.
Also, use full paths for files in your script when you call it from crontab:
From
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
to
while read servers
do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done < /path/to/hostnames.txt
^^^^^^^^^
Note the usage of while read; do ... done < file instead of the unnecessary for host in $(cat ...).
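If you would rather not hardcode /path/to/hostnames.txt, one option is to resolve it relative to the script itself; a sketch (it assumes hostnames.txt lives in the same directory as the script, and the log path is an example):
#!/bin/bash
# Sketch: resolve data files relative to this script so cron's working
# directory doesn't matter. Assumes hostnames.txt sits next to the script.
logfile=/tmp/dailystatus.log                  # example log path
script_dir="$(cd "$(dirname "$0")" && pwd)"
while read -r servers; do
    echo "Starting connection to $servers" >> "$logfile"
done < "$script_dir/hostnames.txt"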
GNU bash, version 1.14.7(1)
I have a script called "abc.sh".
I have to check this from the abc.sh script only...
Inside it I have written the following statement:
status=`ps -efww | grep -w "abc.sh" | grep -v grep | grep -v $$ | awk '{ print $2 }'`
if [ ! -z "$status" ]; then
echo "[`date`] : abc.sh : Process is already running"
exit 1;
fi
I know it's wrong, because it exits every time as it finds its own process in ps.
How do I solve it?
How can I check whether the script is already running, from within that script itself?
An easier way to check for a process already executing is the pidof command.
if pidof -x "abc.sh" >/dev/null; then
echo "Process already running"
fi
Alternatively, have your script create a PID file when it executes. It's then a simple exercise of checking for the presence of the PID file to determine if the process is already running.
#!/bin/bash
# abc.sh
mypidfile=/var/run/abc.sh.pid
# Could add check for existence of mypidfile here if interlock is
# needed in the shell script itself.
# Ensure PID file is removed on program exit.
trap "rm -f -- '$mypidfile'" EXIT
# Create a file with current PID to indicate that process is running.
echo $$ > "$mypidfile"
...
Update:
The question has now changed to check from the script itself. In this case, we would expect to always see at least one abc.sh running. If there is more than one abc.sh, then we know that process is still running. I'd still suggest use of the pidof command which would return 2 PIDs if the process was already running. You could use grep to filter out the current PID, loop in the shell or even revert to just counting PIDs with wc to detect multiple processes.
Here's an example:
#!/bin/bash
for pid in $(pidof -x abc.sh); do
    if [ $pid != $$ ]; then
        echo "[$(date)] : abc.sh : Process is already running with PID $pid"
        exit 1
    fi
done
If you want the "pidof" method, here is the trick:
if pidof -o %PPID -x "abc.sh" >/dev/null; then
    echo "Process already running"
fi
The -o %PPID option tells pidof to omit the PID of the calling shell or shell script. More info in the pidof man page.
Here's one trick you'll see in various places:
status=`ps -efww | grep -w "[a]bc.sh" | awk -vpid=$$ '$2 != pid { print $2 }'`
if [ ! -z "$status" ]; then
echo "[`date`] : abc.sh : Process is already running"
exit 1;
fi
The brackets around the [a] (or pick a different letter) prevent grep from finding itself. This makes the grep -v grep bit unnecessary. I also removed the grep -v $$ and fixed the awk part to accomplish the same thing.
Working solution:
if [[ "$(pgrep -f "$0")" != "$$" ]]; then
    echo "Another instance of shell already exists! Exiting"
    exit
fi
Edit: I checked out some comments lately, so I tried the same with some debugging. I will also explain it.
Explanation:
$0 gives the filename of your running script.
$$ gives the PID of your running script.
pgrep searches for processes by name and returns their PIDs.
pgrep -f $0 searches by filename, $0 being the current bash script's filename, and returns its PID.
So the check compares the PID found by pgrep -f $0 with the PID of the currently running script ($$). If they are equal, the script runs normally. If not, another PID with the same filename is running, so it exits. The reason I used pgrep -f $0 instead of pgrep bash is that you could have multiple instances of bash running, which would return multiple PIDs. Searching by filename returns only a single PID.
Exceptions:
Use bash script.sh, not ./script.sh, as the latter doesn't work unless you have a shebang.
Fix: Use a #!/bin/bash shebang at the beginning.
The reason sudo doesn't work is that pgrep returns the PIDs of both bash and sudo, instead of just the PID of bash.
Fix:
#!/bin/bash
pseudopid="`pgrep -f $0 -l`"
actualpid="$(echo "$pseudopid" | grep -v 'sudo' | awk -F ' ' '{print $1}')"
if [[ `echo $actualpid` != "$$" ]]; then
echo "Another instance of shell already exist! Exiting"
exit
fi
while true
do
echo "Running"
sleep 100
done
The script may also exit when it isn't actually running, if another process has the same filename on its command line. Try opening vim script.sh and then running bash script.sh; it will fail because vim is open with the same filename.
Fix: Use a unique filename.
Someone please shoot me down if I'm wrong here
I understand that the mkdir operation is atomic, so you could create a lock directory
#!/bin/sh
lockdir=/tmp/AXgqg0lsoeykp9L9NZjIuaqvu7ANILL4foeqzpJcTs3YkwtiJ0
mkdir $lockdir || {
    echo "lock directory exists. exiting"
    exit 1
}
# take pains to remove lock directory when script terminates
trap "rmdir $lockdir" EXIT INT KILL TERM
# rest of script here
Here's how I do it in a bash script:
if ps ax | grep $0 | grep -v $$ | grep bash | grep -v grep
then
    echo "The script is already running."
    exit 1
fi
This lets me use this snippet in any bash script. I needed to grep for bash because, when the script runs from cron, another process is created that executes it using /bin/sh.
I find the answer from @Austin Phillips spot on. One small improvement I'd make is to add -o $$ (to ignore the PID of the script itself) and to match the script with basename (i.e. the same code can be put into any script):
if pidof -x "`basename $0`" -o $$ >/dev/null; then
echo "Process already running"
fi
pidof wasn't working for me, so I searched some more and came across pgrep:
for pid in $(pgrep -f my_script.sh); do
    if [ $pid != $$ ]; then
        echo "[$(date)] : my_script.sh : Process is already running with PID $pid"
        exit 1
    else
        echo "Running with PID $pid"
    fi
done
Taken in part from answers above and https://askubuntu.com/a/803106/802276
Use the ps command in a slightly different way to ignore child processes as well:
ps -eaf | grep -v grep | grep $PROCESS | grep -v $$
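Wrapped in a guard, that might look like this (a sketch; PROCESS is a placeholder for the name of the script you are checking):
#!/bin/bash
PROCESS="abc.sh"   # placeholder: the name of the script to look for
if [ -n "$(ps -eaf | grep -v grep | grep "$PROCESS" | grep -v $$)" ]; then
    echo "$PROCESS is already running"
    exit 1
fi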
I create a temporary file during execution.
This is how I do it:
#!/bin/sh
# check if lock file exists
if [ -e /tmp/script.lock ]; then
    echo "script is already running"
else
    # create a lock file
    touch /tmp/script.lock
    echo "run script..."
    # remove lock file
    rm /tmp/script.lock
fi
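Note that there is a window between the -e test and the touch in which two instances can both pass the check. A variant that creates the lock file atomically with the shell's noclobber option (set -C) avoids that; a sketch, with the lock path being an example:
#!/bin/sh
# Sketch: atomic lock-file creation; the > redirection fails under set -C
# (noclobber) if the file already exists.
lockfile=/tmp/script.lock
if ( set -C; echo $$ > "$lockfile" ) 2>/dev/null; then
    trap 'rm -f "$lockfile"' EXIT
    echo "run script..."
else
    echo "script is already running"
    exit 1
fi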
I have found that using backticks to capture command output into a variable, adversely, yields one too many ps aux results, e.g. for a single running instance of abc.sh:
ps aux | grep -w "abc.sh" | grep -v grep | wc -l
returns "1". However,
count=`ps aux | grep -w "abc.sh" | grep -v grep | wc -l`
echo $count
returns "2"
It seems that the backtick construction somehow temporarily creates another process. This could be the reason why the original poster could not make this work. You just need to decrement the $count variable.
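With that adjustment, the check from inside the script would look something like this (a sketch of the decrement described above):
#!/bin/bash
# Sketch: subtract the extra match contributed by the command substitution's
# own subshell, per the observation above.
count=`ps aux | grep -w "abc.sh" | grep -v grep | wc -l`
count=$((count - 1))
if [ "$count" -gt 1 ]; then
    echo "abc.sh is already running"
    exit 1
fi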
I didn't want to hardcode abc.sh in the check, so I used the following:
MY_SCRIPT_NAME=`basename "$0"`
if pidof -o %PPID -x $MY_SCRIPT_NAME > /dev/null; then
    echo "$MY_SCRIPT_NAME already running; exiting"
    exit 1
fi
This is compact and universal:
# exit if another instance of this script is running
for pid in $(pidof -x `basename $0`); do
    [ $pid != $$ ] && { exit 1; }
done
The cleanest, fastest way:
processAlreadyRunning () {
    process="$(basename "${0}")"
    pidof -x "${process}" -o $$ &>/dev/null
}
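Usage might look like this (a sketch):
if processAlreadyRunning; then
    echo "$(basename "$0") is already running; exiting"
    exit 1
fi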
For other platforms (like AIX) that don't have pidof or pgrep, reliability is greatly improved by getting a "static" view of the process table, as opposed to piping it directly to grep. Setting IFS to null preserves the newlines when the ps output is assigned to a variable.
#!/bin/ksh93
IFS=""
script_name=$(basename $0)
PSOUT="$(ps ax)"
ANY_TEXT=$(echo $PSOUT | grep $script_name | grep -vw $$ | grep $(basename $SHELL))
if [[ $ANY_TEXT ]]; then
    echo "Process is already running"
    echo "$ANY_TEXT"
    exit
fi
[ "$(pidof -x $(basename $0))" != $$ ] && exit
https://github.com/x-zhao/exit-if-bash-script-already-running/blob/master/script.sh