Linux crontab doesn't launch a script - bash

I have this user crontab (accessed via the command crontab -e):
# m h dom mon dow command
*/3 * * * * sh /home/FRAPS/Desktop/cronCheck.sh
The script cronCheck.sh looks like this:
#!/bin/sh
SERVICE='Script'
if ps ax | grep -v grep | grep -i "$SERVICE" > /dev/null
then
echo "######## $SERVICE service running, everything is fine ##################\n" >> CronReport.txt
else
echo "$SERVICE is not running. Launching it now\n" >> CronReport.txt
perl Script.pl
fi
When I launch the script (cronCheck.sh) from its own directory, it works like a charm, but when cron launches it, it always reports "# $SERVICE service running, everything is fine ###"
even though 'Script' is not running.
Thanks,

Here's an even better way to write that conditional:
services=$(ps -e -o comm | grep -cFi "$SERVICE")
case "$services" in
(0)
# restart service
;;
(1)
# everything is fine
;;
(*)
# more than one copy is running
;;
esac
By using ps -e -o comm you avoid having to do the silly grep -v grep thing, because only the actual process name appears in the ps output, not the arguments. And grep -cFi counts up the matches and gives you a number, so you don't have to deal with the exit status of a pipeline.
Also, as other posters have implied, you should lead off this script by setting the PATH variable.
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH
You might or might not want to put /usr/local/bin at the beginning of that list, depending on your system. Don't do it if you don't need anything from there.
Final piece of advice: When writing scripts that will execute without user supervision (such as cron jobs), it's a good idea to put set -e at the beginning. That makes them exit unsuccessfully if any command fails.
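Putting those pieces together, a minimal sketch of the whole cron script could look like the following (the log path and the location of Script.pl are assumptions; adjust them for your system):
#!/bin/sh
set -e
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH

SERVICE='Script'
LOG=/tmp/CronReport.txt

# grep -c prints 0 but exits non-zero when nothing matches, so append || true
# to keep set -e from aborting the script in exactly the case we care about.
services=$(ps -e -o comm | grep -cFi "$SERVICE" || true)

case "$services" in
(0)
    echo "$SERVICE is not running. Launching it now" >> "$LOG"
    /usr/bin/perl /home/FRAPS/Desktop/Script.pl
    ;;
(1)
    echo "$SERVICE service running, everything is fine" >> "$LOG"
    ;;
(*)
    echo "More than one copy of $SERVICE is running" >> "$LOG"
    ;;
esac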

You need to put the grep -v grep after the grep -i "$SERVICE". The way you have it now, it's guaranteed to be true.

Checking the exit status of a pipeline like that can be problematic. You should either check the $PIPESTATUS array, or pipe the final grep into wc -l to count the number of matching lines.
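A rough sketch of both options (SERVICE is carried over from the question; the PIPESTATUS line is bash-only):
SERVICE='Script'

# Option 1: count matching lines instead of relying on the pipeline's exit status
count=$(ps ax | grep -v grep | grep -i "$SERVICE" | wc -l)
if [ "$count" -gt 0 ]; then
    echo "$SERVICE appears to be running"
else
    echo "$SERVICE is not running"
fi

# Option 2 (bash only): inspect the exit status of each stage after the pipeline runs
ps ax | grep -v grep | grep -qi "$SERVICE"
echo "exit status of the final grep: ${PIPESTATUS[2]}"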

cron typically does not set up a lot of the environment the way a user's login shell does. You may need to modify your script to get things set up properly.
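One common way to do that is to set the variables at the top of the crontab itself, which Vixie-style cron supports (the PATH value below is just an example):
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

*/3 * * * * sh /home/FRAPS/Desktop/cronCheck.sh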

Cron jobs don't get the same environment settings that you get at a shell prompt - those are generally set up by your shell on login - so you want to use absolute rather than relative paths throughout. (i.e. don't assume the PATH environment variable will exist or be set up the same as it is for you at a shell prompt, and don't assume the script will run with PWD set to your home directory, etc.) So:
in your crontab entry, replace sh with /bin/sh (or remove it entirely if cronCheck.sh is executable; the shebang line will do).
in cronCheck.sh add paths to the log file and the perl script.
cronCheck.sh should end up looking something like:
#!/bin/sh
SERVICE='Script'
if ps ax | grep -v grep | grep -i "$SERVICE" > /dev/null
then
echo "######## $SERVICE service running, everything is fine ##################\n" >> CronReport.txt
else
# Specify absolute path to a log file that's writeable for the user the
# cron runs as (probably you). Example: /tmp/CronReport.txt
echo "$SERVICE is not running. Launching it now\n" >> /tmp/CronReport.txt
# Specify absolute path to both perl and the script. Example: /usr/bin/perl
# and /home/FRAPS/scripts/Script.pl
/usr/bin/perl /home/FRAPS/scripts/Script.pl
fi
(Again you can get rid of the /usr/bin/perl bit if Script.pl is executable and has the path to the right perl in the shebang line.)

Related

cron script won't reboot as it should

I have a Raspberry Pi connected to a VPN via openvpn. Periodically, the connection drops, so I use the following script:
#!/bin/bash
ps -ef | grep -v grep | grep openvpn
if [ $? -eq 1 ] ; then
/sbin/shutdown -r now
fi
I added it to crontab (using sudo crontab -e), I want the script to be executed every 5 minutes:
*/5 * * * * /etc/openvpn/check.sh
The script doesn't work, but it still seems to be executed every five minutes:
tail /var/log/syslog | grep CRON
gives:
Mar 16 21:15:01 raspberrypi CRON[11113]: (root) CMD (/etc/openvpn/check.sh)
...
Moreover, when I run the script manually with sudo ./check.sh, the Pi reboots just like it should.
I don't really understand what's going on here.
Edit :
As suggested, I added the full path names and went from rebooting the Pi to restarting openvpn:
#!/bin/bash
if ! /bin/ps -ef | /bin/grep '[o]penvpn'; then
cd /etc/openvpn/
/usr/sbin/openvpn --config /etc/openvpn/config.ovpn
fi
The script still doesn't work, although it runs fine when I execute it myself. The script's permissions are 755, so that should be OK, right?
The path name of the script matches the final grep so it finds itself, and is satisfied.
The reason this didn't happen interactively was that you didn't run it with a full path.
This is (a twist on) a very common FAQ.
Tangentially, your script contains two very common antipatterns. You are reinventing pidof poorly, and you are examining $? explicitly. Unless you specifically require the exit code to be 1, you should simply be doing
if ! ps -ef | grep -q '[o]penvpn'; then
because the purpose of if is to run a command and examine its exit code; and notice also the trick to use a regex which doesn't match itself. But using pidof also lets you easily examine just the binary executable's file name, not its path.
I finally understood why the script didn't work. Since it was located under /etc/openvpn, the condition if ! ps -ef | grep -q '[o]penvpn' would never be true, because the script's own command line (/etc/openvpn/check.sh) matched the grep while it was running. I noticed it when I changed the crontab line to:
*/5 * * * * /etc/openvpn/check.sh >/home/pi/output 2>/home/pi/erroutput
the output file showed the /etc/openvpn/check.sh script being run.
The script now is:
#!/bin/bash
if ! pidof openvpn; then
cd /etc/openvpn/
/usr/sbin/openvpn --config /etc/openvpn/config.ovpn
fi
and this works just fine. Thank you all.

Issue with scheduling in Linux

I scheduled a script using at scheduler in linux.
The job ran fine but the echo statements which I had redirected to a file are no where to be found.
The at scheduling command is as follows:
at -f /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1 -v 09:50
Can anyone point out what the issue is with the above command?
I cannot see any echo statements from the script in the log.txt file
The redirection in that command line is applied to at itself by your current shell; it never becomes part of the scheduled job. To include shell syntax like I/O redirection, you'll need to either fold it into your script, or pass the input to at via standard input, like so:
at -v 09:50 <<EOF
sh /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1
EOF
If func_test.sh is already executable, you can omit the sh from the beginning of the command; it's there to ensure that you are passing a valid command line to at.
You can also simply ensure that your script itself redirects all its output to a specific log file. As an example,
#!/bin/bash
echo foo
echo bar
becomes
#!/bin/bash
{
echo foo
echo bar
} >> /app/data/log/log.txt 2>&1
Then you can simply run your script with at using
at -f /app/data/scripts/func_test.sh -v 09:50
with no output redirection, because the script itself already redirects all its output to that file.
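If you want to verify what at actually stored for a queued job, you can list the queue and dump a job's recorded environment and commands (at -c is available on most Linux at implementations; job number 5 is only an example):
atq        # or: at -l    lists pending jobs and their numbers
at -c 5    # prints the environment and command body stored for job 5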

Determining whether shell script was executed "sourcing" it

Is it possible for a shell script to test whether it was executed through source? That is, for example,
$ source myscript.sh
$ ./myscript.sh
Can myscript.sh distinguish from these different shell environments?
I think what Sam wants to do may not be possible.
To what degree a half-baked workaround is possible depends on...
...the default shell of users, and
...which alternative shells they are allowed to use.
If I understand Sam's requirement correctly, he wants to have a 'script',
myscript, that is...
...not directly executable via invoking it by its name myscript
(i.e. that has chmod a-x);
...not indirectly executable for users by invoking sh myscript or
invoking bash myscript
...only running its contained functions and commands if invoked by
sourcing it: . myscript
The first things to consider are these
Invoking a script directly by its name (myscript) requires a first line in
the script like #!/bin/bash or similar. This will directly determine which
installed instance of the bash executable (or symlink) will be invoked to run
the script's content. This will be a new shell process. It requires the
scriptfile itself to have the executable flag set.
Running a script by invoking a shell binary with the script's (path+)name as
an argument (sh myscript), is the same as '1.' -- except that the
executable flag does not need to be set, and said first line with the
hashbang isn't required either. The only thing needed is that the invoking
user needs read access to the scriptfile.
Invoking a script by sourcing its filename (. myscript) is very much the
same as '1.' -- except that it isn't a new shell that is invoked. All the
script's commands are executed in the current shell, using its environment
(and also "polluting" it with any new variables the script may set or
change). Usually this is a very dangerous thing to do, but here it could be
used to execute exit $RETURNVALUE under certain conditions....
For '1.':
Easy to achieve: chmod a-x myscript will prevent myscript from being
directly executable. But this will not fulfill requirements '2.' and '3.'.
For '2.' and '3.':
Much harder to achieve. Invocations by sh myscript require read
privileges for the file. So an obvious way out would seem to be chmod a-r
myscript. However, this will also disallow '3.': you will not be able to
source the script either.
So what about writing the script in a way that uses a Bashism? A Bashism is a
specific way to do something which other shells do not understand: using
specific variables, commands etc. This could be used inside the script to
discover this condition and "do something" about it (like "display warning.txt",
"mailto admin" etc.). But there is no way in hell that this will prevent sh or
bash or any other shell from reading and trying to execute all the following
commands/lines written into the script unless you kill the shell by invoking
exit.
Examples: in Bash, the environment seen by the script knows of $BASH,
$BASH_ARGV, $BASH_COMMAND, $BASH_SUBSHELL, $BASH_EXECUTION_STRING.... If
invoked by sh (also if sourced inside a sh), the executing shell will see
all these $BASH_* as empty environment variables. Again, this could be used
inside the script to discover this condition and "do something"... but not
prevent the following commands from being invoked!
I'm now assuming that...
...the script is using #!/bin/bash as its first line,
...users have set Bash as their shell and are invoking commands in the
following table from Bash and it is their login shell,
...sh is available and it is a symlink to bash or dash.
This will mean the following invocations are possible, with the listed values
for environment variables
 vars+invoc's  | ./scriptname | sh scriptname | bash scriptname | . scriptname
---------------+--------------+---------------+-----------------+-------------
 $0            | ./scriptname | ./scriptname  | ./scriptname    | -bash
 $SHLVL        | 2            | 1             | 2               | 1
 $SHELLOPTS    | braceexpand: | (empty)       | braceexpand:..  | braceexpand:
 $BASH         | /bin/bash    | (empty)       | /bin/bash       | /bin/bash
 $BASH_ARGV    | (empty)      | (empty)       | (empty)         | scriptname
 $BASH_SUBSHELL| 0            | (empty)       | 0               | 0
 $SHELL        | /bin/bash    | /bin/bash     | /bin/bash       | /bin/bash
 $OPTARG       | (empty)      | (empty)       | (empty)         | (empty)
Now you could put logic like this into your script:
If $0 is not equal to -bash, then do an exit $SOMERETURNVALUE.
In case the script was called via sh myscript or bash myscript, it will
exit the shell that was started to run it. In case it was sourced into the
current shell, it will continue to run. (Warning: in case the script has any other exit statements,
your current shell will be 'killed'...)
So putting something like the following near the beginning of your
non-executable myscript.txt may do something close to your goal:
echo BASH=$BASH
test x${BASH} = x/bin/bash && echo "$? : FINE.... You're using 'bash ...'"
test x${BASH} = x/bin/bash || echo "$? : RATS !!! -- You're not using BASH and I will kick you out!"
test x${BASH} = x/bin/bash || exit 42
test x"${0}" = x"-bash" && echo "$? : FINE.... You've sourced me, and I'm your login shell."
test x"${0}" = x"-bash" || echo "$? : RATS !!! -- You've not sourced me (or I'm not your bash login shell) and I will kick you out!"
test x"${0}" = x"-bash" || exit 33
This may or may not be what the asker wanted but, in a similar situation, I wanted a script to indicate that it is meant to be sourced and not directly run.
To achieve this effect my script reads:
#!/bin/echo Should be run as: source
export SOMEPATH="/some/path/on/my/system"
echo "Your environment has been set up"
So when I run it either as a command or sourced I get:
$ ./myscript.sh
Should be run as: source ./myscript.sh
$ source ./myscript.sh
Your environment has been set up
You can of course fool the script by running it as sh ./myscript.sh, but at least it gives the correct expected behaviour in 2 out of 3 cases.
This is what I was looking for:
[[ ${BASH_SOURCE[0]} = $0 ]] && main "$@"
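A minimal sketch of how that guard is typically used (the main function and its message are only illustrative):
#!/bin/bash

main() {
    echo "doing the real work with args: $*"
}

# Call main only when the file is executed; when sourced, just define the functions.
[[ ${BASH_SOURCE[0]} = "$0" ]] && main "$@"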
I cannot add a comment yet (Stack Exchange policies), so I am adding my own answer.
This one may work regardless of whether we do:
bash scriptname
scriptname
./scriptname.
on both bash and mksh.
if [ "${0##/*}" == scriptname ] # if the current name is our script
then
echo run
else
echo sourced
fi
If you have a non-altering file path for regular users, then:
if [ "$(/bin/readlink -f "$0")" = "$KNOWN_PATH_OF_THIS_FILE" ]; then
# the file was executed
else
# the file was sourced
fi
(it can also easily be loosened to only check for the filename or whatever).
But your users need to have read permission to be able to source the file, so absolutely nothing can stop them from doing what they want with the file. But it might help them out to not use it in the wrong way.
This solution is not dependent on Bashisms.
Yes it is possible. In general you can do the following:
#! /bin/bash
sourced () {
echo Sourced
}
executed () {
echo Executed
}
if [[ ${0##*/} == -* ]]; then
sourced
else
executed "$@"
fi
Giving the following output:
$ ./myscript
Executed
$ . ./myscript
Sourced
Based on Kurt Pfeifle’s answer, this works for me
if [ $SHLVL = 1 ]
then
echo 'script was sourced'
fi
Example
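An illustrative session (assuming the interactive shell you start from is at SHLVL 1: executing the script spawns a child shell where SHLVL becomes 2, while sourcing keeps it at 1):
$ ./myscript.sh        # runs in a child shell, prints nothing
$ . ./myscript.sh
script was sourced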
Since all of our machines have history, I did this:
check_script_call=$(history |tail -1|grep myscript.sh )
if [ -z "$check_script_call" ];then
echo "This file should be called as a source."
echo "Please, try again this way:"
echo "$ source /path/to/myscript.sh"
exit 1
fi
Every time you run a script (without source), your shell creates a new environment without history.
If you want to care about performance you can try this:
if ! history |tail -1|grep set_vars ;then
echo -e "This file should be called as a source.\n"
echo "Please, try again this way:"
echo -e "$ source /path/to/set_vars\n"
exit 1
fi
PS: I think Kurt's answer is much more complete but I think this could help.
In the second case, $0 will be "./myscript.sh". In the first (sourced) case, $0 typically remains the shell's own name (e.g. -bash), although some shells do set it to "myscript.sh". But, in general, there's no reliable way to tell that source was used.
If you tell us what you're trying to do, instead of how you want to do it, a better answer might be forthcoming.

How to determine the current interactive shell that I'm in (command-line)

How can I determine the current shell I am working on?
Would the output of the ps command alone be sufficient?
How can this be done in different flavors of Unix?
There are three approaches to finding the name of the current shell's executable:
Please note that all three approaches can be fooled if the executable of the shell is /bin/sh, but it's really a renamed bash, for example (which frequently happens).
Thus your second question of whether ps output will do is answered with "not always".
echo $0 - will print the program name... which in the case of the shell is the actual shell.
ps -ef | grep $$ | grep -v grep - this will look for the current process ID in the list of running processes. Since the current process is the shell, it will be included.
This is not 100% reliable, as you might have other processes whose ps listing includes the same number as shell's process ID, especially if that ID is a small number (for example, if the shell's PID is "5", you may find processes called "java5" or "perl5" in the same grep output!). This is the second problem with the "ps" approach, on top of not being able to rely on the shell name.
echo $SHELL - The path to the current shell is stored as the SHELL variable for any shell. The caveat for this one is that if you launch a shell explicitly as a subprocess (for example, it's not your login shell), you will get your login shell's value instead. If that's a possibility, use the ps or $0 approach.
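For a quick side-by-side check, the three probes can be typed at the prompt of the shell in question (exact output varies by system):
echo "$0"                        # approach 1: the shell's program name
ps -ef | grep $$ | grep -v grep  # approach 2: the shell's own entry in the process list
echo "$SHELL"                    # approach 3: the login shell recorded for the user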
If, however, the executable doesn't match your actual shell (e.g. /bin/sh is actually bash or ksh), you need heuristics. Here are some environmental variables specific to various shells:
$version is set on tcsh
$BASH is set on bash
$shell (lowercase) is set to actual shell name in csh or tcsh
$ZSH_NAME is set on zsh
ksh has $PS3 and $PS4 set, whereas the normal Bourne shell (sh) only has $PS1 and $PS2 set. This generally seems like the hardest to distinguish - the only difference in the entire set of environment variables between sh and ksh we have installed on Solaris boxen is $ERRNO, $FCEDIT, $LINENO, $PPID, $PS3, $PS4, $RANDOM, $SECONDS, and $TMOUT.
ps -p $$
should work anywhere that the solutions involving ps -ef and grep do (on any Unix variant which supports POSIX options for ps) and will not suffer from the false positives introduced by grepping for a sequence of digits which may appear elsewhere.
Try
ps -p $$ -oargs=
or
ps -p $$ -ocomm=
If you just want to ensure the user is invoking a script with Bash:
if [ -z "$BASH" ]; then echo "Please run this script $0 with bash"; exit; fi
or ref
if [ -z "$BASH" ]; then exec bash $0 ; exit; fi
You can try:
ps | grep `echo $$` | awk '{ print $4 }'
Or:
echo $SHELL
$SHELL need not always show the current shell. It only reflects the default shell to be invoked.
To test the above, say bash is the default shell, try echo $SHELL, and then in the same terminal, get into some other shell (KornShell (ksh) for example) and try $SHELL. You will see the result as bash in both cases.
To get the name of the current shell, use cat /proc/$$/cmdline. The path to the shell executable is given by readlink /proc/$$/exe.
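For example, on one Linux box (output is illustrative; /proc is Linux-specific, and cmdline uses NUL separators, hence the extra echo for a newline):
$ cat /proc/$$/cmdline ; echo
-bash
$ readlink /proc/$$/exe
/usr/bin/bash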
There are many ways to find out the shell and its corresponding version. Here are few which worked for me.
Straightforward
$> echo $0 (Gives you the program name. In my case the output was -bash.)
$> $SHELL (This takes you into the shell and in the prompt you get the shell name and version. In my case bash3.2$.)
$> echo $SHELL (This will give you executable path. In my case /bin/bash.)
$> $SHELL --version (This will give complete info about the shell software with license type)
Hackish approach
$> ******* (Type a set of random characters and in the output you will get the shell name. In my case -bash: chapter2-a-sample-isomorphic-app: command not found)
ps is the most reliable method. The SHELL environment variable is not guaranteed to be set and even if it is, it can be easily spoofed.
I have a simple trick to find the current shell. Just type a random string (which is not a command). It will fail and return a "not found" error, but at start of the line it will say which shell it is:
ksh: aaaaa: not found [No such file or directory]
bash: aaaaa: command not found
I have tried many different approaches and the best one for me is:
ps -p $$
It also works under Cygwin and, unlike PID grepping, cannot produce false positives. With some cleaning, it outputs just an executable name (under Cygwin with path):
ps -p $$ | tail -1 | awk '{print $NF}'
You can create a function so you don't have to memorize it:
# Print currently active shell
shell () {
ps -p $$ | tail -1 | awk '{print $NF}'
}
...and then just execute shell.
It was tested under Debian and Cygwin.
The following will always give the actual shell used - it gets the name of the actual executable and not the shell name (i.e. ksh93 instead of ksh, etc.). For /bin/sh, it will show the actual shell used, i.e. dash.
ls -l /proc/$$/exe | sed 's%.*/%%'
I know that there are many who say the ls output should never be processed, but what is the probability you'll have a shell you are using that is named with special characters or placed in a directory named with special characters? If this is still the case, there are plenty of other examples of doing it differently.
As pointed out by Toby Speight, this would be a more proper and cleaner way of achieving the same:
basename $(readlink /proc/$$/exe)
My variant on printing the parent process:
ps -p $$ | awk '$1 == PP {print $4}' PP=$$
Don't run unnecessary applications when AWK can do it for you.
Provided that your /bin/sh supports the POSIX standard and your system has the lsof command installed - a possible alternative to lsof could in this case be pid2path - you can also use (or adapt) the following script that prints full paths:
#!/bin/sh
# cat /usr/local/bin/cursh
set -eu
pid="$$"
set -- sh bash zsh ksh ash dash csh tcsh pdksh mksh fish psh rc scsh bournesh wish Wish login
unset echo env sed ps lsof awk getconf
# getconf _POSIX_VERSION # reliable test for availability of POSIX system?
PATH="`PATH=/usr/bin:/bin:/usr/sbin:/sbin getconf PATH`"
[ $? -ne 0 ] && { echo "'getconf PATH' failed"; exit 1; }
export PATH
cmd="lsof"
env -i PATH="${PATH}" type "$cmd" 1>/dev/null 2>&1 || { echo "$cmd not found"; exit 1; }
awkstr="`echo "$#" | sed 's/\([^ ]\{1,\}\)/|\/\1/g; s/ /$/g' | sed 's/^|//; s/$/$/'`"
ppid="`env -i PATH="${PATH}" ps -p $pid -o ppid=`"
[ "${ppid}"X = ""X ] && { echo "no ppid found"; exit 1; }
lsofstr="`lsof -p $ppid`" ||
{ printf "%s\n" "lsof failed" "try: sudo lsof -p \`ps -p \$\$ -o ppid=\`"; exit 1; }
printf "%s\n" "${lsofstr}" |
LC_ALL=C awk -v var="${awkstr}" '$NF ~ var {print $NF}'
My solution:
ps -o command | grep -v -e "\<ps\>" -e grep -e tail | tail -1
This should be portable across different platforms and shells. It uses ps like other solutions, but it doesn't rely on sed or awk and filters out junk from piping and ps itself so that the shell should always be the last entry. This way we don't need to rely on non-portable PID variables or picking out the right lines and columns.
I've tested on Debian and macOS with Bash, Z shell (zsh), and fish (which doesn't work with most of these solutions without changing the expression specifically for fish, because it uses a different PID variable).
If you just want to check that you are running (a particular version of) Bash, the best way to do so is to use the $BASH_VERSINFO array variable. As a (read-only) array variable it cannot be set in the environment,
so you can be sure it is coming (if at all) from the current shell.
However, since Bash has a different behavior when invoked as sh, you also need to check that the $BASH environment variable ends with /bash.
In a script I wrote that uses function names with - (not underscore), and depends on associative arrays (added in Bash 4), I have the following sanity check (with helpful user error message):
case `eval 'echo $BASH#${BASH_VERSINFO[0]}' 2>/dev/null` in
*/bash#[456789])
# Claims bash version 4+, check for func-names and associative arrays
if ! eval "declare -A _ARRAY && func-name() { :; }" 2>/dev/null; then
echo >&2 "bash $BASH_VERSION is not supported (not really bash?)"
exit 1
fi
;;
*/bash#[123])
echo >&2 "bash $BASH_VERSION is not supported (version 4+ required)"
exit 1
;;
*)
echo >&2 "This script requires BASH (version 4+) - not regular sh"
echo >&2 "Re-run as \"bash $CMD\" for proper operation"
exit 1
;;
esac
You could omit the somewhat paranoid functional check for features in the first case, and just assume that future Bash versions would be compatible.
None of the answers worked with fish shell (it doesn't have the variables $$ or $0).
This works for me (tested on sh, bash, fish, ksh, csh, true, tcsh, and zsh; openSUSE 13.2):
ps | tail -n 4 | sed -E '2,$d;s/.* (.*)/\1/'
This command outputs a string like bash. Here I'm only using ps, tail, and sed (without GNU extensions; try adding --posix to check it). They are all standard POSIX commands. I'm sure tail can be removed, but my sed-fu is not strong enough to do this.
It seems to me, that this solution is not very portable as it doesn't work on OS X. :(
echo $$ # Gives the process ID of the current shell
ps -ef | grep $$ | awk '{print $8}' # Use the PID to see what the process is.
From How do you know what your current shell is?.
This is not a very clean solution, but it does what you want.
# MUST BE SOURCED..
getshell() {
local shell="`ps -p $$ | tail -1 | awk '{print $4}'`"
shells_array=(
# It is important that the shells are listed in descending order of their name length.
pdksh
bash dash mksh
zsh ksh
sh
)
local suited=false
for i in ${shells_array[*]}; do
if ! [ -z `printf $shell | grep $i` ] && ! $suited; then
shell=$i
suited=true
fi
done
echo $shell
}
getshell
Now you can use $(getshell) --version.
This works, though, only on KornShell-like shells (ksh).
Do the following to know whether your shell is using Dash/Bash.
ls -la /bin/sh:
if the result is /bin/sh -> /bin/bash ==> Then your shell is using Bash.
if the result is /bin/sh ->/bin/dash ==> Then your shell is using Dash.
If you want to change from Bash to Dash or vice-versa, use the below code:
ln -s /bin/bash /bin/sh (change shell to Bash)
Note: If the above command results in an error saying /bin/sh already exists, remove /bin/sh and try again.
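A sketch of the full sequence; using ln -sf forces the link and avoids the "already exists" error, but be careful, since many system scripts depend on /bin/sh:
ls -la /bin/sh                  # see where /bin/sh currently points
sudo ln -sf /bin/bash /bin/sh   # repoint it at bash (use /bin/dash to go back)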
I like Nahuel Fouilleul's solution particularly, but I had to run the following variant of it on Ubuntu 18.04 (Bionic Beaver) with the built-in Bash shell:
bash -c 'shellPID=$$; ps -ocomm= -q $shellPID'
Without the temporary variable shellPID, e.g. the following:
bash -c 'ps -ocomm= -q $$'
Would just output ps for me. Maybe you aren't all using non-interactive mode, and that makes a difference.
Get it with the $SHELL environment variable. A simple sed could remove the path:
echo $SHELL | sed -E 's/^.*\/([a-zA-Z]+$)/\1/g'
Output:
bash
It was tested on macOS, Ubuntu, and CentOS.
On Mac OS X (and FreeBSD):
ps -p $$ -axco command | sed -n '$p'
Grepping PID from the output of "ps" is not needed, because you can read the respective command line for any PID from the /proc directory structure:
echo $(cat /proc/$$/cmdline)
However, that might not be any better than just simply:
echo $0
About running an actually different shell than the name indicates, one idea is to request the version from the shell using the name you got previously:
<some_shell> --version
sh seems to fail with exit code 2 while others give something useful (but I am not able to verify all since I don't have them):
$ sh --version
sh: 0: Illegal option --
echo $?
2
One way is:
ps -p $$ -o exe=
which is IMO better than using -o args or -o comm as suggested in another answer (these may use, e.g., some symbolic link like when /bin/sh points to some specific shell as Dash or Bash).
The above returns the path of the executable, but beware that due to /usr-merge, one might need to check for multiple paths (e.g., /bin/bash and /usr/bin/bash).
Also note that the above is not fully POSIX-compatible (POSIX ps doesn't have exe).
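A small sketch of that check which allows for either location (exe is the non-POSIX format specifier mentioned above, and the shells listed in the case are just examples):
exe=$(ps -p $$ -o exe=)
case "$exe" in
    /bin/bash|/usr/bin/bash) echo "this is bash" ;;
    /bin/dash|/usr/bin/dash) echo "this is dash" ;;
    *)                       echo "running: $exe" ;;
esac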
Kindly use the below command:
ps -p $$ | tail -1 | awk '{print $4}'
This one works well on Red Hat Linux (RHEL), macOS, BSD and some AIXes:
ps -T $$ | awk 'NR==2{print $NF}'
alternatively, the following one should also work if pstree is available,
pstree | egrep $$ | awk 'NR==2{print $NF}'
You can use echo $SHELL|sed "s/\/bin\///g"
And I came up with this:
sed 's/.*SHELL=//; s/[[:upper:]].*//' /proc/$$/environ

How to set the process name of a shell script?

Is there any way to set the process name of a shell script? This is needed for killing this script with the killall command.
Here's a way to do it; it is a hack/workaround, but it works pretty well. Feel free to tweak it to your needs; it certainly needs some checks on the symbolic link creation, or a tmp folder, to avoid possible race conditions (if they are problematic in your case).
Demonstration
wrapper
#!/bin/bash
script="./dummy"
newname="./killme"
rm -iv "$newname"
ln -s "$script" "$newname"
exec "$newname" "$#"
dummy
#!/bin/bash
echo "I am $0"
echo "my params: $#"
ps aux | grep bash
echo "sleeping 10s... Kill me!"
sleep 10
Test it using:
chmod +x dummy wrapper
./wrapper some params
In another terminal, kill it using:
killall killme
Notes
Make sure you can write in your current folder (current working directory).
If your current command is:
/path/to/file -q --params somefile1 somefile2
Set the script variable in wrapper to /path/to/file (instead of ./dummy) and call wrapper like this:
./wrapper -q --params somefile1 somefile2
You can use the kill command on a PID, so what you can do is run something in the background, get its ID, and kill it.
The PID of the last job run in the background can be obtained using $!.
echo test & echo $!
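As a sketch (long_running_task stands in for whatever command your script actually starts):
long_running_task &    # start it in the background
taskpid=$!             # $! holds the PID of the most recent background job
# ... later, when it should stop:
kill "$taskpid"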
You cannot do this reliably and portably, as far as I know. On some flavors of Unix, changing what's in argv[0] will do the job. I don't believe there's a way to do that in most shells, though.
Here are some references on the topic.
Howto change a UNIX process and child process name by modifying argv0
Is there a way to change the effective process name in Python?
This is an extremely old post. Pretty sure the original poster got his/her answer long ago. But for newcomers, I thought I'd explain my own experience (after playing with bash for half an hour). If you start a script by its name with something like:
./script.sh
the process name listed by ps will be "bash" (on my system). However if you start a script by calling bash directly:
/bin/bash script.sh
/bin/sh script.sh
bash script.sh
you will end up with a process name that contains the name of the script. e.g.:
/bin/bash script.sh
results in a process name of the same name. This can be used to mark pids with a specific script name. And, this can be useful to (for example) use the kill command to stop all processes (by pid) that have a process name containing said script name.
You can also use the -f flag to pgrep/pkill which will search the entire command line rather than just the process name. E.g.
./script &
pkill -f script
Include
#![path to shell]
Examples for the path to the shell:
/usr/bin/bash
/bin/bash
/bin/sh
Full example
#!/usr/bin/bash
On Linux at least, killall dvb works even though dvb is a shell script labelled with #!. The only trick is to make the script executable and invoke it by name, e.g.,
dvb watch abc write game7 from 9pm for 3:30
Running ps shows a process named
/usr/bin/lua5.1 dvb watch ...
but killall dvb takes it down.
%1, %2... also do an adequate job:
#!/bin/bash
# set -ex
sleep 101 &
FIRSTPID=$!
sleep 102 &
SECONDPID=$!
echo $(ps ax|grep "^\(${FIRSTPID}\|${SECONDPID}\) ")
kill %2
echo $(ps ax|grep "^\(${FIRSTPID}\|${SECONDPID}\) ")
sleep 1
kill %1
echo $(ps ax|grep "^\(${FIRSTPID}\|${SECONDPID}\) ")
I put these two lines at the start of my scripts so I do not have to retype the script name each time I revise the script. It won't take $0 if you put it after the first shebang. Maybe someone who actually knows can correct me, but I believe this is because the script hasn't started until the second line, so $0 doesn't exist until then:
#!/bin/bash
#!/bin/bash ./$0
This should do it.
My solution uses a trivial python script, and the setproctitle package. For what it's worth:
#!/usr/bin/env python3
from sys import argv
from setproctitle import setproctitle
from subprocess import run
setproctitle(argv[1])
run(argv[2:])
Call it e.g. run-with-title and stick it in your path somewhere. Then use via
run-with-title <desired-title> <script-name> [<arg>...]
Run the bash script with an explicit call to bash (not just ./test.sh). The process name will then contain the script name, so it can be found by that name, either with a plain call to bash or with the full path to bash, as suggested in display_name_11011's answer:
bash test.sh # explicit bash mentioning
/bin/bash test.sh # or with full path to bash
ps aux | grep test.sh | grep -v grep # searching PID by script name
If the first line in script (test.sh) explicitly specifies interpreter:
#!/bin/bash
echo 'test script'
then it can be called without explicit bash mentioning to create process with name '/bin/bash test.sh':
./test.sh
ps aux | grep test.sh | grep -v grep
Also as dirty workaround it is possible to copy and use bash with custom name:
sudo cp /usr/bin/bash /usr/bin/bash_with_other_name
/usr/bin/bash_with_other_name test.sh
ps aux | grep bash_with_other_name | grep -v grep
Erm... unless I'm misunderstanding the question, the name of a shell script is whatever you've named the file. If your script is named foo then killall foo will kill it.
We won't be able to find the PID of the shell script using "ps -ef | grep {scriptName}" unless the name of the script is made part of the process name via the shebang. All running shell scripts do show up in the output of "ps -ef | grep bash", but that makes it trickier to identify a particular process, since multiple bash processes may be running simultaneously.
So a better approach is to give an appropriate name to the shell script.
Edit the shell script file and use shebang (the very first line) to name the process e.g. #!/bin/bash /scriptName.sh
In this way we would be able to grep the process id of scriptName using
"ps -ef | grep {scriptName}"

Resources