Bash - execute commands in an opened program

Code looks like this:
#!/bin/bash
X=$(pgrep weechat)
re='^[0-9]+$'
if [[ $X =~ $re ]] ; then
    echo "process '$X' killed"
    `kill -9 $X`
else
    echo "no running weechat sessions"
fi
weechat
sleep .1
echo "/connect secure"
The last "echo" needs to write "/connect secure" and hit enter inside of the weechat program
How do you recommend I do this?

According to the WeeChat user's guide, a FIFO pipe is created that accepts commands "if option plugins.var.fifo.fifo is enabled (it is by default)".
Simply grab the PID of the weechat process (in order to find the FIFO file):
weechat &
weechat_pid=$!
printf '%s\n' "/connect secure" >~/.weechat/weechat_fifo_${weechat_pid}
I'd also suggest a change to your kill command as listed; writing it as
`kill -9 $X`
runs kill and then tries to execute whatever kill printed to stdout as a new command. You're simply trying to kill the process, so just run the kill command directly:
kill -9 "$X"
or, since you're already relying on pgrep, use pkill with the same pattern (pkill matches a process name, not a PID):
pkill -9 weechat
and since you only seem to be checking whether pgrep found a PID or not, just check its exit status:
pgrep weechat >/dev/null && pkill weechat
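Putting the pieces together, the whole flow could look like the sketch below (assuming the default FIFO plugin settings and the per-PID FIFO path mentioned above):
#!/bin/bash
# Kill any running weechat session; pgrep's exit status tells us if one exists.
if pgrep weechat >/dev/null; then
    echo "killing running weechat session(s)"
    pkill weechat
else
    echo "no running weechat sessions"
fi

# Start a fresh weechat and remember its PID (it is part of the FIFO file name).
weechat &
weechat_pid=$!

# Give the FIFO plugin a moment to create the pipe, then send the command.
sleep 1
printf '%s\n' "/connect secure" > ~/.weechat/weechat_fifo_"${weechat_pid}"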

Check if bash script already running except itself with arguments

So I've looked up other questions and answers for this and as you can imagine, there are lots of ways to find this. However, my situation is kind of different.
I'm able to check whether a bash script is already running or not and I want to kill the script if it's already running.
The problem is that with the below code (since I'm running this check within the same script) the script kills itself too, because it sees a script already running.
result=`ps aux | grep -i "myscript.sh" | grep -v "grep" | wc -l`
if [ $result -ge 1 ]
then
    echo "script is running"
else
    echo "script is not running"
fi
So how can I check whether another instance of the same script is running besides itself, kill my script if one is found, and otherwise continue without killing itself?
I thought I could combine the above code with the $$ variable to find the script's own PID and tell the instances apart that way, but I'm not sure how to do that.
Also a side note: my script can be run multiple times at the same time on the same machine, but with different arguments, and that's fine. I only need to detect whether the script is already running with the same arguments.
pid=$(pgrep myscript.sh | grep -x -v $$)
# filter non-existent pids
pid=$(<<<"$pid" xargs -n1 sh -c 'kill -0 "$1" 2>/dev/null && echo "$1"' --)
if [ -n "$pid" ]; then
    echo "Other script is running with pid $pid"
    echo "Killing him!"
    kill $pid
fi
pgrep lists the pids that match the name myscript.sh. From the list we filter out the current shell's $$ with grep -v. If the result is non-empty, you can kill the other pid.
Without the xargs filter it would mostly work, but pgrep myscript.sh also picks up the short-lived pid created for the command substitution or the pipe. The pid list would therefore never be empty, and the kill would always execute, complaining about a non-existent process. To avoid that, for each pid in the list I check with kill -0 whether the process still exists; if it does, it is outputted, effectively filtering out all non-existent pids.
You could also use a normal for loop to filter the pids:
# filter non-existent pids
pid=$(
    for i in $pid; do
        if kill -0 "$i" 2>/dev/null; then
            echo "$i"
        fi
    done
)
Alternatively, you could use flock to lock the file and lsof to list the current open files, filtering out the current one. As it is now, I think it would also kill editors that are editing the file and such; the lsof output could probably be filtered better to accommodate this.
if [ "${FLOCKER}" != "$0" ]; then
pids=$(lsof -p "^$$" -- ./myscript.sh | awk 'NR>1{print $2}')
if [ -n "$pids" ]; then
echo "Other processes with $(echo $pids) found. Killing them"
kill $pids
fi
exec env FLOCKER="$0" flock -en "$0" "$0" "$#"
fi
I would go with either of two ways to solve this problem.
1st solution: create a watchdog file, say a .lck file, at a known location before the script starts its real work, and remove it once the script completes successfully. Make sure to use trap so that the .lck file is also removed if the script is aborted.
Example script for the 1st solution. This is just a test example; we still need to take care of interruptions. Say the script gets interrupted partway: at that point it has not completed, so the lock should be removed via trap, and you may need to kick the script off again (since last time it was not completed).
cat file.ksh
#!/bin/bash
PWD=`pwd`
watchdog_file="$PWD/script.lck"
if [[ -f "$watchdog_file" ]]
then
    echo "Please wait, script is still running; exiting now.."
    exit 1
else
    touch $watchdog_file
fi
while true
do
    echo "singh" > test1
done
if [[ -f "$watchdog_file" ]]
then
    rm "$watchdog_file"
fi
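As mentioned, trap covers the aborted-script case; a minimal sketch of that piece (my illustration, not part of the original answer):
# Install this right after the touch above; if it ran before the check, an
# early "exit 1" would delete the other instance's watchdog file.
trap 'rm -f "$watchdog_file"' EXIT INT TERM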
2nd solution: save the PID of the current shell ($$) in a file. On the next run, check whether that process is still running; exit the script if it is, and otherwise move on to run the statements in the script.
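A minimal sketch of that 2nd solution (the PID file path is an assumption for illustration):
#!/bin/bash
pid_file="/tmp/myscript.pid"    # hypothetical location for the stored PID

# If a PID was stored and that process is still alive, bail out.
if [[ -f "$pid_file" ]] && kill -0 "$(cat "$pid_file")" 2>/dev/null; then
    echo "Previous instance still running, exiting."
    exit 1
fi

echo $$ > "$pid_file"
# ... rest of the script ...
rm -f "$pid_file"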

locking a PID file

I have a function in a bash script that runs indefinitely in the background and that shall be terminated by running the same script again. It is a sort of switch: when I invoke this script, it starts the function, or kills it if it is already running. To do this I use a PID file:
#!/bin/bash
background_function() {
...
}
if [[ ! -s myscript.pid ]]
then
    background_function &
    echo $! > myscript.pid
else
    kill $(cat myscript.pid) && rm myscript.pid
fi
Now, I would like to avoid multiple instances running and race conditions. I tried to use flock and I rewrote the above code in this way:
#!/bin/bash
background_function() {
...
}
exec 200>myscript.pid
if flock -n 200
then
    background_function &
    echo $! > myscript.pid
else
    kill $(cat myscript.pid) && rm myscript.pid
fi
In doing so, however, I have a lock on the pid file, but every time I launch the script again the pid file is truncated by exec 200>myscript.pid, and therefore I am unable to retrieve the PID of the already running instance and kill it.
What can I do? Should I use two different files, a pid file and a lock file? Or would it be better to implement other lock mechanisms by using mkdir and touch? Thanks.
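One plausible way out, sketched under the assumption that a second file is acceptable: keep the flock on a dedicated lock file, so opening it never truncates the PID file. The background function inherits the lock descriptor, so the lock stays held for as long as it runs:
#!/bin/bash
background_function() {
    while true; do sleep 60; done   # placeholder body for illustration
}

exec 200>myscript.lock              # lock file, distinct from the PID file
if flock -n 200
then
    background_function &
    echo $! > myscript.pid          # the PID file is no longer clobbered
else
    kill "$(cat myscript.pid)" && rm -f myscript.pid
fi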
If an echo $$ is atomic enough for you, you could use:
echo $$ >> lock.pid
lockedby=`head -1 lock.pid`
if [ $$ != $lockedby ] ; then
    kill -9 $lockedby
    echo $$ > lock.pid
    echo "Murdered $lockedby because it had the lock"
fi
# do things in the script
rm lock.pid

How to kill all subprocesses of shell?

I'm writing a bash script, which does several things.
In the beginning it starts several monitor scripts, each of them runs some other tools.
At the end of my main script, I would like to kill all things that were spawned from my shell.
So, it might looks like this:
#!/bin/bash
some_monitor1.sh &
some_monitor2.sh &
some_monitor3.sh &
do_some_work
...
kill_subprocesses
The thing is that most of these monitors spawn their own subprocesses, so doing (for example): killall some_monitor1.sh will not always help.
Any other way to handle this situation?
pkill -P $$
will fit (it kills just its own direct children, i.e. processes whose parent PID is the shell's)
And here is the help of -P
-P, --parent ppid,...
Only match processes whose parent process ID is listed.
and $$ is the process id of the script itself
After starting each child process, you can get its id with
ID=$!
Then you can use the stored PIDs to find and kill all grandchild etc. processes as described here or here.
If you use a negative PID with kill it will kill a process group. Example:
kill -- -1234
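For the background monitors from the question to be killable this way, each has to lead its own process group; a sketch (my illustration) using bash job control:
#!/bin/bash
set -m              # enable job control: each background job gets its own process group
some_monitor1.sh &
pgid=$!             # for a fresh process group, the leader's PID equals the PGID
do_some_work
kill -- "-$pgid"    # signal the monitor and everything it spawned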
Extending pihentagy's answer to recursively kill all descendants (not just children):
kill_descendant_processes() {
    local pid="$1"
    local and_self="${2:-false}"
    if children="$(pgrep -P "$pid")"; then
        for child in $children; do
            kill_descendant_processes "$child" true
        done
    fi
    if [[ "$and_self" == true ]]; then
        kill -9 "$pid"
    fi
}
Now
kill_descendant_processes $$
will kill descendants of the current script/shell.
(Tested on Mac OS 10.9.5. Only depends on pgrep and kill)
kill $(jobs -p)
Rhys Ulerich's suggestion:
Barring a race condition, the code below accomplishes what Jürgen suggested without causing an error when no jobs exist:
[[ -z "$(jobs -p)" ]] || kill $(jobs -p)
pkill with the option "-P" should help:
pkill -P $(pgrep some_monitor1.sh)
from man page:
-P ppid,...
Only match processes whose parent process ID is listed.
There are some discussions on linuxquestions.org; please check:
http://www.linuxquestions.org/questions/programming-9/use-only-one-kill-to-kill-father-and-child-processes-665753/
I like the following straightforward approach: start the subprocesses with an environment variable with some name/value and use it to kill the subprocesses later. Most convenient is to use the process id of the running bash script, i.e. $$. This also works when a subprocess starts other subprocesses, as the environment is inherited.
So start the subprocesses like this:
MY_SCRIPT_TOKEN=$$ some_monitor1.sh &
MY_SCRIPT_TOKEN=$$ some_monitor2.sh &
And afterwards kill them like this:
ps -Eef | grep "MY_SCRIPT_TOKEN=$$" | awk '{print $2}' | xargs kill
(Note: -E, which makes ps print each process's environment, is not available in every ps implementation; it works on macOS, while Linux procps exposes the environment through the BSD-style e option, as in ps axe.)
Similar to above, just a minor tweak to kill all processes indicated by ps:
ps -o pid= | tail -n +2 | xargs kill -9
Perhaps sloppy/fragile, but it seemed to work at first blush. It relies on the fact that the current shell ($$) tends to be the first line.
Description of the commands, in order:
Print PIDs for processes in the current terminal, without the header row
Start from line 2 (excluding the current terminal's shell)
Kill those procs
I've incorporated a bunch of the suggestions from the answers here into a single function. It gives processes time to exit, murders them if they take too long, and doesn't have to grep through output (e.g., via ps).
#!/bin/bash
# This function will kill all sub jobs.
function KillJobs() {
    [[ -z "$(jobs -p)" ]] && return # no jobs to kill
    local SIG="INT" # default to a gentle goodbye
    [[ ! -z "$1" ]] && SIG="$1" # optionally send a different signal
    # my version of 'kill' doesn't seem to understand `kill -- -${PID}`
    #jobs -p | xargs -I%% kill -s "$SIG" -- -%% # kill each job's process group
    jobs -p | xargs kill -s "$SIG" # kill each job's top-level process
    ## give the processes a moment to die, before forcing them to.
    [[ "$SIG" != "KILL" ]] && {
        sleep 0.2
        KillJobs "KILL"
    }
}
I also tried to get a variation working with pkill, but on my system (xubuntu 21.10) it does absolutely nothing.
#!/bin/bash
# This function doesn't seem to work.
function KillChildren() {
    local SIG="INT" # default to a gentle goodbye
    [[ ! -z "$1" ]] && SIG="$1" # optionally send a different signal
    pkill --signal "$SIG" -P $$ # signal the direct children of this shell
    [[ "$SIG" != "KILL" ]] && {
        # give them a moment to die before we force them to.
        sleep 0.2
        KillChildren "KILL"
    }
}

How to check in a bash script if something is running and exit if it is

I have a script that runs every 15 minutes, but sometimes, if the box is busy, it hangs, and the next run starts before the first one has finished, creating a snowball effect. How can I add a couple of lines to the bash script to check whether something is already running before starting?
You can use pidof -x if you know the process name, or kill -0 if you know the PID.
Example:
if pidof -x vim > /dev/null
then
    echo "Vim already running"
    exit 1
fi
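For the kill -0 variant, a minimal sketch (the PID file path is an assumption for illustration):
# kill -0 sends no signal; it only tests whether the PID exists and can be signaled.
if [ -f /var/run/myapp.pid ] && kill -0 "$(cat /var/run/myapp.pid)" 2>/dev/null
then
    echo "myapp already running"
    exit 1
fi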
Why not set a lock file?
Something like
yourapp.lock
Just remove it when your process is finished, and check for it before launching it.
It could be done using
if [ -f yourapp.lock ]; then
    echo "The process is already launched, please wait..."
    exit 1
fi
In lieu of pidfiles, as long as your script has a uniquely identifiable name you can do something like this (note that ps -C matches the bare command name, so this works best when the script is invoked without a path):
#!/bin/bash
COMMAND=$0
# exit if I am already running
RUNNING=`ps --no-headers -C${COMMAND} | wc -l`
if [ ${RUNNING} -gt 1 ]; then
    echo "Previous ${COMMAND} is still running."
    exit 1
fi
... rest of script ...
pgrep -f yourscript >/dev/null && exit
(Beware: run from inside yourscript itself, pgrep -f also matches the current instance, so this one-liner suits checking from another script; from within the script, filter out $$ first.)
This is how I do it in one of my cron jobs
lockfile=~/myproc.lock
minutes=60
if [ -f "$lockfile" ]
then
    filestr=`find $lockfile -mmin +$minutes -print`
    if [ "$filestr" = "" ]; then
        echo "Lockfile is not older than $minutes minutes! Another $0 running. Exiting ..."
        exit 1
    else
        echo "Lockfile is older than $minutes minutes, ignoring it!"
        rm $lockfile
    fi
fi
echo "Creating lockfile $lockfile"
touch $lockfile
and delete the lock file at the end of the script
echo "Removing lock $lockfile ..."
rm $lockfile
For a method that does not suffer from parsing bugs and race conditions, check out:
BashFAQ/045 - How can I ensure that only one instance of a script is running at a time (mutual exclusion)?
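The flock pattern from that FAQ looks roughly like this sketch (the lock path is an assumption):
#!/bin/bash
exec 9>/tmp/myscript.lock   # open (or create) the lock file on FD 9
if ! flock -n 9; then       # try to take an exclusive lock without blocking
    echo "Another instance is running, exiting."
    exit 1
fi
# ... script body; the lock is released when the script exits and FD 9 closes ...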
I recently had the same question and found from the above that kill -0 is best for my case:
echo "Starting process..."
run-process > $OUTPUT &
pid=$!
echo "Process started pid=$pid"
while true; do
    kill -0 $pid 2> /dev/null || { echo "Process exit detected"; break; }
    sleep 1
done
echo "Done."
To expand on what bgy says, the safe atomic way to create a lock file if it doesn't exist yet, and to fail if it already does, is to create a temp file, then hard link it to the standard lock file. This protects against another process creating the file in between you testing for it and you creating it.
Here is the lock file code from my hourly backup script:
echo $$ > /tmp/lock.$$
if ! ln /tmp/lock.$$ /tmp/lock ; then
    echo "previous backup in process"
    rm /tmp/lock.$$
    exit
fi
Don't forget to delete both the lock file and the temp file when you're done, even if you exit early through an error.
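One way to honor that even on early exits is an EXIT trap (a sketch; install it only after the ln succeeds, otherwise a failed attempt would delete another instance's lock on its way out):
# We own the lock at this point; remove both files however the script exits.
trap 'rm -f /tmp/lock "/tmp/lock.$$"' EXIT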
Use this script:
FILE="/tmp/my_file"
if [ -f "$FILE" ]; then
    echo "Still running"
    exit
fi
trap "rm -f $FILE" EXIT
touch $FILE
...script here...
This script will create a file and remove it on exit.
