How to Kill Processes in Bash

I used the following bash code:
for pid in `top -n 1 | awk '{if($8 == "R") print $1;}'`
do
kill $pid
done
It says:
./kill.sh: line 3: kill: 29162: arguments must be process or job IDs
./kill.sh: line 3: kill: 29165: arguments must be process or job IDs
./kill.sh: line 3: kill: 29166: arguments must be process or job IDs
./kill.sh: line 3: kill: 29169: arguments must be process or job IDs
What causes this error and how do I kill processes in Bash?

I usually use:
pkill <process name>
In your case:
pkill R
Note that this will kill all the running instances of R, which may or may not be what you want.
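One caution worth adding (a sketch, not from the answer above): pkill matches an unanchored extended regular expression against process names, so a one-letter pattern like R can match more processes than intended. pgrep lets you preview what would be killed, and -x anchors the match to the exact name:

```shell
#!/bin/sh
# pkill/pgrep match an UNANCHORED extended regex against process names,
# so "R" also matches any process whose name merely contains an R.
pgrep -l R  || echo "no process name contains R"
pgrep -lx R || echo "no process is named exactly R"
# Once the preview looks right, kill by exact name:
# pkill -x R
```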

This awk command is probably not returning reliable data; in any case, there's a much easier way:
kill `pidof R`
Or:
killall R

It seems there may be a (possibly aliased) script kill.sh in your current directory which is acting as an intermediary and calling the kill builtin. However, the script is passing the wrong arguments to the builtin. (I can't give details without seeing the script.)
Solution.
Your command will work fine with the kill builtin, so the simplest solution is to make sure that is what actually runs. Execute:
chmod a-x kill.sh
unalias kill
unset -f kill
This will prevent the script from running and remove any alias or function that may interfere with your use of the kill builtin.
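To see how a function or alias can hijack kill in the first place, here is a small sketch (the shadowing function is hypothetical; everything else is standard bash):

```shell
#!/bin/bash
# A function named "kill" shadows the builtin of the same name:
kill() { echo "shadowed: $*"; }
kill 12345                      # runs the function, not the builtin
builtin kill -0 $$ && echo "the real builtin is still reachable"
unset -f kill                   # remove the shadowing function
type kill                       # reports the shell builtin again
```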
Note.
You can also simplify your command:
kill `top -n 1 | awk '{if($8 == "R") print $1;}'`
Alternatively...
You can also use builtin to bypass any functions and aliases:
builtin kill `top -n 1 | awk '{if($8 == "R") print $1;}'`

Please try:
top -n 1 | awk '{print "(" $1 ") - (" $8 ")";}'
to see how $1 and $8 are evaluated.
Please also post the content of ./kill.sh and explain the purpose of this script.

top issues a header you need to suppress or skip over.
Or maybe you can use ps since its output is highly configurable.
Also note that kill accepts multiple PIDs, so there's no reason to employ a loop.
kill $( ps h -o pid,state | awk '$2 == "R" { print $1 }' )

In addition to awk, sed can provide pid isolation in the list for kill:
kill $(ps h -o pid,state | grep R | sed -e 's/\s.*$//')
Basically, the opposite side of the same coin.

Related

Getting chained/piped commands result to shell variable

Given a db2 proc call's output:
$ db2 "call SOME_PROC(103,5,0,'','',0,0)"
Return Status = 0
I wish to just get the value and when I 'chain-em-up' it does not work as I think it should, so given:
$ db2 "call SOME_PROC(103,5,0,'','',0,0)" | sed -rn 's/ *Return Status *= *([0-9]+)/\1/p'
0
I try to chain 'em up:
$ var=$(db2 "call SOME_PROC(103,5,0,'','',0,0)" | sed -rn 's/ *Return Status *= *([0-9]+)/\1/p')
$ echo $var
You get nothing!
But if you redirect to tmp file:
$ db2 "call SOME_PROC(103,5,0,'','',0,0)" > /tmp/fff
$ var=$(cat /tmp/fff | sed -rn 's/ *Return Status *= *([0-9]+)/\1/p')
$ echo $var
0
You do get it ...
Similarly if you put in var:
$ var=$(db2 "call DB2INST1.USP_SPOTLIGHT_GET_DATA(103,5,0,'','',0,0)")
$ var=$(echo $var | sed -rn 's/ *Return Status *= *([0-9]+)/\1/p')
$ echo $var
0
You also get it ...
Is there a way to get value as my first attempt? And also I wonder why does it not work? What am I missing?
I also tried the below and it also gives nothing!
cat <(db2 -x "call DB2INST1.USP_SPOTLIGHT_GET_DATA(103,5,0,'','',0,0)" | sed -rn 's/ *Return Status *= *([0-9]+)/\1/p')
The db2 command-line interface requires that the db2 command be issued as a direct child of the parent of the command which initiated the connection. In other words, the db2 call and db2 connect commands need to be initiated from the same shell process.
That does not interact well with many shell features:
pipelines: cmd1 | cmd2 runs both commands in subshells (different processes).
command substitution: $(cmd) runs the command in a subshell.
process substitution (bash): <(cmd) runs the command in a subshell.
However, if the shell is bash, the situation is not quite that restricted, so under some circumstances the above constructions will still work with db2. In pipelines and command substitution, bash will optimize away the subshell if the command to be run in the subshell is simple enough. (Roughly speaking, it must be a simple command without redirects.)
So, for example, if some bash process P executes
cmd1 | cmd2
then both commands have P as their parent, because they are both simple commands. Similarly with
a=$(cmd)
However, if a pipelined command or a substituted command is not simple, then a subshell is required. For example, even though { ... } does not require a subshell, the syntax is not a simple command. So in
{ cmd1; } | cmd2
the first command is a child of a subshell, while the second command is a child of the main shell.
In particular, in
a=$(cmd1 | cmd2)
bash will not optimize away the command-substitution subshell, because cmd1 | cmd2 is not a simple command. So it will create a subshell to execute the pipeline; that subshell will apply the optimization, so there will not be additional subshells to execute the simple commands cmd1 and cmd2.
In short, you can pipeline the output of db2 call or you can capture the output in a variable, but you cannot capture the output of a pipeline in a variable.
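One way to watch the subshell rule in action (a sketch; shopt -s lastpipe needs bash 4.2+ and only takes effect when job control is off, as in a script):

```shell
#!/bin/bash
x=outer
echo inner | read x    # read runs in a pipeline subshell ...
echo "$x"              # ... so this still prints: outer

shopt -s lastpipe      # bash 4.2+: run the last pipeline element
echo inner | read x    # in the current shell instead
echo "$x"              # now prints: inner
```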
Other shells are more (or less) capable of subshell optimizations. With zsh, for example, you can use process substitution to avoid the subshell:
# This will work with zsh but not with bash
var=$(db2 "call ..." > >(sed -rn ...))
The syntax cmd1 > >(cmd2) is very similar to the pipeline cmd1 | cmd2, but it differs in that it is syntactically a simple command. For zsh, that is sufficient to allow the elimination of the subshell (but not for bash, which won't optimize away a subshell if the command involves a redirection).
As @rici so brilliantly explained it all, I just want to show it live:
With cmd | cmd you get:
$ db2 "call SOME_PROC(103,5,0,'','',0,0)" | cat
Return Status = 0
But with { cmd; } | cmd you get:
$ { db2 "call SOME_PROC(103,5,0,'','',0,0)" ;} | cat
SQL1024N A database connection does not exist. SQLSTATE=08003


Pipe multiple commands to a single command with no EOF signal wait

How can I pipe the std-out of multiple commands to a single command? Something like:
(cat my_program/logs/log.*;tail -0f my_program/logs/log.0) | grep "filtered lines"
I want to run all the following commands on a single command line using pipes and no redirects to a temp file (if possible). There is one small nuance that means I can't use parentheses: I want the last command to be a tail feed, so the grep has to happen after every line is received on stdin rather than waiting for an EOF.
Try using the current shell instead of a subshell:
{ cat file1; tail -f file2; } | grep something
The semi-colon before the closing brace is required.
If the commands don't have to be executed in parallel then you can use cat to unify the output into a single consecutive stream, like this:
tail -0f log.0 | cat log.* - | grep "filtered lines"
The magic bit is the - parameter for cat; it tells cat to take stdin and add it to the list of inputs (the others, in this case, being logfiles) to be concatenated.
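A tiny sketch of that - convention (the temp file name is arbitrary):

```shell
#!/bin/sh
# "-" stands for stdin in cat's file list; its position controls order.
printf 'from a file\n' > /tmp/cat_demo.$$
echo 'from stdin' | cat /tmp/cat_demo.$$ -   # file contents first, then stdin
rm -f /tmp/cat_demo.$$
```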
Okay, I have a messy solution (it's a script; sadly this would be impractical for the command line):
#!/bin/bash
ProcessFIFO(){
while [[ -p /tmp/logfifo ]]; do
grep "filtered lines" < /tmp/logfifo
done
}
OnExit(){
rm -f /tmp/logfifo
echo 'temp files deleted'
exit
}
# Create FIFO
mkfifo /tmp/logfifo
trap OnExit SIGINT SIGTERM
ProcessFIFO &
cat log.* > /tmp/logfifo
tail -0f log.0 > /tmp/logfifo
I have used the FIFO method suggested by @itsbruce, and I've also had to use a while loop to stop the grep from ending at the EOF signal:
while [[ -p /tmp/logfifo ]]
I have also included a trap to delete the FIFO:
trap OnExit SIGINT SIGTERM

Shell Script Help -- Accept Input and Run in Background?

I have a shell script in which in the first line I ask the user to input how many minutes they want the script to run for:
#!/usr/bin/ksh
echo "How long do you want the script to run for in minutes?:\c"
read scriptduration
loopcnt=0
interval=1
date2=$(date +%H:%M:%S)
(( intervalsec = $interval * 1 ))
totalmin=${1:-$scriptduration}
(( loopmax = ${totalmin} * 60 ))
ofile=/home2/s499929/test.log
echo "$date2 total runtime is $totalmin minutes at 2 sec intervals"
while(( $loopmax > $loopcnt ))
do
date1=$(date +%H:%M:%S)
pid=`/usr/local/bin/lsof | grep 16752 | grep LISTEN |awk '{print $2}'` > /dev/null 2>&1
count=$(netstat -an|grep 16752|grep ESTABLISHED|wc -l| sed "s/ //g")
process=$(ps -ef | grep $pid | wc -l | sed "s/ //g")
port=$(netstat -an | grep 16752 | grep LISTEN | wc -l| sed "s/ //g")
echo "$date1 activeTCPcount:$count activePID:$pid activePIDcount=$process listen=$port" >> ${ofile}
sleep $intervalsec
(( loopcnt = loopcnt + 1 ))
done
It works great if I kick it off an input the values manually. But if I want to run this for 3 hours I need to kick off the script to run in the background.
I have tried just running ./scriptname & and I get this:
$ How long do you want the test to run for in minutes:360
ksh: 360: not found.
[2] + Stopped (SIGTTIN) ./test.sh &
And the script dies. Is this possible, any suggestions on how I can accept this one input and then run in the background?? Thanks!!!
You could do something like this:
test.sh arg1 arg2 &
Just refer to arg1 and arg2 as $1 and $2, respectively, in the bash script. ($0 is the name of the script)
So,
test.sh 360 &
will pass 360 as the first argument to the bash or ksh script which can be referred to as $1 in the script.
So the first few lines of your script would now be:
#!/usr/bin/ksh
scriptduration=$1
loopcnt=0
...
...
With bash you can start the script in the foreground and after you finished with the user input, interrupt it by hitting Ctrl-Z.
Then type
$ bg %
and the script will continue to run in the background.
Why You're Getting What You're Getting
When you run the script in the background, it can't take any user input; in fact, the program will freeze when it expects input until it's put back in the foreground. However, output has to go somewhere, so it still goes to the screen even though the program is running in the background. That is why you see the prompt.
The prompt your program displays is meaningless because you can't type into it. When you type 360, your shell interprets it as a command, because your input goes to the command prompt, not to the program.
You want your program to be in the foreground for the input, but run in the background. You can't do both at once.
Solutions To Your Dilemma
You can have two programs. The first takes the input, and the second runs the actual program in the background.
Something like this:
#! /bin/ksh
read time?"How long in seconds do you want to run the job? "
my_actual_job.ksh $time &
In fact, you could even have a mechanism to run the job in the background if the time is over a certain limit, but otherwise run the job in the foreground.
#! /bin/ksh
readonly MAX_FOREGROUND_TIME=30
read time?"How long in seconds do you want to run the job? "
if [ $time -gt $MAX_FOREGROUND_TIME ]
then
my_actual_job.ksh $time &
else
my_actual_job.ksh $time
fi
Also remember if your job is in the background, it cannot print to the screen. You can redirect the output elsewhere, but if you don't, it'll print to the screen at inopportune times. For example, you could be in VI editing a file, and suddenly have the output appear smack in the middle of your VI session.
I believe there's an easy way to tell if your job is in the background, but I can't remember it offhand. You could find your current process ID by looking at $$, then looking at the output of jobs -p and see if that process ID is in the list. However, I'm sure someone will come up with an easy way to tell.
It is also possible that a program could throw itself into the background via the bg $$ command.
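One common heuristic for that check (a sketch; the tpgid column is an assumption that not every ps implementation supports): a job is in the foreground when its process group matches the terminal's foreground process group.

```shell
#!/bin/sh
# Heuristic sketch: compare the job's process group (pgid) with the
# terminal's foreground process group (tpgid). tpgid support varies
# between ps implementations; with no controlling terminal it is
# typically reported as -1, so the background branch is taken.
pgid=$(ps -o pgid= -p $$ | tr -d ' ')
tpgid=$(ps -o tpgid= -p $$ | tr -d ' ')
if [ "$pgid" = "$tpgid" ]; then
    echo "running in the foreground"
else
    echo "running in the background (or with no terminal)"
fi
```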
Some Hints
If you're running Kornshell, you might consider taking advantage of many of Kornshell's special features:
print: The print command is more flexible and robust than echo. Take a look at the manpage for Kornshell and see all of its features.
read: Note that you can use the read var?"prompt" form of the read command.
readonly: Use readonly to declare constants. That way, you don't accidentally change the value of that variable later. Besides, it's good programming technique.
typeset: Take a look at typeset in the ksh manpage. The typeset command can help you declare particular variables as floating point vs. real, and can automatically do things like zero fill, right or left justify, etc.
Some things not specific to Kornshell:
The awk and sed commands can also do what grep does, so there's no reason to filter something through grep and then through awk or sed.
You can combine patterns in a single grep with the -e parameter: grep -e foo -e bar matches lines containing either foo or bar. Note this is not the same as grep foo | grep bar, which keeps only lines containing both.
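A quick check of how -e combines patterns, and how that differs from chaining two greps (the sample lines are arbitrary):

```shell
#!/bin/sh
printf 'foo\nbar\nfoobar\nbaz\n' > /tmp/grep_demo.$$

# -e: lines matching EITHER pattern (foo, bar, foobar)
grep -e foo -e bar /tmp/grep_demo.$$

# chained greps: lines matching BOTH patterns (foobar only)
grep foo /tmp/grep_demo.$$ | grep bar

rm -f /tmp/grep_demo.$$
```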
Hope this helps.
I've tested this with ksh and it worked. The trick is to let the script call itself with the time to wait as parameter:
if [ -z "$1" ]; then
echo "How long do you want the test to run for in minutes:\c"
read scriptduration
echo "running task in background"
$0 $scriptduration &
exit 0
else
scriptduration=$1
fi
loopcnt=0
interval=1
# ... and so on
So are you using bash or ksh? In bash, you can do this:
{ echo 360 | ./test.sh ; } &
It could work for ksh also.

Passing multiple arguments to a UNIX shell script

I have the following (bash) shell script, that I would ideally use to kill multiple processes by name.
#!/bin/bash
kill `ps -A | grep $* | awk '{ print $1 }'`
However, while this script works if one argument is passed:
end chrome
(the name of the script is end)
it does not work if more than one argument is passed:
$ end chrome firefox
grep: firefox: No such file or directory
What is going on here?
I thought the $* passes multiple arguments to the shell script in sequence. I'm not mistyping anything in my input - and the programs I want to kill (chrome and firefox) are open.
Any help is appreciated.
Remember what grep does with multiple arguments - the first is the word to search for, and the remainder are the files to scan.
Also remember that $*, "$*", and $@ all lose track of white space in arguments, whereas the magical "$@" notation does not.
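A quick way to see the difference, using a tiny helper that just counts the arguments it receives:

```shell
#!/bin/bash
count() { echo $#; }        # prints how many arguments it received

set -- "two words" single   # simulate a script invoked with 2 arguments
count $*                    # 3 -- "two words" was split on the space
count "$@"                  # 2 -- each original argument kept intact
```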
So, to deal with your case, you're going to need to modify the way you invoke grep. You either need to use grep -F (aka fgrep) with options for each argument, or you need to use grep -E (aka egrep) with alternation. In part, it depends on whether you might have to deal with arguments that themselves contain pipe symbols.
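If you do go the grep -E route, one way to build the alternation (a sketch that assumes the process names contain no regex metacharacters) is to join the arguments with |:

```shell
#!/bin/bash
# Join the arguments with "|" into one extended-regex alternation.
# Assumes the names contain no regex metacharacters.
set -- chrome firefox                # stand-in for the script's "$@"
pattern=$(IFS='|'; echo "$*")        # "$*" joins on the first char of IFS
echo "$pattern"
ps -A | grep -E "$pattern" | awk '{print $1}'
```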
It is surprisingly tricky to do this reliably with a single invocation of grep; you might well be best off tolerating the overhead of running the pipeline multiple times:
for process in "$@"
do
kill $(ps -A | grep -w "$process" | awk '{print $1}')
done
If the overhead of running ps multiple times like that is too painful (it hurts me to write it - but I've not measured the cost), then you probably do something like:
case $# in
(0) echo "Usage: $(basename $0 .sh) procname [...]" >&2; exit 1;;
(1) kill $(ps -A | grep -w "$1" | awk '{print $1}');;
(*) tmp=${TMPDIR:-/tmp}/end.$$
trap "rm -f $tmp.?; exit 1" 0 1 2 3 13 15
ps -A > $tmp.1
for process in "$@"
do
grep "$process" $tmp.1
done |
awk '{print $1}' |
sort -u |
xargs kill
rm -f $tmp.1
trap 0
;;
esac
The use of plain xargs is OK because it is dealing with a list of process IDs, and process IDs do not contain spaces or newlines. This keeps the simple code for the simple case; the complex case uses a temporary file to hold the output of ps and then scans it once per process name in the command line. The sort -u ensures that if some process happens to match all your keywords (for example, grep -E '(firefox|chrome)' would match both), only one signal is sent.
The trap lines etc ensure that the temporary file is cleaned up unless someone is excessively brutal to the command (the signals caught are HUP, INT, QUIT, PIPE and TERM, aka 1, 2, 3, 13 and 15; the zero catches the shell exiting for any reason). Any time a script creates a temporary file, you should have similar trapping around the use of the file so that it will be cleaned up if the process is terminated.
If you're feeling cautious and you have GNU Grep, you might add the -w option so that the names provided on the command line only match whole words.
All the above will work with almost any shell in the Bourne/Korn/POSIX/Bash family (you'd need to use backticks with strict Bourne shell in place of $(...), and the leading parenthesis on the conditions in the case are also not allowed with Bourne shell). However, you can use an array to get things handled right.
n=0
unset args # Force args to be an empty array (it could be an env var on entry)
for i in "$@"
do
args[$((n++))]="-e"
args[$((n++))]="$i"
done
kill $(ps -A | fgrep "${args[@]}" | awk '{print $1}')
This carefully preserves spacing in the arguments and uses exact matches for the process names. It avoids temporary files. The code shown doesn't validate for zero arguments; that would have to be done beforehand. Or you could add a line args[0]='/collywobbles/' or something similar to provide a default - non-existent - command to search for.
To answer your question, what's going on is that $* expands to a parameter list, and so the second and later words look like files to grep(1).
To process them in sequence, you have to do something like:
for i in $*; do
echo $i
done
Usually, "$#" (with the quotes) is used in place of $* in cases like this.
See man sh, and check out killall(1), pkill(1), and pgrep(1) as well.
Look into pkill(1) instead, or killall(1) as @khachik comments.
$* should be rarely used. I would generally recommend "$#". Shell argument parsing is relatively complex and easy to get wrong. Usually the way you get it wrong is to end up having things evaluated that shouldn't be.
For example, if you typed this:
end '`rm foo`'
you would discover that if you had a file named 'foo' you don't anymore.
Here is a script that will do what you are asking to have done. It fails if any of the arguments contain '\n' or '\0' characters:
#!/bin/sh
kill $(ps -A | fgrep -e "$(for arg in "$@"; do echo "$arg"; done)" | awk '{ print $1; }')
I vastly prefer $(...) syntax for doing what backtick does. It's much clearer, and it's also less ambiguous when you nest things.
