Bash hide "killed" - bash

I cannot seem to hide the "Killed" output from my script. Please help.
killall -9 $SCRIPT
I have tried:
killall -9 $SCRIPT >/dev/null 2>&1
and every redirect combination, it seems. Thanks for the help.
* UPDATE *
The main script cannot run in the background. It outputs a bunch of information to the user while running. Thanks for the input though. Any other ideas?
Here is the desired output
HEV Terminated
administrator@HEV-DEV:~/hev-1.2.7$
Here is the current output:
HEV Terminated
Killed
administrator@HEV-DEV:~/hev-1.2.7$

It's not the script printing it, it's the shell. You can suppress it by putting it in the background and waiting for it to finish, redirecting the error from there:
./script &
wait 2>/dev/null
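Applied to the question's setup, a minimal sketch of that approach inside a wrapper script might look like the following (the wrapper name is just an illustration; the main script still prints to the terminal while backgrounded):
./hev-main.sh &        # run the main script in the background
# ... later, when it is time to stop it ...
killall -9 $SCRIPT
wait 2>/dev/null       # reap the killed job here; the "Killed" notice goes to the redirected stderr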

It looks like this can be avoided if you execute the script (including the &) in a subshell:
$ ( ./script.sh & )
$ ps
PID TTY TIME CMD
14208 pts/0 00:00:00 bash
16134 pts/0 00:00:00 script.sh
16135 pts/0 00:00:00 sleep
16136 pts/0 00:00:00 ps
$ killall script.sh
$
If I execute it in the background but without the subshell, like ./script.sh &, then I get the output:
[1]+ Terminated ./script.sh &>/dev/null

Try ...
(killall -9 $SCRIPT) 2>/dev/null
This executes killall in a subshell, and all output from that subshell is redirected, including the "Killed" output.

Try killall's -q, --quiet option ("don't print complaints"):
killall -q -9 $SCRIPT


bash getting background process id gives parent pid

Creating a bash script with this command:
cat <<"END"> z
#! /bin/bash
sleep 20 && exit 1 &
ret=$!
ps $ret | grep $ret
END
and then running it gives:
7230 pts/39 S+ 0:00 /bin/bash ./z
I was expecting to see sleep 20 ... which is the child process. If I remove the && exit 1 it does return the child process.
What's the reason? How can I get the child process ID in the above statement?
You already get the right information about the child process. It's just that in your case ps doesn't show a proper COMMAND name for the chained sub-process you start in the background, which is probably what confused you.
This appears to be the case with chained commands (.. && .., so it has nothing to do with exit 1; it could also be echo 5, etc.), where the name of the process group leader is shown as the command name instead.
From the ps man page:
`cmd | COMMAND`: simple name of executable
# Process state codes
`S`: interruptible sleep (waiting for an event to complete)
`+`: is in the foreground process group
See the S+ in your ps | grep output.
So, you can adapt your script a bit to confirm that you actually capture(d) the right information about the child process, like so:
cat <<"END"> z
#! /bin/bash
sleep 20 && exit 1 &
ret=$!
echo $ret
jobs -l
# display parent and child process info
# -j Jobs format
ps -j $$ $ret
END
Output of echo $ret:
30274
Output of jobs -l:
[1]+ 30274 Running sleep 20 && exit 1 &
Output of ps -j $$ $ret:
PID PGID SID TTY STAT TIME COMMAND
30273 30273 21804 pts/0 S+ 0:00 /bin/bash ./z
30274 30273 21804 pts/0 S+ 0:00 /bin/bash ./z
Note that both the parent and the child have the same PGID, and that the PID 30274 of the child process displayed by jobs -l and ps ... matches.
Further, if you change sleep 20 && exit 1 & to bash -c 'sleep 20 && exit 1' &, you get a proper command name for the child this time, as follows (cf. the output order above):
30384
[1]+ 30384 Running bash -c 'sleep 20 && exit 1' &
PID PGID SID TTY STAT TIME COMMAND
30383 30383 21804 pts/0 S+ 0:00 /bin/bash ./z
30384 30383 21804 pts/0 S+ 0:00 bash -c sleep 20 && exit 1
Last but not least, in your original version instead of ps $ret | grep $ret you could also try
pstree -s $ret
From pstree man page
-s: Show parent processes of the specified process.
This will provide you with output similar to the one below, which also confirms that you get the right process info for sleep 20 && exit 1 &:
systemd───systemd───gnome-terminal-───bash───bash───sleep
What you see is not the parent PID but the sub-shell PID.
When you run :
sleep 20 && exit 1 &
The process tree looks like:
current-shell ---> sub-shell ---> 'sleep 20 && exit 1'
When you run :
sleep 20 &
The process tree looks like:
current-shell ---> 'sleep 20'
That is why you see the PID of 'sleep 20' itself.
What's the reason?
The reason is that some entity has to perform the &&. It can't be sleep, because sleep only sleeps, and after sleep terminates (so there is no longer a sleep around to make any decision), some "entity" needs to check the exit status of sleep, decide, and then execute exit 1. That "entity" is the shell, which has to sit "above" sleep to do this. So the "real" background process is the shell, and sleep is its child process.
In the case of only sleep 20 & there is an optimization in bash: bash scans the whole command bla bla & and sees there is only a single command to run. Because of that, bash only calls exec instead of the standard fork+exec, and the subshell becomes sleep itself instead of running a child process. Because of the exec, you see sleep as the process name. It's a resource optimization done by bash.
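A quick way to see the difference is the sketch below (the script name compare.sh is just for illustration):
#!/bin/bash
# plain background command: bash exec()s sleep directly, so ps shows "sleep 20"
sleep 20 &
plain=$!
# chained background command: bash keeps a subshell alive to run the &&,
# so ps shows "/bin/bash ./compare.sh" for this PID
sleep 20 && exit 1 &
chained=$!
ps -o pid,cmd -p "$plain,$chained"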

Bash script, killing a program (vim and atom editors) within the running script

Is there any way to KILL/EXIT/CLOSE vi and atom from a running script?
example script, test.sh:
EDITOR1=vi
EDITOR2=atom
$EDITOR1 helloWorld.txt
$EDITOR2 file1.txt
kill $EDITOR1
kill $EDITOR2
Is there any non-hard-coded way to kill it, I mean with a variable, for example the filename?
You can use pkill -f <filename>, as shown below:
[fsilveir@fsilveir ~]$ ps -ef | grep vim
fsilveir 28867 28384 0 00:07 pts/5 00:00:00 vim foo.txt
fsilveir 28870 28456 0 00:07 pts/6 00:00:00 vim bar.txt
[fsilveir@fsilveir ~]$
[fsilveir@fsilveir ~]$ pkill -f bar.txt
[fsilveir@fsilveir ~]$
[fsilveir@fsilveir ~]$ ps -ef | grep vim
fsilveir 22344 11182 0 Mar18 pts/0 00:00:00 vim openfiles.py
fsilveir 28867 28384 0 00:07 pts/5 00:00:00 vim foo.txt
fsilveir 28958 28740 0 00:08 pts/7 00:00:00 grep --color=auto vim
[fsilveir@fsilveir ~]$
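If the filename already lives in a variable in your script, a minimal sketch of the same idea (assuming the editor is launched in the background, as in the other answers here) could be:
EDITOR1=vi
FILE1=helloWorld.txt
$EDITOR1 "$FILE1" &       # background it so the script can continue
# ... later ...
pkill -f "$FILE1"         # -f matches the full command line, e.g. "vi helloWorld.txt"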
I can think of two ways:
Kill a process by its name.
kill `pidof $EDITOR1`
Kill the last started process
vi file &
kill $!
&: start process in the background (for interactive command line sessions)
$!: holds the PID of the last backgrounded process.
kill won't work on any of the EDITOR variables because none of them are job IDs; kill only works on job specs or process IDs. Running your above script does not place any of the commands in the background: once $EDITOR1 executes, it blocks the entire program, and nothing can run until the process created by $EDITOR1 filename is killed. To achieve your goal you have to run the editors in the background using &. Every call to & creates a job ID (which increments), and you have to keep track of those job IDs. For example, if you do $EDITOR1 filename.txt & a job ID of 1 is created; if you then do $EDITOR2 filename2.txt & a job ID of 2 is created. In your case:
EDITOR1=vi ; EDITOR2=atom
declare -a JOB_ID
declare -i JOB=0
$EDITOR1 helloWorld.txt &
JOB_ID+=( $(( ++JOB )) )
$EDITOR2 file1.txt &
JOB_ID+=( $(( ++JOB )) )
kill %${JOB_ID[0]}   # kills the first job
kill %${JOB_ID[1]}   # kills the second job
or you can use associative arrays
EDITOR1=vi ; EDITOR2=atom
declare -A JOB_ID
declare -i JOB=0
$EDITOR1 helloWorld.txt &
JOB_ID[$EDITOR1]=$(( ++JOB ))
$EDITOR2 file1.txt &
JOB_ID[$EDITOR2]=$(( ++JOB ))
kill %${JOB_ID[$EDITOR1]}   # kills the first job
kill %${JOB_ID[$EDITOR2]}   # kills the second job
I have not tried any of this, but it should work.
Why don't you use killall command, for example:
killall vim
killall atom

How to kill a background process created in a script

Suppose I input the following in a shell
(while true; do echo hahaha; sleep 1; done)&
Then I know I can kill it by
fg; CTRL-C
However, if the command above is in a script e.g. tmp.sh and I'm running that script, how to kill it?
(while true; do echo hahaha; sleep 1; done)&
RUNNING_PID=$!
kill ${RUNNING_PID}
$! picks up the PID of the most recently backgrounded process, so you can do with it as you wish.
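If you would rather have the background loop cleaned up automatically whenever the script exits (instead of at one fixed point), here is a small sketch using a trap, assuming bash:
#!/bin/bash
(while true; do echo hahaha; sleep 1; done)&
RUNNING_PID=$!
trap 'kill ${RUNNING_PID}' EXIT   # kill the background loop when the script exits
# ... rest of the script ...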
Let's suppose that you have your bash script named tmp.sh with the next content:
#!/bin/bash
(while true; do echo hahaha; sleep 1; done)&
And you execute it! Of course, it will print hahaha to stdout every second. You can't list it with the jobs command. But... it's still a process! And it's a child in the process forest of the current terminal! So:
1- Get the file name of the terminal connected to standard input:
$ tty
/dev/pts/2
2- List the processes associated with the terminal (In the example we are using pts/2), and show the status with S and display in a forest format f:
$ ps --tty pts/2 Sf
PID TTY STAT TIME COMMAND
3691 pts/2 Ss+ 0:00 /bin/bash
3787 pts/2 S 0:00 /bin/bash
4879 pts/2 S 0:00 \_ sleep 1
3- Now, you can see that the example lists a sleep 1 command that is a child of the /bin/bash process with PID 3787. Now kill it!
kill -9 3787
Note: Don't kill the bash process that has the Ss+ status; that's the bash process that gives you the prompt! From man ps:
s is a session leader
+ is in the foreground process group
Recommendations:
In a case like this, you should save the PID in a file:
#!/bin/bash
(while true; do echo hahaha; sleep 1; done)&
echo $! > /path/to/my_script.pid
Then, you could just do some script to shut it down:
#!/bin/bash
kill -9 $(cat /path/to/my_script.pid)

How to get pid of piped command?

(or How to kill the child process)?
inotifywait -mqr --format '%w %f %e' $feedDir | while read dir file event
do
#something
done &
echo $! #5431
ps output, e.g.:
>$ ps
PID TTY TIME CMD
2867 pts/3 00:00:02 bash
5430 pts/3 00:00:00 inotifywait
5431 pts/3 00:00:00 bash
5454 pts/3 00:00:00 ps
It seems if I kill 5431 then 5430 (inotifywait) will be left running, but if I kill 5430 then both processes die. I don't suppose I can reliably assume that the pid of inotifywait will always be 1 less than $!?
When we run a pipe, each command is executed in a separate process. The interpreter waits for the last one, unless we use an ampersand (&):
cmd1 | cmd2 &
The PIDs of the processes will probably be close, but we cannot rely on that. In the case where the last command starts with a bash reserved word such as while, it creates a dedicated bash process (that's why your 'dir' and 'file' variables won't exist after the done keyword). Example:
ps # shows one bash process
echo "azerty" | while read line; do ps; done # shows one more bash
When the first command exits, the second one will terminate because the read on the pipe returns EOF.
When the second command exits, the first command will be terminated by the signal SIGPIPE (write on a pipe with no reader) when it tries to write to the pipe. But if the command waits indefinitely... it is not terminated.
echo "$!" prints the pid of the last command executed in background. In your case, the bash process that is executing the while loop.
You can find the pid of "inotifywait" with the following syntax. But it's ugly:
(inotifywait ... & echo "$!">inotifywait.pid) | \
while read dir file event
do
#something
done &
cat inotifywait.pid # prints pid of inotifywait
If you don't need the PID, but just want to be sure the process will be terminated, you can use the -t option of inotifywait:
(while true; do inotifywait -t 10 ...; done)| \
while read dir file event
do
#something
done &
kill "$!" # kill the while loop
None of these solutions are nice. What are you really trying to achieve? Maybe we can find a more elegant solution.
If your goal is to make sure all of the children can be killed or interrupted elegantly: if you're using BusyBox's ash, you don't have process substitution, and if you don't want to use an fd either, check out this solution.
#!/bin/sh
pid=$$
terminate() {
pkill -9 -P "$pid"
}
trap terminate SIGHUP SIGINT SIGQUIT SIGTERM
# do your stuff here, note: should be run in the background {{{
inotifywait -mqr --format '%w %f %e' $feedDir | while read dir file event
do
#something
done &
# }}}
# Either pkill -9 -P "$pid" here
wait
# or pkill -9 -P "$pid" here
Or in another shell:
kill <pid ($$)>

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first approach is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script, if the script itself alters file descriptors.
EDIT:
For more appropriate answer check answer given by Mark Edgar
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
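For example (a sketch; disown removes the job from the shell's job table, so the shell should no longer report its termination):
sleep 100 &
pid=$!
disown          # drop the job from the job table
kill "$pid"     # no "Terminated" message should follow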
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in brackets) you will not influence the job control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated, or evaluate its return code.
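One way to get that PID back out of the subshell is to echo it through a command substitution, as in the sketch below (note the backgrounded command's own output has to be redirected, otherwise the substitution would wait for it to finish):
pid=$( ( set +m; sleep 30 >/dev/null 2>&1 & echo $! ) )
# later: no job-end message from the current shell, since it never owned the job
kill "$pid"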
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script. I'm not certain it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output:
function killCmd() {
kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
ex:
{ kill -9 $PID; } 2>/dev/null
