Convert job control number (%1) to pid so that I can kill sudo'ed background job - bash

I often end up in this situation:
$ sudo something &
[1] 21838
$# Oh, shoot, it's hung, and assume the pid has scrolled off the screen
$ kill %1
-bash: kill: (21838) - Operation not permitted
$# Ah, rats. I forgot I sudo'ed that.
$# Wishful thinking:
$ sudo kill %1
kill: cannot find process "%1"
$# Now I have to use ps and find the pid I want.
$ ps -elf | grep sleep
4 S root 21838 1928 0 80 0 - 53969 poll_s 11:28 pts/2 00:00:00 sudo sleep 100
4 S root 21840 21838 0 80 0 - 26974 hrtime 11:28 pts/2 00:00:00 sleep 100
$ sudo kill -9 21838
[1]+ Killed sudo something
I would really like to know if there is a better workflow for this. I'm really surprised there isn't a bash expression to turn %1 into a pid number.
Is there a bash trick for converting %1 to its underlying pid? (Yes, I know I could have saved it at launch with $!)

To get the PID of a job, use: jobs -p N, where N is the job number:
$ yes >/dev/null &
[1] 2189
$ jobs -p 1
2189
$ sudo kill $(jobs -p 1)
[1]+ Terminated yes > /dev/null
Alternatively, and more strictly answering your question, you might find jobs -x useful: it runs a command, replacing any job spec in it with the corresponding PID:
$ yes >/dev/null &
[1] 2458
$ jobs -x sudo kill %1
[1]+ Terminated yes > /dev/null
I find -p more intuitive, personally, but I get the appeal of -x.
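For repeated use, the jobs -p lookup can be wrapped in a tiny helper (a sketch only; the function name job2pid is my own invention, not a bash builtin):

```shell
#!/bin/bash
# Sketch: turn a job number ("1" or "%1") into its PID so it can be
# handed to sudo kill. The name job2pid is made up for illustration.
job2pid() {
    jobs -p "%${1#%}"    # strip a leading % if present, then look up the job
}

# Typical use after "sudo something &" has become job 1:
#   sudo kill "$(job2pid 1)"
```

The `${1#%}` expansion just makes the helper accept either `1` or `%1`, so it works with whatever form you are used to typing.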

Bash script, killing a program (vim and atom->editor) within the running script

Is there any way to KILL/EXIT/CLOSE vi and atom from a running script?
example script, test.sh:
EDITOR1=vi
EDITOR2=atom
$EDITOR1 helloWorld.txt
$EDITOR2 file1.txt
kill $EDITOR1
kill $EDITOR2
Is there any way to kill it without a set variable, for example by the filename?
You can use pkill -f <filename>, as shown below:
[fsilveir@fsilveir ~]$ ps -ef | grep vim
fsilveir 28867 28384 0 00:07 pts/5 00:00:00 vim foo.txt
fsilveir 28870 28456 0 00:07 pts/6 00:00:00 vim bar.txt
[fsilveir@fsilveir ~]$
[fsilveir@fsilveir ~]$ pkill -f bar.txt
[fsilveir@fsilveir ~]$
[fsilveir@fsilveir ~]$ ps -ef | grep vim
fsilveir 22344 11182 0 Mar18 pts/0 00:00:00 vim openfiles.py
fsilveir 28867 28384 0 00:07 pts/5 00:00:00 vim foo.txt
fsilveir 28958 28740 0 00:08 pts/7 00:00:00 grep --color=auto vim
[fsilveir@fsilveir ~]$
I can think of two ways:
Kill a process by its name.
kill `pidof $EDITOR1`
Kill the last started process
vi file &
kill $!
&: starts the process in the background (for interactive command-line sessions)
$!: holds the PID of the most recently started background process.
kill won't work on any of the EDITOR variables, because none of them is a job ID; kill only works on job IDs or process IDs. Running your script as written does not place any of those commands in the background: once $EDITOR1 executes, it blocks the entire script, and nothing else can run until the process created by $EDITOR1 filename is killed.
To achieve your goal you have to run them in the background using &. Each & creates a new job ID, which increments with every call, so you have to find a way to keep track of the job IDs. For example, $EDITOR1 filename.txt & creates job ID 1, and $EDITOR2 filename2.txt & creates job ID 2. In your case:
EDITOR1=vi ; EDITOR2=atom
declare -a JOB_ID
declare -i JOB=0
$EDITOR1 helloWorld.txt &
JOB_ID+=( $(( ++JOB )) )
$EDITOR2 file1.txt &
JOB_ID+=( $(( ++JOB )) )
kill %${JOB_ID[0]}    # kills the first job
kill %${JOB_ID[1]}    # kills the second job
Or you can use an associative array:
EDITOR1=vi ; EDITOR2=atom
declare -A JOB_ID
declare -i JOB=0
$EDITOR1 helloWorld.txt &
JOB_ID[$EDITOR1]=$(( ++JOB ))
$EDITOR2 file1.txt &
JOB_ID[$EDITOR2]=$(( ++JOB ))
kill %${JOB_ID[$EDITOR1]}    # kills the first job
kill %${JOB_ID[$EDITOR2]}    # kills the second job
I haven't tried any of this, but it should work.
Why don't you use the killall command? For example:
killall vim
killall atom

How many child processes (subprocesses) are generated by 'su -c command'?

When using Upstart, controlling subprocesses (child processes) is quite important. But what confused me is the following, which goes beyond Upstart itself:
scenario 1:
root@ubuntu-jstorm:~/Desktop# su cr -c 'sleep 20 > /tmp/a.out'
I got 3 processes from cr@ubuntu-jstorm:~$ ps -ef | grep -v grep | grep sleep:
root 8026 6544 0 11:11 pts/2 00:00:00 su cr -c sleep 20 > /tmp/a.out
cr 8027 8026 0 11:11 ? 00:00:00 bash -c sleep 20 > /tmp/a.out
cr 8028 8027 0 11:11 ? 00:00:00 sleep 20
scenario 2:
root@ubuntu-jstorm:~/Desktop# su cr -c 'sleep 20'
I got 2 processes from cr@ubuntu-jstorm:~$ ps -ef | grep -v grep | grep sleep:
root 7975 6544 0 10:03 pts/2 00:00:00 su cr -c sleep 20
cr 7976 7975 0 10:03 ? 00:00:00 sleep 20
The sleep 20 process is the one I care about. In Upstart especially, the process managed by Upstart should be sleep 20, not bash -c sleep 20 > /tmp/a.out.
In scenario 1, Upstart will not work correctly, for exactly this reason.
So why does scenario 1 produce 3 processes? That doesn't make sense to me. Even though I know I can use the exec command to fix it, I just want to understand what happens when the two commands are run.
su -c starts the shell and passes it the command via its -c option. The shell may spawn as many processes as it likes (it depends on the given command).
It appears the shell executes the command directly without forking in some cases, e.g., if you run su -c '/bin/sleep $$' then the apparent behaviour is as if:
su starts a shell process (e.g., /bin/sh)
the shell gets its own process ID (PID) and substitutes $$ with it
the shell exec()s /bin/sleep.
You should see in ps output that sleep's argument is equal to its pid in this case.
If you run su -c '/bin/sleep $$ >/tmp/sleep' then /bin/sleep's argument is different from its PID (it is equal to the ancestor's PID), i.e.:
su starts a shell process (e.g., /bin/sh)
the shell gets its own process ID (PID) and substitutes $$ with it
the shell double-forks and exec()s /bin/sleep.
The double fork indicates that the actual sequence of events might be different e.g., su could orchestrate the forking or not forking, not the shell (I don't know). It seems the double fork is there to make sure that the command won't get a controlling terminal.
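The exec fix the asker mentions can be checked without su or a second user, by letting bash -c stand in for the shell that su starts (a sketch; the bash -c substitution and the sleep 30 stand-in are mine, not from the thread):

```shell
#!/bin/bash
# With an explicit exec, the shell started by -c replaces itself with
# sleep, so no intermediate "bash -c" process is left behind, even with
# the redirection. The su version would be:
#   su cr -c 'exec sleep 20 > /tmp/a.out'
bash -c 'exec sleep 30 > /dev/null' &
child=$!
sleep 1                      # give the exec a moment to happen
ps -o comm= -p "$child"      # shows sleep, not bash
kill "$child"
```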
command > file
This is not an atomic action; it is actually done by two processes:
one executes the command;
the other does the output redirection.
The two actions above cannot be done in one process.
Am I right?

Bash hide "killed"

I cannot seem to hide the "Killed" output from my script. Please help.
killall -9 $SCRIPT
I have tried:
killall -9 $SCRIPT >/dev/null 2>&1
and every redirect combination, it seems. Thanks for the help.
* UPDATE *
The main script cannot run in the background. It outputs a bunch of information to the user while running. Thanks for the input though. Any other ideas?
Here is the desired output
HEV Terminated
administrator@HEV-DEV:~/hev-1.2.7$
Here is the current output:
HEV Terminated
Killed
administrator@HEV-DEV:~/hev-1.2.7$
It's not the script printing it, it's the shell. You can suppress it by putting it in the background and waiting for it to finish, redirecting the error from there:
./script &
wait 2>/dev/null
It looks like this can be avoided if you execute the script (including the &) in a subshell:
$ ( ./script.sh & )
$ ps
PID TTY TIME CMD
14208 pts/0 00:00:00 bash
16134 pts/0 00:00:00 script.sh
16135 pts/0 00:00:00 sleep
16136 pts/0 00:00:00 ps
$ killall script.sh
$
If I execute it (without the subshell) like ./script.sh then I get the output
[1]+ Terminated ./script.sh &>/dev/null
Try ...
(killall -9 $SCRIPT) 2>/dev/null
This executes killall in a subshell, and all output from that subshell is redirected, including the "Killed" output.
Try killall -q (-q, --quiet: don't print complaints):
killall -q -9 $SCRIPT

How to stop a tail -f command executed in a subshell

I tried various steps from http://mywiki.wooledge.org/ProcessManagement and http://mywiki.wooledge.org/BashFAQ/068 but I cannot work out how to kill the tail -f command after a certain time interval.
my script:
#!/bin/bash
function strt ()
{
command 1..
command 2..
}
export -f strt
su user -c 'set -e && RUN_Server.sh > server.log && tail -f server.log & pid=$! { sleep 20; kill $pid; } && strt'
exit 0
I am trying to kill the pid of tail -f server.log and then proceed to 'strt', which is a small function to find out whether the jboss server is started or not.
on executing I get error as
bash: -c: line 0: syntax error near unexpected token `{' .
Try this
timeout 20 tail -f server.log
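The timeout approach (from GNU coreutils) kills the command itself when the limit expires, so no PID bookkeeping is needed. A self-contained sketch (the 2-second limit and /dev/null are placeholders for the real duration and server.log):

```shell
#!/bin/bash
# timeout runs the command and sends it SIGTERM when the limit expires;
# it replaces the manual "pid=$!; sleep N; kill $pid" dance.
start=$SECONDS
timeout 2 tail -f /dev/null || true   # timeout exits 124 when it fires
echo "tail stopped after $((SECONDS - start))s"
```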
pid=$! { sleep 20 ; kill $pid; }
What are you trying to do? Maybe just adding a semicolon before { can help?
The problem you're having is that your sleep command won't run until after you kill your tail.
The command structure:
command1 && command2
says to run command1 and if it exits with an exit code of 0, then run command2. It's equivalent to this:
if command1
then
command2
fi
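A minimal illustration of that short-circuit behaviour:

```shell
#!/bin/bash
# command2 runs only when command1 exits with status 0:
true && echo "ran"              # prints "ran"
false && echo "ran" || true     # prints nothing (the || true just keeps
                                # the script going if run under set -e)
```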
I had a similar situation to this a while ago. I had to start up a Weblogic server, wait until the server started, then do something on the server.
I ended up using named pipes (Directions) to do this. I ran the command that started the Weblogic server through this named pipe. I then had another process read the pipe, and when I saw the startup string, I broke out of my loop and continued my program.
The link above should give you complete directions on doing just that.
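A rough sketch of that named-pipe pattern (the fake server and the "Server started" marker string are placeholders of mine, not Weblogic output):

```shell
#!/bin/bash
# Start a (fake) server writing into a FIFO, then read the FIFO until the
# startup marker appears before continuing with the rest of the script.
fifo=$(mktemp -u)
mkfifo "$fifo"

# Stand-in for the real server start command:
{ echo "booting"; echo "Server started"; sleep 60; } > "$fifo" &
server=$!

while read -r line; do
    case $line in
        *"Server started"*) echo "startup detected"; break ;;
    esac
done < "$fifo"

kill "$server"
rm -f "$fifo"
```

Opening a FIFO for writing blocks until a reader opens it, so the reader loop and the server start in lockstep without polling.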
I was trying something similar, namely wanted to print out the pid of a process spawned in the background with ampersand (&), in a one-liner/single line:
$ tail -f /var/log/syslog & ; echo $!
bash: syntax error near unexpected token `;'
... but kept getting the dreaded syntax error, which brought me to this post.
What I failed to realize in my example above, is that the & (ampersand) is also a command separator/terminator in bash - just like the ; (semicolon) is!! Note:
BashSheet - Greg's Wiki
[command] ; [command] [newline]
Semi-colons and newlines separate synchronous commands from each other.
[command] & [command]
A single ampersand terminates an asynchronous command.
So - given that the &, which is a command line terminator in bash, in my example above is followed by a ;, which is also a command line terminator - bash chokes. The answer is simply to remove the semicolon ;, and let only the ampersand & act as a command line separator in the one-liner:
$ tail -f /var/log/syslog & echo $!
[1] 15562
15562
$ May 1 05:39:16 mypc avahi-autoipd(eth0)[23315]: Got SIGTERM, quitting.
May 1 06:09:01 mypc CRON[2496]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
May 1 06:17:01 mypc CRON[5077]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 1 06:25:01 mypc CRON[7587]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
^C
$ May 1 06:52:01 mypc CRON[15934]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ))
$ ps -p 15562
PID TTY TIME CMD
15562 pts/0 00:00:00 tail
$ kill 15562
$ ps -p 15562
PID TTY TIME CMD
[1]+ Terminated tail -f /var/log/syslog
$ ps -p 15562
PID TTY TIME CMD
$
... however, in this example, you have to manually kill the spawned process.
To go back to the OP's problem, I can reconstruct it with this command line:
$ tail -f /var/log/syslog & pid=$! { sleep 2; kill $pid; }
bash: syntax error near unexpected token `}'
Thinking about this: bash sees & as a separator, then sees the legal command pid=$!, and then, with that command still unterminated, sees a curly brace {, which starts a new command group in the current shell. So the answer is simply to terminate pid=$! with a semicolon ;, so that the new command group can properly start:
$ tail -f /var/log/syslog & pid=$! ; { sleep 2; kill $pid; }
[1] 20228
May 1 05:39:16 mypc avahi-autoipd(eth0)[23315]: Got SIGTERM, quitting.
May 1 06:09:01 mypc CRON[2496]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
May 1 06:17:01 mypc CRON[5077]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 1 06:25:01 mypc CRON[7587]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
May 1 06:52:01 mypc CRON[15934]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ))
$ ps -p 20228
PID TTY TIME CMD
[1]+ Terminated tail -f /var/log/syslog
$
Note that the tail -f process seems to terminate properly, but in my bash (version 4.2.8(1)), I have to press Enter in the shell to see the "[1]+ Terminated ..." message.
Hope this helps,
Cheers!

Getting a PID from a Background Process Run as Another User

Getting a background process ID is easy to do from the prompt by going:
$ my_daemon &
$ echo $!
But what if I want to run it as a different user like:
su - joe -c "/path/to/my_daemon &;"
Now how can I capture the PID of my_daemon?
Succinctly - with a good deal of difficulty.
You have to arrange for the su'd shell to write the child PID to a file and then pick up the output. Given that it will be 'joe' creating the file and not 'dex', that adds another layer of complexity.
The simplest solution is probably:
su - joe -c "/path/to/my_daemon & echo \$! > /tmp/su.joe.$$"
bg=$(</tmp/su.joe.$$)
rm -f /tmp/su.joe.$$ # Probably fails - joe owns it, dex does not
The next solution involves using a spare file descriptor - number 3.
su - joe -c "/path/to/my_daemon 3>&- & echo \$! 1>&3" 3>/tmp/su.joe.$$
bg=$(</tmp/su.joe.$$)
rm -f /tmp/su.joe.$$
If you're worried about interrupts etc (and you probably should be), then you trap things too:
tmp=/tmp/su.joe.$$
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
su - joe -c "/path/to/my_daemon 3>&- & echo \$! 1>&3" 3>$tmp
bg=$(<$tmp)
rm -f $tmp
trap 0 1 2 3 13 15
(The caught signals are HUP, INT, QUIT, PIPE and TERM - plus 0 for shell exit.)
Warning: nice theory - untested code...
The approaches presented here didn't work for me. Here's what I did:
PID_FILE=/tmp/service_pid_file
su -m $SERVICE_USER -s /bin/bash -c "/path/to/executable $ARGS >/dev/null 2>&1 & echo \$! >$PID_FILE"
PID=`cat $PID_FILE`
As long as the output from the background process is redirected, you can send the PID to stdout:
su "${user}" -c "${executable} > '${log_file}' 2>&1 & echo \$!"
The PID can then be redirected to a file owned by the first user, rather than the second user.
su "${user}" -c "${executable} > '${log_file}' 2>&1 & echo \$!" > "${pid_file}"
The log files do need to be owned by the second user to do it this way, though.
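The mechanics of that stdout capture can be seen without a second user by letting bash -c stand in for su (a sketch; the bash -c substitution and the sleep stand-in for the daemon are mine):

```shell
#!/bin/bash
# The inner shell backgrounds the daemon and echoes its PID on stdout;
# because the daemon's own output is redirected, the echoed PID is the
# only thing the outer shell captures.
pid=$(bash -c 'sleep 60 > /dev/null 2>&1 & echo $!')
ps -o comm= -p "$pid"     # the backgrounded sleep outlives its parent shell
kill "$pid"
```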
Here's my solution
su oracle -c "/home/oracle/database/runInstaller" &
pid=$(pgrep -P $!)
Explanation:
pgrep -P $! - Gets the child process of the parent pid $!
I took the above solution by Linux, but had to add a sleep to give the child process a chance to start.
su - joe -c "/path/to/my_daemon > /some/output/file" &
parent=$!
sleep 1
pid=$(pgrep -P $parent)
Running in bash, it doesn't like pid=$(pgrep -P $!) but if I add a space after the ! it's ok: pid=$(pgrep -P $! ). I stuck with the extra $parent variable to remind myself what I'm doing next time I look at the script.