Stop background jobs after exiting the shell - shell

I use nohup to start several background jobs. I can then use "jobs -l" to list them and "fg jobID" to bring one to the foreground and stop it. But now I have quit that session, and after logging in again, "jobs -l" displays nothing, as expected. How can I stop those background jobs now? Is there anything like "fg jobpid" to bring a job to the foreground by its PID and kill it? Thanks in advance!

Try this (be careful, it can be dangerous):
ps -ef | grep 'some string in the process command' | grep -v grep | awk '{print $2}' | xargs kill -9
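To preview which PIDs the pipeline would target, you can first run it without the final xargs kill -9 stage and only add the kill back once the list looks right:
ps -ef | grep 'some string in the process command' | grep -v grep | awk '{print $2}'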

You can also use something like tmux or screen and detach from the session; when you log back in, re-attach and manage your jobs as if you'd been logged in the whole time.
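A minimal tmux workflow, as a sketch (the session name "work" is only an example):
tmux new -s work        # start a session and launch your long-running jobs inside it
                        # press Ctrl-b then d to detach; the session survives logout
tmux attach -t work     # after logging back in, re-attach and use jobs -l / fg as usual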

Related

How do I make npm stop after npm run?

If I do a:
npm run script
Can I stop it with a stop?
npm stop script
Because I tried it and it does not work.
I know that I can kill it with "Ctrl + C", but I want to do it with a command.
Try something like this:
ps -ef | grep script | grep -v grep | awk '{print $2}' | head -n1 | xargs kill -9
This command finds the first process named "script" in the list of all Unix processes from all users (grep -v grep keeps the grep process itself out of the list) and kills it using its PID.

Bash function to kill process

I made an alias for this function in order to kill processes in bash:
On my .bashrc file
kill_process() {
    # $1 being a parameter for the process name
    kill $(ps ax | grep "$1" | awk '{print $1}')
}
alias kill_process=kill_process
So, suppose I want to kill the meteor process:
Let's see all meteor processes:
ps aux | grep 'meteor' | awk '{print $2}'
21565
21602
21575
21546
Calling the kill_process function with the alias
kill_process meteor
bash: kill: (21612) - No such process
So the kill_process function effectively terminates the meteor processes, but its kill command also looks for a nonexistent PID. Notice that PID 21612 wasn't listed by ps aux | grep. Any ideas for improving the kill_process function to avoid this?
I think in your case the killall command would do what you want:
killall NAME
The standard way of killing processes by name is using killall, as Swoogan suggests in his answer.
As for your kill_process function, the grep expression that filters ps will also match its own grep process (you can see this by running the pipeline without awk), but by the time kill is invoked, that process is no longer running. That is the message you see.
Each time you run the command, grep runs again with a new PID: that's the reason you can't find it on the list when you test it.
You could:
Run ps first, pipe it into a file or variable, then grep
Filter grep's PID out of the list (see the sketch below)
(Simpler) suppress kill's output:
kill $(...) 2>/dev/null
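A minimal sketch of the second option, assuming you still want a loose name match (it keeps the question's kill_process name and simply drops the grep process from the list):
kill_process() {
    # $1 is the process name to match; grep -v grep removes the grep process itself
    local pids
    pids=$(ps ax | grep "$1" | grep -v grep | awk '{print $1}')
    [ -n "$pids" ] && kill $pids
}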

Using grep with ps -ax to kill numerous applications at once (Mac)

I'm very new to shell scripting. I recently worked through a tutorial on using bash with grep.
About 5 minutes ago a practical application for this came up, as it happened: my browser appeared to have crashed. For practice, I wanted to use the shell to find the process and then kill it.
I typed ps -ax | grep Chrome
But many lines appeared with lots of numbers on the left (are these "job numbers" or "process numbers"?)
Is there a way to kill all "jobs" where Chrome appears in the name of the process, rather than killing each one individually?
I do not recommend using killall.
And you should NOT use -9 at first.
I recommend a simple pkill:
pkill Chrome
You can use killall:
killall Chrome
Edit: Removed "-9" since it should be used as a last resort (was: killall -9 Chrome).
I like my solution better, because it adds a kind of wildcard. I did this:
mkdir ~/.bin
echo "ps -ef | grep -i $1 | grep -v grep | awk '{print $2}' | xargs kill -9" > ~/.bin/k
echo "alias k='/Users/YOUR_USER_HERE/.bin/k $1'" >> ~/.profile
chmod +x ~/.bin/k
Now to kill stuff you type:
k Chrome
or
k chrome
or
k chr
Just be careful with system processes or other processes that should not be killed.
For example, if you have Calculator running and another program called Tabulator also running, and you type
k ulator
it will kill them both.
Also remember that this will kill ALL instances of the program(s) that match whatever you type after the command k.
Because of this, you can create two aliases in your profile file and two scripts inside .bin, one using grep -i and the other without it; the one without the -i flag will be case-sensitive (a sketch follows below).
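A sketch of that case-sensitive variant, assuming the same ~/.bin layout as above (the script name ks is only an example, chosen so it does not clash with k on a case-insensitive Mac filesystem):
echo "ps -ef | grep \$1 | grep -v grep | awk '{print \$2}' | xargs kill -9" > ~/.bin/ks
echo "alias ks='/Users/YOUR_USER_HERE/.bin/ks'" >> ~/.profile
chmod +x ~/.bin/ks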

Killing processes - shell

The command ps -ef | grep php returns a list of processes.
I want to kill all of those processes in one command or with a shell script.
Thanks
The easiest way to kill all commands with a given name is to use killall:
killall php
Note that this only sends a termination signal (SIGTERM). This should be enough if the processes are behaving. If they're not dying from that, you can forcibly kill them using
killall -9 php
The normal way to do this is to use xargs as in ps -ef | grep php | xargs kill, but there are several ways to do this.
ps -ef lists all processes, and then grep picks the lines that mention "php". This means that commands that merely have "php" somewhere in their command line will also match and be killed. If you really want to match the command name (and not the arguments as well), it is probably better to use pgrep php.
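To illustrate the difference: the grep pipeline would also match a process such as "vim index.php" (a hypothetical example), whereas pgrep matches against the process name:
pgrep -l php      # list matching PIDs together with their process names
pgrep -x php      # -x requires an exact match on the name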
You can use a shell backtick to provide the output of a command as arguments to another command, as in
kill `pgrep php`
If you only want to kill processes, there is the command pkill, which matches a pattern against the command name (it cannot be used if you want to do something else with the processes, though). So if you want to kill all processes whose command contains "php", you can do that with pkill php.
Hope this helps.
You can find its PID (it is in the second column of ps -ef output) and use the kill command to forcibly kill it:
kill -9 <pid you found>
Use xargs:
ps -ef | grep php | grep -v grep | awk '{print $2}' | xargs kill -9
grep -v grep excludes the grep command itself, and awk extracts the list of PIDs, which are then passed to the kill command.
Use pkill php. More on this topic in this similar question: How can I kill a process by name instead of PID?

Bash - starting and killing processes

I need some advice on a "simple" bash script.
I want to start around 500 instances of a program "myprog" and kill all of them after x seconds.
In short, I have a loop that starts the program in background, and after sleep x (number of seconds) pkill is called with the program name.
My questions are:
1. How can I verify that all 500 instances are running after 10 seconds? A ps and grep combination with counting, or is there another way?
2. How can I get a count of how many processes pkill (or a similar kill command) actually killed (so I can tell whether any processes terminated before the actual time limit)?
3. How can I redirect the output of pkill (or a similar kill command) so that it doesn't print the killed-process information? Otherwise I get 500 lines of ./initializeTest: line 250: 7566 Terminated ./$myprog. Redirecting to /dev/null didn't do the trick.
In bash there is the ulimit command that controls the resources of a (sub)shell.
This, for example, is guaranteed to use at most 10 seconds of cpu time and then die:
(ulimit -t 10; ./do_something)
That doesn't answer your question but hopefully it is helpful.
1, 2: Use pgrep. I don't remember off the top of my head whether pgrep has a -c parameter, so you might need to pipe its output to wc -l (see the example below).
3: That output is produced by your shell's job control. If you run this as a script (not in an interactive shell), there shouldn't be such output. For an interactive shell there are a number of ways to turn it off, but they are shell-dependent, so refer to your shell's manual.
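For what it's worth, the common procps pgrep does have a -c option, so counting the instances could look like this (myprog stands for the program from the question):
pgrep -c myprog          # print the number of matching processes
pgrep myprog | wc -l     # equivalent fallback if your pgrep lacks -c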
Well, my 2 cents:
1. ps and grep can do the job. I find that kill -0 $pid is better, by the way :) (it tells you whether a process is running or not).
2. You can use ps/grep or kill -0. For this problem I would start all processes in the background, get their PIDs with $!, store them in an array or a list, then use kill -0 to get the status of all the processes (see the sketch below).
3. Use &> or 2>&1, as the message is probably written to stderr.
my2c
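A minimal sketch of that approach, using ./myprog from the question (the pids array name is just an example):
pids=()
for ((i=0; i < 500; i++)); do
    ./myprog &              # start one instance in the background
    pids+=($!)              # remember its PID
done
sleep 10
running=0
for pid in "${pids[@]}"; do
    # kill -0 sends no signal; it only reports whether the PID still exists
    kill -0 "$pid" 2>/dev/null && running=$((running + 1))
done
echo "$running of ${#pids[@]} instances still running"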
To make sure that each process gets its fair share of 10 seconds before being killed, I would wrap each command in a subshell with its own sleep && kill.
function run_with_tmout {
    CMD=$1; TMOUT=$2
    $CMD &
    PID=$!
    sleep $TMOUT
    kill $PID
}

for ((i=0; i < 500; i++)); do
    run_with_tmout ./myprog 10 &
done
# wait for all child processes to end
wait && echo "all done"
For a more complete example, see this example from bashcookbook.com which first checks if the process is still running, then tries kill -s SIGTERM before resorting to SIGKILL.
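A rough sketch of that pattern (not the bashcookbook.com code itself, just the general idea), where PID holds the process ID of one instance:
if kill -0 "$PID" 2>/dev/null; then                        # still alive?
    kill -s SIGTERM "$PID"                                 # ask it to terminate gracefully
    sleep 2
    kill -0 "$PID" 2>/dev/null && kill -s SIGKILL "$PID"   # force-kill only if needed
fi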
I have been using something like the following to get a list of pids.
PS=$(ps ax | sed s/^' '*// | grep java | grep program_name | cut -d' ' -f1)
Then I use kill $PS to stop them.
#!/bin/bash
PS=$(ps ax | sed s/^' '*// | grep java | grep program_name | cut -d' ' -f1)
kill $PS
