What does percent sign % do in "kill %vmtouch"? - shell

I came across this shell script
bash# while true; do
vmtouch -m 10000000000 -l *head* & sleep 10m
kill %vmtouch
done
and wondered how the kill %vmtouch portion works.
I normally pass a PID to kill a process, but how does %vmtouch resolve to a PID?
I tried to run portions of the script separately, but I got a
-bash: kill: %vmtouch: no such job error.

%something is not a general shell-script feature, but syntax used by the kill, fg and bg builtins to identify jobs. The shell searches its list of active jobs for the given string and then acts on the matching job (in the case of kill, by signalling it).
Here's what man bash says if you search for jobspec:
The character % introduces a job specification (jobspec).
Job number n may be referred to as %n. A job may also be referred to using a prefix of the name used to start it, or using a substring that appears in its command line. [...]
So if you do:
sleep 30 &
cat &
You can use things like %sleep or %sl to conveniently refer to the sleep job without having to find or remember its PID or job number.
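As a quick illustration (a minimal sketch, not part of the original answer), a jobspec can be passed to kill anywhere you would otherwise pass a PID:
sleep 30 &      # job 1
cat &           # job 2
jobs            # lists both jobs with their job numbers
kill %sleep     # signal the sleep job by command-name prefix (same as kill %1)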

You should look at the Job control section of the man bash page. The character % introduces a job specification (jobspec). When you start this background job in an interactive shell, you should see an entry in the terminal like
[1] 25647
where 25647 is just an example number. The line above means that the process ID of the last backgrounded job (for a pipeline, the process ID of its last process) is 25647, and that the job was assigned job number 1.
A jobspec like %vmtouch resolves by the name used to start the job, but only while the shell actually has such a job; running the kill line on its own, with no vmtouch job in the background, produces the no such job error you saw. The last backgrounded job can also be referred to as %1, so the kill command could equally have been written as below, which is the same as writing kill 25647
vmtouch -m 10000000000 -l *head* & sleep 10m
kill %1
That said, instead of relying on jobspecs, you can use the process ID of the background job, which is stored in the special shell variable $!:
vmtouch -m whatever -l *head* & vmtouch_pid=$!
sleep 10m
kill "$vmtouch_pid"
See Job Control Basics from the GNU bash man page.
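Putting that together, a minimal sketch of the loop from the question rewritten around $! (the vmtouch arguments are copied verbatim from the question):
while true; do
    vmtouch -m 10000000000 -l *head* &
    vmtouch_pid=$!
    sleep 10m
    kill "$vmtouch_pid"
done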


How do I kill the last spawned background task in linux?
Example:
doSomething
doAnotherThing
doB &
doC
doD
#kill doB
????
You can kill by job number. When you put a task in the background you'll see something like:
$ ./script &
[1] 35341
That [1] is the job number and can be referenced like:
$ kill %1
$ kill %% # Most recent background job
To see a list of job numbers use the jobs command. More from man bash:
There are a number of ways to refer to a job in the shell. The character % introduces a job name. Job number n may be referred to as %n. A job may also be referred to using a prefix of the name used to start it, or using a substring that appears in its command line. For example, %ce refers to a stopped ce job. If a prefix matches more than one job, bash reports an error. Using %?ce, on the other hand, refers to any job containing the string ce in its command line. If the substring matches more than one job, bash reports an error. The symbols %% and %+ refer to the shell's notion of the current job, which is the last job stopped while it was in the foreground or started in the background. The previous job may be referenced using %-. In output pertaining to jobs (e.g., the output of the jobs command), the current job is always flagged with a +, and the previous job with a -. A single % (with no accompanying job specification) also refers to the current job.
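For example (a minimal sketch illustrating the jobspec forms described above):
sleep 100 &    # job 1
sleep 200 &    # job 2 (the current job: %% / %+)
jobs           # lists both; the current job is flagged with +
kill %1        # refer to job 1 by job number
kill %?200     # refer to job 2 by a substring of its command line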
There's a special variable for this in bash:
kill $!
$! expands to the PID of the last process executed in the background.
The following command gives you a list of all background jobs in your session, along with their PIDs. You can then use the PID to kill the process.
jobs -l
Example usage:
$ sleep 300 &
$ jobs -l
[1]+ 31139 Running sleep 300 &
$ kill 31139
This should kill all of the current shell's background jobs:
jobs -p | xargs kill -9
skill doB
skill is a version of the kill command that lets you select one or more processes based on given criteria.
You need its pid... use "ps -A" to find it.
This is an off-topic answer, but it may be valuable for those who are interested.
As in John Kugelman's answer, % is related to job specification.
How do you find that efficiently in the man page? Use less's &pattern command (man usually uses less as its pager): in man bash, type &% and press Enter to show only the lines containing '%'; to show everything again, type &. and press Enter.
Just use the killall command:
killall taskname
For more info and more advanced options, see man killall.

How to make a simple shell script that checks if the system has a process with the name specified?

Pretty much a script that checks if the system has a process with the name specified. If it does find any such processes, it kills all of them and reports how many processes were terminated; otherwise it echoes that no such process exists.
for example:
$ terminateProcess [a running cpp program]
should kill all the [given file name] processes.
Can anybody get me started?
No need to write a shell script; pkill has existed for years. From man pkill:
pkill will send the specified signal (by default SIGTERM) to each
process instead of listing them on stdout.
-c, --count
Suppress normal output; instead print a count of matching processes. When count does not match anything, e.g. returns zero, the command will return non-zero value.
Example 2: Make syslog reread its configuration file:
$ pkill -HUP syslogd
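If you do want a wrapper script that also reports the count, here is a minimal sketch in the spirit of the question (the script name terminateProcess and the exact-name matching are assumptions, not part of the answer above):
#!/bin/bash
# terminateProcess: kill all processes with the given name and report how many.
name="$1"
count=$(pgrep -c -x "$name")   # -x matches the process name exactly
if [ "$count" -gt 0 ]; then
    pkill -x "$name"
    echo "Terminated $count process(es) named '$name'."
else
    echo "No process named '$name' exists."
fi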

starting a new process group from bash script

I basically want to run a script (which calls more scripts) in a new process group so that I can send signal to all the processes called by the script.
In Linux, I found out setsid helps me in doing that, but this is not available on FreeBSD.
Syntax for setsid (provided by util-linux-ng).
setsid /path/to/myscript
I have, however, learnt that a session and a process group are not the same thing. But starting a new session also solves my problem.
Sessions and process groups are not the same thing. Let's make things clear:
A session consists of one or more process groups, and can have a controlling terminal. When the session has a controlling terminal, the session has, at any moment, exactly one foreground process group and one or more background process groups. In such a scenario, all terminal-generated signals and terminal input are seen by the foreground process group.
Also, when a session has a controlling terminal, the shell process is usually the session leader, dictating which process group is the foreground process group (implicitly making the other groups background process groups). Processes in a group are usually put there by a linear pipeline. For example, ls -l | grep a | sort will typically create a new process group where ls, grep and sort live.
Shells that support job control (which also requires support by the kernel and the terminal driver), as in the case of bash, create a new process group for each command invoked -- and if you invoke it to run in the background (with the & notation), that process group is not given the control of the terminal, and the shell makes it a background process group (and the foreground process group remains the shell).
So, as you can see, you almost certainly don't want to create a session in this case. A typical situation where you'd want to create a session is if you were daemonizing a process, but other than that, there is usually not much use in creating a new session.
You can run the script as a background job; as mentioned above, this will create a new process group. Since fork() inherits the process group ID, every process executed by the script will be in the same group. For example, consider this simple script:
#!/bin/bash
# List pid, parent pid, process group id and command for this terminal's processes;
# grep ".*" matches every line and is only here so that grep itself shows up too.
ps -o pid,ppid,pgid,comm | grep ".*"
This prints something like:
PID PPID PGID COMMAND
11888 11885 11888 bash
12343 11888 12343 execute.sh
12344 12343 12343 ps
12345 12343 12343 grep
As you can see, execute.sh, ps and grep are all in the same process group (the value in the PGID column).
So all you want is:
/path/to/myscript &
Then you can check the process group ID of myscript with ps -o pid,ppid,pgid,comm | grep myscript. To send a signal to the whole group, pass the negated PGID to kill (the PGID is the PID of the group leader); a signal sent to a process group is delivered to every process in that group.
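A minimal sketch of that, assuming job control is enabled (an interactive shell, or set -m in a script) so that the background job really gets its own process group:
/path/to/myscript &
pgid=$!    # the job's first process is the group leader, so its PID equals the PGID

# later: signal every process in the group (note the minus sign before the PGID)
kill -TERM -- -"$pgid"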
On FreeBSD you may try the script command, which internally runs the given command in a new session (similar to what setsid does).
stty -echo -onlcr # avoid added \r in output
script -q /dev/null /path/to/myscript
stty echo onlcr
# sync # ... if terminal prompt does not return
This is not exactly an answer, but an alternative approach based on names.
You can give all of the processes a common part in their names. For example, here the part my_proc_group_29387172 is shared by all of the following executables:
-rwxrwxr-x. my_proc_group_29387172_microservice_1
-rwxrwxr-x. my_proc_group_29387172_microservice_2
-rwxrwxr-x. my_proc_group_29387172_data_dumper
Spawn all of them (and as much as you want):
ADDR=1 ./my_proc_group_29387172_microservice_1
ADDR=2 ./my_proc_group_29387172_microservice_1
ADDR=3 ./my_proc_group_29387172_microservice_2
./my_proc_group_29387172_data_dumper
When you want to kill all of the processes, you can use pkill (pattern kill) with -f so that it matches against the full command line (by default it matches only the process name, which is truncated to 15 characters), or killall with its --regexp parameter:
pkill -f my_proc_group_29387172
Benefit :) - you can start as many processes as you want, at any time (or any day), from any script.
Drawback :( - you can kill innocent processes if they share a common part of their name with your pattern.

How to Parse Values from output in BASH

I'm writing a script that should create a rotating series of debug logs as it runs over a period of time. My current problem is that when I run it with -vx attached, I can see that it stops during the actual debugging process and doesn't proceed through the loop. This is reflective of how the command would run normally. So I thought that, to continue the process, I should run it with &.
The problem is that this will become exponentially messier over time (since none of the processes are stopping). So what I'm looking for is a way to parse the PID output of the & command into a variable; then I will add a kill command at the start of the loop pointed at that variable.
Figuring out how to parse the output of commands will also be useful in the other part of my project, which is to terminate the while loop based on a particular % free in df -h for a selected partition.
No parsing needed. The PID of the most recent background process is stored in $!.
command & # run command in background
pid=$! # save pid as $pid
...
kill $pid # kill command
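For the other part of the question, the df output needs very little parsing either. Here is a minimal sketch, assuming GNU df, a hypothetical partition /dev/sda1 and a 90% threshold (intended to sit inside the while loop):
used=$(df --output=pcent /dev/sda1 | tail -n 1 | tr -dc '0-9')   # e.g. 42
if [ "$used" -ge 90 ]; then
    echo "Partition is ${used}% full; stopping."
    break
fi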

Is there a way to make bash job control quiet?

Bash is quite verbose when running jobs in the background:
$ echo toto&
toto
[1] 15922
[1]+ Done echo toto
Since I'm trying to run jobs in parallel and use the output, I'd like to find a way to silence bash. Is there a way to remove this superfluous output?
You can use parentheses to run a background command in a subshell, and that will silence the job control messages. For example:
(sleep 10 & )
Note: The following applies to interactive Bash sessions. In scripts, job-control messages are never printed.
There are 2 basic scenarios for silencing Bash's job-control messages:
Launch-and-forget:
CodeGnome's helpful answer suggests enclosing the background command in a simple subshell - e.g., (sleep 10 &) - which effectively silences job-control messages - both on job creation and on job termination.
This has an important side effect:
By using control operator & inside the subshell, you lose control of the background job - jobs won't list it, and neither %% (the spec. (ID) of the most recently launched job) nor $! (the PID of the (last) process launched (as part of) the most recent job) will reflect it.[1]
For launch-and-forget scenarios, this is not a problem:
You just fire off the background job,
and you let it finish on its own (and you trust that it runs correctly).
[1] Conceivably, you could go looking for the process yourself, by searching running processes for ones matching its command line, but that is cumbersome and not easy to make robust.
Launch-and-control-later:
If you want to remain in control of the job, so that you can later kill it if need be, or synchronously wait (at some later point) for its completion, a different approach is needed:
Silencing the creation job-control messages is handled below, but in order to silence the termination job-control messages categorically, you must turn the job-control shell option OFF:
set +m (set -m turns it back on)
Caveat: This is a global setting that has a number of important side effects, notably:
Stdin for background commands is then /dev/null rather than the current shell's.
The keyboard shortcuts for suspending (Ctrl-Z) and delay-suspending (Ctrl-Y) a foreground command are disabled.
For the full story, see man bash and (case-insensitively) search for occurrences of "job control".
To silence the creation job-control messages, enclose the background command in a group command and redirect the latter's stderr output to /dev/null:
{ sleep 5 & } 2>/dev/null
The following example shows how to quietly launch a background job while retaining control of the job in principle.
$ set +m; { sleep 5 & } 2>/dev/null # turn job-control option off and launch quietly
$ jobs # shows the job just launched; it will complete quietly due to set +m
If you do not want to turn off the job-control option (set +m), the only way to silence the termination job-control message is to either kill the job or wait for it:
Caveat: There are two edge cases where this technique still produces output:
If the background command tries to read from stdin right away.
If the background command terminates right away.
To launch the job quietly (as above, but without set +m):
$ { sleep 5 & } 2>/dev/null
To wait for it quietly:
$ wait %% 2>/dev/null # use of %% is optional here
To kill it quietly:
{ kill %% && wait; } 2>/dev/null
The additional wait is necessary to make the termination job-control message that is normally displayed asynchronously by Bash (at the time of actual process termination, shortly after the kill) a synchronous output from wait, which then allows silencing.
But, as stated, if the job completes by itself, a job-control message will still be displayed.
Wrap it in a dummy script:
quiet.sh:
#!/bin/bash
# Run whatever was passed as arguments, in the background.
"$@" &
then call it, passing your command to it as an argument:
./quiet.sh echo toto
You may need to play with quotes depending on your input.
Interactively, no. It will always display job status. You can influence when the status is shown using set -b.
There's nothing preventing you from using the output of your commands (via pipes, or storing it variables, etc). The job status is sent to the controlling terminal by the shell and doesn't mix with other I/O. If you're doing something complex with jobs, the solution is to write a separate script.
The job messages are only really a problem if you have, say, functions in your bashrc which make use of job control and which you want to have direct access to your interactive environment. Unfortunately there's nothing you can do about it.
One solution (in bash anyway) is to redirect all of the command's output to /dev/null:
echo 'hello world' > /dev/null &
The above will not give you any output other than the job number and PID of the background process.
