Can I get the PID of touch while it's creating a file? I've tried touch ID$! & but it doesn't give the right PID; it picks up the PID of the command that ran before touch. Any advice?
I suppose you could write a small C or Perl program that calls fork() and then uses one of the exec*() functions to invoke touch from the child process. The parent process would receive the child's PID as the result of the fork call.
You say in a comment that you want to insert the PID into the name of the file. I can't think of a way to invoke touch with its own PID as part of its command-line argument; you won't know the PID soon enough to do that. I suppose you could rename the file after touching it.
But the PID of the touch process isn't particularly meaningful. The process will have terminated before you can make any use of it.
If you just want a (more or less) unique number as part of the file name, I can't think of any good reason that it has to be the PID of the touch process. You could just do something like:
NEW_PID=$(sh -c 'echo $$')
touch foo-$NEW_PID.txt
which gives you the PID of a short-lived shell process.
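If all you need is a reasonably unique suffix, here is a minimal sketch that avoids spawning an extra shell by combining the current shell's PID with a timestamp (the foo- prefix is just an example):
touch "foo-$$-$(date +%s).txt"   # $$ is this shell's PID, date +%s appends the epoch seconds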
Note that PIDs are not unique; since there are only finitely many possible PIDs, they're re-used after a while. (I've been able to force a PID to be reused in less than a minute by forking multiple processes very quickly.)
This is touch rewritten in Perl, with the PID of the creating process as part of the filename:
perl -e 'open(F,">".$$."myfile")||die $!'
If you really need that pid, it's a multi-step process:
f=$(mktemp)        # mktemp creates the file and prints its name
touch "$f" &       # run touch in the background so its PID lands in $!
wait "$!"          # wait for touch to finish; $! still holds its PID afterwards
mv "$f" "./ID$!"   # rename the file using touch's PID
What I usually do is pause my script, run it in the background and then disown it like
./script
^Z
bg
disown
However, I would like to be able to cancel my script at any time. If I have a script that runs indefinitely, I would like to be able to cancel it after a few hours or a day or whenever I feel like cancelling it.
Since you are having a bit of trouble following along, let's see if we can keep it simple for you (this presumes you can write to /tmp; change as required). Let's start your script in the background and create a PID file containing the PID of its process.
$ ./script & echo $! > /tmp/scriptPID
You can check the contents of /tmp/scriptPID
$ cat /tmp/scriptPID
######
Where ###### is the PID number of the running ./script process. You can further confirm with pidof script (which will return the same ######). You can use ps aux | grep script to view the number as well.
When you are ready to kill the ./script process, you simply pass the number (e.g. ######) to kill. You can do that directly with:
$ kill $(</tmp/scriptPID)
(or with the other methods listed in my comment)
You can add rm /tmp/scriptPID to remove the pid file after killing the process.
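For example, a combined kill-and-cleanup one-liner (a sketch, using the same pid file as above):
$ kill "$(</tmp/scriptPID)" && rm /tmp/scriptPID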
Look things over and let me know if you have any further questions.
I'm basically after a script that checks whether the system has a process with the specified name. If it finds any matching processes, it kills all of them and reports how many were terminated; otherwise it echoes that no such process exists.
For example:
$ terminateProcess [a running cpp program]
should kill all the [given file name] processes.
Can anybody get me started?
No need to write a shell script; pkill has existed for years. From man pkill:
pkill will send the specified signal (by default SIGTERM) to each
process instead of listing them on stdout.
-c, --count
       Suppress normal output; instead print a count of matching
       processes. When count does not match anything, e.g. returns
       zero, the command will return non-zero value.
Example 2: Make syslog reread its configuration file:
$ pkill -HUP syslogd
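If you still want the reporting behaviour from the question wrapped in a script, a minimal sketch along these lines might do (it assumes pgrep/pkill are installed and that the argument matches the process name exactly):
#!/bin/bash
# terminateProcess: kill every process with the given name and report the count
name="$1"
count=$(pgrep -c -x "$name")   # count exact-name matches
if [ "$count" -eq 0 ]; then
    echo "No process named '$name' is running."
else
    pkill -x "$name"           # send SIGTERM to every exact-name match
    echo "Terminated $count process(es) named '$name'."
fi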
I basically want to run a script (which calls more scripts) in a new process group so that I can send a signal to all the processes called by the script.
On Linux, I found that setsid helps me do that, but it is not available on FreeBSD.
Syntax for setsid (provided by util-linux-ng):
setsid /path/to/myscript
I have since learnt, however, that a session and a process group are not the same thing. But starting a new session also solves my problem.
Sessions and groups are not the same thing. Let's make things clear:
A session consists of one or more process groups, and can have a controlling terminal. When the session has a controlling terminal, the session has, at any moment, exactly one foreground process group and zero or more background process groups. In such a scenario, all terminal-generated signals and input are seen by every process in the foreground process group.
Also, when a session has a controlling terminal, the shell process is usually the session leader, dictating which process group is the foreground process group (implicitly making the other groups background process groups). Processes in a group are usually put there by a linear pipeline. For example, ls -l | grep a | sort will typically create a new process group where ls, grep and sort live.
Shells that support job control (which also requires support by the kernel and the terminal driver), as in the case of bash, create a new process group for each command invoked -- and if you invoke it to run in the background (with the & notation), that process group is not given the control of the terminal, and the shell makes it a background process group (and the foreground process group remains the shell).
So, as you can see, you almost certainly don't want to create a session in this case. A typical situation where you'd want to create a session is if you were daemonizing a process, but other than that, there is usually not much use in creating a new session.
You can run the script as a background job; as I mentioned, this will create a new process group. Since fork() inherits the process group ID, every process executed by the script will be in the same group. For example, consider this simple script:
#!/bin/bash
ps -o pid,ppid,pgid,comm | grep ".*"
This prints something like:
PID PPID PGID COMMAND
11888 11885 11888 bash
12343 11888 12343 execute.sh
12344 12343 12343 ps
12345 12343 12343 grep
As you can see, execute.sh, ps and grep are all on the same process group (the value in PGID).
So all you want is:
/path/to/myscript &
Then you can check the process group ID of myscript with ps -o pid,ppid,pgid,comm | grep myscript. To send a signal to the whole group, pass the negated PGID to kill (the PGID is the PID of the group leader); a signal sent to a process group is delivered to every process in that group.
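A minimal sketch of the whole idea (assuming the script is started as a background job, so its PID is also the PGID of the new group):
/path/to/myscript &     # the background job becomes the leader of a new process group
pgid=$!                 # for the group leader, PID and PGID are the same
kill -TERM -- -"$pgid"  # the leading dash signals the whole group, not just the leader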
On FreeBSD you may try using the script command, which will internally execute setsid:
stty -echo -onlcr # avoid added \r in output
script -q /dev/null /path/to/myscript
stty echo onlcr
# sync # ... if terminal prompt does not return
This is not exactly an answer, but an alternative approach based on names.
You can give all of the processes a common part in their names. For example, the my_proc_group_29387172 part is shared by all of the following processes:
-rwxrwxr-x. my_proc_group_29387172_microservice_1
-rwxrwxr-x. my_proc_group_29387172_microservice_2
-rwxrwxr-x. my_proc_group_29387172_data_dumper
Spawn all of them (and as many as you want):
ADDR=1 ./my_proc_group_29387172_microservice_1
ADDR=2 ./my_proc_group_29387172_microservice_1
ADDR=3 ./my_proc_group_29387172_microservice_2
./my_proc_group_29387172_data_dumper
When you want to kill all of the processes, you can use the pkill command (pattern kill) or killall with the --regexp parameter:
pkill my_proc_group_29387172
Benefit :) - you can start as many processes as you want at any time (or any day) from any script.
Drawback :( - you can kill innocent processes if they share a common part of their name with your pattern.
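To reduce that risk, you could anchor the pattern against the full command line (a sketch; adjust the prefix to your own naming scheme):
pkill -f '(^|/)my_proc_group_29387172_'   # -f matches against the full command line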
Please note that this question was edited after a couple of comments I received. Initially I wanted to split my goal into smaller pieces to make it simpler (and perhaps expand my knowledge on various fronts), but it seems I went too far with the simplicity :). So, here I am asking the big question.
Using bash, is there a way one can actually create an anonymous pipe between two child processes and know their pids?
The reason I'm asking is that when you use the classic pipeline, e.g.
cmd1 | cmd2 &
you lose the ability to send signals to cmd1. In my case, the actual commands I am running are these:
./my_web_server | ./my_log_parser &
my_web_server is a basic web server that dumps a lot of logging information to its stdout.
my_log_parser is a log parser that I wrote that reads through all the logging information it receives from my_web_server and basically selects only certain values from the log (in reality it stores the whole log as it receives it, but it additionally creates an extra csv file with the values it finds).
The issue I am having is that my_web_server actually never stops by itself (it is a web server, you don't want that from a web server :)). So after I am done, I need to stop it myself. I would like for the bash script to do this when I stop it (the bash script), either via SIGINT or SIGTERM.
For something like this, traps are the way to go. In essence I would create a trap for INT and TERM and the function it would call would kill my_web_server, but... I don't have the pid and even though I know I could look for it via ps, I am looking for a pretty solution :).
Some of you might say: "Well, why don't you just kill my_log_parser and let my_web_server die on its own with SIGPIPE?". The reason I don't want to kill it is that when you kill a process that's at the end of the pipeline, the output buffer of the process before it is not flushed. Ergo, you lose stuff.
I've seen several solutions here and in other places that suggest storing the PID of my_web_server in a file. This is a solution that works; it is possible to write the pipeline by fiddling with the file descriptors a bit. I, however, don't like this solution, because I have to generate files. I don't like the idea of creating arbitrary files just to store a 5-character PID :).
What I ended up doing for now is this:
#!/bin/bash
trap " " HUP
fifo="$( mktemp -u "$( basename "${0}" ).XXXXXX" )"
mkfifo "${fifo}"
<"${fifo}" ./my_log_parser &
parser_pid="$!"
>"${fifo}" ./my_web_server &
server_pid="$!"
rm "${fifo}"
trap '2>/dev/null kill -TERM '"${server_pid}"'' INT TERM
while true; do
    wait "${parser_pid}" && break
done
This solves the issue of not being able to terminate my_web_server when the script receives SIGINT or SIGTERM. It also seems more readable than any hackery fiddling with file descriptors that would eventually use a file to store my_web_server's PID, which I think is a plus.
But it still uses a file (named pipe). Even though I know it uses the file (named pipe) for my_web_server and my_log_parser to talk (which is a pretty good reason) and the file gets wiped from the disk very shortly after it's created, it's still a file :).
Would any of you guys know of a way to do this task without using any files (named pipes)?
From the Bash man pages:
!      Expands to the process ID of the most recently executed
       background (asynchronous) command.
You are not running a background command; you are running a process substitution to read to file descriptor 3.
The following works, but I'm not sure if it is what you are trying to achieve:
sleep 120 &
child_pid="$!"
wait "${child_pid}"
sleep 120
Edit:
Comment was: I know I can pretty much do this the silly 'while read i; do blah blah; done < <( ./my_proxy_server )'-way, but I don't particularly like the fact that when a script using this approach receives INT or TERM, it simply dies without telling ./my_proxy_server to bugger off too :)
It seems like your problem stems from the fact that it is not so easy to get the PID of the proxy server. So, how about using your own named pipe together with the trap command:
pipe='/tmp/mypipe'
mkfifo "$pipe"
./my_proxy_server > "$pipe" &
child_pid="$!"
echo "child pid is $child_pid"
# Tell the proxy server to bugger-off
trap 'kill $child_pid' INT TERM
while read
do
    echo "$REPLY"
    # blah blah blah
done < "$pipe"
rm "$pipe"
You could probably also use kill %1 instead of using $child_pid.
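For instance, the trap line could be written with the job spec instead (a sketch, assuming the proxy server is the only background job and is therefore job %1):
trap 'kill %1' INT TERM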
YAE (Yet Another Edit):
You ask how to get the PIDs from:
./my_web_server | ./my_log_parser &
Simples, sort of. To test I used sleep, just like your original.
sleep 400 | sleep 500 &
jobs -l
Gives:
[1]+ 8419 Running sleep 400
8420 Running | sleep 500 &
So it's just a question of extracting those PIDs:
pid1=$(jobs -l|awk 'NR==1{print $2}')
pid2=$(jobs -l|awk 'NR==2{print $1}')
I hate calling awk twice here, but anything else is just jumping through hoops.
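If the double call bothers you, one possible alternative (a sketch, assuming bash and that the pipeline is the only background job) is to parse the output of a single jobs -l call:
# the first line of jobs -l holds the first PID in field 2, the second line in field 1
read -r pid1 pid2 <<< "$(jobs -l | awk 'NR==1{printf "%s ", $2} NR==2{print $1}')"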
I'm currently writing a bash script to kill a process and then start it again.
I'm using
ps -ef | grep Cp1252
to return the list of processes based on the file encoding.
Finding the process based on file encoding is not ideal, as other processes may also have this string.
I need to uniquely identify two processes, but they have quite generic process names:
/user/jdk/jdk1.6.0_43/bin/java
My question is, is there any way to add a flag or a unique identifier to a process?
Or indeed, is there any other way I can solve this issue?
Thanks,
Dearg
UPDATE
I found a solution to my problem: I can use the PPID to uniquely identify each process. The command name of the parent process (found via the PPID) is distinctive enough that I can tell which is a normal Java process and which is the one I want to restart.
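A sketch of that kind of check (assuming the processes show up under the name java and that pgrep/ps are available):
# print each java process together with its parent's command name
for pid in $(pgrep -x java); do
    ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
    echo "$pid parent: $(ps -o comm= -p "$ppid")"
done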
Thanks for all your help anyway, it certainly helped me to narrow down the alternatives! :-)
You could write a helper script to start the process and record the pid in a file. Just make sure to delete the file when the process stops (which could be done by the wrapper again).
Something like this:
#!/bin/bash
PIDFILE=/path/to/pid-file
COMMAND=YOUR_PROGRAM_HERE
# optionally check if the pid file already exists
if [ -e "${PIDFILE}" ]; then
    echo "${PIDFILE} already exists, assuming ${COMMAND} is already running"
    exit 1
fi
${COMMAND} &
WAITPID="$!"
echo "${WAITPID}" > "${PIDFILE}"
# cleanup on signal
trap "kill ${WAITPID}; rm ${PIDFILE}; exit" SIGINT SIGTERM SIGHUP
# optionally wait for the program to be done
wait "${WAITPID}"
rm "${PIDFILE}"
So, to restart your process, just read the PID from /path/to/pid-file and send the appropriate signal.
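For example, a restart could be sketched like this (start_wrapper.sh is a made-up name for the wrapper script above; you may need to give the old wrapper a moment to remove the pid file before starting again):
kill "$(cat /path/to/pid-file)"   # the wrapper's wait returns and it removes the pid file
./start_wrapper.sh &              # start the process again through the wrapper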
If you can modify the source code of the processes you run, I recommend adding an option to record the pid from within the process itself.
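A minimal sketch of that idea for a shell script (the pid-file path is just an example; pick one you can write to):
#!/bin/bash
echo $$ > /tmp/myprogram.pid   # record our own PID on startup
# ... rest of the program ...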