How to capture a process ID and trigger a command when that process finishes in a bash script?

I am trying to write a bash script that starts a jar file in the background; for that reason I'm using nohup. Right now I can capture the PID of the java process, but I also need to execute a command when the process finishes.
This is how I started:
nohup java -jar jarfile.jar & echo $! > conf/pid
I also know from this answer that using ; will make a command execute after the first one finishes.
nohup java -jar jarfile.jar; echo "done"
echo "done" is just an example. My problem is that I don't know how to combine the two. If echo $! runs first, then echo "done" executes immediately; if echo "done" goes first, then echo $! captures the PID of echo "done" instead of the jar's.
I know that I could achieve the desired behavior by polling until the PID no longer shows as running, but I would like to avoid that as much as possible.

You can use the bash builtin wait once you have started the process with nohup:
nohup java -jar jarfile.jar &
pid=$! # Getting the process id of the last command executed
wait $pid # Waits until the process mentioned by the pid is complete
echo "Done, execute the new command"
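wait also returns the exit status of the waited-for process, so the follow-up command can branch on whether the jar succeeded. A minimal sketch (using sleep as a stand-in for the java -jar invocation, and a /tmp path instead of conf/pid):

```shell
#!/bin/sh
# Stand-in for: nohup java -jar jarfile.jar > app.log 2>&1 &
sleep 1 &
pid=$!
echo "$pid" > /tmp/pidfile.$$   # same idea as conf/pid in the question

wait "$pid"        # blocks until the background process exits
status=$?          # wait reports that process's exit status
if [ "$status" -eq 0 ]; then
    echo "done"
else
    echo "process exited with status $status"
fi
rm -f /tmp/pidfile.$$
```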

wait is a bash builtin and is exactly what you want here; it blocks on the child directly (via the waitpid system call) rather than polling. But since Inian beat me to it, here's a friendly function for you anyway, in case you want to get a few things running in parallel:
alert_when_finished () {
    declare cmd="${*}";   # "${*}" joins all arguments; "${#}" would be the argument count
    ${cmd} &
    declare pid="${!}";
    while [[ -d "/proc/${pid}/" ]]; do sleep 0.5; done; # poll until the process exits (Linux-specific; wait "$pid" also works)
    echo "[${pid}] Finished running: ${cmd}";
}
Running a command like this will give the desired effect and suppress unneeded job output:
( alert_when_finished 'sleep 5' & )
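For example, to get a few things running in parallel with a per-job notification, you can background several invocations and then wait for all of them. A sketch (using wait inside the function instead of the /proc loop, and sleep commands as placeholder jobs):

```shell
#!/bin/bash
alert_when_finished () {
    local cmd="${*}"
    ${cmd} &
    local pid="${!}"
    wait "${pid}"          # block until this particular job exits
    echo "[${pid}] Finished running: ${cmd}"
}

# Launch three jobs in parallel, each with its own watcher.
alert_when_finished 'sleep 2' &
alert_when_finished 'sleep 1' &
alert_when_finished 'sleep 3' &
wait    # block until every watcher has reported
echo "all jobs done"
```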

How to run a shell script as a background process and move on to the next script without waiting for the first to complete

I have the scripts below ready:
1.sh:
echo "Good"
sleep 10
echo "Morning"
2.sh:
echo "Whats"
sleep 30
echo "Up"
script1.sh:
./1.sh &
./2.sh &
script2.sh:
echo "Hello world"
Requirement:
Execute script1.sh and do not wait for it to complete or fail, i.e. let it run in the background. As soon as script1.sh is triggered, immediately execute script2.sh:
./script1.sh
./script2.sh
Challenge:
./script2.sh keeps waiting for ./script1.sh to complete.
Like ./script2.sh, I have a lot of scripts to run one after another, but they should never wait for ./script1.sh to complete.
Thanks,
B.J.
Just as you did inside script1.sh, you should append & after ./script1.sh:
#! /bin/bash
./script1.sh &
./script2.sh
exit 0
This will run script1.sh as a background process and continue in the main shell with script2.sh.
Usually it is good practice not to leave background processes behind (unless they are long-running servers, daemons, etc.). It is better to make the parent script wait for all its children; otherwise you may end up with orphan processes, which use resources and can have unintended consequences (e.g. open files, logging, ...).
Consider
#! /bin/bash
script1.sh &
script2.sh
script3.sh
wait # wait for any backgrounded processes
One immediate advantage is interactive use: pressing Ctrl-C sends SIGINT to the whole process group, stopping the main script along with a still-running script1. If the main script exits before all its background children have terminated, however, they are orphaned and cannot easily be stopped (other than by killing them by PID).
Also, ps/pstree will then show the process hierarchy clearly.

Does Bash support a way of "triggering" an event?

I have a couple of bash scripts running at the same time, and they communicate with each other via trigger files in a folder. So one script will do something, and when it's done it will echo "done" into a file in the variable folder. The second script has a loop, checking every now and then whether there is a "done" in the variable folder; if there is, it executes something.
Does Bash support any better way of doing this? I know about export name=value, but in practice that does pretty much the same as what I'm doing now. What I'm really after is a way of pushing information to a Bash script that reacts to it, so that when something is pushed, the script runs a function or similar.
One way to handle inter-process communication is to use signals.
To send a signal to another process you can use the kill command, which identifies the target process by its process ID.
You can save the process ID to a file when the script starts, using the $$ variable.
Here is an example of a script that will catch a signal:
#!/bin/bash
echo $$ > /tmp/pid # Save the pid
function do_stuff {
echo "I am doing stuff"
exit
}
trap do_stuff SIGINT
while true
do
echo "Waiting for a signal"
sleep 1
done
So to send it a signal you can do this:
#!/bin/bash
pid=$(cat /tmp/pid) # Read the pid
kill -s INT $pid

How to wait on all child (and grandchild, etc.) processes spawned by a script

Context:
Users provide me their custom scripts to run. These scripts can be of any sort, e.g. scripts that start multiple GUI programs or backend services, and I have no control over how they are written. They can be of the blocking type, i.e. execution waits until all the child programs (run sequentially) exit:
#example of blocking script
echo "START"
first_program
second_program
echo "DONE"
or of the non-blocking type, i.e. ones that fork child processes in the background and exit, something like:
#example of non-blocking script
echo "START"
first_program &
second_program &
echo "DONE"
What am I trying to achieve?
User-provided scripts can be of either of the above two types, or a mix of both. My job is to run the script, wait until all the processes it started have exited, and then shut down the node. For a blocking script the case is plain simple: get the PID of the script process and wait until ps -ef | grep PID has no more entries. The non-blocking scripts are the ones giving me trouble.
Is there a way I can get the list of PIDs of all the child processes spawned by the execution of a script? Any pointers or hints would be highly appreciated.
You can use wait to wait for all the background processes started by userscript to complete. Since wait only works on children of the current shell, you'll need to source their script instead of running it as a separate process.
( source userscript; wait )
Sourcing the script in an explicit subshell should simulate starting a new process closely enough. If not, you can also background the subshell, which forces a new process to be started, then wait for it to complete.
( source userscript; wait ) & wait
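A quick way to convince yourself this works: write a non-blocking script that backgrounds a child, run it with source followed by wait, and check that the child has finished by the time the subshell returns. A sketch (the marker-file scheme is just for the demonstration):

```shell
#!/bin/bash
marker=$(mktemp -u)          # path the background child will create
userscript=$(mktemp)
cat > "$userscript" <<EOF
echo "START"
( sleep 1; touch "$marker" ) &
echo "DONE"
EOF

# Run the script and also wait for its backgrounded child.
( source "$userscript"; wait )

[ -e "$marker" ] && echo "background child finished before we got here"
rm -f "$userscript" "$marker"
```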
ps --ppid $PID will list all child processes of the process with $PID.
You can open a file descriptor that gets inherited by other processes, and then wait until it's no longer in use. This is a low overhead method that usually works fine, though it's possible for processes to work around it if they want:
foo=$(mktemp)
( flock -x 5000; theirscript; ) 5000> "$foo"
flock -x 0 < "$foo"
rm "$foo"
echo "The script and its subprocesses are done"
You can follow all invoked processes using ptrace, such as with strace. This is easier, but has some associated overhead and may not work when scripts invoke suid binaries:
strace -f -e none theirscript
You can use pgrep -P <parent_pid> to get a list of child processes. Example:
IFS=$'\n' read -ra CHILD_PROCS -d '' < <(exec pgrep -P "$1")
And to get the grand-children, simply do the same procedure on each child process.
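That recursion can be sketched as a small function (the name descendants is mine, not a standard utility):

```shell
#!/bin/bash
# Recursively print the PIDs of all descendants of a given process.
descendants () {
    local child
    for child in $(pgrep -P "$1"); do
        echo "$child"
        descendants "$child"
    done
}

sleep 2 &        # create a direct child to demonstrate with
descendants $$   # prints the sleep's PID (and any other children)
```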
Check out my blog Bash functions to list and kill or send signals to process trees.
You can use one of those functions to properly list all processes spawned under one process; each has its own method and order for sending signals to the processes.
The only limitation is that the processes still have to be connected and not orphaned. If you can somehow find a way to group your processes, that might be your solution.
To simply answer the question that was asked: you could store the process ID of each script you call in the same variable:
echo "START"
first_program &
child_process_ids+="$! "
second_program &
child_process_ids+="$! "
echo $child_process_ids
echo "DONE"
$child_process_ids is just a space-delimited string of process IDs. That answers the question as asked; however, what I would do is slightly different: call each script from a for loop, store its process ID, then wait on each one in a second loop and inspect each exit code individually. Using the same example, here's what that would look like:
echo "START"
scripts="first_program second_program"
for script in $scripts; do
    # Call the script and send it to the background
    ./$script &
    # Store the PID of the script just sent to the background
    child_process_ids+="$! "
done
for child_process_id in $child_process_ids; do
    # Pass each PID to wait to retrieve its exit code, stored in $rc
    wait $child_process_id
    rc=$?
    # Inspect each process's exit code
    if [ $rc -ne 0 ]; then
        echo "$child_process_id failed with an exit code of $rc"
    else
        echo "$child_process_id was successful"
    fi
done

Start and monitor a process inside a shell script for completion

I have a simple shell script, shown below:
#!/usr/bin/sh
echo "starting the process which is a c++ process which does some database action for around 30 minutes"
#the process below should be run in the background
<binary name> <arg1> <arg2>
exit
Now what I want is to monitor and display the status information of the process.
I don't want to go deep into its functionality. Since I know the process completes in about 30 minutes, I want to show the user that 3.3% is completed for every minute that passes, check whether the process is still running in the background, and finally display that it has completed.
Could anybody please help me?
The best thing you could do is put some kind of instrumentation in your application and let it report actual progress in terms of work items processed versus the total amount of work. Failing that, you can indeed fall back on the time the process has been running.
Here's a sample of what I've used in the past. Works in ksh93 and bash.
#! /bin/ksh
set -u
prog_under_test="sleep"
args_for_prog=30
max=30 interval=1 n=0
main() {
    ( $prog_under_test $args_for_prog ) & pid=$! t0=$SECONDS
    while is_running $pid; do
        sleep $interval
        (( delta_t = SECONDS - t0 ))
        (( percent = 100 * delta_t / max ))
        report_progress $percent
    done
    echo
}
is_running() { ( kill -0 ${1:?is_running: missing process ID} ) 2>&-; }
function report_progress { typeset percent=$1
    printf "\r%5.1f %% complete (est.) " $(( percent ))
}
main
If your process involves a pipe, then pv (http://www.ivarch.com/programs/quickref/pv.shtml) would be an excellent solution; an alternative is clpbar (http://clpbar.sourceforge.net/). But these are essentially like cat with a progress bar and need something to pipe through them. There is also a small program you could compile and run as a background process, then kill when things finish up (http://www.dreamincode.net/code/snippet3062.htm); it would probably work if you just want to display something for 30 minutes, though you would have to modify it to print "almost done" if your process runs long. It might be better simply to create another shell script that displays a character every few seconds in a loop and checks whether the PID of the previous process is still running; you can get the parent's PID from the $$ variable, then check whether it is still running via /proc/PID.
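That last idea, a watcher loop that prints a character while the PID is alive, can be sketched like this (kill -0 is the portable liveness check; checking /proc/PID also works, but only on Linux; the sleep is a stand-in for the 30-minute binary):

```shell
#!/bin/sh
# Print a dot every couple of seconds while the given PID is alive,
# then announce completion.
watch_pid () {
    while kill -0 "$1" 2>/dev/null; do
        printf '.'
        sleep 2
    done
    printf '\nprocess %s has completed\n' "$1"
}

sleep 5 &      # stand-in for: <binary name> <arg1> <arg2>
watch_pid $!
```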
You really should let the command output statistics, but for simplicity's sake you can do something like this to simply increment a counter while your process runs:
#!/bin/sh
cmd &                                  # execute the long-running command
pid=$!                                 # record its pid
i=0
while sleep 60; do
    : $(( i += 1 ))
    e=$( echo $i 3.3 \* p | dc )       # compute percent completed
    printf "%s percent complete\r" "$e"  # report completion
done &                                 # the reporter runs in the background
pid2=$!                                # record the reporter's pid
# Wait for the original command to finish
if wait $pid; then
    echo cmd completed successfully
else
    echo cmd failed
fi
kill $pid2                             # kill the status reporter
