Context:
Users provide me their custom scripts to run. These scripts can be of any sort, such as scripts that start multiple GUI programs or backend services. I have no control over how the scripts are written. These scripts can be of blocking type, i.e. execution waits till all the child processes (programs that are run sequentially) exit:
# example of blocking script
echo "START"
first_program
second_program
echo "DONE"
or of non-blocking type, i.e. ones that fork child processes in the background and exit, something like:
# example of non-blocking script
echo "START"
first_program &
second_program &
echo "DONE"
What am I trying to achieve?
User-provided scripts can be of either of the above two types, or a mix of both. My job is to run the script, wait till all the processes started by it exit, and then shut down the node. If it's of blocking type, the case is plain simple: get the PID of the script's process and wait till ps -ef | grep PID has no more entries. Non-blocking scripts are the ones giving me trouble.
Is there a way I can get a list of the PIDs of all the child processes spawned by the execution of a script? Any pointers or hints will be highly appreciated.
You can use wait to wait for all the background processes started by userscript to complete. Since wait only works on children of the current shell, you'll need to source their script instead of running it as a separate process.
( source userscript; wait )
Sourcing the script in an explicit subshell should simulate starting a new process closely enough. If not, you can also background the subshell, which forces a new process to be started, then wait for it to complete.
( source userscript; wait ) & wait
ps --ppid $PID will list all child processes of the process with PID $PID.
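For example, a minimal sketch (assuming bash 4's mapfile and a procps ps that supports --ppid and -o pid=) to collect those children into an array:
# collect the PIDs of all direct children of $PID into an array
mapfile -t children < <(ps -o pid= --ppid "$PID")
echo "direct children: ${children[*]}"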
You can open a file descriptor that gets inherited by other processes, and then wait until it's no longer in use. This is a low overhead method that usually works fine, though it's possible for processes to work around it if they want:
foo=$(mktemp)
# Run the script with fd 5000 open on the lock file; every process it spawns
# inherits that descriptor, and the lock taken on it, across fork/exec.
( flock -x 5000; theirscript; ) 5000> "$foo"
# This flock blocks until no process holds fd 5000 open any more, i.e. until
# the script and all of its descendants have exited and released the lock.
flock -x 0 < "$foo"
rm "$foo"
echo "The script and its subprocesses are done"
You can follow all invoked processes using ptrace, such as with strace. This is easier, but has some associated overhead and may not work when scripts invoke suid binaries:
strace -f -e none theirscript
You can use pgrep -P <parent_pid> to get a list of child processes. Example:
# read the direct children of $1 into the CHILD_PROCS array, one PID per line
IFS=$'\n' read -ra CHILD_PROCS -d '' < <(exec pgrep -P "$1")
And to get the grand-children, simply do the same procedure on each child process.
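To collect the whole tree, a small recursive sketch (the function name and the recursion are my own illustration, not part of the original answer):
# recursively print a PID and all of its descendants
list_descendants() {
    local child
    for child in $(pgrep -P "$1"); do
        list_descendants "$child"
    done
    echo "$1"
}
list_descendants "$script_pid"    # $script_pid: PID of the user script (assumed variable)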
Check out my blog post, Bash functions to list and kill or send signals to process trees.
You can use one of those functions to properly list all processes spawned under one process. Each has its own method or order of sending signals to processes.
The only limitation is that the processes still have to be connected and not orphaned. If you could somehow find a way to group your processes, then that might be your solution.
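As a rough sketch of such grouping (my own illustration, not from the blog post): enable job control so the background job gets its own process group, then poll that group until it is empty:
set -m                       # job control: the background job becomes its own process group
./userscript &
pgid=$!                      # the job leader's PID doubles as the group's PGID
while pgrep -g "$pgid" > /dev/null; do    # any process left in the group?
    sleep 1
done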
To simply answer the question that was asked: you could store the process ID of each script you're calling in the same variable:
echo "START"
first_program &
child_process_ids+="$! "
second_program &
child_process_ids+="$! "
echo $child_process_ids
echo "DONE"
$child_process_ids would just be a space-delimited string of process IDs. Now, this answers the question asked; however, what I would do would be a bit different. I would call each script from a for loop and store its process ID, then wait on each one in another for loop and inspect each exit code individually. Using the same example, here's what it would look like:
echo "START"
scripts="first_program second_program"
for script in $scripts; do
    # Call the script and send it to the background
    ./$script &
    # Store the process ID of the script that was just backgrounded
    child_process_ids+="$! "
done
for child_process_id in $child_process_ids; do
    # Pass each process ID into the wait command to retrieve its exit
    # code and store it in $rc
    wait $child_process_id
    rc=$?
    # Inspect each process's exit code
    if [ $rc -ne 0 ]; then
        echo "$child_process_id failed with an exit code of $rc"
    else
        echo "$child_process_id was successful"
    fi
done
Related
I have the below scripts ready with me:
1.sh:
echo "Good"
sleep 10
echo "Morning"
2.sh:
echo "Whats"
sleep 30
echo "Up"
script1.sh:
./1.sh &
./2.sh &
script2.sh:
echo "Hello world"
Requirement:
Execute script1.sh and do not wait for its completion or failure, i.e., let the script run in the background. As soon as script1.sh is triggered, execute script2.sh the very next second.
./script1.sh
./script2.sh
Challenge:
./script2.sh keeps on waiting for the completion of ./script1.sh.
Like ./script2.sh, I have a lot of scripts to be run one after another, but they should never wait for the completion of ./script1.sh.
Thanks,
B.J.
Just as you did inside script1.sh, you should append & after ./script1.sh:
#! /bin/bash
./script1.sh &
./script2.sh
exit 0
This will run script1.sh as a background process and continue in the main thread with script2.sh.
Usually, it is good practice not to leave background processes behind (unless they are long-running servers, daemons, etc.). Better to make the parent script wait for all of its children. Otherwise, you might end up with a lot of orphan processes, which may use resources and have unintended consequences (e.g., open files, logging, ...).
Consider
#! /bin/bash
script1.sh &
script2.sh
script3.sh
wait # wait for any backgrounded processes
One immediate advantage is that killing the main script (e.g., with Ctrl-C, which signals the entire foreground process group) will also kill running script1 and script2. If for some reason the main script exits before all background children have terminated, they cannot be easily stopped (other than killing them by PID).
Also, using ps/pstree will show the system status in a clear way.
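For instance (a minimal illustration; pstree ships with the psmisc package on most Linux systems):
pstree -p $$    # show this script and its children as a tree, with PIDs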
I have a program I want to start. Let's say this program will run a while(true) loop (so it does not terminate). I want to write a bash script which:
Starts the program (./endlessloop &)
Waits 1 second (sleep 1)
Kills the program --> How?
I cannot use $! to get the PID from the child because the server is running a lot of instances concurrently.
Store the PID:
./endlessloop & endlessloop_pid=$!
sleep 1
kill "$endlessloop_pid"
You can also check whether the process is still running with kill -0:
if kill -0 "$endlessloop_pid"; then
echo "Endlessloop is still running"
fi
...and storing the content in a variable means it scales to multiple processes:
endlessloop_pids=( ) # initialize an empty array to store PIDs
./endlessloop & endlessloop_pids+=( "$!" ) # start one in background and store its PID
./endlessloop & endlessloop_pids+=( "$!" ) # start another and store its PID also
kill "${endlessloop_pids[#]}" # kill both endlessloop instances started above
See also BashFAQ #68, "How do I run a command, and have it abort (timeout) after N seconds?"
The ProcessManagement page on the Wooledge wiki also discusses relevant best practices.
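If the whole point is just start, wait N seconds, kill, the coreutils timeout utility covers it in one line (assuming GNU coreutils is available):
timeout 1 ./endlessloop    # run endlessloop and send it SIGTERM after 1 second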
You can use the pgrep command for the same:
kill $(pgrep endlessloop)
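Note that this matches by process name, so it kills every running endlessloop instance, not just the one your script started. pkill combines the lookup and the kill into one step:
pkill endlessloop    # send SIGTERM to all processes named endlessloop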
I've seen a few examples out there but have not been able to adapt them to my situation.
I have a script that calls a long-running command, but I want to periodically (say, every 1s) get the status of that call. For example:
#!/bin/bash
curl 'localhost:9200/my_index/_forcemerge?max_num_segments=2' &
while [ command is running ]; do
    # the URL must be quoted: an unquoted & would background the curl itself
    curl -XGET 'localhost:9200/_cat/shards/my_index?v&h=index,shard,prirep,segments.count'
    sleep 1
done
echo "finished!"
Is it possible to get the status of the child process in this way?
Edit: Clarifying what I'm actually doing. It's two curl commands to an Elasticsearch cluster. The long-running command merges data segments together; the "status" command gets the current segment count.
I think that the safest way of doing this is to save the process ID of the child process and then periodically check to see if this is still running:
#!/bin/bash
mycommand &
child_pid=$!
while kill -0 $child_pid >/dev/null 2>&1; do
echo "Child process is still running"
sleep 1
done
echo "Child process has finished"
The variable $! holds the process ID of the last process started in the background.
kill -0 does not actually send a signal to the process; it only makes kill return with a zero exit status if the given process ID exists and belongs to the user executing kill.
One could come up with a solution using pgrep too, but that will probably be a bit more "unsafe", in the sense that care must be taken not to catch any similar running processes.
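Such a pgrep variant might look like this (a sketch; the -f pattern is my own illustration and must be specific enough not to match unrelated processes):
# poll until no command line matching the pattern remains
while pgrep -f 'my_index/_forcemerge' > /dev/null; do
    sleep 1
done
echo "finished!"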
I have a couple of bash scripts running at the same time, and they communicate with each other by saving trigger variables in a folder. So one script will do something, and when it's done it will echo "done" > variablefolder. The second script has a loop, checking every now and then whether there is a "done" in the variable folder. If there is, the script executes something.
Does Bash support any better way of doing this? I know about export name=value, but in practice that does pretty much the same as what I'm doing now. I'm thinking, is there any way of pushing information to a Bash script so that it reacts to it? So that when something is pushed to it, the Bash script will run a function, or something?
One way to handle inter-process communication is to use signals.
To send a signal to another process you can use the kill command.
The kill command uses the process ID to identify the process.
You can save the process ID to a file after the script starts, using the $$ variable.
Here is an example of a script that will catch a signal:
#!/bin/bash
echo $$ > /tmp/pid # Save the pid
function do_stuff {
echo "I am doing stuff"
exit
}
trap do_stuff SIGINT
while true
do
echo "Waiting for a signal"
sleep 1
done
So to send it a signal you can do this:
#!/bin/bash
pid=$(cat /tmp/pid) # Read the pid
kill -s INT $pid