How to run a shell script as a background process and move on to the next script without waiting for the first to complete - bash

I have the below scripts ready with me:
1.sh:
echo "Good"
sleep 10
echo "Morning"
2.sh:
echo "Whats"
sleep 30
echo "Up"
script1.sh:
./1.sh &
./2.sh &
script2.sh:
echo "Hello world"
Requirement:
Execute script1.sh and do not wait for its completion or failure, i.e., let the script run in the background. As soon as script1.sh is triggered, execute script2.sh the very next second:
./script1.sh
./script2.sh
Challenge:
./script2.sh keeps on waiting for ./script1.sh to complete.
Like ./script2.sh, I have a lot of scripts to be run one after another, but they should never wait for ./script1.sh to complete.
Thanks,
B.J.

Just as you did in 1.sh, you should append & after script1.sh:
#! /bin/bash
./script1.sh &
./script2.sh
exit 0
This will run script1.sh as a background process and continue in the main script with script2.sh.

Usually, it is good practice not to leave background processes behind (unless they are long-running servers, daemons, etc.). It is better to make the parent script wait for all of its children. Otherwise, you might end up with a lot of orphan processes, which may consume resources and have unintended consequences (e.g., open files, logging, ...).
Consider
#! /bin/bash
script1.sh &
script2.sh
script3.sh
wait # wait for any backgrounded processes
One immediate advantage is that killing the main script will also kill the running script1 and script2. If for some reason the main script exits before all of its background children have terminated, they cannot easily be stopped (other than by killing them by PID).
Also, ps/pstree will then show the system status in a clear way.
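For example, while the main script is running, the whole tree is visible under one parent (a quick check; <pid> is the main script's PID):
pstree -p <pid>    # the script's tree: backgrounded script1.sh plus the current foreground child
ps --ppid <pid>    # direct children only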


Trying to close all child processes when I interrupt my bash script

I have written a bash script to carry out some tests on my system. The tests run in the background and in parallel. The tests can take a long time and sometimes I may wish to abort the tests part way through.
If I Control+C then it aborts the parent script, but leaves the various children running. I wish to make it so that I can hit Control+C or otherwise quit and then kill all child processes running in the background. I have a bit of code that does the job if I'm running the background jobs directly from the terminal, but it doesn't work in my script.
I have a minimal working example.
I have tried using trap in combination with pgrep -P $$.
#!/bin/bash
trap 'kill -n 2 $(pgrep -P $$)' 2
sleep 10 &
wait
I was hoping that hitting Control+C (SIGINT) would kill everything that the script started, but it actually says:
./breakTest.sh: line 1: kill: (3220) - No such process
This number changes, but doesn't seem to apply to any running processes, so I don't know where it is coming from.
I guess that if the contents of the trap command were evaluated where the trap command occurs, that might explain the outcome. The 3220 PID might be for pgrep itself.
I'd appreciate some insight here
Thanks
I have found a solution using pkill. This example also deals with many child processes.
#!/bin/bash
trap 'pkill -P $$' SIGINT SIGTERM
for i in {1..10}; do
sleep 10 &
done
wait
This appears to kill all the child processes elegantly, though I don't properly understand what the issue was with my original code, apart from it now sending the correct signal.
In bash, whenever you use & after a command, it places that command as a background job (these background jobs are called job specs), numbered incrementally until you exit that terminal session. You can use the jobs command to get the list of running background jobs. To work with these jobs you have to use % with the job ID. The jobs command also accepts other options, such as jobs -p to see the process IDs of all jobs, and jobs -p %JOB_SPEC to see the process ID of that particular job.
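For example, in an interactive shell:
sleep 100 &     # job 1
sleep 200 &     # job 2
jobs            # list all background jobs with their job specs
jobs -p         # process IDs of all jobs
jobs -p %1      # process ID of job 1 only
Applying that to the question, the trap can reference the job spec directly: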
#!/usr/bin/env bash
trap 'kill -9 %1' 2
sleep 10 &
wait
or
#!/usr/bin/env bash
trap 'kill -9 $(jobs -p %1)' 2
sleep 10 &
wait
I implemented something like this a few years back; you can take a look at it: async bash.
You can try something like the following:
pkill -TERM -P <your_parent_id_here>
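For instance, from inside the parent script itself:
pkill -TERM -P $$    # signal all direct children of this script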

WAIT for "1 of many processes" to finish

Is there any built-in feature in bash to wait for 1 out of many processes to finish, and then kill the remaining processes?
pids=""
# Run five concurrent processes
for i in {1..5}; do
( longprocess ) &
# store PID of process
pids+=" $!"
done
if [ "one of them finished" ]; then
kill_rest_of_them;
fi
I'm looking for the "one of them finished" command. Is there one?
Bash 4.3 added a -n flag to the built-in wait command, which causes the script to wait for the next child to complete. The -p option to jobs also means you don't need to store the list of PIDs, as long as there aren't any background jobs that you don't want to wait on.
# Run five concurrent processes
for i in {1..5}; do
( longprocess ) &
done
wait -n
kill $(jobs -p)
Note that if there is another background job besides the 5 long processes and it completes first, wait -n will return when it completes. In that case you would still want to save the list of process IDs to kill, rather than killing whatever jobs -p returns, as sketched below.
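A sketch of that variant (longprocess standing in for the real workload):
pids=()
# Run five concurrent processes, remembering each PID
for i in {1..5}; do
( longprocess ) &
pids+=("$!")
done
wait -n                          # returns as soon as the first one finishes
kill "${pids[@]}" 2>/dev/null    # kill the rest; already-finished PIDs are ignored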
It's actually fairly easy:
#!/bin/bash
set -o monitor
killAll()
{
# code to kill all child processes
}
# call function to kill all children on SIGCHLD from the first one
trap killAll SIGCHLD
# start your child processes here
# now wait for them to finish
wait
You just have to be really careful in your script to use only bash built-in commands. You can't start any utility that runs as a separate process after you issue the trap command: any child process exiting will send SIGCHLD, and you can't tell where it came from.
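A minimal sketch of killAll under those constraints (an assumption, not spelled out in the original answer): clear the trap first so the subshell forked by the command substitution doesn't re-trigger the handler, then signal whatever jobs remain, using only builtins:
killAll()
{
trap '' SIGCHLD                # don't re-enter the handler
kill $(jobs -p) 2>/dev/null    # signal all remaining background jobs
}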

How to capture a process ID and also add a trigger when that process finishes in a bash script?

I am trying to make a bash script that starts a jar file in the background. For that reason I'm using nohup. Right now I can capture the PID of the java process, but I also need to be able to execute a command when the process finishes.
This is how I started:
nohup java -jar jarfile.jar & echo $! > conf/pid
I also know from this answer that using ; will make a command execute after the first one finishes.
nohup java -jar jarfile.jar; echo "done"
echo "done" is just an example. My problem now is that I don't know how to combine them both. If I run echo $! first then echo "done" executes immediately. While if echo "done" goes first then echo $! will capture the PID of echo "done" instead of the one of the jarfile.
I know that I could achieve the desire functionality by polling until I don't see the PID running anymore. But I would like to avoid that as much as possible.
You can use the bash built-in wait once you start the process using nohup:
nohup java -jar jarfile.jar &
pid=$! # Get the process ID of the last command sent to the background
wait $pid # Waits until the process mentioned by the pid is complete
echo "Done, execute the new command"
I don't think you're going to get around "polling until you don't see the PID running anymore." wait is a bash builtin; it's what you want, and it does that waiting for you behind the scenes. But since Inian beat me to it, here's a friendly function for you anyway (in case you want to get a few things running in parallel).
alert_when_finished () {
declare cmd="${#}";
${cmd} &
declare pid="${!}";
while [[ -d "/proc/${pid}/" ]]; do :; done; # busy-wait until the process disappears (equivalent to wait)
echo "[${pid}] Finished running: ${cmd}";
}
Running a command like this will give the desired effect and suppress unneeded job output:
( alert_when_finished 'sleep 5' & )

How to wait on all child (and grandchild, etc.) processes spawned by a script

Context:
Users provide me their custom scripts to run. These scripts can be of any sort, like scripts that start multiple GUI programs or backend services. I have no control over how the scripts are written. These scripts can be of the blocking type, i.e. execution waits till all the child processes (programs that are run sequentially) exit:
#example of a blocking script
echo "START"
first_program
second_program
echo "DONE"
or of the non-blocking type, i.e. ones that fork child processes in the background and exit, something like:
#example of a non-blocking script
echo "START"
first_program &
second_program &
echo "DONE"
What am I trying to achieve?
User-provided scripts can be either of the above two types, or a mix of both. My job is to run the script, wait till all the processes started by it exit, and then shut down the node. If it's of the blocking type, the case is plain simple: get the PID of the script's process and wait till ps -ef | grep PID has no more entries. Non-blocking scripts are the ones giving me trouble.
Is there a way I can get the list of PIDs of all the child processes spawned by the execution of a script? Any pointers or hints will be highly appreciated.
You can use wait to wait for all the background processes started by userscript to complete. Since wait only works on children of the current shell, you'll need to source their script instead of running it as a separate process.
( source userscript; wait )
Sourcing the script in an explicit subshell should simulate starting a new process closely enough. If not, you can also background the subshell, which forces a new process to be started, then wait for it to complete.
( source userscript; wait ) & wait
ps --ppid $PID will list all child processes of the process with $PID.
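For example, using the current shell's own PID:
ps --ppid $$ -o pid,cmd    # direct children of the current shell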
You can open a file descriptor that gets inherited by other processes, and then wait until it's no longer in use. This is a low overhead method that usually works fine, though it's possible for processes to work around it if they want:
foo=$(mktemp)
( flock -x 5000; theirscript; ) 5000> "$foo"
flock -x 0 < "$foo"
rm "$foo"
echo "The script and its subprocesses are done"
You can follow all invoked processes using ptrace, such as with strace. This is easier, but has some associated overhead and may not work when scripts invoke suid binaries:
strace -f -e none theirscript
You can use pgrep -P <parent_pid> to get a list of child processes. Example:
IFS=$'\n' read -ra CHILD_PROCS -d '' < <(exec pgrep -P "$1")
And to get the grandchildren, simply do the same procedure on each child process, as sketched below.
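A recursive sketch of that procedure:
list_descendants() {
local child
for child in $(pgrep -P "$1"); do
echo "$child"
list_descendants "$child"    # recurse into grandchildren
done
}
list_descendants "$$"    # every descendant of the current shell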
Check out my blog Bash functions to list and kill or send signals to process trees.
You can use one of those functions to properly list all processes spawned under one process. Each has its own method or order of sending signals to processes.
The only limitation is that the processes still have to be connected and not orphaned. If you can somehow find a way to group your processes, then that might be your solution; see the sketch below.
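One way to group them (a sketch, assuming the setsid utility is available) is to start the user script in its own session, so its descendants share a process group that can be signalled as a whole:
setsid userscript &    # run in a new session / process group
pgid=$!                # in a non-interactive script, this equals the new process group ID
# ... later, signal the entire group at once:
kill -TERM -- "-$pgid"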
To simply answer the question that was asked: you could store the process ID of each program you're calling in the same variable:
echo "START"
first_program &
child_process_ids+="$! "
second_program &
child_process_ids+="$! "
echo $child_process_ids
echo "DONE"
$child_process_ids would just be a space-delimited string of process IDs. Now, this answers the question asked; however, what I would do is a bit different. I would call each script from a for loop, store its process ID, then wait on each one in another for loop and inspect each exit code individually. Using the same example, here's what it would look like:
echo "START"
scripts="first_program second_program"
for script in $scripts; do
#Call script and send to background
./$script &
#Store the script's processID that was just sent to the background
child_process_ids+="$! "
done
for child_process_id in $child_process_ids; do
#Pass each processId into the wait command to retrieve its exit
#code and store it in $rc
wait $child_process_id
rc=$?
#Inspect each process's exit code
if [ $rc -ne 0 ]; then
echo "$child_process_id failed with an exit code of $rc"
else
echo "$child_process_id was successful"
fi
done
