I would like to start two C programs from a bash script in parallel, and have the second one stop when the first one has finished.
The wait command waits for both processes to stop, which is not what I would like to do.
Thanks for any suggestion.
GNU parallel can do this kind of job. Check the termination section of its documentation: it can shut down the remaining processes based on the exit code (either success or failure):
parallel -j2 --halt now,success=1 ::: 'cmd1 args' 'cmd2 args'
When one of the jobs finishes successfully, it sends the TERM signal to the other jobs (and falls back to the KILL signal if they do not terminate).
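If you want the first job that finishes to stop the others regardless of its exit status, newer versions of GNU parallel also accept done as the halt condition (a sketch; check the --halt section of your installed version's man page):

parallel -j2 --halt now,done=1 ::: 'cmd1 args' 'cmd2 args'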
With $! you get the PID of the most recently started background command. See some nice examples here: Bash `wait` command, waiting for more than 1 PID to finish execution
For your particular problem I imagine something like:
#!/bin/bash
command_master() {
echo -e "Command_master"
sleep 1
}
command_tokill() {
echo -e "Command_tokill"
sleep 10
}
command_master & pid_master=$!
command_tokill & pid_tokill=$!
wait "$pid_master"
kill "$pid_tokill"
wait -n is what you are looking for. It waits for the next job to finish. You can then have a list of the PIDs of the remaining jobs with jobs -p if you want to kill them.
prog1 & pids=( $! )
prog2 & pids+=( $! )
wait -n
kill "${pids[#]}"
This requires bash.
The two programs are started as background jobs, and the shell waits for one of them to exit.
When this happens, kill is used to terminate both processes (this will cause an error since one of them is already dead).
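If the error message bothers you, a minimal variation of the snippet above simply discards it:

prog1 & pids=( $! )
prog2 & pids+=( $! )
wait -n                        # returns as soon as one of the jobs exits
kill "${pids[@]}" 2>/dev/null  # the complaint about the already-finished PID is discarded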
I have the scripts below ready with me:
1.sh:
echo "Good"
sleep 10
echo "Morning"
2.sh:
echo "Whats"
sleep 30
echo "Up"
script1.sh:
sh 1.sh &
sh 2.sh &
script2.sh:
echo "Hello world"
Requirement:
Execute script1.sh and do not wait for its completion or failure, i.e., let the script run in the background. As soon as script1.sh is triggered, execute script2.sh the very next second.
./script1.sh
./script2.sh
Challenge:
./script2.sh keeps on waiting for the completion of ./script1.sh.
Like ./script2.sh, I have a lot of scripts to be run one after another, but they should never wait for the completion of ./script1.sh.
Thanks,
B.J.
Just as you did in script1.sh, you should append & after ./script1.sh:
#! /bin/bash
./script1.sh &
./script2.sh
exit 0
This runs script1.sh as a background process and continues with script2.sh in the main shell.
Usually, it is good practice not to leave background processes behind (unless they are long-running servers, daemons, etc.). It is better to make the parent script wait for all of its children. Otherwise, you might end up with a lot of orphan processes, which may use resources and have unintended consequences (e.g., open files, logging, ...).
Consider
#! /bin/bash
script1.sh &
script2.sh
script3.sh
wait # wait for all backgrounded processes
One immediate advantage is that killing the main script will also kill running script1 and script2. If for some reason the main script exits before all background children have terminated, they cannot be easily stopped (other than killing them by PID).
Also, using ps/pstree will show the system status in a clear way.
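For example, while the background children are still running, something like this shows them grouped under the parent script (a quick illustration; output will vary):

pstree -p $$   # print the parent script and its still-running children as a tree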
I have written a bash script to carry out some tests on my system. The tests run in the background and in parallel. The tests can take a long time and sometimes I may wish to abort the tests part way through.
If I press Control+C it aborts the parent script, but leaves the various children running. I wish to make it so that I can hit Control+C (or use some other mechanism) to quit and then kill all child processes running in the background. I have a bit of code that does the job if I'm running the background jobs directly from the terminal, but it doesn't work in my script.
I have a minimal working example.
I have tried using trap in combination with pgrep -P $$.
#!/bin/bash
trap 'kill -n 2 $(pgrep -P $$)' 2
sleep 10 &
wait
I was hoping that hitting Control+C (SIGINT) would kill everything that the script started, but it actually says:
./breakTest.sh: line 1: kill: (3220) - No such process
This number changes, but doesn't seem to apply to any running processes, so I don't know where it is coming from.
I guess if the contents of the trap command were evaluated where the trap command occurs, that might explain the outcome. The 3220 PID might belong to pgrep itself.
I'd appreciate some insight here
Thanks
I have found a solution using pkill. This example also deals with many child processes.
#!/bin/bash
trap 'pkill -P $$' SIGINT SIGTERM
for i in {1..10}; do
sleep 10 &
done
wait
This appears to kill all the child processes elegantly. Though I don't properly understand what the issue was with my original code, apart from sending the correct signal.
In bash, whenever you use & after a command it places that command as a background job (these background jobs are called job specs), numbered incrementally until you exit that terminal session. You can use the jobs command to get the list of background jobs running. To work with these jobs you have to use % with the job ID. The jobs command also accepts other options, such as jobs -p to see the process IDs of all jobs, and jobs -p %JOB_SPEC to see the process ID of that particular job.
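For instance (job numbers and PIDs here are illustrative and will differ on your machine):

$ sleep 100 &
[1] 4321
$ jobs
[1]+  Running                 sleep 100 &
$ jobs -p
4321
$ jobs -p %1
4321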
#!/usr/bin/env bash
trap 'kill -9 %1' 2
sleep 10 &
wait
or
#!/usr/bin/env bash
trap 'kill -9 $(jobs -p %1)' 2
sleep 10 &
wait
I implemented something like this a few years back; you can take a look at it: async bash
You can try something like the following:
pkill -TERM -P <your_parent_id_here>
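For example, used from inside the script itself, $$ is the script's own PID, so its direct children can be terminated on Ctrl+C (a sketch along the lines of the question's minimal example):

#!/bin/bash
trap 'pkill -TERM -P $$' INT   # on Ctrl+C, send TERM to every direct child of this script
sleep 10 &
wait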
Is there any built-in feature in bash to wait for 1 out of many processes to finish? And then kill the remaining processes?
pids=""
# Run five concurrent processes
for i in {1..5}; do
( longprocess ) &
# store PID of process
pids+=" $!"
done
if [ "one of them finished" ]; then
kill_rest_of_them;
fi
I'm looking for "one of them finished" command. Is there any?
bash 4.3 added a -n flag to the built-in wait command, which causes the script to wait for the next child to complete. The -p option to jobs also means you don't need to store the list of pids, as long as there aren't any background jobs that you don't want to wait on.
# Run five concurrent processes
for i in {1..5}; do
( longprocess ) &
done
wait -n
kill $(jobs -p)
Note that if there is another background job other than the 5 long processes that completes first, wait -n will exit when it completes. That would also mean you would still want to save the list of process ids to kill, rather than killing whatever jobs -p returns.
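A sketch of that safer variant, killing only the five processes you started (longprocess stands in for your real command):

pids=()
# Run five concurrent processes and remember their PIDs
for i in {1..5}; do
  ( longprocess ) &
  pids+=("$!")
done
wait -n                         # returns when the first job finishes
kill "${pids[@]}" 2>/dev/null   # kill only the five we started; already-dead PIDs are ignored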
It's actually fairly easy:
#!/bin/bash
set -o monitor
killAll()
{
# code to kill all child processes
}
# call function to kill all children on SIGCHLD from the first one
trap killAll SIGCHLD
# start your child processes here
# now wait for them to finish
wait
You just have to be really careful in your script to use only bash built-in commands. You can't start any utilities that run as a separate process after you issue the trap command - any child process exiting will send SIGCHLD - and you can't tell where it came from.
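A minimal sketch of what killAll could look like under those constraints (assuming jobs -p lists exactly the children you started):

killAll()
{
    # send TERM to every remaining background job of this shell;
    # complaints about jobs that already exited are discarded
    kill $(jobs -p) 2>/dev/null
}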
I want to reference a background Bash job in another background Bash job. Is that possible?
For example, say I start a background job:
$ long_running_process &
[1] 12345
Now I want something to happen when that job finishes, so I can use wait:
$ wait %1 && thing_to_happen_after_long_running_process_finishes
However, that will block, and I want my terminal back to do other stuff, but Ctrl+Z does nothing.
Attempting to start this in the background in the first place instead fails:
$ { wait %1 && thing_to_happen_after_long_running_process_finishes; } &
[2] 12346
-bash: line 3: wait: %1: no such job
$ jobs
[1]- Running long_running_process &
[2]+ Exit 127 { wait %1 && thing_to_happen_after_long_running_process_finishes; }
Is there some way to reference one job using wait in another background job?
I see this behaviour using GNU Bash 4.1.2(1)-release.
A shell can only wait on its own children. Since backgrounding a job creates a new shell, a wait in that shell can only wait on its own children, not the children of its parent (i.e., the shell from which the background-wait forked). For what you want, you need to plan ahead:
long_running_process && thing_to_happen_after &
There is one alternative:
long_running_process &
LRP_PID=$!
{ while kill -0 $LRP_PID 2> /dev/null; do sleep 1; done; thing_to_happen_after; } &
This would set up a loop that tries to ping your background process once a second. When the process is complete, the kill will fail, and the loop will move on to the post-process program. It carries the slight risk that your process could exit and another process could be given the same process ID between checks, in which case the kill would become confused and think your process was still running, when in fact it is a new one. But it's a very slight risk, and it might even be acceptable if thing_to_happen_after is delayed a little longer, until there is no process with ID $LRP_PID.
Try something like this:
x2=$(long_running_process && thing_to_happen_after_long_running_process_finishes ) &
Let's say I have a bash script that executes three scripts in parallel
./script1 &
./script2 &
./script3 &
Now, let us say that ./script4 depends on script1, script2 and script3. How can I force it to wait for those, while still executing the three scripts in parallel?
You can use wait, a built-in command available in Bash and in some other shells.
(see equivalent command WAITFOR on Windows)
wait documentation
Wait for each specified process to complete and return its termination
status.
Syntax
wait [n ...]
Key
n A process ID or a job specification
Each n can be a process ID or a job specification; if a job
specification is given, all processes in that job's pipeline are
waited for.
If n is not given, all currently active child processes are waited
for, and the return status is zero.
If n specifies a non-existent process or job, the return status is
127. Otherwise, the return status is the exit status of the last process or job waited for.
Simple solution
Below, wait waits indefinitely for all currently active child processes to end (i.e. in this case the three scripts).
./script1 &
./script2 &
./script3 &
wait # waits for all child processes
./script4
Store the PIDs in shell local variables
./script1 & pid1=$!
./script2 & pid2=$!
./script3 & pid3=$!
wait $pid1 $pid2 $pid3 # waits for 3 PIDs
./script4
Store the PIDs in temporary files
./script1 & echo $! >1.pid
./script2 & echo $! >2.pid
./script3 & echo $! >3.pid
wait $(<1.pid) $(<2.pid) $(<3.pid)
rm 1.pid 2.pid 3.pid # clean up
./script4
This last solution pollutes the current directory with three files (1.pid, 2.pid and 3.pid). One of these files may be corrupted before the wait call. Moreover, these files could be left behind in the filesystem in case of a crash.
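One way to reduce those risks is to keep the PID files in a private temporary directory and remove it when the script exits; a sketch, assuming mktemp is available:

pid_dir=$(mktemp -d)              # private directory for the PID files
trap 'rm -rf "$pid_dir"' EXIT     # cleaned up when the script exits

./script1 & echo $! >"$pid_dir/1.pid"
./script2 & echo $! >"$pid_dir/2.pid"
./script3 & echo $! >"$pid_dir/3.pid"
wait $(<"$pid_dir/1.pid") $(<"$pid_dir/2.pid") $(<"$pid_dir/3.pid")
./script4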
From the bash man page:
wait [n ...]
Wait for each specified process and return its termination status.
Each `n` may be a process ID or a job specification.... If `n` is not
given, all currently active child processes are waited for, and the return
status is zero.
The easiest implementation might be for your last script to start the others. That way it's easy for it to store their PIDs and pass them to wait.
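A sketch of that approach, assuming script4 is free to launch its prerequisites itself:

#!/bin/bash
# hypothetical version of script4 that starts its own prerequisites
./script1 & pid1=$!
./script2 & pid2=$!
./script3 & pid3=$!
wait "$pid1" "$pid2" "$pid3"   # block until all three have finished
# ... script4's real work starts here ...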
I whipped up something quickly years ago, but now I wanted nested parallelism. This is what I came up with:
# Run each supplied argument as a bash command, inheriting calling environment.
# bash_parallel calls can be nested, though escaping quotes can be tricky -- define a helper function for such cases.
# Example: bash_parallel "sleep 10" "ls -altrc"
function bash_parallel
{
(
i=0
unset BASH_PARALLEL_PIDS # Do not inherit BASH_PARALLEL_PIDS from parent bash_parallel (if any)
for cmd in "$@"
do
($cmd) & # In subshell, so sibling bash_parallel's won't interfere
BASH_PARALLEL_PIDS[$i]=$!
echo "bash_parallel started PID ${BASH_PARALLEL_PIDS[$i]}: $cmd"
i=$(($i + 1))
done
echo "bash_parallel waiting for PIDs: ${BASH_PARALLEL_PIDS[@]}"
wait ${BASH_PARALLEL_PIDS[@]}
) # In subshell, so ctrl-c will kill still-running children.
}
Use:
eisbaw@leno:~$ time (bash_parallel "sleep 10" "sleep 5")
bash_parallel started PID 30183: sleep 10
bash_parallel started PID 30184: sleep 5
bash_parallel waiting for PIDs: 30183 30184
real 0m10.007s
user 0m0.000s
sys 0m0.004s
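Since the arguments are word-split before execution, quoting a nested call directly is awkward; a helper function (here a hypothetical inner_pair) keeps it readable:

function inner_pair { bash_parallel "sleep 2" "sleep 3"; }

time (bash_parallel "inner_pair" "sleep 5")   # the inner pair runs in parallel with the 5-second sleep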