Run shell command in parallel and wait for result - bash

I want to run command_a and command_b in parallel, and wait for both of them to finish before starting another command_c. Is there a simple command/idiom in shell that allows me to do that?

You can simply do:
$ command_a &
$ command_b &
$ wait
(the ampersand puts the shell job in the background)
From https://ss64.com/bash/wait.html
If n is not given, all currently active child processes are waited
for, and the return status is zero.
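Putting it together, a minimal sketch of the whole idiom, using the placeholder names command_a, command_b and command_c from the question:
#!/bin/bash
command_a &    # first job, backgrounded
command_b &    # second job, backgrounded
wait           # blocks until both background jobs have exited
command_c      # runs only after command_a and command_b are both done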

Bash files: run process in parallel and stop when one is over

I would like to start two C programs from a bash script in parallel, and have the second one stop as soon as the first one finishes.
The wait builtin waits for both processes to stop, which is not what I want to do.
Thanks for any suggestion.
GNU parallel can do this kind of job. Check the termination section of its manual: it can shut down the remaining processes based on the exit code (either success or failure):
parallel -j2 --halt now,success=1 ::: 'cmd1 args' 'cmd2 args'
When one of the jobs finishes successfully, parallel sends the TERM signal to the other jobs (and falls back to the KILL signal if they do not terminate).
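A quick way to see the behaviour (a sketch, assuming GNU parallel is installed; sleep stands in for the two real programs):
parallel -j2 --halt now,success=1 ::: 'sleep 1' 'sleep 10'
This returns after roughly one second: as soon as sleep 1 succeeds, the sleep 10 job is terminated.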
With $! you get the PID of the most recently backgrounded command. See some nice examples here: Bash `wait` command, waiting for more than 1 PID to finish execution
For your particular problem I imagine something like:
#!/bin/bash

command_master() {
    echo "Command_master"
    sleep 1
}

command_tokill() {
    echo "Command_tokill"
    sleep 10
}

command_master & pid_master=$!    # $! is the PID of the job just started
command_tokill & pid_tokill=$!

wait "$pid_master"    # return as soon as the master job finishes
kill "$pid_tokill"    # then terminate the other job
wait -n is what you are looking for: it waits for the next job to finish. You can then get the PIDs of any remaining jobs with jobs -p if you want to kill them.
prog1 & pids=( $! )     # remember the first job's PID
prog2 & pids+=( $! )    # and the second's
wait -n                 # returns as soon as either job exits
kill "${pids[@]}"       # terminate whatever is left
This requires bash (wait -n was added in bash 4.3).
The two programs are started as background jobs, and the shell waits for one of them to exit.
When this happens, kill is used to terminate both processes (this will report an error for the one that has already exited; append 2>/dev/null to the kill to silence it).
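A variant of the same idea, using jobs -p instead of collecting the PIDs by hand (a sketch, again assuming bash 4.3 or later):
prog1 &
prog2 &
wait -n                        # returns when whichever job finishes first exits
kill $(jobs -p) 2>/dev/null    # terminate any background jobs still running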

`cat & wait` in a script proceeds immediately

From the command line, typing cat waits for user input.
But in the following script, wait ignores the background process.
#!/bin/bash
cat &
wait
echo "After wait"
This script immediately blasts right past the wait command. How can I make wait actually wait for the cat command to finish? I've tried waiting for the specific PID or job number, but the effect is the same.
That's because cat exits right away: in a script, a background job's standard input is redirected from /dev/null by default, so cat sees end-of-file immediately. Try this instead:
cat <&0 &
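In context, the fixed script would look like this (the <&0 explicitly gives the background job the script's own standard input, overriding the default /dev/null):
#!/bin/bash
cat <&0 &          # background cat now inherits the script's stdin
wait               # actually waits until cat reaches EOF
echo "After wait"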

Referencing Bash jobs in other jobs

I want to reference a background Bash job in another background Bash job. Is that possible?
For example, say I start a background job:
$ long_running_process &
[1] 12345
Now I want something to happen when that job finishes, so I can use wait:
$ wait %1 && thing_to_happen_after_long_running_process_finishes
However, that will block, and I want my terminal back to do other stuff, but Ctrl+Z does nothing.
Attempting to start this in the background in the first place instead fails:
$ { wait %1 && thing_to_happen_after_long_running_process_finishes; } &
[2] 12346
-bash: line 3: wait: %1: no such job
$ jobs
[1]- Running long_running_process &
[2]+ Exit 127 { wait %1 && thing_to_happen_after_long_running_process_finishes; }
Is there some way to reference one job using wait in another background job?
I see this behaviour using GNU Bash 4.1.2(1)-release.
A shell can only wait on its own children. Since backgrounding a job creates a new shell, a wait in that shell can only wait on its own children, not the children of its parent (i.e., the shell from which the background-wait forked). For what you want, you need to plan ahead:
long_running_process && thing_to_happen_after &
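Note that the trailing & backgrounds the whole long_running_process && thing_to_happen_after list as a single job, so the terminal is free immediately and the second command still runs only if the first one succeeds.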
There is one alternative:
long_running_process &
LRP_PID=$!    # remember the background job's PID
{ while kill -0 "$LRP_PID" 2> /dev/null; do sleep 1; done; thing_to_happen_after; } &
This sets up a loop that pings your background process once a second with kill -0 (signal 0 checks whether the process exists without actually signalling it). When the process is gone, the kill fails and the loop moves on to the post-process program. It carries a slight risk: if your process exits and another process is assigned the same PID between checks, kill -0 would succeed and the loop would think your process was still running. But the risk is very small, and in practice it may even be acceptable for thing_to_happen_after to be delayed a little longer, until no process with ID $LRP_PID exists.
Try something like this:
x2=$(long_running_process && thing_to_happen_after_long_running_process_finishes) &
Note that the command substitution runs in a background subshell, so x2 (and the captured output) is only visible inside that subshell, not in your interactive shell.

Is it possible for bash commands to continue before the result of the previous command?

When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
i.e., if you run the following two commands from a bash script, is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
If you do nothing else, commands in a bash script are serialized, so the example above is safe. You can tell bash to run a bunch of commands in parallel and then wait for them all to finish by doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tell bash to run the command in the background. The fourth line, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way you won't see the exit statuses of the child commands (a bare wait always returns zero, and set -e won't catch their failures), so you won't be able to tell in the usual way whether they succeeded or failed.
The bash manual has more information (search for wait, about two-thirds of the way down).
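If you do need the exit statuses, one option (a minimal sketch, assuming bash) is to record each PID with $! and wait on them individually, since wait with an explicit PID returns that command's exit status:
command1 & pid1=$!
command2 & pid2=$!
wait "$pid1"; status1=$?    # exit status of command1
wait "$pid2"; status2=$?    # exit status of command2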
Add '&' at the end of a command to run it in parallel.
However, that would be odd here, because the second command depends on the result of the first one. Either keep the commands sequential, or copy to b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
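If anything later in the script needs both copies to exist, follow the pair with a bare wait:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
wait    # both copies are complete past this point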
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
In general, unless explicitly sent to the background or forking themselves off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write two scripts and run them in separate processes so that they execute simultaneously. One caveat if they touch the same files: on Linux you won't normally get an access error when one process writes a file that another is reading, but you may see partial or inconsistent data.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
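As a minimal illustration (a sketch): commands grouped in parentheses run in a subshell, and the whole group can be backgrounded like any other command:
( sleep 2; echo "subshell done" ) &    # the group runs in a background subshell
echo "the parent shell keeps going"
wait                                   # wait for the subshell to finish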
