I've seen many questions about parallelizing bash scripts, but so far I haven't found one that answers my question.
I have a bash script that runs two Python scripts sequentially (the fact that they are Python scripts is not important, though; they could be any other bash jobs):
python script_1.py
python script_2.py
Now, assume that script_1.py takes a certain (unknown) time to finish, while script_2.py has an infinite loop in it.
I'd like to run the two scripts in parallel, and when script_1.py finishes the execution I'd like to kill script_2.py as well.
Note that I'm not interested in doing this within the Python scripts themselves; I want to handle it from the bash side.
What I thought was to create 2 "sub" bash scripts: bash_1.sh and bash_2.sh, and to run them in parallel from a main_bash.sh script that looks like:
bash_1.sh & bash_2.sh
where each bash_i.sh job runs a script_i.py script.
However, this wouldn't terminate the second infinite loop once the first one is done.
Is there a way of doing this, adding some sort of condition that kills one script when the other one is done?
As an additional (less important) point, I'd be interested in monitoring the terminal output
of the first script, but not of the second one.
If your scripts need to start in that sequence, you could wait for the bash_1 to finish:
bash_1 &
b1=$!
bash_2 &
b2=$!
wait $b1
kill $b2
It's simpler than you think. When bash_1.sh finishes, just kill bash_2.sh. The trick is getting the process ID that kill will need to do this.
bash_2.sh &
b2_pid=$!
bash_1.sh
kill $b2_pid
You can also use job control, if enabled.
bash_2.sh &
bash_1.sh
kill %%
Note that you don't need wrapper bash scripts for this; you can run your Python scripts directly in the same fashion:
python script_2.py &
python script_1.py
kill %%
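On the output-monitoring point from the question: redirecting the background script's output keeps the terminal showing only the foreground script. Here's a minimal sketch of the same kill-on-finish pattern, with trivial shell stand-ins in place of the two Python scripts so it runs anywhere:

```shell
#!/usr/bin/env bash
# Stand-in for script_2.py: an infinite loop whose output we silence.
( while true; do echo "script_2 tick"; sleep 1; done ) >/dev/null 2>&1 &
bg_pid=$!

# Stand-in for script_1.py: a finite job whose output we do want to see.
for i in 1 2 3; do
    echo "script_1 step $i"
done

# script_1 is done, so stop the background loop.
# 2>/dev/null guards against the case where it has already exited.
kill "$bg_pid" 2>/dev/null
```

Only the "script_1 step" lines reach the terminal; the background loop's output goes to /dev/null until it is killed.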
Related
When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
ie: If you run the following two commands from a bash script is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
Yes, if you do nothing else then commands in a bash script are serialized. You can tell bash to run a bunch of commands in parallel, and then wait for them all to finish, but doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tell bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way, you won't automatically get the exit status of the child commands (and set -e won't catch their failures), so you can't tell whether they succeeded or failed in the usual way. (You can, however, recover an individual child's status by saving its PID and running wait $pid.)
The bash manual has more information (search for wait, about two-thirds of the way down).
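To make the exit-status caveat concrete: if you record each background job's PID, wait $pid returns that specific child's status. A small sketch, with true and false standing in for real commands:

```shell
#!/usr/bin/env bash
# Run two jobs in parallel, recording each PID so we can
# recover each child's exit status with wait <pid>.
true  & p1=$!   # stand-in for a command that succeeds
false & p2=$!   # stand-in for a command that fails

wait "$p1"; s1=$?
wait "$p2"; s2=$?
echo "job1=$s1 job2=$s2"   # → job1=0 job2=1
```

This still runs the jobs concurrently; it only changes how their completion is collected.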
Add '&' at the end of a command to run it in parallel.
However, that would be odd here, because your second command depends on the final result of the first one. Either keep the commands sequential, or copy to b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
In general, unless explicitly sent to the background or forking themselves off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write 2 scripts and run them in separate processes, so they can be executed simultaneously. It's a wild guess, really, but I think you'll get an access error if a process tries to write in a file that's being read by another process.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
I have an array and using that array I need to run the shell scripts in parallel as
for i in arr
do
sh i.sh &
done
wait
I need to wait for the completion of their execution before proceeding to the next step.
I think your script doesn't do what you want for a different reason than you expect. sh i.sh & tries to run a file literally named i.sh; it never uses the variable i. To fix it, simply add a $ before the i. The wait is in fact waiting for commands to complete, just not the ones you expect: the loop is repeatedly trying to run the same nonexistent script.
for i in arr
do
sh $i.sh &
done
wait
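As an aside, assuming arr is meant to be a bash array: for i in arr (or for i in $arr) iterates over a single word, not the array's elements; you need the "${arr[@]}" expansion. A sketch with a hypothetical array of script names, using echo in place of the real sh calls so it runs anywhere:

```shell
#!/usr/bin/env bash
arr=(job_a job_b job_c)    # hypothetical list of script base names

for i in "${arr[@]}"; do
    # sh "$i.sh" & would go here; echo stands in for it in this sketch
    echo "launching $i.sh" &
done
wait    # block until every background job has finished
```

The quoting in "${arr[@]}" also keeps elements with spaces intact, which the unquoted form would split.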
How do I start multiple processes in bash and time how long they take?
From this question I know how to start multiple processes in a bash script, but using time script.sh doesn't work because the spawned processes only finish after the script itself has exited.
I tried using wait but that didn't change anything.
Here is the script in its entirety:
for i in `seq $1`
do
( ./client & )
done
wait # This doesn't seem to change anything
I'm trying to get the total time for all the processes to finish and not the time for each process.
Why the parentheses around the client invocation? They run the command in a subshell. The background job then belongs to that subshell rather than to the top-level shell, which is why the wait is ineffective: there are no jobs in this shell to wait for. Drop the parentheses so the jobs are children of the script's shell; then wait will work, and you can put time back in front of the script (or the loop) to measure the total.
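Putting that together, a sketch of the corrected loop, with sleep standing in for ./client (the time report goes to stderr):

```shell
#!/usr/bin/env bash
# Time how long it takes for ALL parallel jobs to finish.
# Note: no parentheses around the backgrounded command, so the
# jobs belong to this shell and wait can see them.
time {
    for i in 1 2 3; do
        sleep 0.2 &    # stand-in for ./client
    done
    wait               # returns once every background job has exited
}
```

Because the three sleeps run concurrently, the reported real time is close to one sleep's duration, not the sum of all three.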
I hope that question hasn't been asked too many times but I couldn't find an answer on google (I didn't know how to specify it).
Does someone know how to execute two commands in parallel in bash such that, when one finishes, the other is terminated as well?
For instance, I have two different python scripts :
loop.py: while 1: pass
print.py: print(42)
I would like to do something like python3 loop.py ** python3 print.py (where ** stands for whatever operator does this). The two scripts must run in parallel, and when print.py finishes, loop.py should end automatically.
My usage of that command would be to make something like:
tcpdump -i any -w out.trace ** python3 network_script.py
Thank you in advance
What you want is
tcpdump ... & pid=$!
python3 network_script.py
kill $pid
Run the first script in the background, then start the second script. When the second script ends, kill the first one.
Not the cleanest solution, but you can start processes in the background with a trailing & and then wait for them to complete.
Each of those processes would have to kill all others upon completion.
It could look like this:
(python loop.py && killall python) &
(python print.py && killall python) &
wait
echo "Done with at least one!"
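A tidier variant, if your bash is 4.3 or newer, is wait -n, which returns as soon as any one background job exits; you can then kill whichever job is still running, without the blunt instrument of killall. A sketch with shell stand-ins for the two scripts:

```shell
#!/usr/bin/env bash
# Stand-ins: loop.py (never finishes) and print.py (finishes quickly).
( while true; do sleep 1; done ) &
loop_pid=$!
echo 42 &                      # stand-in for python print.py

wait -n                        # returns as soon as ANY job exits (the echo here)
kill "$loop_pid" 2>/dev/null   # terminate the job that is still running
wait "$loop_pid" 2>/dev/null || true   # reap it; its killed status is expected
```

Unlike the killall approach, this only touches the processes this script started.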