I have an array, and using that array I need to run the shell scripts in parallel, like this:
for i in arr
do
sh i.sh &
done
wait
I need to wait for all of them to complete before proceeding to the next step.
I think your script doesn't do what you want for a different reason than you're expecting. sh i.sh & tries to run a file literally named i.sh; it doesn't use the variable i at all. To fix that, add $ before the i. The script is waiting for commands to complete, just not the ones you expect: it's actually trying to run the same nonexistent script a bunch of times. Also note that for i in arr loops over the literal word arr, not the array; to loop over the array's elements, write "${arr[@]}":
for i in "${arr[@]}"
do
sh "$i.sh" &
done
wait
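If you also need to know whether any of the scripts failed, a common pattern is to collect the background PIDs and wait on each one individually. A minimal sketch, assuming arr is a bash array holding script base names (the names here are hypothetical):
#!/bin/bash
arr=(one two three)        # hypothetical entries; one.sh, two.sh, three.sh must exist
pids=()
for i in "${arr[@]}"
do
sh "$i.sh" &
pids+=($!)                 # $! is the PID of the most recent background job
done
status=0
for pid in "${pids[@]}"
do
wait "$pid" || status=1    # wait PID returns that child's exit status
done
exit $status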
When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
i.e., if you run the following two commands from a bash script, is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
Yes: if you do nothing else, the commands in a bash script are serialized. You can tell bash to run a bunch of commands in parallel and then wait for them all to finish by doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tell bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way, a bare wait discards the exit statuses of the child commands (and set -e won't catch their failures), so you won't be able to tell in the usual way whether they succeeded or failed; to check, wait on each child's PID individually.
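To see that caveat concretely, here is a tiny sketch using true and false as stand-ins for real commands:
true &
false &
wait
echo $?     # prints 0: a bare wait ignores the children's exit statuses
false &
pid=$!
wait $pid
echo $?     # prints 1: waiting on a specific PID reports that child's status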
The bash manual has more information (search for wait, about two-thirds of the way down).
Add & at the end of a command to run it in parallel.
However, it is odd here, because your second command depends on the result of the first one. Either keep the commands sequential, or copy both b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
In general, unless explicitly sent to the background or forking themselves off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write two scripts and run them in separate processes so they execute simultaneously. Be careful if they touch the same files, though: on Unix a process generally won't get an access error for writing to a file another process is reading, but the reader may see incomplete or inconsistent data.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
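For instance, a parenthesized group of commands runs in a subshell, and the whole group can be backgrounded as a single unit. A minimal sketch, with command1 and command2 as placeholders:
( command1; command2 ) &    # the pair runs sequentially, but in the background
sub_pid=$!
# ... the main script keeps going here ...
wait $sub_pid               # block until the whole subshell has finished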
I've seen many questions about parallelizing bash scripts, but so far I haven't found one that answers my question.
I have a bash script that runs two Python scripts sequentially (the fact that they are Python scripts is not important; they could be any other jobs):
python script_1.py
python script_2.py
Now, assume that script_1.py takes a certain (unknown) time to finish, while script_2.py has an infinite loop in it.
I'd like to run the two scripts in parallel, and when script_1.py finishes its execution I'd like to kill script_2.py as well.
Note that I'm not interested in doing this from within the Python scripts; I want to handle it from the bash side.
What I thought was to create 2 "sub" bash scripts: bash_1.sh and bash_2.sh, and to run them in parallel from a main_bash.sh script that looks like:
bash_1.sh & bash_2.sh
where each bash_i.sh job runs a script_i.py script.
However, this wouldn't terminate the second infinite loop once the first one is done.
Is there a way of doing this, adding some sort of condition that kills one script when the other one is done?
As an additional (less important) point, I'd be interested in monitoring the terminal output of the first script, but not of the second one.
If your scripts need to start in that sequence, you can save each one's PID and wait for bash_1.sh to finish:
bash_1.sh &
b1=$!
bash_2.sh &
b2=$!
wait $b1
kill $b2
It's simpler than you think. When bash_1.sh finishes, just kill bash_2.sh. The trick is getting the process ID that kill will need.
bash_2.sh &
b2_pid=$!
bash_1.sh
kill $b2_pid
You can also use job control, if enabled.
bash_2.sh &
bash_1.sh
kill %%
Note that you don't need the bash wrapper scripts for this; you can run your Python scripts directly in the same fashion:
python script_2.py &
python script_1.py
kill %%
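Putting the pieces together for the original question, a main_bash.sh along these lines would also cover the monitoring point, since only script_1.py writes to the terminal. A sketch, assuming both scripts sit in the current directory:
#!/bin/bash
python script_2.py >/dev/null 2>&1 &   # the infinite loop; its output is discarded
loop_pid=$!
python script_1.py                     # runs in the foreground; its output stays visible
kill $loop_pid                         # stop the loop once script_1.py is done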
If I want to call another batch script from within a batch script, I can use
CALL File.bat
to pause the execution of the current batch file and wait for the CALLed script to complete.
I can use
START File.bat
if I want them to run simultaneously.
How do I achieve this behavior in a shell script?
If you want to wait:
#!/bin/bash
# do some stuff
/path/to/other/script
# do other stuff
To run it simultaneously (i.e. "in the background"):
#!/bin/bash
# do some stuff
/path/to/other/script &
# do other stuff, then optionally:
wait
# this will wait for all background jobs to finish
There are other ways, and there are things to consider about input and output redirection for the background process if you want to provide it specific input or capture its output or errors, but those are the basics.
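For example, to detach the background script from the terminal's input and keep its output for later inspection (a sketch; script.log is an arbitrary name):
#!/bin/bash
/path/to/other/script </dev/null >script.log 2>&1 &
# do other stuff, then:
wait
# the script's output and errors are now in script.log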
By "shell", I assume you mean *NIX sh.
To execute another script and wait for it to complete, do
sh file.sh
To start it in background, do
(sh file.sh) &
For bash (and other Bourne shell-compatible shells):
you don't need CALL; invoking another script or program executes it synchronously:
someprog.sh
you append & to a command to run it asynchronously; note that the program may stop if it tries to read from stdin while in the background:
someprog.sh &
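The stdin caveat is easy to observe. In an interactive bash session with job control, a background job that reads the terminal is stopped; in a non-interactive script, bash instead redirects the background command's stdin from /dev/null, so the read just sees end-of-file. A sketch of the interactive case:
cat &
# the shell soon reports something like: [1]+ Stopped   cat
fg    # bring the job to the foreground so it can actually read input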
Do lines in a bash script execute sequentially? I can't see any reason why not, but I am really new to bash scripting and I have a couple of commands that need to execute in order.
For example:
#!/bin/sh
# will this get finished before the next command starts?
./someLongCommand1 arg1
./someLongCommand2 arg1
Yes, they are executed sequentially. However, if you run a program in the background, the next command in your script is executed immediately after the backgrounded command is started.
#!/bin/sh
# will this get finished before the next command starts?
./someLongCommand1 arg1 &
./someLongCommand2 arg1 &
would result in a near-instant completion of the script; however, the commands started in it will not have completed. (You start a command in the background by putting an ampersand (&) after it.)
Yes... unless you go out of your way to run one of the commands in the background, one will finish before the next one starts.
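A quick way to convince yourself, using sleep as a stand-in for a long-running command:
#!/bin/sh
# sequential: these two lines take about 4 seconds in total
sleep 2
sleep 2
# parallel: these take about 2 seconds in total, because both
# sleeps run at once and wait blocks until they have finished
sleep 2 &
sleep 2 &
wait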