Bash: different loops in background using & and wait

I've taken a look at the related topics but I did not find an answer.
Here's my problem:
I'm trying to put these commands, which I usually run NOT in a for loop, into two separate for loops.
original commands:
command1 &
command2 &
wait
command3
This obviously starts the two commands in the background and, after BOTH are finished, starts command3.
Now here is my for-loop script:
file1=directory1/*.txt
for i in $file1;
do
command1 ${i} > ${i}.test & # I know, it will generate files like ".txt.test". It's ok.
done &
file2=directory2/*.txt
for i2 in $file2;
do
command1 ${i2} > ${i2}.test &
done &
wait
command3
Now there is something wrong in my script, because sometimes when it is performing command3 I can still find some jobs from command1 or command2, EVEN though I put in the "wait".
I've tried different options, like the second "done" without &. I've also tried two waits... but whatever I do, I mess up all the jobs :(
Where is my mistake (please...be polite :P)?
Thank you
Fabio

Save both "for" loops, with all "&" removed from them, as separate files loop1 and loop2. Then chmod a+rx loop1 loop2 and execute:
loop1 &
loop2 &
wait
command3
Better not to use "done &": it runs the whole loop in a background subshell, so the jobs started with "&" inside the loop are children of that subshell, and your outer "wait" cannot wait for them.
Your code is executing everything at the same time. I am assuming that you want two parallel streams of work.
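To keep everything in one script instead, you can wrap each loop in an explicit subshell, drop the inner "&", and background each subshell as a whole. A minimal sketch, reusing the placeholder names from the question:
( for i in directory1/*.txt; do command1 "$i" > "$i.test"; done ) &
( for i in directory2/*.txt; do command1 "$i" > "$i.test"; done ) &
wait    # waits for both subshells, i.e. for both complete loops
command3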
Edit: Single script solution:
script1=`mktemp /tmp/.script.XXXXXX`;
# quote the delimiter ('END') so ${i} is NOT expanded by the outer
# shell while the here-document is written to the file
cat >$script1 <<'END'
for i in directory1/*.txt; do
command1 ${i} > ${i}.test;
done
END
script2=`mktemp /tmp/.script.XXXXXX`;
cat >$script2 <<'END'
for i in directory2/*.txt; do
command1 ${i} > ${i}.test;
done
END
chmod u+rx $script1 $script2
$script1 &
$script2 &
wait;
command3
/bin/rm $script1 $script2

There is no need to put the ampersand (&) after each "done". You can simply put it after each command; all jobs from both loops will then be put into the background, and a single wait will do the trick.
for i in $( seq 1 3 ); do
command1 "$i" > "$i.test" &
done
for i in $( seq 4 6 ); do
command2 "$i" > "$i.test" &
done
wait
command3
An alternative approach is to store the PID of each background process by making use of $!, like so:
pid=""
command1 & pid="$pid $!"    # append the PID of each background job
command2 & pid="$pid $!"
wait $pid                   # wait only for those PIDs
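Applied to the two loops from the question, that pattern might look like this sketch (command1 and the directory names are the question's placeholders):
pid=""
for i in directory1/*.txt; do
command1 "$i" > "$i.test" & pid="$pid $!"
done
for i in directory2/*.txt; do
command1 "$i" > "$i.test" & pid="$pid $!"
done
wait $pid    # blocks until every collected PID has exited
command3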

Related

set -e with multiple subshells. non-blocking wait -n

In a CI setting, I'd like to run multiple jobs in the background, and use set -e to exit on the first error.
This requires using wait -n instead of wait, but to increase throughput I'd then want to move the for i in {1..20}; do wait -n; done to the end of the script.
Unfortunately, this means that it is hard to track the errors.
Rather, what I would want is to do the equivalent of a non-blocking wait -n often, and exit as soon as possible.
Is this possible or do I have to write my bash scripts as a Makefile?
Alternative Approach: Emulate set -e for background jobs
Instead of checking the jobs all the time, it could be easier and more efficient to exit the script directly when a job fails. To this end, append ... || kill $$ to every job you start:
# before
myCommand &
myProgram arg1 arg2 &
# after
myCommand || kill $$ &
myProgram arg1 arg2 || kill $$ &
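Put together, a script using this pattern might look like the following sketch (myCommand and myProgram are the placeholders from above):
#!/usr/bin/env bash
# each failing job kills the whole script ($$), emulating set -e for background jobs
myCommand || kill $$ &
myProgram arg1 arg2 || kill $$ &
wait    # returns normally only if no job triggered kill $$
echo "all jobs finished"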
Non-Blocking wait -n
If you really have to, you can write your own non-blocking wait -n with a little trick:
nextJobExitCode() {
sleep 0.1 &    # dummy job that bounds the wait to 0.1 seconds
local sleepPid=$!
wait -n    # returns as soon as any job (possibly the dummy sleep) exits
local exitCode="$?"
kill "$sleepPid" 2>/dev/null || true    # clean up the dummy sleep, never a real job
return "$exitCode"
}
The function nextJobExitCode waits at most 0.1 seconds for your jobs. If none of your jobs has already finished or finishes within those 0.1 seconds, nextJobExitCode returns 0 (the exit code of the dummy sleep).
Example usage
set -e
sleep 1 & # job 1
(sleep 3; false) & # job 2
nextJobExitCode # won't exit. No jobs finished yet
sleep 2
nextJobExitCode # won't exit. Job 1 finished with 0
sleep 2
nextJobExitCode # will exit! Job 2 finished with 1

BASH - Run 6 scripts but only 3 together at a time, and if one is finished then start another [duplicate]

I have a simple bash script where I run 3 commands at the same time, and when they are done, the next 3 start, like this:
command1 &
command2 &
command3 &
wait
command4 &
command5 &
command6 &
exit
But how can I make it so that 3 of these always run at the same time, without waiting for the other three? Let's say command1 and command2 have finished but command3 is still running; then I want command4 and command5 to start, so there are always 3 commands running.
Thanks
Bash 4.3 introduced a -n option to wait, which lets you wait for the next job to complete.
command1 &
command2 &
command3 &
wait -n
command4 &
wait -n
command5 &
wait -n
command6 &
exit
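More generally, for a long list of commands you can keep exactly 3 (or N) jobs running with the same wait -n building block. A sketch, assuming Bash >= 4.3; the commands array is a placeholder:
max_jobs=3
commands=(command1 command2 command3 command4 command5 command6)
for cmd in "${commands[@]}"; do
while (( $(jobs -rp | wc -l) >= max_jobs )); do
wait -n    # a slot frees up as soon as any running job exits
done
"$cmd" &
done
wait    # wait for the last jobs to finish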

Executing a command (only) when prior jobs are finished in Bash

I am trying to ensure that a command is run serially after parallel commands have been terminated.
command1 &
command2 &
command3
In the above example, command1 and command2 are launched in the background at the same time, but command3 runs immediately afterwards, without waiting for them. I know this is the expected behaviour in Bash, but I was wondering if there was a way for command3 to be launched only after command1 and command2 have terminated.
It is probably possible to do:
(command1; touch done1) &
(command2; touch done2) &
while [ ! -f done1 ] || [ ! -f done2 ]; do sleep 1000; done # loop while either marker file is still missing
command3
...but if a more elegant solution is available I will take it. The join needs to be passive as these commands are destined to be used in PBS queues. Any ideas? Thanks in advance.
You can use wait without arguments to wait for all previous jobs to complete:
command1 &
command2 &
wait
command3

how to program wait and continue in this bash script

I have two shell scripts, say A and B. I need to run A in the background and run B in the foreground until A finishes its execution in the background. I need to repeat this process for a couple of runs; once A finishes, I need to end the current iteration and move to the next one.
Rough idea is like this:
for((i=0; i< 10; i++))
do
./A.sh &
for ((c=0; c< C_MAX; c++))
do
./B.sh
done
continue
done
How do I use 'wait' and 'continue' so that B keeps running as many times as needed while A is in the background, and the entire process moves to the next iteration once A finishes?
Use the PID of the current background process:
./A.sh &
while ps -p $! >/dev/null; do
./B.sh
done
I am just translating your rough idea into bash scripting.
The core idea behind the wait-continue mechanism (while ps -p $A_PID >/dev/null; do ...) is taken from @thiton, who posted an earlier answer to your question.
for i in `seq 0 10`
do
./A.sh &
A_PID=$!
for c in `seq 0 $C_MAX`
do
./B.sh
done
# keep waiting until A has finished
while ps -p $A_PID >/dev/null; do
sleep 1
done
done

How to include nohup inside a bash script?

I have a large script called mandacalc which I want to always run with the nohup command. If I call it from the command line as:
nohup mandacalc &
everything runs swiftly. But if I try to include nohup inside my command, so I don't need to type it every time I execute it, I get an error message.
So far I tried these options:
nohup (
command1
....
commandn
exit 0
)
and also:
nohup bash -c "
command1
....
commandn
exit 0
" # and also with single quotes.
So far I only get error messages complaining about the usage of the nohup command, or about the quotes used inside the script.
cheers.
Try putting this at the beginning of your script:
#!/bin/bash
case "$1" in
-d|--daemon)
$0 < /dev/null &> /dev/null & disown
exit 0
;;
*)
;;
esac
# do stuff here
If you now start your script with --daemon as an argument, it will restart itself detached from your current shell.
You can still run your script "in the foreground" by starting it without this option.
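For example, assuming the script is the mandacalc from the question:
./mandacalc --daemon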
Just put trap '' HUP at the beginning of your script.
Also, if it creates child background processes (someCommand&), you will have to change them to nohup someCommand& to work properly. I have been researching this for a long time, and only the combination of these two (the trap and nohup) works on my specific script, where xterm closes too fast.
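A minimal sketch of that combination (someCommand is a placeholder):
#!/bin/bash
trap '' HUP            # ignore SIGHUP so the script survives the closing terminal
nohup someCommand &    # child background jobs still need their own nohup
wait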
Create an alias of the same name in your bash (or preferred shell) startup file:
alias mandacalc="nohup mandacalc &"
Why don't you just make a wrapper script containing nohup ./original_script?
There is a nice answer here: http://compgroups.net/comp.unix.shell/can-a-script-nohup-itself/498135
#!/bin/bash
### make sure that the script is called with `nohup nice ...`
if [ "$1" != "calling_myself" ]
then
# this script has *not* been called recursively by itself
datestamp=$(date +%F | tr -d -)
nohup_out=nohup-$datestamp.out
nohup nice "$0" "calling_myself" "$@" > "$nohup_out" &
sleep 1
tail -f $nohup_out
exit
else
# this script has been called recursively by itself
shift # remove the termination condition flag in $1
fi
### the rest of the script goes here
. . . . .
Note that nohup $( command1; command2 ) would not do what you want: $(...) is command substitution, so the commands would run immediately and only their output would be handed to nohup. Since nohup expects a single command, wrap multiple commands in one shell invocation instead:
nohup bash -c 'command1; command2' &
Using single quotes around the -c string (or escaping any quotes inside it) avoids the quoting errors mentioned in the question.
