I am trying to ensure that a command is run serially after parallel commands have been terminated.
command1 &
command2 &
command3
In the above example, command1 and command2 are launched in the background at the same time, and command3 runs immediately, without waiting for them. I know this is expected behaviour in Bash, but I was wondering whether there is a way to launch command3 only after command1 and command2 have terminated.
It is probably possible to do:
(command1; touch done1) &
(command2; touch done2) &
while [ ! -f done1 ] || [ ! -f done2 ]; do sleep 1; done  # loop until BOTH marker files exist
command3
...but if a more elegant solution is available I will take it. The join needs to be passive as these commands are destined to be used in PBS queues. Any ideas? Thanks in advance.
You can use wait without arguments to wait for all previous jobs to complete:
command1 &
command2 &
wait
command3
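A minimal runnable sketch of this, with sleep standing in for the real commands:

```shell
#!/bin/sh
# Two simulated parallel jobs (sleep stands in for command1/command2).
sleep 1 &
sleep 2 &
wait                    # blocks until every background job has exited
echo "all jobs done"    # stand-in for command3
```

The script takes as long as the slowest background job (here, about 2 seconds) before the final line runs.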
Related
I would like to run my bash script my_script.sh using GNU parallel:
parallel < my_script.sh
where my_script.sh is:
command1 && command2
command3
command4
Will command1 and command2 run in parallel or sequentially?
In case I am not clear:
command1
command2
command3
command4
or
command1 -> command2
command3
command4
?
Thank you!
&& runs the command on its right only if the command on its left completed successfully (exit status zero). GNU parallel treats each input line as one job, so it will run like this:
command1 -> command2 (only if the exit status of "command1" is zero)
command3
command4
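The sequencing and short-circuit behaviour of && can be seen in a small sketch, with echo and false standing in for the real commands:

```shell
#!/bin/sh
# && chains sequentially and short-circuits: the right-hand command
# runs only if the left-hand one exits with status zero.
echo "step1" && echo "step2"    # step2 runs, because the first echo succeeded
false && echo "never printed"   # skipped: false exits non-zero
echo "done"
```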
Makefile:
default:
	command1 &
	command2 &
When I run make and hit Control-C, command1 and command2 continue running. How can I make it so that command1 and command2 are killed once make is killed? command1 and command2 should run in parallel. Command1 watches source files and compiles them. Command2 is a webserver.
You can use make's own parallelism option, make -j2, with a makefile like this:
default: task1 task2

task1:
	command1

task2:
	command2
When you hit Control-C, make will take care of interrupting both command1 and command2.
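One caveat worth guarding against: if files named task1 or task2 ever exist in the directory, make would consider those targets up to date and skip the commands. Marking them phony avoids that (a small assumed addition, not part of the original answer):

```make
.PHONY: default task1 task2
default: task1 task2

task1:
	command1

task2:
	command2
```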
The following code works, though I have no idea why. I took it from https://veithen.github.io/2014/11/16/sigterm-propagation.html
Makefile:
.PHONY: default
default:
	bash build.sh
build.sh:
#!/bin/bash
trap 'kill -TERM $PID' TERM INT
webpack & # Command 1
http-server -p 8000 -c-1 & # Command 2
PID=$!
wait $PID
trap - TERM INT
wait $PID
EXIT_STATUS=$?
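One likely reason the script above feels mysterious: $! holds the PID of the most recent background job only, so PID refers to http-server and the trap never signals webpack. A variant of my own (not the article's code) that records and forwards signals to both jobs, with sleep standing in for the two real commands:

```shell
#!/bin/bash
# Sketch: record BOTH background PIDs, forward INT/TERM to both,
# and wait for both. sleep stands in for webpack and http-server.
trap 'kill "$PID1" "$PID2" 2>/dev/null' TERM INT
sleep 1 &     # stand-in for: webpack
PID1=$!
sleep 2 &     # stand-in for: http-server -p 8000 -c-1
PID2=$!
wait "$PID1" "$PID2"    # returns once both jobs have exited
echo "both exited"
```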
I have a series of commands that I want to use nohup with.
Each command can take a day or two, and sometimes I get disconnected from the terminal.
What is the right way to achieve this?
Method1:
nohup command1 && command2 && command3 ...
or
Method2:
nohup command1 && nohup command2 && nohup command3 ...
or
Method3:
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I can see that method 3 might work, but it forces me to create a temp file. If I could choose only between methods 1 and 2, which would be the right one?
nohup command1 && command2 && command3 ...
The nohup will apply only to command1. When it finishes (assuming it doesn't fail), command2 will be executed without nohup, and will be vulnerable to a hangup signal.
nohup command1 && nohup command2 && nohup command3 ...
I don't think this would work. Each of the three commands will be protected by nohup, but the shell that handles the && operators will not. If you logout before command2 begins, I don't think it will be started; likewise for command3.
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I think this should work -- but there's another way that doesn't require creating a script:
nohup sh -c 'command1 && command2 && command3'
The shell is then protected from hangup signals, and I believe the three sub-commands are as well. If I'm mistaken on that last point, you could do:
nohup sh -c 'nohup command1 && nohup command2 && nohup command3'
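In practice you would also background the whole chain and redirect its output (otherwise nohup appends to nohup.out). A self-contained sketch, with echo standing in for the real command1/command2/command3:

```shell
#!/bin/sh
# The whole && chain runs under a single nohup'd shell, backgrounded,
# with stdout and stderr captured in a log file.
nohup sh -c 'echo step1 && echo step2 && echo step3' > chain_demo.log 2>&1 &
wait $!             # here only so the sketch is self-contained
cat chain_demo.log  # the three steps, in order
rm -f chain_demo.log
```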
The following script runs with the -e option, so it will exit if any of the commands in it fail:
#!/bin/sh -e
command1 #script should fail if command1 fails
command2 #script should NOT fail if command2 fails
command3 #script should fail if command3 fails
How can I make the script not to fail on command2?
command1
command2 || true
command3
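A runnable sketch of the || true pattern, with false standing in for a command2 that fails:

```shell
#!/bin/sh -e
# Under -e a failing command aborts the script; "|| true" masks the
# failure so execution continues.
echo "command1 ok"
false || true               # failure ignored, script keeps going
echo "command3 still runs"
```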
You could turn off the setting as required:
#!/bin/sh
set -e
command1 #script should fail if command1 fails
set +e
command2 #script should NOT fail if command2 fails
set -e
command3 #script should fail if command3 fails
If sourcing a script would cause your current shell to exit, you can prevent that by sourcing it inside a subshell:
(. script.sh)
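A small demonstration, using a hypothetical throwaway script that calls exit:

```shell
#!/bin/sh
# Sourcing a script that calls exit would normally terminate the current
# shell; sourcing it inside ( ) sacrifices only the subshell.
echo 'exit 3' > demo_exit.sh        # hypothetical script that exits
(. ./demo_exit.sh)                  # the subshell exits, not us
echo "still here, subshell status was $?"
rm -f demo_exit.sh
```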
I've taken a look to the related topics but I did not found an answer.
Here's my problem:
I'm trying to take the commands that I usually run without a for loop and put them into two separate for loops
original commands:
command1 &
command2 &
wait
command3
This obviously starts two commands in the background and, after BOTH have finished, starts command3.
Now here there's my for loop script:
file1=directory1/*.txt
for i in $file1;
do
command1 ${i} > ${i}.test & # I know, it will generate files like ".txt.test". It's ok.
done &
file2=directory2/*.txt
for i2 in $file2;
do
command1 ${i2} > ${i2}.test &
done &
wait
command3
Now there is something wrong in my script, because sometimes when it is performing command3 I can still find some jobs from command1 or command2, EVEN though I put the "wait".
I've tried different options, like the second "done" without &. I've also tried two waits... but everything I do just messes up the jobs :(
Where is my mistake (please...be polite :P)?
Thank you
Fabio
Save both for loops, with all "&" removed from them, as two separate files, loop1 and loop2. Then chmod a+rx loop1 loop2 and execute:
loop1 &
loop2 &
wait
command3
I don't know the behaviour of "done &"; better not to use it. Your code is executing everything at the same time. I am assuming that you want two threads.
Edit: Single script solution:
script1=`mktemp /tmp/.script.XXXXXX`;
cat >$script1 <<'END'  # quote the delimiter so ${i} is not expanded while writing the script
for i in directory1/*.txt; do
command1 ${i} > ${i}.test;
done
END
script2=`mktemp /tmp/.script.XXXXXX`;
cat >$script2 <<'END'  # quote the delimiter so ${i} is not expanded while writing the script
for i in directory2/*.txt; do
command1 ${i} > ${i}.test;
done
END
chmod u+rx $script1 $script2
$script1 &
$script2 &
wait;
command3
/bin/rm $script1 $script2
There is no need to put an ampersand (&) after each done; simply put one after each command instead. All jobs from both loops will then be put into the background, and a single wait will do the trick.
for i in $( seq 1 3 ); do
command1 "$i" > "$i.test" &
done
for i in $( seq 4 6 ); do
command2 "$i" > "$i.test" &
done
wait
command3
An alternative approach is to store the PID of each background process, making use of $!, like so:
pid=""
command & pid="$pid $!"
wait $pid
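Fleshed out into a runnable sketch (sleep stands in for the real commands), collecting every PID and waiting only on those jobs:

```shell
#!/bin/sh
# Collect each background job's PID, then wait only on those jobs,
# leaving any unrelated background jobs alone.
pids=""
for i in 1 2 3; do
    sleep "$i" &        # stand-in for: command1 "$i" > "$i.test"
    pids="$pids $!"
done
wait $pids              # unquoted on purpose: one argument per PID
echo "all collected jobs finished"
```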