I would like to run my bash script my_script.sh using GNU parallel:
parallel < my_script.sh
where my_script.sh is:
command1 && command2
command3
command4
Will command1 and command2 run in parallel or sequentially?
In case I am not clear:
command1
command2
command3
command4
or
command1 -> command2
command3
command4
?
Thank you!
&& runs the next command only if the previous command completed successfully. When you feed the script to GNU parallel, each input line is treated as one job, so it will run like this:
command1 -> command2 (only if the exit status of "command1" is zero)
command3
command4
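A quick way to see this for yourself, with sleep and echo standing in for the real commands (demo.sh is a made-up name):
printf '%s\n' 'sleep 2 && echo first-then-second' 'echo third' 'echo fourth' > demo.sh
parallel < demo.sh
The third and fourth lines print almost immediately, while first-then-second appears only after the two-second sleep succeeds: the lines are parallel jobs, but && keeps the steps within a line sequential.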
Related
Makefile:
default:
	command1 &
	command2 &
When I run make and hit Control-C, command1 and command2 continue running. How can I make it so that command1 and command2 are killed once make is killed? command1 and command2 should run in parallel. Command1 watches source files and compiles them. Command2 is a webserver.
You can use make's own parallelism option, make -j2, with a makefile like this:
default: task1 task2
task1:
	command1
task2:
	command2
make will take care of interrupting both command1 and command2.
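For the watcher/webserver pair from the question, that might look like this (webpack and http-server stand in for command1 and command2; adjust to your tools):
.PHONY: default watch serve
default: watch serve
watch:
	webpack --watch
serve:
	http-server -p 8000 -c-1
Run it as make -j2; interrupting make with Ctrl-C then interrupts both recipes as well.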
The following code works, though I have no idea why. I took it from https://veithen.github.io/2014/11/16/sigterm-propagation.html
Makefile:
.PHONY: default
default:
	bash build.sh
build.sh:
#!/bin/bash
# Forward TERM/INT to the background jobs so they die when make is killed
trap 'kill -TERM $PID1 $PID2' TERM INT
webpack &                  # Command 1
PID1=$!
http-server -p 8000 -c-1 & # Command 2
PID2=$!
wait $PID1 $PID2           # interrupted by the trapped signal, which forwards it
trap - TERM INT
wait $PID1 $PID2           # reap the jobs and pick up the real exit status
EXIT_STATUS=$?
(Note: the original used a single PID=$!, which only captured the PID of the last background job; capturing both PIDs lets the trap kill both.)
I have a series of commands that I want to use nohup with.
Each command can take a day or two, and sometimes I get disconnected from the terminal.
What is the right way to achieve this?
Method1:
nohup command1 && command2 && command3 ...
or
Method2:
nohup command1 && nohup command2 && nohup command3 ...
or
Method3:
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I can see that method 3 might work, but it forces me to create a temp file. If I can only choose between methods 1 and 2, which is the right way?
nohup command1 && command2 && command3 ...
The nohup will apply only to command1. When it finishes (assuming it doesn't fail), command2 will be executed without nohup, and will be vulnerable to a hangup signal.
nohup command1 && nohup command2 && nohup command3 ...
I don't think this would work. Each of the three commands will be protected by nohup, but the shell that handles the && operators will not. If you logout before command2 begins, I don't think it will be started; likewise for command3.
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I think this should work -- but there's another way that doesn't require creating a script:
nohup sh -c 'command1 && command2 && command3'
The shell is then protected from hangup signals, and I believe the three sub-commands are as well. If I'm mistaken on that last point, you could do:
nohup sh -c 'nohup command1 && nohup command2 && nohup command3'
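One more detail worth knowing: nohup appends output to nohup.out when stdout is a terminal, and you will usually want to background the whole chain so you get your prompt back. A sketch (the log file name is arbitrary):
nohup sh -c 'command1 && command2 && command3' > run.log 2>&1 &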
The following script runs with the -e option, so it will exit if any of the commands in it fail:
#!/bin/sh -e
command1 # script should fail if command1 fails
command2 # script should NOT fail if command2 fails
command3 # script should fail if command3 fails
How can I make the script not fail on command2?
Append || true so command2's failure is ignored:
command1
command2 || true
command3
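If you later need command2's exit status, you can capture it while still masking the failure (rc is just an illustrative variable name):
command2 && rc=0 || rc=$?
echo "command2 exited with status $rc"
This works under set -e because a failure on the left side of && or || does not abort the script.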
You could turn off the setting as required:
#!/bin/sh
set -e
command1 #script should fail if command1 fails
set +e
command2 #script should NOT fail if command2 fails
set -e
command3 #script should fail if command3 fails
If you are sourcing a script that exits (for example because of set -e), you can keep your current shell alive by sourcing it inside a subshell:
(. script.sh)
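To see the difference (fail.sh is a made-up name), fail.sh enables set -e and then fails, so sourcing it directly would terminate your current shell, while the subshell version does not:
printf 'set -e\nfalse\n' > fail.sh
# . fail.sh    # sourcing directly would kill the current shell
(. fail.sh)    # the failure only terminates the subshell
echo "still here"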
I am trying to ensure that a command is run serially after parallel commands have been terminated.
command1 &
command2 &
command3
In the above example, command1 and command2 are launched in background at the same time, but command3 is run soon after. I know this is expected behaviour in Bash, but I was wondering if there was a way for command3 to be launched after command1 and command2 are terminated.
It is probably possible to do:
(command1; touch done1) &
(command2; touch done2) &
while [ ! -f done1 ] || [ ! -f done2 ]; do sleep 1000; done
command3
...but if a more elegant solution is available I will take it. The join needs to be passive as these commands are destined to be used in PBS queues. Any ideas? Thanks in advance.
You can use wait without arguments to wait for all previous jobs to complete:
command1 &
command2 &
wait
command3
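If you also need to know whether the parallel commands succeeded, wait accepts a PID and returns that job's exit status; a minimal sketch of that pattern (same command names as above):
command1 & pid1=$!
command2 & pid2=$!
if wait "$pid1" && wait "$pid2"; then
    command3
fi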
I have commands like this:
COMMAND1 &&
COMMAND2 | COMMAND3 | COMMAND4 &&
COMMAND5
I want to make sure all of COMMAND1 through COMMAND5 succeed. Is there an easy way to achieve this? From my research I found that PIPESTATUS can be used, but it yields some very complicated commands, like:
COMMAND1 &&
COMMAND2 | COMMAND3 | COMMAND4 &&
(PST=("${PIPESTATUS[@]}") && (exit ${PST[0]}) && (exit ${PST[1]})) &&
COMMAND5
Is there a way to do it easily?
BTW: I used (exit n) to get a command that does nothing but exit with status n. Is there a UNIX command that does this directly, like true and false?
Thanks.
Try using set -o pipefail. With pipefail, the exit status of a pipeline is that of the rightmost command that exited non-zero (or zero if every command succeeded), so a failure anywhere in the pipeline makes the whole pipeline fail.
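A quick way to convince yourself, using false and true as stand-ins:
set -o pipefail
false | true
echo $?   # prints 1; without pipefail it would print 0
With it set, your chain needs no PIPESTATUS bookkeeping:
set -o pipefail
COMMAND1 &&
COMMAND2 | COMMAND3 | COMMAND4 &&
COMMAND5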