Bash: how to detect an error in a pipe

I have commands like this:
COMMAND1 &&
COMMAND2 | COMMAND3 | COMMAND4 &&
COMMAND5
I want to make sure all of COMMAND1 through COMMAND5 are successful. Is there an easy way to achieve this? From my research, I found that PIPESTATUS can be used, but it leads to some very complicated commands like
COMMAND1 &&
COMMAND2 | COMMAND3 | COMMAND4 &&
(PST=("${PIPESTATUS[@]}") && (exit ${PST[0]}) && (exit ${PST[1]})) &&
COMMAND5
Is there a way to do it easily?
BTW: I used (exit n) to get a command that does nothing but exit with status n. Is there a UNIX command that does this directly, like true and false?
Thanks.

Try using set -o pipefail. With it, the exit status of a pipeline is the exit status of the last (rightmost) command that failed, or zero if every command in the pipeline succeeded.
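A minimal, runnable sketch of this in the shape of the original question, with placeholder functions standing in for COMMAND1..5 (the names step_ok/step_fail are illustrative, not from the question):

```shell
#!/bin/bash
set -o pipefail

step_ok()   { true; }    # placeholder for a succeeding command
step_fail() { false; }   # placeholder for a failing command

step_ok &&
step_ok | step_fail | step_ok &&
step_ok
echo "overall status: $?"   # → overall status: 1
```

Because of pipefail, the middle pipeline reports step_fail's status 1, the && chain short-circuits, and $? reflects the failure without any PIPESTATUS gymnastics.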

Related

&& in bash script using GNU "parallel"

I would like to run my bash script my_script.sh using GNU parallel:
parallel < my_script.sh
where my_script.sh is:
command1 && command2
command3
command4
Will command1 and command2 run in parallel or sequentially?
In case I am not clear:
command1
command2
command3
command4
or
command1 -> command2
command3
command4
?
Thank you!
&& lets you do something based on whether the previous command completed successfully.
It will run like this:
command1 -> command2 (only if the exit status of "command1" is zero)
command3
command4
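A small runnable demonstration of the short-circuit behaviour described above (the echo strings are placeholders for real commands):

```shell
#!/bin/sh
# `&&` runs the right-hand command only if the left-hand one succeeded.
false && echo "this never prints"
echo "status of the chain: $?"        # 1, taken from `false`

true && echo "this prints, because true succeeded"
```

Within GNU parallel, each input line is one job, so a line like `command1 && command2` is handed to a single shell and keeps exactly this sequential, conditional behaviour inside that job.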

series of commands using nohup [duplicate]

This question already has answers here:
Why can't I use Unix Nohup with Bash For-loop?
(3 answers)
Closed 7 years ago.
I have a series of commands that I want to use nohup with.
Each command can take a day or two, and sometimes I get disconnected from the terminal.
What is the right way to achieve this?
Method1:
nohup command1 && command2 && command3 ...
or
Method2:
nohup command1 && nohup command2 && nohup command3 ...
or
Method3:
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I can see method 3 might work, but it forces me to create a temp file. If I can choose from only method 1 or 2, what is the right way?
nohup command1 && command2 && command3 ...
The nohup will apply only to command1. When it finishes (assuming it doesn't fail), command2 will be executed without nohup, and will be vulnerable to a hangup signal.
nohup command1 && nohup command2 && nohup command3 ...
I don't think this would work. Each of the three commands will be protected by nohup, but the shell that handles the && operators will not. If you logout before command2 begins, I don't think it will be started; likewise for command3.
echo -e "command1\ncommand2\n..." > commands_to_run
nohup sh commands_to_run
I think this should work -- but there's another way that doesn't require creating a script:
nohup sh -c 'command1 && command2 && command3'
The shell is then protected from hangup signals, and I believe the three sub-commands are as well. If I'm mistaken on that last point, you could do:
nohup sh -c 'nohup command1 && nohup command2 && nohup command3'
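A runnable sketch of the recommended pattern; echo commands stand in for the day-long command1..3, and chain.log is an assumed log file name:

```shell
#!/bin/sh
# One nohup'd shell runs the whole && chain, so the chain itself
# survives a hangup signal, not just the first command.
nohup sh -c 'echo step1 && echo step2 && echo step3' > chain.log 2>&1 &
wait $!        # in real use you would log out instead of waiting
cat chain.log
```

Backgrounding the nohup'd shell (`&`) and redirecting its output means nothing remains attached to the terminal.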

How to avoid a shell script exiting on failure for particular commands

The following script runs with the -e option, so it will exit if any of the commands in it fail:
#!/bin/sh -e
command1 #script should fail if command1 fails
command2 #script should NOT fail if command2 fails
command3 #script should fail if command3 fails
How can I make the script not fail on command2?
command1
command2 || true
command3
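A runnable version of the `|| true` idiom, with `false` standing in for the command that is allowed to fail:

```shell
#!/bin/sh -e
echo "before"
false || true    # the failure of `false` is masked, so -e does not trigger
echo "after"     # still reached; the script exits 0
```

The `|| true` makes the whole list succeed regardless of the left-hand command's status, which is exactly what `set -e` checks.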
You could turn off the setting as required:
#!/bin/sh
set -e
command1 #script should fail if command1 fails
set +e
command2 #script should NOT fail if command2 fails
set -e
command3 #script should fail if command3 fails
If you are sourcing a script that calls exit (which would otherwise terminate your current shell), you can prevent that by sourcing it inside a subshell:
(. script.sh)
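A self-contained demonstration of the subshell trick; the generated script.sh stands in for any sourced script that calls exit:

```shell
#!/bin/sh
# script.sh is a stand-in for a sourced script that calls `exit`.
printf 'echo inside the script\nexit 1\n' > script.sh

(. ./script.sh)                 # the subshell absorbs the exit
echo "parent survived, subshell status: $?"
```

Sourcing the script directly (`. ./script.sh` without the parentheses) would terminate the calling shell at the `exit 1` line.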

Executing a command (only) when prior jobs are finished in Bash

I am trying to ensure that a command is run serially after parallel commands have been terminated.
command1 &
command2 &
command3
In the above example, command1 and command2 are launched in the background at the same time, but command3 starts immediately, without waiting for them. I know this is expected behaviour in Bash, but I was wondering if there is a way for command3 to be launched only after command1 and command2 have terminated.
It is probably possible to do:
(command1; touch done1) &
(command2; touch done2) &
while [ ! -f done1 ] || [ ! -f done2 ]; do sleep 1; done
command3
...but if a more elegant solution is available I will take it. The join needs to be passive as these commands are destined to be used in PBS queues. Any ideas? Thanks in advance.
You can use wait without arguments to wait for all previous jobs to complete:
command1 &
command2 &
wait
command3
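A runnable sketch of the pattern above, using short sleeps and echo as placeholders for the real commands:

```shell
#!/bin/sh
# Two background jobs, then `wait` blocks until both have exited.
( sleep 0.2; echo "job 1 done" ) &
( sleep 0.1; echo "job 2 done" ) &
wait
echo "both jobs finished; command3 could run now"
```

With no arguments, wait returns only after every child job of the current shell has terminated, so the final echo is guaranteed to print last.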

Find out which shell PHP is using

I'm trying to execute a piped shell commands like this
set -o pipefail && command1 | command2 | command3
from a PHP script. The set -o pipefail part is to make the pipe break as soon as any of the commands fails. But the command results in this:
sh: 1: set: Illegal option -o pipefail
whereas it runs fine from the terminal. Maybe explicitly specifying which shell the PHP CLI should use (i.e. /bin/bash) when executing shell commands could solve the problem, or is there a better way?
You can always run bash -c 'set -o pipefail && command1 | command2 | command3' instead.
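A runnable sketch of the workaround: the single-quoted string is what you would hand to PHP's shell_exec()/exec(), with true/false standing in for the real commands:

```shell
# bash understands pipefail even when the system /bin/sh does not.
bash -c 'set -o pipefail; true | false | true'
echo "status seen by the caller: $?"   # → 1
```

The error comes from PHP invoking /bin/sh, which on many systems is a POSIX shell (e.g. dash) without the pipefail option; wrapping the command in bash -c sidesteps that.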
You can find out by running this from PHP:
echo `echo $SHELL`;
Note, though, that $SHELL reports the user's login shell, which is not necessarily the shell PHP uses for backticks/shell_exec (on most systems that is /bin/sh).
