I'm trying to execute a piped shell command like this
set -o pipefail && command1 | command2 | command3
from a PHP script. The set -o pipefail part is there to make the pipeline fail as soon as any of the commands fails. But the command results in this:
sh: 1: set: Illegal option -o pipefail
whereas it runs fine from the terminal. Maybe explicitly specifying which shell the PHP CLI should use (e.g. /bin/bash) when executing shell commands could solve the problem, or is there a better way out?
You can always run bash -c 'set -o pipefail && command1 | command2 | command3' instead.
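For a quick sanity check from the terminal, you can reproduce both behaviours; true and false stand in for your real commands, and this assumes your /bin/sh is a POSIX shell such as dash that lacks pipefail:

sh -c 'set -o pipefail && false | true'              # sh: 1: set: Illegal option -o pipefail
bash -c 'set -o pipefail && false | true'; echo $?   # prints 1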
You can find out which shell is being used by doing
echo `echo $SHELL`;
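Note that $SHELL reports your login shell, not necessarily the shell PHP's command-execution functions use; those use /bin/sh. On Debian/Ubuntu systems /bin/sh is typically a symlink to dash, which you can verify directly:

ls -l /bin/sh    # e.g. lrwxrwxrwx 1 root root 4 ... /bin/sh -> dash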
Is it possible to use output redirection inside a bsub command, such as:
bsub -q short "cat <(head -2 myfile.txt) > outputfile.txt"
Currently this bsub execution fails. My attempts to escape the redirection sign and the parentheses all failed as well, such as:
bsub -q short "cat \<\(head -2 myfile.txt\) > outputfile.txt"
bsub -q short "cat <\(head -2 myfile.txt\) > outputfile.txt"
*Note: I'm well aware that the redirection in this simple command is not necessary, as the command could easily be written as:
bsub -q short "head -2 myfile.txt > outputfile.txt"
and then it would indeed execute properly (without errors). I am, however, interested in using the '<(...)' redirection within the context of a more complex command, and bring this simple command here as an example only.
<(...) is process substitution -- a bash extension not available on baseline POSIX shells. system(), subprocess.Popen(..., shell=True) and similar calls use /bin/sh, which is not guaranteed to have such extensions.
As a mechanism that works with any possible command, without needing to worry about how to correctly escape it into a string, you can wrap the command in a shell function and export that function (and any variables it uses) through the environment:
# for the sake of example, moving filenames out-of-band
in_file=myfile.txt
out_file=outputfile.txt
mycmd() { cat <(head -2 <"$in_file") >"$out_file"; }
export -f mycmd # export the function into the environment
export in_file out_file # and also any variables it uses
bsub -q short 'bash -c mycmd' # ...before telling bsub to invoke bash to run the function
<(...) is a bash feature while your command runs with sh.
Invoke bash explicitly to handle your bash-only features:
bsub -q short "bash -c 'cat <(head -2 myfile.txt) > outputfile.txt'"
I have a shell script that needs to run as a particular user, so I call that script as below:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log"
After this, when I check the exit code of the last execution, it always returns 0, even if the script fails.
I also tried the following, which didn't help:
su - testuser -c "/root/check_package.sh | tee -a /var/log/check_package.log && echo $? || echo $?"
Is there a way to get the exit code of whatever command is run through su?
The problem here is not su, but tee: By default, the shell exits with the exit status of the last pipeline component; in your code, that component is not check_package.sh, but instead is tee.
If your /bin/sh is provided by bash (as opposed to ash, dash, or another POSIX-baseline shell), use set -o pipefail to cause the entire pipeline to fail if any component of it does:
su - testuser -c "set -o pipefail; /root/check_package.sh | tee -a /var/log/check_package.log"
Alternately, you can do the tee out-of-band with redirection to a process substitution (though this requires your current user to have permission to write to check_package.log):
su - testuser -c "/root/check_package.sh" > >(tee -a /var/log/check_package.log)
Both su and sudo exit with the exit status of the command they execute (if authentication succeeded):
$ sudo false; echo $?
1
$ su -c false; echo $?
1
Your problem is that the command su runs is a pipeline. The exit status of your pipeline is that of the tee command (which succeeds), but what you really want is that of the first command in the pipeline.
If your shell is bash, you have a couple of options:
Run set -o pipefail before your pipeline, which will make it return the exit status of the rightmost command that failed, if any of them fail
Examine a specific member of the PIPESTATUS array variable - this can give you the exit status of the first command whether or not tee succeeds.
Examples:
$ sudo bash -c "false | tee -a /dev/null"; echo $?
0
$ sudo bash -c "set -o pipefail; false | tee -a /dev/null"; echo $?
1
$ sudo bash -c 'false | tee -a /dev/null; exit ${PIPESTATUS[0]}'; echo $?
1
You will get similar results using su -c, if your system shell (in /bin/sh) is Bash. If not, then you'd need to explicitly invoke bash, at which point sudo is clearly simpler.
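For completeness, a sketch of the same fix applied to the original command through su (assuming bash is installed at /bin/bash on the target system):

su - testuser -c "bash -c 'set -o pipefail; /root/check_package.sh | tee -a /var/log/check_package.log'"; echo $?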
I was facing a similar issue today; in case the topic is still open, here is my solution, otherwise just ignore it...
I wrote a bash script (let's say my_script.sh) which looks more or less like this:
### FUNCTIONS ###
<all functions listed in the main script which do what I want...>
### MAIN SCRIPT ### calls the functions defined in the section above
main_script() {
log_message "START" 0
check_env
check_data
create_package
tar_package
zip_package
log_message "END" 0
}
main_script | tee -a "${var_log}"   # execute the script and write its output into the log file
var_sta=${PIPESTATUS[0]}            # capture the exit status of main_script (first pipeline component)
exit ${var_sta}                     # exit with that status
It works whether you call the script directly or via sudo.
I'm trying to write a bash script where every command is passed through a function that evaluates the command using this line:
eval $1 2>&1 >>~/max.log | tee --append ~/max.log
An example of a case where it does not work is when trying to evaluate a cd command:
eval cd /usr/local/src 2>&1 >>~/max.log | tee --append ~/max.log
The part that causes the issue is the | tee --append ~/max.log part. Any idea why I'm experiencing these issues?
From the bash(1) man page:
Each command in a pipeline is executed as a separate process (i.e., in a subshell).
Therefore, cd cannot change the working directory of the current shell when used in a pipeline. To work around this restriction, the usual approach is to group cd with other commands and redirect the output of the group command:
{
cd /usr/local/src
command1
command2
} | tee --append ~/max.log
Without breaking your existing design, you could instead handle cd specially in your filter function:
# eval all commands (will catch any cd output, but will not change directory):
eval "$1" 2>&1 >>~/max.log | tee --append ~/max.log
# if the command starts with "cd ", execute it once more, but in the current shell:
[[ "$1" == cd\ * ]] && eval "$1"
Depending on your situation, this may not be enough: You may have to handle other commands that modify the environment or shell variables like set, history, ulimit, read, pushd, and popd as well. In that case it would probably be a good idea to re-think the program's design.
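Short of a redesign, here is a rough sketch of that special-casing, staying within the existing design (log_eval is a hypothetical name, and the list of builtins is illustrative rather than exhaustive):

log_eval() {
    # stdout goes straight to the log; stderr is shown on the
    # terminal and appended to the log by tee:
    eval "$1" 2>&1 >>~/max.log | tee --append ~/max.log
    # Replay environment-changing builtins in the current shell
    # (their output was already captured above):
    case $1 in
        cd\ *|pushd\ *|popd\ *|set\ *|ulimit\ *) eval "$1" >/dev/null 2>&1 ;;
    esac
}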
Is there a similar option in the dash shell corresponding to pipefail in bash?
Or any other way of getting a non-zero status if one of the commands in the pipe fails (without exiting on it, as set -e would)?
To make it clearer, here is an example of what I want to achieve:
In a sample debugging makefile, my rule looks like this:
set -o pipefail; gcc -Wall $$f.c -o $$f 2>&1 | tee err; if [ $$? -ne 0 ]; then vim -o $$f.c err; ./$$f; fi;
Basically it opens the error file and the source file on error, and runs the program when there is no error. Saves me some typing. The above snippet works well in bash, but my newer Ubuntu system uses dash, which doesn't seem to support the pipefail option.
I basically want a FAILURE status if the first part of the below group of commands fails:
gcc -Wall $$f.c -o $$f 2>&1 | tee err
so that I can use that for the if statement.
Are there any alternate ways of achieving it?
Thanks!
I ran into this same issue, and the bash options set -o pipefail and ${PIPESTATUS[0]} both failed in the dash shell (/bin/sh) on the Docker image I'm using. I'd rather not modify the image or install another package, but the good news is that using a named pipe worked perfectly for me =)
mkfifo named_pipe
tee err < named_pipe &                      # reader runs in the background
gcc -Wall $$f.c -o $$f > named_pipe 2>&1    # writer; its status is not masked by tee
echo $?                                     # exit status of gcc, not tee
See this answer for where I found the info: https://stackoverflow.com/a/1221844/431296
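One timing caveat (my addition, not from the linked answer): tee runs in the background, so a following command that reads err immediately may see a partially written file; capturing the status and waiting first avoids that:

gcc -Wall $$f.c -o $$f > named_pipe 2>&1
status=$?    # capture gcc's exit status right away
wait         # let the background tee finish writing err
echo $status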
The question's sample problem requires:
I basically want a FAILURE status if the first part of the ... group of commands fail:
Install moreutils, and try the mispipe util, which returns the exit status of the first command in a pipe:
sudo apt install moreutils
Then:
if mispipe "gcc -Wall $$f.c -o $$f 2>&1" "tee err" ; then \
	./$$f ; \
else \
	vim -o $$f.c err ; \
fi
While 'mispipe' does the job here, it is not an exact duplicate of the bash shell's pipefail; from man mispipe:
Note that some shells, notably bash, do offer a
pipefail option, however, that option does not
behave the same since it makes a failure of any
command in the pipeline be returned, not just the
exit status of the first.
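That difference is easy to demonstrate with placeholder commands (assuming moreutils is installed):

$ mispipe "false" "true"; echo $?
1
$ mispipe "true" "false"; echo $?
0

Under bash's pipefail, the second pipeline would instead report failure.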
So I am trying to use pv to create a progress bar for various commands (e.g. tar). I am running these commands in a Ruby script. The problem is that since pv is the last command in the pipe chain, it absorbs all the errors.
e.g.:
result = `tar -cpz testDir 2>&1 | pv -pterb > testTar.tar.gz`
The above command will not return any error if tar fails (e.g. if the directory runs out of space), because the error is absorbed by the pv command. Any ideas?
Right, normally the last command counts. You need the pipefail option.
$ sh -c ' false | true'; echo $?
0
$ sh -c 'set -o pipefail; false | true'; echo $?
1
There is no simple way to duplicate pipefail in pure POSIX sh, but I have noticed that both bash and the generally-true-to-POSIX dash(1) implement it.
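If your dash is recent enough (pipefail support was added to dash only in relatively recent releases, so this is version-dependent), the same test succeeds there too:

$ dash -c 'set -o pipefail; false | true'; echo $?
1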