equivalent of pipefail in dash shell - bash

Is there some similar option in dash shell corresponding to pipefail in bash?
Or is there any other way of getting a non-zero status if one of the commands in the pipe fails (without exiting on it, as set -e would)?
To make it clearer, here is an example of what I want to achieve:
In a sample debugging makefile, my rule looks like this:
set -o pipefail; gcc -Wall $$f.c -o $$f 2>&1 | tee err; if [ $$? -ne 0 ]; then vim -o $$f.c err; else ./$$f; fi;
Basically it opens the error file and the source file on error, and runs the program when there is no error. Saves me some typing. The above snippet works well in bash, but my newer Ubuntu system uses dash, which doesn't seem to support the pipefail option.
I basically want a FAILURE status if the first part of the below group of commands fails:
gcc -Wall $$f.c -o $$f 2>&1 | tee err
so that I can use that for the if statement.
Are there any alternate ways of achieving it?
Thanks!

I ran into this same issue, and the bash options set -o pipefail and ${PIPESTATUS[0]} both failed in the dash shell (/bin/sh) on the Docker image I'm using. I'd rather not modify the image or install another package, but the good news is that using a named pipe worked perfectly for me =)
mkfifo named_pipe
tee err < named_pipe &                     # the reader runs in the background
gcc -Wall $$f.c -o $$f > named_pipe 2>&1   # the writer; its status is the one we want
echo $?                                    # exit status of gcc, not of tee
See this answer for where I found the info: https://stackoverflow.com/a/1221844/431296
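Adapted to the makefile rule from the question, this might look roughly like the following (an untested sketch; the wait makes sure tee has finished writing err before vim opens it):
mkfifo named_pipe; \
tee err < named_pipe & \
gcc -Wall $$f.c -o $$f > named_pipe 2>&1; status=$$?; wait; \
rm named_pipe; \
if [ $$status -ne 0 ]; then vim -o $$f.c err; else ./$$f; fi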

The question's sample problem requires:
I basically want a FAILURE status if the first part of the ... group of commands fail:
Install moreutils, and try the mispipe util, which returns the exit status of the first command in a pipe:
sudo apt install moreutils
Then:
if mispipe "gcc -Wall $$f.c -o $$f 2>&1" "tee err"; then \
./$$f; \
else \
vim -o $$f.c err; \
fi
While 'mispipe' does the job here, it is not an exact duplicate of the bash shell's pipefail; from man mispipe:
Note that some shells, notably bash, do offer a pipefail option, however, that option does not behave the same since it makes a failure of any command in the pipeline be returned, not just the exit status of the first.

Related

Why doesn't my script work on FreeBSD, even though it seems to work on Linux? It's as if FreeBSD ignores "if"

I am trying to write a portable installation script for building the compiler for my programming language. You can see the script here:
mkdir ArithmeticExpressionCompiler
cd ArithmeticExpressionCompiler
if command -v wget &> /dev/null
then
wget https://flatassembler.github.io/Duktape.zip
else
curl -o Duktape.zip https://flatassembler.github.io/Duktape.zip
fi
unzip Duktape.zip
if command -v gcc &> /dev/null
then
gcc -o aec aec.c duktape.c -lm # The linker that comes with recent versions of Debian Linux insists that "-lm" is put AFTER the source files, or else it outputs some confusing error message.
else
clang -o aec aec.c duktape.c -lm
fi
./aec analogClock.aec
if command -v gcc &> /dev/null
then
gcc -o analogClock analogClock.s -m32
else
clang -o analogClock analogClock.s -m32
fi
./analogClock
However, when I run it on FreeBSD, it complains that wget is not found. But the script checks whether wget exists before calling it. wget is not supposed to be called on FreeBSD. Now, I know FreeBSD uses sh rather than bash, and I suppose my script is not actually POSIX-compliant. So, what am I doing wrong?
From the POSIX Spec:
If a command is terminated by the control operator ( '&' ), the shell shall execute the command asynchronously in a subshell. This means that the shell shall not wait for the command to finish before executing the next command.
&> is not supported by POSIX; instead, the shell sees & as a background-command indicator, so your command is run asynchronously, and the next part, > /dev/null, is treated as a separate command. It is basically as if you were to run:
command -v wget & > /dev/null
Instead, you have to redirect another way:
command -v wget >/dev/null 2>&1
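Applied to the download step in the script above, the portable check would look something like this (same URL and commands as in the question; the other command -v tests would need the same change):
if command -v wget >/dev/null 2>&1
then
wget https://flatassembler.github.io/Duktape.zip
else
curl -o Duktape.zip https://flatassembler.github.io/Duktape.zip
fi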

How to parametrize a function call in bash without using eval?

I have the following code (simplified):
if ! sudo -u user command1 "$Options" -o1 -o2 2>>"$log" > "$Dir"/output;
However, in some cases (determined at run time, if a variable docker is set to true), I instead want to execute
if ! docker exec -t "$cont" command2 "$Options" -o1 -o2 2>>"$log" > "$Dir"/output;
What changes is the way to call the command (1 or 2). The rest of the parameters remain the same.
So I'd like to parametrize the call to command1 or command2.
Something like
if $docker then;
Command = docker exec -t "$cont" command2
else
Command = sudo -u user command1
if ! $Command "$Options" -o1 -o2 2>>"$log" > "$Dir"/output;
This does not work. Is it possible to achieve what I want without resorting to eval, which I understand to be a bad practice?
Thanks
You can use an array to hold the command-specific part, and rely on word splitting to build the entire command line to run. Something like
#!/usr/bin/env bash
# Set up your assorted variables used below
if $docker; then
cmd=(docker exec -t "$cont" command2)
else
cmd=(sudo -u user command1)
fi
if ! "${cmd[#]}" "$Options" -o1 -o2 2>>"$log" > "$Dir"/output; then
# etc.
fi
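As a quick illustration of why the quoted "${cmd[@]}" expansion matters (a toy example, not from the question): each array element is expanded as a single word, so arguments containing spaces, such as "$cont", survive intact.
cmd=(printf '%s\n' "two words")
"${cmd[@]}" extra   # runs: printf '%s\n' "two words" extra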

Unable to capture command exit code in makefile

I'm trying to set up my first makefile and am hitting a block at step 1. In my shell script, I did this:
which brew | grep 'brew not found' >/dev/null 2>&1
if [ $? == 0 ]; then
xcode-select --install
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
fi
This worked just fine as a bash script. After some googling, for a Makefile, I've so far come up with this one command:
BREW_INSTALLED = $(shell which brew | grep 'brew not found' >/dev/null 2>&1; echo $$?)
However, running it gets me
make: BREW_INSTALLED: No such file or directory
I'm equally unsure when I should be adding @ to a command (seems like anything I don't want to output?).
I'm currently on GNU Make 3.81.
There are several oddities in this line:
BREW_INSTALLED = $(shell which brew | grep 'brew not found' >/dev/null 2>&1; echo $$?)
In case of success, which writes its output to stdout; in case of failure, to stderr. You are trying to capture the error message on stdout.
To feed the stderr of which to grep, you would need to write
which brew 2>&1 >/dev/null | grep 'brew not found'
(The order of 2>&1 and > also matters).
But you should not rely on the specific error message of which.
But you already get the return code you want from which, so you don't need grep at all.
Which returns the number of failed arguments, or -1 when no `programname' was given.
https://linux.die.net/man/1/which
Consider using grep -q 'expression' to suppress output instead of redirecting stdout and stderr.
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected.
https://linux.die.net/man/1/grep
And the error message you get has nothing to do with what I'm writing above. It means the shell is trying to run BREW_INSTALLED as a command, which probably means make is passing the line to a new shell, i.e. treating it as a recipe line.
Maybe you wrote it after a tab? See https://www.gnu.org/software/make/manual/html_node/Recipe-Syntax.html
To capture the return code (as string!):
BREW_INSTALLED := $(shell which brew >/dev/null 2>&1; echo $$?)
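That variable can then be tested like any other make string, for example (a small sketch building on the line above; "0" means which found brew):
ifeq ($(BREW_INSTALLED),0)
$(info brew is installed)
else
$(info brew is not installed)
endif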
A typical makefile would check the presence of needed tools like this:
BREW := $(shell which brew)
# Check if variable brew is empty
ifeq ($(BREW),)
$(error brew not found)
else
$(info brew found: $(BREW))
endif
all:
	@echo "Do something with brew"
	$(BREW) --version
Note: There must be no tabs at the start of the two lines inside the ifeq/else block.
The two recipe lines of the all rule have to be indented with tabs.
The @ at the beginning of a recipe suppresses echoing: https://www.gnu.org/software/make/manual/html_node/Echoing.html

How to get error output of command that is piped through "pv" command

So I am trying to use pv to create a progress bar for various commands (e.g. tar). I am running these commands in a Ruby script. The problem is that since pv is the last command in the pipe chain, it absorbs all the errors.
ie.
result = `tar -cpz testDir 2>&1 | pv -pterb > testTar.tar.gz`
The above command will not return any error status if it fails (e.g. by running out of space in the directory), because the failure is absorbed by the pv command. Any ideas?
Right, normally the last command counts. You need the pipefail option.
$ sh -c ' false | true'; echo $?
0
$ sh -c 'set -o pipefail; false | true'; echo $?
1
There is no simple way to duplicate pipefail in pure POSIX, but I have noticed that both bash and the generally-true-to-POSIX dash(1) implement it.
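Applied to the command from the question, that means running the pipeline under a shell that supports pipefail, for example (a sketch; the same string can go inside the Ruby backticks, and $?.exitstatus will then be non-zero when tar fails):
bash -c 'set -o pipefail; tar -cpz testDir 2>&1 | pv -pterb > testTar.tar.gz'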

Find out which shell PHP is using

I'm trying to execute piped shell commands like this
set -o pipefail && command1 | command2 | command3
from a PHP script. The set -o pipefail part is to make the pipe break as soon as any of the commands fails. But the command results in this:
sh: 1: set: Illegal option -o pipefail
whereas it runs fine from the terminal. Maybe explicitly specifying which shell the PHP CLI should use (i.e. /bin/bash) when executing shell commands could solve the problem, or is there a better way out?
You can always run bash -c 'set -o pipefail && command1 | command2 | command3' instead.
You can find it out by doing:
echo `echo $SHELL`;
