How to get exit codes for different sections of a command in bash

Let's say I have a line in my bash script with ssh bad@location "find . -name 'fruit.txt' | grep Apple", and I'm trying to retrieve the exit codes of ssh, find . -name 'fruit.txt', and grep Apple to see which command went bad.
So far, I've tried something like echo $? ${PIPESTATUS[0]} ${PIPESTATUS[1]}, but it looks like $? returns the same thing as ${PIPESTATUS[0]} in this case. I only need to return the first non-zero exit code along with dmesg for debugging purposes.
I've also considered using set -o pipefail, which will return a failure exit code if any command errors, but I'd like to somehow know which command failed for debugging.
I'd like to either get an exit code of 255 (from ssh) and its corresponding dmesg output, or somehow get all of the exit codes.

ssh only returns one exit status (per channel) to the calling shell; if you want the exit status of the individual pipeline components it ran remotely, you need to collect them remotely, send them along with the data, and then parse them back out. One way to do that, if you have a very new version of bash, is like so:
#!/usr/bin/env bash
# note <<'EOF' not just <<EOF; with the former, the local shell does not munge
# heredoc contents.
remote_script=$(cat <<'EOF'
tempfile=$(mktemp "${TMPDIR:-/tmp}/output.XXXXXX"); mktemp_rc=$?
find -name 'fruit.txt' | grep Apple >"$tempfile"
printf '%s\0' "$mktemp_rc" "${PIPESTATUS[@]}"
cat "$tempfile"
rm -f -- "$tempfile"
exit 0 # so a bad exit status will be from ssh itself
EOF
)
# note that collecting a process substitution PID needs bash 4.4!
exec {ssh_fd}< <(ssh bad@location "$remote_script" </dev/null); ssh_pid=$!
IFS= read -r -d '' mktemp_rc <&$ssh_fd # read $? of mktemp
IFS= read -r -d '' find_rc <&$ssh_fd # read $? of find
IFS= read -r -d '' grep_rc <&$ssh_fd # read $? of grep
cat <&$ssh_fd # spool output of grep to our own output
wait "$ssh_pid"; ssh_rc=$? # let ssh finish and read its $?
echo "mktemp exited with status $mktemp_rc" >&2
echo "find exited with status $find_rc" >&2
echo "grep exited with status $grep_rc" >&2
echo "ssh exited with status $ssh_rc" >&2
How does this work?
exec {fd_var_name}< <(...) uses the bash 4.1 automatic file descriptor allocation feature to generate a file descriptor number, and associate it with content read from the process substitution running ....
In bash 4.4 or newer, process substitutions also set $!, so their PIDs can be captured, to later wait for them and collect their exit status; this is what we're storing in ssh_pid.
IFS= read -r -d '' varname reads from stdin up to the next NUL (with read -d '', the delimiter is the first character of the empty string; in a C-derived language, the first byte of an empty string is its NUL terminator).
This could theoretically be made easier by writing the output before the exit status values -- you wouldn't need a temporary file on the remote machine that way -- but the caveat there is that if there were a NUL anywhere in the find | grep output, then some of that output could be picked up by the reads. (Similarly, you could store output in a variable instead of a temporary file, but again, that would destroy any NULs in the stream's output).
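To see the NUL-delimited handshake in isolation, here is a minimal local sketch (no ssh; printf stands in for the remote script). It assumes bash 4.1+ for automatic fd allocation:
#!/usr/bin/env bash
# printf emits three fake status values, NUL-terminated, then the payload
exec {fd}< <(printf '%s\0' 0 1 2; echo "payload")
IFS= read -r -d '' a <&"$fd"
IFS= read -r -d '' b <&"$fd"
IFS= read -r -d '' c <&"$fd"
cat <&"$fd"               # prints: payload
echo "statuses: $a $b $c" # prints: statuses: 0 1 2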

Related

Detecting exit status on process substitution

I'm currently using bash 4.1, and I'm using a function to perform an svn cat on a repository file. After that, it iterates over each line to perform some transformations (mostly concatenations and such). If said file does not exist, the script should stop with an error message. The script is as follows:
function getFile {
    svnCat=`svn cat (file) 2>&1`
    if [[ -n $(echo "$svnCat" | grep "W160013") ]]; then # W160013 is the code SVN prints to stderr when a file doesn't exist
        echo "File doesn't exist" >&2
        exit 1
    else
        echo "$svnCat" | while read -r; do
            #Do your job
        done
    fi
}
function processFile {
    while read -r; do
        #do stuff
    done < <(getFile)
    #do even more stuff
}
However, in situations where a file does not exist, the error message is printed once but the script keeps executing. Is there a way to detect that the while loop failed, and stop the script completely?
I can't use the set -e option, since I need to delete some files that were created in the process.
Update: I've tried to add || exit after the done command as follows:
function processFile {
    while read -r; do
        #do stuff
    done || exit 1 < <(getFile)
}
However, the script then waits for user input, and when I press enter, it executes the content of the while loop.
Tracking exit status from a process substitution is tricky, and requires a very modern version of bash (4.4 or newer, per the note above). Prior to that, $! after the <(getFile) will not be populated with the substitution's PID, and so the wait will fail (or, worse, refer to a previously-started subprocess).
#!/usr/bin/env bash
### If you *don't* want any transforms at this stage, eliminate getFile entirely
### ...and just use < <(svn cat "$1") in processFile; you can/should rely on svn cat itself
### ...to have a nonzero exit status in the event of *any* failure; if it fails to do so,
### ...file a bug upstream.
getFile() {
    local content
    content=$(svn cat "$1") || exit # pass through exit status of failed svn cat
    while read -r line; do
        echo "Generating a transformed version of $line"
    done <<<"$content"
}
processFile() {
    local getFileFd getFilePid line
    # start a new process running getFile; record its pid to check exit status later
    exec {getFileFd}< <(getFile "$1"); getFilePid=$!
    # actual loop over received content
    while IFS= read -r line; do
        echo "Retrieved line $line from process $getFilePid"
    done <&"$getFileFd"
    # close the FIFO from the subprocess
    exec {getFileFd}<&-
    # then use wait to wait for it to exit and collect its exit status
    if wait "$getFilePid"; then
        echo "OK: getFile reports success" >&2
    else
        getFileRetval=$?
        echo "ERROR: getFile returned exit status $getFileRetval" >&2
    fi
}
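A hypothetical invocation, for illustration only (the repository URL is made up):
processFile "https://svn.example.com/repo/trunk/config.txt"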

Not able to fetch the exit status of a multiple commands (separated by PIPE) which got assigned to a variable

Below is a sample script which I am trying to execute, but it fails to fetch the exit status of $cmd. Is there any other way to fetch its exit status?
cmd="curl -mddddddd google.com"
status=$($cmd | wc -l)
echo ${PIPESTATUS[0]}
I know that if I replace status=$($cmd | wc -l) with $cmd | wc -l, I can fetch the exit status of $cmd using PIPESTATUS. But in my case I have to assign the output to a variable (for example, status in the case above).
What you're assigning to the status variable is not a status, but what the $cmd | wc -l pipeline prints to standard output.
Why do you echo anyway? Try realstatus=${PIPESTATUS[0]}.
EDIT (After some digging and RTFMing...):
Just realstatus=${PIPESTATUS[0]} doesn't help here, since the command substitution $(...) in your code is done "in a subshell environment", while PIPESTATUS describes "the most-recently-executed foreground pipeline" of the current shell.
If what you're trying to do in this particular case is to ensure the curl (aka $cmd) command was successful in the pipeline, you should probably make use of the pipefail option (see here).
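A minimal sketch of the pipefail approach (the curl URL is illustrative):
set -o pipefail
status=$(curl -fsS https://example.com | wc -l)
rc=$? # with pipefail, non-zero if either curl or wc failed
echo "line count: $status, pipeline exit status: $rc"
set +o pipefail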
If the output of the command is text and not excessively large, the simplest way to get the status of the command is to not use a pipe:
cmd_output=$($cmd)
echo "'$cmd' exited with $?"
linecount=$(wc -l <<<"$cmd_output")
echo "'wc' exited with $?"
What counts as "excessively large" depends on the system, but I successfully tested the code above with a command that generated 50 megabytes (over one million lines) of output on an old Linux system.
If the output of the command is too big to store in memory, another option is to put it in a temporary file:
$cmd >tmpfile
echo "'$cmd' exited with $?"
linecount=$(wc -l <tmpfile)
echo "'wc' exited with $?"
You need to be careful when using temporary files though. See Creating temporary files in Bash and How create a temporary file in shell script?.
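As a sketch of a safer pattern, assuming mktemp is available (some_command is a stand-in for your real command):
tmpfile=$(mktemp) || exit 1
trap 'rm -f -- "$tmpfile"' EXIT # clean up the file even on an early exit
some_command >"$tmpfile"
echo "'some_command' exited with $?"
linecount=$(wc -l <"$tmpfile")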
Note that, as with the OP's example code, the unquoted $cmd in the code examples above is dangerous. It should not be used in real code.
If you just want to echo the pipe status, you can redirect that to stderr. But you have to do it in the subshell.
status=$($cmd | wc -l; echo ${PIPESTATUS[0]} >&2)
Or you can capture both variables from the subshell using read
read -rd $'\0' status pstatus <<<$($cmd | wc -l; echo ${PIPESTATUS[0]})

Safe shell redirection when command not found

Let's say we have a text file named text (it doesn't matter what it contains) in the current directory. When I run the command (in Ubuntu 14.04, bash version 4.3.11):
nocommand > text # make sure nocommand doesn't exist on your system
It reports a 'command not found' error and erases the text file! I just wonder if I can avoid clobbering the file when the command doesn't exist.
I tried set -o noclobber, but the same problem happens if I run:
nocommand >| text # make sure nocommand doesn't exist on your system
It seems that bash performs the redirection before looking for the command to run. Can anyone give me some advice on how to avoid this?
Actually, the shell first looks at the redirection and creates the file. It evaluates the command after that.
Thus what happens exactly is: Because it's a > redirection, it first replaces the file with an empty file, then evaluates a command which does not exist, which produces an error message on stderr and nothing on stdout. It then stores stdout in this file (which is nothing so the file remains empty).
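You can observe this ordering directly (nosuchcommand stands for any nonexistent command):
echo hello > text
nosuchcommand > text # bash: nosuchcommand: command not found
wc -c text           # prints "0 text": the redirection already truncated the file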
I agree with Nitesh that you simply need to check if the command exists first, but according to this thread, you should avoid using which. I think a good starting point would be to check at the beginning of your script that you can run all the required functions (see the thread, 3 solutions), and abort the script otherwise.
Write to a temporary file first, and only move it into place over the desired file if the command succeeds.
nocommand > tmp.txt && mv tmp.txt text
This avoids errors not only when nocommand doesn't exist, but also when an existing command exits before it can finish writing its output, so you don't overwrite text with incomplete data.
With a little more work, you can clean up the temp file in the event of an error.
{ nocommand > tmp.txt || { rm tmp.txt; false; }; } && mv tmp.txt text
The inner command group ensures that the exit status of the outer command group is non-zero so that even if the rm succeeds, the mv command is not triggered.
A simpler command that carries the slight risk of removing the temp file when nocommand succeeds but the mv fails is
nocommand > tmp.txt && mv tmp.txt text || rm tmp.txt
This would write to the file only if the pipe sends at least a single character:
nocommand | (
    IFS= read -d '' -n 1 || exit
    exec >myfile
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
)
Or using a function:
function protected_write {
    IFS= read -d '' -n 1 || exit
    exec >"$1"
    [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
    exec cat
}
nocommand | protected_write myfile
Note that if the lastpipe option is enabled, you'll have to run it in a subshell:
nocommand | ( protected_write myfile )
Optionally, you can also have the function use a subshell by default:
function protected_write {
    (
        IFS= read -d '' -n 1 || exit
        exec >"$1"
        [[ -n $REPLY ]] && echo -n "$REPLY" || printf '\x00'
        exec cat
    )
}
( ) creates a subshell. A subshell is a fork and runs in a separate process space. In x | y, y also runs in a subshell by default, unless the lastpipe option (try shopt lastpipe) is enabled.
IFS= read -d '' -n 1 waits for a single character (see help read) and returns a zero exit code when it reads one, which bypasses the exit.
exec >"$1" redirects stdout to the file. This makes everything that would print to stdout print to the file instead.
Any character read, other than NUL, is stored in REPLY; that is why we printf '\x00' when REPLY is empty (meaning the character read was a NUL).
exec cat replaces the subshell's process with cat, which sends everything it receives to the file and finishes the remaining work (see help exec). A usage sketch follows below.
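A quick usage sketch (myfile and the sample data are illustrative):
printf '' | protected_write myfile  # no data arrives: myfile is never opened
echo hello | protected_write myfile # myfile now contains "hello"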
If you do:
set -o noclobber
then
invalidcmd > myfile
If myfile exists in the current directory, then you will get:
-bash: myfile: cannot overwrite existing file
Check using the "which" command
#!/usr/bin/env bash
command_name="npm2" # Add your command here
command=`which $command_name`
if [ -z "$command" ]; then # command not found: fall back
    echo "Command not found"
else # command exists: go ahead with your logic
    echo "$command"
fi

Pipe command output, but keep the error code [duplicate]

This question already has answers here:
Pipe output and capture exit status in Bash
How do I get the correct return code from a unix command line application after I've piped it through another command that succeeded?
In detail, here's the situation :
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}
When only the tar command fails, $? is still 0:
$ echo $?
0
And, what I'd like to see is:
$ tar -cEvhf - -I ${sh_tar_inputlist} 2>${sh_tar_error_file} | gzip -5 -c > ${sh_tar_file}
$ echo $?
1
Does anyone know how to accomplish this?
Use ${PIPESTATUS[0]} to get the exit status of the first command in the pipe.
For details, see http://tldp.org/LDP/abs/html/internalvariables.html#PIPESTATUSREF
See also http://cfajohnson.com/shell/cus-faq-2.html for other approaches if your shell does not support $PIPESTATUS.
Look at $PIPESTATUS which is an array variable holding exit statuses. So ${PIPESTATUS[0]} holds the exit status of the first command in the pipe, ${PIPESTATUS[1]} the exit status of the second command, and so on.
For example:
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}
$ echo ${PIPESTATUS[0]}
To print out all statuses use:
$ echo ${PIPESTATUS[@]}
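One caveat worth a sketch: PIPESTATUS is overwritten by the very next command, so copy it immediately if you need more than one element (the failing tar below is only an example):
tar -cf - no-such-dir | gzip -5 -c > /tmp/out.tgz
rcs=("${PIPESTATUS[@]}") # copy at once; any later command clobbers PIPESTATUS
echo "tar exited with ${rcs[0]}, gzip exited with ${rcs[1]}"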
Here is a general solution using only POSIX shell and no temporary files:
Starting from the pipeline:
foo | bar | baz
exec 4>&1
error_statuses=`((foo || echo "0:$?" >&3) |
                 (bar || echo "1:$?" >&3) |
                 (baz || echo "2:$?" >&3)) 3>&1 >&4`
exec 4>&-
$error_statuses contains the status codes of any failed processes, in random order, with indexes to tell which command emitted each status.
# if "bar" failed, output its status:
echo $error_statuses | grep '1:' | cut -d: -f2
# test if all commands succeeded:
test -z "$error_statuses"
# test if the last command succeeded:
echo $error_statuses | grep '2:' >/dev/null
As others have pointed out, some modern shells provide PIPESTATUS to get this info. In classic sh, it's a bit more difficult, and you need to use a fifo:
#!/bin/sh
TMPDIR=$( mktemp -d ) || exit
trap 'rm -rf "$TMPDIR"' 0
mkfifo "${FIFO=$TMPDIR/fifo}"
cmd1 > "$FIFO" &
cmd2 < "$FIFO"
wait $!
echo "The return value of cmd1 is $?"
(Well, you don't need to use a fifo. You can have the commands early in the pipe echo a status variable and eval that in the main shell, redirecting file descriptors all over the place and basically bending over backwards to check things, but using a fifo is much, much easier.)

Bash process substitution and exit codes

I'd like to turn the following:
git status --short && (git status --short | xargs -Istr test -z str)
which gets me the desired result of mirroring the output to stdout and doing a zero-length check on the result, into something closer to:
git status --short | tee >(xargs -Istr test -z str)
which unfortunately returns the exit code of tee (always zero).
Is there any way to get at the exit code of the substituted process elegantly?
[EDIT]
I'm going with the following for now, it prevents running the same command twice but seems to beg for something better:
OUT=$(git status --short) && echo "${OUT}" && test -z "${OUT}"
Look here:
$ echo xxx | tee >(xargs test -n); echo $?
xxx
0
$ echo xxx | tee >(xargs test -z); echo $?
xxx
0
and look here:
$ echo xxx | tee >(xargs test -z; echo "${PIPESTATUS[*]}")
xxx
123
$ echo xxx | tee >(xargs test -n; echo "${PIPESTATUS[*]}")
xxx
0
Is that what you wanted?
See also Pipe status after command substitution
I've been working on this for a while, and it seems that there is no way to do that with process substitution, except for resorting to inline signalling, and that can really be used only for input pipes, so I'm not going to expand on it.
However, bash-4.0 provides coprocesses which can be used to replace process substitution in this context and provide clean reaping.
The following snippet provided by you:
git status --short | tee >(xargs -Istr test -z str)
can be replaced by something alike:
coproc GIT_XARGS { xargs -Istr test -z str; }
{ git status --short | tee; } >&${GIT_XARGS[1]}
exec {GIT_XARGS[1]}>&-
wait ${GIT_XARGS_PID}
Now, for some explanation:
The coproc call creates a new coprocess, naming it GIT_XARGS (you can use any name you like), and running the command in braces. A pair of pipes is created for the coprocess, redirecting its stdin and stdout.
The coproc call sets two variables:
${GIT_XARGS[@]} containing file descriptors for the process' stdin and stdout, appropriately ([0] to read from its stdout, [1] to write to its stdin),
${GIT_XARGS_PID} containing the coprocess' PID.
Afterwards, your command is run and its output is directed to the second pipe (i.e. the coprocess' stdin). The cryptic-looking >&${GIT_XARGS[1]} part expands to something like >&60, which is a regular output-to-fd redirection.
Please note that I needed to put your command in braces. This is because a pipeline causes subprocesses to be spawned, and they don't inherit file descriptors from the parent process. In other words, the following:
git status --short | tee >&${GIT_XARGS[1]}
would fail with an invalid file descriptor error, since the relevant fd exists in the parent process and not in the spawned tee process. Putting the pipeline in braces causes bash to apply the redirection to the whole pipeline.
The exec call is used to close the pipe to your coprocess. When you used process substitution, the process was spawned as part of output redirection and the pipe to it was closed immediately after the redirection no longer had effect. Since coprocess' pipe's lifetime extends beyond a single redirection, we need to close it explicitly.
Closing the output pipe should cause the process to get EOF condition on stdin and terminate gracefully. We use wait to wait for its termination and reap it. wait returns the coprocess' exit status.
As a last note, please note that in this case, you can't use kill to terminate the coprocess since that would alter its exit status.
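Here is a minimal, self-contained sketch of the same reaping pattern (bash 4.0+; the grep is illustrative):
coproc CHECK { grep Apple >/dev/null; }  # coprocess reads from its stdin
check_in=${CHECK[1]}                     # copy the write-end fd into a scalar
printf 'Apple\nBanana\n' >&"$check_in"
exec {check_in}>&-                       # close its stdin so grep sees EOF
wait "$CHECK_PID"
echo "coprocess grep exited with $?"     # 0, since Apple was found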
#!/bin/bash
if read q < <(git status -s)
then
    echo "$q"
    exit
fi
