Not able to fetch the exit status of multiple commands (separated by a pipe) assigned to a variable - bash

Below is the sample script I am trying to execute, but it fails to fetch the exit status of $cmd. Is there any other way to fetch its exit status?
cmd="curl -mddddddd google.com"
status=$($cmd | wc -l)
echo ${PIPESTATUS[0]}
I know that if I replace status=$($cmd | wc -l) with $cmd | wc -l, I can fetch the exit status of $cmd using PIPESTATUS. But in my case I have to assign the result to a variable (status in the example above).

What you're assigning to the status variable is not a status, but what the $cmd | wc -l pipeline prints to standard output.
Why do you echo anyway? Try realstatus=${PIPESTATUS[0]}.
EDIT (After some digging and RTFMing...):
Just this -- realstatus=${PIPESTATUS[0]} -- doesn't seem to help, since the $(command_substitution) in your code is done "in a subshell environment", while PIPESTATUS is about "the most-recently-executed foreground pipeline".
If what you're trying to do in this particular case is to ensure the curl (aka $cmd) command was successful in the pipeline, you should probably make use of the pipefail option (see set -o pipefail in the bash manual).
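A minimal sketch of that approach, keeping the question's deliberately broken curl flag so the failure is visible:
set -o pipefail                 # a pipeline now reports the last non-zero status
cmd="curl -mddddddd google.com"
status=$($cmd | wc -l)          # the substitution's subshell inherits pipefail
echo "pipeline exited with $?"  # non-zero here means curl (or wc) failed
set +o pipefail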

If the output of the command is text and not excessively large, the simplest way to get the status of the command is to not use a pipe:
cmd_output=$($cmd)
echo "'$cmd' exited with $?"
linecount=$(wc -l <<<"$cmd_output")
echo "'wc' exited with $?"
What counts as "excessively large" depends on the system, but I successfully tested the code above with a command that generated 50 megabytes (over one million lines) of output on an old Linux system.
If the output of the command is too big to store in memory, another option is to put it in a temporary file:
$cmd >tmpfile
echo "'$cmd' exited with $?"
linecount=$(wc -l <tmpfile)
echo "'wc' exited with $?"
You need to be careful when using temporary files though. See Creating temporary files in Bash and How create a temporary file in shell script?.
Note that, as with the OP's example code, the unquoted $cmd in the code examples above is dangerous. It should not be used in real code.
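A common fix, sketched here with an illustrative timeout flag that is not from the question, is to hold the command in an array and expand it quoted:
# "${cmd[@]}" preserves each argument exactly, even arguments containing spaces
cmd=(curl -m 5 google.com)
cmd_output=$("${cmd[@]}")
echo "'${cmd[*]}' exited with $?"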

If you just want to echo the pipe status, you can redirect that to stderr. But you have to do it in the subshell.
status=$($cmd | wc -l; echo ${PIPESTATUS[0]} >&2)
Or you can capture both variables from the subshell using read
read -rd $'\0' status pstatus <<<$($cmd | wc -l; echo ${PIPESTATUS[0]})
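A usage sketch of the read variant (the curl flags are illustrative): read splits the two whitespace-separated words, so $status receives the line count and $pstatus receives curl's exit status.
cmd="curl -sS google.com"
# read returns non-zero (no NUL found), but both variables are set
read -rd $'\0' status pstatus <<<$($cmd | wc -l; echo "${PIPESTATUS[0]}")
echo "line count: $status, curl exit status: $pstatus"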

Related

How to get exit codes for different sections of a command in bash

Let's say I have a line in my bash script with ssh bad@location "find . -name 'fruit.txt' | grep 'Apple'" and I'm trying to retrieve the exit codes of ssh, find . -name 'fruit.txt', and grep 'Apple' to see which command went bad.
So far, I've tried something like echo $? ${PIPESTATUS[0]} ${PIPESTATUS[1]}, but it looks like $? returns the same thing as ${PIPESTATUS[0]} in this case. I only need to return the first non-zero exit code along with dmesg for debugging purposes.
I've also considered using set -o pipefail, which will return a failure exit code if any command errors, but I'd like to somehow know which command failed for debugging.
I'd like to either get an exit code of 255 (from ssh) and its corresponding dmesg, or somehow get all of the exit codes.
ssh only returns one exit status (per channel) to the calling shell; if you want to get exit status for the individual pipeline components it ran remotely, you need to collect them remotely, put them in with the data, and then parse them back out. One way to do that, if you have a very new version of bash, is like so:
#!/usr/bin/env bash
# note <<'EOF' not just <<EOF; with the former, the local shell does not munge
# heredoc contents.
remote_script=$(cat <<'EOF'
tempfile=$(mktemp "${TMPDIR:-/tmp}/output.XXXXXX"); mktemp_rc=$?
find . -name 'fruit.txt' | grep Apple >"$tempfile"
printf '%s\0' "$mktemp_rc" "${PIPESTATUS[@]}"
cat "$tempfile"
rm -f -- "$tempfile"
exit 0 # so a bad exit status will be from ssh itself
EOF
)
# note that collecting a process substitution PID needs bash 4.4!
exec {ssh_fd}< <(ssh bad@location "$remote_script" </dev/null); ssh_pid=$!
IFS= read -r -d '' mktemp_rc <&$ssh_fd # read $? of mktemp
IFS= read -r -d '' find_rc <&$ssh_fd # read $? of find
IFS= read -r -d '' grep_rc <&$ssh_fd # read $? of grep
cat <&$ssh_fd # spool output of grep to our own output
wait "$ssh_pid"; ssh_rc=$? # let ssh finish and read its $?
echo "mktemp exited with status $mktemp_rc" >&2
echo "find exited with status $find_rc" >&2
echo "grep exited with status $grep_rc" >&2
echo "ssh exited with status $ssh_rc" >&2
How does this work?
exec {fd_var_name}< <(...) uses the bash 4.1 automatic file descriptor allocation feature to generate a file descriptor number, and associate it with content read from the process substitution running ....
In bash 4.4 or newer, process substitutions also set $!, so their PIDs can be captured, to later wait for them and collect their exit status; this is what we're storing in ssh_pid.
IFS= read -r -d '' varname reads from stdin up to the next NUL (in read -d '', the first character of '' is treated as the end of input; as an empty string in a C-derived language, the first byte of the string is its NUL terminator).
This could theoretically be made easier by writing the output before the exit status values -- you wouldn't need a temporary file on the remote machine that way -- but the caveat there is that if there were a NUL anywhere in the find | grep output, then some of that output could be picked up by the reads. (Similarly, you could store output in a variable instead of a temporary file, but again, that would destroy any NULs in the stream's output).

string comparison in shell script

I have a scenario where I copy a file from one server to another, and before starting I need to check whether any scp is already in progress. I wrote a sample shell script, but the condition is never met even though the syntax is correct. The main problem: the output of the ps command gets stored in the variable scpstat and is compared against a matching string in the if statement, but inside the script the variable's value differs from what the same pipeline prints when run outside the script. Executing sh -x scpsamp.sh shows it formatted differently, with "sh" appended to the output. Yet when I skip ps and simply assign scpstat='scp', the condition matches. Am I doing anything wrong when capturing the output into the variable? Please help.
#!/bin/sh
scpstat=$(ps -ef | grep scp | egrep -v 'grep|ssh' | awk '{print $8}')
if [ "$scpstat" = "scp" ];
then
echo "SCP is in progress"
else
echo "No SCP in progress"
fi
It's notoriously difficult to extract information from the output of ps. If your system has pgrep, it's much easier:
if pgrep scp >/dev/null
then
echo "SCP is in progress"
else
echo "No SCP in progress"
fi
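One caveat: plain pgrep scp matches any process whose name contains "scp" (including, say, a script named scpsamp.sh). Adding -x requires an exact name match:
if pgrep -x scp >/dev/null
then
echo "SCP is in progress"
else
echo "No SCP in progress"
fi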

In unix, how to find out if a process is running and return true/false?

I'm writing a unix shell script and need to check if there are currently running processes with "xyz" in their directory. If yes, continue to the next command and show text like "Found It".
If not, don't continue and display text like "Process Not Found".
I tried something like this:
if ps -ef | grep xyz
then
echo "XYZ Process Found!"
else
echo "XYZ Process Not Found!"
fi
But it just shows me the processes and displays "process found" even when there's no xyz process.
I believe you want to check the output of the command against a value using command substitution. From the linked bash-hackers wiki: "The command substitution expands to the output of commands. These commands are executed in a subshell, and their stdout data is what the substitution syntax expands to." Also, count the lines and remove grep itself from the count. Something like,
if [[ $(ps -ef | grep xyz | grep -v grep | wc -l) != 0 ]]; then
echo "XYZ Process Found!"
else
echo "XYZ Process Not Found!"
fi
Edit
Based on the comments below, you should probably use
if [[ $(ps -ef | grep -c xyz) -ne 1 ]]; then
which is a lot easier to read.
When you run grep xyz, that process - grep xyz - is itself running and is therefore shown in the output of ps -ef.
That running process's command line contains xyz, so grep passes the line to output.
Hence you always get a zero exit status - i.e. success.
2 Solutions:
use if ps -ef | grep '[x]yz'; then (you may want to suppress grep output with -q; a complete sketch follows this list).
The grep command being run is grep [x]yz. This gets printed in ps -ef output.
Obviously, grep filters out this line: the literal string [x]yz would be matched by the pattern \[x\]yz, not by [x]yz.
use if pgrep -f xyz >/dev/null; then
Check man pgrep for more details..
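Putting solution 1 together, a sketch:
if ps -ef | grep -q '[x]yz'
then
echo "XYZ Process Found!"
else
echo "XYZ Process Not Found!"
fi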
You can also use pgrep. From pgrep(1):
pgrep looks through the currently running processes and lists the
process IDs which match the selection criteria to stdout.
[...]
EXIT STATUS
0 One or more processes matched the criteria.
1 No processes matched.
2 Syntax error in the command line.
3 Fatal error: out of memory etc.
Example output:
[~]% pgrep xterm
18231
19070
31727
You can use it in an if statement like so:
if pgrep xterm > /dev/null; then
echo Found xterm
else
echo xterm not found
fi
Note: pgrep is not a standard utility (ie. it's not in POSIX), but widely available on at least Linux and I believe most BSD systems.
# Prints "true" if any process matching xyz is running, "false" otherwise.
is_xyz_running() {
[ "$(pgrep xyz)" ] && echo true || echo false
}
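A usage sketch: since the function prints the word true or false, and both words are themselves commands, its output can be executed directly as the test.
if $(is_xyz_running); then
echo "xyz is running"
fi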

Pipe command output, but keep the error code [duplicate]

How do I get the correct return code from a unix command line application after I've piped it through another command that succeeded?
In detail, here's the situation (when only the tar command fails, $? is still 0):
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}
$ echo $?
0
And, what I'd like to see is:
$ tar -cEvhf - -I ${sh_tar_inputlist} 2>${sh_tar_error_file} | gzip -5 -c > ${sh_tar_file}
$ echo $?
1
Does anyone know how to accomplish this?
Use ${PIPESTATUS[0]} to get the exit status of the first command in the pipe.
For details, see http://tldp.org/LDP/abs/html/internalvariables.html#PIPESTATUSREF
See also http://cfajohnson.com/shell/cus-faq-2.html for other approaches if your shell does not support $PIPESTATUS.
Look at $PIPESTATUS which is an array variable holding exit statuses. So ${PIPESTATUS[0]} holds the exit status of the first command in the pipe, ${PIPESTATUS[1]} the exit status of the second command, and so on.
For example:
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}
$ echo ${PIPESTATUS[0]}
To print out all statuses use:
$ echo ${PIPESTATUS[@]}
Here is a general solution using only POSIX shell and no temporary files:
Starting from the pipeline:
foo | bar | baz
exec 4>&1
error_statuses=`((foo || echo "0:$?" >&3) |
(bar || echo "1:$?" >&3) |
(baz || echo "2:$?" >&3)) 3>&1 >&4`
exec 4>&-
$error_statuses contains the status codes of any failed processes, in random order, with indexes to tell which command emitted each status.
# if "bar" failed, output its status:
echo $error_statuses | grep '1:' | cut -d: -f2
# test if all commands succeeded:
test -z "$error_statuses"
# test if the last command succeeded:
echo $error_statuses | grep '2:' >/dev/null
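A deterministic toy run of the same pattern, with false standing in for a failing first command (cat and wc -l stand in for bar and baz):
exec 4>&1
error_statuses=`((false || echo "0:$?" >&3) |
(cat || echo "1:$?" >&3) |
(wc -l || echo "2:$?" >&3)) 3>&1 >&4`
exec 4>&-
echo "$error_statuses"   # prints "0:1": only the first command failed
(wc's own output, a line count of 0, still goes to the terminal via fd 4.)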
As others have pointed out, some modern shells provide PIPESTATUS to get this info. In classic sh, it's a bit more difficult, and you need to use a fifo:
#!/bin/sh
trap 'rm -rf $TMPDIR' 0
TMPDIR=$( mktemp -d )
mkfifo ${FIFO=$TMPDIR/fifo}
cmd1 > $FIFO &
cmd2 < $FIFO
wait $!
echo The return value of cmd1 is $?
(Well, you don't need to use a fifo. You can have the commands early in the pipe echo a status variable and eval that in the main shell, redirecting file descriptors all over the place and basically bending over backwards to check things, but using a fifo is much, much easier.)
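Adapting the fifo approach to the asker's tar | gzip case, a sketch reusing the question's tar flags and variables:
#!/bin/sh
trap 'rm -rf $TMPDIR' 0
TMPDIR=$( mktemp -d )
mkfifo ${FIFO=$TMPDIR/fifo}
tar -cEvhf - -I ${sh_tar_inputlist} > $FIFO &
gzip -5 -c < $FIFO > ${sh_tar_file}
wait $!                        # wait for the background tar
echo "tar exited with $?"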

Bash process substitution and exit codes

I'd like to turn the following:
git status --short && (git status --short | xargs -Istr test -z str)
which gets me the desired result of mirroring the output to stdout and doing a zero-length check on the result, into something closer to:
git status --short | tee >(xargs -Istr test -z str)
which unfortunately returns the exit code of tee (always zero).
Is there any way to get at the exit code of the substituted process elegantly?
[EDIT]
I'm going with the following for now, it prevents running the same command twice but seems to beg for something better:
OUT=$(git status --short) && echo "${OUT}" && test -z "${OUT}"
Look here:
$ echo xxx | tee >(xargs test -n); echo $?
xxx
0
$ echo xxx | tee >(xargs test -z); echo $?
xxx
0
and look here:
$ echo xxx | tee >(xargs test -z; echo "${PIPESTATUS[*]}")
xxx
123
$ echo xxx | tee >(xargs test -n; echo "${PIPESTATUS[*]}")
xxx
0
Is that what you're after?
See also Pipe status after command substitution
I've been working on this for a while, and it seems that there is no way to do that with process substitution, except for resorting to inline signalling, and that can really be used only for input pipes, so I'm not going to expand on it.
However, bash-4.0 provides coprocesses which can be used to replace process substitution in this context and provide clean reaping.
The following snippet provided by you:
git status --short | tee >(xargs -Istr test -z str)
can be replaced by something alike:
coproc GIT_XARGS { xargs -Istr test -z str; }
{ git status --short | tee; } >&${GIT_XARGS[1]}
exec {GIT_XARGS[1]}>&-
wait ${GIT_XARGS_PID}
Now, for some explanation:
The coproc call creates a new coprocess, naming it GIT_XARGS (you can use any name you like), and running the command in braces. A pair of pipes is created for the coprocess, redirecting its stdin and stdout.
The coproc call sets two variables:
${GIT_XARGS[@]} containing pipes to the process' stdin and stdout, appropriately ([0] to read from its stdout, [1] to write to its stdin),
${GIT_XARGS_PID} containing the coprocess' PID.
Afterwards, your command is run and its output is directed to the second pipe (i.e. the coprocess' stdin). The cryptic-looking >&${GIT_XARGS[1]} part is expanded to something like >&60, which is a regular output-to-fd redirection.
Please note that I needed to put your command in braces. This is because a pipeline causes subprocesses to be spawned, and they don't inherit file descriptors from the parent process. In other words, the following:
git status --short | tee >&${GIT_XARGS[1]}
would fail with an invalid file descriptor error, since the relevant fd exists in the parent process and not the spawned tee process. Putting it in braces causes bash to apply the redirection to the whole pipeline.
The exec call is used to close the pipe to your coprocess. When you used process substitution, the process was spawned as part of output redirection and the pipe to it was closed immediately after the redirection no longer had effect. Since coprocess' pipe's lifetime extends beyond a single redirection, we need to close it explicitly.
Closing the output pipe should cause the process to get EOF condition on stdin and terminate gracefully. We use wait to wait for its termination and reap it. wait returns the coprocess' exit status.
As a last note, please note that in this case, you can't use kill to terminate the coprocess since that would alter its exit status.
#!/bin/bash
# If git status -s produces any output at all, read succeeds:
# the working tree is dirty, so print the first line and stop.
if read q < <(git status -s)
then
echo "$q"
exit
fi
