Let's imagine I have a bash script, where I call this:
bash -c "some_command"
do something with the exit code of some_command here
Is it possible to obtain the exit code of some_command? I'm not executing some_command directly in the shell running the script because I don't want to alter its environment.
$? will contain the return code of some_command just as usual.
Of course it might also contain a code from bash, in case something went wrong before your command could even be executed (wrong filename, whatnot).
Here's an illustration of $? and the parenthesized subshell mentioned by Paggas and Matti:
$ (exit a); echo $?
-bash: exit: a: numeric argument required
255
$ (exit 33); echo $?
33
In the first case, the code is a Bash error and in the second case it's the exit code of exit.
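Applied directly to the pattern from the question, a minimal sketch (some_command stands for whatever you actually run):
bash -c "some_command"
rc=$?
echo "some_command exited with $rc"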
You can use the $? variable; check out the bash documentation for this. It stores the exit status of the last command.
Also, you might want to check out bash's parenthesized command blocks (e.g. comm1 && (comm2 || comm3) && comm4); they are always executed in a subshell, so they don't alter the current environment, and they are more powerful as well!
EDIT: For instance, when using ()-style blocks as compared to bash -c 'command', you don't have to worry about escaping any argument strings with spaces, or any other special shell syntax. You use the shell syntax directly; it's a normal part of the rest of the code.
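For example, here is a minimal sketch of running a command in a parenthesized block so the calling script's environment (working directory, variables) is left untouched; /some/dir and some_command are placeholders:
(cd /some/dir && some_command)
rc=$?
echo "subshell exited with $rc"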
I am writing a bash script in which I want to decide whether to execute command A or command B based on the return value of the command git worktree remove.
Say, if the worktree was successfully deleted, then I will execute command A.
If the worktree name mentioned in the command was wrong, or if git worktree remove fails for any other reason, then I want to execute command B.
So I have come up with the logic below:
.
.
.
wt_del=$(git worktree remove -f $DIR)
echo $wt_del ---> for debugging script
if [ $wt_del -eq 0]
then
git branch
read -p "Enter the branch : " BUG
git branch -d $BUG
else
echo "Failed to remove worktree $DIR"
.
.
.
When I run this script with an invalid worktree name, I see output like: wt_del =
So it seems that the git worktree remove command is not returning any integer value to indicate success or failure.
So how could I make my decision?
You should check the exit code right after running the process. Check variable $?. If it's not 0, then there was an error.
git blahblah
if [ $? -ne 0 ]; then
echo there was a problem
fi
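Applied to the command from the question (a sketch; $DIR is the asker's own variable):
git worktree remove -f "$DIR"
if [ $? -ne 0 ]; then
    echo "Failed to remove worktree $DIR"
fi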
This is a general shell programming issue (not specific to Git at all, and generic across many POSIX-style shells including sh, bash, zsh, and more). The syntax for running a command is straightforward:
cmd arg1 arg2 ... argN
The output from this command goes to the standard output. Wrapping the command in dollar-sign-prefixed parentheses and assigning this to a variable with:
var=$(cmd arg1 arg2 ... argN)
tells the shell to capture the output (by redirecting the command's standard output somewhere, then reading this output). The command's standard error stream is not affected, but using:
var=$(cmd ... 2>&1)
will cause the parenthesized sub-command to send its own standard error output to the same place its own standard output is already going, which is to say, the outer shell that is collecting it up. So if you have a command that prints to both stdout and stderr, you can collect both outputs.
This is not what you wanted to do. You wanted to save the exit status of the command. To do that, as eftshift0 notes, you need to use the $? pseudo-variable:
cmd arg1 arg2 ... argN
status=$?
Having captured the exit status in an ordinary shell variable, you can now test it repeatedly, or—if your shell supports arithmetic—do arithmetic with the status, or whatever you like.
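For instance, a minimal sketch of saving and reusing the status (some_command is a placeholder):
some_command arg1 arg2
status=$?
if [ "$status" -ne 0 ]; then
    echo "some_command failed with status $status" >&2
fi
# the saved value can be tested again later, or used in arithmetic
[ "$status" -gt 1 ] && echo "status was greater than 1"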
As phd notes in a comment, though, if you just want to inspect this status once, for zero-vs-non-zero, the way to do that is with the shell's if ...; then ...; else ...; fi construct:
if cmd arg1 arg2 ... argN
then
# stuff to do if the exit status was zero
else
# stuff to do if the exit status was nonzero
fi
Note that the then keyword must occur in the spot where a command would normally be found: that's why it's on a separate line. You can use an entire pipeline in an if construct, but only the last command's exit status matters:1
if cmd1 | cmd2 | cmd3; then
echo cmd3 exited zero
else
echo cmd3 exited nonzero
fi
Here then is on the same line, but we use a semicolon to make sure it appears where a standalone command would. The semicolon terminates the three-command pipeline.
Since you wrote:
wt_del=$(git worktree remove -f $DIR)
you captured the standard output of git worktree remove here, which is not really useful. You then have:
if [ $wt_del -eq 0]
which has a syntactic glitch: the ] should be separated from the zero by white-space. The [ program demands that its last argument be ]. The variant built into bash thus complains:
bash$ [ $wt_del -eq 0]
bash: [: missing `]'
This suggests you re-typed your actual script when converting it for posting purposes, since you didn't mention any such complaint. That's a recipe for getting useless answers. Note that since $wt_del might be empty, it should be quoted; fixing the above produces:
bash$ [ $wt_del -eq 0 ]
bash: [: -eq: unary operator expected
Fixing *that* in turn produces the "correct" error:
bash$ [ "$wt_del" -eq 0 ]
bash: [: : integer expression expected
but in general here you want if git ... constructs. Most Git commands endeavor to produce a useful exit status (which brings us back to git, finally)—but these details are sometimes not documented, so you have to test and then hope that the results you get for your version are general to all versions.
If Git calls something a plumbing command, it's meant for use in scripts, and should have a fully reliable way to be used from scripts, including both predictable, machine-parse-ready output and a predictable and useful exit status. Unfortunately git worktree is not a plumbing command, but it does have a --porcelain option for its list sub-command; --porcelain indicates that it's being used in "plumbing mode". So it should have a reliable exit status.
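Putting that together for the script in the question, a sketch using the asker's $DIR and $BUG variables:
if git worktree remove -f "$DIR"
then
    git branch
    read -p "Enter the branch : " BUG
    git branch -d "$BUG"
else
    echo "Failed to remove worktree $DIR" >&2
fi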
1Bash has a setting (pipefail) to adjust this behavior. Using it can be powerful, but it decreases the portability of your shell script. Bash's PIPESTATUS array gives the most control of all but is annoying to use.
In the original Bourne shell, this last-command-status-matters trick was arranged by having the if ... then construct run the entire ... section in a sub-shell. If the sub-shell had just one command to run, it ran it with an execve system call. If it had a pipeline, it would spawn a sub-sub-shell to run each of the additional parts of the pipeline, then run the last command directly with an execve system call. This meant the last command in a pipeline was the parent process of an earlier command! Some badly-written commands were tripped up by this, sometimes. Bash can still do this, via the lastpipe setting.
If I use set -e in a script, the script continues after an error has occurred in a statement that executes two commands with &&.
For example:
set -e
cd nonexistingdirectory && echo "&&"
echo continue
This gives the following output:
./exit.sh: line 3: cd: nonexistingdirectory: No such file or directory
continue
I want the script to exit after cd nonexistingdirectory and stop.
How can I do this?
**Edit**
I have multiple scripts using && that I need to fix to make sure they exit upon error, but I want a minimum-impact, low-risk solution. I will try the solution mentioned in the comments: replace && with ; combined with set -e.
This is by design. If you use &&, bash assumes you want to handle errors yourself, so it doesn't abort on failure in the first command.
Possible fix:
set -e
cd nonexistingdirectory
echo "&&"
echo continue
Now there are only two possibilities:
cd succeeds and the script continues as usual.
cd fails and bash aborts execution because of set -e.
The problem here is your && command.
When a command that returns an error is executed together (&&) with another command, set -e does not take effect.
If you explain the real use case in more detail, we could find a workaround that fits your needs.
set [+abefhkmnptuvxBCEHPT] [+o option] [arg ...]
-e      Exit immediately if a pipeline (which may consist of a single simple command), a subshell command enclosed in parentheses, or one of the commands executed as part of a command list enclosed by braces (see SHELL GRAMMAR above) exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of any command executed in a && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted with !.
from man bash
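A short demonstration of that rule, using false as the failing command:
set -e
false && echo "not printed"    # false fails, but it is not the command after the final &&, so the script continues
echo "still running"
false                          # a bare failing command does trigger set -e
echo "never reached"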
Suppose I'm using a shell like bash or zsh, and also suppose that I have a command which writes to stdout. I want to capture the output of the command into a shell variable, and also to capture the command's return code into another shell variable.
I know I can do something like this ...
command >tempfile
rc=$?
output=$(cat tempfile)
Then, I have the return code in the 'rc' shell variable and the command's output in the 'output' shell variable.
However, I want to do this without using any temporary files.
Also, I could do this to get the command output ...
output=$(command)
... but then, I don't know how to get the command's return code into any shell variable.
Can anyone suggest a way to get both the return code and the command's output into two shell variables without using any files?
Thank you very much.
Just capture $? as you did before. Using the executables true and false, we can demonstrate that command substitution does set a proper return code:
$ output=$(true); rc=$?; echo $rc
0
$ output=$(false); rc=$?; echo $rc
1
Multiple command substitutions in one assignment
If more than one command substitution appears in an assignment, the return code of the last command substitution determines the return code of the assignment:
$ output="$(true) $(false)"; rc=$?; echo $rc
1
$ output="$(false) $(true)"; rc=$?; echo $rc
0
Documentation
From the section of man bash describing variable assignment:
If one of the expansions contained a command substitution, the exit
status of the command is the exit status of the last command
substitution performed. If there were no command substitutions, the
command exits with a status of zero.
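So for the case in the question, a minimal sketch (command is the asker's placeholder for the real program):
output=$(command)
rc=$?
echo "rc=$rc"
echo "output=$output"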
Editing this post; the original is at the bottom, beneath the "Thanks!"
command='a.out arg1 arg2 &'
eval ${command}
if [ $? -ne 0 ]; then
printf "Command \'${command}\' failed\n"
exit 1
fi
wait
Here is a test script that demonstrates the problem, which I oversimplified
in the original post. Notice the ampersand in line 2 and the wait command.
These more faithfully represent my script. In case it matters, the ampersand
is sometimes there and sometimes not; its presence is determined by a user-
specified flag that indicates whether or not to background a long arithmetic
calculation. And, also in case it matters, I'm actually backgrounding many
(twelve) processes, i.e., ${command[0..11]}. I want the script to die if any
fail. I use 'wait' to synchronize the successful return of all processes.
Happy (sort of) to use another approach but this almost works.
The ampersand (for backgrounding) seems to cause the problem.
When ${command} omits the ampersand, the script runs as expected:
The executable a.out is not found, a complaint to that effect is issued,
and $? is non-zero so the host script exits. When ${command} includes
the ampersand, the same complaint is issued but $? is zero so the
script continues to execute. I want the script to die immediately when
a.out fails but how do I obtain the non-zero return value from a
backgrounded process?
Thanks!
(original post):
I have a bash script that uses commands of the form
eval ${command}
if [ $? -ne 0 ]; then
printf "Command ${command} failed"
exit 1
fi
where ${command} is a string of words, e.g., "a.out arg1 ... argn".
The problem is that the return code from eval (i.e., $?) is always
zero even when ${command} fails. Removing the "eval" from the above
snippet allows the correct return code ($?) to be returned and thus
halt the script. I need to keep my command string in a variable
(${command}) in order to manipulate it elsewhere, and simply running
${command} without the eval doesn't work well for other reasons. How do I catch the
correct return code when using eval?
Thanks!
Charlie
The ampersand (for backgrounding) seems to cause the problem.
That is correct.
The shell cannot know a command's exit code until the command completes. When you put a command in background, the shell does not wait for completion. Hence, it cannot know the (future) return status of the command in background.
This is documented in man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
In other words, the return code after putting a command in background is always 0 because the shell cannot predict the future return code of a command that has not yet completed.
If you want to find the return status of commands in the background, you need to use the wait command.
Examples
The command false always sets a return status of 1:
$ false ; echo status=$?
status=1
Observe, though, what happens if we background it:
$ false & echo status=$?
[1] 4051
status=0
The status is 0 because the command was put in background and the shell cannot predict its future exit code. If we wait a few moments, we will see:
$
[1]+ Exit 1 false
Here, the shell is notifying us that the background task completed and its return status was just as it should be: 1.
In the above, we did not use eval. If we do, nothing changes:
$ eval 'false &' ; echo status=$?
[1] 4094
status=0
$
[1]+ Exit 1 false
If you do want the return status of a backgrounded command, use wait. For example, this shows how to capture the return status of false:
$ false & wait $!; echo status=$?
[1] 4613
[1]+ Exit 1 false
status=1
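For the asker's scenario of backgrounding several long-running commands and dying if any of them fails, a sketch along these lines could work; a.out and its arguments are the asker's, and the loop bounds are illustrative:
pids=()
for i in 0 1 2; do
    a.out "arg$i" &      # launch each calculation in the background
    pids+=("$!")         # remember its process ID
done
for pid in "${pids[@]}"; do
    if ! wait "$pid"; then
        printf 'background job %s failed\n' "$pid" >&2
        exit 1
    fi
done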
From the man page on my system:
eval [arg ...] The args are read and concatenated together into a single command. This command is then read and executed by the shell, and its exit status is returned as the value of eval. If there are no args, or only null arguments, eval returns 0.
If your system documentation is 'the same', then, most likely, whatever commands you are running are the problem, i.e. 'a.out' is returning '0' on exit instead of a non-zero value. Add appropriate 'exit return code' to your compiled program.
You might also try using $() which will 'run' your binary instead of 'evaluating' it..., i.e.
STATUS=$(a.out var var var)
As long as only one 'command' is in the stream, the value of $? is the 'exit code'; otherwise, $? is the return code for the last command in a multi-command 'pipe'...
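A quick way to see the man-page behavior described above, with false standing in for a failing command:
command='false'
eval ${command}
echo $?    # prints 1: eval returns the failing command's exit status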
:)
Dale
I use a Makefile (with GNU make running under Linux) to automate my grunt work when refactoring a Python script.
The script creates an output file, and I want to make sure that the output file remains unchanged in face of my refactorings.
However, I found no way to get the status code of a command to affect a subsequent shell if command.
The following rule illustrates the problem:
check-cond-codes:
diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$$!=
diff -q poalim report2008_4.csv; echo differences: =$$!=
The first 'diff' compares two equal files, and the second one compares two different files.
The output is:
diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$!=
no differences: ==
diff -q poalim report2008_4.csv; echo differences: =$!=
Files poalim and report2008_4.csv differ
differences: ==
So obviously '$$!' is the wrong variable to capture the status code of 'diff'.
Even using
SHELL := /bin/bash
at beginning of the Makefile did not solve the problem.
A variable returning the value I need would (if it exists at all) be used in an 'if' command in the real rule.
The alternative of creating a small ad-hoc shell script in lieu of writing all commands inline in the Makefile is undesirable, but I'll use it as a last resort.
Related:
How to make a failing shell command interrupt make
I think you're looking for the $? shell variable, which gives the exit code of the previous command. For example:
$ diff foo.txt foo.txt
$ echo $?
0
To use this in your makefile, you would have to escape the $, as in $$?:
all:
diff foo.txt foo.txt ; if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Do note that each command in your rule body in make is run in a separate subshell. For example, the following will not work:
all:
diff foo.txt foo.txt
if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Because the diff and the if commands are executed in different shell processes. If you want to use the exit status from the command, you must do so in the context of the same shell, as in my previous example.
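For the rule from the question, that might look something like this (filenames are the asker's; each diff and its if stay on one logical recipe line so they share a shell):
check-cond-codes:
diff report2008_4.csv report2008_4.csv-save-for-regression-testing ; if [ $$? -eq 0 ] ; then echo "no differences" ; else echo "differences" ; fi
diff -q poalim report2008_4.csv ; if [ $$? -eq 0 ] ; then echo "no differences" ; else echo "differences" ; fi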
Use '$$?' instead of '$$!' (thanks to 4th answer of Exit Shell Script Based on Process Exit Code)
Don't forget that each of your commands is being run in separate subshells.
That's why you quite often see something like:
my_target:
do something && \
do something else && \
do last thing
And when debugging, don't forget the ever-helpful -n option, which will print the commands but not execute them, and the -p option, which will show you the complete make environment, including where the various bits and pieces have been set.
HTH
cheers,
If you are passing the result code to an if, you could simply do:
all:
if diff foo.txt foo.txt ; then echo "no differences" ; fi
The bash variable is $?, but why do you want to print out the status code anyway?
Try `\$?'; I think the $$ is being interpreted by the makefile.