Shell status codes in make - shell

I use a Makefile (with GNU make running under Linux) to automate my grunt work when refactoring a Python script.
The script creates an output file, and I want to make sure that the output file remains unchanged in face of my refactorings.
However, I found no way to capture the status code of a command and use it in a subsequent shell 'if' command.
The following rule illustrates the problem:
check-cond-codes:
diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$$!=
diff -q poalim report2008_4.csv; echo differences: =$$!=
The first 'diff' compares two equal files, and the second one compares two different files.
The output is:
diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$!=
no differences: ==
diff -q poalim report2008_4.csv; echo differences: =$!=
Files poalim and report2008_4.csv differ
differences: ==
So obviously '$$!' is the wrong variable to capture the status code of 'diff'.
Even using
SHELL := /bin/bash
at beginning of the Makefile did not solve the problem.
A variable returning the value I need would (if it exists at all) be used in an 'if' command in the real rule.
The alternative of creating a small ad-hoc shell script in lieu of writing all commands inline in the Makefile is undesirable, but I'll use it as a last resort.
Related:
How to make a failing shell command interrupt make

I think you're looking for the $? shell variable, which gives the exit code of the previous command. For example:
$ diff foo.txt foo.txt
$ echo $?
0
To use this in your makefile, you would have to escape the $, as in $$?:
all:
diff foo.txt foo.txt ; if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Do note that each command in your rule body in make is run in a separate subshell. For example, the following will not work:
all:
diff foo.txt foo.txt
if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Because the diff and the if commands are executed in different shell processes. If you want to use the output status from the command, you must do so in the context of the same shell, as in my previous example.
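Applied to the regression check from the question, the whole test has to stay on one logical line so the status is still there when the if runs. A rough sketch using the file names from the question (the target name check-output is made up, the saved copy is assumed to exist, and the recipe lines are indented with a tab):
check-output:
	diff report2008_4.csv report2008_4.csv-save-for-regression-testing ; \
	if [ $$? -ne 0 ] ; then echo "output changed by refactoring" ; exit 1 ; fi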

Use '$$?' instead of '$$!' (thanks to 4th answer of Exit Shell Script Based on Process Exit Code)

Don't forget that each of your commands is being run in separate subshells.
That's why you quite often see something like:
my_target:
do something && \
do something else && \
do last thing
And when debugging, don't forget the ever-helpful -n option, which will print the commands but not execute them, and the -p option, which will show you the complete make environment, including where the various bits and pieces have been set.
HTH
cheers,

If you are passing the result code to an if, you could simply do:
all:
if diff foo.txt foo.txt ; then echo "no differences" ; fi
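For the regression-testing case this answer can carry a failing branch too, so make stops when the files differ; a rough sketch with made-up file names:
check:
	if diff old.csv new.csv ; then echo "no differences" ; else echo "files differ" ; exit 1 ; fi
If all you need is for make to stop on a difference, a bare diff as the command already does that, since make aborts the rule when a command exits with a non-zero status.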

The bash variable is $?, but why do you want to print out the status code anyway?

Try '\$?'. I think the $$ is being interpreted by the makefile.

Related

Bash Check if a Script Ran Successfully (Exit Code not Working)

I have the following bash script:
echo one
echo two
cd x
echo three
which fails at the 3rd line as there is no directory named x. However, after running the script, when I do $?, 0 is returned, even though the script has an error. How do I detect whether the script ran successfully or not?
Check the condition of directory existence in the script statements:
[ -d x ] && cd x || { echo "no such directory"; exit 1; }
Or put set -e after shebang line:
#!/bin/bash
set -e
echo one
echo two
cd x
echo three
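With set -e in place the script stops at the failing cd and its exit status is non-zero; running it should look roughly like this (assuming it is saved as myscript.sh and x really does not exist):
$ ./myscript.sh
one
two
./myscript.sh: line 6: cd: x: No such file or directory
$ echo $?
1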
You should end with an exit statement
echo one
echo two
cd x
exitCode=$?
echo three
exit $exitCode;
Then
./myscript
echo $?
1
I have searched all over with no clear answer to this. Put simply it doesn't appear to be a native feature in bash. So I will give you the hard way.
Suppose you have a .sh script with multiple commands, you don't know whether any of them will error, and you want to check whether at least one has failed. You would need to put a $? check after literally every command and redirect it to a text file. Once it's in the text file you could format it like:
Command1 = 0 Success.
Command2 = 127 Fail.
Or you could just add the numbers to the file, run it through some kind of calculator to add everything together, and if the total is greater than zero then a command failed at some point. But this won't be very useful if you want the exact code and there is more than one failure.
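A rough sketch of that idea; command1 and command2 stand for whatever your script actually runs, and results.txt is just a made-up log name:
#!/bin/bash
command1; echo "command1 = $?" >> results.txt
command2; echo "command2 = $?" >> results.txt
# any line in results.txt that does not end in "= 0" is a failed command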
UPDATED - This is the best way I could find.
You can put this at the top of your script file to catch any errors and exit if it fails.
set -euo pipefail
Feel free to read the manual pages.
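For reference, what each of those options does (standard bash behaviour):
#!/bin/bash
set -e          # exit immediately when a command returns a non-zero status
set -u          # treat expansion of an unset variable as an error
set -o pipefail # a pipeline fails if any command in it fails, not only the last one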

Have `make` echo to standard error without re-direction?

Some of the targets in my Makefile run programs whose output (which they send to stdout) I am interested in. For a reason not known to me, the authors of make decided to echo the executed commands to stdout, which pollutes the latter.
A hard way around this problem that involves swapping file descriptors was suggested here. I am wondering if there is a simpler way to force make to echo to stderr.
I looked through the man page of make, but did not find anything to this end besides the -s option. I prefer to preserve the echo of commands, but have it in stderr.
I also tried making an auxiliary target (which I made a prerequisite of all other targets), in which I put:
exec 3>&2
exec 2>&1
exec 1>&3
but bash complained that 3 wasn't a valid file descriptor. I tried only exec 1>&2, but that did not have any effect...
The reason make shows the command line on stdout is because that's what the POSIX standard for make requires, and 30+ years of history expect. See http://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html and search for the section on "STDOUT".
You cannot modify the file descriptors in the make program from within a recipe, because the recipe is run in a subshell: any changes you make to the file descriptors only take effect in the subshell. It's not possible in UNIX for a child process to modify the file descriptors of its parent.
Similarly, each line in a recipe in make is run in a different subshell. If you want to do fancy things like redirect output for a recipe you'll have to write it all on one line:
exec 3>&2; exec 2>&1; exec 1>&3; <my command here>
Of course if you intend to do this a lot I would put that in a make variable and use that variable instead.
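A minimal sketch of that suggestion; the variable name REDIRECT and the command my_command are made up, and the recipe line is indented with a tab:
REDIRECT = exec 3>&2; exec 2>&1; exec 1>&3;

my_target:
	$(REDIRECT) my_command
The whole recipe line still runs in a single shell, so the swapped descriptors apply to my_command.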
There is no way to get make to write its output to stderr instead of stdout, unless you want to modify the source code for GNU make and use the version you build yourself instead. It would actually be straightforward to do this as long as you're using a newer version of GNU make (4.0 and above) since all output is generated from one place (in output.c).
What you can do entirely in the Makefile is this:
define REDIR
@printf 1>&2 "%s\n" '$(1)'; $(1)
endef
.PHONY: all
all:
$(call REDIR,echo updating .stamp)
$(call REDIR,touch .stamp)
That is to say, take control of the command echoing yourself via a macro. Unfortunately, it involves writing your recipe lines using the $(call ...) syntax.
REDIR now implements the semantics of echoing the command, and executing it, via macro expansion.
The 1>&2 redirects the printf's standard output to standard error, so the echoed command line goes to standard error, while the command itself still writes to standard output as usual.
Test run:
$ make
echo updating .stamp
updating .stamp
touch .stamp
$ make 2> /dev/null
updating .stamp
As you can see, updating .stamp, which is the output of our explicitly coded echo line, nicely goes to standard output. The commands are implicitly sent to standard error.
If you don't want make's echo of commands to pollute the output that the commands produce, can't you simply run
make -n >&2 && make -s
This is the sample Makefile:
all:
ls
echo done
Here is the output of make:
ls
Makefile
echo done
done
Here is the output of make -n >&2 && make -s:
ls
echo done
Makefile
done
Naturally, output of either step can be redirected to file.
Suppose we have the following Makefile:
target-1:
target-1-body
target-2:
target-2-body
target-3:
target-3-body
We change it as follows:
target-1-raw:
target-1-body
target-2-raw:
target-2-body
target-3-raw:
target-3-body
%-raw:
echo "Error: Target $# does not exist!"
%:
@make $@-raw 3>&2 2>&1 1>&3
The invocation is the same as before, e.g. make target-1.
With the two additional pattern rules we made make send its output to stderr.
FYI: I am trying to develop this solution further so the user would not be able to invoke the raw targets directly.
Another POSIX-incompatible solution is to put
#!/bin/bash
exec 3>&2; exec 2>&1; exec 1>&3;
into helper/stderr relative to my project, and
helperdir = helper
SHELL = BASH_ENV="$(helperdir)/stderr" /bin/bash
into my Makefile.
Now all output from executed rule code is redirected to the stderr file descriptor.
The BASH_ENV environment variable, if set to a script path, makes bash execute that script at every invocation.

Is it possible to get the exit code from a subshell?

Let's imagine I have a bash script, where I call this:
bash -c "some_command"
do something with code of some_command here
Is it possible to obtain the exit code of some_command? I'm not executing some_command directly in the shell running the script because I don't want to alter its environment.
$? will contain the return code of some_command just as usual.
Of course it might also contain a code from bash, in case something went wrong before your command could even be executed (wrong filename, whatnot).
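For example, saving the code right away so later commands don't overwrite it (some_command is the placeholder from the question):
bash -c "some_command"
rc=$?
if [ $rc -ne 0 ]; then
    echo "some_command failed with exit code $rc" >&2
fi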
Here's an illustration of $? and the parenthesis subshell mentioned by Paggas and Matti:
$ (exit a); echo $?
-bash: exit: a: numeric argument required
255
$ (exit 33); echo $?
33
In the first case, the code is a Bash error and in the second case it's the exit code of exit.
You can use the $? variable, check out the bash documentation for this, it stores the exit status of the last command.
Also, you might want to check out bash's parenthesized command blocks (e.g. comm1 && (comm2 || comm3) && comm4); they are always executed in a subshell, thus not altering the current environment, and are more powerful as well!
EDIT: For instance, when using ()-style blocks as compared to bash -c 'command', you don't have to worry about escaping any argument strings with spaces, or any other special shell syntax. You directly use the shell syntax, it's a normal part of the rest of the code.
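A small illustration of that isolation; the variable and directory here are arbitrary:
FOO=outer
(FOO=inner; cd /tmp; false)
echo $?       # 1 - the exit status of the last command in the subshell
echo "$FOO"   # still "outer": the parent shell's variable is untouched
pwd           # still the original directory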

BASH Variables with multiple commands and reentrant

I have a bash script that sources contents from another file. The contents of the other file are commands I would like to execute and whose return values I want to compare. Some of the entries have multiple commands separated by either a semicolon (;) or by ampersands (&&), and I can't seem to make this work. To work on this, I created some test scripts as shown:
test.conf is the file being sourced by test
Example-1 (this works). The two outputs are 2 seconds apart.
test.conf
CMD[1]="date"
test.sh
. test.conf
i=2
echo "$(${CMD[$i]})"
sleep 2
echo "$(${CMD[$i]})"
Example-2 (this does not work)
test.conf (same script as above)
CMD[1]="date;date"
Example-3 (tried this, it does not work either)
test.conf (same script as above)
CMD[1]="date && date"
I don't want my variable, CMD, to be inside tick marks because then the commands would be executed when the file is sourced, and I see no way of re-evaluating the variable.
This script essentially calls CMD on pass-1 to check something, if on pass-1 I get a false reading, I do some work in the script to correct the false reading and re-execute & re-evaluate the output of CMD; pass-2.
Here is an example. Here I'm checking to see if SSHD is running. If it's not running when I evaluate CMD[1] on pass-1, I will start it and re-evaluate CMD[1] again.
test.conf
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
So if I modify this for my test script, then test.conf becomes:
NOTE: Tick marks are not showing up but it's the key below the ~ mark on my keyboard.
CMD[1]=`date;date` or `date && date`
My script looks like this (to handle the tick marks)
. test.conf
i=2
echo "${CMD[$i]}"
sleep 2
echo "${CMD[$i]}"
I get the same date/time printed twice despite the 2 second delay. As such, CMD is not getting re-evaluated.
First of all, you should never use backticks unless you need to be compatible with an old shell that doesn't support $() - and only then.
Secondly, I don't understand why you're setting CMD[1] but then calling CMD[$i] with i set to 2.
Anyway, this is one way (and it's similar to part of Barry's answer):
CMD[1]='$(date;date)' # no backticks (remember - they carry Lyme disease)
eval echo "${CMD[1]}" # or $i instead of 1
From the couple of lines of your question, I would have expected some approach like this:
#!/bin/bash
while read -r line; do
# munge $line
if eval "$line"; then
# success
else
# fail
fi
done
Where you have backticks in the source, you'll have to escape them to avoid evaluating them too early. Also, backticks aren't the only way to evaluate code - there is eval, as shown above. Maybe it's eval that you were looking for?
For example, this line:
CMD[1]=`pgrep -u root -d , sshd 1>/dev/null; echo $?`
Ought probably look more like this:
CMD[1]='`pgrep -u root -d , sshd 1>/dev/null; echo $?`'
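Putting that together for the re-evaluation problem in the question, one possible sketch is to keep plain strings in the config and eval them on each pass:
# test.conf
CMD[1]='date; date'

# test.sh
. test.conf
eval "${CMD[1]}"   # pass 1: the commands run now
sleep 2
eval "${CMD[1]}"   # pass 2: they run again, so the timestamps differ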

How can I use Bash syntax in Makefile targets?

I often find Bash syntax very helpful, e.g. process substitution like in diff <(sort file1) <(sort file2).
Is it possible to use such Bash commands in a Makefile? I'm thinking of something like this:
file-differences:
diff <(sort file1) <(sort file2) > $@
In my GNU Make 3.80 this gives an error, since it uses /bin/sh instead of bash to execute the commands.
From the GNU Make documentation,
5.3.2 Choosing the Shell
------------------------
The program used as the shell is taken from the variable `SHELL'. If
this variable is not set in your makefile, the program `/bin/sh' is
used as the shell.
So put SHELL := /bin/bash at the top of your makefile, and you should be good to go.
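For the example in the question that would look roughly like this (recipe line indented with a tab):
SHELL := /bin/bash

file-differences:
	diff <(sort file1) <(sort file2) > $@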
BTW: You can also do this for one target, at least for GNU Make. Each target can have its own variable assignments, like this:
all: a b
a:
#echo "a is $$0"
b: SHELL:=/bin/bash # HERE: this is setting the shell for b only
b:
#echo "b is $$0"
That'll print:
a is /bin/sh
b is /bin/bash
See "Target-specific Variable Values" in the documentation for more details. That line can go anywhere in the Makefile, it doesn't have to be immediately before the target.
You can call bash directly, using the -c flag:
bash -c "diff <(sort file1) <(sort file2) > $@"
Of course, you may not be able to redirect to the variable $@, but when I tried to do this, I got -bash: $@: ambiguous redirect as an error message, so you may want to look into that before you get too into this (though I'm using bash 3.2.something, so maybe yours works differently).
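Inside a real recipe make expands $@ to the target name before any shell sees it, so one way to sidestep the quoting question is to leave the redirection outside the bash -c string and let the default shell handle it; a sketch:
file-differences:
	bash -c "diff <(sort file1) <(sort file2)" > $@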
One way that also works is to put this in the first line of your target:
your-target: $(eval SHELL:=/bin/bash)
#echo "here shell is $$0"
If portability is important you may not want to depend on a specific shell in your Makefile. Not all environments have bash available.
You can call bash directly within your Makefile instead of using the default shell:
bash -c "ls -al"
instead of:
ls -al
There is a way to do this without explicitly setting your SHELL variable to point to bash. This can be useful if you have many makefiles since SHELL isn't inherited by subsequent makefiles or taken from the environment. You also need to be sure that anyone who compiles your code configures their system this way.
If you run sudo dpkg-reconfigure dash and answer 'no' to the prompt, your system will not use dash as the default shell. It will then point to bash (at least in Ubuntu). Note that using dash as your system shell is a bit more efficient though.
It's not a direct answer to the question, but makeit is a limited Makefile replacement with bash syntax, and it can be useful in some cases (I'm the author):
rules can be defined as bash-functions
auto-completion feature
The basic idea is to have a while loop at the end of the script:
while [ $# != 0 ]; do
if [ "$(type -t $1)" == 'function' ]; then
$1
else
exit 1
fi
shift
done
https://asciinema.org/a/435159
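A self-contained sketch of that idea; the file name and the clean/build functions are made up for illustration and are not part of makeit itself:
#!/bin/bash
# build.sh - run as: ./build.sh clean build

clean() { rm -rf out; }
build() { mkdir -p out; echo "building..."; }

# dispatch each argument to the bash function of the same name
while [ $# != 0 ]; do
    if [ "$(type -t $1)" == 'function' ]; then
        $1
    else
        echo "unknown target: $1" >&2
        exit 1
    fi
    shift
done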

Resources