How can I use Bash syntax in Makefile targets?

I often find Bash syntax very helpful, e.g. process substitution like in diff <(sort file1) <(sort file2).
Is it possible to use such Bash commands in a Makefile? I'm thinking of something like this:
file-differences:
	diff <(sort file1) <(sort file2) > $@
In my GNU Make 3.80 this gives an error, since it uses /bin/sh instead of bash to execute the commands.

From the GNU Make documentation,
5.3.2 Choosing the Shell
------------------------
The program used as the shell is taken from the variable `SHELL'. If
this variable is not set in your makefile, the program `/bin/sh' is
used as the shell.
So put SHELL := /bin/bash at the top of your makefile, and you should be good to go.
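Applied to the rule from the question, a minimal sketch would look like this:
SHELL := /bin/bash

file-differences:
	diff <(sort file1) <(sort file2) > $@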
BTW: You can also do this for one target, at least for GNU Make. Each target can have its own variable assignments, like this:
all: a b
a:
	@echo "a is $$0"
b: SHELL:=/bin/bash   # HERE: this is setting the shell for b only
b:
	@echo "b is $$0"
That'll print:
a is /bin/sh
b is /bin/bash
See "Target-specific Variable Values" in the documentation for more details. That line can go anywhere in the Makefile, it doesn't have to be immediately before the target.

You can call bash directly, using the -c flag:
bash -c "diff <(sort file1) <(sort file2) > $@"
Of course, you may not be able to redirect to the variable $@: when I tried this I got -bash: $@: ambiguous redirect as an error message, so you may want to look into that before you get too far into this (though I'm using bash 3.2.something, so maybe yours works differently).

One way that also works is putting this in the first line of your target:
your-target: $(eval SHELL:=/bin/bash)
	@echo "here shell is $$0"

If portability is important you may not want to depend on a specific shell in your Makefile. Not all environments have bash available.

You can call bash directly within your Makefile instead of using the default shell:
bash -c "ls -al"
instead of:
ls -al
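Inside a recipe that might look like the following sketch; note that make expands $@ to the target name itself before the line ever reaches the shell, so the redirect works here:
file-differences:
	bash -c "diff <(sort file1) <(sort file2) > $@"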

There is a way to do this without explicitly setting your SHELL variable to point to bash. This can be useful if you have many makefiles since SHELL isn't inherited by subsequent makefiles or taken from the environment. You also need to be sure that anyone who compiles your code configures their system this way.
If you run sudo dpkg-reconfigure dash and answer 'no' to the prompt, your system will not use dash as the default shell; /bin/sh will then point to bash (at least on Ubuntu). Note that using dash as your system shell is a bit more efficient, though.

It's not a direct answer to the question, but makeit is a limited Makefile replacement with bash syntax that can be useful in some cases (I'm the author). Rules can be defined as bash functions, and it has an auto-completion feature.
The basic idea is to have a while loop at the end of the script:
while [ $# != 0 ]; do
    if [ "$(type -t $1)" == 'function' ]; then
        $1
    else
        exit 1
    fi
    shift
done
https://asciinema.org/a/435159
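For illustration, a self-contained sketch of that dispatch idea (the task names and bodies below are made up, not taken from makeit):
#!/usr/bin/env bash
# "rules" are plain bash functions, so any bash syntax is available inside them
build() {
    echo "building..."
}

clean() {
    echo "cleaning..."
}

# dispatch loop: run each argument that names a function, fail on anything else
while [ $# != 0 ]; do
    if [ "$(type -t "$1")" == 'function' ]; then
        "$1"
    else
        echo "unknown task: $1" >&2
        exit 1
    fi
    shift
done
Invoked as ./makeit.sh clean build, it runs the two functions in order.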

Related

What are the differences in echo between zsh and bash?

In bash, in this specific case, echo behaves like so:
$ bash -c 'echo "a\nb"'
a\nb
but in zsh the same thing turns out very differently...:
$ zsh -c 'echo "a\nb"'
a
b
and fwiw in fish, because I was curious:
$ fish -c 'echo "a\nb"'
a\nb
I did realize that I can run:
$ zsh -c 'echo -E "a\nb"'
a\nb
But now I am worried that I'm about to stumble into more gotchas on such a basic operation. (Thus my investigation into fish: if I'm going to have to make changes at such a low level for zsh, why not go all the way and switch up to something that is blatant about being so different?)
I did not myself find any documentation to help clarify this echo difference in bash vs zsh or pages directly listing the differences, so can someone here list them out? And maybe direct me to any broader set of potentially impactful gotchas when making the switch, that would cover this case?
In general, prefer printf for consistent results.
If you need a predictable, consistent echo implementation, you can override it with your own function.
This will behave the same regardless of the shell:
echo() { printf '%s\n' "$*"; }
echo "a\nb"
echo is good and portable only for printing literal strings that end with a newline; it's not good for anything more complex. You can read more about it in this answer and in the ShellCheck documentation here.
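A quick illustration of the kind of input that trips echo up (the variable contents here are just examples):
var='-n'
echo "$var"            # many shells treat this as the -n option and print nothing
printf '%s\n' "$var"   # reliably prints -n followed by a newline

var='a\nb'
echo "$var"            # prints a\nb or two lines, depending on the shell
printf '%s\n' "$var"   # always prints the literal a\nb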
Even though, according to POSIX, every implementation has to understand escape sequences without any additional options, you cannot rely on that. As you already noticed, in Bash for example echo 'a\nb' produces a\nb, but that can be changed with the xpg_echo shell option:
$ echo 'a\nb'
a\nb
$ shopt -s xpg_echo
$ echo 'a\nb'
a
b
And maybe direct me to any broader set of potentially impactful
gotchas when making the switch, that would cover this case?
Notice that the inconsistency between different echo implementations can manifest itself not only in the shell but also in other places where the shell is used indirectly, for example in a Makefile. I once came across a Makefile that looked like this:
all:
	@echo "target\tdoes this"
	@echo "another-target\tdoes this"
make uses /bin/sh to run these commands, so if /bin/sh is a symlink to bash on your system, what you get is:
$ make
target\tdoes this
another-target\tdoes this
If you want portability in shell use printf. This:
printf 'a\nb\n'
should produce the same output in most shells.
The difference lies in the echo builtin of each shell: zsh interprets escape characters/sequences (i.e., "\n\t\v\r..." or ANSI escape sequences) automatically, whereas bash does not.
Using bash, you'll have to supply the -e flag to print newlines:
#Input:
echo "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]\n[text-to-print-on-newline]
#Input:
echo -e "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]
[text-to-print-on-newline]
Using zsh, the interpreter does the escape sequence interpretation by itself, and the output is the same regardless of the -e flag:
#Input:
echo "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]
[text-to-print-on-newline]
#Input:
echo -e "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]
[text-to-print-on-newline]
This accounts for the discrepancy you're seeing between shell interpreters.
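Going the other way, if you want zsh's echo to stop expanding escapes by default, zsh has a bsd_echo option for that; a small sketch, worth checking against your zsh version:
$ zsh -c 'setopt bsd_echo; echo "a\nb"'
a\nb
With bsd_echo set (e.g. in your .zshrc), zsh's echo only expands escapes when -e is given, matching bash's default behaviour.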

when to use bash with option -c?

I'm trying to understand -c option for bash better. The man page says:
-c: If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, they are assigned to the positional parameters, starting with $0.
I'm having trouble understanding what this means.
If I do the following command with and without bash -c, I get the same result (example from http://www.tldp.org/LDP/abs/html/abs-guide.html):
$ set w x y z; IFS=":-;"; echo "$*"
w:x:y:z
$ bash -c 'set w x y z; IFS=":-;"; echo "$*"'
w:x:y:z
bash -c isn't as interesting when you're already running bash. Consider, on the other hand, the case when you want to run bash code from a Python script:
#!/usr/bin/env python
import subprocess
fileOne='hello'
fileTwo='world'
p = subprocess.Popen(['bash', '-c', 'diff <(sort "$1") <(sort "$2")',
                      '_',       # this is $0 inside the bash script above
                      fileOne,   # this is $1
                      fileTwo,   # and this is $2
                     ])
print p.communicate()  # run that bash interpreter, and print its stdout and stderr
Here, because we're using bash-only syntax (<(...)), you couldn't run this with anything that used POSIX sh by default, which is the case for subprocess.Popen(..., shell=True); using bash -c thus provides access to capabilities that wouldn't otherwise be available without playing with FIFOs yourself.
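For comparison, the same pattern typed straight into a shell, without Python (file names taken from the example above):
$ bash -c 'diff <(sort "$1") <(sort "$2")' _ hello world
Here _ fills $0 and hello and world become $1 and $2, just as in the Popen argument list.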
Incidentally, this isn't the only way to do that: One could also use bash -s, and pass code in on stdin. Below, that's being done not from Python but POSIX sh (/bin/sh, which likewise is not guaranteed to have <(...) available):
#!/bin/sh
# ...this is POSIX sh code, not bash code; you can't use <() here
# ...so, if we want to do that, one way is as follows:
fileOne=hello
fileTwo=world
bash -s "$fileOne" "$fileTwo" <<'EOF'
# the inside of this heredoc is bash code, not POSIX sh code
diff <(sort "$1") <(sort "$2")
EOF
The -c option finds its most important uses when bash is launched by another program, and especially when the code to be executed may include redirections, pipelines, shell built-ins, shell variable assignments, and/or non-trivial lists. On POSIX systems where /bin/sh is a link to bash, it is specifically what backs the C library's system() function.
Equivalent behavior is much trickier to implement on top of fork / exec without using -c, though not altogether impossible.
How do you execute bash code from outside a bash shell? The answer is: by using the -c option, which makes bash execute whatever has been passed as an argument to -c.
So yes, that is the purpose of this option: to execute arbitrary bash code, just supplied in another way.
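To see the positional-parameter behaviour described in the man page excerpt directly, here's a tiny example (the argument strings are arbitrary):
$ bash -c 'echo "script name: $0, first: $1, second: $2"' myscript foo bar
script name: myscript, first: foo, second: bar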

Properly splitting a string in UNIX shell

I need to be able to split a string so that each piece is passed as a variable assignment in my shell.
I tried something like this:
$ cat test.sh
#!/bin/sh
COMPOPT="CC=clang CXX=clang++"
$COMPOPT cmake ../gdcm
I also tried a bash specific solution, but with no luck so far:
$ cat test.sh
#!/bin/bash -x
COMPOPT="CC=clang CXX=clang++"
ARRAY=($COMPOPT)
"${ARRAY[0]}" "${ARRAY[1]}" cmake ../gdcm
I always get the non-informative error message:
./test.sh: 5: ./t.sh: CC=clang: not found
Of course if I try directly from the running shell this works:
$ CC=clang CXX=clang++ cmake ../gdcm
Another eval-free solution is to use the env program:
env "${ARRAY[@]}" cmake ../gdcm
which provides a level of indirection to the usual FOO=BAR command syntax.
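In the context of the script from the question, that might look like this sketch:
#!/bin/bash
COMPOPT="CC=clang CXX=clang++"
ARRAY=($COMPOPT)                 # unquoted on purpose: split on whitespace
env "${ARRAY[@]}" cmake ../gdcm  # env puts each NAME=VALUE word into cmake's environment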
When you say:
$COMPOPT cmake ../gdcm
the shell would attempt to execute the value of the variable as a command.
The evil eval is rather handy in such cases. Say:
eval $COMPOPT cmake ../gdcm
devnull's answer works, but it uses eval, which has known pitfalls.
Here is a way it can be done without invoking eval:
#!/bin/sh
COMPOPT="CC=clang CXX=clang++"
sh -c "$COMPOPT cmake ../gdcm"
i.e. pass the whole command line to sh (or bash).

Can you wrapper each command in GNU's make?

I want to inject a transparent wrapper command around each shell command in a makefile, something like the time shell command. (However, it's not the time command; it's a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
time /bin/sh "$@"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage that it's more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and could be visibly faster.
# Set Q=@ to not display command names
TIME = time
foo:
	$(Q)$(TIME) foo_compiler
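Because Q and TIME are ordinary make variables, they can be overridden per invocation without editing the Makefile; for example (strace here is just an illustrative substitute wrapper):
$ make Q=@                # don't echo the command lines as they run
$ make TIME=              # run foo_compiler with no wrapper at all
$ make TIME='strace -f'   # swap in a different wrapper for this run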
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results
if [ "$1" == "-c" ] ; then
shift
fi
strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$#" | awk '{if (match("Process PID=\d+ runs in (64|32) bit",$0) == 0) {print $0}}'
# EOF
I don't think there is a way to do what you want within GNU Make itself.
I have done things like modify the PATH environment variable in the Makefile so that a directory containing my script, symlinked under the names of all the binaries I wanted wrapped, was found before the actual binaries. The script would then look at how it was called and exec the actual binary under the wrapped command,
i.e. exec time "$0" "$@" (with "$0" resolved to the real binary rather than the symlink).
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
Update
I defer to Gilles' answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to not put the -c in, since that is required for most shells. -c is passed as the first argument ($1) and <command> is passed as a single argument string as the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
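Wiring that wrapper into a Makefile might look like the following sketch (the script name and location are assumptions; mark the script executable first):
SHELL = ./timeshell     # the two-line eval wrapper above, chmod +x'd

all:
	sleep 1; echo done
For that recipe line, make invokes ./timeshell -c 'sleep 1; echo done', and the wrapper in turn runs it under time.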

Shell status codes in make

I use a Makefile (with GNU make running under Linux) to automate my grunt work when refactoring a Python script.
The script creates an output file, and I want to make sure that the output file remains unchanged in face of my refactorings.
However, I found no way to get the status code of a command to affect a subsequent shell if command.
The following rule illustrates the problem:
check-cond-codes:
	diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$$!=
	diff -q poalim report2008_4.csv; echo differences: =$$!=
The first 'diff' compares two equal files, and the second one compares two different files.
The output is:
diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$!=
no differences: ==
diff -q poalim report2008_4.csv; echo differences: =$!=
Files poalim and report2008_4.csv differ
differences: ==
So obviously '$$!' is the wrong variable to capture the status code of 'diff'.
Even using
SHELL := /bin/bash
at the beginning of the Makefile did not solve the problem.
A variable returning the value, which I need, would (if it exists at all) be used in an 'if' command in the real rule.
The alternative of creating a small ad-hoc shell script in lieu of writing all commands inline in the Makefile is undesirable, but I'll use it as a last resort.
Related:
How to make a failing shell command interrupt make
I think you're looking for the $? shell variable, which gives the exit code of the previous command. For example:
$ diff foo.txt foo.txt
$ echo $?
0
To use this in your makefile, you would have to escape the $, as in $$?:
all:
	diff foo.txt foo.txt ; if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Do note that each command in your rule body is run by make in a separate shell. For example, the following will not work:
all:
	diff foo.txt foo.txt
	if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Because the diff and the if commands are executed in different shell processes. If you want to use the output status from the command, you must do so in the context of the same shell, as in my previous example.
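If you need the numeric status itself rather than just branching on success, capture it on the same line, something like this sketch:
all:
	diff foo.txt foo.txt; status=$$?; \
	if [ $$status -eq 0 ]; then echo "no differences (diff exited $$status)"; fi
The backslash continuation keeps both statements in one shell invocation, so $$status is still in scope for the if.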
Use '$$?' instead of '$$!' (thanks to the fourth answer to "Exit Shell Script Based on Process Exit Code").
Don't forget that each of your commands is run in a separate shell.
That's why you quite often see something like:
my_target:
	do something && \
	do something else && \
	do last thing
And when debugging, don't forget the ever-helpful -n option, which prints the commands without executing them, and the -p option, which shows you the complete make environment, including where the various bits and pieces have been set.
If you are passing the result code to an if, you could simply do:
all:
	if diff foo.txt foo.txt ; then echo "no differences" ; fi
The bash variable is $?, but why do you want to print out the status code anyway?
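Applied to the rule from the question, that pattern might look like this (file names taken from the question; the exit 1 is only needed if you also want make to stop when the files differ, as in the linked question):
check-cond-codes:
	if diff report2008_4.csv report2008_4.csv-save-for-regression-testing; then \
		echo "no differences"; \
	else \
		echo "differences found"; exit 1; \
	fi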
Try '\$?'; I think the $$ is being interpreted by the makefile.
