Have `make` echo to standard error without redirection? - bash

Some of the targets in my Makefile run programs whose output (which they send to stdout) I am interested in. For a reason not known to me, the authors of make decided to echo the executed commands to stdout, which pollutes the latter.
A hard way around this problem that involves swapping file descriptors was suggested here. I am wondering if there is a simpler way to force make echo to stderr.
I looked through the man page of make, but did not find anything to this end besides the -s option. I prefer to preserve the echo of commands, but have it in stderr.
I also tried making an auxiliary target (which I made a prerequisite of all other targets), in which I put:
exec 3>&2
exec 2>&1
exec 1>&3
but bash complained that 3 wasn't a valid file descriptor. I tried only exec 1>&2, but that did not have any effect...

The reason make shows the command line on stdout is because that's what the POSIX standard for make requires, and 30+ years of history expect. See http://pubs.opengroup.org/onlinepubs/9699919799/utilities/make.html and search for the section on "STDOUT".
You cannot modify the file descriptors in the make program from within a recipe, because the recipe is run in a subshell: any changes you make to the file descriptors only take effect in the subshell. It's not possible in UNIX for a child process to modify the file descriptors of its parent.
Similarly, each line in a recipe in make is run in a different subshell. If you want to do fancy things like redirect output for a recipe you'll have to write it all on one line:
exec 3>&2; exec 2>&1; exec 1>&3; <my command here>
Of course if you intend to do this a lot I would put that in a make variable and use that variable instead.
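For instance (a minimal sketch; the variable name SWAPFD and the target are my own inventions):

# Hypothetical helper: swap stdout and stderr for the rest of the recipe line.
SWAPFD = exec 3>&2; exec 2>&1; exec 1>&3;

some-target:
	$(SWAPFD) my_command --with --args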
There is no way to get make to write its output to stderr instead of stdout, unless you want to modify the source code for GNU make and use the version you build yourself instead. It would actually be straightforward to do this as long as you're using a newer version of GNU make (4.0 and above), since all output is generated from one place (in output.c).

What you can do entirely in the Makefile is this:
define REDIR
@printf 1>&2 "%s\n" '$(1)'; $(1)
endef
.PHONY: all
all:
	$(call REDIR,echo updating .stamp)
	$(call REDIR,touch .stamp)
That is to say, take control of the command echoing yourself via a macro. Unfortunately, it involves writing your recipe lines in `$(call ...)` syntax.
REDIR now implements the semantics of echoing the command, and executing it, via macro expansion.
The 1>&2 is standard shell syntax that redirects printf's standard output to standard error, so the echo of the command lands on stderr, while the command itself still writes to standard output as usual.
Test run:
$ make
echo updating .stamp
updating .stamp
touch .stamp
$ make 2> /dev/null
updating .stamp
As you can see, updating .stamp, which is the output of our explicitly coded echo line, nicely goes to standard output. The command echoes are sent to standard error.

If you don't want to pollute the output of echo of what make produces, can't you simply run
make -n >&2 && make -s
This is the sample Makefile:
all:
	ls
	echo done
Here is the output of make:
ls
Makefile
echo done
done
Here is output of make -n >&2 && make -s:
ls
echo done
Makefile
done
Naturally, output of either step can be redirected to file.
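For example (hypothetical log file names):

make -n > commands.log && make -s > output.log

so the command list and the program output end up in separate files.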

Suppose we have the following Makefile:
target-1:
	target-1-body
target-2:
	target-2-body
target-3:
	target-3-body
We change it as follows:
target-1-raw:
	target-1-body
target-2-raw:
	target-2-body
target-3-raw:
	target-3-body
%-raw:
	echo "Error: Target $@ does not exist!"
%:
	@make $@-raw 3>&2 2>&1 1>&3
The invocation is the same as before, e.g. make target-1.
With two additional rules, we have made make send its output to stderr.
FYI: I am trying to develop this solution further so the user would not be able to invoke the raw targets directly.
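As a usage sketch, the command echo can now be filtered out by discarding stderr:

make target-1 2> /dev/null   # the echoed commands (now on stderr) are gone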

Another POSIX-incompatible solution is to put
#!/bin/bash
exec 3>&2; exec 2>&1; exec 1>&3;
into helper/stderr relative to my project, and
helperdir = helper
SHELL = BASH_ENV="$(helperdir)/stderr" /bin/bash
into my Makefile.
Now the output of all executed rule code is redirected to the stderr file descriptor.
If the BASH_ENV environment variable is set to the path of a script, bash executes that script at the start of every invocation.
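To see the effect (a sketch, assuming the layout above): with stdout and stderr swapped inside each recipe shell, discarding make's stdout hides the command echo, while a rule's own output, now on fd 2, still reaches the terminal:

make > /dev/null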

Related

Redirect output from file to stdout

I have a program that can output its results only to files, with -o option. This time I need to output it to console, i.e. stdout. Here is my first try:
myprog -o /dev/stdout input_file
But it says:
/dev/ not writable
I've found this question that's similar to mine, but /dev/stdout is obviously not going to work without some additional magic.
Q: How to redirect output from file to stdout?
P.S. Conventional methods without any specialized software are preferable.
Many tools interpret a - as stdin or stdout, depending on the context of its usage. This is not a shell feature, though; it depends entirely on the program used.
In your case the following could solve your problem:
myprog -o - input_file
If the program can only write to a file, then you could use a named pipe:
pipename=/tmp/mypipe.$$
mkfifo "$pipename"
./myprog -o "$pipename" &
while read -r line
do
    echo "output from myprog: $line"
done < "$pipename"
rm "$pipename"
First we create the pipe, we put it into /tmp to keep it out of the way of backup programs. The $$ is our PID, and makes the name unique at runtime.
We run the program in background, and it should block trying to write to the pipe. Some programs use a technique called "memory mapping" in which case this will fail, because a pipe cannot be memory mapped (a good program would check for this).
Then we read the pipe in the script as we would any other file.
Finally we delete the pipe.
You can cat the contents of the file written by myprog.
myprog -o tmpfile input_file && cat tmpfile
This would have the described effect -- allowing you to pipe the output of myprog to some subsequent command -- although it is a different approach than you had envisioned.
In the circumstance that the output of myprog (perhaps more aptly notmyprog) is too big to write to disk, this approach would not be good.
A solution that cleans up the temp file in the same line and still pipes the contents out at the end would be this
myprog -o tmpfile input_file && contents=`cat tmpfile` && rm tmpfile && echo "$contents"
Which stores the contents of the file in a variable so that it may be accessed after deleting the file. Note the quotes in the argument of the echo command. These are important to preserve newlines in the file contents.
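A quick illustration of why the quotes matter (illustrative session):

$ contents=`printf 'line1\nline2'`
$ echo $contents
line1 line2
$ echo "$contents"
line1
line2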

How to save the command you are about to execute in bash?

Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not the next. Thus, I cannot do the update by simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
    echo "$@" >> /tmp/cmd.log
    "$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Are you looking for set -x (or bash -x)? This writes every command to standard error just before it is executed.
1. Use script and everything will be archived.
2. Use -x for tracing your script, e.g. run it as bash -x script_name args....
3. Use set -x in your current bash (your commands will be echoed with globs and variables substituted).
4. Combine 2 and 3 with 1.
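For example, the -x trace from items 2 and 3 looks like this (illustrative session; the trace goes to standard error, each line prefixed with PS4, + by default):

$ set -x
$ name=world
+ name=world
$ echo "hello $name"
+ echo 'hello world'
hello world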
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
. saved.txt
fun123
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
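A small sketch of the idea (hypothetical paths):

# Collect file names on stdin and pass them to rm in batches,
# instead of hand-building one enormous command line.
find /tmp/build -name '*.o' -print0 | xargs -0 rm -f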

Can you wrapper each command in GNU's make?

I want to inject a transparent wrapper command around each shell command in a makefile. Something like the time shell command. (However, not the time command; this is a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
time /bin/sh "$@"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage that it's more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and could be visibly faster.
# Set Q=@ to not display command names
TIME = time
foo:
	$(Q)$(TIME) foo_compiler
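With that layout, both knobs can be overridden per invocation (usage sketch; the strace value is just an example of an alternative wrapper):

make                    # echoes and runs: time foo_compiler
make TIME=              # drop the wrapper entirely
make TIME='strace -f'   # substitute a different wrapper
make Q=@                # hide the command names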
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results
if [ "$1" == "-c" ] ; then
shift
fi
strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$#" | awk '{if (match("Process PID=\d+ runs in (64|32) bit",$0) == 0) {print $0}}'
# EOF
I don't think there is a way to do what you want within GNU make itself.
I have done things like modifying the PATH environment variable in the Makefile, so that a directory of links to my script, named after each of the binaries I wanted wrapped, was searched before the actual binaries. The script would then look at how it was called and exec the actual binary with the wrapping command,
i.e. exec time "$0" "$@"
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
Update
I defer to Gilles answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to not put the -c in, since that is required for most shells. -c is passed as the first argument ($1) and <command> is passed as a single argument string as the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
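Wiring such a wrapper into the Makefile is then a single assignment (a sketch; ./timeshell is a hypothetical name for the script above, made executable):

SHELL = ./timeshell

Every recipe line is now run as ./timeshell -c '<command>'.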

Is there a command-line shortcut for ">/dev/null 2>&1"

It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
An answer for a generic shell is best, but other shells would be interesting to know about too, especially bash or dash.
>& /dev/null
You can write a function for this:
function nullify() {
    "$@" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought I'd share it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells because &- is a bad file descriptor.
Note that this solution closes the file descriptors rather than redirecting them to /dev/null, which could potentially cause programs to abort.
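The failure mode is easy to reproduce (illustrative bash session):

$ echo hello >&-
bash: echo: write error: Bad file descriptor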
Most shells support aliases. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that oftentimes redirecting output is not really necessary. Many Unix and Linux programs accept a "silent flag", usually -n or -q, that suppresses any output and only returns an exit status.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
    do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
    do_something
fi
Edit: the (:) or |: based solutions might cause an error because : doesn't read stdin, though that might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # Should actually cause error or abortion
For all shells:
$ program 2>&1 >/dev/null
OR
$ program 2>&1|: # Should actually cause error or abortion
$ program 2>&1 > >(:) does not work for dash because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance: /proc/self/fd/6. This file is used as input file for the command your-command.
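You can see the substitution directly (illustrative session; the exact /dev/fd number varies):

$ echo >(true)
/dev/fd/63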
Note: A process trying to write to a closed file descriptor will get an EBADF (bad file descriptor) error, which is more likely to cause a program to abort than writing to a pipe such as | true, which would instead raise an EPIPE (broken pipe) error; see Charles Duffy's comment.
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
    printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
    eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
It seems to me that the most portable solution, and best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AutoHotkey (AHK; it's open source) in two tiny lines of code, which can translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it will rewrite it as "ugly.sh 2>&1 > /dev/null" or whatnot.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keyboard presses, but can't be triggered that easily.

Shell status codes in make

I use a Makefile (with GNU make running under Linux) to automate my grunt work when refactoring a Python script.
The script creates an output file, and I want to make sure that the output file remains unchanged in face of my refactorings.
However, I found no way to get the status code of a command to affect a subsequent shell if command.
The following rule illustrates the problem:
check-cond-codes:
	diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$$!=
	diff -q poalim report2008_4.csv; echo differences: =$$!=
The first 'diff' compares two equal files, and the second one compares two different files.
The output is:
diff report2008_4.csv report2008_4.csv-save-for-regression-testing; echo no differences: =$!=
no differences: ==
diff -q poalim report2008_4.csv; echo differences: =$!=
Files poalim and report2008_4.csv differ
differences: ==
So obviously '$$!' is the wrong variable to capture the status code of 'diff'.
Even using
SHELL := /bin/bash
at beginning of the Makefile did not solve the problem.
A variable holding the value I need would (if it exists at all) be used in an 'if' command in the real rule.
The alternative of creating a small ad-hoc shell script in lieu of writing all commands inline in the Makefile is undesirable, but I'll use it as a last resort.
Related:
How to make a failing shell command interrupt make
I think you're looking for the $? shell variable, which gives the exit code of the previous command. For example:
$ diff foo.txt foo.txt
$ echo $?
0
To use this in your makefile, you would have to escape the $, as in $$?:
all:
	diff foo.txt foo.txt ; if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Do note that each command in your rule body in make is run in a separate subshell. For example, the following will not work:
all:
	diff foo.txt foo.txt
	if [ $$? -eq 0 ] ; then echo "no differences" ; fi
Because the diff and the if commands are executed in different shell processes. If you want to use the output status from the command, you must do so in the context of the same shell, as in my previous example.
Use '$$?' instead of '$$!' (thanks to the 4th answer to Exit Shell Script Based on Process Exit Code).
Don't forget that each of your commands is being run in separate subshells.
That's why you quite often see something like:
my_target:
	do something && \
	do something else && \
	do last thing
And when debugging, don't forget the ever-helpful -n option, which will print the commands but not execute them, and the -p option, which will show you the complete make environment, including where the various bits and pieces have been set.
If you are passing the result code to an if, you could simply do:
all:
	if diff foo.txt foo.txt ; then echo "no differences" ; fi
The bash variable is $?, but why do you want to print out the status code anyway?
Try `\$?`. I think the $$ is being interpreted by the makefile.
