I need to redirect command output either to a log file, or to both the screen and a log file, depending on the VERBOSE environment variable. The log file name depends on the target.
I am trying this, but it doesn't work:
ifeq "$(VERBOSE)" "yes"
OUTPUT := 2>&1 | tee $$#.log
else
OUTPUT := 1>$$#.log 2>&1
endif
target:
my_command $(OUTPUT)
But I end up with a .log file instead of target.log.
That is, when the rule is executed, say for $ make VERBOSE=yes target, make sees the rule as
target:
	my_command 2>&1 | tee .log
instead of
target:
	my_command 2>&1 | tee $@.log
How can this be fixed?
You can't use := if you want the variable's value to reference other variables that are not set yet: := expands the value immediately, when the variable is assigned, and once expanded it is never re-expanded later. $@ is not set until the rule is executed. Change the assignment to = and remove the $$ escape:
ifeq "$(VERBOSE)" "yes"
OUTPUT = 2>&1 | tee $#.log
else
OUTPUT = 1>$#.log 2>&1
endif
and it will work.
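For illustration (using the target name from the question), running make VERBOSE=yes target now expands the recipe at execution time to:
my_command 2>&1 | tee target.log
and a plain make target expands it to:
my_command 1>target.log 2>&1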
I designed a custom script to grep a concatenated list of .bash_history backup files. In my script, I create a temporary file with mktemp and save its path in a variable, temp. Next, I redirect output to that file using the cat command.
Is there a way to create a temporary file (using mktemp), redirect output to it, and store its name in a variable in one command, while preserving newline characters?
The snippet of code below works just fine, but I have a feeling there is a more terse and canonical way to achieve this in one line – maybe using process substitution or the like.
# Concatenate all .bash_history files into a temporary file `temp`.
temp="$(mktemp)"
cat "$HOME/.bash_history."* > $temp
trap 'rm -f $temp' 0
# Set `HISTFILE` shell variable to the `temp` file.
HISTFILE="$temp"
keyword="$1"
# Search for `keyword` using the `history` command
if [[ "$keyword" ]]; then
# Enable history
set -o history
history | grep "$keyword"
# Disable history
set +o history
else
echo -e "usage: search <keyword>"
exit 0
fi
If you're comfortable with the side effect of making the assignment conditional on tempfile not previously having a nonempty value, this is straightforward via the ${var:=value} expansion:
cat "$HOME/.bash_history" >"${tempfile:=$(mktemp)}"
cat myfile.txt | f=`mktemp` && cat > "${f}"
I guess there is more than one way to do it. I found the following to work for me:
cat myfile.txt > $(echo "$(mktemp)")
Also don't forget about tee:
cat myfile.txt | tee "$(mktemp)" > /dev/null
What we have is a set of commands to be executed, including script files and a command string that can be eval'd.
We want to execute these commands and store the stderr output in a variable (let's say "err") and the combined output of stderr and stdout in another variable ("combined").
e.g.
#!/bin/bash
cmds="<cmd1>; <cmd2>; <cmd3>;"
<cmd4>;
<cmd5>;
<cmd6>;
<cmd7>;
eval $cmds;
./myscript.sh
err=<some magic>
combined=<some magic>
So, the variable $err should contain all the errors and $combined should contain the combined output of the commands, in exactly the order in which the commands were executed.
You can write your output and errors to separate files and, at the end, read them back into variables.
To collect output, use foo >>outputs.file, and for errors use foo 2>>errors.file, so the full command looks like foo >>outputs.file 2>>errors.file.
Here foo is your command and >> means append to the file; if you use a single > it will truncate the file before writing to it.
To put the file contents into a variable, use myvar=$(cat outputs.file) (no spaces before or after the equals sign).
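Putting those pieces together, a minimal sketch (cmd1, cmd2, and myscript.sh stand in for your actual commands; the file names are arbitrary):
# append stdout and stderr of each command to separate files
{ cmd1; cmd2; ./myscript.sh; } >>outputs.file 2>>errors.file
# read the files back into variables (command substitution preserves interior newlines)
out=$(cat outputs.file)
err=$(cat errors.file)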
#!/bin/bash
cmds="<cmd1>; <cmd2>; <cmd3>;"
(
exec 3>"combined.log" 2> >(tee "err.log" >&3) 1> >(tee "out.log" >&3)
<cmd4>;
<cmd5>;
<cmd6>;
<cmd7>;
eval $cmds;
./myscript.sh
)
out=$(cat "out.log")
err=$(cat "err.log")
combined=$(cat "combined.log")
echo -e "OUT :: \n\n$out \n\n"
echo -e "ERR :: \n\n$err \n\n"
echo -e "COMB :: \n\n$combined \n\n"
The redirection 3>"combined.log" points file descriptor 3 at the file combined.log.
The commands are executed in a subshell, in which we tee stderr and stdout to separate files and then send both streams on to fd 3 (and thus into combined.log).
Editor's note: This question has undergone several revisions that altered the nature of the problem, potentially invalidating older comments and answers; its original title was "Cannot redirect Bash output to /dev/null".
I'm trying to create a small Bash script for which I'd like to implement a silent mode. Basically I'd just like to hide most of the output by default and, in case I run it with -v, show all output.
So at the beginning of my script I've put:
#!/bin/bash
declare output=''
if [ "$1" = -v ]; then
output="&1"
else
output="/dev/null"
fi
Then in my scripts I have:
{
# whatever commands...
} > $output 2>&1
The silent mode works well. However, if I try the verbose mode, a file called either & or &1 (based on my output variable) is being created. Also, the file is empty. This is not the desired outcome: I want the output to stay on the terminal!
Is this due to the order in which I redirect or is it my variable which is wrong? EDIT: The script now works fully, by replacing "&1" as output value with "/dev/tty".
Most probably the error output is getting printed. Note that with > /dev/null only stdout is redirected and stderr is not. You can add 2>&1 after the stdout redirection to send stderr to the same place. Something like:
{
# whatever commands...
} > $output 2>&1
tl;dr
You can't store redirections to file descriptors (&1 in your case) in variables - if you do, you'll create a file by that name instead (a file literally named &1 in your case).
Setting $output to /dev/tty is ill-advised, because you then always output to the terminal, which prevents capturing the verbose output in a file; e.g., script -v > file won't work.
Take gniourf_gniourf's advice from the comments, and use exec instead: exec > /dev/null 2>&1 or, using Bash-specific syntax, exec &>/dev/null silences the remainder of the script.
Redirecting to a file descriptor only works if the redirection target is a literal, not a variable reference, and if there are no spaces between the > and the target.
date > &1 # !! SYNTAX ERROR: syntax error near unexpected token `&'
date >&1 # OK - but works with a *literal* `&1` only
If you use a variable, the spacing doesn't matter, because the variable content is invariably interpreted as a filename - that's why you didn't get a syntax error, but wound up with an output file literally named &1 instead.
output='&1'
date >$output # !! Creates a *file* named '&1'.
date > $output # !! Ditto.
To solve your problem, I suggest taking gniourf_gniourf's advice from the comments: use exec instead:
#!/usr/bin/env bash
if [[ $1 == '-v' ]]; then
: # verbose mode: redirect nothing
else
# quiet mode: silence both stdout and stderr
exec &>/dev/null
fi
echo 'hello' # will only print with -v specified
echo 'hello' >&2 # ditto
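For illustration, assuming the script above is saved as script.sh and made executable, a quick test might look like this (both echo lines reach the terminal only in verbose mode):
$ ./script.sh -v
hello
hello
$ ./script.sh
$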
When I try to grep a pattern and write to a file, sometimes it complains that the file bar.txt already exists, so I have to use >> instead of > to write to it at all.
grep 'pattern' foo.txt >> bar.txt
But if the file doesn't exist, using >> complains about "no such file or directory". Is there a way for the shell to make the decision automatically: if the file doesn't exist, create it; if it exists, overwrite it?
Extracted from the tcsh man page:
> name
...
If the shell variable noclobber is set, then the file must not
exist or be a character special file (e.g., a terminal or
`/dev/null') or an error results. This helps prevent accidental destruction of files.
...
>> name
...
Like `>', but appends output to the end of name. If the shell
variable noclobber is set, then it is an error for the file not
to exist, unless one of the `!' forms is given.
So ... it sounds to me as if you have "noclobber" set.
% set noclobber
% echo foo >> bar
bar: No such file or directory.
% echo foo > bar
% echo foo > bar
bar: File exists.
% unset noclobber
% echo foo > bar
If this special shell variable was set in your .tcshrc or .cshrc or .login, you can unset it. Or, if it is on by default or set in a system-wide shell startup file, simply append a line to your rc file:
unset noclobber
And you should be good to go.
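Alternatively, if you would rather leave noclobber set, tcsh's `!' forms (the ones the man page excerpt above alludes to) override it for a single redirection; for example, with the grep command from the question:
% echo foo >! bar
% grep 'pattern' foo.txt >! bar.txt
Both create the file if it does not exist and overwrite it if it does, even with noclobber set.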
I want a grep pipeline to succeed if no lines are selected:
set -o errexit
echo foo | grep -v foo
# no output, pipeline returns 1, shell exits
! echo foo | grep -v foo
# no output, pipeline returns 1 reversed by !, shell continues
Similarly, I want the pipeline to fail (and the enclosing shell to exit, when errexit is set) if any lines come out the end. The above approach does not work:
echo foo | grep -v bar
# output, pipeline returns 0, shell continues
! echo foo | grep -v bar
# output, ???, shell continues
! (echo foo | grep -v bar)
# output, ???, shell continues
I finally found a method, but it seems ugly, and I'm looking for a better way.
echo foo | grep -v bar | (! grep '.*')
# output, pipeline returns 1, shell exits
Any explanation of the behavior above would be appreciated as well.
set -e, aka set -o errexit, only handles unchecked commands; its intent is to prevent errors from going unnoticed, not to be an explicit flow control mechanism. It makes sense to check explicitly here, since it's a case you actively care about as opposed to an error that could happen as a surprise.
echo foo | grep -v bar && exit
More to the point, several of your commands make it explicit that you care about (and thus are presumably already manually handling) exit status -- thus setting the "checked" flag, making errexit have no effect.
Running ! pipeline, in particular, sets the checked flag for the pipeline -- it means that you're explicitly doing something with its exit status, thus implying that automatic failure-case handling need not apply.
This is not a one-liner, but it works:
if echo foo | grep foo ; then
exit 1
fi
Meaning the shell exits with 1 if grep finds something. Also, I don't think set -e or set -o errexit should be used for logic; it should be used for what it is meant for: stopping your script if an unexpected error occurs.