I'm guessing this has nothing to do with ack but more with bash:
Here we create file.txt containing the string foobar, so ack can find foobar in it:
> echo foobar > file.txt
> echo 'ack foobar file.txt' > ack.sh
> bash ack.sh
foobar
> bash < ack.sh
foobar
So far so good. But why doesn't ack find anything in it like this?
> cat ack.sh | bash
(no output)
or
> echo 'ack foobar file.txt' | bash
(no output)
Why doesn't ack find foobar in the last two cases?
Adding unbuffer (from expect) in front makes it work, which I don't understand:
> echo 'unbuffer ack foobar file.txt' | bash
foobar
Even stranger:
> cat ack2.sh
echo running
ack foobar file.txt
echo running again
unbuffer ack foobar file.txt
# Behaves as I'd expect
> bash ack2.sh
running
foobar
running again
foobar
# Strange output
> cat ack2.sh | bash
running
unbuffer ack foobar file.txt
What's up with this output? It echoes unbuffer ack foobar file.txt but not running again? Huh?
ack gets confused because stdin is a pipe rather than a terminal: it treats itself as a filter and reads stdin, which in the piped examples is the rest of your script, so bash never gets to execute those lines. That's why the ack2.sh run printed the later script line containing foobar (ack found a match in the script text) but never echoed running again. You need to pass the --nofilter option to force ack to treat stdin as a tty.
This:
# ack.sh
ack --nofilter foobar file.txt
works:
$ cat ack.sh | bash
foobar
If you ask me, that behaviour is quite unexpected. It is probably expected for someone who understands ack's concepts, which I don't at the moment. I would expect ack not to look at stdin when filename arguments are passed to it.
Why does unbuffer "solve" the problem?
unbuffer, according to its man page, does not attempt to read from stdin:
Normally, unbuffer does not read from stdin. This simplifies use of
unbuffer in some situations. To use unbuffer in a pipeline, use the -p
flag. ...
Looks like ack tries to be too smart about stdin here. If stdin is empty, it does not read from stdin and looks at the filenames passed to it instead. Again, IMO it would be correct not to look at stdin at all when filename arguments are present.
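You can check the tty test from the shell yourself; a quick sketch with the POSIX test builtin (this is presumably the kind of check ack performs on its stdin):
> test -t 0 && echo 'stdin is a tty' || echo 'stdin is a pipe'
stdin is a tty
> echo | { test -t 0 && echo 'stdin is a tty' || echo 'stdin is a pipe'; }
stdin is a pipe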
The big mismatch here is that ack was never intended to be used in shell scripts. It's meant as a command line tool for humans. That means that it makes some assumptions and optimizations for humans. For example, by default ack's output is different if it's going to a terminal vs. getting redirected in a pipe. There are also dangers in using ack in a shell script because its behavior can be affected by ackrc files and environment variables. If you're going to be using ack in a script, you should be using the --noenv flag. Better still, for shell scripts I'd use plain ol' grep.
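If you do end up calling ack from a script anyway, here is a minimal sketch combining the flags discussed above (assuming your ack version supports both):
#!/bin/sh
# --noenv: ignore ackrc files and ack's environment variables
# --nofilter: treat stdin as a tty even when it is a pipe
ack --noenv --nofilter foobar file.txt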
What is the use case that brought up this problem?
I agree that this is a bug: ack could look at stdin, but in a non-blocking way. It is a bug to hang on a pipe that's empty…
Related
In bash, calling foo would display any output from that command on the stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written as stdout, and both are then directed to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
Another way that works for me is,
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Redirections section of the Bash manual.
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, use tee with the -a option as follows:
ls -lR / | tee -a output.file
Something to add...
The unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting aside those troubles, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process writing output logs. The simplest solution to display the logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and written to the file named logfile.
You can do that for your entire script by putting something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while still letting everything go to stdout at the same time.
It uses some silly tricks:
use exec without a command to set up redirections,
use tee to duplicate output,
restart the script with the wanted redirections,
use a special first parameter (a single NUL character, specified with the $'string' bash notation) to flag that the script has been restarted (no equivalent parameter may be used by your original work),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly but useful for me in certain situations.
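For readability, here is the same trick unrolled into an if block, using an ordinary sentinel string instead of the NUL character (the names mylogfile and --already-logging are illustrative):
#!/usr/bin/env bash
# On the first run, restart this script with stderr merged into stdout
# and everything piped through tee; the sentinel marks the restarted run.
if [ "$1" != "--already-logging" ]; then
    set -o pipefail                                    # preserve the script's exit status
    ( exec 2>&1 ; "$0" --already-logging "$@" ) | tee mylogfile
    exit $?
fi
shift   # drop the sentinel before the real work starts
# do whatever you want
echo "this goes to the terminal and to mylogfile"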
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirect would happen as you.
Hint: tee also supports append with the -a flag, if you need to replace a line in a file as another user you could execute sed as the desired user.
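A sketch of the sed variant mentioned in the hint, assuming GNU sed's -i (edit in place) flag:
# sed runs as some_user, so the in-place rewrite of the file happens as that user
sudo -u some_user sed -i 's/^old line$/new line/' /some/path/some_file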
<command> |& tee filename # this creates the file "filename" with the command's output as its content; if the file already exists, its content is overwritten.
<command> | tee >> filename # this appends the output to the file, but does not print it to standard output (the screen).
I want to print something with echo on screen and append that echoed data to a file:
echo "hi there, have to print this on screen and append to a file" | tee -a logfile
tee is perfect for this, but the following would also do the job:
ls -lR / > output && cat output
I'm looking for an opposite of this:
Trick an application into thinking its stdin is interactive, not a pipe
I'd like to get the output of a command on stdout, but make it think it's writing into a pipe.
The usual solution is to | cat but I have the additional requirement that this is cross platform (ie sh, not bash) and returns a valid exit code if the command fails. Normally I would use pipefail but this isn't available everywhere.
I've tried various incantations of stty but haven't been successful.
You could always use a named pipe:
mkfifo tmp.pipe
# Reader runs in background
cat tmp.pipe &
# Producer in foreground
your_command > tmp.pipe
command_rtn=$?
rm tmp.pipe
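One caveat with this sketch: the background cat may still be draining the pipe when your_command finishes, so it can help to wait before removing it:
your_command > tmp.pipe
command_rtn=$?
wait            # let the background cat finish printing
rm tmp.pipe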
Alternately, if you don't need the output in realtime and the output is reasonably small:
output=$(your_command)
command_rtn=$?
echo "${output}"
Or you can write the exit status to a file doing something terrible like:
{ your_command; echo $? > rtn_file; } | cat
command_rtn=$(cat rtn_file)
I've got a script like:
#!/bin/bash
exec /usr/bin/some_binary > /tmp/my.log 2>&1
Problem is that some_binary sends all of its logging to stdout, and buffering makes it so that I only see output in chunks of a few lines. This is annoying when something gets stuck and I need to see what the last line says.
Is there any way to make stdout unbuffered before I do the exec that will affect some_binary so it has more useful logging?
(The wrapper script is only setting a few environment variables before the exec, so a solution in perl or python would also be feasible.)
GNU coreutils-8.5 also has the stdbuf command to modify I/O stream buffering:
http://www.pixelbeat.org/programming/stdio_buffering/
So, in your example case, simply invoke:
stdbuf -oL /usr/bin/some_binary > /tmp/my.log 2>&1
This will allow text to appear immediately line-by-line (once a line is completed with the end-of-line "\n" character in C). If you really want immediate output, use -o0 instead.
This way may be preferable if you do not want to introduce a dependency on expect via the unbuffer command. The unbuffer way, on the other hand, is needed if you have to fool some_binary into thinking that it is facing a real tty on standard output.
You might find that the unbuffer script that comes with expect may help.
Some command line programs have an option to modify their stdout stream buffering behaviour. So that's the way to go if the C source is available ...
# two command options ...
man file | less -p '--no-buffer'
man grep | less -p '--line-buffered'
# ... and their respective source code
# from: http://www.opensource.apple.com/source/file/file-6.2.1/file/src/file.c
if(nobuffer)
(void) fflush(stdout);
# from: http://www.opensource.apple.com/source/grep/grep-28/grep/src/grep.c
if (line_buffered)
fflush (stdout);
As an alternative to using expect's unbuffer script or modifying the program's source code, you may also try to use script(1) to avoid stdout hiccups caused by a pipe:
See: Trick an application into thinking its stdin is interactive, not a pipe
# Linux
script -c "[executable string]" /dev/null
# FreeBSD, Mac OS X
script -q /dev/null "[executable string]"
I scoured the internet for an answer, and none of this worked for uniq, which stubbornly buffers everything; the only thing that worked was stdbuf:
{piped_command_here} | stdbuf -oL uniq | {more_piped_command_here}
GNU Coreutils-8 includes a program called stdbuf which essentially does the LD_PRELOAD trick. It works on Linux and reportedly works on BSD systems.
An environment variable can set the terminal IO mode to unbuffered.
export NSUnbufferedIO=YES
This will set the terminal unbuffered for both C and Objective-C terminal output commands.
It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
>& /dev/null
You can write a function for this:
function nullify() {
"$#" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought of sharing it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells, since >&- simply closes the file descriptor.
Note that this solution closes the file descriptors rather than redirecting them to /dev/null, which could potentially cause programs to abort.
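You can see the failure mode directly (output from bash; the exact message varies by shell):
$ echo hi >&-
bash: echo: write error: Bad file descriptor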
Most shells support aliases. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that redirecting output is often not really necessary. Many Unix and Linux programs accept a "silent" or "quiet" flag, usually -q, that suppresses output and only returns an exit status indicating success or failure.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
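Since if tests a command's exit status directly, the $? check can be folded in:
if grep -q foo bar.txt; then
    do_something
fi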
Edit: the (:) or |: based solutions might cause an error because : doesn't read stdin. Though it might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # should actually cause an error or abort
For all shells:
$ program 2>&1 >/dev/null
OR
$ program 2>&1|: # should actually cause an error or abort
$ program 2>&1 > >(:) does not work for dash, because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance: /proc/self/fd/6. This file is used as input file for the command your-command.
Note: A process trying to write to a closed file descriptor will get an EBADF (bad file descriptor) error, which is more likely to cause an abort than writing to | true, which would only raise an EPIPE (broken pipe) error once the reader exits; see Charles Duffy's comment.
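A quick illustration of the process substitution described above (bash):
$ echo hello > >(tr a-z A-Z)   # stdout is redirected into the tr process
HELLO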
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
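For instance, with the list above, the loop defines functions such as:
okular() { command okular "$@" &>/dev/null; }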
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
Seems to me that the most portable solution, and best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AutoHotkey (open source) in two tiny lines of code, which can translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it rewrites it as "ugly.sh >/dev/null 2>&1" or whatnot.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keystrokes, but it can't be triggered as easily.
There seem to be two bash idioms for redirecting STDOUT and STDERR to a file:
fooscript &> foo
... and ...
fooscript > foo 2>&1
What's the difference? It seems to me that the first one is just a shortcut for the second one, but my coworker contends that the second one will produce no output even if there's an error with the initial redirect, whereas the first one will spit redirect errors to STDOUT.
EDIT: Okay... it seems like people are not understanding what I am asking, so I will try to clarify:
Can anyone give me an example where the two specific lines written above will yield different behavior?
From the bash manual:
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
The phrase "semantically equivalent" should settle the issue with your coworker.
The situation where the two lines have different behavior is when your script is not running in bash but some simpler shell in the sh family, e.g. dash (which I believe is used as /bin/sh in some Linux distros because it is more lightweight than bash). In that case,
fooscript &> foo
is interpreted as two commands: the first one runs fooscript in the background, and the second one truncates the file foo. The command
fooscript > foo 2>&1
runs fooscript in the foreground and redirects its output and standard error to the file foo. In bash I think the lines will always do the same thing.
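You can see the difference directly (in dash the command runs in the background with its output still going to the terminal, and foo is merely truncated):
$ dash -c 'echo hi &> foo'
hi
$ cat foo        # empty: dash parsed the line as "echo hi & > foo"
$ bash -c 'echo hi &> foo'
$ cat foo
hi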
The main reason to use 2>&1, in my experience, is when you want to append all output to a file rather than overwrite the file. With the &> syntax, you can't append (though newer versions of bash add &>> for this). So with 2>&1, you can write something like program >> alloutput.log 2>&1 and get stdout and stderr output appended to the log file.
&>foo is less typing than >foo 2>&1, and less error-prone (you can't get it in the wrong order), but achieves the same result.
2>&1 is confusing, because you need to put it after the 1> redirect, unless stdout is being redirected to a | pipe, in which case it goes before...
$ some-command 2>&1 >foo # does the unexpected
$ some-command >foo 2>&1 # does the same as
$ some-command &>foo # this and
$ some-command >&foo # compatible with other shells, but trouble if the filename is numeric
$ some-command 2>&1 | less # but the redirect goes *before* the pipe here...
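The numeric-filename caveat above in action (bash):
$ some-command >& 2      # 2 is numeric, so this duplicates stdout to stderr; no file named 2 is created
$ some-command >& foo    # foo is a filename, so both streams are written to the file foo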
&> foo # redirects both stdout and stderr to foo.
2>&1 # redirects stderr to wherever stdout currently points.
2>&1 depends on the order in which it is specified on the command line. Where &> sends both stdout and stderr to wherever, 2>&1 sends stderr to where stdout is currently going at that point in the command line. Thus:
command > file 2>&1
is different than:
command 2>&1 > file
where the former is redirecting both stdout and stderr to file, the latter redirects stderr to where stdout is going before it is redirected to the file (in this case, probably the terminal.) This is useful if you wanted to do something like:
command 2>&1 > file | less
Where you want to use less to page through the output of stderr and store the output of stdout to a file.
2>&1 might be useful for cases where you aren't redirecting stdout to somewhere else, but rather you just want stderr to be sent to the same place (such as the console) as stdout (perhaps if the command you are running is already sending stderr somewhere else other than the console).
I know it's a pretty old posting, but sharing what could answer the question:
* ...will yield different behavior?
Scenario:
When you are trying to use tee and want to preserve the exit code using process substitution in bash...
someScript.sh 2>&1 >( tee "toALog") # this fails to capture the output in the log
whereas:
someScript.sh >& >( tee "toALog") # works fine!
The first form is missing a redirection operator in front of the process substitution, so >( tee "toALog") is passed to the script as an extra argument rather than receiving its output.
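An alternative sketch that avoids process substitution entirely, using the pipefail option mentioned earlier:
set -o pipefail
someScript.sh 2>&1 | tee toALog
echo "exit status: $?"    # now reflects someScript.sh, not tee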