Direct output to standard output and an output file simultaneously - shell

I know that
./executable &>outputfile
will redirect the standard output and standard error to a file. This is what I want, but I would also like the output to continue to be printed in the terminal. What is the best way to do this?
OK, here is my exact command. I have tried
./damp2Plan 10 | tee log.txt
and
./damp2Plan 10 2>&1 | tee log.txt
where 10 is just an argument passed to main. Neither works correctly. The result is that the very first printf statement in the code goes to the terminal and to log.txt just fine, but none of the rest do. I'm on Ubuntu 12.04 (Precise Pangolin).

Use tee:
./executable 2>&1 | tee outputfile
tee writes its output in chunks, so there may be some delay before you see anything. If you want output closer to real time, you can redirect to a file as you are doing now and monitor it with tail -f in a different shell:
./executable &> outputfile
tail -f outputfile
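As for the symptom described in the question (only the first printf line showing up), that usually points to stdio block buffering: when stdout is a pipe rather than a terminal, C programs buffer output in large blocks, so lines appear late or only when the program exits. If damp2Plan uses C stdio and doesn't set its own buffering, forcing line-buffered output with stdbuf (GNU coreutils) may help; a sketch:
# force line-buffered stdout so each printf line reaches tee promptly
stdbuf -oL ./damp2Plan 10 2>&1 | tee log.txt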

Related

Redirect both stdout and stderr to file, print stdout only [duplicate]

This question already has answers here:
Separately redirecting and recombining stderr/stdout without losing ordering
(2 answers)
Closed 4 years ago.
I have a large amount of text coming in on stdout and stderr; I would like to log all of it to a file (in the same order), and print only what comes from stdout to the console for further processing (like grep).
Any combination of > file or &> file, even with | or |&, redirects the stream away so that I cannot pipe it afterwards:
my_command > output.log | grep something # logs only stdout, prints only stderr
my_command &> output.log | grep something # logs everything in correct order, prints nothing
my_command > output.log |& grep something # logs everything in correct order, prints nothing
my_command &> output.log |& grep something # logs everything in correct order, prints nothing
Any use of tee will either
print what comes from stderr first, then log and print everything that comes from stdout, so I lose the ordering of the text, or
log both in the correct order (if I use |& tee), but then I lose control over the streams, since everything now ends up on stdout.
example:
my_command | tee output.log | grep something # logs only stdout, prints all of stderr then all of stdout
my_command |& tee output.log | grep something # logs everything, prints everything to stdout
my_command | tee output.log 3>&1 1>&2 2>&3 | tee -a output.log | grep something # logs only stdout, prints all of stderr then all of stdout
Now I'm all out of ideas.
This is what my test case looks like:
testFunction() {
    echo "output";
    1>&2 echo "error";
    echo "output-2";
    1>&2 echo "error-2";
    echo "output-3";
    1>&2 echo "error-3";
}
I would like my console output to look like:
output
output-2
output-3
And my output.log file to look like:
output
error
output-2
error-2
output-3
error-3
For more details: I'm filtering the output of mvn clean install with grep to keep only minimal information in the terminal, but I would also like to have a full log somewhere in case I need to investigate a stack trace or something. The Java test logs are sent to stderr, so I chose to discard them in my console output.
While not really a solution that uses redirects or anything along those lines, you might want to use annotate-output for this.
Assume that script.sh contains your function, then you can do:
$ annotate-output ./script.sh
13:17:15 I: Started ./script.sh
13:17:15 O: output
13:17:15 E: error
13:17:15 E: error-2
13:17:15 O: output-2
13:17:15 E: error-3
13:17:15 O: output-3
13:17:15 I: Finished with exitcode 0
So now it is easy to reprocess that information and send it to the files you want:
$ annotate-output ./script.sh \
| awk '{s=substr($0,13)}/ [OE]: /{print s> "logfile"}/ O: /{print s}'
output
output-2
output-3
$ cat logfile
output
error
error-2
output-2
error-3
output-3
Or any other combination of tee sed cut ...
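To unpack that awk filter (same behaviour as the one-liner above, just spread out with comments):
annotate-output ./script.sh | awk '
    { s = substr($0, 13) }              # drop the "HH:MM:SS X: " prefix (first 12 characters)
    / [OE]: / { print s > "logfile" }   # stdout (O:) and stderr (E:) lines go to the log, in arrival order
    / O: /    { print s }               # only stdout lines are echoed to the terminal
'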
As per the comment from @CharlesDuffy:
Since stdout and stderr are processed in parallel, it can happen that some lines received on
stdout will show up before later-printed stderr lines (and vice-versa).
This is unfortunately very hard to fix with the current annotation strategy. A fix would
involve switching to PTRACE'ing the process. Giving nice a (much) higher priority over the
executed program could, however, cause this behaviour to show up less frequently.
source: man annotate-output

How to save the terminal screen output in piping command

I have a few commands that I'm piping together. The first command gives a big file output, while what ends up on the screen is only a very short statistical summary of it. The big file output is processed fine through the pipe, but I'd like to save the screen output to a text file, so my question is how to do that within the pipeline.
So far I've tried using tee with the redirections below:
&> someFile.txt
> someFile.txt
>> someFile.txt
But all of them gave me the big file output, whereas I'd like only the short screen output.
Any ideas how to do that?
If you just want the output of command_to_refine_big_output on stdout and in a file called log in the current directory, this works:
command_with_big_output | command_to_refine_big_output | tee log
Note that this only writes stdout to the log file; if you want stderr too, you can do:
command_with_big_output | command_to_refine_big_output 2>&1 | tee log
or, if you want all output, errors included, of the complete chain:
command_with_big_output 2>&1 | command_to_refine_big_output 2>&1 | tee log
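As a self-contained illustration, with seq standing in for the command with big output and wc -l for the refining command, only the one-line summary reaches both the screen and the log, while the bulk output flows through the pipe without ever hitting the terminal:
seq 1000000 | wc -l | tee log   # terminal and log both contain just "1000000"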

Unable to redirect output of a perl script to a file

Even though the question sounds annoyingly silly, I am stuck with this. The described issue occurs on both Ubuntu 14.04 and CentOS 6.3.
I am using a Perl script called netbps, as posted in the answer (by RedGrittyBrick) to https://superuser.com/questions/356907/how-to-get-real-time-network-statistics-in-linux-with-kb-mb-bytes-format-and-for
The above script basically takes the output of tcpdump (a command whose details we don't need to know here) and represents it in a different format. Note that the script does this in streaming mode (i.e., the output is produced on the fly).
Hence, my command looks like this:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl
And the output produced on the shell/console looks like this:
13:52:09 47.86 Bps
13:52:20 517.54 Bps
13:52:30 222.59 Bps
13:52:41 4111.77 Bps
I am trying to capture this output to a file, however, I am unable to do so. I have tried the following:
Redirect to file:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl > out.out 2>&1
This creates an empty out.out file. No output appears on the shell/console.
Pipe and grep:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl 2>&1 | grep "Bps"
No output appears on the shell/console.
I don't know much about Perl, but this seems to me like a buffering issue (not sure though). Any help will be appreciated.
It is a buffering problem. Add the line STDOUT->autoflush(1) to netbps and it will work.
STDOUT is line buffered when connected to a terminal, so the newline at the end of each printf triggers a flush; but once it's redirected to a file or a pipe, it is block buffered like any normal file. You can see this with...
$ perl -e 'while(1) { print "foo\n"; sleep 5; }'
vs
$ perl -e 'while(1) { print "foo\n"; sleep 5; }' > test.out
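The second version, with autoflush enabled, writes each line to test.out as soon as it is printed instead of when the buffer fills. A minimal sketch using $| = 1, which unbuffers the currently selected output handle (STDOUT here) without needing IO::Handle:
$ perl -e '$| = 1; while(1) { print "foo\n"; sleep 5; }' > test.out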

Why can't I redirect the output from sed to a file

I am trying to run the following command
./someprogram | tee /dev/tty | sed 's/^.\{2\}//' > output_file
But the file is always blank when I go to check it. If I remove > output_file from the end of the command, I am able to see the output from sed without any issues.
Is there any way that I can redirect the output from sed in this command to a file?
Remove output buffering from the sed command using the -u flag, and make sure what you want to log isn't on stderr:
-u, --unbuffered
load minimal amounts of data from the input files and flush the output buffers more often
Final command:
./someprogram | tee /dev/tty | sed -u 's/^.\{2\}//' > output_file
This happens with streams (typically a program that keeps sending output to stdout for its whole lifetime).
sed, grep, and other commands buffer their output in that situation, and you have to disable the buffering explicitly to get any output while the program is still running.
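The same idea applies to other filters: GNU grep has a --line-buffered switch, and stdbuf from GNU coreutils can force line-buffered stdout on stdio-based programs that have no such flag. Two variants of the pipeline above, assuming GNU tools (the grep pattern is just a placeholder):
./someprogram | tee /dev/tty | grep --line-buffered 'pattern' > output_file   # flush after every matching line
./someprogram | tee /dev/tty | stdbuf -oL sed 's/^.\{2\}//' > output_file     # stdbuf forces line-buffered output on sed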
You've got a stderr & stdout problem. Check out "In the shell, what does 2>&1 mean?" on this topic. It should fix you right up.

bash, nested commands and redirects

I am trying to track the CPU usage of a process using a command like this:
top -b -d 1 | grep myprocess.exe
Next, I would like to redirect this to a log file, e.g.
top -b -d 1 | grep myprocess.exe > output.log
Now, this does not actually work because it thinks I am grepping myprocess.exe > output.log
instead of myprocess.exe
Does anybody know how I can get this redirect to work?
Now, this does not actually work because it thinks I am grepping myprocess.exe > output.log instead of myprocess.exe
Wrong. All should be fine: redirections are handled by the shell before the command runs, so > output.log is never passed to grep as an argument. The first example executes the pipeline with grep's stdout set to your terminal (thus you see the output, but nothing is written to a file). The second example executes the pipeline with grep's stdout set to output.log (thus you don't see output, but it goes straight into your file).
If you want the output written to both, you need another process that gets your previous pipeline's stdout as stdin, and duplicates it. Like:
previous_pipeline | tee output.log
tee prints on stdout whatever it gets on stdin (so as far as stdout is concerned, everything is the same as before), but it additionally opens the file given as a command-line argument and writes a copy to it.
Try tee:
top -b -d 1 | grep myprocess.exe | tee output.log
If you want it to show no output:
top -b -d 1 | grep myprocess.exe | tee output.log > /dev/null
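One caveat for this particular pipeline: since top -b -d 1 streams forever, grep buffers its output when writing to a pipe instead of a terminal, so output.log may lag well behind. With GNU grep, forcing line-buffered output is a common workaround:
top -b -d 1 | grep --line-buffered myprocess.exe | tee output.log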

Resources