I am using exim 4.92.3 on CentOS 7.8.
I wanted to capture all the output from the command I use for testing alias resolution (exim -d -bt adres@domain |& tee exim-test.out), but only stdout was displayed on the terminal and written to the file. When I split the outputs with exim [...] 1>1.out 2>2.out, the streams are separated and recorded as expected. How do I send both stdout and stderr from exim to one file, and why is it behaving like this?
Thank you in advance for your help.
why is it behaving like this?
This can only be answered if you specify which shell you are using. It may be one that doesn't offer |&.
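For reference, in bash 4+ and zsh, |& is just shorthand for 2>&1 |, while a plain POSIX sh such as dash rejects it with a syntax error. So, where it is supported, these two pipelines are equivalent:
exim -d -bt adres@domain |& tee exim-test.out        # bash 4+ / zsh shorthand
exim -d -bt adres@domain 2>&1 | tee exim-test.out    # portable spelling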
How do I send both stdout and stderr from exim to one file
2>&1 will work, i.e. exim -d -bt adres@domain 2>&1 | tee exim-test.out.
tee is changing the order of lines
You may be able to avoid the perceived reordering by prepending stdbuf -oL to the exim command.
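For example (a sketch, assuming GNU coreutils' stdbuf, which CentOS 7 provides):
stdbuf -oL exim -d -bt adres@domain 2>&1 | tee exim-test.out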
Related
I am trying to find out when and how the cache is flushed, and planned to use the command redis-cli monitor | grep -iE "del|flush" > redis_log.txt for that, but for some reason the file is empty. If I use the command without the > redis_log.txt part, it shows the correct output in the terminal; if I use the command redis-cli monitor > redis_log.txt, it also saves the actual output to the file; but together it fails, and only an empty file is created. Has anybody met a similar issue before?
As mentioned in the comments, the issue you notice almost certainly comes from the I/O buffering applied to the grep command, especially when its standard output is not attached to a terminal but redirected to a file or similar.
To be more precise, see e.g. this nice blog article which concludes with this wrap-up:
Here’s how buffering is usually set up:
STDIN is always buffered.
STDERR is never buffered.
if STDOUT is a terminal, line buffering will be automatically selected. Otherwise, block buffering (probably 4096 bytes) will be used.
[…] these 3 points explain all “weird” behaviors.
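You can reproduce this without redis at all. In the sketch below, "one" shows up on the terminal immediately, but with the redirection in place out.txt stays empty for about 5 seconds, until grep exits and flushes its block buffer:
{ echo one; sleep 5; echo two; } | grep .            # "one" appears at once
{ echo one; sleep 5; echo two; } | grep . > out.txt  # out.txt stays empty until grep exits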
General solution
To tweak the I/O streams buffering of a program, a very handy program provided by coreutils is stdbuf.
So for your use case:
you may want to replace grep -iE "del|flush" with stdbuf -o0 grep -iE "del|flush" to completely disable STDOUT buffering;
or, if you'd like some trade-off and just want STDOUT line-buffering, replace it with either stdbuf -oL grep -iE "del|flush" or grep --line-buffered -iE "del|flush".
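For instance, the line-buffered variant of the original pipeline would be:
redis-cli monitor | grep --line-buffered -iE "del|flush" > redis_log.txt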
Wrap-up
Finally, as suggested by @jetchisel, you'll probably want to redirect STDERR as well to your log file in order not to miss some error messages… Hence, for example:
redis-cli monitor | stdbuf -o0 grep -iE "del|flush" > redis_log.txt 2>&1
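Note that the trailing 2>&1 only captures grep's stderr; if you also want error messages from redis-cli itself in the file, redirect its stderr into the pipe instead:
redis-cli monitor 2>&1 | stdbuf -o0 grep -iE "del|flush" > redis_log.txt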
I would like to write some filtered output of a given command (rsync) into a file but keep the complete unfiltered output on stdout (the screen/terminal).
I've tried some combinations of sed, tee & process substitution but cannot make this work.
Here's what I've got so far:
rsync -aAXz --stats -v src dest > >(sed '0,/^$/d' | tee -a "summary.log")
sed '0,/^$/d' deletes everything before the first blank line which leaves rsync's summary and deletes the leading verbose output. This is working as expected and only prints the summary to summary.log.
Obviously it also deletes the verbose output from stdout since the tee command only receives the filtered sed output over the pipe.
How can I write to stdout before filtering with sed to see all the verbose output on the screen/terminal?
To write your complete output from stdout to the log file, do that first (using tee), and then deal with the terminal. Something like this:
rsync -aAXz --stats -v src dest | tee -a "summary.log" | sed '0,/^$/d'
That "splits" or duplicates the output stream, with one copy of the complete stream being diverted by tee to the output log, and another copy being sent to its stdout, which becomes the input to sed....
I am unable to see what I receive through the MQTT/mosquitto stream by means of echoing.
My code is as follows:
#!/bin/bash
`mosquitto_sub -d -t +/# >>mqtt_log.csv`
mqtt_stream_variable=`sed '$!d' mqtt_log.csv`
echo "$mqtt_stream_variable"
The first line subscribes to the MQTT stream and appends the output to the mqtt_log.csv file. Then I run sed '$!d' mqtt_log.csv so the last line's value gets assigned to the mqtt_stream_variable variable, which I later echo.
When I execute this, I don't see anything echoed, and I was curious how I could achieve this. When I cat mqtt_log.csv there are things in there, so the mosquitto_sub -d -t +/# >>mqtt_log.csv part is working. It's just the echoing that is being problematic.
Ideally, after mqtt_stream_variable=`sed '$!d' mqtt_log.csv`, I would like to play around with the values in mqtt_log.csv (as it's a CSV string). So by echoing I can see what the mqtt_stream_variable variable holds.
The mosquitto_sub command will never return, so the script blocks on the first line; sed would only ever get to read the still-empty file before any messages are written to it, and then exit.
How about something like this
#!/bin/bash
mosquitto_sub -d -t +/# | tee -a mqtt_log.csv | sed '$!d'
No need for all the subshells; pipes will get you what you want.
The only other thing: why the need for both wildcards in the topic? +/# should be the same as just # (you will probably need to wrap the # in quotes on its own).
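And if the end goal is to react to each message rather than just log it, the same pipeline can feed a while read loop; a rough sketch, assuming each payload arrives as a single CSV line (the field names are made up):
mosquitto_sub -t '#' | tee -a mqtt_log.csv | while IFS=, read -r first second rest; do
    echo "latest message: $first / $second"   # do something with the fields here
done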
For context, I'm attempting to create a shell script that simplifies the realtime console output of ffmpeg, only displaying the current frame being encoded. My end goal is to use this information in some sort of progress indicator for batch processing.
For those unfamiliar with ffmpeg's output, it outputs encoded video information to stdout and console information to stderr. Also, when it actually gets to displaying encode information, it uses carriage returns to keep the console screen from filling up. This makes it impossible to simply use grep and awk to capture the appropriate line and frame information.
The first thing I've tried is replacing the carriage returns using tr:
$ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'
This works in that it displays realtime output to the console. However, if I then pipe that information to grep or awk or anything else, tr's output is buffered and is no longer realtime. For example: $ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n' > log.txt results in a file that is immediately filled with some information; then 5-10 secs later, more lines get dropped into the log file.
At first I thought sed would be great for this: $ ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | sed 's/\r/\n/', but it gets to the line with all the carriage returns and waits until the processing has finished before it attempts to do anything. I assume this is because sed works on a line-by-line basis and needs the whole line to have completed before it does anything else, and then it doesn't replace the carriage returns anyway. I've tried various regexes for the carriage return and newline, and have yet to find a solution that replaces the carriage return. I'm running OSX 10.6.8, so I am using BSD sed, which might account for that.
I have also attempted to write the information to a log file and use tail -f to read it back, but I still run into the issue of replacing carriage returns in realtime.
I have seen that there are solutions for this in python and perl, however, I'm reluctant to go that route immediately. First, I don't know python or perl. Second, I have a completely functional batch processing shell application that I would need to either port or figure out how to integrate with python/perl. Probably not hard, but not what I want to get into unless I absolutely have to. So I'm looking for a shell solution, preferably bash, but any of the OSX shells would be fine.
And if what I want is simply not doable, well I guess I'll cross that bridge when I get there.
If it is only a matter of output buffering by the receiving application after the pipe, then you could try using gawk (or some BSD awks) or mawk, which can flush buffers. For example, try:
... | gawk '1;{fflush()}' RS='\r\n' > log.txt
Alternatively, if your awk does not support this, you could force it by repeatedly closing the output file and appending the next line:
... | awk '{sub(/\r$/,x); print>>f; close(f)}' f=log.out
Or you could just use shell, for example in bash:
... | while IFS= read -r line; do printf "%s\n" "${line%$'\r'}"; done > log.out
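Note that read waits for a newline, so this last variant still stalls on output that ends in bare carriage returns; in bash you could make the carriage return itself the delimiter (a hedged variant):
... | while IFS= read -r -d $'\r' chunk; do printf '%s\n' "$chunk"; done > log.out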
Libc uses line buffering when stdout is connected to a terminal and full buffering (with a 4 KB buffer) when it is connected to a pipe; stderr is normally unbuffered either way. This happens in the process generating the output, not in the receiving process: in your case it's ffmpeg's fault, not tr's.
Try using unbuffer or stdbuf to disable output buffering:
unbuffer ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'
stdbuf -e0 -o0 ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | tr '\r' '\n'
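As far as I know, neither tool ships with stock OSX 10.6: unbuffer comes with the expect package and stdbuf with GNU coreutils, so you may need to install one of them first (e.g. via MacPorts or Homebrew).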
The buffering of data between processes in a pipe is controlled by some system limits, which at least on my system (Fedora 17) are not possible to modify:
$ ulimit -a | grep pipe
pipe size (512 bytes, -p) 8
$ ulimit -p 1
bash: ulimit: pipe size: cannot modify limit: Invalid argument
$
Although this buffering mostly governs how much excess data the producer is allowed to produce before it is stopped when the consumer is not consuming at the same speed, it might also affect the timing of delivery of smaller amounts of data (not quite sure of this).
That is the buffering of pipe data, and I do not think there is much to tweak there. However, the programs reading/writing the piped data may also buffer their stdin/stdout, and that is what you want to avoid in your case.
Here is a perl script that should do the translation with minimal input buffering and no output buffering:
#!/usr/bin/perl
use strict;
use warnings;
use Term::ReadKey;

my $ReadKeyTimeout = 10; # seconds; stop if no input arrives within this time
$| = 1;                  # OUTPUT_AUTOFLUSH: do not buffer STDOUT

while ( my $key = ReadKey($ReadKeyTimeout) ) {
    if ( $key eq "\r" ) {
        print "\n";      # translate each carriage return into a newline
        next;
    }
    print $key;          # pass every other character through unchanged
}
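Saved as, say, cr2nl.pl (the name is arbitrary) and made executable, it would slot into the pipeline in place of tr:
ffmpeg -i "ScreeningSchedule-1.mov" -y "test.mp4" 2>&1 | ./cr2nl.pl | tee log.txt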
However, as already pointed out, you should make sure that ffmpeg does not buffer its output if you want real-time response.
When I go and type
./script.txt
It displays the output in the terminal, but if I want to display it on the screen and store it in a file at the same time, how do I do this? Because if I do
./script.txt >> example.txt
it will only store it.
try
./script.txt 2>&1 | tee -a example.txt
The 2>&1 redirects stderr into the stdout stream. Now both streams come through the pipe to tee, which writes a copy of its input to the file AND also sends a copy on to its stdout.
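A quick way to convince yourself that both streams reach both the screen and the file:
{ echo out; echo err >&2; } 2>&1 | tee -a example.txt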
I hope this helps.
P.S. you're not really naming your scripts with .txt extension are you? ;-)