Unable to redirect output of a perl script to a file - bash

Even though the question sounds annoyingly silly, I am stuck with this. The described issue occurs on both Ubuntu 14.04 and CentOS 6.3.
I am using a perl script called netbps as posted in the answer (by RedGrittyBrick): https://superuser.com/questions/356907/how-to-get-real-time-network-statistics-in-linux-with-kb-mb-bytes-format-and-for
The above script basically takes the output of tcpdump (a command whose details we don't need to know here) and represents it in a different format. Note that the script does this in streaming mode (i.e., the output is produced on the fly).
Hence, my command looks like this:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl
And the output produced on the shell/console looks like this:
13:52:09 47.86 Bps
13:52:20 517.54 Bps
13:52:30 222.59 Bps
13:52:41 4111.77 Bps
I am trying to capture this output to a file, however, I am unable to do so. I have tried the following:
Redirect to file:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl > out.out 2>&1
This creates an empty out.out file. No output appears on the shell/console.
Pipe and grep:
tcpdump -i eth0 -l -e -n "src portrange 22-233333 or dst portrange 22-23333" 2>&1 | ./netbps.prl 2>&1 | grep "Bps"
No output appears on the shell/console.
I don't know much about perl, but this seems to me like a buffering issue -- not sure though? Any help will be appreciated.

It is a buffering problem. Add the line STDOUT->autoflush(1) to netbps and it will work.
STDOUT is normally line buffered when attached to a terminal, so the newline at the end of printf triggers a buffer flush, but because it's redirected to a file (or a pipe) it is block buffered like any normal file. You can see this with...
$ perl -e 'while(1) { print "foo\n"; sleep 5; }'
vs
$ perl -e 'while(1) { print "foo\n"; sleep 5; }' > test.out
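For comparison, here is a minimal sketch of the same demo with the suggested fix applied (IO::Handle is the core module that provides the autoflush method; setting $| = 1 is the classic equivalent). With autoflush on, test.out should grow by one line every 5 seconds even though STDOUT is redirected:
$ perl -MIO::Handle -e 'STDOUT->autoflush(1); while(1) { print "foo\n"; sleep 5; }' > test.out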

Related


Is it possible to use grep on a continuous stream?
What I mean is sort of a tail -f <file> command, but with grep on the output in order to keep only the lines that interest me.
I've tried tail -f <file> | grep pattern but it seems that grep can only be executed once tail finishes, that is to say never.
Turn on grep's line buffering mode when using BSD grep (FreeBSD, Mac OS X etc.)
tail -f file | grep --line-buffered my_pattern
It looks like a while ago --line-buffered didn't matter for GNU grep (used on pretty much any Linux) as it flushed by default (YMMV for other Unix-likes such as SmartOS, AIX or QNX). However, as of November 2020, --line-buffered is needed (at least with GNU grep 3.5 in openSUSE, but it seems generally needed based on comments below).
I use tail -f <file> | grep <pattern> all the time.
The output will wait until grep flushes its buffer, not until tail finishes (I'm using Ubuntu).
I think that your problem is that grep uses some output buffering. Try
tail -f file | stdbuf -o0 grep my_pattern
It sets grep's output buffering mode to unbuffered.
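If there are several filters chained together, each stage buffers on its own, so each one needs the same treatment. A sketch assuming GNU coreutils' stdbuf (-oL means line-buffered output; other_pattern is just a placeholder):
tail -f file | stdbuf -oL grep my_pattern | stdbuf -oL grep other_pattern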
If you want to find matches in the entire file (not just the tail), and you want it to sit and wait for any new matches, this works nicely:
tail -c +0 -f <file> | grep --line-buffered <pattern>
The -c +0 flag says that the output should start 0 bytes (-c) from the beginning (+) of the file.
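An equivalent, perhaps more common idiom counts lines instead of bytes (-n +1 means start at line 1 of the file); a sketch under the same assumptions:
tail -n +1 -f <file> | grep --line-buffered <pattern>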
In most cases, you can tail -f /var/log/some.log |grep foo and it will work just fine.
If you need to use multiple greps on a running log file and you find that you get no output, you may need to stick the --line-buffered switch into your middle grep(s), like so:
tail -f /var/log/some.log | grep --line-buffered foo | grep bar
You may consider this answer an enhancement of the previous ones. Usually I use
tail -F <fileName> | grep --line-buffered <pattern> -A 3 -B 5
-F is better in case the file gets rotated (-f will not follow the file properly after rotation)
-A and -B are useful for getting the lines just before and after the pattern occurrence; these blocks appear between dashed-line separators
But personally I prefer doing the following
tail -F <file> | less
This is very useful if you want to search inside streamed logs, i.e. go back and forth and take a closer look.
Didn't see anyone offer my usual go-to for this:
less +F <file>
ctrl + c
/<search term>
<enter>
shift + f
I prefer this, because you can use ctrl + c to stop and navigate through the file whenever, and then just hit shift + f to return to the live, streaming search.
sed would be a better choice (stream editor)
tail -n0 -f <file> | sed -n '/search string/p'
and then if you wanted the tail command to exit once you found a particular string:
tail --pid=$(($BASHPID+1)) -n0 -f <file> | sed -n '/search string/{p; q}'
Obviously a bashism: $BASHPID will be the process id of the tail command. The sed command is next after tail in the pipe, so the sed process id will be $BASHPID+1.
Yes, this will actually work just fine. Grep and most Unix commands operate on streams one line at a time. Each line that comes out of tail will be analyzed and passed on if it matches.
This one command works for me (SUSE):
mail-srv:/var/log # tail -f /var/log/mail.info |grep --line-buffered LOGIN >> logins_to_mail
collecting logins to mail service
Coming somewhat late to this question, and considering this kind of work an important part of any monitoring job, here is my (not so short) answer...
Following logs using bash
1. Command tail
This command is a little more powerful than the already published answers suggest.
Difference between follow option tail -f and tail -F, from manpage:
-f, --follow[={name|descriptor}]
output appended data as the file grows;
...
-F same as --follow=name --retry
...
--retry
keep trying to open a file if it is inaccessible
This means that by using -F instead of -f, tail will re-open the file(s) when they are removed (on log rotation, for example).
This is useful for watching log files over many days.
Ability to follow more than one file simultaneously
I've already used:
tail -F /var/www/clients/client*/web*/log/{error,access}.log /var/log/{mail,auth}.log \
/var/log/apache2/{,ssl_,other_vhosts_}access.log \
/var/log/pure-ftpd/transfer.log
For following events through hundreds of files... (consider the rest of this answer to understand how to make it readable... ;)
Using the -n switch (don't use -c, which counts bytes instead of lines!). By default tail will show the last 10 lines. This can be tuned:
tail -n 0 -F file
Will follow the file, but only new lines will be printed.
tail -n +0 -F file
Will print the whole file before following its progression.
2. Buffer issues when piping:
If you plan to filter the output, consider buffering! See the -u option for sed, --line-buffered for grep, or the stdbuf command:
tail -F /some/files | sed -une '/Regular Expression/p'
Is (besides being a lot more efficient than using grep) a lot more reactive than if you don't use the -u switch in the sed command.
tail -F /some/files |
sed -une '/Regular Expression/p' |
stdbuf -i0 -o0 tee /some/resultfile
3. Recent journaling system
On recent systems, instead of tail -f /var/log/syslog you have to run journalctl -xf, in much the same way...
journalctl -axf | sed -une '/Regular Expression/p'
But read the man page; this tool was built for log analysis!
4. Integrating this in a bash script
Colored output of two files (or more)
Here is a sample script that watches many files, coloring the output of the 1st file differently from the others:
#!/bin/bash
tail -F "$@" |
    sed -une "
        /^==> /{h;};
        //!{
            G;
            s/^\\(.*\\)\\n==>.*${1//\//\\\/}.*<==/\\o33[47m\\1\\o33[0m/;
            s/^\\(.*\\)\\n==> .* <==/\\o33[47;31m\\1\\o33[0m/;
            p;}"
This works fine on my host, running:
sudo ./myColoredTail /var/log/{kern.,sys}log
Interactive script
Maybe you are watching logs in order to react to events?
Here is a little script that plays a sound when a USB device appears or disappears; the same script could send mail, or do any other interaction, like powering on the coffee machine...
#!/bin/bash
exec {tailF}< <(tail -F /var/log/kern.log)
tailPid=$!
while :;do
    read -rsn 1 -t .3 keyboard
    [ "${keyboard,}" = "q" ] && break
    if read -ru $tailF -t 0 _ ;then
        read -ru $tailF line
        case $line in
            *New\ USB\ device\ found* ) play /some/sound.ogg ;;
            *USB\ disconnect* ) play /some/othersound.ogg ;;
        esac
        printf "\r%s\e[K" "$line"
    fi
done
echo
exec {tailF}<&-
kill $tailPid
You can quit by pressing the Q key.
You certainly won't succeed with
tail -f /var/log/foo.log |grep --line-buffered string2search
if you use colortail as an alias for tail, e.g. in bash:
alias tail='colortail -n 30'
You can check with
type tail
If this outputs something like
tail is aliased to `colortail -n 30'
then you have your culprit :)
Solution:
remove the alias with
unalias tail
ensure that you're using the 'real' tail binary with this command:
type tail
which should output something like:
tail is /usr/bin/tail
and then you can run your command
tail -f foo.log |grep --line-buffered something
Good luck.
Use awk (another great Unix utility) instead of grep when you don't have the line-buffered option! It will continuously stream your data from tail.
This is how you would use grep:
tail -f <file> | grep pattern
This is how you would use awk
tail -f <file> | awk '/pattern/{print $0}'
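Note that awk itself may block-buffer once its output is redirected to a file or another pipe. A hedged sketch that forces a flush after every match (fflush() is supported by gawk, mawk and BWK awk; matches.log is just a placeholder):
tail -f <file> | awk '/pattern/ { print; fflush() }' > matches.log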

Why can't I redirect the output from sed to a file?

I am trying to run the following command
./someprogram | tee /dev/tty | sed 's/^.\{2\}//' > output_file
But the file is always blank when I go to check it. If I remove > output_file from the end of the command, I am able to see the output from sed without any issues.
Is there any way that I can redirect the output from sed in this command to a file?
Remove output buffering from the sed command using the -u flag, and make sure what you want to log isn't on stderr.
-u, --unbuffered
load minimal amounts of data from the input files and flush the output buffers more often
Final command:
./someprogram | tee /dev/tty | sed -u 's/^.\{2\}//' > output_file
This happens with streams (usually a program sending output to stdout during its whole lifetime).
sed, grep and other commands do some buffering in those cases, and you have to explicitly disable it to get output while the program is still running.
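For filters that have no unbuffered flag of their own, GNU coreutils' stdbuf is a generic alternative to sed -u; a sketch under the same assumptions as the question (someprogram is the asker's hypothetical program):
./someprogram | tee /dev/tty | stdbuf -oL sed 's/^.\{2\}//' > output_file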
You've got a stderr & stdout problem. Check out "In the shell, what does 2>&1 mean?" on this topic. It should fix you right up.

Write output to file with tabs/text added in ksh script

I am writing a KornShell (ksh) script that is logging to a file. I am redirecting the output of one of my commands (scp) to the same file, but I would like to add a tab at the start of those lines in the log file if possible.
Is this possible to do?
EDIT: Also I should mention that the text I am redirecting is coming from stderr. My line currently looks like this:
scp -q ${wks}:${file_location} ${save_directory} >> ${script_log} 2>&1
Note: the below doesn't work for ksh (see this question for possible solutions).
You can probably do something like
my_command | sed 's/^/\t/' >> my.log
The idea is to process the output of the command with a stream editor like sed in some manner. In this case, a tab will be added at the beginning of every line. Consider:
$ echo -e 'Test\nfoobar' | sed 's/^/\t/'
        Test
        foobar
I haven't tested this in ksh, but a quick web search suggests that it should work.
Also note that some commands can write to both stdout and stderr, don't forget to handle it.
Edit: in response to the comment and the edit in the question, the adjusted command can look like
scp -q ${wks}:${file_location} ${save_directory} 2>&1 | \
sed 's/^/\t/' >> ${script_log}
or, if you want to get rid of stdout completely,
scp -q ${wks}:${file_location} ${save_directory} 2>&1 >/dev/null | \
sed 's/^/\t/' >> ${script_log}
The technique is described in this answer.

Direct output to standard output and an output file simultaneously

I know that
./executable &>outputfile
will redirect the standard output and standard error to a file. This is what I want, but I would also like the output to continue to be printed in the terminal. What is the best way to do this?
Ok, here is my exact command: I have tried
./damp2Plan 10 | tee log.txt
and
./damp2Plan 10 2>&1 | tee log.txt
where 10 is just an argument passed to main. Neither works correctly. The result is that the very first printf statement in the code does go to the terminal and log.txt just fine, but none of the rest do. I'm on Ubuntu 12.04 (Precise Pangolin).
Use tee:
./executable 2>&1 | tee outputfile
tee outputs in chunks and there may be some delay before you see any output. If you want closer to real-time output, you could redirect to a file as you are now, and monitor it with tail -f in a different shell:
./executable > outputfile 2>&1
tail -f outputfile
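If the delay actually comes from the program block-buffering its stdout once it is piped (common for programs using C stdio), GNU coreutils' stdbuf can force line buffering on the producer instead; a sketch, assuming the executable uses stdio and is dynamically linked:
stdbuf -oL ./executable 2>&1 | tee outputfile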

How to echo text and have output from a command be sent to a file in parallel

I would like to echo text and also remotely ping a computer and have the output be sent to a log file. I also need to do this in parallel but I am having some trouble with how the output is being sent into the log file.
I would like the output to look like:
host | hostTo | result of ping command
but since I have this running as a background process it is outputting:
host hostTo host hostTo rtt
rtt
rtt
etc...
Is there a way to allow this to be a background process but make it so that the echo is part of that process, so the logfile isn't out of order?
here's my script, thanks in advance!
for host in `cat data/ips.txt`; do
    echo -n "$host ";
    for hostTo in `cat data/ips.txt`; do
        {
            echo -n "$host $hostTo " >> logs/$host.log;
            (ssh -n -o StrictHostKeyChecking=no -o ConnectTimeout=1 -T username@$host ping -c 10 $hostTo | tail -1 >> logs/$host.log) &
        };
    done;
done
It's possible to do this with awk. What you're basically asking is how to print out the hosts as well as the result at the same time.
i.e. remove the lines with echo and change the following:
ssh .... ping -c 10 $hostTo | awk -v from=$host -v to=$hostTo 'END {print from, to, $0}' >> logs/${host}.log
Note that the tail is effectively being done inside awk too (in the END block, $0 still holds the last line read). Including shell vars inside awk tends to be a PITA; maybe there's an easier way to do it without all the escaping and quotes. [Updated: assign the vars in awk with -v.]
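For illustration only, a sketch of how the inner loop from the question might look with that change applied (same hypothetical paths, username placeholder and log layout as in the question):
for host in `cat data/ips.txt`; do
    for hostTo in `cat data/ips.txt`; do
        (ssh -n -o StrictHostKeyChecking=no -o ConnectTimeout=1 -T username@$host ping -c 10 $hostTo |
            awk -v from=$host -v to=$hostTo 'END {print from, to, $0}' >> logs/$host.log) &
    done
done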
PS. The title of your question is a little unclear; it does sound like you want to pipe your program output to the display and a file at the same time.
