No Output When Using tail -f With Multiple Grep Commands In Mac OS-X [duplicate] - macos

This question already has an answer here:
Read from a endless pipe bash [duplicate]
(1 answer)
Closed 7 years ago.
I'm trying to execute the following command from the Mac OS X terminal:
$tail -f FILE_PATH | grep "DESIRED_STRING" | grep -v "EXCLUDED_STRING"
Unfortunately I do not get any results in return.
But, when using cat instead of tail -f:
$cat FILE_PATH | grep "DESIRED_STRING" | grep -v "EXCLUDED_STRING"
I get the expected result. Unfortunately, this workaround is no good for me, as I need to tail the file in realtime.

grep buffers its output by default when writing to a pipe. Since tail -f never completes, neither do the greps, and you must wait until the last one has accumulated enough output to fill its buffer. With cat, the command eventually completes, allowing both greps to finish and print whatever output they have accumulated (whether or not their buffers were filled).
Adding --line-buffered to the grep commands changes how grep buffers its output, allowing you to see the output as each line is complete.
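Applied to the command from the question, that looks like:
$tail -f FILE_PATH | grep --line-buffered "DESIRED_STRING" | grep --line-buffered -v "EXCLUDED_STRING"
Strictly, only a grep whose output feeds another pipe needs the flag (the last one writes to the terminal, which is line-buffered anyway), but adding it to both is harmless.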


Is it possible to use grep on a continuous stream?
What I mean is sort of a tail -f <file> command, but with grep on the output in order to keep only the lines that interest me.
I've tried tail -f <file> | grep pattern but it seems that grep can only be executed once tail finishes, that is to say never.
Turn on grep's line buffering mode when using BSD grep (FreeBSD, Mac OS X etc.)
tail -f file | grep --line-buffered my_pattern
It seems that, a while ago, --line-buffered didn't matter for GNU grep (used on pretty much any Linux) as it flushed by default (YMMV for other Unix-likes such as SmartOS, AIX or QNX). However, as of November 2020, --line-buffered is needed (at least with GNU grep 3.5 on openSUSE, but it seems generally needed based on comments below).
I use tail -f <file> | grep <pattern> all the time.
It will wait until grep flushes, not until it finishes (I'm using Ubuntu).
I think that your problem is that grep uses some output buffering. Try
tail -f file | stdbuf -o0 grep my_pattern
This sets grep's output buffering mode to unbuffered.
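If fully unbuffered output is more than you need, stdbuf can also line-buffer; a minimal variant of the same command:
tail -f file | stdbuf -oL grep my_pattern   # -oL = line-buffered rather than unbuffered (-o0)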
If you want to find matches in the entire file (not just the tail), and you want it to sit and wait for any new matches, this works nicely:
tail -c +0 -f <file> | grep --line-buffered <pattern>
The -c +0 flag says that the output should start 0 bytes (-c) from the beginning (+) of the file, i.e. tail first prints the whole file and then keeps following it.
In most cases, you can tail -f /var/log/some.log |grep foo and it will work just fine.
If you need to use multiple greps on a running log file and you find that you get no output, you may need to stick the --line-buffered switch into your middle grep(s), like so:
tail -f /var/log/some.log | grep --line-buffered foo | grep bar
You may consider this answer an enhancement. Usually I use:
tail -F <fileName> | grep --line-buffered <pattern> -A 3 -B 5
-F is better in case of file rotation (-f will not follow properly if the file is rotated)
-A and -B are useful for getting the lines just before and after each pattern occurrence; these blocks will appear between dashed-line separators.
But for me, I prefer doing the following:
tail -F <file> | less
This is very useful if you want to search inside streamed logs: you can go back and forth and look closely.
Didn't see anyone offer my usual go-to for this:
less +F <file>
ctrl + c
/<search term>
<enter>
shift + f
I prefer this, because you can use ctrl + c to stop and navigate through the file whenever, and then just hit shift + f to return to the live, streaming search.
sed would be a better choice (stream editor)
tail -n0 -f <file> | sed -n '/search string/p'
and then if you wanted the tail command to exit once you found a particular string:
tail --pid=$(($BASHPID+1)) -n0 -f <file> | sed -n '/search string/{p; q}'
Obviously a bashism: $BASHPID will be the process id of the tail command. The sed command is next after tail in the pipe, so the sed process id will be $BASHPID+1.
Yes, this will actually work just fine. Grep and most Unix commands operate on streams one line at a time. Each line that comes out of tail will be analyzed and passed on if it matches.
This one command works for me (SUSE):
mail-srv:/var/log # tail -f /var/log/mail.info |grep --line-buffered LOGIN >> logins_to_mail
This collects logins to the mail service.
Coming somewhat late to this question, and considering this kind of work an important part of the monitoring job, here is my (not so short) answer...
Following logs using bash
1. Command tail
This command is a little more powerful than the already-published answers suggest.
Difference between follow option tail -f and tail -F, from manpage:
-f, --follow[={name|descriptor}]
output appended data as the file grows;
...
-F same as --follow=name --retry
...
--retry
keep trying to open a file if it is inaccessible
This means that by using -F instead of -f, tail will re-open the file(s) when they are removed (on log rotation, for example).
This is useful for watching logfiles over many days.
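A quick way to see the difference, as a minimal sketch (it assumes a writable app.log in the current directory):
tail -F app.log &             # -F keeps following across rotation
mv app.log app.log.1          # rotate the log away
touch app.log                 # tail -F notices and re-opens the new file
echo "new entry" >> app.log   # still shown; plain -f would have gone silent
kill %1                       # stop the background tail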
Ability to follow more than one file simultaneously
I've already used:
tail -F /var/www/clients/client*/web*/log/{error,access}.log /var/log/{mail,auth}.log \
/var/log/apache2/{,ssl_,other_vhosts_}access.log \
/var/log/pure-ftpd/transfer.log
For following events through hundreds of files... (see the rest of this answer to understand how to make this readable... ;)
Using the -n switch (don't use -c, which counts bytes, not lines!). By default tail will show the last 10 lines. This can be tuned:
tail -n 0 -F file
will follow the file, but only new lines will be printed.
tail -n +0 -F file
will print the whole file before following its growth.
2. Buffer issues when piping:
If you plan to filter outputs, consider buffering! See the -u option for sed, --line-buffered for grep, or the stdbuf command:
tail -F /some/files | sed -une '/Regular Expression/p'
is (besides being more efficient than using grep) a lot more responsive than it would be without the -u switch on the sed command.
tail -F /some/files |
sed -une '/Regular Expression/p' |
stdbuf -i0 -o0 tee /some/resultfile
3. Recent journaling system
On recent systems, instead of tail -f /var/log/syslog you have to run journalctl -xf, in nearly the same way...
journalctl -axf | sed -une '/Regular Expression/p'
But read the man page: this tool was built for log analysis!
4. Integrating this in a bash script
Colored output of two files (or more)
Here is a sample script watching many files, coloring output from the 1st file differently from the others:
#!/bin/bash
tail -F "$#" |
sed -une "
/^==> /{h;};
//!{
G;
s/^\\(.*\\)\\n==>.*${1//\//\\\/}.*<==/\\o33[47m\\1\\o33[0m/;
s/^\\(.*\\)\\n==> .* <==/\\o33[47;31m\\1\\o33[0m/;
p;}"
This works fine on my host when run as:
sudo ./myColoredTail /var/log/{kern.,sys}log
Interactive script
Maybe you are watching logs in order to react to events?
Here is a little script that plays a sound when a USB device appears or disappears; the same script could send mail or perform any other interaction, like powering on the coffee machine...
#!/bin/bash
exec {tailF}< <(tail -F /var/log/kern.log)
tailPid=$!
while :;do
read -rsn 1 -t .3 keyboard
[ "${keyboard,}" = "q" ] && break
if read -ru $tailF -t 0 _ ;then
read -ru $tailF line
case $line in
*New\ USB\ device\ found* ) play /some/sound.ogg ;;
*USB\ disconnect* ) play /some/othersound.ogg ;;
esac
printf "\r%s\e[K" "$line"
fi
done
echo
exec {tailF}<&-
kill $tailPid
You can quit by pressing the Q key.
You certainly won't succeed with
tail -f /var/log/foo.log |grep --line-buffered string2search
when you use "colortail" as an alias for tail, e.g. in bash:
alias tail='colortail -n 30'
you can check by running
type tail
if this outputs something like
tail is aliased to `colortail -n 30'
then you have your culprit :)
Solution:
remove the alias with
unalias tail
ensure that you're using the 'real' tail binary with this command:
type tail
which should output something like:
tail is /usr/bin/tail
and then you can run your command
tail -f foo.log |grep --line-buffered something
Good luck.
Use awk (another great shell utility) instead of grep when you don't have the line-buffered option! It will continuously stream your data from tail.
This is how you would use grep:
tail -f <file> | grep pattern
And this is how you would use awk:
tail -f <file> | awk '/pattern/{print $0}'
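One caveat: like grep, awk may block-buffer when its own output goes into yet another pipe. A hedged sketch using awk's fflush() to push each match through immediately (matches.log is just an illustrative name):
tail -f <file> | awk '/pattern/ { print; fflush() }' | tee matches.log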

How can I delete empty lines from my output with grep? [duplicate]

This question already has answers here:
Remove empty lines in a text file via grep
(11 answers)
Closed 4 years ago.
Is there a way to remove empty lines with cat myfile | grep -w #something?
I am looking for a simple way to remove empty lines from my output, in the way presented above.
This really belongs on the codegolfing stackexchange because it's not related to how anyone would ever write a script. However, you can do it like this:
cat myfile | grep -w '.*..*'
It's equivalent to the more canonical grep ., but adds explicit .*s on either side so that it will always match the complete line, thereby satisfying the word boundary conditions imposed by -w
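For comparison, the canonical form mentioned above:
grep . myfile
Note that both variants drop only truly empty lines; whitespace-only lines still contain a character and pass through.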
You can pipe your output to awk to easily remove empty lines
cat myfile | grep -w #something | awk NF
EDIT: so... you just want cat myfile | awk NF?
If you have to use grep, you can do grep -v '^[[:blank:]]*$' myfile

How to remove lines from the output of a command in a bash script [duplicate]

This question already has answers here:
Get last line of shell output as a variable
(2 answers)
Ignoring the first line of stderr and keeping stdout intact
(1 answer)
Cronjob - How to output stdout, and ignore stderr
(3 answers)
Closed 4 years ago.
I am trying to run a command that outputs multiple lines, e.g. ldapwhoami, and I'd like to make a bash script that prints only the last line instead of all of the command's output.
I tried the following
#!/bin/bash
$(ldapwhoami | sed 1d2d3d)
but it doesn't seem to work, any help would be appreciated.
To print only the final line use tail:
ldapwhoami | tail -n 1
To delete the first three lines with sed, change your command to:
ldapwhoami | sed '1d;2d;3d;'
note the semicolons and the quotes
Also possible with awk
ldapwhoami | awk 'NR > 3'
The above assumes that all output goes to standard output. In unix though there are two output streams that are connected to each process, the standard output (denoted with 1 - that is used for the output of the program), and the standard error (denoted with 2 - that is used for any diagnostic/error messages). The reason for this separation is that it is often desirable not to "pollute" the output with diagnostic messages, if it is processed by another script.
So for commands that generate output on both streams, if we want to capture both, we redirect the standard error to standard output using 2>&1, like this:
ldapwhoami 2>&1 | tail -n 1
(for awk and sed the same syntax is used)
In bash, the above may be written using shorthand form as
ldapwhoami |& tail -n 1
If all you need is the standard output, and you don't care about standard error, you can redirect it to /dev/null
ldapwhoami 2> /dev/null
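Putting the answer's pieces together for the original goal, a sketch that discards diagnostics and keeps only the last line of standard output:
ldapwhoami 2>/dev/null | tail -n 1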

Bash script sometimes not working [duplicate]

This question already has answers here:
How can I use a file in a command and redirect output to the same file without truncating it?
(14 answers)
Closed 5 years ago.
Okay, so I have a strange problem with the following piece of code:
who > tmp
cat tmp | awk '{print $1}' | sort | uniq > tmp
ps aux | grep -Fvf tmp
It is supposed to list the processes of all users not logged in at the moment. The problem is that it sometimes works and sometimes doesn't, and I have no idea what causes it. I can enter exactly the same commands and get 2 different results. I've narrowed the problem down to the 2nd line's > tmp redirect, which either saves the proper user list or wipes the file completely, and I have no idea why.
PS. I know it may not be the proper solution for the task, but it's what I came up with in the limited time I was given.
It's probably a timing issue: you're reading from and truncating the file in the same pipeline.
The simple solution is to not use temp files at all:
ps aux | grep -Fvf <(who | awk '{print $1}' | sort -u)
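If a temp file is genuinely needed (say, for inspection afterwards), a sketch that avoids the race by never reading and truncating the same file within one pipeline:
who | awk '{print $1}' | sort -u > tmp   # write the user list first
ps aux | grep -Fvf tmp                   # then read it in a separate command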

unusual chaining of "grep" in the shell

I stumbled upon a shell instruction which looks odd:
ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t ; echo $?
I suspect that the returned value would represent an error. Could anyone confirm this or explain in which circumstances this instruction would be applied?
The first grep only allows through lines that contain qmail (preceded by any character and followed by a dash, but that is largely immaterial). The second grep strips out lines that contain mail, which means every line passed by the first grep is deleted by the second. There's nothing left for the third one to process, so the file t will always be empty. The value for $? should be 1, failure, since the third grep failed to find any lines that matched its pattern (because it got no lines to process).
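You can reproduce that exit status in isolation; a minimal demonstration:
echo ".qmail-foo" | grep "qmail-" | grep -v "mail" | grep ".mail" ; echo $?   # prints 1
The pipeline's status is that of its last command, and the final grep exits 1 because it received no lines to match.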
It is a mistake.
It is hard to know how to fix it without knowing what it is trying to do.
The bash shell (and most other shells) lets users use the output of one command as the input of another. This is accomplished with the | operator, which is called a pipe. So the output of ls -a is fed to grep ".qmail-" and so on. The > operator sends the output of the command to a file, in this case t. So ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t lists the contents of a directory and passes the output through successive filters before finally saving the output to the file t.
The semicolon signals the end of a command and allows multiple bash commands to be entered on a single line.
echo $? prints out the return value of the last executed command, in this case ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t. By convention, any value besides 0 indicates some sort of error occurred. The Linux Documentation Project gives a list of some common exit codes.
