Log file pattern matching (bash)

I want to monitor an application log file for specific error patterns, but only in the content added during the last 10 minutes (or since the script last ran). Please note I don't want to scan the entire log file, only the lines appended in the last 10 minutes; when the pattern is matched I want it displayed on screen. I'm not sure how to achieve this with a script.

FILE=logfile
lines=$(wc -l < "$FILE")              # line count at startup
while sleep 600; do                   # wake up every 10 minutes
    clines=$(wc -l < "$FILE")         # current line count
    diff=$((clines - lines))          # lines added since the last check
    if [ "$diff" -gt 0 ]; then        # guard against rotation/truncation
        tail -n "$diff" "$FILE" | grep PATTERN
    fi
    lines=$clines
done

What you appear to be describing is commonly achieved at a console with:
tail -F /path/to/my/file | grep "pattern"
This is an idiom used by many system administrators (-F, unlike -f, keeps following the file across log rotation).
There's another approach for when you want to be alerted if a particular event is logged, but don't want to watch for it yourself.
The Simple Event Correlator (SEC) is a Perl script designed to watch logs, correlate events and perform actions.
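As a rough illustration of the latter, a minimal SEC rule file might look like the following (the ERROR pattern and the file names are assumptions for the example, not from the question):
# rules.sec: echo every log line containing ERROR to standard output
type=Single
ptype=RegExp
pattern=ERROR
desc=error line
action=write - $0
which you would then run against the log with something like sec --conf=rules.sec --input=logfile.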

Related

Is there a way to stop cp from overwriting the second of two globbed files?

Intention:
cp /path/to/code.{c,h} .
Concise version:
cp /path/to/code.* .
Recurring typo:
cp /path/to/code.*
In the typo case the second file is overwritten by the first.
This has bitten me repeatedly and I'm not optimistic there's a solution outside of rewriting my neural circuits, but one can dream.
Asking for confirmation every time, or some visual indication of danger, would both be solutions.
Defaulting to --no-clobber or some such is not a solution, because I am usually clobbering something in the intended destination.
As suggested, you could create an alias
alias cp='cp -i'
such that you will always be prompted when invoking cp from the command line. Note that this will not affect scripts.
The man page for cp has this to say:
-i, --interactive
prompt before overwrite (overrides a previous -n option)
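For illustration, with the alias in place the typo from the question is caught before any damage is done (the file names are just the ones from the example):
$ touch code.c code.h
$ cp code.*              # the glob expands to: cp code.c code.h
cp: overwrite 'code.h'? n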

How to filter out useless messages in `bash` shell by default?

Is there any way to filter out absolutely useless messages in a bash session by default?
For example, I would like to never see the absolutely useless message Binary file ... matches while running grep .... It's tedious to type something like grep ... 2>/dev/null every time, especially considering how often I need to run this command. Besides, that filters out useful messages as well, which is unwanted.
What I would like is some sort of file in /etc where I could put a bunch of regular expressions for the useless messages, line by line. This filter must apply to the tty only, i.e. redirected output must stay untouched!
There are ways to play with your stderr, but a number of issues make that undesirable. For example:
exec 2>/tmp/errorfile
will put all the STDERR output in the error file. You could start a
tail -f /tmp/errorfile | grep -v 'Binary file' &
in your .bashrc to get the other messages as well. You will see some funny side effects; for example, I found that the prompt is written on STDERR.
You will probably have to create a more elaborate command than the tail | grep to filter out the undesirable messages, do something about your prompt, and clean up your error file as well.
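Putting those pieces together, a minimal sketch for ~/.bashrc might look like this (the file name and filter pattern are illustrative assumptions, and the prompt side effect mentioned above still applies):
errfile=/tmp/errorfile.$$                      # one error file per shell session
: > "$errfile"                                 # create or truncate it
exec 2>>"$errfile"                             # send this shell's stderr to the file
tail -f "$errfile" | grep -v 'Binary file' &   # echo everything else back
trap 'rm -f "$errfile"' EXIT                   # remove the file when the shell exits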

unix shell, tail -f from last occurrence of pattern

I have a very large log file which contains, among other things, service restart messages. After I initiate a service restart with an external command, I need to tail this log file from the last occurrence of the reboot message and check the following messages to confirm a correct restart procedure. I'm analysing the messages in Python, so I only need to find the last occurrence and follow the file from there; I then check the output line by line and simply close the connection when I've read everything I need.
.... # lots of previous data
[timestamp] previous message
[timestamp] Rebooting... # from this point I need to track messages
[timestamp] doing thing
[timestamp] doing other thing
[timestamp] doing final thing # final point, reboot successful
[timestamp] service activity message #
How can I perform such tailing?
tail -f <from last Rebooting... message>
Give a generous buffer value, reverse, extract, reverse:
$ tail -n 1000 file | tac | awk '1,/Rebooting/' | tac
or replace the awk script with !p; /Rebooting/{p=1}, which prints each line up to and including the first match and does the same thing without the range expression.
Perhaps something like:
tail -fn +$(awk '/Rebooting/ { line = NR } END { print line }' log) log
which uses awk to find the line number of the last occurrence of the pattern and then tails, with follow, starting at that line.
This still scans the entire file once, though.
If you're really doing it from Python, you can probably do better by searching the file in reverse directly in Python.
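As a sketch of that idea, the following reads blocks backwards from the end of the file to locate the start of the last line containing the pattern, then follows the file from there (the file name, pattern and poll interval are illustrative assumptions):
import time

def follow_from_last(path, pattern=b"Rebooting", block=4096):
    with open(path, "rb") as f:
        f.seek(0, 2)                     # jump to the end of the file
        pos = f.tell()
        buf = b""
        start = None
        while pos > 0 and start is None:
            step = min(block, pos)
            pos -= step
            f.seek(pos)
            buf = f.read(step) + buf     # prepend; buf now starts at offset pos
            idx = buf.rfind(pattern)
            if idx != -1:
                nl = buf.rfind(b"\n", 0, idx)
                if nl != -1 or pos == 0: # only accept a complete line start
                    start = pos + nl + 1
        f.seek(start if start is not None else 0)
        while True:                      # emulate tail -f from that offset
            line = f.readline()
            if line:
                yield line.decode(errors="replace")
            else:
                time.sleep(0.5)

for line in follow_from_last("service.log"):
    print(line, end="")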

What is the difference between 'ls --color' and 'ls --color=tty'?

I am making an alias for ls in my .zshrc profile so that it always has colored output. As it turns out, I stumbled across either
alias ls="ls --color=tty"
or, without the tty value,
alias ls="ls --color"
Is there any particular situation where the commands ls --color=tty and ls --color, or the above aliases, behave differently?
With no argument attached to the option (--color), the output is always colorized; it is equivalent to --color=always. With --color=tty (a synonym for --color=auto), it is only colorized when stdout is connected to a tty. This matters when the output of ls is piped or redirected.
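You can make the difference visible by piping the output; cat -A prints the ANSI escape sequences that would otherwise colour the terminal:
$ ls --color | cat -A        # escape sequences such as ^[[0m appear in the pipe
$ ls --color=tty | cat -A    # plain file names: stdout is a pipe, not a tty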

Linux Mint terminal output disappearing

I'm running a script in the terminal and it is supposed to produce long output, but the terminal is only showing me the end of the result and I cannot scroll up far enough to see the complete result. Is there a way to save all the terminal commands and their results until I type clear?
The script I'm using has a loop, so if I redirect the output to a file I need to capture the output of every iteration of the loop.
Depending on your system, the size of the terminal's scrollback buffer may be fixed, and hence you may not be able to scroll far enough back to see the full output.
A good alternative would be to send your program/script's output to a text file using:
./nameofprogram > text_file.txt
Otherwise you will have to find a way to increase the number of scrollback lines. In some terminal applications you can go to Edit > Profiles > Edit > Scrolling tab and adjust your settings.
You can either redirect the output of your script to a file:
script > file
(Be careful to choose a file that does not already exist, otherwise its contents will be overwritten.)
Or you can page the output with less:
script | less
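If you want to watch the output live and still keep a complete copy, tee does both at once; and rather than appending inside the loop, you can redirect the whole loop in one place (the file names below are illustrative):
./nameofprogram | tee output.log    # watch live; review later with: less output.log

for i in 1 2 3; do                  # redirect the entire loop once,
    echo "step $i"
done > output.log                   # instead of appending inside it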
