Error while exiting a while loop - shell

I am monitoring a log file through a shell script, and once a string matches I want to exit. I am using a while statement to read the log file. The problem is that my script never exits: it prints the string I expect, but the script keeps running. Below is my piece of the script:
tail -fn0 $TOMCAT_HOME/logs/catalina.out | \
while read line ; do
    echo "$line" | grep "Starting ProtocolHandler"
    if [ $? = 0 ]
    then
        exit
    fi
done
I tried using grep -q, but that didn't work out.
Any help will be appreciated.

You can just use grep -q:
tail -fn0 "$TOMCAT_HOME"/logs/catalina.out | grep -q "Starting ProtocolHandler"
It will exit immediately after the first occurrence of the string "Starting ProtocolHandler".
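For context, the exit in the original loop only leaves the subshell on the right-hand side of the pipe; tail -f itself keeps running until its next write hits the closed pipe, which is why the script seems to hang. As a rough sketch of how the grep -q pipeline might gate a larger script (the follow-up command is a placeholder):
#!/bin/sh
# Block until Tomcat logs "Starting ProtocolHandler", then continue.
tail -fn0 "$TOMCAT_HOME"/logs/catalina.out | grep -q "Starting ProtocolHandler"
echo "Tomcat is up; continuing with the rest of the script."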

Do you just want the line number? What about grep -n? Then you don't even need a script.
-n, --line-number
Prefix each line of output with the line number within its input file.
Couple that with -m to get it to exit after 1 match
-m NUM, --max-count=NUM
Stop reading a file after NUM matching lines. If the input is standard input from a regular file, and NUM matching lines are output, grep ensures that the standard input is positioned to just after the last matching line before exiting, regardless of the presence of trailing context lines. This enables a calling process to resume a search. When grep stops after NUM matching lines, it outputs any trailing context lines. When the -c or --count option is also used, grep does not output a count greater than NUM. When the -v or --invert-match option is also used, grep stops after outputting NUM non-matching lines.
So, you'd have something like
grep -n -m 1 "Starting ProtocolHandler" [filename]
If you want it to exit:
cat [filename] | grep -n -m 1 "Starting ProtocolHandler"
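If only the number itself is needed, it can be split off from grep's NUM:line output; a small sketch ([filename] is a placeholder, as above):
# Print just the line number of the first match.
grep -n -m 1 "Starting ProtocolHandler" [filename] | cut -d: -f1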


Search only last n lines of a file for matching text pattern and print filenames that do not have this pattern

I would like to search the last 2 lines of a bunch of similarly named files for a given text pattern and write out the filenames where matches are NOT found.
I tried this:
tail -n 2 slurm-* | grep -L "run complete"
As you can see "slurm" is the file base, and I want to find files where "run complete" is absent. However, this command does not give me any output. I tried the regular (non-inverse) problem:
tail -n 2 slurm-* | grep -H "run complete"
and I get a bunch of output (all the matches that are found but not the filenames):
(standard input):run complete
I figure that I have misunderstood how piping tail output to grep works; any help is greatly appreciated.
This should work -
for file in slurm-*; do
    res=$(tail -n 2 "$file" | grep "run complete" >/dev/null 2>&1; echo $?)
    if [ "$res" -ne 0 ]; then
        echo "$file"
    fi
done
Explanation -
"echo $?" gives us the return code of the grep command. If grep finds the pattern in the file, it returns 0; otherwise the return code is non-zero.
We check for this non-zero return code, and only then "echo" the file name. Since you have not mentioned whether the output of grep is needed, I have discarded its stdout and stderr by sending them to /dev/null.
Incidentally, this is why your grep -L attempt gave no output: -L reports the names of files that grep opens itself, but after the pipe grep sees only a single stream called (standard input), so the per-file information is already gone. Looping over the files preserves it.
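If the grep output itself is not needed, the same per-file check collapses to a one-liner relying only on grep -q's exit status; a minimal sketch:
# Print the name of every slurm-* file whose last two lines
# lack "run complete".
for file in slurm-*; do
    tail -n 2 "$file" | grep -q "run complete" || echo "$file"
done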

unusual chaining of "grep" in the shell

I stumbled upon a shell instruction which looks odd:
ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t ; echo $?
I suspect that the returned value would represent an error. Could anyone confirm this, or explain in which circumstances this instruction would be applied?
The first grep only allows through lines that contain qmail (preceded by any character and followed by a dash, but that is largely immaterial). The second grep strips out lines that contain mail, which means every line passed by the first grep is deleted by the second. There's nothing left for the third one to process, so the file t will always be empty. The value for $? should be 1, failure, since the third grep failed to find any lines that matched its pattern (because it got no lines to process).
It is a mistake.
It is hard to know how to fix it without knowing what it is trying to do.
The bash shell (and most other shells) lets users use the output of one command as the input of another. This is accomplished with the | operator, which is called a pipe. So the output of ls -a is fed to grep ".qmail-", and so on. The > operator sends the output of the command to a file, in this case t. So ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t lists the contents of a directory and passes the output through successive filters before finally saving it to the file t.
The semicolon signals the end of a command and allows multiple bash commands to be entered on a single line.
echo $? prints out the return value of the last executed command, in this case ls -a | grep ".qmail-" | grep -v "mail" | grep ".mail" > t. By convention, any value besides 0 indicates that some sort of error occurred. The Linux Documentation Project gives a list of some common exit codes.
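The exit-status convention is easy to see in isolation; grep returns 0 when it finds a match and 1 when it does not, so the following two lines behave predictably:
printf 'alpha\nbeta\n' | grep beta > /dev/null; echo $?    # prints 0 (match found)
printf 'alpha\nbeta\n' | grep gamma > /dev/null; echo $?   # prints 1 (no match)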

Processing the real-time last line in a currently being written text file

I have a text file which is open and logs the activities performed by process P1 on the system. I was wondering how I can get the real-time content of the last line of this file in a bash script and "echo" a message, say "done was seen", if the line equals "done".
You could use something like this :
tail -f log.txt | sed -n '/^done$/q' && echo done was seen
Explanation:
tail -f will output appended data as the file grows
sed -n '/^done$/q' will exit when a line containing only done is encountered, ending the command pipeline.
This should work for you:
tail -f log.txt | grep -q -m 1 done && echo done was seen
The -m flag to grep means "exit after N matches", and the && ensures that the echo statement will only run on a successful exit from grep.
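If the line might never appear, the wait can be capped; a sketch assuming GNU coreutils timeout is available (the 60-second limit is an arbitrary choice):
# Give up after 60 seconds if "done" never shows up.
if timeout 60 tail -f log.txt | grep -q -m 1 done; then
    echo "done was seen"
else
    echo "timed out waiting for done" >&2
fi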

Grep without filtering

How do I grep without actually filtering, or highlighting?
The goal is to find out if a certain text is in the output, without affecting the output. I could tee to a file and then inspect the file offline, but, if the output is large, that is a waste of time, because it processes the output only after the process is finished:
file=$(mktemp)
command | tee "$file"
if grep -q pattern "$file"; then
    echo Pattern found.
fi
rm "$file"
I thought I could also use grep's before (-B) and after (-A) flags to achieve live processing, but that won't output anything if there are no matches.
# Won't even work - DON'T USE.
if command | grep -A 1000000 -B 1000000 pattern; then
echo Pattern found.
fi
Is there a better way to achieve this? Something like a "pretend you're grepping and set the exit code, but don't grep anything".
(Really, what I will be doing is to pipe stderr, since I'm looking for a certain error, so instead of command | ... I will use command 2> >(... >&2; result=${PIPESTATUS[*]}), which achieves the same, only it works on stderr.)
If all you want to do is set the exit code if a pattern is found, then this should do the trick:
awk -v rc=1 '/pattern/ { rc=0 } 1; END {exit rc}'
The -v rc=1 creates a variable inside the Awk program called rc (short for "return code") and initializes it to the value 1. The stanza /pattern/ { rc=0 } causes that variable to be set to 0 whenever a line is encountered that matches the regular expression pattern. The 1; is an always-true condition with no action attached, meaning the default action will be taken on every line; that default action is printing the line out, so this filter will copy its input to its output unchanged. Finally, the END {exit rc} runs when there is no more input left to process, and ensures that awk terminates with the value of the rc variable as its process exit status: 0 if a match was found, 1 otherwise.
The shell interprets exit code 0 as true and nonzero as false, so this command is suitable for use as the condition of a shell if or while statement, possibly at the end of a pipeline.
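Putting it together, the filter drops straight into a shell condition; a minimal sketch with placeholder command and pattern names:
# Stream the output unchanged while testing for the pattern.
if some_command | awk -v rc=1 '/pattern/ { rc=0 } 1; END {exit rc}'; then
    echo "Pattern found."
else
    echo "Pattern not found."
fi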
To allow output along with the search result, you can use awk:
command | awk '/pattern/{print "Pattern found"} 1'
This will print "Pattern found" when the pattern is matched in any line (the matching line itself is printed afterwards).
If you want the line to be printed first, then use:
command | awk '{print} /pattern/{print "Pattern found"}'
EDIT: To execute any command on match use:
command | awk '/pattern/{system("some_command")} 1'
EDIT 2: To take care of special characters in the keyword, use this:
command | awk -v search="abc*foo?bar" 'index($0, search) {system("some_command"); exit} 1'
Try this script. It will not modify any of the output of your-command, and sed exits with 0 when the pattern is found, 1 otherwise. I think this is what you want, from my understanding of your question and comment:
if your-command | sed -nr -e '/pattern/h;p' -e '${x;/^.+$/ q0;/^.+$/ !q1}'; then
echo Pattern found.
fi
Below are some test cases:
ubuntu-user:~$ if echo patt | sed -nr -e '/pattern/h;p' -e '${x;/^.+$/ q0;/^.+$/ !q1}'; then echo Pattern found.; fi
patt
ubuntu-user:~$ if echo pattern | sed -nr -e '/pattern/h;p' -e '${x;/^.+$/ q0;/^.+$/ !q1}'; then echo Pattern found.; fi
pattern
Pattern found.
Note that the previous script fails when there is no output from your-command, because then sed never runs its expression and exits with 0 every time.
I take it you want to print out each line of your output, but at the same time, track whether or not a particular pattern is found. Simply passing the output to sed or grep would affect the output. You need to do something like this:
pattern="something"   # the text to search for
count=0
# Read via process substitution rather than a pipe so that updates
# to count survive the loop (a piped while runs in a subshell).
while read -r line
do
    echo "$line"
    if grep -q "$pattern" <<< "$line"
    then
        ((count+=1))
    fi
done < <(command)
if [[ $count -gt 0 ]]
then
    echo "Pattern was found $count times in the output"
else
    echo "Didn't find the pattern at all"
fi
ADDENDUM
If the original command has both stdout and stderr output, which come in a specific order, with the two possibly interleaved, will your solution ensure that the outputs are interleaved as they normally would be?
Okay, I think I understand what you're talking about. You want both STDERR and STDOUT to be grepped for this pattern.
STDERR and STDOUT are two different streams. They both appear on the terminal window because that's where you put them. The pipe (|) only takes STDOUT; STDERR is left alone. In the above, only the output on STDOUT would be used. If you want both STDOUT and STDERR, you have to redirect STDERR into STDOUT:
pattern="something"
count=0
while read -r line
do
    echo "$line"
    if grep -q "$pattern" <<< "$line"
    then
        ((count+=1))
    fi
done < <(command 2>&1)
if [[ $count -gt 0 ]]
then
    echo "Pattern was found $count times in the output"
else
    echo "Didn't find the pattern at all"
fi
Note the 2>&1. This says to take STDERR (which is file descriptor 2) and redirect it into STDOUT (file descriptor 1). Now both streams are fed into that while read loop.
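The effect of the redirection is easy to verify in isolation; in this sketch, sh -c stands in for any command that writes to both streams:
# Without 2>&1, the pipe carries only stdout ("err" goes straight to the terminal):
sh -c 'echo out; echo err >&2' | wc -l         # prints 1
# With 2>&1, both streams enter the pipe:
sh -c 'echo out; echo err >&2' 2>&1 | wc -l    # prints 2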
The grep -q will prevent grep from printing its output to STDOUT. It will still print to STDERR, but that shouldn't be an issue in this case: grep only writes to STDERR if something goes wrong, such as being unable to open a requested file or being invoked without a pattern.
You can do this:
echo "'search string' appeared $(command |& tee /dev/stderr | grep 'search string' | wc -l) times"
This will print the entire output of command followed by the line:
'search string' appeared xxx times
The trick is that the tee command is not used to push a copy into a file, but to copy everything on stdout to stderr. The stderr stream is immediately displayed on the screen as it is not connected to the pipe, while the copy on stdout is gobbled up by the grep/wc combination.
Since error messages are usually emitted to stderr, and you said that you want to grep for error messages, the |& operator (bash shorthand for 2>&1 |) is used for the first pipe to combine the stderr of command into its stdout and push both into the tee command.
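Since grep can count matching lines itself, the grep | wc -l pair could arguably be collapsed; a sketch of the same idea (grep -c counts matching lines, which is exactly what wc -l counted here):
echo "'search string' appeared $(command |& tee /dev/stderr | grep -c 'search string') times"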

how to proceed once a file contains something in shell

I am writing a BASH shell script that will continuously check a file and proceed only once the file contains "Completed!". (Of course, assume the file is being updated and will eventually contain the phrase "Completed!")
I am not sure how to do this. Thank you for your help.
You can do something like:
while ! grep -q -e 'Completed!' file ; do
    sleep 1 # Or some other number of seconds
done
# Here the file contains Completed!
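If the phrase might never arrive, a retry cap keeps the poll from spinning forever; a minimal sketch (the 300-attempt limit is an arbitrary choice):
tries=0
while ! grep -q -e 'Completed!' file; do
    sleep 1
    tries=$((tries + 1))
    if [ "$tries" -ge 300 ]; then
        echo "Gave up waiting for Completed!" >&2
        exit 1
    fi
done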
Amongst the standard utilities, tail has an option to keep reading from a file: tail -f. So filter the output of tail -f.
<some_file tail -f -n +1 | grep 'Completed!' | head -n 1 >/dev/null
There may be a delay due to buffering. You can at least reduce the delay by using fewer tools in the pipeline. In fact, some implementations of tail never buffer when you do tail -f, so the following snippet will return as soon as Completed! is written to the file.
<some_file tail -f -n +1 | sed -e '/Completed!/ q'
This assumes that the file is being appended to by some other tool. If the file is overwritten by the data-producing program after you start tail, this solution won't work. You can search the file periodically. On some systems you can call a notification mechanism to know whenever the file changes, e.g. with inotifywait under Linux.
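On Linux, the polling can be replaced by blocking on change notifications; a sketch assuming the inotify-tools package is installed (some_file is a placeholder):
# Re-check only when the file is actually modified.
while ! grep -q 'Completed!' some_file; do
    inotifywait -qq -e modify some_file
done
echo "Completed! seen"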
I've done this in Kornshell:
tail -f somefile | while read line
do
    echo "$line"
    [[ $line == *Completed!* ]] && break
done
Note no quotes around the *Completed!* string. This allows the double square brackets to do glob pattern matching instead of string matching.
This seems to work in BASH too. However, the line with Completed! must end in a newline; otherwise it'll take an extra line before it breaks the loop.
You can use grep too:
tail -f somefile | while read line
do
    echo "$line"
    grep -iq "Completed!" <<< "$line" && break
done
The -q parameter means quiet: grep suppresses its normal output. If your grep doesn't take the -q parameter, you can redirect its output to /dev/null instead. The -i means ignore case; whether you want to do that is up to you.
The advantage is that you aren't doing any processing unless there's a line to read. Using sleep may mean you miss the line, or that you're processing when no line has been added to the file.
When using grep in a pipe, you can turn on line-buffered mode by adding the --line-buffered option!
