in bash how can I "break" a watch then re-add it?

I'm using watch because I need to detect new files created in a log folder and tail them. I can't seem to use tail (e.g. tail /dir/*.log) and have it detect new files created in the folder. So at the moment I'm using:
#!/bin/bash
while :
do
watch -n 1 "tail /tmp/tomcat-logs/*.log | grep --line-buffered \"ERROR\|INFO: Server startup in:\|Exception:\" | sed 's/ERROR/PROBLEMO/g' | tee /tmp/errchecker-log.txt"
echo "do some processing here when a token is found"
done
In this case, when a token such as "ERROR" is found, I need to stop watching, then grep the output (count lines, etc.), then re-watch until the next error. Rinse, repeat.
Cheers

Use while read:
#!/bin/bash
while read -r LINE; do
    echo "$LINE"    # -r keeps backslashes literal; quoting preserves spacing
done < <(for i in $(seq 10); do echo $i; sleep 1; done)
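Applied to the question's setup, a minimal sketch (same paths and tokens as the question; the break-on-token logic and tail -F are additions, and since tail only sees the files that existed when it started, the outer loop also re-globs the directory after each break):
#!/bin/bash
while :
do
    while read -r line; do
        echo "$line" | tee -a /tmp/errchecker-log.txt
        case "$line" in
            *ERROR*) break ;;    # token found: leave the read loop
        esac
    done < <(tail -n0 -F /tmp/tomcat-logs/*.log)
    echo "do some processing here when a token is found"
done
After the break, the detached tail lingers until its next write, at which point it exits on SIGPIPE.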

Related

Bash: running a command on each grep match without stopping tail -n0 -f

I'm currently monitoring a log file, and my ultimate goal is to write a script that uses tail -n0 -f and executes a certain command once grep finds a match. My current code:
tail -n 0 -f $logfile | grep -q $pattern && echo $warning > $anotherlogfile
This works but only once, since grep -q stops when it finds a match. The script must keep searching and running the command, so I can update a status log and run another script to automatically fix the problem. Can you give me a hint?
Thanks
Use a while loop:
tail -n 0 -f "$logfile" | while read -r LINE; do
    echo "$LINE" | grep -q "$pattern" && echo "$warning" > "$anotherlogfile"
done
awk will let us continue to process lines and take actions when a pattern is found. Something like:
tail -n0 -f "$logfile" | awk -v pattern="$pattern" '$0 ~ pattern {print "WARN" >> "anotherLogFile"}'
If you need to pass in the warning message and the path to anotherLogFile, you can use more -v flags to awk. Also, you could have awk take the action you want instead: it can run commands via the system() function, to which you pass the shell command to run.
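For example, a sketch that passes both in (the fflush() call, which keeps awk from sitting on buffered output, is an addition, and the repair-script path is hypothetical):
tail -n0 -f "$logfile" | awk -v pattern="$pattern" \
    -v warn="$warning" -v out="$anotherlogfile" '
    $0 ~ pattern {
        print warn >> out             # append the warning message
        fflush(out)                   # flush so the status log updates immediately
        system("/path/to/fix-it.sh")  # hypothetical repair script
    }'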

Why does bash script stop working

The script monitors incoming HTTP messages and forwards them to a monitoring application called Zabbix. It works fine; however, after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I see the script is still running
The logfile gets updated properly (first command of the script)
The FIFO pipe seems to be working
The problem must be somewhere in the while loop or the tail command.
I'm new to scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
    mkfifo $pipe
fi
tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &
while true
do
    if read line <$pipe; then
        unset sn
        for ((c=1; c<=3; c++))    # c is no of max parameters x 2 + 1
        do
            URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            if [[ "$URL" == 'sn' ]]; then
                ((c++))
                sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
            fi
        done
        if [[ "$sn" ]]; then
            hosttype="US2G_"
            host=$hosttype$sn
            zabbix_sender -z nuc -s $host -k serial -o $sn -vv
        fi
    fi
done
You're inputting from the fifo incorrectly. By writing:
while true; do read line < $pipe ....; done
you are closing and reopening the fifo on each iteration of the loop. The first time you close it, the producer to the pipe (the tail -f) gets a SIGPIPE and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every process inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly close stdin for each.
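A sketch of that corrected shape, using the question's own variables (the </dev/null is the explicit stdin-closing mentioned above; the parameter parsing is elided):
while true; do
    if read -r line; then
        # ... extract $sn from "$line" as before ...
        zabbix_sender -z nuc -s "$host" -k serial -o "$sn" -vv </dev/null
    fi
done < "$pipe"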

How to wait till a particular line appears in a file

Is it possible to write a script that does not proceed till a given line appears in a particular file?
For example I want to do something like this:
CANARY_LINE='Server started'
FILE='/var/logs/deployment.log'
echo 'Waiting for server to start'
.... watch $FILE for $CANARY_LINE ...
echo 'Server started'
Basically, a shell script that watches a file for line (or regex).
tail -n0 -f path_to_my_log_file.log | sed '/particular_line/ q'
You can use sed's q command while parsing the input. sed quits as soon as Server started appears in /var/logs/deployment.log, and the interrupted tail then exits on SIGPIPE the next time it writes.
tail -f /var/logs/deployment.log | sed '/Server started/ q'
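In the shape of the question's script (same variable names as the question; the > /dev/null merely hides the matched line):
CANARY_LINE='Server started'
FILE='/var/logs/deployment.log'
echo 'Waiting for server to start'
# sed quits on the first matching line; the interrupted tail then
# exits on SIGPIPE the next time it writes
tail -n0 -f "$FILE" | sed "/$CANARY_LINE/ q" > /dev/null
echo 'Server started'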
Another way to do the same thing:
( tail -f -n0 /var/logs/deployment.log & ) | grep -q "Server started"
Backgrounding tail inside the subshell means the pipeline only has to wait for grep -q, which returns as soon as the line appears; the orphaned tail exits on SIGPIPE at its next write.
Previous answer (works, but not as efficient as the one above):
We have to be careful with loops. For example, if you want to wait for a file to contain a line before starting an algorithm, you'll probably have to do something like this:
FILE_TO_CHECK="/var/logs/deployment.log"
LINE_TO_CONTAIN="Server started"
SLEEP_TIME=10
while ! grep -q "${LINE_TO_CONTAIN}" "${FILE_TO_CHECK}"
do
    sleep ${SLEEP_TIME}
done
# Start your algorithm here
But, in order to prevent an infinite loop, you should add a bound:
FILE_TO_CHECK="/var/logs/deployment.log"
LINE_TO_CONTAIN="Server started"
SLEEP_TIME=10
COUNT=0
MAX=10
while ! grep -q "${LINE_TO_CONTAIN}" "${FILE_TO_CHECK}" && [ ${COUNT} -lt ${MAX} ]
do
    sleep ${SLEEP_TIME}
    COUNT=$((COUNT + 1))
done
if grep -q "${LINE_TO_CONTAIN}" "${FILE_TO_CHECK}"
then
    echo "Let's go, the file contains what we want"
    # Start your algorithm here
else
    echo "Timed out"
    exit 10
fi
CANARY_LINE='Server started'
FILE='/var/logs/deployment.log'
echo 'Waiting for server to start'
grep -q "$CANARY_LINE" <(tail -f "$FILE")
echo 'Server started'
Source: adapted from How to wait for message to appear in log in shell
Try this:
#!/bin/bash
canary_line='Server started'
file='/var/logs/deployment.log'
echo 'Waiting for server to start'
until grep -q "${canary_line}" "${file}"
do
sleep 1s
done
echo 'Server started'
Adjust sleep's parameter to your taste.
If the line in the file needs to match exactly, i.e. the whole line, change grep's second parameter to "^${canary_line}$".
If the line contains any characters that grep thinks are special, the -F flag, which makes grep treat the pattern as a fixed string, is usually the way out.
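A sketch of that variant, assuming the canary text is a literal string rather than a regex:
# -qF: quiet, and treat the pattern as a fixed string, so metacharacters
# like . or [ in the canary line match literally
until grep -qF "${canary_line}" "${file}"
do
    sleep 1s
done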

bash script: continue until .jpg exists on website

I would like to do the following: a bash script which starts at a particular URL and continues as long as an image exists on the website. For example:
www.example.com/1.jpg
www.example.com/2.jpg
www.example.com/3.jpg
www.example.com/4.jpg
www.example.com/5.jpg
The script should continue for 1, 2, 3, 4, 5 and stop when it reaches 6, as there is no image there anymore. I want to do it myself, but I need one thing: how do I check whether the image exists?
#!/bin/bash
host='www.example.com/'
i=1
while curl -I --stderr /dev/null "${host}${i}.jpg" | head -1 | cut -d' ' -f2 | grep -q 200
do
    echo "Do something"
    ((i++))    # note: i=$i++ would set i to the literal string "1++"
done
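A tighter test, assuming curl is available, is to have it print just the HTTP status code with -w (-s silences the progress meter, -o /dev/null discards the body):
#!/bin/bash
host='www.example.com/'
i=1
while [ "$(curl -s -o /dev/null -I -w '%{http_code}' "${host}${i}.jpg")" = "200" ]
do
    echo "Do something with ${i}.jpg"
    ((i++))
done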
You could also use wget:
#!/bin/bash
i=1
while wget -q "www.example.com/image${i}.jpg"; do
    echo "Got $i"
    (( i++ ))
done
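If you only need to test that each image exists without saving it, wget's --spider option performs the check without downloading; a minimal sketch (URL pattern taken from the question):
#!/bin/bash
i=1
# --spider: check that the URL exists without writing a file to disk
while wget -q --spider "www.example.com/${i}.jpg"; do
    echo "${i}.jpg exists"
    (( i++ ))
done
echo "stopped at ${i}.jpg"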

How can I find the names of log files with 'error' on the last line?

I am trying to find the IDs of jobs that end in error.
When I use
for i in *log
do tail -n 1 $i | grep error
echo $i
done
It seems to find error on the last line of each file, even for files that don't have errors on the last line, and returns all of the filenames with
STOP fatal_error
out1.log
STOP fatal_error
out2.log
STOP fatal_error
out3.log
....
even though
grep error out1.log
returns nothing
Alternatively, is there an easier way to get a list of the jobs that end in error? I tagged this with qsub because I use qsub to submit the jobs.
You need an if statement so that you only echo the filename when the grep succeeds:
for i in *.log
do
    if tail -n 1 "$i" | grep error > /dev/null
    then
        echo "$i"
    fi
done
Also, redirect the grep results to /dev/null so the matched text doesn't appear in the output.
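Equivalently, grep's -q flag suppresses output and only sets the exit status, so the redirect isn't needed; a sketch:
for i in *.log
do
    # -q: quiet; the exit status alone says whether the last line matched
    if tail -n 1 "$i" | grep -q error
    then
        echo "$i"
    fi
done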
You want to say
do tail -n 1 $i | grep error
not
do tail -n 1 *.log | grep error
Otherwise, you are checking every log file at every iteration and will always get the same results.
Your logic is incorrect: echo $i prints the filename unconditionally, whether or not grep matched anything.
