Is it possible to write a script that does not proceed till a given line appears in a particular file?
For example I want to do something like this:
CANARY_LINE='Server started'
FILE='/var/logs/deployment.log'
echo 'Waiting for server to start'
.... watch $FILE for $CANARY_LINE ...
echo 'Server started'
Basically, a shell script that watches a file for a line (or a regex).
tail -n0 -f path_to_my_log_file.log | sed '/particular_line/ q'
You can use sed's q command while parsing the input. sed will then quit, interrupting tail, as soon as Server started appears in /var/logs/deployment.log.
tail -f /var/logs/deployment.log | sed '/Server started/ q'
Another way to do the same thing
( tail -f -n0 /var/logs/deployment.log & ) | grep -q "Server started"
Previous answer (works, but not as efficient as this one)
We have to be careful with loops.
For example, if you want to wait for a file to contain a given line before starting an algorithm, you would probably do something like this:
FILE_TO_CHECK="/var/logs/deployment.log"
LINE_TO_CONTAIN="Server started"
SLEEP_TIME=10
while ! grep -q "${LINE_TO_CONTAIN}" "${FILE_TO_CHECK}"
do
sleep ${SLEEP_TIME}
done
# Start your algorithm here
But, in order to prevent an infinite loop, you should add some bound:
FILE_TO_CHECK="/var/logs/deployment.log"
LINE_TO_CONTAIN="Server started"
SLEEP_TIME=10
COUNT=0
MAX=10
while ! grep -q "${LINE_TO_CONTAIN}" "${FILE_TO_CHECK}" && [ ${COUNT} -lt ${MAX} ]
do
sleep ${SLEEP_TIME}
COUNT=$(($COUNT + 1))
done
if grep -q "${LINE_TO_CONTAIN}" "${FILE_TO_CHECK}"
then
echo "Let's go, the file is containing what we want"
# Start your algorithm here
else
echo "Timed out"
exit 10
fi
CANARY_LINE='Server started'
FILE='/var/logs/deployment.log'
echo 'Waiting for server to start'
grep -q "$CANARY_LINE" <(tail -f "$FILE")
echo 'Server started'
Source: adapted from How to wait for message to appear in log in shell
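If you also want to give up eventually instead of waiting forever, the same one-liner can be wrapped in coreutils timeout (just a sketch, assuming GNU timeout is available; the tail behind the process substitution only exits the next time the file is written to):
CANARY_LINE='Server started'
FILE='/var/logs/deployment.log'
echo 'Waiting for server to start'
# Give up after 5 minutes; timeout returns non-zero if grep never matched
if timeout 300 grep -q "$CANARY_LINE" <(tail -f "$FILE"); then
    echo 'Server started'
else
    echo 'Timed out waiting for server' >&2
    exit 1
fi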
Try this:
#!/bin/bash
canary_line='Server started'
file='/var/logs/deployment.log'
echo 'Waiting for server to start'
until grep -q "${canary_line}" "${file}"
do
sleep 1s
done
echo 'Server started'
Adjust sleep's parameter to your taste.
If the line in the file needs to match exactly, i.e. the whole line, change the grep pattern to "^${canary_line}$".
If the line contains any characters that grep thinks are special, you're going to have to solve that... somehow.
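If the pattern should be taken literally rather than as a regular expression, grep's -F (fixed string) option side-steps the special-character problem. Here is a small sketch of the same loop (the bracketed version string is just a made-up example):
canary_line='Server started [build 42]'
file='/var/logs/deployment.log'
echo 'Waiting for server to start'
# -F: match a fixed string, not a regex; -x: match the whole line
until grep -qFx "${canary_line}" "${file}"
do
    sleep 1s
done
echo 'Server started'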
Log files are written line by line by underwater drones to a server. When at the surface, the drones talk to the server slowly (say ~200 bytes/s over a phone line which is not stable) and only from time to time (say every ~6 h). Depending on the messages, I have to execute commands on the server while the drones are online, and other commands when they hang up. Other processes may be looking at the same files with similar tasks.
A lot can be found on this website about somewhat similar problems, but the solution I have built on is still unsatisfactory. Presently I'm doing this with bash:
while logfile_drone=$(inotifywait -e create --format '%f' log_directory); do
    logfile=log_directory/${logfile_drone}
    while action=$(inotifywait -q -t 120 -e modify -e close --format '%e' "${logfile}"); do
        exitCode=$?
        lastLine=$(tail -n2 "${logfile}" | head -n1)   # with tail -n1 I often get only part of the line
        match=$( ... )   # match set to true if lastLine matches some pattern
        if [[ $action == 'MODIFY' ]] && $match; then
            : # do something
        fi
        if [[ $(echo $action | cut -c1-5) == 'CLOSE' ]]; then
            # do something
            break
        fi
        if [[ $exitCode -eq 2 ]]; then break; fi
    done
    # do something after the drone has hung up
done   # wait for a new call from the same or another drone
The main problems are:
the second inotifywait misses lines, maybe because of the other processes looking at the same file.
the way I catch the timeout doesn't seem to work.
I can't monitor 2 drones simultaneously.
Basically the code works more or less, but it isn't very robust. I wonder if problem 3 can be managed by putting the second while loop in a function which is put in the background when called. Finally, I wonder if a higher-level language (I'm familiar with PHP, which has a PECL extension for inotify) would not do this much better. However, I imagine that PHP will not solve problem 3 any better than bash.
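For what it's worth, the backgrounded-function idea could look roughly like this (only a sketch: monitor_drone is a made-up name, and the pattern test and reactions are left as placeholders, as in the code above):
monitor_drone() {
    local logfile=$1
    while action=$(inotifywait -q -t 120 -e modify -e close --format '%e' "${logfile}"); do
        lastLine=$(tail -n2 "${logfile}" | head -n1)
        # ... react to MODIFY / pattern match here, as in the inner loop above ...
        if [[ ${action:0:5} == 'CLOSE' ]]; then break; fi
    done
    # do something after this drone has hung up
}

while logfile_drone=$(inotifywait -e create --format '%f' log_directory); do
    monitor_drone "log_directory/${logfile_drone}" &   # one monitor per drone, in the background
done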
Here is the code where I'm facing the problem of abrupt exit from the while loop, implemented according to Philippe's answer, which works fine otherwise:
while read -r action ; do
...
resume=$( grep -e 'RESUMING MISSION' <<< "$lastLine" )
if [ -n "$resume" ] ; then
ssh user@another_server "/usr/bin/php /path_to_matlab_command/matlabCmd.php --drone=${vehicle}" &
fi
if [ "$( echo $action | cut -c1-5 )" == 'CLOSE' ] ; then ... ; sigKill=true ; fi
...
if $sigKill ; then break; fi
done < <(inotifywait -q -m -e modify -e close_write --format '%e' ${logFile})
When I comment out the line with ssh, the script exits properly via the break triggered by CLOSE; otherwise the while loop finishes abruptly after the ssh command. The ssh is put in the background because the Matlab code runs for a long time.
Monitor mode (-m) of inotifywait may serve better here:
inotifywait -m -q -e create -e modify -e close log_directory |\
while read -r dir action file; do
...
done
Monitor mode (-m) does not buffer; it just prints all events to standard output.
To preserve the variables set inside the loop (a pipe would run the loop in a subshell), feed it with process substitution instead:
while read -r dir action file; do
echo $dir $action $file
done < <(inotifywait -m -q -e create -e modify -e close log_directory)
echo "End of script"
The script monitors incoming HTTP messages and forwards them to a monitoring application called Zabbix. It works fine; however, after about 1-2 days it stops working. Here's what I know so far:
Using pgrep I see the script is still running
The logfile gets updated properly (first command of the script)
The FIFO pipe seems to be working
The problem must be somewhere in the while loop or the tail command.
I'm new at scripting, so maybe someone can spot the problem right away?
#!/bin/bash
tcpflow -p -c -i enp2s0 port 80 | grep --line-buffered -oE 'boo.php.* HTTP/1.[01]' >> /usr/local/bin/logfile &
pipe=/tmp/fifopipe
trap "rm -f $pipe" EXIT
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n0 -F /usr/local/bin/logfile > /tmp/fifopipe &
while true
do
if read line <$pipe; then
unset sn
for ((c=1; c<=3; c++)) # c is no of max parameters x 2 + 1
do
URL="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
if [[ "$URL" == 'sn' ]]; then
((c++))
sn="$(echo $line | awk -F'[ =&?]' '{print $'$c'}')"
fi
done
if [[ "$sn" ]]; then
hosttype="US2G_"
host=$hosttype$sn
zabbix_sender -z nuc -s $host -k serial -o $sn -vv
fi
fi
done
You're inputting from the fifo incorrectly. By writing:
while true; do read line < $pipe ....; done
you are closing and reopening the fifo on each iteration of the loop. The first time you close it, the producer to the pipe (the tail -f) gets a SIGPIPE and dies. Change the structure to:
while true; do read line; ...; done < $pipe
Note that every process inside the loop now has the potential to inadvertently read from the pipe, so you'll probably want to explicitly close stdin for each.
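Applied to the script above, the restructured loop might look roughly like this (a sketch, not a tested rewrite; it assumes the same fifo and tail setup, and redirects zabbix_sender's stdin so nothing inside the loop can drain the pipe):
while true
do
    if read -r line; then
        unset sn
        for ((c = 1; c <= 3; c++))   # c is no of max parameters x 2 + 1
        do
            URL=$(echo "$line" | awk -F'[ =&?]' -v c="$c" '{print $c}')
            if [[ "$URL" == 'sn' ]]; then
                ((c++))
                sn=$(echo "$line" | awk -F'[ =&?]' -v c="$c" '{print $c}')
            fi
        done
        if [[ "$sn" ]]; then
            host="US2G_${sn}"
            # </dev/null keeps zabbix_sender from reading the fifo via inherited stdin
            zabbix_sender -z nuc -s "$host" -k serial -o "$sn" -vv </dev/null
        fi
    fi
done < "$pipe"   # the fifo is opened once, for the whole loop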
I have a section of code in a bash script that uses a while loop to grep a file until the string I am looking for is there, then exits. Currently, it just hangs with the following code:
hostname="test-cust-15"
VAR1=$(/bin/grep -wo -m1 "HOST ALERT: $hostname;DOWN" /var/log/logfile)
while [ ! "$VAR1" ]
do
sleep 5
done
echo $VAR1 was found
I know the part of the script responsible for inserting this string into the logfile works, as I can grep for it outside of the script and find it.
One thing I have tried is to change up the variables. Like this:
hostname="test-cust-15"
VAR1="HOST ALERT: $hostname;DOWN"
while [ ! /bin/grep "$VAR1" /var/log/logfile ]
do
sleep 5
done
echo $VAR1 was found
But I get a "binary operator expected" message, and once I got a "too many arguments" message when using this:
while [ ! /bin/grep -q -wo "$VAR1" /var/log/logfile ]
What do I need to do to fix this?
while/until can work off of the exit status of a program directly.
until /bin/grep "$VAR1" /var/log/logfile
do
sleep 5
done
echo "$VAR1" was found
You also mentioned in a comment above that it prints out the match. If that's not desirable, use output redirection, or grep's -q option.
until /bin/grep "$VAR1" /var/log/logfile >/dev/null
until /bin/grep -q "$VAR1" /var/log/logfile
No need to bother with command substitution or test operator there. Simply:
while ! grep -wo -m1 "HOST ALERT: $hostname;DOWN" /var/log/logfile; do
sleep 5
done
Don't waste resources, use tail!
#!/bin/bash
while read -r line
do
    echo "$line"
    break
done < <(tail -f /tmp/logfile | grep --line-buffered "HOST ALERT")
I am making a menu for myself, because sometimes I need to search (or nmap) which ports are open.
I want it to behave the same as running the command on the command line.
Here is a piece of my code:
nmap $1 | grep open | while read line; do
serviceport=$(echo $line | cut -d' ' -f1 | cut -d'/' -f1);
if [ $i -eq $choice ]; then
echo "Running command: netcat $1 $serviceport";
netcat $1 $serviceport;
fi;
i=$(($i+1));
done;
It closes immediately after it has scanned everything with nmap.
Don't use FD 0 (stdin) for both your read loop and netcat. If you don't distinguish these streams, netcat can consume content emitted by the nmap | grep pipeline rather than leaving that content to be read by read.
This has a few undesirable effects: Further parts of the while/read loop don't get executed, and netcat sees a closed stdin stream and exits when the pipeline's contents are consumed (so you don't get interactive control of netcat, if that's what you're trying to accomplish). An easy way to work around this issue is to feed the output of your nmap pipeline in on a non-default file descriptor; below, I'm using FD 3.
There's a lot wrong with this code beyond the scope of the question, so please don't consider the parts I've copied-and-pasted an endorsement, but:
while read -r -u 3 line; do
serviceport=${line%% *}; serviceport=${serviceport%%/*}   # e.g. "80/tcp open http" -> "80"
if [ "$i" -eq "$choice" ]; then
echo "Running command: netcat $1 $serviceport"
netcat "$1" "$serviceport"
fi
done 3< <(nmap "$1" | grep open)
I would like to do the following: a bash script which starts on a particular URL and continues for as long as the image exists on the website. For example:
www.example.com/1.jpg
www.example.com/2.jpg
www.example.com/3.jpg
www.example.com/4.jpg
www.example.com/5.jpg
The script should continue for 1, 2, 3, 4, 5 and stop when it reaches 6, as there is no image anymore. I want to write it myself, but I need one thing: how do I check if an image exists?
#!/bin/bash
host='www.example.com/'
i=1
while curl -I --stderr /dev/null "${host}${i}.jpg" | head -1 | cut -d' ' -f2 | grep -q 200
do
echo "Do something"
i=$((i + 1))
done
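A variant of the same idea lets curl itself judge the HTTP status via its --fail flag, so the head/cut/grep pipeline is not needed (a sketch, not from the original answer):
#!/bin/bash
host='www.example.com/'
i=1
# -f: fail (non-zero exit) on HTTP errors such as 404; -s: silent; -I: only fetch headers
while curl -fsI "${host}${i}.jpg" > /dev/null
do
    echo "Do something with ${host}${i}.jpg"
    i=$((i + 1))
done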
You could also use wget:
#!/bin/bash
i=1
while wget -q "www.example.com/image${i}.jpg"; do
echo "Got $i"
(( i++ ))
done
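If you don't actually want to keep the downloaded files, wget's --spider mode only checks that the URL exists without saving anything (a sketch, assuming GNU wget):
#!/bin/bash
i=1
# --spider: check existence only, download nothing; the loop stops at the first missing image
while wget -q --spider "www.example.com/${i}.jpg"; do
    echo "Image $i exists"
    (( i++ ))
done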