How can I use the tail utility to view a log file that is frequently recreated - bash

I need help creating a script to tail a log file that is recreated (with the same name) after it reaches a certain size.
Using "tail -f" causes the tailing to stop when the file is recreated/rotated.
What I would like to do is create a script that tails the file and, after it reaches 100 lines for example, restarts the command... or, even better, restarts the command when the file is recreated.
Is this possible?

Yes! Use this (the --retry option makes tail keep retrying when the file doesn't exist or is otherwise inaccessible, rather than just failing - such as while the file is being rotated):
tail -f --retry <filename>
OR
tail --follow=name --retry <filename>
OR
tail -F <filename>

Try running
watch tail yourfile.log
(watch re-runs the command every 2 seconds and shows its output; don't combine it with tail -f here, since tail -f never returns.)

If tail -F is not available and you are trying to recover from logrotate, you may add the copytruncate option to your logrotate.d/ spec file: instead of a new file being created after each rotation, the file is kept and truncated, while a copy is rotated out.
This way the old file handle continues to point to the same (now truncated) log file, where new log lines are appended.
Note that there may be some loss of data during this copy-truncate process.
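For reference, a minimal sketch of such a logrotate spec (the path and sizes are placeholders, not from the original question):
/var/log/myapp/app.log {
    size 10M
    rotate 5
    copytruncate
    missingok
}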

Since you don't have a tail that supports all these features, and because you don't have watch, you could use a simple script that loops indefinitely to execute the tail.
#!/bin/bash
PIDFILE=$(mktemp)    # temp file holding the pid of the background tail
while true
do
    [ -e "$1" ] && IO=$(stat -c %i "$1")
    [ -e "$1" ] && echo "restarting tail" && { tail -f "$1" 2> /dev/null & echo $! > "$PIDFILE"; }
    # as long as the file exists and the inode number did not change
    while [[ -e "$1" ]] && [[ $IO = $(stat -c %i "$1") ]]
    do
        sleep 0.5
    done
    [ -s "$PIDFILE" ] && kill "$(cat "$PIDFILE")" 2> /dev/null && echo > "$PIDFILE"
    sleep 0.5
done 2> /dev/null
rm -f "$PIDFILE"
You might want to use trap to exit this script cleanly; that is up to you (a sketch follows below).
Basically, this script checks whether the inode number (using stat -c %i "$1") has changed; if so, it kills the tail command and starts a new one once the file is recreated.
Note: you might want to get rid of the echo "restarting tail", which will pollute your output; it was only useful for testing. Also, problems might occur if the file is replaced after we check the inode number and before we start the tail process.
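A minimal sketch of such a trap-based cleanup, reusing the same $PIDFILE as the script above:
# kill the background tail and remove the temp file when the script exits
cleanup() {
    [ -s "$PIDFILE" ] && kill "$(cat "$PIDFILE")" 2> /dev/null
    rm -f "$PIDFILE"
}
trap cleanup EXIT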

Related

monitoring and searching a file with inotify, and command line tools

Log files are written line by line by underwater drones to a server. When at the surface, the drones speak slowly to the server (say ~200 bytes/s on a phone line which is not stable) and only from time to time (say every ~6h). Depending on the messages, I have to execute commands on the server while the drones are online, and other commands when they hang up. Other processes may be looking at the same files with similar tasks.
A lot can be found on this website about somewhat similar problems, but the solution I have built on is still unsatisfactory. Presently I'm doing this with bash:
while logfile_drone=$(inotifywait -e create --format '%f' log_directory) ; do
    logfile=log_directory/${logfile_drone}
    while action=$(inotifywait -q -t 120 -e modify -e close --format '%e' "${logfile}") ; do
        exitCode=$?
        lastLine=$( tail -n2 "${logfile}" | head -n1 ) # because with tail -n1 I may get only part of the line; this happens quite often
        match=$( ... ) # match set to true if lastLine matches some pattern
        if [[ $action == 'MODIFY' ]] && $match ; then : ; fi # do something
        if [[ $( echo $action | cut -c1-5 ) == 'CLOSE' ]] ; then
            # do something
            break
        fi
        if [[ $exitCode -eq 2 ]] ; then break ; fi
    done
    # do something after the drone has hung up
done # wait for a new call from the same or another drone
The main problems are:
1. the second inotify misses lines, maybe because of the other processes looking at the same file;
2. the way I catch the timeout doesn't seem to work;
3. I can't monitor 2 drones simultaneously.
Basically the code works more or less, but it isn't very robust. I wonder if problem 3 can be managed by putting the second while loop in a function which is put in the background when called (a rough sketch is below). Finally, I wonder whether a higher-level language (I'm familiar with PHP, which has a PECL extension for inotify) would do this much better; however, I imagine that PHP would not solve problem 3 any better than bash.
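For what it's worth, a rough sketch of that backgrounded-function idea (handle_drone is a hypothetical name; each newly created log file gets its own handler, so two drones no longer block each other):
#!/bin/bash
handle_drone() { # one handler per drone log file
    local logfile=$1
    while read -r action ; do
        echo "$logfile: $action" # per-line processing goes here
        [ "${action:0:5}" = CLOSE ] && break
    done < <(inotifywait -m -q -e modify -e close_write --format '%e' "$logfile")
}
inotifywait -m -q -e create --format '%f' log_directory |
while read -r newfile ; do
    handle_drone "log_directory/${newfile}" & # background, so the next drone can be served
done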
Here is the code where I'm facing the problem of an abrupt exit from the while loop, implemented according to Philippe's answer, which otherwise works fine:
while read -r action ; do
    ...
    resume=$( grep -e 'RESUMING MISSION' <<< "$lastLine" )
    if [ -n "$resume" ] ; then
        ssh user@another_server "/usr/bin/php /path_to_matlab_command/matlabCmd.php --drone=${vehicle}" &
    fi
    if [ $( echo $action | cut -c1-5 ) == 'CLOSE' ] ; then ... ; sigKill=true ; fi
    ...
    if $sigKill ; then break; fi
done < <(inotifywait -q -m -e modify -e close_write --format '%e' "${logFile}")
When I comment out the line with ssh, the script exits properly via the break triggered by CLOSE; otherwise the while loop finishes abruptly after the ssh command. The ssh is put in the background because the matlab code runs for a long time.
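A likely cause of that abrupt exit (an observation, not from the original thread): ssh inherits the loop's standard input and drains the pipe coming from inotifywait, so read gets nothing more. Giving ssh the -n option (or redirecting its stdin from /dev/null) should keep the loop alive:
# -n detaches ssh's stdin so it cannot swallow inotifywait's output
ssh -n user@another_server "/usr/bin/php /path_to_matlab_command/matlabCmd.php --drone=${vehicle}" &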
Monitor mode (-m) of inotifywait may serve better here:
inotifywait -m -q -e create -e modify -e close log_directory |\
while read -r dir action file; do
...
done
Monitor mode (-m) does not buffer, it just prints all events to standard output.
To preserve the variables (a pipeline would run the while loop in a subshell and lose its assignments, so feed it via process substitution instead):
while read -r dir action file; do
echo $dir $action $file
done < <(inotifywait -m -q -e create -e modify -e close log_directory)
echo "End of script"

Check if bash script already running except itself with arguments

So I've looked up other questions and answers for this and, as you can imagine, there are lots of ways to do it. However, my situation is kind of different.
I'm able to check whether a bash script is already running or not, and I want to kill the script if it's already running.
The problem is that with the below code (since I'm running it within the same script) the script kills itself too, because it sees a script already running.
result=$(ps aux | grep -i "myscript.sh" | grep -v "grep" | wc -l)
if [ "$result" -ge 1 ]
then
echo "script is running"
else
echo "script is not running"
fi
So how can I check whether the script is already running apart from itself, kill itself if another instance of the same script is running, and otherwise continue without killing itself?
I thought I could combine the above code with $$ to find the script's own PID and differentiate them that way, but I'm not sure how to do that.
Also, a side note: my script can be run multiple times at the same time on the same machine, but with different arguments, and that's fine. I only need to identify whether the script is already running with the same arguments.
pid=$(pgrep myscript.sh | grep -x -v $$)
# filter non-existent pids
pid=$(<<<"$pid" xargs -n1 sh -c 'kill -0 "$1" 2>/dev/null && echo "$1"' --)
if [ -n "$pid" ]; then
echo "Other script is running with pid $pid"
echo "Killing him!"
kill $pid
fi
pgrep lists the pids that match the name myscript.sh. From the list we filter out the current shell's $$ with grep -v. If the result is non-empty, then you can kill the other pid.
It would work without the xargs, but pgrep myscript.sh also picks up the temporary pid created for the command substitution or the pipe, so the result would never be empty and the kill would always execute, complaining about a non-existent process. To handle that, for each pid in the list I check whether the pid exists with kill -0; if it does, it is printed, effectively filtering out all non-existent pids.
You could also use a normal for loop to filter the pids:
# filter non-existent pids
pid=$(
for i in $pid; do
if kill -0 "$i" 2>/dev/null; then
echo "$i"
fi
done
)
Alternatively, you could use flock to lock the file, and lsof to list the processes that currently have the script open, filtering out the current one. As it stands, I think it would also kill editors that are editing the file and such; I believe the lsof output could be filtered better to accommodate this.
if [ "${FLOCKER}" != "$0" ]; then
pids=$(lsof -p "^$$" -- ./myscript.sh | awk 'NR>1{print $2}')
if [ -n "$pids" ]; then
echo "Other processes with $(echo $pids) found. Killing them"
kill $pids
fi
exec env FLOCKER="$0" flock -en "$0" "$0" "$#"
fi
I would go with either of two ways to solve this problem.
1st solution: Create a watchdog file, say a .lck file, at a known location before the script starts its work (make sure to use trap etc. so that the .lck file is removed if the script is aborted), and remove it once the script completes successfully.
Example script for the 1st solution: This is just an example, a test one. We need to take care of interruptions in the script; say the script gets interrupted by a command or whatever, then we can use trap for that too, since at that point it would not have completed and you may need to kick it off again (since last time it was not completed).
cat file.ksh
#!/bin/bash
watchdog_file="$(pwd)/script.lck"
if [[ -f "$watchdog_file" ]]
then
    echo "Please wait, script is still running, exiting from script now.."
    exit 1
else
    touch "$watchdog_file"
fi
trap 'rm -f "$watchdog_file"' EXIT # remove the lock even if the script is interrupted
while true # placeholder for the script's real work
do
    echo "singh" > test1
done
rm -f "$watchdog_file"
2nd solution: Take the pid of the currently running shell using $$ and save it in a file. Then, on the next start, check whether that process is still running: exit the script if it is, and if it is NOT running, move on to run the statements in the script.
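A minimal sketch of this 2nd solution (the pid file path is a placeholder):
#!/bin/bash
pidfile=/tmp/myscript.pid
# if the recorded pid still belongs to a live process, another instance is running
if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "Already running as pid $(cat "$pidfile"), exiting."
    exit 1
fi
echo $$ > "$pidfile"
# ... rest of the script ...
rm -f "$pidfile"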

Display output of command in terminal while using command substitution

So I'm trying to check the output of a command, but I also want to be able to display the output directly in the terminal.
#!/bin/bash
while :
do
OUT=$(streamlink -o "$NAME" "$STREAM" best)
echo "$OUT"
if [[ $OUT == *"No playable streams"* ]]; then
echo "Delaying!"
sleep 15s
fi
done
This is what I tried to do.
The code checks whether the output of the command contains that error substring; if so, it adds a delay. That part works well.
But it doesn't work well when the command is actually downloading a file, as the echo won't run until the download is finished (which can take hours). So until then I have no way of checking the output of the command.
Plus, the output of this particular command displays and updates the speed and filesize in real time, something echo wouldn't be able to replicate.
So is there a way to display the output of a command in real time, while also command-substituting it in order to check the output for substrings after the command has finished?
Use a temporary file:
TEMP=$(mktemp) || exit 1
while true
do
streamlink -o "$NAME" "$STREAM" best |& tee "$TEMP"
OUT=$( cat "$TEMP" )
#echo "$OUT" # not longer needed
if [[ $OUT == *"No playable streams"* ]]; then
echo "Delaying!"
sleep 15s
fi
done
# not really needed here because of endless loop
rm -f "$TEMP"

File Lock on Linux

So there are 2 scripts, A and B, and both want to write to the same file. It's possible that both scripts want to write to the file at the same time. How can I lock the file? While script A is writing to the file, script B has to wait until the file is unlocked.
I tried this:
while [ -f "$LOCK" ]
do
    sleep 0.1
done
touch "$LOCK"
#action
rm "$LOCK"
The problem with the script above is that both A and B may look for $LOCK at the same moment, neither finds it, and both start writing.
Any help?
Another possibility is:
echo $$ >> lockfile
locked_by=$(head -1 lockfile)
if [ $$ = "$locked_by" ] ; then
    echo "Hurray! the file is mine!"
    #do stuff
    rm lockfile
else
    echo "$locked_by has the lock, sorry"
fi
the echo $$ >> lockfile is, most of the time, atomic enough.
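If flock is available, it avoids the race entirely; both scripts wrap their critical section like this (a sketch; the lock file path is arbitrary, but both scripts must use the same one):
(
    flock 9 || exit 1 # blocks until the lock on fd 9 is free
    # ... write to the shared file here ...
) 9>/var/lock/myfile.lock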
Try this:
Script A opens the file and sets the attribute with chattr +i test.txt, and after script A is done it runs chattr -i test.txt.
For example:
Script A
chattr +i test.txt
tail -n 50 /var/log/maillog > test.txt
chattr -i test.txt
Script B
chattr +i test.txt
tail -n 50 /var/log/messages > test.txt
chattr -i test.txt
You can use kill -0 with an if construct. For example (assuming $lockPid holds the pid of the process that took the lock):
if kill -0 "$lockPid" 2>/dev/null; then
    echo "File locked"
    break
else
    bash script2.sh
fi

Improvements to this bash script to simulate "tail --follow"

I need to remotely tail log files such that the tailing keeps working even when the file is rolled over.
My attempts to do so, started by directly using the tail command over ssh:
ssh root@some-remote-host tail -1000f /some-directory/application.log | tee /c/local-directory/application.log
That allows me to filter through /c/local-directory/application.log locally using OtrosLogViewer (which was the original purpose of trying to tail the log file locally).
The above stops tailing when the remote log file is rolled over (which happens at every 10MB). I do not have the access required to change the rollover settings.
Unfortunately, none of the tail versions on the remote OS (Solaris) support a --follow (or -f) option which can handle file rollovers, so I had to write the following tailer.sh script to simulate that option:
#!/bin/bash
function startTail {
    echo "Path: $1"
    tail -199999f "$1" 2>/dev/null & #Try to tail the file
    while [[ -f $1 ]]; do sleep 1; done #Check every second if the file still exists
    echo "***** File Not Available as of: $(date)" #If not then log a message and,
    kill "$!" 2>/dev/null #Kill the tail process, then
    echo "Waiting for file to appear" #Wait for the file to re-appear
    while [ ! -f "$1" ]
    do
        echo -ne "." #Show progress bar
        sleep 1
    done
    echo -ne '\n' #Advance to next line #File has appeared again
    echo "Starting Tail again"
    startTail "$1"
}
startTail "$1"
I am relatively happy with the above script. However, it suffers from one issue stemming from a limitation of the sleep command on the remote OS: it only accepts whole numbers, so sleep 1 is the shortest wait between checks for the existence of the file. That period is enough to detect a file rollover sometimes, but it fails often enough to be a problem I want to fix.
The only other way I can think of is to implement a file-rollover check based on the file size: check the size every second, and if it is less than the previously recorded size, the file was rolled over, so re-start the tail. A sketch of that idea is below.
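A minimal sketch of that size-check idea, sticking to POSIX tools so it should also run on Solaris (wc -c reports the byte count; a shrinking size is taken to mean a rollover):
last=0
while true; do
    if [ -f "$1" ]; then
        size=$(wc -c < "$1")
        if [ "$size" -lt "$last" ]; then
            echo "File rolled over" # kill and restart the tail here
        fi
        last=$size
    fi
    sleep 1
done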
I checked for more reliable alternatives like inotifywait/inotify, but they are not available on the remote server(s) and I do not have the access needed to install them.
Can you think of any other way to detect a file rollover with a bash script?
Edit: Based on Hema's answer below, the modified (working!) script is as follows:
#!/bin/bash
function startTail {
    echo "Path: $1"
    tail -199999f "$1" 2>/dev/null & #Try to tail the file
    #Check every 0.1 seconds if the file still exists
    while [[ -f $1 ]]
    do
        perl -MTime::HiRes -e "Time::HiRes::sleep(0.1)"
    done
    echo "***** File Not Available as of: $(date)" #If not then log a message and,
    kill "$!" 2>/dev/null #Kill the tail process, then
    echo "Waiting for file to appear" #Wait for the file to re-appear
    while [ ! -f "$1" ]
    do
        echo -ne "." #Show progress bar
        sleep 1
    done
    echo -ne '\n' #Advance to next line #File has appeared again
    echo "Starting Tail again"
    startTail "$1"
}
startTail "$1"
For sleeping in microseconds, you can use
perl -MTime::HiRes -e "Time::HiRes::usleep(1)" ;
perl -MTime::HiRes -e "Time::HiRes::sleep(0.001)" ;
Unfortunately, none of the tail versions on the remote OS (Solaris)
support the --follow option
That's a little harsh.
Just use -f (rather than --follow) on both Solaris and Linux. On Linux you can use --follow as a synonym for -f; on Solaris you can't.
But anyway, to be more precise: you want a follow option that handles rollovers. GNU tail (i.e. Linux) has that natively by way of the -F (capital F) option; Solaris doesn't. GNU tail's -F option can handle the file being rolled over as long as it keeps the same name. In other words, on Solaris you would have to use the gtail command to force the use of GNU tail.
If you are at a prudent Solaris site, then such GNU tools would just be there, without you having to worry about it. You shouldn't accept a Solaris install from your sysadmin where he/she has deliberately neglected to make sure the basic GNU tools are there. On Solaris 11 (as an example) they really have to go out of their way to make that happen.
You can make your script OS-independent by the well-known method:
TAILCMD="tail"
# We need GNU tail, not any other implementation of 'tail'
if [ "$(uname -s)" == "SunOS" ]; then
TAILCMD="gtail"
fi
$TAILCMD -F myfile.log
