Why is there an epoch time from an empty value? - bash

I'm trying to improve a monitor that reads a log file and sends a notification if an application has stopped running and logging. To avoid getting notifications at the moment the log file rotates and the app hasn't logged anything yet, I added a check to see whether the oldest log line is more than two minutes old.
Here's a snippet of the important part.
<bash lines and variable setting>
LATEST=$(gawk 'match($0, /\[#\|(.*[0-9]*?)T(.*[0-9]*?)\+.*<AppToMonitor>/, m) {print m[1], m[2];} ' $LOG_TO_MONITOR | tail -1 )
OLDEST=$(gawk 'match($0, /\[#\|(.*[0-9]*?)T(.*[0-9]*?)\+.*INFO/, m) {print m[1], m[2];} ' $LOG_TO_MONITOR | head -1)
if [ -z "$LATEST" ]
then
# no line in log
OLDEST_EPOCH=`(date --date="$OLDEST" +%s)`
CURR_MINUS_TWO=$(date +"%Y-%m-%d %T" -d "2 mins ago")
CURR_MINUS_TWO_EPOCH=`(date --date="$CURR_MINUS_TWO" +%s)`
# If oldest log line is over two minutes old
if [[ "$OLDEST_EPOCH" -lt "$CURR_MINUS_TWO_EPOCH" ]]
then
log "No lines found."
<send notification>
else
log "Server.log rotated."
fi
<else and stuff>
I still got some notifications when the log rotated, and the culprit was that the epoch time was taken from a completely empty log file. I tested this by creating an empty .log file with touch test.log, then setting EMPTY=$(gawk 'match($0, /\[#\|(.*[0-9]*?)T(.*[0-9]*?)\+.*INFO/, m) {print m[1], m[2];} ' /home/USER/test.log | head -1)
Now, if I echo $EMPTY, I get a blank line. But if I convert this empty value to epoch time with EPOCHEMPTY=`(date --date="$EMPTY" +%s)`, echo gives me the epoch time 1584914400. This refers to yesterday evening. Apparently the same epoch is returned every time an empty date is converted to epoch time, for example when replacing "$EMPTY" with "", at least while writing this.
So the question is: what is this epoch time produced from an empty line? When the if statement makes the comparison with this value, it triggers the notification even though it should not. Is there a way to avoid passing an empty string into the comparison and instead use some other time value from the log file?

date's manual states that an empty string passed to -d is interpreted as the start of the current day (midnight), which is why you get a valid epoch value rather than an error.
You could, however, rely on the -f/--file option and process substitution:
date -f <(echo -n "$your_date")
The -f option lets you pass a file as a parameter; each of its lines is treated as an input to -d. An empty file simply produces an empty output.
The process substitution creates an ephemeral file on the fly (an anonymous pipe, to be precise, but that's still a file) containing only the content of your variable: an empty file if the variable is undefined or the empty string, and a single line otherwise.
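A minimal sketch of the difference (the variable name is hypothetical):

```shell
your_date="2020-03-23 10:15:00"
date -f <(echo -n "$your_date") +%s    # one line in, one epoch value out

your_date=""
date -f <(echo -n "$your_date") +%s    # empty input, empty output: no bogus epoch
```

In the monitor this means an unset or empty OLDEST yields no output at all, which you can then test with -z instead of comparing against a spurious midnight timestamp.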

Related

How do I search a log file based on timestamp

I have written a simple script that sends out an email when a service is down. Once I restart the service, the script checks the file for the same keyword. The problem is that it may find an earlier error in the log and raise a false alarm that the service is still down.
So I decided to search based on the timestamp.
dt=$(date +"%D %T")
awk '$0 ~ "Connection refused" && $0 >= $dt' /***.log
This is still returning all the old results as well.
This is what the contents of the log look like:
[08/06/20 11:36:54.577]:Work...
Please let me know what I'm missing here and if this is the best way to go about with this.
Edit: This is going to be an automated script that will be run every hour.
Thank you!
The reason you get the old results as well is that you don't really compare against that date; you compare against an undefined $dt inside the awk condition. The awk body is not a place where you can use a bash variable as-is. See how to do this properly: https://www.gnu.org/software/gawk/manual/html_node/Using-Shell-Variables.html
dt=$(date +"%D %T")
awk -v dt="$dt" '$0 >= dt && $0 ~ /Connection refused/' file
The alphabetical comparison seems sufficient for your case; I assume you look into logs spanning a few hours or days. (It could fail around New Year's Day, or not, depending on log file rotation and your environment, since the MM/DD/YY format produced by %D puts the year last.)
To make it faster, as your log lines are still sorted by date, you want to search from the restart timestamp to the end of file, so you could set a flag when you find that timestamp and check for the pattern only after that:
awk -v dt="$dt" 'f && $0 ~ /Connection refused/{print; next} $0 >= dt {f=1}' file
Note that no timestamps are checked again after the critical point. In any case, it would be better to match the last service restart exactly (how to do this depends on details you have not provided) rather than comparing timestamps.
Edit: In the sample line of the question we have the timestamp inside brackets
[08/06/20 11:36:54.577]:Work...
and this can be passed e.g. with this modification
awk -v dt="$dt" 'f && $0 ~ /Connection refused/{print; next} substr($0,2) >= dt {f=1}' file
where substr($0,2) returns $0 without the first character.
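A quick way to convince yourself, using made-up sample lines in the question's format (the dt value here is hypothetical):

```shell
printf '%s\n' \
  '[08/06/20 10:00:00.000]:Connection refused' \
  '[08/06/20 11:36:54.577]:Work...' \
  '[08/06/20 12:00:00.000]:Connection refused' |
awk -v dt='08/06/20 11:00:00' \
  'f && $0 ~ /Connection refused/ {print; next} substr($0,2) >= dt {f=1}'
# only the 12:00 refusal prints; the 10:00 one predates dt
```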

tail a log file from a specific line number

I know how to tail a text file with a specific number of lines,
tail -n50 /this/is/my.log
However, how do I make that line count a variable?
Let's say I have a large log file which is appended to daily by some program, all lines in the log file start with a datetime in this format:
Day Mon YY HH:MM:SS
Every day I want to output the tail of the log file, but only the previous day's records. Let's say this output runs just after midnight; I'm not worried about the tail spilling over into the next day.
I just want to be able to work out how many rows to tail, based on the first occurrence of yesterday's date...
Is that possible?
Answering the question in the title, for anyone who comes here that way: head and tail can both accept an argument describing how much of the file to exclude.
For tail, use -n +num to start at line number num.
For head, use -n -num to give the number of lines not to print.
This is relevant to the actual question if you remember the number of lines from the previous run and then use that number in tail -n +$prevlines to get the next portion of the partial log, regardless of how often the log is checked.
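A minimal sketch of that idea, assuming a hypothetical state file that remembers the line count seen on the previous run:

```shell
logfile=my.log          # hypothetical log path
state=my.log.offset     # hypothetical state file

prevlines=$(cat "$state" 2>/dev/null || echo 0)
tail -n +"$((prevlines + 1))" "$logfile"   # only lines added since last run
wc -l < "$logfile" > "$state"              # remember the new count for next time
```

On the first run the state file is missing, prevlines falls back to 0, and tail -n +1 prints the whole file.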
Answering the actual question: one way to print everything after a certain line that you can grep for is to use the -A option with a ridiculously large count. This may be more useful than the other answers here, as you can get several days of results. So, to get everything from yesterday and so far today:
grep "^`date -d yesterday '+%d %b %y'`" -A1000000 log_file.txt
You can combine 2 greps to print between 2 date ranges.
Note that this relies on the date actually occurring in the log file. It has the weakness that if no events were logged on a particular day used as the range marker, then it will fail to find anything.
To resolve that, you could inject dummy records for the start and end dates and sort the file before grepping. This is probably overkill, though, and the sort may be expensive, so I won't demonstrate it.
I don't think tail has any functionality like this.
You could work out the beginning and ending line numbers using awk, but if you just want to extract those lines from the log file, the simplest way is probably to use grep combined with date. Matching yesterday's date at the beginning of the line should work:
grep "^`date -d yesterday '+%d %b %y'`" < log_file.txt
You may need to adjust the date format to match exactly what you've got in the log file.
You can do it without tail; just grep the rows carrying the previous day's date:
grep "$(date -d "yesterday 13:00" '+%d %m %Y')" my.log
And if you need line count you can add
| wc -l
I worked this out through trial and error, by getting the line number of the first line containing the date and the total line count, as follows:
lines=$(wc -l < myfile.log)
start=$(grep -n "$datestring" myfile.log | head -n1 | cut -f1 -d:)
n=$((lines - start + 1))   # +1 so the matching line itself is included
and then a tail, based on that:
tail -n$n myfile.log

Remove all lines in file older than 24 hours

I've seen a lot of questions about removing files that are older than x hours. I have not seen any about removing lines in a file that are older than x hours.
Here is an example of the log I am dealing with. For the sake of the example, assume current time is 2016-12-06 06:08:48,594
2016-12-05 00:44:48,194 INFO this line should be deleted
2016-12-05 01:02:10,220 INFO this line should be deleted
2016-12-05 05:59:10,540 INFO this line should be deleted
2016-12-05 06:08:10,220 INFO this line should be deleted
2016-12-05 16:05:30,521 INFO do not delete this line
2016-12-05 22:23:08,623 INFO do not delete this line
2016-12-06 01:06:28,323 INFO do not delete this line
2016-12-06 05:49:55,619 INFO do not delete this line
2016-12-06 06:06:55,419 INFO do not delete this line
I realize that it might be easier to do this in Python or Perl, but this needs to be done in bash. That being said, please post any and all relevant answers.
So far I've tried using sed, awk, etc. to convert the timestamps to seconds.
#! /bin/bash
TODAY=$(date +%Y-%m-%d)
# one day ago
YESTERDAY=$(date -d @$(( $(date +"%s") - 86400)) +%Y-%m-%d)
REPORT_LOG=report_log-$TODAY.log
# current date in seconds
NOW=$(date +%s)
# oldest date in the log trimmed by timestamp
OLDEST_DATE=$(head -1 $REPORT_LOG | awk '{print $1" "$2}')
# oldest date converted to seconds
CONVERT_OLDEST_DATE=$(date -d "$OLDEST_DATE" +%s)
TIME_DIFF=$(($NOW-$CONVERT_OLDEST_DATE))
# if difference is less than 24 hours, then...
if [ $TIME_DIFF -ge 86400 ]; then
LATEST_LOG_TIME=$(tail -1 $REPORT_LOG | awk '{print $2}'| cut -c 1-8)
RESULTS=$(awk "/${YESTERDAY} ${LATEST_LOG_TIME}/{i++}i" $REPORT_LOG)
if [ -z "$RESULTS" ]; then
awk "/${YESTERDAY} ${LATEST_LOG_TIME}/{i++}i" $REPORT_LOG > $REPORT_LOG.tmp && mv $REPORT_LOG.tmp $REPORT_LOG
else
echo "Out of ideas at this point"
fi
else
echo "All times newer than date"
fi
The problem with my above snippet is that it relies on a date to repeat itself for the awk to work, which is not always the case. There are hour long gaps in the log files so it is possible for the last line's date (ex. 2016-12-06 06:06:55) to be the only time that date appears. If the timestamp has not previously appeared, my script will delete all results before the matched timestamp.
Any and all help is appreciated.
awk to the rescue!
$ awk -v d="2016-12-05 06:08:48,594" '($1 " " $2) > d' file
will print the newer entries. Obviously, you want to create the date dynamically.
Ignoring the milliseconds part to simplify, you can use
$ awk -v d="$(date --date="yesterday" "+%Y-%m-%d %H:%M:%S,999")" ...
Note that lexical comparison works only because your date format is hierarchical, running from the most significant field to the least (why doesn't everybody use this?). For any other format you are better off converting to seconds since the epoch and doing a numerical comparison on integers.
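A small illustration of why the field order matters (both comparisons below are plain string ordering, with made-up dates):

```shell
# ISO-style, most significant field first: string order matches time order
[ "2016-12-05 06:08" \< "2016-12-06 01:00" ] && echo "iso order ok"

# %D-style MM/DD/YY: string order breaks across a year boundary
[ "12/31/16" \< "01/01/17" ] || echo "mm/dd/yy order fails"
```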
Work with the dates as seconds since the Unix epoch, using the format string +%s. For instance:
yesterday=$(date --date="yesterday" +%s)
Then convert the dates you've extracted with awk or similar:
dateInUnixEpoch=$(date --date="$whateverDate" +%s)
Then just compare the dates:
if [ "$yesterday" -ge "$dateInUnixEpoch" ]; then
    # do whatever to delete the lines
fi
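Putting those steps together, a sketch of the whole filter, assuming a hypothetical report.log in the question's format (the ",594"-style milliseconds suffix is stripped first, since date cannot parse it; shelling out to date once per line is slow but simple):

```shell
cutoff=$(date -d "24 hours ago" +%s)

awk -v cutoff="$cutoff" '{
    t = $2
    sub(/,.*/, "", t)                        # drop the milliseconds suffix
    cmd = "date -d \"" $1 " " t "\" +%s"     # convert the timestamp via date
    cmd | getline ts
    close(cmd)
    if (ts + 0 >= cutoff) print              # keep lines at or after the cutoff
}' report.log > report.log.tmp && mv report.log.tmp report.log
```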

Incrementing Numbers & Counting with sed syntax

I am trying to wrap my head around sed and thought it would be best to try using something simple yet useful. At work I want to keep count on a small LCD display each time a specific script is run by users. I am currently doing this with a total count using the following syntax:
oldnum=`cut -d ':' -f2 TotalCount.txt`
newnum=`expr $oldnum + 1`
sed -i "s/$oldnum\$/$newnum/g" TotalCount.txt
This modifies the file that has this one line in it:
Total Recordings:0
Now I want to elaborate a little and increment the numbers starting at midnight and resetting to zero at 23:59:59 each day. I created a secondary .txt file for the display to read from with only one single line in it:
Total Recordings Today:0
But the syntax is not going to be the same. How must the above sed syntax be changed to change the number in the dialog of the second file?
I can change and reset the files using sed/bash in conjunction with a simple cron job on a schedule. The problem is that I can't figure out the sed syntax to replicate the effect I originally got working. Can anyone help, please? I have been reading for hours on this and finally decided to post and just make a pot of coffee. I have a 4-line LCD and would love to track counts across schedules if the syntax is easy enough to learn.
sed should work fine for incrementing either Total Recordings: or Total Recordings Today: in your file, since it matches the same pattern. To reset the counter each day at a certain time, I would recommend a cron job:
0 0 * * * echo "Total Recordings Today:0" > /path/to/TotalCount.txt 2>/dev/null
The other things I would encourage are to use the newer-style $( ... ) syntax for command substitution, and to create a variable for your TotalCount.txt path.
#!/bin/bash
totals=/path/to/TotalCount.txt
oldnum=$(cut -d ':' -f2 "$totals")
newnum=$((oldnum + 1))
sed -i "s/$oldnum\$/$newnum/g" "$totals"
This way you can easily reuse it for whatever else you want to do with it, quote it properly, and simplify your code. Note: on OS X, sed's in-place option needs an explicit (possibly empty) suffix argument: sed -i ''.
Whenever in doubt, http://shellcheck.net is a really nice tool for finding mistakes in your code.
Although you're looking for a sed solution, I cannot resist posting how it can be done in awk:
$ awk -F: -v OFS=: '{$2++} 1' file > temp && mv temp file
-F: sets the input field delimiter to :, and -v OFS=: sets the output field delimiter likewise. awk increments the second field by one; the trailing 1 is shorthand for print (it can be replaced by any "true" value). The output is written to a temp file which, if the command succeeds, replaces the original input file (mimicking an in-place edit).
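For instance, on a throwaway copy (path hypothetical):

```shell
f=$(mktemp)
echo 'Total Recordings Today:41' > "$f"

# increment the number after the colon, then swap the temp file into place
awk -F: -v OFS=: '{$2++} 1' "$f" > "$f.tmp" && mv "$f.tmp" "$f"

cat "$f"   # Total Recordings Today:42
```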
Sed is a fine tool, but notoriously not the best for arithmetic. You could make what you already have work by initializing the counter to zero prior to incrementing it, if the file was not last modified today (or does not exist):
[ `date +%Y-%m-%d` != "`stat --printf %z TotalCount.txt 2> /dev/null|cut -d ' ' -f 1`" ] && echo "Total Recordings Today:0" > TotalCount.txt
To do the same with shifts, you would likely compute a shift "ordinal number" by subtracting the start of the first shift since midnight (say 7 * 3600) from the seconds since the epoch (the epoch itself falling on a midnight, UTC) and dividing by the length of a shift (8 * 3600), then reinitialize the counter whenever that number changes. Something like:
[ $(((`date +%s` - 7 * 3600) / (8 * 3600))) -gt $(((`stat --printf %Z TotalCount.txt 2> /dev/null` - 7 * 3600) / (8 * 3600))) ] && echo "Total Recordings This Shift:0" > TotalCount.txt

Delete lines in file over an hour old using timestamps bash

Having a bit of bother trying to get the following to work.
I have a file containing hostname:timestamp as below:
hostname1:1445072150
hostname2:1445076364
I am trying to create a bash script that will query this file (using a cron job) to check if the timestamp is over 1 hour old and if so, remove the line.
Below is what I have so far but it doesn't appear to be removing the line in the file.
#!/bin/bash
hosts=/tmp/hosts
current_timestamp=$(date +%s)
while read line; do
hostname=`echo $line | sed -e 's/:.*//g'`
timestamp=`echo $line | cut -d ":" -f 2`
diff=$(($current_timestamp-$timestamp))
if [ $diff -ge 3600 ]; then
echo "$hostname - Timestamp over an hour old. Deleting line."
sed -i '/$hostname/d' $hosts
fi
done <$hosts
I have managed to get the timestamp part working correctly in identifying hosts that are over an hour old, but I'm having trouble removing the line from the file.
I suspect it may be due to the while loop keeping the file open, but I'm not 100% sure how to work around it. I also tried making a copy of the file and editing that, but still nothing.
ALTERNATIVELY: If there is a better way to get this to work and produce the same result, I am open to suggestions :)
Any help would be much appreciated.
Cheers
The problem in your script was just this line:
sed -i '/$hostname/d' $hosts
Variables inside single quotes are not expanded to their values, so the command tries to delete lines matching the literal string $hostname instead of its value. If you replace the single quotes with double quotes, the variable gets expanded to its value, which is what you need here:
sed -i "/$hostname/d" $hosts
There are improvements possible:
#!/bin/bash
hosts=/tmp/hosts
current_timestamp=$(date +%s)
while read line; do
set -- ${line/:/ }
hostname=$1
timestamp=$2
((diff = current_timestamp - timestamp))
if ((diff >= 3600)); then
echo "$hostname - Timestamp over an hour old. Deleting line."
sed -i "/^$hostname:/d" $hosts
fi
done <$hosts
The improvements:
More strict pattern in the sed command, to make it more robust and to avoid some potential errors
Simpler way to extract hostname part and timestamp part without any sub-shells
Simpler arithmetic operations by enclosing within ((...))
You ask for alternatives: use awk.
awk -F: -v ts=$(date +%s) '$2 <= ts-3600 { next } { print }' $hosts > $hosts.$$
mv $hosts.$$ $hosts
The ts=$(date +%s) part sets the awk variable ts to the value from date. The script skips any line whose second column (after the colon) is at or below the threshold, and prints every other line. You could do the subtraction once in a BEGIN block if you wanted to. Decide whether <= or < is correct for your purposes.
If you need to know which lines are deleted, you can add
printf "Deleting %s - timestamp %d older than %d\n", $1, $2, (ts-3600) > "/dev/stderr"
before the next to print the information on standard error. If you must write that to standard output, then you need to arrange for retained lines to be written to a file with print > file as an alternative action after the filter condition (passing -v file="$hosts.$$" as another pair of arguments to awk). The tweaks that can be made are endless.
If the file is of any significant size, it will be quicker to copy the relevant subsection of the file once to a temporary file and then to the final file than to edit the file in place multiple times as in the original code. If the file is small enough, there isn't a problem.
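To see the awk variant in action on a throwaway file (paths and host names hypothetical):

```shell
hosts=$(mktemp)
now=$(date +%s)
# one entry two hours old, one entry a minute old
printf 'stalehost:%s\nfreshhost:%s\n' "$((now - 7200))" "$((now - 60))" > "$hosts"

awk -F: -v ts="$now" '$2 <= ts-3600 { next } { print }' "$hosts" > "$hosts.$$" &&
mv "$hosts.$$" "$hosts"

cat "$hosts"   # only freshhost remains; stalehost was over an hour old
```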
