Is there a bash expansion for syslog.1 syslog when syslog.1 may not exist?

I'd like to monitor syslog events every hour. I use dategrep to get the last hour, but around log rotation the last hour may span into the previous syslog file.
Is there an expansion that lists the two most recent syslog files in ascending order?
$(ls -tr syslog* | tail -n 2)
The output should be
syslog.1 syslog # when syslog.1 exists
or
syslog # when it doesn't
I've tried syslog{.1,}, but it always expands to syslog.1 syslog even when syslog.1 doesn't exist, because brace expansion is purely textual and never checks the filesystem.
Thank you!
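Since brace expansion can't do this, one option is to build the list explicitly. A minimal sketch, assuming both files live in the current directory:

# include syslog.1 only when it exists, oldest first
files=(syslog)
[[ -e syslog.1 ]] && files=(syslog.1 syslog)
dategrep ... "${files[@]}"   # your existing dategrep options go here

The array expands to syslog.1 syslog when the rotated file exists, and to just syslog when it doesn't.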

Related

FreeBSD script to show active connections and append the number to a remote file

I am using a NetScaler appliance running FreeBSD, which recognizes many of the usual UNIX commands: grep, awk, crontab, etc.
I run the following command to get the number of connected users we have on the system:
#> nsconmsg -g aaa_cur_ica_conn -d stats
OUTPUT (numbered lines):
Line1: Displaying current counter value information
Line2: NetScaler V20 Performance Data
Line3: NetScaler NS11.1: Build 63.9.nc, Date: Oct 11 2019, 06:17:35
Line4:
Line5: reltime:mili second between two records Sun Jun 28 23:12:15 2020
Line6: Index reltime counter-value symbol-name&device-no
Line7: 1 2675410 605 aaa_cur_ica_conn
…
…
From the above output I only need the number of connected users, shown in Line 7, 3rd column (605 to be precise), along with the hostname and the time the script ran.
To extract this 3rd-column number, i.e. 605, along with the hostname and the collection time, I wrote the following script:
printf '%s - %s - %s\n' "$(hostname)" "$(date '+%H:%M')" "$(nsconmsg -g aaa_cur_ica_conn -d stats | grep aaa_cur_ica_conn | awk '{print $3}')"
The result is perfect, showing hostname, time, and the number of connected users as follows:
Hostname - 09:00 - 605
Now can anyone please shed light on how I can:
Run this script every day, 5am to 5pm (12 hours)?
Append the output to a file on a remote Unix share each time the script runs?
I appreciate this might be a bit of a challenge, but I would be grateful for any bash script wizards out there who can work some magic!
Thanks in advance!
I would suggest a quick look at the FreeBSD Handbook or For People New to Both FreeBSD and UNIX® to get familiar with the operating system and the tools that can help you achieve what you want.
For example, there is a utility/command named cron
The software utility cron is a time-based job scheduler in Unix-like computer operating systems.
For example, to run something every minute between 5am and 5pm every day, you could use an entry like (note that 05-17 matches through 17:59):
* 05-17 * * * command
Try more options here: https://crontab.guru/#*_05-17_*_*_*.
There are more tools for scheduling commands, for example at (https://en.wikipedia.org/wiki/At_(command)), but that is something you will need to evaluate and read more about.
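For the second part of the question (appending the output to a remote Unix share), one hedged sketch is to pipe the script's output over ssh from the cron entry itself; the script path, user, host, and remote file below are hypothetical, and this assumes key-based SSH authentication to the remote host is already set up:

# append one line per run to a file on the remote host, hourly from 05:00 to 17:00
0 5-17 * * * /usr/local/bin/conn_report.sh | ssh user@remotehost 'cat >> /share/ica_conn.log'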
Now, regarding the command you are using to get the "number of connected users": you could avoid the grep and just use awk, for example:
awk '/aaa_cur_ica_conn/ {print $3}'
This prints column 3 only when the line contains aaa_cur_ica_conn. As before, I invite you to read more about the topic so that you get a better overview and better understand the commands.
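Putting that together with the printf line from the question, the grep stage simply drops out:

printf '%s - %s - %s\n' "$(hostname)" "$(date '+%H:%M')" "$(nsconmsg -g aaa_cur_ica_conn -d stats | awk '/aaa_cur_ica_conn/ {print $3}')"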
Last but not least, check the link How do I ask a good question? The better you format and elaborate your question, the easier it is for others to answer.

How to resume reading a file?

I'm trying to find the best and most efficient way to resume reading a file from a given point.
The given file is being written frequently (this is a log file).
This file is rotated on a daily basis.
In the log file I'm looking for the pattern 'slow transaction'. Such lines end with a number in parentheses, and I want the sum of those numbers.
Example of log line:
Jun 24 2015 10:00:00 slow transaction (5)
Jun 24 2015 10:00:06 slow transaction (1)
This is the easy part, which I can do with an awk command to get a total of 6 for the example above.
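For reference, a minimal sketch of that summing step, assuming the count in parentheses is always the last field on the line:

awk '/slow transaction/ { gsub(/[()]/, "", $NF); total += $NF } END { print total+0 }' logfile

With the two example lines above, this prints 6.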
Now my challenge is that I want to get the values from this file on a regular basis. I have an external system that polls a custom OID using SNMP; when that OID is hit, the Linux host runs a couple of basic commands.
I want this SNMP polling event to return only the number of events since the last poll. I don't want the running total every time, just the total from the newly added lines.
Just to mention that only bash or basic commands such as awk, sed, tail, etc. can be used. No Perl or advanced programming languages.
I hope my description is clear enough. Apologies if this is a duplicate. I did some research before posting but did not find anything that precisely corresponds to my need.
Thank you for any assistance
In addition to the methods in the comment link, you can also simply use dd and stat: read the logfile size, save it, sleep 300, then check the logfile size again. If the file size has changed, skip over the old information with dd and read only the new information.
Note: you can add a test to handle the case where the logfile is deleted and then restarted at size 0 (e.g. if ((newsize < size)), then read the whole file).
Here is a short example with 5 minute intervals:
#!/bin/bash

lfn=${1:-/path/to/logfile}
size=$(stat -c "%s" "$lfn")             ## save original log size

while :; do
    newsize=$(stat -c "%s" "$lfn")      ## get new log size
    if ((size != newsize)); then        ## if it changed, read the new info
        ## dd skips one block of $size bytes, i.e. the text already seen
        newtext=$(dd if="$lfn" bs="$size" skip=1 2>/dev/null)
        ## process newtext however you need
        printf "\nnewtext:\n\n%s\n" "$newtext"
        size=$newsize                   ## update size to newsize
    fi
    sleep 300
done
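For the 'slow transaction' counting asked about here, the "process newtext" step could apply the summing awk to the new lines only, for example:

printf "%s\n" "$newtext" | awk '/slow transaction/ { gsub(/[()]/, "", $NF); sum += $NF } END { print sum+0 }'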

How to get an access_log summary with goaccess starting from a certain date?

Currently I keep 6 weeks of Apache access_log. If I generate an access summary at month end:
cat /var/log/httpd/access_log* | goaccess --output-format=csv
the summary will include some access data from the previous month.
How can I skip the previous month's logs and summarise only from the first day of this month?
P.S. The date-format is: %d/%b/%Y
You can trade the Useless Use of cat for a useful grep.
grep -h "$(date +'[0-3][0-9]/%b/%Y')" /var/log/httpd/access_log* |
goaccess --output-format=csv
(-h suppresses the filename prefix grep adds when given multiple files, which would otherwise break the log format.)
If the logs are by date, it would be a lot more economical to skip the logs which you know are too old or too new, i.e. modify the wildcard argument so you only match the files you really want (or run something like find -mtime -30 to at least narrow the set to a few files).
(The cat is useless because, if goaccess is at all correctly written, it should be able to handle
goaccess --output-format=csv /var/log/httpd/access_log*
just fine.)
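As a sketch of the find suggestion above (using the path from the question; -mtime -31 keeps roughly the last month of files):

find /var/log/httpd -name 'access_log*' -mtime -31 -exec cat {} + |
grep "$(date +'[0-3][0-9]/%b/%Y')" |
goaccess --output-format=csv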

Can you view historic logs for parse.com cloud code?

On the Parse.com cloud-code console, I can see logs, but they only go back maybe 100-200 lines. Is there a way to see or download older logs?
I've searched their website & googled, and don't see anything.
Using the parse command-line tool, you can retrieve an arbitrary number of log lines:
Usage:
parse logs [flags]
Aliases:
logs, log
Flags:
-f, --follow=false: Emulates tail -f and streams new messages from the server
-l, --level="INFO": The log level to restrict to. Can be 'INFO' or 'ERROR'.
-n, --num=10: The number of the messages to display
Not sure if there is a limit, but I've been able to fetch 5000 lines of log with this command:
parse logs prod -n 5000
To add on to Pascal Bourque's answer, you may also wish to filter the logs by a given range of dates. To achieve this, I used the following:
parse logs -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredLog.txt
This will get up to 5000 logs, use the sed command to keep all of the logs which are between 2016-01-10 and 2016-01-15, and store the results in filteredLog.txt.
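If the window should track the current date rather than hard-coded values, the range patterns can be generated with date (a sketch; assumes GNU date and that the log timestamps use the same YYYY-MM-DD format):

parse logs -n 5000 | sed -n "/$(date -d '5 days ago' +%F)/, /$(date +%F)/p" > filteredLog.txt

Note that sed prints nothing if the first date never appears in the fetched lines.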

How to get the logs in my script when they have been rotated?

I have a script that fetches the logs from Tomcat and sends them to my cloud resource. Everything works well, but I have a problem when Tomcat rotates the log.
When the log gets rotated it is suffixed with the date (the log gets rotated every day). Since my script only runs every half an hour, I may miss log lines around rotation, because I fetch the log by its static name, in this example logfile.log.
Before getting rotated the file will look like this :
logfile.log
After getting rotated, it will look like this :
logfile.log.2012-10-09
Are there any ways to get rid of this problem?
Edit:
My script :
cp "/tomcat/logs/$logname" "$fileName"
gzip "$fileName"
s3cmd put "$fileName.gz" "s3://x.x.x.x.x/$folderName"
Thanks in advance.
I think the best way to back up your logs is to check the mtime of the logfiles.
You can store the mtime of the last backed-up log file somewhere, then check both the rotated log files and the current log file. If there is a rotated log file newer than the stored mtime, you could append the current log file to the rotated one and then back it up. If only the current log file is newer, just back it up.
The mtime of the file could be retrieved by: LC_ALL=C stat logfile.log | grep '^Modify' | cut -d: -f2-, or the unix timestamp by date "+%s" --date="$(LC_ALL=C stat logfile.log | grep '^Modify' | cut -d: -f2-)"
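A minimal sketch of that check, reusing the cp/gzip/s3cmd steps from the question (the state-file path and bucket name are hypothetical; stat -c %Y is the GNU stat shortcut for the mtime as a Unix timestamp):

#!/bin/bash
state=/var/tmp/last_backup_mtime
last=$(cat "$state" 2>/dev/null || echo 0)

## back up every log file, current or rotated, modified since the last run
for f in /tomcat/logs/logfile.log*; do
    if (( $(stat -c %Y "$f") > last )); then
        gzip -c "$f" > "/tmp/$(basename "$f").gz"
        s3cmd put "/tmp/$(basename "$f").gz" s3://bucket/folder/
    fi
done

date +%s > "$state"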
