Logrotate using rsyslog's omprog hangs over time - rsyslog

I've tried almost everything but still can't find a solution for this issue, so I wanted to ask for a little help.
I have logrotate (v. 3.7.8) configured to rotate based on the size of the log files:
/home/test/logs/*.log {
missingok
notifempty
nodateext
size 10k
rotate 100
copytruncate
}
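(As a quick sanity check of the size trigger on its own, logrotate's debug flag shows what would be rotated without changing any files; this uses the same binary and config paths as the script further down:)
/usr/sbin/logrotate -d /etc/logrotate.conf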
Rotation is based only on size and is triggered whenever a message arrives at the rsyslog daemon (v. 5.8.10). The rsyslog configuration:
$ModLoad omprog
$ActionOMProgBinary /usr/local/bin/log_rotate.sh
$MaxMessageSize 64k
$ModLoad imuxsock
$ModLoad imklog
$ModLoad imtcp
$InputTCPServerRun 514
$template FORMATTER, "%HOSTNAME% | %msg:R,ERE,4,FIELD:(.*)\s(.*)(:::)(.*)--end%\n"
$ActionFileDefaultTemplate FORMATTER
$Escape8BitCharactersOnReceive off
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 0
$FileOwner test
$FileGroup test
$DirOwner test
$DirGroup test
# Log each module execution to separate log files and don't use the prepending module_execution_ in the log name.
$template CUSTOM_LOGS,"/home/test/logs/%programname:R,ERE,1,FIELD:^module_execution_(.*)--end%.log"
if $programname startswith 'module_execution_' then :omprog:
if $programname startswith 'module_execution_' then ?CUSTOM_LOGS
& ~
The script invoked by omprog just runs logrotate and, for testing purposes, appends a line to a logrot file:
#!/bin/bash
echo "1" >> /home/test/logrot
/usr/sbin/logrotate /etc/logrotate.conf -v
How to reproduce:
execute:
for i in {1..50000}; do logger -t "module_execution_test" "test message"; done;
check the rotated files: there will be a lot of files (test.log.1, test.log.2, etc.) with sizes close to the 10 kB limit, and one test.log that is much bigger than expected
check:
wc -l /home/test/logrot
The line count grows for some time but then stops even though messages are still arriving (it hangs at exactly the point when rotation stops happening), which means rsyslog no longer calls the external script.
So IMO it looks like a bug in rsyslog or the omprog plugin. Any idea what is going on?
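For what it's worth, one thing that may be worth ruling out (this is only an assumption about how omprog hands messages to the binary over its stdin, not something confirmed in this thread) is the script exiting after a single run instead of staying alive and draining stdin. A long-running sketch of the same script:
#!/bin/bash
# Assumption: omprog writes each forwarded message as one line on the script's
# stdin and expects the binary to keep running; read and discard each line,
# then rotate, instead of exiting after one pass.
while IFS= read -r line; do
    echo "1" >> /home/test/logrot
    /usr/sbin/logrotate /etc/logrotate.conf -v
done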
br

Related

Top command redirection

I am trying to analyse a randomly crashing application. To get some details of the process, I am implementing a script like the one below:
procID=$(ps -aef|grep app|awk '{print $2}')
top -p $procID -b -d 10 > test.log
In this case the top command runs every 10 seconds and writes to test.log. I plan to run it indefinitely, but I want to flush out the contents of test.log each time top writes new values to it. How can I modify the script accordingly?
To keep your log from getting too big, I suggest you use logrotate instead. In short, logrotate is run from a cron job and will "rotate" your log file when a condition is met (once per day, when the file reaches a given size, etc.). You can also choose to keep only the last X rotated logs.
This is what is used for lastmessage and the other Linux log files (hence the .log.1, .log.2, etc.).
Here is a basic way to make it work:
Create a configuration for your file in /etc/logrotate.d. For example:
# /etc/logrotate.d/test
/var/log/test.log {
size 4M
create 770 somegroup someuser
rotate 4
missingok
copytruncate
}
What it says:
size 4M: rotate the file after it reaches 4M in size
create ...: create the new file with the given permissions (replace somegroup someuser with your group and user)
rotate 4: keep the last 4 rotated log files (test.log.1, test.log.2, etc.)
missingok and copytruncate: avoid some errors (see the docs for more info)
Check that your file is correct by executing:
sudo logrotate /etc/logrotate.d/test
enjoy.
More about logrotate: https://support.rackspace.com/how-to/understanding-logrotate-utility/
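If the system-wide daily logrotate run is not frequent enough for a log that top appends to every 10 seconds, one option (the entry below is a sketch; the binary and state-file paths are assumptions, adjust them to your system) is to run logrotate hourly from a root crontab with its own state file:
0 * * * * /usr/sbin/logrotate --state /var/lib/logrotate/test.state /etc/logrotate.d/test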

Logrotate not working in Ubuntu 14.04

I have installed Logrotate in my system.
My log file name is like: log-2015-09-09.php
Here is my configuration in the /etc/logrotate.conf file:
/home/root/php/www/myProject/CI/application/logs/log-%Y-%m-%d.php{
daily
size 1K
copytruncate
compress
rotate 1
notifempty
missingok
}
When I check the status using:
cat /var/lib/logrotate/status
it does not show me anything about my logs, and it also does not delete or compress my log files.
Is there something wrong in my configuration that I need to change?
I would imagine that the directory/file name is the cause here. I'm not sure what you are trying to do with the % in there, but you can use wildcards instead, like:
/home/root/php/www/myProject/CI/application/logs/log-*.php {
daily
size 1K
copytruncate
compress
rotate 1
notifempty
missingok
}
You could also test the logrotate config using:
logrotate -d -f /etc/logrotate.conf
-d = Turns on debug mode and no changes will be made to the log files.
-f = Tells logrotate to force the rotation, even if it doesn't think this is necessary
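Once the pattern matches in the debug output, a forced run without -d will actually rotate the files and update the state file the question checked, so you can confirm it there (both paths are taken from the question):
sudo logrotate -f /etc/logrotate.conf
grep 'myProject' /var/lib/logrotate/status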

How to resume reading a file?

I'm trying to find the best and most efficient way to resume reading a file from a given point.
The given file is being written frequently (this is a log file).
This file is rotated on a daily basis.
In the log file I'm looking for the pattern 'slow transaction'. Such lines end with a number in parentheses, and I want the sum of those numbers.
Example of log line:
Jun 24 2015 10:00:00 slow transaction (5)
Jun 24 2015 10:00:06 slow transaction (1)
This is the easy part, which I can do with an awk command to get a total of 6 for the above example.
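(For reference, a sketch of that first part; the log path is a placeholder:)
awk '/slow transaction/ { gsub(/[()]/, "", $NF); sum += $NF } END { print sum + 0 }' /path/to/logfile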
Now my challenge is that I want to get the values from this file on a regular basis. I have an external system that polls a custom OID using SNMP. When this OID is polled, the Linux host runs a couple of basic commands.
I want each SNMP poll to return only the count of events since the last poll. I don't want the overall total every time, just the total of the newly added lines.
Just to mention: only bash can be used, or basic commands such as awk, sed, tail, etc. No Perl or other advanced programming languages.
I hope my description is clear enough. Apologies if this is a duplicate; I did some research before posting but did not find anything that precisely corresponds to my need.
Thank you for any assistance.
In addition to the methods in the comment link, you can also simply use dd and stat: read the logfile size, save it, sleep 300, then check the logfile size again. If the file size has changed, skip over the old information with dd and read only the new information.
Note: you can add a test to handle the case where the logfile is deleted and then restarted with size 0 (e.g. if ((newsize < size)), then read the whole file).
Here is a short example with 5-minute intervals:
#!/bin/bash

lfn=${1:-/path/to/logfile}

size=$(stat -c "%s" "$lfn")              ## save original log size

while :; do
    newsize=$(stat -c "%s" "$lfn")       ## get new log size
    if ((size != newsize)); then         ## if changed, use new info
        ## use dd to skip over existing text to the new text
        newtext=$(dd if="$lfn" bs="$size" skip=1 2>/dev/null)
        ## process newtext however you need
        printf "\nnewtext:\n\n%s\n" "$newtext"
        size=$newsize                    ## update size to newsize
    fi
    sleep 300
done
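To tie this back to the 'slow transaction' counters from the question, newtext can be fed straight into the same kind of awk sum inside the if-block (a sketch reusing the loop above):
newtotal=$(printf '%s\n' "$newtext" |
    awk '/slow transaction/ { gsub(/[()]/, "", $NF); sum += $NF } END { print sum + 0 }')
printf 'slow transactions since last poll: %s\n' "$newtotal"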

Logrotate does not upload to S3

I've spent some hours trying to figure out why logrotate won't successfully upload my logs to S3, so I'm posting my setup here. Here's the thing: logrotate uploads the log file correctly to S3 when I force it like this:
sudo logrotate -f /etc/logrotate.d/haproxy
Starting S3 Log Upload...
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
/var/log/haproxy-2014-12-23-044414.gz -> s3://my-haproxy-access-logs/haproxy-2014-12-23-044414.gz [1 of 1]
315840 of 315840 100% in 0s 2.23 MB/s done
But it does not succeed as part of the normal logrotate process. The logs are still compressed by my postrotate script, so I know that it is being run. Here is my setup:
/etc/logrotate.d/haproxy =>
/var/log/haproxy.log {
size 1k
rotate 1
missingok
copytruncate
sharedscripts
su root root
create 777 syslog adm
postrotate
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
endscript
}
/usr/local/admintools/upload.sh =>
echo "Starting S3 Log Upload..."
BUCKET_NAME="my-haproxy-access-logs"
# Perform Rotated Log File Compression
filename=/var/log/haproxy-$(date +%F-%H%M%S).gz
tar -czPf "$filename" /var/log/haproxy.log.1
# Upload log file to Amazon S3 bucket
/usr/bin/s3cmd put "$filename" s3://"$BUCKET_NAME"
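One low-tech way to see why the unattended run fails is to capture s3cmd's own output from inside the script (reusing the upload_errors path from the postrotate line above):
/usr/bin/s3cmd put "$filename" s3://"$BUCKET_NAME" >> /var/log/upload_errors 2>&1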
And here is the output of a dry run of logrotate:
sudo logrotate -fd /etc/logrotate.d/haproxy
reading config file /etc/logrotate.d/haproxy
Handling 1 logs
rotating pattern: /var/log/haproxy.log forced from command line (1 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/haproxy.log
log needs rotating
rotating log /var/log/haproxy.log, log->rotateCount is 1
dateext suffix '-20141223'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/haproxy.log.1 to /var/log/haproxy.log.2 (rotatecount 1, logstart 1, i 1),
renaming /var/log/haproxy.log.0 to /var/log/haproxy.log.1 (rotatecount 1, logstart 1, i 0),
copying /var/log/haproxy.log to /var/log/haproxy.log.1
truncating /var/log/haproxy.log
running postrotate script
running script with arg /var/log/haproxy.log : "
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
"
removing old log /var/log/haproxy.log.2
Any insight appreciated.
It turned out that my s3cmd was configured for my user, not for root.
ERROR: /root/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
Solution was to copy my config file over. – worker1138
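A concrete way to apply that fix (the home-directory path is an assumption; adjust to your setup): either give root its own copy of the config, or point s3cmd at an explicit config file from upload.sh:
sudo cp ~/.s3cfg /root/.s3cfg
# or, inside upload.sh:
/usr/bin/s3cmd --config=/home/ubuntu/.s3cfg put "$filename" s3://"$BUCKET_NAME"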

Can you view historic logs for parse.com cloud code?

On the Parse.com cloud-code console, I can see logs, but they only go back maybe 100-200 lines. Is there a way to see or download older logs?
I've searched their website & googled, and don't see anything.
Using the parse command-line tool, you can retrieve an arbitrary number of log lines:
Usage:
parse logs [flags]
Aliases:
logs, log
Flags:
-f, --follow=false: Emulates tail -f and streams new messages from the server
-l, --level="INFO": The log level to restrict to. Can be 'INFO' or 'ERROR'.
-n, --num=10: The number of the messages to display
Not sure if there is a limit, but I've been able to fetch 5000 lines of log with this command:
parse logs prod -n 5000
To add on to Pascal Bourque's answer, you may also wish to filter the logs by a given range of dates. To achieve this, I used the following:
parse logs -n 5000 | sed -n '/2016-01-10/, /2016-01-15/p' > filteredLog.txt
This will get up to 5000 logs, use the sed command to keep all of the logs which are between 2016-01-10 and 2016-01-15, and store the results in filteredLog.txt.
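The same pipeline also combines with the --level flag from the help text above if you only care about errors (the prod target name is reused from the earlier example):
parse logs prod -n 5000 -l ERROR | sed -n '/2016-01-10/, /2016-01-15/p' > filteredErrors.txt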
