Logrotate not working in Ubuntu 14.04 - CodeIgniter

I have installed logrotate on my system.
My log file name looks like: log-2015-09-09.php
Here is my configuration in the /etc/logrotate.conf file:
/home/root/php/www/myProject/CI/application/logs/log-%Y-%m-%d.php{
daily
size 1K
copytruncate
compress
rotate 1
notifempty
missingok
}
When I check the status using:
cat /var/lib/logrotate/status
It does not show me anything about my logs, and it also does not delete or compress my log files.
Is there something wrong in my configuration that I need to change?

I would imagine that the directory/file name is the cause here. I'm not sure what you are trying to do with the % in there, but you can use wildcards instead, like:
/home/root/php/www/myProject/CI/application/logs/log-*.php {
daily
size 1K
copytruncate
compress
rotate 1
notifempty
missingok
}
You could also test logrotate itself using:
logrotate -d -f /etc/logrotate.conf
-d = Turns on debug mode; no changes will be made to the log files.
-f = Tells logrotate to force the rotation, even if it doesn't think this is necessary.
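To confirm the wildcard actually matches something, you can also list the files with the same glob first (the path is the one from the question):
ls /home/root/php/www/myProject/CI/application/logs/log-*.php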

Related

Top command redirection

I am trying to analyse a randomly crashing application. To get some details of the process, I am implementing a script like the one below:
procID=$(ps -aef|grep app|awk '{print $2}')
top -p $procID -b -d 10 > test.log
In this case, the top command will execute every 10 seconds and write to test.log. I plan to run it indefinitely, but I want to flush out the contents of test.log each time top writes some value to it. How can I modify the script accordingly?
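As an aside about the script as written (not part of the original question): ps -aef | grep app also matches the grep process itself, so procID can end up holding more than one PID. pgrep avoids that:
# A sketch: pgrep -o returns a single PID, the oldest process matching "app"
procID=$(pgrep -o app)
top -p "$procID" -b -d 10 > test.log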
To avoid your log getting too big, I suggest you use logrotate instead. In short, logrotate is run from a cron job and will "rotate" your log file upon a condition (one file per day, rotate when the file reaches a given size, etc.). You can also choose to keep the last X rotated logs.
This is what is used for the system logs (hence the .log.1, .log.2, etc. you see in /var/log).
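You can see that numbering on most Linux systems (assuming the standard /var/log layout; a harmless check):
ls /var/log/*.1 /var/log/*.gz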
Here is a basic way to make it work:
Create a configuration for your file in /etc/logrotate.d. For example:
# /etc/logrotate.d/test
/var/log/test.log {
size 4M
create 770 someuser somegroup
rotate 4
missingok
copytruncate
}
What it says:
size 4M: rotate the file once it reaches 4M in size
create ...: create the new file with the given permissions (replace someuser somegroup with your username and group; the owner comes first)
rotate 4: keep the last 4 rotated log files (test.log.1, test.log.2, etc.)
missingok and copytruncate: avoid some errors (see the documentation for more info)
Check that your file is correct by executing:
sudo logrotate /etc/logrotate.d/test
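If you only want to validate the configuration without touching any files, a debug run works too (logrotate makes no changes in this mode):
sudo logrotate -d /etc/logrotate.d/test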
Enjoy.
More about logrotate: https://support.rackspace.com/how-to/understanding-logrotate-utility/

Logrotate using rsyslog's omprog hangs over time

I have tried almost everything but still can't find any solution for this issue, so I wanted to ask for a little help.
I have logrotate (v. 3.7.8) configured based on the size of the log files:
/home/test/logs/*.log {
missingok
notifempty
nodateext
size 10k
rotate 100
copytruncate
}
Rotation is based only on size and is invoked whenever a message arrives at the rsyslog daemon (v. 5.8.10). The rsyslog configuration:
$ModLoad omprog
$ActionOMProgBinary /usr/local/bin/log_rotate.sh
$MaxMessageSize 64k
$ModLoad imuxsock
$ModLoad imklog
$ModLoad imtcp
$InputTCPServerRun 514
$template FORMATTER, "%HOSTNAME% | %msg:R,ERE,4,FIELD:(.*)\s(.*)(:::)(.*)--end%\n"
$ActionFileDefaultTemplate FORMATTER
$Escape8BitCharactersOnReceive off
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 0
$FileOwner test
$FileGroup test
$DirOwner test
$DirGroup test
# Log each module execution to separate log files and don't use the prepending module_execution_ in the log name.
$template CUSTOM_LOGS,"/home/test/logs/%programname:R,ERE,1,FIELD:^module_execution_(.*)--end%.log"
if $programname startswith 'module_execution_' then :omprog:
if $programname startswith 'module_execution_' then ?CUSTOM_LOGS
& ~
The script invoked by omprog just runs logrotate and, for test purposes, appends a line to the logrot file:
#!/bin/bash
echo "1" >> /home/test/logrot
/usr/sbin/logrotate /etc/logrotate.conf -v
How to reproduce:
execute:
for i in {1..50000}; do logger -t "module_execution_test" "test message"; done;
check the rotated files: there will be a lot of files (test.log.1, test.log.2, test.log.3, etc.) with sizes near 10 kB, and one test.log much bigger than expected
check:
wc -l /home/test/logrot
The count grows for some time but then stops, even though messages are still arriving (it hangs at exactly the moment rotation stops happening), which means rsyslog no longer calls the external script.
So in my opinion it looks like a bug in rsyslog or the omprog plugin. Any idea what is going on?
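One thing worth checking (an assumption on my part, not confirmed by the post): omprog starts the binary once and then feeds each matching message to the program's stdin, so a script that exits without ever reading stdin can leave that pipe blocked, at which point rsyslog stops delivering. A minimal stdin-reading sketch of the same script:
#!/bin/bash
# omprog writes one line per message to stdin; keep reading in a loop
while read -r line; do
    echo "1" >> /home/test/logrot
    /usr/sbin/logrotate /etc/logrotate.conf -v
done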

Filter int, float and character from a text file in a Shell Script

Suppose I have a text file, which contains data like this.
The output below was generated by du -sh /home/*:
1.5G user1
2.5G user2
And so on...
Now I want those sizes stored in an array and compared against 5 GB, so I can tell if a user is consuming more than that. What can I do?
The du command shows the usage of each folder in the home directory. There is a long list of users, so it would be tedious to check each user's disk usage by hand. I want a shell script that identifies the usage of each directory in /home; I will then add a mail function to notify myself when the limit is exceeded.
Note: I don't want to implement quotas, as I just want to monitor the usage.
Use du's -t (--threshold) option to specify that you only want to know about directories with more than a certain amount of data in them:
$ du -sh -t 5G /home/*
If you're picky about precisely how big a gigabyte is, note that 5G uses multiples of 1024; you may prefer -t 5GB for multiples of 1000, or even -t 5000M to mix them.
For lots of users, you're probably better off writing that using -d 1 instead of -s to avoid the shell having to expand the * into a very long list:
$ du -h -d 1 -t 5G /home/
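If you do want to script the per-user check from the question, a minimal sketch (assuming GNU du; the 5 GB limit is the one from the question):
#!/bin/bash
# du -BG prints integer sizes in GiB, e.g. "2G  /home/user1/"
while read -r size dir; do
    if (( ${size%G} > 5 )); then
        echo "$dir is using $size (over the 5G limit)"   # swap in your mail command here
    fi
done < <(du -s -BG /home/*/)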

Issue in creating Vectors from text in Mahout

I'm using Mahout 0.9 (installed on HDP 2.2) for topic discovery (the Latent Dirichlet Allocation algorithm). My text file is stored in the directory
inputraw, and I executed the following commands in order:
command #1:
mahout seqdirectory -i inputraw -o output-directory -c UTF-8
command #2:
mahout seq2sparse -i output-directory -o output-vector-str -wt tf -ng 3 --maxDFPercent 40 -ow -nv
command #3:
mahout rowid -i output-vector-str/tf-vectors/ -o output-vector-int
command #4:
mahout cvb -i output-vector-int/matrix -o output-topics -k 1 -mt output-tmp -x 10 -dict output-vector-str/dictionary.file-0
After executing the second command, as expected, it creates a bunch of subfolders and files under
output-vector-str (named df-count, dictionary.file-0, frequency.file-0, tf-vectors, tokenized-documents and wordcount). The sizes of these files all look OK considering the size of my input file; however, the file under tf-vectors is very small (in fact, only 118 bytes).
Since tf-vectors is the input to the third command, the third command also generates a small file. Does anyone know:
what is the reason for the file under the tf-vectors folder being that small? There must be something wrong.
Starting from the first command, all the generated files have a strange encoding and are not human-readable. Is this expected?
Your answers are as follows:
what is the reason for the file under the tf-vectors folder being that small?
The vectors are small because you set the maximum DF percentage to only 40%, meaning that only terms with a document frequency (the percentage of documents in which a term occurs across the corpus) of 40% or less are taken into consideration when generating the vectors.
Starting from the first command, all the generated files have a strange encoding and are not human-readable. Is this expected?
Mahout has a command called mahout seqdumper that will come to your rescue for dumping files from the binary "sequence" format to a human-readable format.
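For example (a sketch; the exact part-file name under tf-vectors is an assumption, so adjust it to whatever the second command produced):
mahout seqdumper -i output-vector-str/tf-vectors/part-r-00000 -o tf-vectors-dump.txt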
Good Luck!!

Logrotate does not upload to S3

I've spent some hours trying to figure out why logrotate won't successfully upload my logs to S3, so I'm posting my setup here. Here's the thing: logrotate uploads the log file correctly to S3 when I force it like this:
sudo logrotate -f /etc/logrotate.d/haproxy
Starting S3 Log Upload...
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
/var/log/haproxy-2014-12-23-044414.gz -> s3://my-haproxy-access-logs/haproxy-2014-12-23-044414.gz [1 of 1]
315840 of 315840 100% in 0s 2.23 MB/s done
But it does not succeed as part of the normal logrotate process. The logs are still compressed by my postrotate script, so I know that it is being run. Here is my setup:
/etc/logrotate.d/haproxy =>
/var/log/haproxy.log {
size 1k
rotate 1
missingok
copytruncate
sharedscripts
su root root
create 777 syslog adm
postrotate
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
endscript
}
/usr/local/admintools/upload.sh =>
echo "Starting S3 Log Upload..."
BUCKET_NAME="my-haproxy-access-logs"
# Perform Rotated Log File Compression
filename=/var/log/haproxy-$(date +%F-%H%M%S).gz
tar -czPf "$filename" /var/log/haproxy.log.1
# Upload log file to Amazon S3 bucket
/usr/bin/s3cmd put "$filename" s3://"$BUCKET_NAME"
And here is the output of a dry run of logrotate:
sudo logrotate -fd /etc/logrotate.d/haproxy
reading config file /etc/logrotate.d/haproxy
Handling 1 logs
rotating pattern: /var/log/haproxy.log forced from command line (1 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/haproxy.log
log needs rotating
rotating log /var/log/haproxy.log, log->rotateCount is 1
dateext suffix '-20141223'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/haproxy.log.1 to /var/log/haproxy.log.2 (rotatecount 1, logstart 1, i 1),
renaming /var/log/haproxy.log.0 to /var/log/haproxy.log.1 (rotatecount 1, logstart 1, i 0),
copying /var/log/haproxy.log to /var/log/haproxy.log.1
truncating /var/log/haproxy.log
running postrotate script
running script with arg /var/log/haproxy.log : "
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
"
removing old log /var/log/haproxy.log.2
Any insight appreciated.
It turned out that my s3cmd was configured for my user, not for root.
ERROR: /root/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
The solution was to copy my config file over. – worker1138
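For example (a sketch of that fix, run as the user whose s3cmd already works):
sudo cp ~/.s3cfg /root/.s3cfg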
