Logrotate does not upload to S3 - bash

I've spent some hours trying to figure out why logrotate won't successfully upload my logs to S3, so I'm posting my setup here. Here's the thing: logrotate uploads the log file correctly to S3 when I force it like this:
sudo logrotate -f /etc/logrotate.d/haproxy
Starting S3 Log Upload...
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
/var/log/haproxy-2014-12-23-044414.gz -> s3://my-haproxy-access-logs/haproxy-2014-12-23-044414.gz [1 of 1]
315840 of 315840 100% in 0s 2.23 MB/s done
But it does not succeed as part of the normal logrotate process. The logs are still compressed by my postrotate script, so I know that it is being run. Here is my setup:
/etc/logrotate.d/haproxy =>
/var/log/haproxy.log {
size 1k
rotate 1
missingok
copytruncate
sharedscripts
su root root
create 777 syslog adm
postrotate
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
endscript
}
/usr/local/admintools/upload.sh =>
#!/bin/bash
echo "Starting S3 Log Upload..."
BUCKET_NAME="my-haproxy-access-logs"
# Perform Rotated Log File Compression
filename=/var/log/haproxy-$(date +%F-%H%M%S).gz
tar -czPf "$filename" /var/log/haproxy.log.1
# Upload log file to Amazon S3 bucket
/usr/bin/s3cmd put "$filename" s3://"$BUCKET_NAME"
And here is the output of a dry run of logrotate:
sudo logrotate -fd /etc/logrotate.d/haproxy
reading config file /etc/logrotate.d/haproxy
Handling 1 logs
rotating pattern: /var/log/haproxy.log forced from command line (1 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/haproxy.log
log needs rotating
rotating log /var/log/haproxy.log, log->rotateCount is 1
dateext suffix '-20141223'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/haproxy.log.1 to /var/log/haproxy.log.2 (rotatecount 1, logstart 1, i 1),
renaming /var/log/haproxy.log.0 to /var/log/haproxy.log.1 (rotatecount 1, logstart 1, i 0),
copying /var/log/haproxy.log to /var/log/haproxy.log.1
truncating /var/log/haproxy.log
running postrotate script
running script with arg /var/log/haproxy.log : "
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
"
removing old log /var/log/haproxy.log.2
Any insight appreciated.

It turned out that my s3cmd was configured for my user, not for root.
ERROR: /root/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
Solution was to copy my config file over. – worker1138
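In other words, logrotate runs postrotate scripts as root, and s3cmd resolves its config as $HOME/.s3cfg, so a config created under a normal user is invisible to root. A minimal sketch of the fix (the helper name and paths are illustrative):

```shell
# Make the per-user s3cmd config visible to root, which runs logrotate's
# postrotate scripts. On the real box this is simply:
#   sudo cp ~/.s3cfg /root/.s3cfg
# Alternatively, point s3cmd at the config explicitly inside upload.sh:
#   /usr/bin/s3cmd -c /home/youruser/.s3cfg put "$filename" s3://"$BUCKET_NAME"
install_root_s3cfg() {
    user_cfg=$1   # e.g. /home/youruser/.s3cfg
    root_cfg=$2   # normally /root/.s3cfg
    cp "$user_cfg" "$root_cfg"
}
```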

Related

fatal error: An error occurred (404) when calling the HeadObject operation: Key " " does not exist

This is my setup:
I use AWS Batch running a custom Docker image.
The startup.sh file is an entrypoint script that reads the nth line of a text file and copies the objects listed there from S3 into the container.
For example, if the first line of the .txt file is 'Startup_000017 Startup_000018 Startup_000019', the bash script reads this line and uses a for loop to copy each one over.
This is part of my bash script:
STARTUP_FILE_S3_URL=s3://cmtestbucke/Config/
Startup_FileNames=$(sed -n ${LINE}p file.txt)
for i in ${Startup_FileNames}
do
Startup_FileURL=${STARTUP_FILE_S3_URL}$i
echo $Startup_FileURL
aws s3 cp ${Startup_FileURL} /home/CM_Projects/ &
done
Here is the log output from aws:
s3://cmtestbucke/Config/Startup_000017
s3://cmtestbucke/Config/Startup_000018
s3://cmtestbucke/Config/Startup_000019
Completed 727 Bytes/727 Bytes (7.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000018 to Data/Config/Startup_000018
Completed 731 Bytes/731 Bytes (10.1 KiB/s) with 1 file(s) remaining download: s3://cmtestbucke/Config/Startup_000017 to Data/Config/Startup_000017
fatal error: An error occurred (404) when calling the HeadObject operation: Key "Config/Startup_000019 " does not exist.
My s3 bucket certainly contains the object s3://cmtestbucke/Config/Startup_000019
I noticed this happens regardless of filenames. The last iteration always gives this error.
I tested this bash logic locally with the same aws commands. It copies all 3 files.
Can someone please help me figure out what is wrong here?
The problem was with the EOL convention of the text file: it was set to Windows (CR LF), while the Docker image runs Ubuntu, so the trailing carriage return became part of the last S3 key on each line. Changing the EOL to Unix (LF) solved the problem.
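A small, testable sketch of the same fix done in the script itself (hypothetical helper; alternatively run dos2unix on file.txt once). With CRLF endings, the carriage return rides along on the last field of each line and ends up inside the S3 key, which is exactly the trailing character visible in the quoted 404 error:

```shell
# Read the nth line of the startup list and strip any Windows carriage
# return, so no stray \r is appended to the last S3 key on the line.
line_fields() {
    line_no=$1
    file=$2
    sed -n "${line_no}p" "$file" | tr -d '\r'
}
```

In the entrypoint this would replace the original read: `Startup_FileNames=$(line_fields "$LINE" file.txt)`.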

Top command redirection

I am trying to analyse a randomly crashing application. To get some details about the process, I am using a script like the one below:
procID=$(ps -aef|grep app|awk '{print $2}')
top -p $procID -b -d 10 > test.log
In this case, the top command produces a snapshot every 10 seconds and appends it to test.log. I plan to run it indefinitely, but I want to flush the contents of test.log each time top writes some value to it. How can I modify the script accordingly?
To avoid your log getting too big, I suggest you use logrotate instead. In short, logrotate sets up a cron job that "rotates" your log file upon a condition (one file per day, rotate when the file reaches a given size, etc.). You can also choose to keep the last X rotated logs.
This is what is used for /var/log/messages and the other Linux log files (hence the .log.1, .log.2, etc.).
Here is a basic way to make it work:
Create a configuration for your file in /etc/logrotate.d. For example:
# /etc/logrotate.d/test
/var/log/test.log {
size 4M
create 770 somegroup someuser
rotate 4
missingok
copytruncate
}
What it says:
size 4M: rotate file after it reaches 4M in size
create ...: create a new file with the given permissions (replace somegroup someuser with your group and user names)
rotate 4: keep the last 4 log files (test.log, test.log.1 etc)
missingok and copytruncate: avoid some errors (see the docs for more info)
Check that your file is correct by executing:
sudo logrotate /etc/logrotate.d/test
enjoy.
More about logrotate: https://support.rackspace.com/how-to/understanding-logrotate-utility/
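One detail worth calling out: copytruncate is what makes this safe for a writer like top -b that keeps its output file descriptor open; a plain rename would leave top appending to the rotated file. A minimal sketch of the copy-then-truncate semantics (assuming the writer opened the log in append mode, as a >> redirection does):

```shell
# Simulate a long-lived writer and a copytruncate-style rotation: the
# writer's fd stays attached to the original path, so new output lands in
# the (now empty) live log rather than in the rotated copy.
log=$(mktemp)
exec 3>>"$log"        # stand-in for top's long-lived append-mode fd
echo "line1" >&3
cp "$log" "$log.1"    # what copytruncate does first...
: > "$log"            # ...then it truncates the original in place
echo "line2" >&3      # the writer carries on into the live file
exec 3>&-
```

For this reason `top ... >> test.log` (append mode) pairs better with copytruncate than a plain `>` redirection, where the writer would keep its old file offset after truncation.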

Logrotate using rsyslog's omprog hangs over time

I have tried almost everything but still can't find a solution for this issue, so I wanted to ask for a little help:
I have got logrotate (v. 3.7.8) configured based on size of log files:
/home/test/logs/*.log {
missingok
notifempty
nodateext
size 10k
rotate 100
copytruncate
}
Rotation is based only on size and is invoked whenever a message arrives at the rsyslog daemon (v. 5.8.10). Configuration of rsyslog:
$ModLoad omprog
$ActionOMProgBinary /usr/local/bin/log_rotate.sh
$MaxMessageSize 64k
$ModLoad imuxsock
$ModLoad imklog
$ModLoad imtcp
$InputTCPServerRun 514
$template FORMATTER, "%HOSTNAME% | %msg:R,ERE,4,FIELD:(.*)\s(.*)(:::)(.*)--end%\n"
$ActionFileDefaultTemplate FORMATTER
$Escape8BitCharactersOnReceive off
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 0
$FileOwner test
$FileGroup test
$DirOwner test
$DirGroup test
# Log each module execution to separate log files and don't use the prepending module_execution_ in the log name.
$template CUSTOM_LOGS,"/home/test/logs/%programname:R,ERE,1,FIELD:^module_execution_(.*)--end%.log"
if $programname startswith 'module_execution_' then :omprog:
if $programname startswith 'module_execution_' then ?CUSTOM_LOGS
& ~
The script invoked by omprog just runs logrotate and, for test purposes, appends a line to the logrot file:
#!/bin/bash
echo "1" >> /home/test/logrot
/usr/sbin/logrotate /etc/logrotate.conf -v
How to reproduce:
execute:
for i in {1..50000}; do logger -t "module_execution_test" "test message"; done;
check the rotated files - there will be a lot of files (test.log.1, .2, .3, etc.) with sizes near 10 kB, plus one test.log much bigger than expected
check:
wc -l /home/test/logrot
It grows for some time but then stops, even though messages still arrive (it hangs exactly when rotation stops happening) - meaning rsyslog no longer calls the external script.
So IMO it looks like a bug in rsyslog or the omprog plugin. Any idea what is going on?
br
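One thing worth checking, though it is an assumption rather than a confirmed diagnosis: omprog writes every matched message to the program's stdin, and a handler that never reads stdin can stall the action once the pipe buffer fills, which would match the "works for a while, then stops" symptom. A sketch of a stdin-draining handler (the function name is hypothetical):

```shell
# Drain stdin and rotate once per received message; if the handler never
# reads stdin, the omprog pipe can fill and block further deliveries.
rotate_per_message() {
    marker=$1          # e.g. /home/test/logrot
    shift              # remaining args: the rotate command itself
    while IFS= read -r _msg; do
        echo "1" >> "$marker"
        "$@"
    done
}
```

On the real system the script body would be `rotate_per_message /home/test/logrot /usr/sbin/logrotate /etc/logrotate.conf -v`.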

Logrotate not working in Ubuntu 14.04

I have installed logrotate on my system.
My log file names look like: log-2015-09-09.php
Here is my configuration in the /etc/logrotate.conf file:
/home/root/php/www/myProject/CI/application/logs/log-%Y-%m-%d.php{
daily
size 1K
copytruncate
compress
rotate 1
notifempty
missingok
}
When I check the status using:
cat /var/lib/logrotate/status
it does not show anything about my logs, and logrotate did not delete or compress my log files.
Is there something wrong in my configuration that I need to change?
I would imagine that the directory/file name is the cause here. I'm not sure what you are trying to do with the % in there, but you can use wildcards instead, like:
/home/root/php/www/myProject/CI/application/logs/log-*.php {
daily
size 1K
copytruncate
compress
rotate 1
notifempty
missingok
}
You can also test logrotate using:
logrotate -d -f /etc/logrotate.conf
-d = turns on debug mode; no changes will be made to the log files
-f = tells logrotate to force the rotation, even if it doesn't think it is necessary

s3cmd not "Getting" the distcp jar file

Hi guys: I'm trying to get the s3distcp jar file via S3, in an EMR cluster:
s3cmd get s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.1/s3distcp.jar
However, the "get" command is not working:
ERROR: Skipping libs/s3distcp/: No such file or directory
This file exists in other S3 regions as well, so I have even tried:
s3cmd get s3://us-east-1.elasticmapreduce/libs/s3distcp/1.0.1/s3distcp.jar
But the command still fails. And yet this .jar file clearly exists: when we run "s3cmd ls", we can see it listed. See below for the details (example with the eu-west region):
hadoop#ip-10-58-254-82:/mnt$ s3cmd ls s3://eu-west-1.elasticmapreduce/libs/s3distcp/
Bucket 'eu-west-1.elasticmapreduce':
2012-06-01 00:32 3614287 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.1/s3distcp.jar
2012-06-05 17:14 3615026 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.2/s3distcp.jar
2012-06-12 20:52 1893078 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.3/s3distcp.jar
2012-06-20 01:17 1893140 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.4/s3distcp.jar
2012-06-27 21:27 1893846 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.5/s3distcp.jar
2012-03-15 21:21 3613175 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0/s3distcp.jar
2012-06-27 21:27 1893846 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.latest/s3distcp.jar
The above seems to confirm that the file does in fact exist.
How can I enable the "get" command to work for this file?
The jar is downloading just fine for me; can you paste the error message you are getting after the get command?
:s3cmd ls s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.1/s3distcp.jar
2012-06-01 00:32 3614287 s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.1/s3distcp.jar
:s3cmd get s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.1/s3distcp.jar
s3://eu-west-1.elasticmapreduce/libs/s3distcp/1.0.1/s3distcp.jar -> ./s3distcp.jar [1 of 1]
3614287 of 3614287 100% in 3s 1008.86 kB/s done
