rsyslog: not all the logs exist on the central log server

I want all the logs from /var/log on clientserver to be copied to the logserver.
Right now it only copies the kernel and sshd logs.
Here are the changes I made to /etc/rsyslog.conf:
On the logserver I have:
$template SSS, "/var/log/remotehosts/SSS/logs/%$YEAR%-%$MONTH%/%HOSTNAME%-%$DAY%.log"
if $fromhost == 'clientserver' then ?SSS
& ~
On the clientserver I have:
*.* @@logserver.domain:514
What am I missing? Why didn't it copy all the log files under /var/log?
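Note that a *.* selector only forwards messages that actually pass through the syslog daemon (kernel and sshd messages, for example); log files that applications write directly under /var/log never reach rsyslog, so they are never forwarded by this rule. A minimal sketch of picking up one such file with imfile on the clientserver, where the path /var/log/myapp.log and the tag are hypothetical placeholders:
$ModLoad imfile #Load the imfile input module
$InputFileName /var/log/myapp.log
$InputFileTag myapp:
$InputFileStateFile stat-myapp
$InputRunFileMonitor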

Related

Filebeat harvest rotated files

Problem description:
I have relatively big /var/log/messages file which is rotated.
The file list looks like this:
ls -l /var/log/messages*
-rw-------. 1 root 928873050 Mar 5 10:37 /var/log/messages
-rw-------. 1 root 889843643 Mar 5 07:49 /var/log/messages.1
-rw-------. 1 root 890148183 Mar 5 07:50 /var/log/messages.2
-rw-------. 1 root 587333632 Mar 5 07:51 /var/log/messages.3
My filebeat configuration snippet:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
    - /var/lib/ntp/drift
    - /var/log/syslog
    - /var/log/secure
  tail_files: true
With multiple /var/log/messages* files present as shown above, each time Filebeat is restarted it starts to harvest and ingest the old, rotated log files.
When I have just one /var/log/messages file, this issue is not observed.
On Linux systems, Filebeat keeps track of files not by filename but by inode number, which doesn't change when a file is renamed. From the Filebeat documentation:
The harvester is responsible for opening and closing the file, which
means that the file descriptor remains open while the harvester is
running. If a file is removed or renamed while it’s being harvested,
Filebeat continues to read the file. This has the side effect that the
space on your disk is reserved until the harvester closes. By default,
Filebeat keeps the file open until close_inactive is reached.
Which means this is what happens in your case:
1) Filebeat reads the current messages file (inode #1) and keeps track of its inode number in the registry.
2) Filebeat stops, but the messages file is rotated to messages.1 (still inode #1) and a new messages file (inode #2) is created.
3) When Filebeat restarts, it starts reading both:
   - messages.1 (inode #1), from where it left off
   - messages (inode #2), since it matches the path you configured (/var/log/messages)
If your plan is to harvest all the messages files, even the rotated ones, then it would be better to configure the path as
/var/log/messages*
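A minimal sketch of the adjusted prospector, keeping the other paths from the original snippet:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages*
    - /var/lib/ntp/drift
    - /var/log/syslog
    - /var/log/secure
  tail_files: true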
It seems the syslog and security plugins were enabled in the configuration; that is what triggered the loading of the rotated syslog files.

Logrotate using rsyslog's omprog hangs over time

I've tried almost everything but still can't find a solution for this issue, so I wanted to ask for a little help:
I have logrotate (v. 3.7.8) configured based on the size of the log files:
/home/test/logs/*.log {
    missingok
    notifempty
    nodateext
    size 10k
    rotate 100
    copytruncate
}
Rotation of the logs is based only on size and is invoked whenever a message arrives at the rsyslog daemon (v. 5.8.10). Configuration of rsyslog:
$ModLoad omprog
$ActionOMProgBinary /usr/local/bin/log_rotate.sh
$MaxMessageSize 64k
$ModLoad imuxsock
$ModLoad imklog
$ModLoad imtcp
$InputTCPServerRun 514
$template FORMATTER, "%HOSTNAME% | %msg:R,ERE,4,FIELD:(.*)\s(.*)(:::)(.*)--end%\n"
$ActionFileDefaultTemplate FORMATTER
$Escape8BitCharactersOnReceive off
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 0
$FileOwner test
$FileGroup test
$DirOwner test
$DirGroup test
# Log each module execution to separate log files and don't use the prepending module_execution_ in the log name.
$template CUSTOM_LOGS,"/home/test/logs/%programname:R,ERE,1,FIELD:^module_execution_(.*)--end%.log"
if $programname startswith 'module_execution_' then :omprog:
if $programname startswith 'module_execution_' then ?CUSTOM_LOGS
& ~
The script invoked by omprog just runs logrotate and, for test purposes, appends a new line to the logrot file:
#!/bin/bash
echo "1" >> /home/test/logrot
/usr/sbin/logrotate /etc/logrotate.conf -v
How to reproduce:
execute:
for i in {1..50000}; do logger -t "module_execution_test" "test message"; done;
check the rotated files - there will be a lot of files (test.log.1, test.log.2, test.log.3, etc.) with sizes close to 10 kB, and one test.log that is much bigger than expected
check:
wc -l /home/test/logrot
It will grow for some time but then stops, even though messages keep arriving (it hangs at exactly the point where rotation stops happening), which means that rsyslog no longer calls the external script.
So IMO it looks like a bug in rsyslog or the omprog plugin. Any idea what is going on?

Flume-ng: source path and type for copying log file from local to HDFS

I am trying to copy some log files from local disk to HDFS using flume-ng. The source is /home/cloudera/flume/weblogs/ and the sink is hdfs://localhost:8020/flume/dump/. A cron job copies the logs from the Tomcat server to /home/cloudera/flume/weblogs/, and I want the log files to be copied to HDFS by flume-ng as soon as they appear in /home/cloudera/flume/weblogs/. Below is the conf file I created:
agent1.sources= local
agent1.channels= MemChannel
agent1.sinks=HDFS
agent1.sources.local.type = ???
agent1.sources.local.channels=MemChannel
agent1.sinks.HDFS.channel=MemChannel
agent1.sinks.HDFS.type=hdfs
agent1.sinks.HDFS.hdfs.path=hdfs://localhost:8020/flume/dump/
agent1.sinks.HDFS.hdfs.fileType=DataStream
agent1.sinks.HDFS.hdfs.writeformat=Text
agent1.sinks.HDFS.hdfs.batchSize=1000
agent1.sinks.HDFS.hdfs.rollSize=0
agent1.sinks.HDFS.hdfs.rollCount=10000
agent1.sinks.HDFS.hdfs.rollInterval=600
agent1.channels.MemChannel.type=memory
agent1.channels.MemChannel.capacity=10000
agent1.channels.MemChannel.transactionCapacity=100
I am not able to understand:
1) what will be the value of agent1.sources.local.type = ???
2) where to mention the source path /home/cloudera/flume/weblogs/ in the above conf file ?
3) Is there anything I am missing in the above conf file?
Please let me know on these.
You can use either:
- an Exec Source, with a command (e.g. cat or tail on GNU/Linux) run against your files, or
- a Spooling Directory Source, to read all the files in a directory.
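For example, a minimal sketch of the Spooling Directory variant for this agent (only the source lines change; the spoolDir is the directory from the question):
agent1.sources.local.type = spooldir
agent1.sources.local.spoolDir = /home/cloudera/flume/weblogs
agent1.sources.local.channels = MemChannel
Keep in mind that a Spooling Directory Source expects files to be complete and immutable once they are dropped into the directory, which fits the cron-copy workflow described above.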

how to forward specific log file to a remote rsyslog server

I have a Cassandra node (192.168.122.3) and an rsyslog server (192.168.122.2). On the Cassandra node, Cassandra writes its logs to /var/log/cassandra/cassandra.log. I want to ship this cassandra.log file to the remote (rsyslog) server, into its /var/log/ directory. How can I do that?
$ModLoad imfile #Load the imfile input module
$InputFilePollInterval 10
$InputFileName /var/log/cassandra/cassandra.log
$InputFileTag cassandra-access:
$InputFileStateFile stat-cassandra-access
$InputFileSeverity Info
$InputRunFileMonitor
$template cas_log, " %msg% "
if $programname == 'cassandra-access' then @@remote_server_address:port;cas_log
if $programname == 'cassandra-access' then stop
Follow these steps:
1) Go to /etc/rsyslog.d
2) Create an empty file named cas-log.conf
3) Copy the code above and paste it into this (cas-log.conf) file. Note: in the second-to-last line, replace remote_server_address and port with the destination rsyslog server's IP/name and port.
4) Restart rsyslog.
5) On the server side you can see the logs in the /var/log/syslog file.
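Step 5 assumes the receiving rsyslog server is already listening on the port used in the @@ forwarding line; a minimal sketch of that listener on the server side, using TCP port 514 as in the other examples on this page:
$ModLoad imtcp
$InputTCPServerRun 514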

Logrotate does not upload to S3

I've spent some hours trying to figure out why logrotate won't successfully upload my logs to S3, so I'm posting my setup here. Here's the thing: logrotate uploads the log file correctly to S3 when I force it like this:
sudo logrotate -f /etc/logrotate.d/haproxy
Starting S3 Log Upload...
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
/var/log/haproxy-2014-12-23-044414.gz -> s3://my-haproxy-access-logs/haproxy-2014-12-23-044414.gz [1 of 1]
315840 of 315840 100% in 0s 2.23 MB/s done
But it does not succeed as part of the normal logrotate process. The logs are still compressed by my postrotate script, so I know that it is being run. Here is my setup:
/etc/logrotate.d/haproxy =>
/var/log/haproxy.log {
    size 1k
    rotate 1
    missingok
    copytruncate
    sharedscripts
    su root root
    create 777 syslog adm
    postrotate
        /usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
    endscript
}
/usr/local/admintools/upload.sh =>
echo "Starting S3 Log Upload..."
BUCKET_NAME="my-haproxy-access-logs"
# Perform Rotated Log File Compression
filename=/var/log/haproxy-$(date +%F-%H%M%S).gz
tar -czPf "$filename" /var/log/haproxy.log.1
# Upload log file to Amazon S3 bucket
/usr/bin/s3cmd put "$filename" s3://"$BUCKET_NAME"
And here is the output of a dry run of logrotate:
sudo logrotate -fd /etc/logrotate.d/haproxy
reading config file /etc/logrotate.d/haproxy
Handling 1 logs
rotating pattern: /var/log/haproxy.log forced from command line (1 rotations)
empty log files are rotated, old logs are removed
considering log /var/log/haproxy.log
log needs rotating
rotating log /var/log/haproxy.log, log->rotateCount is 1
dateext suffix '-20141223'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
renaming /var/log/haproxy.log.1 to /var/log/haproxy.log.2 (rotatecount 1, logstart 1, i 1),
renaming /var/log/haproxy.log.0 to /var/log/haproxy.log.1 (rotatecount 1, logstart 1, i 0),
copying /var/log/haproxy.log to /var/log/haproxy.log.1
truncating /var/log/haproxy.log
running postrotate script
running script with arg /var/log/haproxy.log : "
/usr/local/admintools/upload.sh 2>&1 /var/log/upload_errors
"
removing old log /var/log/haproxy.log.2
Any insight appreciated.
It turned out that my s3cmd was configured for my user, not for root.
ERROR: /root/.s3cfg: No such file or directory
ERROR: Configuration file not available.
ERROR: Consider using --configure parameter to create one.
Solution was to copy my config file over. – worker1138
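Since logrotate runs the postrotate script as root, s3cmd looks for its configuration in /root/.s3cfg. Two ways to apply the fix (the /home/myuser path below is a hypothetical example): copy the per-user config into root's home, or point s3cmd at the config file explicitly with -c inside upload.sh:
cp /home/myuser/.s3cfg /root/.s3cfg
# or, in upload.sh:
/usr/bin/s3cmd -c /home/myuser/.s3cfg put "$filename" s3://"$BUCKET_NAME"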
