How to forward a specific log file to a remote rsyslog server - rsyslog

I have a Cassandra node (192.168.122.3) and an rsyslog server (192.168.122.2). On the Cassandra node, Cassandra dumps its log files into /var/log/cassandra/cassandra.log. I want to pull this cassandra.log file to the remote server (the rsyslog server) into its /var/log/ directory. How can I do this?

$ModLoad imfile #Load the imfile input module
$InputFilePollInterval 10
$InputFileName /var/log/cassandra/cassandra.log
$InputFileTag cassandra-access:
$InputFileStateFile stat-cassandra-access
$InputFileSeverity Info
$InputRunFileMonitor
$template cas_log, " %msg% "
if $programname == 'cassandra-access' then @@remote_server_address:port;cas_log
if $programname == 'cassandra-access' then stop
Follow these steps:
1) Go to /etc/rsyslog.d.
2) Create an empty file named cas-log.conf.
3) Copy the code above and paste it into this cas-log.conf file. Note: in the second-to-last line, replace remote_server_address and port with the destination rsyslog server's IP/name and port.
4) Restart rsyslog.
5) On the server side you can see the logs in the /var/log/syslog file.
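Note that the receiving side must actually be listening for the forwarded messages. A minimal sketch of the listener lines on the rsyslog server (assuming plain TCP on port 514; adjust if you forward over UDP or a different port):

```
# /etc/rsyslog.conf on the rsyslog server: accept TCP syslog on port 514
$ModLoad imtcp
$InputTCPServerRun 514
```

Restart rsyslog on the server after adding these lines as well.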

Related

rsyslogd does not write data to logfile when configured with TLS

I'm trying to set up rsyslog with TLS to forward specific records from /var/log/auth.log on host A to a remote server B.
The configuration file I wrote for rsyslog is the following:
$DefaultNetstreamDriverCAFile /etc/licensing/certificates/ca.pem
$DefaultNetstreamDriverCertFile /etc/licensing/certificates/client-cert.pem
$DefaultNetstreamDriverKeyFile /etc/licensing/certificates/client-key.pem
$InputFilePollInterval 10
#Read from the auth.log file and assign the tag "ssl-auth" for its messages
input(type="imfile"
      File="/var/log/auth.log"
      reopenOnTruncate="on"
      deleteStateOnFileDelete="on"
      Tag="ssl-auth")
$template auth_log, " %msg% "
# Send ssl traffic to server on port 514
if ($syslogtag == 'ssl-auth') then {
    action(type="omfwd"
           protocol="tcp"
           target="<ip#server>"
           port="514"
           template="auth_log"
           StreamDriver="gtls"
           StreamDriverMode="1"
           StreamDriverAuthMode="x509/name")
}
Using this configuration, when I ssh-login into host A for the first time from another host X, everything works fine; the file /var/log/auth.log is written and tcpdump shows traffic towards server B.
But from then on, it does not work anymore.
Even if I log out of host A and log back in, the file /var/log/auth.log is never written and no traffic appears in tcpdump.
The strangest thing is that if I remove TLS from the configuration, it works.
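For reference, client-side TLS forwarding in rsyslog also expects the network stream driver to be selected globally, not only inside the action; whether that is the cause here is only a guess, but the legacy line would sit alongside the certificate directives:

```
# Select GnuTLS as the default network stream driver
# (goes with the $DefaultNetstreamDriver*File certificate lines)
$DefaultNetstreamDriver gtls
```

Running rsyslogd -N1 after a change validates the configuration without restarting the daemon.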

rsyslog not all the logs exist on central log server

I wanted all the logs from /var/log on clientserver to be copied to the logserver.
Right now it only copies the kernel and sshd logs.
Changes I made to /etc/rsyslog.conf:
On the logserver I have:
$template SSS, "/var/log/remotehosts/SSS/logs/%$YEAR%-%$MONTH%/%HOSTNAME%-%$DAY%.log"
if $fromhost == 'clientserver' then ?SSS
& ~
On the clientserver I have:
*.* @@logserver.domain:514
What am I missing? Why didn't it copy all the log files under /var/log?
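Worth noting as a hedged observation: the *.* forwarding rule sends every message that passes through the syslog daemon, not the files under /var/log themselves. A file that an application writes directly, bypassing syslog, would need its own imfile input on the clientserver, along these lines (the file name and tag are placeholders, not from the question):

```
# Hypothetical imfile input for a file written directly by an application
module(load="imfile")
input(type="imfile" File="/var/log/myapp.log" Tag="myapp:")
```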

Rsyslog imfile error: no file name given

I am using rsyslog version 8.16.0 on Ubuntu 16.04.
Following is my configuration file:
module(load="imfile") #needs to be done just once
# File 1
input(type="imfile"
      mode="inotify"
      File="/var/log/application/hello.log"
      Tag="application-access-logs"
      Severity="info"
      PersistStateInterval="20000"
)
$PrivDropToGroup adm
$WorkDirectory /var/spool/rsyslog
$InputRunFileMonitor
#Template for application access events
$template ApplicationLogs,"%HOSTNAME% %msg%\n"
if $programname == 'application-access-logs' then @@xx.xx.xx.xx:12345;ApplicationLogs
if $programname == 'application-access-logs' then ~
I am getting the following error :
rsyslogd: version 8.16.0, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: error during parsing file /etc/rsyslog.d/21-application-test.conf, on or before line 10: parameter 'mode' not known -- typo in config file? [v8.16.0 try http://www.rsyslog.com/e/2207 ]
rsyslogd: imfile error: no file name given, file monitor can not be created [v8.16.0 try http://www.rsyslog.com/e/2046 ]
What am I doing wrong here?
I am using inotify mode because I want to use wildcards in the file name.
I know this is not a complete answer, but an observation that can help others struggling with the same issue.
module(load="imfile" mode="inotify") is a global setting, so if any other .conf file also sets module parameters, rsyslog seems to throw this error for any later files.
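Putting that observation together with the parser error above, a corrected sketch would move mode out of input() and into the single module() load (file path, tag, and parameters as in the question):

```
module(load="imfile" mode="inotify")  # load once; inotify mode is set here, not per input
input(type="imfile"
      File="/var/log/application/hello.log"
      Tag="application-access-logs"
      Severity="info"
      PersistStateInterval="20000"
)
```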

Flume-ng: source path and type for copying log file from local to HDFS

I am trying to copy some log files from local disk to HDFS using flume-ng. The source is /home/cloudera/flume/weblogs/ and the sink is hdfs://localhost:8020/flume/dump/. A cron job copies the logs from the tomcat server to /home/cloudera/flume/weblogs/, and I want flume-ng to copy the files to HDFS as they become available in that directory. Below is the conf file I created:
agent1.sources= local
agent1.channels= MemChannel
agent1.sinks=HDFS
agent1.sources.local.type = ???
agent1.sources.local.channels=MemChannel
agent1.sinks.HDFS.channel=MemChannel
agent1.sinks.HDFS.type=hdfs
agent1.sinks.HDFS.hdfs.path=hdfs://localhost:8020/flume/dump/
agent1.sinks.HDFS.hdfs.fileType=DataStream
agent1.sinks.HDFS.hdfs.writeFormat=Text
agent1.sinks.HDFS.hdfs.batchSize=1000
agent1.sinks.HDFS.hdfs.rollSize=0
agent1.sinks.HDFS.hdfs.rollCount=10000
agent1.sinks.HDFS.hdfs.rollInterval=600
agent1.channels.MemChannel.type=memory
agent1.channels.MemChannel.capacity=10000
agent1.channels.MemChannel.transactionCapacity=100
I am not able to understand:
1) What should the value of agent1.sources.local.type be?
2) Where do I mention the source path /home/cloudera/flume/weblogs/ in the above conf file?
3) Is there anything else I am missing in the above conf file?
Please let me know.
You can use either:
an Exec Source, using a command (e.g. cat or tail on GNU/Linux) on your files,
or a Spooling Directory Source, which reads all files in a directory.
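With the Spooling Directory Source option, the two lines missing from the conf above would look roughly like this (the spoolDir value is the directory from the question; everything else in the agent1 conf stays as posted):

```
# Spooling Directory Source: ingests each file dropped into the directory
agent1.sources.local.type = spooldir
agent1.sources.local.spoolDir = /home/cloudera/flume/weblogs
```

Note that the spooling directory source expects files to be immutable once placed in the directory, which matches the cron-job workflow described.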

How to load data from local machine to hdfs using flume

I am new to Flume, so please tell me how to store log files from my local machine into HDFS using Flume.
I have issues setting the classpath and the flume.conf file.
Thank you,
ajay
agent.sources = weblog
agent.channels = memoryChannel
agent.sinks = mycluster
## Sources #########################################################
agent.sources.weblog.type = exec
agent.sources.weblog.command = tail -F REPLACE-WITH-PATH2-your.log-FILE
agent.sources.weblog.batchSize = 1
agent.sources.weblog.channels = REPLACE-WITH-CHANNEL-NAME
## Channels ########################################################
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 100
## Sinks ###########################################################
agent.sinks.mycluster.type =REPLACE-WITH-CLUSTER-TYPE
agent.sinks.mycluster.hdfs.path=/user/root/flumedata
agent.sinks.mycluster.channel =REPLACE-WITH-CHANNEL-NAME
Save this file as logagent.conf and run it with the command below:
# flume-ng agent -n agent -f logagent.conf &
We do need more information to know why things aren't working for you.
The short answer is that you need a Source to read your data from (maybe the spooling directory source), a Channel (memory channel if you don't need reliable storage) and the HDFS sink.
Update
The OP reports receiving the error message, "you must include conf file in flume class path".
You need to provide the conf file as an argument. You do so with the --conf-file parameter. For example, the command line I use in development is:
bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name castellan-indexer --conf /etc/flume-ng/conf
The error message reads that way because the bin/flume-ng script adds the contents of the --conf-file argument to the classpath before running Flume.
If you are appending data to your local file, you can use an exec source with the tail -F command. If the file is static, use the cat command to transfer the data to Hadoop.
The overall architecture would be:
Source: an exec source reading data from your file
Channel: either a memory channel or a file channel
Sink: an HDFS sink, where the data is dumped
Use the user guide to create your conf file (https://flume.apache.org/FlumeUserGuide.html).
Once you have your conf file ready, you can run it like this:
bin/flume-ng agent -n $agent_name -c conf -f conf/your-flume-conf.conf
