rsyslog: send logs with a specific tag to a specific file with a different template

I have a script that uses the logger command to log things to rsyslog.
I want all logs from this script to go to /var/log/xen-shutdown-script.log
The logs don't seem to be sent to this log file.
Example logger command:
logger -t xen-shutdown-script -p syslog.info "test message"
I have the following in a file in path '/etc/rsyslog.d/01-xen-shutdown-script.conf':
template(
    name="xenShutdownScriptFormat"
    type="string"
    string="<%syslogprioriy-text%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %msg%\n"
)
if $syslogtag contains "xen-shutdown-script" then /var/log/xen-pool-shutdown.log;xenShutdownScriptFormat
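For comparison, a minimal sketch of the same filter written with rsyslog's action() syntax; the property name syslogpriority-text is the one documented by rsyslog, and the file path is the one used in the rule above:

template(name="xenShutdownScriptFormat" type="string"
         string="<%syslogpriority-text%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %msg%\n")
if $syslogtag contains "xen-shutdown-script" then {
    # write matching messages through the custom template
    action(type="omfile" file="/var/log/xen-pool-shutdown.log" template="xenShutdownScriptFormat")
}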

Related

User-defined (or omitted) username when using the logger(1) Linux command-line tool

I am trying to log some custom logs. The problem is that if I use the logger command, the username running the command is also logged. I would like to omit that info so I can manually fill in anything I want. I have read the manual but could not find anything like that. I also tried implementing it in a script (Java) but did not quite succeed.
Example. Now I am seeing this:
Mar 2 10:31:28 $HOSTNAME $USERNAME: Hello world!
What I would like to see is this:
Mar 2 10:31:28 suhosin[666]: ALERT - canary mismatch on efree() - heap overflow detected (attacker '000.000.000.000', file 'xyz')
Use the -t option to set the tag.
$ logger -t 'nobody' 'hello'
Produces log:
Feb 28 10:25:37 myhostname nobody: hello
Relevant man page section:
-t, --tag tag
Mark every line to be logged with the specified tag. The default tag is the name of the user logged in on the terminal (or a user name based on effective user ID).

Replace IP address in a file

Sample.txt file:
.........................
some log file entries
some log file entries
some log file entries
some log file entries
This system ip is not found
some log file entries
some log file entries
some log file entries
This system IP is 172.16.80.10
some log file entries
some log file entries
This system IP:172.16.80.10
some log file entries
some log file entries
some log file entries
Hostname::ip-172.16.80.10.ec2.internal
some log file entries
some log file entries
......................
I want to replace the IP address in the file, but not the one embedded in the hostname; however, the command changes the hostname as well.
I am using this command to replace the IP address:
sed s/172.16.80.10/172.16.80.12/g sample.txt
Output I am getting:
.........................
some log file entries
some log file entries
some log file entries
some log file entries
This system ip is not found
some log file entries
some log file entries
some log file entries
This system IP is 172.16.80.12
some log file entries
some log file entries
This system IP:172.16.80.12
some log file entries
some log file entries
some log file entries
Hostname::ip-172.16.80.12.ec2.internal
some log file entries
some log file entries
......................
(The hostname is changed as well.)
The desired output is:
.........................
some log file entries
some log file entries
some log file entries
some log file entries
This system ip is not found
some log file entries
some log file entries
some log file entries
This system IP is 172.16.80.12
some log file entries
some log file entries
This system IP:172.16.80.12
some log file entries
some log file entries
some log file entries
Hostname::ip-172.16.80.10.ec2.internal
some log file entries
some log file entries
......................
Code example:
while getopts i:h: opt
do
    case $opt in
        i)
            # Current private IPv4 address from the EC2 instance metadata service
            CurrentLocalIpv4=`curl -s http://169.254.169.254/latest/meta-data/local-ipv4`
            # Only accept an argument that looks like an IPv4 address
            if [[ $OPTARG =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
                sed -i s/"$CurrentLocalIpv4"/"$OPTARG"/g /home/ec2-user/sample.txt
                echo "ip changed"
            else
                echo "fail"
            fi
            ;;
        h)
            echo "hostname" ;;
    esac
done
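Assuming the snippet above is saved as an executable script (the name replace-ip.sh here is just a placeholder), it would be invoked roughly like this, where -i passes the new IP address:
./replace-ip.sh -i 172.16.80.12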
Using your command, you can modify it as follows:
sed 's/172.16.80.10$/172.16.80.12/g' sample.txt
That is, it tells sed to change only those occurrences of the IP that have no further characters after them on the line (172.16.80.10$).
If you have the IP to be replaced in a variable ($CurrentLocalIpv4), you can write it as follows:
sed 's/'$CurrentLocalIpv4'$/172.16.80.12/g' sample.txt
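As a side note, the dots in the pattern are regular-expression metacharacters that match any character; a slightly stricter variant of the same command escapes them so only literal dots match:
sed 's/172\.16\.80\.10$/172.16.80.12/g' sample.txt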

Logrotate using rsyslog's omprog hangs over time

I have tried almost everything but still can't find a solution to this issue, so I wanted to ask for a little help with the following case:
I have got logrotate (v. 3.7.8) configured based on size of log files:
/home/test/logs/*.log {
missingok
notifempty
nodateext
size 10k
rotate 100
copytruncate
}
Rotation of the logs is based only on size and is invoked whenever a message arrives at the rsyslog daemon (v. 5.8.10). The rsyslog configuration:
$ModLoad omprog
$ActionOMProgBinary /usr/local/bin/log_rotate.sh
$MaxMessageSize 64k
$ModLoad imuxsock
$ModLoad imklog
$ModLoad imtcp
$InputTCPServerRun 514
$template FORMATTER, "%HOSTNAME% | %msg:R,ERE,4,FIELD:(.*)\s(.*)(:::)(.*)--end%\n"
$ActionFileDefaultTemplate FORMATTER
$Escape8BitCharactersOnReceive off
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 0
$FileOwner test
$FileGroup test
$DirOwner test
$DirGroup test
# Log each module execution to separate log files and don't use the prepending module_execution_ in the log name.
$template CUSTOM_LOGS,"/home/test/logs/%programname:R,ERE,1,FIELD:^module_execution_(.*)--end%.log"
if $programname startswith 'module_execution_' then :omprog:
if $programname startswith 'module_execution_' then ?CUSTOM_LOGS
& ~
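For reference, a minimal sketch of the same omprog hookup in the action() syntax of newer rsyslog versions (the legacy directives above are what v5.8.10 understands):

module(load="omprog")
if $programname startswith 'module_execution_' then {
    # hand matching messages to the external script on its stdin
    action(type="omprog" binary="/usr/local/bin/log_rotate.sh")
}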
The script invoked by omprog just runs logrotate and, for test purposes, appends a line to the logrot file:
#!/bin/bash
echo "1" >> /home/test/logrot
/usr/sbin/logrotate /etc/logrotate.conf -v
How to reproduce:
execute:
for i in {1..50000}; do logger -t "module_execution_test" "test message"; done;
check the rotated files - there will be a lot of files (test.log.1, test.log.2, test.log.3, etc.) with sizes close to 10 kB, and one test.log whose size is much bigger than expected
check:
wc -l /home/test/logrot
It keeps growing for some time but then stops even though messages are still arriving (it hangs exactly when rotation stops happening) - meaning rsyslog no longer calls the external script.
So IMO it looks like a bug in rsyslog or the omprog plugin. Any idea what is going on?

how to forward specific log file to a remote rsyslog server

I have a Cassandra node (192.168.122.3) and an rsyslog server (192.168.122.2). On the Cassandra node, Cassandra writes its log to /var/log/cassandra/cassandra.log. I want to forward this cassandra.log file to the remote server (the rsyslog server), into its /var/log/ directory. How do I do it?
$ModLoad imfile #Load the imfile input module
$InputFilePollInterval 10
$InputFileName /var/log/cassandra/cassandra.log
$InputFileTag cassandra-access:
$InputFileStateFile stat-cassandra-access
$InputFileSeverity Info
$InputRunFileMonitor
$template cas_log, " %msg% "
if $programname == 'cassandra-access' then ##remote_server_address:port;cas_log
if $programname == 'cassandra-access' then stop
Follow these steps:
1) Go to /etc/rsyslog.d.
2) Create an empty file named cas-log.conf.
3) Copy the code above and paste it into this (cas-log) file. Note: replace remote_server_address and port in the second-to-last line with the destination rsyslog server's IP/name and port.
4) Restart rsyslog.
5) On the server side you can see the logs in the /var/log/syslog file.
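For reference, a minimal sketch of that forwarding line with the addresses from the question filled in; rsyslog uses @host:port for UDP and @@host:port for TCP forwarding, and port 514 here is only an assumption:

if $programname == 'cassandra-access' then @@192.168.122.2:514;cas_log
if $programname == 'cassandra-access' then stop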

How to load data from local machine to hdfs using flume

I am new to Flume, so please tell me how to store log files from my local machine into HDFS using Flume.
I have issues setting up the classpath and the flume.conf file.
agent.sources = weblog
agent.channels = memoryChannel
agent.sinks = mycluster
## Sources #########################################################
agent.sources.weblog.type = exec
agent.sources.weblog.command = tail -F REPLACE-WITH-PATH2-your.log-FILE
agent.sources.weblog.batchSize = 1
agent.sources.weblog.channels = REPLACE-WITH-CHANNEL-NAME
## Channels ########################################################
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 100
## Sinks ###########################################################
agent.sinks.mycluster.type = REPLACE-WITH-CLUSTER-TYPE
agent.sinks.mycluster.hdfs.path = /user/root/flumedata
agent.sinks.mycluster.channel = REPLACE-WITH-CHANNEL-NAME
Save this file as logagent.conf and run it with the command below:
# flume-ng agent -n agent -f logagent.conf &
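For illustration, a filled-in version of the same layout; the log path is a placeholder, the sink type is set to hdfs, and the channel name matches the one declared at the top:

agent.sources = weblog
agent.channels = memoryChannel
agent.sinks = mycluster

# Source: tail an application log file (path is an example)
agent.sources.weblog.type = exec
agent.sources.weblog.command = tail -F /var/log/myapp/app.log
agent.sources.weblog.batchSize = 1
agent.sources.weblog.channels = memoryChannel

# Channel: in-memory buffer between source and sink
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 100

# Sink: write the events to HDFS as plain text
agent.sinks.mycluster.type = hdfs
agent.sinks.mycluster.hdfs.path = /user/root/flumedata
agent.sinks.mycluster.hdfs.fileType = DataStream
agent.sinks.mycluster.channel = memoryChannel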
We need more information to know why things are not working for you.
The short answer is that you need a Source to read your data from (maybe the spooling directory source), a Channel (memory channel if you don't need reliable storage) and the HDFS sink.
Update
The OP reports receiving the error message, "you must include conf file in flume class path".
You need to provide the conf file as an argument. You do so with the --conf-file parameter. For example, the command line I use in development is:
bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name castellan-indexer --conf /etc/flume-ng/conf
The error message reads that way because the bin/flume-ng script adds the contents of the --conf-file argument to the classpath before running Flume.
If you are appending data to your local file, you can use an exec source with the "tail -F" command. If the file is static, use the cat command to transfer the data to Hadoop.
The overall architecture would be:
Source: Exec source reading data from your file
Channel : Either memory channel or file channel
Sink: Hdfs sink where data is being dumped.
Use the user guide to create your conf file (https://flume.apache.org/FlumeUserGuide.html).
Once you have your conf file ready, you can run it like this:
bin/flume-ng agent -n $agent_name -c conf -f conf/your-flume-conf.conf
