I have a question about syslog, FIFOs, and log files.
For example, I have my gc.log and this configuration in syslog-ng:
source s_splunk {
udp(ip("127.0.0.1") port(514));
file("/logs/gc.log" follow_freq(1));
};
destination d_splunk {
tcp("my.splunk.intranet" port(1514));
};
log {
source (s_splunk);
destination (d_splunk);
};
to index this gc.log in Splunk. But this way I get high CPU consumption, and I would like to change how I'm indexing this log file.
I would like to do the indexing through a FIFO file, but I can't change how the application generates this log file.
How can I do this?
I found a way to solve my problem. I deleted my gc.log file, recreated it as a FIFO, and changed the permissions on it.
So the JVM logs to the FIFO, and in syslog-ng I configured one destination to write the log to a file and another to send it to my Splunk VIP (my.splunk.intranet).
With this solution, syslog-ng no longer has high CPU usage.
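For reference, here is a minimal sketch of that setup. The FIFO path matches the original gc.log location, but the file owner and the local archive path are assumptions, so adjust them to your environment:

# Recreate gc.log as a FIFO and let the JVM keep writing to the same path
mkfifo /logs/gc.log
# Hypothetical owner; must match the user the JVM runs as
chown jvmuser:jvmgroup /logs/gc.log

# syslog-ng: read the FIFO, keep a local copy, and forward to Splunk
source s_gc_fifo {
pipe("/logs/gc.log");
};
destination d_gc_file {
# Optional local copy; the path is an assumption
file("/logs/gc-archive.log");
};
destination d_splunk {
tcp("my.splunk.intranet" port(1514));
};
log {
source(s_gc_fifo);
destination(d_gc_file);
destination(d_splunk);
};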
I have tried almost everything but still can't find a solution to this issue, so I wanted to ask for a little help:
I have got logrotate (v. 3.7.8) configured based on size of log files:
/home/test/logs/*.log {
missingok
notifempty
nodateext
size 10k
rotate 100
copytruncate
}
Rotation of logs is based only on size and is invoked whenever a message arrives at the rsyslog daemon (v. 5.8.10). Configuration of rsyslog:
$ModLoad omprog
$ActionOMProgBinary /usr/local/bin/log_rotate.sh
$MaxMessageSize 64k
$ModLoad imuxsock
$ModLoad imklog
$ModLoad imtcp
$InputTCPServerRun 514
$template FORMATTER, "%HOSTNAME% | %msg:R,ERE,4,FIELD:(.*)\s(.*)(:::)(.*)--end%\n"
$ActionFileDefaultTemplate FORMATTER
$Escape8BitCharactersOnReceive off
$EscapeControlCharactersOnReceive off
$SystemLogRateLimitInterval 0
$SystemLogRateLimitBurst 0
$FileOwner test
$FileGroup test
$DirOwner test
$DirGroup test
# Log each module execution to a separate log file, without the module_execution_ prefix in the log name.
$template CUSTOM_LOGS,"/home/test/logs/%programname:R,ERE,1,FIELD:^module_execution_(.*)--end%.log"
if $programname startswith 'module_execution_' then :omprog:
if $programname startswith 'module_execution_' then ?CUSTOM_LOGS
& ~
The script invoked by omprog just runs logrotate and, for test purposes, appends a line to the logrot file:
#!/bin/bash
# Marker line so we can count how many times rsyslog invoked this script
echo "1" >> /home/test/logrot
# Run logrotate verbosely against the main configuration
/usr/sbin/logrotate /etc/logrotate.conf -v
How to reproduce:
execute:
for i in {1..50000}; do logger -t "module_execution_test" "test message"; done;
check the rotated files: there will be a lot of files (test.log.1, 2, 3, etc.) with sizes close to 10 kB, and one test.log much bigger than expected
check:
wc -l /home/test/logrot
It grows for some time but then stops, even though messages still arrive (it hangs exactly when the rotation stops happening), which means that rsyslog no longer calls the external script.
So IMO it looks like a bug in rsyslog or the omprog plugin. Any idea what is going on?
br
I had deleted an index that contained logs from a certain file. Even after deletion of that index, why doesn't Logstash/Elasticsearch re-read the same log file when creating a new index? And which one does the job of reading the logs: ES or LS?
Logstash reads your logs and puts them into Elasticsearch. There is something called a sincedb that Logstash uses to keep track of which files it has already processed. If you remove it and restart Logstash, it should reprocess all of your logs.
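As a rough sketch (the paths here are assumptions and depend on your Logstash version and the user it runs as), you can make the sincedb location explicit in the file input so it is easy to find and remove:

input {
  file {
    # Hypothetical log path and sincedb location
    path => "/var/log/myapp/*.log"
    sincedb_path => "/var/lib/logstash/sincedb-myapp"
  }
}

# To force a full reprocess: stop Logstash, remove the sincedb file
# (rm /var/lib/logstash/sincedb-myapp), then start Logstash again.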
If there is a specific log you want to reparse, the easiest way to do it is to do this:
mv logfile logfile.copy
cp logfile.copy logfile
rm logfile.copy
This gives it a new inode and makes logstash think it is a new log.
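If you want to confirm the trick worked, stat (GNU coreutils) prints the inode number; run it before and after the copy and the value should change:

stat -c %i logfile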
I want to read a log file that lives on a different server from the one where Flume is up and running. How can I achieve this by changing my flume-conf.properties file? What should I write in the Flume configuration file to achieve this?
a1.sources = AspectJ
a1.channels = memoryChannel
a1.sinks = kafkaSink
a1.sources.AspectJ.type = com.flume.MySource
a1.sources.AspectJ.command = tail -F /tmp/data/Log.txt
To achieve this, what should I write in place of
a1.sources.AspectJ.command = tail -F /tmp/data/Log.txt
I believe what you want to ask is: if Flume is set up on host 'F' and your log files exist on host 'L', how do you configure Flume to read the log files from host 'L', correct?
If so, then you need to set up Flume on host 'L' and not on 'F'. Set up Flume on the same host where the log files are, and point the sink at your Kafka topic.
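As a rough sketch, the agent on host 'L' could look like the following. The broker address, topic name, and the exec source are assumptions (the original question uses a custom source class), and the Kafka sink property names depend on your Flume version:

# Agent running on host 'L', where the log file lives
a1.sources = AspectJ
a1.channels = memoryChannel
a1.sinks = kafkaSink

a1.sources.AspectJ.type = exec
a1.sources.AspectJ.command = tail -F /tmp/data/Log.txt
a1.sources.AspectJ.channels = memoryChannel

a1.channels.memoryChannel.type = memory

# Hypothetical broker address and topic name
a1.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.kafkaSink.kafka.bootstrap.servers = broker1:9092
a1.sinks.kafkaSink.kafka.topic = logs
a1.sinks.kafkaSink.channel = memoryChannel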
I am trying to transfer a 60 MB file to a queue, but WebSphere MQ FTE stalls the transfer and keeps recovering. I am running WebSphere MQ FTE with the default configuration.
I have tested the following scenario with different results according to the configuration changes I made.
These commands were issued to create the transfer template and the monitor:
fteCreateTransfer -sa AGENT1 -sm TQM.FTE -da AGENT2 -dm QM.FTE -dq FTE.TEST.Q -p QM.FTE -de overwrite -sd delete -gt /var/IBM/WMQFTE/config/TQM.FTE/TEST_TRANSFER.xml D:\\rvs\\tstusrdat\\ALZtoSIP\\INC\\*.zip
fteCreateMonitor -ma AGENT1 -mn TEST_MONITOR -md D:\\rvs\\tstusrdat\\ALZtoSIP\\INC -mt /var/IBM/WMQFTE/config/TQM.FTE/TEST_TRANSFER.xml -tr match,*.zip
Tests were performed on files of 53 MB and 30 MB.
Default configuration (just enableQueueInputOutput=true added to AGENT2.properties)
1) all default
no success, transfer status: "recovering"
for both files
2) added maxInputOutputMessageLength=60000000, destination queue max message length changed to 103809024
result: transfer status "failed" with the following exception: PM71138: BFGIO0178E: A QUEUE WRITE FAILED DUE TO A WMQAPIEXCEPTION WITH MESSAGE TEXT CC=2 RC=2142 MQRC_HEADER_ERROR
for both files
After reading this: http://pic.dhe.ibm.com/infocenter/wmqfte/v7r0/topic/com.ibm.wmqfte.doc/message_length.htm I came up with working settings:
3) maxInputOutputMessageLength=34603008 (its maximum value), destination queue max message length still 103809024
result for the 30 MB file: success
result for the 53 MB file: "failed" with the following exception: PM71138: BFGIO0178E: A QUEUE WRITE FAILED DUE TO A WMQAPIEXCEPTION WITH MESSAGE TEXT CC=2 RC=2142 MQRC_HEADER_ERROR
So according to this, I am afraid one can't transfer files larger than 34603008 bytes.
If you are transferring a file to a queue, you definitely can't use the default settings. You have to add "enableQueueInputOutput=true" to agent.properties for the agent that uses a queue as source or destination.
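For reference, the relevant lines in the destination agent's agent.properties (AGENT2 in the example above) would look something like this; both property names appear in the posts above, and 34603008 is the maximum value that worked in the tests described:

# Allow the agent to use a queue as source or destination
enableQueueInputOutput=true
# Allow larger messages to be written to the destination queue
maxInputOutputMessageLength=34603008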
I am new to Flume, so please tell me how to store log files from my local machine in my local HDFS using Flume.
I have issues setting the classpath and the flume.conf file.
Thank you,
ajay
agent.sources = weblog
agent.channels = memoryChannel
agent.sinks = mycluster
## Sources #########################################################
agent.sources.weblog.type = exec
agent.sources.weblog.command = tail -F REPLACE-WITH-PATH2-your.log-FILE
agent.sources.weblog.batchSize = 1
agent.sources.weblog.channels = REPLACE-WITH-CHANNEL-NAME
## Channels ########################################################
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 100
## Sinks ###########################################################
agent.sinks.mycluster.type = REPLACE-WITH-CLUSTER-TYPE
agent.sinks.mycluster.hdfs.path = /user/root/flumedata
agent.sinks.mycluster.channel = REPLACE-WITH-CHANNEL-NAME
Save this file as logagent.conf and run it with the command below:
# flume-ng agent -n agent -f logagent.conf &
We do need more information to know why things are not working for you.
The short answer is that you need a Source to read your data from (maybe the spooling directory source), a Channel (memory channel if you don't need reliable storage) and the HDFS sink.
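For example, a minimal sketch of that combination might look like the following; the spool directory and the HDFS path are assumptions:

agent.sources = spool
agent.channels = mem
agent.sinks = hdfsSink

# Hypothetical directory to drop finished log files into
agent.sources.spool.type = spooldir
agent.sources.spool.spoolDir = /var/log/myapp/spool
agent.sources.spool.channels = mem

agent.channels.mem.type = memory

agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/user/root/flumedata
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.channel = mem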
Update
The OP reports receiving the error message, "you must include conf file in flume class path".
You need to provide the conf file as an argument. You do so with the --conf-file parameter. For example, the command line I use in development is:
bin/flume-ng agent --conf-file /etc/flume-ng/conf/flume.conf --name castellan-indexer --conf /etc/flume-ng/conf
The error message reads that way because the bin/flume-ng script adds the contents of the --conf-file argument to the classpath before running Flume.
If you are appending data to your local file, you can use an exec source with the "tail -F" command. If the file is static, use the cat command to transfer the data to Hadoop; a rough sketch of this static-file case is included at the end of this answer.
The overall architecture would be:
Source: Exec source reading data from your file
Channel: Either a memory channel or a file channel
Sink: HDFS sink where the data is dumped
Use the user guide to create your conf file (https://flume.apache.org/FlumeUserGuide.html).
Once you have your conf file ready, you can run it like this:
bin/flume-ng agent -n $agent_name -c conf -f conf/your-flume-conf.conf
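As a rough illustration of the static-file case mentioned above (the agent name, file path, and HDFS path are assumptions):

# your-flume-conf.conf: one-shot transfer of a static file to HDFS
agent1.sources = staticFile
agent1.channels = mem
agent1.sinks = hdfsSink

agent1.sources.staticFile.type = exec
agent1.sources.staticFile.command = cat /var/log/myapp/archive.log
agent1.sources.staticFile.channels = mem

agent1.channels.mem.type = memory

agent1.sinks.hdfsSink.type = hdfs
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/user/root/flumedata
agent1.sinks.hdfsSink.channel = mem

Run it with the same flume-ng command shown above, using -n agent1 as the agent name.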