How to stop Logstash from writing its logs to syslog? - elasticsearch

I have my Logstash configuration on my Ubuntu server; it reads data from a Postgres database and sends it to Elasticsearch. I have configured a schedule so that every 15 minutes Logstash checks the Postgres table, and if there is any change in the table it sends the data to Elasticsearch.
But each time, Logstash also sends its own logs to syslog, which I don't need. Because of Logstash, my syslog file keeps growing and eating up space.
So how do I stop Logstash from sending its logs to syslog? Is there any setting in logstash.yml to avoid sending logs to syslog?
I referred to many sites online, and they said to remove the line below from the configuration:
stdout { codec => rubydebug }
But I don't have this line.
In my output I just send my data to Elasticsearch, which is hosted on AWS.
Is there a way to stop Logstash from sending its logs to syslog?

Disable the rootLogger.appenderRef.console in log4j
The log files that Logstash itself produces are created through log4j, and by default one stream goes to the console; syslog then picks up that console output and writes it to the syslog file. In the Ubuntu version of Logstash this is configured in the file /etc/logstash/log4j2.properties
In the default configuration there is a line that starts with
rootLogger.appenderRef.console
If you add a # in front of that line and restart Logstash, the log files that Logstash creates will stop going to syslog:
service logstash restart
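For orientation, here is a rough sketch of what the edited block in /etc/logstash/log4j2.properties might look like; the exact property values (ls.log.level, ls.log.format) vary between Logstash versions, so treat it as an illustration rather than a literal copy:
rootLogger.level = ${sys:ls.log.level}
# console appenderRef commented out, so nothing reaches syslog via the console stream
#rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rollingFile.ref = ${sys:ls.log.format}_rolling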
The other rootLogger, which uses the RollingFileAppender, will still write log messages from Logstash itself (so not the messages that are being processed by your pipeline) to
/var/log/logstash/logstash-plain.log
It's easy to confuse the log files that Logstash creates with the messages that you process, especially if they get mixed together by the logstash-output-stdout or logstash-output-syslog plugins. This is not applicable to you because you use the logstash-output-elasticsearch plugin, which writes to Elasticsearch.
The log4j2.properties file gets skipped if you run Logstash from the command line in Ubuntu. That is a nice way of testing your pipeline in a terminal, and you can run multiple Logstash instances in parallel (e.g. the service and a command-line test pipeline):
/usr/share/logstash/bin/logstash -f your_pipeline.conf
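If you do want a command-line run to pick up that same settings directory (and therefore the log4j2.properties edit above), you can point Logstash at it explicitly; this assumes the standard Ubuntu package layout:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f your_pipeline.conf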

To avoid writing to syslog, check your pipeline and log4j2.properties files.
In your pipeline files, remove all occurrences of this:
stdout { codec => rubydebug }
And in your log4j2.properties file, comment out this line:
rootLogger.appenderRef.console

Related

Why does Logstash produce logs in /var/log with a messages-20220731 naming convention?

As far as I know, the Logstash log path is /var/log/logstash, but Logstash produces tons of logs with a messages-<date> naming convention. How do I stop it from producing these logs?

How to receive logs from multiple Filebeats running on different machines (EC2) on one Logstash instance?

I have 2 Filebeats running on different machines (e.g. A & B), and each Filebeat is configured to send to the Logstash machine, but Logstash is not receiving the data even though I have added the private IPs of those machines.
So I have 2 Logstash .conf files to receive from the 2 different Filebeats.
My conf files look like this ->
A.conf
input {
  beats {
    port => ""
    host => "private-ip-m1"
  }
}
B.conf
input {
  beats {
    port => ""
    host => "private-ip-m2"
  }
}
After I run Logstash it is unable to connect; it says Error: Cannot assign requested address.
Can anyone tell me if there is another way to do this?
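As a point of comparison, here is a minimal sketch of a single beats input that both Filebeats could ship to; port 5044 is only an assumption, and the key detail is that host must be an address the Logstash machine itself can bind to (for example 0.0.0.0), not the private IPs of the sending machines:
input {
  beats {
    host => "0.0.0.0"   # bind on the Logstash machine; the Filebeats connect to this host:port
    port => 5044        # assumed port; point output.logstash.hosts in both filebeat.yml files at <logstash-ip>:5044
  }
}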

Filebeat not picking up some files

I am trying to send Tomcat logs to ELK. I am using Filebeat to scan the files.
My log file names look like "project_err.DD-MM-YYYY". In the Filebeat configuration, I am giving the file name as foldername\project_err*
But Filebeat is ignoring the files. Is this configuration correct?
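For reference, a minimal sketch of how that glob is usually written in filebeat.yml, assuming Filebeat 6+ (older versions declare this under filebeat.prospectors instead); the folder name is the placeholder from the question:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - 'foldername\project_err*'   # an absolute path is usually safer than a relative one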

Need help debugging kafka source to hdfs sink with flume

I'm trying to send data from Kafka (eventually we'll use Kafka running on a different instance) to HDFS. I think Flume or some sort of ingestion tool is necessary to get data into HDFS, so we're using Cloudera's Flume service and HDFS.
This is my flume-conf file. The other conf file is empty:
tier1.sources=source1
tier1.channels=channel1
tier1.sinks=sink1
tier1.sources.source1.type=org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.zookeeperConnect=localhost:2181
tier1.sources.source1.topic=test
tier1.sources.source1.groupId=flume
tier1.sources.source1.channels=channel1
tier1.sources.source1.interceptors=i1
tier1.sources.source1.interceptors.i1.type=timestamp
tier1.sources.source1.kafka.consumer.timeout.ms=100
tier1.channels.channel1.type=memory
tier1.channels.channel1.capacity=10000
tier1.channels.channel1.transactionCapacity=1000
tier1.sinks.sink1.type=hdfs
tier1.sinks.sink1.hdfs.path=/tmp/kafka/test/data
tier1.sinks.sink1.hdfs.rollInterval=5
tier1.sinks.sink1.hdfs.rollSize=0
tier1.sinks.sink1.hdfs.rollCount=0
tier1.sinks.sink1.hdfs.fileType=DataStream
When I start a Kafka consumer it can get messages from a Kafka producer just fine on localhost:2181. But I don't see any errors from the Flume agent and nothing gets put into HDFS. I also can't find any log files.
This is how I start the agent.
flume-ng agent --conf /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/flume-ng/conf --conf-file flume-conf --name agent1 -Dflume.root.logger=DEBUG,INFO,console
Help please?
Fixed it.
I had to change
--name agent1
to --name tier1
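For reference, here is the same launch command with the agent name matching the tier1 prefix used in the conf file (the logger flag is written in its usual LEVEL,appender form; everything else is unchanged):
flume-ng agent --conf /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/flume-ng/conf --conf-file flume-conf --name tier1 -Dflume.root.logger=DEBUG,console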

Unable to re-process the log file using logstash version 2.3.2

I have processed a file using Logstash and pushed it to Elasticsearch, and it worked. However, I had to make some changes to the Logstash conf file and need to process the log file again. I deleted the index in ES and restarted Logstash. I don't see the data in Elasticsearch; it looks like the file is not being processed.
1. I am using Logstash version 2.3.2
2. I deleted the _sincedb file and restarted Logstash; still no logs
3. I checked the conf file syntax via --configcheck and it is OK.
Any ideas what I am missing here?
I don't see any index created and no data in ES. I tried these steps multiple times.
Logstash is smart enough to remember up to which line it has already processed each file you've given it, and it stores that cursor in a sincedb file.
So, in addition to the path setting, you need to specify two more parameters in your file input to make sure the file is re-processed on each run:
file {
  path => "/path/to/file"
  start_position => "beginning"
  sincedb_path => "/dev/null"
}
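For context, here is a minimal sketch of how that input could sit in a complete re-processing pipeline; the Elasticsearch host and index name are placeholders, not taken from the question:
input {
  file {
    path => "/path/to/file"
    start_position => "beginning"   # read from the top of the file instead of tailing it
    sincedb_path => "/dev/null"     # never persist the cursor, so every run re-reads the file
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]     # placeholder host
    index => "reprocessed-logs"     # placeholder index name
  }
}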
