Does fluentd depend on rsyslog? - rsyslog

Still wrapping my head around logging technology. I'm following the fluentd to graylog2 recipe but I don't understand this step:
Open /etc/rsyslog.conf and add the following line to the beginning of the file:
*.* @127.0.0.1:5140
Then, restart rsyslogd by running sudo /etc/init.d/rsyslog restart.
What's supposed to listen on 127.0.0.1:5140? Is rsyslog a fluentd dependency?

According to Parse Syslog Messages Robustly:
The problem with syslog is that services have a wide range of log formats, and no single parser can parse all syslog messages effectively.
Rsyslog seems to be the recommended way to forward logs to Fluentd.

Fluentd listens on port 5140 if you enable its syslog input. Adding that line to
/etc/rsyslog.conf
forwards the traffic from rsyslog to Fluentd.
However, if you don't want to run rsyslog at all, you can send traffic straight to port 5140 yourself.
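For reference, a minimal sketch of the Fluentd source that would be listening on that port (the tag value system is just an example, and older td-agent configs write type instead of @type):
<source>
  @type syslog        # Fluentd's built-in syslog input plugin
  port 5140           # the port the rsyslog line forwards to
  bind 127.0.0.1      # only accept traffic from the local rsyslog
  tag system          # example tag prefix for the received records
</source>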

Related

rsyslog forwarding to different port

I am receiving syslog logs over port 513 that I am trying to forward to port 514 (where I have a service listening for them). So far, all my attempts have been unsuccessful.
I've tried making a file in /etc/rsyslog.d/ with
:fromhost-ip, isequal, "10.20.0.1" @127.0.0.1:514
I've tried adding a ruleset to the /etc/rsyslog.conf file:
ruleset (name="to514"){
action(type="omfwd" Target="127.0.0.1" Port="514" Protocol="udp")
}
input(type="imudp" port"513" ruleset="to513")
What is the right way to go about this?
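One sketch of a configuration that keeps the input and the ruleset consistent (the ruleset name has to match on both sides, and port needs an equals sign); adapt it to your own module loading and addresses:
module(load="imudp")                 # UDP listener module must be loaded

ruleset(name="to514") {
    # forward everything that arrives on this ruleset to the local service on 514
    action(type="omfwd" Target="127.0.0.1" Port="514" Protocol="udp")
}

input(type="imudp" port="513" ruleset="to514")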

Filebeat unable to send data to logstash which results in empty data in elastic & kibana

I am trying to deploy the ELK stack on the OpenShift platform (OKD v3.11), using Filebeat to automatically pick up the logs.
The Kibana dashboard is up and the Elasticsearch and Logstash APIs are working fine, but Filebeat is not sending data to Logstash: I see nothing arriving on the Logstash listener on port 5044.
I found a suggestion on the Elastic forums that the following iptables command would resolve the issue, but no luck:
iptables -A OUTPUT -t mangle -p tcp --dport 5044 -j MARK --set-mark 10
Still nothing arrives on the Logstash listener. Please let me know if I am missing anything, or if you need any more information.
NOTE:
The filebeat.yml, logstash.yml & logstash.conf files work perfectly when deployed on plain Kubernetes.
The steps I have followed to debug this issue are:
Check if Kibana is coming up,
Check if the Elasticsearch APIs are working,
Check if Logstash is accessible from Filebeat.
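For reference, a sketch of how these checks can be run; the elasticsearch and kibana hostnames are placeholders for whatever your OpenShift services are named:
# from the Filebeat pod/host: validate the config and test the connection to the configured output
filebeat test config
filebeat test output

# from any pod with curl: confirm Elasticsearch and Kibana respond
curl -s http://elasticsearch:9200/_cluster/health
curl -s -o /dev/null -w "%{http_code}\n" http://kibana:5601/api/status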
All of these checks passed in my case. I then raised the log level in filebeat.yml and found a "Permission Denied" error when Filebeat tried to access the Docker container logs under the "/var/lib/docker/containers//" folder.
Fixed the issue by setting SELinux to "Permissive" with the following command:
sudo setenforce Permissive
After this, ELK started to sync the logs.
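For reference, a sketch of the SELinux commands involved; setenforce only changes the running mode, so surviving a reboot means editing /etc/selinux/config:
getenforce                   # show the current mode (Enforcing / Permissive / Disabled)
sudo setenforce Permissive   # switch to permissive until the next reboot
# to make it persistent, set SELINUX=permissive in /etc/selinux/config
Note that permissive mode relaxes enforcement for the whole node, so a scoped SELinux policy or label for the Filebeat container may be preferable in production.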

send logs to external elasticsearch from openshift projects

I'm trying to send specific OpenShift project logs to an unsecured external Elasticsearch.
I have tried the solution described in https://github.com/richm/docs/releases/tag/20190205142308, but found that it only works when Elasticsearch is secured.
Later I also tried the elasticsearch plugin, by adding it in output-applications.conf.
output-applications.conf:
<match *.*>
  @type elasticsearch
  host xxxxx
  port 9200
  logstash_format true
</match>
All other files are the same as described in https://github.com/richm/docs/releases/tag/20190205142308 under "Application logs from specific namespaces/pods/containers".
output-applications.conf is included from the fluent.conf file.
In the fluentd logs I see nothing except the message "[info]: reading config file path="/etc/fluent/fluent.conf"", and the data is not reaching Elasticsearch.
Can anyone tell me how to proceed?
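Not a verified fix, but a sketch of what the output typically looks like with fluent-plugin-elasticsearch; the host is a placeholder, @log_level debug is added only to surface connection errors in the Fluentd log, and note that a ** pattern matches tags of any depth whereas *.* only matches two-part tags:
<match **>
  @type elasticsearch
  @log_level debug        # surface connection/buffer errors in the fluentd log
  host your-es-host       # placeholder for the external Elasticsearch address
  port 9200
  scheme http             # unsecured cluster, no TLS
  logstash_format true
</match>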

Kafka: client has run out of available brokers to talk to

I'm trying to wrap up changes to our Kafka setup, but I'm in over my head and am having a hard time debugging the issue.
I have multiple servers funneling their Ruby on Rails logs to 1 Kafka broker using Filebeat; from there the logs go to our Logstash server and are then stored in Elasticsearch. I didn't set up the original system, but I tried taking us down from 3 Kafka servers to 1 as they weren't needed. I updated the IP address configs in these files in our setup to remove the 2 old Kafka servers and restarted the appropriate services.
# main (filebeat)
sudo vi /etc/filebeat/filebeat.yml
sudo service filebeat restart
# kafka
sudo vi /etc/hosts
sudo vi /etc/kafka/config/server.properties
sudo vi /etc/zookeeper/conf/zoo.cfg
sudo vi /etc/filebeat/filebeat.yml
sudo service kafka-server restart
sudo service zookeeper-server restart
sudo service filebeat restart
# elasticsearch
sudo service elasticsearch restart
# logstash
sudo vi /etc/logstash/conf.d/00-input-kafka.conf
sudo service logstash restart
sudo service kibana restart
When I tail the Filebeat logs I see this:
2018-04-23T15:20:05Z WARN kafka message: client/metadata got error from broker while fetching metadata:%!(EXTRA *net.OpError=dial tcp 172.16.137.132:9092: getsockopt: connection refused)
2018-04-23T15:20:05Z WARN kafka message: client/metadata no available broker to send metadata request to
2018-04-23T15:20:05Z WARN client/brokers resurrecting 1 dead seed brokers
2018-04-23T15:20:05Z WARN kafka message: Closing Client
2018-04-23T15:20:05Z ERR Kafka connect fails with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Regarding "I tried taking us down from 3 Kafka servers to 1 as they weren't needed": I think you are misunderstanding something here. Kafka is only a highly available system if you have more than one broker, so the other two brokers were needed, even though you may only have listed a single broker in the Logstash config.
Your errors show that the single remaining broker refused the connection, so no logs will be sent to it.
At a minimum, I would recommend 4 brokers and a replication factor of 3 on all your critical topics for a useful Kafka cluster. That way, you can tolerate broker outages as well as distribute the load across your Kafka brokers.
It would also be beneficial to make the topic's partition count a factor of your total number of logging servers, and to key each Kafka message on the application type, for example. That way you are guaranteed log order for those applications.
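As a rough sketch of those two suggestions (the broker hostnames, the topic name, and the fields.app_id key are placeholders, not names from your setup):
# filebeat.yml - list every broker and key messages by application
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092", "kafka4:9092"]
  topic: "rails-logs"
  key: '%{[fields.app_id]}'   # same key -> same partition -> ordered per application
  required_acks: 1

# create the topic with replication so a single broker outage loses nothing
kafka-topics.sh --create --zookeeper zk1:2181 \
  --topic rails-logs --partitions 8 --replication-factor 3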

how to enable ElasticSearch http access log

I opened a couple of client nodes with HTTP port 9200 to serve Elasticsearch queries/indexing. I want to log client access over port 9200, just like the Apache HTTP Server has its access.log. How can I enable this in ES?
There's no such thing in Elasticsearch itself.
However, if you install the Shield plugin, you can enable auditing by adding this to your elasticsearch.yml configuration file.
shield.audit.enabled: true
You'll then get a new file called elasticsearch-access.log in your ES logs folder.
UPDATE by @lucabelluccini: Shield audit logs to syslog
In case you are interested in forwarding these audit logs to syslog, you can do so thanks to the log4j SyslogAppender class, which allows forwarding logs to syslog via a local socket.
Edit your logging.yml (customize the format, etc.):
appender:
  syslog:
    type: org.apache.log4j.net.SyslogAppender
    syslogHost: localhost
    facility: local0
    layout:
      type: org.apache.log4j.PatternLayout
      conversionPattern: "%d{ISO8601} %t %p %c %M %m %n"
Ensure your rsyslog configuration allows UDP sources.
Associate this appender with the Shield audit logger.
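A sketch of that association in logging.yml; the logger name shield.audit.logfile is an assumption taken from Shield's default configuration, so check the documentation for your Shield version:
logger:
  # assumption: Shield's audit trail logs through this logger by default
  shield.audit.logfile: INFO, syslog
additivity:
  shield.audit.logfile: false   # keep audit events out of the main server log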
