Transfer logs from Kafka to Elasticsearch

I am looking for a lightweight log shipper that can transfer my logs directly from Kafka to Elasticsearch. Out of Filebeat, Logagent, and Logstash (though I need something lightweight), which of these, or any other tool, suits my use case best?

rsyslog is lightweight. As of version 8.27 it supports Kafka as an input; Elasticsearch has been supported as an output since even earlier.
The Kafka input module (imkafka) configuration is described in the rsyslog documentation.
The Elasticsearch output module (omelasticsearch) configuration is described there as well.
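A minimal rsyslog sketch of that pipeline could look like the following; the broker address, topic, consumer group, and index name are placeholders, so adjust them to your setup:

    # Load the Kafka input (rsyslog >= 8.27) and the Elasticsearch output module
    module(load="imkafka")
    module(load="omelasticsearch")

    # Consume messages from a Kafka topic (placeholder broker/topic/group)
    input(type="imkafka"
          broker="localhost:9092"
          topic="app-logs"
          consumergroup="rsyslog-es")

    # Index each message into Elasticsearch (placeholder server/index)
    action(type="omelasticsearch"
           server="localhost"
           serverport="9200"
           searchIndex="app-logs"
           bulkmode="on")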

Related

How to configure the Kafka Cluster to work with Elastic Search Cluster?

I have to build a log cluster and a monitoring cluster (for high availability) like the topology below. I'm wondering how to configure those log-shipper clusters. (I have two topologies in the image.)
If I use Kafka with Filebeat in the Kafka cluster, will Elasticsearch receive duplicate data because Kafka keeps replicas of the data?
If I use Logstash (in the Elasticsearch cluster) to get the logs from the Kafka cluster, what should the configuration look like? I think Logstash will not know where to read the logs efficiently from the Kafka cluster.
Cluster topology
Thanks for reading. If you have any idea, please discuss with me ^^!
As I see it, both setups are compatible with Kafka; you can use Filebeat, Logstash, or mix them in the consumer and producer stages.
IMHO it all depends on your needs. For example, sometimes we use filters to enrich the data before ingesting it into Kafka (producer stage), or before indexing it into Elasticsearch (consumer stage); in those cases it is better to work with Logstash, because using filters is easier there than in Filebeat.
But if you want to work with raw data, Filebeat may be the better choice, because the agent is lighter.
About your questions:
Kafka does replicate the data, but the replicas are for HA purposes; as long as your consumers share the same consumer group, the data is read only once.
To read the logs from Kafka with Logstash, you can use the Logstash Kafka input plugin; it is easy and works fine!
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html
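A minimal Logstash pipeline sketch for that consumer stage might look like this (the broker, topic, group, and index names are placeholders):

    input {
      kafka {
        # Placeholder broker and topic; use the same group_id on every Logstash
        # instance so each message is consumed only once across the cluster.
        bootstrap_servers => "localhost:9092"
        topics            => ["app-logs"]
        group_id          => "logstash-es"
      }
    }

    output {
      elasticsearch {
        # Placeholder Elasticsearch endpoint and daily index pattern.
        hosts => ["http://localhost:9200"]
        index => "app-logs-%{+YYYY.MM.dd}"
      }
    }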

Can I use Kafka between Logstash and Elasticsearch? (Using two Kafka clusters)

I'm trying to integrate Apache Kafka with the Elastic Stack (Beats, Logstash, Elasticsearch, and Kibana).
In the diagram, Kafka sits between Beats and Logstash. I was wondering if I can put another Kafka between Logstash and Elasticsearch (where I drew with a red pen).
Do two Kafka clusters sound okay?
Any ideas or thoughts to share?
Yes.
Logstash can write to Kafka as an output.
You can use Kafka Connect Elasticsearch for streaming from Kafka into Elasticsearch.
If you want to buffer/scale the output from Logstash by using Kafka here, it is possible and would make sense; a minimal sketch of that Kafka output is shown after the options below.
But bear in mind that you could also:
(a) write from Beats to Kafka, do any processing with KSQL/Kafka Streams etc., write back to Kafka, and then use Kafka Connect to stream into Elasticsearch
or
(b) just write from Logstash to Elasticsearch
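If you do add the second Kafka, the Logstash side of it is just a kafka output block; a minimal sketch (the broker and topic are placeholders) might be:

    output {
      kafka {
        # Placeholder broker list and topic on the second Kafka cluster;
        # Kafka Connect Elasticsearch (or another consumer) reads from this topic.
        bootstrap_servers => "localhost:9092"
        topic_id          => "processed-logs"
        codec             => json
      }
    }

Kafka Connect Elasticsearch would then consume that topic and stream it into Elasticsearch.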

Kafka to Elasticsearch, HDFS with Logstash or Kafka Streams/Connect

I use Kafka for message queue/processing. My question is about performance/best practice. I will do my own performance tests but maybe someone has results/experience already.
The data is raw in a Kafka (0.10) topic and I want to transfer it structured to ES and HDFS.
Now I see 2 possibilities:
Logstash (Kafka input plugin, grok filter (parsing), ES/webhdfs output plugin)
Kafka Streams (parsing), Kafka Connect (ES sink, HDFS sink)
Without any tests I would say that the second option is better/cleaner and more reliable?
Logstash "best practice" for getting data into Elasticsearch. WebHDFS won't have the raw performance of the Java API that is part of the Kafka Connect plugin, however.
Grok could be done in a Kafka Streams process, so your parsing could be done in either location.
If you are on an Elastic subscription, then they would like to sell Logstash. Confluent would like to sell Kafka Streams + Kafka Connect.
Avro seems to be the best medium for data transfer, and the Schema Registry is a popular way to manage the Avro schemas. IIUC, Logstash doesn't work well with the Schema Registry or Avro, and prefers JSON.
In the Hadoop landscape, I would offer the intermediate options of Apache NiFi or StreamSets.
In the end, it really depends on your priorities, and how well you (and your team) can support these tools.
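For reference, option 1 expressed as a single Logstash pipeline would look roughly like the sketch below; the topic, grok pattern, Elasticsearch hosts, and WebHDFS host/path are placeholders, not a tested configuration:

    input {
      kafka {
        # Placeholder broker and raw topic
        bootstrap_servers => "localhost:9092"
        topics            => ["raw-events"]
      }
    }

    filter {
      grok {
        # Placeholder pattern; replace with whatever matches your raw log format
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
    }

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "events-%{+YYYY.MM.dd}"
      }
      webhdfs {
        # Placeholder WebHDFS endpoint, path, and user
        host => "namenode.example.com"
        port => 50070
        path => "/logs/events-%{+YYYY-MM-dd}.log"
        user => "hdfs"
      }
    }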

Kafka-Connect vs Filebeat & Logstash

I'm looking to consume from Kafka and save data into Hadoop and Elasticsearch.
I've seen two ways of doing this currently: using Filebeat to consume from Kafka and send it to ES, and using the Kafka Connect framework. There are Kafka Connect HDFS and Kafka Connect Elasticsearch modules.
I'm not sure which one to use to send streaming data. Though I think that if at some point I want to take data from Kafka and place it into Cassandra, I can use a Kafka Connect module for that, whereas no such feature exists for Filebeat.
Kafka Connect can handle streaming data and is a bit more flexible. If you are just going to Elasticsearch, Filebeat is a clean integration for log sources. However, if you are going from Kafka to a number of different sinks, Kafka Connect is probably what you want. I'd recommend checking out the connector hub to see some examples of the open source connectors currently at your disposal: http://www.confluent.io/product/connectors/
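For the Kafka Connect route, the Elasticsearch sink is just a connector configuration; a minimal standalone-mode sketch might look like the following (the connector name, topic, and connection URL are placeholders), and the HDFS sink is configured the same way with connector.class=io.confluent.connect.hdfs.HdfsSinkConnector:

    # es-sink.properties -- minimal Kafka Connect Elasticsearch sink sketch
    name=es-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    tasks.max=1
    # Placeholder topic and Elasticsearch endpoint
    topics=app-logs
    connection.url=http://localhost:9200
    # Mapping type name (required by older connector versions)
    type.name=log
    # Use topic+partition+offset as the document id and accept schemaless JSON
    key.ignore=true
    schema.ignore=true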

Logstash or Elasticsearch integration with Apache Spark streaming

Is it possible to receive live input streams of logs from Logstash or Elasticsearch into Spark Streaming?
I see there's a built-in Flume receiver, but are there any existing custom receivers for Logstash or Elasticsearch?
Possibly the best solution currently is to use the Logstash output plugin for Kafka and then read the Kafka topic with the Spark Kafka receiver:
http://spark.apache.org/docs/latest/streaming-kafka-integration.html
