Fluentd vs Kafka - Elasticsearch

The use case is this:
I have several Java applications running, each of which interacts with its own specific set of Elasticsearch indices. For instance, application A queries and updates indices A, B, and C, while application B uses indices A, C, and D (say).
A common interface is required that can manage all of these data streams. Currently I'm evaluating Kafka and Fluentd for this purpose.
Can someone explain which is better suited for this situation? I've looked at the features of both Kafka and Fluentd, and I don't really understand what difference the choice would make here.
Thanks a lot.

Kafka provides publish/subscribe messaging as a distributed commit log. Usually you install Kafka on each host where you need to produce data to be forwarded somewhere else, and all those hosts together form a cluster. The good thing here is that if network connectivity becomes unstable or goes down for some reason, your application can continue to produce data/logs and they won't be lost. If your application instead sends logs directly to a remote centralized logging host, you might lose some logs while the network is down.
Fluentd is a centralized log collector, commonly installed on one host (or more if you need horizontal scaling). It connects to remote data sources, applies filtering, and sends unified log data to remote data sinks.
From the Fluentd docs, you can see that Fluentd can both consume data from Kafka and produce data to Kafka. This alone should hint that Fluentd and Kafka sit on different layers, since the former uses the latter.
It would actually be more logical to compare Fluentd and Logstash. As far as Fluentd is concerned, Kafka is just another data source and/or data sink; the two are different beasts altogether.
If you want the best of both worlds, use Kafka as the input/output data pipe from/to your apps and Fluentd (or Logstash) as your centralized logging system reading from those Kafka topics.
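To make that split concrete, here is a minimal sketch of the application side: a plain Java Kafka producer writing log events to a topic that Fluentd (or Logstash) could then consume. The broker address, topic name, and log payload are placeholders, not anything prescribed by either project.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AppLogProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The application keeps producing log events locally; the centralized
        // collector (Fluentd/Logstash) reads them from the topic later.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String logLine = "{\"app\":\"A\",\"level\":\"INFO\",\"msg\":\"updated index B\"}";
            producer.send(new ProducerRecord<>("app-logs", "app-A", logLine));
            producer.flush();
        }
    }
}
```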
If you want to read more on the topic, you can read about how Fluentd and Kafka complement each other rather than compete with each other.

From: The Life Blood Of Your Data Pipeline
Kafka is primarily related to holding log data rather than moving log data. Thus, Kafka producers need to write the code to put data in Kafka, and Kafka consumers need to write the code to pull data out of Kafka.
Fluentd has both input and output plugins for Kafka so that data engineers can write less code to get data in and out of Kafka. We have many users that use Fluentd as a Kafka producer and/or consumer.
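For contrast, this is roughly the kind of code the quote means by consumers "writing the code to pull data out of Kafka": a bare-bones Java consumer polling a topic in a loop, which Fluentd's Kafka input plugin would otherwise handle for you. The broker address, topic, and group id below are invented for the example.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AppLogConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092"); // placeholder
        props.put("group.id", "log-indexer");                // placeholder consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("app-logs"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Here you would transform the event and index it into Elasticsearch.
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```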

Related

Read data directly from a REST API with Kafka

Is such a situation even possible?
There is an application "XYZ" (which does not use Kafka) that exposes a REST API. It is a Spring Boot application that an Angular application communicates with.
A new application (Spring Boot) is being created that wants to use Kafka and needs to fetch data from the "XYZ" application, and it wants to do this via Kafka.
The "XYZ" application has an example endpoint, [GET] api/message/all, which returns all messages.
Is there a way to "connect" Kafka directly to this endpoint and read data from it? In short, the idea is for Kafka to consume data directly from the endpoint, i.e. communication between two microservices where one of them does not use Kafka.
What suggestions do you have for this situation? I'm guessing this option is not possible. Is it necessary to add a publisher in application XYZ that sends data to a topic, so that only then does the data become available for consumption by the new application?
Getting the data via the REST interface might not be a very good idea.
Simply put, in the messaging world message delivery guarantees are a big topic, and the standard ways to handle this with Kafka are usually:
Producing messages from your service to a Kafka topic using the Producer API.
Using Kafka Connect to read from an outbox table.
Since you most likely already have a database attached to your API service, the problem of dual writes can arise if you choose to produce the messages directly to a topic: a write to the database might fail while the same data is successfully written to Kafka, or vice versa, so you can end up with inconsistent state. Depending on your use case this may or may not be a problem.
Nevertheless, to overcome that, the outbox pattern can come in handy.
With the outbox pattern, you write your messages to a so-called outbox table, and then you use Kafka Connect to poll that table. Kafka Connect is basically a cluster of workers that consume the database table and forward its entries to a Kafka topic. You might want to look at Confluent Cloud, which offers a fully managed Kafka Connect service, so you don't have to manage the cluster of workers yourself. Once the messages are in a Kafka topic, you can consume them with the standard Kafka Consumer API or Streams API.
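As a rough illustration (not a production implementation), the write side of the outbox pattern boils down to inserting the business row and the outbox row in the same database transaction, so either both exist or neither does. The JDBC URL, table names, and columns here are invented for the example; a Kafka Connect source connector would later poll the outbox table.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class OutboxWriter {
    public static void saveMessage(String payload) throws Exception {
        // Placeholder JDBC URL and schema; adjust to your actual database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/xyz", "user", "pass")) {
            conn.setAutoCommit(false);
            try (PreparedStatement business = conn.prepareStatement(
                         "INSERT INTO message (content) VALUES (?)");
                 PreparedStatement outbox = conn.prepareStatement(
                         "INSERT INTO outbox (aggregate_type, event_type, payload) VALUES (?, ?, ?)")) {
                // 1. The normal business write.
                business.setString(1, payload);
                business.executeUpdate();
                // 2. The outbox row, written atomically in the same transaction.
                outbox.setString(1, "message");
                outbox.setString(2, "MessageCreated");
                outbox.setString(3, payload);
                outbox.executeUpdate();
                conn.commit(); // either both rows exist or neither does -> no dual-write problem
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}
```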
What you're looking for is a source connector for your specific database, e.g. the MongoDB connector: https://www.confluent.io/hub/mongodb/kafka-connect-mongodb
For now, most source connectors produce in an at-least-once fashion. This means the topic you configure the connector to write to might contain the same message twice, so if you need messages to be processed exactly once, make sure you think about deduplicating them.
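One simple (and purely illustrative, in-memory) way to deduplicate at-least-once deliveries on the consumer side is to track the keys you have already processed and skip repeats; in practice you would back this with a persistent store or rely on Kafka's exactly-once features. Broker, group, and topic names are placeholders.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DeduplicatingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092"); // placeholder
        props.put("group.id", "xyz-reader");                 // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        Set<String> seenKeys = new HashSet<>(); // in-memory only: lost on restart

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("xyz.outbox.messages")); // placeholder topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Use the record key (e.g. the outbox row id) as the dedup key.
                    if (record.key() != null && !seenKeys.add(record.key())) {
                        continue; // already processed this message, skip the duplicate
                    }
                    System.out.println("processing: " + record.value());
                }
            }
        }
    }
}
```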

Why use Beats if I can post directly to Elasticsearch?

Recently I have been reading up on the Elastic Stack and found this thing called Beats, which are basically lightweight shippers.
So the question is: if my service can hit Elasticsearch directly, do I actually need Beats? From what I know, they are basically just a kind of proxy(?)
Hopefully my question is clear enough.
Not sure which Beat you are specifically referring to, but let's take Filebeat as an example.
Suppose application logs need to be indexed into Elasticsearch. The options are:
Post the logs directly to Elasticsearch (a minimal sketch of this follows the list)
Save the logs to a file, then use Filebeat to index the logs into Elasticsearch
Publish the logs to a message broker such as RabbitMQ or Kafka, then use Logstash input plugins to read from RabbitMQ or Kafka and index into Elasticsearch
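For completeness, option 1 is essentially just an HTTP call: the sketch below posts a single log document straight to an Elasticsearch index over its REST API using Java's built-in HTTP client. The host, index name, and document fields are made up for the example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DirectIndexer {
    public static void main(String[] args) throws Exception {
        String document = "{\"app\":\"billing\",\"level\":\"ERROR\",\"msg\":\"payment failed\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/app-logs/_doc")) // placeholder ES host and index
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(document))
                .build();

        // If Elasticsearch is down or the network blips, this single call simply fails:
        // there is no retry, spooling, or delivery state, which is what Filebeat adds.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```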
Option 2 Benefits
Filebeat ensures that each log message gets delivered at least once. It achieves this by storing the delivery state of each event in its registry file: if the defined output is blocked and has not confirmed all events, Filebeat keeps trying to send events until the output acknowledges that it has received them.
Before shipping data to Elasticsearch, we can do some additional processing or filtering. For example, we may want to drop some logs based on text in the log message, or add an extra field (e.g. add the application name to all logs so that multiple applications' logs can be indexed into a single index and then filtered by application name on the consumption side).
Essentially, Beats provide a reliable way of indexing data without adding much overhead to the system, as they are lightweight shippers.
Option 3 - This provides the same benefits as option 2. It can be more useful if we want to ship the logs directly to an external system instead of storing them in a file on the local system, e.g. for applications deployed in Docker/Kubernetes where we do not have much access or enough space to store files locally.
Beats are good as lightweight agents for collecting streaming data like log files, OS metrics, etc., where you need some sort of agent to collect and send. If you have a service that wants to put things into Elasticsearch, then yes, by all means it can just use the REST or Java APIs directly.
Filebeat offers a way to centralize live logs from multiple servers.
Let's say you are running multiple instances of an application on different servers and they are all writing logs.
You can ship all these logs to a single Elasticsearch index and analyze or visualize them from there.
A single static file doesn't need Filebeat to be moved into Elasticsearch.

Multiple Logstash instances vs Filebeat

I'm trying to establish the best architecture for our Elastic Stack implementation.
We have two distinct networks (let's call them internal and external) and several web/DB/application servers (approximately 10) on each of these networks.
I would like to consume IIS logs, our RabbitMQ messages, and some other bits and bobs from machines on both networks and send them to a single server on the internal network where my Elasticsearch and Kibana installation is located.
For the servers on both the internal and external networks, I can see two main ways to get the logs sent to Elasticsearch:
Set up Logstash on each server and send the output to the Elasticsearch server on the internal network.
Set up Filebeat on each server and send the logs to a single server running Logstash (this could be the same box that hosts Elasticsearch and Kibana).
I'm unsure of the pros and cons of these approaches at the moment. I believe the correct approach is to use Filebeat, but I don't see why I wouldn't just put Logstash in multiple places, as it seems like that would better distribute the processing of the logs.
Then again, perhaps having one Logstash instance with 20-30 inputs isn't a problem?
Interested in any thoughts or guidance in this area.
From what I read in the documentation, Logstash is much more demanding in terms of memory than Filebeat, especially if you do some kind of processing on the logs (like grok parsing). Logstash requires at least a JVM (with JRuby). Filebeat's footprint is presumably much smaller, since it's optimized for shipping logs (I've never used it, so I can't say for sure).
Running Logstash everywhere also complicates any update you would want to make to the Logstash instances or their configurations.
With a centralized Logstash, the advantage is that it is easy to change the address of the Elasticsearch instance, redirect to a cache like Redis, or add another output. I also found that Logstash (in version 2.x) required frequent restarts, which is easier to deal with if you only have one instance.
I have never used Logstash with multiple inputs, so I can't say.
In a job where I was responsible for a log centralization system, we used Beaver (a Filebeat equivalent) to ship the logs to a Redis server, and we had two or three Logstash servers sending everything to Elasticsearch. All of the comments above come from that period.

What is the use of Kafka in a big data cluster?

I have recently deployed a big data cluster in which I've used Apache Kafka and ZooKeeper, but I still don't understand their usage in the cluster. When are both required, and for what purpose?
I am simplifying the concepts here; you can find a detailed explanation in this article.
Kafka is a fast, scalable, partitioned, and replicated commit log service that is distributed by design. It has a unique design.
A stream of Messages of a particular type is defined as a Topic.
A Producer can be anyone who can publish messages to a Topic.
The published messages are then stored on a set of servers called Brokers, which together form the Kafka cluster.
A Consumer can subscribe to one or more Topics and consume the published Messages by pulling data from the Brokers.
ZooKeeper is a distributed, hierarchical file system that facilitates loose coupling between clients.
ZooKeeper achieves high availability by running multiple ZooKeeper servers, called an ensemble.
ZooKeeper is used for managing and coordinating Kafka brokers.
Each Kafka broker coordinates with the other Kafka brokers using ZooKeeper.
Producers and consumers are notified by the ZooKeeper service about the presence of a new broker in the Kafka system or the failure of a broker.
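To tie the terms together, here is a small sketch that connects to the brokers and creates a replicated, partitioned topic using Kafka's AdminClient. The broker addresses, topic name, partition count, and replication factor are arbitrary examples.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The client only needs a few brokers to bootstrap; it discovers the rest
        // of the cluster (whose membership is coordinated via ZooKeeper) from them.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions spread across brokers, each replicated on 3 brokers.
            NewTopic topic = new NewTopic("clickstream-events", 6, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```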
Kafka is a distributed messaging system optimized for high throughput. It has a persistent queue, with messages appended to on-disk file structures, and it performs consistently even on very modest hardware. In short, you use Kafka to load data into your big data cluster, and because of Kafka's distributed nature you can do this at high speed even on modest hardware.
Regarding ZooKeeper, it's a centralized configuration service and naming registry for large distributed systems. It is robust, since the persisted data is distributed across multiple nodes and a client can connect to any of them, migrating if a node fails, as long as a strict majority of nodes are working. In short, ZooKeeper makes sure your big data cluster remains online even if some of its nodes are offline.
In regards to Kafka, I would add a couple of things.
Kafka describes itself as a log, not a queue. A log is an append-only, totally ordered sequence of records ordered by time.
In a strict data structures sense, a queue is a FIFO collection designed to hold data, but once an item is taken out of the queue there's no way to get it back. Jaco does describe Kafka as being a persistent queue, but using different terms (queue vs. log) can help understanding.
Kafka's log is saved to disk instead of being kept in memory. The designers of Kafka chose this because (1) they wanted to avoid a lot of the JVM overhead you get when storing things in in-memory data structures, and (2) they wanted messages to persist even if the Java process dies for some reason.
Kafka is designed for multiple consumers (a Kafka term) to read from the same log. Each consumer tracks its own offset in the log: consumer A might be at offset 2, consumer B might be at offset 8, and so on. Tracking consumers by offset eliminates a lot of complexity on Kafka's side.
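The offset bookkeeping is visible directly in the consumer API: a consumer can be assigned a partition and positioned at an arbitrary offset, independent of what any other consumer is doing. The broker, group id, topic, and offset below are placeholders that just mirror the example above.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("group.id", "consumer-a");            // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("events", 0); // placeholder topic
            consumer.assign(Collections.singletonList(partition));

            // "Consumer A might be at offset 2": position this consumer explicitly.
            consumer.seek(partition, 2);

            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                // The log itself is untouched; only this consumer's position moves forward.
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```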
Reading that first link will explain a lot of the differences between Kafka and other messaging services.

Apache Kafka vs Apache Storm

Apache Kafka: Distributed messaging system
Apache Storm: Real Time Message Processing
How can we use both technologies in a real-time data pipeline for processing event data?
In terms of a real-time data pipeline, both seem to me to do the same job. How can we use both technologies in a data pipeline?
You use Apache Kafka as a distributed and robust queue that can handle high-volume data and enables you to pass messages from one endpoint to another.
Storm is not a queue. It is a system with distributed real-time processing abilities, meaning you can execute all kinds of manipulations on real-time data in parallel.
The common flow of these tools (as I know it) goes as follows:
real-time system --> Kafka --> Storm --> NoSQL --> BI (optional)
So your real-time app handles high-volume data and sends it to a Kafka queue. Storm pulls the data from Kafka and applies the required manipulation. At this point you usually want to get some benefit from this data, so you either send it to a NoSQL DB for additional BI calculations, or you simply query that NoSQL store from any other system.
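As a sketch of that flow (not a drop-in implementation), a Storm topology typically wires a Kafka spout to one or more bolts; the bolt below just prints each message where the real manipulation and the write to a NoSQL store would go. It assumes the storm-core and storm-kafka-client dependencies, the "value" field name is that spout's default, and the broker address and topic name are placeholders.

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class KafkaStormPipeline {

    // A bolt that receives each Kafka record; real code would aggregate/enrich
    // the data here and write the result to a NoSQL store.
    public static class ProcessBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String value = tuple.getStringByField("value"); // default field emitted by the Kafka spout
            System.out.println("processing: " + value);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // This terminal bolt emits nothing downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        KafkaSpoutConfig<String, String> spoutConfig =
                KafkaSpoutConfig.builder("kafka-broker:9092", "events").build(); // placeholders

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 1);
        builder.setBolt("process-bolt", new ProcessBolt(), 2).shuffleGrouping("kafka-spout");

        // Run locally for the sketch; a real deployment would use StormSubmitter instead.
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("kafka-storm-pipeline", new Config(), builder.createTopology());
            Thread.sleep(60_000);
        }
    }
}
```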
I know this is an older thread and the comparisons of Apache Kafka and Storm were valid and correct when they were written, but it is worth noting that Apache Kafka has evolved a lot over the years. Since version 0.10 (April 2016), Kafka has included the Kafka Streams API, which provides stream processing capabilities without the need for any additional software such as Storm. Kafka also includes the Connect API for connecting to various sources and sinks (destinations) of data.
Announcement blog - https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
Current Apache documentation - https://kafka.apache.org/documentation/streams/
In Kafka 0.11 the stream processing functionality was further expanded to provide exactly-once semantics and transactions.
https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
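For a taste of what this looks like, here is a minimal Kafka Streams application that reads from one topic, transforms each value, and writes to another topic, with no Storm cluster involved. The application id, broker address, topic names, and the uppercase transformation are placeholders for illustration.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-uppercaser");     // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("raw-events"); // placeholder input topic
        events.mapValues(value -> value.toUpperCase())                 // stand-in for real processing
              .to("processed-events");                                 // placeholder output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```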
Kafka and Storm have a slightly different purpose:
Kafka is a distributed message broker that can handle a large volume of messages per second. It uses the publish-subscribe paradigm and relies on topics and partitions. Kafka uses ZooKeeper to share and save state between brokers. So Kafka is basically responsible for transferring messages from one machine to another.
Storm is a scalable, fault-tolerant, real-time analytics system (think of it as Hadoop in real time). It consumes data from sources (spouts) and passes it through a pipeline (bolts), and you can combine them into a topology. So Storm is basically a computation unit (aggregation, machine learning, etc.).
But you can use them together: for example, your application uses Kafka to send data to other servers, which use Storm to run some computation on it.
This is how it works:
Kafka - to provide a real-time stream
Storm - to perform some operations on that stream
You might take a look at the GitHub project https://github.com/abhishekgoel137/kafka-nodejs-d3js.
(D3js is a graph-representation library)
Ideal case:
Realtime application -> Kafka -> Storm -> NoSQL -> d3js
This repository is based on:
Realtime application -> Kafka -> <plain Node.js> -> NoSQL -> d3js
As everyone has explained:
Apache Kafka: a continuous messaging queue
Apache Storm: a continuous processing tool
In this setup, Kafka gets the data from any website like Facebook or Twitter using their APIs, that data is processed using Apache Storm, and you can store the processed data in any database you like.
https://github.com/miguno/kafka-storm-starter
Just follow it and you will get some idea.
When I have a use case that requires me to visualize or alert on patterns (think of Twitter trends) while continuing to process the events, I have several options.
NiFi would allow me to process an event and update a persistent data store with low(er) batch aggregation and very, very little custom coding.
Storm (lots of custom coding) allows me nearly real-time access to the trending events.
If I can wait many seconds, then I can batch out of Kafka into HDFS (Parquet) and process there.
If I need to know within seconds, I need NiFi and probably even Storm (think of monitoring thousands of earth stations, where I need to see small-region weather conditions for tornado warnings).
Simply put, Kafka sends the messages from one node to another, and Storm processes the messages. Check this example of how you can integrate Apache Kafka with Storm.
