What is the maximum number of tasks for Kafka Streams on multiple topics with different partition counts - apache-kafka-streams

Suppose we have two input topics: Topic1 has 2 partitions and Topic2 has 4 partitions.
We create the Kafka Streams application with a single thread (num.stream.threads = 1).
Question: what is the maximum number of instances of the stream application we can run such that every instance is assigned at least one partition?
As far as I understand, it is determined by the largest partition count among the input topics, which is 4.
What I want to achieve is 6, i.e. the sum of all the topics' partitions. Do you know whether this is doable? Thanks.

The parallelism of a Streams application is defined by the number of partitions in the input topic(s), you are correct, and you cannot change this. A workaround would be to work with an intermediate repartition topic: you repartition the input topics into a new topic with 6 partitions, and then do the actual work downstream of it with a parallelism of 6.
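On newer Kafka Streams releases (2.6+) this workaround can be written with repartition(); the sketch below is only an illustration under that assumption, with placeholder topic names, a placeholder mapValues step standing in for the actual work, and default serdes assumed to be configured:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Repartitioned;

StreamsBuilder builder = new StreamsBuilder();

// Merge both input topics (2 + 4 source partitions) into one stream.
KStream<String, String> merged = builder.<String, String>stream("Topic1")
        .merge(builder.<String, String>stream("Topic2"));

// Force a repartition topic with 6 partitions; everything downstream of it
// runs with 6 tasks, so up to 6 threads/instances can be kept busy.
merged.repartition(Repartitioned.numberOfPartitions(6))
      .mapValues(value -> value.toUpperCase())   // placeholder for the actual work
      .to("output-topic");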

Related

Kafka Streams splitting the topology

In my use case, I have the following two options to build my Kafka Streams topology.
Topics the data is streamed from:
Topic A - 6 partitions
Topic B - 6 partitions
Topic C - 6 partitions
Intermediate Topic:
Topic INTERMIDIATE - 6 partitions
Option 1: with two topologies
Topology 1:
A.leftJoin(B).to(INTERMIDIATE)
Topology 2:
INTERMIDIATE.leftJoin(C).to(ResultTopic)
Option 2: with single topology
A.leftJoin(B).leftJoin(C).to(ResultTopic)
I know both result in the same output; my question is:
Option 1 creates 12 tasks: max-partitions(A, B) + max-partitions(INTERMIDIATE, C) = 6 + 6.
Option 2 creates just 6 tasks: max-partitions(A, B, C) = 6.
At a high level, Option 2 looks like the better solution because I can configure fewer threads to handle the tasks, but in my use case I am building one big topology:
A.leftJoin(B).leftJoin(C).leftJoin(D).leftJoin(E).leftJoin(F).to(ResultTopic)
Again only 6 tasks are created even though there are more topics. Is ending up with fewer tasks by chaining the topics in a single topology a good solution?
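For reference, a rough sketch of what Option 2 could look like as a single topology; it assumes stream-stream left joins with an arbitrary 5-minute window, default String serdes configured, and placeholder value joiners:

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> a = builder.stream("A");
KStream<String, String> b = builder.stream("B");
KStream<String, String> c = builder.stream("C");

JoinWindows window = JoinWindows.of(Duration.ofMinutes(5));  // assumed window size

// All topics are co-partitioned with 6 partitions, so this single
// sub-topology runs with 6 tasks no matter how many joins are chained.
a.leftJoin(b, (av, bv) -> av + "|" + bv, window)
 .leftJoin(c, (abv, cv) -> abv + "|" + cv, window)
 .to("ResultTopic");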

Kafka Streams - Multiple Joins and Number of threads in a single instance

I have a use case where I need to do multiple joins on two topics.
Let's say I have topic A (2 partitions) and topic B (2 partitions), and I am running a single instance of the Kafka Streams application.
The use case is to find breaks, left misses and right misses between the two topics, so I am performing the following 3 operations:
A.join(B)
B.leftJoin(A)
A.leftJoin(B)
As per the documentation, two tasks (max(2, 2)) will be created for each topology, for a total of 6 tasks, i.e.:
1. A.join(B) - two tasks created, each task is assigned two partitions
2. B.leftJoin(A) - two tasks created, each task is assigned two partitions
3. A.leftJoin(B) - two tasks created, each task is assigned two partitions
Since I am running a single instance, to scale up I am planning to configure num.stream.threads=6 so that each thread is assigned one task.
Is my understanding above correct? Please correct me if I am mistaken.
Thanks in advance.
Regards,
Sathish
From the Confluent documentation:
The default implementation provided by Kafka Streams is DefaultPartitionGrouper, which assigns each task with at most one partition for each of the source topic partitions; therefore, the generated number of tasks is equal to the largest number of partitions among the input topics. [1]
So if you aren't overriding the partition.grouper config, the number of tasks should be 2.
Links:
[1] http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
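For a single instance, the thread count can then be sized to the task count; below is a minimal configuration sketch under that assumption, with a placeholder application id and bootstrap server:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "breaks-finder");      // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
// With the default partition grouper there are only max(2, 2) = 2 tasks,
// so 2 threads is the most this instance can keep busy; extra threads would idle.
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);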

Window operation on Spark streaming from Kafka

I am trying to explore Spark Streaming with Kafka as the source. As per this link, createDirectStream has 1:1 parallelism between Kafka partitions and Spark. So this would mean that if there is a Kafka topic with 3 partitions, then 3 Spark executors would run in parallel, each reading a partition.
Questions
Suppose I have a window operation after the data is read. Does the window operation apply the window across partitions or within one partition? For example, let's say my batch interval is 10s and the window interval is 50s. Does the window accumulate 50s of data across partitions (if each partition has 10 records per 50s, does the window hold 30 records), or 50s of data per partition in parallel (if each partition has 10 records per 50s, does the window hold 10 records)?
pseudo code:
rdd = createDirectStream(...)
rdd.window()
rdd.saveAsTextFile() // Does this write 30 records in 1 file, or 3 files with 10 records per file?
Suppose I have this...
Pseudo code:
rdd = createDirectStream()
rdd.action1()
rdd.window()
rdd.action2()
Let's say I have 3 Kafka partitions and 3 executors (each reading a topic partition). This spins up 2 jobs as there are 2 actions. Each Spark executor holds a partition of the RDD, and action1 is applied in parallel. Now for action2, would the same set of executors be used (otherwise, the data has to be read from Kafka again - not good)?
Q) If there is a Kafka topic with 3 partitions, then 3 Spark executors would run in parallel, each reading a partition.
In more specific terms, there will be 3 tasks submitted to the Spark cluster, one for each partition. Where these tasks execute depends on your cluster topology and locality settings, but in general you can consider that these 3 tasks will run in parallel.
Q) Suppose I have a window operation after the data is read. Does the window operation apply the window across partitions or within one partition?
The fundamental model of Spark, and by transitivity of Spark Streaming, is that operations are declared on an abstraction (RDD/Datasets for Spark, DStream for Spark Streaming) and, at the execution level, those operations are applied in a distributed fashion, using the native partitioning of the data.
(I'm not sure about the distinction the question makes between "across partitions or within one partition". The window is preserved per partition. The operation(s) are applied according to their own semantics: for example, a map operation is applied per partition, while a count operation is first applied to each partition and then consolidated into one result.)
Regarding the pseudo code:
val dstream = createDirectStream(..., Seconds(30))
dstream.window(Seconds(600)) // this does nothing as the new dstream is not referenced any further
val windowDstream = dstream.window(timePeriod) // this creates a new Windowed DStream based on the base DStream
dstream.saveAsTextFiles() // this writes using the original streaming interval (30 seconds). It will write 1 logical file in the distributed file system with 3 partitions
windowDstream.saveAsTextFiles() // this writes using the windowed interval (600 seconds). It will write 1 logical file in the distributed file system with 3 partitions.
Given this code (note naming changes!):
val dstream = createDirectStream(...)
dstream.action1()
val windowDStream = dstream.window(...)
windowDStream.action2()
for action2, would the same set of executors be used (otherwise, the data has to be read from Kafka again - not good)?
In the case of Direct Stream model, the RDDs at each interval do not contain any data, only offsets (offset-start, offset-end). It's only when an action is applied that the data is read.
A windowed DStream over a direct stream is, therefore, just a series of offsets: Window (1-3) = (offset1-start, offset1-end), (offset2-start, offset2-end), (offset3-start, offset3-end). When an action is applied to that window, these offsets are fetched from Kafka and the operation is applied. This is not "bad" as implied in the question: it prevents us from having to store intermediate data for long periods of time and lets us preserve the operation semantics on the data.
So, yes, the data will be read again, and that's a good thing.
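As an illustration, here is a minimal Java sketch using the 0-10 direct stream API; the topic name, group id, and the 30s batch / 600s window intervals are assumptions carried over from the pseudo code above:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

SparkConf conf = new SparkConf().setAppName("window-sketch");
JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(30));

Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "localhost:9092");
kafkaParams.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
kafkaParams.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
kafkaParams.put("group.id", "window-sketch");

// One RDD partition per Kafka partition; only offsets are tracked until an action runs.
JavaInputDStream<ConsumerRecord<String, String>> stream = KafkaUtils.createDirectStream(
        jssc,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.<String, String>Subscribe(Arrays.asList("my-topic"), kafkaParams));

JavaDStream<String> values = stream.map(record -> record.value());

// The window covers all 3 partitions, but the data inside it stays partitioned.
JavaDStream<String> windowed = values.window(Durations.seconds(600));
windowed.foreachRDD(rdd -> System.out.println("records in window: " + rdd.count()));

jssc.start();
jssc.awaitTermination();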

Spark Direct Stream is not creating parallel streams per kafka partition

We are facing a performance issue while integrating Spark and Kafka streaming.
Project setup:
We are using a Kafka topic with 3 partitions, producing 3000 messages into each partition, and processing them with Spark direct streaming.
Problem we are facing:
On the processing end we are using the Spark direct stream approach. As per the documentation below, Spark should create as many parallel direct streams as there are partitions in the topic (3 in this case). But while reading we can see that all the messages from partition 1 get processed first, then partition 2, then partition 3. Any idea why it is not processing in parallel? As per my understanding, if it were reading from all the partitions in parallel at the same time, the message output should be interleaved rather than ordered partition by partition.
http://spark.apache.org/docs/latest/streaming-kafka-0-8-integration.html#approach-2-direct-approach-no-receivers
Did you try setting the spark.streaming.concurrentJobs parameter? Maybe in your case it can be set to three:
sparkConf.set("spark.streaming.concurrentJobs", "3")
Thanks.

What is the difference between a partition and a replica of a topic in a Kafka cluster

What is the difference between a partition and a replica of a topic in a Kafka cluster?
I mean, both store copies of the messages in a topic. Then what is the real difference?
When you add a message to a topic, you call the send(KeyedMessage message) method of the producer API. This means that your message contains a key and a value. When you create a topic, you specify the number of partitions you want it to have. When you call the send method for this topic, the data is sent to only ONE specific partition, based on the hash value of your key (by default). Each partition may have replicas, which means that a partition and its replicas store the same data. The limitation is that both your producer and consumer work only with the main replica; the copies are used only for redundancy.
Refer to the documentation: http://kafka.apache.org/documentation.html#producerapi
And a basic training: http://www.slideshare.net/miguno/apache-kafka-08-basic-training-verisign
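For illustration, here is how the same idea looks with the current Java producer API (the topic name and key below are placeholders): records carrying the same key are hashed to the same partition by the default partitioner.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // Both records use the key "flight-123", so they land in the same partition.
    producer.send(new ProducerRecord<>("my-topic", "flight-123", "departed"));
    producer.send(new ProducerRecord<>("my-topic", "flight-123", "arrived"));
}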
Topics are partitioned across multiple nodes so a topic can grow beyond the limits of a single node. Partitions are replicated for fault tolerance. Replication and leader takeover are among the biggest differences between Kafka and other brokers/Flume. From the Apache Kafka site:
Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.
partition: each topic can be split up into partitions for load balancing (you can write to different partitions at the same time) and scalability (the topic can scale without being limited by a single instance); within the same partition the records are ordered;
replica: mainly for fault-tolerant durability;
Quotes:
The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.
There is a quite intuitive tutorial to explain some fundamental concepts in Kafka: https://www.tutorialspoint.com/apache_kafka/apache_kafka_fundamentals.htm
Furthermore, there is a workflow to get you through the confusing jungle: https://www.tutorialspoint.com/apache_kafka/apache_kafka_workflow.htm
Partitions
A topic consists of a bunch of buckets. Each such bucket is called a partition.
When you want to publish an item, Kafka takes the hash of its key and appends the item to the appropriate bucket.
Replication Factor
This is the number of copies of topic-data you want replicated across the network.
In simple terms, partition is used for scalability and replication is for availability.
Kafka topics are divided into a number of partitions. Any record written to a particular topic goes to a particular partition. Each record is assigned and identified by a unique offset. Replication is implemented at the partition level; the redundant unit of a topic partition is called a replica. The logic that decides the partition for a message is configurable. Partitioning helps in reading/writing data in parallel by splitting it into different partitions spread over multiple brokers. Each partition has one server acting as leader and the others as followers. The leader handles reads/writes while the followers replicate the data. In case the leader fails, one of the followers is elected as the new leader.
Hope this explains!
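As a concrete illustration of the two settings, here is a hypothetical sketch using the Java AdminClient (topic name and counts are arbitrary): 3 partitions spread the data for parallelism, while a replication factor of 2 keeps a second copy of each partition on another broker.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");

try (AdminClient admin = AdminClient.create(props)) {
    // 3 partitions for read/write parallelism, replication factor 2 for fault tolerance.
    NewTopic topic = new NewTopic("flight-events", 3, (short) 2);
    admin.createTopics(Collections.singletonList(topic)).all().get();
}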
Further Reading
Partitions store different data of the same type, and yes, you can store the same message in different topic partitions, but your consumers need to handle the duplicated messages.
Replicas are copies of these partitions on other servers.
Your number of replicas is bounded by the number of Kafka brokers (servers) in your cluster.
Example:
Let's suppose you have a Kafka cluster of 3 brokers and a topic named AIRPORT_ARRIVALS that receives messages with flight information; it has 3 partitions: partition 1 for flight arrivals from airline A, partition 2 from airline B, and partition 3 from airline C. All these messages will initially be written to one broker (the leader), and a copy of each message will be stored/replicated on the other 2 Kafka brokers (followers). Disclaimer: this example is only for easier explanation and not an ideal way to define a message key, because you could end up with an unbalanced load over specific partitions.
Replication is the way that Kafka provides redundancy.
Kafka keeps more than one copy of the same partition across multiple brokers.
This redundant copy is called a replica. If a broker fails, Kafka can still serve consumers with the replicas of the partitions that the failed broker owned.
