When could a task in Kafka Streams have more than one input partition? - apache-kafka-streams

Given this answer:
the maximum number of partitions over all topics determines the number of tasks.
and the code of AbstractTask with the following line:
final Set<TopicPartition> inputPartitions
I wonder when (if ever) a task can have multiple partitions assigned?

A task has multiple input partitions if you do a join() or a merge(), for example, because those operators read from multiple topics within a single sub-topology. The same holds if you read multiple topics at once via pattern subscription.
It's orthogonal to
the maximum number of partitions over all topics determines the number of tasks
The quote is about the number of created tasks and has nothing to do with the number of partitions per task.
Assume you have two input topics: A with two partitions (A-0 and A-1) and B with only one partition (B-0). Your program is:
KStream a = builder.stream("A",...);
KStream b = builder.stream("B",...);
a.merge(b);
Your program is logically:
topic-A ---+
+---> merge()
topic-B ---+
For this case, you would get two tasks:
Task 0_0:
A-0 ---+
+--- merge() -->
B-0 ---+
Task 0_1:
A-1 ---+
+--- merge() -->
Note that the second task 0_1 only has one input partition, because topic B has only 1 partition.
A task is basically a copy (a physical instantiation) of your (logical) program that processes all partitions with the same partition number. Because topic A has two partitions, two tasks need to be created.
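For reference, a minimal runnable version of the snippet above could look like the following; the String serdes and the output topic name are assumptions, not part of the original question:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();

// Topic A has partitions A-0 and A-1; topic B has only B-0 (serdes and "output" are illustrative).
KStream<String, String> a = builder.stream("A", Consumed.with(Serdes.String(), Serdes.String()));
KStream<String, String> b = builder.stream("B", Consumed.with(Serdes.String(), Serdes.String()));

// merge() reads both topics in one sub-topology, so task 0_0 gets A-0 and B-0,
// while task 0_1 only gets A-1.
a.merge(b).to("output", Produced.with(Serdes.String(), Serdes.String()));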

Related

What is the maximum number of tasks for Kafka Streams on multiple topics with different partition counts

Suppose we have two input topics: Topic1 has 2 partitions and Topic2 has 4 partitions.
We create the Kafka Streams application with a thread count of 1.
Question: what is the maximum number of application instances we can run such that all of them are assigned a partition?
As per my understanding, it is decided by the maximum number of partitions among the input topics, that is, 4.
What I want to achieve is 6, that is, the sum of all topics' partitions. Do you know if this is doable? Thanks.
The parallelism of a Streams application is defined by the number of partitions in the input topic(s), so you are correct, and you cannot change this. A workaround is to use an intermediate repartition topic: you repartition the input topic into a new topic with 6 partitions, and then do the actual work with a parallelism of 6.
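A minimal sketch of that workaround, assuming String keys and values and the hypothetical topic names input-topic and result-topic; on Kafka Streams 2.6+ repartition() creates and manages the intermediate topic for you, while on older versions you would create a 6-partition topic yourself and route through it:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Repartitioned;

StreamsBuilder builder = new StreamsBuilder();

// Read the 4-partition input topic (topic name and serdes are assumptions).
KStream<String, String> input =
    builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()));

// Redistribute the records over an internal repartition topic with 6 partitions;
// everything downstream of this point runs with 6 tasks.
KStream<String, String> scaledOut =
    input.repartition(Repartitioned.<String, String>numberOfPartitions(6)
        .withKeySerde(Serdes.String())
        .withValueSerde(Serdes.String()));

// The "actual work" then happens with a parallelism of 6.
scaledOut
    .mapValues(value -> value.toUpperCase())
    .to("result-topic", Produced.with(Serdes.String(), Serdes.String()));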

Copy operations in shuffle and sort phase of MapReduce

I am quite confused about the statement that, in the shuffle and sort phase, a job with m mappers and r reducers involves up to m*r copy operations. In which scenario does the number of copy operations reach the maximum value m*r?
Could anyone illustrate it?
Suppose you have 3 mappers and 1 reducer. Each mapper task outputs one file (sorted by key) that is written to the local filesystem of the node where the map task ran. So we will have 3 such output files spread around the cluster.
Since reducers do not take advantage of data locality optimisation, and since we have only 1 reducer, it will need to copy the 3 different output files that the mapper tasks produced across the network.
Hence, there are m x r = 3 x 1 = 3 copy operations involved in this scenario. The maximum of m*r is reached when every mapper produces output destined for every reducer, so that each of the r reducers has to fetch a piece of map output from all m mappers.
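To make the counting concrete, here is a tiny sketch (the mapper and reducer counts are just the numbers from this example) that enumerates the copy operations, one per (mapper, reducer) pair in the worst case:

// In the worst case every reducer fetches map output from every mapper,
// so the number of copy operations is bounded by mappers * reducers.
int mappers = 3;
int reducers = 1;
int copyOperations = 0;
for (int m = 0; m < mappers; m++) {
    for (int r = 0; r < reducers; r++) {
        copyOperations++; // reducer r fetches mapper m's output partition
    }
}
System.out.println(copyOperations); // 3, i.e. m * r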

Kafka Streams splitting the topology

In my use case, I have the two options below to build my Kafka Streams topology.
Topics where data streamed from:
Topic A - 6 partitions
Topic B - 6 partitions
Topic C - 6 partitions
Intermediate Topic:
Topic INTERMIDIATE - 6 partitions
Option 1: with two topologies
Topology 1:
A.leftJoin(B).to(INTERMIDIATE)
Topology 2:
INTERMIDIATE.leftJoin(C).to(ResultTopic)
Option 2: with single topology
A.leftJoin(B).leftJoin(C).to(ResultTopic)
I know both produce the same output; my question is:
Option 1 creates 12 tasks: max-partition(A, B) + max-partition(INTERMIDIATE, C)
Option 2 creates just 6 tasks: max-partition(A, B, C)
At a high level, it looks like Option 2 is the better solution because I can configure fewer threads to handle the tasks, but in my use case I am building one big single topology:
A.leftJoin(B).leftJoin(C).leftJoin(D).leftJoin(E).leftJoin(F).to(ResultTopic)
In this case, again only 6 tasks are created even though there are more topics. Is it a good approach to end up with fewer tasks by chaining the topics in a single topology?
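For illustration only, Option 2's single chained topology might look roughly like this on a recent Kafka Streams version; the value joiners, the 5-minute join windows, and the reliance on default serdes are placeholders, not taken from the question:

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> a = builder.stream("A");
KStream<String, String> b = builder.stream("B");
KStream<String, String> c = builder.stream("C");

// All three topics have 6 partitions and are read within one sub-topology,
// so the chained joins still result in 6 tasks.
a.leftJoin(b, (av, bv) -> av + "|" + bv,
        JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)))
 .leftJoin(c, (abv, cv) -> abv + "|" + cv,
        JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)))
 .to("ResultTopic");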

Kafka Streams - Multiple Joins and Number of threads in a single instance

I have a use case that requires multiple joins on two topics.
Let's say I have topic A (2 partitions) and topic B (2 partitions), and I am running a single instance of the Kafka Streams application.
I need to find breaks, left misses, and right misses between the two topics, so I am performing the following 3 operations:
A.join(B)
B.leftJoin(A)
A.leftJoin(B)
As per the documentation, two tasks (max(2, 2)) will be created for each topology, for a total of 6 tasks, i.e.:
1. A.join(B) - two tasks created, each task assigned two partitions
2. B.leftJoin(A) - two tasks created, each task assigned two partitions
3. A.leftJoin(B) - two tasks created, each task assigned two partitions
Since I am running a single instance, to scale up I am planning to configure num.stream.threads=6 so that each thread is assigned one task.
Is my understanding above correct? Please correct me if I am mistaken.
Thanks in Advance.
Regards,
Sathish
From the Confluent documentation:
The default implementation provided by Kafka Streams is DefaultPartitionGrouper, which assigns each task with at most one partition for each of the source topic partitions; therefore, the generated number of tasks is equal to the largest number of partitions among the input topics. [1]
So if you aren't overriding partition.grouper config, the number of tasks should be 2.
Links:
[1] http://docs.confluent.io/current/streams/developer-guide.html#optional-configuration-parameters
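As a side note, the thread count itself is just a StreamsConfig property; a minimal sketch with a hypothetical application id and bootstrap server follows. Since only 2 tasks exist here, any threads beyond 2 would simply sit idle:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "joins-app");         // hypothetical id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
// With 2 input partitions per topic and a single sub-topology there are only 2 tasks,
// so 2 is the useful maximum for the thread count.
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);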

Window operation on Spark streaming from Kafka

I am trying to explore Spark Streaming with Kafka as the source. As per this link, createDirectStream has 1:1 parallelism between Kafka partitions and Spark partitions. So this would mean that, if there is a Kafka topic with 3 partitions, then 3 Spark executors would run in parallel, each reading a partition.
Questions
Suppose I have a window operation after the data is read. Does the window operation apply the window across partitions or within one partition? I.e., let's say my batch interval is 10s and the window interval is 50s. Does the window accumulate 50s of data across partitions (if each partition has 10 records for those 50s, does the window hold 30 records), or 50s of data per partition in parallel (if each partition has 10 records for those 50s, does the window hold 10 records)?
pseudo code:
rdd = createDirectStream(...)
rdd.window()
rdd.saveAsTextFile() // Does this write 30 records in 1 file, or 3 files with 10 records per file?
Suppose I have this...
Pseudo code:
rdd = createDirectStream()
rdd.action1()
rdd.window()
rdd.action2()
Let's say I have 3 Kafka partitions and 3 executors (each reading a partition). This spins up 2 jobs, as there are 2 actions. Each Spark executor would have a partition of the RDD, and action1 is applied in parallel. Now, for action2, would the same set of executors be used (otherwise, the data has to be read from Kafka again, which is not good)?
Q) If there is a Kafka topic with 3 partitions, then 3 Spark executors would run in parallel, each reading a partition.
In more specific terms, there will be 3 tasks submitted to the Spark cluster, one for each partition. Where these tasks execute depends on your cluster topology and locality settings, but in general you can consider that these 3 tasks will run in parallel.
Q) Suppose I have a window operation after the data is read. Does the window operation apply the window across partitions or within one partition?
The fundamental model of Spark, and by extension of Spark Streaming, is that operations are declared on an abstraction (RDDs/Datasets for Spark, DStreams for Spark Streaming) and, at the execution level, those operations are applied in a distributed fashion, using the native partitioning of the data.
(I'm not sure about the distinction the question makes between "across partitions" and "within one partition". The window is preserved per partition, and the operation(s) are then applied according to their own semantics: for example, a map operation is applied per partition, while a count operation is first applied to each partition and then consolidated into one result.)
Regarding the pseudo code:
val dstream = createDirectStream(..., Seconds(30))
dstream.window(Seconds(600)) // this does nothing as the new dstream is not referenced any further
val windowDstream = dstream.window(timePeriod) // this creates a new Windowed DStream based on the base DStream
dstream.saveAsTextFiles() // this writes using the original streaming interval (30 seconds). It will write 1 logical file in the distributed file system with 3 partitions
windowDstream.saveAsTextFiles() // this writes using the windowed interval (600 seconds). It will write 1 logical file in the distributed file system with 3 partitions.
Given this code (note naming changes!):
val dstream = createDirectStream(...)
dstream.action1()
val windowDStream = dstream.window(...)
windowDStream.action2()
for action2, would the same set of executors be used (otherwise, the data has to be read from Kafka again - not good)?
In the case of the direct stream model, the RDDs at each interval do not contain any data, only offsets (offset-start, offset-end). It's only when an action is applied that the data is read from Kafka.
A windowed DStream over a direct stream is, therefore, just a series of offset ranges: window(1-3) = (offset1-start, offset1-end), (offset2-start, offset2-end), (offset3-start, offset3-end). When an action is applied to that window, those offsets are fetched from Kafka and the operation is applied. This is not "bad" as implied in the question: it prevents us from having to store intermediate data for long periods of time and lets us preserve the operation semantics on the data.
So, yes, the data will be read again, and that's a good thing.
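To make the "just a series of offsets" point concrete, here is a hypothetical sketch using the OffsetRange type from the spark-streaming-kafka integration; the topic name, partition, and offset numbers are made up:

import org.apache.spark.streaming.kafka010.OffsetRange;

// One offset range per (topic, partition, batch interval); no records are held in memory.
OffsetRange[] windowOverThreeIntervals = new OffsetRange[] {
    OffsetRange.create("topic", 0, 100, 110), // interval 1
    OffsetRange.create("topic", 0, 110, 120), // interval 2
    OffsetRange.create("topic", 0, 120, 130)  // interval 3
};
// Only when an action runs on the windowed DStream are these ranges
// fetched from Kafka and the operation applied to the records.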

Resources