Why is the number of Spark Streaming tasks different from the number of Kafka partitions? - spark-streaming

There are multiple topics in my Kafka cluster, but each of them has only one partition. Spark Streaming consumes this data through the direct stream approach. In theory, shouldn't the number of tasks be the same as the number of Kafka partitions? Why do I see so many tasks in the Spark UI?
[Screenshot of the Spark UI showing the tasks]
val data = KafkaUtils.createDirectStream(
  ssc,
  PreferConsistent,
  SubscribePattern[String, String](data_pattern, kafkaParams))
val windows = 300
val newData = data
  .map(_.value())
  .filter(_.contains("id"))
  .window(Seconds(windows), Seconds(windows))
handleData(newData)
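A minimal sketch of one way to check how many Kafka partitions each batch actually covers: inspect the offset ranges on the direct stream before any transformation (this uses the spark-streaming-kafka-0-10 HasOffsetRanges API; the logging itself is just illustrative):

import org.apache.spark.streaming.kafka010.{HasOffsetRanges, OffsetRange}

// Only the RDDs produced directly by the direct stream carry offset ranges,
// so this must run on `data` itself, before map/window.
data.foreachRDD { rdd =>
  val ranges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach(r =>
    println(s"${r.topic}-${r.partition}: ${r.fromOffset} -> ${r.untilOffset}"))
}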

Related

KafkaStreams : Number of tasks and regex for source topics

Let's say we have a Kafka Streams application that reads data from 2 source topics, customerA.orders and customerB.orders. Each topic has 3 partitions.
StreamsBuilder builder = new StreamsBuilder();
KStream stream1 = builder.stream("customerA.orders");
KStream stream2 = builder.stream("customerB.orders");
//Business logic which has stateless transformations.
When I run this application, 6 tasks are created, which is expected (since we have 3 partitions for each topic): current active tasks: [0_0, 0_1, 1_0, 0_2, 1_1, 1_2]
Since both topic names end with ".orders", I can use a regex to read data from the source topics as shown below:
StreamsBuilder builder = new StreamsBuilder();
KStream stream1 = builder.stream(Pattern.compile(".*orders"));
But when I run this application using the regex, only 3 tasks are created instead of 6, even though we have 2 topics with 3 partitions each: current active tasks: [0_0, 0_1, 0_2]
The streams application is still getting messages from both topics.
Why is the number of tasks reduced when we use a regex for the source topics?
In the first version, if you don't apply any operation such as a join, or share the same state store between the two topics (more precisely, between the two Streams DSL pipelines built from the two KStreams), Kafka Streams creates two sub-topologies, so you get a separate task for each topic's partition and the two sub-topologies are processed in parallel.
When your application subscribes to multiple topics through a single KStream, Kafka Streams assigns the partitions of the input topics that have the same partition number to the same task, treating them as co-partitioned (so partition 0 of topic 1 and partition 0 of topic 2 are consumed by the same task), and a given task only processes one record from one of its subscribed partitions at a time.
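A minimal Scala sketch, assuming only the kafka-streams dependency, that illustrates the point: printing the topology description shows how many sub-topologies are built, and with a single pattern subscription there is just one, which is why only tasks 0_x appear.

import java.util.regex.Pattern
import org.apache.kafka.streams.StreamsBuilder

val builder = new StreamsBuilder()
// one KStream subscribed via a pattern -> one sub-topology
builder.stream[String, String](Pattern.compile(".*orders"))
// Topology#describe() lists the sub-topologies and their processors
println(builder.build().describe())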

Is scalability applicable with Kafka stream if each topic has single partition

My understanding, as per the Kafka Streams documentation, is:
The maximum number of possible parallel tasks is equal to the maximum number of partitions of a topic among all topics in the cluster.
I have around 60 topics in the Kafka cluster. Each topic has a single partition only.
Is it possible to achieve scalability/parallelism with Kafka Streams for my Kafka cluster?
Do you want to do the same computation over all topics? If so, I would recommend introducing an extra topic with many partitions that you use to scale out:
// using new 1.0 API
StreamsBuilder builder = new StreamsBuilder();
KStream parallelizedStream = builder
    .stream(/* subscribe to all topics at once */)
    .through("topic-with-many-partitions");
// apply computation
parallelizedStream...
Note: You need to create the topic "topic-with-many-partitions" manually before starting your Streams application (a sketch of doing this follows below).
Pro Tip:
The topic "topic-with-many-partitions" can have a very short retention time, as it is only used for scaling and does not need to hold data long term.
Update
If you have 10 topics T1 to T10 with a single partition each, the program from above will execute as follows (with TN being the dummy topic with 10 partitions):
T1-0  --+                +--> TN-0 --> task 1_0
...     +--> task 0_0 --+--> ...
T10-0 --+                +--> TN-9 --> task 1_9
The first part of the program only reads all 10 input topics and writes them back into the 10 partitions of TN. Afterwards, you can get up to 10 parallel tasks, each processing one TN partition. If you start 10 KafkaStreams instances, only one of them will execute task 0_0, while each instance will also have one task 1_x running.

How to send messages in order from Apache Spark to Kafka topic

I have a use case where event information about sensors is inserted continuously into MySQL. We need to send this information, with some processing, to a Kafka topic every 1 or 2 minutes.
I am using Spark to send this information to the Kafka topic and to maintain CDC in a Phoenix table. I am using a cron job to run the Spark job every minute.
The issue I am currently facing is message ordering: I need to send these messages in ascending timestamp order to the end system's Kafka topic (which has 1 partition). But most of the message ordering is lost because more than one Spark DataFrame partition sends data to the Kafka topic concurrently.
Currently, as a workaround, I am repartitioning my DataFrame to 1 partition in order to maintain the message ordering, but this is not a long-term solution as I lose Spark's distributed computing.
If you have any better solution design around this, please suggest.
I am able to achieve message ordering by ascending timestamp to some extent by repartitioning my data by key and applying sorting within each partition.
val pairJdbcDF = jdbcTable
  .map(row => ((row.getInt(0), row.getString(4)),
    s"${row.getInt(0)},${row.getString(1)},${row.getLong(2)},${row. /*getDecimal*/ getString(3)},${row.getString(4)}"))
  .toDF("Asset", "Message")

val repartitionedDF = pairJdbcDF
  .repartition(getPartitionCount, $"Asset")
  .select($"Message")
  .select(
    expr("(split(Message, ','))[0]").cast("Int").as("Col1"),
    expr("(split(Message, ','))[1]").cast("String").as("TS"),
    expr("(split(Message, ','))[2]").cast("Long").as("Col3"),
    expr("(split(Message, ','))[3]").cast("String").as("Col4"),
    expr("(split(Message, ','))[4]").cast("String").as("Value"))
  .sortWithinPartitions($"TS", $"Value")
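A minimal follow-up sketch of one way to hand the sorted partitions to Kafka with the asset id as the message key, assuming the spark-sql-kafka-0-10 batch sink is on the classpath; the broker and topic names are placeholders:

repartitionedDF
  .selectExpr(
    "CAST(Col1 AS STRING) AS key",
    "concat_ws(',', CAST(Col1 AS STRING), TS, CAST(Col3 AS STRING), Col4, Value) AS value")
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")  // placeholder
  .option("topic", "sensor-events")                   // placeholder
  .save()
// Note: end-to-end ordering also depends on producer behaviour; with retries
// enabled, kafka.max.in.flight.requests.per.connection=1 keeps a partition's
// records in order.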

Spark RDD union OOM error when using single machine

I have the following code, which gets the distinct phone numbers and creates a union of all the calls made.
//Get all the calls for the last 24 hours for each MSISDN in the hour
val sCallsPlaced = grouped24HourCallsPlaS.join(distinctMSISDNs)
val oCallsPlaced = grouped24HourCallsPlaO.join(distinctMSISDNs)
val sCallsReceived = grouped24HourCallsRecS.join(distinctMSISDNs)
val oCallsReceived = grouped24HourCallsRecO.join(distinctMSISDNs)

val callsToProcess = sCallsPlaced
  .union(oCallsPlaced)
  .union(sCallsReceived)
  .union(oCallsReceived)
The spark-defaults.conf file has the following:
spark.driver.memory=16g
spark.driver.cores=1
spark.driver.maxResultSize=2g
spark.executor.memory=24g
spark.executor.cores=10
spark.default.parallelism=256
The question is: will Spark be able to process 50 GB of data on a 256 GB machine that is also running the Hadoop services (NameNode, DataNode, Secondary NameNode), YARN, and HBase?
The HBase processes (HMaster, HQuorumPeer, and HRegionServers) take up around 20 GB each.
Also, is there a faster way than using union in Spark?
How many partitions are there for each RDD? How does the serde look for the records?
As for the relational algebra, maybe perform the unions first and then perform the join, as sketched below.
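A minimal sketch of that suggestion, assuming all four call RDDs are pair RDDs keyed by MSISDN as in the question (the value types here are placeholders):

import org.apache.spark.rdd.RDD

// Union the four call RDDs first, then join once against the distinct MSISDNs,
// instead of joining each RDD separately and unioning the four join results.
def unionThenJoin(
    placedS: RDD[(String, String)],
    placedO: RDD[(String, String)],
    receivedS: RDD[(String, String)],
    receivedO: RDD[(String, String)],
    distinctMSISDNs: RDD[(String, Int)]): RDD[(String, (String, Int))] = {
  val allCalls = placedS.union(placedO).union(receivedS).union(receivedO)
  allCalls.join(distinctMSISDNs)
}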

Persisting Spark Streaming output

I'm collecting data from a messaging app; I'm currently using Flume, and it sends approximately 50 million records per day.
I wish to use Kafka, consume from Kafka using Spark Streaming, and persist it to Hadoop and query it with Impala.
I'm having issues with each approach I've tried.
Approach 1 - Save RDDs as Parquet and point an external Hive Parquet table at the Parquet directory
// scala
val ssc = new StreamingContext(sparkConf, Seconds(bucketsize.toInt))
val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
lines.foreachRDD(rdd => {
  // 1 - Create a SchemaRDD object from the rdd and specify the schema
  val SchemaRDD1 = sqlContext.jsonRDD(rdd, schema)
  // 2 - Register it as a Spark SQL table
  SchemaRDD1.registerTempTable("sparktable")
  // 3 - Query sparktable to produce another SchemaRDD 'finalParquet' and persist it as Parquet files
  val finalParquet = sqlContext.sql(sql)
  finalParquet.saveAsParquetFile(dir)
})
The problem is that finalParquet.saveAsParquetFile outputs a huge number of files: the DStream received from Kafka produces over 200 files for a 1-minute batch size.
The reason it outputs many files is that the computation is distributed, as explained in another post: how to make saveAsTextFile NOT split output into multiple file?
However, the proposed solutions don't seem optimal for me; for example, as one user states, having a single output file is only a good idea if you have very little data.
Approach 2 - Use HiveContext and insert the RDD data directly into a Hive table
# python
sqlContext = HiveContext(sc)
ssc = StreamingContext(sc, int(batch_interval))
kvs = KafkaUtils.createStream(ssc, zkQuorum, group, {topics: 1})
lines = kvs.map(lambda x: x[1]).persist(StorageLevel.MEMORY_AND_DISK_SER)

def sendRecord(rdd):
    sql = "INSERT INTO TABLE table select * from beacon_sparktable"
    # 1 - Apply the schema to the RDD, creating a DataFrame 'beaconDF'
    beaconDF = sqlContext.jsonRDD(rdd, schema)
    # 2 - Register the DataFrame as a Spark SQL table
    beaconDF.registerTempTable("beacon_sparktable")
    # 3 - Insert into Hive directly from a query on the Spark SQL table
    sqlContext.sql(sql)

lines.foreachRDD(sendRecord)
This works fine; it inserts directly into a Parquet table, but there are scheduling delays for the batches because the processing time exceeds the batch interval.
The consumer can't keep up with what's being produced, and the batches to process begin to queue up.
It seems that writing to Hive is slow. I've tried adjusting the batch interval size and running more consumer instances.
In summary
What is the best way to persist big data from Spark Streaming, given the issues with multiple files and the potential latency of writing to Hive?
What are other people doing?
A similar question has been asked here, but he has an issue with directories as opposed to too many files:
How to make Spark Streaming write its output so that Impala can read it?
Many Thanks for any help
In solution #2, the number of files created can be controlled via the number of partitions of each RDD.
See this example:
// create a Hive table (assume it's already existing)
sqlContext.sql("CREATE TABLE test (id int, txt string) STORED AS PARQUET")
// create an RDD with 2 records and only 1 partition
val rdd = sc.parallelize(List( List(1, "hello"), List(2, "world") ), 1)
// create a DataFrame from the RDD
val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),
  StructField("txt", StringType, nullable = false)
))
val df = sqlContext.createDataFrame(rdd.map( Row(_:_*) ), schema)
// this creates a single file, because the RDD has 1 partition
df.write.mode("append").saveAsTable("test")
Now, I guess you can play with the frequency at which you pull data from Kafka and with the number of partitions of each RDD (by default, the number of partitions of your Kafka topic, which you can reduce by repartitioning).
I'm using Spark 1.5 from CDH 5.5.1, and I get the same result using either df.write.mode("append").saveAsTable("test") or your SQL string.
I think the small-file problem can be mitigated somewhat. You may be getting a large number of files based on the number of Kafka partitions. In my case, I have a 12-partition Kafka topic and I write using df.write.mode("append").parquet("/location/on/hdfs").
Now, depending on your requirements, you can add coalesce(1) (or a slightly larger number of partitions) before the write to reduce the number of files, as sketched below. Another option is to increase the micro-batch duration: for example, if you can accept a 5-minute delay in writing the data, you can use a micro batch of 300 seconds.
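A minimal sketch of the coalesce variant; df stands in for the DataFrame being written, and the path is the one mentioned above:

// collapse to a single partition so each batch writes a single Parquet file
df.coalesce(1)
  .write
  .mode("append")
  .parquet("/location/on/hdfs")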
For the second issue, the batches queue up because you don't have back pressure enabled. First, verify the maximum you can process in a single batch. Once you have that number, set spark.streaming.kafka.maxRatePerPartition and spark.streaming.backpressure.enabled=true to limit the number of records per micro batch (see the sketch below). If you still cannot meet the demand, the only options are to increase the partitions on the topic or to increase the resources of the Spark application.
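A minimal sketch of those settings on the SparkConf; the app name and rate value are illustrative assumptions, and the rate should be tuned to what a single batch can actually handle:

import org.apache.spark.SparkConf

val sparkConf = new SparkConf()
  .setAppName("streaming-to-hive")  // placeholder app name
  // let Spark adapt the ingestion rate to the observed processing delay
  .set("spark.streaming.backpressure.enabled", "true")
  // upper bound: records per second per Kafka partition
  .set("spark.streaming.kafka.maxRatePerPartition", "10000")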
