Apache Flink, more threads than Kafka partitions

The data flow is simple:
kafka -> some logic -> kafka
The 'some logic' step is the bottleneck, so I want to use more threads/tasks to increase throughput instead of increasing the number of Kafka partitions (currently 3). Order between the input and output topics doesn't matter here.
This is easy to do with Apache Storm: I can just increase the parallelism of the bolt that runs the 'some logic' step. How can I do it with Flink? The more general question is whether there is a simple way to use a different parallelism for different stages in Flink.

This is quite simple in Flink. You can specify the parallelism of each operator using the setParallelism() method:
DataStream<String> rawEvents = env
    .addSource(new FlinkKafkaConsumer010<>("topic", new SimpleStringSchema(), props));

DataStream<String> mappedEvents = rawEvents
    .flatMap(new Tokenizer())
    .setParallelism(64); // run the 'some logic' flatMap with 64 parallel tasks
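Note that the source parallelism is effectively capped by the number of Kafka partitions (source subtasks beyond the partition count simply sit idle), so only the processing stage benefits from the higher setting. To complete the kafka -> logic -> kafka flow from the question, the sink can get its own parallelism as well; a minimal sketch, assuming the 0.10 producer connector and an illustrative output topic name:

// Hypothetical continuation of the snippet above: write the results back to
// Kafka with a lower parallelism than the flatMap stage.
mappedEvents
    .addSink(new FlinkKafkaProducer010<>("output-topic", new SimpleStringSchema(), props))
    .setParallelism(3); // e.g. match the output topic's partition count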

Related

Input Data rate in Apache Storm

I am reading text data from a file and processing it to produce results using Apache Storm. I want to experiment with different input data rates, so I want to know how to change the input data rate in Apache Storm in this setting. Also, is the input data rate defined as:
number of tuples emitted by the spout / time
By default, Storm will pull tuples out of the spout as fast as possible. You can interact with this via a few settings:
topology.max.spout.pending defines how many tuples can be emitted into the topology before Storm will throttle the spout and wait for some of the tuples to be acked. By default this is uncapped.
topology.sleep.spout.wait.strategy.time.ms defines how many milliseconds Storm will pause between calls to nextTuple on the spout, if a call to nextTuple produces no output. This is 1ms by default.
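A minimal sketch of setting both knobs on the topology configuration (values are illustrative; this assumes a Storm release where the classes live under org.apache.storm):

import org.apache.storm.Config;

// Throttle the spout: allow at most 1000 un-acked tuples in flight, and pause
// 10 ms between nextTuple() calls that produce no output (default is 1 ms).
Config conf = new Config();
conf.setMaxSpoutPending(1000);                                   // topology.max.spout.pending
conf.put(Config.TOPOLOGY_SLEEP_SPOUT_WAIT_STRATEGY_TIME_MS, 10); // topology.sleep.spout.wait.strategy.time.ms
// then pass conf to StormSubmitter.submitTopology(...) as usual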

Parallelism in Apache Storm with one worker node

I am trying to parallelize my topology using Apache Storm, but it throws a java.util.ConcurrentModificationException on the worker nodes whenever I increase the number of workers above 1. It works fine with 1 worker and in a local cluster. I want a way to parallelize my topology and measure parameters like throughput, latency, emit rate, etc. using one worker node only.
Based on the stack trace you posted, it looks like Kryo is trying to serialize an ArrayList and hitting a ConcurrentModificationException. I would look for any place you emit an ArrayList and make sure that you don't modify it after you've passed it to OutputCollector.emit.
Likely the reason you're not seeing this issue when you only have one worker is that Storm only serializes emitted objects when they need to be sent to a different worker.
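For illustration, one way to rule this out is to emit a defensive copy and only mutate the original afterwards; buffer and collector below are hypothetical names for a list your bolt accumulates and its OutputCollector:

import java.util.ArrayList;
import java.util.List;
import org.apache.storm.tuple.Values;

// Emit an independent copy so later mutation of `buffer` cannot race with
// Kryo serializing the tuple on the outbound transfer queue.
List<String> snapshot = new ArrayList<>(buffer);
collector.emit(new Values(snapshot));
buffer.clear(); // safe: the emitted list is a separate object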

How to write rows asynchronously in Spark Streaming application to speed up batch execution?

I have a Spark job where I need to write the output of a SQL query every micro-batch. The write is an expensive operation performance-wise and is causing the batch execution time to exceed the batch interval.
I am looking for ways to improve the performance of write.
Is doing the write action in a separate thread asynchronously like shown below a good option?
Would this cause any side effects because Spark itself executes in a distributed manner?
Are there other/better ways of speeding up the write?
import java.util.concurrent.Executors

// Create a fixed thread pool to execute asynchronous write tasks
val executorService = Executors.newFixedThreadPool(2)

dstream.foreachRDD { rdd =>
  import org.apache.spark.sql._
  val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate
  import spark.implicits._

  val records = rdd.toDF("record")
  records.createOrReplaceTempView("records")
  val result = spark.sql("select * from records")

  // Submit an asynchronous task to perform the write
  executorService.submit(new Runnable {
    override def run(): Unit = {
      result.write.parquet(output)
    }
  })
}
1 - Is doing the write action in a separate thread asynchronously like shown below a good option?
No. The key to understanding the issue here is to ask 'who is doing the write'. The write is done by the resources allocated to your job on the executors in the cluster. Placing the write command on an async thread pool is like adding a new office manager to an office with a fixed staff. Will two managers be able to do more work than one alone, given that they have to share the same staff? Well, one reasonable answer is "only if the first manager was not giving them enough work, so there's some free capacity".
Going back to our cluster, we are dealing with a write operation that is heavy on IO. Parallelizing write jobs will lead to contention for IO resources, making each independent job longer. Initially, our job might look better than the 'single manager version', but trouble will eventually hit us.
I've made a chart that attempts to illustrate how that works. Note that the parallel jobs will take longer proportionally to the amount of time that they are concurrent in the timeline.
Once we reach that point where jobs start getting delayed, we have an unstable job that will eventually fail.
2 - Would this cause any side effects because Spark itself executes in a distributed manner?
Some effects I can think of:
Probably higher cluster load and IO contention.
Jobs are queuing on the thread pool's queue instead of on the Spark Streaming queue. We lose the ability to monitor the job through the Spark UI and monitoring API, as the delays are 'hidden' and everything looks fine from the Spark Streaming point of view.
3 - Are there other/better ways of speeding up the write?
(ordered from cheap to expensive)
If you are appending to a Parquet file, create a new file often. Appending gets expensive over time.
Increase your batch interval or use window operations to write larger chunks of Parquet. Parquet likes large files.
Tune the partitioning and distribution of your data => make sure that Spark can do the write in parallel (see the sketch after this list)
Increase cluster resources, add more nodes if necessary
Use faster storage
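As a concrete illustration of the last two points, a minimal Java sketch (the question's code is Scala, but the same calls exist in both APIs; records and outputPath are assumed names for the micro-batch DataFrame and the target path):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

// Repartition before writing so the Parquet write fans out across executors
// instead of funnelling through a handful of tasks; 16 is an illustrative value.
Dataset<Row> batch = records.repartition(16); // roughly match the cores available for the write
batch.write()
     .mode(SaveMode.Append)
     .parquet(outputPath);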
Is doing the write action in a separate thread asynchronously like shown below a good option?
Yes. It's certainly something to consider when optimizing expensive queries and saving their results to external data stores.
Would this cause any side effects because Spark itself executes in a distributed manner?
Don't think so. SparkContext is thread-safe and promotes this kind of query execution.
Are there other/better ways of speeding up the write?
YES! That's the key to understanding when to use the other (above) options. By default, Spark applications run in FIFO scheduling mode.
Quoting Scheduling Within an Application:
By default, Spark’s scheduler runs jobs in FIFO fashion. Each job is divided into “stages” (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc. If the jobs at the head of the queue don’t need to use the whole cluster, later jobs can start to run right away, but if the jobs at the head of the queue are large, then later jobs may be delayed significantly.
Starting in Spark 0.8, it is also possible to configure fair sharing between jobs. Under fair sharing, Spark assigns tasks between jobs in a “round robin” fashion, so that all jobs get a roughly equal share of cluster resources. This means that short jobs submitted while a long job is running can start receiving resources right away and still get good response times, without waiting for the long job to finish. This mode is best for multi-user settings.
That means that to make room for executing multiple writes asynchronously and in parallel you should configure your Spark application to use FAIR scheduling mode (via the spark.scheduler.mode property).
You will have to configure so-called Fair Scheduler Pools to "partition" executor resources (CPU and memory) into pools that you can assign to jobs using spark.scheduler.pool property.
Quoting Fair Scheduler Pools:
Without any intervention, newly submitted jobs go into a default pool, but jobs’ pools can be set by adding the spark.scheduler.pool "local property" to the SparkContext in the thread that’s submitting them.
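A minimal sketch of wiring this up (Java here for brevity; the application and pool names are assumed examples):

import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

// Enable FAIR scheduling for the whole application, then pin the jobs
// submitted from the writer thread to their own pool.
SparkConf conf = new SparkConf()
    .setAppName("async-writes")
    .set("spark.scheduler.mode", "FAIR");

SparkSession spark = SparkSession.builder().config(conf).getOrCreate();

// In the thread that submits the write job:
spark.sparkContext().setLocalProperty("spark.scheduler.pool", "writePool");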

Storm: What happens with multiple workers?

Say I deploy a topology with 2 workers, and the topology has 1 spout and 1 bolt with 2 tasks. My understanding is that one worker will run the spout executor plus one bolt executor, and the other worker will run the remaining bolt executor.
Is my understanding correct?
If so, here is my question. Say the bolt is implemented in Python. Since Storm transfers data to multi-lang bolts via stdout/stdin, if the 2 workers run on different hosts, how can the spout send data to the bolt located on the other host?
A little more clarification on your question: Storm uses various types of queues for data/tuple transfer between the components of a topology.
Example :
1) Intra-worker communication in Storm (inter-thread on the same Storm node): LMAX Disruptor
2) Inter-worker communication (node-to-node across the network): ZeroMQ or Netty
3) Inter-topology communication: nothing built into Storm, you must take care of this yourself with e.g. a messaging system such as Kafka/RabbitMQ, a database, etc.
For further reference:
http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
To give a more detailed answer:
Storm will send the data to both bolt executors. For the bolt executor co-located with the spout, this happens in memory; for the other one, over the network. Afterwards, each bolt instance delivers its input to a locally running Python process. So the stdout/stdin delivery you describe happens locally on each machine: the data is transferred to each bolt before the handoff from Java to Python takes place.
Thus, the stdout/stdin bridge is used within each bolt, not from spout to bolt.
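For context, a multi-lang bolt is declared in Java as a ShellBolt wrapper around the Python script, so each executor spawns its own local Python process; the script name below is the usual storm-starter example and is illustrative here:

import java.util.Map;
import org.apache.storm.task.ShellBolt;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Fields;

// Each executor of this bolt starts its own local splitsentence.py process and
// talks to it over stdin/stdout; any inter-host transfer happens before that.
public static class SplitSentence extends ShellBolt implements IRichBolt {
    public SplitSentence() {
        super("python", "splitsentence.py");
    }
    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}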
I have tested this myself: Storm properly delivers spout-emitted data to bolts on different hosts.

Configuring parallelism in Storm

I am new to Apache Storm and I am trying to figure out how to configure Storm parallelism. There is a great article, "Understanding the Parallelism of a Storm Topology", but it only raises more questions.
When you have a multi-node Storm cluster, each topology is distributed as a whole according to the TOPOLOGY_WORKERS configuration parameter. So if you have 5 workers, then you have 5 copies of the spout (1 per worker), and the same goes for bolts.
How do I deal with situations like these inside a Storm cluster (preferably without creating external services)?
I need exactly one spout used by all instances of the topology, for example if the input data is pushed to the cluster via a network folder that is scanned for new files.
A similar issue arises with particular types of bolts, for example when data is processed by a licensed third-party library that is locked to a specific physical machine.
First, the basics:
Workers - run executors; each worker has its own JVM
Executors - threads that run tasks; Storm distributes the executors across the available workers
Tasks - instances running your spout/bolt code
Second, a correction... having 5 workers does NOT mean you will automatically have 5 copies of your spout. Having 5 workers means you have 5 separate JVMs where storm can assign executors to run (think of this as 5 buckets).
The number of instances of your spout is configured when you first create and submit your topology:
TopologyBuilder builder = new TopologyBuilder();
// spoutParallelism = number of executors; spoutTasks = number of task instances
builder.setSpout("0-spout", new MySpout(), spoutParallelism).setNumTasks(spoutTasks);
Since you want only one spout for the entire cluster, you'd set both spoutParallelism and spoutTasks to 1.
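For completeness, the worker count from the question is set on the topology Config, separately from the per-component parallelism above; a sketch with illustrative names and counts:

import org.apache.storm.Config;
import org.apache.storm.topology.TopologyBuilder;

// Workers are the JVM "buckets"; component parallelism decides how many
// executors Storm spreads across them. Here: 5 workers, 1 spout executor
// (as above), and 8 executors for a hypothetical processing bolt.
Config conf = new Config();
conf.setNumWorkers(5); // TOPOLOGY_WORKERS

builder.setBolt("process-bolt", new MyBolt(), 8) // 8 executors
       .shuffleGrouping("0-spout");

// then: StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());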
