What is the "task" in Storm parallelism?

I'm trying to learn Twitter Storm by following the great article "Understanding the parallelism of a Storm topology".
However, I'm a bit confused by the concept of a "task". Is a task a running instance of a component (spout or bolt)? Does an executor having multiple tasks mean that the same component is executed multiple times by the executor? Am I correct?
Moreover, in a general parallelism sense, Storm will spawn a dedicated thread (executor) for a spout or bolt, but what does an executor (thread) having multiple tasks contribute to the parallelism? I think that, since a thread executes sequentially, having multiple tasks in a thread only makes the thread a kind of "cached" resource that avoids spawning a new thread for the next task run. Am I correct?
I may clear up this confusion myself after taking more time to investigate, but you know, we both love Stack Overflow ;-)
Thanks in advance.

Disclaimer: I wrote the article you referenced in your question above.
However, I'm a bit confused by the concept of a "task". Is a task a running instance of a component (spout or bolt)? Does an executor having multiple tasks mean that the same component is executed multiple times by the executor? Am I correct?
Yes, and yes.
Moreover, in a general parallelism sense, Storm will spawn a dedicated thread (executor) for a spout or bolt, but what does an executor (thread) having multiple tasks contribute to the parallelism?
Running more than one task per executor does not increase the level of parallelism -- an executor always has one thread that it uses for all of its tasks, which means that tasks run serially on an executor.
As I wrote in the article, please note that:
The number of executor threads can be changed after the topology has been started (see storm rebalance command).
The number of tasks of a topology is static.
And by definition there is the invariant of #executors <= #tasks.
So one reason for having 2+ tasks per executor thread is to give you the flexibility to expand/scale up the topology through the storm rebalance command in the future without taking the topology offline. For instance, imagine you start out with a Storm cluster of 15 machines but already know that another 10 boxes will be added next week. Here you could opt to run the topology at the anticipated parallelism level of 25 machines already on the 15 initial boxes (which will of course run slower than on 25 boxes). Once the additional 10 boxes are integrated you can then storm rebalance the topology to make full use of all 25 boxes without any downtime.
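For illustration, here is a minimal sketch of what that looks like in code; the topology name, component ids, numbers, and the MySpout/MyBolt classes are all hypothetical. The idea is simply to submit with more tasks than executors, then scale up later with storm rebalance:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class RebalanceHeadroomExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Start with 5 executors per component, but declare 25 tasks so the
        // topology can later grow to up to 25 executors per component.
        builder.setSpout("words", new MySpout(), 5).setNumTasks(25);      // MySpout is hypothetical
        builder.setBolt("counter", new MyBolt(), 5).setNumTasks(25)       // MyBolt is hypothetical
               .shuffleGrouping("words");

        Config conf = new Config();
        conf.setNumWorkers(5);

        StormSubmitter.submitTopology("word-count", conf, builder.createTopology());

        // Later, once the extra boxes are online, scale up without downtime:
        //   storm rebalance word-count -n 25 -e words=25 -e counter=25
        // (-n sets the new number of workers, -e the new executor count per component)
    }
}
```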
Another reason to run 2+ tasks per executor is for (primarily functional) testing. For instance, if your dev machine or CI server is only powerful enough to run, say, 2 executors alongside all the other stuff running on the machine, you can still run 30 tasks (here: 15 per executor) to see whether code such as your custom Storm grouping is working as expected.
In practice, we normally run 1 task per executor.
PS: Note that Storm will actually spawn a few more threads behind the scenes. For instance, each executor has its own "send thread" that is responsible for handling outgoing tuples. There are also "system-level" background threads for e.g. acking tuples that run alongside "your" threads. IIRC the Storm UI counts those acking threads in addition to "your" threads.

Related

Apache Storm assign tasks to the same executor thread

I have been exploring Apache Storm for one of my use cases. The spout reads data from my own Kafka implementation and passes it to the bolts.
My topology works faster with a single consumer group. If I increase the number of consumer groups inside the topology, I see processing slowness even though I have a spout and bolts dedicated to each partition of all the consumer groups.
When I debugged, I found that more context switches were happening during my processing, so I logged the name of the thread processing the bolts. I could see that sometimes a different executor thread was processing the same bolt id.
I suspect that because of this, context switches happen and, due to CPU thread cache misses, latency may increase.
Is my assumption correct? If this is the behaviour, is there any option to keep a task on the same executor thread instead of passing it to another executor thread?
After further analysis and debugging, I found that there was CPU steal in the VM in which the Storm processes were running. So I wondered whether the number of cores could be the problem.
I then ran the same process on a machine with a higher number of cores; the CPU steal percentage was reduced and I no longer saw any slowness in my processing. So it seems all these problems were down to the machine configuration.

Apache storm: why and how to choose number of tasks per executor?

According to the official documentation:
How many instances to create for a spout/bolt. A task runs on a thread with zero or more other tasks for the same spout/bolt. The number of tasks for a spout/bolt is always the same throughout the lifetime of a topology, but the number of executors (threads) for a spout/bolt can change over time. This allows a topology to scale to more or less resources without redeploying the topology or violating the constraints of Storm (such as a fields grouping guaranteeing that the same value goes to the same task)
My questions are:
Under what circumstances would I choose to run multiple tasks in one executor?
If I do use multiple tasks in one executor, what might be reasons that I would choose different numbers of tasks per executor for my spout and my bolt (such as 2 tasks per bolt executor but only 1 task per spout executor)?
I thought https://stackoverflow.com/a/47714449/8845188 was a fine answer, but I'll try to reword it as examples:
The number of tasks for a component (e.g. spout or bolt) is set in stone when you submit the topology, while the number of executors can be changed without redeploying the topology. The number of executors is always less than or equal to the number of tasks for a component.
Question 1
You wouldn't normally have a reason to choose running e.g. 2 tasks in 1 executor, but if you currently have a low load but expect a high load later, you may choose to submit the topology with a high number of tasks but a low number of executors. You could of course just submit the topology with as many executors as you expect to need, but using many threads when you only need a few is inefficient due to context switching and/or potential resource contention.
For example, let's say you submit your topology so the spout has 4 tasks and 4 executors (one per executor). When your load increases, you can't scale further because 4 is the maximum number of executors you can have. You now have to redeploy the topology in order to scale with the load.
Let's say instead you submit your topology so the spout has 32 tasks and 4 executors (8 per). When the load increases, you can increase the number of executors to 32, even though you started out with only 4. You can do this scaling up without redeploying the topology.
Question 2
Let's say your topology has a spout A, and a bolt B. Let's say bolt B does some heavyweight work (e.g. can do 10 tuples per executor per second), while the spout is lightweight (e.g. can do 1000 tuples per executor per second). Let's say your load is initially 20 messages per second into the topology, but you expect that to grow.
In this case it makes sense that you might configure your spout with 1 executor and 1 task, since it's likely to be idle most of the time. At the same time you want to configure your bolt with a high number of tasks so you can scale the number of executors for it, and at least 2-3 executors to start.
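To make that concrete, a minimal sketch of the topology wiring might look like the following; SpoutA and BoltB are hypothetical classes, and the numbers are only illustrative:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class AsymmetricTasksExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Lightweight spout: 1 executor (and implicitly 1 task) is plenty for ~20 msg/s.
        builder.setSpout("spout-a", new SpoutA(), 1);                 // SpoutA is hypothetical

        // Heavyweight bolt: start with 3 executors but declare 30 tasks,
        // leaving headroom to rebalance up to 30 executors as the load grows.
        builder.setBolt("bolt-b", new BoltB(), 3).setNumTasks(30)     // BoltB is hypothetical
               .shuffleGrouping("spout-a");

        Config conf = new Config();
        conf.setNumWorkers(3);
        StormSubmitter.submitTopology("asymmetric-example", conf, builder.createTopology());
    }
}
```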
Config#TOPOLOGY_TASKS -> How many tasks to create per component.
A task performs the actual data processing and is run within its parent executor’s thread of execution. Each spout or bolt that you implement in your code executes as many tasks across the cluster.
The number of tasks for a component is always the same throughout the lifetime of a topology, but the number of executors (threads) for a component can change over time. This means that the following condition holds true: #threads <= #tasks.
By default, the number of tasks is set to be the same as the number of executors, i.e. Storm will run one task per thread (which is usually what you want anyways).
Also be aware that:
The number of executor threads can be changed after the topology has been started.
The number of tasks of a topology is static.
There is another case where having more tasks than executors makes sense.
Let's suppose you have 2 tasks of the same bolt running on a single executor (thread), and that you are calling a relatively long-running (maybe 1 second) database subroutine whose result is needed before proceeding further.
Case 1 - Your database call runs on the executor thread; the thread pauses for a while and you gain nothing by running 2 tasks.
Case 2 - You refactor your database call to run on a newly spawned thread. In this case, your main executor thread does not hang; it can start processing the second bolt task while the newly spawned thread fetches data from the database.
Unless you introduce your own parallelism within the component, I see no performance gain and no reason to run multiple tasks, apart from the maintenance/scaling reasons mentioned in other answers.
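If you do want to introduce such component-internal parallelism, a rough sketch could look like the following. Treat it as an assumption-laden illustration rather than a recommended pattern: AsyncLookupBolt and slowDatabaseLookup are made-up names, the prepare signature follows the Storm 2.x style, and OutputCollector is not guaranteed to be thread-safe, so emitting/acking from a helper thread (here guarded by a synchronized block) is a simplification that may need to be replaced by handing results back to the executor thread on your Storm version.

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class AsyncLookupBolt extends BaseRichBolt {
    private transient OutputCollector collector;
    private transient ExecutorService pool;

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // Component-internal parallelism: these threads belong to the bolt, not to Storm.
        this.pool = Executors.newFixedThreadPool(4);
    }

    @Override
    public void execute(Tuple input) {
        final String key = input.getStringByField("key");
        // Offload the slow (~1 second) database subroutine so the executor
        // thread returns immediately and can pick up the next tuple.
        pool.submit(() -> {
            String value = slowDatabaseLookup(key);
            // OutputCollector is not guaranteed to be thread-safe; synchronizing
            // here is a simplification -- depending on your Storm version you may
            // need to hand results back to the executor thread instead.
            synchronized (collector) {
                collector.emit(input, new Values(key, value));
                collector.ack(input);
            }
        });
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("key", "value"));
    }

    @Override
    public void cleanup() {
        pool.shutdown();
    }

    private String slowDatabaseLookup(String key) {
        // Placeholder for the real long-running database call.
        return "value-for-" + key;
    }
}
```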

Apache storm - Map topology with storm cluster

I have read many sites related to Storm.
But I still cannot map a topology onto a Storm cluster perfectly.
Please help me understand this.
In a Storm cluster there are terms like:
Supervisor
Worker node
Worker process
Workers
Slots
Executor
Tasks
In a topology, there are:
Spout
Bolt
It is also possible to configure:
numWorkers
parallelism
So could someone please relate all these things for me?
I want to know whether each spout/bolt acts as an executor or as a task.
If a parallelism hint is given, which entity's count increases?
If numWorkers is set, which one's count is that?
I want to map all of these onto the Storm cluster.
I have already worked on a project, so I know the topology.
Physical Cluster Setup:
The term node usually refers to a physical machine (or a VM) in your cluster. On each node a supervisor runs in its own JVM. A supervisor has worker slots. This is a logical configuration and tells how many workers can be started by the supervisor. Each worker (if started) runs in its own JVM (thus, some people call it a worker process). In summary: on a node there is one supervisor JVM and up to number-of-worker-slots worker JVMs. Therefore, the node a worker JVM is running on can be called a worker node. While the supervisor is running all the time, workers are started if needed, i.e., if topologies are deployed, and stopped when a topology is killed. Within a worker, executors run as threads (i.e., each executor maps to its own thread).
Logical Topology Setup:
Topologies are built out of Spouts (also called sources, i.e., operators with no incoming data stream) and Bolts (regular operators with at least one incoming data stream and any number of outgoing data streams -- if there is no outgoing data stream, a Bolt is also called a sink). For each Spout/Bolt you can configure two parameters:
the number of tasks
the dop (degree of parallelism, called parallelism_hint), i.e., the number of executors you want to have for a Spout/Bolt
Tasks are logical units of work (i.e., something passive). Let's assume you use the fieldsGrouping connection pattern. The data stream is then partitioned into number-of-tasks many sub-streams. Tasks are assigned to executors, i.e., each executor processes one or multiple tasks. This implies that you cannot have fewer tasks than executors (i.e., parallelism); otherwise, there would be a thread without any work to do.
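As a toy illustration (this is not Storm's actual internal code), a fields grouping effectively hashes the grouping field onto the fixed set of tasks, which is why the task count must stay constant for the lifetime of the topology:

```java
public class FieldsGroupingSketch {
    public static void main(String[] args) {
        int numTasks = 25;                 // fixed when the topology is submitted
        String fieldValue = "user-42";     // value of the field used in fieldsGrouping

        // Same field value -> same task index, no matter how many executors
        // those tasks are currently spread across.
        int targetTask = Math.floorMod(fieldValue.hashCode(), numTasks);
        System.out.println("tuples with key '" + fieldValue + "' always go to task " + targetTask);
    }
}
```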
See the Storm documentation for further details (https://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html). Furthermore, there are many other questions on SO about tasks/executors in Storm.
Last but not least, you can configure the numberOfWorkers for a topology. This parameter indicates how many workers should be started to run the topology. The overall number of executors for a topology is the sum of dops over all Spouts/Bolts. All executors will be evenly distributed over all available worker JVMs.
Furthermore, a single worker can only run executors of a single topology. This is done for fault-tolerance reasons, i.e., topologies are isolated from each other. At the same time, a worker itself can run any number of executors.
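Putting these knobs together, here is a hedged sketch (SourceSpout, SplitterBolt and CounterBolt are hypothetical classes, and the numbers are illustrative) of how the per-component parallelism hints add up to the total executor count that gets spread over the configured workers:

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class WorkerDistributionExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("source", new SourceSpout(), 2);            // 2 executors
        builder.setBolt("splitter", new SplitterBolt(), 4)           // 4 executors
               .shuffleGrouping("source");
        builder.setBolt("counter", new CounterBolt(), 6)             // 6 executors
               .fieldsGrouping("splitter", new Fields("word"));

        // Total executors = 2 + 4 + 6 = 12; they are spread evenly over the workers below.
        Config conf = new Config();
        conf.setNumWorkers(3);   // 3 worker JVMs -> roughly 4 executors per worker

        StormSubmitter.submitTopology("distribution-example", conf, builder.createTopology());
    }
}
```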

Issues with storm execution in single node

We have Storm configured on a single-node development server with most of the configuration set to defaults (not local mode).
Storm Nimbus, the supervisor and the workers all run on that single node, and the UI is also configured.
AFAIK parallelism and configuration differ from topology to topology.
I think finding the right parallelism and configuration is purely a matter of trial and error.
So, to find the best parallelism we have started testing our Storm topology with various configurations on a single node.
Strangely the results are unexpected:
Our topology processes stream of xml files from HDFS directory.
Having a single spout (Parallelism always 1) and four bolts.
Single worker
Whatever the topology parallelism, we get almost the same performance results (the rate of data processed).
Multiple workers
Whatever the topology parallelism, we get similar performance to the single worker for a while (in most cases about 10 minutes).
But after that the complete topology gets restarted without any error traces.
We observed that data processed in 20 minutes with a single worker took 90 minutes with 5 workers at the same parallelism.
Also, the topology restarted 7 times with 5 workers.
And CPU usage is relatively high.
(Someone else also had faced this topology restart issue http://search-hadoop.com/m/LrAq5ZWeaU but no answer)
After testing many configurations we found that a single worker with less parallelism (each bolt with 2 or 3 instances) works better than high parallelism or more workers.
Ideally the performance of a Storm topology should be better with more workers/parallelism.
Apparently this rule does not hold here.
Why can't we set more than a single worker on a single node?
What is the maximum number of workers that can be run on a single node?
What Storm configuration changes are needed to scale the performance? (I have tried nimbus.childopts and worker.childopts.)
If your CPU usage is high on the one node then you're not going to get any better performance as you increase parallelism. If you do increase parallelism, there will just be greater contention for a constant number of CPU cycles. Not knowing any more about your specific topology, I can only suggest that you look for ways to reduce the CPU usage across your bolts and spouts. Only then would it make sense to add more bolt and spout instances.

What do multiple tasks inside an executor in Storm signify?

What is the benefit of using multiple tasks in an executor in a Storm topology? I mean, I couldn't understand how, apart from doing multiple things, we can achieve any speed or parallelism.
Michael G. Noll wrote a great tutorial that should help you understand Storm parallelism.
Usually a topology runs one task per executor. However, since you cannot increase the number of tasks while a topology is running, you can declare multiple tasks per executor in order to be able to scale up parallelism over time.
There is no specific use case to have multiple tasks per executor other than the possibility to increase the topology parallelism.
