How is the work divided amongst Storm workers?

How does Apache Storm divide tasks amongst its workers? I have read that Storm does this by itself and that it is a function of parallelism, but what I don't know is how to figure out which node does what and how many nodes would do which task, basically so that I can calculate the optimal number of nodes required.
Assuming that the hardware configuration of all nodes is not the same.

By default, Storm uses round-robin scheduling, i.e., it loops over all supervisors with available slots and assigns the parallel instances of spouts/bolts to them. If no more free slots are available, multiple spout/bolt instances are assigned to single workers.
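
To make this concrete, here is a toy sketch of the round-robin idea in Java. It is only an illustration of the behavior described above, not Storm's actual scheduler code, and the slot and component names are made up:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoundRobinSketch {
    public static void main(String[] args) {
        // Hypothetical worker slots (host:port) and spout/bolt instances.
        List<String> slots = List.of("nodeA:6700", "nodeA:6701", "nodeB:6700");
        List<String> executors = List.of("spout-1", "spout-2", "bolt-1", "bolt-2", "bolt-3");

        Map<String, List<String>> assignment = new HashMap<>();
        for (int i = 0; i < executors.size(); i++) {
            // Wrap around when the slots run out, so a single worker
            // slot ends up hosting multiple spout/bolt instances.
            String slot = slots.get(i % slots.size());
            assignment.computeIfAbsent(slot, s -> new ArrayList<>()).add(executors.get(i));
        }
        // Prints (map ordering may vary):
        // {nodeA:6701=[spout-2, bolt-3], nodeB:6700=[bolt-1], nodeA:6700=[spout-1, bolt-2]}
        System.out.println(assignment);
    }
}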

You also need to have a look at the Storm UI. Its metrics (complete latency, capacity, execute latency, process latency, and failed tuples) will give you hints about how many executors and tasks you should allocate for each bolt.

Related

About the number of nodes in Flink

I'm developing a Flink toy application on my local machine before deploying the real one on a real cluster.
Now I have to determine how many nodes the cluster needs.
But I'm still a bit confused about how many nodes I have to consider to execute my application.
For example, if I have the following code (from the docs):
DataStream<String> lines = env.addSource(new FlinkKafkaConsumer<>()...);
DataStream<Event> events = lines.map((line) -> parse(line));
DataStream<Statistics> stats = events
    .keyBy("id")
    .timeWindow(Time.seconds(10))
    .apply(new MyWindowAggregationFunction());
stats.addSink(new RollingSink(path));
Does this mean that operations "on the same line" are executed on the same node? (That sounds a bit strange to me.)
Some things I'd like to confirm:
If the answer to the previous question is yes, and I set the parallelism to 1, can I establish how many nodes I need by counting how many operations I have to perform?
If I set the parallelism to N but have fewer than N nodes available, does Flink automatically scale the processing across the available nodes?
I think my throughput and data load are not relevant; the load is not heavy.
If you haven't already, I recommend reading https://ci.apache.org/projects/flink/flink-docs-release-1.3/concepts/runtime.html, which explains how the Flink runtime is organized.
Each task manager (worker node) has some number of task slots (at least one), and a Flink cluster needs exactly as many task slots as the highest parallelism used in the job. So if the entire job has a parallelism of one, then a single node is sufficient. If the parallelism is N and fewer than N task slots are available, the job can't be executed.
The Flink community is working on dynamic rescaling, but as of version 1.3, it's not yet available.
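
For illustration, here is a minimal sketch of how the job-wide parallelism is set (the parallelism values are made up; setParallelism and the taskmanager.numberOfTaskSlots configuration key are standard Flink API/configuration):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // With a job-wide parallelism of 1, a single task slot, and
        // therefore a single task manager node, is sufficient.
        env.setParallelism(1);

        // With parallelism 4, the cluster must offer at least 4 task
        // slots, e.g. 4 task managers with 1 slot each, or 1 task
        // manager configured with taskmanager.numberOfTaskSlots: 4.
        // env.setParallelism(4);
    }
}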

Apache Storm - mapping a topology onto a Storm cluster

I have read many sites related to Storm.
But I still cannot map a topology onto a Storm cluster perfectly.
Please help me understand this.
In a Storm cluster there are terms like:
Supervisor
Worker node
Worker process
Workers
Slots
Executor
Tasks
In a topology, there are:
Spout
Bolt
It is also possible to configure:
numWorkers
parallelism
Could someone please relate all these things for me? I want to know whether each spout/bolt acts as an executor or as a task.
If a parallelism hint is given, which entity's count increases?
If numWorkers is set, which one's count is that?
In short, how do all these things map onto a Storm cluster?
I have already worked on a project, so I know what a topology is.
Physical Cluster Setup:
The term node usually refers to a physical machine (or a VM) in your cluster. On each node, a supervisor runs in its own JVM. A supervisor has worker slots; this is a logical configuration that determines how many workers the supervisor can start. Each worker (if started) runs in its own JVM (thus, some people call it a worker process). In summary: on a node there is one supervisor JVM and up to number-of-worker-slots worker JVMs. Therefore, the node a worker JVM runs on can be called a worker node. While the supervisor runs all the time, workers are started on demand, i.e., when topologies are deployed, and stopped when a topology is killed. Within a worker, executors run as threads (i.e., each executor maps to its own thread).
Logical Topology Setup:
Topologies are built from Spouts (also called sources, i.e., operators with no incoming data stream) and Bolts (regular operators with at least one incoming data stream and any number of outgoing data streams; if there is no outgoing data stream, a Bolt is also called a sink). For each Spout/Bolt you can configure two parameters:
the number of tasks
the dop (degree of parallelism, called parallelism_hint), i.e., the number of executors you want to have for a Spout/Bolt
Tasks are logical units of work (i.e., something passive). Let's assume you use the fieldsGrouping connection pattern: the data stream is then partitioned into number-of-tasks many sub-streams. Tasks are assigned to executors, i.e., each executor processes one or multiple tasks. This implies that you cannot have fewer tasks than executors (i.e., than the parallelism); otherwise, there would be a thread without any work to do.
See the Storm documentation for further details (https://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html). Furthermore, there are many other questions on SO about tasks/executors in Storm.
Last but not least, you can configure the numberOfWorkers for a topology. This parameter indicates how many workers should be started to run the topology. The overall number of executors for a topology is the sum of the dops over all Spouts/Bolts. All executors will be evenly distributed over all available worker JVMs.
Furthermore, a single worker can only run executors of a single topology. This is done for fault-tolerance reasons, i.e., topologies are isolated from each other. At the same time, a worker itself can run any number of executors.
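
As a sketch, here is a fragment of a topology's setup code that ties these terms together (MySpout and MyBolt are hypothetical placeholders and the numbers are made up; setSpout, setBolt, setNumTasks, and setNumWorkers are the standard TopologyBuilder/Config calls, shown with the org.apache.storm package names of newer releases):

import org.apache.storm.Config;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

TopologyBuilder builder = new TopologyBuilder();

// parallelism_hint = 2: two executor threads for the spout
// (by default, one task per executor).
builder.setSpout("my-spout", new MySpout(), 2);

// 4 executors running 8 tasks for this bolt, i.e., 2 tasks per
// executor; there can never be fewer tasks than executors.
builder.setBolt("my-bolt", new MyBolt(), 4)
       .setNumTasks(8)
       .fieldsGrouping("my-spout", new Fields("id"));

Config conf = new Config();
// 2 worker JVMs for this topology; the 2 + 4 = 6 executors are
// evenly distributed over them (3 executor threads per worker).
conf.setNumWorkers(2);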

Issues with Storm execution on a single node

We have Storm configured on a single-node development server with most of the configuration set to defaults (not local mode).
The Storm Nimbus, supervisor, and workers run on that single node only, and the UI is also configured.
AFAIK, parallelism and configuration differ from topology to topology.
I think finding the right parallelism and configuration is a matter of trial and error only.
So, to find the best parallelism, we started testing our Storm topology with various configurations on a single node.
Strangely, the results are unexpected:
Our topology processes a stream of XML files from an HDFS directory.
It has a single spout (parallelism always 1) and four bolts.
Single worker
Whatever the topology parallelism, we get almost the same performance results (the rate of data processed).
Multiple workers
Whatever the topology parallelism, we get performance similar to the single worker until some point in time (in most cases about 10 minutes).
But after that, the complete topology restarts without any error traces.
We observed that data processed in 20 minutes with a single worker took 90 minutes with 5 workers at the same parallelism.
The topology also restarted 7 times with 5 workers.
And CPU usage is relatively high.
(Someone else has also faced this topology restart issue, http://search-hadoop.com/m/LrAq5ZWeaU, but there is no answer.)
After testing many configurations, we found that a single worker with low parallelism (each bolt with 2 or 3 instances) works better than high parallelism or more workers.
Ideally, the performance of a Storm topology should improve with more workers/parallelism.
Apparently that rule does not hold here.
Why can't we set more than a single worker on a single node?
What is the maximum number of workers that can run on a single node?
Which Storm configuration changes are needed to scale performance? (I have tried nimbus.childopts and worker.childopts.)
If your CPU usage is already high on the one node, then you're not going to get any better performance as you increase parallelism; there will just be greater contention for a constant number of CPU cycles. Not knowing any more about your specific topology, I can only suggest that you look for ways to reduce the CPU usage across your bolts and spouts. Only then would it make sense to add more bolt and spout instances.

What are the reasons to configure more than one worker per cluster node in Apache Storm?

In the following, I refer to this article: Understanding the Parallelism of a Storm Topology by Michael G. Noll
It seems to me that a worker process may host an arbitrary number of executors (threads) to run an arbitrary number of tasks (instances of topology components). Why should I configure more than one worker per cluster node?
The only reason I see is that a worker can only run a subset of at most one topology. Hence, if I want to run multiple topologies on the same cluster, I would need to configure the same number of workers per cluster node as the number of topologies to be run.
(Example: I would want to be flexible in case some cluster nodes fail. If, for example, only one cluster node remains, I need at least as many worker processes on that node as there are topologies running on the cluster in order to keep them all running.)
Is there any other reason? In particular, is there any reason to configure more than one worker per cluster node when running only one topology? (Better fail-safety, etc.)
To balance the cost of one supervisor daemon per node against the risk and impact of a worker crashing. If you have one large, monolithic worker JVM, one crash impacts everything running in that worker, and badly behaving portions of your worker affect more residents. By having more than one worker per node, you make your supervisor more efficient, and you gain something of a bulkhead pattern, avoiding the all-or-nothing approach.
The shared resources I refer to could be yours or Storm's; several pieces of Storm's architecture are shared per JVM and can create contention problems. I am referring specifically to the receive and send threads and the underlying network pieces. Documented here.

How many MapReduce jobs can run simultaneously?

I want to know how many MapReduce jobs can be submitted/run simultaneously in a single-node Hadoop environment. Is there any limit?
From a configuration standpoint, there's no limit I'm aware of. You can set the number of map and reduce slots to whatever you want. Practically, though, each slot has to spin up a JVM capable of running some Hadoop code, which requires some amount of memory, so eventually you would run out of memory on your machine. You might also have to configure job queues cleverly in order to run a ton at the same time.
Now, what is possible is a very different question than what is a good idea...
You can submit as many jobs as you want; they will be queued up, and the scheduler will run them based on FIFO (by default) and the available resources. The number of jobs actually executed by Hadoop will depend on the factors described by John above.
The number of reducer slots is set when the cluster is configured. This limits the number of simultaneous MapReduce jobs based on the number of reducers each job requests. Mappers are generally more limited by the number of DataNodes and the number of processors per node.
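
For example, with the classic (MR1) API each job requests its own number of reducers. A minimal sketch (the job itself is hypothetical; JobConf.setNumReduceTasks and the TaskTracker slot property are the standard MR1 knobs):

import org.apache.hadoop.mapred.JobConf;

public class ReducerSketch {
    public static void main(String[] args) {
        JobConf job = new JobConf();
        // This job asks for 10 reduce tasks. If each TaskTracker offers,
        // say, 4 reduce slots (mapred.tasktracker.reduce.tasks.maximum = 4)
        // on a single node, at most 4 of them run at once and the rest
        // wait, which in turn throttles how many jobs progress in parallel.
        job.setNumReduceTasks(10);
    }
}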
