Configuring parallelism in Storm

I am new to Apache Storm, and I am trying to figure out how to configure Storm parallelism. There is a great article, "Understanding the Parallelism of a Storm Topology", but it only raises more questions.
When you have a multi-node Storm cluster, each topology is distributed as a whole according to the TOPOLOGY_WORKERS configuration parameter. So if you have 5 workers, then you have 5 copies of the spout (one per worker), and the same goes for the bolts.
How do I deal with situations like the following inside a Storm cluster (preferably without creating external services)?
I need exactly one spout used by all instances of the topology, for example when input data is pushed to the cluster via a network folder that is scanned for new files.
A similar issue arises with specific kinds of bolts, for example when data is processed by a licensed third-party library that is locked to a particular physical machine.

First, the basics:
Workers - Run executors; each worker has its own JVM
Executors - Threads that run tasks; Storm distributes executors across the available workers
Tasks - Instances of your spout/bolt code
Second, a correction... having 5 workers does NOT mean you will automatically have 5 copies of your spout. Having 5 workers means you have 5 separate JVMs where storm can assign executors to run (think of this as 5 buckets).
The number of instances of your spout is configured when you first create and submit your topology:
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("0-spout", new MySpout(), spoutParallelism).setNumTasks(spoutTasks);
Since you want only one spout for the entire cluster, you'd set both spoutParallelism and spoutTasks to 1.
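Putting it together, a minimal sketch might look like this (MySpout, MyBolt, the component ids, and the topology name are placeholders, not part of the original answer):
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

// Exactly one spout executor and one task for the whole topology,
// while the bolts can still be parallelized independently.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("0-spout", new MySpout(), 1).setNumTasks(1);
builder.setBolt("1-bolt", new MyBolt(), 4).setNumTasks(4)
       .shuffleGrouping("0-spout");

Config conf = new Config();
conf.setNumWorkers(5); // 5 worker JVMs; the single spout executor lands in one of them
StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());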

Related

Control over scheduling/placement in Apache Storm

I am running a wordcount topology in a Storm cluster composed of 2 nodes. One node is the Master node (with Nimbus, UI and Logviewer), and both of them are Supervisors with 1 Worker each. In other words, my Master node is also a Supervisor, and the second node is only a Supervisor. As I said, there is 1 Worker per Supervisor.
The topology I am using is configured to use these 2 Workers (setNumWorkers(2)). In detail, the topology has 1 Spout with 2 threads, 1 split Bolt and 1 count Bolt. When I deploy the topology with the default scheduler, the first Supervisor has 1 Spout thread and the split Bolt, and the second Supervisor has 1 Spout thread and the count Bolt.
Given this context, how can I control the placement of operators (Spout/Bolt) between these 2 Workers? For research purposes, I need some control over the placement of these operators between nodes. However, the mechanism seems to be transparent within Storm, and such control is not available to the end user.
I hope my question is clear enough. Feel free to ask for additional details. I am aware that I may need to dig into Storm's source code and recompile; that's fine. I am looking for a starting point and advice on how to proceed.
The version of Storm I am using is 2.1.0.
Scheduling is handled by a pluggable scheduler in Storm. See the documentation at http://storm.apache.org/releases/2.1.0/Storm-Scheduler.html.
You may want to look at the DefaultScheduler for reference https://github.com/apache/storm/blob/v2.1.0/storm-server/src/main/java/org/apache/storm/scheduler/DefaultScheduler.java. This is the default scheduler used by Storm, and has a bit of handling for banning "bad" workers from the assignment, but otherwise largely just does round robin assignment.
If you don't want to implement a cluster-wide scheduler, you might be able to set your cluster to use the ResourceAwareScheduler, and use a topology-level scheduling strategy instead. You would set this by setting config.setTopologyStrategy(YourStrategyHere.class) when you submit your topology. You will want to implement this interface https://github.com/apache/storm/blob/e909b3d604367e7c47c3bbf3ec8e7f6b672ff778/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/IStrategy.java#L43 and you can find an example implementation at https://github.com/apache/storm/blob/c427119f24bc0b14f81706ab4ad03404aa85aede/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/DefaultResourceAwareStrategy.java
Edit: If you implement either an IStrategy or IScheduler, they need to go in a jar that you put in storm/lib on the Nimbus machine. The strategy or scheduler needs to be on the classpath of the Nimbus process.
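For illustration, submitting a topology that uses such a strategy might look roughly like the sketch below. MyPlacementStrategy is a hypothetical IStrategy implementation, and depending on your Storm version setTopologyStrategy may take a Class or a fully qualified class name as a String, so check the Config javadoc for 2.1.0:
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;

// MyPlacementStrategy is a placeholder; it must be packaged into a jar placed in
// storm/lib on the Nimbus machine so it is on the Nimbus classpath (see above).
// builder is the TopologyBuilder used to assemble the word-count topology.
Config conf = new Config();
conf.setNumWorkers(2);
conf.setTopologyStrategy(MyPlacementStrategy.class); // per-topology strategy used by the ResourceAwareScheduler
StormSubmitter.submitTopology("wordcount", conf, builder.createTopology());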

Parallelism in Apache Storm with one worker node

I am trying to parallelize my topology using Apache Storm, but it gives me a java.util.ConcurrentModificationException on the worker nodes if I increase the number of workers above 1. It works fine with 1 worker and on a local cluster. I want a way to parallelize my topology and measure parameters like throughput, latency, emit rate, etc. using one worker node only.
Based on the stack trace you posted, it looks like Kryo is trying to serialize an ArrayList and hitting a ConcurrentModificationException. I would look for any place you emit an ArrayList and make sure that you don't modify it after you've passed it to OutputCollector.emit.
Likely the reason you're not seeing this issue when you only have one worker is that Storm only serializes emitted objects when they need to be sent to a different worker.
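A hedged illustration of the pattern to watch for (collector is the bolt's OutputCollector; the list name and values are made up):
import java.util.ArrayList;
import java.util.List;
import org.apache.storm.tuple.Values;

// Problematic: the same list is mutated after being emitted, so Kryo may be
// serializing it on a worker-transfer thread while this thread changes it.
List<String> words = new ArrayList<>();
words.add("foo");
collector.emit(new Values(words));
words.add("bar"); // mutation after emit can race with serialization

// Safer: emit a defensive copy (or build a fresh list per emit) and keep
// mutating only the original.
collector.emit(new Values(new ArrayList<>(words)));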

When I use Storm Trident, if I set the parallelism to 2 or more, how can I make all the executors run on different servers rather than just one?

i.e., if the parallelism is 2, the bolt runs on 2 different servers, and if the parallelism is 3, the bolt runs on 3 different servers. This is important for me, because I don't want all the tasks running on just one server; that would be too slow.
Try increasing the configuration parameter "number of workers" (default value is 1) via
Config cfg = new Config();
cfg.setNumWorkers(...);
You can also limit the number of workers per host via the storm.yaml config parameter supervisor.slots.ports -- for each port, one worker JVM can be started, so if you only provide one port in this config, only one worker JVM will be started per host. Just be aware that this might limit the number of topologies you can run: a single worker JVM only executes code from a single topology (to isolate topologies from each other).
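A rough sketch combining the two settings (the spout, function, and field names are placeholders): if each host's storm.yaml exposes only one port under supervisor.slots.ports, requesting three workers forces Storm to place them on three different machines, and a parallelism hint of 3 then spreads the executors across those machines.
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.tuple.Fields;

Config cfg = new Config();
cfg.setNumWorkers(3); // 3 worker JVMs -> 3 different hosts if each host has only one worker slot

TridentTopology trident = new TridentTopology();
trident.newStream("lines", new MyBatchSpout()) // MyBatchSpout is a placeholder spout
       .each(new Fields("line"), new MySplit(), new Fields("word")) // MySplit is a placeholder function
       .parallelismHint(3); // 3 executors, one per worker/host

StormSubmitter.submitTopology("spread-topology", cfg, trident.build());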

Storm: What happens with multiple workers?

Say I deploy a topology with 2 workers; the topo has 1 spout and 1 bolt with 2 tasks. My understanding is that one worker will run the spout executor and one bolt executor, and the other worker will run one bolt executor.
Is my understanding correct?
If my understanding is correct, then here is my question. Say the bolt is implemented in Python. Since Storm transfers data to and from multi-lang bolts via stdout/stdin, if the 2 workers run on different hosts, how can the spout send data to the bolt located on the other host?
A little more clarification on your question: Storm uses various types of queues for data/tuple transfer between the components of a topology.
For example:
1) Intra-worker communication in Storm (inter-thread on the same Storm node): LMAX Disruptor
2) Inter-worker communication (node-to-node across the network): ZeroMQ or Netty
3) Inter-topology communication: nothing built into Storm, you must take care of this yourself with e.g. a messaging system such as Kafka/RabbitMQ, a database, etc.
For further reference:
http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
To give a more detailed answer:
Storm will send the data to both bolt executors. For the bolt local to the spout, this happens in memory; for the other bolt, it happens over the network. Afterwards, each bolt instance delivers the input to a locally running Python process, so the stdout/stdin delivery you describe happens locally on each machine. The data is transferred to each bolt before the hand-off from Java to Python happens.
Thus, the stdout/stdin bridge is used within each bolt, not from the spout to the bolt.
I have tested this myself: Storm properly delivers spout-emitted data to bolts on different hosts.
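For reference, a multi-lang bolt is declared on the Java side as a ShellBolt wrapper around the script, as in the storm-starter word-count example (splitsentence.py is assumed to sit in the topology's multilang resources). The Python subprocess behind this wrapper is what the stdout/stdin bridge talks to, locally, on whichever worker the bolt executor is assigned to:
import java.util.Map;
import org.apache.storm.task.ShellBolt;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Fields;

// Each executor of this bolt starts its own local Python subprocess; tuples reach
// the executor in memory or over the network first, then cross the stdin/stdout bridge.
public class SplitSentence extends ShellBolt implements IRichBolt {
    public SplitSentence() {
        super("python", "splitsentence.py");
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}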

Specify execution of a set of tuples on a worker node in a Storm topology

Is it possible to execute a set of tuples (based on a particular field) on a particular worker node in the Storm topology? I need to minimize network load in the cluster.
You can go for a custom scheduler; it will allow you to bind a specific task to a supervisor. It might be worth taking a look into it.
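Independently of any custom scheduler, note that a fieldsGrouping already guarantees that all tuples with the same value of a field go to the same task, although Storm (not you) decides which worker that task lives on. A minimal sketch with placeholder component and field names:
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

// All tuples with the same "user-id" value are routed to the same bolt task, which
// keeps related tuples together but does not pin them to a worker node of your choosing.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("spout", new MySpout(), 1);
builder.setBolt("per-user-bolt", new MyBolt(), 4)
       .fieldsGrouping("spout", new Fields("user-id"));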
