10 ms latency with a simple topology in Storm

I am running a simple topology with a spout that emits tuples with two fields and a bolt that just acks in its execute method. These run on two machines. With this setup and default configuration values, I get 10 ms complete latency, while both execute and process latency are 0.005 ms. I have disabled logging as well. What could be the issue? The Storm version is 1.0.

If you run your topology on several machines, try using localOrShuffleGrouping() on your bolts. It will remove unnecessary traffic and network delay.
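A minimal sketch of what that looks like when building the topology (MySpout and AckBolt are hypothetical placeholders for your own components):

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class LocalOrShuffleExample {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", new MySpout(), 2);
        // localOrShuffleGrouping prefers a bolt executor in the same worker
        // process and only shuffles across the network when none is local.
        builder.setBolt("ack-bolt", new AckBolt(), 2)
               .localOrShuffleGrouping("spout");

        Config conf = new Config();
        conf.setNumWorkers(2);
        StormSubmitter.submitTopology("latency-test", conf, builder.createTopology());
    }
}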

Related

Control over scheduling/placement in Apache Storm

I am running a wordcount topology in a Storm cluster composed of 2 nodes. One node is the Master node (with Nimbus, UI and Logviewer), and both of them are Supervisors with 1 Worker each. In other words, my Master node is also a Supervisor, and the second node is only a Supervisor. As I said, there is 1 Worker per Supervisor.
The topology I am using is configured to use these 2 Workers (setNumWorkers(2)). In detail, the topology has 1 Spout with 2 threads, 1 split Bolt and 1 count Bolt. When I deploy the topology with the default scheduler, the first Supervisor gets 1 Spout thread and the split Bolt, and the second Supervisor gets 1 Spout thread and the count Bolt.
Given this context, how can I control the placement of operators (Spout/Bolt) between these 2 Workers? For research purposes, I need some control over the placement of these operators between nodes. However, the mechanism seems to be transparent within Storm, and such control is not available to the end user.
I hope my question is clear enough. Feel free to ask for additional details. I am aware that I may need to dig into Storm's source code and recompile. That's fine. I am looking for a starting point and advice on how to proceed.
The version of Storm I am using is 2.1.0.
Scheduling is handled by a pluggable scheduler in Storm. See the documentation at http://storm.apache.org/releases/2.1.0/Storm-Scheduler.html.
You may want to look at the DefaultScheduler for reference: https://github.com/apache/storm/blob/v2.1.0/storm-server/src/main/java/org/apache/storm/scheduler/DefaultScheduler.java. This is the scheduler Storm uses by default; it has a bit of handling for banning "bad" workers from the assignment, but otherwise largely does round-robin assignment.
If you don't want to implement a cluster-wide scheduler, you might be able to set your cluster to use the ResourceAwareScheduler and use a topology-level scheduling strategy instead. You would set this by calling config.setTopologyStrategy(YourStrategyHere.class) when you submit your topology. You will want to implement the IStrategy interface (https://github.com/apache/storm/blob/e909b3d604367e7c47c3bbf3ec8e7f6b672ff778/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/IStrategy.java#L43), and you can find an example implementation at https://github.com/apache/storm/blob/c427119f24bc0b14f81706ab4ad03404aa85aede/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/DefaultResourceAwareStrategy.java
Edit: If you implement either an IStrategy or IScheduler, they need to go in a jar that you put in storm/lib on the Nimbus machine. The strategy or scheduler needs to be on the classpath of the Nimbus process.
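For orientation, here is a rough sketch of a cluster-wide scheduler. The class name and the pin-the-count-bolt policy are made up; the Cluster calls are written against the Storm 2.x API, but verify the exact signatures against your Storm version:

import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.storm.scheduler.Cluster;
import org.apache.storm.scheduler.EvenScheduler;
import org.apache.storm.scheduler.ExecutorDetails;
import org.apache.storm.scheduler.IScheduler;
import org.apache.storm.scheduler.Topologies;
import org.apache.storm.scheduler.TopologyDetails;
import org.apache.storm.scheduler.WorkerSlot;

public class PinningScheduler implements IScheduler {

    @Override
    public void prepare(Map<String, Object> conf) {
        // read any custom placement configuration here
    }

    @Override
    public void schedule(Topologies topologies, Cluster cluster) {
        for (TopologyDetails topology : cluster.needsSchedulingTopologies()) {
            // component id -> executors still waiting for a slot
            Map<String, List<ExecutorDetails>> pending =
                cluster.getNeedsSchedulingComponentToExecutors(topology);
            List<WorkerSlot> slots = cluster.getAvailableSlots();
            if (slots.isEmpty() || pending.isEmpty()) {
                continue;
            }
            // Example policy: pin the "count" bolt to the first free slot.
            List<ExecutorDetails> countExecutors = pending.get("count");
            if (countExecutors != null) {
                cluster.assign(slots.get(0), topology.getId(), countExecutors);
            }
        }
        // Let a stock scheduler place everything that is still unassigned.
        new EvenScheduler().schedule(topologies, cluster);
    }

    // Some Storm 2.x versions require this method; it returns per-user
    // resource guarantees and can stay empty for a simple scheduler.
    public Map<String, Map<String, Double>> config() {
        return Collections.emptyMap();
    }
}

You would then point Nimbus at it via storm.scheduler: "your.package.PinningScheduler" in storm.yaml (package name hypothetical), with the jar in storm/lib as described above.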

Parallelism in Apache Storm with one worker node

I am trying to parallelize my topology using Apache Storm, but I get a java.util.ConcurrentModificationException on the worker nodes when I increase the number of workers beyond 1. It works fine with 1 worker and on a local cluster. I want a way to parallelize my topology and measure parameters like throughput, latency, emit rate, etc. using only one worker node.
Based on the stack trace you posted, it looks like Kryo is trying to serialize an ArrayList and hitting a ConcurrentModificationException. I would look for any place you emit an ArrayList and make sure that you don't modify it after you've passed it to OutputCollector.emit.
Likely the reason you're not seeing this issue when you only have one worker is that Storm only serializes emitted objects when they need to be sent to a different worker.
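To make the failure mode concrete, here is a hypothetical bolt showing both the bug and the fix (Storm 2.x method signatures assumed):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class BufferingBolt extends BaseRichBolt {
    private OutputCollector collector;
    private List<String> buffer; // reused across execute() calls

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
        this.buffer = new ArrayList<>();
    }

    @Override
    public void execute(Tuple input) {
        buffer.add(input.getString(0));
        // BAD: emitting the live list. With more than one worker, Kryo may
        // serialize it on the transfer thread while execute() keeps adding:
        // collector.emit(new Values(buffer));

        // GOOD: emit a defensive copy so the emitted object is never mutated.
        collector.emit(new Values(new ArrayList<>(buffer)));
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("words"));
    }
}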

Spark batches do not complete when running on a Yarn cluster

Setting the scene
I am working to make a Spark streaming application (Spark 2.2.1 with Scala) run on a Yarn cluster (Hadoop 2.7.4).
So far I managed to submit the application to the Yarn cluster with spark-submit. I can see that the receiver task starts up correctly and fetches a lot of records from the database (Couchbase Server 5.0) and I can also see that the records are divided into batches.
The question
When I look at the Streaming Statistics on the Spark Web UI, however, I can see that my batches are never processed. I have seen batches with 0 records process and complete, but when a batch with records starts processing, it never completes. One time it even got stuck on a batch with 0 records.
I even tried simplifying the output operations on the StreamingContext as much as possible. But even with the very simple output operation print(), my batches are never processed. The logs do not show any warnings or errors.
Does anyone know what might be wrong? Any suggestions on how to solve this will be much appreciated.
More Info
The main class of the Spark application is built from this example (first one) from the Couchbase Spark Connector documentation combined with this example with checkpoint from the Spark Documentation.
Right now I have 3230 active batches (3229 queued and 1 processing) and 1 completed batch (which had 0 records); the application has been running for 4 hours and 30 minutes, and another batch is added every 5 seconds.
If I look at the thread dump for the executors, I see a lot of WAITING, TIMED_WAITING and a few RUNNABLE threads. The list would fill 3 screenshots, so I will only post it if needed.
Below are some screenshots from the Web UI: Executor Overview, Spark Jobs Overview, Node Overview with resources, and Capacity Scheduler Overview.
Per the screenshots, you have 2 cores: one is used by the driver and the other by the receiver, which leaves no core for the actual processing to happen. Please increase the number of cores and try again.
Refer: https://spark.apache.org/docs/latest/streaming-programming-guide.html#input-dstreams-and-receivers
If you are using an input DStream based on a receiver (e.g. sockets, Kafka, Flume, etc.), then the single thread will be used to run the receiver, leaving no thread for processing the received data. Hence, when running locally, always use “local[n]” as the master URL, where n > number of receivers to run (see Spark Properties for information on how to set the master).
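In practice that means giving the application more cores than receivers. A minimal Java sketch of the relevant settings (the question's app is Scala, but the configuration keys are the same; the app name and sizing below are made up):

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingApp {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf()
            .setAppName("couchbase-streaming")
            // On YARN: total executor cores must exceed the receiver count.
            .set("spark.executor.instances", "2")
            .set("spark.executor.cores", "2");
        // For local testing, use a master of local[n] with n > receiver count,
        // e.g. conf.setMaster("local[2]") for a single receiver.

        JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(5));
        // ... attach the receiver-based DStream and output operations here ...
        ssc.start();
        ssc.awaitTermination();
    }
}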

Storm: What happens with multiple workers?

Say I deploy a topology with 2 workers, and the topology has 1 spout and 1 bolt with 2 tasks. My understanding is that one worker will run the spout executor and one bolt executor, and the other worker will run the second bolt executor.
Is my understanding correct?
If my understanding is correct, then here is my question. Say the bolt is implemented in Python. Since Storm transfers data between multi-lang bolts via stdout/stdin, if the 2 workers run on different hosts, how can the spout send data to a bolt located on the other host?
A little more clarification on your question. Storm uses various types of queues for data/tuple transfer between the components of a topology, for example:
1) Intra-worker communication in Storm (inter-thread on the same Storm node): LMAX Disruptor
2) Inter-worker communication (node-to-node across the network): ZeroMQ or Netty
3) Inter-topology communication: nothing built into Storm, you must take care of this yourself with e.g. a messaging system such as Kafka/RabbitMQ, a database, etc.
For further reference:
http://www.michael-noll.com/blog/2013/06/21/understanding-storm-internal-message-buffers/
To give a more detailed answer:
Storm will send the data to both bolt executors. For the spout-local bolt this happens in memory; for the other bolt it goes over the network. Afterwards, each bolt instance delivers its input to a locally running Python process, so the stdout/stdin delivery you describe happens locally on each machine: the data is transferred to each bolt before the hand-off from Java to Python happens.
Thus, the stdout/stdin bridge is used within each bolt, not from spout to bolt.
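For reference, this is roughly how a Python bolt is wired in via Storm's multilang ShellBolt (the splitsentence.py script is hypothetical and has to ship in the topology jar's resources/ directory):

import java.util.Map;

import org.apache.storm.task.ShellBolt;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.tuple.Fields;

public class PythonSplitBolt extends ShellBolt implements IRichBolt {
    public PythonSplitBolt() {
        // Each executor launches its own local Python subprocess; tuples cross
        // the JVM/Python boundary via stdin/stdout on the same machine only.
        super("python", "splitsentence.py");
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}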
I have done a test myself: Storm properly delivers spout-emitted data to bolts on different hosts.

Configuring parallelism in Storm

I am new to Apache Storm, and I am trying to figure out how to configure Storm parallelism. There is a great article, "Understanding the Parallelism of a Storm Topology", but it only raises more questions.
When you have a multi-node Storm cluster, each topology is distributed as a whole according to the TOPOLOGY_WORKERS configuration parameter. So if you have 5 workers, you have 5 copies of the spout (1 per worker), and the same goes for bolts.
How can I deal with situations like the following inside a Storm cluster (preferably without creating external services)?
I need exactly one spout used by all instances of the topology, for example if input data is pushed to the cluster via a network folder that is scanned for new files.
There is a similar issue with a specific type of bolt, for example when data is processed by a licensed third-party library that is locked to a particular physical machine.
First, the basics:
Workers - run executors; each worker has its own JVM
Executors - run tasks; executors are distributed across the workers by Storm
Tasks - instances running your spout/bolt code
Second, a correction: having 5 workers does NOT mean you will automatically have 5 copies of your spout. It means you have 5 separate JVMs where Storm can assign executors to run (think of them as 5 buckets).
The number of instances of your spout is configured when you first create and submit your topology:
TopologyBuilder builder = new TopologyBuilder();
// parallelism hint = number of executors; setNumTasks = number of task instances
builder.setSpout("0-spout", new MySpout(), spoutParallelism).setNumTasks(spoutTasks);
Since you want only one spout for the entire cluster, you'd set both spoutParallelism and spoutTasks to 1.
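Putting it together, a sketch with hypothetical component names: exactly one spout executor for the whole cluster, while the bolts still scale out across the 5 workers:

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class SingleSpoutTopology {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // parallelism hint 1 + 1 task = exactly one spout instance cluster-wide
        builder.setSpout("folder-spout", new MySpout(), 1).setNumTasks(1);
        // the bolt can still scale: 4 executors spread over the workers
        builder.setBolt("process-bolt", new MyBolt(), 4)
               .fieldsGrouping("folder-spout", new Fields("filename"));

        Config conf = new Config();
        conf.setNumWorkers(5); // 5 JVMs; Storm spreads the executors among them

        StormSubmitter.submitTopology("single-spout-topo", conf, builder.createTopology());
    }
}

The same trick covers the licensed-library case: give that bolt a parallelism of 1 so only one instance exists. Pinning it to a specific physical machine additionally requires a custom scheduler, as discussed in the scheduling question above.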
