Storm ackers are limiting the performance - apache-storm

I am running a topology with many bolts. From Storm's UI, I can see that the execute latency and process latency of all bolts are very small (<1 ms). However, the complete latency of my spouts rose to around 30 s.
I suspect such a huge discrepancy is caused by the ackers, because the ackers executed 101,522,080 times but emitted only 2,673,260 times, which means, if I'm correct, around 100,000,000 tuples are in flight in the topology waiting for an ack signal.
I tried setting the number of ackers to 0 to disable acking entirely, but then the whole system ran out of control. I also tried doubling the number of ackers, but the situation did not improve.
Is the acker really what is limiting performance, and how can I fix this?

First, setting the number of ackers to zero means your spout emits every tuple as soon as it is available, so your topology runs into performance problems, failed tuples, and messages piling up on the consumer/spout side. This is because every tuple is acked (marked as fully processed) immediately, before it has actually been executed in all bolts, and TOPOLOGY_MAX_SPOUT_PENDING can no longer do its real job.
In my opinion, first try to figure out the best TOPOLOGY_MAX_SPOUT_PENDING value for your topology. Then tune the acker count: you can start from the number of workers, double it, and watch the performance in the Storm UI.
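A minimal sketch of how these two knobs can be set on the topology config (the values are placeholders to tune, not recommendations):
Config config = new Config();
config.setMaxSpoutPending(1000);    // TOPOLOGY_MAX_SPOUT_PENDING: cap in-flight tuples per spout task
config.setNumAckers(2);             // e.g. start from one acker per worker and tune from there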

Related

High latency between spout -> bolt and bolt -> bolts

In my topology I see around 1 - 2 ms latency when transferring tuples from spouts to bolts or from bolts to bolts. I am calculating latency using nanosecond timestamps because the whole topology runs inside a single worker.
The topology runs in a cluster on production-capable hardware.
To my understanding, tuples need not be serialized/deserialized in this case, as everything is inside a single JVM. I have set the parallelism hint for most spouts and bolts to 5, and the spouts only produce events at a rate of 100 per second. I don't think the high latency is due to queuing of events, because I don't see any increase in latency over time. There is no memory increase either, log levels are set to ERROR, and CPU usage is in the range of 200 to 300%.
What could be causing this latency? I was expecting only a few microseconds for tuple transfer.
I'm going to assume you're using one of the released Storm versions, and not 2.0.0-SNAPSHOT, since the queueing implementation has changed in that version.
I think it's likely that the delay is because Storm batches up tuples before delivering them to the consumer. Take a look at https://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/utils/DisruptorQueue.java#L247, and also look at the Flusher class in that file. When a spout/bolt publishes a tuple, it is put into the _currentBatch list. It stays there until either enough tuples have been received so the batch is "big enough" (you can look at the _inputBatchSize variable to figure out when this is), or until the Flusher is triggered (happens by default once per millisecond).
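If you want to trade some throughput for lower per-tuple latency, one thing to experiment with is shrinking the batch size and flush interval. This is only a sketch assuming Storm 1.x, where these batching knobs are exposed as topology config keys; check the Config class of your exact version before relying on them:
Config config = new Config();
config.put("topology.disruptor.batch.size", 1);             // effectively disable batching
config.put("topology.disruptor.batch.timeout.millis", 1);   // flush at least once per millisecond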

Tuples failing at the spout, and seems they are not even reaching the Bolt

I have a topology that has been running for a few days now, and it started failing tuples over the last couple of days. From the logs it seems that the tuples are not even reaching the bolts; attached is the Storm UI screenshot.
I am acking the tuples in a finally block in my code, so there should be no case of unacked tuples, and the timeout is set at 10 s, which is much higher than the times shown in the UI.
Any hints?
The log you're seeing is simply the Kafka spout telling you that it has fallen too far behind, and it has started skipping tuples.
I believe only acked tuples count for the complete latency metric https://github.com/apache/storm/blob/a4afacd9617d620f50cf026fc599821f7ac25c79/storm-client/src/jvm/org/apache/storm/stats/SpoutExecutorStats.java#L54. Failed tuples don't (how would Storm know what the actual latency is for tuples that time out), so the complete latency you're seeing is only for the initial couple of acked tuples.
I think what's happening is that your tuples are reaching the bolt, and then either you're not acking them (or acking them more than once), or the tuples are taking too long to process so they time out while queued up for the bolt. Keep in mind that the tuple timeout starts when the spout emits the tuple, so time spent in the bolt's input queue counts. Since your initial couple of tuples are taking a while to process, I think the bolt queue gets backed up with tuples that are already timed out. The bolt doesn't discard tuples that are timed out, so the queued timed out tuples are preventing fresh tuples from being processed in time.
I'd raise the tuple timeout, and also cap the number of pending tuples by setting topology.max.spout.pending to whatever you think is reasonable (something like the number of tuples you think you can process within the timeout).
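A minimal sketch of those two settings on the topology config (the numbers are placeholders, not tuned recommendations):
Config config = new Config();
config.setMessageTimeoutSecs(60);   // raise topology.message.timeout.secs above the current 10 s
config.setMaxSpoutPending(500);     // cap the number of in-flight tuples per spout task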

Storm latency caused by ack

I was using kafka-storm to connect Kafka and Storm. I have 3 servers running ZooKeeper, Kafka, and Storm. There is a topic 'test' in Kafka that has 9 partitions.
In the Storm topology, the number of KafkaSpout executors is 9 and, by default, the number of tasks should be 9 as well. The 'extract' bolt is the only bolt connected to the KafkaSpout, the 'log' spout.
From the UI, there is a huge failure rate in the spout. However, the number of executed messages in the bolt = the number of emitted messages - the number of failed messages in the bolt. This equation almost holds while the failed count is still close to zero at the beginning.
Based on my understanding, this means that the bolt did receive the messages from the spout, but the ack signals got stuck in flight. That's the reason why the number of acks in the spout is so small.
This problem might be solved by increasing the timeout seconds and the spout's max pending message count, but that would use more memory and I cannot increase them indefinitely.
I was wondering if there is a way to force Storm to ignore the ack in some spout/bolt, so that it will not wait for that signal until the timeout. This should increase the throughput significantly, but without guaranteeing message processing.
If you set the number of ackers to 0, then Storm will automatically ack every tuple.
config.setNumAckers(0);
Please note that the UI only measures and shows 5% of the data flow, unless you set
config.setStatsSampleRate(1.0d);
Try increasing the message timeout and reducing topology.max.spout.pending.
Also, make sure the spout's nextTuple() method is non-blocking and optimized (see the sketch after the buffer-size settings below).
I would also recommend profiling the code; maybe your Storm queues are filling up and you need to increase their sizes.
config.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE,32);
config.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE,16384);
config.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE,16384);
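A minimal non-blocking nextTuple() sketch, assuming the spout drains an in-memory queue that a background consumer thread fills (the class and field names are illustrative, not from the question):
public class NonBlockingSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        // a background consumer thread (not shown) pushes incoming messages into 'pending'
    }

    @Override
    public void nextTuple() {
        String msg = pending.poll();
        if (msg == null) {
            return;                            // nothing ready; return immediately, never block here
        }
        collector.emit(new Values(msg), msg);  // passing a message id enables acking
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }
}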
Your capacity numbers are a bit high, leading me to believe that you're really maximizing the use of system resources (CPU, memory). In other words, the system seems to be bogged down a bit and that's probably why tuples are timing out. You might try using the topology.max.spout.pending config property to limit the number of inflight tuples from the spout. If you can reduce the number just enough, the topology should be able to efficiently handle the load without tuples timing out.

Apache Storm: what happens to a tuple when no bolts are available to consume it?

If the tuple is routed to another bolt, but no instances of that bolt are available for a while, how long will it hang around? Indefinitely? Long enough?
What if many tuples are waiting because there is a line or queue for the next available bolt? Will they merge? Will bad things happen if too many get backed up?
By default, tuples will time out 30 seconds after being emitted; you can change this value (topology.message.timeout.secs), but unless you know what you are doing, don't.
Failed and timed-out tuples will be replayed by the spout if the spout is reading from a reliable data source (e.g. Kafka); that is, Storm has guaranteed message processing. If you are coding your own spouts, you might want to dig deep into this.
You can see whether tuples are timing out in the Storm UI: they show up as failing on the spout but not on the bolts.
You don't want tuples to time out inside your topology (for example, there is a performance penalty on Kafka for not reading sequentially). You should adjust your topology's capacity to process tuples (that is, tweak the bolt parallelism by changing the number of executors) and set the parameter topology.max.spout.pending to a reasonably conservative value.
Increasing the topology.message.timeout.secs parameter is no real solution, because sooner or later, if the capacity of your topology is not enough, tuples will start to fail.
topology.max.spout.pending is the maximum number of tuples that can be waiting. The spout will emit more tuples as long as the number of tuples not yet fully processed is less than the given value. Note that topology.max.spout.pending is per spout (each spout has its own internal counter and keeps track of the tuples that are not fully processed).
There is a deserialize queue for buffering incoming tuples; if the topology hangs long enough, the queue will fill up, and tuples will be lost if you don't use acking to make sure they are resent.
Storm just drops tuples that are not fully processed before the timeout (default is 30 seconds).
After that, Storm calls the spout's fail(Object msgId) method. If you want to replay the failed tuples, you should implement this method. You need to keep the tuples in memory, or in another reliable storage system such as Kafka, in order to replay them.
If you do not implement the fail(Object msgId) method, Storm just drops them.
Reference: https://storm.apache.org/documentation/Guaranteeing-message-processing.html
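A minimal sketch of spout-side replay, assuming in-flight tuples are kept in an in-memory map keyed by message id (these are methods inside a BaseRichSpout subclass; the field and method names are illustrative, not from the answer):
private final Map<Object, Values> inFlight = new ConcurrentHashMap<>();

private void emitTracked(Object msgId, Values values) {
    inFlight.put(msgId, values);
    collector.emit(values, msgId);       // the collector obtained in open(); the message id enables ack/fail callbacks
}

@Override
public void ack(Object msgId) {
    inFlight.remove(msgId);              // fully processed, forget it
}

@Override
public void fail(Object msgId) {
    Values values = inFlight.get(msgId);
    if (values != null) {
        collector.emit(values, msgId);   // replay the failed or timed-out tuple
    }
}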

Why is there a huge lag between executed and acked in storm topology

I am working on Apache Storm, and I see there is a huge difference between executed and acked.
Following is a screenshot from the Storm UI.
What can we do to make acked equal to executed? I tried increasing the number of ackers, but that was of no help.
To make it clear, I would like to explain the two values' meanings. "Executed" represents how many times the execute method is called for the bolt. "Acked" means how many times the bolt calls ack.
From the snapshot above, booking_bolt calls its execute method 23,300 times but calls ack only 500 times.
So maybe in the bolt's execute method, ack or fail is not called every time.
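A minimal sketch of an execute method that acks or fails every incoming tuple exactly once, assuming manual acking (the class and method names are illustrative, not from the question):
public class BookingBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            process(input);          // your business logic
            collector.ack(input);    // ack on success
        } catch (Exception e) {
            collector.fail(input);   // fail on error so the spout can replay the tuple
        }
    }

    private void process(Tuple input) { /* ... */ }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}
Alternatively, extending BaseBasicBolt lets Storm ack each tuple automatically when execute returns, which also guarantees exactly one ack per tuple.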
From Michael G. Noll's training: Why does the Storm UI report seemingly incorrect numbers?
Storm samples incoming tuples when computing statistics in order to increase performance.
Sample rate is configured via topology.stats.sample.rate.
0.05 is the default value
Here, Storm will pick a random event of the next 20 events in which to increase the metric count by 20. So if you have 20 tasks for that bolt, your stats could be off by +/- 380.
1.00 forces Storm to count everything exactly
This gives you accurate numbers at the cost of a big performance hit. For testing purposes however this is acceptable and often quite helpful.
