Throughput in Apache Storm

I want to know exactly what throughput means in Apache Storm. Is it the number of tuples processed divided by the total time?
If so, what counts as the total number of tuples emitted? I don't quite see the significance of total tuples emitted/time. Please let me know.

You need to look at the execute count of the sink bolts (the end ones in your topology that don't connect to any other bolts). This is the throughput, and it is reported for the last 10 minutes, 3 hours, 1 day, and all time. Dividing the value by the time window in seconds gives you the throughput in tuples per second.
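As a minimal sketch of that division (the executed count below is a hypothetical value read off the Storm UI, not anything the question provides):

```java
public class SinkThroughput {
    public static void main(String[] args) {
        // Hypothetical numbers read off the Storm UI for a sink bolt.
        long executed = 600_000;      // "executed" count, last-10m window
        long windowSeconds = 10 * 60; // the 10-minute window in seconds

        double tuplesPerSecond = (double) executed / windowSeconds;
        System.out.printf("Throughput: %.1f tuples/s%n", tuplesPerSecond); // 1000.0
    }
}
```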

Related

Can we calculate the approximate TPS (transactions per second) value we will reach based on hourly active user count?

One of my clients is asking about the expected TPS before the test execution. He has given the requirements below to initiate the load test:
The expected hourly active user count: 14250 (concurrent)
The total think time: 625 seconds (4 thread groups with different user flows; each flow's action controller has a 5-second delay)
Pacing: 0 (no pace time)
Total number of endpoints: 212 (across the 4 thread groups)
Can anyone help me calculate the approximate TPS (transactions per second) we would reach with the above configuration?
Is this data set sufficient to do the calculation? I'd appreciate any help.
It's not sufficient, because you don't know the response time for all 212 endpoints. You can assume what the TPS could be if the response time were 1 second or 2 seconds, but it's not possible to predict the actual TPS without knowing the response time.
According to JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
More information: How do I Correlate the Number of (Concurrent) Users with Hits Per Second
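As a rough sketch of that reasoning using Little's Law, where the 2-second response time and the "each user runs all 212 endpoints sequentially" model are assumptions, not given data:

```java
public class TpsEstimate {
    public static void main(String[] args) {
        int concurrentUsers = 14_250; // from the question
        double thinkTimeSec = 625.0;  // total think time per user flow
        int endpoints = 212;          // total endpoints across the flows
        double assumedRespSec = 2.0;  // ASSUMPTION: real value is unknown

        // Little's Law: concurrency = throughput * time-in-system, so
        // iterations/s = users / (think time + time spent in requests).
        double iterationSec = thinkTimeSec + endpoints * assumedRespSec;
        double iterationsPerSec = concurrentUsers / iterationSec;
        double requestsPerSec = iterationsPerSec * endpoints;
        System.out.printf("~%.1f iterations/s, ~%.0f requests/s%n",
                iterationsPerSec, requestsPerSec); // ~13.6, ~2880
    }
}
```

With a 1-second assumption instead, the estimate roughly scales up accordingly, which is exactly why the real response times are needed before any number can be promised.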

How is the total throughput value calculated in the Aggregate Report?

I discovered that in the Aggregate Report the TOTAL throughput value depends on the thread count. If we run tests with only one thread, total throughput is calculated as 1 / Total Average (multiplied by 1000 to convert milliseconds to seconds; see the screenshot below).
But when we set the thread count to 2 or more, total throughput is calculated in some other way, and I want to know which formula is used in that case (thread count > 1). It does not seem to be an average of the per-request throughputs, and it is also not calculated as 1 / Total Average as in the first case. So how exactly does this work? (Screenshot for 2 threads attached below.)
Thanks.
Screenshot for 1 thread used:
aggregate_1_thread.png
Screenshot for 2 threads used:
aggregate_2_threads.png
As per the doc:
http://jmeter.apache.org/usermanual/component_reference.html#Aggregate_Report
Throughput - the Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
So the result depends on both the response time and the number of threads, which influences those response times.
The total number of requests is divided by the time taken to run them, see:
https://github.com/apache/jmeter/blob/trunk/src/core/org/apache/jmeter/visualizers/SamplingStatCalculator.java#L198
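A minimal sketch of that calculation, mirroring the formula described above rather than JMeter's actual code (the sample numbers are hypothetical):

```java
public class AggregateThroughput {
    public static void main(String[] args) {
        // Hypothetical run: 2 threads, 50 samples in total.
        long firstSampleStartMs = 0;   // start of the first sample
        long lastSampleEndMs = 10_000; // end of the last sample
        long totalSamples = 50;        // summed over all threads

        // Total throughput = samples / elapsed wall-clock time. With more
        // threads, totalSamples grows for the same elapsed time, which is
        // why the result is not 1 / average latency once threads > 1.
        double elapsedSec = (lastSampleEndMs - firstSampleStartMs) / 1000.0;
        System.out.printf("Throughput: %.1f requests/s%n",
                totalSamples / elapsedSec); // 5.0
    }
}
```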

Apache Storm UI capacity metric

How is 'capacity' calculated?
From the documentation:
The "capacity" metric is very useful and tells you what % of the time in the last 10 minutes the bolt spent executing tuples. If this value is close to 1, then the bolt is "at capacity" and is a bottleneck in your topology. The solution to at-capacity bolts is to increase the parallelism of that bolt.
I don't quite understand "% of the time". If the value is 0.75, what does it really mean?
It's the percentage of time that the bolt is busy vs. idle. A value of 0.75 means that 25% of the time is spent waiting for new data to process.
Let's say you receive a new input tuple every second but your bolt takes 0.1 seconds to process it: the bolt will be idle 90% of the time and the capacity will be 0.1.
Another example: imagine you receive more data in real time than you can process, and you have two bolts where the first bolt's task takes more time than the second's, so the first bolt is your bottleneck. The capacity of the first bolt will be around 1 and the capacity of the second will be below 1.
In both examples above, you can then determine the parallelism (or processing power) you need to set for each bolt by looking at this number.
If the first bolt's capacity is 1 and the second's is 0.5, you probably want to assign twice as many executors to the first bolt as to the second. At the same time (and most importantly), you have to increase the number of executors until that bolt's capacity is below 1, so you can be sure your topology is able to keep up and process all the data coming in in real time.
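A minimal sketch of that sizing heuristic, assuming throughput scales roughly linearly with executors (the 0.8 target and the current executor counts are hypothetical choices, not Storm constants):

```java
public class ParallelismSizing {
    // Estimate how many executors a bolt needs so its capacity drops
    // below a target, assuming linear scaling with executor count.
    static int executorsNeeded(int current, double capacity, double target) {
        return (int) Math.ceil(current * capacity / target);
    }

    public static void main(String[] args) {
        // Hypothetical: both bolts currently run 2 executors.
        System.out.println(executorsNeeded(2, 1.0, 0.8)); // 3
        System.out.println(executorsNeeded(2, 0.5, 0.8)); // 2
    }
}
```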

Apache Storm: Relation between executors, execute latency and process latency?

Topology with 1 executor assigned to Query Normalizer
Topology with 4 executor assigned to Query Normalizer
Initially I was running my topology with only 1 executor assigned to QueryNormalizer. The execute latency was 8.952 ms and the process latency was 12.857 ms.
To make it faster I changed the number of executors for QueryNormalizer to 4. The execute latency changed to 197.616 ms and the process latency to 59.132 ms.
According to the definition, execute latency is the average time a Tuple spends in the execute method. The execute method may complete without sending an Ack for the tuple.
So, what I understand is that it should be lower if I increase the number of executors, since parallelism should increase with the number of executors.
Am I misinterpreting something?
Also, there is a huge difference between the emitted, transferred and executed fields. Is this normal?
Also, should process latency always be lower than the execute latency?
Which of the topologies shown above performs better? And how should I decide which topology is running better than the other from the bolt data?
Have a look at the "complete latency" in the spout; that is the average time tuples spend inside your topology, and it has decreased.
So, what I understand is that it should be lower if I increase the number of executors, since parallelism should increase with the number of executors.
It means you now have 4 units processing tuples, each unit processing 1 tuple at a time, which "theoretically" lets you process 4 tuples at the same time instead of 1. Do your tuples always look the same? That is, do they always have the same complexity?
Also, there is a huge difference between the emitted, transferred and executed fields. Is this normal?
Executed means how many tuples your bolt consumed; emitted means how many tuples your bolt generated (in your case I see each consumed tuple generates around 4 new tuples); transferred means how many emitted tuples were transferred to other bolts. For example, if you have two bolts consuming from the emitting bolt, transferred would be equal to 2 * the number of tuples emitted.
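A minimal sketch of those relationships, using the answer's own ratios with hypothetical counts:

```java
public class BoltCounters {
    public static void main(String[] args) {
        long executed = 250;      // tuples this bolt consumed (hypothetical)
        long emittedPerInput = 4; // ~4 new tuples per consumed tuple
        int downstreamBolts = 2;  // bolts subscribed to this bolt's output

        long emitted = executed * emittedPerInput;    // 1000
        long transferred = emitted * downstreamBolts; // 2000
        System.out.printf("executed=%d emitted=%d transferred=%d%n",
                executed, emitted, transferred);
    }
}
```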
Also, should process latency always be lower than the execute latency?
Not necessarily; see for example Nathan Marz's definition:
Process latency is time until tuple is acked, execute latency is time spent in execute for a tuple
and I can give an example from one of my topologies where this does not happen:
Which of the topologies shown above performs better? And how should I decide which topology is running better than the other from the bolt data?
Well, let them run for a longer period of time. Both processed fewer than 1000 tuples; the sample size is too small. Ultimately the metrics to watch are the "complete latency" on the spout and the number of failed tuples.

What does each field of the Storm UI mean?

I want to see the performance of each bolt and decide on the degree of parallelism.
In the Storm UI there are several fields that are confusing, so I would be glad if you could explain them:
Capacity (last 10m): is this the average capacity per second over the last 10 minutes for a single executor?
For example, if capacity is 1.2, does that mean a single executor processed 1.2 messages per second on average?
Execute latency and process latency: are these average values, or the values for the last processed message?
And what is the difference between them?
And what is the difference between them and capacity?
I have found a great article describing the Storm UI; you can reach it at: http://www.malinga.me/reading-and-understanding-the-storm-ui-storm-ui-explained/
So, we have that:
capacity (last 10m) – If this is around 1.0, the corresponding Bolt is running as fast as it can, so you may want to increase the Bolt’s parallelism. This is (number executed * average execute latency) / measurement time.
Execute latency (ms) – The average time a Tuple spends in the execute method. The execute method may complete without sending an Ack for the tuple.
Process latency (ms) – The average time it takes to Ack a Tuple after it is first received. Bolts that join, aggregate or batch may not Ack a tuple until a number of other Tuples have been received.
Also, I found that the documentation for the Storm UI REST API can be very useful for understanding what the fields mean: https://github.com/apache/storm/blob/master/STORM-UI-REST-API.md
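Putting the quoted capacity formula into numbers (a minimal sketch; the count and latency are hypothetical, chosen to land near the 1.2 value asked about above):

```java
public class CapacityMetric {
    public static void main(String[] args) {
        // capacity = (number executed * average execute latency) / window
        long executed = 550_000;        // tuples executed in the last 10m
        double executeLatencyMs = 1.31; // average execute latency in ms
        long windowMs = 10 * 60 * 1000; // 10-minute measurement window

        double capacity = executed * executeLatencyMs / windowMs;
        System.out.printf("Capacity: %.2f%n", capacity); // ~1.20
    }
}
```

So a capacity of 1.2 is not 1.2 messages per second: it is a busy-time ratio, and a value at or above 1 indicates the bolt is saturated and needs more parallelism.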
