How is Throughput calculated and displayed in seconds, minutes and hours in JMeter?

I have an observation and want to understand throughput calculation. Sometimes throughput is displayed in seconds, sometimes in minutes, and sometimes in hours. Can anyone explain exactly how throughput is calculated, and when it is displayed in seconds, minutes or hours in the JMeter Summary Report?

From JMeter Docs:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server. The formula is: Throughput = (number of requests) / (total time).
The time unit varies based on the throughput value.
Examples:
In 10 seconds, 10 requests are sent, then throughput is 10/10 = 1/sec.
In 10 seconds, 1 request is sent, then throughput is 1/10 = 0.1/sec = 6/min (a rate below 1/sec is automatically shown in the next larger time unit).
This is done to avoid small values (like 0.1, 0.001, etc.). In such cases a larger time unit is easier to read, while every time unit is equally correct. It is a matter of usability.
So:
1/sec = 60/min = 3600/hour = the SAME rate
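To make the unit equivalence concrete, here is a minimal Python sketch (plain arithmetic, not JMeter code) that expresses one measured throughput in all three units:

```python
def equivalent_rates(requests, seconds):
    """Express the same throughput per second, per minute and per hour."""
    per_sec = requests / seconds
    return per_sec, per_sec * 60, per_sec * 3600

# 1 request in 10 seconds: 0.1/sec == 6/min == 360/hour -- all the same rate
print(equivalent_rates(1, 10))   # (0.1, 6.0, 360.0)
```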

Related

I cannot increase the throughput to the number I want

I am trying to stress test my server.
To do so I am using JMeter, and here is my setup (see the "my Setup" screenshot):
Threads: 1000
Schedule: 3 minutes
So, as you can see, I keep 1000 threads running for a period of 3 minutes.
But when I look at the throughput I only get around 230 per second (see the "results" screenshot).
So what should I do to increase the throughput to, for example, 1,000,000 per second? How come increasing the thread count, which I assume means more load, does not increase the throughput?
According to JMeter Glossary
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
Throughput explicitly relies on the application response time. Looking into your results, the average response time is 3.5 seconds, therefore with 1000 threads you will not get more than 1000 / 3.5 ≈ 285 requests per second.
Theoretically you could use the Throughput Shaping Timer and Concurrency Thread Group combination; this way JMeter will kick off extra threads if the current amount is not enough to reach/maintain the desired throughput. However, looking at the 8.5% error rate and a maximum response time for your application of over 2 minutes, my expectation is that you will not be able to get more throughput, because most probably your application is overloaded and cannot respond any faster.
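As a rough back-of-the-envelope sketch of that ceiling (my own illustration, not JMeter internals): with a fixed pool of threads and no think time, throughput is bounded by the thread count divided by the average response time.

```python
def max_throughput(threads, avg_response_time_sec):
    """Rough upper bound on requests/sec for a closed workload with no think time."""
    return threads / avg_response_time_sec

# 1000 threads with a 3.5 s average response time can't exceed ~285 requests/sec
print(max_throughput(1000, 3.5))   # 285.71...
```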
Throughput measures the number of transactions or requests that can be made in a given period of time. Basically, it is the number of requests the server managed to serve in that time period. The throughput value depends on a lot of factors, and maybe your application under test is not able to handle the expected load.
So with 1000 threads, you can't expect a throughput of 1000/sec.
It's up to you to find out how much throughput your application can handle. For that you may need to do different optimizations on your side, like optimizing your script, distributing the load via JMeter distributed execution, increasing the thread count, etc.

How is the total throughput value calculated in the Aggregate Report?

I discovered that in the Aggregate Report the TOTAL throughput value depends on the thread count. If we run tests with only one thread, total throughput is calculated as 1 / Total Average (multiplied by 1000 to convert milliseconds to seconds; see the screenshot below).
But when we set the thread count to 2 or more, total throughput is calculated in some other way. What I want to know is which formula is used to calculate total throughput in this case (thread count > 1), because it does not seem to be an average of all the request throughputs, and it is also not calculated as 1 / Total Average as described in the first case. So how exactly does this work? (Screenshot for 2 threads attached below.)
Thanks.
Screenshot for 1 thread used:
aggregate_1_thread.png
Screenshot for 2 threads used:
aggregate_2_threads.png
As per doc:
http://jmeter.apache.org/usermanual/component_reference.html#Aggregate_Report
Throughput - the Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
So the result depends both on the response times and on the number of threads, which influences those response times.
The total number of requests is divided by the time taken to run them, see:
https://github.com/apache/jmeter/blob/trunk/src/core/org/apache/jmeter/visualizers/SamplingStatCalculator.java#L198
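In essence the calculator divides the number of samples by the window from the start of the first sample to the end of the last one. A simplified Python sketch of that idea (not the actual JMeter source):

```python
def total_throughput(samples):
    """samples: list of (start_timestamp_ms, elapsed_ms) tuples."""
    first_start = min(start for start, _ in samples)
    last_end = max(start + elapsed for start, elapsed in samples)
    window_sec = (last_end - first_start) / 1000.0
    return len(samples) / window_sec

# Two threads running in parallel: 4 samples inside a 2-second window -> 2.0/sec
print(total_throughput([(0, 1000), (0, 900), (1000, 1000), (950, 1050)]))
```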

Throughput calculation in JMeter

Attached is the Summary Report for my tests.
Please help me understand how the throughput value is calculated by JMeter.
For example, the throughput on the very first line is 53.1/min; how was this figure calculated by JMeter, and with which formula?
Also, I wanted to know how the throughput values of the subsequent requests end up in minutes or seconds. For example, the 2nd line has a throughput of 1.6/sec, so how does JMeter choose the time unit for these throughput values?
I tried many websites and got the common answer that throughput is the number of requests per unit of time (seconds, minutes, hours) sent to your server during the test, but that explanation did not straightforwardly match the results I see in my report.
Documentation defines Throughput as
requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So in your case you had 1 request, which took 1129ms, so
Throughput = 1 / 1129ms = 0.00088573959/ms
= 0.00088573959 * 1000/sec = 0.88573959/sec
= 0.88573959 * 60/min = 53.1443754/min, rounded to 53.1/min
For 1 request, the total time (or elapsed time) is the same as the duration of that single operation. For a sampler executed multiple times sequentially in a single thread (back-to-back, with no gaps), it is approximately
Throughput = (number of requests) / (average * number of requests) = 1 / average
For instance if you take the last line in your screenshot (with 21 requests), it has an average of 695, so throughput is:
Throughput = 1 / 695ms = 0.0014388489/ms = 1.4388489/sec, rounded to 1.4/sec
In terms of units (sec/min/hour), the Summary report does this:
By default it displays throughput in seconds.
But if the throughput in seconds is < 1.0, it converts it to minutes.
If it's still < 1.0, it converts it to hours.
It then rounds the value to 1 decimal digit.
This is why some values are displayed in sec, some in min, and some could be in hours. Some may even show 0.0, which basically means the throughput was so low that it rounds to 0.0 even per hour.
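A minimal Python sketch of that display rule, as I read it from the behaviour described above (not the actual JMeter source):

```python
def format_throughput(per_sec):
    """Pick the smallest time unit that keeps the displayed rate >= 1.0, then round."""
    for rate, unit in ((per_sec, "sec"), (per_sec * 60, "min"), (per_sec * 3600, "hour")):
        if rate >= 1.0:
            return f"{rate:.1f}/{unit}"
    return f"{per_sec * 3600:.1f}/hour"   # still < 1.0 per hour, may show as 0.0/hour

print(format_throughput(1 / 1.129))   # 53.1/min  (the single 1129 ms request above)
print(format_throughput(1 / 0.695))   # 1.4/sec   (the 21-request line, average 695 ms)
```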
I have been messing with this for a while, and here is what I had to do in order for my numbers to match what JMeter says:
Loop through the lines in the CSV file and gather the LOWEST start time for each of the labels you have, and also grab the HIGHEST (timestamp + elapsed time).
Calculate the difference between those in seconds.
Then divide the number of samples by that difference.
So in Excel, the easiest way to do it is to open the CSV file and add a column for timestamp + elapsed.
First sort the block by the timestamp, lowest to highest, then find the first instance of each label and grab that time.
Then sort by your new column, highest to lowest, and again grab the first time for each label.
For each label, gather both of these times in a new sheet:
A would be the label
B would be the start time
C would be the end time (timestamp + elapsed)
D would then be (C-B)/1000 (difference in seconds)
E would be the number of samples for each label
F would be E/D (samples per second)
G would be F*60 (samples per minute)
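The same procedure can be scripted instead of done in Excel. Here is a rough pandas sketch, assuming the JMeter results CSV uses the default `timeStamp` (ms), `elapsed` (ms) and `label` columns; adjust the column names to match your save configuration:

```python
import pandas as pd

df = pd.read_csv("results.csv")                 # JMeter results file in CSV format
df["end"] = df["timeStamp"] + df["elapsed"]     # end time of each sample in ms

per_label = df.groupby("label").agg(
    samples=("label", "size"),
    start=("timeStamp", "min"),                 # lowest start time per label
    end=("end", "max"),                         # highest (timestamp + elapsed) per label
)
per_label["window_sec"] = (per_label["end"] - per_label["start"]) / 1000.0
per_label["throughput_per_sec"] = per_label["samples"] / per_label["window_sec"]
per_label["throughput_per_min"] = per_label["throughput_per_sec"] * 60

print(per_label[["samples", "throughput_per_sec", "throughput_per_min"]])
```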

What does the total throughput mean in the JMeter Aggregate Graph?

When I carry out load testing in JMeter I have a list of samplers, and each sampler reports its own throughput. However, the Aggregate Graph / Summary Result has a TOTAL row at the bottom that appears to add up all the throughputs. What does this signify?
Can I just use the TOTAL throughput as the throughput of the entire test run? Why does the Summary Report seem to add up all the sampler throughputs rather than showing the average throughput?
In the following picture I ran a load test with 2 users and a ramp-up time of 2.
As shown above, the TOTAL actually seems to sum up the throughputs rather than aggregating them.
However, when I carry out the test with 1 user and a ramp-up time of 1, it aggregates the throughputs and displays something like the average throughput of the samplers.
In the figure below I carried out the test with 1 thread and a ramp-up time of 1.
Is this a bug?
No, it's not a bug!
The Throughput is the number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
The Throughput is the real load processed by your server (Application under test) during a run but it does not tell you anything about the performance of your server during this same run.
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So, in your case for 2 users the application handled 10.7 requests/second, and for the single user the application handled 22.9 requests/second.
The TOTAL is not a sum here: if you add up the throughputs in your screenshot you get around 14.4/sec, so the displayed TOTAL is not the sum of all the throughputs. It is a value calculated from the load you applied and the throughput your application could sustain under it.
In your case, when one user accesses the application it sustains 22.9 requests per second, but when two users access it, it sustains 10.7 requests per second.
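To see why the TOTAL row is neither a simple sum nor a simple average, here is a hedged sketch with made-up numbers (not the figures from the screenshots): each label's throughput uses that label's own time window, while the TOTAL uses all requests over the whole test window.

```python
# Hypothetical figures: two samplers that run back-to-back inside each iteration,
# so each label is only active for half of the overall 10-second test window.
label_a = {"requests": 30, "window_sec": 5.0}   # active during the first 5 s
label_b = {"requests": 40, "window_sec": 5.0}   # active during the last 5 s

tp_a = label_a["requests"] / label_a["window_sec"]        # 6.0/sec for label A
tp_b = label_b["requests"] / label_b["window_sec"]        # 8.0/sec for label B

total_requests = label_a["requests"] + label_b["requests"]
total_window_sec = 10.0                                   # first start to last end
total = total_requests / total_window_sec                 # 7.0/sec, not 6.0 + 8.0

print(tp_a, tp_b, total)   # 6.0 8.0 7.0
```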
Please look here for more info about JMeter throughput:
Jmeter aggregate report total throughput - how is calculated
In performance testing, the average is something we generally avoid relying on.
Going back to the actual question: consider you have 5 requests in one workflow and you run this test for 50 iterations, making 250 requests during the load test.
Now you want to analyze individual request performance as well as overall system performance. When you want to drill down and look at an individual request in order to find bottlenecks, you look at the throughput and response time of that request.
If you want to find the overall load your system can handle, look at the total throughput.

What does this mean in JMeter load? 100 in 13.2s = 7.4/s

I have checked for a load of 100 and got the result 100 in 13.2/s = 7.4/s.
So what is the meaning of 100 in 13.2/s = 7.4/s?
It means the number of executed samples or requests is 100, the test duration is 13.2 seconds, and the throughput is 7.4/s. So your application handled, on average, 7.4 requests per second during those 13.2 seconds, and the total number of requests in that test was 100.
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
In fact, there's been a mistake in the question: it should be "100 in 13.2s", not "100 in 13.2/s".
For further detail, go through the Apache JMeter User Manual: Glossary & Elements of a Test Plan.
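For completeness, a minimal check of that summariser line in Python. Note that the numbers printed by the summariser are rounded for display, so recomputing from them only approximates the reported rate:

```python
requests = 100
printed_duration_sec = 13.2                 # rounded duration from the summariser line
print(requests / printed_duration_sec)      # ~7.58/sec
print(requests / 7.4)                       # ~13.5 s: the duration implied by 7.4/s
```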
