Why does the Transaction Controller have its own average time in the Summary Report, separate from the overall average? - JMeter

I ran a JMeter script with all the samplers under a Transaction Controller, as shown in the image below.
When I generated the Summary Report for this test, I found that under the Average column the Transaction Controller row shows the total of the average times of all the samplers.
Question 1: Why does the Transaction Controller appear as a sample itself?
Question 2: (See the Summary Report image below.) Shouldn't the total average be 7608/17 (where 7608 is the Transaction Controller average and 17 is the number of samples)? In the Summary Report below you can see the average shown is double that value: 7608/17*2 = 895. Can you please explain why it is doubled?
Similarly, when I ran the test with 20 users the average was again 895, which seems to come from 7608/340*40 = 895 (Transaction Controller average time = 7608, number of samples = 340). There too I don't understand why the value 40 (which is double the number of users) is multiplied. Please explain. Thank you.

This is the whole idea of the Transaction Controller: it measures the cumulative duration of all the samplers in its scope, i.e. the time it takes for all of them to complete.
This is not the average, this is the sum.
See Using JMeter's Transaction Controller article for comprehensive information on using the Transaction Controller.
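A minimal sketch of the arithmetic (not JMeter source code, and the child timings below are made up) showing why the Transaction Controller's own "sample" reports the sum of its children rather than their average:

    // Illustration only: the Transaction Controller adds one extra "sample"
    // whose elapsed time covers all child samplers in its scope (a sum, not an average).
    public class TransactionControllerMath {
        public static void main(String[] args) {
            // hypothetical elapsed times (ms) of the samplers inside the controller
            long[] childElapsedMs = {450, 380, 900, 1200, 678};

            long transactionElapsedMs = 0;
            for (long t : childElapsedMs) {
                transactionElapsedMs += t;   // cumulative duration of the whole scope
            }
            double childAverageMs = (double) transactionElapsedMs / childElapsedMs.length;

            System.out.println("Transaction Controller 'sample': " + transactionElapsedMs + " ms");
            System.out.println("Average of the child samplers:   " + childAverageMs + " ms");
        }
    }

So the Transaction Controller row in the Summary Report is one extra sample whose value is the cumulative time, which is why it dwarfs the averages of the individual requests.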

Related

JMeter Transactions Per Second do not represent actual requests processed per second

I am confused about which is the right parameter to find how many requests my service can handle in a second.
E.g., according to the docs & this post, TPS (transactions/sec) is calculated based on the elapsed time of the request, which seems fair when you have one service instance. E.g., my elapsed time is 1 second, so my TPS is 1, which makes sense. But the calculation fails when I have 3 service instances (horizontally scaled): the elapsed time remains the same, yet now I can process 3 concurrent requests in that same second, which should ideally read back as 3 TPS, but it doesn't.
Q: Then what is the right parameter in the JMeter report to check for this? Or is my theory wrong?
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
And a request is something produced by a JMeter Sampler.
If you're doing some scalability testing you can measure it as follows:
Run a stress test with 1 service instance, i.e. start with 1 user and gradually increase the load while watching TPS. At some point you will reach the stage where increasing the number of users won't result in increased TPS due to some bottleneck. Note the number of users and the TPS just before the bottleneck hits you.
Re-run your test with 3 service instances, you should see that the number of users and TPS before the bottleneck is higher now.
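A small illustration of the point above, with made-up numbers: the per-request elapsed time can stay at 1 second while the throughput still triples, because throughput is derived from the wall-clock window, not from the elapsed time of a single request. Note that JMeter will only record 3 requests in that window if it actually sends 3 concurrent requests (e.g. 3 threads).

    // Sketch (hypothetical numbers): why elapsed time alone does not equal TPS.
    // Throughput = requests / wall-clock duration, so concurrent requests that
    // finish in the same second all count toward that second.
    public class ThroughputVsElapsed {
        public static void main(String[] args) {
            double elapsedPerRequestSec = 1.0; // each request still takes ~1 s

            // 3 requests served in parallel by 3 instances within the same 1 s window
            int requestsCompleted = 3;
            double wallClockSec = 1.0;         // start of first sample to end of last

            double throughput = requestsCompleted / wallClockSec; // = 3 req/s
            System.out.printf("Elapsed time per request: %.1f s%n", elapsedPerRequestSec);
            System.out.printf("Throughput: %.1f req/s%n", throughput);
        }
    }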

How is the throughput value calculated in the Summary Report when there is more than one sample?

I have created a test plan with number of threads = 1, ramp-up period = 1 and loop count = 1.
If I want to verify the throughput value of the 2nd label, I am using this formula: 2/5, i.e. (number of samples / average time), which gives 0.4 ms, but the value JMeter shows is 4.9/min. And how are the last two rows of the Summary Report calculated, the Test row (it is my Transaction Controller) and the Total row? Please explain with the formula. The image of my Summary Report is at the given link.
summary report
You're using the wrong formula. According to the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So you should be dividing the number of requests not by average response time, but by the whole duration of the test.
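As an illustration with hypothetical numbers (the real duration isn't visible in the question): if the 2 samples for that label spanned roughly 24.5 seconds of test time, the reported 4.9/min figure falls out of the formula above.

    // Hypothetical figures: 2 samples observed over ~24.5 s of test time.
    // Throughput = requests / total elapsed time (NOT requests / average response time).
    public class LabelThroughput {
        public static void main(String[] args) {
            int requests = 2;
            double totalElapsedSec = 24.5;     // start of first sample to end of last

            double perSecond = requests / totalElapsedSec;
            double perMinute = perSecond * 60; // JMeter switches units for small values

            System.out.printf("Throughput: %.3f/sec = %.1f/min%n", perSecond, perMinute);
        }
    }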
If you want to exclude some samplers which are "not interesting", use the Filter Results Tool, where you can specify the label(s) of the sampler(s) you would like to get metrics for.
The Filter Results Tool can be installed using the JMeter Plugins Manager.

What does the total throughput mean in the JMeter Aggregate Graph?

When I carry out load testing in JMeter I have a list of samples, and each sample reports its own throughput. However, in the Aggregate Graph or Summary Result there is a Total row at the bottom that appears to add up all the throughputs. What does this signify?
Can I just use the total throughput as the total throughput of the entire test run? Why does the Summary Report add up all the sample throughputs rather than showing the average throughput?
In the following picture I ran the load test with 2 users and a ramp-up time of 2.
As shown above, the Total row actually sums up the throughput rather than aggregating it.
However, when I carry out the test with 1 user and a ramp-up time of 1, it aggregates the throughput and displays the average throughput of the samplers.
In the figure below I carried out the test with 1 thread and a ramp-up time of 1.
Is this a bug?
No, it's not a bug!
The Throughput is the number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
The Throughput is the real load processed by your server (Application under test) during a run but it does not tell you anything about the performance of your server during this same run.
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So, in your case, for 2 users: the application handled 10.7 requests/second.
And for the single user: the application handled 22.9 requests/second.
It is not summing up here. If you check your screenshot, the sum of the individual throughputs would come to around 14.4/sec, so the Total is not the sum of all the throughputs. It is a value calculated from the load you generated, and your application supported the throughput shown.
In your case, if one user accesses the application it supports 22.9 requests per second, but when two users access the application it supports 10.7 requests per second.
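To make the "not a sum" point concrete, here is a sketch with invented numbers (not taken from the screenshot): each label's throughput is computed over that label's own time window, while the Total row uses the whole test window, so the Total is generally smaller than the sum of the rows.

    // Illustrative numbers only: how the Total row differs from summing per-label rows.
    public class TotalRowThroughput {
        public static void main(String[] args) {
            // hypothetical: two labels, 20 requests each
            int requestsLabelA = 20;
            int requestsLabelB = 20;

            // each label is only active for part of the run; the Total row spans it all
            double labelAWindowSec = 2.0;
            double labelBWindowSec = 2.5;
            double totalWindowSec  = 3.7;  // first sample start -> last sample end

            System.out.printf("Label A: %.1f/sec%n", requestsLabelA / labelAWindowSec);
            System.out.printf("Label B: %.1f/sec%n", requestsLabelB / labelBWindowSec);
            System.out.printf("Total:   %.1f/sec (not the sum of the two rows)%n",
                    (requestsLabelA + requestsLabelB) / totalWindowSec);
        }
    }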
Please look here for more info about JMeter throughput.
JMeter Aggregate Report total throughput - how is it calculated
In performance testing, the average is something we all avoid.
Going back to the actual question: consider you have 5 requests in one workflow and you run this test for 50 iterations, so you make 250 requests during the load test.
Now you want to analyze individual request performance as well as overall system performance. When you want to drill down and look at an individual request in order to find bottlenecks, look at the throughput and response time of that request.
If you want to find the overall load your system can handle, look at the total throughput.
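A quick sketch of that example's arithmetic, assuming a hypothetical 100-second test duration: each of the 5 requests contributes its own throughput, while the Total row reflects all 250 requests over the same window.

    // Sketch of the 5-requests x 50-iterations example with an assumed 100 s duration.
    public class WorkflowThroughput {
        public static void main(String[] args) {
            int requestsPerIteration = 5;
            int iterations = 50;
            int totalRequests = requestsPerIteration * iterations; // 250

            double testDurationSec = 100.0;  // hypothetical overall duration

            // each individual request is executed 50 times over the same window
            double perRequestThroughput = iterations / testDurationSec;    // 0.5/sec
            double totalThroughput      = totalRequests / testDurationSec; // 2.5/sec

            System.out.printf("Per-request throughput: %.2f/sec%n", perRequestThroughput);
            System.out.printf("Total throughput:       %.2f/sec%n", totalThroughput);
        }
    }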

JMeter: Aggregate report analysis

I want to test the capacity that the web app can handle without breaking. How do I get the average requests per second from the Aggregate Report?
Is throughput equal to average requests per second?
I don't really need the Apache definition, please keep it simple.
Number of threads: 25
Ramp-up: 1
Loop: 10
I have 3 slaves.
Samples: 250
Avg: 1594
Throughput: 10.4
Your question contains the answer: Throughput: 10.4
According to JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So my assumption is that you are generating a load of roughly 10 requests per second (it may be per minute, however; I would need to know how long your 250 samples took to execute to tell for sure).
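A quick back-of-the-envelope check of that assumption: the unit attached to the 10.4 figure determines how long the 250 samples must have taken.

    // Back-of-the-envelope check: duration implied by 250 samples at throughput 10.4.
    public class ThroughputUnits {
        public static void main(String[] args) {
            int samples = 250;
            double throughput = 10.4;

            // duration = requests / throughput, in whatever unit the throughput uses
            double duration = samples / throughput; // ~24

            System.out.printf("If 10.4/sec, the run lasted ~%.0f seconds%n", duration);
            System.out.printf("If 10.4/min, the run lasted ~%.0f minutes%n", duration);
        }
    }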
If you're interested in "requests per second over time" you can utilize the Server Hits Per Second listener, which is available via the JMeter Plugins project.
An alternative option for visualizing your load test results is the Loadosophia.org cloud, where you can upload your test results and see graphs and distributions, export the load test report as PDF, etc.

JMeter output result interpretation help required

I would like to understand the JMeter output in depth.
I am confused by the 'throughput rate' concept. Does it mean that the server can only handle 48.1 requests/min at the given load, or does it mean something else? What is the difference between the total throughput rate and the throughput rate shown by individual requests? In my case there are 8 requests sent, and an individual request shows a throughput rate of 6.1/min. Please explain.
I need to suggest changes to the server side and explain the JMeter report. Please suggest how I can explain what needs to be done.
The total summary report is as below:
Total Users: 100
Ramp-up time: 1000 s
Total Samples: 800
Min: 325
Max: 20353
Std. Dev: 4524.91
Throughput: 48.1/min
Error: 0.38%
Thanks in advance.
As per JMeter Glossary
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So you are providing a "load" of roughly 0.8 requests per second, which is quite low.
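For reference, the 0.8 figure is just the reported per-minute throughput converted to seconds, and it lines up with the 1000-second ramp-up from the summary:

    // Converting the reported throughput: 48.1 requests/minute ~= 0.8 requests/second.
    public class ThroughputConversion {
        public static void main(String[] args) {
            double perMinute = 48.1;
            double perSecond = perMinute / 60.0; // ~0.80

            System.out.printf("%.1f/min = %.2f/sec%n", perMinute, perSecond);

            // Sanity check: 800 samples at ~0.8/sec implies roughly 1000 s of test time,
            // which matches the 1000 s ramp-up period in the summary above.
            System.out.printf("Implied test duration: ~%.0f s%n", 800 / perSecond);
        }
    }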
JMeter provides a test element which controls this "Throughput" value so you can choose whether you will be simulating "N" concurrent users or sending "N" requests per second. Take a look at the How to use JMeter's Throughput Constant Timer guide for more details on goal-oriented load test scenario implementation with JMeter.

Resources