90th percentile accuracy in JMeter

I have seen some 'odd' results in JMeter when it comes to calculating the 90th percentile response time from a test. I have a results.jtl from a one-hour test, with approximately 20,000 data points per transaction. If I use the file to generate a dashboard with:
jmeter -g results.jtl -o c:\temp\report
I can see I get a 90th percentile response time of 2261.10. If I look at the same results.jtl file through the Aggregate Report in JMeter (post-test), I get 2260. I also used the percentile function in Excel, which gave me 2260.3 (which ties in with the Aggregate Report). The 95th percentile is similar: I get 3551.50, 3549 and 3549.3 from each of the three calculations.
Has anyone else seen this type of discrepancy between reports? Is there anything I can do to correct the dashboard?

The differences are due to an approximation that JMeter makes when computing the report.
This can be adjusted by increasing the value of this property:
jmeter.reportgenerator.statistic_window=20000
See the documentation for this property and this blog post:
https://dzone.com/articles/how-to-achieve-better-accuracy-in-latency-percenti
By default JMeter tries not to consume too much memory, so it only keeps a limited window of sample values for the percentile calculation.
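For example, in user.properties (a sketch: the window needs to be at least as large as the per-transaction sample count, assumed here to be around 20,000 as in the question, for the percentile to be exact):
jmeter.reportgenerator.statistic_window=200000
or, equivalently, on the command line when generating the report:
jmeter -Jjmeter.reportgenerator.statistic_window=200000 -g results.jtl -o c:\temp\report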

Related

How is the throughput value calculated in the summary report when we have more than one sample?

I have created a test plan with number of threads = 1, ramp-up period = 1 and loop count = 1.
To verify the throughput value of the 2nd label, I am using the formula 2/5, i.e. (number of samples / average time), which gives 0.4, but the value JMeter shows is 4.9/min. Also, how are the last two rows of the summary report calculated, i.e. the Test label (it is my Transaction Controller) and the Total? Please explain with the formula. The image of my summary report is at the given link.
summary report
You're using the wrong formula. According to the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So you should be dividing the number of requests not by average response time, but by the whole duration of the test.
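For illustration (the exact timestamps come from your results file): 4.9/min is roughly 0.082 requests/second, and 2 samples / 0.082 per second ≈ 24.5 seconds, so JMeter presumably measured about 24.5 seconds from the start of the first sample to the end of the last, giving Throughput = 2 / 24.5 s ≈ 4.9/min.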
If you want to exclude some samplers which are "not interesting", use the Filter Results Tool, where you can specify the label(s) of the sampler(s) you would like to get metrics for.
Filter Results Tool can be installed using JMeter Plugins Manager

Throughput calculation using response time and number of requests

I received a requirement where I need to display the response time, number of running threads, latency and throughput in one report. I used the code below in a Beanshell PostProcessor to display throughput, response time and number of threads:
long repons=prev.getTime();
vars.put("responseTime",String.valueOf(recons));
//print("res" +responseTime);
log.info("Response time" + repons);
long thread=prev.getAllThreads();
vars.put("threads", Integer.toString(prev.getAllThreads()));
log.info("Thread number is"+thread);
float throughput=thread/repons;
log.info("Through put"+throughput);
I guess it is wrong. Can anyone help with this?
You have a syntax error in your script: you have repons in the first line and recons in the second; they should be the same.
It is better to use JSR223 Elements and Groovy language for scripting.
And finally, your approach is wrong. According to the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So you need to divide the total number of requests by the total time taken to execute these requests; your "code" will most likely return zero throughput, because thread/repons is an integer division of a small thread count by a much larger response time in milliseconds.
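As a minimal JSR223/Groovy sketch of the correct calculation (not a drop-in fix: the testStartTime and totalRequests property names are invented for this example, and because props is shared across all threads the counter is only approximate under concurrent load):
long now = System.currentTimeMillis()
// remember when the test started (set once, by the first sample)
if (props.getProperty('testStartTime') == null) {
    props.put('testStartTime', String.valueOf(now))
}
long start = Long.parseLong(props.getProperty('testStartTime'))
// count every completed sample
long total = Long.parseLong(props.getProperty('totalRequests', '0')) + 1
props.put('totalRequests', String.valueOf(total))
// Throughput = (number of requests) / (total time), in requests per second
double throughput = total / (Math.max(now - start, 1) / 1000d)
vars.put('responseTime', String.valueOf(prev.getTime()))
vars.put('threads', String.valueOf(prev.getAllThreads()))
log.info('Response time: ' + prev.getTime() + ' ms, threads: ' + prev.getAllThreads() + ', throughput: ' + String.format('%.2f', throughput) + '/s')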
You can consider the following workarounds:
Use a Backend Listener and a 3rd-party visualisation tool (a configuration sketch follows this list); see the Real-time results article for details.
Run your JMeter test via the Taurus framework, which has an Interactive Reporting feature.
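For the first option, here is a sketch of a Backend Listener configured for InfluxDB (the URL, database and application names are placeholders, and this client ships with JMeter 3.2+); Grafana or a similar tool can then chart throughput over time:
Backend Listener implementation: org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient
influxdbUrl: http://localhost:8086/write?db=jmeter
application: myapp
measurement: jmeter
summaryOnly: false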

How do I get the average response time in the JMeter 3.0 Dashboard

In the JMeter 3.0 Report Dashboard, the Statistics table appears to be missing the average response time column. Min, Max, 90th, 95th and 99th percentile response times, along with the Label, #Samples, KO(?), Error%, Throughput and KB/sec columns, are present, but Average is not.
The Statistics table in the documentation at http://jmeter.apache.org/usermanual/generating-dashboard.html also shows Average as missing.
How do I generate the average response time in the Report Dashboard Statistics Table?
Average response time is not available in this report, as it is considered a misleading metric.
As of 3.0 it is not there; it may be added in the future if enough people ask for it, see:
https://jmeter.apache.org/issues.html

JMeter: Aggregate report analysis

I want to test the capacity the web app can handle without breaking. How do I get the average requests per second from the aggregate report?
Is throughput equal to average requests per second?
I don't really need the Apache definition, please make it simple.
Number of threads: 25
Ramp-up: 1
Loop: 10
I have 3 slaves.
Samples: 250
Avg: 1594
Throughput: 10.4
Your question contains the answer: Throughput: 10.4
According to JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So my assumption is that you generate a load of roughly 10 requests per second (it may be per minute, however; I would need to know how long your 250 samples took to execute to tell for sure).
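For illustration: 250 samples / 10.4 ≈ 24, so if the whole run took about 24 seconds the throughput unit is seconds (10.4 requests/second); if it took about 24 minutes, the same figure would mean 10.4 requests/minute.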
If you're interested in "requests per second over time" you can utilize the Server Hits Per Second listener, which is available via the JMeter Plugins project.
An alternative option for visualising your load test results is the Loadosophia.org cloud, where you can upload your test results and see graphs and distributions, export the load test report as a PDF, etc.

Apache JMeter - listener results interpretation

I use JMeter to test my web application. I have an Aggregate Graph with some values, but I actually don't know what they mean...
The Aggregate Graph shows, for example:
average
median
min
max
I don't know what those values refer to.
What does the 90% line refer to?
I also don't know what the unit of throughput per second is (bytes?).
Does anybody know?
JMeter documentation shows only general information about reports and listeners.
This link contains a helpful explanation of JMeter usage, results, tips, considerations and deployment.
Good Luck!
Throughput - the number of requests per unit of time, here per second. So if two users open your website at the same time, the throughput will be 2/s: 2 requests in one second.
How this can be useful: check your website analytics for the number of hits per day, and convert that into a throughput. If analytics shows 200,000 hits per day, this means: 200,000 / 86,400 (seconds in one day) = 2.31 hits/s.
Average - the average response time. I think you know what response time is: the time between sending a request and getting the response from the server. To get the average response time, sum the response times of all samplers and divide by the number of samplers. A sampler here means a user request, or hit; the meaning is the same.
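For illustration: response times of 100 ms, 200 ms and 300 ms give an average of (100 + 200 + 300) / 3 = 200 ms.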
Min - the minimum response time across all samplers; in other words, the fastest response.
Max - opposite of Min, the slowest response.
Throughput is generally measured in requests/second in JMeter.
As for knowing which requests fall within the 90% line, there is not really a way to do it with this listener. It represents aggregate information, so it only reports on all the tests together, not on specific results.
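For illustration: if you sort 100 samples by response time, the 90% line is approximately the 90th value, meaning 90% of the requests completed at or below that time.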
For some different methods and ideas on getting useful information out of the responses, take a look at this JMeter wiki page on log analysis.
If you don't already have it, JMeter Plugins has a lot of useful controllers and listeners that can make understanding the results easier as well.
