Understanding JMeter terms and results

I am using JMeter to test my web application on Tomcat. I just want to know the meaning of these terms in the simplest words: Deviation, Throughput, Average, Median, and No. of Samples.
I have tested with:
Number of Threads (Users): 1000
Ramp-up Period: 1
Loop Count: 1
No extra settings.
I am attaching the pics for reference. Can anyone tell whether the result is good or not?

No. of Samples: the total number of requests sent to the server during the test.
Average: the mathematical average of the response times. This is the number quoted as the average response time of your HTTP service.
Deviation: the standard deviation of the response times. This shows how much the response time varies; higher values generally indicate a problem.
Ideally, your average, max and min response times would all be the same. Of course, that is not realistic, so you aim to keep the deviation as low as possible. High values generally indicate system stress, unless you are deliberately doing something like exponential backoff. Your min and max values show a very large gap and your deviation is far too high. For a simple HTTP service, the min and max response times should be reasonably close.
In summary, your JMeter test result looks worrying to me. It leads me to believe either that you ran the test and the server on the same machine, overloading it, or that the code is buggy and bogs down under load.
Throughput: in simple terms, the number of requests you can process per second (or minute).
Median: the mathematical median of the response times. Arrange the response times in order and take the middle value. This should be as close to the average as possible.
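To make these definitions concrete, here is a minimal sketch in plain Groovy (not JMeter code, with made-up response times) that derives the same figures from a handful of samples:
// Hypothetical response times in milliseconds
def responseTimes = [120, 150, 130, 900, 140]

int n = responseTimes.size()                       // No. of Samples
double average = responseTimes.sum() / (double) n  // Average
def sorted = responseTimes.sort(false)             // sorted copy
def median = sorted[n.intdiv(2)]                   // Median (middle value; n is odd here)

// Deviation: the population standard deviation, which is what JMeter reports
double variance = responseTimes.collect { (it - average) ** 2 }.sum() / n
double deviation = Math.sqrt(variance)

// Throughput: requests divided by total elapsed time (assume the run took 2 seconds)
double throughput = n / 2.0

println "Samples=$n Average=$average Median=$median Deviation=${Math.round(deviation)} Throughput=${throughput}/s"
With these made-up values the deviation (about 306 ms) ends up higher than the average (288 ms) because of the single 900 ms outlier, which is the situation discussed in the next question.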

Related

JMeter deviation is high but the report has zero errors

Ramp-up: 400
Threads: 100
Loop count: 10
The deviation is higher than the average value. As far as I know, the deviation should be less than (or around half of) the average, yet the report has 0 errors.
Can anyone tell me what it means when the deviation is higher, and whether developers need to fix this?
Also, am I setting the ramp-up time correctly? What should the ramp-up period be, in general, for 100 users? When I run the same test with a ramp-up of 100 I get timeout errors in my report.
As per JMeter Glossary:
Standard Deviation is a measure of the variability of a data set. This is a standard statistical measure. See, for example: Standard Deviation entry at Wikipedia. JMeter calculates the population standard deviation (e.g. STDEVP function in spreadsheets), not the sample standard deviation (e.g. STDEV).
As per Understanding Your Reports: Part 3 - Key Statistics Performance Testers Need to Understand
Standard Deviations
The standard deviation is the measurement of the density of the cluster of the data around the sought value (mean). Low standard deviation means that points are closer to the mean. High standard deviation means the points are farther away. This parameter can help determine how reliable the data is. If the standard deviation is high, this means that results vary very much, and the analysis should be conducted accordingly.
If the standard deviation is higher than the average response time, it basically means your response times are very widely spread, i.e. some samplers take far longer than the typical one. There may be nothing to fix: it can simply be expected that some samplers last longer than others. For example, a "Logout" operation is normally very quick while "search" operations can last much longer; if your user does multiple searches and only one logout, the deviation can end up higher than the average. You can look at the 90%, 95% and 99% lines of the Aggregate Report listener to see the response time within which the given percentage of requests completes, for each action and overall, compare the values with your NFRs or SLAs, and raise issues if necessary.
In itself, a deviation higher than the average doesn't necessarily mean there is a performance problem; you need to correlate the other metrics with the business requirements.
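As a rough illustration of that point (made-up numbers, plain Groovy rather than JMeter): a workload mixing many quick samples with a few slow ones can easily produce a population standard deviation above the mean, while the percentile lines still describe the behaviour sensibly:
// Hypothetical mix: many fast "logout"-style samples plus a few slow "search"-style ones
def times = [50] * 90 + [3000] * 10                // 90 samples at 50 ms, 10 at 3000 ms
int n = times.size()

double mean = times.sum() / (double) n             // 345 ms
double stdDev = Math.sqrt(times.collect { (it - mean) ** 2 }.sum() / n)

def sorted = times.sort(false)
def pct = { int p -> sorted[(p * n).intdiv(100) - 1] }   // simple nearest-rank percentile

printf("mean=%.0f ms, stddev=%.0f ms, 90%%=%d ms, 95%%=%d ms, 99%%=%d ms%n",
       mean, stdDev, pct(90), pct(95), pct(99))
Here the deviation (roughly 885 ms) is well above the mean even though nothing is broken; the slow samples simply dominate the spread, which is why comparing the 90/95/99% lines against your SLAs is usually more informative than the deviation on its own.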

Throughput or Standard Deviation in JMeter

Once execution has completed in JMeter, should I consider the throughput value or the standard deviation values in the Summary Report for result analysis?
You must consider both of the values in order to analyze the results.
In the Summary Report:
Throughput: the number of requests per unit of time (seconds, minutes, hours) sent to your server during the test.
The throughput is the real load processed by your server during a run but it does not tell you anything about the performance of your server during this same run. This is the reason why you need both measures in order to get a real idea about your server’s performance during a run. The response time tells you how fast your server is handling a given load.
Response time: the elapsed time from the moment a given request is sent to the server until the moment the last bit of information has returned to the client.
Average: the arithmetic mean (μ = (1/n) · Σi=1…n xi) of the response times of all your samples.
Min and Max are the minimum and maximum response times.
Now, an important thing to understand is that the mean value can be very misleading, as it does not show you how close (or far) your values are to the average. For this purpose we need the deviation value, since the average can be the same for very different distributions of sample response times.
Deviation: the standard deviation (σ) measures the mean distance of the values from their average (μ). It gives you a good idea of the dispersion or variability of the measures around their mean value.
The following equation shows how the (population) standard deviation (σ) is calculated:
σ = √( (1/n) · Σi=1…n (xi − μ)² )
For details, see here.
So, if the deviation is low compared to the mean, it indicates that your measurements are not very dispersed (they are mostly close to the mean) and that the mean value is meaningful.
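As a quick illustration of why the average alone can mislead (made-up numbers, plain Groovy): two runs with the same mean response time but very different deviations.
// Two hypothetical runs with the same average response time (300 ms)
def steadyRun  = [290, 300, 310, 295, 305]         // tightly clustered around the mean
def erraticRun = [50, 60, 55, 45, 1290]            // same mean, huge spread

def populationStdDev = { List<Number> xs ->
    double mu = xs.sum() / (double) xs.size()
    Math.sqrt(xs.collect { (it - mu) ** 2 }.sum() / xs.size())
}

printf("steady:  mean=%.0f ms, sigma=%.1f ms%n", steadyRun.sum() / 5.0, populationStdDev(steadyRun))
printf("erratic: mean=%.0f ms, sigma=%.1f ms%n", erraticRun.sum() / 5.0, populationStdDev(erraticRun))
Both runs report a 300 ms average, but the σ of the second one immediately shows that the individual measurements are all over the place, which is the information the Deviation column adds on top of the Average.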

Throughput calculation using response time and number of requests

I received a requirement where I need to display the response time, number of running threads, latency and throughput in one report. I used the code below in a Beanshell PostProcessor to display throughput, response time and number of threads:
long repons=prev.getTime();
vars.put("responseTime",String.valueOf(recons));
//print("res" +responseTime);
log.info("Response time" + repons);
long thread=prev.getAllThreads();
vars.put("threads", Integer.toString(prev.getAllThreads()));
log.info("Thread number is"+thread);
float throughput=thread/repons;
log.info("Through put"+throughput);
I guess it is wrong. Can anyone help with this?
You have an error in your script: the variable is called repons in the first line and recons in the second; they should be the same.
It is better to use JSR223 elements and the Groovy language for scripting.
And finally, your approach is wrong. According to the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So you need to divide the total number of requests by the total time taken to execute them; your "code" will most likely return zero throughput (dividing a small thread count by a response time in milliseconds using integer arithmetic truncates to zero).
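If you do want a rough per-sample figure from a script rather than a listener, something along these lines (a JSR223 PostProcessor in Groovy; the property names test.startTime and test.requestCount are arbitrary examples, and this is only a sketch of the requests-divided-by-elapsed-time formula, not JMeter's built-in calculation) would avoid the truncation problem:
// JSR223 PostProcessor (Groovy): rough cumulative throughput = samples / elapsed seconds.
// Note: updates to props from multiple threads are not synchronized here.
long now = System.currentTimeMillis()

// Remember when the first sample was seen and how many samples have run so far.
if (props.getProperty('test.startTime') == null) {
    props.put('test.startTime', String.valueOf(now))
}
long count = Long.parseLong(props.getProperty('test.requestCount', '0')) + 1
props.put('test.requestCount', String.valueOf(count))

long start = Long.parseLong(props.getProperty('test.startTime'))
double elapsedSec = Math.max(now - start, 1L) / 1000.0d
double throughput = count / elapsedSec

vars.put('responseTime', String.valueOf(prev.getTime()))
vars.put('threads', String.valueOf(prev.getAllThreads()))
vars.put('throughput', String.valueOf(throughput))
log.info('samples=' + count + ', elapsed=' + elapsedSec + ' s, throughput=' + throughput + ' req/s')
For reporting purposes, though, the listener-based options below are the more reliable source of throughput figures.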
You can consider the following workarounds:
Use a Backend Listener and a 3rd-party visualisation tool; see the Real-time Results article for details.
Run your JMeter test via the Taurus framework, which has an Interactive Reporting feature.

JMeter throughput results differ although average is similar

Ok so I ran some stress tests on an application of mine and I came across some weird results compared to last time.
The Throughput was way off although the averages are similar.
The number of samples did vary; however, as I understand it, the throughput is calculated by dividing the number of samples by the time it took.
In my understanding, if the average time was similar, the throughput should be similar even though the number of samples varied...
This is what I have:
PREVIOUS
RECENT
As you can see the throughput difference is pretty substantial...
Can somebody please explain whether my logic is correct, or point out why that is not the case?
Throughput is the number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
The throughput is the real load processed by your server during a run but it does not tell you anything about the performance of your server during this same run. This is the reason why you need both measures in order to get a real idea about your server’s performance during a run. The response time tells you how fast your server is handling a given load.
The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
Throughput = (number of requests) / (total time).
Average: the arithmetic mean (μ = (1/n) · Σi=1…n xi) of the response times of all your samples.
Response time is the elapsed time from the moment when a given request is sent to the server until the moment when the last bit of information has returned to the client.
So these are two different things: two runs can have a similar average response time yet very different throughput if they send a different number of samples or spread them over a different amount of elapsed time (ramp-up, timers and idle periods all count towards the total time).
Think of a trip to Disney or your favorite amusement park. Let's define the capacity of the ride to be the number of people that can sit on the ride per turn (think roller coaster). Throughput will be the number of people that exit the ride per unit of time. Let's define service time as the amount of time you get to sit on the ride, and response time as your time queuing for the ride plus the service time.
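To put rough numbers on it (entirely made-up figures, plain Groovy): two runs can share the same average response time per sample and still end up with very different throughput, because throughput also depends on how many samples were completed and over how long a period.
// Two hypothetical runs, both averaging ~200 ms per sample
def runs = [
    previous: [samples: 12000, elapsedSec: 600],   // 12,000 samples in 10 minutes
    recent  : [samples:  4000, elapsedSec: 600],   //  4,000 samples in 10 minutes
]

runs.each { name, r ->
    // Throughput = (number of requests) / (total time), as in the JMeter glossary
    double throughput = r.samples / (double) r.elapsedSec
    printf("%-8s samples=%5d elapsed=%ds throughput=%.1f req/s%n",
           name, r.samples, r.elapsedSec, throughput)
}
Fewer threads, think time, timers or idle periods all shrink the number of samples sent in the same elapsed time, so the throughput drops even while each individual sample still takes about the same 200 ms.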

Specific Cache Hit Rate calculation

Scenario:
Suppose we have an infinite cache size. Caching is limited only by a timeout, and the value of this timeout is half an hour. The cache is initially empty.
Problem:
We have 50,000 distinct requests. Our system queries them randomly at a rate of 15 requests/second, i.e. 27,000 requests in half an hour. What kind of curve, or average value, of cache hit rate could we expect for the first 5 hours?
Note: this scenario is fixed; I need an approach to find the hit rate. If you think the tag is wrong, please suggest an appropriate one.
I think you're right and this is a math question (certainly not a programming problem).
One approach is to consider the extremes: what is the hit rate for the first query when the system starts running? For the second query? After one second? After 10? After a minute? And what is the likelihood that any random query will be found in the cache once the system has been running a long time?
These few specific values, taken together, give you a curve. I don't think great numeric precision is necessary; the long-term average and the shape of the curve are more interesting.
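If the analytical route feels awkward, a quick simulation gives the curve directly. Here is a rough Monte Carlo sketch (plain Groovy; its structure is my own assumption) of the stated scenario: 50,000 equally likely keys, 15 requests/second, entries expiring 30 minutes after being cached, unlimited cache size.
import java.util.concurrent.ThreadLocalRandom

int keys = 50000
int ratePerSec = 15
long ttlMs = 30 * 60 * 1000L
long requests = 5L * 3600 * ratePerSec                      // the first 5 hours of traffic

Map<Integer, Long> cachedAt = new HashMap<Integer, Long>()  // key -> time it entered the cache
long hits = 0

for (long i = 0; i < requests; i++) {
    long t = i * 1000L / ratePerSec                   // timestamp of this request, in ms
    int key = ThreadLocalRandom.current().nextInt(keys)
    Long stored = cachedAt.get(key)
    if (stored != null && t - stored < ttlMs) {
        hits++                                        // entry still within its half-hour timeout: hit
    } else {
        cachedAt.put(key, t)                          // miss: cache (or re-cache) the key
    }
    if ((i + 1) % (1800L * ratePerSec) == 0) {        // report every half hour of simulated time
        printf("%4.1f h: cumulative hit rate %.1f%%%n",
               (t + 1) / 3600000.0, 100.0 * hits / (i + 1))
    }
}
Plotting the printed points gives the curve: the hit rate starts at zero, climbs steeply over roughly the first half hour while the cache fills, and then levels off at a plateau determined by the ratio of the 27,000 requests per timeout window to the 50,000 distinct keys.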
