JMeter : Summary Report : Throughput

Is the total throughput shown in the last row of the Summary Report correct? I am using JMeter 2.11.
I find it difficult to reproduce the displayed figure by hand.
I followed the formula (x/sec): number of requests / total response time (in seconds),
or 1 / average response time (in seconds).
For example: 50 requests with an average response time of 2000 ms each would give throughput = 50 / (50 * 2) = 0.5/sec.
But JMeter shows a different value than 0.5/sec (30/min).
Can someone help me here?

I also had a similar assumption, but this is the formula JMeter actually uses to calculate throughput:
endTime = lastSampleStartTime + lastSampleLoadTime
startTime = firstSampleStartTime
conversion = unit time conversion value
Throughput = numRequests / ((endTime - startTime) * conversion)
In other words, throughput is based on the wall-clock span of the whole test, not on the sum of individual response times, so with concurrent threads it will not match 1 / average response time.
(I got this a few months back from the answer below:)
Calculating throughput from Jmeter jtl log file
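For cross-checking, here is a minimal Groovy sketch of that calculation run against a CSV results file (the file name results.jtl is an assumption; the default JMeter CSV output has timeStamp and elapsed columns):
// A minimal sketch of the Summary Report throughput formula over a CSV .jtl file.
// Assumes a header row with "timeStamp" (sample start, ms since epoch) and
// "elapsed" (load time, ms).
def lines = new File('results.jtl').readLines().findAll { it.trim() }
def header = lines[0].split(',')
int tsIdx = header.findIndexOf { it == 'timeStamp' }
int elIdx = header.findIndexOf { it == 'elapsed' }
def rows = lines.drop(1).collect { it.split(',') }
long startTime = rows.collect { it[tsIdx].toLong() }.min()                       // firstSampleStartTime
long endTime   = rows.collect { it[tsIdx].toLong() + it[elIdx].toLong() }.max()  // last sample start + load time
double throughput = rows.size() / ((endTime - startTime) / 1000.0)               // requests per second
println "Throughput: ${throughput} /sec"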

Related

JMeter delay between two loop counts in Loop Controller

I am trying to achieve the below use case for load testing via JMeter:
1. Search Product
2. Add to cart
3. Do payment
One user with uid = 1 will perform the above 3 steps every 5 minutes for 1 hour.
Total requests per user per hour = 12 iterations (12 * 5 min = 60 min) * 3 requests = 36 requests per hour.
Total users (threads) = 1000.
Total requests per hour = 1000 * 36 = 36000.
Let's consider the 3 requests as a single set.
I am looking for the below things:
after every 5 minutes, 1 set should be executed
the delay between two sets should be 5 minutes
Can anyone please help me in achieving the above scenario?
I have tried with the below JMeter setups.
Attempt 1:
Thread Group (threads = 1000, ramp-up = 100 sec, loop count = 1)
Loop Controller (the above 3 requests, loop count = 12)
Constant Timer = 300000 milliseconds
Attempt 2:
Thread Group (threads = 1000, ramp-up = 100 sec, loop count = 1)
Loop Controller (the above 3 requests, loop count = 12)
Constant Throughput Timer = 5 rpm
Attempt 3:
Thread Group (threads = 1000, ramp-up = 100 sec, loop count = infinite, duration = 3600 sec)
the above 3 requests directly inside the Thread Group
Constant Throughput Timer = 5 rpm
I have also tried with a Random Order Controller.
I am unable to simulate the above scenario. What I get instead is: the first request is executed 1000 times, then a delay, then the second request is executed 1000 times, then a delay, then the third request is executed 1000 times.
A Constant Timer adds a delay before each Sampler in its scope, which is why you see a pause before every request rather than between sets.
If you want to introduce a delay between 2 iterations, add a Flow Control Action sampler and define the desired delay there.
Additionally, if you want all the users to finish the action first, add a Synchronizing Timer and set the number of users to group by to be equal to the number of threads in the Thread Group. A sketch of the resulting plan is shown below.
More information on the JMeter Timers concept: A Comprehensive Guide to Using JMeter Timers
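A sketch of how the plan from the question could be arranged with these elements (element names as in current JMeter versions; the 300000 ms pause is the 5-minute gap between sets from the question, and nesting the Synchronizing Timer under the Flow Control Action makes all 1000 users regroup before pausing):
Test Plan
  Thread Group (threads = 1000, ramp-up = 100 sec, loop count = 12)
    Search Product
    Add to cart
    Do payment
    Flow Control Action (Pause, duration = 300000 ms)
      Synchronizing Timer (number of simulated users to group by = 1000)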

JSR223 Timer in JMeter

How do I create a thread delay in seconds based on the calculation (duration / throughput) using a JSR223 Timer? What should I write inside the script section? We have the duration and throughput as JMeter parameters in our Test Plan.
The easiest way is to use the Precise Throughput Timer or the Throughput Shaping Timer - both can be configured with a combination of the desired throughput and the test duration. The timers are smart enough to pause JMeter threads to reach the needed throughput.
If for some reason the above timers are not suitable, you can consider implementing so-called Pacing - a dynamic delay between Thread Group iterations.
The example code would be something like:
// Pacing: make each iteration last at least 4500 ms (the pacing length in ms)
Long pacing = 4500 - prev.getTime();
// If the last response time was below 4500 ms, delay for the remainder
if ( pacing > 0 )
{
    // iPacing is the int value of pacing if pacing is not null, otherwise null
    Integer iPacing = pacing != null ? pacing.intValue() : null;
    log.info(String.valueOf(iPacing));
    vars.put("myDelay", String.valueOf(iPacing));
    return iPacing;
}
// The response time was 4500 ms or more - no extra delay is needed
else
{
    vars.put("myDelay", "0");
    return 0;
}
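For the duration / throughput calculation from the question itself, note that a JSR223 Timer uses the script's return value as the delay in milliseconds, so a minimal sketch could look like this (the variable names duration and throughput are assumptions for however the parameters are defined in your Test Plan):
// JSR223 Timer (Groovy): the returned value is used as the delay in milliseconds.
// Assumed variables: "duration" = total test duration in seconds,
// "throughput" = number of requests to spread over that duration.
double duration = vars.get('duration') as double
double throughput = vars.get('throughput') as double
long delayMs = (long) (duration / throughput * 1000)   // seconds per request -> milliseconds
return delayMs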

Whenever I run a JMeter test with less than 10 Thread Groups, "Throughput" always shows numbers in "Minutes"

When I execute a test in JMeter with less than 10 Thread Groups, the Throughput column in the Summary Report shows the result in minutes.
Can anyone please help me?
As per the RateRenderer class source:
String unit = "sec";
if (rate < 1.0) {
rate *= 60.0;
unit = "min";
}
if (rate < 1.0) {
rate *= 60.0;
unit = "hour";
}
setText(formatter.format(rate) + "/" + unit);
So:
If the throughput is more than 1, the time unit is "seconds".
If the throughput is less than 1, it is multiplied by 60 and the time unit is set to "minutes".
If after converting to "minutes" it is still less than 1, it is multiplied by 60 again and the time unit is set to "hours".
If you need to get the throughput in hits per second from a per-minute value, just divide the value by 60.
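For example, a displayed value of 30.0/min equals 30 / 60 = 0.5 hits per second.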
Other options are:
Patch the RateRenderer class and comment out the two "if" clauses above
Use an external 3rd-party tool like BM.Sense for JMeter results analysis

Optimizing Groovy Performance

I'm working on Groovy code performance optimization. I've used jvisualvm to connect to the running application and gather CPU samples. The samples say that org.codehaus.groovy.reflection.CachedMethod.invoke takes the most CPU time. I don't see any other application methods in the samples.
What is the right way to dig into CachedMethod.invoke and understand which code lines really cause the performance penalties?
Thanks.
UPD:
I do use indy; it didn't help me.
I didn't try to introduce @CompileStatic yet, since I want to find my bottlenecks before rewriting Groovy in Java.
My problem is a bit similar to this thread: Call site caching faster than invokedynamic?
I have code that dynamically composes a Groovy script. The script template looks this way:
def evaluateExpression(Map context){
def user = context.user
%s
}
where %s is replaced with
user.attr1 == '1' || user.attr2 == '2' || user.attr3 == '3'
There is a set of replacements (20 in total) taken from the database.
The code gets the replacements from the DB, creates a GroovyScript and evaluates it.
I suppose the bottleneck is in the script execution. What is the right way to fix it? One common mitigation is sketched below.
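If the set of expressions is small and stable (20 in this case), one option - a sketch, not necessarily what the author ultimately did - is to parse each generated script once with GroovyShell and reuse the compiled Script instance, instead of re-evaluating the source text on every call:
import groovy.lang.Binding
import groovy.lang.GroovyShell
import groovy.lang.Script

class ExpressionCache {
    private final GroovyShell shell = new GroovyShell()
    // Compiled scripts, keyed by the expression text
    private final Map<String, Script> cache = [:]

    def evaluate(Map context, String expression) {
        Script script = cache[expression]
        if (script == null) {
            // Compile once; later calls reuse the compiled class
            script = shell.parse("def user = context.user\n" + expression)
            cache[expression] = script
        }
        // Note: Script instances are stateful - reuse from one thread
        // at a time, or keep one cache per thread
        script.binding = new Binding([context: context])
        return script.run()
    }
}

// Usage example
def evaluator = new ExpressionCache()
assert evaluator.evaluate([user: [attr1: '1']], "user.attr1 == '1' || user.attr2 == '2'")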
So, I've tried various things:
groovy-indy: doesn't work.
groovy-indy with some code "optimization": doesn't work. BTW, I started to play around with try/catch and as a result I made my "hotspot" run 4 times faster. I'm not good at JVM internals, but the internet says try/catch prevents certain optimizations; I took that as ground truth. I need to dig deeper to understand how it really works.
I gave up, turned off invokedynamic and rewrote my "hottest" code with @CompileStatic. It took about 3-4 hours and my code runs 100 times faster now.
Here are the initial metrics with invokedynamic support:
count = 83043
mean rate = 395.52 calls/second
1-minute rate = 555.30 calls/second
5-minute rate = 217.78 calls/second
15-minute rate = 82.92 calls/second
min = 0.29 milliseconds
max = 12.98 milliseconds
mean = 1.59 milliseconds
stddev = 1.08 milliseconds
median = 1.39 milliseconds
75% <= 2.46 milliseconds
95% <= 3.14 milliseconds
98% <= 3.44 milliseconds
99% <= 3.76 milliseconds
99.9% <= 12.19 milliseconds
Here are the @CompileStatic metrics with indy turned off. BTW, there is no reason to use @CompileStatic if indy is turned on.
count = 139724
mean rate = 8950.43 calls/second
1-minute rate = 2011.54 calls/second
5-minute rate = 426.96 calls/second
15-minute rate = 143.76 calls/second
min = 0.02 milliseconds
max = 24.18 milliseconds
mean = 0.08 milliseconds
stddev = 0.72 milliseconds
median = 0.06 milliseconds
75% <= 0.08 milliseconds
95% <= 0.11 milliseconds
98% <= 0.15 milliseconds
99% <= 0.20 milliseconds
99.9% <= 1.27 milliseconds
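For illustration, a rewrite along these lines might look like the following sketch (hypothetical class names, not the author's actual code). @CompileStatic makes the compiler resolve property access and method calls statically, bypassing the dynamic dispatch that showed up as CachedMethod.invoke in the profile:
import groovy.transform.CompileStatic

@CompileStatic
class User {
    String attr1
    String attr2
    String attr3
}

@CompileStatic
class ExpressionEvaluator {
    // Statically compiled: attribute access and comparisons are direct calls
    boolean evaluate(User user) {
        return user.attr1 == '1' || user.attr2 == '2' || user.attr3 == '3'
    }
}

assert new ExpressionEvaluator().evaluate(new User(attr1: '1'))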

Jmeter Summary Report analysis

Can anyone please explain how to analyze JMeter's Summary Report?
Example:
Label : Login Action(sampler)
Sample# : 1
average: 104 // What does this mean actually?
min : 104 // What does this mean actually?
max : 104
stddev : 0 // What does this mean actually?
error% : 0
Throughput : 9.615384615 // What does this mean actually?
Kb/Sec : 91.74053486 // What does this mean actually?
Average Bytes : 9770 // What does this mean actually?
It is pretty straightforward:
Average, min and max are the response times for the request, in milliseconds. The response time runs from when the request is sent until the response is received. Since you have only one sample, they are of course all equal.
stddev is a measure of the variation of the response times: http://en.wikipedia.org/wiki/Standard_deviation.
Throughput is the number of requests per second. With an average response time a little over 100 ms, the throughput is a little below 10 (1000 ms / 104 ms ≈ 9.6 requests per second).
Kb/Sec is the number of kilobytes transferred per second: Average Bytes (per request) * Throughput / 1024. The Average Bytes figure is for the response: response headers plus body.
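As a check on the example above: 9770 bytes * 9.615384615 requests/sec ≈ 93,942 bytes/sec, and 93,942 / 1024 ≈ 91.74 Kb/Sec, which matches the column value.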
I have just found a very nice and simple explanation here:
http://jmeterresults.blogspot.jp/2012/07/jmeterunderstanding-summary-report.html
