How to analyze JMeter results - performance

No. of Requests - 2113
Average Response Time (s) - 123.5
Response Time (s), 90% of Requests - 142.9
Minimum Response Time (s) - 2.4
Maximum Response Time (s) - 14.9
Error % - 0.0
My question - for 2,113 requests the average response time is 123.5 seconds; I need to know what the response time of a single average request out of those 2,113 requests will be.

The average response time of a single request (1 out of 2,113) will be the value itself, but I'm sure this isn't your question.
Are you simply trying to locate the response time of each request after a given test plan has fully executed, that is, to see each of the 2,113 response times? If so, just add a Summary Report to your thread group. You'll need to specify an output file (which will be created if it doesn't already exist), and it will record in detail each request sent to the server, along with the HTTP response code, response time and other goodies.
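If you then want to post-process that output file yourself, here is a minimal Python sketch (assuming the file was saved as CSV with the standard header, so it has 'label' and 'elapsed' columns, the latter in milliseconds; the file name is a placeholder):

import csv

elapsed_times = []
with open('results.csv') as f:                        # the output file configured in the listener
    for row in csv.DictReader(f):
        ms = int(row['elapsed'])                      # response time of this single request, in ms
        elapsed_times.append(ms)
        print('%s: %d ms' % (row['label'], ms))

print('%d requests, average %.1f ms' %
      (len(elapsed_times), sum(elapsed_times) / float(len(elapsed_times))))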
UPDATE
Per the question posed in the comments by Ripon Al Wasim, the default format of the results file is CSV; however, this is configurable in /bin/jmeter.properties:
# legitimate values: xml, csv, db. Only xml and csv are currently supported.
#jmeter.save.saveservice.output_format=csv
As we can see, JMeter only appears to support XML and CSV.

Related

JMeter - Count requests with responses below defined time

Can you recommend a plugin or report for JMeter 4.0 which counts the number of requests with response times lower than a defined time (e.g. 200 ms, 500 ms, etc.)?
I would like to get answers to the questions below:
How many requests per second can be sent so that the response time of 90% of responses is lower than 200 ms?
How many responses out of the total are below 200 ms?
What percentage of the total responses are below 200 ms?
I'm not aware of any existing plugin which implements your requirement; however, you can achieve this using a JSR223 Listener:
Add a JSR223 Listener to your Test Plan
Put the following Groovy code into the "Script" area:
if (prev.getTime() < 200) {
    prev.setSampleLabel(prev.getSampleLabel() + " < 200")
}
That's it: if your sampler's response time is below 200 ms, the JSR223 Listener will amend its label and add a "< 200" suffix to it.
You can then view the total number of samplers with a response time below 200 ms, as well as the 90% percentile, using the "normal" Aggregate Report listener.
You can use "Duration Assertion". It will fail all the requests which take more than the expected time and with the "View Result Tree" or "Simple Data writer" listener you can get all the required data and count from the csv/jtl file generated by them.
Hope this helps.
Unless you need absolute numbers, I would recommend the Response Times Percentiles listener (https://jmeter-plugins.org/wiki/RespTimePercentiles/).
This listener plots a graph of response times and clearly shows what percentile of responses falls below any given response time within the range.

Configure JMeter Generating Report Dashboard average response time from milliseconds to seconds

In JMeter (http://jmeter.apache.org/usermanual/generating-dashboard.html)
we can see the response time in seconds, not milliseconds:
transaction response time example from JMeter
But in my report it is in milliseconds, not seconds. How can I configure milliseconds to seconds?
My test result:
The report is in milliseconds, not in seconds, and it's not configurable.
So check that you are not receiving error pages with a 200 response code, and add assertions:
http://www.ubik-ingenierie.com/blog/best-practice-using-jmeter-assertions/
Also check that you are following the prerequisites for the report:
http://jmeter.apache.org/usermanual/generating-dashboard.html#configuration_requirements

Response time different in Postman/Jmeter and web API

I have an MVC Web API and I have trouble comparing the response time of this API. I added some code to calculate the response time:
In the AuthorizationFilterAttribute OnAuthorization, I have the below code:
actionContext.Request.Headers.Add("RequestStartTime", DateTime.Now.ToString());
I have an ActionFilterAttribute with an OnActionExecuted override, in which I have the code below:
string strRequestStartTime = actionExecutedContext.Request.Headers.GetValues("RequestStartTime").First();
DateTime dtstartTime = DateTime.Parse(strRequestStartTime);
TimeSpan tsTimeTaken = DateTime.Now.Subtract(dtstartTime);
actionExecutedContext.Response.Headers.Add("RequestProcessingTime", tsTimeTaken.TotalMilliseconds + "ms");
The response has the header "RequestProcessingTime" in milliseconds. The issue is that whenever I try the same request using Postman/JMeter, I see that the response time is less than what I see in my response. Why is this happening?
I think this is due to the fact that the header does not take into account the time for the request to reach the server and for the response to travel back; my expectation is that it shows only the time required to process the request on the server side. JMeter, on the other hand, reports time as the delta between the moment the request was sent and the moment the last byte was received, which is more correct in terms of real user experience.
See the definitions of "Elapsed Time", "Connect Time" and "Latency" in the JMeter Glossary. You may also be interested in the How to Analyze the Results of a Load Test article, which demonstrates the impact of network capacity on overall performance.
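To see the difference in practice, here is a minimal Python sketch (the endpoint URL is a placeholder, and it assumes the API really returns the RequestProcessingTime header added above) that compares the two measurements:

import time
import requests

start = time.time()
r = requests.get('http://localhost/api/values')       # placeholder endpoint
client_ms = (time.time() - start) * 1000               # roughly what Postman/JMeter measure
server_ms = r.headers.get('RequestProcessingTime')     # what the ActionFilter reported
print('client observed: %.0f ms, server reported: %s' % (client_ms, server_ms))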

jmeter vs python requests - different response time

I am running a load test with JMeter and the Python Requests package, but I get different results when I access the same website.
target website: http://www.somewebsite.com/
number of requests: 100
avg response time for JMeter: 1965 ms
avg response time for Python Requests: 4076 ms
I have checked that the response HTML content from JMeter and Python Requests is the same, so both got the correct response from the website, but I'm not sure why there is a 2x difference between them. Does anyone know whether there is a deeper reason for that?
The Python Requests sample code:
import datetime
import requests

headers = {}  # request headers used in the test (defined elsewhere in the original script)

repeat_time = 100
url = 'http://www.somewebsite.com/'
base_time = datetime.datetime.now()
time_cost = base_time
for i in range(repeat_time):
    start_time = datetime.datetime.now()
    r = requests.get(url, headers=headers)
    end_time = datetime.datetime.now()
    print str(r.status_code) + ';time cost: %s' % (end_time - start_time)
    time_cost += (end_time - start_time)
print 'total time: %s' % (time_cost - base_time)
print 'average time: %s' % ((time_cost - base_time).total_seconds() / repeat_time)
Without your JMeter code, I can't tell you what the difference is, but let me give you an idea of what's happening in that one call to requests:
We create a Session object, plus the urllib3 connection pools we use
We do a DNS look-up for 'www.somewebsite.com' which shouldn't be too negatively affecting this request
We open a socket for 'www.somewebsite.com:80'
We send the request
We receive the first byte of the response
We determine if the user wanted to stream the body of the response; if not, we read all of it and cache it locally.
Keep in mind that the three most intensive parts (usually) are:
DNS lookup (for various reasons, but as I already said, it shouldn't be a problem here)
Socket creation (this is always an expensive operation)
Reading the entirety of the body and caching it locally.
That said, each response object should have an attribute, elapsed, which will give you the time to the first byte of the response body. In other words, it will measure the time between when the request is actually sent and when the end of the headers is found.
That might give you far more accurate information than what you're measuring now, which is the time to the last byte of the message.
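For example (a minimal sketch using the same placeholder URL as in the question):

import requests

r = requests.get('http://www.somewebsite.com/')
print(r.elapsed)                                        # datetime.timedelta up to the end of the headers
print(r.elapsed.total_seconds())                        # the same value in seconds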
That said, keep in mind that what you're doing in that for-loop is also invoking the garbage collector a lot:
Create a Session, its adapters, the adapters' connection pools, etc.
Create socket
Discard socket
Discard Session
Goto 1
If you create a session once, your script will perform better in general.
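A minimal sketch of that change (same placeholder URL as in the question) would be:

import requests

session = requests.Session()                            # one Session, one shared connection pool
for i in range(100):
    r = session.get('http://www.somewebsite.com/')      # keep-alive lets the socket be reused
    print('%s %.3f s' % (r.status_code, r.elapsed.total_seconds()))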

how to get the response time of web pages in jmeter?

How do I generate a csv file and load the csv using the Response Time Graph listener?
Can anyone explain in detail how to find the response time in JMeter?
If you run JMeter in command-line non-GUI mode as follows:
jmeter -n -t /path/to/your/test_plan.jmx -l /path/to/results_file.jtl
your results_file.jtl content will look like:
1409124780902,182,Logon,200,OK,Thread Group 1-1,text,true,214,0
1409124781219,153,Logout,200,OK,Thread Group 1-1,text,true,110,0
where the second column is the page response time in milliseconds.
Other values are:
"1409124780902" - current time stamp in ms
"182" - page response time
"Logon" - sampler name
"200" - Response Code
"OK" - Response Message
"Thread Group 1-1" - Parent Thread Group name, thread number and iteration.
"text" - response data type
"214" - response data size in bytes
"0" - latency
Once your test run is done you can open JMeter GUI and load this results_file.jtl into the listener of your choice.
You might also be interested in the JMeter Plugins Extras Set, which is capable of generating nice-looking and easily understandable response-time-related graphs, to wit:
Response Times vs Threads
Response Times Distribution
Response Times Percentiles
You can get the response times by adding reporting listeners.
Please keep in mind that these listeners are CPU- and memory-intensive components and thus should not be used during an actual load test.
For sample runs you can use them, and for a load test run you can get the response time, average, throughput, etc. by saving the output to a jtl file in JMeter.
For a normal/sample run:
The Aggregate Report gives the average response time, min, max, median, etc.
The Summary Report gives the same with fewer details.
While performing an actual run you can save the output of these listeners to a jtl file. After the test, the results can be analyzed from the jtl files.

Resources