I am running a load test with JMeter and the Python Requests package, but I get different results when accessing the same website.
target website: http://www.somewebsite.com/
request times: 100
avg response time for Jmeter: 1965ms
avg response time for python Requests: 4076ms
I have checked that the response HTML content from JMeter and python Requests is the same, so both got a correct response from the website. But I am not sure why there is a two-fold difference between them. Does anyone know a deeper reason for that?
the python Requests sample code:
import datetime

import requests

headers = {}  # placeholder; the original test defines its own headers

repeat_time = 100
url = 'http://www.somewebsite.com/'
base_time = datetime.datetime.now()
time_cost = base_time
for i in range(repeat_time):
    start_time = datetime.datetime.now()
    r = requests.get(url, headers=headers)
    end_time = datetime.datetime.now()
    print('%s;time cost: %s' % (r.status_code, end_time - start_time))
    time_cost += (end_time - start_time)
print('total time: %s' % (time_cost - base_time))
print('average time: %s' % ((time_cost - base_time).total_seconds() / repeat_time))
Without your JMeter code, I can't tell you what the difference is, but let me give you an idea of what's happening in that one call to requests:
We create a Session object, plus the urllib3 connection pools we use
We do a DNS look-up for 'www.somewebsite.com', which shouldn't negatively affect this request too much
We open a socket for 'www.somewebsite.com:80'
We send the request
We receive the first byte of the response
We determine whether the user wanted to stream the body of the response; if not, we read all of it and cache it locally.
Keep in mind that the three most intensive parts (usually) are:
DNS lookup (for various reasons, but as I already said, it shouldn't be a problem here)
Socket creation (this is always an expensive operation)
Reading the entirety of the body and caching it locally.
That said, each response object has an attribute, elapsed, which will give you the time to the first byte of the response body. In other words, it measures the time between when the request is actually sent and when the end of the headers is found.
That might give you far more accurate information than what you're measuring now, which is the time to the last byte of the message.
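As a quick sketch of reading elapsed off the response object (using a throwaway local server so it runs anywhere, rather than the real site from the question):

```python
import http.server
import threading

import requests

# Throwaway local server so the sketch is self-contained.
server = http.server.HTTPServer(('127.0.0.1', 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

r = requests.get('http://127.0.0.1:%d/' % server.server_port)

# r.elapsed is a datetime.timedelta measured from sending the request
# until the response headers are parsed; it excludes body download time.
print('time to first byte: %.6f s' % r.elapsed.total_seconds())
server.shutdown()
```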
That said, keep in mind that what you're doing in that for-loop is also invoking the garbage collector a lot:
Create a Session, its adapters, the adapters' connection pools, etc.
Create socket
Discard socket
Discard Session
Goto 1
If you create a session once, your script will perform better in general.
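A minimal sketch of that session reuse (again against a throwaway local server, standing in for the question's URL):

```python
import http.server
import threading

import requests

# Throwaway local server so the sketch runs anywhere; substitute the
# real URL from the question in practice.
server = http.server.HTTPServer(('127.0.0.1', 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/' % server.server_port

# One Session for the whole loop: the socket is created once and then
# reused from urllib3's pool, instead of being opened and discarded on
# every iteration as with a bare requests.get().
with requests.Session() as session:
    results = [session.get(url).status_code for _ in range(5)]
print(results)
server.shutdown()
```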
Related
I am doing load testing on report generation, and the requirement is that the report should be generated within 10 minutes.
It includes one HTTP POST request for report generation, followed by a status-check call that keeps polling the status of the first request. Once the status of the first request changes to complete, the report generation is successful.
Basically I want to start a timer at the beginning of the first request, stop it once the status is complete, and add an assertion: if the time is less than 10 minutes the test passes, otherwise it fails.
I tried multiple approaches, like using a Transaction Controller and adding all requests under it. But this gives the average response time of the requests under it, not the sum.
I also tried a Beanshell listener, extracting the response time of every request and adding them all up...
var responseTime;
props.put("responseTime", sampleResult.getTime());
log.info(" responseTime :::" + props.get("responseTime"));
log.info("time: "+ sampleResult.getTime());
props.put("responseTime", (sampleResult.getTime()+props.get("responseTime")));
log.info("new responseTime :::" + props.get("responseTime"));
However, I am not interested in adding up the response times of these requests; I just need to know the time elapsed from when the report generation is triggered until its status becomes complete.
All the JMeter timers add delays. I don't want to add a delay; I need a stopwatch-style timer.
Any help is highly appreciated.
Thank you
Since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting, mainly for performance reasons, so I'll provide one possible solution in Groovy.
Add JSR223 PostProcessor as a child of the HTTP Request which kicks off the report generation and put the following code there:
vars.putObject('start', System.currentTimeMillis())
Add JSR223 Sampler after checking the status and put the following code there:
def now = System.currentTimeMillis()
def start = vars.getObject('start')
def elapsed = now - start
// 600000 ms = 10 minutes
if (elapsed >= 600000) {
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Report generation took: ' + (elapsed / 1000 / 60) + ' minutes instead of 10')
}
Example setup:
I want to redirect every http request to https. I have modified my VCL file as:
import std;

sub vcl_recv {
    if (std.port(server.ip) != 443) {
        set req.http.location = "https://" + req.http.host + req.url;
        return (synth(301));
    }
}

sub vcl_synth {
    if (resp.status == 301 || resp.status == 302) {
        set resp.http.location = req.http.location;
        return (deliver);
    }
}
It worked, but it increases my site's loading time to approximately 1s, whether the request is HTTP or HTTPS.
Is there any alternative approach I can use, or a way to improve load-time performance?
Varnish hardly adds latency to the request/response flow.
If the HTTPS resource that is redirected to is not cached, you will depend on the performance of your origin server. If the origin server is slow, the loading time will be slow.
Browser timing breakdown
Please analyse the breakdown of the loading time for that resource in the "network" tab of your browser's inspector.
For that page that is loaded, you can click the "Timings" tab in Firefox to see the breakdown. Here's an example:
If the Waiting timer is high, this means the server is slow.
If the Receiving timer is high, this means the network is slow.
Varnish timing breakdown
The varnishlog program allows you to inspect detailed transactions logs for Varnish.
Running varnishlog -g request -i ReqUrl -i Timestamp will list the URL and the loading times of every transaction.
Here's an example:
root@varnish:/etc/varnish# varnishlog -c -g request -i ReqUrl -i Timestamp
* << Request >> 163843
- Timestamp Start: 1645036124.028073 0.000000 0.000000
- Timestamp Req: 1645036124.028073 0.000000 0.000000
- ReqURL /
- Timestamp Fetch: 1645036124.035310 0.007237 0.007237
- Timestamp Process: 1645036124.035362 0.007288 0.000051
- Timestamp Resp: 1645036124.035483 0.007409 0.000120
Timestamps are expressed in UNIX timestamp format. After the absolute timestamp, the first timer on every Timestamp log line is the total loading time since the start of the transaction; the second is the loading time of that individual part.
In this example the total server-side loading time was 0.007409 seconds. The backend fetch was responsible for 0.007237 seconds of loading time.
Varnish itself spent 0.000051 seconds on processing after the fetch and 0.000120 seconds on sending the response; the time spent before the fetch was negligible (the Req timestamp shows 0.000000).
You can use this command to inspect server-side performance at your end. It also allows you to inspect whether or not Varnish is to blame for any incurred latency.
When there's no Fetch line it means you're dealing with a cache hit and no backend fetch was required.
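To make the Timestamp arithmetic concrete, here is a small Python sketch that parses the example log lines above and recovers the same breakdown:

```python
# Parse the Timestamp lines from the varnishlog output above.
# Each line has the format:
#   Timestamp <label>: <unix_time> <since_start> <since_previous>
log_lines = """
Timestamp Start: 1645036124.028073 0.000000 0.000000
Timestamp Req: 1645036124.028073 0.000000 0.000000
Timestamp Fetch: 1645036124.035310 0.007237 0.007237
Timestamp Process: 1645036124.035362 0.007288 0.000051
Timestamp Resp: 1645036124.035483 0.007409 0.000120
""".strip().splitlines()

phases = {}
for line in log_lines:
    _, label, _, since_start, since_prev = line.split()
    phases[label.rstrip(':')] = (float(since_start), float(since_prev))

total = phases['Resp'][0]   # total server-side time since Start
fetch = phases['Fetch'][1]  # time spent on the backend fetch alone
print('total: %.6f s, backend fetch: %.6f s' % (total, fetch))
```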
Conclusion
Combine server-side timing breakdown and client-side timing breakdown to form a conclusion on what causes the delay. Based on that information you can improve the component that is causing this delay.
It might help to add synthetic("") to your vcl_synth {} to make sure that the redirects are sent with an empty body, but I agree that the code should not increase the response time by any significant amount.
I am using Locust and my code looks as below
import time

from locust import FastHttpUser, TaskSet, constant, task

class RecommenderTasks(TaskSet):
    @task
    def test_recommender_multiple_platforms(self):
        start = round(time.time() * 1000)
        self.client.get('recommendations', name='Test')
        end = round(time.time() * 1000)
        print(end - start)

class RecommenderUser(FastHttpUser):
    tasks = [RecommenderTasks]
    wait_time = constant(1)
    host = "https://my-host.com/"
When I test with this code, I get the following output times
374
62
65
68
64
I am not sure why the very first task alone takes 300+ ms while the rest are as expected. Because of this, my overall average time also increases. Could you please help me here?
Locust response times are measured from the time the initial request is sent to the server to the time a response is received. By default Locust reuses socket connections when available but creates new ones if an existing one isn't available. When connecting via HTTPS, there are a number of things that need to be done to set up the connection initially. Generally performance of that connection set up is dependent on things the server is doing. You could look into ways of reducing your connection setup time. How to do that will vary widely depending on your stack but you can find general principles in SO answers like this one:
how to reduce ssl time of website
I have an MVC Web API and I have trouble comparing the response time of this API. I added some code to calculate the response time:
In the AuthorizationFilterAttribute OnAuthorization, I have the below code:
actionContext.Request.Headers.Add("RequestStartTime", DateTime.Now.ToString());
I have an ActionFilterAttribute, and an OnActionExecuted in which I have the below code:
string strRequestStartTime = actionExecutedContext.Request.Headers.GetValues("RequestStartTime").First();
DateTime dtstartTime = DateTime.Parse(strRequestStartTime);
TimeSpan tsTimeTaken = DateTime.Now.Subtract(dtstartTime);
actionExecutedContext.Response.Headers.Add("RequestProcessingTime", tsTimeTaken.TotalMilliseconds + "ms");
The response has the header "RequestProcessingTime" in milliseconds. The issue is that whenever I try the same request using Postman/JMeter, the response time I see is lower than what I see in my response. Why is this happening?
I think this is due to the fact that the header does not account for the time the request needs to reach the server and the response needs to travel back; my expectation is that it shows only the time required to process the request on the server side. JMeter, in contrast, reports the time as the delta between the moment the request was sent and the moment the last byte was received, which is more correct in terms of real user experience.
See definitions of "Elapsed Time", "Connect Time" and "Latency" in the JMeter Glossary. You may also be interested in How to Analyze the Results of a Load Test article which demonstrates the impact of network capacity on the overall performance
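To illustrate the point with something runnable (a self-contained sketch using a simulated slow server, not the API from the question), the client-measured time necessarily includes transfer time on top of whatever a server-side header reports:

```python
import http.server
import threading
import time

import requests

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        served_at = time.time()
        time.sleep(0.05)  # simulated server-side processing
        processing_ms = (time.time() - served_at) * 1000
        body = b'ok'
        self.send_response(200)
        # Mirrors the RequestProcessingTime header from the question.
        self.send_header('RequestProcessingTime', '%.0fms' % processing_ms)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        time.sleep(0.05)  # simulated slow network before the body arrives
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

server = http.server.HTTPServer(('127.0.0.1', 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.time()
r = requests.get('http://127.0.0.1:%d/' % server.server_port)
client_ms = (time.time() - start) * 1000

print('server header:   %s' % r.headers['RequestProcessingTime'])
print('client-measured: %.0fms' % client_ms)  # larger: includes transfer
server.shutdown()
```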
No of Requests - 2113;
Average Response time (s) - 123.5;
Response time/Sec (90% of Requests) - 142.9;
Minimum Response time (s) - 2.4;
Maximum Response time (s) - 14.9;
Error % - 0.0
My question: for 2113 requests the average response time is 123.5 s. I need to know what the response time of an average single request among those 2113 would be.
The average response time of a single request (1 out of 2,113) will be the value itself, but I'm sure this isn't your question.
Are you simply trying to locate the response time of each request after a given test plan has fully executed, that is, to see each of the 2,113 response times? If so, just add a Summary Report to your thread group. By doing this you'll need to specify an output file (which will get generated if it doesn't already exist) and will show you in detail each of the requests sent to the server, along with the HTTP response code, response time and other goodies.
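If you'd rather post-process that results file yourself, a JTL file saved as CSV can be read with a few lines of Python. A sketch, assuming the default CSV column names (timeStamp, elapsed, label, responseCode, ...) and using a miniature inline file in place of the real one:

```python
import csv
import io

# Miniature JTL results file in the default CSV format; in practice
# you'd open the file configured in the Summary Report listener.
jtl = io.StringIO("""timeStamp,elapsed,label,responseCode,success
1645036124028,120,HTTP Request,200,true
1645036124228,95,HTTP Request,200,true
1645036124428,143,HTTP Request,200,true
""")

rows = list(csv.DictReader(jtl))
# 'elapsed' is the per-sample response time in milliseconds.
times = [int(row['elapsed']) for row in rows]
print('samples: %d, average: %.1f ms' % (len(times), sum(times) / len(times)))
```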
UPDATE
Per the question posed in the comments by Ripon Al Wasim, the default extension of the results file is CSV; however, this is configurable in /bin/jmeter.properties:
# legitimate values: xml, csv, db. Only xml and csv are currently supported.
#jmeter.save.saveservice.output_format=csv
As we can see, JMeter only appears to support XML and CSV.