Why does JMeter latency decrease but response time remain the same?

As I understand it, in JMeter:
Latency = Time from when the request is sent until the first response is received.
Response Time = Time from when the request is sent, until the full response is received.
I have a test that I've been running (non-GUI mode) multiple times (the exact same test every time), once without Google CDN and once with Google CDN.
With the Google CDN, I've seen Latency reduce by approx 50% (from ~900ms to ~450ms).
However, response time has stayed the same.
I would've assumed the response time would change by the same amount that latency changes by?
Has anyone experienced this, or have any ideas as to why response time has remained the same?
Thanks :)

Related

How Throughput and Response time are related

I ran a JMeter test for 193 samples,
where I could see my average response time as 5915 ms and Throughput as 1.19832.
I just want to know how exactly they are related.
All the answers are in the JMeter Glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received.
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
The relationship is: higher response time means lower throughput, and vice versa.
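To make the formula concrete, here is a minimal Python sketch that backs the total test duration out of the question's numbers (the time unit is whatever JMeter displayed next to the throughput figure, so treat that as an assumption):

```python
# JMeter's throughput formula:
#   Throughput = (number of requests) / (total time)
# Total time runs from the start of the first sample to the end of the
# last sample, including any gaps between samples.

num_requests = 193    # sample count from the question
throughput = 1.19832  # reported throughput (unit as shown by JMeter)

# Back out the total test duration in the same time unit:
total_time = num_requests / throughput
print(f"total time: {total_time:.1f} (in JMeter's displayed time unit)")
```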
You can use charts like Transactions per Second for throughput and Response Times Over Time for response times to get them plotted on your test timeline and Composite Graph to put them together. This way you will be able to track the trends.
All 3 charts can be installed using JMeter Plugins Manager
TL;DR
No, but yes.
They aren't directly related, but increasing throughput will probably affect server response time because of the added load/stress on the server.
If there are timeout errors, response time will probably increase.
But for validation or firewall errors, response time will probably decrease (the server fails fast).
There's a long explanation in the JMeter mailing-list archives; the last post uses Disney to demonstrate:
Think of your last trip to Disney or your favorite amusement park. Let's define capacity of the ride to be the number of people that can sit on the ride per turn (think roller coaster). Throughput will be the number of people that exit the ride per unit of time. Let's define service time to be the amount of time you get to sit on the ride. Let's define response time or latency to be your time queuing for the ride (dead time) plus service time.
In terms of load/performance testing, throughput and response times are inversely proportional, i.e. (see the sketch after this list):
With an increase in response time, throughput should decrease.
With an increase in throughput, response time should decrease.
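One way to see why the two move in opposite directions under a fixed number of virtual users is Little's Law (concurrency = throughput × response time). A minimal Python sketch with made-up numbers:

```python
# Little's Law for a closed-loop load test:
#   concurrency = throughput * response_time
# so for a fixed thread count: throughput = threads / response_time.

threads = 100  # hypothetical number of JMeter threads (virtual users)

for response_time_s in (0.5, 1.0, 2.0, 4.0):
    throughput = threads / response_time_s
    print(f"response time {response_time_s:.1f}s -> "
          f"throughput {throughput:.1f} requests/s")
```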
You can get more detailed definitions in this blog:
https://nirajrules.wordpress.com/2009/09/17/measuring-performance-response-vs-latency-vs-throughput-vs-load-vs-scalability-vs-stress-vs-robustness/
Throughput increases up to a point and then remains stable once all the resources are busy. If user requests keep increasing beyond that point, response time increases. If the response-time increase is only due to internal queuing, the system is still taking requests in at the same rate even though response time rises, so throughput doesn't change; when the queues are full, further requests should fail. If the response-time increase is due to a delay in actually processing or serving the request, for example running a query on the database, then the system is not accepting more requests while response time rises, and consequently throughput drops.
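A toy single-server queueing model shows the same effect (this is the textbook M/M/1 formula, not anything JMeter-specific, and the service rate below is made up): as the arrival rate approaches capacity, response time explodes while throughput merely levels off.

```python
# M/M/1 queue: service rate mu, arrival rate lam (both requests/s).
# Below saturation every request is served, so throughput == lam,
# and mean response time R = 1 / (mu - lam) grows sharply near mu.
# At or above mu the queue grows without bound and requests fail.
mu = 100.0  # hypothetical service capacity, requests/s

for lam in (50.0, 80.0, 90.0, 95.0, 99.0):
    response_time_ms = 1000.0 / (mu - lam)
    print(f"arrival {lam:5.1f}/s -> throughput {lam:5.1f}/s, "
          f"mean response {response_time_ms:7.1f} ms")
```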
Just a general explanation.
Response Time: the time measured from when the user sends the request until the request finishes.
Throughput: a server property, the number of transactions or requests that can be handled during a certain amount of time. Here, 1.19832/minute means the server can handle 1.19832 samples per minute.
As response time increases, throughput decreases.

JMeter: Why did increasing the number of threads not change latency?

How is it possible that in JMeter increasing the number of users (threads) in my test did not change the latency (response time)?
I got the same latency for 100 threads and for 300 threads.
Latency is the difference between the time when a request was sent and the time when the response started to be received.
As per JMeter Glossary
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Response time (= Sample time = Load time = Elapsed time) is the difference between the time when the request was sent and the time when the response was fully received.
As per JMeter Glossary
JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example, Javascript.
So response time is always >= latency, and it is therefore possible to see the same latency for 100 and 300 threads while response times differ or increase.
If you have stable network connectivity between JMeter and the application under test, it is expected that latency won't change no matter how many threads you kick off. It is a "pure" network metric which tells how long it took for the request to reach the server.
Check out the How to Analyze the Results of a Load Test article to see the impact of latency on the end user.
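If you want to see the two metrics side by side, a minimal Python sketch along these lines works; it assumes a CSV-format results file (here called results.jtl, a hypothetical name) saved with the default elapsed and Latency columns:

```python
import csv

# Read a JMeter CSV results file. With default save settings it has an
# "elapsed" column (full response time, ms) and a "Latency" column
# (time to first byte, ms).
elapsed, latency = [], []
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        elapsed.append(int(row["elapsed"]))
        latency.append(int(row["Latency"]))

avg_elapsed = sum(elapsed) / len(elapsed)
avg_latency = sum(latency) / len(latency)
print(f"avg response time: {avg_elapsed:.0f} ms")
print(f"avg latency:       {avg_latency:.0f} ms")
# The gap is roughly the time spent receiving the rest of the response.
print(f"avg download time: {avg_elapsed - avg_latency:.0f} ms")
```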

JMeter Response Times vs Threads

I am doing API load testing by sending 250 requests at once.
1. Configuration
Naturally, a server takes longer to respond when a lot of users request it simultaneously, as described at http://jmeter-plugins.org/wiki/ResponseTimesVsThreads/. However, when testing, this is what I found:
2. Test
The plot above reads from right to left: as the number of active threads decreases, the response time increases.
Is "active threads" the same as the number of user requests? If so, why is this happening on a consistent basis?
Update-1
Ran another test and increased the ramp-up period this time
No of threads: 200
Ramp-Up Period: 200 secs
Loop Count: 200
There are at least 2 possible explanations:
you don't have a problem, and your improvement in response times comes from a caching effect: after some time your data is served from cache. Only you can validate this, as we don't know whether you are using a large enough dataset or how long your test lasts
you have a problem: your server is rejecting connections under load, so you get very rapid failed responses that show a very good response time. To know whether this is your case, check the response codes over time or transactions over time, as well as the error percentage (see the sketch after this list)
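A minimal Python sketch of that check, again assuming a CSV results file (hypothetically results.jtl) with the default timeStamp and success columns, bucketing the error percentage into 10-second windows:

```python
import csv
from collections import defaultdict

BUCKET_MS = 10_000  # assumed 10-second buckets
totals, errors = defaultdict(int), defaultdict(int)

# Count total and failed samples per time bucket.
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        bucket = int(row["timeStamp"]) // BUCKET_MS
        totals[bucket] += 1
        if row["success"] != "true":
            errors[bucket] += 1

start = min(totals)
for bucket in sorted(totals):
    pct = 100.0 * errors[bucket] / totals[bucket]
    print(f"t+{(bucket - start) * 10:4d}s  error %: {pct:5.1f}")
```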

Is it possible to retrieve the server response time for a URL using the Google PageSpeed Insights API?

I am looking to retrieve the actual server response time for a URL as reported by Google PageSpeed Insights by using the API.
This can be seen when the rule is broken in the GPSI GUI. For example, see the following screen capture:
In this case it is 0.89 seconds.
I have looked at their API documentation but so far have not found anything pertaining to this. The API seems incomplete, though, since the value is available in the GUI, so I'm hoping I am just missing something.
According to the PageSpeed documentation for server response time measurement, Reduce Server Response Time:
Server response time measures how long it takes to load the necessary HTML to begin rendering the page from your server, subtracting out the network latency between Google and your server
This means it measures the latency (which depends on your bandwidth) between your server and the machine on which you are running GPSI. It subtracts that latency from (total response time - rendering time), and if the result is more than 200 ms it is flagged as a high server response time.
Though it's not an accurate measure, collecting it multiple times and averaging the values can be used, taken with a pinch of salt.
Coming to your question: GPSI is a client-side utility. To get actual application server statistics (server processing time) you need access to the server through an agent or APIs exposed by the application/app server. Without any of those, getting actual stats is not possible. IMHO the GPSI APIs won't help you in this situation.
GPSI provides a rough estimate by mathematical calculation of various factors: total response time = server time + latency + rendering time.
GPSI knows the total response time, latency, and rendering time, and thus can give you an approximate server time.
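A minimal sketch of that arithmetic; all of the numbers below are hypothetical, chosen only so the result matches the 0.89 s from the screenshot:

```python
# Decomposition GPSI is assumed to perform:
#   total response time = server time + latency + rendering time
# => server time = total - latency - rendering

total_s = 1.50      # hypothetical total response time (s)
latency_s = 0.35    # hypothetical network latency (s)
rendering_s = 0.26  # hypothetical rendering time (s)

server_time_s = total_s - latency_s - rendering_s
print(f"estimated server response time: {server_time_s:.2f} s")  # 0.89 s

# The rule fires when the estimate exceeds 200 ms.
if server_time_s > 0.2:
    print("flagged: reduce server response time")
```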

JMeter latency vs actual browser load test

Is this a valid test for checking how long it takes to load the web app under test with 500 concurrent users?
I run JMeter with 500 user threads, ramp-up period = 50, and loop count = forever, with a "View Results in Table" listener that also records the latency.
While JMeter is running, I try to load/browse the web app under test using an actual browser (in my case IE8),
and it loads in 7 secs, but based on the latency column the majority of results are 50k++.
Is the 7-sec load time in the actual browser considered a "response time" result, since it loads in an actual browser?
Another question:
Is the latency of 50k converted to secs? Does it mean 50 secs to load the web app under test, if we go by the JMeter result?
kindly clarify this to me please :)
In simple words, latency is network delay (the time taken by the network to transfer data).
In JMeter, latency is the time from when the request is sent to the server until the first byte of the response reaches the client/JMeter. If the response time is very low, you won't get a precise measure of latency; if the response time is high, you will probably get a correct measure.
In JMeter, latency uses the same units as response time, i.e. milliseconds/seconds, so a latency of 50k ms is indeed 50 seconds.
Your 7 sec in the browser is (response time (processing time + latency) + rendering time). In JMeter there is no rendering time (it is not a browser). Your rendering time may be very low compared to the response time, but for heavy-content websites rendering time is comparable and should be considered.
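A minimal sketch of that breakdown; every number here is hypothetical, chosen only to illustrate the millisecond-to-second conversion and what the browser clock includes that JMeter's doesn't:

```python
# JMeter reports times in milliseconds; 50k ms from the table listener:
jmeter_latency_ms = 50_000
print(f"latency: {jmeter_latency_ms / 1000:.0f} s")  # 50 s

# Browser load = processing + latency + rendering; JMeter omits rendering.
processing_ms = 5_500  # hypothetical server processing time
latency_ms = 1_000     # hypothetical network latency
rendering_ms = 500     # hypothetical browser rendering time

browser_load_s = (processing_ms + latency_ms + rendering_ms) / 1000
jmeter_elapsed_s = (processing_ms + latency_ms) / 1000
print(f"browser load:   {browser_load_s:.1f} s")    # 7.0 s
print(f"JMeter elapsed: {jmeter_elapsed_s:.1f} s")  # 6.5 s
```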
I hope this clears your doubts :)
