What is JMeter throughput?

My website is hosted in the cloud, and I am running JMeter from my office. I want to know whether the throughput I get in the Summary Report also contains network latency.
My log file contains API details like this:
GET mywebsite/getBday 200 67
So for all getBday requests the server reports a processing time of 67 ms, but JMeter shows a throughput of 1.20 requests/sec and the latency here is 8.5 secs (latency = the Average field from the Summary Report).
Can you tell me whether the throughput in the Summary Report also contains network latency? If so, how can I exclude it?

Response time includes network latency. It measures the time from when the request was made to when the response was received.
How can JMeter know how long the request spent in transit, unless the server can respond with the time the request was received?
The only way to exclude network latency from JMeter results is to measure it at the server and send the information back in the response (or obtain it by some other method).
Most servers should have monitoring software running anyway, such as Carbon/Graphite. You can use that to measure the true server response times, and the difference shows the network latency.
As I mostly test Java stacks, I also run jconsole on the same machine as JMeter for a side-by-side comparison of graphs, to determine the real server capability.
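If your access log already records the server-side processing time, as in the GET mywebsite/getBday 200 67 line above, you can average it per endpoint and subtract it from JMeter's averages to estimate the network overhead. A minimal sketch in Java, assuming each log line has exactly that four-field method/path/status/milliseconds layout (the file name access.log is a placeholder):

// Sketch: average server-side processing time per endpoint from an access log.
// Assumes lines like: GET mywebsite/getBday 200 67 (method path status timeMs).
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Map;
import java.util.TreeMap;

public class ServerTimeFromLog {
    public static void main(String[] args) throws Exception {
        Map<String, long[]> perPath = new TreeMap<>(); // path -> {count, total ms}
        try (BufferedReader in = new BufferedReader(new FileReader("access.log"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.trim().split("\\s+");
                if (f.length < 4) continue; // skip lines that don't match the layout
                long[] b = perPath.computeIfAbsent(f[1], k -> new long[2]);
                b[0]++;
                b[1] += Long.parseLong(f[3]);
            }
        }
        // Network (and queueing) overhead ~= JMeter average elapsed - this average.
        perPath.forEach((path, b) ->
            System.out.printf("%s -> avg %d ms over %d requests%n", path, b[1] / b[0], b[0]));
    }
}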

"Can you tell me if the throughput that I get in Summary Report contain network latency also."?
The answer is no - throughput is a measure of the completion rate of requests and the formula for calculating it does not include latency. See below.
Probably worth looking up a definition for throughput. JMeter provides its own :
"Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server. The formula is: Throughput = (number of requests) / (total time)."
https://jmeter.apache.org/usermanual/glossary.html
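To make the formula concrete, here is a minimal sketch (with hypothetical numbers chosen to reproduce the 1.20 requests/sec from the question):

// Sketch of the glossary formula: Throughput = (number of requests) / (total time).
// Timestamps are in milliseconds, as in a JMeter results (JTL) file.
public class ThroughputSketch {
    public static void main(String[] args) {
        long firstSampleStartMs = 0;    // start of the first sample
        long lastSampleEndMs = 100_000; // end of the last sample
        int numberOfRequests = 120;

        double totalTimeSec = (lastSampleEndMs - firstSampleStartMs) / 1000.0;
        double throughput = numberOfRequests / totalTimeSec;

        System.out.printf("Throughput: %.2f requests/sec%n", throughput); // 1.20
    }
}

Note that the elapsed time of each individual sample (which does include network latency) only enters the formula through the end of the last sample, which is why throughput is a completion rate rather than a duration metric.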

Related

Getting so high average response time in Jmeter

I am testing a scenario with 400 threads. Although I am getting almost no errors, I have a very high average response time. What could be causing this problem? It seems the server does not time out but returns the response very late. I've added the summary report. It is as follows:
This table doesn't tell the full story. If the response time seems "so high" to you, then response time is definitely the bottleneck and you can report it already.
What you can do to localize the problem is:
Consider using a longer ramp-up period, i.e. start with 1 user and add 1 more user every 5 seconds (adjust these numbers to your scenario) so you get an arrival phase, the "plateau", and the load-decrease phase. This approach will allow you to correlate increasing load with increasing response time by looking at the Active Threads Over Time and Response Times Over Time charts (a sketch for doing the same from the raw results file follows this list). This way you will be able to state that:
response time remains the same up to X concurrent users
after Y concurrent users it starts growing, so throughput goes down
after Z concurrent users the response time exceeds the acceptable threshold
It would also be good to monitor CPU, RAM, etc. usage on the server side, as the increased response time might be caused by a lack of resources; you can use the JMeter PerfMon Plugin for this
Inspect your server configuration, as you might need to tune it for high loads (the same applies to JMeter; make sure to follow JMeter Best Practices)
Use a profiler tool on the server side during the next test execution; it will show you the slowest places in your application code
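To do the correlation mentioned in the first point without the chart plugins, you can bucket average response time by active thread count straight from the results file. A rough sketch, assuming the results were saved as a CSV JTL with JMeter's default column layout (elapsed = column 1, allThreads = column 12; the naive comma split will break if your failure messages contain commas):

// Sketch: average elapsed time grouped by the number of active threads.
// Assumes the default CSV JTL column layout; adjust indexes if customised.
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Map;
import java.util.TreeMap;

public class RampUpAnalysis {
    public static void main(String[] args) throws Exception {
        Map<Integer, long[]> buckets = new TreeMap<>(); // threads -> {count, total ms}
        try (BufferedReader in = new BufferedReader(new FileReader("results.jtl"))) {
            String line = in.readLine(); // skip the header row
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",");
                long[] b = buckets.computeIfAbsent(Integer.parseInt(f[12]), k -> new long[2]);
                b[0]++;
                b[1] += Long.parseLong(f[1]);
            }
        }
        buckets.forEach((threads, b) ->
            System.out.printf("%3d active threads -> avg %d ms over %d samples%n",
                threads, b[1] / b[0], b[0]));
    }
}

If the averages stay flat up to some thread count and then climb, that thread count is your X from the list above.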

Effect of slow/unstable network connection in JMeter

Can the network connection affect the connection between the servers and JMeter? Is there any way to reduce the error percentage and the high average response time?
Of course it can. Looking into the JMeter Glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So the formula is:
Response time = Connect Time + Latency + actual server response time
If there are networking problems, they will have a direct impact on the response time. Check out the How to Analyze the Results of a Load Test Using BlazeMeter article to see how networking issues affect test results. So if you want a "cleaner" picture, it is recommended to give the JMeter load generator(s) direct access to the application under test: use LAN instead of Wi-Fi, and make sure the NICs (as well as routers/switches) have enough bandwidth for the anticipated data volumes.
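If you want to see where each sample's time goes, the elapsed, Latency and Connect columns of a CSV JTL file already contain the split; since latency includes the connect time (see the glossary quote above), the three phases fall out by subtraction. A minimal sketch, assuming the default CSV JTL column layout (elapsed = column 1, Latency = column 14, Connect = column 16):

// Sketch: split each sample into connect, wait-for-first-byte and download phases.
import java.io.BufferedReader;
import java.io.FileReader;

public class NetworkShare {
    public static void main(String[] args) throws Exception {
        try (BufferedReader in = new BufferedReader(new FileReader("results.jtl"))) {
            String line = in.readLine(); // skip the header row
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",");
                long elapsed = Long.parseLong(f[1]);  // full response time
                long latency = Long.parseLong(f[14]); // time to first byte, includes connect
                long connect = Long.parseLong(f[16]); // connection establishment
                System.out.printf("connect=%d ms, firstByteWait=%d ms, download=%d ms, total=%d ms%n",
                    connect, latency - connect, elapsed - latency, elapsed);
            }
        }
    }
}

Large connect or first-byte-wait values against an otherwise idle server usually point at the network rather than the application.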

JMeter: Why did increasing the number of threads not change the latency?

How is it possible that in JMeter increasing the number of users (threads) in my test did not change the latency (response time)?
I got the same latency for 100 threads and for 300 threads.
Latency is the difference between the time when a request was sent and the time when the response started to be received.
As per the JMeter Glossary:
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Response time (= Sample time = Load time = Elapsed time) is the difference between the time when the request was sent and the time when the response was fully received.
As per the JMeter Glossary:
JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
So response time is always >= latency.
So it is possible to have the same latency for 100 and 300 threads while the response time differs or increases.
If you have stable network connectivity between JMeter and the application under test, it is expected that latency won't change no matter how many threads you kick off. It is a "pure" network metric which tells how long it took for the request to reach the server.
Check out the How to Analyze the Results of a Load Test article to see the impact of latency on the end user.
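As an illustration with hypothetical numbers: on a stable network with a constant 100 ms latency, 100 threads might show 100 ms latency and 300 ms response time, while 300 threads still show 100 ms latency but 900 ms response time, because the extra time is spent inside the server (queueing and processing), not on the wire.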

JMeter: More HTTP Requests Result in Increased Performance?

I'm trying to understand a significant performance increase in my JMeter test.
In a multi-tenancy database environment, I have a single RESTful service test containing a Thread Group with a single HTTP Request sampler posting an XML payload. The XML payload is then evaluated via stored procedures, and a response is received stating whether the claim was qualified. I run this test from a .bat file (non-GUI mode) in an Apache 7 environment with a single JVM running.
Test Thread Group Properties
# of Threads: ${__P(test.threads,200)}
Ramp-Up Period: ${__P(test.rampup,1)}
Loop Count: Forever
Delay Thread: Enabled
Scheduler: Enabled
Duration: ${__P(test.duration,1800)}
HTTP Request
Method: POST
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
When I duplicate the HTTP Request sampler within the Thread Group and change the tenant ID within the URL, the performance increases by > 55% for some reason (i.e., the number of claims/second increases by 55%). The test does not appear to have failed, so I cannot attribute the performance increase to an increased error rate.
I would have expected an increase if I had enabled another JVM to let the load balancer perform optimization, but this is not the case (still using only 1 JVM).
HTTP Request 1
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
HTTP Request 2
https://serverName:port/database/.../${__P(tenant,2222)}/Claim/${__property(contractId)}
The theory going around here is that JMeter generates a workload at a higher rate for multiple requests than for a single request. I'm skeptical, but haven't found anything "solid" to support my skepticism.
Is this theory true? If so, why would two HTTP Requests increase the performance?
In short: it's OK.
Longer version:
Here is how JMeter works:
JMeter starts all the threads during the ramp-up period
Each thread executes its samplers top to bottom (or according to the Logic Controllers)
When a thread has no more samplers to execute and no more loops to iterate, it is shut down
So how does the number of virtual users correlate with "performance"? When you increase the number of virtual users (and hence the number of requests) in a load test, it affects the Throughput:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So if you increase the load on a well-behaved system, throughput should increase by the same factor, i.e. grow linearly.
When you increase the load but throughput does not increase, that situation is known as the "saturation point": you are getting the maximum performance out of the system. Increasing the load further will lead to throughput going down.
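As a hedged illustration of that relationship (an idealised model, not JMeter output; the capacity figure is hypothetical):

// Idealised model: throughput vs offered load around the saturation point.
public class SaturationSketch {
    public static void main(String[] args) {
        double capacityPerSec = 50.0; // hypothetical maximum the server can sustain
        for (int users = 10; users <= 100; users += 10) {
            double offeredPerSec = users; // assume each user issues 1 request/sec
            double throughput = Math.min(offeredPerSec, capacityPerSec);
            System.out.printf("%3d users -> %5.1f requests/sec%n", users, throughput);
        }
    }
}

In this model throughput grows linearly up to 50 users and then flat-lines; a real system past the saturation point often degrades rather than flat-lining, which is the "throughput going down" case above.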
References:
Apache JMeter Glossary
An extended Glossary version
And how do you measure your performance? According to your "theory", your measurements include JMeter overhead, and that would be wrong. Moreover, is the response the same in both cases? What I mean is: is the backend doing the same work in both cases?
Maybe the first request returns different output than the other one. Maybe it is more expensive to generate the output for one of the requests. That is why you notice "increased" performance: normally you would do N heavy tasks in X seconds, while in the second case you do G heavy tasks + H light tasks in the same time, where G < N/2. More requests in the same time? Sure! Increased performance? Nope.
So to completely investigate what is happening, you need to review your measurement method. I would start by comparing the actual times for both requests.
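A rough way to run that comparison is to average the elapsed time per sampler label from the results file, so the two HTTP Requests can be compared directly. Same assumption as in the earlier sketches: a CSV JTL with the default column layout (elapsed = column 1, label = column 2):

// Sketch: average elapsed time per sampler label from a CSV JTL file.
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Map;
import java.util.TreeMap;

public class PerLabelAverage {
    public static void main(String[] args) throws Exception {
        Map<String, long[]> perLabel = new TreeMap<>(); // label -> {count, total ms}
        try (BufferedReader in = new BufferedReader(new FileReader("results.jtl"))) {
            String line = in.readLine(); // skip the header row
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",");
                long[] b = perLabel.computeIfAbsent(f[2], k -> new long[2]);
                b[0]++;
                b[1] += Long.parseLong(f[1]);
            }
        }
        perLabel.forEach((label, b) ->
            System.out.printf("%s -> avg %d ms over %d samples%n", label, b[1] / b[0], b[0]));
    }
}

If the tenant-2222 request turns out to be substantially cheaper than the tenant-1111 one, the "increased performance" is just a cheaper average request, not a faster system.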

JMeter latency vs actual browser load test

Is this a valid test for checking how much time it takes to load a web page under test with 500 concurrent users?
I run JMeter with 500 user threads, a ramp-up period of 50, and loop count forever, with a "results in table" listener that also records the latency.
While JMeter is running, I try to load/browse the web page under test using an actual browser (in my case IE8), and it loads in 7 secs. But based on the latency, the majority of the results are 50k++.
Is the 7 secs load time in the actual browser considered a "response time result", since it loads in an actual browser?
Another question:
Is the latency of 50k converted to secs? Does it mean 50 secs to load the web page under test, if we go by the JMeter result?
Kindly clarify this for me please :)
In simple words, latency is network delay (the time taken by the network while transferring data).
In JMeter, latency is the time between when the request is sent to the server and when the first byte of the response reaches the client/JMeter. If the response time is very low, you won't get a precise measure of latency; if the response time is high, the measure will probably be accurate.
In JMeter, latency is reported in the same units as response time, i.e. milliseconds, so a latency of 50,000 ms does mean 50 seconds.
Your 7 secs in the browser is Response time (Processing time + Latency) + Rendering time. In JMeter, rendering time is not present (as JMeter is not a browser). Your rendering time may be very low compared to the response time, but for heavy-content websites rendering time is comparable and should therefore be considered.
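For example, with hypothetical numbers: if the server spends 4 secs processing and the network adds 2 secs of latency, JMeter reports a 6 sec response time; a browser that then needs 1 more sec to render the page shows it in about 7 secs.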
I hope this clears your doubts :)
