JMeter breakup of response time

Is there any way to get a breakup of the response time reported by JMeter, i.e.:
travel time of the request
server processing time
travel time of the response
I know JMeter works entirely on the client side, and that the reported response time is the TTLB. But is there a plugin, or any other means, to achieve this?
Thanks in advance.

There is no plugin that will give you such a breakdown (getting the server's processing time is impossible unless you have agents installed on the target server, and monitoring agents are not part of JMeter as of now).
You can get an approximate request travel time by using JMeter's Connect Time feature.
In practice:
Response time = processing time + latency
You can measure latency with various network tools, or get a rough idea using ping (JMeter also reports latency; cross-verify it with ping or WANem).
Once you know the latency, you can derive the processing time.
This should give you the breakdown you need.
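As a rough illustration, a JSR223 Listener can log this split per sample. This is a minimal sketch; prev (the current SampleResult) and log are the standard bindings JMeter provides to JSR223 elements:

// JSR223 Listener, language Groovy: logs a rough response-time split per sample.
long elapsed = prev.getTime()        // full response time (TTLB), ms
long latency = prev.getLatency()     // time to first byte, ms
long connect = prev.getConnectTime() // TCP/SSL connection setup, ms

// Rough split: latency - connect ~ request travel + server processing,
// elapsed - latency ~ time spent downloading the response body.
log.info("${prev.getSampleLabel()}: connect=${connect} ms, " +
        "firstByte=${latency - connect} ms, download=${elapsed - latency} ms")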

1. Add these listeners to the Thread Group:
jp@gc - Composite Graph
jp@gc - Connect Times Over Time
jp@gc - Response Times Over Time
2. In the Composite Graph configuration, combine Connect Times Over Time and Response Times Over Time.
3. After running the test, compare the two curves: if connection setup dominates the response time, the bottleneck is at the network layer; if most of the response time lies beyond the connect time, it is at the server layer.
4. You can also view the exact numbers by adding a View Results in Table listener:
Server processing time = Latency - Connect Time
The larger this difference, the more the bottleneck is at the server layer; the smaller it is, the more the bottleneck is at the network layer.
Note that "server processing time" here covers program processing time, queue waiting time, database query time and so on, so the term is somewhat loose. Still, this method tells you whether the response-time bottleneck is at the network layer or the server layer; if it is at the server layer, further analysis is needed.
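If you prefer to derive these numbers from a saved results file instead of a listener, a short Groovy script can compute Latency - Connect Time per row. This is a sketch: the file name is hypothetical, and it assumes a CSV JTL with a header row, connect-time saving enabled (jmeter.save.saveservice.connect_time=true), and labels that contain no commas.

// Offline breakdown from a CSV JTL; 'label', 'elapsed', 'Latency' and 'Connect' are standard column names.
def lines = new File('results.jtl').readLines()
def header = lines.head().split(',').toList()
int iLabel   = header.indexOf('label')
int iElapsed = header.indexOf('elapsed')
int iLatency = header.indexOf('Latency')
int iConnect = header.indexOf('Connect')

lines.tail().each { line ->
    def cols = line.split(',')
    long elapsed = cols[iElapsed] as long
    long latency = cols[iLatency] as long
    long connect = cols[iConnect] as long
    // Server processing time = Latency - Connect Time (see above)
    println "${cols[iLabel]}: server=${latency - connect} ms, connect=${connect} ms, download=${elapsed - latency} ms"
}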

Related

Getting very high average response time in JMeter

I am testing a scenario with 400 threads. Although I get almost no errors, the average response time is very high. What can cause this problem? It seems the server does not time out but responds very late. I've added the summary report; it is as follows:
This table doesn't tell the full story; if the response time seems "so high" to you, then this is already the bottleneck and you can report it as such.
What you can do to localize the problem is:
Consider using a longer ramp-up period, e.g. start with 1 user and add 1 more user every 5 seconds (adjust these numbers to your scenario), so that you have an arrival phase, a "plateau" and a load-decrease phase. This approach lets you correlate increasing load with increasing response time by looking at the Active Threads Over Time and Response Times Over Time charts (a post-run sketch of this correlation follows this list). This way you will be able to state that:
response time remains the same up to X concurrent users
after Y concurrent users it starts growing, so throughput goes down
after Z concurrent users response time exceeds the acceptable threshold
It would also be good to watch CPU, RAM, etc. usage on the server side, as increased response time may be due to a lack of resources; you can use the JMeter PerfMon Plugin for this.
Inspect your server configuration, as you may need to tune it for high loads (the same applies to JMeter itself; make sure to follow JMeter Best Practices).
Run a profiler tool on the server side during the next test execution; it will show you the slowest places in your application code.
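As mentioned above, here is a rough post-run way to correlate load with response time. This is a sketch: the JTL file name is hypothetical, and 'allThreads' and 'elapsed' are standard columns of a CSV results file with a header row.

// Buckets samples by the number of active threads and averages the response time per bucket.
def lines = new File('results.jtl').readLines()
def header = lines.head().split(',').toList()
int iElapsed = header.indexOf('elapsed')
int iThreads = header.indexOf('allThreads')

def byLoad = lines.tail()
        .collect { it.split(',') }
        .groupBy { it[iThreads] as int }

byLoad.sort().each { threads, samples ->
    def avg = samples.collect { it[iElapsed] as long }.sum() / samples.size()
    println "${threads} active threads -> avg response time ${Math.round(avg as double)} ms"
}

If the averages stay flat up to some thread count and then start climbing, that thread count is your X (or Y) from the list above.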

Effect of slow/unstable network connection in JMeter

Can the network connection affect the communication between the servers and JMeter? Is there any way to reduce the error percentage and the high average response time?
Of course it can. Looking into the JMeter Glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
So the formula is:
Response time = Connect Time + Latency + actual server response time
If there are networking problems, they will have a direct impact on the response time. Check out the How to Analyze the Results of a Load Test Using BlazeMeter article to see how networking issues affect test results. If you want a "cleaner" picture, it is recommended to give the JMeter load generator(s) direct access to the application under test: use a LAN instead of Wi-Fi, and make sure the NICs (as well as routers/switches) have enough bandwidth to serve the anticipated data volumes.
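As a quick sanity check of the network path, you can time a bare TCP connection from the load-generator machine to the server. A minimal Groovy sketch; the host and port below are placeholders:

import java.net.InetSocketAddress
import java.net.Socket

// Times a raw TCP connect to the application host - a rough proxy for the
// network leg of the response time, comparable with JMeter's Connect Time.
def host = 'application-under-test.example.com'   // placeholder
def port = 443

long start = System.nanoTime()
new Socket().withCloseable { s ->
    s.connect(new InetSocketAddress(host, port), 5000)   // 5 s timeout
}
println "TCP connect took ${(System.nanoTime() - start) / 1_000_000} ms"

If this number is already high or unstable, the error percentage and the high average response time are at least partly a network problem.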

Kafka message timestamps for request/response

I am building a performance monitoring tool which works in a cluster with Kafka topics.
For example, I am monitoring two topics: request and response. I.e. I need two timestamps, one from the request and one from the response; I can then calculate the difference to see how much time was spent in the service that received the request and produced the response.
Please take into account that this runs on a cluster, so different components may run on different hosts and hence on different physical clocks; these can be out of sync, which would distort the results significantly.
Also, I cannot reliably use the clock of the monitoring tool itself, as that would skew the timing results with its own processing time.
So I would like to design a proper, reliable way to calculate the time difference. What is the most reliable way to measure the time difference between two events in Kafka?
Solution 1:
We had a similar problem before, and the solution was setting up NTP (Network Time Protocol).
One of your nodes acts as the NTP server and runs daemons to keep the time in sync across all your nodes (we kept UTC); all the other nodes run NTP clients, which keeps the same time across all the servers.
Solution 2:
Build a common clock API for all your components which provides the current time. This makes your system design independent of the nodes' local clocks.
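A minimal sketch of Solution 2 (all names here are hypothetical): components take timestamps from a shared TimeSource abstraction instead of calling System.currentTimeMillis() directly, so the implementation can later be swapped for an NTP-disciplined or centralized time service without touching the measuring code.

// Hypothetical common-clock abstraction.
interface TimeSource {
    long nowMillis()
}

// Simplest implementation: the local clock. This is only trustworthy once
// the nodes are kept in sync, e.g. via NTP as in Solution 1.
class LocalTimeSource implements TimeSource {
    long nowMillis() { System.currentTimeMillis() }
}

def clock = new LocalTimeSource()
long requestTs = clock.nowMillis()    // would be attached to the 'request' message
// ... the service does its work here ...
long responseTs = clock.nowMillis()   // would be attached to the 'response' message
println "Service spent ${responseTs - requestTs} ms"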

JMeter: More HTTP Requests Result in Increased Performance?

I'm trying to understand a significant performance increase in my JMeter test.
In a multi-tenancy database environment, I have a single RESTful service test containing a Thread Group with a single HTTP Request sampler posting an XML payload. The XML payload is then evaluated via stored procedures, and a response is received stating whether the claim was qualified. I run this test from a .bat file (non-GUI mode) in an Apache 7 environment with a single JVM running.
Test Thread Group Properties
# of Threads: ${__P(test.threads,200)}
Ramp-Up Period: ${__P(test.rampup,1)}
Loop Count: Forever
Delay Thread: Enabled
Scheduler: Enabled
Duration: ${__P(test.duration,1800)}
HTTP Request
Method: POST
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
When I duplicate the HTTP Request sampler within the Thread Group and change the tenant ID in the URL, performance increases by more than 55% (i.e., the number of claims/second increases by 55%). The test did not appear to fail, so I cannot attribute the increase to a higher error rate.
I would have expected an increase if I had enabled another JVM to let the load balancer optimize, but this is not the case (still using only 1 JVM).
HTTP Request 1
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
HTTP Request 2
https://serverName:port/database/.../${__P(tenant,2222)}/Claim/${__property(contractId)}
The theory going around here is that JMeter generates load at a higher rate for multiple requests than for a single request. I'm skeptical, but haven't found anything "solid" to support my skepticism.
Is this theory true? If so, why would two HTTP Requests increase the performance?
In short: it's OK.
Longer version:
Here is how JMeter works:
JMeter starts all the threads during the ramp-up period
Each thread executes the samplers from top to bottom (or according to the Logic Controllers)
When a thread has no more samplers to execute and no more loops to iterate, it is shut down
So how does the number of virtual users correlate with "performance"? Increasing the number of virtual users (and hence requests) for a load test affects throughput:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So if you increase the load on a well-behaved system, throughput should increase by the same factor, i.e. linearly.
When you increase the load but throughput does not increase, you have hit the "saturation point" at which you get the maximum performance out of the system. Increasing the load further will make throughput go down.
References:
Apache JMeter Glossary
An extended Glossary version
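To see the glossary formula in action, this sketch computes throughput from a CSV JTL (the file name is hypothetical); 'timeStamp' is the sample start time in epoch milliseconds and 'elapsed' is its duration:

// Throughput = (number of requests) / (total time), measured from the start
// of the first sample to the end of the last sample.
def lines = new File('results.jtl').readLines()
def header = lines.head().split(',').toList()
int iStart   = header.indexOf('timeStamp')
int iElapsed = header.indexOf('elapsed')

def rows = lines.tail().collect { it.split(',') }
long firstStart = rows.collect { it[iStart] as long }.min()
long lastEnd    = rows.collect { (it[iStart] as long) + (it[iElapsed] as long) }.max()

double seconds = (lastEnd - firstStart) / 1000.0
printf('%d requests in %.1f s -> %.2f requests/s%n', rows.size(), seconds, rows.size() / seconds)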
And how do you measure your performance? According to your "theory", your measurements include JMeter overhead, and that would be wrong. Moreover, is the response the same in both cases? That is, is the backend doing the same work in both cases?
Maybe the first request returns different output than the other one, or the output of one of the requests is more expensive to generate. You would then see "increased" performance: normally you do N heavy tasks in X seconds, while in the second case you do G heavy tasks + H light tasks in the same time, where G < N/2. More requests in the same time? Sure! Increased performance? Nope.
So to investigate fully what is happening, you need to review your measurement method. I would start by comparing the actual processing time of both requests.

What is JMeter throughput

My website is hosted in the cloud, and I am running JMeter from my office. I want to know whether the throughput I get in the Summary Report also contains network latency.
I have this kind of API detail in my log file:
GET mywebsite/getBday 200 67
So for all getBday requests the server log shows a processing time of 67 ms, but JMeter shows a throughput of 1.20 requests/sec, and the latency here is 8.5 secs (latency = the Average field of the Summary Report).
Can you tell me whether the throughput in the Summary Report also contains network latency, and if so, how I can exclude it?
Response time includes network latency; it measures the time from when the request was sent to when the response was received.
How can JMeter know how long the request spent in transit, unless the server can respond with the time the request was received?
The only way to exclude network latency from JMeter results is to measure it at the server and send the information back in the response (or obtain it by some other method).
Most servers should have monitoring software running anyway, like carbon/graphite; you can use that to measure the true server response times and expose the network latency.
As I mostly test Java stacks, I also run jconsole on the same machine as JMeter for a side-by-side comparison of graphs to determine the real server capability.
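If the server can report its own processing time in the response, e.g. in a header, a JSR223 PostProcessor can extract it per sample. This sketch assumes a hypothetical X-Processing-Time response header carrying milliseconds; prev and log are the standard JSR223 bindings.

// JSR223 PostProcessor, language Groovy: separates server time from network time,
// given a server that reports its processing time in a (hypothetical) header.
def headers = prev.getResponseHeaders()              // all response headers as one String
def m = headers =~ /(?mi)^X-Processing-Time:\s*(\d+)/
if (m.find()) {
    long serverMs  = m.group(1) as long
    long networkMs = prev.getTime() - serverMs       // everything that was not server work
    log.info("${prev.getSampleLabel()}: server=${serverMs} ms, network+overhead=${networkMs} ms")
}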
"Can you tell me if the throughput that I get in Summary Report contain network latency also."?
The answer is no - throughput is a measure of the completion rate of requests and the formula for calculating it does not include latency. See below.
Probably worth looking up a definition for throughput. JMeter provides its own :
"Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server. The formula is: Throughput = (number of requests) / (total time)."
https://jmeter.apache.org/usermanual/glossary.html
