Sample time (ms) is different from the LoadRunner response time for the same request. Why is it so? - jmeter

We recorded a request for launching a website page in JMeter, excluding all the static content files (CSS, JS, etc.). When we replayed the script, the sample time (which I take to be the response time) came out to around 5000 ms.
We recorded the same request in LoadRunner and the response time was around 300 ms. When we checked the response time for the request with HTTPFox it was also around 300 ms.
My question is why there is such a drastic difference between the response times measured by the two tools. Am I going wrong while calculating the response time in JMeter, or is there another way to calculate response time in JMeter?

I can see several reasons why this can happen:
JMeter is configured to "Redirect Automatically" or to "Follow Redirects"
JMeter is configured to "Download embedded resources"
Your system under test demonstrates high latency (the time for the request to reach the server and the first byte of the response to come back); JMeter's reported elapsed time includes this latency (see The Load Reports guide for an explanation of the metrics)
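If you want a third data point that is independent of both tools, a single bare request measured outside JMeter and LoadRunner can show which figure is closer to the raw server response time. Below is a minimal sketch using Python's requests library; the URL is a placeholder, and requests stops its timer once the response headers arrive, so it deliberately excludes redirects and embedded resources.

```python
# Hypothetical cross-check of the bare request time, independent of either tool.
# Requires the `requests` package; the URL below is a placeholder.
import requests

URL = "https://example.com/launch-page"  # replace with the recorded request

resp = requests.get(URL, allow_redirects=False, timeout=30)
print("status code :", resp.status_code)
print("elapsed (s) :", resp.elapsed.total_seconds())  # time until response headers arrived
if resp.is_redirect:
    print("The server answered with a redirect; a tool configured to follow it will report more time.")
```

If this number is close to the 300 ms reported by LoadRunner and HTTPFox, the extra time in JMeter most likely comes from one of the configuration items above rather than from the server.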

There are lots of architectural differences between the tools which can contribute to this difference. Narrow your scope to one request, such as an image, and scale up your number of users in both tools to see what happens.
You also have test configuration items that can come into play, such as JMeter running monolithically on one host vs LoadRunner running distributed amongst many generators, think time setting differences, number of users, etc. You could spend all day nailing down the jello of test settings and architecture.
But, given that the LoadRunner time is closest to what is observable with a proxy and manual execution, what can you infer about the rest of your test data?

Related

How to find deadlock, timeout and memory issues using JMeter?

I am new to performance testing. I have a task to measure the web application's performance. I need to find out which modules/calls are causing deadlock, timeout and memory issues.
Q1. How can I use JMeter to find out deadlock, memory and timeout issues? If I do the following steps, is it the right way to trace those issues?
Create a test plan in JMeter which contains multiple Thread Groups.
In each thread group, have multiple HTTP requests and 200 or more users plus an infinite loop.
Monitor JMeter results and SQL Profiler for deadlocks.
Q2. Is JMeter the right tool for tracking those issues? Or should I use a browser-based performance testing tool such as LoadNinja or LoadView?
Thanks
Bonnie
Q1. JMeter per se doesn't provide any toolchain to detect deadlock and memory issues. The HTTP Request sampler (or, even better, HTTP Request Defaults) lets you set connect and response timeouts; if the value is blank it defaults to the operating system timeout or the web server timeout, whichever comes first.
If you conduct some form of stress test, i.e. start with 1 virtual user and gradually increase the load, at some point you will see that response time starts growing and the number of requests per second starts decreasing. That is the point of maximum system performance; beyond it, performance will degrade. A quick way to spot it in the results is sketched below.
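A minimal sketch of that check, assuming your test writes a CSV .jtl file with the default timeStamp (epoch milliseconds) and elapsed (milliseconds) columns; adjust the column names and file name to your own save configuration.

```python
# Bucket the samples by wall-clock time and print throughput plus average
# response time per bucket; the saturation point is where throughput stops
# growing while the average response time keeps climbing.
import csv
from collections import defaultdict

BUCKET_SECONDS = 30
buckets = defaultdict(list)  # bucket start (s) -> list of elapsed times (ms)

with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        second = int(row["timeStamp"]) // 1000
        buckets[second - second % BUCKET_SECONDS].append(int(row["elapsed"]))

for bucket in sorted(buckets):
    samples = buckets[bucket]
    throughput = len(samples) / BUCKET_SECONDS
    avg_rt = sum(samples) / len(samples)
    print(f"t+{bucket - min(buckets):>5}s  {throughput:6.1f} req/s  avg {avg_rt:8.0f} ms")
```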
To monitor the application under test's memory you can use the JMeter PerfMon Plugin; it will allow you to state whether a lack of RAM is the cause of the performance issue.
With regards to deadlocks: a deadlock should result in an HTTP Request sampler failure (or timeout). JMeter won't give you the underlying reason, but it will give you the timestamp, and you should be able to check what happened with your application/database at that moment.
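As a small example of that correlation step, the sketch below lists every failed sample with its timestamp so it can be matched against application and database logs; it assumes a CSV .jtl with the default timeStamp, label, success and responseMessage columns.

```python
# Print the wall-clock time of every failed or timed-out sample.
import csv
from datetime import datetime

with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        if row["success"].lower() != "true":
            when = datetime.fromtimestamp(int(row["timeStamp"]) / 1000)
            print(when, row["label"], row.get("responseMessage", ""))
```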
Q2. A well-behaved JMeter test must produce the same network footprint as a real browser; if your test plan is good enough, the system under test shouldn't be able to distinguish whether it's being hit by JMeter or by a real user with a real browser. JMeter will not give you client-side performance metrics like page rendering time or JavaScript execution time because:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).

JMeter Load Testing Time Verification

I use JMeter for load testing.
I noted the time with a stopwatch when I checked the load time manually; it was 8.5 seconds.
When I run the same case with JMeter it gives a load time of 2 seconds.
There is a huge difference between them. How can I verify the actual time?
E.g. one user takes 9 seconds to load the form while JMeter reports a load time of 2 seconds.
Client time is a complex item, as a recording in the Chrome Developer Tools Performance tab will show you. There's a lot going on at the client, which does lead to a difference between the time you see with an HTTP protocol test tool such as JMeter (and most of the other performance test tools on the planet) and the actual client render.
You can address this delta in a number of ways:
Run a single GUI virtual user. Name your timing records such as "Login" and "Login_GUI". The delta between the two is your client weight (see the sketch after this list). Make sure to run the GUI virtual user on a dedicated host to avoid resource contention.
Run a test with all browsers. This was state of the art in 1995. Because of the resource cost and the skew imposed on trying to figure out the cost of the server response, the entire industry shifted to protocol-level virtual users. Some are trying to bring back this model as "state of the art." It is not.
Ask a performance question earlier, also known as "shift left". Every developer has these developer tools at their disposal, as does every functional tester. If you find that a client is slow for one user, be curious and use the developer tools to identify why. If you wait until multi-user performance testing to answer questions related to client weight, then you have waited too long and often will not have the time or resources to change the page architecture in meaningful ways to reduce the client page cost. This is where understanding earlier has tremendous advantages for making changes.
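A minimal sketch of the client-weight arithmetic from the first option, assuming both timing records land in one CSV results file with label and elapsed (ms) columns; "Login" and "Login_GUI" are just the example names used above.

```python
# Average each timing record and subtract: the difference between the GUI
# (full browser) record and the protocol-level record is the client weight.
import csv
from collections import defaultdict

totals, counts = defaultdict(int), defaultdict(int)
with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["label"]] += int(row["elapsed"])
        counts[row["label"]] += 1

avg = {label: totals[label] / counts[label] for label in totals}
client_weight_ms = avg["Login_GUI"] - avg["Login"]
print(f"protocol {avg['Login']:.0f} ms, GUI {avg['Login_GUI']:.0f} ms, "
      f"client weight {client_weight_ms:.0f} ms")
```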
In a Performance-tab recording like the one described above, the loading of the individual components typically takes a fraction of a second. Those are the requests that JMeter would be making. But the page can still take several seconds to "render." JMeter is not broken; it is working as designed. It is your understanding of which tools can be used to pull particular stats for analysis that needs to change.
You can't compare JMeter load time to browser load time as-is, also because your browser will load JavaScript files and can call JavaScript functions on page load, while JMeter doesn't execute JavaScript.
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
Just a side note: you can use a browser plugin, or the built-in DevTools Network panel, to check the exact load time in Chrome.
A well-behaved JMeter test's timing should be equal or similar to real-user timing; if there is a 4x difference, most probably your JMeter configuration is not correct.
Probably the most important: make sure your HTTP Request samplers are configured to retrieve so-called "embedded resources" (images, scripts, styles) which are referenced in the web page.
If your application uses AJAX, make sure you execute the AJAX-driven requests as well and add their elapsed time to the main sampler using, for example, a Transaction Controller.
Make sure you mimic the browser's:
Cookies via HTTP Cookie Manager
Headers via HTTP Header Manager
Cache via HTTP Cache Manager
Assuming all of the above, you should be getting a page load time similar to the real user experience. See the How to make JMeter behave more like a real browser article for more detailed information on the above tips.
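One way to sanity-check the comparison, rather than a stopwatch, is to export a HAR file from the Chrome DevTools Network tab ("Save all as HAR") and read the timings out of it. The sketch below assumes HAR 1.2 field names and a placeholder file name.

```python
# Compare the browser's own page timings with the pure network time, which is
# the portion a protocol-level JMeter test can reproduce.
import json

with open("page_load.har") as f:
    har = json.load(f)

for page in har["log"]["pages"]:
    timings = page["pageTimings"]
    print(page.get("title") or page["id"])
    print("  DOMContentLoaded:", timings.get("onContentLoad"), "ms")
    print("  onLoad          :", timings.get("onLoad"), "ms")

# Sum of per-request network time across all entries (ignores parallel downloads).
network_ms = sum(entry["time"] for entry in har["log"]["entries"])
print("total network time across entries:", round(network_ms), "ms")
```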
In addition to the answers provided by James and user7294900, picture the two measurements side by side to understand the difference between your stopwatch and JMeter.
JMeter's time covers sending the request and receiving the response over the network.
Your stopwatch time covers the same network exchange plus everything the browser does afterwards.
Notice that there are additional actions performed by the browser (parsing, executing JavaScript, rendering) when you are taking the time with your stopwatch. This is the reason behind the huge difference in time between JMeter and your stopwatch.
In addition to this, ensure that you are using the same test environment conditions for both measurements (same network conditions, same load generator, etc.).
Hope this helps!

JMeter response time difference between JMeter run results and manually captured response time

There is a response time difference between JMeter run results and the response time captured manually with a stopwatch on the web application from a local system.
Browse the web app from a local Windows system and use a stopwatch to see the response time to load the page.
Run JMeter in non-GUI/GUI mode and observe the response time (listeners were used only to debug; no listener was added when running the script).
I can see there is a difference between the two. Please suggest how to know whether JMeter has captured the correct response time.
Given that you properly configure JMeter, you should get the same or a similar response time for the same request. "Proper" configuration means:
Configure JMeter to retrieve embedded resources and use a parallel thread pool of ~5 threads for this.
This option "tells" JMeter to fetch images, styles and scripts referenced in the main HTML page and to do it in parallel, like real browsers do.
Add HTTP Cache Manager to your Test Plan. Real browsers download embedded resources, but only once; on subsequent requests the resources are returned from the disk cache and no actual request is made. HTTP Cache Manager enables cache simulation and cache-control header handling.
Add HTTP Cookie Manager to represent cookies/sessions and deal with cookie-based authentication.
Add HTTP Header Manager to your test plan to represent browser headers. It can be important, as e.g. a missing Accept-Encoding header may disable server-side compression, so the client receives much more data and it takes more time.
Assuming "good" JMeter configuration you should see more or less the same behavior.
If there are still differences, capture the requests sent by JMeter and by the browser using a sniffer tool like Wireshark and amend the JMeter configuration to eliminate the differences.
JMeter captures three basic measurements per request:
Elapsed Time (the overall timespan from the point when it starts sending the request to the last byte received)
Latency (which starts at the same point in time and ends when the server starts responding)
Connect Time (which is included in Latency and is basically the time for the handshakes with the server, including SSL/TLS negotiation)
So if you set a data writer among your listeners (e.g. Simple Data Writer, although Aggregate Report and Summary Report can do it as well), you'll see these metrics in your data file (while the standard listeners/visualisers/aggregators stick to elapsed time only).
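As a rough illustration of how those three metrics decompose the elapsed time, the sketch below averages them per sampler label from a CSV .jtl; it assumes the Latency and Connect columns are enabled in your jmeter.save.saveservice configuration.

```python
# Split each sample's elapsed time into handshake, first-byte wait and
# download, then average per label.
import csv
from collections import defaultdict

stats = defaultdict(lambda: [0, 0, 0, 0])  # label -> [count, connect, wait, download]

with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        elapsed, latency, connect = (int(row[c]) for c in ("elapsed", "Latency", "Connect"))
        s = stats[row["label"]]
        s[0] += 1
        s[1] += connect              # TCP / TLS handshake
        s[2] += latency - connect    # waiting for the first byte (network + server)
        s[3] += elapsed - latency    # downloading the rest of the response

for label, (n, connect, wait, download) in stats.items():
    print(f"{label}: connect {connect/n:.0f} ms, wait {wait/n:.0f} ms, "
          f"download {download/n:.0f} ms (average over {n} samples)")
```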
But mind that these metrics don't include response rendering, and especially not any code to be executed by the browser.
JMeter just doesn't do that at all: it measures only the combined performance of server plus network, skipping everything on the client side (except for bare necessities, like protocol negotiation).
That might explain the difference you've experienced.
The same goes for the difference between logged server processing times and the response times measured by JMeter: the server just doesn't count what the network adds on top.
PS: And you don't have to sit and click a stopwatch alongside your browser: modern browsers have Dev Tools capable of showing you precise timings divided by stage. E.g. just press Ctrl+Shift+I in Chrome, switch to the Network tab and behold the timings right there as you make your requests.

The average response time differs largely while load testing the same API endpoint with the same load using JMeter

I am using JMeter to test the performance of an API endpoint.
The Number of Threads (users) applied is 100.
When I execute the test for the very first time the average response time is 35345 ms.
For all the following tests with the same number of threads on the same API endpoint the average response time is somewhere around 4705 ms.
What is the reason for such a big difference in average response time?
Does JMeter cache any files on the first test and use the same cached files on all following tests?
If yes, how do I avoid this?
I am new to JMeter; any help in this regard will be much appreciated.
JMeter doesn't cache anything when it comes to API testing. It has an HTTP Cache Manager which can represent the browser cache when it comes to handling embedded resources like images, scripts and styles in web testing (mimicking the HTTP requests sent by real browsers), but that is not your case.
So my expectation is that it is something on your application under test's side, i.e. it needs to "warm up" its own caches and the first few requests are processed longer due to lazy initialization of components on first access.
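If you want to confirm the warm-up theory from your own results rather than take it on faith, a quick check like the sketch below can help; it assumes a CSV .jtl with the default timeStamp (epoch ms) and elapsed (ms) columns and an arbitrary one-minute warm-up window.

```python
# Compare the average response time of the first minute of the test with the
# average of the remainder; a large gap points at server-side warm-up.
import csv

WARMUP_MS = 60_000
rows = []
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        rows.append((int(row["timeStamp"]), int(row["elapsed"])))

start = min(ts for ts, _ in rows)
early = [e for ts, e in rows if ts - start < WARMUP_MS]
late = [e for ts, e in rows if ts - start >= WARMUP_MS]
if early and late:
    print(f"first minute: {sum(early) / len(early):.0f} ms over {len(early)} samples")
    print(f"remainder:    {sum(late) / len(late):.0f} ms over {len(late)} samples")
else:
    print("test shorter than the warm-up window; adjust WARMUP_MS")
```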
So my recommendations are:
Make sure you use a reasonable ramp-up period so the load increases gradually; this way you will be able to correlate the main metrics like response time, hits per second, etc. with the increased load
Monitor your application under test's health from the OS perspective, i.e. CPU, RAM, swap consumption, etc. You can use the JMeter PerfMon Plugin for that
Use profiling tools which are relevant for your application to detect "heavy" methods, long-running DB queries, etc.

Factors to consider when using the JMeter tool

I would like to know which factors I need to consider when using JMeter. Most of the time the internet speed will vary, due to which I don't get an accurate response time, and there are operations on the server side (CPU utilization, etc.).
Do I need to consider all these points when calculating the performance of the application?
In regards to "internet speed vary", JMeter is smart enough to detect it and report as Latency metric. As per JMeter glossary:
Latency.
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client
So by looking at latency alongside the overall response time you can tell how much of the elapsed time was spent waiting for the first byte (network round trip plus server processing) and how much was spent downloading the rest of the response. However, it will be much better if the JMeter load generators live on the same intranet as the system under test. If you need to test your application's behavior when virtual users sit on different network types, that can also be simulated; a quick way to size the throttling is sketched below.
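A tiny sketch of that bandwidth-throttling arithmetic, assuming you use JMeter's httpclient.socket.http.cps / httpclient.socket.https.cps properties (characters per second) to emulate slow connections; the conversion cps = (bandwidth in kbit/s * 1024) / 8 follows the JMeter property documentation, and the bandwidth figures below are illustrative, not official values.

```python
# Convert a target line speed into the "characters per second" value used by
# JMeter's slow-connection emulation properties.
profiles_kbps = {"GPRS": 171, "3G HSPA": 14_400, "ADSL": 8_000, "100 Mbit LAN": 100_000}

for name, kbps in profiles_kbps.items():
    cps = kbps * 1024 // 8
    print(f"{name:>12}: httpclient.socket.http.cps={cps}")
```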
In regards to other factors that matter:
Application under test health. You should be monitoring baseline server-side health metrics to identify whether the application server(s) get overloaded during the load test; e.g. if you see high response times the reason could be as simple as a lack of free RAM or a slow hard drive.
JMeter load generator(s) health. The same approach should be applied to the JMeter engine(s). If the JMeter hosts are overloaded they cannot build and send requests fast enough, which will be reported as reduced throughput.
You can use the PerfMon JMeter Plugin for both. See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for a detailed description of the plugin's installation and usage.
Your tests need to be realistic and represent a virtual user as closely to a real one as possible. So make sure you:
Add Timers to your test plan to represent "think time"
If you are testing a web-based application, consider adding and properly configuring the HTTP Cookie Manager, Header Manager and Cache Manager. Also don't forget to configure HTTP Request Defaults to "Retrieve all embedded resources" and to use a "Parallel pool" for this.
