Factors to consider when using the JMeter tool - jmeter

I would like to know which factors I need to consider when using JMeter. Most of the time the internet speed varies, so I don't get accurate response times, and there are also operations on the server side [CPU utilization, etc.].
Do I need to consider all these points when measuring the performance of the application?

In regards to "internet speed varies", JMeter is smart enough to detect it and report it as the Latency metric. As per the JMeter Glossary:
Latency.
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client
So you should be able to subtract latency from the overall response time and calculate the time required to process your request on the server side. However, it will be much better if the JMeter load generators live on the same intranet as the application. If you need to test your application's behavior when virtual users sit on different network types, that can also be simulated.
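A minimal sketch of that subtraction, assuming a CSV-format JTL results file with the default "elapsed" and "Latency" columns (the "results.jtl" path is just a placeholder):

```java
import java.nio.file.*;
import java.util.*;

// Rough estimate of the "server side" time per the reasoning above:
// elapsed time minus Latency, averaged over all samples in a CSV JTL file.
// Assumes the default CSV header with "elapsed" and "Latency" columns and
// no quoted commas inside labels; "results.jtl" is a placeholder path.
public class ServerTimeEstimate {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("results.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int elapsedIdx = header.indexOf("elapsed");
        int latencyIdx = header.indexOf("Latency");

        long totalMs = 0, samples = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");               // naive CSV split
            long elapsed = Long.parseLong(cols[elapsedIdx]);
            long latency = Long.parseLong(cols[latencyIdx]);
            totalMs += Math.max(0, elapsed - latency);
            samples++;
        }
        System.out.printf("Average (elapsed - latency): %d ms over %d samples%n",
                totalMs / Math.max(1, samples), samples);
    }
}
```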
In regards to other factors that matter:
Application under test health. You should monitor baseline server-side health metrics to identify whether the application server(s) get overloaded during the load test, e.g. if you see high response times the reason could be as simple as a lack of free RAM or a slow hard drive.
JMeter load generator(s) health. The same approach is applicable to the JMeter engine(s). If the JMeter hosts are overloaded, they cannot generate and send requests fast enough, which will be reported as reduced throughput.
You can use the PerfMon JMeter Plugin for both. See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for a detailed description of the plugin's installation and usage.
Your tests need to be realistic and represent a virtual user as close to a real one as possible. So make sure you:
Add Timers to your test plan to represent "think time"
If you are testing a web-based application, consider adding and properly configuring the HTTP Cookie Manager, HTTP Header Manager and HTTP Cache Manager. Also don't forget to configure HTTP Request Defaults to "Retrieve all embedded resources" and to use a "Parallel pool" for this (see the sketch below).
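As a conceptual, protocol-level illustration (plain java.net.http, not JMeter itself) of what such a "realistic" virtual user does: it keeps cookies across requests, sends browser-like headers and pauses between pages. The example.com URL is a placeholder.

```java
import java.net.CookieManager;
import java.net.URI;
import java.net.http.*;
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// Conceptual illustration only (plain java.net.http, not JMeter): a "realistic"
// virtual user keeps cookies across requests, sends browser-like headers and
// pauses between pages instead of hammering the server in a tight loop.
// The URL is a placeholder.
public class RealisticUserSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new CookieManager())        // ~ HTTP Cookie Manager
                .connectTimeout(Duration.ofSeconds(10))
                .build();

        for (int page = 1; page <= 5; page++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/page/" + page))
                    .header("User-Agent", "Mozilla/5.0")   // ~ HTTP Header Manager
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Page " + page + " -> HTTP " + response.statusCode());

            // ~ a Timer: "think" for 2-5 seconds before the next page
            Thread.sleep(ThreadLocalRandom.current().nextLong(2000, 5000));
        }
    }
}
```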

Related

Detect scaling problems from performance test results

I conducted performance testing on an e-commerce website and I have the test results with some metrics. I already found some problems in some components, for example checkout or post-login, with high response times and errors. But I would also like to find the issues that are limiting the application's ability to scale. I only did the testing on the application server, and I observed that the CPU and I/O rates are very stable. But the application still gives high response times. Is there any other way I can determine from the test results why it is not scaling well? Thanks!
From the JMeter test result alone - unlikely. JMeter just sends requests, waits for the responses and measures the time in between, plus collects some extra metrics like connect time and latency; see the JMeter Glossary for the full list with explanations.
The integrated system acts at the speed of its slowest component; possible reasons include:
Network issues (e.g. lack of bandwidth, a faulty router, long DNS resolution time, etc.)
Your application is not properly configured for high loads. Inspect the current setup of the application in terms of thread pools, maximum number of open connections, any limitations on resource usage, etc. Look for documentation on performance tuning of the individual middleware components as well (a toy illustration of this kind of bottleneck follows after this list).
Repeat your test run with profiler tool telemetry enabled, or look at the APM tool output for the test time frame if such a tool is in place. It will allow you to perform a deep dive into what's going on under the hood of this or that function call, as the culprit might be an inefficient algorithm or a slow database query.
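A toy sketch, not tied to any particular application server, of how an undersized pool alone can produce high response times while CPU and I/O stay low: each simulated request needs only 100 ms of work, but a pool capped at 2 worker threads forces callers to queue.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Each simulated "request" needs only 100 ms of actual work, but a worker pool
// capped at 2 threads forces callers to queue, so the observed "response time"
// grows with concurrency while CPU stays almost idle; this is the same symptom
// an application server shows with an undersized thread or connection pool.
public class PoolSaturationDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService undersizedPool = Executors.newFixedThreadPool(2);
        int concurrentRequests = 50;
        CountDownLatch done = new CountDownLatch(concurrentRequests);
        AtomicLong worstObservedMs = new AtomicLong();

        for (int i = 0; i < concurrentRequests; i++) {
            long submittedAt = System.currentTimeMillis();
            undersizedPool.submit(() -> {
                try {
                    Thread.sleep(100);                     // the real work is cheap
                    long observed = System.currentTimeMillis() - submittedAt;
                    worstObservedMs.accumulateAndGet(observed, Math::max);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        undersizedPool.shutdown();
        System.out.println("Worst observed response time: " + worstObservedMs.get()
                + " ms for 100 ms of actual work");
    }
}
```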

How to find deadlock, timeout and memory issues using JMeter?

I am new to performance testing. I have a task to measure a web application's performance. I need to find out which modules/calls are causing deadlock, timeout and memory issues.
Q1. How can I use JMeter to find out deadlock, memory and timeout issues? If I do the following steps, is it the right way to trace those issues?
Create a test plan in JMeter which contains multiple Thread Groups.
Each thread group contains multiple HTTP requests and 200 or more users plus an infinite loop.
Monitor JMeter results and SQL profiler for deadlocks.
Q2. Is JMeter the right tool for tracking those issues? Or should I use a browser-based performance testing tool such as LoadNinja or LoadView?
Thanks
Bonnie
Q1: JMeter per se doesn't provide any toolchain to detect deadlock and memory issues. The HTTP Request sampler (or, even better, HTTP Request Defaults) provides the possibility to set timeouts; if the value is blank, it will default to the operating system timeout or the web server timeout, whichever comes first.
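For illustration only, here is what those two timeout types mean in plain Java (java.net.http, not JMeter's own implementation); the URL and durations below are arbitrary placeholders:

```java
import java.net.URI;
import java.net.http.*;
import java.time.Duration;

// Generic illustration (java.net.http, not JMeter's own implementation) of the
// two timeout types: a connect timeout bounds how long to wait for the TCP
// connection, a response timeout bounds how long to wait for the response once
// the request is sent. The URL and durations are arbitrary placeholders.
public class TimeoutSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))     // "Connect" timeout
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/slow-endpoint"))
                .timeout(Duration.ofSeconds(30))           // "Response" timeout
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("HTTP " + response.statusCode());
        } catch (HttpTimeoutException e) {
            // Under load, a timeout like this is often the first visible symptom
            // of a deadlock or an exhausted pool on the server side.
            System.out.println("Request timed out: " + e.getMessage());
        }
    }
}
```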
If you conduct some form of stress test, i.e. start with 1 virtual user and gradually increase the load, at some point you will see that the response time starts growing and the number of requests per second starts decreasing. That is the point of maximum system performance, and beyond it performance will degrade.
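A rough sketch for spotting that saturation point from a CSV-format JTL results file, assuming the default "timeStamp" and "elapsed" columns are present (the "results.jtl" path is a placeholder):

```java
import java.nio.file.*;
import java.util.*;

// Buckets a CSV JTL file into 10-second windows and prints throughput plus
// average response time per window; the saturation point is roughly where
// throughput stops growing while response time keeps climbing. Assumes the
// default "timeStamp" and "elapsed" columns; "results.jtl" is a placeholder.
public class SaturationPoint {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("results.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int tsIdx = header.indexOf("timeStamp");
        int elapsedIdx = header.indexOf("elapsed");

        Map<Long, long[]> buckets = new TreeMap<>();  // window start -> {count, totalElapsed}
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");
            long bucketStart = Long.parseLong(cols[tsIdx]) / 10_000 * 10_000;
            long[] bucket = buckets.computeIfAbsent(bucketStart, k -> new long[2]);
            bucket[0]++;
            bucket[1] += Long.parseLong(cols[elapsedIdx]);
        }
        for (Map.Entry<Long, long[]> e : buckets.entrySet()) {
            long count = e.getValue()[0];
            System.out.printf("%tT  %6.1f req/s  avg %d ms%n",
                    new Date(e.getKey()), count / 10.0, e.getValue()[1] / count);
        }
    }
}
```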
To monitor the memory of the application under test you can use the JMeter PerfMon Plugin; it will allow you to state whether a lack of RAM is the cause of the performance issue.
With regards to deadlocks: a deadlock should result in an HTTP Request sampler failure (or timeout). JMeter won't give you the underlying reason, but it will give you the timestamp, and you should be able to check what happened with your application/database at that moment.
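If the application under test runs on a JVM, one way to confirm a suspected deadlock at that timestamp is a thread dump (e.g. jstack) or a programmatic check like the sketch below, which runs inside the application's JVM, not inside JMeter:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Runs inside the application's JVM (not inside JMeter) and asks the JVM
// whether any threads are currently deadlocked, printing who holds which lock.
// A thread dump taken with jstack at the timestamp JMeter reports gives the
// same information.
public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long[] deadlockedIds = threads.findDeadlockedThreads();
        if (deadlockedIds == null) {
            System.out.println("No deadlocked threads detected");
            return;
        }
        for (ThreadInfo info : threads.getThreadInfo(deadlockedIds, true, true)) {
            System.out.printf("Thread %s is blocked on %s held by %s%n",
                    info.getThreadName(), info.getLockName(), info.getLockOwnerName());
        }
    }
}
```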
Q2: A well-behaved JMeter test must produce the same network footprint as a real browser; if your test plan is good enough, the system under test shouldn't be able to distinguish whether it's being hit by JMeter or by a real user using a real browser. However, JMeter will not give you client-side performance metrics like page rendering time or JavaScript execution time, as:
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).

Interpreting JMeter Results

I have been running some load tests against APIs using JMeter, the results are below:
I am trying to understand what the cause of the two different patterns of slow behaviour I am seeing could be:
Pattern 1: Time to connect is low, Latency is high
Pattern 2: Time to connect is high, Latency is low
*Note: the majority of calls are returning in around 45-50ms.
My current thoughts are as follows:
Pattern 1: This is "server processing time", so for some reason the back-end server is taking longer than usual to respond. We will need to do a deeper dive to figure out why.
Pattern 2: This pattern shows a long time to establish a TCP connection. Is there a way to determine if this is a problem on the outgoing side i.e. JMeter itself is running out of threads to make API connections, or if the API server is running out of connections and is unable to accept more?
How should I interpret these results? Are there any additional data points I could pull or tools I could use to better understand the findings?
Both Connect Time and Latency are network-related metrics; the formula is:
Elapsed Time = Connect Time + Latency + Server Response time
It looks like your server itself is not the problem; the issue is either at the network level or connected with JMeter itself, which might lack the resources to send requests fast enough.
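As a starting point, a small sketch that splits the slow samples in a CSV-format JTL file into the two patterns you describe, assuming Latency and Connect time are saved in the results file (the 500 ms threshold and the "results.jtl" path are arbitrary placeholders):

```java
import java.nio.file.*;
import java.util.*;

// Splits the slow samples in a CSV JTL file into the two patterns described in
// the question: connect-time-dominated vs latency-dominated, comparing the two
// components as they appear in the formula above. Assumes Latency and Connect
// time are saved in the results file; the 500 ms threshold and the
// "results.jtl" path are arbitrary placeholders.
public class SlowSamplePatterns {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("results.jtl"));
        List<String> header = Arrays.asList(lines.get(0).split(","));
        int elapsedIdx = header.indexOf("elapsed");
        int latencyIdx = header.indexOf("Latency");
        int connectIdx = header.indexOf("Connect");

        long slowThresholdMs = 500, latencyDominated = 0, connectDominated = 0;
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");
            if (Long.parseLong(cols[elapsedIdx]) < slowThresholdMs) continue;
            long latency = Long.parseLong(cols[latencyIdx]);
            long connect = Long.parseLong(cols[connectIdx]);
            if (connect > latency) connectDominated++; else latencyDominated++;
        }
        System.out.printf("Slow samples: %d latency-dominated (pattern 1), "
                + "%d connect-dominated (pattern 2)%n", latencyDominated, connectDominated);
    }
}
```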
With regards to additional information sources:
Generate the HTML Reporting Dashboard and look into the "Over Time" charts. That should allow you to correlate increasing load with increasing connect time / latency.
Consider setting up monitoring of essential health metrics of JMeter load generator(s) and the application under test. You can use JMeter PerfMon Plugin for this.
Make sure to follow JMeter Best Practices, as JMeter's default configuration is good for test development and debugging, but you need to fine-tune JMeter for high loads.

API load testing with 10k users and I am choosing JMeter

I have a requirement to test an API with 10k users. What I chose is:
JMeter
A Linux server to install JMeter on and generate load with 10k users
One node for now
I will exercise the API with operations like:
Login
Book hotel with post parameters
Update Booking Details
Save Booking
I am thinking of using the above for API testing with 10k users. Are the above tools enough, or should I look at other options like LoadImpact, Loader or BlazeMeter?
If you are talking about an API, you should rather be thinking in terms of "requests per second" than "users", as I strongly doubt that end users will be sending requests to API endpoints via curl or Postman.
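For a rough feel of how the two relate, here is a back-of-the-envelope conversion based on Little's Law; every number in it is an assumption for illustration, not a measurement:

```java
// Back-of-the-envelope conversion between concurrent users and requests per
// second using Little's Law: throughput = users / (response time + think time).
// Every number below is an assumption for illustration, not a measurement.
public class UsersVsRequestsPerSecond {
    public static void main(String[] args) {
        double users = 10_000;
        double avgResponseTimeSec = 0.5;   // assumed average API response time
        double avgThinkTimeSec = 9.5;      // assumed pause between user actions

        double requestsPerSecond = users / (avgResponseTimeSec + avgThinkTimeSec);
        System.out.printf("%.0f users cycling every %.1f s ~ %.0f requests/second%n",
                users, avgResponseTimeSec + avgThinkTimeSec, requestsPerSecond);
        // prints: 10000 users cycling every 10.0 s ~ 1000 requests/second
    }
}
```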
No matter whether your goal is "users" or "requests per second", only you can answer whether a single node will be enough, as it depends on many parameters like:
your machine hardware specifications
software specifications (OS/JVM/JMeter version and architecture)
nature of your test (request/response size, number of pre/post processors, assertions, etc)
So you should act as follows:
Make sure you're following JMeter Best Practices
Make sure you monitor the baseline health metrics of the node which is running JMeter (CPU, RAM, Swap, Network, Disk); you can use the JMeter PerfMon Plugin for that.
Start with minimal load (1 virtual user or 1 request per second) and increase the load until your machine starts swapping or any other health metric exceeds, say, 80% of the maximum available capacity. Once that happens, take a look at the active threads (Active Threads Over Time listener) or throughput (Transactions Per Second): this is the maximum number of users or hits per second you can produce on this particular host for this particular test. If it is enough, you're good to go; if not, you will have to switch to Distributed testing.
See What’s the Max Number of Users You Can Test on JMeter? article for more details.
The whole answer is an elaborate "it depends".
The best thing you can do is to run PerfMon agents on the server generating the load as well as on the server running the system under test.
This way you should see (in the CPU utilization and free memory statistics) whether you have maxed out what the server providing the API can do, or whether it is your load generator running out of steam. In the first case you have a reading based on the hardware and configuration you ran with. In the second you have an indication that you need to employ more than one box to generate the load, or to investigate settings and options.
Have a closer look at the PerfMon JMeter Plugin for exact details.

The average response time differs largely when load testing the same API endpoint with the same load using JMeter

I am using JMeter to test the performance of an API endpoint.
The Number of Threads (users) applied is 100.
When I execute the test for the very first time, the average response time is 35345 ms.
For all the following tests with the same number of threads on the same API endpoint, the average response time is somewhere around 4705 ms.
What is the reason for such a big difference in average response time?
Does JMeter cache any files on the first test and use the same cached files on all following tests?
If yes, how do I avoid this?
I am new to JMeter; any help in this regard will be much appreciated.
JMeter doesn't cache anything when it comes to API testing. It has an HTTP Cache Manager, which can represent the browser cache when handling embedded resources like images, scripts and styles in web testing (mimicking the HTTP requests sent by real browsers), but that is not your case.
So my expectation is that it is something on the application under test's side, i.e. it needs to "warm up" its own caches, and the first few requests take longer due to lazy initialization of components on first access.
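A toy illustration of that warm-up effect; the 3-second delay is an arbitrary stand-in for whatever one-off initialization your application performs, not a model of your actual 35-second first run:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The first call pays for the lazy, one-off initialization (building a cache,
// filling a connection pool, compiling a query plan, ...), so the first
// measurement is far slower than the rest. The 3-second sleep is an arbitrary
// stand-in for that one-off work, not a model of the real application.
public class WarmUpEffect {
    private static final Map<String, String> cache = new ConcurrentHashMap<>();

    static String handleRequest(String key) {
        return cache.computeIfAbsent(key, k -> {
            sleep(3000);                               // expensive first-access setup
            return "result for " + k;
        });
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 3; i++) {
            long start = System.currentTimeMillis();
            handleRequest("bookings");
            System.out.printf("Request %d took %d ms%n",
                    i, System.currentTimeMillis() - start);
        }
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```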
So my recommendations are:
Make sure you use a reasonable ramp-up period so the load increases gradually; this way you will be able to correlate the main metrics like response time, hits per second, etc. with the increasing load
Monitor the health of the application under test from the OS perspective, i.e. CPU, RAM and Swap consumption, etc. You can use the JMeter PerfMon Plugin for that
Use profiling tools which are relevant for your application to detect "heavy" methods, long-running DB queries, etc.
