Performance Testing: What does a fluctuating response time indicate? - performance

Below is the graph I received after the performance test execution.
I am confused by the fluctuating response time graph.
NOTE: 1) The throughput graph is also fluctuating. 2) I did not receive any errors during the test.

It normally indicates that either the application under test or the JMeter engine is overloaded and therefore cannot handle/produce a stable load pattern.
Your response time is around 1.5 minutes, which seems a little high to me, so I would suggest monitoring the application under test and checking:
whether it has enough headroom to operate in terms of CPU, RAM, network IO, etc., as it might be the case that the application is short on RAM and starts swapping, and disk IO is much slower than RAM; this can be checked using, for example, the JMeter PerfMon Plugin (see the sketch after this answer)
whether it is properly configured for high loads, as its middleware (database, application server, load balancer, etc.) needs to be tuned; a spike-like response time pattern may indicate intensive GC activity
in any case, ensure that JMeter itself is properly configured for high load and isn't short on resources, because if JMeter cannot send/receive requests fast enough you will get false-negative results
A single chart never tells the full story; you need to correlate information from all possible sources, collect log files, etc.
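For illustration, here is a minimal standalone sketch of the kind of headroom check the PerfMon Plugin automates; it polls CPU load and free memory via the JDK's com.sun.management.OperatingSystemMXBean, and the 5-second interval is an arbitrary choice.

```java
import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;

// Minimal headroom probe: prints system CPU load and free physical memory every 5 seconds.
// Run it on the machine hosting the application under test (or on the JMeter engine itself).
// On newer JDKs the equivalent methods are getCpuLoad() / getFreeMemorySize().
public class HeadroomProbe {
    public static void main(String[] args) throws InterruptedException {
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        while (true) {
            double cpu = os.getSystemCpuLoad() * 100;                   // may be -1.0 until the first sample
            long freeMb = os.getFreePhysicalMemorySize() / (1024 * 1024);
            System.out.printf("CPU: %.1f%%  Free RAM: %d MB%n", cpu, freeMb);
            Thread.sleep(5000);
        }
    }
}
```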

Related

Optimal way to handle loads up to 50K TPS using JMeter

Can a JMeter distributed test handle such loads, or should we fire individual tests on each server and use a Backend Listener to store the details?
If neither of these is the optimal way, what is the best way to build load test infrastructure that can handle big loads?
There are no throughput (number of requests per second) limitations on the JMeter side; whether or not you can generate the required load mainly depends on the hardware you can allocate.
Given a powerful enough machine, and provided you follow JMeter Best Practices, you can even create such a load using a single instance. However, it's a good idea to check resource usage (CPU, RAM, network and disk IO, etc.) using, for example, the JMeter PerfMon Plugin. The idea is that JMeter must have enough headroom to operate: if it cannot send requests fast enough due to, say, high CPU usage, the generated load will be lower than intended even if the system under test could handle more, and you will get false-negative results.
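To get a feel for the hardware involved, a back-of-the-envelope bandwidth estimate helps; in the sketch below the payload sizes (2 KB request, 10 KB response) are purely hypothetical placeholders, so substitute your own measured figures.

```java
// Rough network bandwidth estimate for a 50K TPS target.
// The average request/response sizes are assumptions; measure your own.
public class BandwidthEstimate {
    public static void main(String[] args) {
        int targetTps = 50_000;              // desired requests per second
        int avgRequestBytes = 2 * 1024;      // assumed average request size
        int avgResponseBytes = 10 * 1024;    // assumed average response size

        long bytesPerSecond = (long) targetTps * (avgRequestBytes + avgResponseBytes);
        double gigabitsPerSecond = bytesPerSecond * 8 / 1_000_000_000.0;
        System.out.printf("Approx. network throughput required: %.2f Gbit/s%n", gigabitsPerSecond);
    }
}
```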
Whether you need to use the Backend Listener mainly depends on the following criteria:
do you need to observe the test results in real time while the test is running
do you need to store the results in a database instead of the .jtl results files

JMeter fails on long scenario

I have a scenario with 5K HTTP requests. When I start JMeter with it, JMeter simply hangs after about 170 users. I followed all the guidelines for successful stress testing (no listeners, headless, increased heap space).
I must say that some of those requests are rather large; the overall file is ~115M.
When I only take a subset of the requests (~100), the simulation works better (faster initialization of users, holds more than 170 users, etc.).
My question is, first: as I understand it, JMeter loads the scenario tree and every thread plays it, so there should not be any duplication; what exactly causes this excessive load? And second, what can I do about it?
PS: when I look at the system bottlenecks I notice both CPU and memory are at very high values with the long file, while both metrics are low with the shorter version. Can anyone explain?
PS2: the requests have about 7 seconds of delay between them
First, I need to let you know that if you are using a single system to do the load testing, the maximum your hardware or the network port can handle at a time is 1 Gig of data, and your firewall (if any) would again receive/pass no more than 1 Gig of data. Try doing the same load test with JMeter's distributed testing setup (master-slave distributed system). Even then, I don't think it would run for 4K requests (if these requests are heavy).
Best possible solutions:
Try the distributed setup mentioned above.
Run the load test in non-GUI (CLI) mode.
Increase the ramp-up time as needed.
Increase the RAM of your system and allocate the maximum available heap space to JMeter (see the sketch after this list).
Drastic change: 1. use BlazeMeter cloud, or 2. move the complete load testing setup to Amazon servers, which are more reliable and scalable.
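As a small sketch related to the last two heap points, the snippet below simply reports the heap the JVM actually received; running something like it as a standalone class on the JMeter host is a quick way to confirm that an increased heap setting really took effect.

```java
// Reports how much heap the running JVM was actually granted.
// Useful for verifying that an increased -Xmx setting for JMeter took effect.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb = rt.maxMemory() / (1024 * 1024);     // upper limit (roughly -Xmx)
        long totalMb = rt.totalMemory() / (1024 * 1024); // currently committed heap
        long freeMb = rt.freeMemory() / (1024 * 1024);   // free space within the committed heap
        System.out.printf("Heap max: %d MB, committed: %d MB, free: %d MB%n",
                maxMb, totalMb, freeMb);
    }
}
```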

Throughput coming in very low against the desired RPS - JMeter

In JMeter, how can I achieve 100 RPS for an HTTP request that takes an average of 20 seconds to respond? The higher the thread count I gave, the longer the response time became.
What thread count should I ideally use in this situation? If I add a Constant Throughput Timer or a Throughput Shaping Timer, what should the thread count and the timer settings be?
I have adjusted the thread count to 100, 250, 500 and 2500, but it does not give anything close to 100 RPS; I see about 20 requests per minute.
This sounds like a bottleneck, so:
First of all, ensure that you're following JMeter Best Practices, as it might be the case that the JMeter configuration is not suitable for kicking off that many threads.
Second, I would recommend checking whether your system under test has enough headroom in terms of resources (CPU, RAM, network, disk, etc.); you can do this using, for example, the JMeter PerfMon Plugin.
If there is no lack of resources but application performance is still not satisfactory, check the infrastructure configuration (web/application server settings, database settings, etc.), as in the majority of cases default configurations are not good enough for high loads. Refer to the documentation of your infrastructure software for tuning tips.
And finally, there could be problems with your application code itself. Re-run your test with profiling-tool telemetry enabled; this way you will be able to tell for sure where your application spends time and/or resources.
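One way to sanity-check the thread count itself is Little's Law: the concurrency needed to sustain a given throughput is roughly throughput × response time, so with a 20-second average response time, 100 RPS needs on the order of 2000 concurrent threads, which is why 100-500 threads cannot get there. A minimal sketch of that arithmetic:

```java
// Little's Law sanity check: threads needed ≈ target throughput × average response time.
public class LittlesLaw {
    public static void main(String[] args) {
        double targetRps = 100.0;          // desired requests per second
        double avgResponseSeconds = 20.0;  // observed average response time
        double threadsNeeded = targetRps * avgResponseSeconds;
        // Roughly 2000 threads are needed to sustain 100 RPS at a 20 s response time,
        // assuming the application can actually serve that many requests in parallel.
        System.out.printf("Approx. concurrent threads required: %.0f%n", threadsNeeded);
    }
}
```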

How to avoid network latency in performance testing

We have servers installed in Las Vegas, and currently we need to perform performance testing with JMeter from our San Francisco office. I am pretty sure that doing so will add network latency to the response times. Do you have any idea how we can avoid that?
You can't avoid network latency, but you can at least minimize its impact on your test results.
Just place your load generator instances (JMeter servers) as close as possible to the testing target. Ideally they should be in the same data center (take a look at Amazon EC2 instances, for example).
In this case latency will not have a huge effect on your performance results, since it will be relatively small.
But remember that network latency is an everyday part of any network communication, and you have to take it into account as well. It can have a major effect on your system in production, especially for users who are not "situated" close to your data centers.
Actually, JMeter stores Latency separately, and as per The Load Reports guide:
The response time that is required to receive a response from the server is the sum of the response time + latency.
A JMeter .jtl result file looks as follows (screenshot not included):
So a very simple spreadsheet formula like =B2-L2 (subtracting the Latency column from the elapsed column) will give you the response time without the latency component; however, this isn't something that is normally done, as latency matters.
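If you prefer to do that subtraction outside a spreadsheet, a small post-processing sketch like the one below also works; it assumes a CSV-format .jtl with a header row containing "elapsed" and "Latency" columns, and the file name results.jtl is a hypothetical placeholder.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.List;

// Reads a CSV-format .jtl file and prints elapsed time minus latency for each sample.
// Note: this naive comma split breaks if fields (e.g. responseMessage) contain commas.
public class LatencyStripper {
    public static void main(String[] args) throws Exception {
        try (BufferedReader reader = new BufferedReader(new FileReader("results.jtl"))) {
            List<String> header = Arrays.asList(reader.readLine().split(","));
            int elapsedIdx = header.indexOf("elapsed");
            int latencyIdx = header.indexOf("Latency");
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");
                long elapsed = Long.parseLong(fields[elapsedIdx]);
                long latency = Long.parseLong(fields[latencyIdx]);
                System.out.println("Response time without latency: " + (elapsed - latency) + " ms");
            }
        }
    }
}
```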

How to do load testing using JMeter and VisualVM?

I want to do load testing for 10 million users of my site. The site is a Java-based web app. My approach is to create a JMeter test plan for all the links and then take a report for the 10 million users. Then use jvisualvm to do profiling and check if there are any bottlenecks.
Is there any better way to do this? Is there any existing demo for doing this? I am doing this for the first time, so any assistance will be very helpful.
You are on the correct path, but your load target is off by a large factor.
I say this because your site will probably need more machines to handle 10 million concurrent users. A single process alone would probably struggle to handle 32K concurrent TCP streams. Also do some math on the bandwidth it would take to actually serve 10 million users.
Now, I do not know what kind of service you are thinking of providing on your site, but considering that JVisualVM slows down processing by a factor of 10 (or more with method tracing), you would not actually be measuring the "real world" if you ran JMeter and JVisualVM at the same time.
JVisualVM is more useful when you run at lower loads.
To create a good measurement, first make sure you have a good baseline.
Make a test with 10 concurrent users, connect JVisualVM, let it run for a while, and note down all the interesting values.
Once you have your baseline, you can start adding more load.
Add 10 times the load (e.g. 100 users) and look at the changes in JVisualVM. Continue this until it becomes obvious that JVisualVM is slowing you down. Every time you add extra load, make sure you have written down the numbers you are interested in. Plot the numbers in a graph.
Now extrapolate the graph (by hand) to the number of users you want. This works for memory usage, disk access, etc., but not for CPU time used, because JVisualVM will eat CPU and give you invalid numbers for it (especially if you have method tracing turned on).
If you really want to go as high as 10 million users, I would not trust JMeter either; I would write a little test program of my own that performs the test you want. This would be fine, since setting up the site to handle 10 million users will also take time, so spending a little extra time on the test tools is not a waste.
Just because you have 10 million users in the database doesn't mean that you need to load test with that many users. Think about it: is your site really going to have 10 million simultaneous users? For web applications, a 1:100 ratio of concurrent to registered users is common, i.e. you are unlikely to have more than 100K users active at any moment.
Can JMeter handle that kind of load? I doubt it. Please try Faban instead. It is very lightweight and can support thousands of users on a single VM. You also have much better flexibility in creating your workload and can automate monitoring of your entire test infrastructure.
Now to the analysis part. You didn't say what server you are using. Any Java app server will provide sufficient monitoring support. Commercial servers provide nice GUI tools, while Tomcat provides extensive monitoring via JMX. You may want to start here before getting down to the JVM level.
For the JVM, you really don't want to use VisualVM while running such a large performance test. Besides, to support such a load, I assume you are using multiple app server/JVM instances. The major performance issue is usually GC, so use the JVM options to collect and log GC information. You will have to post-process the data.
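As a rough illustration of that post-processing, a pause-time summary could look like the sketch below; it assumes JDK 9+ unified GC logging enabled with something like -Xlog:gc:gc.log, and the log file name is a hypothetical placeholder.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sums and averages GC pause times from a unified GC log (JDK 9+, e.g. -Xlog:gc:gc.log).
// "gc.log" is a placeholder file name; adjust it to your own logging configuration.
public class GcPauseSummary {
    public static void main(String[] args) throws Exception {
        Pattern pauseMs = Pattern.compile("Pause.*?(\\d+\\.\\d+)ms");
        double totalMs = 0;
        long count = 0;
        for (String line : Files.readAllLines(Paths.get("gc.log"))) {
            Matcher m = pauseMs.matcher(line);
            if (m.find()) {
                totalMs += Double.parseDouble(m.group(1));
                count++;
            }
        }
        System.out.printf("GC pauses: %d, total %.1f ms, average %.2f ms%n",
                count, totalMs, count == 0 ? 0.0 : totalMs / count);
    }
}
```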
This is a non-trivial exercise - good luck!
There are two types of load testing: bottleneck identification and throughput. The question leads me to believe this is about bottlenecks, so the number of users is something of a red herring; instead the goal is, for a given configuration, to find areas that can be improved to increase concurrency.
Application bottlenecks usually fall into three categories: database, memory leak, or slow algorithm. Finding them involves putting the application in question under stress (i.e. load) for an extended period of time - at least an hour, perhaps up to several days. JMeter is a good tool for this purpose. One of the things to consider is running the same test with cookie handling enabled (i.e. JMeter retains cookies and sends them with each subsequent request) and disabled - sometimes you get very different results, and this is important because the latter is effectively a simulation of what some crawlers do to your site. Details for bottleneck detection follow:
Database
Tables without indices or SQL statements involving multiple joins are frequent app bottlenecks. Every database server I've dealt with (MySQL, SQL Server, and Oracle) has some way of logging or identifying slow-running SQL statements. MySQL has the slow query log, whereas SQL Server has dynamic management views that track the slowest-running SQL. Once you've got your hands on the slow statements, use the explain plan to see what the database engine is trying to do, use any features that suggest indices, and consider other strategies - such as denormalization - if those two options do not solve the bottleneck.
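Once a slow statement is identified, pulling its plan is straightforward; the sketch below does this over JDBC against MySQL, where the connection URL, credentials, and the query itself are all hypothetical placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

// Runs EXPLAIN on a suspected slow query and prints the plan (MySQL syntax shown).
// Requires the MySQL Connector/J driver on the classpath; URL, credentials and the
// query are placeholders - substitute your own.
public class ExplainSlowQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/appdb";
        String slowQuery = "SELECT o.* FROM orders o "
                + "JOIN customers c ON o.customer_id = c.id WHERE c.email = 'x@example.com'";
        try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("EXPLAIN " + slowQuery)) {
            ResultSetMetaData meta = rs.getMetaData();
            while (rs.next()) {
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    System.out.print(meta.getColumnName(i) + "=" + rs.getString(i) + "  ");
                }
                System.out.println();
            }
        }
    }
}
```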
Memory Leak
Turn on verbose garbage collection logging and a JMX monitoring port. Then use jConsole, which provides much better graphs, to observe trends. In particular, leaks usually show up as the Old Gen or Perm Gen spaces filling up. Leaks become a bottleneck when the JVM spends increasing amounts of time attempting garbage collection unsuccessfully, until an OOM error is thrown.
A growing Perm Gen usually just means the space needs to be increased via a command-line parameter to the JVM, while a growing Old Gen implies a leak: stop the load test, generate a heap dump, and then use the Eclipse Memory Analyzer Tool to identify the leak.
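A heap dump is usually taken with jmap, but it can also be triggered programmatically; the sketch below uses the HotSpot diagnostic MXBean of the current JVM, and the output path is a hypothetical placeholder.

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Triggers a heap dump of the current JVM, roughly equivalent to
// "jmap -dump:live,format=b,file=...". Open the resulting .hprof file in Eclipse MAT.
public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // The second argument "true" dumps only live objects (forces a GC first).
        diag.dumpHeap("/tmp/app-heap.hprof", true);  // placeholder output path
        System.out.println("Heap dump written to /tmp/app-heap.hprof");
    }
}
```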
Slow Algorithm
This is more difficult to track down. The most frequent offenders are synchronization, inter-process communication (e.g. RMI, web services), and disk I/O. Another common issue is code using nested loops (look mom, O(n^2) performance!).
The best way I've found to track these issues down, absent deeper knowledge, is to generate stack traces. These tell you what all threads are doing at a given point in time. What you're looking for are BLOCKED threads or several threads all executing the same code; this usually points at some slowness within the codebase.
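jstack is the usual way to capture such thread dumps; as an illustrative alternative, the same information is available in-process via ThreadMXBean, as in the sketch below.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Dumps all threads of the current JVM and highlights BLOCKED ones,
// i.e. threads waiting to enter a monitor held by another thread.
public class BlockedThreadReport {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            if (info.getThreadState() == Thread.State.BLOCKED) {
                System.out.println("BLOCKED: " + info.getThreadName()
                        + " waiting on " + info.getLockName()
                        + " held by " + info.getLockOwnerName());
            }
        }
    }
}
```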
I blogged about the way I proceeded with the performance test:
Make sure that the server (hardware can be as per the staging/production requirements) has no other installations that can affect performance.
For setting up the users in the DB, a (stored) procedure can be used and called as part of the JMeter test plan.
Install JMeter on a separate machine, so that JMeter itself won't affect the performance.
Create a test plan in JMeter (as shown in figure 1) for all the URIs, with response checking and timer-based requests.
Take the initial benchmark using JMeter.
Check for the low-performing URIs. These are the places to expect bottlenecks.
Try different options for performance improvement, but focus on only one bottleneck at a time.
Try any one fix from step 6 and then take a benchmark. If there is any improvement, commit the changes and repeat from step 5; otherwise revert and try other options from step 6.
The next step would be to use load balancing, hardware scaling, clustering, etc. This may involve some physical setup and hardware/software cost. Present the results along with the scalability options.
For detailed explanation: http://www.daemonthread.com/2011/06/site-performance-tuning-using-jmeter.html
I started using JMeter Plugins.
These allow me to gather application metrics available over JMX for use in my load test.
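For context, this is roughly what such plugins do under the hood; here is a minimal sketch of reading heap usage from a remote JVM over JMX, where the host, port, and the assumption that the target exposes JMX remoting are all hypothetical.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Connects to a remote JVM over JMX and prints its current heap usage.
// Host/port are placeholders; the target must be started with JMX remoting enabled
// (e.g. -Dcom.sun.management.jmxremote.port=9010).
public class RemoteHeapReader {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://app-host:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection server = connector.getMBeanServerConnection();
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    server, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            long usedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
            System.out.println("Remote heap used: " + usedMb + " MB");
        }
    }
}
```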
