JMeter Response Times vs Threads

I am doing API load testing by sending 250 requests at once.
1. Configuration
Naturally, the server takes longer to respond when many users request it simultaneously; that is what the chart is supposed to show, as per http://jmeter-plugins.org/wiki/ResponseTimesVsThreads/. However, this is what I found when testing:
2. Test
The plot above runs from right to left: as the number of active threads decreases, the response time increases.
Is the number of active threads the same as the number of user requests? If so, why does this happen so consistently?
Update 1
I ran another test, this time with an increased ramp-up period:
Number of Threads: 200
Ramp-Up Period: 200 seconds
Loop Count: 200

There are at least two possible explanations:
You don't have a problem, and the improvement in response times comes from a caching effect: after some time your data is sitting in a cache. Only you can validate this, since we don't know whether you are using a large enough dataset or how long your test lasts.
You do have a problem: your server is rejecting connections under load, so you get very rapid failed responses that report a very good response time. To find out whether this is your case, check the response codes over time or transactions over time, along with the error percentage; the sketch below shows how fast failures skew the average.
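As a rough illustration of the second point (plain Python, not JMeter; all numbers are invented), here is how a batch of near-instant failures drags the average response time down even though the successful requests are slow:

```python
# Hypothetical sample set: a struggling server serves some requests slowly
# and rejects the rest almost instantly.
slow_successes = [2500] * 150  # 150 successful requests at 2.5 s each
fast_failures = [30] * 100     # 100 rejected connections at 30 ms each

all_samples = slow_successes + fast_failures
average_ms = sum(all_samples) / len(all_samples)

print(f"Average over all samples: {average_ms:.0f} ms")  # ~1512 ms, looks 'OK'
print(f"Average of successes only: {sum(slow_successes) / len(slow_successes):.0f} ms")
print(f"Error rate: {len(fast_failures) / len(all_samples):.0%}")  # 40%
```

This is why the error percentage has to be read together with the response times: the lower average is an artifact of the failures, not an improvement.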

Related

How to find out the maximum number of concurrent requests per second the server can handle in JMeter

I would like to load test the https://app-staging.servespark.com site. I have completed scripts in JMeter for login and am able to go to any page.
How can I find out the maximum number of concurrent requests per second that the server can handle in JMeter?
Is it possible in JMeter? Please advise.
It looks like you need to conduct a Stress Test, something like:
Start with 1 user
Gradually increase the load at the same time looking into the following charts:
Active Threads Over Time
Response Times Over Time
Transactions Per Second
At the beginning the response time should not change, and the throughput (the number of transactions per second) should increase by the same factor as the number of users
At a certain stage of the test you will notice that the response time starts growing and the number of transactions per second goes down. This indicates a bottleneck (the sketch after these steps shows one way to spot it in the numbers)
You may continue increasing the load to see at which stage the errors will start occurring
And finally you can decrease the load gradually as well to see if the application gets back to normal when the load comes down (i.e. errors disappear, throughput grows, etc.)
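As a sketch of how to read those charts together (Python, with invented measurements), the heuristic is that throughput should grow roughly in proportion to the number of users until the bottleneck appears:

```python
# (users, transactions per second) pairs as you might read them off the
# Active Threads Over Time and Transactions Per Second charts (invented data).
measurements = [
    (10, 10.1), (20, 19.8), (40, 39.5), (80, 78.0),
    (160, 110.0), (320, 112.0),  # throughput flattens out here
]

for (users_a, tps_a), (users_b, tps_b) in zip(measurements, measurements[1:]):
    users_growth = users_b / users_a
    tps_growth = tps_b / tps_a
    # If throughput grows much more slowly than the load, we've hit a bottleneck.
    if tps_growth < 0.8 * users_growth:
        print(f"Bottleneck appears somewhere between {users_a} and {users_b} users")
        break
```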
You could also try the Concurrency Thread Group plugin:
https://jmeter-plugins.org/wiki/ConcurrencyThreadGroup/
and ramp the users up to a value higher than expected.

Getting a very high average response time in JMeter

I am testing a scenario with 400 threads. Although I am getting almost no errors, the average response time is very high. What can bring about this problem? It seems the server does not time out, but responds very late. I've added the summary report. It is as follows:
This table doesn't tell the full story; if the response time seems "so high" to you, that is already a bottleneck you can report.
What you can do to localize the problem is:
Consider using a longer ramp-up period, i.e. start with 1 user and add 1 more user every 5 seconds (adjust these numbers to your scenario) so that you have an arrival phase, a "plateau", and a load-decrease phase (see the profile sketch after this list). This approach lets you correlate the increasing load with the increasing response time by looking at the Active Threads Over Time and Response Times Over Time charts. This way you will be able to state that:
response time remains the same up to X concurrent users
after Y concurrent users it starts growing, so throughput goes down
after Z concurrent users the response time exceeds the acceptable threshold
It would also be good to see CPU, RAM, etc. usage on the server side, as the increased response time might be caused by a lack of resources; you can use the JMeter PerfMon Plugin for this
Inspect your server configuration, as you might need to tune it for high loads (the same applies to JMeter itself; make sure to follow JMeter Best Practices)
Use a profiler tool on the server side during the next test execution; it will show you the slowest places in your application code
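To make the first suggestion concrete, here is a minimal Python sketch of such a profile for the question's 400 threads (the 5-second step comes from the answer; the plateau duration is an assumption):

```python
MAX_USERS = 400        # from the question
STEP_SECONDS = 5       # "add 1 more user every 5 seconds"
PLATEAU_SECONDS = 600  # assumption: hold full load for 10 minutes

# (time in seconds, active threads) points: arrival phase, plateau, decrease.
ramp_up = [(t * STEP_SECONDS, t) for t in range(1, MAX_USERS + 1)]
plateau_end = ramp_up[-1][0] + PLATEAU_SECONDS
ramp_down = [(plateau_end + t * STEP_SECONDS, MAX_USERS - t)
             for t in range(1, MAX_USERS + 1)]

profile = ramp_up + [(plateau_end, MAX_USERS)] + ramp_down
print(f"Full load reached at {ramp_up[-1][0]} s (~{ramp_up[-1][0] / 60:.0f} min); "
      f"test ends at {profile[-1][0]} s")
```

Plotting the response times against a profile like this makes it easy to see where X, Y and Z fall.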

Is it good if sample time increases gradually along with the number of users?

At the start of the script the sample time is low, and then it starts increasing as the load increases. Is this the correct way to do load testing for a website?
Please help: which is the correct way to load test a website?
Not really; in an ideal world, response time should remain the same as the load increases, like:
1 user - response time 1 second - throughput 1 request per second
100 users - response time 1 second - throughput 100 requests per second
200 users - response time 1 second - throughput 200 requests per second
etc.
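These "ideal world" numbers are just Little's Law (concurrent users = throughput × response time); a quick Python check:

```python
RESPONSE_TIME_S = 1.0  # response time stays constant in the ideal case

for users in (1, 100, 200):
    throughput = users / RESPONSE_TIME_S  # requests per second
    print(f"{users:>3} users -> {throughput:.0f} requests/second")

# Once the application is saturated the response time grows instead:
# 400 users at 2 s each still gives only 400 / 2 = 200 requests/second.
```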
The point where the throughput stops growing as you add more users is called the saturation point; it represents the maximum throughput your application can support.
The situation where the response time starts increasing as you start more threads (virtual users) indicates a bottleneck, and the question is whether performance is still acceptable for the number of users defined in your NFR and/or SLA. If yes, you're good to go; if not, you need to report the issue (it would be beneficial if you could also try to determine its cause).
The correct way to load test a website is to simulate end-user activity as closely as possible, including the workload model. Remember to increase the load gradually; this way you will be able to correlate the increasing load with metrics like response time, throughput, and the number of errors. It is also good to decrease the load gradually, to see whether the website recovers when the load goes back to normal/zero.

Why is the number of requests reduced when the number of threads is increased?

I have a test suite with many HTTP requests. Each HTTP request runs with a different number of threads, but all with 30 seconds as the ramp-up time.
Set 1:
Set 2:
The only difference between Set 1 and Set 2 is the number of threads: Set 2 has exactly double the number. But you can see that the total count is reduced. Why is this? I was expecting the number of requests to go up as well when the number of threads was increased.
Can someone please shed some light on this?
Your tables don't tell the full story, and there could be multiple explanations, for example:
You increase the number of threads by a factor of 2
Your application becomes overloaded, hence the response time increases
So, assuming the same test duration, JMeter executes fewer requests, because each thread waits for the response to its previous request before sending a new one (the sketch below puts numbers on this)
So pay attention not only to the number of requests, but also check the response time for all the samplers and the correlation between the increased number of active users and the response time, by looking at e.g. the Response Times vs Threads and Transaction Throughput vs Threads charts.
The aforementioned charts can be installed using the JMeter Plugins Manager.
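To put rough numbers on the explanation above (a Python sketch; the thread counts, response times, and duration are invented, since the original tables are not shown):

```python
DURATION_S = 330  # assumed test length: 30 s ramp-up plus 5 min of steady load

def total_requests(threads: int, avg_response_s: float) -> int:
    """Each thread sends its next request only after receiving the previous response."""
    per_thread_rate = 1 / avg_response_s  # requests per second per thread
    return int(threads * per_thread_rate * DURATION_S)

set_1 = total_requests(threads=50, avg_response_s=0.2)   # -> 82500
set_2 = total_requests(threads=100, avg_response_s=0.5)  # -> 66000
print(f"Set 1: {set_1} requests, Set 2: {set_2} requests")
# Twice the threads, but the response time grew 2.5x, so fewer total requests.
```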

How to set up ramp-up time in JMeter for 500 concurrent users?

We want to demonstrate to a customer that our REST API can handle 500 concurrent requests. What is the best way to set up the ramp-up time for this?
Is it 500 requests in 1 second?
2500 requests in 5 seconds?
Any other option?
With the first option, the app and web servers will be flooded. With the second option…
How should I go about setting this up?
I'd appreciate any input on this.
Performance testing actually has many different faces, for example:
Load Testing: the process of verifying whether the application under test can handle the anticipated load. If you expect 500 users, set 500 threads under the Thread Group and configure the Ramp-Up Period so that the load increases gradually. According to the JMeter documentation:
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
So with a 500-second ramp-up, all 500 users will be online in a little over 8 minutes (see the arithmetic sketch below); after that you can leave the test running for a while (e.g. another 500 seconds) and then, again gradually (500 more seconds), decrease the load to zero.
This way you will be able to correlate an increasing response time (or an increasing number of errors) with the increasing load, and vice versa.
Soak Testing: basically the same as above, but leave the test running overnight or over the weekend to see how your application survives prolonged load. This way you can detect, for example, memory leaks.
Stress Testing: again the same as load testing, but don't cap the load at 500 users; gradually increase it until your application breaks, to see how many users it can serve at most. Then you may also want to gradually decrease the load to see whether the application recovers once the load comes back to normal.
Spike Testing: this assumes no ramp-up at all; this way you test how your application handles 500 users arriving at once.
See the Why ‘Normal’ Load Testing Isn’t Enough article for a more detailed explanation of the various performance testing types and why you need to consider all of them.
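For reference, the arithmetic behind the 500-threads / 500-second suggestion in the Load Testing bullet, as a small Python sketch (the three equal phases are the shape suggested above):

```python
THREADS = 500
RAMP_S = 500  # Ramp-Up Period = number of threads, as a starting point

users_per_second = THREADS / RAMP_S
print(f"{users_per_second:.0f} new user per second; all {THREADS} users "
      f"online after ~{RAMP_S / 60:.1f} minutes")  # ~8.3 minutes

total_s = RAMP_S * 3  # ramp-up + plateau + gradual ramp-down
print(f"Total test duration: {total_s} s (~{total_s / 60:.0f} minutes)")
```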
