Concurrency Thread Group and Ultimate Thread Group and Performance Benchmark - JMeter

While trying to understand the Concurrency Thread Group and Ultimate Thread Group, I am confused by the results in the Summary/Aggregate Report when running either of them. For example, if I have 200 users and a ramp-up time of 60 seconds, I do not see 200 samples for every sampler request after the execution completes successfully; only a few sampler requests show 200 samples. When I use a normal Thread Group, the sample count is always the same for every sampler request after the execution completes.
For realistic load testing with more users, could you please suggest which thread group I should prefer?
Could you please also provide guidance with some useful links/books, and share the standard performance benchmark criteria or key parameters to check while doing load testing? (If any benchmark parameter value does not meet the standard, we can say that there is a performance issue.)
Thanks in advance for giving your valuable time.
Thanks
amit

This is due to the fact that:
Your application response time is too high
Your test duration is too low
For example, I can see response times > 80 seconds:
it means that if a single virtual user's cumulative response time for 2 samplers is > 160 seconds and the test duration is 120 seconds, it will not be able to execute all the requests. Just increase your test duration to, for example, 10 minutes and you should see more virtual users capable of executing all the Samplers you defined in the test plan.
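To make the arithmetic concrete, here is a minimal sketch; the 80 s / 85 s response times and 120 s duration below are illustrative values, not taken from your actual results:

    // Rough estimate of how many full iterations one JMeter thread can finish
    // within a fixed test duration. All numbers are hypothetical examples.
    public class IterationEstimate {
        public static void main(String[] args) {
            double sampler1Seconds = 80;        // example response time of sampler 1
            double sampler2Seconds = 85;        // example response time of sampler 2
            double iterationSeconds = sampler1Seconds + sampler2Seconds; // one loop = 165 s
            double testDurationSeconds = 120;   // configured test duration

            int fullIterations = (int) (testDurationSeconds / iterationSeconds);
            // 0 full iterations -> some samplers never reach the expected sample count
            System.out.printf("Full iterations per thread: %d%n", fullIterations);

            // duration needed for every thread to complete at least one full loop
            System.out.printf("Minimum duration for one full loop: %.0f s%n", iterationSeconds);
        }
    }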
Also, given that the first user is capable of executing all the requests successfully and in time, it looks like your application gets overloaded and cannot respond fast enough once the number of concurrent users reaches some "critical threshold". You can add listeners like Active Threads Over Time and Response Times vs Threads; this way you will be able to correlate the increasing load with the increasing response time.
It also makes sense to collect:
Baseline health metrics of your application (CPU, RAM, network, disk usage, etc.); this can be done using the JMeter PerfMon Plugin.
Lower-level details like the slowest methods, largest objects, heaviest database queries, etc. This kind of information can be obtained using profiling tools specific to your application's programming language(s).

Related

Load Testing of a UI User Scenario - JMeter WebDriver Sampler

I want to run a basic load testing scenario on a Mendix application.
The initial requirement is that at least 10 users should be able to perform some activities, like login and file upload, concurrently.
I have used the WebDriver Sampler in JMeter with Selenium and written the JS script for the actions.
The issue is that it launches 10 browsers on the same machine.
Please suggest the proper way to handle this. Currently it seems that the errors I am getting are because I am running the test on my laptop instead of a distributed setup.
Also, the total script takes 60 seconds to finish for a single user. What should the ramp-up time be?
If not JMeter, please suggest any other suitable tool for this scenario.
As per The WebDriver Sampler: Your Top 10 Questions Answered article
Q. How do I Allocate the WebDriver Sampler?
A. First of all, don’t use the WebDriver Sampler to create the load! You’ll need around 1 CPU core per virtual user to keep the JMeter resource consumption within an acceptable range. The WebDriver Sampler should be used in addition to HTTP Request Samplers. Here’s an example of how it should be used:
1,000 users are simulated by HTTP Request Samplers
1 user is the WebDriver Sampler
You should use the WebDriver Sampler to evaluate a real-life user experience. It’s most commonly used to measure the load times of pages when the system is experiencing high loads. The HTTP Request doesn’t really “render” the page and it can’t execute JavaScript. That’s why the WebDriver Sampler is so valuable - JMeter isn’t a browser and it overcomes limitations posed by this fact.
So first of all I would recommend reconsidering your approach and migrating to HTTP Request samplers. You might be able to execute 10 browsers, but if you need to scale the test to 100 or 1,000 users you definitely won't be able to use this "real browser" approach, while HTTP Request samplers act on the HTTP protocol level. Given you configure JMeter properly, there will be no difference for the application under test whether the requests originate from JMeter or from a real browser, and the resource footprint will be much smaller.
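As a rough sanity check, assuming the "~1 CPU core per WebDriver virtual user" rule of thumb quoted above, you can compare the planned number of real-browser users with the cores available on your load generator:

    // Sanity check: can this machine realistically drive N real browsers?
    // Assumes the "~1 CPU core per WebDriver virtual user" rule of thumb.
    public class WebDriverCapacityCheck {
        public static void main(String[] args) {
            int plannedBrowserUsers = 10;   // users to simulate with real browsers
            int availableCores = Runtime.getRuntime().availableProcessors();

            System.out.printf("Planned WebDriver users: %d, available cores: %d%n",
                    plannedBrowserUsers, availableCores);
            if (plannedBrowserUsers > availableCores) {
                System.out.println("Likely overloaded: drive the bulk of the load with "
                        + "HTTP Request samplers, or distribute the browsers across machines.");
            }
        }
    }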
With regards to the ramp-up time: the overall idea is to increase the load gradually so you could correlate the changing number of virtual users with changing metrics like response time, throughput, etc. According to JMeter Documentation:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
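A small sketch of the arithmetic in that quote, using the documentation's own example figures plus the "ramp-up = number of threads" starting point:

    // Ramp-up arithmetic from the JMeter documentation examples:
    // each thread starts (rampUpSeconds / threads) seconds after the previous one.
    public class RampUpDelay {
        public static void main(String[] args) {
            printDelay(10, 100);   // 10 threads, 100 s ramp-up -> a new thread every 10 s
            printDelay(30, 120);   // 30 threads, 120 s ramp-up -> a new thread every 4 s
            printDelay(10, 10);    // "ramp-up = number of threads" starting point -> 1 s apart
        }

        static void printDelay(int threads, int rampUpSeconds) {
            double delay = (double) rampUpSeconds / threads;
            System.out.printf("%d threads over %d s: a new thread every %.1f s%n",
                    threads, rampUpSeconds, delay);
        }
    }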

How much load is it?

I have tried it, but I doubt whether the specification mentioned below is equivalent to a load of 4000 or not:
number of threads - 100,
ramp-up period - 10 secs,
loop count - 40
So how much load is this equal to?
You are loading 100 concurrent threads; the loops just add more execution time.
So it isn't equivalent to 4000 concurrent threads hitting your server.
I don't know what you mean by a load of 4000. Your test will send 4000 requests for each Sampler in your Thread Group, as fast as it can. The actual test duration will depend on your application's response time but will not be less than 10 seconds.
You might want to take a look at the Transactions per Second and Server Hits per Second charts to see how many requests your configuration delivers; both charts can be installed using the JMeter Plugins Manager.
Also, you can generate an HTML Reporting Dashboard which will give you a consolidated, aggregated view of your test results.
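To spell out the arithmetic from the answers above, using the numbers from the question:

    // Thread Group arithmetic for the configuration in the question:
    // concurrency is bounded by the thread count; loops only add more requests.
    public class LoadArithmetic {
        public static void main(String[] args) {
            int threads = 100;        // number of threads (users)
            int loops = 40;           // loop count
            int rampUpSeconds = 10;   // ramp-up period

            int maxConcurrentUsers = threads;           // never more than 100 users at once
            int requestsPerSampler = threads * loops;   // 4000 requests per Sampler in the Thread Group

            System.out.println("Max concurrent users: " + maxConcurrentUsers);
            System.out.println("Requests per Sampler: " + requestsPerSampler);
            System.out.println("Minimum possible duration: " + rampUpSeconds
                    + " s (the real duration depends on response times)");
        }
    }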

Validate that a newly created server supports the same load

We are creating a new hosted server for one of our APIs on managed containers (Kubernetes) and we're trying to validate that it can handle at least the same amount of traffic.
We've started with one of the APIs, where we would need to handle at least 140k requests per minute, all endpoints combined.
To verify this, I created a simple JMeter test as follows:
-Test Plan
---Thread Group Endpoint1
-----HTTP Request -> a GET request with query params for /path1
---Thread Group Endpoint2
-----HTTP Request -> a GET request with query params for /path2
For a local test, I used the following setup:
Thread Groups Endpoint1 and Endpoint2 are set to 200 threads (users), ramp-up period of 1s, loop count = forever and duration 60s.
Using a Summary Report listener when running the test gets me a total of ~9300 # Samples.
Using this approach, is it safe to just increase the number of threads (users) for the Thread Groups until I reach the desired 140k requests per minute?
Note: I have only used JMeter a little before, so I'm aware that the entire approach may be wrong; any suggestions and steering onto the right path are more than welcome.
Your approach is viable as long as it represents real-life application usage. If the API has 2 endpoints with evenly distributed load, your setup is just fine. If there are more endpoints and some of them are used more than others, consider defining the workload accordingly, either using different Thread Groups or another distribution mechanism such as the Throughput Controller.
Increasing the number of threads is also fine; however, consider increasing the load gradually, i.e. stretch the ramp-up time so your test has:
an arrivals (ramp-up) phase
a time to hold the load
a ramp-down phase
This way you will be able to correlate various metrics like increasing response time, throughput, number of errors, etc. with the increasing load. You will also be able to state the number of threads/requests per second at which the system reached its saturation/breaking point, and whether it recovers when the load goes back down.
Also make sure you're following JMeter Best Practices, as ~2,300-2,500 requests per second is not something JMeter can support out of the box; you will need to do some tuning, at the very least increasing the JVM heap size allocated to JMeter.
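To put the target in perspective, here is a quick calculation using the figures from the question: the ~9300 samples observed over the 60-second local run versus the 140k requests/minute goal. Treat the result as an order-of-magnitude gap, since the thread count you will eventually need depends on the real response times:

    // Gap between the observed local throughput and the 140k requests/minute target.
    public class ThroughputGap {
        public static void main(String[] args) {
            double targetPerMinute = 140_000;
            double targetPerSecond = targetPerMinute / 60;       // ~2333 req/s

            double observedSamples = 9_300;                      // from the Summary Report
            double observedDurationSeconds = 60;
            double observedPerSecond = observedSamples / observedDurationSeconds; // ~155 req/s

            System.out.printf("Target:   %.0f req/s%n", targetPerSecond);
            System.out.printf("Observed: %.0f req/s (about %.0fx short)%n",
                    observedPerSecond, targetPerSecond / observedPerSecond);
        }
    }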
You may not be able to achieve the desired 140k requests per minute using a single JMeter machine; in that case you'll need a Distributed Load Testing approach.
Refer to: http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.html
Also, keeping the ramp-up period at 1 second will produce a spike of unrealistic load on the system, which will not give proper results unless you've pre-warmed your server; you should increase the load gradually, following the real/estimated traffic pattern.

JMeter number of threads in a run

I am quite new to JMeter and am trying to do a performance test of my application. I want to generate a 100-requests-per-second scenario; however, my server takes 3-4 seconds to respond to every request. I am running my test for 1 minute, which means the number of requests fired should be 60k within that time span. However, JMeter actually waits for the response before it sends the next request, which is not what I am looking for.
How can I make sure that JMeter keeps sending new requests at 100 req/sec without waiting for the responses, so that the number of requests fired per minute is 60k?
I am trying to use a Constant Throughput Timer with 60k requests per minute, but that is not helping. Here is my test screenshot.
EDIT
I have set it up like this (see screenshot):
And the Throughput Shaping Timer is configured as (see screenshot):
So ideally I should get 3,000 samples? I am still not getting that.
Make sure you provide enough threads (virtual users) under the Thread Group; "vanilla" JMeter won't kick off any extra threads if the actual throughput is lower than the target you specify in the Constant Throughput Timer.
Another solution would be using the Concurrency Thread Group along with the Throughput Shaping Timer. They can be tied together via a feedback loop, so if you use these test elements JMeter will start more threads whenever the current number is not enough to reach the desired requests-per-second rate.
You can install both using the JMeter Plugins Manager.
My suggestion is to consider using the Arrivals Thread Group. This TG will allow you to configure the desired average throughput (ATP); the TG will instantiate the threads required to achieve the ATP goal.
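Whichever thread group you choose, you can estimate how many threads the target rate requires with Little's Law: threads ≈ target throughput × (response time + think time). A minimal sketch with the figures from the question (100 req/sec, 3-4 second response time, no think time):

    // Little's Law estimate: threads needed = target throughput * round-trip time.
    // Uses the figures from the question; adjust the inputs to your own measurements.
    public class LittlesLaw {
        public static void main(String[] args) {
            double targetRequestsPerSecond = 100;
            double worstCaseResponseSeconds = 4;   // the server takes 3-4 s per request
            double thinkTimeSeconds = 0;           // no timers between requests

            double threadsNeeded = targetRequestsPerSecond
                    * (worstCaseResponseSeconds + thinkTimeSeconds);
            System.out.printf("Threads needed for %.0f req/s at %.0f s response time: ~%.0f%n",
                    targetRequestsPerSecond, worstCaseResponseSeconds, threadsNeeded); // ~400
        }
    }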

Within a thread, are JMeter HTTP request/responses done sequentially?

I'm trying to understand the basics of JMeter. I've got a "plus1" Java servlet that adds one to a request parameter and returns the result, so it's a fast test servlet just so I can understand load testing.
Here's my test plan:
Thread Group: 1 thread, ramp up 1 s, loop count 10000
HTTP Request to localhost
Graph Results
Summary Report
When I run this, the summary report shows a throughput number of 200/sec or so.
The key question: with no controllers in the test plan, is JMeter running the test plan (sending a single request) and then waiting for the response before looping?
When I introduce a more computationally intensive page for the request, the throughput number goes down as I would expect.
In short, yes.
There is an argument for having a sampler that makes a request and does not wait for the response, but it's an edge case. In most cases you want a testing tool to wait, see what happens and verify things. It's also more realistic: most users will wait for a response (in fact they generally have to) before making subsequent calls.
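Because each thread waits for the response before sending the next request, a single thread's throughput is roughly the inverse of the average response time, which is consistent with the ~200/sec you observed. The 50 ms "heavier page" below is a hypothetical value for illustration:

    // With one thread and sequential requests, throughput ~= 1 / average response time.
    // 200/sec therefore implies an average response time of about 5 ms for the servlet.
    public class SingleThreadThroughput {
        public static void main(String[] args) {
            double observedThroughputPerSecond = 200;   // from the Summary Report
            double impliedResponseSeconds = 1 / observedThroughputPerSecond;
            System.out.printf("Implied average response time: %.1f ms%n",
                    impliedResponseSeconds * 1000);     // ~5 ms

            double slowerPageSeconds = 0.050;           // hypothetical heavier page: 50 ms
            System.out.printf("Throughput with a 50 ms page: %.0f/sec%n",
                    1 / slowerPageSeconds);             // ~20/sec, hence the lower number
        }
    }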
If you want to run a capacity test then the best approach, I think, is to spread the load over multiple threads and to actually throttle the throughput of each one; you can do this using a Constant Throughput Timer. E.g. you could have 500 threads each running at 60 requests per minute, giving a total load of 500 reqs/sec. This way your test load is predictable and stable, and it won't be linked to the speed of the server's responses. Note: with multiple threads you'll want a ramp-up period, and you might find you have to spread the test over multiple machines (known as 'distributed' testing if you're going to google it).
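The per-thread arithmetic behind that example (500 threads and 500 req/sec are the figures from the paragraph above; the formula generalizes to other targets):

    // Per-thread rate needed for a given total load when throttling each thread
    // with a Constant Throughput Timer (figures from the answer above).
    public class PerThreadThrottle {
        public static void main(String[] args) {
            double totalRequestsPerSecond = 500;
            int threads = 500;

            double perThreadPerMinute = totalRequestsPerSecond * 60 / threads; // 60 req/min each
            System.out.printf("Each of %d threads should run at %.0f requests/minute%n",
                    threads, perThreadPerMinute);
        }
    }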
