How many requests per second can my server handle? - JMeter

I want to know how many HTTP requests per second my server can handle using JMeter.
I got a throughput of 128/min, which is roughly 2 requests per second, but something is wrong in my sampler. Can you tell me what I need to do to test the login URL?

Hammering a login page has little in common with load testing, as load testing means checking your application against the anticipated load.
If you're trying to determine the maximum number of requests per second your server can handle, that is a form of stress testing.
In both cases you should come up with a realistic test scenario that represents real users using real browsers, for example:
Record your scenario using JMeter's HTTP(S) Test Script Recorder
Perform parameterization and correlation
Run your test with several virtual users/iterations and inspect request/response details using View Results Tree listener to ensure that your test is doing what it is supposed to be doing
Once you're happy with your test script - disable the View Results Tree listener and run full load test using command-line non-GUI mode
Analyze your results using JMeter Reporting Dashboard
With regards to your goal, I would recommend increasing the load gradually so you can correlate the increasing number of users with the increasing throughput.
When you reach the point where adding more users no longer increases the throughput, it means the application cannot handle more load; that is the maximum number of users/requests per second your application can handle.
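To illustrate that plateau logic, here is a minimal Java sketch that walks through hypothetical (users, throughput) measurements and reports the level at which adding users no longer buys meaningful throughput. The numbers and the 5% cut-off are made up for illustration; real values come from your JMeter reports.

import java.util.LinkedHashMap;
import java.util.Map;

public class SaturationPoint {
    public static void main(String[] args) {
        // Hypothetical measurements: throughput (req/s) observed at each user level.
        Map<Integer, Double> usersToThroughput = new LinkedHashMap<>();
        usersToThroughput.put(10, 25.0);
        usersToThroughput.put(20, 49.0);
        usersToThroughput.put(40, 95.0);
        usersToThroughput.put(80, 130.0);
        usersToThroughput.put(160, 132.0); // throughput stops growing here

        double previous = 0.0;
        int previousUsers = 0;
        for (Map.Entry<Integer, Double> point : usersToThroughput.entrySet()) {
            double gain = point.getValue() - previous;
            // If doubling the users adds less than ~5% throughput, treat it as saturation.
            if (previous > 0 && gain < previous * 0.05) {
                System.out.printf("Saturation around %d users at ~%.0f req/s%n",
                        previousUsers, previous);
                return;
            }
            previous = point.getValue();
            previousUsers = point.getKey();
        }
        System.out.println("No saturation observed in the tested range.");
    }
}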

Related

How to create proper JMeter workload model

How do I create a proper workload model in JMeter if I only have the number of concurrent users and the response time as input/requirements? Do I need any additional information in order to load test the app?
If this is the only information I can get, how should I approach load testing using JMeter as the load testing tool? Any ideas, suggestions or advice?
A "proper workload" would be simulating real life usage of the system under test by "the number of concurrent users".
Just replicate real user behaviour at HTTP protocol level paying attention to pretty much everything i.e.
1. JMeter's network footprint must be an exact replica of the browser's network footprint when it comes to HTTP requests, so cross-check the number and the nature of the requests using the browser developer tools and JMeter's View Results Tree listener
2. Make sure to send the same/proper HTTP Headers
3. As a subset of point 2, pay attention to Cookies
4. Make sure to download embedded resources like a browser does
5. In addition to point 4, make sure to properly handle the cache
6. Use Timers wisely to simulate think times
7. If your users perform different actions, configure your JMeter test to distribute them the way real users are distributed
Once done, just run your test with the anticipated number of users for the desired duration and compare the real response time against the anticipated one
The average response time and other KPIs (such as throughput) are the result of executing the workload with a given number of virtual users. IMO, the best approach is to generate the load by varying the number of virtual users, as shown in the graph.
The graph also shows a key concept: performance (as measured by the KPIs) is not linear, although it might appear linear at a small number of virtual users.
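One way to reason about the relationship between virtual users and throughput is Little's Law: concurrent users ~= throughput * (response time + think time). The snippet below is a rough Java sketch with assumed numbers (a 50 req/s target, 0.4 s response time, 3 s think time), not measurements from any real test.

public class WorkloadModel {
    public static void main(String[] args) {
        double targetThroughput = 50.0;   // requests per second (assumed requirement)
        double avgResponseTimeSec = 0.4;  // assumed average response time
        double thinkTimeSec = 3.0;        // assumed pause between user actions

        // Little's Law: users needed to sustain the target throughput.
        double requiredUsers = targetThroughput * (avgResponseTimeSec + thinkTimeSec);
        System.out.printf("~%.0f concurrent users needed for %.0f req/s%n",
                requiredUsers, targetThroughput);

        // The inverse: throughput you can expect from N users while response time stays flat.
        int users = 100;
        double expectedThroughput = users / (avgResponseTimeSec + thinkTimeSec);
        System.out.printf("%d users ~ %.1f req/s (only while the system scales linearly)%n",
                users, expectedThroughput);
    }
}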

Is there a specific way to achieve the below scenarios in JMeter? If yes, how?

Can someone guide me on how I can achieve the scenarios below via JMeter?
1. Check if the system is able to process 100,000 random searches per hour
2. Check if the system can accept 100,000 transactions per minute - these are mostly form submissions
First of all you need to implement your test scenarios (searching and submitting forms) using HTTP Request samplers
The HTTP Request samplers can be:
Recorded using JMeter's HTTP(S) Test Script Recorder
Recorded using JMeter Chrome Extension
Created manually based on your application/endpoint specifications
Once you have the test project skeleton and have performed the necessary correlation of dynamic values and parameterization of dynamic parameters (such as usernames), you can start defining the workload; see the Building a Web Test Plan user manual chapter
Add as many virtual users as needed, run your test and see whether your application can handle the anticipated load.
Suggested scenario:
Increase the load gradually, i.e. start with 1 user and increment the number of users till the projected amount
Look at the Transactions per Second chart and the Active Threads Over Time chart. On a well-behaved system the throughput (number of requests per second) should increase as the number of users increases.
If you detect the point where you increase the load but the throughput doesn't increase, it means the system has reached its maximum performance. If that is sufficient, you can report the test as passed; otherwise you will need to investigate the root cause and either report it or fix it if you're capable of doing so.
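For the targets in the question it helps to convert them to per-second rates first. The Java sketch below does the arithmetic (100,000 per hour is about 28 searches per second, 100,000 per minute is about 1,667 submissions per second) and compares them against a measured throughput; the measured value is a placeholder, not a real result.

public class TargetRates {
    public static void main(String[] args) {
        double searchesPerHour = 100_000;
        double formsPerMinute = 100_000;

        double searchRate = searchesPerHour / 3600.0;   // ~27.8 searches per second
        double formRate = formsPerMinute / 60.0;        // ~1666.7 submissions per second

        System.out.printf("Search target:      %.1f req/s%n", searchRate);
        System.out.printf("Form submit target: %.1f req/s%n", formRate);

        // Compare with the throughput JMeter reports (hypothetical measured value).
        double measuredThroughput = 1200.0;
        System.out.println(measuredThroughput >= formRate
                ? "Form-submission target met"
                : "Form-submission target NOT met at the tested load");
    }
}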

Validate that a newly created server supports the same load

We are creating a new hosted server for one of our APIs on managed containers (Kubernetes) and we're trying to validate that it can handle at least the same traffic load.
We've started with one of the APIs, where we would need to handle at least 140k requests per minute, all endpoints combined.
To verify this, I created a simple JMeter test as follows:
-Test Plan
---Thread Group Endpoint1
-----HTTP Request -> a GET request with query params for /path1
---Thread Group Endpoint2
-----HTTP Request -> a GET request with query params for /path2
For a local test, I used the following setup:
Thread Groups Endpoint1 and Endpoint2 are set to 200 threads (users), ramp-up period of 1s, loop count = forever and duration 60s.
Using a Summary Report listener when running the test gets me a total of ~9300 # Samples.
Using this approach, is it safe to just increase the number of threads (users) for the Thread Groups until I reach the desired 140k requests per minute?
Note: I only used JMeter a little before, so I'm aware that the entire approach may be wrong, therefore any suggestions and steering to the right path are more than welcomed.
Your approach is viable as long as it represents real-life application usage. If the API has 2 endpoints with an equally/evenly distributed load, your setup is just fine. If there are more endpoints and some of them are used more than others, consider defining the workload accordingly, either using different Thread Groups or another distribution mechanism such as the Throughput Controller.
Increasing the number of threads is also fine; however, consider increasing the load gradually, i.e. increase the ramp-up time so your test has:
Arrivals phase
Time to hold the load
Ramp-down phase
This way you will be able to correlate various metrics (increasing response time, throughput, number of errors, etc.) with the increasing load. You will also be able to state the number of threads/requests per second at which the system reached its saturation point/breaking point and whether it recovers when the load goes back down.
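As a rough illustration of such a profile, the Java sketch below prints a (time, active threads) schedule with ramp-up, hold and ramp-down phases. The durations and the 400-thread ceiling are arbitrary examples; in JMeter itself an equivalent profile is usually configured with a plugin such as the Ultimate Thread Group rather than computed by hand.

public class LoadProfile {
    public static void main(String[] args) {
        int maxThreads = 400;                              // illustrative ceiling
        int rampUpSec = 300, holdSec = 600, rampDownSec = 120;

        for (int t = 0; t <= rampUpSec + holdSec + rampDownSec; t += 60) {
            int active;
            if (t <= rampUpSec) {
                // Arrivals phase: scale threads linearly up to the ceiling.
                active = (int) Math.round(maxThreads * (t / (double) rampUpSec));
            } else if (t <= rampUpSec + holdSec) {
                // Hold phase: keep the full load.
                active = maxThreads;
            } else {
                // Ramp-down phase: release threads linearly.
                int intoRampDown = t - rampUpSec - holdSec;
                active = (int) Math.round(maxThreads * (1 - intoRampDown / (double) rampDownSec));
            }
            System.out.printf("t=%4ds  threads=%d%n", t, active);
        }
    }
}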
Also make sure you're following JMeter Best Practices, as 2300-2500 requests per second is not something JMeter can support out of the box; you will need to do some tuning, at the very least increasing the JVM heap size allocated to JMeter.
You may not be able to achieve the desired 140k requests per minute using a single JMeter machine; in that case you'll need a distributed load testing approach.
Refer to: http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.html
Also, keeping the ramp-up period at 1 second will lead to a spike and an unrealistic load on the system, which will not give proper results unless you've pre-warmed your server; you should gradually increase the load as per the real/estimated traffic pattern.
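Purely as a back-of-the-envelope check of the numbers in the question (about 9300 samples in 60 s from 2 x 200 threads), the sketch below extrapolates linearly to the 140k/min target. Linear scaling almost never holds in practice, which is exactly why the gradual ramp-up above matters; treat the output as an order-of-magnitude hint only.

public class ScalingEstimate {
    public static void main(String[] args) {
        double samples = 9300;      // from the Summary Report in the question
        double durationSec = 60;
        int threadsUsed = 400;      // 200 + 200 from the two Thread Groups

        double achievedPerSec = samples / durationSec;   // ~155 req/s
        double targetPerSec = 140_000 / 60.0;            // ~2333 req/s

        double scaleFactor = targetPerSec / achievedPerSec;
        System.out.printf("Achieved: %.0f req/s, target: %.0f req/s%n",
                achievedPerSec, targetPerSec);
        // Naive linear extrapolation; a real system will saturate well before this.
        System.out.printf("Naive estimate: ~%.0f threads (%.0fx the current load), "
                + "likely spread over several JMeter engines%n",
                threadsUsed * scaleFactor, scaleFactor);
    }
}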

Recording application using template

I have recorded my web application through a template and I just want to confirm that the load test results I am getting are correct. Does simply increasing the number of users give proper results? Is it enough for load testing a web application?
First of all you need to ensure that your test does what it is supposed to be doing. Recorded tests can rarely be successfully replayed, so normally you should be acting as follows:
Add View Results Tree listener and run your test with 1 user. Inspect request and response details to verify your test steps.
Perform correlation and parametrization if required.
Correlation: the process of identifying and handling any dynamic parameters. Most often people use the Regular Expression Extractor for it (a plain-Java sketch of this kind of extraction is included at the end of this answer).
Parametrization: the process of making your test data-driven. For example, if your application assumes multiple authenticated users you need to store the credentials somewhere. The most commonly used test element for this is the CSV Data Set Config
Make your test realistic. Virtual users simulated by JMeter need to represent real users using real browsers as close as possible with all the related stuff: cookies, headers, cache, etc. See How To Make JMeter Behave More Like A Real Browser to learn how to configure JMeter to act closer to real users. Also real users need some time to "think" between operations so make sure you are using Timers to simulate this behaviour as well.
Only after you apply the above points should you add more virtual users. Again, run your test with 2-3 users and iterations to ensure your test functions as designed. Once you are happy with it you can increase the load, but don't overwhelm your server: increase the load gradually and check the impact of the increasing load on your application, i.e. how response time, throughput and the number of errors change as you increase the load. The same applies to decreasing the load; don't stop it all at once, decrease the number of virtual users gradually.
Building a Web Test Plan
Building an Advanced Web Test Plan
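For what correlation means in practice, here is a plain-Java sketch of the kind of extraction a Regular Expression Extractor performs: pulling a dynamic value out of one response so it can be sent with the next request. The csrf_token field and the HTML fragment are invented for illustration.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelationSketch {
    public static void main(String[] args) {
        // Hypothetical response body containing a dynamic hidden field.
        String responseBody =
                "<form action=\"/login\">"
              + "<input type=\"hidden\" name=\"csrf_token\" value=\"a1b2c3d4\">"
              + "</form>";

        Pattern pattern = Pattern.compile("name=\"csrf_token\" value=\"(.+?)\"");
        Matcher matcher = pattern.matcher(responseBody);

        if (matcher.find()) {
            String token = matcher.group(1);
            // In JMeter the extracted value would be stored in a variable
            // and referenced in the body of the subsequent login request.
            System.out.println("Extracted token: " + token);
        } else {
            System.out.println("Token not found - the next request would fail without correlation.");
        }
    }
}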

JMeter: stop test when response time reaches a threshold

We're running a test case for load testing over different servers. What we want to do is, given that test case, stop if we see a performance decrease, based on a response time threshold.
What we have now is a Thread Group defined, and inside it an HTTP Request plus a View Results in Table listener for output. What should I do to add this control?
Add a Duration Assertion and specify the threshold there.
In your Thread Group, set "Action to be taken after a Sampler error" to "Stop Test".
The above steps will stop your test after the first occurrence of a response time exceeding the threshold (a rough plain-Java analogue of this check is sketched at the end of this answer).
See How to Use JMeter Assertions in Three Easy Steps article for more information on how to conditionally set pass/fail criteria in your JMeter test.
P.S.
Remove the View Results in Table listener (or disable it) during load test execution as it consumes a lot of resources.
Run your load test in non-GUI mode, as the JMeter GUI is not designed for running the actual load test and may become a bottleneck under more or less severe load.
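For reference, here is a plain-Java analogue of what the Duration Assertion does for a single request: time the call and flag it when it exceeds the threshold. The URL and the 1500 ms threshold are placeholders, not values from the question.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class DurationCheck {
    public static void main(String[] args) throws Exception {
        long thresholdMs = 1500;  // placeholder response time threshold
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))   // placeholder URL
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        if (elapsedMs > thresholdMs) {
            // JMeter would mark the sample as failed; with "Stop Test" configured the whole run stops.
            System.out.printf("FAILED: %d ms > %d ms threshold (status %d)%n",
                    elapsedMs, thresholdMs, response.statusCode());
        } else {
            System.out.printf("OK: %d ms (status %d)%n", elapsedMs, response.statusCode());
        }
    }
}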
