I'm currently using Apache JMeter to run load tests on the REST interfaces of my backend application. The test plan currently uses two thread groups and runs those in loops. Both thread groups are sending particular REST requests at a certain throughput using a constant throughput timer. The first thread group randomly picks one of several different REST requests, while the other uses one fixed REST request.
With this setup, I'm able to fire load at my application at a constant throughput. Now I would like to simulate that with a day/night load profile: from e.g. 8:00 am to 5:00 pm the load stays at a certain constant throughput, while for the rest of the time (the night) the load drops to a lower throughput or even stops entirely. Then at 8:00 am the next morning it rises again.
Do you guys know if such a load profile can be simulated using Apache JMeter? Do you also know which constructs could be used to set up something like that?
Thanks for any hint.
Regards
Timo
You can consider switching to the Throughput Shaping Timer, which seems to be exactly what you're looking for. See the Special Property Processing chapter for the "load profiles" bit.
You can get the Throughput Shaping Timer element via the JMeter Plugins Manager.
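For example, a day/night profile could be passed on the command line via the plugin's load_profile property. This is only a rough sketch: the throughput figures, the file paths and the assumption that you start the run at 8:00 am are placeholders, and the exact segment grammar should be checked against the plugin's documentation:

    jmeter -n -t /path/to/dayNightPlan.jmx -l /path/to/results.jtl -Jload_profile="const(100,540m) const(10,900m)"

Here const(100,540m) keeps roughly 100 requests/second for the 9 working hours and const(10,900m) drops to roughly 10 requests/second for the remaining 15 hours (to stop completely at night you could try a 0 value instead). Also make sure the Thread Group has enough threads to reach the target rate, as the timer can only delay requests, not create extra ones.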
In JMeter I have a scenario like:
Load tested with 4000 users and a 1 hour duration
759,965 requests were made, out of which one request failed; on average 18,894.13 requests were made per second.
This was the earlier scenario and I want to set up the same scenario again with the above information. Can someone guide me on how to set up the environment and also how to get the results? I have designed my script using correlation with the help of the Regular Expression Extractor.
For the normal Thread Group the configuration would be something like:
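A rough text sketch of those settings (the ramp-up value is only a placeholder; see the note on ramp-up below):

    Number of Threads (users): 4000
    Ramp-up period (seconds):  300
    Loop Count:                Infinite
    Specify Thread lifetime:   Duration (seconds) = 3600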
It would also be a good idea to use some ramp-up period so the load would increase gradually and you could correlate increasing load with other metrics like response time or transactions per second.
You might also want to use one of the Custom Thread Groups, which can be installed as JMeter Plugins; they provide an easy visual way to define the number of threads, test duration, ramp-up, ramp-down, time to hold the load, possible spikes, etc.
Once you have defined your desired workload you should run your test in command-line non-GUI mode. With regard to the test results, the easiest option is to generate an HTML Reporting Dashboard.
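As a sketch (the paths are placeholders), the non-GUI run and the dashboard generation can be combined into a single command:

    jmeter -n -t /path/to/testplan.jmx -l /path/to/results.jtl -e -o /path/to/dashboard

-n runs the test without the GUI, -e generates the HTML report at the end of the run, and -o points to an empty (or not yet existing) folder where the dashboard will be written.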
Can someone guide me on how I can achieve the scenarios below via JMeter?
1. Check if the system is able to process 100,000 random searches per hour
2. Check if the system can accept 100,000 transactions per minute (this is more like form submissions)
First of all, you need to implement your test scenarios (searching and submitting forms) using HTTP Request samplers.
The HTTP Request samplers can be:
Recorded using JMeter's HTTP(S) Test Script Recorder
Recorded using JMeter Chrome Extension
Created manually based on your application/endpoint specifications
Once you have the test project skeleton and have performed the necessary correlation of dynamic values and parameterization of dynamic parameters like usernames, you can start defining the workload; see the Building a Web Test Plan user manual chapter.
Add as many virtual users as needed, run your test and see whether your application can handle the anticipated load.
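If you also want to pin the request rate at the targets from the question rather than only adding virtual users, one option (my suggestion here, not something prescribed above) is a Constant Throughput Timer per Thread Group; its target is expressed in samples per minute, so the two goals translate roughly to:

    Searches:         100,000 per hour   -> Target throughput ≈ 1,667 samples/minute  (≈ 28 requests/second)
    Form submissions: 100,000 per minute -> Target throughput = 100,000 samples/minute (≈ 1,667 requests/second)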
Suggested scenario:
Increase the load gradually, i.e. start with 1 user and increment the number of users up to the projected amount
Look at the Transactions per Second chart and the Active Threads Over Time chart. On a well-behaved system the throughput (number of requests per second) should increase as the number of users increases.
If you reach a point where you increase the load but the throughput doesn't increase, it means the system has hit its maximum performance. If that is sufficient, you can report the test as passed; otherwise you will need to investigate the root cause and either report it or fix it if you're able to.
We are creating a new hosted server for one of our APIs on managed containers (Kubernetes) and we're trying to validate that it can handle at least the same traffic load as before.
We've started with one of the APIs, where we would need to handle at least 140k requests per minute, all endpoints combined.
To verify this, I created a simple JMeter test as follows:
-Test Plan
---Thread Group Endpoint1
-----HTTP Request -> a GET request with query params for /path1
---Thread Group Endpoint2
-----HTTP Request -> a GET request with query params for /path2
For a local test, I used the following setup:
Thread Groups Endpoint1 and Endpoint2 are set to 200 threads (users), ramp-up period of 1s, loop count = forever and duration 60s.
Using a Summary Report listener when running the test gets me a total of ~9300 # Samples.
Using this approach, is it safe to just increase the number of threads (users) for the Thread Groups until I reach the desired 140k requests per minute?
Note: I have only used JMeter a little before, so I'm aware that the entire approach may be wrong; therefore any suggestions and steering towards the right path are more than welcome.
Your approach is viable as long as it represents real-life application usage. If the application has 2 endpoints with an equally/evenly distributed load, your setup is just fine. If there are more endpoints and some of them are used more than others, consider defining the workload accordingly, either using different Thread Groups or another distribution mechanism such as the Throughput Controller.
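For illustration, a single Thread Group with Throughput Controllers in Percent Executions mode could look like this (the 70/30 split is only a placeholder for whatever distribution your production traffic actually shows):

    -Test Plan
    ---Thread Group (all endpoints)
    -----Throughput Controller (Percent Executions: 70.0)
    -------HTTP Request -> /path1
    -----Throughput Controller (Percent Executions: 30.0)
    -------HTTP Request -> /path2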
Increasing the number of threads is also fine; however, consider increasing the load gradually, i.e. increase the ramp-up time so your test has:
Arrivals phase
Time to hold the load
Ramp-down phase
This way you will be able to correlate various metrics like increasing response time, throughput, number of errors, etc. with the increasing load. You will also be able to state the number of threads/requests per second at which the system reached its saturation/breaking point, and whether it recovers when the load goes back down.
Also make sure you're following JMeter Best Practices, as 2,300-2,500 requests per second is not something JMeter can support out of the box; you will need to do some tuning, at the very least increasing the JVM heap size allocated to JMeter.
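A minimal sketch, assuming a recent JMeter version whose startup scripts honor the HEAP environment variable (older versions require editing the jmeter/jmeter.bat script instead, and the -Xmx value is only a placeholder to be sized to the machine running the test):

    HEAP="-Xms1g -Xmx8g" jmeter -n -t /path/to/testplan.jmx -l /path/to/results.jtl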
You may not be able to achieve the desired 140k requests per minute using a single JMeter machine; in that case you'll need a Distributed Load Testing approach.
Refer to: http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.html
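A minimal sketch of a distributed run, assuming jmeter-server is already running on the listed engines (the hostnames are placeholders):

    jmeter -n -t /path/to/testplan.jmx -R host1,host2,host3 -l /path/to/results.jtl

-R starts the test on the listed remote engines; note that each engine executes the full thread count from the plan, so the total load is roughly the plan's thread count multiplied by the number of engines.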
Also, keeping the ramp-up period at 1 second will lead to a spike and an unrealistic load on the system, which will not give proper results unless you've pre-warmed your server; you should gradually increase the load as per the real/estimated traffic pattern.
I am new to load testing and would like to configure my JMeter settings for the requirement below. My understanding is that threads are different from requests per second. If so, what should the values in the Thread Group be for the requirement below?
"Initial load 20 request/second, increase load with 100 request/second for each minute.
Perform load test until we see an increase in latency "
You should put something very high into the Thread Group and use one of the following approaches to define your load pattern:
Constant Throughput Timer - it comes bundled with JMeter
Throughput Shaping Timer or Concurrency Thread Group - available via the JMeter Plugins project
In order to automatically stop the test when latency exceeds a threshold you can use the AutoStop Listener; again, it comes with JMeter Plugins.
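As a sketch of the Throughput Shaping Timer route (the syntax follows the plugin's load_profile property; the 1020 ceiling is an arbitrary placeholder, since the run is meant to be cut short by the AutoStop Listener anyway):

    jmeter -n -t /path/to/testplan.jmx -l /path/to/results.jtl -Jload_profile="step(20,1020,100,60s)"

step(20,1020,100,60s) starts at 20 requests/second and adds 100 requests/second every 60 seconds until it reaches 1020.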
In general, latency is a networking-related metric, so even if your application is as slow as a snail you can still see low or even zero latency; I would therefore recommend considering response time and/or transactions-per-second metrics as well.
I am new to JMeter and have scripted login authentication. The project requirement is to see the load for 10K concurrent users.
The script is working fine, but to enhance it I need suggestions on how to do the following things:
How can I see how much time / the average time the server takes to load a page?
Which thread group (I studied the Ultimate Thread Group but it is not very clear to me) should be used to see the maximum load the server can sustain in a particular time? For that, the ramp-up time needs to be adjusted (correct me if I am wrong).
Please tell me how to adjust the ramp-up time with respect to users/waiting time etc.; in short, how to make incremental/proportional observations of the server performance (there is no gateway error etc.).
If you're looking for your server's capacity boundaries, I would rather stick to "requests per second" than to "concurrent users", as users may work with different applications in different ways.
For instance, if it is an image gallery, the majority of users will be browsing images and doing this rather frequently, for instance requesting the next image every 2 seconds. Given an image load time of 1 second, that is an image every 3 seconds, or 20 images per minute. In this case 10,000 users will create a load of 3,333 requests per second.
If your site is a collection of articles, users will need some more time to read an article, e.g. 2 minutes. In that case 10,000 users will create a load of 83 requests per second.
JMeter provides the Constant Throughput Timer out of the box; you can set the desired target throughput in requests per minute using it. And since you're already aware of the JMeter Plugins project, it also offers the Throughput Shaping Timer, a more advanced test element with extended functionality.
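Taking the image-gallery example above, a rough sketch of the Constant Throughput Timer settings would be (the calculation mode is an assumption; pick whichever mode matches your plan):

    Target throughput (in samples per minute): 200000.0   (3,333 requests/second * 60)
    Calculate Throughput based on:             all active threads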
If you go the "throughput" way, it doesn't matter which Thread Group you choose, as the load will be orchestrated by the aforementioned timers.
See What is the Relationship Between Users and Hits Per Second? article for more detailed explanation.
Once you have designed your test scenario, run it in non-GUI mode (as JMeter's GUI is very resource-intensive):
jmeter -n -t /path/to/your/testplan.jmx -l /path/to/resultsfile.jtl
When the test finishes, open the JMeter GUI, add an Aggregate Report listener and inspect the min/max/average response times per request.