I have 5 scenarios for my web application. The ratio in which these scenarios should be hit is 10:25:30:5:30. I want to run the test for 5 hours. In the first hour the number of users should be 100, in the next hour it should be 80, then 200, in the 4th hour 500, and in the last hour 300. How can I achieve this workload modelling in JMeter?
The easiest way to implement your model is to use 5 different Thread Groups, or better Ultimate Thread Groups, as it is much easier to control the user arrival rate with them and simulate the desired distribution.
If for some reason you need to keep all the requests for different scenarios in a single Thread Group, you have the following choices:
Throughput Controller
Switch Controller
Weighted Switch Controller
P.S.:
The Ultimate Thread Group and Weighted Switch Controller are available via the JMeter Plugins project.
See the Running JMeter Samplers with Defined Percentage Probability article for an explanation of different approaches to running "weighted" tasks.
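If you go the 5-Thread-Group route, the per-hour thread counts for each group are just the hourly totals split by the 10:25:30:5:30 ratio. Below is a minimal sketch of that arithmetic only (plain Java, not JMeter code; the class name is purely illustrative):

```java
// Sketch of the arithmetic: how many users each of the 5 scenarios should get per hour
// when the hourly totals are split by the 10:25:30:5:30 ratio, e.g. as input for
// 5 separate (Ultimate) Thread Group schedules.
public class WorkloadSplit {
    public static void main(String[] args) {
        int[] ratio = {10, 25, 30, 5, 30};             // scenario weights, sum = 100
        int[] usersPerHour = {100, 80, 200, 500, 300}; // hour 1..5 totals from the question

        for (int hour = 0; hour < usersPerHour.length; hour++) {
            System.out.printf("Hour %d (total %d users): ", hour + 1, usersPerHour[hour]);
            for (int s = 0; s < ratio.length; s++) {
                // with these particular numbers every split is a whole number of users
                double users = usersPerHour[hour] * ratio[s] / 100.0;
                System.out.printf("scenario %d -> %.1f  ", s + 1, users);
            }
            System.out.println();
        }
    }
}
```

So in hour 1 the groups would get 10, 25, 30, 5 and 30 users respectively, in hour 4 they would get 50, 125, 150, 25 and 150, and so on.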
In JMeter (v5.2.1) I have a JSR223 Sampler with a custom script that generates a payload and sends it to an endpoint. The endpoint is capable of receiving 100000 transactions per second. However, from JMeter I find that the maximum I can achieve is around 4000 requests per second per thread.
I tried increasing the thread count to 10 and then 100, but the result stays the same, i.e. JMeter achieves a maximum of about 4k requests per second.
I also tried running two Thread Groups in parallel to increase the transaction rate, but this did not push the rate beyond the 4k per second mark, no matter what I do.
Is there any way I can increase this request rate to 100k per second?
The test plan has one Thread Group containing a Simple Controller with one JSR223 Sampler inside it, followed by a Summary Report at test plan level.
I have followed all the best practices highlighted in some of the articles on Stack Overflow.
Thank you.
If you really followed the JMeter Best Practices you would be using JMeter 5.5, as the very first one reads:
16.1 Always use latest version of JMeter
The performance of JMeter is being constantly improved, so users are highly encouraged to use the most up to date version.
Given that you have applied all the JMeter performance and tuning tips, done the same for the JVM, and still cannot reach more than 4000 transactions per second, you can consider the following options:
Switch from the JSR223 Sampler to a Java Request sampler or create your own JMeter plugin; compiled Java code will perform better than a script (see the sketch after this list)
Consider switching to distributed mode of JMeter test execution; if you can reach 4000 requests per second from one machine, you will need 25 machines to reach 100000 requests per second
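For the first option, a minimal skeleton of a Java Request sampler could look like the sketch below. The class name, the "endpoint" parameter and its placeholder value are just illustrative assumptions; the payload generation and sending logic from your JSR223 script would go where the comment indicates. Compile it, drop the jar into JMeter's lib/ext folder and select the class in a Java Request sampler:

```java
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

// Hypothetical skeleton of a Java Request sampler implementation
public class PayloadSampler extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("endpoint", "http://localhost:8080/ingest"); // placeholder value
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        String endpoint = context.getParameter("endpoint");
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            // port the payload generation and sending logic from the JSR223 script here;
            // compiled Java avoids per-iteration scripting overhead
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.getMessage());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}
```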
How can I achieve the transactions per hour below? I tried to control login by using a Once Only Controller, but transactions per hour for login are still more than 70. How do I handle this?
Overall User count is 70
Transaction per hour for login -- 70
Transaction per hour for homepage -- 100
If you need JMeter to execute an exact number of transactions, no more and no less, go for the Throughput Controller: set it to Total Executions mode with 70 executions for the login transaction.
Similarly you can configure 100 homepage transactions.
In order to evenly distribute the 70/100 transactions across the one-hour time frame you can play with the ramp-up period and the Constant Throughput Timer.
Be aware that you won't be able to achieve different throughputs for different samplers under the same Thread Group, as JMeter will always wait for the previous sampler to finish before executing the next one, hence it will act at the speed of the slowest sampler.
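For example, assuming the transactions are spread evenly across exactly one hour, 70 login transactions per hour works out to 70 / 60 ≈ 1.17 samples per minute and 100 homepage transactions to 100 / 60 ≈ 1.67 samples per minute in the Constant Throughput Timer's target throughput field.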
More information: Running JMeter Samplers with Defined Percentage Probability
A couple of other options:
Use the Arrivals Thread Group. This TG will allow you to configure the desired average throughput (ATP); the TG will instantiate the required threads needed to achieve the ATP goal (no guessing).
Use the Concurrency Thread Group together with the Throughput Shaping Timer. This option will also autoscale the number of vusers; see more information here.
Be aware that the downside of these options is that vusers are instantiated using a fixed pacing. In general, this is not how real users interact with an application in the real world.
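If memory serves, when the Concurrency Thread Group is combined with the Throughput Shaping Timer, the plugin exposes a schedule feedback function (something like ${__tstFeedback(shaperName, minConcurrency, maxConcurrency, spareThreads)}) that you put into the Thread Group's Target Concurrency field so that the number of running threads is adjusted automatically to meet the shaper's schedule; check the plugin documentation for the exact signature.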
I have created a performance test script as below. I am running 4 Thread Groups in parallel (even though there are 14 Thread Groups, only 4 are enabled and I am running only those 4). I have used the default Thread Groups.
I have used a Flow Control Action to simulate user think time and set it to 3 seconds.
My requirement is to achieve a throughput of 6.6/second. What is the best way to achieve it? Also, does user think time have any impact on throughput?
As per the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So basically throughput is the number of requests which JMeter was able to execute within the test duration. If you introduce an artificial delay of 3 seconds, it will make the overall throughput lower.
The other (and main) factor is your application response time, because JMeter waits for the previous request to finish before executing a new one, so there could be 2 options:
The number of threads in your Thread Groups is not sufficient to create the desired throughput. If this is the case, just add more threads (virtual users) in the Thread Group(s).
The number of threads in your Thread Groups is too high and you're getting a higher throughput than you expect. If this is the case, you can slow JMeter down by adding a Constant Throughput Timer and specifying the desired number of requests per minute. If you need 6.6 requests per second, that means 396 requests per minute.
Also remember that JMeter itself must be able to send requests fast enough, so make sure to follow the JMeter Best Practices.
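As a rough back-of-the-envelope check you can estimate how many threads a closed-workload Thread Group needs via Little's Law: threads ≈ target throughput × (response time + think time). The 0.5-second response time in the sketch below is an assumed figure, not something from your test, so replace it with your measured value:

```java
// Back-of-the-envelope thread count estimate; the response time is an assumption
public class ThreadEstimate {
    public static void main(String[] args) {
        double targetThroughput = 6.6;  // requests per second (the stated requirement)
        double thinkTime = 3.0;         // seconds, from the Flow Control Action
        double responseTime = 0.5;      // seconds, ASSUMED - replace with your measured value

        // Little's Law: concurrency = throughput * time each request "occupies" a thread
        double threadsNeeded = targetThroughput * (responseTime + thinkTime);
        System.out.printf("~%.0f threads needed for %.1f req/s%n",
                Math.ceil(threadsNeeded), targetThroughput);
        // prints ~24 threads for 6.6 req/s with these numbers
    }
}
```

With a Constant Throughput Timer set to 396 samples per minute on top of a slightly over-provisioned Thread Group, the timer then trims the pace down to the desired 6.6/second.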
My suggestion is to consider using the Arrivals Thread Group. This TG will allow you to configure the desired average throughput (ATP); the TG will instantiate the required threads needed to achieve the ATP goal.
As for the Think Time (TT), it should be a business requirement that emulates the pauses that users take when using the application in the real world.
I have a task to load test an application which needs to respond to 2350 users per second. For that I have set up something like this in JMeter:
I have added a Thread group. In that I have set:
Number of threads (users): 2350
Ramp-up period: 1 Second
Loop Count: 1
Will it serve my purpose of load testing the application with 2350 users?
It will, but only if the response time for each virtual user is exactly 1 second.
There are 2 common load patterns; for implementing both of them you will need Timers.
Actually it might be the case that you don't need as many as 2350 threads to simulate 2350 users, as real-life users don't hammer the server non-stop; they need some time to think between requests. Besides, page loading time also needs to be considered.
Let's imagine you have 2350 users. Each user performs an action every 15 seconds. Page loading time is 5 seconds. So each user will be able to hit the server 3 times per minute, and 2350 users will produce 7050 requests per minute, which stands for only 117.5 requests per second. If this is what you're looking for, consider adding a Constant Timer or Uniform Random Timer.
If you need to simulate 2350 requests per second, not users, you need to handle it a little differently. There are 2 Timers which are designed to set an exact "throughput", i.e. a number of requests per time unit. They are:
Constant Throughput Timer
Throughput Shaping Timer - an advanced version of the Constant Throughput Timer, available via JMeter Plugins project.
Remember that the above timers can only pause threads; they won't kick off new virtual users if you don't provide enough at the Thread Group level, so make sure you have at least as many threads as the rate you are trying to simulate, or better 2x more in your virtual pocket just in case. Also check out the JMeter tuning tips from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure, as JMeter's default configuration isn't something you can use to create such a load.
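As a worked example for this case: 2350 requests per second corresponds to 2350 × 60 = 141000 samples per minute in the Constant Throughput Timer. And if, say, the average response time were around 500 ms (an assumed figure), Little's Law suggests roughly 2350 × 0.5 ≈ 1175 threads would be busy at any moment, so with the 2x rule of thumb above you would still configure somewhere around 2350 threads, just paced by the timer instead of the ramp-up.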
The setup you described just creates 2350 parallel users (2350 separate connections), but doesn't guarantee that every thread will complete in exactly 1 second. The ramp-up period defines how fast all threads will start sending requests. So, in the case you described, JMeter will do the following:
Create 2350 separate threads.
The difference between 1st and last thread start will be 1 second (approximately).
Every thread will be executed only once (loop count).
For real scenarios where you need a specific throughput over a continuous period of time, it's better to use the Constant Throughput Timer. It controls the number of requests sent by the thread(s) and adjusts the delay between requests when necessary to meet the value you defined. So your real throughput doesn't strictly depend on the total number of threads (users); sometimes fewer users can send requests faster (it depends on your application).
To monitor your throughput while the test is running, just add a Summary Report to your test plan.
Moreover, for this specific scenario (2350 users) it can be difficult to generate so many requests in 1 second from a single machine. In this case, you need to use distributed load with several JMeter slaves and 1 master.
We have 6 user scenarios that we are trying to test on our application concurrently. We are constantly tuning the percentage of threads going to each scenario and the total number of threads. In order to make these changes quickly, I've put all the scenarios under 1 thread group, and in that thread group I have 6 Throughput Controllers set up that total up to 100% with 'per user' unchecked; each scenario's samplers (with think times) are then placed inside these Throughput Controllers.
As far as I can tell, this is accomplishing the goal and I see the proper user distribution going through our system, but I'm not sure if I should be breaking these out into 6 different thread groups instead. If so, how should I be controlling the percentage of threads going to each scenario?
Your solution is a good one. It will do what you expect.
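For reference, the "Per User" checkbox controls how the percentage is evaluated: when it is checked, each individual thread works towards the configured percentage on its own; when it is unchecked (as in your setup), the percentage is calculated globally across all threads, which is usually what you want when modelling an overall scenario mix.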