Load testing by progressively increasing the number of concurrent users

I have 500 users in my CSV file and I am doing load testing with JMeter. I want to run the script for the first 100 users. Once the execution for 100 concurrent users/threads is done, I want to automatically increase the number of concurrent users to 200, and so on.
How can I achieve this?

You can use the Constant Throughput Timer to set the throughput according to your test scenario. Despite its name it doesn't have to be "constant": you can put a variable or property into the "Target Throughput" input, so you are able to modify the target load on the fly.
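A minimal sketch of that approach, assuming a property named stepThroughput (the name and values are illustrative, not from the original answer): put ${__P(stepThroughput,6000)} into the timer's "Target Throughput" field and raise the property from a JSR223 element in a separate control thread group once each load step completes:
// JSR223 (Groovy) in a one-thread control thread group, executed once per load step
def current = (props.getProperty("stepThroughput") ?: "6000") as int   // requests per minute
props.put("stepThroughput", (current + 6000).toString())               // raise the target for the next step
log.info("Target throughput is now " + props.getProperty("stepThroughput") + " requests/minute")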
Another option, which could be easier and more flexible, is the Throughput Shaping Timer, available via JMeter Plugins.

Related

How to distribute load over different time periods in JMeter

In my application I want to hit 20,000 requests in 10 hours, but I want to distribute the load over different time periods with a different number of requests in each: for example, 2,000 requests in the first hour, 3,000 in the second hour, 1,000 in the third, and so on. How can I split the load across different time periods with different numbers of requests?
The easiest option is the Throughput Shaping Timer; its schedule lets you define one row per time period, implementing the setup you described.
It's a good idea to use the Concurrency Thread Group in combination with the Throughput Shaping Timer; they can be connected via the Feedback Function so JMeter is able to start extra threads if the current number is not sufficient to reach/maintain the desired number of requests per second (see the example below).
Both are JMeter Plugins and can be installed using the JMeter Plugins Manager.
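For example (the timer name "shaper" and the concurrency limits below are illustrative), the Feedback Function goes into the Concurrency Thread Group's "Target Concurrency" field:
${__tstFeedback(shaper,1,100,10)}
where the arguments are the Throughput Shaping Timer name, the starting concurrency, the maximum concurrency, and the number of spare threads to keep in reserve.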
Another solution could be using the Constant Throughput Timer.
N.B. although the Timer is called the Constant Throughput timer, the throughput value does not need to be constant. It can be defined in terms of a variable or function call, and the value can be changed during a test. The value can be changed in various ways:
You could set the throughput using a property or variable.
Calculate the throughput values you need at different time intervals and set the property when the time is reached.
props.put("currentTPM", 120)
You will have some work in checking the duration since the test is started.
You may create a separate thread group to control the throughput. Rename the thread group name to TG-TM. Set the number of threads to 1 and loop count to infinite. Set the duration of the thread group.
// JSR223 (Groovy) script for the TG-TM control thread group: publish the target
// throughput for the current hour as a property, then sleep for an hour so the
// next loop iteration picks up the next value
def throughputPerHour = [2000, 3000, 1000, 5000, 4000, 5000]
def currentIndex = vars.get("__jm__TG-TM__idx").toInteger()   // iteration counter of the TG-TM thread group
if (currentIndex < throughputPerHour.size()) {
    def currentTPH = throughputPerHour[currentIndex]
    def currentTPM = currentTPH.intdiv(60)                    // Constant Throughput Timer expects requests per minute
    props.put("currentTPM", currentTPM.toString())
    Thread.sleep(60 * 60 * 1000)                              // hold this rate for one hour
}
Note: add a startup delay to the other thread groups to ensure the throughput property is already set when they start.
This solution can be extended to work with the Beanshell Server, where you could change the throughput values (JMeter properties) remotely.
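To tie it together, the Constant Throughput Timer's "Target Throughput" field would then read the property with a default, for example:
${__P(currentTPM,60)}
so the timer falls back to 60 requests per minute (an arbitrary default for illustration) until the TG-TM thread group publishes the first currentTPM value.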

Limit values of custom JMeter Properties at runtime

We have a distributed JMeter setup as described here: How to Change JMeter's Load During Runtime.
The test plan (JMX file) is provided by the user, hence we don't know the property names used in it. At runtime, the user can provide property names and values that we pass directly to the JMeter setup through a Beanshell script.
In this setup, can we put a limit/cap on the values of certain JMeter properties (which can potentially affect our provided resources) that can be changed by the user at runtime?
For example, we don't want the total RPS of the system to cross, say, 300 RPS at any time. Or, if the user has enabled runtime changes to the number of threads, we don't want it to exceed, say, 100 on any machine at any time.
We want to avoid storing any user-defined property names in our system to provide such validation.
You can inject a Constant Throughput Timer, a Precise Throughput Timer, or a Throughput Shaping Timer into the user-provided .jmx script and set your maximum allowed throughput there (a sketch follows below).
Even if there are multiple throughput timers in the test plan, JMeter will apply the throughput of the slowest one. So the test won't run too fast if the original RPS is lower than your maximum, and vice versa: no matter what RPS the user asks for, it will never exceed the 300 RPS you define.
The same approach applies to the number of threads in the Thread Group.
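As an illustration of the injection idea (a sketch only: the file name, the 300 RPS cap, and the calcMode value are assumptions), a .jmx file is plain XML, so a standalone Groovy script can append a Constant Throughput Timer under the Test Plan's hashTree so that it applies to every thread group:
import groovy.xml.XmlNodePrinter
import groovy.xml.XmlParser

def jmxFile = new File("user-test-plan.jmx")               // hypothetical path to the user-provided plan
def plan = new XmlParser().parse(jmxFile)
def planTree = plan.hashTree[0].hashTree[0]                // the hashTree holding the Test Plan's children

// Constant Throughput Timer capped at 300 requests/second = 18000 requests/minute,
// counted across all active threads (verify the calcMode value for your JMeter version)
def timer = new XmlParser().parseText('''
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="RPS cap (300/s)" enabled="true">
  <intProp name="calcMode">1</intProp>
  <doubleProp>
    <name>throughput</name>
    <value>18000.0</value>
    <savedValue>0.0</savedValue>
  </doubleProp>
</ConstantThroughputTimer>
''')
planTree.append(timer)
planTree.append(new XmlParser().parseText("<hashTree/>"))  // every JMX element is followed by a hashTree

jmxFile.withWriter("UTF-8") { w -> new XmlNodePrinter(new PrintWriter(w)).print(plan) }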

How to create Burst test in Jmeter without using parallel threads?

I have a requirement to create a simple burst test in JMeter:
10 requests -> 20 requests -> 30 requests
In single worker mode.
Throughput: 20 ms
Anticipated response time: <= 200 ms
How can we achieve this with JMeter without using parallel threads?
I am looking for a simple solution.
Thanks a lot
Add the Ultimate Thread Group plugin to JMeter.
Add an Ultimate Thread Group component with a schedule of 3 rows.
Set Start Threads Count to 10, 20, and 30.
Give each row a different start time so the request batches are executed at different times (see the example schedule below).
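An illustrative schedule for the three bursts (the timing values are assumptions; adjust the pacing to your needs), with each row written out as the Ultimate Thread Group columns:
// Hypothetical Ultimate Thread Group schedule, one entry per GUI row:
// [Start Threads Count, Initial Delay (s), Startup Time (s), Hold Load For (s), Shutdown Time (s)]
def schedule = [
    [10,  0, 0, 10, 0],   // burst of 10 threads at t = 0 s
    [20, 30, 0, 10, 0],   // burst of 20 threads at t = 30 s
    [30, 60, 0, 10, 0]    // burst of 30 threads at t = 60 s
]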
"Ultimate" means there will be no need in further Thread Group plugins. The features that everyone needed in JMeter and they finally available:
infinite number of schedule record
separate ramp-up time, shutdown time, flight time for each schedule record
and, of course, trustworthy load preview graph
Consider using the Throughput Shaping Timer and Concurrency Thread Group combination, which provides a flexible way of defining load patterns.
They can be connected together using the Feedback Function so JMeter will be able to kick off more threads in order to reach the desired throughput (number of requests per second) if the current number is not enough.
Both can be installed using the JMeter Plugins Manager.

JMeter number of threads in a run

I am quite new to JMeter and am trying to do a performance test of my application. I want to generate a 100-requests-per-second scenario, however my server takes 3-4 seconds to respond to every request. I am running my test for 1 minute, which means the number of requests fired should be 60k within that time span. However, JMeter actually waits for the response before it sends the next request, which is not what I am looking for.
How can I make sure that JMeter sends new requests at 100 req/sec without waiting for the response, so that the number of requests fired per minute is 60k?
I am trying to use the Constant Throughput Timer with 60k requests per minute, however that is not helping (test plan screenshot omitted).
EDIT
I have configured the Thread Group and the Throughput Shaping Timer as shown in the (omitted) screenshots. So ideally I should get the number of samples as 3000? I'm still not getting that.
Make sure you provide enough threads (virtual users) under the Thread Group: "vanilla" JMeter won't kick off any extra threads if the actual throughput is lower than the target you specify in the Constant Throughput Timer. A rough sizing rule is sketched below.
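As a back-of-the-envelope check (this is Little's Law, not part of the original answer): the number of threads needed to sustain a rate is roughly the rate multiplied by the response time, so 100 requests/second with ~4-second responses needs on the order of 400 threads.
// Rule-of-thumb thread count: concurrency ~= target RPS * average response time
def targetRps = 100              // desired requests per second
def avgResponseTimeSec = 4       // observed response time in seconds
def requiredThreads = (int) Math.ceil(targetRps * avgResponseTimeSec)
println "Need roughly ${requiredThreads} threads to sustain ${targetRps} req/s"   // prints 400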
Another solution would be using the Concurrency Thread Group along with the Throughput Shaping Timer. They can be tied together via the Feedback Function, so with these test elements JMeter will start more threads whenever the current number isn't enough to reach the desired requests-per-second rate.
You can install both using the JMeter Plugins Manager.
My suggestion is to consider using the Arrivals Thread Group. This thread group allows you to configure the desired average throughput; it will instantiate the threads required to achieve that throughput goal.

Configure JMeter settings based on requests per second

I am new to load testing and would like to configure my JMeter settings for the requirement below. My understanding is that threads are different from requests per second. If so, what should the Thread Group values be for this requirement?
"Initial load of 20 requests/second, increasing the load by 100 requests/second each minute.
Perform the load test until we see an increase in latency."
You should put a very high number of threads into the Thread Group and use one of the following approaches to define your load pattern:
Constant Throughput Timer - it comes bundled with JMeter
Throughput Shaping Timer or Concurrency Thread Group - available via the JMeter Plugins project (an example schedule follows this list)
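For instance (the row values below are assumptions matching the stated ramp), the Throughput Shaping Timer schedule could be filled in row by row as Start RPS / End RPS / Duration:
// Hypothetical Throughput Shaping Timer schedule, one entry per GUI row:
// [Start RPS, End RPS, Duration (s)]
def rpsSchedule = [
    [20,  20,  60],    // minute 1: hold 20 requests/second
    [120, 120, 60],    // minute 2: +100, hold 120 requests/second
    [220, 220, 60],    // minute 3: +100, hold 220 requests/second
    [320, 320, 60]     // keep adding +100 rows until latency starts to climb
]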
In order to automatically stop the test when latency exceeds a threshold you can use the AutoStop Listener, which again comes from JMeter Plugins.
In general, latency is a networking-related metric, so even if your application is slow as a snail you can still see low or even zero latency; I would therefore recommend considering response time and/or transactions-per-second metrics as well.
