How to send requests in batches using JMeter

I have a scenario where I need to send requests in batches of a user-defined size (for example 1K, 5K, 10K) with a specified interval between each batch.
Assume the interval between batches is 30 seconds and I have to send N requests per batch, for example 1K. If sending the 1K requests finishes within 10 seconds, then for the next 20 seconds no request should go out. Once the interval is over, another batch of 1K should be sent.
Input: data is flowing from a CSV; the 2nd batch should ideally start from row 1001.
Options tried: Constant Throughput Timer. With this I'm restricting the speed of the requests, which I do not want to do.
Can someone suggest another option I can try?

Add JSR223 Samplers before and after your requests. Your test plan should look like this:
JSR223 Sampler 1
Your requests
JSR223 Sampler 2
Add this code to your first JSR223 Sampler:
interval = 30000 //Specify the desired interval here
startTime = System.currentTimeMillis()
vars.put("startTime", startTime.toString())
vars.put("interval", Long.toString(interval))
Add this code to your second JSR223 Sampler:
startTime = Long.parseLong(vars.get("startTime"))
interval = Long.parseLong(vars.get("interval"))
endTime = System.currentTimeMillis()
duration = endTime - startTime
if (duration < interval) {
    sleepTime = interval - duration
    log.info("Sleeping for ${sleepTime} ms")
    Thread.sleep(sleepTime)
}
This will make your threads sleep until the interval is over (if they've already completed their work).
If you need more precision you can modify this solution to make all of your threads respect the same time interval.
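Outside JMeter, the arithmetic the two samplers implement can be sketched as standalone Java (class and method names here are illustrative, not part of JMeter):

```java
// Sketch of the batch-pacing logic: sleep only for whatever is left of the
// interval after the batch has finished; if the batch overran, don't sleep.
public class BatchPacing {

    // Returns how long (in ms) to sleep so each batch occupies a full interval.
    static long remainingSleep(long intervalMs, long batchDurationMs) {
        long remaining = intervalMs - batchDurationMs;
        return remaining > 0 ? remaining : 0; // batch overran: no extra sleep
    }

    public static void main(String[] args) {
        // 1K requests finished in 10 s of a 30 s interval -> sleep 20 s more
        System.out.println(remainingSleep(30_000, 10_000)); // prints 20000
        // batch took longer than the interval -> next batch starts immediately
        System.out.println(remainingSleep(30_000, 35_000)); // prints 0
    }
}
```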

You may also use a Beanshell/JSR223 Timer (placed after all your samplers in the thread group) instead of the sampler or post-processor proposed above, as well as a pre-processor (before all your samplers in the thread group) to set the start-time variable instead of the first sampler.
In such a timer, you simply return the delay to be applied, e.g. return (interval - (endTime - startTime));

Related

How to add a timer in a JMeter script that starts at the first call, polls the status, stops once the first request is completed, and adds assertions

I am doing load testing on generating report and the requirement is like the report should get generated within 10mins.
It includes one HTTP post request for report generation, and then there is a status check call, which keeps on checking the status of the first request. Once the status of first request changes to complete then the report generation is successful.
Basically I want to start the timer at the beginning of the first request, stop the timer once the status is complete, and add an assertion: if the time is less than 10 minutes the test passes, else it fails.
I tried multiple approaches, like using a Transaction Controller and adding all requests under it, but this gives the average response time of the requests under it rather than the sum.
I also tried a Beanshell listener, extracting the response time for every request and adding them all...
var responseTime;
props.put("responseTime", sampleResult.getTime());
log.info(" responseTime :::" + props.get("responseTime"));
log.info("time: "+ sampleResult.getTime());
props.put("responseTime", (sampleResult.getTime()+props.get("responseTime")));
log.info("new responseTime :::" + props.get("responseTime"));
However, I am not interested in adding the response time of these requests, instead I need to just know what is the time elapsed from when the report is triggered and till it gives status as complete.
All the JMeter timers add delays; I don't wish to add a delay, I need an actual timer.
Any help is highly appreciated.
Thank you
Since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting, mainly for performance reasons, so I'll provide one possible solution in Groovy.
Add JSR223 PostProcessor as a child of the HTTP Request which kicks off the report generation and put the following code there:
vars.putObject('start', System.currentTimeMillis())
Add JSR223 Sampler after checking the status and put the following code there:
def now = System.currentTimeMillis()
def start = vars.getObject('start')
def elapsed = now - start
if (elapsed >= 600000) { // 600000 ms = 10 minutes
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Report generation took: ' + (elapsed / 1000 / 60) + ' minutes instead of 10')
}
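The elapsed-time check can also be sketched as standalone Java, assuming the same 10-minute limit (class and method names are illustrative):

```java
// Sketch of the SLA check done in the JSR223 Sampler: the report-generation
// flow passes only if it completed in under 10 minutes.
public class ReportSla {
    static final long LIMIT_MS = 600_000; // 10 minutes in milliseconds

    // True when the elapsed time from trigger to "complete" is within the SLA.
    static boolean withinSla(long startMs, long endMs) {
        return (endMs - startMs) < LIMIT_MS;
    }

    public static void main(String[] args) {
        System.out.println(withinSla(0, 540_000)); // 9 minutes  -> true
        System.out.println(withinSla(0, 660_000)); // 11 minutes -> false
    }
}
```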

JMeter - execute a maximum of 60 requests in 5 minutes with a 5-second pause, until the request succeeds

I have one request which triggers every 5 seconds, for a maximum of 5 minutes, until it gets a 200 response code. So ideally that request executes 12 times a minute, and 60 times total if it fails every time.
My problem is how to define that maximum of 60 requests.
Here is my configuration.
I have taken one While Controller:
${__javaScript(parseInt(vars.get("Response_code"))!=200)}
Inside that While Controller these components are placed:
While Controller
User Defined Variable (Response_code)
Counter (Starting value: 1, Increment:1, Maximum Value:60)
My HTTP Request
JSR223 PostProcessor (vars.put("Response_code",prev.getResponseCode());)
Constant Throughput Timer (Targer throughput: 12.0)
Where should I put the condition so that if my HTTP request succeeds on e.g. the 3rd attempt it goes ahead to the next request, and otherwise repeats the request every 5 seconds for up to 5 minutes?
I am using JMeter 5.5.
You can amend your While Controller's condition to look like:
${__javaScript((parseInt(vars.get("Response_code"))!=200 && ${counter} < 60),)}
this way it will loop until response code is 200 but not more than 60 times.
Instead of Constant Throughput Timer you can use Flow Control Action sampler to introduce static delay of 5000 ms.
There is no need to have the Counter; the While Controller exposes a special variable, which in your case will be ${__jm__While Controller For Thumbnail-1 QA1.pdf__idx}
More information: Using the While Controller in JMeter
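The retry logic the While Controller implements can be sketched as standalone Java, with a stand-in probe instead of a real HTTP request (all names here are illustrative):

```java
import java.util.function.IntSupplier;

// Sketch of the retry loop: keep probing until the response code is 200,
// but give up after a fixed maximum number of attempts (60 in the question).
public class RetryLoop {

    // Returns the attempt number that succeeded, or -1 after maxAttempts failures.
    static int retryUntil200(IntSupplier probe, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (probe.getAsInt() == 200) {
                return attempt;
            }
            // In JMeter the 5 s pause between attempts comes from the
            // Flow Control Action sampler; a Thread.sleep(5000) would go here.
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] codes = {503, 503, 200}; // stand-in for successive response codes
        int[] i = {0};
        System.out.println(retryUntil200(() -> codes[i[0]++], 60)); // prints 3
        System.out.println(retryUntil200(() -> 500, 60));           // prints -1
    }
}
```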

JSR223 Timer strange(?) behavior

I'm using the JSR223 Timer (JMeter 5.4.1) with the Groovy language, trying to add delays/pauses to my threads.
I'm following the instructions by BlazeMeter (How to Easily Implement Pacing).
The strange(?) behavior is that the actual delay is double what is required.
The script is as follows:
Long pacing = 5000 - prev.getTime();
Integer iPacing = pacing != null ? pacing.intValue() : null;
log.info("Transaction Pacing: " +String.valueOf(iPacing));
vars.put("myDelay", String.valueOf(iPacing));
return iPacing;
I get the duration of the sampler action, then calculate "myDelay" as the difference from a base duration of 5,000 ms. myDelay is a variable I use in the Flow Control Action sampler.
Now the strange result:
the actual delay I get is TWICE the calculated one. In this example the calculated delay is 5,000 ms, but the actual delay is 10,000 ms.
And here is the really strange part:
if I comment out the return iPacing, the delay is 5,000 ms as required (with a warning message in the log file).
See the output below.
Why does the Flow Control Action sampler add the myDelay and iPacing values?
The first block: iPacing is returned; the overall pause is myDelay + iPacing.
The second block: iPacing is commented out; the delay is myDelay only.
Your delay is twice as long simply because you're setting it twice.
This statement:
return iPacing;
will create a delay before each sampler in the JSR223 Timer's scope.
So there is no need to use the Flow Control Action sampler, because you're already creating the delay in the JSR223 Timer.
In general, pacing is not implemented in JMeter because there is an easier way of creating load in terms of X requests per second: the Constant Throughput Timer and friends.
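The doubling can be sketched as standalone Java arithmetic: the timer's return value and the Flow Control Action pause are independent delays that simply add up (class and method names are illustrative):

```java
// Sketch of why the observed pause doubled: the JSR223 Timer's return value
// and the Flow Control Action's ${myDelay} pause are applied independently,
// so the total pause is their sum.
public class DoubleDelay {

    static long observedPause(long timerReturnMs, long flowControlMs) {
        return timerReturnMs + flowControlMs;
    }

    public static void main(String[] args) {
        long pacing = 5_000; // calculated delay when the sampler took ~0 ms
        // both the timer return and the Flow Control Action are active
        System.out.println(observedPause(pacing, pacing)); // prints 10000
        // return iPacing commented out: only the Flow Control Action pauses
        System.out.println(observedPause(0, pacing));      // prints 5000
    }
}
```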

Manually calculating the total duration of a JMeter test plan from the log file

I want to manually calculate the duration of a JMeter test plan from the CSV log file. I was following the calculation of last timestamp minus first timestamp, and it looks correct if I am running one thread group. For more than one thread group the samplers will be repeating, and I think that is not the right way to calculate the duration. I tried using a Transaction Controller, thinking that its timestamp would give me the duration of all contained samples, but got confused when I saw multiple Transaction Controller entries in the log file for more than one thread group. I am a newcomer to performance testing and JMeter. Any help will be appreciated.
JMeter provides a variable which holds the test start timestamp: ${TESTSTART.MS}
You could use tearDown Thread Group which is designed to run post-test actions. Under tearDown Thread Group you can use Beanshell Sampler to print test duration to jmeter.log file as follows:
long start = Long.parseLong(vars.get("TESTSTART.MS"));
long end = System.currentTimeMillis();
log.info("Test duration: " + (end - start) / 1000 + " seconds");
By the end of the test you should see something like:
2015/06/17 22:20:15 INFO - jmeter.util.BeanShellTestElement: Test duration: 300 seconds
See How to use BeanShell: JMeter's favorite built-in component guide for more Beanshell scripting tips and tricks.
If you only have the result file, another option is to open the .jtl results file with Excel, Google Sheets, or an equivalent, sort the timestamp column (usually the first one), and subtract the first-row value from the last-row value; this way you'll get the test duration in milliseconds.
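The last-minus-first-timestamp calculation can be sketched as standalone Java, with hard-coded timestamps standing in for the parsed first column of the .jtl file (names are illustrative):

```java
import java.util.Arrays;

// Sketch of computing test duration from .jtl timestamps: the duration is
// the newest timestamp minus the oldest one, regardless of row order.
public class JtlDuration {

    static long durationSeconds(long[] timeStamps) {
        long first = Arrays.stream(timeStamps).min().getAsLong();
        long last = Arrays.stream(timeStamps).max().getAsLong();
        return (last - first) / 1000; // milliseconds -> seconds
    }

    public static void main(String[] args) {
        // three sample timestamps spanning 300 000 ms
        long[] ts = {1434572415000L, 1434572500000L, 1434572715000L};
        System.out.println(durationSeconds(ts)); // prints 300
    }
}
```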

Total time taken by JMeter to execute the given load

I am performing a load test with these parameters:
threads=4
ramp_up_period=90
loop_count=60
So according to the above numbers, my assumption is that each of the four threads will be created 22.5 seconds apart, and this 4-thread cycle will be repeated 60 times.
Below is the load test summary report:
According to JMeter manual ramp up period is :
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up
period is 100 seconds, then JMeter will take 100 seconds to get all 10
threads up and running. Each thread will start 10 (100/10) seconds
after the previous thread was begun. If there are 30 threads and a
ramp-up period of 120 seconds, then each successive thread will be
delayed by 4 seconds.
So according to the above, the approximate total time for executing the load test with the mentioned thread group parameters is:
TotalTime = ramp_up_period * loop_count
which in my case evaluates to 90 * 60 = 5400 seconds, but according to the summariser the total time is 74 seconds.
The JMeter version is 2.11.
Is there a problem in my understanding, or is there some issue with JMeter?
Initially JMeter will start 1 thread, which will be doing what is under your Loop Controller. Per the manual quoted above, each successive thread starts 22.5 seconds (90 / 4) after the previous one, so the 2nd thread joins at 22.5 seconds, the 3rd at 45 seconds, and the 4th at 67.5 seconds.
From then on, all 4 threads will be doing "what is under your Loop Controller".
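The ramp-up arithmetic from the quoted manual (each successive thread starts ramp-up / threads seconds after the previous one) can be sketched as standalone Java (names are illustrative):

```java
// Sketch of JMeter's ramp-up schedule: with T threads and a ramp-up of R
// seconds, thread i (0-based) starts i * R / T seconds into the test.
public class RampUp {

    static double startOffsetSeconds(int threadIndex, int threads, double rampUpSeconds) {
        return threadIndex * rampUpSeconds / threads;
    }

    public static void main(String[] args) {
        // 4 threads, 90 s ramp-up: starts at 0.0, 22.5, 45.0, 67.5 seconds
        for (int i = 0; i < 4; i++) {
            System.out.println(startOffsetSeconds(i, 4, 90));
        }
    }
}
```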
There is no way to determine in advance how long the whole test will take, especially under load. If you need a load test to last approximately N seconds you can use the Duration input under Scheduler in the Thread Group.
If you want to forcefully stop the test if certain conditions are met there are 2 more options:
Use Test Action Sampler
Use Beanshell Sampler
Example Beanshell code (assumed to run in a separate thread group, in an endless loop, with a reasonable delay between firing events):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

long currenttime = System.currentTimeMillis();
long teststart = Long.parseLong(vars.get("TESTSTART.MS"));

if (currenttime - teststart > Long.parseLong(props.get("test_run_time").toString())) {
    try {
        // send "StopTestNow" to JMeter's shutdown port (default 4445)
        DatagramSocket socket = new DatagramSocket();
        byte[] buf = "StopTestNow".getBytes("ASCII");
        InetAddress address = InetAddress.getByName("localhost");
        DatagramPacket packet = new DatagramPacket(buf, buf.length, address, 4445);
        socket.send(packet);
        socket.close();
    } catch (Throwable ex) {
        log.error("Failed to send shutdown message", ex);
    }
}
TotalTime would only be that if the threads were working sequentially, without concurrency. In a multi-threaded environment it can happen that thread 1 is already performing its second call while thread 3 is still firing up.
