I'm using a JSR223 Timer (JMeter 5.4.1) with the Groovy language, and trying to add delays/pauses to my threads.
I'm following the instructions by BlazeMeter (How to Easily Implement Pacing).
The strange(?) behavior is that the actual delay is double the required value.
The script is as follows:
Long pacing = 5000 - prev.getTime(); // time remaining after the previous sampler
Integer iPacing = pacing != null ? pacing.intValue() : null;
log.info("Transaction Pacing: " + String.valueOf(iPacing));
vars.put("myDelay", String.valueOf(iPacing)); // used by the Flow Control Action sampler
return iPacing;
I get the duration of the sampler action, then calculate "myDelay" as the difference from a base duration of 5,000 ms. myDelay is a variable I use in the Flow Control Action sampler.
Now the strange result:
The actual delay I achieve is TWICE the calculated value. In this example, the calculated delay is 5,000 ms, but the actual delay is 10,000 ms.
Now here is the really strange part:
If I comment out the return iPacing statement, the delay is 5,000 ms as required (with a warning message in the log file).
See the output below.
Why does the Flow Control Action sampler add the myDelay and iPacing values together?
The first block - iPacing is returned. The overall pause is myDelay + iPacing.
The second block - iPacing is commented out. The delay is myDelay only.
Your delay is TWICE as long simply BECAUSE you're setting it TWICE.
This statement:
return iPacing;
will create a delay BEFORE each sampler in the JSR223 Timer's scope.
So there is no need to use the Flow Control Action sampler because you're creating the delay in the JSR223 timer ALREADY.
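If you do want to keep the scripted pacing, the timer alone is enough; a minimal sketch (assuming the same 5,000 ms pacing target):

// JSR223 Timer - the returned value IS the delay JMeter applies before the sampler
long target = 5000 // desired pacing in ms
long remaining = target - prev.getTime()
return remaining > 0 ? remaining : 0 // never return a negative delay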
In general, pacing is not implemented in JMeter because there is an easier way of creating the load in terms of X requests per second: the Constant Throughput Timer and friends.
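For example, to pace each thread at one request every 5 seconds, a Constant Throughput Timer with a target throughput of 12 samples per minute (in "this thread only" mode) achieves it without any scripting.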
I am doing load testing on report generation, and the requirement is that the report should be generated within 10 mins.
It includes one HTTP POST request for report generation, and then there is a status check call, which keeps checking the status of the first request. Once the status of the first request changes to complete, the report generation is successful.
Basically I want to start the timer at the beginning of the first request, stop the timer once the status is complete, and add an assertion: if the time is less than 10 mins the test passes, else it fails.
I tried multiple approaches, like using a Transaction Controller and adding all requests under it. But this doesn't give the sum but the average response time of all the requests under it.
Also, I tried a Beanshell listener, extracting the response time for every request and adding them all...
// Beanshell Listener: accumulate each sampler's response time in a property
long previousTotal = Long.parseLong(props.getProperty("responseTime", "0"));
log.info("time: " + sampleResult.getTime());
props.put("responseTime", String.valueOf(sampleResult.getTime() + previousTotal));
log.info("new responseTime :::" + props.get("responseTime"));
However, I am not interested in adding up the response times of these requests; instead I just need to know the time elapsed from when the report is triggered until it gives the status as complete.
All the JMeter timers add delays; I don't wish to add a delay, I need a stopwatch-style timer instead.
Any help is highly appreciated.
Thank you
Since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting, mainly for performance reasons, so I'll provide one possible solution in Groovy.
Add a JSR223 PostProcessor as a child of the HTTP Request which kicks off the report generation, and put the following code there:
vars.putObject('start', System.currentTimeMillis())
Add a JSR223 Sampler after the status check and put the following code there:
def now = System.currentTimeMillis()
def start = vars.getObject('start') // timestamp stored by the PostProcessor
def elapsed = now - start
if (elapsed >= 600000) { // 10 minutes in milliseconds
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Report generation took: ' + (elapsed / 1000 / 60) + ' minutes instead of 10')
}
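Optionally, if you also want the measured time visible in listeners, you could record it in the sampler's response data as well (a small optional addition inside the same script, after computing elapsed):

SampleResult.setResponseData('Report generation took ' + (elapsed / 1000) + ' seconds', 'UTF-8')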
Example setup:
I executed a login script in JMeter.
1 user, ramp-up time 1, loop count 5, Cache and Cookie Managers added, "Clear cache on each iteration" checked.
Without a timer, below are the times taken for a user in each iteration.
After adding a Constant Timer of 3000 ms, below are the values obtained.
Can someone please explain the results after adding the Constant Timer of 3000 ms?
According to me, the result should be approx. 2 + 30 sec = around 32 sec for each iteration.
JMeter doesn't include Pre-Processor, Post-Processor, and Timer durations in the Sample Result by default.
If you want to include these 3 extra seconds added by the Timer, put your Sampler under a Transaction Controller and tick both the "Generate parent sample" and "Include duration of timer and pre-post processors in generated sample" boxes:
I have a scenario where I need to send requests in batches of a user-defined size (for example 1K, 5K, 10K, etc.) with a specified interval between each batch.
Assume the interval between batches is 30 seconds and I have to send N requests per batch, for example 1K. If sending the 1K requests finishes within 10 seconds, then for the next 20 seconds no request should go. Once the interval is over, another batch of 1K should be sent.
Input: data is flowing from a CSV; the 2nd batch should ideally start from row 1001.
Options tried: Constant Throughput Timer. With this I'm restricting the speed of the requests, which I do not want to do.
Can someone help me with another option I can try?
Add JSR223 Samplers before and after your requests. Your test plan should look like this:
JSR223 Sampler 1
Your requests
JSR223 Sampler 2
Add this code to your first JSR223 Sampler:
def interval = 30000L // specify the desired interval here, in ms
def startTime = System.currentTimeMillis()
vars.put("startTime", startTime.toString())
vars.put("interval", interval.toString())
Add this code to your second JSR223 Sampler:
def startTime = Long.parseLong(vars.get("startTime"))
def interval = Long.parseLong(vars.get("interval"))
def endTime = System.currentTimeMillis()
def duration = endTime - startTime // how long this thread's batch work took
if (duration < interval) {
    def sleepTime = interval - duration
    log.info("Sleeping for ${sleepTime} ms")
    Thread.sleep(sleepTime)
}
This will make your threads sleep until the interval is over (if they've already completed their work).
If you need more precision you can modify this solution to make all of your threads respect the same time interval.
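One way to do that (a sketch; the 'batchStart' property name is an illustration, and resetting it between batches, e.g. with a Synchronizing Timer or a tearDown step, is left out):

// JSR223 Sampler 1 (sketch): the first thread to arrive records the shared batch start
synchronized (props) {
    if (props.getProperty('batchStart') == null) {
        props.put('batchStart', String.valueOf(System.currentTimeMillis()))
    }
}

// JSR223 Sampler 2 (sketch): every thread sleeps until the same shared deadline
long batchStart = Long.parseLong(props.getProperty('batchStart'))
long remaining = 30000 - (System.currentTimeMillis() - batchStart) // 30 s interval
if (remaining > 0) {
    Thread.sleep(remaining)
}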
You may also use a Beanshell/JSR223 Timer (after all your samplers in a Thread Group) instead of the sampler or post-processor proposed above,
as well as a pre-processor (before all your samplers in a Thread Group) to set the start time variable, instead of the first sampler.
In such a timer, you simply return the delay to be applied (like return (interval - (endTime - startTime));).
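A minimal sketch of such a timer, assuming "startTime" and "interval" were stored earlier as in the samplers above:

// JSR223 Timer: the returned value is applied as the delay before the next sampler
long startTime = Long.parseLong(vars.get("startTime"))
long interval = Long.parseLong(vars.get("interval"))
long remaining = interval - (System.currentTimeMillis() - startTime)
return remaining > 0 ? remaining : 0 // don't return a negative delay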
I have one Transaction Controller which has one HTTP Request in my JMeter test plan. The transaction name and URL come from a CSV file. At the end, the total execution is divided into 5 different transactions.
Test plan:
Test Plan
- Thread Group
 - User Defined Variables
The total sample executions will be 8,000-10,000. Now what I want: if the total sample failures reach 100, my JMeter test should stop the test execution.
I have added a User Defined Variable named "thread" with value "0", and the below code in a Beanshell Post-Processor:
int count= Integer.parseInt(vars.get("thread"));
if (prev.getErrorCount()==1){
count++;
System.out.println(count);
vars.put("thread",Integer.toString(count));
}
if (count==100){
System.out.println("Reached to max number of errors in load test, stopping test");
log.info ("Reached to max number of errors in load test, stopping test");
prev.setStopTestNow(true);
}
Somehow the code is not working as expected. When the error count reaches 100, the JMeter test is not stopped; the test stops when the error count reaches around 130. I am not sure how to fix the above code.
Can someone please let me know what the issue in the above code is?
Variables are specific to 1 thread while Properties are shared by all threads.
See:
https://jmeter.apache.org/api/org/apache/jmeter/util/JMeterUtils.html#getProperty(java.lang.String)
https://jmeter.apache.org/api/org/apache/jmeter/util/JMeterUtils.html#setProperty(java.lang.String,%20java.lang.String)
Ensure you synchronize access.
Another option is to use a custom Java class as a Singleton and increment its value.
Here is an example implementation using Beanshell (prefer JSR223 + Groovy for performance):
A setUp Thread Group that resets the counter on test start:
A Beanshell PostProcessor that updates the counter:
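Since both elements boil down to a shared, synchronized counter, here is a minimal Groovy sketch of the same idea (the 'errorCount' property name and the limit of 100 are assumptions):

// setUp Thread Group - JSR223 Sampler: reset the shared counter at test start
props.put('errorCount', '0')

// JSR223 PostProcessor on the sampler(s): count errors across ALL threads
if (prev.getErrorCount() > 0) {
    synchronized (props) { // shared lock so concurrent increments aren't lost
        int count = Integer.parseInt(props.getProperty('errorCount', '0')) + 1
        props.put('errorCount', String.valueOf(count))
        if (count >= 100) {
            log.info('Reached max number of errors in load test, stopping test')
            prev.setStopTestNow(true)
        }
    }
}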
Note that when you call setStopTestNow, test threads are interrupted but do not stop at exactly that moment unless you have some timer (which is usually the case):
prev.setStopTestNow(true);
Problem
I am modelling stress tests in JMeter 2.13. My idea is to stop the test after a certain response time cap is exceeded, which I test with a Duration Assertion node.
I do not want, however, to stop the test execution after the first such failure - it could be a single event in an otherwise stable situation. I would like the execution to fail after n such assertion errors, so I can be relatively sure the system got stressed and the average response is around what I defined as the cap, which is where I want to stop the whole thing.
What I tried
I am using the Stepping Thread Group from JMeter Plugins. There I could use a checkbox to stop the test after an error, but it does that on the first occurrence. I found no other node in the documentation that could model this, so I'm guessing there's a workaround I'm not seeing right now.
I would recommend switching to Beanshell Assertion as it is more flexible and allows you to put some custom code in there.
For instance you have 3 User Defined Variables:
threshold - maximum sampler execution time. Any value exceeding the threshold will be counted
maxErrors - maximum amount of errors, test will be stopped if reached and/or exceeded
failures - variable holding assertion failure count. Should be zero in the beginning.
Example Assertion code:
long elapsed = SampleResult.getTime();
long threshold = Long.parseLong(vars.get("threshold"));
if (elapsed > threshold) {
    // this sample exceeded the cap - bump the failure counter
    int failureCount = Integer.parseInt(vars.get("failures"));
    failureCount++;
    int maxErrors = Integer.parseInt(vars.get("maxErrors"));
    if (failureCount >= maxErrors) {
        // too many slow samples - mark this one failed and stop the whole test
        SampleResult.setSuccessful(false);
        SampleResult.setResponseMessage(failureCount + " requests failed to finish in " + threshold + " ms");
        SampleResult.setStopTest(true);
    } else {
        vars.put("failures", String.valueOf(failureCount));
    }
}
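Keep in mind that vars is per-thread, so each thread counts only its own slow samples; for a global count across all threads, store the counter in props with synchronized access, as in the property-based counter shown earlier.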
Example of the assertion at work:
See How to use BeanShell: JMeter's favorite built-in component guide to learn more about extending your JMeter tests with scripting.
Close but not exactly what you're asking for: the AutoStop JMeter plugin. See the documentation here. You can configure it to stop your test if there are n% failures in a certain amount of time.
If you want a specific number of errors, you can use a Test Action sampler combined with an If Controller: if (errorCount == n), the Test Action stops the test.
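A sketch of that combination, assuming the "failures" and "maxErrors" variables from the assertion above: set the If Controller condition (evaluated as JavaScript by default) to ${failures} >= ${maxErrors}, and inside it place a Test Action sampler with Target = All Threads and Action = Stop.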