How to get the exact loading time of a website in JMeter

I just want to know if this scenario is possible in JMeter:
I will create a website using a Docker image and then I have to check the exact accessibility time (from creation up to launching/checking the site). Once the requirement is met, the timer should stop and give me the exact time at which the site is actually up. Also, in the View Results Tree, I only want to keep the last successful status; it should disregard the failed results. This is a sample of the Thread Group I am using (screenshot not included).

If you really want to discard non-successful results you can do this using JSR223 Assertion and the following Groovy code:
if (!prev.isSuccessful()) {
    prev.setIgnore()
}
where prev is a shorthand for the parent SampleResult and the setIgnore() function tells listeners to ignore the result in case of failure.
More information: Scripting JMeter Assertions in Groovy - A Tutorial
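If you also need the moment when the site actually becomes available, one option (a minimal sketch, assuming you store a start timestamp in a JMeter property, for instance 'siteCheckStart', before you start polling) is to extend the same assertion:

// JSR223 Assertion sketch: discard failed polls and log how long it took
// until the first successful response; 'siteCheckStart' is a hypothetical
// property holding System.currentTimeMillis() taken before polling started
if (!prev.isSuccessful()) {
    prev.setIgnore()
} else {
    long start = Long.parseLong(props.getProperty('siteCheckStart'))
    log.info('Site became reachable after ' + (System.currentTimeMillis() - start) + ' ms')
}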

Related

JMeter - Pause the request if it gives an error

I'm using Apache JMeter to send thousands of HTTP requests with 3 seconds of delay in between. The response body is JSON and starts with {"errors":[ ], ...}. If there is an error it will be in the [ ]. If there is no error then [ ] will be empty.
I want JMeter to pause for a short period of time if it receives an error, and try the request again, so that it adds an additional buffer when needed.
Do I need a script for this? How can I achieve this?
You can get the number of errors by adding a JSON JMESPath Extractor as a child of the Sampler which returns the JSON.
It will extract the number of entries in the errors JSON array and store it in an errors JMeter Variable.
Then you can use an If Controller to check whether the number of errors is above zero; this can be done using the __jexl3() function like:
${__jexl3(${errors} > 0,)}
and finally you can introduce a delay using the Flow Control Action sampler set to Pause for the desired number of milliseconds.
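A minimal sketch of the whole setup (assuming JMeter 5.2 or later, where the JSON JMESPath Extractor is available; field names may differ slightly between versions):

JSON JMESPath Extractor:
    Names of created variables: errors
    JMESPath expressions: length(errors)
    Match No.: 1
    Default Values: 0
If Controller condition: ${__jexl3(${errors} > 0,)}
Flow Control Action: Action = Pause, Duration = 5000 (milliseconds)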
Solution 1
Add a JSR223 PostProcessor at the Test Plan level, Thread Group level or under a specific sampler, depending on your requirement.
Add the following code into the script area to check for an error and introduce a delay if one is found.
int delayOnErrorInMillis = 5000
// parse the response and pause only when the "errors" array is not empty
def json = new groovy.json.JsonSlurper().parseText(prev.getResponseDataAsString())
if (json.errors) {
    log.info("ERROR !")
    sleep(delayOnErrorInMillis)
}

JMeter - How can I pass 2 conditions in a While loop in JMeter

How can I pass 2 conditions to a While loop in JMeter? The conditions are:
The request should run in a loop until a "Pass" response comes.
The While loop should run for only 1 minute.
Condition 1 is working fine; however, I am unable to implement condition 2.
I have tried running the While Loop inside a Runtime Controller, but the issue is that if the "Pass" response comes before 1 minute, the rest of the test stops.
I also tried the other way round (Runtime Controller inside the While Loop), which leads to numerous executions of the request even after receiving the "Pass" response.
I will appreciate any leads on this. Thanks
Add a JSR223 Sampler just before the While Controller and store the current time into a JMeter Variable using the following code:
SampleResult.setIgnore()
vars.putObject('whileLoopStart', System.currentTimeMillis())
Use the following __groovy() function as the While Controller's condition:
${__groovy(!vars.get('your_variable').equals('Pass') && ((System.currentTimeMillis() - vars.getObject('whileLoopStart')) < 60000),)}
This way the While Controller will keep looping until:
your_variable becomes equal to Pass
or 60 seconds elapse
whichever comes first
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It
Here is another possible solution.
You can achieve the desired outcome with the following components (see the sketch after these steps):
Runtime Controller
If Controller
Flow Control Action
Set the runtime (duration) in the Runtime Controller.
Put the condition you already have in the While Controller into the If Controller, inverted so that it fires when the "Pass" response arrives.
Select Break Current Loop in the Flow Control Action to exit the Runtime Controller.
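A rough outline of that structure (a sketch; your_variable is assumed to hold the extracted Pass/Fail flag):

Runtime Controller (Runtime: 60 seconds)
    HTTP Request (the sampler that returns the Pass/Fail flag)
    If Controller (condition: ${__groovy('Pass'.equals(vars.get('your_variable')),)})
        Flow Control Action (Break Current Loop)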

JMeter - how to simulate a failure based on the result of a condition from a PostProcessor

I have a situation where I check the response data, and if a specific value exists I would like to simulate a test failure even though the HTTP response code was 200. For example, in a Beanshell PostProcessor I have:
if ((prev.getResponseDataAsString().indexOf(Z2) >= 0) || (matches > 1)) {
    System.out.println(ctx.getCurrentSampler().getName() + " --> Failed ....")
}
I know how to do it when I want to set the result to success (prev.setResponseOK();), but how do I do it if I want to set it to fail, so the GUI shows red and not green?
Thank you
See the note in the JSR223 Sampler documentation:
Unlike the BeanShell Sampler, the JSR223 Sampler does not set the ResponseCode, ResponseMessage and sample status via script variables. Currently the only way to change these is via the SampleResult methods:
SampleResult.setSuccessful(true/false)
prev is a SampleResult object so you can mark it as failed:
prev.setSuccessful(false)
prev - (SampleResult) - gives access to the previous SampleResult
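Putting it together, a minimal JSR223 PostProcessor sketch in Groovy (here 'Z2' stands for whatever marker your own script looks for, and the response code/message lines are optional extras):

// mark the sampler as failed when the unwanted marker is present,
// even though the server answered with HTTP 200
if (prev.getResponseDataAsString().contains('Z2')) {
    prev.setSuccessful(false)
    prev.setResponseCode('500')
    prev.setResponseMessage('Marker Z2 found in the response')
}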
This sounds like a use case for a Response Assertion, which you can use to conditionally set pass/fail criteria for a sampler based on the presence or absence of certain patterns in the response data.
For example, an HTTP Request sampler can be marked as failed despite a 200 status code when the anticipated data is absent from the response.
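For this question's case (fail when the Z2 marker is present), a sketch of the Response Assertion configuration could look like:

Response Assertion:
    Apply to: Main sample only
    Field to Test: Text Response
    Pattern Matching Rules: Contains, with the Not checkbox ticked (so the assertion fails when the pattern is found)
    Patterns to Test: Z2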
See How to Use JMeter Assertions in Three Easy Steps for more details.
With regards to the JMeter Best Practices, which you are violating by using a Beanshell PostProcessor:
You should use built-in JMeter test elements where possible.
If you have to resort to scripting, make sure to choose the best-performing option, which is unfortunately not Beanshell.

JMeter Test Plan summary report PASS/FAIL

I'm stuck finding a solution to one problem with JMeter. I need to put some logic into my Test Plan that produces a simple PASS/FAIL report calculated from the test case execution results, which is then put into the generated JTL report afterwards. For instance:
All tests passed - Test Plan result=PASS
One or more tests failed - Test Plan result=FAIL
The majority of suitable options assume using third-party tools, to wit:
you can run the JMeter test in Jenkins and use the Performance plugin, which allows you to conditionally fail the build if the number of failed requests exceeds a specified threshold
you can run the JMeter test using the Taurus tool as a wrapper; it has a flexible and powerful Pass/Fail Criteria Subsystem allowing you to set different criteria definitions to mark the test as passed or failed. If the build fails, the Taurus process returns a non-zero exit code.
If above approaches are not suitable for any reason please elaborate your question and explain how and where you would like to see this "FAIL" or "PASS" result.
Add one BeanShell Listener and one BeanShell Sampler at the end of your Thread Group and put this in the Listener:
if (sampleEvent.getResult() instanceof org.apache.jmeter.protocol.http.sampler.HTTPSampleResult) {
    if (!sampleEvent.getResult().isResponseCodeOK()) {
        vars.put("res", "-1"); // JMeter variables can only hold Strings
    }
}
And in the BeanShell Sampler put:
if you want to store the result as a property:
props.put("testPlanResult", "-1".equals(vars.get("res")) ? "FAIL" : "PASS");
if you want to store the result in a file:
f = new FileOutputStream("/path/to/file.txt", false);
p = new PrintStream(f);
p.println("Result: " + ("-1".equals(vars.get("res")) ? "FAIL" : "PASS"));
p.close();
f.close();
From here you can do whatever you need with the created property or the file containing the result.
Hope this helps!
EDIT:
Note that the java.io classes used above (FileOutputStream, PrintStream) are imported by BeanShell automatically; if you want to resolve the file path relative to the test plan directory you can additionally use:
import org.apache.jmeter.services.FileServer;
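If you go the property route, the stored value can then be read at the very end of the run, for example (a sketch) from a JSR223 Sampler in a tearDown Thread Group:

// read the overall verdict written by the BeanShell Sampler above
log.info('Test Plan result: ' + props.get('testPlanResult'))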

Beanshell script launched once (start and end of test plan) in JMeter

Good afternoon!
I will try to explain my problem clearly.
The context
I have a JMeter Test Plan which sends HTTP requests to a server. I have a Beanshell script to assert each different case of error returned.
302 response code -> OK
200 response code -> ?
For each 200, I check the response data string to see whether it is an error or a correct case. (A user error like "User doesn't have the correct rights" is OK, but "Server is unavailable" is an ERROR, and both have 200 as the response code.)
Here is my test plan (screenshot not included).
The goal
As I have several errors returned by only one assertion script, I am not able to differentiate each error, except by expanding the assertion in a View Results Tree. But I disable it when launching my test, and I will launch my Test Plan remotely.
I had the idea to manually count each error. All my samples go through my assertion script and reach the correct if block according to their content. I increment some variables (JMeter properties in fact) in each block.
int test = Integer.parseInt(props.getProperty("302"));
test++;
props.setProperty("302", ""+test);
I want to display all those variables in a JFrame at the end of my test plan (mock-up not included).
The problem
My problem is that I don't know how to launch a Beanshell script before and after the Test Plan.
I want a first script to be started before any sample is sent, just to initialize all my property variables to 0 (otherwise they keep the values from the last Test Plan).
And I want a second one to display my frame with all the variables after the test plan is finished. (Currently it is a JFrame, but it will not stay like this.)
Tested solutions
1) For my first script, I set a Counter (JMeter > Config Element > Counter) at the beginning of my test plan to 0.
I use it to check whether my test has already started or not with an If Controller.
I have a Beanshell PreProcessor with props.setProperty("302", "0"); where "302" is my property counting all 302 response codes.
It works correctly, but I want to know if there is a proper way to do this.
2) Then, for my second script, I tried to use ${JMeterThread.last_sample_ok} in an If Controller as well, but it doesn't work as I expected. If I put it after my sample, it runs after every OK assertion, and if I put it at the end of the test plan, it is never called.
How can I run my Beanshell script once, after all my threads are stopped (i.e. all samples have finished)?
Thank you in advance, I hope you understood everything!
JMeter's setUp Thread Group and tearDown Thread Group are meant for exactly this.
Add your Beanshell component to the setUp Thread Group to do the setup activities before your actual test starts. Similarly, the tearDown Thread Group runs after your test execution is complete.
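For example, a minimal sketch using JSR223 Samplers with Groovy (assuming "302" and "200" are the property names your assertion script increments):

setUp Thread Group -> JSR223 Sampler:
// reset the counters before the real Thread Groups start
['302', '200'].each { code ->
    props.setProperty(code, '0')
}

tearDown Thread Group -> JSR223 Sampler:
// report the counters after all other threads have finished
['302', '200'].each { code ->
    log.info('Responses counted for ' + code + ': ' + props.getProperty(code))
}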
