CSV data set is getting executed before the JDBC sampler request - jmeter

I am performing a login test with multiple concurrent users.
I have created a JDBC Request to get the username and password from the patient
table, and then created a Test.csv file from that data with the help of
a BeanShell assertion.
I then pass this file name (Test.csv) to the CSV Data Set Config.
I am able to log in with multiple users concurrently, but I am facing an issue:
on the first run the file is not available, because it is only created after the thread group executes.
If the file is not there, I get this in the log: File Test.csv must exist and be readable.
As a workaround, I currently keep the JDBC request in a different test plan: I execute that test plan first and then proceed to the login.
I want to keep and execute both requests in the same test plan.
One more thing: if I use different thread groups for these requests in the same test plan, I still face the same issue.

JMeter execution order is:
0-Configuration elements
1-Pre-Processors
2-Timers
3-Sampler
4-Post-Processors (unless SampleResult is null)
5-Assertions (unless SampleResult is null)
6-Listeners (unless SampleResult is null)
Based on the above, it is evident that the CSV Data Set Config will be executed before the JDBC sampler.
If you want to get the username/password in the same Thread Group, you can set the username and password as properties using __setProperty() and fetch them using __property(). For this, use a JSR223 PostProcessor after the JDBC request.
You can also use any of the other known post-processor scripting languages; it is just that Groovy is better for performance tests.
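A minimal sketch of that JSR223 PostProcessor (Groovy), assuming the JDBC Request's "Variable names" field is set to username,password (JDBC stores the first result row in username_1 / password_1):

```groovy
// JSR223 PostProcessor (Groovy), child of the JDBC Request.
// Assumes the JDBC "Variable names" field is "username,password",
// so the first result row lands in username_1 / password_1.
props.put("username", vars.get("username_1"))
props.put("password", vars.get("password_1"))
```

The values can then be read anywhere in the test plan as ${__property(username)} and ${__property(password)}.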
Apache Groovy - Why and How You Should Use It
Hope it helps.

The reason is that CSV Data Set Config is a Configuration Element and according to test elements Execution Order it is being initialised before anything else.
I would suggest going for __CSVRead() function instead, JMeter functions are evaluated at the time they're being called so the Test.csv file will exist at the time you will be getting credentials from it. See Apache JMeter Functions - An Introduction to get familiarized with JMeter Functions concept.
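For example, assuming Test.csv holds username,password pairs, the credentials can be read per iteration with function calls like:

```
${__CSVRead(Test.csv,0)}     <- reads the first column of the current line
${__CSVRead(Test.csv,1)}     <- reads the second column
${__CSVRead(Test.csv,next)}  <- advances to the next line
```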
Also be aware that according to JMeter Best Practices it is recommended to switch to JSR223 Test Elements from Beanshell or other scripting languages.

How to test two HTTP requests in Jmeter without affecting the performance of individual requests and how to only record the results of one?

I'm doing a performance test of an HTTP PATCH request using JMeter, called 'Update Person'. The issue here is that Update Person is dependent on another request called 'Create Person'. Create Person returns a 'personId' in its response, and I use that id to send the update request, so I can't do a performance test with only the Update Person request. Here is my JMeter Test Plan layout:
Jmeter Test Plan
When I run the test plan, the performance of both requests is significantly slower than testing Create Person alone. My questions are:
Does testing two HTTP requests affect the performance? If yes, how?
Is there a way that I can test my Update Person request alone while the Create Person request is running in the background to get the personIds?
Thank you.
1. You can first run Create Person alone and collect all the required person ids in a CSV. For this you can use a post-processor to capture and write the output, or take the ids directly from the DB if they are available there.
2. Then pass the created ids to the second request from the CSV using a CSV Data Set Config.
Update:
Use a Regular Expression Extractor (or any other post-processor) to fetch the id, then within the same sampler use a BeanShell PostProcessor to write the output to the CSV:
Example:
CreatePerson = vars.get("Create_Person");
f = new FileOutputStream("C:/Users/XXX/Users.csv", true); // true = append mode
p = new PrintStream(f);
p.println(CreatePerson); // write the value directly instead of redirecting interpreter output
p.close();
This can also be achieved with Groovy for a performance improvement. I am not an expert in Groovy, but you can find examples on this site; Dmitri T has posted them many times.
Reading it back is then very easy: add a CSV Data Set Config before the samplers (or at the top of the plan) to fetch the data. The column name then needs to be used as a variable wherever the value is required, like ${CreatePerson}.
There is one more way to capture the data without code: use Sample Variables. In the user.properties file (under the bin folder) add a line at the end: sample_variables=CreatePerson
Then use a Simple Data Writer or View Results Tree listener to save the results to a CSV; it will write the variable's value into the file. You can deselect all the data that is not required in the Simple Data Writer / View Results Tree configuration.

JMeter - How to make post processors reusable and use them on top of a test fragment on demand

Here is my scenario:
I created a test fragment for a sampler which is used in many thread groups present in different .jmx scripts. I sometimes would like to extract a few values from this sampler's result using a few post-processors.
Question:
How do I group these post-processors and make them reusable? I do not want to include them as part of the test fragment itself, because I don't need/want to execute the post-processor actions every time.
Here is what I have tried:
I am able to save those post processors as a separate test fragment and include it in my test script right after the test fragment with the sampler whenever I want to execute them. I can save the sampler result to a jmeter variable and use it inside my post processor test fragment.
Is this a good approach? Please guide me.
Having Post-Processors at the same level as all the other samplers is not a very good idea, as they will be executed for each sampler in their scope.
Saving response data into a variable each time is also an overhead, as according to your question you only need the values sometimes.
I would recommend using a JSR223 Sampler to copy the previous sampler's response data, and applying the necessary Post-Processor(s) to it as child(ren).
The relevant code to copy the previous sampler response data would be as simple as:
SampleResult.setResponseData(ctx.getPreviousResult().getResponseData())
Where:
SampleResult - stands for current SampleResult
ctx - stands for JMeterContext
Check out the Apache Groovy - Why and How You Should Use It article to learn more about Groovy scripting in JMeter.
The JSR223 Sampler can be saved as a Test Fragment as well.
Adding to Dmitri T's answer: in a JSR223 PostProcessor you can save similar code in a script file and reuse it.
Script file: a file containing the script to run. If a relative file path is used, it will be relative to the directory referenced by the "user.dir" system property.
Use the same script file in several post-processors for reusability:
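A minimal sketch of such a shared script file (Groovy); the file name, JSON field, and variable name here are assumptions for illustration:

```groovy
// extract-person-id.groovy — referenced from the "Script file" field
// of several JSR223 PostProcessors (relative paths resolve against user.dir)
def body = prev.getResponseDataAsString()
def matcher = body =~ /"personId"\s*:\s*"([^"]+)"/  // hypothetical response field
if (matcher.find()) {
    vars.put("personId", matcher.group(1))
}
```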

Jmeter: Requirement to Fetch aggregate response time of all the HTTP requests under a transaction controller for each iteration

As per our project requirement, we have to write the response time of each transaction to a DB during our load test.
For web-services scripts we are using the prev.getTime() function in BeanShell and writing the response time of that transaction to the DB.
But for UI-level scripts we have to use a Transaction Controller, and under the Transaction Controller there are many HTTP requests. If we use the prev.getTime() function, it will fetch only the response time of the last request.
If someone has a solution for the above requirement, please share it.
If you're using Transaction Controller in Generate parent sample mode you can get the total duration of all nested samplers as prev.getParent().getTime()
See SampleResult.getParent() method JavaDoc for details.
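A minimal sketch of that in a JSR223 PostProcessor (Groovy), assuming it sits under the last sampler inside a Transaction Controller running in "Generate parent sample" mode:

```groovy
// JSR223 PostProcessor (Groovy): log the aggregate time of the whole transaction
def parent = prev.getParent()
if (parent != null) {
    log.info("Transaction '" + parent.getSampleLabel() + "' took " + parent.getTime() + " ms")
}
```

From there, writing parent.getTime() to the DB replaces the per-request prev.getTime() call.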
I would recommend switching to JSR223 Test Elements and the Groovy language, since when it comes to high loads Beanshell performance might be a big question mark. Groovy is Java-compliant (even more so than Beanshell), so most likely you won't have to change a single line of code in order to migrate.
References:
BeanShell Scripting
JSR223 Elements
Apache Groovy - Why and How You Should Use It

jMeter - Beanshell bsh.shared hashmap data in init file?

I have a JMeter project I'm working on which has thrown up all sorts of problems, mainly due to the storing and scoping of variable data throughout the run.
I now have things pretty much working, and I'm using a BeanShell shared HashMap to store data throughout the run. I don't need to worry about thread safety, due to the way I'm doing it.
It works, but it re-initialises itself each time a thread group runs, despite my putting the initialisation step outside all thread groups.
So, as I understand it, the solution is to put all the startup data into an initialisation file which is only run once on startup. But I can't figure out how I'm supposed to do that. I've copied the code from the BeanShell PreProcessor I was using previously into a ".bshrc" file and updated the JMeter properties file with the location of my ".bshrc" file, but it doesn't seem to work; it doesn't actually seem to have done anything. When I run the test, no values are present and everything fails.
I've tried using:
beanshell.init.file=../bin/data.bshrc
and
beanshell.preprocessor.init=../bin/data.bshrc
I've tried to find some sort of idiots guide to setting up an init file, but I can't find anything helpful. This is the first time I've had to make much serious use of Beanshell and my Java knowledge is VERY limited at best!
At the moment, I'm getting round it by running my test once with the original Beanshell pre-processor enabled. This sets up the hashmaps and they stay resident in memory from there on. I stop this run, disable the pre-processor, and all subsequent runs work just fine.
Anyone?
I would suggest using setUp Thread Group which is being executed prior to any other Thread Groups and define your test data there with a Beanshell Sampler like
bsh.shared.myMap = new java.util.HashMap();
bsh.shared.myMap.put("foo","bar");
// any other operations
After that in your main Thread Group(s) you can access myMap values in any Beanshell-enabled test element (Sampler, Pre/Post Processor, Assertion) as
log.info("foo = " + bsh.shared.myMap.get("foo"));
2014/07/22 10:06:48 INFO - jmeter.util.BeanShellTestElement: foo = bar
See How to use BeanShell: JMeter's favorite built-in component guide for more details on Beanshell scripting in Apache JMeter and a kind of Beanshell cookbook.
If you use Beanshell for "heavy" operations I would recommend considering switching to JSR223 Sampler and Groovy language as in that case you'll get performance comparable to native Java code.
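If switching to JSR223/Groovy, the same cross-thread sharing can be sketched with the props object (which is shared across all threads) instead of bsh.shared; the map and key names here are illustrative:

```groovy
// JSR223 Sampler in a setUp Thread Group (Groovy)
import java.util.concurrent.ConcurrentHashMap

def myMap = new ConcurrentHashMap()   // thread-safe, since props is visible to all threads
myMap.put("foo", "bar")
props.put("myMap", myMap)

// later, in any JSR223 element of the main thread group(s):
// log.info("foo = " + props.get("myMap").get("foo"))
```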

How to run JMeter failed threads after test stops?

I'm using JMeter to run a functional test to update the password of a lot of users (22K). I've separated the users into 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value with which I got the fewest errors. However, I still got 1.5% failed transactions, and I need to rerun only these failed threads, because all users need to have the same password.
I've tried to find answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure. I haven't found whether there is a way to specifically rerun the failed threads.
Does anyone know if this is possible?
You will have to do the following:
1. Use a JSR223 Sampler to set rescode=0
2. Add a While Controller with the condition rescode != 200
3. Inside it, the HTTP Sampler
4. A JSR223 PostProcessor (with JavaScript as the scripting language) that stores the response code using prev.getResponseCode(), e.g. vars.put("rescode", prev.getResponseCode());
You might have to add some more intelligence to the script to avoid an infinite loop.
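The steps above can be sketched as follows (Groovy used here instead of JavaScript for consistency with the rest of this page; variable names are assumptions):

```groovy
// JSR223 Sampler, before the While Controller: seed the loop variable
vars.put("rescode", "0")

// While Controller condition (entered as a JMeter function, not Groovy code):
//   ${__groovy(vars.get("rescode") != "200",)}

// JSR223 PostProcessor, child of the HTTP Sampler: record the last response code
vars.put("rescode", prev.getResponseCode())
```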
Another approach to solving the problem would be to anticipate errors on some of the password update calls and build a data file upon failure with the information you need.
For Example:
Create a Regular Expression Extractor that has a default value of false and a template of true, with an expression that matches the expected (successful) response.
Then, after that sampler, you can add an if statement based on the new true/false variable. If it is false, you know the previous password update failed. Inside the if statement, add a dummy sampler with response data containing all the information you need to know which accounts you must retry.
Then, add a simple file writer to this dummy sampler, and log the dummy sampler response data to a file. At the end of a test run this data file would contain all the information you need to re-try all failed accounts.
Sadly this is a slightly manual process, but I'm sure with a little creativity you could automate recursive test runs until the re-try file is empty. Beanshell file IO might let you handle it all inside a single test run.
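A hedged sketch of those element settings (the reference name and condition are assumptions):

```
Regular Expression Extractor:
  Reference name:      updateOk
  Regular expression:  a pattern matching the successful response body
  Template:            true
  Default value:       false

If Controller condition (runs the dummy sampler only on failure):
  ${__groovy(vars.get("updateOk") == "false",)}
```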
-Addled
