Adding elements at runtime and executing a JMeter test plan

I am trying to build a JMeter test plan where all the test values are sent from a CSV data file. I want to add assertions (provided in the data file) to my HTTP Request at runtime and execute the test. The reason for doing this is to keep the plan flexible with respect to the number of assertions. In my case, the assertions do get added at runtime; however, they fail to get executed. What should be done to get the components added and executed in the same flow?
For example: A part of plan looks like:
XYZ
--HTTP Sampler
-- Response Assertion1
-- Response Assertion2
-- JSON Extractor
where XYZ --> keyword-based Transaction Controller (reusable component)
Every time I have a request of type XYZ, this chunk of components will get executed. In my case, I do not want to place anything such as assertions, pre/post processors, or extractors in the test plan beforehand. I want to generate these components at run time and execute them (as per my test requisites).
Issue: The problem here is that I can't load the components programmatically and execute them in the same flow. The reason is that the compiler does not know beforehand which components it needs to execute, so it bypasses the newly added components.
So, I need some alternative solution to execute this.

You can add a Response Assertion (or several) with the Pattern to Test filled with a variable such as ${testAssert1}, and set the variable to empty by default. For example:
Put a User Defined Variables element with the name testAssert1 and an empty value.
Your assertion(s) will pass until, at run time, you set the variable to a different value, for example using a User Parameters PreProcessor.
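As an alternative to the User Parameters PreProcessor, a minimal JSR223 PreProcessor sketch (Groovy) that fills the pattern from a CSV column at run time; the assertCsvColumn variable name is a made-up example:

    // Hypothetical CSV column 'assertCsvColumn': an empty value keeps the
    // assertion passing, anything else becomes the pattern to test.
    String expected = vars.get('assertCsvColumn') ?: ''
    vars.put('testAssert1', expected)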

Related

How to update array values based on a condition and pass the same array again with the updated elements

We are using a ForEach Controller to iterate over an array based on the matchNr count. The first time, my application sends all report calls to the server together; then, after a 10-second delay, it sends the same report calls again, minus the ones that completed the first time, and so on.
For example, suppose the application has 10 reports: the first time 10 requests go out, then the same requests go again, excluding the completed ones (so each request count should be less than or equal to the previous one).
Note:
The application has a different report count for each user; we cannot assign a fixed number of reports to all users.
The request is the same; only the report name in the request body changes.
We are trying to simulate Chrome's behavior with JMeter during the load test.
The array holds the total report count [image1]
The request is the same; only a parameter changes for each request [image2]
A ForEach Controller doesn't execute its children "together".
We are not telepathic enough to guess what your "array" looks like and how exactly you want to "update" it.
If you have JMeter Variables in the form of:
foo_1=something
foo_2=something else
....
foo_matchNr=number of matches
and want to update the variables with some "new" values, there are 2 options (see the sketch after this list):
Set Variables Action sampler (can be installed using JMeter Plugins Manager)
Or any suitable JSR223 Test Element; each of them exposes the vars shorthand for the JMeterVariables class instance, which provides read/write access to JMeter Variables in the current thread's scope.
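As an illustration of the second option, a minimal JSR223 (Groovy) sketch that rewrites every foo_N variable in the current thread's scope; the '_updated' suffix stands in for whatever your real update rule is:

    // Read the match count, then rewrite each foo_N variable in place.
    int matchNr = vars.get('foo_matchNr') as int
    (1..matchNr).each { i ->
        String current = vars.get("foo_${i}")
        vars.put("foo_${i}", current + '_updated')   // your own rule goes here
    }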

How do I record a script in JMeter which adds a record in a website?

Currently, I am recording a script in JMeter that adds a record on a website. The problem is that while recording the script I am able to add the record, but once the recording is done and I run the script again, it does not add a record on the website.
Can you please help me with this?
In the absolute majority of cases you will not be able to replay the recorded script without performing correlation.
Modern web applications widely use dynamic parameters for session management and CSRF protection, so once you record your test you get "hardcoded" values which need to be made dynamic.
Given all of the above, my expectation is that your test doesn't add the record due to a failed login or something similar. Inspect the requests and responses using the View Results Tree listener - this will let you identify which exact step is failing.
The process of implementing the correlation looks as follows:
Identify the element which looks to be dynamic, either by manually inspecting request parameters and looking for "suspicious" patterns, or by recording your test one more time and comparing the two recorded scripts, looking for the parameters which differ
Inspect the previous response and extract the dynamic value using a suitable PostProcessor. For HTML response types the best option is the CSS Selector Extractor. It will allow you to extract the dynamic parameter value and store it into a JMeter Variable (see the sketch below)
Replace the hardcoded recorded value with the variable from step 2
Repeat for all dynamic parameters
Don't forget to add HTTP Cookie Manager to your Test Plan.
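Where the CSS Selector Extractor doesn't fit (for example, the value lives outside the DOM), a hedged JSR223 PostProcessor sketch in Groovy doing the same job with a regular expression; the csrf_token input name is an assumption about your application's markup:

    // Pull a hypothetical CSRF token out of the previous response and
    // store it so later requests can reference it as ${csrfToken}.
    def matcher = prev.getResponseDataAsString() =~ /name="csrf_token"\s+value="([^"]+)"/
    if (matcher.find()) {
        vars.put('csrfToken', matcher.group(1))
    }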

JMeter - Why is my variable being re-evaluated midway through my test?

I posted this question originally in a much more complicated way, but I've reproduced the issue more simply now so I'm extensively editing my post.
I have a simple Test Plan to exercise an API.
The first thing it does is create a session with a simple HTTP POST. We then extract the session ID from the response using the JSON Path Extractor plugin:
This reads the newly created session's ID into a variable called id_JSON, and subsequent PUT requests use the session ID in their path, i.e. /api/sessions/${id_JSON}/account.
This generally works well, but I have noticed that intermittently, id_JSON will suddenly have the default value NOT_FOUND. Samples will fail and when I look at the request, I can see that it was trying to hit /api/sessions/NOT_FOUND/account instead of a valid ID. What's really confusing me right now is that this will happen after requests have already successfully referenced ${id_JSON} and generated a valid path. It seems like this should be impossible, unless the value of id_JSON was being dynamically checked or looked up repeatedly - otherwise how is it coming up with a different value from one request to the next?
It seems that if any Sample fails, for any reason, subsequent requests in the same thread iteration all fail with id_JSON having the default value NOT_FOUND.
Do I need to declare or manage the variable id_JSON in any special way to ensure that it will get the value of the session ID and retain it throughout the thread iteration, until the next iteration overwrites it with the next session ID?
The Extractor is a PostProcessor, meaning it is applied after every sampler in its scope. So in your case it will run after the first request and after each of the 4 PUTs.
What you are noticing is therefore perfectly regular: if a sampler fails, the extractor fails to extract the ID and puts NOT_FOUND into the variable.
If you are sure the ID does not change, just put the PostProcessor as a child of the first HTTP Request called "Create Session"; it will then run only for that request and the variable will not change anymore.
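In the outline notation used earlier on this page, the fix looks like this (sampler names taken from the question):
Create Session (HTTP POST)
-- JSON Path Extractor (child scope: runs only after this sampler)
PUT /api/sessions/${id_JSON}/account (reuses the stored value)
PUT ... (further requests, no extractor attached)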
You can read more on this at:
http://jmeter.apache.org/usermanual/test_plan.html#scoping_rules
"Go to next loop iteration" operates on Thread Group level.
Using nested Loop Controllers does not increment the global iteration counter. You can test this with either:
${__BeanShell(vars.getIteration();)} function - if you use vanilla JMeter
iterationNum function - if you use JMeter Plugins
So if you move your "looping" to the Thread Group level and remove the nested Loop Controller(s) (or set their loop count to 1), your approach should work as you expect.
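As a sanity check, any JSR223 element (Groovy) can log the counter; it increments only with the Thread Group loop, never with nested Loop Controllers:

    // Log the Thread Group-level iteration counter for the current thread.
    log.info("Thread Group iteration: ${vars.getIteration()}")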

How to run JMeter failed threads after test stops?

I'm using JMeter to run a functional test that updates the password of a large number of users (22K). I've separated the users into 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value that gave me the fewest errors; however, 1.5% of transactions still failed, and I need to rerun only those failed threads, because all users need to end up with the same password.
I've looked for answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure; I haven't found whether there is a way to rerun specifically the failed threads.
Does anyone know if this is possible?
You will have to do the following:
Use a JSR223 Sampler to set rescode to 0
Add a While Controller with the condition rescode != 200
Inside it, the HTTP Sampler
Add a JSR223 PostProcessor with JavaScript as the scripting language
Store the response code using prev.getResponseCode(), e.g. vars.put("rescode", prev.getResponseCode());
You might have to add some more intelligence to the script to avoid an infinite loop (see the sketch below).
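A sketch of that PostProcessor in Groovy rather than JavaScript, with the retry cap bolted on; the rescode and retries variable names and the cap of 5 are assumptions, and the matching While Controller condition would be something like ${__jexl3("${rescode}" != "200")}:

    // Store the response code for the While Controller condition to test.
    vars.put('rescode', prev.getResponseCode())
    // Cap the retries so a permanently failing request cannot loop forever
    // (reset 'retries' to 0 next to 'rescode' in the sampler before the loop).
    int retries = (vars.get('retries') ?: '0') as int
    if (++retries >= 5) {
        vars.put('rescode', '200')   // force the While Controller to exit
    }
    vars.put('retries', retries as String)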
Another approach to solving the problem would be to anticipate errors on some of the password update calls and build a data file upon failure with the information you need.
For Example:
Create a Regular Expression PostProcessor with a default value of false and a template value of true. Make the expression match the expected response, so the variable ends up false whenever the sample fails.
Then, after that sampler, add an If Controller based on the new true/false variable. If it is false, you know the previous password update failed. Inside the If Controller, add a Dummy Sampler whose response data contains all the information you need to know which accounts you must retry.
Then attach a file-writing listener (e.g. the Simple Data Writer) to this Dummy Sampler and log its response data to a file. At the end of a test run this data file will contain everything you need to retry the failed accounts.
Sadly this is a slightly manual process, but I'm sure with a little creativity you could automate recursive test runs until the retry file is empty. BeanShell file I/O might let you handle it all inside a single test run (see the sketch below).
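As a sketch of how the whole Dummy Sampler plus file-writer chain could collapse into a single JSR223 PostProcessor (Groovy here rather than BeanShell); the username variable and the failed_users.csv path are both assumptions:

    // Append the current account to a retry file whenever the update failed.
    // With many threads writing at once you may want to synchronize this.
    if (!prev.isSuccessful()) {
        new File('failed_users.csv') << vars.get('username') + System.lineSeparator()
    }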

Call function in user parameter definition

I'm running a set of thread groups (consecutively) and I need to reset a number of parameters at the start of each thread group so that they have a unique value.
Presently I'm referencing a User Parameters node via a Test Fragment and setting the value to value-${__time()}. Unfortunately, this results in the value being used verbatim (without the time being resolved).
Is there a better way to achieve per-thread group variables that include function calls?
Works fine for me (JMeter 2.5.1), as per the example below.
Sample params set to ${__time(HMS,)} and value-${__time()} are successfully resolved, generated and updated (once per iteration) for each thread (in this case: 3 Thread Groups, 5 threads, 3 loops).
Can you please explain why you are using User Parameters via a Test Fragment (as per your post)?
...And several articles, just fyi:
Parametrization in JMeter with user parameter
JMeter Variables vs. Properties vs. Parameters
UPDATED:
Please find below the results for an example with both the User Params and the test logic put into a Test Fragment and called from Module Controllers.
It works the same way as in the sample above: successfully resolved, preserved between samplers within a single loop, and updated (once per iteration) between loops for each thread (I've commented out the rest of the thread groups in the screenshot to get output for the first one only; it works fine with all TGs enabled too).
If the above scheme still doesn't work for you, you could also try moving the User Params config from the Test Fragment into each thread group, leaving only the test logic in the Test Fragment:
It's not very nice, but both the Module Controller and the Include Controller are still quite "buggy" and sometimes unpredictable.
You could also try to debug the problem controllers in your scenario: select the controller > click Help in JMeter's main menu > click Enable Debug > look into jmeter.log for details after the run.
You could also look into the custom Parameterized Controller - maybe it will work better.
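And if User Parameters keeps misbehaving, a one-line JSR223 PreProcessor (Groovy) per Thread Group is a blunt but dependable alternative sketch; uniqueValue is a made-up variable name, and System.currentTimeMillis() yields the same epoch-millis value that ${__time()} would produce:

    // Reset the per-thread-group value with the timestamp resolved in code.
    vars.put('uniqueValue', 'value-' + System.currentTimeMillis())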
