How to re-run JMeter's failed threads after the test stops?

I'm using JMeter to run a functional test that updates the password of a large number of users (22K). I've split the users across 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value that gave me the fewest errors. However, 1.5% of the transactions still failed, and I need to re-run only those failed threads, because all users need to end up with the same password.
I've tried to find answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure. I haven't found whether there is a way to specifically re-run the failed threads.
Does anyone know if this is possible?

You will have to do the following:
Use a JSR223 Sampler to set rescode=0
Add a While Controller with a condition checking rescode != 200 (e.g. ${__jexl3("${rescode}" != "200")})
Put the HTTP Sampler inside the While Controller
Add a JSR223 PostProcessor with Groovy (or JavaScript) as the scripting language
Store the response code using prev.getResponseCode(),
e.g. vars.put("rescode", prev.getResponseCode());
You might have to add some more intelligence to the script to avoid an infinite loop.
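As a minimal sketch of that "intelligence", the JSR223 PostProcessor below stores the response code and also caps the number of attempts with a counter, so the While Controller can never spin forever. The variable names (rescode, retries) and the limit of 5 attempts are illustrative assumptions:

  // JSR223 PostProcessor (Groovy), child of the HTTP Sampler inside the While Controller
  def code = prev.getResponseCode()                  // response code of the last sample, as a String
  int retries = (vars.get("retries") ?: "0") as int
  if (code == "200" || retries >= 5) {
      vars.put("rescode", "200")                     // success, or retry budget exhausted: exit the loop
  } else {
      vars.put("rescode", code)                      // keep looping
      vars.put("retries", String.valueOf(retries + 1))
  }

The While Controller condition would then be ${__jexl3("${rescode}" != "200")}, with rescode and retries initialized before the loop.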

Another approach to solving the problem would be to anticipate errors on some of the password update calls and build a data file upon failure with the information you need.
For Example:
Create a Regular Expression Extractor (post-processor) with a default value of false and a template of true. Make the expression match the expected successful response, so the variable ends up true on success and falls back to false whenever the sample fails.
Then, after that sampler, you can add an If Controller based on the new true/false variable. If it is false, you know the previous password update failed. Inside the If Controller, add a Dummy Sampler whose response data contains all the information you need to know which accounts you must retry.
Then, add a Simple Data Writer to this Dummy Sampler and log its response data to a file. At the end of a test run this data file will contain all the information you need to retry the failed accounts.
Sadly this is a slightly manual process, but I'm sure with a little creativity you could automate recursive test runs until the retry file is empty. Beanshell file I/O might let you handle it all inside a single test run.
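For instance, a JSR223 PostProcessor (Groovy rather than Beanshell) could append each failed account straight to a retry file, replacing the dummy sampler and file writer. The username variable and the retry.csv path are assumptions for illustration:

  // JSR223 PostProcessor (Groovy), child of the password-update sampler
  if (!prev.isSuccessful()) {
      // append the account that failed to a retry file
      // (with many concurrent threads you would want to synchronize these writes)
      new File("retry.csv") << (vars.get("username") + System.lineSeparator())
  }

On the next run, retry.csv can feed a CSV Data Set Config so that only the failed accounts are replayed.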
-Addled

Related

How do I record a script in JMeter which adds a record to a website?

Currently, I am recording a script in JMeter that adds a record to a website. The problem is that while recording the script I am able to add a record, but once the recording is done, if I run the script again it does not add a record to the website.
Can you please help me with this?
In the absolute majority of cases you will not be able to replay the recorded script without performing correlation.
Modern web applications widely use dynamic parameters for session management or CSRF protection, so once you record your test you get "hardcoded" values that need to be made dynamic.
Given all of the above, my expectation is that your test doesn't add the record due to a failed login or something like that. Inspect the requests and responses using the View Results Tree listener - this will allow you to identify which exact step is failing.
The process of implementing the correlation looks as follows:
Identify the element which looks to be dynamic, either by manually inspecting request parameters and looking for "suspicious" patterns, or by recording your test one more time and comparing the two recorded scripts, looking for the parameters which differ
Inspect the previous response and extract the dynamic value using a suitable post-processor. For HTML response types the best option is the CSS Selector Extractor. It will allow you to extract the dynamic parameter value and store it in a JMeter Variable
Replace the hardcoded recorded value with the variable from step 2
Repeat for all dynamic parameters
Don't forget to add HTTP Cookie Manager to your Test Plan.
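As a hypothetical illustration of the process (the csrf_token parameter name and the selector are invented for the example), the correlation could look like this:

Recorded (hardcoded):
-- POST /records with parameter csrf_token=a1b2c3d4
After correlation:
-- GET /records/new
---- CSS Selector Extractor: reference name = csrfToken, selector = input[name=csrf_token], attribute = value
-- POST /records with parameter csrf_token=${csrfToken}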

JMeter HTML request order changes when I increase the number of users, and because of that the regular expression post-processors fail

I recorded and ran the JMeter script keeping the number of users = 1 in the Thread Group.
Results tree output:
I increased the number of users to 3 and the Results Tree output order changed.
As a result, some of my regular expression extractor logic fails and the corresponding responses fail. How can I avoid this situation?
Is there a way to manage the Results Tree execution order?
If your regular expression extractors are placed under the requests, and not at the same level as the HTTP requests, then this should not be a problem. Every thread/vUser runs independently; in View Results you simply see requests as and when they are executed by the different threads, not in sequence.
As per JMeter Functions and Variables user manual chapter:
Variables are local to a thread
Each JMeter thread (virtual user) executes Samplers from top to bottom (or according to the Logic Controllers). JMeter threads are absolutely independent from each other and each thread has its own variables.
So the problem must be somewhere else. Inspect the state of the variables using the Debug Sampler, and check the response data for the /oauth calls - it might simply not contain the necessary token value.
Also, there is a suspicious call to bundle.js; my expectation is that you should not be executing it directly. The good practice is to configure HTTP Request Defaults to download embedded resources and use a parallel pool, to be closer to what real browsers do.
See the Web Testing with JMeter: How To Properly Handle Embedded Resources in HTML Responses article for a more detailed explanation.
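To make the thread-locality concrete, here is a minimal JSR223 (Groovy) sketch: vars is scoped to the current thread, while props is a JVM-wide store shared by all threads:

  // JSR223 Sampler (Groovy)
  vars.put("token", "abc123")    // visible only to the current thread (virtual user)
  props.put("shared", "xyz")     // a JMeter property, visible to every thread
  log.info("thread-local token: " + vars.get("token"))
  log.info("global property: " + props.get("shared"))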

Adding elements at runtime and executing a jmeter test plan

I am trying to build a JMeter test plan where all the test values come from a CSV data file. I want to add assertions (provided in the data file) to my HTTP Request at runtime and execute the test. The reason for doing this is to keep the plan flexible with respect to the number of assertions. In my case, the assertions do get added at runtime; however, they fail to get executed. May I know what should be done to get the components added and executed in the same flow?
For example: A part of plan looks like:
XYZ
-- HTTP Sampler
-- Response Assertion1
-- Response Assertion2
-- JSON Extractor
where XYZ --> keyword-based Transaction Controller (reusable component)
Every time I have a request of type XYZ, this chunk of components gets executed. In my case, I do not want to place anything such as assertions, pre/post-processors or extractors in the test plan in advance. I want to generate these components at runtime and execute them (as per my test requisites).
Issue: The problem here is that I can't load the components programmatically and execute them in the same flow. The reason being, the compiler does not know beforehand which components it needs to execute, so it bypasses the newly added components.
So, I need some alternative solution to execute this.
You can add a Response Assertion (or several) with the "Patterns to Test" field filled with a variable such as ${testAssert1}, and leave the variable empty by default. For example:
Put a testAssert1 entry with an empty value in User Defined Variables.
Your assertion(s) will pass until, at runtime, you set the variable to a different value, for example using a User Parameters Pre-Processor.
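A minimal layout of the idea (testAssert1 comes from the answer above; the CSV wiring is an assumption):

Test Plan
-- User Defined Variables: testAssert1 = (empty)
-- Thread Group
---- CSV Data Set Config: reads an assertion1 column from the data file
---- HTTP Sampler
------ User Parameters (Pre-Processor): testAssert1 = ${assertion1}
------ Response Assertion: "Patterns to Test" = ${testAssert1}

An empty pattern matches any response (a "Contains" check against an empty string always passes), so rows that leave the column blank effectively disable the assertion.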

Developing JMeter test plan with results from multiple REST end points

Is it possible in JMeter to develop a test plan where the result of the first test (an ID) is the input of the next test, and so on, for up to 4 tests? Each test generates a unique ID and the IDs are dependent on each other. They are related as follows: submission ID > execution ID > both generate a completion ID with a pass or fail result. These are REST API calls. I need to run concurrent-user load testing. Finally, I need to measure latency and throughput for each test.
Between sampler requests, parse the API response using a JSON post-processor, assign the value to ${variable_name}, and use it in the following requests.
It should look something like this.
Thread Group
-- User Defined Variables
-- HTTP Sampler
---- Regex to get the id
-- HTTP Sampler
---- Regex to get the id
If you want to measure the combined response time of all the samplers, put a Transaction Controller as the parent of all the samplers.
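A sketch of the extraction chain, assuming the first response is JSON such as {"submissionId": "123"} (the field names, JSON paths, and URLs are made up):

Thread Group
-- HTTP Sampler: POST /submissions
---- JSON Extractor: reference name = submissionId, JSON Path = $.submissionId
-- HTTP Sampler: GET /2ndStep/${submissionId}
---- JSON Extractor: reference name = executionId, JSON Path = $.executionId
-- HTTP Sampler: GET /3rdStep/${submissionId}/executions

Giving each extractor its own reference name keeps a later extraction failure from overwriting an ID captured earlier.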
Thank you for the quick tip. I was able to get one step working by passing the ID into a regular expression, but the same regular expression did not work for the 3rd step. Let me give more details here. Basically, the first POST command gives a submission ID > I used that ID in a regular expression > ran a GET command in the next step with a URL something like '/../2ndStep/submissionId' > this passed > I'm using the same regular expression in the next GET command with a URL something like '/../3rdStep/submissionId/executions' > this is supposed to give another executionId, and it is failing for me. I'm not sure what I'm missing.
Thank you all for suggesting a working solution. But I need to do this a different way to achieve the following requirement.
When I run a POST command test on my REST API HTTP request using JMeter, it returns an ID in the response. This ID is used by the other steps to complete the job. I'm currently passing the ID via a regular expression and using it between the samplers of each step, as suggested above, and then measuring latency. But the GET steps which depend on that ID can take some time to complete, so I cannot put those GET steps into the same thread - two of the steps are failing because they take a while to finish. Is there a way to separate the POST command from the rest and automatically poll the remaining steps with GET commands to remedy this? Bottom line: I need to measure the latency and throughput of each step. Please let me know if there is a way to achieve this in JMeter.
Thanks again,
Santana
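One way the polling part of such a flow is often modelled (a sketch, not taken from this thread's answers; the paths and the status variable are assumptions) is a While Controller with a timer:

Thread Group
-- HTTP Sampler: POST /jobs (measured on its own)
---- JSON Extractor: jobId = $.id
-- While Controller: ${__jexl3("${status}" != "COMPLETE")}
---- Constant Timer: 1000 ms (poll interval)
---- HTTP Sampler: GET /jobs/${jobId}
------ JSON Extractor: status = $.status, default value = COMPLETE (so an error response exits the loop)

Each sampler still reports its own latency, and a Transaction Controller around the polling loop would give the total time to completion.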

JMeter - Why is my variable being re-evaluated midway through my test?

I posted this question originally in a much more complicated way, but I've reproduced the issue more simply now so I'm extensively editing my post.
I have a simple Test Plan to exercise an API.
The first thing it does is create a session with a simple HTTP POST. We then extract the session ID from the response using the JSON Path Extractor plugin.
This reads the newly created session's ID into a variable called id_JSON, and subsequent PUT requests use the session ID in their path, i.e. /api/sessions/${id_JSON}/account.
This generally works well, but I have noticed that intermittently, id_JSON will suddenly have the default value NOT_FOUND. Samples will fail and when I look at the request, I can see that it was trying to hit /api/sessions/NOT_FOUND/account instead of a valid ID. What's really confusing me right now is that this will happen after requests have already successfully referenced ${id_JSON} and generated a valid path. It seems like this should be impossible, unless the value of id_JSON was being dynamically checked or looked up repeatedly - otherwise how is it coming up with a different value from one request to the next?
It seems that if any Sample fails, for any reason, subsequent requests in the same thread iteration all fail with id_JSON having the default value NOT_FOUND.
Do I need to declare or manage the variable id_JSON in any special way to ensure that it will get the value of the session ID and retain it throughout the thread iteration, until the next iteration overwrites it with the next session ID?
The Extractor is a Post-Processor, meaning it is applied after each sampler in its scope. So in your case it will run on the first GET and the 4 PUTs.
So what you are noticing is absolutely normal: if a sampler fails, the extractor will fail to extract the ID and will put NOT_FOUND into the variable.
If you are sure the ID does not change, just put the Post-Processor as a child of the first HTTP Request, called "Create Session"; it will then only run for that request and the variable will not change anymore.
You can read more on this at:
http://jmeter.apache.org/usermanual/test_plan.html#scoping_rules
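In outline form (the sampler names follow the question; everything else is a sketch):

Thread Group
-- HTTP Request: Create Session (POST)
---- JSON Path Extractor: reference name = id_JSON (a child of this sampler only)
-- HTTP Request: PUT /api/sessions/${id_JSON}/account
-- HTTP Request: further PUTs reusing ${id_JSON}, which now never changes mid-iteration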
"Go to next loop iteration" operates on Thread Group level.
Using nested Loop Controllers doesn't increment the global iteration counter. You can test this with either:
the ${__BeanShell(vars.getIteration();)} function - if you use vanilla JMeter
the iterationNum function - if you use JMeter Plugins
So if you move your "looping" to the Thread Group level and remove the nested Loop Controller(s) (or set their loop count to 1), your approach should work as you expect it to.
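A quick way to see this, as a sketch (the loop counts are arbitrary): vars.getIteration() reports the Thread Group iteration, not the Loop Controller's:

Thread Group (loop count = 2)
-- Loop Controller (loop count = 3)
---- Debug Sampler
---- JSR223 PostProcessor (Groovy):
       log.info("Thread Group iteration: " + vars.getIteration())
       // logs 1,1,1,2,2,2 - the nested loop never advances it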
