While running my load simulation in JMeter as follows, the OAuth token that I generate from the system at step 1 is overridden by a new one by the time I reach step 16 (please refer to the attached image). How can I tell JMeter not to generate a new token until all my Transaction Controllers have finished executing? (Please refer to the attached image and the description below.)
Description:
In step 1 I am generating the bearer token for my application, which is going to be used for my entire iteration.
While the iteration is running and reaches step 16 (getting this far hardly takes 30 seconds; marked as 2), a new token is generated for the user I used in step 1.
Is there any condition I am missing? Why is it creating a new token when the entire journey has not finished yet, or do I need to do something else?
Here is how I am extracting my token:
And here is how I am passing it in step 16:
The screenshot doesn't provide the full picture, as it's unclear how you "generate the bearer token", so I can only make some assumptions:
You're using a JMeter Property (properties are global) instead of a JMeter Variable (variables are thread-local), so when another thread generates a new token it overwrites the old value. Make sure to use variables, not properties (see the sketch after this list).
Your "token generation" logic has the wrong scope and the Post-Processor which does the extraction is being applied to more than one request.
Due to a logic error or a copy-paste issue you're overwriting the variable somewhere in step 15. Use a Debug Sampler to print the token value after each step; this way you will be able to localize the problematic block in your script.
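To illustrate the first point, here is a minimal JSR223 (Groovy) sketch of the difference; the variable name bearerToken and the extraction step are placeholders for whatever your script actually does:

// JSR223 PostProcessor (Groovy), child of the token-generation request only
def token = prev.getResponseDataAsString() // or the value your extractor found

vars.put('bearerToken', token) // JMeter Variable: thread-local, each thread keeps its own token
// props.put('bearerToken', token) // JMeter Property: global, the last thread to write wins,
// which is exactly the "overriding" behaviour you're seeing

Downstream samplers would then reference it as ${bearerToken}.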
I have 100 users logging in slowly over 30 minutes, and then 100 users running concurrently for the next 30 minutes.
I see an error rate of 26%, and all of the failures occurred because the access token value was not passed; instead I see the parameter name. Why would the access token not get passed? It's defined as a regular expression.
It sends the parameter name because:
the extraction has failed
you didn't provide the default value
The question of why the extraction has failed is not something we can help with; most probably your application gets overloaded and, instead of responding with the token, responds with something else. You can add e.g. a Simple Data Writer listener (at least for the sampler which returns the token) or amend the JMeter Results File Configuration to save the response data; this way you will be able to figure out what's wrong. See the How to Save Response Data in JMeter article for more detailed instructions if needed.
It also makes sense to check the system under test logs.
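If you want the extraction failure to surface immediately rather than only in the saved results, one option is a JSR223 Assertion on the sampler which returns the token. This is a sketch; the variable name access_token and the default value MISSING are assumptions, adjust them to match your extractor:

// JSR223 Assertion (Groovy), child of the sampler that returns the token
def token = vars.get('access_token')
if (token == null || token == 'MISSING') { // 'MISSING' assumed as the extractor's default
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Token extraction failed, response was: ' + prev.getResponseDataAsString())
}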
My JMeter test plan sets a token variable using a Regular Expression Extractor (a Post-Processor element), but after a while this variable is no longer expanded to its value, and as a consequence the literal ${token} is sent to the REST API under test.
I added Constant Delay Timer elements in between each request a user (thread) sends, and this seems to overcome the problem to a great extent, but not entirely: I still get some Bad Request responses from the API side.
I provide a screen capture
And a second screen capture showing the full test plan (or a large portion of it)
Can someone please explain why this is happening and how to resolve it?
It might be the case that your Regular Expression Extractor fails to find the value and therefore falls back to the default (undefined) value, so the literal ${token} is sent.
My expectation is that under load your application cannot respond properly, to wit the response doesn't contain the token. You can double-check this using a Debug Sampler and View Results Tree listener combination.
You can temporarily enable saving of response data by adding the next lines to user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
and the next time you run your test in command-line non-GUI mode, JMeter will keep response data for all samplers. You will then be able to examine the responses and add a Response Assertion to ensure that the token is present in the POST LOGIN sampler's response data.
I have created a thread group with three steps:
Access token request: it generates a token to be used in step three. This token is stored in a property:
${__setProperty(accessToken,${accessToken})}
Logon GET request to hit a URL
Logon POST request: it passes some data to the URL, and I have set the Authorization header using Bearer + accessToken (the one generated in the first step).
Running a single thread it works perfectly; but when I increase the number of threads, the 3 steps do not run in sequence: an Access token request may run before the first Logon Post, and I see that the token the Post is using is not the token generated in its own first step, it is the last one generated.
If I set a ramp-up time longer than the total execution time it works, but then I cannot run several threads in parallel.
How can I configure the script so that each Post runs with the corresponding token generated in step 1 of its own thread? How can I use different properties or variables to store the token of every thread and use them?
Thanks.
Your issue is that you are mixing Variables and Properties.
In summary, as per the functions reference:
Variables are per Thread
Properties are shared across Threads
So don't use setProperty, just use ${accessToken}
Use a property only when you want to affect all threads. Otherwise you can save a variable into another variable, as with User Parameters: put the new variable name in the 'Name:' column, and the value can be a different variable such as ${accessToken}. To add a new value to the series, click the 'Add User' button and fill in the desired value in the newly added column.
Values can be accessed in any test component in the same thread group, using the function syntax: ${variable}.
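If you prefer a script over GUI elements, the same per-thread copy can be done in a JSR223 element. A sketch, assuming accessToken is the variable your extractor creates:

// JSR223 PreProcessor or Sampler (Groovy)
// Copies the thread-local accessToken into another thread-local variable;
// no properties are involved, so threads cannot overwrite each other's tokens
vars.put('myToken', vars.get('accessToken'))

Subsequent samplers can then use ${myToken} (or just ${accessToken} directly).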
I posted this question originally in a much more complicated way, but I've now reproduced the issue more simply, so I'm extensively editing my post.
I have a simple Test Plan to exercise an API.
The first thing it does is create a session with a simple HTTP POST. We then extract the session ID from the response using the JSON Path Extractor plugin:
This reads the newly created session's ID into a variable called id_JSON, and subsequent PUT requests use the session ID in their path, i.e. /api/sessions/${id_JSON}/account.
This generally works well, but I have noticed that intermittently, id_JSON will suddenly have the default value NOT_FOUND. Samples will fail and when I look at the request, I can see that it was trying to hit /api/sessions/NOT_FOUND/account instead of a valid ID. What's really confusing me right now is that this will happen after requests have already successfully referenced ${id_JSON} and generated a valid path. It seems like this should be impossible, unless the value of id_JSON was being dynamically checked or looked up repeatedly - otherwise how is it coming up with a different value from one request to the next?
It seems that if any Sample fails, for any reason, subsequent requests in the same thread iteration all fail with id_JSON having the default value NOT_FOUND.
Do I need to declare or manage the variable id_JSON in any special way to ensure that it will get the value of the session ID and retain it throughout the thread iteration, until the next iteration overwrites it with the next session ID?
The Extractor is a Post-Processor, meaning it is applied after each sampler in its scope. So in your case it will run on the first GET and the 4 PUTs.
So what you are noticing is absolutely normal: if a sampler fails, the extractor will fail to extract the ID and will put NOT_FOUND in the value.
If you are sure the ID does not change, then just put the Post-Processor as a child of the first HTTP Request called "Create Session"; it will then run only for that request and the variable will not change anymore (see the sketch below).
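In other words, the structure should look something like this (sampler names taken from your question, the rest is a sketch):

Thread Group
  Create Session (HTTP POST)
    JSON Path Extractor <- as a child, it runs ONLY after Create Session
  PUT /api/sessions/${id_JSON}/account
  PUT ... <- the extractor no longer runs here, so id_JSON keeps its value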
You can read more on this at:
http://jmeter.apache.org/usermanual/test_plan.html#scoping_rules
"Go to next loop iteration" operates on Thread Group level.
Using any nested Loop Controllers doesn't increment the global iteration counter. You can test it with either:
${__BeanShell(vars.getIteration();)} function - if you use vanilla JMeter
iterationNum function - if you use JMeter Plugins
So if you move your "looping" to the Thread Group level and remove the nested Loop Controller(s) (or set their loop count to 1), your approach should work as you expect it to; see the sketch below for a quick way to verify.
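For example, you can watch the counter with a JSR223 Sampler (Groovy) placed directly under the Thread Group; this is just a diagnostic sketch:

// Logs the thread-group-level iteration number; it only increases when the
// Thread Group itself starts a new loop, not when a nested Loop Controller repeats
log.info('Thread ' + ctx.getThreadNum() + ' is on iteration ' + vars.getIteration())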
I'm using JMeter to run a functional test to update the passwords of a lot of users (22K). I've separated the users into 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value with which I got the fewest errors; however, I still got 1.5% of transactions failing, and I need to rerun only these failed threads, because all users need to end up with the same password.
I've tried to get answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure; I haven't found whether there is a way to specifically rerun the failed threads.
Does anyone know if this is possible?
You will have to do the following:
Use a JSR223 Sampler to initialize rescode=0
Add a While Controller with the condition (rescode != 200)
Put the HTTP Sampler inside it
Add a JSR223 Post-Processor with JavaScript as the scripting language
Store the response code using prev.getResponseCode()
e.g. vars.put("rescode", prev.getResponseCode());
You might have to add some more intelligence to the script to avoid an infinite loop; a sketch follows.
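Here is how those pieces could fit together, sketched in Groovy rather than JavaScript; the variable names rescode and attempts and the retry limit of 3 are illustrative:

JSR223 Sampler (before the While Controller):
vars.put('rescode', '0') // initialize so the loop is entered
vars.put('attempts', '0')

While Controller condition:
${__groovy(vars.get('rescode') != '200' && (vars.get('attempts') as int) < 3)}

JSR223 PostProcessor (child of the HTTP Sampler):
vars.put('rescode', prev.getResponseCode()) // '200' on success ends the loop
vars.put('attempts', String.valueOf((vars.get('attempts') as int) + 1)) // caps retries, preventing an infinite loop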
Another approach to solving the problem would be to anticipate errors on some of the password update calls and build a data file upon failure with the information you need.
For Example:
Create a Regular Expression post-processor that has a default value of false and a template value of true. Make the expression match the expected response, so the variable comes out false whenever the sample fails.
Then, after that sampler, you can add an If statement based on the new true/false variable. If it is false, you know the previous password update failed. Inside the If statement, add a dummy sampler with response data containing all the information you need to know which accounts you must retry.
Then add a Simple Data Writer to this dummy sampler and log the dummy sampler's response data to a file. At the end of a test run this data file will contain all the information you need to retry all failed accounts.
Sadly this is a slightly manual process, but I'm sure with a little creativity you could automate recursive test runs until the retry file is empty. Beanshell file I/O might let you handle it all inside a single test run; a Groovy sketch of the idea follows.
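For instance, the failure record could be appended from a JSR223 PostProcessor instead of the dummy sampler + file writer pair. A Groovy sketch; the variable names updateOk and username and the file path are assumptions:

// JSR223 PostProcessor (Groovy), child of the password-update sampler
if (vars.get('updateOk') == 'false') { // the true/false flag set by the Regular Expression extractor
    // plain append; for very high concurrency you may want to synchronize this
    new File('failed_accounts.csv') << vars.get('username') + System.lineSeparator()
}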
-Addled