Developing a JMeter test plan with results from multiple REST endpoints

Is it possible in JMeter to develop a test plan where the result of the first test (an ID) becomes the input of the next test, and so on, across up to 4 tests? Each test generates a unique ID, and the IDs depend on each other as follows: submission ID > execution ID > both generate a completion ID with a pass or fail result. These are REST API calls. I need to run a load test with concurrent users, and finally I need to measure latency and throughput for each test.

Between sampler requests, parse the API response using a JSON post-processor, assign it to ${variable_name}, and use it in the other requests.
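For illustration, a minimal JSR223 PostProcessor sketch in Groovy; the response field name id and the variable name submissionId are assumptions, not part of the original answer:

// JSR223 PostProcessor (Groovy) placed under the HTTP Request sampler.
// Assumes the response body is JSON such as {"id":"abc-123"} (field name is hypothetical).
import groovy.json.JsonSlurper
def json = new JsonSlurper().parseText(prev.getResponseDataAsString())
vars.put('submissionId', json.id as String) // later samplers can reference ${submissionId}

The built-in JSON Extractor does the same via the GUI with a JSON Path expression such as $.id and a reference name of submissionId.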

It should look something like this.
Thread Group
-- User Defined Variables
-- HTTP Sampler
---- Regular Expression Extractor (to get the ID)
-- HTTP Sampler
---- Regular Expression Extractor (to get the ID)
If you want to measure the combined response time of all the samplers, put a Transaction Controller as the parent of all the samplers (a Simple Controller only groups them, while a Transaction Controller also reports their aggregate time).

Thank you for the quick tip. I was able to get one step working by passing the ID through a Regular Expression Extractor, but the same regular expression did not work for the 3rd step. Let me give more details here. The first POST command returns a submission ID > I feed that ID into a regular expression > I run a GET command in the next step with a URL like '/../2ndStep/submissionId' > this passes > I use the same regular expression in the next GET command with a URL like '/../3rdStep/submissionId/executions' > this is supposed to return another executionId, and it is failing for me. I'm not sure what I'm missing.
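One common cause is reusing the same Reference Name in both extractors, so the second match (or a non-match falling back to the default) overwrites the first ID. A sketch with distinct variable names (the names are assumptions):

HTTP Request (POST .../submit)
-- Regular Expression Extractor, Reference Name: submissionId
HTTP Request (GET .../2ndStep/${submissionId})
HTTP Request (GET .../3rdStep/${submissionId}/executions)
-- Regular Expression Extractor, Reference Name: executionId

Checking the actual response of the 3rd step in a View Results Tree listener will show whether the expression matches at all.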

Thank you all for suggesting a working solution. But I need to do this a different way to meet the following requirement.
When I run a POST test on my REST API HTTP request in JMeter, it returns an ID in the response. This ID is used by the other steps to complete the job. I'm currently passing the ID through a regular expression and using it between the samplers of each step, as suggested above, and then measuring latency. However, the GET steps that depend on that ID can take some time to complete, so I cannot put those GET steps in the same thread: two of the steps are failing because they take a while to finish. Is there a way to separate the POST command from the rest and automatically poll the remaining steps with GET commands to remedy this? Bottom line: I need to measure the latency and the throughput of each step. Please let me know if there is a way to achieve this in JMeter.
Thanks again,
Santana

Related

How to capture JMeter test metrics like test start time, end time, number of users and pass them to a REST API call

I am new to JMeter. I have two scripts: one is a web script and the other is a REST API call which posts metrics to a server. Both scripts work fine. Now I want to implement a scenario:
The web script should run first; once it completes, I need to capture test metrics like start time, end time, load rate (number of threads), and pass/fail, save them to variables, and pass these values to the REST API call, which will then run and post the metrics to the server.
Any help is appreciated
Start time - can be obtained as ${TESTSTART.MS} JMeter pre-defined variable
End time - can be obtained via __time() function, if you call it somewhere in tearDown Thread Group it will report the time when all the main Thread Group(s) are done
Number of threads - this is a somewhat odd requirement because it's you who defines the number of virtual users. Anyway, you can obtain it at any moment using, for example, the __groovy() function:
${__groovy(ctx.getThreadGroup().getNumberOfThreads(),)} - returns the number of threads which are active currently
${__groovy(ctx.getThreadGroup().getNumThreads(),)} - returns the number of threads which are defined in the Thread Group
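Putting these together, a minimal JSR223 Sampler sketch in Groovy for a tearDown Thread Group; the metrics endpoint URL is an assumption, replace it with your server's:

// JSR223 Sampler (Groovy) in a tearDown Thread Group.
import groovy.json.JsonOutput
def metrics = [
    startTimeMs: vars.get('TESTSTART.MS'),           // JMeter pre-defined variable
    endTimeMs  : System.currentTimeMillis(),          // same value __time() would return here
    threads    : ctx.getThreadGroup().getNumThreads() // in tearDown this reports the tearDown group's own count
]
// POST the JSON payload to a hypothetical endpoint
def conn = new URL('http://example.com/metrics').openConnection()
conn.setRequestMethod('POST')
conn.setRequestProperty('Content-Type', 'application/json')
conn.doOutput = true
conn.outputStream.withWriter { it << JsonOutput.toJson(metrics) }
log.info('Metrics POST returned HTTP ' + conn.responseCode)

Alternatively, keep the REST call as a normal HTTP Request sampler in the tearDown Thread Group and only build the variables in a JSR223 element.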
For the given scenario you need to do the following:
1) Use the jp@gc listeners to measure the results (response time, threads per minute/second, hits per second and many more).
You can find the list of listeners here >> https://jmeter-plugins.org/wiki/GraphsGeneratorListener/
2) Implement the test plan using a Regular Expression Extractor to take values from the responses, store them in variables, and later pass them on to the dependent requests. For documentation visit https://jmeter.apache.org/usermanual/regular_expressions.html
For general understanding you can go through the jmeter official documentation
https://jmeter.apache.org/usermanual/get-started.html
I hope this helps.

How do I record a script in JMeter which adds a record in a website?

Currently I am recording a script in JMeter that adds a record on a website. While recording, the record is added successfully, but once the recording is done, replaying the script does not add a record to the website.
Can you please help me with this?
In the absolute majority of cases you will not be able to replay the recorded script without performing correlation.
Modern web applications widely use dynamic parameters for session management or CSRF protection so once you record your test you get "hardcoded" values and they need to be dynamic.
Given all of the above, my expectation is that your test doesn't add the record due to a failed login or something similar. Inspect the requests and responses using the View Results Tree listener - this will allow you to identify which exact step is failing.
The process of implementing the correlation looks as follows:
Identify the element which looks to be dynamic, either by manually inspecting the request parameters for "suspicious" patterns, or by recording the test one more time and comparing the recorded scripts for parameters that differ
Inspect the previous response and extract the dynamic value using a suitable post-processor. For HTML response types the best option is the CSS Selector Extractor. It will allow you to extract the dynamic parameter value and store it into a JMeter Variable
Replace the hardcoded recorded value with the variable from step 2
Repeat for all dynamic parameters
Don't forget to add HTTP Cookie Manager to your Test Plan.
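For illustration, step 2 could also be done with a JSR223 PostProcessor in Groovy; the hidden-input name csrf_token and the variable csrfToken below are assumptions about the application:

// JSR223 PostProcessor (Groovy) under the sampler that returns the HTML page.
// Extracts a hypothetical CSRF token from a hidden input and stores it as ${csrfToken}.
def html = prev.getResponseDataAsString()
def m = html =~ /name="csrf_token"\s+value="([^"]+)"/
if (m.find()) {
    vars.put('csrfToken', m.group(1))
} else {
    log.warn('csrf_token not found in the response')
}

The hardcoded recorded value is then replaced with ${csrfToken} in the follow-up request (step 3).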

Adding elements at runtime and executing a JMeter test plan

I am trying to build a JMeter test plan where all the test values are supplied from a CSV data file. I want to add assertions (provided in the data file) to my HTTP Request at runtime and execute the test. The reason for doing this is to keep the plan flexible with respect to the number of assertions. In my case the assertions do get added at runtime; however, they fail to get executed. What should be done to get the components added and executed in the same flow?
For example: A part of plan looks like:
XYZ
--HTTP Sampler
-- Response Assertion1
-- Response Assertion2
-- JSON Extractor
where XYZ --> keyword-based Transaction Controller (reusable component)
Every time I have a request of type XYZ, this chunk of components gets executed. In my case I do not want to place anything such as assertions, pre/post-processors, or extractors in the test plan in advance. I want to generate these components at runtime and execute them (as per my test requisites).
Issue: the problem is that I can't load the components programmatically and execute them in the same flow. The reason being, the compiler does not know beforehand which components it needs to execute, so it bypasses the newly added components.
So I need some alternative solution for this.
You can add a Response Assertion (or several) with its Patterns to Test field filled with a variable such as ${testAssert1}, and leave the variable empty by default. For example:
Put the name testAssert1 with an empty value in User Defined Variables.
Your assertion(s) will pass until, at runtime, you set the variable to a different value, for example using a User Parameters pre-processor.
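A minimal sketch of driving this from the data file, assuming the CSV Data Set Config binds a column to a hypothetical variable assertFromCsv:

// JSR223 PreProcessor (Groovy) under the HTTP Request.
// An empty value keeps the Response Assertion passing; a non-empty value activates it.
vars.put('testAssert1', vars.get('assertFromCsv') ?: '')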

How to measure the Test Fragment response time in JMeter?

My JMeter script visits the Workout history page of the website.
While loading it, the app sends 8 API requests. We put them into one Test Fragment, but while running the script in JMeter I get the response time of each HTTP Request separately.
Is there any way to get the response time of the whole Test Fragment?
It is simple: add a Transaction Controller in the Test Fragment and move all the HTTP Requests under the Transaction Controller.
If you are looking for only the total time of all the requests, then tick the Generate parent sample checkbox.
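The resulting layout would look like this (request names are placeholders):
Test Fragment
-- Transaction Controller (Generate parent sample)
---- HTTP Request 1
---- ...
---- HTTP Request 8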
You could also do this in Beanshell (or JSR223): start a timer before the first request and calculate the elapsed time after the last one.
You might also try something like the jp@gc package of extra plugins for graphs such as responses over time.
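A minimal sketch of the timer idea, using JSR223/Groovy rather than Beanshell:

// JSR223 PreProcessor on the first request: record the start time
vars.put('fragStart', String.valueOf(System.currentTimeMillis()))

// JSR223 PostProcessor on the last request: compute the elapsed time
long elapsed = System.currentTimeMillis() - (vars.get('fragStart') as long)
log.info('Test Fragment took ' + elapsed + ' ms')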

How to run JMeter failed threads after test stops?

I'm using JMeter to run a functional test that updates the password of a large number of users (22K). I've split the users across 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value that gave me the fewest errors; however, 1.5% of transactions still failed, and I need to rerun only those failed threads, because all users need to end up with the same password.
I've looked for answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure; I haven't found whether there is a way to specifically rerun the failed threads.
Does anyone know if this is possible?
You will have to do the following:
Use a JSR223 Sampler to set rescode=0
A While Controller with the condition (rescode != 200)
HTTP Sampler
A JSR223 PostProcessor with JavaScript as the scripting language
Store response code using prev.getResponseCode()
e.g. vars.put("rescode", prev.getResponseCode());
You might have to add some more intelligence to the script to avoid infinite loop.
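A sketch of those pieces, using Groovy rather than the JavaScript suggested above; the retry cap of 5 is an addition to address the infinite-loop concern:

// JSR223 Sampler before the While Controller: seed the loop variables
vars.put('rescode', '0')
vars.put('tries', '0')

// While Controller condition (as a __groovy function call):
// ${__groovy(!vars.get('rescode').equals('200') && (vars.get('tries') as int) < 5,)}

// JSR223 PostProcessor under the HTTP Sampler: store the response code and count the attempt
vars.put('rescode', prev.getResponseCode())
vars.put('tries', String.valueOf((vars.get('tries') as int) + 1))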
Another approach to solving the problem would be to anticipate errors on some of the password update calls and build a data file upon failure with the information you need.
For Example:
Create a Regular Expression Extractor post-processor with a default value of false and a template value of true. Make the expression match the expected response, so the variable falls back to false whenever the sample fails.
Then, after that sampler, you can add an If Controller based on the new true/false variable. If it is false, you know the previous password update failed. Inside the If Controller, add a Dummy Sampler whose response data contains all the information you need to know which accounts you must retry.
Then, add a Simple Data Writer to this Dummy Sampler and log the Dummy Sampler's response data to a file. At the end of a test run this data file will contain all the information you need to retry the failed accounts.
Sadly this is a slightly manual process, but I'm sure with a little creativity you could automate recursive test runs until the retry file is empty. Beanshell file I/O might let you handle it all inside a single test run.
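For the logging part, a compact Groovy alternative to the Dummy Sampler plus file writer combination; the file path and the username variable are assumptions, and with many concurrent threads you may want one file per thread to avoid interleaved writes:

// JSR223 PostProcessor (Groovy) under the password-update sampler.
if (!prev.isSuccessful()) {
    // 'username' is assumed to come from the CSV Data Set Config
    new File('retry-users.csv') << vars.get('username') << System.lineSeparator()
}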
-Addled
