How to end "While Controller" on error and show whole progress in "View Results Tree" - jmeter

In my test plan I want to make a Java request and then check that it did its work by making a JDBC request, but I don't know exactly when the Java request will change the DB. I want to make several JDBC requests, waiting after each one, and if the JDBC request doesn't show the expected changes after a certain time, stop the whole test. After that I want to see in View Results Tree how far the test plan got.
BUT
if I use any of the stop-thread functions, there is no data in View Results Tree.
How can I do this with JMeter?
Thanks.

You can use a mechanism similar to the one described here:
http://www.sourcepole.ch/2011/1/4/waiting-for-a-page-change-in-jmeter
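In practice that mechanism boils down to a While Controller that keeps looping until either the JDBC check succeeds or a retry limit is hit, so the thread never has to be stopped and View Results Tree keeps every sample. A minimal Groovy sketch, assuming the JDBC Request stores its rows in a result variable named resultSet and that "at least one row" is the change you are waiting for (both are assumptions for illustration):

```groovy
// JSR223 PostProcessor (Groovy) attached to the JDBC Request inside the While Controller.
// Sets dbChanged=true once the query returns the expected data, and counts the attempts
// so the loop can give up after a fixed number of polls instead of stopping the thread.
def rows = vars.getObject("resultSet")               // "Result variable name" of the JDBC Request
boolean changed = rows != null && !rows.isEmpty()    // adapt this check to your own "special change"
vars.put("dbChanged", changed ? "true" : "false")

int attempts = (vars.get("attempts") ?: "0") as int  // how many polls so far
vars.put("attempts", String.valueOf(attempts + 1))
```

The While Controller condition can then be something like `${__jexl3("${dbChanged}" != "true" && ${attempts} < 10)}` (seed dbChanged and attempts before the loop, for example with User Defined Variables), with a Constant Timer inside the loop to provide the wait between polls. If the loop exits because the attempt limit was reached rather than because the DB changed, nothing has been killed, so all samples up to that point stay visible in View Results Tree.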

Related

How to also Ignore the Parent sample generated by the Transaction Controller along with it's children samples?

I am using JMeter 5.5 to put load on some webpages.
I have a recorded webpage navigation flow and I am using a Transaction Controller with the "Generate parent sample" checkbox checked to represent a webpage navigation (load). Underneath the Transaction Controller are some HTTP Request samplers with "Retrieve All Embedded Resources" checked.
I want to ignore the first and last minute of script execution so I am using a JSR223 Postprocessor with code to conditionally ignore the samples based on the current time in the execution.
This works well if I don't use the Transaction Controller, but when I use prev.setIgnore() and the sampler being ignored is underneath a Transaction Controller with the "Generate parent sample" checkbox checked, then in the View Results Tree listener (and also in the JMeter Dashboard) I get an empty parent sample with Load time: 0, Connect Time: 0, Latency: 0. This impacts my metrics in the final generated report.
Is there any way to ignore the Parent sample as well (remove it from the reporting) or can I achieve the goal in a different way?
Thanks in advance.
I think you need to replace prev.setIgnore() with prev.getParent().setIgnore()
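A minimal Groovy sketch of that idea, assuming the time window is driven by two illustrative property names and the JSR223 PostProcessor sits under the HTTP samplers inside the transaction:

```groovy
// JSR223 PostProcessor (Groovy) - sketch of the warm-up/cool-down filter, extended to also
// ignore the generated parent transaction sample. The property names are made up for this example.
long now = System.currentTimeMillis()
long ignoreBefore = Long.parseLong(props.getProperty("test.ignoreBefore", "0"))
long ignoreAfter  = Long.parseLong(props.getProperty("test.ignoreAfter", String.valueOf(Long.MAX_VALUE)))

if (now < ignoreBefore || now > ignoreAfter) {
    prev.setIgnore()                  // drop the child sample from listeners/report
    prev.getParent()?.setIgnore()     // also drop the generated parent sample, if one exists
}
```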
In general, ignoring samples with Groovy is not something I would recommend, as it causes extra overhead. I would rather suggest using
either the Filter Results Tool
or the JMeter Plugins Command Line Tool
or the jmeter.reportgenerator.start_date and jmeter.reportgenerator.end_date properties if you're using the HTML Reporting Dashboard

JMeter load test async api

I make a POST call to an API and don't need to log the response (it's a post to Google Cloud Pub/Sub), but after that I need to measure the time it takes for the data to be processed and appear in another GET request (I need to keep hitting it until the response changes).
I also need to measure the performance under load. I tried JMeter but could not figure out a way to get what I wanted. Is there a way to do this in JMeter, or some other tool that will let me do what I want?
You can put your "another GET request" under a While Controller so JMeter will keep sending the request until the response matches some defined condition (see the sketch below).
You can put the whole sequence of requests under a Transaction Controller - this way JMeter will measure the end-to-end duration of the whole scenario.
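A minimal sketch of the loop exit condition, assuming the response body captured right after the POST is kept in a variable called baselineBody (an illustrative name) and that "changed" simply means the GET body differs from it:

```groovy
// JSR223 PostProcessor (Groovy) attached to the polled GET request inside the While Controller.
// Sets dataProcessed=true as soon as the response differs from the baseline captured after the POST.
String current  = prev.getResponseDataAsString()
String baseline = vars.get("baselineBody") ?: ""
vars.put("dataProcessed", current != baseline ? "true" : "false")
```

The While Controller condition could then be `${__jexl3("${dataProcessed}" != "true")}` (seed dataProcessed to false before the loop), with a Constant Timer inside the loop to pace the polling; wrapping the POST plus the polling loop in a Transaction Controller gives you the end-to-end processing time as a single sample.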

Why jMeter show transaction controller data in its summary report?

Here I have a Recording Controller and the Test Script Recorder. I recorded the user activities using the recording template, which automatically creates a Recording Controller and then Transaction Controllers. Each Transaction Controller has child HTTP Requests wrapped inside it. Now when I run the test after creating the test script, I see that the aggregated result shows information (throughput, error, min, etc.) for the child HTTP Requests (samplers) as well as for the parent Transaction Controllers.
I'll make it clearer with the images below.
In the picture above I've created the test plan. Now when I run this test I get the following result. The circled ones are the Transaction Controllers.
Here I have circled the parent Transaction Controller. Now why on earth is this adding up to the result?
Question: Is it making any request to the website? Why is this showing up and adding values to the child requests? This thing is just a sum of all its child requests - so why is it added to the table?
Here again, if I check "Generate parent sample", it hides the child requests and shows only the summed-up report, which is totally different from the report above.
Now the question is how do I turn things around, what are the consequences, and what should I do in this case: should I use the parent + child report data, or just the parent report data?
As per documentation of Transaction Controller:
The Transaction Controller generates an additional sample which measures the overall time taken to perform the nested test elements.
So if you don't want this additional sample, just remove the Transaction Controller or replace it with a Simple Controller.
Note that it is only useful when it contains more than one sampler.

jMeter Multiple HTTP Requests

I want to load test a fully functioning website with a constant, known number of users. To that end I'm trying to recreate the "Retrieve All Embedded Resources" functionality for a web page manually, because I really don't know whether it fetches all resources grabbed by JS. So the first question is: how do I check what these subsequent fetches retrieve?
The second question is: how do I make the multiple requests atomic, like "Retrieve All Embedded Resources"? I need to use the Constant Throughput Timer to make sure the number of vusers is constant, but:
When using "Retrieve All Embedded Resources", this counts as one request, and one thread handles it (hopefully - again, I can't tell what goes on behind the scenes).
When using a recorded session with numerous elements, each element is one action and occupies the queue (counts as 1 sample for the Constant Throughput Timer). Therefore, it's not atomic.
I guess I can count the elements and define them as the number of samples for throughput per minute, but this won't do in the long run.
First of all, JMeter does not execute any JavaScript in the pages it retrieves. Checking "Retrieve All Embedded Resources" does the following, per the documentation:
Tell JMeter to parse the HTML file and send HTTP/HTTPS requests for all images, Java applets, JavaScript files, CSSs, etc. referenced in the file.
So it will parse the current sample for any references and retrieve those, but it will not run any scripts that are retrieved.
If you want to check which resources JMeter is actually retrieving, you could run Fiddler, for example, to see which requests are being made.
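If you would rather stay inside JMeter than use a proxy, a small JSR223 PostProcessor can also dump the sub-results of the main sampler (a quick sketch, not the only way to do it):

```groovy
// JSR223 PostProcessor (Groovy) on the HTTP Request that has "Retrieve All Embedded Resources" checked.
// Logs every embedded resource JMeter actually fetched to jmeter.log.
prev.getSubResults().each { sub ->
    log.info("Embedded resource: ${sub.getUrlAsString()} -> ${sub.getResponseCode()}")
}
```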
You can use a Transaction Controller to report the master request and all of its embedded-resource requests as one sample; the aggregate time will be logged and reported.

How to run JMeter failed threads after test stops?

I'm using JMeter to run a functional test that updates the password of a lot of users (22K). I've separated the users into 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value with which I got the fewest errors. However, 1.5% of the transactions still failed, and I need to rerun only those failed threads, because all users need to end up with the same password.
I've tried to find answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure. I haven't found whether there is a way to specifically rerun the failed threads.
Does anyone know if this is possible?
You will have to do the following:
Use a JSR223 Sampler to set rescode=0
While Controller with the condition rescode != 200
HTTP Sampler
JSR223 PostProcessor (the original suggestion used JavaScript as the scripting language) that stores the response code using prev.getResponseCode(), e.g. vars.put("rescode", prev.getResponseCode()) - a Groovy sketch follows this list
You might have to add some more intelligence to the script to avoid an infinite loop.
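The same post-processor logic in Groovy (the JSR223 default) could look roughly like this, with an illustrative retry cap added to address the infinite-loop concern mentioned above:

```groovy
// JSR223 PostProcessor (Groovy) under the HTTP Sampler inside the While Controller.
// Stores the last response code for the While condition and caps the number of retries.
vars.put("rescode", prev.getResponseCode())

int retries = (vars.get("retries") ?: "0") as int
vars.put("retries", String.valueOf(retries + 1))
if (retries >= 5) {
    vars.put("rescode", "200")   // force the loop to exit after 5 attempts (the limit is arbitrary)
}
```

The While Controller condition would then be something like `${__jexl3("${rescode}" != "200")}`, and the JSR223 Sampler before the loop just seeds rescode (and retries) to their starting values.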
Another approach would be to anticipate errors on some of the password-update calls and build a data file of failures containing the information you need.
For example:
Create a Regular Expression Extractor post-processor with a default value of false and a template value of true. Make the expression match the expected response, so the variable stays false when the sample fails.
Then, after that sampler, add an If Controller based on the new true/false variable. If it is false, you know the previous password update failed. Inside the If Controller, add a Dummy Sampler whose response data contains all the information you need to know which accounts you must retry.
Then add a Simple Data Writer to that Dummy Sampler and log the Dummy Sampler's response data to a file. At the end of a test run, this data file will contain everything you need to retry the failed accounts.
Sadly this is a slightly manual process, but I'm sure with a little creativity you could automate recursive test runs until the retry file is empty. Beanshell/JSR223 file I/O might let you handle it all inside a single test run.
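As a sketch of that last idea, a JSR223 PostProcessor on the password-update sampler could skip the Dummy Sampler / Simple Data Writer pair entirely and append the failing account straight to a retry file (the file name and variable name are just examples):

```groovy
// JSR223 PostProcessor (Groovy) on the password-update sampler.
// Appends the failing user to a retry file that can feed a CSV Data Set Config on the next run.
// Note: with many threads writing concurrently, lines may interleave - synchronize or use one file per thread.
if (!prev.isSuccessful()) {
    new File("retry_users.csv") << vars.get("username") + System.lineSeparator()
}
```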
-Addled
