JMeter, duplicate list of sample results - jmeter

I recorded the script using the BlazeMeter plugin and I am using the Ultimate Thread Group, but I am not able to figure out:
Why is there a delay in sending the first request? I have not set any delay, and the start time is also set to 1 second.
Also, in my View Results Tree I am seeing multiple HTTP requests being called. What am I doing wrong here?

We are not telepathic enough to guess why you have the "delay" without seeing your test plan; most probably there is something like a "Think Time" or another Timer there with a non-zero "Thread Delay" value. This is there to simulate a real user using a real browser: real users don't hammer the application under test non-stop, they need some time to "think" between operations, so most probably this delay is due to the timer action.
If you look at the HTTP Request Defaults configuration element, on the "Advanced" tab there is a setting instructing JMeter to download so-called "embedded resources". The main HTML response references multiple elements such as images, scripts, styles, fonts, sounds, etc., and JMeter parses the main HTML response, extracts these resources and downloads them, because that is what real browsers do.

Related

Suspiciously long pauses between Click and SendKeys actions in WDS in JMeter

I have an action which fills in a form with 2 fields and clicks the submit button.
Issue: the load time for this sampler grew over several script runs from 2-3 sec initially (which was visually close to manual execution) to 14-16 sec with no visible reason (no changes in the application or the script).
Investigation: there are ~4-5 sec pauses between the click into a field and the sendKeys action for both fields in the script, whereas manually the fields are clickable with no visible delays after the form appears.
Log:
WDS code:
Q: Is there any explanation for such strange behavior, with ~5 sec + ~5 sec pauses on each field?
What should I do to fix this issue of overly long, unexplained pauses, which produce irrelevant results?
Should I consider switching to JSR223/Groovy, or is this not a sampler-type or code-related problem?
Your question doesn't tell the full story, so I don't think it's possible to come up with a comprehensive answer; for now, only a few generic recommendations:
Take a thread dump at the moment when JMeter is "waiting" and doing nothing; it will allow you to detect what's going on there
Monitor the JVM using JVisualVM or an equivalent; it might be the case that the JVM is doing garbage collection to free up heap memory space, in which case you will have to tune JMeter for maximum performance
You're "finding" the element twice, the second line is absolutely unnecessary so instead of
wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.xpath("some xpath")))
var element = WDS.browser.findElement(pkg.By.xpath("the same xpath"))
you can do it only once like:
var element = wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.xpath("some xpath")))
XPath is the most powerful but the worst-performing locator strategy; if possible, stick to element ID attributes, and if that is not possible, go for CSS selectors (a fuller sketch combining these code points follows this list)
Follow JMeter Best Practices like don't run your test in GUI mode, avoid storing response data, etc.
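Putting the two code-related points together, a minimal WebDriver Sampler script (in its default JavaScript language) could wait for each field once, reuse the element returned by the wait, and rely on ID locators instead of XPath. This is only a sketch: the URL and the element IDs below are hypothetical placeholders for whatever your form actually uses.
var pkg = JavaImporter(org.openqa.selenium, org.openqa.selenium.support.ui)

WDS.sampleResult.sampleStart()
WDS.browser.get('https://example.com/form') // hypothetical URL

// one explicit wait per field; reuse the returned element instead of a second findElement()
// note: newer Selenium bundles may require java.time.Duration.ofSeconds(10) instead of a plain 10
var wait = new pkg.WebDriverWait(WDS.browser, 10)

var firstField = wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.id('firstField'))) // hypothetical id
firstField.click()
firstField.sendKeys('first value')

var secondField = wait.until(pkg.ExpectedConditions.visibilityOfElementLocated(pkg.By.id('secondField'))) // hypothetical id
secondField.click()
secondField.sendKeys('second value')

wait.until(pkg.ExpectedConditions.elementToBeClickable(pkg.By.id('submit'))).click() // hypothetical id

WDS.sampleResult.sampleEnd()
If the pauses persist even with this structure, that points away from the locator code and back towards the environment (JVM garbage collection, browser/driver start-up), which is where the thread dump and JVisualVM suggestions above come in.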

Visual Studio Web Test - Recording background requests

I have a web test where my requirements call for a handful of different polling requests to be running in the background. I have created a WebTestPlugin that looks for a certain context parameter to be set, and once it is, it kicks off a thread that just loops (every X seconds) firing off the configured request.
My issue is that this is not done in the context of the test, and therefore the results (# of calls, duration, etc.) are not part of the final report.
Is there a way to insert this data?
Rather than starting your own thread to run the background requests, I suggest using the facilities of the load test; that way the results will be properly recorded. Another reason is that the threading regime of a load test is not specified by Microsoft, and adding your own thread may cause issues.
You could have one scenario for the main test. Another scenario has one or more simple tests for the background polling activity. These tests could be set with a "think time between iterations" or with "test mix based on user pace" to achieve the required background rate. To get the background web tests starting at the correct time, start that scenario with a constant load of 0 (zero) users and use a load test plugin that adjusts the number of users whenever needed. The plugin writes the required number into m_loadTest.Scenarios[N].CurrentLoad for a suitable N. This would probably be done in the Heartbeat event handler, but potentially could be in any load test plugin event. It may be that the TestFinished handler can better detect when the number of users should increase.

JMeter Add Think Time to children feature

In JMeter, when I right-click a Thread Group/Controller I have the option Add Think Time to children. When I click on it, after every Sampler I get a Test Action pause with a Uniform Random Timer with Random Delay 100 and Constant Delay 1000.
I didn't find any reference to it in the documentation, nor to why/how it should be used.
Is it configurable, and how? Is there a special use case for it, or should it be used as a load testing best practice?
Also, you can add think times several times; I'm not sure whether that is on purpose (it adds more delays after the request).
EDIT
Configurable using jmeter.properties:
# Default implementation that create the Timer structure to add to Test Plan
# Implementation of interface org.apache.jmeter.gui.action.thinktime.ThinkTimeCreator
#think_time_creator.impl=org.apache.jmeter.thinktime.DefaultThinkTimeCreator
# Default Timer GUI class added to Test Plan by DefaultThinkTimeCreator
#think_time_creator.default_timer_implementation=org.apache.jmeter.timers.gui.UniformRandomTimerGui
# Default constant pause of Timer
#think_time_creator.default_constant_pause=1000
# Default range pause of Timer
#think_time_creator.default_range=100
When it comes to web application load testing, the idea is to represent, as closely as possible, a real user sitting in front of a computer using a real browser.
A well-behaved JMeter test needs to mimic this real user, including things like:
headers
cookies
cache
embedded resources
AJAX requests
etc.
The purpose of using Timers in JMeter tests is to simulate real users' "think times". Users don't hammer an application non-stop; they need some time to "think" between operations, fill in forms, type comments, and even clicking a button or link takes some time. So if you are testing whether your web application supports X users, each JMeter thread must act like a real user, and you need to add reasonable think times using Timers. There is no "best practice" or set of "known good values"; it depends only on your web application's specifics. See A Comprehensive Guide to Using JMeter Timers for more details.
This feature is made to simplify the addition of think times. The way it adds them to the plan leads to a pause between every sampler, whereas if you just add a Timer yourself it will be scoped and thus applied before all samplers in its scope.
As it's a helper, it adds a default pause of 1 second, which is configurable by tuning the properties you have mentioned, and which are documented :-) :
http://jmeter.apache.org/usermanual/properties_reference.html#timer
You can adjust:
The type of Timers you want to create
The constant and variable pause range
You can even create your own class that would work differently.
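For example, assuming you want roughly a 3-second constant pause with a 500 ms random component (these values are purely illustrative), you could override the two pause properties listed above in user.properties, which takes precedence over jmeter.properties:
# user.properties - overrides for the "Add Think Time to children" defaults
think_time_creator.default_constant_pause=3000
think_time_creator.default_range=500
After restarting JMeter, the helper will create its timers with these values instead of the 1000/100 defaults, and the bundled jmeter.properties file can stay untouched.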

How do I add a WAIT in a Web Performance Test loop?

I am writing a Web Performance Test for a process that will run for an undetermined time, and I have to put a refresh command in a while loop that runs until the process state indicates it is done.
The refresh command consumes about 3 seconds, so I do not want it running constantly in the loop. So, I am trying to find a sleep/wait function to pause the execution between iterations.
The only reference I've found is Thread.Sleep, which seems to do the job.
BUT this method seems to also stop the test's timers. So, however many times the loop runs, and whatever the actual time taken by the process, the test report will only show the cumulative time of the refresh requests.
Is there another method that will not stop the test's timers?
If the refresh is in a loop within the Web Performance Test then set a suitable "think time" on the request. This will pause the test after the response is received. (Think times are normally used to simulate the time a person spends reading a web page and filling in forms etc before the next request is issued.)
Think times are set via the properties of the request. Think times (and also reporting names) for all requests in a test can be viewed and modified using the "Set request detail" command, accessed via the (rightmost) command icon in the web test editor.
Think times can also be set or adjusted in the PreRequest method of a WebTestRequest plugin.

jMeter Multiple HTTP Requests

I want to load test a fully functioning website using a constant, known number of users. To that end I'm trying to recreate the "Retrieve All Embedded Resources" functionality for a web page, only manually, because I really don't know whether it fetches all resources grabbed by JS. So the first question is: how do I check what these subsequent fetches retrieve?
Second question is - how do I make the multiple requests atomic, like "Retrieve All Embedded Resources"? I need to use "Constant Throughput Timer" for making sure the number of vusers is constant, but:
When using "Retrieve All Embedded Resources", this counts as one request, and one thread handles it right (hopefully, again - can't tell what goes on beyond the scenes)
When using a recorded session with numerous elements, each element is one action and occupies the queue (counts as 1 sample for Constant Throughput Timer). Therefore, it's not atomic.
I guess I can count the elements and define them as number of samples for throughput per minute, but this won't do in the long run.
First of all, JMeter does not execute any JavaScript in the pages it retrieves. Checking "Retrieve all embedded resources" does the following, according to the documentation:
Tell JMeter to parse the HTML file and send HTTP/HTTPS requests for all images, Java applets, JavaScript files, CSSs, etc. referenced in the file.
So it will check the current sample for any references and retrieve those, but it will not run any scripts that are retrieved.
If you want to check which resources JMeter is actually retrieving, you could run Fiddler, for example, to see which requests are being made.
You can use a Transaction Controller to treat all the embedded resource requests and the master request as one sample; the aggregate time will be logged and reported.
