I'm testing a large BizTalk system using Visual Studio Load Test. The load test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counter collection stop) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end only when a condition is met (in my case, when SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool returns 4).
I can see several ways to run code after the test is complete, but no obvious way to extend the test and keep monitoring until a user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
I imagine you want to keep the load test running after queueing up a bunch of requests so you can continue monitoring performance while the requests are processed. Although a plugin can't directly control the load test duration, there is a way to achieve this.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new webtest in a new scenario and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a webtest standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort(). A sketch of both plugins follows these steps.
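A minimal sketch of steps 3 and 4 might look like the following. The connection string, the "EndConditionReached" flag name and the plugin class names are my own illustrative choices, and the exact members exposed by the event arguments can vary slightly between Visual Studio versions, so treat this as a starting point rather than a drop-in implementation:

    using System.Data.SqlClient;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using Microsoft.VisualStudio.TestTools.LoadTesting;

    // Step 3: web test plugin that polls the MessageBox and records a flag in
    // the load test user context when the end condition is reached.
    public class SpoolEndConditionPlugin : WebTestPlugin
    {
        // Assumed connection string - point it at the server hosting BizTalkMsgBoxDb.
        private const string ConnectionString =
            "Data Source=.;Initial Catalog=BizTalkMsgBoxDb;Integrated Security=True";

        public override void PostWebTest(object sender, PostWebTestEventArgs e)
        {
            int spoolCount;
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Spool", conn))
            {
                conn.Open();
                spoolCount = (int)cmd.ExecuteScalar();
            }

            object ctx;
            if (spoolCount == 4 &&
                e.WebTest.Context.TryGetValue("$LoadTestUserContext", out ctx))
            {
                var userContext = ctx as LoadTestUserContext;
                if (userContext != null)
                {
                    // "EndConditionReached" is an arbitrary flag name; the load
                    // test plugin below looks for the same key.
                    userContext["EndConditionReached"] = true;
                }
            }
        }
    }

    // Step 4: load test plugin that aborts the run once the flag appears.
    public class AbortOnEndConditionPlugin : ILoadTestPlugin
    {
        private LoadTest m_loadTest;

        public void Initialize(LoadTest loadTest)
        {
            m_loadTest = loadTest;
            m_loadTest.TestFinished += OnTestFinished;
        }

        private void OnTestFinished(object sender, TestFinishedEventArgs e)
        {
            object flag;
            if (e.UserContext != null &&
                e.UserContext.TryGetValue("EndConditionReached", out flag) &&
                (bool)flag)
            {
                m_loadTest.Abort();
            }
        }
    }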
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by blocking during the close-down period of the test to prevent it from exiting that way.
The only way we managed to do something like this was to manipulate the LoadTest results database directly and inject performance counter data afterwards from log files, but this is neither smart nor recommended.
Oh well..
I have a web test where my requirements call for a handful of different polling requests to be running in the background. I have created a WebTestPlugin that looks for a certain context parameter to be set; once it is, it kicks off a thread that just loops (every X seconds), firing off the configured request.
My issue is that this is not done in the context of the test, therefore the results (number of calls, duration, etc.) are not part of the final report.
Is there a way to insert this data?
Rather than starting your own thread to run the background requests I suggest using the facilities of the load test. That way the results will be properly recorded. Another reason is that the threading regime of a load test is not specified by Microsoft and adding your own thread may cause issues.
You could have one scenario for the main test and another scenario containing one or more simple tests for the background polling activity. These tests could be set with a "think time between iterations" or with "test mix based on user pace" to achieve the required background rate. To get the background web tests starting at the correct time, start the test with a constant load of 0 (zero) users and use a load test plugin that adjusts the number of users whenever needed. The plugin writes the required number into m_loadTest.Scenarios[N].CurrentLoad for a suitable N. This would probably be done in the Heartbeat event, but it could potentially be done in any load test plugin event. It may be that the TestFinished event can better detect when the number of users should increase. A sketch follows.
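As a rough sketch of such a plugin (the scenario name "BackgroundPolling", the 5 polling users and the 30-second trigger are illustrative assumptions, not values from the question):

    using Microsoft.VisualStudio.TestTools.LoadTesting;

    // Load test plugin that keeps the polling scenario at zero users until the
    // main scenario has been running for a while, then raises its load.
    public class BackgroundPollingPlugin : ILoadTestPlugin
    {
        private LoadTest m_loadTest;

        public void Initialize(LoadTest loadTest)
        {
            m_loadTest = loadTest;
            m_loadTest.Heartbeat += OnHeartbeat; // fires roughly once per second
        }

        private void OnHeartbeat(object sender, HeartbeatEventArgs e)
        {
            // Illustrative trigger: start the background polling 30 seconds in.
            if (e.ElapsedSeconds < 30)
            {
                return;
            }

            foreach (LoadTestScenario scenario in m_loadTest.Scenarios)
            {
                // "BackgroundPolling" is an assumed scenario name.
                if (scenario.Name == "BackgroundPolling" && scenario.CurrentLoad == 0)
                {
                    scenario.CurrentLoad = 5; // assumed number of polling users
                }
            }
        }
    }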
I have recorded my web application through a template and just want to confirm whether the load test results I am getting are correct. Does simply increasing the number of users give proper results? Is that enough for load testing a web application?
First of all you need to ensure that your test does what it is supposed to do. Recorded tests can rarely be replayed successfully as-is, so normally you should proceed as follows:
Add a View Results Tree listener and run your test with 1 user. Inspect request and response details to verify your test steps.
Perform correlation and parametrization if required.
Correlation: the process of identifying and handling any dynamic parameters. Most often people use the Regular Expression Extractor for this.
Parametrization: the process of making your test data driven. For example, if your application assumes multiple authenticated users, you need to store the credentials somewhere. The most commonly used test element for this is the CSV Data Set Config.
Make your test realistic. Virtual users simulated by JMeter need to represent real users using real browsers as closely as possible, with all the related stuff: cookies, headers, cache, etc. See How To Make JMeter Behave More Like A Real Browser to learn how to configure JMeter to act closer to real users. Real users also need some time to "think" between operations, so make sure you are using Timers to simulate this behaviour as well.
Only after you apply the above points should you add more virtual users. Again, run your test with 2-3 users and iterations to ensure your test functions as designed. Once you are happy with it you can increase the load, but don't overwhelm your server: increase the load gradually and check the impact of the increasing load on your application, i.e. how response time, throughput and the number of errors change as the load grows. The same applies to decreasing the load; don't switch it off at once, decrease the number of virtual users gradually.
Building a Web Test Plan
Building an Advanced Web Test Plan
In this case, I have created several webtest scripts, and added them to a load test (distributed by expected use).
What I would like to do is send a user load (500 for example) where all users run at the same time, each user is given only a single script to run and complete, and then the test is finished. One iteration for each user.
I am finding that iterations are test based rather than user based, so when selecting a Test Iterations value of 1 with 500 users, only a single test is completed by a single user.
Is there a user based iteration setting or some other way to accomplish my intended test?
Thanks.
The test settings you have used are not at all clear from your question. However, assuming you want to start 500 test cases at the same time and stop after they have all completed, you can use the following.
In the properties of the scenario: set the user load to constant and to 500 users. Also set Maximum Test Iterations to 0 (meaning no maximum). I would also set the Think Time Between Test Iterations to much longer than you expect the test run to take; this setting may not be needed, but it avoids unexpected behaviours.
In the properties of the run settings there are two possibilities.
Either (1) set the test iterations to 500.
Or (2) set the run duration to long enough for all 500 tests to complete, but shorter than the Think Time Between Test Iterations in the scenario.
We're running a test case for load testing over different servers. What we want to do is, given that test case, stop the test if we see a performance decrease, based on a response time threshold.
What we have now is a Thread Group defined, and inside it an HTTP Request sampler plus a View Results Table for output. What should I do to add this control?
Add a Duration Assertion and specify the threshold there.
In your Thread Group set "Action to be taken after a Sampler error" to Stop Test.
The above steps will stop your test after the first occurrence of a response time exceeding the threshold.
See the How to Use JMeter Assertions in Three Easy Steps article for more information on how to conditionally set pass/fail criteria in your JMeter test.
P.S.
Remove the View Results Table listener (or disable it) during load test execution, as it consumes a lot of resources.
Run your load test in non-GUI mode, as the JMeter GUI is not designed for running the actual load test and may become a bottleneck under anything beyond a light load.
I am writing a Web Performance Test for a process that will run for an undetermined time, and I have to put a refresh command in a while loop that runs until the process state indicates it is done.
The refresh command takes about 3 seconds, so I do not want it running constantly in the loop. I am trying to find a sleep/wait function to pause execution between loop iterations.
The only reference I've found is Thread.Sleep, which seems to do the job.
BUT, this method seems to also stop the test's timers. So, however many times the loop runs, and whatever the actual time taken by the process, the test report will only show the cumulative time of the refresh requests.
Is there another method that will not stop the test's timers?
If the refresh is in a loop within the Web Performance Test then set a suitable "think time" on the request. This will pause the test after the response is received. (Think times are normally used to simulate the time a person spends reading a web page and filling in forms etc before the next request is issued.)
Think times are set via the properties of the request. Think times (and also reporting names) for all requests in a test can be viewed and modified using the "Set request detail" command, accessed via the (rightmost) command icon in the web test editor.
Think times can also be set or adjusted in the PreRequest method of a WebTestRequest plugin.
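A minimal sketch of that plugin approach, assuming a hypothetical 10-second polling interval (the class name and default value are my own; attach the plugin to the refresh request):

    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Request plugin that applies a think time to the refresh request so the
    // loop pauses between polls without stopping the test's timers.
    public class RefreshThinkTimePlugin : WebTestRequestPlugin
    {
        // Assumed polling interval in seconds; adjust to suit the process.
        public int PauseSeconds { get; set; }

        public RefreshThinkTimePlugin()
        {
            PauseSeconds = 10;
        }

        public override void PreRequest(object sender, PreRequestEventArgs e)
        {
            // The think time is applied after the response is received, so the
            // waiting period is reported as think time rather than request time.
            e.Request.ThinkTime = PauseSeconds;
        }
    }

Note that think times are only honoured when enabled: the web test editor toolbar has a think-time toggle for playback, and in a load test the scenario's think profile must not be set to off.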