Can't run multiple tests inside a load test - performance

I am setting up a load test with multiple web performance tests.
I want each test to be run by exactly one user; no two users should run the same test.
I have 10 tests, and I want them all to run at the same time (i.e. simulating peak load).
When I run my load test, only one of the tests is executed.
Here's my configuration:
I have 10 web performance tests defined that I've added to a load test.
I have set the "Test Mix Model" to be based on the "total number of tests".
I have allocated each test a 10% distribution.
I set the user count to 10 (constant load).

Related

Is it possible to run simultaneous runs with different device configurations?

Is it possible to run multiple test runs with different test suites at the same time with an account that permits device concurrency?
https://forums.xamarin.com/discussion/39831/run-ui-tests-on-multiple-devices-simultaneously
The answer given there was:
When you create a test run in Xamarin Test cloud, the second page in the Test Run wizard has an option to run tests concurrently (the Parallelization drop down).
If you are submitting tests at the command line, you can run tests in parallel using one of the following two command line parameters:
--test-chunk to run tests in parallel by method
--fixture-chunk to run tests in parallel by fixture.
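For example, a submission using fixture chunking might look like the line below. Only the --fixture-chunk flag comes from the answer above; the overall command shape, file names, and placeholders are illustrative assumptions, so check them against the Test Cloud documentation for your version:

test-cloud submit YourApp.apk <your-api-key> --devices <device-selection-id> --assembly-dir ./bin/Release --fixture-chunk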
But can I test on different devices like in this example?
Device1 - test1, test2
Device2 - test1, test3
Device3 - test4, test5
It is possible to run multiple test runs with different test suites at the same time using device concurrency in the Xamarin Test Cloud. This is true whether or not you use parallelization. However, parallelization complicates the matter somewhat, because it runs your tests on multiple copies of a single device, and those copies also count against your concurrent devices.
When you select to run on parallel devices, the Test Cloud will automatically run your tests on as many copies of that device as are available.
Example Scenario
Device Concurrency - 3
First Test Run - 1 device selected
Second Test Run - 2 devices selected
Without parallelization - Both tests can run as soon as devices are available, because the concurrency is the total maximum for all tests. You could similarly have three test runs each with a single device and all could start immediately. If you exceed your device concurrency, then your remaining tests will be queued up to wait for another device to be finished.
With parallelization - The first test run may use up 1, 2, or all 3 device concurrency slots, depending on how many copies of its device are available. The slots used up by the first test run won't be available to the second test run until the tests on them have finished.
Conclusion
Theoretically you can have multiple test runs all using parallelization at the same time; but in practice you might not have enough concurrency slots for them to actually progress concurrently.
You can think of it as a trade-off: for an individual test run on a single device, parallelization will get you your test results much faster, but subsequent test runs will often have to wait. Whether you use it or not, you can still queue up more tests afterwards, so there is no "penalty" for adding extra tests beyond what your concurrency will allow to run immediately.

VS Web Test Load, Distribution and Iterations

In this case, I have created several webtest scripts, and added them to a load test (distributed by expected use).
What I would like to do is send a user load (500, for example) where all users run at the same time, each user is given only a single script to run to completion, and then the test is finished: one iteration per user.
I am finding that iterations are counted per test run rather than per user, so when selecting a Test Iterations value of 1 with 500 users, only one user runs one test.
Is there a user based iteration setting or some other way to accomplish my intended test?
Thanks.
The test settings you have used are not at all clear from your question. However, assuming you want to start 500 test cases at the same time and stop after they have completed, you can use the following.
In the properties of the scenario: set the user load to constant with 500 users. Also set the maximum test iterations to 0 (meaning no maximum). I would also set the think time between iterations to much longer than you expect the test run to take; this setting may not be needed, but it avoids unexpected behaviour.
In the properties of the run settings there are two possibilities.
Either (1) set the test iterations to 500.
Or (2) set the run duration to long enough for all 500 tests to complete, but shorter than the think time between iterations in the scenario.
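As a rough illustration, the relevant parts of the .loadtest file for option (1) might look like the sketch below. The attribute names are my recollection of a typical Visual Studio .loadtest file and may differ between Visual Studio versions, so treat this as something to compare against your own file, not a definitive schema:

<Scenarios>
  <!-- "think time between iterations" is DelayBetweenIterations (seconds); MaxTestIterations 0 = no per-scenario cap -->
  <Scenario Name="Scenario1" DelayBetweenIterations="3600" MaxTestIterations="0">
    <LoadProfile Pattern="Constant" InitialUsers="500" />
    <TestMix><!-- web tests distributed by expected use --></TestMix>
  </Scenario>
</Scenarios>
<RunConfigurations>
  <!-- option (1): stop the run after exactly 500 test iterations -->
  <RunConfiguration Name="Run Settings1" UseTestIterations="true" TestIterations="500" />
</RunConfigurations>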

Visual Studio Cloud Load Test Average Test Time Seems Long

I have a WebAPI service that I put together to test throughput, hosted in Azure. I have it set up to call Task.Delay with a configurable number (i.e. webservice/api/endpoint?delay=500). When I run against the endpoint via Fiddler, everything works as expected, delays, etc.
I created a Load Test using VS Enterprise and used some of my free cloud load testing minutes to slam it with 500 concurrent users over 2 minutes. After multiple runs of the load test, it says the average test time is roughly 1.64 seconds. I have turned off think times for the test.
When I run my request in Fiddler concurrently with the Load test, I am seeing sub-second responses, even when spamming the execute button. My load test is doing effectively the same thing and getting 1.64 second response times.
What am I missing?
Code running in my unit test (which is then called for my load test):
var client = new HttpClient { BaseAddress = new Uri(CloudServiceUrl) }; // requires using System.Net.Http;
// .Result blocks until the response arrives; without it the test method returns before the request completes.
var response = client.GetAsync($"{AuthAsyncTestUri}&bankSimTime={bankDelay}&databaseSimTime={databaseDelay}").Result;
AuthAsyncTestUri is the endpoint for my cloud-hosted service.
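For reference, the service side described in the question presumably looks something like this minimal ASP.NET Web API 2 sketch. The controller name and route are assumptions; only the bankSimTime and databaseSimTime parameters come from the snippet above:

// requires using System.Threading.Tasks; and using System.Web.Http;
public class AuthAsyncTestController : ApiController
{
    [HttpGet]
    public async Task<IHttpActionResult> Get(int bankSimTime = 0, int databaseSimTime = 0)
    {
        // Task.Delay yields the request thread while "working", so the service itself
        // should cope well with many concurrent requests.
        await Task.Delay(bankSimTime + databaseSimTime);
        return Ok();
    }
}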
There are several delay(), sleep(), pause(), etc. methods available to a process. These methods cause the thread (or possibly the program or process, for some of them) to pause execution. Calling them from code used in a load test is not recommended; see the bottom of page 187 of the Visual Studio Performance Testing Quick Reference Guide (Version 3.6).
Visual Studio load tests do not have one thread per virtual user. Each operating system thread runs many virtual users. On a four-core computer I have seen a load test using four threads for the virtual users.
Suppose a load test is running on a four-core computer and Visual Studio starts four threads to execute the test cases. Suppose one virtual user calls sleep() or similar. That will suspend that thread, leaving three threads available to execute other virtual user activity. Suppose that four virtual users call sleep() or similar at approximately the same time. That will stop all four threads and no virtual users will be able to execute.
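The effect is easy to demonstrate outside the load test engine. The toy program below is entirely illustrative (it is not the real engine): 8 "virtual users" share a pool limited to 4 "engine threads", and because each user blocks its thread with a sleep, the second batch of users cannot start until the first batch finishes:

// Toy model only: a SemaphoreSlim stands in for the engine's limited thread supply.
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;

class BlockingSleepDemo
{
    static void Main()
    {
        var engineThreads = new SemaphoreSlim(4);       // pretend the engine has 4 threads
        var sw = Stopwatch.StartNew();
        var users = Enumerable.Range(0, 8).Select(_ => new Thread(() =>
        {
            engineThreads.Wait();                       // claim an "engine thread"
            try { Thread.Sleep(500); }                  // the "test" just sleeps, blocking it
            finally { engineThreads.Release(); }
        })).ToArray();
        foreach (var t in users) t.Start();
        foreach (var t in users) t.Join();
        // Roughly 1000 ms, not 500 ms: users 5-8 had to wait for a free "engine thread".
        Console.WriteLine($"8 users on 4 threads took {sw.ElapsedMilliseconds} ms");
    }
}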
Responding to the following comment that was added to the question
I did try running it with a 5 user load, and saw average test times of less than 500 ms, which match what I see in my Fiddler requests. I'm still trying to figure out why the time goes up dramatically for the 500 user test while staying the same for Fiddler requests run in the middle of the 500 user test.
I think that this comment highlights the problem. At a low user load, the Visual Studio load test and the Fiddler test give similar times. At higher loads something between the load test and the server is limiting throughput and causing the slowdown. It would be worth examining the network route between the computer running the tests and the system being tested. Are there any slow segments on that path? Are there any segments that might see the load test as a denial of service attack and hence might slow down the traffic?
Running a test for as little as 2 minutes does not really show how the test runs. The details in the question do not tell how many tests started, how many finished, and how many were abandoned at the end of the two-minute run. It is possible that many test cases were abandoned and that the average time of those that completed was 1.6 seconds.
If you have the results of the problem run then look at the "details" section of the results. Expand the slider below the image to include the whole run. Tick the option (top left corner) to highlight failing tests. I would expect to see a lot of red at the two minute mark for failing tests. However, the two minute run may be too short compared to the sampling interval (in the run settings) to see much.
Running a first test at 500 users tells you very little. It tells you either that the system copes with that load or that it does not. You need to run the test at several different user loads. Then you start to learn where the boundary between working and not working lies. Hence I recommend using a stepped load.
I believe you need at least one more test run to understand what is happening. I suggest doing a run as follows. Set a one minute cool-down period. Set a stepped load: start at 5 users as you know that that works. Increment by 1 user every two seconds until 100 users. That will take 190 seconds. Run for about another minute at that 100 user load. Total of 4 minutes 10 seconds. Call it 4 minutes. Adding in the one minute cool down makes (5 minutes) x (100 VU) = 500 VUM, which is a small portion of the free minutes per month. After the run look at the graphs of average test times. If all is OK on that test then you could try another that ramps up more quickly to say 500 users.
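In .loadtest terms, the suggested run might be sketched as below. Again, the attribute names are my recollection of a typical .loadtest file and should be checked against one of your own; the user counts and timings come from the suggestion above:

<!-- start at 5 users, add 1 user every 2 seconds, up to 100 users -->
<LoadProfile Pattern="Step" InitialUsers="5" MaxUsers="100"
             StepUserCount="1" StepDuration="2" StepRampTime="0" />
<!-- run settings: about 4 minutes of load plus a 1 minute cool-down (times in seconds) -->
<RunConfiguration Name="Run Settings1" RunDuration="240" CoolDownTime="60" />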

Perform load testing, stress testing, capacity testing with JMeter

Hi, I am new to JMeter and I know how to perform load tests with it. I am trying to figure out how a stress test or a capacity test is performed via JMeter. Is it by gradually increasing the number of threads that we determine when performance starts to degrade, find that threshold, and then run tests above it? Does that make it a stress test?
I am confused about how to perform a stress test and a capacity test with the JMeter tool.
JMeter is very flexible, and a load scenario can be established in multiple ways. Out of the box, the following test elements are available:
Thread Group - where you can set:
Number of virtual users
Ramp-up time
Iteration count
JMeter acts as follows: samplers are executed top to bottom, with each thread representing one virtual user. When a thread has no more samplers to execute and no more iterations, it is shut down. As for ramp-up: with the default settings JMeter tries to kick off all the threads as fast as it can, but you can configure it to simulate an increasing load. I.e. if you have 30 users and a 30-second ramp-up time, JMeter will start with 1 user and add one per second (see the sketch below).
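For orientation, this is roughly how those Thread Group settings appear when the test plan is saved as a .jmx file. The property names are as I recall them from saved plans; verify against a plan saved by your own JMeter version:

<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
  <stringProp name="ThreadGroup.num_threads">30</stringProp>  <!-- 30 virtual users -->
  <stringProp name="ThreadGroup.ramp_time">30</stringProp>    <!-- 30 s ramp-up: +1 user per second -->
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController"
               guiclass="LoopControlPanel" testclass="LoopController">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <stringProp name="LoopController.loops">10</stringProp>   <!-- 10 iterations per user -->
  </elementProp>
</ThreadGroup>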
Constant Throughput Timer
Constant Throughput Timer can be used to set exact load in "Requests per minute".
Synchronizing Timer
Synchronizing Timer pauses test threads until the specified threshold is reached. Once there are enough threads in the pool, JMeter releases them all at the same moment, providing a simultaneous "spike" load.
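Sketches of how these two timers look in a saved .jmx file follow; as before, the element and property names are from memory, so double-check them against a plan saved by your own JMeter:

<!-- Constant Throughput Timer: hold the samplers to 600 requests per minute -->
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer">
  <doubleProp>
    <name>throughput</name>
    <value>600.0</value>
  </doubleProp>
  <intProp name="calcMode">0</intProp>  <!-- 0 = this thread only -->
</ConstantThroughputTimer>
<!-- Synchronizing Timer: release threads in batches of 100 for a spike -->
<SyncTimer guiclass="TestBeanGUI" testclass="SyncTimer" testname="Synchronizing Timer">
  <intProp name="groupSize">100</intProp>
  <longProp name="timeoutInMs">0</longProp>
</SyncTimer>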
You can also use e.g. the Ultimate Thread Group, available via JMeter Plugins, which provides an easy and quick way of defining a load scenario like:
Start with N users
Ramp up over S seconds
Hold the load for L seconds
Shut down the test threads over T seconds
Hope this helps.
First of all, both load tests and stress tests can help you determine the capacity of the system.
In order to perform a load test, use the "Thread Group" available in JMeter.
http://jmeter.apache.org/usermanual/test_plan.html
While doing a load test you will have to increase the user load gradually, after each iteration has been executed completely. E.g. if you want to execute the load test for 100, 200, 300, ..., 1000 users, then in the first iteration keep "No. of Threads" at 100, run the test and save the results, then change the value of "No. of Threads" to 200, and so on.
In order to perform a stress test, use the "jp@gc - Stepping Thread Group":
http://testingfreak.com/tools/jmeter/stepping-thread-group/
hope this will help.

Specify test end condition in Visual Studio Load Test

I'm testing a large BizTalk system using Visual Studio Load Test. The load test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counters ending) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end if and only if some condition is met (in my case, when (SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool) == 4).
I can see a bunch of ways to run stuff after the test is complete, but no obvious way to extend the test and continue monitoring unless some user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
I imagine you want to keep the load test running after queueing up a bunch of requests in order to continue to monitor the performance while the requests are processed. Although we can't control the load test duration at run time, there is a way to achieve this.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new webtest in a new scenario and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a webtest standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort() (see the sketch below).
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
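A minimal sketch of the two plugins described above follows. Note one substitution: the original answer passes the flag through $LoadTestUserContext, whereas this sketch uses a simpler static flag, which assumes the web tests and the load test plugin run in the same process (true for a single-machine run, not across remote agents). All class, method, and flag names here are hypothetical; the Microsoft.VisualStudio.TestTools.LoadTesting types and events are real.

using Microsoft.VisualStudio.TestTools.LoadTesting;
using Microsoft.VisualStudio.TestTools.WebTesting;

public static class EndCondition
{
    // Set by the web test plugin, read by the load test plugin (hypothetical name).
    public static volatile bool Reached;
}

// Attach to the polling web test in its own scenario (the "Check for end condition" step).
public class CheckEndConditionPlugin : WebTestPlugin
{
    public override void PostWebTest(object sender, PostWebTestEventArgs e)
    {
        // SpoolCountEquals is a placeholder for the real database query,
        // e.g. SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool
        if (SpoolCountEquals(4))
        {
            EndCondition.Reached = true;
        }
    }

    private static bool SpoolCountEquals(int target)
    {
        return false; // replace with the actual query
    }
}

// Add to the load test's plugins (the "Abort the test" step): abort once the flag is set.
public class AbortWhenDonePlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // Heartbeat fires roughly once a second while the load test runs.
        loadTest.Heartbeat += (sender, args) =>
        {
            if (EndCondition.Reached)
            {
                loadTest.Abort(); // marks the run as Aborted in the results database
            }
        };
    }
}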
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by locking the close-down period of the test and preventing it from exiting that way.
The only way we managed to do something like this was to directly manipulate the LoadTest database and inject performance counter data in afterwards from log files, but this is neither smart nor recommended.
Oh well..

Resources