Web Performance Test that requires login: How do you make it work in isolation and in load test? - visual-studio-2010

I have a Visual Studio 2010 load test which contains a number of web performance tests. Running the web performance tests requires you to be logged in to the website under test. Accordingly, the load test contains an initialization step: a small web performance test that performs the login and uses a plug-in to cache the cookie obtained. The 'real' web performance tests - the ones that actually do the work - each have a plug-in that reads the cached cookie and adds it to the test, so that each test functions correctly:
public override void PreWebTest(object sender, PreWebTestEventArgs e)
{
    if (CookieCache.Cookies != null) // CookieCache is a static class of mine
        e.WebTest.Context.CookieContainer.Add(CookieCache.Cookies);
}
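For context, the caching side in the login test's plug-in might look something like this (just a sketch; the PostWebTest hook, the placeholder URI and the assumption that CookieCache.Cookies is a CookieCollection are mine, based on the description above):
public override void PostWebTest(object sender, PostWebTestEventArgs e)
{
    // Capture the cookies obtained by the login test into the shared cache.
    // "http://myurl" is a placeholder for the site under test.
    CookieCache.Cookies =
        e.WebTest.Context.CookieContainer.GetCookies(new Uri("http://myurl"));
}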
The problem is that while this all works absolutely fine when I run the load test, it means I can't run any of the web performance tests in isolation because if the load test initializer hasn't run then there's no cookie, so the web performance test won't be logged in and will fail.
Is there any recommended solution for this situation? In other words, if a web performance test needs to have logged in, is there any way to get it to run both in isolation and when it's part of a load test?
The obvious way to run each web performance test in isolation would be to have it call the login test first, but I can't do that because that'll be incorrect behaviour for the load test (where logging in should happen only once per user, right at the beginning of the load test).

The solution is to add the Login test to your individual web performance tests (via "Insert Call to Web Test"), but gated by a "Context Parameter Exists" Conditional Rule that looks for the absence of the context parameter $LoadTestUserContext. That parameter only exists if the web test is running in a load test.
This way you get just one Login whether in or outside of a load test.
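The same detection can be done in code if you'd rather not use the editor's conditional rule: $LoadTestUserContext is only added to the context when the test is running inside a load test, so a PreWebTest plug-in can branch on it. A rough sketch, reusing the CookieCache class from the question (LoginHelper is a hypothetical helper that performs the login and returns the cookies; it is not part of the question's code):
public override void PreWebTest(object sender, PreWebTestEventArgs e)
{
    // $LoadTestUserContext only exists when the web test runs inside a load test.
    bool inLoadTest = e.WebTest.Context.ContainsKey("$LoadTestUserContext");

    if (!inLoadTest && CookieCache.Cookies == null)
    {
        // Running in isolation: no load test initializer has run, so log in here.
        CookieCache.Cookies = LoginHelper.GetAuthCookies();
    }

    if (CookieCache.Cookies != null)
        e.WebTest.Context.CookieContainer.Add(CookieCache.Cookies);
}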

Why not try using the PreRequest function instead of the PreWebTest function?
Public Overrides Sub PreRequest(sender As Object, e As PreRequestEventArgs)
    MyBase.PreRequest(sender, e)
    ' Build the cookie from whatever your login step cached (name, value, path, domain).
    Dim cookie As System.Net.Cookie = New System.Net.Cookie(...)
    e.Request.Cookies.Add(cookie)
End Sub
That way both the Load test and the Web Test will work.
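The same idea in C#, to match the plug-in in the question (a sketch only; the cookie name, value, path and domain are placeholders for whatever your login step cached):
public class AddAuthCookiePlugin : WebTestPlugin
{
    public override void PreRequest(object sender, PreRequestEventArgs e)
    {
        // Placeholders: substitute the cookie captured by your login step.
        var cookie = new System.Net.Cookie("AuthCookieName", "cached-value", "/", "myserver");
        e.Request.Cookies.Add(cookie);
    }
}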

I'm not familiar with Visual Studio 2010 Load Testing, but it sounds like you need the equivalent of NUnit's SetUp and TearDown methods which run once for all tests, whether you have selected a single test or all the tests in an assembly.
A bit of searching implies that the equivalent is the Init and Term tests.
1) Right click on a scenario node in the load test and select Edit Test Mix...
2) In the Edit Test Mix dialog, look at the bottom of the form. You will see two check boxes: one for an init test and one for a term test.
The init test will run prior to each user and the term test will run when the user completes. To make sure the term test runs, you also need to set the cool-down time for the load test. The cool-down time is a property on the run settings node. This setting gives tests a chance to finish when the duration completes. You can set it to 5 minutes. The cool-down period does not necessarily run for 5 minutes; it ends when all term tests have completed. If that takes 20 seconds, then that is when the load test will complete.

Related

Perform actions before all WebTests in Visual Studio

I want to run some code (to obtain a valid OAuth token) before sending the SOAP requests of my web tests.
I am using Visual Studio.
I want to run the code that obtains the OAuth token before ALL tests, not before each one.
Is that possible?
Thanks!!
Assuming the web tests are being run as part of a load test, you can create a load test plug-in and run some code from the load test starting event.
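A minimal sketch of such a plug-in, assuming a hypothetical AcquireToken() method for the OAuth request and a static TokenCache that the web tests' own plug-ins read the token from:
using Microsoft.VisualStudio.TestTools.LoadTesting;

public class OAuthTokenLoadTestPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        // Runs once, before any web test iterations start.
        loadTest.LoadTestStarting += (sender, e) =>
        {
            TokenCache.Token = AcquireToken(); // hypothetical OAuth call
        };
    }

    private static string AcquireToken()
    {
        // Call your token endpoint here and return the access token.
        return "access-token";
    }
}

public static class TokenCache
{
    public static string Token { get; set; }
}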
Your question reads as if you need to run one web test (possibly it contains just one request) to get the token and then run all the other tests. This can be achieved by using two scenarios in the load test. The first scenario contains the get-token web test, it is set to have one user and one iteration. The second scenario runs all the other web tests, the only change to it is to set its Delay Start Time property to a value that allows the get-token web test to complete.

Web Performance Test in Cloud Load Test sends cookie on some tests

I have a web performance test that begins with a webforms login, executes a few steps and then finishes.
Mostly this runs without errors, but if I extend the load test run beyond 15 minutes I start to get failures caused by some tests sending a session and auth cookie on the initial GET to the root URL.
Clearly the test recording does not have cookies on the initial request. Additionally, I have set the "Percentage of New Users" on the scenarios to 100% to ensure that all tests are running as a new user.
The test is databound to a list of 600 users in a User Pace scenario. Nothing very heavy.
However, I cannot identify why after a period of time (12 minutes) some of the tests begin to send the cookies on the initial request!
Can anyone give me any pointers please?
This is an old question and, on re-reading it, not very clear.
The scenario reflects my lack of knowledge at the time of the web testing features I was using.
I am fairly certain it was caused by a missing "log out" test step, combined with the load test configuration probably re-using connections.
After much prodding around I achieved some clean runs.

Can Specflow scenarios be used with Visual Studio 2013 Load Test?

I plan to reuse existing Specflow scenarios (currently used for acceptance and automated tests) for VS load tests as well, to avoid duplication and extra work. Specflow works fine for those tests since it runs each of them once, but in the context of a load test, where each Specflow scenario is executed more than once and in parallel, it runs into issues and errors, and with a higher number of users it gets worse.
These errors fail some of the tests, which in the end produces an incorrect test result. For instance, using one Specflow scenario with a load test of 20 users over a period of 2 minutes can cause 50 errors similar to the one below. The test result then shows that the particular scenario was executed 200 times, with 150 passed and 50 failed, and the failures were caused by Specflow errors. In the context of a load test this result is wrong, since the test itself has the issues.
Error message:
ScenarioTearDown threw exception. System.NullReferenceException: System.NullReferenceException: Object reference not set to an instance of an object.
TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.HandleBlockSwitch(ScenarioBlock block)
TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.ExecuteStep(StepInstance stepInstance)
TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.Step(StepDefinitionKeyword stepDefinitionKeyword, String keyword, String text, String multilineTextArg, Table tableArg)
TechTalk.SpecFlow.TestRunner.Then(String text, String multilineTextArg, Table tableArg, String keyword)
After some investigation it seems Specflow cannot generate and run the same scenario in parallel, which causes this conflict and fails some tests. I have some doubts about that though, so I am looking for a workaround, or to find out whether I am missing anything, and wondering if Specflow scenarios can be used for load tests at all.
I understand wanting to reuse your tests for load testing (Don't Repeat Yourself), however a load test has a very different purpose than acceptance tests. Load tests should take realistic every day usage scenarios, and throw increasing numbers of users at them. For this reason I would urge you to keep your load tests separate from your acceptance and automated tests. They really are testing different things.
Load tests should test the performance of the application under high usage for every day scenarios, and acceptance and automated tests make sure the application is functioning according to spec.
Load testing is the process of putting demand on a system or device and measuring its response. Load testing is performed to determine a system’s behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation.
Source: Wikipedia: Load testing
An acceptance test is a formal description of the behaviour of a software product, generally expressed as an example or a usage scenario. ... For many Agile teams acceptance tests are the main form of functional specification; sometimes the only formal expression of business requirements. In other cases, they merely complement a specification document resulting from a less specifically Agile technique or formalism, such as use cases or more narrative documents.
Source: Agile Alliance: Acceptance Testing
They are different things, so the tests, and the test frameworks, should be different as well. You aren't really "repeating yourself" by keeping a separate suite for load tests.
As for the technical reason why this is failing? SpecFlow, when run using the normal Visual Studio test runner, was not built to run tests in parallel. There are parallel test runners available, but most are paid software.

Application Tests VS Logic Tests

Since application tests can now be run on the simulator from Xcode, what would the advantage be, apart from possibly a small saving in execution time, of still separating your tests into logic and application tests?
The differentiation as per the Apple docs:
Logic tests. These tests check the correct functionality of your code in a clean-room environment; that is, your code is not run inside an application. Logic tests let you put together very specific test cases to exercise your code at a very granular level (a single method in class) or as part of a workflow (several methods in one or more classes). You can use logic tests to perform stress-testing of your code to ensure that it behaves correctly in extreme situations that are unlikely in a running application. These tests help you produce robust code that works correctly when used in ways that you did not anticipate. Logic tests are iOS Simulator SDK–based; however, the application is not run in iOS Simulator: The code being tested is run during the corresponding target’s build phase.
Application tests. These tests check the functionality of your code in a running application. You can use application tests to ensure that the connections of your user-interface controls (outlets and actions) remain in place, and that your controls and controller objects work correctly with your object model as you work on your application. Because application tests run only on a device, you can also use these tests to perform hardware testing, such as getting the location of the device.
Application tests and logic tests are really used for two different things:
Logic tests/unit tests are used to test very small behavior for one or a few methods, e.g. "Given that I create my object like this, is the value of a certain property what I expect it to be?"
Application tests however are used to test the big picture, e.g. "Do I get the right data in my detail view when I tap on a certain table view cell?"

Visual Studio Load Testing framework vs Console App - strange results

I've been using the Visual Studio Load Testing framework to load test a web service.
If I keep my test simple and use a constant load pattern of 1 user from my local machine, I am able to generate 'x' requests per second.
Alternatively, if I use a console app that runs the same test, making synchronous calls to the web service, the console app generates twice the load I get using the Visual Studio Load Testing framework.
The same is true if I try to scale my load tests to use multiple test agents (8 cores) - the VS framework does not generate anywhere near the amount of load that a console app running multiple instances does.
These are the two different unit tests I am using to generate load:
//Unit test used for load testing
[TestMethod]
public void HappyReturnCase_Test()
{
    HttpWebRequest req = WebRequest.Create("http://myurl") as HttpWebRequest;
    req.Method = "GET";
    req.GetResponse().Close();
}

//Console app version
private static void Main(string[] args)
{
    for (int i = 0; i < 200000; i++)
    {
        HttpWebRequest req = WebRequest.Create("http://myurl") as HttpWebRequest;
        req.Method = "GET";
        req.GetResponse().Close();
    }
}
Can anyone explain to me why I might be seeing this kind of behaviour?
Thanks in advance.
Kevin
There are a few things that come to mind when dealing with load testing in Visual Studio.
What kind of data capture are you using?
Is there instrumentation for code coverage?
You also need to keep in mind that in Visual Studio you're running the code within a test framework that does far more than simply call your code. It analyses the results, checks for exceptions, logs all the data to and from the code being called, generates reports with the captured data...
And all this stuff that comes out of the box, and that we think of as "free", does take its toll on the performance of the test.
While the number of requests per second, as you put it, is lower in VS than in its console app counterpart, you also need to weigh in all the other work the testing framework is doing for you.
With the console application, there's no throttling of the requests--you're just going full-bore from the client. With the VS Load Tests, there are other factors that limit the number of requests (like the total number of iterations).
For example, if you have test iterations enabled, you'll spread them throughout the duration of the load test. Generally, that will bring your test frequency down. If you have 100 test iterations set, you're running your test for an hour, and each test takes 30 seconds, an unpaced run could complete about 120 tests (3600 s / 30 s), but the 100 iterations will be spread evenly throughout the hour, so you'll run roughly 20 fewer tests because of it.
There is also a callback model going on here. The load tests support a load test plugin model and a request plugin model, so the unit test will yield to the load test runner which may be swapping out to a new virtual user; even if the test is set for 1 virtual user, it may not be the same virtual user throughout the test. You'll be reporting and logging, plus you may be starting up a new application host "container" for your unit test, and a few other activities. Even if that's not the case, you're not spending all your time in the context of the unit test.
Even inside the unit test, there are other methods running, like ClassInitialize, TestInitialize, setting timers, etc. Plus, there is a thread pool being used, even if only for one user. See http://blogs.msdn.com/b/billbar/archive/2007/10/12/features-and-behavior-of-load-tests-containing-unit-tests-in-vsts-2008.aspx for some more information on how unit tests are run by the load test runner. Even if you data bound that unit test to run 100 rows of data, it probably wouldn't run as quickly as the loop that you'd written, but it's got the benefit of easily configuring extra work and running multiple unit tests together.
You may want to take a read through of the performance testing quick reference guide at http://vsptqrg.codeplex.com/.
Now, setting the constant load to 1 user doesn't take advantage of any of the benefits of the load rig--you've taken on the overhead of the thread pool without running multiple users. You'd expect to start seeing benefits if you increase the number of users and let the VS load test manage that context switching for you. Another benefit is creating a test mix that you can easily alter, plus collecting the perfmon statistics, applying threshold rules, etc. You're not really doing any of those in the console app.
