How to run parallel tests using NUnit in Visual Studio

I am trying to get parallel tests running with NUnit and C# in Visual Studio, using the NUnit3TestAdapter and a .runsettings file. It's basically this sample from Sauce Labs that I am unable to run in parallel with the NUnit3TestAdapter, instead of using the console runner.
According to the NUnit documentation (https://github.com/nunit/docs/wiki/Tips-And-Tricks), it should be sufficient to have a .runsettings file in the project and configure the number of workers there.
I tried with this, but the tests are still running sequentially.
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <NUnit>
    <NumberOfTestWorkers>4</NumberOfTestWorkers>
  </NUnit>
</RunSettings>
Any thoughts on what is wrong?

The "tips and tricks" entry is info for the user of the adapter. It tells you what you might need to do to configure parallelism of your tests under the adapter in addition to the general steps that are involved in the nunit framework itself and assumes you have read all the docs regarding parallel execution.
By default, the nunit framework runs nothing in parallel. The ParallelizableAttribute is available to tell it to run a particular test method or fixture in parallel. It can even be used at the assembly level if you are quite sure that all your tests are able to run in parallel.
That last point deserves repeating with emphasis! When you apply [Parallelizable] to a test, you are telling the framework that this particular test is capable of running in parallel - that it will not interfere with any other tests. NUnit assumes you know what you are talking about. It will run that test on a separate thread along with other parallelizable tests even if that causes problems due to the way the tests are written.
For that reason, you should pick a few tests first and mark them as parallelizable. The main thing that makes a test impossible to run in parallel is that it shares non-readonly state with other tests.
The info you provide in the .runsettings file is completely optional. NUnit will use a default value for the number of worker threads if you don't specify it. The main thing is the use of the attribute.
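For illustration, here is a minimal sketch of how the attribute is applied (the fixture and test names are hypothetical, and the assembly-level line is optional):
using NUnit.Framework;

// Optional: opt every fixture in the assembly into parallel execution.
// [assembly: Parallelizable(ParallelScope.Fixtures)]

[TestFixture]
[Parallelizable(ParallelScope.All)] // this fixture and its tests may run in parallel
public class CheckoutTests
{
    [Test]
    public void AddsItemToCart()
    {
        // ... test body; must not share mutable state with other tests ...
    }
}

[TestFixture]
[Parallelizable] // defaults to ParallelScope.Self: may run alongside other fixtures
public class SearchTests
{
    [Test]
    public void FindsProduct()
    {
        // ... test body ...
    }
}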

Related

Running Load Tests in VSTS

I've been trying to get JMeter load tests to run in VSTS, so far without success. I've been back and forth (very slowly!) with the Microsoft support team about this, but while the issues are being ironed out I would like to at least run a small set of load tests on our build machine using JMeter and then have the results uploaded somehow to VSTS so they are easier to track. I have part 1 of this working: from the VSTS release definition I run a batch file that runs the load tests locally and then generates an aggregate spreadsheet with results.
The question is - how can I get those results loaded into VSTS?
In our case we had to export the results to XML using the jmeter.test.xmloutput configuration. Then we had a script to transform the XML into a proper xUnit result file, and we finally used a Publish Test Results task to gather this file and add the results to the release (this approach would work with build definitions too).
It's a little bit complicated, requires some scripting, and would surely be easier if a dedicated task were available.
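For a rough idea of what the transform step could look like, here is a minimal C# sketch. The file names and the JUnit-style output shape are assumptions (the Publish Test Results task accepts several formats, including JUnit and xUnit); the httpSample attributes read here ("lb" for label, "t" for elapsed milliseconds, "s" for success) are from JMeter's XML result layout.
using System;
using System.Linq;
using System.Xml.Linq;

class JtlToJUnit
{
    static void Main()
    {
        // JMeter's XML output: one <httpSample> (or <sample>) element per request.
        XDocument jtl = XDocument.Load("results.jtl");

        var cases = jtl.Descendants()
            .Where(e => e.Name == "httpSample" || e.Name == "sample")
            .Select(s => new XElement("testcase",
                new XAttribute("name", (string)s.Attribute("lb") ?? "unnamed"),
                // JMeter records elapsed time "t" in milliseconds; report seconds.
                new XAttribute("time", ((double?)s.Attribute("t") ?? 0) / 1000.0),
                // s="false" means the sampler failed; record a <failure> node.
                (string)s.Attribute("s") == "false"
                    ? new XElement("failure", "Sampler reported failure")
                    : null))
            .ToList();

        var suite = new XElement("testsuite",
            new XAttribute("name", "JMeter load test"),
            new XAttribute("tests", cases.Count),
            new XAttribute("failures", cases.Count(c => c.Element("failure") != null)),
            cases);

        new XDocument(suite).Save("junit-results.xml");
        Console.WriteLine("Wrote junit-results.xml");
    }
}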

Running Visual Studio Load Test From Build Definition

I created a build definition that runs automated tests using MTM build environments and test suites. I recently created a Visual Studio Load Test, which can be added to a test suite just like any test method marked with the [TestMethod] attribute. However, when I run the build, I get no errors and it appears the aggregate tests don't run. Is there a way to make this work?
I found this article: https://blogs.msdn.microsoft.com/testingspot/2013/01/22/how-to-automatically-run-a-load-test-as-part-of-a-build/ which describes a way to do it, but I can't find a build template that matches what he describes, and it appears this only allows you to run a single load test.
Also, when you configure a test controller, there is an option to configure it for load testing, but to do this, you must unregister it from the Team Project Collection. If this is done, it appears the controller can no longer be used in an environment to run the project's automated tests. This defeats the purpose of what I want to do and makes it seem that load tests and team projects are mutually exclusive. Is this the case? If so, this is a big oversight. Load tests are the kind of thing you would like to run automatically. Thanks for the help.
You are unfortunately right. A test controller used for load testing cannot be used for other automated test execution 'at the same time'. In your scenario I would recommend that you set up a separate test controller and agent for load testing; you would then be able to queue it as part of your build to achieve what you are looking for.
There is no special build process template for this case.

How to run tests in different environments?

We have a web application. We want to run the same test across multiple environments to ensure everything is still working correctly.
UAT -> STAGING -> PRODUCTION
We want to run these tests after each deployment to each environment. Each environment has a different URL. I have created three test plans in MTM. I have added test cases for only the UAT environment, and I have created an environment in Lab Center. By the way, I have recorded the test cases as Coded UI tests and associated them for automated testing (only for the UAT environment). How can I test the other environments? How can I achieve this without changing the recording or code every time? Thanks.
If you generated the tests using the default Test Builder, you can try writing something like this in your [CodedUITest] class:
[TestInitialize()]
public void MyTestInitialize()
{
    // The URL could be read from a config file instead of being hard-coded.
    string url = "http://stackoverflow.com/";
    this.UIMap.RecordedMethodXXParams.XXChWindowUrl = url;
}
Where RecordedMethodXXParams and XXChWindowUrl are auto generated. You can check the generated names in the UIMap class.
This is a late answer, but just in case it helps readers.
You do not need to create multiple test plans or test suites in MTM for this. What you need is for the builds to be smart enough to choose the right config based on the target environment. As Ciaran suggested, you could use an XML config that holds the details of every environment and then write some filtering code to pick out the details for the target environment, but maintainability could become a bit of a pain. Ideally you would have a single XML layout for app.config that loads different values for each target environment, i.e. the XML in app.config is transformed based on the target environment.
SlowCheetah does exactly that for you. A bit of reading and understanding is required to implement this.
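As an example, a transform file for a hypothetical "Staging" build configuration might look like this (the appSettings key name and URL are made-up placeholders; SlowCheetah applies the transform at build time):
<!-- App.Staging.config: XDT transform applied for the "Staging" configuration -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="TargetUrl" value="https://staging.example.com/"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>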
After you have all the transforms in place, use the Configuration Manager in Visual Studio to describe all the target environments. You can find it in the dropdown next to the green start/run button in Visual Studio.
Create a separate CI build (i.e. trigger = check-in) of the test code (i.e. the Coded UI tests project) targeting each test environment, using the Process > Build > Configurations section of the build definition.
Create a lab test run build (i.e. one using LabDefaultTemplate) for each target environment that uses the same test suite from Test Manager. Make sure that each of these builds maps to the corresponding CI build in the build section of the process workflow wizard.
Queue all the builds and you'll have them running together in all the environments simultaneously, each of them smartly picking up the right configs.
You will probably need to edit the Coded UI tests to change the browser URL that gets launched when the tests run. When I ran automated Coded UI tests on different browsers, I made the tests read from an XML configuration file on each test environment at startup to get the correct browser URL (and any other relevant configuration data). So, in other words, you will need at least a little bit of code to handle the different URLs or any other configuration data for each test environment.
For actually running the tests on remote environments, you should download the Microsoft Test Controller and Test Agents (Download link). And here's the documentation for installing and configuring the agents.
The idea is that your main machine (perhaps the main build/test machine) has the test controller installed, and the test controller remotely connects to the test agents which are installed on your test environment and launches the automated Coded UI tests.
Microsoft Test Manager also has command-line options so that you can schedule automated tests (e.g. you could run a script from the Windows task scheduler).
I can't remember the exact details of implementing these, but hopefully this at least points you in the right direction so that you can research these things further.
There are plenty of nuances with automating tests using test agents, so I would prepare to invest a fair amount of time in this.
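As a rough illustration of the command-line route mentioned above, a tcm.exe call along these lines can queue an automated run (all IDs, the collection URL, and the project name are placeholders; check the tcm documentation for the exact options in your version):
tcm run /create /title:"Nightly Coded UI run" /planid:1 /suiteid:2 /configid:3 /collection:http://yourserver:8080/tfs/DefaultCollection /teamproject:YourProject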
UPDATE:
It's been a long time since I've worked with test automation, so I don't remember the details of my implementation, but as far as I remember, I had an XML configuration file stored on the test environment (e.g. C:\MyTestConfig\config.xml) that held values for various configuration options, the important one being the URL I want to launch, e.g.
<browserUrl>http://localhost:1659/whatever</browserUrl>
Then I had a class in the test project which, on instantiation, would load the configuration XML file (stored in the same place on each test environment) and read the values. It's been a long time since I did this, so I can't remember my exact implementation, but there is plenty of documentation on the web for reading XML in C#/.NET.
My test classes then inherited from the class that reads the configuration values, and the test setup methods in the test classes launched the browser with the browser URL from the XML file and started the tests. If you don't know how to create test setup methods, look at the documentation for the test framework you are using (most likely the Visual Studio unit testing framework, as this is the default for Coded UI tests).
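A minimal sketch of that setup, assuming the <browserUrl> element shown above and a fixed config path (the class, file path, and test names are illustrative):
using System;
using System.Linq;
using System.Xml.Linq;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class ConfiguredTestBase
{
    // The config file sits at the same path on every test environment.
    protected static readonly string BrowserUrl =
        XDocument.Load(@"C:\MyTestConfig\config.xml")
                 .Descendants("browserUrl").First().Value;
}

[CodedUITest]
public class MyWebTests : ConfiguredTestBase
{
    [TestInitialize]
    public void LaunchSite()
    {
        // Launch the environment-specific URL before each Coded UI test runs.
        BrowserWindow.Launch(new Uri(BrowserUrl));
    }

    // ... recorded Coded UI test methods go here ...
}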

Is there a way to disable/ignore a Load Test in Visual Studio 2010 without using Test Lists?

I'm new to load testing in Visual Studio/MSTest, and I created a new Load Test recently to validate some high-traffic scenarios for a WCF service. I want to add this to the tests project for the service, but I don't want the test to be executed whenever I "Run All Tests in Solution" nor as part of our Continuous Integration build-verification process because a) it takes 5 minutes to run, and b) the service call that it is testing generates many thousands of email messages. Basically, I'd like to do the equivalent of adding the [Ignore] attribute to a unit test so that the load test is only executed when I explicitly choose to run it.
This MSDN article ("How to: Disable and Enable Tests") suggests that the only way to disable the test is to use test lists (.vsmdi files), but I don't have much experience with them, they seem like a hassle to manage, I don't want to have to modify our CI build definition, and this blog post says that test lists are deprecated in VS2012. Any other ideas?
Edit: I accepted Mauricio's answer, which was to put the load tests into a separate project and maintain separate solutions, one with the load tests and one without. This enables you to run the (faster-running) unit tests during development and also include the (slower-running) load tests during build verification without using test lists.
This should not be an issue for your CI Build Definition. Why?
To run unit tests as part of your build process you need to configure the build definition to point to a test container (usually a .dll file containing your test classes and methods). Load tests do not work this way; they are defined within .loadtest files (which are just XML files) that are consumed by the MSTest engine.
If you do not make any further changes to your CI Build definition the load test will be ignored.
If you want to run the test as part of a build, then you need to configure the build definition to use the .loadtest file.
Stay away from test lists. Like you said, they are deprecated in VS11 (Visual Studio 2012).
Edit: The simplest way to avoid running the load test as part of Visual Studio "Run All" tests is to create a different solution for your load tests.
Why don't you want to use test lists? I think that is the best way to do this. Create a different test list for each test type (unit tests, load tests, ...) and then in your MSTest command run the test list(s) you want:
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests (only UnitTests)
MSTest /testmetadata:testlists.vsmdi /testlist:LoadTests (only LoadTests)
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests /testlist:LoadTests (UnitTests & LoadTests)

Find All Tests Not in a List

I have all the tests for my web application (written with the Visual Studio test framework -- Microsoft.Quality DLLs) divided into several (currently two) ordered tests. Is there an easy way to find all the tests that are not in any list?
(The reason I need to use ordered tests is because the initial tests test that installation/setup/configuration of my application worked, and subsequent tests would fail without that.)
There's no easy way to do this. The best thing to do is switch to a framework that doesn't require every test to be in a list -- I recommend MbUnit. It has a great DependsOn attribute for easily configuring dependencies between tests.
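For instance, a minimal sketch of the dependency feature (the test names are made up; a dependent test is skipped if the test it depends on fails):
using MbUnit.Framework;

[TestFixture]
public class InstallationDependentTests
{
    [Test]
    public void SetupSucceeded()
    {
        // ... verify installation/setup/configuration worked ...
    }

    [Test, DependsOn("SetupSucceeded")]
    public void FeatureWorksAfterSetup()
    {
        // Runs only after SetupSucceeded has passed, so no ordered
        // test list is needed to enforce the ordering.
    }
}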
