I know there is no right or wrong answer to this, but it would be VERY helpful to see how others have structured their test projects, especially for multi-assembly solutions and for managing different test phases: unit, integration, system.
On top of that, it would be helpful to see what structure fits nicely for running test builds on a Continuous Integration server (TeamCity).
Initially I am starting out with:
Test (Namespace)
--Unit (Folder)
----ClassATests
----ClassBTests
--Integration (Folder)
----ClassA+BTests
----DBTests
I've been thinking about this very thing at work today. I've inherited a test project that has been maintained and updated by multiple people in the past, and as such currently contains a rather confusing hierarchy.
In an attempt to simplify matters, I have started out using the following structure:
Tests (Namespace)
-- Infrastructure (Folder)
---- general utility classes (common to all tests)
---- any other config
-- ClassATests (Folder)
---- ClassATestBase (base class for setup of common mock objects etc.)
---- ClassATestMethods (helper methods for the ClassATests)
---- ClassATests (main test class)
-- ClassBTests (Folder)
etc.
I've found this approach useful so far, as it means most of the code that will run during any given test can be found in the same folder. It also avoids the scenario of one huge TestMethods class.
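For illustration, here is a minimal sketch of what the ClassATestBase / ClassATests pairing might look like (the names IDependency, ClassA and the use of Moq are assumptions for the example, not part of the original project):

// Hypothetical base class owning the common setup for ClassA's tests.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq; // assuming Moq for the "common mock objects" mentioned above

public abstract class ClassATestBase
{
    protected Mock<IDependency> Dependency { get; private set; }
    protected ClassA Subject { get; private set; }

    [TestInitialize]
    public void BaseSetup()
    {
        // Common mock objects, recreated for every test.
        Dependency = new Mock<IDependency>();
        Subject = new ClassA(Dependency.Object);
    }
}

[TestClass]
public class ClassATests : ClassATestBase
{
    [TestMethod]
    public void DoWork_CallsDependency()
    {
        Subject.DoWork();
        Dependency.Verify(d => d.Execute(), Times.Once());
    }
}

MSTest runs the [TestInitialize] method declared on the base class before each test in the derived class, so the shared setup stays in one place.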
This may not be the most elegant of solutions (sorry, no pun intended!), but it's working for me at present. Any and all suggestions are most welcome though!
I keep my unit tests and integration tests in separate assemblies (x.Tests.dll, y.IntegrationTests.dll) in order to be able to easily find test assemblies to run during the build process. I can then just locate *.Tests.dll and run them as part of a daily build. Integration tests are run manually in specific environments, but can still be run from a simple build script.
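As a rough illustration of that discovery step (the paths and names are assumed), a small helper in the build can glob for unit test assemblies; note that y.IntegrationTests.dll does not end in ".Tests.dll", so the naming convention alone keeps integration tests out of the daily run:

// Hypothetical build helper locating unit test assemblies by convention.
using System;
using System.IO;

class FindTestAssemblies
{
    static void Main()
    {
        string outputDir = @"build\output"; // assumed build output folder
        foreach (string assembly in Directory.GetFiles(
            outputDir, "*.Tests.dll", SearchOption.AllDirectories))
        {
            Console.WriteLine(assembly); // feed each of these to the test runner
        }
    }
}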
Apart from that, TestClass-per-Class is pretty much the rule I've followed with the exception being small helper classes that are all tested from a single HelperTests fixture.
I created a build definition that runs automated tests using MTM build environments and test suites. I recently created a Visual Studio Load Test, which can be added to a test suite just like any test method marked with the [TestMethod] attribute. However, when I run the build, I get no errors and it appears the aggregate tests don't run. Is there a way to make this work?
I found this article: https://blogs.msdn.microsoft.com/testingspot/2013/01/22/how-to-automatically-run-a-load-test-as-part-of-a-build/ which describes a way to do it, but I can't find a build template that matches what he describes, and it appears this only allows you to run a single load test.
Also, when you configure a test controller, there is an option to configure it for load testing, but to do this you must unregister it from the Team Project Collection. If this is done, it appears the controller can no longer be used in an environment to run project automated tests. This defeats the purpose of what I want to do and makes it seem that load tests and team projects are mutually exclusive. Is this the case? If so, this is a big oversight; load tests are the kind of thing you would like to run automatically. Thanks for the help.
You are unfortunately right: a test controller used for load testing cannot be used for other automated test execution at the same time. In your scenario I would recommend that you set up a separate test controller and agent for load testing; you would then be able to queue it as part of your build to achieve what you are looking for.
There is no special build process template for this case.
Let's say I have three projects in my solution:
1. An ASP.NET project that simply prints an output.
2. A PHP project (using VS.Php) that simply prints the same output as the ASP.NET project, just in a different environment.
3. A C# console project that uses the above two projects as servers and parses their responses.
Now I want to add another project named "Test" and fill it with unit tests, mainly for testing the integrity of the solution.
I am new to unit tests, but my main problem here is not about them. It is this simple question: how can I run the first two projects (using the VS.Php web server for PHP and IIS Express for the ASP.NET project, one at a time) before performing my tests? I can't test the third project without having one of the first two active, and as a result I can't check the integrity of my project, or even parts of it.
So, do you have any suggestions? Am I wrong about something here? Maybe I just don't understand something.
Using Visual Studio 2013 Update 3
Usually for unit testing you don't connect live systems together with your tests; that would be called integration testing instead. The rule I usually apply to unit tests is that they need to a) always be fast and b) be runnable without network connectivity.
If you want to do unit testing, the easiest way is to create interfaces around your dependent systems, say IAspNetProject and IPhpProject (pick names that fit your domain rather than these). Code against those interfaces and then replace their implementations with fake data for unit testing.
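For example, a minimal sketch of that pattern (all type names here are illustrative):

// Hypothetical interface around one of the dependent web projects.
public interface IAspNetProject
{
    string GetOutput();
}

// Fake used by unit tests: no server, no network, always fast.
public class FakeAspNetProject : IAspNetProject
{
    public string GetOutput()
    {
        return "expected output";
    }
}

// The console project codes against the interface, not a live server.
public class ResponseParser
{
    private readonly IAspNetProject server;

    public ResponseParser(IAspNetProject server)
    {
        this.server = server;
    }

    public bool OutputIsValid()
    {
        return server.GetOutput() == "expected output";
    }
}

A unit test can then construct new ResponseParser(new FakeAspNetProject()) and assert on OutputIsValid() without either web project running.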
If you want to do integration testing, then you can use something like http://nancyfx.org/ to create a self-hosted web project. There are tons of other options for starting a lightweight web app locally to test against.
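With Nancy's self-hosting package, for instance, a stub server can be as small as the sketch below (the port and response are placeholders; this uses the classic Nancy 1.x module syntax):

// Self-hosted stub endpoint for integration tests.
using System;
using Nancy;
using Nancy.Hosting.Self;

public class StubModule : NancyModule
{
    public StubModule()
    {
        // Serve the same output the real ASP.NET/PHP project would return.
        Get["/"] = _ => "expected output";
    }
}

class Program
{
    static void Main()
    {
        using (var host = new NancyHost(new Uri("http://localhost:8888")))
        {
            host.Start();
            // Integration tests can now hit http://localhost:8888/
            Console.ReadLine();
        }
    }
}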
We have a web application. We want to run the same test across multiple environments to ensure everything is still working correctly.
UAT -> STAGING -> PRODUCTION
We want to run these tests after each deploy to each environment. Each environment has a different URL. I have created three test plans in MTM, added test cases for the UAT environment only, and created an environment in Lab Center. By the way, I have recorded the test cases with Coded UI tests and associated them for automated testing (only for the UAT environment). How can I test the other environments without changing the recording or code every time? Thanks,
If you generated the tests using the default Test Builder, you can try writing something like this in your [CodedUITest] class:
[TestInitialize()]
public void MyTestInitialize()
{
    // The URL could be read from a config file instead of hard-coded here.
    string url = "http://stackoverflow.com/";
    this.UIMap.RecordedMethodXXParams.XXChWindowUrl = url;
}
Where RecordedMethodXXParams and XXChWindowUrl are auto generated. You can check the generated names in the UIMap class.
This is a very late answer, but just in case it helps readers:
You do not need to create multiple test plans or test suites in MTM for this. What you need is for the builds to be smart enough to choose the right config based on the target environment. As Ciaran suggested, you could use XML configs that hold the details of each environment and then write some filtering code to pick out the details for the target environment, but maintainability could become a bit of a pain. Ideally you would have one XML layout for app.config that loads different values per target environment, i.e. the XML in app.config is transformed based on the target environment.
SlowCheetah does exactly that for you. A bit of reading and understanding is required to implement it.
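Once the transforms are in place, the test code itself just reads the setting and never branches on environment. A minimal sketch, assuming an appSettings key named "BaseUrl" that each transform (e.g. App.UAT.config) supplies with a different value:

// The value differs per build configuration because SlowCheetah transformed
// app.config at build time; no environment logic lives in the code.
using System.Configuration;

public static class TestSettings
{
    public static string BaseUrl
    {
        get { return ConfigurationManager.AppSettings["BaseUrl"]; }
    }
}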
After you have all the transforms in place:
1. Use the Configuration Manager in Visual Studio to describe all the target environments. You can find it in the dropdown next to the green start/run button.
2. Create a separate CI build (i.e. trigger = check-in) of the test code (the Coded UI tests project) targeting each test environment, using the Process > Build > Configurations section of the build definition.
3. Create a lab test run build (one using LabDefaultTemplate) for each target environment that uses the same test suite from Test Manager. Make sure each of these builds maps to the corresponding CI build in the build section of the process workflow wizard.
Queue all the builds and you'll have them running together in all the environments simultaneously, each smartly picking up the right configs.
You will probably need to edit the Coded UI tests to change the browser URL which gets launched when the tests run. When I ran automated Coded UI tests on different browsers, I made the tests read from an XML configuration file on each test environment at startup to get the correct browser URL (and any other relevant configuration data). In other words, you will need at least a little bit of code to handle the different URLs or other configuration data for each test environment.
For actually running the tests on remote environments, you should download the Microsoft Test Controller and Test Agents (Download link). And here's the documentation for installing and configuring the agents.
The idea is that your main machine (perhaps the main build/test machine) has the test controller installed, and the test controller remotely connects to the test agents which are installed on your test environment and launches the automated Coded UI tests.
Microsoft Test Manager also has command-line options so that you can schedule automated tests (e.g. you could run a script from the Windows task scheduler).
I can't remember the exact details of implementing these, but hopefully this will at least point you in the right direction so that you can research these things further.
There are plenty of nuances with automating tests using test agents, so I would prepare to invest a fair amount of time in this.
UPDATE:
It's been a long time since I've worked with test automation so I don't remember the details of my implementation, but as far as I remember I had an XML configuration file stored on each test environment (e.g. C:\MyTestConfig\config.xml) that held values for various configuration options, the important one being the URL I wanted to launch, e.g.
<browserUrl>http://localhost:1659/whatever</browserUrl>
Then I had a class in the test project which, on instantiation, would load the configuration XML file (stored in the same place on each test environment) and read the values. There is plenty of documentation on the web for reading XML in C# .NET.
My test classes then inherited from the class which reads the configuration values, and the test setup methods would launch the browser with the browser URL from the XML file and start the tests. If you don't know how to create test setup methods, look at the documentation for the test framework you are using (most likely the Visual Studio unit testing framework, as this is used by default with Coded UI tests).
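Pieced together, the pattern might look something like the sketch below (the class names and the XML element are assumptions based on the description above):

// Base class that reads environment-specific settings from a fixed path.
using System;
using System.Xml.Linq;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class ConfiguredTestBase
{
    protected string BrowserUrl { get; private set; }

    protected void LoadConfig()
    {
        // Same path on every test environment, different contents per environment.
        var doc = XDocument.Load(@"C:\MyTestConfig\config.xml");
        BrowserUrl = doc.Root.Element("browserUrl").Value;
    }
}

[CodedUITest]
public class HomePageTests : ConfiguredTestBase
{
    [TestInitialize]
    public void Setup()
    {
        LoadConfig();
        // Launch the browser against whichever URL this environment's config holds.
        BrowserWindow.Launch(new Uri(BrowserUrl));
    }
}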
I wonder if I could get some feedback from people on how to best approach building of Visual Studio solutions.
My core requirements would be to ensure that any code/tests run against the correct resources, in particular, database schema and sample data.
I've tried various ways to do this with mixed degrees of success. Currently, I …
- Create a class library, *.Installation.dll, which creates, configures and populates the database, etc.
- Have a class library, *.Build.dll, which has an MSBuild task that takes parameters from the csproj file and passes them to the Installation.dll (a sketch of such a task follows below).
These sit within their own solution. Say MyApp.Build.sln. I maintain this separately from my main solution, to prevent file locking issues.
In my main solution, say MyApp.sln …
Then, my test projects invoke the MSBuild task to create test environments for integration testing including database and sample test data.
And my Web/Windows front end projects invoke the MSBuild to create runnable environments for test users/my manual testing
So, I am using MSBuild to create customisable builds/environments for testing/running. Additionally, I can wrap the Installation.dll into a configuration/setup tool to automate the installation for the user when the time comes to install.
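For concreteness, the MSBuild task in *.Build.dll might look something like this sketch (the task name, parameters, and the Installer class are assumptions standing in for the real Installation.dll API):

// Hypothetical custom MSBuild task that drives the Installation assembly.
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class CreateTestEnvironment : Task
{
    // Supplied from the csproj, e.g. <CreateTestEnvironment ConnectionString="..." />
    [Required]
    public string ConnectionString { get; set; }

    public bool IncludeSampleData { get; set; }

    public override bool Execute()
    {
        Log.LogMessage(MessageImportance.Normal,
            "Creating test database at {0}", ConnectionString);

        // Hand off to the Installation assembly to build schema and data.
        var installer = new MyApp.Installation.Installer();
        installer.CreateDatabase(ConnectionString);
        if (IncludeSampleData)
        {
            installer.LoadSampleData(ConnectionString);
        }
        return !Log.HasLoggedErrors;
    }
}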
Is this too complex a scenario? I'm worried I've over-engineered this and overlooked something. It works well, but it is bound up with a lot of meta-programming (e.g. the database build code, configuration, build task, etc.) that is not directly involved with tangible, chargeable work.
I have Subversion and TeamCity. Ultimately I'd like to enable a CI build that is invoked on a daily/commit trigger. Or can I use TeamCity in such a way as to avoid rebuilding the database etc. on every build?
Thanks for any insight.
I'm new to load testing in Visual Studio/MSTest, and I created a new Load Test recently to validate some high-traffic scenarios for a WCF service. I want to add this to the tests project for the service, but I don't want the test to be executed whenever I "Run All Tests in Solution" nor as part of our Continuous Integration build-verification process because a) it takes 5 minutes to run, and b) the service call that it is testing generates many thousands of email messages. Basically, I'd like to do the equivalent of adding the [Ignore] attribute to a unit test so that the load test is only executed when I explicitly choose to run it.
This MSDN article ("How to: Disable and Enable Tests") suggests that the only way to disable the test is to use test lists (.vsmdi files), but I don't have much experience with them, they seem like a hassle to manage, I don't want to have to modify our CI build definition, and this blog post says that test lists are deprecated in VS2012. Any other ideas?
Edit: I accepted Mauricio's answer, which was to put the load tests into a separate project and maintain separate solutions, one with the load tests and one without. This enables you to run the (faster-running) unit tests during development and also include the (slower-running) load tests during build verification without using test lists.
This should not be an issue for your CI Build Definition. Why?
To run unit tests as part of your build process you need to configure the build definition to point to a test container (usually a .dll file containing your test classes and methods). Load tests do not work this way; they are defined in .loadtest files (which are just XML files) consumed by the MSTest engine.
If you do not make any further changes to your CI Build definition the load test will be ignored.
If you want to run the test as part of a build, then you need to configure the build definition to use the .loadtest file.
Stay away from test lists. Like you said, they are being deprecated in VS11.
Edit: The simplest way to avoid running the load test as part of Visual Studio "Run All" tests is to create a different solution for your load tests.
Why don't you want to use Test Lists? I think that is the best way to do it. Create a different Test List for each test type (unit tests, load tests...) and then run the Test List(s) you want from your MSTest command:
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests (only UnitTests)
MSTest /testmetadata:testlists.vsmdi /testlist:LoadTests (only LoadTests)
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests /testlist:LoadTests (UnitTests & LoadTests)