Now that xUnit 2.0 has been released, is it possible to run setup/tear-down code before and after ALL tests are run?
It was not possible in xUnit 1, but according to this answer (xUnit.net - run code once before and after ALL tests) there were plans to support that behavior in 2.0.
It seems like in your case, Collection Fixtures could be helpful. You can find an example in the xUnit documentation. In that example, the constructor and the Dispose method of the DatabaseFixture class are the places where you write your setup and tear-down code. Create a base class for your tests and use the collection fixture for it.
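For illustration, here is a minimal sketch of the pattern (the class names and the "Database collection" name are just placeholders, not something your project must use):

using System;
using Xunit;

public class DatabaseFixture : IDisposable
{
    public DatabaseFixture()
    {
        // one-time setup: runs once before the first test in the collection
    }

    public void Dispose()
    {
        // one-time tear-down: runs once after the last test in the collection
    }
}

[CollectionDefinition("Database collection")]
public class DatabaseCollection : ICollectionFixture<DatabaseFixture>
{
    // marker class: it only ties the fixture to the collection name
}

[Collection("Database collection")]
public class MyDatabaseTests
{
    private readonly DatabaseFixture _fixture;

    public MyDatabaseTests(DatabaseFixture fixture)
    {
        _fixture = fixture; // the same fixture instance is shared by every test in the collection
    }
}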
Here is the AssemblyFixture example published by xUnit, which shows how you can get assembly-level setup and cleanup: https://github.com/xunit/samples.xunit/tree/master/AssemblyFixtureExample
I'm wondering if anyone has a way of rerunning failed tests automatically on TeamCity. I'm using C#, xUnit and Selenium for a suite of automated UI tests. They can be flaky, and a simple rerun will often pass the test on the second try. I can't seem to find a solution. I've used other frameworks before that allow you to pass a console parameter for reruns.
This is something you have to do at the level of your test runner, i.e. xUnit. You could consider the xUnitRetry plugin (the link has installation documentation):
private static int tried; // static so the attempt count survives each retry

[Retry(5)] // re-runs the test up to 5 times until it passes
public void TryAFewTimes()
{
    tried++;
    Assert.True(tried >= 5); // only succeeds on the fifth attempt
}
I created a build definition that runs automated tests using MTM build environments and test suites. I recently created a Visual Studio Load Test, which can be added to a test suite just like any test method marked with the [TestMethod] attribute. However, when I run the build, I get no errors and it appears the aggregate tests don't run. Is there a way to make this work?
I found this article: https://blogs.msdn.microsoft.com/testingspot/2013/01/22/how-to-automatically-run-a-load-test-as-part-of-a-build/ which describes a way to do it, but I can't find a build template that matches what he describes, and it appears this only allows you to run a single load test.
Also, when you configure a test controller, there is an option to configure it for load testing, but to do this you must unregister it from the Team Project Collection. If this is done, it appears the controller can no longer be used in an environment to run project automated tests. This defeats the purpose of what I want to do and makes it seem that Load Tests and Team Projects are mutually exclusive. Is this the case? If so, this is a big oversight: load tests are the kind of thing you would like to run automatically. Thanks for the help.
You are unfortunately right. A test controller used for load testing cannot be used for other automated test execution 'at the same time'. In your scenario I would recommend that you set up a separate test controller and agent for load testing; you would then be able to queue the load test as part of your build to achieve what you are looking for.
There is no special build process template for this case.
I was researching unit testing frameworks and the Wikipedia list has a column which lists whether a framework is considered the "xUnit type" or "compatible". Mocha was listed as not being of the "xUnit type" – why? What are the core features of the xUnit family?
XUnit frameworks share the following concepts:
Test runner - the program that runs the tests
Test case - the class that all tests inherit from
Test fixture - the state needed to run the tests
Test suites - tests that share the same fixture
Assertion - the function that verifies the state of the test
Test formatter - shows the results in one or more formats. This is a bit tricky, since the formats are not always the same. For example, Inria's page specifies the XML tags as test-case and test-suite. JUnit, on the other hand, uses testcase and testsuite.
You can think of XUnit frameworks as *Unit... where the * is replaced by the language (e.g., JUnit for Java).
What's very tricky is that XUnit.net is different from XUnit. XUnit.net is a framework in itself that also incorporates the aforementioned concepts. Its XML output format uses different tags though, such as assembly, class, etc. It can get very confusing when googling for issues.
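To make those concepts concrete, here is a minimal sketch in xUnit.net terms (the names, such as CalculatorFixture, are invented for illustration; note that xUnit.net expresses a test case as a [Fact] method rather than a class you inherit from):

using System;
using Xunit;

// Test fixture: the shared state needed to run the tests
public class CalculatorFixture : IDisposable
{
    public int Seed { get; } = 42;
    public void Dispose() { /* tear down the shared state */ }
}

// Test suite: tests that share the same fixture
public class CalculatorTests : IClassFixture<CalculatorFixture>
{
    private readonly CalculatorFixture _fixture;

    public CalculatorTests(CalculatorFixture fixture) => _fixture = fixture;

    // Test case: a single test discovered and executed by the test runner
    [Fact]
    public void SeedIsPositive()
    {
        // Assertion: verifies the state of the test
        Assert.True(_fixture.Seed > 0);
    }
}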
Can Chutzpah run QUnit tests from a URL? I need a lot of server-side injected markup and JSON data in my QUnit tests, so I'd like to run the test suite within my Visual Studio project on localhost instead of mocking tons of test data in my test.js files.
As of version 2.4, Chutzpah supports running against a remote URL. See the documentation here.
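As a rough sketch (the URL is a placeholder and the exact invocation may differ, so check the Chutzpah documentation), the idea is that you give the console runner the test page's URL where you would normally give it a file path:

chutzpah.console.exe http://localhost:4321/tests/runner.html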
Maybe it helps to share our testing strategy.
We use Chutzpah for JavaScript unit tests. There is no dependency on a running server, and the tests run very quickly as part of the build. (But we are not testing generated JavaScript code, which is your scenario.)
We test against a running server by writing tests in JavaScript and running them with PhantomJS. See my answer for an example of one of our tests: automated functional web GUI testing frameworks (asp.net)
If you don't like writing the tests in JavaScript like this (it is not as nice as using a unit testing framework like QUnit or Jasmine), you could check out CasperJS.
We use Visual Studio 2010 Ultimate with tests written in MSTest. Both our unit tests and integration tests* are written in MSTest.
*By our definition, an integration test is an MSTest TestMethod that takes a while to run and/or calls out to external components such as a database or web services.
I'm looking for a way of easily filtering out the integration tests so that only unit tests run without all the integration tests running too.
My ideas so far:
Mark integration tests with the [Ignore] attribute. This works but is a real pain when you do want to run the integration tests.
Assign a [TestCategory] attribute to the different test types. This allows them to be run separately but only through the Test View panel. You can't use CTRL+R, A (Run All Tests in Solution) or other similar shortcuts/hotkeys.
The integration tests are in a separate project; is there something that could be done to stop them running at the project level? As long as it's easy to toggle.
Write the integration tests in a different test framework, e.g. NUnit. This would keep them totally separate from a tooling point of view.
Does anyone have any other suggestions? Are there any plug-ins that can help with this?
I recommend having a different project (or projects) for integration tests, because the only way to reliably include or exclude tests across all runners is to include or exclude a whole test class library.
But I also recommend, if you're using MSTest, using the TestCategoryAttribute to tag non-unit tests. You can then filter the tests to be run in Test View with MSTest.
Runners like ReSharper and apparently TestDriven.net (http://bit.ly/tmtYC2) then allow you to filter out those tests from general unit-test executions.
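As a sketch (the class, method, and category names below are just placeholders), tagging looks like this, and the categories can then be used to filter a command-line run:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerRepositoryTests
{
    [TestMethod]
    [TestCategory("Integration")] // hits the database, so tag it as an integration test
    public void SavesCustomerToDatabase() { /* ... */ }

    [TestMethod]
    [TestCategory("Unit")]
    public void RejectsEmptyCustomerName() { /* ... */ }
}

// From a Visual Studio command prompt, run only one category, e.g.:
//   mstest /testcontainer:MyApp.Tests.dll /category:"Unit"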
If your unit test project is in a separate namespace, you could use the keyboard shortcut CTRL+R, T to run all tests in the current context (i.e. namespace MyApp.Tests.Unit). To do this, place the cursor just after the opening curly brace of the namespace clause in any unit test class.
I have a suggestion but you won't like it.
Abandon MSTest entirely. While other unit test frameworks have been evolving, MSTest has almost stopped in time. Yes, it has the major benefit of integrating directly with VS, but if I'm not mistaken that will change in VS 2011, which will provide native support for integrating custom unit test runners.
(Note: the "stopped in time" part may not be true, because I confess to not paying too much attention to MSTest since I only used it sparingly with VS 2008.)
I use NUnit and separate my unit tests from the integration tests by using a different class library project. Then I automate the running of the tests using the Gallio command-line runner, which allows me to configure separate scripts for running unit and integration tests.
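As a rough sketch (the assembly names are placeholders, and the Gallio documentation covers the runner's exact switches), the two scripts can be as simple as pointing the runner at the respective assemblies:

rem run-unit-tests.cmd
Gallio.Echo.exe MyApp.UnitTests.dll

rem run-integration-tests.cmd
Gallio.Echo.exe MyApp.IntegrationTests.dll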
Finally, personal opinions aside, I'm not sure, but the TestDriven.net plugin may have support for running only the tests with a specific category, so you could check that.