Is it possible to create data driven tests with MSpec? - tdd

With MSpec is it possible to create data driven tests?
For example, NUnit has the TestCase attribute that allows for multiple data driven cases.
[TestFixture]
public class ExampleOfTestCases
{
    [TestCase(1, 2, 3)]
    [TestCase(3, 3, 6)]
    [TestCase(2, 2, 4)]
    public void when_adding_two_numbers(int number1, int number2, int expected)
    {
        Assert.That(number1 + number2, Is.EqualTo(expected));
    }
}

That's not possible. I would advise against driving MSpec with data; use NUnit or MbUnit if you need row tests or combinatorial tests (and MSpec when you describe behavior).
Follow-up: Aeden, TestCases/RowTests are not possible with MSpec and likely never will be. Please use NUnit for such cases, as it is the best tool for that job. MSpec excels when you want to specify system behavior (When an order is submitted => should notify the fulfilment service). For TestCases with MSpec you would need to create a context for every combination of inputs, which might lead to class explosion.
MSpec is also great when you want a sane test structure that is easy to learn. Instead of starting with a blank sheet of paper (think NUnit's [Test] methods), MSpec gives you a template (Establish, Because, It) that you can build your specifications around. Contrast this with the example you give, where Arrange, Act and Assert are combined into one line of code.
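For illustration, here is a minimal sketch of that template applied to the addition example (my own illustrative spec, assuming the Machine.Specifications namespace and its ShouldEqual assertion extension, which may live in a separate Should package depending on the MSpec version):
using Machine.Specifications;

public class when_adding_two_numbers
{
    static int number1;
    static int number2;
    static int result;

    // Arrange
    Establish context = () =>
    {
        number1 = 1;
        number2 = 2;
    };

    // Act
    Because of = () => result = number1 + number2;

    // Assert
    It should_return_the_sum = () => result.ShouldEqual(3);
}
Each combination of inputs would need its own context class like this one, which is exactly the class explosion mentioned above.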

Related

Unit, Integration or Feature Test?

A simple question: How do you differentiate between a feature, unit and integration test?
There are a lot of differing opinions, but I'm specifically trying to determine how to organise a Laravel test which touches a model's relationship. Here is an example of some PHP code which would require testing:
public function prices()
{
    return $this->hasMany(Prices::class);
}

public function getPriceAttribute()
{
    return $this->prices()->first() * 2;
}
The test descriptions as I understand them (feel free to correct me):
Unit test
Tests the smallest part of your code
Does not touch the database
Does not interact with any other part of the system
Integration test
Tests part of the system working together
e.g. controllers that call helper functions, which need to be tested together
Feature test
Blackbox test
e.g. Call an api end point, see that it has returned the correct JSON response
Here is my issue given those descriptions:
My Laravel model test needs to test the smallest unit of code - the calculated accessor of a model, which makes it feel like a Unit test
But, it touches the database when it loads the model's relationship
It doesn't feel like an Integration test, because it is only touching other related models, not internal or external services
Other property accessor tests in Laravel would fall under Unit tests when they do not touch the database or the model's relationships
Separating these types of tests into integration tests would mean that a single model's tests against its properties are fragmented between integration and unit tests
So, without mocking relationships between models, where would my test belong?
If I’m interpreting your original question correctly, I think the killer constraint here is:
So, without mocking relationships between models, where would my test belong?
If mocking isn't allowed and you're required to touch a DB then, by your (and Google's) definition, it has to belong as an integration/medium-size test :)
The way I think of this is that the get-price-attribute functionality is separate from the DB. Even though it's in the model, the prices could come from anywhere. Right now it's an RDBMS, but what if your org gets really big and it splits into another service? Basically, I believe that the capability of getPriceAttribute is distinct from the storage of attributes:
public function getPriceAttribute()
{
    return $this->prices()->first() * 2;
}
If you buy into this reasoning, it creates a logical separation that supports unit tests. prices() can be mocked to return a collection of 0, 1 and many (2) results. These tests can be executed as unit tests, for orders of magnitude faster test execution (i.e. on the order of 1 ms vs potentially 10s or 100s of ms talking to a local DB).
I am not familiar with the PHP test ecosystem, but one way to do this could be with a test-specific subclass (not sure if the following is valid PHP :p):
class PricedModel extends YourModel {
    private $stub_prices;

    public function __construct($stub_prices_supporting_first) {
        $this->stub_prices = $stub_prices_supporting_first;
    }

    public function prices() {
        return $this->stub_prices;
    }
}
Tests:
function test_priced_model_0_prices() {
    $p = new PricedModel(new Prices(array()));
    assertEquals(null, $p->getPriceAttribute());
}

function test_priced_model_1_price() {
    $p = new PricedModel(new Prices(array(1)));
    assertEquals(2, $p->getPriceAttribute());
}

function test_priced_model_2_prices() {
    $p = new PricedModel(new Prices(array(5, 1)));
    assertEquals(10, $p->getPriceAttribute());
}
The above should hopefully allow you to fully control the input into the getPriceAttribute method, to support direct, IO-free unit testing.
——
Also, all the unit tests above can tell you is that you're able to process prices correctly; they don't give any feedback on whether you're able to query prices!
What distinguishes the tests is their respective goal:
Unit testing aims at finding those bugs that can be found in isolated small parts of the software. (Note that this does not say you must isolate - it only means your focus is on the isolated code. Isolation and mocking are often not needed to reach this goal: think of a call to a sin function - you almost never need to mock this; you let your system under test just call the original one.)
Integration testing aims at finding bugs in the interaction of two or more components, for example mutual misconceptions about an interface. These bugs cannot be found in the isolated software: if you test code in isolation, you also write your tests based on your (possibly wrong) understanding of the other components.
Feature tests as you describe them then have the goal of finding further bugs that the other tests could not detect. One example of such a bug could be that an old version of the feature was integrated (which was correct at the time, but lacked some functionality).
The conclusion, although it may be surprising, is that it is not, strictly speaking, forbidden to make database accesses in unit testing. Consider the following scenario: you start writing unit tests and mock the database accesses. Later, you realize you can be lazier and just use the database without mocking - but otherwise leave all the tests as they are. Your tests have not changed, and they will continue finding the bugs in the isolated code as before. They may run a bit slower now, and the setup may be more complex than with the mocked database, but the goal of the test suite is the same - with or without mocking the database.
This scenario simplifies things a bit, because there may be test cases that can only be done with a mock: for example, testing the case that the database gets corrupted in a specific way and your code handles this properly. With the real database, such test cases may be practically impossible to set up.

Dealing with duplication between unit and integration tests

I have an algorithm implemented by a number of classes, all covered by unit tests.
I would like to refactor it, which will change behavior of two classes.
When I change one class and its tests, all unit tests pass, though the algorithm becomes incorrect until refactoring is done.
This example illustrates that complete coverage by unit tests is sometimes not enough and I need "integration" tests for the whole algorithm in terms of input-output. Ideally, such tests should cover the behavior of my algorithm completely.
My question: it looks like by adding such integration tests I make the unit tests unnecessary and superfluous. I don't want to maintain duplicated test logic.
Should I remove my unit tests or leave them as is, e.g. for easier bug location?
This is part of the problem with tests that are too fine-grained and too tightly coupled to the implementation.
Personally I would write tests which focus on the behaviour of the algorithm and would consider this 'a unit'. The fact that it is broken into several classes is an implementation detail, in the same way that breaking down a public method's functionality into several smaller private methods is also an implementation detail. I wouldn't write tests for the private methods separately, they would be tested by the tests of the functionality of the public method.
If some of those classes are generically useful and will be reused elsewhere then I would consider writing unit tests for them at that point as then they will have some defined behaviour on their own.
This would result in some duplication, but that is OK, because those classes now have a public contract to uphold (one that is used by every component that uses them), which those tests can define.
Interestingly, see the definition of Unit in this article

How to order methods of execution using Visual Studio to do integration testing?

I have 2 questions in regard to doing integration testing using VS 2010.
First, I'm really in need of finding a way to execute these test methods in the order I want them to run. Note: I know that in unit testing, methods should run standalone from anything else, but these are integration tests, where I do depend on the order in which the methods run.
On the same note, is there a way to keep a local variable across the test run? For example, the following code currently fails.
[TestClass]
public class UnitTest1
{
    int i = 0;

    [TestMethod]
    public void TestMethod1()
    {
        i = 5;
    }

    [TestMethod]
    public void TestMethod2()
    {
        Assert.AreEqual(5, i);
    }
}
So is there a way to do any of these?
To execute tests in a specific order I followed these steps:
In a test project with test1, test2 and test3:
1. Right-click the project and choose 'Add' -> 'New Test...'.
2. Select 'Ordered Test'.
3. Double-click the file that appears, "OrderedTest1.orderedtest".
4. Build the project if it was not built previously.
5. From the list of available tests, select the tests you want and order them.
From that point on, a new test appears in the Test List Editor.
It is an extra test that runs the enclosed tests in the correct order, but if you carelessly run all the tests in the project, the tests included in the ordered list will be executed twice, so you need to somehow manage test lists or test categories to avoid that (see the sketch below).
I tried disabling the individual tests, but that also disables the ordered test; I don't know a better way to do it.
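As one illustration of the category idea (my own sketch, not from the original answer), you could tag the tests that belong to the ordered test with an MSTest test category and exclude that category from normal runs:
[TestMethod, TestCategory("OrderedOnly")]
public void TestMethod1()
{
    // Only meant to run as part of OrderedTest1.
}

[TestMethod, TestCategory("OrderedOnly")]
public void TestMethod2()
{
    // Only meant to run as part of OrderedTest1.
}
Your test lists or runner filters can then exclude the "OrderedOnly" category, so a plain run skips these and only the ordered test executes them.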
It is best practice to use functions to set up the tests and to clean them up, by using the attributes [TestInitialize] and [TestCleanup] or [ClassInitialize] and [ClassCleanup].
http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting(v=VS.100).aspx
The following code is an example of something similar to what you want:
[TestClass]
public class UnitTest1
{
    int i = 0;

    [TestInitialize]
    public void Setup()
    {
        i = 5;
    }

    [TestMethod]
    public void TestMethod1()
    {
        Assert.AreEqual(5, i);
    }
}
The Setup method will be called before each test is executed.
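If the setup only needs to run once for the whole class rather than before every test, the class-level attributes mentioned above can be used instead. A minimal sketch (note that [ClassInitialize] and [ClassCleanup] methods must be static, and the initializer takes a TestContext):
[TestClass]
public class UnitTest2
{
    static int i = 0;

    [ClassInitialize]
    public static void ClassSetup(TestContext context)
    {
        // Runs once, before the first test in this class.
        i = 5;
    }

    [ClassCleanup]
    public static void ClassTeardown()
    {
        // Runs once, after the last test in this class has finished.
    }

    [TestMethod]
    public void TestMethod1()
    {
        Assert.AreEqual(5, i);
    }
}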
If you need to pass a value from one test to another, you might want to consider using a static variable, which is not recommended due to the nondeterministic order of execution.
Usually there is a way to avoid needing a specific order by using the setup/cleanup technique, but it is true that this might not hold for very complex integration tests.
If there is no possible way to avoid ordering them, you can consider merging them into a single test, again breaking the best practice of having only one assert per test; but if they are so dependent on one another, it might even be better this way, as in this case one test failing might compromise the results of the others.
EDIT:
Maybe using ordered tests answers question 1, and using static variables answers question 2:
http://msdn.microsoft.com/en-us/library/ms182631.aspx
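For question 2, a rough sketch of the static-variable approach (my own illustration; it only behaves predictably when the methods are guaranteed to run in order, e.g. via an ordered test):
[TestClass]
public class OrderedIntegrationTests
{
    // Static, because MSTest creates a new instance of the class for
    // every test method, so an instance field would be reset in between.
    static int sharedValue;

    [TestMethod]
    public void TestMethod1()
    {
        sharedValue = 5;
    }

    [TestMethod]
    public void TestMethod2()
    {
        // Only valid if TestMethod1 is guaranteed to have run first.
        Assert.AreEqual(5, sharedValue);
    }
}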

How do I skip specific tests in xUnit based on current platform

I have an assembly that I've built on Windows.
I want to run the xUnit tests on Mono in Linux.
However, I have found that while 400 of these tests can run (in order), certain tests either hang the xUnit runner or bring it down entirely.
I don't care if certain tests are not able to run on Linux; some of them deal with the DTC and some unmanaged gumph that we don't need to support there.
What I do want however, is to apply an ignore to those tests, and have the fact that the test was ignored flagged properly in the build output.
The question can be boiled down, I guess, to a number of possible solutions:
How do I run specific tests in xUnit via the console runner? (I haven't found documentation to this end; maybe I'm just not looking hard enough.)
Is it possible to go the other way and say "Here is an assembly, please ignore these specific tests"?
Having an attribute on those tests has been suggested as a better way to formally document that these tests are platform-specific - is this possible?
If I could avoid modifying the original code too much that would be grand, as the code isn't really mine to change, and applying lots of cross-platform hacks probably won't go down too well.
xUnit v2.0 is now available, and skippable tests are supported by it directly. Use:
[Fact (Skip = "specific reason")]
I would avoid externalising the skipping of tests (i.e. in a config/command file) if possible. This somewhat goes against making the tests easy to run and trustworthy. Making the tests ignored in code is the safest approach when other people start to get involved.
I can see a number of options; here are two that involve modifying the existing code.
Option 1 - Most intrusive, compile time platform detection
In the VS solution, define another configuration that defines a preprocessor symbol MONOWIN (just so that it's explicitly a flag that says it is for code compiled on Windows for use on Mono).
Then define an attribute that will make the test ignored when compiled for Mono:
public class IgnoreOnMonoFactAttribute : FactAttribute
{
#if MONOWIN
    public IgnoreOnMonoFactAttribute()
    {
        Skip = "Ignored on Mono";
    }
#endif
}
It's actually hard to find any advantages to this method, as it involves mucking with the original solution and adds another configuration that needs to be supported.
Option 2 - somewhat intrusive - runtime platform detection
Here is a similar solution to option 1, except that no separate configuration is required:
public class IgnoreOnMonoFactAttribute : FactAttribute
{
    public IgnoreOnMonoFactAttribute()
    {
        if (IsRunningOnMono())
        {
            Skip = "Ignored on Mono";
        }
    }

    /// <summary>
    /// Determine if the runtime is Mono.
    /// Taken from http://stackoverflow.com/questions/721161
    /// </summary>
    /// <returns>True if being executed in Mono, false otherwise.</returns>
    public static bool IsRunningOnMono()
    {
        return Type.GetType("Mono.Runtime") != null;
    }
}
Note 1
The xUnit runner will run a method twice if it is marked with both [Fact] and [IgnoreOnMonoFact] (CodeRush doesn't do that; in this case I assume xUnit is correct). This means that any test methods must have [Fact] replaced with [IgnoreOnMonoFact].
Note 2
The CodeRush test runner still ran the [IgnoreOnMonoFact] test, but it did ignore the [Fact(Skip="reason")] test. I assume this is due to CodeRush reflecting over xUnit rather than actually running it with the aid of the xUnit libraries. This works fine with the xUnit runner.
There is a new option now.
Add the NuGet package SkippableFact, which allows you to use [SkippableFact] instead of [Fact]; you can then use Skip.<xyz> within a test to dynamically skip it at runtime.
Example:
[SkippableFact]
public void SomeTestForWindowsOnly()
{
    Skip.IfNot(Environment.IsWindows);
    // Test Windows-only functionality.
}
[Fact(Skip="reason")]
works but I prefer to use traits
[Fact, Trait("type", "unit")]
public void MyUnitTest()
{
    // given
    // when
    // then
}

[Fact, Trait("type", "http")]
public void MyHttpIntegrationTest()
{
    // given
    // when do things over HTTP
    // then
}
usage
dotnet test --filter type=unit
This protects our builds from accidentally running integration tests that devs forgot to skip (e.g. [Fact(Skip="Integration")]); however, it does require unit tests to "opt in" to CI by adding the correct traits, which admittedly isn't great.
Dominik's solution worked for me with this code:
[SkippableFact]
public void Get_WhenCall_ReturnsString()
{
    // Arrange
    Skip.IfNot(RuntimeInformation.IsOSPlatform(OSPlatform.Windows));

    // Act

    // Assert
}
To add to the previous answers regarding SkippableFact: note that each test class is still constructed - the constructor is run.
If you have time-consuming code in a base class constructor, an alternative is to gather environment-specific test cases in suitable files and run the environment check in the constructor:
if (!SupportsTemporalQueries())
    throw new SkipException("This test class only runs in environments that support temporal queries");
This can speed up the test run considerably. In our system we either extend a "generic" base test class (runs in all environments) or an environment-specific base test class. I find this easier to maintain than filtering in pipelines or other solutions.
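A sketch of what that base-class approach could look like (the class and method names here are illustrative; it assumes the SkippableFact package, which provides SkipException):
using System;
using Xunit;

// Environment-specific base class: the constructor skips every test in
// derived classes when the environment lacks the required capability.
public abstract class TemporalQueryTestBase
{
    protected TemporalQueryTestBase()
    {
        if (!SupportsTemporalQueries())
            throw new SkipException("This test class only runs in environments that support temporal queries");
    }

    // Hypothetical environment check, for illustration only.
    protected static bool SupportsTemporalQueries() =>
        Environment.GetEnvironmentVariable("SUPPORTS_TEMPORAL_QUERIES") == "1";
}

public class TemporalQueryTests : TemporalQueryTestBase
{
    [SkippableFact]
    public void Query_returns_history()
    {
        // ...
    }
}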
This is now solved in 1.8 - you can filter on Traits. See this issue log.
Update: Traits work with the console runner but not MSBuild, I've added a feature request for this support.

VS.Net Unit Testing -- possible to have project-scoped test setup?

Within a test file (MyTest.cs) it is possible to do setup and teardown at the class and the individual test level. Do similar hooks exist for the entire project? Entire solution?
No, I don't believe that they do.
Typically when people ask this question it's because they have tests which depend on something heavy, like a DB setup that needs to be reset for each test pass. Usually the right thing to do here is to mock/stub/fake the dependency out and remove the part that's causing the issue. Of course, your reasons for this may be completely different.
Updated: So, thinking about this some more, I think you could do something with attributes and static types. You could add an assembly-level attribute to each test assembly and pass it a static type.
[OnLoadAttribute(typeof(ProjectInitializer))]
When the assembly loads, the type will get resolved and its static constructor will be executed the first time it is resolved (when the assembly is loaded).
Doing something at a solution level is much harder, because it depends on how your unit test runner deals with tests and how it loads them into AppDomains - per test, per test class or per test project. I suspect most runners create a new AppDomain per project.
I'm not exactly recommending this, as I haven't tried it and there may be some repercussions; it's an idea you might want to try. Another option would be to derive all your tests from a common base class which has a constructor that resolves a singleton, which in turn does your setup. This is less hacky but means having a common base class. You could also use an aspect-oriented approach, I suspect.
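For illustration, a rough sketch of that base-class/singleton idea (my own sketch; all names are hypothetical):
// Every test class derives from this; constructing any test touches the
// singleton, which runs the project-wide setup exactly once per process.
public abstract class ProjectTestBase
{
    protected ProjectTestBase()
    {
        _ = ProjectTestEnvironment.Instance;
    }
}

public sealed class ProjectTestEnvironment
{
    public static readonly ProjectTestEnvironment Instance = new ProjectTestEnvironment();

    private ProjectTestEnvironment()
    {
        // Project-scoped, one-time setup goes here (e.g. create a test database).
    }
}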
Hope this helps. These are just thoughts as to how you could do this.
Ade
We use the [AssemblyInitialize] / [AssemblyCleanup] attributes for project level test setup and cleanup code. We do this for two things:
creating a test database
creating configuration files in a temp directory
It works fine for us, although we have to be careful that each test leaves the database as it found it. It looks a little like this (simplified):
[AssemblyInitialize]
public static void AssemblyInit(TestContext context)
{
    ConnectionString = DatabaseHelper.CreateDatabaseForUnitTests();
}

[AssemblyCleanup]
public static void AssemblyCleanup()
{
    DatabaseHelper.DeleteDatabase(ConnectionString);
}
I do not know of a way to do this 'solution'-wide (I guess this really means for the whole test run - across potentially multiple projects).
