Why must a ClassInitialize method be static?

I'm curious as to why the fixture setup must be static. It seems more intuitive to me to have instance variables per fixture that share the lifetime of the fixture.
Yes, these could be initialized in the constructor, but then I assume they are outside the test runner's control.
What design requirements or philosophies determined that the setup method should be static?

The method with the ClassInitialize attribute runs once for all the tests in the class. An instance of the class is created each time a test is run, so it has to be static in order to only run once.
If you want to initialize before every test, you can use the TestInitialize attribute instead, which runs each time a new instance of the class is created (that is, before each test).
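A minimal MSTest sketch of that lifecycle (the class and test names here are invented for illustration):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // Runs once, before any test in this class. It must be static because
    // MSTest creates a fresh instance of the class for every test method,
    // so no single instance exists for the lifetime of the fixture.
    [ClassInitialize]
    public static void ClassSetup(TestContext context)
    {
        // one-time, class-wide setup goes here
    }

    // Runs before each test, on the fresh instance MSTest just created.
    [TestInitialize]
    public void TestSetup()
    {
        // per-test setup goes here
    }

    [TestMethod]
    public void Add_ReturnsSum()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}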
If you need more info, you can check out:
That Pesky MSTest Execution Ordering

Related

Best technique/workaround for mocking assignment of private field?

TDD and Mockito testing newb here... I have recently learned about injection of mocks into private fields and am trying to get my ideas straight.
It may well be that my thinking on this particular matter is all wrong, but it's just that I often find a situation where I want to test that calling a particular method has resulted in a private field being assigned: before the call it was null, and after the call it is set to an instance of whatever class.
A mocked private field is a subclass of the class in question by the time you start to run your test method, so obviously testing for null is meaningless/doesn't work. Equally, if your method results in an assignment to this field, the mock has been replaced by a genuine instance, so even if methods are called on it, there won't be any interaction with the mock. In any event, if the test is just to check the creation and assignment, there is no reason you would want any interaction with it in the test.
Obviously one answer is to create a public get() method or a public create() method in the app class. But this gives me a bad feeling as it is clearly just being done for testing purposes.
Incidentally there will be those, many no doubt much more experienced than me at TDD, who will say that I'm getting too fine-grained and shouldn't be testing something as specific as assignment of a field. But my understanding of the most extreme approach to TDD is "don't write a line of code without a test". I really am trying to stick with this at this stage as far as possible. With experience maybe I will have greater understanding of how and when to depart from this principle.
It's just that assignment of a field seems to me to be a "class event" (of the host class) as worthy as any other type of event, such as a call of a method on the field once an instance has been assigned to the field. Shouldn't we be able to test for it in some way...? What do the Mockito pros who accept that this may be a legitimate thing to test for do in these circumstances?
later...
The comment from Fabio highlights a tension which, as a newb, I find puzzling: between "never write a line without a test" and "test only public methods". The effect of the latter is inevitably to expose much more than I want, purely for the sake of testing.
And what he says about "changing tests all the time" I also find problematic: if we inject mocked private fields, and use them, then we are going to have to change the tests if we change the "implementation details". Nor does this idea fill me with dread: a need for change is perceived, you look at the very explanatory names of your existing test methods and decide which ones no longer apply, and of course you create new ones... the testing code, like the app code, will have to change all the time, right?
Isn't the testing code like some sort of die or mould, which guides and constrains the app code but at the same time can be detached from it? By Mockito allowing us to mock private fields a privileged form of access is already granted... who's to say how far this privileged access should rightly extend?
even later...
Jeff Bowman suggests this is a dup of this. I'm not entirely convinced, since I am asking a very specific thing for which it's just possible there may be a specific answer (some Mockito means of detecting assignment). But his answer there seems very comprehensive and I'm trying to fully understand it all... including the whole dependency-injection thing... and the compromises you seem to have to make in "distorting" your public interface to make an app class as testable as you want it to be.
Simple example: you need to write a class Class1 with two methods Method1 and Method2 which return values provided by some Helper class.
[Test] Start writing the test for the first method - it will not compile.
[Production] Create class Class1 and Method1 - it compiles, and the test fails.
[Production] In Method1, create a new instance of Helper and call the method that returns the expected value - test passes.
[Test] Create another test case for the same Method1 - test fails.
[Production] Change Method1 to satisfy both test cases - tests are green.
[Production] Refactor Method1 if you see the possibility - tests must stay green.
[Test] Create a test for Method2, which will update some public field Field1 of Class1 - it will not compile.
[Production] Create method Method2 and field Field1 in Class1 - it compiles, test fails.
[Production] Create a new instance of the Helper class in Method2, call its method, and update Field1 with the expected value - test passes.
[Test] Create another test case for Method2 - test fails.
[Production] Change Method2 to satisfy both test cases - tests are green.
Now you have implemented the needed behavior and noticed that both Method1 and Method2 create an instance of the Helper class.
[Production] You decide to declare a private field of type Helper and use it in both methods - all tests fail, because your private field is not instantiated.
[Production] You instantiate it in the constructor - all tests are green.
Later, if you decide to move creation of the Helper class outside and pass it to Class1 as a constructor parameter, you just change your production code, and the tests will show you whether you broke anything - without the tests themselves changing.
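A sketch of where those steps end up; the Helper implementation and return values below are invented, since the answer never shows them:

public class Helper
{
    public int GetValue() { return 42; }  // stand-in for the real helper logic
}

public class Class1
{
    private readonly Helper _helper;
    public int Field1;

    // Originally the Helper was instantiated here, in the constructor...
    public Class1() : this(new Helper()) { }

    // ...and this overload is the later refactoring: creation is moved
    // outside and the Helper is passed in as a constructor parameter,
    // so tests can substitute their own Helper without changing.
    public Class1(Helper helper) { _helper = helper; }

    public int Method1() { return _helper.GetValue(); }

    public void Method2() { Field1 = _helper.GetValue(); }
}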
The main point of my answer is this: if you think "I want to write this line of code - how can I test it?", you will end up in exactly the kind of situation you are in right now.
Try to think first about what you want to test, and only after writing the test think about how you will implement it.

Unit Testing - TDD - C#

I am constructing a prototype for a robot using test-driven development (C#, console application). First, I created a test project and a class RobotTest. There I wrote test methods that fail, and to make them pass I constructed the Robot class. Then I created a class RobotPrototype in which a Robot object is created so its methods can be used. Along with that, I added some other methods (to parse input) to RobotPrototype.
I don't know if this is the way I should follow while developing through TDD. Do I have to include all methods in the Robot class itself?
Please guide me. Thanks.
Do I have to include all methods in the Robot class itself?
I don't understand the question. You already said your tests are in a separate project and that you are writing a separate client class, RobotPrototype, that uses the Robot class.
At this point it seems like a reasonable design.
I think you're confusing yourself by writing bits of all of your classes for each bit of "working" test that you write for some Robot class method. This is not the way to think about TDD. It DOES NOT mean: write a failing test to create a Robot object, then write a shell of a Robot constructor, write a shell of a class that uses a robot, write a shell of a client that uses a RobotPrototype; then write a failing test, then write an empty Robot method, write RobotPrototype code that uses that method, write client code that uses what the RobotPrototype uses. No, no, no.
Each class in your robot design will have its own corresponding test class. Each method in each class will have its own corresponding method in its corresponding test class. The TDD cycle is performed on a method-by-method basis.
Try this:
Focus on one class and its corresponding test class. Clearly you need a Robot before anything else, so start with the Robot class.
Using the TDD cycle, write functional methods.
When you have enough Robot functionality to do something, then you can start writing some Robot-using code (RobotPrototype class).
The RobotPrototype class has its own corresponding test class. Each of its methods will have a corresponding test method. You should have written enough Robot functionality to complete any given RobotPrototype method. If not, stop; go back to Robot and write functioning methods there.
Given the above, the points to take away are:
You wrote complete "core" methods first. Each method has working tests when you're done.
As you write new code using existing code, you know the existing code works because it's been tested. And your new code has its own tests.
Thus your application is built up upon layers of tested code.
As you write and rewrite, you constantly re-run your tests, and periodically make sure you rerun ALL of them. If a previously working test fails, you know you have a problem and you know where to look first.
As much as practicable every class has a test class and every method has (at least one) test method.
One implementation of TDD is Red-Green-Refactor.
As you write tests (Red), you will need to add methods to Robot in order to pass(Green). The next step is to organize the code, possibly into other classes (Refactor). The initial code used to pass the test may be in a different class than your final code.
In general you start by writing the skeleton of the class you intend to unit test, leaving all methods unimplemented. Then you write the unit tests for this class and all the methods you intend to test. Then you run the unit tests, which will fail because you haven't implemented the methods yet (you left them throwing NotImplementedException), but at least your unit tests compile and execute. Then you go ahead and implement the methods and run the unit tests, which should now pass. Then you refactor your code, and when you run the unit tests they should still pass. You move on to the next class and the process repeats.
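A minimal sketch of that skeleton-first flow (the MoveForward/Position API on Robot is invented for illustration):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Robot
{
    public int Position { get; private set; }

    // Skeleton first: it compiles, but fails any test until implemented.
    public void MoveForward(int steps)
    {
        throw new NotImplementedException();
    }
}

[TestClass]
public class RobotTest
{
    [TestMethod]
    public void MoveForward_AdvancesPosition()
    {
        var robot = new Robot();
        robot.MoveForward(3);                 // fails (red) until implemented
        Assert.AreEqual(3, robot.Position);   // passes (green) afterwards
    }
}

Once MoveForward is implemented and the test is green, you refactor with the test as a safety net, then repeat for the next method.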

How do I implement AssemblyInitialize/AssemblyCleanup in my CodedUITest in MSVS 2010?

I am trying to implement the AssemblyInitialize/AssemblyCleanup attributes in Microsoft Visual Studio 2010 for the exact purpose stated here. That link even describes the process I need to follow to implement the code.
A quick summary of that purpose is to create an initial block of code which will run right before any test no matter which of the codedUITests I run in the solution and then a block of code which will run after the last codedUITest is completed. Example: I need to open up a specific application, then run a series of codedUITests which all start at that application and which are executed in any order, then close the application after everything is finished; this is more efficient than opening/closing the application for each codedUITest.
What I don't understand is where I need to place the code laid out at the bottom of that page (also shown below). I stuck all that code right under my 'public partial class UIMap', and the code runs, except that it runs the 'OpenApplication' and 'CloseApplication' commands before/after each CodedUITest instead of sandwiching the entire group of CodedUITests.
How do I implement the code correctly?
Update:
I discovered AssemblyInitialize/AssemblyCleanup last night and spent 3 hours trying to figure out where to put the code so it works. If I put the AssemblyInitialize at the beginning of a specific test method then:
1) It still wouldn't run - it gave me an error saying that the UIMap.OpenWindow() and UIMap.CloseWindow() methods need to be static, and I couldn't figure out how to make them static.
2) Wouldn't the specific [TestMethod] which has the AssemblyInitialize/Cleanup on it need to be in the test set? In my situation I have a dozen CodedUITests which need to run either individually or in a larger group, and I need the AssemblyInitialize/Cleanup to open/close the window I am testing.
You've added the methods to the wrong class. By putting them into the UIMap partial class, you are telling the runtime to run those methods every time you create a new UIMap instance, which it sounds like you're doing for every test.
The point of the ClassInitialize/ClassCleanup methods is to add them to the class with your test methods in it. You should have at least one class decorated with the TestClass attribute, which has at least one method decorated with a TestMethod attribute. This is the class that needs the ClassInitialize and ClassCleanup attributes applied to it. Those methods will run one time for each separate TestClass you have in your project.
You could also use the AssemblyInitialize and AssemblyCleanup attributes instead. There can only be one of these methods in any given assembly, and they will run first and last, respectively, before and after any test methods in any classes.
UPDATE:
AssemblyInitialize/Cleanup need to be in a class that has the TestClass attribute, but it doesn't matter which one. The single method with each attribute will get run before or after any tests in the assembly run. It can't be a test method, though; it has to be a static method and will not count as a "test".
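Something like this, placed in any one class in the assembly (OpenApplication/CloseApplication here stand in for whatever your UIMap methods actually do):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AssemblySetup
{
    // Runs once, before the first test anywhere in the assembly.
    // It must be static and accept a TestContext parameter.
    [AssemblyInitialize]
    public static void OpenApplication(TestContext context)
    {
        // launch the application under test here
    }

    // Runs once, after the last test in the assembly has finished.
    [AssemblyCleanup]
    public static void CloseApplication()
    {
        // close the application here
    }
}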

How to initialize test class resources in Visual Studio Unit Testing framework?

I'm using the unit testing framework in .NET in C++/CLI to test unmanaged C++ code.
I would like, for example, an instance of System::Random to generate random values throughout the test methods.
Do I need to put this as a member variable in my test class?
If yes, where can I put the initialization code? The ClassInitialize() method that is generated is static for some reason, and it only has access to a TestContext, which I read is only for consuming test data from external sources.
You can add static properties to your test class and initialize them in the ClassInitialize() method if you need them to be available to all tests. If you want them initialized per test, then using the TestInitialize() method is better.
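For example (shown here in C# syntax; the C++/CLI version is analogous), a sketch along these lines:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class NativeCodeTests
{
    // Shared by all tests in the class, so it is static,
    // just like the ClassInitialize method that assigns it.
    private static Random _random;

    [ClassInitialize]
    public static void ClassSetup(TestContext context)
    {
        _random = new Random(12345);  // a fixed seed keeps failures reproducible
    }

    [TestMethod]
    public void SomeTest()
    {
        int value = _random.Next();
        // ... exercise the code under test with value ...
    }
}

Seeding the generator with a constant at least keeps runs reproducible, which partly addresses the caution below.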
Are you sure you want to use random values in your unit tests? Typically you'd want to use known values (good values, bad values, edge cases, etc) so that your tests are predictable. Using multiple tests with various values where you know the expected behavior (outcome) is more typical than using random values.

NMock2.0 - how to stub a non-interface call?

I have a class API with full code coverage; it uses DI so that all the logic in the main class function (Job.Run), which does all the work, can be mocked out.
I found a bug in production where we weren't doing some validation on one of the data input fields.
So I added a stub function called ValidateFoo() and wrote a unit test against it expecting a JobFailedException. I ran the test - it failed, obviously, because the function was empty. I added the validation logic, and now the test passes.
Great, now we know the validation works. The problem is: how do I write a test to make sure that ValidateFoo() is actually called inside Job.Run()? ValidateFoo() is a private method of the Job class, so it's not on an interface...
Is there any way to do this with NMock 2.0? I know TypeMock supports fakes of non-interface types, but changing mock libraries right now is not an option. If NMock can't support it, I will simply add the ValidateFoo() call to the Run() method and test things manually - which I'd obviously prefer not to do, considering my Job.Run() method has 100% coverage right now. Any advice? Thanks very much; it is appreciated.
EDIT: the other option I have in mind is to create an integration test for my Job.Run functionality (injecting true implementations of the composite objects instead of mocks). I would give it a bad input value for that field and then verify that the job failed. This works and covers my case - but it's not really a unit test; it's an integration test that exercises one unit of functionality... hmm.
EDIT 2: Is there any way to do this? Anyone have ideas? Maybe TypeMock - or a better design?
The current version of NMock2 can mock concrete types (I don't remember exactly which version added this, but we're using version 2.1) using the mostly familiar syntax:
Job job = mockery.NewMock<Job>(MockStyle.Transparent);
Stub.On(job).Method("ValidateFoo").Will(Return.Value(true));
MockStyle.Transparent specifies that anything you don't stub or expect should be handled by the underlying implementation - so you can stub and set expectations for methods on an instance you're testing.
However, you can only stub and set expectations on public methods (and properties), which must also be virtual or abstract. So to avoid relying on integration testing, you have two options:
Make Job.ValidateFoo() public and virtual.
Extract the validation logic into a new class and inject an instance into Job.
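A sketch of the second option; the IFooValidator name and the Run details are invented here:

using System;

public class JobFailedException : Exception { }

public interface IFooValidator
{
    // Throws JobFailedException when the input is invalid.
    void ValidateFoo(string input);
}

public class Job
{
    private readonly IFooValidator _validator;

    public Job(IFooValidator validator) { _validator = validator; }

    public void Run(string input)
    {
        _validator.ValidateFoo(input);  // now an interface call a mock can intercept
        // ... the rest of the job's work ...
    }
}

The test can then hand Job a mocked IFooValidator and set an expectation such as Expect.Once.On(validator).Method("ValidateFoo"), proving that Run() performs the validation without ever touching a private member.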
Since all private methods are called by public methods (unless they are invoked via reflection at runtime), those private methods are exercised through the public ones. The private methods cause changes to the object beyond simply executing code, such as setting class fields or calling into other objects. I'd find a way to get at those "results" of calling the private method (or mock the things that shouldn't be executed inside the private methods).
I can't see the class under test, but another problem that could be pushing you toward wanting access to the private methods is a very large class with a boatload of private functionality. Such classes may need to be broken down into smaller classes, and some of those private methods may turn into simpler public ones.
