nUnit test fails when run as part of larger namespace - visual-studio-2010

I am having an interesting situation. In my test assembly, I have folders containing specific test classes, i.e., TestFixtures. Consider, for example, the following hierarchy in VS:
Sol
  TestProject
    TestFolder1
      TestClass1
      TestClass2
    TestFolder2
      TestClass3
Now, when I run the following at command line:
nunit-console.exe /run:Sol.TestProject.TestFolder1.TestClass2 TestProject.dll
Things are running fine and all the tests are passing. But, if I run as below:
nunit-console.exe /run:Sol.TestProject.TestFolder1 TestProject.dll
In this case, some of the tests in TestClass2 are failing.
I have tried dumping the state of some of the relevant objects involved in the test, and the state seemed fine at the beginning of the test code in both cases. Also, TestClass1/2/3 do not share a superclass that does any setup, so that is ruled out as well. Any ideas what else could be happening here?
I am using VS2010/.NET4.0 (4.0.30319.1)/nUnit 2.5.9.

Finally figured this out. I was using a singleton class to store certain options. It turns out the singleton instance is retained between runs of different TestFixtures (i.e., test classes) when they are run together, e.g., for a folder or for the whole project. I did not dump the state of this object initially because I assumed the singleton class would get a new instance for each TestFixture. Interesting finding; hope this helps someone.
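For illustration, here is a minimal sketch of the effect with NUnit (the OptionsStore class and its members are hypothetical stand-ins for the options singleton described above): the static instance survives across fixtures within one test run, so one fixture's changes leak into the next unless the state is explicitly reset.

using NUnit.Framework;

// Hypothetical singleton standing in for the options store described above.
public sealed class OptionsStore
{
    private static OptionsStore instance = new OptionsStore();
    public static OptionsStore Instance { get { return instance; } }

    public string Mode = "default";

    // One way to avoid cross-fixture leakage: allow an explicit reset.
    public static void Reset() { instance = new OptionsStore(); }
}

[TestFixture]
public class TestClass1
{
    [Test]
    public void ChangesOptions()
    {
        OptionsStore.Instance.Mode = "special";   // persists into later fixtures in the same run
        Assert.AreEqual("special", OptionsStore.Instance.Mode);
    }
}

[TestFixture]
public class TestClass2
{
    [SetUp]
    public void ResetSingleton()
    {
        OptionsStore.Reset();                     // restore a known state before each test
    }

    [Test]
    public void AssumesDefaults()
    {
        Assert.AreEqual("default", OptionsStore.Instance.Mode);
    }
}

When TestClass2 is run on its own, the process starts fresh and the defaults hold; when it runs after TestClass1 in the same test run, only the explicit reset keeps its assumptions valid.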

Related

JUnit Easymock - spurious results invoking unit test method as a) method b) within class c) as mvn build

EasyMock 3.5.1
JUnit 4.12
Maven 3.5.0
IntelliJ IDEA Build #IU-181.5281.24, built on June 12, 2018
I have a unit test and contained within this unit test is my problem method:
@Test(expected = CheckoutException.class)
public void performCheckout_CheckoutException() throws Exception {
    // setup test data
    Order order = new OrderImpl();
    OMSOrder omsOrder = new OMSOrderImpl();
    Order omsOrderProxy = OMSOrderProxy.proxify(order, omsOrder, Logger.getRootLogger());
    omsOrderProxy.setId(1L);
    FulfillmentOrder fulfillmentOrder = new FulfillmentOrderImpl();
    FulfillmentGroup fulfillmentGroup = new FulfillmentGroupImpl();
    fulfillmentGroup.setType(FulfillmentType.DIGITAL);
    fulfillmentOrder.setFulfillmentGroup(fulfillmentGroup);
    ((OMSOrder) omsOrderProxy).getAllFulfillmentOrders().add(fulfillmentOrder);
    ProcessContext<CheckoutSeed> context = new DefaultProcessContextImpl<>();
    // create the expected flow
    expect(orderService.save(anyObject(Order.class), eq(false))).andReturn(order).times(2);
    replay(orderService);
    expect((ProcessContext<CheckoutSeed>) checkoutWorkflow.doActivities(anyObject(CheckoutSeed.class))).andReturn(context);
    replay(checkoutWorkflow);
    expect(fulfillmentService.fulfill(anyObject(FulfillmentOrder.class))).andThrow(new FulfillmentException());
    replay(fulfillmentService);
    // test
    checkoutService.performCheckout(omsOrderProxy);
    // check results
    verify(orderService);
    verify(checkoutWorkflow);
    verify(fulfillmentService);
}
orderService is a strict mock (defined in an @Before setup method):
orderService = createStrictMock(OrderService.class);
Each and every unit test class that uses this orderService creates this mock (whether strict or nice) in this @Before setup method.
Running this test method in IntelliJ (right-click, Run...) passes. Running the test at class level, again with right-click, Run..., also passes. An mvn clean install (whether in IntelliJ or at the command line) produces the following error:
java.lang.Exception: Unexpected exception, expected<org.curtiscommerce.core.checkout.service.exception.CheckoutException> but was<java.lang.IllegalStateException>
at org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:52)
at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:41)
at org.easymock.internal.RecordState.invoke(RecordState.java:51)
at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:40)
at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:94)
at com.sun.proxy.$Proxy27.save(Unknown Source)
at com.central.core.checkout.service.TestCheckoutServiceImpl.performCheckout_CheckoutException(TestCheckoutServiceImpl.java:151)
Line 151 (the last frame in the stack trace above) relates to:
expect(orderService.save(anyObject(Order.class), eq(false))).andReturn(order).times(2);
which is a line in this method.
Now, to get the exception details, I removed the 'expected' attribute from the @Test annotation, and the exception thrown is clearer:
java.lang.IllegalStateException: 2 matchers expected, 12 recorded.
This exception usually occurs when matchers are mixed with raw values when recording a method:
foo(5, eq(6)); // wrong
You need to use no matcher at all or a matcher for every single param:
foo(eq(5), eq(6)); // right
foo(5, 6); // also right
at org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:52)
at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:41)
at org.easymock.internal.RecordState.invoke(RecordState.java:51)
at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:40)
at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:94)
at com.sun.proxy.$Proxy27.save(Unknown Source)
at com.central.core.checkout.service.TestCheckoutServiceImpl.performCheckout_CheckoutException(TestCheckoutServiceImpl.java:151)
Also, when I run a suite of tests in IntelliJ, say at the level of the package where my unit test class resides (com.central.core.checkout.service), I get this same error. I have removed all other versions of EasyMock from .m2/repository to ensure there is no conflict.
The concern is: why does this error only occur on an mvn clean install (in IntelliJ or on the command line) and on a package-level unit test run?
I suppose what really concerns me, aside from the differing results depending on how the test is run, is the exception that is thrown:
java.lang.IllegalStateException: 2 matchers expected, 12 recorded.
tells me 2 matchers were expected and 12 were recorded. Does this include matchers created in other unit tests, almost as if spanning a test session? I find this difficult to believe, as a fresh mock is created in @Before before each test method invocation.
Added July 6th @ 15:21
So, to expedite the current build process and get to zero failing unit tests, I @Ignore'd this failing unit test and attempted a build. The build failed again, but this time the preceding method was the problem child, with a similar exception:
java.lang.IllegalStateException: 2 matchers expected, 12 recorded.
This exception usually occurs when matchers are mixed with raw values when recording a method:
foo(5, eq(6)); // wrong
You need to use no matcher at all or a matcher for every single param:
foo(eq(5), eq(6)); // right
foo(5, 6); // also right
I tried a little experiment: I @Ignore'd this newly failing method and tried another build, half expecting that the next preceding method in the class would become the problem child. Lo and behold, it was.
Are you sitting comfortably? Then I will begin...
So I posted this question and of course I kept trying for a solution. I exhausted all channels in fixing the unit tests, so I decided to look at this from a different angle. The issue came to light when the development environment was built. I noticed there were two builds quite close together: the former succeeded but the latter failed with the aforementioned exception.
Now, going all Columbo, I needed to determine the differences between the two builds, and it turned out to be one little unit test. It was a straightforward unit test in that it needed no mocking of any sort. What was odd was that EasyMock.eq was imported. Strange indeed, and even stranger was that it was used inside an assertEquals statement, where the 'expected' value was wrapped in this eq(). Yikes, and in the wise words of Han Solo, "I've got a bad feeling about this".
I removed this import, together with the eq(), ran the unit test method in isolation...success. I then invoked a build with a test run. Success.
So, what I have learned from this is that using the EasyMock.eq() method in the wrong context, in this case inside an Assert.assertEquals(), makes really strange things happen, though at the time I was still not sure why. Even stranger was that running the unit test in isolation inside my IDE succeeded. :-/
I'll give you a little under-the-hood EasyMock insight to help you understand.
When you use a matcher, it is stored in a ThreadLocal. So when the mocked method call actually occurs, a matcher for each parameter is sitting in that ThreadLocal; EasyMock removes them from there and creates a call expectation.
So, when too many matchers are recorded, everything gets misaligned and weird things can happen. That's what the error message is saying: 12 matchers were recorded but only 2 were expected, since your method has 2 arguments.
Since Maven and IntelliJ are not forking a new VM between tests, the bad matchers were still there from one test to the other.

Unit Testing - TDD - C#

I am constructing a prototype for a robot using test-driven development (C#, console application). First, I created a test project and a class RobotTest. Here, I wrote test methods that fail first, and to make them pass I constructed the Robot class. Then I created a class RobotPrototype in which a Robot object is created so its methods can be used. Along with that, I added some other methods (to parse input) in RobotPrototype.
I don't know if this is the way to go about developing through TDD. Do I have to include all methods in the Robot class itself?
Please guide me. Thanks.
Do I have to include all methods in the Robot class itself?
I don't understand the question. You already said your tests are in a separate project and that you are writing a separate client class RobotPrototype that uses the Robot class.
At this point it seems like a reasonable design.
I think you're confusing yourself by writing bits of all of your classes for each bit of "working" test that you write for some Robot class method. This is not the way to think about TDD. It DOES NOT mean: write a failing test to create a Robot object, then write a shell of a Robot constructor, write a shell of a class that uses a Robot, write a shell of a client that uses a RobotPrototype; then write a failing test, then write an empty Robot method, write RobotPrototype code that uses that method, write client code that uses what the RobotPrototype uses. No, no, no.
Each class in your robot design will have its own corresponding test class. Each method in each class will have its own corresponding method in its corresponding test class. The TDD cycle is performed on a method-by-method basis.
Try this:
Focus on one class and its corresponding test class. Clearly you need a Robot before anything else, so start with the Robot class.
Using the TDD cycle, write functional methods.
When you have enough Robot functionality to do something, then you can start writing some Robot-using code (RobotPrototype class).
The RobotPrototype class has its own corresponding test class. Each of its methods will have a corresponding test method. You should have written enough Robot functionality to complete any given RobotPrototype method. If not, stop. Go back to Robot and write functioning methods there.
Given the above, the points to take away are:
You wrote complete "core" methods first. Each method has working tests when you're done.
As you write new code using existing code, you know that the existing code works because it has been tested. And your new code has its own tests.
Thus your application is built up on layers of tested code.
As you write and re-write, you constantly re-run your tests. And periodically make sure you rerun ALL of them. If a previously working test fails, well you know you have a problem and you know where to look first.
As much as practicable every class has a test class and every method has (at least one) test method.
One implementation of TDD is Red-Green-Refactor.
As you write tests (Red), you will need to add methods to Robot in order to pass (Green). The next step is to organize the code, possibly into other classes (Refactor). The initial code used to pass the test may end up in a different class than your final code.
In general you start by writing the skeleton of the class you intend to unit test, leaving all methods unimplemented. Then you write the unit tests for this class and for all the methods you intend to test. Then you run the unit tests, which will fail because you haven't implemented the methods yet (you left them throwing NotImplementedException), but at least your unit tests compile and execute. Then you go ahead and implement the methods and run the unit tests, which should now pass. Then you refactor your code, and when you run the unit tests they should still pass. You move on to the next class and the process repeats.
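As a rough sketch of that flow (the Robot class, its Move method, and the use of NUnit here are illustrative assumptions, since the question does not show the actual API or test framework):

using System;
using NUnit.Framework;

// 1. Skeleton: the class compiles, but the method is not implemented yet.
public class Robot
{
    public int Position { get; private set; }

    public void Move(int steps)
    {
        throw new NotImplementedException();
    }
}

[TestFixture]
public class RobotTest
{
    // 2. Red: this test compiles and runs, but fails while Move() still throws.
    [Test]
    public void Move_AdvancesPosition()
    {
        var robot = new Robot();
        robot.Move(3);
        Assert.AreEqual(3, robot.Position);
    }
}

// 3. Green: replace the throw with a real implementation,
//        public void Move(int steps) { Position += steps; }
//    and rerun the test until it passes.
// 4. Refactor: reorganize the code (for example, move input parsing into
//    RobotPrototype), then rerun the tests; they should still pass.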

How do I implement AssemblyInitialize/AssemblyCleanup in my CodedUITest in MSVS 2010?

I am trying to implement AssemblyInitialize/AssemblyCleanup attributes in my Microsoft Visual Studio 2010 for the exact purpose as stated here. That link even describes the process which I need to follow to implement the code.
A quick summary of that purpose: create an initial block of code that runs right before any test, no matter which of the CodedUITests in the solution I run, and a block of code that runs after the last CodedUITest has completed. Example: I need to open a specific application, then run a series of CodedUITests that all start at that application and can execute in any order, then close the application after everything is finished; this is more efficient than opening/closing the application for each CodedUITest.
What I don't understand is where I need to place the code laid out at the bottom of that page (also shown below). I stuck all that code right under my 'public partial class UIMap', and the code runs, except that it executes the 'OpenApplication' and 'CloseApplication' commands before/after each CodedUITest instead of sandwiching the entire group of CodedUITests.
How do I implement the code correctly?
Update:
I discovered AssemblyI/C last night and I spent 3 hours trying to figure out where to put the code so it works. If I put the AssemblyInitialize at the beginning of a specific test method then:
1) It still wouldn't run - it was giving me some error saying that the UIMap.OpenWindow() and UIMap.CloseWindow() methods need to be static, and I couldn't figure out how to make them static.
2) Wouldn't the specific [TestMethod] which has the AssemblyI/C on it need to be in the test set? In my situation I have a dozen CodedUITests which need to run either individually or in a larger group, and I need the AssemblyI/C to open/close the window I am testing.
You've added the methods to the wrong class. By putting them into the UIMap partial class, you are telling the runtime to run those methods every time you create a new UIMap instance, which it sounds like you're doing in every test.
The point of the ClassInitialize/ClassCleanup methods is to add them to the class with your test methods in it. You should have at least one class decorated with the TestClass attribute, which has at least one method decorated with a TestMethod attribute. This is the class that needs the ClassInitialize and ClassCleanup attributes applied to it. Those methods will run one time for each separate TestClass you have in your project.
You could also use the AssemblyInitialize and AssemblyCleanup attributes instead. There can only be one of these methods in any given assembly, and they will run first and last, respectively, before and after any test methods in any classes.
UPDATE:
AssemblyInitialize/Cleanup need to be in a class that has the TestClass attribute, but it doesn't matter which one. The single method with each attribute will get run before or after any tests in the assembly run. It can't be a test method, though; it has to be a static method and will not count as a "test".
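As a minimal sketch of that arrangement (assuming MSTest in VS2010; the class name, the Process-based launch, and the application path are illustrative, not taken from the question):

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Any class marked [TestClass] (including a CodedUI test class) can host these;
// only one AssemblyInitialize and one AssemblyCleanup are allowed per assembly.
[TestClass]
public class ApplicationLifecycle
{
    private static Process app;

    // Runs once, before the first test in the assembly.
    // Must be static and take a TestContext parameter.
    [AssemblyInitialize]
    public static void OpenApplication(TestContext context)
    {
        app = Process.Start(@"C:\MyApp\MyApp.exe");   // hypothetical path to the application under test
    }

    // Runs once, after the last test in the assembly. Must be static.
    [AssemblyCleanup]
    public static void CloseApplication()
    {
        if (app != null && !app.HasExited)
        {
            app.CloseMainWindow();
        }
    }
}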

What determines the order of compliation or execution of source file in Delphi Prism?

Having written my Delphi Prism program enough to compile and run on Windows and Linux (Mono) without compilation errors, I am finding that my constructors and load events fire in a different order than I expected. I thought files got executed in the order they are listed in the project file, like in a Delphi .dpr file. Speaking of the .dpr file, is there a similar file for Delphi Prism that I am overlooking? I looked into the Program.pas file and the project properties, and I didn't see anything there to give me a hint or clue.
How do you make sure that the project files get executed in the right order in Delphi Prism?
Delphi Prism compiles in the order the files are defined in the project. However, there should not be anything that depends on the order of the files as there are no initialization sections.
As for your other question: Program.pas by default contains the entry point, a method called "Main"; you could see this as the main begin/end.
.NET does not know about the order your classes are listed in your program file. It just sees classes.
Under normal circumstances you could think of this rule:
Static (class) constructors are executed immediately before the instance .ctor or another static (class) method is called on this class for the first time
While this is not true every time (they could be called earlier, but not later), this is a good approximation which works out most of the time.
So to ensure a certain order for static class initialization, I rely on the following:
I have one static class that has an Initialize() method. This method is the first thing I call in the Main() method of my program. In this method I call the Initialize methods on other classes in the required order. This makes sure that the initialization code is executed.
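A rough sketch of that pattern, shown in C# for brevity since Delphi Prism targets the same CLR and the mechanism is identical (the class names here are made up):

using System;

// Hypothetical subsystems with explicit Initialize methods.
static class Configuration
{
    public static void Initialize() { Console.WriteLine("Configuration initialized"); }
}

static class Logging
{
    public static void Initialize() { Console.WriteLine("Logging initialized"); }
}

// One central initializer, called first from Main, so initialization order is
// explicit rather than depending on when class constructors happen to run.
static class Bootstrapper
{
    public static void Initialize()
    {
        Configuration.Initialize();
        Logging.Initialize();
    }
}

static class Program
{
    static void Main()
    {
        Bootstrapper.Initialize();   // first statement in Main
        // ... rest of the program ...
    }
}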

VS.Net Unit Testing -- possible to have project-scoped test setup?

Within a test file (MyTest.cs) it is possible to do setup and teardown at the class and the individual test level. Do similar hooks exist for the entire project? Entire solution?
No, I don't believe they do.
Typically when people ask this question it's because they have tests that depend on something heavy, like a DB setup which needs to be reset for each test pass. Usually the right thing to do here is to mock/stub/fake the dependency out and remove the part that's causing the issue. Of course, your reasons for this may be completely different.
Updated: Thinking about this some more, I think you could do something with attributes and static types. You could add an assembly-level attribute to each test assembly and pass it a static type.
[assembly: OnLoadAttribute(typeof(ProjectInitializer))]
When the assembly loads, the type will get resolved and its static constructor will be executed the first time it is resolved (when the assembly is loaded).
Doing something at a solution level is much harder because it depends on how your unit test runner deals with tests and how it loads tests into AppDomains: per test, per test class or per test project. I suspect most runners create a new AppDomain per project.
I'm not exactly recommending this as I haven't tried it and there may be some repercussions. It's an idea you might want to try. Another option would be to derive all your tests from a common base class which has a constructor that resolves a singleton that in turn does your setup. This is less hacky but means having a common base class. You could also use an aspect-oriented approach, I suspect.
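For what it's worth, a minimal sketch of that base-class idea (MSTest assumed; ProjectFixture and TestBase are made-up names):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical project-wide fixture; its constructor runs once per test run
// (per AppDomain), the first time Instance is used by any test.
public sealed class ProjectFixture
{
    public static readonly ProjectFixture Instance = new ProjectFixture();

    private ProjectFixture()
    {
        // one-time, project-scoped setup goes here
    }
}

public abstract class TestBase
{
    protected TestBase()
    {
        // Touching the singleton forces the one-time setup to run
        // before any test in a derived class executes.
        var fixture = ProjectFixture.Instance;
    }
}

[TestClass]
public class MyTest : TestBase
{
    [TestMethod]
    public void SomeTest()
    {
        Assert.IsTrue(true);
    }
}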
Hope this helps. These are just thoughts as to how you could do this.
Ade
We use the [AssemblyInitialize] / [AssemblyCleanup] attributes for project level test setup and cleanup code. We do this for two things:
creating a test database
creating configuration files in a temp directory
It works fine for us, although we have to be careful that each test leaves the database how it found it. Looks a little like this (simplified):
[AssemblyInitialize]
public static void AssemblyInit(TestContext context)
{
    ConnectionString = DatabaseHelper.CreateDatabaseFornitTests();
}

[AssemblyCleanup]
public static void AssemblyCleanup()
{
    DatabaseHelper.DeleteDatabase(ConnectionString);
}
I do not know of a way to do this 'solution'-wide (I guess this really means for the whole test run, across potentially multiple projects).

Resources