Any unit test that includes a call to SELECT data (using LINQ) from my DBContext throws the following error:
The model backing the 'MyDBContext' context has changed since the
database was created. Either manually delete/update the database, or
call Database.SetInitializer with an IDatabaseInitializer instance.
For example, the DropCreateDatabaseIfModelChanges strategy will
automatically delete and recreate the database, and optionally seed it
with new data.
Doing a search for that specific error leads me to believe that I need to include the following line in my Global.asax Application_Start method:
System.Data.Entity.Database.SetInitializer<MyDBContext>( null );
This is supposed to fix a similar error when running the application itself. Unfortunately, I don't get this error when I run my application, and there doesn't seem to be an Application_Start method for my unit test project. Is there any way to tell the unit test project that I'm using a custom database back-end and that it should ignore any changes that have occurred in it?
I added the unit test project after working on my main project for a while, so it's possible I messed it up somehow, but I can't for the life of me figure out what to do. I'm using the built-in unit testing in Visual Studio 2010.
There are two pairs of attributes you can use with the VS unit testing framework to run code before and after each test, or before and after all the tests contained in the class; your Database.SetInitializer call can go in one of these (see the sketch after the listings below):
// Use TestInitialize to run code before running each test
[TestInitialize()]
public void MyTestInitialize()
{
}
// Use TestCleanup to run code after each test has run
[TestCleanup()]
public void MyTestCleanup()
{
}
or:
// Use ClassInitialize to run code before running the first test in the class
[ClassInitialize()]
public static void MyClassInitialize(TestContext testContext)
{
}
// Use ClassCleanup to run code after all tests in a class have run
[ClassCleanup()]
public static void MyClassCleanup()
{
}
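For the original question, a minimal sketch might look like this, assuming MyDBContext is the EF code-first context from the question (the test method here is just a placeholder):

using System.Data.Entity;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyDBContextTests
{
    // Runs once before any test in this class; passing null disables the
    // EF model-compatibility check, so tests run against the existing database.
    [ClassInitialize()]
    public static void MyClassInitialize(TestContext testContext)
    {
        Database.SetInitializer<MyDBContext>(null);
    }

    [TestMethod]
    public void SelectFromContext_DoesNotThrow()
    {
        using (var db = new MyDBContext())
        {
            // Any LINQ SELECT against the context goes here.
        }
    }
}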
Related
While running unit tests in Visual Studio, I was getting 0 Total Tests - 0 passed, 0 failed, 0 skipped, even though the class contained tests.
In NUnit, an exception during test setup loading or execution will produce this result. From the Visual Studio menu, open Debug => Output and select the Tests window to see whether any exceptions are thrown when the tests run. In my case, project1 had an NUnit 2 reference while project2, which referred to project1, had NUnit 3, which caused a conflict and made the tests unable to execute.
If you resolve the exception, it should work.
Check whether you have a .runsettings file in your solution, or whether one is selected under VS -> Test -> Configure Run Settings. If one is selected, deselect it and remove the file. This fixed it for me.
Ensure the access modifier of the class is public instead of internal. If it is internal, none of the [Fact]s in the class will register as tests within the Test Explorer, nor will they run.
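A sketch of what xUnit expects (class and method names here are hypothetical):

using Xunit;

public class CalculatorTests // must be public, or xUnit will not discover the tests
{
    [Fact]
    public void Add_ReturnsSum()
    {
        Assert.Equal(4, 2 + 2);
    }
}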
Installing the Microsoft.NET.Test.Sdk package from the NuGet package manager solved my issue.
I also have the xunit and xunit.runner.visualstudio packages installed.
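If it helps, a sketch of installing those packages from the NuGet Package Manager Console:

Install-Package Microsoft.NET.Test.Sdk
Install-Package xunit
Install-Package xunit.runner.visualstudio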
In case anyone is still struggling with this, installing "MSTest.TestAdapter" solved my issue.
For me a simple "Clean Solution" worked.
In my case the issue was different. In VS, unit tests are identified based on the test class. Here, there was no access modifier on the class, which caused the issue: VS was unable to identify the test methods for that class.
class ControllerTests
{
    public Controller _Controller;

    [TestMethod()]
    public void SomeTest() // placeholder method so the attribute has a target
    {
    }
}
After adding public to the class, it worked fine.
public class ControllerTests
{
    public Controller _Controller;

    [TestMethod()]
    public void SomeTest() // placeholder method so the attribute has a target
    {
    }
}
For MSTest users - if you have a method marked with the [ClassInitialize] attribute, make sure that it is both public and static, and has a single parameter of type TestContext, otherwise the test file will not run.
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
// initialization code...
}
For C#/.NET Playwright:
I've seen this a couple of times, and my issue was that the return type of an async test method was void instead of Task. It should be as follows:
[TestMethod]
public async Task TestMethod1()
{
    // test body...
}
In my project, I have acceptance tests which take a long time to run. When I add new features to the code and write new tests, I want to skip some existing test cases for the sake of time. I am using Spring 3 and JUnit 4 with SpringJUnit4ClassRunner. My idea is to create an annotation (@Skip or something) for the test class. I am guessing I would have to modify the runner to look for this annotation and determine from system properties whether a test class should be included when testing. My question is: is this easily done? Or am I missing existing functionality somewhere that will help me?
Thanks.
Eric
Annotate your class (or unit test methods) with @Ignore in JUnit 4 and @Disabled in JUnit 5 to prevent the annotated class or unit test from being executed.
Ignoring a test class:
@Ignore
public class MyTests {
    @Test
    public void test1() {
        assertTrue(true);
    }
}
Ignoring a single unit test:
public class MyTests {
    @Test
    public void test1() {
        assertTrue(true);
    }

    @Ignore("Takes too long...")
    @Test
    public void longRunningTest() {
        // ...
    }

    @Test
    public void test2() {
        assertTrue(true);
    }
}
mvn install -Dmaven.test.skip=true
builds your project without running the tests (maven.test.skip also skips compiling them), and
mvn -Dtest=TestApp1 test
runs only the test class you name.
I use Spring profiles to do this. In your test, autowire in the Spring Environment:
@Autowired
private Environment environment;
In tests you don't want to run by default, check the active profiles and return immediately if the relevant profile isn't active:
@Test
public void whenSomeCondition_somethingHappensButReallySlowly() throws Exception {
    if (Arrays.stream(environment.getActiveProfiles()).noneMatch(name -> name.equalsIgnoreCase("acceptance"))) {
        return;
    }
    // Real body of your test goes here
}
Now you can run your everyday tests with something like:
> SPRING_PROFILES_ACTIVE=default,test gradlew test
And when you want to run your acceptance tests, something like:
> SPRING_PROFILES_ACTIVE=default,test,acceptance gradlew test
Of course that's just an example command line assuming you use Gradle wrapper to run your tests, and the set of active profiles you use may be different, but the point is you enable / disable the acceptance profile. You might do this in your IDE, your CI test launcher, etc...
Caveats:
Your test runner will report the tests as run, instead of ignored, which is misleading.
Rather than hard code profile names in individual tests, you probably want a central place where they're all defined... otherwise it's easy to lose track of all the available profiles.
Dear fellows from Stack Exchange.
I'm trying to test if my Custom Model Binder is being added to the ModelBinderProviders.BinderProviders collection.
I decided to activate this through WebActivator, to avoid messing with Global.asax.
Everything works fine except the test:
I tried using the WebActivator.ActivationManager.Run() method, but my things weren't loaded.
I've something like this in my test:
[TestMethod]
public void TemplateModelBinderProvider_Should_Be_Registered_In_BinderProviders()
{
WebActivator.ActivationManager.Run();
IModelBinderProvider templateModelBinderProvider = ModelBinderProviders.BinderProviders
    .Where(x => x is TemplateModelBinderProvider)
    .FirstOrDefault();
Assert.IsNotNull(templateModelBinderProvider);
}
And this is my App_Start class:
[assembly: WebActivator.PreApplicationStartMethod(typeof(MVC.App_Start.MVCBindings), "Start")]
namespace MVC.App_Start
{
public static class MVCBindings
{
public static void Start()
{
ModelBinderProviders.BinderProviders.Add(new TemplateModelBinderProvider());
}
}
}
Sorry you're having problems with the piece of code I wrote.
I don't have access to the source code right now but will take a look in the evening (UK time).
Do you think you could send me your solution so I could replicate it locally? My email is jkonecki at gmail.com
UPDATE
I have received your source code, but unfortunately it contains references to libraries I cannot obtain, so I cannot compile it.
I have created a separate solution (emailed to you) with MVC3 web app and unit test projects that uses your custom model binder provider. There are two tests that prove that the ActivationManager.Run method properly registers a custom provider.
Try debugging your unit test to make sure that Run method calls your static Start method.
WebActivator source code is here - you might want to get it and step through.
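One way to check that is a hypothetical diagnostic flag (not part of the original code) set inside the Start method of the MVCBindings class from the question (keeping its PreApplicationStartMethod attribute as-is) and asserted after Run:

using System.Web.Mvc;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class MVCBindings
{
    public static bool StartWasCalled; // hypothetical flag, added only for diagnosis

    public static void Start()
    {
        StartWasCalled = true;
        ModelBinderProviders.BinderProviders.Add(new TemplateModelBinderProvider());
    }
}

[TestClass]
public class ActivationDiagnosticTests
{
    [TestMethod]
    public void Run_Should_Invoke_Start()
    {
        WebActivator.ActivationManager.Run();
        Assert.IsTrue(MVCBindings.StartWasCalled);
    }
}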
Using LINQ to SQL and Server Explorer, I mapped to a login-validation stored proc. So I wrote the following code:
ClientReportingDataContext db = new ClientReportingDataContext();
var data = db.ADMIN_LoginValidation(login, password);
It throws an exception on the following line:
public ClientReportingDataContext() :
base(global::System.Configuration.ConfigurationManager.ConnectionStrings["FeedsConnectionString"].ConnectionString, mappingSource)
Exception thrown:
Object reference not set to an instance of an object.
I'm calling this function from a unit test class. I can see FeedsConnectionString in web.config.
I put the web.config in the unit tests folder, and also under debug and debug/bin. Not sure what I'm missing.
Thanks in advance for any advice.
For a unit test,
ConnectionStrings["FeedsConnectionString"].ConnectionString
won't be reading from your web.config file; it will be reading from the application configuration file for the test runner. Therefore, unless you've put FeedsConnectionString in the application configuration file for your test runner,
ConnectionStrings["FeedsConnectionString"]
is null and so
ConnectionStrings["FeedsConnectionString"].ConnectionString
is going to throw a NullReferenceException.
This is why testing and application configuration files don't get along well.
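So one fix is to put the connection string in the test project's own app.config; a sketch, with placeholder server and database names:

<configuration>
  <connectionStrings>
    <add name="FeedsConnectionString"
         connectionString="Data Source=YOUR_SERVER;Initial Catalog=YOUR_DB;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>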
You should consider the following:
public ClientReportingDataContext(string connectionString) :
base(connectionString, mappingSource)
Then inject your connection string in your test.
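A usage sketch in a test (the connection string value is a placeholder; ADMIN_LoginValidation is the stored proc mapping from the question):

[TestMethod]
public void AdminLoginValidation_ReturnsResult()
{
    // Placeholder connection string for a test database; adjust as needed.
    string connectionString = "Data Source=YOUR_SERVER;Initial Catalog=YOUR_DB;Integrated Security=True";
    var db = new ClientReportingDataContext(connectionString);
    var data = db.ADMIN_LoginValidation("login", "password");
    Assert.IsNotNull(data);
}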
In one of my projects I noticed that, while executing unit tests under VSTS 2008, the VSTestHost process's memory consumption grows. As I have very many tests in my solution, it eventually leads to an OutOfMemoryException.
That looks very strange to me, as I was sure that MSTest creates a new AppDomain for each unit test. Otherwise how would it reset static fields?
But if an AppDomain is created for each test, then memory shouldn't leak. But it does.
So the question is: should VS create an AppDomain for each test class or not? If yes, how can I check that it does?
I tried tracing through Process Explorer and the Performance snap-in. The value of "Total appdomains unloaded" is always 0 during the test run.
MSTest creates one AppDomain per test assembly, unless you are using /noisolation, in which case there is no AppDomain isolation.
If you are seeing leaks, it's probably a bug in either your test code or your product code. Make sure you aren't stuffing things into dictionaries and leaving them there.
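A sketch of the kind of leak meant here (all names hypothetical):

using System.Collections.Generic;

public static class TestScratchpad
{
    // A static dictionary that only ever grows. Because statics live for the
    // lifetime of the AppDomain, and MSTest uses one AppDomain per test
    // assembly, these entries accumulate across the whole test run.
    private static readonly Dictionary<string, byte[]> Items =
        new Dictionary<string, byte[]>();

    public static void Remember(string key, byte[] payload)
    {
        Items[key] = payload; // never removed => memory grows test after test
    }
}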
I don't think the unit test engine creates a new AppDomain for each test. Since creating an AppDomain is a relatively expensive operation, doing so for each test would slow down execution of unit tests considerably!
Visual Studio 2008 uses a separate executable called vstesthost.exe to run unit tests. VS communicates with vstesthost.exe (how it does this I don't know) to tell it which tests to run. vstesthost.exe returns the execution results to VS, which displays them.
If you are getting OutOfMemoryExceptions when running your unit tests I would say that's a strong indicator that your code under test is actually not cleaning things up. Are you sure that you aren't retaining handles to unmanaged objects/memory? I would recommend running your unit tests under a Performance Analysis (you can do that by finding the unit test under the "Test View", right-clicking on it, and selecting "Create Performance Session"). This might shed some light at least on your object allocations.
I was wrong about having separate AppDomains for each unittest.
Here's evidence:
A singleton:
public class Singleton
{
public static Singleton Instance = new Singleton();
private Guid _token;
private Singleton()
{
_token = Guid.NewGuid();
}
public Guid Token
{
get { return _token; }
}
}
and two tests:
[TestClass]
public class UnitTest2
{
[TestMethod]
public void TestMethod1()
{
Console.WriteLine(Singleton.Instance.Token);
}
}
[TestClass]
public class UnitTest1
{
[TestMethod]
public void TestMethod1()
{
Console.WriteLine(Singleton.Instance.Token);
}
}
When executed, both tests output the same GUID.
I've seen the same problem with large test runs. My theory is the following: memory exhaustion in this case is due to the fact that MSTest test result files are XML, so it needs to keep all the log results in memory until the end of the test run before serializing them to disk. Hurray for XML :-)
I posted this problem as a Connect issue a while back, and it should have been fixed in MSTest 10 (going 64-bit), but I haven't been able to verify this yet because of all the other problems we have moving to VS2010 and .NET 4.0.
The only way to dispose of a singleton is to unload the AppDomain. A singleton is a static holding onto itself, so it's basically a circular reference. True singletons do not get disposed until the AppDomain goes away.
This does not seem to be solved in MSTest 2010. I am experiencing a lot of similar issues. Why does garbage collection not seem to work in unit tests?
My understanding was that the UT framework took care of cleaning up after all executed tests, but this does not seem to be the case with some of the singleton patterns we have in code.
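One hedged workaround, assuming you can modify the singleton: expose a reset hook and call it from a [TestCleanup] method so each test starts from a fresh instance. The Reset method below is hypothetical and not part of the original Singleton class:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Singleton
{
    public static Singleton Instance = new Singleton();

    private Singleton() { }

    // Hypothetical test hook: replaces the cached instance so the old one
    // becomes collectible. Not part of the original class.
    public static void Reset()
    {
        Instance = new Singleton();
    }
}

[TestClass]
public class SingletonTests
{
    [TestCleanup]
    public void Cleanup()
    {
        Singleton.Reset(); // give the next test a fresh singleton
    }
}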