Has anyone seen this very strange behaviour before?
I've got a solution with 70 unit tests. All of them pass on my dev machine.
Whenever I commit my changes, our continuous integration process kicks in and the build box will eventually run the same 70 unit tests.
There is only ONE test in the build box that fails all the time.
The failing line simply fetches a record from our unit test database. (I know it sucks having unit tests rely on data, but please don't focus on that; it's not relevant here.)
The weirdest thing: when I log on to the build box myself, open up the same Visual Studio solution and manually kick off the unit tests, the result is: ALL PASS!
Has anyone ever had this weird situation? I'm guessing there is something odd going on between CruiseControl.NET and MSTest?
Surely your unit test runner produces a good log that shows the exact exception message or error? It's kinda pointless to guess at it, but an "access denied" kind of error would be an obvious candidate. Set up whatever database engine you use (you forgot to mention that too) to give the user account that runs the tests on the build box access to the tables.
As said in another answer, it doesn't make too much sense to guess about it when there are detailed logs around...
But because I had this situation several times, here's a guess anyway:
The account used by the CI server to run the tests may not have the appropriate permissions in the database. This would also explain why the same test succeeds when you run it manually (then it runs under your user account)...
HTH!
Thomas
Thanks for your input, but it wasn't anything related to credentials at all.
I found out that other tests running before that particular one were leaving my unit test database in an inconsistent state, which caused the test in question to fail.
It's not good practice to have your unit tests rely on data, so unless you are extremely bound to it like I am, this is what I recommend to everyone: DO NOT RELY ON DATA FOR YOUR UNIT TESTS! Make sure you have all the good stuff in place, specifically a good IoC/dependency injection container, so your classes are loosely coupled and you can easily mock any interface you want to unit test!
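For example (just a minimal sketch with made-up names like ICustomerRepository and CustomerService, not code from the question): if a class depends on an interface, the test can hand it an in-memory fake and never touch a database.

using Xunit;

// Hypothetical interface and class under test.
public interface ICustomerRepository
{
    string GetCustomerName(int id);
}

public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public string Greet(int customerId)
    {
        return $"Hello, {_repository.GetCustomerName(customerId)}!";
    }
}

// Hand-rolled fake; a mocking library would do the same job.
public class FakeCustomerRepository : ICustomerRepository
{
    public string GetCustomerName(int id) => "Alice";
}

public class CustomerServiceTests
{
    [Fact]
    public void Greet_uses_the_repository_without_touching_a_database()
    {
        var service = new CustomerService(new FakeCustomerRepository());
        Assert.Equal("Hello, Alice!", service.Greet(42));
    }
}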
If you have system tests that you want to run on your build server, or that you generally want to run correctly on any machine (including your own), then you must make sure their states are independent.
In your case, you should have each test's init prepare the DB it uses (either by copying a file-based DB or by emptying/filling a service-based DB). Each test should also attempt to undo its changes (delete the file or empty the DB), but not assume that other tests have done so successfully.
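A minimal sketch of that idea, assuming xUnit (which appears elsewhere in this thread) and a hypothetical TestDatabase helper; the constructor runs before every test and Dispose after it:

using System;
using Xunit;

public class OrderQueryTests : IDisposable
{
    private readonly TestDatabase _db;

    // xUnit creates a new instance of this class for every test,
    // so each test starts from a known database state.
    public OrderQueryTests()
    {
        _db = new TestDatabase();   // hypothetical helper that opens the test DB
        _db.Clear();                // do not trust earlier tests to have cleaned up
        _db.SeedKnownOrders();      // insert exactly the rows these tests need
    }

    [Fact]
    public void Finds_the_seeded_order()
    {
        Assert.True(_db.OrderExists(1));
    }

    // Runs after every test: try to undo our changes, but the constructor
    // above does not rely on this having succeeded.
    public void Dispose()
    {
        _db.Clear();
    }
}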
This question is very similar to other questions, some of which literally have the text "tests not running in Test Explorer" in the title. But my context is a bit different: in those questions there was a fair bit of investigation into what might be wrong with the tests, and I am fairly confident nothing is wrong with the tests in this case.
I am one of hundreds of developers working on a project, and this project has a large bank of automated tests (though perhaps not as large as it ought to be :-P). Everybody is frequently running tests, and triggers fire when pull requests are made and merged to automatically run them then too. Tests were working fine for me as well. But, I have just been given a new laptop with better hardware specs, and I am trying to get it set up. On the new laptop, the project builds just fine (and noticeably faster :-) ), but the automated tests just don't run. I can't figure out why, and I'm looking for suggestions about what to check in this context -- given that there are hundreds of places where the exact same code is working perfectly, I really don't think the tests or test projects themselves are at fault here.
I have observed that the build output, apparently randomly, sometimes does not contain the test adapter files:
Microsoft.VisualStudio.TestPlatform.MSTest.TestAdapter.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.Interface.dll
xunit.runner.visualstudio.testadapter.dll
If these files are missing, then VSTest.Console.exe also cannot run the test. But, usually rebuilding the project results in the files magically appearing, and then VSTest.Console.exe works just fine.
I haven't been able to ascertain why the adapter files are sometimes put into the build output and sometimes not. In either case, the Test Explorer within Visual Studio always fails to run the tests -- it discovers the tests just fine and puts several thousand items into the forest of trees, but when told to run them, it just sits there for a minute or two and then returns to an idle state with no output at all in the "Tests" output window.
This is a brand new installation of Visual Studio Enterprise 2019 Preview, the exact same version that is on my old laptop, but on my old laptop the tests run fine. What do?? I don't know what to check next. :-(
Well, I am thoroughly confused. I tried installing new features, I tried checking for system updates, I rebooted multiple times, and tests did not work. So, finally, I decided to make a cut-down minimal test project and see if I could observe any differences in Process Monitor between the two computers. I made a project with two tiny tests, one with NUnit and one with xUnit, and ... they worked. So, I opened up the big project again and hit Run Tests, and ... they worked. I am completely stumped, and the only advice I can offer to anyone finding this question with a similar problem is, just keep trying.
It is not my decision personally to have lingering failing unit tests.
In the solution I work with at my company, there are a few failing unit tests that have lingered over time. Occasionally, while making changes, another unit test will start to fail, but it won't be clear which one it is. To figure it out, it's sometimes necessary to copy at least the names of the failing unit tests out into a text file and do a comparison.
In VS2017, in the Test Explorer, you can right-click a unit test and select Copy, and it will copy the name and some other meta-information to your clipboard. However, if you select multiple unit tests, that option disappears. Additionally, there doesn't appear to be any "Copy All" option available.
So if you need to copy the names of all of the unit tests that are failing (other related meta-information is fine too), is there a way to do this in Visual Studio 2017 other than copying the tests one at a time manually?
This is not a direct answer to your question, but rather a workaround (though I'd call it a best practice compared to what you seem to be doing):
You seem to have a number of unit tests that produce errors, and for whatever reason you decided not to fix them in time. Fixing them would be the obvious solution, but let's assume there were reasons not to do so.
Now everybody who develops a feature after that decision is in trouble, because the unit test results just became unreliable. Tests might fail and you will never know whether it's your error or whether that test was expected to fail because of the earlier (bad) decision. Failed tests have transformed from a red/green quality signal into a broken traffic light signaling nothing.
You should mark those tests that fail on purpose so that you know which they are. If you are using MSTest (the Visual Studio default) you can do so by annotating them with the [Ignore] attribute. That way, they will not be run, will not count as failed, but will still appear in the list and remind you of the fact that they still need to be fixed.
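A small sketch (the test and the reason text are made up): in MSTest the attribute can carry a message, and xUnit, which is also used in this thread, has an equivalent Skip parameter on [Fact].

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InvoiceTests
{
    // MSTest: the test is skipped and shown as such, not counted as failed.
    [TestMethod]
    [Ignore("Known failure, tracked for a later fix")]
    public void Calculates_vat_for_legacy_invoices()
    {
        // ...
    }
}

// The xUnit equivalent for tests written with [Fact]:
// [Fact(Skip = "Known failure, tracked for a later fix")]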
That way, your tests are reliable again. Anything red is something you broke and need to fix. Yellow marks the tests that were already broken, and green... well, green is green.
No need to compare lists of test names against each other. Use the means available.
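If you still need a one-off list of failing test names in the meantime, one possible workaround (not a Test Explorer feature; the file name below is just an example) is to run the tests with the trx logger, e.g. vstest.console.exe with /Logger:trx, and pull the failed names out of the resulting file:

using System;
using System.Linq;
using System.Xml.Linq;

class FailedTestLister
{
    static void Main()
    {
        // Path to the .trx file written by the trx logger; adjust as needed.
        var doc = XDocument.Load("TestResults.trx");
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";

        // Each executed test is a UnitTestResult element with outcome/testName attributes.
        var failed = doc.Descendants(ns + "UnitTestResult")
                        .Where(r => (string)r.Attribute("outcome") == "Failed")
                        .Select(r => (string)r.Attribute("testName"));

        foreach (var name in failed)
            Console.WriteLine(name);
    }
}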
This question may be a bit nebulous, so please bear with me.
I am using Visual Studio, and I am new to the entire realm of unit testing. One thing I do a lot, though, is use unit tests as a quick and dirty ad-hoc "administration UI" when I need to just TRY things but don't have time to build an actual admin system.
What I mean is ... sometimes I just want to get some data thrown into my database to see how it looks on a page. So I'll make a dirty unit test...
[Fact]
public void install_some_test_data()
{
    using (var database = RavenDocumentStore())
    {
        using (var session = database.OpenSession())
        {
            // create some objects
            // add some objects
            // save some objects
        }
    }
}
Nowhere in here have I really cared about "testing"; I just like the fact that I can right-click, say "Run it", and it goes, without launching the program and without a lot of interaction, etc.
So my question is this:
Is this okay to do? I realize this isn't how a program should be managed long-term, but I was scolded for doing this by another developer simply because I wanted to quickly show them how something they wanted to see worked. Is there really a problem with using these convenient tools as a simpler way of running commands against my database that gives me immediate feedback on whether they passed or failed?
My follow-up question is this: I have had to search pretty hard to find a unit test runner that lets me run individual methods instead of every single test in a class. Is this normal? This whole thing kind of confuses me in general. Even when I do use tests for actual testing, why would I always want to run EVERY test every time? Isn't that extremely wasteful of time and resources? How do you control the order they run in? This seems even more imperative when tests depend on some data existing before they can run properly.
At the moment, I am using xUnit and the unit test runner in ReSharper, but out of the box, Visual Studio doesn't seem to let me run unit tests individually, just as huge batches in a single class.
Am I the only person confused by this?
Sure, it's actually very easy to execute a single unit test (no external test runner required).
Go to TEST -> Windows -> Test Explorer
Now you'll see the Test Explorer window listing your tests.
Now you can right-click the test you want to execute and select 'Run Selected Tests'.
As to your other question: it's hard to tell exactly what you are asking. Are you demonstrating a proof of concept with a unit test? How does this test replace your admin panel?
Just put your cursor on a Test function name. Then go to Test -> Run Selected Scope (or similar - the name changes by context). That test should execute and also create a test list for you automatically.
I recently ran into problems when running all my unit tests at once.
I can debug them and run my tests separately without problems, but when running them all together, the test run keeps hanging halfway through.
This happens:
"Run all tests in Solution"
The first tests pass without problems (slower than usual, though).
At some point it gets stuck. Nothing fails, no exceptions, VS just stops running the pending tests.
When I stop the test run it gets stuck again, and I need to restart VS to abort it.
Normally I would suspect a bug in my code, but I haven't made any changes to the code being tested since the last successful test run. The only thing I did was run the Performance Wizard's concurrency profiling.
It always stops at the same place; when I remove some tests from the run, it stops at a new place (still without actually entering any of the remaining tests).
I have no clue what is causing this, but it seems like a problem with a VS setting rather than a code error.
Any suggestions? Does the Performance Wizard change any settings that might influence the way tests are run?
System details:
Windows 7 Ultimate 64-bit,
Visual Studio 2010 Premium
This sounds like a concurrency issue. It seems that one test changes the test environment in such a way that another test runs into a deadlock. When you remove some tests, the test run order changes and some other tests get stuck.
So I would look for a concurrency issue regarding your test environment/external dependencies.
I can't really explain why this works, but it solved the problem!
I reverted the '.csproj' file to an earlier version in one of the projects that had been in 'contact' with the Performance Wizard, and now my tests work.
ALSO: Be aware that the Performance Wizard can change the solution configuration from 'DEBUG' to 'RELEASE' mode in some cases. That was not the case for me, but it has been a pain for some of my colleagues.
I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI.
My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually), and used to work.
I have setup the application to use command-line switches to open-up a specific form that could be tested, and I can create a set of values and clicks needed to be done.
But I have a few questions before I do anything drastic... (and before purchasing anything)
1. Is it worth it?
2. Would this be a good way to test?
3. The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
4. I would need to set up a test database to do all the automated testing; is there an easy way to automate resetting the test DB, other than drop user cascade, create user, ..., impdp?
5. Is there a way in TestComplete to specify command-line parameters for an exe?
Does anybody have any similar experiences?
I would suggest you plan to use both DUnit and something like TestComplete, as they each serve a different purpose.
DUnit is great for Unit Testing, but is difficult to use for overall application testing and UI testing.
TestComplete is one of the few automated testing products that actually has support for Delphi, and our QA Engineer tells me that their support is very good.
Be aware though that setting up automated testing is a large and time-consuming job. If you rigorously apply unit testing and automated UI testing, you could easily end up with more test code than production code.
With a large (existing) application you're in a difficult situation with regards to implementing automated testing.
My recommendation is to set up Unit Testing first, in conjunction with an automated build server. Every time someone checks anything in to source control, the Unit Tests get run automatically. DO NOT try to set up unit tests for everything straight up - it's just too big an effort for an existing application. Just remember to create unit tests whenever you are adding new functionality, and whenever you are about to make changes. I also strongly suggest that whenever a bug is reported that you create a unit test that reproduces the bug BEFORE you fix it.
I'm in a similar situation. (Large app with lots of dependencies). There is almost no automated testing. But there is a big wish to fix this issue. And that's why we are going to tackle some of the problems with each new release.
We are about to release the first version of the new product, and the first signs are good. But it was a lot of work, so for the next release we definitely need some way to automate the test process. That's why I'm already introducing unit tests. Due to the dependencies these are not real unit tests, but you have to start somewhere.
Things we have done:
Introduced a more OO approach, because a big part of the code was still procedural.
Moved stuff between files.
Eliminated dependencies where possible.
But there is far more on the list of requirements, ensuring enough work for the entire team until retirement.
And maybe I'm a bit strange, but cleaning up code can be fun. Refactoring without unit tests is a dangerous undertaking, especially if there are a lot of side effects. We used pair programming to avoid stupid mistakes. And lots of test sessions. But in the end we have cleaner code, and the amount of new bugs introduced was extremely low.
Oh, and be sure that you know this is an expensive process. It takes lots of time. And you have to fight the tendency to tackle more than one problem in a row.
I can't answer everything, as I've never used TestComplete, but I can answer some of those.
1. Yes. Regression testing is worth it. It's quite embarrassing as a developer when the client comes back to you because you've broken something that used to work. It's always a good idea to make sure everything that used to work still does.
4. Oracle has something called Flashback, which lets you create a restore point in the database. After you've done your testing you can just jump back to that restore point. You can write scripts to use it too, e.g. FLASHBACK DATABASE TO TIMESTAMP (FEB-12-2009, 00:00:00); and so on.
We're looking at using VMWare to isolate some of our testing.
You can start from a saved snapshot, so you always have a consistent environment and local database state.
VMWare actions can be scripted, so you can automatically install your latest build from a network location, launch your tests and shut down afterwards.
Is it worth it?
Probably. Setting up and maintaining tests can be a big job, but when you have them, tests can be executed very easily and consistently. If your project is evolving, some kind of test suite is very helpful.
Would this be a good way to test?
I would say that a proper DUnit test suite is a better first step. However, if you have a large codebase that is not engineered for testing, setting up functional tests is an even bigger pain than setting up GUI tests.
The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
TestComplete has ADO and BDE interfaces. Or you can use the OLE interface in VBScript to access everything that's available.
Is there a way in TestComplete to specify command-line parameters for an exe?
Yes.
One way to introduce unit testing in an (old) application could be to have a "Start database" (like the "Flashback" feature described by Rich Adams).
Then program some unit tests using DUnit to control the GUI.
See "GUI testing with DUnit" at http://delphixtreme.com/wordpress/?p=181
Every test run starts by restoring the "Start database", so that a known set of data can be used.
I would need to set up a test database to do all the automated testing; is there an easy way to automate resetting the test DB?
Use transactions: perform a rollback when the test completes. This should revert everything to the initial state.
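To sketch that pattern in C# terms (the language used in the code earlier in this thread; OrderRepository and Order are made-up names): System.Transactions lets a test run inside a TransactionScope that is never completed, so work done on connections opened inside the scope is rolled back when the scope is disposed.

using System.Transactions;
using Xunit;

public class OrderRepositoryTests
{
    [Fact]
    public void Inserting_an_order_is_rolled_back_after_the_test()
    {
        // Dispose without calling Complete() => the ambient transaction rolls back.
        using (new TransactionScope())
        {
            var repository = new OrderRepository();   // hypothetical class under test
            repository.Insert(new Order { Id = 1 });

            Assert.True(repository.Exists(1));
        }   // rollback happens here; the database is back in its initial state
    }
}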
Recommended reading:
http://xunitpatterns.com/