New Visual Studio installation, tests not running in Test Explorer - visual-studio

This question is very similar to other questions, some of which literally have the text "tests not running in Test Explorer" in the title. But my context is a bit different. In those questions, there was a fair bit of investigation into what might be wrong with the tests. I am fairly confident nothing is wrong with the tests in this case.
I am one of hundreds of developers working on a project, and this project has a large bank of automated tests (though perhaps not as large as it ought to be :-P). Everybody runs tests frequently, and CI triggers run them automatically whenever pull requests are opened and merged. Tests were working fine for me as well. But I have just been given a new laptop with better hardware specs, and I am trying to get it set up. On the new laptop, the project builds just fine (and noticeably faster :-) ), but the automated tests just don't run. I can't figure out why, and I'm looking for suggestions about what to check in this context -- given that there are hundreds of places where the exact same code is working perfectly, I really don't think the tests or test projects themselves are at fault here.
I have observed that the build output, apparently randomly, sometimes does not contain the test adapter files:
Microsoft.VisualStudio.TestPlatform.MSTest.TestAdapter.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.dll
Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.Interface.dll
xunit.runner.visualstudio.testadapter.dll
If these files are missing, then VSTest.Console.exe cannot run the tests either. But usually rebuilding the project results in the files magically appearing, and then VSTest.Console.exe works just fine.
I haven't been able to ascertain why the adapter files are sometimes put into the build output and sometimes not. In either case, the Test Explorer within Visual Studio always fails to run the tests -- it discovers them just fine, putting several thousand items into the forest of trees, but when told to run tests, it just sits there for a minute or two and then returns to the idle state with no output at all in the "Tests" output window.
This is a brand new installation of Visual Studio Enterprise 2019 Preview, the exact same version that is on my old laptop, but on my old laptop the tests run fine. What do?? I don't know what to check next. :-(

Well, I am thoroughly confused. I tried installing new features, I tried checking for system updates, I rebooted multiple times, and tests did not work. So, finally, I decided to make a cut-down minimal test project and see if I could observe any differences in Process Monitor between the two computers. I made a project with two tiny tests, one with NUnit and one with xUnit, and ... they worked. So, I opened up the big project again and hit Run Tests, and ... they worked. I am completely stumped, and the only advice I can offer to anyone finding this question with a similar problem is, just keep trying.

Related

In VS2017, is there a way to copy the names of all the failed unit tests to the clipboard at one time?

It is not my decision personally to have lingering failing unit tests.
In the solution I work with at my company, there are a few failing unit tests that have lingered over time. Occasionally, while making changes, another unit test will start to fail, but it won't be clear which one it is. To figure it out, it's sometimes necessary to copy at least the names of the failing unit tests out into a text file and do a comparison.
In VS2017, in the Test Explorer, you can right-click a unit test and select Copy, and it will copy the name and some other meta-information to your clipboard. However, if you select multiple unit tests, that option disappears. Additionally there doesn't appear to be any "Copy All" option available.
So if you need to copy the names of all of the failing unit tests (other related meta-information is okay), is there a way to do this in Visual Studio 2017 other than manually copying the tests one at a time?
This is not a direct answer to your question, but rather a workaround (though I'd call it a best practice compared to what you seem to be doing):
You seem to have a number of unit tests that produce errors, and for whatever reason you decided not to fix them in time. Fixing them would be the obvious solution, but let's assume there were reasons not to do so.
Now everybody who develops a feature after that decision is in trouble, because the unit test results just became unreliable. Tests might fail, and you will never know if it's your error or whether that test was supposed to fail because of the earlier (bad) decision. Failed tests have transformed from a red/green quality signal into a broken traffic light signaling nothing.
You should mark those tests that fail on purpose so that you know which they are. If you are using MSTest (the Visual Studio default) you can do so by annotating them with the [Ignore] attribute. That way, they will not be run, will not count as failed, but will still appear in the list and remind you of the fact that they still need to be fixed.
That way, your tests are reliable again. Anything red is something you broke. Anything red is something you need to fix. Yellow marks the tests that were broken anyway, and green... well, green is green.
No need to compare lists of test names against each other. Use the means available.
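A minimal sketch of what that looks like in MSTest (the test names here are made up for illustration):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PricingTests
    {
        [TestMethod]
        public void HealthyTest_StaysInTheRedGreenSignal()
        {
            // If this goes red, somebody broke something and it needs fixing.
            Assert.AreEqual(4, 2 + 2);
        }

        [TestMethod]
        [Ignore] // Known failure that still needs fixing; reported as skipped instead of failed.
        public void KnownBrokenTest_IsSkippedButStillListed()
        {
            Assert.Fail("Fails for reasons we have chosen not to fix yet.");
        }
    }

The ignored test still shows up in Test Explorer as a reminder, but it no longer pollutes the pass/fail signal of a run.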

UnitTest keeps hanging in Visual Studio 2010

I recently ran into problems when running all my unit tests at once.
I can debug them and run my tests separately without problems, but when running them all together, the test run keeps hanging halfway through.
This happens:
"Run all tests in Solution"
The first tests pass without problems (slower than usual, though)
At some point it gets stuck. Nothing fails, no exceptions, VS just stops running the pending tests.
When I stop the test run it gets stuck again, and I need to restart VS to abort the test run.
Normally I would expect a bug in my code, but I haven't made any changes to the code being tested since the last successful test run. The only thing I did was run the Performance Wizard - Concurrency profiling.
It always stops at the same place; when I remove some tests from the run it stops at a new place (still without actually entering any of the remaining tests).
I have no clue what is causing this, but it seems like I'm having a problem with a VS setting rather than a code error.
Any suggestions? Does the Performance Wizard change any settings that might influence the way the tests are run?
System details:
Windows 7 Ultimate 64-bit,
Visual Studio 2010 Premium
This sounds like a concurrency issue. It seems that one test changes the test environment in such a way that another test runs into a deadlock. When you remove some tests, the test run order changes and some other test gets stuck.
So I would look for a concurrency issue regarding your test environment/external dependencies.
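To make that concrete, here is a contrived MSTest sketch (the class, the tests, and the shared semaphore are all made up) of how one test can leave shared state in a condition that makes a later test wait forever:

    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class SharedResourceTests
    {
        // Static state survives across tests within the same run.
        private static readonly SemaphoreSlim DbLock = new SemaphoreSlim(1, 1);

        [TestMethod]
        public void EarlierTest_ForgetsToReleaseTheLock()
        {
            DbLock.Wait();
            // ... touch the shared resource ...
            // Missing DbLock.Release(): the semaphore stays taken.
        }

        [TestMethod]
        public void LaterTest_HangsWaitingForTheLock()
        {
            DbLock.Wait(); // Blocks forever when run after the test above.
            try
            {
                // ... touch the shared resource ...
            }
            finally
            {
                DbLock.Release();
            }
        }
    }

Which test the run "stops at" then depends entirely on execution order, which would match the symptom of it hanging somewhere else when you remove some tests.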
I can't really explain why this works, but it solved the problem!
I reverted the '.csproj' file to an earlier version in one of the projects that had been in 'contact' with the Performance Wizard, and now my tests work.
ALSO: be aware that the Performance Wizard can change the solution configuration from 'DEBUG' to 'RELEASE' mode in some cases. This was not the case for me, but it has been a pain for some of my colleagues.

Visual Studio Unit Test - Weird behaviour

Has anyone seen this very strange behaviour before?
I've got a solution with 70 unit tests. All of them pass on my dev machine.
Whenever I commit my changes, our continuous integration process kicks in and the build box will eventually run the same 70 unit tests.
There is only ONE test in the build box that fails all the time.
The error is in one line that simply gets a record from our unit test DB. (I know it sucks having unit tests rely on data, but please don't focus on this as it's not relevant now.)
The weirdest thing: when I log on to the build box myself, open up the same Visual Studio solution and manually kick off the unit tests... Result: ALL PASS!
Has anyone ever had this weird situation? I'm guessing there is some weird thing going on with Cruise Control.NET and MSTest?
Surely your unit test runner produces a good log that shows the exact exception message or error? It's kinda pointless to guess at it, but an "access denied" kind of error would be an obvious candidate. Set up whatever database engine you use (you forgot to mention that too) to give the user account that runs the tests on the build grunt access to the tables.
As said in another answer, it doesn't make too much sense to guess about it when there are detailed logs around...
But because I had this situation several times, here's a guess anyway:
The account which is used by the CI server to run the tests may not have appropriate permissions in the database. This would also explain why the same test succeeds when you run it manually (then with your user account)...
HTH!
Thomas
Thanks for your input, but it wasn't related to credentials at all.
I've found out that other tests that were running before that particular one were leaving my unit test database in an inconsistent state, therefore causing errors to the test in question.
It's not good practice to have your unit tests rely on data, so unless you are extremely bound to it like myself, this is what I recommend to everyone: DO NOT RELY ON DATA TO DO YOUR UNIT TESTING!!!! Make sure you have all the good stuff in place, specifically a good IoC/dependency injection container, so your classes are loosely coupled and you can easily mock any interface you may want to unit test!
If you have system tests that you want to run on your build server or in general, want to be able to run correctly on any machine, including your own, then you must make sure that their states are independent.
In your case, you should have each test's init prepare the DB it uses (either by copying a file-based DB or by emptying/filling a service-based DB). Each test should also attempt to undo its changes (delete the file or empty the DB), but it must not assume that other tests have done so successfully.
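Assuming MSTest, a rough sketch of that pattern (the TestDb helper is hypothetical, standing in for whatever resets your test database):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CustomerRepositoryTests
    {
        [TestInitialize]
        public void BringDatabaseToKnownState()
        {
            // Runs before every test: never trust what earlier tests (or aborted runs) left behind.
            TestDb.DropAndRecreate();    // hypothetical helper
            TestDb.SeedBaselineData();   // hypothetical helper
        }

        [TestCleanup]
        public void TryToUndoChanges()
        {
            // Best effort only; the next test's init must not rely on this having succeeded.
            TestDb.RemoveRowsCreatedByThisTest();   // hypothetical helper
        }

        [TestMethod]
        public void FindsCustomerSeededByInit()
        {
            // ... exercise the repository against the known baseline ...
        }
    }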

run tests in mstest without compiling/building

Is there a way? Do I have to wait for a build every time I start the tests? I want to build from Visual Studio, not from the test run.
thanks
Any time your code changes and you run your tests, it is going to do a build... so technically you can run your tests over and over again and they will only build the first time. But once you have run your tests, why would you run them again without making a code change?
A couple of things I use to make test runs faster:
Check the box for "Only build startup projects and dependencies on Run", located under Options -> Projects and Solutions -> Build and Run.
Learn the shortcut keys
a. "Ctrl+R, T" runs tests in the current context, so if your cursor is inside a test method it will only run that test, but if you do it inside a non-test class it will run all of your tests.
b. "Ctrl+R, Ctrl+T" is the same except it debugs the tests.
c. Others can be found here; those are the 2008 shortcuts, but if you need to reference others you can find them via Google.
Make sure your tests are not calling the database or other time-intensive resources; use mocking and stubbing.
Run only small sets of tests, i.e. if I am working in a service class I run only the service class tests.
Edit: Reading your question again, if you want to build without running tests, you can just go to the menu and click Build -> Build Solution, or press F6. Also, it would be helpful if you indicated which version of Visual Studio you are using, because 2010 is different in the sense that you have to click refresh. Either way, are you able to clarify?
This is an old question, but I keep seeing people ask it, and the issue is still true in VS2017; it's also true of other test frameworks (xUnit, etc.) run from within VS.
I don't know how to make VS stop building all the time. But I do know how to circumvent the compile - run your tests from a console runner, not from within VS. If you're using ReSharper, it has one.
If you aren't using ReSharper, for MSTest, you can start here. https://msdn.microsoft.com/en-us/library/ms182489.aspx
If you aren't using ReSharper, for XUnit, you can start here. https://xunit.github.io/docs/getting-started-desktop.html#add-xunit-runner-ref
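For example, from a developer command prompt and assuming a hypothetical MyTests.dll that has already been built, the invocations look roughly like:

    mstest /testcontainer:MyTests.dll
    xunit.console.exe MyTests.dll

Neither command triggers a Visual Studio build; they just run whatever is currently in the output folder.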
Any change to source code causes compilation, because in order to run the tests VS needs an up-to-date DLL containing them.
If you have already compiled the project, then you can run the tests multiple times without recompiling.
PS: I run MSTest using TestDriven.NET, as for me it is faster.

Is there a gui for nosetests

I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job, but it is not great for giving a visual view of which tests are working or breaking.
I've used several other GUI based unit test frameworks that provide a visual snap shot of the state of your unit tests as well as providing drill down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the details.
Any recommendations?
You can use the rednose plugin to colour up your console output. The visual feedback is much better with it.
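If I remember the usage correctly, it is just a pip install plus an extra flag on the nosetests command line:

    pip install rednose
    nosetests --rednose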
I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set up and then maintain everything, but I could get nice visual reports with failed tests and error messages, plus graphs for failed tests, pylint problems and code coverage over time.
Bitten is a continuous integration plugin for Trac. It has a master-slave architecture: the Bitten master is integrated with and runs together with Trac, while a Bitten slave can be run on any system that can communicate with the master. The slave regularly polls the master for build tasks. If there is a pending task (somebody has committed something recently), the master sends the slave a "build recipe", similar to ant's build.xml; the slave follows the recipe and sends back the results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", or "run nosetests in this directory".
The build reports and statistics then show up in Trac.
I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to start on this GUI as it provides all you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, provided you have the time.
You can convert .UI files created in the PyQt Designer to python scripts with:
pyuic4 -x interface.ui -o interface.py
And you can get a few good tutorials to get a feel for PyQt here. Hope that helps someone :)
I like to open a second terminal, next to my editor, in which I just run a loop which re-runs nosetests (or any test command, e.g. plain old unittests) every time any file changes. Then you can keep focus in your editor window, while seeing test output update every time you hit 'save' in your editor.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can re-run tests using Bash 'watch' instead (but this just runs them every X seconds, which is fine, but it isn't quite snappy enough to keep me happy).
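For example, something along these lines, re-running every couple of seconds (adjust the interval and the test command to taste):

    watch -n 2 nosetests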
Alternatively I wrote a quick python package 'rerun', which polls for filesystem changes and then reruns the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause me any noticeable lag or system load even with large projects (e.g. Python source tree), and works even in complicated cases (see below.)
https://pypi.python.org/pypi/rerun/
A third alternative is to use a more general-purpose 'wait on filesystem changes' program like 'watchdog', but this seemed heavyweight for my needs, and solutions like this that listen for filesystem events sometimes don't work as I expected (e.g. if Vim saves a file by writing a tmp file somewhere else and then moving it into place, the events that fire sometimes aren't the ones you expect). Hence 'rerun'.
