Separate Visual Studio solution for unit testing? - visual-studio

I have a Visual Studio solution that contains around 18 projects. I want to write unit tests for those projects (by creating a test project containing unit tests against each source project).
Should I use a separate solution that contains all the test projects? Or should I use the partitioned-solution approach of Visual Studio 2008 and create a sub-solution for all the test projects?

Putting unit tests in a separate solution would, to me, create problems. Unit tests must be able to reference the types (classes) they are testing; if they live in a separate solution, that implies referencing the production code as compiled binaries rather than as projects, which would be very awkward.
So, to investigate the reasons for the question, and hopefully provide some help, these are the reasons I would put my unit tests in the same solution:
Project referencing. I like to name unit test projects ProjectName.Tests, where ProjectName is the name of the production assembly holding the types being tested. That way the two projects appear next to each other (alphabetical order) in VS (see the sketch after this list).
With TDD I need rapid switching between production code and unit tests. It is about writing the code, not so much about testing it after the fact.
I can select the solution node at the top of the solution pane and say 'run all tests' (I use ReSharper).
The build farm (TeamCity) always runs all tests on every commit. That is much easier with them all in one solution.
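To make the ProjectName.Tests convention concrete, here is a minimal sketch of what a test in such a project might look like. MyLibrary, Calculator and the choice of MSTest are assumptions for illustration only; the point is simply that MyLibrary.Tests holds a project reference to MyLibrary and tests its types directly.

    // MyLibrary.Tests\CalculatorTests.cs - hypothetical example; assumes a project
    // reference from MyLibrary.Tests to MyLibrary and the MSTest framework.
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using MyLibrary; // the types under test come straight from the production project

    namespace MyLibrary.Tests
    {
        [TestClass]
        public class CalculatorTests
        {
            [TestMethod]
            public void Add_ReturnsSumOfOperands()
            {
                var calculator = new Calculator();

                var result = calculator.Add(2, 3);

                Assert.AreEqual(5, result);
            }
        }
    }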
As I write this I wonder why you would put them in another solution at all. Unit tests have no meaning outside the solution that holds the production code.
Perhaps you can explain why you're asking the question. Can you see a problem with them being in the same solution? 18 projects does not, to me, seem like a lot. The solutions I work on have many more ... but if you use ReSharper (and why wouldn't you?) you will need sufficient RAM in your box.
Hope this is of some help.

Related

How can I add a unit test for a new class stub?

I would like to add a unit test for a newly minted (usually empty) class into my unit test project, in the appropriate folder, with a single key command. How can I do this? I'd even go so far as writing code to do this, as it would greatly speed up my normal coding practice. I'm not sure where to start looking.
It depends on which version of Visual Studio we are talking about. For Visual Studio 2012/2013 there is the Generate Unit Test extension. It is a pretty decent solution, with some minor quirks, such as no way to define a default test-project name.
If you have access to ReSharper - and I recommend you at least try it out - there are also file templates and live templates, which allow for rather fast method and class generation.
For example, you might use the shortcut Alt+Ctrl+Ins to quickly generate a file from a unit test template (which you will have to create in advance).
Then, using a user-defined live template (e.g. typing test followed by Tab), you can quickly generate a test-method stub. This approach is easy to customize, but you will have to invest some time in creating your templates.
More information on templates can be found in the ReSharper documentation.
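As an illustration, the body of such a live template, once expanded inside a test class, might produce a stub like the one below. The 'test' shortcut, the MSTest attributes and the Arrange/Act/Assert comments are assumptions; ReSharper ships no such template out of the box, so you define it yourself.

    // Hypothetical output of a user-defined ReSharper live template, expanded by
    // typing "test" followed by Tab inside a test class (MSTest attributes assumed).
    [TestMethod]
    public void MethodUnderTest_Scenario_ExpectedResult()
    {
        // Arrange

        // Act

        // Assert
        Assert.Fail("Not implemented yet");
    }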

Unit/Integration Test Organization in a Large Visual Studio Solution

I'm starting to develop and organize tests for a very large Visual Studio solution. (Yes, I know that tests should have been developed along with the code rather than when the project is nearly complete, but things are what they are.)
I've seen similar questions about organizing unit tests in Visual Studio solutions, but I didn't see any that address integration tests as well. I would appreciate some guidance about where to place test projects so that they don't clutter up the already large code base.
Here's the basic hierarchy of things within the solution. (All items not ending in .proj are folders within a project or Solution Folders.)
HardwareServices
    HardwareService1
        HardwareService1.Core.proj
        HardwareService1.Host.proj
        HardwareService1.Service.proj
    HardwareService2
        HardwareService2.Core.proj
        HardwareService2.Host.proj
        HardwareService2.Service.proj
Infrastructure
    MyApp.Database.proj
    MyApp.Infrastructure.proj
    MyApp.ReportViewer.proj
    MyApp.SettingsManager.proj
AppModules
    AppModule1.proj
        Common
        Reports
        Services
        ViewModels
        Views
    AppModule2.proj (similar structure to other AppModules)
    AppModule3.proj (similar structure to other AppModules)
Modules
    ComputeEngine.proj
    Footer.proj
    Header.proj
CommonServices.proj
My thought was to make a Solution Folder called "Tests" and then mimic the hierarchy above, making one test project for every production code project. Within each test project, I would make folders called "UnitTests" and "IntegrationTests".
My focus is to create a consistent naming/organization scheme so that there's no ambiguity about where new tests should go and where to find existing tests. Given the large size of this project/application, I'd like to get the structure pretty solid right out of the gate so that it's not a pain later.
Thank you for your time and advice.
The naming convention that our company adopted was the use of projectName.Tests.Unit and projectName.Tests.Integration.
With your existing structure you would have something like this:
HardwareService1
    HardwareService1.Core.proj
    HardwareService1.Host.proj
    HardwareService1.Service.proj
    Tests
        HardwareService1.Core.Tests.Unit
        HardwareService1.Core.Tests.Integration
If you keep the Tests folder alongside the projects it covers, you don't have to mimic the complete structure again, because the tests sit right next to their respective project.
Side note
Having a consistent Tests.Unit suffix in the project name also helps when running unit tests from your build script, because you can pick up the test assemblies with a wildcard search such as **\*tests.unit*.dll.
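As a minimal sketch (the folder layout and the idea of a custom discovery step are assumptions; most build servers and test runners accept this kind of wildcard directly), the convention could be consumed like this:

    // Hypothetical build-step helper: collect every assembly whose name follows the
    // *.Tests.Unit convention so the list can be handed to a test runner.
    using System;
    using System.IO;

    class FindUnitTestAssemblies
    {
        static void Main(string[] args)
        {
            // Assumption: build output lives somewhere under the supplied path
            // (or the current directory if no path is given).
            var root = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();

            var testAssemblies = Directory.GetFiles(root, "*.Tests.Unit.dll", SearchOption.AllDirectories);

            foreach (var assembly in testAssemblies)
            {
                Console.WriteLine(assembly); // e.g. pipe this list into the test runner on the build server
            }
        }
    }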
At the end of the day, project structure can be very subjective so do what makes sense in your environment and makes sense to your team.

What's the purpose of *.vsmdi? Do I need to source control it?

What's the purpose of a .vsmdi file? Do I need to check it into the source control system?
The VSMDI file is created by Visual Studio when you create a test project for the first time. It contains a list of all tests that Visual Studio can find in your solution assemblies and allows you to divide your tests into so-called test lists. These test lists can be used to categorize your tests and let you select a subset of tests for execution.
You can use this mechanism to run sub-selections of tests. However, you can also (freely) assign multiple test categories to a test, which lets you achieve the same thing in a more flexible way. Given the known issues with VSMDI files, like uncontrolled duplication of these files and obsolete tests being listed with a warning icon, categories may well be the better way to do it.
My overall suggestion is: check in your default generated .vsmdi file. This will prevent Visual Studio from (re-)generating such files on your own and your team members' systems whenever new test projects are added. Then decide between test lists and assigning categories directly to tests based on your own experience. Test lists are easy to start with, but less suitable if you want flexibility across a large set of tests.
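For comparison, assigning categories directly to tests uses the MSTest TestCategory attribute, as in the sketch below; the class and test names are invented for illustration.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderServiceTests // hypothetical tests, shown only to illustrate categories
    {
        [TestMethod]
        [TestCategory("Unit")]
        public void CalculateTotal_SumsLineItems()
        {
            // fast, isolated test
        }

        [TestMethod]
        [TestCategory("Integration")]
        public void SaveOrder_WritesToDatabase()
        {
            // slower test that touches external resources
        }
    }

Test runners can then filter on these categories (for example, MSTest's /category command-line switch), which gives you the same kind of sub-selection as a VSMDI test list without the extra file.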
It's used for testing in Visual Studio. If you don't do testing in Visual Studio, I wouldn't worry about it. But if you do, and you have hundreds of tests, it might be worth keeping.

Visual Studio solutions with large numbers of projects

I see developers frequently developing against a solution containing all the projects (27) in a system. This raises problems of build duration (5 minutes) and Visual Studio performance (such as IntelliSense latency), plus it doesn't force developers to think about project dependencies (until they get a circular-reference issue).
Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?
Let me restate your questions:
Is it a good idea to break down a solution like this into smaller solutions
The MSDN article you linked makes quite a clear statement:
Important Unless you have very good reasons to use a multi-solution model, you should avoid this and adopt either a single solution model, or in larger systems, a partitioned single solution model. These are simpler to work with and offer a number of significant advantages over the multi-solution model, which are discussed in the following sections.
Moreover, the article recommends that you always have a single "master" solution file in your build process.
Are there any potential pitfalls with this approach?
You will have to deal with the following issues (which can actually be quite hard; same source as the quote above):
The multi-solution model suffers from the following disadvantages:
You are forced to use file references when you need to reference an assembly generated by a project in a separate solution. These (unlike project references) do not automatically set up build dependencies. This means that you must address the issue of solution build order within the system build script. While this can be managed, it adds extra complexity to the build process.
You are also forced to reference a specific configuration build of a DLL (for example, the Release or Debug version). Project references automatically manage this and reference the currently active configuration in Visual Studio .NET.
When you work with single solutions, you can get the latest code (perhaps in other projects) developed by other team members to perform local integration testing. You can confirm that nothing breaks before you check your code back into VSS ready for the next system build. In a multi-solution system this is much harder to do, because you can test your solution against other solutions only by using the results of the previous system build.
Visual Studio 2010 Ultimate has several tools to help you better understand and manage dependencies in existing code:
Dependency graphs and Architecture Explorer
Sequence diagrams
Layer diagrams and validation
For more info, see Exploring Existing Code. The Visualization and Modeling Feature Pack provides dependency graph support for C++ and C code.
We have a solution of ~250 projects.
It is workable after installing a patch for Visual Studio 2005 that speeds up handling of extremely large solutions [TODO add link].
We also have smaller solutions for teams, each containing a selection of their favorite projects, but every project added to those also has to be added to the master solution, and many people prefer to work with the master anyway.
We remapped the F7 shortcut (build) to build only the startup project rather than the whole solution. That's better.
Solution folders seem to address the problem of finding things well.
Dependencies are added only to top-level projects (EXEs and DLLs). With static libraries, if A is a dependency of B and B is a dependency of C, A often does not need to be declared as a dependency of C for things to compile and run correctly; handled this way, circular dependencies are acceptable to the compiler (although very bad for mental health).
I support having fewer libraries, even to the extent of having one library named "library". I see no significant advantage in optimizing the process memory footprint by bringing in "only what it needs", and the linker should do that at the object-file level anyway.
The only time I really see a need for multiple solutions is functional isolation. The required libraries for a Windows service may be different from those for a web site. Each solution should be optimized to produce a single executable or web site, IMO. It enhances separation of concerns and makes it easy to rebuild a functional piece of the application without building everything else along with it.
It certainly has its advantages and disadvantages. Breaking a solution into multiple projects helps you find what you are looking for easily, i.e. if you are looking for something about reporting, you go to the reporting project. It also allows big teams to split the work in such a way that nobody does something to break someone else's code ...
This raises problems of build duration
You can avoid that by building only the projects you modified and letting the CI server do the entire build.
IntelliSense performance should be quite a bit better in VS2010 compared to VS2008. Also, why would you need to rebuild the whole solution all the time? That would only happen if you change something near the root of the dependency tree; otherwise you just build the project you're currently working on.
I've always found it helpful to have everything in one solution because I could navigate the whole code base easily.
Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?
Yes it is a good idea because:
You don't want VS to slow down on a solution with dozens of VS projects.
It can be useful to focus on only a portion of the code; this reinforces the notion of code locality, which is a good thing.
But the first and most important thing to strive for is to have as few VS projects/assemblies as possible. My company published two free white books that explain the pros/cons of using assemblies, VS projects and namespaces to partition a large code base.
Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
Defining .NET Components with Namespaces (7 pages)
The first white book also explains that VS is pretty slow when working with a solution that has dozens of projects, and shows tricks to remedy this slowness.

Add all projects to same solution or not?

I am the intranet developer for the company I work for and I have been doing this for the last 5 years. My projects are divided into two solutions, the "Intranet" solution itself and the "Library" solution. The "Library" sln itself has several projects containing the DAL, BLL, etc. The reason I kept them in a separate solution is that I thought "maybe" one day my Library solution could be used in other projects as well - you know, reuse the code I had already written :) Well, that never happened. Now, since it's so much easier to have all projects in the same .sln, I am thinking of just doing that. Is that wise? What would you do if you were in my shoes?
In the past I've used and reused the same project in multiple solutions - I really just see a solution as a particular collection of projects.
For example, we might have different solutions for the same overall piece of software depending on whether we want to be doing unit testing (in its own project) or integration testing (in a separate project), and we'd open the right solution for whatever we're about to do. That way, if you're doing normal coding with unit testing, you don't have to build the integration-test code every time, and vice versa.
The only thing to watch out for is bringing a project into a solution when that project is a dependency of lots of other projects/solutions, and then "accidentally" changing its code without realising it belongs to a side project rather than your main code. Then you can start breaking loads of other projects that depend on it without realising it!
Yes, you can do it! You may still reuse your DAL and BLL, as the project settings are stored in the specific project files (csproj, vbproj, ...). Dependencies are stored there too, so there's no problem and you're good to go. I have an add-in infrastructure, and for each add-in package I need the add-in host, which is included in several solution files. I have never experienced any problems with this. Open up your *.sln file in a text editor to see its contents... it's just links to the projects.
Simply add your library projects to your intranet sln. Keep your library solution as is.
I would personally add them all to the same solution, yes. It doesn't matter whether you plan on using some of the libraries in other projects: you can still add the compiled DLL to those solutions, or you have the option of adding the library as an existing project to the new solutions.
So yes, I add everything to the same solution: GUI projects, libraries, even unit tests. Maybe if your solution becomes incredibly large (20+ projects) it would be better to split it up, but I've never worked on projects that large.
I prefer to have 2 solutions. They both have an identical structure (more or less), but the 2nd contains only reusable infrastructure code that isn't tied to a particular project. The thing is, your project code shouldn't contain framework-like stuff (e.g. an 'IsNumeric()' string extension method) next to serious banking business logic (even if you won't reuse your 'Library'). Keeping them apart just makes things much more readable and maintainable (a small sketch of such a helper follows after the example structures below).
For example:
Solution1:
ProjectName.Core
ProjectName.Infrastructure
ProjectName.Application
ProjectName.UI
ProjectName.IntegrationTests
ProjectName.UnitTests
And
Solution2:
CompanyName.Core
CompanyName.Infrastructure
CompanyName.Application
CompanyName.UI
CompanyName.Tests
And I try not to have more than 10 projects in 1 solution. Otherwise it leads to endless toggling between "unload project"/"reload project".
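To make the "framework-like" point concrete, the kind of helper that belongs in the CompanyName solution rather than next to business logic might look like the following sketch (the class name is invented; only the IsNumeric example comes from the text above):

    // CompanyName.Core - reusable helper with no ties to any particular product.
    public static class StringExtensions
    {
        // Returns true if the whole string parses as a number.
        public static bool IsNumeric(this string value)
        {
            double result;
            return double.TryParse(value, out result);
        }
    }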
I, for my part, have separated the solutions and the projects, leaving me with a big bunch of projects and only a few solution files. I add all the projects I need to new solutions.
The upside is that my workspace contains only the projects I really need, and a change still propagates to all other solutions.
The downside is that it changes in all other solutions too, which means that if you change the API of a widely used library, you'll have to check all your other solutions for incompatibilities.
