Reading from web.config while running Pex explorations - visual-studio

I've just started using Pex to generate parameterized unit tests for my project. However, when I let Pex run its explorations, my code crashes because it cannot read from web.config (ConfigurationSettings.AppSettings has zero elements, to be more precise). The working directory during the explorations is "C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE", which I assume is the root cause.
I know that the supposedly proper way to handle this is to create mock objects for the values I need. However, this would force me to write tons of mock code and wouldn't provide any tangible value IMHO, because I have no problem bundling web.config with the test project.
How do I enable reading from web.config (or app.config) while the Pex explorations execute?

You've answered your own question, I'm afraid: you wouldn't directly access your database from your code, so why do it with your config files? Just put a thin wrapper around your config file settings and stub it out in your tests. You don't have to do it all in one go; start with the piece of code under test and move the direct references behind your wrapper bit by bit. The tangible benefit of doing this is that it makes testing easy.
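For example, a minimal sketch of such a wrapper, assuming your code reads plain string settings (the names IAppSettings, ConfigAppSettings and FakeAppSettings are illustrative, not from any framework):

// Hypothetical wrapper interface; the names are illustrative only.
public interface IAppSettings
{
    string Get(string key);
}

// Production implementation that reads the real config file.
public class ConfigAppSettings : IAppSettings
{
    public string Get(string key)
    {
        return System.Configuration.ConfigurationManager.AppSettings[key];
    }
}

// Trivial in-memory fake for tests and Pex explorations; no file system involved.
public class FakeAppSettings : IAppSettings
{
    private readonly System.Collections.Generic.Dictionary<string, string> values
        = new System.Collections.Generic.Dictionary<string, string>();

    public void Set(string key, string value) { values[key] = value; }

    public string Get(string key)
    {
        string value;
        return values.TryGetValue(key, out value) ? value : null;
    }
}

Code under test asks for an IAppSettings (e.g. as a constructor parameter), so Pex explorations exercise it against the in-memory fake and never touch web.config at all.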
Also, with Pex, if your code is getting fully torn down between each run (whether this is actually the case depends on your code and the tests), you'll be hitting the file system each time, which will have a serious impact on performance.

The Pex developers don't read Stack Overflow often. You'd better ask your Pex-related questions on the forums at http://social.msdn.microsoft.com/Forums/en/pex/threads

Related

What is the $RANDOM_SEED$ file generated by Visual Studio build of C# solution?

We noticed that on a certain dev machine a Visual Studio (2015 update 3) debug build of a C# solution was generating a $RANDOM_SEED$ file alongside every built DLL.
The content of the file is just a single number e.g.
1443972318
Deleting the file(s) then rebuilding resulted in the file being regenerated, with a different number.
This behaviour was also observed when rebuilding a single project in the solution (one which has only the standard C# project refs/dependencies + System.Management).
Note that running a command line build e.g.
msbuild <sln-file>
did not regenerate the file (for build of complete solution or single project).
After a restart of VS, the file is no longer regenerated.
As far as we know this file name is not used in any of our source code, post build steps or internal dependencies.
There are quite a few dependencies on .NET framework classes, including Random and RNGCryptoServiceProvider, and also external dependencies. We don't have complete source code for all these so it's not possible to check exhaustively which if any of the dependencies are responsible.
This is a bit of a shot in the dark, but the question is: has anyone seen anything similar to this?
EDIT
I'm not surprised this has been downvoted - I appreciate it is pretty open ended, but as I'm currently not able to reproduce this and as it could have potentially serious consequences (random number generator attack?) I have posted it anyway. If I am able to repro I will of course update here.
I have the same file.
After a short investigation I found the culprit: this file is created by the NUnit 3.x test adapter.
(You can check AdapterSettings.cs in the NUnit adapter source code.)
The file is used by NUnit to ensure that we use the same random seed value for generating random test cases in both the discovery and execution processes. This is required because the IDE uses two different processes to execute the adapter. It's not actually required (or created) when running the adapter under vstest.console.exe.
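The mechanism is roughly the following (a simplified sketch of the idea only, not the adapter's actual code; outputDir stands for the directory of the built test assembly):

using System;
using System.IO;

static class SeedFile
{
    // Simplified illustration only. The first process to run (discovery)
    // creates the seed file; the second process (execution) finds the
    // file and reuses the same value, so the randomly generated test
    // cases match what discovery reported.
    public static int GetOrCreate(string outputDir)
    {
        string path = Path.Combine(outputDir, "$RANDOM_SEED$");
        if (File.Exists(path))
            return int.Parse(File.ReadAllText(path));
        int seed = new Random().Next();
        File.WriteAllText(path, seed.ToString());
        return seed;
    }
}

That also fits the observations above: the file only matters when the IDE spawns the two separate processes, not when a single command-line runner does both jobs.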

Separate visual studio solution for unit testing?

I have a Visual Studio solution that contains around 18 projects. I want to write unit tests for those projects (by creating a test project containing unit tests for each source project).
Should I use a separate solution that contains all the test projects? Or should I use the partitioned solution approach of Visual Studio 2008 and create a sub-solution for all test projects?
To put unit tests in a separate solution would seem to me to create problems. Inherently, unit tests must be able to reference the types (classes) they are testing. If they live in a separate solution, that implies a binary (file) reference rather than a project reference. That would seem very 'difficult'.
So, to investigate the reasons for the question, and hopefully provide some help, these are the reasons I would put my unit tests in the same solution:
Project referencing. I like to name unit test projects <Name>.Tests, where <Name> is the name of the production assembly holding the types being tested. That way they appear next to each other (alphabetical order) in VS.
With TDD I need rapid switching between production code and unit tests. It is about writing the code, not so much about testing after the fact.
I can select the solution node at the top of the solution pane and say 'run all tests' (I use ReSharper).
The build farm (TeamCity) always runs all tests on all commits. Much easier with them all in one solution.
As I write this I wonder why you would put them in another solution. Unit tests have no meaning outside of the solution with the production code.
Perhaps you can explain why you're asking the question. Can you see a problem with them being in the same solution? 18 projects does not, to me, seem like a lot. The solutions I work on have many more... but if you're using ReSharper (and why wouldn't you?) you will need sufficient RAM in your box.
Hope this is of some help.

Generate Solution with dependencies for building VS 2010 with hundreds of projects

We have a few hundred Visual Studio project files that I need to assemble into a solution for building. We currently have a custom Ruby script, using rake, to do this. But it is fragile, and only passes a few Visual Studio macros ($(TargetDir), $(TargetName), etc.) through, failing on the rest. Plus, the grammar of Ruby rubs me the wrong way, like Perl.
So my question is: given a directory, is there a tool that will recursively find all the .vcxproj and .csproj files and generate a solution file with dependencies? By 'with dependencies' I mean that some projects need to be built before others. I found some other posts here on Stack Overflow that pointed to a tool that generates solution files, but it doesn't generate dependencies, and without dependencies any solution creation tool is completely useless. Does anyone know of something that will do this?
If not a solution file, does anyone know of something that will just emit a dependency list?
P.S.
And before anyone asks: creating a solution file manually is completely out of the question. We simply have way too many project files.
So my question is, given a directory is there a tool that will recursively find all the .vcxproj and .csproj files and generate a solution file with dependencies?
No.
What you're asking for is very reasonable; your approach to the problem is quite rational. Unfortunately, the tools haven't kept up with you. (We had the same problem.)
You're going to have to script that yourself, or otherwise customize tools. That's what we did. Successful approaches I've seen include:
Generate the *.vcproj/*.sln from "reference project definitions", using tools like CMake, QMake, Scons, or Gyp. Our main system currently sits on Scons, with our custom Python code to navigate these dependencies and generate solutions based on projects (spidering dependencies). By default, we generate a "complete" solution for each project (including all required supporting projects), plus a "Master All Projects" solution. It works very well. But it was custom work that took effort, and we extended Scons somewhat to describe our projects (but we simply rely on the Scons generation of *.sln and *.vcproj).
Write a custom tool to "find" these dependencies by parsing all the *.vcproj files in your workspace. This is work, but can be done. Those files can be "tricky" to navigate, but you might be fine with a "good enough" solution that uses the GUIDs as hash keys to generate those dependencies.
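As a rough sketch of that second approach for C# projects (a skeleton under stated assumptions: it reads pre-SDK-style .csproj XML with the usual MSBuild namespace, and only prints the GUID-based dependency list rather than writing a .sln):

using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class ProjectDependencyScanner
{
    static void Main(string[] args)
    {
        // Classic (pre-SDK) project files use this XML namespace.
        XNamespace ns = "http://schemas.microsoft.com/developer/msbuild/2003";
        foreach (var path in Directory.EnumerateFiles(args[0], "*.csproj", SearchOption.AllDirectories))
        {
            var doc = XDocument.Load(path);
            var guid = doc.Descendants(ns + "ProjectGuid").FirstOrDefault();
            Console.WriteLine("{0} ({1}):", Path.GetFileName(path), guid == null ? "?" : guid.Value);
            // Each ProjectReference carries the GUID of a project that
            // must be built before this one.
            foreach (var dep in doc.Descendants(ns + "ProjectReference"))
            {
                var depGuid = dep.Element(ns + "Project");
                if (depGuid != null)
                    Console.WriteLine("  depends on {0}", depGuid.Value);
            }
        }
    }
}

.vcxproj files use similar ProjectReference elements, so the same general approach extends to them.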
I totally agree with you: This type of stuff (project dependencies) is prohibitively difficult to maintain manually when you move beyond "simple" (e.g., many dozens of projects, yes, we also have hundreds).
Sorry. MSVS is a pretty good IDE (intended for iterative development), and a terrible build configuration management system, and not designed to do what we're talking about.
Because I care about your sanity and Your Everlasting Soul, please Please PLEASE do not attempt to write your custom solution in MSBuild.
On a side note, having hundreds of VS projects is a bad idea; it will kill VS performance. See these two white books:
Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
Defining .NET Components with Namespaces (7 pages)

Visual studio solutions with large numbers of projects

I see developers frequently developing against a solution containing all the projects (27) in a system. This raises problems of build duration (5 minutes) and Visual Studio performance (such as IntelliSense latency), plus it doesn't force developers to think about project dependencies (until they get a circular reference issue).
Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?
Let me restate your questions:
Is it a good idea to break down a solution like this into smaller solutions
The MSDN article you linked makes quite a clear statement:
Important Unless you have very good reasons to use a multi-solution model, you should avoid this and adopt either a single solution model, or in larger systems, a partitioned single solution model. These are simpler to work with and offer a number of significant advantages over the multi-solution model, which are discussed in the following sections.
Moreover, the article recommends that you always have a single "master" solution file in your build process.
Are there any potential pitfalls with this approach?
You will have to deal with the following issues (which actually can be quite hard to do, same source as the above quote):
The multi-solution model suffers from the following disadvantages:

You are forced to use file references when you need to reference an assembly generated by a project in a separate solution. These (unlike project references) do not automatically set up build dependencies. This means that you must address the issue of solution build order within the system build script. While this can be managed, it adds extra complexity to the build process.

You are also forced to reference a specific configuration build of a DLL (for example, the Release or Debug version). Project references automatically manage this and reference the currently active configuration in Visual Studio .NET.

When you work with single solutions, you can get the latest code (perhaps in other projects) developed by other team members to perform local integration testing. You can confirm that nothing breaks before you check your code back into VSS ready for the next system build. In a multi-solution system this is much harder to do, because you can test your solution against other solutions only by using the results of the previous system build.
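To make the first two points concrete, here is what the two kinds of reference look like inside a .csproj (the paths and the GUID are placeholders): the file reference pins one specific built DLL, while the project reference lets Visual Studio and MSBuild handle build order and the active configuration automatically.

<!-- File reference: points at one configuration's output, no build ordering. -->
<Reference Include="MyLib">
  <HintPath>..\..\OtherSolution\MyLib\bin\Release\MyLib.dll</HintPath>
</Reference>

<!-- Project reference: build order and configuration handled automatically. -->
<ProjectReference Include="..\MyLib\MyLib.csproj">
  <Project>{11111111-2222-3333-4444-555555555555}</Project>
  <Name>MyLib</Name>
</ProjectReference>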
Visual Studio 2010 Ultimate has several tools to help you better understand and manage dependencies in existing code:
Dependency graphs and Architecture Explorer
Sequence diagrams
Layer diagrams and validation
For more info, see Exploring Existing Code. The Visualization and Modeling Feature Pack provides dependency graph support for C++ and C code.
We have a solution of ~250 projects.
It is okay after installing a patch for Visual Studio 2005 that lets it deal with extremely large solutions quickly [TODO add link].
We also have smaller solutions for teams, with a selection of their favorite projects, but every project added also has to be added to the master solution, and many people prefer to work with it.
We remapped the F7 shortcut (build) to build the startup project rather than the whole solution. That's better.
Solution folders seem to address the problem of finding things well.
Dependencies are only added to top-level projects (EXEs and DLLs), because with static libraries, if A is a dependency of B and B is a dependency of C, A often does not need to be a dependency of C for things to compile and run correctly. This way, circular dependencies are OK for the compiler (although very bad for mental health).
I support having fewer libraries, even to the extent of having one library named "library". I see no significant advantage in optimizing the process memory footprint by bringing in "only what it needs"; the linker should do that anyway at the object file level.
The only time I really see a need for multiple solutions is functional isolation. The required libs for a windows service may be different than for a web site. Each solution should be optimized to produce a single executable or web site, IMO. It enhances separation of concern and makes it easy to rebuild a functional piece of the application without building everything else along with it.
It certainly has its advantages and disadvantages. Anyway, breaking a solution into multiple projects helps you find what you're looking for easily, i.e. if you are looking for something about reporting, you go to the reporting project. It also allows big teams to split the work in such a way that nobody breaks someone else's code...
This raises problems of build duration
You can avoid that by only building the projects that you modified and letting the CI server do the entire build.
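For example, from the command line you can build a single project instead of the whole solution (the path here is hypothetical); its project references are built as needed, but unrelated projects in the solution are skipped:
msbuild src\MyProject\MyProject.csproj /t:Build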
Intellisense performance should be quite a bit better in VS2010 compared to VS2008. Also, why would you need to rebuild the whole solution all the time? That would only happen if you change something near the root of the dependency tree, otherwise you just build the project you're currently working on.
I've always found it helpful to have everything in one solution because I could navigate the whole code base easily.
Is it a good idea to break down a solution like this into smaller solutions that are compilable and testable independent of the "mother" solution? Are there any potential pitfalls with this approach?
Yes it is a good idea because:
You don't want VS to slow down on a solution with dozens of VS projects.
It can be interesting to only focus on a portion of the code; this enforces the notion of code locality, which is a good thing.
But the first important thing to strive for is to have as few VS projects/assemblies as possible. My company published two free white books that explain the pros/cons of using assemblies/VS projects/namespaces to partition a large code base.
Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
Defining .NET Components with Namespaces (7 pages)
The first white book also explains that VS is pretty slow when working with a solution with dozens of projects, and shows tricks for remedying this slowness.

How to generate a multi-part assembly (per folder) in Visual Studio for a custom library project, C#?

Is there a pre-build action or some compiler switch that we can add?
I have just too many projects in our solution at the moment. I want to add new modules and compile them into separate assemblies. I am looking for options where I can avoid adding a new project for each assembly.
I am using Visual Studio 2005.
Also, it will be worthwhile to know if 2008 has better features in this space.
edit #1: There are two development teams working on this project and we want to cut the modules broadly into two verticals and keep the assemblies separate so that the ongoing patches ( post release ) do not overlap with the functionality in two verticals and also the testing footprint is minimized.
Currently the solution has about 8 projects and we need to setup the structure for the second team to start development.
I do not want to end up adding 5 or 6 new projects to the solution, but rather create folders in the existing projects to separate the code for the new team, or some other easy way.
No, Visual Studio is still "one project per assembly". Do you really need to have that many different assemblies?
You may be able to write your own build rules which create multiple assemblies from a single project, but I suspect it's going to lead to a world of pain where Visual Studio gets very confused.
If you could give us more details about why you want lots of assemblies, we may be able to help you come up with a different solution.
EDIT: Having read your updated question, it sounds like you would possibly be better off just working off two branches in source control, and merging into the trunk (and updating from the trunk) appropriately. Alternatively, if the two teams really are working on independent parts of the code, maybe separate projects really is the best solution.
One of the problems (IMO) with Visual Studio is that the files in the projects are listed explicitly - which means that the project files become big merge bottlenecks. I prefer the Eclipse model where any source file under a source path is implicitly included in the build (unless you explicitly exclude it).
Neither Visual Studio 2005 nor 2008 lets you create multi-file assemblies. However, you can run the C# compiler at the command line with the '/addmodule:ModuleName' switch and it'll do what you want. For general details on command-line usage of csc, see this article. For a description of the /addmodule switch, see this one.
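In outline it looks like this (the folder, module and assembly names are made up for illustration): compile one folder into a .netmodule, then attach that module when compiling the final assembly.
csc /target:module /out:TeamB.netmodule TeamB\*.cs
csc /target:library /out:MyLibrary.dll /addmodule:TeamB.netmodule TeamA\*.cs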
That said, you're most likely taking a non-optimal approach here. In normal situations you should not need multi-file assemblies just because you have too many projects. Give more details of your general problem so that people can offer suggestions regarding that.
I'd heed the advice you've been given thus far--if you find yourself asking such questions, there's probably a deeper design issue that's being overlooked--but if you really must do what you're suggesting be done, you have several options. You can hack the project file to allow you to compile files into separate assemblies: the project file is an msbuild file, so there's a lot you can do with it. Also, you can simply use an msbuild file for building your projects and solutions. Or you can use a different build system entirely--NAnt is one example.
The likely problem with these suggestions is that they won't be feasible for your work environment. It's no good to start hacking away at project files that other people on your team use, or to just decide that this or that solution is going to be built using your custom msbuild file. There are many good reasons to use something like a single custom msbuild file, or NAnt, to build your projects, but it's always the wrong decision if it's not made with input from everyone the decision affects.
