More on: Help Structuring VS2010 Solutions/Projects and TFS2010

This is a follow-on post to our previous post (Help Structuring VS2010 Solutions/Projects and TFS2010).
We have a few questions about how to structure our VS2010 solutions and projects for the best organization, as well as how to store and use them in TFS2010.
Currently, our code is structured something like:
/OverallAppName
    OverallAppName.sln
    /Client
        /WindowsFormsProject1
            WindowsFormsProject1.sln
        /WindowsFormsProject2
            WindowsFormsProject2.sln
    /Components
        /ClassLibrary1 (common library referenced by other projects)
            ClassLibrary1.sln
        /ClassLibrary2
            ClassLibrary2.sln
        /ClassLibrary3
            ClassLibrary3.sln
        /ClassLibrary4
            ClassLibrary4.sln
        /ClassLibrary5
            ClassLibrary5.sln
    /Server
        /WindowsServiceProject1
            WindowsServiceProject1.sln
        /WindowsServiceProject2
            WindowsServiceProject2.sln
        /WebProject1
            WebProject1.sln
        /WebProject2
            WebProject2.sln
Since we're in the process of moving from VSS to TFS2010, we want to structure all our solutions and projects to be as efficient, logical, maintainable, and easy to reference as possible, and to be easy to use and build in TFS2010. We need some advice on the best way to structure everything with a partitioned solution model.
Any suggestions? How can we structure all these different types of VS2010 projects into a logical layout so that separate groups can work on individual pieces (not the entire solution), we can still use project references, we can store everything in TFS2010 (and build and branch there), and we follow recommended best practices?
Thanks.
(Sorry, I'm not sure the formatting came out very well.)

While I admire your commitment to keeping everything as one large solution, I think sticking to it is going to defeat some of the best features TFS has to offer in the realm of automated builds.
The reason I say that is that you can use builds triggered by check-in to immediately build the code and prove it works (or, better yet, use a gated check-in). The usefulness of these builds is inversely proportional to the time they take to run. So if you have a massive solution that takes 20 minutes to build, it's going to take away from the advantages of those types of builds. If, however, you have several smaller solutions that take about 5 minutes each, then only the modified solutions build on check-in and you know the results sooner.
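To make that concrete, here is a minimal sketch of what each per-solution build boils down to; the solution paths are borrowed from the layout suggested below, and the MSBuild switches are just the usual ones:
    REM One small build per release unit; a check-in only triggers the affected one.
    msbuild Components\Components.sln /t:Build /p:Configuration=Release /m
    msbuild Clients\Client1\Client1.sln /t:Build /p:Configuration=Release /m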
From what you've listed above I'd be inclined to have a solution for each set of artefacts that can be released separately. In your example that's probably one for each of the clients, one for each of the web applications and one for all of the common libraries.
Folder-structure-wise it won't be much different from what you have above (assuming I'm interpreting it correctly):
/OverallApplication
    /Clients
        /Client1
            Client1.sln
            /Client1Project1
                Client1Project1.csproj
            /Client1Project2
                Client1Project2.csproj
            ...
        ...
    /Components
        Components.sln
        /ClassLibrary1
            ClassLibrary1.csproj
        /ClassLibrary2
            ClassLibrary2.csproj
        ...
    /Server
        /WebApp1
            WebApp1.sln
            /WebApp1Project1
                WebApp1Project1.csproj
            /WebApp1Project2
                WebApp1Project2.csproj
            ...
        ...
    /CodeSigningKey
        KeyPair.snk
    /ReferencedAssemblies
        /Manufacturer1
            Manufacturer1Assembly1.dll
            ...
        ...
The common libraries can still be added as project references in the server and client solutions. I've introduced a few new folders for common items, such as the code-signing key and referenced third-party assemblies (such as the Enterprise Library).
On top of that you'll want to employ a branching strategy of some kind to keep Main, Dev and Release code branches separate. I recommend a little light reading of the ALM Rangers branching guide on codeplex for that.
http://vsarbranchingguide.codeplex.com/releases
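For illustration only, creating those branches with the tf command-line client might look like this (the server paths are hypothetical, and TFS 2010 also offers branching from the Source Control Explorer UI):
    REM Pend a branch from Main to Dev, then check it in (paths are hypothetical)
    tf branch $/OverallApplication/Main $/OverallApplication/Dev
    tf checkin /comment:"Branch Main to Dev"
    tf branch $/OverallApplication/Main $/OverallApplication/Release
    tf checkin /comment:"Branch Main to Release"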

Related

tfs2013 share project across many projects

I have a few (3) core projects I want to share across many solutions (12+).
So, say I have 12 websites and they use some shared back end core code (in this case I'm not talking about shared js, css or views - I'm talking about business objects, entity stuff, etc.).
I need to be able to identify which site has which version of the shared code in dev, test, prod, etc. so a developer can get the website code and get the right version of the shared code to develop or patch the website.
And then the MS build server needs to know which version of the shared code to get for the deployment.
To solve this, I'm seeing people branch that core code - which seems absurd to do 12+ times. (I do expect to branch the core code sometimes for things like hot fixes and long running projects.)
I'm also seeing people copy DLLs of the core code and check those in.
I would think I would list the dependencies for my solutions somewhere, based on TFS label names, so that developers can easily get an app running with the right code, and so that, given a TFS label, the build server can get the code for the website along with the proper version of the core code. I'm using TFS and VS 2013 at the moment too, so there's that.
So, is there a way to do this that's straightforward, supportable/scale-able and intuitive? Thanks - Peter
Labels in TFS are very limited. For example, once a label is created, you can't change or update it. If one of your core projects is updated, you need to create a new label for it. Suppose you create that new label and use it for one of your solutions, but then find bugs in the update and need a newer build of the core project to fix them: yet another label gets created, and you have to maintain the dependencies by hand, which is not an easy job.
Moreover, how would you list the dependencies for your solutions based on TFS label names? TFS has no built-in option for this; the only way seems to be to store the list in a text file (or some other file) checked into source control. Every time a developer opens a website application, they would need to check that file first, get the labelled version from the server into their workspace, and work on that.
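To illustrate the overhead, here is roughly what the label dance looks like with the tf client (the label name and server paths are hypothetical), repeated for every core fix:
    REM Stamp the core code at a known-good changeset (names are hypothetical)
    tf label CoreLib-1.2 $/Project/CoreLib /recursive
    REM Every website must then record that it depends on CoreLib-1.2 somewhere,
    REM and every developer and build must fetch the core by that label:
    tf get $/Project/CoreLib /version:LCoreLib-1.2 /recursive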
Usually the purpose of sharing code between projects is to reduce maintenance. There are two main code-sharing paths: source and binary. For the difference between them, take a look at this blog: Code Sharing in Team Foundation Server
Sharing source code between products is a primary cause of quality erosion and elevated bug counts. I would recommend building the core projects separately and sharing the binary output through NuGet, which is the preferable approach.
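As a minimal sketch of that binary-sharing flow with the NuGet command line (the package name, version, and feed location are all hypothetical):
    REM Package the core libraries once per release of the core
    nuget pack Core\Core.csproj -Properties Configuration=Release
    REM Publish to an internal feed; a plain file share is enough to start with
    nuget push Core.1.2.0.nupkg -Source \\buildserver\NuGetFeed
    REM Each website then pins the exact core version it was tested against
    nuget install Core -Version 1.2.0 -OutputDirectory packages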
Also take a look at these similar questions:
Sharing code between solutions in TFS
TFS 2010 Branch Across Team Projects - Best Practices

Reintegrate a branch back to the trunk when sweeping changes have been made to the tree structure

A brief note before I start: there is a lot of explanation required to "set the stage", and it may seem like this is more of a design question than a question about a programming problem. The question is actually about SVN branching and merging, so please read to the end.
Scenario:
I have a large Visual Studio solution with quite a few projects. I'm using SVN, so of course the trunk has my production line of development. This consists of a core DLL assembly, a "main" UI user client, and a handful of "plugin" assemblies that operate by implementing interfaces on the core assembly in order to provide functionality within the UI, and also by utilizing a set of service methods which provide common functionality to all of the plugins (such as persistence logic operations, storage operations for a centralized file store architecture, etc.)
There are also external utilities that I have built over time which must duplicate a lot of the business logic in the plugins. I won't go into much detail because it will ultimately distract from my main question, but just picture, for example, a scheduled service on a server that handles centralized maintenance operations related to a particular plugin's data.
When I initially built this application, I (stupidly) didn't anticipate the need for centralized service tiers, so I architected the core assembly (for better or worse) to be tightly integrated with the presentation layer of the application. In other words, the UI presentation logic needed to integrate the plugins with the user interface, and the business logic needed by the plugins to perform common plugin operations, are all part of the one "core" assembly. Therefore, much of the "shared" logic that exists between the plugins and the centralized services has resulted in duplicated code.
I decided to undertake a major refactoring initiative to pull the common logic (that which is not related to the presentation) out into a "shared" assembly. For this, I created a branch off the trunk. I reorganized common code into the "shared" assembly, and I re-pointed everything in the client application (plugins, etc.) and the external service applications to utilize the shared assembly. In many cases, I also had to rename classes in order to fit their more general purpose going forward. The core assembly remained in place only to broker presentation-layer responsibilities between the plugins and the UI.
Problem:
Now that I have successfully completed the refactoring, I want to reintegrate the branch back into the trunk. Merging is tricky business even in simple cases, but what I'm facing here is, to put it mildly, a lot of tree conflicts. Also, in addition to residing in an entirely new project, the folder structure in the "shared" project is quite a bit different from what it was in the "core" project. Classes are, in many cases, located in different places due to the new mechanisms for using the shared assembly.
I want to maintain the version history of every class from its old home in the core assembly to its new home in the shared assembly. Furthermore, I want to guarantee that the merge is successful. That seems obvious, but in testing a miniature version of this whole scenario, I was never able to get the conflicts to resolve in such a way where my branch features remained entirely intact. Furthermore, the fact that I have renamed some of the classes, as I stated earlier, to suit their more-general roles, makes it very tricky to maintain the version history.
I will note that I am using AnkhSVN which helps in "normal" cases when you rename files to repair the moves, but it doesn't seem to work in these major tree-conflict cases. Also, I know there is a difference in how merges work between different versions of SVN -- I believe it's pre-SVN 1.5 and post-SVN 1.5. I'm using SVN 1.9.3.
I have been trying to figure this out for a few weeks now. I've been poring over the SVN book, TortoiseSVN resources like this, and anything I could find from Google searches, like this, this, and this, among many, many, many others. I feel like I'm going crazy, and I think advanced SVN (and TortoiseSVN) is impossible to learn with the traditional teach-yourself, learn-from-the-web-and-books approach. At any rate, I would greatly appreciate any insight that is out there.
What is the proper methodology when you create a feature branch using SVN and plan on making major tree changes and "moves" (i.e. renames) so that you can reintegrate those changes with the trunk without losing anything?
Congratulations on stepping on the most "popular" rake in SVN: merge hell after refactoring!
There are (at least) two simple rules for your case, born of bitter experience:
Never perform refactoring in SVN.
If you ignore rule 1: in the name of all that is holy and good in the world, don't touch ANYTHING in the trunk while refactoring in the branch.
If you have rejected these righteous covenants, you still have a way to salvation.
Pure SVN way, long and dirty
Merge each and every subtree that is the source of a tree conflict, determining every source and target by hand, like
svn merge NEW_PATH/NEW_NAME old_path/old_name
and finalize this bloody work with a full merge.
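For example (the branch name and subtree paths here are purely hypothetical), you repeat the targeted merge for every relocated subtree, and only then run the ordinary merge from the branch root:
    REM Merge each relocated subtree onto its old location, one pair at a time
    svn merge ^/branches/refactor/Shared/Services Core/Services
    svn merge ^/branches/refactor/Shared/Persistence Core/Persistence
    REM ...and finally the ordinary full merge from the branch root (SVN 1.8+
    REM performs the reintegration automatically)
    svn merge ^/branches/refactor .
    svn commit -m "Reintegrate refactoring branch"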
Tricky Mercurial way (or Git way, but I just hate Git)
Preface: such merges aren't a problem at all for modern DVCSes, which have "bridges" to SVN repositories; thus you can delegate the merging job to the external VCS of your choice and bring the results back (with some limitations and caveats).
I'm too lazy to speak about all the DVCSes and will explain only Mercurial (considering that, coming from an SVN background, it will be the least painful migration).
With HGSubversion, Mercurial can read (pull) from and write (push) to Subversion repositories, but it can't push the results of its own merges back to Subversion; thus it will be a multi-stage operation, with a substitution of the Subversion WC in the process.
A brief synopsis (a command sketch for the key steps follows the list):
1. Install Mercurial (TortoiseHG) and the HGSubversion extension
2. Clone the whole SVN repository into Mercurial in some temporary location (not the current Subversion WC)
3. Merge the branch into the mainline (SVN's trunk becomes the default branch) and resolve any (possible) content conflicts (not tree conflicts)
4. Test (?) the results
5. Perform a full replacement of the Subversion working copy (the WC of trunk, obviously) with the content of the Mercurial working directory (beware of the .svn and .hg folders, respectively)
6. Commit the WC to trunk
7. For beauty and compliance with all the rules, "cheat" the mergeinfo data of trunk (what was committed in step 6 must later be known as a mergeset, although formally it is not one)
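A hedged command sketch of steps 2 and 3 (the repository URL and branch name are hypothetical):
    REM Step 2: clone the whole SVN repository via the HGSubversion bridge
    hg clone svn+https://server/svn/repo hg-temp
    cd hg-temp
    REM Step 3: merge the refactoring branch into trunk (Mercurial's "default")
    hg update default
    hg merge refactor-branch
    hg commit -m "Merge refactoring branch into trunk"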
HTH
PS: migration to Mercurial with HGVS doesn't seem such a totally crazy idea now.

How to manage version control of common DLLs across multiple .NET projects?

The following SO thread shows how to manage DLL references in multiple projects across different solutions. Additionally, I want to know how to manage version control for these dependency DLLs.
Should the DLL (which is published to other, external projects) be committed too, every time a code change happens?
Team Development with TFS Guide (Final Release)
This guide has proven to be invaluable to my team. They describe several scenarios and what the pros and cons are of each. For anyone managing a TFS environment this is a must read.
With regards to the DLLs from external projects: we keep a copy of the source in TFS if we can get access to the source; sometimes you can't, so use your best judgement. We always keep a copy of the DLL in source control. The DLL goes into a "SharedBinaries" folder next to the source code so that they can be branched together.
It is critical that you be able to branch these DLL's along with source. It is also critical that they be in TFS so that you can do an automated build with little or no build machine configuration. My own personal goal while managing TFS is to be ready for a new developer to join the team and with a single get of source code be able to execute a successful build for local debugging.
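As a sketch, bringing a new vendor DLL under version control next to the source might look like this (the paths and file names are hypothetical):
    REM Drop the vendor DLL beside the source so a branch carries both together
    copy ExternalLib.dll C:\src\MyApp\SharedBinaries\
    tf add C:\src\MyApp\SharedBinaries\ExternalLib.dll
    tf checkin /comment:"Add ExternalLib 2.1 to SharedBinaries"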
EDIT: Different department builds DLL
Like all good IT answers, I have to start with "it depends". If the other department is truly segregated from your department, you have little or no knowledge of what they are working on or when they will be working on it, and they just occasionally tell you that they have done some things and you should now incorporate the changes, then I would lean towards the DLL being committed to the repository every time the department consuming it wants to pick up a change.
If, on the other hand, we are really just talking about different teams in the same department, where there is lots of cross-talk and water-cooler communication, then I would expect that you could make something work with just project references.
It sounds to me like it is the former and not the latter situation that you find yourself in today. I would try to get the department that is creating the shared code to "release" it the way Microsoft releases the .NET Framework: get them to just build the API and give you some DLLs and some documentation. Then the groups that are incorporating those DLLs into their products can check them in separately, into a repository under their own control, and isolate themselves from code churn while the department working on these reused DLLs works on the next version of them.
You should take this all with a grain of salt. This is just one guy rambling on about what might be a good idea. There are many more ways to solve these problems, and they are all correct given different circumstances. If you asked 5 people, I wouldn't be surprised if you got 5 different responses.

Are there any drawbacks to having 1 solution per project

We are working on a big application, comprising around 100 projects (40 views, 40 controllers/models, 20 Utilities libraries). We have outsourced the bulk of the work and the deliverables come in fairly randomly.
When we get a deliverable (a project), we need to run FxCop, StyleCop, the associated unit-tests, etc, etc. before committing it to source control. To make this easier, we have mandated that every project has a solution file. This allows us to simply run an automated script on the solution file which tests it before checking it in.
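For illustration, the per-deliverable gate can be as small as the following (the project names and report path are hypothetical, and StyleCop/unit-test steps would be appended in the same way):
    REM Build the deliverable's own solution...
    msbuild ClassLibrary1\ClassLibrary1.sln /t:Rebuild /p:Configuration=Release
    REM ...then run FxCop against the output before allowing the check-in
    FxCopCmd /file:ClassLibrary1\bin\Release\ClassLibrary1.dll /out:FxCopReport.xml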
My question is "Can you think of any drawbacks to having 1 solution for each project?".
Drawbacks we have already discussed include:
Additional maintenance required by developers. This doesn't bother us as we have outsourced development on a fixed-price contract.
SourceSafe bindings in solution file. This could have been a huge issue, but luckily we migrated to TFS about a year ago.
We're in a similar boat, with about 200 projects, many of them in common use across our various solutions of varying sizes.
While a disadvantage is load time, one advantage is debugging - i.e. if your code is calling into other assemblies, then it's nice to have everything in the same solution.
Also, we do keep our unit and integration tests along with our core project (DLL or EXE) all in the same solution, so even on a bare bones template we have at least three per solution.
Ultimately I'd say the largest advantage of a common solution boils down to cross-project debugging, IMO. - But I would never just toss them together unless there was at least this, or some other compelling reason.
On a side note - we do not allow a project without a solution for the reasons you noted above (running FxCop, etc.) plus Continuous Integration. One disadvantage of a large solution with several projects is build time - but it does help to know if messing with a component ended up breaking an unrelated solution/project.
We did discover one drawback in the end. All our projects are in Visual Studio 2008. If we want to upgrade one project to Visual Studio 2010, we realised we would almost certainly need to upgrade every other project as well (as they are all, directly or indirectly, either dependent on or depended on by one another).

Please settle a check out and lock vs update and merge version control debate

I've used source control for a few years (if you count the Source Safe years), but am by no means an expert. We are currently using an older version of SourceGear Vault. Our team currently uses a check-out-and-lock model. I would rather switch to an update-and-merge model, but I need to convince the other developers.
The reason the developers (not me) set up to work as check out and lock was due to renegade files. Our company works with a consulting firm to do much of our development work. Some years ago, long before my time here, they had the source control set up for update and merge. The consultants went to check in, but encountered a merge error. They then chose to work in a disconnected mode for months. When it was finally time to test the project, bugs galore appeared and it was discovered that the code bases were dramatically different. Weeks of work ended up having to be redone. So they went to check out and lock as the solution.
I don't like check out and lock, because it makes it very difficult for 2 or more people to work in the same project at the same time. Whenever you add a new file of any type or change a file's name, source control checks out the .csproj file. That prevents any other developers from adding/renaming files.
I considered making just the .csproj file mergeable, but the SourceGear site says that this is a bad idea, because csproj files are IDE auto-generated and you cannot guarantee that two different VS-generated files will produce the same code.
My friend (the other developer) tells me that the solution is to immediately check in your project. To me, the problem with this is that I may have a local copy that won't build and it could take time to get a build. It could be hours before I get the build working, which means that during that time, no one else would be able to create and rename files.
I counter that the correct solution is to switch to a mergeable model. My answer to the "renegade files" issue is that it was a matter of poor programmer discipline, and that you shouldn't use a weaker tooling choice as a fix for poor discipline; instead you should take action to fix the lack of programmer discipline itself.
So who's right? Is check-out-and-lock a legitimate answer to the renegade-file issue? Or is the .csproj issue far too big a hassle for multiple developers? Or is SourceGear wrong, and it should be OK to set the csproj file to update and merge?
The problem with update and merge that you guys ran into was rooted in a lack of communication between your group and the consulting group, and a lack of communication from the consulting group to your group as to what the problem was, and not necessarily a problem with the version control method itself. Ideally, the communication problem would need to be resolved first.
I think your technical analysis of the differences between the two version control methodologies is sound, and I agree that update/merge is better. But I think the real problem is in the communication to the people in your group(s), and how that becomes apparent in the use of version control, and whether the people in the groups are onboard/comfortable with the version control process you've selected. Note that as I say this, my own group at work is struggling through the exact same thing, only with Agile/SCRUM instead of VC. It's painful, it's annoying, it's frustrating, but the trick (I think) is in identifying the root problem and fixing it.
I think the solution here is in making sure that (whatever VC method is chosen) is communicated well to everyone, and that's the complicated part - you have to get not just your team on board with a particular VC technique, but also the consulting team. If someone on the consulting team isn't sure of how to perform a merge operation, well, try to train them. The key is to keep the communication open and clear so that problems can be resolved when they appear.
Use a proper source control system (svn, mercurial, git, ...)
If you are going to do a lot of branching, don't use anything less recent than svn 1.6. I'm guessing mercurial/git would be an even better solution, but I don't have too much hands-on-experience using those yet.
If people constantly are working on the same parts of the system, consider the system design. It indicates that each unit has too much responsibility.
Never, ever allow people to go offline for more than a day or so. Exceptions to this rule should be extremely rare.
Talk to each other. Let the other developers know what you are working on.
Personally, I would avoid having project files in my repository. But then again, I would never lock developers into one tool. Instead, I would use a build system that generates the project files/makefiles/whatever (CMake is my flavor for doing this; see the sketch below).
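A sketch of that generated-project approach, assuming a CMakeLists.txt already describes the build ("Visual Studio 10" is the CMake generator name for VS2010):
    REM From an empty build folder, generate the solution and project files
    cmake -G "Visual Studio 10" ..\src
    REM Developers open the generated .sln; it never gets committed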
EDIT: I think locking files is fixing the symptoms, not the disease. You will end up having developers doing nothing if this becomes a habit.
I have worked on successful projects with teams of 40+ developers using the update-and-merge model. The thing that makes this method work is frequent merges: the independent workers are continuously updating (merging down) changes from the repository, and everyone is frequently merging up their changes (as soon as they pass basic tests).
Merging frequently tends to mean that each merge is small, which helps a lot. Testing frequently, both on individual codebases and nightly checkouts from the repository, helps hugely.
We are using Subversion with no check-in/check-out restrictions on any files in a highly parallel environment. I agree that the renegade-files issue is a matter of discipline. Not using merge doesn't solve the underlying problem: what's preventing a developer from copying their own "fixed" copy of the code over other people's updates?
Merge is a PITA, but that can be minimized by checking in and updating your local copy early and often. I agree with you regarding breaking check-ins; they are to be avoided. Updating your local copy with checked-in changes, on the other hand, forces you to merge your changes in properly, so that when you finally check in, things go smoothly.
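In Subversion terms, that rhythm is simply the following, repeated many times a day (the commit message is illustrative):
    REM Pull down everyone else's checked-in changes early and often
    svn update
    REM Resolve any conflicts locally, build, run the tests, then...
    svn commit -m "Small, frequent commits keep every merge trivial"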
With regards to .csproj files: they are just text, and they are indeed mergeable if you spend the time to figure out how the file is structured; there are internal references that need to be maintained.
I don't believe any files that are required to build a project should be excluded from version control. How can you reliably rebuild or trace changes if portions of the project aren't recorded?
I am the development manager of a small company with only 3 programmers.
The projects we work on sometimes take weeks, and we employ a big-bang, shock-and-awe implementation style. This means that we have lots of database changes and program changes that have to work perfectly on the night we implement them. We check out a program, change it, and set it aside, because implementing it before everything else would make 20 other things blow up. I am for check out and lock. Otherwise, another person might change a few things without realizing that the program has already had massive changes. And merging only helps if you haven't made database changes or changes to other systems not under source control (Microsoft CRM, and basically any packaged software that is extensible through configuration).
IMO, project files such as .csproj should not be part of the versioning system, since they aren't really source.
They also almost certainly are not mergeable.