TeamCity: Deploying specific projects in feature branches from large repositories - visual-studio-2010

I've yet to find suitable advice on a sustainable continuous deployment workflow that fits our organization's applications. We're currently using a single, large repository to house all our source code, with multiple VS2010 *.sln files to group related products/tools. We're also using a branch strategy similar to the develop/master strategy outlined in this great article, though via Mercurial rather than Git. The applications vary widely: public websites, internal websites, Windows services, utilities, and development tools.
Even some of the comprehensive examples I found don't really address deployment at various levels within the repository (unless I missed it).
My default build points at the main.sln at the root of the repository, building all projects in the repository. This configuration is also set to build any feature branches.
The feature branches are branches of the entire repository, but I don't want to deploy everything to test, just the contents of the relevant sub-solution. I also don't want to require branches be merged to develop prior to being built. How do I indicate to TeamCity (or my repository) that I only want to deploy specific projects, which could vary between feature branches?
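For illustration, here's the direction I've been considering (all names hypothetical): a separate deploy configuration per sub-solution, with feature branches exposed through the VCS root's branch specification, e.g.
+:refs/heads/develop
+:refs/heads/(feature/*)
and each deploy trigger narrowed with a branch filter such as +:feature/websites-* for the websites sub-solution. I'm not convinced this scales as branches multiply, hence the question.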
Note: This is similar to this SO question, but with emphasis on multiple smaller projects that have different usages/release requirements.

Related

How to Structure Projects for Multiple Xamarin Apps

My team is working on translating several legacy mobile applications to Xamarin Forms apps. Currently each application is in its own solution, which is not ideal given that they all share a common set of backend software libraries. We were planning to consolidate all the smaller solutions into a single solution containing the apps as well as the common libraries.
However, one of my teammates brought up a valid concern: with a single Xamarin Forms app, several projects get generated (core, Android, iOS, etc.), with the eventual result of a generally unwieldy solution. I agree with him that this setup probably would not scale too well as we add more apps -- even if we group projects in solution folders, Visual Studio will eventually slow to a crawl once a certain number of projects exist in the solution.
So we are considering just going back to having each app in its own solution, each solution containing the few Xamarin Forms projects for that app, as mentioned above. But this brings us back to the question of how to reasonably manage the shared library code. My current thought would be to just use shared project(s) for the libraries, or maybe assemble them into NuGet package(s) the app solutions would consume. Am I on the right track here, or does anyone know of a better way to do this?
There are several different ways to manage a shared code project using subtrees, submodules, NuGet packages, etc. There are pros and cons to each so it's best to decide based on the expected use case for that project.
Subtrees essentially take a copy of the remote repo and pull it into the parent repo. This makes it easy to pull in changes from the remote repo, but if changes are expected to be pushed back it can be significantly more difficult, since the subtree has no knowledge of the remote repo. While it is possible to push changes back, it can take a significant amount of time depending on the amount of history in the repos.
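For reference, a minimal subtree flow might look like this (repo URL and prefix are hypothetical):
# take a copy of the shared repo into libs/shared, squashing its history
git subtree add --prefix=libs/shared https://example.com/shared.git master --squash
# later, pull in upstream changes
git subtree pull --prefix=libs/shared https://example.com/shared.git master --squash
# pushing local changes back upstream (this is the slow/awkward part)
git subtree push --prefix=libs/shared https://example.com/shared.git fix-branch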
Submodules are similar to subtrees, except that instead of taking a copy, the parent tracks the remote repo via the specific commit it's pointed at. This can essentially be thought of as another repo inside the parent, which makes pushing changes back to the remote repo much easier, at the cost of making pulling/updating from it a little more difficult.
NuGet packages are extremely convenient to install, update, and release to others without having to make the source code public, but that comes with a bit more initial setup to generate each package version, and at the cost of being more difficult to debug than with the actual source code. It is a particularly good option if the shared code library will be distributed to others.
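A sketch of that packaging step with the dotnet CLI (project name, version, feed URL, and key are all hypothetical):
# build a versioned package from the shared library project
dotnet pack SharedLib/SharedLib.csproj -c Release -o ./nupkgs /p:PackageVersion=1.2.3
# publish it to a private feed the app solutions can consume
dotnet nuget push ./nupkgs/SharedLib.1.2.3.nupkg --source https://nuget.example.com/v3/index.json --api-key $NUGET_KEY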
For most projects, if changes are expected to be made to the shared project from a consuming one, I'd recommend a repo for each project, with the shared one set up as a submodule in each. It does take a bit of learning to get used to the different processes of checking out and updating a submodule, but it isn't all that difficult and is worth learning the few git commands required. The docs provide a great example of how to get started using submodules.
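A minimal sketch of that submodule workflow (URLs and paths are hypothetical):
# add the shared library as a submodule of an app repo
git submodule add https://example.com/shared.git libs/shared
git commit -m "Add shared library as a submodule"
# fresh checkouts need the submodule initialized too
git clone --recurse-submodules https://example.com/app.git
# move the app to a newer commit of the shared library
git -C libs/shared pull origin master
git add libs/shared
git commit -m "Bump shared library"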

How to manage stable binaries and avoid risk of CI rebuilds when install packaging?

I am looking for a tool to manage the collection of binary files (input components) that make up a software release. This is a software product and we have released multiple versions each year for the last 20 years. The details and types of files may vary, but this is something many software teams need to manage.
What's a Software Release made of?
A mixture of files go into our software releases, including:
Windows executables/binaries (40 DLLs and 30+ EXE files).
Scripts used by the installer to create a database
API assemblies for various platforms (.NET, ActiveX, and Java)
Documentation files (HTML, PDF, CHM)
Source code for example applications
The full collected files for a single version of the release are about 90MB. Most are built from source code, but some are 3rd party.
Manual Process
Long ago we managed this manually.
1. When starting each new release, the files used to build the last release would be copied to a new folder on a shared drive.
2. The developers would manually add or update files in this folder (hoping nothing was lost or deleted accidentally).
3. The software installer script would be compiled using the files in this folder to produce a SETUP.EXE (output).
4. Iterate steps 2 and 3 during validation & testing until release.
Automatic Process
Some years ago we adopted CI (building our binaries nightly or on-demand).
We resorted to putting 3rd party binaries under version control, since they usually don't change very often.
Then we automated the process of collecting & updating files for a release based on the CI build outputs. Finally we were able to automate the construction of our SETUP.EXE.
Remaining Gaps
Great so far, but this leaves us with two problems:
Rebuilding Assemblies
The CI mostly builds projects when something has changed, but when forced it will re-compile a binary that doesn't have any code change. The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?).
Latest vs Stable
Mostly our CI machine builds the latest versions of each project. In some cases this is ok, but often we want to release an older tested or stable version. To do this we have separate CI projects for the latest and stable builds - this works but is clumsy.
Thanks for your patience if you've got this far :-)
I Still Haven't Found What I'm Looking For
After some time searching for solutions it seems it might be easier to build our own solution, but surely someone else has solved these problems before!?
What we want is a way to store and manage binary files (either outputs from CI, or 3rd party files) such that each is tagged with a version (v1.2.3.4) that allows:
The CI to publish new versions of each binary (but reject rebuilt versions that already exist).
The development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
package name
version
path/destination in the release folder
The automatic packaging script to use the recipe to collect the required files and compile the install package (e.g. SETUP.EXE). A sketch of what I have in mind follows.
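To make that concrete, here is the kind of recipe plus collection script I'm imagining; the file format, URLs, and paths are all hypothetical:
# recipe.txt, one line per component: <package> <version> <destination>
#   Core.dll       1.2.3.4  bin
#   ApiGuide.pdf   1.2.0.0  docs
while read -r pkg ver dest; do
    mkdir -p "staging/$dest"
    # fetch the exact tested version from the artifact store
    curl -fsS -o "staging/$dest/$pkg" "https://artifacts.example.com/$pkg/$ver/$pkg"
done < recipe.txt
# staging/ then feeds the installer script that compiles SETUP.EXE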
I am aware of past debates about storing binaries in a VCS. For now I am looking for a better solution. That approach does not appear ideal for long-term ongoing use (e.g. how to prune old binaries)... amongst other issues.
I have tried some artifact repositories currently available. From my investigation these provide a solution for component/artifact storage and version control. However they do not provide tools for managing a list of components/artifacts to include in a software release.
Does anybody out there know of tools for this?
Have you found a way to get your CI infrastructure to address these remaining issues?
If you're using an artifact repository to solve this problem, how do you manage and automate the process?
This is a very broad topic, but it sounds like you want a release management tool (e.g. BuildMaster, developed by my company Inedo), possibly in conjunction with a package management server like ProGet (which you tagged, and is how I discovered this question).
To address some of your specific questions, I'll pair each with the feature that would solve the problem:
A mixture of files go into our software releases, including...
This is handled in BuildMaster with artifacts. This video gives a basic overview of how they are manually added to releases and deployed to a file system: https://inedo.com/support/tutorials/buildmaster/deployments/deploying-a-simple-web-app-to-iis
Of course, once that works to satisfaction, you can automate the import of artifacts from your existing CI tool, create them from a BuildMaster deployment plan itself, pull them from your package server, whatever. Down the line you can also have your CI tool call the BuildMaster release management API to create a release and automatically have it include all the artifacts and components you want (this is what most of our customers do now, i.e. have a build step in TeamCity create a release from a template).
Rebuilding Assemblies ... The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?)
You can mostly assume they are functionally equivalent, but it's only the times that they are not that problems arise. This is especially true with package managers that do not lock dependencies to specific version numbers (e.g. NuGet, npm). You should be releasing exactly the same binary that was tested in previous environments.
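As an aside, NuGet can enforce that locking with lock files; the flags below are real dotnet CLI options, though the solution name is hypothetical:
# opt projects into lock files (generates packages.lock.json)
dotnet restore MyApp.sln /p:RestorePackagesWithLockFile=true
# on CI: fail the restore if resolution drifts from the lock file
dotnet restore MyApp.sln --locked-mode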
[we want] the development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
This is handled with releases. A developer can choose the release's name, dates, etc., and associate it with a pipeline (i.e. a set of testing stages that the artifacts are deployed to), then "click the deploy button" and have the automation do all the work.
Releases are grouped by "application", similar to a project in TeamCity. As a more advanced use case, you can use deployables. Deployables are essentially individual components of an application you include in a release; in your case the "Documentation" could be a deployable, and maybe contain an artifact of the .pdf and .docx files. Deployables from other applications (maybe a different team is responsible for them, or whatever) can then be referenced and "included" in a release, or you can reference ones from a past release.
Hopefully that provides some overview and fits your needs. Getting into this space is a bit overwhelming because there are so many terms, technologies, and methodologies, but my advice is to start simple and then slowly build upon it, e.g.:
deploy a single, manually uploaded component through BuildMaster to a share drive, then manually deploy it from there
add a deployment plan that imports the component
add a second plan and associate it with the 2nd stage that takes the uploaded artifact and deploys it to the target, bypassing the need for the share drive
add more deployment plans and associate them with pipeline stages and promote through them all to "close out" a release
add an agent and deploy to that instead of the default localhost server
add more components and segregate their deployment with deployables
add event listeners to email team members at points in the process
start adding approvals if you require gated "sign-offs"
and so on.

Do composite builds make multi-module builds obsolete?

I have a hard time understanding when to use composite builds vs multi-module builds. It seems both can be used to achieve similar things.
Are there still valid use cases for multi-module builds?
In my opinion, a multi-module build is a single system which is built and released together. Every module in the build should have the same version and is likely developed by the same team and committed to a single repository (git/svn etc).
I think that a composite build is for development only and for use in times when a developer is working on two or more systems (likely in different repositories with different release cycles/versions). eg:
Developing a patch for an open source library whilst validating the changes in another system
Tweaking a utility library in a separate in-house repository (perhaps shared by multiple teams) whilst validating the changes in another system
A fix/improvement that spans two or more systems (likely in separate repos)
I don't think that a composite build should be committed to source control or built by continuous integration. I think CI should use jars from a repository (e.g. Nexus). Basically I think composite builds serve the same purpose as the "resolve workspace artifacts" checkbox in m2e.
Please note that one of the restrictions on a composite build is that it cannot include another composite build. So I think it's safer to commit multi-module builds to source control and use composite builds to join them together locally, for development only.
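As a concrete example, joining two builds for local development doesn't even require committing anything; Gradle can do it from the command line (paths hypothetical):
# run inside the consuming build; substitutes the published utility jar
# with the one built from the sibling checkout
./gradlew build --include-build ../utility-lib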
These are my opinions on how the two features should be used, I'm sure there are valid exceptions to the above
We use our own monorepo with monobuild-type detection, and we use composite builds for CI and CD to staging (any microservices that end up building from your changes auto-deploy to staging). I disagree that composite builds are just for development, as we use them to get to production in a monorepo/monobuild.
A full multi-project build at Orderly Health would take an estimated 15-20 minutes, based on webpieces alone taking 5 minutes and on the fact that modifying a library in OrderlyHealth that affects EVERY project takes about 15 minutes.
Instead, we detect which projects changed and which leaf nodes depend on them; all leaf nodes are composite projects pulling in libraries that pull in libraries, and the general average build time is 3 minutes. (That is a 5x boost right there on build time.)
later
Dean

Build dependencies and local builds with continuous integration

Our company currently uses TFS for source control and build server. Most of our projects are written in C/C++, but we also have some .NET projects and wouldn't want to be limited if we need to use other languages in the future.
We'd like to use Git for our source control and we're trying to understand what would be the best choice for a build server. We have started looking into TeamCity, but there are some issues we're having trouble with which will probably be relevant regardless of our choice of build server:
Build dependencies - We'd like to be able to control the build dependencies for each <project, branch>. For example, have <MyProj, feature_branch> depend on <InfraProj1, feature_branch> and <InfraProj2, master>.
From what we’ve seen, to do that we might need to use Gradle or something similar to build our projects instead of plain MSBuild. Is this correct? Are there simpler ways of achieving this?
Local builds - Obviously we'd like to be able to build projects locally as well. This becomes somewhat of a problem when project dependencies are introduced, as we need a way to reference these resources or copy them locally for the build to succeed. How is this usually solved?
I'd appreciate any input, but a sample setup which covers these issues will also be a great help.
IMHO both issues you mention really fall into the configuration management category and thus, as you say, are unrelated to the build server choice.
A workspace for a project build (doesn't matter if centralized or local) should really contain all necessary resources for the build.
How can you achieve that? Have a project "metadata" git repo with a "content" file listing all your project components and their dependencies (each with its own git/other repo) and their exact versions, effectively tying them together coherently. (You may find it useful to store other metadata in this file down the road as well, such as component-specific SCM info if using a mix of SCMs across the workspace.)
A workspace pull wrapper script would first pull this metadata git repo, parse the content file, and then pull all the other project components and their dependencies according to the content file info. Any build in such a workspace would have all the parts it needs.
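A minimal sketch of such a pull wrapper, assuming a content file of "<component> <git-url> <ref>" lines (names and URLs hypothetical):
while read -r name url ref; do
    [ -d "$name" ] || git clone "$url" "$name"   # first time this component is pulled
    git -C "$name" fetch --tags origin
    git -C "$name" checkout "$ref"               # pin to the version the content file names
done < content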
When the time comes to modify either the code in a project component or the version of one of the dependencies, you'll also need to update the content file in the metadata git repo to reflect the change and commit it - this is how your project makes progress coherently, as a whole.
Of course, actually managing dependencies is another matter. Tons of opinions out there, some even conflicting.

subversion structure questions

Just moved to subversion...from visual studio. I love it already! Can someone briefly explain
Repository
Branches
Tags
Trunk
Do I need to create a new repository for every project? Or a new trunk?
Thanks
You don't need a separate repository, but you can if you want. I recommend reading the book at http://svnbook.red-bean.com/. Grab the pdf version or whatever. It doesn't take too long, and it explains some things pretty well. I read it, and found that I'm glad I did.
Remember that subversion is just a fancy filesystem that supports versioning. Think of a repository as a "drive root" like "C:/".
Each project gets a trunk, tags and branches directory. All of your day to day work happens in the trunk. Experimental code is done in a branch and then merged back into the trunk at a later date. Tags are for when you release the software. These are not to be edited. When you release the software, you create a tag with a unique name based on what is currently in the trunk.
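Creating that skeleton for a new project is a single command against the repository (URL hypothetical):
# --parents creates the intermediate myproject directory as needed
svn mkdir --parents -m "Create myproject skeleton" \
    https://svn.example.com/repo/myproject/trunk \
    https://svn.example.com/repo/myproject/branches \
    https://svn.example.com/repo/myproject/tags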
I can't say whether or not you need a separate repository for each project; there are pros and cons. This blog posting details them. Advantages of the single repository approach:
Simplified administration. One set of hooks to deploy, one repository to back up, etc.
Branch/tag flexibility. With the code all in one repository it is easier to create a branch or tag involving multiple projects.
Move code easily. Perhaps you want to take a section of code from one project and use it in another, or turn it into a library for several projects. It is easy to move the code within the same repository and retain its history in the process.
Here are some of the drawbacks to the single repository approach, which are advantages of the multiple repository approach:
Size. It might be easier to deal with many smaller repositories than one large one. For example, if you retire a project you can just archive the repository to media, remove it from the disk, and free up the storage. Maybe you need to dump/load a repository for some reason, such as to take advantage of a new Subversion feature; this is easier to do, and with less impact, if it is a smaller repository. Even if you eventually want to do it to all of your repositories, it will have less impact to do them one at a time, assuming there is not a pressing need to do them all at once.
Global revision number. Even though this should not be an issue, some people perceive it to be one, and do not like to see the revision number advance on the repository and inactive projects have large gaps in their revision history.
Access control. While Subversion's authz mechanism allows you to restrict access as needed to parts of the repository, it is still easier to do this at the repository level. If you have a project that only a select few individuals should access, this is easier with a single repository for that project.
Administrative flexibility. If you have multiple repositories, it is easier to implement different hook scripts based on the needs of each repository/project. If you want uniform hook scripts, a single repository might be better; but if each project wants its own commit email style, it is easier to have those projects in separate repositories.
I agree, read the svnbook. It's a great resource.
Do I need to create a new repository for every project? Or a new trunk?
Kevin covered the single/multiple repository trade-offs pretty well. When we started with svn, we used one repository for all of our development projects. It worked well and had all the advantages mentioned. However, as the repository got bigger it got more difficult to administer because of the size of the dump file and resulting issues during backup. It also became an issue that projects couldn't easily be archived out of the repository - it's certainly possible but it requires dumping and pulling out projects from the repository. They aren't issues you can't get around but it's something to keep in mind.
Repository
Branches
Tags
Trunk
Branches, tags and the trunk are just copies of your files contained in the repository. They allow you to segregate and checkpoint your files at whatever point you feel is appropriate (usually at a release or a feature branch).
An important thing to keep in mind about branches, tags and trunk is that they are just conventions in svn. There is no functional difference between the three locations; they are just an accepted usage model, and they can be changed or organized differently if you have a good reason. I'm not recommending that you organize differently, but you'll find that svn is very flexible because there isn't really a forced organizational structure beyond convention.
Depending on how many projects you decide to have in your repository, you may organize differently.
You can have the branches/tags/trunk subdirectories at the top, with projects under them:
\repo
    \branches
        \...
    \tags
        \...
    \trunk
        \...
or you can have projects contain the subdirectories:
\repo
    \Project1
        \branches
        \tags
        \trunk
    \Project2
        \branches
        \tags
        \trunk
There are trade-offs that are covered in the svnbook. The first method is usually used if you only have one project per repository and the second if there is more than one project in your repository.
The nice thing is that you can just start using svn and then figure out what you prefer. You should have some sort of organization but, with cheap copies, you can always re-arrange the folders as your situation or workflow changes.
An important thing to remember with SVN, compared to other version control systems like CVS or Git, is that SVN doesn't really have a concept of branching or tagging. As far as SVN is concerned it's all just a bunch of folders and files. So while you'll see a lot of people using the branches/tags/trunk setup, this is not required and you are able to deviate from it if you so choose.
Generally speaking, 'trunk' is where you keep your active development going, so this is where you do all your commits. Whether you check out trunk or use tags/branches instead is entirely up to you.
Branches, as I've used them, are usually for when you need to make large changes to your application but don't want them in trunk, because you want to be able to continue developing against trunk without deploying your other changes. In this case you may have something like
\repo
    \trunk
    \branches
        \version_two
In this case you can develop in both trunk and version_two separately and, assuming your live site is a checkout of trunk, you don't need to worry about 'accidentally' breaking your live site with your other changes. And when those changes are done and ready you just merge them back into trunk whenever you want.
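A sketch of that merge-back, assuming a reasonably recent Subversion client (repository URL follows the example later in this answer):
# from an up-to-date trunk working copy
svn checkout https://svn.yourrepo.com/repo/trunk trunk-wc
cd trunk-wc
svn merge https://svn.yourrepo.com/repo/branches/version_two
svn commit -m "Merge version_two back into trunk"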
Tags can be used similarly to branches, except that instead of checking out trunk and using 'svn up' to update, you check out one of several tags, each representing one release. So your repo may look something like
/repo
    /trunk
    /branches
        /version_one
        /version_two
    /tags
        /1.0.0
        /1.0.1
        /1.1.0
In this case the general idea is that when you're ready to do a deploy, you copy trunk over to a tag (here the next one could be 1.1.1, 1.2.0, 2.0.0, etc.):
svn copy https://svn.yourrepo.com/repo/trunk https://svn.yourrepo.com/repo/tags/1.1.1 -m "Tag release 1.1.1"
How you name your tags is entirely up to you, though, and again depends on your project and requirements. With this route, instead of doing a regular 'svn up' you would deploy with an svn switch:
svn switch https://svn.yourrepo.com/repo/tags/1.1.0
The switch will automatically do updates, adds and deletes on the appropriate files.
When it comes to one repo for many projects or separate repos for each one, I am an advocate of one repo per project. It provides the additional benefit of easily managing access to it. But most importantly, it means that each project has a separate commit history and separate logs. This makes each project's history much easier to follow.
Reading your tags I see you started using VisualSVN instead of your old VSS system. (Your question says you stopped using Visual Studio... which makes VisualSVN a strange choice.)
One of the major differences between Subversion and SourceSafe is that you can choose different tools to access the same repository (and you can switch any time you like, as they all share the same working copy).
E.g.:
TortoiseSVN for Explorer integration.
The normal subversion client for scripts.
VisualSVN as Visual Studio frontend for TortoiseSVN
AnkhSVN as real SCC (VAPI) package in Visual Studio.
