We are using TeamCity 2017.1 and have been using it for years with great joy. A long time ago, someone decided that all third-party binaries should be put into Subversion (our VCS of choice).
This has worked fine, but over time this repository has grown quite large, and as we have become better and better at using TeamCity, we now have dozens of build configurations which all use third-party binaries.
Our third-party folder is called Department and is around 2.6 GB in size. On its own that is not so bad, but remember that this folder is used by pretty much every single project on the build server!
Now, I will agree with everyone who says that we should use NuGet packages, network shares, etc., and that would work great with new projects. However, we have a lot of history and we cannot begin to change every single solution and branch.
A co-worker came up with the idea that we could make a single build configuration that does nothing but keep a single folder updated with our Department stuff. Then we just need a way to reference this folder without having to change all our projects and solutions.
My initial thought is to use a snapshot dependency on that configuration and then create a symbolic link as the first build step and remove it as the last, in order to keep the same relative paths.
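Something like this as the first and last build steps should work (a minimal sketch, assuming Windows agents; D:\BuildCache\Department is a placeholder for wherever the helper configuration keeps its checkout):

    rem First build step: expose the shared Department checkout at the usual relative path.
    mklink /D "%teamcity.build.checkoutDir%\Department" "D:\BuildCache\Department"

    rem ...the normal build steps run here and see .\Department as before...

    rem Last build step: remove the link (rmdir deletes the link itself, not the target).
    rmdir "%teamcity.build.checkoutDir%\Department"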
But is there a better way? What do other people do?
And keep in mind that replacing this with NuGet or something else is not an option.
Let me follow your colleague's idea and improve on it. There would be a build configuration that monitors the Subversion repository and copies packages to a network share. That network share would then be used by the development teams as a NuGet repository. Projects that convert their dependencies from binary references to NuGet references will enjoy faster builds. Once all the teams use the NuGet repository, you can retire that Subversion folder.
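For instance, the publishing configuration could run something like this (the .nuspec name, share path, and version scheme are placeholders, not a prescription):

    rem Pack the third-party binaries described by a hypothetical Department.nuspec.
    nuget pack Department.nuspec -Version 1.0.%build.counter%

    rem Publish the package into a folder-based NuGet feed on the network share.
    nuget add Department.1.0.%build.counter%.nupkg -Source \\fileserver\nuget-feed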
My team is working on porting several legacy mobile applications to Xamarin Forms. Currently each application is in its own solution, which is not ideal given that they all use a common set of backend software libraries. We were planning to consolidate all the smaller solutions into a single solution containing the apps as well as the common libraries.
However, one of my teammates brought up a valid concern: with a single Xamarin Forms app, several projects get generated (core, Android, iOS, etc.), so the result would be a generally unwieldy solution. I agree with him that the current setup probably would not scale too well as we add more apps -- even if we group projects in solution folders, Visual Studio will eventually slow to a crawl once a certain number of projects exist in the solution.
So we are considering just going back to having each app in its own solution, each solution containing the few Xamarin Forms projects for that app, as mentioned above. But this brings us back to the question of how to reasonably manage the shared library code. My current thought would be to just use shared project(s) for the libraries, or maybe assemble them into NuGet package(s) the app solutions would consume. Am I on the right track here, or does anyone know of a better way to do this?
There are several different ways to manage a shared code project using subtrees, submodules, NuGet packages, etc. There are pros and cons to each so it's best to decide based on the expected use case for that project.
Subtrees essentially take a copy of the remote repo and pull it into the parent repo. This makes it easy to pull in changes from the remote repo, but if changes are expected to be pushed back it can be significantly more difficult, since the parent has no knowledge of the remote repo. While it is possible to push changes back, it can take a significant amount of time depending on how much history the repos have.
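For reference, the basic subtree workflow looks roughly like this (the repo URL and prefix folder are placeholders):

    rem Pull a copy of the shared library into the parent repo under SharedLib\.
    git subtree add --prefix=SharedLib https://example.com/shared-lib.git main --squash

    rem Later, pull in upstream changes.
    git subtree pull --prefix=SharedLib https://example.com/shared-lib.git main --squash

    rem Pushing local changes back upstream works, but can be slow on large histories.
    git subtree push --prefix=SharedLib https://example.com/shared-lib.git main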
Submodules are similar to subtrees except that instead of taking a copy, the parent repo tracks the remote repo via the specific commit it points to. It can essentially be thought of as another repo inside the parent, which makes pushing changes back to the remote repo much easier, but at the cost of making pulling/updating from it a little more difficult.
NuGet packages are extremely convenient to install, update, and release to others without having to make the source code public, but that comes with a bit more initial setup to generate each package version, and at the cost of making debugging more difficult than with the actual source code. This is a particularly good option if the shared code library will be distributed to others.
For most projects, if changes are expected to be made to the shared project from a consuming one, I'd recommend a repo for each project and setting up the shared one as a submodule in each. It does take a bit of learning to get used to the process of checking out and updating a submodule, but it actually isn't all that difficult and is worth the few git commands required. The git docs provide a great example of how to get started using submodules.
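A minimal example of that setup (repo URLs and folder names are hypothetical):

    rem In each consuming app repo, add the shared library as a submodule.
    git submodule add https://example.com/shared-lib.git SharedLib
    git commit -m "Add SharedLib submodule"

    rem Fresh clones need the submodule content as well.
    git clone --recurse-submodules https://example.com/app1.git

    rem To move an app to a newer version of the shared library:
    cd SharedLib
    git fetch
    git checkout <tag-or-commit>
    cd ..
    git add SharedLib
    git commit -m "Bump SharedLib"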
I am looking for a tool to manage the collection of binary files (input components) that make up a software release. This is a software product and we have released multiple versions each year for the last 20 years. The details and types of files may vary, but this is something many software teams need to manage.
What's a Software Release made of?
A mixture of files go into our software releases, including:
Windows executables/binaries (40 DLLs and 30+ EXE files).
Scripts used by the installer to create a database
API assemblies for various platforms (.NET, ActiveX, and Java)
Documentation files (HTML, PDF, CHM)
Source code for example applications
The full collected files for a single version of the release are about 90MB. Most are built from source code, but some are 3rd party.
Manual Process
Long ago we managed this manually.
1. When starting each new release, the files used to build the last release would be copied to a new folder on a shared drive.
2. The developers would manually add or update files in this folder (hoping nothing was lost or deleted accidentally).
3. The software installer script would be compiled using the files in this folder to produce a SETUP.EXE (output).
4. Iterate steps 2 and 3 during validation & testing until release.
Automatic Process
Some years ago we adopted CI (building our binaries nightly or on-demand).
We resorted to putting 3rd party binaries under version control since they usually don't change as often.
Then we automated the process of collecting & updating files for a release based on the CI build outputs. Finally we were able to automate the construction of our SETUP.EXE.
Remaining Gaps
Great so far, but this leaves us with two problems:
Rebuilding Assemblies: The CI mostly builds projects when something has changed, but when forced it will re-compile a binary that doesn't have any code change. The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?).
Latest vs Stable: Mostly our CI machine builds the latest version of each project. In some cases this is ok, but often we want to release an older, tested or stable version. To do this we have separate CI projects for the latest and stable builds - this works but is clumsy.
Thanks for your patience if you've got this far :-)
I Still Haven't Found What I'm Looking For
After some time searching for solutions it seems it might be easier to build our own solution, but surely someone else has solved these problems before!?
What we want is a way to store and manage binary files (either outputs from CI, or 3rd party files) such that each is tagged with a version (v1.2.3.4) that allows:
The CI to publish new versions of each binary (but reject rebuilt versions that already exist).
The development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
package name
version
path/destination in the release folder
The automatic packaging script to use the recipe to collect the required files and compile the install package (e.g. SETUP.EXE). (A rough sketch of such a recipe is shown below.)
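Purely to illustrate what I mean, the recipe could be as simple as name/version/destination lines that a packaging script resolves against an artifact store (every name and path below is made up):

    rem recipe.txt (hypothetical), one component per line: name  version  destination
    rem   CoreEngine   1.2.3.4   bin
    rem   DbScripts    1.2.0.0   sql
    rem   ApiDotNet    1.2.3.0   api\dotnet
    rem The packaging script copies each pinned version into a staging folder...
    robocopy \\artifacts\CoreEngine\1.2.3.4 staging\bin /E
    robocopy \\artifacts\DbScripts\1.2.0.0 staging\sql /E
    rem ...and then hands staging\ to the existing installer script to build SETUP.EXE.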
I am aware of past debates about storing binaries in a VCS. For now I am looking for a better solution. That approach does not appear ideal for long-term ongoing use (e.g. how to prune old binaries)... amongst other issues.
I have tried some artifact repositories currently available. From my investigation these provide a solution for component/artifact storage and version control. However they do not provide tools for managing a list of components/artifacts to include in a software release.
Does anybody out there know of tools for this?
Have you found a way to get your CI infrastructure to address these remaining issues?
If you're using an artifact repository to solve this problem, how do you manage and automate the process?
This is a very broad topic, but it sounds like you want a release management tool (e.g. BuildMaster, developed by my company Inedo), possibly in conjunction with a package management server like ProGet (which you tagged, and is how I discovered this question).
To address some of your specific questions, I'll pair each with the feature that would solve the problem:
A mixture of files go into our software releases, including...
This is handled in BuildMaster with artifacts. This video gives a basic overview of how they are manually added to releases and deployed to a file system: https://inedo.com/support/tutorials/buildmaster/deployments/deploying-a-simple-web-app-to-iis
Of course, once that works to satisfaction, you can automate the import of artifacts from your existing CI tool, create them from a BuildMaster deployment plan itself, pull them from your package server, whatever. Down the line you can also have your CI tool call the BuildMaster release management API to create a release and automatically have it include all the artifacts and components you want (this is what most of our customers do now, i.e. have a build step in TeamCity create a release from a template).
Rebuilding Assemblies ... The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?)
You can mostly assume they are equivalent functionally, but it's only the times that they are not that problems arise. This is especially true with package managers that do not lock dependencies to specific version numbers (i.e. NuGet, npm). You should be releasing exactly the same binary that was tested in previous environments.
[we want] the development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
This is handled with releases. A developer can choose its name, dates, etc., and associate it with a pipeline (i.e. a set of testing stages that the artifacts are deployed to), then "click the deploy button" and have the automation do all the work.
Releases are grouped by "application", similar to a project in TeamCity. As a more advanced use case, you can use deployables. Deployables are essentially individual components of an application you include in a release; in your case the "Documentation" could be a deployable, and maybe contain an artifact of the .pdf and .docx files. Deployables from other applications (maybe a different team is responsible for them, or whatever) can then be referenced and "included" in a release, or you can reference ones from a past release.
Hopefully that provides some overview and fits your needs. Getting into this space is a bit overwhelming because there are so many terms, technologies, and methodologies, but my advice is to start simple and then slowly build upon it, e.g.:
deploy a single, manually uploaded component through BuildMaster to a share drive, then manually deploy it from there
add a deployment plan that imports the component
add a second plan and associate it with the 2nd stage that takes the uploaded artifact and deploys it to the target, bypassing the need for the share drive
add more deployment plans and associate them with pipeline stages and promote through them all to "close out" a release
add an agent and deploy to that instead of the default localhost server
add more components and segregate their deployment with deployables
add event listeners to email team members at points in the process
start adding approvals if you require gated "sign-offs"
and so on.
Overview
I am using VisualSVN in Visual Studio, VisualSVN Server on Windows, and of course, TortoiseSVN. I wanted to know the best method of sharing multiple projects across multiple solutions, and whether there is a better approach than the one described below.
Layout
My Repository kind of looks like this (not their real names):
Library.Common
Library.Web
Library.DB
Library.CMS
Customer1.Site
Customer2.Site
Process
To create a new site that contains common projects:
Create Repository in SVN-Server, e.g. "Customer3.Site"
Create a web site using Visual Studio 2008, named "Customer3.Site", and use VisualSVN to commit it to the repository created in step (1).
Edit properties of Customer3.Site and specify the necessary projects as svn:externals, e.g. "Library.Common", "Library.DB", etc.
Perform an update to get these external projects, add them to my solution in Visual Studio, add the necessary references to the Customer3.Site web project, and hit build.
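Concretely, step 3 looks something like this from the root of the Customer3.Site working copy (the server URL is a placeholder):

    rem externals.txt -- one shared project per line:
    rem   https://svnserver/svn/Libraries/Library.Common/trunk Library.Common
    rem   https://svnserver/svn/Libraries/Library.DB/trunk     Library.DB
    svn propset svn:externals -F externals.txt .
    svn commit -m "Reference shared libraries as externals" .
    svn update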
So far so good.
The Problem
All this works fine; I am happy that if I have to modify any of the core Library projects I can do so right in the same environment and commit them to the repository. But as more and more customer sites are built, I will have to keep track of what I've done and remember to SVN Update and rebuild those sites, which seems quite a long-winded task.
Is there a better way of doing this, a more best-practice solution? Am I breaking any fundamental SVN laws by doing it this way? I want to find a good solution that doesn't cost too much time and isn't overly complex either.
I've been facing a similar issue ... I am setting up a base install package for WordPress, something we would use to quickly get a site set up. It contains the WordPress core plus a set of baseline plugins, both third-party and custom ones we've created. Everything pretty much comes from SVN.
Different plugins have different versions/tags, and setting up an svn:external pointing to a specific tagged version per project would be a nightmare ... only to then have to go into each and every project, make a property adjustment, and then run an update.
What I'm going to implement is a vendor branch with specific versions as needed. All I should then have to do is update the client sites, since they will always be pointing to the latest versions (under my control in the vendor branches).
As to your problem, and also in my case: I would probably write a commit script to update all projects automatically when something in the vendor branch is updated.
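For example, a post-commit hook along these lines could trigger those updates (a rough sketch for a Windows/VisualSVN Server setup; the vendor path and working-copy locations are assumptions):

    rem post-commit hook: %1 = repository path, %2 = revision number
    svnlook dirs-changed %1 -r %2 | findstr /B "vendor/" >nul
    if %ERRORLEVEL%==0 (
        svn update C:\sites\Customer1.Site
        svn update C:\sites\Customer2.Site
    )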
We have a lot of different solutions/projects which are managed by different teams. Our solution needs to reference several projects that another team owns. We don't want to add these dependencies as project references because we do not intend to modify that code, we just want to use it. Also, we already have quite a few projects in our solution and don't want to add a bunch more since that will slow down Visual Studio. So we are building these projects in a separate solution and adding them as file references to our solution.
My question is, how do people manage these types of dependencies? Should I just have some automated process that looks for changes to those projects, builds them, and checks the DLLs into our source control, after which we treat them like other 3rd-party dependencies? Is there a recommended way of doing this?
One solution, although it may not necessarily be what you are looking for, is to have each dependent sub-system perform a release. This release could be in the form of an MSI install, or just a network share of assemblies. When a significant change is made, that team could let you know, and you could run the install or a script to copy the files.
Once you have the release, you could put the assemblies into the GAC; that way you would not have to worry about copying them to your project bin folders.
Another solution, assuming you are using a build server or continuous integration of some kind, is to have a post-build step or process stage the files. Then at any given moment, the developers of the other teams could grab the new files, or have a script or bat file pull them down locally.
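A minimal sketch of that staging approach (the share name and paths are made up for illustration):

    rem Post-build event on the producing team's projects: drop fresh binaries on a share.
    xcopy "$(TargetDir)*.dll" "\\fileserver\drops\TeamA\latest\" /Y /I

    rem Script (or pre-build step) on the consuming side: pull the current binaries locally.
    robocopy \\fileserver\drops\TeamA\latest ..\lib\TeamA *.dll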
EDIT - ANOTHER SOLUTION
It might be best to ask why you have these dependencies at all. Do you really need them locally when building your part of the application? Could you mock out the dependencies in your solution, allowing you to code, build, and run unit tests? The actual application would then wire these up in your DEV/Test/Prod environments. Keeping your solution decoupled and dependency-free may be a better solution for the individual team. Leave the integration and coupling to when the application runs in a real setting.
(Not a complete answer, but still:)
Any delivery is better stored in a file/binary repository, as opposed to a VCS, which is meant for managing source history.
We prefer managing those deliveries in a repository like Nexus, and we use Maven to retrieve the right dependencies.
Even though those tools are more Java-oriented, Nexus can store anything, and Maven is only there to read the pom.xml of each artifact and compute the right dependencies.
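For example, a specific versioned artifact can then be fetched from the command line (the coordinates and Nexus URL are placeholders):

    rem Download com.example:shared-lib:1.2.3 from the Nexus releases repository into the
    rem local Maven repository; non-jar packaging can be given as part of the coordinates.
    mvn dependency:get -Dartifact=com.example:shared-lib:1.2.3 -DremoteRepositories=https://nexus.example.com/repository/releases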
If I have a C# solution with multiple projects in it, what would be better, to have the Git repo created in the solution folder, or in each individual project folder? Multiple developers will be working on the projects. What are your experiences with this?
I use several (sometimes overlapping) solutions to contain a collection of related independent applications and shared libraries. As others have mentioned, you really don't want to have a single Git repository containing the source for multiple, independent projects as it makes it much too difficult to track isolated changes.
So, if your solution is structured as mine is then you will definitely want individual Git repositories for each project. This has worked well for me for ten to twelve applications and doesn't create as much maintenance overhead as you might think.
If your solution is truly monolithic (and you're sure you want it that way forever and ever), then it probably makes sense to only have a single repository.
It depends. git repositories are most suited to containing a single configuration item with its own independent lifecycle. If your projects have their own release cycle and are shared between multiple solutions then it might make sense to have them in their own repositories. Usually, though, it is the solution that represents a configuration item with all the constituent projects forming part of the same build. In this case a single git repository at the solution level makes more sense.
git submodule is probably worth consideration here. Each project gets its own repo, the solution gets a repo, and the projects are submodules.
I assume that your solution represents some kind of a product while the projects are just a part of the product.
In this situation I would create the repository on the solution level. This way it is a lot easier to build the whole product at once, especially if the projects depend on each other.
Some thoughts and 3 solutions on the subject can be read on this blog:
https://www.atlassian.com/blog/git/git-and-project-dependencies
a package management tool, e.g. NuGet in VS, i.e. using a reference to a package/compiled module
git submodule (only with command line in VS?)
other build and cross-stack dependency tools
Another solution is just to add a project from the other repo while keeping it out of the current repo, and later use Team Explorer to commit its changes.