How to manage stable binaries and avoid the risk of CI rebuilds when building an install package?

I am looking for a tool to manage the collection of binary files (input components) that make up a software release. This is a software product and we have released multiple versions each year for the last 20 years. The details and types of files may vary, but this is something many software teams need to manage.
What's a Software Release made of?
A mixture of files go into our software releases, including:
Windows executables/binaries (40 DLLs and 30+ EXE files).
Scripts used by the installer to create a database
API assemblies for various platforms (.NET, ActiveX, and Java)
Documentation files (HTML, PDF, CHM)
Source code for example applications
The full set of collected files for a single version of the release totals about 90 MB. Most are built from source code, but some are 3rd party.
Manual Process
Long ago we managed this manually.
1. When starting each new release, the files used to build the last release would be copied to a new folder on a shared drive.
2. The developers would manually add or update files in this folder (hoping nothing was lost or deleted accidentally).
3. The software installer script would be compiled using the files in this folder to produce a SETUP.EXE (output).
4. Iterate steps 2 and 3 during validation & testing until release.
Automatic Process
Some years ago we adopted CI (building our binaries nightly or on-demand).
We resorted to putting 3rd party binaries under version control, since they don't change very often.
Then we automated the process of collecting & updating files for a release based on the CI build outputs. Finally we were able to automate the construction of our SETUP.EXE.
Remaining Gaps
Great so far, but this leaves us with two problems:
Rebuilding Assemblies: The CI mostly builds projects only when something has changed, but when forced it will recompile a binary that has no code changes. The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?).
Latest vs Stable: Our CI machine mostly builds the latest version of each project. In some cases this is fine, but often we want to release an older, tested, stable version. To do this we keep separate CI projects for the latest and stable builds - this works, but it's clumsy.
Thanks for your patience if you've got this far :-)
I Still Haven't Found What I'm Looking For
After some time spent searching, it seems it might be easier to build our own solution, but surely someone else has solved these problems before!?
What we want is a way to store and manage binary files (either outputs from CI, or 3rd party files) such that each is tagged with a version (v1.2.3.4) that allows:
The CI to publish new versions of each binary (but reject rebuilt versions that already exist).
The development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
package name
version
path/destination in the release folder
The automatic packaging script to use the recipe to collect the required files and compile the install package (e.g. SETUP.EXE) - see the sketch below.
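To make the idea concrete, here is a rough sketch of the kind of recipe and collection step we have in mind (the JSON format, repository layout, paths, and package names are all made up for illustration):

    import json
    import shutil
    from pathlib import Path

    # release-recipe.json (hypothetical format):
    # [
    #   {"package": "CoreEngine", "version": "1.2.3.4", "dest": "bin"},
    #   {"package": "UserGuide",  "version": "2.0.0.0", "dest": "docs"}
    # ]

    REPO = Path(r"\\server\artifact-repo")  # hypothetical store: <repo>\<package>\<version>\
    STAGING = Path(r"C:\release\staging")   # folder the installer script compiles from

    def collect(recipe_path):
        for item in json.loads(Path(recipe_path).read_text()):
            src = REPO / item["package"] / item["version"]
            dst = STAGING / item["dest"]
            dst.mkdir(parents=True, exist_ok=True)
            for f in src.iterdir():         # assumes a flat folder of files per version
                shutil.copy2(f, dst)        # copy the exact tested binaries, never a rebuild
            print(f"{item['package']} {item['version']} -> {dst}")

    collect("release-recipe.json")

The point is that the recipe, not the CI build, decides which version of each component goes into SETUP.EXE.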
I am aware of past debates about storing binaries in a VCS, but for now I am looking for a better solution; that approach does not appear ideal for long-term, ongoing use (e.g. how do you prune old binaries?), amongst other issues.
I have tried some of the artifact repositories currently available. From my investigation, these solve component/artifact storage and versioning, but they do not provide tools for managing the list of components/artifacts to include in a software release.
Does anybody out there know of tools for this?
Have you found a way to get your CI infrastructure to address these remaining issues?
If you're using an artifact repository to solve this problem, how do you manage and automate the process?

This is a very broad topic, but it sounds like you want a release management tool (e.g. BuildMaster, developed by my company Inedo), possibly in conjunction with a package management server like ProGet (which you tagged, and is how I discovered this question).
To address your specific questions, I'll pair each one with the feature that would solve it:
A mixture of files go into our software releases, including...
This is handled in BuildMaster with artifacts. This video gives a basic overview of how they are manually added to releases and deployed to a file system: https://inedo.com/support/tutorials/buildmaster/deployments/deploying-a-simple-web-app-to-iis
Of course, once that works to satisfaction, you can automate the import of artifacts from your existing CI tool, create them from a BuildMaster deployment plan itself, pull them from your package server, whatever. Down the line you can also have your CI tool call the BuildMaster release management API to create a release and automatically have it include all the artifacts and components you want (this is what most of our customers do now, i.e. have a build step in TeamCity create a release from a template).
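As a sketch of that last step, a CI build step could call the release management API over HTTP, something like the following (the endpoint path, parameter names, and API-key header here are placeholders, not copied from the BuildMaster docs; check the real API reference for the actual contract):

    import requests

    resp = requests.post(
        "https://buildmaster.example.com/api/releases/create",  # hypothetical endpoint
        headers={"X-ApiKey": "<your-api-key>"},                 # hypothetical auth header
        data={"applicationName": "MyProduct", "releaseNumber": "1.2.3"},
    )
    resp.raise_for_status()
    print("Release created:", resp.text)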
Rebuilding Assemblies ... The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?)
You can mostly assume they are functionally equivalent, but it's precisely the times they are not that problems arise. This is especially true with package managers that do not lock dependencies to specific version numbers (e.g. NuGet, npm). You should be releasing exactly the same binary that was tested in the earlier environments.
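One low-tech way to enforce that is to make artifact publication write-once: once a version number has a published binary, the CI refuses to overwrite it with a rebuild. A minimal sketch (the repository layout and paths are hypothetical):

    import hashlib
    import shutil
    import sys
    from pathlib import Path

    REPO = Path(r"\\server\artifact-repo")  # hypothetical store: <repo>\<package>\<version>\

    def publish(package, version, binary):
        binary = Path(binary)
        dest = REPO / package / version
        if dest.exists():
            # This version was already published: reject the rebuild so the
            # tested binary can never be silently replaced by a fresh one.
            old = hashlib.sha256((dest / binary.name).read_bytes()).hexdigest()
            new = hashlib.sha256(binary.read_bytes()).hexdigest()
            sys.exit(f"{package} {version} already published "
                     f"(existing sha256 {old[:12]}..., rebuild {new[:12]}...); refusing to overwrite")
        dest.mkdir(parents=True)
        shutil.copy2(binary, dest)

    publish("CoreEngine", "1.2.3.4", r"build\output\CoreEngine.dll")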
[we want] the development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
This is handled with releases. A developer can choose the release's name, dates, etc., associate it with a pipeline (i.e. a set of testing stages that the artifacts are deployed through), and then "click the deploy button" to have the automation do all the work.
Releases are grouped by "application", similar to a project in TeamCity. As a more advanced use case, you can use deployables. Deployables are essentially individual components of an application you include in a release; in your case the "Documentation" could be a deployable, and maybe contain an artifact of the .pdf and .docx files. Deployables from other applications (maybe a different team is responsible for them, or whatever) can then be referenced and "included" in a release, or you can reference ones from a past release.
Hopefully that provides some overview and fits your needs. Getting into this space is a bit overwhelming because there are so many terms, technologies, and methodologies, but my advice is to start simple and then slowly build upon it, e.g.:
deploy a single, manually uploaded component through BuildMaster to a share drive, then manually deploy it from there
add a deployment plan that imports the component
add a second plan and associate it with the 2nd stage that takes the uploaded artifact and deploys it to the target, bypassing the need for the share drive
add more deployment plans and associate them with pipeline stages and promote through them all to "close out" a release
add an agent and deploy to that instead of the default localhost server
add more components and segregate their deployment with deployables
add event listeners to email team members at points in the process
start adding approvals if you require gated "sign-offs"
and so on.

Related

How to integrate GitVersion

I would like to integrate an automated versioning system into my ASP.NET project. For each release, it should have a version number based on the previous release. I am planning to integrate GitVersion (https://gitversion.net/). Does anyone use it in their projects? For the CI/CD pipeline we have TeamCity and Octopus Deploy.
What is the best practice for automated software release versions?
Thanks in advance
As one of the maintainers of GitVersion, I'm obviously biased, but since you're asking how to use GitVersion to implement a "best practice for automated software release", I'm going to unashamedly give you a textual description of a talk I've done on how I prefer to version and release software with GitVersion, TeamCity and Octopus Deploy.
Developer Workflow
The first thing you should figure out is what kind of developer workflow you want for your software. GitVersion supports Git Flow and many simplified variants of it, as well as GitHub Flow (and many other trunk-based development flows). Which workflow you should choose depends on what type of software you are developing, your team, and, most importantly, your personal preference.
Once you have chosen your workflow, you can configure which mode GitVersion should operate under.
Version Source
GitVersion works by calculating a version number from your Git repository. This means that you should not commit a version number to Git in any shape or form. Not within a package.json, pom.xml, .csproj, or any other build- or project-related file that often mandates the existence of a version number.
Versioning
Instead, you should allow GitVersion to produce a version number based on the Git history, using the currently checked out commit as its starting point, and searching through the parents and their tags to calculate an appropriate version number for the current commit. This version number can then be used in any way you want in your build pipeline. You can, for instance, write GitVersion's FullSemVer variable to package.json by executing the following command:
npm version $GitVersion_FullSemVer
If you are developing on the .NET platform, it's also possible to use GitVersion to patch your AssemblyInfo.cs files so the calculated version number is compiled into your assemblies. With InformationalVersion containing the SHA of the Git commit being built and versioned, you'll be able to identify the exact origin of a compiled assembly.
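For illustration, here is a rough sketch of that patching step, driven by the GitVersion CLI's JSON output (the file path and regex are assumptions about your project layout; GitVersion also has a built-in /updateassemblyinfo switch that does this for you):

    import json
    import re
    import subprocess

    # Run the GitVersion CLI and parse its JSON output; FullSemVer, Sha and
    # InformationalVersion are among the variables GitVersion calculates.
    v = json.loads(subprocess.check_output(["gitversion", "/output", "json"]))

    path = "Properties/AssemblyInfo.cs"     # assumed location
    text = open(path).read()
    text = re.sub(r'AssemblyInformationalVersion\("[^"]*"\)',
                  f'AssemblyInformationalVersion("{v["InformationalVersion"]}")',
                  text)
    open(path, "w").write(text)
    print("Stamped", path, "with", v["InformationalVersion"])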
Build
Once you have your workflow in order and GitVersion has a good source of information to use for its versioning, you can go ahead and create a build pipeline for your software. A typical build pipeline will look something like this:
1. git clone. (Ensure that the clone is complete and not shallow or a detached HEAD. See requirements for more information.)
2. GitVersion. Execute GitVersion by whichever means make most sense for your environment.
3. Patch. Patch the version number created by GitVersion in step 2 into every file in your repository where it makes sense, such as AssemblyInfo.cs, package.json, etc.
4. Build. Perform the build of your software.
5. Test. Run tests that ensure the quality of the software.
6. Package. Create a package of your software, using the version number created by GitVersion in step 2.
7. Release. Release the package using the package management software of your choice, such as Octopus Deploy, npm, NuGet, Composer, or similar.
8. Test. Perform automatic tests of the released software, if possible. If successful, it's possible to automatically promote the released software to other environments, if applicable.
Through GitVersion's built-in support for build servers, the calculated version number will also be promoted to the build server to version the build itself. This will be done automatically on TeamCity, for instance. In TeamCity, I recommend that you run GitVersion as its own build configuration exposing the required variables which can then be used in dependent build configurations later on.
Release
Once you have a built artifact containing the version number generated by GitVersion, you can use the same version number to create a package, create a release and deploy the release in Octopus Deploy.
You said you want to integrate an automated versioning system? I would like to throw my hat into the ring.
I am the author of Vernuntii, a simple semantic versioning library with Git integration.
The answer from @Asbjørn already covered the best practices, for example choosing a workflow that suits your project.
The main job of a versioning tool like GitVersion or Vernuntii is to generate a suitable NEXT VERSION based on (non-)existing Git tags.
So at the end of the day, it is a matter of taste how much complexity you want in HOW the NEXT VERSION is calculated.
If you want cross-branch versioning, you are good to go with GitVersion; if you don't need that kind of complexity, you can try single-branch versioning as implemented in Vernuntii.
For more information, take a look at the README.md of Vernuntii.
To give you my impression of versioning tools and their complexity, here is a list (sorted from least to most complex):
MinVer
Verlite
Vernuntii
Nerdbank.GitVersioning
GitVersion
A fun fact on the side: all the libraries from the top of the list down to Vernuntii can also calculate the next version from a detached HEAD.

Preventing NuGet package update in team environment

We have a large solution with many projects, and many developers working together.
We need a way to validate NuGet package versions to ensure no developer accidentally breaks the build with a package update.
Ideally, is there a way to validate, and interrupt/stop the NuGet package installation, if it is a known incompatible package? We know we can validate at build time, but ideally I'd like to stop and inform the developer that the newer version is not supported, to prevent them from going off and building with the newer package, only to discover at build time that they've wasted time on a new package that might have differing calls, etc.
If this is not possible, what would be the easiest way to do this at build time? I'm thinking of a pre-build script, but I'm interested in other ideas. Effectively, the script would compare package versions across the other projects and report any incorrect version (and stop a publish to a shared location during the build); see the sketch below.
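To sketch what I mean, a pre-build script along these lines could walk the solution and fail fast (the pinned-version list and package names are just examples, and it assumes packages.config-style projects):

    import sys
    import xml.etree.ElementTree as ET
    from pathlib import Path

    # Versions the build supports (hypothetical; could live in a shared file).
    APPROVED = {"Newtonsoft.Json": "13.0.1", "NLog": "5.0.0"}

    errors = []
    for cfg in Path(".").rglob("packages.config"):
        for pkg in ET.parse(cfg).getroot().iter("package"):
            name, version = pkg.get("id"), pkg.get("version")
            if name in APPROVED and version != APPROVED[name]:
                errors.append(f"{cfg}: {name} {version} (expected {APPROVED[name]})")
    if errors:
        sys.exit("Unsupported package versions:\n" + "\n".join(errors))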

Least-impact solution to binary references in VCS

We are using TeamCity 2017.1 and have been using it for years with great joy. A long time ago, someone decided that all third-party binaries should be put into Subversion (our VCS of choice).
This has worked fine, but over time this repository has grown quite large, and combined with us getting better and better at using TeamCity, we now have dozens of build configurations which all use third-party binaries.
Our third-party folder is called Department and is around 2.6 GB in size. As such this is not so bad, but remember that this folder is used by pretty much every single project on the build server!
Now, I will agree with everyone who says that we should use NuGet packages, network shares, etc., and that would work great with new projects. However, we have a lot of history and we cannot begin to change every single solution and branch.
A co-worker came up with the idea that we could make a single build project that in reality does nothing but keep a single folder updated with our Department stuff. Then we would just need to find a way to reference this without having to change all our projects and solutions.
My initial thought is to use snapshot dependencies and then create a symbolic link as the first build step and remove it as the last, in order to achieve the same relative levels.
But is there a better way? What do other people do?
And keep in mind, that replacing with nugets or something else is not an option.
Let me follow your colleague's idea and improve on it. There would be a build configuration that monitors the Subversion repository and copies packages to a network share. That network share would then be used by the development teams as a NuGet repository. Projects that convert their dependencies from binary references to NuGet references will enjoy faster builds. When all the teams use the NuGet repository, you can kill that Subversion repository.
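A sketch of what that build configuration's single step might do, assuming each third-party component has already been wrapped as a .nupkg (paths are illustrative; "nuget add" publishes a package into a folder-based feed):

    import subprocess
    from pathlib import Path

    SHARE = r"\\server\nuget-feed"          # folder-based NuGet feed on the network share

    # Publish every package produced from the checked-out Department folder.
    for pkg in Path("Department/packages").glob("*.nupkg"):
        subprocess.check_call(["nuget", "add", str(pkg), "-Source", SHARE])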

Managing internal 3rd Party Dependencies

We have a lot of different solutions/projects which are managed by different teams. Our solution needs to reference several projects that another team owns. We don't want to add these dependencies as project references because we do not intend to modify that code; we just want to use it. Also, we already have quite a few projects in our solution and don't want to add a bunch more, since that will slow down Visual Studio. So we are building these projects in a separate solution and adding them as file references to our solution.
My question is, how do people manage these types of dependencies? Should I just have some automated process that looks for changes to those projects, builds them, and checks the DLLs into our source control, after which we treat them like other 3rd-party dependencies? Is there a recommended way of doing this?
One solution, although it may not necessarily be what you are looking for, is to have each dependent sub-system perform a release. This release could be in the form of an MSI install, or just a network share of assemblies. When a significant change is made, that team could let you know, and you could run the install or a script to copy the files.
Once you got the release, you could put them into the GAC, that way you would not have to worry about copying them to your project bin folders.
Another solution, assuming you are using a build server or continuous integration of some kind, is to have a post-build step or process stage the files. Then at any given moment, the developers on the other teams could grab the new files, or have a script or bat file pull them down locally.
EDIT - ANOTHER SOLUTION
It might be best to ask why you have these dependencies. Do you really need them locally when building your part of the application? Could you mock out the dependencies in your solution, allowing you to code, build, and run unit tests? The actual application would wire these up in your DEV/Test/Prod environments. Keeping your solution decoupled and dependency-free may be a better solution for the individual team. Leave the integration and coupling to when the application runs in a real setting.
(Not a complete answer, but still:)
Any delivery is better stored in a file/binary repository, as opposed to a VCS, which is meant to manage source history.
We prefer managing those deliveries in a repository like Nexus, and we are using Maven to fetch the right dependencies.
Even if those tools can be more Java-oriented, Nexus can store anything, and Maven is only there to read the pom.xml of each artifact and compute the right dependencies.

What should the repository contain?

I am trying to set up a continuous integration process. For my various build tasks (compiling, testing, documentation, etc.) I need tools that perform these tasks (csc, NUnit, NDoc, etc.). My question is: should these tools also go into my source control repository?
Why I think they should is because I read in an online article that the developer environment should be as similar as possible to the build server environment. To fulfill this requirement, the article suggested putting everything required for your build in the repository, so that when you (or the build server) check out the code, you are ready to build the project right away without first installing any other tools. But on the other hand, if I don't put these tools in the repository with my source code, then the build server will have to install them whenever a build is run.
Is it OK to install these tools? Won't it increase the time for each build unnecessarily?
It's often more trouble than it's worth to try to check tools into source control. Rather, write a list of software requirements that must be installed before the source can be checked out and built (one thing that would need to be on this list in any case is the source control system itself). If you rely on software being in source control, some tools might need to be installed in certain paths or be otherwise configured (registry entries come to mind).
I would certainly not check in the compiler itself to source control, and I probably wouldn't check in NUnit or NDoc either. Just install these beforehand, as they are not likely to change too much over the lifetime of your project. Your build script might want to check that the expected version(s) of the required software packages are installed before the build may proceed.
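Such a pre-flight check might look like this (the tool names, commands, and version patterns are examples only; substitute whatever your build actually requires):

    import re
    import subprocess
    import sys

    # Expected tools and the version string each one should report.
    REQUIRED = {
        "nuget":  (["nuget", "help"], r"NuGet Version: 6\."),
        "dotnet": (["dotnet", "--version"], r"^8\."),
    }

    for name, (cmd, pattern) in REQUIRED.items():
        try:
            out = subprocess.check_output(cmd, text=True)
        except (OSError, subprocess.CalledProcessError):
            sys.exit(f"{name} is not installed or not on PATH")
        if not re.search(pattern, out, re.MULTILINE):
            sys.exit(f"{name}: unexpected version (wanted {pattern!r})")
    print("All required tool versions present.")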
Unless you're customizing the tools there's probably no reason to put their source code in your repository. However there are excellent reasons for putting your config files in the repository.
Re-installing the tools for every single build is overkill and will slow you down.
However, it's far better to have a server dedicated to continuous integration so that you know its state; you are sure nobody has installed anything that may have an impact on the outcome of the build.
If you want to be able to re-generate today's build next year, you need to be able to re-create your environment first. Make sure you'll be able to re-install your tools (exact same version), either by keeping them on your server (installing the newer versions in different directories), or storing the whole package in your configuration management tool.
Think about how you would create another continuous integration server, either to have two of them, or for a second site, or to recover after a disaster. Document how the continuous integration server was set up.
What really needs to be version controlled is the build scripts, which access the right versions of the tools, especially if you opt to install several versions of the tools.
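For example, pinning exact tool paths in the version-controlled build script keeps old releases rebuildable with the same toolchain (the paths and versions below are illustrative):

    from pathlib import Path

    # Each tool version lives in its own directory on the build server.
    TOOLS = Path(r"C:\BuildTools")
    NUNIT = TOOLS / "nunit" / "2.6.4" / "bin" / "nunit-console.exe"
    MSBUILD = TOOLS / "msbuild" / "14.0" / "MSBuild.exe"

    for tool in (NUNIT, MSBUILD):
        assert tool.exists(), f"pinned tool missing: {tool}"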
