Run release tasks selectively based on project code changes

We are using VSTS for build and release management, and using CI/CD. Typically, our solutions consist of a web application project, and a database project.
Our current release tasks take the application offline (using app_offline.htm), publish the database, then publish the web application. Publishing the database project often results in no changes, because with CI/CD we update the web app code far more frequently than we change the database schema.
Is there a way to only run the database publish task (using WinRM) when it detects a change in the database project code, in our git repository?
EDIT: This in itself isn't a problem, as typically when the DACPAC gets published there will be no activity. HOWEVER, I've been asking for the database to be backed up using the /p:BackupDatabaseBeforeChanges=true flag, which seems to back up the database even when there are no changes. This is an issue for large databases.

The simple way is to separate the web project and the database project into two build definitions.
Create a new build definition
Enable Continuous Integration on the Triggers tab
Specify a Path filter to include the database project (see the YAML sketch after these steps for the same path-filter idea)
Modify the Visual Studio Build task and specify the /t:[database project name] argument in the MSBuild Arguments box so that only the database project is built
Repeat the same steps for the web project
Then create a new release definition:
Add artifacts for the previous two build definitions and enable the Continuous deployment trigger
Add two environments (e.g. database, web)
Open the Pre-deployment conditions of an environment (e.g. database)
Enable Artifact filters, select the corresponding artifact (e.g. the database build artifact), and specify the build branch (* means all branches)
Add tasks that deploy only the database in this environment
Repeat the same steps for the web environment
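The same path filtering can be expressed in a YAML pipeline. A minimal sketch, assuming the database project lives under a Database/ folder; the solution and project names are placeholders:

    # Hypothetical build pipeline that runs only when files under Database/ change.
    trigger:
      branches:
        include:
        - master
      paths:
        include:
        - Database

    pool:
      vmImage: 'windows-latest'

    steps:
    # Build only the database project out of the solution.
    - task: VSBuild@1
      inputs:
        solution: 'MySolution.sln'
        msbuildArgs: '/t:MyDatabaseProject'

    # Publish the DACPAC so the release can pick it up.
    - task: PublishBuildArtifacts@1
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)'
        artifactName: 'database'

The web project gets a mirror-image pipeline whose path filter includes only the web project folder.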

The answer is that exactly what I want isn't possible.

Related

Build individual batch of projects from solution

I have a solution with multiple projects, two of which are the main ones: a .NET WinForms desktop app and an ASP.NET WebApi2 project.
I have defined a number of publish profiles in the server project, which I currently execute manually. Now, moving to an Azure build pipeline, I have a couple of questions:
I create a new pipeline based on the ASP.NET template, which builds all the projects within the solution. Should I leave it as is, or should I exclude the client-side projects? If the latter, how can I achieve this?
I'm used to Visual Studio building and then publishing right afterwards. In a CI/CD scenario, I assume that I need to separate building from publishing/deploying. Is this correct?
It all depends; you may use the same pipeline to build the WinForms app and create a package/installer. It is all up to you. If you want to exclude projects, you can do it by creating a solution configuration:
Configuration Manager -> Active Solution Configuration -> New ...
and then defining which projects you want to build. Next you use this configuration in your build to compile only the projects you want.
You can pass that configuration to the build task.
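A minimal sketch of passing such a configuration to the build, assuming the Visual Studio Build task; the solution name and the custom configuration name 'ServerOnly' are placeholders:

    # Hypothetical snippet: build only the projects enabled in a custom solution configuration.
    - task: VSBuild@1
      inputs:
        solution: 'MySolution.sln'
        platform: 'Any CPU'
        configuration: 'ServerOnly'   # created via Configuration Manager -> New...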
If you use a YAML pipeline instead of the classic editor, you may consider using a multi-stage pipeline; the multi-stage pipelines documentation includes a simple tutorial. If you decide to have one pipeline, please check deployment jobs for deploying your app, although this is not strictly necessary.
If you selected a classic pipeline, you should go with classic release pipelines.
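For illustration, a minimal multi-stage YAML sketch with a deployment job; the environment name, solution name, and deployment step are assumptions, not part of the question:

    stages:
    - stage: Build
      jobs:
      - job: BuildWeb
        pool:
          vmImage: 'windows-latest'
        steps:
        - task: VSBuild@1
          inputs:
            solution: 'MySolution.sln'
            configuration: 'Release'
        - publish: '$(Build.ArtifactStagingDirectory)'
          artifact: 'web'

    - stage: Deploy
      dependsOn: Build
      jobs:
      - deployment: DeployWeb
        pool:
          vmImage: 'windows-latest'
        environment: 'dev'            # assumed environment name
        strategy:
          runOnce:
            deploy:
              steps:
              - download: current
                artifact: 'web'
              # Replace with your real deployment task (App Service, WinRM, etc.).
              - script: echo Deploying from $(Pipeline.Workspace)/web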

How to configure VS solution to use tfs vnext build with release management

Q. How can I set up our configs/transforms to get release management to work in the example way?
I'm trying to get release management to work in the way all the videos seem to show: the same build progressing through environments, from build --> Dev/Staging --> Production.
It's making me step back a little and question the way we do our configurations in Visual Studio solutions (and our git flow branch process). I think the way we use the configurations is making things more difficult further down the line with the build and then release.
Configurations
We currently use the two default configurations, debug & release.
We tend to use the debug configuration on our Dev environment (it contains the dev database connection string and other app settings transforms). This is what we deploy to 'dev'.
Then we also have the release configuration with the production transforms in it. This is what we deploy to 'Production'.
How can I set up our configs/transforms to get release management to work in the example way?
One option: Build both configurations. Publish both configurations as artifacts in your build.
In your release definition, deploy the appropriate configuration from the linked artifacts.
Another option: Don't do compile-time configuration transforms and instead do deployment-time configuration.
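A minimal sketch of the first option as a YAML build, assuming a single solution; the names and output paths are placeholders:

    # Hypothetical build that publishes both configurations so the release
    # can deploy Debug to Dev and Release to Production.
    pool:
      vmImage: 'windows-latest'

    steps:
    - task: VSBuild@1
      inputs:
        solution: 'MySolution.sln'
        configuration: 'Debug'
        msbuildArgs: '/p:OutDir=$(Build.ArtifactStagingDirectory)\Debug\'
    - task: VSBuild@1
      inputs:
        solution: 'MySolution.sln'
        configuration: 'Release'
        msbuildArgs: '/p:OutDir=$(Build.ArtifactStagingDirectory)\Release\'
    - task: PublishBuildArtifacts@1
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)'
        artifactName: 'drop'   # contains Debug\ and Release\ subfolders

Each environment in the release definition then deploys from the matching subfolder of the drop artifact.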
What you show in the screenshot is the Overview of releases, which is used to track a release in Microsoft Release Management, based on the release name and links.
The Overview page shows a list of release definitions. Each one is shown as a series of environments, with the name of the release and the date or time it was started. The color of the heading and the icon in each environment indicate the current status of the release. The color scheme is the same as in the Releases page.
You just need to follow the provided starter deployment templates or you can also create your own templates for your project.
Back to the screenshot: those are just the environments in a release definition. You can add the environments you need to the definition.
After that, you will see the same thing in the overview, just like the example.
For your situation, you can create two separate release definitions with two build definitions based on the two configurations. Moreover, there is a very detailed document on MSDN covering setup, configuration, release management, and deployment, which will give you a systematic understanding.

How can I achieve Continuous Integration and Deployment for many projects in one solution?

What we use:
We use Mercurial and Bitbucket for repositories, AppVeyor and Kudu for continuous integration and deployment, and Visual Studio 2015 as our IDE.
What we have:
We have different web projects that share some other projects. Each web project has its own solution, and every solution has its own repository.
If there is a change on the develop branch, AppVeyor builds the repository, tests it and deploys it.
If there is a change on default, Kudu builds the repository and deploys it.
What we want:
We want to merge all of these projects into one solution, but I couldn't figure out how to achieve continuous integration and deployment that way.
If I change something in webproject1, I want to build and deploy only webproject1. The other web projects in the solution should be neither built nor deployed.
Perhaps a single repository will help you, using relative paths to include the shared libraries from your different applications.
Each application can still have its own solution file, and your CI setup also stays as it is. What changes is that the shared projects you have across all applications will be referenced with relative paths. E.g.:
Repository root\Core\Component1\Component1.csproj
Repository root\Core\Component2\Component2.csproj
Repository root\Applications\App1\App1.sln
Repository root\Applications\App1\Domain\Domain.csproj
Repository root\Applications\App1\Web\Web.csproj
Repository root\Applications\App2\App2.sln
Repository root\Applications\App2\Domain\Domain.csproj
Repository root\Applications\App2\Web\Web.csproj
Now your different applications can include the Core components they need by adding the existing projects to their solutions using relative paths.
Your continuous integration system will have VCS triggers watching each app and its dependencies, so only relevant changes fire a build.
So if an App1 developer makes a change to Component1, and Component1 is also used by App2, the build server will trigger builds of both App1 and App2, signaling any breaking changes. However, if App2 doesn't depend on Component1, then only App1 will build.
This is achieved by configuring the build triggers for your applications.
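Since the stack here is AppVeyor, a hedged sketch of such a trigger using AppVeyor's commit filtering; the paths follow the layout above and assume App1 depends on Component1:

    # Hypothetical appveyor.yml for App1: build only when App1 or its shared components change.
    only_commits:
      files:
        - Applications/App1/
        - Core/Component1/

    build:
      project: Applications/App1/App1.sln

App2 gets its own build configuration listing only the paths it depends on.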
One benefit of this strategy vs. having a single .sln is that you won't have to build everything each time you build the solution (nor configure which projects to build each time you work on a different app).
Also note that you can achieve this with multiple repositories, but that means you'd need to check them out at the correct locations so your relative paths work. It's also quite obscure: if you check out App1 and try to build it, it simply won't work, and you'll have to figure out which other repos to check out, etc.
You are using Mercurial, but FYI, one way this would be handled with Git is with submodules.

Build \ deploy - complex scenario and application

The managers have tasked our team with creating an automated build/deploy script for the production servers.
The script requirements are:
fetch the latest release source code of the web application from git
compile it into a WAR
connect to a remote server (production/test)
shut down the Tomcat server on the remote machine
execute schema updates on the remote DB server (for the new release)
deploy the new WAR to Tomcat and start it
My questions are:
can all 3 major players in the build/deploy area do that (Ant/Maven/Gradle)?
is building a small application (a Java application) that does these exact steps good practice? (Writing a Java app would probably be much faster than learning to do it in Maven/Ant/Gradle.)
are there any alternative tools for this kind of work?
are there any better alternatives for the whole "build-machine" idea?
thanks!
Can all 3 major players in the build/deploy area do that (Ant/Maven/Gradle)?
With enough customization of targets/goals, any of these can do what you want.
Is building a small application (a Java application) that does these exact steps good practice?
You certainly could (or just use a shell script), this is essentially the same thing as customizing the build tools listed prior.
Are there any alternative tools for this kind of work?
My company has created BuildMaster specifically to solve these and additional problems related to deployment, and it sounds like the free version may suit your scenario.
The basic solution would be to:
Connect to Git by adding a Git source control provider (or if you're using GitHub, the GitHub provider)
Connect an agent to the target server and add it to BuildMaster (requires installation for Windows, but if deploying to Linux it just uses an SSH connection)
In your deployment plan, you'd use the following actions:
"Build Ant Project" or "Execute Maven" to perform the actual build process
"Create Build Artifact" to associate the build output (whether a WAR file or its contents) with the BuildMaster build
"Stop Service" to stop Tomcat
"Execute Database Scripts" to execute scripts on disk (whether you've pulled them from source or whatever) or "Execute Database Change Scripts" which works automatically if you've uploaded them to BuildMaster
"Deploy Build Artifact" to deploy the previously captured artifact to the remote server
"Start Service" Tomcat
What's neat about this approach is that when you create this deployment plan, it reads very similar to what is written out in the above steps.
Additional benefits that may or may not pertain to your exact scenario, and that can be trivially added, include:
Approvals & Signoff - workflows can specify these to ensure QA signoffs occur before promotion
Release Management and Auditing - know what build is in what environment and when it went there
Variable Deployments - you can add branching logic to deployment plans to make it easy to select whether the "Debug" or "Release" build goes to test, for example.
Notifications - users can subscribe to certain events (deployment, release, etc.) and receive email notifications when those events occur

Jenkins - multiple SVN locations. Is it possible to specify the build version?

I am new to the Jenkins CI tool and I want to know if it is possible to specify which build to use when there are several projects, on different SVN locations, that depend on one another. For example, suppose I have the web project on SVN location1 and the backend project on SVN location2, the web project depends on the backend, and one of the developers modifies something in the backend; then when the web developer does a commit, there will be a build failure. Is there a possibility to specify that the build of the web part should take into consideration build x of the backend and not the newest build?
Thanks in advance.
Yes, that can be done. In Jenkins, check the Build Triggers options in your project settings; on the line Build after other projects are built you can specify the names of the projects you want to build automatically after changes have been made to the base project.
And similarly, in the Post-build Actions, look for Build other projects, where you can specify that if the base project builds successfully, it will automatically trigger a build of the child projects.
Hope this helps.
Your example of building a project against a specific version of another project is a little non-standard, but not impossible.
In your case, I would use Jenkins' ability to execute arbitrary scripts to help. The script would take care of getting the correct version of the project that the one you want to build depends on.
Building on your example of a Web and Backend project, here's how I would do things without using a parametrized build:
Add a file to the repository of the Web project that stores the version of the Backend project to use
Configure a job to build the Web project when the source for the backend project changes in SVN.
The project should check out the latest version of the Web project
The first Build Step for the project would be a script (Execute Shell or Execute Windows Batch Command) that does the following:
Gets the version of the Backend to use from the file containing the version info
Either pulls the appropriate prebuilt version of the Backend from the Backend's repository, or pulls the source of the appropriate version of the Backend
(If you pulled only the source for the Backend, the next Build Step should be to build the Backend)
Build the Web piece
Run any unit tests
