Can you elevate a build in Release Management client for VS 2013?

This might be the wrong place to ask this because there's no actual code involved but I am running out of places I could seek help regarding this.
Previously I've been using Octopus where you can create a build and later push it to other environments like QA, UAT, Production etc.
I'm using Release Management now and I can't find such an option. I was told the only way is either to push immediately to all environments and leave them pending until I'm ready to approve them (which is dangerous, since someone else could approve the release prematurely),
or to revert my code back to an earlier state, push a release to the environment I want, and then move the code forward again.
This all sounds wrong and tedious to me, and the Release Management documentation does not offer a clear answer. Ideally I want to make a release to one environment and at a later date elevate it to another.
If someone knows how it could be done, I would be very grateful.

Release Management supports this scenario.
You create a release path that consists of the environments you want to release to and the order you want to release to them. It also captures who is responsible for promoting a release between stages.
You then create a release template for a given build definition that will travel along that path. When you create a release for that release template, you choose what build from that build definition will be used.
When you trigger a release, it will go to the first stage in the release path, wait for approvals, then move on to the second stage, and so on.
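To make the promotion model concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the stage names and the approver check are invented, and this is not Release Management's actual API.

```python
# Illustrative model of a release path: a fixed sequence of stages,
# where each promotion requires an explicit approval before moving on.

STAGES = ["Dev", "QA", "UAT", "Production"]  # hypothetical release path

class Release:
    def __init__(self, build_id):
        self.build_id = build_id
        self.stage_index = 0  # every release starts at the first stage

    @property
    def current_stage(self):
        return STAGES[self.stage_index]

    def promote(self, approved_by):
        """Move the same build to the next stage, gated on an approval."""
        if not approved_by:
            raise PermissionError("promotion requires an approver")
        if self.stage_index + 1 >= len(STAGES):
            raise RuntimeError("already at the final stage")
        self.stage_index += 1
        print(f"Build {self.build_id} promoted to {self.current_stage} "
              f"(approved by {approved_by})")

release = Release(build_id="MyApp_20140101.3")
release.promote(approved_by="qa-lead")      # Dev -> QA
# ...days later, once testing is done...
release.promote(approved_by="ops-manager")  # QA -> UAT
```

The point is that the same build travels along the path, and nothing reaches a later stage without the responsible person approving it.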

Related

Azure PaaS and ALM - how to handle branches with single-click Publishing?

Right now I have an Azure PaaS solution with a single repo in TFS - we right-click publish from VS to an App Service and then swap slots to get code to production. Small team, disciplined check-ins (or so I thought), etc.
I made a decision to check code in that wasn’t production ready, thinking that I could roll back and publish a hotfix should the need arise.
Well, the need has arisen and I've rolled things back to apply the fix. This was a bit of a headache though.
I’m a little unclear on what the right thing to do is moving forward. What I want to try is:
Create a branch from our MAIN repo and stick all ongoing development there, call it DEV. We'll create two workspaces on our machines - one for each branch.
When we're ready to push a feature, merge down to MAIN and then QA before right-click publishing > Staging > Prod.
At a high level, does this seem like a step in the right direction?
What I’m trying to do is keep this project/ALM lean and simple. I don't want to go as far as introducing a build server with RM and other expensive (time, materials, process) components. I just want a sensible, incremental upgrade in the maturity of our current setup to avoid the above headache, and this is all I could come up with.
Here are two approaches for your reference: one based on the workflow, and one based on publishing (releasing).
A. Just using mainline and tagging for release
Pros:
Avoid merge hell.
Keeping to the mainline encourages best practices like proper release planning, not introducing a lot of WIP, using branching by abstraction to deal with out-of-band long-term work, and using the open/closed principle and configurable features to manage work in progress that may or may not need to be disabled, now or in the future, in order to release or to avoid a full rollback (see the feature-flag sketch after this list).
Cons:
Dealing with work in progress becomes an issue and adds to the potential attack surface when it comes time to release. However, if your developers are disciplined, then new features should be configurable and modular, and therefore easily disabled/enabled; or there is no WIP at all, and at each release point all work is either completed or not yet started (i.e. Scrum).
Large scale/out-of-band changes require more thinking ahead of time
to implement.
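The "configurable features" point above is usually just a flag check around unfinished code. A minimal sketch (the flag names and the config source are invented):

```python
# Minimal feature-flag pattern: unfinished (WIP) code stays merged into
# the mainline but is switched off in configuration until it is ready.

FEATURE_FLAGS = {                 # in practice this would come from config
    "new_checkout_flow": False,   # WIP: disabled for this release
    "order_history": True,        # finished: enabled
}

def is_enabled(flag):
    return FEATURE_FLAGS.get(flag, False)

def render_legacy_checkout(cart):
    return f"legacy checkout for {len(cart)} items"

def render_new_checkout(cart):
    return f"new checkout for {len(cart)} items"

def render_checkout(cart):
    if is_enabled("new_checkout_flow"):
        return render_new_checkout(cart)  # WIP code path, dark in production
    return render_legacy_checkout(cart)

print(render_checkout(["book", "pen"]))   # -> legacy checkout for 2 items
```

This is how WIP can ship in the mainline without being reachable, so a release never has to wait for (or roll back) a half-finished feature.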
B. Branch by release
Pros:
You can begin working on the next iteration while the current
iteration finishes its round of acceptance testing.
Cons:
Tons of branches.
Still need to tag branches at release points.
Still need to deal with WIP: merge WIP from the previous release branch into the next release branch if it's not going to make it, or disable it or yank it out of the release branch and re-run the acceptance tests.
Hotfixes need to be applied to more branches (release branch + hotfix + new tag, then merge the hotfix into the vNext branch, and possibly vNextNext, depending on where the hotfix is needed).
With respect to your point 1, I would not recommend using two workspaces, since you are already running "two workspaces" internally with two branches. The approach is not that bad, just a little hard to do in TFVC, meaning the old server-based version control inside TFS. I do hope you're planning to merge everything from DEV to MAIN at some point in time.
In general, your approach is a better match for Git as source control, and especially gitflow (http://nvie.com/posts/a-successful-git-branching-model/) as a branching model. We are running that with success within my team.
You can migrate from TFVC to Git using git-tfs (http://git-tfs.com/).
If you are looking for a cheap model that scales out with build servers and such, I would recommend looking at Visual Studio Team Services (https://www.visualstudio.com/en-us/products/visual-studio-team-services-vs.aspx) as well, to host and build your code. There you also have Release Management integrated at no cost (up to 5 people; free for all Visual Studio subscribers).

TFS Team Build - Testing to Production

I have scoured the internet to find out what I can on this, but have come away short. I need to know two things.
Firstly, is there a best practice for how TFS & Team Build should be used in a Development > Test > Production environment? I currently have my local VS get the latest files. Then I work on them & check them in. This creates a build that then pushes the published files into a location on the test server which IIS references. This creates my test environment. I wonder then what is the best practice for deploying this to a Live environment once testing is complete?
Secondly, off the back of the previous - my web application is connected to a database. So, the test version will point to a test database. But when this is then tested and put live, I will need that process to also make sure that any data connections are changed to the live database.
I am pretty much doing all this from scratch and am learning as I go along.
I'd suggest you look at Microsoft Release Management, since it's the tool that can help you do exactly the things you mentioned. It can also be integrated with TFS.
In general, release management is:
the process of managing, planning, scheduling and controlling a software build through different stages and environments, including testing and deploying software releases.
Specifically, the tool that Microsoft offers would enable you to automate the release process, from development to production, keeping track of what and how everything is done when a particular stage is reached.
There's an MSDN article, Automate deployments with Release Management, that gives a good overview.
Basically, for each release path, you can define your own stages, each one made of a workflow (the so-called deployment sequence) containing the activities you want to perform using pre-defined machines from a pool.
It's possible to insert manual interventions/approvals if necessary, and the whole thing can be triggered automatically once your build is done.
Since you are pretty much in control of the actions performed on each machine in each stage (through the use of built-in or custom actions/components) it is also certainly possible to change configuration files, for example to test different scenarios, etc..
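For instance, a stage-specific action that swaps a connection string can boil down to simple token replacement. This is an illustrative Python sketch, not Release Management's built-in tokenization components; the file names, token, and values are made up:

```python
# Illustrative stage-specific config rewrite: replace __TOKENS__ in a
# template config with the values for the target stage.

STAGE_SETTINGS = {   # hypothetical per-stage values
    "Test": {"__DB_CONNECTION__": "Server=testdb;Database=App;Integrated Security=True"},
    "Prod": {"__DB_CONNECTION__": "Server=proddb;Database=App;Integrated Security=True"},
}

def apply_stage_config(template_path, output_path, stage):
    with open(template_path) as f:
        text = f.read()
    for token, value in STAGE_SETTINGS[stage].items():
        text = text.replace(token, value)
    with open(output_path, "w") as f:
        f.write(text)

# During the "Test" stage's deployment sequence:
apply_stage_config("web.template.config", "web.config", stage="Test")
```

This directly answers the second part of the question: the same build is deployed everywhere, and only the configuration applied at each stage changes which database it points to.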

Publish a specific revision using CruiseControl.Net

I am setting up a CruiseControl.Net server. So far, it only builds a project (.Net website), and I kind-of know how to set up unit testing, code coverage, etc in the future.
What I will need to have soon is this:
The developers commit changes to SVN continually, thus CCNet builds often.
CCNet will publish the latest version to the development server, as soon as a commit is validated (with unit tests etc).
The project manager validates a specific version, in order to publish it to the pre-production server, and creates an SVN tag from this revision.
The last point is where my problem lies: how exactly can I set things up so the project manager can, for instance, browse to the CCNet web dashboard, select a previous specific build, and say "this is the build I want to publish"?
I believe that my thinking is flawed somewhere, but I can't put my finger on it. Maybe CCNet is not the right place to do these manipulations ?
In my mind, I can create an SVN tag using CCNet and mostly work from the trunk, but maybe I can't? Maybe it's the other way around, and I should add a CCNet project every time a tag is created in SVN?
The final goal is that I want to automate the publication process: zip creation (for archiving), web.config modification (using NAnt for instance), and website publication (using FTP).
In all these steps, I want to keep manual intervention to a minimum. If I can avoid adding a new project to CCNet every time a tag or branch is created in SVN, that would be awesome.
Thanks for your help, and sorry if it's not very easy to read, but it's not very clear in my head either...
Since you can create any task, you should be able to achieve the goal, though unfortunately not out-of-the-box.
Since you use SVN, it all actually depends on revisions. I think I'd create a separate project for your third scenario and add a parameter where the PM would provide the revision number. Then, based on that, I'd tag the sources etc. in my own task.
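As a sketch, that custom task could be as simple as a script that takes the revision the PM entered and creates the tag server-side. This is illustrative Python; the repository URL and tag naming are assumptions:

```python
# Tag a specific trunk revision, as entered by the PM in the forced build.
import subprocess
import sys

REPO = "http://svn.example.com/myproject"   # hypothetical repository URL

def tag_revision(revision, tag_name):
    # 'svn copy' of trunk@REV to tags/ creates the tag entirely server-side,
    # so no working copy is needed on the build machine.
    subprocess.check_call([
        "svn", "copy",
        f"{REPO}/trunk@{revision}",
        f"{REPO}/tags/{tag_name}",
        "-m", f"Tagging revision {revision} as {tag_name}",
    ])

if __name__ == "__main__":
    tag_revision(sys.argv[1], sys.argv[2])   # e.g. 1234 release-1.2
```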
Regarding the other points, I think this is similar. Recently, for web projects, we started using MSDeploy: in each stage's build the MSDeploy package was created, and then there was a separate build called Deploy that, when forced, allows us to select which package we want to deploy using MSDeploy.
Having several environments, however, started to feel a little bit like overkill to manage with CCNet, and I'll be looking into kwakee at some point.
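And if MSDeploy feels heavier than needed, the zip/web.config/FTP publication described in the question can be scripted directly and wired in as a CCNet task. A rough Python sketch, where the paths, token, host, and credentials are all placeholders:

```python
# Archive the site, patch web.config for the target environment,
# then push the archive over FTP.
import ftplib
import os
import shutil

def publish(site_dir, archive_name, ftp_host, ftp_user, ftp_password):
    # 1. Patch the config in place (simple token replacement).
    config = os.path.join(site_dir, "web.config")
    with open(config) as f:
        text = f.read()
    with open(config, "w") as f:
        f.write(text.replace("__DB__", "Server=preprod;Database=App"))

    # 2. Zip the site for archiving.
    archive = shutil.make_archive(archive_name, "zip", site_dir)

    # 3. Upload the archive over FTP.
    with ftplib.FTP(ftp_host, ftp_user, ftp_password) as ftp:
        with open(archive, "rb") as f:
            ftp.storbinary(f"STOR {os.path.basename(archive)}", f)

publish("./build/site", "site-r1234", "ftp.example.com", "deploy", "secret")
```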

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that;
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes not affecting the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
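To pin down the intent, here is how the bump rules might look in code. This is an illustrative Python sketch; in particular, whether a minor bump resets the revision is my assumption, not part of the pattern above:

```python
# Encode the bump rules: resolved features bump minor, resolved bugs bump
# revision, the build number is appended by CI, and major is bumped by hand.

def bump(version, features_resolved, bugs_resolved, build_number):
    major, minor, revision, _ = (int(p) for p in version.split("."))
    if features_resolved:
        minor += features_resolved
        revision = 0            # assumption: a minor bump resets revision
    if bugs_resolved:
        revision += bugs_resolved
    return f"{major}.{minor}.{revision}.{build_number}"

print(bump("1.4.2.77", features_resolved=2, bugs_resolved=1, build_number=78))
# -> 1.6.1.78
```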
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson), and I am using the Eclipse IDE with the Mylyn plugin; the project is a Maven (web) project.
Now, I want to elucidate what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Consider that I am a Test-Driven-Development-oriented developer; thus I write the unit, DBUnit, and user-acceptance (for example using Selenium) tests correspondingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook that triggers the continuous integration tool to build the project (P). While building, in the proper phase, the test code should be run and its reports generated. The user-acceptance tests should be performed on a dedicated server (for example JBoss or Tomcat), in this order: bring the server up, run the UA tests, generate the UA test reports, and bring the server down. If all these steps complete successfully, the versioning should be performed. In the versioning part, a Maven plugin, or whatever else, should take the number of issues solved from the issue tracking system, increment the related version fragments (minor and revision), and at last append the build number. The version fragments may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That is the whole auto-cycled process I want.
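For the versioning step, the issue counts could come from JIRA's REST search API. A rough Python sketch; the instance URL, JQL, project key, and date are assumptions:

```python
# Count issues resolved since the last release, per issue type,
# using JIRA's REST search endpoint (URL and JQL are illustrative).
import json
import urllib.parse
import urllib.request

JIRA = "https://jira.example.com"   # hypothetical JIRA instance

def count_issues(jql):
    # maxResults=0 asks only for the match count, not the issue bodies.
    url = f"{JIRA}/rest/api/2/search?jql={urllib.parse.quote(jql)}&maxResults=0"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["total"]

features = count_issues(
    'project = P AND issuetype = "New Feature" AND status = Resolved '
    'AND resolved > "2014/01/01"')
bugs = count_issues(
    'project = P AND issuetype = Bug AND status = Resolved '
    'AND resolved > "2014/01/01"')
```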
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question: on automatic deployment to production, this requires the sign-off of "the business", whoever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there, that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug IDs. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally, you'll need your own bit of scripting to do the query and update the build number accordingly.
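The "annotated with bug IDs" piece is usually just a commit-message convention plus a parser. For example (the PROJ-123 key format is an assumption):

```python
# Pull issue IDs out of commit messages so the build can be
# associated with the issues it contains.
import re

ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")   # e.g. PROJ-123

def issues_in(commit_messages):
    found = set()
    for message in commit_messages:
        found.update(ISSUE_KEY.findall(message))
    return sorted(found)

print(issues_in([
    "PROJ-101: add login form",
    "fix NPE (PROJ-102), plus cleanup",
    "formatting only",
]))
# -> ['PROJ-101', 'PROJ-102']
```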
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?

What's the workflow of Continuous Integration With Hudson?

I was referred to Hudson today.
I had heard about continuous integration before, but I have no idea what a CI server actually is.
Hudson was really easy to install on Ubuntu, and in several minutes I managed to set up an instance of it.
But I don't quite understand the workflow of a CI server, or how I am supposed to use it.
Please tell me if you have experience with CI; thanks in advance.
Edit:
I am currently using Mercurial as my SCM, and I wonder what is the right way to use it with Hudson.
I have installed the Mercurial plugin for Hudson, and I created a new job with a local repository. When I commit to the repository, the Hudson job builds with the latest version of my source code.
If I used a remote repository instead, what's the workflow like?
Is it something like the following?
Set up a Hudson job with the repository
Developer makes a local clone of the repository
Developer commit and push changes
The remote repository updates with the incoming changesets
Run a Hudson build
There may be something I have misunderstood entirely; please help me point it out.
Continuous Integration is the process of "integrating software" continuously i.e. as frequently as possible (ultimately after each set of changes) to avoid any big-bang integration and all subsequent problems by getting immediate feedback.
To implement Continuous Integration, you first need to automate the build of your software (where build means of course compiling sources, packaging them, but also compiling tests, running the tests, running quality checks, etc, anything that will help to get feedback on the health of your code). Then you need to trigger the build on the latest version of the sources on a particular event (a change in the repository, a temporal event), to generate reports and to send notifications upon failure (by mail, twitter, etc).
And this is precisely the responsibility of a CI engine: offering trigger mechanisms, getting the latest version of the sources, running the build, generating and publishing reports, and sending notifications.
And because running a build is CPU- and disk-intensive, CI engines usually run on a dedicated machine (or even a farm of machines if you want to build lots of projects).
Back to your question now. Once you've got Hudson running, configure it (Manage Hudson > Configure System): set up the JDK, build tools, etc. Then create a Hudson job and follow the steps: configure the location of the source repository, the build tool, the trigger, a notification channel, and you're done (you can do more complex things, but that's a start).
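As an aside, for the remote-repository workflow in the question: the usual glue is a hook on the central Mercurial repository that pings Hudson after a push (Hudson jobs can generally be triggered remotely via their build URL). A sketch, where the host, job name, and token are made up, and the hook must run under a Python that Mercurial can load:

```python
# Mercurial 'changegroup' hook for the central repo: after a push,
# ask Hudson to build the job. Wire it up in the repo's .hg/hgrc:
#   [hooks]
#   changegroup.hudson = python:/path/to/hudson_hook.py:trigger_build
import urllib.request

HUDSON_JOB_URL = "http://hudson.example.com/job/myproject/build?token=SECRET"

def trigger_build(ui, repo, **kwargs):
    ui.status("notifying Hudson...\n")
    urllib.request.urlopen(HUDSON_JOB_URL)  # fire-and-forget build request
    return 0   # a non-zero return would mark the hook as failed
```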
For more details on the setup, check:
The official Use Hudson guide. << START HERE
Continuous Integration with Hudson - Tutorial.
Spot defects early with Continuous Integration.
Martin Fowler's overview of continuous integration is one of the canonical references. In my opinion, using automation to make sure your code base is healthy is one of the most useful things that you can set up.
Update: Sorry that I didn't have much time earlier to expand on my reply. @Pascal_Thivent is right that in order to use CI effectively, you need to be able to automate your builds, tests, etc. CI is actually a good forcing function for this. For me, it's one of those little warning flags if I start to think that it would be too painful to put a build into Hudson: it means that something is not quite right.
What I like about Hudson is that it's flexible enough to accommodate different workflows. We use it for both builds / unit tests and releases. And it eliminates a lot of the worry about certain release procedures only working in one person's environment.
What I don't like about Hudson is that it is occasionally unstable when new releases break plugins. I've had a couple of upgrades (2 out of 10 or so) go bad because of incompatibilities. I do two things now:
I never upgrade my team's Hudson server to the latest and greatest right away. I generally only upgrade when there are significant new features, or bug fixes.
I now have a basic Hudson instance set up with all my plugins on a virtual machine with some dummy builds that I fire up to test out any new upgrades before doing it on the public server.