I've been doing some reading about continuous integration recently, and there is a scenario that could occur which I don't understand how to deal with appropriately.
We have a stable mainline/trunk branch and create branches for features. Each developer will keep their own feature branches up to date by merging from trunk into their branch on a regular basis. However, it is entirely possible that two or more feature branches could be created and worked on over a period of several weeks or months. In this time many releases of the software could be deployed. This is where my confusion arises.
It is very likely that changes for one feature branch will cause merge conflicts with other feature branches. CI suggests you should merge into trunk at least daily which would resolve the conflicts quickly. However, you may not want to merge the feature code into trunk because it may not be finished or you may not want that feature available in the next release. So, how do you deal with this scenario and still follow CI principles of daily code integration?
There are no feature branches in proper CI. Use feature toggles instead.
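At its simplest, a toggle is just a configuration flag that lets unfinished work sit on trunk while staying switched off. As a rough, hypothetical sketch (the flag name, config file, and scripts are all made up; real toggle frameworks are more granular than this):

    # read a hypothetical flag from a config file checked in alongside the code
    NEW_CHECKOUT=$(grep '^new_checkout=' config/features.conf | cut -d= -f2)
    if [ "$NEW_CHECKOUT" = "on" ]; then
        ./start_new_checkout.sh      # unfinished feature, only exercised when the flag is flipped
    else
        ./start_legacy_checkout.sh   # current behaviour ships by default
    fi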
The idea, explained more fully in this article, is to merge from the trunk/release branch into feature branches daily, but only merge back in the other direction once a feature meets your definition of 'done'.
Code written by one feature team will be pushed into the trunk once it's complete, and will be 'distributed' to the other teams, where conflicts can be dealt with, as part of the daily merge process.
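In git terms, that flow might look something like this (branch names are illustrative, and 'trunk' stands for whatever your mainline is called):

    # daily: pull the mainline into the feature branch
    git checkout feature/payments
    git fetch origin
    git merge origin/trunk

    # only once the feature meets the definition of 'done': merge back the other way
    git checkout trunk
    git merge --no-ff feature/payments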
This doesn't go as far as satisfying Nick's desire for a version control system that can be used as a backup tool, unless the changes being made are small enough that they can be committed to the feature branch within a timeframe where the risk of losing your work is acceptable.
I personally don't try to reintegrate code into the release branch before it's done, and although I've never really tried, I'm sure building feature toggles in for unfinished work has its own issues.
I think they mean merging mainline into the feature branch, not the other way 'round. This way, the feature branch will not deviate from mainline too much, and be kept in an easily mergeable state.
The git folks do the same thing by rebasing feature branches on top of the master branch before submitting a feature.
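As a rough sketch of that habit (the branch name is made up):

    # keep the feature branch current by replaying it on top of master before submitting
    git checkout feature/search
    git fetch origin
    git rebase origin/master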
In my experience with CI, you should keep your feature branches up to date with the mainline changes, as others have suggested. This has been working for me for several releases. If you are using Subversion, make sure you merge with merge tracking enabled. That way, when you merge your changes back to the mainline, it will look like you are only merging the feature changes, rather than re-resolving conflicts your feature might have had with the mainline. If you are using a more advanced VCS like git, the first merge would be a rebase and the second a merge.
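For illustration, the Subversion side of this might look roughly as follows, assuming SVN 1.5+ merge tracking and made-up working-copy paths (newer SVN versions no longer need --reintegrate):

    # regularly: sync trunk changes into the feature branch
    cd ~/work/my-feature-wc
    svn merge ^/trunk
    svn commit -m "Sync feature branch with trunk"

    # when the feature is done: merge only the feature changes back to trunk
    cd ~/work/trunk-wc
    svn merge --reintegrate ^/branches/my-feature
    svn commit -m "Reintegrate my-feature into trunk"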
There are tools that can help you get this done more smoothly, such as this one: Feature branches with Bamboo.
Feature branches committing back into the mainline, and doing so OFTEN, is an essential part of Continuous Integration. For a thorough breakdown, see This Article.
There are now some good resources showing how to combine both CI and feature branches. Bamboo or the Feature Branch Notifier are some options to look at.
And here is another, quite long, article showing the pros of so-called distributed CI. Below is one excerpt explaining the benefits:
Distributed CI has the advantage for Continuous Deployment because it keeps a clean and stable Mainline branch that can always be deployed to Production. During a Centralized CI process, an unstable Mainline will exist if code does not integrate properly (broken build) or if there is unfinished work integrated. This works quite well with iteration release planning, but creates a bottleneck for Continuous Deployment. The direct line from developer branch to Production must be kept clean in CD; Distributed CI does this by only allowing Production-ready code to be put into the Mainline.
One thing that still can be challenging is keeping the branch build isolated so that it doesn't pollute your repository of binaries by pushing its branch builds to it. Bamboo seems to address that, but not sure it's as easy with Jenkins.
I recently got my team to switch to CI. We're still in the baby steps, but so far we've started using this branching strategy and it's been working great. We don't have any automated tests running yet and do all our testing manually. We are also currently doing weekly releases, so we create a release branch from our Dev branch Monday afternoon, run further tests on Tuesday, push it live to our Master branch on Wednesday morning, and monitor throughout the day. The problem is that sometimes our QA team doesn't get to test all our features in time, or a feature fails QA but is already in the Dev branch and could be going live prior to the issues being fixed. Are there any ways to mitigate this? This is all fairly new to us, so I am open to changing everything if necessary.
Hopefully this diagram is a good explanation of how things work currently:
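In command form, the weekly flow described above is roughly this (branch names as in the question; the release branch name is illustrative):

    # Monday afternoon: cut a release branch from Dev
    git checkout -b release/week-23 dev
    # Tuesday: QA runs further tests against release/week-23
    # Wednesday morning: merge to Master and go live, then monitor throughout the day
    git checkout master
    git merge --no-ff release/week-23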
You're not doing Continuous Integration (CI).
In CI you only have one branch (master/main/etc) - on which the CI tool operates - and maybe very short-lived child branches that get merged into that branch, ideally in < 1 day. Pretty much synonymous with trunk-based development.
In traditional CI breakages are normal, part of the process - bad changes are identified by the CI tool (one of the reasons you're using it to start with) after they are merged into the branch. Breakages should be fixed ASAP as they block the project's critical path. Unfortunately this requires human intervention, which can be a serious speed bump for larger projects.
There is also gating CI, which performs automated, orchestrated verification of changes prior to merging them into the branch, rejecting those that fail the verification and automatically merging the successful ones into the branch, thus ensuring that the branch maintains a minimum quality level at all times. IMHO the only comfy and deterministic way to do integration, especially in larger-scale projects. AFAIK there are only 2 such tools out there: Zuul and ApartCI (which is my baby).
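This isn't Zuul or ApartCI syntax, but as a hand-rolled sketch of the gating idea, the verification runs on the tentative merge result and master only advances if it passes (branch and script names are assumptions):

    # build and verify the would-be integration result before it becomes master
    git fetch origin
    git checkout -B gate origin/master
    git merge --no-ff origin/feature/foo            # the change under verification
    ./run_tests.sh && git push origin gate:master   # advance master only on success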
The QA/release stages are just subsequent steps in your CI/CD pipeline which are automatically reached if the previous steps pass; see the picture in this answer. Optionally you can pull release branches from the respective refpoints, which allows complex multi-release stories, hot-fixes, etc. But these branches are never merged back into their parent; the hot-fixes are instead double-committed/cherry-picked into the master branch, potentially with additional changes as required by the different branch context.
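A sketch of that hot-fix handling in git, with placeholder branch names and SHA:

    # fix directly on the release branch pulled from its refpoint
    git checkout release/2.3
    # ...edit files...
    git commit -am "Hot-fix for issue X"

    # double-commit the same fix into master (adapt as needed for the different context)
    git checkout master
    git cherry-pick a1b2c3d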
GitFlow is not CI:
you have multiple branches (silos) which you only merge once in a while.
branch merges can rapidly become a nightmare, especially in larger projects where you can have more than 1 feature branch active simultaneously. Try to add 1, then 2 more parallel feature branches in your picture, as well as the additional upstream branch merges that need to be performed before you can do the merges you already show, and the additional QA executions you may need because of these extra merges. Just to get an idea of how badly such a strategy scales.
tests done in these isolated branches are out of context - the branches do not reflect the overall project integration. Even if these tests pass the master branch may fail them or be plain broken after merge. See an example of how just 2 simple changes can cause such problems, let alone branches with multiple changes.
As you probably guessed by now, I'm not a big fan of GitFlow, it's awfully similar to the pre-CI integration practices I had to endure a long time ago ;)
Right now I have an Azure PaaS solution with a single repo in TFS - we right-click publish from VS to an App Service and then swap slots to get code to production. Small team, disciplined check-ins (or so I thought), etc.
I made a decision to check code in that wasn’t production ready, thinking if needed, I could roll back and publish a hotfix should the need arise.
Well, the need has arisen and I've rolled things back to apply the fix. This was a bit of a headache though.
I’m a little unclear on what the right thing to do is moving forward. What I want to try is:
Create a branch from our MAIN repo and stick all ongoing development there, call it DEV. We'll create two workspaces on our machines - one for each branch.
When we're ready to push a feature, merge down to MAIN and then QA before right-click publishing > Staging > Prod.
At a high level, does this seem like a step in the right direction?
What I’m trying to do is keep this project/alm lean and simple. I don't want to go as far as introducing a build server with RM and other expensive (time, materials, process) components - I just want a sensible, incremental upgrade in the maturity of our current setup to avoid the above headache and this is all I could come up with.
Here are two approaches for your reference: one based on the working flow, one based on publishing (releasing). A minimal git sketch of both options follows the two lists below.
A. Just using mainline and tagging for release
Pros:
Avoid merge hell.
Keeping to the mainline encourages some best practices like proper release planning, not introducing a lot of WIP, using branching by abstraction to deal with out-of-band long-term work, and using an open/closed design and configurable features for managing works in progress that may, or may not, need to be disabled now or in the future in order to release or to avoid a full rollback.
Cons:
Dealing with works in progress becomes an issue and adds to the potential attack surface area when it comes time to release. However, if your developers are disciplined then new features should be configurable and modular, and therefore easily disabled/enabled, or there is no WIP and at each release point all work is either completed or has not yet been started (i.e. Scrum).
Large-scale/out-of-band changes require more thinking ahead of time to implement.
B. Branch by release
Pros:
You can begin working on the next iteration while the current iteration finishes its round of acceptance testing.
Cons:
Tons of branches.
Still need to tag branches at release points.
Still need to deal with WIP and merge WIP from the previous release branch into the next release branch if it's not going to make it, and still need to disable it or yank it out of the release branch and re-run acceptance tests.
Hot fixes need to be applied to more branches (release branch + hotfix + new tag; merge the hotfix into the vnext branch and possibly vnextnext, depending on where the hotfix fails).
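As promised above, a minimal git sketch of the two options (version numbers and branch names are illustrative):

    # Option A: stay on the mainline and tag the release point
    git checkout main
    git tag -a v2.4.0 -m "Release 2.4.0"
    git push origin v2.4.0

    # Option B: cut a release branch so the next iteration can continue on main
    git checkout -b release/2.5 main
    git push -u origin release/2.5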
With respect to your point 1, I would not recommend using two workspaces, since you are already running "two workspaces" internally with two branches. The approach is not that bad, just a little hard to do in TFVC, meaning the old server-based version control inside TFS. I do hope you're planning to merge everything from dev to main at some point in time.
In general, your approach is a better match for Git as source control, and especially gitflow http://nvie.com/posts/a-successful-git-branching-model/ as a branching model. We are running that with success within my team.
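Very condensed, the gitflow model from that post looks like this (branch names follow the post; the feature name is made up):

    git checkout -b develop master                    # ongoing work integrates in develop
    git checkout -b feature/new-pricing develop       # features branch off develop
    # ...work on the feature, then:
    git checkout develop && git merge --no-ff feature/new-pricing
    git checkout -b release/1.3 develop               # stabilize and QA on a release branch
    git checkout master && git merge --no-ff release/1.3 && git tag -a v1.3 -m "Release 1.3"
    git checkout develop && git merge --no-ff release/1.3   # fold release fixes back into develop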
You can migrate from TFVC to git using git-tf http://git-tfs.com/
If you are looking for a cheap model that scales out with build servers and such, I would recommend looking at Visual Studio Team Services https://www.visualstudio.com/en-us/products/visual-studio-team-services-vs.aspx as well to host and build your code. There you also have Release Management integrated at no cost (up to 5 people; free for all Visual Studio subscribers).
I'm currently trying to setup some code analysis for my team however I found our release process does not mesh well with the tools I have looked into (CodeClimate and SonarQube). Both tools require a default branch to track the state or "grade" of your repository over time. They watch the default branch and analyze pull requests to that branch. However, our current release process involves a new branch for each release which we merge into master after the branch is released. We could use master as our default branch but we would not see the analysis until after the code is out which is not ideal. As I am not in a position to change our process, I am tasked with finding a tool or work around to get an analysis tool to work with our process. The only work around I could think of is two pull requests. One to the release branch as usual, and another to master just to trigger the analysis. The master PR would then be closed once the issues found in the analysis are fixed. This is far from ideal and I come to my favorite forum looking for help and experience.
Code is in Github.
Primary language to analyze is PHP, bonus languages are CSS, JS, and Java.
It looks like Codacy could be a good alternative.
You can enable the analysis on all the branches of your project. All the pull requests to an analysed branch will be analysed, even if it's not the default branch.
It supports all the required languages: PHP, JS, CSS, Java and more. It also has a nice auto-comment integration with Github to help you save more time in code reviews.
We're considering a switch from SVN to a distributed VCS at my workplace.
I'm familiar with all the reasons for wanting to use a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen that much that's compelling in terms of managing software releases. Here's our release process:
Discover what changes are available for merging.
Run a query to find the defects/tickets associated with these changes.
Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state in order to be merged into a release branch.
Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes. If a change isn't absolutely necessary, it doesn't get merged.
Merge available changes, preferably in chronological order. We group changes together if they're associated with the same ticket.
Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again.
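For context, with svnmerge.py the steps above map onto something like the following session (revision numbers are made up, and your exact invocations may differ):

    cd release-branch-wc
    svnmerge.py avail                      # 1. what changes are available for merging
    svnmerge.py merge -r 4501,4503-4504    # 5. merge the accepted changes, grouped by ticket
    svnmerge.py block -r 4502              # 6. block the unwanted change so it stops showing up
    svn commit -F svnmerge-commit-message.txt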
Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long.
I've been messing around with git, mercurial and plastic, and as far as I can tell none of them address this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them for juggling multiple, very different products from the same codebase.
For example, cherry-picking seems to be an afterthought in mercurial. (You have to use the 'transplant' extension, which is disabled by default). After you cherry-pick a change into a branch it still shows up as an available integration. Cherry-picking breaks the mercurial way of working.
DVCS seems to be better suited for feature branches. There's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? And how do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos.
I'm torn because the coder in me wants DVCS for day-to-day work. I really want it. But I fear the day when I have to put on the release manager hat and sort out what needs to be merged and what doesn't. I want to write code, I don't want to be a merge monkey.
You really want to be using git in this situation, because it is so vastly superior when it comes to merging and release management. Git allows for a signoff process for changes to go to release; it actually provides support for multiple layers of release management, precisely because that is how Linux is managed.
Simply put each release in a branch. Instead of blocking out changes you don't want, accept only those that you do, by only signing off for release those changes that are going into a release.
Git also allows you to cherry-pick a collection of changes into a single patch to be sent upstream, so you don't have to carry all the 'oops, that didn't quite work' patches into the release branch or repository, only nice clean feature or fix patches.
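As a sketch of "accept only what you sign off" (the SHAs and names are placeholders for the accepted tickets' commits):

    # build the release branch from explicitly accepted changes only
    git checkout -b release/1.2 main
    git cherry-pick 1111111 2222222
    git tag -a v1.2.0 -m "Release 1.2.0"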
We are a new Agile shop and we are encountering an issue that I hope others have seen.
In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create Feature branches off the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up from the Trunk as needed to stay integrated as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile.
My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle and complete the validation, regression testing, and handling of all configuration issues before pushing to the trunk. Once reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push back on the Trunk validation. The argument is that the developers can merge the code and not need the QA validation steps because they already completed the work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of a build and regression testing, to have the developer diff the Feature branch against the newly merged Trunk. That process, in their minds, would replace the regression testing I asked for. So what do you require when you reintegrate back to the Trunk? What issues will we encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of integrating the branches?
Thanks for any input.
LoneCM
I don't see why a "build and automated smoke test" should be any substantial extra work, or be accounted as a "cost of staying Agile" -- given it's all automated, of course. (Indeed I normally would consider a continuous build/integration and automated testing to be part of the set of best practices associated with Agile development). diff in general doesn't cut it as there may be files that will never compare equal (binary resources whose format includes time stamps, for example) and only testing can confirm that the differences, if any, do not impair correctness.
If multiple developers (presumably pairs thereof, if you do pair programming) can all be committing to trunk independently (i.e., no "locking of trunk" allowed), then the kind of minimal sanity check you advocate is highly recommended -- whether the underlying development be "agile", totally chaotic, or whatever else;-).
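For what it's worth, that minimal sanity check can be as small as this, assuming the build and smoke tests are already scripted (branch and script names are placeholders):

    # reintegrate the feature branch, then require the merge result to build and pass smoke tests
    git checkout trunk && git pull
    git merge --no-ff feature/checkout-redesign
    ./build.sh && ./run_smoke_tests.sh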