Tree Stable Timer in Bitbucket Pipelines?

I'm investigating the possibility of replacing a complex Buildbot-based CI server with Bitbucket Pipelines for running tests and deployments for a Python-based application.
I'm new to Pipelines, but it seems to have almost all the features that Buildbot has. However, the one feature I can't find is what Buildbot terms a "tree stable timer". By default, both Buildbot and Pipelines run a build whenever a new commit is found. However, because my build takes about 2 hours to run (I have a lot of tests), if someone pushes up several commits to a branch all within a couple of minutes, I only want the CI server to run a build for the most recent commit. In Buildbot, this is done by setting the time value for the "tree stable timer" parameter.
I've checked the Pipelines docs but can't find anything quite matching this feature; I might not be looking in the right place, though. How would I do this in Pipelines?
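For reference, the Buildbot scheduler I'd be replacing looks roughly like this (a sketch of a master.cfg excerpt; the branch and builder names are just placeholders). The treeStableTimer means Buildbot waits until the branch has been quiet for that many seconds and then builds only the newest commit:

    from buildbot.plugins import schedulers, util

    c = BuildmasterConfig = {}  # as in a standard master.cfg

    c['schedulers'] = [
        schedulers.SingleBranchScheduler(
            name="all-tests",
            change_filter=util.ChangeFilter(branch="master"),
            treeStableTimer=5 * 60,            # wait for 5 quiet minutes before building
            builderNames=["full-test-suite"],  # placeholder builder name
        )
    ]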

Related

Advent of slow steps in CI pipeline in CD environment and minimising their impact

Running a fast test suite as part of an artifact's CI pipeline is pretty commonplace. Recently I've also seen encouragement to add slow steps to CI pipelines: performance tests, backward compatibility tests, whole-environment tests, etc.
In my experience these slow steps have never been made part of the CI pipeline but instead are practiced infrequently and manually.
I can see the benefits, but I also struggle to fit them into a continuous delivery environment. Not that I think CD is just about pushing code to production in a hurry, but there is a time element to it.
If I had to give it a go, I'd set up a fast pipeline that runs a fast test suite and builds an artifact deployable to all non-production environments, followed by a slow pipeline that runs in parallel. On completion, the slow pipeline would mark the artifact as production-deployable and possibly deploy it to production.
Tooling is key, I think, and I haven't come across anything that manages an artifact's deployability lifecycle so far.
I'd like to know what approaches you have encountered, in which situations they worked and why, and where they didn't work so well.

Can you host a bitbucket pipeline internally?

We are currently using Bitbucket Cloud to host our Grails app repository. We want to set up some pipelines to do things like run unit tests and make sure the app compiles before a branch can be merged to master.
I know this can pretty easily be done by letting them host the pipeline and committing a well-written pipeline file. However, there is a problem: our app is very large, and even on brand-new MacBook Pros it takes 20 minutes to compile; on some older ones it can take 2 hours or more. Grails, thankfully, only recompiles files that have changed since the last compilation. However, this can't be leveraged in a Bitbucket pipeline that works off a fresh pull of the app every time it runs.
My solution to this was to set up a pipeline to run for us internally, so that it already has the app pulled and just switches to the desired branch and runs from there. This might still take time when switching between two very divergent branches, but it's better than compiling from scratch every time.
I can't seem to find any documentation on hosting a pipeline internally with Bitbucket Cloud. Does anyone know if this is possible, and if so, where the documentation is?
It would also be acceptable to find a solution to the long compilation problem itself with Bitbucket-hosted pipelines.
A few weeks ago, self-hosted runners were made available in public beta. Here are the details: https://community.atlassian.com/t5/Bitbucket-Pipelines-articles/Bitbucket-Pipelines-Runners-is-now-in-open-beta/ba-p/1691022
Additionally, if you're looking to retain some of your files from one build to the next to save doing the same work over and over again, have a look at caches: https://support.atlassian.com/bitbucket-cloud/docs/cache-dependencies/ There are some built-in ones that you can use, but you can define your own custom ones as well. Essentially, it's just a way of preserving the contents of a directory for a future build.
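For illustration only, here is a rough bitbucket-pipelines.yml sketch combining a self-hosted runner with a custom cache; the cache path, the extra runner label, and the Gradle command are assumptions for a Grails/Gradle build like yours:

    definitions:
      caches:
        gradle-wrapper: ~/.gradle/wrapper    # custom cache: keep the Gradle wrapper between builds
    pipelines:
      default:
        - step:
            runs-on:
              - self.hosted                  # route this step to your own runner
              - linux
            caches:
              - gradle                       # built-in cache for ~/.gradle/caches
              - gradle-wrapper               # the custom cache defined above
            script:
              - ./gradlew check              # assumed compile/test command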

Composite build without revision synchronization for dependencies in TeamCity

I'm trying to set up a pipeline for running all kinds of tests against pull requests into my repository. The repository is a Git repo hosted on Bitbucket Server, and TeamCity is Enterprise 2019.1.5 (build 66605). There are a few key aspects to the task:
1. There are a lot of tests. One way or another, the tests should run in parallel to achieve a reasonable execution time. The tests are already partitioned into separate TeamCity build configurations, each with a good enough execution time.
2. There are far fewer build agents available for the task, so it's not impossible for a particular build to spend quite some time (up to 1-2 hours) in the build queue.
3. The result of the testing should be reported to Bitbucket as a single aggregate value, i.e. if there are, say, 3 individual builds from p.1, with two passing and one failing, then Bitbucket should receive a single "failed" build status.
4. Ideally, the pipeline should be triggered by a feature branch change (refs/pull-requests/123/from in Bitbucket lingo), but check out the merge of the feature branch into the target branch (refs/pull-requests/123/merge in Bitbucket lingo).
Given the above requirements, I experimented with a Composite Build Configuration (https://www.jetbrains.com/help/teamcity/composite-build-configuration.html), as it seemed a perfect fit for the job. So I set up a single composite build with the appropriate builds from p.1 as snapshot dependencies, plus the "Pull Requests" and "Commit status publisher" build features. It works perfectly, except for one thing...
The only thing I can't seem to work around is the fact that the VCS roots in the dependencies collect changes when the build chain is added to the build queue. This means that, because of p.2 (non-negligible maximum time spent in the build queue), some builds end up running against slightly dated sources. Ideally, I would like to run the builds against the latest sources.
So my question is whether there's any way to disable revision synchronization for dependencies. Or maybe I could approach the whole problem in some completely different way, without using snapshot dependencies?
Cross-posted at community forum: https://teamcity-support.jetbrains.com/hc/en-us/community/posts/360006745840

Multiple feature branches and continuous integration

I've been doing some reading about continuous integration recently, and there is a scenario that could occur which I don't understand how to deal with appropriately.
We have a stable mainline/trunk branch and create branches for features. Each developer will keep their own feature branches up to date by merging from trunk into their branch on a regular basis. However, it is entirely possible that two or more feature branches could be created and worked on over a period of several weeks or months. In this time, many releases of the software could be deployed. This is where my confusion arises.
It is very likely that changes for one feature branch will cause merge conflicts with other feature branches. CI suggests you should merge into trunk at least daily which would resolve the conflicts quickly. However, you may not want to merge the feature code into trunk because it may not be finished or you may not want that feature available in the next release. So, how do you deal with this scenario and still follow CI principles of daily code integration?
There are no feature branches in proper CI. Use feature toggles instead.
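To make the idea concrete, here is a minimal, hypothetical feature-toggle sketch (the flag name and functions are invented for illustration): the unfinished code is merged to trunk continuously but stays dark in production until the flag is flipped.

    # Hypothetical feature-toggle sketch: trunk always contains the new code,
    # but behaviour doesn't change until the flag is switched on.
    FEATURES = {
        "new_search": False,   # flip to True once the feature is 'done'
    }

    def is_enabled(name: str) -> bool:
        return FEATURES.get(name, False)

    def search(query: str) -> str:
        if is_enabled("new_search"):
            return "new engine result for " + query   # unfinished path, hidden by default
        return "old engine result for " + query       # existing behaviour keeps shipping

    print(search("ci"))   # uses the old engine until the toggle is flipped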
The idea explained more fully in this article is to merge from the trunk/release branch to feature branches daily, but only merge back in the other direction once a feature meets your definition of 'done'.
Code written by one feature team will be pushed into the trunk once it's complete, and will be 'distributed' to the other teams, where conflicts can be dealt with, as part of the daily merge process.
This doesn't go as far as satisfying Nick's desire for a version control system that can be used as a backup tool, unless the changes being made are small enough that they can be committed to the feature branch within a timeframe where the risk of losing your work is acceptable.
I personally don't try to reintegrate code into the release branch before it's done, and although I've never really tried, I'm sure building feature toggles in for unfinished work has its own issues.
I think they mean merging mainline into the feature branch, not the other way 'round. This way, the feature branch will not deviate from mainline too much, and be kept in an easily mergeable state.
The git folks do the same thing by rebasing feature branches on top of the master branch before submitting a feature.
In my experience with CI, you should keep your feature branches up to date with the mainline changes, as others have suggested. This has been working for me for several releases. If you are using Subversion, make sure you merge with merge history enabled. This way, when you merge your changes back to the mainline, it will only look like you are merging the feature changes, rather than resolving conflicts your feature might have with the mainline. If you are using a more advanced VCS like Git, the first merge would be a rebase and the second a merge.
There are tools that can help you get this done more smoothly, like Feature Branches with Bamboo.
Committing feature branches back into the mainline, and doing so OFTEN, is an essential part of Continuous Integration. For a thorough breakdown, see this article.
There are now some good resources showing how to combine both CI and feature branches. Bamboo or Feature Branch Notifier are some places to look.
And this is another, quite long, article showing the pros of so-called distributed CI. Below is one excerpt explaining the benefits:
Distributed CI has the advantage for Continuous Deployment because it keeps a clean and stable Mainline branch that can always be deployed to Production. During a Centralized CI process, an unstable Mainline will exist if code does not integrate properly (broken build) or if there is unfinished work integrated. This works quite well with iteration release planning, but creates a bottleneck for Continuous Deployment. The direct line from developer branch to Production must be kept clean in CD, Distributed CI does this by only allowing Production ready code to be put into the Mainline.
One thing that can still be challenging is keeping the branch build isolated so that it doesn't pollute your repository of binaries by pushing its branch builds to it. Bamboo seems to address that, but I'm not sure it's as easy with Jenkins.

What's the workflow of Continuous Integration With Hudson?

I was referred to Hudson today.
I have heard about continuous integration before, but I have no idea what the heck a CI server is.
Hudson is really easy to install on Ubuntu, and in several minutes I managed to set up an instance of it.
But I don't quite understand the workflow of a CI server, or how I'm supposed to use it.
Please tell me if you have experience with CI; thanks in advance.
Edit:
I am currently using Mercurial as my SCM, and I wonder what the right way to use it with Hudson is.
I have installed the Mercurial plugin for Hudson and created a new job with a local repository. When I commit to the repository, the Hudson job builds with the latest version of my source code.
If I were using a remote repository instead, what would the workflow be like?
Is it something like the following?
Set up a Hudson job with the repository
Developer makes a local clone of the repository
Developer commits and pushes changes
The remote repository updates with the incoming changeset
Run a Hudson build
There may be something I've misunderstood; please help me point it out.
Continuous Integration is the process of "integrating software" continuously i.e. as frequently as possible (ultimately after each set of changes) to avoid any big-bang integration and all subsequent problems by getting immediate feedback.
To implement Continuous Integration, you first need to automate the build of your software (where build means of course compiling sources, packaging them, but also compiling tests, running the tests, running quality checks, etc, anything that will help to get feedback on the health of your code). Then you need to trigger the build on the latest version of the sources on a particular event (a change in the repository, a temporal event), to generate reports and to send notifications upon failure (by mail, twitter, etc).
And this is precisely the responsibility of a CI engine: offering trigger mechanisms, being able to get the latest version of the sources, running the build, generating and publishing reports, sending notifications. CI engines do implement this.
And because running a build is CPU and Disk intensive, CI engines usually run on a dedicated machine (or even a farm of machines if you want to build lots of projects).
Back to your question now. Once you've got Hudson running, configure it (Manage Hudson > Configure System): set up the JDK, build tools, etc. Then set up a Hudson job and follow the steps: configure the location of the source repository, the build tool, the trigger, a notification channel, and you're done (you can do more complex things, but that's a start).
For more details on the setup, check:
The official Use Hudson guide for more details. << START HERE
Continuous Integration with Hudson - Tutorial.
Spot defects early with Continuous Integration.
Martin Fowler's overview of continuous integration is one of the canonical references. In my opinion, using automation to make sure your code base is healthy is one of the most useful things that you can set up.
Update: Sorry that I didn't have much time earlier to expand on my reply. @Pascal_Thivent is right that in order to use CI effectively, you need to be able to automate your builds, tests, etc. CI is actually a good forcing function for this. For me, it's one of those little warning flags if I start to think that it would be too painful to put a build into Hudson. It means that something is not quite right.
What I like about Hudson is that it's flexible enough to accommodate different workflows. We use it for both builds / unit tests and releases. And it eliminates a lot of the worry about certain release procedures only working in one person's environment.
What I don't like about Hudson is that it is occasionally unstable when new builds break plugins. I've had a couple of upgrades (2 out of 10 or so) go bad because of incompatibilities. I do two things now:
I never upgrade my team's Hudson server to the latest and greatest right away. I generally only upgrade when there are significant new features, or bug fixes.
I now have a basic Hudson instance set up with all my plugins on a virtual machine with some dummy builds that I fire up to test out any new upgrades before doing it on the public server.
