Both GitHub Actions and Bitbucket Pipelines seem to serve similar functions at a surface level. Is it trivial to migrate the YAML for Actions into a Pipeline, or do they operate fundamentally differently?
For example: running something simple like SuperLinter (used on Github Actions) on Bitbucket Pipelines.
I've searched for examples or explanations of the migration process, but with little success so far. Perhaps they're just not compatible, or am I missing something? This is my first time using Bitbucket over GitHub. Any resources and tips are welcome.
They are absolutely unrelated CI systems, and there is no straightforward migration path from one to the other.
Both systems define their pipelines in YAML, just like GitLab CI, but the only thing that carries over is your YAML knowledge itself (syntax and anchors).
As CI systems, both will start some kind of agent to run a list of instructions, a script, so you can probably reuse most of the ideas in your scripts. But the execution environments are very different, so be ready to write plenty of tweaks, as Benjamin commented.
E.g. forget about that SuperLinter action; Bitbucket Pipelines instead has a concept of pipes, which serve a similar purpose but are implemented with a rather different approach.
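For illustration, this is roughly what invoking a pipe looks like in bitbucket-pipelines.yml. The pipe name and version tag here are only illustrative; browse the pipes marketplace for what actually exists:

```yaml
# bitbucket-pipelines.yml - a pipe is a prepackaged step you invoke,
# not a reusable action you compose
pipelines:
  default:
    - step:
        name: Scan for leaked secrets
        script:
          - pipe: atlassian/git-secrets-scan:0.5.1  # illustrative name/version
```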
Another key difference: GHA runs on VMs, and you configure whatever you need with "setup" actions. BBP runs on Docker containers that should already feature most of the runtime and tooling you need, since "setup pipes" cannot exist. So, unless you want to maintain and keep updated a crazy number of images full of tooling and language runtimes, you will end up installing tooling on every run (via apt, yum, apk, wget...): https://stackoverflow.com/a/72959639/11715259
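A minimal sketch of that install-per-run pattern, assuming a Debian-based image; the base image, package, and commands are placeholders for whatever your build actually needs:

```yaml
# bitbucket-pipelines.yml - pick an image with your main runtime baked in,
# then apt-install the missing tooling at the start of every run
image: node:20

pipelines:
  default:
    - step:
        script:
          - apt-get update && apt-get install -y shellcheck  # extra tooling, every run
          - shellcheck scripts/*.sh
          - npm ci && npm test
```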
In my organization, we are in a transition phase. Big projects are getting split up into microservices. While this is nice for bringing complexity down, the downside is that the parts which should be the same everywhere become more work.
For example, I would like every project to have some tools in the CI pipeline:
Software Composition Analysis (SCA)
Static Application Security Testing (SAST)
Unit Tests
What the tools are might differ from project to project (essentially by programming language). This might also change over time: for example, one might want to add a type checker later. Once the type checker is there, one might enforce some of its settings (while keeping others flexible, to be changed by the microservices).
Is it possible to have a shared template for a CI pipeline in GitLab? I'm not looking for something people can copy and paste. I'm looking for a solution that allows me to adjust the CI pipelines of multiple projects at once, without causing more work for me when more microservices are added (the changes don't have to be applied instantly).
Yes, you can.
You may develop one or several templates (say, a Java template (build & test), a Python one (build & test), a SonarQube one (SAST), a Kubernetes one (deploy), an AWS one (deploy)) and then let developers/projects include the ones they need to assemble their pipeline.
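As a minimal sketch, each microservice's .gitlab-ci.yml can pull templates from a shared repository via include; the project path and file names below are made up:

```yaml
# .gitlab-ci.yml in one microservice - assumes a hypothetical
# "my-group/ci-templates" project holding the shared job definitions
include:
  - project: 'my-group/ci-templates'
    ref: main
    file:
      - '/templates/java-build-test.yml'
      - '/templates/sonarqube-sast.yml'

# a job defined in a template can still be tweaked locally, e.g. a
# hypothetical "unit-tests" job from the Java template:
unit-tests:
  variables:
    GRADLE_OPTS: "-Dorg.gradle.daemon=false"
```

Updating a template in the shared repository then changes the pipeline of every project that includes it, taking effect on each project's next run.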
We are currently using bitbucket cloud to host our grails-app repository. We want to set up some pipelines to do things like run unit tests and make sure the app compiles before being able to merge a branch to master.
I know this can pretty easily be done by letting them host the pipeline and committing a well-written pipeline file. The problem, however, is that our app is very large: even on brand-new MacBook Pros it takes 20 minutes to compile, and on some older ones it can take 2 hours or more. Grails, thankfully, only compiles files that have changed since the last compilation, but that can't be exploited in a Bitbucket pipeline that works off a fresh pull of the app every time it runs.
My idea was to set up a pipeline that runs internally for us, so that it already has the app pulled and can just switch to the desired branch and run from there. This might still take time when switching between two very diverged branches, but it's better than compiling from scratch every time.
I can't seem to find any documentation on hosting a pipeline internally with Bitbucket Cloud. Does anyone know if this is possible and, if so, where the documentation for it is?
It would also be acceptable to find a solution to the long compilation problem itself with Bitbucket-hosted pipelines.
A few weeks ago, self-hosted runners were made available as a public beta. Here are the details: https://community.atlassian.com/t5/Bitbucket-Pipelines-articles/Bitbucket-Pipelines-Runners-is-now-in-open-beta/ba-p/1691022
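A minimal sketch of targeting such a runner from bitbucket-pipelines.yml; the Grails wrapper command is an assumption about your setup:

```yaml
pipelines:
  default:
    - step:
        runs-on:          # route this step to a self-hosted runner
          - self.hosted
          - linux         # matched against the labels on your runner
        script:
          - ./grailsw test-app
```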
Additionally, if you're looking to retain some of your files from one build to the next, to save doing the same work over and over again, have a look at caches: https://support.atlassian.com/bitbucket-cloud/docs/cache-dependencies/. There are some built-in ones that you can use, but you can define your own custom ones as well. Essentially it's just a way of preserving the contents of a directory for a future build.
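For instance, a sketch of a custom cache alongside the built-in Gradle one; the cached path is an assumption about where Grails keeps its incremental state on your build image:

```yaml
definitions:
  caches:
    grails-work: ~/.grails   # hypothetical custom cache for Grails' work files
pipelines:
  default:
    - step:
        caches:
          - gradle           # built-in cache for ~/.gradle/caches
          - grails-work      # custom cache defined above
        script:
          - ./grailsw test-app
```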
Is there an abstraction for defining continuous integration pipelines which can then produce individual config files for particular CI providers? This would be handy especially for projects intended to serve as boilerplate.
Currently I am finding myself needing to manually write and sync both a .gitlab-ci.yml and a .github/workflows/ci.yml.
This is an interesting question. Unless you can abstract all your CI logic into shell scripts, from what I can see there would be a lot of periodic porting work between the different CI providers.
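With the shell-script approach, each provider's YAML shrinks to a thin wrapper around a shared script. A sketch of the GitLab side, assuming a hypothetical ./ci/run-tests.sh that holds all the real logic; the GitHub Actions workflow step would be an equally thin `run: ./ci/run-tests.sh`:

```yaml
# .gitlab-ci.yml - the provider-specific file stays tiny; only the shared
# script ever needs real maintenance
test:
  image: node:20          # assumed runtime image
  script:
    - ./ci/run-tests.sh
```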
Also, each CI provider has its own ideology of the perfect build pipeline, as well as its own predefined setup.
With that being said, I would love to see some utility tools that help me migrate the scripts and converge my CI setup into the GitHub Actions world.
I'm new to continuous integration. I'm interested in systems that would be able to test whether the changes I make break the compilation of the code on a list of different build types.
Properties of the code (which I will call CodeA):
1.) Has dependencies on numerical libraries like SUNDIALS and PETSc
2.) Has dependencies on two other codes (CodeB and CodeC), which themselves have dependencies on things like HDF5, MPI, etc.
Is it feasible to use the CI feature of GitLab to set up a system that would be able to build CodeA (linked with CodeB and CodeC) on Linux machines with different system flavors (Ubuntu, OpenSuSe, RHEL, Fedora, etc)?
Most of the examples I've found of using GitLab for CI have been things like testing whether HelloWorld.cpp still compiles when lines are changed in it. Just simple builds with very little external dependency management/integration.
So it sounds like you've got a few really great questions in here. I'll break them apart as I see them; let me know if this fully answers your question.
How can I build on different flavors of Linux?
The approach I would take would be to use Docker images, as Connor Shea mentioned in the comment. This allows you to keep using generic build agents in your CI system while testing across multiple platforms.
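A sketch of that idea in GitLab CI, fanning one build job out across several distro images with a matrix; the image tags, the hypothetical ./ci/install-deps.sh (installing SUNDIALS, PETSc, HDF5, MPI per distro), and the CMake invocation are all assumptions about CodeA:

```yaml
# .gitlab-ci.yml - one job, run once per distro image
build:
  parallel:
    matrix:
      - IMAGE: ["ubuntu:22.04", "opensuse/leap:15.5", "fedora:39"]
  image: $IMAGE
  script:
    - ./ci/install-deps.sh                 # hypothetical per-distro dependency setup
    - cmake -B build && cmake --build build
```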
Another option would be to look at how you're distributing your application and see if you could use a snap package. That would allow you to not have to worry about the environment you're deploying to.
How do I deal with dependencies?
This is where it's really useful to consider an artifact repository. Both JFrog's Artifactory and Sonatype's Nexus work wonders here. They allow you to hook up the build pipeline of any app or library and push an artifact that the others can consume. Access can be locked down with a set of credentials that you supply to your build.
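As a rough sketch, a pipeline job for CodeB could push its build output to a repository manager, and CodeA's pipeline would then download that artifact instead of rebuilding CodeB; the Nexus URL, repository name, artifact path, and credential variables below are all made up:

```yaml
# .gitlab-ci.yml job for CodeB - publish the built library for CodeA to consume
publish:
  image: alpine:3.19
  script:
    - apk add --no-cache curl
    # upload via HTTP PUT to a hypothetical Nexus raw repository
    - curl --fail -u "$NEXUS_USER:$NEXUS_PASS" --upload-file build/libcodeb.a "https://nexus.example.com/repository/raw-artifacts/codeb/1.2.0/libcodeb.a"
```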
I hope this helped.
I've been dealing with the problem of scaling CI at my company, and at the same time trying to figure out which approach to take when it comes to CI and multiple branches. There is a similar question at Stack Overflow, Multiple feature branches and continuous integration. I've started a new one because I'd like to get more of a discussion and provide some analysis in the question.
So far I've found 2 main approaches that I can take (or maybe there are others?).
Multiple sets of jobs (talking about Jenkins/Hudson here) per branch
Write tooling to manage the extra jobs
Create/modify/delete jobs in bulk
Custom settings for each job per branch (SCM URL, dependency-management repo duplications)
Some examples of people tackling this problem with shell tools, Ant scripts and the Jenkins CLI. See:
http://jenkins.361315.n4.nabble.com/Multiple-branches-best-practice-td2306578.html
http://jenkins.361315.n4.nabble.com/Is-it-possible-to-handle-multiple-branches-where-some-jobs-should-run-on-each-one-without-duplicatin-td954729.html
http://jenkins.361315.n4.nabble.com/Parallel-development-with-branches-td1013013.html
Configure or create Hudson jobs automatically
Will cause more load on your CI cluster
Feedback cycle for devs slows down (if the infrastructure cannot handle the new load)
Multiple sets of jobs for 2 branches (dev & stable)
Manage the two sets manually (if you change the configuration of a job, be sure to change it for the other branch too)
A PITA, but at least there are only a few to manage
Other extra branches won't get a full test suite before they get pushed to dev
Unsatisfied devs. Why should a dev care about CI scaling problems? They have a simple request: "when I branch, I would like to test my code". Simple.
So it seems that if I want to provide devs with CI for their own custom branches, I need special tooling for Jenkins (API, shell scripts or something?) and to handle the scaling. Or I can tell them to merge to dev more often and live without CI on custom branches. Which one would you take, or are there other options?
When you talk about scaling CI you're really talking about scaling the use of your CI server to handle all your feature branches along with your mainline. Initially this looks like a good approach as the developers in a branch get all the advantages of the automated testing that the CI jobs include. However, you run into problems managing the CI server jobs (like you have discovered) and more importantly, you aren't really doing CI. Yes, you are using a CI server, but you aren't continuously integrating the code from all of your developers.
Performing real CI means that all of your developers are committing regularly to the mainline. Easy to say, but the hard part is doing it without breaking your application. I highly recommend you look at Continuous Delivery, especially the Keeping Your Application Releasable section in Chapter 13: Managing Components and Dependencies. The main points are:
Hide new functionality until it's finished (a.k.a. Feature Toggles).
Make all changes incrementally as a series of small changes, each of which is releasable.
Use branch by abstraction to make large-scale changes to the codebase.
Use components to decouple parts of your application that change at different rates.
They are pretty self-explanatory, except branch by abstraction. This is just a fancy term for:
Create an abstraction over the part of the system that you need to change.
Refactor the rest of the system to use the abstraction layer.
Create a new implementation, which is not part of the production code path until complete.
Update your abstraction layer to delegate to your new implementation.
Remove the old implementation.
Remove the abstraction layer if it is no longer appropriate.
The following paragraph from the Branches, Streams, and Continuous Integration section in Chapter 14: Advanced Version Control summarises the impacts.
The incremental approach certainly requires more discipline and care - and indeed more creativity - than creating a branch and diving gung-ho into re-architecting and developing new functionality. But it significantly reduces the risk of your changes breaking the application, and will save you and your team a great deal of time merging, fixing breakages, and getting your application into a deployable state.
It takes quite a mind shift to give up feature branches, and you will always get resistance. In my experience this resistance is based on developers not feeling safe committing code to the mainline, and this is a reasonable concern. That in turn usually stems from a lack of knowledge, confidence or experience with the techniques listed above, and possibly from a lack of confidence in your automated tests. The former can be solved with training and developer support. The latter is a far more difficult problem to deal with; however, branching doesn't provide any extra real safety, it just defers the problem until the developers feel confident enough with their code.
I would set up separate jobs for each branch. I've done this before, and it isn't hard to manage and set up if you've configured Hudson/Jenkins correctly. A quick way to create multiple jobs is to copy from an existing job that has similar requirements and modify it as needed. I'm not sure if you want to allow each developer to set up their own jobs for their own branches, but it isn't much work for one person (i.e. a build manager) to manage. Once the custom branches have been merged into stable branches, the corresponding jobs can be removed when they are no longer necessary.
If you're worried about the load on the CI server, you could set up separate instances of the CI or even separate slaves to help balance the load across multiple servers. Make sure that the server you are running Hudson/Jenkins on is adequate. I've used Apache Tomcat and just had to ensure that it had enough memory and processing power to process the build queue.
It's important to be clear on what you want to achieve using CI and then figure out a way to implement it without much manual effort or duplication. There's nothing wrong with using other external tools or scripts that are executed by your CI server that help simplify your overall build management process.
I would choose dev + stable branches. And if you still want custom branches and are afraid of the load, then why not move those custom ones to the cloud and let developers manage them themselves, e.g. http://cloudbees.com/dev.cb
This is the company where Kohsuke is now.
There is Eclipse tooling as well, so if you are on Eclipse, you will have it tightly integrated right into your dev environment.
Actually, what is really problematic is build isolation with feature branches. In our company we have a set of separate Maven projects that are all part of a larger distribution. These projects are maintained by different teams, but for each distribution all projects need to be released. A feature branch may now overlap from one project to another, and that's when build isolation gets painful. There are several solutions we've tried:
create separate snapshot repositories in Nexus for each feature branch
share local repositories on dedicated slaves
use the repository-server-plugin with upstream repositories
build all within one job with one private repository
As a matter of fact, the last solution is the most promising. All the other solutions fall short in one way or another. Together with the Job DSL plugin, it is easy to set up a new feature branch: simply copy and paste the Groovy script, adapt the branches, and let the seed job create the new jobs. Make sure that the seed job removes unmanaged jobs. Then you can easily scale with feature branches across different Maven projects.
But as Tom said above, it would be nicer to overcome the need for feature branches and teach devs to integrate cleanly. That is a longer process, though, and the outcome is not certain with many legacy system parts you won't touch any more.
my 2 cents