Best practice for [python] CI/CD workflow

I am looking for recommendations for a CI/CD workflow. My pipeline uses GitHub Actions, but that is not that important.
I currently have a single workflow that, on each push/PR:
checks out the code
builds the wheels for three platforms
runs the tests on all three platforms
deploys the artifacts to PyPI
generates and deploys the docs to GitHub Pages
This is obviously inefficient (generating a release for each push is questionable :) ).
I want to split the workflow into three:
one for building and testing on a single platform,
another workflow for generating the three artifacts, called once a release is decided,
one for the docs that could be triggered once the artifacts are generated (not sure how, though).
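Roughly, I imagine something like the sketch below. This is only an illustration: the action pins (actions/checkout, actions/setup-python, pypa/cibuildwheel, pypa/gh-action-pypi-publish, peaceiris/actions-gh-pages), the pip extras and the docs build command are placeholders for whatever the project actually uses, and PyPI credentials / token permissions are omitted.

```yaml
# .github/workflows/ci.yml -- every push/PR: build and test on one platform only
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install .[test]   # assumed extras; adjust to the project
      - run: pytest
```

```yaml
# .github/workflows/release.yml -- runs only when a release is published
name: Release
on:
  release:
    types: [published]
jobs:
  wheels:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: pypa/cibuildwheel@v2       # placeholder pin; builds the platform wheels into wheelhouse/
      - uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ matrix.os }}
          path: wheelhouse/*.whl
  publish:
    needs: wheels                        # waits for all three platform builds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          path: dist
          merge-multiple: true
      - uses: pypa/gh-action-pypi-publish@release/v1   # needs PyPI credentials or trusted publishing
  docs:
    needs: publish                       # docs only run once the artifacts exist
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install .[docs] && make -C docs html   # made-up docs build command
      - uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: docs/_build/html
```

For the docs part, I guess either a needs: job inside the release workflow (as sketched) or a separate workflow triggered with workflow_run when the release workflow completes would work, but I'm not sure which is better.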
Happy to hear what you think the best workflow(s) would be.

Related

How to sync CI configs for GitLab, GitHub, Travis, Circle, etc.?

Is there an abstraction for defining continuous integration pipelines which can then produce individual config files for particular CI providers? This would be handy especially for projects intended to serve as boilerplate.
Currently I am finding myself needing to manually write and sync both a .gitlab-ci.yml and a .github/workflows/ci.yml.
This is an interesting question. Unless you can abstract all your CI scripts into shell scripts, from what I can see there would be a lot of periodic porting between the different CI providers.
Also, each CI provider has its own ideology of the perfect build pipeline, as well as its own predefined setup.
With that being said, I would love to see some utility tools that help me migrate the scripts and converge my CI setup into the GitHub Actions world.
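To illustrate what abstracting into shell scripts looks like in practice (a minimal sketch; the ci/build.sh and ci/test.sh names are made up), both provider configs become thin wrappers around the same scripts, and only the wrappers need porting:

```yaml
# .gitlab-ci.yml -- thin wrapper around shared scripts
stages: [build, test]
build:
  stage: build
  script:
    - ./ci/build.sh
test:
  stage: test
  script:
    - ./ci/test.sh
```

```yaml
# .github/workflows/ci.yml -- same scripts, different runner
name: CI
on: [push, pull_request]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/build.sh
      - run: ./ci/test.sh
```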

Composite build without revision synchronization for dependencies in TeamCity

I'm trying to set up a pipeline for running all kinds of tests against pull requests into my repository. The repository is a Git repo hosted on Bitbucket Server, and TeamCity is Enterprise 2019.1.5 (build 66605). There are a few key aspects to the task:
There are a lot of tests. One way or another, the tests should run in parallel to achieve a reasonable execution time. The tests are already partitioned into separate TeamCity build configurations, each with a good enough execution time.
There are far fewer build agents available for the task, so it's not impossible for a particular build to spend quite some time (up to 1-2 hours) in the build queue.
The result of the testing should be reported to Bitbucket as a single aggregate value. I.e. if there are, say, 3 individual builds from point 1, with two passing and one failing, then Bitbucket should receive a single "failed" build status.
Ideally, the pipeline should be triggered by a feature branch change (refs/pull-requests/123/from in Bitbucket lingo), but check out the merge of the feature branch into the target branch (refs/pull-requests/123/merge in Bitbucket lingo).
Given the above requirements, I experimented with a Composite Build Configuration (https://www.jetbrains.com/help/teamcity/composite-build-configuration.html), as it seemed a perfect fit for the job. So I set up a single composite build with the builds from point 1 as snapshot dependencies, plus the "Pull Requests" and "Commit status publisher" build features. It works perfectly, except for one thing...
The only thing I cannot seem to work around is the fact that the VCS roots in the dependencies collect changes when the build chain is added to the build queue. This means that, because of point 2 (non-negligible maximum time spent in the build queue), some builds end up running against slightly outdated sources. Ideally, I would like to run the builds against the latest sources.
So my question is whether there's any way to disable revision synchronization for dependencies. Or maybe I could approach the whole problem in a completely different way, without using snapshot dependencies?
Cross-posted at community forum: https://teamcity-support.jetbrains.com/hc/en-us/community/posts/360006745840

General question on semantic versioning releases using CI / CD pipeline

I have yet to find clear guidance on semantically versioning software releases using Azure DevOps (Server) CI/CD.
My basic understanding is that I would set up a CI pipeline for our team so that we get the benefits of build failure notifications, test execution, code coverage, static code analysis and even more goodies.
The CD pipeline picks up where the CI pipeline leaves off, rolling out the artifacts to the different environments upon its completion.
This approach doesn't quite make sense to me, though. What about builds that fail because a dev didn't pay attention, or that the team wants to discard? Such builds won't find their way into production but might use up version numbers, leading to gaps in our versioning scheme.
What is your experience or approach for semantically versioning software releases using CI/CD pipelines? Do you cherry-pick builds? Do you have a separate build pipeline for building releases?
Builds that fail don't get deployed. For builds that succeed but fail QA or integration testing when deployed to lower environments, you can put approval gates so that someone has to approve the release before it proceeds to higher environments.
If you build version 1.0.1 of your application, deploy it to dev, and it's no good, that doesn't mean that version 1.0.1 doesn't exist. It exists. It was composed of specific code assets. It was bad, and your users will never see it, but that's fine. If the users see version 1.0.1 jump to 1.0.94, why does it matter?
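To make that concrete, one minimal Azure Pipelines sketch (assuming a YAML pipeline; the variable name is made up) keeps major.minor by hand and lets the revision counter auto-increment, gaps and all:

```yaml
# azure-pipelines.yml -- every build gets a number; bad builds simply never ship
name: $(majorMinor).$(Rev:r)        # run/build number, e.g. 1.0.37
variables:
  majorMinor: '1.0'                 # bumped by hand when the team decides on a new version
steps:
  - script: echo "Building $(Build.BuildNumber)"
    displayName: Show computed version
```

Whether a given number ever reaches users is then a release-gate decision, not a numbering one.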

Combining Jenkins Pipeline and UrbanCode Deploy to achieve Continuous Delivery?

The best part of UrbanCode Deploy is that it models a component-based application architecture and its deployment environment so well that everybody can understand it in 10 minutes. Very intuitive, flexible and powerful. I don't know if there is another tool that does this as well.
Jenkins Pipeline can orchestrate the continuous delivery workflow at a higher level, to include the build, test, etc.
Does it make sense?
There's a new UCD plugin for Jenkins that adds nice integrations with the Jenkins 2.0 pipeline. I'm going to poke the developers since there doesn't seem to be a nice video showing it, but there is documentation (and a link to the plugin) out here:
https://developer.ibm.com/urbancode/docs/jenkins-build-step-integration-with-ibm-urbancode-deploy/
I think the idea is that you can use Jenkins pipeline to govern the flow of a build through early test environments, while UCD owns the late test environments / production when the pipeline operates more at the snapshot level. Would love your feedback!
Today, in my production environment, I use Jenkins to manage my builds (like a "build pipeline" with some tests) and put all my build results into UrbanCode's CodeStation. UrbanCode does all my deployment work perfectly; the integration with Jenkins is beautiful, easy and fast. I have read some articles about the Jenkins delivery pipeline and do not recommend using it.
Check it out
https://www.thoughtworks.com/radar/tools

What is the best way to setup an integration testing server?

Setting up an integration server, I'm in doubt about the best approach regarding using multiple tasks to complete the build. Is it best to set everything up in just one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with an NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to do a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are working on a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step; NAntContrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds, failures, etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a check-in. Sometimes these can grow to the point where it can be a long time before a dev can see whether their check-in has broken the build.
What I do in these cases is create three builds (or maybe two):
A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, which could be hourly (if there are changes), which does the same as the CI build but runs more comprehensive and time-consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
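The same tiering translates to hosted CI systems via triggers rather than separate builds; purely as an illustration (this is GitHub Actions syntax, not CruiseControl.NET, and the script names are placeholders):

```yaml
# Tier 1 on every push; tiers 2 and 3 on schedules
name: tiered-builds
on:
  push:
  schedule:
    - cron: '0 * * * *'    # hourly comprehensive run
    - cron: '0 2 * * *'    # nightly: coverage, static analysis, packaging
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/quick-tests.sh             # placeholder: always runs
      - run: ./ci/comprehensive-tests.sh     # placeholder: scheduled runs only
        if: github.event_name == 'schedule'
      - run: ./ci/nightly.sh                 # placeholder: nightly run only
        if: github.event.schedule == '0 2 * * *'
```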
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps and let you do historical analysis, allowing you to continuously tweak the builds to keep them snappy. It's something managers find hard to accept: a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build steps down with enough granularity and keeping each one a complete unit.
For example, at my current position we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub-steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you'll make changes to the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to create one big job from the smaller pieces, anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server's build and configuration as close as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goals/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier for other people to understand (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts, and then we manipulate the results (i.e., archive the build output, flag failing tests, etc.).
