Is it possible to limit a Multibranch pipeline to only build one branch at a time?
I have a pipeline that has steps that checkout, build, test, and then deploy. The deploy phase does some copying/executing of files on a specific machine that cannot be done in parallel with other branch jobs.
I have tried:
properties([disableConcurrentBuilds()])
But this only limits concurrency on a per-branch basis, so multiple branches can still run in parallel.
Also, in regular non-pipeline Jenkins jobs, there is an option checkbox:
"Execute concurrent builds if necessary"
But this is also not available in the multibranch configuration.
Is there some other configuration to achieve this or is it by design?
In the above situation I would use lockable resources.
You can execute all of the branches in parallel; however, only one branch will execute the locked step at any given point in time.
stage("locked stage") {
lock("deploy") {
//deploy steps/copy files here
} // resource is unlocked.
}
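If you use declarative syntax, the same lock can be taken at the stage level through options; a minimal sketch, assuming a resource named deploy (the Lockable Resources plugin creates it on demand if it is not pre-defined) and a hypothetical deploy script:

pipeline {
    agent any
    stages {
        stage('Production Deploy') {
            options {
                // Only one build, across all branches, can hold this lock at a time.
                lock('deploy')
            }
            steps {
                sh './bin/production-deploy.sh'
            }
        }
    }
}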
I need to run a GitLab CI pipeline manually, not against the latest commit of my master branch but against a specific commit ID.
I have tried running the pipeline manually using the variable below and passing its value, but to no avail.
Input variable key: CI_COMMIT_SHA
At the time of this writing, GitLab only supports branch/tag pipelines, merge request pipelines and scheduled pipelines. You can't run a GitLab pipeline for a specific commit, since the same commit may belong to multiple branches.
To do what you want, you need to create a branch from the commit you want to run the pipeline for. Then you can run the manual pipeline on that branch.
See this answer for step-by-step instructions on how to create a branch from a commit directly in the GitLab UI.
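Alternatively, from the command line (the branch name pipeline-at-commit is just a placeholder):

# Create a branch pointing at the commit, then push it so GitLab can run a pipeline on it.
git branch pipeline-at-commit <commit-sha>
git push origin pipeline-at-commit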
Alternatively, use the existing workspace (created by GitLab CI) to run the .gitlab-ci.yml, and from there check out the code again into a different directory using the commit ID, then perform all the operations there.
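A rough sketch of that approach, assuming you pass the commit as a hypothetical TARGET_SHA variable when starting the manual pipeline (CI_REPOSITORY_URL is a predefined GitLab CI variable):

build_at_commit:
  script:
    # Clone the repository again into a scratch directory and check out the requested commit.
    - git clone "$CI_REPOSITORY_URL" commit-workspace
    - cd commit-workspace
    - git checkout "$TARGET_SHA"
    # ...run the build and any other operations against this checkout...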
We have a TeamCity setup with multiple agents. There is one particular build which uses some underlying resources and services of a particular environment (say UAT).
We want this build to never run in parallel with itself, to avoid resource contention, i.e. only a single running build at a time. How can I achieve this?
Thanks
Under the project, set up a shared resource.
Then, under each build configuration you don't wish to run in parallel, add a build feature, select the resource you just created, and choose "Write Lock".
This will mean any extra builds triggered will stay in the build queue and will not be allowed to run concurrently.
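If you manage your server configuration through TeamCity's Kotlin DSL rather than the UI, the same write lock can be attached as a generic build feature. A sketch, assuming a resource named UatEnvironment has already been created at the project level (the feature type and parameter names here follow the Shared Resources plugin and should be verified against your TeamCity version):

import jetbrains.buildServer.configs.kotlin.v2019_2.*

object UatBuild : BuildType({
    name = "UAT build"
    features {
        feature {
            // Take a write lock on the shared resource so only one build runs at a time.
            type = "JetBrains.SharedResources"
            param("locks-param", "UatEnvironment writeLock")
        }
    }
})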
I wrote a pipeline to build my Java application with Maven. I have feature branches and a master branch in my Git repository, so I have to separate the Maven goals package and deploy. Therefore I created two jobs in my pipeline, and the last job needs the job results from the first job.
I know that I have to cache the job results, but I don't want to
expose the job results to the GitLab UI
expose them to the next run of the pipeline
I tried the following solutions without success.
Using cache
I followed How to deploy Maven projects to Artifactory with GitLab CI/CD:
Caching the .m2/repository folder (where all the Maven files are stored), and the target folder (where our application will be created), is useful for speeding up the process by running all Maven phases in a sequential order, therefore, executing mvn test will automatically run mvn compile if necessary.
but this solution shares job results between pipelines, see Cache dependencies in GitLab CI/CD:
If caching is enabled, it’s shared between pipelines and jobs at the project level by default, starting from GitLab 9.0. Caches are not shared across projects.
and it also should not be used for passing results within the same pipeline, see Cache vs artifacts:
Don’t use caching for passing artifacts between stages, as it is designed to store runtime dependencies needed to compile the project:
cache: For storing project dependencies
Caches are used to speed up runs of a given job in subsequent pipelines, by storing downloaded dependencies so that they don’t have to be fetched from the internet again (like npm packages, Go vendor packages, etc.) While the cache could be configured to pass intermediate build results between stages, this should be done with artifacts instead.
artifacts: Use for stage results that will be passed between stages.
Artifacts are files generated by a job which are stored and uploaded, and can then be fetched and used by jobs in later stages of the same pipeline. This data will not be available in different pipelines, but is available to be downloaded from the UI.
Using artifacts
This solution exposes the job results to the GitLab UI, see artifacts:
The artifacts will be sent to GitLab after the job finishes and will be available for download in the GitLab UI.
and there is no way to expire the artifacts immediately after the pipeline finishes, see artifacts:expire_in:
The value of expire_in is an elapsed time in seconds, unless a unit is provided.
Is there any way to cache job results only for the running pipeline?
There is no way to send build artifacts between jobs in GitLab that keeps them only as long as the pipeline is running. This is how GitLab has designed their CI solution.
The recommended way to send build artifacts between jobs in GitLab is to use artifacts. This feature always uploads the files to the GitLab instance, which they call the coordinator in this case. These files are available through the GitLab UI, as you write. For most cases this is a complete waste of space, but in rare cases it is very useful, as you can download the artifacts and check why your pipeline broke.
The artifacts are available for download by project members that are at least Reporters, but can be viewed by everybody if public pipelines is enabled. You can read more about permissions here.
To not fill up your hard disk or quotas, you should use an expire_in. You could set it to just a few hours if you really don't want to waste space. I would not recommend this, though: if a job that depends on these artifacts fails and you retry it after the artifacts have expired, you will have to restart the whole pipeline. I usually set this to one week for intermediate build artifacts, as that often fits my needs.
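For the Maven setup in the question, a minimal sketch of this (the stage names and the one-week expiry are assumptions):

package:
  stage: build
  script:
    - mvn package
  artifacts:
    paths:
      - target/
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - mvn deploy
  # the target/ artifacts from the package job are downloaded automatically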
If you want to use caches for keeping build artifacts, maybe because your build artifacts are huge and you need to optimize things, it should be possible to use CI_PIPELINE_ID as the key of the cache (I haven't tested this):

cache:
  key: ${CI_PIPELINE_ID}
  paths:
    - target/
The files in the cache should be stored where your runner is installed. If you make sure that all jobs that need these build artifacts are executed by runners that have access to this cache, it should work.
You could also try some of the other predefined environment variables as the key of your cache.
VSTS allows you to select which branches automatically trigger a CI build by specifying a branch pattern.
However, my unit tests use a real database, which causes a problem when more than one build triggers, e.g. master and feature-123, as they will clash on the database tests.
Is there a way of specifying that only one such build should run at a time? I don't want to move away from executing tests against a real database, as there are significant differences between an in-memory database and SQL Azure.
VSTS already serializes builds that are triggered by the same CI build definition.
Even though a CI build definition can be triggered by multiple branches, only one build runs at a time by default (unless you use pipelines to run builds concurrently).
For example, if both the master branch and the feature-123 branch are pushed to the remote repo at the same time, the CI build definition will trigger two builds serially (not concurrently).
If you are using pipelines and need to run the triggered builds serially, you should make sure only one agent is used for the CI builds. You can do it this way:
In your CI build definition -> Options tab -> add demands to specify which agent you want to use for the CI build.
Assume the default agent pool has three agents named default1, default2, and default3.
If you need the default2 agent to run the CI build, add the demand Agent.Name equals default2.
Now even if multiple branches are pushed at the same time, the builds will be triggered one by one, since only one agent is available for the CI build.
If you use a YAML pipeline, you can use a deployment job instead of a regular job.
With a deployment job you select a named Environment you want to deploy to.
You can configure the Environment in Azure DevOps under Pipelines -> Environments, and you can choose to add an Exclusive Lock.
Then only one run can use the environment at a time and this serializes your runs.
Unfortunately, if you have multiple runs waiting for the environment (because one run currently has it locked), when the environment becomes unlocked only the latest run will continue. All the others waiting for the lock will be cancelled.
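A minimal sketch of such a deployment job (the environment name production is an assumption; it must exist under Pipelines -> Environments with the Exclusive Lock check configured):

jobs:
- deployment: DeployProduction
  environment: production  # exclusive lock is configured on the environment itself
  strategy:
    runOnce:
      deploy:
        steps:
        - script: ./bin/production-deploy.sh
          displayName: Deploy to production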
If you want to do it via a .yml or .yaml file, you can do the following:
- phase: Build
  queue:
    name: <Agent pool name>
    demands:
    - agent.name -equals <agent name from agent pool>
  steps:
  - task: <taskname>
    displayName: 'some display name'
    inputs:
      value: '<input variable based on type of task>'
      variableName: '<input variable name>'
I am using the new Jenkins pipeline DSL, which I really like. My Jenkinsfile is probably fairly typical: it compiles and unit tests code in the master branch of Git using Maven, does a Docker build, deploys to staging, etc. Towards the end of the pipeline there is a manual step where a user has to confirm whether a build goes to production, e.g.
stage name: 'Production Deploy', concurrency: 1
input 'Do you want to deploy to production?'
node {
    sh "./bin/production-deploy.sh"
}
However, the build blocks until someone accepts / declines. Is there a way to automatically decline the input if someone else kicks the build off (by merging code to the master branch)?
I suggest you separate the Continuous Integration pipeline from the Continuous Delivery pipeline. In the CI pipeline you build, test, and deploy from the develop stage to the staging stage; then someone validates the staging deployment, and when all staging tests pass, a next step triggers the CD pipeline, which deploys to the production stage. That way you have an independent development lifecycle and an independent delivery process.
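As a rough sketch of that split (the script and job names are assumptions), the CI pipeline ends at staging, and the production deploy lives in its own CD pipeline behind the manual gate:

// CI pipeline: build, test, deploy to staging, then stop.
node {
    stage('Build & Test') {
        sh 'mvn -B verify'
    }
    stage('Deploy to staging') {
        sh './bin/staging-deploy.sh'
    }
}

// Separate CD pipeline (its own job), run once staging is validated:
input 'Do you want to deploy to production?' // outside node, so no executor is held while waiting
node {
    sh './bin/production-deploy.sh'
}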