I am using the new Jenkins pipeline DSL, which I really like. My Jenkinsfile is probably fairly typical: it compiles / unit tests code in the master branch of Git using Maven, does a Docker build, deploys to staging, etc. Towards the end of the pipeline there is a manual step where a user has to confirm whether a build goes to production, e.g.:
stage name: 'Production Deploy', concurrency: 1
input 'Do you want to deploy to production?'
node {
    sh "./bin/production-deploy.sh"
}
However, the build blocks until someone accepts / declines. Is there a way to automatically decline the input if someone else kicks the build off (by merging code to the master branch)?
I suggest you separate the Continuous Integration pipeline from the Continuous Delivery pipeline. In the CI pipeline you build, test, and deploy from the develop stage to the staging stage; then someone validates the staging deploy, and once all staging tests pass you execute the CD pipeline as a next step, which is when you deploy to the production stage. That way you have an independent development lifecycle and an independent delivery process.
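A minimal sketch of that split, assuming two separate Jenkins pipeline jobs (the job layout and script paths are illustrative, based on the steps described in the question):

// CI pipeline: runs on every merge to master and stops at staging
node {
    stage('Build & Unit Test') {
        sh 'mvn clean verify'
    }
    stage('Docker Build') {
        sh 'docker build -t myapp:$BUILD_NUMBER .'   // illustrative image tag
    }
    stage('Deploy to Staging') {
        sh './bin/staging-deploy.sh'
    }
}

// CD pipeline: a separate job, run only after someone validates staging
node {
    stage('Production Deploy') {
        sh './bin/production-deploy.sh'
    }
}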
Related
If I deploy to prod from the latest release branch forked from dev, how can I integrate PRs to review code?
Also, I would like to automatically run an integration test pipeline when a pull request is opened, linking the result to the pull request to improve the code review process. Is that possible with this kind of branch strategy?
At the moment I have the following branch strategy:
master
test
dev
features
Feature branches are created from dev, then merged back into dev when they are ready.
When a set of features is ready, I create a PR from dev to test; my test pipeline runs and shows the result inside the PR to improve the code review process.
When code is merged into test after the PR, a pipeline ships it to the test environment and another pipeline runs to test the APIs from the outside.
Then the code is merged into master and shipped to prod.
This is my first question so please presume ignorance and positive intent.
As part of our build pipeline I need to run what our dev team calls "unit tests." These unit tests are run via an Ant target. Before running that Ant target we must spin up, configure, and partially populate (the Ant target does some of the population) several containers, including:
application server
ldap instance
postgres instance
It looks as if each task only supports a single container. Is there a straightforward way to get all of these running together? Ideally I could create a task that would allow me to specify a pod template with the commands running in one of the containers of that pod.
I realize that I could hack this together by using the openshift client or kubernetes actions tasks, but I was hoping for something a bit more elegant. I feel like using either of those tasks would require me to build out status awareness, error checking, retry logic, etc. that is likely already part of the pipeline logic, and then parse the output of the ant run to determine whether all of the tests were successful. This is my first foray into Tekton, so accepted patterns or idioms would be greatly appreciated.
For greater context I see these tasks making up my pipeline:
clone git repo
build application into intermediate image
launch pod with all necessary containers
wait for all containers to become "ready"
execute ant target to run unit tests
if all tests pass build runtime image
copy artifacts from runtime image to external storage (for deployment outside of openshift)
Thank you for your time
Have a look at sidecars. The database must be up and running for the tests to execute, so start the database in a sidecar. The steps in the task will be started as soon as all sidecars are running.
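A minimal sketch of such a Task, assuming illustrative image names and an illustrative Ant target (the postgres, ldap, app server and builder images are all placeholders):

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-unit-tests
spec:
  sidecars:
    - name: postgres
      image: postgres:13                      # placeholder image/tag
      env:
        - name: POSTGRES_PASSWORD
          value: test
      readinessProbe:                         # steps wait until the sidecar is ready
        exec:
          command: ['pg_isready', '-U', 'postgres']
    - name: ldap
      image: osixia/openldap:1.5.0            # placeholder image/tag
    - name: app-server
      image: my-registry/app-server:latest    # placeholder image
  steps:
    - name: run-ant-tests
      image: my-registry/ant-builder:latest   # placeholder image with ant installed
      script: |
        ant unit-tests

The sidecars share the pod's network namespace with the step container, so the tests can reach postgres and ldap on localhost.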
VSTS allows you to select which branches automatically trigger a CI build by specifying a branch pattern.
However, my unit tests use a real database, which causes a problem when more than one build triggers, e.g. master and feature-123, as they will clash on the database tests.
Is there a way of specifying that only one such build should run at a time? I don't want to move away from executing tests against a real database, as there are significant differences between an in-memory database and SQL Azure.
VSTS already serializes builds that are triggered by the same CI build definition.
Even though a CI build can be triggered by multiple branches, only one build runs at a time by default (unless you use pipelines to run builds concurrently).
For example, if both the master branch and the feature-123 branch are pushed to the remote repo at the same time, the CI build definition will trigger the two builds serially (not concurrently).
If you are using pipelines and need to run the triggered builds serially, you should make sure only one agent is used for the CI builds. You can do that as follows:
In your CI build definition -> Options tab -> add demands to specify which agent you want to use for the CI build.
Assume that in the default agent pool there are three agents, named default1, default2 and default3.
If you need the default2 agent to run the CI build, you can add the demand as below:
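The demand would look like this (using the agent name from the example above, and the same demand syntax as the YAML sample later in this thread):

agent.name -equals default2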
Now, even if multiple branches are pushed at the same time, the builds will be triggered one by one, since only one agent is available for the CI build.
If you use a YAML pipeline, you can use a deployment job instead of a regular job.
With a deployment job you select a named Environment you want to deploy to.
You can configure the Environment in Azure DevOps under Pipelines -> Environments, and you can choose to add an Exclusive Lock.
Then only one run can use the environment at a time and this serializes your runs.
Unfortunately, if you have multiple runs waiting for the environment (because one run currently has it locked), when the environment becomes unlocked only the latest run will continue. All the others waiting for the lock will be cancelled.
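A minimal sketch of such a deployment job, assuming an environment named db-tests that has the Exclusive Lock check enabled (the names and the test script are illustrative):

jobs:
  - deployment: DatabaseTests
    displayName: 'Serialized database tests'
    environment: db-tests          # environment configured with an Exclusive Lock
    strategy:
      runOnce:
        deploy:
          steps:
            - script: ./run-db-tests.sh
              displayName: 'Run tests against the real database'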
If you want to do it via a .yml or .yaml file, you can do the following:

- phase: Build
  queue:
    name: <Agent pool name>
    demands:
      - agent.name -equals <agent name from agent pool>
  steps:
    - task: <taskname>
      displayName: 'some display name'
      inputs:
        value: '<input variable based on type of task>'
        variableName: '<input variable name>'
We have three types of tests: unit, functional and acceptance.
The first two can be run with PHPUnit or other tools built on top of it, like Codeception. So in CI the deploy script will run all these tests, and if one fails the build fails and the merge request is cancelled.
But in the CI deploy script, how do we run the acceptance tests? These tests need to run in a browser against an already deployed build. Is there a workaround for that? Maybe run the acceptance tests after the build succeeds?
But then reverting will be a pain.
You can parallelize the test jobs, as is advised for every CI/CD pipeline.
But in CI deploy script how to run acceptance tests?
For this you will need dedicated test infrastructure, such as browsers available on the server. After the build step succeeds, run all the test steps.
The parallel jobs can be set up like this:
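For example, in GitLab CI syntax (assuming GitLab, since the question mentions merge requests; job names and commands are illustrative), jobs in the same stage run in parallel:

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script: ./build.sh

unit_tests:
  stage: test
  script: vendor/bin/phpunit

functional_tests:
  stage: test
  script: vendor/bin/codecept run functional

acceptance_tests:
  stage: test
  script: vendor/bin/codecept run acceptance   # needs a browser/WebDriver on the runner

deploy:
  stage: deploy
  script: ./deploy.sh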
I read an article describing Continuous Deployment with Jenkins thusly:
Create a 'test' job that runs your tests.
Create a 'deploy' job that deploys your app.
Make the 'test' job trigger 'deploy' on successful build.
I can do that just fine. However, I have a generic 'test' job right now that runs tests for any branch that I push. Is there a way to make it only trigger the 'deploy' job if I pushed to the 'production' branch?
Otherwise, I can always add a second 'test-production' job that only triggers when I push to production, and it triggers deploy afterwards...but that's not what I want to do.
An alternative setup is to use Rundeck for deployment.
The Jenkins plugin has a feature that will trigger a deployment based on an SCM commit message:
The "tag" field is used to perform "on-demand" job scheduling on RunDeck : if the value is not empty, we will check if the SCM changelog (= the commit message) contains the given tag, and only schedule a job execution if it is present. For example you can set the value to "#deploy". Note that if this value is left empty, we will ALWAYS schedule a job execution.
So you can configure Rundeck to trigger a Jenkins test after every successful deploy, and control those deployments using commit messages in the code.
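For example, with the tag field set to "#deploy", only pushes whose commit message contains that tag would schedule the Rundeck job (the commit messages below are illustrative):

git commit -m "Release 1.4.2 #deploy"    # contains the tag: job is scheduled
git commit -m "Fix typo in README"       # no tag: no job scheduled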