I know this is possible when using Jenkins inside of OpenShift, but when using pure build images, full CI/CD seems to be missing.
Our perfect scenario for each push to 'master' branch would be:
- build app
- run unit tests
- notify team if failed to build
- deploy image
- notify if failed to start
A simple OpenShift build setup only covers some of these steps.
Can we have full CI/CD inside of Openshift? Or should we do checks outside?
Also notifications on failures are still missing in Openshift as far as I know.
Personally, I think you would be better off using the OpenShift Pipeline Jenkins Plugin for your use case.
Your own CI/CD can be implemented in various ways, so this is just one sample; you may need some trial and error to find the configuration that fits your needs.
For example, here is a simple build and deploy description using the OpenShift Pipeline Jenkins Plugin.
For more details, refer here.
The post notification for the job result is configured using Cleaning up and notifications.
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    name: your-pipeline
  name: your-pipeline
spec:
  runPolicy: Serial
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('some unit tests') {
              steps {
                sh 'git clone https://github.com/yourproject/yourrepo'
                sh 'python -m unittest tests/unittest_start_and_result_mailing.py'
              }
            }
            stage('Build using your-buildconfig') {
              steps {
                openshiftBuild(namespace: 'your-project', bldCfg: 'your-buildconfig', showBuildLogs: 'true')
              }
            }
            stage('Deployment using your-deploymentconfig') {
              steps {
                openshiftDeploy(namespace: 'your-project', depCfg: 'your-deploymentconfig')
              }
            }
            stage('Verify deployment status') {
              steps {
                openshiftVerifyDeployment(namespace: 'your-project', depCfg: 'your-deploymentconfig', verifyReplicaCount: 'true')
              }
            }
          }
          post {
            always {
              echo 'One way or another, I have finished'
              deleteDir() /* clean up our workspace */
            }
            success {
              echo 'I succeeded!'
            }
            unstable {
              echo 'I am unstable :/'
            }
            failure {
              echo 'I failed :('
            }
            changed {
              echo 'Things were different before...'
            }
          }
        }
    type: JenkinsPipeline
  triggers:
  - github:
      secret: gitsecret
    type: GitHub
  - generic:
      secret: genericsecret
    type: Generic
I hope this helps you.
Related
I am using GitHub Actions for one of my projects. I have a use case where I need to deploy to 2 different environments. As the number of domains may grow, I want to deploy to all of them at once parametrically.
Part of my job that fails:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1'], ['old-main', 'books-v2']]
The above part works perfectly, but if I need to add new variants from secrets, the workflow doesn't work. See the snippet below:
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1', ${{ secrets.URL_V1 }}], ['old-main', 'books-v2', ${{ secrets.URL_V2 }}]]
I checked GitHub Actions docs. I also searched available examples on GitHub to see existing solutions. So far, I didn't find a similar use case.
Is there a way to make it work like that? What are alternatives to my approach that will work?
GitHub Actions failure message:
You have an error in your yaml syntax on line XYZ
At the YAML level, single quotes around ${{ secrets... }} should fix the syntax error.
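For illustration, the quoted form would look like this (though, as explained below, the secrets context still cannot be used at this level):

    domain: [['main', 'books-v1', '${{ secrets.URL_V1 }}'], ['old-main', 'books-v2', '${{ secrets.URL_V2 }}']]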
But, according to Context availability, the secrets context is not allowed under strategy. The allowed contexts are:
jobs.<job_id>.strategy: github, needs, vars, inputs
You can make use of the vars context for your use case.
Apart from that, linting your workflow with https://rhysd.github.io/actionlint/ would help you spot potential issues much faster.
UPDATE (by Dmytro Chasovskyi)
Here is an example with the vars context:
With a variable DOMAINS having this config:
{
  "v1": {
    "url": "http://localhost:80/api/v1"
  },
  "v2": {
    "url": "http://localhost:80/api/v2"
  }
}
the workflow will be (note that repository variables are plain strings, so the JSON value has to be parsed with fromJSON before its fields can be accessed):
jobs:
  build:
    strategy:
      matrix:
        domain: [['main', 'books-v1', '${{ fromJSON(vars.DOMAINS).v1.url }}'], ['old-main', 'books-v2', '${{ fromJSON(vars.DOMAINS).v2.url }}']]
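For completeness, a sketch of how a later step could consume the elements of each matrix entry (the step name and deploy script are placeholders, not part of the original workflow):

    steps:
      - name: Deploy
        run: ./deploy.sh "${{ matrix.domain[1] }}" "${{ matrix.domain[2] }}"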
I've managed to successfully trigger a Google Cloud Build on every commit to GitHub. However, we have many different source code repositories (projects) on GitHub that all use Maven and Spring Boot, and I would like all of these projects to use the same cloudbuild.yaml (or a shared template). This way we don't need to duplicate the cloudbuild.yaml in all projects (it will be essentially the same in most projects).
For example, let's just take two different projects on GitHub, A and B. Their cloudbuild.yaml files could look something like this (but much more complex in our actual projects):
Project A:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/project-a", "--build-arg=JAR_FILE=target/project-a.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/project-a" ]
Project B:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/project-b", "--build-arg=JAR_FILE=target/project-b.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/project-b" ]
The only things that differ are the jar file and the image name; the steps are the same. Now imagine having hundreds of such projects: it can become a maintenance nightmare if we need to change or add a build step for each project.
A better approach, in my mind, would be to have a template file that could be shared:
steps:
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'test' ]
  - name: maven:3.8.6-eclipse-temurin-17-alpine
    entrypoint: mvn
    args: [ 'package', '-Dmaven.test.skip=true' ]
  - name: gcr.io/cloud-builders/docker
    args: [ "build", "-t", "europe-west1-docker.pkg.dev/projectname/repo/${PROJECT_NAME}", "--build-arg=JAR_FILE=target/${PROJECT_NAME}.jar", "." ]
images: [ "europe-west1-docker.pkg.dev/projectname/repo/${PROJECT_NAME}" ]
It would then be great if such a template file could be uploaded to GCS and then reused in the cloudbuild.yaml file in each project:
Project A:
steps:
  import:
    gcs: gs://my-build-bucket/cloudbuild-template.yaml
    parameters:
      PROJECT_NAME: project-a
Project B:
steps:
  import:
    gcs: gs://my-build-bucket/cloudbuild-template.yaml
    parameters:
      PROJECT_NAME: project-b
Does such a thing exist for Google Cloud Build? How do you import/reuse build steps in different builds as I described above? What's the recommended way to achieve this?
I contacted Google Cloud support and they told me that this is not currently available. They are aware of the issue and it's something that they're working on (no eta on when it's going to be available).
Their recommendation, in the meantime, is to use Tekton.
You have two readily available solutions on Google Cloud Build:
- encapsulate these build procedures into one or more reusable Docker images
- make use of gcloud builds submit
Eventually, whichever you decide, you will end up with something like this in your project build configurations:
- name: gcr.io/my-project/maven-spring
  args: ['$REPO_NAME', 'project-a']
Some rationale
I don't believe these are the only two viable options you have to approach the problem of "reusable build code", but I think they play nicely with GCB and other related services you might already have provisioned.
With reusable Docker images, the benefit is that the images run in the same execution context as your build configuration, so you can imagine this is akin to the experience you already have with using existing gcr.io/cloud-builders or community images.
With gcloud builds submit, the benefit is that you can run an entirely separate build in its own execution context, and either block until it completes or run it asynchronously.
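For illustration, a minimal sketch of the gcloud builds submit approach, assuming the shared template lives at gs://my-build-bucket/cloudbuild-template.yaml (as in the question) and exposes a user-defined _PROJECT_NAME substitution (Cloud Build requires user-defined substitutions to start with an underscore); the Cloud Build service account also needs permission to start builds:

steps:
  - name: gcr.io/cloud-builders/gsutil
    args: [ 'cp', 'gs://my-build-bucket/cloudbuild-template.yaml', 'cloudbuild-template.yaml' ]
  - name: gcr.io/cloud-builders/gcloud
    args: [ 'builds', 'submit', '--config=cloudbuild-template.yaml', '--substitutions=_PROJECT_NAME=project-a', '.' ]

This submits the current /workspace sources as a second, templated build and blocks until it finishes.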
The approach with building your custom Docker image that can perform these common tasks with the sources available on /workspace is a bit more straightforward.
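As a sketch of that first option (the builder image name, script name, and contents below are assumptions, not an existing image), the shared builder could wrap the common Maven steps and be published once to your registry:

Dockerfile:

FROM maven:3.8.6-eclipse-temurin-17-alpine
COPY build.sh /usr/local/bin/build.sh
RUN chmod +x /usr/local/bin/build.sh
ENTRYPOINT ["/usr/local/bin/build.sh"]

build.sh:

#!/bin/sh
# Hypothetical shared build script: $1 = repo name, $2 = project/artifact name.
# Cloud Build runs each step with the checked-out sources mounted at /workspace,
# which is also the step's working directory.
set -e
echo "Building ${2} (repo ${1})"
mvn test
mvn package -Dmaven.test.skip=true

Each project's cloudbuild.yaml would then only need the two-line step shown above, plus its docker build step (or you extend the script to build and tag the image as well).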
In case asynchronous builds seem like the right option, I imagine you can publish a Docker image to a shared Artifact/Container Registry, based on gcr.io/cloud-builders/gcloud, which has all the templates contained within. On execution, the sources will be available on the automatically mounted /workspace path, and you can add some nicer touches like validating the Docker run arguments.
I have set up Jenkins Project Piper (https://sap.github.io/jenkins-library/). I have then set up a basic SAP Cloud Application Programming model app with integration for the SAP Cloud SDK pipeline with the default configuration, uncommented the 'productionDeployment' stage, and completed the Cloud Foundry endpoints/orgs/spaces etc. I have committed the application to the master branch in the git repo.
The pipeline executes successfully but is skipping the production deployment step.
When checking the logs I see:
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Production Deployment)
Stage "Production Deployment" skipped due to when conditional
When I look at the script (https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/s4sdk-pipeline.groovy) I see:
stage('Production Deployment') {
    when { expression { commonPipelineEnvironment.configuration.runStage.PRODUCTION_DEPLOYMENT } }
    //milestone 80 is set in stageProductionDeployment
    steps { stageProductionDeployment script: this }
}
Can anyone explain what is required to pass the commonPipelineEnvironment.configuration.runStage.PRODUCTION_DEPLOYMENT check in order to execute the stageProductionDeployment script?
My pipeline_config.yml file (anonymized) is:
###
# This file configures the SAP Cloud SDK Continuous Delivery pipeline of your project.
# For a reference of the configuration concept and available options, please have a look into its documentation.
#
# The documentation for the most recent pipeline version can always be found at:
# https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/configuration.md
# If you are using a fixed version of the pipeline, please make sure to view the corresponding version from the tag
# list of GitHub (e.g. "v15" when you configured pipelineVersion = "v15" in the Jenkinsfile).
#
# For general information on how to get started with Continuous Delivery, visit:
# https://blogs.sap.com/2017/09/20/continuous-integration-and-delivery
#
# We aim to keep the pipeline configuration as stable as possible. However, major changes might also imply breaking
# changes in the configuration. Before doing an update, please check the the release notes of all intermediate releases
# and adapt this file if necessary.
#
# This is a YAML-file. YAML is a indentation-sensitive file format. Please make sure to properly indent changes to it.
###
### General project setup
general:
  productiveBranch: 'master'

### Step-specific configuration
steps:
  setupCommonPipelineEnvironment:
    collectTelemetryData: true

  cloudFoundryDeploy:
    dockerImage: 'ppiper/cf-cli'
    smokeTestStatusCode: '200'
    cloudFoundry:
      org: 'XXXXXX'
      space: 'XXXXXX'
      appName: 'MTBookshopNode'
      manifest: 'mta.yaml'
      credentialsId: 'CF_CREDENTIALSID'
      apiEndpoint: 'https://api.cf.XX10.hana.ondemand.com'

### Stage-specific configuration
stages:
  # This exclude is required for the example project to be successful in the pipeline
  # Remove it when you have added your first test
  s4SdkQualityChecks:
    jacocoExcludes:
      - '**/OrdersService.class'

  # integrationTests:
  #   credentials:
  #     - alias: 'mySystemAlias'
  #       credentialId: 'mySystemCredentialsId'
  # s4SdkQualityChecks:
  #   nonErpDestinations:
  #     - 'myCustomDestination'

  productionDeployment:
    cfTargets:
      - org: 'XXXXXX'
        space: 'XXXXXX'
        apiEndpoint: 'https://api.cf.XX10.hana.ondemand.com'
        appName: 'myAppName'
        manifest: 'mta.yaml'
        credentialsId: 'CF_CREDENTIALSID'
My Jenkinsfile is unchanged:
#!/usr/bin/env groovy
/*
* This file bootstraps the codified Continuous Delivery pipeline for extensions of SAP solutions, such as SAP S/4HANA.
* The pipeline helps you to deliver software changes quickly and in a reliable manner.
* A suitable Jenkins instance is required to run the pipeline.
* The Jenkins can easily be bootstraped using the life-cycle script located inside the 'cx-server' directory.
*
* More information on getting started with Continuous Delivery can be found in the following places:
* - GitHub repository: https://github.com/SAP/cloud-s4-sdk-pipeline
* - Blog Post: https://blogs.sap.com/2017/09/20/continuous-integration-and-delivery
*/
/*
* Set pipelineVersion to a fixed released version (e.g. "v15") when running in a productive environment.
* To find out about available versions and release notes, visit: https://github.com/SAP/cloud-s4-sdk-pipeline/releases
*/
String pipelineVersion = "master"
node {
    deleteDir()
    sh "git clone --depth 1 https://github.com/SAP/cloud-s4-sdk-pipeline.git -b ${pipelineVersion} pipelines"
    load './pipelines/s4sdk-pipeline.groovy'
}
Any ideas what I am missing for a production deployment, and how do I get past this check in the script?
Regards
Neil
The pipeline was built for multi-branch pipelines and will not work correctly in a single-branch pipeline job. There is no problem with running a project that has a single branch in a multi-branch pipeline job. To avoid confusion, we added a check to the pipeline in a recent version, as documented here: https://blogs.sap.com/2019/11/21/new-versions-of-sap-cloud-sdk-3.8.0-for-java-1.13.1-for-javascript-and-v26-of-continuous-delivery-toolkit/#cd-toolkit
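For reference, a minimal Job DSL sketch of such a multibranch pipeline job (the job name and repository URL are placeholders, not from the question); creating the equivalent 'Multibranch Pipeline' item through the Jenkins UI works just as well:

multibranchPipelineJob('my-cap-app') {
    branchSources {
        git {
            id('my-cap-app')
            remote('https://github.com/your-org/your-repo.git')
        }
    }
}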
Kind regards
Florian
In the CircleCI version 1 config, there was the option to specify owner as an option in a deployment. An example from the CircleCI docs ( https://circleci.com/docs/1.0/configuration/ ), with owner: circleci being the key line:
deployment:
  master:
    branch: master
    owner: circleci
    commands:
      - ./deploy_master.sh
In version 2 of the config, there is the ability to use filters and tags to specify which branches are built, but I have yet to find (in the docs, or on the interwebs) anything that gives me the same capability.
What I'm trying to achieve is run build and test steps on forks, but only run the deploy steps if the repository owner is the main repo. Quite often people fork using the same branch name - in this case master - so having a build fail due to an inability to deploy is counter-intuitive, especially as I would like to use a protected branch in git and only merge commits based on a successful build in a pull request.
I realise we could move to only running builds based on tags being present, but nothing is stopping somebody with a fork also creating a tag in their fork, which puts us back at square one.
Is anybody aware of how to specify the owner of a repo in the version 2 config?
An example from the version 2 config document ( https://circleci.com/docs/2.0/workflows/ ) in case it helps jog somebody's memory:
workflows:
  version: 2
  un-tagged-build:
    jobs:
      - build:
          filters:
            tags:
              ignore: /^v.*/
  tagged-build:
    jobs:
      - build:
          filters:
            branches:
              ignore: /.*/
            tags:
              only: /^v.*/
disclaimer: Developer Evangelist at CircleCI
That feature is not available on CircleCI 2.0. You can request it here.
As an alternative, you might be able to look at the branch name, say master, as well as the CIRCLE_PR_NUMBER environment variable. If that variable has any value, the build comes from a forked pull request and you shouldn't deploy.
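A minimal sketch of what that check could look like in a deploy job (the image, job name, and deploy script are placeholders, reusing ./deploy_master.sh from the 1.0 example):

jobs:
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: Deploy only from the upstream repository
          command: |
            if [ -n "${CIRCLE_PR_NUMBER}" ]; then
              echo "Build comes from a forked pull request, skipping deploy."
              exit 0
            fi
            ./deploy_master.sh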
I am getting a task config 'get-snapshot-jar/infra/hw.yml' not found error. I have written a very simple pipeline .yml; it connects to an Artifactory resource and runs another yml which is defined in the task section.
my pipeline.yml looks like:
resources:
- name: get-snapshot-jar
  type: docker-image
  source: <artifactory source>
    repository: <artifactory repo>
    username: {{artifactory-username}}
    password: {{artifactory-password}}
jobs:
- name: create-artifact
  plan:
  - get: get-snapshot-jar
    trigger: true
  - task: copy-artifact-from-artifact-repo
    file: get-snapshot-jar/infra/hw.yml
The Artifactory resource is working fine, but after that I am getting an error:

copy-artifact-from-artifact-repo
task config 'get-snapshot-jar/infra/hw.yml' not found
You need to specify an input for your copy-artifact-from-artifact-repo task which passes the get-snapshot-jar resource to the task's Docker container. Take a look at this post, where someone runs into a similar problem: Trigger events in Concourse.
Also, your file variable looks odd: you are referencing a docker-image resource which, according to the official Concourse resource GitHub repo, has no yml files inside.
Generally I would keep my task definitions as close as possible to the pipeline code. If you have to reach out to different repos, you might lose the overview as your pipeline keeps growing; for example:
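A minimal sketch of that idea (the ci-repo resource name and repository URL are placeholders): keep infra/hw.yml in a git repository, get it in the plan, and declare the jar resource as an input of the task:

resources:
- name: ci-repo
  type: git
  source:
    uri: https://github.com/yourorg/yourrepo.git

jobs:
- name: create-artifact
  plan:
  - get: ci-repo
  - get: get-snapshot-jar
    trigger: true
  - task: copy-artifact-from-artifact-repo
    file: ci-repo/infra/hw.yml

where infra/hw.yml declares the input so the fetched resource is available inside the task container:

platform: linux
image_resource:
  type: docker-image
  source: { repository: busybox }
inputs:
- name: get-snapshot-jar
run:
  path: sh
  args: [ '-c', 'ls -l get-snapshot-jar' ]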
cheers,