How to commit from a Concourse pipeline? - continuous-integration

What is the best way to commit from a pipeline? The job pulls from one repo, makes some changes and builds, then pushes the new files to a different repo. Is this possible?

You should use the git-resource.
The basic steps of what you are going to want to do are:
1. Pull from the source repo into a container.
2. Do whatever work you need with the code.
3. Move the new code into a different directory (the task's output).
4. Push the contents of that directory to a different git repository.
Your pipeline configuration should look something like this:
jobs:
- name: pull-code
  plan:
  - get: git-resource-pull
  - get: git-resource-push
  - task: do-something
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu, tag: "trusty"}  # placeholder image; any image with bash and git works
      inputs:
      - name: git-resource-pull
      - name: git-resource-push
      outputs:
      - name: git-resource-push  # declared as both input and output so the put step sees the modified clone
      run:
        path: /bin/bash
        args:
        - -c
        - |
          pushd git-resource-pull
          # do something
          popd
          # move the code from git-resource-pull to git-resource-push
  - put: git-resource-push
    params: {repository: git-resource-push}

resources:
- name: git-resource-pull
  type: git
  source:
    uri: https://github.com/team/repository-1.git
    branch: master
- name: git-resource-push
  type: git
  source:
    uri: https://github.com/team/repository-2.git
    branch: master
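To fill in the "move the code" placeholder, one common pattern (a sketch only; the build command, copied paths and committer identity below are made-up examples, not part of the original answer) is to copy the generated files into the git-resource-push working copy and commit them there, so the final put step has a new commit to push:

# body of the do-something task's script; everything below runs inside the task container
pushd git-resource-pull
./build.sh                                           # placeholder for whatever builds your files
popd

cp -r git-resource-pull/build/* git-resource-push/   # 'build/*' is a placeholder output path

cd git-resource-push
git config user.name  "ci-bot"                       # committer identity is an example
git config user.email "ci-bot@example.com"
git add -A
git commit -m "Publish build output from repository-1"

Because git-resource-push is declared as both an input and an output of the task, the clone produced by its get step is the directory you commit into, and the put step then pushes that commit to repository-2.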

Related

link pipeline-build to board-item

I have a pipeline that runs regularly every 8 hours. Its task is to check out a bunch of repositories, compile multiple projects and run some tests. The pipeline's YAML file is stored in Repo1:
resources:
  repositories:
  - repository: Repo1
    type: git
    name: Repo1
    ref: main
  - repository: Repo2
    type: git
    name: Repo2
    ref: main
  - repository: Repo3
    type: git
    name: Repo3

stages:
- stage: 'CompileAndTest'
  jobs:
  - job: 'x86'
    timeoutInMinutes: 240
    steps:
    - checkout: Repo1
      lfs: true
    - checkout: Repo2
      lfs: true
    - checkout: Repo3
      lfs: true

trigger: none

schedules:
- cron: "0 6,14,22 * * 1-5"
  displayName: daily every 8 hours
  branches:
    include:
    - main
  always: true
In order to link work items to commits, I enabled the appropriate option within my pipeline. Now I get a few work items that are related to the build listed within the build itself. (As an aside, in the image there are 19 repos; for the sake of simplicity I just mention 3 here.)
The commit associated with that item was made in Repo2, so in a different repo than the one where the pipeline is stored. However, when I go to the mentioned work item, it doesn't show the build in the "Integrated in build" field. Is that because the commit was made in another repo than the pipeline?

Can't checkout the same repo multiple times in a pipeline

I have self-hosted agents on multiple environments that I am trying to run the same build/deploy processes on. I would like to be able to deploy the same code from a single repo to multiple systems concurrently. Thus, I have created an "overhead" pipeline, and several "processes" pipeline templates. Everything seems to be going very well, except for when I try to perform checkouts of the same repo twice in the same pipeline execution. I get the following error:
An error occurred while loading the YAML build pipeline. An item with the same key has already been added.
I would really like to be able to just click ONE button to trigger a main pipeline that calls all the templates it requires and passes the parameters needed to get all my jobs done at once. I could of course define this "overhead" pipeline and then queue up as many instances of it as I need per system that I need to deploy to, but I'm lazy, hence why I'm using pipelines!
As soon as I remove the checkout from Common.yml, the validation succeeds without any issues. If I keep the checkout in there but only call the Common.yml once for the entire Overhead pipeline, then it succeeds without any issues as well. But the problem is: I need to pull the contents of the repo to EACH of my agents that are running on completely separate environments that are in no way ever able to talk to each other (can't pull the information to one agent and have it do some sort of a "copy" to all the other agent locations.....).
Any assistance is very much welcomed, thank you!
The following is my "overhead" pipeline:
# azure-pipelines.yml
trigger: none

parameters:
- name: vLAN
  type: string
  default: 851
  values:
  - 851
  - 1105

stages:
- stage: vLAN851
  condition: eq('${{ parameters.vLAN }}', '851')
  pool:
    name: xxxxx
    demands:
    - vLAN -equals 851
  jobs:
  - job: Common_851
    steps:
    - template: Procedures/Common.yml
  - job: Export_851
    dependsOn: Common_851
    steps:
    - template: Procedures/Export.yml
      parameters:
        Server: ABTS-01
- stage: vLAN1105
  condition: eq('${{ parameters.vLAN }}', '1105')
  pool:
    name: xxxxx
    demands:
    - vLAN -equals 1105
  jobs:
  - job: Common_1105
    steps:
    - template: Procedures/Common.yml
  - job: Export_1105
    dependsOn: Common_1105
    steps:
    - template: Procedures/Export.yml
      parameters:
        Server: OTS-01
And here is the "Procedures/Common.yml":
steps:
- checkout: git://xxxxx/yyyyy@$(Build.SourceBranchName)
  clean: true
  enabled: true
  timeoutInMinutes: 1
- task: UsePythonVersion@0
  enabled: true
  timeoutInMinutes: 1
  displayName: Select correct version of Python
  inputs:
    versionSpec: '3.8'
    addToPath: true
    architecture: 'x64'
- task: CmdLine@2
  enabled: true
  timeoutInMinutes: 5
  displayName: Ensure Python Requirements Installed
  inputs:
    script: |
      python -m pip install GitPython
And here is the "Procedures/Export.yml":
parameters:
- name: Server
  type: string

steps:
- task: PythonScript@0
  enabled: true
  timeoutInMinutes: 3
  displayName: xxxxx
  inputs:
    arguments: --name "xxxxx" --mode True --Server ${{ parameters.Server }}
    scriptSource: 'filePath'
    scriptPath: 'xxxxx/main.py'
I managed to make checkout work with variable branch names by using template expression variables ${{ ... }} instead of macro syntax $(...) variables.
The difference is that template expressions are processed at compile time, while macros are processed at runtime.
So in my case I have something like:
- checkout: git://xxx/yyy@${{ variables.BRANCH_NAME }}
For more information about variable syntax, see:
Understand variable syntax
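As a minimal sketch of that approach (the BRANCH_NAME variable and the xxx/yyy path are placeholders added here for illustration), the variable must already be resolvable when the YAML is expanded, i.e. defined in the YAML itself or in the pipeline settings UI, so that the checkout target is a literal string by the time the step is parsed:

variables:
  BRANCH_NAME: main   # known at compile time (YAML or pipeline settings UI), not set during the run

steps:
# ${{ variables.BRANCH_NAME }} is substituted during template expansion,
# whereas $(BRANCH_NAME) would only be expanded at runtime, which is too late
# for the checkout target according to the answer above
- checkout: git://xxx/yyy@${{ variables.BRANCH_NAME }}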
I couldn't get it to work with expressions but I was able to get it to work using repository resources following the documentation at: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops
resources:
  repositories:
  - repository: MyGitHubRepo # The name used to reference this repository in the checkout step
    type: git
    name: MyAzureProjectName/MyGitRepo
    ref: $(Build.SourceBranch)

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

#some job
steps:
- checkout: MyGitHubRepo

#some other job
steps:
- checkout: MyGitHubRepo
- script: dir $(Build.SourcesDirectory)

Difference between PUT and OUTPUT steps in Concourse

Could someone tell me the difference between the PUT step and the OUTPUT step in Concourse? For example, in the following type of YAML file, why do we need a put step after a get? Can't we use an output instead of a put? If not, what is the purpose of each?
jobs:
- name: PR-Test
  plan:
  - get: some-git-pull-request
    trigger: true
  - put: some-git-pull-request
    params:
      context: tests
      path: some-git-pull-request
      status: pending
  ....
  <- some more code to build ->
  ....
The purpose of a PUT step is to push to the given resource, while an OUTPUT is the result of a TASK step.
A task can configure outputs to produce artifacts that can then be propagated either to a put step or to another task step in the same plan.
This means that the resource you specify in the GET step is passed to the task as an input; the task performs whatever builds or script executions you need, and its output is a modified artifact that you can later pass to your put step, or to another TASK if you don't want to use a PUT.
It would also depend on the nature of the defined resource in your pipeline. I'm assuming that you have a git type resource like this:
resources:
- name: some-git-pull-request
  type: git
  source:
    branch: ((credentials.git.branch))
    uri: ((credentials.git.uri))
    username: ((credentials.git.username))
    password: ((credentials.git.pass))
If this is true, the GET step will pull that repo so you can use it as an input for your tasks, and if you use a PUT against that same resource, as in your sample code, it will push changes to your repo.
It really depends on the workflow you want to write, but to give you an idea it would look something like this:
jobs:
- name: PR-Test
  plan:
  - get: some-git-pull-request
    trigger: true
  - task: test-code
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: yourRepo/yourImage
          tag: latest
      inputs:
      - name: some-git-pull-request
      run:
        path: bash
        args:
        - -exc
        - |
          cd theNameOfYourRepo
          npm install -g mocha
          npm test
      outputs:
      - name: some-git-pull-request-output
Then you can use it either in a PUT:
- put: myCloud
  params:
    manifest: some-git-pull-request-output/manifest.yml
    path: some-git-pull-request-output
or in another TASK within the same plan:
- task: build-code
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: yourRepo/yourImage
        tag: latest
    inputs:
    - name: some-git-pull-request-output
    run:
      path: bash
      args:
      - -exc
      - |
        cd some-git-pull-request-output/
        npm install
        gulp build
    outputs:
    - name: your-code-build-output
Hope it helps!

Google CloudBuild artifacts YAML

I've followed the docs for Google CloudBuild here: https://cloud.google.com/cloud-build/docs/configuring-builds/store-images-artifacts
So here's my cloudbuild.yaml configuration:
steps:
- name: gcr.io/cloud-builders/git
  id: git-checkout
  args: ['fetch', '--tags', '--unshallow']
- name: openjdk
  id: gradle-build
  args: [
    './gradlew',
    '--build-cache',
    '-Si',
    '-Panalytics.buildId=$BUILD_ID',
    '-PgithubToken=$_GITHUB_TOKEN',
    '-g', '$_GRADLE_CACHE',
    'build'
  ]
artifacts:
  objects:
    location: ['gs://my-bucket/artifacts/']
    paths: ["build/libs/*.jar"]
If I comment out the following, then it runs successfully:
artifacts:
  objects:
    location: ['gs://my-bucket/artifacts/']
    paths: ["build/libs/*.jar"]
If I leave it uncommented, I get the following error from the CloudBuild console:
failed unmarshalling build config cloudbuild.yaml: json: cannot unmarshal array into Go value of type string
And under the Logs section, it simply says Logs unavailable.
You may need to indent the objects: line:
artifacts:
  objects:
    location: ['gs://my-bucket/artifacts/']
    paths: ["build/libs/*.jar"]
objects.location element should not be an array.
The following should work:
artifacts:
  objects:
    location: 'gs://my-bucket/artifacts/'
    paths: ["build/libs/*.jar"]
I've also run into this error with a section of my cloudbuild.yaml file that looked like:
- name: 'gcr.io/cloud-builders/git'
  args:
  - clone
  - --depth
  - 1
  - --single-branch
  - -b
  - development
  - git@bitbucket.org:aoaoeuoaeuoeaueu/oaeueoaueoauoaeuo.git
  volumes:
  - name: 'ssh'
    path: /root/.ssh
Seems the issue was with the 1, so I just added quotes around it, which fixed it (- "1").
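For reference, the corrected step (same placeholder repository URL as above) simply quotes the depth value so the YAML parser hands it to Cloud Build as a string rather than a number:

- name: 'gcr.io/cloud-builders/git'
  args:
  - clone
  - --depth
  - "1"            # quoted: every element of args must unmarshal as a string
  - --single-branch
  - -b
  - development
  - git@bitbucket.org:aoaoeuoaeuoeaueu/oaeueoaueoauoaeuo.git
  volumes:
  - name: 'ssh'
    path: /root/.ssh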

Pull from multiple SCM then mv file in Concourse CI to workdir

I've been banging my head on this one for quite some time and I cannot figure it out (I know it must be a simple thing to do though).
Currently I'm trying to pull from two repositories (which naturally creates two separate directories) and then move files from one directory to the other so the Dockerfile can build successfully.
Here's what my pipeline.yml file looks like:
---
jobs:
- name: build-nexus-docker-image
  public: false
  plan:
  - get: git-nexus-docker-images
    trigger: true
  - get: git-nexus-license
    trigger: true
  - task: mv-nexus-license
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu, tag: "trusty"}
      inputs:
      - name: git-nexus-license
      - name: git-nexus-docker-images
      run:
        path: /bin/sh
        args:
        - -c
        - mv -v git-nexus-license/nexus.lic git-nexus-docker-images/nexus.lic; ls -la git-nexus-docker-images
  - put: nexus-docker-image
    params:
      build: git-nexus-docker-images/

resources:
- name: git-nexus-docker-images
  type: git
  source:
    uri: git@git.company.com:dev/nexus-pro-dockerfile.git
    branch: test
    paths: [Dockerfile]
    private_key: {{git_ci_key}}
- name: git-nexus-license
  type: git
  source:
    uri: git@git.company.com:secrets/nexus-information.git
    branch: master
    paths: [nexus.lic]
    private_key: {{git_ci_key}}
- name: nexus-docker-image
  type: docker-image
  source:
    username: {{aws-token-username}}
    password: {{aws-token-password}}
    repository: {{ecr-nexus-repo}}
I've posted the pipeline that can actually be deployed to Concourse; I've tried a lot of things, but I can't figure out how to do this. I'm stuck on the part of moving the license file from the git-nexus-license directory to the git-nexus-docker-images directory. What I've done doesn't seem to mv the nexus.lic file, because while building the Docker image it fails since it cannot find that file in the directory.
EDIT: I've successfully been able to "mv" nexus.lic using the code above; however, the build is still failing because it doesn't find the file! I'm not sure what I'm doing wrong: the build works properly if I do it manually, but with Concourse it's failing.
Okay, so I figured out what I was doing wrong, and as usual it was something small. I forgot to add the outputs to the YAML file, which tells Concourse that this is the new workdir. Here's what it looks like now (which works for me):
---
jobs:
- name: build-nexus-docker-image
  public: false
  plan:
  - get: git-nexus-docker-images
    trigger: true
  - get: git-nexus-license
    trigger: true
  - task: mv-nexus-license
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu, tag: "trusty"}
      inputs:
      - name: git-nexus-license
      - name: git-nexus-docker-images
      outputs:
      - name: build-nexus-dir
      run:
        path: /bin/sh
        args:
        - -c
        - mv -v git-nexus-license/nexus.lic build-nexus-dir/nexus.lic; mv -v git-nexus-docker-images/* build-nexus-dir; ls -la build-nexus-dir;
  - put: nexus-docker-image
    params:
      build: build-nexus-dir/

resources:
- name: git-nexus-docker-images
  type: git
  source:
    uri: git@git.company.com:dev/nexus-pro-dockerfile.git
    branch: test
    paths: [Dockerfile]
    private_key: {{git_ci_key}}
- name: git-nexus-license
  type: git
  source:
    uri: git@git.company.com:secrets/nexus-information.git
    branch: master
    paths: [nexus.lic]
    private_key: {{git_ci_key}}
- name: nexus-docker-image
  type: docker-image
  source:
    username: {{aws-token-username}}
    password: {{aws-token-password}}
    repository: {{ecr-nexus-repo}}
I hope this helps whoever gets stuck on this. :)
