Gradle build failed in Tekton CI/CD

I have the below pipeline tasks for a Gradle build, which clone the project from a Bitbucket repo and try to build the application.
tasks:
  - name: clone-repository
    taskRef:
      name: git-clone
    workspaces:
      - name: output
        workspace: shared-workspace
    params:
      - name: url
        value: "$(params.repo-url)"
      - name: deleteExisting
        value: "true"
  - name: build
    taskRef:
      name: gradle
    runAfter:
      - "clone-repository"
    params:
      - name: TASKS
        value: build -x test
      - name: GRADLE_IMAGE
        value: docker.io/library/gradle:jdk17-alpine@sha256:dd16ae381eed88d2b33f977b504fb37456e553a1b9c62100b8811e4d8dec99ff
    workspaces:
      - name: source
        workspace: shared-workspace
I have the below project structure.
The settings.gradle contains the below projects:
rootProject.name = 'discount'
include 'core'
include 'infrastructure'
include 'shared'
include 'discount-api'
When running the pipeline with the below code:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: run-pipeline
  namespace: tekton-pipelines
spec:
  serviceAccountName: git-service-account
  pipelineRef:
    name: git-clone-pipeline
  workspaces:
    - name: shared-workspace
      persistentVolumeClaim:
        claimName: fetebird-discount-pvc
  params:
    - name: repo-url
      value: git@bitbucket.org:anandjaisy/discount.git
I am getting the exception:
FAILURE: Build failed with an exception.
* What went wrong:
Task 'build -x test' not found in root project 'discount'.
I have used the task from the Tekton catalog https://github.com/tektoncd/catalog/blob/main/task/gradle/0.1/gradle.yaml
If I pass ./discount-api as the PROJECT_DIR value to the Gradle task, I get an exception that settings.gradle was not found, which is correct, because that subproject has no settings.gradle file.
The main project is discount-api, and I need to build the application. I'm not sure what is wrong: on my local environment, running ./gradlew build in the root directory builds the application successfully.

The error message says: Task 'build -x test' not found in root project 'discount'.
Checking that Task in the Tekton catalog, we can read:
....
- name: TASKS
  description: 'The gradle tasks to run (default: build)'
  type: string
  default: build
steps:
  - name: gradle-tasks
    image: $(params.GRADLE_IMAGE)
    workingDir: $(workspaces.source.path)/$(params.PROJECT_DIR)
    command:
      - gradle
    args:
      - $(params.TASKS)
Now, in your Pipeline, you set that TASKS param to build -x test. This is your issue.
As you can read above, that TASKS param is a string, while you want to use an array.
You should be able to change the param definition, such as:
....
- name: TASKS
  description: 'The gradle tasks to run (default: build)'
  type: array
  default:
    - build
steps:
  - name: gradle-tasks
    image: $(params.GRADLE_IMAGE)
    workingDir: $(workspaces.source.path)/$(params.PROJECT_DIR)
    command:
      - gradle
    args: [ "$(params.TASKS)" ]
This would ensure "build", "-x" and "test" are sent to gradle as separate strings, while your current attempt is equivalent to running gradle "build -x test", resulting in your error.
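With the Task param changed to an array, the Pipeline then passes each Gradle argument as a separate list item. A minimal sketch of the corresponding param in the Pipeline (same param name as above):

```yaml
params:
  - name: TASKS
    value:
      - build
      - -x
      - test
```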

Related

Azure Spring Gradle: files in src/test/resources not accessible

I'm building a Spring Boot app using Gradle. Integration tests (Spock) need to access the code/src/resources/docker-compose.yml file to prepare a Testcontainers container:
static DockerComposeContainer postgresContainer = new DockerComposeContainer(
ResourceUtils.getFile("classpath:docker-compose.yml"))
The git file structure is:
- code
  - src
    - main
    - test
      - resources
        - docker-compose.yml
This is working fine on my local machine, but once I run it in Azure pipeline, it gives
No such file or directory: '/__w/1/s/code/build/resources/test/docker-compose.yml'
My pipeline YAML is below. I use an Ubuntu container with Java 17 as I need to build with 17 but Azure's latest is 11 (maybe this plays a role in the error I get?).
trigger: none
stages:
  - stage: Test
    displayName: Test Stage
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-22.04'
        container: gradle:7.6.0-jdk17
        variables:
          - name: JAVA_HOME_11_X64
            value: /opt/java/openjdk
        displayName: Test
        steps:
          - script: java --version
            displayName: Java version
          - script: |
              echo "Build reason is: $(Build.Reason)"
            displayName: Build reason
          - checkout: self
            clean: true
          - task: Gradle@2
            displayName: 'Gradle Build'
            enabled: True
            inputs:
              javaHomeSelection: 'path'
              jdkDirectory: '/opt/java/openjdk'
              wrapperScript: code/gradlew
              cwd: code
              tasks: clean build
              publishJUnitResults: true
              jdkVersionOption: 1.17
Thanks for help!
I've solved it with a workaround - I realised that I don't need to use the container with JDK 17 that was causing the problem (it could not access files on the host machine, of course).
The truth is that Azure silently supports JDK 17 via the directive jdkVersionOption: 1.17.
But once someone needs to use the container to build the code and access repository files that are not on the classpath, the problem will arise again.
trigger: none
stages:
  - stage: Test
    displayName: Test Stage
    jobs:
      - job: Test
        pool:
          vmImage: 'ubuntu-22.04'
        displayName: Test
        steps:
          - script: java --version
            displayName: Java version
          - script: |
              echo "Build reason is: $(Build.Reason)"
            displayName: Build reason
          - checkout: self
            clean: true
          - task: Gradle@2
            displayName: 'Gradle Build'
            enabled: True
            inputs:
              wrapperScript: server/code/gradlew
              cwd: server/code
              tasks: test
              publishJUnitResults: true
              jdkVersionOption: 1.17
Please follow the Azure pipeline issue for more details.

GCP cloudbuild with multiple steps

I have the below steps and referring to the steps from here https://cloud.google.com/build/docs/building/build-java#gradle
Now for the second and third steps I don't need a Gradle Docker image; if I add it, I can see in the build logs that the image already exists. How can I avoid specifying it for each step?
steps:
  # This step shows the version of Gradle
  - name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ['--version']
  # This step builds the gradle application
  - name: 'Build'
    entrypoint: gradle
    args: ['build']
  # This step runs the tests
  - name: 'Test'
    entrypoint: gradle
    args: ['test']
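For reference, in Cloud Build the name field of every step must be a builder image, so a common pattern (a sketch, not a confirmed fix for this pipeline) is to repeat the same image on each step; it is pulled once and reused from the local cache afterwards:

```yaml
steps:
  - name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ['--version']
  # The image was pulled for the first step, so these steps reuse the cached copy
  - name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ['build']
  - name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ['test']
```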

Artifact dependencies (destination) using Bamboo YAML specs

I'm trying to set up the Bamboo build plan configuration using Bamboo YAML specs (.yml file below). In the last stage (Create deployment artifacts) I want to use the shared artifacts from the previous stage. By specifying the artifacts of the jobs as "shared: true" I can use them in the second stage. However, they are in the same destination folder. Using the UI this can be easily edited.
(screenshot: Artifact dependencies)
But how can I specify the destination folder of the two artifacts in the Bamboo YAML specs, e.g. from "Root of working directory" to "./app" and "./wwwroot", respectively?
---
version: 2
plan:
  project-key: COCKPIT
  key: BE
  name: Cockpit - Continuous Build - Windows
stages:
  - Build Stage:
      - Build Backend
      - Build Frontend
  - Build Artifact:
      - Create Deployment Artifact
Build Backend:
  requirements:
    - Visual Studio Build Tools (32-bit)
  tasks:
    - checkout:
        repository: cockpit_backend
        path: 'cockpit_backend'
        force-clean-build: false
    - script:
        - dotnet publish .\cockpit_backend\src\Cockpit.WebApi\ --configuration Release
  artifacts:
    - name: BackendBuild
      location: cockpit_backend/src/Cockpit.WebApi/bin/Release/netcoreapp3.1/publish
      pattern: '**/*.*'
      required: true
      shared: true
Build Frontend:
  requirements:
    - os_linux
  tasks:
    - checkout:
        repository: 'Cockpit / cockpit_frontend'
        path: 'cockpit_frontend'
        force-clean-build: false
    - script:
        - cd cockpit_frontend
        - npm install
    - script:
        - cd cockpit_frontend
        - npm run build-prod
  docker:
    image: node:alpine
  artifacts:
    - name: FrontendBuild
      location: cockpit_frontend/dist
      pattern: '**/*.*'
      required: true
      shared: true
Create Deployment Artifact:
  requirements:
    - os_windows
  tasks:
    - script:
        interpreter: powershell
        scripts:
          - $buildDir = "Cockpit"
          - $dest = "Cockpit_${bamboo.buildNumber}.zip"
          - Add-Type -assembly "system.io.compression.filesystem"
          - '[io.compression.zipfile]::CreateFromDirectory($buildDir, $dest)'
  artifacts:
    - name: Completebuild
      pattern: 'Cockpit_${bamboo.buildNumber}.zip'
      required: true
YAML specs don't support artifact dependency destination management, so you need a script task in the "Create Deployment Artifact" job to move the shared artifacts into separate folders before compressing.
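A minimal sketch of such a script task, assuming the shared artifacts are downloaded into folders named after them (BackendBuild and FrontendBuild; adjust the source paths to wherever they actually land in your working directory):

```yaml
Create Deployment Artifact:
  tasks:
    - script:
        interpreter: powershell
        scripts:
          # Move the downloaded artifacts into separate folders before zipping
          - New-Item -ItemType Directory -Force -Path Cockpit\app, Cockpit\wwwroot
          - Copy-Item BackendBuild\* -Destination Cockpit\app -Recurse
          - Copy-Item FrontendBuild\* -Destination Cockpit\wwwroot -Recurse
```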

Difference between PUT and OUTPUT steps in Concourse

Could someone tell me the difference between the put step and the output step in Concourse? For example, in the following type of YAML file, why do we need a put step after a get? Can't we use an output instead of a put? If not, what are the purposes of the two?
jobs:
  - name: PR-Test
    plan:
      - get: some-git-pull-request
        trigger: true
      - put: some-git-pull-request
        params:
          context: tests
          path: some-git-pull-request
          status: pending
....
<- some more code to build ->
....
The purpose of a put step is to push to the given resource, while an output is the result of a task step.
A task can configure outputs to produce artifacts that can then be propagated to either a put step or to another task step in the same plan.
This means that you send the resource specified in the get step to the task as an input, to perform whatever build or script executions you need, and the output of that task is a modified resource that you can later pass to your put step, or to another task if you don't want to use put.
It would also depend on the nature of the defined resource in your pipeline. I'm assuming that you have a git type resource like this:
resources:
  - name: some-git-pull-request
    type: git
    source:
      branch: ((credentials.git.branch))
      uri: ((credentials.git.uri))
      username: ((credentials.git.username))
      password: ((credentials.git.pass))
If this is true, the GET step will pull that repo so you can use it as an input for your tasks and if you use PUT against that same resource as you are describing in your sample code, that will push changes to your repo.
Really it depends on the workflow that you want to write but to give an idea it would look something like this:
jobs:
  - name: PR-Test
    plan:
      - get: some-git-pull-request
        trigger: true
      - task: test-code
        config:
          platform: linux
          image_resource:
            type: docker-image
            source:
              repository: yourRepo/yourImage
              tag: latest
          inputs:
            - name: some-git-pull-request
          run:
            path: bash
            args:
              - -exc
              - |
                cd theNameOfYourRepo
                npm install -g mocha
                npm test
          outputs:
            - name: some-git-pull-request-output
Then you can use it in either a put:
- put: myCloud
  params:
    manifest: some-git-pull-request-output/manifest.yml
    path: some-git-pull-request-output
or another task within the same plan:
- task: build-code
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: yourRepo/yourImage
        tag: latest
    inputs:
      - name: some-git-pull-request-output
    run:
      path: bash
      args:
        - -exc
        - |
          cd some-git-pull-request-output/
          npm install
          gulp build
    outputs:
      - name: your-code-build-output
Hope it helps!

Concourse: Resource not found?

I am trying to build a Concourse pipeline which is triggered by git, and then runs a script in that git repository.
This is what I have so far:
resources:
  - name: component_structure_git
    type: git
    source:
      branch: master
      uri: git@bitbucket.org:foo/bar.git
jobs:
  - name: component_structure-docker
    serial: true
    plan:
      - aggregate:
          - get: component_structure_git
            trigger: true
      - task: do-something
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: { repository: ubuntu }
          inputs:
            - name: component_structure_git
          outputs:
            - name: updated-gist
          run:
            path: component_structure_git/run.sh
      - put: component_structure-docker
        params:
          build: component_structure/concourse
  - name: component_structure-deploy-test
    serial: true
    plan:
      - aggregate:
          - get: component_structure-docker
            passed: [component_structure-docker]
  - name: component_structure-deploy-prod
    serial: true
    plan:
      - aggregate:
          - get: component_structure-docker
            passed: [component_structure-docker]
When I apply this code with fly, everything is OK. When I try to run the build, it fails with the following error:
missing inputs: component_structure_git
Any idea what I am missing here?
Agree with the first answer. When running things in parallel (aggregate blocks) there are a few things to consider:
- How many inputs do I have? If more than one, run these get steps in an aggregate block.
- If I have two tasks, is there a dependency between them that can change the outcome of a task run, e.g. an output from one task that is required in the next task?
- If I have a sequence of put statements, run these steps in an aggregate block.
Just a guess, but aggregate is causing the issue: you can't have an input from something that is executing at the same time. Why do you have aggregate anyway? It is usually used for multiple "get" steps, to speed up the process.
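Following that guess, a sketch of the first job without the aggregate wrapper (same names as in the question), so the get finishes before the task consumes it as an input:

```yaml
jobs:
  - name: component_structure-docker
    serial: true
    plan:
      # plain get step - no aggregate needed for a single resource
      - get: component_structure_git
        trigger: true
      - task: do-something
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: { repository: ubuntu }
          inputs:
            - name: component_structure_git
          outputs:
            - name: updated-gist
          run:
            path: component_structure_git/run.sh
```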