How to run a pipeline only on the HEAD commit in a GitLab CI runner? - continuous-integration

We have a CI pipeline on our repository, hosted in GitLab, and we set up gitlab-runner on our local machine. The pipeline runs 4 steps:
build
unit tests
integration tests
quality tests
The whole pipeline takes almost 20 minutes, and it triggers on each push to a branch.
Is there a way to configure the gitlab-runner so that if the HEAD of the branch it is currently running on changes, the pipeline will auto-cancel the run? Because only the latest version matters.
For example, in this run the lower run is unnecessary.
gitlab-ci.yml
stages:
  - build
  - unit_tests
  - unit_and_integration_tests
  - quality_tests

build:
  stage: build
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile

unit_and_integration_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_and_integration_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

unit_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_tests
  except:
    - /^milestone-.*$/
  script:
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

quality_tests:
  variables:
    GIT_STRATEGY: clone
  stage: quality_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT_EVAL=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
  cache: {}
Edit after @siloko's comment:
I already tried the Auto-cancel redundant, pending pipelines option in the settings menu, but I want to cancel running pipelines, not pending ones.

After further investigation, I found that I had 2 active runners on one of my machines: one shared runner and one specific runner. If I push 2 commits one after another to the same branch, both runners take the jobs and execute them.
That also explains why the Auto-cancel redundant, pending pipelines option didn't work: it only works when the same runner has pending jobs.
Action taken to fix this problem: unregister the specific runner and leave the machine with only the shared runner.
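For reference, GitLab can also cancel running (not just pending) jobs of superseded pipelines: with the Auto-cancel redundant pipelines setting enabled in Settings > CI/CD, jobs explicitly marked interruptible are cancelled when a newer pipeline starts on the same branch. A minimal sketch, assuming a reasonably recent GitLab version (the job shown is the build job from the question):

build:
  stage: build
  interruptible: true  # safe to cancel mid-run when the branch HEAD moves on
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile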

Related

PR Decoration not showing on Bitbucket Pull request page under Code Quality

When I create a pull request, after successful completion of the pipeline, the analysis results are shown on the SonarQube server, but an error shows on the Bitbucket Cloud pull request page, as shown in the image.
My bitbucket-pipelines.yml looks like:
image: maven:3-openjdk-11

definitions:
  steps:
    - step: &build-step
        name: SonarQube analysis
        caches:
          - maven
          - sonar
        script:
          - mvn verify sonar:sonar -Dsonar.projectKey=java_base_project
  caches:
    sonar: ~/.sonar

clone:
  depth: full

pipelines:
  branches:
    feature/sonarqube_integration: # or the name of your main branch
      - step: *build-step
    feature/sonarqube_integration_2: # or the name of your main branch
      - step: *build-step
    feature/sonarqube_integration_3: # or the name of your main branch
      - step: *build-step
  pull-requests:
    '**':
      - step: *build-step
From the command, I assume you are using SonarCloud. The feature you are referring to is Pull Request Decoration; it seems Sonar could not detect the pull request parameters from Bitbucket Pipelines, so try passing the pull request parameters manually:
mvn verify sonar:sonar -Dsonar.projectKey=java_base_project -Dsonar.pullrequest.key=12 -Dsonar.pullrequest.branch=feature/source-branch -Dsonar.pullrequest.base=master
Ref: https://docs.sonarqube.org/9.6/analysis/pull-request/
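In pull-request pipelines, Bitbucket exposes built-in variables that can supply those parameters so they don't have to be hardcoded. A minimal sketch of the pull-request step under that assumption (the project key is taken from the question):

pipelines:
  pull-requests:
    '**':
      - step:
          name: SonarQube PR analysis
          caches:
            - maven
            - sonar
          script:
            # BITBUCKET_PR_ID, BITBUCKET_BRANCH and BITBUCKET_PR_DESTINATION_BRANCH
            # are provided by Bitbucket Pipelines in pull-request builds.
            - >-
              mvn verify sonar:sonar
              -Dsonar.projectKey=java_base_project
              -Dsonar.pullrequest.key=$BITBUCKET_PR_ID
              -Dsonar.pullrequest.branch=$BITBUCKET_BRANCH
              -Dsonar.pullrequest.base=$BITBUCKET_PR_DESTINATION_BRANCH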

GitLab runner cache misses file after stage completes

Summary
My gitlab-ci.yml has 3 stages to deploy an application to an OKD pod. The application runs Spring Boot on tomcat:8. Sometimes the cache.zip is not updated after a stage completes, so the next stage can't run correctly.
Steps to reproduce
My gitlab-ci runs the following stages:
Stage 1: run test-compile ---> OK
Stage 2: package the war file as output for deploy ---> The GitLab CI log shows success, but cache.zip has no war file (only sometimes; other times it runs correctly)
Stage 3: deploy the war file to the pod ---> Because the war file does not exist in cache.zip, the script errors -> failed
.gitlab-ci.yml
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository
    - target
    - artifact

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify'
    - 'mvn package' # =====> this command generates the war file
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  script:
    - "mkdir -p artifact"
    - "cp ./target/*.war ./artifact/" # ======> Sometimes errors here because the previous stage did not add the war file to the cache
    - "oc start-build $APP"
    - "rm -rf ./target/* && rm -rf ./artifact/*" # Remove war & class files, cache only the m2 libs
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Actual behavior
Sometimes the cache does not contain the war file after the test stage completes (does this depend on the war file size?)
Expected behavior
The war file is added to the cache after the test stage, so the staging stage can deploy it
Relevant logs and/or screenshots
Job log:
Running with gitlab-runner 13.7.0 (943fc252)
on gitlab-runner-node1 y6awygsj
Preparing the "docker" executor
00:01
Using Docker executor with image openshift/origin-cli ...
Using locally found image version due to if-not-present pull policy
Using docker image sha256:7ebb6be01117a50344d63f77c385a13302afecd33480b97c36a518d4f5ebc25a for openshift/origin-cli with digest docker.io/openshift/origin-cli@sha256:509e052d0f2d531b666b7da9fa49c5558c76ce5d286456f0859c0a49b16d6bf2 ...
Preparing environment
00:00
Running on runner-y6awygsj-project-489-concurrent-0 via gitlab.runner.node1...
Getting source from Git repository
00:01
Fetching changes...
Reinitialized existing Git repository in /builds/my-project/.git/
Checking out b4c97428 as master...
Removing .m2/
Removing artifact/
Removing target/
Skipping Git submodules setup
Restoring cache
00:05
Checking cache for default-23...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:01
$ mkdir -p artifact
$ cp ./target/*.war ./artifact/
cp: cannot stat './target/*.war': No such file or directory
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Environment description
config.toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "gitlab-runner-node1"
  url = "https://gitlab.mycompany.vn/"
  token = "y6awygsj9zks18nU6PDt"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    dns = ["192.168.100.1"]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/mnt/nfs/nfsshare-gitlab/cache:/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
Used GitLab Runner version
Version: 13.7.0
Git revision: 943fc252
Git branch: 13-7-stable
GO version: go1.13.8
Built: 2020-12-21T13:47:06+0000
OS/Arch: linux/amd64
Possible fixes
Re-run the test stage until the cache contains the war file.
Let's go step by step.
First, regarding how to manage files between stages:
It's true that you can directly access files between jobs and stages if both run in the same environment, but that's not always the case (even if both runners use the same NFS share directory), and you should use artifacts for that.
When you define an artifact within a job, you're specifying a list of files that get attached to the job when it succeeds, fails, or always, depending on your configuration.
By default, all artifacts from previous stages are passed to each job, but you can use dependencies to define which jobs to fetch artifacts from.
So basically you should use the following .gitlab-ci.yml:
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify' # =====> verify already includes: validate, compile, test and package
  artifacts:
    paths:
      - target/[YOUR_APP_NAME].war
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  dependencies:
    - verify:jdk8
  script:
    - "mkdir -p artifact"
    - "cp ./target/[YOUR_APP_NAME].war ./artifact/"
    - "oc start-build $APP"
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Also, notice that I deleted the mvn package instruction, since verify already runs the package phase. I would recommend you take a look at the Build Lifecycle Basics of Maven.

GitHub Actions can't access Swift package

# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    #branches: [ main ]
  pull_request:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: macos-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - run: |
          pwd
          swift package init --type library
          xcodebuild clean
          xcodebuild test -project TimeFountain.xcodeproj -scheme TimeFountain CODE_SIGN_ENTITLEMENTS= DEVELOPMENT_TEAM= CODE_SIGNING_ALLOWED=NO
The problem is that it isn't accessing the Alamofire Swift Package, which I committed into the project. The error I'm getting is:
...
error: no such module 'Alamofire'
import Alamofire
^
CompileSwift normal x86_64
...
Error: Process completed with exit code 65.
The problem was test -project TimeFountain.xcodeproj; I changed it to use the workspace:
# This is a basic workflow to help you get started with Actions
name: CI

on:
  pull_request:
    branches:
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: macos-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - run: |
          pwd
          swift package init --type library
          xcodebuild clean
          xcodebuild test -workspace TimeFountain.xcworkspace -scheme TimeFountain CODE_SIGN_ENTITLEMENTS= DEVELOPMENT_TEAM= CODE_SIGNING_ALLOWED=NO
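If packages still fail to resolve on a clean runner, it may also help to resolve the Swift Package Manager dependencies explicitly before testing, so a resolution failure surfaces before the test build. A hedged sketch of an extra step (workspace and scheme names are the ones from the question):

      - run: |
          # Pre-resolve SPM packages; fails fast if a package can't be fetched.
          xcodebuild -resolvePackageDependencies -workspace TimeFountain.xcworkspace -scheme TimeFountain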

Azure DevOps agent must finish current build before taking another one

Situation
I'm working on an Azure DevOps Server 2020 with only one agent.
I have 2 build pipelines:
Build pipeline (yaml)
Merge pipeline (yaml)
Each pipeline contains multiple stages, each with only one job (because every task of a stage must run on the same agent).
Current behavior
If I run the two pipelines at the same time, the agent runs both in a "fake" parallel that makes both builds very slow.
Example of agent process order:
build-stage1, build-stage2, merge-stage1, merge-stage2, merge-stage3, merge-stage4, build-stage3...
Wanted behavior
This would not be unexpected if there were more agents than build executions, but that will never be my case. So I would prefer to lock the agent to the current build (as is built in with Jenkins).
Example of wanted agent process order:
build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest), merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
Is it possible to set the agent work attribution policy?
This occurs because a job is not added to the agent queue if it depends on something (and by default it depends on the previous stage). Using dependsOn: [] lets Azure DevOps know that it depends on nothing, so all jobs are added to the queue and executed in FIFO order.
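A minimal sketch of that idea (stage and job names are illustrative): marking every stage with dependsOn: [] removes the implicit dependency on the previous stage, so all jobs are queued immediately and a single agent drains them in FIFO order instead of interleaving with a later pipeline.

stages:
  - stage: Build_Stage1
    dependsOn: []  # no implicit dependency on a previous stage
    jobs:
      - job: Build1
        steps:
          - script: echo "build stage 1"
  - stage: Build_Stage2
    dependsOn: []  # queued right away instead of waiting for Build_Stage1
    jobs:
      - job: Build2
        steps:
          - script: echo "build stage 2"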
Agree with Dom.
Based on my test, I could reproduce this behavior, but I am afraid there is no method to lock the agent so that it completes one pipeline before taking another.
As a workaround:
You could use a pipeline trigger to trigger the merge pipeline.
Build Pipeline:
pool:
  name: Default

stages:
  - stage: Build_Stage1
    displayName: Stage1
    ....
  - stage: Build_Stage2
    displayName: Stage2
    dependsOn: Build_Stage1
    ....
  - stage: Build_Stage3
    displayName: Stage3
    dependsOn: Build_Stage2
    ....
Merge Pipeline:
resources:
  pipelines:
    - pipeline: TestTrigger
      source: ABC
      trigger:
        branches:
          - '*'

pool:
  name: Default

stages:
  - stage: Merge_Stage1
    displayName: Merge1
    ...
  - stage: Merge_Stage2
    displayName: Merge2
    dependsOn: Merge_Stage1
    ...
  - stage: Merge_Stage3
    displayName: Merge3
    dependsOn: Merge_Stage2
    ...
In this case, you could queue the Build Pipeline alone; the Merge Pipeline will then be triggered after the Build Pipeline completes.
The process: build-stage1, build-stage2, build-stage3, build-stage4, build-stage5(latest) -> Trigger -> merge-stage1, merge-stage2, merge-stage3, merge-stage4, merge-stage5(latest)
On the other hand, this requirement is valuable.
To get this feature, you could add a request for it on our UserVoice site, which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.

How to fail a GitLab pipeline that calls another pipeline via API?

I have 2 GitLab repos:
Project A
Integration tests for Project A
I want to stop the pipeline/build of Project A if the integration tests fail, but currently the Project A pipeline passes even if the integration tests fail.
My .gitlab-ci.yml for Project A defines these 7 stages:
stages:
  - build
  - test
  - publish
  - dev-deployment
  - staging-deployment
  - trigger-integration-tests
  - prod-deployment
The second-to-last stage (trigger-integration-tests) kicks off the integration tests project via a GitLab API call with curl:
trigger-integration-tests:
  stage: trigger-integration-tests
  image: ubuntu:16.04
  script:
    - apt-get update && apt-get install -y curl
    - "curl -X POST -F token=$INTEGRATION_TESTS_TOKEN -F variables[PROJECT_ID]=$CI_PROJECT_ID -F variables[BRANCH_NAME]=$CI_COMMIT_REF_NAME -F ref=master https://gitlab.mycompany.com/api/v4/projects/123/trigger/pipeline"
  allow_failure: false
  only:
    - master
I tried adding the allow_failure: false flag, but that didn't help, so I'm looking for more ideas.
I found the trigger-and-wait technique but wasn't sure if there's a simpler solution.
As answered on a previous question, you could do the following from the main project, using a Python/Bash script:
Trigger the integration tests pipeline (and capture the pipeline ID)
Poll the status of the pipeline, using the captured ID (which can be running, pending, failed, canceled or skipped)
Raise an exception / error if it has failed
See here for an example Python script to achieve this, and below for a sketch of the same loop as a CI job.
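A minimal sketch of that trigger-and-poll approach as a single GitLab CI job, assuming curl and jq are available, and where $API_READ_TOKEN is a hypothetical access token with read access to the integration tests project (the host, project ID and trigger token are the ones from the question):

trigger-integration-tests:
  stage: trigger-integration-tests
  image: ubuntu:16.04
  script:
    - apt-get update && apt-get install -y curl jq
    # Trigger the downstream pipeline and capture its ID from the JSON response.
    - PIPELINE_ID=$(curl -s -X POST -F token=$INTEGRATION_TESTS_TOKEN -F ref=master "https://gitlab.mycompany.com/api/v4/projects/123/trigger/pipeline" | jq -r '.id')
    # Poll until the downstream pipeline reaches a terminal status, and fail
    # this job (and therefore the Project A pipeline) if it did not succeed.
    - |
      while true; do
        STATUS=$(curl -s --header "PRIVATE-TOKEN: $API_READ_TOKEN" "https://gitlab.mycompany.com/api/v4/projects/123/pipelines/$PIPELINE_ID" | jq -r '.status')
        echo "Downstream pipeline $PIPELINE_ID is $STATUS"
        case "$STATUS" in
          success) exit 0 ;;
          failed|canceled|skipped) exit 1 ;;
        esac
        sleep 30
      done
  only:
    - master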
