PR Decoration not showing on Bitbucket Pull request page under Code Quality - sonarqube

When I create a pull request, the analysis results are shown on the SonarQube server after the pipeline completes successfully, but the Bitbucket Cloud pull request page shows an error under Code Quality, as shown in the image.
My bitbucket-pipelines.yml looks like this:
image: maven:3-openjdk-11

definitions:
  steps:
    - step: &build-step
        name: SonarQube analysis
        caches:
          - maven
          - sonar
        script:
          - mvn verify sonar:sonar -Dsonar.projectKey=java_base_project
  caches:
    sonar: ~/.sonar

clone:
  depth: full

pipelines:
  branches:
    feature/sonarqube_integration: # or the name of your main branch
      - step: *build-step
    feature/sonarqube_integration_2: # or the name of your main branch
      - step: *build-step
    feature/sonarqube_integration_3: # or the name of your main branch
      - step: *build-step
  pull-requests:
    '**':
      - step: *build-step

From the command, I assume you are using SonarCloud. The feature you are referring to is Pull Request Decoration. It seems Sonar could not detect the pull request parameters from Bitbucket Pipelines, so try passing them manually:
mvn verify sonar:sonar -Dsonar.projectKey=java_base_project -Dsonar.pullrequest.key=12 -Dsonar.pullrequest.branch=feature/source-branch -Dsonar.pullrequest.base=master
Ref: https://docs.sonarqube.org/9.6/analysis/pull-request/
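If you'd rather not hard-code those values, a possible variant of the build step (just a sketch, not from the SonarQube docs) reads them from Bitbucket's built-in variables BITBUCKET_PR_ID, BITBUCKET_BRANCH and BITBUCKET_PR_DESTINATION_BRANCH, which are only populated in pull-request pipelines:

definitions:
  steps:
    - step: &build-step-pr   # hypothetical second anchor, for the pull-requests section only
        name: SonarQube analysis (pull request)
        caches:
          - maven
          - sonar
        script:
          # BITBUCKET_PR_ID and BITBUCKET_PR_DESTINATION_BRANCH exist only in pull-request pipelines
          - mvn verify sonar:sonar -Dsonar.projectKey=java_base_project -Dsonar.pullrequest.key=$BITBUCKET_PR_ID -Dsonar.pullrequest.branch=$BITBUCKET_BRANCH -Dsonar.pullrequest.base=$BITBUCKET_PR_DESTINATION_BRANCH

You would then reference *build-step-pr under pipelines > pull-requests instead of *build-step.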

Related

Circle Ci with Heroku orb giving "no workflow" error

I want to use a CircleCI YAML pipeline to deploy to a Heroku app.
The YAML file I have right now is:
version: 2.1
orbs:
  heroku: circleci/heroku@0.0.10
jobs:
  heroku_deploy_review_app:
    executor: heroku/default
    steps:
      - checkout
      - heroku/install
      - heroku/deploy-via-git:
          app-name: $HEROKU_APP_NAME
workflows:
  heroku_deploy:
    jobs:
      - heroku_deploy_review_app:
          filters:
            branches:
              only:
                - test-123/test-heroku-orb
There are no issues on the syntax side of this YAML. However, when I run the pipeline, I see a "no workflow" error.
I am not sure what I am doing wrong because this config looks fine to me. I also checked it against the docs https://circleci.com/docs/deploy-to-heroku/ and https://circleci.com/developer/orbs/orb/circleci/heroku
First things first: you can only see this workflow if you are pushing your code to the test-123/test-heroku-orb branch, so please check whether you are mistakenly pushing to the main branch.
Secondly, you need to make a small adjustment to .circleci/config.yml:
version: 2.1
orbs:
  heroku: circleci/heroku@0.0.10
jobs:
  heroku_deploy_review_app:
    executor: heroku/default
    steps:
      - checkout
      - heroku/install
      - heroku/deploy-via-git:
          app-name: $HEROKU_APP_NAME
workflows:
  version: 2
  heroku_deploy:
    jobs:
      - heroku_deploy_review_app:
          filters:
            branches:
              only:
                - test-123/test-heroku-orb
The "no workflow" message means that your branch filter is not matched.
For your defined workflow to actually run, you'd need to push commits to the test-123/test-heroku-orb branch.
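To quickly confirm that the filter is the cause, one option (a sketch, not from the CircleCI docs) is to temporarily add the branch you are actually pushing to the only: list and check whether the workflow shows up:

workflows:
  heroku_deploy:
    jobs:
      - heroku_deploy_review_app:
          filters:
            branches:
              only:
                - test-123/test-heroku-orb
                - main   # assumption: replace with the branch you are actually pushing to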

Gitlab runner cache miss file after stage complete

Summary
My gitlab-ci.yml has 3 stages to deploy an application to an OKD pod.
The application runs Spring Boot on tomcat:8.
Sometimes cache.zip is not updated after a stage completes, so the next stage cannot run correctly.
Steps to reproduce
My gitlab-ci runs the following stages:
Stage 1: run test-compile ---> OK
Stage 2: package the war file as output for deploy ---> The GitLab CI log shows success, but cache.zip has no war file (this only happens sometimes; other times it runs correctly)
Stage 3: deploy the war file to the pod ---> Because the war file does not exist in cache.zip, the script errors -> failed
.gitlab-ci.yml
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository
    - target
    - artifact

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify'
    - 'mvn package' # =====> this command generates the war file
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  script:
    - "mkdir -p artifact"
    - "cp ./target/*.war ./artifact/" # ======> Sometimes errors at this line because the previous stage did not add the war file to the cache
    - "oc start-build $APP"
    - "rm -rf ./target/* && rm -rf ./artifact/*" # Remove war & class files, only cache m2 libs
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Actual behavior
Sometimes the cache does not contain the war file after the test stage completes (does this depend on the war file size?)
Expected behavior
The war file is added to the cache after the test stage so that the staging stage can deploy it.
Relevant logs and/or screenshots
job log
Running with gitlab-runner 13.7.0 (943fc252)
on gitlab-runner-node1 y6awygsj
Preparing the "docker" executor
00:01
Using Docker executor with image openshift/origin-cli ...
Using locally found image version due to if-not-present pull policy
Using docker image sha256:7ebb6be01117a50344d63f77c385a13302afecd33480b97c36a518d4f5ebc25a for openshift/origin-cli with digest docker.io/openshift/origin-cli@sha256:509e052d0f2d531b666b7da9fa49c5558c76ce5d286456f0859c0a49b16d6bf2 ...
Preparing environment
00:00
Running on runner-y6awygsj-project-489-concurrent-0 via gitlab.runner.node1...
Getting source from Git repository
00:01
Fetching changes...
Reinitialized existing Git repository in /builds/my-project/.git/
Checking out b4c97428 as master...
Removing .m2/
Removing artifact/
Removing target/
Skipping Git submodules setup
Restoring cache
00:05
Checking cache for default-23...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:01
$ mkdir -p artifact
$ cp ./target/*.war ./artifact/
cp: cannot stat './target/*.war': No such file or directory
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Environment description
config.toml
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "gitlab-runner-node1"
url = "https://gitlab.mycompany.vn/"
token = "y6awygsj9zks18nU6PDt"
executor = "docker"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
dns = ["192.168.100.1"]
tls_verify = false
image = "alpine:latest"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/mnt/nfs/nfsshare-gitlab/cache:/cache"]
shm_size = 0
pull_policy = "if-not-present"
Used GitLab Runner version
Version: 13.7.0
Git revision: 943fc252
Git branch: 13-7-stable
GO version: go1.13.8
Built: 2020-12-21T13:47:06+0000
OS/Arch: linux/amd64
Possible fixes
Re-run the test stage until the cache has the war file
Let's go step by step.
First, regarding how to manage files between stages:
It's true that you could directly access files between jobs and stages if both run in the same environment, but that's not always the case (even if both runners use the same NFS share directory), and you should use artifacts for that.
When you define an artifact within a job, you're specifying a list of files that get attached to the job when it succeeds, fails, or always, depending on the configuration you have.
By default, all artifacts from previous stages are passed to each job, but in any case you can use dependencies to define which jobs to fetch artifacts from.
So basically you should use the following .gitlab-ci.yml:
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify' # =====> verify already includes: validate, compile, test and package
  artifacts:
    paths:
      - target/[YOUR_APP_NAME].war
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  dependencies:
    - verify:jdk8
  script:
    - "mkdir -p artifact"
    - "cp ./target/[YOUR_APP_NAME].war ./artifact/"
    - "oc start-build $APP"
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Also, notice that I removed the mvn package instruction. I'd recommend taking a look at the Build Lifecycle Basics of Maven.
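As a side note (a small sketch on top of the answer above, not part of it), artifacts also accept an expire_in setting if you only need the war long enough for the staging job to pick it up:

verify:jdk8:
  stage: test
  script:
    - 'mvn verify'
  artifacts:
    paths:
      - target/[YOUR_APP_NAME].war
    expire_in: 1 week   # keep the artifact only as long as the staging deployment needs it
  only:
    - master
  image: maven:3.3.9-jdk-8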

GitLab SonarQube CI/CD variables are not passed to the Pipeline

I was trying to integrate GitLab CI/CD with SonarQube 8.1 based on the following documentation: https://docs.sonarqube.org/latest/analysis/gitlab-cicd/.
I tried to use the SonarScanner for Maven sample configuration:
image: maven:latest

variables:
  SONAR_TOKEN: "your-sonarqube-token"
  SONAR_HOST_URL: "http://your-sonarqube-url"
  GIT_DEPTH: 0

sonarqube-check:
  script:
    - mvn verify sonar:sonar -Dsonar.qualitygate.wait=true
  allow_failure: true
  only:
    - merge_requests
    - master
The problem is that SONAR_HOST_URL, and probably SONAR_TOKEN as well, seem to be ignored for no clear reason. Looking at the pipeline log I get:
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.6.0.1398:sonar (default-cli) on project sonar-java-test: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/0:0:0:0:0:0:0:1:9000: Connection refused (Connection refused) -> [Help 1]
I tried to work around this by setting the variables as GitLab (12.3.2) CI/CD variables, but it doesn't work.
Any ideas?
The documentation seems to be out of date.
You should add the -Dsonar.host.url and -Dsonar.login arguments to the Maven command to override the default settings:
image: maven:latest

variables:
  SONAR_TOKEN: "your-sonarqube-token"
  SONAR_HOST_URL: "http://your-sonarqube-url"
  GIT_DEPTH: 0

sonarqube-check:
  script:
    - mvn verify sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN
  allow_failure: true
  only:
    - merge_requests
    - master
The Protected toggle in the GitLab > Variables screenshot refers to protected branches and tags; it does not mean the variable's value is protected.
It is the Masked toggle that prevents the variable from being leaked into logs etc.
GitLab CI understands the variables SONAR_TOKEN and SONAR_HOST_URL.
Reference: https://docs.sonarqube.org/latest/analysis/gitlab-cicd/
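If you want to verify that the variables actually reach the job before overriding anything, a throwaway check like this could help (just an illustrative sketch, not from the SonarQube docs):

sonarqube-check:
  script:
    # confirm the variables are present; their values stay hidden if Masked is enabled
    - 'test -n "$SONAR_HOST_URL" && echo "SONAR_HOST_URL is set" || echo "SONAR_HOST_URL is empty"'
    - 'test -n "$SONAR_TOKEN" && echo "SONAR_TOKEN is set" || echo "SONAR_TOKEN is empty"'
    - mvn verify sonar:sonar -Dsonar.qualitygate.wait=true -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN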

how to run pipeline only on HEAD commit in GitlabCi runner?

We have a CI pipeline on our repository, hosted in GitLab, and we set up gitlab-runner on our local machine.
The pipeline runs 4 steps:
build
unit tests
integration tests
quality tests
The whole pipeline takes almost 20 minutes, and it is triggered on each push to a branch.
Is there a way to configure the gitlab-runner so that, if the HEAD of the branch the runner is currently building on changes, the pipeline will auto-cancel the run? Because only the latest version matters.
For example, in this run the lower run is unnecessary.
gitlab-ci.yml
stages:
  - build
  - unit_tests
  - unit_and_integration_tests
  - quality_tests

build:
  stage: build
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile

unit_and_integration_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_and_integration_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

unit_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_tests
  except:
    - /^milestone-.*$/
  script:
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

quality_tests:
  variables:
    GIT_STRATEGY: clone
  stage: quality_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT_EVAL=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
  cache: {}
Edit after #siloko's comment:
I already tried the "Auto-cancel redundant, pending pipelines" option in the settings menu; I want to cancel running pipelines, not pending ones.
After further investigation, I found that I had 2 active runners on one of my machines: one shared runner and one specific runner. If I push 2 commits one after another to the same branch, both runners pick up the jobs and execute them.
That also explains why the "Auto-cancel redundant, pending pipelines" option didn't work: it only applies when the same runner has pending jobs.
Action taken to fix this problem: unregister the specific runner and leave the machine with only the shared runner.
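For reference, removing the extra runner can be done with the gitlab-runner CLI on the machine that hosts it; a minimal sketch (the runner name below is a placeholder, use whatever gitlab-runner list shows):

# list the runners registered on this machine
sudo gitlab-runner list
# unregister the specific runner by name, keeping the shared one
sudo gitlab-runner unregister --name "my-specific-runner"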

How to fail Gitlab pipeline that calls another pipeline via API?

I have 2 Gitlab repos:
Project A
Integration tests for Project A
I want to stop the pipeline / build of Project A if the integration tests fail, but currently the Project A pipeline passes even if the integration tests fail.
My .gitlab-ci.yml for Project A defines these 7 stages:
stages:
  - build
  - test
  - publish
  - dev-deployment
  - staging-deployment
  - trigger-integration-tests
  - prod-deployment
The second-to-last stage (trigger-integration-tests) kicks off the integration tests project using a GitLab API call with curl:
trigger-integration-tests:
  stage: trigger-integration-tests
  image: ubuntu:16.04
  script:
    - apt-get update && apt-get install -y curl
    - "curl -X POST -F token=$INTEGRATION_TESTS_TOKEN -F variables[PROJECT_ID]=$CI_PROJECT_ID -F variables[BRANCH_NAME]=$CI_COMMIT_REF_NAME -F ref=master https://gitlab.mycompany.com/api/v4/projects/123/trigger/pipeline"
  allow_failure: false
  only:
    - master
I tried adding the allow_failure: false flag, but that didn't help, so I'm looking for more ideas.
I found the trigger-and-wait technique but wasn't sure if there's a simpler solution.
As answered on a previous question, you could do the following from the main project, using a Python/Bash script:
Trigger the integration tests pipeline (and capture the pipeline ID)
Poll the status of that pipeline, using the captured ID (the status can be running, pending, success, failed, canceled or skipped)
Raise an exception / error if it has failed
See here for an example Python script that achieves this.
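For completeness, a rough Bash sketch of that trigger-and-poll loop could look like the one below. It is untested, assumes jq is installed alongside curl in the job image, and uses a hypothetical $GITLAB_API_TOKEN CI/CD variable holding an access token with read access to the integration-tests project:

# Trigger the integration-tests pipeline and capture its ID
PIPELINE_ID=$(curl -s -X POST \
  -F "token=$INTEGRATION_TESTS_TOKEN" \
  -F "ref=master" \
  -F "variables[PROJECT_ID]=$CI_PROJECT_ID" \
  -F "variables[BRANCH_NAME]=$CI_COMMIT_REF_NAME" \
  "https://gitlab.mycompany.com/api/v4/projects/123/trigger/pipeline" | jq -r '.id')

# Poll the pipeline until it reaches a terminal status, then propagate the result
while true; do
  STATUS=$(curl -s --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN" \
    "https://gitlab.mycompany.com/api/v4/projects/123/pipelines/$PIPELINE_ID" | jq -r '.status')
  case "$STATUS" in
    success) echo "Integration tests passed"; exit 0 ;;
    failed|canceled|skipped) echo "Integration tests ended with status: $STATUS"; exit 1 ;;
    *) sleep 30 ;;
  esac
done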
