I am using SonarQube and CircleCI for code quality scanning.
However, I don't know if it is possible to start up a SonarQube server on CircleCI and use it to run the scanner.
This is my current config.yaml:
version: 2.1

executors:
  scanner:
    docker:
      - image: openjdk:11

commands:
  check-code-quality:
    description: Check Code Quality
    parameters:
      sonar_server_url:
        type: string
        description: "URL of your SonarQube server. e.g.: http://my.sonarqube.server:9000"
        default: "$SONAR_SERVER"
      sonar_login:
        description: "Authentication key (sonar.login parameter) to access SonarQube and perform analysis"
        type: string
        default: "$SONAR_TOKEN"
      sonar_sources:
        description: "Where are the files located?"
        type: string
        default: "$SONAR_SOURCES"
    steps:
      - run:
          name: Install Sonarqube scanner
          command: |
            wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873.zip
            unzip sonar-scanner-cli-4.2.0.1873.zip
      - run:
          name: Run Sonarscanner
          command: |
            export SONAR_SCANNER_OPTS="-Xmx2048m"
            eval ./sonar-scanner-4.2.0.1873/bin/sonar-scanner \
              -Dsonar.projectKey=projectKey
              -Dsonar.host.url=<< parameters.sonar_server_url >> \
              -Dsonar.sources=<< parameters.sonar_sources >> \
              -Dsonar.login=<< parameters.sonar_login >>

jobs:
  check-code-job:
    executor: scanner
    steps:
      - check-code-quality

workflows:
  check-code-quality-flow:
    jobs:
      - check-code-job:
          context: lineclass
This is the error log from when the job is executed:
...
Caused by: java.lang.IllegalStateException: Fail to get bootstrap index from server
at org.sonarsource.scanner.api.internal.BootstrapIndexDownloader.getIndex(BootstrapIndexDownloader.java:42)
at org.sonarsource.scanner.api.internal.JarDownloader.getScannerEngineFiles(JarDownloader.java:58)
at org.sonarsource.scanner.api.internal.JarDownloader.download(JarDownloader.java:53)
at org.sonarsource.scanner.api.internal.IsolatedLauncherFactory.lambda$createLauncher$0(IsolatedLauncherFactory.java:76)
... 7 more
Caused by: java.net.ConnectException: Failed to connect to localhost/127.0.0.1:9000
at org.sonarsource.scanner.api.internal.shaded.okhttp.internal.connection.R...
This indicates that the SonarQube server is missing.
If you have experience running sonar-scanner on CircleCI, please help.
Thank you.
After changing the image to sonarqube:8.9-community and fixing the missing \ in the sonar-scanner command (at the end of the -Dsonar.projectKey line), it works.
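For reference, a sketch of the corrected run step — the only change from the config above is the trailing \ after the -Dsonar.projectKey line, which keeps the remaining -D flags on the same command line (the image change lives in the executor, not shown here):

- run:
    name: Run Sonarscanner
    command: |
      export SONAR_SCANNER_OPTS="-Xmx2048m"
      eval ./sonar-scanner-4.2.0.1873/bin/sonar-scanner \
        -Dsonar.projectKey=projectKey \
        -Dsonar.host.url=<< parameters.sonar_server_url >> \
        -Dsonar.sources=<< parameters.sonar_sources >> \
        -Dsonar.login=<< parameters.sonar_login >>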
Summary
My gitlab-ci.yml has 3 stages for deploying an application to an OKD pod.
The application runs Spring Boot on tomcat:8.
Sometimes cache.zip is not updated after a stage completes, so the next stage can't run correctly.
Steps to reproduce
My gitlab-ci runs the following stages:
Stage 1: run test-compile ---> OK
Stage 2: package the war file as output for deployment ---> The GitLab CI log shows success, but cache.zip has no war file (only sometimes; other times it runs correctly)
Stage 3: deploy the war file to the pod ---> Because the war file does not exist in cache.zip, the script errors out -> failed
.gitlab-ci.yml
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository
    - target
    - artifact

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify'
    - 'mvn package' # =====> this command generates the war file
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  script:
    - "mkdir -p artifact"
    - "cp ./target/*.war ./artifact/" # ======> sometimes errors at this line because the previous stage did not add the war file to the cache
    - "oc start-build $APP"
    - "rm -rf ./target/* && rm -rf ./artifact/*" # remove war & class files, only cache the m2 libs
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Actual behavior
Sometimes the cache does not contain the war file after the test stage completes (does this depend on the war file size?)
Expected behavior
The war file is added to the cache after the test stage so the staging stage can deploy it.
Relevant logs and/or screenshots
job log
Running with gitlab-runner 13.7.0 (943fc252)
on gitlab-runner-node1 y6awygsj
Preparing the "docker" executor
00:01
Using Docker executor with image openshift/origin-cli ...
Using locally found image version due to if-not-present pull policy
Using docker image sha256:7ebb6be01117a50344d63f77c385a13302afecd33480b97c36a518d4f5ebc25a for openshift/origin-cli with digest docker.io/openshift/origin-cli@sha256:509e052d0f2d531b666b7da9fa49c5558c76ce5d286456f0859c0a49b16d6bf2 ...
Preparing environment
00:00
Running on runner-y6awygsj-project-489-concurrent-0 via gitlab.runner.node1...
Getting source from Git repository
00:01
Fetching changes...
Reinitialized existing Git repository in /builds/my-project/.git/
Checking out b4c97428 as master...
Removing .m2/
Removing artifact/
Removing target/
Skipping Git submodules setup
Restoring cache
00:05
Checking cache for default-23...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:01
$ mkdir -p artifact
$ cp ./target/*.war ./artifact/
cp: cannot stat './target/*.war': No such file or directory
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Environment description
config.toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "gitlab-runner-node1"
  url = "https://gitlab.mycompany.vn/"
  token = "y6awygsj9zks18nU6PDt"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    dns = ["192.168.100.1"]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/mnt/nfs/nfsshare-gitlab/cache:/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
Used GitLab Runner version
Version: 13.7.0
Git revision: 943fc252
Git branch: 13-7-stable
GO version: go1.13.8
Built: 2020-12-21T13:47:06+0000
OS/Arch: linux/amd64
Possible fixes
Re-run the test stage until the cache contains the war file.
Let's go step by step.
First, regarding how to manage the files between stages.
It's true that you could directly access files between jobs and stages if both run in the same environment, but that's not always the case (even if both runners use the same NFS shared directory), so you should use artifacts for that.
When you define an artifact within a job, you're specifying a list of files that are attached to the job when it succeeds, fails, or always, depending on the configuration you have.
By default, all artifacts from previous stages are passed to each job, but in any case you can use dependencies to define which jobs you want to fetch artifacts from.
So basically you should use the following .gitlab-ci.yml:
image: openshift/origin-cli

stages:
  - build
  - test
  - staging

cache:
  paths:
    - .m2/repository

validate:jdk8:
  stage: build
  script:
    - 'mvn test-compile'
  only:
    - master
  image: maven:3.3.9-jdk-8

verify:jdk8:
  stage: test
  script:
    - 'mvn verify' # =====> verify already includes: validate, compile, test and package
  artifacts:
    paths:
      - target/[YOUR_APP_NAME].war
  only:
    - master
  image: maven:3.3.9-jdk-8

staging:
  dependencies:
    - verify:jdk8
  script:
    - "mkdir -p artifact"
    - "cp ./target/[YOUR_APP_NAME].war ./artifact/"
    - "oc start-build $APP"
  stage: staging
  variables:
    APP: $CI_PROJECT_NAME
  environment:
    name: staging
    url: http://$CI_PROJECT_NAME-staging.$OPENSHIFT_DOMAIN
  only:
    - master
Also, notice that I deleted the mvn package instruction. I would recommend taking a look at the Build Lifecycle Basics of Maven.
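For context, Maven's default lifecycle runs phases in a fixed order, so you can convince yourself that the separate mvn package was redundant:

# Maven's default lifecycle runs phases in order:
# validate -> compile -> test -> package -> verify
# so 'mvn verify' already produces the war without a separate 'mvn package'.
mvn verify
ls target/*.war   # the war file is already there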
Can you please help with my issue below?
I am running sonar-scanner using Cloud Build, taking advantage of Secret Manager, but I'm facing an issue.
I followed the same steps as https://cloud.google.com/cloud-build/docs/securing-builds/use-secrets
Here is my code:
steps:
  - name: 'gcr.io/$_PROJECT_ID/sonar-scanner:latest'
    entrypoint: 'bash'
    args:
      - '-c'
      - '-Dsonar.host.url=http://sonar:9000/'
      - '-Dsonar.login=$$USERNAME'
      - '-Dsonar.password=$$PASSWORD'
      - '-Dsonar.projectKey=$_BRANCH-analytics'
      - '-Dsonar.sources=.'
    secretEnv: ['USERNAME', 'PASSWORD']
    dir: 'analytics'
availableSecrets:
  secretManager:
    - versionName: projects/project-id/secrets/sonar_pass/versions/1
      env: 'PASSWORD'
    - versionName: projects/project-id/secrets/sonar_user/versions/2
      env: 'USERNAME'
tags: ['cloud-builders-community']
And the issue I am facing is:
bash: line 0: bash: -Dsonar.login=$USERNAME: invalid option name
ERROR
ERROR: build step 0 "gcr.io/project-id/sonar-scanner:latest" failed: step exited with non-zero status: 2
I tried different things but can't find a solution.
I would be grateful if you could help me with this.
Thank you.
I actually had the same problem as you. It is indeed quite important that you use entrypoint: 'bash' and '-c', otherwise Cloud Build doesn't recognise the variables from Secret Manager.
My cloudbuild.yaml step looks like this:
steps:
  - id: 'sonarQube'
    name: 'gcr.io/$PROJECT_ID/sonar-scanner:latest'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        sonar-scanner -Dsonar.host.url=<url> -Dsonar.login=$$SONARQUBE_TOKEN -Dsonar.projectKey=<project-key> -Dsonar.sources=.
    secretEnv: ['SONARQUBE_TOKEN']
availableSecrets:
  secretManager:
    - versionName: projects/<project-id>/secrets/sonarqube-token/versions/latest
      env: 'SONARQUBE_TOKEN'
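For what it's worth, the reason the original args list fails is that bash -c takes exactly one command string; any further list items become positional parameters rather than part of the command, so the -D flags never reach sonar-scanner (bash tries to interpret them itself, hence the invalid option message). A small illustration:

# Only the first argument after -c is the command string; the rest land in $0, $1, ...
bash -c 'echo "command saw: $0 $1"' foo bar
# prints: command saw: foo bar

That is why the step above folds the entire sonar-scanner invocation into a single block scalar after '-c'.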
I had some problems with the latest sonar-scanner image because it used Alpine. I got the following error: jre-bin-java-not-found, even though the image has Java. Because of this, I created my own Docker image based on Ubuntu instead of Alpine. You can find the image in a pull request.
I found this example of using sonar-scanner in Cloud Build. It seems that sonar-scanner should be used without bash.
I think you should remove entrypoint: 'bash' and '-c'.
A similar approach is in this SO question. It should solve this error.
I am using AWS CodePipeline for the first time and trying to figure out how to properly create my buildspec.yml file for my Laravel application. There are few resources on the internet.
I have the following in my buildspec.yml file currently:
version: 0.2

phases:
  install:
    commands:
      - curl -s https://getcomposer.org/installer | php
      - mv composer.phar /usr/local/bin/composer
      - php --version
  build:
    commands:
      - echo Build started on `date`
      - echo Installing composer deps
      - composer install
      - cp extra/.env ./
      - php artisan cache:clear
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  type: zip
  files:
    - '**/*'
  name: clyde-$(date +%Y-%m-%d)
The CodeBuild run is successful and this does deploy to Elastic Beanstalk. I did change the configuration in Elastic Beanstalk so the web root is /public (for Laravel). However, when I go to the URL, the first page presents an error like the one below:
View [inc\navbar] not found. (View: /var/app/current/resources/views/layouts/app.blade.php)
This leads me to believe something is not built properly.
To make it work, you will need to use a complete pipeline: CodeCommit --> CodeBuild --> CodeDeploy.
Inside your artifact bucket there will be two objects generated in the process:
s3://codepipeline-us-east-1-<001122334455>/SourceArtif/
s3://codepipeline-us-east-1-<001122334455>/BuildArtif/
The first one is obtained in the initial phase of the pipeline from CodeCommit.
The second one is created by CodeBuild. The resulting zip file will be exactly the same as the one from CodeCommit, so it seems CodeBuild is only testing, not saving an artifact with the results of the instructions specified in buildspec.yml.
In the third phase, CodeDeploy will obtain the code from the artifact and will need to build again via the scripts referenced in appspec.yml.
version: 0.0
os: linux
files:
  - source: /
    destination: /web/project/html
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/build_again.sh
      timeout: 600
      runas: user
  ApplicationStart:
    - location: scripts/start_application.sh
      timeout: 300
      runas: root
The build_again.sh file will need to include the same commands you are using in buildspec.yml (build section); then your Laravel project should work.
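As a rough sketch, build_again.sh could look something like this, assuming it simply mirrors the build section of the buildspec.yml above and the /web/project/html destination from the appspec (the commands and paths come from the asker's files; adjust to your layout):

#!/bin/bash
# Hypothetical build_again.sh: repeats the buildspec.yml build-section commands on the instance.
set -e
cd /web/project/html       # destination from appspec.yml
composer install           # same dependency install as in buildspec.yml
cp extra/.env ./           # same .env copy as in buildspec.yml
php artisan cache:clear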
I'm running the cypress-example-kitchensink app on CircleCI.
This is my YAML config:
version: 2.1
orbs:
  cypress: cypress-io/cypress@1.0.1
workflows:
  build:
    jobs:
      - cypress/install:
          build: 'npm run build'
      - cypress/run:
          requires:
            - cypress/install
          start: 'npm start'
This kicks off and passes just fine when I make a commit to my fork of the repo above.
However, when I try to execute a CircleCI build programmatically, using
curl -X POST https://circleci.com/api/v1.1/project/github/Atticus29/cypress-example-kitchensink?circle-token=myApiToken, the build fails and the jobs dashboard on CircleCI tells me that something is wrong with my config file:
6 schema violations found
required key [jobs] not found
workflows: 5 schema violations found
workflows: minimum size: [2], found: 1
workflows: build: jobs: 4 schema violations found
workflows: build: jobs: 0: 0 subschemas matched instead of one
workflows: build: jobs: 0: expected type: String, found: Mapping
workflows: build: jobs: 0: install: extraneous key [build] is not permitted
workflows: build: jobs: 1: 0 subschemas matched instead of one
workflows: build: jobs: 1: expected type: String, found: Mapping
workflows: build: jobs: 1: run: extraneous key [start] is not permitted
And that something went wrong with my build:
Build-agent version 0.1.1216-48f80d08 (2018-12-07T16:01:40+0000)
Configuration errors: 2 errors occurred:
* Configuration version 2.1 requires the "Enable Build Processing" project setting. Enable Build Processing under Project Settings -> Advanced Settings. In order to retrigger build processing, you must push a new commit.
* Cannot find a job named build to run in the jobs: section of your configuration file. If you expected a workflow to run, check your config contains a top-level key called 'workflows:'
I can confirm that Enable Build Processing is on.
None of these were problems when I ran the build in the usual way. Any advice?
CircleCI for some reason keeps assuming that projects are not set up for v2.0, despite config.yml being named correctly and living in the right place in the repo. After a few commits, this issue seems to go away.
I ended up running a build programmatically with the following script:
#!/bin/bash
PERSONAL_TOKEN=myPersonalTokenHere
# Fetch the build number of the most recent build via the recent-builds endpoint
MOST_RECENT_BUILD=`curl -s "https://circleci.com/api/v1.1/recent-builds?circle-token=$PERSONAL_TOKEN&limit=1" | grep 'build_num' | grep -o '\d.' | sed 's/,//g' | sort -r -n | head -n1`
# Retry that build
curl -X POST "https://circleci.com/api/v1.1/project/github/holmbergius/wildMeCypress/$MOST_RECENT_BUILD/retry?circle-token=$PERSONAL_TOKEN"
I have a SonarQube server running on Azure and a CI/CD pipeline configured using Google Cloud Build on GCP. Do you have an idea about how to include the SonarQube connection information in my cloudbuild file as a custom build step? I'm using Gradle to build and test my images.
There's a sonarqube community cloud builder: https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/sonarqube
There is an example of using it as a step here: https://github.com/GoogleCloudPlatform/cloud-builders-community/blob/master/sonarqube/examples/cloudbuild.yaml
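If you build that community image into your project, the analysis step should reduce to something like this sketch (based on the linked example; the host URL, token substitution, and project key below are placeholders, not values from the question):

steps:
  - name: 'gcr.io/$PROJECT_ID/sonar-scanner'
    args:
      - '-Dsonar.host.url=https://your-sonarqube.example.com'
      - '-Dsonar.login=${_SONAR_TOKEN}'
      - '-Dsonar.projectKey=your-project-key'
      - '-Dsonar.sources=.'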
The sample code below worked for me:
# static code analysis by sonarqube
- name: 'maven:3.6.1-jdk-8'
  entrypoint: 'bash'
  args:
    - -c
    - |
      unset MAVEN_CONFIG \
      && echo "104.199.71.165 sonarqube.ct.blue.cdtapps.com" > /etc/hosts \
      && mvn sonar:sonar -q -Dsonar.login=5531b1a2d571c0482a3d45f605830e08ccf5f245 \
        '-Dsonar.projectKey=odp.df.pubsub-sftp' \
        '-Dsonar.projectName=ODP-DF-PUBSUB-SFTP' \
        '-Dsonar.host.url=https://sonarqube.ct.blue.cdtapps.com' \
        '-Dsonar.qualitygate.wait=true' \
        'allow_failure: true'
  dir: 'dataflows/generic/pubsub-sftp/src'
  id: 'sonarqube-analysis'
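Since the question mentions Gradle, a comparable step might look like the following untested sketch; it assumes the org.sonarqube plugin is applied in your build.gradle and that a sonar_token secret exists in Secret Manager, and the image tag, URL, and secret names are placeholders:

steps:
  - name: 'gradle:6.9-jdk11'
    entrypoint: 'bash'
    args:
      - -c
      - |
        gradle sonarqube \
          -Dsonar.host.url=https://your-sonarqube.example.com \
          -Dsonar.login=$$SONAR_TOKEN
    secretEnv: ['SONAR_TOKEN']
availableSecrets:
  secretManager:
    - versionName: projects/your-project/secrets/sonar_token/versions/latest
      env: 'SONAR_TOKEN'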