Test coverage with SonarQube Bitbucket plugin in pull requests

I have the following setup:
Bamboo runs builds on every commit on feature branches, including the Bamboo SonarQube plugin.
Analysis, including test code coverage, displays in SonarQube.
We installed the SonarQube plugin on the Stash server, and it mostly works (quality gate, etc.), but code coverage does not show up in the diff in pull requests. What needs to be enabled for that to work?

You could use the Code Coverage plugin to integrate coverage information into Stash pull requests. In order to use it, you need to:
Install the plugin on your Bitbucket server
Publish coverage information from your branch build. You could use the Maven client or the Node.js client, depending on your tech stack.
That doesn't require SonarQube at all, since all the information is stored on Bitbucket itself.

Pull request analysis doesn't (yet?) handle code coverage; it only looks at new issues, and it can't include anything that is calculated server-side, such as inadequate test coverage.

Related

Rest api to get sonarqube info from bamboo build or vice versa

Is there any SonarQube API which can provide the build number of the Bamboo build, or any Bamboo API which can give SonarQube info? This would really help.
I have so far tried both sides, but I am surprised that both systems are quite clueless about each other. Why is it not possible for a build which ran SonarQube as one of its jobs to have any information about that? Also, SonarQube doesn't tell you which build actually triggered that analysis.
(Not sure I understand what exactly you are looking for, perhaps this ...? )
I don't believe you can relate a specific Activity (SonarQube analysis) to a specific build (Bamboo), just project to job.
You must have the SonarQube server configured in Bamboo.
When executing your job, you can add these optional sonar.links parameters to the analysis step:
sonar.links.homepage: Project home page.
sonar.links.ci: Continuous integration.
sonar.links.scm: Project source repository.
sonar.links.issue: Issue tracker.
Maybe also specify sonar.host.url=$SONAR_HOST_URL (where SONAR_HOST_URL is the global setting in Bamboo) in the analysis step parameters.
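For example, the analysis step parameters might end up looking like this (all URLs below are placeholders; point them at your own Bamboo, Bitbucket/Stash, and issue tracker instances):

    # example values only
    sonar.host.url=$SONAR_HOST_URL
    sonar.links.homepage=https://wiki.example.com/display/PROJ
    sonar.links.ci=https://bamboo.example.com/browse/PROJ-PLAN
    sonar.links.scm=https://stash.example.com/projects/PROJ/repos/api
    sonar.links.issue=https://jira.example.com/browse/PROJ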
Those populate the Project Overview page sidebar.
That should provide the links from SonarQube back to the other systems of interest.
If you have properly configured Bamboo, you should see a link in Bamboo to the SonarQube project, post execution.

How can I configure the JaCoCo plugin in the SonarQube server

I have a project where my SonarQube is on a remote server, and the server has a connection to the SVN repository.
Now I wish to add a JaCoCo plugin to SonarQube which will check out the project from SVN, build it, generate a code coverage report, and display it in my Sonar report.
Can anyone suggest what I should do for this?
Thanks in advance.
First of all, you are missing an important step. The SonarQube server will only display your report and your data, tell you whether or not you passed the quality gate, and show you your issues. It will not do the analyzing part.
For that you need to use a SonarQube scanner. There are multiple scanners available, as you can see here. Those scanners can be executed locally, or ideally will be integrated into your Continuous Integration pipeline, via Jenkins, Bamboo, TeamCity, etc. The scanner will analyze your project based on the plugins/sensors on your SonarQube server.
The scanner has to be configured to point to your server via the sonar.host.url property, and ideally you will have some login set up. More details can be found in the SonarQube documentation, which I highly suggest reading.
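As a rough sketch of how that fits together, assuming a CI job (Jenkins, Bamboo, etc.) with access to your SVN server and a Maven project; the URLs, paths, and token below are placeholders, and depending on your SonarQube Java plugin version you may also need a sonar.jacoco.* property pointing at the JaCoCo report:

    # CI job step sketch: check out from SVN, build with the JaCoCo agent, then analyze
    svn checkout http://svn.example.com/svn/my-project/trunk my-project
    cd my-project
    mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent verify sonar:sonar \
        -Dsonar.host.url=http://sonar.example.com:9000 \
        -Dsonar.login=$SONAR_TOKEN

The key point is that the checkout, build, and coverage generation happen in the CI job, not on the SonarQube server; the server only receives and displays the analysis results.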

How SonarQube works

I have a question about how analysis happens in SonarQube. When I do mvn sonar:sonar -Dsonar.host.url=http://sonar.com, what happens in the background? What I think happens is:
Maven will use some plugins and communicate with the SonarQube server
Load all the rules from the SonarQube server into the location where we run mvn sonar:sonar
Analyze the source code using the set of rules loaded from the SonarQube server
Push it back to the SonarQube database, and the results will be displayed on the SonarQube server
Is this the proper way it works, or does the source code go to the SonarQube server and the analysis happen on the server itself?
Thanks for the help
You've got it mostly right:
Maven will use some plugins and communicate with the SonarQube server
Load all the rules from the SonarQube server into the location where we run mvn sonar:sonar
Analyze the source code using the set of rules loaded from the SonarQube server
Calculate file-level metrics
Read coverage reports, if any
Compile the data into an analysis report and push it back to the SonarQube server
The server pops the uploaded report from the queue and integrates it, storing issues and calculating higher-level metrics
The user sees the updated project status on the project homepage
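In command form, the client side of that flow is just the scanner invocation; everything from the report upload onward happens on the server. A minimal sketch, with a placeholder server URL and token:

    # the analysis runs locally in the Maven build; the resulting report is then
    # uploaded to the server's processing queue
    mvn clean verify sonar:sonar \
        -Dsonar.host.url=http://sonar.example.com:9000 \
        -Dsonar.login=$SONAR_TOKEN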

Travis CI skipping SonarQube analysis

I'm trying to configure SonarQube analysis in a GitHub project. I've followed the official Travis CI documentation, but the SonarQube analysis is not performed.
I'm getting the following message: "Skipping SonarQube Scan because it is not running in a secure environment"
pull request link: https://github.com/zakshya/cronos/pull/4
Full build log: https://s3.amazonaws.com/archive.travis-ci.org/jobs/216125526/log.txt
Has anyone encountered this problem?
Am I missing some config?
As mentioned in the official documentation of the SonarQube Travis Add-on about analysing pull requests:
For security reasons, this advanced feature [i.e. pull request analysis] works only for internal pull requests. In other words, pull requests built from forks won't be inspected.

Jenkins Multi-Pipeline Build Not Detecting Changes in Repository

We have a Subversion repository set up in this manner:
http://svn.vegicorp.net/svn/toast/api/trunk
http://svn.vegicorp.net/svn/toast/api/1.0
http://svn.vegicorp.net/svn/toast/data/trunk
http://svn.vegicorp.net/svn/toast/data/branches/1.2
http://svn.vegicorp.net/svn/toast/data/branches/1.3
I've set up a Jenkins multi-pipeline build for the entire toast project, including all sub-projects (each sub-project is a jarfile). What I want is for Jenkins to fire off a new build each time any file is changed in one of the toast projects; that project should then rebuild. This way, if we create a new sub-project in toast or a new branch in one of the toast sub-projects, Jenkins will automatically create a new build for it.
Here's my Jenkins Multi-Branch setup:
Branch Sources
Subversion
Project Repository Base: http://svn.vegicorp.net/svn/toast
Credentials: builder/*****
Include Branches: */trunk, */branches/*
Exclude Branches: */private
Property Strategy: All branches get the same properties
Build Configuration
Mode: By Jenkinsfile
Build Triggers (None selected)
Trigger builds remotely (e.g., from scripts)
Build periodically
Build when another project is promoted
Maven Dependency Update Trigger
Periodically if not otherwise run
Note that the list of Build Triggers does not include Poll SCM. Changes in the repository do not trigger any build. Jenkinsfiles are located at the root of each sub-project. If I force a reindex, all changed sub-projects get built and all new branches are found. I originally checked Periodically and reindexed every minute to pick up changes, but that's klutzy and it seems to cause Jenkins to consume memory.
Triggering a build on an SCM change should be pretty basic, but I don't see a configuration parameter for this like I do with standard jobs. I also can't seem to go into sub-projects and set those to trigger builds either.
There must be something really, really simple that I am missing.
Configuration:
Jenkins 2.19
Pipeline 2.3
Pipeline API: 2.3
Pipeline Groovy: 2.17
Pipeline Job: 2.6
Pipeline REST API Plugin: 2.0
Pipeline Shared Groovy Libraries: 2.3
Pipeline: Stage View Plugin: 1.7
Pipeline: Supporting APIs 2.2
SCM API Plugin: 1.2
I finally found the answer. I found an entry in the Jenkins Jira database that mentioned this exact issue. The issue is called SCM polling is not being performed in multibranch pipeline with Mercurial SCM. Other users chimed in too.
The answer was that Jenkins Multi-branch projects don't need to poll the SCM because indexing the branches does that for you:
Branch projects (the children) do not poll in isolation. Rather, the multibranch project (the parent folder) subsumes that function as part of branch indexing. If there are new heads on existing branches, new branch project builds will be triggered. You need merely check the box Periodically if not otherwise run in the folder configuration.
So I need to set up reindexing of the branches. I'm not happy with this solution because it seems rather clumsy. I can add post-commit and post-push hooks in SVN and Git to trigger builds when a change takes place, and then reindex on a periodic basis (say, once per hour). The problem is that this means configuring those hooks and then keeping them up to date. Each project needs its own POST action, which means updating the repository server every time a project changes. With polling, I didn't have to worry about hook maintenance.
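For what it's worth, here is a sketch of such an SVN post-commit hook, modeled on the commit-notification endpoint of the Jenkins Subversion plugin (the Jenkins URL is a placeholder, and the exact endpoint may differ depending on your plugin version):

    #!/bin/sh
    # hooks/post-commit sketch: tell Jenkins a revision landed so the multibranch
    # folder can re-index and build without aggressive polling
    REPOS="$1"
    REV="$2"
    JENKINS_URL="https://jenkins.example.com"   # placeholder: your Jenkins base URL
    UUID=$(svnlook uuid "$REPOS")               # repository UUID identifies the repo to Jenkins
    wget \
      --header "Content-Type:text/plain;charset=UTF-8" \
      --post-data "$(svnlook changed --revision "$REV" "$REPOS")" \
      --output-document "-" \
      --timeout=2 \
      "$JENKINS_URL/subversion/$UUID/notifyCommit?rev=$REV"

With something like this in place, the periodic reindex only needs to run occasionally as a safety net rather than every minute.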
You never mentioned setting up a webhook for your repository, so this may be the problem (or part of it).
Jenkins by itself can't just know when changes to a repository have been made. The repository needs to be configured to broadcast when changes are made. A webhook defines a URL that the repository can POST various bits of information to. Point it to a URL that Jenkins can read, and that allows Jenkins to respond to specific types of information it receives.
For example, if you were using GitHub, you could have Jenkins listen on a URL such as https://my-jenkins.com/github-webhook/. GitHub could be configured to send a POST as soon as a PR is opened or a merge is performed. This POST not only signals that the action was performed, but also contains information about the action, such as the SHA, branch name, user performing the action, and so on.
Both Jenkins and SVN should be capable of defining the URL they each respectively POST and listen on.
My knowledge lies more specifically with git. But this may be a good place to start for SVN webhooks: http://help.projectlocker.com/knowledge_base/topics/how-do-i-use-subversion-webhooks
Maybe you need something under version control in the base directory. Try putting a test file at http://svn.vegicorp.net/svn/toast/test.txt. That may make the Poll SCM option show up.
