Work around sonar.analysis.mode parameter being deprecated - maven

I am using SonarQube to analyse my code (a Maven Java project). Every time I run an analysis, a report is generated and published to my SonarQube server. However, I would like the analysis to be published only when I run it on the master branch (if I run the analysis on any other branch, the report should not be placed on the server, but the analysis still needs to be done).
I know that there used to be a sonar.analysis.mode parameter that did precisely what I want. However, from this post's discussions I learnt that it has been deprecated since v7.4 (I use v7.9+). From the same post I learnt about the branch analysis method (the supposed 'alternative'), but if I understood correctly, the analysis report still gets placed on the server, albeit for a short time. I am afraid that is still not good enough (unless I misunderstood, or it can be configured to behave like sonar.analysis.mode).
My question is then as follows: was a workaround ever found that does what sonar.analysis.mode used to do? Maybe there is an API parameter that would prevent the analysis report from being placed on the server (or can one delete the analysis report without admin privileges, but still retain the analysis information somehow)?
I am aware that I am essentially asking for functionality that was removed for the reasons provided here. As such, any alternatives I could look into would also suffice (though I would still really like a workaround using the SonarQube API or something similar).
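To make the idea concrete, the kind of workaround I have in mind would look something like this (the project key, server URL, and token variable are placeholders; note that api/projects/delete needs project admin rights, which is exactly the limitation I'd like to avoid):

    # Analyse non-master branches under a throwaway project key,
    # then delete that project via the Web API afterwards.
    mvn sonar:sonar -Dsonar.projectKey=my:app-throwaway
    curl -s -u "$SONAR_TOKEN:" -X POST \
        "https://sonar.example.com/api/projects/delete?project=my:app-throwaway"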

Related

Can SonarQube perform retrospective analysis of past commits?

I am considering implementing a code quality tool for our team's projects.
SonarQube seems to be a good choice. I haven't figured out the ideal workflow yet (we use SVN and Maven projects, and have a Jenkins server running the tests on every commit).
Aside from the importance of being able to analyse the quality of the current commit, historical evolution is also very interesting.
Given that we already have a few years of commits, is it possible, when setting up the project, to request a retrospective analysis of those commits, or will SonarQube only work for the commits from the day it is installed onwards?
SonarQube only displays data uploaded by scanners. You can check out any commit (read more here: How to checkout a specific Subversion revision from the command line?) and then execute a scanner; for a Maven build, see the sketch after this list. Which scanner you use depends on your build tool:
Ant
Gradle
Maven
MSBuild
Other
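A minimal sketch of that flow, assuming a Maven project, a made-up SVN repository URL, and a SonarQube server on localhost:

    # Check out a specific revision, then run the Maven scanner against it.
    svn checkout -r 1234 https://svn.example.com/repo/trunk myproject
    cd myproject
    # sonar.host.url points the scanner at your SonarQube server.
    mvn clean verify sonar:sonar -Dsonar.host.url=http://localhost:9000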
The analysis result will be pushed to the SonarQube server. Unfortunately, it is always treated as the latest version of the application, so you cannot "insert" an analysis of some old commit into the project history. But do you really need to? Scanners always analyze all sources. If somebody added some code three years ago and nobody deleted it, it will still be available on the server. If the code was deleted, then you shouldn't spend time analyzing something that doesn't exist anymore. That's why SonarQube always shows the last state of the project.
You can read a good blog post written by Fabrice Bellingard (April 06, 2016): Stop planning; fix the leak!
Read more about SonarQube Architecture and Integration.
Yes, this is possible using the sonar.projectDate analysis parameter. Its purpose is precisely what you are asking for.
Quote from the docs:
Assign a date to the analysis. This parameter is only useful when you need to retroactively create the history of a not-analyzed-before project. The format is yyyy-MM-dd, for example: 2010-12-01. Since you cannot perform an analysis dated prior to the most recent one in the database, you must recreate your project history in chronological order, oldest first.
You could for example check out your last 10 version tags in chronological order (oldest first!). For each tag run the analysis with sonar.projectDate set to the date the tag was created.
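A rough sketch of that loop, assuming an SVN layout with one directory per tag (the repository URL and tag names are placeholders) and a Maven project at the tag root:

    # Replay tagged releases into SonarQube history, oldest first.
    for tag in 1.0 1.1 2.0; do
        svn checkout "https://svn.example.com/repo/tags/$tag" "checkout-$tag"
        cd "checkout-$tag"
        # Date the analysis with the tag's last-changed date (yyyy-MM-dd).
        tag_date=$(svn info --show-item last-changed-date . | cut -c1-10)
        mvn sonar:sonar -Dsonar.projectDate="$tag_date"
        cd ..
    done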

Is interaction between a widget plugin and SonarQube server to do analysis possible?

Specifically, I want to run some analysis on the issues generated and the rules violated. I want to design the system so that a few click actions in Sonar's web interface initiate the analysis in the back-end. The reason for this is that the analyses are run only in some cases, and I don't want the instance with my plugin to carry additional load during each run.
Also, if possible, could you point me in the right direction?
I couldn't find anything here: http://docs.sonarqube.org/display/DEV/Web+API
Please note that I have only started researching this very recently and am also new to SonarQube.
The SonarQube server only presents analysis results. What I was looking for was for the server to give the user control over running the analysis. But using independent plugins to do static analysis could work.
I solved this issue by splitting the plugin into
A plugin for doing inline analysis work (if a need for doing our own static analysis arises)
A WebApp to classify issues, listing them by projects, etc.
This approach seemed more attractive when, after discussion, we decided that everything we wanted our widget to do, our WebApp itself could do. From SonarQube version 6.2, the Ruby APIs are going to be deprecated, so moving to a REST-based approach gives a more enduring solution.
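As a minimal illustration of the REST-based approach (the server URL and project key are placeholders, and anonymous read access is assumed), the issues can be pulled from the Web API and post-processed in the WebApp:

    # Fetch up to 100 issues for one project via the REST Web API.
    # Returns a JSON document with an "issues" array to post-process.
    curl -s "https://sonar.example.com/api/issues/search?componentKeys=my:project&ps=100"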

Bugzilla integration with SonarQube

I want to set up a build automation environment in which code is statically analysed first, and the issues identified by the static analysis are raised as bugs.
For static analysis I am using SonarQube, and for bugs I am using Bugzilla.
Is there a Bugzilla plugin available for SonarQube, so that once issues are identified they can be raised in Bugzilla directly?
Not only is this not available, it's not recommended, for a couple of reasons.
First, not every issue raised by SonarQube should be an individual work ticket:
some will be resolved as Won't Fix - i.e. valid issues but not relevant for your context.
some issues can be handled en masse, so rather than creating a ticket per, say, naming convention violation, you would create one ticket to fix all naming convention violations
some issues - very few - will be false positives.
Further, even if there were a SonarQube plugin to create work tickets for issues, the other side of the integration would be missing. I.e. if I comment on a work ticket in Bugzilla, I might reasonably expect that comment to show up in SonarQube as well. And it wouldn't.
In short, this type of integration would be an exercise in frustration for all involved - either immediately or eventually.

Using Sonar in a pretty big team

We have about 20 people on our team, and for now we are using Sonar to analyse new code before submitting it to the mainline. So each designer uses their own Sonar instance installed on their machine.
What I'm trying to do is create one Sonar instance that each designer will be able to use. The only concern I have is what will happen if:
One designer launches an analysis on one revision of a file, and right after that a second designer launches an analysis on another revision of the same file (in the worst case we could have a bunch of such files). The first designer won't be able to see his violations, and won't be able to see the code he wrote at all. Do we have some mechanism to overcome this?
What will happen if two designers analyse the same project at the same time? AFAIK, Sonar won't allow them to do so. Any workaround for this?
Of course, we could somehow create a project on the Sonar side for each team member, but this has its drawbacks: for example, issues marked as false positive in one project won't appear as such in another project, and so on.
Any ideas on these issues?
What you probably want to set up is:
a central Sonar instance that analyses the code base on a regular basis (for instance every day), based on the code located in the repository. This instance should be the reference, and the project manager(s) will use it to monitor the project.
ask the developers to run local analyses before committing their code:
either using Sonar Eclipse if you're coding in Java, C++ or Python. Everything is perfectly described in the documentation, more precisely the "Checking code prior to commit" section
or using the Issues Report plugin if your language is not supported yet in Sonar Eclipse (see the sketch after this list).
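For the Issues Report route, a local analysis on Sonar versions of that era could look roughly like this (a sketch only: sonar.analysis.mode=preview and the Issues Report properties were removed in later SonarQube versions, as discussed at the top of this page):

    # Local preview analysis: nothing is pushed to the server; an HTML
    # issues report is generated instead (requires the Issues Report plugin).
    mvn sonar:sonar \
        -Dsonar.analysis.mode=preview \
        -Dsonar.issuesReport.html.enable=true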

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that;
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes not affecting the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source code unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson), and I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to clarify what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Consider that I am a Test-Driven-Development-oriented developer, so I write the unit, DBUnit, and user-acceptance (for example, using Selenium) tests correspondingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook script that triggers the continuous integration tool to build the project (P). While building, the test code should be run in the proper phase and the test reports generated. The user-acceptance tests should be performed on a dedicated server (for example, JBoss or Tomcat): bring the server up, run the UA tests, generate the UA test reports, and bring the server down. If all these steps complete successfully, the versioning should be performed. In the versioning part, a Maven plugin (or whatever else) should take the number of issues resolved from the issue tracking system and increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That's the whole auto-cycled process I want, as sketched below.
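The versioning fragment is the part that usually needs custom scripting. A rough sketch of how the issue counts could be pulled from JIRA's REST API (the server URL, project key PROJ, JQL filters, and the BUILD_NUMBER variable are all assumptions to adapt):

    # Count resolved issues per type via JIRA's REST API search endpoint.
    BASE="https://jira.example.com/rest/api/2/search"
    count() {
        curl -s -u "$JIRA_USER:$JIRA_PASS" -G "$BASE" \
            --data-urlencode "jql=$1" --data-urlencode "maxResults=0" \
            | grep -o '"total":[0-9]*' | cut -d: -f2   # crude JSON scrape
    }
    MINOR=$(count 'project = PROJ AND issuetype in ("New Feature", Improvement) AND status = Resolved')
    REVISION=$(count 'project = PROJ AND issuetype = Bug AND status = Resolved')
    echo "1.$MINOR.$REVISION.$BUILD_NUMBER"   # major fragment stays manual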
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question. On automatic deployment to production: this requires the sign-off of "the business", whoever that is. How good do your tests need to be before you automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to hold (which I doubt they do, but if you get there, that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug IDs. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally, you'll need your own bit of scripting to do the query and update the build number accordingly, along the lines of the sketch below.
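For instance, the "annotate and parse" step might look like this (a sketch only: SVN is assumed, and the repository URL, revision range, and PROJ-<n> key format are made up):

    # Extract JIRA issue keys (e.g. PROJ-123) from the commit messages in
    # a revision range, so the build can be associated with those issues.
    svn log -r "$FROM_REV:$TO_REV" https://svn.example.com/repo/trunk \
        | grep -oE 'PROJ-[0-9]+' | sort -u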
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?

Resources