Can SonarQube perform retrospective analysis of past commits?

I am considering implementing a code quality tool for our team's projects.
SonarQube seems to be a good choice. I haven't figured out the ideal workflow yet (we use SVN and Maven projects and have a Jenkins server running the tests on every commit).
Aside from being able to analyse the quality of the current commit, the historical evolution is also very interesting.
Given that we already have a few years of commits, is it possible, when setting up the project, to request a retrospective analysis of those commits, or will SonarQube only work for commits from the day it is installed onwards?

SonarQube only displays data uploaded by scanners. You can check out any commit (read more here: How to checkout a specific Subversion revision from the command line?) and then run a scanner. Which scanner to use depends on your build tool:
Ant
Gradle
Maven
MSBuild
Other
The analysis result will be pushed to the SonarQube server. Unfortunately, it is always treated as the latest version of the application, so you cannot "insert" analyses of old commits into the project history. But do you really need that? Scanners always analyze all sources. If somebody added some code three years ago and nobody has deleted it, it will still be available on the server. If the code has been deleted, you shouldn't spend time analyzing something that doesn't exist anymore. That's why SonarQube always shows the latest state of the project.
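For the basic checkout-and-scan step itself, here is a minimal sketch, assuming an SVN repository and a Maven project with the SonarQube Maven plugin configured (the revision number and URLs are placeholders):
    # Sketch: analyze one past revision; the result appears on the server as
    # the project's latest snapshot. Revision number and URLs are placeholders.
    svn checkout -r 1234 https://svn.example.com/myproject/trunk myproject-r1234
    cd myproject-r1234
    mvn sonar:sonar -Dsonar.host.url=http://localhost:9000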
You can read a good blog post written by Fabrice Bellingrad (April 06, 2016): Stop planning; fix the leak!
Read more about SonarQube Architecture and Integration.

Yes, this is possible using the sonar.projectDate analysis parameter. Its purpose is precisely what you are asking for.
Quote from the docs:
Assign a date to the analysis. This parameter is only useful when you need to retroactively create the history of a not-analyzed-before project. The format is yyyy-MM-dd, for example: 2010-12-01. Since you cannot perform an analysis dated prior to the most recent one in the database, you must recreate your project history in chronological order, oldest first.
You could for example check out your last 10 version tags in chronological order (oldest first!). For each tag run the analysis with sonar.projectDate set to the date the tag was created.
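For instance, a minimal sketch of that loop, assuming an SVN repository with version tags and a Maven build (the tag names, dates, and URLs below are placeholders):
    # Sketch only: tag names, dates, and URLs are placeholders.
    REPO=https://svn.example.com/myproject
    for entry in "1.0 2014-03-01" "1.1 2014-09-15" "2.0 2015-04-02"; do
      set -- $entry; TAG=$1; DATE=$2
      svn checkout "$REPO/tags/$TAG" "checkout-$TAG"
      (cd "checkout-$TAG" && mvn sonar:sonar \
          -Dsonar.projectDate="$DATE" \
          -Dsonar.host.url=http://localhost:9000)
    done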

Related

How to get new code coverage in Sonarqube?

When I say "new code", I mean comparing two commits on the master branch and getting the code added between them.
I am confused about the new code coverage in SonarQube. I want to specify the comparison of two commit IDs.
Check in the analysis parameters whether sonar.projectDate can help:
Retrieve the oldest version of your application's source that you wish to populate into the history (from a specific tag, whatever).
Run a SonarQube analysis on this project by setting the sonar.projectDate property. Example: sonar-scanner -Dsonar.projectDate=2010-12-01
Retrieve the next version of the source code of your application, update the sonar.projectDate property, and run another analysis. And so on for all the versions of your application you're interested in.
If your commits are done on different days, that could work.
This is not as precise as two commit IDs, but it can still help here.
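As a rough illustration of that idea (assuming a Git checkout and the standalone sonar-scanner; the commit IDs and project key are placeholders), each analysis can be dated from the commit itself:
    # Sketch: analyze two specific commits in chronological order, dating each
    # analysis from the commit's own timestamp. Commit IDs and the project key
    # are placeholders.
    for COMMIT in 3f2a1bc 9d8e7f0; do
      git checkout "$COMMIT"
      DATE=$(git show -s --format=%cd --date=format:%Y-%m-%d "$COMMIT")
      sonar-scanner -Dsonar.projectKey=my-project -Dsonar.projectDate="$DATE"
    done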
See "SonarQube - unity tests code coverage on new code not working" (if this has not changed since SonarQube 5.x)

Is it possible for sonarqube to store the rules in git along with the source code of the project?

Is it possible for sonarqube to store the rules in git along with the source code of the project?
In other words, is it possible to version the rules for different branches of the project?
Is it possible to store a profile in Git?
Yes. You can export a profile to a file using the Back up feature. (Note that rule customizations aren't included.)
Is it possible to have SonarQube store profiles in Git?
No. During analysis your SCM will be queried for blame information, but no part of the SonarQube ecosystem performs commits.
Is it possible to version your profiles and apply different versions to different branches of a project?
Yes, but why would you want to?
There's no point in re-analyzing branches that aren't changing, which implies that if you improve/tighten your standards, you don't want to apply those higher standards to the new development that's going on in your branches.
"But new rules would raise all kinds of new issues in our branches & I don't want to have to address them because we're not doing new development there, just fixes." I can hear you saying.
Sure, but if you're using a recent version of SonarQube (>= 6.3), those new issues will be raised with a date that matches the blame date of the line, i.e. they won't be raised as leak-period issues, but as old issues. Since you're presumably keeping the leak period clean on your branches, this should all work out.
But to answer your question, there's really no need to store old profiles in SCM, just peel off copies in SonarQube and assign them to the appropriate branch projects.
But if you insist on doing this via backups checked in to SCM be aware that you'll need to reconstitute (re-import) your profiles into SonarQube prior to analysis. You can't just point an analysis at a file containing a list of rules and expect it to work.
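If you do go that route, the pre-analysis restore step could look roughly like the sketch below. It assumes the api/qualityprofiles/restore web service is available to your user; the server URL, credentials, and file path are placeholders.
    # Sketch: restore a profile backup kept in the repo before running analysis.
    # URL, credentials, and file path are placeholders.
    curl -u admin:admin -X POST \
        -F "backup=@quality-profiles/java-profile-backup.xml" \
        http://localhost:9000/api/qualityprofiles/restore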

Sonar Differential Alerts

I've got a quality profile in Sonar which will alert if the number of Violations goes up since the previous analysis, e.g. "Alert if Critical Issues since previous analysis is greater than 0".
The problem with this is that when you run a subsequent build without any code changes (or perhaps an innocuous code change) the alert is cleared.
Is there a way to get Sonar to compare its results against the last analysis that did not contain any alerts?
EDIT: I should make it clear that the "difference since previous version" option will not work for our setup as we're employing a Continuous Delivery strategy, in which each build is a potential release candidate with its own unique version (we're using a date/time stamp as the version).
EDIT #2: I have also tried setting the value sonar.timemachine.period4 to a hardcoded version that I want to compare against; however this value is not accessible when configuring the Alerts, and is certainly ignored during an actual analysis.
After poking around in Sonar's source, a colleague and I came up with a workaround solution.
Set up your quality profile using the "previous version" comparison wherever you actually want to compare to the last good build.
For each build:
Query the last VCS tag with a build version and assign it to a variable called ${LAST_GOOD_BUILD} or similar for the rest of your build process to use.
Run Sonar with -Dsonar.timemachine.period3=${LAST_GOOD_BUILD} (also making sure the BuildBreaker plugin is active)
If you get no alerts, the next build step needs to record your new version in a VCS tag.
This works because sonar.timemachine.period3 is the same setting as "previous version" in your quality profile, but you are now replacing it with a hard-specified version of your choosing. Every time you build, you are tagging only the builds that pass quality checks, and when you run Sonar, you're only comparing against these good versions.
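A rough sketch of those build steps, assuming Subversion, a Maven build, and a "good-<version>" tag naming convention (every name, URL, and variable here is a placeholder supplied by your pipeline):
    # Sketch of the workaround described above; tag naming, URLs, and the
    # $BUILD_VERSION variable are placeholders supplied by your pipeline.
    REPO=https://svn.example.com/myproject
    LAST_GOOD_BUILD=$(svn ls "$REPO/tags" | grep '^good-' | sort | tail -1 \
                      | sed -e 's/^good-//' -e 's|/$||')
    mvn sonar:sonar -Dsonar.timemachine.period3="$LAST_GOOD_BUILD"
    # If the BuildBreaker plugin did not fail the build, tag it as good:
    svn copy "$REPO/trunk" "$REPO/tags/good-$BUILD_VERSION" \
        -m "Passed Sonar quality checks"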
Pretty horrid, but it gets our build pipeline up and running again. If anything's unclear about the above, please let me know and I'll update this "solution".
CAVEATS: Your version numbering cannot be whole integers - Sonar will interpret this as the number of days between your current analysis and the one you want to compare with! Also, it cannot be in a format that could be confused with yyyy-MM-DD (e.g. 1000-01-01) as if this also happens to resolve to a real date, then you are inadvertently specifying the start of a date range. I've not yet seen anyone specifying version numbers that way, but you never know.
No, but you can configure SonarQube to base your differential views on previous_version or on a date. See http://docs.codehaus.org/display/SONAR/Differential+Views#DifferentialViews-DifferentialViewsSettings

Using sonar in pretty big team

We have about 20 people on our team, and for now we are using Sonar to analyse new code before submitting it to the main stream. So each designer uses his own Sonar installed on his machine.
What I'm trying to do is create one instance of Sonar which every designer will be able to use. The only concern I have is what will happen if:
One designer launches an analysis on one revision of a file and, right after that, a second designer launches an analysis on another revision of that file (in the worst case we could have a bunch of such files). The first designer then won't be able to see his violations, and won't be able to see the code he wrote at all. Is there some mechanism to overcome this?
What will happen if two designers analyse the same project at the same time? AFAIK, Sonar won't allow them to do so. Is there any workaround for this?
Of course, we could somehow create a project on the Sonar side for each team member, but this has its drawbacks, such as issues marked as false positives in one project not appearing as such in another project, and so on.
Any ideas on these issues?
What you probably want to set up is:
a central Sonar instance that analyses the code base on a regular basis (for instance every day) based on the code located in the repository. This instance should be the reference, and the project manager(s) will use it to monitor the project.
ask the developers to run local analyses before committing their code:
either using Sonar Eclipse if you're coding in Java, C++ or Python. Everything is perfectly described in the documentation, more precisely in the "Checking code prior to commit" section
or using the Issues Report plugin if your language is not yet supported in Sonar Eclipse.
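As a sketch of what such a local, non-publishing analysis might look like in the SonarQube versions this answer targets: the sonar.analysis.mode and sonar.issuesReport.html.enable properties below belong to the old preview mode and Issues Report plugin and may not exist in your version, so treat them as assumptions and check your scanner's documentation.
    # Sketch only: a local "preview" analysis that produces an HTML issues
    # report without pushing results to the central server. Property names are
    # version-dependent assumptions; the URL is a placeholder.
    mvn sonar:sonar \
        -Dsonar.analysis.mode=preview \
        -Dsonar.issuesReport.html.enable=true \
        -Dsonar.host.url=http://central-sonar.example.com:9000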

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that;
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes not affecting the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to elucidate what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Consider that I am a Test-Driven-Development-oriented developer, so I write the unit, DBUnit, and user-acceptance tests (for example using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook script that triggers the continuous integration tool to build the project (P). While building, the test code should be run in the proper phase and its reports generated. The user-acceptance tests should be performed against a dedicated server (for example JBoss or Tomcat), in this order: start the server, run the UA tests, generate the UA test reports, then stop the server. If all these steps complete successfully, versioning should be performed. In the versioning part, a Maven plugin (or whatever) should take the number of issues resolved from the issue tracking system, increment the related version fragments (minor and revision), and finally append the build number. The version fragments may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That's all the auto-cycled processing I want.
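For the trigger step, a post-commit hook could be as small as the sketch below, assuming Subversion and a CI server that exposes a REST endpoint for queuing builds; the URL, plan key, and credentials are placeholders, and the exact endpoint should be checked against your CI server's documentation:
    #!/bin/sh
    # Sketch of an SVN post-commit hook that asks the CI server to queue a
    # build of the plan for project (P). All names here are placeholders.
    REPOS="$1"
    REV="$2"
    curl -s -u ci-user:api-token -X POST \
        "https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN" > /dev/null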
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question: automatic deployment to production requires the sign-off of "the business", whoever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build / autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally you'll need your own bit of scripting to do the query and update the build number accordingly.
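As a very rough sketch of that last scripting bit, assuming JIRA's REST search API, curl, and jq (the JQL, project key, credentials, URLs, and $BUILD_NUMBER are all placeholders):
    # Sketch: derive the minor and revision fragments from counts of resolved
    # issues in JIRA, then append a CI-provided build number. Everything named
    # here (URL, credentials, JQL, $BUILD_NUMBER) is a placeholder.
    JIRA=https://jira.example.com
    FEATURES=$(curl -s -u user:token -G "$JIRA/rest/api/2/search" \
        --data-urlencode "jql=project = PROJ AND issuetype in (Story, Improvement) AND status = Resolved" \
        --data-urlencode "maxResults=0" | jq '.total')
    BUGS=$(curl -s -u user:token -G "$JIRA/rest/api/2/search" \
        --data-urlencode "jql=project = PROJ AND issuetype = Bug AND status = Resolved" \
        --data-urlencode "maxResults=0" | jq '.total')
    VERSION="1.$FEATURES.$BUGS.$BUILD_NUMBER"   # major digit maintained by hand
    echo "Computed version: $VERSION"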
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?
