Sonar Differential Alerts

I've got a quality profile in Sonar which will alert if the number of Violations goes up since the previous analysis, e.g. "Alert if Critical Issues since previous analysis is greater than 0".
The problem with this is that when you run a subsequent build without any code changes (or perhaps with an innocuous code change), the alert is cleared.
Is there a way to get Sonar to compare its results against the last analysis that did not contain any alerts?
EDIT: I should make it clear that the "difference since previous version" option will not work for our setup as we're employing a Continuous Delivery strategy, in which each build is a potential release candidate with its own unique version (we're using a date/time stamp as the version).
EDIT #2: I have also tried setting the value sonar.timemachine.period4 to a hardcoded version that I want to compare against; however, this value is not accessible when configuring the Alerts, and it is certainly ignored during an actual analysis.

After poking around in Sonar's source, a colleague and I came up with a workaround solution.
Set up your quality profile using the "previous version" comparison wherever you actually want to compare to the last good build.
For each build:
Query the last VCS tag with a build version and assign it to a variable called ${LAST_GOOD_BUILD} or similar for the rest of your build process to use.
Run Sonar with -Dsonar.timemachine.period3=${LAST_GOOD_BUILD} (also making sure the BuildBreaker plugin is active)
If you get no alerts, the next build step needs to record your new version in a VCS tag (see the sketch below).
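A minimal sketch of those steps, assuming a Git repository, a "good-<version>" tag convention, and the sonar-scanner CLI; the tag prefix, the NEW_VERSION variable, and the push step are illustrative, not part of the original setup:

    #!/bin/sh
    set -e

    # 1. Find the most recent "known good" tag and extract its version.
    LAST_GOOD_BUILD=$(git describe --tags --match 'good-*' --abbrev=0 | sed 's/^good-//')

    # 2. Analyze against that version instead of the literal previous analysis.
    #    With BuildBreaker active, this command fails if any alert is raised.
    sonar-scanner -Dsonar.projectVersion="${NEW_VERSION}" \
                  -Dsonar.timemachine.period3="${LAST_GOOD_BUILD}"

    # 3. We only get here if no alerts were raised: record the new good version.
    git tag "good-${NEW_VERSION}"
    git push origin "good-${NEW_VERSION}"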
This works because sonar.timemachine.period3 is the same setting as "previous version" in your quality profile, but you are now replacing it with a hard-specified version of your choosing. Every time you build, you are tagging only the builds that pass quality checks, and when you run Sonar, you're only comparing against these good versions.
Pretty horrid, but it gets our build pipeline up and running again. If anything's unclear about the above, please let me know and I'll update this "solution".
CAVEATS: Your version numbers cannot be whole integers - Sonar will interpret an integer as the number of days between your current analysis and the one you want to compare with! They also cannot be in a format that could be confused with yyyy-MM-dd (e.g. 1000-01-01): if the version happens to resolve to a real date, you are inadvertently specifying the start of a date range. I've not yet seen anyone specify version numbers that way, but you never know.

No, but you can configure SonarQube to base your differential views on previous_version or on a date. See http://docs.codehaus.org/display/SONAR/Differential+Views#DifferentialViews-DifferentialViewsSettings

Related

How to get new code coverage in SonarQube?

When I say "new code", I mean the code added between two commits on the master branch.
I am confused about new code coverage in SonarQube. I want to specify a comparison between two commit IDs.
Check whether the sonar.projectDate analysis parameter can help:
Retrieve the oldest version of your application's source that you wish to populate into the history (from a specific tag, whatever).
Run a SonarQube analysis on this project by setting the sonar.projectDate property. Example: sonar-scanner -Dsonar.projectDate=2010-12-01
Retrieve the next version of the source code of your application, update the sonar.projectDate property, and run another analysis. And so on for all the versions of your application you're interested in.
If your commits are done on different days, that could work.
This is not as precise as two commit IDs, but it can still help here.
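For example, a rough sketch with Git and the sonar-scanner CLI; the commit hashes and dates are placeholders:

    # Analyze the older commit with a back-dated analysis...
    git checkout 1a2b3c4
    sonar-scanner -Dsonar.projectDate=2019-03-01

    # ...then the newer commit with a later date; "new code" is then
    # whatever changed between the two analyses.
    git checkout 5d6e7f8
    sonar-scanner -Dsonar.projectDate=2019-03-15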
See "SonarQube - unity tests code coverage on new code not working" (if this has not changed since SonarQube 5.x)

Can SonarQube perform retrospective analysis of past commits?

I am considering the implementation of a Code Quality tool for our team's Projects.
SonarQube seems to be a good choice. I haven't gotten the ideal workflow figured out yet (we use SVN and Maven Projects and have a Jenkins server running the tests on every commit).
Aside from the importance of being able to analyse the quality of the current commit, historical evolution is also very interesting.
Given that we already have a few years of commits, is it possible, when setting up the project, to request a retrospective analysis of those commits, or will SonarQube only work for the commits from the day it is installed onwards?
SonarQube only displays data uploaded by scanners. You can check out any commit (read more here: How to checkout a specific Subversion revision from the command line?) and then execute a scanner. Which scanner to use depends on your build tool (an example follows the list):
Ant
Gradle
Maven
MSBuild
Other
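For example, with SVN and Maven (as used in the question above), one retroactive analysis might look like this; the revision number and URLs are placeholders:

    # Check out a specific revision...
    svn checkout -r 1234 https://svn.example.com/repo/trunk myproject
    cd myproject

    # ...and run the Maven scanner to push its analysis to the server.
    mvn sonar:sonar -Dsonar.host.url=https://sonarqube.example.com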
The analysis result will be pushed to the SonarQube server. Unfortunately, it is always treated as the latest version of the application, so you cannot "insert" analyses of old commits into the project history. But do you really need to? Scanners always analyze all sources. If somebody added some code three years ago and nobody deleted it, then it will still be visible on the server. If the code has been deleted, then you shouldn't spend time analyzing something that doesn't exist anymore. That's why SonarQube always shows the last state of the project.
You can read a good blog post written by Fabrice Bellingrad (April 06, 2016): Stop planning; fix the leak!
Read more about SonarQube Architecture and Integration.
Yes, this is possible using the sonar.projectDate analysis parameter. Its purpose is precisely what you are asking for.
Quote from the docs:
Assign a date to the analysis. This parameter is only useful when you need to retroactively create the history of a not-analyzed-before project. The format is yyyy-MM-dd, for example: 2010-12-01. Since you cannot perform an analysis dated prior to the most recent one in the database, you must recreate your project history in chronological order, oldest first.
You could for example check out your last 10 version tags in chronological order (oldest first!). For each tag run the analysis with sonar.projectDate set to the date the tag was created.
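A rough sketch of that loop, assuming Git tags and the sonar-scanner CLI; deriving the date from the tagged commit is just one way to do it:

    # Iterate over the last 10 tags, oldest first (creatordate is chronological).
    for TAG in $(git tag --sort=creatordate | tail -10); do
        git checkout "$TAG"
        # Date the analysis to when the tagged commit was made.
        TAG_DATE=$(git log -1 --format=%ad --date=format:'%Y-%m-%d' "$TAG")
        sonar-scanner -Dsonar.projectVersion="$TAG" -Dsonar.projectDate="$TAG_DATE"
    done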

Is it possible for SonarQube to store the rules in Git along with the source code of the project?

In other words, is it possible to version the rules for different branches of the project?
Is it possible to store a profile in Git?
Yes. You can export a profile to file using the Back up feature. (Note that rule customizations aren't included)
Is it possible to have SonarQube store profiles in Git?
No. During analysis your SCM will be queried for blame information, but no part of the SonarQube ecosystem performs commits.
Is it possible to version your profiles and apply different versions to different branches of a project?
Yes, but why would you want to?
There's no point in re-analyzing branches that aren't changing. Which implies that if you improve/tighten your standards, don't you want to apply those higher standards to the new development that's going on in your branches?
"But new rules would raise all kinds of new issues in our branches & I don't want to have to address them because we're not doing new development there, just fixes." I can hear you saying.
Sure, but if you're using a recent version of SonarQube (>= 6.3), then those new issues will be raised with a date that matches the blame date of the line. I.e., they won't be raised as leak period issues, but as old issues. Since you're presumably keeping the leak period clean on your branches, this should all work out.
But to answer your question, there's really no need to store old profiles in SCM; just peel off copies in SonarQube and assign them to the appropriate branch projects.
But if you insist on doing this via backups checked into SCM, be aware that you'll need to reconstitute (re-import) your profiles into SonarQube prior to analysis. You can't just point an analysis at a file containing a list of rules and expect it to work.
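If you do go that route, a minimal sketch using the web services API; the server URL, token, and profile name are placeholders, and the exact backup parameters vary between SonarQube versions:

    # Export the current profile to a file you can commit to Git...
    curl -u "$SONAR_TOKEN:" -o my-profile.xml \
        "https://sonarqube.example.com/api/qualityprofiles/backup?language=java&qualityProfile=My%20Profile"

    # ...and re-import it into SonarQube before running an analysis.
    curl -u "$SONAR_TOKEN:" -F backup=@my-profile.xml \
        "https://sonarqube.example.com/api/qualityprofiles/restore"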

TeamCity and Plastic SCM plugin error when applying build.vcs.number

I have set up TeamCity 10.0.3 to create an assembly version number during the project build that uses build.vcs.number (which corresponds to the changeset number on the VCS Root, taken from Plastic SCM) as one of its parts.
The format is similar to this: {major}.{minor}.{build.vcs.number}.{build counter}
This method worked perfectly for quite some time, returning the changeset number (and only the number) from my VCS system.
The Plastic plugin for TeamCity has now been upgraded to the latest version (SNAPSHOT-201611231807). Since the upgrade, a newly created VCS Root will initially return the changeset number correctly for use in the assembly version number.
The error occurs as soon as anyone checks something into the monitored branch: from that point, if an automatic or manual build is triggered, the value returned as build.vcs.number contains additional information that breaks the build.
An example of what is returned after a checkin is:
cs.418 (guid:6a2d5c45-b1b8-4f03-889c-3f3c80c6e209)
This appears to be the changeset number along with the GUID of the changeset.
If I re-create the VCS root from scratch the correct number will be returned - until something is checked back in.
How can I resolve this error? All I want returned is the changeset number.
Many thanks in advance.
We have just released a new TeamCity plugin version including new features and a big code refactor. We are aware of this problem, and we are going to configure the build.vcs.number variable to always show the changeset number (as it did in previous versions of the plugin). The task should be done very soon.
Please contact us at support at codicesoftware dot com if you need more information.
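In the meantime, a possible interim workaround is to strip the extra text in an early command-line build step and publish the bare number as a separate parameter; the parameter name and regex here are only illustrative:

    #!/bin/sh
    # build.vcs.number currently arrives as: cs.418 (guid:6a2d5c45-...)
    RAW="%build.vcs.number%"
    # Keep only the digits of the changeset number.
    CS_NUMBER=$(echo "$RAW" | sed 's/^cs\.\([0-9][0-9]*\).*/\1/')

    # TeamCity service message: exposes the clean value to later build steps.
    echo "##teamcity[setParameter name='env.CS_NUMBER' value='$CS_NUMBER']"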

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some open questions in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that:
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes below the level of a minor change; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to illustrate what I want to do with the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to a Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Since I am a Test-Driven Development oriented developer, I write the unit, DBUnit, and user-acceptance tests (for example, using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook script that triggers the continuous integration tool to build the project (P). While building, in the proper phase, the test code should be run and its reports generated. The user-acceptance tests should be performed on a dedicated server (for example, JBoss or Tomcat): bring the server up, run the UA tests, generate the UA test reports, then bring the server down. If all these steps complete successfully, versioning should be performed. In the versioning part, a Maven plugin (or whatever else) should take the number of issues resolved from the issue tracking system, increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That's the whole auto-cycled process I want.
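For the first link in that chain, here is a minimal sketch of an SVN post-commit hook that queues a CI build over REST; the Bamboo URL, plan key, and credentials are placeholders (Hudson/Jenkins expose similar trigger URLs):

    #!/bin/sh
    # hooks/post-commit: Subversion calls this as "post-commit REPOS REV".
    REPOS="$1"
    REV="$2"

    # Queue the Bamboo plan for this project, passing the revision along
    # as a custom build variable.
    curl -s -u ci-user:ci-password -X POST \
        "https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN?bamboo.variable.svnRevision=${REV}"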
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question: automatic deployment to production requires the sign-off of "the business", whoever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug IDs. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally, you'll need your own bit of scripting to do the query and update the build number accordingly.
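That query-and-update scripting might look something like the sketch below, assuming JIRA's REST search API and jq; the JQL, credentials, and version file are all placeholders:

    #!/bin/sh
    JIRA=https://jira.example.com

    # Count resolved features/improvements (drives the minor number)...
    MINOR=$(curl -s -u user:pass \
        "$JIRA/rest/api/2/search?jql=project=P+AND+issuetype+in+(Story,Improvement)+AND+status=Resolved&maxResults=0" \
        | jq '.total')

    # ...and resolved bugs (drives the revision number).
    REVISION=$(curl -s -u user:pass \
        "$JIRA/rest/api/2/search?jql=project=P+AND+issuetype=Bug+AND+status=Resolved&maxResults=0" \
        | jq '.total')

    # The major number is maintained by hand; the build number comes from CI.
    echo "${MAJOR}.${MINOR}.${REVISION}.${BUILD_NUMBER}" > version.txt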
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?
