Can the maven-sonar-plugin perform a local analysis?

I'm configuring a multi-module Maven project that forces the execution of sonar:sonar in the verify phase.
I also use Sonar's build-breaker plugin to avoid deploying the module if Sonar raises any alerts.
The problem with this approach is that developers have to go to the Sonar server to check the alerts. That is not so bad in itself, but if several users analyze the same module at the same time, it is impossible to know whether the last/current analysis contains your alerts.
Context: we have a CI system that builds all the modules every hour, so it sometimes collides with a developer's deploy (which forces the analysis).
IMHO, only the CI system should publish the analysis to the Sonar server, because CI has the latest committed and deployed code. Developers should only check their own changes locally.
So why are we forcing the analysis in the developer build? To avoid deploying modules that do not respect the code quality thresholds (Sonar's build-breaker plugin helps with this).
Is there a way to configure the maven-sonar-plugin to do this?
Local analysis in the developer build.
Server analysis in the CI build.

From what I understand, you should probably have a first instance of Sonar which is only used during the build to break it if your quality requirements are not met, and a second one that is used by your CI system and that is the reference for your products. And if you really want to enforce your process and be sure that code which breaks those requirements is not pushed into your SCM system, then you could bind a Sonar analysis to a pre-commit hook. But this seems a bit extreme to me...
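As a rough sketch of that two-instance idea with Maven profiles (the profile ids and URLs below are made up for illustration; sonar.host.url is the standard property pointing the analysis at a server):
<profiles>
  <profile>
    <!-- developer build: points at the "build-only" Sonar instance used to break the build -->
    <id>dev-sonar</id>
    <properties>
      <sonar.host.url>http://localhost:9000</sonar.host.url>
    </properties>
  </profile>
  <profile>
    <!-- CI build: points at the reference Sonar instance -->
    <id>ci-sonar</id>
    <properties>
      <sonar.host.url>http://sonar.example.com:9000</sonar.host.url>
    </properties>
  </profile>
</profiles>
The developer build would then run something like mvn verify -Pdev-sonar, while the CI job runs mvn verify -Pci-sonar.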
At SonarSource, we haven't chosen the "block a commit because of violations" approach. Indeed, we consider that having some technical debt (= violations) is OK as long as you manage it. Managing technical debt means reviewing each incoming violation in Sonar and fixing it in the code or assigning those violations to action plans, the main idea being that the technical debt should not have increased at the end of a development sprint. This is what the review feature of Sonar is meant for. And Sonar provides widgets to monitor the evolution of reviews and of new violations without a review.

Related

Offline Sonar analysis to signal a broken build

Sonar analysis is a nice way to check code compliance against a centrally defined policy. This is why I would like to use a profile with Blocker and Critical rules to signal a broken build.
I am using Jenkins to do builds and invoke the SonarRunnerBuilder to get standard analysis on nightly builds of projects.
To achieve the features I outlined, it would be best if I could run the analysis, check the results for violations (preferably without sending them to the Sonar server), and fail the build if there were any violations. So far I have not found a way to do this using Google and looking through the SonarRunner source code.
What I have considered is a workaround: I would implement a Decorator collecting violations, and when the decoration reaches the project resource, I would throw a RuntimeException to break the analysis. This would in turn fail the build.
Would this work? Is there a better way to achieve this?
What you are looking for seems to be what the build breaker plugin offers. It relies on the Quality Gate configuration to detect when violations (now called issues) in the current analysis require breaking the whole build.
Please note that this plugin won't be supported in SonarQube 5.2. Technically, deep changes in SonarQube's architecture make it impossible to provide the same feature. Philosophically, this plugin does not match the experience SonarQube wants to offer.
Still, another solution covering the same use case is very likely to be offered in a future version of SonarQube, but this is yet to be defined.
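For completeness, on server versions where the plugin still works, the Maven side needs nothing special beyond running sonar:sonar against a project that has a Quality Gate; if you want to bypass the check for particular builds, the plugin documents a skip property along these lines (treat the exact name as an assumption and verify it against your plugin version):
<properties>
  <!-- Assumed Build Breaker plugin property: when true, the analysis does not fail the build
       even if the Quality Gate is red; check the property name for your plugin version. -->
  <sonar.buildbreaker.skip>false</sonar.buildbreaker.skip>
</properties>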

Does continuous inspection still work with Sonar 5.1.X?

I'm trying to run a preview analysis for a (Java) project of ours with SonarQube 5.1.1. I am able to get a local report generated; however, I get no coverage data, and I also get the message [INFO] [XX:YY:ZZ.ZZZ] Build Breaker plugin is no more supported in preview/incremental mode.
If I check here, the page says: "Starting with SonarQube 5.1, the Build Breaker plugin does not work any longer in the preview & incremental modes."
I'm confused - I thought that for continuous inspection one needs the build breaker plugin. Is that no longer so? Has the concept in SonarQube changed?
Why am I not getting coverage data when running a preview analysis?
I don't know where you've read this, but continuous inspection is not specifically related to the preview/incremental mode nor to the build breaker plugin - it's not even related to SonarQube (even though it has been pushed by SonarSource from the very beginning).
Here are the key points:
Continuous inspection is about analyzing your code as often as you can in order to monitor (and eventually improve) the quality of your code. Whatever the tool.
On SonarQube, this means running analyses that push information to the server so that you can monitor what's going on and take the required actions for your application portfolio.
Obviously, when you are a developer, you'd like to manage those issues early, before they even get pushed to the source code repository. But experience tells us that preventing any code push because of issues is a bad pattern - because some issues might be false positives or not relevant in the context (and you still want, and have the right, to push your code). This is why we feel that the build breaker plugin is not aligned with all this, and it will be replaced in upcoming versions of SQ by native features that better match these concepts:
Very efficient code analysis to display issues in the IDE at the speed of light - but without computing metrics
Preview mode that will compute everything and make it possible to check the quality gate before pushing code to the source code repository - without impacting the results on the server
And in this case, using specific information found in the logs, it will be possible for a CI to fail the build
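The preview part can already be tried out on a 5.x server from Maven; a hedged sketch (property names taken from the SonarQube 5.x documentation, exact behaviour depends on your version):
<properties>
  <!-- Assumption: SonarQube 5.x preview mode - nothing is pushed to the server -->
  <sonar.analysis.mode>preview</sonar.analysis.mode>
  <!-- print a summary of new issues in the console so a CI could parse it -->
  <sonar.issuesReport.console.enable>true</sonar.issuesReport.console.enable>
</properties>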

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that:
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature or an enhancement to an existing feature is resolved in the issue tracking system.
Revision: changes not affecting the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that, apart from tasks, the default issue types are bugs, improvements, and new features.
Furthermore, I am adding a continuous integration tool to this configuration and assume that it is Bamboo (by the way, I have never used Bamboo before; I have used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now I want to elucidate what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Consider that I am a Test-Driven-Development-oriented developer, so I write the unit, DBUnit, and user-acceptance tests (for example, using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook script that triggers the continuous integration tool to build the project (P). While building, the test code should be run in the proper phase and its reports generated. The user-acceptance tests should be performed on a dedicated server (for example, JBoss or Tomcat). The order of this acceptance testing should be: bring the server up, run the UA tests, generate the UA test reports, and bring the server down. If all these steps have been successfully completed, the versioning should be performed. In the versioning part, a Maven plugin (or whatever else) should take the number of issues resolved from the issue tracking system and increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file in order to show them in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That is the whole auto-cycled process I want.
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question. On the automatic deployment to production: this requires the sign-off of "the business", whoever that is. How good do your tests need to be to automatically push to production? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build / autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally you'll need your own bit of scripting to do the query and update the build number accordingly.
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?
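If you do go down that road, the build fragment itself is the easy part; here is a hedged sketch using the buildnumber-maven-plugin (the issue-count-driven minor/revision increments would still need the custom scripting described above):
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>buildnumber-maven-plugin</artifactId>
      <version>1.4</version>
      <executions>
        <execution>
          <!-- exposes ${buildNumber} (derived from the SCM revision by default) early in the build -->
          <phase>validate</phase>
          <goals>
            <goal>create</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
The resulting ${buildNumber} can then be written into the manifest so that the full major.minor.revision.build string shows up in the user interface.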

User specific sonar reports for same project

I have configured a project in Sonar and integrated Sonar with Maven for build-time analysis of the project.
After analysis, a report is generated and uploaded to Sonar for browsing. But once another user compiles the same project, their report overwrites mine.
Basically, I don't want one user's report on a project to be overwritten by another user's report. A user must be able to see their current violations independently. Is this possible in Sonar?
Sonar stores its analyses on a daily basis, which explains why it is rather pointless to run an analysis several times in a day: each run will overwrite that day's existing results, which in turn spoils ongoing statistical analysis.
I would suggest running Sonar from a dedicated build server like Jenkins (which has a Sonar plug-in). This daily analysis will populate the Sonar database and keep the project dashboard current. This architecture also enables you to keep the database credentials confidential.
Obviously developers would like to see the results of their bug fixing. For that I'd recommend running the Sonar Eclipse plug-in. The latest version will run the same Sonar analysis locally. Recent versions of Sonar also enable you to assign violations to developers for resolution.
This is not possible; the last performed analysis will always be the one you browse in the interface. However, I guess what you need is the Issues Report plugin, which, combined with the dry-run option, enables an analysis to store its results locally.
This way your developers will be able to run an analysis on their code and see the violation delta without pushing the results.
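A hedged sketch of what those developer-side properties could look like in the pom (names taken from the dry-run and Issues Report documentation for Sonar 3.x/4.x; adjust to your version):
<properties>
  <!-- Assumed properties: analyze without pushing results to the server... -->
  <sonar.dryRun>true</sonar.dryRun>
  <!-- ...and generate a local HTML report of the issues found -->
  <sonar.issuesReport.html.enable>true</sonar.issuesReport.html.enable>
</properties>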
You can do it by explicitly setting the properties below in your pom.xml:
<properties>
  <sonar.projectKey>Test</sonar.projectKey>
  <sonar.projectName>Test</sonar.projectName>
</properties>
Every user should set a different projectKey and projectName if you want one user's report on a project not to be overwritten by another user's report.

What's the workflow of Continuous Integration With Hudson?

I was referred to Hudson today.
I have heard about continuous integration before, but I have no idea what the heck a CI server is.
Hudson is really easy to install on Ubuntu, and in several minutes I managed to set up an instance of it.
But I don't quite understand the workflow of a CI server, or how I am supposed to use it.
Please tell me if you have experience with CI. Thanks in advance.
Edit:
I am currently using Mercurial as my SCM, and I wonder what the right way to use it with Hudson is.
I have installed the Mercurial plugin for Hudson and created a new job with a local repository. When I commit to the repository, the Hudson job builds with the latest version of my source code.
If I were using a remote repository, what would the workflow look like?
Is it something like the following?
Set up a Hudson job with the repository
Developer makes a local clone of the repository
Developer commits and pushes changes
The remote repository is updated with the incoming changesets
A Hudson build runs
There may be something I have misunderstood; please help me point it out.
Continuous Integration is the process of "integrating software" continuously i.e. as frequently as possible (ultimately after each set of changes) to avoid any big-bang integration and all subsequent problems by getting immediate feedback.
To implement Continuous Integration, you first need to automate the build of your software (where "build" of course means compiling sources and packaging them, but also compiling tests, running the tests, running quality checks, etc. - anything that will help to get feedback on the health of your code). Then you need to trigger the build on the latest version of the sources on a particular event (a change in the repository, a temporal event), generate reports, and send notifications upon failure (by mail, Twitter, etc.).
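As a Maven-flavoured sketch of that "automate the build" step (the plugin version below is only an example): once unit and integration tests are bound to the standard lifecycle, a CI engine only has to invoke mvn verify.
<build>
  <plugins>
    <!-- unit tests run during the test phase -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.12</version>
    </plugin>
    <!-- integration tests run during the integration-test and verify phases -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <version>2.12</version>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>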
Triggering builds, getting the latest version of the sources, running the build, generating and publishing reports, and sending notifications - this is precisely the responsibility of a CI engine.
And because running a build is CPU and Disk intensive, CI engines usually run on a dedicated machine (or even a farm of machines if you want to build lots of projects).
Back to your question now. Once you've got Hudson running, configure it (Manage Hudson > Configure System): set up the JDK, build tools, etc. Then set up a Hudson job and follow the steps: configure the location of the source repository, the build tool, the trigger, a notification channel, and you're done (you can do more complex things, but that's a start).
For more details on the setup, check:
The official Use Hudson guide for more details. << START HERE
Continuous Integration with Hudson - Tutorial.
Spot defects early with Continuous Integration.
Martin Fowler's overview of continuous integration is one of the canonical references. In my opinion, using automation to make sure your code base is healthy is one of the most useful things that you can set up.
Update: Sorry that I didn't have much time earlier to expand on my reply. @Pascal_Thivent is right that in order to use CI effectively, you need to be able to automate your builds, tests, etc. CI is actually a good forcing function for this. For me, it's one of those little warning flags if I start to think that it would be too painful to put a build into Hudson. It means that something is not quite right.
What I like about Hudson is that it's flexible enough to accommodate different workflows. We use it for both builds / unit tests and releases. And it eliminates a lot of the worry about certain release procedures only working in one person's environment.
What I don't like about Hudson is that it is occasionally unstable when new builds break plugins. I've had a couple of upgrades (2 out of 10 or so) go bad because of incompatibilities. I do two things now:
I never upgrade my team's Hudson server to the latest and greatest right away. I generally only upgrade when there are significant new features, or bug fixes.
I now have a basic Hudson instance set up with all my plugins on a virtual machine with some dummy builds that I fire up to test out any new upgrades before doing it on the public server.
