How should SonarQube quality gates be used with automated builds?

Our organization is currently using SonarQube to fail SCM-triggered automated builds. Right now, our quality gate fires when there are any open or reopened issues (with the understanding that someone needs to fix or "accept" the issue before the build will continue). Unfortunately, there are concerns that we are delaying code propagation over code-formatting issues and other very minor offenses. This is also requiring quite a bit more manual intervention than we originally expected.
We want certain classes of issues to trigger a build failure, but the "new issues" filter only fires once: on the subsequent automated build the issues are no longer new, so they no longer trip the gate.
Is there any way to set up the equivalent of "Break the build if there are more than 0 open/reopened blocker/critical issues"?
EDIT: Sorry, I forgot to mention that we have about 15k "confirmed" issues serving as our backlog, so the non-new-issue filters won't work either. Also, I am open to minor changes to our workflow.

From what I understand, the following configuration should do the job:
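A quality gate with conditions on the overall blocker and critical issue counts would express exactly that rule. Here is a minimal sketch that creates such conditions through SonarQube's web API, assuming an admin token is available; the server URL, token, and gate ID are placeholders, and the parameter names vary somewhat between SonarQube versions, so check the api/qualitygates documentation for yours:

```python
# Minimal sketch: add "more than 0 blocker/critical issues" conditions to a
# quality gate via SonarQube's web API. URL, token, and gate ID are
# placeholders; parameter names differ between SonarQube versions.
import requests

SONAR_URL = "https://sonar.example.com"  # placeholder server URL
TOKEN = "squ_example_token"              # placeholder admin token
GATE_ID = "1"                            # ID of the quality gate to modify

# blocker_violations / critical_violations are the overall (not "new code")
# metric keys, so the conditions keep firing on every analysis, not just once.
for metric in ("blocker_violations", "critical_violations"):
    resp = requests.post(
        f"{SONAR_URL}/api/qualitygates/create_condition",
        auth=(TOKEN, ""),  # token goes in the username slot, password empty
        data={"gateId": GATE_ID, "metric": metric, "op": "GT", "error": "0"},
    )
    resp.raise_for_status()
```

One caveat given the edit above: these overall metrics count every unresolved issue, including a "confirmed" backlog, so this only keeps builds green if the 15k backlog contains no blocker/critical issues (or if those are resolved as Won't Fix or False Positive).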

Related

Work around sonar.analysis.mode parameter being deprecated

I am using SonarQube to analyse my code (a Java Maven project). Every time I run my analysis, a report is generated and published to my SonarQube server. However, I would like to publish my SonarQube analysis only when I run it on the master branch (meaning that if I run a SonarQube analysis on any other branch, the report should not be placed on the server, but the analysis still needs to be done).
I know that there was a sonar.analysis.mode parameter that used to do precisely what I want. However, from this post's discussions I learnt that it was deprecated since v7.4 (I use v7.9+). From the same post, I learnt about the branch analysis method (the supposed 'alternative'), but if I understood correctly, the analysis report still gets placed on the server, albeit for a short time. I am afraid that is still not good enough (unless I misunderstood, or it can be configured to behave like sonar.analysis.mode).
My question is then as follows: has a workaround ever been found that would ultimately do what sonar.analysis.mode used to do? Maybe there is an API parameter that would prevent the analysis report being placed on the server (can one delete the analysis report, without admin privileges, but still retain the analysis's info somehow)?
I am aware that I am essentially asking for functionality that was removed for the reasons provided here. As such, any alternatives that I may look into will also suffice (though I would still really like a workaround using the SonarQube API or something similar).
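No direct replacement is given in the thread, but one workaround is to gate the publishing step on the branch name in CI instead of trying to recreate the removed preview mode. A minimal sketch, assuming the CI server exposes the branch in an environment variable (BRANCH_NAME is a placeholder; the variable name differs per CI system), with Maven goals as used in the question:

```python
# Minimal CI-side sketch: only publish the SonarQube report on master.
# BRANCH_NAME and the Maven goals are assumptions about the build setup.
import os
import subprocess

branch = os.environ.get("BRANCH_NAME", "")

if branch == "master":
    # Full build plus analysis; this publishes the report to the server.
    subprocess.run(["mvn", "verify", "sonar:sonar"], check=True)
else:
    # Build and test only; nothing is sent to the SonarQube server.
    subprocess.run(["mvn", "verify"], check=True)
```

Note this skips the analysis entirely off master rather than running it unpublished; for local, never-published feedback on other branches, SonarLint in the IDE is the replacement SonarSource points to.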

Bugzilla integration with SonarQube

I want to set up a build automation environment in which code is statically analysed first, and the issues identified by the static analysis are raised as bugs.
For static analysis I am using SonarQube, and for bug tracking I am using Bugzilla.
Is there a Bugzilla plugin available for SonarQube, so that once issues are identified they can be raised in Bugzilla directly?
Not only is this not available, it's not recommended, for a couple of reasons.
First, not every issue raised by SonarQube should be an individual work ticket:
some will be resolved as Won't Fix - i.e. valid issues but not relevant for your context.
some issues can be handled en masse, so rather than creating a ticket per, say, naming convention violation, you would create one ticket to fix all naming convention violations
some - very few - will be false positives.
Further, even if there were a SonarQube plugin to create work tickets for issues, the other side of the integration would be missing. I.e. if I comment on a work ticket in Bugzilla, I might reasonably expect that comment to show up in SonarQube as well. And it wouldn't.
In short, this type of integration would be an exercise in frustration for all involved - either immediately or eventually.

Does continuous inspection still work with Sonar 5.1.X?

I'm trying to run a preview analysis for a (Java) project of ours with SonarQube 5.1.1. I am able to get a local report generated; however, I get no coverage data, and I also get the message [INFO] [XX:YY:ZZ.ZZZ] Build Breaker plugin is no more supported in preview/incremental mode.
If I check here, the page says that "Starting with SonarQube 5.1, the Build Breaker plugin does not work any longer in the preview & incremental modes."
I'm confused - I thought that for continuous inspection one needs the build breaker plugin. Is that no longer so? Has the concept in SonarQube changed?
Why am I not getting coverage data when running a preview analysis?
I don't know where you've read this, but continuous inspection is not specifically related to the preview/incremental mode nor to the build breaker plugin - it's not even related to SonarQube (even though it has been pushed by SonarSource from the very beginning).
Here are the key points:
Continuous inspection is about analyzing your code as often as you can in order to monitor (and eventually improve) the quality of your code. Whatever the tool.
On SonarQube, this means running analyses that will push information on the server so that you can monitor what's going on and take the required actions for your application portfolio.
Obviously, when you are a developer, you'd like to manage those issues early, before they even get pushed to the source code repository. But experience tells us that preventing any code push because of issues is a bad pattern, because some issues might be false positives or not relevant in the context (and you still want, and have the right, to push your code). This is why we feel that the build breaker plugin is not aligned with all this; it will be replaced in upcoming versions of SonarQube by native features that better match these concepts:
Very efficient code analysis to display issues in the IDE at the speed of light - but without computing metrics
Preview mode that will compute everything and make it possible to check quality gate before pushing code to the source code repository - without impacting the results on the server
and in this case, using some specific information found in the logs, it will be possible for a CI to fail a build (see the sketch just below)
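For illustration, here is a minimal sketch of that CI-side check as it later became possible, assuming a SonarQube version whose scanner writes .scannerwork/report-task.txt and whose web API exposes api/ce/task and api/qualitygates/project_status; the URL and token are placeholders:

```python
# Minimal sketch: after the scanner runs, wait for the server-side analysis
# task and fail the CI build when the quality gate status is ERROR.
# URL, token, and the report file path are placeholders.
import sys
import time
import requests

SONAR_URL = "https://sonar.example.com"  # placeholder server URL
TOKEN = "squ_example_token"              # placeholder analysis token

# The scanner records the background task id in report-task.txt.
props = dict(
    line.strip().split("=", 1)
    for line in open(".scannerwork/report-task.txt")
    if "=" in line
)
task_id = props["ceTaskId"]

# Poll the compute-engine task until it finishes.
while True:
    task = requests.get(
        f"{SONAR_URL}/api/ce/task", params={"id": task_id}, auth=(TOKEN, "")
    ).json()["task"]
    if task["status"] not in ("PENDING", "IN_PROGRESS"):
        break
    time.sleep(5)

if task["status"] != "SUCCESS":
    sys.exit(1)  # the analysis itself failed

# Fetch the quality gate status for that analysis; break the build on ERROR.
status = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"analysisId": task["analysisId"]},
    auth=(TOKEN, ""),
).json()["projectStatus"]["status"]

sys.exit(1 if status == "ERROR" else 0)
```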

Automated Software Versioning integrated with Issue Control System

I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unsolved issues in my mind in terms of automating and integrating SDLC tools.
Version Pattern:
major.minor.revision.build
Such that:
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature, or an enhancement to an existing feature, is resolved in the issue tracking system.
Revision: changes that do not affect the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit the source unless an issue has been solved in the issue tracking system, and the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from the tasks.
Furthermore, I am adding a continuous integration tool to this configuration, and assume that it is Bamboo (by the way, I have never used Bamboo before; I used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to elucidate what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Since I am a test-driven developer, I write the unit, DBUnit, and user-acceptance tests (for example, using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook script that triggers the continuous integration tool to build the project (P).
While building, the test code should be run in the proper phase and its reports generated. The user-acceptance tests should be performed on a dedicated server (for example, JBoss or Tomcat); the order should be: bring the server up, run the UA tests, generate the UA test reports, then bring the server down.
If all these steps have completed successfully, the versioning should be performed. In the versioning part, a Maven plugin (or whatever else) should take the number of issues resolved from the issue tracking system, increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file in order to show them in the user interface.
Last but not least, the CI tool should deploy the artifact to the test environment.
That is the whole auto-cycled process I want.
The deployment of the artifact to the production environment should be done automatically or manually?
Let's start with the side question: automatic deployment to production requires the sign-off of "the business", whoever that is. How good do your tests need to be to push to production automatically? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know if you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally you'll need your own bit of scripting to do the query and update the build number accordingly.
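To illustrate just that last scripting step, here is a minimal sketch that pulls resolved-issue counts from JIRA's REST search API and assembles the version string; the URL, credentials, project key, JQL, and build number are all placeholders, and the major version stays manual as the question specifies:

```python
# Minimal sketch: derive minor/revision from resolved JIRA issue counts.
# URL, credentials, project key, JQL, and build number are placeholders.
import requests

JIRA_URL = "https://jira.example.com"  # placeholder JIRA server
AUTH = ("build-bot", "secret")         # placeholder credentials
MAJOR = 1                              # incremented manually

def count_issues(jql: str) -> int:
    """Return the number of issues matching a JQL query; maxResults=0
    fetches only the total count, not the issue bodies."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["total"]

minor = count_issues(
    'project = P AND issuetype in ("New Feature", Improvement) AND status = Resolved'
)
revision = count_issues("project = P AND issuetype = Bug AND status = Resolved")
build = 42  # supplied by the CI server, e.g. Bamboo's build number variable

print(f"{MAJOR}.{minor}.{revision}.{build}")  # e.g. 1.12.87.42
```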
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?

What's the most efficient way for a developer to switch between tasks?

I'm looking for a workflow-type description of the series of steps you perform to switch from one software development task to another. If a step involves a tool, please specify which tool and how it's used. The goal of the workflow is to have the smoothest possible transition from task #1 to task #2 and back to task #1.
Consider this scenario...
You're implementing a new user story and, while you've made progress so far today, it's not quite done and you haven't implemented your tests yet.
Your lead comes to you with a high priority bug that's blocking your test team. You need to stop what you're doing and get the bug fixed. The bug is in a build from three days ago, which is the most recent build the test team has picked up.
You can fix the bug in a new version of the sources, but it has to be a stable version and can't include the incomplete feature you're currently working on.
Alt + Tab is how we do it.
Task switching is a thing of the brain. I don't think there is a tool to do that for you. If there is, I am also interested.
Each person has their own way of preparing: some don't prepare at all and are into the next thing in a snap, some take more time, etc. It depends on the person.
Sure, you can try to create some mental milestones (taking a note, placing a reminder, etc.) to return to when getting back to the task, but this again depends on other factors (how long the task switch was, how quiet the office is, familiarity with the task, moon phases, etc.).
The most efficient way for a developer to switch between tasks is, I think, subjective.
Meanwhile, have you read Human Task Switches Considered Harmful by Joel Spolsky?
I would say the steps you need to take in the scenario you describe are 100% dependent on the development environment and tools you have set up.
Using Perforce for source code version control, we have set up a branching system where the releases are separate from development work and all development branches stem from a single "acceptance" branch. Each branch is used for a single issue, or for a set of very closely related issues. No other issues can be worked on in a branch until the changes have been integrated up to the acceptance branch.
Yes, it does mean we have a lot of branches. Yes, we do a lot of syncing (acceptance down to a work branch) and integrating (work branch up to acceptance). But its worth is incalculable when it comes to easily switching from one task to another, going back to a test build, spotting two issues biting each other, etc.
After development has done its thing (including their own tests), an issue is tested by the QA team: first in isolation in its own branch. After that it is integrated into the acceptance branch, and a regression test is done to find any problems with independent issues biting each other. When all issues for a release have thus been integrated into acceptance, a full regression and new-functionality test is executed by the QA team.
So, the acceptance branch is always the "latest" state of development for the app.
In this set up the scenario you describe would play out as:
Leave my current task as it is, possibly check in any outstanding changes so as not to lose them when my computer crashes. If that means breaking a daily build of that branch, I wouldn't check in, unless it is easy to fix the compile errors. (Please note that we have many apps in our application suite and while my changes may compile in the app I am working on, they may still break the compilation of other apps in our suite) Our rule is: each submit may break functionality, but must not break the build process.
Find an "empty" branch - a branch that is not currently being used for any development work, or, if all branches are taken, create a new one.
Force sync the acceptance branch and the selected work branch so my machine is guaranteed to have the latest state for both branches.
Synchronize (forced if necessary) the latest state of the acceptance branch to the work branch, so the selected work branch is the same as the acceptance branch.
Open up that branch's application suite in the IDE, debug and solve. Submit to the work branch.
Tell QA to have a look at it in the work branch. If they are satisfied with it, integrate the changes up to acceptance so they can continue their test.
Switch the IDE back to work on the application suite in the branch I was working on before.
Rinse and repeat.
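For the force-sync and integrate steps above, here is a minimal sketch using standard p4 commands wrapped in a script; the depot paths and change description are placeholders for this hypothetical branch layout:

```python
# Minimal sketch of the branch switch: force-sync both branches, then bring
# the acceptance state into the chosen work branch. Paths are placeholders.
import subprocess

def p4(*args: str) -> None:
    """Run a Perforce command, failing loudly on errors."""
    subprocess.run(["p4", *args], check=True)

ACCEPTANCE = "//depot/acceptance/..."
WORK = "//depot/work1/..."  # the "empty" branch picked for the bug fix

p4("sync", "-f", ACCEPTANCE)       # force latest acceptance state locally
p4("sync", "-f", WORK)             # force latest work-branch state locally
p4("integrate", ACCEPTANCE, WORK)  # copy acceptance changes into the branch
p4("resolve", "-am")               # auto-merge where possible
p4("submit", "-d", "Sync work1 with acceptance before bug fix")
```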
Considering your scenario,
you could check out the stable version of the sources in another working copy, correct the bug, and commit.
When you come back to your incomplete work, do an update and continue working.
When you're working on something, you usually have a few ideas, a few things you're planning to do, and some stuff that is not clear and has to be solved later. All of that tends to get lost when you switch to another task.
I found it useful to write them down somewhere - take a brain snapshot. Later it's easier to restore it and get back faster to your original task.
I make a note of every file I'm working on inside of a Task/Todo item with a reminder in approx. the amount of time I will be away from it. Then I save and close each of those files to prevent them from distracting me/eliminate the clutter/create room for the new task on my desktop. I have the memory of a flea, so I need all the help I can get.
