We have started using SonarQube analysis for C# and JavaScript. Our application is an old one, so when we analyzed it for the first time (for the first release) it reported thousands of bugs. What we want now is to set a benchmark for bugs: when I run the next scan on the same project, I should not get the same thousands of defects again, only the new bugs related to the current release (the second release). Does SonarQube have something we can configure to set such a benchmark?
What you want is to fix the leak. You can configure your quality gates to rely on issues introduced during the leak period instead of on the absolute totals.
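For example, a minimal sketch of that setup via the web API, assuming a SonarQube 6.x/7.x server on localhost, default admin credentials, and a placeholder gate id (1); adjust all of these to your installation:

    # 1. Compare each analysis against the previous version (the leak period):
    curl -u admin:admin -X POST "http://localhost:9000/api/settings/set" \
         -d "key=sonar.leak.period" -d "value=previous_version"
    # 2. Add a gate condition that fails only on issues raised during the leak period:
    curl -u admin:admin -X POST "http://localhost:9000/api/qualitygates/create_condition" \
         -d "gateId=1" -d "metric=new_violations" -d "op=GT" -d "error=0" -d "period=1"

With a condition on New Issues (new_violations) rather than on the total issue count, the historical findings stay visible on the project but no longer fail the gate.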
Related
We are having an issue with SonarQube analysis where known issues are failing the quality gate. This is an existing code base; after the initial analysis, existing issues should remain as they are and only new code should be analyzed. Thus if a developer checks in code, I would expect only the new changes to be analyzed and scanned. However, during the leak period SonarQube is flagging both new changes and existing code as issues.
How does Sonar determine the differences between the current and previous analyses for the leak period? Is it purely comparing source files, or is there something else happening? What could cause existing code to raise new issues in the leak period? I'm trying to determine how to diagnose and troubleshoot this.
Running:
SonarQube 7.1
sonar-scanner-msbuild 4.2.0.1214
TFS 2012
4 TFS 2012 Build Agents
No SCM integration
Edit:
I mostly see these issues in Bugs and Code Smells. The leak period is based on the previous run, versions are not being used. It seems to be more problematic with SonarQube 7.1 than with the previous 6.7. Here is an example flow that happened:
1) Initial Sonar analysis/scan -- all code is green
2) New check-in -- All code is green
3) New check-in with a one-line change -- all previously "green" items from step 1 are flagged and the gate fails
Maybe this is my ignorance in understanding the quality gate, but I have a failing quality gate due to the default 40% Coverage over the leak period when using the Sonar way quality gate via a VSTS build. The issue is that there have been no modifications to the code between the initial analysis and the latest analysis, so to use the metaphor in the docs, there is no additional water in the kitchen; hence I see no reason for this criterion to fail.
Has anyone else experienced this and/or can anyone explain the logic if this is indeed the expected behavior? IMO, I would expect the Leak Period Code Coverage check to not apply when no modifications have occurred on the code base during the leak period.
My SQ analysis is being executed via VSTS and the version of SQ is 6.7.3.
Summary of Analysis with failing QG due to coverage leak (also the percentage coverage is still the same)
The issue also occurs when there are code modifications and those specific modifications have 100% code coverage
The Sonar way quality gate configuration is as follows (default configuration):
As requested, I have also created a simple demo project which also demonstrates the behavior (running builds with SQ analysis and the second build fails due to 0.0% coverage over leak period although there is no new code). Sample project can be found here
Appreciate if someone can explain this behavior to me as it seems contrary to the documentation.
In case anyone stumbles upon this question, it turned out to be a bit of a red herring!
Following some good discussions with colleagues we were all in agreement that a coverage gate condition with Over Leak Period set to always was an incorrect gate configuration.
I was under the false assumption that the SonarQube way gate was the built-in default and was not editable. It turns out, however, that the gate is in fact editable; read-only mode was only enforced from v7.0 onwards, as per the release notes. So the root cause of our misconfiguration is still under investigation, but we can be sure that it is not the default and recommended gate described in the SonarQube documentation.
In the example posted above, the gate complains that the additional 0.7% coverage is below the minimum 40% error level. The condition itself is applied as configured, but it really should be impossible to enable the leak period for the plain Coverage metric; the Coverage on New Code condition should be used instead.
So the simple one-liner solution:
Make sure Over Leak Period is set to never (and/or not selected) if using the Coverage condition
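For reference, here is a sketch of the corrected condition created through the web API, assuming SonarQube 6.7, a local server, admin credentials, and a placeholder gate id (1); the 40% threshold simply mirrors the example above:

    # Condition on "Coverage on New Code" instead of plain Coverage over the leak period:
    curl -u admin:admin -X POST "http://localhost:9000/api/qualitygates/create_condition" \
         -d "gateId=1" -d "metric=new_coverage" -d "op=LT" -d "error=40" -d "period=1"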
I'm trying to run a preview analysis for a (Java) project of ours with SonarQube 5.1.1. I am able to get a local report generated; however, I get no coverage data, and I also get the message [INFO] [XX:YY:ZZ.ZZZ] Build Breaker plugin is no more supported in preview/incremental mode.
If I check here, the page says that "Starting with SonarQube 5.1, the Build Breaker plugin does not work any longer in the preview & incremental modes."
I'm confused - I thought that for continuous inspection one needs the build breaker plugin. Is that no longer so? Has the concept in SonarQube changed?
Why am I not getting coverage data when running a preview analysis?
I don't know where you've read this, but continuous inspection is not specifically related to the preview/incremental mode nor to the build breaker plugin - it's not even related to SonarQube (even though it has been pushed by SonarSource from the very beginning).
Here are the key points:
Continuous inspection is about analyzing your code as often as you can in order to monitor (and eventually improve) the quality of your code. Whatever the tool.
On SonarQube, this means running analyses that will push information on the server so that you can monitor what's going on and take the required actions for your application portfolio.
Obviously, when you are a developer, you'd like to manage those issues early, before they even get pushed to the source code repository. But experience tells us that blocking every code push because of issues is a bad pattern, because some issues might be false positives or not relevant in the context (and you still want, and have the right, to push your code). This is why we feel that the build breaker plugin is not aligned with all this, and it will be replaced in upcoming versions of SQ by native features that better match these concepts:
Very efficient code analysis to display issues in the IDE at the speed of light - but without computing metrics
Preview mode that will compute everything and make it possible to check quality gate before pushing code to the source code repository - without impacting the results on the server
and in this case, using some specific information found in the logs, it will be possible for a CI to fail a build.
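For reference, the preview run described in the question is typically launched like this on a Maven project (a sketch for SonarQube 5.x; the issues-report properties are optional and produce output a CI job could inspect):

    # Nothing is pushed to the server in preview mode.
    mvn sonar:sonar -Dsonar.analysis.mode=preview \
        -Dsonar.issuesReport.console.enable=true \
        -Dsonar.issuesReport.html.enable=true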
I decided to use the following pattern after reading about semantic versioning at http://semver.org/. However, I have some unresolved questions about automating and integrating the SDLC tools.
Version Pattern:
major.minor.revision.build
Such that;
Major: major changes; should be incremented manually.
Minor: minor changes; should be incremented automatically whenever a new feature, or an enhancement to an existing feature, is resolved in the issue tracking system.
Revision: changes that do not affect the minor version; should be incremented automatically whenever a bug is resolved in the issue tracking system.
Assume that developers never commit source unless an issue has been resolved in the issue tracking system, and that the issue tracking system is JIRA in this configuration. This means that there are bugs, improvements, and new features as issue types by default, apart from the tasks.
Furthermore, I am adding a continuous integration tool to this configuration; assume it is Bamboo (by the way, I have never used Bamboo before, I used Hudson). I am using the Eclipse IDE with the Mylyn plugin, and the project is a Maven (web) project.
Now, I want to elucidate what I want to do by illustrating the following scenario. An analyst (A) opens an issue (I), which is a new feature, related to Maven project (P). As a developer (D), I receive an email about the issue, and I open the task via the Mylyn interface in Eclipse. I understand and develop the new feature related to issue (I). Consider that I am a Test Driven Development oriented developer, so I write the unit, DBUnit, and user-acceptance tests (for example using Selenium) accordingly. Finally, I commit the changes to source control. I think the rest should be cycled automatically, but I don't know how I can achieve this. The auto-cycled part is the following:
The source control system should have a post-commit hook that triggers the continuous integration tool to build the project (P). While building, the tests should run in the proper phase and their reports should be generated. The user-acceptance tests should run against a dedicated server (for example JBoss or Tomcat): start the server, run the UA tests, generate the UA test reports, and stop the server. If all these steps complete successfully, versioning should be performed. In the versioning part, a Maven plugin (or whatever else) should take the number of issues resolved from the issue tracking system, increment the related version fragments (minor and revision), and finally append the build number. The fragments of the version may be saved in the manifest file so they can be shown in the user interface. Last but not least, the CI tool should deploy the artifact to the test environment. That is the whole auto-cycled process I want.
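To make the versioning step concrete, here is a rough sketch of what I imagine the CI script doing; the JIRA URL, credentials, JQL, the jq tool, and the BUILD_NUMBER variable are all placeholders for whatever the real setup provides:

    # Count resolved features and bugs in JIRA (the 'total' field of the search result):
    MINOR=$(curl -s -u user:pass \
      "https://jira.example.com/rest/api/2/search?jql=project=P+AND+issuetype=%22New+Feature%22+AND+status=Resolved&maxResults=0" \
      | jq '.total')
    REVISION=$(curl -s -u user:pass \
      "https://jira.example.com/rest/api/2/search?jql=project=P+AND+issuetype=Bug+AND+status=Resolved&maxResults=0" \
      | jq '.total')
    # Stamp the Maven project with major.minor.revision.build:
    mvn versions:set -DnewVersion="1.${MINOR}.${REVISION}.${BUILD_NUMBER}"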
Should the deployment of the artifact to the production environment be done automatically or manually?
Let's start with the side question. On automatic deployment to production: this requires the sign-off of "the business", whoever that is. How good do your tests need to be to push to production automatically? Are they good enough that you trust things to just go live? What's your downtime? Is that acceptable? If your tests miss something, can you roll back? Are you monitoring production so you know whether you've introduced problems? Generally, the answers to enough of these questions are negative enough that you can't auto-deploy there as the result of a build/autotest event.
As for the tracking, you'll need a few things. You'll need all your assumptions to be true (which I doubt they are, but if you get there that's awesome). You'll also need a build number that can be incremented after build time based on test results. You'll need source changes to be annotated with bug ids. You'll need the build system to parse the source changes and make associations with issues. You'll need an API into the build system so you can get the count of issues associated with the build. Finally you'll need your own bit of scripting to do the query and update the build number accordingly.
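For example, here is a small sketch of the "parse the source changes" part, assuming Git and commit messages that carry JIRA keys such as PROJ-123; it counts the distinct issues included since the last release tag:

    git log "$(git describe --tags --abbrev=0)"..HEAD --pretty=%s \
      | grep -oE '[A-Z][A-Z0-9]*-[0-9]+' | sort -u | wc -l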
That's totally doable, but is it really worth having? What's the value you attach to the numbering scheme?
I've used a Continuous Integration server in the past with great success and never had the need to perform a code freeze on the source control system.
However, lately it seems that everywhere I look, most shops are using the concept of code freezes when preparing for a release, or even for a new test version of their product. This idea has even taken hold in my current project.
When you check-in early and often, and use unit tests, integration tests, acceptance tests, etc., are code freezes still needed?
Continuous integration is a "build", but it belongs to the programming part of the development cycle, just as the "tests" in TDD do.
There will still be builds and testing as part of the overall development cycle.
The point of continuous integration and tests is to shorten the feedback loops and give programmers more visibility. Ultimately this does mean fewer problems in testing and builds, but it doesn't mean you no longer do those original parts of your development cycle - they are just more effective and can be raised to a higher level, since more trivial problems are caught earlier in the development cycle.
So you will still have to have a code freeze (or at least a branch) in order to ensure the baseline for what you are shipping is as expected. Just because someone can implement something with a high degree of confidence does not mean it goes into your release without passing through the same final cycles, and the code freeze is an important part of that.
With CI, your code freezes can be very short, since your final build, testing and release may be very reliable, and code freeze may not even exist on small projects, since there is no need for a branch - you release and go right back into development on the next set of features very quickly.
I'd also like to add that CI and TDD allow the final build and testing phase to move back closer to the traditional waterfall (do all the dev, do all the testing, then release), as opposed to the more continual QA that has been done on projects with weekly or monthly builds. Your testers can use the CI builds to give early feedback, but it's effectively a different sort of feedback than in the final testing, where you are looking for stability and reliability rather than functionality (which obviously was missed by the unit "tests" the developers had built).
Code freezes are important, because continuous integration does not replace runtime regression testing.
Having an application build and pass unit testing is only a small part of the challenge. Ideally, when you freeze code for a release, you are signing off on two things:
This code has been fully regression-tested and is defect-free
This code is EXACTLY the code that should be in production (for SOX compliance).
If you're using a modern SCM, just fork the code at that point and start work on the next release in a branch, then merge when the project is deployed. (Of course, place a label so you can roll back to that point if you need to apply a breaking patch.)
Once code is in "release mode", it should not be touched.
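For example, with Git (any modern SCM offers the equivalent; the branch and tag names here are just placeholders):

    git checkout -b release/1.4                    # freeze point: only release fixes land here
    git tag -a v1.4-freeze -m "Code freeze 1.4"    # label so you can roll back to this point
    git checkout master                            # development of the next release continues here
    # ...after the release is deployed, fold the release fixes back in:
    git merge release/1.4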
Our typical process:
    Development
        ||
        \/
       QAT
        ||
        \/
       UAT => Freeze until deploy date => Deploy => Merge and repeat
         \                                           /
          \-------- New Branch for future dev ------/
The code freeze has more to do with QA than it has to do with Dev. The code freeze is the point where QA has said: "Enough. We only have bandwidth to fully test the new features added in so far." That doesn't mean dev doesn't have the bandwidth to add more features, it's just that QA needs time with a quiescent code base to ensure that everything works together.
If you're all in continuous integration mode (QA included) this could be just a freeze of a very short time while QA puts the final seal of approval on the whole package just before it goes out the door.
It all depends on how tightly your QA and regression testing are integrated into the dev cycle.
I'd second the votes already mentioned about SCM branching and allowing dev to continue on a different code branch than what QA is testing. It all goes back to the same thing. QA and regression testing need a quiescent code base for a period of time prior to release.
I think that code freezes are important because each new feature is a potential new source of bugs. Sure regression tests are great and help address this issue. But code freezes allow the developers to focus on fixing currently outstanding bugs and get the current feature set into a release worthy state.
At best, if I really wanted to develop new code during a code freeze, I would fork the frozen tree, do my work there, then after the freeze, merge the forked tree back in.
I'm going to sound like one of the context-driven people but the answer is "it depends".
Code Freeze is a strategy to cope with a problem. If you don't have the problem it is good at addressing, then no, it isn't needed. If you have another technique for addressing the problem, then no, it isn't needed.
Code Freeze is one technique for reducing risk. The advantages it brings are stability and simplicity. The disadvantage it brings is that development is blocked for the duration of the freeze, which costs velocity.
Another technique is to use Branching, such as with "Feature Branches". The disadvantage of Branching is the cost of dealing with the branches and of merging changes.
The technique you're describing for reducing risk is Automated Testing to give fast feedback. The trade-off here is increased velocity for some increased risk (you will miss some bugs).
Of these approaches, I'm with you: I prefer Automated Testing. But there are some situations, such as a very high cost of failure, where a Code Freeze does provide a lot of value.