SonarQube quality gate - fail on 'unsupported language'

New to SonarQube quality gates, and asking for some help -
Is there a way to have a quality gate fail based on, to use the business term, an 'unsupported language' - which may be defined as anything that doesn't have a SonarQube scanner, or more specifically by a business stance of 'no more language XYZ'?
Applied:
Is there a way to fail the quality gate, at least at a high level, by detecting the existence of files by extension (e.g., .groovy for Groovy files, .c/.h for C/C++ files, .bsh for BeanShell scripts), and then provide a simple line count showing how many files, and how many lines, are in the files that are 'out of compliance'?
P.S. The line count is to show 'scope'; if file size is easier, that would work at a high level, although a line count is easier to reason about and estimate with.
P.P.S. An additional reason/complication: 'code coverage' metrics are based only on recognized scans, so a project can show '95% code coverage' from just 2 Java files while the other 40 Groovy files, for example, are not included in the metric at all. This example should help explain why I'm looking for ways to address this (other than adding an unofficial Groovy Sonar plugin, which I know about, but which is not the root problem).
Thanks for any creative/quick-to-apply options!
-Darren
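As a rough illustration of the CI-side approach the question hints at, here is a minimal sketch (not an existing SonarQube feature): a small Python script that walks the repository, counts files and lines per 'out of compliance' extension, and exits non-zero so that a pipeline step - rather than the quality gate itself - can fail the build. The extension list and the skipped directories are assumptions for the example and would need to match the actual business rules.

import os
import sys

# Extensions considered 'out of compliance' -- an assumed list for this example.
BANNED = {".groovy": "Groovy", ".bsh": "BeanShell", ".c": "C/C++", ".h": "C/C++"}
# Directories assumed to be noise (vendored or generated code).
SKIP_DIRS = {".git", "node_modules", "target", "build"}

counts = {}  # extension -> [file count, line count]
for root, dirs, files in os.walk("."):
    dirs[:] = [d for d in dirs if d not in SKIP_DIRS]
    for name in files:
        ext = os.path.splitext(name)[1].lower()
        if ext in BANNED:
            with open(os.path.join(root, name), errors="ignore") as f:
                lines = sum(1 for _ in f)
            entry = counts.setdefault(ext, [0, 0])
            entry[0] += 1
            entry[1] += lines

for ext, (n_files, n_lines) in sorted(counts.items()):
    print(f"{BANNED[ext]} ({ext}): {n_files} files, {n_lines} lines")

# A non-zero exit code lets the CI job (not SonarQube) fail the build.
sys.exit(1 if counts else 0)

Run as its own pipeline step next to the SonarQube scan; SonarQube itself has no built-in gate condition for this, but the pipeline can enforce it.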

Related

Understand SonarQube and its testing coverage

I am a beginner with SonarQube and have really tried to google and read a lot of community pages to understand which functions SonarQube offers.
What I don't get is: what does the test coverage in SonarQube refer to?
If it says, for example, that the coverage on New Code is 30%, what does 'new code' mean?
And when does SonarQube say that an issue is a bug? Is the analyzed code compared to a certain standard in order for SonarQube to say that there is a bug?
I hope someone with more knowledge about SonarQube can help me understand it. Thank you very much.
Test coverage (also known as code coverage) corresponds to the proportion of the application code (i.e., code excluding test and sample code) that is executed by test cases, out of all application code in the code base.
SonarQube does not compute code coverage itself. Instead, coverage is computed and uploaded by external code coverage tools (e.g., Cobertura, JaCoCo). SonarQube presents code coverage at different levels (e.g., line coverage, condition coverage); see https://docs.sonarqube.org/latest/user-guide/metric-definitions/#header-9.
Coverage on new code refers to the proportion of code that is both covered and added (or modified) since a certain baseline, out of all code added or changed since that same baseline. The baseline can, for example, be the previously analyzed code state or the code state of the previous commit. For example, if 50 lines were added or changed since the baseline and tests execute 40 of them, coverage on new code is 80%. That is, this metric expresses how extensively the changes have been tested. Note that 100% coverage does not mean that the code has been perfectly tested; it just says that all code has been executed by test cases.
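As a concrete (and hedged) example of "uploaded by external tools": a minimal sonar-project.properties sketch for a Java project whose coverage is produced as a JaCoCo XML report. The source/test layout and the report path are assumptions and depend on the build setup.

# analysis scope (assumed Maven-style layout)
sonar.sources=src/main/java
sonar.tests=src/test/java
# import the coverage report produced by JaCoCo (path depends on your build)
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml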
Issues in SonarQube do not necessarily represent bugs. Usually, most issues are not bugs but problems affecting code maintainability in the long term (e.g., code duplication) or violations of best practices. Still, some issues can represent bugs (e.g., potential null dereferences, incorrect concurrency handling).
Note that an issue can also be a false positive and therefore not a problem at all.
Most issues are identified with static code analysis by searching the code structure for certain patterns. Some can be uncovered by simple code searches (e.g., violations of naming conventions). Other analyses / issue classes may additionally need data-flow analysis (e.g., null dereferences) or require byte-code information.
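As a small illustration of the data-flow case (Python used only as an example language), this is the kind of pattern a null-dereference check would typically flag, because one execution path reaches the dereference with an unassigned value:

def find_user(users, name):
    match = None
    for user in users:
        if user.name == name:
            match = user
    # flagged by a data-flow check: 'match' may still be None here
    return match.email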

How to exclude golang tests, structs and constants from counting against code coverage in SonarQube?

SonarQube 7.3 has built-in support for Go, where I have found at least 2 issues:
It does not automatically exclude *_test.go files from coverage. In unit tests it also picks up IP addresses and asks for them to be made configurable (not constant, which also does not fix the issue).
It counts structs and consts as uncovered lines and hence reports a significantly lower coverage percentage than the Go coverage tool itself, making it a poor fit. For example, in a medium-sized project it reports 40% coverage against the Go tool's 70%.
Apart from marking them all with Sonar exclusion comments, or putting constants and structs in a common exclusion pattern file, is there something else that can be done? Is there a plan to address these issues in a later version of SonarQube?
For now (SonarQube 7.4), the SonarGo analyzer does not automatically identify *_test.go files as test files. This is a missing feature, which is why the SonarGo documentation describes how to properly identify test files through settings:
sonar.test.inclusions=**/*_test.go
Without proper test identification, the coverage result will be wrong and the analysis result could raise issues that do not make sense (like hard-coded IP addresses in tests).
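Put together, a minimal sonar-project.properties sketch for a Go project might look like the following. The project key and the coverage report path are assumptions (the profile is assumed to come from go test ./... -coverprofile=coverage.out), and the property names should be double-checked against the SonarGo documentation for your SonarQube version.

sonar.projectKey=my-go-project
sonar.sources=.
# mark *_test.go as test code so it is kept out of coverage and source rules
sonar.test.inclusions=**/*_test.go
# import the native Go coverage profile, e.g. produced by: go test ./... -coverprofile=coverage.out
sonar.go.coverage.reportPaths=coverage.out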
Regarding coverage accuracy (for files that are not test files), there are two cases:
If a file has entries related to it in the coverage report, the covered lines shown in SonarQube should match exactly the ones in the report; otherwise it's a major bug. But the percentage shown by the Go tools (range coverage) could be slightly different from the percentage shown in SonarQube (line coverage), e.g., +2%.
If a file is not in the coverage report, SonarGo generates 0% coverage for it based on its own definition of executable lines of code. If there is a difference from go test's definition of executable lines of code, it's a bug that will disappear once the file is partially covered.
The best way to have those coverage bugs fixed is to report them at community.sonarsource.com by creating a 'Report a Bug' topic with a small code reproducer.

SonarQube and Cucumber features

Is there any way to include the test coverage of Cucumber features and other useful statistics in the SonarQube analysis? I have done a bit of research, but couldn't find a proper plugin.
From this thread (written after the OP's question), David Racadon added:
As far as I understand:
It is not possible to run an analysis on a project containing only test code because the 'sonar.sources' property is mandatory.
Measures on test code are not aggregated at project level.
As far as I am concerned, I consider test files part of the project the same way source files are. Thus, measures of test files should be aggregated on top of source files.
For now, SonarQube shows that your project is 1,000 lines even if you have 0 or 10,000 lines of test code on top of those 1,000 lines of source code. For me, SonarQube gives you a biased estimate of the size of your project and the effort of maintenance.
The closest would then be his plugin racodond/sonar-gherkin-plugin which:
analyzes Cucumber Gherkin feature files and:
Computes metrics: lines of code, number of scenarios, etc.
Checks various guidelines to find out potential bugs and code smells through more than 40 checks
Provides the ability to write your own checks

SonarQube exclude based on date?

Now that the Cutoff date plugin for Sonar is deprecated (I've tried it, and it doesn't seem to work at all), is there any way to exclude issues based on a set date?
For a large project, it is desirable to really start fresh at a given point.
Maintaining a low alert threshold (< 100 alerts) is much more manageable for the developers than cluttering the issue listings with old/low-priority issues (1500+). Identifying the new and relevant ones then becomes much harder.
What we want to focus on are new issues.
Example:
when you have changed the quality rules for, say, 200,000+ lines of code, you are really only interested in what is produced from now on, and in changes to existing code that break the new rules.
If my feeling is correct, you're not aware of how to work with differential periods in SonarQube, because out of the box SonarQube is designed to cover exactly this kind of common use case. See:
http://www.sonarqube.org/differentials-four-ways-to-see-whats-changed/
http://www.sonarqube.org/using-differentials-to-move-the-team-in-the-right-direction/
http://www.sonarqube.org/differentials-but-wait-theres-more/
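On the SonarQube versions those posts refer to, a differential period can also be pointed at a fixed date, which effectively gives the 'start fresh from this point' behaviour the question asks for. A hedged sketch, assuming an older SonarQube where differential views are configured through the sonar.timemachine.period* properties (newer versions expose this as the Leak Period / New Code setting in the UI instead):

# compare against the project state as of a fixed date (yyyy-MM-dd)
sonar.timemachine.period1=2014-01-01
# other accepted values include: previous_analysis, previous_version, or a number of days

With a period set, the differential views and 'new issues' figures only count what has changed since that date.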

How does one implement FxCop / static analysis on an existing code base

What are some of the strategies that are used when implementing FxCop / static analysis on existing code bases with existing violations? How can one most effectively reduce the static analysis violations?
Make liberal use of the [SuppressMessage] attribute to begin with, at least at the beginning. Once you get the count to 0 via the attribute, put in a rule that new check-ins may not introduce new FxCop violations.
Visual Studio 2008 has a nice code analysis feature that allows you to ensure that code analysis runs on every build and you can treat warnings as errors. That might slow things down a bit so I recommend setting up a continuous integration server (like CruiseControl.NET) and having it run code analysis on every checkin.
Once you get it under control and aren't introducing new violations with every check-in, start to tackle whole classes of FxCop violations at a time, with the goal of removing the SuppressMessage attributes that you used.
The way to keep track of which ones you really want to keep is to always add a Justification value to the ones you really want to suppress.
Rewrite your code in a passing style!
Seriously, an old code base will have hundreds of errors - but that's why we have novice/intern programmers. Correcting FxCop violations is a great way to get an overview of the code base and also learn how to write conforming .NET code.
So just bite the bullet, drink lots of caffeine, and just get through it in a couple days!
NDepend looks like it could do what you're after, but I'm not sure if it can be integrated into a CruiseControl.Net automated build, and fail the build if the code doesn't meet the requirements (which is what I'd like to happen).
Any other ideas?
An alternative to FxCop would be to use the tool NDepend. This tool lets you write code rules over C# LINQ queries (what we call CQLinq). Disclaimer: I am one of the developers of the tool.
More than 200 code rules are proposed by default. Customizing existing rules or creating your own rules is straightforward thanks to the well-known C# LINQ syntax.
To keep the number of false positives low, CQLinq offers the unique capability to define the set JustMyCode through special code queries prefixed with notmycode. More explanations about this feature can be found here. For example, here are two default notmycode queries:
Discard generated and designer Methods from JustMyCode
Discard generated Types from JustMyCode
To keep the number of false positives low, with CQLinq you can also focus rule results only on code added or refactored since a defined baseline in the past. See the following rule, which detects overly complex methods added or refactored since the baseline:
warnif count > 0
from m in Methods
where m.CyclomaticComplexity > 20 &&
      (m.WasAdded() || m.CodeWasChanged())
select new { m, m.CyclomaticComplexity }
Finally, note that with NDepend, code rules can be verified live in Visual Studio and at build time, in a generated HTML+JavaScript report.
