The idea is simple: I own a project and I want to fail any new build on TeamCity based on a code coverage percentage. This percentage can never go down. This way I ensure that new commits are covered.
TeamCity provides this out of the box. Go to the build configuration for the project and open 'Failure Conditions'. There you can add a failure condition on a metric change; one of the available metrics is 'Percentage of line coverage'. You can set it so that the build fails if this metric is less than 0 in difference from the last build.
Beware of adding this, though, especially on projects where code coverage is not already 100%: a refactoring that removes lines which happen to be covered by tests will lower the overall coverage and fail the build, despite not adding any new functionality. For example, if 80 of 100 lines are covered (80%) and you delete 10 of the covered lines, coverage drops to 70 of 90 lines (about 77.8%).
I am trying to add a quality gate in SonarQube to fail if the new code coverage % drops below the overall code coverage.
Has anyone tried this?
You're going to have to be more specific. I'm not sure what "overall" means. Are you referring to the "after merge" value? It's also unclear whether you're referring to a "base project" or a pull request or branch.
If you're looking to ensure that the "after merge" coverage on a branch scan satisfies a required threshold, I'm pretty sure you can't do that out of the box with SonarQube, but you should also specify what version of SonarQube you're using.
I implemented a check for whether the "after merge" coverage value of a branch scan satisfies our required threshold, but I had to do it in build-script code, using the SonarQube Web API. The script obtains the project's quality gate along with the resulting coverage from the scan, and if the coverage is below the required number, it fails the build with an appropriate message. There's no way to mark the scan itself as being in violation, but at least we can make the build fail.
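As a rough illustration (not the poster's actual script), a check like that could look something like the following Go sketch. It assumes a SonarQube server with the Web API available, a user token in the SONAR_TOKEN environment variable, and placeholder values for the server URL, project key, branch name, and required threshold. The /api/measures/component endpoint is a standard part of the Web API, but the 'branch' parameter requires an edition with branch analysis.

// Minimal sketch of a coverage gate check against the SonarQube Web API.
// Server URL, project key, branch, and threshold below are placeholders.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"strconv"
)

// measuresResponse models the part of the /api/measures/component
// payload we need: the list of requested measures for the component.
type measuresResponse struct {
	Component struct {
		Measures []struct {
			Metric string `json:"metric"`
			Value  string `json:"value"`
		} `json:"measures"`
	} `json:"component"`
}

func main() {
	const threshold = 80.0 // required coverage, in percent (assumed value)

	// 'coverage' on a branch scan is the "after merge" value discussed above.
	url := "https://sonarqube.example.com/api/measures/component" +
		"?component=my-project&branch=feature/x&metricKeys=coverage"

	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	// SonarQube accepts a user token as the basic-auth user name
	// with an empty password.
	req.SetBasicAuth(os.Getenv("SONAR_TOKEN"), "")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Printf("unexpected status from SonarQube: %s\n", resp.Status)
		os.Exit(1)
	}

	var body measuresResponse
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}

	for _, m := range body.Component.Measures {
		if m.Metric != "coverage" {
			continue
		}
		value, err := strconv.ParseFloat(m.Value, 64)
		if err != nil {
			panic(err)
		}
		if value < threshold {
			// We can't mark the scan itself as a violation, but a
			// non-zero exit code makes the build step fail.
			fmt.Printf("coverage %.1f%% is below the required %.1f%%\n", value, threshold)
			os.Exit(1)
		}
		fmt.Printf("coverage %.1f%% meets the required %.1f%%\n", value, threshold)
		return
	}
	fmt.Println("no coverage measure returned for the component")
	os.Exit(1)
}

Run it as a build step after the scan completes; the non-zero exit code is what actually breaks the build.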
I want to use SonarQube on my project. The project is quite big and scanning all the files takes a lot of time. Is it possible to scan only the files changed in the last commit, and get a report based only on the changed lines of code?
I want to check whether added or modified lines make the project quality worse, and I don't care about the old code.
For example, person A creates a file with 9 bugs and commits the changes - the report and quality gate should show 9 bugs. Then person B edits the same file, adding a few lines containing 2 additional bugs, and commits - the report should show only the 2 new bugs, and the quality gate should be evaluated against the last changes (so it should consider only those 2 bugs).
I was able to narrow the scan to only the files changed in the last commit, but the report is still generated from whole files. I considered cutting out only the changed lines of code, pasting them into a new file, and running the Sonar scan on that file, but I'm almost sure SonarQube needs the whole context of the file.
Is it possible to achieve my use case somehow?
No, it is not possible. There have been many similar questions; these are the answers to two of them:
New Code analysis only:
G Ann Campbell:
Analysis will always include all code. Why? Why take the time to analyze all of it when only a file or two has been changed? Because any given change can have far-reaching effects. I’ll give you two examples:
I check in a change that deprecates a much-used method. Suddenly, issues about the use of deprecated code should be raised all over the project, but because I only analyzed that one file, no new issues were raised.
I modify a much-used method to return null in some cases. Suddenly all the methods that dereference the returned value without first null-checking it are at risk of NullPointerExceptions. But only the one file that I changed was analyzed, so none of those “Possible NPE” issues are raised. Worse, they won’t be raised until after each individual file happens to be touched.
And that’s why all files are included in each analysis.
I want Sonar analysis on newly checked-in code:
G Ann Campbell:
First, the SonarQube interface and default Quality Gate are designed to help you focus on the New Code Period. You can’t keep analysis from picking up those old issues, but you can decide to only pay attention to issues raised on newly-changed code. That means you would essentially ignore the issues on the left side of the project homepage with a white background and focus instead on the New Code values over the yellow background on the right. We call this Fixing the Leak, or alternately Clean as You Code.
Second, if you have a commercial edition, then branch and PR analysis are available to you. With Short-Lived Branch (SLB) and PR analysis, the scan still covers all files, but all that’s reported in the UI is what’s changed in the PR / SLB.
Ideally, you’ll combine both of these things to make sure your new code stays clean.
The position on this matter has not changed over the years, so don't expect it to change.
SonarQube 7.3 has built-in support for Go (golang), in which I have found at least 2 issues:
It does not automatically exclude *_test.go files from coverage. In unit tests it also picks up IP addresses and asks for them to be made configurable (making them constants does not fix the issue either).
It counts struct and const declarations as uncovered lines and hence reports a significantly lower covered percentage than the Go coverage tool itself, making it a poor fit. For example, in a medium-sized project it reports 40% coverage against the Go tool's 70%.
Apart from marking them all with Sonar exclusion comments, or putting constants and structs in files matched by a common exclusion pattern, is there anything else that can be done? Is there a plan to address these issues in a later version of SonarQube?
For now (SonarQube 7.4), the SonarGo analyzer does not automatically identify *_test.go files as test files. This is a missing feature, which is why the SonarGo documentation describes how to properly identify test files through settings:
sonar.test.inclusions=**/*_test.go
Without proper test identification, the coverage results will be wrong, and the analysis could raise issues that do not make sense (like hard-coded IP addresses in tests).
About the coverage accuracy (for files that are not test files), there are two cases:
If a file has entries related to it in the coverage report, the covered lines shown in SonarQube should exactly match the ones in the report; otherwise it's a major bug. But the percentage shown by the Go tools (range coverage) could be slightly different from the percentage shown in SonarQube (line coverage), e.g. +2%.
If a file is not in the coverage report, SonarGo generates 0% coverage for it based on its own definition of an executable line of code. If there's a difference from go test's definition of an executable line of code, it's a bug, and it will disappear once the file is partially covered.
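To make that mismatch concrete, here is a small illustrative Go file (not taken from the original posts). go test -cover measures statement coverage, so the declarations below add nothing to its denominator, while a line-based model that classifies them as executable lines would count them as uncovered:

// Illustrative only: none of the declarations below are statements,
// so 'go test -cover' ignores them entirely.
package sample

import "fmt"

// A const declaration contains no executable statements.
const MaxRetries = 3

// A type declaration contains no executable statements either.
type Config struct {
	Host string
	Port int
}

// Only the statement in this function body counts toward go's
// statement coverage; a line-based counter sees many more lines.
func Addr(c Config) string {
	return fmt.Sprintf("%s:%d", c.Host, c.Port)
}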
The best way to have those coverage bugs fixed is to report them at community.sonarsource.com by creating a 'Report a Bug' topic with a small code reproducer.
Is there any way to include the test coverage of Cucumber features and other useful statistics in the SonarQube analysis? I have done a bit of research but couldn't find a proper plugin.
From this thread (written after the OP's question), David Racadon added:
As far as I understand:
It is not possible to run an analysis on a project containing only test code because the 'sonar.sources' property is mandatory.
Measures on test code are not aggregated at project level.
As far as I am concerned, I consider test files part of the project the same way source files are. Thus, measures of test files should be aggregated on top of source files.
For now, SonarQube shows that your project is 1,000 lines even if you have 0 or 10,000 lines of test code on top of those 1,000 lines of source code. For me, SonarQube gives you a biased estimate of the size of your project and the effort of maintenance.
The closest would then be his plugin racodond/sonar-gherkin-plugin which:
analyzes Cucumber Gherkin feature files and:
Computes metrics: lines of code, number of scenarios, etc.
Checks various guidelines to find out potential bugs and code smells through more than 40 checks
Provides the ability to write your own checks
We use VS2008/2010 with TFS 2010 for our source control because it also lets us create custom work item types that we can use for project management, such as product backlog items and sprint backlog items.
One item that's not tracked (by machine) is build regression test tasks for release candidates. Our regression testing is part automated, part manual, and the manual part can take several days. Currently we use an Excel spreadsheet with a list of all the test cases, and the testers just fill in results and notes.
I've been proposing a build regression test template that contains each test case and its default owner; when we want to do regression testing on a build, we can then automatically create work items for every test in the template.
My argument is that if the regression test work is mandatory for the project, and the results should be tracked, then creating additional TFS work items makes sense, especially since the work items can hold estimates, giving managers an idea of how much re-test time remains.
The argument against this is that we already have high-level work items to capture the overall project test requirements, and the regression testing is basically a "re-test", so new work items would be duplicates.
My question: Is anyone else doing anything like this? Is it reasonable to use TFS to track outstanding re-test tasks?
Note: we don't own Visual Studio Test Professional
I think it's reasonable to go with your suggested solution. You could add another work item type for the "test tasks" that can be linked as children of the test requirement work items. Doing that, as you said, would allow you to track results, progress, reporting, etc. You can also add other fields like build number, tested by, and tested date to the work item type for history, something that cannot be done with just one test requirement work item type.
Essentially, what you proposed is done in the ITestResult object in the Microsoft.TeamFoundation.TestManagement.Client.dll.