In a project where some components have test coverage and others don't, it seems that SonarQube calculates the total code coverage based only on the components that have coverage. I would expect the lines of code in components without test coverage to be classified (at least for calculations) as having 0% code coverage.
Example:
Project X
Module 1: 100% coverage
Module 2: N/A coverage (I reason that this is equivalent to 0% in a computation!)
SonarQube coverage (Project X): 100%
How is the total coverage calculated? If this is by design, why?
Coverage is calculated based on the lines covered during test execution. If you have no tests, test execution is skipped, no code is executed at all, and therefore no coverage data is produced. So there is a difference between N/A and 0%.
If you add an empty test to the module, the module will be used for the calculation and produce 0% coverage as a result.
EDIT:
Assuming you are using JaCoCo for coverage: JaCoCo writes the coverage information into a file (i.e. jacoco.exec). For unit tests (Maven Surefire) this file is written to the target directory of the module, and coverage is calculated for that module using that file. No file, no coverage.
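For illustration, a minimal sketch of that per-module setup, assuming the jacoco-maven-plugin (the version number here is only an example):

    <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <version>0.8.8</version>
        <executions>
            <execution>
                <goals>
                    <!-- attaches the JaCoCo agent to the Surefire JVM, so every
                         module writes its own target/jacoco.exec during mvn test -->
                    <goal>prepare-agent</goal>
                </goals>
            </execution>
        </executions>
    </plugin>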
When I run integration tests, I want to determine which parts of the code of the entire project are covered by the tests. My integration tests are typically located in a separate module as well. So per-module coverage doesn't make much sense in that case, because the tests (one module) and the production code (all other modules) are separated.
So I run the integration tests and write the coverage information into a single jacoco-it.exec file in the target folder of the root module (project). As a result, the coverage for the entire code base is calculated, including modules with 0% coverage (see here for how to set it up: Multi-Module Integration Test Coverage with Jacoco and Sonar).
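The essential trick of that setup is pointing the agent of every module at one shared file. A sketch of the prepare-agent configuration for the integration test run (destFile and append are real plugin parameters; the exact layout follows the linked article):

    <configuration>
        <!-- all modules append their integration test coverage
             to a single file under the build root -->
        <destFile>${session.executionRootDirectory}/target/jacoco-it.exec</destFile>
        <append>true</append>
    </configuration>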
What you lose with this approach is the information about which parts of the code of a single module are covered by the tests of that same module, because parts of a module can be covered by tests of another module; i.e. the coverage of a module can be 30% without that module having any tests at all.
So you have the choice:
detailed coverage per module, with test-less modules having N/A coverage
total coverage for entire project, with test-less modules having 0% coverage but without exact coverage information by module
both: use both approaches, for unit tests and integration tests separately (a sketch of the corresponding analysis properties follows below)
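For the third option, a sketch of the analysis invocation, assuming the separate UT/IT report properties of older SonarQube versions (sonar.jacoco.reportPath / sonar.jacoco.itReportPath; the exact property names depend on your SonarQube version):

    # per-module unit test reports plus the shared integration test report
    mvn sonar:sonar \
        -Dsonar.jacoco.reportPath=target/jacoco.exec \
        -Dsonar.jacoco.itReportPath=target/jacoco-it.exec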
Related
I am integrating the Jenkins JaCoCo plugin into the Jenkins pipeline at two different stages: one after the unit tests, which produce a jacoco.exec file.
I used jacoco(params) to attach this to the build.
Right after that, I run my integration tests with coverage, which produce jacoco-it.exec, and used jacoco(params) again to attach it to the build.
But my build shows two different coverage charts with the merged coverage reports.
I would like to get the unit test coverage and integration test coverage separately on the build. Is this possible at all? I could not find any documentation related to this use case.
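For reference, a minimal sketch of the pipeline described above (the stage names and Maven goals are illustrative assumptions; jacoco is the step contributed by the Jenkins JaCoCo plugin):

    pipeline {
        agent any
        stages {
            stage('Unit tests') {
                steps {
                    sh 'mvn test' // writes target/jacoco.exec
                    // attach the unit test coverage to the build
                    jacoco execPattern: '**/target/jacoco.exec'
                }
            }
            stage('Integration tests') {
                steps {
                    sh 'mvn verify' // writes target/jacoco-it.exec
                    // attach the integration test coverage to the build
                    jacoco execPattern: '**/target/jacoco-it.exec'
                }
            }
        }
    }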
I know how to check the coverage of unit test cases; we can see coverage for each .swift file in the Xcode coverage report. But what about UI test cases?
As per my understanding, in unit test cases the subject being tested is a .swift file. If a file has a class with 4 methods/functions, the unit test coverage of that file would be 100% only if all 4 methods are called from unit test cases.
In UI test cases the subject is the view. Does interacting with all the UI elements lead to 100% coverage? How does the coverage report for UI tests work?
Edit:
In unit tests - I know that when a few lines of a function are not covered I see a red overlay; there I know that I have to write unit tests for the screenshot class method (line 56 in the image attached above). Is there any similar mechanism in UI tests?
In UI tests - how can we find which UI element is left uncovered?
The code coverage report can be generated for both unit and UI tests in Xcode. In your test scheme, choose Gather coverage for the required targets. You can only get coverage for targets in your workspace.
The way the coverage report is collected for UI tests is the same as for unit tests.
Even if you interact with all UI elements in your app, some code might be uncovered by tests. If you aim to increase your coverage, add additional tests to execute previously uncovered code.
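The same can be done from the command line; a sketch, where the scheme name and result bundle path are placeholders for your own (xccov is available in recent Xcode versions):

    # run the tests (unit and UI) with coverage collection enabled
    xcodebuild test -scheme MyApp -enableCodeCoverage YES -resultBundlePath Result.xcresult

    # inspect the per-file and per-function coverage from the result bundle
    xcrun xccov view --report Result.xcresult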
Does SonarQube always require an external code coverage tool like JaCoCo (Java), Coverage (Python), or gcov (C/C++) in order to show coverage on a Sonar server?
SonarQube by itself doesn't measure any coverage. That's the job of other tools like JaCoCo.
However, SonarQube can gather the "results" of the build that are relevant to project quality (including, of course, coverage as an important code quality metric) and allows tracking of that quality over time.
Usually you run the coverage tool first; it "adjusts" (instruments) the code, then you run the tests in the build. The coverage tool creates some result files, and only after that do you run the Sonar plugin, which processes the results and sends them to the SonarQube server.
So, to answer your question: yes, without an external code coverage tool, Sonar won't produce any coverage results, and no, it doesn't have a "default, built-in" coverage tool.
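The order of operations, sketched for a Maven + JaCoCo project (any other coverage tool follows the same pattern):

    # 1. build and test with the coverage tool attached;
    #    JaCoCo writes its result files during this step
    mvn clean verify

    # 2. only afterwards run the Sonar analysis, which processes the
    #    coverage results and sends them to the SonarQube server
    mvn sonar:sonar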
I have a Jenkins job that runs multiple jobs; some of those are unit tests for different parts of our platform.
One of those jobs is phpunitTest, which basically makes sure that all tests are passing and generates code coverage using Codecept.
My question now is, how can I make sure new code pushed is covered by the unit tests?
Currently I'm using this command to run the coverage:
codeception/codeception run unit --coverage-html --quiet
I expect to have a failed test if the code pushed isn't unit tested.
Unless Codecept has special (and unusual) tooling for this, there are basically two ways: achieve 100% coverage and verify that at every run, or force a move towards 100% coverage. Since most projects don't even go for 100% coverage (which is not at all the same as having covered all your bases; see for example SQLite for why 100% is just the beginning), I'll assume the latter. What you can do in that situation is to
enforce that the coverage percentage minimum is met at every CI run and
enforce that the coverage percentage is never lowered.
By these simple expedients (a rough sketch of such a gate follows below) you'll naturally ensure that code coverage goes up with every piece of code added.
This does not guarantee that each new piece of code is 100% covered; for that you would have to parse the coverage checker results and see if any new or changed files are mentioned as missing coverage.
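For illustration, a hypothetical ratchet script; this is not Codeception tooling, and the report path, the min_coverage.txt ratchet file, and the Clover-style XML layout (as produced by --coverage-xml) are all assumptions here:

    # coverage_gate.py - hypothetical sketch of a coverage ratchet
    import sys
    import xml.etree.ElementTree as ET

    # parse the Clover-style report written by the coverage run (path assumed)
    metrics = ET.parse("tests/_output/coverage.xml").getroot().find("./project/metrics")
    covered = int(metrics.get("coveredstatements"))
    total = int(metrics.get("statements"))
    pct = 100.0 * covered / total

    # min_coverage.txt holds the last accepted minimum (checked into the repo)
    with open("min_coverage.txt") as f:
        minimum = float(f.read().strip())

    if pct < minimum:
        sys.exit(f"Coverage {pct:.1f}% fell below the ratchet of {minimum:.1f}%")

    # raise the ratchet so coverage can never go back down
    with open("min_coverage.txt", "w") as f:
        f.write(f"{pct:.2f}")
    print(f"Coverage {pct:.1f}% >= {minimum:.1f}% - OK")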
I'm running a Maven multi-module project, and using Sonar Runner to analyze the project for SonarQube 6.3. This project contains both unit and integration tests in every module. I succeeded in generating reports for UT and IT in target/jacoco-ut.exec and target/jacoco-it.exec.
I think the analysis parameters for Sonar Runner are good, as I can see both reports being processed and merged during the analysis.
From SonarQube 6.3 on, there's no difference anymore between unit tests and integration tests, yet the only measure reported is "Unit tests", which suggests integration tests are ignored.
When I look at the coverage measures in SonarQube, I'm surprised, because the number of tests reported is not the sum of the number of unit tests and the number of integration tests. Integration tests are not listed in the measures. To me, if both unit tests and integration tests were merged in SonarQube, they should both appear in the measures, but that's not the case.
I can't find anything in the SonarQube documentation about the inclusion of integration tests in measures. There are only notes that they are merged during analysis, yet I don't see anything about my integration tests in the coverage measures.
How can I see integration tests and unit tests in coverage measures?
All tests are now merged into "Coverage", so those numbers include the sum of UT and IT coverage. However, prior to the merger there were no metrics about integration tests themselves (test count, duration, errors, etc.), so there was nothing there to merge.
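At analysis time the merge simply means both report files are fed to the same property; a sketch, assuming the comma-separated sonar.jacoco.reportPaths property of that SonarQube generation:

    # both exec files end up in the single merged "Coverage" measure
    mvn sonar:sonar \
        -Dsonar.jacoco.reportPaths=target/jacoco-ut.exec,target/jacoco-it.exec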
In fact, metrics about tests (count, errors...) really aren't seen as relevant in general and remain in the system only because they've been grandfathered.