Preserve code coverage statistics if not calculated in SonarQube - maven

In the build process of my application, I use SonarQube to show some statistics about code quality. In particular, we use it to show the code coverage of tests that are executed nightly.
To calculate and retrieve the code coverage data, I use the JaCoCo Maven plugin and agent this way:
mvn org.jacoco:jacoco-maven-plugin:0.7.8:dump \
    sonar:sonar \
    -Djacoco.address=TEST_SERVER \
    -Djacoco.destFile=/proj/coverage-reports/jacoco-it.exec \
    -Dsonar.projectKey=TEST \
    -Dsonar.projectName=TEST \
    -Dsonar.branch=TEST \
    -Dsonar.jacoco.itReportPath=/proj/coverage-reports/jacoco-it.exec
The code coverage calculated this way is correct (it reflects the expected coverage of the tests that run nightly).
But there are cases where I cannot execute the jacoco:dump goal to retrieve the coverage statistics. In those cases, executing sonar:sonar drops the existing coverage statistics (calculated by previous executions of the jacoco:dump goal) to 0, because SonarQube assumes that a statistic that is not sent does not exist.
What I would like is that, when I do not dump and recalculate the coverage with JaCoCo, the coverage measures in SonarQube are not lost but stay equal to the last calculated values.
Is there any way to instruct the SonarQube server, or maybe the Maven Sonar plugin, to preserve past code coverage statistics?
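A minimal sketch of one workaround, assuming the jacoco-it.exec file from the last successful dump is still on disk (the path below is illustrative): re-feeding that old report to sonar:sonar makes SonarQube import the previous coverage again instead of resetting it to 0.
mvn sonar:sonar \
    -Dsonar.projectKey=TEST \
    -Dsonar.projectName=TEST \
    -Dsonar.branch=TEST \
    -Dsonar.jacoco.itReportPath=/proj/coverage-reports/jacoco-it.exec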

Related

Is there a way to generate a report for the SonarQube branch new code directly and only once?

What I'm targeting is to run SonarQube analysis on a branch only once and generate a report for just the new code (like pull requests, where the analysis runs one time and shows the result for only the new code).
What happens right now when scanning a branch is that the first analysis generates a report for the overall code, and every later analysis reports on the new code.
I've tried the sonar.newCode.referenceBranch property to analyze my branch against a reference branch and display a report for only the new code, but unfortunately it doesn't work as expected, and I still have to run the analysis again to see the new-code result.
Does anyone have an idea how to achieve this?
Thanks
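For context, this is roughly how such a run would be invoked (a sketch, assuming a SonarQube edition with branch analysis; the branch names are illustrative):
mvn sonar:sonar \
    -Dsonar.branch.name=feature/my-branch \
    -Dsonar.newCode.referenceBranch=main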

How to send test results and code coverage results to SonarQube project after executing analysis

I have a monolith, and I would like to send both static code analysis and code coverage results to Sonar. However, my Sonar scan takes at least 30 minutes, which is very bad for CI feedback time.
I was wondering if there is a way to run the static code analysis in parallel with other tasks and report test coverage to Sonar in a later stage of the CI pipeline. I need them in a single scan.
The SonarQube documentation is clear that parallel scanning is not available, but this isn't parallel scanning, just aggregation.
It is impossible. SonarScanner sends all data together. It also requires access to the test results to present data correctly. You may consider splitting test executions to save some time.
               /--> test 1/3 ---\
              /                  \
Start --> Build ----> test 2/3 ----> SonarScanner --> End
              \                  /
               \--> test 3/3 ---/
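A rough Maven-level sketch of the diagram above, assuming the suite can be split into three profiles (the profile names are illustrative) that run as parallel CI jobs, each writing its own JaCoCo exec file; a single sonar:sonar run then imports all of them (sonar.jacoco.reportPaths is the older, deprecated import property, and recent SonarQube versions import JaCoCo XML reports instead):
mvn verify -Ptests-part1 -Djacoco.destFile=target/jacoco-part1.exec
mvn verify -Ptests-part2 -Djacoco.destFile=target/jacoco-part2.exec
mvn verify -Ptests-part3 -Djacoco.destFile=target/jacoco-part3.exec
mvn sonar:sonar \
    -Dsonar.jacoco.reportPaths=target/jacoco-part1.exec,target/jacoco-part2.exec,target/jacoco-part3.exec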

How to get a measure of the 'test count' using SonarJS?

Is there a way to get a measure of the number of tests in SonarQube JavaScript project?
We currently only have coverage metrics, and SonarQube even identifies the test files as 'Unit test', but I can't find a measure for test count anywhere.
In contrast, on my Java project I do have a test count measure.
Coverage metrics are provided by SonarJS, while test count is not. You need to use the Generic Test Coverage plugin (by providing test execution results in XML format) in order to get it.
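In recent SonarQube versions the same idea is available as generic test execution data; a sketch of such a report, assuming it is imported with sonar.testExecutionReportPaths (the file path and test names are illustrative):
<testExecutions version="1">
  <file path="test/app.spec.js">
    <testCase name="renders the widget" duration="12"/>
    <testCase name="handles empty input" duration="3"/>
  </file>
</testExecutions>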

How To Capture Unit Testing Metrics

I'm not sure how to capture the test result data related to unit tests each time a unit test is run. I use Bamboo as a continuous integration server. It works great: it basically makes a build of your project every time you submit code, and sends you an email if the build failed / you screwed up somewhere. I would like Bamboo to start running full unit test suites as well as builds. I would also like to begin gathering data about those unit tests.
My question is this: I know that in a lot of different tools you can track the number of lines of code changed and the total number of lines of code in the entire program. I also know that unit testing gives you data such as the number of passes / failures. What I would like to do is automatically gather this data, among other data such as defect density, etc.

Comparing/trending test data with googletest and Jenkins

My C++ project uses googletest to produce XML results in the JUnit format for Jenkins. This is working well for pass/fail results and test durations.
Some of my tests measure code performance and assert that this exceeds some threshold. I would like to extend this to charting the performance data over successive builds. I use the googletest RecordProperty method to log additional information in the XML:
<testcase name="MyTest" status="run" time="3.964" classname="MyTestSuite" PerformanceData="131" />
How can I configure Jenkins or one of its plugins to chart PerformanceData (or an equivalent record) across successive builds?
You could try the Plot Plugin to plot the performance numbers.
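As a sketch of one way to feed it, assuming the Plot Plugin is configured to read a Java properties file (it plots the value stored under the YVALUE key per build) and that the googletest report is written to test-results.xml; the file names and the XPath expression below are illustrative:
PERF=$(xmllint --xpath 'string(//testcase[@name="MyTest"]/@PerformanceData)' test-results.xml)
echo "YVALUE=${PERF}" > performance.properties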

Resources