Comparing/trending test data with googletest and Jenkins - performance

My C++ project uses googletest to produce XML results in the JUnit format for Jenkins. This is working well for pass/fail results and test durations.
Some of my tests measure code performance and assert that this exceeds some threshold. I would like to extend this to charting the performance data over successive builds. I use the googletest RecordProperty method to log additional information in the XML:
<testcase name="MyTest" status="run" time="3.964" classname="MyTestSuite" PerformanceData="131" />
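For reference, that attribute is emitted by a call like the following inside the test (a sketch; the measured value and threshold here are illustrative):

#include <gtest/gtest.h>

TEST(MyTestSuite, MyTest) {
    // Run the benchmark; 131 stands in for the measured figure (e.g. ops/sec).
    const int performanceData = 131;

    // The threshold assertion mentioned above.
    EXPECT_GT(performanceData, 100);

    // Adds PerformanceData="131" as an attribute of this <testcase>
    // element in the JUnit-style XML report.
    RecordProperty("PerformanceData", performanceData);
}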
How can I configure Jenkins or one of its plugins to chart PerformanceData (or an equivalent record) across successive builds?

You could try the Plot Plugin to plot the performance numbers.
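The Plot Plugin charts values that each build writes to a small data file (it accepts Java properties, CSV, or XML input). A minimal sketch of a post-build shell step, assuming the plugin's properties format (which reads the value from a YVALUE key); the file names are hypothetical:

# Pull the PerformanceData attribute out of the googletest XML and
# write it in the format the Plot Plugin's properties reader expects.
PERF=$(grep -o 'PerformanceData="[0-9]*"' test_results.xml | head -n 1 | tr -dc '0-9')
echo "YVALUE=$PERF" > perf.properties

Then add a plot to the job configuration that points at perf.properties; each build contributes one data point to the chart.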

Related

JMeter: How to assign weights/frequencies/sampleRates to ThreadGroups/"TestSets" dynamically from CSV file

I would like to create a JMeter test setup whose important sampling and test variation parameters are entirely controlled by CSV files (i.e. without modifying the JMX file). It should be run with Maven.
The idea is as follows:
sources.csv contains
sampleRate;file
40;samplesForController1.csv
30;samplesForController2.csv
5;samplesForController3.csv
...
sampleRate should determine how often the tests defined by the parameters in the respective file are executed, relative to the others.
How could this be achieved? I'm asking about the first step here (making sure the files/test samples are sampled/executed at the indicated sampleRate), as I think I can solve the second part (dealing with the parameters in samplesForController1.csv etc.) myself.
P.S.
I'm struggling with the options presented in the question In Jmeter, I have to divide number of thread into the multiple http requests in different percentage but have to keep sequence remain same, since:
afaics, thread groups cannot be created on the fly/dynamically/programmatically
apparently, the Throughput Controller needs to know its child elements' probabilities upfront (i.e. they cannot be created dynamically); otherwise, the sampling is very odd (I could not get it working while maintaining the desired sampleRate)
I have not tried to integrate jmeter-plugins into the Maven build so far, as my impression is that the available plugins/controllers also need to know their child elements upfront
It is possible to create thread groups programmatically; check out:
Five Ways To Launch a JMeter Test without Using the JMeter GUI
jmeter-from-code example project
jmeter-java-dsl project
You can use a Switch Controller with a function like __groovy() to generate the child element index; an example implementation can be found in the Running JMeter Samplers with Defined Percentage Probability article (see the sketch below)
It's not a problem to use JMeter Plugins with Maven; see the Adding jar's to the /lib/ext directory documentation section for an example setup
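For the rates in the example CSV (40, 30 and 5 out of a total of 75), the Switch Controller's "Switch Value" could be a weighted random index along these lines (a sketch in the spirit of the linked article; here the weights are hard-coded rather than read from sources.csv):

${__groovy(def r = new Random().nextInt(75); r < 40 ? 0 : (r < 70 ? 1 : 2))}

With three child controllers under the Switch Controller, the first is then executed with probability 40/75, the second with 30/75 and the third with 5/75.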

How to send test results and code coverage results to SonarQube project after executing analysis

I have a monolith, and I would like to report both static code analysis and code coverage to Sonar. However, my Sonar scan takes at least 30 minutes, which is very bad for CI feedback time.
I was wondering if there is a way to run the static code analysis in parallel with other tasks and report test coverage to Sonar in a later stage of the CI pipeline. I need them in a single scan.
The SonarQube documentation is clear that parallel scanning is not available, but this isn't parallel scanning, just aggregation.
It is impossible. SonarScanner sends all data together. It also requires access to the test results to present data correctly. You may consider splitting test execution to save some time.
                  /--> test 1/3 ----\
                 /                   \
Start --> Build ----> test 2/3 -----> SonarScanner --> End
                 \                   /
                  \--> test 3/3 ----/
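As a sketch, assuming Jenkins (which the surrounding questions use) and Maven, the shape above could be a declarative pipeline with a parallel test stage; the shard selectors are hypothetical, and the shards' reports must end up on the node that runs the scanner (e.g. via stash/unstash or a shared workspace):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B -DskipTests package' }
        }
        stage('Tests') {
            parallel {
                stage('test 1/3') { steps { sh 'mvn -B test -Dtest="Shard1*"' } }
                stage('test 2/3') { steps { sh 'mvn -B test -Dtest="Shard2*"' } }
                stage('test 3/3') { steps { sh 'mvn -B test -Dtest="Shard3*"' } }
            }
        }
        stage('SonarScanner') {
            // Needs the test and coverage reports from all three shards.
            steps { sh 'mvn -B sonar:sonar' }
        }
    }
}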

How to get a measure of the 'test count' using SonarJS?

Is there a way to get a measure of the number of tests in a SonarQube JavaScript project?
We currently only have coverage metrics, and SonarQube even identifies the test files as 'Unit test', but I can't find a measure for test count anywhere.
In contrast, on my Java project I do have a test count measure.
Coverage metrics are provided by SonarJS, while test count is not. You need to use the Generic Test Coverage plugin (by providing test execution results in XML format) to get it.
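In recent SonarQube versions that plugin's functionality is built in as the generic test execution report: you point the analysis property sonar.testExecutionReportPaths at an XML report of this shape (a sketch; the paths, names and durations are made up):

<testExecutions version="1">
  <file path="test/my_component.test.js">
    <testCase name="renders without crashing" duration="12"/>
    <testCase name="handles empty input" duration="4"/>
  </file>
</testExecutions>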

Preserve code coverage statistics if not calculated in SonarQube

In the build process of my application I use SonarQube to show some statistics regarding code quality. In particular we use it to show the code coverage of some tests that are executed nightly.
To calculate and retrieve code coverage data I use the JaCoCo Maven plugin and agent this way:
mvn org.jacoco:jacoco-maven-plugin:0.7.8:dump \
    sonar:sonar \
    -Djacoco.address=TEST_SERVER \
    -Djacoco.destFile=/proj/coverage-reports/jacoco-it.exec \
    -Dsonar.projectKey=TEST \
    -Dsonar.projectName=TEST \
    -Dsonar.branch=TEST \
    -Dsonar.jacoco.itReportPath=/proj/coverage-reports/jacoco-it.exec
The code coverage calculated in this way is correct (as it reflects the expected coverage of some tests which are executed nightly).
But there are cases where I cannot execute the jacoco:dump goal to retrieve the code coverage statistics. In those cases, executing sonar:sonar resets the existing code coverage statistics (calculated by previous executions of the jacoco:dump goal) to 0, because Sonar assumes that a statistic that is not sent does not exist.
What I would like is that, when I do not dump and calculate the code coverage with JaCoCo, the code coverage measures in SonarQube are not lost but keep the last calculated values.
Is there any way to instruct maybe the SonarQube server or the Maven Sonar plugin to preserve past code coverage statistics?

How To Capture Unit Testing Metrics

I'm not sure how to capture the test result data each time a unit test is run. I use Bamboo as a continuous integration server. It works great: it basically builds your project every time you submit code, and sends you an email if the build failed / you screwed up somewhere. I would like Bamboo to run the full unit test suite as well as builds, and I would also like to begin gathering data about those tests.
My question is this: I know many different programs can track the number of lines of code changed and the total lines of code in the entire program, and unit test runs give you data such as the number of passes/failures. What I would like to do is gather this data automatically, along with other data such as defect density.
