We use JaCoCo, but different builds on different machines, with the same code and Gradle script, give different results. The problem seems to be anonymous classes: they sometimes don't line up with the test run, even though it's all done as part of the same, clean build. We get this:
[ant:jacocoReport] Classes in bundle 'SomeThing' do no match with execution data. For report generation the same class files must be used as at runtime.
[ant:jacocoReport] Execution data for class a/b/c/$Mappings$1 does not match.
[ant:jacocoReport] Execution data for class a/b/c/$Mappings$3 does not match.
So it looks like the anonymous classes are getting out of sync. Any idea how to fix this? It's really messing with our rules, as we have, for instance, a 100% class coverage requirement, and this means some classes sometimes show up as not covered.
To generate a report based on the jacoco.exec file, JaCoCo performs analysis of bytecode, i.e. compiled classes.
Different versions of the Java compiler produce different bytecode. Even recompilation might produce different bytecode - see https://bugs.openjdk.java.net/browse/JDK-8067422 and https://github.com/jacoco/jacoco/issues/383.
If the classes used at runtime (during creation of jacoco.exec) are different (e.g. created on another machine with a different compiler), then they can't be associated with the classes used during creation of the report, leading to the message Execution data for class ... does not match. You can read more about class ids in the JaCoCo documentation.
All in all, to avoid this message, use the exact same class files during creation of jacoco.exec and during generation of the report.
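Not a fix in itself, but if you want to see which class files are out of sync, the check that report generation performs can be reproduced with a small sketch against the JaCoCo core API (the paths below are placeholders for your own build layout):

import java.io.File;
import java.io.IOException;

import org.jacoco.core.analysis.Analyzer;
import org.jacoco.core.analysis.CoverageBuilder;
import org.jacoco.core.analysis.IClassCoverage;
import org.jacoco.core.tools.ExecFileLoader;

public class ClassIdCheck {
    public static void main(String[] args) throws IOException {
        // Load the execution data recorded at runtime (example path).
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(new File("build/jacoco/test.exec"));

        // Analyze the class files the report would be generated from.
        CoverageBuilder builder = new CoverageBuilder();
        Analyzer analyzer = new Analyzer(loader.getExecutionDataStore(), builder);
        analyzer.analyzeAll(new File("build/classes/java/main"));

        // Classes whose id (a checksum of the class file) differs from the id
        // recorded in the exec file are the ones reported as "does not match".
        for (IClassCoverage cc : builder.getNoMatchClasses()) {
            System.out.println("No match: " + cc.getName());
        }
    }
}

Any class listed there (such as a/b/c/$Mappings$1 above) was compiled into a different class file than the one that was actually loaded when the tests ran.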
Related
I am using the JaCoCo Maven plugin for Scala test coverage, but when I run the tests I see in JaCoCo's index.html that the singleton objects are covered twice: one entry gives the correct coverage and the other gives a wrong coverage number.
[Screenshot: JaCoCo index.html showing the singleton object listed twice with different coverage numbers]
JaCoCo checks coverage of compiled code, not raw Scala code. I believe that in your compiled code there is a private constructor of the class which is not covered by any test, and that causes the coverage deficit. You have to investigate the compiled code to verify. However, there is a way to eliminate this problem: adding a trait.
trait XConverter

object XConverter {
  def doSomething() = {}
}
Run the JaCoCo coverage again and you will see that the coverage deficit disappears. This is equivalent to having static methods in an interface in Java: no hidden constructor.
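For comparison, this is the Java shape that last sentence refers to - static methods on an interface, which has no constructor for JaCoCo to count (hypothetical example):

// Hypothetical Java counterpart: an interface with only static methods
// has no hidden constructor, so nothing extra shows up as uncovered.
public interface XConverterJava {

    static String doSomething(String input) {
        return input.trim();
    }
}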
What is the significance of minimumClassCoverage and maximumClassCoverage in https://jenkins.io/doc/pipeline/steps/jacoco/
jacoco exclusionPattern: '**/generated-sources/**.class',
execPattern: '**/coverage-reports/jacoco-unit.exec',
inclusionPattern: '**/*.class',
sourceExclusionPattern: '**/generated-sources/**.java',
changeBuildStatus: true,
minimumBranchCoverage: '43',
minimumInstructionCoverage: '54',
maximumInstructionCoverage: '80',
minimumClassCoverage: '57',
maximumClassCoverage: '80',
minimumMethodCoverage: '55'
What do the thresholds mean?
minimumClassCoverage and maximumClassCoverage are class coverage percentages that determine whether the Jenkins build will be green.
On the same documentation page from your link, you can read:
And the coverage thresholds allow to configure how much coverage is necessary to make the build green (if changing the build status is enabled).
How do we understand "class coverage"?
The real question is: what is "class coverage"?
We can understand it as one of the following:
What percentage of lines is covered in each particular class?
How many classes are covered with required instruction/method percentage?
How many of all the classes in the project have more than zero coverage?
What "class coverage" actually is
The class counter is defined in the JaCoCo counters documentation.
From https://www.eclemma.org/jacoco/trunk/doc/counters.html:
Classes
A class is considered as executed when at least one of its methods has been executed. Note that JaCoCo considers constructors as well as static initializers as methods. As Java interface types may contain static initializers such interfaces are also considered as executable classes.
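In other words, a class counts as covered as soon as any one of its methods runs, even if most of it is untested. A small hypothetical illustration (JUnit 5, invented names):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Greeter has three methods in JaCoCo's view: hello(), goodbye(),
// and the implicit constructor.
class Greeter {
    String hello(String name)   { return "Hello, " + name; }
    String goodbye(String name) { return "Goodbye, " + name; }
}

class GreeterTest {
    @Test
    void coversOnlyOneMethod() {
        // Only hello() and the implicit constructor run here; goodbye() never does.
        // Class coverage for Greeter is still 100%, while method coverage is 2 of 3.
        assertEquals("Hello, Bob", new Greeter().hello("Bob"));
    }
}

So a 100% class coverage threshold only requires that every class is touched at least once, not that its lines or branches are fully exercised.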
I have an algorithm implemented by a number of classes, all covered by unit tests.
I would like to refactor it, which will change the behavior of two classes.
When I change one class and its tests, all unit tests pass, though the algorithm becomes incorrect until the refactoring is done.
This example illustrates that complete coverage by unit tests is sometimes not enough and I need "integration" tests for the whole algorithm in terms of input and output. Ideally, such tests should cover the behavior of my algorithm completely.
My question: it looks like by adding such integration tests I make the unit tests unnecessary and superfluous. I don't want to maintain duplicated test logic.
Should I remove my unit tests or leave them as is, e.g. for easier bug location?
This is part of the problem with tests which are too fine-grained and tightly coupled to the implementation.
Personally, I would write tests which focus on the behaviour of the algorithm and would consider this 'a unit'. The fact that it is broken into several classes is an implementation detail, in the same way that breaking a public method's functionality down into several smaller private methods is an implementation detail. I wouldn't write tests for the private methods separately; they would be tested by the tests of the public method's functionality.
If some of those classes are generically useful and will be reused elsewhere, then I would consider writing unit tests for them at that point, as they will then have some defined behaviour of their own.
This would result in some duplication, but that is OK: those classes now have a public contract to uphold (used by every component that depends on them), which those tests can define.
Interestingly, see the definition of Unit in this article
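As a hypothetical sketch of that approach (all names invented here): the test exercises only the public entry point of the algorithm, and the helper classes are covered indirectly through it.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Public entry point of "the unit": the behaviour we assert on.
class PriceCalculator {
    private final TaxRule taxRule = new TaxRule();
    private final Rounding rounding = new Rounding();

    double total(double net) {
        return rounding.round(net + taxRule.taxFor(net));
    }
}

// Helper classes: implementation details, not tested directly.
class TaxRule {
    double taxFor(double net) { return net * 0.2; }
}

class Rounding {
    double round(double value) { return Math.round(value * 100.0) / 100.0; }
}

class PriceCalculatorTest {
    @Test
    void totalIncludesTaxAndIsRounded() {
        // The behaviour of the whole unit is asserted here; TaxRule and
        // Rounding are exercised only through PriceCalculator.
        assertEquals(12.0, new PriceCalculator().total(10.0), 1e-9);
    }
}

If TaxRule later becomes a reusable component with its own contract, that is the point at which it earns its own tests.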
I know there are rules available that fire when a NotImplementedException is left in the code (which I can understand for a release build) (see this answer, or this question), but I would like to have the opposite: have Code Analysis ignore methods that throw a NotImplementedException, as they are known to be unfinished.
Background:
When I develop a block of code, I'd like to test what I have so far, before I'm completely done. I also want CA to be on, so I can correct errors early (like: didn't dispose of resources properly). But as I'm still developing that code, there are still some stubs that I know will not be used by my test. The body of such a stub usually consists of no more than a throw new NotImplementedException();.
When I build the code, CA complains that the method could be 'static' and that parameters are unused. I know I can suppress the CA errors, but that means I will have to remember to remove those suppressions when those methods are implemented, and it costs an extra build (one to get the messages so I can suppress them and one to really run the code).
My question:
Is there a way to have CA ignore methods that end in a NotImplementedException along (at least) one of the code paths? It would be extra nice if that could be set to only work in debug mode, so you can get warnings about that exception in a release build.
For the record, I'm using VS2010.
No. There is no such option in Code Analysis.
What you could do is write a custom rule which flags the throwing of a NotImplementedException and then process the Code Analysis outcome and remove any entry with the same target method as the NotImplemented custom rule. This rule should be pretty simple to implement. It would probably suffice to search for any Constructor call on the NotImplementedException class.
But this will only work if you run Code Analysis from the command line and then post-process the XML file.
You could also mark the method using the GeneratedCodeAttribute, as such methods are ignored by default.
I am facing an interesting situation. In my test assembly, I have folders containing specific test classes, i.e., TestFixtures. Consider, for example, the following hierarchy in VS:
Sol
  TestProject
    TestFolder1
      TestClass1
      TestClass2
    TestFolder2
      TestClass3
Now, when I run the following at command line:
nunit-console.exe /run:Sol.TestProject.TestFolder1.TestClass2 TestProject.dll
Things are running fine and all the tests are passing. But, if I run as below:
nunit-console.exe /run:Sol.TestProject.TestFolder1 TestProject.dll
In this case, some of the tests in TestClass2 are failing.
I have tried dumping the state of some of the relevant objects involved in the test, and the state seemed fine at the beginning of the test code in both cases. Also, TestClass1/2/3 do not have a superclass doing anything, so that is ruled out as well. Any ideas what else could be happening here?
I am using VS2010/.NET4.0 (4.0.30319.1)/nUnit 2.5.9.
Finally figured this out. I was using a singleton class for storing certain options. It looks like the singleton class instance is retained between runs of different TestFixtures (i.e., test classes) when they are run together, e.g., for a folder or for a whole project. I did not dump the state of this object initially, because I thought the singleton class would get a new instance for each TestFixture. Interesting finding, hope this helps someone.
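The general fix is to reset (or avoid) shared singleton state between fixtures. The question is about NUnit/C#, so this is only an analogous, hypothetical sketch of the idea in Java/JUnit 5:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical singleton holding options, analogous to the one described above.
class Options {
    private static final Options INSTANCE = new Options();
    private String mode = "default";

    static Options instance() { return INSTANCE; }

    String mode() { return mode; }
    void setMode(String mode) { this.mode = mode; }

    // Explicit reset hook so tests can restore a known state.
    void reset() { mode = "default"; }
}

class FirstFixtureTest {
    @BeforeEach
    void resetSharedState() {
        // Without this reset, changes made here leak into other fixtures
        // that run in the same process.
        Options.instance().reset();
    }

    @Test
    void changesMode() {
        Options.instance().setMode("fast");
        assertEquals("fast", Options.instance().mode());
    }
}

class SecondFixtureTest {
    @BeforeEach
    void resetSharedState() {
        Options.instance().reset();
    }

    @Test
    void seesDefaultMode() {
        // Passes whether it runs alone or after FirstFixtureTest,
        // because the singleton is reset before every test.
        assertEquals("default", Options.instance().mode());
    }
}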