Suppressing a null dereference warning in static analysis coming from stub implementations in tests - static-analysis

I have encountered a pattern of false positives from Coverity Scan. I have an interface I and two implementations, IImpl and FakeI:
interface I {
    String f();
}

class IImpl implements I {
    public String f() {
        return "f";
    }
}

class FakeI implements I {
    public String f() {
        return null;
    }
}
Given this code, if I then do the following
I i;
i.f().equals(other);
I get a null dereference warning, because the result of i.f() could be the null returned by FakeI. FakeI is implemented in test code, so my production code never even sees it, but Coverity does not know that.
What are the possible solutions? I thought of either removing test code from the analysis completely, or revisiting my fakes and making sure they never return null. Is there a Coverity feature that might help handle this?

Static analyzers in general do not benefit from including test code in the analysis. This is in contrast to dynamic analysis, where the tests play a crucial role: they are what is being executed, so that there is something to analyze in the first place. And since tests represent simplified (shorter, self-contained) usage of the APIs, it is easier to analyze reports generated from tests than from an actual running binary.
There are some benefits to including test code in static analysis: there may be bugs in the tests themselves that the analyzer can help find and resolve.
There are disadvantages, though, especially the one I was asking about here.
I am now removing tests from the scope of the analysis, which actually seems to be what the Coverity Scan documentation recommends: their suggested Maven build command is mvn compile, which compiles only the main sources and not the tests.

Related

How can I have a common test suite for multiple packages in go?

When I'm writing an interface, it's often convenient to define my tests in the same package as the interface, and then define multiple packages that implement the interface, e.g.:
package/
package/impl/x <-- Implementation X
package/impl/y <-- Implementation Y
Is there an easy way to run the same test suite (in this case, located in package/*_test.go) in the sub packages?
The best solution I've come up with so far is to add a test package:
package/tests/
Which implements the test suite, and a test in each of the implementations to run the tests, but this has two downsides:
1) The tests in package/tests are not in _test.go files, and end up being part of the actual library, documented by godoc, etc.
2) The tests in package/tests are run by a custom test runner, which has to basically duplicate all the functionality of go test to scan for go tests and run them.
Seems like a pretty tacky solution.
Is there a better way of doing this?
I don't really dislike the idea of using a separate testing library. If you have an interface and generic tests for that interface, other people who implement the interface might like to use those tests as well.
You could create a package "package/test" that contains a Tester type and a test function:
// Tester collects the functions needed for each implementation to test it.
// Leave a field nil if it does not apply.
type Tester struct {
    New  func() package.Interface
    Done func(package.Interface)
    // whatever else you need
}

func TestInterface(t *testing.T, tester Tester)
Notice that the signature of TestInterface does not match what go test expects. Now, for each package package/impl/x you add one file generic_test.go:
package x

import (
    "testing"

    "package/test"
)

// run the generic tests on this particular implementation
func TestInterface(t *testing.T) {
    test.TestInterface(t, test.Tester{New: New})
}
Where New() is the constructor function of your implementation. The advantages of this scheme are that:
Your tests are reusable for whoever implements your interface, even from other packages
It is immediately obvious that you run the generic test suite
The test cases are where the implementation is and not at another, obscure place
The code can be adapted easily if one implementation needs special initialization or similar stuff
It's go test compatible (big plus!)
Of course, in some cases you need a more complicated TestInterface function, but this is the basic idea.
If you share a piece of code for reuse by different packages then yes, it is a library by definition, even when it is used only for testing from *_test.go files. It's no different from importing "testing" or "fmt" in a _test.go file. And having the API documented by godoc is a plus, not a minus, IMHO.
Maybe something is getting mixed up here a bit: if package a defines only an interface, then there is no code to test, as interfaces in Go are implementation-free. So I assume the methods of your interface in package a have constraints. E.g. in
type Walker interface {
    Walk(step int)
    Tired() bool
}
your contract assumes that Tired returns true if more than 500 steps have been Walk'ed (and false otherwise), and your test code checks these dependencies (or assumptions, contracts, invariants, whatever you name them).
If this is the case, I would provide (in package a) an exported function:
func TestWalkerContract(w Walker) error {
    w.Walk(100)
    if w.Tired() {
        return errors.New("Tired after 100 steps")
    }
    w.Walk(450)
    if !w.Tired() {
        return errors.New("Not tired after 100+450 steps")
    }
    return nil
}
This documents the contract properly and can be used by packages b and c, whose types implement Walker, to test their implementations in b_test.go and c_test.go. IMHO it is perfectly okay that functions like TestWalkerContract are displayed by godoc.
P.S. More common than Walk and Tired might be an error state which is kept and reported until cleared/reset.

Is it possible to create data driven tests with MSpec?

With MSpec is it possible to create data driven tests?
For example, NUnit has the TestCase attribute that allows for multiple data driven cases.
[TestFixture]
public class ExampleOfTestCases
{
    [TestCase(1, 2, 3)]
    [TestCase(3, 3, 6)]
    [TestCase(2, 2, 4)]
    public void when_adding_two_numbers(int number1, int number2, int expected)
    {
        Assert.That(number1 + number2, Is.EqualTo(expected));
    }
}
That's not possible. I would advise against driving MSpec with data; use NUnit or MbUnit if you need row tests or combinatorial tests (and MSpec when you describe behavior).
Follow-up: Aeden, TestCases/RowTests are not possible with MSpec and likely never will be. Please use NUnit for such cases, as it is the best tool for that job. MSpec excels when you want to specify system behavior (When an order is submitted => should notify the fulfilment service). For TestCases with MSpec you would need to create a context for every combination of inputs, which might lead to class explosion.
MSpec is also great when you want a sane test structure that is easy to learn. Instead of starting with a blank sheet of paper (think NUnit's [Test] methods), MSpec gives you a template (Establish, Because, It) that you can build your specifications around. Contrast this with the example you give, where Arrange, Act and Assert are combined into one line of code.
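For illustration, here is a minimal sketch of that Establish/Because/It structure applied to the addition example above (the class and member names are purely illustrative, not from the original question). Note that each combination of inputs would need its own context class like this one, which is the class explosion mentioned earlier:
using Machine.Specifications;

[Subject("Adding two numbers")]
public class When_adding_one_and_two
{
    static int result;

    Because of = () => result = 1 + 2;

    It should_return_three = () => result.ShouldEqual(3);
}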

How to order methods of execution using Visual Studio to do integration testing?

I have two questions regarding doing integration testing using VS 2010.
First, I really need to find a way to execute these test methods in the order I want. Note: I know that in unit testing, methods should run standalone from anything else, but these are integration tests, and I do depend on the order in which the methods run.
On the same note, is there a way to keep a variable's value across the tests? For example, the following code currently fails:
[TestClass]
public class UnitTest1
{
    int i = 0;

    [TestMethod]
    public void TestMethod1()
    {
        i = 5;
    }

    [TestMethod]
    public void TestMethod2()
    {
        Assert.AreEqual(5, i);
    }
}
So is there a way to do any of these?
To execute tests in a specific order, I followed these steps:
In a test project with test1, test2 and test3:
1. Right-click the project and choose 'Add' -> 'New Test...'
2. Select 'Ordered Test'
3. Double-click the file that appears, "OrderedTest1.orderedtest"
4. Build the project if it was not built previously
5. From the list of available tests, select the tests you want and order them
From that point on, a new test appears in the test list editor.
It is an extra test that runs the enclosed tests in the correct order, but if you carelessly run all the tests in the project, the tests included in the ordered list will be executed twice, so you need to manage test lists or test categories somehow to avoid that.
I tried disabling the individual tests, but that also disables the ordered test; I don't know a better way to handle this.
It is best practice to use functions to set up the tests and clean up after them, using the attributes [TestInitialize] and [TestCleanup], or [ClassInitialize] and [ClassCleanup].
http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting(v=VS.100).aspx
The following code is an example of something similar to what you want:
[TestClass]
public class UnitTest1
{
    int i = 0;

    [TestInitialize]
    public void Setup()
    {
        i = 5;
    }

    [TestMethod]
    public void TestMethod1()
    {
        Assert.AreEqual(5, i);
    }
}
The Setup function will be called before each test is executed.
If you need to pass a value from one test to another, you might want to consider using a static variable, which is not recommended because of the non-deterministic order of execution.
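If you do go that route despite the caveat, a minimal sketch would look like this (names are illustrative, and the two methods are assumed to be sequenced by an ordered test as described above):
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderedIntegrationTests
{
    // Static, so the value survives the fresh test class instance
    // that MSTest creates for each test method.
    static int sharedValue;

    [TestMethod]
    public void Step1_StoresValue()
    {
        sharedValue = 5;
    }

    [TestMethod]
    public void Step2_ReadsValue()
    {
        Assert.AreEqual(5, sharedValue);
    }
}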
Usually there is a way to avoid needing a specific order by using the setup/cleanup technique, though admittedly that might not hold for very complex integration tests.
If there is no way to avoid ordering them, you can consider merging them into a single test, again breaking the best practice of having only one assert per test; but if they are that dependent on one another, it might even be better this way, since one test failing can compromise the results of the others.
EDIT:
Maybe using ordered tests answers question 1, and using static variables answers question 2:
http://msdn.microsoft.com/en-us/library/ms182631.aspx

How do I skip specific tests in xUnit based on current platform

I have an assembly that I've built on Windows, and I want to run its xUnit tests on Mono in Linux.
However, I have found that while 400 of these tests can run (in order), certain tests either hang the xUnit runner or bring it down entirely.
I don't care if certain tests are not able to run on Linux; some of them deal with the DTC and some unmanaged gumph that we don't need to support there.
What I do want, however, is to apply an ignore to those tests, and have the fact that a test was ignored flagged properly in the build output.
The question boils down to a number of possible solutions:
How do I run specific tests in xUnit via the console runner? (I haven't found documentation to this end; maybe I'm just not looking hard enough.)
Is it possible to go the other way and say "Here is an assembly, but please ignore these specific tests"?
Having an attribute on those tests has been suggested as a better way, to formally document that these tests are platform-specific - is this possible?
If I could avoid modifying the original code too much, that would be grand, as the code isn't really mine to change, and applying lots of cross-platform hacks probably won't go down too well.
xUnit v2.0 is now available, and skippable tests are supported by it directly. Use:
[Fact (Skip = "specific reason")]
I would avoid externalising the skipping of tests (i.e. via a config/command file) if possible. That somewhat goes against making the tests easy to run and trustworthy. Making the tests ignored in code is the safest approach when other people start to get involved.
I can see a number of options; here are two that involve modifying the existing code.
Option 1 - most intrusive: compile-time platform detection
In the VS solution, define another configuration that defines a conditional compilation symbol MONOWIN (just so that it's explicitly a flag that says the code is compiled on Windows for use on Mono).
Then define an attribute that will make the test ignored when compiled for Mono:
public class IgnoreOnMonoFactAttribute : FactAttribute {
#if MONOWIN
    public IgnoreOnMonoFactAttribute() {
        Skip = "Ignored on Mono";
    }
#endif
}
It's actually hard to find any advantages to this method, as it involves messing with the original solution and adds another configuration that needs to be supported.
Option 2 - somewhat intrusive: runtime platform detection
Here is a similar solution to option 1, except that no separate configuration is required:
public class IgnoreOnMonoFactAttribute : FactAttribute {
    public IgnoreOnMonoFactAttribute() {
        if (IsRunningOnMono()) {
            Skip = "Ignored on Mono";
        }
    }

    /// <summary>
    /// Determine if runtime is Mono.
    /// Taken from http://stackoverflow.com/questions/721161
    /// </summary>
    /// <returns>True if being executed in Mono, false otherwise.</returns>
    public static bool IsRunningOnMono() {
        return Type.GetType("Mono.Runtime") != null;
    }
}
Note 1
The xUnit runner will run a method twice if it is marked with both [Fact] and [IgnoreOnMonoFact]. (CodeRush doesn't do that; in this case I assume xUnit is correct.) This means that any test methods must have [Fact] replaced with [IgnoreOnMonoFact].
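As a usage sketch (the test class and method names here are illustrative):
public class DtcSpecificTests {
    // [Fact] is replaced by the custom attribute; do not combine the two,
    // or the runner will discover the method twice.
    [IgnoreOnMonoFact]
    public void Enlists_in_distributed_transaction() {
        // Windows-only assertions would go here.
    }
}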
Note 2
The CodeRush test runner still ran the [IgnoreOnMonoFact] test, but it did ignore the [Fact(Skip="reason")] test. I assume that is because CodeRush reflects over xUnit rather than actually running it with the aid of the xUnit libraries. This works fine with the xUnit runner.
There is a new option now.
Add the NuGet package SkippableFact, which allows you to use [SkippableFact] instead of [Fact], and you can use Skip.<xyz> within a test to dynamically skip it at runtime.
Example:
[SkippableFact]
public void SomeTestForWindowsOnly()
{
    Skip.IfNot(Environment.IsWindows);
    // Test Windows only functionality.
}
[Fact(Skip="reason")]
works but I prefer to use traits
[Fact, Trait("type", "unit")]
public void MyUnitTest()
{
    // given
    // when
    // then
}

[Fact, Trait("type", "http")]
public void MyHttpIntegrationTest()
{
    // given
    // when do things over HTTP
    // then
}
usage
dotnet test --filter type=unit
This protects our builds from accidentally running integration tests that devs forgot to skip (e.g. with [Fact(Skip="Integration")]); however, it does require unit tests to "opt in" to CI by adding the correct trait, which admittedly isn't great.
Dominik's solution worked for me with this code:
[SkippableFact]
public void Get_WhenCall_ReturnsString()
{
// Arrange
Skip.IfNot(RuntimeInformation.IsOSPlatform(OSPlatform.Windows));
// Act
// Assert
}
To add to the previous answers regarding SkippableFact: note that each test class is still constructed - the constructor is run.
If you have time-consuming code in a base class constructor, an alternative is to gather environment-specific test cases in suitable files and run the environment check in the constructor:
if (!SupportsTemporalQueries())
    throw new SkipException("This test class only runs in environments that support temporal queries");
This can speed up the test run considerably. In our system we extend either a "generic" base test class (which runs in all environments) or an environment-specific base test class. I find this easier to maintain than filtering in pipelines or other solutions.
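A minimal sketch of that pattern, with hypothetical names for the base class and the environment probe (the check runs before the expensive setup, so the setup is skipped entirely in environments that do not qualify):
using System;
using Xunit;

// Hypothetical environment-specific base class; names are illustrative.
public abstract class TemporalQueryTestBase
{
    protected TemporalQueryTestBase()
    {
        if (!SupportsTemporalQueries())
            throw new SkipException("This test class only runs in environments that support temporal queries");

        // time-consuming shared setup would go here
    }

    static bool SupportsTemporalQueries()
    {
        // hypothetical probe of the target environment
        return Environment.GetEnvironmentVariable("SUPPORTS_TEMPORAL_QUERIES") == "1";
    }
}

public class TemporalQueryTests : TemporalQueryTestBase
{
    [SkippableFact]
    public void AsOfQuery_ReturnsHistoricalRow()
    {
        // environment-specific assertions here
    }
}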
This is now solved in 1.8 - you can filter on Traits. See this issue log.
Update: Traits work with the console runner but not MSBuild, I've added a feature request for this support.

How good idea is it to use code contracts in Visual Studio 2010 Professional (ie. no static checking) for class libraries?

I create class libraries, some of which are used by others around the world, and now that I'm starting to use Visual Studio 2010 I'm wondering how good an idea it is to switch to using code contracts instead of regular old-style if-statements.
ie. instead of this:
if (fileName == null)
throw new ArgumentNullException("fileName");
use this:
Contract.Requires(fileName != null);
The reason I'm asking is that I know the static checker is not available to me, so I'm a bit nervous about assumptions I make that the checker cannot verify. This might lead to the class library not compiling cleanly for someone who downloads it and has the static checker. This, coupled with the fact that I cannot even reproduce the problem, would make it tiresome to fix, and I would gather it doesn't speak well of the quality of my class library if it seemingly doesn't even build out of the box.
So I have a few questions:
Is the static checker on by default if you have access to it? Or is there a setting I need to switch on in the class library (which I won't, since I don't have the static checker)?
Are my fears unwarranted? Is the above scenario a real problem?
Any advice would be welcome.
Edit: Let me clarify what I mean.
Let's say I have the following method in a class:
public void LogToFile(string fileName, string message)
{
    Contract.Requires(fileName != null);
    // log to the file here
}
and then I have this code:
public void Log(string message)
{
    var targetProvider = IoC.Resolve<IFileLogTargetProvider>();
    var fileName = targetProvider.GetTargetFileName();
    LogToFile(fileName, message);
}
Now, here, IoC kicks in and resolves some "random" class that provides me with a file name. Let's say that for this library there is no possible way I can get back a class that gives me a null file name; however, due to the nature of the IoC call, the static analysis is unable to verify this, and thus might assume that a possible value could be null.
Hence, the static analysis might conclude that there is a risk of the LogToFile method being called with a null argument, and thus fail the build.
I understand that I can add assumptions to the code, telling the checker to take it as given that the fileName I get back from that method will never be null, but if I don't have the static analyzer (VS2010 Professional), the above code would compile fine for me, and I might thus leave a sleeping bug for someone with Ultimate to find. In other words, there would be no compile-time warning that there might be a problem here, so I might release the library as-is.
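For reference, such an assumption would look roughly like this inside Log (a sketch, not code from the original question):
var fileName = targetProvider.GetTargetFileName();
// Tell the static checker to treat the value as non-null from here on,
// because the resolved provider is known by convention never to return null.
Contract.Assume(fileName != null);
LogToFile(fileName, message);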
So is this a real scenario and problem?
When both your LogToFile and Log methods are part of your library, it is possible that your Log method will be flagged once you turn on the static checker. This will of course also happen when you supply code to others who compile it with the static checker. However, as far as I know, your clients' static checker will not validate the internals of the assembly you ship; it will statically check their own code against the public API of your assembly. So as long as you just ship the DLL, you'd be okay.
Of course there is a chance of shipping a library that has a very annoying API for users who actually have the static checker enabled, so I think it is advisable to only ship your library with the contract definitions if you have tested the usability of the API both with and without the static checker.
Please be warned about changing existing if (cond) throw ex checks to Contract.Requires(cond) calls for public API members that you have already shipped in a previous release. Note that a plain Requires call throws a different exception on failure (a contract-specific exception) than what you'd normally throw (an ArgumentException). In that situation, use the generic Contract.Requires<TException> overload. This way your API's exception behaviour stays unchanged.
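A sketch of what that looks like (assuming runtime contract checking is enabled via the binary rewriter, so the precondition is actually enforced):
using System;
using System.Diagnostics.Contracts;

public class Logger
{
    public void LogToFile(string fileName, string message)
    {
        // The generic overload throws the named exception type on failure,
        // so callers still see an ArgumentNullException as before.
        Contract.Requires<ArgumentNullException>(fileName != null, "fileName");
        // log to the file here
    }
}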
First, the static checker is really (as I understand it) only available in the Ultimate/Academic editions - so unless everyone in your organization uses one of those, they may not be warned when they are potentially violating an invariant.
Second, while the static analysis is impressive, it cannot always find all paths that may lead to a violation of an invariant. However, the good news is that the Requires contract is retained at runtime - it is injected in an IL-rewriting step - so the check exists at both compile time and runtime. In this way it is equivalent (but superior) to a regular if() check.
You can read more about the runtime rewriting that code contract compilation performs here, you can also read the detailed manual here.
EDIT: Based on what I can glean from the manual, I suspect the situation you describe is indeed possible. However, I thought these would be warnings rather than compilation errors - and you can suppress them using System.Diagnostics.CodeAnalysis.SuppressMessage(). Consumers of your code who have the static verifier can also mark specific cases to be ignored - but that could certainly be inconvenient if there are a lot of them. I will try to find some time later today to put together a definitive test of your scenario (I don't have access to the static verifier at the moment).
There's an excellent blog here that is almost exclusively dedicated to code contracts which (if you haven't yet seen) may have some content that interests you.
No; the static analyzer will never prevent compilation from succeeding (unless it crashes!).
The static analyzer will warn you about unproven pre-/post-conditions, but doesn't stop compilation.
