A colleague on my team says that some methods should have both preconditions and postconditions. But the point is about code coverage: those condition checks were never executed (never tested) until an invalid implementation was written purely for use in a unit test. Let's take the example below.
public interface ICalculator
{
    int Calculate(int x, int y);
}

public int GetSummary(int x, int y)
{
    // preconditions
    var result = calculator.Calculate(x, y);

    // postconditions
    if (result < 0)
    {
        throw new Exception("...");
    }
    return result;
}
We see two options:
1. Remove the test implementations and the postconditions.
2. Keep both the test implementations and the postconditions.
Can you give some advice please?
Keep pre- and post-conditions.
You'll need at least four tests here: combinations of (pre, post) x (pass, fail). Your failing post-condition test will pass if the expected exception is thrown.
This is easy to do in JUnit with its @Test(expected = Exception.class) annotation.
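For instance, a minimal JUnit 4 sketch might look like this (the Calculator interface, getSummary helper and test names below are illustrative Java stand-ins for the C# code above, not the original types):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GetSummaryTest {

    interface Calculator {
        int calculate(int x, int y);
    }

    // simplified Java analogue of GetSummary, including its post-condition
    static int getSummary(Calculator calculator, int x, int y) {
        int result = calculator.calculate(x, y);
        if (result < 0) {
            throw new IllegalStateException("post-condition violated: result < 0");
        }
        return result;
    }

    @Test
    public void postConditionHoldsForValidImplementation() {
        assertEquals(5, getSummary((x, y) -> x + y, 2, 3));
    }

    @Test(expected = IllegalStateException.class)
    public void postConditionFailsForInvalidImplementation() {
        getSummary((x, y) -> -1, 2, 3); // stub deliberately violates the post-condition
    }
}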
Be careful with colleagues that make blanket statements like "X must always be true." Dogma in all its forms should be avoided. Understand the reasons for doing things and do them when they make sense.
These conditions should be viewed from a design perspective: they ensure the calculator is working correctly and returns results within the expected range. You should also have a look at the MS Code Contracts project and its documentation.
What is the cleaner way of extracting predicates which will have multiple uses: methods or class fields?
The two examples:

1. Class field:

void someMethod() {
    IntStream.range(1, 100)
             .filter(isOverFifty)
             .forEach(System.out::println);
}

private IntPredicate isOverFifty = number -> number > 50;

2. Method:

void someMethod() {
    IntStream.range(1, 100)
             .filter(isOverFifty())
             .forEach(System.out::println);
}

private IntPredicate isOverFifty() {
    return number -> number > 50;
}
For me, the field way looks a little bit nicer, but is this the right way? I have my doubts.
Generally you cache things that are expensive to create and these stateless lambdas are not. A stateless lambda will have a single instance created for the entire pipeline (under the current implementation). The first invocation is the most expensive one - the underlying Predicate implementation class will be created and linked; but this happens only once for both stateless and stateful lambdas.
A stateful lambda will use a different instance for each element and it might make sense to cache those, but your example is stateless, so I would not.
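To illustrate the difference, a rough sketch (the class and method names here are made up):

import java.util.function.IntPredicate;

class LambdaInstances {

    // Non-capturing ("stateless") lambda: under the current implementation a single
    // instance is created lazily and reused for every evaluation of this expression.
    IntPredicate stateless() {
        return n -> n > 50;
    }

    // Capturing ("stateful") lambda: every evaluation produces a fresh instance,
    // because the lambda closes over the threshold parameter.
    IntPredicate stateful(int threshold) {
        return n -> n > threshold;
    }
}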
If you still want to extract it (for readability, I assume), I would put it in a utility class, say Predicates, so it can be reused across different classes as well. Something like this:
public final class Predicates {
private Predicates(){
}
public static IntPredicate isOverFifty() {
return number -> number > 50;
}
}
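Usage from any client class would then look something like this (a small sketch; the surrounding class is made up):

import java.util.stream.IntStream;

class FiftyFilterClient {

    void someMethod() {
        IntStream.range(1, 100)
                 .filter(Predicates.isOverFifty()) // reusable predicate from the utility class
                 .forEach(System.out::println);
    }
}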
You should also notice that Predicates.isOverFifty used inside a Stream and x -> x > 50, while semantically the same, will have different memory usage.
In the first case only a single instance (and class) will be created and served to all clients, while the second (x -> x > 50) will create not only a different instance but also a different class for each of its clients (think of the same expression used in different places inside your application). This happens because the linkage happens per CallSite, and in the second case the CallSite is always different.
But that is something you should not rely on (or probably even consider): these objects and classes are fast to build and quickly reclaimed by the GC. Use whatever fits your needs.
To answer the question, it helps to expand those lambda expressions into old-fashioned Java. You can then see that these are just two ways of writing the same thing, so it all depends on how you want to write a particular code segment.
private IntPredicate isOverFifty = new IntPredicate() {
    @Override
    public boolean test(int number) {
        return number > 50;
    }
};

private IntPredicate isOverFifty() {
    return new IntPredicate() {
        @Override
        public boolean test(int number) {
            return number > 50;
        }
    };
}
1) In the field case, a predicate instance is allocated for every object you create. That is not a big deal if there are only a few instances (e.g. services), but if this is a value object of which there can be N instances, it is not a good solution. Also keep in mind that someMethod() may never be called at all. One possible solution is to make the predicate a static field, as sketched below.
2) In the method case, the predicate is created on each someMethod() call and discarded by the GC afterwards.
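A sketch of the static-field variant mentioned in 1) (the surrounding class is made up):

import java.util.function.IntPredicate;
import java.util.stream.IntStream;

class FilterExample {

    // one shared instance for the whole class instead of one per object
    private static final IntPredicate IS_OVER_FIFTY = number -> number > 50;

    void someMethod() {
        IntStream.range(1, 100)
                 .filter(IS_OVER_FIFTY)
                 .forEach(System.out::println);
    }
}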
I have the following extension method, which is just a negation of Linq.Any():
These two unit tests test it completely:
[TestMethod]
public void EnumerableExtensions_None_WithMatch()
{
    Assert.IsTrue(_animals.None(t => t.Name == "Pony"));
}

[TestMethod]
public void EnumerableExtensions_None()
{
    var emptyList = new List<Animal>();
    Assert.IsTrue(emptyList.None());
}
As you can see in the picture, when I run a Code Coverage Analysis, the delegate body is not covered (white selection), because of the deferred execution.
This question comes close to the problem:
Code Coverage on Lambda Expressions
But does not quite solve it: Since the List must stay empty, it's impossible to actually step into that piece of code.
I am tempted to mark the segment with [ExcludeFromCodeCoverage] ...
How would you write the UnitTest?
You need to test that None() returns false when given a non-empty list of Animal. As it is, you never execute your default lambda expression.
You might even find a bug...
This is the correct way to write the test. I even found a bug!
[TestMethod]
public void EnumerableExtensions_None()
{
    // _animals HAS entries
    Assert.IsFalse(_animals.None());
}
Code Coverage 100%!
Uncle Bob's three rules of TDD:
1. You are not allowed to write any production code unless it is to make a failing unit test pass.
2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
Can someone tell me the difference between 1 and 3? It's not very clear to me.
To me, 1 and 3 could be combined; or are these rules suggesting an order as well?
First of all: I'd take these rules with a grain of salt.
That said, rule one and rule three have a slightly different notion:
Rule 1: You should not write any code without having a failing test.
Rule 3: You should not implement a complete algorithm (even though it would make the test pass) but only the simplest possible (some might say naive) solution to make the test pass.
An example:
Say you want a method that takes a number and returns that very same number, and you have the following test:
public void entering1Returns1() {
    assertTrue(calculate(1) == 1);
}
This implementation would comply with both rules:
public int calculate(int input) {
    return 1;
}
This one would violate rule 3 (strictly speaking) because it does more than needed:
public int calculate(int input) {
    return input;
}
How would one write a test for the following function?
bool IsAnInteger(int ignore)
{
    return true;
}
I don't have enough time to iterate over every integer (for the actual code the parameter isn't even an integer).
This is used as part of the Specification Pattern, so that I can implement a Null Object.
... testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.
-- Edsger W. Dijkstra
I'd say that it's pointless to try to exhaustively black box test this function. It is better to test it in a context similar to where it will be used.
In TDD, you write the test first and that test should specify a specific behavior. So the question should always be: What do I expect to happen? - and then write the test to verify that behavior - finally write the solution to make the test pass.
edit: Understanding the question
Do you mean that this function is the behavior for a non-existent specification, e.g. a Null specification? You can of course test this null specification that it behaves in a certain way. At a guess though, this will pretty much be hard-coded one-line return values (if anything). The tests for this null object would then basically only document what the null specification should do. It won't add any other extra business value to the system.
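As a sketch of what such a mostly-documentary test could look like (using a hypothetical Specification interface and Null Object, not the asker's actual types):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class NullSpecificationTest {

    // hypothetical minimal specification interface
    interface Specification<T> {
        boolean isSatisfiedBy(T candidate);
    }

    // Null Object: a specification that accepts everything
    static final class NullSpecification<T> implements Specification<T> {
        @Override
        public boolean isSatisfiedBy(T candidate) {
            return true;
        }
    }

    @Test
    public void nullSpecificationAcceptsAnyCandidate() {
        Specification<String> spec = new NullSpecification<>();
        assertTrue(spec.isSatisfiedBy("anything"));
        assertTrue(spec.isSatisfiedBy(null));
    }
}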
Since you cannot reasonably test every case, you can use Mathematical Induction as a basis for your tests. You can't do this algebraically, but you can pick an arbitrary value for n.
#include <limits>
#include <cassert>

bool IsAnInteger(int)
{
    return true;
}

int main()
{
    assert(IsAnInteger(std::numeric_limits<int>::min())); // First
    assert(IsAnInteger(0));                               // n
    assert(IsAnInteger(1));                               // n+1
}
Edit
Hold on!
for the actual code the parameter isn't even an integer
What is it then?
#include <cassert>

template <class T>
bool IsAnInteger(const T&)
{
    return true;
}

int main()
{
    assert(IsAnInteger(0));                    // First
    assert(IsAnInteger("I am not a number!")); // n
    assert(IsAnInteger(42.0f));                // n+x
}
Your function has 100% test coverage and your unit tests accurately document how it behaves. In TDD you only need to write just enough code so that your unit tests pass. You're done with this and can move on.
We have people who run code for simulations, testing, etc. on some supercomputers that we have. What would be nice is if, as part of the build process, we could check not only that the code compiles but also that the output matches some pattern indicating we are getting meaningful results.
i.e. the researcher may know that the value of x must be within some bounds. If not, then a logical error has been made in the code (assuming it compiles and there is no compile-time error).
Are there any pre-written packages for this kind of thing? The code is written in FORTRAN, C, C++, etc.
Any specific or general advice would be appreciated.
I expect most unit testing frameworks could do this; supply a toy test data set and see that the answer is sane in various different ways.
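For example, a rough sketch of such a sanity check (Java/JUnit used purely for illustration; the toy-data wrapper and the bounds are placeholders, and in practice this would drive the actual FORTRAN/C/C++ program):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class SimulationSanityTest {

    // placeholder: in practice this would run the real simulation binary
    // on a small, known input file and parse the resulting value of x
    private double runSimulationOnToyData() {
        return 0.42;
    }

    @Test
    public void resultStaysWithinExpectedBounds() {
        double x = runSimulationOnToyData();
        assertTrue("x should lie within the researcher's expected bounds",
                   x >= 0.0 && x <= 1.0);
    }
}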
A good way to ensure that the resulting value of any computation (whether final or intermediate) meets certain constraints is to use an object-oriented programming language like C++ and define data types that internally enforce the conditions you are checking for. You can then use those data types as the return value of any computation to ensure that said conditions are met for the value returned.
Let's look at a simple example. Assume that you have a member function inside an Airplane class, as part of a flight control system, that estimates the mass of the airplane instance as a function of the number of passengers and the amount of fuel the plane has at that moment. One way to declare the Airplane class and an airplaneMass() member function is the following:
class Airplane {
public:
    ...
    int airplaneMass() const; // note the plain int return type
    ...
private:
    ...
};
However, a better way to implement the above would be to define a type AirplaneMass that can be used as the function's return type instead of int. AirplaneMass can internally ensure (in its constructor and any overloaded operators) that the value it encapsulates meets certain constraints. An example implementation of the AirplaneMass data type could be the following:
class AirplaneMass {
public:
    // AirplaneMass constructor
    AirplaneMass(int m) {
        if (m < MIN || m > MAX) {
            // throw exception or log constraint violation
        }

        // if the value of m meets the constraints,
        // assign it to the internal value.
        mass_ = m;
    }

    ...

    /* Range checking should also be done in the implementation
       of overloaded operators. For instance, you may want to
       make sure that the resultant of the ++ operation for
       any instance of AirplaneMass also lies within the
       specified constraints. */

private:
    int mass_;
};
Thereafter, you can redeclare class Airplane and its airplaneMass() member function as follows:
class Airplane {
public:
    ...
    AirplaneMass airplaneMass() const; // note the more specific AirplaneMass return type
    ...
private:
    ...
};
The above will ensure that the value returned by airplaneMass() is between MIN and MAX. Otherwise, an exception will be thrown, or the error condition will be logged.
I had to do that for conversions this month. I don't know whether it will help you, but it seemed like quite a simple solution to me.
First, I defined a tolerance level. (Java-ish example code...)
private static final double TOLERANCE = 0.000000000001D;
Then I defined an "areEqual" method which checks whether the difference between the two values is below the tolerance level.
private static boolean areEqual(double a, double b) {
    return Math.abs(a - b) < TOLERANCE;
}
If I get a false somewhere, it means the check has probably failed. I can adjust the tolerance to see if it's just a precision problem or really a bad result. Works quite well in my situation.
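Put together in a test, it might be used like this (the conversion under test is a made-up example):

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class ConversionTest {

    private static final double TOLERANCE = 0.000000000001D;

    private static boolean areEqual(double a, double b) {
        return Math.abs(a - b) < TOLERANCE;
    }

    // hypothetical conversion under test
    private static double milesToKilometres(double miles) {
        return miles * 1.609344;
    }

    @Test
    public void conversionIsAccurateWithinTolerance() {
        assertTrue(areEqual(1.609344, milesToKilometres(1.0)));
    }
}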