How to use pict to generate test cases - ruby

I'm using PICT (the Pairwise Independent Combinatorial Testing tool). I'm trying to generate test cases from these parameters and this constraint:
video_resolution: 352x240,352x288,640x480,704x480,704x576,720x240,720x480,720x576
video_rotate: 0,90,180,270
IF [video_resolution] IN { "640x480"} THEN [video_rotate]="90" OR "180";
but I'm having trouble doing so.
One more thing: what is the <> sign used for? I mean the <> operator.

Amit,
A couple of comments. The first is a solution; the second and third concern where the benefits of the kind of test design approach you're asking about tend to be largest.
1) Here is a very short video showing how your problem could be solved using Hexawise, a test case generator similar to PICT. To mark the invalid pairs, simply click on the symbols to the right of the relevant parameter values.
http://www.screencast.com/users/Hexawise/folders/Camtasia/media/5c6aae22-ec78-4cae-9471-16d5c96cf175
2) Your question involves 8 screen resolutions and 4 video rotations. Pairwise coverage (AKA 2-way coverage) will require 32 test cases - or 30 test cases once you eliminate the 2 invalid combinations. This is an OK use of PICT or Hexawise (they'll make sure you don't forget any valid combination), but where you will really see dramatic benefits is when you have a lot of parameters. In such cases, you'll be able to identify a small subset of test condition combinations that is surprisingly effective at triggering defects while covering only a tiny portion of the total possible test cases.
3) If you had 20 parameters with 4 values each, for example, you would have more than 1 trillion possible tests (4^20 ≈ 1.1 x 10^12). If you set your coverage strength to pairwise (i.e., 2-way coverage), you would be able to cover every pair of values in at least one test with only 37 tests.
An example demonstrating this is shown here: http://www.screencast.com/t/YmYzOTZhNTU
Coverage is adjustable as well. You can use this to alter your coverage strength based on the time available for testing and/or risk-based testing considerations. If you wanted to achieve 100% coverage of all possible combinations of 3 parameter values in at least one test, you would need 213 tests. Furthermore, if you were relatively more concerned about the potential interactions among 3 particular parameters (think, e.g., "Income", "Credit Rating", and "Price of House" in a mortgage application example vs. other, less important test inputs), you could create 80 tests to match that objective. The flexibility of this test design approach (available in both PICT and Hexawise) is a powerful reason to use these kinds of test design tools.
Hope these tips help.
Full disclosure: I'm the founder of Hexawise.

Late answer, but just for others experiencing similar problems: your constraint must be:
video_resolution: 352x240,352x288,640x480,704x480,704x576,720x240,720x480,720x576
video_rotate: 0,90,180,270
IF [video_resolution] = "640x480" THEN [video_rotate] in {"90", "180"};
<> means "not equal to". In your case you could do:
IF [video_resolution] <> "720x576" THEN [video_rotate] >= 180;
This means: "If video_resolution is not 720x576, then video_rotate must be
equal or larger than 180"

Related

GoogleTest: the length of filter expression is too long

I have a big set of tests. I want to run a subset of these tests. But this subset includes many tests. So, I use a negative pattern.
For example:
--gtest_filter=TestSet.*-TestSet.Case1:TestSet.Case2:TestSet.Case3:....:TestSet.CaseN
The result is that the filter expression is too long.
Is there anything I can do to solve this problem?
The filter length limit is probably not imposed by GoogleTest itself but by the operating system's limit on command-line length in the shell. See this SO post on the subject.
As a workaround (with the positive side effect of improving the structure of your unit tests), you can rename your tests so that the subset can be selected with a simpler filter pattern, e.g. --gtest_filter="TestSet.Subset*"

What is "gate count" in synthesis result and how to calculate

I'm synthesizing my design with Design Compiler and comparing it with another design (as an evaluation in my report). The Synopsys tool can easily report the area with a command, but all the papers I've read care about gate count.
My question is: what is gate count, and how is it calculated?
I googled and read that gate count is calculated as total_area/NAND2_area. Is that true?
Thanks for reading, and please don't blame me for the stupid question :(.
Synthesised area is often quoted as Gate count in NAND2 equivalents. You are correct with:
(total area)/(NAND2 area).
Older tools and libraries used to report this number; a few years ago I noticed a shift toward tools just providing areas in square microns. However, the gate count is a nicer number to get your head around, and it is portable between different geometries.
40K gates for implementation A is obviously smaller than 50K gates for implementation B. It is much harder to compare 100,000 um^2 for implementation A on process X vs 65,000 um^2 for implementation B on process Y.
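As a toy illustration of the formula (the numbers below are made up; take both values from your own library and area reports):

# Hypothetical figures for illustration only.
total_area = 100_000.0   # um^2, e.g. from Design Compiler's report_area
nand2_area = 1.0         # um^2, area of a single NAND2 cell in the target standard-cell library
gate_count = (total_area / nand2_area).round
puts "#{gate_count} NAND2-equivalent gates"   # => 100000, i.e. a "100K gate" design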

Ruby: Using rand() in code but writing tests to verify probabilities

I have some code which delivers things based on weighted random selection. Things with more weight are more likely to be randomly chosen. Now, being a good rubyist, I of course want to cover all this code with tests. And I want to test that things are getting fetched according to the correct probabilities.
So how do I test this? Creating tests for something that should be random makes it very hard to compare actual vs. expected. A few ideas I have, and why they won't work well:
Stub Kernel.rand in my tests to return fixed values. This is cool, but rand() gets called multiple times and I'm not sure I can rig this with enough control to test what I need to.
Fetch a random item a HUGE number of times and compare the actual ratio vs the expected ratio. But unless I can run it an infinite number of times, this will never be perfect and could intermittently fail if I get some bad luck in the RNG.
Use a consistent random seed. This makes the RNG repeatable but it still doesn't give me any verification that item A will happen 80% of the time (for example).
So what kind of approach can I use to write test coverage for random probabilities?
I think you should separate your goals. One is to stub Kernel.rand, as you mention. With RSpec, for example, you can do something like this:
test_values = [1, 2, 3]
Kernel.stub!(:rand).and_return( *test_values )
Note that this stub won't work unless you call rand with Kernel as the receiver. If you just call "rand" then the current "self" will receive the message, and you'll actually get a random number instead of the test_values.
The second goal is to do something like a field test where you actually generate random numbers. You'd then use some kind of tolerance to ensure you get close to the desired percentage. This is never going to be perfect though, and will probably need a human to evaluate the results. But it still is useful to do because you might realize that another random number generator might be better, like reading from /dev/random. Also, it's good to have this kind of test because let's say you decide to migrate to a new kind of platform whose system libraries aren't as good at generating randomness, or there's some bug in a certain version. The test could be a warning sign.
It really depends on your goals. Do you only want to test your weighting algorithm, or also the randomness?
It's best to stub Kernel.rand to return fixed values.
Kernel.rand is not your code. You should assume it works, rather than writing tests that exercise it instead of your code. And using a fixed set of values that you've chosen and explicitly coded in is better than adding a dependency on what rand produces for a specific seed.
If you want to go down the consistent-seed route, look at Kernel#srand:
http://www.ruby-doc.org/core/classes/Kernel.html#M001387
To quote the docs (emphasis added):
Seeds the pseudorandom number generator to the value of number. If number is omitted or zero, seeds the generator using a combination of the time, the process id, and a sequence number. (This is also the behavior if Kernel::rand is called without previously calling srand, but without the sequence.) By setting the seed to a known value, scripts can be made deterministic during testing. The previous seed value is returned. Also see Kernel::rand.
For testing, stub Kernel.rand with the following simple but perfectly reasonable LCPRNG:
@@q = 0
def r
  @@q = 1_103_515_245 * @@q + 12_345 & 0xffff_ffff
  (@@q >> 2) / 0x3fff_ffff.to_f
end
You might want to skip the division and use the integer result directly if your code is compatible, as all bits of the result would then be repeatable instead of just "most of them". This isolates your test from "improvements" to Kernel.rand and should allow you to test your distribution curve.
My suggestion: Combine #2 and #3. Set a random seed, then run your tests a very large number of times.
I do not like #1, because it means your test is super-tightly coupled to your implementation. If you change how you are using the output of rand(), the test will break, even if the result is correct. The point of a unit test is that you can refactor the method and rely on the test to verify that it still works.
Option #3, by itself, has the same problem as #1. If you change how you use rand(), you will get different results.
Option #2 is the only way to have a true black box solution that does not rely on knowing your internals. If you run it a sufficiently high number of times, the chance of random failure is negligible. (You can dig up a stats teacher to help you calculate "sufficiently high," or you can just pick a really big number.)
But if you're hyper-picky and "negligible" isn't good enough, a combination of #2 and #3 will ensure that once the test starts passing, it will keep passing. Even that negligible risk of failure only crops up when you touch the code under test; as long as you leave the code alone, you are guaranteed that the test will always work correctly.
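A minimal sketch of that #2 + #3 combination, assuming a hypothetical weighted_pick method that is supposed to return :a 80% of the time and :b 20% of the time:

srand(12_345)                          # fixed seed so the run is repeatable (#3)
trials = 100_000                       # large sample so the observed ratio converges (#2)
counts = Hash.new(0)
trials.times { counts[weighted_pick] += 1 }
ratio = counts[:a].to_f / trials
raise "expected ~0.80, got #{ratio}" unless (ratio - 0.80).abs < 0.01

With the seed fixed, the counts are identical on every run, so once the assertion passes it keeps passing until the implementation of weighted_pick changes.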
Pretty often, when I need predictable results from something derived from a random number, I want control of the RNG, which means the easiest thing is to make it injectable. Although overriding/stubbing rand can be done, Ruby provides a fine way to pass your code an RNG that is seeded with some value:
def compute_random_based_value(input_value, random: Random.new)
# ....
end
and then inject a Random object I make on the spot in the test, with a known seed:
rng = Random.new(782199) # Scientific dice roll
compute_random_based_value(your_input, random: rng)
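Because a seeded Random produces a deterministic sequence, a test can also simply check that two runs with the same seed agree, which guards against the method quietly falling back to the global Kernel.rand (your_input stands for whatever input your method takes, as above):

compute_random_based_value(your_input, random: Random.new(782199)) ==
  compute_random_based_value(your_input, random: Random.new(782199))   # => true on every run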

What is the highest Cyclomatic Complexity of any function you maintain? And how would you go about refactoring it?

I was doing a little exploring of a legacy system I maintain with NDepend (great tool, check it out) the other day. My findings almost made me spray a mouthful of coffee all over my screen. The top 3 functions in this system, ranked by descending cyclomatic complexity, are:
SomeAspNetGridControl.CreateChildControls (CC of 171!!!)
SomeFormControl.AddForm (CC of 94)
SomeSearchControl.SplitCriteria (CC of 85)
I mean 171, wow!!! Shouldn't it be below 20 or something? So this made me wonder: what is the most complex function you maintain or have refactored? And how would you go about refactoring such a method?
Note: The CC I measured is over the code, not the IL.
This is kid stuff compared to some 1970s-vintage COBOL I worked on some years ago. We used the original McCabe tool to graphically display the CC for some of the code. The printout was pure black because the lines showing the functional paths were so densely packed and spaghetti-like. I don't have a figure, but it had to be way higher than 171.
What to do
Per Code Complete (first edition):
If the score is:
0-5 - the routine is probably fine
6-10 - start to think about ways to simplify the routine
10+ - break part of the routine into a second routine and call it from the first routine
Might be a good idea to write unit tests as you break up the original routine.
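A tiny sketch of the "10+" advice above (hypothetical Ruby; the method names are made up): the branching that decides a shipping rate is pulled out of a long routine into its own small one, so each routine's complexity stays low.

def shipping_cost(order)          # extracted routine: all the rate branching lives here
  return 0  if order.digital?
  return 10 if order.express?
  5
end

def ship_order(order)             # original routine: now mostly straight-line calls
  raise ArgumentError, "invalid order" unless order.valid?
  charge(order, shipping_cost(order))
end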
This is for C/C++ code currently shipping in a product:
the highest CC values that I could reliably identify (i.e., where I don't suspect the tool of erroneously adding complexity for unrelated instances of main(...)):
an image processing function: 184
a database item loader with verification: 159
There is also a test subroutine with CC = 339 but that is not strictly part of the shipping product. Makes me wonder though how one could actually verify the test case(s) implemented in there...
and yes, the function names have been suppressed to protect the guilty :)
How to change it:
There is already an effort in place to remedy this. The problems mostly stem from two root causes:
spaghetti code (no encapsulation, lots of copy-paste)
code provided to the product group by some scientists with no real software construction/engineering/carpentry training.
The main approach is identifying cohesive pieces of the spaghetti (pull on a thread :) ) and breaking the looooong functions up into shorter ones. Often there are mappings or transformations that can be extracted into a function or a helper class/object. Switching to the STL instead of hand-built containers and iterators can cut a lot of code too. Using std::string instead of C strings helps a lot.
I have found another opinion on this in this blog entry, which seems to make good sense and works for me when comparing it against various code bases. I know it's a highly opinionated topic, so YMMV.
1-10 - simple, not much risk
11-20 - complex, low risk
21-50 - too complex, medium risk, attention
More than 50 - too complex, can't test, high risk
These are the six most complex functions in PerfectTIN, which will hopefully go into production in a few weeks:
32 32 testptin.cpp(319): testmatrix
36 39 tincanvas.cpp(90): TinCanvas::tick
53 53 mainwindow.cpp(126): MainWindow::tick
56 60 perfecttin.cpp(185): main
58 58 fileio.cpp(457): readPtin
62 62 pointlist.cpp(61): pointlist::checkTinConsistency
Where the two numbers are different, it's because of switch statements.
testmatrix consists of several order-2 and order-1 for-loops in a row and is not hard to understand. The thing that puzzled me, looking at it years after I wrote it in Bezitopo, is why it mods something by 83.
The two tick methods are run 20 times a second and check several conditions. I've had a bit of trouble with the complexity, but the bugs are nothing worse than menu items being grayed out when they shouldn't, or the TIN display looking wonky.
The TIN is stored as a variant winged-edge structure consisting of points, edges, and triangles all pointing to each other. checkTinConsistency has to be as complex as it is because the structure is complex and there are several ways it could be wrong.
The hardest bugs to find in PerfectTIN have been concurrency bugs, not cyclomatic bugs.
The most complex functions in Bezitopo (I started PerfectTIN by copying code from Bezitopo):
49 49 tin.cpp(537): pointlist::tryStartPoint
50 50 ptin.cpp(237): readPtin
51 51 convertgeoid.cpp(596): main
54 54 pointlist.cpp(117): pointlist::checkTinConsistency
73 80 bezier.cpp(1070): triangle::subdivide
92 92 bezitest.cpp(7963): main
main in bezitest is just a long sequence of if-statements: If I should test triangles, then run testtriangle. If I should test measuring units, then run testmeasure. And so on.
The complexity in subdivide is partly because roundoff errors very rarely produce some wrong-looking conditions that the function has to check for.
What is now tryStartPoint used to be part of maketin (which now has a complexity of only 11) with even more complexity. I broke out the inside of the loop into a separate function because I had to call it from the GUI and update the screen in between.

How to use TDD correctly to implement a numerical method?

I am trying to use Test Driven Development to implement my signal processing library. But I have a little doubt: Assume I am trying to implement a sine method (I'm not):
Write the test (pseudo-code)
assertEqual(0, sine(0))
Write the first implementation
function sine(radians)
return 0
Second test
assertEqual(1, sine(pi/2))
At this point, should I:
implement a smart code that will work for pi and other values, or
implement the dumbest code that will work only for 0 and pi/2?
If you choose the second option, when can I jump to the first option? I will have to do it eventually...
At this point, should I:
implement real code that will work outside the two simple tests?
implement more of the dumbest code that will work only for the two simple tests?
Neither. I'm not sure where you got the "write just one test at a time" approach from, but it sure is a slow way to go.
The point is to write clear tests and use that clear testing to design your program.
So, write enough tests to actually validate a sine function. Two tests are clearly inadequate.
In the case of a continuous function, you have to provide a table of known good values eventually. Why wait?
However, testing continuous functions has some problems. You can't follow a dumb TDD procedure.
You can't test all floating-point values between 0 and 2*pi. You can't test a few random values.
In the case of continuous functions, a "strict, unthinking TDD" doesn't work. The issue here is that you know your sine function implementation will be based on a bunch of symmetries. You have to test based on those symmetry rules you're using. Bugs hide in cracks and corners. Edge cases and corner cases are part of the implementation, and if you unthinkingly follow TDD you won't test them.
However, for continuous functions, you must test the edge and corner cases of the implementation.
This doesn't mean TDD is broken or inadequate. It says that slavish devotion to "test first" can't work without some thinking about what your real goal is.
In the kind of strict baby-step TDD, you might implement the dumb method to get back to green, and then refactor away the duplication inherent in the dumb code (testing for the input value is a kind of duplication between the test and the code) by producing a real algorithm. The hard part about getting a feel for TDD with such an algorithm is that your acceptance tests are really sitting right next to you (the table S. Lott suggests), so you kind of keep an eye on them the whole time. In more typical TDD, the unit is divorced enough from the whole that the acceptance tests can't just be plugged in right there, so you don't start thinking about testing for all scenarios, because all the scenarios are not obvious.
Typically, you might have a real algorithm after one or two cases. The important thing about TDD is that it is driving design, not the algorithm. Once you have enough cases to satisfy the design needs, the value of TDD drops significantly. Then the tests convert more into covering corner cases, to ensure your algorithm is correct in all the aspects you can think of. So, if you are confident in how to build the algorithm, go for it. The kind of baby steps you are talking about is only appropriate when you are uncertain. By taking such baby steps you start to build out the boundaries of what your code has to cover, even though your implementation isn't actually real yet. But as I said, that is more for when you are uncertain about how to build the algorithm.
Write tests that verify Identities.
For the sin(x) example, think about double-angle formula and half-angle formula.
Open a signal-processing textbook. Find the relevant chapters and implement every single one of those theorems/corollaries as test code applicable to your function. For most signal-processing functions there are identities that must hold for the inputs and the outputs. Write tests that verify those identities, regardless of what those inputs might be.
Then think about the inputs.
Divide the implementation process into separate stages. Each stage should have a Goal. The tests for each stage would be to verify that Goal. (Note 1)
The goal of the first stage is to be "roughly correct". For the sin(x) example, this would be like a naive implementation using binary search and some mathematical identities.
The goal of the second stage is to be "accurate enough". You will try different ways of computing the same function and see which one gets better result.
The goal of the third stage is to be "efficient".
(Note 1) Make it work, make it correct, make it fast, make it cheap. - attributed to Alan Kay
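To make the identity idea concrete, here is a minimal sketch in Ruby, with hypothetical my_sin and my_cos functions under test and a hand-rolled assert_near helper:

EPS = 1e-9

def assert_near(expected, actual)
  raise "expected #{expected}, got #{actual}" unless (expected - actual).abs < EPS
end

100.times do
  x = rand * 2 * Math::PI
  assert_near(2 * my_sin(x) * my_cos(x), my_sin(2 * x))    # double-angle formula
  assert_near((1 - my_cos(2 * x)) / 2.0, my_sin(x)**2)     # power-reduction (half-angle) identity
  assert_near(my_sin(x), my_sin(Math::PI - x))             # symmetry about pi/2
end

The identities hold for any input, so random inputs are fine here; a fixed table of known-good values, like the one generated in the bash answer further down, then pins down the absolute values.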
I believe the moment to jump to the first option is when you see there are too many "ifs" in your code "just to pass the tests". That wouldn't be the case yet with just 0 and pi/2.
You'll feel the code beginning to smell and will want to refactor it ASAP. I'm not sure if that's what pure TDD says, but IMHO you do it in the refactor phase (the test-fail, test-pass, refactor cycle). I mean, unless your failing tests ask for a different implementation.
Note that (in NUnit) you can also do
Assert.That(2.1 + 1.2, Is.EqualTo(3.3).Within(0.0005));
when you're dealing with floating-point equality.
One piece of advice I remember reading was to try to refactor out the magic numbers from your implementations.
You should code up all your unit tests in one hit (in my opinion). While the idea of only creating tests that specifically cover what has to be tested is correct, your particular specification calls for a functioning sine() function, not a sine() function that works only for 0 and PI/2.
Find a source you trust enough (a mathematician friend, tables at the back of a math book or another program that already has the sine function implemented).
I opted for bash/bc because I'm too lazy to type it all in by hand :-). If it were a sine() function, I'd just run the following program and paste its output into the test code. I'd also put a copy of the script in there as a comment so I can re-use it if something changes (such as the desired resolution, 20 degrees in this case, or the value of PI you want to use).
#!/bin/bash
d=0
while [[ ${d} -le 400 ]] ; do
    r=$(echo "3.141592653589 * ${d} / 180" | bc -l)
    s=$(echo "s(${r})" | bc -l)
    echo "assertNear(${s},sine(${r})); // ${d} deg."
    d=$(expr ${d} + 20)
done
This outputs:
assertNear(0,sine(0)); // 0 deg.
assertNear(.34202014332558591077,sine(.34906585039877777777)); // 20 deg.
assertNear(.64278760968640429167,sine(.69813170079755555555)); // 40 deg.
assertNear(.86602540378430644035,sine(1.04719755119633333333)); // 60 deg.
assertNear(.98480775301214683962,sine(1.39626340159511111111)); // 80 deg.
assertNear(.98480775301228458404,sine(1.74532925199388888888)); // 100 deg.
assertNear(.86602540378470305958,sine(2.09439510239266666666)); // 120 deg.
assertNear(.64278760968701194759,sine(2.44346095279144444444)); // 140 deg.
assertNear(.34202014332633131111,sine(2.79252680319022222222)); // 160 deg.
assertNear(.00000000000079323846,sine(3.14159265358900000000)); // 180 deg.
assertNear(-.34202014332484051044,sine(3.49065850398777777777)); // 200 deg.
assertNear(-.64278760968579663575,sine(3.83972435438655555555)); // 220 deg.
assertNear(-.86602540378390982112,sine(4.18879020478533333333)); // 240 deg.
assertNear(-.98480775301200909521,sine(4.53785605518411111111)); // 260 deg.
assertNear(-.98480775301242232845,sine(4.88692190558288888888)); // 280 deg.
assertNear(-.86602540378509967881,sine(5.23598775598166666666)); // 300 deg.
assertNear(-.64278760968761960351,sine(5.58505360638044444444)); // 320 deg.
assertNear(-.34202014332707671144,sine(5.93411945677922222222)); // 340 deg.
assertNear(-.00000000000158647692,sine(6.28318530717800000000)); // 360 deg.
assertNear(.34202014332409511011,sine(6.63225115757677777777)); // 380 deg.
assertNear(.64278760968518897983,sine(6.98131700797555555555)); // 400 deg.
Obviously you will need to map this answer to what your real function is meant to do. My point is that the tests should fully validate the behavior of the code in this iteration. If this iteration was meant to produce a sine() function that only works for 0 and PI/2, then that's fine. But that would be a serious waste of an iteration in my opinion.
It may be that your function is so complex that it must be done over several iterations. In that case your second approach is correct, and the tests should be updated in the next iteration when you add the extra functionality. Otherwise, find a way to add all the tests for this iteration quickly, so you won't have to worry about switching between real code and test code frequently.
Strictly following TDD, you can first implement the dumbest code that will work. In order to jump to the first option (to implement the real code), add more tests:
assertEqual(tan(x), sin(x)/cos(x))
If you implement more than what is absolutely required by your tests, then your tests will not completely cover your implementation. For example, if you implemented the whole sin() function with just the two tests above, you could accidentally "break" it by returning a triangle function (that almost looks like a sine function) and your tests would not be able to detect the error.
The other thing you will have to worry about for numeric functions is the notion of "equality" and having to deal with the inherent loss of precision in floating point calculations. That's what I thought your question was going to be about after reading just the title. :)
I don't know what language you are using, but when I am dealing with a numeric method, I typically write a simple test like yours first to make sure the outline is correct, and then I feed more values to cover cases where I suspect things might go wrong. In .NET, NUnit 2.5 has a nice feature for this, called [TestCase], where you can feed multiple input values to the same test like this:
[TestCase(1, 2, Result = 3)]
[TestCase(1, 1, Result = 2)]
public int CheckAddition(int a, int b)
{
    return a + b;
}
Short answer.
Write one test at a time.
Once it fails, get back to green first. If that means doing the simplest thing that can work, do it (option 2).
Once you're in the green, you can look at the code and choose to clean up (option 1). Or you can decide that the code doesn't smell that much yet and write subsequent tests that put the spotlight on the smells.
Another question you seem to have is how many tests you should write. You need to test until fear (that the function may not work) turns into boredom. So once you've tested all the interesting input-output combinations, you're done.
