TDD with zero production code

I was going through 'The Clean Coder' by Bob Martin, where I read about the discipline of writing tests before any production code.
However, the TDD articles for ASP.NET on MSDN show classes and method stubs being created first, with unit tests then generated from those stubs.
I want to know whether I can write all of my unit tests before writing a single line of code in the business logic layer.
Edit 1: My idea was to refactor to the extent of changing the entire class-relationship structure itself if needed. If I start from a stub, I would have to rewrite the tests whenever the class or method itself turned out to be wrong.
Edit 2: Apart from that, the thrust is on data-driven tests. If I use interfaces, how would I write a complete test in which I pass all the fields? Since interfaces need to be generic, I don't think they will have all the properties; at best an interface can have CRUD stubs defined.
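To illustrate, by a CRUD-only interface I mean something along these lines (a rough hypothetical sketch, not taken from any article):

// A generic interface can only promise CRUD operations; it knows nothing
// about the individual fields of any concrete entity.
public interface IRepository<T>
{
    void Create(T item);
    T Read(int id);
    void Update(T item);
    void Delete(int id);
}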
Thanks in advance.

Sure you can. What's stopping you?
(Though typically, you would write one test at a time, rather than writing them all at once. Writing them all up-front smacks of Big Design Up Front, aka Waterfall. Part of the point of TDD is that you design as you go and refactor as needed, so you end up with something that's only as complex as you actually need in order to satisfy your requirements -- it keeps you on the right side of YAGNI: you aren't gonna need it.)
If you follow classic TDD principles, then you write a test to fail first, you run it and watch it fail, and only then do you write the necessary code to make it pass. (This helps make sure that there's not a subtle error in your test.) And if you're testing code that doesn't exist yet, the first failure you expect is a compiler error.
This is actually important. You're testing code that doesn't exist. Of course the compile should fail. If it doesn't, then you need to find out why -- maybe your tests aren't actually being compiled, or maybe there's already a class with the same name as the new one you're thinking of writing, or something else you didn't expect.
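For example, a very first test might look like this (a minimal sketch using NUnit; the InvoiceCalculator class and its Total method are hypothetical and deliberately don't exist yet, which is the whole point):

using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorTests
{
    [Test]
    public void Total_SumsLineItems()
    {
        // InvoiceCalculator hasn't been written yet, so the first "red"
        // you expect to see is a compiler error, not an assertion failure.
        var calculator = new InvoiceCalculator();

        decimal total = calculator.Total(new[] { 10m, 15m });

        Assert.AreEqual(25m, total);
    }
}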
There's nothing stopping you from writing a non-compilable test first, and then going back and making it compile. (Just because Microsoft didn't understand TDD when they wrote their testing tools doesn't mean you can't do it yourself.) The IDE might step on your toes a bit while you do (completing the names of existing classes instead of leaving the names you're trying to write), but you quickly learn when to press Esc to deal with that.
Visual Studio 2010 lets you temporarily switch IntelliSense into a "test-first" mode, where it won't step on your toes in this situation. But if you happen to use ReSharper, I don't think it has that feature yet.

It does not matter if you create the method stubs or the tests first. If you write the tests first, your editor might complain about method/class stubs not existing.

Related

How can I find what tests cover specific Go code?

I'm trying to figure out if there's a way to see what tests cover the functions in my Go code. I'm working on a code base with other developers so I didn't write all the tests/code myself.
I see some functions that are partially covered, or almost entirely covered, but they have no references anywhere (in their own package or in other packages), and the functions aren't called directly in any of the tests.
Is there a way I can find which tests are covering that specific code? When I've tried to find out whether this is possible, all I get are articles showing how to write and run tests and get coverage percentages or highlighting, but nothing that actually shows whether it is possible at all.
For the record, I'm using VS Code on Linux and running go test ./... -cover in my terminal, as well as Ctrl+Shift+P -> "Go: Toggle Test Coverage In Current Package" for coverage highlighting within VS Code.
With the fuller picture in view now, via comments, it seems that you have a mess of tests, written by someone(s) less experienced with Go, and your goal is to clean up the tests to follow standard Go conventions.
If I were faced with that task, my strategy would probably be to disable all tests in the repository by using a build constraint that is never satisfied, such as:
//go:build skip
// +build skip

package foo
(The blank line before the package clause is required for the constraint to be recognized; //go:build is the current syntax and // +build is the legacy form.)
Confirm that all tests are disabled by running go test ./... -cover, and check that you have 0% coverage everywhere.
Then, test by test, I would move each test into its proper place, and put it in a new file without the skip build tag.
If it's a big project, I'd probably do one package at a time, or in some other small, logical steps, to avoid a monster pull request. Use your own judgement here.
I'd also strongly resist the urge to do any other cleanups or fixes simultaneously. My goal would be to make each PR a simple copy-paste, so review is trivial, and I'd save a list of other cleanups I discover, to do afterward.

Is there a tool to highlight code that has been run?

We write code, and we write tests. When you write tests, you're thinking both about the expected usage of the thing you're testing and about the internal implementation of that thing, writing your test to try to expose bad behaviors.
Whenever you change the implementation of the thing, you're adding new lines of code, and sometimes it's difficult to be sure that your test actually exercises all the code in the thing. One thing you can do is set a breakpoint, and step through painstakingly. Sometimes you have to. But I'm looking for a faster, better way to ensure all the code has been tested.
Imagine setting a breakpoint and stepping through the code, but every line that was run gets highlighted and stays highlighted after it was run. So when the program finishes, you can easily look at the code and identify lines which were never run during your test.
Is there some kind of tool, extension, add-in, or something out there, that will let you run a program or test, and identify which lines were executed and which were not? This is useful for improving the quality of tests.
I am using Visual Studio and Xamarin Studio, but even if a tool like this exists only for other languages and IDEs, knowing about it will help me find analogous answers for the IDEs and languages that I personally care about.
As paddy responded in a comment (not sure why it wasn't an answer), the term you're looking for is coverage. It's a pretty critical tool to pair with unit testing, obviously, because if some code isn't covered by a test and never runs, you'd want to know so you can test more completely. Of course, the pitfall is that knowing THAT a line of code was touched doesn't automatically tell you HOW the line was touched -- you can have 100% line coverage without covering 100% of the use cases of a particular line of code; for example, an inline conditional where one branch never gets hit.
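For example, a single line with an inline conditional can report as fully covered even though one branch never runs (a hypothetical sketch, using NUnit-style assertions for illustration):

using NUnit.Framework;

public static class ShippingCalculator
{
    // One line of code, two behaviors: a test that only ever passes
    // isExpress = true marks this line as covered, yet the standard-rate
    // branch never executes.
    public static decimal Cost(bool isExpress, decimal weight) =>
        isExpress ? weight * 2.5m : weight * 1.0m;
}

[TestFixture]
public class ShippingCalculatorTests
{
    [Test]
    public void ExpressShipping_UsesPremiumRate()
    {
        Assert.AreEqual(5.0m, ShippingCalculator.Cost(true, 2m));
        // Line coverage reads 100% here, but Cost(false, ...) was never exercised.
    }
}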
Coverage is also useful to know even outside of testing. By performing a coverage analysis on code in production, you can get a feel for what aspects of your codebase are most critical in real use and where you need to focus on bugfixing, testing, and robustness. Similarly, if there are areas of your codebase that rarely or never get hit in a statistically significant period of production uptime, maybe that piece of the codebase can or should get trimmed out.
dotCover and dotTrace seem like they'll probably put you on the right path. Coming from Python, I used to use django-nose, which comes with coverage testing and reporting integrated, and there's coverage.py for standalone analysis. Of course these aren't the only tools in the ecosystem, just the ones I've used.

Can CxxTest Do Parameterized Tests?

According to this article, one can make a parameterized test in the GoogleTest framework with some code like this:
INSTANTIATE_TEST_CASE_P(InstantiationName,
                        MyStringTest,
                        ::testing::Values("meek", "geek", "freek"));

TEST_P(MyStringTest, acceptsEekyWords)
{
    ASSERT_TRUE(acceptName(GetParam()));
}
plus some scaffolding (a test fixture class, MyStringTest, deriving from ::testing::TestWithParam<const char*>).
After going through the CxxTest User Guide, I couldn't help but notice the lack of any mention of parameterized tests. Are parameterized tests even possible with CxxTest?
This question seems to address something similar, but the answer is by no means trivial.
I'm new to C++ unit testing. Maybe parameterized tests aren't a big deal? Almost all of my tests were parameterized in my last C# NUnit project.
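For comparison, a parameterized NUnit test looks roughly like this (NameValidator.AcceptName is a hypothetical C# counterpart of the acceptName function above):

using NUnit.Framework;

[TestFixture]
public class NameValidatorTests
{
    // NUnit runs this test once for each [TestCase] value.
    [TestCase("meek")]
    [TestCase("geek")]
    [TestCase("freek")]
    public void AcceptsEekyWords(string name)
    {
        Assert.IsTrue(NameValidator.AcceptName(name));
    }
}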
As I wrote in my answer to the other question you cited, CxxTest has no facility for generating tests at run time, which is what every parameter-based test framework I've ever seen does. In CxxTest, the test list is determined at compile time by the Python-based C++ parser, cxxtestgen. You can generate suites dynamically, but you only have the option of generating zero or one copy of any suite.

And the refactor begot a library. Retest?

I understand this is a subjective question and, as such, may be closed but I think it is worth asking.
Let's say that, when building an application using TDD and going through a refactor, a library appears. If you yank the code out of your main application and place it into a separate assembly, do you take the time to write tests that cover the code, even though your main application is already testing it? (It's just a refactor.)
For example, in the NerdDinner application, we see wrappers for FormsAuthentication and MembershipProvider. These objects would be very handy across multiple applications and so they could be extracted out of the NerdDinner application and placed into their own assembly and reused.
If you were writing NerdDinner today, from scratch, and you noticed that you had a grab-bag of really useful wrappers and services that you brought into a new assembly, would you create new tests that fully cover the new assembly, possibly duplicating existing tests? Or is it enough to say that, if your main application runs green on all its tests, your new assembly is effectively covered?
While my example with NerdDinner may be too simplistic to really worry about, I am thinking more about larger APIs or libraries. So, do you write tests to re-cover what you tested before (which may be a problem, because you will probably start with all of those tests passing) or do you just write tests as the new assembly evolves?
In general, yes, I'd write tests for the new library; BUT it depends heavily on your time constraints. At the very least, I'd go through and refactor the existing unit tests to properly refer to the refactored components; that alone might resolve the question.

Is it possible to get Code Coverage Analysis on an Interop Assembly?

I've asked this question over on the MSDN forums also and haven't found a resolution:
http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=3686852&SiteID=1
The basic problem here as I see it is that an interop assembly doesn't actually contain any IL that can be instrumented (except for maybe a few delegates). So, although I can put together a test project that exercises the interop layer, I can't get a sense for how many of those methods and properties I'm actually calling.
Plan B is to go and write a code generator that creates a library of RCWWs (Runtime Callable Wrapper Wrappers), and instrument that for the purposes of code coverage.
Edit: @Franci Penov,
Yes, that's exactly what I want to do. The COM components delivered to us constitute a library of some dozen DLLs containing approximately 3000 types. We consume that library in our application and are charged with testing the interop layer, since the group delivering the libraries to us does minimal testing. Code coverage would allow us to ensure that all interfaces and coclasses are exercised. That's all I'm attempting to do. We have separate test projects that exercise our own managed code.
Yes, ideally the COM server team should be testing and analyzing their own code, but we don't live in an ideal world, and I have to deliver a quality product based on their work. If I can produce a test report indicating that I've tested 80% of their code interfaces and that 50% of those don't work as advertised, I can get fixes done where fixes need to be done rather than work around the problems.
The mock layer you mentioned would be useful, but wouldn't ultimately be achieving the goal of testing the Interop layer itself, and I certainly would not want to be maintaining it by hand -- we are at the mercy of the COM guys in terms of changes to the interfaces.
As I mentioned above, the next step is to generate wrappers for the wrappers and instrument those for testing purposes.
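To sketch what such a wrapper-of-a-wrapper might look like (IOrderService is a hypothetical stand-in for one of the interop interfaces), the point is that the wrapper is ordinary managed code, so it contains IL that a coverage tool can instrument:

// Hypothetical generated wrapper: it forwards every member of the interop
// interface, so exercising it in tests produces measurable coverage while
// still driving the real COM object underneath.
public class OrderServiceWrapper
{
    private readonly InteropLib.IOrderService _inner;

    public OrderServiceWrapper(InteropLib.IOrderService inner)
    {
        _inner = inner;
    }

    public int CreateOrder(string customerId)
    {
        return _inner.CreateOrder(customerId);
    }

    public void CancelOrder(int orderId)
    {
        _inner.CancelOrder(orderId);
    }
}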
To answer your question - it's not possible to instrument interop assemblies for code coverage. They contain only metadata, and no executable code as you mention yourself.
Besides, I don't see much point in trying to measure code coverage of the interop assembly itself. You should be measuring the coverage of the code you write.
From the MSDN forums thread you mention, it seems to me you actually want to measure how your code uses the COM component. Unless your code's goal is to enumerate and explicitly call all methods and properties of the COM object, you don't need to measure code coverage. You need unit/scenario testing to ensure that your code is calling the right methods/properties at the right time.
Imho, the right way to do this would be to write a mock layer for the COM object and test that you are calling all the methods/properties as expected.
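A hand-rolled version of that mock layer might look something like this (again, IOrderService and its members are hypothetical names, not from the actual library):

using System.Collections.Generic;

// Hypothetical mock: it records which members your code invoked, so a unit
// test can assert the expected interaction without touching the real COM server.
public class MockOrderService : InteropLib.IOrderService
{
    public List<string> Calls { get; } = new List<string>();

    public int CreateOrder(string customerId)
    {
        Calls.Add(nameof(CreateOrder));
        return 1;
    }

    public void CancelOrder(int orderId)
    {
        Calls.Add(nameof(CancelOrder));
    }
}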
Plan C:
Use something like Mono.Cecil to weave simple execution counters into the interop assembly. For example, check out this section of the Cecil FAQ: "I would like to add some tracing functionality to an assembly I can't debug, is it possible using Cecil?"
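A rough sketch of that Cecil approach, assuming the target assembly's methods actually carry IL bodies to instrument (Interop.SomeLib.dll and the CallCounter helper are made-up names):

using Mono.Cecil;
using Mono.Cecil.Cil;

public static class CallCounter
{
    // Called by the injected IL; record the hit however you see fit.
    public static void Hit(string methodName) { }
}

public static class Weaver
{
    public static void Run()
    {
        var assembly = AssemblyDefinition.ReadAssembly("Interop.SomeLib.dll");
        var module = assembly.MainModule;
        var hit = module.ImportReference(typeof(CallCounter).GetMethod("Hit"));

        foreach (var type in module.Types)
        {
            foreach (var method in type.Methods)
            {
                // Pure interop members carry no IL body, so there is nothing
                // to instrument for them; skip those.
                if (!method.HasBody || method.Body.Instructions.Count == 0)
                    continue;

                var il = method.Body.GetILProcessor();
                var first = method.Body.Instructions[0];
                il.InsertBefore(first, il.Create(OpCodes.Ldstr, method.FullName));
                il.InsertBefore(first, il.Create(OpCodes.Call, hit));
            }
        }

        assembly.Write("Interop.SomeLib.Instrumented.dll");
    }
}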
