And the refactor begot a library. Retest? - refactoring

I understand this is a subjective question and, as such, may be closed but I think it is worth asking.
Let's say, when building an application using TDD and going through a refactor, a library appears. If you yank the code out of your main application and place it into a separate assembly, do you take the time to write tests that cover the code, even though your main application is already testing it? (It's just a refactor.)
For example, in the NerdDinner application, we see wrappers for FormsAuthentication and MembershipProvider. These objects would be very handy across multiple applications and so they could be extracted out of the NerdDinner application and placed into their own assembly and reused.
If you were writing NerdDinner today, from scratch, and you noticed that you had a grab-bag of really useful wrappers and services and brought them into a new assembly, would you create new tests that fully cover the new assembly -- possibly duplicating tests? Or is it enough to say that, if your main application runs green on all its tests, your new assembly is effectively covered?
While my example with NerdDinner may be too simplistic to really worry about, I am thinking more about larger APIs or libraries. So, do you write tests to re-cover what you tested before (which may be a problem, because you will probably start with all your tests passing), or do you just write tests as the new assembly evolves?

In general, yes, I'd write tests for the new library; but it depends heavily on your time constraints. At the least, I'd go through and refactor the existing unit tests to refer properly to the refactored components; that alone might resolve the question.

Related

Swift: impact on performance when splitting a project up into frameworks

Are there any drawbacks of splitting up a Swift project into several frameworks? In my current application I don't need to do this but it would logically separate the different parts of my code so I intended to put the code into multiple frameworks (embedded binaries). However, I don't know if Swift's apparently awesome feature 'Whole Module Optimization' is affected by creating several modules.
Furthermore, are there any other drawbacks of using frameworks or performance upsides?
Thanks for your clarifications ;)
One thing I know is that frameworks can reduce compile time: you compile your stable framework once and reuse it, so Xcode only recompiles your main project.
The drawback (in my opinion) is that you have to import the framework wherever you use it.

Quickly testing a function that is a part of a big DLL project

I use VS2010 for C++ development, and I often end up doing work in some DLL project. After everything compiles nicely, I would like to try running dummy data through some classes, but of course the fact that it is a DLL and not an exe with a main makes that a no-go. So is there a simple way to do what I want, or am I cursed for eternity to copy/paste parts of a big project into a small testing one?
Of course, changing the type of the project also works, but I would like to have something almost like an interactive-shell way of testing functions.
I know this isn't a library or anything, but if you want to run the DLL on Windows simply, without framing it into anything or writing a script, you can use rundll32.exe, which ships with Windows. It allows you to run any of the exported functions in the DLL. The syntax should be similar to:
rundll32.exe PathAndNameofDll,exportedFunctionName [ArgsToTheExportedFunction]
http://best-windows.vlaurie.com/rundll32.html -- is a good, simple, still-relevant tutorial on how to use this binary. It's got some cool tricks in there that may surprise you.
If you are wondering about a 64-bit version, it has the same name (seriously, Microsoft?). Check it out here:
rundll32.exe equivalent for 64-bit DLLs
Furthermore, if you wanted to go low-level, you could in theory use OllyDbg, which comes with a DLL loader for running DLLs you want to debug (in assembly). You can do the same type of thing there (call exported functions and pass args), but the debugger is geared more toward reverse engineering than code debugging.
I think you have basically two options.
First is to use some sort of unit tests on the function. For C++ you can find a variety of implementations; for one, take a look at CppUnit.
The second option is to open the DLL, get the function via the Win32 API and call it that way (this would still qualify as unit testing on some level). You could generalize this approach somewhat by creating an executable that does the above, parametrized with the required information (e.g. DLL path, function name), to achieve the "interactive shell" you mentioned -- if you decide to take this path, you can check out this CodeProject article on loading DLLs from C++.
Besides using unit tests as provided by CppUnit, you can still write your own small testing framework. That way you can set up your DLL projects as needed, load them, link them, whatever you want, and exercise them with some simple data as you like.
This is valuable if you have many DLLs that depend on each other to do a certain job (legacy DLL projects in C++ tend to be hardly testable, in my experience).
Once you have built such a frame application, you can also inspect the possibilities that CppUnit gives you and combine them with your test frame.
That way you will end up with a good set of automated tests, which are still valuable unit tests. It is somewhat hard to start making unit tests once a project already has a certain size. Having your own framework will let you write tests whenever you make a change to a DLL: just insert it into your framework, test what you expect it to do, and enhance your frame more and more.
The basic idea is to separate the test, the test runner, the test data and the asserts to be made.
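That separation of concerns can be sketched in a few lines. This is a minimal illustration only, not a real framework; all names (TestCase, run_tests, the add function standing in for a DLL export) are invented for the example.

```python
class TestCase:
    """One test: a callable plus the data set it runs against."""
    def __init__(self, name, func, data):
        self.name = name
        self.func = func
        self.data = data  # list of ((inputs...), expected) pairs

def assert_equal(expected, actual, failures, test_name):
    """The assert, kept separate so reporting can evolve independently."""
    if expected != actual:
        failures.append(f"{test_name}: expected {expected!r}, got {actual!r}")

def run_tests(cases):
    """The runner: iterates tests, feeds each its data, collects failures."""
    failures = []
    for case in cases:
        for inputs, expected in case.data:
            assert_equal(expected, case.func(*inputs), failures, case.name)
    return failures

# Imagine 'add' is a wrapper around an exported DLL function.
def add(a, b):
    return a + b

cases = [TestCase("add", add, data=[((1, 2), 3), ((-1, 1), 0)])]
print(run_tests(cases))  # an empty list means every assertion passed
```

The payoff of rolling your own like this is that test, runner, data and asserts can each grow independently as more DLLs get pulled under test.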
I'm using Python + ctypes to build quick testing routines for my DLL applications.
If you are using the extended attribute syntax, it will be easy for you.
Google for "Python + ctypes + unit test" and you will find several examples.
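A minimal sketch of the ctypes approach follows. On Windows you would load your own DLL with ctypes.WinDLL; here the standard C library stands in so the example runs anywhere, and the DLL path in the comment is purely illustrative.

```python
import ctypes
import ctypes.util

# On Windows you would load the DLL under test, e.g.:
#     mylib = ctypes.WinDLL(r"C:\path\to\MyProject.dll")
# The C runtime stands in here. find_library may return None on some
# systems; CDLL(None) then falls back to the running process's symbols,
# which include libc on POSIX.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the signature before calling: ctypes defaults everything to int,
# and wrong argtypes/restype is the classic source of silent corruption.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

# Feed the exported function dummy data, as the answer above suggests.
print("abs(-42) ->", libc.abs(-42))  # prints 42
```

From here it is a short step to wrapping such calls in unittest cases, which gives you the "interactive shell" feel from a Python REPL as well.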
I would recommend Windows PowerShell cmdlets.
If you look at the article here - http://msdn.microsoft.com/en-us/magazine/cc163430.aspx you can see how easy it is to set up. Of course this article is mostly about testing C# code, but you can see how they talk about also being able to load any COM enabled DLL in the same way.
Here you can see how to load a COM assembly - http://blogs.technet.com/b/heyscriptingguy/archive/2009/01/26/how-do-i-use-windows-powershell-to-work-with-junk-e-mail-in-office-outlook.aspx
EDIT: I know a very successful storage-virtualization software company that uses PowerShell extensively to test both its managed and unmanaged (driver) code.

TDD with zero production code

I was going through 'The Clean Coder' by Bob Martin, where I read about the discipline of writing tests before any production code.
However, TDD articles for ASP.NET on MSDN show classes and method stubs being created first, and then unit tests being generated from those stubs.
I want to know whether I can write all unit tests before writing a single line of code in Business logic layer.
Edit 1: My idea was to refactor to the extent where I change the entire class-relationship structure itself if needed. If I start from a stub, then I would have to rewrite the tests in case the class and method themselves were wrong.
Edit 2: Apart from that, the thrust is on data-driven tests. If I use interfaces, how would I write a complete test where I have passed all the fields? Since interfaces need to be generic, I don't think they'll have all the properties; at best, interfaces can have CRUD stubs defined.
Thanks in advance.
Sure you can. What's stopping you?
(Though typically, you would write one test at a time, rather than writing them all at once. Writing them all up-front smacks of Big Design Up Front, aka Waterfall. Part of the point of TDD is that you design as you go and refactor as needed, so you end up with something that's only as complex as you actually need in order to satisfy your requirements -- it helps you avoid YAGNI.)
If you follow classic TDD principles, then you write a test to fail first, you run it and watch it fail, and only then do you write the necessary code to make it pass. (This helps make sure that there's not a subtle error in your test.) And if you're testing code that doesn't exist yet, the first failure you expect is a compiler error.
This is actually important. You're testing code that doesn't exist. Of course the compile should fail. If it doesn't, then you need to find out why -- maybe your tests aren't actually being compiled, or maybe there's already a class with the same name as the new one you're thinking of writing, or something else you didn't expect.
There's nothing stopping you from writing a non-compilable test first, and then going back and making it compile. (Just because Microsoft didn't understand TDD when they wrote their testing tools doesn't mean you can't do it yourself.) The IDE might step on your toes a bit while you do (completing the names of existing classes instead of leaving the names you're trying to write), but you quickly learn when to press Esc to deal with that.
Visual Studio 2010 lets you temporarily switch Intellisense into a "test-first" mode, where it won't step on your toes in this situation. But if you happen to use ReSharper, I don't think they have that feature yet.
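The red-to-green cycle described above can be sketched in Python, where the "compile error" shows up as a NameError at test run time because the production class doesn't exist yet. The Greeter class and its behavior are invented for illustration.

```python
import unittest

# Step 1 (red): write the test before the production class exists.
# If you run it at this point, it fails with a NameError -- Python's
# analogue of the compile error -- which proves the test really executes.
class TestGreeter(unittest.TestCase):
    def test_greets_by_name(self):
        self.assertEqual(Greeter().greet("Ada"), "Hello, Ada!")

# Step 2 (green): only now write the minimal production code to pass.
class Greeter:
    def greet(self, name):
        return f"Hello, {name}!"

# Run the test explicitly and report.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGreeter)
result = unittest.TestResult()
suite.run(result)
print("failures:", len(result.failures))  # 0 once Greeter exists
```

Step 3 would be to refactor with the test kept green, then repeat with the next failing test.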
It does not matter if you create the method stubs or the tests first. If you write the tests first, your editor might complain about method/class stubs not existing.

Placement of interfaces in visual studio solution

What is best practice with regard to the placement of interface types?
Often the quickest and easiest thing to do is place the interface in the same project as its concrete implementations -- however, if I understand things correctly, this means you are more likely to end up with project dependency issues. Is this a fair assessment of the situation, and if so, does it mean interfaces should be separated out into different projects?
It depends on what you want to do. You're correct that placing interfaces and classes in the same assembly will somewhat limit the usefulness of the abstraction of said interfaces. E.g. if you want to load types in an AppDomain with the purpose of unloading these again, you would typically access instances via the interfaces. However, if interfaces and classes are in the same assembly you can't load the interfaces without loading the classes as well.
Similarly if you at a later point want to supply a different set of classes for one or more interfaces you will still get all the old types if they are in the same assembly as the interfaces.
With that said I must admit that I do place interfaces and classes in the same assembly from time to time simply because I don't think that I will need the flexibility, so I prefer to keep things simple. As long as you have the option to rebuild everything you can rearrange the interfaces later if the need arises.
In a simple solution, I might have public interfaces and public factory classes, and internal implementation classes all in the same project.
In a more complicated solution, then to avoid a situation where project A depends on the interfaces in project B, and project B depends on the interfaces defined in project A, I might move the interfaces into a separate project which itself depends on nothing and which all other projects can depend on.
I practice "big systems can't be created from scratch: big systems that work are invariably found to have evolved from small systems that worked." So I might well start with a small and simple solution with the interfaces in the same project as the implementation, and then later (if and when it's found to be necessary) refactor to move the interfaces into a separate assembly.
Then again there's packaging; you might develop separate projects, and repackage everything into a single assembly when you ship it.
It is a deployment detail. There are a few cases where you have to put an interface type in its own assembly: definitely when using them in plug-in development or any other kind of code that runs in multiple AppDomains; almost definitely when using Remoting or any other kind of connected architecture.
Beyond that, it doesn't matter so much anymore. An interface type is just like any other class; you can add an assembly reference if you need it in another project. Keeping interfaces separate can help control versioning. It is a good idea to keep them separate if a change in an interface type can lead to widespread changes in the classes that implement it. The act of bumping the [AssemblyVersion] when you do so then helps troubleshoot deployment issues where you forgot to update a client assembly.
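The dependency-direction idea behind separating interfaces can be sketched in Python, with abstract base classes standing in for .NET interfaces and modules for assemblies. All names here are illustrative.

```python
from abc import ABC, abstractmethod

# "Interfaces project": depends on nothing else.
class IStorage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

# "Project A": implements the interface.
class InMemoryStorage(IStorage):
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

# "Project B": consumes only the interface, so it never needs a
# reference to the implementing project -- no circular dependency.
def archive(storage: IStorage) -> str:
    storage.save("greeting", "hello")
    return storage.load("greeting")

print(archive(InMemoryStorage()))  # prints hello
```

Because Project B only ever names IStorage, swapping in a different implementation later (or loading one from elsewhere) touches nothing but the wiring.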

Is it possible to get Code Coverage Analysis on an Interop Assembly?

I've asked this question over on the MSDN forums also and haven't found a resolution:
http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=3686852&SiteID=1
The basic problem here as I see it is that an interop assembly doesn't actually contain any IL that can be instrumented (except for maybe a few delegates). So, although I can put together a test project that exercises the interop layer, I can't get a sense for how many of those methods and properties I'm actually calling.
Plan B is to go and write a code generator that creates a library of RCWWs (Runtime Callable Wrapper Wrappers), and instrument that for the purposes of code coverage.
Edit: @Franci Penov,
Yes that's exactly what I want to do. The COM components delivered to us constitute a library of some dozen DLLs containing approx. 3000 types. We consume that library in our application and are charged with testing that Interop layer, since the group delivering the libraries to us does minimal testing. Code coverage would allow us to ensure that all interfaces and coclasses are exercised. That's all I'm attempting to do. We have separate test projects that exercise our own managed code.
Yes, ideally the COM server team should be testing and analyzing their own code, but we don't live in an ideal world, and I have to deliver a quality product based on their work. If I can produce a test report indicating that I've tested 80% of their code interfaces and 50% of those don't work as advertised, I can get fixes done where fixes need to be done, and not work around problems.
The mock layer you mentioned would be useful, but wouldn't ultimately be achieving the goal of testing the Interop layer itself, and I certainly would not want to be maintaining it by hand -- we are at the mercy of the COM guys in terms of changes to the interfaces.
Like I mentioned above -- the next step is to generate wrappers for the wrappers and instrument those for testing purposes.
To answer your question - it's not possible to instrument interop assemblies for code coverage. They contain only metadata, and no executable code as you mention yourself.
Besides, I don't see much point in trying to code coverage the interop assembly. You should be measuring the code coverage of code you write.
From the MSDN forums thread you mention, it seems to me you actually want to measure how your code uses the COM component. Unless your code's goal is to enumerate and explicitly call all methods and properties of the COM object, you don't need to measure code coverage. You need unit/scenario testing to ensure that your code is calling the right methods/properties at the right time.
Imho, the right way to do this would be to write a mock layer for the COM object and test that you are calling all the methods/properties as expected.
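The mock-layer idea can be sketched with a mocking library. This example uses Python's unittest.mock as a stand-in for a .NET mocking framework; the COM method names (Connect, Query, Disconnect) and the connection string are invented for illustration.

```python
from unittest.mock import MagicMock

# A stand-in for the interop-generated wrapper around the COM server.
com_server = MagicMock()
com_server.Query.return_value = ["row1", "row2"]

# Code under test: this is *your* code, which is what coverage
# should really be measured against.
def fetch_rows(server, connection_string):
    server.Connect(connection_string)
    try:
        return server.Query("SELECT *")
    finally:
        server.Disconnect()  # must run even if Query raises

rows = fetch_rows(com_server, "Provider=Example")

# The mock records every call, so you can assert that the interop
# layer was driven exactly as expected.
com_server.Connect.assert_called_once_with("Provider=Example")
com_server.Disconnect.assert_called_once()
print(rows)  # prints ['row1', 'row2']
```

The same assertions then double as documentation of which parts of the COM surface your code actually exercises.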
Plan C:
Use something like Mono.Cecil to weave simple execution counters into the interop assembly. For example, check out this section in the FAQ: "I would like to add some tracing functionality to an assembly I can't debug, is it possible using Cecil?"
