Is it possible to get Code Coverage Analysis on an Interop Assembly? - interop

I've asked this question over on the MSDN forums also and haven't found a resolution:
http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=3686852&SiteID=1
The basic problem, as I see it, is that an interop assembly doesn't actually contain any IL that can be instrumented (except for maybe a few delegates). So, although I can put together a test project that exercises the interop layer, I can't get a sense of how many of those methods and properties I'm actually calling.
Plan B is to go and write a code generator that creates a library of RCWWs (Runtime Callable Wrapper Wrappers), and instrument that for the purposes of code coverage.
Edit: @Franci Penov,
Yes that's exactly what I want to do. The COM components delivered to us constitute a library of some dozen DLLs containing approx. 3000 types. We consume that library in our application and are charged with testing that Interop layer, since the group delivering the libraries to us does minimal testing. Code coverage would allow us to ensure that all interfaces and coclasses are exercised. That's all I'm attempting to do. We have separate test projects that exercise our own managed code.
Yes, ideally the COM server team should be testing and analyzing their own code, but we don't live in an ideal world, and I have to deliver a quality product based on their work. If I can produce a test report indicating that I've tested 80% of their code interfaces and that 50% of those don't work as advertised, I can get fixes done where fixes need to be done, and not work around problems.
The mock layer you mentioned would be useful, but wouldn't ultimately be achieving the goal of testing the Interop layer itself, and I certainly would not want to be maintaining it by hand -- we are at the mercy of the COM guys in terms of changes to the interfaces.
Like I mentioned above -- the next step is to generate wrappers for the wrappers and instrument those for testing purposes.
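For illustration, the generated wrapper code might look roughly like this (IFoo, InteropLib, and FooWrapper are placeholders, not the real types; the generator would emit one such class per interop interface):

    // Hypothetical shape of one generated RCW-wrapper (RCWW) class.
    public class FooWrapper
    {
        private readonly InteropLib.IFoo _inner;   // the RCW from the interop assembly

        public FooWrapper(InteropLib.IFoo inner)
        {
            _inner = inner;
        }

        // Each forwarding body is real IL, so a coverage tool can instrument it.
        public int Compute(int value)
        {
            return _inner.Compute(value);
        }

        public string Name
        {
            get { return _inner.Name; }
            set { _inner.Name = value; }
        }
    }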

To answer your question - it's not possible to instrument interop assemblies for code coverage. They contain only metadata, and no executable code as you mention yourself.
Besides, I don't see much point in trying to measure code coverage on the interop assembly. You should be measuring the coverage of the code you write.
From the MSDN forums thread you mention, it seems to me you actually want to measure how your code uses the COM component. Unless your code's goal is to enumerate and explicitly call all methods and properties of the COM object, you don't need to measure code coverage. You need unit/scenario testing to ensure that your code calls the right methods/properties at the right time.
Imho, the right way to do this would be to write a mock layer for the COM object and test that you are calling all the methods/properties as expected.
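To sketch the idea (all names here are placeholders, not your real interfaces):

    // Your code depends on an interface you own rather than on the RCW directly.
    public interface IFoo
    {
        int Compute(int value);
    }

    // Production implementation forwards to the COM object.
    public class ComBackedFoo : IFoo
    {
        private readonly InteropLib.IFoo _com = new InteropLib.FooClass();
        public int Compute(int value) { return _com.Compute(value); }
    }

    // Test double records which members were exercised.
    public class MockFoo : IFoo
    {
        public bool ComputeCalled;
        public int Compute(int value) { ComputeCalled = true; return 42; }
    }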

Plan C: use something like Mono.Cecil to weave simple execution counters into the interop assembly. For example, check out this section in the FAQ: "I would like to add some tracing functionality to an assembly I can’t debug, is it possible using Cecil?"
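A rough sketch of that weaving with a recent Mono.Cecil (older versions name ImportReference simply Import; the assembly names are placeholders, and note that methods without IL bodies -- which is most of an interop assembly -- cannot be instrumented this way):

    using System;
    using Mono.Cecil;
    using Mono.Cecil.Cil;

    class Weaver
    {
        static void Main()
        {
            var assembly = AssemblyDefinition.ReadAssembly("Interop.Foo.dll");
            var module = assembly.MainModule;

            // Import Console.WriteLine(string) so the woven IL can call it.
            var writeLine = module.ImportReference(
                typeof(Console).GetMethod("WriteLine", new[] { typeof(string) }));

            foreach (var type in module.Types)
            {
                foreach (var method in type.Methods)
                {
                    if (!method.HasBody) continue;   // nothing to weave into

                    var il = method.Body.GetILProcessor();
                    var first = method.Body.Instructions[0];

                    // Prepend: Console.WriteLine("enter <method>");
                    il.InsertBefore(first, il.Create(OpCodes.Ldstr, "enter " + method.FullName));
                    il.InsertBefore(first, il.Create(OpCodes.Call, writeLine));
                }
            }

            assembly.Write("Interop.Foo.Traced.dll");
        }
    }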

Related

Quickly testing a function that is a part of a big DLL project

I use VS2010 for C++ development, and I often end up doing work in some DLL project. After everything compiles nicely, I would like to run some dummy data through a few classes, but of course the fact that it is a DLL, and not an EXE with a main, makes that a no-go. So is there a simple way to do what I want, or am I cursed for eternity to copy/paste parts of a big project into a small testing one?
Of course, changing the type of the project also works, but I would like to have something almost like an interactive-shell way of testing functions.
I know this isn't a library or anything, but if you want to run the DLL on Windows simply, without framing it into anything or writing a script, you can use rundll32.exe, which is built into Windows. It allows you to run functions exported from the DLL (strictly speaking, it expects exports with a particular callback signature, so passing arbitrary functions can misbehave). The syntax should be similar to:
rundll32.exe PathAndNameofDll,exportedFunctionName [ArgsToTheExportedFunction]
http://best-windows.vlaurie.com/rundll32.html is a good, simple, still-relevant tutorial on how to use this binary. It's got some cool tricks in there that may surprise you.
If you are wondering about a 64-bit version, it has the same name (seriously, Microsoft?); check it out here:
rundll32.exe equivalent for 64-bit DLLs
Furthermore, if you wanted to go low-level, you could in theory use OllyDbg, which comes with a DLL loader for running DLLs you want to debug (in assembly). You can do the same kind of thing there (call exported functions and pass arguments), but the debugger is geared more toward reverse engineering than code debugging.
I think you have basically two options.
The first is to use some sort of unit test on the function. For C++ you can find a variety of implementations; for one, take a look at CppUnit.
The second option is to load the DLL, get the function via the Win32 API, and call it that way (this would still qualify as unit testing on some level). You could generalize this approach by creating an executable that does the above, parametrized with the required information (e.g. DLL path, function name), to achieve the "interactive shell" you mentioned. If you decide to take this path, you can check out this CodeProject article on loading DLLs from C++.
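A minimal sketch of such a harness -- written here in C# with P/Invoke for brevity, though the same Win32 calls work from C++ -- assuming a parameterless stdcall export (the delegate must be adjusted to match the real signature):

    using System;
    using System.Runtime.InteropServices;

    static class DllRunner
    {
        [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Ansi)]
        static extern IntPtr LoadLibrary(string path);

        [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Ansi)]
        static extern IntPtr GetProcAddress(IntPtr module, string name);

        [UnmanagedFunctionPointer(CallingConvention.StdCall)]
        delegate int ExportedFunc();   // hypothetical signature of the export

        // usage: DllRunner <dll-path> <exported-function-name>
        static void Main(string[] args)
        {
            IntPtr module = LoadLibrary(args[0]);
            if (module == IntPtr.Zero) throw new Exception("LoadLibrary failed");

            IntPtr proc = GetProcAddress(module, args[1]);
            if (proc == IntPtr.Zero) throw new Exception("export not found");

            var func = (ExportedFunc)Marshal.GetDelegateForFunctionPointer(
                proc, typeof(ExportedFunc));
            Console.WriteLine("returned: {0}", func());
        }
    }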
Besides using unit tests as provided by CppUnit, you can still write your own small testing framework. That way you can set up your DLL projects as needed, load them, link them, whatever you want, and exercise them with some simple data of your choosing. This is valuable if you have many DLLs that depend on each other to do a certain job (legacy DLL projects in C++ tend to be hard to test, in my experience).
Once you have such a frame application, you can also look into the possibilities CppUnit gives you and combine them with your test frame. That way you will end up with a good set of automated tests that are still valuable unit tests. It is somewhat hard to start writing unit tests when a project already has a certain size; having your own framework lets you write tests whenever you make a change to a DLL. Just insert the DLL into your framework, test what you expect it to do, and enhance your frame more and more.
The basic idea is to separate the tests, the test runner, the test data, and the asserts to be made.
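In outline, that separation might look like the following sketch (in C# for compactness; the shape is the same for a native harness):

    using System;
    using System.Collections.Generic;

    // Each test knows only its own data and asserts; it throws on failure.
    public interface ITestCase
    {
        string Name { get; }
        void Run();
    }

    // The runner knows only how to iterate tests and report results.
    public static class TestRunner
    {
        public static void RunAll(IEnumerable<ITestCase> tests)
        {
            foreach (var test in tests)
            {
                try { test.Run(); Console.WriteLine("PASS " + test.Name); }
                catch (Exception e) { Console.WriteLine("FAIL {0}: {1}", test.Name, e.Message); }
            }
        }
    }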
I'm using Python + ctypes to build quick testing routines for my DLL applications. If you are using the extended attribute syntax, it will be easy for you. Google for "Python + ctypes + unit test" and you will find several examples.
I would recommend Windows PowerShell cmdlets.
If you look at the article here - http://msdn.microsoft.com/en-us/magazine/cc163430.aspx - you can see how easy it is to set up. Of course, this article is mostly about testing C# code, but you can see how they talk about also being able to load any COM-enabled DLL in the same way.
Here you can see how to load a COM assembly - http://blogs.technet.com/b/heyscriptingguy/archive/2009/01/26/how-do-i-use-windows-powershell-to-work-with-junk-e-mail-in-office-outlook.aspx
EDIT: I know a very successful storage virtualization software company that uses PowerShell extensively to test both its managed and unmanaged (driver) code.

TDD with zero production code

I was going through 'The Clean Coder' by Bob Martin, where I read about the discipline of writing tests before any production code.
However, the TDD articles for ASP.NET on MSDN show classes and method stubs being created first, with unit tests then generated from those stubs.
I want to know whether I can write all unit tests before writing a single line of code in the business logic layer.
Edit 1: My idea was to refactor to the extent of changing the entire class-relationship structure itself if needed. If I start from a stub, then I would have to rewrite the tests in case the class and method themselves were wrong.
Edit 2: Apart from that, the thrust is on data-driven tests. If I use interfaces, how would I write a complete test in which I have passed all the fields? Since interfaces need to be generic, I don't think they'll have all the properties; at best, interfaces can have CRUD stubs defined.
Thanks in advance.
Sure you can. What's stopping you?
(Though typically, you would write one test at a time, rather than writing them all at once. Writing them all up-front smacks of Big Design Up Front, aka Waterfall. Part of the point of TDD is that you design as you go and refactor as needed, so you end up with something that's only as complex as you actually need in order to satisfy your requirements -- it helps you avoid YAGNI.)
If you follow classic TDD principles, then you write a test to fail first, you run it and watch it fail, and only then do you write the necessary code to make it pass. (This helps make sure that there's not a subtle error in your test.) And if you're testing code that doesn't exist yet, the first failure you expect is a compiler error.
This is actually important. You're testing code that doesn't exist. Of course the compile should fail. If it doesn't, then you need to find out why -- maybe your tests aren't actually being compiled, or maybe there's already a class with the same name as the new one you're thinking of writing, or something else you didn't expect.
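For instance, a first "red" test might reference a class that hasn't been written yet (PriceCalculator here is purely hypothetical):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PriceCalculatorTests
    {
        [TestMethod]
        public void Total_AppliesTenPercentDiscount_Over100()
        {
            var calculator = new PriceCalculator();   // type doesn't exist yet
            decimal total = calculator.Total(200m);   // neither does this method

            // This file won't even compile -- and that's the expected first failure.
            Assert.AreEqual(180m, total);
        }
    }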
There's nothing stopping you from writing a non-compilable test first, and then going back and making it compile. (Just because Microsoft didn't understand TDD when they wrote their testing tools doesn't mean you can't do it yourself.) The IDE might step on your toes a bit while you do (completing the names of existing classes instead of leaving the names you're trying to write), but you quickly learn when to press Esc to deal with that.
Visual Studio 2010 lets you temporarily switch IntelliSense into a "test-first" mode, where it won't step on your toes in this situation. But if you happen to use ReSharper, I don't think it has that feature yet.
It does not matter if you create the method stubs or the tests first. If you write the tests first, your editor might complain about method/class stubs not existing.

And the refactor begot a library. Retest?

I understand this is a subjective question and, as such, may be closed but I think it is worth asking.
Let's say that, when building an application using TDD and going through a refactor, a library appears. If you yank the code out of your main application and place it into a separate assembly, do you take the time to write tests that cover the code, even though your main application is already testing it? (It's just a refactor.)
For example, in the NerdDinner application, we see wrappers for FormsAuthentication and MembershipProvider. These objects would be very handy across multiple applications and so they could be extracted out of the NerdDinner application and placed into their own assembly and reused.
If you were writing NerdDinner today, from scratch, and you noticed that you had a grab-bag of really useful wrappers and services, so you brought them into a new assembly, would you create new tests that fully cover the new assembly -- possibly repeating tests? Is it enough to say that, if your main application runs green on all its tests, your new assembly is effectively covered?
While my example with NerdDinner may be too simplistic to really worry about, I am more thinking about larger APIs or libraries. So, do you write tests to re-cover what you tested before (may be a problem because you will probably start with all your tests passing) or do you just write tests as the new assembly evolves?
In general, yes, I'd write tests for the new library; BUT it's very dependent upon the time constraints. At the least, I'd go through and refactor the unit tests that exist to properly refer to the refactored components; that alone might resolve the question.

Is there a tool that will track/log managed code?

I'm working on a bug where we have a use case that works and a subtly different use case that doesn't. The code base has next to no logging, and I don't want to spend time now sprinkling logging throughout the code base, though I do have time budgeted to do that at a later date.
Is there a tool that logs a program's actions, i.e., logs each function call?
Apparently AppSight does this, but it costs in the hundreds of thousands.
You could try using our logging tool SmartInspect together with the aspect oriented programming (AOP) framework PostSharp. By using our aspect library, you can automatically instrument and log all method calls, exceptions and even field/property changes.
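A rough sketch of the PostSharp side (using its OnMethodBoundaryAspect with the PostSharp 2+ API; Console.WriteLine stands in for the SmartInspect calls to keep the sketch self-contained):

    using System;
    using PostSharp.Aspects;

    // Applied to a method, or multicast across an assembly, this logs
    // every entry and exit together with the argument values.
    [Serializable]
    public class TraceAspect : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            Console.WriteLine("enter {0}({1})",
                args.Method.Name, string.Join(", ", args.Arguments.ToArray()));
        }

        public override void OnExit(MethodExecutionArgs args)
        {
            Console.WriteLine("leave {0}", args.Method.Name);
        }
    }

Multicasting it with [assembly: TraceAspect] applies the aspect to every method in the assembly, which is what gives you the "log everything" behavior without touching each method by hand.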
A solution that doesn't require any code change / recompile would be to use a tool that's based on the .NET Profiling API, which injects its hooks at runtime.
The only free, open-source project I know of that traces not just method entry/exit but also return values and method parameters is SAE (Simple Assembly Explorer). When you open it up, just navigate to your EXE, right-click on it, and choose Profile.
There are some limits to this tool - specifically, it won't print out the values of generic types, and the text output is not indented/prettified.
Runtime Flow (developed by me) can log each function call with parameters for .NET 2.0 - 4.0 applications without any additional instrumentation.

Adding Runtime Intelligence Application Analytics for a library and not an application

I want to add usage statistics to a .NET 4.0 library I'm writing on CodePlex.
I'm trying to follow the steps described here, but my problem lies in the fact that what I'm writing is a library and not an application.
One of the steps is to add the Setup and Teardown attributes. I thought about adding the Setup attribute to a static constructor or another place that will run once per usage of the library. My problem lies with the Teardown attribute, which should be placed on code that ends the usage; I don't know where to put it.
Is it possible to get usage statistics on a library?
Maybe I can register on an event that will fire when the application unloads the dll?
This looks like a typical honeypot giveaway, designed to commit you to the retail edition of their obfuscator. It's a tough business, and few play this game better than PreEmptive. Yes, using attributes is not going to work for a library. The only possible candidate would be a finalizer, and you do not want your code to contact some website while the finalizer thread is running.
Take a look at the retail edition of their product. I bet it has a way to invoke the methods that are normally injected by their obfuscator directly. The class constructor is an obvious candidate for "Setup", and an event handler for the AppDomain.ProcessExit event could be a possible location for the "Teardown" call, as sketched below. This also might avoid having to run the obfuscator at all, which is not undesirable in an open source project.
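In code, that wiring might look like the following sketch (Analytics.Start/Stop are hypothetical stand-ins for whatever calls the obfuscator would normally inject):

    using System;

    // Hypothetical stand-in for the injected Runtime Intelligence calls.
    internal static class Analytics
    {
        internal static void Start() { Console.WriteLine("usage tracking started"); }
        internal static void Stop()  { Console.WriteLine("usage tracking stopped"); }
    }

    internal static class UsageTracking
    {
        // Runs once, the first time the library is touched.
        static UsageTracking()
        {
            Analytics.Start();                      // hypothetical "Setup" call
            AppDomain.CurrentDomain.ProcessExit +=
                (sender, e) => Analytics.Stop();    // hypothetical "Teardown" call
        }

        // Call this from the library's public entry points to force the
        // class constructor to run.
        internal static void Touch() { }
    }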
