Is there a tool to highlight code that has been run? - visual-studio

We write code, and we write tests. When you write tests, you're thinking both about the expected usage of the thing you're testing and about its internal implementation, writing your test to try to expose bad behaviors.
Whenever you change the implementation, you're adding new lines of code, and sometimes it's difficult to be sure that your tests actually exercise all of that code. One thing you can do is set a breakpoint and step through painstakingly. Sometimes you have to. But I'm looking for a faster, better way to ensure all the code has been tested.
Imagine setting a breakpoint and stepping through the code, but every line that was run gets highlighted and stays highlighted after it was run. So when the program finishes, you can easily look at the code and identify lines which were never run during your test.
Is there some kind of tool, extension, add-in, or something out there, that will let you run a program or test, and identify which lines were executed and which were not? This is useful for improving the quality of tests.
I am using Visual Studio and Xamarin Studio, but even if a tool like this exists for different languages and different IDEs, knowing about it will help me find analogous answers for the IDEs and languages that I personally care about.

As paddy responded in a comment (not sure why it wasn't an answer), the term you're looking for is coverage. It's a critical tool to pair with unit testing: if some code isn't covered by any test and never runs, you want to know about it so you can test more completely. Of course, the pitfall is that knowing THAT a line of code was touched doesn't automatically tell you HOW it was touched; you can have 100% line coverage without covering 100% of the use cases of a particular line of code. Maybe you have an inline conditional where one branch never gets hit, just as an example.
Coverage is also useful to know even outside of testing. By performing a coverage analysis on code in production, you can get a feel for which parts of your codebase are most critical in real use and where you need to focus on bugfixing, testing, and robustness. Similarly, if there are areas of your codebase that rarely or never get hit over a statistically significant period of production uptime, maybe those pieces can or should be trimmed out.
dotCover and dotTrace seem like they'll put you on the right path. Coming from Python, I used Django-Nose, which comes with coverage testing and reporting integrated, and there's coverage.py for standalone analysis. These aren't the only tools in the ecosystem, of course, just the ones I've used.
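To make the idea concrete, here is a minimal sketch of line-coverage measurement using coverage.py's Python API (the module and function names are hypothetical placeholders; this is an illustration of the concept, not a recommendation for any particular stack):

import coverage

cov = coverage.Coverage()
cov.start()

# Run whatever you want measured: a test suite, or just one code path.
import mymodule            # hypothetical module under test
mymodule.do_work()         # hypothetical function exercised by the test

cov.stop()
cov.save()

# Print a per-file summary listing the line numbers that were never hit,
# and write an HTML report in which executed lines are highlighted.
cov.report(show_missing=True)
cov.html_report(directory="htmlcov")

In day-to-day use you would more likely run the command-line tool (for example, coverage run -m pytest followed by coverage html) and look for the unhighlighted lines in the report; .NET tools such as dotCover present the same kind of information inside Visual Studio.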

Related

How can I find what tests cover specific Go code?

I'm trying to figure out if there's a way to see what tests cover the functions in my Go code. I'm working on a code base with other developers so I didn't write all the tests/code myself.
I see some functions that are partially covered, or almost entirely covered, but they have no references anywhere (in their own package or in other packages), and the functions aren't called directly in any of the tests.
Is there a way I can find which tests are covering that specific code? When I've tried to find out whether it's possible, all I get are articles showing how to write and run tests and get coverage percentages and highlighting, but nothing that shows whether what I want is possible at all.
For the record, I'm using VS Code on Linux and running go test ./... -cover in my terminal, as well as Ctrl+Shift+P -> "Go: Toggle Test Coverage In Current Package" for coverage highlighting within VS Code.
With the fuller picture in view now, via the comments, it seems that you have a mess of tests written by people less experienced with Go, and your goal is to clean up the tests to follow standard Go conventions.
If I were faced with that task, my strategy would probably be to disable all tests in the repository by using a build tag that is never enabled, such as the following (the blank line after the constraint is required, otherwise Go treats the comment as package documentation and ignores it; on Go 1.17+ you can also add the //go:build form):

//go:build skip
// +build skip

package foo
Confirm that all tests are disabled by running go test ./... -cover, and check that you get 0% coverage everywhere.
Then, test by test, I would move each test into its proper place, and put it in a new file without the skip build tag.
If it's a big project, I'd probably do one package at a time, or in some other small, logical steps, to avoid a monster pull request. Use your own judgement here.
I'd also strongly resist the urge to do any other cleanups or fixes simultaneously. My goal would be to make each PR a simple copy-paste, so review is trivial, and I'd save a list of other cleanups I discover, to do afterward.

Visual Studio 2010: NUnit test case fails in debug mode but passes in run mode

I'm having a weird problem with Visual Studio 2010 Ultimate:
One of my NUnit (2.6.2) test cases is failing in debug mode but passing in run mode, as if there were completely different code paths for the two scenarios.
Is this a known bug, or is there some option in VS I'm missing?
Please enlighten me!
Thanks a lot.
EDIT - MORE INFORMATION
My application submits some requests to a dll written by a group of people within the organization. The dll does some computation and returns the results back to me.
In a test case exploring the DLL's behavior (e.g. submit a request with a certain parameter, check the DLL's output), running the NUnit test works fine, but debugging the test case gives me an error: an exception is thrown from within that DLL.
IMPORTANT: Running/debugging another test case gives me consistent results.
So, for the weird test case:
1. Either the DLL is good, and something about debug mode breaks it,
2. or the DLL has a bug, which is triggered by something about debug mode.
To my knowledge, the ONLY difference between running and debugging a piece of code in Visual Studio is that, when debugging, a PDB file is loaded, while when running it's not; essentially, a symbol table is loaded to map code execution back to the source.
Then the issue doesn't make sense in the first place: why would loading a symbol table affect the DLL's behavior? (It's unfair to ask anyone to give an explanation without seeing any code; however, since it's proprietary corporate code, I can't show it here. Please, if you've ever encountered such a thing in your career, do share what happened in your case; let's hope my problem has the same cause so that I can actually know what went wrong. Thanks.)
Are you using code coverage?
If so, try disabling it and running again. It will probably work.
For more details, check: http://social.msdn.microsoft.com/Forums/en-US/aba3d58f-f19f-4742-b960-8ac2be29bb88/unit-test-passes-when-in-debug-but-fails-when-run
You may have run into a situation where you're taking the same code path, but the results are subtly different in debug vs non-debug due to optimizations. There are a couple of distinct possibilities here:
Your code has a subtle bug, e.g. a race condition
Your test is being overly specific, e.g. exact floating-point comparisons where you should use a tolerance (NUnit's Assert.AreEqual has an overload that takes a delta for exactly this)
It's a pain not being able to debug, but I suggest you add logging throughout the method and the test so you can see what's going on. (And hope that the logging itself doesn't change the test result, which is also possible...)
Thank you for your response.
I've identified the reason: it's due to a bad parameter which drives the DLL nuts. My bad.
It still doesn't answer the question of why the behavior goes crazy when debugging but is fine when running the test case.
However, I guess, since the parameter was wrong in the first place, I can't really blame the DLL for going nuts. Anyway, when I passed in the correct parameter, all went well.
Thanks a lot guys.

Quickly testing a function that is a part of a big DLL project

I use VS2010 for C++ development, and I often end up doing work in some DLL project; after everything compiles nicely, I would like to try running dummy data through some classes, but of course the fact that it is a DLL and not an EXE with a main makes that a no-go. So is there a simple way to do what I want, or am I cursed for eternity to copy/paste parts of a big project into a small testing one?
Of course, changing the type of the project also works, but I would like to have something like an interactive-shell way of testing functions.
I know this isn't a library or anything, but if you want to run the DLL on Windows simply, without framing it into anything or writing a script, you can use rundll32.exe. It lets you invoke exported functions in the DLL (strictly speaking, rundll32 expects a specific function signature, so arbitrary exports may not behave well). The syntax should be similar to:
rundll32.exe PathAndNameofDll,exportedFunctionName [ArgsToTheExportedFunction]
http://best-windows.vlaurie.com/rundll32.html is a good, simple, still-relevant tutorial on how to use this binary. It's got some cool tricks in there that may surprise you.
If you are wondering about a 64-bit version, it has the same name (seriously, Microsoft?); check it out here:
rundll32.exe equivalent for 64-bit DLLs
Furthermore, if you want to go low-level, you could in theory use OllyDbg, which comes with a DLL loader for running DLLs you want to debug (in assembly). You can do the same kind of thing there (call exported functions and pass arguments), but that debugger is aimed more at reverse engineering than everyday code debugging.
I think you have basically two options.
The first is to use some sort of unit test on the function. For C++ you can find a variety of frameworks; for one, take a look at CppUnit.
The second option is to open the DLL, get the function via the Win32 API, and call it that way (this would still qualify as unit testing on some level). You could generalize this approach somewhat by creating an executable that does the above, parametrized with the required information (e.g. DLL path, function name), to achieve the "interactive shell" you mentioned. If you decide to take this path, you can check out this CodeProject article on loading DLLs from C++.
Besides using unit tests as provided by CppUnit, you can still write your own small testing framework. That way you can set up your DLL projects as needed, load them, link them, whatever you want, and exercise them with whatever simple data you like. This is valuable if you have many DLLs that depend on each other to do a certain job (legacy DLL projects in C++ tend to be hard to test, in my experience).
Once you have such a frame application, you can also look at what CppUnit gives you and combine it with your test frame. That way you will end up with a good set of automated tests which are still valuable unit tests. It is somewhat hard to start writing unit tests once a project already has a certain size. Having your own framework will let you write tests whenever you make a change to a DLL: just plug it into your framework, test what you expect it to do, and extend your frame as you go. The basic idea is to separate the tests, the test runner, the test data, and the asserts to be made.
I'm using Python + ctypes to build quick testing routines for my DLL applications.
If you are using the extended attribute syntax, it will be easy for you.
Google for Python + ctypes + unit test and you will find several examples.
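As a rough sketch of what such a routine can look like (the DLL path, exported function name, and signature below are made-up placeholders; real exports also need to be declared extern "C", or you have to use the mangled name):

import ctypes
import unittest

# Load the DLL by path: ctypes.CDLL for __cdecl exports,
# ctypes.WinDLL for __stdcall exports.
mylib = ctypes.CDLL(r"C:\build\MyBigProject.dll")   # hypothetical path

# Declare the signature of the exported function under test.
mylib.AddNumbers.argtypes = [ctypes.c_int, ctypes.c_int]   # hypothetical export
mylib.AddNumbers.restype = ctypes.c_int

class AddNumbersTests(unittest.TestCase):
    def test_adds_two_ints(self):
        self.assertEqual(mylib.AddNumbers(2, 3), 5)

if __name__ == "__main__":
    unittest.main()

Because it's just a script, you can also run the same calls from an interactive Python session, which gets you fairly close to the interactive-shell style of poking at functions that the question asks for.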
I would recommend Windows PowerShell cmdlets.
If you look at the article here - http://msdn.microsoft.com/en-us/magazine/cc163430.aspx - you can see how easy it is to set up. Of course this article is mostly about testing C# code, but you can see how they talk about also being able to load any COM-enabled DLL in the same way.
Here you can see how to load a COM assembly - http://blogs.technet.com/b/heyscriptingguy/archive/2009/01/26/how-do-i-use-windows-powershell-to-work-with-junk-e-mail-in-office-outlook.aspx
EDIT: I know a very successful storage virtualization software company that uses PowerShell extensively to test both its managed and unmanaged (driver) code.

TDD with zero production code

I was going through 'The Clean Coder' by Bob Martin, where I read about the discipline of writing tests before any production code.
However, the TDD articles for ASP.NET on MSDN show classes and method stubs being created first, and then unit tests being generated from those stubs.
I want to know whether I can write all unit tests before writing a single line of code in the business logic layer.
Edit: 1. My idea was to refactor to the extent of changing the entire class-relationship structure itself if needed. If I start from a stub, then I would have to rewrite the tests whenever the class or method itself turned out to be wrong.
Edit: 2. Apart from that, the thrust is on data-driven tests; if I use interfaces, how would I write a complete test where I have passed in all the fields? Since interfaces need to be generic, I don't think they'll have all the properties; at best, interfaces can have CRUD stubs defined.
Thanks in Advance.
Sure you can. What's stopping you?
(Though typically, you would write one test at a time, rather than writing them all at once. Writing them all up-front smacks of Big Design Up Front, aka Waterfall. Part of the point of TDD is that you design as you go and refactor as needed, so you end up with something that's only as complex as you actually need in order to satisfy your requirements -- it helps you avoid YAGNI.)
If you follow classic TDD principles, then you write a test to fail first, you run it and watch it fail, and only then do you write the necessary code to make it pass. (This helps make sure that there's not a subtle error in your test.) And if you're testing code that doesn't exist yet, the first failure you expect is a compiler error.
This is actually important. You're testing code that doesn't exist. Of course the compile should fail. If it doesn't, then you need to find out why -- maybe your tests aren't actually being compiled, or maybe there's already a class with the same name as the new one you're thinking of writing, or something else you didn't expect.
There's nothing stopping you from writing a non-compilable test first, and then going back and making it compile. (Just because Microsoft didn't understand TDD when they wrote their testing tools doesn't mean you can't do it yourself.) The IDE might step on your toes a bit while you do (completing the names of existing classes instead of leaving the names you're trying to write), but you quickly learn when to press Esc to deal with that.
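The examples in the question are C#/ASP.NET, but the discipline itself is language-agnostic. As a minimal sketch (in Python, with made-up module and class names), the very first failure is simply that the thing under test doesn't exist yet:

import unittest

class DiscountCalculatorTests(unittest.TestCase):
    def test_ten_percent_discount_is_applied(self):
        # pricing.DiscountCalculator does not exist yet, so the first run
        # fails with an ImportError - the equivalent of the compile error
        # you expect in C#.
        from pricing import DiscountCalculator   # hypothetical module
        calc = DiscountCalculator(rate=0.10)
        self.assertEqual(calc.apply(100.0), 90.0)

if __name__ == "__main__":
    unittest.main()

Only after watching that failure would you create the pricing module and a DiscountCalculator with just enough code to make the test pass, and then refactor from there.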
Visual Studio 2010 lets you temporarily switch IntelliSense into a "test-first" mode, where it won't step on your toes in this situation. But if you happen to use ReSharper, I don't think it has that feature yet.
It does not matter if you create the method stubs or the tests first. If you write the tests first, your editor might complain about method/class stubs not existing.

How to quickly remove all the unused variables with Xcode?

I was wondering if there is a quick and effective way to remove all the unused variables (local, instance, even properties) in Xcode... I am doing a code cleanup on my app, and if I knew a quick way to do this kind of refactoring it would help me a lot...
Thanks...
It's been a long time since you asked your question, and maybe you've found an answer already, but from an answer to a related question:
For static analysis, I strongly recommend the Clang Static Analyzer (which is happily built into Xcode 3.2 on Snow Leopard). Among all its other virtues, this tool can trace code paths and identify chunks of code that cannot possibly be executed, and should either be removed or the surrounding code should be fixed so that it can be called.
For dynamic analysis, I use gcov (with unit testing) to identify which code is actually executed. Coverage reports (read with something like CoverStory) reveal un-executed code, which, coupled with manual examination and testing, can help identify code that may be dead. You do have to tweak some settings and run gcov manually on your binaries. I used this blog post to get started.
Both methodologies are aimed at exactly what you want: detecting unused code (both variables and methods) and removing it.
