How can I find what tests cover specific Go code?

I'm trying to figure out if there's a way to see what tests cover the functions in my Go code. I'm working on a code base with other developers so I didn't write all the tests/code myself.
I see some functions that are partially covered, or almost entirely covered, but they have no references anywhere (in their own package or in any other), and the functions aren't called directly in any of the tests.
Is there a way I can find which tests are covering that specific code? When I've tried to find out whether it's possible, all I get are articles showing how to write/run tests and get coverage percentages/highlighting, but nothing that actually shows whether it's possible at all.
For the record, I'm using VS Code on Linux and running go test ./... -cover in my terminal, as well as Ctrl+Shift+P -> "Go: Toggle Test Coverage In Current Package" for coverage highlighting within VS Code.

With the fuller picture in view now, via comments, it seems that you have a mess of tests, written by people less experienced with Go, and your goal is to clean up the tests to follow standard Go conventions.
If I were faced with that task, my strategy would probably be to disable all tests in the repository, by using a build tag that never gets satisfied, such as:
//go:build skip
// +build skip

package foo
(Note the blank line: a build constraint must be separated from the package clause by an empty line, or the toolchain ignores it.)
Confirm that all tests are disabled by running go test ./... -cover, and check that you have 0% coverage everywhere.
Then, test by test, I would move each test into its proper place, and put it in a new file without the skip build tag.
If it's a big project, I'd probably do one package at a time, or in some other small, logical steps, to avoid a monster pull request. Use your own judgement here.
I'd also strongly resist the urge to do any other cleanups or fixes simultaneously. My goal would be to make each PR a simple copy-paste, so review is trivial, and I'd save a list of other cleanups I discover, to do afterward.
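As for the original question of attributing coverage to individual tests: one low-tech approach (a sketch, not a polished workflow; TestFoo here is a placeholder for whatever test you're probing) is to run a single test with its own coverage profile and inspect what it touches:
go test -run '^TestFoo$' -coverprofile=testfoo.out ./...
go tool cover -html=testfoo.out
Repeating that test by test, and watching when your mystery function's lines turn green, tells you which tests actually exercise it.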

Related

How can I debug Ginkgo tests in VS Code?

I'm evaluating Ginkgo at the moment - I very much like the BDD style.
However, at the moment I'm unable to get the VS Code debugger to work with the framework. The official VS Code extension provides test-by-test debugging for native Go tests using CodeLens. With other languages and frameworks (e.g. TypeScript/Mocha), I've been able to debug individual test files by setting up launch.json appropriately, but I have been unable to find suitable examples for Go.
Does anybody have any examples of any launch.json setups for debugging ginkgo tests (or go code invoked from any other framework)?
Thanks!
After a bit of playing around I found a way forward which perhaps should have been obvious. In case it isn't, I'll leave the question and this answer here:
For a package foo, a foo_suite_test.go file is generated by the ginkgo bootstrap command. This contains a top-level test called TestFoo which runs the rest of the tests within the package.
This does have a CodeLens run test | debug test section above it which you can use to debug the entire suite.
It's not quite as convenient as the individual CodeLens entries which appear over each native Go test, but it's easy enough to isolate specific tests to run using the Ginkgo F prefix (FDescribe, FIt, and so on).
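Alternatively, a launch.json along these lines should let you debug the suite directly (a sketch, assuming the standard Go extension; the program path and focus string are placeholders for your own package and spec):
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug Ginkgo suite",
            "type": "go",
            "request": "launch",
            "mode": "test",
            "program": "${workspaceFolder}/foo",
            "args": ["-ginkgo.focus", "name of the spec to debug"]
        }
    ]
}
In test mode the args are passed to the compiled test binary, and Ginkgo registers -ginkgo.focus on that binary, so this narrows the run much like an F-prefixed spec does.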

How to detect code coverage of separate folders in Go?

My project structure:
stuff/stuff.go -> package: stuff
test/stuff/stuff_test.go -> package: test
Although stuff_test.go executes code from stuff.go, it shows
coverage: 0.0% of statements
I used
go test -cover
If I move my *_test.go into the stuff folder of the program, it works fine.
Or is my approach to the project structure perhaps not well designed / not Go-conformant?
Conventional Go program structure keeps the tests with the package. Like this:
project
|-stuff
|--stuff.go
|--stuff_test.go
At the top of your testing files you still declare package stuff, and your test functions must take the form TestXxx(t *testing.T), with Xxx starting with an uppercase letter, if you want go test to find and run them automatically.
See Go docs for details: https://golang.org/pkg/testing/
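A minimal sketch of such a file (Double is a hypothetical function defined in stuff.go):
package stuff

import "testing"

// Lives in stuff/stuff_test.go, in the same package as the code it tests,
// so coverage is attributed to package stuff directly.
func TestDouble(t *testing.T) {
    if got := Double(2); got != 4 {
        t.Errorf("Double(2) = %d, want 4", got)
    }
}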
Cross-package test coverage is not directly supported, but several people have built wrappers to merge individual coverage profiles.
See Issue #6909 for the long history on this, and see gotestcover for an example of a tool that does the merging. There's also gocovmerge. I built my own version, so I haven't tried either of those, but I'm sure they work along the same lines as mine, and mine works fine.
My sense is that this is simply an issue for which no one has written a really compelling changelist, and that it hasn't been important enough to the core maintainers, so it hasn't been addressed. It does raise small corner cases that might break existing tests, which is why the quick hacks that work for most of us haven't been accepted as-is. But I haven't seen any discussion suggesting the core maintainers actively object to the feature.
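As an illustration of the merging approach (a sketch, assuming gocovmerge is installed on your PATH; the package paths are placeholders):
go test -coverprofile=stuff.out ./stuff
go test -coverprofile=other.out ./other
gocovmerge stuff.out other.out > merged.out
go tool cover -html=merged.out
gocovmerge writes the combined profile to stdout, so the merged file can be fed to go tool cover like any single-package profile.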
You can use the -coverpkg option to select the packages for which to record coverage information.
From the output of go help testflag:
-coverpkg pattern1,pattern2,pattern3
Apply coverage analysis in each test to packages matching the patterns.
The default is for each test to analyze only the package being tested.
See 'go help packages' for a description of package patterns.
Sets -cover.
For example:
go test ./test/... -coverprofile=cover.out -coverpkg ./...
Then view the report with:
go tool cover -html=cover.out

Is there a tool to highlight code that has been run?

We write code, and we write tests. When you write tests, you're thinking both about the expected usage of the thing you're testing and about its internal implementation, writing your test to try to expose bad behaviors.
Whenever you change the implementation of the thing, you're adding new lines of code, and sometimes it's difficult to be sure that your test actually exercises all the code in the thing. One thing you can do is set a breakpoint, and step through painstakingly. Sometimes you have to. But I'm looking for a faster, better way to ensure all the code has been tested.
Imagine setting a breakpoint and stepping through the code, but every line that was run gets highlighted and stays highlighted after it was run. So when the program finishes, you can easily look at the code and identify lines which were never run during your test.
Is there some kind of tool, extension, add-in, or something out there, that will let you run a program or test, and identify which lines were executed and which were not? This is useful for improving the quality of tests.
I am using Visual Studio and Xamarin Studio, but even if a tool like this exists only for other languages and IDEs, knowing about it will help me find analogous answers for the IDEs and languages I personally care about.
As paddy responded in a comment (not sure why it wasn't an answer), the term you're looking for is coverage. It's a critical tool to pair with unit testing, because if some code isn't covered by a test and never runs, you'd want to know so you can test more completely. The pitfall is that knowing THAT a line of code was touched doesn't automatically tell you HOW it was touched: you can have 100% line coverage without covering 100% of the use cases of a particular line, for example an inline conditional where one branch never gets hit.
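To make that concrete, here's a small Go illustration (clamp is a made-up example function): every line below can be reported as covered while half of the condition has still never evaluated to true.
package clampdemo

// A test that only calls clamp(-1) marks the if line as covered,
// yet the n > 100 half of the condition never runs.
func clamp(n int) int {
    if n < 0 || n > 100 {
        return 0
    }
    return n
}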
Coverage is also useful to know even outside of testing. By performing a coverage analysis on code in production, you can get a feel for what aspects of your codebase are most critical in real use and where you need to focus on bugfixing, testing, and robustness. Similarly, if there are areas of your codebase that rarely or never get hit in a statistically significant period of production uptime, maybe that piece of the codebase can or should get trimmed out.
dotCover and dotTrace seem like they'll probably put you on the right path. Coming from Python, I used to use Django-Nose, which comes with coverage testing and reporting integrated, and there's coverage.py for standalone analysis. These aren't the only tools in the ecosystem, just the ones I've used.

Maven Flex project using source directory from separate module with new artifactId

I'm finding it difficult to express myself easily around this issue, so I thought it best to start with a context section:
Context:
I have a Flex-based application (a rather complex system) that can be compiled using "conditional compilation" into various use cases, e.g.:
Compilation one = portalProjectUserOne
Compilation two = portalProjectUserTwo
Whether using conditional compilation is a sound idea is a completely different argument, so let's assume one is forced down this road; I then decide to create a project for each of my desired compilations:
portalProjectUserOne
-branches
-tags
-trunk
-src
-pom
portalProjectUserTwo
-branches
-tags
-trunk
-src
-{NEEDS TO USE PROJECT ONE'S SOURCE}
As I do not want to break the ever-rigid laws of programming by duplicating anything, I need a way of accessing the source of project ONE and using it to do a CUSTOM compilation.
Things I have tried:
I tried using relative paths (../../portalProjectUserOne/trunk/src/etc...), which compiled successfully, but when it came time to release a final product to the Nexus repo there were a few issues with reaching outside the project structure; that, and it felt a bit dirty really.
I attempted to use the "maven-dependency-plugin" to try to copy the sources from the first project. Maybe this is purely a lack of understanding on my part, but I cannot get my head around how you generate your classes in one project and access them from another.
This is my first question on Stack Overflow, and if I have been far too broad please let me know and I shall update with more extensive examples if required.
Thanks for listening/reading/being a coder.

TDD with zero production code

I was going through 'The Clean Coder' by Bob Martin, where I read about the discipline of writing tests before any production code.
However, the TDD articles for ASP.NET on MSDN show classes and method stubs being created first, and then unit tests being generated from those stubs.
I want to know whether I can write all unit tests before writing a single line of code in the business logic layer.
Edit 1: My idea was to refactor to the extent of changing the entire class-relationship structure itself if needed. If I start from a stub, I would have to rewrite the tests whenever the class or method itself turned out to be wrong.
Edit 2: Apart from that, the thrust is on data-driven tests. If I use interfaces, how would I write a complete test in which I have passed all the fields? Since interfaces need to be generic, I don't think they'll have all the properties; at best an interface can have CRUD stubs defined.
Thanks in advance.
Sure you can. What's stopping you?
(Though typically, you would write one test at a time, rather than writing them all at once. Writing them all up-front smacks of Big Design Up Front, aka Waterfall. Part of the point of TDD is that you design as you go and refactor as needed, so you end up with something that's only as complex as you actually need in order to satisfy your requirements -- it helps you avoid YAGNI.)
If you follow classic TDD principles, then you write a test to fail first, you run it and watch it fail, and only then do you write the necessary code to make it pass. (This helps make sure that there's not a subtle error in your test.) And if you're testing code that doesn't exist yet, the first failure you expect is a compiler error.
This is actually important. You're testing code that doesn't exist. Of course the compile should fail. If it doesn't, then you need to find out why -- maybe your tests aren't actually being compiled, or maybe there's already a class with the same name as the new one you're thinking of writing, or something else you didn't expect.
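To illustrate with a sketch in Go (the idea is the same in any compiled language; Total is a function that does not exist yet):
package price

import "testing"

// Written before any production code exists. Running `go test` fails to
// compile with "undefined: Total" - exactly the first failure TDD expects.
func TestTotal(t *testing.T) {
    if got := Total(2, 3); got != 5 {
        t.Errorf("Total(2, 3) = %d, want 5", got)
    }
}
Only once you've seen that failure do you write the minimal Total that makes the test compile and pass.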
There's nothing stopping you from writing a non-compilable test first, and then going back and making it compile. (Just because Microsoft didn't understand TDD when they wrote their testing tools doesn't mean you can't do it yourself.) The IDE might step on your toes a bit while you do (completing the names of existing classes instead of leaving the names you're trying to write), but you quickly learn when to press Esc to deal with that.
Visual Studio 2010 lets you temporarily switch IntelliSense into a "test-first" mode, where it won't step on your toes in this situation. But if you happen to use ReSharper, I don't think they have that feature yet.
It does not matter if you create the method stubs or the tests first. If you write the tests first, your editor might complain about method/class stubs not existing.
