How can I check the coverage of changed code? - bash

I have a Jenkins job that runs multiple jobs, some of which are unit tests for different parts of our platform.
One of those jobs is phpunitTest, which basically makes sure that all tests pass and generates a code coverage report using Codecept.
My question now is, how can I make sure new code pushed is covered by the unit tests?
Currently I'm using this command to run the coverage:
codeception/codeception run unit --coverage-html --quiet
I expect to have a failed test if the pushed code isn't unit tested.

Unless Codecept has special (and unusual) tooling for this, there are basically two ways: achieve 100% coverage and verify that at every run, or force a steady move towards 100% coverage. Since most projects don't even aim for 100% coverage (which is not at all the same as having covered all your bases; see for example SQLite for why 100% is just the beginning), I'll assume the latter. What you can do in that situation is to
enforce that a minimum coverage percentage is met at every CI run, and
enforce that the coverage percentage is never lowered.
By these simple expedients you'll naturally ensure that code coverage goes up with every piece of code added.
This does not guarantee that each new piece of code is 100% covered; for that you would have to parse the coverage checker results and see if any new or changed files are mentioned as missing coverage.
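A rough sketch of how the two rules above could be wired into the Jenkins job: have Codeception also emit a machine-readable report and compare the overall percentage against a threshold file kept in the repository. The paths, the --coverage-xml flag producing a Clover-style report at tests/_output/coverage.xml, and the coverage-threshold.txt file are assumptions about your setup, not a prescribed layout:

#!/usr/bin/env bash
# Sketch: fail the CI run if overall statement coverage drops below the stored
# threshold, and ratchet the threshold upward whenever coverage improves.
set -euo pipefail

REPORT="tests/_output/coverage.xml"      # Clover-style report (assumed path)
THRESHOLD_FILE="coverage-threshold.txt"  # checked-in file holding the current minimum

# The project-level <metrics> element is the last one in a Clover report.
covered=$(grep -o 'coveredstatements="[0-9]\+"' "$REPORT" | tail -1 | grep -o '[0-9]\+')
total=$(grep -o ' statements="[0-9]\+"' "$REPORT" | tail -1 | grep -o '[0-9]\+')

current=$(awk -v c="$covered" -v t="$total" 'BEGIN { printf "%.2f", (t ? 100 * c / t : 0) }')
threshold=$(cat "$THRESHOLD_FILE")
echo "Statement coverage: ${current}% (minimum: ${threshold}%)"

# Rule 1: the minimum must be met at every run.
if awk -v cur="$current" -v min="$threshold" 'BEGIN { exit !(cur < min) }'; then
    echo "Coverage dropped below ${threshold}%; failing the build." >&2
    exit 1
fi

# Rule 2: the minimum is never lowered; raise it to the current value.
awk -v cur="$current" -v min="$threshold" 'BEGIN { if (cur > min) print cur; else print min }' > "$THRESHOLD_FILE"

Committing the updated threshold file back (or storing it as a build artifact) is what makes the ratchet stick between runs.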

Related

Per-test coverage in Go

I am building a Go provider for Pruner (a CLI that runs only the tests that ran through the lines you changed, saving you time).
For that, I need to be able to see per-test coverage. Not just a full coverage report after all tests have run, but I need a way to know which tests ran through what line.
Is that possible in Go?
I tried using -func, but it just gives me the method names of the original code, not the test code. In other words, I can't know what code each individual test runs through.
I need a way to know which tests ran through what line.
Is that possible in Go?
It's not supported by the tools. But you can do it. It's just very inefficient.
The way to do this is to run:
go test -cover -run=TheName/OfSome/SpecificTest
Then run this for each test in your suite.
Naturally, this will make your tests much more cumbersome to manage, and incredibly slow.
So I would consider whether this is truly a requirement for your use case.
Go is optimized, from the ground up, to compile quickly. If you have a Go project so large that running all the tests is too slow, you may want to consider other alternatives. Some suggestions:
Run more tests in parallel, so the total runtime is reduced.
Take advantage of Short mode, and only run short tests by default, saving long-running tests for special cases.
If you really need to run only a subset of tests, do it on a per-package basis, not on a per-test basis.
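That said, if per-test coverage really is a requirement (as it is for a Pruner provider), a loop over the test names is the straightforward, if slow, way to get one profile per test. This is only a sketch: the ./... pattern and the per-test-coverage/ output directory are placeholders to adapt:

#!/usr/bin/env bash
# Sketch: run each test on its own and record a separate coverage profile,
# so lines can later be mapped back to the individual tests that hit them.
set -euo pipefail

pkg="./..."                # package pattern under test (adjust as needed)
outdir="per-test-coverage"
mkdir -p "$outdir"

# List test function names without running them, dropping the "ok <pkg>" summary lines.
tests=$(go test -list '.*' "$pkg" | grep '^Test')

for t in $tests; do
    # Run only this test and write its own coverage profile.
    go test -run "^${t}$" -coverprofile "$outdir/${t}.out" "$pkg"
done

Note that with multiple packages, identically named tests in different packages will overwrite each other's profiles, which is one more reason the per-package approach suggested above scales better.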

TDD: refactoring and global regressions

While the refactoring step of test driven development should always involve another full run of tests for the given functionality, what is your approach about preventing possible regressions beyond the functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is it what TDD recommends?
Thank you.
While the refactoring step of test driven development should always involve another full run of tests for the given functionality, what is your approach about preventing possible regressions beyond the functionality itself?
When you are working on a specific feature, it is enough to run tests for the given functionality only. There is no need to do a full regression run.
My professional experience makes me want to retest the whole functional module after any code change.
You do not need to do a full regression run, but you can, since unit tests are small, simple, and fast.
Also, there are several tools that are used for "Continuous Testing" in different languages:
in Ruby (e.g. Watchr)
in PHP (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
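The common idea behind these tools is just a watch-and-rerun loop; a minimal hand-rolled sketch of the same thing in bash (assuming inotify-tools is installed, and that src/, tests/ and vendor/bin/phpunit are stand-ins for your own paths and test command):

#!/usr/bin/env bash
# Sketch: rerun the test suite whenever a source or test file changes.
while inotifywait -r -e modify,create,delete src/ tests/; do
    vendor/bin/phpunit || true   # keep watching even when a run fails
done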
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on a Continuous Integration (CI) server is essential, especially when you have lots of integration tests.
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That's how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, sometimes performance tests as well).
Continuous integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will have all your tests, and run them automatically when you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should always be 100%).

VS 2010 - Code Coverage Results includes the test project itself

I am writing some unit tests for one of my DLL libraries.
The 'Code Coverage Results' pane shows a breakdown of the assemblies covered and tested.
For some strange reason, my test project itself appears in the coverage results! (at approx. 90% covered).
This seems stupid... what's the deal with this?
The reason the percentage is so high is that projects selected for code coverage are instrumented to keep track of which lines are hit by a test run; since you are running the tests from this project, almost all lines of code in the project will be run.
You can choose which projects/DLLs to collect Coverage statistics on in the Test Settings.
So if you don't need to capture stats on the test project (which you shouldn't really), you can simply remove this project from the settings you're using for coverage.
See http://msdn.microsoft.com/en-us/library/ms182534.aspx (steps 5 - 7 in particular) for more details.

How can I fail a TeamCity build if dotCover doesn't report a high enough result?

I would like TeamCity to run my mSpec tests and report on the code covered by the tests.
I would also like TeamCity to report that a build has failed if code coverage in certain namespaces doesn't meet a threshold (e.g. MyProduct.ImportantStuff must be 100%, but MyProduct.LegacyStuff must be [23% or whatever it currently is] to ensure we don't add new stuff without covering it with tests).
I initially looked at dotCover as it's integrated into TeamCity. I have since been looking at OpenCover as I couldn't get TC to fail the build on low coverage.
I got OpenCover working but I would still like to know (as I'm sure a lot of people would) how to get TC to fail a build if code coverage is too low.
Are you using the latest TeamCity, i.e. version 7?
When setting up a build configuration you can specify a failure condition based on code coverage dropping below a threshold.
There are lots more options in the dropdown related to code coverage. You can also force your build to fail if you're using some other code coverage tool.
For example you can echo a line to the console that will then be picked up by TeamCity:
##teamcity[buildStatus status='FAILURE' text='something failed']
see the official TeamCity docs on service messages for details
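To make the coverage check itself drive that message, a build step can compare the measured percentage against a threshold and only echo the service message when it is too low. A rough sketch, where coverage-percent.txt (a bare number written by an earlier build step) and the 80% threshold are illustrative assumptions:

#!/usr/bin/env bash
# Sketch: fail the TeamCity build when coverage is below a threshold by
# emitting a buildStatus service message on stdout.
THRESHOLD=80                            # minimum acceptable coverage, in percent
coverage=$(cat coverage-percent.txt)    # assumed to be produced by an earlier step

# Compare as integers (strip any decimal part first).
if [ "${coverage%.*}" -lt "$THRESHOLD" ]; then
    echo "##teamcity[buildStatus status='FAILURE' text='Coverage ${coverage}% is below ${THRESHOLD}%']"
fi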

NCover coverage wildly different from a UAT build to a Live build

Using TeamCity running an MSBuild task for an MVC2 C# application, we successfully run 1561 xUnit tests in both the UAT and the Live builds, but the NCover coverage falls from 51% on the UAT build to 35% on the Live build. The solution has identical configuration manager settings.
As the Live coverage is below our 50% minimum, our build subsequently fails with the following error:
"NCover.Reporting.exe" exited with code 3.
A bit lost as to why the coverage drops when it is the same source from SVN and an identical test run is being performed.
Has anyone else experienced this?
My recommendation would be to drop us an email at support#ncover.com, ideally with the two coverage files attached. It's not unusual for us to see small coverage differences between Debug and Release builds running against the same tests (because the build types generate slightly different code), but never on the same build type running against the same tests.

Resources