Per-test coverage in Go

I am building a Go provider for Pruner (a CLI that runs only the tests that ran through the lines you changed, saving you time).
For that, I need to be able to see per-test coverage. Not just a full coverage report after all tests have run, but I need a way to know which tests ran through what line.
Is that possible in Go?
I tried using -func, but it just gives me the method names of the original code, not the test code. In other words, I can't know what code each individual test runs through.

I need a way to know which tests ran through what line.
Is that possible in Go?
It's not supported by the tools. But you can do it. It's just very inefficient.
The way to do this is to run:
go test -cover -run=TheName/OfSome/SpecificTest
Then run this for each test in your suite.
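A minimal sketch of that loop, driven from Go itself (the package path "./mypkg" and the profile file names are illustrative assumptions, not part of any existing tool):
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pkg := "./mypkg" // hypothetical package under test

	// Ask `go test -list` for the test names without running them.
	out, err := exec.Command("go", "test", pkg, "-list", ".*").Output()
	if err != nil {
		panic(err)
	}

	for _, name := range strings.Fields(string(out)) {
		if !strings.HasPrefix(name, "Test") {
			continue // skip the trailing "ok <pkg> <time>" summary tokens
		}
		// Run exactly this one test and write its own coverage profile.
		profile := name + ".cover.out"
		cmd := exec.Command("go", "test", pkg,
			"-run", "^"+name+"$",
			"-coverprofile", profile)
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
		}
	}
}
Each per-test profile can then be fed to go tool cover -func, or parsed directly, to map that test to the lines it executed.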
Naturally, this will make your tests much more cumbersome to manage, and incredibly slow.
So I would consider whether this is truly a requirement for your use case.
Go is optimized, from the ground up, to compile quickly. If your Go project is so large that running all the tests is too slow, you may want to consider other alternatives. Some suggestions:
Run more tests in parallel, so the total runtime is reduced (see the sketch after this list).
Take advantage of Short mode, and only run short tests by default, saving long-running tests for special cases.
If you really need to run only a subset of tests, do it on a per-package basis, not on a per-test basis.
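As a rough illustration of the parallel and Short-mode suggestions above (test names and bodies are made up):
package mypkg_test

import "testing"

func TestFastThing(t *testing.T) {
	t.Parallel() // independent tests may run concurrently, reducing wall-clock time
	// ... quick assertions ...
}

func TestSlowScenario(t *testing.T) {
	t.Parallel()
	if testing.Short() {
		t.Skip("skipping long-running test in -short mode")
	}
	// ... expensive setup and checks, only run without -short ...
}
Running go test -short ./... then skips the slow test by default, while a full run (for example on CI) still exercises everything.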

Related

Is there an elegant way to run Go module tests with nested packages in parallel in a fail-fast manner?

I have a Go project that uses modules. The module contains many packages, which are nested.
When I let Go figure out the package structure by passing ./... to go test, it runs the tests in parallel (the default behaviour).
Testing a single package with the -failfast flag works: it stops at the first failure. However, what I'd like to achieve is to use -failfast across all the packages combined (for increased CI/CD throughput). When the first test in one of the packages tested in parallel by a single go test ./... invocation fails, I'd like to stop the whole test suite.
Is this even possible with the current version of the go testing utility? If not, is there perhaps a plan to implement such a thing in the future?
I did not find a solution that would enable me to do this in parallel yet, but one thing I thought of is to combine something like go test -failfast, go list ./... and xargs, and run the tests in sequence (not in parallel). I'd check the output of the last tested package and stop everything on the first failure. This doesn't sound that good though, and will probably be quite a bit slower.
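For illustration, that sequential workaround could look roughly like this in Go (an assumed sketch, not an existing tool; it simply shells out to go list and go test):
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Enumerate all packages in the module.
	out, err := exec.Command("go", "list", "./...").Output()
	if err != nil {
		panic(err)
	}
	for _, pkg := range strings.Fields(string(out)) {
		// Test one package at a time; -failfast stops within a package,
		// and the loop stops across packages on the first failure.
		cmd := exec.Command("go", "test", "-failfast", pkg)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("stopping: tests failed in", pkg)
			os.Exit(1)
		}
	}
}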
So yeah, are there any existing solutions or approaches that I haven't found/thought of?
Thanks!
go version go1.12 darwin/amd64

Do we need to run tests on the CI server if every developer runs the tests before push?

I am not sure what is the best practice for running unit tests.
I suppose every developer should pass the unit tests locally before pushing the code to the Git repo. And then the CI server (Jenkins) would pick up the new changes and run the tests again.
Why do we want to do it twice? Should we?
If the unit tests take a lot of time to run, do we expect the developer to pick only the tests related to the change, or to run every test (even outside the scope of their project), assuming we have a big Maven multi-module POM?
Also consider that we usually have powerful hardware for the CI server and relatively less powerful workstations for developers.
If the unit tests take a lot of time to run, do we expect the developer to pick only the tests related to the change, or to run every test (even outside the scope of their project), assuming we have a big Maven multi-module POM?
As a developer changes a class, modifies the database structure, or makes any change that could have side effects, he/she will not and cannot know all the potential side effects on the whole application.
And he/she should never try to be too clever by saying: "I know what may have broken with my changes, so I will run just this test."
Unit testing ensures some degree of code quality: don't make it less helpful.
The unit tests are also non-regression tests. Not running all non-regression tests before commit and push takes the risk of introducing flawed code into source control.
You never want to do that.
Unit tests have to execute fast
If unit tests take so long to execute that they hurt developer velocity, it generally means that they are badly designed, or that they are not real unit tests. A unit test is designed to run fast.
If you write tests that take a long time to run because they require starting a server, loading/cleaning data in a database, loading/unloading containers, and so forth, it means you didn't write unit tests but integration tests. These tests are not designed to be executed regularly and automatically on the local development machine, but on a CI tool.
The CI has to run all tests
Do we need to run tests on the CI server if every developer runs the tests before push?
As explained, integration tests have to be executed by the CI tool.
As for unit tests, sparing their execution on the CI side is not a good idea either.
Of course developers have to run the tests before pushing to the SCM, but in practice you have no guarantee that it will always be done.
Besides, even in a perfect world where developers execute all tests before pushing, you could fall into cases where the tests are successful on the developer machine but fail on the CI or the other developer machines.
For example, the developer could introduce into the code base some absolute paths specific to their machine, or forget to replicate a modification on the database used in the CI environment.
So running all tests (unit and integration) cannot be optional for the CI.
Yes, they should be run twice, because if some developers don't run them, then they never run. Developers should run the tests locally to make sure that their code works correctly.
Your CI system, however, is your reference, so there's no chance of one person arguing that it "works on my machine" when it fails for others. Looking forward to continuous delivery, knowing this state on the CI/CD system becomes even more important.
You might hope that always and forever, every commit has been tested successfully locally (and that all workstations are the same and identical to the production systems...), but hope is a bad strategy.

How to exclude non-unit tests from runs across multiple installations of Visual Studio

I am aware of multiple methods one can use to keep certain types of test out of a test playlist: filtering by TestCategory or class name etc.
I am also aware that one can instruct TFS builds to only run certain categories or classes of test.
However, I find it quick and convenient to be able to check my unit test runs just by going into Test Explorer and clicking "run all". It's good practice to do this regularly and prior to check-in to ensure the build is likely to pass.
Is there a straightforward way I can configure my tests to ensure that "run all" just picks up the quick unit tests, and leaves the slower system and regression tests alone? Ideally, I'd like a method that can easily be applied to everyone else working on the code at the same time.
This is a little hard to follow, since MSDN explains it very clearly: "Run all" just means run all tests. There are also filters and other settings that can achieve running only the unit tests.
Also, as you have mentioned, you can run unit tests with your builds directly, so why do you need this function? If you just want to make sure the check-in will not cause a build failure, you can use Gated Check-in directly. This also applies to everyone else.

TDD: refactoring and global regressions

While the refactoring step of test driven development should always involve another full run of tests for the given functionality, what is your approach to preventing possible regressions beyond the functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is it what TDD recommends?
Thank you.
While the refactoring step of test driven development should always involve another full run of tests for the given functionality, what is your approach to preventing possible regressions beyond the functionality itself?
When you are working on a specific feature, it is enough to run tests for the given functionality only. There is no need to do a full regression run.
My professional experience makes me want to retest the whole functional module after any code change.
You do not need to do a full regression run, but you can, since unit tests are small, simple, and fast.
Also, there are several tools that are used for "Continuous Testing" in different languages:
in Ruby (e.g. Watchr)
in PHP (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on Continuous integration (CI) server is essential. Especially, when you have lots of integration tests.
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That's how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, sometimes performance tests as well).
Continuous integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will have all your tests and run them automatically when you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should always be 100%).

Gui Tests take too long - what's your approach?

We have a typical web application stack. There are 120 Selenium (WebDriver) tests that are executed against the application. This takes around 1 hour. We execute them as part of our build chain "compile > unit test > integration test > gui tests". The GUI tests take up a lot of time and we are wondering how to structure them better. Currently they are "happy case" and "unhappy case" tests. They are quite stable, i.e. they won't fail because of programmer errors.
We want to get the build times down, and the biggest part is the GUI tests. We want to do this based on "customer journeys", i.e. specify (together with the business people) some typical use cases and test them (happy path) instead of testing too much.
How do you structure your GUI tests? Here are some ideas that came to my mind:
only execute happy path tests
do a "customer journey test", i.e. do several happy path tests in one ("clicking through the pages")
only take the "top 10" specified by the business (mission critical)
top 10 + "all the rest" as a nightly build (one time)
I would appreciate your ideas.
Thanks,
Marcel
The nighttime is a perfect time for Selenium tests - you just have to remember to put a "Don't turn me off!" sticky note on your computer :).
Also, there always is Selenium Grid when the nighttime begins to be too short to run all tests. With Grid, you can run your tests on several machines in parallel!
We have several test suites that are applicable to different situations. Before a major release (to test, to pre-live, to production), everything runs. Usually (on a daily or even hourly basis on rush days) only the "The Quickened Normal Path of a User Through the Application" suite runs. And if somebody "fixes" a large bug, then the tests related to that part of application are run.
An hour seems absolutely fine to me.
One suggestion could be to decide which of the tests come under smoke tests, and are required to run every night. That is, tests that show the core functionality of your web application is still intact and working - other more detailed tests can be run at different times (once every few days?).
With that said, ours take around 2 hours. The only problem comes when one test has failed: you fix it, commit it, but then have to wait a very long time to verify that it is fixed on the CI server.
TeamCity allows you to run builds in parallel on the same machine, so GUI tests should not be in the build chain along with unit and integration tests. UI tests should have a separate database and a separate build so they will not waste the time of developers, manual testers, or any other stakeholders. TeamCity will gather all the statistics, send emails on build failures, and so on.
The next step is parallelization. As Slanec said, you can use Grid (several machines are not required) with MbUnit (C#) or TestNG (Java). With the help of Grid you can decrease test execution time, e.g. by 10 times, so it would take only 6(!) minutes to run all your tests.
You can also combine some of your tests into bigger ones (but this will increase the time needed to discover the reasons for a failure and make the tests harder to maintain).
After these steps, GUI tests can be executed after each source commit and provide fast feedback on the status of application bugs.
Great question, great answers.
An extra consideration is that you could prioritize your 120 GUI tests: you can run them in an order such that the most important ones, or those that are most likely to fail, are run first.
This won't help to get the build times down, but it will help to get useful feedback from a build faster.
This prioritization (your top 10) need not be fixed, but can change per release / iteration / completed story / day, etc.
For example, you may want to run the newest gui tests first. Or those that were changed most recently. Or the ones covering most of the code that was most recently changed.
As far as I know, there is no ready-to-use tooling out there that immediately supports this, although there is quite a bit of (academic) research going on in the area of test case prioritization.

Resources