Why does -count=1 ignore caching in Go tests?

I understand that in order to avoid cached results in Go tests you can use the -count=1 flag in the go test command, but why?
This is from the docs:
The idiomatic way to disable test caching explicitly is to use -count=1
The explanation for the count flag is:
-count n
Run each test, benchmark, and fuzz seed n times (default 1).
If -cpu is set, run n times for each GOMAXPROCS value.
Examples are always run once. -count does not apply to
fuzz tests matched by -fuzz.
It doesn't say anything about caching, and the default value is already 1, yet cached results are not skipped unless you pass the flag explicitly.

The simple answer is that this is how the go tool is written.
The reasoning: test outputs are cached to speed up test runs. If the code doesn't change, the test output shouldn't change either. Of course this is not necessarily true; tests may read data from external sources or depend on time or randomness, which can change from run to run.
When you request multiple test runs using the -count flag, the intention is obviously to run the tests that many times; there is no point in running them once and showing the same cached result n-1 more times. So any explicit -count disables the use of cached results, and -count=1 simply runs the tests once, ignoring any previously cached output.
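For illustration, here is a minimal sketch (package and test names are arbitrary) that makes the behavior visible when run with go test -v: a second go test -v . replays the cached output, including the old timestamp, and marks the result as (cached), while go test -v -count=1 . always re-executes the test and logs a fresh time.

package cachingdemo

import (
    "testing"
    "time"
)

func TestShowTime(t *testing.T) {
    // With caching, a repeated run replays this line from the cache,
    // so the timestamp does not change; with -count=1 it always updates.
    t.Log("executed at", time.Now())
}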

Related

How to ONLY print stdout/stderr from failing tests, not failing packages?

If a single test fails, the entire package fails and thus the output from all tests within that package is printed. This is cumbersome since the functions being tested have logging in them. I could suppress the logging entirely, but that makes it difficult to track down the root cause when a test fails because my log output has a ton of extra noise.
Is it possible to only print the output from a specific test that fails, instead of printing the entire package?
The first solution that comes to mind is to wrap each test in a function which redirects the log output to a buffer, and then only print the buffer if the test it's wrapping fails, but that obviously adds extra boilerplate to my tests that I'd rather not have to add.
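For what it's worth, here is a sketch of that wrapping idea using the standard log package (the helper name captureLogs is made up for illustration): the helper buffers log output for the duration of the test and only prints it if the test ends up failing.

package mypkg

import (
    "bytes"
    "log"
    "testing"
)

// captureLogs redirects the standard logger into a buffer and, when the
// test finishes, dumps the buffer only if the test failed.
func captureLogs(t *testing.T) {
    t.Helper()
    var buf bytes.Buffer
    prev := log.Writer()
    log.SetOutput(&buf)
    t.Cleanup(func() {
        log.SetOutput(prev)
        if t.Failed() {
            t.Log("captured log output:\n" + buf.String())
        }
    })
}

func TestSomething(t *testing.T) {
    captureLogs(t)
    log.Println("noisy internal logging") // shown only if this test fails
    // ... assertions ...
}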

Cucumber refuses to run tests in a random order

I have a decently sized cucumber test suite, but the tests currently all run in defined order. I want to run them in a random order. The cucumber docs say that I should be able to run cucumber -P --order random and have the tests shuffled before execution, but that is not what happens. In fact, the presence (or absence) of the --order flag seems to make no difference.
What could be stopping my suite from running in a random order? I'm not running with a profile (hence the -P) so nothing should be already defined.
This is part of cucumber's help output:
--order TYPE[:SEED] Run examples in the specified order. Available types:
[defined] Run scenarios in the order they were defined (default).
[random] Shuffle scenarios before running.
Specify SEED to reproduce the shuffling from a previous run.
e.g. --order random:5738
You might try adding a seed number explicitly, e.g. --order random:5738.

How do I find which test Karma is skipping?

Karma has started skipping a test from my Jasmine test suite:
Chrome 45.0.2454 (Windows 7 0.0.0): Executed 74 of 75 (skipped 1) SUCCESS (0.163 secs / 0.138 secs)
However, I have no idea why it's doing this. I'm not trying to skip any tests. How do I find out which test is being skipped?
I've searched to see if ddescribe/iit/xit are being used, and they're not.
I'm running Karma 0.13.10 on Windows.
The ddescribe and iit functions are used for focusing on specific suites/tests, not for skipping them. The xit function is used for skipping a specific test, and xdescribe for skipping a suite. From what you describe, you have a suite with just one test in it that is being skipped, so search your test code for xdescribe. If that turns up nothing, bisect your configuration: choose half of your files and remove them from the config. If you still get the skip, look in that half; otherwise look in the other half. Continue splitting the list in half and removing files from the config until you have isolated the one file that contains the skip, then search that file. It has to be in there somewhere.
Yet-to-be-written tests (without a function body) are marked as skipped by Karma.
You might have one in your suite.
If you use karma-spec-reporter you can specify in your karma.conf.js which outputs to suppress/show.
specReporter: {
    suppressSkipped: false
},
Make sure your tests have at least one expect statement, otherwise they will appear as "skipped".

How to enforce testing order for different tests in a go framework?

If I have different packages and each has a test file (pkg_test.go), is there a way for me to make sure that they run in a particular order?
Say pkg1_test.go gets executed first and then the rest.
I tried using go channels but it seems to hang.
It isn't obvious, considering go test ./... triggers tests on all packages... but runs them in parallel: see "Go: how to run tests for multiple packages?".
go test -p 1 would run the tests sequentially, but not necessarily in the order you would need.
A simple script calling go test on the packages listed in the right expected order would be easier to do.
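As a sketch of that idea in a small Go program (the package paths ./pkg1, ./pkg2, ./pkg3 are placeholders), a tiny runner can invoke go test on each package in the required order and stop at the first failure:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // Packages listed in the order their tests must run.
    packages := []string{"./pkg1", "./pkg2", "./pkg3"}

    for _, pkg := range packages {
        cmd := exec.Command("go", "test", pkg)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintf(os.Stderr, "tests failed in %s: %v\n", pkg, err)
            os.Exit(1)
        }
    }
}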
Update, six years later: the best practice is not to rely on test order at all.
So much so that issue 28592 advocates adding -shuffle and -shuffleseed to shuffle tests.
CL 310033 mentions:
This CL adds a new flag to the testing package and the go test command
which randomizes the execution order for tests and benchmarks.
This can be useful for identifying unwanted dependencies
between test or benchmark functions.
The flag is off by default.
If -shuffle is set to on then the system
clock will be used as the seed value.
If -shuffle is set to an integer N, then N will be used as the seed value.
In both cases, the seed will be reported for failed runs so that they can be reproduced later on.
Picked up for Go 1.17 (Aug. 2021) in commit cbb3f09.
See more at "Benchmarking with Go".
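As a contrived illustration of the kind of hidden dependency -shuffle is meant to surface, the second test below only passes when the first one happens to run before it; go test -shuffle=on will eventually flip the order and expose the coupling:

package shuffledemo

import "testing"

var counter int // shared state that couples the two tests together

func TestIncrement(t *testing.T) {
    counter++
}

func TestCounterIsOne(t *testing.T) {
    // Passes in the default source order (TestIncrement runs first),
    // but fails whenever shuffling runs this test first.
    if counter != 1 {
        t.Fatalf("counter = %d, want 1", counter)
    }
}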
I found a hack to get around this.
I named my test files as follows:
A_{test_file1}_test.go
B_{test_file2}_test.go
C_{test_file3}_test.go
The A, B, C prefixes ensure they are run in that order.
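Within a single package, a less name-dependent alternative (a sketch, not from the original answer) is to drive the ordered steps as subtests from one parent test, so the ordering is explicit in code rather than in file names:

package mypkg

import "testing"

func TestInOrder(t *testing.T) {
    // Subtests run sequentially in the order they are registered
    // (as long as none of them calls t.Parallel).
    t.Run("Step1_Setup", func(t *testing.T) { /* ... */ })
    t.Run("Step2_Process", func(t *testing.T) { /* ... */ })
    t.Run("Step3_Verify", func(t *testing.T) { /* ... */ })
}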

Is there a good way to debug order dependent test failures in RSpec (RSpec2)?

Too often people write tests that don't clean up after themselves when they mess with state. Often this doesn't matter, since objects tend to be torn down and recreated for most tests, but there are some unfortunate cases where global state on objects persists for the entire test run, and when you run tests that depend on and modify that global state in a certain order, they fail.
These tests and possibly implementations obviously need to be fixed, but it's a pain to try to figure out what's causing the failure when the tests that affect each other may not be the only things in the full test suite. It's especially difficult when it's not initially clear that the failures are order dependent, and may fail intermittently or on one machine but not another. For example:
rspec test1_spec.rb test2_spec.rb # failures in test2
rspec test2_spec.rb test1_spec.rb # no failures
In RSpec 1 there were some options (--reverse, --loadby) for ordering test runs, but those have disappeared in RSpec 2 and were only minimally helpful in debugging these issues anyway.
I'm not sure of the ordering that either RSpec 1 or RSpec 2 use by default, but one custom designed test suite I used in the past randomly ordered the tests on every run so that these failures came to light more quickly. In the test output the seed that was used to determine ordering was printed with the results so that it was easy to reproduce the failures even if you had to do some work to narrow down the individual tests in the suite that were causing them. There were then options that allowed you to start and stop at any given test file in the order, which allowed you to easily do a binary search to find the problem tests.
I have not found any such utilities in RSpec, so I'm asking here: What are some good ways people have found to debug these types of order dependent test failures?
There is now a --bisect flag that will find the minimum set of tests to run to reproduce the failure. Try:
$ rspec --bisect=verbose
It might also be useful to use the --fail-fast flag with it.
I wouldn't say I have a good answer, and I'd love to hear some better solutions than mine. That said...
The only real technique I have for debugging these issues is adding a global hook (via spec_helper) for printing some aspect of database state (my usual culprit) before and after each test (conditioned to check whether I care or not). A recent example was adding something like this to my spec_helper.rb:
Spec::Runner.configure do |config|
  config.before(:each) do
    $label_count = Label.count
  end

  config.after(:each) do
    label_diff = Label.count - $label_count
    $label_count = Label.count
    puts "#{self.class.description} #{description} altered label count by #{label_diff}" if label_diff != 0
  end
end
We have a single test in our Continuous Integration setup that globs the spec/ directory of a Rails app and runs each spec file against every other one.
It takes a lot of time, but we found 5 or 6 dependencies that way.
Here is a quick and dirty script I wrote to debug order-dependent failures: https://gist.github.com/biomancer/ddf59bd841dbf0c448f7
It consists of 2 parts.
The first part runs the rspec suite multiple times with different seeds and dumps the results to rspec_[ok|fail]_[seed].txt files in the current directory to gather stats.
The second part iterates through all these files, extracts the test group names, and analyzes their position relative to the affected test to make assumptions about dependencies, forming some 'risk' groups: safe, unsafe, etc. The script output explains the other details and group meanings.
This script will work correctly only for simple dependencies, and only if the affected test fails for some seeds and passes for others, but I think it's still better than nothing.
In my case it was a complex dependency whose effect could be cancelled by another test, but this script helped me find a direction after running its analysis part multiple times on different sets of dumps, specifically only on the failed ones (I just moved the 'ok' dumps out of the current directory).
Found my own question 4 years later, and now rspec has an --order flag that lets you set a random order; if you get order-dependent failures, you can reproduce the order with --seed 123, where the seed is printed out on every spec run.
https://www.relishapp.com/rspec/rspec-core/v/2-13/docs/command-line/order-new-in-rspec-core-2-8
It's most likely some state persisting between tests, so make sure your database and any other data stores (including class variables and globals) are reset after every test. The database_cleaner gem might help.
Rspec Search and Destroy is meant to help with this problem: https://github.com/shepmaster/rspec-search-and-destroy
