Correct output of running Go benchmarks - performance

When I run a benchmark on this repository (just a placeholder for any Go project) using:
go test -run="^$" -bench="^BenchmarkLessorRevoke1000$" ./...
the output includes the benchmark results:
BenchmarkLessorRevoke1000-8 1033351 1141 ns/op
but also a whole bunch of other test output. How do I make it show only the benchmarks and not the test output?

You can supply a dummy name to the -run parameter of the go test tool and, provided you don't have any tests matching that name, only the benchmarks should get run.
You covered this with "^$", so all good there; you also have a pattern matching the benchmark with "^BenchmarkLessorRevoke1000$".
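For reference, here is a minimal sketch of a benchmark that this pattern would match (hypothetical package name; the real project defines its own, in a file ending in _test.go):

package lease

import "testing"

// The -bench regexp ^BenchmarkLessorRevoke1000$ matches exactly this
// function name, so no other benchmark in the package is selected.
func BenchmarkLessorRevoke1000(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// code under measurement goes here
	}
}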
The issue is that you are running go test across the entire module and all its subdirectories with ./....
You should specify the benchmarks that you want to run on a per-package basis.
go test -run="^$" -bench="^BenchmarkLessorRevoke1000$" .
go test -run="^$" -bench="^BenchmarkLessorRevoke1000$" ./pkg1/
go test -run="^$" -bench="^BenchmarkLessorRevoke1000$" ./pkg2/
Also be mindful: if you do want to run benchmarks en masse, you should still do so on a per-package basis.
Running benchmarks across multiple packages will execute the packages' test binaries in parallel, skewing your results.
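If you'd rather keep a single command across packages, one option is to serialize package execution with the standard go build/test flag -p, which limits how many test binaries run at once:

go test -run="^$" -bench="^BenchmarkLessorRevoke1000$" -p 1 ./...

This keeps one benchmark binary running at a time, at the cost of a longer overall run.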

Related

How to execute only one test of a feature file?

I have a file with N>10 tests defined in BDD in a feature file.
For debugging purposes, I need to execute only one of the tests many times, but I am not finding any way other than commenting out the other tests.
I am now trying to use the #focus tag in front of the test, in such a way that the Cypress runner actually marks the rest of the tests as "sync skip; aborting execution", but the test that I need to run just says "No commands were issued in this test.".
This is the example .feature file I'm playing with:
#e2eTests
#cases
Feature: Test feature

  Scenario: Test 1
    Given foo
    When bar
    Then foobar

  #focus
  Scenario: Test 2
    Given bar
    When foo
    Then barfoo!
And when running npm run cypress and selecting this feature, I get:
What am I doing wrong?
Thanks
I would recommend using cypress-grep if you are looking at a flaky test that needs repeating n times.
After installation and configuration (see: Burning Tests with cypress-grep), indicate in the terminal which test needs to be repeated and how many times (Cypress 7.6):
npx cypress run --spec cypress/folder/with/test/testFile.spec.ts --env grep="name/description of specific test",burn=<numberOfIterations>
Example:
npx cypress run --spec cypress/integration/testExample.spec.ts --env grep="A popup should appear after login",burn=250

Ruby test-unit not showing summary after tests

Normally Ruby test-unit will display a summary of tests run after they are finished, something like this:
Finished in 0.117158443 seconds.
-------------------------------------------------------------------------------
10 tests, 10 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
100% passed
-------------------------------------------------------------------------------
298.74 tests/s, 0.00 assertions/s
This was working, but now something has changed: when the unit tests are run, it shows the dots but then stops. I tried re-organizing some test files into different directories and made absolutely sure to change the file paths in the test runner. Also, the dots do not match the number of tests/assertions.
Loaded suite test
Started
.................$prompt>   <-- does not even print a newline here
I notice that if I run the test runner from another directory, the summary will show, but it causes errors with the test dependencies. I should be able to run the test runner from the same directory. This is an example of the test runner I am using: https://test-unit.github.io/test-unit/en/file.how-to.html. What are the reasons this would not display at the end?
It seems like it could be an issue with not having the test-unit.yml file in the same directory from which you run the script.
See here in the code or in the same section in that document you posted.
See how it's configured here, for example:
runner: console
console_options:
  color_scheme: inverted
  color_schemes:
    inverted:
      success:
        name: red
        bold: true
      failure:
        name: green
        bold: true
This part of the code documentation really stuck out:
# ## Test Runners
#
# So, now you have this great test class, but you still
# need a way to run it and view any failures that occur
# during the run. There are some test runners: the console test
# runner, the GTK+ test runner, and so on. The console test
# runner is automatically invoked for you if you require
# 'test/unit' and simply run the file. To use another
# runner, simply set the default test runner ID on
# Test::Unit::AutoRunner.
Maybe you need to have that runner specified in your YAML file?
Without seeing how you are calling your script and how your directories are organized, it is hard to tell exactly what is causing the issue, but I suspect it begins with the runner not reading that YAML file.
If all else fails, let me recommend two great unit testing libraries for Ruby if you feel compelled to switch to a more widely-used library:
minitest
RSpec
Edit: You could also revert your directories to the same layout as before and pin the test-unit gem in your Gemfile to the last version that worked for you, e.g. gem "test-unit", "3.4.0".

Webdriver io running in Parallel - How to ensure one test spec runs before another one - execution order

I have inherited a WebdriverIO/Mocha test framework. Until now the tests have been run one at a time. There was one test spec that had to be run before the others. This was handled purely through the file naming convention:
aFirstTest.js
xLastTest.js
So when the whole suite was run, this ensured that aFirstTest.js ran before xLastTest.js.
I now want to run the tests in parallel mode.
How can I ensure that aFirstTest.js runs before xLastTest.js?
This post might give you some ideas.
Otherwise, you'd need to present the specs to WebdriverIO as one. An easy way to do this is to wrap them in another file.
wrapper.spec.js:
// Requiring the spec files in order makes WebdriverIO see a single spec,
// so the two files run sequentially within one worker.
const first = require('./aFirstTest')
const last = require('./xLastTest')
And in your config:
suites: {
  firstLast: [
    './specs/wrapper.spec.js'
  ]
}

Different file permission error in "go test" and "bazel test"

If a test wants to assert a file permission error, for example when writing to the root of the file system, "go test" returns a syscall.EACCES error, while "bazel test" returns a syscall.EPERM. How can I make the test pass under both "bazel test" and "go test"?
An example can be found here.
You can disable the sandbox by using bazel test --spawn_strategy=standalone //.... I suspect this will work around the problem.
However, you may want to consider whether writing to / is the behavior that you want to test. If you need to run code on a different operating system or inside a Docker container, you'll get different behavior in this case, so you could think about testing a more predictable code path, or mocking out the file access layer to isolate your tests from it.
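Alternatively, you can make the assertion itself tolerant of both environments. A minimal sketch (hypothetical package name and path; it assumes the test does not run as root, since root may be allowed to write to /):

package fsperm

import (
	"errors"
	"os"
	"syscall"
	"testing"
)

// A plain "go test" run surfaces EACCES for the denied write, while a
// sandboxed "bazel test" run surfaces EPERM, so accept either errno.
func TestWriteToRootDenied(t *testing.T) {
	err := os.WriteFile("/forbidden", []byte("x"), 0o644)
	if err == nil {
		t.Fatal("expected a permission error writing to /")
	}
	if !errors.Is(err, syscall.EACCES) && !errors.Is(err, syscall.EPERM) {
		t.Fatalf("expected EACCES or EPERM, got %v", err)
	}
}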

Testing a Jekyll site with rspec and capybara, getting a bizarre race-case on rspec start

So check this out: it appears as though, upon running bundle exec rspec, there's a race between jekyll serve and puma/RSpec's boot-up. Sometimes I run the command and my tests run fine. Other times, I get this error for each of my spec files: cannot load such file -- /path/to/project/sitename_jekyll/_layouts/spec/form_spec.rb, which is interesting because that's not where my spec files are located. They're in /path/to/project/sitename_jekyll/spec/form_spec.rb.
What's crazy is that I can literally just re-run the command over and over, and sometimes it'll run the spec tests from the correct location, and sometimes it'll look for them in _layouts and error out. It runs correctly maybe once out of every three to five attempts. All the other times I get the errors above.
Here's what my spec_helper.rb looks like: https://gist.github.com/johnhutch/2cddfafcde0485ff021501d5696c0c2d
And here's an example test file:
https://gist.github.com/johnhutch/a35d15c170f5fd9ca07998bf035d111d
My .rspec only contains two lines:
--color
--require spec_helper
And here's the output, both successful and unsuccessful, back to back:
https://gist.github.com/johnhutch/7927d609170ef5c70a595735502b128d
HEEELLLLLP!
This sounds like Jekyll is changing the current directory while building the site, which, since it is being run in a thread, also affects the tests RSpec is trying to run (see https://bugs.ruby-lang.org/issues/9785 for why Dir.chdir is not thread-safe), leading to attempts to load things from incorrect locations.
A potential solution would be to wait for the Jekyll site to be built before actually running your tests. A comment in your spec_helper suggests someone thought passing force_build: true would do this, but from a quick perusal of the rack-jekyll code I don't think that's true: you actually need to wait for compiling? to return false (v0.5), or for complete? to return true (current master branch), to ensure building has finished (as well as passing force_build). This could be done in a loop that sleeps and checks (simpler):
sleep 0.1 while <jekyll app>.compiling?
or (if using the master branch) via the mutex/conditional Rack::Jekyll exposes like in its test suite - https://github.com/adaoraul/rack-jekyll/blob/master/test/helper.rb#L49
Note: Also check my comment about your tests that aren't actually testing anything.
As per Thomas Walpole's super helpful responses, this ended up working:
sleep 0.1 while Capybara.app.compiling?
inserted right after:
Capybara.app = Rack::Jekyll.new(force_build: true)
in my spec_helper.rb.
Thanks again, Thomas!
