If a single test fails, the entire package is reported as failed, and the output from all tests within that package is printed. This is cumbersome because the functions being tested log heavily. I could suppress the logging entirely, but that makes it difficult to track down the root cause when a test fails, because my log output has a ton of extra noise.
Is it possible to only print the output from a specific test that fails, instead of printing the entire package?
The first solution that comes to mind is to wrap each test in a function which redirects the log output to a buffer, and then only print the buffer if the test it's wrapping fails, but that obviously adds extra boilerplate to my tests that I'd rather not have to add.
In order to discover Linux namespaces under certain conditions my open source Golang package lxkns needs to re-execute the application it is used in as a new child process in order to be able to switch mount namespaces before the Golang runtime spins up. The way Linux mount namespaces work makes it impossible to switch them from Golang applications after the runtime has spun up OS threads.
This means that the original process "P" re-runs a copy of itself as a child "C" (via the reexec package), passing a special indication in the child's environment which tells the child to run only a specific "action" function belonging to the included "lxkns" package (see below for details), instead of running the whole application normally (thereby avoiding endlessly spawning children recursively).
forkchild := exec.Command("/proc/self/exe")
forkchild.Start() // error handling elided for brevity
...
forkchild.Wait()
At the moment, I invoke the coverage tests from VisualStudio Code, which runs:
go test -timeout 30s -coverprofile=/tmp/vscode-goXXXXX/go-code-cover github.com/thediveo/lxkns
So, "P" re-executes a copy "C" of itself, and tells it to run some action "A", print some result to stdout, and then to immediately terminate. "P" waits for "C"'s output, parses it, and then continues in its program flow.
The module test uses Ginkgo/Gomega and a dedicated TestMain to detect when the test binary gets re-executed as a child, so that only the requested "action" function is run.
package lxkns

import (
	"os"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"

	"github.com/thediveo/gons/reexec"
)

func TestMain(m *testing.M) {
	// Ensure that the registered handler is run in the re-executed child. This
	// won't trigger the handler while we're in the parent, because the
	// parent's Arg[0] won't match the name of our handler.
	reexec.CheckAction()
	os.Exit(m.Run())
}

func TestLinuxKernelNamespaces(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "lxkns package")
}
I would like to also create code coverage data from the re-executed child process.
Is it possible to enable code coverage from within the program under test itself, and how so?
Is it possible to then append the code coverage data written by the child to the coverage data of the parent process "P"?
Does the Golang runtime only write the coverage data at exit and does it overwrite the file specified, or does it append? (I would already be glad for a pointer to the corresponding runtime sources.)
Note: switching mount namespaces won't conflict with creating coverage files in the new mount namespaces in my test cases, because these test mount namespaces are copies of the initial mount namespace, so a newly created file will also show up in the filesystem as usual.
After @Volker's comment on my question I knew I had to take the challenge and went straight for the source code of Go's testing package. While @marco.m's suggestion is helpful in many cases, it cannot handle my admittedly slightly bizarre use case. The mechanics of testing relevant to my original question are as follows, heavily simplified:
cover.go: implements coverReport() which writes a coverage data file (in ASCII text format); if the file already exists (stale version from a previous run), then it will be truncated first. Please note that coverReport() has the annoying habit of printing some “statistics” information to os.Stdout.
testing.go:
gets the CLI arguments -test.coverprofile= and -test.outputdir= from os.Args (via the flag package). It also implements toOutputDir(path), which places cover profile files inside -test.outputdir if specified.
But when does coverReport() get called? Simply put, at the end of testing.M.Run().
Now with this knowledge under the belt, a crazy solution starts to emerge, kind of "Go-ing Bad" ;)
Wrap testing.M in a special re-execution-enabled version reexec.testing.M: it detects whether it is running with coverage enabled:
if it is the "parent" process P, then it runs the tests as normal, and then it collects coverage profile data files from re-executed child processes C and merges them into P's coverage profile data file.
while in P and when just about to re-execute a new child C, a new dedicated coverage profile data filename is allocated for the child C. C then gets the filename via its "personal" -test.coverprofile= CLI arg.
when in C, we run the desired action function. Next, we need to run an empty test set in order to trigger writing the coverage profile data for C. For this, the re-execution function in P adds a -test.run= with a very special "Bielefeld test pattern" that will most likely result in an empty match. Remember, P will -- after it has run all its tests -- pick up the individual C coverage profile data files and merge them into its own.
when coverage profiling isn't enabled, then no special actions need to be taken.
The downside of this solution is that it depends on some un-guaranteed behavior of Go's testing with respect to how and when it writes code coverage reports. But since a Linux-kernel namespace discovery package already pushes Go probably even harder than Docker's libnetwork, that's just a quantum further over the edge.
To a test developer, the whole enchilada is hidden inside an "enhanced" rxtst.M wrapper.
import (
	"os"
	"testing"

	rxtst "github.com/thediveo/gons/reexec/testing"
)

func TestMain(m *testing.M) {
	// Ensure that the registered handler is run in the re-executed child.
	// This won't trigger the handler while we're in the parent. We're using
	// gons' very special coverage profiling support for re-execution.
	mm := &rxtst.M{M: m}
	os.Exit(mm.Run())
}
Running the whole lxkns test suite with coverage, preferably using go-acc (go accurate code coverage calculation), then shows in the screenshot below that the function discoverNsfsBindMounts() was run once (1). This function isn't directly called from anywhere in P. Instead, it is registered and then run in a re-executed child C. Previously, no code coverage was reported for discoverNsfsBindMounts(), but now, with the help of the package github.com/thediveo/gons/reexec/testing, code coverage for C is transparently merged into P's code coverage.
I am trying to make my Xamarin.UITest output clearer and easier to work with. Every so often when Xamarin.Forms updates, the tree changes in subtle ways that break our UITests. Also, when developing a test it isn't always clear what the query should look like to get to a view element we want our test to interact with.
To address these, when a test fails with an "Unable to find element" error, I want to capture the app's view tree and output it to the test results.
Currently in these cases we have to modify the test code by adding app.Repl(); (see Working With the REPL), re-run the test, wait for the REPL window to appear, type tree, look at the output, type exit to leave the REPL, make code changes based on what I saw in the tree command's output, and rinse and repeat until I have a working test. Instead, if the test results contained the output of the REPL's tree command, I could start fixing the test code immediately and greatly speed up my testing feedback loop.
How could I most easily achieve this?
app.Print.Tree();
I think this is what you're looking for.
I understand that an interpreter translates your source code into machine code line by line, and stops when it encounters an error.
I am wondering what an interpreter does when you give it for loops.
E.g. I have the following (MATLAB) code:
for i = 1:10000
pi*pi
end
Does it really run through and translate the for loop line by line 10000 times?
With compilers, is the machine code shorter, consisting only of a set of statements that includes a go-to control statement in effect for 10000 iterations?
I'm sorry if this doesn't make sense, I don't have very good knowledge of the underlying bolts and nuts of programming but I want to quickly understand.
I understand that an interpreter translates your source code into machine code line by line, and stops when it encounters an error.
This is wrong. There are many different types of interpreters, but very few execute code line-by-line and those that do (mostly shells) don't generate machine code at all.
In general there are four more-or-less common ways that code can be interpreted:
Statement-by-statement execution
This is presumably what you meant by line-by-line, except that semicolons can usually be used as an alternative to line breaks. As I said, this is pretty much only done by shells.
How this works is that a single statement is parsed at a time. That is, the parser reads tokens until the statement is finished. For simple statements, that's until a statement terminator is reached, e.g. the end of the line or a semicolon. For other statements (such as if-statements, for-loops or while-loops), it's until the corresponding terminator (endif, fi, etc.) is found. Either way, the parser returns some kind of representation of the statement (usually some type of AST), which is then executed. No machine code is generated at any point.
This approach has the unusual property that a syntax error at the end of the file won't prevent the beginning of the file from being executed. However, everything is still parsed at most once, and the bodies of if-statements etc. will still be parsed even if the condition is false (so a syntax error inside an if false will still abort the script).
AST-walking interpretation
Here the whole file is parsed at once and an AST is generated from it. The interpreter then simply walks the AST to execute the program. In principle this is the same as above, except that the entire program is parsed first.
Bytecode interpretation
Again the entire file is parsed at once, but instead of an AST-type structure the parser generates some bytecode format. This bytecode is then executed instruction-by-instruction.
JIT-compilation
This is the only variation that actually generates machine code. There are two variations of this:
Generate machine code for all functions before they're called. This might mean translating the whole file as soon as it's loaded or translating each function right before it's called. After the code has been generated, execute it.
Start by interpreting the bytecode and then JIT-compile specific functions or code paths individually once they have been executed a number of times. This allows us to make certain optimizations based on usage data that has been collected during interpretation. It also means we don't pay the overhead of compilation on functions that aren't called a lot. Some implementations (specifically some JavaScript engines) also recompile already-JITed code to optimize based on newly gathered usage data.
So in summary: The overlap between implementations that execute the code line-by-line (or rather statement-by-statement) and implementations that generate machine code should be pretty close to zero. And those implementations that do go statement-by-statement still only parse the code once.
I like Mocha so far, but I'm not fond of this when I'm doing continuous testing:
watching
watching
watching
watching
watching
watching
[repeat many times]
If I haven't run a test for a while and I want to see the output of the last test, it's scroll, scroll, scroll. It quickly swamps my console buffer. Can I change this behavior without changing Mocha's source code?
EDIT: this has been fixed and pulled into master.
Per https://github.com/visionmedia/mocha/blob/master/bin/_mocha#L287-303,
the mocha command gives the test runner a callback that prints a series of strings to the console, and there aren't any cmd line flags that would affect that behavior.
The root of the problem seems to be that the animation's control characters are failing in your terminal: it's supposed to show a pretty spinning symbol at the beginning, and then emit a carriage return (not a line feed) to rewrite the line after every printout.
If you're really dead set on changing that behavior without modifying Mocha's source, you could make a copy of the bin/_mocha file and just replace the play() function with one that suits your needs. Make sure to fix up all the relative paths.
When running something like "make install", there is a lot of information displayed in the terminal window. Some lines start with make[1], make[2], make[3] or make[4]. What do these mean? Does this mean there is some kind of error?
When make is invoked recursively, each make distinguishes itself in output messages with a count. That is, messages beginning "make[3]" are from the third make that was invoked. It is not indicative of an error of any kind, but is intended to enable you to keep track of what is happening. In particular, you can tell in which directory make is being run to help debug the build if any errors do occur.
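For illustration, a minimal recursive setup with a hypothetical subdirectory sub:

```
# Makefile (top level)
all:
	$(MAKE) -C sub

# sub/Makefile
all:
	@echo building in sub
```

The top-level make prints its messages without a number (level 0 is suppressed), the make it spawns for sub announces itself as make[1], and a further level of recursion would show up as make[2], and so on.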