In order to discover Linux namespaces under certain conditions, my open-source Golang package lxkns needs to re-execute the application it is used in as a new child process, so that it can switch mount namespaces before the Golang runtime spins up. The way Linux mount namespaces work makes it impossible to switch them from Golang applications after the runtime has spun up OS threads.
This means that the original process "P" re-runs a copy of itself as a child "C" (using the reexec package), passing a special indication via the child's environment which tells the child to only run a specific "action" function belonging to the included "lxkns" package (see below for details), instead of running the whole application normally (thus avoiding endlessly spawning children recursively).
forkchild := exec.Command("/proc/self/exe") // re-execute our own binary image
forkchild.Start()                           // (error handling omitted for brevity)
...
forkchild.Wait()
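For completeness, the snippet above leaves out how the action indication reaches the child and how its output is collected; a minimal sketch, with a made-up environment variable and action name (the real reexec package uses its own conventions and error handling), might look like this:

import (
    "os"
    "os/exec"
)

// reexecChild is a purely illustrative helper for the parent side: it
// re-executes the current binary, tells the child which action to run via a
// made-up environment variable, and returns whatever the child wrote to
// stdout.
func reexecChild(action string) ([]byte, error) {
    child := exec.Command("/proc/self/exe")
    child.Env = append(os.Environ(), "REEXEC_ACTION="+action)
    return child.Output() // Output starts the child and waits for it to exit
}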
At the moment, I invoke the coverage tests from Visual Studio Code, which runs:
go test -timeout 30s -coverprofile=/tmp/vscode-goXXXXX/go-code-cover github.com/thediveo/lxkns
So, "P" re-executes a copy "C" of itself, and tells it to run some action "A", print some result to stdout, and then to immediately terminate. "P" waits for "C"'s output, parses it, and then continues in its program flow.
The module test uses Ginkgo/Gomega and a dedicated TestMain in order to catch when the test gets re-executed as a child in order to run only the requested "action" function.
package lxkns

import (
    "os"
    "testing"

    . "github.com/onsi/ginkgo"
    . "github.com/onsi/gomega"

    "github.com/thediveo/gons/reexec"
)

func TestMain(m *testing.M) {
    // Ensure that the registered handler is run in the re-executed child. This
    // won't trigger the handler while we're in the parent, because the
    // parent's Arg[0] won't match the name of our handler.
    reexec.CheckAction()
    os.Exit(m.Run())
}

func TestLinuxKernelNamespaces(t *testing.T) {
    RegisterFailHandler(Fail)
    RunSpecs(t, "lxkns package")
}
I would like to also create code coverage data from the re-executed child process.
Is it possible to enable code coverage from within the program under test itself, and if so, how?
Is it possible to then append the code coverage data written by the child to the coverage data of the parent process "P"?
Does the Golang runtime only write the coverage data at exit and does it overwrite the file specified, or does it append? (I would already be glad for a pointer to the corresponding runtime sources.)
Note: switching mount namespaces won't conflict with creating coverage files in the new mount namespaces in my test cases. The reason is that these test mount namespaces are copies of the initial mount namespace, so a newly created file will also show up in the filesystem as usual.
After @Volker's comment on my question I knew I had to take up the challenge and went straight for the source code of Go's testing package. While @marco.m's suggestion is helpful in many cases, it cannot handle my admittedly slightly bizarre use case. The mechanics of testing relevant to my original question are as follows, heavily simplified:
cover.go: implements coverReport(), which writes a coverage data file (in ASCII text format); if the file already exists (a stale version from a previous run), it is truncated first. Please note that coverReport() has the annoying habit of printing some "statistics" information to os.Stdout.
testing.go:
gets the CLI arguments -test.coverprofile= and -test.outputdir= from os.Args (via the flag package). It also implements toOutputDir(path), which places cover profile files inside -test.outputdir if specified.
But when does coverReport() get called? Simply put, at the end of testing.M.Run().
Now with this knowledge under the belt, a crazy solution starts to emerge, kind of "Go-ing Bad" ;)
Wrap testing.M in a special re-execution-enabled version reexec.testing.M: it detects whether it is running with coverage enabled:
if it is the "parent" process P, then it runs the tests as normal, and then it collects coverage profile data files from re-executed child processes C and merges them into P's coverage profile data file.
while in P and just about to re-execute a new child C, a new dedicated coverage profile data filename is allocated for that child C. C then gets this filename via its "personal" -test.coverprofile= CLI arg.
when in C, we run the desired action function. Next, we need to run an empty test set in order to trigger writing the coverage profile data for C. For this, the re-execution function in P adds a -test.run= with a very special "Bielefeld test pattern" that will most likely match no tests at all. Remember, P will -- after it has run all its tests -- pick up the individual C coverage profile data files and merge them into its own.
when coverage profiling isn't enabled, then no special actions need to be taken.
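To give a rough idea of the merging step in P, here is a naive sketch; the function name mergeCoverage is made up, and the real github.com/thediveo/gons/reexec/testing package handles the details (such as overlapping coverage blocks) properly. It simply appends the child's coverage block lines to the parent's profile data file, skipping the child's "mode:" header line:

package rxtesting

import (
    "fmt"
    "os"
    "strings"
)

// mergeCoverage naively merges a child's coverage profile file into the
// parent's profile file by appending the child's block lines; the child's
// leading "mode:" line is skipped, as the parent's profile already has one.
func mergeCoverage(parentProfile, childProfile string) error {
    child, err := os.ReadFile(childProfile)
    if err != nil {
        return err
    }
    parent, err := os.OpenFile(parentProfile, os.O_APPEND|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    defer parent.Close()
    for _, line := range strings.Split(string(child), "\n") {
        if line == "" || strings.HasPrefix(line, "mode:") {
            continue
        }
        if _, err := fmt.Fprintln(parent, line); err != nil {
            return err
        }
    }
    return nil
}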
The downside of this solution is that it depends on some unguaranteed behavior of Go's testing package with respect to how and when it writes code coverage reports. But since a Linux-kernel namespace discovery package already pushes Go probably even harder than Docker's libnetwork does, that's just a quantum further over the edge.
To a test developer, the whole enchilada is hidden inside an "enhanced" rxtst.M wrapper.
import (
    "os"
    "testing"

    rxtst "github.com/thediveo/gons/reexec/testing"
)

func TestMain(m *testing.M) {
    // Ensure that the registered handler is run in the re-executed child.
    // This won't trigger the handler while we're in the parent. We're using
    // gons' very special coverage profiling support for re-execution.
    mm := &rxtst.M{M: m}
    os.Exit(mm.Run())
}
Running the whole lxkns test suite with coverage, preferably using go-acc (accurate Go code coverage calculation), then shows in the screenshot below that the function discoverNsfsBindMounts() was run once (1). This function isn't directly called from anywhere in P. Instead, it is registered and then run in a re-executed child C. Previously, no code coverage was reported for discoverNsfsBindMounts(), but now, with the help of the package github.com/thediveo/gons/reexec/testing, code coverage for C is transparently merged into P's code coverage.
Related
I want to profile a Go program's performance between different runs with different OS-level settings. I'm aware that I can get profiles for single runs via $ go test -cpuprofile cpu.prof -memprofile mem.prof -bench . but I don't know how to aggregate the information in such a way that I can compare the results either visually or programmatically.
Below is a sketch in the Xonsh scripting language, which is a hybrid of Python and Bash. However, I'm happy to accept suggestions written in pure Bash as well.
for i in range(n):
    change_system_settings()
    # Run 'go test' and save the results in cpu0.prof, cpu1.prof, cpu2.prof etc.
    #(f'go test -cpuprofile cpu{i}.prof -memprofile mem{i}.prof -bench .'.split())
The script changes the system settings and runs the program through the profiler n times. Now, after the process, I'm left with possibly dozens of individual .prof files. I would like to have a holistic view of them, compare the memory and CPU usage between runs, and even run numeric tests to see which run was optimal.
If you use Go's pprof to profile your Go program, the pprof library has a Merge function that merges multiple pprof profiles into one.
The library is github.com/google/pprof, so you just import its profile package in a Go program:
import "github.com/google/pprof/profile"
Then you'll need to load all your pprof files into one slice. Assuming you have opened each file (using os.Open()), parsed it with profile.Parse(), and collected the results in a slice called allProfiles, you merge them using the following call:
result, err := profile.Merge(allProfiles)
Then you output the merged data into a new file, using os.OpenFile(...), writing to this file, then closing it.
I haven't tested this right now, honestly, but I remember this is how we did it a long time ago. So technically, you could invoke this Go program after the for loop in your test script is done.
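Putting those pieces together, here is an untested sketch of such a program; the file names cpu0.prof, cpu1.prof, cpu2.prof and merged.prof are just placeholders for your per-run profiles:

package main

import (
    "log"
    "os"

    "github.com/google/pprof/profile"
)

func main() {
    // Parse each per-run profile file into a *profile.Profile.
    var allProfiles []*profile.Profile
    for _, name := range []string{"cpu0.prof", "cpu1.prof", "cpu2.prof"} {
        f, err := os.Open(name)
        if err != nil {
            log.Fatal(err)
        }
        p, err := profile.Parse(f)
        f.Close()
        if err != nil {
            log.Fatal(err)
        }
        allProfiles = append(allProfiles, p)
    }
    // Merge all runs into a single profile.
    merged, err := profile.Merge(allProfiles)
    if err != nil {
        log.Fatal(err)
    }
    // Write the merged profile to a new file for use with "go tool pprof".
    out, err := os.Create("merged.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()
    if err := merged.Write(out); err != nil {
        log.Fatal(err)
    }
}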
Documentation: https://github.com/google/pprof/blob/master/doc/README.md
If a single test fails, the entire package fails and thus the output from all tests within that package is printed. This is cumbersome, since the functions being tested have logging in them. I could suppress the logging entirely, but that makes it difficult to track down the root cause when a test fails, because my log output has a ton of extra noise.
Is it possible to only print the output from a specific test that fails, instead of printing the entire package?
The first solution that comes to mind is to wrap each test in a function which redirects the log output to a buffer and then only prints the buffer if the test it's wrapping fails, but that obviously adds extra boilerplate to my tests that I'd rather not have to add.
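For what it's worth, a minimal sketch of that wrapping idea might look like the following; the helper name captureLogs is made up, and it assumes the code under test uses the standard log package:

package mypkg

import (
    "bytes"
    "log"
    "os"
    "testing"
)

// captureLogs runs fn as a subtest while the standard logger writes into a
// buffer; the captured output is only printed if the subtest fails.
func captureLogs(t *testing.T, name string, fn func(t *testing.T)) {
    var buf bytes.Buffer
    log.SetOutput(&buf)
    defer log.SetOutput(os.Stderr) // restore the default log destination
    if ok := t.Run(name, fn); !ok {
        t.Logf("captured log output:\n%s", buf.String())
    }
}

func TestSomething(t *testing.T) {
    captureLogs(t, "noisy case", func(t *testing.T) {
        // ... test body with noisy logging ...
    })
}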
If I have different packages and each has a test file (pkg_test.go), is there a way for me to make sure that they run in a particular order?
Say pkg1_test.go gets executed first and then the rest.
I tried using go channels but it seems to hang.
It isn't obvious, considering that go test ./... triggers the tests of all packages... but runs them in parallel: see "Go: how to run tests for multiple packages?".
go test -p 1 would run the tests sequentially, but not necessarily in the order you would need.
A simple script calling go test on the packages listed in the right expected order would be easier to do.
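For example, a minimal version of such a script could be a single command line (the package paths are placeholders):
go test ./pkg1 && go test ./pkg2 && go test ./pkg3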
Update 6 years later: the best practice is to not rely on test order.
So much so that issue 28592 advocates adding -shuffle and -shuffleseed to shuffle tests.
CL 310033 mentions:
This CL adds a new flag to the testing package and the go test command
which randomizes the execution order for tests and benchmarks.
This can be useful for identifying unwanted dependencies
between test or benchmark functions.
The flag is off by default.
If -shuffle is set to on then the system
clock will be used as the seed value.
If -shuffle is set to an integer N, then N will be used as the seed value.
In both cases, the seed will be reported for failed runs so that they can be reproduced later on.
Picked up for Go 1.17 (Aug. 2021) in commit cbb3f09.
See more at "Benchmarking with Go".
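For the record, with Go 1.17 or newer the flag is used like this; the second command reuses a previously reported seed to reproduce an ordering:
go test -shuffle=on ./...
go test -shuffle=1234 ./...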
I found a hack to get around this.
I named my test files as follows:
A_{test_file1}_test.go
B_{test_file2}_test.go
C_{test_file3}_test.go
The A, B, C prefixes ensure they are run in order.
I am not asking to make Golang do some sort of "eval" in the current context; I just need it to take an input string (say, received from the network) and execute it as a separate Golang program.
In theory, when you run go run testme.go, it reads the content of testme.go into a string, then parses, compiles and executes it. I wonder if it is possible to call a Go function to execute a string directly. Note that I have a requirement not to write the string to a file.
UPDATE1
I really want to find out if there is a function (in some Go package) that serves as an entry point to Go; in other words, a function I can call with a string as parameter that behaves like go run testme.go, where testme.go has the content of that string.
AFAIK it cannot be done: the Go compiler has to write intermediate files, so even if you use go run and not go build, files are created for the sake of running the code; they are just cleaned up afterwards if necessary. So you can't run a Go program without touching the disk, even if you manage to somehow make the compiler take the source not from a file.
For example, running strace on a go run invocation of a simple hello world program shows, among other things, the following lines:
mkdir("/tmp/go-build167589894", 0700)
// ....
mkdir("/tmp/go-build167589894/command-line-arguments/_obj/exe/", 0777)
// ... and at the end of things
unlink("/tmp/go-build167589894/command-line-arguments/_obj/exe/foo")
// ^^^ my program was called foo.go
// ....
// and eventually:
rmdir("/tmp/go-build167589894")
So you can see that go run does a lot of disk writing behind the scenes; it just cleans up afterwards.
I suppose you could mount some tmpfs and build in it if you wish, but otherwise I don't believe it's possible.
I know that this question is (5 years) old, but I wanted to say that it is actually possible now, for anyone looking for an up-to-date answer.
The Golang compiler is itself written in Go, so it can theoretically be embedded in a Go program. This would be quite complicated though.
As a better alternative, there are projects like yaegi, which is effectively a Go interpreter that can be embedded into Go programs.
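A minimal sketch of the yaegi approach (untested here; check the yaegi documentation for the current API), evaluating Go source held in a string without touching the disk:

package main

import (
    "fmt"

    "github.com/traefik/yaegi/interp"
    "github.com/traefik/yaegi/stdlib"
)

func main() {
    // Source received at runtime, e.g. over the network; just a literal here.
    src := `func Greet(name string) string { return "hello, " + name }`

    i := interp.New(interp.Options{})
    i.Use(stdlib.Symbols) // make the standard library available to interpreted code
    if _, err := i.Eval(src); err != nil {
        panic(err)
    }
    v, err := i.Eval(`Greet("gopher")`)
    if err != nil {
        panic(err)
    }
    fmt.Println(v) // hello, gopher
}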
I have been using CasperJS for some time and rely on console logging for debugging. I was wondering if there is any IDE which supports CasperJS step-by-step debugging, or another way (remote debugging) to step into CasperJS code? Has anybody successfully done it? Any information will be helpful.
Thanks,
When I want to debug with CasperJS, I do the following: I launch my script with SlimerJS (it opens a Firefox window, so I can easily see click problems, form-filling problems (AJAX return errors, media uploading...), and in which step the code blocks).
With that I don't often need to look at the console, and I don't call this.capture('img.jpg') several times to debug (for now I don't test responsive design, so I don't need to use capture; take a look at PhantomCSS if you test that).
I use SlimerJS to debug (always with Casper), but PhantomJS in continuous integration with Jenkins (headless), though you can run SlimerJS headless too with xvfb on Linux or Mac.
But sometimes I have to look at the console for more details, so I use these options (you can pass them on the command line too):
casper.options.verbose = true;
casper.options.logLevel = "debug";
Naming your closures will be useful with these options, because the name will be displayed.
I don't think there is an IDE: if a step fails, the stack with all the following steps stops anyway (well, it's still possible to do a sort of hack using multiple wait() calls, encapsulating them, to perform different closures and get the result of all of them even if one of them fails; but in that case we are not stacking synchronous steps, we are executing asynchronous instructions: take care about timeouts and logical flow if you try it). Careful: I said 'if a step fails'; if it's just an instruction inside a closure that fails, the following steps will of course still be executed.
So: a closure which fails -> the following steps are executed.
A step which fails (e.g. thenOpen(fssfsf) with fssfsf not defined) -> the stack will stop.
Multiple wait() calls can be done asynchronously.
So if you have a lot of bugs and execute your tests sequentially (stacking steps), you can only debug them one by one, or closure by closure for independent step functions (I think an IDE could work in this case), and that is for one file; the stacks are of course independent if you launch a whole folder. Usually, at the beginning, you launch your file each time you finish a step, and once you're used to the tool, you write the whole script at once.
In most cases, bugs are due to asynchronicity, scope, and context problems (actually JS problems, though Casper simplifies the asynchronous part: "The callback/listener stuff is an implementation of the Promise pattern."):
If you want to loop over a suite of instructions, don't forget the IIFE, or use Casper's each() function, and wrap it all in a then() statement (or use eachThen() directly). I can show some examples if needed. Otherwise it will loop with the last index 'i.length' times: there is no loop scope in JS, so by default i keeps the same reference.
When you click on a link at the end of a step, use a wait() step function after it instead of a then(). If I've understood the then() statement correctly, it is launched when the previous step is completed, so it is launched right after the click(). If this click triggers an AJAX request, and your following step scrapes or tests the result of that request, it will randomly fail, because you haven't explicitly asked to wait for the resource. I saw some issues like that in my first tests.
Don't mix the two contexts: the Casper environment and the page DOM environment. Use the evaluate() function to pass from one to the other. In the evaluate function, you can pass an argument from the Casper context to the page DOM...
...like this:
var casperContext = "phantom";
casper.evaluate(function(pageDomContext) {
console.log("will echo ->phantom<- in the page DOM environment : " + pageDomContext + ", use casper.on('remote.message') to see it in the console");
}, casperContext);
Or you can see it directly in the browser with SlimerJS, using alert() instead of console.log().
Use setFilter to handle prompt and confirm boxes.
If your website also exists in a mobile version, you can manipulate the userAgent for your mobile tests.
You can call Node modules in a CasperJS file, and in files using the tester module too. Well, that's not entirely true; see "use node module from casper". Some core Node features are implemented in Phantom (and Slimer too), like fs and child process, but they are not always well documented. So I prefer to execute my tests with Node. Node is useful to launch your tests in parallel (child processes). I suggest you execute as many processes as you have cores. Well, it depends on the type of script: with just a normal scenario (open a page and check some elements) I can execute 10 child processes in parallel without random failures (on a local computer), but with some elements which are slow to load (such as multiple SVGs, sometimes XML...), use require('os').cpus().length or a script like "Repeat a step X times". Otherwise you will have random failures, even if you increase the timeout. When it crashes, you can't do anything other than reload() the page.
You can then integrate your tests in Jenkins using the xunit command. Just specify a different index for each log.xml file; Jenkins (XUnit -> JUnit) will manage them with the pattern *.xml.
I know I didn't really answer your question, but I think that, for debugging, listing the main specific problems remains the best way.
There are still some useful functions for debugging:
var fs = require('fs');
fs.write("results.html", this.getPageContent(), 'w');
I prefer this to this.debugHTML(). I can check in my results.html file whether there are missing tags (compared to the browser, using Firebug or another tool). Or sometimes, if I need to check just one tag, outputting the result to the console isn't a problem, so: this.getHTML("my selector"); and you can still pipe the log result: casperjs test test.js > test.html
Another trick: if you execute your tests locally, sometimes the default timeout of 5 seconds isn't sufficient (network freezes).
So -> 10 seconds:
casper.options.waitTimeout = 10000;
Some differences between Phantom and Slimer:
With Slimer, if you set casper.options.pageSettings.loadImages = false; and in your file you try to scrape or test the width/height... of an element, it will work with Slimer but not with Phantom. So set it to true in that specific file to keep compatibility.
You need to specify an absolute path with Slimer (for includes, imports -> input media files, ...).
Example :
this.page.uploadFile('input[name="media"]', fs.absolute(require('system').args[4]).split(fs.separator).slice(0, -1).join(fs.separator) + '/../../../../avatar.jpg');
To include a file from the root folder (works in every subdirectory/OS, better than the previous inclusion; you could also do it Node-style using require()):
phantom.injectJs(fs.workingDirectory + '/../../../../global.js');