I use Go's native test facility (go test) to write tests, but when a test fails because of a bug in the test code itself, I can't easily debug it: there is no stack trace or other contextual information.
On top of that, the test code needs the contextual object t (*testing.T), so it is not simple to run the test code as a normal program.
What is the best practice to debug test code?
You can log the stack trace this way:
t.Log(string(debug.Stack()))
Documentation is here https://golang.org/pkg/runtime/debug/#Stack
It is better than PrintStack because the output goes through the test's own logging (t.Log) rather than straight to stderr, so it doesn't interfere with the regular test logs.
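For context, here is how that call might sit inside a test. This is only a minimal sketch; doSomething and the expected value 42 are made up:

package mypkg

import (
	"runtime/debug"
	"testing"
)

// doSomething stands in for the code under test.
func doSomething() int { return 41 }

func TestDoSomething(t *testing.T) {
	got := doSomething()
	if got != 42 {
		// Log the current goroutine's stack trace via the test's own logger;
		// it is shown only when the test fails or when running go test -v.
		t.Log(string(debug.Stack()))
		t.Fatalf("doSomething() = %d, want 42", got)
	}
}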
You can use t.Log() to log information about the test case -- Go will show that output if the test case fails or if you run go test -v.
You can also assert certain state within the test using panics -- if a test panics, you will see the stack trace in your console.
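As a sketch (the fixture type and newFixture helper here are made up), combining t.Log for context with a panic that surfaces a stack trace when the test's own setup is broken:

package mypkg

import "testing"

// fixture and newFixture are made-up stand-ins for the test's real setup code.
type fixture struct {
	items []string
}

func newFixture() *fixture {
	return &fixture{items: []string{"a", "b"}}
}

func TestFixtureSetup(t *testing.T) {
	f := newFixture()
	if f == nil {
		// A panic prints a full stack trace in the test output,
		// pointing straight at the broken test code.
		panic("newFixture returned nil")
	}
	t.Log("fixture items:", f.items) // shown if the test fails or with go test -v

	if len(f.items) != 2 {
		t.Errorf("got %d items, want 2", len(f.items))
	}
}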
I don't know if you'd want to check in code with this in it, but for one-off debugging, PrintStack might help. http://golang.org/pkg/runtime/debug/#PrintStack
When developing an Ansible Collection, is it possible to get arbitrary logs written to a log file or the console?
By that I mean something like a random print() statement to help with debugging, or is the only way to concatenate everything into your final return message?
Thank you
Question:
Is it possible to get arbitrary logs written to a log file/console?
Answer:
Your question looks to me similar to Is it possible to print out debugging logs while task is running in Ansible?.
According to the answer there, citing from the documentation:
Ansible executes each module, usually on the remote managed node, and collects return values. ...
Ansible modules normally return a data structure that can be registered into a variable ...
so such live output is not implemented. There is an option for Debugging modules during development:
To see what is actually happening in the module
but that might not fit all of your cases.
Question:
This being a random print() statement to help debugging
Answer:
According to the Developer Guide » Debugging modules » Simple debugging:
Since print() statements do not work inside modules, raising an exception is a good approach if you just want to see some specific data. Put raise Exception(some_value) somewhere in the module and run it normally. Ansible will handle this exception, pass the message back to the control node, and display it.
I have a requirement to load test a web application using LoadRunner (Community Edition 12.53). Currently I have my test scripts recorded with LoadRunner's default script recorder. I'm assuming that the operations I chose to perform in the SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario?
When I ran my test scripts, I couldn't see any values updated in the application DB.
My test scripts are written in C, and manual correlation is applied using the web_reg_save_param method.
What might be the things that could go wrong in such a scenario? Any help would be deeply appreciated.
the operations I chose to perform in the SUT should actually update the application backend/DB when I'm executing the test scripts. Is this the correct behavior of a load testing scenario? - Yes, this is the correct behaviour.
When I ran my test scripts, I couldn't see any values updated in the application DB. - You are probably missing something in the correlations. This generally happens when some variable is not correlated properly or gets missed, or when something like a timestamp that you might think is irrelevant actually needs to be taken care of.
I've been asking myself whether there is an easy way of debugging the JavaScript code of the transactions. JS already has mature debuggers; it is only a question of how to easily attach one to the code running in the container. Does anyone have a clue? -- Thx.
One of the easiest ways to debug your transaction code is to deploy your business network to the embedded fabric, which basically means that your code runs like any other NodeJS app, so you can use the node debugger to step through your code, or even simple console.log statements if that suffices.
To get an insight into how to achieve this, have a look at the code here: https://github.com/hyperledger/composer-sample-networks/blob/master/packages/carauction-network/test/CarAuction.js#L31-L49
This is the beforeEach method of a unit test for the sample network and as you'll see, it deploys the network to the 'embedded' fabric.
The code then goes on to perform tests that include calling the submitTransaction API on the embedded businessNetworkConnection, which causes the transaction script code to be eval'ed by the embedded fabric.
So it's all happening within a single Node app and is much easier to debug.
HTH
I'm writing Protractor e2e tests and use browser.pause() to enter the debugger. I appreciate the interactive mode, which is helpful when developing a new test.
However, when I spend too much time in the debugger, the test gets interrupted because the timeout is exceeded:
Error: timeout of 240000ms exceeded
I can easily fix that by increasing mochaOpts.timeout in my Protractor configuration, but I don't like changing it back and forth depending on whether I'm debugging or not.
Is there a better way?
In case anyone reading this was hoping for the equivalent when timing out with Jasmine:
You can put this within your individual spec files:
jasmine.DEFAULT_TIMEOUT_INTERVAL = 120000; // whatever time you need
I've found the answer here: https://stackoverflow.com/a/23492442/4358405
Adding this.timeout(10000000); in the test does the trick.
In our development environment, we run a Continuous Integration service (TeamCity) which responds to code checkins by running build/test jobs and reporting the results. While the job is in progress, we can easily see how many unit tests have executed so far, how many have failed, etc.
My automated testing team is delivering UI tests developed in Rational Functional Tester. Extracting those tests from the source control system, compiling them, and executing them from the command line all seem to be pretty straight forward exercises.
What I haven't been able to find is a way to report the test results automatically - there don't appear to be any hooks for listeners, for example, or any way to customize the messages that are emitted.
From my research thus far, I've come to the conclusion that my only option is to (a) wait until the tests finish, then (b) parse the HTML report that RFT generates.
Does anybody have a better answer than that?
Here is the workaround I've used for a similar purpose:
1. Write a helper superclass that overrides the onTerminate callback method, and implement your log-parsing logic there.
2. Change the helper superclass of your test scripts to the helper superclass created in step 1.
3. Use the RFT CLI to invoke your scripts from your Continuous Integration code.
Expanding on @eric2323223's answer: in your onTerminate override, you can use TeamCity's build script interaction functionality to have your RFT pass/fail status rolled up to TeamCity. You just need these TeamCity-specific service messages emitted on the command line, so that TeamCity picks them up.
##teamcity[testStarted name='test1']
##teamcity[testFailed name='test1' message='failure message' details='message and stack trace']
##teamcity[testFinished name='test1']
##teamcity[testStarted name='test2']
##teamcity[testFailed type='comparisonFailure' name='test2' message='failure message' details='message and stack trace' expected='expected value' actual='actual value']
##teamcity[testFinished name='test2']
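These service messages are plain text on standard output, so any process in the build can emit them; they are not specific to RFT or Java. As a language-agnostic illustration only, here is a minimal Go sketch that escapes attribute values per TeamCity's service-message rules and prints the started/failed/finished triple for one test:

package main

import (
	"fmt"
	"strings"
)

// escape applies TeamCity service-message escaping to attribute values.
func escape(s string) string {
	return strings.NewReplacer(
		"|", "||",
		"'", "|'",
		"\n", "|n",
		"\r", "|r",
		"[", "|[",
		"]", "|]",
	).Replace(s)
}

// reportFailedTest emits the started/failed/finished triple for a single test.
func reportFailedTest(name, message, details string) {
	fmt.Printf("##teamcity[testStarted name='%s']\n", escape(name))
	fmt.Printf("##teamcity[testFailed name='%s' message='%s' details='%s']\n",
		escape(name), escape(message), escape(details))
	fmt.Printf("##teamcity[testFinished name='%s']\n", escape(name))
}

func main() {
	reportFailedTest("test1", "failure message", "message and stack trace")
}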