How to get filename of the test in mocha reporter - mocha.js

Is there a way to get the filename of the current test in a mocha reporter?
I couldn't find anything in the base reporter or the examples.

Actually, the file name is passed to the Suite in its file field in mocha, starting from this pull request. It's just that nowadays mocha is most commonly run as a karma plugin (namely, the karma-mocha plugin), and, as of December 2014, that plugin simply does not pass the file name information any further.
To make this answer self-contained, here's how a Suite is formed in mocha (this is the tdd interface, but it is similar for bdd):
context.suite = function(title, fn){
  var suite = Suite.create(suites[0], title);
  suite.file = file;
  suites.unshift(suite);
  fn.call(suite);
  suites.shift();
  return suite;
};
And here's how results (including the suite hierarchy) are built in karma-mocha/lib/adapter.js:
runner.on('test end', function(test) {
  var skipped = test.pending === true;
  var result = {
    id: '',
    description: test.title,
    suite: [],
    success: test.state === 'passed',
    skipped: skipped,
    time: skipped ? 0 : test.duration,
    log: test.$errors || []
  };
  var pointer = test.parent;
  while (!pointer.root) {
    result.suite.unshift(pointer.title);
    pointer = pointer.parent;
  }
  tc.result(result);
});
But you know what, I guess this would be a nice feature request to file against the karma-mocha project.
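If you run mocha directly (not through karma), a reporter can read that file field straight off the test's parent suite. Here's a minimal sketch, assuming a mocha version where suite.file is populated as in the interface code above; it does not help when the karma-mocha adapter sits in between:

// Minimal sketch (assumption: mocha is run directly, so suite.file is set
// by the tdd/bdd interface code shown above).
module.exports = function FileReporter(runner) {
  runner.on('test', function (test) {
    // Walk up to the nearest suite that has a file attached.
    var suite = test.parent;
    while (suite && !suite.file) {
      suite = suite.parent;
    }
    console.log('running "%s" from %s', test.title, suite ? suite.file : '(unknown file)');
  });
};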

Related

When a before hook fails, the tests within that suite are skipped; how can I reliably get the list of those skipped tests?

I tried listening to the EVENT_TEST_FAIL event on the Mocha runner and extracting the information from there:
const Mocha = require('mocha');
const {
  EVENT_TEST_FAIL,
} = Mocha.Runner.constants;

class MyReporter {
  constructor(runner) {
    runner
      .on(EVENT_TEST_FAIL, (test, err) => {
        console.log(err, test);
        console.log("failed tests", test.parent.tests.map(t => t.fullTitle()));
      });
  }
}

module.exports = MyReporter;
The only issue is that this code cannot distinguish between a failure in the before hook and a failure in a particular test case.
As a result, it reports all tests in the suite as failed, even when only one test has failed.
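One possible way around this (a sketch, not verified against every Mocha version: I'm assuming the failing hook itself is passed to the fail event and carries type === 'hook', while regular tests carry type === 'test') is to branch on that property and, for hook failures, collect the sibling tests that never got to run:

const Mocha = require('mocha');
const { EVENT_TEST_FAIL } = Mocha.Runner.constants;

class MyReporter {
  constructor(runner) {
    runner.on(EVENT_TEST_FAIL, (runnable, err) => {
      // Assumption: a failing before hook arrives here with type === 'hook'.
      if (runnable.type === 'hook') {
        // Tests that never ran have no state; treat them as skipped.
        const skipped = runnable.parent.tests
          .filter((t) => t.state === undefined)
          .map((t) => t.fullTitle());
        console.log('hook failed:', err.message);
        console.log('skipped tests:', skipped);
      } else {
        console.log('test failed:', runnable.fullTitle(), err.message);
      }
    });
  }
}

module.exports = MyReporter;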

Visual testing with Nightwatch-vrt: test passed, element is captured, but no picture is seen

I installed nightwatch-vrt locally in my project. npm showed me multiple vulnerabilities, which I ignored.
I created a nightwatch.vrt.conf.js with the following content:
const path = require('path');
const baseConfig = require('./nightwatch.conf.js');

const config = {
  ...baseConfig,
  custom_commands_path: ['node_modules/nightwatch-vrt/commands'],
  custom_assertions_path: ['node_modules/nightwatch-vrt/assertions']
};

function generateScreenshotFilePath(nightwatchClient, basePath, fileName) {
  const moduleName = nightwatchClient.currentTest.module,
    testName = nightwatchClient.currentTest.name;
  return path.join(process.cwd(), basePath, moduleName, testName, fileName);
}

config.test_settings.default.globals = {
  "visual_regression_settings": {
    "generate_screenshot_path": generateScreenshotFilePath,
    "latest_screenshots_path": "vrt/latest",
    "latest_suffix": "",
    "baseline_screenshots_path": "vrt/baseline",
    "baseline_suffix": "",
    "diff_screenshots_path": "vrt/diff",
    "diff_suffix": "",
    "threshold": 0.5,
    "prompt": false,
    "always_save_diff_screenshot": true
  }
};

module.exports = config;
My test (simple, just to see if it works) looks like:
module.exports = {
  tags: ['x'],
  'visual testing': function(browser) {
    browser
      .url('https://www.kraeuter-und-duftpflanzen.de')
      .maximizeWindow()
      .assert.visible('.header-main')
      .pause(1000)
      .assert.screenshotIdenticalToBaseline('.header-main')
      //.saveScreenshot('./tests_output/image.png')
      .end();
  }
};
Now the test passes, no assertions fail, a folder is created and a correctly named file is placed there, but I can only see a field with a checkerboard pattern (like the transparent background in vector graphics) the size of the captured element.
Before the test report, messages like this are shown:
[32644:26476:0414/082519.134:ERROR:device_event_log_impl.cc(214)] [08:25:19.134]
USB: usb_device_handle_win.cc:1049 Failed to read descriptor from node connection:
Ein an das System angeschlossenes Gerät funktioniert nicht. (0x1F)
(i.e. "A device attached to the system is not functioning.")
If I let Nightwatch take a screenshot itself, it is displayed correctly.
Does anyone know where the mistake is?
It seems that this package is broken and has not been updated for a very long time. I advise you to switch to this one: https://www.npmjs.com/package/@bbc/nightwatch-vrt
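If you do switch to that fork, presumably the only change needed in the config above is the command/assertion paths; a rough sketch (I'm assuming the fork keeps the same commands/assertions layout under its scoped package name, so check its README):

// Hedged sketch: paths assume the @bbc fork mirrors the original layout.
const config = {
  ...baseConfig,
  custom_commands_path: ['node_modules/@bbc/nightwatch-vrt/commands'],
  custom_assertions_path: ['node_modules/@bbc/nightwatch-vrt/assertions']
};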

Removing console.log from React Native app

Should you remove the console.log() calls before deploying a React Native app to the stores? Are there some performance or other issues that exist if the console.log() calls are kept in the code?
Is there a way to remove the logs with some task runner (in a similar fashion to web-related task runners like Grunt or Gulp)? We still want them during our development/debugging/testing phase but not on production.
Well, you can always do something like:
if (!__DEV__) {
  console.log = () => {};
}
So every console.log becomes a no-op as soon as __DEV__ is not true.
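If you also want to silence the other console methods, the same trick applies; a small sketch (which methods you stub out is up to you):

if (!__DEV__) {
  // Turn the common console methods into no-ops outside development.
  console.log = () => {};
  console.warn = () => {};
  console.error = () => {};
}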
The Babel transpiler can remove console statements for you with the following plugin:
npm i babel-plugin-transform-remove-console --save-dev
Edit .babelrc:
{
  "env": {
    "production": {
      "plugins": ["transform-remove-console"]
    }
  }
}
And console statements are stripped out of your code.
source: https://hashnode.com/post/remove-consolelog-statements-in-production-in-react-react-native-apps-cj2rx8yj7003s2253er5a9ovw
I believe the best practice is to wrap your debug code in statements such as...
if (__DEV__) {
  console.log();
}
This way, it only runs when you're running within the packager or emulator. More info here...
https://facebook.github.io/react-native/docs/performance#using-consolelog-statements
I know this question has already been answered, but just wanted to add my own two-bits. Returning null instead of {} is marginally faster since we don't need to create and allocate an empty object in memory.
if (!__DEV__) {
  console.log = () => null;
}
This is obviously extremely minimal, but you can see the results below:
// return empty object
console.log = () => {};
console.time();
for (var i = 0; i < 1000000; i++) console.log();
console.timeEnd();

// returning null
console.log = () => null;
console.time();
for (var i = 0; i < 1000000; i++) console.log();
console.timeEnd();
The difference is more pronounced when tested in other environments.
Honestly, in the real world this will probably have no significant benefit; I just thought I would share.
I tried it using babel-plugin-transform-remove-console, but the above solutions didn't work for me.
If someone is also trying to do it with babel-plugin-transform-remove-console, they can use this one.
npm i babel-plugin-transform-remove-console --save-dev
Edit babel.config.js
module.exports = (api) => {
  const babelEnv = api.env();
  const plugins = [];
  if (babelEnv !== 'development') {
    plugins.push(['transform-remove-console']);
  }
  return {
    presets: ['module:metro-react-native-babel-preset'],
    plugins,
  };
};
I have found the following to be a good option, as there is no need to log even if __DEV__ === true when you are not also remote debugging.
In fact, I have found certain versions of RN/JavaScriptCore/etc. to come to a near halt when logging (even just strings), which is not the case with Chrome's V8 engine.
// only true if remote debugging
const debuggingIsEnabled = (typeof atob !== 'undefined');
if (!debuggingIsEnabled) {
  console.log = () => {};
}
Check if remote JS debugging is enabled
Using Sentry for tracking exceptions automatically disables console.log in production, but it also uses it for tracking logs from the device, so you can see the latest logs in the Sentry exception details (breadcrumbs).
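For reference, a minimal setup sketch with the @sentry/react-native package (the DSN value is a placeholder; whether console logs end up as breadcrumbs depends on your Sentry configuration):

// Minimal sketch, assuming the @sentry/react-native package.
import * as Sentry from '@sentry/react-native';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
});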

Testing Yeoman generator with bowerInstall and/or npmInstall

I have a Yeoman generator that uses this.bowerInstall().
When I test it, it tries to install all the bower dependencies that I initialized this way. Is there a way to mock this function?
The same goes for the this.npmInstall() function.
I eventually went with a different approach. The method from drorb's answer works if you are bootstrapping the test generators manually. If you use the RunContext-based setup (as described on the Yeoman testing page: http://yeoman.io/authoring/testing.html), the before block of the test looks something like this:
before(function (done) {
  helpers.run(path.join(__dirname, '../app'))
    .inDir(path.join(__dirname, './tmp'))  // Clear the directory and set it as the CWD
    .withOptions({ foo: 'bar' })           // Mock options passed in
    .withArguments(['name-x'])             // Mock the arguments
    .withPrompt({ coffee: false })         // Mock the prompt answers
    .on('ready', function (generator) {
      // this is called right before `generator.run()`
    })
    .on('end', done);
});
You can add mock functions to the generator in the 'ready' callback, like so:
.on('ready', function (generator) {
  generator.bowerInstall = function (args) {
    // Do something when generator runs bower install
  };
})
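The same approach works for npmInstall; a sketch along the same lines:

.on('ready', function (generator) {
  generator.npmInstall = function () {
    // Do nothing instead of running the real npm install
  };
})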
The other way is to include an option in the generator itself, such as:
installAngular: function () {
  if (!this.options['skip-install']) {
    this.bowerInstall('angular', {
      'save': true
    });
  }
},

finalInstall: function () {
  this.installDependencies({
    skipInstall: this.options['skip-install']
  });
}
Now, since you run the test with the 'skip-install' option, the dependencies are not installed. This has the added advantage of ensuring the command-line skip-install argument works as expected. In the alternate case, even if you run the generator with the skip-install argument, the bowerInstall and npmInstall calls in your generator are still executed, even though the installDependencies function is not (as it is usually configured as above).
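On the test side, that option can be passed through the same RunContext helpers shown earlier, for example:

// Sketch: pass skip-install from the test so the generator's own guard
// prevents the real bower/npm installs.
helpers.run(path.join(__dirname, '../app'))
  .withOptions({ 'skip-install': true })
  .on('end', done);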
Take a look at the tests for the Bootstrap generator; they contain an example of mocking the bowerInstall() function:
beforeEach(function () {
  this.bowerInstallCalls = [];
  // Mock bower install and track the function calls.
  this.app.bowerInstall = function () {
    this.bowerInstallCalls.push(arguments);
  }.bind(this);
});
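A sketch of how those recorded calls might then be checked in a test (assert is Node's built-in module; the expected package name and the step that actually runs this.app are assumptions, since they depend on how the generator was created):

var assert = require('assert');

it('calls bowerInstall for the expected package', function () {
  // Assumes this.app has already been run in a prior step.
  assert.ok(this.bowerInstallCalls.length > 0);
  assert.equal(this.bowerInstallCalls[0][0], 'angular'); // hypothetical package name
});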

JSCover with PhantomJS - TypeError: 'null' is not an object

When I try to run JSCover with PhantomJS, I see the error below.
Steps followed:
1) Run the JSCover Server:
java -jar ~/JSCover/target/dist/JSCover-all.jar -ws --report-dir=report
2) Run the PhantomJS runner with JSCover:
phantomjs --debug=true ~/JSCover/src/test/javascript/lib/PhantomJS/run-jscover-jasmine.js \
  localhost:8080/<app>/module/framework/test/SpecRunner.html

TypeError: 'null' is not an object (evaluating 'document.body.querySelector('.description').innerText')
phantomjs://webpage.evaluate():3
phantomjs://webpage.evaluate():22
phantomjs://webpage.evaluate():22
2013-09-19T16:36:07 [DEBUG] WebPage - evaluateJavaScript result QVariant(, )
2013-09-19T16:36:07 [DEBUG] WebPage - evaluateJavaScript "(function() { return (function () {
jscoverage_report('phantom');
})(); })()"
2013-09-19T16:36:07 [DEBUG] WebPage - evaluateJavaScript result QVariant(, )
2013-09-19T16:36:07 [DEBUG] Network - Resource request error: 5 ( "Operation canceled" ) URL: localhost:8080/<app_home>/lib/backbone/1.0.0/backbone.js?cb=0.5381254460662603
This is an issue that I ran into yesterday. It turns out that the example script does not work with newer Jasmine versions, so I built a new PhantomJS script that works with Jasmine 2.x, which fixes it. You can find the working script in my repository:
https://github.com/tkaplan/PhantomJS-Jasmine
I faced the same issue when trying to run Jasmine with PhantomJS.
I realized that the latest version of jasmine-html.js (jasmine-2.0.0-rc2) does not go along with PhantomJS's run-jasmine.js (phantomjs-1.9.2-windows).
In the jasmine-2.0.0-rc2 version of jasmine-html.js, the '.description' element is not available if all tests pass; it is created only if a test fails.
Thus, when I run phantomjs with all tests passing, I get the above error message.
I modified run-jasmine.js to adapt it to jasmine-html.js (jasmine-2.0.0-rc2) and resolve this issue.
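For completeness, the kind of guard that avoids the TypeError looks roughly like this (a sketch: only read '.description' when the node exists, since it is absent when every spec passes):

var descriptionEl = document.body.querySelector('.description');
if (descriptionEl) {
  console.log(descriptionEl.innerText);
} else {
  console.log('All specs passed (no .description element rendered).');
}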
Are you loading your tests asynchronously? I use RequireJS for modular JavaScript. It is also used to load the test specs:
<script data-main='SpecRunner' src='/test/scripts/libs/require.js'></script>
When using JSCover, the run-jscover-jasmine.js script does not account for this async behaviour, so the DOM nodes referenced in the query do not exist (yet). I modified the script to delay the waitFor call by 1 second:
page.open(system.args[1], function (status) {
  if (status !== "success") {
    console.log("Unable to access network");
    phantom.exit();
  } else {
    // Added 1s delay here
    window.setTimeout(function () {
      waitFor(function () {
        return page.evaluate(function () {
          return document.body.querySelector('.symbolSummary .pending') === null;
        });
      }, function () {
        var exitCode = page.evaluate(function () {
          console.log('');
          console.log(document.body.querySelector('.description').innerText);
          var list = document.body.querySelectorAll('.results > #details > .specDetail.failed');
          if (list && list.length > 0) {
            console.log('');
            console.log(list.length + ' test(s) FAILED:');
            for (var i = 0; i < list.length; ++i) {
              var el = list[i],
                  desc = el.querySelector('.description'),
                  msg = el.querySelector('.resultMessage.fail');
              console.log('');
              console.log(desc.innerText);
              console.log(msg.innerText);
              console.log('');
            }
            return 1;
          } else {
            console.log(document.body.querySelector('.alert > .passingAlert.bar').innerText);
            return 0;
          }
        });
        page.evaluate(function () {
          jscoverage_report('phantom');
        });
        phantom.exit(exitCode);
      });
    }, 1000);
  }
});
Depending on the amount of code loaded, you may have to increase the delay.
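If you need to tune it per run, one option (a sketch; the extra argument position is an assumption) is to read the delay from the command line instead of hard-coding it:

// Sketch: take the delay from an optional third CLI argument, defaulting to 1000 ms.
var delayMs = parseInt(system.args[2], 10) || 1000;
window.setTimeout(function () {
  // ... waitFor(...) block as above ...
}, delayMs);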
