I am currently developing a minigame in the Phaser.js framework, and since the scope of the project is quite large, I really want to work with unit tests.
However, when trying to set up unit tests in Jasmine for Phaser, I run into errors regarding dependencies.
I am not experienced with Jasmine or any other testing framework, so it could be that I am overlooking something obvious to an experienced developer.
My .spec file looks like this:
describe("motorMain", function() {
var Phaser = require('../phaser');
var MotorMain = require("../motorMain");
var motorMain;
var phaser;
beforeEach(function() {
phaser = new Phaser();
motorMain = new motorMain();
});
it("should increase the score if a object is clicked", function(){
var scoreBeforeClicking = motorMain.score;
var gameobject;
motorMain.clickhandler("",gameobject);
expect(scoreBeforeClicking+1).toEqual(score);
})
});
But since Phaser relies on running in a browser, when I run this it complains about not being able to access elements such as window and document inside Phaser.
I get errors such as:
ReferenceError: document is not defined
Does anyone have experience with testing Phaser games? I can't seem to find any information on it online. Is it even possible to test Phaser games?
As for other testing frameworks, I also looked into Nightwatch, but its Phaser support is outdated, and it is mainly for e2e rather than unit testing, so it isn't what I am looking for. I also saw online that someone developed a shimmed version of Phaser 2.4.7, but this is outdated now that so much has changed in Phaser 3.
Ok, I've got something working!
This is the bare minimum, but Phaser can be instantiated without errors.
package.json
{
  "scripts": {
    "test": "jasmine --config=jasmine.json"
  },
  "devDependencies": {
    "canvas": "^2.8.0",
    "jasmine": "^3.7.0",
    "jsdom": "^16.6.0",
    "jsdom-global": "^3.0.2"
  },
  "dependencies": {
    "phaser": "^3.55.2"
  }
}
jasmine.json
{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ]
}
spec/Phaser.Spec.js
require('canvas');          // native canvas implementation for jsdom
require('jsdom-global')();  // registers window, document, etc. as globals before Phaser loads
const Phaser = require("phaser");

describe("A suite", function () {
  it("Instantiate a Phaser.Game", function () {
    let game = new Phaser.Game({ type: Phaser.HEADLESS });
    expect(game).not.toBe(null);
  });
});
The output in console is:
$ jasmine --config=jasmine.json
Randomized with seed 06585
Started
Phaser v3.55.2-FB (Headless | HTML5 Audio) https://phaser.io
.
1 spec, 0 failures
Finished in 0.016 seconds
Randomized with seed 06585 (jasmine --random=true --seed=06585)
✨ Done in 1.28s.
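With that shim in place, the motorMain spec from the question could be adapted along these lines. MotorMain's path and API are taken from the question, so treat this as a sketch rather than a verified test:

require('canvas');
require('jsdom-global')();
const Phaser = require('phaser');
const MotorMain = require('../motorMain'); // export shape assumed from the question

describe('motorMain', function () {
  let motorMain;

  beforeEach(function () {
    motorMain = new MotorMain();
  });

  it('should increase the score if an object is clicked', function () {
    const scoreBeforeClicking = motorMain.score;
    motorMain.clickhandler('', {}); // gameobject stubbed with an empty object
    expect(motorMain.score).toEqual(scoreBeforeClicking + 1);
  });
});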
I'm currently trying to get code coverage on my Fastify routes using Mocha and NYC.
I've tried instrumenting the code beforehand and then running the tests on the instrumented code, as well as just trying to set up NYC in various ways to get it to work.
Here is my current configuration (all previous ones produced the same code coverage output):
nyc config
"nyc": {
"extends": "#istanbuljs/nyc-config-typescript",
"extension": [
".ts",
".tsx"
],
"exclude": [
"**/*.d.ts",
"**/*.test.ts"
],
"reporter": [
"html",
"text"
],
"sourceMap": true,
"instrument": true
}
Route file:
import { FastifyInstance, FastifyRequest, FastifyReply } from 'fastify';
import { ServerResponse } from 'http';

const routes = async (app: FastifyInstance, options) => {
  app.post('/code', async (request: FastifyRequest, response: FastifyReply<ServerResponse>) => {
    // route logic in here
  });
};

export default routes;
The integration test:
import * as fastify from 'fastify';
import * as sinon from 'sinon';
import * as chai from 'chai';

const expect = chai.expect;
const sinonChai = require('sinon-chai');
chai.use(sinonChai);

describe('When /code POST is called', () => {
  let app;

  before(() => {
    app = fastify();
    // load routes for integration testing
    app.register(require('../path/to/code.ts'));
  });

  after(() => {
    app.close();
  });

  it('then a code is created and returned', async () => {
    const { statusCode } = await app.inject({
      url: '/code',
      method: 'POST',
      payload: { code: 'fake_code' }
    });
    expect(statusCode).to.equal(201);
  });
});
My unit test call looks like the following:
nyc mocha './test/unit/**/*.test.ts' --require ts-node/register --require source-map-support/register --recursive
I literally get 5% code coverage, just for the const routes = line. I'm really banging my head trying to figure this one out. Any help would be greatly appreciated! None of the other solutions I have investigated on here work.
I have a detailed example for TypeScript + Mocha + NYC. It also contains Fastify tests, including route tests (inject) as well as mock, stub and spy tests using Sinon, all async/await.
It uses all modern versions and also covers unused files, as well as VS Code launch configs. Feel free to check it out here:
https://github.com/Flowkap/typescript-node-template
Specifically, I think that
instrument: true
messes up the results. Here's my working .nycrc.yml:
extends: "#istanbuljs/nyc-config-typescript"
reporter:
- html
- lcovonly
- clover
# those 2 are for commandline outputs
- text
- text-summary
report-dir: coverage
I have proper coverage even for the mocked and stubbed parts of Fastify in my above-mentioned example.
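If you also want coverage to include files that no test touches (the "covers unused files" part mentioned above), nyc's all flag does that; the include glob below is an assumption about your source layout:

all: true
include:
  - src/**/*.ts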
I'm using Nightwatch JS to run my e2e tests with the Mocha runner.
I want to integrate an HTML reporter with the suite.
I'm trying to use the nightwatch-html-reporter package, but as far as I understand there is a problem with the CLI commands (the Nightwatch docs state that --reporter will not work when using Mocha).
I also copied the code sample from nightwatch-html-reporter to my globals.js, but it doesn't seem to work either.
The tests run but there is no output anywhere.
Here is my folder structure:
project
  src
    spec
      e2e
        globals
          globals.js
        tests
          smoke
            testFile.js
  nightwatch.conf.js
Here is my conf file:
const seleniumServer = require('selenium-server-standalone-jar');
const chromeDriver = require('chromedriver');

module.exports = {
  src_folders: ['src/spec/e2e/tests'],
  output_folder: 'report',
  page_objects_path: [
    'src/spec/e2e/pageObjects'
  ],
  globals_path: 'src/spec/e2e/globals/globals.js',
  custom_commands_path: 'src/spec/e2e/customCommands',
  selenium: {
    start_process: true,
    server_path: seleniumServer.path,
    host: '127.0.0.1',
    port: 4444,
    cli_args: {
      'webdriver.chrome.driver': chromeDriver.path
    }
  },
  test_runner: {
    type: 'mocha',
    options: {
      ui: 'bdd',
      reporter: 'list'
    }
  },
  test_settings: {
    default: {
      launch_url: 'http://URL',
      silent: true,
      desiredCapabilities: {
        browserName: 'chrome',
        javascriptEnabled: true,
        acceptSslCerts: true,
        chromeOptions: {
          args: [
            '--no-sandbox',
            'start-fullscreen'
          ]
        }
      }
    }
  }
};
And here is my globals.js file:
var HtmlReporter = require('nightwatch-html-reporter');
var reporter = new HtmlReporter({
  openBrowser: true,
  reportsDirectory: __dirname + '/reports'
});

module.exports = {
  reporter: reporter.fn
};
I don't think it will work with nightwatch-html-reporter as it is probably not a mocha reporter (but correct me if I'm wrong).
You want to use built-in or custom Mocha reporters when using Nightwatch with Mocha.
You can use a custom mocha html reporter like mochawesome but you'll have to hack around a bit and I offer no guarantees as I only tested those hacks lightly.
Here are the instructions to use mochawesome (tested with "mocha": "^5.2.0", "mochawesome": "^3.1.1", "nightwatch": "^0.9.21"):
npm install mochawesome
Modify the mochawesome *.js files under node_modules\mochawesome to require mocha-nightwatch instead of mocha. (See instructions/explanations towards the end of the answer.)
Presuming you're using a nightwatch.conf.js, configure your test runner to the equivalent of
test_runner: {
  type: "mocha",
  options: {
    ui: "bdd",
    // note that you can pass a custom reporter constructor function here, not just reporter names
    reporter: require("mochawesome")
  }
}
Run the tests and observe that you still see the default console reporter (spec), but that at the end of the run you also see an output like:
[mochawesome] Report HTML saved to C:\projects\myWebApp\mochawesome-report\mochawesome.html
Open the HTML report.
This solution is hackish and fragile because Nightwatch comes with its own version of Mocha.
When you install Nightwatch you will see a mocha-nightwatch folder in your node_modules. This is the Mocha that Nightwatch uses.
However, mochawesome doesn't use mocha-nightwatch. If you look at node_modules\mochawesome\dist\mochawesome.js you will see lines of code like:
var Base = require('mocha/lib/reporters/base');
var Spec = require('mocha/lib/reporters/spec');
This means it requires mocha, not mocha-nightwatch.
Those lines should ideally be: require('mocha-nightwatch/...).
So please change them in all the *.js files that need fixing.
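Concretely, after the edit, the two lines quoted above would read:

var Base = require('mocha-nightwatch/lib/reporters/base');
var Spec = require('mocha-nightwatch/lib/reporters/spec');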
You could also fork mochawesome and make them like that ;)
Debugging notes:
Try putting some additional console.logs in node_modules\mocha-nightwatch\lib\mocha.js, in the Mocha.prototype.reporter function. That's how I figured out what was going on.
If you use Mocha you can always go with mochawesome: https://www.npmjs.com/package/mochawesome
I haven't tried it myself but it looks pretty neat.
Should you remove the console.log() calls before deploying a React Native app to the stores? Are there some performance or other issues that exist if the console.log() calls are kept in the code?
Is there a way to remove the logs with some task runner (in a similar fashion to web-related task runners like Grunt or Gulp)? We still want them during our development/debugging/testing phase but not on production.
Well, you can always do something like:
if (!__DEV__) {
  console.log = () => {};
}

So every console.log becomes a no-op as soon as __DEV__ is not true.
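A common place for this override is the very top of the app's entry file, before anything else gets a chance to log (index.js here is an assumption; adjust to your project):

// index.js -- disable logging outside development builds
if (!__DEV__) {
  console.log = () => {};
}

The other console methods (console.warn, console.info) can be silenced the same way if needed.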
Babel transpiler can remove console statements for you with the following plugin:
npm i babel-plugin-transform-remove-console --save-dev
Edit .babelrc:
{
  "env": {
    "production": {
      "plugins": ["transform-remove-console"]
    }
  }
}
And console statements are stripped out of your code.
source: https://hashnode.com/post/remove-consolelog-statements-in-production-in-react-react-native-apps-cj2rx8yj7003s2253er5a9ovw
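If you want to keep errors and warnings visible in production, the plugin's README documents an exclude option:

{
  "env": {
    "production": {
      "plugins": [["transform-remove-console", { "exclude": ["error", "warn"] }]]
    }
  }
}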
I believe best practice is to wrap your debug code in statements such as:

if (__DEV__) {
  console.log();
}
This way, it only runs when you're running within the packager or emulator. More info here...
https://facebook.github.io/react-native/docs/performance#using-consolelog-statements
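If the inline guard gets repetitive, one option is to route logging through a tiny wrapper; debugLog is a hypothetical helper name:

// logger.js -- hypothetical helper; __DEV__ is provided by React Native
export function debugLog(...args) {
  if (__DEV__) {
    console.log(...args);
  }
}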
I know this question has already been answered, but just wanted to add my own two bits. Returning null instead of {} is marginally faster, since we don't need to create and allocate an empty object in memory.
if (!__DEV__) {
  console.log = () => null;
}
This is obviously extremely minimal, but you can see the results below:
// returning an empty object
console.log = () => {};
console.time();
for (var i = 0; i < 1000000; i++) console.log();
console.timeEnd();

// returning null
console.log = () => null;
console.time();
for (var i = 0; i < 1000000; i++) console.log();
console.timeEnd();
The difference is more pronounced when tested in some environments than in others.
Honestly, in the real world this probably will have no significant benefit; just thought I would share.
I tried it using babel-plugin-transform-remove-console, but the above solutions didn't work for me.
If you're also trying to do it using babel-plugin-transform-remove-console, you can use this one.
npm i babel-plugin-transform-remove-console --save-dev
Edit babel.config.js
module.exports = (api) => {
  const babelEnv = api.env();
  const plugins = [];

  if (babelEnv !== 'development') {
    plugins.push(['transform-remove-console']);
  }

  return {
    presets: ['module:metro-react-native-babel-preset'],
    plugins,
  };
};
I have found the following to be a good option, as there is no need to log even when __DEV__ === true if you are not also debugging remotely.
In fact, I have found certain versions of RN/JavaScriptCore/etc. to come to a near halt when logging (even just strings), which is not the case with Chrome's V8 engine.
// only true if remote debugging is enabled
const debuggingIsEnabled = (typeof atob !== 'undefined');

if (!debuggingIsEnabled) {
  console.log = () => {};
}
Check if remote JS debugging is enabled
Using Sentry for tracking exceptions automatically disables console.log in production, but it also uses it for tracking logs from the device, so you can see the latest logs in the Sentry exception details (breadcrumbs).
I have a Yeoman generator that uses this.bowerInstall()
When I test it, it tries to install all the bower dependencies that I initialized this way. Is there a way to mock this function?
The same goes for the this.npmInstall() function.
I eventually went with a different approach. The method from drorb's answer works if you are bootstrapping the test generators manually. If you use the RunContext-based setup (as described on the Yeoman testing page, http://yeoman.io/authoring/testing.html), the before block of the test looks something like this:
before(function (done) {
  helpers.run(path.join(__dirname, '../app'))
    .inDir(path.join(__dirname, './tmp'))  // Clear the directory and set it as the CWD
    .withOptions({ foo: 'bar' })           // Mock options passed in
    .withArguments(['name-x'])             // Mock the arguments
    .withPrompt({ coffee: false })         // Mock the prompt answers
    .on('ready', function (generator) {
      // this is called right before `generator.run()`
    })
    .on('end', done);
});
You can add mock functions to the generator in the 'ready' callback, like so:
.on('ready', function (generator) {
  generator.bowerInstall = function (args) {
    // Do something when the generator runs bower install
  };
})
The other way is to include an option in the generator itself, such as:
installAngular: function () {
  if (!this.options['skip-install']) {
    this.bowerInstall('angular', {
      'save': true
    });
  }
},

finalInstall: function () {
  this.installDependencies({
    skipInstall: this.options['skip-install']
  });
}
Now, since you run the test with the 'skip-install' option, the dependencies are not installed. This has the added advantage of ensuring the command-line skip-install argument works as expected. In the alternate case, even if you run the generator with the skip-install argument, the bowerInstall and npmInstall calls in your generator are still executed, even though the installDependencies function is not (as it is usually configured as above).
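Combined with the RunContext setup from the earlier answer, the test then just passes the option through (paths as in that snippet):

helpers.run(path.join(__dirname, '../app'))
  .withOptions({ 'skip-install': true }) // install steps become no-ops
  .on('end', done);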
Take a look at the tests for the Bootstrap generator, it contains an example of mocking the bowerInstall() function:
beforeEach(function (done) {
  this.bowerInstallCalls = [];
  // Mock bower install and track the function calls.
  this.app.bowerInstall = function () {
    this.bowerInstallCalls.push(arguments);
  }.bind(this);
}.bind(this));
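With that mock in place, a spec can then assert against the recorded calls, along these lines (the expected package name is illustrative):

var assert = require('assert');

it('runs bower install for angular', function () {
  // assumes the generator has already run, as in the Bootstrap generator tests
  var calledWithAngular = this.bowerInstallCalls.some(function (args) {
    return args[0] === 'angular';
  });
  assert.ok(calledWithAngular);
});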
I have installed the JetBrains Chrome extension.
I have tests like this:
describe('Service tests', function () {
  beforeEach(module('app'));

  it('should have a Service', inject(function ($injector) {
    var exist = $injector.has('dataService');
    // etc.

However, I have no luck getting breakpoints to hit anywhere in the tests. I can get the debugger to break by writing a debugger statement, but I am unable to step through.
Do you have karma-coverage set up in your karma config? It uses instrumented code, so debugging is not possible. Related tickets: http://github.com/karma-runner/karma/issues/630, http://youtrack.jetbrains.com/issue/WEB-8443
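If that is the cause, a quick check is to strip the coverage bits from karma.conf.js while debugging; the globs here are assumptions about your layout:

// karma.conf.js -- sketch: without the coverage preprocessor the served code is un-instrumented
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['src/**/*.js', 'test/**/*.spec.js'],
    preprocessors: {
      // 'src/**/*.js': ['coverage']   // disabled while debugging
    },
    reporters: ['progress']            // 'coverage' removed here as well
  });
};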
If you are building with Webpack, you might need to specify the devtool option in the webpack config property in karma.conf.js, like this:
module.exports = (config) => {
  config.set({
    webpack: {
      ...,
      devtool: 'inline-source-map'
    }
  });
};
This solution works for me with Webpack v3.
If by any chance you are using Angular, and you have removed all the coverage-related stuff from your karma.conf file and are still unable to hit the breakpoints, look into angular.json. It might have the codeCoverage bit set to true.
"test": {
...
"options": {
...
"codeCoverage": false,
...
}
...
}
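Alternatively (assuming a reasonably recent Angular CLI), the flag can be overridden per run instead of editing angular.json:

ng test --code-coverage=false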