I'm trying to get a TDD workflow going with koa2/mocha/chai/chai-http, but my problem is that when I run the tests, the koa2 server keeps running after the tests are finished, so I have to Ctrl+C (kill) it every time.
Can anyone tell me how to setup a TDD workflow where the server gets stopped after all tests are run?
Also, I'd like to watch the test files for changes and re-run the tests as soon as changes are detected... can anyone help with that? I can't find anything on the net -.-
What I currently have (simplified):
package.json:
"scripts": {
"watch-server": "nodemon --watch 'src/**/*' -e ts,tsx --exec 'ts-node' ./src/server.ts",
"test": "./node_modules/mocha/bin/mocha --compilers ts:ts-node/register test/**/*.ts"
},
server.ts:
app.use(routes_v1.routes());
export const server = app.listen(3000, () => {
console.log('Server running on port 3000');
});
test:
process.env.NODE_ENV = 'test';
import * as chai from 'chai';
const chaiHttp = require('chai-http');
const should = chai.should();
chai.use(chaiHttp);
import { server } from '../../../src/server';
describe('routes : login / register', () => {
describe('POST /sign_in', () => {
it('should return unauthorized for invalid user', (done) => {
chai.request(server)
.post('/sign_in')
.send({email: "test#test.de", password: "somePassword"})
.end((err, res) => {
res.status.should.eql(401);
should.exist(err);
done();
});
});
it('should return authorized for valid user', (done) => {
chai.request(server)
.post('/sign_in')
.send({email: 'authorized@test.de', password: "authorizedPassword"})
.end((err, res) => {
res.status.should.eql(200);
should.exist(res.body.token);
done();
});
});
});
});
Thank you.
Starting from version 4.0, Mocha no longer forces the process to exit once all tests complete. You can use the CLI parameter --exit to exit the process when the tests are finished:
"test": "mocha ... --exit"
Another option, which gives you more control over the process, is to use hooks, so you can start the server before running the test(s) and stop it after:
describe('...', () => {
let server;
before(() => {
server = app.listen()
});
after(() => {
server.close()
});
...
})
As an example, you can take a look at this test. It uses Jest and supertest, but the idea is the same.
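To apply the hooks approach to the setup from the question, one option is to export the Koa app itself instead of the listening server and let the test control the lifecycle. A rough sketch, assuming server.ts can be changed like this and a CommonJS build:
// server.ts: export the app and only bind the port when started directly
app.use(routes_v1.routes());
export default app;
if (require.main === module) {
  app.listen(3000, () => console.log('Server running on port 3000'));
}
// test: start the server on a random free port before the suite and close it afterwards
import app from '../../../src/server';
describe('routes : login / register', () => {
  let server: any;
  before(() => { server = app.listen(); });
  after(() => { server.close(); });
  it('should return unauthorized for invalid user', (done) => {
    chai.request(server)
      .post('/sign_in')
      .send({ email: 'test@test.de', password: 'somePassword' })
      .end((err, res) => {
        res.status.should.eql(401);
        done();
      });
  });
});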
I started to learn Mocha testing from a tutorial. In this tutorial I see the total time for all tests, but in my opinion it is important to see the time for each test.
I attached a screenshot so you can see the test output I currently get.
I ran all the code on Windows.
"mocha": "^9.0.3",
"mongoose": "^5.13.5"
The folder structure:
src -> user.js
test -> test_helper.js
-> create_test.js
-> read_test.js
user.js file:
const mongoose = require("mongoose");
const Schema = mongoose.Schema;
const UserSchema = new Schema({
name: { type: String },
});
module.exports = mongoose.model("user", UserSchema);
test_helper.js file:
const mongoose = require("mongoose");
mongoose.connect("mongodb://localhost/users_test", {
useUnifiedTopology: true,
useNewUrlParser: true,
useFindAndModify: false,
});
mongoose.connection
.once("open", () => {
console.log("Mongoose Connected!");
})
.on("error", (error) => {
console.warn("Warning", error);
});
beforeEach((done) => {
const { users } = mongoose.connection.collections;
users.drop(() => {
// Ready to run next tests
done();
});
});
create_test.js file:
const assert = require("assert");
const User = require("../src/user");
describe("Creating records", () => {
it("saves a user", (done) => {
const user = new User({ name: "Joe" });
user
.save()
.then(() => {
assert(!user.isNew);
done();
})
.catch(done);
});
});
read_test.js file:
const assert = require("assert");
const User = require("../src/user");
describe("Reading users out of the database", () => {
let joe, maria, alex, zach;
beforeEach((done) => {
joe = new User({ name: "Joe" });
alex = new User({ name: "Alex" });
maria = new User({ name: "Maria" });
zach = new User({ name: "Zach" });
Promise.all([joe.save(), alex.save(), maria.save(), zach.save()]).then(() =>
done()
);
});
it("finds all users with a name of joe", (done) => {
User.find({ name: "Joe" }).then((users) => {
assert(users[0]._id.toString() === joe._id.toString());
done();
});
});
it("find a user with a particular id", (done) => {
User.findOne({ _id: joe._id }).then((user) => {
// assert(user.name === "Joe");
assert(user._id.toString() === joe._id.toString());
done();
});
});
});
In package.json I tried different commands, but with the same output:
With this inside scripts:
"start:test": "mocha test/ --recursive --exit",
"test": "nodemon --exec \"npm run start:test\"",
or this:
"test": "nodemon --watch . --exec \"mocha || true\""
I got the same output in the terminal, with only the total time for all tests.
You can just copy all the code that I put here, put it locally on your computer and test it. Of course, you need a MongoDB database running to run all of this code.
So, please, can someone tell me how I can output the time for each test in the terminal?
I asked about this on the Mocha GitHub and a member helped me understand that there are 2 ways to show the time:
1. Setting the --slow option to zero: --slow 0. Mocha only prints a duration next to tests that take at least half of the "slow" threshold, so setting it to 0 makes every test report its time.
"scripts": {
"start:test": "mocha test/ --slow 0 --recursive --exit",
"test": "nodemon --exec \"npm run start:test\""
},
2. Using the list reporter: --reporter list
"scripts": {
"start:test": "mocha test/ --reporter list --recursive --exit",
"test": "nodemon --exec \"npm run start:test\""
}
I searched a lot for at least one option, but now I found 2 :)).
I hope this will help others to save their time.
I'm testing an Angular App with Cypress.
I'm running my tests from the Cypress dashboard, which I open using this command:
$(npm bin)/cypress open
I'm calling an API with my test: it works.
But when I change my code, Cypress reruns the tests, which causes my first (and only my first) test to fail. The request calling the API is aborted.
The only way to make it work again is to manually end the process, then start it again.
Has anyone got an idea what is causing this strange behaviour?
Here is my test code:
beforeEach(() => {
cy.visit('/');
cy.server();
cy.route('POST', `myUrl`).as('apiCall');
});
it('should find a doctor when user searches doctor with firstName', () => {
cy.get('#myInput').type('foo');
cy.get('#submitButton').click();
cy.wait('@apiCall').then((xhr) => {
expect(xhr.status).to.eq(200);
});
});
You can prepare an XHR stub like this:
describe('', () => {
let requests = {}; // for store sent request
beforeEach(() => {
cy.server({ delay: 500 }); // cypress will answer for mocked xhr after 0.5s
cy.route({
url: '<URL>',
method: 'POST',
response: 'fixture:response',
onRequest: ({ request }) => {
Object.assign(requests, { someRequest: request.body }); // store the sent payload; the object has to be mutated, not reassigned
},
});
});
And then, inside the same describe block, the test:
it('', () => {
cy
.doSomeSteps()
.assertEqual(requests, 'someRequest', { data: 42 })
});
There are 2 advantages to this solution: first, the 0.5s delay makes the test more realistic, because a real backend doesn't answer immediately. Second, you can verify that the application sends the proper payload after the doSomeSteps() step.
assertEqual is just a utility command to make the assertion more readable:
Cypress.Commands.add('assertEqual', (obj, key, value) =>
cy
.wrap(obj)
.its(key)
.should('deep.equal', value)
);
I am trying to write a test for a model in Sequelize, but I do not understand why it is not failing.
it('should find user by id', (done) => {
users.findByPk(2)
.then((retrievedUser) => {
expect(retrievedUser.dataValues).to.deep.equal('it should break');
done();
})
.catch((err) => {
console.log(`something went wrong [should find user by id] ${err}`);
done();
})
});
When I run the test, the output is the following:
something went wrong [should find user by id] AssertionError: expected { Object (id, email, ...) } to deeply equal 'it should break'
1 passing (40ms)
If someone wants to see the full code, I created a project.
For an asynchronous Mocha test to fail, pass an error as an argument to the done callback function:
it('should find user by id', (done) => {
users.findByPk(2)
.then((retrievedUser) => {
expect(retrievedUser.dataValues).to.deep.equal('it should break');
done();
})
.catch((err) => {
console.log(`something went wrong [should find user by id] ${err}`);
done(err);
})
});
Alternatively, use an async function without a callback:
it('should find user by id', async () => {
const retrievedUser = await users.findByPk(2);
try {
expect(retrievedUser.dataValues).to.deep.equal('it should break');
} catch (err) {
console.log(`something went wrong [should find user by id] ${err}`);
throw err;
}
});
That said, I wouldn't recommend logging the error message of failing tests, because that's what Mocha already does for you in a typical setup. So I would get rid of the try-catch block in the example above.
it('should find user by id', async () => {
const retrievedUser = await users.findByPk(2);
expect(retrievedUser.dataValues).to.deep.equal('it should break');
});
I have a scenario like this:
const isBrowser = new Function("try { return this === window; } catch (e) { return false; }");
if (isBrowser()) {
  // run browser-specific code
} else {
  // run Node.js-specific code
}
I am setting up a test environment using Mocha, Chai and Istanbul. How can I set it up in such a way that some test suites run in the browser and some in Node.js?
The goal is to get the combined coverage report.
How can I configure Mocha to run in both the browser and the Node.js environment, with or without Karma?
For example:
//this should run in NodeJS Environment
describe('Loader Node Js', () => {
it('Load files from File system', (done) => {
loader.load('file path')
.then((files) => {
done();
});
});
});
//this should run in Browser Environment
describe('Loader Browser', () => {
it('Load files from Server AJAX', (done) => {
loader.load('file url')
.then((files) => {
done();
});
});
})
You should be able to test for Node.js and browser specific globals in your suite.
if (typeof module === 'object') {
//this should run in NodeJS Environment
describe('Loader Node Js', () => {
it('Load files from File system', (done) => {
loader.load('file path')
.then((files) => {
done();
});
});
});
} else {
//this should run in Browser Environment
describe('Loader Browser', () => {
it('Load files from Server AJAX', (done) => {
loader.load('file url')
.then((files) => {
done();
});
});
})
}
You have other options, too, like testing for typeof self === 'undefined', which is true in Node.js and false in a browser.
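For example, the same split using that check (a minimal sketch):
if (typeof self === 'undefined') {
  // no `self` global, so we are in Node.js
  describe('Loader Node Js', () => { /* ... */ });
} else {
  // `self` exists (it is the window), so we are in a browser
  describe('Loader Browser', () => { /* ... */ });
}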
There are a few related questions you may find interesting:
Environment detection: node.js or browser
How to check whether a script is running under node.js?
How to detect if script is running in browser or in Node.js?
I have read that Jest tests in the same test file execute sequentially. I have also read that when writing tests that involve callbacks, a done parameter should be used.
But when using promises with the async/await syntax that I am using in my code below, can I count on the tests to run and resolve in sequential order?
import Client from '../Client';
import { Project } from '../Client/types/client-response';
let client: Client;
beforeAll(async () => {
jest.setTimeout(10000);
client = new Client({ host: 'ws://127.0.0.1', port: 8080 , logger: () => {}});
await client.connect();
})
describe('Create, save and open project', () => {
let project: Project;
let filename: string;
beforeAll(async () => {
// Close project
let project = await client.getActiveProject();
if (project) {
let success = await client.projectClose(project.id, true);
expect(success).toBe(true);
}
})
test('createProject', async () => {
project = await client.createProject();
expect(project.id).toBeTruthy();
});
test('projectSave', async () => {
filename = await client.projectSave(project.id, 'jesttest.otii', true);
expect(filename.endsWith('jesttest.otii')).toBe(true);
});
test('projectClose', async () => {
let success = await client.projectClose(project.id);
expect(success).toBe(true);
});
test('projectOpen', async () => {
project = await client.openProject(filename);
expect(filename.endsWith('jesttest.otii')).toBe(true);
});
})
afterAll(async () => {
await client.disconnect();
})
From the docs:
...by default Jest runs all the tests serially in the order they were encountered in the collection phase, waiting for each to finish and be tidied up before moving on.
So while Jest may run test files in parallel, by default it runs the tests within a file serially.
That behavior can be verified by the following test:
describe('test order', () => {
let count;
beforeAll(() => {
count = 0;
})
test('1', async () => {
await new Promise((resolve) => {
setTimeout(() => {
count++;
expect(count).toBe(1); // SUCCESS
resolve();
}, 1000);
});
});
test('2', async () => {
await new Promise((resolve) => {
setTimeout(() => {
count++;
expect(count).toBe(2); // SUCCESS
resolve();
}, 500);
});
});
test('3', () => {
count++;
expect(count).toBe(3); // SUCCESS
});
});
For sure, it depends on the test runner configured. Say, for Jasmine 2 it seems impossible to run tests concurrently:
Because of the single-threadedness of javascript, it isn't really possible to run your tests in parallel in a single browser window
But looking into docs' config section:
--maxConcurrency=<num>
Prevents Jest from executing more than the specified amount of tests at the same time. Only affects tests that use test.concurrent.
--maxWorkers=<num>|<string>
Alias: -w. Specifies the maximum number of workers the worker-pool will spawn for running tests. This defaults to the number of the cores available on your machine. It may be useful to adjust this in resource limited environments like CIs but the default should be adequate for most use-cases.
For environments with variable CPUs available, you can use percentage based configuration: --maxWorkers=50%
Also, looking at the description for jest-runner-concurrent:
Jest's default runner uses a new child_process (also known as a worker) for each test file. Although the max number of workers is configurable, running a lot of them is slow and consumes tons of memory and CPU.
So it looks like you can configure the number of test files running in parallel (maxWorkers) as well as the number of concurrent test cases within a single worker (maxConcurrency), if you use Jest as the test runner. And this affects only test.concurrent() tests.
For some reason I was unable to find anything on test.concurrent() at their main docs site.
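If you do want individual tests inside one file to run concurrently, the opt-in looks roughly like this (a sketch; test.concurrent is only loosely documented and treated as experimental, so verify it against your Jest version):
describe('concurrent checks', () => {
  // both tests are started without waiting for each other to finish
  test.concurrent('first', async () => {
    await expect(Promise.resolve(1)).resolves.toBe(1);
  });
  test.concurrent('second', async () => {
    await expect(Promise.resolve(2)).resolves.toBe(2);
  });
});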
Anyway, you can check it against your environment yourselves:
describe('checking concurrent execution', () => {
let a = 5;
it('deferred change', (done) => {
setTimeout(() => {
a = 11;
expect(a).toEqual(11);
done();
}, 1000);
});
it('fails if running in concurrency', () => {
expect(a).toEqual(11);
});
})
Sure, above I used Jasmine's syntax (describe, it), so you may need to replace that with other calls.