When writing a unit test in Jest or Jasmine, when do you use describe?
And when do you use it?
I usually do
describe('my beverage', () => {
  test('is delicious', () => {
  });
});
When is it time for a new describe or a new it?
describe breaks your test suite into components. Depending on your test strategy, you might have a describe for each function in your class, each module of your plugin, or each user-facing piece of functionality.
You can also nest describes to further subdivide the suite.
it is where you perform individual tests. You should be able to describe each test like a little sentence, such as "it calculates the area when the radius is set". You shouldn't be able to subdivide tests any further; if you feel like you need to, use describe instead.
describe('Circle class', function() {
  describe('area is calculated when', function() {
    it('sets the radius', function() { ... });
    it('sets the diameter', function() { ... });
    it('sets the circumference', function() { ... });
  });
});
In short: describe is for grouping, it is for testing.
As the Jest docs say, test and it are the same:
https://jestjs.io/docs/en/api#testname-fn-timeout
test(name, fn, timeout)
Also under the alias: it(name, fn, timeout)
and describe is just for when you prefer your tests to be organized into groups:
https://jestjs.io/docs/en/api#describename-fn
describe(name, fn)
describe(name, fn) creates a block that groups together several related tests. For example, if you have a myBeverage object that is supposed to be delicious but not sour, you could test it with:
const myBeverage = {
  delicious: true,
  sour: false,
};

describe('my beverage', () => {
  test('is delicious', () => {
    expect(myBeverage.delicious).toBeTruthy();
  });

  test('is not sour', () => {
    expect(myBeverage.sour).toBeFalsy();
  });
});
This isn't required - you can write the test blocks directly at the top level. But this can be handy if you prefer your tests to be organized into groups.
I consider this more from the perspective of its impact on the test output: by using describe, or multiple levels of describe, you can group your output for readability.
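For example, here is a minimal sketch (my own, using Jest) showing how nesting shapes the reported output:

describe('Circle', () => {
  describe('area', () => {
    // Reported grouped and indented, e.g.: Circle > area > is πr² when the radius is set
    test('is πr² when the radius is set', () => {
      const circle = { radius: 2, area() { return Math.PI * this.radius ** 2; } };
      expect(circle.area()).toBeCloseTo(Math.PI * 4);
    });
  });
});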
Morning all. I have a slightly unusual design to my tests. A typical example might be
describe('1', () => {
describe('2', () => {
before()
describe('3', () => {
it('1')
// ...
it('n')
});
});
});
If there is a failure in one of the individual tests (it 1..n), I want to re-run ALL of those tests, and run the "before" code first too, i.e. from "describe 2". If I use a before hook, then re-runs don't trigger it again. If I change to a beforeEach, it gets called before each and every "it" block, which I don't want.
Effectively, each it block is a test check, describe 3 is a test step, describe 2 a test spec, and describe 1 a test "group"
Can anyone suggest a way I can re-run a test spec (describe 2) when one test check fails, including re-running the before code for that spec?
(I know this is probably anti-pattern etc, but....)
You can externalise the before() callback function, and use the test:after:run event to trigger it on a retry.
I haven't tested this extensively, but the gist is
const beforeCallback = () => {...}

before(beforeCallback)

Cypress.on('test:after:run', (result) => {
  if (result.currentRetry < result.retries && result.state === 'failed') {
    beforeCallback()
  }
})
it('fails', {retries:3}, () => expect(false).to.eq(true)) // failing test to check it out
How do I conditionally skip a test if the URL contains "xyz"?
Some tests that run in the QA environment ("abc") should not be run in the Production ("xyz") environment.
I've not been able to find a good example of conditionally checking the environment to trigger a test. The baseURL needs to be checked dynamically and the test skipped, preferably in the beforeEach.
Running Cypress 6.2.0.
beforeEach(() => {
  login.loginByUser('TomJones');
  cy.visit(`${environment.getBaseUrl()}${route}`);
});

it('test page', function () {
  if (environment.getBaseUrl().includes('xyz')) {
    // *skip test* somehow
  } else {
    cy.intercept('GET', '**/some-api/v1/test*').as('Test');
    cy.get('#submitButton').click();
  }
})
Potential Solution (tested and tried successfully):
I used a combination of filtering (grouping) and folder structures via the CLI.
I set up the folders /integration/smokeTests/QA and /integration/smokeTests/Prod.
1. QA test run:
npm run cy:filter:qa --spec "cypress/integration/smokeTests/QA/*-spec.ts"
2. Run all (both QA and Prod tests):
npm run cypress:open --spec "cypress/integration/smokeTests/*/*-spec.ts"
3. Prod test run:
npm run cy:filter:prod --spec "cypress/integration/smokeTests/Prod/*-spec.ts"
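For reference, the filter scripts might be defined along these lines in package.json (a sketch; the script bodies are my assumption, not part of the original answer):

{
  "scripts": {
    "cypress:open": "cypress open",
    "cy:filter:qa": "cypress run --spec 'cypress/integration/smokeTests/QA/**/*-spec.ts'",
    "cy:filter:prod": "cypress run --spec 'cypress/integration/smokeTests/Prod/**/*-spec.ts'"
  }
}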
Normally I wouldn't write a custom command just to exercise one Cypress command, but in this case it's useful to obtain the global test context this.
Using the function form of callback with the custom command allows access to this, then you are free to use arrow functions on the test themselves.
Cypress.Commands.add('skipWhen', function (expression) {
  if (expression) {
    this.skip()
  }
})

it('test skipping with arrow function', () => {
  cy.skipWhen(Cypress.config('baseUrl').includes('localhost'));

  // NOTE: a "naked" expect() will not be skipped
  // if you call your custom command within the test.
  // Wrap it in a .then() to make sure it executes on the command queue.
  cy.then(() => {
    expect('this.stackOverflow.answer').to.eq('a.better.example')
  })
})
I would add a helper method which you can call from any Mocha.Context (at the time of writing, any it, describe, or context block).
// commands.ts
declare global {
  // eslint-disable-next-line @typescript-eslint/no-namespace
  namespace Cypress {
    interface Chainable {
      /**
       * Custom command which will skip a test or context based on a boolean expression.
       *
       * You can call this command from anywhere; just make sure to pass in the it, describe, or context block you wish to skip.
       *
       * @example cy.skipIf(yourCondition, this);
       */
      skipIf(expression: boolean, context: Mocha.Context): void;
    }
  }
}

Cypress.Commands.add(
  'skipIf',
  (expression: boolean, context: Mocha.Context) => {
    if (expression) context.skip.bind(context)();
  }
);
And from your spec:
describe('Events', () => {
  const url = `${environment.getBaseUrl()}${route}`;

  before(function () {
    cy.visit(url);
    cy.skipIf(url.includes('xyz'), this);
  });

  context('Nested context', () => {
    it('test', function () {
      cy.skipIf(url.includes('abc'), this);
      expect(this.stackOverflow.answer).to.eq('accepted');
    });
  });
});
Now you have a reusable custom command that you can call from anywhere to conditionally skip tests based on any expression that evaluates to a boolean. Be careful of the classic JS equality and definition gotchas.
I'm trying to unit test a service that uses Elasticsearch. I want to make sure I am using the right techniques.
I am a new user in many of the areas this problem touches, so most of my attempts come from reading other, similar problems and trying out the approaches that make sense for my use case. I believe I am missing a field within createTestingModule. Also, I sometimes see providers: [Service] and other times components: [Service].
const module: TestingModule = await Test.createTestingModule({
  providers: [PoolJobService],
}).compile()
This is the current error I have:
Nest can't resolve dependencies of the PoolJobService (?).
Please make sure that the argument at index [0]
is available in the _RootTestModule context.
Here is my code:
PoolJobService
import { Injectable } from '@nestjs/common'
import { ElasticSearchService } from '../ElasticSearch/ElasticSearchService'

@Injectable()
export class PoolJobService {
  constructor(private readonly esService: ElasticSearchService) {}

  async getPoolJobs() {
    return this.esService.getElasticSearchData('pool/job')
  }
}
PoolJobService.spec.ts
import { Test, TestingModule } from '@nestjs/testing'
import { PoolJobService } from './PoolJobService'

describe('PoolJobService', () => {
  let poolJobService: PoolJobService

  beforeEach(async () => {
    const module: TestingModule = await Test.createTestingModule({
      providers: [PoolJobService],
    }).compile()

    poolJobService = module.get<PoolJobService>(PoolJobService)
  })

  it('should be defined', () => {
    expect(poolJobService).toBeDefined()
  })
I could also use some insight on this, but haven't been able to properly test this because of the current issue
it('should return all PoolJobs', async () => {
  jest
    .spyOn(poolJobService, 'getPoolJobs')
    .mockImplementation(() => Promise.resolve([]))

  await expect(poolJobService.getPoolJobs()).resolves.toEqual([])
})
})
First off, you're correct about using providers. Components are an Angular-specific thing that does not exist in Nest; the closest thing we have is controllers.
What you should be doing in a unit test is testing the return of a single function without digging deeper into the code base itself. In the example you've provided, you would want to mock out your ElasticSearchService with a jest.mock and assert the return of the PoolJobService method.
Nest provides a very nice way for us to do this with Test.createTestingModule as you've already pointed out. Your solution would look similar to the following:
PoolJobService.spec.ts
import { Test, TestingModule } from '@nestjs/testing'
import { PoolJobService } from './PoolJobService'
import { ElasticSearchService } from '../ElasticSearch/ElasticSearchService'

describe('PoolJobService', () => {
  let poolJobService: PoolJobService
  let elasticService: ElasticSearchService // this line is optional, but I find it useful when overriding mocking functionality

  beforeEach(async () => {
    const module: TestingModule = await Test.createTestingModule({
      providers: [
        PoolJobService,
        {
          provide: ElasticSearchService,
          useValue: {
            getElasticSearchData: jest.fn()
          }
        }
      ],
    }).compile()

    poolJobService = module.get<PoolJobService>(PoolJobService)
    elasticService = module.get<ElasticSearchService>(ElasticSearchService)
  })

  it('should be defined', () => {
    expect(poolJobService).toBeDefined()
  })

  it('should give the expected return', async () => {
    elasticService.getElasticSearchData = jest.fn().mockReturnValue({ data: 'your object here' })
    const poolJobs = await poolJobService.getPoolJobs()
    expect(poolJobs).toEqual({ data: 'your object here' })
  })
})
You could achieve the same functionality with jest.spyOn instead of reassigning a mock, but how you implement the mocking is up to you.
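For instance, a minimal sketch of the spy-based variant, inside the same describe block (my illustration, not part of the original answer):

it('should give the expected return (spy variant)', async () => {
  // Spy on the mocked provider's method and control its return value
  jest
    .spyOn(elasticService, 'getElasticSearchData')
    .mockReturnValue({ data: 'your object here' })

  await expect(poolJobService.getPoolJobs()).resolves.toEqual({ data: 'your object here' })
})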
As a basic rule, whatever is in your constructor, you will need to mock it, and as long as you mock it, whatever is in the mocked object's constructor can be ignored. Happy testing!
EDIT 6/27/2019
About why we mock ElasticSearchService: A unit test is designed to test a specific segment of code and not make interactions with code outside of the tested function. In this case, we are testing the function getPoolJobs of the PoolJobService class. This means that we don't really need to go all out and connect to a database or external server as this could make our tests slow/prone to breaking if the server is down/modify data we don't want to modify. Instead, we mock out the external dependencies (ElasticSearchService) to return a value that we can control (in theory this will look very similar to real data, but for the context of this question I made it a string). Then we test that getPoolJobs returns the value that ElasticSearchService's getElasticSearchData function returns, as that is the functionality of this function.
This seems rather trivial in this case and may not seem useful, but once there is business logic after the external call, it becomes clear why we would want to mock. Say we have some data transformation that upper-cases the string before returning from the getPoolJobs method:
export class PoolJobService {
  constructor(private readonly elasticSearchService: ElasticSearchService) {}

  getPoolJobs(data: any): string {
    const returnData = this.elasticSearchService.getElasticSearchData(data);
    return returnData.toUpperCase();
  }
}
From here in the test we can tell getElasticSearchData what to return and easily assert that getPoolJobs does its necessary logic (asserting that the string really is upper-cased) without worrying about the logic inside getElasticSearchData or about making any network calls. For a function that does nothing but return another function's output, it does feel a little bit like cheating on your tests, but in reality you aren't. You're following the testing patterns used by most others in the community.
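A sketch of what that test could look like, reusing the mock setup from the spec above (the sample strings are mine):

it('upper-cases whatever getElasticSearchData returns', () => {
  // Control the external dependency's output...
  elasticService.getElasticSearchData = jest.fn().mockReturnValue('pool jobs')

  // ...and assert only the transformation getPoolJobs adds on top of it
  expect(poolJobService.getPoolJobs('pool/job')).toEqual('POOL JOBS')
})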
When you move on to integration and e2e tests, then you'll want to have your external callouts and make sure that your search query is returning what you expect, but that is outside the scope of unit testing.
I am trying to pull out a smoke suite from my regression suite written using the Jasmine framework (wdio-jasmine-framework).
Is it possible to just add a tag on specific testcases in Jasmine?
If I remember correctly from my Jasmine/Mocha days, there were several ways to achieve this. I'll detail a few, but I'm sure there might be some others too. Use the one that's best for you.
1. Use the it.skip() statement inside a conditional (ternary) operator expression to define the state of a test case (e.g. in the case of a smoke run, skip the non-smoke tests using: (smokeRun ? it.skip : it)('not a smoke test', () => { /* do smth here */ });).
Here is an extended example:
// Reading the smokeRun state from a system variable:
const smokeRun = (process.env.SMOKE ? true : false);

describe('checkboxes testsuite', function () {

  // > this IS a smoke test! < //
  it('#smoketest: checkboxes page should open successfully', () => {
    CheckboxPage.open();
    // I am a mock test...
    // I do absolutely nothing!
  });

  // > this IS NOT a smoke test! < //
  (smokeRun ? it.skip : it)('checkbox 2 should be enabled', () => {
    CheckboxPage.open();
    expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
    expect(CheckboxPage.lastCheckbox.isSelected()).toEqual(true);
  });

  // > this IS NOT a smoke test! < //
  (smokeRun ? it.skip : it)('checkbox 1 should be enabled after clicking on it', () => {
    CheckboxPage.open();
    expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
    CheckboxPage.firstCheckbox.click();
    expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(true);
  });
});
2. Use it.only() to achieve much the same effect, the difference being the test-case refactoring workload. I'll summarize these ideas as:
if you have more smoke tests than non-smoke tests, use the it.skip() approach;
if you have more non-smoke tests than smoke tests, use the it.only() approach;
You can read more about pending tests in the Jasmine and Mocha documentation.
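A minimal sketch of the it.only() variant, reusing the same CheckboxPage object as above (my illustration; note that depending on the runner, focused tests may be spelled it.only or fit):

// Reading the smokeRun state from a system variable:
const smokeRun = (process.env.SMOKE ? true : false);

describe('checkboxes testsuite', function () {
  // During a smoke run, focus this test so that only smoke tests execute:
  (smokeRun ? it.only : it)('#smoketest: checkboxes page should open successfully', () => {
    CheckboxPage.open();
  });

  // Non-smoke tests stay plain `it` and are ignored whenever a focused test exists:
  it('checkbox 2 should be enabled', () => {
    CheckboxPage.open();
    expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
    expect(CheckboxPage.lastCheckbox.isSelected()).toEqual(true);
  });
});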
3. Use the runtime skip (.skip()) in conjunction with some nested describe statements.
It should look something like this:
// Reading the smokeRun state from a system variable:
const smokeRun = (process.env.SMOKE ? true : false);

describe('checkboxes testsuite', function () {

  // > this IS a smoke test! < //
  it('#smoketest: checkboxes page should open successfully', function () {
    CheckboxPage.open();
    // I am a mock test...
    // I do absolutely nothing!
  });

  describe('non-smoke tests go here', function () {
    before(function () {
      if (smokeRun) {
        this.skip();
      }
    });

    // > this IS NOT a smoke test! < //
    it('checkbox 2 should be enabled', function () {
      CheckboxPage.open();
      expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
      expect(CheckboxPage.lastCheckbox.isSelected()).toEqual(true);
    });

    // > this IS NOT a smoke test! < //
    it('checkbox 1 should be enabled after clicking on it', function () {
      CheckboxPage.open();
      expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
      CheckboxPage.firstCheckbox.click();
      expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(true);
    });
  });
});
!Note: These are working examples! I tested them using WebdriverIO's recommended Jasmine Boilerplate project.
!Obs: There are multiple ways to filter Jasmine tests, though unfortunately only at a test-file (test-suite) level (e.g. using grep-piped statements, or the built-in WDIO specs & exclude attributes).
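For completeness, the file-level specs and exclude attributes look roughly like this in wdio.conf.js (a sketch; the paths are hypothetical):

// wdio.conf.js
exports.config = {
  // Only spec files matching these globs are run...
  specs: ['./test/specs/**/*.spec.js'],
  // ...except for anything matched here:
  exclude: ['./test/specs/non-smoke/**/*.spec.js'],
  // ...rest of the config
};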
I have a Yeoman generator that uses this.bowerInstall()
When I test it, it tries to install all the bower dependencies that I initialized this way. Is there a way to mock this function?
The same goes for the this.npmInstall() function.
I eventually went with a different approach. The method from drorb's answer works if you are bootstrapping the test generators manually. If you use the RunContext-based setup (as described on the Yeoman [testing page](http://yeoman.io/authoring/testing.html)), the before block of the test looks something like this.
before(function (done) {
  helpers.run(path.join(__dirname, '../app'))
    .inDir(path.join(__dirname, './tmp'))  // Clear the directory and set it as the CWD
    .withOptions({ foo: 'bar' })           // Mock options passed in
    .withArguments(['name-x'])             // Mock the arguments
    .withPrompt({ coffee: false })         // Mock the prompt answers
    .on('ready', function (generator) {
      // this is called right before `generator.run()`
    })
    .on('end', done);
})
You can add mock functions to the generator in the 'ready' callback, like so:
.on('ready', function (generator) {
  generator.bowerInstall = function (args) {
    // Do something when the generator runs bower install
  };
})
The other way is to include an option in the generator itself. Such as:
installAngular: function () {
  if (!this.options['skip-install']) {
    this.bowerInstall('angular', {
      'save': true
    });
  }
},

finalInstall: function () {
  this.installDependencies({
    skipInstall: this.options['skip-install']
  });
}
Now, since you run the test with the 'skip-install' option, the dependencies are not installed. This has the added advantage of ensuring the command-line skip-install argument works as expected. Otherwise, even if you run the generator with the skip-install argument, the bowerInstall and npmInstall calls in your generator would still execute, even though installDependencies would not (as it is usually configured as above).
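With the RunContext setup shown earlier, that option can be passed like any other (a sketch reusing the withOptions call from above):

helpers.run(path.join(__dirname, '../app'))
  .withOptions({ 'skip-install': true }) // the generator now skips bowerInstall/npmInstall
  .on('end', done);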
Take a look at the tests for the Bootstrap generator; they contain an example of mocking the bowerInstall() function:
beforeEach(function (done) {
  this.bowerInstallCalls = [];

  // Mock bower install and track the function calls.
  this.app.bowerInstall = function () {
    this.bowerInstallCalls.push(arguments);
  }.bind(this);
}.bind(this));
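You can then assert against the tracked calls in a test. A sketch of that (the run invocation style and the expected package name are my assumptions, not taken from the Bootstrap generator's actual tests):

var assert = require('assert');

it('calls bowerInstall with the expected package', function (done) {
  this.app.run({}, function () {
    // Each entry in bowerInstallCalls is the `arguments` object of one call;
    // [0][0] is the first call's first argument (the package name).
    assert.equal(this.bowerInstallCalls[0][0], 'bootstrap');
    done();
  }.bind(this));
});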