Goal
Be able to perform multiple retries of an entire test suite until all tests succeed.
This is instead of configuring the retries allowed for every individual test within a test suite, as described in Configure Test Retries: Global Configuration.
How to provide test suite retries?
If any test fails within a test suite, start over and retry the entire test suite until every test within it succeeds, or stop once the number of whole-suite retry attempts exceeds the requested retry attempts.
The following code configures the retry attempts applied to every test within a test suite.
The desired goal for this test suite: the test 'TEST counter' is expected to fail until the incremented _counter equals _runModeRetries, and only then is it expected to succeed. Repeating the entire test suite means re-running every test prior to 'TEST counter' until it succeeds.
However, what actually happens is that only 'TEST counter' is retried _runModeRetries times, and _counter is never incremented further because 'TEST increment' is called only once.
Why do I want this?
I have test suites with a series of tests that are required to run in sequence, and if a test fails, the retry requires restarting the sequence. For example, _counter can only be incremented again if 'TEST increment' is called again as part of a full test suite retry.
How can I do test suite retries?
let _counter = 0;
const _runModeRetries = 3;

context(
  'CONTEXT Cypress Retries',
  {
    retries: {
      runMode: _runModeRetries,
      openMode: 0
    }
  },
  () => {
    it('TEST increment', () => {
      _counter++;
      expect(_counter).to.be.a('number').gt(0);
    });
    it('TEST true', () => {
      expect(true).to.be.a('boolean').to.be.true;
    });
    it('TEST false', () => {
      expect(false).to.be.a('boolean').to.be.false;
    });
    it('TEST counter', () => {
      if (_counter < _runModeRetries) {
        assert.fail();
      } else {
        assert.isTrue(true);
      }
    });
  }
);
This is really hacky, but I'm posting it in case you can improve on it:
- Run the suite _runModeRetries times.
- Add a skip variable to control whether the tests run.
- Make all tests function() so that this.skip() can be called.
- Add an after() within the suite to set skip to true after the first pass.
- Add an onFail handler to reset skip when a failure occurs.
let _counter = 0;
const _runModeRetries = 3;
let skip = false

Cypress.on('fail', (error, test) => {
  skip = false
  throw error // behave as normal
})

Cypress._.times(_runModeRetries, () => {
  context('CONTEXT', { retries: { runMode: _runModeRetries } }, () => {
    it('TEST increment', function() {
      if (skip) this.skip()
      _counter++;
      expect(_counter).to.be.a('number').gt(0);
    });
    it('TEST true', function() {
      if (skip) this.skip()
      expect(true).to.be.a('boolean').to.be.true;
    });
    it('TEST false', function() {
      if (skip) this.skip()
      expect(false).to.be.a('boolean').to.be.false;
    });
    it('TEST counter', function() {
      if (skip) this.skip()
      assert.isTrue(_counter >= _runModeRetries);
    });
  });
  after(() => skip = true) // if any test fails, this does not run
})
There may be some improvement available by adjusting the suite inside the onFail handler (suite = test.parent) and avoiding the changes to individual tests.
A cleaner way is to use the Module API, which would allow you to run the test suite, examine the results, and run it again if there are any failures: a sort of manual retry.
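A hedged sketch of that manual-retry idea: loop until a run reports no failures or attempts run out. The `runSuite` callback is a stand-in for `() => require('cypress').run({ spec: '...' })` from the Module API, whose resolved results object includes a `totalFailed` count; the retry loop itself is plain Node and can be exercised with any stub runner.

```javascript
// Sketch: retry an entire run until it passes or attempts are exhausted.
// `runSuite` stands in for () => require('cypress').run({ spec: '...' }),
// which resolves with a results object containing `totalFailed`.
async function runSuiteWithRetries(runSuite, maxAttempts) {
  let result;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    result = await runSuite();                 // run the whole suite once
    if (result.totalFailed === 0) {
      return { attempts: attempt, result };    // every test passed
    }
  }
  return { attempts: maxAttempts, result };    // still failing after retries
}

// Demo with a stub runner that fails twice, then passes:
const outcomes = [{ totalFailed: 2 }, { totalFailed: 1 }, { totalFailed: 0 }];
runSuiteWithRetries(() => Promise.resolve(outcomes.shift()), 5)
  .then(({ attempts, result }) => {
    console.log(attempts, result.totalFailed); // 3 0
  });
```

In a real script you would pass the Cypress Module API call as `runSuite` and decide per result whether another full run is warranted.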
Related
Why don't I have access to @testVariable after the first beforeEach? I should be able to have access to it for each test in this context.
describe('Testing before and beforeEach issue', () => {
  context('testing before and beforeEach', () => {
    before(() => {
      cy.wrap("TESTING 123").as("testVariable")
    })
    beforeEach(() => {
      // This fails after the first beforeEach call.
      cy.get("@testVariable").then((testVariable) => {
        cy.log(`THIS IS A TEST: ${testVariable}`)
      })
    })
    // This test is reached and passes
    it('Testing the variable one time', () => {
      cy.log("THIS IS LOG 1")
    })
    // This test should also pass but is not reached before
    // an error occurs in beforeEach
    it('Testing the variable a second time', () => {
      cy.log("THIS IS LOG 2")
    })
  })
})
Here is the test output:
Testing before and beforeEach issue
  testing before and beforeEach
    ✓ Testing the variable one time
    1) "before each" hook for "Testing the variable a second time"

1 passing (684ms)
1 failing

1) Testing before and beforeEach issue
     testing before and beforeEach
       "before each" hook for "Testing the variable a second time":
     CypressError: `cy.get()` could not find a registered alias for: `@testVariable`.
     You have not aliased anything yet.

Because this error occurred during a `before each` hook we are skipping the remaining tests in the current suite: `testing before and beforeEach`
It's by design, part of Cypress's approach of minimizing cross-test pollution (since beforeEach() runs as part of each test).
Aliases are reset before each test
Note: all aliases are reset before each test. A common user mistake is to create aliases using the before hook. Such aliases work in the first test only!
But there's a loophole: the alias sets up a variable of the same name on the Mocha context (this), and that property is not cleared between tests.
Sharing Context
Under the hood, aliasing basic objects and primitives utilizes Mocha's shared context object: that is, aliases are available as this.* (i.e. properties of this).
It goes on to say
Additionally these aliases and properties are automatically cleaned up after each test.
but if you run the following, you'll find that's not true: the alias is removed, but the context property is not.
describe('Testing before and beforeEach issue', () => {
  context('testing before and beforeEach', () => {
    before(() => {
      cy.wrap("TESTING 123").as("testVariable")
    })
    // beforeEach(function() {
    //   // This fails after the first beforeEach call.
    //   cy.get("@testVariable").then((testVariable) => {
    //     cy.log(`THIS IS A TEST: ${testVariable}`)
    //   })
    // })
    beforeEach(function() {
      cy.log(`THIS IS A TEST: ${this.testVariable}`) // logged for each test
    })
    // This test is reached and passes
    it('Testing the variable one time', () => {
      cy.log("THIS IS LOG 1")
    })
    // This test should also pass but is not reached before
    // an error occurs in beforeEach
    it('Testing the variable a second time', () => {
      cy.log("THIS IS LOG 2")
    })
  })
})
I want to skip some tests and allow others from within the beforeEach hook, as follows:
beforeEach(() => {
  if (Cypress.mocha.getRunner().suite.ctx.currentTest.title === `Skip this`) {
    // skip the first test case only but run the second one [How?]
  }
});

it(`Skip this`, () => {
});

it(`Don't skip this`, () => {
});
In the place of [How?] I tried the following:
- cy.skipOn(true) from the cypress-skip-test plugin, but apparently it skips the beforeEach hook, not the test itself.
- this.skip(), but apparently this is not a valid function inside an arrow function. Also, if I changed the beforeEach from an arrow function expression to a regular function, the skip function works, but it skips the whole suite and not just the desired test case.
Any ideas?
Change the function type from an arrow function to a regular function; then you can use the built-in Mocha skip() method.
beforeEach(function() {
  if (condition) {
    this.skip()
  }
})
Your code sample will look like this:
beforeEach(function() { // NOTE regular function
  if (Cypress.mocha.getRunner().suite.ctx.currentTest.title === 'Skip this') {
    this.skip()
  }
});

it(`Skip this`, () => {
});

it(`Don't skip this`, () => {
});
Or use the Mocha context you already access for the test title:
beforeEach(() => { // NOTE arrow function is allowed
  const ctx = Cypress.mocha.getRunner().suite.ctx
  if (ctx.currentTest.title === 'Skip this') {
    ctx.skip()
  }
});
afterEach()
If you have an afterEach() hook, the this.skip() call does not stop it from running for the skipped test. You should check the condition inside that hook as well:
afterEach(function() {
  if (condition) return;
  // ... code that should not run for skipped tests
})
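One way to express that condition without duplicating it: Mocha marks tests skipped via this.skip() as pending, and the hook context exposes the current test. Assuming Mocha's Runnable#isPending() is available (as it is in Cypress specs), a minimal sketch with the check factored into a plain predicate so it can be exercised anywhere:

```javascript
// Guard for afterEach cleanup: tests skipped via this.skip() are marked
// pending, so cleanup should be bypassed for them.
function shouldRunCleanup(currentTest) {
  return !currentTest.isPending();
}

// Inside a spec the hook would read (assumption: Mocha context available):
// afterEach(function () {
//   if (!shouldRunCleanup(this.currentTest)) return
//   // cleanup that should only run for tests that actually executed
// })

console.log(shouldRunCleanup({ isPending: () => true }));  // false
console.log(shouldRunCleanup({ isPending: () => false })); // true
```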
I tried to use afterEach(), but it executes whether the test case passed or failed. I need to know, inside the hook, which one happened. Should I use, for example, an if condition, or what?
For example:
describe('TestSuite', function() {
  it('THIS TEST CASE PASSED', function() {
  })
  it('THIS TEST CASE FAILED', function() {
  })
})
I need to make it work like this: if a test case passed, do these actions
...
...
...
and if the test case failed, do these actions
...
...
...
You didn't give much info about your code, so I will suggest a solution that might not be suited to your case. I would use a variable that gets set to 'fail' if a check fails in the previous test, and use that to determine the action in the second test, e.g.:
describe('check if first test passed or failed then do A or B', () => {
  let result
  it('test 1', () => {
    cy.request({ url: Cypress.env('url') }).its('status').then((status) => {
      if (status === 500) {
        result = 'fail'
      }
    })
  })
  it('test 2', () => {
    if (result === 'fail') {
      cy.log('Previous test failed, so I did Action A')
      // code for Action A
    } else {
      cy.log('Previous test passed, so I did Action B')
      // code for Action B
    }
  })
})
My test contains something like this:
it.only('does something', function() {
  cy.window().then(function(win) {
    win.GlobalObject = {
      someMethod: function(data) {
        return expect(data).to.deep.eq({
          company: 'Pied Piperz',
          country: 'United States'
        });
      }
    };
  });
  cy.get('[data-cy=submit]').click();
});
When my test runs, part of the logic invokes window.GlobalObject.someMethod({}), which should fail the test since I didn't pass the expected object into someMethod(). Instead, I see a failing assertion in the log:
But the overall test is marked as succeeding:
How can I get the failing assertion inside my mocked GlobalObject to fail the whole test?
The problem is that your test is failing AFTER it has already passed.
Once Cypress finishes cy.get('[data-cy=submit]').click(), it's done executing all the commands, so it considers your test to have completed.
Then, some milliseconds later, you have a failing expect() - but Cypress has already passed the test, so there is nothing meaningful to be done.
But all is not lost! You can use the optional done parameter to manually tell Cypress when the test has finished, like so:
it.only('does something', function(done) { // <--- note `done` here
  cy.window().then(function(win) {
    win.GlobalObject = {
      someMethod: function(data) {
        expect(data).to.deep.eq({
          company: 'Pied Piperz',
          country: 'United States'
        });
        done(); // <--- this call tells Cypress the test is over
      }
    };
  });
  cy.get('[data-cy=submit]').click();
});
I have read that Jest tests in the same test file execute sequentially. I have also read that when writing tests that involve callbacks, a done parameter should be used.
But when using promises with the async/await syntax I am using in my code below, can I count on the tests to run and resolve in sequential order?
import Client from '../Client';
import { Project } from '../Client/types/client-response';

let client: Client;

beforeAll(async () => {
  jest.setTimeout(10000);
  client = new Client({ host: 'ws://127.0.0.1', port: 8080, logger: () => {} });
  await client.connect();
})

describe('Create, save and open project', () => {
  let project: Project;
  let filename: string;

  beforeAll(async () => {
    // Close project
    let project = await client.getActiveProject();
    if (project) {
      let success = await client.projectClose(project.id, true);
      expect(success).toBe(true);
    }
  })

  test('createProject', async () => {
    project = await client.createProject();
    expect(project.id).toBeTruthy();
  });

  test('projectSave', async () => {
    filename = await client.projectSave(project.id, 'jesttest.otii', true);
    expect(filename.endsWith('jesttest.otii')).toBe(true);
  });

  test('projectClose', async () => {
    let success = await client.projectClose(project.id);
    expect(success).toBe(true);
  });

  test('projectOpen', async () => {
    project = await client.openProject(filename);
    expect(filename.endsWith('jesttest.otii')).toBe(true);
  });
})

afterAll(async () => {
  await client.disconnect();
})
From the docs:
...by default Jest runs all the tests serially in the order they were encountered in the collection phase, waiting for each to finish and be tidied up before moving on.
So while Jest may run test files in parallel, by default it runs the tests within a file serially.
That behavior can be verified by the following test:
describe('test order', () => {
  let count;
  beforeAll(() => {
    count = 0;
  })
  test('1', async () => {
    await new Promise((resolve) => {
      setTimeout(() => {
        count++;
        expect(count).toBe(1); // SUCCESS
        resolve();
      }, 1000);
    });
  });
  test('2', async () => {
    await new Promise((resolve) => {
      setTimeout(() => {
        count++;
        expect(count).toBe(2); // SUCCESS
        resolve();
      }, 500);
    });
  });
  test('3', () => {
    count++;
    expect(count).toBe(3); // SUCCESS
  });
});
For sure it depends on the test runner configured. For Jasmine 2, say, it seems impossible to run tests concurrently:
Because of the single-threadedness of javascript, it isn't really possible to run your tests in parallel in a single browser window
But looking into the docs' config section:

--maxConcurrency=<num>
Prevents Jest from executing more than the specified amount of tests at the same time. Only affects tests that use test.concurrent.

--maxWorkers=<num>|<string>
Alias: -w. Specifies the maximum number of workers the worker-pool will spawn for running tests. This defaults to the number of the cores available on your machine. It may be useful to adjust this in resource limited environments like CIs but the default should be adequate for most use-cases.
For environments with variable CPUs available, you can use percentage based configuration: --maxWorkers=50%
Also looking at description for jest-runner-concurrent:
Jest's default runner uses a new child_process (also known as a worker) for each test file. Although the max number of workers is configurable, running a lot of them is slow and consumes tons of memory and CPU.
So it looks like you can configure the number of test files running in parallel (maxWorkers) as well as the number of concurrent test cases in the scope of a single worker (maxConcurrency), if you use Jest as the test runner. And the latter affects only test.concurrent() tests.
For some reason I was unable to find anything on test.concurrent() at their main docs site.
Anyway, you can check it against your environment yourself:
describe('checking concurrent execution', () => {
  let a = 5;
  it('deferred change', (done) => {
    setTimeout(() => {
      a = 11;
      expect(a).toEqual(11);
      done();
    }, 1000);
  });
  it('fails if running in concurrency', () => {
    expect(a).toEqual(11);
  });
})
Sure, above I used Jasmine's syntax (describe, it), so you may need to replace those with other calls.