Retries in Cypress and Before hooks - cypress

Morning all. I have a slightly unusual design to my tests. A typical example might be
describe('1', () => {
  describe('2', () => {
    before()
    describe('3', () => {
      it('1')
      // ...
      it('n')
    });
  });
});
If there is a failure in one of the individual tests (it 1..n), I want to re-run ALL of those tests, and run the "before" code first too, i.e. from "describe 2". If I use a before hook, retries don't trigger it again. If I change to a beforeEach, it gets called before each and every "it" block, which I don't want.
Effectively, each it block is a test check, describe 3 is a test step, describe 2 a test spec, and describe 1 a test "group".
Can anyone suggest a way I can re-run a test spec (describe 2) when one test check fails, including re-running the before code for that spec?
(I know this is probably anti-pattern etc, but....)

You can externalise the before() callback function, and use the test:after:run event to trigger it on a retry.
I haven't tested this extensively, but the gist is:

const beforeCallback = () => {...}

before(beforeCallback)

Cypress.on('test:after:run', (result) => {
  if (result.currentRetry < result.retries && result.state === 'failed') {
    beforeCallback()
  }
})

it('fails', {retries: 3}, () => expect(false).to.eq(true))  // failing test to check it out
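To make that concrete for the nested structure in the question, here is a minimal, untested sketch. The suite names, the setup body, and the assertions are placeholders I've assumed, not real project code:

// hypothetical spec - names and setup body are assumptions
const specSetup = () => {
  // whatever "describe 2" needs to do before its checks goes here
};

describe('group', () => {
  describe('spec', { retries: 2 }, () => {
    before(specSetup);

    // on a failed attempt that still has retries left, run the setup again
    Cypress.on('test:after:run', (result) => {
      if (result.state === 'failed' && result.currentRetry < result.retries) {
        specSetup();
      }
    });

    describe('step', () => {
      it('check 1', () => expect(true).to.eq(true));
      it('check n', () => expect(false).to.eq(true)); // deliberately failing check
    });
  });
});

Only the failing check itself is retried by Cypress; the event handler just re-runs the externalised setup around each retry attempt.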

Related

Is there any way to stop all the test cases (it blocks) in nested describes if one of the test cases fails in Cypress

Is there any way to stop all nested describes if one of the iterations (test cases) fails inside one of the nested describes? Does anyone have an idea how to achieve this?
Example:
Test
  Describe 1
    it() {}
    Describe 1.1
      It1() {}
      It2() {}  (On Error)
      It3() {}  (Skip this)
    Describe 1.2  (Skip this)
      It12() {}  (Skip this)
      It22() {}  (Skip this)
      It33() {}  (Skip this)
  Describe 2  (Don't Skip this)
    it() {}  (Don't Skip this)
    Describe 1.1
      It21() {}  (Don't Skip this)
      It22() {}  (Don't Skip this)
      It23() {}  (Don't Skip this)
It can be achieved using the this.skip() method (see the docs): you can set a flag to indicate a failure, and in the beforeEach hook decide whether to skip the test or not.
Here is an example:
describe("my test cases", () => {
let skipFlag = false;
Cypress.on("fail", (err, runnable) => {
skipFlag = true;
throw err; //keeps the original error
});
beforeEach(function () {
if (skipFlag) {
this.skip();
}
});
it("a", () => {
throw {}; //error
});
it("b", () => {
// this will be skipped
cy.log("hello");
});
});
Another way to achieve this is with the cypress-fail-fast plugin (https://github.com/javierbrea/cypress-fail-fast). After installing the package, set the environment variable "FAIL_FAST_STRATEGY": "spec" in cypress.env.json. That's all; it will work accordingly.
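For reference, a minimal cypress.env.json containing just that variable would look something like this (merge it with any env values you already have):

{
  "FAIL_FAST_STRATEGY": "spec"
}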

In my Cypress.io tests why do I need to treat a cy.task like it is asynchronous when it's not

I have Cypress.io tests that use a simple Cypress task to log some table information to the terminal. For example I have a test like so:
it("Writes some console logs and checks a val", () => {
cy.task("rowLog", { id: 1, name: "foo", type: "bar" });
let one = 1;
expect(one).to.equal(2);
});
And the task, "rowLog" like so:
module.exports = (on, config) => {
  on("task", {
    rowLog(data) {
      // use node.js console.table to pretty print table data:
      console.table(data);
      return null;
    },
  });
};
But the result of rowLog will not display in my terminal if I run Cypress headlessly via cypress run, because the test fails. If I switch the test so that it passes, then it will show.
However, I just realized that if I treat rowLog as if it were async, like below, it will print the results to the terminal:
it("Writes some console logs and checks a val", () => {
// add a .then to task:
cy.task("rowLog", { id: 1, name: "foo", type: "bar" }).then(() => {
let one = 1;
expect(one).to.equal(2);
});
});
This is not what I would expect from the docs. They say that:
cy.task() yields the value returned or resolved by the task event in the pluginsFile.
(source)
And that a task can yield either a promise or a value.
I'm new to Cypress here; is there something I'm missing or doing wrong? I'd like not to have to chain my tasks with .then statements when it's just synchronous stuff like writing output, to ensure everything is emitted to my terminal.
If you look into the type definition of the cy.task command, you will see that it returns a Chainable (which is a promise-like entity). So it behaves like any other cy command, i.e. asynchronously.
As for "yield either a promise or a value", that statement refers to the handler of the task, not the task itself. As with the other commands, Cypress will wrap a returned value in a promise if that was not already done by the handler.
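As a rough illustration of that last point (rowLogAsync is a made-up name, not part of the question), both handler styles below end up yielding a value to .then() in the test in the same way:

// plugins file (same shape as in the question); rowLogAsync is hypothetical
module.exports = (on, config) => {
  on("task", {
    // returns a plain value - Cypress wraps it in a promise for you
    rowLog(data) {
      console.table(data);
      return null;
    },
    // returns a promise - Cypress waits for it to resolve
    rowLogAsync(data) {
      return new Promise((resolve) => {
        console.table(data);
        resolve(null);
      });
    },
  });
};

// in the spec, the yielded value only arrives through the command chain
it("logs a row, then asserts", () => {
  cy.task("rowLog", { id: 1, name: "foo", type: "bar" }).then((result) => {
    expect(result).to.be.null;
  });
});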

Cypress: How do I conditionally skip a test by checking the URL

How do I conditionally skip a test if the URL contains "xyz"?
Some tests that run in the QA environment ("abc") should not be run in the Production environment ("xyz").
I've not been able to find a good example of conditionally checking the environment to trigger a test. The baseUrl needs to be checked dynamically and the test skipped, preferably in the beforeEach.
Running Cypress 6.2.0:
beforeEach(() => {
  login.loginByUser('TomJones');
  cy.visit(`${environment.getBaseUrl()}${route}`);
});

it('test page', function () {
  // if environment.getBaseUrl() contains "xyz" then *skip test*, else:
  cy.intercept('GET', '**/some-api/v1/test*').as('Test');
  cy.get('#submitButton').click();
})
Potential solution (tested and tried successfully):
I used a combination of filtering (grouping) and folder structures via the CLI.
I set up the folders /integrations/smokeTest/QA and /integrations/smokeTest/Prod/.
1. QA test run:
npm run cy:filter:qa --spec "cypress/integration/smokeTests/QA/*-spec.ts"
2. Run all (both QA and PROD tests):
npm run cypress:open --spec "cypress/integration/smokeTests/*/*-spec.ts"
3. PROD test run:
npm run cy:filter:prod --spec "cypress/integration/smokeTests/PROD*/*-spec.ts"
Normally I wouldn't write a custom command just to exercise one Cypress command, but in this case it's useful for obtaining the test context this.
Using the function form of the callback for the custom command allows access to this; you are then free to use arrow functions in the tests themselves.
Cypress.Commands.add('skipWhen', function (expression) {
  if (expression) {
    this.skip()
  }
})

it('test skipping with arrow function', () => {
  cy.skipWhen(Cypress.config('baseUrl').includes('localhost'));

  // NOTE: a "naked" expect() will not be skipped
  // if you call your custom command within the test.
  // Wrap it in a .then() to make sure it executes on the command queue
  cy.then(() => {
    expect('this.stackOverflow.answer').to.eq('a.better.example')
  })
})
I would add a helper method which you can call from any Mocha.Context (at the time of writing, any it, describe, or context block).
// commands.ts
declare global {
  // eslint-disable-next-line @typescript-eslint/no-namespace
  namespace Cypress {
    interface Chainable {
      /**
       * Custom command which will skip a test or context based on a boolean expression.
       *
       * You can call this command from anywhere, just make sure to pass in the it, describe, or context block you wish to skip.
       *
       * @example cy.skipIf(yourCondition, this);
       */
      skipIf(expression: boolean, context: Mocha.Context): void;
    }
  }
}

Cypress.Commands.add(
  'skipIf',
  (expression: boolean, context: Mocha.Context) => {
    if (expression) context.skip.bind(context)();
  }
);
And from your spec:
describe('Events', () => {
  const url = `${environment.getBaseUrl()}${route}`;

  before(function () {
    cy.visit(url);
    cy.skipIf(url.includes('xyz'), this);
  });

  context('Nested context', () => {
    it('test', function () {
      cy.skipIf(url.includes('abc'), this);
      expect(this.stackOverflow.answer).to.be('accepted');
    });
  });
});
Now you have a reusable custom command that you can call from anywhere to conditionally skip tests based on any expression that evaluates to a boolean. Be careful of classic JS equality and definition gotchas when building that expression.

WebdriverIO: How do I run a specific 'it' statement in Jasmine using WDIO

I am trying to pull out a smoke suite from my regression suite written using the Jasmine framework (wdio-jasmine-framework).
Is it possible to just add a tag on specific testcases in Jasmine?
If I remember correctly from my Jasmine/Mocha days, there were several ways to achieve this. I'll detail a few, but I'm sure there might be some others too. Use the one that's best for you.
1. Use the it.skip() statement inside a conditional (ternary) expression to define the state of a test case. For example, in the case of a smoke run, skip the non-smoke tests using (smokeRun ? it.skip : it)('not a smoke test', () => { /* do smth here */ });
Here is an extended example:
// Reading the smokeRun state from a system variable:
const smokeRun = (process.env.SMOKE ? true : false);

describe('checkboxes testsuite', function () {
  // > this IS a smoke test! < //
  it('#smoketest: checkboxes page should open successfully', () => {
    CheckboxPage.open();
    // I am a mock test...
    // I do absolutely nothing!
  });

  // > this IS NOT a smoke test! < //
  (smokeRun ? it.skip : it)('checkbox 2 should be enabled', () => {
    CheckboxPage.open();
    expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
    expect(CheckboxPage.lastCheckbox.isSelected()).toEqual(true);
  });

  // > this IS NOT a smoke test! < //
  (smokeRun ? it.skip : it)('checkbox 1 should be enabled after clicking on it', () => {
    CheckboxPage.open();
    expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
    CheckboxPage.firstCheckbox.click();
    expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(true);
  });
});
2. Use it.only() to achieve mainly the same effect, the difference being the test-case refactor workload. I'll summarize these ideas as:
if you have more smoke tests than non-smoke tests, use the it.skip() approach;
if you have more non-smoke tests than smoke tests, use the it.only() approach;
You can read more about pending-tests here.
3. Use the runtime skip (.skip()) in conjunction with some nested describe statements.
It should look something like this:
// Reading the smokeRun state from a system variable:
const smokeRun = (process.env.SMOKE ? true : false);

describe('checkboxes testsuite', function () {
  // > this IS a smoke test! < //
  it('#smoketest: checkboxes page should open successfully', function () {
    CheckboxPage.open();
    // I am a mock test...
    // I do absolutely nothing!
  });

  describe('non-smoke tests go here', function () {
    before(function () {
      if (smokeRun) {
        this.skip();
      }
    });

    // > this IS NOT a smoke test! < //
    it('checkbox 2 should be enabled', function () {
      CheckboxPage.open();
      expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
      expect(CheckboxPage.lastCheckbox.isSelected()).toEqual(true);
    });

    // > this IS NOT a smoke test! < //
    it('checkbox 1 should be enabled after clicking on it', function () {
      CheckboxPage.open();
      expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(false);
      CheckboxPage.firstCheckbox.click();
      expect(CheckboxPage.firstCheckbox.isSelected()).toEqual(true);
    });
  });
});
!Note: These are working examples! I tested them using WebdriverIO's recommended Jasmine Boilerplate project.
!Obs: There are multiple ways to filter Jasmine tests, unfortunately only at a test-file (test-suite) level (e.g. using grep-piped statements, or the built-in WDIO specs & exclude attributes).
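As a rough sketch of that file-level filtering (the paths here are assumptions, not from the question), the WDIO config's specs and exclude attributes look like this:

// wdio.conf.js - hypothetical folder layout
exports.config = {
  // only pick up the smoke suites...
  specs: ['./test/specs/smoke/**/*.spec.js'],
  // ...and leave the regression suites out of this run
  exclude: ['./test/specs/regression/**/*.spec.js'],
  // ...the rest of your usual WDIO configuration goes here
};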

What is the difference between describe and it in Jest?

When writing a unit test in Jest or Jasmine when do you use describe?
When do you use it?
I usually do
describe('my beverage', () => {
  test('is delicious', () => {
  });
});
When is it time for a new describe or a new it?
describe breaks your test suite into components. Depending on your test strategy, you might have a describe for each function in your class, each module of your plugin, or each user-facing piece of functionality.
You can also nest describes to further subdivide the suite.
it is where you perform individual tests. You should be able to describe each test like a little sentence, such as "it calculates the area when the radius is set". You shouldn't be able to subdivide tests further; if you feel like you need to, use describe instead.
describe('Circle class', function() {
  describe('area is calculated when', function() {
    it('sets the radius', function() { ... });
    it('sets the diameter', function() { ... });
    it('sets the circumference', function() { ... });
  });
});
As I mentioned in this question, describe is for grouping and it is for testing.
As the Jest docs say, test and it are the same:
https://jestjs.io/docs/en/api#testname-fn-timeout
test(name, fn, timeout)
Also under the alias: it(name, fn, timeout)
and describe is just for when you prefer your tests to be organized into groups:
https://jestjs.io/docs/en/api#describename-fn
describe(name, fn)
describe(name, fn) creates a block that groups together several related tests. For example, if you have a myBeverage object that is supposed to be delicious but not sour, you could test it with:
const myBeverage = {
  delicious: true,
  sour: false,
};

describe('my beverage', () => {
  test('is delicious', () => {
    expect(myBeverage.delicious).toBeTruthy();
  });
  test('is not sour', () => {
    expect(myBeverage.sour).toBeFalsy();
  });
});
This isn't required - you can write the test blocks directly at the top level. But this can be handy if you prefer your tests to be organized into groups.
I look at this more in terms of its impact on the test output: by using describe, or multiple levels of describe, you can group your output for readability.
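For example (a made-up Circle suite), nested describes show up as indented groups when Jest prints verbose output, roughly like the comment at the end:

describe('Circle', () => {
  describe('area', () => {
    test('is PI * r^2 for radius 2', () => {
      expect(Math.PI * 2 ** 2).toBeCloseTo(12.566, 2);
    });
  });
  describe('circumference', () => {
    test('is 2 * PI * r for radius 2', () => {
      expect(2 * Math.PI * 2).toBeCloseTo(12.566, 2);
    });
  });
});

// jest --verbose groups the report by describe block:
// Circle
//   area
//     ✓ is PI * r^2 for radius 2
//   circumference
//     ✓ is 2 * PI * r for radius 2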
