How to stop a Cypress test but mark it as a success? - cypress

I have a Cypress test that clicks a link which runs a method and then brings the user to a page on a different website entirely.
This is the test:
it('Cards', () => {
  cy.get('#my-id').click({ force: true })
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.eq(null);
  cy.get(`a:visible[id*="the_link"]`).first().click()
  Cypress.on('fail', (error, runnable) => {
    var itemID = localStorage.getItem('itemID')
    expect(itemID).to.not.equal(null)
    // end test but mark as a success
  })
})
The problem is that I get a cross-origin error, which is why I added the Cypress.on('fail') handler. Now the test no longer fails, but it hangs for minutes because it keeps attempting to intercept more data. There is a lot of other stuff going on, so I cannot change the intercept logic.
All I want is to end the test and mark it a success on the line that says // end test but mark as a success.
Is this possible?

If you are using Chromium-based browsers you might want to try setting chromeWebSecurity to false.
Other browsers (specifically Firefox) might still run into that issue, in which case you either need to handle the fail event properly or split the interaction with the other origin into another it block. The latter is pretty simple and should work in all browsers:
it('Cards', () => {
  cy.get('#my-id').click({ force: true })
})

// if that gets you to another superdomain, we go to another it
it('Another superdomain', () => {
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.eq(null);
  cy.get(`a:visible[id*="the_link"]`).first().click()
})

// if that is yet another superdomain, we go to another it again
it('Third superdomain', () => {
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.not.equal(null)
  // test ends by itself
})
The way you have written the fail handler, you only add functionality on failure but never instruct Cypress to skip the usual test-status finalisation.
That can be done by returning false from that specific event:
Cypress.on('fail', () => false);
Mind that you can include an assertion before returning false if it is really required.

Cypress.on('fail', () => { // no params are needed, though
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.not.equal(null)
  // end test but mark as a success
  return false
})
That handler must be registered right before the potential failure event. It will interrupt the test when the failure fires, but mark it as passed.
Be aware that this is a huge anti-pattern and over-generalisation, since a Cypress.on('fail', ...) handler catches every failure in the run. The better way is to anchor any sort of negative logic to a more specific event, one of those listed in the Cypress Catalog of Events. However, the hack above might work for avoiding the cross-origin error in non-Chromium browsers, provided it is for sure the last thing that happens in that test.
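One way to narrow the scope is a test-scoped handler registered via cy.on, which is removed automatically when the test ends, so it cannot leak into other tests. A minimal sketch of that idea (the cross-origin message check is illustrative, not from the original question):

it('Cards', () => {
  // test-scoped: unlike Cypress.on, this handler dies with the test
  cy.on('fail', (error) => {
    // only swallow the expected cross-origin failure, rethrow anything else
    if (!error.message.includes('cross origin')) {
      throw error
    }
    expect(localStorage.getItem('itemID')).to.not.equal(null)
    return false // suppress the failure, the test is marked as passed
  })
  cy.get('#my-id').click({ force: true })
  cy.get(`a:visible[id*="the_link"]`).first().click()
})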

Related

How to programmatically stop a Cypress test

I have a Cypress test with only one test case (using v10.9 with the test GUI):
describe("Basic test", () => {
it.only("First test", () => {
cy.visit("http://localhost:9999");
cy.pause();
//if(cy.get("#importantBox").contains(...) {
//}
//else
{
Cypress.runner.stop();
console.log("Stopping...");
}
console.log("Visiting...");
cy.visit("http://localhost:9999/page1");
If a certain element doesn't exist in the page, I don't want the test to continue, so I try to stop it.
Unfortunately, I can see this in the console:
Stopping...
Visiting...
And the test keeps going without the necessary data...
So, can I somehow stop it without using huge if statements?
Stopping the test is relatively easy; the harder part is the condition check.
The Cypress runner is built on the Mocha framework, which has a .skip() method. If you call it in your else clause from inside the Cypress queue, the test will stop.
Two ways to access skip():

Using a function() callback gives access to this, which is the Mocha context:

it('stops on skip', function() {
  ...
  cy.then(() => this.skip()) // stop here
})

Using the cy.state() internal command (may be removed at some point):

it('stops on skip', () => {
  ...
  cy.then(() => cy.state('test').skip()) // stop here
})
You should be aware that all Cypress commands and queries run on an internal queue which is asynchronous to plain JavaScript in the test, like console.log("Visiting..."), so you won't get any useful indication from that line.
To run synchronous JavaScript on the queue, wrap it in a cy.then() command as shown above with the skip() method, so to console.log do
cy.then(() => console.log("Visiting..."))
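Putting the two together for the original question, here is a sketch that checks the asker's #importantBox condition synchronously inside the queue and skips when the element is missing (the elided contains(...) check from the commented-out code is left out):

it("First test", function () { // function() callback, so `this` is the Mocha context
  cy.visit("http://localhost:9999");
  cy.get("body").then(($body) => {
    // synchronous jQuery lookup - does not fail the test when the box is absent
    if ($body.find("#importantBox").length === 0) {
      this.skip(); // stops here, the test is reported as pending
    }
  });
  // only reached when the box exists
  cy.visit("http://localhost:9999/page1");
});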

Test that an API call does NOT happen in Cypress

I've implemented API data caching in my app so that if data is already present it is not re-fetched.
I can intercept the initial fetch
cy.intercept('**/api/things').as('api');
cy.visit('/things')
cy.wait('@api') // passes
To test the cache is working I'd like to explicitly test the opposite.
How can I modify the cy.wait() behavior similar to the way .should('not.exist') modifies cy.get() to allow the negative logic to pass?
// data is cached from first route, how do I assert no call occurs?
cy.visit('/things2')
cy.wait('@api')
  .should('not.have.been.called') // fails with "no calls were made"
Minimal reproducible example
<body>
  <script>
    setTimeout(() => {
      fetch('https://jsonplaceholder.typicode.com/todos/1')
    }, 300)
  </script>
</body>
Since we test a negative, it's useful to first make the test fail. Serve the above HTML and use it to confirm the test fails, then remove the fetch() and the test should pass.
The add-on package cypress-if can change default command behavior.

cy.get(selector)
  .if('exist').log('exists')
  .else().log('does.not.exist')
Assume your API calls are made within 1 second of the action that would trigger them - the cy.visit().
cy.visit('/things2')
cy.wait('@alias', { timeout: 1100 })
  .if(result => {
    expect(result.name).to.eq('CypressError') // confirm error was thrown
  })
You will need to overwrite the cy.wait() command to check for a chained .if() command.

Cypress.Commands.overwrite('wait', (waitFn, subject, selector, options) => {
  // Standard behavior for numeric waits
  if (typeof selector === 'number') {
    return waitFn(subject, selector, options)
  }
  // Modified alias wait when followed by .if()
  if (cy.state('current').attributes.next?.attributes.name === 'if') {
    return waitFn(subject, selector, options).then((pass) => pass, (fail) => fail)
  }
  // Standard alias wait
  return waitFn(subject, selector, options)
})
As yet, only cy.get() and cy.contains() are overwritten by default.
Custom command for the same logic

If the if() syntax doesn't feel right, the same logic can be used in a custom command:

Cypress.Commands.add('maybeWaitAlias', (selector, options) => {
  const waitFn = Cypress.Commands._commands.wait.fn
  // waitFn returns a Promise
  // which Cypress resolves to the `pass` or `fail` values
  // depending on which callback is invoked
  return waitFn(cy.currentSubject(), selector, options)
    .then((pass) => pass, (fail) => fail)
  // by returning the `pass` or `fail` value
  // we are stopping the "normal" test failure mechanism
  // and allowing downstream commands to deal with the outcome
})
cy.visit('/things2')
cy.maybeWaitAlias('@alias', { timeout: 1000 })
  .should(result => {
    expect(result.name).to.eq('CypressError') // confirm error was thrown
  })
I also tried cy.spy() but with a hard cy.wait() to avoid any latency in the app after the route change occurs.
const spy = cy.spy()
cy.intercept('**/api/things', spy)
cy.visit('/things2')
cy.wait(2000)
  .then(() => expect(spy).not.to.have.been.called)
Running it in a burn test of 100 iterations, this seems to be OK, but there is still a chance of a flaky test with this approach, IMO.
A better way would be to poll the spy recursively:
const spy = cy.spy()
cy.intercept('**/api/things', spy)
cy.visit('/things2')

const waitForSpy = (spy, options, start = Date.now()) => {
  const {timeout, interval = 30} = options;
  if (spy.callCount > 0) {
    return cy.wrap(spy.lastCall)
  }
  if ((Date.now() - start) > timeout) {
    return cy.wrap(null)
  }
  return cy.wait(interval, {log: false})
    .then(() => waitForSpy(spy, {timeout, interval}, start))
}

waitForSpy(spy, {timeout: 2000})
  .should('eq', null)
A neat little trick I learned from Gleb's Network course.
You will want to use cy.spy() with your intercept, and cy.get() on the alias, to be able to check that no calls were made.
// initial fetch
cy.intercept('**/api/things').as('api')
cy.visit('/things')
cy.wait('@api')

// 'METHOD' is a placeholder for the HTTP method of the request
cy.intercept('METHOD', '**/api/things', cy.spy().as('apiNotCalled'))

// trigger the fetch again - nothing is sent since the data is cached
cy.get('@apiNotCalled').should('not.have.been.called')

How to handle Exceptions in Cypress in a similar way we did in selenium

In Selenium we can handle exceptions: if an exception occurs in one test case, the run jumps to the next test case. I am confused about how to do the same in Cypress. Take the example below:
it('Test Case 1', function () {
  cy.visit('https://habitica.com/login')
  cy.get('form').find('input[id="usernameInput"]').click().type("username")
  cy.get('form').find('input[id="passwordInput"]').click().type("password")
  cy.get('.btn-info').click() // <-- the step that may fail
  cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
  cy.get('.start-day').find('button').click({ force: true })
})

it('Test Case 2', function () {
  cy.visit('https://habitica.com/login')
  cy.get('form').find('input[id="usernameInput"]').click().type("username")
  cy.get('form').find('input[id="passwordInput"]').click().type("password")
  cy.get('.btn-info').click()
  cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
  cy.get('.start-day').find('button').click({ force: true })
})
Let's say the browser is unable to find the element to click (marked above) in Test Case 1; the run should then jump to Test Case 2.
How can we do this in Cypress?
Please help me with this.
I mean exceptions like "unable to find element" and similar errors.
Beyond this example, how can we handle exceptions or errors in general?
The Cypress team says we should avoid conditional tests as much as possible (so you may need to change your approach). However, in your case you can include a conditional test:
cy.get('.btn-info').then((body) => {
  if (body.length > 0) { // continues if the element exists
    cy.get('.btn-info').click();
    cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
    cy.get('.start-day').find('button').click({ force: true })
  } // if the condition is not met, it skips these commands and moves to the next test
});
Thanks a lot for your response. Please have a look at this - I have used your code. '.btn-info' does not exist, so an exception occurs, which is fine. The problem is that it does not move on to the else branch: if the if condition fails, the else must execute, but it does not. Why is that?
it('First Test Case', function() {
  cy.visit('http://pb.folio3.com:9000/admin/#/login');
  cy.get('.btn-info').then((body) => { // THIS ELEMENT DOES NOT EXIST
    if (body.length > 0) { // continues if the element exists
      cy.get('.btn-info').click();
      cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
      cy.get('.start-day').find('button').click({ force: true })
    }
    else
    {
      cy.visit('https://www.facebook.com/'); // never executed
    }
  });
});

it('Second Test Case', function() {
  cy.visit('https://www.google.com/');
})
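For what it's worth, the reason the else branch never runs: cy.get('.btn-info') itself fails the whole test when the element is absent, so the .then() callback is never invoked at all. The usual workaround - a sketch of the standard conditional-testing pattern, not from the original thread - is to query a parent that always exists, such as body, and check for the child synchronously:

it('First Test Case', function() {
  cy.visit('http://pb.folio3.com:9000/admin/#/login');
  // body always exists, so this query cannot fail the test
  cy.get('body').then(($body) => {
    if ($body.find('.btn-info').length > 0) {
      cy.get('.btn-info').click();
      cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click();
      cy.get('.start-day').find('button').click({ force: true });
    } else {
      // now actually reachable when '.btn-info' does not exist
      cy.visit('https://www.facebook.com/');
    }
  });
});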

How to stub Fluture?

Background
I am trying to convert a code snippet from good old Promises into something using Flutures and Sanctuary:
https://codesandbox.io/embed/q3z3p17rpj?codemirror=1
Problem
Now, usually, using Promises, I can use a library like sinonjs to stub the promises, i.e. to fake their results, force them to resolve or reject, etc.
This is fundamental, as it helps one test several branch directions and make sure everything works as it is supposed to.
With Flutures, however, it is different. One cannot simply stub a Fluture, and I didn't find any sinon-esque libraries that could help either.
Questions
How do you stub Flutures?
Is there any specific recommendation for doing TDD with Flutures/Sanctuary?
I'm not sure, but those Flutures (this name! ... nevermind, the API looks cool) are plain objects, just like promises. They only have a more elaborate API and different behavior.
Moreover, you can easily create "mock" Flutures with Future.of and Future.reject instead of doing real API calls.
Yes, sinon contains sugar helpers like resolves and rejects, but they are just wrappers that can be implemented with callsFake.
So you can easily create a stub that returns a Fluture like this:
someApi.someFun = sinon.stub().callsFake((arg) => {
  assert.equals(arg, 'spam');
  return Future.of('bar');
});
Then you can test it like any other API.
The only problem is "asynchronicity", but that can be solved as proposed below.

// with async/await
it('spams with async', async () => {
  const result = await someApi.someFun('spam').promise();
  assert.equals(result, 'bar');
});

// or leveraging mocha's ability to wait for returned thenables
it('spams', () => {
  return someApi.someFun('spam')
    .promise()
    .then((result) => { assert.equals(result, 'bar'); });
});
As Zbigniew suggested, Future.of and Future.reject are great candidates for mocking, using plain old JavaScript or whatever tools or framework you like.
To answer part 2 of your question - specific recommendations for doing TDD with Fluture - there is of course no one true way it should be done. However, I do recommend investing a little time in the readability and ease of writing of your tests if you plan on using Futures all across your application.
This applies to anything you frequently include in tests, though, not just Futures.
The idea is that when you are skimming over test cases, you see developer intention rather than boilerplate to get your tests to do what you need them to.
In my case I use mocha & chai in the BDD style (given/when/then).
And for readability I created these helper functions:
const {expect} = require('chai');

exports.expectRejection = (f, onReject) =>
  f.fork(
    onReject,
    value => expect.fail(
      `Expected Future to reject, but was ` +
      `resolved with value: ${value}`
    )
  );

exports.expectResolve = (f, onResolve) =>
  f.fork(
    error => expect.fail(
      `Expected Future to resolve, but was ` +
      `rejected with value: ${error}`
    ),
    onResolve
  );
As you can see, nothing magical is going on: I simply fail the unexpected result and let you handle the expected path, so you can do more assertions with the value.
Now some tests would look like this:
const Future = require('fluture');
const {expect} = require('chai');
const {expectRejection, expectResolve} = require('../util/futures');

describe('Resolving function', () => {
  it('should resolve with the given value', done => {
    // Given
    const value = 42;
    // When
    const f = Future.of(value);
    // Then
    expectResolve(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});

describe('Rejecting function', () => {
  it('should reject with the given value', done => {
    // Given
    const value = 666;
    // When
    const f = Future.of(value); // deliberately resolves, to show the failure output below
    // Then
    expectRejection(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});
And running should give one pass and one failure:

✓ Resolving function should resolve with the given value: 1ms
1) Rejecting function should reject with the given value

1 passing (6ms)
1 failing

1) Rejecting function
   should reject with the given value:
   AssertionError: Expected Future to reject, but was resolved with value: 666
Do keep in mind that this should be treated as asynchronous code, which is why I always accept the done function as an argument in it() and call it at the end of my expected results. Alternatively, you could change the helper functions to return a promise and let mocha handle that.
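For instance, a promise-returning variant of expectResolve might look like the sketch below (not from the original answer; it wraps fork in a Promise so mocha can wait on the returned thenable, and the rejection message mirrors the helpers above):

// sketch: a promise-returning helper, so tests can drop the done callback
exports.expectResolveP = (f) =>
  new Promise((resolve, reject) =>
    f.fork(
      error => reject(new Error(
        `Expected Future to resolve, but was rejected with value: ${error}`
      )),
      resolve
    )
  );

// usage: mocha waits on the returned promise
it('should resolve with the given value', () => {
  return expectResolveP(Future.of(42))
    .then(out => expect(out).to.equal(42));
});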

Conditionally ignore individual tests with Karma / Jasmine

I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those files in my Karma config, but this means separating the failing tests from their logical describe blocks into their own files, and having more files with weird naming conventions would generally suck.
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in the test, or in some other function called from the test.
function skipIfCondition() {
  pending();
}

function someSkipCheck() {
  return true;
}

describe("test", function() {
  it("call pending directly by condition", function() {
    if (someSkipCheck()) {
      pending();
    }
    expect(1).toBe(2);
  });

  it("call conditionally skip function", function() {
    skipIfCondition();
    expect(1).toBe(3);
  });

  it("is executed", function() {
    expect(1).toBe(1);
  });
});
working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think it is the purest solution. In the test results you can see the counts of finished and skipped tests.
The simplest solution I see is to override the global describe and it functions so that they accept a third optional argument, which has to be a boolean or a function returning a boolean, telling whether the current suite/spec should be executed. When overriding, we check whether this third optional argument resolves to true, and if it does, we call xdescribe/xit, Jasmine's methods for skipping a suite/spec, instead of the original describe/it. This block has to be executed before the tests, but after Jasmine is included on the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js. Here is the code:
(function (global) {
  // save references to original methods
  var _super = {
    describe: global.describe,
    it: global.it
  };

  // override, take third optional "disable"
  global.describe = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xdescribe"
    if (disabled) {
      return global.xdescribe.apply(this, arguments);
    }
    // otherwise call original "describe"
    return _super.describe.apply(this, arguments);
  };

  // override, take third optional "disable"
  global.it = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xit"
    if (disabled) {
      return global.xit.apply(this, arguments);
    }
    // otherwise call original "it"
    return _super.it.apply(this, arguments);
  };
}(window));
And a usage example:

describe('foo', function () {
  it('should foo 1 ', function () {
    expect(true).toBe(true);
  });
  it('should foo 2', function () {
    expect(true).toBe(true);
  });
}, true); // disable suite

describe('bar', function () {
  it('should bar 1 ', function () {
    expect(true).toBe(true);
  });
  it('should bar 2', function () {
    expect(true).toBe(true);
  }, function () {
    return true; // disable spec
  });
});
See a working example here
I also stumbled upon this issue, where the idea was to add a chainable .when() method to describe and it, which would do pretty much the same as described above. It may look nicer but is a bit harder to implement.

describe('foo', function () {
  it('bar', function () {
    // ...
  }).when(anything);
}).when(something);

If you are really interested in this second approach, I'll be happy to play with it a little bit more and try to implement the chainable .when().
Update:
Jasmine uses the third argument as a timeout option (see the docs), so my code sample replaces this feature, which is not OK. I like @milanlempera's and @MarcoCI's answers better; mine seems kinda hacky and not intuitive. I'll try to update my solution soon so as not to break compatibility with Jasmine's default features.
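In the meantime, a non-conflicting variant is possible (a sketch, not the author's updated solution): wrapper helpers instead of a third argument, so Jasmine's timeout parameter stays untouched.

// pick the enabled or disabled variant up front instead of patching globals
function describeIf(condition, name, fn) {
  return (condition ? describe : xdescribe)(name, fn);
}

function itIf(condition, name, fn) {
  return (condition ? it : xit)(name, fn);
}

// usage, reusing someSkipCheck() from the pending() answer above
describeIf(!someSkipCheck(), 'foo', function () {
  itIf(!someSkipCheck(), 'should foo 1', function () {
    expect(true).toBe(true);
  });
});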
I can share my experience with this.
In our environment we have several tests running in different browsers and with different technologies.
In order to always run the same suites on all platforms and browsers, we have a helper file loaded in Karma (helper.js) with some feature-detection functions exposed globally.
I.e.
function isFullScreenSupported(){
  // run some feature detection code here
}
You can also use Modernizr for this.
In our tests we then wrap things in if/else blocks like the following:

it('should work with fullscreen', function(){
  if(isFullScreenSupported()){
    // run the test
  }
  // don't do anything otherwise
});
or for an async test:

it('should work with fullscreen', function(done){
  if(isFullScreenSupported()){
    // run the test
    ...
    done();
  } else {
    done();
  }
});
While it's a bit verbose it will save you time for the kind of scenario you're facing.
In some cases you can use user agent sniffing to detect a particular browser type or version - I know it is bad practice but sometimes there's effectively no other way.
Try this. I am using this solution in my projects.
it('should do something', function () {
  if (!/PhantomJS/.test(window.navigator.userAgent)) {
    expect(true).to.be.true;
  }
});
This will not run this particular test in PhantomJS, but will run it in other browsers.
Just rename the tests that you want to disable from it(...) to xit(...).

function xit: A temporarily disabled it. The spec will report as pending and will not be executed.
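For example, disabling a single spec while keeping it inside its describe block:

describe('fullscreen', function () {
  // was it(...) - now reported as pending, never executed
  xit('should work in PhantomJS', function () {
    expect(true).toBe(true);
  });
});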
