Say I have a Nightwatch test with two steps that fill out a form. As part of the first step, I need to dynamically query some data from the page (using the Selenium API), then use that data to make additional Selenium calls, and use the final result to make custom assertions. The reason I need to use the Selenium API is not that I don't know how to use the normal Nightwatch assertions, but rather that they are not sufficient to test the kinds of things I want to test. Additionally, at the end of the first step a button is clicked that moves the page on to the next part of the form (in preparation for the second step).
(code version):
module.exports = {
  'Part 1': (client) => {
    // ... do cool stuff
    client.SOME_SELENIUM_COMMAND(...SOME_ARGS..., (result) => {
      client.SOME_OTHER_SELENIUM_COMMAND(...SOME_OTHER_ARGS..., (result2) => {
        // ... do more cool stuff with result2
      });
    });
    // moves the page onto part 2
    client.click(SOME_BUTTON);
  },
  'Part 2': (client) => {
    // ... part 2 stuff
  }
};
My problem is this: the test moves on to the second part before the Selenium command's callback resolves.
I am aware that internally Nightwatch uses some kind of command queue and EventEmitters to make sure that commands are executed in the correct order; however, it appears that the click command at the end of Part 1 is being queued up before the commands in the callback can be.
You can use the Nightwatch perform command to perform some other commands and call done() when you want the test to continue to Part 2. You would do something like this:
module.exports = {
  'Part 1': (client) => {
    // ... do cool stuff
    client
      .perform(function(client, done) {
        client.SOME_SELENIUM_COMMAND(...SOME_ARGS..., (result) => {
          client.SOME_OTHER_SELENIUM_COMMAND(...SOME_OTHER_ARGS..., (result2) => {
            // ... do more cool stuff with result2
            // Call done to continue to Part 2
            done();
          });
        });
      })
      // moves the page onto part 2
      .click(SOME_BUTTON);
  },
  'Part 2': (client) => {
    // ... part 2 stuff
  }
};
Related
I have a Cypress test that has only one testing case (using v10.9 with the test GUI):
describe("Basic test", () => {
it.only("First test", () => {
cy.visit("http://localhost:9999");
cy.pause();
//if(cy.get("#importantBox").contains(...) {
//}
//else
{
Cypress.runner.stop();
console.log("Stopping...");
}
console.log("Visiting...");
cy.visit("http://localhost:9999/page1");
If a certain element doesn't exist in the page, I don't want the test to continue, so I try to stop it.
Unfortunately, I can see this in the console:
Stopping...
Visiting...
And the test keeps going without the necessary data...
So, can I somehow stop it without using huge if statements?
Stopping the test is relatively easy; the harder part is the condition check.
The Cypress runner is built on the Mocha framework, which has a .skip() method. If you issue it in your else clause and inside the Cypress queue, the test will stop.
Two ways to access skip():
Using a function() callback gives access to this which is the Mocha context
it('stops on skip', function() {
  ...
  cy.then(() => this.skip()) // stop here
})
Using the internal cy.state() command (which may be removed at some point)
it('stops on skip', () => {
  ...
  cy.then(() => cy.state('test').skip()) // stop here
})
You should be aware that all Cypress commands and queries run on an internal queue that executes asynchronously from plain JavaScript in the test, like console.log("Visiting..."), so you won't get any useful indication from that line.
To run synchronous JavaScript on the queue, wrap it in a cy.then() command, as shown above with the skip() method. So to console.log on the queue:
cy.then(() => console.log("Visiting..."))
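Putting the pieces together for the asker's scenario, here is a sketch. The #importantBox selector and URLs come from the question; the $body.find() presence check is the usual Cypress conditional-testing pattern and is an assumption about how the condition would be written:
describe("Basic test", () => {
  it.only("First test", function () {   // function() so `this` is the Mocha context
    cy.visit("http://localhost:9999");
    cy.get("body").then(($body) => {
      // the check runs on the queue, after the page has loaded
      if ($body.find("#importantBox").length === 0) {
        this.skip();                    // stops the test, reported as pending
      }
    });
    // only reached when the box exists
    cy.visit("http://localhost:9999/page1");
  });
});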
I've implemented API data caching in my app so that if data is already present it is not re-fetched.
I can intercept the initial fetch
cy.intercept('**/api/things').as('api');
cy.visit('/things')
cy.wait('@api') // passes
To test the cache is working I'd like to explicitly test the opposite.
How can I modify the cy.wait() behavior similar to the way .should('not.exist') modifies cy.get() to allow the negative logic to pass?
// data is cached from first route, how do I assert no call occurs?
cy.visit('/things2')
cy.wait('@api')
  .should('not.have.been.called') // fails with "no calls were made"
Minimal reproducible example
<body>
  <script>
    setTimeout(() => {
      fetch('https://jsonplaceholder.typicode.com/todos/1')
    }, 300)
  </script>
</body>
Since we are testing a negative, it's useful to first make the test fail. Serve the above HTML and use it to confirm the test fails; then remove the fetch() and the test should pass.
The add-on package cypress-if can change default command behavior.
cy.get(selector)
.if('exist').log('exists')
.else().log('does.not.exist')
Assume your API calls are made within 1 second of the action that would trigger them - the cy.visit().
cy.visit('/things2')
cy.wait('@alias', {timeout: 1100})
  .if(result => {
    expect(result.name).to.eq('CypressError') // confirm error was thrown
  })
You will need to overwrite the cy.wait() command to check for chained .if() command.
Cypress.Commands.overwrite('wait', (waitFn, subject, selector, options) => {
  // Standard behavior for numeric waits
  if (typeof selector === 'number') {
    return waitFn(subject, selector, options)
  }
  // Modified alias wait with a following if()
  if (cy.state('current').attributes.next?.attributes.name === 'if') {
    return waitFn(subject, selector, options).then((pass) => pass, (fail) => fail)
  }
  // Standard alias wait
  return waitFn(subject, selector, options)
})
As yet only cy.get() and cy.contains() are overwritten by default.
Custom Command for same logic
If the if() syntax doesn't feel right, the same logic can be used in a custom command
Cypress.Commands.add('maybeWaitAlias', (selector, options) => {
  const waitFn = Cypress.Commands._commands.wait.fn
  // waitFn returns a Promise
  // which Cypress resolves to the `pass` or `fail` values
  // depending on which callback is invoked
  return waitFn(cy.currentSubject(), selector, options)
    .then((pass) => pass, (fail) => fail)
  // by returning the `pass` or `fail` value
  // we are stopping the "normal" test failure mechanism
  // and allowing downstream commands to deal with the outcome
})
cy.visit('/things2')
cy.maybeWaitAlias('@alias', {timeout: 1000})
  .should(result => {
    expect(result.name).to.eq('CypressError') // confirm error was thrown
  })
I also tried cy.spy() but with a hard cy.wait() to avoid any latency in the app after the route change occurs.
const spy = cy.spy()
cy.intercept('**/api/things', spy)
cy.visit('/things2')
cy.wait(2000)
.then(() => expect(spy).not.to.have.been.called)
Running it in a burn test of 100 iterations, this seems to be OK, but there is still a chance of a flaky test with this approach, IMO.
A better way would be to poll the spy recursively:
const spy = cy.spy()
cy.intercept('**/api/things', spy)
cy.visit('/things2')
const waitForSpy = (spy, options, start = Date.now()) => {
  const {timeout, interval = 30} = options;
  if (spy.callCount > 0) {
    return cy.wrap(spy.lastCall)
  }
  if ((Date.now() - start) > timeout) {
    return cy.wrap(null)
  }
  return cy.wait(interval, {log: false})
    .then(() => waitForSpy(spy, {timeout, interval}, start))
}
waitForSpy(spy, {timeout:2000})
.should('eq', null)
A neat little trick I learned from Gleb's Network course.
You will want to use cy.spy() with your intercept, and cy.get() on the alias to be able to check that no calls were made.
// initial fetch
cy.intercept('**/api/things').as('api');
cy.visit('/things')
cy.wait('@api')
cy.intercept('METHOD', '**/api/things', cy.spy().as('apiNotCalled'))
// trigger the fetch again but will not send since data is cached
cy.get('@apiNotCalled').should('not.been.called')
I have a Cypress test that clicks a link, which runs a method and then brings the user to a new page on a different website entirely.
This is the test:
it('Cards', () => {
  cy.get('#my-id').click({ force: true })
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.eq(null);
  cy.get(`a:visible[id*="the_link"]`).first().click()
  Cypress.on('fail', (error, runnable) => {
    var itemID = localStorage.getItem('itemID')
    expect(itemID).to.not.equal(null)
    // end test but mark as a success
  })
})
The problem is that I get a cross-origin error, which is why I added the Cypress.on('fail') piece of code. Now the test does not fail, but it waits for minutes because it is attempting to intercept more data. There is a lot of other stuff going on, so I cannot change the intercept logic.
All I want is to end the test and mark it a success on the line that says // end test but mark as a success.
Is this possible?
If you are using Chrome-based browsers, you might want to try setting chromeWebSecurity to false.
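For reference, a minimal sketch of where that flag lives, assuming Cypress 10+ (in Cypress 9 and below the same key goes in cypress.json):
// cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  chromeWebSecurity: false,
})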
Other browsers (specifically Firefox) might still run into that issue, in which case you can either handle the fail event properly or split the interaction with the other origin into a separate it statement. The latter is pretty simple and should work in all browsers:
it('Cards', () => {
  cy.get('#my-id').click({ force: true })
}) // if that gets you to another superdomain, we go to another it

it('Another superdomain', () => {
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.eq(null);
  cy.get(`a:visible[id*="the_link"]`).first().click()
}) // if that is yet another superdomain, we go to another it again

it('Third superdomain', () => {
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.not.equal(null)
  // test ends by itself
})
As you have written the on-fail handler, you just provide additional functionality for the failure but do not instruct Cypress to skip the test status finalisation.
That can be done by returning false on that specific event:
Cypress.on('fail', () => false);
Mind that you can probably include an assertion before returning false if it is really required.
Cypress.on('fail', () => { // no params are needed, though
  var itemID = localStorage.getItem('itemID')
  expect(itemID).to.not.equal(null)
  // end test but mark as a success
  return false
})
That piece of code must be provided right before the potential failure event. The test will still be interrupted by the failure, but it will be marked as passed.
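If you would rather not leave a spec-wide Cypress.on('fail') handler installed, cy is also an event emitter, and handlers bound with cy.on are removed automatically when the test ends. A sketch of the same return-false trick scoped to a single test (the selectors are the asker's):
it('Cards', () => {
  // this handler only lives for the duration of this test
  cy.on('fail', (error) => {
    const itemID = localStorage.getItem('itemID')
    expect(itemID).to.not.equal(null)
    return false // swallow the failure so the test is marked as passed
  })
  cy.get('#my-id').click({ force: true })
  cy.get(`a:visible[id*="the_link"]`).first().click()
})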
This is a huge anti-pattern and generalisation, as you use a Cypress.on('fail') event. A better way is to anchor any sort of negative logic to a more specific event, one of those listed here. However, the specific hack above might work for avoiding a cross-origin error in non-Chrome-based browsers if it is for sure the last thing that happens in that test.
I'm working on some Jasmine end-to-end testing using the Protractor test runner. The application I am testing is a simple webpage. I already have a test scenario that works fine.
Now I'd like to improve my code so that I can use the same script to run the testing scenario twice.
The first time: the test would be performed on the English version of the page
The second time: on a translated version of the same page.
Here is my code:
var RandomSentenceInThePage = ["Sentence in English", "Phrase en Francais"];
var i;
var signInButton;
var TranslationButton;
var RandomSentenceInThePageBis;
i = 0;
//Runs the testing scenario twice
while (i < 2) {
  describe('TC1 - The registration Page', function() {
    // the translation is done on the second iteration
    if (i != 0) {
      beforeEach(function() {
        browser.ignoreSynchronization = true;
        browser.get('https://Mywebsite.url.us/');
        // we get the translation button, then click on it
        TranslationButton = element(by.css('.TranslationButtonClass'));
        TranslationButton.click();
      });
    }
    // on the first iteration, we run the test on the untranslated page...
    else {
      beforeEach(function() {
        browser.ignoreSynchronization = true; // necessary for the browser.get() method to work inside the it statements
        browser.get('https://Mywebsite.url.us/');
      });
    }
    it('should display the log in page', function() {
      // accessing the browser is done in the beforeEach section
      signInButton = element(by.css('.SignInButtonClass'));
      signInButton.click();
      RandomSentenceInThePageBis = element(by.css('.mt-4.text-center.signin-header')).getText();
      /*******************[HERE IS WHERE THE PROBLEM IS]*******************/
      expect(RandomSentenceInThePageBis.getText()).toEqual(RandomSentenceInThePage[i]);
      /*******************************************************************/
    });
  });
}
I have highlighted the problematic section. The code keeps running even before RandomSentenceInThePage[i] and RandomSentenceInThePageBis are compared, and by the time they are finally compared, the loop is already done.
From what I have seen in other related topics, because of the expect statements and getText() methods I am dealing with promises, and I have to wait for them to resolve. After trying for a whole day, I think I could use a hint on how to deal with this promise resolution. Let me know if you need more information.
Change the while loop to a for loop, and declare the variable i with let rather than var.
let can declare a variable at code-block scope (a for or if block, etc.), but var can't.
Because the Protractor API executes asynchronously, by the time the expect() executes for the second time, the value of a var-declared i has already become 2, not 1.
for (let i = 0; i < 2; i++) {
  describe('TC1 - The registration Page', function() {
    ....
  })
}
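To see why let matters here, a minimal illustration (not part of the original answer) of the capture difference when callbacks run asynchronously:
for (var i = 0; i < 2; i++) {
  setTimeout(() => console.log('var:', i)); // logs 2, 2 - one binding shared by all iterations
}
for (let j = 0; j < 2; j++) {
  setTimeout(() => console.log('let:', j)); // logs 0, 1 - a fresh binding per iteration
}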
I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those in my Karma config, but this means that I will have to separate out the individual tests that are failing into their own files, separating them from their logical describe blocks and having more files with weird naming conventions would generally suck.
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in the test, or in some other function called from the test.
function skipIfCondition() {
  pending();
}

function someSkipCheck() {
  return true;
}

describe("test", function() {
  it("call pending directly by condition", function() {
    if (someSkipCheck()) {
      pending();
    }
    expect(1).toBe(2);
  });

  it("call conditionally skip function", function() {
    skipIfCondition();
    expect(1).toBe(3);
  });

  it("is executed", function() {
    expect(1).toBe(1);
  });
});
working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think it is the purest solution. In the test results you can see the count of finished and skipped tests.
The simplest solution that I see is to override the global describe and it functions to make them accept a third optional argument, which has to be a boolean or a function returning a boolean, telling whether or not the current suite/spec should be executed. When overriding, we check whether this third optional argument resolves to true, and if it does, we call xdescribe/xit, Jasmine's methods for skipping a suite/spec, instead of the original describe/it. This block has to be executed before the tests, but after Jasmine is included in the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js. Here is the code:
(function (global) {
  // save references to the original methods
  var _super = {
    describe: global.describe,
    it: global.it
  };

  // override, take a third optional "disable" argument
  global.describe = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xdescribe"
    if (disabled) {
      return global.xdescribe.apply(this, arguments);
    }
    // otherwise call the original "describe"
    return _super.describe.apply(this, arguments);
  };

  // override, take a third optional "disable" argument
  global.it = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xit"
    if (disabled) {
      return global.xit.apply(this, arguments);
    }
    // otherwise call the original "it"
    return _super.it.apply(this, arguments);
  };
}(window));
And a usage example:
describe('foo', function () {
  it('should foo 1', function () {
    expect(true).toBe(true);
  });
  it('should foo 2', function () {
    expect(true).toBe(true);
  });
}, true); // disable suite

describe('bar', function () {
  it('should bar 1', function () {
    expect(true).toBe(true);
  });
  it('should bar 2', function () {
    expect(true).toBe(true);
  }, function () {
    return true; // disable spec
  });
});
See a working example here
I've also stumbled upon this issue, where the idea was to add a chained .when() method to describe and it, which would do pretty much the same as I've described above. It may look nicer but is a bit harder to implement.
describe('foo', function () {
  it('bar', function () {
    // ...
  }).when(anything);
}).when(something);
If you are really interested in this second approach, I'll be happy to play with it a little bit more and try to implement the chained .when().
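For the curious, a rough sketch of one way the spec-level .when() could be built. It leans on Jasmine 2.x internals, namely that it() returns the Spec object and that specs expose a pend() method; both are implementation details that may change between versions:
(function (global) {
  var _it = global.it;
  global.it = function (name, fn, timeout) {
    var spec = _it(name, fn, timeout);
    // attach a chainable .when(); a truthy condition marks the spec pending
    spec.when = function (condition) {
      var disabled = typeof condition === 'function' ? condition() : condition;
      if (disabled) {
        spec.pend(); // internal Jasmine API, same effect as xit
      }
      return spec;
    };
    return spec;
  };
}(window));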
Update:
Jasmine uses the third argument as a timeout option (see the docs), so my code sample replaces this feature, which is not OK. I like @milanlempera's and @MarcoCI's answers better; mine seems kinda hacky and not intuitive. I'll try to update my solution soon anyway, so as not to break compatibility with Jasmine's default features.
I can share my experience with this.
In our environment we have several tests running with different browsers and different technologies.
In order to always run the same suites on all the platforms and browsers, we have a helper file loaded in Karma (helper.js) with some feature-detection functions exposed globally.
For example:
function isFullScreenSupported() {
  // one possible implementation (an assumption; the original left this as a stub):
  // detect the Fullscreen API, including vendor-prefixed variants
  var el = document.documentElement;
  return !!(el.requestFullscreen || el.webkitRequestFullscreen ||
            el.mozRequestFullScreen || el.msRequestFullscreen);
}
You can also use Modernizr for this.
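In Karma, loading the helper before the specs is just a matter of file ordering; a sketch, where the paths are assumptions about your layout:
// karma.conf.js
module.exports = function (config) {
  config.set({
    files: [
      'test/helper.js',    // feature-detection helpers load first
      'test/**/*.spec.js'  // then the actual specs
    ]
  })
}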
In our tests, we then wrap things in if/else blocks like the following:
it('should work with fullscreen', function() {
  if (isFullScreenSupported()) {
    // run the test
  }
  // don't do anything otherwise
});
or for an async test
it('should work with fullscreen', function(done) {
  if (isFullScreenSupported()) {
    // run the test
    ...
    done();
  } else {
    done();
  }
});
While it's a bit verbose, it will save you time for the kind of scenario you're facing.
In some cases you can use user-agent sniffing to detect a particular browser type or version. I know it is bad practice, but sometimes there's effectively no other way.
Try this. I am using this solution in my projects.
it('should do something', function () {
  if (!/PhantomJS/.test(window.navigator.userAgent)) {
    expect(true).toBe(true);
  }
});
This will not run this particular test in PhantomJS, but will run it in other browsers.
Just rename the tests that you want to disable from it(...) to xit(...)
function xit: A temporarily disabled it. The spec will report as
pending and will not be executed.