I am running into an issue where my Protractor test fails, not because the actual functionality is broken, but apparently because of the default timeout interval. The code runs fine and performs all the operations on the web page, and just when you expect a green dot it errors out.
Before anyone marks this as a duplicate, I would like to point out that I have already tried the approaches below after going through the answers to other similar questions.
Included a callback argument in the "it" block and called it after the test steps.
Changed defaultTimeoutInterval in the conf.js file to 30 seconds (30000 ms).
Tried using async/await in the last function to wait for the promise to resolve.
I would not only like to find an answer to this, but also get an explanation of what exactly Protractor is trying to convey here. To me, as a novice in JavaScript and Protractor, this looks like a very vague message.
Below is my spec file:
describe("Validating Booking for JetBlue WebApplication", function(){
var firstPage = require("../PageLogic/jetBlueHomePage.js");
it("Validating One Way Booking", function(pleaserun){
firstPage.OneWayTrip();
firstPage.EnterFromCity("California");
firstPage.EnterToCity("New York");
firstPage.SelectDepartureDate();
firstPage.searchFlights();
pleaserun();
});
});
Below is my Page file:
var homePage = function(){

    this.OneWayTrip = function(){
        element(by.xpath("//label[text()=' One-way ']/parent::jb-radio/div")).click();
    };

    this.EnterFromCity = function(FromCityName){
        element(by.xpath("//input[@placeholder='Where from?']")).clear();
        element(by.xpath("//input[@placeholder='Where from?']")).sendKeys(FromCityName);
        browser.sleep(3000);
        element(by.xpath("//ul[@id='listbox']/li[1]")).click();
    };

    this.EnterToCity = function(ToCityName){
        element(by.xpath("//input[@placeholder='Where to?']")).clear();
        element(by.xpath("//input[@placeholder='Where to?']")).sendKeys(ToCityName);
        browser.sleep(3000);
        element(by.xpath("//ul[@id='listbox']/li[1]")).click();
        browser.sleep(3000);
    };

    this.SelectDepartureDate = function(){
        element(by.xpath("//input[@placeholder='Select Date']")).click();
        browser.sleep(3000);
        element(by.xpath("//span[text()=' 24 ']")).click();
    };

    this.NumberOfPassengers = function(){
        element(by.xpath("//button[@pax='traveler-selector']")).click();
    };

    this.searchFlights = async function(){
        await element(by.buttonText('Search flights')).click();
    };
};

module.exports = new homePage();
Below is the conf.js file:
exports.config = {
    directConnect: true,
    specs: ["TestSpecs/jetBlueBookingTest.js"],
    onPrepare: function(){
        browser.get("https://www.jetblue.com/");
        browser.manage().window().maximize();
    },
    jasmineNodeOpts: {
        showColors: true,
        defaultTimeoutInterval: 30000,
        isVerbose: true,
        includeStackTrace: true
    }
};
Seeking help from all the Protractor pros out there to solve this and hopefully help me understand the concept.
The message means that an async script was run in the browser and it has not returned a result within the expected time limit. It would be useful if you could share the full error stack; the root cause is probably mentioned deeper down in it.
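If the failure really is that browser-side script timeout rather than Jasmine's spec timeout, the relevant Protractor setting is allScriptsTimeout, which is separate from jasmineNodeOpts.defaultTimeoutInterval. A minimal conf.js sketch with purely illustrative values:
// Hedged sketch: raising the async-script timeout in conf.js.
// allScriptsTimeout and getPageTimeout are standard Protractor options;
// the values here are only illustrative.
exports.config = {
    directConnect: true,
    specs: ["TestSpecs/jetBlueBookingTest.js"],
    allScriptsTimeout: 30000,       // how long Protractor waits for async scripts in the browser
    getPageTimeout: 30000,          // how long a browser.get() is allowed to take
    jasmineNodeOpts: {
        defaultTimeoutInterval: 60000 // how long Jasmine allows a single spec to run
    }
};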
it('should for something', function check(done) {
    browser.sleep(2000);
    $('.csTx').isPresent().then(function(result) {
        if (result) {
            done();
        } else {
            xPage.clickBack();
            check(done);
        }
    });
}, 30000);
Can someone explain how done() works and what it is for? I googled it but cannot find any explanation that is easy enough for me to understand. I am automating with Protractor and Jasmine. Please consider the above code.
You need to use done if your test creates a parallel TaskQueue in your test's Control Flow (read more about promises and control flow).
For example:
describe('Control Flow', function() {
    function logFromPromise(text) {
        var deferred = protractor.promise.defer();
        deferred.then(function() {
            console.log(text);
        });
        deferred.fulfill();
        return deferred;
    }

    it('multiple control flows', function() {
        setTimeout(function() {
            logFromPromise('1');
        });
        logFromPromise('0');
    });
});
Calling setTimeout creates a parallel TaskQueue in the control flow:
ControlFlow
| TaskQueue
| | Task<Run fit("multiple control flows") in control flow>
| | | TaskQueue
| | | | Task <logFromPromise('0');>
| TaskQueue
| | Task <setTimeout>
Protractor thinks the test is "done" after 0 is printed. In this example, 1 will probably be printed after the test has completed. To make Protractor wait for the Task <setTimeout>, you need to call the done function:
it('multiple control flows', function(done) {
    setTimeout(function() {
        logFromPromise('1').then(function() {
            done();
        });
    });
    logFromPromise('0');
});
If you can, let Protractor handle when the test is "done". Having parallel TaskQueues can lead to unexpected race conditions in your test.
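As a sketch of that preferred route (assuming a Jasmine version, 2.7 or later, that waits for a promise returned from a spec, or Protractor's jasminewd adapter doing the same), you can simply return the promise instead of taking a done callback:
it('waits because the promise is returned', function() {
    // Returning the promise lets the framework wait for it to settle
    // before declaring the spec finished; no done callback is needed.
    return new Promise(function(resolve) {
        setTimeout(function() {
            console.log('1');
            resolve();
        });
    });
});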
Here is a sample describe block that you can run to see what happens. I should mention that I don't use Protractor, so there may be additional considerations specific to its capabilities.
describe('Done functionality', function(){
    var echoInOneSecond = function(value){
        console.log('creating promise for ', value);
        return new Promise(function(resolve, reject){
            console.log('resolving with ', value);
            resolve(value);
        });
    };

    it('#1 this will untruly PASS', function(){
        var p = echoInOneSecond('value #1');
        p.then(function(value){
            console.log('#1 expecting... and value is ', value);
            expect(value).toBe('value #1');
        });
    });

    it('#2 this will NOT FAIL', function(){
        var p = echoInOneSecond('value #2');
        p.then(function(value){
            console.log('#2 expecting... and value is ', value);
            expect(value).not.toBe('value #2');
        });
    });

    it('#3 = will truly FAIL', function(done){
        var p = echoInOneSecond('value #3');
        p.then(function(value){
            console.log('#3 expecting... and value is ', value);
            expect(value).not.toBe('value #3');
            done();
        });
    });

    it('#4 = this will truly PASS', function(done){
        var p = echoInOneSecond('value #4');
        p.then(function(value){
            console.log('#4 expecting... and value is ', value);
            expect(value).toBe('value #4');
            done();
        });
    });
});
When running the test you will notice the sequence: first, promises #1, #2 and #3 are created and resolved one by one. Note that the expectations for #1 and #2 will not have run yet, because promises are resolved asynchronously.
Then, since test #3 uses done, after promise #3 is created the then callbacks of all the earlier promises are evaluated: you will see '#1 expecting...' and '#2 expecting...', but Jasmine no longer cares, because tests #1 and #2 have already finished and their verdicts are declared. Only after that is the #3 expectation made, and it will truly fail, because Jasmine does take into account everything that happens before done() is called.
And then you can watch test #4 follow the normal flow: creating the promise, resolving it, making the expectation, everything considered by Jasmine, so the expectation will truly pass.
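For completeness, here is a hedged variant that assumes a Jasmine version (2.7 or later) that understands async specs; it would sit inside the same describe block so echoInOneSecond is in scope, and it needs no explicit done:
it('#5 = async/await variant, will truly PASS', async function(){
    // Awaiting the promise keeps the expectation inside the spec's lifetime,
    // so Jasmine records the result just as it does when done() is used.
    var value = await echoInOneSecond('value #5');
    console.log('#5 expecting... and value is ', value);
    expect(value).toBe('value #5');
});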
I haven't used Protractor. For Jasmine, my understanding is that done makes Jasmine wait, but not in the traditional sense of a timeout; it is not a timer that always runs. I think of done as a checkpoint in Jasmine: when Jasmine sees that a spec uses done, it knows it cannot proceed to the next step (run the next spec, or mark this spec as finished, i.e. declare the verdict of the current spec) until the code path containing done has been run.
For example, Jasmine passes this spec even though it should fail, because it doesn't wait for the setTimeout callback to be called.
fit('lets check done', () => {
    let i = 0;
    setTimeout(function(){
        console.log("in timeout");
        expect(i).toBeTruthy(); // the spec should fail as i is 0, but Jasmine passes it!
    }, 1000);
    // Jasmine reaches this point, sees there is no expectation, and passes the spec.
    // It doesn't wait for the async setTimeout code to run.
});
But if my intention is that Jasmine waits for the async code in setTimeout, then I use done in the async code:
fit('lets check done', (done) => {
    let i = 0;
    setTimeout(function(){
        console.log("in timeout");
        expect(i).toBeTruthy(); // with done, the spec now correctly fails with "Expected 0 to be truthy."
        done(); // this makes Jasmine wait for this code path to be reached before declaring the verdict of the spec
    }, 1000);
});
Note that done should be called where I want to check the assertions.
fit('lets check done', (done) => {
    let i = 0;
    setTimeout(function(){
        console.log("in timeout");
        expect(i).toBeTruthy(); // done not used at the right place, so the spec will incorrectly pass again!
        // done should have been called here, as I am asserting in this code path.
    }, 1000);
    done(); // using done here is not right, as this line is hit in the normal execution of the it block
});
In summary, think of done as telling Jasmine: "I am done now", or "I'll be done when this code is hit".
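One related detail worth knowing: since Jasmine 2.1 the done callback also exposes done.fail(message), which lets you fail the spec explicitly on a rejection instead of waiting for the timeout. A small sketch, where fetchSomething is a hypothetical promise-returning helper:
it('fails fast on a rejected promise instead of timing out', (done) => {
    fetchSomething()                    // hypothetical helper that returns a promise
        .then(function(result) {
            expect(result).toBeDefined();
            done();                     // verdict is declared only after the assertion has run
        })
        .catch(function(err) {
            done.fail(err);             // fail the spec explicitly rather than letting it time out
        });
});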
I have some Protractor tests using Axe (AxeBuilder) like the following:
var AxeBuilder = require('path_to_the/axe-webdriverjs');

describe('Page under test', function() {
    'use strict';

    it('should be accessible', function() {
        AxeBuilder(browser.driver).analyze(function(results) {
            expect(results.violations.length).toBe(0);
        });
    });
});
How would I go about passing results.violations out to Jasmine so that it can be reported in my Jasmine Reporter?
I am currently looking to use the following Jasmine JSON Reporter:
https://github.com/DrewML/jasmine-json-test-reporter
But I will eventually customise this to output HTML.
I found a fix for this in the end.
It turns out that the solution is to write a custom Jasmine matcher, like this: http://jasmine.github.io/2.4/custom_matcher.html
This allows you to control what information ends up in result.message.
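For reference, a rough sketch of what such a matcher could look like; the matcher name toHaveNoAxeViolations and the message format are my own illustration, and only the Jasmine 2.x addMatchers/compare API and the analyze callback from the question are assumed:
beforeEach(function() {
    jasmine.addMatchers({
        toHaveNoAxeViolations: function() {
            return {
                compare: function(results) {
                    var violations = results.violations || [];
                    return {
                        pass: violations.length === 0,
                        // this message is what ends up in result.message for the reporter
                        message: 'Expected no accessibility violations but found: ' +
                            JSON.stringify(violations, null, 2)
                    };
                }
            };
        }
    });
});

it('should be accessible', function(done) {
    AxeBuilder(browser.driver).analyze(function(results) {
        expect(results).toHaveNoAxeViolations();
        done();
    });
});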
I am running this test and it seems that when the test gets to the function portion of my describe block, it skips the whole thing and gives a false positive for passing.
// required libraries
var webdriverio = require('webdriverio');
var describe = require('describe');
var after = require('after');

console.log("Lets begin");

describe('Title Test for google site', function() {
    console.log("MARTY!!");

    // set timeout to 10 seconds
    this.timeout(10000);
    var driver = {};
    console.log("before we start");

    // hook to run before tests
    before(function (done) {
        // load the driver for browser
        console.log("before browser");
        driver = webdriverio.remote({ desiredCapabilities: {browserName: 'firefox'} });
        driver.init(done);
    });

    it('should load correct page and title', function () {
        // load page, then call function()
        return driver
            .console.log("before site")
            .url('http://www.ggogle.com')
            // get title, then pass title to function()
            .getTitle().then(function (title) {
                // verify title
                (title).should.be.equal("google");
                // uncomment for console debug
                // console.log('Current Page Title: ' + title);
            });
    });
});

// a "hook" to run after all tests in this block
after(function(done) {
    driver.end(done);
});

console.log("Fin");
This is the output I get
Lets begin
Fin
[Finished in 0.4s]
As you can see it skips everything else.
This is wrong and should be removed:
var describe = require('describe');
var after = require('after');
Mocha's describe and after are added to the global scope of your test files by Mocha itself. You do not need to import them. Look at the examples on the Mocha site: nowhere do they tell you to import describe and its siblings.
To get Mocha to add describe and its siblings, you need to run your test through mocha; running node directly on a test file won't work. And for mocha to be found it has to be in your PATH. If you installed it locally, it is (most likely) not in your PATH, so you have to give the full path: ./node_modules/.bin/mocha.
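For reference, a trimmed sketch of the same test with those requires removed, assuming a webdriverio v4-style remote()/init() API and Node's built-in assert module; run it with ./node_modules/.bin/mocha, not with node directly:
var assert = require('assert');
var webdriverio = require('webdriverio');

// describe/it/before/after come from Mocha itself when the file is run via mocha
describe('Title Test for google site', function () {
    this.timeout(10000);
    var driver;

    before(function (done) {
        driver = webdriverio.remote({ desiredCapabilities: { browserName: 'firefox' } });
        driver.init(done);
    });

    it('should load correct page and title', function () {
        return driver
            .url('https://www.google.com')
            .getTitle()
            .then(function (title) {
                assert.strictEqual(title, 'Google');
            });
    });

    after(function (done) {
        driver.end(done);
    });
});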
I want to build a test that makes sure the Jasmine test is using a file called "SpecRunner.html".
How do I do that?
You can check window.location (at least if you are running the tests in a browser).
Quick POC code:
describe("File name", function () {
it("ends with SpecRunner.html", function () {
var fileName = window.location.pathname.split("/").pop();
expect(window.location.pathname).toMatch(/SpecRunner.html$/);
});
});