I'm working on some Jasmine end-to-end testing using the Protractor test runner. The application I am testing is a simple web page, and I already have a test scenario that works fine.
Now I'd like to improve my code so that I can use the same script to run the testing scenario twice.
The first time: the test would be performed on the English version of the page
The second time: on a translated version of the same page.
Here is my code:
var RandomSentenceInThePage = ["Sentence in English", "Phrase en Francais"];
var i;
var signInButton;
var TranslationButton;
var RandomSentenceInThePageBis;
i = 0;
//Runs the testing scenario twice
while (i < 2) {
    describe('TC1 - The registration Page', function() {
        //The translation is done on the second iteration
        if (i != 0) {
            beforeEach(function() {
                browser.ignoreSynchronization = true;
                browser.get('https://Mywebsite.url.us/');
                //We get the translation button, then click on it
                TranslationButton = element(by.css('.TranslationButtonClass'));
                TranslationButton.click();
            });
        }
        //On the first iteration, we run the test on the not translated page...
        else {
            beforeEach(function() {
                browser.ignoreSynchronization = true; //Necessary for the browser.get() method to work inside the it statements.
                browser.get('https://Mywebsite.url.us/');
            });
        }
        it('should display the log in page', function() {
            //Accessing the browser is done in the beforeEach section
            signInButton = element(by.css('.SignInButtonClass'));
            signInButton.click();
            RandomSentenceInThePageBis = element(by.css('.mt-4.text-center.signin-header')).getText();
            /*******************[HERE IS WHERE THE PROBLEM IS]*******************/
            expect(RandomSentenceInThePageBis.getText()).toEqual(RandomSentenceInThePage[i]);
        });
        /*******************************************************************/
    });
    i++;
}
I have highlighted the problematic section. The code keeps running before RandomSentenceInThePage[i] and RandomSentenceInThePageBis are actually compared, and by the time they are finally compared, the loop is already done.
From what I have seen in other related topics, because of the use of expect statements and getText() methods, I am dealing with promises and have to wait for them to be resolved. After trying for the whole day, I think I could use a hint on how to deal with this promise resolution. Let me know if you need more information.
Change the while loop to a for loop and declare the variable i with let rather than var.
let declares the variable with block scope (a for or if block, etc.), which var cannot do.
Because the Protractor API executes asynchronously, by the time the expect()... runs for the second time, the value of i has already become 2, not 1.
for (let i = 0; i < 2; i++) {
    describe('TC1 - The registration Page', function() {
        ....
    });
}
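For reference, here is a fuller sketch of how the restructured spec could look. This is my own illustration rather than part of the original answer; it reuses the selectors from the question and relies on the fact that expect() can take the promise returned by getText() directly, so Protractor resolves it before comparing:

var RandomSentenceInThePage = ["Sentence in English", "Phrase en Francais"];

for (let i = 0; i < 2; i++) {
    describe('TC1 - The registration Page (language ' + i + ')', function() {
        beforeEach(function() {
            browser.ignoreSynchronization = true;
            browser.get('https://Mywebsite.url.us/');
            if (i !== 0) {
                // Switch to the translated version on the second pass
                element(by.css('.TranslationButtonClass')).click();
            }
        });

        it('should display the log in page', function() {
            element(by.css('.SignInButtonClass')).click();
            // getText() returns a promise; passing it to expect() lets
            // Protractor/Jasmine resolve it before the comparison
            var sentence = element(by.css('.mt-4.text-center.signin-header')).getText();
            expect(sentence).toEqual(RandomSentenceInThePage[i]);
        });
    });
}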
I understand that Cypress is designed for e2e testing and is not a generic browser automation tool. However, I'm wondering if it's possible to use Cypress to log into a website and crawl through it pulling all the hrefs, thus building a list of the pages you'd like to test.
For a large website, this seems almost necessary for certain kinds of e2e tests, but I'm stuck trying to implement it. Here's what I have:
describe('Link crawler', () => {
    const linksQueue = ['www.example.com/'];
    const seen = {};

    before(() => {
        cy.login(email, password);
    });

    // Build a queue, typical BFS algorithm.
    // For all links in the queue, pull out the anchor tags and add new hrefs to the queue.
    // Mark links as seen so you don't infinitely loop.
    while (linksQueue.length) {
        let currentLink = linksQueue.pop();
        it(`${currentLink} should have links.`, () => {
            cy.visit(`${currentLink}`);
            cy.window().then(win => {
                let anchorTags = win.document.getElementsByTagName('a');
                for (let idx = 0; idx < anchorTags.length; ++idx) {
                    let newLink = anchorTags[idx].href;
                    if (!(newLink in seen)) {
                        linksQueue.unshift(newLink);
                    }
                    seen[newLink] = true;
                }
            });
        });
    }
});
The problem with the above is that Cypress only processes what's in the queue to begin with, so this will run and extract links, but only on 'www.example.com/'.
How can I use Cypress to work over a queue of links that continues to grow? Is there something else I can use besides cy.window?
I've made this work using Puppeteer, but it would be great to use a single library and Cypress is my team's tool of choice for e2e.
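One direction that can work, sketched here only as an assumption and not taken from the post above: keep the whole crawl inside a single it block and enqueue new cy commands recursively from inside then(). Cypress builds the list of tests once when the spec file loads, but commands added inside a then() callback are inserted into the running queue, so the crawl can keep growing at runtime. Cross-origin links still have to be filtered out:

describe('Link crawler', () => {
    const seen = {};

    function crawl(link) {
        // Skip already-visited pages and anything outside the site under test
        if (seen[link] || !link.startsWith('https://www.example.com')) {
            return;
        }
        seen[link] = true;
        cy.visit(link);
        cy.window().then(win => {
            const anchorTags = win.document.getElementsByTagName('a');
            for (let idx = 0; idx < anchorTags.length; ++idx) {
                // Commands enqueued here run before the rest of the outer queue,
                // so new pages are visited as they are discovered
                crawl(anchorTags[idx].href);
            }
        });
    }

    it('visits every reachable page', () => {
        crawl('https://www.example.com/');
    });
});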
Let's assume I have a service function that returns the current location, and the function uses callbacks to return it. We can easily mock the function as follows, but I want to introduce some delay (say, 1 second) before callFake() invokes successHandler(location).
Is there a way to achieve that?
xxxSpec.js
spyOn(LocationService, 'getLocation').and.callFake(function(successHandler, errorHandler) {
    //TODO: introduce some delay here
    const location = {...};
    successHandler(location);
});
LocationService.js
function getLocation(successCallback, errorCallback) {
    let location = {...};
    successCallback(location);
}
Introducing a delay in JavaScript is easily done with the setTimeout API (details here). You haven't specified whether you are using a framework such as Angular, so your code may differ slightly from what I have below.
It does not appear that you are using Observables or Promises for easier handling of asynchronous code. Jasmine 2 does have the 'done' callback, which can be useful for this. Something like this could work:
it( "my test", function(done) {
let successHandler = jasmine.createSpy();
spyOn(LocationService, 'getLocation').and.callFake(function(successHandler, errorHandler) {
setTimeout(function() {
const location = {...};
successHandler(location);
}, 1000); // wait for 1 second
})
// Now invoke the function under test
functionUnderTest(/* location data */);
// To test we have to wait until it's completed before expecting...
setTimeout(function(){
// check what you want to check in the test ...
expect(successHandler).toHaveBeenCalled();
// Let Jasmine know the test is done.
done();
}, 1500); // wait for longer than one second to test results
});
However, it is not clear to me why adding the timeouts would be valuable to your testing. :)
I hope this helps.
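If the real one-second wait is undesirable in the test run, Jasmine's mock clock can simulate the delay instead. This is a sketch under the assumption of Jasmine 2+ and the same LocationService as above:

describe('getLocation with a delayed fake', function () {
    beforeEach(function () {
        jasmine.clock().install();
    });

    afterEach(function () {
        jasmine.clock().uninstall();
    });

    it('invokes the success handler after the simulated delay', function () {
        var successHandler = jasmine.createSpy('successHandler');
        spyOn(LocationService, 'getLocation').and.callFake(function (success, error) {
            setTimeout(function () {
                success({ lat: 0, lng: 0 }); // sample location, made up for this sketch
            }, 1000);
        });

        LocationService.getLocation(successHandler);
        expect(successHandler).not.toHaveBeenCalled();

        jasmine.clock().tick(1001); // fast-forward past the one-second delay
        expect(successHandler).toHaveBeenCalled();
    });
});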
Consider the following code:
function toolsQueryResult(){
    const query = `query{......}`
    return request('http://...', query).then(data => { return data })
}
var toolsQueryResult = toolsQueryResult();
var toolsNames = [];
toolsQueryResult.then(function(result){
    result['key'].forEach(function(item){
        toolsNames.push(item["name"])
    })
})
console.log(toolsNames)
This returns and prints out an empty list "[ ]". Does anyone know why?
But if I put console.log() between the two final "})", it returns the list of tools correctly. How should I work with this promise object so that I get the list of tools correctly after the second "})" at the end of the code?
The reason is that your console.log statement is executed before the promise toolsQueryResult is resolved. It would be really useful and helpful to turn on the debugger tools and place breakpoints to see what I just said.
That said, having the console.log outside of the promise resolution (or rejection) defeats the whole purpose: you are trying to output a value before the work has completed. That is why, when you place the console.log statement inside the then function, it outputs the result.
Fiddle with your code (modified the result to be a simple array) for you to debug and see : https://jsfiddle.net/jayas_godblessall/vz8mcteh/
or execute it here to see :
function toolsQueryResult() {
    const query = `query{......}`
    return request('http://...', query).then(data => {
        return data
    })
}

// just to emulate your api call
function request(foo) {
    return Promise.resolve(["item1", "item2"]);
}

var toolsQueryResult = toolsQueryResult();
var toolsNames = [];
toolsQueryResult.then(function(result) {
    result.forEach(function(item) {
        toolsNames.push(item)
    })
    console.log("am executed after you waited for promise to complete - in this case successfully, so you can see the tool sets")
    console.log(toolsNames)
})
console.log("am executed before you could resolve promise")
console.log(toolsNames)
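For completeness, a short sketch (mine, not part of the answer above) of two common ways to keep working with the value only once it is available, assuming the same toolsQueryResult promise as in the snippet:

// Option 1: do everything that needs toolsNames inside the then() chain
toolsQueryResult.then(function (result) {
    var toolsNames = result.slice(); // or map out item["name"] as in the question
    console.log(toolsNames);         // runs only after the promise has resolved
});

// Option 2: async/await (ES2017+), which reads like synchronous code
async function printToolsNames() {
    var result = await toolsQueryResult;
    console.log(result);
}
printToolsNames();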
I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those in my Karma config, but this means that I will have to separate out the individual tests that are failing into their own files, separating them from their logical describe blocks and having more files with weird naming conventions would generally suck.
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in the test, or in some other function called from the test.
function skipIfCondition() {
    pending();
}

function someSkipCheck() {
    return true;
}

describe("test", function() {
    it("call pending directly by condition", function() {
        if (someSkipCheck()) {
            pending();
        }
        expect(1).toBe(2);
    });

    it("call conditionally skip function", function() {
        skipIfCondition();
        expect(1).toBe(3);
    });

    it("is executed", function() {
        expect(1).toBe(1);
    });
});
working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think it is the purest solution. In the test results you can see the counts of finished and skipped tests.
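To tie this back to the PhantomJS case from the question, here is a small sketch of my own (the spec body is made up) of a conditional skip helper:

function pendingInPhantomJS() {
    if (/PhantomJS/.test(window.navigator.userAgent)) {
        pending(); // marks the current spec as pending and stops it
    }
}

it('only runs in real browsers', function () {
    pendingInPhantomJS();
    expect(1).toBe(1); // skipped in PhantomJS, executed everywhere else
});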
The simplest solution that I see is to override the global functions describe and it to make them accept a third optional argument, which has to be a boolean or a function returning a boolean, telling whether or not the current suite/spec should be executed. When overriding, we check whether this third optional argument resolves to true, and if it does, we call xdescribe/xit (or ddescribe/iit, depending on the Jasmine version), which are Jasmine's methods for skipping a suite/spec, instead of the original describe/it.
This block has to be executed before the tests, but after Jasmine is included in the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js. Here is the code:
(function (global) {
    // save references to original methods
    var _super = {
        describe: global.describe,
        it: global.it
    };

    // override, take third optional "disable"
    global.describe = function (name, fn, disable) {
        var disabled = disable;
        if (typeof disable === 'function') {
            disabled = disable();
        }
        // if should be disabled - call "xdescribe" (or "ddescribe")
        if (disabled) {
            return global.xdescribe.apply(this, arguments);
        }
        // otherwise call original "describe"
        return _super.describe.apply(this, arguments);
    };

    // override, take third optional "disable"
    global.it = function (name, fn, disable) {
        var disabled = disable;
        if (typeof disable === 'function') {
            disabled = disable();
        }
        // if should be disabled - call "xit" (or "iit")
        if (disabled) {
            return global.xit.apply(this, arguments);
        }
        // otherwise call original "it"
        return _super.it.apply(this, arguments);
    };
}(window));
And usage example:
describe('foo', function () {
    it('should foo 1 ', function () {
        expect(true).toBe(true);
    });

    it('should foo 2', function () {
        expect(true).toBe(true);
    });
}, true); // disable suite

describe('bar', function () {
    it('should bar 1 ', function () {
        expect(true).toBe(true);
    });

    it('should bar 2', function () {
        expect(true).toBe(true);
    }, function () {
        return true; // disable spec
    });
});
See a working example here
I've also stumbled upon this issue, where the idea was to add a chainable method .when() to describe and it, which would do pretty much the same as I've described above. It may look nicer but is a bit harder to implement.
describe('foo', function () {
    it('bar', function () {
        // ...
    }).when(anything);
}).when(something);
If you are really interested in this second approach, I'll be happy to play with it a little bit more and try to implement chain .when().
Update:
Jasmine uses the third argument as a timeout option (see docs), so my code sample is replacing this feature, which is not OK. I like @milanlempera's and @MarcoCI's answers better; mine seems kinda hacky and not intuitive. I'll try to update my solution soon anyway, so as not to break compatibility with Jasmine's default features.
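One way around the clash with the timeout argument, sketched here as an assumption rather than something from the answers above, is to wrap the built-ins instead of overriding them (isPhantomJS and supportsFullScreen are hypothetical helpers):

function describeIf(condition, name, fn) {
    return condition ? describe(name, fn) : xdescribe(name, fn);
}

function itIf(condition, name, fn) {
    return condition ? it(name, fn) : xit(name, fn);
}

// Usage
describeIf(!isPhantomJS(), 'fullscreen suite', function () {
    itIf(supportsFullScreen(), 'enters fullscreen', function () {
        expect(true).toBe(true);
    });
});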
I can share my experience with this.
In our environment we have several tests running with different browsers and different technologies.
In order to run always the same suites on all the platforms and browsers we have a helper file loaded in karma (helper.js) with some feature detection functions loaded globally.
I.e.
function isFullScreenSupported(){
    // run some feature detection code here
}
You can also use Modernizr for this.
In our tests then we wrap things in if/else blocks like the following:
it('should work with fullscreen', function(){
    if(isFullScreenSupported()){
        // run the test
    }
    // don't do anything otherwise
});
or for an async test
it('should work with fullscreen', function(done){
    if(isFullScreenSupported()){
        // run the test
        ...
        done();
    } else {
        done();
    }
});
While it's a bit verbose, it will save you time for the kind of scenario you're facing.
In some cases you can use user agent sniffing to detect a particular browser type or version - I know it is bad practice but sometimes there's effectively no other way.
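For completeness, here is a sketch of how such a helper file might be wired into Karma so it loads before the specs (paths and file names are assumptions):

// karma.conf.js (excerpt)
module.exports = function (config) {
    config.set({
        frameworks: ['jasmine'],
        files: [
            'test/helper.js',      // feature-detection helpers, loaded first
            'src/**/*.js',
            'test/**/*.spec.js'
        ]
    });
};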
Try this. I am using this solution in my projects.
it('should do something', function () {
    if (!/PhantomJS/.test(window.navigator.userAgent)) {
        expect(true).toBe(true);
    }
});
This will not run this particular test in PhantomJS, but will run it in other browsers.
Just rename the tests that you want to disable from it(...) to xit(...)
function xit: A temporarily disabled it. The spec will report as pending and will not be executed.
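For example (a trivial sketch):

// Runs normally
it('adds numbers', function () {
    expect(1 + 1).toBe(2);
});

// Reported as pending, never executed
xit('adds numbers (disabled)', function () {
    expect(1 + 1).toBe(2);
});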
So, I took some code from this Microsoft-provided example, which allows me to use the jQuery Validate Unobtrusive library to parse validation error messages returned from my server and display them in the UI. They have a video demonstrating this. Here is the piece of JavaScript code I'm using:
$.validator.addMethod("failure", function () { return false; });
$.validator.unobtrusive.adapters.addBool("failure");

$.validator.unobtrusive.revalidate = function (form, validationResult) {
    $.removeData(form[0], 'validator');
    var serverValidationErrors = [];
    for (var property in validationResult) {
        //var elementId = property.toLowerCase();
        var item = form.find('#' + property);
        if (item.length < 1) { item = form.find('#' + property.replace('.', '_')); }
        serverValidationErrors.push(item);
        item.attr('data-val-failure', validationResult[property].join(', '));
        jQuery.validator.unobtrusive.parseElement(item[0]);
    }
    form.valid();
    $.removeData(form[0], 'validator');
    $.each(serverValidationErrors, function () {
        this.removeAttr('data-val-failure');
        jQuery.validator.unobtrusive.parseElement(this[0]);
    });
};
So then, after an AJAX form post, in the error handling function I would do something like this:
$.validator.unobtrusive.revalidate(form, { 'PhysicalAddress.CityName': ['You must select a valid city'] });
Where PhysicalAddress.CityName is the name of my view model property and HTML input field, so it knows to put the validation message next to the correct HTML element.
This works one time. Then, when they hit submit again and my code calls the unobtrusive.revalidate method again, it doesn't work. It only shows the validation message once; after that, the validation message disappears for good.
Does anyone have any idea as to why this might be happening? I stepped through the revalidate method, no errors were thrown, and everything seems like it should work, but the unobtrusive library for some reason is not re-binding the validation error message.
Thanks
Probably this behavior depends on a known problem of the jQuery validation plugin: dynamically adding new validation rules for elements works just once! Further attempts are rejected because the plugin thinks they are a duplicate attempt to define already defined rules.
This is the reason why $.validator.unobtrusive.parse doesn't work when you add newly created content (for instance, when you add a new row to a collection of items). There is a patch for $.validator.unobtrusive.parse that you might try to apply also to the revalidate function... but it is better to rewrite it from scratch in a different way. The revalidate function uses the validation plugin just to place all validation errors in the right place, and then it tries to reset the state of the validation plugin. However, deleting the validator object from the form is not enough to cancel all the work done, since there is another object contained in form.data('unobtrusiveValidation'), where form is a variable containing the form being validated. This data is not reset by the revalidate function... and CANNOT be reset, since resetting it would cause the cancellation of ALL client-side validation rules.
Maybe this problem has been solved in the latest version of the validation plugin, so try updating to the latest version with NuGet.
If this doesn't solve your issue, I can pass you an analogous function implemented in a completely different way (it mimics what the server does on the server side to show server-side errors). It will be contained in the upcoming version of the Mvc Controls Toolkit. However, if you give me a couple of days (I will be very busy for 2 days), I can extract it from there with its dependencies so you can use it. Let me know if you are interested.
Below is the code I promised. It expects an array whose elements look like:
{
    id: the id of the element in error,
    name: the html name of the element in error,
    errors: an array of error strings associated with the element
}
It accepts several errors for each element but displays just the first one for each element.
The id is different from the name because '.', '[', ']' and other special characters are replaced by '_'.
You can transform the name into the id on the server with:
htmlName.Replace('$', '_').Replace('.', '_').Replace('[', '_').Replace(']', '_');
or on the client in JavaScript with:
name.replace(/[\$\[\]\.]/g, '_');
function remoteErrors(jForm, errors) {
    //////////
    function inner_ServerErrors(elements) {
        var ToApply = function () {
            for (var i = 0; i < elements.length; i++) {
                var currElement = elements[i];
                var currDom = $('#' + currElement.id);
                if (currDom.length == 0) continue;
                var currForm = currDom.parents('form').first();
                if (currForm.length == 0) continue;
                if (!currDom.hasClass('input-validation-error'))
                    currDom.addClass('input-validation-error');
                var currDisplay = $(currForm).find("[data-valmsg-for='" + currElement.name + "']");
                if (currDisplay.length > 0) {
                    currDisplay.removeClass("field-validation-valid").addClass("field-validation-error");
                    var replace = $.parseJSON(currDisplay.attr("data-valmsg-replace")) !== false;
                    if (replace) {
                        currDisplay.empty();
                        $(currElement.errors[0]).appendTo(currDisplay);
                    }
                }
            }
        };
        setTimeout(ToApply, 0);
    }
    /////////
    jForm.find('.input-validation-error').removeClass('input-validation-error');
    jForm.find('.field-validation-error').removeClass('field-validation-error').addClass('field-validation-valid');
    var container = jForm.find("[data-valmsg-summary=true]");
    var list = container.find("ul");
    list.empty();
    if (errors.length > 0) {
        $.each(errors, function (i, ival) {
            $.each(ival.errors, function (j, jval) {
                $("<li />").html(jval).appendTo(list);
            });
        });
        container.addClass("validation-summary-errors").removeClass("validation-summary-valid");
        inner_ServerErrors(errors);
        setTimeout(function () { jForm.find('span.input-validation-error[data-element-type]').removeClass('input-validation-error') }, 0);
    }
    else {
        container.addClass("validation-summary-valid").removeClass("validation-summary-errors");
    }
}
function clearErrors(jForm) {
    remoteErrors(jForm, []);
}
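A hedged usage sketch, wiring this to the PhysicalAddress.CityName example from the question (the form selector is an assumption):

var form = $('#registrationForm'); // hypothetical form selector
remoteErrors(form, [{
    id: 'PhysicalAddress_CityName',    // '.' replaced by '_'
    name: 'PhysicalAddress.CityName',  // original html name attribute
    errors: ['You must select a valid city']
}]);

// Later, to clear everything again:
clearErrors(form);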