Post result to external API after each test case - Cypress

I'm setting up a CI chain and decided to use Cypress for the UI testing. I need to get the result for each individual test case in my suite, preferably from within Node, for example in an afterEach hook.
Has anyone done this before? Is there any built-in support for this?
Preferably, I would not have to parse the final results for individual test cases.

It was possible by using Mocha's this.currentTest in conjunction with Cypress plugins.
This is how I solved it:
cypress/plugins/index.js
on("task", {
testFinished(event) {
console.log(event.title, event.result);
return null;
}
});
In my test suite:
afterEach(function () {
  cy.task("testFinished", { title: this.currentTest.title, result: this.currentTest.state });
});
The console.log in the plugins file can now easily be swapped for a POST request to wherever you want to store the results.
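For example, the task handler could forward each result with node-fetch (a minimal sketch, assuming node-fetch is installed; the results endpoint is hypothetical):

const fetch = require("node-fetch");

module.exports = (on, config) => {
  on("task", {
    testFinished(event) {
      // POST the result and resolve with null so cy.task() completes
      return fetch("https://example.com/test-results", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event)
      }).then(() => null);
    }
  });
};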

Related

Is there an equivalent in Cypress to dependsOnMethods in TestNG/Selenium?

Is there a Cypress equivalent as simple as the dependsOnMethods attribute used in TestNG test annotations?
For example, in TestNG/Selenium the annotations look like this:
@Test()
public void tc1() {
}

@Test(dependsOnMethods = {"tc1"})
public void tc2() {
}

@Test(dependsOnMethods = {"tc1"})
public void tc3() {
}
If I am not mistaken, this is somewhat like a parent function with two child functions: when the parent errors, the two child functions will be skipped.
In Cypress I know there are callbacks and promises, but depending on the kind of assertion you want, it becomes more complex to me. I am new to Cypress.
If it's not too much to ask, could you please provide an example?
Thanks
Cypress doesn't have anything like the dependsOnMethods that the TestNG runner provides, as the two are quite different. But whatever you want to achieve, you can achieve through the hooks provided by Mocha, since Cypress uses Mocha as its test framework.
Note: This is what you can do with hooks, and your problem should be solved with the code below. If you have a more specific requirement, please mention it.
describe('test suite', () => {
  before(() => {})

  beforeEach(() => {
    // put the tc1() functionality here
  })

  it('tc2 functionality', () => {
    // tc2() now depends on the beforeEach block where the tc1 functionality is done
  })

  it('tc3 functionality', () => {
    // tc3() now depends on the beforeEach block where the tc1 functionality is done
  })
})
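If you additionally want tc2 and tc3 to be skipped, rather than fail, when the tc1 setup goes wrong (closer to dependsOnMethods semantics), here is a minimal sketch using Mocha's this.skip(); the setupOk flag is my own assumption, not a Cypress API:

describe('test suite', function () {
  let setupOk = false

  before(function () {
    try {
      // tc1 functionality (synchronous setup assumed; Cypress command
      // failures cannot be caught with try/catch)
      setupOk = true
    } catch (e) {
      // leave setupOk false so the dependent tests are skipped
    }
  })

  beforeEach(function () {
    // skip the dependent tests when tc1 did not complete
    if (!setupOk) this.skip()
  })

  it('tc2 functionality', function () { /* ... */ })
  it('tc3 functionality', function () { /* ... */ })
})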

Cypress - get element without assertion

How do I get an element in Cypress without it asserting that it is present?
cy.get('.something')
Sometimes my element might not be there and I don't want it to fail the test.
Is there a different command I should be using?
You can use cy.$$('selector') to synchronously query for an element (jQuery).
If you want this to happen after a cypress command, you'll need a .then:
cy.visit('/')
cy.get('element-one').then(() => {
  const $el2 = cy.$$('element-two')
  if ($el2.length) {
    // do this
  } else {
    // do that
  }
})
You might want to check this section of the Cypress docs on conditional testing: https://docs.cypress.io/guides/core-concepts/conditional-testing.html#Element-existence
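Another pattern from that docs page is to query inside a parent you know exists, since jQuery's .find() returns an empty collection instead of failing the test (a sketch, using the .something selector from the question):

cy.get('body').then(($body) => {
  // .find() never fails the test when nothing matches
  if ($body.find('.something').length) {
    // element exists - do this
  } else {
    // element is absent - do that
  }
})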

Can I dynamically create a test spec within a callback?

I want to retrieve a list of elements on a page, and for each one, create a test spec. My (pseudo) code is:
fetchElements().then(function (element_list) {
  element_list.forEach(function (element) {
    it("should have some property", function () {
      expect("foo");
    });
  });
});
When I run this, I get "No specs found", which I guess makes sense since they are being defined off the main path.
What's the best way to achieve dynamically created specs?
There are major problems preventing this from being easily achieved:
the specs you are creating are based on the result of asynchronous code - on the elements Protractor first has to find
you can only have Protractor/WebDriverJS-specific code inside the it, beforeEach, beforeAll, afterEach, afterAll blocks for it to work properly and have the promises put on the Control Flow, etc.
you cannot have nested it blocks - Jasmine would not execute them: Cannot perform an 'it' inside another 'it'
If it were not the elements you want to generate test cases from, but a static variable with a defined value, it would be as simple as:
describe("Check something", function () {
var arr = [
{name: "Status Reason", inclusion: true},
{name: "Status Reason", inclusion: false}
];
arr.map(function(item) {
it("should look good with item " + item, function () {
// test smth
});
});
});
But if arr were a promise, the test would fail at the very beginning, since the code inside describe (but not inside it) is executed as soon as the tests are loaded by Jasmine.
To conclude, have a single it() block and work inside it:
it("should have elements with a desired property", function() {
fetchElements().then(element_list) {
foreach element {
expect("foo")
})
}
}
If you are worried about distinguishing test failures from element to element, you can, for instance, provide readable error messages so that, if the test fails, you can easily tell which of the elements did not pass (did not have the desired property in your pseudo test case). For example, you can provide custom messages to expect():
expect(1).toEqual(2, 'because of stuff')
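Combining the two ideas, the single it() block can report per-element failures with custom messages (a sketch, assuming Protractor's element.all and a hypothetical .item selector):

it("should have elements with a desired property", function () {
  element.all(by.css('.item')).each(function (el, index) {
    // the custom message identifies which element failed
    expect(el.isDisplayed()).toBe(true, 'item #' + index + ' should be visible');
  });
});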
We can generate dynamic tests using a Jasmine data provider, but it only works with static data.
If we want to generate tests from an asynchronous call in Protractor, then we need to use the onPrepare function in the Protractor config.
Create a bootloader that reads the test cases from Excel or a server, and import the data loader in the onPrepare function. It is a bit difficult to explain because I faced many issues, like "import is not supported in this JavaScript version" and "expected 2 args but got only 1". Finally I used Babel to fix the issues and was able to generate tests.
Below is the sample implementation I used in the onPrepare method:
var automationModule = require('./src/startup/bootloader.ts');
var defer = protractor.promise.defer();

automationModule.tests.then(function (res) {
  defer.fulfill(res);
});
bootloader.ts contains the code to read the test suites and tests from the Excel sheet and sets the tests on a singleton class.
Here res is the instance of the singleton class returned from bootloader.ts.
It is hard to explain everything here, but you can take a look at my full implementation on GitHub: https://github.com/mannejkumar/protractor-keyword-driven-framework

Conditionally ignore individual tests with Karma / Jasmine

I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those in my Karma config, but this means I would have to separate out the individual failing tests into their own files, pulling them away from their logical describe blocks, and having more files with weird naming conventions would generally suck.
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in the test, or in some other function called from the test.
function skipIfCondition() {
  pending();
}

function someSkipCheck() {
  return true;
}

describe("test", function () {
  it("call pending directly by condition", function () {
    if (someSkipCheck()) {
      pending();
    }
    expect(1).toBe(2);
  });

  it("call conditionally skip function", function () {
    skipIfCondition();
    expect(1).toBe(3);
  });

  it("is executed", function () {
    expect(1).toBe(1);
  });
});
working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think it is the purest solution. In the test results you can see the count of finished and skipped tests.
The simplest solution I see is to override the global functions describe and it to make them accept a third optional argument, which has to be a boolean or a function returning a boolean, telling whether or not the current suite/spec should be executed. When overriding, we check whether this third optional argument resolves to true, and if it does, we call xdescribe/xit, Jasmine's methods for skipping a suite/spec, instead of the original describe/it. This block has to be executed before the tests, but after Jasmine is included on the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js. Here is the code:
(function (global) {
  // save references to the original methods
  var _super = {
    describe: global.describe,
    it: global.it
  };

  // override, take a third optional "disable" argument
  global.describe = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xdescribe"
    if (disabled) {
      return global.xdescribe.apply(this, arguments);
    }
    // otherwise call the original "describe"
    return _super.describe.apply(this, arguments);
  };

  // override, take a third optional "disable" argument
  global.it = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xit"
    if (disabled) {
      return global.xit.apply(this, arguments);
    }
    // otherwise call the original "it"
    return _super.it.apply(this, arguments);
  };
}(window));
And a usage example:
describe('foo', function () {
  it('should foo 1', function () {
    expect(true).toBe(true);
  });
  it('should foo 2', function () {
    expect(true).toBe(true);
  });
}, true); // disable suite

describe('bar', function () {
  it('should bar 1', function () {
    expect(true).toBe(true);
  });
  it('should bar 2', function () {
    expect(true).toBe(true);
  }, function () {
    return true; // disable spec
  });
});
See a working example here
I've also stumbled upon this issue, where the idea was to add a chainable .when() method to describe and it, which would do pretty much the same as I've described above. It may look nicer, but it is a bit harder to implement.
describe('foo', function () {
  it('bar', function () {
    // ...
  }).when(anything);
}).when(something);
If you are really interested in this second approach, I'll be happy to play with it a little bit more and try to implement chain .when().
Update:
Jasmine uses the third argument as a timeout option (see the docs), so my code sample replaces this feature, which is not OK. I like @milanlempera's and @MarcoCI's answers better; mine seems kinda hacky and not intuitive. I'll try to update my solution soon anyway so as not to break compatibility with Jasmine's default features.
I can share my experience with this.
In our environment we have several tests running with different browsers and different technologies.
In order to always run the same suites on all the platforms and browsers, we have a helper file loaded in Karma (helper.js) with some feature-detection functions exposed globally, e.g.:
function isFullScreenSupported() {
  // run some feature detection code here, e.g. for the Fullscreen API
  return !!(document.fullscreenEnabled || document.webkitFullscreenEnabled);
}
You can also use Modernizr for this.
In our tests we then wrap things in if/else blocks like the following:
it('should work with fullscreen', function () {
  if (isFullScreenSupported()) {
    // run the test
  }
  // don't do anything otherwise
});
or for an async test
it('should work with fullscreen', function (done) {
  if (isFullScreenSupported()) {
    // run the test
    // ...
    done();
  } else {
    done();
  }
});
While it's a bit verbose, it will save you time for the kind of scenario you're facing.
In some cases you can use user-agent sniffing to detect a particular browser type or version - I know it is bad practice, but sometimes there's effectively no other way.
Try this. I am using this solution in my projects.
it('should do something', function () {
  if (!/PhantomJS/.test(window.navigator.userAgent)) {
    expect(true).to.be.true;
  }
});
This will not run this particular test in PhantomJS, but will run it in other browsers.
Just rename the tests that you want to disable from it(...) to xit(...)
function xit: A temporarily disabled it. The spec will report as
pending and will not be executed.
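For example, renaming a spec that fails in PhantomJS from it to xit:

// reported as pending, never executed
xit('should do the thing that fails in PhantomJS', function () {
  expect(true).toBe(true);
});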

Is there a way to continue test scenario execution after step failure in a previous scenario?

Whenever there is a step failure while running on a remote server, I would like to capture the failed step and then continue running the remaining scenarios. The captured step would then be included in a file for reporting purposes. Is this a possibility? All replies I've seen elsewhere just say you should fix the test before moving on. I agree, but I only want the tests to stop when running locally, not remotely.
➜ customer git:(pat104) ✗ cucumber.js -f progress (pat104⚡)
...F-----Failed scenario: View and select first contact from contact history
...F-Failed scenario: View and select a contact from multiple contacts in history
..................................................F---Failed scenario: Navigating to profile with url and enrollmentId
...................................................F-Failed scenario: Successful MDN Search with 1 result returned. Tech Selects and continues
.............FFailed scenario: Successful MDN with multiple results
Turns out, one of the step-definitions was using .waitForExist incorrectly. The test was written:
this.browser
.waitForExist('#someElement', 1000, callback)
Callback isn't a parameter for .waitForExist, so it was rewritten to:
this.browser
  .waitForExist('#someElement', 1000)
  .then(function (exists) {
    assert.equal(exists, true);
    callback();
  });
This is the default behavior, isn't it? Example command:
cucumber.js -f json > cucumberOutput.json
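The JSON output can then be post-processed for reporting. Below is a minimal sketch that collects the failed steps from the report (it assumes the standard Cucumber JSON schema; the file names are hypothetical):

var fs = require('fs');

var report = JSON.parse(fs.readFileSync('cucumberOutput.json', 'utf8'));
var failed = [];

// the Cucumber JSON report is an array of features, each containing
// scenario "elements" with their steps and a result per step
report.forEach(function (feature) {
  (feature.elements || []).forEach(function (scenario) {
    (scenario.steps || []).forEach(function (step) {
      if (step.result && step.result.status === 'failed') {
        failed.push(feature.name + ' > ' + scenario.name + ' > ' + step.name);
      }
    });
  });
});

fs.writeFileSync('failed-steps.txt', failed.join('\n'));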
Well, you need to manage that in your test itself using callback.fail(e), like below. You can use a library like grunt-cucumberjs to add these errors to nice HTML reports.
this.Then(/^the save to wallet button reflects the offer is saved$/, function (callback) {
  merchantPage(this.nemo).doStuff().then(function () {
    callback();
  }, function (e) {
    callback.fail(e); // add this to the report
  });
});
Or you could use hooks to check whether a scenario has failed and report it (take a screenshot, add logging, etc.):
this.After(function (scenario, callback) {
  var driver = this.nemo.driver;
  if (scenario.isFailed()) {
    driver.takeScreenshot().then(function (buffer) {
      scenario.attach(new Buffer(buffer, 'base64').toString('binary'), 'image/png');
    });
  }
  driver.quit().then(function () {
    callback();
  });
});
