How do I structure BDD style tests in multiple files? - jasmine

How does a large BDD-style (mocha, jasmine, etc.) test suite, wrapped in a single describe(), get broken down into multiple files? I have a 5,000-line unit test file, all of it wrapped as follows:
describe('services', function () {
  describe('serviceA', function () { ... });
  describe('serviceB', function () { ... });
  describe('serviceC', function () { ... });
  describe('serviceD', function () { ... });
  // .....
  describe('serviceN', function () { ... });
});
For sanity's sake I would like to break each one ('serviceA','serviceB',...,'serviceN') into its own file. But I want each to still be inside describe("services").
I see two methods:
use require in the parent: describe('serviceA', require('./services/serviceA')); (repeated many times)
have some wrapper include them all
I don't like the first method, since it requires keeping the file name, the require() path and the describe() label in sync for every service (and thus violates DRY).
I prefer that each file know what it is testing, without having to explicitly add the path; the test runner should just include everything in './services' intelligently, so that serviceA.js contains:
describe('serviceA', function () {
  // ....
});
but then I need some way to "wrap" 'serviceA' inside 'services'. Is there any way to do that?
In this case, I am testing an angular app via karma, but the concept is the same.

Jasmine's describe has no special meaning for how files are loaded; it's just a descriptor. What's stopping you from simply doing this?
Karma config:
files: [
  'src/services/**/*.js',
  'tests/services/**/*.js'
]
serviceA.spec.js
describe('services', function () {
  describe('serviceA', function () {
  });
});
serviceB.spec.js
describe('services', function () {
  describe('serviceB', function () {
  });
});
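If repeating the outer describe('services', ...) in every spec file still feels like a DRY violation, a small shared helper can hide it. This is only a sketch of that idea (the describeService helper and the helpers/describe-service.js path are hypothetical, not part of the answer above); load the helper before the spec files in the Karma files list:
// helpers/describe-service.js (hypothetical)
// expose a global wrapper that nests every service suite under 'services'
window.describeService = function (name, fn) {
  describe('services', function () {
    describe(name, fn);
  });
};
serviceA.spec.js then becomes:
describeService('serviceA', function () {
  // specs for serviceA
});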

Related

Can we keep different environment test data files in cypress

The application I am working on has 3 different environments, so is there a way to use specific test data files for each environment (QA/STG/PRD) when running tests? I know we can use the cypress.env file to specify environment-related data, but I couldn't figure out how to specify a different file when running in different environments.
I believe it is not possible to choose which files Cypress will run according to the .env, but you can put your test content inside an if, so even if Cypress runs the test file, everything is inside one if that, when not satisfied, will not perform any actions.
describe('Group of tests', () => {
  it('Test 1', () => {
    if (env == 'dev') {
      // Your test here
    } else {
      /* put something here so that cypress doesn't
         throw an error claiming the test suite is empty. */
    }
  });
});
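The snippet above assumes an env value is already available. One way to supply it - a sketch of mine, not part of the answer, and the key name environment is an assumption - is to read it from Cypress's environment API at the top of the spec:
// read the target environment, e.g. passed as `cypress run --env environment=dev`,
// defined in cypress.env.json, or in the env block of the Cypress config
const env = Cypress.env('environment');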
There's a package cypress-tags that might suit your needs.
Select tests by passing a comma-separated list of tags to the Cypress environment variable CYPRESS_INCLUDE_TAGS.
describe(['QA'], 'Will tag every test inside the describe with the "QA" tag',
  function () { ... });
it(['QA'], 'This is a work-in-progress test', function () { ... });
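For example (my sketch, assuming cypress-tags is already wired into the project; the QA tag just mirrors the snippet above), the tagged tests could then be selected at run time like this:
CYPRESS_INCLUDE_TAGS=QA npx cypress run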

Using pageBase with nightwatch.js

I'm trying to create an end-to-end test suite using nightwatch.js.
I've looked around a bit and haven't really figured out how to use a base page ("pageBase"), as is usually done when implementing the Page Object Model (POM).
I'm using the page_object support that is built into Nightwatch, but I can't seem to get it to use a pageBase.
Here is the code example.
To simplify things, let's say I have a common.js file, and a test.js file
I want test.js to inherit all of common.js's commands and elements and implement some commands and elements of its own, but I'm struggling with the syntax.
this is the common.js file
let commonCommands = {
  clickOnMe: function () {
    return this.waitForElementVisible('#someElement', 2000);
  }
};

module.exports = {
  commands: [commonCommands],
  elements: {
    someElement: '#elementId'
  },
};
this is the test.js file
const common = require('./common');

let testCommands = {
  doStuffFromTest: function () {
    return this;
  }
};

module.exports = {
  url: function () {
    return this.api.launch_url;
  },
  commands: common.commands,
  elements: common.elements
};
How can I add commands and elements to test.js?
You generally don't want to access those commands from your test, but rather from your other page objects. Since that's where all of your commands will be happening, common actions like clicking on an element or checking if something is present will be done at the page object level.
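That said, to answer the syntax question directly, here is a minimal sketch (my illustration, not part of the answer above) of how test.js could merge the shared commands and elements from common.js with its own. It assumes Nightwatch's page-object format where commands is an array of objects and elements is a plain object; myOwnElement and #testOnlyElement are hypothetical names:
const common = require('./common');

let testCommands = {
  doStuffFromTest: function () {
    return this;
  }
};

module.exports = {
  url: function () {
    return this.api.launch_url;
  },
  // shared commands from common.js plus this page's own commands
  commands: common.commands.concat([testCommands]),
  // shared elements from common.js plus this page's own elements
  elements: Object.assign({}, common.elements, {
    myOwnElement: '#testOnlyElement' // hypothetical extra element
  })
};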

Evaluating cucumber tags in BeforeFeature hook

I am trying to evaluate the tagged features in the this.BeforeFeature hook in the world file, but I am getting the error 'TypeError: handler is not a function'. What I interpret from the error message is that this.BeforeFeature() takes a function as its parameter, and I am using the code below.
There are other ways to work around this problem - like reading the feature names - but that would totally defeat the purpose of tags, so I don't want to employ that approach.
this.registerHandler('BeforeFeature', {tags: ["#foo,#bar"]}, function (event, callback) {
  console.log("before feature");
  global.browser.driver.manage().window().setSize(500, 800);
  callback();
});
Any help is appreciated.
Since scenarios inherit the hooks from the feature, evaluating the hooks at the scenario level should do the job.
Inheritance works as below:
Feature (hooks) -> Scenario (hooks) / Scenario Outline (hooks) -> Examples
this.Before("#foo", function (scenario) {
// This hook will be executed before scenarios tagged with #foo // ...
});
Hope it helps. Thanks.
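As a side note (my addition, not part of the answer above): in newer cucumber-js versions the same tag-filtered hook can be written with a tag expression passed to Before, roughly like this:
const { Before } = require('cucumber');

// runs before every scenario tagged @foo or @bar
Before({ tags: '@foo or @bar' }, function () {
  global.browser.driver.manage().window().setSize(500, 800);
});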

Can I dynamically create a test spec within a callback?

I want to retrieve a list of elements on a page and, for each one, create a test spec. My (pseudo) code is:
fetchElements().then(function (element_list) {
  element_list.forEach(function (element) {
    it("should have some property", function () {
      expect("foo");
    });
  });
});
When I run this, I get "No specs found", which I guess makes sense since they are being defined off the main path.
What's the best way to achieve dynamically created specs?
There are major problems preventing this from being easily achieved:
the specs you are creating are based on the result of asynchronous code - on the elements Protractor first has to find
you can only have the Protractor/WebDriverJS-specific code inside it, beforeEach, beforeAll, afterEach and afterAll for it to work properly and have the promises put on the Control Flow, etc.
you cannot have nested it blocks - jasmine would not execute them: Cannot perform a 'it' inside another 'it'
If it were not the elements you want to generate test cases from, but a static variable with a defined value, it would be as simple as:
describe("Check something", function () {
var arr = [
{name: "Status Reason", inclusion: true},
{name: "Status Reason", inclusion: false}
];
arr.map(function(item) {
it("should look good with item " + item, function () {
// test smth
});
});
});
But if arr were a promise, the test would fail at the very beginning, since the code inside describe (but not inside it) is executed when the tests are loaded by jasmine.
To conclude, have a single it() block and work inside it:
it("should have elements with a desired property", function() {
fetchElements().then(element_list) {
foreach element {
expect("foo")
})
}
}
If you are worried about distinguishing test failures from one element to another, you can, for instance, provide readable error messages, so that if the test fails you can easily say which of the elements has not passed (did not have a specific property, in your pseudo test case). For example, you can provide custom messages to expect():
expect(1).toEqual(2, 'because of stuff')
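Building on that, the single it() could identify the failing element in the message. This is only a sketch of the idea (my illustration, not the answerer's code; fetchElements() is the question's own pseudo helper, and isDisplayed() is just one example of a property to check):
it("should have elements with a desired property", function () {
  fetchElements().then(function (element_list) {
    element_list.forEach(function (element, index) {
      // the custom message tells you which element failed
      expect(element.isDisplayed()).toEqual(true, 'element #' + index + ' is not visible');
    });
  });
});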
We can generate dynamic tests using a jasmine data provider, but it only works with static data.
If we want to generate tests from an asynchronous call in Protractor, then we need to use the onPrepare function in the Protractor config.
Create a bootloader that reads the test cases from Excel or a server, and import that loader in the onPrepare function. It is a bit difficult to explain, because I faced many issues like "import is not supported in this javascript version" and "expected 2 args but got only 1"; finally I used Babel to fix the issues and was able to generate the tests.
Below is the sample implementation that I have done in the onPrepare method:
var automationModule = require('./src/startup/bootloader.ts');
var defer = protractor.promise.defer();

automationModule.tests.then(function (res) {
  defer.fulfill(res);
});
bootloader.ts contains the code to read the test suites and tests from the Excel sheet and set the tests on a singleton class.
Here res is the instance of the singleton class returned from bootloader.ts.
It is hard to explain everything here, but you can take a look at my full implementation on my GitHub: https://github.com/mannejkumar/protractor-keyword-driven-framework

Conditionally ignore individual tests with Karma / Jasmine

I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those in my Karma config, but this means I would have to separate the individual failing tests into their own files, pulling them out of their logical describe blocks, and having more files with weird naming conventions would generally suck.
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in the test, or in some other function called from the test.
function skipIfCondition() {
  pending();
}

function someSkipCheck() {
  return true;
}

describe("test", function () {
  it("call pending directly by condition", function () {
    if (someSkipCheck()) {
      pending();
    }
    expect(1).toBe(2);
  });

  it("call conditionally skip function", function () {
    skipIfCondition();
    expect(1).toBe(3);
  });

  it("is executed", function () {
    expect(1).toBe(1);
  });
});
working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think it is the purest solution: in the test results you can see the count of finished and skipped tests.
The simplest solution that I see is to override the global functions describe and it so that they accept a third, optional argument, which has to be a boolean or a function returning a boolean, telling whether or not the current suite/spec should be executed. When overriding, we check whether this third optional argument resolves to true, and if it does we call xdescribe/xit, Jasmine's methods for skipping a suite/spec, instead of the original describe/it. This block has to be executed before the tests, but after Jasmine is included on the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js. Here is the code:
(function (global) {
  // save references to original methods
  var _super = {
    describe: global.describe,
    it: global.it
  };

  // override, take third optional "disable"
  global.describe = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xdescribe"
    if (disabled) {
      return global.xdescribe.apply(this, arguments);
    }
    // otherwise call original "describe"
    return _super.describe.apply(this, arguments);
  };

  // override, take third optional "disable"
  global.it = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xit"
    if (disabled) {
      return global.xit.apply(this, arguments);
    }
    // otherwise call original "it"
    return _super.it.apply(this, arguments);
  };
}(window));
And a usage example:
describe('foo', function () {
  it('should foo 1 ', function () {
    expect(true).toBe(true);
  });

  it('should foo 2', function () {
    expect(true).toBe(true);
  });
}, true); // disable suite

describe('bar', function () {
  it('should bar 1 ', function () {
    expect(true).toBe(true);
  });

  it('should bar 2', function () {
    expect(true).toBe(true);
  }, function () {
    return true; // disable spec
  });
});
See a working example here
I've also stumbled upon this issue, where the idea was to add a chainable .when() method for describe and it, which would do pretty much the same as what I've described above. It may look nicer, but it is a bit harder to implement.
describe('foo', function () {
  it('bar', function () {
    // ...
  }).when(anything);
}).when(something);
If you are really interested in this second approach, I'll be happy to play with it a little bit more and try to implement chain .when().
Update:
Jasmine uses the third argument as a timeout option (see the docs), so my code sample overrides this feature, which is not OK. I like @milanlempera's and @MarcoCI's answers better; mine seems kinda hacky and not intuitive. I'll try to update my solution soon anyway so as not to break compatibility with Jasmine's default features.
I can share my experience with this.
In our environment we have several tests running with different browsers and different technologies.
In order to always run the same suites on all platforms and browsers, we have a helper file loaded in Karma (helper.js) with some feature detection functions exposed globally.
E.g.:
function isFullScreenSupported() {
  // run some feature detection code here
}
You can also use Modernizr for this.
In our tests we then wrap things in if/else blocks like the following:
it('should work with fullscreen', function () {
  if (isFullScreenSupported()) {
    // run the test
  }
  // don't do anything otherwise
});
or for an async test
it('should work with fullscreen', function (done) {
  if (isFullScreenSupported()) {
    // run the test
    ...
    done();
  } else {
    done();
  }
});
While it's a bit verbose it will save you time for the kind of scenario you're facing.
In some cases you can use user agent sniffing to detect a particular browser type or version - I know it is bad practice but sometimes there's effectively no other way.
Try this. I am using this solution in my projects.
it('should do something', function () {
  if (!/PhantomJS/.test(window.navigator.userAgent)) {
    expect(true).toBe(true);
  }
});
This will not run this particular test in PhantomJS, but will run it in other browsers.
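If you would rather see these specs reported as skipped instead of trivially passing, the same check can be combined with the pending() approach from the earlier answer (a sketch combining the two ideas above, not taken verbatim from either answer):
it('should do something', function () {
  if (/PhantomJS/.test(window.navigator.userAgent)) {
    pending(); // reported as pending/skipped under PhantomJS
  }
  expect(true).toBe(true);
});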
Just rename the tests that you want to disable from it(...) to xit(...)
function xit: A temporarily disabled it. The spec will report as
pending and will not be executed.
