I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those files in my Karma config, but this means I would have to pull the individual failing tests out into their own files, separating them from their logical describe blocks; having more files with weird naming conventions would generally suck.
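For reference, the exclusion I had in mind would look roughly like this in the watch config (a rough sketch only; the patterns and browser list are just illustrative):

// karma.conf.js for the watch task - sketch of the naming-convention approach
module.exports = function (config) {
  config.set({
    files: ['src/**/*.spec*.js'],
    // skip the PhantomJS-incompatible specs in this configuration only
    exclude: ['src/**/*.dont-use-phantom.js'],
    browsers: ['PhantomJS']
  });
};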
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in a test, or in some other function called from the test.
function skipIfCondition() {
  pending();
}

function someSkipCheck() {
  return true;
}

describe("test", function() {
  it("call pending directly by condition", function() {
    if (someSkipCheck()) {
      pending();
    }
    expect(1).toBe(2);
  });

  it("call conditionally skip function", function() {
    skipIfCondition();
    expect(1).toBe(3);
  });

  it("is executed", function() {
    expect(1).toBe(1);
  });
});
Working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think it is the purest solution. In the test results you can see the count of finished and skipped tests.
The simplest solution I see is to override the global functions describe and it so that they accept a third optional argument, which has to be a boolean or a function returning a boolean, telling whether or not the current suite/spec should be executed. When overriding, we check whether this third optional argument resolves to true, and if it does, we call xdescribe/xit (or ddescribe/iit, depending on the Jasmine version), which are Jasmine's methods for skipping a suite/spec, instead of the original describe/it. This block has to be executed before the tests, but after Jasmine is included on the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js. Here is the code:
(function (global) {
  // save references to the original methods
  var _super = {
    describe: global.describe,
    it: global.it
  };

  // override, take a third optional "disable"
  global.describe = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xdescribe" (or "ddescribe")
    if (disabled) {
      return global.xdescribe.apply(this, arguments);
    }
    // otherwise call the original "describe"
    return _super.describe.apply(this, arguments);
  };

  // override, take a third optional "disable"
  global.it = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xit" (or "iit")
    if (disabled) {
      return global.xit.apply(this, arguments);
    }
    // otherwise call the original "it"
    return _super.it.apply(this, arguments);
  };
}(window));
And a usage example:
describe('foo', function () {
  it('should foo 1', function () {
    expect(true).toBe(true);
  });
  it('should foo 2', function () {
    expect(true).toBe(true);
  });
}, true); // disable suite

describe('bar', function () {
  it('should bar 1', function () {
    expect(true).toBe(true);
  });
  it('should bar 2', function () {
    expect(true).toBe(true);
  }, function () {
    return true; // disable spec
  });
});
See a working example here
I've also stumbled upon this issue where the idea was to add a chained method .when() to describe and it, which would do pretty much the same as what I've described above. It may look nicer, but it is a bit harder to implement.
describe('foo', function () {
  it('bar', function () {
    // ...
  }).when(anything);
}).when(something);
If you are really interested in this second approach, I'll be happy to play with it a little bit more and try to implement chain .when().
Update:
Jasmine uses the third argument as a timeout option (see docs), so my code sample replaces this feature, which is not OK. I like @milanlempera's and @MarcoCI's answers better; mine seems kinda hacky and not intuitive. I'll try to update my solution soon anyway so that it doesn't break compatibility with Jasmine's default features.
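For reference, this is the built-in behavior my override collides with; in Jasmine 2.x the optional third argument to it() is the spec timeout in milliseconds (the values here are just illustrative):

// Jasmine 2.x: the third argument overrides the default spec timeout
it('takes a while', function (done) {
  setTimeout(done, 5000);
}, 10000);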
I can share my experience with this.
In our environment we have several tests running with different browsers and different technologies.
In order to always run the same suites on all platforms and browsers, we have a helper file (helper.js) loaded in Karma with some feature-detection functions exposed globally.
For example:
function isFullScreenSupported() {
  // run some feature detection code here
}
You can also use Modernizr for this.
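Either way, the helper just needs to be listed before the spec files in karma.conf.js so that its functions are defined globally when the specs load; a minimal sketch (the paths are illustrative):

// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'test/helper.js',     // feature-detection helpers, loaded first
      'test/**/*.spec.js'
    ]
  });
};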
In our tests we then wrap things in if/else blocks like the following:
it('should work with fullscreen', function() {
  if (isFullScreenSupported()) {
    // run the test
  }
  // don't do anything otherwise
});
or for an async test
it('should work with fullscreen', function(done) {
  if (isFullScreenSupported()) {
    // run the test
    ...
    done();
  } else {
    done();
  }
});
While it's a bit verbose it will save you time for the kind of scenario you're facing.
In some cases you can use user agent sniffing to detect a particular browser type or version - I know it is bad practice but sometimes there's effectively no other way.
Try this. I am using this solution in my projects.
it('should do something', function () {
  if (!/PhantomJS/.test(window.navigator.userAgent)) {
    expect(true).to.be.true;
  }
});
This will not run this particular test in PhantomJS, but will run it in other browsers.
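If you'd rather have the skipped spec show up as pending instead of silently passing, this check can be combined with the pending() approach from the first answer (a sketch; the spec body is just a placeholder):

it('should do something', function () {
  if (/PhantomJS/.test(window.navigator.userAgent)) {
    pending(); // reported as pending instead of passing in PhantomJS
  }
  // ...the real assertions here
});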
Just rename the tests that you want to disable from it(...) to xit(...)
function xit: A temporarily disabled it. The spec will report as
pending and will not be executed.
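For example (the spec body is just a placeholder):

// temporarily disabled: reported as pending and never executed
xit('does the thing that fails in PhantomJS', function () {
  expect(true).toBe(true);
});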
Related
I've implemented API data caching in my app so that if data is already present it is not re-fetched.
I can intercept the initial fetch
cy.intercept('**/api/things').as('api');
cy.visit('/things')
cy.wait('@api')  // passes
To test the cache is working I'd like to explicitly test the opposite.
How can I modify the cy.wait() behavior similar to the way .should('not.exist') modifies cy.get() to allow the negative logic to pass?
// data is cached from the first route, how do I assert no call occurs?
cy.visit('/things2')
cy.wait('@api')
  .should('not.have.been.called')  // fails with "no calls were made"
Minimal reproducible example
<body>
  <script>
    setTimeout(() => {
      fetch('https://jsonplaceholder.typicode.com/todos/1')
    }, 300)
  </script>
</body>
Since we test a negative, it's useful to first make the test fail. Serve the above HTML and use it to confirm the test fails, then remove the fetch() and the test should pass.
The add-on package cypress-if can change default command behavior.
cy.get(selector)
  .if('exist').log('exists')
  .else().log('does.not.exist')
Assume your API calls are made within 1 second of the action that would trigger them - the cy.visit().
cy.visit('/things2')
cy.wait('@alias', { timeout: 1100 })
  .if(result => {
    expect(result.name).to.eq('CypressError')  // confirm an error was thrown
  })
You will need to overwrite the cy.wait() command to check for a chained .if() command.
Cypress.Commands.overwrite('wait', (waitFn, subject, selector, options) => {
  // Standard behavior for numeric waits
  if (typeof selector === 'number') {
    return waitFn(subject, selector, options)
  }

  // Modified alias wait with a following if()
  if (cy.state('current').attributes.next?.attributes.name === 'if') {
    return waitFn(subject, selector, options).then((pass) => pass, (fail) => fail)
  }

  // Standard alias wait
  return waitFn(subject, selector, options)
})
As yet only cy.get() and cy.contains() are overwritten by default.
Custom command for the same logic
If the if() syntax doesn't feel right, the same logic can be used in a custom command:
Cypress.Commands.add('maybeWaitAlias', (selector, options) => {
  const waitFn = Cypress.Commands._commands.wait.fn

  // waitFn returns a Promise
  // which Cypress resolves to the `pass` or `fail` values
  // depending on which callback is invoked
  return waitFn(cy.currentSubject(), selector, options)
    .then((pass) => pass, (fail) => fail)

  // by returning the `pass` or `fail` value
  // we are stopping the "normal" test failure mechanism
  // and allowing downstream commands to deal with the outcome
})
cy.visit('/things2')
cy.maybeWaitAlias('@alias', { timeout: 1000 })
  .should(result => {
    expect(result.name).to.eq('CypressError')  // confirm an error was thrown
  })
I also tried cy.spy() but with a hard cy.wait() to avoid any latency in the app after the route change occurs.
const spy = cy.spy()
cy.intercept('**/api/things', spy)

cy.visit('/things2')
cy.wait(2000)
  .then(() => expect(spy).not.to.have.been.called)
Running it in a burn test of 100 iterations, this seems to be OK, but there is still a chance of a flaky test with this approach, IMO.
A better way would be to poll the spy recursively:
const spy = cy.spy()
cy.intercept('**/api/things', spy)

cy.visit('/things2')

const waitForSpy = (spy, options, start = Date.now()) => {
  const { timeout, interval = 30 } = options;
  if (spy.callCount > 0) {
    return cy.wrap(spy.lastCall)
  }
  if ((Date.now() - start) > timeout) {
    return cy.wrap(null)
  }
  return cy.wait(interval, { log: false })
    .then(() => waitForSpy(spy, { timeout, interval }, start))
}

waitForSpy(spy, { timeout: 2000 })
  .should('eq', null)
A neat little trick I learned from Gleb's Network course.
You will want to use cy.spy() with your intercept and use cy.get() on the alias to be able to check that no calls were made.
// initial fetch
cy.intercept('**/api/things').as('api');
cy.visit('/things')
cy.wait('@api')

cy.intercept('METHOD', '**/api/things', cy.spy().as('apiNotCalled'))

// trigger the fetch again, but it will not send since the data is cached
cy.get('@apiNotCalled').should('not.been.called')
Let's assume I have a service function that returns the current location, and the function uses callbacks to return that location. We can easily mock the function as follows, but I want to introduce some delay (let's say 1 second) before callFake() invokes successHandler(location).
Is there a way to achieve that?
xxxSpec.js
spyOn(LocationService, 'getLocation').and.callFake(function(successHandler, errorHandler) {
  // TODO: introduce some delay here
  const location = {...};
  successHandler(location);
});
LocationService.js
function getLocation(successCallback, errorCallback) {
  let location = {...};
  successCallback(location);
}
Introducing delay in JavaScript is easily done with the setTimeout API, details here. You haven't specified if you are using a framework such as Angular, so your code may differ slightly from what I have below.
It does not appear that you are using Observables or Promises for easier handling of asynchronous code. Jasmine 2 does have the 'done' callback, which can be useful for this. Something like this could work:
it( "my test", function(done) {
let successHandler = jasmine.createSpy();
spyOn(LocationService, 'getLocation').and.callFake(function(successHandler, errorHandler) {
setTimeout(function() {
const location = {...};
successHandler(location);
}, 1000); // wait for 1 second
})
// Now invoke the function under test
functionUnderTest(/* location data */);
// To test we have to wait until it's completed before expecting...
setTimeout(function(){
// check what you want to check in the test ...
expect(successHandler).toHaveBeenCalled();
// Let Jasmine know the test is done.
done();
}, 1500); // wait for longer than one second to test results
});
However, it is not clear to me why adding the timeouts would be valuable to your testing. :)
I hope this helps.
I am currently writing some tests in a legacy system and am confused about a test result here.
I have one test here which fails as expected, but Mocha shows 1 passing and 1 failing as the result.
I am using Bluebird promises, Mocha, chai-as-promised and Sinon with sinon-chai for spies and stubs. This is the test (I have removed stuff not needed for understanding my problem):
describe("with a triggerable activity (selection)", function () {
beforeEach(function stubData() {
stubbedTriggerFunction = sinon.stub(trigger, "newParticipation");
activityLibStub = ... // stub
selectionLibStub = ... // stub
fakeActivities = ... // simple data with just ONE element
fakeSelection = ... // simple data with just ONE element
// stub methods to return synthetic data
activityLibStub.returns(new Promise(function (resolve) {
resolve(fakeActivities);
}));
selectionLibStub.withArgs(1).returns(new Promise(function (resolve) {
resolve(fakeSelection);
}));
});
afterEach(function releaseStubs() {
// restore stubs...
});
it("should call the newParticipation function", function () {
var member = memberBuilder.buildCorrect();
trigger.allActivities(member)
.then(function () {
return expect(stubbedTriggerFunction).to.not.have.been.called;
})
.done();
})
});
This test fails as expected, because the function is actually invoked. However, Mocha tells me that one test passed and one test failed. This is the only test I have implemented in this module.
I am pretty sure this has something to do with the promises, but I can't seem to figure out what it is. Also, if I add
.catch(function () {
  chai.assert.fail();
})
after the then block, Mocha still claims that one test passed. The method is also invoked only once, and I have only one synthetic dataset to work with. So I don't see what tells Mocha that this test has both passed and failed...
Any ideas?
Regards, Vegaaaa
Return your promise, return your promise, return your promise. Let's chant it all together now "Return, yoooooooour promise!"
it("should call the newParticipation function", function () {
var member = memberBuilder.buildCorrect();
return trigger.allActivities(member)
.then(function () {
return expect(stubbedTriggerFunction).to.not.have.been.called;
});
})
});
I've also removed .done() because that's not generally useful with Bluebird and would be downright harmful here. Mocha still needs to use your promise.
(Its use is generally discouraged, see here.) If you do not return your promise, then Mocha treats your test as synchronous, and most likely that's going to be successful because your test is not really testing anything synchronously. Then, if you get an asynchronous failure, Mocha has to decide what exactly failed and will do its best to record the failure, but it can lead to funny things like having an incorrect number of tests, or the same test being reported as both failed and successful!
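If for some reason you can't return the promise, a sketch of the equivalent using Mocha's done callback (same stub and assertion as above):

it("should call the newParticipation function", function (done) {
  var member = memberBuilder.buildCorrect();

  trigger.allActivities(member)
    .then(function () {
      expect(stubbedTriggerFunction).to.not.have.been.called;
      done();
    })
    .catch(done); // pass any assertion error on to Mocha
});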
The issue is how to test event handlers with asynchronous internal methods that are executed by an SDK such as Facebook's.
The plain test is:
describe('Listens to someevent', function () {
  it('and triggers anotherevent', function () {
    var eventSpy = spyOnEvent(document, 'anotherevent');
    var data = {
      param1: 'param1',
      param2: 'param2',
    }

    this.component.trigger('somevent', data);

    runs(function() {
      expect(eventSpy).toHaveBeenTriggeredOn(document);
    });
  });
});
When someevent is triggered with options, the component handler is fired:
this.handler = function (e, data) {
  SDK.apicall(data, function (err, response) {
    if (!err) {
      doSomething();
    }
    // trigger data event
    that.trigger(document, 'anotherevent');
  });
};
In Jasmine 1.3 and before, a runs block without a preceding waitsFor still executes immediately; it's really the waitsFor that makes the spec asynchronous. This has changed in 2.0.
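A minimal 1.3-style sketch of that, assuming jQuery is available (spyOnEvent from jasmine-jquery already implies it) and that a 500 ms timeout is acceptable:

it('and triggers anotherevent', function () {
  var eventSpy = spyOnEvent(document, 'anotherevent');
  var triggered = false;

  // record when the event actually fires
  $(document).one('anotherevent', function () { triggered = true; });

  this.component.trigger('somevent', { param1: 'param1', param2: 'param2' });

  // waitsFor makes the spec asynchronous: it polls until the latch
  // function returns true or the timeout expires
  waitsFor(function () { return triggered; }, 'anotherevent to fire', 500);

  // this runs block only executes after waitsFor has passed
  runs(function () {
    expect(eventSpy).toHaveBeenTriggeredOn(document);
  });
});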
Alternatively, if you don't really want to call the external API during your tests, you can use something like jasmine-ajax (docs). This will allow you to answer the ajax call immediately in the test with whatever response you want to test with. This has the advantage of making your specs faster (there's no waiting for the API) and making them less dependent on the API being up when they run.
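A sketch of that approach, assuming the SDK call goes through XMLHttpRequest; the response body here is made up purely for illustration:

describe('Listens to someevent', function () {
  beforeEach(function () {
    jasmine.Ajax.install();   // intercept all XHR for this spec
  });

  afterEach(function () {
    jasmine.Ajax.uninstall();
  });

  it('and triggers anotherevent', function () {
    var eventSpy = spyOnEvent(document, 'anotherevent');

    this.component.trigger('somevent', { param1: 'param1', param2: 'param2' });

    // answer the SDK's request immediately so no real API call is made
    jasmine.Ajax.requests.mostRecent().respondWith({
      status: 200,
      responseText: '{"ok": true}'
    });

    expect(eventSpy).toHaveBeenTriggeredOn(document);
  });
});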
There are plenty of documents that show how to add a matcher to a Jasmine spec (here, for example).
Has anyone found a way to add matchers to the whole environment? I want to create a set of useful matchers that can be called by any and all tests, without copypasta all over my specs.
I'm currently working to reverse-engineer the source, but would prefer a tried-and-true method, if one exists.
Sure, you just call beforeEach() without any spec scoping at all, and add matchers there.
This would globally add a toBeOfType matcher.
beforeEach(function() {
  var matchers = {
    toBeOfType: function(typeString) {
      return typeof this.actual == typeString;
    }
  };

  this.addMatchers(matchers);
});
describe('Thing', function() {
  // matchers available here.
});
I've made a file named spec_helper.js full of things like custom matchers that I just need to load onto the page before I run the rest of the spec suite.
Here's one for jasmine 2.0+:
beforeEach(function() {
  jasmine.addMatchers({
    toEqualData: function() {
      return {
        compare: function(actual, expected) {
          return { pass: angular.equals(actual, expected) };
        }
      };
    }
  });
});
Note that this uses angular's angular.equals.
Edit: I didn't know it was an internal implementation that may be subject to change. Use at your own risk.
jasmine.Expectation.addCoreMatchers(matchers)
Based on previous answers, I created the following setup for angular-cli. I also need an external module in my matcher (in this case, moment.js).
Note: In this example I added an equality tester, but it should work with a custom matcher too.
Create a file src/spec_helper.ts with the following contents:
// Import module
import { Moment } from 'moment';

export function initSpecHelper() {
  beforeEach(() => {
    // Add your matcher
    jasmine.addCustomEqualityTester((a: Moment, b: Moment) => {
      if (typeof a.isSame === 'function') {
        return a.isSame(b);
      }
    });
  });
}
Then, in src/test.ts, import the initSpecHelper() function and execute it. I placed it before Angular's TestBed init, which seems to work just fine.
import { initSpecHelper } from './spec_helper';
//...
// Prevent Karma from running prematurely.
__karma__.loaded = function () {};
// Init our own spec helper
initSpecHelper();
// First, initialize the Angular testing environment.
getTestBed().initTestEnvironment(
  BrowserDynamicTestingModule,
  platformBrowserDynamicTesting()
);
//...