I'm using Sinon with Mocha to test some expiration date values. I used the same code a few months ago and it worked fine, but somewhere between v1.12.x and v1.17.x, something has changed and I can't seem to find the right path.
let sinon = require('sinon');
describe('USER & AUTHENTICATION ENDPOINTS', function(done) {
beforeEach(function() {
this.clock = sinon.useFakeTimers(new Date().getTime());
return fixtures.load(data);
});
afterEach(function() {
this.clock.restore();
return fixtures.clear(data);
});
context('POST /users', function() { ... });
});
I've tried with and without the new Date().getTime() argument.
I've tried passing in and explicitly calling done().
I've tried removing my fixture load/clear processes.
The end result is always the same:
Error: timeout of 5000ms exceeded. Ensure the done() callback is being called in this test.
Has something changed that I just haven't noticed in the documentation? Do I have some kind of error in there that I can't see?
Any thoughts would be appreciated.
UPDATE
So a little more info here. This clearly has something to do with my code, but I'm at a loss.
If I comment every actual test, the tests run and give me a green "0 passing".
If I run an actual test, even one as trivial as this:
context('POST /users', function() {
it('should create a new user', function(done) {
done();
})
});
I'm right back to the timeout. What am I missing?
Mystery solved. It appears to be a conflict between Sinon and versions of Knex > 0.7.6.
It seems to be because pool2 relies on the behavior of setTimeout. sinon.useFakeTimers(...) replaces several methods, including setTimeout, with synchronous versions, which breaks it. The fix is to restrict the faking to Date only: clock = sinon.useFakeTimers(Number(date), 'Date');
My original code was written in a world where Knex v0.7.6 was the latest version. Now that it's not, everything failed even though the code itself was the same. I used the fix mentioned and things look to be fine.
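For reference, the adjusted hooks look roughly like this (same fixtures helpers as above):
beforeEach(function() {
  // Fake only Date; setTimeout and friends stay real, so the pool2
  // library used by newer Knex keeps working
  this.clock = sinon.useFakeTimers(Number(new Date()), 'Date');
  return fixtures.load(data);
});
afterEach(function() {
  this.clock.restore();
  return fixtures.clear(data);
});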
You are passing done to your describe callback in line 2:
describe('USER & AUTHENTICATION ENDPOINTS', function(done) {
Mocha expects you to invoke it... To get rid of the timeout error, just remove the done parameter from the callback.
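That is:
// No "done" parameter here; describe callbacks are synchronous
describe('USER & AUTHENTICATION ENDPOINTS', function() {
  // ...
});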
Related
I am currently writing some tests in a legacy system and am confused about a test result here.
I have one test here which fails as expected but mocha shows as a result 1 passing and 1 failing.
I am using Bluebird Promises, mocha, chai-as-promised and sinon with sinon-chai for spies and stubs. This is the test (I have removed stuff not needed for understanding my problem):
describe("with a triggerable activity (selection)", function () {
beforeEach(function stubData() {
stubbedTriggerFunction = sinon.stub(trigger, "newParticipation");
activityLibStub = ... // stub
selectionLibStub = ... // stub
fakeActivities = ... // simple data with just ONE element
fakeSelection = ... // simple data with just ONE element
// stub methods to return synthetic data
activityLibStub.returns(new Promise(function (resolve) {
resolve(fakeActivities);
}));
selectionLibStub.withArgs(1).returns(new Promise(function (resolve) {
resolve(fakeSelection);
}));
});
afterEach(function releaseStubs() {
// restore stubs...
});
it("should call the newParticipation function", function () {
var member = memberBuilder.buildCorrect();
trigger.allActivities(member)
.then(function () {
return expect(stubbedTriggerFunction).to.not.have.been.called;
})
.done();
})
});
This test fails as expected, because the function is actually invoked. However, mocha tells me that one test passed and one test failed. This is the only test I have implemented in this module.
I am pretty sure this has something to do with the promises, but I can't seem to figure out what it is. Also, if I add
.catch(function(){
chai.assert.fail();
})
after the then-block, mocha still claims that one test passed. The method is also invoked just once, and I have only one synthetic dataset to work with. So I don't see what tells mocha that this test passed as well as failed...
Any ideas?
Regards, Vegaaaa
Return your promise, return your promise, return your promise. Let's chant it all together now "Return, yoooooooour promise!"
it("should call the newParticipation function", function () {
var member = memberBuilder.buildCorrect();
return trigger.allActivities(member)
.then(function () {
return expect(stubbedTriggerFunction).to.not.have.been.called;
});
});
I've also removed .done() because that's not generally useful with Bluebird and would be downright harmful here. Mocha still needs to use your promise.
(Its use is generally discouraged; see here.) If you do not return your promise, then Mocha treats your test as synchronous, and that will most likely be successful because your test is not really testing anything synchronously. Then, if you get an asynchronous failure, Mocha has to decide what exactly failed and will do its best to record the failure, but this can lead to funny things like an incorrect number of tests, or the same test being reported as both failed and successful!
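For completeness, if you prefer Mocha's callback style, an equivalent sketch (note the .catch(done), without which an asynchronous failure would surface as a timeout):
it("should call the newParticipation function", function (done) {
  var member = memberBuilder.buildCorrect();
  trigger.allActivities(member)
    .then(function () {
      expect(stubbedTriggerFunction).to.not.have.been.called;
      done();
    })
    .catch(done);
});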
Gist:
I spy on get method of my rest service:
spyOn(restService, 'get').and.callFake(function () {
return deferred.promise;
});
The method I am trying to test is myService.getFormData() that returns a chained promise:
function getFormData() {
var getPromise = this.restService.get(endPoint, params, true);
var processedDataPromise = getPromise.then(successHandle, failHandler);
return processedDataPromise;
}
Back in the Jasmine spec, I invoke the getFormData function and make assertions:
var processedDataPromise = myService.getFormData();
processedDataPromise.then(function(data) {
expect(data).not.toBeNull();
});
deferred.resolve(testFormData);
$rootScope.$digest();
The Problem:
The above derivative promise (processedDataPromise) does indeed get resolved. However, the 'data' passed to it is undefined. Does it have anything to do with the $digest cycle not doing its job in Jasmine?
Why does Jasmine not pass any data to the above derived promise?
Further note: processedDataPromise is a completely different promise from the one returned by get.
It is a promise for processedData, which is returned by successHandle (definition not shown) once its parent getPromise resolves.
In the UI, everything works like a charm.
Sorry for posting the question. The derivative promise indeed got the resolved data I was referring to. The problem was that I was incorrectly accessing the JSON data inside of successHandle.
As a result, successHandle returned null and processedDataPromise got back an undefined response.
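To illustrate the kind of bug this was (the real successHandle isn't shown above, so this is a hypothetical sketch):
function successHandle(response) {
  // Wrong: a typo'd or mis-nested property access, e.g. response.formdata,
  // silently yields undefined/null, and that becomes the resolved value
  // of the derived promise (processedDataPromise).
  // Right: read the JSON at the path the REST service actually returns:
  return response.formData;
}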
Stupid mistake, very hard to find, but the best part is the learning and understanding of JS Promises.
I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those in my Karma config, but this means that I will have to separate out the individual tests that are failing into their own files, separating them from their logical describe blocks and having more files with weird naming conventions would generally suck.
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in the test, or in some other function called from the test.
function skipIfCondition() {
pending();
}
function someSkipCheck() {
return true;
}
describe("test", function() {
it("call pending directly by condition", function() {
if (someSkipCheck()) {
pending();
}
expect(1).toBe(2);
});
it("call conditionally skip function", function() {
skipIfCondition();
expect(1).toBe(3);
});
it("is executed", function() {
expect(1).toBe(1);
});
});
working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think it is the purest solution. In the test results you can see the count of finished and skipped tests.
The simplest solution I see is to override the global functions describe and it so that they accept a third optional argument, a boolean or a function returning a boolean, telling whether the current suite/spec should be executed. The override checks whether this third argument resolves to true, and if it does, calls xdescribe/xit, Jasmine's methods for skipping a suite/spec, instead of the original describe/it. This block has to run before the tests, but after Jasmine is included on the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js. Here is the code:
(function (global) {
// save references to original methods
var _super = {
describe: global.describe,
it: global.it
};
// override, take third optional "disable"
global.describe = function (name, fn, disable) {
var disabled = disable;
if (typeof disable === 'function') {
disabled = disable();
}
// if it should be disabled, call "xdescribe"
if (disabled) {
return global.xdescribe.apply(this, arguments);
}
// otherwise call original "describe"
return _super.describe.apply(this, arguments);
};
// override, take third optional "disable"
global.it = function (name, fn, disable) {
var disabled = disable;
if (typeof disable === 'function') {
disabled = disable();
}
// if it should be disabled, call "xit"
if (disabled) {
return global.xit.apply(this, arguments);
}
// otherwise call original "it"
return _super.it.apply(this, arguments);
};
}(window));
And usage example:
describe('foo', function () {
it('should foo 1 ', function () {
expect(true).toBe(true);
});
it('should foo 2', function () {
expect(true).toBe(true);
});
}, true); // disable suite
describe('bar', function () {
it('should bar 1 ', function () {
expect(true).toBe(true);
});
it('should bar 2', function () {
expect(true).toBe(true);
}, function () {
return true; // disable spec
});
});
See a working example here
I've also stumbled upon this issue, where the idea was to add a chainable method .when() to describe and it, which would do pretty much the same as I've described above. It may look nicer but is a bit harder to implement.
describe('foo', function () {
it('bar', function () {
// ...
}).when(anything);
}).when(something);
If you are really interested in this second approach, I'll be happy to play with it a little bit more and try to implement chain .when().
Update:
Jasmine uses the third argument as a timeout option (see docs), so my code sample replaces this feature, which is not OK. I like @milanlempera's and @MarcoCI's answers better; mine seems kinda hacky and not intuitive. I'll try to update my solution soon anyway so as not to break compatibility with Jasmine's default features.
I can share my experience with this.
In our environment we have several tests running with different browsers and different technologies.
In order to always run the same suites on all platforms and browsers, we have a helper file loaded in Karma (helper.js) with some feature-detection functions available globally.
For example:
function isFullScreenSupported(){
// run some feature detection code here
}
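For instance, a fullscreen check might be implemented like this (a sketch; which vendor prefixes you need depends on the browsers you target):
function isFullScreenSupported() {
  var el = document.documentElement;
  // Standard API plus the common vendor-prefixed variants
  return !!(el.requestFullscreen || el.webkitRequestFullscreen ||
            el.mozRequestFullScreen || el.msRequestFullscreen);
}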
You can also use Modernizr for this.
In our tests then we wrap things in if/else blocks like the following:
it('should work with fullscreen', function(){
if(isFullScreenSupported()){
// run the test
}
// don't do anything otherwise
});
or for an async test
it('should work with fullscreen', function(done){
if(isFullScreenSupported()){
// run the test
...
done();
} else {
done();
}
});
While it's a bit verbose it will save you time for the kind of scenario you're facing.
In some cases you can use user agent sniffing to detect a particular browser type or version - I know it is bad practice but sometimes there's effectively no other way.
Try this. I am using this solution in my projects.
it('should do something', function () {
if (!/PhantomJS/.test(window.navigator.userAgent)) {
expect(true).to.be.true;
}
});
This will not run this particular test in PhantomJS, but will run it in other browsers.
Just rename the tests that you want to disable from it(...) to xit(...)
function xit: A temporarily disabled it. The spec will report as
pending and will not be executed.
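For example:
// Reported as pending, never executed:
xit('should do the thing PhantomJS chokes on', function () {
  // ...
});
Note that this disables the test in every browser, not just PhantomJS.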
I've run into a lot of instability with Protractor, and I'm sure there is something I don't understand.
Sometimes I need to use .then() when clicking on a button before continuing; sometimes it has no impact and I should not use .then(), or the test fails.
I wonder: when should I use the .then() callback when testing with Protractor?
Example:
createAccountForm = $('#form-create-account');
submitButton = createAccountForm.$('button[type=submit]');
browser.wait(EC.elementToBeClickable(submitButton), 5000);
submitButton.click(); // .then(function(){ <-- uncomment in the .then form
// find the confirmation message
var message = $('.alert-success');
browser.wait(EC.visibilityOf(message), 5000);
log.debug('After visibilityOf');
expect(message.isPresent()).to.be.eventually.true;
// }); --> uncomment when in .then form
When I use this form of the test (without .then()), I see in the browser that the click on the button is not done; the test continues with the following expect and then stops.
If I use the .then() form, the click on the button is done, and the test continues without error.
In other tests, I don't need to use the .then() callback when clicking a button.
So, when should I use .then() and when not?
Jean-Marc
The answer to this question can be found in this post: http://spin.atomicobject.com/2014/12/17/asynchronous-testing-protractor-angular/
That is:
- Protractor enqueues all driver commands in the ControlFlow;
- when you need the result of a driver command, you should use .then;
- when you don't need the result of a driver command, you can avoid .then, but all following instructions must be enqueued in the ControlFlow, or they will run before the commands already in the queue, leading to unpredictable results.
So, if you want to run a non-driver test command, you should add it into the .then callback, or wrap the test in a Promise and enqueue it in the ControlFlow. See the example below.
Here is an example of my test working without .then:
log.debug('test0');
// enqueue the click
submitButton.click();
var message = $('.alert-success');
// enqueue the wait for message to be visible
browser.wait(EC.visibilityOf(message), 5000);
log.debug('test1');
// enqueue a test
expect(message.isPresent()).to.be.eventually.true;
log.debug('test2');
// a function returning a promise that does an async test (check in MongoDB Collection)
var testAccount = function () {
var deferred = protractor.promise.defer();
// Verify that an account has been created
accountColl.find({}).toArray(function (err, accs) {
log.debug('test5');
expect(err).to.not.exist;
log.debug('test6');
expect(accs.length).to.equal(1);
return deferred.fulfill();
});
return deferred.promise;
};
log.debug('test3');
// Enqueue the testAccount function
browser.controlFlow().execute(testAccount);
log.debug('test4');
Output is now what we expect:
test0
test1
test2
test3
test4
test5
test6
Do yourself a favor and avoid .then.
Today, async/await is a lot more appropriate for handling promises.
In a nutshell: open the Protractor API page https://www.protractortest.org/#/api, find the method you're about to use, and see what it returns. If it says promise, just add await before calling it. And make sure to make your wrapping function async:
it('test case 1', async () => {
await elem.click()
})
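Applied to the submit-button example from the earlier question, that might look like this (a sketch, assuming the selenium promise manager / ControlFlow is disabled, as recommended when using async/await):
it('shows the confirmation message after submitting', async () => {
  const createAccountForm = $('#form-create-account');
  const submitButton = createAccountForm.$('button[type=submit]');
  await browser.wait(EC.elementToBeClickable(submitButton), 5000);
  await submitButton.click();
  // find the confirmation message
  const message = $('.alert-success');
  await browser.wait(EC.visibilityOf(message), 5000);
  expect(await message.isPresent()).to.be.true;
});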
New to Meteor, and I love it so far. I use vzaar.com as video hosting platform, and they have a node.js package/API which I added to my Meteor project with meteorhacks:npm. Everything in the API works great, but when I upload a video, I need to fetch the video ID from the API when successfully uploaded.
Problem:
I need to save the video id returned from the vzaar API after uploading, but since it happens in the future, my code does not wait on the result and just gives me "undefined". Is it possible to make the Meteor.method wait for the response?
Here is my method so far:
Meteor.methods({
vzaar: function (videopath) {
api.uploadAndProcessVideo(videopath, function (statusCode, data) {
console.log("Video ID: " + data.id);
return data.id;
}, {
title: "my video",
profile: 3
});
console.log(videoid);
}
});
And this is what the Meteor.call looks like right now:
Meteor.call("vzaar", "/Uploads/" + fileInfo.name, function (err, message) {
console.log(message);
});
When I call this method, I immediately get undefined in the browser console and the Meteor console, and after some seconds, I get the video ID in the Meteor console.
Solution
I finally solved the problem, after days of trial and error. I learned about Fibers (here and here), and learned more about the core event loop of Node.js. The problem was that this call answered in the future, so my code always returned undefined: the method returned before the API had answered.
I first tried Meteor.wrapAsync, which I thought was going to work, as it is built on the same Fibers machinery underneath.
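In hindsight, I believe the reason wrapAsync didn't work directly is that it expects a Node-style (error, result) callback as the last argument, while uploadAndProcessVideo takes its (statusCode, data) callback in the middle. A hypothetical adapter could bridge that:
var uploadVideo = function (videopath, options, callback) {
  // Adapt vzaar's (statusCode, data) callback to (error, result)
  api.uploadAndProcessVideo(videopath, function (statusCode, data) {
    callback(null, data.id);
  }, options);
};
var uploadVideoSync = Meteor.wrapAsync(uploadVideo);
// inside the method body: var videoId = uploadVideoSync(videopath, options);
But I ended up using the raw fibers/future NPM module instead. See this working code: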
var Future = Npm.require('fibers/future');
Meteor.methods({
vzaar: function (videopath) {
var fut = new Future();
api.uploadAndProcessVideo(videopath, function (statusCode, data) {
// Return video id
fut.return(data.id);
}, {
// Video options
title: "hello world",
profile: 3
});
// The delayed return
return fut.wait();
}
});
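With this in place, the Meteor.call from the question receives the video id in its callback instead of undefined:
Meteor.call("vzaar", "/Uploads/" + fileInfo.name, function (err, videoId) {
  console.log(videoId); // now logs the id returned by the vzaar API
});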
Remember to install the npm module correctly with meteorhacks:npm first.
I learned about how to use the Future fiber in this case via this stackoverflow answer.
I hope this can be useful for others, as it was really easy to implement.