Why is allowRespy set to false by default in Jasmine?

I am a beginner with Jasmine and I was wondering: why is the default value for allowRespy set to false?

I found this answer, which explains that spyOn() always returns the first created spy, so I assume the reason for the flag is to prevent reuse of a mock that already carries state.
I can come up with a simplified example where such behavior could be problematic:
describe("TestedService", () => {
const testedService = TestBed.inject(TestedService) // configuration is ommited for simplicity
it("calls foo only once", () => {
const fooSpy = spyOn(testedService, 'foo')
testedService.doAction()
expect(fooSpy).toHaveBeenCalledOnce()
fooSpy = spyOn(testedService, 'foo') // creating new spy to check if it will be called for the second time
testedService.doAction()
expect(fooSpy).not.toHaveBeenCalledOnce() // fails the test because it still points to the same spy,that was called few moments later.
})
})
Another, more realistic example of problematic behavior is when you want to reset a spy used in your test by creating a new one with spyOn, but no new spy is created and the spy with the old state is used instead.
Example
describe("TestedService", () => {
beforeEach(() => {
TestBed.inject(TestedService) // configuration is ommited for simplicity
spyOn(TestedService, 'foo').and.returnValue(100)
})
....
it('tests TestedService behavior when foo returns undefined', () => {
spyOn(TestedService, 'foo') // expect it to set spy return value to undefined,
but it don't, so the test below does not tests behavior when foo returns undefined
.....
})
})
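As far as I understand it, with the default allowRespy: false the second spyOn() call throws an error saying the method has already been spied upon, while enabling the flag makes it silently hand back the existing spy, which is exactly the state-carrying behavior above. A minimal sketch of my own, assuming a plain Jasmine 3+ setup without TestBed:
jasmine.getEnv().configure({ allowRespy: true })

describe("with allowRespy enabled", () => {
  const obj = { foo: () => 1 }
  it("hands back the existing spy on a repeated spyOn call", () => {
    const first = spyOn(obj, 'foo')
    const second = spyOn(obj, 'foo') // no error is thrown...
    expect(second).toBe(first)       // ...but it is the very same spy object
  })
})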
Other examples and corrections would be appreciated!

Related

Test that an API call does NOT happen in Cypress

I've implemented API data caching in my app so that if data is already present it is not re-fetched.
I can intercept the initial fetch
cy.intercept('**/api/things').as('api');
cy.visit('/things')
cy.wait('@api') // passes
To test that the cache is working, I'd like to explicitly test the opposite.
How can I modify the cy.wait() behavior, similar to the way .should('not.exist') modifies cy.get(), to allow the negative logic to pass?
// data is cached from first route, how do I assert no call occurs?
cy.visit('/things2')
cy.wait('@api')
  .should('not.have.been.called') // fails with "no calls were made"
Minimal reproducible example
<body>
  <script>
    setTimeout(() => {
      fetch('https://jsonplaceholder.typicode.com/todos/1')
    }, 300)
  </script>
</body>
Since we are testing a negative, it's useful to first make the test fail. Serve the above HTML and use it to confirm the test fails, then remove the fetch() and the test should pass.
The add-on package cypress-if can change default command behavior.
cy.get(selector)
  .if('exist').log('exists')
  .else().log('does.not.exist')
Assume your API calls are made within 1 second of the action that triggers them (the cy.visit()).
cy.visit('/things2')
cy.wait('@alias', {timeout: 1100})
  .if(result => {
    expect(result.name).to.eq('CypressError') // confirm an error was thrown
  })
You will need to overwrite the cy.wait() command to check for a chained .if() command.
Cypress.Commands.overwrite('wait', (waitFn, subject, selector, options) => {
  // Standard behavior for numeric waits
  if (typeof selector === 'number') {
    return waitFn(subject, selector, options)
  }
  // Modified alias wait with following if()
  if (cy.state('current').attributes.next?.attributes.name === 'if') {
    return waitFn(subject, selector, options).then((pass) => pass, (fail) => fail)
  }
  // Standard alias wait
  return waitFn(subject, selector, options)
})
At present, only cy.get() and cy.contains() are overwritten by cypress-if by default.
Custom Command for same logic
If the if() syntax doesn't feel right, the same logic can be used in a custom command
Cypress.Commands.add('maybeWaitAlias', (selector, options) => {
  const waitFn = Cypress.Commands._commands.wait.fn
  // waitFn returns a Promise
  // which Cypress resolves to the `pass` or `fail` values
  // depending on which callback is invoked
  return waitFn(cy.currentSubject(), selector, options)
    .then((pass) => pass, (fail) => fail)
  // by returning the `pass` or `fail` value
  // we are stopping the "normal" test failure mechanism
  // and allowing downstream commands to deal with the outcome
})
cy.visit('/things2')
cy.maybeWaitAlias('@alias', {timeout: 1000})
  .should(result => {
    expect(result.name).to.eq('CypressError') // confirm an error was thrown
  })
I also tried cy.spy() but with a hard cy.wait() to avoid any latency in the app after the route change occurs.
const spy = cy.spy()
cy.intercept('**/api/things', spy)
cy.visit('/things2')
cy.wait(2000)
  .then(() => expect(spy).not.to.have.been.called)
Running a burn test of 100 iterations, this seems to be OK, but there is still a chance of a flaky test with this approach, IMO.
A better way would be to poll the spy recursively:
const spy = cy.spy()
cy.intercept('**/api/things', spy)
cy.visit('/things2')

const waitForSpy = (spy, options, start = Date.now()) => {
  const {timeout, interval = 30} = options;
  if (spy.callCount > 0) {
    return cy.wrap(spy.lastCall)
  }
  if ((Date.now() - start) > timeout) {
    return cy.wrap(null)
  }
  return cy.wait(interval, {log: false})
    .then(() => waitForSpy(spy, {timeout, interval}, start))
}

waitForSpy(spy, {timeout: 2000})
  .should('eq', null)
A neat little trick I learned from Gleb's Network course.
You will want to use cy.spy() with your intercept and cy.get() on the alias to be able to check that no calls were made.
// initial fetch
cy.intercept('**/api/things').as('api');
cy.visit('/things')
cy.wait('@api')

cy.intercept('METHOD', '**/api/things', cy.spy().as('apiNotCalled'))
cy.visit('/things2') // trigger the fetch again, but no request is sent since the data is cached
cy.get('@apiNotCalled').should('not.have.been.called')

How to stub Fluture?

Background
I am trying to convert a code snippet from good old Promises into something using Flutures and Sanctuary:
https://codesandbox.io/embed/q3z3p17rpj?codemirror=1
Problem
Now, usually, using Promises, I can use a library like Sinon to stub the promises, i.e. to fake their results, force them to resolve or reject, etc.
This is fundamental, as it helps one test several branch directions and make sure everything works as it is supposed to.
With Flutures, however, it is different. One cannot simply stub a Fluture, and I didn't find any sinon-esque libraries that could help either.
Questions
How do you stub Flutures?
Is there any specific recommendation to doing TDD with Flutures/Sanctuary?
I'm not sure, but those Flutures (the name! ... never mind, the API looks cool) are plain objects, just like promises. They just have a more elaborate API and different behavior.
Moreover, you can easily create "mock" flutures with Future.of and Future.reject instead of making real API calls.
Yes, sinon contains sugar helpers like resolves and rejects, but they are just wrappers that can be implemented with callsFake.
So you can easily create a stub that returns a fluture like this:
someApi.someFun = sinon.stub().callsFake((arg) => {
  assert.equals(arg, 'spam');
  return Future.of('bar');
});
Then you can test it like any other API.
The only problem is asynchronicity, but that can be solved as proposed below.
// with async/await
it('spams with async', async () => {
  const result = await someApi.someFun('spam').promise();
  assert.equals(result, 'bar');
});

// or leveraging mocha's ability to wait for returned thenables
it('spams', () => {
  return someApi.someFun('spam')
    .promise()
    .then(
      (result) => { assert.equals(result, 'bar'); },
      (error) => { /* ???? */ }
    );
});
As Zbigniew suggested, Future.of and Future.reject are great candidates for mocking, using plain old JavaScript or whatever tools or framework you like.
To answer part 2 of your question, about specific recommendations for doing TDD with Fluture: there is of course no one true way it should be done. However, I do recommend you invest a little time in the readability and ease of writing of your tests if you plan on using Futures all across your application.
This applies to anything you frequently include in tests though, not just Futures.
The idea is that when you are skimming over test cases, you will see developer intention, rather than boilerplate to get your tests to do what you need them to.
In my case I use mocha & chai in the BDD style (given/when/then).
And for readability I created these helper functions.
const {expect} = require('chai');

exports.expectRejection = (f, onReject) =>
  f.fork(
    onReject,
    value => expect.fail(
      `Expected Future to reject, but was ` +
      `resolved with value: ${value}`
    )
  );

exports.expectResolve = (f, onResolve) =>
  f.fork(
    error => expect.fail(
      `Expected Future to resolve, but was ` +
      `rejected with value: ${error}`
    ),
    onResolve
  );
As you can see, there is nothing magical going on: I simply fail on the unexpected result and let you handle the expected path, so you can do more assertions there.
Now some tests would look like this:
const Future = require('fluture');
const {expect} = require('chai');
const {expectRejection, expectResolve} = require('../util/futures');

describe('Resolving function', () => {
  it('should resolve with the given value', done => {
    // Given
    const value = 42;
    // When
    const f = Future.of(value);
    // Then
    expectResolve(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});

describe('Rejecting function', () => {
  it('should reject with the given value', done => {
    // Given
    const value = 666;
    // When
    const f = Future.of(value);
    // Then
    expectRejection(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});
And running should give one pass and one failure.
✓ Resolving function should resolve with the given value: 1ms
1) Rejecting function should reject with the given value
1 passing (6ms)
1 failing
1) Rejecting function
should reject with the given value:
AssertionError: Expected Future to reject, but was resolved with value: 666
Do keep in mind that this should be treated as asynchronous code, which is why I always accept the done function as an argument in it() and call it at the end of my expected results. Alternatively, you could change the helper functions to return a promise and let mocha handle that.
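For illustration, a rough sketch of that promise-returning alternative (the name expectResolveP is hypothetical; it relies only on fork() and the standard Promise constructor):
exports.expectResolveP = f =>
  new Promise((resolve, reject) =>
    f.fork(
      error => reject(new Error(
        `Expected Future to resolve, but was rejected with value: ${error}`
      )),
      resolve
    )
  );

// In a test, return the promise instead of taking a done callback:
// it('should resolve with the given value', () =>
//   expectResolveP(Future.of(42)).then(out => expect(out).to.equal(42)));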

How can I add a shared variable to chakram?

I'm using chakram + mocha.
How can I use shared variables for all tests?
For example, I would like to use the variable API_PATH="http://example.com/v1/" in tests, and this variable could be changed when running the tests. So my test looks like this:
var response = chakram.get(API_PATH + "products");
expect(response).to.have.status(200);
As an example, protractor has conf.js with the parameter baseUrl, and running a test looks like: protractor conf.js --baseUrl http://example.com/
What have you tried so far? Have you tried using beforeEach to reinitialize the object that you are using? You could just declare the shared variables outside of your tests.
EDIT: Adding details from what Jerry said:
If you want variables to be reused within the same test file, you must make them global variables. See the example below:
/// include dependencies
var assert = require('assert'),
  chai = require('chai'),
  expect = chai.expect,
  chakram = require('chakram');

// init global variables for use within the same test file
var testObj,
  dummyData = {
    user: 'John Kim',
    lastSeenOnline: 'Wed August 11 12:05 2017'
  };

describe('#User', function () {
  beforeEach(function () {
    // init what the object contains
    // DataStore and ContainerStore come from the author's React app (see note below)
    testObj = new DataStore(dummyData, ContainerStore);
  });
  it('#Should return the name of the user', function () {
    assert.equal(testObj.get('user'), dummyData.user);
  });
  it("should offer simple HTTP request capabilities", function () {
    return chakram.get("http://httpbin.org/get");
  });
});
Note: I work with React, but this is just an example. We assume that ContainerStore provides a get() method which just returns the value of our JSON object. You can use testObj many times in different describe blocks since it is declared outside of your tests. But you should remember to always reinitialize your testObj in a beforeEach(); otherwise, you risk polluting your individual tests. There are cases where you do not have to use a beforeEach() and it is optional.
For example, in config.js:
module.exports = {
  "baseUrl": "http://example.com/",
  "googleUrl": "http://www.google.com.tr/"
};
And use it in the JavaScript code:
let config = require('./config');

describe("test describe", () => {
  it("test", () => {
    chakram.get(config.baseUrl + "products"); // example usage
  })
})
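If the value also needs to be changed per run (like protractor's --baseUrl flag), one common pattern, sketched here with a hypothetical BASE_URL environment variable, is to let the environment override the config default:
// config.js - BASE_URL is a hypothetical environment variable used only for this sketch
module.exports = {
  "baseUrl": process.env.BASE_URL || "http://example.com/",
  "googleUrl": "http://www.google.com.tr/"
};
You can then run the same tests against another host with something like BASE_URL=http://staging.example.com/ mocha.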

How do I test a class in Mocha?

In my journey towards TDD, I am using Mocha, Chai and Sinon.
There certainly is a learning curve there.
My goal is to write a test to verify that method4 was executed. How do I achieve that?
// MyData.js
class MyData {
  constructor(input) {
    this._runMethod4 = input; // true or false
    this.underProcessing = this.init();
  }
  method1() { return this.method2() }
  method2() {
    if (this._runMethod4) {
      return this.method4();
    } else {
      return this.method3();
    }
  }
  method4() {
    return thirdPartyAPI.getData();
  }
  method3() {
    return someAPI.fetchData();
  }
  init() {
    return this.method1();
  }
}
// MyData.spec.js
describe('MyData', () => {
  it('should execute method 4', function() {
    let foo = new MyData(true);
    foo.underProcessing.then(() => {
      // How do I verify that method4 was executed ??
      expect(foo.method4.callCount).to.equal(1);
    });
  });
})
Here's an example:
const expect = require('chai').expect;
const sinon = require('sinon');
const sinonTest = require('sinon-test');
const MyData = require('./MyData'); // assuming MyData.js sits next to the spec

sinon.test = sinonTest.configureTest(sinon);
sinon.testCase = sinonTest.configureTestCase(sinon);

describe("MyData", () => {
  it("should execute method 4", sinon.test(function() {
    let spy = this.spy(MyData.prototype, 'method4');
    let foo = new MyData(true);
    return foo.underProcessing.then(() => {
      expect(spy.callCount).to.equal(1);
    });
  }));
});
In addition, I added sinon-test because it's really useful for cleaning up spies/stubs after the test has run.
The main feature is this line:
let spy = this.spy(MyData.prototype, 'method4');
This replaces MyData.prototype.method4 by a Sinon spy, which is a pass-through function (so it calls the original) that will record how exactly it was called, how often, with what arguments, etc. You need to do this before the instance is created, because otherwise you're too late (the method might already have been called through the chain of method calls that starts with this.init() in the constructor).
If you want to use a stub instead, which is not pass-through (so it won't call the original method), you can do that as well:
let spy = this.stub(MyData.prototype, 'method4').returns(Promise.resolve('yay'));
So instead of calling thirdPartyAPI.getData() and returning its result, method4 will now return a promise resolved with the value 'yay'.
The remaining code should speak for itself, with one caveat: an explicit return in front of foo.underProcessing.then(...).
I assume that foo.underProcessing is a promise, so it's asynchronous, which means that your test should be asynchronous as well. Since Mocha supports promises, when you return a promise from a test, Mocha will know how to deal with it properly (there's an alternative method of making an asynchronous test with Mocha, involving callback functions, but when you're testing promise-based code you shouldn't really use those since it's easy to run into timeouts or swallowed exceptions).
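For comparison, a rough sketch of that callback style and why it bites (plain sinon.spy() here instead of sinon-test, so the spy has to be restored manually):
it("should execute method 4", function(done) {
  const spy = sinon.spy(MyData.prototype, 'method4');
  const foo = new MyData(true);
  foo.underProcessing.then(() => {
    expect(spy.callCount).to.equal(1); // if this throws, done() is never called
    spy.restore();                     // and the test fails only by timing out
    done();
  });
});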

Bluebird Promise becomes resolved in any case

I must be doing something wrong; however, this is my test case:
const { describe, it } = require('mocha'),
  should = require('should'),
  Promise = require('bluebird') // v3.4.6

describe('Bluebird', () => {
  it('Promise is never resolved but does it get resolved?', () => {
    new Promise(() => false)
      .should.be.fulfilled() // It really shouldn't be
  })
})
This passes, but shouldn't it fail?
When working with promises in mocha tests, it is important to return the promise from the test.
In your case, that would be:
it('Promise is never resolved but does it get resolved?', () => {
  return new Promise(() => false)
    .should.be.fulfilled()
})
However, that is probably still not exactly what you need here, as the fulfillment of the promise can't be determined at the time should is invoked. Your actual test is probably different; the most important part is still to return the promise chain.
When you do that, you don't need to further assert the fulfillment/rejection of the promise, as that is done implicitly by mocha.
I'm personally a big fan of chai-as-promised, which would allow you to use the exact same test you had before, but this time, it would be working.
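A rough sketch of what that could look like (assuming chai and chai-as-promised are installed; note this uses chai's should() rather than the should package):
const chai = require('chai'),
  Promise = require('bluebird')

chai.use(require('chai-as-promised'))
chai.should()

describe('Bluebird with chai-as-promised', () => {
  it('now genuinely checks fulfillment', () => {
    // returning the assertion lets mocha wait for it; a promise that never
    // settles will fail the test via mocha's timeout instead of silently passing
    return new Promise(() => false).should.be.fulfilled
  })
  it('passes for a promise that actually fulfills', () => {
    return Promise.resolve(42).should.eventually.equal(42)
  })
})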
