Automatically number Jasmine tests - jasmine

Is there a way to automatically number the "it"/"describe" blocks in Jasmine tests?
It gets really confusing when you need to include a lot of test scripts (200+) and run them all at once from the same file.
And I'm trying to avoid manual numbering, for obvious reasons.
Thank you in advance.

I'm using a variable and incrementing it inside the "it" description:
describe('EntityInfoComponent', () => {
  let testNum = 1;

  it('should be initialized #' + testNum++, () => {
    ...
  });

  it('should call service and show the results #' + testNum++, () => {
    ...
  });
});
Not a great method, but it works. I also tried incrementing the counter in beforeEach() and afterEach(), but those run during spec execution, whereas the description strings are evaluated earlier, when the suite is declared, so the counter never advances as expected.
This answer comes four years after the question was asked, but I hope it will be useful for future searches at least.
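If you want to avoid repeating the counter in every description, one option is a small wrapper around Jasmine's global it. A minimal sketch (numberedIt is a hypothetical helper, not part of Jasmine's API):

let specCounter = 0;

// Hypothetical helper: prefixes every spec description with an auto-incremented number.
function numberedIt(description, fn) {
  return it('#' + (++specCounter) + ' ' + description, fn);
}

describe('EntityInfoComponent', () => {
  numberedIt('should be initialized', () => {
    expect(true).toBe(true);
  });
});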

Related

How to handle Exceptions in Cypress in a similar way we did in selenium

In Selenium we can handle exceptions: if an exception occurs in one test case, execution simply moves on to the next test case. I am confused about how to do the same in Cypress. Take the example below:
it('Test Case 1', function () {
  cy.visit('https://habitica.com/login')
  cy.get('form').find('input[id="usernameInput"]').click().type("username")
  cy.get('form').find('input[id="passwordInput"]').click().type("password")
  cy.get('.btn-info').click() // <-- element not found here
  cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
  cy.get('.start-day').find('button').click({force:true})
})
it('Test Case 2', function () {
  cy.visit('https://habitica.com/login')
  cy.get('form').find('input[id="usernameInput"]').click().type("username")
  cy.get('form').find('input[id="passwordInput"]').click().type("password")
  cy.get('.btn-info').click()
  cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
  cy.get('.start-day').find('button').click({force:true})
})
Let's say the browser is unable to find the element to click (marked with the comment above) in Test Case 1; execution should then move on to Test Case 2.
How can we do this in Cypress? Please help me with this.
I mean exceptions like "unable to find element" and similar ones.
Beyond this example, how can we handle exceptions or errors in general?
The Cypress team advises avoiding conditional tests as much as possible (you may need to change your approach). However, in your case, you can write a conditional test:
cy.get('.btn-info').then((body) => {
  if (body.length > 0) { // continues if the element exists
    cy.get('.btn-info').click();
    cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
    cy.get('.start-day').find('button').click({force:true})
  } // if the above condition is not met, it skips these commands and moves to the next test
});
Thanks a lot for your response. Please have a look at this: I have used your code. '.btn-info' does not exist, so an exception occurs, which is fine. But the problem is that it does not move on to the else branch. If the if condition fails, the else should execute, but it does not. Why is that?
it('First Test Case', function() {
  cy.visit('http://pb.folio3.com:9000/admin/#/login');
  cy.get('.btn-info').then((body) => { // this element does not exist
    if (body.length > 0) { // continues if the element exists
      cy.get('.btn-info').click();
      cy.get('.modal-dialog').find('button[class="btn btn-warning"]').click()
      cy.get('.start-day').find('button').click({force:true})
    }
    else {
      cy.visit('https://www.facebook.com/');
    }
  });
})

it('Second Test Case', function() {
  cy.visit('https://www.google.com/');
})
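For what it's worth, the else branch is never reached because cy.get('.btn-info') itself fails the test when the element does not exist, so the .then() callback never runs at all. The pattern recommended in the Cypress conditional-testing docs is to query an element that is guaranteed to exist (such as body) and look for the optional element with jQuery's .find(). A minimal sketch along those lines:

it('First Test Case', function () {
  cy.visit('http://pb.folio3.com:9000/admin/#/login');
  // <body> always exists, so this cy.get() never fails;
  // the optional element is then checked synchronously via jQuery.
  cy.get('body').then(($body) => {
    if ($body.find('.btn-info').length > 0) {
      cy.get('.btn-info').click();
    } else {
      cy.visit('https://www.facebook.com/');
    }
  });
});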

In Mocha, why do I want to use only()/skip()?

What is the point of only() and skip()? If I only want a single it/describe to be executed, why should I keep the other things in the file? If I want to skip something, why shouldn't I just remove that code? When does one want to use these methods?
About only: imagine you have 2000 unit tests in some npm module, and you need to write 3 more tests for a new feature. So you create a something.test.js file and write the test cases with describe.only():
const assert = require('assert')

describe.only('sample class', () => {
  it('constructor works', () => {
    assert.deepEqual(true, true)
  })
  it('1st method works', () => {
    assert.deepEqual(true, true)
  })
  it('2nd method works', () => {
    assert.deepEqual(true, true)
  })
})
Now if you launch the tests locally via npm test, you run only your 3 tests, not the whole bunch of 2003 tests. Tests are written much faster with only.
About skip: imagine you need to implement an urgent feature in 20 minutes. You don't have enough time to write tests, but you do have time to document your code. As we know, unit tests are the best documentation, so you just write test cases describing how the code should work, with describe.skip():
describe.skip('urgent feature', () => {
  it('should catch thrown error', () => {})
})
Now everyone on your team knows about your new functionality, and maybe someone will write the tests for you. The knowledge of how your new feature works is no longer only in your head; the whole team knows about it. That's good for the project and the business.
more reasons to use skip
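For completeness, both modifiers also work on individual specs rather than whole suites; this is standard Mocha API:

const assert = require('assert')

describe('sample class', () => {
  // Run only this one spec out of the whole file:
  it.only('constructor works', () => {
    assert.deepEqual(true, true)
  })

  // Reported as pending; the body is never executed:
  it.skip('2nd method works', () => {
    assert.deepEqual(true, true)
  })
})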

How to stub Fluture?

Background
I am trying to convert a code snippet from good old Promises into something using Flutures and Sanctuary:
https://codesandbox.io/embed/q3z3p17rpj?codemirror=1
Problem
Now, usually, with Promises, I can use a library like Sinon.JS to stub them, i.e. fake their results, force them to resolve or reject, etc.
This is fundamental, as it helps one test several branch directions and make sure everything works as it is supposed to.
With Flutures, however, it is different. One cannot simply stub a Fluture, and I didn't find any sinon-esque libraries that could help either.
Questions
How do you stub Flutures?
Are there any specific recommendations for doing TDD with Flutures/Sanctuary?
I'm not sure, but those Flutures (this name! ... never mind, the API looks cool) are plain objects, just like promises. They only have a more elaborate API and different behavior.
Moreover, you can easily create "mock" Flutures with Future.of and Future.reject instead of doing real API calls.
Yes, sinon contains sugar helpers like resolves and rejects, but they are just wrappers that can be implemented with callsFake.
So you can easily create a stub that returns a Fluture, like this:
const sinon = require('sinon');
const assert = require('assert');
const Future = require('fluture');

someApi.someFun = sinon.stub().callsFake((arg) => {
  assert.equal(arg, 'spam');
  return Future.of('bar');
});
Then you can test it like any other API.
The only problem is asynchronicity, but that can be solved as proposed below.
// with async/await
it('spams with async', async () => {
  const result = await someApi.someFun('spam').promise();
  assert.equal(result, 'bar');
});

// or leveraging Mocha's ability to wait for returned thenables
it('spams', () => {
  return someApi.someFun('spam')
    .promise()
    .then((result) => { assert.equal(result, 'bar'); });
});
As Zbigniew suggested, Future.of and Future.reject are great candidates for mocking using plain old javascript or whatever tools or framework you like.
To answer part 2 of your question, about specific recommendations on how to do TDD with Fluture: there is, of course, no one true way it should be done. However, I do recommend investing a little time in the readability and ease of writing of your tests if you plan on using Futures all across your application.
This applies to anything you frequently include in tests, though, not just Futures.
The idea is that when you are skimming over test cases, you see developer intention rather than boilerplate to get your tests to do what you need them to.
In my case I use mocha & chai in the BDD style (given/when/then).
And for readability I created these helper functions:
const {expect} = require('chai');

exports.expectRejection = (f, onReject) =>
  f.fork(
    onReject,
    value => expect.fail(
      `Expected Future to reject, but was ` +
      `resolved with value: ${value}`
    )
  );

exports.expectResolve = (f, onResolve) =>
  f.fork(
    error => expect.fail(
      `Expected Future to resolve, but was ` +
      `rejected with value: ${error}`
    ),
    onResolve
  );
As you can see, there is nothing magical going on: I simply fail on the unexpected result and let you handle the expected path, so you can do further assertions with the value.
Now some tests would look like this:
const Future = require('fluture');
const {expect} = require('chai');
const {expectRejection, expectResolve} = require('../util/futures');

describe('Resolving function', () => {
  it('should resolve with the given value', done => {
    // Given
    const value = 42;

    // When
    const f = Future.of(value);

    // Then
    expectResolve(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});

describe('Rejecting function', () => {
  it('should reject with the given value', done => {
    // Given
    const value = 666;

    // When
    const f = Future.of(value);

    // Then
    expectRejection(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});
And running them should give one pass and one failure:
✓ Resolving function should resolve with the given value: 1ms
1) Rejecting function should reject with the given value

1 passing (6ms)
1 failing

1) Rejecting function
     should reject with the given value:
     AssertionError: Expected Future to reject, but was resolved with value: 666
Do keep in mind that this should be treated as asynchronous code, which is why I always accept the done function as an argument in it() and call it at the end of my expected results. Alternatively, you could change the helper functions to return a promise and let Mocha handle that.
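A minimal sketch of that promise-returning alternative (the helper name expectResolveP is hypothetical): wrapping fork in a Promise lets Mocha wait on the returned thenable, so the done callback is no longer needed.

exports.expectResolveP = f =>
  new Promise((resolve, reject) =>
    f.fork(
      error => reject(new Error(
        `Expected Future to resolve, but was rejected with value: ${error}`
      )),
      resolve
    )
  );

// Usage: Mocha waits for the returned promise, no `done` needed.
it('should resolve with the given value', () =>
  expectResolveP(Future.of(42)).then(out => expect(out).to.equal(42))
);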

Sinon useFakeTimers() creates a timeout in before/afterEach

I'm using Sinon with Mocha to test some expiration date values. I used the same code a few months ago and it worked fine, but somewhere between v1.12.x and v1.17.x, something has changed and I can't seem to find the right path.
let sinon = require('sinon');
describe('USER & AUTHENTICATION ENDPOINTS', function(done) {
  beforeEach(function() {
    this.clock = sinon.useFakeTimers(new Date().getTime());
    return fixtures.load(data);
  });

  afterEach(function() {
    this.clock.restore();
    return fixtures.clear(data);
  });

  context('POST /users', function() { ... });
});
I've tried with and without the new Date().getTime() argument.
I've tried passing in and explicitly calling done().
I've tried removing my fixture load/clear processes.
The end result is always the same:
Error: timeout of 5000ms exceeded. Ensure the done() callback is being called in this test.
Has something changed that I just haven't noticed in the documentation? Do I have some kind of error in there that I can't see?
Any thoughts would be appreciated.
UPDATE
So a little more info here. This clearly has something to do with my code, but I'm at a loss.
If I comment every actual test, the tests run and give me a green "0 passing".
If I run an actual test, even one that does nothing but this:
context('POST /users', function() {
  it('should create a new user', function(done) {
    done();
  })
});
I'm right back to the timeout. What am I missing?
Mystery solved. It appears to be a conflict between Sinon and versions of Knex newer than 0.7.6.
It seems pool2 relies on the behavior of setTimeout, and sinon.useFakeTimers(...) replaces several methods, including setTimeout, with synchronous versions, which breaks it. It can be fixed by faking only Date: clock = sinon.useFakeTimers(Number(date), 'Date');
My original code was written in a world where Knex v0.7.6 was the latest version. Now that it's not, everything failed even though the code itself was the same. I used the fix mentioned above and things look to be fine.
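Applied to the setup from the question, that fix looks like this (a sketch; the string arguments restricting which globals get faked are the Sinon 1.x API used above):

beforeEach(function() {
  // Fake only Date, leaving setTimeout and friends intact so the
  // connection pool keeps working.
  this.clock = sinon.useFakeTimers(new Date().getTime(), 'Date');
  return fixtures.load(data);
});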
You are passing done to your describe callback in line 2:
describe('USER & AUTHENTICATION ENDPOINTS', function(done) {
Mocha expects you to invoke it... To get rid of the timeout error, just remove the done parameter from the callback.

Conditionally ignore individual tests with Karma / Jasmine

I have some tests that fail in PhantomJS but not other browsers.
I'd like these tests to be ignored when run with PhantomJS in my watch task (so new browser windows don't take focus and perf is a bit faster), but in my standard test task and my CI pipeline, I want all the tests to run in Chrome, Firefox, etc...
I've considered a file-naming convention like foo.spec.dont-use-phantom.js and excluding those in my Karma config, but this means I would have to separate the failing tests into their own files, away from their logical describe blocks, and having more files with weird naming conventions would generally suck.
In short:
Is there a way I can extend Jasmine and/or Karma and somehow annotate individual tests to only run with certain configurations?
Jasmine supports a pending() function.
If you call pending() anywhere in the spec body, no matter the expectations, the spec will be marked pending.
You can call pending() directly in a test, or in some other function called from the test.
function skipIfCondition() {
  pending();
}

function someSkipCheck() {
  return true;
}

describe("test", function() {
  it("call pending directly by condition", function() {
    if (someSkipCheck()) {
      pending();
    }
    expect(1).toBe(2);
  });

  it("call conditionally skip function", function() {
    skipIfCondition();
    expect(1).toBe(3);
  });

  it("is executed", function() {
    expect(1).toBe(1);
  });
});
working example here: http://plnkr.co/edit/JZtAKALK9wi5PdIkbw8r?p=preview
I think this is the purest solution: in the test results you can see the count of finished and skipped tests.
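Combined with browser detection, this maps directly onto the question (a sketch; sniffing the user agent for "PhantomJS" is just one way to detect it):

function pendingInPhantomJS() {
  // Marks the current spec as pending when running under PhantomJS.
  if (/PhantomJS/.test(window.navigator.userAgent)) {
    pending();
  }
}

describe('fullscreen behaviour', function() {
  it('is skipped in PhantomJS but runs elsewhere', function() {
    pendingInPhantomJS();
    expect(1).toBe(1);
  });
});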
The simplest solution I see is to override the global describe and it functions so that they accept a third, optional argument, which must be a boolean or a function returning a boolean, telling whether or not the current suite/spec should be executed. The override checks whether this third argument resolves to true, and if it does, calls xdescribe/xit (or ddescribe/iit, depending on the Jasmine version), Jasmine's methods for skipping a suite/spec, instead of the original describe/it. This block has to be executed before the tests, but after Jasmine is included on the page. In Karma, just move this code to a file and include it before the test files in karma.conf.js (see the config sketch after the usage example below). Here is the code:
(function (global) {
  // save references to original methods
  var _super = {
    describe: global.describe,
    it: global.it
  };

  // override, take third optional "disable"
  global.describe = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xdescribe" (or "ddescribe")
    if (disabled) {
      return global.xdescribe.apply(this, arguments);
    }
    // otherwise call the original "describe"
    return _super.describe.apply(this, arguments);
  };

  // override, take third optional "disable"
  global.it = function (name, fn, disable) {
    var disabled = disable;
    if (typeof disable === 'function') {
      disabled = disable();
    }
    // if it should be disabled - call "xit" (or "iit")
    if (disabled) {
      return global.xit.apply(this, arguments);
    }
    // otherwise call the original "it"
    return _super.it.apply(this, arguments);
  };
}(window));
And a usage example:
describe('foo', function () {
  it('should foo 1', function () {
    expect(true).toBe(true);
  });
  it('should foo 2', function () {
    expect(true).toBe(true);
  });
}, true); // disable suite

describe('bar', function () {
  it('should bar 1', function () {
    expect(true).toBe(true);
  });
  it('should bar 2', function () {
    expect(true).toBe(true);
  }, function () {
    return true; // disable spec
  });
});
See a working example here
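As for the Karma setup mentioned above, the override just needs to be loaded before the spec files; in karma.conf.js that means listing it first in the files array (a sketch; the file paths are hypothetical):

// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'test/helpers/conditional-skip.js', // the override above, loaded first
      'test/**/*.spec.js'
    ]
  });
};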
I also stumbled upon this issue, where the idea was to add a chainable .when() method to describe and it, doing pretty much the same thing I described above. It may look nicer but is a bit harder to implement:
describe('foo', function () {
  it('bar', function () {
    // ...
  }).when(anything);
}).when(something);
If you are really interested in this second approach, I'll be happy to play with it a little more and try to implement the chainable .when().
Update:
Jasmine uses the third argument as a timeout option (see the docs), so my code sample is replacing that feature, which is not OK. I like #milanlempera's and #MarcoCI's answers better; mine seems kind of hacky and not intuitive. I'll try to update my solution soon anyway so as not to break compatibility with Jasmine's default features.
I can share my experience with this.
In our environment we have several tests running with different browsers and different technologies.
In order to always run the same suites on all platforms and browsers, we have a helper file loaded in Karma (helper.js) with some feature-detection functions exposed globally, e.g.:
function isFullScreenSupported(){
  // run some feature detection code here
}
You can also use Modernizr for this.
In our tests we then wrap things in if/else blocks like the following:
it('should work with fullscreen', function(){
  if(isFullScreenSupported()){
    // run the test
  }
  // don't do anything otherwise
});
or, for an async test:
it('should work with fullscreen', function(done){
  if(isFullScreenSupported()){
    // run the test
    ...
    done();
  } else {
    done();
  }
});
While it's a bit verbose, it will save you time for the kind of scenario you're facing.
In some cases you can use user-agent sniffing to detect a particular browser type or version. I know it is bad practice, but sometimes there's effectively no other way.
Try this. I am using this solution in my projects.
it('should do something', function () {
  if (!/PhantomJS/.test(window.navigator.userAgent)) {
    expect(true).to.be.true;
  }
});
This will not run the assertions of this particular test in PhantomJS (the spec itself still passes there), but it will run them fully in other browsers.
Just rename the tests that you want to disable from it(...) to xit(...):
function xit: A temporarily disabled it. The spec will report as
pending and will not be executed.
