I guess "it" is an abbreviation of a phrase, but I don't know what it stands for. Every time I see this function I try to work out its meaning.
Can somebody tell me about this?
it('is really much simpler than you think')
The it() syntax is used to tell Jasmine what it('should happen in your test').
Jasmine is a testing framework for behavior-driven development. The it function's purpose is to test a behavior of your code.
It takes a string that explains expected behavior and a function that tests it.
it("should be 4 when I multiply 2 with 2", function() {
  expect(2 * 2).toBe(4);
});
It's not an abbreviation, it's just the word it.
This allows for very expressive test cases in Jasmine and, if you follow the scheme, very readable output.
Example:
describe("My object", function () {
  it("can calculate 3+4", function () {
    expect(myObject.add(3, 4)).toBe(7);
  });
});
Then, if that test fails, the output will be like
Test failed: My object can calculate 3+4
Message:
Expected 6.99999 to equal 7
As you can see, this imaginary function suffers from some rounding error. But the point is that the resulting output is very readable and expressive, and the code is too.
The basic scheme in the code is: You describe the unit that will be tested, and then test its various functions and states.
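To see how describe and it compose those readable sentences, here is a minimal sketch in plain JavaScript. This is not Jasmine's real implementation; the shim, the failure bookkeeping, and the deliberately buggy myObject are all invented for illustration.

```javascript
// Minimal stand-ins for Jasmine's describe/it/expect, just to show how
// the suite name and the behavior string join into a readable sentence.
const failures = [];
let currentSuite = "";

function describe(name, body) {
  currentSuite = name;
  body();
}

function it(behavior, body) {
  try {
    body();
  } catch (err) {
    // "My object" + "can calculate 3+4" reads as a complete sentence.
    failures.push(`Test failed: ${currentSuite} ${behavior}\nMessage:\n${err.message}`);
  }
}

function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`Expected ${actual} to equal ${expected}`);
      }
    },
  };
}

// A deliberately off-by-one implementation, so the spec fails:
const myObject = { add: (a, b) => a + b - 1 };

describe("My object", function () {
  it("can calculate 3+4", function () {
    expect(myObject.add(3, 4)).toBe(7);
  });
});

console.log(failures[0]);
// → Test failed: My object can calculate 3+4
//   Message:
//   Expected 6 to equal 7
```

The point of the sketch is only the string plumbing: the description argument is data, and the runner stitches it onto the suite name when reporting.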
Jasmine is a BDD (Behavior Driven Development) testing framework; unlike "standard" TDD (Test Driven Development), you are actually testing against behaviors of your application.
So "it" refers to the object/component/class/whatever you are testing, rather than a method.
Imagine you are writing a test for a calendar widget in which you want to verify that once a user clicks the next arrow, the widget changes the displayed month. You would write something like:
it('should change displayed month once the button is clicked', function(){
  // assertions
});
So "it" is your calendar widget: you are practically saying "the calendar widget should change the displayed month once the button is clicked".
In a TDD it would be instead something like:
testButtonArrowClickChangesDisplayedMonth()
In the end there isn't an actual difference, it's just a matter of style and readability.
Jasmine's tests are defined in a quite verbose manner so that developers can better understand the purpose of each test.
From the docs:
The name it is a pronoun for the test target, not an abbreviation of
anything. It makes the spec [abbr. for specification] more readable by connecting the function
name it and the argument description as a complete sentence.
https://jasmine.github.io/api/edge/global.html#it
Related
Jasmine expect statements can produce worthless error messages like:
Expected true to be false.
To address this, matchers allow you to add a clarifying message as a second argument, expectationFailOutput:
toBe(expected: any, expectationFailOutput?: any): Promise<void>;
This allows you to write:
expect(await password.isDisplayed).toBe(true, "Password field should be visible");
expect(await password.isDisplayed).toBe(true, "Password field was not visible");
These will produce the following error messages, respectively:
Expected false to be true, 'Password field should be visible'.
Expected false to be true, 'Password field was not visible'.
Note that these lines are the same except that in the first case I described what the expect was testing for, and in the second I described what actually happened.
Obviously, I should choose one of these conventions and use it consistently in my code base, but I can't find anything in the documentation about what the typical convention is. Should the message describe what we expected to happen, or should it describe what did happen?
If the Jasmine team doesn't have a convention for this, perhaps somebody who's worked on a lot of Jasmine projects knows what the typical convention is.
I don't see why it should be consistent, or why that would be obvious. Some checks are easy to understand, some are hard. When you feel you need a message, add it. Don't make it hard when it could be simple.
Can someone please explain the difference between these three variants of code? They all work well.
Or maybe the first and second variants are identical?
As for the third one: I have read that browser.sleep() is better avoided, as it causes instability in tests. Is that true?
Help me understand.
Thanks.
var MenuSigninButton = $('button.btn');
var LoginDropdownForm = element(by.id('loginForm'));
MenuSigninButton.click();
browser.wait(EC.visibilityOf(LoginDropdownForm));
and
MenuSigninButton.click();
browser.wait(function () {
  return LoginDropdownForm.isDisplayed();
});
and
MenuSigninButton.click();
browser.sleep(3000);
expect(LoginDropdownForm.isDisplayed()).toBe(true);
First of all, yes, you should avoid browser.sleep() as much as you can. Why?
If you started using it, it is because you are waiting for something to appear on the screen before continuing the test.
But you don't know how long that "something" will take to appear.
Sometimes the hardcoded time is too short, and the test fails, giving a false result.
Sometimes the time is too long, the test takes longer than it needs to, and you could have saved that time.
And either of those can happen unpredictably, depending on timing, the overall load of the environment, and so on.
Second question:
When you use browser.wait(), you have to pass a condition that ends the wait.
It can be a Promise that will be resolved in the future (not covered by your examples), an expected condition (first example), or a function that is executed repeatedly until it returns a truthy value (second example).
The third example is a bit different:
When you use the expect method, you are explicitly writing a condition that must hold for the test to pass. For that reason, expect does not finish until the Promise inside it is resolved.
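The core difference between a condition-based wait and a fixed sleep can be sketched in plain JavaScript. This is only a hedged sketch of the polling idea; Protractor's browser.wait actually delegates to WebDriver, and waitFor here is an invented name.

```javascript
// Poll a condition until it returns truthy or the timeout expires.
// This is the essence of a condition-based wait: it returns as soon
// as the condition holds, instead of always burning a fixed delay.
function waitFor(condition, timeoutMs = 5000, intervalMs = 50) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    (function poll() {
      // Promise.resolve lets the condition be sync or promise-returning,
      // like isDisplayed() in the examples above.
      Promise.resolve(condition()).then((ok) => {
        if (ok) return resolve(ok);
        if (Date.now() > deadline) {
          return reject(new Error("Timed out waiting for condition"));
        }
        setTimeout(poll, intervalMs);
      }, reject);
    })();
  });
}

// Simulate a dropdown that becomes visible after ~100 ms:
let displayed = false;
setTimeout(() => { displayed = true; }, 100);

// Resolves in roughly 100-150 ms here, rather than a full 3000 ms
// as browser.sleep(3000) would always take.
waitFor(() => displayed).then(() => console.log("form is visible"));
```

If the condition never holds, the wait fails with a timeout error instead of silently passing, which is exactly why it is more robust than a hardcoded sleep.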
I recently asked how to test in RSpec if a block was called and the answers to that question seem to work in a simple case. The problem is when the initialization with the block is more complex. Then it is done in before and reused by a number of different tests in the context, among them the one testing if the block was evaluated. See the example:
context "the node definition using block of code" do
  before do
    @n = node do
      # this block should be called
    end
    # some more complex setup concerning @n
  end
  it "should call the block" do
    # how to test it?
  end
  # here a bunch of other tests using @n
end
In this case, the solution with a side effect changing the value of a local variable does not work. Raising an exception from the block is useless, since the whole statement must be properly evaluated to be usable by the other tests.
Obviously I could do the tests separately, but that seems to stink, since I would have to copy-paste the initialization part, and since the was-the-block-called test inherently belongs to this very context.
How to test if the block was evaluated in such a case?
Explanation for the question asked by @zetetic below.
The context is that I'm implementing a kind of DSL, with nodes defined by their parameters and blocks of code (which can define something else in the scope of the node). Since the things defined by the node's block can be pretty generic, for a first attempt I just need to be sure the block is evaluated and that whatever a user provides there will be taken into account. For now it does not matter what it is.
Probably I should refactor my tests now and, using mocks, make them test behaviors rather than implementation. However, that will be a little bit tricky because of some mixins and dynamic handling of messages sent to objects. For now the concept of such tests is a little bit fuzzy in my head ;-)
Anyway, your answers and comments helped me better understand how RSpec works and explained why what I'm trying to do looks as if it doesn't fit RSpec.
Try something like this (untested by me):
context "the node definition using block of code" do
  let(:node) {
    node = Node.new "arg1", "arg2", node_block
    # more complex stuff here
    node
  }

  context "checking the block is called" do
    let(:node_block) {
      double = double("node_block")
      double.should_receive("some kind of arg").and_return("something")
      # this will now cause a fail if it isn't called
      double
    }

    it "should call the block" do
      node.blah()
    end
  end

  let(:node_block) {
    # some real code
  }

  subject { node.blah() }
  it { should == 2 }
  # ...
end
So that's a very shaky piece of code (you'll have to fill in the gaps, as you didn't give much to go on, and let is obviously a lambda too, which may mean you have to play around with it a bit) that uses let and a double to check that the block is called, and avoids using before, which is really for side effects, not for setting up variables for use in the specs.
@zetetic makes a very insightful comment that you're not testing behaviour here. I'm not against using RSpec for more unit-test-style stuff (guidelines are made to be broken), but you might ask how later tests will pass, using a real block of code, if that block isn't being called? In a way I'm not even sure you need to check that the block is called, but only you know.
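The idea behind the double is language-agnostic: a recording stand-in for the block lets one focused test assert "was it called?" while the other tests pass a real block. Here is a hedged sketch of that mechanism in plain JavaScript; makeSpy, buildNode, and blah are invented names, not part of RSpec or the asker's DSL.

```javascript
// A hand-rolled "double" (spy): a callable that records its invocations.
function makeSpy(returnValue) {
  function spy(...args) {
    spy.called = true;
    spy.calls.push(args);
    return returnValue;
  }
  spy.called = false;
  spy.calls = [];
  return spy;
}

// A stand-in for the node under test: it evaluates the block it was given.
function buildNode(block) {
  return { blah: () => block("some kind of arg") };
}

// The focused test: build the node with the spy instead of a real block,
// exercise it, then assert the block was actually invoked.
const blockSpy = makeSpy("something");
buildNode(blockSpy).blah();

console.log(blockSpy.called);   // → true
console.log(blockSpy.calls[0]); // → [ 'some kind of arg' ]
```

This mirrors what should_receive does in the RSpec answer: the double fails the spec if the expected call never happens, without disturbing the sibling tests that use a real block.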
When debugging a function I usually use
library(debug)
mtrace(FunctionName)
FunctionName(...)
And that works quite well for me.
However, sometimes I am trying to debug a complex function that I don't know well. In that case, I may find that inside that function there is another function that I would like to "step into" (debug), so as to better understand how the entire process works.
So one way of doing it would be to do:
library(debug)
mtrace(FunctionName)
FunctionName(...)
# when finding a function I want to debug inside the function, run again:
mtrace(FunctionName.SubFunction)
The question is - is there a better/smarter way to do interactive debugging (as I have described) that I might be missing?
p.s.: I am aware that various questions have been asked on this subject on SO (see here). Yet I wasn't able to find a question/solution similar to what I am asking here.
Not entirely sure about the use case, but when you encounter a problem you can call traceback(). That shows the path of your function call through the stack until it hit the problem. If you were inclined to work your way down from the top, you could call debug on each of the functions in that list before making your function call; then you would be walking through the entire process from the beginning.
Here's an example of how you could do this in a more systematic way, by creating a function to step through it:
walk.through <- function() {
  tb <- unlist(.Traceback)
  if (is.null(tb)) stop("no traceback to use for debugging")
  assign("debug.fun.list", matrix(unlist(strsplit(tb, "\\(")), nrow=2)[1,], envir=.GlobalEnv)
  lapply(debug.fun.list, function(x) debug(get(x)))
  print(paste("Now debugging functions:", paste(debug.fun.list, collapse=",")))
}

unwalk.through <- function() {
  lapply(debug.fun.list, function(x) undebug(get(as.character(x))))
  print(paste("Now undebugging functions:", paste(debug.fun.list, collapse=",")))
  rm(list="debug.fun.list", envir=.GlobalEnv)
}
Here's a dummy example of using it:
foo <- function(x) { print(1); bar(2) }
bar <- function(x) { x + a.variable.which.does.not.exist }
foo(2)
# now step through the functions
walk.through()
foo(2)
# undebug those functions again...
unwalk.through()
foo(2)
IMO, that doesn't seem like the most sensible approach. It makes more sense to go straight into the function where the problem occurs (i.e. at the lowest level) and work your way backwards.
I've already outlined the logic behind this basic routine in "favorite debugging trick".
I like options(error=recover) as detailed previously on SO. Things then stop at the point of error and one can inspect.
(I'm the author of the 'debug' package where 'mtrace' lives)
If the definition of 'SubFunction' lives outside 'MyFunction', then you can just mtrace 'SubFunction' and don't need to mtrace 'MyFunction'. And functions run faster if they're not 'mtrace'd, so it's good to mtrace only as little as you need to. (But you probably know those things already!)
If 'SubFunction' is only defined inside 'MyFunction', one trick that might help is to use a conditional breakpoint in 'MyFunction'. You'll need to 'mtrace( MyFunction)', then run it, and when the debugging window appears, find out what line 'SubFunction' is defined on. Say it's line 17. Then the following should work:
D(n)> bp( 1, F) # don't bother showing the window for MyFunction again
D(n)> bp( 18, { mtrace( SubFunction); FALSE})
D(n)> go()
It should be clear what this does (or it will be if you try it).
The only downsides are the need to do it again whenever you change the code of 'MyFunction', and the slowing-down that might occur through 'MyFunction' itself being mtraced.
You could also experiment with adding a 'debug.sub' argument to 'MyFunction', that defaults to FALSE. In the code of 'MyFunction', then add this line immediately after the definition of 'SubFunction':
if( debug.sub) mtrace( SubFunction)
That avoids any need to mtrace 'MyFunction' itself, but does require you to be able to change its code.
Do you prefer literal values or expressions in your Asserts in your unit tests? This little example demonstrates what I mean - please pay attention to the comments:
[Test]
public function fromXML_works() : void {
    var slideshow : Slideshow = SlideshowConverter.fromXML(xmlSample);
    // do you prefer the literal value "1":
    assertEquals(slideshow.id, "1");
    // ... or an expression like this:
    assertEquals(slideshow.id, xmlSample.@id);
}
private var xmlSample : XML =
    <slideshow id="1">
        <someOtherTags />
    </slideshow>;
The nice thing about the expression is that when the XML sample changes, the unit test will not break. On the other hand, I've basically duplicated the implementation of one aspect of my SlideshowConverter directly in my unit test, which I don't like (the test should test intent, not implementation). I can also imagine that tests using expressions are more prone to programming errors (I could, for example, have made a mistake in the E4X expression in my test method).
What approach do you prefer? What advantage is usually more important on real world projects?
Particularly since you've tagged this TDD: stick with literals. Writing a test before code exists to pass it, you say to yourself, "Self: if I had this function and gave it those parameters, then this is what I would get back." Where this is a very specific value. Don't hide it away; don't abstract it - just put the value into the test. It enhances the documentation value of the test as well.
Personally, I like to use constants within my tests - it ensures that the test fixtures are simple and straightforward. Plus, as you mention, it avoids programming errors in the test itself, which may hide programming errors in the real code.
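The tradeoff can be sketched language-agnostically. Here is a hypothetical JavaScript version of the same situation; fromSample and the sample object are invented stand-ins for SlideshowConverter.fromXML and the XML fixture, not the original ActionScript code.

```javascript
// A fixture and a converter that reads an id out of it:
const sample = { id: "1" };
function fromSample(s) {
  return { id: s.id };
}

const slideshow = fromSample(sample);

// Literal assertion: documents the exact expected value, and fails
// loudly if either the fixture or the converter changes.
if (slideshow.id !== "1") throw new Error("literal assertion failed");

// Expression assertion: survives edits to the fixture, but it
// re-implements the converter's logic in the test, so a bug shared
// by both passes silently.
if (slideshow.id !== sample.id) throw new Error("expression assertion failed");

console.log("both assertions passed");
```

The literal version is the one that behaves like documentation: a reader sees the concrete expected value without having to mentally evaluate any expression.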