Should Jasmine 'expectationFailedOutput' messages describe what was expected, or what happened? - jasmine

Jasmine expect statements can produce worthless error messages like:
Expected true to be false.
To address this, matchers allow you to add a clarifying message as a second argument, expectationFailOutput:
toBe(expected: any, expectationFailOutput?: any): Promise<void>;
This allows you to write:
expect(await password.isDisplayed).toBe(true, "Password field should be visible");
expect(await password.isDisplayed).toBe(true, "Password field was not visible");
These will produce the following error messages, respectively:
Expected false to be true, 'Password field should be visible'.
Expected false to be true, 'Password field was not visible'.
Note that these lines are the same except that in the first case I describe what the expect was testing for, and in the second I describe what actually happened.
Obviously, I should choose one of these conventions and use it consistently in my code base, but I can't find anything in the documentation about what the typical convention is. Should the message describe what we expected to happen, or should it describe what did happen?
If the Jasmine team doesn't have a convention for this, perhaps somebody who's worked on a lot of Jasmine projects knows what the typical convention is.

I don't see why it should be consistent, or why that is obvious. Some checks are easy to understand, some are hard. When you feel that you need a message, add it. Don't make it hard when it could be simple.

Related

What's the meaning of "it()" in Jasmine?

I guess "it" is an abbreviation of a phrase. But I don't know what it is. Each time when I see this function I always try to find out its meaning.
Can somebody tell me about this?
it('is really much more simple than you think')
The it() syntax is used to tell Jasmine what it('should happen in your test').
Jasmine is a testing framework for behavior-driven development, and the it function's purpose is to test a behavior of your code.
It takes a string that explains expected behavior and a function that tests it.
it("should be 4 when I multiply 2 with 2", function() {
expect(2 * 2).toBe(4);
});
It's not an abbreviation, it's just the word it.
This allows for writing very expressive test cases in Jasmine, and if you follow the scheme, it produces very readable output.
Example:
describe("My object", function () {
it("can calculate 3+4", function () {
expect(myObject.add(3, 4)).toBe(7);
}
});
Then, if that test fails, the output will be like
Test failed: My object can calculate 3+4
Message:
Expected 6.99999 to equal 7
As you can see, this imaginary function suffers from some rounding error. But the point is that the resulting output is very readable and expressive, and the code is too.
The basic scheme in the code is: You describe the unit that will be tested, and then test its various functions and states.
Jasmine is a BDD testing framework (Behavior Driven Development), and unlike "standard" TDD (Test Driven Development), you are actually testing against behaviors of your application.
So "it" refers to the object/component/class/whatever you are testing rather than a method.
Imagine you are writing a test for a calendar widget in which you want to verify that once a user clicks the next arrow the widget changes the displayed month; you would write something like:
it('should change displayed month once the button is clicked', function(){
  // assertions
});
So, "it" is your calendar widget, you are practically saying "the calendar widget should change displayed month once the button is clicked".
In a TDD it would be instead something like:
testButtonArrowClickChangesDisplayedMonth()
In the end there isn't an actual difference, it's just a matter of style and readability.
Jasmine's tests are defined in a quite verbose manner, so developers can better understand the purpose of each test.
From the docs:
The name it is a pronoun for the test target, not an abbreviation of anything. It makes the spec [abbr. for specification] more readable by connecting the function name it and the argument description as a complete sentence.
https://jasmine.github.io/api/edge/global.html#it

When to use curly braces vs parenthesis in expect Rspec method?

I had a test that did this:
expect(@parser.parse('adsadasdas')).to raise_error(Errno::ENOENT)
and it didn't work. I changed to:
expect { @parser.parse('adsadasdas') }.to raise_error(Errno::ENOENT)
And it worked.
When do we use curly braces and when do we use parentheses with expect?
In response to OP's comment, I've edited and completely rewritten my answer. I realize that my original answer was oversimplified, so much so that it could be considered incorrect.
Your question was actually addressed somewhat by this other StackOverflow question.
One poster, Peter Alfvin, makes a good point when he says:
As for rules, you pass a block or a Proc if you're trying to test
behavior (e.g. raising errors, changing some value). Otherwise, you
pass a "conventional" argument, in which case the value of that
argument is what is tested.
The reason you're encountering the phenomenon you're seeing has to do with the raising of errors. When you pass @parser.parse('adsadasdas') as an argument (using parentheses) to expect, you are essentially telling Ruby:
Evaluate @parser.parse('adsadasdas') first.
Take the result and pass this to expect.
expect should see if this result matches my expectation (that is, that Errno::ENOENT will be raised).
But what happens is: when Ruby evaluates @parser.parse('adsadasdas'), an error is raised right then and there. Ruby doesn't even get a chance to pass the result on to expect. (For all we care, you could have passed @parser.parse('adsadasdas') as an argument to any function... like multiply() or capitalize().) The error is raised, and expect never even gets a chance to do its work.
But when you pass @parser.parse('adsadasdas') as a proc (a code block) to expect using curly braces, what you are telling Ruby is this:
expect, get ready to do some work.
expect, I would like you to keep track of what happens as we evaluate @parser.parse('adsadasdas').
OK, expect, did the code block that was just evaluated raise an Errno::ENOENT error? I was expecting that it would.
When you pass a code block to expect, you are telling expect that you want it to examine the resulting behavior, the changes, made by your code block's execution, and then to let you know if it meets up to the expectations that you provide it.
When you pass an argument to expect, you are telling ruby to evaluate that argument to come to some value before expect even gets involved, and then you are passing that value to expect to see if it meets up to some expectation.
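Here is a minimal sketch in plain Ruby (not RSpec itself; the method names parse_that_raises, takes_a_value, and takes_a_block are made up) of why the two forms behave differently: an argument is evaluated before the method is ever entered, while a block only runs if and when the method calls it.
def parse_that_raises
  raise Errno::ENOENT, "no such file"
end

def takes_a_value(value)
  value              # the argument was already evaluated before we got here
end

def takes_a_block
  yield              # the block only runs when (and if) the method calls it
rescue Errno::ENOENT => e
  "rescued #{e.class}"
end

puts takes_a_block { parse_that_raises }   # => rescued Errno::ENOENT

begin
  takes_a_value(parse_that_raises)         # raises before takes_a_value is entered
rescue Errno::ENOENT
  puts "the argument blew up before the method was called"
end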
TL;DR: use expect(exp) to specify something about the value of exp and use expect { exp } to specify a side effect that occurs when exp is executed.
Let's unpack this a bit. Most of RSpec's matchers are value matchers. They match (or not) against any ruby object. In contrast, a handful of RSpec's matchers can only be matched against a block, because they have to observe the block while it's running in order to operate properly. These matchers concern side effects that take place (or not) while the block executes. The matcher would have no way to tell if the named side effect had occurred unless it is passed a block to execute. Let's consider the built-in block matchers (as of RSpec 3.1) one-by-one:
raise_error
Consider that one can return an exception from a method, and that is different than raising the exception. Raising an exception is a side effect, and can only be observed by the matcher by it executing the block with an appropriate rescue clause. Thus, this matcher must receive a block to work properly.
throw_symbol
Throwing symbols is similar to raising errors -- it causes a stack jump and is a side effect that can only be observed by running a block inside an appropriate catch block.
change
Mutation to state is a side effect. The matcher can only tell if there was a change to some state by checking the state beforehand, running the block, then checking the state after.
output
I/O is a side effect. For the output matcher to work, it has to replace the appropriate stream ($stdout or $stderr) with a new StringIO, execute the block, restore the stream to its original value, and then check the contents of the StringIO.
yield_control/yield_with_args/yield_with_no_args/yield_successive_args
These matchers are a bit different. Yielding isn't really a side effect (it's really just syntactic sugar for calling another function provided by the caller), but yielding can't be observed by looking at the return value of the expression. For the yield matchers to work, they provide a probe object that you pass on to the method-under-test as a block using the &probe syntax:
expect { |probe| [1, 2, 3].each(&probe) }.to yield_successive_args(1, 2, 3)
What do all these matchers have in common? None of them can work on simple ruby values. Instead, they all have to wrap a block in an appropriate context (i.e. rescuing, catching or checking before/after values).
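A short hypothetical spec, written against the documented RSpec 3 matcher API, showing a few of these block matchers in use; each one needs the expect { ... } form because it has to observe the block while it runs:
require 'rspec/autorun'

RSpec.describe "block matchers" do
  it "observes side effects while the block runs" do
    list = []

    # `change` reads the state before and after the block executes.
    expect { list.push(1) }.to change { list.size }.by(1)

    # `output` captures $stdout around the block.
    expect { print "hi" }.to output("hi").to_stdout

    # `raise_error` rescues whatever the block raises.
    expect { Integer("oops") }.to raise_error(ArgumentError)

    # `throw_symbol` wraps the block in a catch.
    expect { throw :done }.to throw_symbol(:done)
  end
end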
Note that in RSpec 3, we added some logic to provide users clear errors when they use the wrong expect form with a given matcher. However, in the specific case of expect(do_something).to raise_error, there's nothing we can do to provide you a clear explanation there -- if do_something raises an error (as you expect it to...), then the error is raised before ruby evaluates the to argument (the raise_error matcher), so RSpec has no way to check with the matcher to see if it supports value or block expectations.
In short:
use curly braces (a block) when you want to test a behavior
use parentheses when you want to test the returned value
worth reading: As for rules, you pass a block or a Proc if you're trying to test behavior (e.g. raising errors, changing some value). Otherwise, you pass a "conventional" argument, in which case the value of that argument is what is tested. - from this answer
In the test written with parentheses, the code is executed normally, including all normal error handling. The curly-brace syntax defines a block object upon which you can place the expectation. It encapsulates the code you expect to be broken and allows RSpec to catch the error and provide its own handling (in this case, a successful test).
You can think of it this way as well: with the parentheses, the code is executed before being passed to the expect method, but with the block, expect will run the code itself.

How do Ruby programmers do type checking?

Since there is no static typing in Ruby, how do Ruby programmers make sure a function receives the correct arguments? Right now, I am repeating object.kind_of?/instance_of? checks to raise runtime errors everywhere, which is ugly. There must be a better way of doing this.
My personal way, which I am not sure is a recommended way in general, is to type-check and do other validations only once an error occurs. I put the type-check routine in a rescue block. This way, I can avoid a performance loss when correct arguments are given, but still give back the correct error message when an error occurs.
def foo arg1, arg2, arg3
  ...
  main_routine
  ...
rescue
  ## check for type and other validations
  raise "Expecting an array: #{arg1.inspect}" unless arg1.kind_of?(Array)
  raise "The first argument must be of length 2: #{arg1.inspect}" unless arg1.length == 2
  raise "Expecting a string: #{arg2.inspect}" unless arg2.kind_of?(String)
  raise "The second argument must not be empty" if arg2.empty?
  ...
  raise "This is `foo''s bug. Something unexpected happened: #{$!.message}"
end
Suppose that in main_routine you use the method each on arg1, assuming that arg1 is an array. If it turns out to be something else, on which each is not defined, then the bare error message will be something like method each not defined on ..., which, from the perspective of the user of the method foo, might not be helpful. In that case, the original error message will be replaced by the message Expecting an array: ..., which is much more helpful.
Ruby is, of course, dynamically typed.
Thus the method documentation determines the type contract; the type-information is moved from the formal type-system to the [informal type specification in the] method documentation. I mix generalities like "acts like an array" and specifics such as "is a string". The caller should only expect to work with the stated types.
If the caller violates this contract then anything can happen. The method need not worry: it was used incorrectly.
In light of the above, I avoid checking for a specific type and avoid trying to create overloads with such behavior.
Unit-tests can help ensure that the contract works for expected data.
If a method has a reason to exist, it will be called.
If reasonable tests are written, everything will be called.
And if every method is called, then every method will be type-checked.
Don't waste time putting in type checks that may unnecessarily constrain callers and will just duplicate the run-time check anyway. Spend that time writing tests instead.
I recommend using raise at the beginning of the method to add manual type checking; it is simple and effective:
def foo(bar)
  raise TypeError, "You called foo without the bar:String needed" unless bar.is_a? String
  bar.upcase
end
This works best when you don't have many parameters. A further recommendation is to use keyword arguments (available in Ruby 2+) if you have multiple parameters, and to watch their current/future implementation details; they are improving the situation by giving the programmer a way to see if a value is nil.
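A small sketch of that idea using required keyword arguments (Ruby 2.1+); create_user and its messages are made up for illustration:
def create_user(name:, age: nil)
  # name: is a required keyword argument; calling without it raises ArgumentError.
  raise TypeError, "name must be a String, got #{name.class}" unless name.is_a?(String)
  { name: name, age: age }
end

create_user(name: "alice")  # => {:name=>"alice", :age=>nil}
create_user(age: 30)        # => ArgumentError: missing keyword: :name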
Plus, you can use a custom exception:
class NotStringError < TypeError
  def message
    "be creative, use metaprogramming ;)"
  end
end
#...
raise NotStringError
You can use a Design by Contract approach, with the contracts ruby gem. I find it quite nice.
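As a rough sketch of what that looks like (based on the contracts gem's documented Contract annotation; module names and error classes may differ slightly between gem versions):
require 'contracts'

class Calculator
  include Contracts::Core
  include Contracts::Builtin

  # Declares that add takes two numbers and returns a number; the gem raises
  # a contract violation error if the declaration is broken at runtime.
  Contract Num, Num => Num
  def add(a, b)
    a + b
  end
end

Calculator.new.add(1, 2)    # => 3
Calculator.new.add(1, "x")  # raises a contract violation (ContractError)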

What makes a good failure message for testunit or other nunit style frameworks?

In Ruby's test/unit, and other such nunit style frameworks, what makes a good failure message?
Should the failure message merely describe how the actual value does not match the expected value?
assert_match("hey", "hey this is a test", "The word does not exist in the string")
Should it describe what you expected to happen?
assert_match("hey", "hey this is a test", "I expected hey to be in the string")
Should it describe why you wanted the behavior to happen?
assert_match("hey", "hey this is a test", "Program should provide a greeting")
Should it describe why you thought the test may fail?
assert_match("konnichiwa", "konnichiwa this is a test",
"Program failed to use supplied i18n configuration")
Should information about tests also exist in the name of the test method, and in the name of the test case?
This is based on: Ruby "test/unit", how do I display the messages in asserts.
The failure message is supposed to add context to the failure. So anything that saves you having to drill into the test code to know what failed.
So if the [method name, expected, actual] set is adequate for the above purpose, you can skip the failure message. If you need more information, then you add the optional failure message.
e.g.
Expected true but was false doesn't tell me anything on its own.
You can add a failure message so that it reads:
Return value should contain only multiples of 10. Expected true but was false
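For example, a minimal test/unit sketch (the data and message are invented) where the message carries both the intent and the offending data, so the failure reads well without opening the test file:
require 'test/unit'

class MultiplesTest < Test::Unit::TestCase
  def test_return_value_contains_only_multiples_of_ten
    values = [10, 20, 35, 40]
    # assert(condition, message): the message explains the intent and shows the data.
    assert(values.all? { |n| (n % 10).zero? },
           "Return value should contain only multiples of 10, got #{values.inspect}")
  end
end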
You can first try to use more descriptive matchers,
so that the failure reads Expected all items to be divisible by 10 but was [10, 20, 35, 40], which does tell you something.
Personally I prefer matchers and use failure messages as a last resort, because, like comments, they decay: you need discipline to ensure that the failure message is updated if you change the check.
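As a sketch of the matcher approach in RSpec (be_divisible_by is a custom matcher defined here, not a built-in):
require 'rspec/autorun'

RSpec::Matchers.define :be_divisible_by do |divisor|
  match { |actual| (actual % divisor).zero? }
end

RSpec.describe "return values" do
  it "contains only multiples of 10" do
    # On failure this reads roughly:
    #   expected [10, 20, 35, 40] to all be divisible by 10
    expect([10, 20, 35, 40]).to all( be_divisible_by(10) )
  end
end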

A BDD example - why test "happy path" only?

I've accidentally stumbled upon an old article by Luke Redpath that presents a very simple BDD example (very short and easy to follow even for non-Ruby programmers like me). I found the end result very incomplete, thus making the example pretty useless.
The end result is a single test which verifies that a user with preset attributes is valid. In my view, this is simply not enough to verify the validation rules correctly. For example, if you change
validates_length_of :password, :in => 6..12, :allow_nil => :true
to
validates_length_of :password, :in => 7..8, :allow_nil => :true
(or even remove password length validation completely) the test will still pass, but you can obviously see the code is now violating the initial requirements.
I just think the last refactoring of putting all the individual tests into a single one is simply not enough. He tests only the "happy path" which doesn't guarantee much. I would absolutely have all the tests that verify that the correct error is triggered with certain values. In the case of the password, I would test that a password of length less than 6 and greater than 12 is invalid and triggers the appropriate error. The "happy path" test would be there as well, but not alone by itself as it's in the article.
What's your opinion? I'm just trying to figure out why the guy did it the way he did and whether he simply overlooked the problem or it was his intention. I may be missing something.
I don't quite understand your question. The specs do contain expectations about the password length, both for the happy path and the two different failure modes (password too long and password too short):
specify "should be valid with a full set of valid attributes" do
#user.attributes = valid_user_attributes
#user.should_be_valid
end
This takes care of the happy path, since valid_user_attributes contains a valid password.
specify "should be invalid if password is not between 6 and 12 characters in length" do
#user.attributes = valid_user_attributes.except(:password)
#user.password = 'abcdefghijklm'
#user.should_not_be_valid
#user.password = 'abcde'
#user.should_not_be_valid
end
And this tests the two failure modes.
Granted, there is one boundary case missing (12 characters), but that's not too bad.
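A sketch of that missing boundary case, in the same style as the article's specs (valid_user_attributes and the should_be_valid syntax are assumed from the article):
specify "should be valid when the password is exactly 12 characters long" do
  @user.attributes = valid_user_attributes.except(:password)
  @user.password = 'a' * 12   # the lower edge, 6 characters, could be covered the same way
  @user.should_be_valid
end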
I don't have time to read the article, so I can't verify your claims, but the general answer in my opinion is that if the password validation rule is a concrete requirement, it should be verified with one or more tests for that specific requirement (at least one per "part" of the requirement).
BDD (and TDD) are design activities. The tests are meant to drive the design of the code, not guarantee that it is completely bug-free. There should be independent testers for that. So we need a decent degree of coverage, to ensure that our code works as expected and handles exceptions in a clean fashion. But TDD doesn't demand that we write unit tests for every conceivable edge case.
With regard to the specific example you cite, perhaps he should have coded two tests, one with a password of six characters, one with a password of twelve characters. But what would be the point? We know that the requirement is that the password must be between six and twelve characters in length. If we have misunderstood the requirements and think the rule ought to be ...
validates_length_of :password, :in => 7..8, :allow_nil => :true
... then we're going to write our test data to make a test which passes our incorrect interpretation. So writing more tests would only give us misplaced confidence. That's why proponents of TDD and BDD favour other XP techniques like pair programming as well: to catch the errors we introduce into our unit tests.
Similarly, we could remove the test validating the password length altogether, but what would be the point? The tests are there to help us correctly implement the specification. If we don't have tests for every piece of code we write, then we are not doing TDD/BDD.

Resources