Producing reports in Protractor (Jasmine) with details for expectations that pass

So I'm kind of new to Protractor. I've written a number of parameterised functions (eg loginAs, navigateTo, enterTextIntoSearchField, clickButton etc), which I can then use repeatedly as I create my specs and suites. So for example I might have a "perform search" suite, with specs for "perform search as regular user", "perform search as admin" etc.
All this is fine. I'm using the Jasmine2HTMLReporter, which produces output similar to the sample Jasmine2HTMLReporter output.
Some of my reusable functions have expect statements, some don't (although I may yet go back and try to add them for clarity!)
The problem I have is that when an individual spec consists of quite a few function calls, the list of passed/failed expectations in the report can be rather long. In the case of failed expectations it gives details of the failure ("expected Fred to equal Bob" etc). However, I would like to see something similar for the passes as well ("expected Fred to equal Fred"), as this would allow whoever is reading the report to understand which function call any one "Passed" relates to, and to follow the flow of the test, rather than just seeing an otherwise meaningless list of "Passed" statements.
Is this at all possible? I could have nested specs so that each function call was its own spec within a "parent" spec, but this strikes me as over the top and messy, and it would make the report far bigger than it needs to be. Would a different reporter give me what I want? I've not found one yet that looks like it would...

Related

Complex RSpec Argument Testing

I have a method I want to test that is being called with a particular object. However, identifying this object is somewhat complex because I'm not interested in a particular instance of the object, but one that conforms to certain conditions.
For example, my code might be like:
some_complex_object = ComplexObject.generate(params)
my_function(some_complex_object)
And in my test I want to check that
test_complex_object = ComplexObject.generate(test_params)
subject.should_receive(:my_function).with(test_complex_object)
But I definitely know that ComplexObject#== will return false when comparing some_complex_object and test_complex_object because that is desired behavior (elsewhere I rely on standard == behavior and so don't want to overload it for ComplexObject).
I would argue that it's a problem that I find myself needing to write a test such as this, and I would prefer to restructure the code so that I don't need it. Unfortunately that's a much larger task that would require rewriting a lot of existing code, so it's a longer-term fix that I want to get to but can't do right now within my time constraints.
Is there a way in RSpec to do a more complex comparison between arguments in a test? Ideally I'd like something where I could use a block, so I can write arbitrary comparison code.
See https://www.relishapp.com/rspec/rspec-mocks/v/2-7/docs/argument-matchers for information on how you can provide a block to do arbitrary analysis of the arguments passed to a method, as in:
expect(subject).to receive(:my_function) do |test_complex_object|
  # code setting expectations on test_complex_object
end
You can also define a custom matcher, which will let you evaluate objects to see whether they satisfy the condition, as described in https://www.relishapp.com/rspec/rspec-expectations/v/2-3/docs/custom-matchers/define-matcher
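For example, here is a rough sketch of that custom-matcher route; the params accessor is a made-up stand-in for whatever fields of ComplexObject you actually want to compare:
RSpec::Matchers.define :be_equivalent_complex_object_to do |expected|
  match do |actual|
    # compare only the fields we care about, deliberately avoiding ComplexObject#==
    # (`params` is an assumed accessor, not something from the original question)
    actual.params == expected.params
  end
end

expect(subject).to receive(:my_function).with(be_equivalent_complex_object_to(test_complex_object))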

rspec: 'should_receive' with multiple argument expectations

I have a function which receives a complex argument (an HTML string). I want to check multiple conditions about this string, i.e.:
receiver.should_receive(:post_data).with(json_content).with(id_matching(5))
Multiple with calls like this don't work; are there any alternatives? I'm happy to define custom matchers if it's possible to make a compound one in some way.
Obviously I could run the same test multiple times and test different things about the result, however this is an integration test which takes several seconds to run, so I don't want to make it even slower.
Thanks
EDIT:
At the time of writing, the accepted answer (use a custom matcher with a custom description) appears to be the best option. However, it isn't perfect: ideally with would support a concept of "this was an item of the expected type, but wasn't the one we expected", instead of a pure binary match.
Maybe you don't even need a custom matcher and the block form is sufficient for you.
receiver.should_receive(:post_data) do |*args|
  json_content = args.first
  json_content.should_not be_empty
  json_content.should include "some string"
end
See the RSpec Mocks documentation, section "Arbitrary Handling".
You need to provide a custom matcher, but you can readily define your error reporting so that you can give specifics about what failed and why. See https://github.com/dchelimsky/rspec/wiki/Custom-Matchers .
In particular, the custom matcher would be supplied as the argument to with, as mentioned in the last sentence of the first paragraph of the "Argument Matchers" section of https://github.com/rspec/rspec-mocks.
As for error reporting, there are no custom failure methods that apply to this use case, but the description method of the custom matcher is used to generate the string shown as the "expected" value and, though that is not its purpose, it can be defined to output anything you want about the failed match.
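For instance, a minimal sketch of such a matcher, where valid_json? and extracted_id are hypothetical helpers standing in for whatever checks you actually run on the posted string:
RSpec::Matchers.define :json_post_data_with_id do |expected_id|
  match do |actual|
    # valid_json? and extracted_id are placeholder helpers, not real library calls
    valid_json?(actual) && extracted_id(actual) == expected_id
  end

  # description supplies the "expected" string in the failure output,
  # so it can carry as much detail as you like
  description do
    "JSON post data with id #{expected_id}"
  end
end

receiver.should_receive(:post_data).with(json_post_data_with_id(5))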

Dependent tests in rspec

I write functional tests, and I need some tests to depend on earlier tests having passed. Let's say I have a button that opens a window, and that window contains some functionality. In order to check that functionality, I first need to check that the button works correctly (i.e. that the window actually opens). So if the test for the button click fails, the tests that check the window's functionality should not be run. Writing the tests completely independently is not an option for me. I would like to see something like this:
describe "some tests" do
open_result = nil
it "should check work button" do
click_to_button()
open_result = window_opened?
open_result.should == true
end
if open_result
describe "Check some functional" do
it "should check first functional"
it "should check second functional"
end
end
end
I know that this approach does not work in RSpec. It's just a simple description of what I want to see. Is it achievable using RSpec? If not, are there other ways (gems, etc.)?
RSpec is designed as a unit-testing framework, so it may be a little difficult to get perfect functional-test behavior out of it. In RSpec's philosophy tests must be independent. This is especially important when you use autotest: in that case the order of execution is truly unpredictable. Sad but true.
Still, you can of course save some state between tests using global ($a) or instance (@a, not sure here) variables. Either way, you need to move the if inside the it block so that it is evaluated when the example runs. You can use the pending keyword to interrupt a test without failing it when a pre-condition is not met.
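A rough sketch of that workaround, reusing the asker's own pseudo-helpers (click_to_button and window_opened?); note that in RSpec 3 and later, skip is the keyword that stops execution, while pending now means "expected to fail":
describe "some tests" do
  $window_opened = nil # global, so the value survives from one example to the next

  it "should check work button" do
    click_to_button()
    $window_opened = window_opened?
    $window_opened.should == true
  end

  it "should check first functional" do
    pending "window did not open" unless $window_opened # or `skip` on RSpec 3+
    # ... checks against the opened window ...
  end
end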
BUT
I'm sure the best solution is to avoid the golden hammer antipattern and not write functional tests in a unit-testing framework. You don't want to test separate functions; you want to test scenarios. So I suggest trying a scenario-testing suite.
Behold, Cucumber! Usage is simple:
Define parametrized scenario steps in Ruby, and expectations in RSpec-style
Write scenarios in natural language (yes, not only English, but even Russian or whatever — the power of Regexps is on your side)
In your case, you will have in features/step_definitions/gui_steps.rb something like
Given /I pushed a button "(.*)"/ do |name|
  @buttons.find(name).click() # This line is pseudo-code, you know
end
and something similar for checking window opening, etc (see examples). Then you can combine defined steps in any way, for example your two scenarios might look like
Scenario: Feature 1
  Given I pushed a button "go"
  And I focus on opened window
  When I trigger "feature 1"
  Then I should see "result 1" in text area

Scenario: Feature 2
  Given I pushed a button "go"
  And I focus on opened window
  When I trigger "feature 2"
  Then I should see "result 2" in text area
In both cases, if some step of the scenario fails (like "I focus on opened window", when the window did not open), the subsequent steps are not executed, just as you want. As a bonus you get extremely detailed output of what happened and at which step (see the pics on the site).
The good news is that you don't always need to define all the steps yourself. For example, when you test a web app, you can use the webrat steps for typical things like When I go to url/a/b/c and Then I should see text "foo" on the page. I don't know which GUI testing framework you use, but there are probably steps for it already, so I suggest googling for Cucumber %framework name%. Even if there aren't, writing these steps once will not be more difficult than trying to turn RSpec into Cucumber.
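For illustration, a generic step of that kind might look something like this; visit, page and have_content are Capybara-style helpers assumed here rather than taken from the original answer:
When /^I go to (.*)$/ do |path|
  visit path # Capybara-style navigation helper (assumed)
end

Then /^I should see text "(.*)" on the page$/ do |text|
  page.should have_content(text) # old-style should syntax, matching the era of this answer
end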

Jasmine: Why toBeUndefined and not.toBeDefined?

I'm just learning the Jasmine library, and I've noticed that Jasmine has a very limited number of built-in assertions. I've also noticed that, despite having such a limited number, two of its assertions appear to be redundant: toBeDefined/toBeUndefined.
In other words, both of these would seem to check for the same exact thing:
expect(1).toBeDefined();
expect(1).not.toBeUndefined();
Is there some reason for this, like a case where not.toBeUndefined isn't the same as toBeDefined? Or is this just the one assertion in Jasmine that has two perfectly equal ways of being invoked?
One might assume the same for toBeTruthy and toBeFalsy, or toBeLessThan and toBeGreaterThan (although I guess the missing assert from the last two is toEqual). In the end it comes down to readability and user preference.
To give you a more complete answer, it might be useful to take a look at the code that is invoked for these functions. The code that is executed goes through separate paths (so toBeUndefined is not simply !toBeDefined). The only real answer that makes sense is readability (or giving in to annoying feature requests). https://github.com/jasmine/jasmine/tree/main/src/core/matchers

Can I ensure all tests contain an assertion in test/unit?

With test/unit, and minitest, is it possible to fail any test that doesn't contain an assertion, or would monkey-patching be required (for example, checking if the assertion count increased after each test was executed)?
Background: I shouldn't write unit tests without assertions - at a minimum, I should use assert_nothing_raised if I'm smoke testing to indicate that I'm smoke testing.
Usually I write tests that fail first, but I'm writing some regression tests. Alternatively, I could supply an incorrect expected value to see if the test is comparing the expected and actual value.
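For what it's worth, here is a minimal sketch of the monkey-patch idea mentioned in the question, using Minitest's per-test assertions counter (the RequireAssertions module name is made up):
require "minitest/autorun"

# Hypothetical mix-in: fail any test that finishes without making a single
# assertion, by checking Minitest's built-in assertion counter in teardown.
module RequireAssertions
  def teardown
    super
    flunk "#{name} made no assertions" if assertions.zero?
  end
end

class ExampleTest < Minitest::Test
  include RequireAssertions

  def test_addition
    assert_equal 4, 2 + 2 # counted, so teardown stays quiet
  end
end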
To ensure that unit tests actually verify anything, there is a technique called mutation testing.
For Ruby, you can take a look at Mutant.
As PK's link points out too, the presence of assertions in itself doesn't mean the unit test is meaningful and correct. I believe there is no automatic replacement for careful thinking and awareness.
Ensuring the tests fail first is a good practice, which should be made into a habit.
Apart from the things you mention, I often set wrong values in asserts in new tests, to check that the test really runs and fails. (Then I fix it of course :-) This is less obtrusive than editing the production code.
I don't really think that forcing the test to fail without an assert is really helpful. Having an assert in a test is not a goal in itself - the goal is to write useful tests.
The missing assert is just an indication that the test may not be useful. The interesting question is: will the test fail if something breaks? If it doesn't, it's obviously useless.
If all you're testing for is that the code doesn't crash, then assert_nothing_raised around it is just a kind of comment. But testing for "no explosions" probably indicates a weak test in itself. In most cases it doesn't give you any useful information about your code (because "no crash != correct"), so why did you write the test in the first place? Besides, I'd rather have a method that explodes properly than one that just returns a wrong result.
I find the best regression tests come from the field: bang on your app (or have your tester do it), and for each problem you find, write a test that fails. Then fix it and make the test pass.
Otherwise I'd test the behavior, not the absence of crashes. In the case that I have "empty" tests (meaning that I didn't write the test code yet), I usually put a #flunk inside to remind me.
