Dependent tests in rspec - ruby

I write functional tests, and I need some tests to depend on the results of previous tests. Let's say I have a button that opens a window containing some functionality. To check that functionality, I first need to verify that the button works (i.e. that the window opens at all). So if the test for the button click fails, the tests that check the window's functionality should not run. Writing the tests completely independently is not an option for me. I would like something like this:
describe "some tests" do
open_result = nil
it "should check work button" do
click_to_button()
open_result = window_opened?
open_result.should == true
end
if open_result
describe "Check some functional" do
it "should check first functional"
it "should check second functional"
end
end
end
I know this approach does not work in RSpec; it's just a simple description of what I want. Is it achievable using RSpec? If not, are there other ways (gems, etc.)?

RSpec is designed as a unit-testing framework, so it may be difficult to get perfect functional-test behavior from it. In RSpec's philosophy, tests must be independent. That is especially important when you use autotest, where the order of execution is truly unpredictable. Sad but true.
Still, you can of course save some state between tests using global ($a) or instance (@a, not sure here) variables. In any case you need to move the if inside an it block so that it is evaluated at run time. You can use the pending keyword to interrupt a test without failing it when a pre-condition is not met.
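For illustration only, here is a minimal sketch of that approach, assuming the asker's click_to_button and window_opened? helpers exist. It shares state through a global variable and uses pending (skip in RSpec 3) to interrupt the dependent example instead of failing it:
describe "window functionality" do
  it "opens the window when the button is clicked" do
    click_to_button()
    $window_opened = window_opened?
    $window_opened.should == true
  end

  describe "the opened window" do
    it "provides the first piece of functionality" do
      # In RSpec 2, pending without a block stops the example here;
      # in RSpec 3 use skip instead.
      pending "window did not open" unless $window_opened
      # ... assertions against the window go here ...
    end
  end
end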
BUT
I'm sure that the best solution is to avoid golden hammer antipattern, and not to write functional tests in unit-test framework. You want not to test some separate functions. You do want to test scenarios. So I suggest trying some scenario-testing suite.
Behold, Cucumber! Usage is simple:
Define parametrized scenario steps in Ruby, with RSpec-style expectations
Write scenarios in natural language (yes, not only English, but even Russian or whatever — the power of regexps is on your side)
In your case, you will have in features/step_definitions/gui_steps.rb something like
Given /I pushed a button "(.*)"/ do |name|
  @buttons.find(name).click() # This line is pseudo-code, you know
end
and something similar for checking that the window opened, etc. (see the examples). Then you can combine the defined steps in any way; for example, your two scenarios might look like
Scenario: Feature 1
  Given I pushed a button "go"
  And I focus on opened window
  When I trigger "feature 1"
  Then I should see "result 1" in text area

Scenario: Feature 2
  Given I pushed a button "go"
  And I focus on opened window
  When I trigger "feature 2"
  Then I should see "result 2" in text area
In both cases, if some step of the scenario fails (like I focus on opened window when the window is not open), the subsequent steps are not executed — just as you want. As a bonus you get extremely detailed output of what happened and at which step (see the pics on the site).
The good news is that you don't always need to define all the steps yourself. For example, when you test a web app, you can use webrat steps for typical things like When I go to url/a/b/c and Then I should see text "foo" on the page. I don't know which GUI testing framework you use, but there are probably steps for it already, so I suggest googling Cucumber %framework name%. Even if not, writing these steps once will not be harder than trying to coax Cucumber-like behavior out of RSpec.

How would I test a ruby method that only calls a system command?

After looking at:
How do I stub/mock a call to the command line with rspec?
The answer provided is:
require "rubygems"
require "spec"
class Dummy
def command_line
system("ls")
end
end
describe Dummy do
it "command_line should call ls" do
d = Dummy.new
d.should_receive("system").with("ls")
d.command_line
end
end
My question is: How does that actually test anything?
By writing a method that says "call the ls command on the system", and then writing a test that says "my method should call the ls command on the system", how does that provide any benefit?
If the method were to change, I would have to change the test as well, but I'm not sure I see the added benefit.
The approach you are describing is known as the "mockist" or "London" school of unit testing. Its benefits include:
That the act of constructing such a test creates an incentive to design units which are not excessively complex in terms of their dependencies or conditional logic
That such tests execute very quickly, which can be very important for large systems
That such tests can be reasonably built to provide maximal coverage for the unit under test
That such tests provide a "vise" of sorts around your code such that inadvertent changes will result in a failing test
Of course, such tests have their limitations, which is why they are often paired with system level tests which test whether a collection of units operating together achieve some higher order outcome.
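As an illustration of that "vise" (not part of the original answer, and using RSpec 3's expect syntax rather than the should_receive style quoted above): if Dummy#command_line were later changed to shell out to a different command, the expectation below would fail and flag the unintended change.
RSpec.describe Dummy do
  it "shells out to ls" do
    dummy = Dummy.new
    # Stubs `system` on this instance and fails the example unless it is
    # called exactly with "ls".
    expect(dummy).to receive(:system).with("ls")
    dummy.command_line
  end
end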

Producing reports in Protractor (Jasmine) with details for expectations that pass

So I'm kind of new to Protractor. I've written a number of parameterised functions (eg loginAs, navigateTo, enterTextIntoSearchField, clickButton etc), which I can then use repeatedly as I create my specs and suites. So for example I might have a "perform search" suite, with specs for "perform search as regular user", "perform search as admin" etc.
All this is fine. I'm using the Jasmine2HTMLReporter, which produces output similar to the sample Jasmine2HTMLReporter output.
Some of my reusable functions have expect statements, some don't (although I may yet go back and try to add them for clarity!)
The problem I have is that when an individual spec consists of quite a few function calls, the list of passed/failed expectations in the report can be rather long. In the case of failed expectations it gives details of the failure ("expected Fred to equal Bob" etc.). However, I would like to see something similar for the passes as well ("expected Fred to equal Fred"), as this would allow whoever is reading the report to understand which function call any one "Passed" relates to, and to follow the flow of the test, rather than just seeing an otherwise meaningless list of "Passed" statements.
Is this at all possible? I could have nested specs so that each function call was its own spec within a "parent" spec, but this strikes me as over the top and messy, and would make the report far bigger than it needs to be. Would a different reporter give me what I want? I've not found one yet that looks like it would...

TDD, How to write tests that will compile even if objects don't exist

I'm using VS 2012, but that's not really important.
What is important is that I'm trying to do some TDD by writing all my tests first and then creating the code.
However, the app will not compile because none of my objects or methods exist.
Now, to my mind, I should be able to create ALL my tests but still run my app so I can debug etc. The tests shouldn't prevent compilation because objects and methods are missing.
I thought the whole point of it was that as you develop your tests you can begin to see duplications etc so that you can refactor before writing a single line of code.
So the question is, is there a way to do this or am I doing this wrong?
EDIT
I am using VS2012 and C#
I see a small problem with
writing all my tests first and then creating the code.
You don't need to write ALL your tests first; you just need one: make it fail, make it pass, and repeat. That means at any point you should ideally have at most one failing test.
A compile failure counts as a failed test in that sense. So the next step is to make it pass, i.e. add stubs or return default values to make it compile. The test is then red; then you work at getting it to green.
Test Driven Development is about very small iterations. You don't define all your tests up front. You create one test based on one fraction of one requirement. Then you implement the code to pass that test. Once it's passing, you work on another fraction of a requirement.
The idea is that trying to do all the design up front (whether it be creating detailed class diagrams or creating a bunch of tests) means that you will find it too expensive to change a weakness in your design, so you won't improve your code.
Here's an example. Let's say you decide to use inheritance to relate two objects, but when you started implementing the objects, you found that made testing them tough. You discover it would be much easier to test each object individually, and relate them via containment instead. What is happening is the tests are driving your design in a more loosely coupled direction. This is a very good outcome of TDD - you are using tests to improve the design.
If you had written all your tests in advance assuming your design decision of inheritance was a good choice, you would either throw away a lot of work, or you would say "it's too tough to make a change like that now, so I'll just live with this sub-optimal design instead."
You can certainly create business-rule-related acceptance tests in advance. Those are called behavior tests (part of Behavior Driven Development, or BDD) and they are good to test features of the software from the user's point of view. But those are NOT unit tests. Unit tests are for testing code from the developer's point of view. Creating the unit tests in advance defeats the purpose of TDD, because it will make testing harder, it will prevent you from improving your code, and will often lead to rebellion and failure of the practice. That's why it's important to do it right.
What is important is that I'm trying to do some TDD by writing all my tests first and then creating the code.
The problem is that "writing all my tests first" is most emphatically not "do[ing] some TDD". Test driven development consists of lots of small repetitions of the "red-green-refactor" cycle:
Add a unit test to the test suite, run it and watch it fail (red)
Add just enough code to the system under test to make all the tests pass (green)
Improve the design of the system under test (typically by removing duplication) while keeping all the tests passing (refactor)
If you code an entire huge test suite up front, you'll spend forever trying to get to the "green" (all tests passing) state.
However, the app will not compile because none of my objects or methods exist.
That's typical of any compiled language; it's not a TDD issue per se. All it means is that, in order to watch the new test fail, you may have to write a minimal stub for whatever feature you're currently working on to make the compiler happy.
For example, I might write this test (using NUnit):
[Test]
public void DefaultGreetingIsHelloWorld()
{
    WorldGreeter target = new WorldGreeter();
    string expected = "Hello, world!";
    string actual = target.Greet();
    Assert.AreEqual(expected, actual);
}
And I'd have to then write this much code to get the app to compile and the test to fail:
public class WorldGreeter
{
    public string Greet()
    {
        return String.Empty;
    }
}
Once I've gotten the solution to build and I've seen the one failing test, I can add the code to make the first test pass:
public string Greet()
{
    return "Hello, world!";
}
Once all tests pass, I can look through the system under test and see what could be done to improve the design. However, it's essential to the TDD discipline to go through both the "red" and "green" steps before playing around with refactoring.
I thought the whole point of it was that as you develop your tests you can begin to see duplications etc so that you can refactor before writing a single line of code.
Martin Fowler defines refactoring as "a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior" (emphasis added). If you haven't written a single line of code, there's nothing to refactor.
So the question is, is there a way to do this or am I doing this wrong?
If you're looking to do TDD, then, yes, I fear you are doing this wrong. You may well be able to deliver great code doing what you're doing, but it isn't TDD; whether or not that's a problem is for you to decide for yourself.
You should be able to create your empty class with stub functions, no?
class Whatever {
    char *foo( const char *name ) { return 0; }
    int can_wibble( Bongo *myBongo ) { return 0; }
};
Then you can compile.
No. It's about coding just enough to verify the implementation of the required use cases.
You can define your test cases early, but to implement them you iteratively write a test and watch it fail, then write just enough code to make it pass.
Then rinse and repeat until all your test cases are covered.
Edit to address comment.
As you build out the code, that's where your design decisions and faults are identified. Extreme programming lends itself to changing code without worry, because the test base protects your requirements. Your intentions are good, but the reality is that you'll refactor/redesign your test cases as you discover design issues and flaws while building out the code and the test base.
However, IMHO, in the general case a test that doesn't compile is effectively a failing meta-test that needs to be corrected before moving on. It's telling you to write some code!
Use mocks. From Wikipedia: mock objects are simulated objects that mimic the behavior of real objects in controlled ways. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts.
Please refer to this.
I found this on using dynamic for objects that don't exist yet:
https://coderwall.com/p/0uzmdw

Testing: How to test that view contains desired data

Say a Chef can make Recipes, and Sous-Chefs can create Recipes that must be approved by a Head Chef.
You want to test that, when a Head Chef views her homepage, she sees Recipes that she herself created. You also want to test that she sees there are Recipes awaiting her approval.
I can think of two ways to do this:
Test that the view contains certain words, like "Your recipes" and "Recipes awaiting your approval"
Add unnecessary attributes to the html elements you're using so that you can check for an element with "id=recipe_1" or "data-for-the-sake-of-testing=1"
I very much dislike both of these approaches.
Why approach #1 sucks
Incredibly brittle tests. Every time you want to make minor updates to the copy, tests will break.
i18n? How will that work with this approach?
There are probably more reasons, but those two are pretty huge.
Why approach #2 sucks
How annoying to have superfluous markup just for the sake of testing! The user should not have an increased download size for the sake of tests.
What is a good approach to this? I'm interested to hear any alternatives at all, in whatever language you think in. I mostly think in Ruby, Test::Unit, Minitest, RSpec, and Cucumber (though my Cuke skills are stale), but if other languages/frameworks have this figured out, I'd love to see what they're doing, too.
Use a page paradigm.
Phrase the steps in as human a way as you can, at the level of capabilities (high level) wherever possible, and use specific examples. For instance, if I'm using Cucumber I might say:
Given the sous-chef has created a recipe for Frog Pie
When the chef looks for recipes to approve
Then the recipe for Frog Pie should be in the list.
Inside the code for these steps, instantiate or find the particular page you're looking for, where the page is an object that represents the capabilities of the page. That page can then have all the things that the user can do with the page - look for recipes, approve recipes, move to another page, etc.
This way, if you need to change the underlying code for the step, you only have to change it in one place, and all the changes for a particular page will be together. Because you've phrased the scenario in terms of capabilities you're delivering, it's unlikely that the scenario will need to change much (unless you discover that your business need different capabilities to the ones you're delivering).
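For illustration only, here is a rough sketch of what such a page object and the steps above could look like in Ruby with Capybara; the class name, the path and the element id are assumptions, not part of any existing application:
require "capybara/dsl"

# Hypothetical page object wrapping the "recipes to approve" page.
# It exposes capabilities, not raw markup details.
class RecipeApprovalPage
  include Capybara::DSL

  def open
    visit "/recipes/pending"
    self
  end

  def recipe_titles
    all("#recipes-to-approve li").map(&:text)
  end
end

# The step definitions talk to the page object, never to the raw HTML:
When(/^the chef looks for recipes to approve$/) do
  @approval_page = RecipeApprovalPage.new.open
end

Then(/^the recipe for (.+) should be in the list\.?$/) do |title|
  expect(@approval_page.recipe_titles).to include(title)
end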
This also works pretty well for window-based apps too, with each widget or module being a particular page.
It's also fine to have extra ids just for testing. Sometimes designers like to use them too.
I see at least two options:
Avoid testing business logic through the UI. Write a "service" or "use case controller" object that returns plain data structures. In other words, you build an API to your system. Your unit test accesses the system through the API. Your UI accesses the system through the same API, but then there should be almost no logic in the view. See http://www.confreaks.com/videos/759-rubymidwest2011-keynote-architecture-the-lost-years or http://www.cleancoders.com/codecast/clean-code-episode-7/show.
Use the "page object" pattern. Write an object that reads in the HTML code that your app produces, parses it, and makes interesting data available through getters. This will do wonders to make your test code clear. Your objection could be that you still have problem #2. In fact I don't think it's really a problem. If you use structural HTML markup, it should be fairly easy to extract the information you need. It will be much easier if you attach an ID to a key element of the page; in your example I would have a div with id="my-recipes" and another div with id="to-be-approved". That should be enough; anything else should be easy to find with xpath or css selectors. Why do you find this objectionable? These IDs will probably be useful for other purposes, such as attaching behaviour with unobtrusive JavaScript or attaching styles with a CSS stylesheet.
Live with #2, perhaps using brief comments (no i18n issues and not visible to the end user):
<!-- APPROVAL -->
The documentation of simpletest has a nice take on it:
Next chance you get, look at a circuit board, perhaps the motherboard of the computer you are looking at right now. On most boards you will find the odd empty hole, or solder joint with nothing attached, or perhaps a pin or socket that has no obvious function. Chances are that some of these are for expansion and variations, but most of the remainder will be for testing.
If a small amount of superfluous markup makes your product more testable and reliable then just live with it!
I personally try not to test views at all. I mean the generated markup, since those tests tend to be very fragile.
Instead, I focus on the "data provider" side, which in an MVC web framework is the controller. As long as the controller is covered by unit tests that check what data it prepares, you are pretty much safe. The view itself is easy to check by just running the application and seeing that it looks OK.
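A rough sketch of what that could look like as a Rails-style controller spec (the controller, action and instance-variable names are made up, and assigns needs the rails-controller-testing helpers on newer Rails versions):
RSpec.describe RecipesController, type: :controller do
  it "prepares the chef's own recipes and the recipes awaiting approval" do
    get :home

    # Assert on the data handed to the view, not on the rendered markup.
    expect(assigns(:my_recipes)).not_to be_nil
    expect(assigns(:recipes_to_approve)).not_to be_nil
  end
end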
Nevertheless, there are some approaches to view testing. The first is "end-to-end" testing with Selenium WebDriver: it runs a browser and issues requests against your application, and the tests check the output HTML. The tests run against a known configuration, meaning they know, for instance, that the current localization is EN.
You should basically combine the approaches: where it works, match on the markup text ("Recipes"); otherwise use the HTML elements' ids or classes. I would not add any additional markup just for testing.
Another approach you can try is approval testing. I believe there is a Ruby driver for it: http://approvaltests.sourceforge.net/. With approvals you render the view and save the HTML as a golden master; the test fails if the view has changed. It is much easier to implement than Selenium tests.
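The idea can be sketched without any library; this hand-rolled version is for illustration only and is not the ApprovalTests API (render_homepage is a made-up helper returning HTML):
require "fileutils"

# Minimal golden-master check: the first run records the rendered HTML;
# later runs fail if the output drifts from the approved copy.
def assert_matches_golden_master(name, html, dir: "spec/golden")
  FileUtils.mkdir_p(dir)
  approved = File.join(dir, "#{name}.approved.html")

  if File.exist?(approved)
    raise "#{name} differs from its approved version" unless html == File.read(approved)
  else
    File.write(approved, html) # first run: record the master for review
  end
end

# Usage:
# assert_matches_golden_master("chef_homepage", render_homepage(chef))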

Asserting that a particular exception is thrown in Cucumber

Scenario
I'm writing a library (no Ruby on Rails) for which I'd like to have very detailed Cucumber features. This especially includes describing errors/exceptions that should be thrown in various cases.
Example
The most intuitive way to write the Cucumber steps would probably be something like
When I do something unwanted
Then an "ArgumentError" should be thrown
Problem
There are two issues I have to address:
The first step should not fail when an exception is thrown.
The exception that the first step throws should be accessible to the second step in order to do some assertion magic.
Inelegant and Cumbersome Solution
The best approach I've been able to come up with is caching the exception in the first step and putting it into an instance variable that the second step can access, like so:
When /^I do something unwanted$/ do
  begin
    throw_an_exception!
  rescue => @error
  end
end

Then /^an "(.*)" should be thrown$/ do |error|
  @error.class.to_s.should == error
end
However, this makes the first step more or less useless in cases where I don't want it to fail, and it requires an instance variable, which is never a good thing.
So, can anyone help me out with an at least less cumbersome solution? Or should I write my features differently anyway? Any help would be much appreciated.
I thought about it once more, and maybe the answer is:
There is no elegant solution, because the Given-When-Then-Scheme is violated in your case.
You expect that "Then an exception should be thrown" is the outcome of "When I do something unwanted".
But when you think about it, this is not true! The exception is not the outcome of this action, in fact the exception just shows that the "When"-Statement failed.
My solution to this would be to test at a higher level:
When I do something unwanted
Then an error should be logged
or
When I do something unwanted
Then the user should get an error message
or
When I do something unwanted
Then the program should be locked in state "error"
or a combination of these.
Then you would "cache the exception" in your program - which makes perfect sense, as you most likely need to do that anyway.
The two problems you've stated would be solved, too.
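A sketch of what the higher-level variant could look like; the @app object, its do_something_unwanted method and its error_log are assumptions made purely for illustration:
When /^I do something unwanted$/ do
  # The application rescues and records the error itself,
  # so the step does not blow up.
  @app.do_something_unwanted
end

Then /^an error should be logged$/ do
  expect(@app.error_log.last).to match(/ArgumentError/)
end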
In case you really must test for exceptions
Well, I guess then Cucumber isn't the right test suite, hmm? ;-)
As the Given-When-Then-Scheme is violated anyway, I would simply write
When I do something unwanted it should fail with "ArgumentError"
and in the step definitions something like (untested, please correct me if you try it)
When /^I do something unwanted it should fail with "(.*)"$/ do |errorstring|
  expect {
    throw_an_exception!
  }.to raise_error(Object.const_get(errorstring)) # constantize the name; a bare string would be matched against the message instead
end
As said above, that is horribly wrong as the scheme is broken, but it would serve the purpose, wouldn't it? ;-)
You'll find further documentation on testing errors at RSpec Expectations.
One option is to mark the scenario with @allow-rescue and check the page's output and status code. For example:
In my_steps.rb
Then(/^the page (?:should have|has) content (.+)$/) do |content|
  expect(page).to have_content(content)
end

Then(/^the page should have status code (\d+)$/) do |status_code|
  expect(page.status_code.to_s).to eq(status_code)
end

Then /^I should see an error$/ do
  expect(400..599).to include(page.status_code)
end
In my_feature.feature
@allow-rescue
Scenario: Make sure user can't do XYZ
  Given some prerequisite
  When I do something unwanted
  Then the page should have content Routing Error
  And the page should have status code 404
or alternatively:
@allow-rescue
Scenario: Make sure user can't do XYZ
  Given some prerequisite
  When I do something unwanted
  Then I should see an error
This may not be exactly what you were hoping for, but it might be an acceptable workaround for some people who come across this page. I think it will depend on the type of exception, since if the exception is not rescued at any level then the scenario will still fail. I have used this approach mostly for routing errors so far, which has worked fine.
It is possible to raise an exception in a When block and then make assertions about it in the following Then blocks.
Using your example:
When /^I do something unwanted$/ do
  @result = -> { throw_an_exception! }
end

Then /^an "(.*)" should be thrown$/ do |error|
  expect { @result.call }.to raise_error(Object.const_get(error)) # constantize so the class is matched, not the message
end
That example uses RSpec's matchers, but the important part is the -> (lambda), which allows the call to throw_an_exception! to be deferred and passed to the next step.
I hope that helps!
I'm answering from the perspective of someone who uses Cucumber features in a Behavior-Driven Development situation, so take it or leave it...
Scenarios should be written to test a 'feature' or functionality of the application, as opposed to being used to test the code itself. An example being:
When the service is invoked
Then a success code should be returned
It sounds like your test case (i.e. if I do this, then this exception should be thrown) is a candidate for unit or integration testing; in my case, we would use a mocking or unit-testing framework.
My suggestion would be to re-evaluate your feature scenarios to see if they are really testing what you intend them to test. From personal experience, I've found that if my test classes are becoming abnormally complex, then my features are 'wrong.'
