Prevent database rollback in specs in Ruby on Rails?

When running RSpec tests in Ruby on Rails 2.3 with ActiveRecord, after each example (it block) the database gets rolled back to the state it was in after the before :all block.
However, I want to spec the lifecycle of an object, which means going through a number of examples one by one, changing the state and testing postconditions. This is impossible with the rollback behaviour.
So to clarify:
describe MyModel do
  before :all do
    @thing = MyModel.create
  end

  it "should be settable" do
    lambda { @thing.a_number = 42 }.should_not raise_exception
  end

  it "should remember things" do
    @thing.a_number.should == 42
    # this fails because the database was rolled back ☹
  end
end
Is there some way to persist changes made in examples?

I agree with normalocity; in this case it looks like you would be better off with a single spec containing two assertions.
There are cases in which it is helpful to turn off rollbacks, e.g. for higher-level tests with Capybara and Selenium, in which case you can use the use_transactional_fixtures configuration option. You can put this in your spec_helper.rb:
RSpec.configure do |config|
  config.use_transactional_fixtures = false
end
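If you do turn transactions off, something else has to reset database state between examples. A common pairing (my assumption here, not part of the original answer) is the database_cleaner gem:
require 'database_cleaner'  # assumes the database_cleaner gem is installed

RSpec.configure do |config|
  config.use_transactional_fixtures = false

  # Truncate tables instead of relying on transaction rollback
  config.before(:suite) { DatabaseCleaner.strategy = :truncation }
  config.before(:each)  { DatabaseCleaner.start }
  config.after(:each)   { DatabaseCleaner.clean }
end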

Well, that depends on what you're trying to do. If you're testing the life cycle (a series of things that happen over time), that's more the realm of integration tests, which you can build in tools such as Cucumber, etc. Spec is more designed to do small tests of small bits of code.
It's technically possible for you to simply write a long spec test, with multiple .should statements, and so long as all of them pass, then you've effectively got the kind of test you're describing. However, that's not really, in my experience, what spec is designed to give you.
I guess what I'm saying is, don't try to prevent the rollback - that's not what it's there to do. Either use a tool more designed to do the kinds of tests you're looking to build, or write a longer test that has multiple .should statements.
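For illustration, the single-example approach applied to the model from the question might look like this (a sketch, reusing the asker's MyModel):
describe MyModel do
  it "accepts and remembers a number" do
    thing = MyModel.create
    # Both assertions run in one example, so no rollback intervenes
    lambda { thing.a_number = 42 }.should_not raise_exception
    thing.a_number.should == 42
  end
end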


How would I test a ruby method that only calls a system command?

After looking at:
How do I stub/mock a call to the command line with rspec?
The answer provided is:
require "rubygems"
require "spec"
class Dummy
def command_line
system("ls")
end
end
describe Dummy do
it "command_line should call ls" do
d = Dummy.new
d.should_receive("system").with("ls")
d.command_line
end
end
My question is: How does that actually test anything?
By writing a method that says "call the ls command on the system", and then writing a test that says "my method should call the ls command on the system", how does that provide any benefit?
If the method were to change, I would have to change the test as well, but I'm not sure I see the added benefit.
The approach you are describing is known as the "mockist" or "London" school of unit testing. Its benefits include:
That the act of constructing such a test creates an incentive to design units which are not excessively complex in terms of their dependencies or conditional logic
That such tests execute very quickly, which can be very important for large systems
That such tests can be reasonably built to provide maximal coverage for the unit under test
That such tests provide a "vise" of sorts around your code such that inadvertent changes will result in a failing test
Of course, such tests have their limitations, which is why they are often paired with system level tests which test whether a collection of units operating together achieve some higher order outcome.
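As a concrete illustration of the "vise" point (a hypothetical change, not from the original answer): if someone inadvertently changes the command, the existing expectation fails.
class Dummy
  def command_line
    system("ls -la")  # hypothetical inadvertent change
  end
end

# The spec above now fails: the example expects system("ls"),
# but the method calls system("ls -la"), so should_receive's
# argument expectation is not satisfied.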

Alternatives to pending or skip in RSpec 3

There are some tests that sometimes pass and sometimes fail. I'd like to fix them, but I'm not able to at the moment, for reasons beyond the scope of this question. Are there any alternatives to pending or skip for them in RSpec 3?
pending isn't suitable, because as of the current version of RSpec, when a pending test passes, RSpec reports that it passed and therefore shouldn't be marked as pending, and marks the build as broken.
skip isn't suitable. I only use skip to avoid specs that cause the suite to crash. If the tests consistently stop failing, I'd like to know that that's the case.
I'd like something that runs the tests, displays whether it passes or not, but doesn't cause the build to be broken whether they pass or fail.
An additional gem to add this behaviour is ok.
There's nothing built in, but it's trivial to do this yourself:
module AllowFailure
  def allow_failure(reason)
    yield
  rescue Exception => e
    pending(reason)
    raise
  end
end

RSpec.configure do |config|
  config.include AllowFailure
end
Then just wrap the body of the flickering tests with allow_failure(reason) { ... }.
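Usage might look like this (example names are illustrative):
it "does the flaky thing" do
  allow_failure("intermittent; tracked separately") do
    expect(result).to eq(expected)
  end
end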
(Caveat: The code above is off-the-cuff and I haven't tried it so it may not be exactly correct -- but it should be close).

RSpec - How do nested before :all and before :each blocks interact?

I encountered an issue with some Web UI Automation using RSpec and Selenium/Capybara/SitePrism. It's related to the combination of before :all and before :each clauses that occur in the code, and perhaps especially in my spec_helper file.
Up until now, I've been running rspec against one spec file at a time. Each spec file required a spec_helper file, which included the following:
spec_helper.rb:
RSpec.configure do |config|
  config.before(:all) do
    # 1) Code to configure WebDriver and launch Browser is here
  end
end
The spec files themselves contain their own before blocks for various reasons. Most of them contain something like the following:
test_a_spec.rb:
describe "Page A" do
before :all do
# 2) Log in to web site, maybe load the test page in question
end
it "does this thing" do
# 3) Test this thing
end
it "does that thing" do
# 4) Test that thing
end
end
This worked fine as long as I was running RSpec against individual spec files. When I tried tagging some of my examples and then running against the whole spec folder, I had a problem. The before :all block in spec_helper.rb didn't prepend itself to every file, as I thought it would, but instead ran once at the beginning. All the spec files after the first were expecting a clean browser to be launched by spec_helper and to do the log in part themselves, but the browser wasn't clean and was already logged in, so that wasn't good.
Changing the before :all in spec_helper.rb to a before :each seemed like the natural solution, so I did, and suddenly my tests instantly failed with an error claiming that rack-test requires a rack application, but none was given. This happened to some of my tests but not all, and through a process of elimination I realized it was only the tests that had their own before :all blocks that were failing. It appears that the before :all in the spec file supersedes the before :each in the spec_helper file. So it was trying to log in before it had launched a browser.
This vexes me. I am terribly vexed. I had sort of assumed two things:
I assumed before :each was on equal footing with before :all, and in fact imagined it as a before :all plus more, in a sense. The idea that a before :all would supersede a before :each seems weird to me.
I assumed all these before blocks were subject to nesting in a reasonable way. That is, a before :all block should fire once before all the things below it, which would mean firing within each iteration of a before :each block that might contain the before :all block. Also, if a before :each contains some code and then a describe statement below it has a before :all, the before :each code should still fire before the before :all code. Maybe it does do that; I'm just not sure at this point.
Questions:
1) What kind of behavior is my spec_helper file actually producing? If I were to take that same behavior and put it into one of the spec files itself, for instance, what would that look like? Would the before :each with the browser launch code wrap around all the code in the spec file in some sort of implicit describe block? Does the before :each get inserted into the outermost describe block, and therefore have to compete with the spec file's before :all block?
2) I could "solve" this by abandoning all before :all blocks in my tests, but I like the flexibility of the way things are, and since this is UI Automation and speed is a factor, I don't really want to be opening a new browser and logging in for each describe, even though I'm aware that this would be ideal for keeping every test separated from the others. Do I have to/want to do just that?
Thanks!
To address the part of your question not already covered by https://stackoverflow.com/a/22489263/1008891: the require method will not reload a file that has already been loaded. This contrasts with the load method, which acts like a "copy/paste" of the file contents wherever it is used.
I'm not saying that if you changed all your requires to loads that you'd get the same result as running the individual files, as there may be other global state that affects the behavior you're seeing.
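A quick way to see the difference (paths hypothetical):
require './spec_helper'   # => true; spec_helper.rb is executed
require './spec_helper'   # => false; already loaded, nothing runs

load './spec_helper.rb'   # executes the file
load './spec_helper.rb'   # executes the whole file again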
The RSpec documentation (http://rspec.info/documentation/3.3/rspec-core/RSpec/Core/Hooks.html#before-instance_method) provides some very useful information about how before blocks interact and the order in which they are run, although it does recommend avoiding before(:context) [aka before(:all)].
FYI, note that after blocks are run in the reverse of the order in which before blocks are run.
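A small runnable sketch of the documented ordering, which also explains the behavior you saw: a group's before(:all) fires when the group starts, before any config-level or outer before(:each) runs for its examples.
describe "ordering" do
  before(:all)  { puts "outer before(:all)" }
  before(:each) { puts "outer before(:each)" }

  describe "inner" do
    before(:all)  { puts "inner before(:all)" }
    before(:each) { puts "inner before(:each)" }

    it "runs" do
      # Output order for this example:
      #   outer before(:all)
      #   inner before(:all)
      #   outer before(:each)
      #   inner before(:each)
    end
  end
end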

Using asserts in Ruby production... yes or no?

So, here's the deal. I'm currently working in a Ruby on Rails environment and have been for ~1 year now. Before that I was in C++/Java land for almost a decade. I'm (still) trying to figure out what the Ruby way is when it comes to asserts.
I'm not worried about the technical detail. I know TestUnit has asserts which can be used in the testing environment and I know I can add my own assert methods to my Ruby project and use them in production Rails to lock down known conditions. The question is: What is the Ruby way for ensuring something in code that I know should/not happen?
For the record, I've been asserting in tests and raising in production. I still can't help but miss my production asserts...
Asserts really shouldn't be used in production code, for two reasons.
assert x is very functional in style, and as such hard to read. Using a raise/if combo adds readability.
assert doesn't make it clear what error will be raised if the condition fails, whereas
raise ObscureButInformativeError if condition
lets the application layers further up do something relevant, such as emailing an admin or writing to a particular log.
Let the error happen, then check the logs for what went wrong, then fix it.
Rails catches all uncaught exceptions automatically, it will only mess up the single request the error happened in.
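As a concrete sketch of this raise/if style (class and method names are hypothetical):
class OrderProcessor
  class MissingItemsError < StandardError; end

  def process(order)
    # Guard a known invariant with an informative error instead of an assert
    raise MissingItemsError, "order #{order.id} has no items" if order.items.empty?
    # ... proceed knowing the invariant holds
  end
end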
There are no official non-test assertions in Ruby, but there are gems.
For instance, Jim Weirich's Given looks promising. Its main focus is testing environments (rspec / minitest), but it also:
... provides three assertions meant to be used in non-test/non-spec code. For example, here is a square root function decked out with pre- and post-condition assertions.
require 'given/assertions'
require 'given/fuzzy_number'

include Given::Assertions
include Given::Fuzzy

def sqrt(n)
  Precondition { n >= 0 }
  result = Math.sqrt(n)
  Postcondition { result ** 2 == about(n) }
  result
end
To use the non-testing assertions, you need to require the 'given/assertions' file and then include the Given::Assertions module into whatever class is using the Precondition / Postcondition / Assert methods. The code block for these assertions should always be a regular Ruby true/false value (the should and expect methods from RSpec are not available). Note that this example also uses the fuzzy number matching, but that is not required for the assertions themselves.
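For example (my sketch, not from the gem's docs):
sqrt(4.0)   # => 2.0; both assertions hold
sqrt(-1.0)  # the Precondition block returns false, so the gem raises
            # an assertion failure before Math.sqrt is even called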

Passing a parameter/object to a ruby unit/test before running it using TestRunner

I'm building a tool that automates a process, then runs some tests on its own results, then goes on to do some other stuff.
In trying to clean up my code I have created a separate file that just has the test cases class. Before I can run these tests, though, I have to pass the class a couple of parameters/objects. The problem is that I can't seem to find a way to pass a parameter/object to the test class.
Right now I am thinking to generate a Yaml file and read it in the test class but it feels "wrong" to use a temporary file for this. If anyone has a nicer solution that would be great!
Edit:
Example Code of what I am doing right now:
#!/usr/bin/ruby
require 'test/unit/ui/console/testrunner'
require 'yaml'
require 'TS_SampleTestSuite'

automatingSomething()
importantInfo = getImportantInfo()

File.open('filename.yml', 'w') do |f|
  f.puts importantInfo.to_yaml
end

Test::Unit::UI::Console::TestRunner.run(TS_SampleTestSuite)
Now in the example above TS_SampleTestSuite needs importantInfo, so the first "test case" is a method that just reads the information back in from the Yaml file filename.yml.
I hope that clears up some confusion.
Overall, it looks like you're not really using the unit tests in a very rubyish way, but I'll leave that aside for a minute.
Your basic problem is that you have some setup that needs to happen before the tests run. The normal way to do that is with a setup method within the test unit case itself.
class UserTest < Test::Unit::TestCase
  def setup
    # do your important calculation
  end

  def test_success
    # .. assert some things
  end
end
I would give some thought to what code it is that you're actually testing here, and see if you can break it down and test it in a more granular way, with lots more tests.
First, I agree with Cameron: this code definitely does not adhere to the Ruby way, though I'll also sidestep that for now.
The fastest way to get up and running with this, especially if this data is pretty much immutable (that is to say, your tests won't be altering it in any way), is to just assign the value to a constant. So instead of naming your variable importantInfo, you name it IMPORTANT_INFO. Then it will be available to you in your tests. It's definitely not a pretty solution, and I think it could even be considered a test smell that you need that sort of global setup, but it's there for you.
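A sketch of that constant approach applied to the script from the question (untested, reusing the asker's names):
#!/usr/bin/ruby
require 'test/unit/ui/console/testrunner'
require 'TS_SampleTestSuite'

automatingSomething()
IMPORTANT_INFO = getImportantInfo()

# TS_SampleTestSuite can now reference IMPORTANT_INFO directly,
# with no temporary Yaml file involved.
Test::Unit::UI::Console::TestRunner.run(TS_SampleTestSuite)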
Alternatively, you could look at stubbing out the importantInfo, which I actually think would provide for much cleaner and more readable tests.
