Is there a clean way to test ActiveRecord callbacks in RSpec?

Suppose I have the following ActiveRecord class:
class ToastMitten < ActiveRecord::Base
  before_save :brush_off_crumbs
end
Is there a clean way to test that :brush_off_crumbs has been set as a before_save callback?
By "clean" I mean:
"Without actually saving", because
It's slow
I don't need to test that ActiveRecord correctly handles a before_save directive; I need to test that I correctly told it what to do before it saves.
"Without hacking through undocumented methods"
I found a way that satisfies criteria #1 but not #2:
it "should call have brush_off_crumbs as a before_save callback" do
# undocumented voodoo
before_save_callbacks = ToastMitten._save_callbacks.select do |callback|
callback.kind.eql?(:before)
end
# vile incantations
before_save_callbacks.map(&:raw_filter).should include(:brush_off_crumbs)
end

Use run_callbacks
This is less hacky, but not perfect:
it "is called as a before_save callback" do
revenue_object.should_receive(:record_financial_changes)
revenue_object.run_callbacks(:save) do
# Bail from the saving process, so we'll know that if the method was
# called, it was done before saving
false
end
end
Using this technique to test for an after_save would be more awkward.
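For what it's worth, a rough sketch of what the after_save case might look like with the same technique, assuming a hypothetical after_save :log_crumb_status callback on ToastMitten; the block plays the role of the save itself, and whether the after callbacks fire around it depends on the Rails version's callback terminator, which is part of the awkwardness:
it "is called as an after_save callback" do
  toast_mitten.should_receive(:log_crumb_status)
  toast_mitten.run_callbacks(:save) do
    # Pretend the save succeeded so the after callbacks are allowed to run;
    # the database is still never touched.
    true
  end
end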

Related

RSpec - how to test if object sends messages to self in #initialize

After reading this question, I really do not like the answer given there:
Rails / RSpec: How to test #initialize method?
Maybe I have a third scenario. This is what I have now, inspired by the second code sample from that answer.
# Picture is a collection of SinglePictures with the same name and filename,
# but different dimensions
class Picture
  attr_accessor :name, :filename
  attr_reader :single_pics, :largest_width

  def initialize(name, filename, dimensions=nil)
    @largest_width = 0
    @single_pics = {}
    add_single_pics(dimensions) if dimensions
  end

  def add_single_pics(max_dimension)
    # logic
  end
end
describe '#initialize' do
  it 'should not call add_single_pics if dimensions is not given' do
    subject = Picture.new('Test Picture', 'Test-Picture')
    expect(subject.largest_width).to eq 0
  end

  it 'should call add_single_pics if dimensions are given' do
    subject = Picture.new('Test Picture', 'Test-Picture', 1920)
    expect(subject.largest_width).to eq 1920
  end
end
I really don't like this, because I am testing the functionality of add_single_pics in the #initialize tests. What I would like to write in the spec is something like:
expect(subject).not_to have_received(:add_single_pics)
expect(subject).to have_received(:add_single_pics)
But I get
Expected to have received add_single_pics, but that object is not a spy
or method has not been stubbed.
Can I fix this somehow?
Spies are an alternate type of test double that support this pattern
by allowing you to expect that a message has been received after the
fact, using have_received.
https://relishapp.com/rspec/rspec-mocks/v/3-5/docs/basics/spies
Only a spy object can record the method calls it receives. To test your real class the way you want, you have to set up an expect_any_instance_of expectation before the class is instantiated:
expect_any_instance_of(Picture).to receive(:add_single_pics)
Picture.new('Test Picture', 'Test-Picture')
In this case the expectation replaces your add_single_pics method, so its logic will not run; if you need the real logic to run, chain and_call_original onto the matcher:
expect_any_instance_of(Picture).to receive(:add_single_pics).and_call_original
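Putting that together with the Picture class from the question, the #initialize specs might end up looking something like this sketch:
describe '#initialize' do
  it 'does not call add_single_pics if dimensions is not given' do
    expect_any_instance_of(Picture).not_to receive(:add_single_pics)
    Picture.new('Test Picture', 'Test-Picture')
  end

  it 'calls add_single_pics if dimensions are given' do
    expect_any_instance_of(Picture).to receive(:add_single_pics).with(1920)
    Picture.new('Test Picture', 'Test-Picture', 1920)
  end
end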

Using must_send with a method that calls super in minitest

Let's say I have the following module:
module SillyDemo
  class Monkey
    def screech(sound)
      sound
    end
  end

  class Ape < Monkey
    def process(sound)
      sound
    end

    def screech(sound)
      process(sound)
      super
      sound
    end
  end
end
And then the following minitest:
require_relative 'sillydemo'
require "minitest/spec"
require "minitest/autorun"

describe "Ape" do
  before do
    @ape = SillyDemo::Ape.new
    @screech = "YEEEEEEE"
  end

  it "screeches" do
    @ape.screech(@screech)
    must_send [@ape, :process, @screech]
    must_send [@ape, :super, @screech]
  end
end
This errors out with:
NoMethodError: undefined method `super' for #<SillyDemo::Ape:0x007feeb10943c0>
(eval):4:in `must_send'
I have also tried:
must_send [#ape, :"SillyDemo::Monkey.screech", #screech]
which errors out with:
NoMethodError: undefined method `SillyDemo::Ape.run' for #<SillyDemo::Ape:0x007fc5a1874e20>
(eval):4:in `must_send'
My question is, how can I use minitest to test a call to super?
In Ruby, super is a keyword, not a method. Also, the must_send expectation doesn't verify that the method was called; it just verifies that the return value of the call is truthy.
http://www.ruby-doc.org/stdlib-2.0.0/libdoc/minitest/rdoc/MiniTest/Expectations.html#method-i-must_send
http://www.ruby-doc.org/stdlib-2.0.0/libdoc/minitest/rdoc/MiniTest/Assertions.html#method-i-assert_send
Mocks are usually used to verify that a method was called. However, Minitest::Mock doesn't allow for this type of check very easily by design. Here is how you can do this though.
it "screeches" do
# 1) Create mock
sound_mock = Minitest::Mock.new
sound_mock.expect :process, true, [String]
# 2) Place mock
#ape.instance_exec(sound_mock) do |sound_mock|
#mock = sound_mock
def process sound
#mock.process sound
end
end
# 3) Verify mock was called
#ape.screech(#screech)
sound_mock.verify
end
Pretty ugly, right? This is by design. Sort of like syntactic vinegar. The reason is that this use of mocks isn't very informative. It is checking the implementation of the code. You would like to be able to refactor the code without changing behavior and have the tests continue to pass. However, this test will very likely fail when the implementation is changed. To discourage folks from making this kind of mistake it was decided by the Minitest authors to keep this type of check difficult.
Other mocking libraries such as RR or Mocha make this type of check much easier.
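For comparison, a rough sketch of the same check with Mocha (assuming Mocha is loaded via require 'mocha/minitest', or require 'mocha/setup' on older versions):
it "screeches via process" do
  # Mocha's partial mocks let the expectation sit directly on the object;
  # it is verified automatically at the end of the test.
  @ape.expects(:process).with(@screech).returns(@screech)
  @ape.screech(@screech)
end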

Unit testing with and without requiring ActiveSupport

I've extracted a single class from a Rails app into a gem. It's very, very simple, but of course I'd like to fully test it (I'm using rspec).
The class does some simple date-calculation. It's not dependent on Rails, but since it started out in a Rails app, and is still used there, it uses ActiveSupport's time zone-aware methods when it can. But, if ActiveSupport isn't available, it should use the std-lib Date methods.
Specifically, it only does this in one single place: Defaulting an optional argument to "today's date":
arg ||= if Date.respond_to?(:current)
  Date.current # use ActiveSupport's time zone-aware mixin if possible
else
  Date.today # stdlib fallback
end
Question is: How do I properly test this? If I require ActiveSupport in my spec_helper.rb, it'll obviously always use that. If I don't require it anywhere, it'll never use it. And if I require it for a single example group, rspec's random execution order makes the testing unpredictable, as I don't know when AS will be required.
I could maybe require it in a before(:all) in a nested group, as nested groups are (I believe) processed from highest to deepest. But that seems terribly inelegant.
I could also split the specs into two files, and run them separately, but again, that seems unnecessary.
I could also disable rspec's random ordering, but that's sort of going against the grain. I'd rather have it as randomized as possible.
Any ideas?
Another solution is to mock the current and today methods, and use those for testing. Eg:
# you won't need these two lines; they're just there to make the script work standalone
require 'rspec'
require 'rspec/mocks/standalone'

def test_method(arg = nil)
  arg ||= if Date.respond_to?(:current)
    Date.current # use ActiveSupport's time zone-aware mixin if possible
  else
    Date.today # stdlib fallback
  end
  arg
end
describe "test_method" do
let(:test_date) { Date.new(2001, 2, 3) }
it "returns arg unchanged if not nil" do
test_method(34).should == 34
end
context "without Date.current available" do
before(:all) do
Date.stub(:today) { test_date }
end
it "returns Date.today when arg isn't present" do
test_method.should == test_date
end
end
context "with Date.current available" do
before(:all) do
Date.stub(:current) { test_date }
end
it "returns Date.current when arg isn't present" do
test_method.should == test_date
end
end
end
Running with rspec test.rb results in the tests passing.
Also, the stubs are present only in each context, so it doesn't matter what order the specs are run in.
This is more than a little perverse, but it should work. Include ActiveSupport, and then:
context "without ActiveSupport's Date.current" do
before(:each) do
class Date
class << self
alias_method :current_backup, :current
undef_method :current
end
end
end
# your test
after(:each) do
class Date
class << self
alias_method :current, :current_backup
end
end
end
end
I can't really recommend this; I would prefer to split out this one spec and run it separately as you suggested.

Why is the before :save callback hook not getting called from FactoryGirl.create()?

This simple example uses DataMapper's before :save callback (aka hook) to increment callback_count. callback_count is initialized to 0 and should be set to 1 by the callback.
This callback is invoked when the TestObject is created via:
TestObject.create()
but the callback is skipped when created by FactoryGirl via:
FactoryGirl.create(:test_object)
Any idea why? [Note: I'm running ruby 1.9.3, factory_girl 4.2.0, data_mapper 1.2.0]
Full details follow...
The DataMapper model
# file: models/test_model.rb
class TestModel
  include DataMapper::Resource

  property :id,             Serial
  property :callback_count, Integer, :default => 0

  before :save do
    self.callback_count += 1
  end
end
The FactoryGirl declaration
# file: spec/factories.rb
FactoryGirl.define do
  factory :test_model do
  end
end
The RSpec tests
# file: spec/models/test_model_spec.rb
require 'spec_helper'

describe "TestModel Model" do
  it 'calls before :save using TestModel.create' do
    test_model = TestModel.create
    test_model.callback_count.should == 1
  end

  it 'fails to call before :save using FactoryGirl.create' do
    test_model = FactoryGirl.create(:test_model)
    test_model.callback_count.should == 1
  end
end
The test results
Failures:
1) TestModel Model fails to call before :save using FactoryGirl.create
Failure/Error: test_model.callback_count.should == 1
expected: 1
got: 0 (using ==)
# ./spec/models/test_model_spec.rb:10:in `block (2 levels) in <top (required)>'
Finished in 0.00534 seconds
2 examples, 1 failure
At least for factory_girl 4.2 (I don't know in which version support was added), there is another workaround: use a custom method to persist objects. As stated in a response to an issue about this on GitHub, it is just a matter of calling save instead of save!.
FactoryGirl.define do
  to_create do |instance|
    if !instance.save
      raise "Save failed for #{instance.class}"
    end
  end
end
Of course it is not ideal, because this should work in FactoryGirl core, but I think right now it is the best solution and, at the moment, I'm not having conflicts with other tests. The caveat is that you have to define it in each factory (though for me that wasn't an inconvenience), as sketched below.
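A sketch of that per-factory form, using the factory from the question (plain save instead of save!, raising on failure):
# file: spec/factories.rb
FactoryGirl.define do
  factory :test_model do
    to_create do |instance|
      raise "Save failed for #{instance.class}" unless instance.save
    end
  end
end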
Solved.
@Jim Stewart pointed me to this FactoryGirl issue where it says "FactoryGirl calls save! on the instance [that it creates]". In the world of DataMapper, save! expressly does not run the callbacks -- this explains the behavior that I'm seeing. (But it doesn't explain why it works for @enthrops!)
That same link offers some workarounds specifically for DataMapper and I'll probably go with one of them. Still, it would be nice if an un-modified FactoryGirl played nice with DataMapper.
update
Here's the code suggested by Joshua Clayton of thoughtbot. I added it to my spec/factories.rb file and test_model_spec.rb now passes without error. Cool beans.
# file: factories.rb
class CreateForDataMapper
  def initialize
    @default_strategy = FactoryGirl::Strategy::Create.new
  end

  delegate :association, to: :@default_strategy

  def result(evaluation)
    evaluation.singleton_class.send :define_method, :create do |instance|
      instance.save ||
        raise(instance.errors.send(:errors).map { |attr, errors| "- #{attr}: #{errors}" }.join("\n"))
    end
    @default_strategy.result(evaluation)
  end
end
FactoryGirl.register_strategy(:create, CreateForDataMapper)
update 2
Well, perhaps I spoke too soon. Adding the CreateForDataMapper strategy fixes that one specific test, but appears to break others. So I'm un-answering my question for now. Does someone else have a good solution?
Use build to build your object, then call save manually...
t = build(:test_model)
t.save
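In terms of the failing spec from the question, that might look roughly like this:
it 'calls before :save when saved explicitly' do
  test_model = FactoryGirl.build(:test_model)
  test_model.save
  test_model.callback_count.should == 1
end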

How do I effectively force Minitest to run my tests in order?

I know. This is discouraged. For reasons I won't get into, I need to run my tests in the order they are written. According to the documentation, if my test class (we'll call it TestClass) extends Minitest::Unit::TestCase, then I should be able to call the public method i_suck_and_my_tests_are_order_dependent! (Gee - do you think the guy who created Minitest had an opinion on that one?). Additionally, there is also the option of calling a method called test_order and specifying :alpha to override the default behavior of :random. Neither of these are working for me.
Here's an example:
class TestClass < Minitest::Unit::TestCase
  # override random test run ordering
  i_suck_and_my_tests_are_order_dependent!

  def setup
    # ...setup code
  end

  def teardown
    # ...teardown code
  end

  def test_1
    # test_1 code....
    # assert(stuff to assert here, etc...)
    puts 'test_1'
  end

  def test_2
    # test_2 code
    # assert(stuff to assert here, etc...)
    puts 'test_2'
  end
end
When I run this, I get:
undefined method `i_suck_and_my_tests_are_order_dependent!' for TestClass:Class (NoMethodError)
If I replace the i_suck method call with a method at the top a la:
def test_order
  :alpha
end
My test runs, but I can tell from the puts for each method that things are still running in random order each time I run the tests.
Does anyone know what I'm doing wrong?
Thanks.
If you just add a test_order class method that returns :alpha to your test class, the tests will run in order:
class TestHomePage < Minitest::Test
  def self.test_order
    :alpha
  end

  def test_a
    puts "a"
  end

  def test_b
    puts "b"
  end
end
Note that, as of minitest 5.10.1, the i_suck_and_my_tests_are_order_dependent! method/directive is completely nonfunctional in test suites using MiniTest::Spec syntax. The Minitest.test_order method is apparently not being called at all.
EDIT: This has been a known issue since Minitest 5.3.4: see seattlerb/minitest#514 for the blow-by-blow wailing and preening.
You and I aren't the ones who "suck". What's needed is a BDD specification tool for Ruby without the bloat of RSpec and without the frat-boy attitude and contempt for wider community practices of MiniTest. Does anyone have any pointers?
i_suck_and_my_tests_are_order_dependent! may be a later addition to minitest and not available in the copy bundled with Ruby's standard library. In that case, you'd want to force use of your gem version:
require 'rubygems'
gem 'minitest'
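The gem call only helps if it runs before minitest itself is loaded, so the top of the test file would look roughly like this:
require 'rubygems'
gem 'minitest'             # activate the gem instead of the stdlib copy
require 'minitest/autorun' # must come after the gem call to pick it up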
I think that the test_order method should be a class method, not an instance method, like so:
# tests are order dependent
def self.test_order
  :alpha
end
The best way to interfere in this chain may be to override the class method runnable_methods:
def self.runnable_methods
  ['run_first'] | super | ['run_last']
end

# Minitest's own version, for reference:
def self.runnable_methods
  methods = methods_matching(/^test_/)

  case self.test_order
  when :random, :parallel then
    max = methods.size
    methods.sort.sort_by { rand max }
  when :alpha, :sorted then
    methods.sort
  else
    raise "Unknown test_order: #{self.test_order.inspect}"
  end
end
You can reorder the tests in any way that suits you. If you define your specially ordered tests with the block syntax, e.g.
test 'some special ordered test' do
end
don't forget to remove them from the results of the super call.
In my example I only need one particular test to run last, so I keep random order for the whole suite and place 'run_last' at the end of it, as in the sketch below.
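A sketch of that single-test case (run_last stands in for whatever the specially ordered test method is named): remove it from super's list so it isn't run twice, then append it at the end.
def self.runnable_methods
  (super - ['run_last']) | ['run_last']
end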
