I'm trying to mock a class, so that I can expect it is instantiated and that a certain method is then called.
I tried:
expect(MyPolicy).
to receive(:new).
and_wrap_original do |method, *args|
expect(method.call(*args)).to receive(:show?).and_call_original
end
But all I'm getting is:
undefined method `show?' for #<RSpec::Mocks::VerifyingMessageExpectation:0x0055e9ffd0b530>
I've tried providing a block and calling the original methods first (both :new and :show?, which I had to bind first), but the error is always the same.
I know about expect_any_instance_of, but it's considered a code smell, so I'm trying to find another way to do it properly.
Context: I have Pundit policies and I want to check whether or not a policy has been called.
I also tried, with the same error:
ctor = policy_class.method(:new)
expect(policy_class).
to receive(:new).
with(user, record) do
expect(ctor.call(user, record)).to receive(query).and_call_original
end
You broke MyPolicy.new.
Your wrapper for new does not return a new MyPolicy object. It returns the result of expect(method.call(*args)).to receive(:show?).and_call_original, which is a MessageExpectation.
Instead, you can ensure the new object is returned with tap.
# This is an allow. It's not a test, it's scaffolding for the test.
allow(MyPolicy).to receive(:new)
.and_wrap_original do |method, *args|
method.call(*args).tap do |obj|
expect(obj).to receive(:show?).and_call_original
end
end
Or do it the old fashioned way.
allow(MyPolicy).to receive(:new)
.and_wrap_original do |method, *args|
obj = method.call(*args)
expect(obj).to receive(:show?).and_call_original
obj
end
It is often simpler to separate the two steps. Mock MyPolicy.new to return a particular object and then expect the call to show? on that object.
let(:policy) do
# This calls the real MyPolicy.new because policy is referenced
# when setting up the MyPolicy.new mock.
MyPolicy.new
end
before do
allow(MyPolicy).to receive(:new).and_return(policy)
end
it 'shows' do
expect(policy).to receive(:show?).and_call_original
MyPolicy.new.show?
end
This does mean MyPolicy.new always returns the same object. That's an advantage for testing, but might break something. This is more flexible since it separates the scaffolding from what's being tested. The scaffolding can be reused.
RSpec.describe SomeClass do
let(:policy) {
MyPolicy.new
}
let(:thing) {
described_class.new
}
shared_context 'mocked MyPolicy.new' do
before do
allow(MyPolicy).to receive(:new).and_return(policy)
end
end
describe '#some_method' do
include_context 'mocked MyPolicy.new'
it 'shows a policy' do
expect(policy).to receive(:show?).and_call_original
thing.some_method
end
end
describe '#other_method' do
include_context 'mocked MyPolicy.new'
it 'checks its policy' do
expect(policy).to receive(:check)
thing.other_method
end
end
end
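If the example does not need the real show? to run, a verified double can stand in for the policy in that scaffolding. This is an optional sketch, not part of the pattern above; it relies on instance_double and have_received from RSpec 3:
let(:policy) { instance_double(MyPolicy, show?: true) }

before do
  # instance_double verifies that MyPolicy actually responds to show?.
  allow(MyPolicy).to receive(:new).and_return(policy)
end

it 'shows a policy' do
  thing.some_method
  expect(policy).to have_received(:show?)
end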
Finally, constructor calls buried inside a method are a headache for testing, and they're inflexible: the default cannot be overridden.
class SomeClass
def some_method
MyPolicy.new.show?
end
end
Turn it into an accessor with a default.
class SomeClass
attr_writer :policy
def policy
@policy ||= MyPolicy.new
end
def some_method
policy.show?
end
end
Now it can be accessed in the test or anywhere else.
RSpec.describe SomeClass do
let(:thing) {
described_class.new
}
describe '#some_method' do
it 'shows its policy' do
expect(thing.policy).to receive(:show?).and_call_original
thing.some_method
end
end
end
This is the most robust option.
Given that I have an abstract class which provides inherited functionality to subclasses:
class Superclass
class_attribute :_configuration_parameter
def self.configuration_parameter config
self._configuration_parameter = config
end
def results
unless @queried
execute
@queried = true
end
@results
end
private
# Execute uses the class instance config
def execute
@rows = DataSource.fetch self.class._configuration_parameter
@results = Results.new @rows, count
post_process
end
def post_process
@results.each do |row|
# mutate results
end
end
end
Which might be used by a subclass like this:
class Subclass < Superclass
configuration_parameter :foo
def subclass_method
end
end
I'm having a hard time writing RSpec to test the inherited and configured functionality without abusing the global namespace:
RSpec.describe Superclass do
let(:config_parameter) { :bar }
let(:test_subclass) do
# this feels like an anti-pattern, but the Class.new block scope
# doesn't contain config_parameter from the Rspec describe
$config_parameter = config_parameter
Class.new(Superclass) do
configuration_parameter $config_parameter
end
end
let(:test_instance) do
test_subclass.new
end
describe 'config parameter' do
it 'sets the class attribute' do
expect(test_subclass._configuration_parameter).to be(config_parameter)
end
end
describe 'execute' do
it 'fetches the data from the right place' do
expect(DataSource).to receive(:fetch).with(config_parameter)
test_instance.results
end
end
end
The real world superclass I'm mocking here has a few more configuration parameters and several other pieces of functionality which test reasonably well with this pattern.
Am I missing something obviously bad about the class or test design?
Thanks
I'm just going to jump to the most concrete part of your question, about how to avoid using a global variable to pass a local parameter to the dummy class instantiated in your spec.
Here's your spec code:
let(:test_subclass) do
# this feels like an anti-pattern, but the Class.new block scope
# doesn't contain config_parameter from the Rspec describe
$config_parameter = config_parameter
Class.new(Superclass) do
configuration_parameter $config_parameter
end
end
If you take the value returned from Class.new you can call configuration_parameter on that with the local value and avoid the global. Using tap does this with only a minor change to your existing code:
let(:test_subclass) do
Class.new(Superclass).tap do |klass|
klass.configuration_parameter config_parameter
end
end
As to the more general question of how to test functionality inherited from a superclass, I think the general approach of creating a stub subclass and writing specs for that subclass is fine. I personally would make your _configuration_parameter class attribute private, and rather than testing that the configuration_parameter method actually sets the value, I'd instead focus on checking that the value is different from the superclass value. But I'm not sure that's in the scope of this question.
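A sketch of that suggestion, reusing the test_subclass and config_parameter helpers from the question:
describe 'configuration parameter' do
  it 'overrides the value inherited from Superclass' do
    # Check the override rather than re-testing the setter mechanics.
    expect(test_subclass._configuration_parameter).to be(config_parameter)
    expect(test_subclass._configuration_parameter).
      not_to eq(Superclass._configuration_parameter)
  end
end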
I have a situation like this:
module Something
def my_method
return :some_symbol
end
end
class MyClass
include Something
def my_method
if xxx?
:other_symbol
else
super
end
end
end
Now the problem is with testing - I would like to ensure that the super method gets called from the overridden method, and stub it so that I can test other parts of the method. How can I accomplish that using RSpec mocks?
Ensuring that super gets called sounds a lot like testing the implementation, not the behaviour, and mocking the subject-under-test isn't such a great idea anyway. I would suggest just explicitly specifying the different code paths:
describe "#my_method" do
it "returns :other_symbol when xxx" do
...
end
it "returns :some_symbol when not xxx" do
...
end
end
If you had a lot of classes that included that module, you could use shared examples to reduce the duplication in your tests.
shared_examples_for "Something#my_method" do
it "returns :some_symbol" do
expect(subject.my_method).to eq :some_symbol
end
end
describe MyClass do
describe "#my_method" do
context "when xxx" do
subject { ... }
it "returns :other_symbol" do
expect(subject.my_method).to eq :other_symbol
end
end
context "when not xxx" do
subject { ... }
it_behaves_like "Something#my_method"
end
end
end
Update: If you really can't predict the behaviour of the mixin, you could switch out what method gets called by super by including another module that defines it.
If you have a class C that includes modules M and N that both define a method f, then in C#f, super will refer to whichever module was included last.
class C
include M
include N
def f
super # calls N#f because N was included last
end
end
If you include it in the singleton class of your subject-under-test, then it won't affect any other tests:
describe MyClass do
describe "#my_method" do
it "calls super when not xxx" do
fake_library = Module.new do
def my_method
:returned_from_super
end
end
subject.singleton_class.send :include, fake_library
expect(subject.my_method).to be :returned_from_super
end
end
end
Disclaimer: this doesn't actually test that the mixin works, just that super gets called. I still would advise actually testing the behaviour.
Say I have code like this:
class Car
def test_drive!; end
end
class AssemblyLine
def produce!
car = Car.new
car.test_drive!
end
end
Now, using RSpec I want to test/spec AssemblyLine without exercising Car as well. I hear we don't do dependency injection in Ruby, we stub new instead:
describe AssemblyLine do
before do
Car.stub(:new).and_return(double('Car'))
end
describe '#produce' do
it 'test-drives new cars' do
the_new_instance_of_car.should_receive(:test_drive!) # ???
AssemblyLine.new.produce!
end
end
end
The problem, as you can see, is with the_new_instance_of_car. It doesn't exist yet before produce is called, and after produce returns it's too late to set any method call expectations on it.
I can think of a workaround involving a callback in the stubbed new method, but that's rather hideous. There must be a more elegant and idiomatic way to solve this seemingly common problem. Right...?
Update: here's how I solved it.
describe AssemblyLine do
def stub_new_car(&block)
Car.stub(:new) do
car = double('Car')
block.call(car) if block
car
end
end
before { stub_new_car } # to make other tests use the stub as well
describe '#produce' do
it 'test-drives new cars' do
stub_new_car { |car| car.should_receive(:test_drive!) }
AssemblyLine.new.produce!
end
end
end
You can set an expectation on the test double:
describe AssemblyLine do
let(:car) { double('Car') }
before { Car.stub(:new) { car } }
describe "#produce" do
it "test-drives new cars" do
car.should_receive(:test_drive!)
AssemblyLine.new.produce!
end
end
end
You can also call any_instance on the class (as of RSpec 2.7, I think):
describe AssemblyLine do
describe "#produce" do
it "test-drives new cars" do
Car.any_instance.should_receive(:test_drive!)
AssemblyLine.new.produce!
end
end
end
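For completeness (this goes beyond the original answer), the same spec written with the RSpec 3 expect syntax and a verified double might look like this:
describe AssemblyLine do
  # instance_double checks that Car really defines test_drive!.
  let(:car) { instance_double(Car) }

  before { allow(Car).to receive(:new).and_return(car) }

  describe "#produce" do
    it "test-drives new cars" do
      expect(car).to receive(:test_drive!)
      AssemblyLine.new.produce!
    end
  end
end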
How can I create an object in Ruby that will be evaluated to false in logical expressions, similar to nil?
My intention is to enable nested calls on other objects where somewhere halfway down the chain a value would normally be nil, but allow all the calls to continue - returning my nil-like object instead of nil itself. The object will return itself in response to any received messages that it does not know how to handle, and I anticipate that I will need to implement some override methods such as nil?.
For example:
fizz.buzz.foo.bar
If the buzz property of fizz was not available I would return my nil-like object, which would accept calls all the way down to bar returning itself. Ultimately, the statement above should evaluate to false.
Edit:
Based on all the great answers below I have come up with the following:
class NilClass
attr_accessor :forgiving
def method_missing(name, *args, &block)
return self if @forgiving
super
end
def forgive
@forgiving = true
yield if block_given?
@forgiving = false
end
end
This allows for some dastardly tricks like so:
nil.forgive {
hash = {}
value = hash[:key].i.dont.care.that.you.dont.exist
if value.nil?
# great, we found out without checking all its parents too
else
# got the value without checking its parents, yaldi
end
}
Obviously you could wrap this block up transparently inside of some function call/class/module/wherever.
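For example, a small helper method could hide the nil receiver entirely; forgivingly is an invented name, built on the NilClass patch above:
def forgivingly
  # Run the block in forgiving mode and hand back its value.
  result = nil
  nil.forgive { result = yield }
  result
end

forgivingly { {}[:missing].anything.at.all }  # => nil, no NoMethodError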
This is a pretty long answer with a bunch of ideas and code samples of how to approach the problem.
try
Rails has a try method that lets you program like that. This is kind of how it's implemented:
class Object
def try(*args, &b)
__send__(*args, &b)
end
end
class NilClass # NilClass is the class of the nil singleton object
def try(*args)
nil
end
end
You can program with it like this:
fizz.try(:buzz).try(:foo).try(:bar)
You could conceivably modify this to work a little differently to support a more elegant API:
class Object
def try(*args)
if args.length > 0
method = args.shift # get the first method
__send__(method).try(*args) # Call `try` recursively on the result method
else
self # No more methods in chain return result
end
end
end
# And keep NilClass same as above
Then you could do:
fizz.try(:buzz, :foo, :bar)
andand
andand uses a more nefarious technique, hacking the fact that you can't directly instantiate NilClass subclasses:
class Object
def andand
if self
self
else # this branch is chosen if `self.nil? or self == false`
Mock.new(self) # might want to modify if you have useful methods on false
end
end
end
class Mock < BasicObject
def initialize(me)
super()
@me = me
end
def method_missing(*args) # if any method is called return the original object
@me
end
end
This allows you to program this way:
fizz.andand.buzz.andand.foo.andand.bar
Combine with some fancy rewriting
Again you could expand on this technique:
class Object
def method_missing(m, *args, &blk) # `m` is the name of the method
if m[0] == '_' and respond_to? m[1..-1] # if it starts with '_' and the object
Mock.new(self.send(m[1..-1])) # responds to the rest wrap it.
else # otherwise throw exception or use
super # object specific method_missing
end
end
end
class Mock < BasicObject
def initialize(me)
super()
@me = me
end
def method_missing(m, *args, &blk)
if m[-1] == '_' # If method ends with '_'
# If @me isn't nil call m without final '_' and return its result.
# If @me is nil then return `nil`.
@me.send(m[0...-1], *args, &blk) if @me
else
@me = @me.send(m, *args, &blk) if @me # Otherwise call method on `@me` and
self # store result then return mock.
end
end
end
To explain what's going on: when you call an underscored method you trigger mock mode, and the result of _meth is wrapped automatically in a Mock object. Any time you call a method on that mock it checks that it's not holding a nil and then forwards your method to that object (here stored in the @me variable). The mock then replaces the original object with the result of your method call. When you call meth_ it ends mock mode and returns the actual return value of meth.
This allows for an api like this (I used underscores, but you could use really anything):
fizz._buzz.foo.bum.yum.bar_
Brutal monkey-patching approach
This is really quite nasty, but it allows for an elegant API and doesn't necessarily screw up error reporting in your whole app:
class NilClass
attr_accessor :complain
def method_missing(*args)
if @complain
super
else
self
end
end
end
nil.complain = true
Use like this:
nil.complain = false
fizz.buzz.foo.bar
nil.complain = true
As far as I'm aware there's no really easy way to do this. Some work has been done in the Ruby community that implements the functionality you're talking about; you may want to take a look at:
The andand gem
Rails's try method
The andand gem is used like this:
require 'andand'
...
fizz.buzz.andand.foo.andand.bar
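A note beyond this answer: since Ruby 2.3 the built-in safe navigation operator covers the common case without any gem, short-circuiting to nil as soon as a receiver in the chain is nil:
# Each &. skips the call and returns nil when its receiver is nil.
fizz&.buzz&.foo&.bar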
You can modify the NilClass class to use method_missing() to respond to any not-yet-defined methods.
> class NilClass
> def method_missing(name)
> return self
> end
> end
=> nil
> if nil:
* puts "true"
> end
=> nil
> nil.foo.bar.baz
=> nil
There is a principle called the Law of Demeter [1] which suggests that what you're trying to do is not good practice, as your objects shouldn't necessarily know so much about the relationships of other objects.
However, we all do it :-)
In simple cases I tend to delegate the chaining of attributes to a method that checks for existence:
class Fizz
def buzz_foo_bar
self.buzz.foo.bar if buzz && buzz.foo && buzz.foo.bar
end
end
So I can now call fizz.buzz_foo_bar knowing I won't get an exception.
But I've also got a snippet of code (at work, and I can't grab it until next week) that handles method_missing, looks for underscores, and tests reflected associations to see if they respond to the remainder of the chain. This means I don't have to write the delegate methods any more - just include the method_missing patch:
module ActiveRecord
class Base
def children_names
association_names=self.class.reflect_on_all_associations.find_all{|x| x.instance_variable_get("@macro")==:belongs_to}
association_names.map{|x| x.instance_variable_get("@name").to_s} | association_names.map{|x| x.instance_variable_get("@name").to_s.gsub(/^#{self.class.name.underscore}_/,'')}
end
def reflected_children_regex
Regexp.new("^(" << children_names.join('|') << ")_(.*)")
end
def method_missing(method_id, *args, &block)
begin
super
rescue NoMethodError, NameError
if match_data=method_id.to_s.match(reflected_children_regex)
association_name=self.methods.include?(match_data[1]) ? match_data[1] : "#{self.class.name.underscore}_#{match_data[1]}"
if association=send(association_name)
association.send(match_data[2],*args,&block)
end
else
raise
end
end
end
end
end
[1] http://en.wikipedia.org/wiki/Law_of_Demeter
EDIT: I slightly changed the spec, to better match what I imagined this to do.
Well, I don't really want to fake C# attributes, I want to one-up them and support AOP as well.
Given the program:
class Object
def Object.profile
# magic code here
end
end
class Foo
# This is the fake attribute, it profiles a single method.
profile
def bar(b)
puts b
end
def barbar(b)
puts(b)
end
comment("this really should be fixed")
def snafu(b)
end
end
Foo.new.bar("test")
Foo.new.barbar("test")
puts Foo.get_comment(:snafu)
Desired output:
Foo.bar was called with param: b = "test"
test
Foo.bar call finished, duration was 1ms
test
This really should be fixed
Is there any way to achieve this?
I have a somewhat different approach:
class Object
def self.profile(method_name)
return_value = nil
time = Benchmark.measure do
return_value = yield
end
puts "#{method_name} finished in #{time.real}"
return_value
end
end
require "benchmark"
module Profiler
def method_added(name)
profile_method(name) if @method_profiled
super
end
def profile_method(method_name)
@method_profiled = nil
alias_method "unprofiled_#{method_name}", method_name
class_eval <<-ruby_eval
def #{method_name}(*args, &blk)
name = "\#{self.class}##{method_name}"
msg = "\#{name} was called with \#{args.inspect}"
msg << " and a block" if block_given?
puts msg
Object.profile(name) { unprofiled_#{method_name}(*args, &blk) }
end
ruby_eval
end
def profile
@method_profiled = true
end
end
module Comment
def method_added(name)
comment_method(name) if @method_commented
super
end
def comment_method(method_name)
comment = @method_commented
@method_commented = nil
alias_method "uncommented_#{method_name}", method_name
class_eval <<-ruby_eval
def #{method_name}(*args, &blk)
puts #{comment.inspect}
uncommented_#{method_name}(*args, &blk)
end
ruby_eval
end
def comment(text)
@method_commented = text
end
end
class Foo
extend Profiler
extend Comment
# This is the fake attribute, it profiles a single method.
profile
def bar(b)
puts b
end
def barbar(b)
puts(b)
end
comment("this really should be fixed")
def snafu(b)
end
end
A few points about this solution:
I provided the additional methods via modules which could be extended into new classes as needed. This avoids polluting the global namespace for all modules.
I avoided using alias_method_chain, since module includes allow AOP-style extensions (in this case, for method_added) without the need for aliasing.
I chose to use class_eval rather than define_method to define the new method in order to be able to support methods that take blocks. This also necessitated the use of alias_method.
Because I chose to support blocks, I also added a bit of text to the output in case the method takes a block.
There are ways to get the actual parameter names, which would be closer to your original output, but they don't really fit in a response here. You can check out merb-action-args, where we wrote some code that required getting the actual parameter names. It works in JRuby, Ruby 1.8.x, Ruby 1.9.1 (with a gem), and Ruby 1.9 trunk (natively).
The basic technique here is to store a class instance variable when profile or comment is called, which is then applied when a method is added. As in the previous solution, the method_added hook is used to track when the new method is added, but instead of removing the hook each time, the hook checks for an instance variable. The instance variable is removed after the AOP is applied, so it only applies once. If this same technique were used multiple times, it could be further abstracted (a sketch of one such abstraction follows below).
In general, I tried to stick as close to your "spec" as possible, which is why I included the Object.profile snippet instead of implementing it inline.
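As a hedged sketch of that abstraction (MacroHook and on_next_method_added are invented names; profile_method is the one defined earlier in this answer):
module MacroHook
  # Remember callbacks to run against the next method definition.
  def on_next_method_added(&callback)
    (@pending_callbacks ||= []) << callback
  end

  def method_added(name)
    if @pending_callbacks && !@pending_callbacks.empty?
      # Clear the queue before running it so the wrapped redefinition
      # does not re-trigger the callbacks.
      callbacks, @pending_callbacks = @pending_callbacks, []
      callbacks.each { |cb| cb.call(name) }
    end
    super
  end
end

# Profiler could then arm the hook instead of defining its own method_added:
module Profiler
  include MacroHook

  def profile
    on_next_method_added { |name| profile_method(name) }
  end

  # profile_method stays as defined above.
end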
Great question. This is my quick attempt at an implementation (I did not try to optimise the code). I took the liberty of adding the profile method to the Module class. In this way it will be available in every class and module definition. It would be even better to extract it into a module and mix it into the class Module whenever you need it.
I also didn't know if the point was to make the profile method behave like Ruby's public/protected/private keywords, but I implemented it like that anyway. All methods defined after calling profile are profiled, until noprofile is called.
class Module
def profile
require "benchmark"
@profiled_methods ||= []
class << self
# Save any original method_added callback.
alias_method :__unprofiling_method_added, :method_added
# Create new callback.
def method_added(method)
# Possible infinite loop if we do not check if we already replaced this method.
unless @profiled_methods.include?(method)
@profiled_methods << method
unbound_method = instance_method(method)
define_method(method) do |*args|
puts "#{self.class}##{method} was called with params #{args.join(", ")}"
bench = Benchmark.measure do
unbound_method.bind(self).call(*args)
end
puts "#{self.class}##{method} finished in %.5fs" % bench.real
end
# Call the original callback too.
__unprofiling_method_added(method)
end
end
end
end
def noprofile # What's the opposite of profile?
class << self
# Remove profiling callback and restore previous one.
alias_method :method_added, :__unprofiling_method_added
end
end
end
You can now use it as follows:
class Foo
def self.method_added(method) # This still works.
puts "Method '#{method}' has been added to '#{self}'."
end
profile
def foo(arg1, arg2, arg3 = nil)
puts "> body of foo"
sleep 1
end
def bar(arg)
puts "> body of bar"
end
noprofile
def baz(arg)
puts "> body of baz"
end
end
Call the methods as you would normally:
foo = Foo.new
foo.foo(1, 2, 3)
foo.bar(2)
foo.baz(3)
And get benchmarked output (and the result of the original method_added callback just to show that it still works):
Method 'foo' has been added to 'Foo'.
Method 'bar' has been added to 'Foo'.
Method 'baz' has been added to 'Foo'.
Foo#foo was called with params 1, 2, 3
> body of foo
Foo#foo finished in 1.00018s
Foo#bar was called with params 2
> body of bar
Foo#bar finished in 0.00016s
> body of baz
One thing to note is that it is impossible to dynamically get the name of the arguments with Ruby meta-programming. You'd have to parse the original Ruby file, which is certainly possible but a little more complex. See the parse_tree and ruby_parser gems for details.
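One caveat worth adding beyond the original answer: on Ruby 1.9.2 and later, Method#parameters does expose parameter names at runtime, so parsing the source is no longer the only option:
def foo(arg1, arg2, arg3 = nil); end

# Reflection returns the kind and name of each parameter.
p method(:foo).parameters
# => [[:req, :arg1], [:req, :arg2], [:opt, :arg3]]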
A fun improvement would be to be able to define this kind of behaviour with a class method in the Module class. It would be cool to be able to do something like:
class Module
method_wrapper :profile do |*arguments|
# Do something before calling method.
yield *arguments # Call original method.
# Do something afterwards.
end
end
I'll leave this meta-meta-programming exercise for another time. :-)