How can I mock something that "does not implement" a particular method? - ruby

The Background:
I'm trying to use cucumber to do some test-driven (or behavior-driven) development around an interface to AWS, in ruby.
So, I have a step definition that looks like this:
Then(/^the mock object should have had :(.*?) called, setting "(.*?)" to "(.*?)"$/) do |method, param, value|
  expect(@mock).to receive(method.to_sym).with(hash_including(param, value))
end
Where @mock was previously set using:
@mock = instance_double(AWS::AutoScaling::Client)
And where I invoke this step definition with a feature line like:
And the mock object should have had :update_auto_scaling_group called, setting "auto_scaling_group_name" to "Some-test-value"
When that step gets run, it gets the following error (leaving out the full error, as I believe this is the most relevant part):
AWS::AutoScaling::Client does not implement: update_auto_scaling_group (RSpec::Mocks::MockExpectationError)
I see that indeed, the checks that RSpec runs (as traced back from where the RSpec::Mocks::MockExpectationError gets thrown) are at least correctly reporting the information that they get from the class:
[1] pry(main)> require 'aws-sdk'
=> true
[2] pry(main)> klass = AWS::AutoScaling::Client
=> AWS::AutoScaling::Client
[3] pry(main)> klass.public_method_defined? "update_auto_scaling_group"
=> false
[4] pry(main)> klass.private_method_defined? "update_auto_scaling_group"
=> false
[5] pry(main)> klass.protected_method_defined? "update_auto_scaling_group"
=> false
And yet, if we ask an actual instance, it lets us know that this is a method it would respond to:
[6] pry(main)> x = klass.new
=> #<AWS::AutoScaling::Client::V20110101>
[7] pry(main)> x.respond_to? "update_auto_scaling_group"
=> true
Even though it doesn't say that about just any method name:
[8] pry(main)> x.respond_to? "bogus"
=> false
First questions:
So... is this a bug in the AWS::AutoScaling::Client code (or really, probably in the underlying code that defines those methods), for not defining the methods in a way that makes the existing checks ({public,private,protected}_method_defined?) return true?
Or perhaps a bug in RSpec's "doubles", for not doing all the checking it could do to try to find out that this is indeed a method that's callable in an instance of that class?
Or perhaps it's simply something that I'm doing wrong here? Other?
More generally:
How can I write tests for the code I'm writing, to ensure that it's making calls to what will be an AWS::AutoScaling::Client instance, with the correct parameters (as defined in several checks that I have)? Are there alternate ways I can write my step definitions that would make this work? Alternative ways to create my mock objects? Other?

I've found a way to dynamically mix in the methods I needed to mock. You could do this with empty methods and then stub them, or just include the stub implementations in the mixin:
require 'rails_helper'

RSpec.describe "users/sessions/new.html.erb", :type => :view do
  it "displays login form" do
    module DeviseUserBits
      def resource
        @_DeviseUserBitsUser ||= User.new
      end

      def resource_name
        :user
      end

      def devise_mapping
        Devise.mappings[:user]
      end
    end

    view.class.include DeviseUserBits
    render
    expect(rendered).to match(/form/)
  end
end

It just adds the methods after instantiation. That's perfectly legal; all Ruby classes and objects are open.
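Applied to the original AWS question, the same trick might look something like this (a sketch; the module name and the empty method body are placeholders I've made up, not part of the original answer):

module AutoScalingClientBits
  # Empty placeholder: with this mixed in, public_method_defined? returns
  # true, so verifying doubles stop complaining that the method is not
  # implemented.
  def update_auto_scaling_group(options = {}); end
end

AWS::AutoScaling::Client.include AutoScalingClientBits

client = instance_double(AWS::AutoScaling::Client)
allow(client).to receive(:update_auto_scaling_group) # now accepted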
The proper answer, though: you do not want to test what you are trying to test in a duck-typed language with open classes and objects. It just does not make sense.

The version 1 AWS SDK for Ruby uses #method_missing as a delegate for building and sending requests. The methods a client responds to are defined in an API definition. This eliminates boilerplate code, but causes problems if you try to reflect on the available methods at runtime.
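For illustration (this is not the SDK's actual code), here is a minimal sketch of why a method handled via #method_missing is visible to respond_to? but not to public_method_defined?:

class Dynamic
  def method_missing(name, *args)
    name == :dynamic_op ? :handled : super
  end

  # Makes respond_to? aware of the dynamically handled method.
  def respond_to_missing?(name, include_private = false)
    name == :dynamic_op || super
  end
end

Dynamic.public_method_defined?(:dynamic_op) #=> false
Dynamic.new.respond_to?(:dynamic_op)        #=> true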
Option A: Use a regular double and apply your assertions on the test double.
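A minimal sketch of Option A (a plain double is not verified against the class, so any message expectation is allowed; the parameter names here are just examples from the question):

@mock = double('AWS::AutoScaling::Client')
expect(@mock).to receive(:update_auto_scaling_group).
  with(hash_including('auto_scaling_group_name' => 'Some-test-value'))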
Option B: Use the mocking feature of the SDK (AWS.stub!). When stubbing is enabled, all clients constructed will respond to their regular methods, but will return dummy responses (empty hashes and arrays). This approach gives you the useful ability to specify the data a stub should return. You can even create a stub response for the express purpose of returning it from an assertion.
Going with Option B:
# pass :stub_requests to the constructor, or call AWS.stub! globally
as = AWS::AutoScaling::Client.new(stub_requests: true)

# validates parameters as normal, but returns empty response data
as.update_auto_scaling_group(auto_scaling_group_name: 'name')
#=> {}

# You can access the stub response for any operation by name:
stub = as.stub_for(:describe_auto_scaling_groups)
stub.data[:auto_scaling_group_names] = ["Group1", "Group2"]

# Now calling that operation will return the stubbed data
resp = as.describe_auto_scaling_groups
resp.auto_scaling_group_names
#=> ["Group1", "Group2"]
If you need to assert a method is called against the client, you can do so normally, returning the stubbed response:
expect(@client).to receive(:describe_auto_scaling_groups).
  with(hash_including(param, value)).
  and_return(@client.stub_for(:describe_auto_scaling_groups))

Related

Dry::Web::Container yielding different objects with multiple calls to resolve

I'm trying to write a test to assert that all defined operations are called on a successful run. I have the operations for a given process defined in a list and resolve them from a container, like so:
class ProcessController
  def call(input)
    operations.each { |o| container[o].(input) }
  end

  def operations
    ['operation1', 'operation2']
  end

  def container
    My::Container # This is a Dry::Web::Container
  end
end
Then I test it as follows:
RSpec.describe ProcessController do
  let(:container) { My::Container }

  it 'executes all operations' do
    subject.operations.each do |op|
      expect(container[op]).to receive(:call).and_call_original
    end

    expect(subject.(input)).to be_success
  end
end
This fails because calling container[operation_name] from inside ProcessController and from inside the test yield different instances of the operations. I can verify it by comparing the object ids. Other than that, I know the code is working correctly and all operations are being called.
The container is configured to auto register these operations and has been finalized before the test begins to run.
How do I make resolving the same key return the same item?
TL;DR - https://dry-rb.org/gems/dry-system/test-mode/
Hi, to get the behaviour you're asking for, you'd need to use the memoize option when registering items with your container.
Note that Dry::Web::Container inherits Dry::System::Container, which includes Dry::Container::Mixin, so while the following example is using dry-container, it's still applicable:
require 'bundler/inline'

gemfile(true) do
  source 'https://rubygems.org'
  gem 'dry-container'
end

class MyItem; end

class MyContainer
  extend Dry::Container::Mixin

  register(:item) { MyItem.new }
  register(:memoized_item, memoize: true) { MyItem.new }
end
MyContainer[:item].object_id
# => 47171345299860
MyContainer[:item].object_id
# => 47171345290240
MyContainer[:memoized_item].object_id
# => 47171345277260
MyContainer[:memoized_item].object_id
# => 47171345277260
However, to do this from dry-web, you'd need to either memoize all objects auto-registered under the same path, or add the # auto_register: false magic comment to the top of the files that define the dependencies and boot them manually.
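The magic-comment route might look something like this (a sketch; the file paths, class names, and registration key are assumptions about your app layout):

# lib/operations/operation1.rb

# auto_register: false

class Operation1
  def call(input)
    # ...
  end
end

# In a boot/setup file, register it manually with memoization:
My::Container.register('operation1', memoize: true) { Operation1.new }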
Memoizing could cause concurrency issues depending on which app server you're using and whether or not your objects are mutated during the request lifecycle, hence the design of dry-container to not memoize by default.
Another, arguably better option, is to use stubs:
# Extending above code
require 'dry/container/stub'
MyContainer.enable_stubs!
MyContainer.stub(:item, 'Some string')
MyContainer[:item]
# => "Some string"
Side note:
dry-system provides an injector, so you don't need to call the container manually in your objects; your process controller would become something like:

class ProcessController
  include My::Importer['operation1', 'operation2']

  def call(input)
    [operation1, operation2].each do |operation|
      operation.(input)
    end
  end
end

How can I export existing AWS ELB policies? undefined method 'reduce'

We want to export our ELB configurations for re-use. I can get the ELB configs with:
all_elbs = Fog::AWS::ELB.load_balancers.all()
But this returns a failure:
all_policies = Fog::AWS::ELB.policies.all()
#=> /Library/Ruby/Gems/2.0.0/gems/fog-aws-0.0.6/lib/fog/aws/models/elb/policies.rb:20:
#=> in `munged_data': undefined method `reduce' for nil:NilClass (NoMethodError)
Ultimately, I want to be able to recreate a ELB based on an existing ELB.
That error message means that on line 20 of policies.rb there is code like foo.reduce and foo happens to be nil.
If we look at the source code of the gem, we see:
def munged_data
  data.reduce([]){ |m,e| # line 20
So, the problem is that data is nil when the munged_data method is called. We see on line 8 of the same file that data is defined via a simple attr_accessor call. I cannot tell for sure where it should have been set. (There are 227 instances of @data = or data = in the gem.) This seems like a bug in the fog-aws gem, unless you were supposed to call some other method before calling .all on policies.
Tracing further, we see that policies is defined in load_balancer.rb on line 154 as:
def policies
  Fog::AWS::ELB::Policies.new({
    :data => policy_descriptions,
    :service => service,
    :load_balancer => self
  })
end
Assuming that the data passed to the method is used directly as the @data instance variable, then the problem is that policy_descriptions returned nil.
The implementation of policy_descriptions is:
def policy_descriptions
  requires :id
  @policy_descriptions ||= service.describe_load_balancer_policies(id).body["DescribeLoadBalancerPoliciesResult"]["PolicyDescriptions"]
end
If service.describe_load_balancer_policies(id).body["DescribeLoadBalancerPoliciesResult"] returned nil (or any object that did not have a [] method) this method would have thrown an error. So, my deduction is that this returned something like a hash, but that hash has no "PolicyDescriptions" key.
From there...I don't know.
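One possible workaround, based on the load_balancer.rb source quoted above (a sketch, untested): fetch the policies per load-balancer instance rather than from the class-level collection, since the model's policies method supplies :data itself.

# Construct the service object with your credentials as usual in Fog.
elb = Fog::AWS::ELB.new(
  aws_access_key_id: '...',       # placeholder credentials
  aws_secret_access_key: '...'
)

# Collect policies from each load balancer's own collection.
all_policies = elb.load_balancers.flat_map do |lb|
  lb.policies.all.to_a
end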

Working around the need for partial mocks

From time to time I run into the situation that I want to use partial mocks of class methods in my tests. Currently I'm working with minitest, which does not support this (probably because it's not a good idea in the first place...).
An example:
class ImportRunner
  def self.run *ids
    ids.each { |id| ItemImporter.new(id).import }
  end
end

class ItemImporter
  def initialize id
    @id = id
  end

  def import
    do_this
    do_that
  end

  private

  def do_this
    # do something with fetched_data
  end

  def do_that
    # do something with fetched_data
  end

  def fetched_data
    @fetched_data ||= DataFetcher.get @id
  end
end
I want to test the ImportRunner.run method in isolation (mainly because ItemImporter#import is slow/expensive). In rspec I would have written a test like this:
it 'should do an import for each id' do
  first_importer = mock
  second_importer = mock

  ItemImporter.should_receive(:new).with(123).and_return(first_importer)
  first_importer.should_receive(:import).once
  ItemImporter.should_receive(:new).with(456).and_return(second_importer)
  second_importer.should_receive(:import).once

  ImportRunner.run 123, 456
end
First part of the question: Is it possible to do something similar in minitest?
Second part of the question: Is object collaboration in the form
collaborator = SomeCollaborator.new a_param
collaborator.do_work
bad design? If so, how would you change it?
What you are asking for is almost possible in straight Minitest. Minitest::Mock doesn't support partial mocking, so we approximate it by stubbing ItemImporter's new method with a lambda that delegates to a mock, which in turn returns more mocks. (Mocks within a mock: Mockception.)
def test_imports_for_each_id
  # Set up mock objects
  item_importer = MiniTest::Mock.new
  first_importer = MiniTest::Mock.new
  second_importer = MiniTest::Mock.new

  # Set up expectations of calls
  item_importer.expect :new, first_importer, [123]
  item_importer.expect :new, second_importer, [456]
  first_importer.expect :import, nil
  second_importer.expect :import, nil

  # Run the import
  ItemImporter.stub :new, lambda { |id| item_importer.new id } do
    ImportRunner.run 123, 456
  end

  # Verify expectations were met
  # item_importer.verify
  first_importer.verify
  second_importer.verify
end
This will work except for the call to item_importer.verify. Because that mock returns other mocks, the process of verifying that all the expectations were met will call additional methods on the first_importer and second_importer mocks, causing them to raise. So while you can get close, you can't replicate your rspec code exactly. To do that you would have to use a different mocking library that supports partial mocks, like RR.
If that code looks ugly to you, don't worry: it is. But that isn't the fault of Minitest, it's the fault of conflicting responsibilities within the test. Like you said, this probably isn't a good idea. I don't know what this test is supposed to prove. It looks to be specifying the implementation of your code, but it isn't really communicating the expected behavior. This is what some folks call "over-mocked".
Mocks and stubs are important and useful tools in the hands of a developer, but it’s easy to get carried away. Besides lending a false sense of security, over-mocked tests can also be brittle and noisy. - Rails AntiPatterns
I would rethink what you are trying to accomplish with this test. Minitest is helping you out here by making the design choice that ugly things should look ugly.
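If you still want a simple test without partial mocks, one option (my own sketch, not from the original question; it requires Ruby 2.0+ keyword arguments) is to inject the importer as a collaborator, so the test can pass in a hand-rolled fake:

class ImportRunner
  def self.run(*ids, importer: ItemImporter)
    ids.each { |id| importer.new(id).import }
  end
end

# A fake that records what it was asked to import -- no mocking library needed.
class FakeImporter
  def self.imported_ids
    @imported_ids ||= []
  end

  def initialize(id)
    @id = id
  end

  def import
    self.class.imported_ids << @id
  end
end

def test_runs_an_import_for_each_id
  ImportRunner.run 123, 456, importer: FakeImporter
  assert_equal [123, 456], FakeImporter.imported_ids
end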
You could also use the Mocha gem. I use MiniTest in most of my tests as well, with Mocha to mock and stub methods.
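With Mocha loaded, the original RSpec-style test translates almost directly (a sketch; depending on your Mocha version the require may be 'mocha/minitest' or 'mocha/setup'):

require 'minitest/autorun'
require 'mocha/minitest'

class ImportRunnerTest < Minitest::Test
  def test_runs_an_import_for_each_id
    first_importer = mock('first importer')
    second_importer = mock('second importer')
    first_importer.expects(:import).once
    second_importer.expects(:import).once

    # Partial mock of the class method, which Mocha supports directly.
    ItemImporter.expects(:new).with(123).returns(first_importer)
    ItemImporter.expects(:new).with(456).returns(second_importer)

    ImportRunner.run 123, 456
  end
end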

Testing device dependent code in Ruby

I've used both rspec and minitest for Rails applications and libraries that had straightforward algorithms. By that I mean, if I have
def add(a, b)
  a + b
end
that's simple to test. I expect add(2, 2) to equal 4.
But say I have methods dependent on a certain machine.
def device_names
  # some code to return an array of device names
end
I would get, e.g., ['CPU', 'GPU', 'DSP'], but this is completely dependent on my machine; no one else would be able to pass the test if I simply asserted that exact value.
How do you handle cross-environment testing as in the second example? How do you make it generic enough to cover that code for testing?
The code in the device_names method probably calls methods in other Ruby classes, and the results of those calls are then manipulated by your code. You can stub those calls and test your method in isolation.
Here's a (silly) example of how to create a stub on any instance of the String class:
String.any_instance.stub(:downcase).and_return("TEST")
Now any call to downcase on any instance of String will return "TEST". You can play with that in irb:
irb(main):001:0> require 'rspec/mocks'
=> true
irb(main):002:0> RSpec::Mocks::setup(self)
=> #<RSpec::Mocks::Space:0x10a7be8>
irb(main):003:0> String.any_instance.stub(:downcase).and_return("TEST")
=> #<RSpec::Mocks::AnyInstance::StubChain:0x10a0b68 @invocation_order={:stub=>[nil], :with=>[:stub], :and_return=>[:with, :stub], :and_raise=>[:with, :stub], :and_yield=>[:with, :stub]}, @messages=[[[:stub, :downcase], nil], [[:and_return, "TEST"], nil]]>
irb(main):004:0> "HAHA".downcase
=> "TEST"
Of course, you can also stub methods in single instances, for specific parameters, and so on. Read more on stubbing methods.
Now that you know what will be returned by the platform specific code, you can test your method and always get expected results.
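For example, if device_names built its list from some lower-level call, the test might look like this (a sketch; Machine and raw_device_list are hypothetical names for illustration):

describe '#device_names' do
  it 'returns the names parsed from the raw device data' do
    machine = Machine.new
    # Stub the platform-specific call so the test runs the same everywhere.
    machine.stub(:raw_device_list).and_return("CPU\nGPU\nDSP")
    machine.device_names.should == ['CPU', 'GPU', 'DSP']
  end
end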

More natural way of Proc calling in Ruby 1.9

As we know, there are several ways of calling a Proc in Ruby 1.9:
f = ->(n) { [:hello, n] }

p f[:ruby]      # => [:hello, :ruby]
p f.call(:ruby) # => [:hello, :ruby]
p f.(:ruby)     # => [:hello, :ruby]
p f === :ruby   # => [:hello, :ruby]
I am curious: which is the more 'natural' way of calling a Proc? By 'natural' I probably mean the more Computer-Science-like way.
The second option is by far the most used.
p f.call(:ruby) # => [:hello, :ruby]
It makes it more similar to a standard method call. Also, some libraries rely on duck typing when validating arguments, checking for the availability of a #call method. In this case, using #call ensures you can provide a lambda or any other object (including a Class) that responds to #call.
Rack middlewares are a great example of this convention. The basic middleware can be a lambda, or you can supply more complex logic by using classes.
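For instance, because Rack only requires an object that responds to #call(env), a complete Rack application can be a lambda (a minimal sketch of a config.ru):

# config.ru -- run `rackup` to serve it
run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['Hello']] }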
I always use option 3. Considering the syntactic ambiguities of being able to call methods without parentheses, this is the closest you can get to actual method call syntax.
I saw the first way used in the Rack source code. It confused me for a long time. This is taken from lib/rack/builder.rb (version: 1.6.0.alpha):
module Rack
  class Builder
    ...

    def to_app
      app = @map ? generate_map(@run, @map) : @run
      fail "missing run or map statement" unless app
      # This is the first way of calling a proc:
      # @use is an array of procs (Rack middleware)
      @use.reverse.inject(app) { |a,e| e[a] }
    end

    ...
  end
end