Dry::Web::Container yielding different objects with multiple calls to resolve - ruby

I'm trying to write a test to assert that all defined operations are called on a successful run. I have the operations for a given process defined in a list and resolve them from a container, like so:
class ProcessController
  def call(input)
    operations.each { |o| container[o].(input) }
  end

  def operations
    ['operation1', 'operation2']
  end

  def container
    My::Container # This is a Dry::Web::Container
  end
end
Then I test it as follows:
RSpec.describe ProcessController do
  let(:container) { My::Container }

  it 'executes all operations' do
    subject.operations.each do |op|
      expect(container[op]).to receive(:call).and_call_original
    end
    expect(subject.(input)).to be_success
  end
end
This fails because calling container[operation_name] from inside ProcessController and from inside the test yields different instances of the operations. I can verify this by comparing the object ids. Other than that, I know the code is working correctly and all operations are being called.
The container is configured to auto register these operations and has been finalized before the test begins to run.
How do I make resolving the same key return the same item?

TL;DR - https://dry-rb.org/gems/dry-system/test-mode/
Hi, to get the behaviour you're asking for, you'd need to use the memoize option when registering items with your container.
Note that Dry::Web::Container inherits Dry::System::Container, which includes Dry::Container::Mixin, so while the following example is using dry-container, it's still applicable:
require 'bundler/inline'

gemfile(true) do
  source 'https://rubygems.org'
  gem 'dry-container'
end

class MyItem; end

class MyContainer
  extend Dry::Container::Mixin

  register(:item) { MyItem.new }
  register(:memoized_item, memoize: true) { MyItem.new }
end
MyContainer[:item].object_id
# => 47171345299860
MyContainer[:item].object_id
# => 47171345290240
MyContainer[:memoized_item].object_id
# => 47171345277260
MyContainer[:memoized_item].object_id
# => 47171345277260
However, to do this from dry-web, you'd need to either memoize all objects auto-registered under the same path, or add the # auto_register: false magic comment to the top of the files that define the dependencies and boot them manually.
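For the magic-comment route, a rough sketch (the file path and class name are made up for illustration; the manual registration would live wherever the container is configured):

# lib/operations/operation1.rb
# auto_register: false

class Operation1
  def call(input)
    # ...
  end
end

# container setup: register the excluded component yourself, memoized so
# every resolve returns the same instance
My::Container.register('operation1', memoize: true) { Operation1.new }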
Memoizing could cause concurrency issues depending on which app server you're using and whether or not your objects are mutated during the request lifecycle, hence the design of dry-container to not memoize by default.
Another, arguably better option, is to use stubs:
# Extending above code
require 'dry/container/stub'
MyContainer.enable_stubs!
MyContainer.stub(:item, 'Some string')
MyContainer[:item]
# => "Some string"
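Applied to the original spec, a rough sketch using stubs plus RSpec spies. This assumes the dry-system test mode from the TL;DR link (stubs may need to be enabled before the container is finalized), and input stands in for whatever fixture the real spec defines:

require 'dry/system/stubs'
My::Container.enable_stubs!

RSpec.describe ProcessController do
  it 'executes all operations' do
    spies = subject.operations.map do |op|
      spy(op).tap { |s| My::Container.stub(op, s) }
    end

    subject.(input)

    spies.each { |s| expect(s).to have_received(:call).with(input) }
  end
end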
Side note:
dry-system provides an injector so that you don't need to call the container manually in your objects, so your process controller would become something like:
class ProcessController
  include My::Importer['operation1', 'operation2']

  def call(input)
    [operation1, operation2].each do |operation|
      operation.(input)
    end
  end
end
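With the operations injected, the spec can build the controller with plain doubles and skip the container entirely. A sketch, assuming dry-auto_inject's default keyword-argument strategy, with input again standing in for whatever fixture the real spec defines:

RSpec.describe ProcessController do
  it 'executes all operations' do
    operation1 = spy('operation1')
    operation2 = spy('operation2')
    controller = ProcessController.new(operation1: operation1, operation2: operation2)

    controller.(input)

    expect(operation1).to have_received(:call).with(input)
    expect(operation2).to have_received(:call).with(input)
  end
end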

Related

How to stub class instantiated inside tested class in rspec

I have a problem stubbing an external API; the following is an example:
require 'rspec'
require 'sidekiq'
require 'google/apis/storage_v1'

module Google
  class Storage
    def upload(file)
      puts '#' * 90
      puts "File #{file} is uploaded to google cloud"
    end
  end
end

class UploadWorker
  include Sidekiq::Worker

  def perform
    Google::Storage.new.upload 'test.txt'
  end
end

RSpec.describe UploadWorker do
  it 'uploads to google cloud' do
    google_cloud_instance = double(Google::Storage, insert_object: nil)
    expect(google_cloud_instance).to receive(:upload)

    worker = UploadWorker.new
    worker.perform
  end
end
I'm trying to stub the Google::Storage class. This class is instantiated inside the object being tested. How can I verify the message expectation on this instance?
When I run the above example, I get the following output, and it seems logical: my double is not used by the tested object.
(Double Google::Storage).upload(*(any args))
expected: 1 time with any arguments
received: 0 times with any arguments
I'm new to RSpec and having a hard time with this; any help will be appreciated.
Thanks!
Reaching for DI is always a good idea (https://stackoverflow.com/a/51401376/299774), but there are sometimes reasons you can't do it, so here's another way to stub it without changing the "production" code.
1. expect_any_instance_of
it 'uploads to google cloud' do
  expect_any_instance_of(Google::Storage).to receive(:upload)

  worker = UploadWorker.new
  worker.perform
end
Use this if you just want to test that the method gets called on any instance of the class.
2. A bit more elaborate setup
In case you want more control or want to set up more expectations, you can do this:
it 'uploads to google cloud' do
  the_double = instance_double(Google::Storage)
  expect(Google::Storage).to receive(:new).and_return(the_double)
  # + optional `.with` in case you want to assert stuff passed to the constructor
  expect(the_double).to receive(:upload)

  worker = UploadWorker.new
  worker.perform
end
Again - Dependency Injection is clearer, and you should aim for it. This is presented as another possibility.
I would consider reaching for dependency injection, such as:
class UploadWorker
  def initialize(dependencies = {})
    @storage = dependencies.fetch(:storage) { Google::Storage }
  end

  def perform
    @storage.new.upload 'test.txt'
  end
end
Then in the spec you can inject a double:
storage = double
expect(storage).to receive(...) # expectation
worker = UploadWorker.new(storage: storage)
worker.perform
If using the initializer is not an option, you could use a getter/setter method to inject the dependency:
def storage=(new_storage)
  @storage = new_storage
end

def storage
  @storage ||= Google::Storage
end
and in the specs:
storage = double
worker.storage = storage
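Putting it together, the spec for the setter variant could look roughly like this (perform calls storage.new.upload, so the double has to respond to new):

it 'uploads to google cloud' do
  storage_class = double('storage class')
  storage_instance = double('storage instance')
  expect(storage_class).to receive(:new).and_return(storage_instance)
  expect(storage_instance).to receive(:upload).with('test.txt')

  worker = UploadWorker.new
  worker.storage = storage_class
  worker.perform
end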

Calling a Volt Framework Task method from another Task

I have a Volt Framework Task that checks and stores information on a directory, e.g.
class DirectoryHelperTask < Volt::Task
  def list_contents
    contents = []
    Dir.glob("/path/to/files").each do |f|
      contents << f
    end
    return contents
  end
end
I would like to call this from a different task, e.g.
class DirectoryRearrangerTask < Volt::Task
  dir_contents = DirectoryHelperTask.list_contents()
end
The code above (DirectoryRearranger) throws an error, as does a promise call
DirectoryHelperTask.list_contents().then do |r|
  dir_contents = r
end.fail do |e|
  puts "Error: #{e}"
end
I could not find a way to call a task from another task in the Volt Framework documentation.
Thanks a lot!
From what I gather, tasks are meant to be run on the server side and then called on the client side, hence the use of the promise object. The promise object comes from OpalRb, so trying to call it from MRI won't work. If you have a "task" that will only be used on the server side, then it doesn't really fit with Volt's concept of a task.
Your first approach to the problem actually does work, except that DirectoryRearrangerTask can't inherit from Volt::Task.
directory_helper_task.rb
require_relative "directory_rearranger_task"

class DirectoryHelperTask < Volt::Task
  def list_contents
    contents = []
    Dir.glob("*").each do |file|
      contents << file
    end
    DirectoryRearrangerTask.rearrange(contents)
    contents
  end
end
directory_rearranger_task.rb
class DirectoryRearrangerTask
  def self.rearrange(contents)
    contents.reverse!
  end
end
Here is a GitHub repo with my solution to this problem.
You can call tasks from the client or the server, but keep in mind that you call instance methods on the class (so they get treated like singletons), and all methods return a Promise. I think your issue here is that you're doing dir_contents = DirectoryHelperTask.list_contents() inside the class body. While you could do this in Ruby, I'm not sure it's what you want.
Also, where you do dir_contents = r, unless dir_contents was defined before the block, it's only going to be defined inside the block, as in the sketch below.
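For example (a sketch based on the code in the question; defining the variable first makes the assignment inside the then block visible afterwards):

dir_contents = nil

DirectoryHelperTask.list_contents.then do |r|
  dir_contents = r
end.fail do |e|
  puts "Error: #{e}"
end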

How do I make a class conditionally return one of two other classes?

I have a design problem.
I'm writing a REST client in Ruby. For reasons beyond my control, it has to extend another gem that uses my network's ZooKeeper instance to do service lookup. My client takes a user-provided tier, and based on that value, queries the ZooKeeper registry for the appropriate service URL.
The problem is that I also need to be able to run my client against a locally running version of the service under test. When the service is running locally, zookeeper is obviously not involved, so I simply need to be able to make GET requests against the localhost resource url.
When a user instantiates my gem, they call something like:
client = MyRestClient.new(tier: :dev)
or in local mode
client = MyRestClient.new(tier: :local)
I would like to avoid conditionally hacking the constructor in MyRestClient (and all of the GET methods in MyRestClient) to alter requests based on :local vs. :requests_via_the_zk_gem.
I'm looking for an elegant and clean way to handle this situation in Ruby.
One thought was to create two client classes, one for :local and the other for :not_local. But then I don't know how to provide a single gem interface that will return the correct client object.
If MyClient has a constructor that looks something like this:
class MyClient
  attr_reader :the_klass

  def initialize(opts = {})
    if opts[:tier] == :local
      @the_klass = LocalClass.new
    else
      @the_klass = ZkClass.new
    end
    @the_klass
  end
end
then I end up with something like:
test = MyClient.new(tier: :local)
=> #<MyClient:0x007fe4d881ed58 @the_klass=#<LocalClass:0x007fe4d883afd0>>
test.class
=> MyClient
test.the_klass.class
=> LocalClass
those who then use my gem would have to make calls like:
@client = MyClient.new(tier: :local)
@client.the_klass.get
which doesn't seem right
I could use a module to return the appropriate class, but then I'm faced with the question of how to provide a single public interface for my gem. I can't instantiate a module with .new.
My sense is that this is a common OO problem and I just haven't run into it yet. It's also possible the answer is staring me in the face and I just haven't found it yet.
Most grateful for any help.
A common pattern is to pass the service into the client, something like:
class MyClient
  attr_reader :service

  def initialize(service)
    @service = service
  end

  def some_method
    service.some_method
  end
end
And create it with:
client = MyRestClient.new(LocalClass.new)
# or
client = MyRestClient.new(ZkClass.new)
You could move these two into class methods:
class MyClient
  def self.local
    new(LocalClass.new)
  end

  def self.dev
    new(ZkClass.new)
  end
end
And instead call:
client = MyRestClient.local
# or
client = MyRestClient.dev
You can use method_missing to delegate from your client to the actual class.
def method_missing(m, *args, &block)
  @the_class.send(m, *args, &block)
end
So whenever a method gets called on your class that doesn't exist (like get in your example), it will be called on @the_class instead.
It's good style to also define the corresponding respond_to_missing? btw:
def respond_to_missing?(m, include_private = false)
  @the_class.respond_to?(m)
end
The use case you are describing looks like a classic factory method use case.
The common solution for this is to create a method (not new) which returns the relevant class instance:
class MyClient
  def self.create_client(opts = {})
    if opts[:tier] == :local
      LocalClass.new
    else
      ZkClass.new
    end
  end
end
And now your usage is:
test = MyClient.create_client(tier: :local)
=> #<LocalClass:0x007fe4d881ed58>
test.class
=> LocalClass

Share state between test classes and tested classes

I'm experimenting with RSpec.
Since I don't like mocks, I would like to emulate a console print using a StringIO object.
So, I want to test that the Logger class writes Welcome to the console. To do so, my idea was to override the puts method used inside Logger from within the spec file, so that nothing actually changes when using Logger elsewhere.
Here's some code:
describe Logger do
  Logger.class_eval do
    def puts(*args)
      ???.puts(*args)
    end
  end

  it 'says "Welcome"' do
  end
end
Doing it this way, I need to share some StringIO object (which would go where the question marks are now) between the Logger class and the test class.
I found out that when I'm inside RSpec tests, self is an instance of Class. What I thought initially was to do something like this:
Class.class_eval do
  attr_accessor :my_io
  @my_io = StringIO.new
end
and then replace ??? with Class.my_io.
When I do this, a thousand bells ring in my head telling me it's not a clean way to do this.
What can I do?
PS: I still don't get this:
a = StringIO.new
a.print('a')
a.string # => "a"
a.read # => "" ??? WHY???
a.readlines # => [] ???
Still: StringIO.new('hello').readlines # => ["hello"]
To respond to your last concern, StringIO simulates file behavior. When you write/print to it, the input cursor is positioned after the last thing you wrote. If you write something and want to read it back, you need to reposition yourself (e.g. with rewind, seek, etc.), per http://ruby-doc.org/stdlib-1.9.3/libdoc/stringio/rdoc/StringIO.html
In contrast, StringIO.new('hello') establishes hello as the initial contents of the string while leaving the position at 0. In any event, the string method just returns the contents, independent of position.
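For example:

require 'stringio'

a = StringIO.new
a.print('a')
a.read    # => "" (the position is at the end of what was just written)
a.rewind  # move the position back to the start
a.read    # => "a"
a.string  # => "a" (string ignores the position entirely)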
It's not clear why you have an issue with the test double mechanism in RSpec.
That said, your approach for sharing a method works, although:
The fact that self is an anonymous class within RSpec's describe is not really relevant
Instead of using an instance method of Class, you can define your own class and associated class method and "share" that instead, as in the following:
class Foo
  def self.bar(arg)
    puts(arg)
  end
end

describe "Sharing stringio" do
  Foo.class_eval do
    def self.puts(*args)
      MyStringIO.my_io.print(*args)
    end
  end

  class MyStringIO
    @my_io = StringIO.new
    def self.my_io ; @my_io ; end
  end

  it 'says "Welcome"' do
    Foo.bar("Welcome")
    expect(MyStringIO.my_io.string).to eql "Welcome"
  end
end
Logger already allows the output device to be specified on construction, so you can easily pass in your StringIO directly without having to redefine anything:
require 'logger'

describe Logger do
  let(:my_io) { StringIO.new }
  let(:log) { Logger.new(my_io) }

  it 'says welcome' do
    log.error('Welcome')
    expect(my_io.string).to include('ERROR -- : Welcome')
  end
end
As other posters have mentioned, it's unclear whether you're intending to test Logger or some code that uses it. In the case of the latter, consider injecting the logger into the client code.
The answers to this SO question also show several ways to share a common Logger between clients.
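A minimal sketch of that injection idea, using a made-up Greeter class purely for illustration:

class Greeter
  def initialize(logger)
    @logger = logger
  end

  def greet
    @logger.info('Welcome')
  end
end

# in the spec:
io = StringIO.new
Greeter.new(Logger.new(io)).greet
expect(io.string).to include('Welcome')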

What is the best way to define test specs in JSON using a Ruby harness?

I've got an interesting conundrum. I'm in the midst of developing a library to parse PSDs in Ruby. Also, a buddy is simultaneously working on a library to parse PSDs in JavaScript. We would like to share the same unit tests via a git submodule.
We've decided to use a simple JSON DSL to define each test. A single test might look like:
{
  "_name": "Layer should render out",
  "_file": "test/fixtures/layer_out.psd",
  "_exports_to": "test/controls/layer_out_control.png"
}
So, now it's up to us to build the appropriate test harnesses to translate the JSON into the appropriate native unit tests. I've been using MiniTest to get myself up to speed, but I'm running into a few walls.
Here's what I've got so far. The test harness is named TargetPractice for the time being:
# run_target_practice.rb
require 'target_practice'

TargetPractice.new(:test) do |test|
  test.pattern = "test/**/*.json"
end
and
# psd_test.rb
class PSDTest < MiniTest::Unit::TestCase
  attr_accessor :data

  def tests_against_data
    # do some assertions
  end
end
and
# target_practice.rb
class TargetPractice
  attr_accessor :libs, :pattern

  def initialize(sym)
    @libs = []
    @pattern = ""
    yield self
    run_tests
  end

  def run_tests
    FileList[@pattern].to_a.each do |file|
      test_data = JSON.parse(File.open(file).read)
      test = PSDTest.new(test_data["_name"]) do |t|
        t.data = test_data
      end
    end
  end
end
Unfortunately, I'm having trouble getting a yield in the initialize to stick in my PSDTest class. Also, it appears that a test will run immediately on initialization.
I would like to dynamically create a few MiniTest::Unit::TestCase objects, set their appropriate data properties and then run the tests. Any pointers are appreciated!
I think you are overcomplicating things a bit here. What you need is a parameterized test, which is pretty trivial to implement using minitest/spec:
require 'minitest/autorun'
require 'json'
require 'rake' # provides FileList

describe "PSD converter" do
  def self.tests(pattern = 'test/**/*.json')
    FileList[pattern].map { |file| JSON.parse(File.read(file)) }
  end

  tests.each do |test|
    it "satisfies test: " + test["_name"] do
      # some assertions using test["_file"] and test["_exports_to"]
    end
  end
end
