How to write unit tests using the `memory_profiler` gem? - ruby

For example, we have this class implementation:
class Memory
  def call
    2 * 2
  end
end
We can get a report by using memory_profiler:
require 'memory_profiler'
MemoryProfiler.report{ Memory.new.call }.pretty_print
How can I implement a unit test for this that fails when changes to Memory#call increase memory usage or introduce a memory leak?
For example, suppose we change Memory#call in this way:
- 2 * 2
+ loop { 2 * 2 }

I don't think you should put this in unit tests, because testing is a bit different environment: a test should not be aware of the time cost or memory cost of your code, and it should not verify such metrics; that is a job for other tools.
For your case I would advise writing a small module where you can assert that the allocated memory is acceptable. For example, you could do something like this:
module MyMemsizeNotifier
  extend self
  ALLOWED_MEMSIZE = 0..4640
  # Or you can receive a block, lambdas, procs --
  # I will leave that to your implementation.
  def call(klass)
    report = MemoryProfiler.report do
      klass.new.call
    end
    exceeds_limit?(report.total_allocated_memsize)
    report
  end
  def exceeds_limit?(total_usage)
    return if ALLOWED_MEMSIZE.include?(total_usage)
    # Notify in Rollbar, write a log to stdout or to a file, or notify in any other way.
  end
end
## Usage
class MyController
  def index
    ...
    MyMemsizeNotifier.call(Memory)
    ...
  end
end
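If you do still want a spec that goes red when Memory#call starts allocating more, one option is to assert directly on the report inside RSpec. A minimal sketch (4640 is just the example threshold from ALLOWED_MEMSIZE above and would need tuning against your own baseline report):
require 'memory_profiler'
RSpec.describe Memory do
  # Hypothetical limit taken from the example above; adjust to your baseline.
  let(:max_allowed_memsize) { 4640 }
  it 'does not allocate more than the allowed number of bytes' do
    report = MemoryProfiler.report { described_class.new.call }
    expect(report.total_allocated_memsize).to be <= max_allowed_memsize
  end
end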

Related

Dry::Web::Container yielding different objects with multiple calls to resolve

I'm trying to write a test to assert that all defined operations are called on a successful run. I have the operations for a given process defined in a list and resolve them from a container, like so:
class ProcessController
  def call(input)
    operations.each { |o| container[o].(input) }
  end
  def operations
    ['operation1', 'operation2']
  end
  def container
    My::Container # This is a Dry::Web::Container
  end
end
Then I test it as follows:
RSpec.describe ProcessController do
  let(:container) { My::Container }
  it 'executes all operations' do
    subject.operations.each do |op|
      expect(container[op]).to receive(:call).and_call_original
    end
    expect(subject.(input)).to be_success
  end
end
This fails because calling container[operation_name] from inside ProcessController and from inside the test yields different instances of the operations. I can verify this by comparing the object ids. Other than that, I know the code is working correctly and all operations are being called.
The container is configured to auto register these operations and has been finalized before the test begins to run.
How do I make resolving the same key return the same item?
TL;DR - https://dry-rb.org/gems/dry-system/test-mode/
Hi, to get the behaviour you're asking for, you'd need to use the memoize option when registering items with your container.
Note that Dry::Web::Container inherits Dry::System::Container, which includes Dry::Container::Mixin, so while the following example is using dry-container, it's still applicable:
require 'bundler/inline'
gemfile(true) do
  source 'https://rubygems.org'
  gem 'dry-container'
end
class MyItem; end
class MyContainer
  extend Dry::Container::Mixin
  register(:item) { MyItem.new }
  register(:memoized_item, memoize: true) { MyItem.new }
end
MyContainer[:item].object_id
# => 47171345299860
MyContainer[:item].object_id
# => 47171345290240
MyContainer[:memoized_item].object_id
# => 47171345277260
MyContainer[:memoized_item].object_id
# => 47171345277260
However, to do this from dry-web, you'd need to either memoize all objects auto-registered under the same path, or add the # auto_register: false magic comment to the top of the files that define the dependencies and boot them manually.
Memoizing could cause concurrency issues depending on which app server you're using and whether or not your objects are mutated during the request lifecycle, hence the design of dry-container to not memoize by default.
Another, arguably better option, is to use stubs:
# Extending above code
require 'dry/container/stub'
MyContainer.enable_stubs!
MyContainer.stub(:item, 'Some string')
MyContainer[:item]
# => "Some string"
Side note:
dry-system provides an injector so that you don't need to call the container manually in your objects, so your process controller would become something like:
class ProcessController
  include My::Importer['operation1', 'operation2']
  def call(input)
    [operation1, operation2].each do |operation|
      operation.(input)
    end
  end
end
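Tying the stubbing approach back to the original spec, a minimal sketch might look like the following. It assumes My::Container has the operation keys registered, invents an `input` value purely for illustration, and only checks that each stubbed operation receives the call (dropping the be_success assertion, since the stubs don't return a result object):
require 'dry/container/stub'
RSpec.describe ProcessController do
  let(:container) { My::Container }
  let(:input) { {} } # hypothetical input, just for the example
  before { container.enable_stubs! }
  after { container.unstub }
  it 'executes all operations' do
    subject.operations.each do |op|
      operation = double(op, call: nil)
      expect(operation).to receive(:call).with(input)
      # Both the spec and ProcessController now resolve the same double.
      container.stub(op, operation)
    end
    subject.(input)
  end
end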

Writing a simple circuit breaker with thread support

I'm looking to extend the simple circuit breaker written in Ruby to work across multiple threads,
and thus far I've managed to accomplish something like this:
## The following is a simple circuit breaker implementation with thread support.
## https://github.com/soundcloud/simple_circuit_breaker/blob/master/lib/simple_circuit_breaker.rb
class CircuitBreaker
  class Error < StandardError
  end
  def initialize(retry_timeout=10, threshold=30)
    @mutex = Mutex.new
    @retry_timeout = retry_timeout
    @threshold = threshold
    reset!
  end
  def handle(&block)
    if tripped?
      raise CircuitBreaker::Error.new('circuit opened')
    else
      execute(&block)
    end
  end
  def execute
    result = yield
    reset!
    result
  rescue Exception => exception
    fail!
    raise exception
  end
  def tripped?
    opened? && !timeout_exceeded?
  end
  def fail!
    @mutex.synchronize do
      @failures += 1
      if @failures >= @threshold
        @open_time = Time.now
        @circuit = :opened
      end
    end
  end
  def opened?
    @circuit == :opened
  end
  def timeout_exceeded?
    @open_time + @retry_timeout < Time.now
  end
  def reset!
    @mutex.synchronize do
      @circuit = :closed
      @failures = 0
    end
  end
end
http_circuit_breaker = CircuitBreaker.new
http_circuit_breaker.handle { make_http_request }
but I'm not sure about a few things.
Multithreaded code has always puzzled me, so I'm not entirely confident that this approach is correct.
Read operations are not under the mutex:
The mutex is applied for the write operations, but the reads are mutex-free (although I think I have ensured that no data race ever happens between two threads). Still, there can be a scenario where thread 1 holds the mutex while changing the @circuit or @failures variable, and another thread reads a stale value.
So I can't quite think through whether achieving full consistency (by also taking the lock for reads) is worth the trade-off here: consistency might be 100%, but the code would run a bit slower because of the extra locking.
It's unclear what you are asking, so I guess your post will be closed.
Nevertheless, I think that the only thread-safe way to implement a circuit breaker would be to have the mutex around all data operations, which would result in a sequential flow, so it's basically useless.
Otherwise you will have race conditions like:
thread-a starts (the server does not respond immediately due to network issues)
thread-b starts (10 seconds later)
thread-b finishes, all good
thread-a aborts due to a timeout -> opens the circuit with stale data
A version that is mentioned in Martin Fowler's blog is a circuit breaker in combination with a thread pool: https://martinfowler.com/bliki/CircuitBreaker.html
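For illustration, here is a rough sketch of the same breaker with the reads also taken under the mutex, as the advice above suggests. It is only one way to apply that advice under the stated assumptions, not a vetted production implementation:
class ThreadSafeCircuitBreaker
  class Error < StandardError; end
  def initialize(retry_timeout = 10, threshold = 30)
    @mutex = Mutex.new
    @retry_timeout = retry_timeout
    @threshold = threshold
    @circuit = :closed
    @failures = 0
    @open_time = nil
  end
  def handle(&block)
    raise Error, 'circuit opened' if tripped?
    execute(&block)
  end
  private
  def execute
    result = yield
    reset!
    result
  rescue Exception => exception
    fail!
    raise exception
  end
  # Reads of the shared state go through the same mutex as the writes.
  def tripped?
    @mutex.synchronize do
      @circuit == :opened && @open_time + @retry_timeout >= Time.now
    end
  end
  def fail!
    @mutex.synchronize do
      @failures += 1
      if @failures >= @threshold
        @open_time = Time.now
        @circuit = :opened
      end
    end
  end
  def reset!
    @mutex.synchronize do
      @circuit = :closed
      @failures = 0
    end
  end
end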

How to stub class instantiated inside tested class in rspec

I have a problem stubbing an external API; the following is an example:
require 'rspec'
require 'sidekiq'
require 'google/apis/storage_v1'
module Google
  class Storage
    def upload file
      puts '#' * 90
      puts "File #{file} is uploaded to google cloud"
    end
  end
end
class UploadWorker
  include Sidekiq::Worker
  def perform
    Google::Storage.new.upload 'test.txt'
  end
end
RSpec.describe UploadWorker do
  it 'uploads to google cloud' do
    google_cloud_instance = double(Google::Storage, insert_object: nil)
    expect(google_cloud_instance).to receive(:upload)
    worker = UploadWorker.new
    worker.perform
  end
end
I'm trying to stub Google::Storage class. This class is instantiated inside the object being tested. How can I verify the message expectation on this instance?
When I run the above example, I get the following output, and it seems logical: my double is not used by the tested object.
(Double Google::Storage).upload(*(any args))
expected: 1 time with any arguments
received: 0 times with any arguments
I'm new to RSpec and having a hard time with this; any help will be appreciated.
Thanks!
Reaching for DI is always a good idea (https://stackoverflow.com/a/51401376/299774), but there are sometimes reasons you can't do it, so here's another way to stub it without changing the "production" code.
1. expect_any_instance_of
it 'uploads to google cloud' do
  expect_any_instance_of(Google::Storage).to receive(:upload)
  worker = UploadWorker.new
  worker.perform
end
This works in case you just want to test that the method is called on any instance of the class.
2. A slightly more elaborate setup
In case you want more control or want to set up more expectations, you can do this:
it 'uploads to google cloud' do
  the_double = instance_double(Google::Storage)
  expect(Google::Storage).to receive(:new).and_return(the_double)
  # + optional `.with` in case you wanna assert stuff passed to the constructor
  expect(the_double).to receive(:upload)
  worker = UploadWorker.new
  worker.perform
end
Again - Dependency Injection is clearer, and you should aim for it. This is presented as another possibility.
I would consider reaching for dependency injection, such as:
class UploadWorker
  def initialize(dependencies = {})
    @storage = dependencies.fetch(:storage) { Google::Storage }
  end
  def perform
    @storage.new.upload 'test.txt'
  end
end
Then in the spec you can inject a double:
storage = double
expect(storage).to receive(...) # expectation
worker = UploadWorker.new(storage: storage)
worker.perform
If using the initializer is not an option, then you could use a getter/setter method to inject the dependency:
def storage=(new_storage)
  @storage = new_storage
end
def storage
  @storage ||= Google::Storage
end
and in the specs:
storage = double
worker.storage = storage
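For completeness, a sketch of what the full spec could look like with the setter-based injection, using verifying doubles. It assumes the storage= / storage methods above have been added to UploadWorker:
RSpec.describe UploadWorker do
  it 'uploads to google cloud' do
    storage_instance = instance_double(Google::Storage, upload: nil)
    storage_class = class_double(Google::Storage, new: storage_instance)
    worker = UploadWorker.new
    worker.storage = storage_class
    # perform calls @storage.new.upload 'test.txt', so the doubles receive both messages.
    expect(storage_instance).to receive(:upload).with('test.txt')
    worker.perform
  end
end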

How to test a method that delegates to the instantiation of another class with rspec?

How would you go about testing this with rspec?
class SomeClass
  def map_url(size)
    GoogleMap.new(point: model.location.point, size: size).map_url
  end
end
The fact that your test seems "very coupled and brittle to mock" is a sign that the code itself is doing too many things at once.
To highlight the problem, look at this implementation of map_url, which is meaningless (returning "foo" for any size input) and yet passes your tests:
class SomeClass
  def map_url(size)
    GoogleMap.new.map_url
    GoogleMap.new(point: model.location.point, size: size)
    return "foo"
  end
end
Notice that:
A new map is being initiated with the correct arguments, but is not contributing to the return value.
map_url is being called on a newly-initiated map, but not the one initiated with the correct arguments.
The result of map_url is not being returned.
I'd argue that the problem is that the way you have structured your code makes it look simpler than it actually is. As a result, your tests are too simple and thus fall short of fully covering the method's behaviour.
This comment from David Chelimsky seems relevant here:
There is an old guideline in TDD that suggests that you should listen to
your tests because when they hurt there is usually a design problem.
Tests are clients of the code under test, and if the test hurts, then so
do all of the other clients in the codebase. Shortcuts like this quickly
become an excuse for poor designs. I want it to stay painful because it
should hurt to do this.
Following this advice, I'd suggest first splitting the code into two separate methods, to isolate concerns:
class SomeClass
  def new_map(size)
    GoogleMap.new(point: model.location.point, size: size)
  end
  def map_url(size)
    new_map(size).map_url
  end
end
Then you can test them separately:
describe SomeClass do
  let(:some_class) { SomeClass.new }
  let(:mock_map) { double('map') }
  describe "#new_map" do
    it "returns a GoogleMap with the correct point and size" do
      map = some_class.new_map('300x600')
      map.point.should == [1,2]
      map.size.should == '300x600'
    end
  end
  describe "#map_url" do
    before do
      some_class.should_receive(:new_map).with('300x600').and_return(mock_map)
    end
    it "initiates a new map of the right size and call map_url on it" do
      mock_map.should_receive(:map_url)
      some_class.map_url('300x600')
    end
    it "returns the url" do
      mock_map.stub(map_url: "http://www.example.com")
      some_class.map_url('300x600').should == "http://www.example.com"
    end
  end
end
The resulting test code is a bit longer and there are three specs rather than two, but I think it more clearly and cleanly separates the steps involved in your code, and covers the method's behaviour completely. Let me know if this makes sense.
So this is how I did it, it feels very coupled and brittle to mock it like this. Suggestions?
describe SomeClass do
  let(:some_class) { SomeClass.new }
  describe "#map_url" do
    it "should instantiate a GoogleMap with the correct args" do
      GoogleMap.should_receive(:new).with(point: [1,2], size: '300x600') { stub(map_url: nil) }
      some_class.map_url('300x600')
    end
    it "should call map_url on GoogleMap instance" do
      GoogleMap.any_instance.should_receive(:map_url)
      some_class.map_url('300x600')
    end
  end
end
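As an alternative to any_instance, the two expectations can also be collapsed into one spec with a verifying double, which keeps the coupling in a single place. A sketch in the newer expect syntax, reusing the some_class let from above and assuming GoogleMap instances respond to map_url:
describe "#map_url" do
  it "builds a GoogleMap for the point and size and returns its url" do
    map = instance_double(GoogleMap, map_url: "http://www.example.com")
    # Verify the constructor arguments and hand back the double in one step.
    expect(GoogleMap).to receive(:new).with(point: [1, 2], size: '300x600').and_return(map)
    expect(some_class.map_url('300x600')).to eq("http://www.example.com")
  end
end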

RAII in Ruby (Or, How to Manage Resources in Ruby)

I know it's by design that you can't control what happens when an object is destroyed. I am also aware of defining some class method as a finalizer.
However, what is the Ruby idiom for C++'s RAII (resources are initialized in the constructor, closed in the destructor)? How do people manage resources used inside objects even when errors or exceptions happen?
Using ensure works:
f = File.open("testfile")
begin
  # .. process
rescue
  # .. handle error
ensure
  f.close unless f.nil?
end
but users of the class have to remember to do the whole begin-rescue-ensure cha-cha every time the open method needs to be called.
So for example, I'll have the following class:
class SomeResource
  def initialize(connection_string)
    @resource_handle = ...some mojo here...
  end
  def do_something()
    begin
      @resource_handle.do_that()
      ...
    rescue
      ...
    ensure
    end
  end
  def close
    @resource_handle.close
  end
end
The resource_handle won't be closed if the exception is caused by some other class and the script exits.
Or is the problem more of I'm still doing this too C++-like?
So that users don't "have to remember to do the whole begin-rescue-ensure cha-cha", combine rescue/ensure with yield.
class SomeResource
  ...
  def SomeResource.use(*resource_args)
    # create resource
    resource = SomeResource.new(*resource_args) # pass args direct to constructor
    # export it
    yield resource
  rescue
    # known error processing
    ...
  ensure
    # close up when done even if unhandled exception thrown from block
    resource.close
  end
  ...
end
Client code can use it as follows:
SomeResource.use(connection_string) do | resource |
  resource.do_something
  ... # whatever else
end
# after this point resource has been .close()d
In fact this is how File.open operates - making the first answer confusing at best (well it was to my work colleagues).
File.open("testfile") do |f|
  # .. process - may include throwing exceptions
end
# f is guaranteed closed after this point even if exceptions are
# thrown during processing
How about yielding a resource to a block? Example:
File.open("testfile") do |f|
  begin
    # .. process
  rescue
    # .. handle error
  end
end
Or is the problem more of I'm still doing this too C++-like?
Yes it is, since in C++ resource deallocation happens implicitly for everything on the stack. Stack unwound = resources destroyed = destructors called, and from there things can be released. Since Ruby has no destructors, there is no "do this when everything else is done with it" place, because garbage collection can be delayed several cycles from where you are. You do have finalizers, but they are called "in limbo" (not everything is available to them) and they only get called on GC.
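For completeness, registering a finalizer with ObjectSpace.define_finalizer looks roughly like this; the finalizer proc must be built without capturing self, otherwise the object can never be collected. The open_handle helper is made up for the example:
class SomeResource
  def initialize(connection_string)
    @resource_handle = open_handle(connection_string) # hypothetical helper
    # Build the finalizer in a class method so the proc does not close over self.
    ObjectSpace.define_finalizer(self, self.class.finalizer(@resource_handle))
  end
  def self.finalizer(handle)
    proc { handle.close }
  end
end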
Therefore, if you are holding a handle to some resource that had better be released, you need to release it explicitly. Indeed, the correct idiom to handle this kind of situation is:
def with_shmoo
  handle = allocate_shmoo
  yield(handle)
ensure
  handle.close
end
See http://www.rubycentral.com/pickaxe/tut_exceptions.html
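A usage sketch of that idiom (with_shmoo and the handle's method are placeholder names from the snippet above):
with_shmoo do |handle|
  handle.do_something # hypothetical call on the yielded handle
  # even if this raises, the ensure above closes the handle
end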
In Ruby, you would use an ensure statement:
f = File.open("testfile")
begin
# .. process
rescue
# .. handle error
ensure
f.close unless f.nil?
end
This will be familiar to users of Python, Java, or C# in that it works like try / catch / finally.
