I have an around_action called set_current_user:
def set_current_user
  CurrentUser.set(current_user) do
    yield
  end
end
In the CurrentUser singleton
def set(user)
  self.user = user
  yield
ensure
  self.user = nil
end
I cannot figure out how to stub out the yield and not have the ensure part of the method called.
Ideally I would like to do something like this:
it 'sets the user' do
  subject.set(user)
  expect(subject.user).to eql user
end
The two errors I am getting:
No block is given
When I do pass a block, self.user = nil gets called
Thanks in advance
A few things to point out that might help:
ensure is reserved for blocks of code that you want to run no matter what happens, which is why your self.user will always end up nil. I think what you want is to set user to nil only when there's an exception. In that case, you should be using rescue instead:
def set(user)
  self.user = user
  yield
rescue => e
  self.user = nil
end
As for the unit test, you want to test only the .set method in the CurrentUser class. Assuming you have everything hooked up correctly in your around filter, here's a sample that might work for you:
describe CurrentUser do
  describe '.set' do
    let(:current_user) { create(:user) }

    subject do
      CurrentUser.set(current_user) {}
    end

    it 'sets the user' do
      subject
      expect(CurrentUser.user).to eq(current_user)
    end
  end
end
Hope this helps!
I am not sure what you intend to accomplish with this, as it appears you just want to make sure that user is set inside the block and unset afterwards. If that is the case, then the following should work fine:
class CurrentUser
  attr_accessor :user

  def set(user)
    self.user = user
    yield
  ensure
    self.user = nil
  end
end
describe '.set' do
  subject { CurrentUser.new }
  let(:user) { OpenStruct.new(id: 1) }

  it 'sets user for the block only' do
    subject.set(user) do
      expect(subject.user).to eq(user)
    end
    expect(subject.user).to be_nil
  end
end
This checks that inside the block (where yield is called) subject.user is equal to user, and that afterwards subject.user is nil.
Output:

.set
  sets user for the block only

Finished in 0.03504 seconds (files took 0.14009 seconds to load)
1 example, 0 failures
I failed to mention I need to clear out the user after every request.
This is what I came up with. It's a bit odd to put the expectation inside a lambda, but it does ensure the user is set before the request is processed and cleared afterwards:
describe '.set' do
  subject { described_class }
  let(:user) { OpenStruct.new(id: 1) }
  let(:user_expectation) { lambda { expect(subject.user).to eql user } }

  it 'sets the user prior to the block being processed' do
    subject.set(user) { user_expectation.call }
  end

  context 'after the block has been processed' do
    # This makes sure the user is always cleared after a request,
    # even if there is an error, so sidekiq will never have access to it.
    before do
      subject.set(user) { lambda {} }
    end

    it 'clears out the user' do
      expect(subject.user).to eql nil
    end
  end
end
require_relative 'config/environment'

HTTP_ERRORS = [
  RestClient::Exception
]

module API
  class Client
    def initialize
      @client = RawClient.new
    end

    def search(params = {})
      call { @client.search(params) }
    end

    def call
      raise 'No block specified' unless block_given?
      loop do # Keep retrying on error
        begin
          return yield
        rescue *HTTP_ERRORS => e
          puts "#{e.response&.request.url}"
          sleep 5
        end
      end
    end
  end

  class RawClient
    BASE_URL = 'https://www.google.com'

    def search(params = {})
      go "search/#{params.delete(:query)}", params
    end

    private

    def go(path, params = {})
      RestClient.get(BASE_URL + '/' + path, params: params)
    end
  end
end

API::Client.new.search(query: 'tulips', per_page: 10)
Will output
https://www.google.com/search/tulips?per_page=10 # First time
https://www.google.com/search/?per_page=10 # On retry
I thought I was being clever here: have a flexible and unified way to pass parameters (i.e. search(query: 'tulips', per_page: 10)) and let the client implementation figure out what goes into the URL itself (i.e. query) and what should be passed as GET parameters (i.e. per_page).
But the query param is lost from params after the first retry, because the hash is passed by reference and delete mutates it permanently. The second time yield is called, the block still references the same hash, which no longer contains the deleted :query key.
What would be an elegant way to solve this? Doing call { @client.search(params.dup) } seems a bit excessive.
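One way to keep the caller's hash intact, sketched by me rather than taken from an authoritative answer, is to have RawClient#search work on a copy before deleting anything, so every retry of the block still sees the original params:

class RawClient
  def search(params = {})
    params = params.dup # mutate only a copy; the caller's hash keeps :query for retries
    go "search/#{params.delete(:query)}", params
  end
end

That keeps the dup next to the code that does the mutating, instead of sprinkling params.dup at every call site.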
I have recently started using Rubocop to "standardise" my code, and it has helped me optimise a lot of my code as well as learn a lot of Ruby "tricks". I understand that I should use my own judgement and disable Cops where necessary, but I have found myself quite stuck with the below code:
def index
  if params[:filters].present?
    if params[:filters][:deleted].blank? || params[:filters][:deleted] == "false"
      # if owned is true, then we don't need to filter by admin
      params[:filters][:admin] = nil if params[:filters][:admin].present? && params[:filters][:owned] == "true"
      # if admin is true, then must not filter by owned if false
      params[:filters][:owned] = nil if params[:filters][:owned].present? && params[:filters][:admin] == "false"

      companies_list =
        case params[:filters][:admin]&.to_b
        when true
          current_user.admin_companies
        when false
          current_user.non_admin_companies
        end

      if params[:filters][:owned].present?
        companies_list ||= current_user.companies
        if params[:filters][:owned].to_b
          companies_list = companies_list.where(owner: current_user)
        else
          companies_list = companies_list.where.not(owner: current_user)
        end
      end
    else
      # Filters for deleted companies
      companies_list = {}
    end
  end
  companies_list ||= current_user.companies

  response = { data: companies_list.alphabetical.as_json(current_user: current_user) }
  json_response(response)
end
Among others, the error that I'm getting is the following:
C: Metrics/AbcSize: Assignment Branch Condition size for index is too high. [<13, 57, 16> 60.61/15]
I understand the maths behind it, but I don't know how to simplify this code to achieve the same result.
Could someone please give me some guidance on this?
Thanks in advance.
Well first and foremost, is this code fully tested, including all the myriad conditions? It's so complex that refactoring will surely be disastrous unless the test suite is rigorous. So, write a comprehensive test suite if you don't already have one. If there's already a test suite, make sure it tests all the conditions.
Second, apply the "fat model, skinny controller" paradigm: move all the complexity into a model; let's call it CompanyFilter:
def index
  companies_list = CompanyFilter.new(current_user, params).list
  response = { data: companies_list.alphabetical.as_json(current_user: current_user) }
  json_response(response)
end
and move all those if/then/else statements into the CompanyFilter#list method.
Tests still pass? Great. You'll still get the Rubocop warnings, but now they relate to the CompanyFilter class.
Now you need to untangle all the conditions. It's a bit hard for me to understand what's going on, but it looks as if it should be reducible to a single case statement, with 5 possible outcomes. So the CompanyFilter class might look something like this:
class CompanyFilter
  attr_accessor :current_user, :params

  def initialize(current_user, params)
    @current_user = current_user
    @params = params
  end

  def list
    case
    when no_filter_specified
      {}
    when user_is_admin
      @current_user.admin_companies
    when user_is_owned
      # etc
    when # other condition
      # etc
    end
  end

  private

  def no_filter_specified
    @params[:filters].blank?
  end

  def user_is_admin
    # returns boolean based on params hash
  end

  def user_is_owned
    # returns boolean based on params hash
  end
end
Tests still passing? Perfect! [Edit] Now you can move most of your controller tests into a model test for the CompanyFilter class.
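As a rough illustration (my own sketch, assuming FactoryBot and that the predicate methods above end up reading params[:filters] as in the question), such a model spec might look like:

describe CompanyFilter do
  let(:current_user) { create(:user) }

  describe '#list' do
    it 'returns an empty hash when no filters are given' do
      expect(CompanyFilter.new(current_user, {}).list).to eq({})
    end

    it 'returns the admin companies when filtering by admin' do
      params = { filters: { admin: "true" } }
      expect(CompanyFilter.new(current_user, params).list).to eq(current_user.admin_companies)
    end
  end
end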
Finally I would define all the different companies_list queries as scopes on the Company model, e.g.
class Company < ApplicationRecord
  # some examples, I don't know what's appropriate in this app
  scope :for_user, ->(user) { where("...") }
  scope :administered_by, ->(user) { where("...") }
end
When composing database scopes, ActiveRecord::SpawnMethods#merge is your friend:
Post.where(title: 'How to use .merge')
.merge(Post.where(published: true))
While it doesn't look like much, it lets you programmatically compose scopes without over-relying on mutating assignment and if/else trees. You can, for example, compose an array of conditions and merge them into a single ActiveRecord::Relation object with Array#reduce:
[Post.where(title: 'foo'), Post.where(author: 'bar')].reduce(&:merge)
# => SELECT "posts".* FROM "posts" WHERE "posts"."title" = $1 AND "posts"."author" = $2 LIMIT $3
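As a small illustration of that idea (my own sketch; the filter predicates are hypothetical), you can gather conditions conditionally and then fold them into one relation:

scopes = []
scopes << Post.where(author: 'bar')   if filter_by_author?
scopes << Post.where(published: true) if only_published?

# Start from Post.all so an empty array still yields a valid relation
posts = scopes.reduce(Post.all, :merge)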
So let's combine that with a skinny-controller approach where you handle filtering in a separate object:
class ApplicationFilter
  include ActiveModel::Attributes
  include ActiveModel::AttributeAssignment

  attr_accessor :user

  def initialize(**attributes)
    super()
    assign_attributes(attributes)
  end

  # A convenience method to both instantiate and apply the filters
  def self.call(user, params, scope: model_class.all)
    return scope unless params[:filters].present?
    scope.merge(
      new(
        permit_params(params).merge(user: user)
      ).to_scope
    )
  end

  def to_scope
    self.class.filters
        .map { |filter| apply_filter(filter) }
        .compact
        .select { |f| f.respond_to?(:merge) }
        .reduce(&:merge)
  end

  private

  # calls a filter_by_foo method if defined (even when private) or
  # defaults to where(key => value)
  def apply_filter(attribute)
    if respond_to?("filter_by_#{attribute}", true)
      send("filter_by_#{attribute}")
    else
      self.class.model_class.where(
        attribute => send(attribute)
      )
    end
  end

  # Convention over Configuration is sexy.
  def self.model_class
    name.chomp("Filter").constantize
  end

  # filters the incoming params hash based on the attributes of this filter class
  def self.permit_params(params)
    params.permit(filters).reject { |_k, v| v.blank? }
  end

  # provided for modularity
  def self.filters
    attribute_names
  end
end
This uses some of the goodness provided by Rails to set up objects with attributes that dynamically handle the filtering. It looks at the list of attributes you have declared, slices those off the params, and applies a filter_by_* method for a filter if one is defined.
We can then write a concrete implementation:
class CompanyFilter < ApplicationFilter
  attribute :admin, :boolean, default: false
  attribute :owned, :boolean

  private

  def filter_by_admin
    if admin
      user.admin_companies
    else
      user.non_admin_companies
    end
  end

  # this should be refactored to use an association on User
  def filter_by_owned
    case owned
    when nil
      nil
    when true
      Company.where(owner: user)
    when false
      Company.where.not(owner: user)
    end
  end
end
And you can call it with:
# scope is optional
@companies = CompanyFilter.call(current_user, params, scope: current_user.companies)
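For completeness, here is a sketch of my own showing how the controller's index action could use this, reusing json_response and the alphabetical scope from the question:

def index
  companies_list = CompanyFilter.call(current_user, params, scope: current_user.companies)
  response = { data: companies_list.alphabetical.as_json(current_user: current_user) }
  json_response(response)
end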
# users_show_controller.rb
class Controllers::Users::Show
  include Hanami::Action

  params do
    required(:id).filled(:str?)
  end

  def call(params)
    result = users_show_interactor(id: params[:id])
    halt 404 if result.failure?
    @user = result.user
  end
end

# users_show_interactor.rb
class Users::Show::Interactor
  include Hanami::Interactor

  expose :user

  def call(id:)
    @user = UserRepository.find_by(id: id)
  end
end
I have a controller and an interactor like the above, and I'm trying to work out the best way to distinguish a ClientError from a ServerError in the controller.
It would be nice if I could handle an error like this:
handle_exception StandardError => :some_handler
But hanami-interactor wraps errors raised inside the interactor, so the controller receives them through the result object. I don't think re-raising the error in the controller is a good way to handle this:
result = some_interactor.call(params)
raise result.error if result.failure?
How about implementing the error handler like this? I know the if statements will pile up easily, so this way is not very smart:
def call(params)
  result = some_interactor.call(params)
  handle_error(result.error) if result.failure?
end

private

def handle_error(error)
  return handle_client_error(error) if error.is_a?(ClientError)
  return server_error(error) if error.is_a?(ServerError)
end
Not really a hanami-oriented way, but please have a look at dry-monads with do notation. The basic idea is that you can write the interactor-like processing code in the following way:
def some_action
  value_1 = yield step_1
  value_2 = yield step_2(value_1)
  return yield(step_3(value_2))
end

def step_1
  if condition
    Success(some_value)
  else
    Failure(:some_error_code)
  end
end

def step_2(value_1)
  if condition
    Success(some_value)
  else
    Failure(:some_error_code_2)
  end
end
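For these yield-based steps to work, they need to live in a class that mixes in the result monad and Do notation. A minimal sketch of my own, assuming dry-monads 1.x (the class and method names are just examples; UserRepository comes from the question):

require 'dry/monads'

class UsersShowOperation
  include Dry::Monads[:result, :do]

  def call(id:)
    # yield unwraps a Success or short-circuits, returning the Failure
    user = yield find_user(id)
    Success(user)
  end

  private

  def find_user(id)
    user = UserRepository.find_by(id: id)
    user ? Success(user) : Failure(:not_found)
  end
end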
Then in the controller you can match the failures using dry-matcher:
matcher.(result) do |m|
  m.success do |v|
    # ok
  end

  m.failure :some_error_code do |v|
    halt 400
  end

  m.failure :some_error_code_2 do |v|
    halt 422
  end
end
The matcher may be defined in the prepend code for all controllers, so it's easy to remove the code duplication.
The Hanami way is to validate input parameters before each request handler, so a ClientError can always be identified before the action logic runs:
halt 400 unless params.valid? # halt ClientError

# your code
result = users_show_interactor(id: params[:id])
halt 422 if result.failure? # ServerError
halt 404 unless result.user
@user = result.user
I normally go about this by raising scoped errors in the interactor; then the controller only has to rescue the errors raised by the interactor and return the appropriate status response.
Interactor:
module Users
  class Delete
    include Tnt::Interactor

    class UserNotFoundError < ApplicationError; end

    def call(report_id)
      deleted = UserRepository.new.delete(report_id)
      fail_with!(UserNotFoundError) unless deleted
    end
  end
end
Controller:
module Api::Controllers::Users
  class Destroy
    include Api::Action
    include Api::Halt

    params do
      required(:id).filled(:str?, :uuid?)
    end

    def call(params)
      halt 422 unless params.valid?
      Users::Delete.new.call(params[:id])
    rescue Users::Delete::UserNotFoundError => e
      halt_with_status_and_error(404, e)
    end
  end
end
fail_with! and halt_with_status_and_error are helper methods common to my interactors and controllers, respectively.
# module Api::Halt
def halt_with_status_and_error(status, error = ApplicationError)
  halt status, JSON.generate(
    errors: [{ key: error.key, message: error.message }],
  )
end

# module Tnt::Interactor
def fail_with!(exception)
  @__result.fail!
  raise exception
end
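For context, error.key is not something Hanami provides out of the box; a hypothetical ApplicationError base class along these lines (my assumption, not shown above) is what makes the helper work:

class ApplicationError < StandardError
  # Machine-readable identifier derived from the class name,
  # e.g. Users::Delete::UserNotFoundError => "user_not_found_error"
  def key
    self.class.name
        .split('::').last
        .gsub(/([a-z\d])([A-Z])/, '\1_\2')
        .downcase
  end
end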
I have a class like this.
class Time
  def has_same_hours?(t)
    self.strftime("%Y%m%d%H") == t.strftime("%Y%m%d%H")
  end
end

class MyLogger
  DATA_DIR = 'data'

  def initialize
    @time_current_hour = Time.now
    @io = nil
    update_io_to_current_hour
  end

  def update_io_to_current_hour
    @io = open output_filename, "a+" if @io.nil?
    return if @time_current_hour.has_same_hours? Time.now
    @io.close
    @io = open output_filename, "a+"
    @time_current_hour = Time.now
  end

  def output_filename(time = Time.now)
    "#{DATA_DIR}/#{time.strftime('%Y_%m_%d_%H')}.txt"
  end
end
When update_io_to_current_hour is called, the file IO should be changed if the hour is different compared to @time_current_hour.
I want to write an RSpec test for it. This is what I wrote:
describe MyLogger do
  let(:logger) { MyLogger.new }

  describe "#update_io_to_current_hour" do
    context "when the hour changes" do
      before {
        @time_now = Time.parse("2010/4/10 19:00")
        @time_current = Time.parse("2010/4/10 18:59")
        Time.stub(:now).and_return(@time_now)
        logger.stub(:time_current_hour).and_return(@time_current)
      }

      it "should change file io" do
        expect { logger.update_io_to_current_hour }.to change { logger.instance_variable_get :@io }
      end
    end

    context "when the hour doesn't change" do
      before {
        @time_now = Time.parse("2010/4/10 18:59")
        @time_current = Time.parse("2010/4/10 18:58")
        Time.stub(:now).and_return(@time_now)
        logger.stub(:time_current_hour).and_return(@time_current)
      }

      it "should not change file io" do
        expect { logger.update_io_to_current_hour }.not_to change { logger.instance_variable_get :@io }
      end
    end
  end
end
The second test passes and the first does not. It looks like the file IO is never changed, no matter what is stubbed on the Time object.
What am I doing wrong? How can I write the test properly?
A couple of points:
logger.stub(:time_current_hour)
The class has no method named :time_current_hour, only an instance variable. There is rarely a good reason to test the values of instance variables; that is an implementation detail. You want to test behavior. In any case this stub is ineffective. Also
logger.instance_variable_get :@io
Now you are reaching right into the guts of your object and inspecting its internal values. Have you no regard for its privacy? :)
I think this would be a lot easier if you simply tested the value of :output_filename. When the hour changes, the filename changes. When the hour is the same, the filename is the same.
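As a rough sketch of that idea (my own, reusing the MyLogger class above), the spec can compare filenames instead of reaching into instance variables:

require 'time'

describe MyLogger do
  # assumes the data directory exists, since MyLogger.new opens its log file
  let(:logger) { MyLogger.new }

  describe "#output_filename" do
    it "changes when the hour changes" do
      expect(logger.output_filename(Time.parse("2010/4/10 18:59")))
        .not_to eq(logger.output_filename(Time.parse("2010/4/10 19:00")))
    end

    it "stays the same within the same hour" do
      expect(logger.output_filename(Time.parse("2010/4/10 18:58")))
        .to eq(logger.output_filename(Time.parse("2010/4/10 18:59")))
    end
  end
end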
I have the following class, and I want to ensure the class url is only set once for all instances:
class DataFactory
  @@url = nil

  def initialize
    begin
      if @@url.nil?
        Rails.logger.debug "Setting url"
        @@url = MY_CONFIG["my value"]
      end
    rescue Exception
      raise DataFactoryError, "Error!"
    end
  end
end
I have two tests:
it "should log a message" do
APP_CONFIG = {"my value" => "test"}
Rails.stub(:logger).and_return(logger_mock)
logger_mock.should_receive(:debug).with "Setting url"
t = DataFactory.new
t = nil
end
it "should throw an exception" do
APP_CONFIG = nil
expect {
DataFactory.new
}.to raise_error(DataFactoryError, /Error!/)
end
The problem is the second test never throws an exception, because the @@url class variable is still set from the first test when the second test runs.
Even though I have set the instance to nil at the end of the first test, garbage collection has not cleared the memory before the second test runs.
Any ideas would be great!
I did hear you could possibly use Class.new but I am not sure how to go about this.
describe DataFactory do
  before(:each) { DataFactory.class_variable_set :@@url, nil }
  ...
end
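To flesh that out a little, here is a sketch of my own; stub_const comes from rspec-mocks and replaces the constant reassignment in the question:

describe DataFactory do
  before(:each) { DataFactory.class_variable_set :@@url, nil }

  it "throws an exception when the config is missing" do
    stub_const("MY_CONFIG", nil)
    expect { DataFactory.new }.to raise_error(DataFactoryError, /Error!/)
  end
end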
Here is an alternative to the accepted answer which, while it wouldn't solve your particular example, might help a few people with a question in the same vein. If the class in question doesn't specify a default value and the variable remains undefined until set, this seems to work:
describe DataFactory do
  before(:each) do
    DataFactory.remove_class_variable :@@url if DataFactory.class_variable_defined? :@@url
  end
  ...
end
It works for me with a class that looks more like this:
def initialize
  @@url ||= MY_CONFIG["my value"]
  ...
end