Normally I expect that once an object is constructed, it should be ready for use, period. No two-step construction. If you need to call two constructors before you can use an object, something is very wrong... right?
class Contact
  attr_accessor :auth_token

  def initialize(contact_hash)
    ...
  end

  def edit(...)
    auth_token.can! :read, self
  end
end
token = AuthorizationToken.new(session)
contact = SomeService.get_contact(...)
contact.edit(...)
# raises an error because auth_token is not set
contact.auth_token = token
contact.edit(...)
The code above represents my current dilemma: I want SomeService to give me Contact objects, but I do not want that service to be concerned about an existing session, or authorization at all.
My current approach is adding this extra class:
class QueryService
  def initialize(session)
    @token = AuthorizationToken.new(session)
  end

  def get_contact(...)
    contact = SomeService.get_contact(...)
    contact.auth_token = @token
    contact
  end
end
contact = QueryService.new(session).get_contact(...)
contact.edit(...)
This solution gives me the most freedom: I can use authorization concerns inside the core domain object (Contact), implement them in an external class (AuthorizationToken), and keep services such as SomeService unaware of the current user session.
However, the two-step construction is killing me. It feels strange: an object that is not fully initialized for some operations?
This is not a plain case of dependency injection, but more precisely a context injection, so most of the articles about avoiding DI in Ruby do not really solve my problem. I am wondering whether there is a more Ruby-like way to solve this, or whether this is as clean as it can get.
Looks like your Contact class serves two purposes - storing contact data and performing authorized requests - so yes, it does violate the Single Responsibility Principle.
This could be fixed by splitting the Contact class in two: one part - maybe a Struct or even a plain hash - that stores the data, and a second that performs the requests.
And I think the most Ruby-like way to do it would be to return plain hashes from SomeService and instantiate with Contact.new(data, auth_token) later on.
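A minimal sketch of that split, reusing the names from the snippets above (everything else here is assumed for illustration):

data = SomeService.get_contact(...)   # plain data, no session or token involved

class Contact
  attr_reader :data

  def initialize(data, auth_token)
    @data = data
    @auth_token = auth_token
  end

  def edit(...)
    @auth_token.can! :read, self   # authorization stays inside the domain object
    # ... apply the changes to @data ...
  end
end

token   = AuthorizationToken.new(session)
contact = Contact.new(data, token)    # fully initialized in one step
contact.edit(...)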
I'll try to keep this as brief and to the point as possible.
I'm writing a Ruby gem, modeled after the Diplomat gem, that's a wrapper around a product's REST API. The API I'm consuming uses token-based authentication: an API token is sent via a POST, and a session is returned as a cookie. I'm making use of the Faraday cookiejar middleware for handling the cookie that's returned by the API. The problem I'm struggling with, conceptually, is when to authenticate.
I have two classes, one called RestClient and one called Volume; the latter inherits from the former. As it stands now, RestClient's initializer builds a connection object and authenticates, while Volume's initializer calls super and passes a path. My thinking here is that when any class that inherits from RestClient is initialized, it'll authenticate the user.
class RestClient
  def initialize(api_path)
    # <build connection>
    auth
  end

  def auth
    # <post token, get session cookie>
  end
end
class Volume < RestClient
  def initialize
    super('/volume')
  end

  def volumes
    # <send GET, receive volumes>
  end
end
obj = Volume.new # creates the object and authenticates the user
obj.volumes      # returns the list of volumes
I guess my question is: am I headed down the right track? Should I hold off on authenticating until a method is first called on the object, rather than authenticating when it's initialized? Or am I going about this entirely incorrectly?
What you are asking here is more of a code-style question; there is no right or wrong here. I was about to vote to close because I think it is primarily opinion-based.
Since I have an opinion, I'm writing an answer instead.
a) Do not over-think
Just implement the stuff; if it works, it is good enough.
b) Rule of three
If you have implemented three things of the same kind and a pattern emerges, refactor!
c) Refuse to use inheritance
When in doubt, do not use inheritance. A module will be good enough most of the time.
To your question specifically:
I would not use an initializer to make HTTP calls. They are error-prone, and error handling from within initializers (or around them) is really ugly. It makes testing a pain.
What I would do is just implement whatever you need in simple methods.
What is wrong with calling authenticate before making another API call? Putting it into a block may make it really nice and readable:
client.authenticate do |session|
  session.volumes
end
If this is too ugly for your use case, you could do it lazily, before any other method call that might require authentication.
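A rough sketch of that lazy variant (ensure_authenticated is a made-up helper name, not part of Faraday or any gem):

class RestClient
  def get(path)
    ensure_authenticated
    # send the GET request using the stored session cookie
  end

  private

  # authenticate lazily, only once, when a request actually needs it
  def ensure_authenticated
    @session ||= auth   # auth posts the token and keeps the returned cookie
  end
end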
Is a cookie the only auth your API supports? Very often server-oriented (server-to-server) REST APIs also implement better auth strategies that allow you to pass in auth with every request.
All that being said, what you can also do is something like this:
client = MyApi::Client.for_user(username: ..., password: ...)
# ...or
client = MyApi::Client.for_token(token)

volumes = MyApi::Volumes.get(client: client)
This way, where auth is required, you would be doing a good thing by "encouraging your class to be used right": you simply won't be able to instantiate the client without authentication data, and you won't be able to initialize your remote objects/calls without a client.
Then, within the client, what you can do is memoize the auth on the first request:
def perform(http_method, url, ...)
  @auth_cookie ||= @client.get_cookie_by_authentication
  ...
end
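Putting those pieces together, a compact sketch might look like this (MyApi, for_token, and perform are illustrative names, not a real library):

module MyApi
  class Client
    def self.for_token(token)
      new(token: token)
    end

    def initialize(token:)
      @token = token
    end

    def perform(http_method, url, params = {})
      @auth_cookie ||= authenticate   # memoized: the auth call happens only once
      # ... send the request with @auth_cookie attached ...
    end

    private

    def authenticate
      # POST @token, return the session cookie from the response
    end
  end
end

client  = MyApi::Client.for_token(token)
volumes = MyApi::Volumes.get(client: client)   # Volumes calls client.perform internally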
Does the single responsibility principle (SRP) apply to the text of a file that defines a class? Or does it apply to the live object when running the program?
I am on a project and we are pulling code out of a model class and putting it in a module. We are doing this in order to adhere to the single responsibility principle.
We are changing this:
class User
  # ... lots of other code

  def convert_attributes
    { username: self.email, name: "#{self.first_name} #{self.last_name}" }
  end
end
to something like this
class User
  include UserAttributeConverter
  # ... lots of other code
end

module UserAttributeConverter
  def convert_attributes
    { username: self.email, name: "#{self.first_name} #{self.last_name}" }
  end
end
What if we made this change at run time like this?
user = User.find(42)
user.extend(UserAttributeConverter)
user.convert_attributes
The single responsibility principle, based on my knowledge and research here, and here, is defined for a particular context. By this definition, the location of the text that defines the functionality doesn't necessarily matter. Extracting functionality from the class into a module (at least as the example shows) with only one purpose does not seem to extract the responsibility of convert_attributes, but rather shift it to a different file which is still bound to User. My assessment would be that a true extraction of this responsibility would perhaps be to create a class as such:
class UserAttributeConverter
  def self.convert_attributes(first_name, last_name, email)
    { username: email, name: "#{first_name} #{last_name}" }
  end
end
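Called from the outside it would look like this (usage sketch):

user = User.find(42)
UserAttributeConverter.convert_attributes(user.first_name, user.last_name, user.email)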
This gives three benefits. First, a simpler test case: we no longer need a User to test the functionality of convert_attributes. Second, the User model is no longer responsible for what is a cosmetic change to its data. Third, this approach removes the side effects associated with the instance implementation of convert_attributes.
To summarize, I do not think that extracting the functionality as you have changes your adherence to the SRP. To truly gain single responsibility, I believe a (breaking) change to the class interface would be required.
I'm looking for the best way to structure a class (or set of classes) that coordinates an ordered set of API calls or steps and then persists mixed data from the results of those API calls and steps. Ideally there would be rollback handling for failure scenarios, to clean up any created API data in the case where persisting the data fails or doesn't pass certain validations. I have created a similar structure with some Ruby pseudo-code below, however this just doesn't feel right.
Any help with a better way to structure this would be greatly appreciated!
class SomeImportantAction
  def initialize(obj)
    @obj = obj
  end

  def run!
    result = API.get(...)
    result_2 = OtherAPI.post(...)
    some_var = do_some_work()
    update_obj(result, some_var)
    create_something_new(result, result_2)
  end

  private

  def update_obj(result, some_var)
    ...
  end

  def create_something_new(result, result_2)
    ...
  end
end
A rule of thumb is to have each method work on a single level of abstraction.
Your aim is to gather information (with API calls) and to save it to the DB. This is the top level:
def run!
  result = make_api_calls(@obj)
  another_result = do_some_work()

  if results_valid?(result, another_result)
    save_results(result, another_result)
  else
    cleanup(result, another_result)
  end
end

def make_api_calls(obj)
  result = API.get(...)
  result_2 = OtherAPI.post(...)
  combine_results(result, result_2)
rescue
  handle_api_fail
end
Reading the run! method, you can clearly say what the task is doing. And then, if interested, you can inspect each method.
Each method called in run! should then operate on a lower level of abstraction - make API calls, save to the DB. But again, only one level - don't dig into establishing a DB connection or anything like that.
Once you split the work into smaller, task-specific methods, you will see groups of methods that only use each other (like API calls and handling of related exceptions). Those are candidates for a new class.
For example, you need a series of API calls to create a user profile - fetch auth info, fetch the profile picture, call some API to add the user's role, book something, etc. That is a bunch of API calls that work as a transaction - either all succeed, or there is no effect if one fails. This should be organized as a class - it is simpler to use (fewer API calls to keep in mind), to debug and to test.
Don't be afraid to create classes and methods. Even one-line methods can help significantly.
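As an illustration of the user-profile example above, a sketch of such a transactional group of API calls (all class and method names here are made up):

class ProfileCreation
  # Runs the related API calls as a unit: either they all succeed,
  # or anything already created is rolled back.
  def call(user)
    created = []
    created << AuthAPI.create_auth_info(user)
    created << PictureAPI.upload_picture(user)
    created
  rescue => e
    created.each(&:delete)   # undo whatever was created before re-raising
    raise e
  end
end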
I often find myself doing lots of delegating.
In Ruby Science, it says:
Many delegate methods to the same object are an indicator that your object graph may not accurately reflect the real world relationships they represent.
and
If you find yourself writing lots of delegators, consider changing the consumer class to take a different object. For example, if you need to delegate lots of User methods to Account, it's possible that the code referencing User should actually reference an instance of Account instead.
I don't really understand this. What is an example of how this would look in practice?
I think the author of that chapter wants to make clear that, for example, writing:
class User
  def discounted_plan_price(discount_code)
    coupon = Coupon.new(discount_code)
    coupon.discount(account.plan.price)
  end
end
Notice the account.plan.price chain; this could be done better by using delegate on the User model, such as:
class User
  delegate :discounted_plan_price, to: :account
end
which is equivalent to writing the following by hand, with the actual logic moved into Account:
class User
  def discounted_plan_price(discount_code)
    account.discounted_plan_price(discount_code)
  end
end

class Account
  def discounted_plan_price(discount_code)
    coupon = Coupon.new(discount_code)
    coupon.discount(plan.price)
  end
end
Notice that we now write plan.price instead of account.plan.price, because the pricing logic lives in Account and User merely delegates to it.
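As an aside, delegate is the Rails/ActiveSupport helper; in plain Ruby the standard library's Forwardable module achieves the same thing:

require 'forwardable'

class User
  extend Forwardable
  def_delegator :account, :discounted_plan_price
end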
Good example, but there are many more.
I have an object, PersistentObject, which you can think of as plucked out of an ORM: it's an object which you can use natively in your programming language (agnostic to the backend), and it has methods load and save for committing changes to a database.
I want my PersistentObject to be faultable, i.e. I want to be able to initialize it as a lightweight pointer which serves only to reference the object in the database. And when (if) the moment comes, I can fault it into memory by actually going to the database and fetching it. The point here is to be able to add this object to collections as a reference without ever needing to fetch the object. I also want to be able to initialize the object the old-fashioned way, with a classic constructor, and then commit it to the database (this is handy when you need to create a new object from scratch, rather than manipulate an existing one).
So I have an object which has multiple constructors: a classic one, and one that creates a fault based on the object GUID in the database. And when the object is initialized as a fault, I want instance methods to be able to access that state as an instance variable because operations on a fault are different to those on a fully loaded object. But for obvious reasons, I don't want clients messing with my inner state so I don't want to create an accessor for the ivar. So my question is, how do I init/set an ivar from a class method in an object instance in such a way that outside clients of my class can't mess with it (i.e. set its value to something else)?
Sorry for all the words, the code should make it a lot clearer. I've tried something which obviously doesn't work but illustrates the point nicely. Apologies if this is an elementary question, I'm quite new to Ruby.
class PersistentObject
  def initialize(opts={})
    @id = opts[:id] || new_id
    @data = opts[:data] || nil
  end

  def self.new_fault(id)
    new_object = PersistentObject.new
    new_object.@fault = true # <----- How do you achieve this?
    new_object
  end

  def new_id
    # returns a new globally unique id
  end

  def fault?
    @fault
  end

  def load
    if fault?
      # fault in the object from the database by fetching the record corresponding to the id
      @fault = false
    end
  end

  def save
    # save the object to the database
  end
end
# I create a new object as a fault; I can add it to collections, refer to it all I want, etc., but I can't access its data. I just have a lightweight pointer which can be created without ever hitting the database.
o = PersistentObject.new_fault("123")

# Now let's suppose I need the object's data, so I'll load it
o.load

# Now I can use the object, change its data, etc.
p o.data
o.data = "foo"

# And when I'm ready I can save it back to the database
o.save
EDIT:
I should say that my heart isn't set on accessing that instance's ivar from the class method; I'd be more than happy to hear of an idiomatic Ruby pattern for solving this problem.
You could use instance_eval:
new_object.instance_eval { @fault = true }
or instance_variable_set:
new_object.instance_variable_set(:@fault, true)
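Applied to the class method from the question, that would look like this (sketch):

def self.new_fault(id)
  new_object = PersistentObject.new(id: id)
  new_object.instance_variable_set(:@fault, true)
  new_object
end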
If your goal is to set the instance variable, then I agree with Stephan's answer. To answer your edit, another approach is to add another option to the constructor:
class PersistentObject
  def initialize(opts={})
    @id = opts[:id] || new_id
    @data = opts[:data] || nil
    @fault = opts[:fault] || false
  end

  def self.new_fault(id)
    self.new(id: id, fault: true)
  end

  ...
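Usage then stays the same as in the question (sketch):

o = PersistentObject.new_fault("123")
o.fault? # => true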
Unfortunately, Ruby's unconventional implementation of private/protected makes them non-viable for this problem.
This is not possible. And I am not talking about "not possible in Ruby", I am talking about mathematically, logically impossible. You have two requirements:
Another object should not be allowed to set @fault.
Another object should be allowed to set @fault. (Remember, PersistentObject is just yet another object.)
It should be immediately obvious that those two requirements contradict each other and thus what you want simply cannot be done. Period.
You can create an attr_writer for @fault, then PersistentObject can write to it … but so can everybody else. You can make that writer private, then PersistentObject needs to use metaprogramming (i.e. send) to circumvent that access protection … but so can everybody else. You can use instance_variable_set to have PersistentObject set @fault directly … but so can everybody else.