How does the Rails delegate method work? - ruby

After reading the answer by jvans below and looking at the source code a few more times, I get it now :). In case anyone is still wondering how exactly Rails' delegate works: all Rails is doing is creating a new method (with module_eval) in the file/class from which you called the delegate method.
So for example:
class A
  delegate :hello, :to => :b
end

class B
  def hello
    p 'hello'
  end
end
At the point when delegate is called, Rails will create a hello method taking (*args, &block) in class A (technically in the file that class A is written in). In that method, all Rails does is take the :to value (which should name an object or a Class that is already accessible from within class A), assign it to a local variable _, and then call the method on that object or Class, passing in the params.
So in order for delegate to work without raising an exception in our previous example, an instance of A must already respond to a method b that returns an instance of class B (a sketch of the generated method follows the example below):
class A
  attr_accessor :b

  def b
    @b ||= B.new
  end

  delegate :hello, :to => :b
end

class B
  def hello
    p 'hello'
  end
end
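Roughly, the method Rails generates for this example looks like the following (my simplified sketch of the non-allow_nil branch of the source below; the real generated code also rescues NoMethodError and raises a DelegationError when b is nil):

# Simplified sketch of what delegate :hello, :to => :b defines inside A
class A
  def hello(*args, &block)
    _ = b                    # call the :to target exactly once and store it
    _.hello(*args, &block)   # forward the call, passing arguments and block through
  end
end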
This is not a question about "how to use the delegate method in Rails", which I already know. I'm wondering how exactly "delegate" delegates methods :D. In the Rails 4 source code, delegate is defined on Ruby's core Module class, which makes it available as a class method in every Rails app.
Actually, my first question would be: how is Ruby's Module class included? I mean, every Ruby class has ancestors of Object > Kernel > BasicObject, and any module in Ruby has the same ancestors. So how exactly does Ruby add methods to all classes/modules when someone reopens the Module class?
My second question is: I understand that the delegate method in Rails uses module_eval to do the actual delegation, but I don't really understand how module_eval works.
def delegate(*methods)
  options = methods.pop
  unless options.is_a?(Hash) && to = options[:to]
    raise ArgumentError, 'Delegation needs a target. Supply an options hash with a :to key as the last argument (e.g. delegate :hello, to: :greeter).'
  end

  prefix, allow_nil = options.values_at(:prefix, :allow_nil)

  if prefix == true && to =~ /^[^a-z_]/
    raise ArgumentError, 'Can only automatically set the delegation prefix when delegating to a method.'
  end

  method_prefix = \
    if prefix
      "#{prefix == true ? to : prefix}_"
    else
      ''
    end

  file, line = caller.first.split(':', 2)
  line = line.to_i

  to = to.to_s
  to = 'self.class' if to == 'class'

  methods.each do |method|
    # Attribute writer methods only accept one argument. Makes sure []=
    # methods still accept two arguments.
    definition = (method =~ /[^\]]=$/) ? 'arg' : '*args, &block'

    # The following generated methods call the target exactly once, storing
    # the returned value in a dummy variable.
    #
    # Reason is twofold: On one hand doing less calls is in general better.
    # On the other hand it could be that the target has side-effects,
    # whereas conceptually, from the user point of view, the delegator should
    # be doing one call.
    if allow_nil
      module_eval(<<-EOS, file, line - 3)
        def #{method_prefix}#{method}(#{definition})    # def customer_name(*args, &block)
          _ = #{to}                                     #   _ = client
          if !_.nil? || nil.respond_to?(:#{method})     #   if !_.nil? || nil.respond_to?(:name)
            _.#{method}(#{definition})                  #     _.name(*args, &block)
          end                                           #   end
        end                                             # end
      EOS
    else
      exception = %(raise DelegationError, "#{self}##{method_prefix}#{method} delegated to #{to}.#{method}, but #{to} is nil: \#{self.inspect}")

      module_eval(<<-EOS, file, line - 2)
        def #{method_prefix}#{method}(#{definition})    # def customer_name(*args, &block)
          _ = #{to}                                     #   _ = client
          _.#{method}(#{definition})                    #   _.name(*args, &block)
        rescue NoMethodError => e                       # rescue NoMethodError => e
          if _.nil? && e.name == :#{method}             #   if _.nil? && e.name == :name
            #{exception}                                #     # add helpful message to the exception
          else                                          #   else
            raise                                       #     raise
          end                                           #   end
        end                                             # end
      EOS
    end
  end
end
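For reference, the options handled in that source map onto usage like this (a sketch; Customer and Invoice are hypothetical classes, and delegate is assumed to be available as in a Rails app):

class Customer
  def name
    'Jane'
  end
end

class Invoice
  attr_accessor :customer

  # :prefix => true generates customer_name instead of name
  delegate :name, :to => :customer, :prefix => true

  # :allow_nil => true makes the generated name method return nil
  # instead of raising DelegationError when customer is nil
  delegate :name, :to => :customer, :allow_nil => true
end

invoice = Invoice.new
invoice.name                    #=> nil (allow_nil branch)
invoice.customer = Customer.new
invoice.customer_name           #=> "Jane" (prefixed method)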

Ruby isn't reopening the Module class here. In Ruby, the class Module and the class Class are almost identical.
Class.instance_methods - Module.instance_methods #=> [:allocate, :new, :superclass]
The main difference is that you can't 'new' a module.
Modules are Ruby's version of multiple inheritance, so when you do:
module A
end

module B
end

class C
  include A
  include B
end
behind the scenes Ruby is actually creating something called an anonymous class, so the above is roughly equivalent to:
class A
end

class B < A
end

class C < B
end
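You can see that linearization directly in the ancestor chain of the module version above (a quick check, assuming the module A / module B / class C definitions; the most recently included module sits closest to the class):

C.ancestors     #=> [C, B, A, Object, Kernel, BasicObject]
C.new.is_a?(A)  #=> true
C.new.is_a?(B)  #=> true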
module_eval here is a little deceptive. Nothing in the code you're looking at is dealing with modules. class_eval and module_eval are the same thing; they just reopen the class (or module) they're called on. So if you want to add methods to a class C you can do:
C.class_eval do
  def my_new_method
  end
end
or
C.module_eval do
  def my_new_method
  end
end
both of which are equivalent to manually reopening the class and defining the method
class C
end

class C
  def my_new_method
  end
end
So when Rails calls module_eval in the delegate source above, it's just reopening the class the method was called in and dynamically defining the methods you're delegating.
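Note that the delegate source uses the string form of module_eval rather than a block; it behaves the same way, and the extra file/line arguments only make backtraces point at the call site. A minimal sketch, reusing the class C from above (my_generated_method is a made-up name):

C.module_eval(<<-EOS, __FILE__, __LINE__ + 1)
  def my_generated_method
    'defined from a string'
  end
EOS

C.new.my_generated_method #=> "defined from a string"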
I think this will answer your question better:
Class.ancestors #=> [Class, Module, Object, PP::ObjectMixin, Kernel, BasicObject]
Since everything in Ruby is an object, the method lookup chain will go through all of these objects until it finds what it's looking for. By reopening Module you add behavior to everything. The ancestor chain here is a little deceptive: since BasicObject.class #=> Class, and Module is in Class's lookup hierarchy, even BasicObject inherits behavior from reopening Module. The advantage of reopening Module here over Class is that you can now call this method from within a module as well as within a class! Very cool, learned something here myself.
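To make that concrete, here is a minimal sketch of why defining such a macro on Module (rather than Class) pays off; shout, Greeting and Farewell are made-up names:

class Module
  def shout(name)
    # define an instance method that just returns the upcased name
    define_method(name) { name.to_s.upcase }
  end
end

class Greeting
  shout :hello     # available here because Class inherits from Module
end

module Farewell
  shout :goodbye   # and here because Farewell is an instance of Module
end

Greeting.new.hello                    #=> "HELLO"
Object.new.extend(Farewell).goodbye   #=> "GOODBYE"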

Related

Is there a way to shorten `def self.method_name` and `def method_name` into one method?

So say I have this class:
class This
  def a(that)
    puts that
  end

  def self.b(that)
    puts that
  end
end

This.b("Hello!") # => passes
This.a("Hello!") # => fails

the_class = This.new()
the_class.b("Hello!") # => fails
the_class.a("Hello!") # => passes
Is there a way to shorten both of those methods into one method that is able to be called on the class itself (an uninitialized object) AND on an already initialized instance, or will I always have to write a method like that twice?
You can extract the functionality into a module and both extend and include it.
module A
  def a(that)
    puts that
  end
end

class This
  include A # defines instance methods
  extend A  # defines class methods
end

This.a("foo")     # => "foo"
This.new.a("foo") # => "foo"
Although I think it's more common to either include or extend, not both. The reason is that instance methods often depend on instance state, whereas class methods don't. If you have an instance of This and want to call a class method, you can go through .class, as shown below.
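A quick illustration of that last point, assuming the This class defined above (with both include A and extend A):

instance = This.new
instance.class            #=> This
instance.class.a("foo")   # => "foo" (reaches the class method through the instance's class)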
The following bit of code uses some metaprogramming tricks to automatically copy any class methods over to instances of that class.
module AutoAddMethods
  def singleton_method_added(symbol)
    define_method(symbol, method(symbol).to_proc)
  end
end

class Foo
  extend AutoAddMethods
  @bar = 39

  def initialize
    @bar = 42
  end

  def test_one # Only added as an instance method.
    puts "One #{@bar.inspect}"
  end

  def self.test_two # Added as both an instance and class method.
    puts "Two #{@bar.inspect}"
  end
end

i = Foo.new
i.test_one
i.test_two
Foo.test_two
And here is my test run output:
One 42
Two 39
Two 39
The test_two method is callable from both the class and its instances. It runs as if in the class.

Usage of "self", mixin methods, and exposed methods

This is some code that I use in a class called Game:
def play
  puts "There are #{@players.length} players in #{@title}."

  @players.each do |n|
    puts n
  end

  @players.each do |o|
    GameTurn.take_turn(o)
    puts o
  end
end
It uses a line of code that references a module called GameTurn. Within GameTurn, I have a method called self.take_turn:
require_relative "die"
require_relative "Player"
module GameTurn
def self.take_turn(o)
die = Die.new
case die.roll
when 1..2
o.blam
puts "#{o.name} was blammed homie."
when 3..4
puts "#{o.name} was skipped."
else
o.w00t
end
end
end
I'm a little confused about why we use "self" and about the difference between exposed methods and mixin methods in modules. I asked about this in "instance methods of classes vs module methods".
Is take_turn really an exposed method? Even though we're feeding the take_turn method an object from the Player class, is this method still considered a module method that we're using directly? Is this not considered a mixin method? We are feeding the take_turn method an object from another class, so isn't it mixing in with other classes?
Also, I am still trying to figure out when/why we use "self". It just seems weird that we need to define the method take_turn within the GameTurn module using "self". It seems like it should be defined without "self", no?
Ok, from the start:
self always returns the object in whose context it is executed. So here:
class A
  self #=> A
end
In Ruby, you can define methods on individual objects on the fly; for example you can do:
o = Object.new
o.foo #=> NoMethodError

def o.foo
  :foo
end

o.foo #=> :foo
Classes and modules are just objects like everything else, hence you can define methods on them as well:
def A.method
  'class method'
end

A.method #=> 'class method'
However, it is much easier and more convenient to define it within the class body, because of self, which always returns the class itself:
class A
  def self.foo
    :foo
  end
end
self returns A, so this can be read as:
class A
  def A.foo
    :foo
  end
end
The good thing about this is that if you decide to change the class name, you only need to change it at the top, next to class; self will take care of the rest.
Within a method, self is always the receiver of the method call. So:
o = Object.new

def o.method
  self
end

o.method == o #=> true
It can, however, be pretty confusing from time to time. A common confusion comes from code like this:
class A
  def get_class
    self.class
  end
end

class B < A
end

b = B.new
b.get_class #=> B
even though get_class is defined on class A, self refers to the receiver of a method, not the method owner. Hence it evaluates to:
b.class #=> B
For the same reason self within class methods always points to the class the method is executed on.
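A quick illustration of that last point (Parent and Child are made-up names):

class Parent
  def self.whoami
    self
  end
end

class Child < Parent
end

Parent.whoami #=> Parent
Child.whoami  #=> Child (self is the class the method is executed on, not where it was defined)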

Method chaining in Ruby

I want to build an API client that has an interface similar to Rails' ActiveRecord. I want consumers to be able to chain methods, and after the last method is chained, the client requests a URL based on the methods called. So it's method chaining with some lazy evaluation. I looked into ActiveRecord, but it is very complicated (spawning processes, etc.).
Here is a toy example of the sort of thing I am talking about. You can chain as many 'bar' methods together as you like before calling 'get', like this:
puts Foo.bar.bar.get # => 'bar,bar'
puts Foo.bar.bar.bar.get # => 'bar,bar,bar'
I have successfully implemented this, but I would rather not need to call the 'get' method. So what I want is this:
puts Foo.bar.bar # => 'bar,bar'
But my current implementation does this:
puts Foo.bar.bar #=> [:bar, :bar]
I have thought of overriding array methods like each and to_s but I am sure there is a better solution.
How would I chain the methods and know which was the last one so I could return something like the string returned in the get method?
Here is my current implementation:
#!/usr/bin/env ruby

class Bar
  def get(args)
    # does a request to an API and returns things but this will do for now.
    args.join(',')
  end
end

class Foo < Array
  def self.bar
    @q = new
    @q << :bar
    @q
  end

  def bar
    self << :bar
    self
  end

  def get
    Bar.new.get(self)
  end
end
Also see: Ruby Challenge - Method chaining and Lazy Evaluation
How it works with ActiveRecord is that the relation is a wrapper around an array, delegating any undefined method to this internal array (called target). So what you need is to start with a BasicObject instead of Object:
class Foo < BasicObject
Then you need to create an internal variable to which you will delegate all the methods:
def method_missing(*args, &block)
  reload! unless loaded?
  @target.send(*args, &block)
end

def reload!
  # your logic to populate target, e.g.:
  @target = @counter
  @loaded = true
end

def loaded?
  !!@loaded
end
To chain methods, your methods need to return a new instance of your class, e.g.:
def initialize(counter = 0)
  @counter = counter
end

def bar
  _class.new(@counter + 1)
end

private

# BasicObject does not define the class method. If you want to wrap your target
# completely (like ActiveRecord does before Rails 4), you want to delegate it
# to @target as well. Still, you need to access the instance's class to create
# new instances. That's the way (if there are any suggestions how to improve it,
# please comment!)
def _class
  (class << self; self end).superclass
end
Now you can check it in action:
p Foo.new.bar.bar.bar #=> 3

(f = Foo.new) && nil # '&& nil' added to prevent execution of inspect on the
                     # object in the console, as it would force @target to be loaded
f.loaded? #=> false
puts f    #=> 0
f.loaded? #=> true
A (very simple, maybe simplistic) option would be to implement the to_s method: as it is used to "coerce" the object to a string (for instance in a puts), you could put your specific "this is the end of the chain" code there.
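A minimal sketch of that idea, reusing the Foo < Array implementation from the question (note that puts special-cases Array subclasses and prints their elements line by line, so the coercion is most visible in string interpolation):

class Foo < Array
  def to_s
    Bar.new.get(self)   # fire the request when the object is coerced to a string
  end
end

"Chained result: #{Foo.bar.bar}" #=> "Chained result: bar,bar"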

What's the difference between self.generate and Invoice.generate?

class Invoice
  def Invoice.generate(order_id, charge_amount, credited_amount = 0.0)
    Invoice.new(:order_id => order_id, :amount => charge_amount, :invoice_type => PURCHASE, :credited_amount => credited_amount)
  end
end
Why would you create Invoice.generate inside the Invoice class rather than self.generate?
self.generate is easier to work with, whereas Invoice.generate is arguably more explicit. Other than that, there's no difference between the two.
Explanation
You can define a method on any instance using this form:
def receiver.method(args)
end
Check this out
class Foo
end

def Foo.bar
  "bar"
end

Foo.bar # => "bar"
And yes, I mean any instance. It's absolutely possible that one instance has some method while another doesn't:
f = Foo.new

def f.quux
  'quux'
end

f2 = Foo.new

f.quux  # => "quux"
f2.quux # => undefined method `quux' for #<Foo:0x007fe4e904a6c0> (NoMethodError)
A reminder: inside a class definition (but outside of method definitions) self points to that class.
class Foo
  # self is Foo
end
So, armed with this knowledge, the difference between self.generate and Invoice.generate should be obvious.
Under normal circumstances, it would be practically no different from def self.generate.
The only edge case I can think of is if you have a nested class with the same name, then the explicit version would apply only to the nested class.
class A
  def self.x
    name
  end

  def A.y
    name
  end

  class A
    # nested class A::A
  end

  def self.p
    name
  end

  def A.q
    name
  end
end
> A.x # => "A"
> A.y # => "A"
> A.p # => "A"
> A.q # => NoMethodError: undefined method `q' for A:Class
> A::A.q # => "A::A"
As you see, after a nested class with the same name is defined, subsequent explicit class method definitions made with the class name refer to the nested class, but explicit definitions made beforehand refer to the original.
Implicit definitions made with self always refer to the base class.
You have two ways of defining a class method.
1) You can use the name of the class directly:
class Test # Test is now an instance of a built-in class named Class
  def Test.class_method
    "I'm a class method."
  end
end
2) You can use the self variable, which always points to the current object:
class Test
  def self.class_method
    "I'm a class method."
  end
end
Once you understand that classes are objects, this use of the self variable to define a class method finally makes sense.
The value of self
Not too surprisingly, when you are inside a class method, the value of self refers to the object that holds the class structure (the instance of class Class). This means that:
class Test
  def self.class_method
    self.x
  end
end
is equivalent to:
class Test
  def self.class_method
    Test.x
  end
end
When you are inside an instance method, the value of self still refers to the current object. This time however, the current object is an instance of class Test, not an instance of class Class.
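To make the two cases concrete, here is a small sketch reusing the Test class name from above (whoami and instance_whoami are made-up method names):

class Test
  def self.whoami
    self   # the Test class itself, i.e. an instance of Class
  end

  def instance_whoami
    self   # the particular Test instance the method was called on
  end
end

Test.whoami                  #=> Test
t = Test.new
t.instance_whoami.equal?(t)  #=> true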
More info: http://www.jimmycuadra.com/posts/self-in-ruby

Why does including this module not override a dynamically-generated method?

I'm trying to override a dynamically-generated method by including a module.
In the example below, a Ripple association adds a rows= method to Table. I want to call that method, but also do some additional stuff afterwards.
I created a module to override the method, thinking that the module's rows= would be able to call super to use the existing method.
class Table
  # Ripple association - creates rows= method
  many :rows, :class_name => Table::Row

  # Hacky first attempt to use the dynamically-created
  # method and also do additional stuff - I would actually
  # move this code elsewhere if it worked
  module RowNormalizer
    def rows=(*args)
      rows = super
      rows.map!(&:normalize_prior_year)
    end
  end

  include RowNormalizer
end
However, my new rows= is never called, as evidenced by the fact that if I raise an exception inside it, nothing happens.
I know the module is getting included, because if I put this in it, my exception gets raised.
included do
  raise 'I got included, woo!'
end
Also, if instead of rows=, the module defines somethingelse=, that method is callable.
Why isn't my module method overriding the dynamically-generated one?
Let's do an experiment:
class A; def x; 'hi' end end
module B; def x; super + ' john' end end
A.class_eval { include B }
A.new.x
=> "hi" # oops
Why is that? The answer is simple:
A.ancestors
=> [A, B, Object, Kernel, BasicObject]
A comes before B in the ancestor chain, so method lookup checks A first (you can think of the included module as sitting behind the class). Therefore A#x always takes priority over B#x.
However, this can be worked around:
class A
  def x
    'hi'
  end
end

module B
  # Define a method with a different name
  def x_after
    x_before + ' john'
  end

  # And set up aliases on the inclusion :)
  # We can use `alias new_name old_name`
  def self.included(klass)
    klass.class_eval {
      alias :x_before :x
      alias :x :x_after
    }
  end
end

A.class_eval { include B }
A.new.x #=> "hi john"
With ActiveSupport (and therefore Rails) you have this pattern implemented as alias_method_chain(target, feature) http://apidock.com/rails/Module/alias_method_chain:
module B
  def self.included(base)
    base.alias_method_chain :x, :feature
  end

  def x_with_feature
    x_without_feature + " John"
  end
end
Update: Ruby 2 comes with Module#prepend, which does override the methods of A, making this aliasing hack unnecessary for most use cases.
Why isn't my module method overriding the dynamically-generated one?
Because that's not how inheritance works. Methods defined in a class override the ones inherited from other classes/modules, not the other way around.
In Ruby 2.0, there's Module#prepend, which works just like Module#include, except it inserts the module as a subclass instead of a superclass in the inheritance chain.
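To see the difference, here is the experiment from the other answer repeated with prepend instead of include; B now sits in front of A in the ancestor chain, so its x overrides A's and can call super:

class A; def x; 'hi' end end
module B; def x; super + ' john' end end

A.class_eval { prepend B }

A.ancestors #=> [B, A, Object, Kernel, BasicObject]
A.new.x     #=> "hi john"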
If you extend the instance of the class, you will be able to do it:
class A
  def initialize
    extend(B)
  end

  def hi
    'hi'
  end
end

module B
  def hi
    super[0,1] + 'ello'
  end
end

obj = A.new
obj.hi #=> 'hello'
