(Crossposting note: I asked this already at the Ruby Forum a week ago, but have not received any response yet).
Here is a (very) simplified, working version of what I have so far:
# A class S with two methods, one which requires one parameter, and
# one without parameters.
class S
def initialize(s); @ms = s; end
def s_method1(i); puts "s_method1 #{i} #{@ms}"; end
def s_method2; puts "s_method2 #{@ms}"; end
end
# A class T which uses S, and "associates" itself to
# one of the two methods in S, depending on how it is
# initialized.
class T
def initialize(s, choice=nil)
@s = S.new(s)
# If choice is true, associate to the one-parameter-method, otherwise
# to the parameterless method.
@pobj = choice ? lambda { @s.s_method1(choice) } : @s.method(:s_method2)
end
# Here is how I use this association
def invoke
@pobj.call
end
end
In this example, depending on how T is constructed, T#invoke calls
either S#s_method1 or S#s_method2, but in the case of calling
S#s_method1, the parameter to s_method1 is already fixed at creation
time of the T object. Hence, the following two lines,
T.new('no arguments').invoke
T.new('one argument', 12345).invoke
produce the output
s_method2 no arguments
s_method1 12345 one argument
which is exactly what I need.
Now to my question:
In the case where choice is nil, i.e. where I want to invoke the
parameterless method s_method2, I can get my callable object in an
elegant way by
@s.method(:s_method2)
In the case where choice is non-nil, I had to construct a Proc object
using lambda. This not only looks clumsy, but also makes me feel a bit
uncomfortable. We have a closure here, which is connected to the
environment inside the initialize method, and I'm not sure whether this
could cause trouble, such as memory leaks, in some circumstances.
Is there an easy way to simply bind a method object (in this case
@s.method(:s_method1)) to a fixed argument?
My first idea was to use
@s.method(:s_method1).curry[choice]
but this does not achieve my goal: instead of returning a callable Proc object, it actually executes s_method1 (this is not a bug, but documented behaviour).
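To illustrate why (a minimal sketch with a standalone one-argument method, not my actual S class): a curried proc is invoked as soon as its last required argument has been supplied, so for an arity-1 method the very first application already executes it.
def standalone_method1(i)
  puts "standalone_method1 #{i}"
end

m = method(:standalone_method1)
c = m.curry  # => #<Proc (lambda)>, still waiting for its single argument
c[12345]     # prints "standalone_method1 12345" right away instead of returning another callable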
Any other ideas of how my goal could be achieved?
Saving parameters separately
This option is simple, but it might not be what you're looking for:
class T
def initialize(s, choice=nil)
s = S.new(s)
@choice = choice
@pobj = s.method(choice ? :s_method1 : :s_method2)
end
def invoke
@pobj.call(*@choice) # splatting nil expands to no arguments, so s_method2 is called without any
end
end
T.new('no arguments').invoke
T.new('one argument', 12345).invoke
#=> s_method2 no arguments
#=> s_method1 12345 one argument
Method refinements for default parameters (Ruby 2.0+)
# Allows setting default parameters for methods, after they have been defined.
module BindParameters
refine Method do
def default_parameters=(params)
@default_params = params
end
def default_parameters
@default_params || []
end
alias_method :orig_call, :call
def call(*params)
merged_params = params + (default_parameters[params.size..-1] || [])
orig_call(*merged_params)
end
end
end
Here's an example:
def f(string)
puts "Hello #{string}"
end
def g(a, b)
puts "#{a} #{b}"
end
using BindParameters
f_method = method(:f)
f_method.default_parameters = %w(World)
f_method.call('user') # => Hello user
f_method.call # => Hello World
g_method = method(:g)
g_method.default_parameters = %w(Hello World)
g_method.call # => Hello World
g_method.call('Goodbye') # => Goodbye World
g_method.call('Goodbye', 'User') # => Goodbye User
Your code can be rewritten:
class T
using BindParameters
def initialize(s, *choice)
s = S.new(s)
@pobj = s.method(choice.empty? ? :s_method2 : :s_method1)
@pobj.default_parameters = choice
end
def invoke
@pobj.call
end
end
T.new('no arguments').invoke # => s_method2 no arguments
T.new('one argument', 12_345).invoke # => s_method1 12345 one argument
Monkey-Patching Method class (Ruby 1.9+)
If it is acceptable to patch the Method class, you could use:
class Method
def default_parameters=(params)
@default_params = params
end
def default_parameters
@default_params || []
end
alias_method :orig_call, :call
def call(*params)
merged_params = params + (default_parameters[params.size..-1] || [])
orig_call(*merged_params)
end
end
T becomes:
class T
def initialize(s, *choice)
s = S.new(s)
@pobj = s.method(choice.empty? ? :s_method2 : :s_method1)
@pobj.default_parameters = choice
end
def invoke
@pobj.call
end
end
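With the patched Method class, the same calls as in the refinement example should produce the same output:
T.new('no arguments').invoke          # => s_method2 no arguments
T.new('one argument', 12_345).invoke  # => s_method1 12345 one argument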
Wrapping Method class (Ruby 1.9+)
This way is probably cleaner if you don't want to pollute the Method class:
class MethodWithDefaultParameters
attr_accessor :default_parameters
attr_reader :method
def initialize(receiver, method_symbol)
@method = receiver.public_send(:method, method_symbol)
@default_parameters = []
end
def call(*params)
merged_params = params + (default_parameters[params.size..-1] || [])
method.call(*merged_params)
end
def method_missing(sym, *args)
method.send(sym, *args)
end
end
T becomes:
class T
def initialize(s, *choice)
s = S.new(s)
@pobj = MethodWithDefaultParameters.new(s, choice.empty? ? :s_method2 : :s_method1)
@pobj.default_parameters = choice
end
def invoke
@pobj.call
end
end
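Again, usage and output should match the variants above:
T.new('no arguments').invoke          # => s_method2 no arguments
T.new('one argument', 12_345).invoke  # => s_method1 12345 one argument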
Any comments or suggestions are welcome!
Related
So I am porting a tool from Ruby where a callback block could be defined on an object, and I want it to be called if the callback was set.
So basically something like this.
def set_block(&block)
@new_kid_on_the_block = block
end
def call_the_block_if_it_was_defined
block.call("step by step") if block = #new_kid_on_the_block
end
I am pretty sure this is an easy task, but somehow I keep running into problems.
Thank you in advance!
In Crystal you almost always have to specify the types of instance variables explicitly. So here is how it could look:
class A
alias BlockType = String -> String
def set_block(&block : BlockType)
@block = block
end
def call_block
@block.try &.call("step by step")
end
end
a = A.new
pp a.call_block # => nil
a.set_block { |a| a + "!" }
pp a.call_block # => "step by step!"
Take a look at Capturing blocks for more.
I have a wrapper class which redefines a method of the wrapped class. Is there any way the wrapper's state can be accessed from inside the override method?
class WidgetWrapper
attr_accessor :result_saved_by_widget
def initialize(widget)
@widget = widget
# we intercept the widget's usual "save" method so we can see
# what the widget tries to save
def @widget.save_result(result) # this override works fine ...
OUTER.result_saved_by_widget = result # .. but I need something like this inside it!
end
end
def call
@widget.calculate # this will call "save_result" at some stage
end
end
# How it gets used
wrapper = WidgetWrapper.new(Widget.new)
wrapper.call
puts wrapper.result_saved_by_widget
Based on your example, I would extend the object with a module:
module WidgetExtension
attr_accessor :results_saved_by_widget
def save_result(result)
@results_saved_by_widget = result
super
end
end
w = Widget.new
w.extend(WidgetExtension)
w.calculate
w.results_saved_by_widget #=> stored value
Solved this with a perfectly stupid hack - injecting the wrapper object beforehand, using instance_variable_set.
class WidgetWrapper
attr_accessor :result_saved_by_widget
def initialize(widget)
@widget = widget
@widget.instance_variable_set :@wrapper, self
# we intercept the widget's usual "save" method so we can see
# what the widget tries to save
def @widget.save_result(result) # this override works fine ...
@wrapper.result_saved_by_widget = result # ... and this works too :)
end
end
def call
@widget.calculate # this will call "save_result" at some stage
end
end
# How it gets used
wrapper = WidgetWrapper.new(Widget.new)
wrapper.call
puts wrapper.result_saved_by_widget
I don't quite understand your question, but I think I did something quite similar in the past; maybe the following lines can help you:
documents_to_wrap.each do |doc|
doc.define_singleton_method(:method){override_code}
tmp = doc.instance_variable_get(:@instance_var)
doc.instance_variable_set(:@other_instance_var, tmp.do_something)
end
Actually, it's not that hard. A couple of points:
You probably want to call the original save_result. Otherwise, it's not much of a wrapper.
You need to use a closure to capture the current lexical context (meaning, remember that we're in WidgetWrapper).
class Widget
def calculate
save_result(3)
end
def save_result(arg)
puts "original save_result: #{arg}"
end
end
class WidgetWrapper
attr_accessor :result_saved_by_widget, :widget
def initialize(widget)
@widget = widget
wrapper = self # `self` can/will unpredictably change.
@widget.define_singleton_method :save_result do |result|
wrapper.result_saved_by_widget = result
super(result)
end
end
def call
widget.calculate
end
end
# How it gets used
wrapper = WidgetWrapper.new(Widget.new)
wrapper.call
puts 'intercepted value'
puts wrapper.result_saved_by_widget
# >> original save_result: 3
# >> intercepted value
# >> 3
I'm confused about when to use each of these methods.
From respond_to? documentation:
Returns true if obj responds to the given method. Private methods
are included in the search only if the optional second parameter
evaluates to true.
If the method is not implemented, as Process.fork on Windows,
File.lchmod on GNU/Linux, etc., false is returned.
If the method is not defined, respond_to_missing? method is called and
the result is returned.
And respond_to_missing?:
Hook method to return whether the obj can respond to id method or
not.
See #respond_to?.
Both methods take two arguments.
Both methods seem to do the same thing (check whether an object responds to a given method), so why should we use (have) both?
Defining respond_to_missing? gives you the ability to get Method objects for methods handled by method_missing:
class A
def method_missing name, *args, &block
if name == :meth1
puts 'YES!'
else
raise NoMethodError
end
end
def respond_to_missing? name, flag = true
if name == :meth1
true
else
false
end
end
end
[65] pry(main)> A.new.method :meth1
# => #<Method: A#meth1>
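For contrast, here is a minimal sketch (a hypothetical class B, handling the same :meth1 but without respond_to_missing?):
class B
  def method_missing name, *args, &block
    name == :meth1 ? puts('YES!') : super
  end
end

B.new.meth1              # prints "YES!"
B.new.respond_to? :meth1 # => false
B.new.method :meth1      # raises NameError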
Why couldn't respond_to? do this?
What I guess:
respond_to? checks whether the method is defined in:
Current object.
Parent object.
Included modules.
respond_to_missing? checks whether the method is:
1. Defined via method_missing:
Via an array of possible methods:
def method_missing name, *args, &block
arr = [:a, :b, :c]
if arr.include? name
puts name
else
raise NoMethodError
end
end
Or by delegating it to a different object:
class A
def initialize name
@str = String name
end
def method_missing name, *args, &block
@str.send name, *args, &block
end
end
2. Some other way that I'm not aware of.
Where should both be defined/used (my guess too):
Starting from 1.9.3 (as far as I remember), define only respond_to_missing?, but call only respond_to?.
Last questions:
Am I right? Did I miss something? Correct anything that is wrong and/or answer the questions asked above.
respond_to_missing? is supposed to be updated when you make additional methods available via the method_missing technique. This lets the Ruby interpreter know that the new method exists.
In fact, without defining respond_to_missing?, you can't get a Method object for such a method using method.
Marc-André posted a great article about respond_to_missing?.
In order for respond_to? to return true, one can specialize it, as follows:
class StereoPlayer
# def method_missing ...
# ...
# end
def respond_to?(method, *)
method.to_s =~ /play_(\w+)/ || super
end
end
p.respond_to? :play_some_Beethoven # => true
This is better, but it still doesn’t make play_some_Beethoven behave exactly like a method. Indeed:
p.method :play_some_Beethoven
# => NameError: undefined method `play_some_Beethoven'
# for class `StereoPlayer'
Ruby 1.9.2 introduces respond_to_missing? that provides for a clean solution to the problem. Instead of specializing respond_to? one specializes respond_to_missing?. Here’s a full example:
class StereoPlayer
# def method_missing ...
# ...
# end
def respond_to_missing?(method, *)
method =~ /play_(\w+)/ || super
end
end
p = StereoPlayer.new
p.play_some_Beethoven # => "Here's some_Beethoven"
p.respond_to? :play_some_Beethoven # => true
m = p.method(:play_some_Beethoven) # => #<Method: StereoPlayer#play_some_Beethoven>
# m acts like any other method:
m.call # => "Here's some_Beethoven"
m == p.method(:play_some_Beethoven) # => true
m.name # => :play_some_Beethoven
StereoPlayer.send :define_method, :ludwig, m
p.ludwig # => "Here's some_Beethoven"
See also Always Define respond_to_missing? When Overriding method_missing.
I have this code:
l = lambda { a }
def some_function
a = 1
end
I just want the lambda to access a from some special scope that has already defined a, for example inside some_function in the example, or just a bit later in the same scope:
l = lambda { a }
a = 1
l.call
Then I found that when calling l, it still uses its own binding, not the new one at the place where it is called.
And then I tried to use it as:
l.instance_eval do
a = 1
call
end
But this also failed, and strangely I can't explain why.
I know one of the solutions is to use eval, to which I could pass a specific binding and execute code given as text, but I really do not want to do it that way.
And I know it is possible to use a global variable or an instance variable. However, my code actually lives in a deeply embedded environment, so I don't want to break the already-completed parts unless it is really necessary.
I have looked at the Proc class in the documentation and found a method named binding that refers to the Proc's context. But that method only provides a way to access the binding, not to change it, except via Binding#eval, which again evaluates text, which is exactly what I don't want to do.
Now the question is: is there a better (or more elegant) way to implement this, or is using eval already the usual approach?
Edit to reply to @Andrew:
Okay, this is a problem I met while writing a lexical parser, in which I defined an array with a fixed number of items, including at least a Proc and a regular expression. My goal is to match the regular expressions and execute the Procs under my special scope, where the Procs will involve some local variables that are only defined later. And then I met the problem above.
Actually I suppose it is not completely the same as that question, as mine is about how to pass a binding into a Proc rather than how to pass it out.
@Niklas:
Got your answer; I think that is exactly what I want. It has solved my problem perfectly.
You can try the following hack:
class Proc
def call_with_vars(vars, *args)
Struct.new(*vars.keys).new(*vars.values).instance_exec(*args, &self)
end
end
To be used like this:
irb(main):001:0* lambda { foo }.call_with_vars(:foo => 3)
=> 3
irb(main):002:0> lambda { |a| foo + a }.call_with_vars({:foo => 3}, 1)
=> 4
This is not a very general solution, though. It would be better if we could give it a Binding instance instead of a Hash and do the following:
l = lambda { |a| foo + a }
foo = 3
l.call_with_binding(binding, 1) # => 4
Using the following, more complex hack, this exact behaviour can be achieved:
class LookupStack
def initialize(bindings = [])
@bindings = bindings
end
def method_missing(m, *args)
@bindings.reverse_each do |bind|
begin
method = eval("method(%s)" % m.inspect, bind)
rescue NameError
else
return method.call(*args)
end
begin
value = eval(m.to_s, bind)
return value
rescue NameError
end
end
raise NoMethodError
end
def push_binding(bind)
@bindings.push bind
end
def push_instance(obj)
@bindings.push obj.instance_eval { binding }
end
def push_hash(vars)
push_instance Struct.new(*vars.keys).new(*vars.values)
end
def run_proc(p, *args)
instance_exec(*args, &p)
end
end
class Proc
def call_with_binding(bind, *args)
LookupStack.new([bind]).run_proc(self, *args)
end
end
Basically we define ourselves a manual name lookup stack and instance_exec our proc against it. This is a very flexible mechanism. It not only enables the implementation of call_with_binding, it can also be used to build up much more complex lookup chains:
l = lambda { |a| local + func(2) + some_method(1) + var + a }
local = 1
def func(x) x end
class Foo < Struct.new(:add)
def some_method(x) x + add end
end
stack = LookupStack.new
stack.push_binding(binding)
stack.push_instance(Foo.new(2))
stack.push_hash(:var => 4)
p stack.run_proc(l, 5)
This prints 15, as expected :)
UPDATE: Code is now also available at Github. I use this for one of my projects too now.
class Proc
def call_with_obj(obj, *args)
m = nil
p = self
Object.class_eval do
define_method :a_temp_method_name, &p
m = instance_method :a_temp_method_name; remove_method :a_temp_method_name
end
m.bind(obj).call(*args)
end
end
And then use it as:
class Foo
def bar
"bar"
end
end
p = Proc.new { bar }
bar = "baz"
p.call_with_obj(self) # => baz
p.call_with_obj(Foo.new) # => bar
Perhaps you don't actually need to define a later, but instead only need to set it later.
Or (as below), perhaps you don't actually need a to be a local variable (which itself references an array). Instead, perhaps you can usefully employ a class variable, such as @@a. This works for me, by printing "1":
class SomeClass
def l
@l ||= lambda { puts @@a }
end
def some_function
@@a = 1
l.call
end
end
SomeClass.new.some_function
A similar way:
class Context
attr_reader :_previous, :_arguments
def initialize(_previous, _arguments)
@_previous = _previous
@_arguments = _arguments
end
end
def _code_def(_previous, _arguments = [], &_block)
define_method("_code_#{_previous}") do |_method_previous, _method_arguments = []|
Context.new(_method_previous, _method_arguments).instance_eval(&_block)
end
end
_code_def('something') do
puts _previous
puts _arguments
end
Let's say I have a class Foo and the constructor takes 2 parameters.
Based on these parameters the initialize method does some heavy calculations and stores them as variables in the instance of the class. Object created.
Now I want to optimize this and create a cache of these objects. When creating a new Foo object, I want to return an existing one from the cache if the parameters match. How can I do this?
I currently have a self.new_using_cache(param1, param2), but I would love to have this integrated into the normal Foo.new().
Is this possible in any way?
I can also deduce that using .new() combined with a cache is not really semantically correct.
That would mean that the method should be called new_or_from_cache().
Clarification
It's not just about the heavy calculation; it's also preferred because it limits the number of duplicate objects. I don't want 5000 objects in memory when I can have 50 unique ones from a cache. So I really need to customize the .new method, not just cache the computed values.
class Foo
@@cache = {}
def self.new(value)
if @@cache[value]
@@cache[value]
else
@@cache[value] = super(value)
end
end
def initialize(value)
@value = value
end
end
puts Foo.new(1).object_id #2148123860
puts Foo.new(2).object_id #2148123820 (different from first instance)
puts Foo.new(1).object_id #2148123860 (same as first instance)
You can define self.new yourself, and then call super when you actually want to use Class#new.
Also, this approach completely prevents any instantiation from ever occurring if a new instance isn't actually needed. This is due to the fact that the initialize method doesn't make the decision.
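A quick way to see that initialize really is skipped on a cache hit (a small sketch that reopens the Foo class above and uses a value that is not cached yet):
class Foo
  def initialize(value)
    puts "initializing #{value}"
    @value = value
  end
end

Foo.new(3) # prints "initializing 3"
Foo.new(3) # prints nothing; the cached instance is returned untouched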
Here's a solution I came up with by defining a generic caching module. The module expects your class to implement the "retrieve_from_cache" and "store_in_cache" methods. If those methods don't exist, it doesn't attempt to do any fancy caching.
module CacheInitializer
def new(*args)
if respond_to?(:retrieve_from_cache) &&
cache_hit = retrieve_from_cache(*args)
cache_hit
else
object = super
store_in_cache(object, *args) if respond_to?(:store_in_cache)
object
end
end
end
class MyObject
attr_accessor :foo, :bar
extend CacheInitializer
@cache = {}
def initialize(foo, bar)
@foo = foo
@bar = bar
end
def self.retrieve_from_cache(foo, bar)
# grab the object from the cache
@cache[cache_key(foo, bar)]
end
def self.store_in_cache(object, foo, bar)
# write back to cache
@cache[cache_key(foo, bar)] = object
end
private
def self.cache_key(foo, bar)
foo + bar
end
end
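A quick usage sketch (strings work here because cache_key simply concatenates its two arguments):
first  = MyObject.new('spam', 'eggs')
second = MyObject.new('spam', 'eggs')
first.equal?(second)                      # => true, the second call returned the cached instance
MyObject.new('spam', 'ham').equal?(first) # => false, different key, so a new object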
Something like this?
class Foo
@@cache = {}
def initialize prm1, prm2
if @@cache.key?([prm1, prm2]) then @prm1, @prm2 = @@cache[[prm1, prm2]] else
@prm1 = ...
@prm2 = ...
@@cache[[prm1, prm2]] = [@prm1, @prm2]
end
end
end
Edited
To avoid creating an instance when the parameters are the same as before,
class Foo
@@cache = {}
def self.new prm1, prm2
return if @@cache.key?([prm1, prm2])
@prm1 = ...
@prm2 = ...
@@cache[[prm1, prm2]] = [@prm1, @prm2]
super
end
end
p Foo.new(1, 2)
p Foo.new(3, 4)
p Foo.new(1, 2)
# => #<Foo:0x897c4f0>
# => #<Foo:0x897c478>
# => nil
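If a repeat call should hand back the cached instance instead of nil, one possible variant (a sketch along the same lines, caching the instances themselves) is:
class Foo
  @@cache = {}
  def self.new prm1, prm2
    @@cache[[prm1, prm2]] ||= super
  end
  def initialize prm1, prm2
    @prm1 = prm1
    @prm2 = prm2
  end
end

p Foo.new(1, 2).equal?(Foo.new(1, 2)) # => true, both calls return the same instance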
You could use a class-level instance variable to store results from previous object instantiations:
class Foo
# class-level instance variable, exposed so instances can reach it
@object_cache = {}
class << self
attr_reader :object_cache
end
def initialize(param1, param2)
cache = self.class.object_cache
@foo1 = cache[param1] ||= expensive_calculation
@foo2 = cache[param2] ||= expensive_calculation
end
private
def expensive_calculation
# ...
end
end
As you probably know, you have reinvented the factory method design pattern, and it's a perfectly valid solution using your own name for the factory method. In fact, it's probably better to do it without redefining new if anyone else is going to have to understand it.
But, it can be done. Here is my take:
class Test
@@cache = {}
class << self
alias_method :real_new, :new
end
def self.new p1
o = @@cache[p1]
if o
s = "returning cached object"
else
@@cache[p1] = o = real_new(p1)
s = "created new object"
end
puts "%s (%d: %x)" % [s, p1, o.object_id]
o
end
def initialize p
puts "(initialize #{p})"
end
end
Test.new 1
Test.new 2
Test.new 1
Test.new 2
Test.new 3
And this results in:
(initialize 1)
created new object (1: 81176de0)
(initialize 2)
created new object (2: 81176d54)
returning cached object (1: 81176de0)
returning cached object (2: 81176d54)
(initialize 3)