How can I run Ruby methods backwards? - ruby

I want to have a class whose chained calls run backwards: with foo.method1.method2.method3 I want method3, then method2, then method1 to run. Right now they run 1, 2, 3. I think this is called lazy evaluation, but I'm not sure.
I know next to nothing about Ruby, so please excuse this question if it's simple and I should know this already.

Sure, you can do it. It sounds like you were on the right path thinking of lazy evaluation; you just need to end every chain of method calls with a method that runs the queued methods.
class Foo
  def initialize
    @command_queue = []
  end

  def method1
    @command_queue << :_method1
    self
  end

  def method2
    @command_queue << :_method2
    self
  end

  def method3
    @command_queue << :_method3
    self
  end

  def exec
    @command_queue.reverse.map do |command|
      self.send(command)
    end
    @command_queue = []
  end

  private

  def _method1
    puts "method1"
  end

  def _method2
    puts "method2"
  end

  def _method3
    puts "method3"
  end
end

foo = Foo.new
foo.method1.method2.method3.exec
# Output:
# method3
# method2
# method1

Maybe you could chain method calls, build an evaluation stack, and execute it later. This requires that you call an extra method to evaluate the stack. You could use private methods for the actual implementations.
class Weirdo
  def initialize
    @stack = []
  end

  def method_a
    @stack << [:method_a!]
    self # so that the next call gets chained
  end

  def method_b(arg1, arg2)
    @stack << [:method_b!, arg1, arg2]
    self
  end

  def method_c(&block)
    @stack << [:method_c!, block]
    self
  end

  def call_stack
    until @stack.empty?
      name, *args = @stack.pop
      blk = args.last.is_a?(Proc) ? args.pop : nil # hand a stored block back as a real block
      send(name, *args, &blk)
    end
  end

  private

  # actual method implementations
  def method_a!
    # method_a functionality
  end

  def method_b!(arg1, arg2)
    # method_b functionality
  end

  def method_c!(&block)
    # method_c functionality
  end
end
so that you can do something like
w = Weirdo.new
w.method_a.method_b(3,5).method_c{ Time.now }
w.call_stack # => executes c first, b next and a last.
Update
Looks like I managed to miss Pete's answer and posted almost exactly the same answer. The only difference is the ability to pass on the arguments to the internal stack.

What you really want here is a proxy class that captures the messages, reverses them, and then forwards them on to the actual class:
# This is the proxy class that captures the messages, reverses them, and then forwards them
class Messenger
  def initialize(target)
    @obj = target
    @messages = []
  end

  def method_missing(name, *args, &block)
    @messages << [name, args, block]
    self
  end

  # The return value of this method is an array of the return values of the invoked methods
  def exec
    @messages.reverse.map { |name, args, block| @obj.send(name, *args, &block) }
  end
end

# This is the actual class that implements the methods you want to invoke;
# every method on this class just returns its own name.
class Test
  def self.def_methods(*names)
    names.each { |v| define_method(v) { v } }
  end

  def_methods :a, :b, :c, :d
end

# Attach the proxy, store the messages, forward the reversed messages on to the actual
# class, and then display the resulting array of method return values.
Messenger.new(Test.new).a.b.c.exec.inspect.display #=> [:c, :b, :a]

I don't think that this is possible. Here is why:
method3 is operating on the result of method2
method2 is operating on the result of method1
therefore method3 requires that method2 has completed
therefore method2 requires that method1 has completed
thus, the execution order must be method3 -> method2 -> method1
I think your only option is to write it as:
foo.method3.method2.method1
As a side note, lazy evaluation is simply delaying a computation until the result is required. For example, if I had a database query that went:
@results = Result.all
Lazy evaluation would not perform the query until I did something like:
puts @results
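To make that concrete in plain Ruby, here is a minimal sketch of lazy evaluation (my own example, unrelated to ActiveRecord):
numbers = (1..Float::INFINITY).lazy.map { |n| n * 2 } # builds a lazy enumerator; nothing is computed yet
numbers.first(3) #=> [2, 4, 6] -- the mapping only runs once a result is actually demanded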

This is most interesting and useful; I think the proxy pattern can indeed be used.
Reading your initial post reminded me of a debugger IBM once provided that allowed executing both forward and backward (I think it was in Eclipse for Java).
I see that OCaml has such a debugger: http://caml.inria.fr/pub/docs/manual-ocaml/manual030.html.
Don't forget that closures can either complicate or help with this issue.

Related

How could I implement something like Rails' before_initialize/before_new in plain Ruby?

In Rails we can define a class like:
class Test < ActiveRecord::Base
  before_initialize :method
end
and when calling Test.new, method() will be called on the instance. I'm trying to learn more about Ruby and class methods like this, but I'm having trouble trying to implement this in plain Ruby.
Here's what I have so far:
class LameAR
  def self.before_initialize(*args, &block)
    # somehow store the symbols or block to be called on init
  end

  def new(*args)
    ## Call methods/blocks here
    super(*args)
  end
end

class Tester < LameAR
  before_initialize :do_stuff

  def do_stuff
    puts "DOING STUFF!!"
  end
end
I'm trying to figure out where to store the blocks in self.before_initialize. I originally tried an instance variable like @before_init_methods, but that instance variable doesn't exist in memory at that point, so I couldn't store to or retrieve from it. I'm not sure how or where I could store these blocks/procs/symbols during the class definition, to later be called inside of new.
How could I implement this? (Whether before_initialize takes a block, a proc, or a list of symbols, I don't mind at this point; I'm just trying to understand the concept.)
For a comprehensive description, you can always check the Rails source; it is itself implemented in 'plain Ruby', after all. (But it handles lots of edge cases, so it's not great for getting a quick overview.)
The quick version is:
module MyCallbacks
  def self.included(klass)
    klass.extend(ClassMethods) # we don't have ActiveSupport::Concern either
  end

  module ClassMethods
    def initialize_callbacks
      @callbacks ||= []
    end

    def before_initialize(&block)
      initialize_callbacks << block
    end
  end

  def initialize(*)
    self.class.initialize_callbacks.each do |callback|
      instance_eval(&callback)
    end
    super
  end
end

class Tester
  include MyCallbacks
  before_initialize { puts "hello world" }
end

Tester.new
Left to the reader:
arguments
calling methods by name
inheritance
callbacks aborting a call and supplying the return value
"around" callbacks that wrap the original invocation
conditional callbacks (:if / :unless)
subclasses selectively overriding/skipping callbacks
inserting new callbacks elsewhere in the sequence
... but eliding all of those is what [hopefully] makes this implementation more approachable.
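For one of those items -- calling methods by name, so that before_initialize :do_stuff from the question also works -- here is a rough sketch of how the module above could be extended. The module and class names here are my own, and this is not Rails' implementation:
module MyCallbacksByName
  def self.included(klass)
    klass.extend(ClassMethods)
  end

  module ClassMethods
    def initialize_callbacks
      @callbacks ||= []
    end

    # accept method names and/or a block
    def before_initialize(*method_names, &block)
      initialize_callbacks.concat(method_names)
      initialize_callbacks << block if block
    end
  end

  def initialize(*)
    self.class.initialize_callbacks.each do |callback|
      callback.is_a?(Symbol) ? send(callback) : instance_eval(&callback)
    end
    super
  end
end

class TesterByName
  include MyCallbacksByName
  before_initialize :do_stuff

  def do_stuff
    puts "DOING STUFF!!"
  end
end

TesterByName.new #=> prints "DOING STUFF!!"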
One way would be by overriding Class#new:
class LameAR
  def self.before_initialize(*symbols_or_callables, &block)
    @before_init_methods ||= []
    @before_init_methods.concat(symbols_or_callables)
    @before_init_methods << block if block
    nil
  end

  def self.new(*args, &block)
    obj = allocate
    @before_init_methods.each do |symbol_or_callable|
      if symbol_or_callable.is_a?(Symbol)
        obj.public_send(symbol_or_callable)
      else
        symbol_or_callable.(obj)
      end
    end
    obj.__send__(:initialize, *args, &block)
    obj # return the new instance, not the return value of initialize
  end
end

class Tester < LameAR
  before_initialize :do_stuff

  def do_stuff
    puts "DOING STUFF!!"
  end
end
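Usage then matches the question (assuming the definitions above):
Tester.new
# DOING STUFF!!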

Dynamically define a method inside an instance method

I am working on a context-oriented programming project in Ruby, and I have come to this problem:
Suppose that I have a class Klass:
class Klass
  def my_method
    proceed
  end
end
I also have a proc stored inside a variable impl; impl contains { puts "it works!" }.
From somewhere outside Klass, I would like to define a method called proceed inside the method my_method, so that if I call Klass.new.my_method, I get the result "it works!".
So the final result should be something like this:
class Klass
  def my_method
    def proceed
      puts "it works!"
    end

    proceed
  end
end
Or, if you have any other idea to make the call of proceed inside my_method work, that is also good. But the proceed of another method (let's say my_method_2) isn't the same as the one for my_method.
In fact, the proceed of my_method represents an old version of my_method, and the proceed of my_method_2 represents an old version of my_method_2.
Thanks for your help.
Disclaimer: you are doing it wrong!
There must be a more robust, elegant and Ruby-ish way to achieve what you want. If you still want to abuse metaprogramming, here you go:
class Klass
  def self.proceeds
    @proceeds ||= {}
  end

  def def_proceed
    # Proc.new with no block picks up the block passed to def_proceed;
    # the regexp digs the calling method's name out of the backtrace
    self.class.proceeds[caller.first[/`.*?'/]] = Proc.new
  end

  def proceed(*args)
    self.class.proceeds[caller.first[/`.*?'/]].(*args)
  end

  def m_1
    def_proceed { puts 1 }
    proceed
  end

  def m_2
    def_proceed { puts 2 }
    proceed
  end
end
inst = Klass.new
inst.m_1
#⇒ 1
inst.m_2
#⇒ 2
What you in fact need is Module#prepend: redefine the method in a prepended module and call super from there.
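As a minimal sketch of that suggestion (the names here are illustrative, not from the question): prepend a module that redefines my_method with the new implementation, and let super play the role of proceed.
class Klass
  def my_method
    puts "old my_method" # this is what super ("proceed") reaches
  end
end

impl = proc { puts "it works!" }

patch = Module.new do
  define_method(:my_method) do
    instance_exec(&impl) # the new behaviour
    super()              # "proceed": fall through to the previous my_method
  end
end
Klass.prepend(patch)

Klass.new.my_method
# it works!
# old my_method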
One way of doing that is to construct a hash whose keys are the names of the methods calling proceed and whose values are procs that represent the implementations of proceed for each method calling it.
class Klass
  singleton_class.send(:attr_reader, :proceeds)
  @proceeds = {}

  def my_method1(*args)
    proceed(__method__, *args)
  end

  def my_method2(*args)
    proceed(__method__, *args)
  end

  def proceed(m, *args)
    self.class.proceeds[m].call(*args)
  end
end

def define_proceed(m, &block)
  Klass.proceeds[m] = Proc.new(&block)
end

define_proceed(:my_method1) { |*arr| arr.sum }
define_proceed(:my_method2) { |a,b| "%s-%s" % [a,b] }

k = Klass.new
k.my_method1(1,2,3)        #=> 6
k.my_method2("cat", "dog") #=> "cat-dog"

Is there any difference in using `yield self` in a method with parameter `&block` and `yield self` in a method without a parameter `&block`?

I understand that
def a(&block)
  block.call(self)
end
and
def a()
  yield self
end
lead to the same result, if I assume that there is such a block a {}. My question is, since I stumbled over some code like that, whether it makes any difference, or whether there is any advantage to having both (if I do not otherwise use the variable/reference block):
def a(&block)
  yield self
end
This is a concrete case where I do not understand the use of &block:
def rule(code, name, &block)
  @rules = [] if @rules.nil?
  @rules << Rule.new(code, name)
  yield self
end
The only advantage I can think of is for introspection:
def foo; end
def bar(&blk); end
method(:foo).parameters #=> []
method(:bar).parameters #=> [[:block, :blk]]
IDEs and documentation generators could take advantage of this. However, it does not affect Ruby's argument passing. When calling a method, you can pass or omit a block, regardless of whether it is declared or invoked.
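A tiny demonstration of that point (my own example, not from the original post):
def declared(&blk); end
def undeclared; end

declared          # fine: no block passed even though &blk is declared
undeclared { 42 } # also fine: the block is silently ignored because nothing yields to it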
The main difference between
def pass_block
  yield
end

pass_block { 'hi' } #=> 'hi'
and
def pass_proc(&blk)
  blk.call
end

pass_proc { 'hi' } #=> 'hi'
is that blk, an instance of Proc, is an object and can therefore be passed to other methods. By contrast, blocks are not objects and therefore cannot be passed around.
def pass_proc(&blk)
  puts "blk.is_a?(Proc)=#{blk.is_a?(Proc)}"
  receive_proc(blk)
end

def receive_proc(proc)
  proc.call
end

pass_proc { 'ho' }
# prints: blk.is_a?(Proc)=true
#=> "ho"

Ruby: automatically wrapping methods in event triggers

Here's what I have/want:
module Observable
  def observers; @observers; end

  def trigger(event, *args)
    good = true
    return good unless (@observers ||= {})[event]
    @observers[event].each { |e| good = false and break unless e.call(self, *args) }
    good
  end

  def on(event, &block)
    @observers ||= {}
    @observers[event] ||= []
    @observers[event] << block
  end
end
class Item < Thing
  include Observable

  def pickup(pickuper)
    return unless trigger(:before_pick_up, pickuper)
    pickuper.add_to_pocket self
    trigger(:after_pick_up, pickuper)
  end

  def drop(droper)
    return unless trigger(:before_drop, droper)
    droper.remove_from_pocket self
    trigger(:after_drop, droper)
  end

  # Lots of other methods
end

# How it all should work
Item.new.on(:before_pick_up) do |item, pickuper|
  puts "Hey #{pickuper}, that's my #{item}"
  next false # the pickuper never picks up the object
end
While starting to create a game in Ruby, I thought it would be great if it could be based all around observers and events. The problem is that having to write all of these triggers seems to be a waste, as it results in a lot of duplicated code. I feel there must be some metaprogramming technique out there to wrap methods with functionality.
Ideal scenario:
class CustomBaseObject
  class << self
    ### Replace with correct meta magic
    def public_method_called(name, *args, &block)
      return unless trigger(:"before_#{name}", args)
      yield block
      trigger(:"after_#{name}", args)
    end
    ###
  end
end
And then I would have all of my objects inherit from this class.
I'm still new to Ruby's more advanced metaprogramming topics, so any knowledge about this type of thing would be awesome.
There are several ways to do it with the help of metaprogramming magic. For example, you can define a method like this:
def override_public_methods(c)
  c.instance_methods(false).each do |m|
    m = m.to_sym
    c.class_eval %Q{
      alias #{m}_original #{m}
      def #{m}(*args, &block)
        puts "Foo"
        result = #{m}_original(*args, &block)
        puts "Bar"
        result
      end
    }
  end
end
class CustomBaseObject
  def test(a, &block)
    puts "Test: #{a}"
    yield
  end
end

override_public_methods(CustomBaseObject)

foo = CustomBaseObject.new
foo.test(2) { puts 'Block!' }
# => Foo
#    Test: 2
#    Block!
#    Bar
In this case, you figure out all the required methods defined in the class by using instance_methods and then override them.
Another way is to use so-called 'hook' methods:
module Overrideable
  def self.included(c)
    c.instance_methods(false).each do |m|
      m = m.to_sym
      c.class_eval %Q{
        alias #{m}_original #{m}
        def #{m}(*args, &block)
          puts "Foo"
          result = #{m}_original(*args, &block)
          puts "Bar"
          result
        end
      }
    end
  end
end

class CustomBaseObject
  def test(a, &block)
    puts "Test: #{a}"
    yield
  end

  include Overrideable
end
The included hook, defined in this module, is called when you include that module. This requires that you include the module at the end of the class definition, because included should know about all the already defined methods. I think it's rather ugly :)
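If that ordering requirement bothers you, one alternative sketch (my own, not part of the answer above) is Module#prepend, which wraps a method regardless of where in the class body it is defined, because the wrapper module sits in front of the class in the ancestor chain:
module Wrapped
  def test(*args, &block)
    puts "Foo"
    result = super
    puts "Bar"
    result
  end
end

class CustomBaseObject
  prepend Wrapped

  def test(a, &block)
    puts "Test: #{a}"
    yield
  end
end

CustomBaseObject.new.test(2) { puts 'Block!' }
# Foo
# Test: 2
# Block!
# Bar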

calling another method in super class in ruby

class A
  def a
    puts 'in #a'
  end
end

class B < A
  def a
    b()
  end

  def b
    # here I want to call A#a.
  end
end
class B < A
  alias :super_a :a

  def a
    b()
  end

  def b
    super_a()
  end
end
There's no nice way to do it, but you can do A.instance_method(:a).bind(self).call, which will work, but is ugly.
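For example, with the classes from the question, that one-liner could be used like this:
class B < A
  def b
    A.instance_method(:a).bind(self).call # runs A#a on this instance of B
  end
end

B.new.b # prints "in #a"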
You could even define your own method in Object to act like super in java:
class SuperProxy
  def initialize(obj)
    @obj = obj
  end

  def method_missing(meth, *args, &blk)
    @obj.class.superclass.instance_method(meth).bind(@obj).call(*args, &blk)
  end
end

class Object
  private

  def sup
    SuperProxy.new(self)
  end
end

class A
  def a
    puts "In A#a"
  end
end

class B < A
  def a
  end

  def b
    sup.a
  end
end

B.new.b # Prints "In A#a"
If you don't explicitly need to call A#a from B#b, but rather need to call A#a from B#a (which is effectively what you're doing by way of B#b, unless your example isn't complete enough to show why you're calling from B#b), you can just call super from within B#a, just like is sometimes done in initialize methods. I know this is kind of obvious; I just wanted to clarify for any Ruby newcomers that you don't have to use an alias (specifically, this is sometimes called an "around alias") in every case.
class A
  def a
    # do stuff for A
  end
end

class B < A
  def a
    # do some stuff specific to B
    super
    # or use super() if you don't want super to pass on any args that method a might have had
    # super/super() can also be called first
    # it should be noted that some design patterns call for avoiding this construct
    # as it creates a tight coupling between the classes. If you control both
    # classes, it's not as big a deal, but if the superclass is outside your control
    # it could change, w/o you knowing. This is pretty much composition vs inheritance
  end
end
