I could use some help on this one, given this code:
result1, result2, result3 = do_stuff {
  method_1
  method_2
  method_3
}
I would like to be able to write a method called do_stuff that can call each line of that block individually and return a result for each line/block. Can it be done? Am I going about this the wrong way? Something like this (doesn't work at all) is what I am thinking.
def do_stuff(&block)
  block.each_block do |block|
    block.call
  end
end
EDIT: What I am trying to accomplish is to be able to run each method/block call inside the method "do_stuff" in parallel (in its own thread) and also add some logging around each method call.
I agree with mu above, you should explain what you are trying to do, as there is probably a more suitable pattern to use.
BTW, you can do what you ask for with a minor change:
result1, result2, result3 = do_stuff {
  [
    method_1,
    method_2,
    method_3
  ]
}
or, perhaps, more elegantly, without the block:
result1, result2, result3 = [
  method_1,
  method_2,
  method_3
]
:)
OK, it looks clearer after the question was updated. You could do something like this, using method_missing, instance_eval and threads:
class Parallelizer
  class << self
    def run(receiver, &block)
      @receiver = receiver
      instance_eval(&block)
      # wait for all threads to finish
      @threads.each{ |t| t.join }
      @results
    end

    def method_missing(*args, &block)
      @threads ||= []
      @results ||= []
      @threads.push Thread.new{
        # you could add here custom wrappings
        @results.push(@receiver.send(*args, &block))
      }
    end
  end
end
class Test
  def take_a_break name, sec
    puts "#{name} taking a break for #{sec} seconds"
    Kernel.sleep sec
    puts "#{name} done."
    name
  end
end
t = Test.new
results = Parallelizer.run(t) do
  take_a_break 'foo', 3
  take_a_break 'bar', 2
  take_a_break 'baz', 1
end
Be careful, though: this is not well tested and I am not sure how thread-safe it is.
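For the logging mentioned in the edit, here is a minimal sketch of what the "custom wrappings" comment could look like. The LoggingParallelizer name and the Logger setup are illustrative only, and the same caveats about thread-safety apply:

require 'logger'

class LoggingParallelizer
  class << self
    def run(receiver, &block)
      @receiver = receiver
      @threads  = []
      @results  = []
      @logger   = Logger.new($stdout)
      instance_eval(&block)
      @threads.each(&:join)
      @results
    end

    def method_missing(*args, &block)
      @threads.push Thread.new{
        # log before and after each call in its own thread
        @logger.info "starting #{args.first}"
        @results.push(@receiver.send(*args, &block))
        @logger.info "finished #{args.first}"
      }
    end
  end
end

results = LoggingParallelizer.run(Test.new) do
  take_a_break 'foo', 3
  take_a_break 'bar', 2
end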
I would like to be able to insert some code at the beginning and at the end of methods in my class. I would like to avoid repetition as well.
I found this answer helpful; however, it doesn't help with the repetition.
class MyClass
  def initialize
    [:a, :b].each{ |method| add_code(method) }
  end

  def a
    sleep 1
    "returning from a"
  end

  def b
    sleep 1
    "returning from b"
  end

  private

  def elapsed
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    block_value = yield
    finish = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    puts "elapsed: #{finish - start} seconds, block_value: #{block_value}."
    block_value
  end

  def add_code(meth)
    meth = meth.to_sym
    self.singleton_class.send(:alias_method, "old_#{meth}".to_sym, meth)
    self.singleton_class.send(:define_method, meth) do
      elapsed do
        send("old_#{meth}".to_sym)
      end
    end
  end
end
The above does work, but what would be a more elegant solution? I would love to be able to, for example, put attr_add_code at the beginning of the class definition and list the methods I want the code added to, or perhaps even specify that I want it added to all public methods.
Note: The self.singleton_class is just a workaround since I am adding code during the initialisation.
If by repetition you mean the listing of methods you want to instrument, then you can do something like:
module Measure
  def self.prepended(base)
    method_names = base.instance_methods(false)
    base.instance_eval do
      method_names.each do |method_name|
        alias_method "__#{method_name}_without_timing", method_name
        define_method(method_name) do
          t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
          public_send("__#{method_name}_without_timing")
          t2 = Process.clock_gettime(Process::CLOCK_MONOTONIC)
          puts "Method #{method_name} took #{t2 - t1}"
        end
      end
    end
  end
end
class Foo
  def a
    puts "a"
    sleep(1)
  end

  def b
    puts "b"
    sleep(2)
  end
end
Foo.prepend(Measure)
foo = Foo.new
foo.a
foo.b
# => a
# => Method a took 1.0052679998334497
# => b
# => Method b took 2.0026899999938905
The main change is that I use prepend; inside the prepended callback you can find the list of methods defined on the class with instance_methods(false), the false parameter indicating that ancestors should not be considered.
Instead of using method aliasing, which in my opinion is something of the past since the introduction of Module#prepend, we can prepend an anonymous module that has a method for each instance method of the class to be measured. This will cause calling MyClass#a to invoke the method in this anonymous module, which measures the time and simply resorts to super to invoke the actual MyClass#a implementation.
def measure(klass)
  mod = Module.new do
    klass.instance_methods(false).each do |method|
      define_method(method) do |*args, &blk|
        start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        value = super(*args, &blk)
        finish = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        puts "elapsed: #{finish - start} seconds, value: #{value}."
        value
      end
    end
  end
  klass.prepend(mod)
end
Alternatively, you can use class_eval, which is also faster and allows you to just call super without specifying any arguments to forward all arguments from the method call, which isn't possible with define_method.
def measure(klass)
  mod = Module.new do
    klass.instance_methods(false).each do |method|
      class_eval <<-CODE, __FILE__, __LINE__ + 1
        def #{method}(*)
          start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
          value = super
          finish = Process.clock_gettime(Process::CLOCK_MONOTONIC)
          puts "elapsed: \#{finish - start} seconds, value: \#{value}."
          value
        end
      CODE
    end
  end
  klass.prepend(mod)
end
To use this, simply do:
measure(MyClass)
It looks like you're trying to do some benchmarking. Have you checked out the benchmark library? It's in the standard library.
require 'benchmark'
puts Benchmark.measure { MyClass.new.a }
puts Benchmark.measure { MyClass.new.b }
Another possibility would be to create a wrapper class like so:
class Measure < BasicObject
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args)
    t1 = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
    target.public_send(name, *args)
    t2 = ::Process.clock_gettime(::Process::CLOCK_MONOTONIC)
    ::Kernel.puts "Method #{name} took #{t2 - t1}"
  end

  def respond_to_missing?(*args)
    target.respond_to?(*args)
  end

  private

  attr_reader :target
end
foo = Measure.new(Foo.new)
foo.a
foo.b
I am working on a project of context-oriented programming in ruby. And I come to this problem:
Suppose that I have a class Klass:
class Klass
  def my_method
    proceed
  end
end
I also have a proc stored inside a variable impl. And impl contains { puts "it works!" }.
From somewhere outside Klass, I would like to define a method called proceed inside the method my_method, so that if I call Klass.new.my_method, I get the result "it works!".
So the final result should be something like that:
class Klass
  def my_method
    def proceed
      puts "it works!"
    end
    proceed
  end
end
Or, if you have any other idea to make the call of proceed inside my_method work, that's also good. But the proceed of another method (say my_method_2) isn't the same as the one in my_method.
In fact, the proceed of my_method represents an old version of my_method, and the proceed of my_method_2 represents an old version of my_method_2.
Thanks for your help
Disclaimer: you are doing it wrong!
There must be more robust, elegant and rubyish way to achieve what you want. If you still want to abuse metaprogramming, here you go:
class Klass
  def self.proceeds
    @proceeds ||= {}
  end

  def def_proceed
    self.class.proceeds[caller.first[/`.*?'/]] = Proc.new
  end

  def proceed *args
    self.class.proceeds[caller.first[/`.*?'/]].(*args)
  end

  def m_1
    def_proceed { puts 1 }
    proceed
  end

  def m_2
    def_proceed { puts 2 }
    proceed
  end
end
inst = Klass.new
inst.m_1
#⇒ 1
inst.m_2
#⇒ 2
What you in fact need is Module#prepend, calling super from there.
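A minimal sketch of that prepend-based approach (the class and method names here are illustrative):

class Klass
  def my_method
    puts "original my_method"
  end
end

refinement = Module.new do
  def my_method
    puts "it works!"   # the new behaviour supplied from outside
    super              # "proceed" to the previous my_method
  end
end

Klass.prepend(refinement)
Klass.new.my_method
# it works!
# original my_method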
One way of doing that is to construct a hash whose keys are the names of the methods calling proceed and whose values are procs that represent the implementations of proceed for each method calling it.
class Klass
  singleton_class.send(:attr_reader, :proceeds)
  @proceeds = {}

  def my_method1(*args)
    proceed(__method__, *args)
  end

  def my_method2(*args)
    proceed(__method__, *args)
  end

  def proceed(m, *args)
    self.class.proceeds[m].call(*args)
  end
end
def define_proceed(m, &block)
  Klass.proceeds[m] = Proc.new(&block)
end
define_proceed(:my_method1) { |*arr| arr.sum }
define_proceed(:my_method2) { |a,b| "%s-%s" % [a,b] }
k = Klass.new
k.my_method1(1,2,3) #=> 6
k.my_method2("cat", "dog") #=> "cat-dog"
I understand that
def a(&block)
  block.call(self)
end
and
def a()
  yield self
end
lead to the same result, assuming a block is given, as in a { }. My question is, since I stumbled over some code like that, whether it makes any difference, or whether there is any advantage to having (if I do not otherwise use the block variable):
def a(&block)
  yield self
end
This is a concrete case where I do not understand the use of &block:
def rule(code, name, &block)
  @rules = [] if @rules.nil?
  @rules << Rule.new(code, name)
  yield self
end
The only advantage I can think of is for introspection:
def foo; end
def bar(&blk); end
method(:foo).parameters #=> []
method(:bar).parameters #=> [[:block, :blk]]
IDEs and documentation generators could take advantage of this. However, it does not affect Ruby's argument passing. When calling a method, you can pass or omit a block, regardless of whether it is declared or invoked.
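For example, given the definitions above:

foo { 1 + 1 }  # the block is silently ignored; foo never yields or captures it
bar            # also fine; inside bar, blk is simply nil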
The main difference between
def pass_block
  yield
end
pass_block { 'hi' } #=> 'hi'
and
def pass_proc(&blk)
  blk.call
end
pass_proc { 'hi' } #=> 'hi'
is that blk, an instance of Proc, is an object and can therefore be passed to other methods. By contrast, blocks are not objects and cannot be passed around.
def pass_proc(&blk)
  puts "blk.is_a?(Proc)=#{blk.is_a?(Proc)}"
  receive_proc(blk)
end

def receive_proc(proc)
  proc.call
end

pass_proc { 'ho' }
blk.is_a?(Proc)=true
#=> "ho"
I'm currently working on an interface that allows me to wrap arbitrary method calls with a chain of procs. Without going into too much detail, I currently have an interface that accepts something like this:
class Spy
  def initialize
    @procs = []
  end

  def wrap(&block)
    @procs << block
  end

  def execute
    original_proc = Proc.new { call_original }
    @procs.reduce(original_proc) do |memo, p|
      Proc.new { p.call(&memo) }
    end.call
  end

  def call_original
    puts 'in the middle'
  end
end
spy = Spy.new
spy.wrap do |&block|
  puts 'hello'
  block.call
end

spy.wrap do |&block|
  block.call
  puts 'goodbye'
end
spy.execute
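For reference, calling spy.execute with the two wrappers above should print:

hello
in the middle
goodbye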
What I'd like to do though is remove the |&block| and block.call from my API and use yield instead.
spy.wrap do
  puts 'hello'
  yield
end
This didn't work and raised a LocalJumpError: no block given (yield) error.
I've also tried creating methods by passing the proc to define_singleton_method in the reduce, but I haven't had any luck.
def execute
  original_proc = Proc.new { call_original }
  @procs.reduce(original_proc) do |memo, p|
    define_singleton_method :hello, &p
    Proc.new { singleton_method(:hello).call(&memo) }
  end.call
end
Is there another approach I can use? Is there any way to yield from a Proc, or to use the Proc to initialize something that can be yielded to?
Using yield in your wrap block does not make much sense unless the method enclosing that block was itself given a block:
def foo
  spy.wrap do
    puts "executed in wrap from foo"
    yield
  end
end
If you call foo without a block, it will raise the exception, since yield can't find a block to execute. But if you pass a block to the foo method, then it will be invoked:
foo do
  puts "foo block"
end
Will output
executed in wrap from foo
foo block
In conclusion I think you misunderstood how yield works and I don't think it is what you want to achieve here.
I was looking in detail at the Thread class. Basically, I was looking for an elegant mechanism to allow thread-local variables to be inherited as threads are created. For example the functionality I am looking to create would ensure that
Thread.new do
  self[:foo] = "bar"
  t1 = Thread.new { puts self[:foo] }
end
=> "bar"
i.e. a Thread would inherit its calling thread's thread-local variables.
So I hit upon the idea of redefining Thread.new, so that I could add an extra step to copy the thread-local variables into the new thread from the current thread. Something like this:
class Thread
  def self.another_new(*args)
    o = allocate
    o.send(:initialize, *args)
    Thread.current.keys.each{ |k| o[k] = Thread.current[k] }
    o
  end
end
But when I try this I get the following error:
:in `allocate': allocator undefined for Thread (TypeError)
I thought that as Thread is a subclass of Object, it should have a working #allocate method. Is this not the case?
Does anyone have any deep insight on this, and on how to achieve the functionality I am looking for?
Thanks in advance
Steve
Thread.new do
  Thread.current[:foo] = "bar"
  t1 = Thread.new(Thread.current) do |parent|
    puts parent[:foo] ? parent[:foo] : 'nothing'
  end.join
end.join
#=> bar
UPDATED:
Try this in irb:
thread_ext.rb
class Thread
  def self.another_new(*args)
    parent = Thread.current
    a = Thread.new(parent) do |parent|
      parent.keys.each{ |k| Thread.current[k] = parent[k] }
      yield
    end
    a
  end
end
use_case.rb
A = Thread.new do
  Thread.current[:local_a] = "A"
  B1 = Thread.another_new do
    C1 = Thread.another_new{ p Thread.current[:local_a] }.join
  end
  B2 = Thread.another_new do
    C2 = Thread.another_new{ p Thread.current[:local_a] }.join
  end
  [B1, B2].each{ |b| b.join }
end.join
output
"A"
"A"
Here is a revised answer based on @CodeGroover's suggestion, with a simple unit test harness:
ext/thread.rb
class Thread
  def self.inherit(*args, &block)
    parent = Thread.current
    t = Thread.new(parent, *args) do |parent|
      parent.keys.each{ |k| Thread.current[k] = parent[k] }
      yield(*args)
    end
    t
  end
end
test/thread.rb
require 'test/unit'
require 'ext/thread'

class ThreadTest < Test::Unit::TestCase
  def test_inherit
    Thread.current[:foo] = 1
    m = Mutex.new

    # check basic inheritance
    t1 = Thread.inherit do
      assert_equal(1, Thread.current[:foo])
    end

    # check inheritance with parameters - in this case a mutex
    t2 = Thread.inherit(m) do |m|
      assert_not_nil(m)
      m.synchronize{ Thread.current[:bar] = 2 }
      assert_equal(1, Thread.current[:foo])
      assert_equal(2, Thread.current[:bar])
      sleep 0.1
    end

    # ensure t2 runs its mutex-synchronized block first
    sleep 0.05

    # check that the inheritance works downwards only - not back up in reverse
    m.synchronize do
      assert_nil(Thread.current[:bar])
    end

    [t1, t2].each{ |x| x.join }
  end
end
I was looking for the same thing recently and was able to come up with the following answer. Note: I am aware this is a hack and not recommended, but for the sake of answering the specific question of how you could alter the Thread.new functionality, I have done the following:
class Thread
  class << self
    alias :original_new :new

    def new(*args, **options, &block)
      original_thread = Thread.current
      instance = original_new(*args, **options, &block)
      original_thread.keys.each do |key|
        instance[key] = original_thread[key]
      end
      instance
    end
  end
end
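A quick usage check (hypothetical example; note that the keys are copied only after the child thread has already started, so the short sleep avoids racing the copy):

Thread.current[:foo] = "bar"
t = Thread.new { sleep 0.1; Thread.current[:foo] }
t.value #=> "bar"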