Make dynamic method calls using NoMethodError handler instead of method_missing - ruby

I'm trying to make an API for dynamically reloading processes; right now I'm at the point where I want to provide a method called reload! in all contexts. However, I'm implementing this method on an object that has some state (so it can't live on Kernel).
Suppose we have something like
WorkerForker.run_in_worker do
  # some code over here...
  reload! if some_condition
end
Inside the run_in_worker method there is a code like the following:
begin
  worker = Worker.new(pid, stream)
  block.call
rescue NoMethodError => e
  if (e.message =~ /reload!/)
    puts "reload! was called"
    worker.reload!
  else
    raise e
  end
end
So I'm doing it this way because I want to make the reload! method available in any nested context, and I don't want to mess with the block I'm receiving by instance_eval-ing it on the worker instance.
So my question is: are there any complications with this approach? Has anybody done this already (I haven't read that much code yet)? And is there a better way to achieve what this code is trying to do?

Assuming I understand you now, how about this:
my_object = Blah.new

Object.send(:define_method, :reload!) {
  my_object.reload!
  ...
}
Using this method, every object that invokes reload! modifies the same shared state, since my_object is captured by the block passed to define_method.
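For illustration, a minimal sketch of how that plays out with the names from the question (WorkerForker, Worker, pid, stream, and some_condition are the asker's; assume they are in scope):
worker = Worker.new(pid, stream)

# Every object now responds to reload!, and every call delegates to the same
# worker instance, because the block closes over it.
Object.send(:define_method, :reload!) { worker.reload! }

WorkerForker.run_in_worker do
  reload! if some_condition # resolves in any nested context, no rescue needed
end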

What's wrong with doing this?
def run_in_worker(&block)
  ...
  worker = Worker.new(pid, stream)
  block.call(worker)
end

WorkerForker.run_in_worker do |worker|
  worker.reload! if some_condition
end

It sounds like you just want every method to know about an object without the method or the method's owner having been told about it. The way to accomplish this is a global variable. It's not generally considered a good idea (because it leads to concurrency issues, ownership issues, makes unit testing harder, etc.), but if that's what you want, there it is.
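A hedged sketch of the global-variable approach, reusing the question's names (Worker, pid, stream, some_condition); the global $current_worker and the choice to hang the method on Kernel are this sketch's assumptions:
$current_worker = Worker.new(pid, stream)

module Kernel
  # Available in every context, but every caller shares the one global worker.
  def reload!
    $current_worker.reload!
  end
end

WorkerForker.run_in_worker do
  reload! if some_condition # finds Kernel#reload!, which delegates to the global
end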

Related

Understanding Ruby define_method with initialize

So, I'm currently learning about metaprogramming in Ruby and I want to fully understand what is happening behind the scenes.
I followed a tutorial and used some of its methods in my own small project, an importer for CSV files, and I'm having difficulty wrapping my head around one of the methods used.
I know that define_method exists in Ruby to create methods "on the fly", which is great. Now, in the tutorial, the initialize method used to instantiate an object of a class is defined with define_method, so basically it looks like this:
class Foo
  def self.define_initialize(attributes)
    define_method(:initialize) do |*args|
      attributes.zip(args) do |attribute, value|
        instance_variable_set("@#{attribute}", value)
      end
    end
  end
end
Next, in an initializer of the other class, this method is first called with Foo.define_initialize(attributes), where attributes is the header row from the CSV file, like ["attr_1", "attr_2", ...], so the *args are not provided yet.
Then, in the next step, a loop iterates over the data:
@foos = data[1..-1].map do |d|
  Foo.new(*d)
end
So here *d gets passed as the *args to the initialize method, or rather to its block.
So, is it right that when Foo.define_initialize gets called, the method is just "built" for later calls to the class?
So I theoretically end up with a class that now has a method like:
def initialize(*args)
  # ... do stuff
end
Because otherwise it would have to throw an exception like "missing arguments" or something; so, in other words, it just defines the method, like the name implies.
I hope that I made my question clear enough, because as a Rails developer coming from the "Rails magic" I would really like to understand what is happening behind the scenes in some cases :).
Thanks for any helpful reply!
Short answer: yes. Long answer:
First, let's explain in a really (REALLY) simple way how metaprogramming works in Ruby. In Ruby, the definition of anything is never closed; that means you can add, update, or delete the behavior of anything (really, almost anything) at any moment. So if you want to add a method to the Object class, you are allowed to, and the same goes for deleting or updating one.
In your example, you are doing nothing more than updating or creating the initialize method of a given class. Note that defining initialize is not mandatory, because Ruby builds a default "blank" one for you if you don't create one. You may wonder, "what happens if the initialize method already exists?", and the answer is "nothing special": Ruby simply rewrites the initialize method, and new Foo.new calls will invoke the new initialize.
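A short sketch of that behaviour, using the Foo class from the question (the header names are invented):
headers = ["attr_1", "attr_2"]       # e.g. the CSV header row
Foo.define_initialize(headers)       # defines (or redefines) Foo#initialize

foo = Foo.new("a", "b")              # *args is ["a", "b"]
foo.instance_variable_get("@attr_1") # => "a"
foo.instance_variable_get("@attr_2") # => "b"

# Calling Foo.define_initialize again simply overwrites initialize;
# later Foo.new calls use the newest definition.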

A better way to call methods on an instance

My question has a couple of layers to it, so please bear with me. I built a module that adds workflows from the Workflow gem to an instance when you call a method on that instance. It has to be able to receive the description as a Hash or some basic data structure and then turn that into something that puts the described workflow onto the class at run-time. So everything has to happen at run-time. It's a bit complex to explain what all the crazy requirements are for, but it's still a good question, I hope. Anyway, the best I can do to briefly give some context here is this:
Build a class and include this module I built.
Create an instance of Your class.
Call the inject_workflow(some_workflow_description) method on the instance. It all must be dynamic.
The tricky part for me is that when I use public_send() or eval() or exec(), I still have to send some nested method calls, and it seems like they use two different scopes: the class's and the Workflow gem's. When someone uses the Workflow gem, they hand-write these method calls in their class, so everything is scoped correctly; the gem has access to the class it creates methods on. The way I'm trying to do it, the user doesn't hand-write the methods on the class; they get added to the class via the method shown here. So I wasn't able to get it to work using blocks, because I have to make nested block calls, e.g.
workflow() do # first method call
  # first nested method call. can't access my scope from here
  state(:state_name) do
    # second nested method call. can't access my scope
    event(:event_name, transitions_to: :transition_to_state)
  end
end
One of the things I'm trying to do is call the Workflow#state() method n number of times, while nesting the Workflow#event(with, custom_params) 0..n times. The problem for me seems to be that I can't get the right scope when I nest the methods like that.
It works just like I'd like it to (I think...) but I'm not too sure I hit the best implementation. In fact, I think I'll probably get some strong words for what I've done. I tried using public_send() and every other thing I could find to avoid using class_eval() to no avail.
Whenever I attempted to use one of the "better" methods, I couldn't quite get the scope right, and sometimes I was invoking methods on the wrong object altogether. So I think this is where I need the help, yeah?
This is what a few of the attempts were going for, but it's more pseudo-code, because I could never get this version or anything like it to fly.
# Call this as soon as you can, after .new()
def inject_workflow(description)
  public_send :workflow do
    description[:workflow][:states].each do |state|
      state.map do |name, event|
        public_send name.to_sym do # nested call occurs in Workflow gem
          # nested call occurs in Workflow gem
          public_send :event, event[:name], transitions_to: event[:transitions_to]
        end
      end
    end
  end
end
From what I was trying, all these kinds of attempts ended with the same result: my scope isn't what I need, because I'm evaluating code in the Workflow gem, not in the module or the user's class.
Anyways, here's my implementation. I would really appreciate it if someone could point me in the right direction!
module WorkflowFactory
  # ...
  def inject_workflow(description)
    # Build up an array of strings that will be used to create exactly what
    # you would hand-write in your class, if you wanted to use the gem.
    description_string_builder = ['include Workflow', 'workflow do']
    description[:workflow][:states].each do |state|
      state.map do |name, state_description|
        if state_description.nil? # if this is a final state...
          description_string_builder << "state :#{name}"
        else # because it is not a final state, add event information too.
          description_string_builder.concat([
            "state :#{name} do",
            "event :#{state_description[:event]}, transitions_to: :#{state_description[:transitions_to]}",
            "end"
          ])
        end
      end
    end
    description_string_builder << "end\n"
    begin
      # Use class_eval to run that workflow specification by
      # passing it off to the workflow gem, just like you would when you use
      # the gem normally. I'm pretty sure this is where everyone's head pops...
      self.class.class_eval(description_string_builder.join("\n"))
      define_singleton_method(:has_workflow?) { true }
    rescue Exception => e
      define_singleton_method(:has_workflow?) { !!(puts e.backtrace) }
    end
  end
end
# This is the class in question.
class Job
  include WorkflowFactory
  # ... some interesting code for your class goes here

  def next!
    current_state.events # somehow choose the correct event
  end
end

# And in some other place, where you want your "job" to be able to use a workflow, you have something like this...
job = Job.new
job.done?
# => false
until job.done? do job.next! end
# progresses through the workflow and manages its own state awareness
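For concreteness, here is the shape of description hash the implementation above expects, and roughly the string it ends up handing to class_eval; the state and event names below are invented:
description = {
  workflow: {
    states: [
      { fresh:   { event: :start,  transitions_to: :running } },
      { running: { event: :finish, transitions_to: :done } },
      { done:    nil } # final state, so only "state :done" is generated
    ]
  }
}

job = Job.new
job.inject_workflow(description)
# class_eval receives approximately:
#   include Workflow
#   workflow do
#   state :fresh do
#   event :start, transitions_to: :running
#   end
#   state :running do
#   event :finish, transitions_to: :done
#   end
#   state :done
#   end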
I started this question off under 300000 lines of text, I swear. Thanks for hanging in there!

Behavior of `super`

I have this code:
class B
  def self.definer(name, *args, &block)
    define_method(name) { self.instance_exec(*args, &block) }
  end
end
and when I try to use it, I get this error:
B.definer(:tst) { super }
# => :tst
B.new.tst
# => TypeError: self has wrong type to call super in this context: B (expected #<Class:#<Object:0x007fd3008123f8>>)
I understand that super has a special meaning and works a little differently from calling a method. Can someone explain why this happens and what is going on? It would also be great if someone could suggest a solution for this.
I don't get the same error message as you did, but I get an error anyway. super must be used within a method definition, and you are not using it in one; that is what raises the error.
Regarding the solution, I cannot give you one since it is not clear at all what you are trying to do.
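For contrast, a minimal sketch of super used where it is allowed, lexically inside an ordinary method definition (the class names here are invented):
class Animal
  def speak
    "..."
  end
end

class Dog < Animal
  def speak
    super + " woof" # legal: we are inside a regular method definition
  end
end

Dog.new.speak # => "... woof"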
You definitely don't want instance_exec there.
If you didn't have the *args involved, I'd say you just wanted this:
def self.definer(name, &block)
  define_method(name, &block)
end
But then your new definer method would do the exact same thing define_method already does, so there would be no reason to create it instead of just using define_method in the first place.
What are you actually trying to do? Explain what you want to do, and maybe someone can help you.
But I think the instance_exec in your existing implementation isn't what you want -- it is immediately executing the block upon the definer call, when define_method is called -- whereas I think you want the block executed when the method you are defining is called instead? But I'm not really sure; it depends on what you're trying to do, which is unclear. super doesn't really make sense within an instance_exec -- super to which method did you think you'd be calling?

Ruby - how to intercept a block and modify it before eval-ing or yield-ing it?

I have been thinking about blocks in Ruby.
Please consider this code:
div {
  h2 'Hello world!'
  drag
}
This calls the method div(), and passes a block to it.
With yield I can evaluate the block.
h2() is also a method, and so is drag().
Now the thing is: h2() is defined in a module, which is included. drag(), on the other hand, resides on an object and also needs some additional information. I can provide this at run-time, but not at call-time.
In other words, I need to be able to "intercept" drag(), change it, and then call that method on another object.
Is there a way to evaluate yield() line by line, or some other way to do this? I don't have to call yield yet; it would also be possible to get this code as a string, modify drag(), and then eval() it (although this sounds ugly, I just need to have this available anyway, no matter how).
If I'm understanding you correctly, it seems that you're looking for the .tap method. Tap allows you to access intermediate results within a method chain. Of course, this would require you to restructure how this is set up.
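A tiny illustration of tap, independent of the question's DSL (the values are arbitrary):
[1, 2, 3]
  .map { |n| n * 2 }
  .tap { |doubled| puts doubled.inspect } # peek at the intermediate result: prints [2, 4, 6]
  .sum                                    # => 12, tap returned the doubled array unchanged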
You can kind of do this with instance_eval and a proxy object.
The general idea would be something like this:
class DSLProxyObject
  def initialize(proxied_object)
    @target = proxied_object
  end

  def drag
    # Do some stuff
    @target.drag
  end

  def method_missing(method, *args, &block)
    @target.send(method, *args, &block)
  end
end
DSLProxyObject.new(target_object).instance_eval(&block)
You could implement each of your DSL's methods, perform whatever modifications you need to when a method is called, and then call what you need to on the underlying object to make the DSL resolve.
It's difficult to answer your question completely without a less general example, but the general idea is that you would create an object context that has the information you need and which wraps the underlying DSL, then evaluate the DSL block in that context, which would let you intercept and modify individual calls on a per-usage basis.
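Tying that back to the question's div / h2 / drag block, a sketch of how the block could be routed through the proxy; div and target_object are assumed to exist roughly as in the question and the snippet above:
def div(&block)
  # Evaluate the caller's block against the proxy instead of yielding it directly.
  DSLProxyObject.new(target_object).instance_eval(&block)
end

div {
  h2 'Hello world!' # unknown to the proxy, so method_missing forwards it on
  drag              # intercepted by DSLProxyObject#drag, which can adjust the call first
}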

RSpec any_instance return self

I'm trying to stub any instance of some class. I need to stub the fetch method, which fills self with some data.
How can I get access to self, modify it, and return it from the fetch method?
MyObject.any_instance.stub(:fetch) { self }
doesn't return a MyObject instance.
Maybe mocks are more useful in this situation. Unfortunately, I haven't understood them yet.
There's an open rspec-mocks issue to address this. I hope to get around to addressing it at some point, but it's not simple to add this in a way that doesn't break existing spec suites that use any_instance with a block implementation, because we would start yielding an additional argument (e.g. the object instance).
Overall, any_instance can come in handy in some situations, but it's a bit of a smell, and you'll generally have fewer issues if you can find a way to mock or stub individual instances.
Here's a workaround that I have not tested but should work:
orig_new = MyObject.method(:new)

MyObject.stub(:new) do |*args, &block|
  orig_new.call(*args, &block).tap do |instance|
    instance.stub(:fetch) { instance }
  end
end
Essentially, we're simulating any_instance here by hooking into MyObject.new so that we can stub fetch on each new instance that is instantiated.
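With that stub in place, the behaviour asked about in the question should hold, roughly (MyObject and fetch are the question's names):
obj = MyObject.new    # goes through the stubbed .new, so fetch is stubbed on this instance
obj.fetch.equal?(obj) # => true: fetch now returns the instance itself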
All that said, it's important to "listen to your tests", and, when something is hard to test, consider what that says about your design, rather than immediately using power tools like any_instance. Your original question doesn't give enough context for me to speculate anything about your design, but it's definitely where I would start when faced with a need to do this.
As far as I can see, this doesn't seem to be possible, for some reason. I checked the current rspec-mocks implementation, and the method that actually invokes the stub implementation seems to be the following:
# lib/rspec/mocks/message_expectation.rb:450
def call_implementation(*args, &block)
  @implementation.arity == 0 ? @implementation.call(&block) : @implementation.call(*args, &block)
end
As it seems, the block is simply invoked on its own and not through instance_eval. Maybe there is another technique to achieve what you want, though; after all, I am not an RSpec expert by any means.
