Can anybody explain why the Redis (redis-rb) synchrony driver works directly under an EM.synchrony block but doesn't within EM::Connection?
Consider the following example:
EM.synchrony do
  redis = Redis.new(:path => "/usr/local/var/redis.sock")
  id = redis.incr "local:id_counter"
  puts id

  EM.start_server('0.0.0.0', 9999) do |c|
    def c.receive_data(data)
      redis = Redis.new(:path => "/usr/local/var/redis.sock")
      puts redis.incr "local:id_counter"
    end
  end
end
I'm getting
can't yield from root fiber (FiberError)
when using it within receive_data. From reading the source code of both EventMachine and em-synchrony, I can't figure out what the difference is.
Thanks!
PS: The obvious workaround is to wrap the redis code within EventMachine::Synchrony.next_tick, as hinted at in issue #59, but given the EM.synchrony block I would expect the call to already be wrapped within a Fiber...
PPS: The same applies to using EM::Synchrony::Iterator.
You're doing something rather tricky here. You're providing a block to start_server, which effectively creates an "anonymous" connection class and executes your block within the post_init method of that class. Then, within that class, you're defining an instance method.
The thing to keep in mind is: when the reactor executes a callback, or a method like receive_data, that happens on the main thread (and within the root fiber), which is why you're seeing this exception. To work around this, you need to wrap each callback to be executed within a Fiber (e.g., see the Synchrony.add_(periodic)_timer methods).
To address your actual exception: wrap the execution of receive_data within a Fiber. The outer EM.synchrony {} won't do anything for callbacks that are scheduled later by the reactor.
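For example, a minimal sketch of that workaround, adapted from the server code in the question (untested):

EM.start_server('0.0.0.0', 9999) do |c|
  def c.receive_data(data)
    # Spawn a Fiber per callback so the synchrony driver has a
    # non-root fiber to yield from while the Redis call is pending.
    Fiber.new do
      redis = Redis.new(:path => "/usr/local/var/redis.sock")
      puts redis.incr "local:id_counter"
    end.resume
  end
end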
I stumbled across a curious behaviour and haven't been able to figure out what I was doing wrong. I hope somebody can enlighten me.
I was trying to stub the Redis client during my tests in a Rails application, using the MockRedis gem. I created a RedisFactory class with a single class method .create, which I wanted to stub with an optional MockRedis instance, like so:
def stub_redis(mock_redis = MockRedis.new)
  RedisFactory.stub :create, mock_redis { yield }
end
This did not work and always threw an ArgumentError in Foo#Bar: wrong number of arguments (0 for 1). Some further debugging revealed that a call to RedisFactory.create 'foo' within the stub block resulted in an error saying that 'foo' is not a method on an instance of MockRedis::Database.
However, I have been able to solve this problem with the following code snippet, using a lambda function to catch the incoming arguments:
def stub_redis(mock_redis = MockRedis.new)
  RedisFactory.stub(:create, ->(*_args) { mock_redis }) { yield }
end
Could anybody explain this behaviour?
As of now, Minitest tries to guess whether the passed val_or_callable is a Proc by checking whether it responds to call, cf.:
https://apidock.com/ruby/Proc/call
https://github.com/seattlerb/minitest/blob/b84b8176930bacb4d70d6bef476b1ea0f7c94977/lib/minitest/mock.rb#L226
Unfortunately, in this specific case both Redis and the passed MockRedis instance provide a generic call method for executing Redis commands, cf.:
https://github.com/brigade/mock_redis/blob/master/lib/mock_redis.rb#L51
You already found the correct workaround. In this case, your only option is to explicitly use the proc version of stub.
Note: there are some communities that use def call as a pattern for ServiceObjects in Ruby, and they may have a difficult time using minitest's stub. It is probably a good idea to open an issue in seattlerb/minitest.
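To illustrate the heuristic, here is a simplified sketch (not Minitest's actual source) of what the stubbed method effectively does with the value:

val_or_callable = MockRedis.new
if val_or_callable.respond_to?(:call)
  # True for MockRedis, since it defines a generic `call` for running Redis
  # commands, so Minitest invokes it with the stubbed method's arguments...
  val_or_callable.call('foo') # tries to execute 'foo' as a Redis command
else
  # ...instead of simply returning the stub value, which is what was intended.
  val_or_callable
end

The lambda workaround sidesteps this: the lambda is genuinely callable, swallows whatever arguments stub forwards, and just returns the mock.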
My question has a couple of layers to it, so please bear with me. I built a module that adds workflows from the Workflow gem to an instance when you call a method on that instance. It has to be able to receive the description as a Hash or some basic data structure and then turn that into something that puts the described workflow onto the class at run-time. So everything has to happen at run-time. It's a bit complex to explain what all the crazy requirements are for, but it's still a good question, I hope. The best I can do to give brief context here is this:
Build a class and include this module I built.
Create an instance of your class.
Call the inject_workflow(some_workflow_description) method on the instance. It all must be dynamic.
The tricky part for me is that when I use public_send() or eval() or exec(), I still have to send some nested method calls, and it seems like they use two different scopes: the class's and the Workflow gem's. When someone uses the Workflow gem, they hand-write these method calls in their class, so everything is scoped correctly; the gem gets access to the class it creates methods on. The way I'm trying to do it, the user doesn't hand-write the methods on the class; they get added to the class via the method shown here. I wasn't able to get it to work using blocks, because I have to make nested block calls, e.g.:
workflow() do # first method call
  # first nested method call; can't access my scope from here
  state(:state_name) do
    # second nested method call; can't access my scope
    event(:event_name, transitions_to: :transition_to_state)
  end
end
One of the things I'm trying to do is call the Workflow#state() method n times, while nesting Workflow#event(with, custom_params) calls 0..n times. The problem seems to be that I can't get the right scope when I nest the methods like that.
It works just like I'd like it to (I think...), but I'm not too sure I hit the best implementation. In fact, I think I'll probably get some strong words for what I've done. I tried using public_send() and every other thing I could find to avoid using class_eval(), to no avail.
Whenever I attempted one of the "better" methods, I couldn't quite get the scope right, and sometimes I was invoking methods on the wrong object altogether. So I think this is where I need the help, yeah?
This is what a few of the attempts were going for, though it's more pseudo-code, because I could never get this version or any like it to fly:
# Call this as soon as you can, after .new()
def inject_workflow(description)
  public_send :workflow do
    description[:workflow][:states].each do |state|
      state.map do |name, event|
        public_send name.to_sym do # nested call occurs in Workflow gem
          # nested call occurs in Workflow gem
          public_send :event, event[:name], transitions_to: event[:transitions_to]
        end
      end
    end
  end
end
All these kinds of attempts ended up with the same result: my scope isn't what I need, because I'm evaluating code in the Workflow gem, not in the module or the user's class.
Anyways, here's my implementation. I would really appreciate it if someone could point me in the right direction!
module WorkflowFactory
  # ...
  def inject_workflow(description)
    # Build up an array of strings that will be used to create exactly what
    # you would hand-write in your class if you wanted to use the gem.
    description_string_builder = ['include Workflow', 'workflow do']
    description[:workflow][:states].each do |state|
      state.map do |name, state_description|
        if state_description.nil? # if this is a final state...
          description_string_builder << "state :#{name}"
        else # because it is not a final state, add event information too.
          description_string_builder.concat([
            "state :#{name} do",
            "event :#{state_description[:event]}, transitions_to: :#{state_description[:transitions_to]}",
            "end"
          ])
        end
      end
    end
    description_string_builder << "end\n"
    begin
      # Use class_eval to run that workflow specification by
      # passing it off to the workflow gem, just like you would when you use
      # the gem normally. I'm pretty sure this is where everyone's head pops...
      self.class.class_eval(description_string_builder.join("\n"))
      define_singleton_method(:has_workflow?) { true }
    rescue Exception => e
      define_singleton_method(:has_workflow?) { !!(puts e.backtrace) }
    end
  end
end
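(As an aside, here is a hedged, untested sketch of the scope-preserving alternative the nested-block pseudo-code above was reaching for. Ruby blocks are closures, so assuming the Workflow DSL instance_evals its blocks, only self changes inside them; local variables like states below stay visible:)

def inject_workflow(description)
  states = description[:workflow][:states]
  self.class.class_eval do
    include Workflow
    workflow do
      states.each do |st|
        st.each do |name, desc|
          if desc.nil? # final state, no event
            state name.to_sym
          else
            state name.to_sym do
              event desc[:event], transitions_to: desc[:transitions_to]
            end
          end
        end
      end
    end
  end
end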
# This is the class in question.
class Job
  include WorkflowFactory
  # ... some interesting code for your class goes here
  def next!
    current_state.events # somehow choose the correct event
  end
end
# and in some other place where you want your "job" to be able to use a workflow, you have something like this...
job = Job.new
job.done?
# => false
until job.done? do job.next! end
# progresses through the workflow and manages its own state awareness
I started this question off under 300000 lines of text, I swear. Thanks for hanging in there! Here's even more documentation, if you're not asleep yet.
module in my gem
I'm trying to write multi-threaded code to achieve parallelism for a task that is taking too much time. Here is how it looks:
class A
  attr_reader :mutex, :logger

  def initialize
    @receiver = ZeroMQ::Queue
    @sender = ZeroMQ::Queue
    @mutex = Mutex.new
    @logger = Logger.new('log/test.log')
  end

  def run
    50.times do
      Thread.new do
        run_parallel(@receiver.get_data)
      end
    end
  end

  def run_parallel(data)
    ## Define some local variables.
    a, b = data

    ## Log some data to file.
    logger.info "Got #{a}"
    output = B.get_data(b)

    ## Send output back to zeromq.
    mutex.synchronize { @sender.send_data(output) }
  end
end
I need to make sure this code is thread-safe. Sharing and changing data (like @, @@, or $ variables without a proper mutex) across threads can lead to thread safety issues.
I'm not sure whether passing the data to a method results in a thread safety issue as well. In other words, do I have to wrap the part of my code inside run_parallel in a mutex if I'm not using any @, @@, or $ variables inside the method? Or is the given mutex usage enough?
mutex.synchronize { @sender.send_data(output) }
Whenever you're running in a threaded context, you've got to be aware (as a simple heuristic) of anything that's not a local variable. I see these potential problems in your code:
run_parallel(@receiver.get_data) Is get_data threadsafe? You've synchronized send_data, and they're both a ZeroMQ::Queue, so I'm guessing not.
output = B.get_data(b) Is this call threadsafe? If it just pulls something out of b, you're fine, but if it uses state in B or calls anything else that has state, you're in trouble.
logger.info "Got #{a}" @coreyward points out that Logger is threadsafe, so this is no trouble. Just make sure to stick with it over puts, which will garble your output.
Once you're inside the mutex for @sender.send_data, you're safe, assuming @sender isn't accessed anywhere else in your code by another thread. Of course, the more synchronize calls you throw around, the more your threads will block on each other and lose performance, so there's a balance you need to find in your design.
Do what you can to make your code functional: try to use only local state and write methods that don't have side effects. As your task gets more complicated, there are libraries like concurrent-ruby with threadsafe data structures and other patterns that can help.
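For example, a minimal sketch using concurrent-ruby's thread-safe structures (the counting task here is made up purely for illustration):

require 'concurrent'

counter = Concurrent::AtomicFixnum.new(0) # thread-safe counter
results = Concurrent::Array.new           # like Array, but safe to share across threads

threads = 10.times.map do
  Thread.new do
    results << counter.increment # increment is atomic and returns the new value
  end
end
threads.each(&:join)
puts counter.value # => 10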
I am new to Ruby. I am confused by something I am reading here:
http://alma-connect.github.io/techblog/2014/03/rails-pub-sub.html
They offer this code:
# app/pub_sub/publisher.rb
module Publisher
  extend self

  # delegate to ActiveSupport::Notifications.instrument
  def broadcast_event(event_name, payload={})
    if block_given?
      ActiveSupport::Notifications.instrument(event_name, payload) do
        yield
      end
    else
      ActiveSupport::Notifications.instrument(event_name, payload)
    end
  end
end
What is the difference between doing this:
ActiveSupport::Notifications.instrument(event_name, payload) do
  yield
end
versus doing this:
ActiveSupport::Notifications.instrument(event_name, payload)
yield
If this were another language, I might assume that we first call the method instrument() and then call yield so as to invoke the block. But that is not what they wrote: they show yield nested inside of ActiveSupport::Notifications.instrument().
Should I assume that ActiveSupport::Notifications.instrument() returns some kind of iterable that we will iterate over? Are we calling yield once for every item returned from ActiveSupport::Notifications.instrument()?
While blocks are frequently used for iteration, they have many other uses. One is to ensure proper resource cleanup. For example,
ActiveRecord::Base.with_connection do
  ...
end
checks out a database connection for the thread, yields to the block, and then checks the connection back in.
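The same pattern exists in core Ruby, for instance:

File.open('events.log', 'a') do |f|
  f.puts 'instrumented!'
end # the file is closed here, whether or not the block raised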
In the specific case of the instrument method you found, what it does is add, to the event data it is about to broadcast, information about how long its block took to execute. The actual implementation is more complicated, but in broad terms it's not so different from:
event = Event.new(event_name, payload)
event.start = Time.now
yield
event.end = Time.now
event
The use of yield allows it to wrap the execution of your code with some timing code. In your second example, no block is passed to instrument; it detects this and will record the event as having no duration.
The broadcast_event method has been designed to accept an optional block (which allows you to pass a code block to the method).
ActiveSupport::Notifications.instrument also takes an optional block.
Your first example simply takes the block passed to broadcast_event and forwards it along to ActiveSupport::Notifications.instrument. If there's no block, you can't yield anything, hence the two different calls.
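Incidentally, the same forwarding can be written without the block_given? branch by capturing the block explicitly; this equivalent sketch relies on the fact that passing &block when block is nil behaves like passing no block at all:

def broadcast_event(event_name, payload = {}, &block)
  # Forward the (possibly nil) block straight through to instrument.
  ActiveSupport::Notifications.instrument(event_name, payload, &block)
end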
I'm trying to make an API for dynamically reloading processes; right now I'm at the point where I want to provide, in all contexts, a method called reload!. However, I'm implementing this method on an object that has some state (so it can't live on Kernel).
Suppose we have something like
WorkerForker.run_in_worker do
  # some code over here...
  reload! if some_condition
end
Inside the run_in_worker method there is code like the following:
begin
  worker = Worker.new(pid, stream)
  block.call
rescue NoMethodError => e
  if e.message =~ /reload!/
    puts "reload! was called"
    worker.reload!
  else
    raise e
  end
end
I'm doing it this way because I want to make the reload! method available in any nested context, and I don't want to mess up the block I'm receiving with an instance_eval on the worker instance.
So my question is: are there any complications with this approach? I don't know if anybody has done this already (I haven't read that much code yet); if it has been done, is there a better way to achieve what this code is after?
Assuming I understand you now, how about this:
my_object = Blah.new
Object.send(:define_method, :reload!) {
  my_object.reload!
  ...
}
Using this method, every object that invokes reload! is modifying the same shared state, since my_object is captured by the block passed to define_method.
What's wrong with doing this?
def run_in_worker(&block)
  ...
  worker = Worker.new(pid, stream)
  block.call(worker)
end
WorkerForker.run_in_worker do |worker|
  worker.reload! if some_condition
end
It sounds like you just want every method to know about an object without the method or the method's owner having been told about it. The way to accomplish this is a global variable. It's not generally considered a good idea (it leads to concurrency issues and ownership issues, and it makes unit testing harder), but if that's what you want, there it is.
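For completeness, a bare-bones sketch of that approach (reusing the Worker, pid, and stream from the question's snippet, which are assumptions here):

$worker = Worker.new(pid, stream) # a global, visible from any context

def reload! # defined at the top level, so callable from anywhere
  $worker.reload!
end

WorkerForker.run_in_worker do
  reload! if some_condition # resolves via the top-level method and the global
end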