Waking up by push notification - ruby

Suppose:
There is some object (e.g., an array a) and a condition that depends on the object (e.g., a.empty?).
Threads other than the current thread can manipulate the object (a), so the truthiness of the condition changes over time.
How can I let the current thread sleep at some point in the code and continue (wake up) by push notification when the condition is satisfied?
I do not want to do polling like this:
...
sleep 1 until a.empty?
...
Perhaps using Fiber will be a clue.

Maybe I do not quite understand your question, but I think ConditionVariable is a good approach for such a problem.
A ConditionVariable can be used to signal threads when something happens. Let's see:
require 'thread'

a = [] # array a is empty now
mutex = Mutex.new
condvar = ConditionVariable.new

Thread.new do
  mutex.synchronize do
    sleep(5)
    a << "Hey hey!"
    # Now we have a value in the array; it's time to signal about it
    condvar.signal
  end
end

mutex.synchronize do
  condvar.wait(mutex)
  # This happens only after 5 seconds, when condvar receives the signal
  puts "Hey. Array a is not empty now!"
end
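For completeness: the usual defensive form of this pattern re-checks the condition in a loop around wait, so the waiter neither misses a signal sent before it started waiting nor proceeds on a spurious wakeup. A minimal variation of the code above (my own sketch, not part of the original answer):

require 'thread'

a = []
mutex = Mutex.new
condvar = ConditionVariable.new

producer = Thread.new do
  sleep(5)                 # simulate work before the array is filled
  mutex.synchronize do
    a << "Hey hey!"
    condvar.signal
  end
end

mutex.synchronize do
  # Re-check the condition after every wakeup; if the signal already
  # happened, the array is non-empty and we never block at all.
  condvar.wait(mutex) while a.empty?
  puts "Hey. Array a is not empty now!"
end
producer.join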

Related

Multi Threading in Ruby

I need to create 3 threads.
Each thread will print a color on the screen and sleep for x seconds.
Thread A will print red; Thread B will print yellow; Thread C will print green;
All threads must wait until it is their turn to print.
The first thread to print must be red; after printing, red will tell yellow that it's its turn to print, and so on.
The threads must be able to print multiple times (a user-specified number of times).
I'm stuck because calling @firstFlag.signal outside a thread isn't working, and the 3 threads aren't running in the right order.
How do I make the red thread go first?
My code so far:
@lock = Mutex.new
@firstFlag = ConditionVariable.new
@secondFlag = ConditionVariable.new
@thirdFlag = ConditionVariable.new

print "Tell me n's value:"
@n = gets.to_i

@threads = Array.new

@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @firstFlag.wait(@lock, t)
      puts "red : #{t}s"
      sleep(t)
      @secondFlag.signal
    end
  }
}

@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @secondFlag.wait(@lock, t)
      puts "yellow : #{t}s"
      sleep(t)
      @thirdFlag.signal
    end
  }
}

@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @thirdFlag.wait(@lock, t)
      puts "green : #{t}s"
      sleep(t)
      @firstFlag.signal
    end
  }
}

@threads.each {|t| t.join}
@firstFlag.signal
There are three bugs in your code:
First bug
Your wait calls use a timeout. This means your threads will become de-synchronized from your intended sequence, because the timeout will let each thread slip past your intended wait point.
Solution: change all your wait calls to NOT use a timeout:
@xxxxFlag.wait(@lock)
Second bug
You put your sequence trigger AFTER your Thread#join calls at the end. The join calls will never return, hence the last statement in your code will never be executed, and your thread sequence will never start.
Solution: change the order to signal the sequence start first, and then join the threads:
@firstFlag.signal
@threads.each {|t| t.join}
Third bug
The problem with a wait/signal construction is that it does not buffer the signals.
Therefore you have to ensure all threads are in their wait state before calling signal, otherwise you may encounter a race condition where a thread calls signal before another thread has called wait.
Solution: This is a bit harder to solve, although it is possible with Queue. But I propose a complete rethinking of your code instead. See below for the full solution.
Better solution
I think you need to rethink the whole construction, and instead of condition variables just use Queue for everything. Now the code becomes much less brittle, and because Queue itself is thread safe, you do not need any critical sections any more.
The advantage of Queue is that you can use it like a wait/signal construction, but it buffers the signals, which makes everything much simpler in this case.
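To see what the buffering buys you: a push that happens before anyone is waiting is not lost, whereas a ConditionVariable signal with no waiter is (a tiny sketch of my own):

q = Queue.new
q.push(1)   # the "signal" is stored in the queue...
q.pop       # ...so a later pop returns immediately instead of blocking forever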
Now we can rewrite the code:
redq = Queue.new
yellowq = Queue.new
greenq = Queue.new
Then each thread becomes like this:
@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  for i in 0...@n do
    redq.pop
    puts "red : #{t}s"
    sleep(t)
    yellowq.push(1)
  end
}
And finally to kick off the whole sequence:
redq.push(1)
@threads.each { |t| t.join }
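Putting the pieces together, a complete version along these lines might look like the sketch below (my assembly, using locals instead of the question's instance variables; the three thread bodies differ only in their colour, the queue they pop from, and the queue they push to):

require 'thread'

print "Tell me n's value: "
n = gets.to_i

redq    = Queue.new
yellowq = Queue.new
greenq  = Queue.new

threads = []
[["red", redq, yellowq], ["yellow", yellowq, greenq], ["green", greenq, redq]].each do |colour, my_q, next_q|
  threads << Thread.new do
    t = Random.rand(1..3)
    n.times do
      my_q.pop          # block until it is this colour's turn
      puts "#{colour} : #{t}s"
      sleep(t)
      next_q.push(1)    # hand the turn to the next colour
    end
  end
end

redq.push(1)            # kick off the sequence with red
threads.each(&:join)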
I'd redesign this slightly. Think of your ConditionVariables as flags that a thread uses to say it's done for now, and name them accordingly:
@lock = Mutex.new
@thread_a_done = ConditionVariable.new
@thread_b_done = ConditionVariable.new
@thread_c_done = ConditionVariable.new
Now, thread A signals it's done by doing @thread_a_done.signal, and thread B can wait for that signal, etc. Thread A of course needs to wait until thread C is done, so we get this kind of structure:
@threads << Thread.new() {
  t = Random.rand(1..3)
  @lock.synchronize {
    for i in 0...@n do
      @thread_c_done.wait(@lock)
      puts "A: red : #{t}s"
      sleep(t)
      @thread_a_done.signal
    end
  }
}
A problem here is that you need to make sure that thread A in the first iteration doesn't wait for a flag signal. After all, it's to go first, so it shouldn't wait for anyone else. So modify it to:
@thread_c_done.wait(@lock) unless i == 0
Finally, once you have created your threads, kick them all off by invoking run, then join on each thread (so that your program doesn't exit before the last thread is done):
@threads.each(&:run)
@threads.each(&:join)
By the way, I'd get rid of the timeouts on your wait calls as well. You have a hard requirement that the threads go in order; if you let the waits time out, you break that, and threads might still "jump the queue", so to speak.
EDIT: As @casper remarked below, this still has a potential race condition: thread A could call signal before thread B is waiting to receive it, in which case thread B will miss it and just wait indefinitely. A possible way to fix this is to use some form of CountDownLatch: a shared object that all threads can wait on, which gets released as soon as all threads have signalled that they're ready. The concurrent-ruby gem has an implementation of this, and in fact it has other interesting tools for more elegant multi-threaded programming.
Sticking with pure ruby though, you could possibly fix this by adding a second Mutex that guards shared access to a boolean flag to indicate the thread is ready.
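For illustration, here is a minimal sketch of the latch idea using Concurrent::CountDownLatch from concurrent-ruby (treating the gem dependency and the names below as my own assumptions, not part of the original answer). Each thread counts down while it still holds the lock, so once the latch is released and the kicker has re-acquired the lock, every thread is guaranteed to be inside its wait:

require 'concurrent'   # gem 'concurrent-ruby'

lock  = Mutex.new
cv    = ConditionVariable.new
ready = Concurrent::CountDownLatch.new(2)   # two waiting threads in this sketch

waiters = 2.times.map do |i|
  Thread.new do
    lock.synchronize do
      ready.count_down        # announce "about to wait" while still holding the lock
      cv.wait(lock)           # releases the lock atomically while waiting
      puts "thread #{i} woke up"
    end
  end
end

ready.wait                        # every thread has counted down...
lock.synchronize { cv.broadcast } # ...and acquiring the lock proves they are all in wait
waiters.each(&:join)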
OK, thank you to the guys that answered. I've found a solution:
I've created a fourth thread, because I found out that calling "@firstFlag.signal" outside a thread doesn't work: Ruby has a "main thread" that sleeps when you run other threads.
So the "@firstFlag.signal" call must be inside a thread, so that it is at the same level as the CV wait.
I solved the issue using this:
@threads << Thread.new {
  sleep 1
  @firstFlag.signal
}
This fourth thread waits for 1 second before sending the first signal to red. That one second seems to be enough for the other threads to reach their wait points.
And I've removed the timeouts, as you suggested.
//Edit//
I realized I don't need a fourth thread; I can just make thread C send the first signal.
I made thread C sleep for 1 second to let the other two threads enter their wait state; then it signals red to start and goes to wait too:
@threads << Thread.new() {
  sleep 1
  @redFlag.signal
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @greenFlag.wait(@lock)
      puts "green : #{t}s"
      sleep(t)
      @redFlag.signal
      n += 1
    end
  }
}

How to IPC with the parent process when creating child processes in a loop (Ruby)

I have the following code snippet (a simplified representation of what I'm trying to do - training wheels). The sleep(2) would represent some network operation in my real code:
arr = []

5.times do |i|
  rd, wr = IO.pipe
  fork do
    sleep(2) # I'm still waiting for the sleep to happen on each process ... not good, there is no parallelism here
    rd.close
    arr[i] = i
    Marshal.dump(arr, wr)
  end
  wr.close
  result = rd.read
  arr = Marshal.load(result)
end

# Process.waitall
p arr
Q: Is it possible to somehow create new processes in a loop and pass the results back without waiting on each iteration? I'm pretty rusty and don't know/remember a great deal about IPC, especially in Ruby.
Actual result: a total wait time of 2s * 5 = 10s.
Expected: ~2s total (async processing of the sleep()).
So a good comment clarifying things, explaining the theory would help a lot. Thanks.
In your loop you wait for each child process to write its results to the pipe before starting the next iteration.
The simplest fix would be to save the read ends of the pipes in an array and not read any of them until the loop is finished and you've started all the child processes:
arr = []
# Array to store the read IOs
pipes = []

5.times do |i|
  rd, wr = IO.pipe
  fork do
    sleep(2)
    rd.close
    # Note: only returning the value of i here, not the whole array
    Marshal.dump(i, wr)
  end
  wr.close
  # Store the IO for later
  pipes[i] = rd
end

# Now all child processes are started, so we can read the results in turn.
# Remember each child is returning i, not the whole array.
pipes.each_with_index do |rd, i|
  arr[i] = Marshal.load(rd.read)
end
A more complex solution, if the wait/network times for different child processes varied, might be to look at IO.select, so you could read from whichever pipe is ready first.
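A rough sketch of that IO.select variant, under the same assumptions as the code above (each child marshals its value of i back, and Process.waitall reaps the children at the end):

arr = []
pipes = {}   # map each read IO to the index of its child

5.times do |i|
  rd, wr = IO.pipe
  fork do
    rd.close
    sleep(2)              # stand-in for the network operation
    Marshal.dump(i, wr)
    wr.close
  end
  wr.close
  pipes[rd] = i
end

until pipes.empty?
  ready, = IO.select(pipes.keys)   # block until at least one child has written
  ready.each do |rd|
    arr[pipes[rd]] = Marshal.load(rd.read)
    rd.close
    pipes.delete(rd)
  end
end

Process.waitall
p arr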

JS-style async/non-blocking callback execution with Ruby, without heavy machinery like threads?

I'm a frontend developer, somewhat familiar with Ruby. I only know how to do Ruby in a synchronous/sequential manner, while in JS I'm used to async/non-blocking callbacks.
Here's sample Ruby code:
results = []
rounds = 5

callback = ->(item) {
  # This imitates that the callback may take time to complete
  sleep rand(1..5)
  results.push item
  if results.size == rounds
    puts "All #{rounds} requests have completed! Here they are:", *results
  end
}

1.upto(rounds) { |item| callback.call(item) }

puts "Hello"
The goal is to have the callbacks run without blocking the main script execution. In other words, I want the "Hello" line to appear in the output above the "All 5 requests..." line. Also, the callbacks should run concurrently, so that the callback that is fastest to finish makes it into the resulting array first.
With JavaScript, I would simply wrap the callback call into a setTimeout with zero delay:
setTimeout( function() { callback(item); }, 0);
This JS approach does not implement true multithreading/concurrency/parallel execution. Under the hood, the callbacks would all run in one thread, sequentially or rather interleaved at a low level.
But on a practical level it would appear as concurrent execution: the resulting array would be populated in an order corresponding to the amount of time spent by each callback, i.e. it would appear sorted by the time it took each callback to finish.
Note that I only want the asynchronous feature of setTimeout(); I don't need its built-in delay (not to be confused with the sleep used in the callback example to imitate a time-consuming operation).
I tried to inquire into how to do that JS-style async approach with Ruby and was given suggestions to use:
1. Multithreading. This is probably THE approach for Ruby, but it requires a substantial amount of scaffolding:
Manually define an array for threads.
Manually define a mutex.
Start a new thread for each callback, add it to the array.
Pass the mutex into each callback.
Use mutex in the callback for thread synchronization.
Ensure all threads are completed before program completion.
Compared to JavaScript's setTimeout(), this is just too much. As I don't need true parallel execution, I don't want to build that much scaffolding every time I want to execute a proc asynchronously.
2. A sophisticated Ruby library like Celluloid or EventMachine. They look like they will take weeks to learn.
3. A custom solution like this one (the author, apeiros@freenode, claims it is very close to what setTimeout does under the hood). It requires almost no scaffolding and does not involve threads. But it seems to run the callbacks synchronously, in the order they were scheduled.
I have always considered Ruby to be a programming language most close to my ideal, and JS to be a poor man's programming language. And it kinda discourages me that Ruby is not able to do a thing which is trivial with JS, without involving heavy machinery.
So the question is: what is the simplest, most intuitive way to do async/non-blocking callbacks with Ruby, without involving complicated machinery like threads or complex libraries?
PS: If there is no satisfying answer during the bounty period, I will dig into #3 by apeiros and probably make it the accepted answer.
As people have said, it's not possible to achieve what you want without using threads or a library that abstracts their functionality. But if it's just the setTimeout functionality you want, the implementation is actually very small.
Here's my attempt at emulating JavaScript's setTimeout in Ruby:
require 'thread'
require 'set'

module Timeout
  @timeouts = Set[]
  @exiting = false
  @exitm = Mutex.new
  @mutex = Mutex.new

  at_exit { wait_for_timeouts }

  def self.set(delay, &blk)
    thrd = Thread.start do
      sleep delay
      blk.call
      @exitm.synchronize do
        unless @exiting
          @mutex.synchronize { @timeouts.delete thrd }
        end
      end
    end
    @mutex.synchronize { @timeouts << thrd }
  end

  def self.wait_for_timeouts
    @exitm.synchronize { @exiting = true }
    @timeouts.each(&:join)
    @exitm.synchronize { @exiting = false }
  end
end
Here's how to use it:
$results = []
$rounds = 5

mutex = Mutex.new

def callback(n, mutex)
  -> {
    sleep rand(1..5)
    mutex.synchronize {
      $results << n
      puts "Fin: #{$results}" if $results.size == $rounds
    }
  }
end

1.upto($rounds) { |i| Timeout.set(0, &callback(i, mutex)) }
puts "Hello"
This outputs:
Hello
Fin: [1, 2, 3, 5, 4]
As you can see, the way you use it is essentially the same, the only thing I've changed is I've added a mutex to prevent race conditions on the results array.
Aside: Why we need the mutex in the usage example
Even if the callbacks are effectively running on a single core, as in JavaScript, that does not prevent race conditions, because operations are not atomic. Pushing to an array is not a single atomic operation; more than one instruction is executed.
Suppose a push is two instructions: putting the element at the end (SET) and incrementing the size (INC).
Consider the ways two pushes can be interleaved (taking symmetry into account):
SET1 INC1 SET2 INC2
SET1 SET2 INC1 INC2
The first interleaving is what we want, but the second results in the second append overwriting the first.
Okay, after some fiddling with threads and studying the contributions by apeiros and asQuirreL, I came up with a solution that suits me.
I'll show sample usage first and the source code at the end.
Example 1: simple non-blocking execution
First, a JS example that i'm trying to mimic:
setTimeout( function() {
  console.log("world");
}, 0);

console.log("hello");

// Will print "hello" first, then "world".
Here's how i can do it with my tiny Ruby library:
# You wrap all your code into this...
Branch.new do
  # ...and you gain access to the `branch` method that accepts a block.
  # This block runs non-blockingly, just like in JS `setTimeout(callback, 0)`.
  branch { puts "world!" }

  print "Hello, "
end

# Will print "Hello, world!"
Note how you don't have to take care of creating threads, waiting for them to finish. The only scaffolding required is the Branch.new { ... } wrapper.
Example 2: synchronizing threads with a mutex
Now we'll assume that we're working with some input and output shared among threads.
JS code I'm trying to reproduce with Ruby:
var
  results = [],
  rounds = 5;

for (var i = 1; i <= rounds; i++) {
  console.log("Starting thread #" + i + ".");

  // "Creating local scope"
  (function(local_i) {
    setTimeout( function() {
      // "Assuming there's a time-consuming operation here."
      results.push(local_i);
      console.log("Thread #" + local_i + " has finished.");

      if (results.length === rounds)
        console.log("All " + rounds + " threads have completed! Bye!");
    }, 0);
  })(i);
}

console.log("All threads started!");
console.log("All threads started!");
This code produces the following output:
Starting thread #1.
Starting thread #2.
Starting thread #3.
Starting thread #4.
Starting thread #5.
All threads started!
Thread #5 has finished.
Thread #4 has finished.
Thread #3 has finished.
Thread #2 has finished.
Thread #1 has finished.
All 5 threads have completed! Bye!
Notice that the callbacks finish in reverse order.
We're also going to assume that working with the results array may produce a race condition. In JS this is never an issue, but in multithreaded Ruby it has to be addressed with a mutex.
Ruby equivalent of the above:
Branch.new 1 do
  # Setting up an array to be filled with that many values.
  results = []
  rounds = 5

  # Running `branch` N times:
  1.upto(rounds) do |item|
    puts "Starting thread ##{item}."

    # The block passed to `branch` accepts a hash with mutexes
    # that you can use to synchronize threads.
    branch do |mutexes|
      # This imitates that the callback may take time to complete.
      # Threads will finish in reverse order.
      sleep (6.0 - item) / 10

      # When you need a mutex, you simply request one from the hash.
      # For each unique key, a new mutex will be created lazily.
      mutexes[:array_and_output].synchronize do
        puts "Thread ##{item} has finished!"
        results.push item

        if results.size == rounds
          puts "All #{rounds} threads have completed! Bye!"
        end
      end
    end
  end

  puts "All threads started."
end

puts "All threads finished!"
Note how you don't have to take care of creating threads, waiting for them to finish, creating mutexes and passing them into the block.
Example 3: delaying execution of the block
If you need the delay feature of setTimeout, you can do it like this.
JS:
setTimeout(function(){ console.log('Foo'); }, 2000);
Ruby:
branch(2) { puts 'Foo' }
Example 4: waiting for all threads to finish
With JS, there's no simple way to have the script wait for all threads to finish. You'll need an await/defer library for that.
But in Ruby it's possible, and Branch makes it even simpler. If you write code after the Branch.new{} wrapper, it will be executed after all branches within the wrapper have been completed. You don't need to manually ensure that all threads have finished, Branch does that for you.
Branch.new do
  branch { sleep 10 }
  branch { sleep 5 }

  # This will be printed immediately
  puts "All threads started!"
end

# This will be printed after 10 seconds (the duration of the slowest branch).
puts "All threads finished!"
Sequential Branch.new{} wrappers will be executed sequentially.
Source
# (c) lolmaus (Andrey Mikhaylov), 2014
# MIT license http://choosealicense.com/licenses/mit/
class Branch
  def initialize(mutexes = 0, &block)
    @threads = []
    @mutexes = Hash.new { |hash, key| hash[key] = Mutex.new }

    # Executing the passed block within the context
    # of this class' instance.
    instance_eval(&block)

    # Waiting for all threads to finish
    @threads.each { |thr| thr.join }
  end

  # This method will be available within a block
  # passed to `Branch.new`.
  def branch(delay = false, &block)
    # Starting a new thread
    @threads << Thread.new do
      # Implementing the timeout functionality
      sleep delay if delay.is_a? Numeric

      # Executing the block passed to `branch`,
      # providing mutexes into the block.
      block.call @mutexes
    end
  end
end

Purpose of `Mutex#synchronize` used with `ConditionVariable#signal`

An answer to this question by railscard and the doc for ConditionVariable suggest code similar to the following:
m = Mutex.new
cv = ConditionVariable.new

Thread.new do
  sleep(3) # A
  m.synchronize { cv.signal }
end

m.synchronize { cv.wait(m) }
puts "Resource released." # B
This code makes the line commented as B wait until A finishes.
I understand the purpose of m.synchronize{...} around cv.wait(m). What is the purpose of m.synchronize{...} around cv.signal? How would it be different if I had the following instead?
m = Mutex.new
cv = ConditionVariable.new

Thread.new do
  sleep(3)
  cv.signal
end

m.synchronize { cv.wait(m) }
puts "Resource released."
I think it's useless in this example, but it's required when you have any conditions or calculations before signaling to avoid race conditions.
In order for the thread blocked in cv.wait(m) to be woken up, cv.signal has to be emitted after cv.wait. In this particular case, that timing is most likely guaranteed because of sleep(3), but otherwise there is a danger of cv.signal being emitted before cv.wait(m). If that happens, no further signal will ever be emitted, and the wait on cv will block forever. The purpose of m.synchronize{...} around cv.signal is to ensure that the signal happens after cv.wait(m): once the waiting thread holds m, a signaller that must also acquire m cannot emit the signal until the waiter has actually entered cv.wait and released m.
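To make that concrete, here is a minimal sketch of the standard pattern (the done flag is my addition, not part of the question): the waiter re-checks a condition under the mutex, and the signaller updates that condition and signals while holding the same mutex, so the signal can never fall into the gap between the check and the wait.

m  = Mutex.new
cv = ConditionVariable.new
done = false                  # shared state, guarded by m

worker = Thread.new do
  sleep(3)                    # some work
  m.synchronize do
    done = true               # update the condition while holding the lock
    cv.signal                 # the waiter is either not yet checking, or already in wait
  end
end

m.synchronize do
  cv.wait(m) until done       # if the signal already happened, done is true and we never wait
end
puts "Resource released."
worker.join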

Implementing a synchronization barrier in Ruby

I'm trying to "replicate" the behaviour of CUDA's __syncthreads() function in Ruby. Specifically, I have a set of N threads that need to execute some code, then all wait on each other at a mid-point in execution before continuing with the rest of their business. For example:
x = 0

a = Thread.new do
  x = 1
  syncthreads()
end

b = Thread.new do
  syncthreads()
  # x should have been changed
  raise if x == 0
end

[a, b].each { |t| t.join }
What tools do I need to use to accomplish this? I tried using a global hash, and then sleeping until all the threads have set a flag indicating they're done with the first part of the code. I couldn't get it to work properly; it resulted in hangs and deadlock. I think I need to use a combination of Mutex and ConditionVariable but I am unsure as to why/how.
Edit: 50 views and no answer! Looks like a candidate for a bounty...
Let's implement a synchronization barrier. It has to know the number of threads it will handle, n, up front. During the first n - 1 calls to sync, the barrier will cause the calling thread to wait. Call number n will wake all the threads up.
class Barrier
  def initialize(count)
    @mutex = Mutex.new
    @cond = ConditionVariable.new
    @count = count
  end

  def sync
    @mutex.synchronize do
      @count -= 1
      if @count > 0
        @cond.wait @mutex
      else
        @cond.broadcast
      end
    end
  end
end
The whole body of sync is a critical section, i.e. it cannot be executed by two threads concurrently; hence the call to Mutex#synchronize.
When the decremented value of @count is still positive, the calling thread is put to sleep. Passing the mutex as an argument to ConditionVariable#wait is critical to prevent deadlocks: it causes the mutex to be unlocked before the thread goes to sleep.
A simple experiment starts 1k threads and makes them add elements to an array. Firstly they add zeros, then they synchronize and add ones. The expected result is a sorted array with 2k elements, of which 1k are zeros and 1k are ones.
mtx = Mutex.new
arr = []
num = 1000
barrier = Barrier.new num

num.times.map do
  Thread.start do
    mtx.synchronize { arr << 0 }
    barrier.sync
    mtx.synchronize { arr << 1 }
  end
end.map(&:join)

# Prints true. See it break by deleting `barrier.sync`.
puts [
  arr.sort == arr,
  arr.count == 2 * num,
  arr.count(&:zero?) == num,
  arr.uniq == [0, 1],
].all?
As a matter of fact, there's a gem named barrier which does exactly what I described above.
On a final note, don't use sleep in a loop to wait in such circumstances. That is polling (a form of busy waiting) and is considered bad practice.
There might be merit in having the threads wait for each other. But I think it is cleaner to have the threads actually finish at the "midpoint", because your question implies that the threads need each other's results at the "midpoint". A clean design solution would be to let them finish, deliver the results of their work, and start a brand new set of threads based on those.
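A minimal sketch of that two-phase design (my own illustration, not from the question): run the first set of threads to completion, collect their results with Thread#value, then start a second set that consumes those results.

first_phase = 4.times.map do |i|
  Thread.new { i * i }                  # phase 1: each thread computes a partial result
end
midpoint = first_phase.map(&:value)     # Thread#value joins the thread and returns its result

second_phase = midpoint.map do |r|
  Thread.new { r + 1 }                  # phase 2: consume the first phase's results
end
p second_phase.map(&:value)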
