Using threads without mutex.unlock in Ruby

I'm not sure why this code works:
m, n = Mutex.new, Mutex.new

t = Thread.new do
  m.lock
  p 'ha'
  sleep 1
  p 'ya'
  n.lock
end

s = Thread.new do
  m.lock
  p 'h'
  sleep 1
  p 'y'
  n.lock
end

t.join
s.join
I've avoided a deadlock by acquiring the locks in the same order in both threads, but I'm not sure why this runs at all: I thought every Mutex#lock needs a matching Mutex#unlock, and this code never calls .unlock, yet it still completes. Why?

Per the docs, Mutex#lock waits until the lock is acquired. If you add some output after acquiring the locks, you'll see that thread s doesn't get past m.lock until t is dead. When t finishes, the locks it still holds are released.
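To make that visible, here is a minimal variation of the snippet above with extra prints added (mine, not the asker's); the "s acquired m" line only appears after t has finished:

m = Mutex.new

t = Thread.new do
  m.lock
  puts "t holds m"
  sleep 1
  puts "t finished"      # t dies while still holding m
end

s = Thread.new do
  sleep 0.1              # give t a head start on the lock
  m.lock                 # blocks here until t is dead
  puts "s acquired m"    # printed only after "t finished"
end

t.join
s.join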

Related

Multi Threading in Ruby

I need to create 3 threads.
Each thread will print a color on the screen and sleep for x seconds.
Thread A will print red; Thread B will print yellow; Thread C will print green.
Each thread must wait until it's its turn to print.
The first thread to print must be Red; after printing, Red will tell Yellow that it's its turn to print, and so on.
The threads must be able to print multiple times (the count is user-specified).
I'm stuck because calling @firstFlag.signal outside a Thread isn't working, and the 3 threads aren't running in the right order.
How do I make the red thread go first?
My code so far:
@lock = Mutex.new
@firstFlag = ConditionVariable.new
@secondFlag = ConditionVariable.new
@thirdFlag = ConditionVariable.new

print "Tell me n's value: "
@n = gets.to_i

@threads = Array.new

@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @firstFlag.wait(@lock, t)
      puts "red : #{t}s"
      sleep(t)
      @secondFlag.signal
    end
  }
}

@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @secondFlag.wait(@lock, t)
      puts "yellow : #{t}s"
      sleep(t)
      @thirdFlag.signal
    end
  }
}

@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @thirdFlag.wait(@lock, t)
      puts "green : #{t}s"
      sleep(t)
      @firstFlag.signal
    end
  }
}

@threads.each {|t| t.join}
@firstFlag.signal
There are three bugs in your code:
First bug
Your wait calls use a timeout. This means your threads will become de-synchronized from your intended sequence, because the timeout will let each thread slip past your intended wait point.
Solution: change all your wait calls to NOT use a timeout:
@xxxxFlag.wait(@lock)
Second bug
You put your sequence trigger AFTER your Thread.join call in the end. Your join call will never return, and hence the last statement in your code will never be executed, and your thread sequence will never start.
Solution: change the order to signal the sequence start first, and then join the threads:
@firstFlag.signal
@threads.each {|t| t.join}
Third bug
The problem with a wait/signal construction is that it does not buffer the signals.
Therefore you have to ensure all threads are in their wait state before calling signal, otherwise you may encounter a race condition where a thread calls signal before another thread has called wait.
Solution: This is a bit harder to solve, although it is possible with Queue. But I propose a complete rethinking of your code instead. See below for the full solution.
Better solution
I think you need to rethink the whole construction, and instead of condition variables just use Queue for everything. Now the code becomes much less brittle, and because Queue itself is thread safe, you do not need any critical sections any more.
The advantage of Queue is that you can use it like a wait/signal construction, but it buffers the signals, which makes everything much simpler in this case.
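To see what "buffers the signals" means, here is a tiny sketch (mine, not part of the answer): a value pushed before anyone is waiting is not lost, unlike a ConditionVariable signal:

q = Queue.new
q.push(1)                                  # "signal" sent before anyone waits...
Thread.new { q.pop; puts "woken" }.join    # ...and it is still delivered here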
Now we can rewrite the code:
redq = Queue.new
yellowq = Queue.new
greenq = Queue.new
Then each thread becomes like this:
@threads << Thread.new() {
  t = Random.rand(1..3)
  n = 0
  for i in 0...@n do
    redq.pop
    puts "red : #{t}s"
    sleep(t)
    yellowq.push(1)
  end
}
And finally to kick off the whole sequence:
redq.push(1)
@threads.each { |t| t.join }
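For completeness, the other two threads could be wired up the same way (my sketch, reusing the answer's @threads and @n):

@threads << Thread.new() {
  t = Random.rand(1..3)
  for i in 0...@n do
    yellowq.pop              # wait for red to hand over the turn
    puts "yellow : #{t}s"
    sleep(t)
    greenq.push(1)
  end
}

@threads << Thread.new() {
  t = Random.rand(1..3)
  for i in 0...@n do
    greenq.pop               # wait for yellow to hand over the turn
    puts "green : #{t}s"
    sleep(t)
    redq.push(1)             # hand the turn back to red
  end
}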
I'd redesign this slightly. Think of your ConditionVariables as flags that a thread uses to say it's done for now, and name them accordingly:
@lock = Mutex.new
@thread_a_done = ConditionVariable.new
@thread_b_done = ConditionVariable.new
@thread_c_done = ConditionVariable.new
Now, thread A signals it's done by doing @thread_a_done.signal, and thread B can wait for that signal, etc. Thread A of course needs to wait until thread C is done, so we get this kind of structure:
@threads << Thread.new() {
  t = Random.rand(1..3)
  @lock.synchronize {
    for i in 0...@n do
      @thread_c_done.wait(@lock)
      puts "A: red : #{t}s"
      sleep(t)
      @thread_a_done.signal
    end
  }
}
A problem here is that you need to make sure that thread A in the first iteration doesn't wait for a flag signal. After all, it's to go first, so it shouldn't wait for anyone else. So modify it to:
@thread_c_done.wait(@lock) unless i == 0
Finally, once you have created your threads, kick them all off by invoking run, then join on each thread (so that your program doesn't exit before the last thread is done):
@threads.each(&:run)
@threads.each(&:join)
Oh, by the way, I'd get rid of the timeouts in your wait calls as well. You have a hard requirement that the threads go in order; if you let the waits time out you break that, because threads might still "jump the queue", so to speak.
EDIT: as @casper remarked below, this still has a potential race condition: thread A could call signal before thread B is waiting to receive it, in which case thread B will miss the signal and just wait indefinitely. A possible way to fix this is to use some form of a CountDownLatch: a shared object that all threads can wait on, and which is released as soon as all threads have signalled that they're ready. The concurrent-ruby gem has an implementation of this, and in fact has other interesting tools for more elegant multi-threaded programming.
Sticking with pure ruby though, you could possibly fix this by adding a second Mutex that guards shared access to a boolean flag to indicate the thread is ready.
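To illustrate the latch idea, here is a minimal sketch using Concurrent::CountDownLatch from concurrent-ruby (the worker structure is simplified and mine, not the answer's):

require 'concurrent'   # concurrent-ruby gem

@lock  = Mutex.new
@first = ConditionVariable.new
ready  = Concurrent::CountDownLatch.new(3)

threads = 3.times.map do |i|
  Thread.new do
    @lock.synchronize do
      ready.count_down      # announce "about to wait" while still holding the lock
      @first.wait(@lock)    # releases the lock and sleeps
      puts "thread #{i} woken"
    end
  end
end

ready.wait                              # returns once all three have counted down
@lock.synchronize { @first.broadcast }  # by now every thread is really waiting
threads.each(&:join)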
Ok, thank you guys that answered. I've found a solution:
I created a fourth thread, because I found out that calling @firstFlag.signal outside a thread doesn't work: Ruby has a "main thread" that sleeps while you run the other threads.
So the @firstFlag.signal call must be inside a thread, so that it is at the same level as the CV.wait.
I solved the issue using this:
@threads << Thread.new {
  sleep 1
  @firstFlag.signal
}
This fourth thread waits for 1 second before sending the first signal to red. That one second seems to be enough for the other threads to reach their wait points.
I've also removed the timeouts, as you suggested.
//Edit//
I realized I don't need a fourth thread; I can just make thread C send the first signal.
I made thread C sleep for 1 second so the other two threads can enter their wait state; then it signals red to start and goes to wait too:
@threads << Thread.new() {
  sleep 1
  @redFlag.signal
  t = Random.rand(1..3)
  n = 0
  @lock.synchronize {
    for i in 0...@n do
      @greenFlag.wait(@lock)
      puts "green : #{t}s"
      sleep(t)
      @redFlag.signal
      n += 1
    end
  }
}

Purpose of `Mutex#synchronize` used with `ConditionVariable#signal`

An answer to this question by railscard and the doc for ConditionVariable suggest code similar to the following:
m = Mutex.new
cv = ConditionVariable.new
Thread.new do
  sleep(3) # A
  m.synchronize{cv.signal}
end
m.synchronize{cv.wait(m)}
puts "Resource released." # B
This code makes the process commented as B wait until A finishes.
I understand the purpose of m.synchronize{...} around cv.wait(m). What is the purpose of m.synchronize{...} around cv.signal? How would it be different if I had the following instead?
m = Mutex.new
cv = ConditionVariable.new
Thread.new do
  sleep(3)
  cv.signal
end
m.synchronize{cv.wait(m)}
puts "Resource released."
I think it's redundant in this particular example, but it's required whenever you do any condition checks or calculations before signalling, to avoid race conditions.
For cv.wait(m) to be woken up, cv.signal has to be emitted after cv.wait has started waiting. In this particular case that timing is most likely guaranteed by the sleep(3), but otherwise there is a danger of cv.signal being emitted before cv.wait(m). If that happens, no further signal will ever arrive after cv.wait(m), and the wait will block forever. The purpose of m.synchronize{...} around cv.signal is to ensure that the signal cannot be sent while the main thread still holds m, i.e. (assuming the main thread grabs m first) not before cv.wait(m) has released the lock and started waiting.
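In practice, the robust way to express this hand-off is to pair the condition variable with a state flag guarded by the same mutex, so that a signal sent "too early" is never lost. A sketch (the done flag is mine, not part of the original question):

m    = Mutex.new
cv   = ConditionVariable.new
done = false                  # shared state, guarded by m

Thread.new do
  sleep(3)                    # the actual work
  m.synchronize do
    done = true               # record the fact before signalling
    cv.signal
  end
end

m.synchronize do
  cv.wait(m) until done       # re-check: a signal sent before we waited is not lost
end
puts "Resource released."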

Implementing a synchronization barrier in Ruby

I'm trying to "replicate" the behaviour of CUDA's __syncthreads() function in Ruby. Specifically, I have a set of N threads that need to execute some code, then all wait on each other at a mid-point in execution before continuing with the rest of their business. For example:
x = 0

a = Thread.new do
  x = 1
  syncthreads()
end

b = Thread.new do
  syncthreads()
  # x should have been changed
  raise if x == 0
end

[a, b].each { |t| t.join }
What tools do I need to use to accomplish this? I tried using a global hash, and then sleeping until all the threads have set a flag indicating they're done with the first part of the code. I couldn't get it to work properly; it resulted in hangs and deadlock. I think I need to use a combination of Mutex and ConditionVariable but I am unsure as to why/how.
Edit: 50 views and no answer! Looks like a candidate for a bounty...
Let's implement a synchronization barrier. It has to know the number of threads it will handle, n, up front. During the first n - 1 calls to sync the barrier will cause the calling thread to wait; call number n wakes all the threads up.
class Barrier
  def initialize(count)
    @mutex = Mutex.new
    @cond = ConditionVariable.new
    @count = count
  end

  def sync
    @mutex.synchronize do
      @count -= 1
      if @count > 0
        @cond.wait @mutex
      else
        @cond.broadcast
      end
    end
  end
end
The whole body of sync is a critical section, i.e. it cannot be executed by two threads concurrently; hence the call to Mutex#synchronize.
When the decreased value of @count is still positive, the calling thread is suspended. Passing the mutex as an argument to ConditionVariable#wait is critical to prevent deadlocks: it causes the mutex to be unlocked before the thread goes to sleep.
A simple experiment starts 1k threads and makes them add elements to an array. Firstly they add zeros, then they synchronize and add ones. The expected result is a sorted array with 2k elements, of which 1k are zeros and 1k are ones.
mtx = Mutex.new
arr = []
num = 1000
barrier = Barrier.new num

num.times.map do
  Thread.start do
    mtx.synchronize { arr << 0 }
    barrier.sync
    mtx.synchronize { arr << 1 }
  end
end.map(&:join)
# Prints true. See it break by deleting `barrier.sync`.
puts [
  arr.sort == arr,
  arr.count == 2 * num,
  arr.count(&:zero?) == num,
  arr.uniq == [0, 1],
].all?
As a matter of fact, there's a gem named barrier which does exactly what I described above.
On a final note, don't use sleep for waiting in such circumstances. Polling a flag in a sleep loop is a form of busy waiting and is considered bad practice.
There might be merits to having the threads wait for each other. But I think it is cleaner to have the threads actually finish at the "midpoint", because your question implies that the threads need each other's results at that point. A clean design solution would be to let them finish, deliver the results of their work, and start a brand new set of threads based on those.

How do ruby exceptions cause mutices to unlock?

Recently, I have been working with Ruby's threads, and have uncovered a slightly unexpected behaviour. In a critical section, calling raise causes the mutex to release. I could expect this of the synchronize method, with its block, but it also seems to happen when lock and unlock are called separately.
For example, the code below outputs:
$ ruby testmutex.rb
x sync
y sync
...where I'd expect y to be blocked until the heat death of the universe.
m = Mutex.new

x = Thread.new() do
  begin
    m.lock
    puts "x sync"
    sleep 5
    raise "x err"
    sleep 5
    m.unlock
  rescue
  end
end

y = Thread.new() do
  sleep 0.5
  m.lock
  puts "y sync"
  m.unlock
end

x.join
y.join
Why is the y thread allowed to run even though the m.unlock in the x thread is never executed?
Note that if you remove the raise and the unlock from x the behavior is the same. So you have a situation where the x thread locks the mutex, and then the thread ends, and the mutex is unlocked.
m = Mutex.new
Thread.new{ m.lock; p m.locked? }.join
#=> true
p m.locked?
#=> false
Thus we see that the situation is unrelated to raise. Because you have a begin/rescue block around your raise, you just exit the x thread 5 seconds earlier than you would have otherwise.
Presumably the interpreter keeps track of any mutexes locked by a thread and automatically and intentionally unlocks them when the thread dies. (I cannot back this up with source-code inspection, however. This is just a guess, based on behavior.)
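If you'd rather not rely on that clean-up, the usual way to guarantee the unlock yourself is an ensure block, which is essentially what Mutex#synchronize does for you. A small sketch (mine, adapted from the question's x thread):

m = Mutex.new

x = Thread.new do
  begin
    m.lock
    begin
      puts "x sync"
      raise "x err"
    ensure
      m.unlock        # runs even when the exception is raised
    end
  rescue
    # swallow the error, as in the original example
  end
end

x.join
p m.locked?           #=> false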

How do I manage ruby threads so they finish all their work?

I have a computation that can be divided into independent units, and the way I'm dealing with it now is by creating a fixed number of threads and then handing off chunks of work to be done in each thread. In pseudocode, here's what it looks like:
# main thread
work_units.take(10).each {|work_unit| spawn_thread_for work_unit}

def spawn_thread_for(work)
  Thread.new do
    do_some_work work
    more_work = work_units.pop
    spawn_thread_for more_work unless more_work.nil?
  end
end
Basically, once the initial number of threads is created, each one does some work and then keeps taking stuff to be done from the work stack until nothing is left. Everything works fine when I run things in irb, but when I execute the script with the interpreter things don't work out so well. I'm not sure how to make the main thread wait until all the work is finished. Is there a nice way of doing this, or am I stuck executing sleep 10 until work_units.empty? in the main thread?
In ruby 1.9 (and 2.0), you can use ThreadsWait from the stdlib for this purpose:
require 'thread'
require 'thwait'
threads = []
threads << Thread.new { }
threads << Thread.new { }
ThreadsWait.all_waits(*threads)
If you modify spawn_thread_for to save a reference to your created Thread, then you can call Thread#join on the thread to wait for completion:
x = Thread.new { sleep 0.1; print "x"; print "y"; print "z" }
a = Thread.new { print "a"; print "b"; sleep 0.2; print "c" }
x.join # Let the threads finish before
a.join # main thread exits...
produces:
abxyzc
(Stolen from the ri Thread.new documentation. See the ri Thread.join documentation for some more details.)
So, if you amend spawn_thread_for to save the Thread references, you can join on them all:
(Untested, but ought to give the flavor)
# main thread
work_units = Queue.new # and fill the queue...
# (push one nil per worker when you're done adding work, so the pop below
#  eventually returns nil and the workers exit, e.g. 10.times { work_units << nil })

threads = []
10.downto(1) do
  threads << Thread.new do
    loop do
      w = work_units.pop
      Thread::exit() if w.nil?
      do_some_work(w)
    end
  end
end

# main thread continues while work threads devour work
threads.each(&:join)
Thread.list.each{ |t| t.join unless t == Thread.current }
It seems like you are replicating what the Parallel Each (Peach) library provides.
You can use Thread#join:
thr.join(limit) → thr or nil
The calling thread will suspend execution and run thr. Does not return until thr exits or until limit seconds have passed. If the time limit expires, nil will be returned; otherwise thr is returned.
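For instance (a small sketch of mine, not from the answer), a join with a limit returns nil while the thread is still running:

slow = Thread.new { sleep 5 }
p slow.join(0.1)   #=> nil (the limit expired, the thread is still running)
p slow.join        # blocks until the thread finishes, then returns the Thread itself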
Also you can use Enumerable#each_slice to iterate over the work units in batches
work_units.each_slice(10) do |batch|
  # handle each work unit in a thread
  threads = batch.map do |work_unit|
    spawn_thread_for work_unit
  end
  # wait until the current batch's work units finish before handling the next batch
  threads.each(&:join)
end
