Determine Ruby thread state

I have a Ruby script fetching HTML pages over HTTP using threads:
require "thread"
require "net/http"
q = Queue.new
q << "http://google.com/"
q << "http://rubygems.org/"
q << "http://twitter.com/"
t = Thread.new do
loop do
html = Net::HTTP.get(URI(q.pop))
p html.length
end
end
10.times do
puts t.status
sleep 0.3
end
I'm trying to determine the state of the thread while it is fetching the content from given sources. Here is the output I got:
run
219
sleep
sleep
7255
sleep
sleep
sleep
sleep
sleep
sleep
65446
sleep
The thread is in "sleep" state almost all the time though it's actually working. I understand it's waiting for the HTTP class to retrieve the content. The last "sleep" is different: the thread tried to pop the value from the queue which is empty and switched to "sleep" state until there is something new in the queue.
I want to be able to check what's going on in the thread: Is it working on HTTP or simply waiting for new job to appear?
What is the right way to do it?

The sleep state appears to cover both waiting on I/O and being blocked on synchronization, so you won't be able to use the thread state to tell whether the thread is processing or waiting. Instead, you could use thread-local storage so the thread can report what it's doing: use Thread#[]= to store a value, and Thread#[] to get it back.
require "thread"
require "net/http"
q = Queue.new
q << "http://google.com/"
q << "http://rubygems.org/"
q << "http://twitter.com/"
t = Thread.new do
loop do
Thread.current[:status] = 'waiting'
request = q.pop
Thread.current[:status] = 'fetching'
html = Net::HTTP.get(URI(request))
Thread.current[:status] = 'processing'
# Take half a second to process it.
Time.new.tap { |start_time| while Time.now - start_time < 0.5 ; end }
p html.length
end
end
10.times do
puts t[:status]
sleep 0.3
end
I've added a short loop to eat up time. Without it, it's unlikely you'd see "processing" in the output:
219
processing
fetching
processing
7255
fetching
fetching
fetching
62471
processing
waiting
waiting

Related

How to asynchronously collect results from new threads created in real time in ruby

I would like to continuously check a table in the DB for commands to run.
Some commands might take 4 minutes to complete, some 10 seconds.
Hence I would like to run them in threads: every record spawns a new thread, and after the thread is created the record gets removed.
Because the DB lookup plus thread creation will run in an endless loop, how do I get the 'response' from a thread (each thread will issue a shell command and get a response code that I would like to read)?
I thought about creating two threads, each with an endless loop:
- the first for DB lookups and creating new threads
- the second for ...somehow reading the threads' results and acting upon each response
Or maybe I should fork, or spawn a new OS process?
You can have each thread push its results onto a Queue, then your main thread can read from the Queue. Reading from a Queue is a blocking operation by default, so if there are no results, your code will block and wait on the read.
http://ruby-doc.org/stdlib-2.0.0/libdoc/thread/rdoc/Queue.html
Here is an example:
require 'thread'

jobs = Queue.new
results = Queue.new

thread_pool = []
pool_size = 5

(1..pool_size).each do |i|
  thread_pool << Thread.new do
    loop do
      job = jobs.shift # blocks waiting for a task
      break if job == "!NO-MORE-JOBS!"

      # Otherwise, do the job...
      puts "#{i}...."
      sleep rand(1..5) # simulate the time it takes to do a job
      results << "thread#{i} finished #{job}" # push some result from the job onto the Queue
      # Go back and get another task from the Queue
    end
  end
end

# All threads are now blocked, waiting for a job...

puts 'db_stuff'
db_stuff = [
  'job1',
  'job2',
  'job3',
  'job4',
  'job5',
  'job6',
  'job7',
]

db_stuff.each do |job|
  jobs << job
end

# Threads are now attacking the Queue like hungry dogs.

pool_size.times do
  jobs << "!NO-MORE-JOBS!"
end

result_count = 0
loop do
  result = results.shift
  puts "result: #{result}"
  result_count += 1
  break if result_count == 7
end

Is there a better way to make multiple HTTP requests asynchronously in Ruby?

I'm trying to make multiple HTTP requests in Ruby. I know it can be done quite easily in Node.js; I'm trying to do it in Ruby using threads, but I don't know if that's the best way. I haven't had a successful run for high numbers of requests (e.g. over 50).
require 'json'
require 'net/http'

urls = [
  {"link" => "url1"},
  {"link" => "url2"},
  {"link" => "url3"}
]

urls.each do |thing|
  Thread.new do
    result = Net::HTTP.get(URI.parse(thing["link"]))
    json_stuff = JSON.parse(result)
    info = json_stuff["person"]["bio"]["info"]
    thing["name"] = info
  end
end

# Wait until threads are done.
while !urls.all? { |url| url.has_key? "name" }; end

puts urls
Any thoughts?
Instead of the while clause you used, you can call Thread#join to make the main thread wait for other threads.
threads = []

urls.each do |thing|
  threads << Thread.new do
    result = Net::HTTP.get(URI.parse(thing["link"]))
    json_stuff = JSON.parse(result)
    info = json_stuff["person"]["bio"]["info"]
    thing["name"] = info
  end
end

# Wait until threads are done.
threads.each { |aThread| aThread.join }
Your way might work, but it's going to end up in a busy loop, eating up CPU cycles when it really doesn't need to. A better way is to only check whether you're done when a request completes. One way to accomplish this would be to use a Mutex and a ConditionVariable.
Using a mutex and condition variable, we can have the main thread waiting, and when one of the worker threads receives its response, it can wake up the main thread. The main thread can then see if any URLs remain to be downloaded; if so, it'll just go to sleep again, waiting; otherwise, it's done.
To wait for a signal:
mutex.synchronize { cv.wait mutex }
To wake up the waiting thread:
mutex.synchronize { cv.signal }
You might want to check for done-ness and set thing['name'] inside the mutex.synchronize block to avoid accessing data in multiple threads simultaneously.
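Putting those pieces together, a sketch of the condition-variable version (reusing the urls array and the hypothetical JSON structure from the question) might look like this:
mutex = Mutex.new
cv = ConditionVariable.new
remaining = urls.length

urls.each do |thing|
  Thread.new do
    result = Net::HTTP.get(URI.parse(thing["link"]))
    info = JSON.parse(result)["person"]["bio"]["info"]

    mutex.synchronize do
      thing["name"] = info # touch shared data only while holding the lock
      remaining -= 1
      cv.signal            # wake the main thread so it can re-check
    end
  end
end

mutex.synchronize do
  # Sleeps instead of spinning; each signal wakes us up to re-check the count.
  cv.wait(mutex) while remaining > 0
end

puts urls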

Ruby Pause thread

In Ruby, is it possible to pause a thread from a different, concurrently running thread?
Below is the code I've written so far. I want the user to be able to type 'pause thread' and have the sample500 thread pause.
#!/usr/bin/env ruby

# Creates a new thread that executes the block every intervalSec for durationSec.
def DoEvery(thread, intervalSec, durationSec)
  thread = Thread.new do
    start = Time.now
    timeTakenToComplete = 0
    loopCounter = 0

    while(timeTakenToComplete < durationSec && loopCounter += 1)
      yield
      finish = Time.now
      timeTakenToComplete = finish - start
      sleep(intervalSec*loopCounter - timeTakenToComplete)
    end
  end
end

# User input loop.
exit = nil
while(!exit)
  userInput = gets

  case userInput
  when "start thread\n"
    sample500 = Thread
    beginTime = Time.now
    DoEvery(sample500, 0.5, 30) {File.open('abc', 'a') {|file| file.write("a\n")}}
  when "pause thread\n"
    sample500.stop
  when "resume thread"
    sample500.run
  when "exit\n"
    exit = TRUE
  end
end
Passing a Thread object as an argument to DoEvery makes no sense, because you immediately overwrite it with Thread.new. Check out this modified version:
def DoEvery(intervalSec, durationSec)
  thread = Thread.new do
    start = Time.now
    Thread.current["stop"] = false
    timeTakenToComplete = 0
    loopCounter = 0

    while(timeTakenToComplete < durationSec && loopCounter += 1)
      if Thread.current["stop"]
        Thread.current["stop"] = false
        puts "paused"
        Thread.stop
      end

      yield
      finish = Time.now
      timeTakenToComplete = finish - start
      sleep(intervalSec*loopCounter - timeTakenToComplete)
    end
  end

  thread
end

# User input loop.
exit = nil
while(!exit)
  userInput = gets

  case userInput
  when "start thread\n"
    sample500 = DoEvery(0.5, 30) {File.open('abc', 'a') {|file| file.write("a\n")} }
  when "pause thread\n"
    sample500["stop"] = true
  when "resume thread\n"
    sample500.run
  when "exit\n"
    exit = TRUE
  end
end
Here DoEvery returns the new thread object. Also note that Thread.stop is called from inside the running thread itself: you can't safely suspend one thread directly from another, so the pause is requested via the thread-local "stop" flag and the thread stops itself at a safe point in its loop.
You may be better able to accomplish what you are attempting using Ruby's Fiber object, and likely achieve better efficiency on the running system.
Fibers are primitives for implementing light weight cooperative concurrency in Ruby. Basically they are a means of creating code blocks that can be paused and resumed, much like threads. The main difference is that they are never preempted and that the scheduling must be done by the programmer and not the VM.
Keep in mind that MRI Ruby's global interpreter lock prevents threads from running Ruby code in parallel anyway, so you give up little by scheduling the work cooperatively yourself. The following is a nice example:
require "fiber"
f1 = Fiber.new { |f2| f2.resume Fiber.current; while true; puts "A"; f2.transfer; end }
f2 = Fiber.new { |f1| f1.transfer; while true; puts "B"; f1.transfer; end }
f1.resume f2 # =>
# A
# B
# A
# B
# .
# .
# .
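As a simpler illustration of that cooperative model (a standalone sketch, not tied to the question's sampler), a fiber only runs between a resume call and the next Fiber.yield, so "pausing" it is just a matter of not resuming it:
counter = Fiber.new do
  n = 0
  loop do
    puts "tick #{n += 1}"
    Fiber.yield # hand control back; nothing runs until the next resume
  end
end

counter.resume # prints "tick 1"
counter.resume # prints "tick 2"
# The fiber is now effectively paused; it picks up exactly where it yielded.
counter.resume # prints "tick 3"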

Odd bug with DataMapper, Mutexes, and Threads?

I have a database full of URLs that I need to test HTTP response time for on a regular basis. I want to have many worker threads combing the database at all times for a URL that hasn't been tested recently, and if it finds one, test it.
Of course, this could cause multiple threads to snag the same URL from the database. I don't want this. So, I'm trying to use Mutexes to prevent this from happening. I realize there are other options at the database level (optimistic locking, pessimistic locking), but I'd at least prefer to figure out why this isn't working.
Take a look at this test code I wrote:
threads = []
mutex = Mutex.new

50.times do |i|
  threads << Thread.new do
    while true do
      url = nil

      mutex.synchronize do
        url = URL.first(:locked_for_testing => false, :times_tested.lt => 150)
        if url
          url.locked_for_testing = true
          url.save
        end
      end

      if url
        # simulate testing the url
        sleep 1
        url.times_tested += 1
        url.save

        mutex.synchronize do
          url.locked_for_testing = false
          url.save
        end
      end
    end

    sleep 1
  end
end

threads.each { |t| t.join }
Of course there is no real URL testing here. But what should happen is at the end of the day, each URL should end up with "times_tested" equal to 150, right?
(I'm basically just trying to make sure the mutexes and worker-thread mentality are working)
But each time I run it, a few odd URLs here and there end up with times_tested equal to a much lower number, say 37, and with locked_for_testing frozen on "true".
Now as far as I can tell from my code, if any URL gets locked, it will have to unlock. So I don't understand how some URLs are ending up "frozen" like that.
There are no exceptions and I've tried adding begin/ensure but it didn't do anything.
Any ideas?
I'd use a Queue, and a master thread to pull the work you want done. If you have a single master, you control what's getting accessed. This isn't perfect, but it's not going to blow up because of concurrency; remember, if you aren't locking at the database level, a mutex doesn't really help you if something else accesses the db.
code completely untested
require 'thread'

queue = Queue.new
keep_running = true
# trap Ctrl-C or something to reset keep_running

master = Thread.new do
  while keep_running
    # check if we need some work to do
    if queue.size == 0
      urls = URL.all(:times_tested.lt => 150)
      urls.each do |u|
        queue << u.id
      end
    end
    # keep from spinning the queue
    sleep(0.1)
  end
end

workers = []
50.times do
  workers << Thread.new do
    while keep_running
      # get an id
      id = queue.shift
      url = URL.get(id)
      # do something with the url
      url.save
      sleep(0.1)
    end
  end
end

workers.each do |w|
  w.join
end

Limiting concurrent threads

I'm using threads in a program that uploads files over sftp. The number of files to be uploaded can potentially be very large or very small. I'd like to have 5 or fewer simultaneous uploads, and if there are more, have them wait. My understanding is that a condition variable would usually be used for this, but it looks to me like that would only allow one thread at a time.
cv = ConditionVariable.new
t2 = Thread.new {
  mutex.synchronize {
    cv.wait(mutex)
    upload(file)
    cv.signal
  }
}
I think that should tell it to wait for the cv to be available, then release it when done. My question is: how can I do this while allowing more than one upload at a time but still limiting the number?
Edit: I'm using Ruby 1.8.7 on Windows, from the one-click installer.
Use a ThreadPool instead. See Deadlock in ThreadPool (the accepted answer, specifically).
A word of caution: there is no real parallelism in MRI Ruby because of the global interpreter lock (you'd need JRuby for that), though I/O-bound work such as uploads still overlaps fine. Also, an exception in one of the threads will leave the main loop below stuck waiting on the queue, unless you run in debug mode or set Thread.abort_on_exception so that thread exceptions propagate.
require "thread"
POOL_SIZE = 5
items_to_process = (0..100).to_a
message_queue = Queue.new
start_thread =
lambda do
Thread.new(items_to_process.shift) do |i|
puts "Processing #{i}"
message_queue.push(:done)
end
end
items_left = items_to_process.length
[items_left, POOL_SIZE].min.times do
start_thread[]
end
while items_left > 0
message_queue.pop
items_left -= 1
start_thread[] unless items_left < POOL_SIZE
end
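Applied to the original sftp scenario, the same fixed-pool idea might look roughly like this. It's only a sketch: it assumes the upload(file) method from the question and a hypothetical files_to_upload array of paths.
require "thread"

pool_size = 5

jobs = Queue.new
files_to_upload.each { |f| jobs << f } # hypothetical list of file paths
pool_size.times { jobs << :done }      # one stop marker per worker

workers = (1..pool_size).map do
  Thread.new do
    while (file = jobs.pop) != :done   # at most pool_size uploads run at once
      upload(file)                     # the question's upload method
    end
  end
end

workers.each { |w| w.join }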
