My Sinatra app receives requests for long-running tasks and passes them to EM.defer, which runs them on EM's internal pool of 20 threads. When more than 20 deferred operations are running, further EM.defer calls are queued in EM's @threadqueue.
However, it seems Sinatra won't service any requests until there is an EM thread available to handle them. My question is: isn't Sinatra supposed to use the reactor of the main thread to service all requests? Why am I seeing an add to the threadqueue when I make a new request?
Steps to reproduce:
Access /track/
Launch 30 /sleep/ reqs to fill the threadqueue
Access /ping/ and notice the add in the threadqueue as well as the delay
Code to reproduce it:
require 'sinatra'
#monkeypatch EM so we can access threadpools
module EventMachine
  def self.queuedDefers
    @threadqueue == nil ? 0 : @threadqueue.size
  end

  def self.availThreads
    @threadqueue == nil ? 0 : @threadqueue.num_waiting
  end

  def self.busyThreads
    @threadqueue == nil ? 0 : @threadpool_size - @threadqueue.num_waiting
  end
end
get '/track/?' do
  EM.add_periodic_timer(1) do
    p "Busy: " + EventMachine.busyThreads.to_s + "/" + EventMachine.threadpool_size.to_s +
      ", Available: " + EventMachine.availThreads.to_s + "/" + EventMachine.threadpool_size.to_s +
      ", Queued: " + EventMachine.queuedDefers.to_s
  end
end

get '/sleep/?' do
  EM.defer(Proc.new { sleep 20 }, Proc.new { body "DONE" })
end

get '/ping/?' do
  body "pong"
end
I tried the same thing on Rack/Thin (no Sinatra) and it works as it's supposed to, so I guess Sinatra is causing it.
Ruby version: 1.9.3.p125
EventMachine: 1.0.0.beta.4.1
Sinatra: 1.3.2
OS: Windows
OK, so it seems Sinatra starts Thin in threaded mode by default, causing the behavior above.
You can add
set :threaded, false
in your Sinatra configure section; this prevents the reactor from deferring requests onto separate threads and from blocking under load.
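For instance, a minimal sketch of a complete app with this setting (the /ping route is just for illustration):

require 'sinatra'

configure do
  set :server, :thin
  set :threaded, false  # handle requests on the reactor thread instead of one thread per request
end

get '/ping/?' do
  body "pong"
end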
Source1
Source2
Unless I'm misunderstanding something about your question, this is pretty much how EventMachine works. If you check out the docs for EM.defer, they state:
Don't write a deferred operation that will block forever. If so, the
current implementation will not detect the problem, and the thread
will never be returned to the pool. EventMachine limits the number of
threads in its pool, so if you do this enough times, your subsequent
deferred operations won't get a chance to run.
Basically, there's a finite number of threads, and if you use them up, any pending operations will block until a thread is available.
It might be possible to bump threadpool_size if you just need more threads, although ultimately that's not a long-term solution.
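As a rough sketch of both points, the following standalone script shrinks the pool to make the queueing easy to observe; raising EM.threadpool_size works the same way in the other direction (a sketch, not production code):

require 'eventmachine'

EM.run do
  EM.threadpool_size = 2  # default is 20; must be set before the first EM.defer spawns the pool
  4.times do |i|
    # with only 2 pool threads, jobs 2 and 3 wait in the queue until a thread frees up
    EM.defer(proc { sleep 1; i }, proc { |r| puts "done #{r}" })
  end
  EM.add_timer(3) { EM.stop }
end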
"Is Sinatra multi-threaded?" is a really good question here on SO about Sinatra and threads. In short, Sinatra is awesome, but if you need decent threading you might need to look elsewhere.
Related
I'm new to fibers and EventMachine; I only recently found out about fibers while checking whether Ruby has any concurrency features like Go's.
There don't seem to be a whole lot of examples out there for real use cases for when you'd use a Fiber.
I did manage to find this: https://www.igvita.com/2009/05/13/fibers-cooperative-scheduling-in-ruby/ (back from 2009!!!)
which has the following code:
require 'eventmachine'
require 'em-http'
require 'fiber'
def async_fetch(url)
  f = Fiber.current
  http = EventMachine::HttpRequest.new(url).get :timeout => 10
  http.callback { f.resume(http) }
  http.errback  { f.resume(http) }
  return Fiber.yield
end

EventMachine.run do
  Fiber.new {
    puts "Setting up HTTP request #1"
    data = async_fetch('http://www.google.com/')
    puts "Fetched page #1: #{data.response_header.status}"
    EventMachine.stop
  }.resume
end
And that's great, an async GET request! Yay! But... how do I actually use it asynchronously? The example doesn't have anything beyond creating the containing Fiber.
From what I understand (and don't understand):
async_fetch blocks until f.resume is called.
f is the current Fiber, which is the wrapping Fiber created in the EventMachine.run block.
async_fetch yields control flow back to its caller? I'm not sure what this does.
Why does the wrapping fiber have resume at the end? Are fibers paused by default?
Outside of the example, how do I use fibers to, say, shoot off a bunch of requests triggered by keyboard commands?
For example: every time I type a letter, I make a request to Google or something? Normally this would require a thread, where the main thread tells a parallel thread to launch a request for each keystroke. :-\
I'm new to concurrency / Fibers. But they are so intriguing!
If anyone can answer these questions, that would be very much appreciated!
There is a lot of confusion regarding fibers in Ruby. Fibers are not a tool with which to implement concurrency; they are merely a way of organizing code in a way that may more clearly represent what is going on.
In my opinion, the fact that the name 'fibers' sounds similar to 'threads' contributes to the confusion.
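A minimal illustration of that organizational role, in plain Ruby with no EventMachine. It also answers the "are fibers paused by default?" question above: yes, a fiber runs only when resumed.

fiber = Fiber.new do
  puts "step 1"
  Fiber.yield  # pause here and hand control back to the caller
  puts "step 2"
end

fiber.resume  # prints "step 1", then stops at the yield
puts "main"   # the fiber stays paused while the caller runs
fiber.resume  # prints "step 2"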
If you want true concurrency, that is, distributing the CPU load across all available CPUs, you have the following options:
In MRI Ruby
Running multiple Ruby VMs (i.e. OS processes), using fork, etc. Even with multiple threads in Ruby, the GIL (Global Interpreter Lock) prevents the Ruby runtime from using more than one CPU.
In JRuby
Unlike MRI Ruby, JRuby uses multiple CPUs when scheduling threads, so you can get truly concurrent processing.
If your code is spending most of its time waiting for external resources, then you may not have any need for this true concurrency. MRI threads or some kind of event handling loop will probably work fine for you.
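Conversely, if you do need true parallelism in MRI, here is a minimal sketch of the fork approach (heavy_computation is a hypothetical stand-in for your CPU-bound work):

# Each child process gets its own Ruby VM and its own GIL,
# so the work can occupy several cores at once.
pids = 4.times.map do |i|
  fork do
    heavy_computation(i)  # hypothetical CPU-bound function
  end
end
pids.each { |pid| Process.wait(pid) }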
In my sinatra web application, I have a route:
get "/" do
temp = MyClass.new("hello",1)
redirect "/home"
end
Where MyClass is:
class MyClass
  @@instancesArray = []

  attr_reader :string  # needed so run can read another instance's string

  def initialize(string, id)
    @string = string
    @id = id
    @@instancesArray[id] = self
  end

  def run(id)
    puts @@instancesArray[id].string
  end
end
At some point I would want to run MyClass.run(1), but I wouldn't want it to execute immediately, because that would slow down the server's response to some clients. I would want the server to wait for a time with a lighter load and then run MyClass.run(temp). How could I tell it to wait until there is an empty/light load, then run MyClass.run(temp)? Can I do that?
Addendum
Here is some sample code for what I would want to do:
$var = 0

get "/" do
  $var = $var + 1  # each time a request is received, it increments
end

After that I would have a loop that counts requests per minute (after a minute it would reset $var to 0), and if $var were less than some number, it would run tasks until the load increased.
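A sketch of that idea (the threshold and run_pending_tasks are hypothetical placeholders):

# hypothetical sketch of the polling loop described above
Thread.new do
  loop do
    sleep 60
    if $var < 10           # arbitrary "light load" threshold
      run_pending_tasks    # hypothetical; e.g. MyClass.run(...)
    end
    $var = 0               # reset the per-minute request counter
  end
end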
As Andrew mentioned (correctly; not sure why he was voted down), Sinatra stops processing a route when it sees a redirect, so any subsequent statements will never execute. As you stated, you don't want to put those statements before the redirect because that would block the request until they complete. You could potentially send the redirect status and header to the client without using the redirect method and then call MyClass#run. This would have the desired effect (from the client's perspective), but the server process (or thread) will block until it completes. This is undesirable because that process (or thread) will not be able to serve any new requests until it unblocks.
You could fork a new process (or spawn a new thread) to handle this background task asynchronously from the main process associated with the request. Unfortunately, this approach has the potential to get messy. You would have to code around different situations like the background task failing, or the fork/spawn failing, or the main request process not ending if it owns a running thread or other process. (Disclaimer: I don't really know enough about IPC in Ruby and Rack under different application servers to understand all of the different scenarios, but I'm confident that here there be dragons.)
The most common solution pattern for this type of problem is to push the task into some kind of work queue to be serviced later by another process. Pushing a task onto the queue is ideally a very quick operation, and won't block the main process for more than a few milliseconds. This introduces a few new challenges (where is the queue? how is the task described so that it can be facilitated at a later time without any context? how do we maintain the worker processes?) but fortunately a lot of the leg work has already been done by other people. :-)
There is the delayed_job gem, which seems to provide a nice all-in-one solution. Unfortunately, it's mostly geared towards Rails and ActiveRecord, and the efforts people have made in the past to make it work with Sinatra look to be unmaintained. The contemporary, framework-agnostic solutions are Resque and Sidekiq. It might take some effort to get up and running with either option, but it would be well worth it if you have several "run when you can" type functions in your application.
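As a rough sketch of the queue approach with Sidekiq (which needs a Redis server and a separate worker process; the RunJob name and its body are made up for illustration):

require 'sinatra'
require 'sidekiq'

# Runs later, inside a Sidekiq worker process, not in the web process
class RunJob
  include Sidekiq::Worker

  def perform(string, id)
    MyClass.new(string, id).run(id)
  end
end

get "/" do
  # enqueueing is fast; the actual work happens out of band
  RunJob.perform_async("hello", 1)
  redirect "/home"
end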
MyClass.run(temp) is never actually executed. In your current request to the / path you instantiate a new instance of MyClass, and then it immediately redirects to /home. I'm not entirely sure what the question is, though. If you want something to execute after the redirect, that functionality needs to exist within the /home route.
get '/home' do
  # some code like MyClass.run(some_arg)
end
I want to clear up my understanding of asynchronous IO and non-blocking servers.
With Node.js, the concept is easy to understand:
var express = require('express');
var app = express();

app.get('/test', function(req, res){
  setTimeout(function(){
    console.log("sleep doesn't block, and now return");
    res.send('success');
  }, 2000);
});

var server = app.listen(3000, function() {
  console.log('Listening on port %d', server.address().port);
});
I know that while node.js is waiting out the 2 seconds of setTimeout, it is able to serve another request at the same time; once the 2 seconds have passed, it calls its callback function.
How about the Ruby world, with the Thin server?
require 'sinatra'
require 'thin'
set :server, %w[thin]
get '/test' do
  sleep 2   # <---- does the server block here?
  "success"
end
The code snippet above uses the Thin server (non-blocking, asynchronous IO). Speaking of asynchronous IO, I want to ask: when execution reaches sleep 2, is the server able to serve another request at the same time, given that sleep 2 is blocking?
The difference between the node.js and Sinatra code is that:
node.js is written in an asynchronous way (the callback approach)
ruby is written in a synchronous way (but works asynchronously under the covers? is that true?)
If the above statement is true,
it seems that Ruby is better, as the code looks cleaner than a bunch of callback code in node.js.
Sinatra / Thin
Thin will be started in threaded mode if it is started by Sinatra (i.e. with ruby asynchtest.rb).
This means that your assumptions are correct: when reaching sleep 2, the server is able to serve another request at the same time, but on another thread.
I would like to show this behavior with a simple test:
# asynchtest.rb
require 'sinatra'
require 'thin'
set :server, %w[thin]

get '/test' do
  puts "[#{Time.now.strftime("%H:%M:%S")}] logging /test starts on thread_id:#{Thread.current.object_id} \n"
  sleep 10
  "[#{Time.now.strftime("%H:%M:%S")}] success - id:#{Thread.current.object_id} \n"
end
Let's test it by starting three concurrent HTTP requests; here the timestamps and thread ids are the relevant parts to observe.
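One way to fire the three requests from Ruby rather than from three separate curl processes (a sketch, assuming the app runs on Sinatra's default port 4567):

require 'net/http'

threads = 3.times.map do
  Thread.new { puts Net::HTTP.get(URI('http://localhost:4567/test')) }
end
threads.each(&:join)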
The test demonstrates that we get three different threads (one for each concurrent request), namely:
70098572502680
70098572602260
70098572485180
Each of them starts concurrently (the start is practically immediate, as we can see from the execution of the puts statement), then waits (sleeps) ten seconds, and after that time flushes the response to the client (the curl process).
Deeper understanding
Quoting Wikipedia on asynchronous I/O:

In computer science, asynchronous I/O, or non-blocking I/O, is a form of input/output processing that permits other processing to continue before the transmission has finished.
The above test (Sinatra/Thin) demonstrates exactly this: it's possible to start a first request from curl (the client) to Thin (the server) and, before we get the response to the first (before the transmission has finished), to start a second and a third request; these later requests aren't queued but start concurrently with the first one. In other words, the server permits other processing to continue.
Basically this confirms @Holger Just's comment: sleep blocks the current thread, but not the whole process. That said, in Thin most stuff is handled in the main reactor thread, which thus works similarly to the one thread available in node.js: if you block it, nothing else scheduled on that thread will run. In Thin/EventMachine, you can however defer work to other threads.
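A minimal sketch of such a deferral inside a route (slow_work is a hypothetical blocking call):

get '/defer' do
  EM.defer do
    slow_work  # hypothetical blocking call; it runs on a pool thread,
               # so the reactor stays free to accept new connections
  end
  "work scheduled"
end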
These linked answers have more details: "Is Sinatra multi-threaded?" and "Single thread still handles concurrency request?"
Node.js
To compare the behavior of the two platforms, let's run an equivalent asynchtest.js on node.js; as with asynchtest.rb, to understand what happens we add a log line when processing starts.
Here is the code of asynchtest.js:
var express = require('express');
var app = express();

// simple timestamp helper (the original post omitted its definition)
function getTime() {
  return new Date().toLocaleTimeString();
}

app.get('/test', function(req, res){
  console.log("[" + getTime() + "] logging /test starts\n");
  setTimeout(function(){
    console.log("sleep doesn't block, and now return");
    res.send('[' + getTime() + '] success \n');
  }, 10000);
});

var server = app.listen(3000, function(){
  console.log("listening on port %d", server.address().port);
});
Let's start three concurrent requests in node.js and observe the same behavior:
Of course, the result is very similar to what we saw in the previous case.
This answer doesn't claim to be exhaustive on the subject, which is very complex and deserves further study and specific evidence before drawing conclusions for your own purposes.
There are lots of subtle differences, almost too many to list here.
First, don't confuse "coding style" with "event model". There's no reason you need to use callbacks in Node.js (see the various 'promise' libraries), and Ruby has EventMachine if you like callback-structured code.
Second, Thin (and Ruby) can have many different multi-tasking models. You didn't specify which one.
In Ruby 1.8.7, "Thread" will create green threads. The language actually turns a "sleep N" into a timer call, and allows other statements to execute. But it's got a lot of limitations.
Ruby 1.9.x can create native OS threads. But those can be hard to use (spinning up thousands of them is bad for performance, etc.).
Ruby 1.9.x has "Fibers" which are a much better abstraction, very similar to Node.
In any comparison, you also have to take into account the entire ecosystem: pretty much any node.js code will work in a callback, and it's really hard to write blocking code. But many Ruby libraries are not thread-aware out of the box (they require special configuration, etc.). Many seemingly simple things (DNS) can block the entire Ruby process.
You also need to consider the language. Node.js is built on JavaScript, which has a lot of dark corners to trip you up. For example, it's easy to assume that JavaScript has integers, but it doesn't. Ruby has fewer dark corners (metaprogramming being one of them).
If you are really into evented architectures, you should consider Go. It has the best of all worlds: the evented architecture is built in (just like in Node, except it's multiprocessor-aware), there are no callbacks (just like in Ruby), plus it has first-class messaging (very similar to Erlang). As a bonus, it will use a fraction of the memory of a Node or Ruby process.
No: node.js is fully asynchronous; setTimeout will not block script execution, it just delays the part inside it. So these pieces of code are not equivalent. Choosing a platform for your project depends on the tasks you want to accomplish.
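For comparison, EventMachine's equivalent of a non-blocking setTimeout (a sketch):

require 'eventmachine'

EM.run do
  puts "timer set"
  EM.add_timer(2) { puts "fires after 2s"; EM.stop }
  puts "reactor keeps running meanwhile"  # printed immediately, before the timer fires
end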
I am trying to use Ruby's daemons gem and loop the restart of a daemon that has its own loop. My code currently looks like this:
require 'daemons'

while true
  listener = Daemons.call(:force => true) do
    users = accounts.get_updated_user_list
    TweetStream::Client.new.follow(users) do |status|
      puts "#{status.text}"
    end
  end
  sleep(60)
  listener.restart
end
Running this gives me the following error (after 60 seconds):
undefined method `restart' for #<Daemons::Application:0x007fc5b29f5658> (NoMethodError)
So obviously Daemons.call doesn't return a controllable daemon like I think it does. What do I need to do to set this up correctly? Is a daemon the right tool here?
I think this is what you're after, although I haven't tested it.
class RestartingUserTracker
  def initialize
    @client = TweetStream::Client.new
  end

  def handle_status(status)
    # do whatever it is you're going to do with the status
  end

  def fetch_users
    accounts.get_updated_user_list
  end

  def restart
    @client.stop_stream
    users = fetch_users
    @client.follow(users) do |status|
      handle_status(status)
    end
  end
end

EM.run do
  client = RestartingUserTracker.new
  client.restart
  EM::PeriodicTimer.new(60) do
    client.restart
  end
end
Here's how it works:
TweetStream uses EventMachine internally, as a way of polling the API forever and handling the responses. I can see why you might have felt stuck, because the normal TweetStream API blocks forever and doesn't give you a way to intervene at any point. However, TweetStream does allow you to set up other things in the same event loop. In your case, a timer. I found the documentation on how to do that here: https://github.com/intridea/tweetstream#removal-of-on_interval-callback
By starting up our own EventMachine reactor, we're able to inject our own code into the reactor as well as use TweetStream. In this case, we're using a simple timer that just restarts the client every 60 seconds.
EventMachine is an implementation of something called the Reactor Pattern. If you want to fully understand and maintain this code, it would serve you well to find some resources about it and gain a full understanding. The reactor pattern is very powerful, but can be difficult to grasp at first.
However, this code should get you started. Also, I'd consider renaming the RestartingUserTracker to something more appropriate.
I'm building a distributed web crawler and trying to get the maximum out of the resources of each single machine. I run parsing functions in EventMachine through an Iterator and use em-http-request to make asynchronous HTTP requests. For now I have 100 iterations running at the same time, and it seems I can't get past this level: increasing the number of iterations doesn't affect the crawling speed. However, I get only 10-15% CPU load and 20-30% network load, so there's plenty of room to crawl faster.
I'm using Ruby 1.9.2. Is there any way to improve the code to use resources effectively, or am I perhaps doing it wrong?
def start_job_crawl
  @redis.lpop @queue do |link|
    if link.nil?
      EventMachine::add_timer( 1 ){ start_job_crawl() }
    else
      # parse the link, using an asynchronous http request,
      # and do something with the content
      parse(link)
    end
  end
end

# main reactor loop
EM.run {
  EM.kqueue

  @redis = EM::Protocols::Redis.connect(:host => "127.0.0.1")
  @redis.errback do |code|
    puts "Redis error: #{code}"
  end

  # 100 parallel 'threads'. Want to increase this
  EM::Iterator.new(0..99, 100).each do |num, iter|
    start_job_crawl()
  end
}
If you are using select() (which is the default for EM), the maximum is 1024, because select() is limited to 1024 file descriptors.
However, it seems you are using kqueue, so it should be able to handle much more than 1024 file descriptors at once.
What is the value of your EM.threadpool_size?
Try enlarging it; I suspect the limit is not in kqueue but in the pool handling the requests...
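For instance, a sketch of raising the pool size inside the reactor setup shown above (100 is an arbitrary value to experiment with):

EM.run {
  EM.kqueue
  EM.threadpool_size = 100  # default is 20; set it before any EM.defer runs

  # ... rest of the setup as above ...
}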