Environment
I'm installing Airbrake on Heroku for a Ruby web app (not Rails).
Airbrake#notify in version 5 of the Airbrake gem for Ruby sends notifications asynchronously.
My worry is that, without a Sidekiq worker + Redis, calling Airbrake#notify could still slow down the app's response time, depending on where it's called (in a Rails-like controller or some other part of the app).
Besides avoiding that potential issue, the other advantage I can think of in using a Sidekiq worker + Redis to call Airbrake#notify is that Redis offers a couple of persistence strategies, so if the app crashes I can go back and review the error notifications still backed up in the Sidekiq queue.
Whereas if I don't use Sidekiq + Redis and the app crashes, there will be no backed-up data....
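For reference, the Sidekiq approach I'm weighing would look roughly like this (a sketch only; the worker class name is made up for illustration):

require 'sidekiq'
require 'airbrake-ruby'

# Hypothetical worker: the actual notify call happens in a Sidekiq process
# backed by Redis, so the web request only pays the cost of enqueueing.
class AirbrakeNotifyWorker
  include Sidekiq::Worker

  def perform(message, params = {})
    Airbrake.notify(message, params)
  end
end

# Elsewhere in the app, instead of calling Airbrake.notify directly:
# AirbrakeNotifyWorker.perform_async("something went wrong", "user_id" => 123)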
Questions
Does that mean I don't need to use Sidekiq + Redis (or some other equivalent database)?
Am I understanding the issue correctly? I don't have a very complete understanding of "pooled connections" and asynchronous processing, which makes it a bit challenging to work out what to do here.
This is the class that sends async notices: https://github.com/airbrake/airbrake-ruby/blob/master/lib/airbrake-ruby/async_sender.rb
It uses standard Ruby threads to send messages, so no background service should be necessary.
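So on Heroku something like this should be enough; a minimal sketch of the v5 API as I understand it, with placeholder credentials:

require 'airbrake-ruby'

Airbrake.configure do |c|
  c.project_id  = 12345               # placeholder
  c.project_key = 'YOUR_PROJECT_KEY'  # placeholder
end

begin
  raise 'boom'
rescue => e
  # Returns almost immediately: the notice is queued to an in-process pool of
  # Ruby threads (the AsyncSender linked above) and delivered in the background.
  Airbrake.notify(e)
end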
I'm trying to use the WIN32OLE class in conjunction with the eventmachine library. The OLE library communicates fine with the program, but the minute I add the WIN32OLE_Event hook, it no longer does: the events fire at unpredictable times (or often never). Removing my listen server implemented with eventmachine seems to make the events fire properly.
Does anyone have an idea why this is happening and how I can work around it? What other connection / socket managing libraries are there that could possibly replace eventmachine?
It turns out WIN32OLE is not thread-safe, and it is up to the user to ensure it is only ever accessed by the thread that initialized it.
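The workaround that follows from this is to confine every COM call to one dedicated thread and feed it work through a queue, roughly like this (a sketch; the ProgID is just an example):

require 'win32ole'
require 'thread'

commands = Queue.new

ole_thread = Thread.new do
  app = WIN32OLE.new('Excel.Application')  # example ProgID, substitute your own
  loop do
    job = commands.pop
    break if job == :shutdown
    job.call(app)  # every COM call runs on the thread that created the object
  end
end

# From EventMachine callbacks (or any other thread), enqueue work instead of
# touching the WIN32OLE object directly:
commands << ->(app) { app.Visible = true }
commands << :shutdown
ole_thread.join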
I'm looking to build a webapp with a WebSocket component and a run-of-the-mill Rack-based frontend. My initial plan was to use Camping for the frontend, running the server on Thin, with a rack config.ru looking like this:
require 'rack'
require './parts/web-frontend'
require './parts/websocket'
AppStationary = Rack::File.new("./stationary")
run Rack::Cascade.new(AppWebSockets, AppWebPages, AppStationary)
AppWebSockets is being provided by websocket-rack and works great. In the absence of an Upgrade: WebSocket request it simply 404s, and the request runs down the cascade to the Camping app, AppWebPages.
It's becoming clear that this Camping webapp inevitably requires access to IO, to talk with the CouchDB database using regular HTTP requests. There are plenty of ways to do HTTP requests, including some async libraries compatible with eventmachine. But if I subscribe to a callback, Rack returns and the page has already been responded to by the time I'm ready to create a response. I'd like to use em-synchrony to get some concurrency via Ruby 1.9's Fibers (which I've only just gotten my head around), but I cannot find any documentation on how to make use of em-synchrony with Thin.
I've encountered a webserver called Goliath which claims to be similar to Thin, with em-synchrony support baked in, but it lacks a command-line utility to launch and test the server and seems to require that I write a different sort of file from a rackup, which is quite distasteful. It is also unclear whether it would even support websocket-rack, which currently only specifies support for Thin.
What are some good ways to avoid blocking IO while still making use of familiar rack based tools like camping, and having access to WebSockets?
In regards to Goliath: Goliath is based on Thin (I started with the Thin code and went from there). A lot of the code has changed (e.g. using http_parser.rb instead of the Mongrel parser), but the original basis was Thin.
Launching the server is just a matter of executing your .rb file. The system is the same one Sinatra uses (I borrowed the code from Sinatra to make it work). You can also write your own server if you want; there are examples in the repo if you need the extra control. For us, we wanted launching to be as simple as possible and to require creating as few files as possible. So launching the .rb file and using God to bring up/restart servers worked well.
You write tests with RSpec/Test::Unit and run the test file as you normally would. The tests for Goliath will fire up the reactor and send real requests to the API from your unit tests (note, this doesn't fork; it uses EM to run the reactor in the same process as the tests). All this stuff is wrapped in a test_helper that Goliath provides.
There is no rackup file with Goliath; you run the .rb file directly. A Goliath application has the middleware use commands baked straight into the .rb file. For us at PostRank, this was the easiest and clearest way to define the server: you have all of your use statements (with any extra bits they need) visible as you work on the file, instead of spreading them across multiple files. For us this was a win; your mileage may vary.
I have no idea if websocket-rack would work, but there is a branch in the repo for baking WebSocket support straight into Goliath. I haven't looked at it in a while (it needed some upstream bug fixes, which have since landed), but it shouldn't be too hard to get it up and running and, with the upstream fixed, merged into master.
To your question about em-synchrony and thin, you should just be able to wrap an EM.synchrony {} block around your code. The synchrony method just calls down to EM.run and wraps your block in a new fiber. If the reactor is already running EM will just execute the passed block immediately. As long as Thin has already started the reactor this should work fine.
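For example, something like this (a rough sketch assuming the em-synchrony and em-http-request gems; the CouchDB URL is made up):

require 'em-synchrony'
require 'em-synchrony/em-http'  # fiber-aware wrapper around em-http-request

EM.synchrony do
  # Inside the fiber this reads like blocking code, but the reactor keeps running.
  http = EventMachine::HttpRequest.new('http://127.0.0.1:5984/mydb/some_doc').get
  puts http.response
  EM.stop
end

If the reactor is already running under Thin, you would skip EM.stop and just wrap the per-request work in a fiber the same way.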
Update: The websockets branch has been merged into Goliath mainline, so there is WebSocket support baked straight into Goliath if you're running from HEAD.
Here's an example of how to add async support to Camping: https://gist.github.com/1192720 (see 65 for the code you'll have to use in your app). Maybe we should wrap it up in a gem or something…
Have you looked at Cramp - http://cramp.in ? Cramp is fully async and has built-in WebSocket support.
Is Sinatra multi-threaded? I read elsewhere that "Sinatra is multi-threaded by default"; what does that imply?
Consider this example
get "/multithread" do
t1 = Thread.new{
puts "sleeping for 10 sec"
sleep 10
# Actually make a call to a third-party API using Net::HTTP or whatever.
}
t1.join
"multi thread"
end
get "/dummy" do
"dummy"
end
If I access "/multithread" and then "/dummy" in another tab or browser, nothing can be served (in this case for 10 seconds) until the "/multithread" request is completed. If the activity in that thread hangs, the application becomes unresponsive.
How can we work around this without spawning another instance of the application?
tl;dr Sinatra works well with Threads, but you will probably have to use a different web server.
Sinatra itself does not impose any concurrency model; it does not even handle concurrency. That is done by the Rack handler (web server), like Thin, WEBrick or Passenger. Sinatra itself is thread-safe, meaning that if your Rack handler uses multiple threads to serve requests, it works just fine. However, since Ruby 1.8 only supports green threads and Ruby 1.9 has a global VM lock, threads are not that widely used for concurrency: on both versions, threads will not run truly in parallel. They will, however, on JRuby or the upcoming Rubinius 2.0 (both alternative Ruby implementations).
Most existing Rack handlers that use threads will use a thread pool in order to reuse threads instead of actually creating a thread for each incoming request, since thread creation is not free, especially on 1.9 where threads map 1:1 to native threads. Green threads have far less overhead, which is why fibers, which are basically cooperatively scheduled green threads, as used by the sinatra-synchrony gem mentioned in another answer, have become so popular recently. You should be aware that any network communication will have to go through EventMachine, so you cannot use the mysql gem, for instance, to talk to your database.
Fibers scale well for network-intense processing but fail miserably for heavy computations. You are less likely to run into race conditions, a common pitfall with concurrency, if you use fibers, as they only do a context switch at clearly defined points (with synchrony, whenever you wait for IO). There is a third common concurrency model: processes. You can use a preforking server or fire up multiple processes yourself. While this seems a bad idea at first glance, it has some advantages: on the normal Ruby implementation, this is the only way to use all your CPUs simultaneously, and you avoid shared state, so no race conditions by definition. Also, multi-process apps scale easily over multiple machines. Keep in mind that you can combine multiple processes with other concurrency models (evented, cooperative, preemptive).
The choice is mainly made by the server and middleware you use:
Multi-Process, non-preforking: Mongrel, Thin, WEBrick, Zbatery
Multi-Process, preforking: Unicorn, Rainbows, Passenger
Evented (suited for sinatra-synchrony): Thin, Rainbows, Zbatery
Threaded: Net::HTTP::Server, Threaded Mongrel, Puma, Rainbows, Zbatery, Thin[1], Phusion Passenger Enterprise >= 4
[1] Since Sinatra 1.3.0, Thin will be started in threaded mode if it is started by Sinatra (i.e. with ruby app.rb, but not with the thin command, nor with rackup).
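For the multi-process route, the setup is mostly server configuration; a minimal Unicorn sketch (file name and numbers are arbitrary):

# unicorn.rb, started with: unicorn -c unicorn.rb config.ru
worker_processes 4   # four independent forked workers, no shared state
preload_app true     # load the app once in the master, then fork
timeout 30           # restart workers that are stuck longer than 30 seconds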
While googling around, I found this gem:
sinatra-synchrony
which might help you, because it touches on your question.
There is also a benchmark; they did nearly the same thing as you want (external calls).
Conclusion: EventMachine is the answer here!
Thought I might elaborate for people who come across this. Sinatra includes this little chunk of code:
server.threaded = settings.threaded if server.respond_to? :threaded=
Sinatra will detect which web server gem you have installed (Thin, Puma, whatever) and, if it responds to "threaded=", will set it to be threaded if requested. Neat.
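So in a classic-style app it comes down to something like this (a small sketch; it only helps if the server you run actually picks up the setting, as above):

require 'sinatra'

set :server, :thin
set :threaded, true  # passed through to the server via server.threaded=

get "/multithread" do
  sleep 10           # stands in for a slow third-party call
  "multi thread"
end

get "/dummy" do
  "dummy"            # served on another thread while /multithread sleeps
end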
After making some changes to the code I was able to run a Padrino/Sinatra application on Mizuno. Initially I tried to run the Padrino application on JRuby, but it was simply too unstable and I did not investigate why; I was facing JVM crashes when running on JRuby. I also went through this article, which makes me wonder why even choose Ruby if deployment can be anything but easy.
Is there any discussion on deployment of applications in Ruby? Or can I spawn a new thread :)
I've been getting into JRuby myself lately and I am extremely surprised at how simple it is to switch from MRI to JRuby. It pretty much involves swapping out a few gems (in most cases).
You should take a look at the combination of JRuby and Trinidad (an app server). TorqueBox also seems to be an interesting all-in-one solution; it comes with a lot more than just an app server.
If you want an app server that supports threading, and you're familiar with Mongrel, Thin, Unicorn, etc., then Trinidad is probably the easiest to migrate to, since it's practically identical from the user's perspective. Loving it so far!
I have some questions about non-blocking IO:
If I use Ruby without EventMachine on Nginx, could I leverage non-blocking IO?
If I use Ruby with EventMachine but on Apache, could I leverage non-blocking IO?
If the answers to the above are no, does that mean I have to use Ruby with EventMachine on Nginx to leverage non-blocking IO?
This probably doesn't really answer your question, but there are evented web servers that are "Ruby friendly" which you can use instead of Apache or Nginx.
Rainbows! is an HTTP server for Rack applications that utilizes EventMachine. It's based on Unicorn, which is based on Mongrel: http://rainbows.rubyforge.org/
Zbatery is an offshoot of Rainbows!; the main difference is that it's meant to work on systems that either do not support fork() or have no memory (nor need) to run the master/worker model. http://zbatery.bogomip.org/
Thin is another HTTP server that is also evented: http://code.macournoyer.com/thin/