Handle Dalli::RingError - No server available in Sinatra - ruby

I have a Sinatra application running on Heroku which makes use of Dalli to enable memcached support. Occasionally, the memcached server fails to respond, and I get the following:
Dalli::RingError - No server available
What is the best way to handle this situation?

I chose to handle this by explicitly ignoring the error, as there is no reason why my app functionality should fail if the caching component is down. You could certainly implement a log statement or whatever you want, but I chose to do nothing.
I created my own Cache class and use that to insulate my domain code from Dalli. Here is the relevant part:
def Cache.get(key)
  Configuration.dalliClient.get(key)
rescue Dalli::RingError
  # Memcached is unreachable; treat it as a cache miss.
  nil
end
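
The same pattern can cover writes as well. A rough sketch along the same lines, reusing Configuration.dalliClient from above (the ttl default here is arbitrary):

def Cache.set(key, value, ttl = 300)
  Configuration.dalliClient.set(key, value, ttl)
  value
rescue Dalli::RingError
  # Cache is down; behave as a no-op so the app keeps working.
  value
end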

Related

How to allow sinatra poll for data smartly

I want to design an application where the back end constantly polls different sensors while the front end (Sinatra) allows this data to be viewed, either via a JSON API or by simply displaying the results in HTML.
What considerations should I take into account when developing such an application, and how should I structure it for scalability and ease of maintenance?
My first thought is to simply let Sinatra poll the sensors every time it receives a request to the proper endpoints, but this seems like it could bog down quite fast, especially since some sensors only update themselves every couple of seconds.
My second thought is to have a background process (or thread) poll the sensors and store the values for Sinatra. When a request is received, Sinatra can then simply ask the background process for a cached value (or pull it from the threaded code) and present it to the client.
I like the second thought more, but I am not sure how I would develop the "background application" so that Sinatra could poll it for data to present to the client. The other option would be for Sinatra to run the sensor-polling code in a thread so that it can simply grab values from it inside the same process rather than requesting them from another process.
Do note that this application will also be responsible for automation of different relays and such based on the sensors; Sinatra is only responsible for relaying the status of the sensors to the user. I think separating the back end (automation + sensor information) into a background process/daemon from the front end (Sinatra) would be ideal, but I am not sure how I would fetch the data for Sinatra.
Anyone have any input on how I could structure this? If possible I would also appreciate a sample application that simply displays the idea that I could adopt and modify.
Thanks
Edit:
After a bit more research I have discovered DRb (distributed Ruby, http://ruby-doc.org/stdlib-1.9.3/libdoc/drb/rdoc/DRb.html), which allows you to make remote calls on objects over the network. This may be a suitable solution to this problem, as the daemon can automate the relays, read the sensors, and store the values in class objects, then expose those objects over DRb so that Sinatra can call getters on the remote objects to obtain up-to-date data from the daemon. This is what I initially wanted to attempt to do.
What do you guys think? Is this advisable for such an application?
I have decided to go with Sinatra, DRb, and Daemons to meet the requirements I have stated above.
The web front end will run in its own process and only serve up statistical information via DRb interactions with the back end. This will allow quick response times for the clients and allow me to separate front-end code from back-end code.
The back end will run in its own process, constantly poll the sensors for updates, and store the readings as class objects with getters so that Sinatra can fetch the information over DRb when required. It will also use the gathered information for automation that is project specific.
Finally, the back end and the front end will each be wrapped with a Daemons wrapper so that the project has the capabilities of starting, restarting, stopping, reporting run status, and automatically restarting a daemon if it crashes or quits for whatever reason.
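To sketch what that could look like (a minimal illustration, not the actual project code; the SensorStore class, port, and sensor names are made up, and the Daemons wrapper is left out):

# backend.rb -- runs in its own process, polls sensors, serves readings over DRb
require 'drb/drb'

class SensorStore
  attr_reader :readings

  def initialize
    @readings = {}
  end

  def update(name, value)
    @readings[name] = value
  end
end

store = SensorStore.new

# Polling loop; a real daemon would read actual sensors and drive the relays here.
Thread.new do
  loop do
    store.update(:temperature, rand(18..25))
    sleep 2
  end
end

DRb.start_service('druby://localhost:8787', store)
DRb.thread.join

# frontend.rb -- the Sinatra process fetches readings over DRb on each request
require 'sinatra'
require 'drb/drb'
require 'json'

DRb.start_service
store = DRbObject.new_with_uri('druby://localhost:8787')

get '/sensors.json' do
  content_type :json
  store.readings.to_json
end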
Source information:
http://phrogz.net/drb-server-for-long-running-web-processes
http://ruby-doc.org/stdlib-1.9.3/libdoc/drb/rdoc/DRb.html
http://www.sinatrarb.com/
https://github.com/thuehlinger/daemons

For Airbrake 5, do I need to use Sidekiq?

Environment
I'm installing Airbrake on Heroku for a Ruby web app (not Rails).
Airbrake#notify in Airbrake version 5 for Ruby sends notifications asynchronously.
My worry is that if I don't use a Sidekiq worker + Redis, calling Airbrake#notify might still slow down the app's response time, depending on where it's called (whether in a Rails-like controller or some other part of the app).
Besides overcoming the potential issue mentioned above, the other advantage I can think of in using a Sidekiq worker + Redis to call Airbrake#notify is that Redis has a couple of persistence strategies, so if the app crashes I can backtrack and look over the backed-up error notifications in the Sidekiq queue.
Whereas if I don't use Sidekiq + Redis and the app crashes, there will be no backed-up data...
Questions
Does that mean I don't need to use Sidekiq + Redis (or some other equivalent database)?
Am I understanding the issue correctly? I don't have a very complete understanding of "pooled connections" and asynchronous processing, so this makes understanding what to do here a bit challenging.
This is the class that sends async notices: https://github.com/airbrake/airbrake-ruby/blob/master/lib/airbrake-ruby/async_sender.rb
It uses standard Ruby threads to send messages, so no background service should be necessary.
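
For what it's worth, a minimal sketch of how this can look in a plain Ruby/Sinatra app (configuration values come from the environment here; the routes and error handler body are just illustrations):

require 'sinatra'
require 'airbrake-ruby'

Airbrake.configure do |c|
  c.project_id  = ENV['AIRBRAKE_PROJECT_ID']
  c.project_key = ENV['AIRBRAKE_PROJECT_KEY']
end

get '/' do
  raise 'boom' if params[:fail]
  'ok'
end

# Airbrake#notify only enqueues the payload; the async sender's threads do the
# actual HTTP delivery, so the response is not held up by it.
error do
  Airbrake.notify(env['sinatra.error'])
  'Something went wrong'
end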

Sinatra - Register startup and shutdown operations

I'm designing a web service using Sinatra and I need to perform certain operations when the service is started and some other operations when the server is stopped.
How can I register those operations to be fully integrated with sinatra?
Thanks.
The answer depends on how you need to perform your operations. Do they need to be run for each Ruby process, or just once for the whole service? I suppose it's once for the whole service, and in the case of the latter:
You might be tempted to run some code before your Sinatra app starts, but this is not really the behavior you might expect; I'll explain why just after. The workaround would be adding code before your Sinatra class, like:
require "sinatra"
puts "Starting"
get "/" do
...
end
You could add some code to your config.ru too, by the way; it would have the same effect, but I don't know which one is uglier.
Why is this wrong? Because when you host your web service, many web server instances will be fired up, and each one will execute the puts call or your "starting" code. This is fine when you want to initialize things that are local to your app instance, like a database connection, but not for initializing things that are shared by all of them.
As for running code when the app shuts down, well, you can't (or maybe you could with some really ugly workaround, but you'd end up with the same issue you have at startup).
So the best way to handle startup and shutdown operations would be to wrap them in the tasks that fire up your service:
Run a rake task or Ruby script that does your initialization stuff
Start your web server
And to stop it:
Run a rake task or Ruby script that stops the server
Run a rake task or Ruby script that does the cleanup operations.
You can wrap those into a single rake task by starting your app server directly from Ruby, like I did here: https://github.com/TactilizeTeam/photograph/blob/master/bin/photograph.
This way you can easily add some code to be run before starting the service while still keeping it in a single task. With some plumbing, I guess you could fire up multiple Thin instances, letting you start your cluster of Thin (or whatever you use) instances and still have one task to rely on.
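
A rough sketch of that kind of wrapper (the Bootstrap helpers are hypothetical; swap in whatever server command you actually use):

# Rakefile
namespace :service do
  desc 'Run initialization, then start the app server'
  task :start do
    require_relative 'lib/bootstrap'   # hypothetical file defining setup!/teardown!
    Bootstrap.setup!
    sh 'thin start -R config.ru -d'
  end

  desc 'Stop the app server, then run cleanup'
  task :stop do
    sh 'thin stop'
    require_relative 'lib/bootstrap'
    Bootstrap.teardown!
  end
end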
I'd say that adding a handler for the SIGINT signal could allow you to run some code before exiting. See http://www.ruby-doc.org/core-1.9.3/Signal.html for how to do that. You might want to check whether Thin already registers a trap for that signal; I'm not sure if this is handled in the library or in the script used to launch Thin (the "thin" executable that ends up in your $PATH).
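Something along these lines (the shutdown work itself is up to you):

# Somewhere in your app's startup code
Signal.trap('INT') do
  # Hypothetical shutdown work: close connections, flush buffers, etc.
  puts 'Shutting down...'
  exit
end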
Another way to handle the exit would be to have a watchdog process that checks whether your cluster is running and ensures the stop code is run if no more instances are running.

What are some good ways to make an async web app on ruby these days?

I'm looking to build a webapp with a WebSocket component and a run-of-the-mill Rack-based frontend. My initial plan was to use Camping for the frontend, running the server on Thin, with a Rack config.ru looking like this:
require 'rack'
require './parts/web-frontend'
require './parts/websocket'
AppStationary = Rack::File.new("./stationary")
run Rack::Cascade.new(AppWebSockets, AppWebPages, AppStationary)
AppWebSockets is being provided by websocket-rack and works great. In the absence of an Upgrade: WebSocket request it simply 404s, and the request runs down the cascade to the Camping app, AppWebPages.
It's becoming clear that this Camping webapp inevitably requires access to IO, to talk to the CouchDB database using regular HTTP requests. There are plenty of ways to do HTTP requests, including some async libraries compatible with EventMachine. If I subscribe to a callback, Rack returns and the page has already responded by the time I'm ready to create a response. I'd like to be able to use em-synchrony to get some concurrency via Ruby 1.9's Fibers - which I've only just gotten my head around - but cannot find any documentation on how to make use of em-synchrony with Thin.
I've encountered a webserver called Goliath which claims to be similar to Thin, with em-synchrony support baked in, but it lacks a command-line utility to launch and test the server and seems to require that I write a different sort of file from a rackup, which is quite distasteful. It is also unclear whether it would even support websocket-rack, which currently only specifies support for Thin.
What are some good ways to avoid blocking IO while still making use of familiar rack based tools like camping, and having access to WebSockets?
In regards to Goliath, Goliath is based on Thin (I started with the Thin code and went from there). A lot of the code has changed (e.g. using http_parser.rb instead of the Mongrel parser), but the original basis was Thin.
Launching the server is just a matter of executing your .rb file. The system is the same as Sinatra uses (I borrowed the code from Sinatra to make it work). You can also write your own server if you want; there are examples in the repo if you need the extra control. For us, we wanted launching to be as simple as possible and to require as few created files as possible. So launching the .rb file and using God to bring up/restart servers worked well.
You write tests with RSpec/Test::Unit and run the test file as you normally would. The tests for Goliath will fire up the reactor and send real requests to the API from your unit tests (note, this doesn't fork; it uses EM to run the reactor in the same process as the tests). All this stuff is wrapped in a test_helper that Goliath provides.
There is no rackup file with Goliath. You run the .rb file directly. The Goliath application has the middleware use commands baked straight into the .rb file. For us at PostRank, this was the easiest and clearest way to define the server. You have all of your use statements (with any extra bits they use) visible as you work on the file instead of having multiple files. For us this was a win; your mileage may vary.
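For illustration only (not taken from the answer above; the class name and port are made up), such a file can be as small as:

# hello.rb -- run with: ruby hello.rb -sv -p 9000
require 'goliath'

class Hello < Goliath::API
  use Goliath::Rack::Params   # middleware is declared right in the app file

  def response(env)
    [200, { 'Content-Type' => 'text/plain' }, 'Hello from Goliath']
  end
end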
I have no idea if websocket-rack would work, but there is a branch in the repo for baking WebSocket support straight into Goliath. I haven't looked at it in a while (it needed some upstream bug fixes, which have since landed), but it shouldn't be too hard to get it up and running and, with the upstream fixed, merged into master.
To your question about em-synchrony and Thin, you should just be able to wrap an EM.synchrony {} block around your code. The synchrony method just calls down to EM.run and wraps your block in a new fiber. If the reactor is already running, EM will just execute the passed block immediately. As long as Thin has already started the reactor, this should work fine.
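For example, a minimal standalone sketch using em-synchrony's fiber-aware em-http wrapper (the CouchDB URL is made up; under Thin you would skip EM.stop and rely on the already-running reactor):

require 'em-synchrony'
require 'em-synchrony/em-http'

EM.synchrony do
  # Inside the fiber this call reads synchronously, but the reactor stays free.
  doc = EventMachine::HttpRequest.new('http://127.0.0.1:5984/mydb/some_doc').get
  puts doc.response
  EM.stop
end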
Update: The websockets branch has been merged into Goliath mainline, so there is WebSocket support baked straight into Goliath if you're running from HEAD.
Here's an example of how to add async support to Camping: https://gist.github.com/1192720 (see line 65 for the code you'll have to use in your app). Maybe we should wrap it up in a gem or something…
Have you looked at Cramp - http://cramp.in? Cramp is fully async and has in-built WebSocket support.

Restart a dummy application when testing rails engines?

I'm working on a new Rails engine and I need to restart it while testing. If I try Dummy::Application.initialize!, it does not work because the application was already initialized, so Rails returns the same instance.
I need to make my engine's after_initialize block run again.
I do not believe that Rails::Application has any (at least publicly accessible) method for restarting the stack. Your best bet (and what I do) is to just exit the server process (Control + C) and rails s the server back up.
If that is not what you are talking about, please be more specific on the error and situation you are in.
ref: http://railsapi.com/doc/rails-v3.0.7/classes/Rails/Application.html
