I am using Sequel with Sinatra in a Puma / Rack server.
I wish to store my Sequel DB connection object with the session - not globally - so that I can have a separate pool of DB connections for each logged-on user. The intent is to have one database server logon for each web server logon.
I cannot work out how to do this, since the Sequel database object appears to be a global singleton. If, for example, I attempt to serialize the database object and store it in the session, I get the error: TypeError - can't dump anonymous class. I don't want to have to connect to the database on every route request.
How can I do this? Here is some sample code that hopefully indicates what I am trying to achieve:
require 'sinatra/base'
require 'pp'
require 'sequel'
require 'json'
require 'java'
require 'sqljdbc4.jar'
require 'yaml'
class MyApp < Sinatra::Base
  set :sessions, true
  use Rack::Session::Cookie, :key => 'rack.session',
                             :expire_after => 2592000,
                             :secret => 'sydasx'

  get '/' do
    db = session[:db]
    DB = YAML::load(db)
    response = ''
    DB['select * from SEC_USER'].each do |row|
      response += row.to_s
    end
    response.to_json
  end

  get '/login/:username' do
    username = params['username']
    puts "username: #{username}"
    conn_str = "jdbc:sqlserver://localhost:1434;databaseName=#{username};integratedSecurity=true;"
    DB = Sequel.connect(conn_str)
    puts "DB: #{DB.pretty_inspect}"
    db = YAML::dump(DB)
    puts "db: #{db}"
    session[:db] = db
    "logged in"
  end
end
You can't serialize the Sequel::Database object. You have a few decent options:
1. Use a Rack middleware that creates a Sequel::Database object per request, using the object only for that request. In this case you wouldn't assign the result of Sequel.connect to a constant; you would pass a block to Sequel.connect and call the next app in the middleware chain inside that block (see the sketch after this list).
2. Create a single Sequel::Database object at the top level and store it in the DB constant. Load the arbitrary_servers and server_block extensions into the Sequel::Database object, then use a Rack middleware that checks out a connection to the appropriate server for the duration of the block.
3. If you have few customers, it's possible to just use Sequel's sharding support with only the server_block extension, without arbitrary_servers. One advantage of doing that is that connections can be cached, so you aren't making a separate database connection per request (which would be the case for both 1 and 2).
4. Use a global hash as you mentioned, with keys being user names and values being Sequel::Database objects. You need to make sure you have enough memory to store all the objects you want to track if you do this.
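For illustration, here is a minimal sketch of option 1. The middleware name and the 'myapp.db' env key are made up, and the connection string is assumed to be derived from the username placed in the session at login:

require 'sequel'

class PerUserDatabase
  def initialize(app)
    @app = app
  end

  def call(env)
    username = env['rack.session'][:username]
    conn_str = "jdbc:sqlserver://localhost:1434;databaseName=#{username};integratedSecurity=true;"
    # The block form of Sequel.connect disconnects when the block
    # returns, so the database object only lives for this one request.
    Sequel.connect(conn_str) do |db|
      env['myapp.db'] = db
      @app.call(env)
    end
  end
end

The Sinatra routes would then read the handle from env['myapp.db'] instead of a DB constant.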
Related
I'm using distributed Ruby so that I can save the Selenium WebDriver object in one script and use the same object in the next script. When I run the clients I get an error indicating #<DRb::DRbConnError: connection closed>.
Has anyone tried this, or how do we overcome this issue?
Below are my scripts
Server.rb
require 'drb/drb'
# The URI for the server to connect to
URI="druby://localhost:8787"
# Allows sharing of variables between component runs
class TestScope
  # class variable
  @@variables = {}

  def save_variable(key, value)
    @@variables.store(key, value)
  end

  def get_variable(key)
    return @@variables[key]
  end

  def get_size
    return @@variables.size
  end
end
# The object that handles requests on the server
FRONT_OBJECT=TestScope.new
DRb.start_service(URI, FRONT_OBJECT, safe_level: 1, verbose: true)
# Wait for the drb server thread to finish before exiting.
DRb.thread.join
Client1.rb
require 'drb/drb'
require 'selenium-webdriver'
# The URI to connect to
SERVER_URI="druby://localhost:8787"
# Start a local DRbServer to handle callbacks.
# Not necessary for this small example, but will be required
# as soon as we pass a non-marshallable object as an argument
# to a dRuby call.
# Note: this must be called at least once per process to take any effect.
# This is particularly important if your application forks.
DRb.start_service
test_scope = DRbObject.new_with_uri(SERVER_URI)
driver = Selenium::WebDriver::Driver.for :firefox
driver.navigate.to "http://www.google.com"
puts "Saving the driver object to the test scope"
test_scope.save_variable('driver', driver)
Client2.rb
require 'drb/drb'
require 'selenium-webdriver'
# The URI to connect to
SERVER_URI="druby://localhost:8787"
# Start a local DRbServer to handle callbacks.
# Not necessary for this small example, but will be required
# as soon as we pass a non-marshallable object as an argument
# to a dRuby call.
# Note: this must be called at least once per process to take any effect.
# This is particularly important if your application forks.
DRb.start_service
test_scope = DRbObject.new_with_uri(SERVER_URI)
puts "Fetching the driver object from the test scope"
driver1 = test_scope.get_variable('driver')
driver1.navigate.to "http://www.yahoo.com"
In order to share an object using DRb, the class must be defined on the DRb server, since DRb allows an object in one Ruby process to invoke methods on an object in another Ruby process, on the same or a different machine.
If there is a scenario where one needs to create an object in one DRb client and use it from other DRb clients, the class needs to be defined in a file that every process loads (for example a shared scriptrunner.rb), so that multiple clients can use it.
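One way to apply that here, sketched under the assumption that the browser may be created server-side, is to let the long-running DRb server own the WebDriver instance. Clients then only hold a DRb reference to it, nothing has to be marshalled, and the connection no longer dies when Client1.rb exits. The start_driver method below is a made-up helper:

# server.rb (sketch): the WebDriver instance lives in this process
require 'drb/drb'
require 'selenium-webdriver'

class TestScope
  def initialize
    @variables = {}
  end

  # create the browser in the server process, not in a client
  def start_driver
    @variables['driver'] ||= Selenium::WebDriver.for(:firefox)
  end

  def get_variable(key)
    @variables[key]
  end
end

DRb.start_service("druby://localhost:8787", TestScope.new)
DRb.thread.join

Because the driver cannot be marshalled, DRb hands each client a proxy to the server-side object, and calls such as driver.navigate.to are forwarded over the wire.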
I have a working Sinatra app that uses redis-namespace for its Redis connections. It works fine, but on Heroku it keeps running out of its 10 Redis connections, despite having very little traffic - they seem to stay open for ages and the app keeps opening new ones.
So, there might be a better way to structure what I've got, so it doesn't keep opening new connections. Or maybe I can use connection_pool... although I'm not sure how to use that with redis-namespace.
The Sinatra front end (myapp/frontend.rb) is something like:
require 'sinatra/base'
require 'myapp/store'
module MyApp
  class Frontend < Sinatra::Base
    registration_store = MyApp::Store::Registration.new
    subscription_store = MyApp::Store::Subscription.new

    get '/' do
      ...
    end

    ...
  end
end
And the Redis-accessing Store classes are in myapp/store.rb:
require 'redis'
require 'redis-namespace'
module MyApp
  module Store
    class RedisBase
      attr_accessor :redis

      def initialize
        uri = URI.parse(ENV['REDISCLOUD_URL'])
        redis = ::Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
        @redis = ::Redis::Namespace.new(:myapp, :redis => redis)
      end
    end

    class Registration < RedisBase
      def add(user_id)
        redis.sadd(:registrations, user_id)
      end
      ...
    end

    class Subscription < RedisBase
      ...
    end
  end
end
The frontend stores data via the Store classes: registration_store.add(37).
Am I doing something wrong that keeps opening new connections unnecessarily? Or, how can I add connection_pool or similar?
I bumped into a similar problem and stumbled upon this question. I think you should add redis.quit somewhere in your code. Doing some manual testing, monitoring connections with client list on the redis command line, shows that the connection disappears on a quit. The object can still be used later and will open a new connection if the current one is closed. No need for pooling! (At least when the load is low... I guess you may end up with calls not getting a connection under higher loads.)
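If you do end up needing pooling, a minimal sketch with the connection_pool gem could look like this (the pool size and the POOL constant are arbitrary choices, not part of redis-namespace):

require 'uri'
require 'connection_pool'
require 'redis'
require 'redis-namespace'

module MyApp
  module Store
    # one shared pool instead of a connection per Store instance
    POOL = ConnectionPool.new(:size => 5, :timeout => 5) do
      uri = URI.parse(ENV['REDISCLOUD_URL'])
      redis = ::Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
      ::Redis::Namespace.new(:myapp, :redis => redis)
    end

    class Registration
      def add(user_id)
        POOL.with { |redis| redis.sadd(:registrations, user_id) }
      end
    end
  end
end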
You can set a session expiry for a Sinatra app when you set up the session engine:
use Rack::Session::Cookie, :expire_after => 60*60*3, :secret => 'xxxx'
But I want to enable a longer session for certain users. Say two weeks.
session[:expires] = ?? # how and where do I put this?
Do I need to set on each call (before do?) or is once good enough? Is session[:expires] the right thing to set?
First make sure you don't set an expire_after value with the use Rack::Session::Cookie command, and put use Rack::Session::Cookie in your configure block. Next create an "expiry time" variable (let's say expiry_time for this example) or a setting in your config. Then, for each user, when they log in, retrieve their expiry_time setting and issue the following command:
env['rack.session.options'].merge! expire_after: expiry_time
That should do what you are asking.
If this doesn't work for you, try putting the env...merge! command in a before block.
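Putting those pieces together, a minimal sketch (expiry_for is a made-up stand-in for however you store per-user settings):

require 'sinatra'

configure do
  # note: no :expire_after here, it is set per user at login
  use Rack::Session::Cookie, :key => 'rack.session', :secret => 'xxxx' * 10
end

# hypothetical per-user session length lookup
def expiry_for(username)
  username == 'vip' ? 60 * 60 * 24 * 14 : 60 * 60 * 3
end

get '/login/:username' do
  session[:user] = params['username']
  env['rack.session.options'].merge!(:expire_after => expiry_for(params['username']))
  "logged in"
end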
I tried to do this via an after filter in Sinatra but it didn't work; I guess Sinatra sets the session after the after filters have run. So I knocked up a quick Rack filter instead, and it appears to work.
require 'sinatra'
class SessionExpiryModifier
  def initialize(app, options = {})
    @app, @options = app, options
  end

  def call(env)
    warn env["rack.session.options"].inspect
    t = Time.now.to_i.even? ? 10 : 60
    env["rack.session.options"].merge! :expire_after => 60 * 60 * t
    @app.call env
  end
end

configure do
  use Rack::Session::Cookie,
      :expire_after => 60 * 60 * 3,
      :secret => 'xxxx' * 10
  use SessionExpiryModifier
end

get "/" do
  session[:usr] = Time.now
  env["rack.session.options"].inspect
end
However, that makes it a lot harder to get a conditional from the Sinatra app into the Rack filter to decide which branch to take; how to do it depends on what your condition is. Perhaps inject something into the headers that the filter can read to make the decision.
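One way that injection could look, as a sketch: the app sets a made-up X-Long-Session response header, and the filter (sitting inside Rack::Session::Cookie, which commits the session after its inner app returns) picks it up:

class ConditionalSessionExpiry
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    # header name is arbitrary; delete it so it never reaches the client
    if headers.delete('X-Long-Session')
      env['rack.session.options'][:expire_after] = 60 * 60 * 24 * 14
    end
    [status, headers, body]
  end
end

In the Sinatra route you would set headers['X-Long-Session'] = '1' for users who should get the longer session.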
I'm building a simple app on the side, using an API I made with Sinatra that returns some JSON. It's quite a bit of JSON; my app's API relies on a few hundred requests to other APIs.
I can probably cache the results for 5 days or so, no problem with the data at all. I'm just not 100% sure how to implement the caching. How would I go about doing that with Sinatra?
Personally, I prefer to use redis for this type of thing over memcached. I have an app in which I use redis pretty extensively, in a similar way to what you described. If I make a call that is not cached, page load time is upwards of 5 seconds; with redis, the load time drops to around 0.3 seconds. You can set an expires time as well, and change it quite easily. I would do something like this to retrieve the data from the cache.
require 'redis'
get '/my_data/:id' do
  redis = Redis.new
  if redis[params[:id]]
    # the cached value is a JSON string, so set the content type and
    # return it directly (send_file would expect a file path instead)
    content_type 'application/json'
    redis[params[:id]]
  end
end
Then when you wanted to save the data to the cache, perhaps something like this:
require 'redis'
redis = Redis.new
<make API calls here and build your JSON>
redis[id] = json
redis.expire(id, 3600*24*5)
get '/my_data/:id' do
  # security check for file-based caching
  raise "invalid id" if params[:id] =~ /[^a-z0-9]/i
  cache_file = File.join("cache", params[:id])
  if !File.exist?(cache_file) || (File.mtime(cache_file) < (Time.now - 3600*24*5))
    data = do_my_few_hundred_internal_requests(params[:id])
    File.open(cache_file, "w") { |f| f << data }
  end
  send_file cache_file, :type => 'application/json'
end
Don't forget to mkdir cache.
Alternatively, you could use memcache-client, but it will require you to install memcached system-wide.
I'm looking to build a simple, RESTful notification system for an internal project leveraging Sinatra. I've used EventMachine channels in the past to subscribe/publish to events, but in all my previous cases I was using EventMachine directly.
Does anyone know if it's possible to create, subscribe, and publish to EventMachine channels (running in Thin) from a Sinatra application, or even from some Rack middleware for that matter?
Have a look at async_sinatra.
Basically, to make it possible to use EventMachine when running in Thin, you need to make Thin aware that you want to serve requests asynchronously. The Rack protocol is synchronous by design, and Thin expects a request to be done when the handler returns. There are ways to make Thin aware that you want to handle the request asynchronously (see thin_async for an example of how), and async_sinatra makes it very easy.
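To give a feel for it, a minimal async_sinatra route might look like this (a sketch; the one-second timer stands in for a real event source):

require 'sinatra/async'

class Notifier < Sinatra::Base
  register Sinatra::Async

  aget '/notifications' do
    # the handler returns immediately, but Thin keeps the connection
    # open until body is called with the eventual response
    EM.add_timer(1) { body "something happened" }
  end
end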
Bryan,
You can use the em-http-request library (https://github.com/igrigorik/em-http-request), which will allow you to reference a specific EventMachine application running on A. the same server, B. a different server, or C. wherever you want, really.
require 'eventmachine'
require 'em-http-request'
require 'sinatra/base'
require 'thin'
class ServerClass < EventMachine::Connection
  def initialize(*args)
    # ruby singleton - store channel data in a global hash
    $channels ||= {}
  end

  def post_init
    puts "initialized"
    # register this connection so channel_send/channels_send can find it
    $channels[object_id] = self
    $cb.call("initialized")
  end

  def receive_data(data)
    # got information from client connection
  end

  def channel_send(msg, channel)
    $channels[channel].send_data(msg)
  end

  def channels_send(msg)
    $channels.each_value { |conn| conn.send_data(msg) }
  end

  def unbind
    # puts user left
    $channels.delete(object_id)
  end
end

EventMachine.run do
  $cb = EM::Callback { |msg| puts msg } # do something creative

  # the original snippet omitted the port; 8081 here is an arbitrary choice
  $ems = EventMachine.start_server('0.0.0.0', 8081, ServerClass)

  class App < Sinatra::Base
    set :public, File.dirname(__FILE__) + '/public'

    get '/' do
      erb :index
    end
  end

  App.run!({:port => 3000})
end
Above is a basic wireframe. Depending on how you want to go about sending data, you can use WebSockets (em-websocket; see the sketch below) and bind each user on login (you would have to add a login system), or adapt this to whatever you need. As long as you have a global reference to the EventMachine object (connection, websocket, channel), you can pass messages from within your application.
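A bare-bones em-websocket version of that binding idea (a sketch; port 8080 and keying connections by object_id are arbitrary choices):

require 'em-websocket'

EM.run do
  $sockets = {}

  EM::WebSocket.run(:host => '0.0.0.0', :port => 8080) do |ws|
    ws.onopen  { $sockets[ws.object_id] = ws }   # bind on connect
    ws.onclose { $sockets.delete(ws.object_id) } # unbind on disconnect
    ws.onmessage do |msg|
      # broadcast to every connected socket
      $sockets.each_value { |socket| socket.send(msg) }
    end
  end
end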
BTW - it is optional to add the EventMachine.run do ... end loop yourself, since Thin will do this anyway. It helps to know how it works, though.
Good Luck