Is it safe to reuse Faraday connection objects, or is it better to recreate them every time?
def connection
  @connection ||= Faraday.new('http://example.com') do |conn|
    conn.request :url_encoded
    # more configuration
  end
end
I think it's safe to reuse them (I have, a lot). I don't see it covered explicitly one way or the other in the documentation, but the presence of "Per-request options" (as opposed to per-connection options) at least implies that you can rely on making multiple requests with the same connection.
https://github.com/lostisland/faraday/blob/52e30bf8e8d79159f332088189cb7f7e536d1ba1/lib/faraday/connection.rb#L502
connection.get, .post, and all the other request methods duplicate the params (and other per-request state) at the line linked above. That means each request shares nothing with other requests or with the parent Connection object. It's safe to reuse.
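As a minimal sketch of the pattern (the URL and paths are placeholders), memoizing one connection and issuing several requests through it looks like this:

require 'faraday'

# Build the connection once; Faraday applies per-request options to a
# duplicate internally, so requests don't leak state into each other.
def connection
  @connection ||= Faraday.new('http://example.com') do |conn|
    conn.request :url_encoded
  end
end

# Both requests reuse the same memoized connection object.
first  = connection.get('/widgets', page: 1)      # per-request params
second = connection.post('/widgets', name: 'new') # form-encoded body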
I have a collection of 'data endpoints'. Each endpoint has a name and can be available or unavailable. In Ruby I want to present the available endpoints as a Hash to make it easy to work with them. The difficulty is that getting information about the endpoints is costly and should be done lazily.
Some examples of how I want my object to behave:
endpoints = get_endpoints.call # No endpoint information is accessed yet
result = endpoints['name1'] # This should only query endpoint "name1"
is_available = endpoints.key? 'name2' # This should only query endpoint "name2"
all_available = endpoints.keys # This has to query all endpoints
The comments describe how the object internally makes requests to the 'data endpoints'.
It is straightforward to make a Hash that can do the first 2 lines. However I don't know how to support the last 2 lines. To do this I need a way to make the keys lazy, not just the values.
Thank you for taking a look!
You'd have to override the key? method, and do your own checking in there.
class LazyHash < Hash
  def key?(key)
    # Do your checking here, however that looks for your application.
  end
end
In my opinion, though, you're asking for trouble. One of the most powerful virtues in computer science is predictability. If you're changing the behavior of something, modifying it far beyond its intent, it doesn't serve you to keep calling it by the original name. You don't need to shoehorn your solution into existing classes/interfaces.
Programming offers you plenty of flexibility, so you can do stuff like this (depending on the language, of course), but by the same argument there is no reason not to simply build a new object/service with its own API.
I recommend starting fresh with a new class and building out your desired interface and functionality.
class LazyEndpoints
  def on?(name)
  end

  def set(name, value)
  end
end
(Or something like that, the world is yours for the taking!)
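A minimal sketch of that direction, where query_endpoint and all_endpoint_names are hypothetical stand-ins for the expensive per-endpoint calls:

class LazyEndpoints
  def initialize
    @cache = {}
  end

  # Queries only the named endpoint, memoizing the result (even nil,
  # so an unavailable endpoint isn't queried twice).
  def [](name)
    @cache.fetch(name) { @cache[name] = query_endpoint(name) }
  end

  # Also touches only the one endpoint it is asked about.
  def available?(name)
    !self[name].nil?
  end

  # The only operation that genuinely has to query everything.
  def names
    all_endpoint_names.select { |name| available?(name) }
  end

  private

  # Hypothetical: the expensive call for a single endpoint.
  def query_endpoint(name)
    # ...
  end

  # Hypothetical: enumerates every endpoint.
  def all_endpoint_names
    # ...
  end
end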
I'll try to keep this as brief and to the point as possible.
I'm writing a Ruby gem, modeled after the Diplomat gem, that's a wrapper around a product's REST API. The API I'm consuming makes use of token based authentication; an API token is sent via a POST, and a session is returned as a cookie. I'm making use of the Faraday cookiejar middleware for handling the cookie that's returned by the API. The problem I'm struggling with, conceptually, is when to authenticate.
I have two classes, one called RestClient and one called Volume; the latter inherits from the former. As it stands now RestClient's init method builds a connection object and authenticates, while Volume's init method calls super and passes a path. My thinking here is that when any class that inherits from RestClient is initialized it'll authenticate the user.
class RestClient
  def initialize(api_path)
    # <build connection>
    auth
  end

  def auth
    # <post token, get session cookie>
  end
end

class Volume < RestClient
  def initialize
    super('/volume')
  end

  def volumes
    # <send GET, receive volumes>
  end
end

obj = Volume.new  # creates object, authenticates user
obj.volumes       # returns list of volumes
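For reference, wiring up the cookiejar middleware mentioned above usually looks something like this (assuming the faraday-cookie_jar gem; the base URL is a placeholder):

require 'faraday'
require 'faraday-cookie_jar'

conn = Faraday.new(url: 'https://api.example.com') do |builder|
  builder.use :cookie_jar            # stores the session cookie between requests
  builder.request :url_encoded
  builder.adapter Faraday.default_adapter
end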
I guess my question is: am I headed down the right track? Should I hold off on authenticating until a method is first called on the object, rather than authenticating when it's initialized? Or am I going about this entirely incorrectly?
What you are asking here is more of a code-style question; there is no right or wrong. I was about to vote to close because I think it is primarily opinion-based.
Since I have an opinion, I'm writing an answer instead.
a) Do not over-think
Just implement the stuff; if it works, it is good enough.
b) Rule of 3
If you have implemented 3 things of the same kind and a pattern emerges, refactor!
c) Refuse to use inheritance
When in doubt, do not use inheritance. A module will be good enough most of the time.
To your question specifically:
I would not use an initializer to make HTTP calls. Initializers that make network calls are error-prone, error handling from within (or around) them is really ugly, and it makes testing a pain.
What I would do is implement whatever you need in simple methods.
What is wrong with calling authenticate before making another API call? Putting it into a block can make it really nice and readable:
client.authenticate do |session|
  session.volumes
end
If this is too ugly for your use case, you could authenticate lazily before any other method call that might require it, along these lines:
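A minimal sketch (ensure_authenticated! and post_token_for_session are hypothetical names):

class RestClient
  # Every API-calling method goes through this guard, so the session
  # cookie is fetched on first use instead of in the constructor.
  def ensure_authenticated!
    @session_cookie ||= post_token_for_session
  end

  def volumes
    ensure_authenticated!
    # <send GET with @session_cookie, receive volumes>
  end

  private

  # Hypothetical: POSTs the API token and returns the session cookie.
  def post_token_for_session
    # ...
  end
end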
Is a cookie the only auth your API supports? Server-oriented (server-to-server) REST APIs very often also implement auth strategies that let you pass credentials with every request.
All that being said, what you also can do is something like this:
client = MyApi::Client.for_user(username: ..., password: ...)
# ...or
client = MyApi::Client.for_token(token)

volumes = MyApi::Volumes.get(client: client)
This way, where auth is required, you would be doing a good thing by "encouraging your class to be used right": you simply can't instantiate the client without authentication data, and can't initialize your remote objects/calls without a client.
Then, within the client, you can memoize the auth on the first request:
def perform(http_method, url, ...)
  @auth_cookie ||= @client.get_cookie_by_authentication
  ...
end
I have built a pretty simple REST service in Sinatra, on Rack. It's backed by 3 Tokyo Cabinet/Table datastores, which have connections that need to be opened and closed. I have two model classes written in straight Ruby that currently simply connect, get or put what they need, and then disconnect. Obviously, this isn't going to work long-term.
I also have some Rack middleware like Warden that rely on these model classes.
What's the best way to manage opening and closing the connections? Rack doesn't provide startup/shutdown hooks as far as I'm aware. I thought about inserting a piece of middleware that provides a reference to the TC/TT object in env, but then I'd have to pipe that through Sinatra to the models, which doesn't seem efficient either; and that would only get me a per-request connection to TC. I'd imagine that per-server-instance-lifecycle would be a more appropriate lifespan.
Thanks!
Have you considered using Sinatra's configure blocks to set up your connections?
configure do
  Connection.initialize_for_development
end

configure :production do
  Connection.initialize_for_production
end
That's a pretty common idiom when using things like DataMapper with Sinatra.
Check out the "Configuration" section at http://www.sinatrarb.com/intro
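Connection above is just a placeholder; a minimal sketch of such a module for Tokyo Cabinet might look like this (all module, method, and path names here are assumptions):

require 'tokyocabinet'

# Hypothetical helper: opens the table datastore once at boot and
# shares the handle with the models.
module Connection
  def self.initialize_for_development
    open_table('db/development.tct')
  end

  def self.initialize_for_production
    open_table('/var/data/app.tct')
  end

  def self.open_table(path)
    @table = TokyoCabinet::TDB.new
    @table.open(path, TokyoCabinet::TDB::OWRITER | TokyoCabinet::TDB::OCREAT)
  end

  def self.table
    @table
  end
end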
If you have other Rack middleware that depend on these connections (by way of a dependence on your model classes), then I wouldn't put the connection logic in Sinatra -- what happens if you rip out Sinatra and put in another endpoint?
Since you want connection-per-application rather than connection-per-request, you could easily write a middleware that initializes and cleans up connections (sort of the Guard Idiom as applied to Rack) and install it ahead of any other middleware that needs the connections.
class TokyoCabinetConnectionManagerMiddleware
  class << self
    attr_accessor :connection
  end

  def initialize(app)
    @app = app
  end

  def call(env)
    open_connection_if_necessary!
    @app.call(env)
  end

  protected

  def open_connection_if_necessary!
    self.class.connection ||= begin
      conn = nil # ... initialize the connection here ...
      add_finalizer_hook!
      conn # the begin block must evaluate to the connection itself
    end
  end

  def add_finalizer_hook!
    at_exit do
      begin
        TokyoCabinetConnectionManagerMiddleware.connection.close!
      rescue WhateverTokyoCabinetCanRaise => e
        puts "Error closing Tokyo Cabinet connection. You might have to clean up manually."
      end
    end
  end
end
If you later decide you want connection-per-thread or connection-per-request, you can change this middleware to put the connection in the env Hash, but you'll need to change your models as well. Perhaps this middleware could set a connection variable in each model class instead of storing it internally? In that case, you might want to do more checking about the state of the connection in the at_exit hook because another thread/request might have closed it.
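For completeness, a minimal sketch of the connection-per-request variant, passing the connection through the Rack env (the 'tokyo_cabinet.connection' key and open_connection helper are arbitrary choices):

class PerRequestConnectionMiddleware
  def initialize(app)
    @app = app
  end

  # Opens a fresh connection per request, exposes it via the env Hash,
  # and guarantees it is closed when the request finishes.
  def call(env)
    connection = open_connection
    env['tokyo_cabinet.connection'] = connection
    @app.call(env)
  ensure
    connection.close! if connection
  end

  private

  # Hypothetical: opens a Tokyo Cabinet handle.
  def open_connection
    # ...
  end
end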
I'm not very experienced in Ruby (I code in Java on a daily basis) and I'm trying to find "the Ruby way" of keeping things like connection pools around. I have a standalone Ruby app with multiple threads, and I came up with something like the code below.
Note that the MongoClient provided by the native Ruby driver for Mongo maintains a connection pool internally, so all I need is to be able to keep one instance of MongoClient around.
require 'mongo'

module MongoPool
  # module instance var ensures only one pool exists
  @mongo = nil

  # lazily create the pool by assigning to @mongo only if it is nil
  def self.lazy_create
    @mongo ||= Mongo::MongoClient.new('localhost', 27017, :pool_size => 5, :timeout => 5)
  end

  def connection
    MongoPool.lazy_create
  end
end

class PeopleRepository
  include MongoPool

  def random_person
    coll = connection['test']['people']
    coll.find_one
  end
end

# usage
PeopleRepository.new.random_person
I know that works (I checked that the object_id of @mongo remains the same across several invocations), but is this the preferred way to keep things around?
There may be more than one repository, so each can include MongoPool and use its connections. Are there any drawbacks to the solution above? Are there any other ways I should be aware of?
NOTE: The question is more about how to do things the Ruby way, not about how to do it in general (as I got it working).
You don't really need another gem to do this, and in fact Mongoid's driver (Moped) doesn't support connection pooling yet anyway.
Similar to the recommendation to use an application-level constant in Rails, you just need a class variable in your headless application so that your MongoClient instance is the same single object/pool across all invocations of your application's base class.
For example, you could do something like this:
require 'mongo'

class MyApplication
  include Mongo

  # creates a single class instance and sets the pool size, but won't
  # connect until first used (lazy)
  @@client = MongoClient.new('localhost', 27017, :pool_size => 5, :connect => false)

  def do_something
    @@client['my_db']['my_collection'].insert({ "foo" => "bar" })
  end
end
Simple and very straightforward. The module approach you used above isn't necessary.
You mentioned Torquebox, so I assume you're using JRuby and letting Torquebox manage your application's thread pool for you.
Make sure you're running version 1.8.3 or greater of the mongo Ruby driver, which includes some major fixes and improvements for running under that kind of thread-heavy setup. We addressed a few thread-safety issues and greatly improved concurrency in the connection pool.
Hope that helps.
Assuming you are using Rails, I would do this:
# config/initializers/mongo.rb
MONGODB = Mongo::MongoClient.new('localhost', 27017, :pool_size => 5, :timeout => 5)
When you are using a library such as MongoMapper, there are probably ways to configure pooling and just use it transparently.
Have a look at Mongo libraries here: http://railscasts.com/episodes/194-mongodb-and-mongomapper
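With that initializer in place, any model or service can go through the shared constant (the class, database, and collection names below are placeholders):

# app/models/person_repository.rb
class PersonRepository
  # Reuses the single pooled client created in config/initializers/mongo.rb
  def random_person
    MONGODB['test']['people'].find_one
  end
end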