Ruby object with object property - ruby

I have a class that uses the AWS S3 gem and several methods in my class that utilise the gem. My issue is that rather than configuring it in several locations, I'd like to make it a property of my object.
In PHP, we'd do this:
<?php
class myClass {
    private $obj;
    public function __construct() {
        $this->obj = new Object();
    }
}
?>
And then I could use $this->obj->method() anywhere in myClass.
I am having a hard time getting similar to work in ruby.
My scenario is similar to this:
require 'aws/s3'

class ProfileVideo < ActiveRecord::Base
  def self.cleanup
    # <snip> YAML load my config etc etc
    AWS::S3::Base.establish_connection!(
      :access_key_id     => @aws_config['aws_key'],
      :secret_access_key => @aws_config['aws_secret']
    )
  end

  def self.another_method
    # I want to use AWS::S3 here without needing to establish connection again
  end
end
I have also noticed that my class's initialize never executes - a simple puts "here" inside it prints nothing, even though this runs as a rake task and puts "here" works in the other methods. I'm not sure if maybe rake doesn't initialize the class the way running ProfileVideo.new would?
Anyway, thanks in advance.

I'm not familiar with the S3 gem in particular but here are a couple ways you could go about this.
If you simply want to make establishing the connection easier, you can create a method in your model like so:
def open_s3
  return if @s3_opened
  AWS::S3::Base.establish_connection!(
    :access_key_id     => @aws_config['aws_key'],
    :secret_access_key => @aws_config['aws_secret']
  )
  @s3_opened = true
end
then you can call open_s3 at the top of any methods that require it and it will only open once.
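For example, a minimal sketch of the run-once guard in isolation (the real establish_connection! call is replaced by a counter here so the behaviour is visible without AWS credentials; class and method names mirror the question but are stand-ins):

```ruby
class ProfileVideo
  def self.open_s3
    return if @s3_opened
    @connections = (@connections || 0) + 1  # stands in for establish_connection!
    @s3_opened = true
  end

  def self.cleanup
    open_s3
    # ... S3 cleanup work would go here ...
  end

  def self.another_method
    open_s3
    # ... more S3 work would go here ...
  end

  def self.connection_count
    @connections || 0
  end
end

ProfileVideo.cleanup
ProfileVideo.another_method
ProfileVideo.connection_count  # => 1
```

However many methods call open_s3, the connection is only established on the first call.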
Another route you could take is to place the connection code in a before hook set to fire before any other hooks (IIRC, the order in which you define them sets the order in which they fire) and then make your calls.
In either case, I would recommend against putting your AWS key and secret into your code. Instead, they should go into a config file that is ignored by your version control system and generated on deploy for remote systems.
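A sketch of loading such a config file with the standard yaml library (the file path and key names are hypothetical; here the file contents are inlined as a string for illustration):

```ruby
require 'yaml'

# Pretend contents of a git-ignored config/aws.yml:
yaml = <<~YML
  aws_key: YOUR_KEY_HERE
  aws_secret: YOUR_SECRET_HERE
YML

aws_config = YAML.safe_load(yaml)
aws_config['aws_key']  # => "YOUR_KEY_HERE"
```

In real code you would read the file with YAML.safe_load(File.read('config/aws.yml')) and keep that path in .gitignore.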


Dry::Web::Container yielding different objects with multiple calls to resolve

I'm trying to write a test to assert that all defined operations are called on a successful run. I have the operations for a given process defined in a list and resolve them from a container, like so:
class ProcessController
  def call(input)
    operations.each { |o| container[o].(input) }
  end

  def operations
    ['operation1', 'operation2']
  end

  def container
    My::Container # This is a Dry::Web::Container
  end
end
Then I test it as follows:
RSpec.describe ProcessController do
  let(:container) { My::Container }

  it 'executes all operations' do
    subject.operations.each do |op|
      expect(container[op]).to receive(:call).and_call_original
    end
    expect(subject.(input)).to be_success
  end
end
This fails because calling container[operation_name] from inside ProcessController and from inside the test yield different instances of the operations. I can verify it by comparing the object ids. Other than that, I know the code is working correctly and all operations are being called.
The container is configured to auto register these operations and has been finalized before the test begins to run.
How do I make resolving the same key return the same item?
TL;DR - https://dry-rb.org/gems/dry-system/test-mode/
Hi, to get the behaviour you're asking for, you'd need to use the memoize option when registering items with your container.
Note that Dry::Web::Container inherits from Dry::System::Container, which includes Dry::Container::Mixin, so while the following example uses dry-container, it's still applicable:
require 'bundler/inline'

gemfile(true) do
  source 'https://rubygems.org'
  gem 'dry-container'
end

class MyItem; end

class MyContainer
  extend Dry::Container::Mixin

  register(:item) { MyItem.new }
  register(:memoized_item, memoize: true) { MyItem.new }
end
MyContainer[:item].object_id
# => 47171345299860
MyContainer[:item].object_id
# => 47171345290240
MyContainer[:memoized_item].object_id
# => 47171345277260
MyContainer[:memoized_item].object_id
# => 47171345277260
However, to do this from dry-web, you'd need to either memoize all objects auto-registered under the same path, or add the # auto_register: false magic comment to the top of the files that define the dependencies and boot them manually.
Memoizing could cause concurrency issues depending on which app server you're using and whether or not your objects are mutated during the request lifecycle, hence the design of dry-container to not memoize by default.
Another, arguably better option, is to use stubs:
# Extending above code
require 'dry/container/stub'
MyContainer.enable_stubs!
MyContainer.stub(:item, 'Some string')
MyContainer[:item]
# => "Some string"
Side note:
dry-system provides an injector so that you don't need to call the container manually in your objects, so your process controller would become something like:
class ProcessController
  include My::Importer['operation1', 'operation2']

  def call(input)
    [operation1, operation2].each do |operation|
      operation.(input)
    end
  end
end

How do I make a class conditionally return one of two other classes?

I have a design problem.
I'm writing a REST client in Ruby. For reasons beyond my control, it has to extend another gem that uses my network's ZooKeeper instance to do service lookup. My client takes a user-provided tier and, based on that value, queries the ZooKeeper registry for the appropriate service URL.
The problem is that I also need to be able to run my client against a locally running version of the service under test. When the service is running locally, ZooKeeper is obviously not involved, so I simply need to be able to make GET requests against the localhost resource URL.
When a user instantiates my gem, they call something like:
client = MyRestClient.new(tier: :dev)
or in local mode
client = MyRestClient.new(tier: :local)
I would like to avoid conditionally hacking the constructor in MyRestClient (and all of the GET methods in MyRestClient) to alter requests based on :local vs. :requests_via_the_zk_gem.
I'm looking for an elegant and clean way to handle this situation in Ruby.
One thought was to create two client classes, one for :local and the other for :not_local. But then I don't know how to provide a single gem interface that will return the correct client object.
If MyClient has a constructor that looks something like this:
class MyClient
  attr_reader :the_klass

  def initialize(opts = {})
    if opts[:tier] == :local
      @the_klass = LocalClass.new
    else
      @the_klass = ZkClass.new
    end
    @the_klass
  end
end
then I end up with something like:
test = MyClient.new(tier: :local)
=> #<MyClient:0x007fe4d881ed58 @the_klass=#<LocalClass:0x007fe4d883afd0>>
test.class
=> MyClient
test.the_klass.class
=> LocalClass
Those who then use my gem would have to make calls like:
@client = MyClient.new(tier: :local)
@client.the_klass.get
which doesn't seem right.
I could use a module to return the appropriate class, but then I'm faced with the question of how to provide a single public interface for my gem. I can't instantiate a module with .new.
My sense is that this is a common OO problem and I just haven't run into it yet. It's also possible the answer is staring me in the face and I just haven't found it yet.
Most grateful for any help.
A common pattern is to pass the service into the client, something like:
class MyClient
  attr_reader :service

  def initialize(service)
    @service = service
  end

  def some_method
    service.some_method
  end
end
And create it with:
client = MyRestClient.new(LocalClass.new)
# or
client = MyRestClient.new(ZkClass.new)
You could move these two into class methods:
class MyClient
  def self.local
    new(LocalClass.new)
  end

  def self.dev
    new(ZkClass.new)
  end
end
And instead call:
client = MyRestClient.local
# or
client = MyRestClient.dev
You can use method_missing to delegate from your client to the actual class.
def method_missing(m, *args, &block)
  @the_class.send(m, *args, &block)
end
So whenever a method that doesn't exist gets called on your class (like get in your example), it will be called on @the_class instead.
It's good style to also define the corresponding respond_to_missing? btw:
def respond_to_missing?(m, include_private = false)
  @the_class.respond_to?(m)
end
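Putting both pieces together, a self-contained sketch of the delegation (LocalClass and its get method are hypothetical stand-ins for the real service classes):

```ruby
class LocalClass
  def get
    'local response'  # hypothetical method on the wrapped class
  end
end

class MyClient
  def initialize
    @the_class = LocalClass.new
  end

  # Any unknown method is forwarded to the wrapped object.
  def method_missing(m, *args, &block)
    @the_class.send(m, *args, &block)
  end

  # Keep respond_to? honest about what the wrapper can handle.
  def respond_to_missing?(m, include_private = false)
    @the_class.respond_to?(m)
  end
end

client = MyClient.new
client.get               # => "local response"
client.respond_to?(:get) # => true
```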
The use case you are describing looks like a classic factory method use case.
The common solution for this is to create a method (not new) which returns the relevant class instance:
class MyClient
  def self.create_client(opts = {})
    if opts[:tier] == :local
      LocalClass.new
    else
      ZkClass.new
    end
  end
end
And now your usage is:
test = MyClient.create_client(tier: :local)
=> #<LocalClass:0x007fe4d881ed58>
test.class
=> LocalClass

Is it better to create new controller instance for each HTTP request in Rack based app or to use the same instance?

I'm creating a very simple Rack based application as I want it to do a very specific task.
The server.rb looks something like this:
Path = File.expand_path("#{File.dirname __FILE__}/../../")

require "bundler/setup"
require "thin"
require "rack"

%w(parser auth controller).each do |file|
  require "#{Path}/app/server/#{file}.rb"
end

builder = Rack::Builder.app do
  use Auth
  run Parser.new
end

Rack::Handler::Thin.run(builder, :Port => 8080, :threaded => true)
parser.rb looks like:
class Parser
  def initialize
    @controller = Controller.new
  end

  def call(env)
    req = Rack::Request.new(env).params
    res = Rack::Response.new
    res['Content-Type'] = "text/plain"
    command = req[:command]
    if command =~ /\A(register|r|subscribe|s)\z/i
      @controller.register
    end
    res.write command
    res.finish
  end
end
Now my question here, from a design perspective: is it better to create one instance of Controller and use it with each request (like I did with the code above), or to create a new controller instance for each request (changing @controller.register to Controller.new.register)? Which is better to use, and why?
Thanks in advance
The overhead of creating a new controller per request is likely not that large.
If you store state in the controller (instance variables etcetera) and you reuse it, you could run into concurrency issues such as race conditions or deadlock when under load.
If you take care to ensure that your Controller object stores no state, you can reuse it. If it does any sort of per-request state storage, you will need to ensure that the shared resources are properly synchronized.
My 2c - create a new controller per request, and don't change that until you can confirm a measurable performance hit from doing so. It's simpler, cleaner, and less prone to strange bugs.
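A minimal sketch of the per-request variant (Controller#register stands in for the real work; the env handling is simplified to a plain hash so the example runs without Rack):

```ruby
# A stateless controller: safe to build fresh on every request.
class Controller
  def register
    'registered'
  end
end

class Parser
  def call(env)
    command = env['command']
    if command =~ /\A(register|r|subscribe|s)\z/i
      Controller.new.register  # new instance per request, no shared state
    end
  end
end

Parser.new.call('command' => 'register')  # => "registered"
Parser.new.call('command' => 'other')     # => nil
```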

Transactions in Ruby Sequel module: how to get DB object?

I'm working in a Sinatra application using Sequel.
I want to make a transaction; according to the manual I have to use the DB object. How can I get this object from any part of my code?
You can define it in your base app.rb (or equivalent) or include a separate file where you configure the DB object if you wish.
For example, in one of my Sinatra apps, I have an app.rb that includes a
class App < Sinatra::Application
  # lots of stuff here...
end

require_relative 'models/init'
In my models/init.rb I configure DB
require 'sequel'
require 'yaml'

conf = YAML.load(File.open(File.expand_path('./config/dbconn.yml')))
env  = ENV['RACK_ENV'] || 'development'

DB = Sequel.connect(host:     conf['database'][env]['host'],
                    port:     conf['database'][env]['port'],
                    database: conf['database'][env]['schema'],
                    username: conf['database'][env]['username'],
                    password: conf['database'][env]['password'],
                    adapter:  conf['database'][env]['adapter'],
                    encoding: conf['database'][env]['encoding'])

raise "Unable to connect to #{conf['database'][env]['host']}" unless DB.test_connection
...
That's one way. Hope it helps.
You mention that you want to reference it from any part of your code; however, I've found that the models are where I tend to wrap transactions, and from there it's relatively easy:
class X < Sequel::Model
  def self.y
    self.db.transaction do
      # ...
    end
  end

  def z
    db.transaction do
      # ...
    end
  end
end

Is there an ORM-like wrapper for memcached

I'm looking for a ruby gem (or rails plugin) which abstracts the details of memcached in the same way that ActiveRecord abstracts the details of SQL. I am NOT looking for something to help cache ActiveRecord models in memcached. I'm sure there are approximately 4215 gems that will help with that problem.
Ideally what I'd like is to be able to do something like:
class Apple < MemcachedModel
  # whatever else here
end
and then be able to do stuff like:
my_apple = Apple.find('some memcached key')
which would look up the JSON representation of this class in memcached and deserialize it. I'd also maybe be able to do things like:
my_apple.color = "red"
# persist changes back to memcached
my_apple.save
# load any changes from memcached into local model
my_apple.update
It seems like someone must have scratched this itch by now and created something along these lines, but whenever I google for such a gem I just keep turning up things which help cache AR models using memcached.
You can take a look at my moneta gem, which is an ORM'ish thing for all kinds of key-value-stores. You can see it at: http://github.com/wycats/moneta/tree/master
The basic idea behind moneta is that all KVSs should behave exactly like a subset of normal Ruby hashes. We support:
#[]
#[]=
#delete
#fetch
#key?
#store
#update_key
#clear
The store and update_key methods take an additional options hash which you can use thusly:
cache = Moneta::Memcache.new(:server => "localhost:11211", :namespace => "me")
cache.store("name", "wycats", :expires_in => 2)
cache.update_key("name", :expires_in => 10)
We support a large number of KVSs:
BerkeleyDB
CouchDB
DataMapper (which means any store supported by DM)
Files
LMC
Memcache
In-process memory
MongoDB
Redis
Tokyo Cabinet
Tokyo Tyrant
S3
SDBM
Files using XAttrs
Every store supports expiry, either natively (like in memcached) or using a standard module that emulates memcache-style expiry. The API is always identical and there is a shared spec that all adapters are run against to ensure compliance.
It is also quite easy to add your own adapter, which is why so many exist.
I don't know about any Ruby ActiveRecord-like adapter for Memcached. A similar library would probably be hard to create because Memcached doesn't act as a relational database.
The result is that the library wouldn't be able to implement about 80% of the features supported by ActiveRecord, so what would be the benefit of such an implementation?
You already have everything you need in Rails to work with memcache with a "CRUD" pattern.
Rails.cache.read('key')
Rails.cache.write('key', 'value')
Rails.cache.delete('key')
Rails.cache.increment('key', 5)
Rails.cache.fetch('key') { 'value' }
If you feel more comfortable, you can create a wrapper and proxy these methods with corresponding new/create/update/save/destroy methods. However, you would never be able to go beyond a basic CRUD system just because Memcached is not intended to be a relational database.
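A sketch of that wrapper idea, with a plain Hash standing in for Rails.cache so the example is self-contained (the class and method names are hypothetical; comments show the Rails.cache call each line would proxy):

```ruby
class CacheModel
  STORE = {}  # stands in for Rails.cache

  attr_reader :key
  attr_accessor :value

  def initialize(key, value = nil)
    @key = key
    @value = value
  end

  def save
    STORE[key] = value   # Rails.cache.write(key, value)
    true
  end

  def self.find(key)
    return nil unless STORE.key?(key)  # Rails.cache.read(key)
    new(key, STORE[key])
  end

  def destroy
    STORE.delete(key)    # Rails.cache.delete(key)
    true
  end
end

record = CacheModel.new('color', 'red')
record.save
CacheModel.find('color').value  # => "red"
```

This gives you the familiar save/find/destroy surface, but as the answer notes, it can never grow beyond basic CRUD.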
It's fairly easy to implement.
require 'ostruct'
require 'active_support/cache'

class StoredStruct < OpenStruct
  class << self
    attr_writer :store
  end

  def self.store
    @store || superclass.store
  end

  def self.expand_key(key)
    'StoredStruct_' + (superclass == OpenStruct ? '' : "#{self}_") + key.to_s
  end

  def self.get_unique_id
    key = expand_key('unique_id')
    store.write(key, 0, :unless_exist => true)
    store.increment(key)
  end

  def self.save(instance)
    id = instance.id || get_unique_id
    store.write(expand_key(id), instance)
    id
  end

  def self.find(id)
    store.read(expand_key(id))
  end

  attr_reader :id

  def attributes
    @table
  end

  def attributes=(hash)
    @table = hash
  end

  def new_record?
    self.id.nil?
  end

  def save
    @id = self.class.save(self)
    true
  end

  def reload
    instance = self.class.find(self.id)
    self.attributes = instance.attributes unless self == instance
    self
  end
end
Use it like this:
# connect to memcached
StoredStruct.store = ActiveSupport::Cache::MemCacheStore.new("localhost:11211")

class Apple < StoredStruct
end

fruit = Apple.new
fruit.color = "red"
fruit.taste = "delicious"

fruit.id
#=> nil
fruit.save
#=> true
fruit.id
#=> 1

# to load any changes:
fruit.reload

Apple.find(1)
#=> fruit
As Simone Carletti wrote, Memcached isn't a relational database; it can't even list all its keys. As such, any ActiveRecord-like model storing data in Memcached will not contain all the functionality of ActiveRecord. Nonetheless, I think there is some value in having a consistent API for all your models, so if it makes sense to have one of your models store its data in Memcached, you can use this module I created for that purpose:
http://blog.slashpoundbang.com/post/1455548868/memcachemodel-make-any-ruby-object-that-persists-in
You may be looking for Nick Kallen's cache-money.
