Override multi_insert_sql_strategy used in Sequel multi_insert - ruby

Is there a way to override the multi_insert_sql_strategy that is specified when using methods like multi_insert? I am using the ODBC adapter which falls back to :separate as the strategy. The database that I am connecting to (Snowflake) supports multiple rows in the VALUES clause and as such, I'd like to leverage :values as the strategy instead. I have not found this to be an option that I can pass in.
Default strategy:
https://github.com/jeremyevans/sequel/blob/9202d780b92626646c9faeff90a7f7b9d7b6c10d/lib/sequel/dataset/sql.rb#L1340
multi_insert code:
https://github.com/jeremyevans/sequel/blob/ff5d77cb60a61b41d3eb500344f287f0b9fbdb97/lib/sequel/dataset/actions.rb#L484
Options available for import which is used by multi_insert:
https://www.rubydoc.info/github/jeremyevans/sequel/Sequel%2FDataset:import

Yes, you can override the strategy:
DB.extend_datasets do
  def multi_insert_sql_strategy; :values; end
end
In general, you may want to consider working on a Sequel adapter for Snowflake, as this is the sort of thing the adapter is supposed to take care of.
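As a sanity check (the table and column names here are hypothetical), a multi-row import with the override in place should now produce a single statement:

DB[:items].multi_insert([{:name=>'a'}, {:name=>'b'}])
# With the :values strategy this emits one INSERT:
#   INSERT INTO items (name) VALUES ('a'), ('b')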

Related

Can Pundit policies be loaded from the database?

I like the simplicity of the Pundit gem and I would like to make policies dynamic by storing them in the database.
Basically, I'm looking for a way to change policies without needing to redeploy the application.
1st way
A Pundit policy is pure Ruby code, so unless you want to keep code inside the database and evaluate it dynamically, I'd say the answer is no. That's unsafe. You may give it a go, though.
2nd way
But nothing prevents you from creating a model which keeps the rules as simple JSON and checking them from within Pundit, e.g.:
class PostPolicy < ApplicationPolicy
  def update?
    access_setting = PolicySetting.find_by(key: self.class.name)
    user.role.in?(access_setting['roles'])
  end
end
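For completeness, a minimal sketch of the PolicySetting model assumed above (the column names are illustrative; on Rails 7.1+ the serialize call takes coder: JSON instead):

class PolicySetting < ApplicationRecord
  # assumes a string key column and a text roles column holding JSON
  serialize :roles, JSON
end

# Policies can then be changed at run time, without a redeploy:
PolicySetting.find_or_create_by(key: 'PostPolicy').update(roles: ['admin'])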
Of course, the complexity and flexibility of the tool depend directly on each other.
3rd way
This is just a workaround: you could split the authorisation logic into a project separate from the main one, so that its (zero-downtime, of course) deploys would not affect the main project.
4th way
Create your own DSL to be stored in the database.
5th way
Use something like json-logic-ruby to store the logic in the database.
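For the 5th way, a minimal sketch, assuming json-logic-ruby exposes JSONLogic.apply (check the gem's README for the exact entry point):

require 'json_logic' # assumed require name for the json-logic-ruby gem

# A rule stored in the database as JSON: is the user's role admin or editor?
rule = { 'in' => [{ 'var' => 'role' }, ['admin', 'editor']] }

JSONLogic.apply(rule, { 'role' => 'admin' }) # => true
JSONLogic.apply(rule, { 'role' => 'guest' }) # => false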

Ruby - Sequel Model to access multiple databases

I'm trying to use the Ruby Sequel::Model ORM functionality for a web service, in which every user's data is stored in a separate MySQL database. There may be thousands of users and thus databases.
On every web request I want to construct the connection string to connect to the user's data, do the work, and then close the connection.
When using Sequel, I can specify the database to use for a particular block of code:
Sequel.connect(:adapter=>'mysql', :host=>'localhost', :database=>'test1') do |db|
  db.do_something
end
This is all very good: I can perform Sequel operations on the particular user's database. However, when I come to do my db operations using Sequel::Model, it looks like this:
Supplier.create(:field1 => 'TEST')
I.e., it doesn't take db as a parameter, so it just uses some shared database configuration.
I can configure the database the model uses in two ways. Either set the global DB constant:
DB = Sequel.connect(:adapter=>'mysql', :host=>'localhost', :database=>'test1')
class Supplier < Sequel::Model
end
Or, I can set the database just for Model:
Sequel::Model.db = Sequel.connect(:adapter=>'mysql', :host=>'localhost', :database=>'test1')
class Supplier < Sequel::Model
end
In either case, setting a shared variable like this is no good - there may be multiple requests processed concurrently, each of which needs its own database configuration.
Is there any way around this? Is there a way of specifying per-request db configuration using Sequel::Model?
As an aside, I've run into a similar problem with DataMapper, and I'm now wondering whether a single multi-tenanted database is going to be the only option in Ruby, although I'd prefer to avoid this as it limits scalability.
A solution, or any pertinent discussion would be much appreciated.
Thanks
Pete
Use Sequel's sharding support for this: http://sequel.jeremyevans.net/rdoc/files/doc/sharding_rdoc.html
Actually, in your case it's probably better to use the arbitrary_servers extension than sharding:
DB.extension :arbitrary_servers, :server_block # with_server needs both extensions

DB.with_server(:host=>'hash_host_b', :database=>'backup') do
  DB.synchronize do
    # All queries here default to the backup database on hash_host_b
  end
end
See:
http://sequel.jeremyevans.net/rdoc/files/doc/sharding_rdoc.html#label-arbitrary_servers+Extension
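Applied to the per-user-database setup from the question, a minimal sketch might look like this (the credentials, the user_42 naming scheme, and the with_user_db helper are all assumptions, not Sequel API):

# One connection pool to the MySQL server; individual databases are
# selected per request via the server_block/arbitrary_servers extensions.
DB = Sequel.connect(:adapter=>'mysql', :host=>'localhost',
                    :user=>'app', :password=>'secret', :database=>'master')
DB.extension :arbitrary_servers, :server_block

Sequel::Model.db = DB
class Supplier < Sequel::Model
end # assumes a suppliers table exists when the class is defined

# Hypothetical per-request helper: route all queries in the block
# to the given user's database on the same server.
def with_user_db(user_db)
  DB.with_server(:database=>user_db) do
    DB.synchronize { yield }
  end
end

with_user_db('user_42') do
  Supplier.create(:field1 => 'TEST')
end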

Ruby Method for Casting

I am new to Ruby development and I am making a simple chat server encrypted with TLS.
I have managed to get the basic server running; however, I now want to add special properties to each of the connected clients (username, etc.).
I have this class which I plan to use for each client that connects:
class Client < OpenSSL::SSL::SSLSocket
  attr_accessor :username
  # ...
end
I need to get a Client object from the OpenSSL::SSL::SSLServer#accept method in order to set the username attribute. I am used to C-like languages where casting would do the trick, but Google has told me that this is not the case in Ruby.
What is the Ruby way of doing this?
You don't need casting in Ruby. It's a dynamic language, so what matters is whether the object knows how to respond to a message (method).
There are essentially two ways to solve your problem:
Delegation: create Client as a wrapper holding an instance of OpenSSL::SSL::SSLSocket. Client would then have to understand and forward messages to the SSLSocket, which could get complicated.
Extension: use class_eval to add instance variables and/or methods directly to SSLSocket (see the sketch below). This is a commonly used Ruby approach.
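For example, the extension approach might look like this (the username attribute comes from the question; handshake and server setup are omitted):

require 'openssl'

# Re-open SSLSocket at run time and add the per-client state directly:
OpenSSL::SSL::SSLSocket.class_eval do
  attr_accessor :username
end

# client = ssl_server.accept # returns an SSLSocket, which now has #username
# client.username = 'alice'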

Specify table name mid-application with Ruby DataMapper

I want to dynamically create and query tables using DataMapper.
While DataMapper allows you to work with legacy tables and schemas, and in this way set the table name used, this is only possible during initialisation, not while the application is running.
Is there an easy way to tell DataMapper to migrate/upgrade a model with an assigned table name from within the application, and then to tell it to query this table?
This should not be a problem.
All Ruby classes can be created and re-defined at run-time. Even initialisation happens at run-time; it just happens to be executed first, before other code.
That is why monkey-patches work so easily: they are just additional code at initialisation that re-defines classes to add extra methods, variables, etc.
No Ruby code is "special" in the sense that it only runs at compile time; Ruby is an interpreted language.
To dynamically create a class, see Dynamically creating class in Ruby.
Assuming you don't need to dynamically create classes from an array of strings, you can define additional methods with define_method, or call DataMapper methods at runtime to add attributes.
To define new methods in a class:
Post.send :define_method, :new_method_name do
  # method body here
end
To define a new property using the DataMapper property method:
class Post
  include DataMapper::Resource
  property :title, String # the static way
end

Post.send :property, :title, String # add property the dynamic way (at run-time)
Do note that any tables or properties you define at run-time will not be available if you restart your server, unless the code that dynamically generates them is re-executed.
To update your tables at runtime, you simply do the same thing as normal, that is, call:
DataMapper.auto_upgrade!
To upgrade only a single table, you can also do:
Post.auto_upgrade!
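Putting it together, a sketch of building a model with a chosen table name at run time (the class and table names are illustrative; storage_names is how DataMapper maps a model to a table, and DataMapper 1.x also wants a finalize call after new models are defined):

# Build a model class at run time and point it at a chosen table.
klass = Class.new do
  include DataMapper::Resource
  property :id, DataMapper::Property::Serial
end
Object.const_set(:DynamicPost, klass)

DynamicPost.storage_names[:default] = 'posts_archive' # illustrative table name
DynamicPost.send :property, :title, String

DataMapper.finalize       # re-finalize after defining new models
DynamicPost.auto_upgrade! # create/alter the posts_archive table
DynamicPost.create(:title => 'hello')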
A second warning: if you have multiple processes, the dynamic code will need to be run in each process, or the additional models and properties will not be available.
This is a problem if you have multiple worker processes, as might happen in production (e.g. Nginx with multiple Unicorn workers, or multiple Mongrel workers behind HAProxy).
If you have a single-process server, this is not a problem. However, with multiple worker processes you must run the dynamic code that generates these extra classes and properties in EACH process to make them available.
This is actually the same as for initialisation, because each process goes through initialisation (or, if forked, inherits it).
The easiest way, without changing anything under the hood, is to use separate databases instead of tables (assuming that any relationships will also be stored in the separate database) and to open a connection to an additional repository in a block:
DataMapper.setup(:external, "adapter://username:password@hostname/dbname")
DataMapper.repository(:external) do
  # queries in this block run against the :external repository
end

OO Design: Multiple persistence design for a Ruby class

I am designing a class for the log entries of my mail server. I have parsed the log entries and created the class hierarchy. Now I need to save the in-memory representation to disk, to multiple destinations like MySQL and disk files. I am at a loss as to the proper way to design the persistence mechanism. The challenges are:
1. How to pass persistence initialization information, like a filename or db connection parameters, to each mechanism. The options I can think of are all ugly, e.g.:
1.1 Constructor: it becomes ugly as I add more persistence mechanisms.
1.2 Method: Object.mysql_params(" "), again butt ugly.
2. The "correct" method name to call for each persistence mechanism: e.g. Object.save_mysql and Object.save_file, or Object.save(mysql) and Object.save(file).
I am sure there is some pattern to solve this particular problem. I am using Ruby as my language, without any Rails, i.e. pure Ruby code. Any clue is most welcome.
raj
Personally I'd break things out a bit - the object representing a log entry really shouldn't be worrying about how it gets saved, so I'd probably create a MySQLObjectStore and a FileObjectStore, which you can configure separately, and which get passed the object to save. You could give your Object class a class variable containing the store, to be used on save.
class Object # stands in for your log entry class
  # cattr_accessor comes from ActiveSupport; in pure Ruby, define the
  # class-level store accessor by hand:
  def self.store=(store); @@store = store; end
  def self.store; @@store; end

  def save
    @@store.save(self)
  end
end
class MySQLObjectStore
  def initialize(connection_string)
    # Connect to DB etc...
  end

  def save(obj)
    # Write to database
  end
end
store = MySQLObjectStore.new("user:password@localhost/database")
Object.store = store
obj = Object.new(foo)
obj.save
Unless I completely misunderstood your question, I would recommend using the Strategy pattern. Instead of having this one class try to write to all of those different destinations, delegate that responsibility to another class. Have a set of LogWriter classes, each with the responsibility of persisting the object to a particular data store. So you might have a MySqlLogWriter, a FileLogWriter, etc.
Each of these objects can be instantiated on its own, and the object to persist can then be passed to it:
lw = FileLogWriter.new "log_file.txt"
lw.write(log)
You really should separate your concerns here. The message and the way the message is saved are two separate things. In fact, in many cases it would also be more efficient not to open a new MySQL connection or a new file handle for every message.
I would create a Saver class, extended by FileSaver and MysqlSaver, each of which has a save method that is passed your message. The saver is responsible for pulling out the parts of the message that apply and saving them to the medium it's responsible for. A sketch follows.
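A minimal sketch of that idea (the class, table, and column names are illustrative, and the MySQL variant assumes a Sequel-style database handle purely as an example):

class FileSaver
  def initialize(path)
    @file = File.open(path, 'a') # one file handle, reused across messages
  end

  def save(message)
    @file.puts(message.to_s)
    @file.flush
  end
end

class MysqlSaver
  def initialize(db) # e.g. a Sequel database handle, reused across messages
    @db = db
  end

  def save(message)
    @db[:log_entries].insert(:body => message.to_s) # illustrative table/column
  end
end

# Given db and message objects from elsewhere in the application:
savers = [FileSaver.new('mail.log'), MysqlSaver.new(db)]
savers.each { |s| s.save(message) }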
