Ruby - plsql gem. How to manage multiple connections - ruby

I have a Ruby project, without Rails. In my project I execute some PL/SQL using the ruby_plsql gem, like this:
require 'ruby_plsql'

plsql.connection = OCI8.new('user', 'password', "//host:1521/my-db") # This runs only once

def execute_pl(company, country)
  ds = plsql.my_package.my_procedure(company, country) # This runs many times, reusing the connection
  ds
end
At this point, everything works nicely.
Now, due to a new requirement, I need to execute PL/SQL in another database, depending on some parameter.
If I do this:
plsql.connection = OCI8.new('user','password',"//host2:1521/my-other-db")
From then on, all PL/SQL is executed in the other database, which is not what I want. The idea is to be able to run it dynamically against either of the two databases, without having to create a new connection every time some PL/SQL is executed.
How do I build another method that executes PL/SQL in the other database, without creating a new connection on every call?

To use another connection with the ruby-plsql gem, we can use an alias:
plsql(:my_alias).connection = OCI8.new('user','password',"//host2:1521/my-other-db")
And the methods:
def execute_pl(company, country)
  ds = plsql.my_package.my_procedure(company, country)
  ds
end

def execute_pl_in_other_db(company, country)
  ds = plsql(:my_alias).my_package.my_procedure(company, country)
  ds
end
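If which database to hit is decided by a parameter, both aliases can also sit behind a single dispatching method. A minimal sketch, where the use_other_db flag is only an illustration and not part of the gem:

def execute_pl(company, country, use_other_db = false)
  db = use_other_db ? plsql(:my_alias) : plsql
  db.my_package.my_procedure(company, country)
end

Both connections are opened once at startup; plsql and plsql(:my_alias) keep reusing them on every subsequent call.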

Related

Stored procedure OUT params in JRuby using Sequel

I'm trying to call a stored procedure in a DB2 database that has output params and also returns a cursor. I can get this done using JDBC through JRuby, but I'd like to extend Sequel to do it, because of the nicer interface. I've gotten this far:
Sequel::JDBC::Database.class_eval do
  def call_test
    sql = "{call ddd.mystoredproc(?)}"
    result = {}
    synchronize do |conn|
      cps = conn.prepare_call(sql)
      cps.register_out_parameter(1, Types::INTEGER)
      result[:success] = cps.execute
      result[:outparam_val] = cps.get_int(1)
      if result[:success]
        dataset.send(:process_result_set, cps.get_data_set) do |row|
          yield row
        end
      end
      # rescue block
    end
  end
end
This gets me a ResultSet that I have to work with in a very Java-ish way, though, not a nice Sequel::Dataset object. I know this code doesn't make sense - I'm just using it to experiment with values, so at one point I was returning the result hash and seeing what it contained. If I can get something that works, I will clean it up and make it more flexible. It looks like the log_yield method just logs the sql and yields to the block, so I don't know how anything else is getting converted to a Sequel::Dataset. Doing something like DB[:ddd__sometable] will return a dataset that I can loop through, but I can't figure out how and at what point the underlying Java ResultSet is getting changed over, or how to do it myself.
edit: Since Sequel::Database can create a dummy Dataset, and the Sequel::JDBC::Dataset has a private method that converts a result set and yields it to a block, the above is what I have now. This works, but I'm absolutely positive that there has to be a better way.
Sequel seems like the best database library for Ruby, which is why I'm trying to work with it, but if there are alternatives that are nicer than using straight JDBC, I'd like to know about them, too.
Sequel doesn't currently support OUT params in stored procedures on JDBC, so what you are currently doing is probably best.
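For completeness, with the call_test patch above in place, usage could look roughly like this; the JDBC URL and credentials are placeholders, and each yielded row arrives as a plain Ruby hash:

DB = Sequel.connect('jdbc:db2://host:50000/MYDB', :user => 'user', :password => 'secret')
rows = []
DB.call_test { |row| rows << row } # the OUT param is read inside call_test
puts "fetched #{rows.size} rows from ddd.mystoredproc"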

Rails with mutex on class variable, rake task and cron

Sorry for such a big question. I do not have much experience with threads and mutexes in Rails.
I have a class, shown below, which is used by different controllers to get the license for each customer.
Customers and their licenses get added and removed every hour. An API is available to fetch all customers and their licenses.
I plan to create a rake task to call update_set_customers_licenses, run hourly via a cronjob.
I have following questions:
1) Even with a mutex, there is still potential for a problem: my rake task can run while an update is already in progress. Any idea on how to solve this?
2) My design below writes the JSON out to a file; this is done for safety, as the API is not that reliable. As can be seen, the file is never read back, so in essence the file write is useless. I tried to implement a file read, but together with the mutex and the rake task it gets really confusing. Any pointers would help here.
class Customer
  @@customers_to_licenses_hash = nil
  @@last_updated_at = nil
  @@mutex = Mutex.new

  CUSTOMERS_LICENSES_FILE = "#{Rails.root}/tmp/customers_licenses"

  def self.cached_license_with_customer(customer)
    Rails.cache.fetch("customer_license_#{customer}") { self.license_with_customer(customer) }
  end

  def self.license_with_customer(customer)
    @@mutex.synchronize do
      license = @@customers_to_licenses_hash[customer] if @@customers_to_licenses_hash
      if license
        return license
      elsif @@customers_to_licenses_hash.nil? || Time.now.utc - @@last_updated_at > 1.hour
        updated = self.update_set_customers_licenses
        return @@customers_to_licenses_hash[customer] if updated
      else
        return nil
      end
    end
  end

  def self.update_set_customers_licenses
    updated = nil
    file_write = File.open(CUSTOMERS_LICENSES_FILE, 'w')
    results = self.get_active_customers_licenses
    if results
      @@customers_to_licenses_hash = results
      file_write.print(results.to_json)
      @@last_updated_at = Time.now.utc
      updated = true
    end
    file_write.close
    updated
  end

  def self.get_active_customers_licenses
    # HTTP GET through the API
    # returns a hash of records
  end
end
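For reference, the hourly rake task planned above would be just a thin wrapper; a sketch, with the task name, file path, and crontab line as assumptions:

# lib/tasks/licenses.rake
namespace :licenses do
  desc 'Refresh the customers-to-licenses cache from the API'
  task :refresh => :environment do
    Customer.update_set_customers_licenses
  end
end

# crontab entry, running at the top of every hour:
# 0 * * * * cd /path/to/app && bundle exec rake licenses:refresh RAILS_ENV=production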
I'm pretty sure it's the case that every time Rails loads, the environment is "fresh" and has no concept of "state" between instances. That is to say, a mutex in one Ruby instance (one request to Rails) has no effect on a second Ruby instance (another request to Rails or, in this case, a rake task).
If you follow the data upstream, you'll find that the common root of every instance that can be used to synchronize them is the database. You could use transactional blocks or maybe a manual flag you set and unset in the database.
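A rough sketch of that database-level synchronization, using a row lock rather than a literal flag; the RefreshLock model and its locks table are hypothetical, but because the lock lives in the database it is shared by the web processes and the rake task alike:

# Assumes a `locks` table with a string `name` column.
class RefreshLock < ActiveRecord::Base
  self.table_name = 'locks'

  def self.with_lock_on(name)
    transaction do
      # SELECT ... FOR UPDATE blocks until any other holder commits or rolls back.
      row = where(:name => name).lock(true).first || create!(:name => name)
      yield row
    end
  end
end

# Both the controllers and the rake task wrap the refresh in the same lock:
RefreshLock.with_lock_on('customer_licenses') do
  Customer.update_set_customers_licenses
end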

rails persist objects over requests in development mode

I am trying to interact with Matlab.Application.Single win32ole objects in my Rails application. The problem I am running into is that while I am developing my application, each separate request reloads my WIN32OLE objects, so I lose the connection to my original Matlab instances and new instances are made. Is there a way to persist live objects between requests in Rails? Or is there a way to reconnect to my Matlab.Application.Single instances?
In production mode I use module variables to store my connections between requests, but in development mode module variables are reloaded on every request.
here is a snippet of my code
require 'win32ole'

module Calculator
  @engine2 = nil
  @engine3 = nil

  def self.engine2
    if @engine2.nil?
      @engine2 = WIN32OLE.new("Matlab.Application.Single")
      @engine2.execute("run('setup_path.m')")
    end
    @engine2
  end

  def self.engine3
    if @engine3.nil?
      @engine3 = WIN32OLE.new("Matlab.Application.Single")
      @engine3.execute("run('setup_path.m')")
    end
    @engine3
  end

  def self.load_CT_image(file)
    Calculator.engine2.execute("spm_image('Init','#{file}')")
  end

  def self.load_MR_image(file)
    Calculator.engine3.execute("spm_image('Init','#{file}')")
  end
end
I am then able to use my code in my controllers like this:
Calculator.load_CT_image('Post_Incident_CT.hdr')
Calculator.load_MR_image('Post_Incident_MRI.hdr')
You can keep an app-wide object in a constant that won't be reset for every request. Add this to a new file in config/initializers/:
ENGINE_2 = WIN32OLE.new("Matlab.Application.Single")
You might also need to include the .execute("run('setup_path.m')") line here as well (I'm not familiar with WIN32OLE). You can then assign that object to your instance variables in your Calculator module (just replace the WIN32OLE.new("Matlab.Application.Single") call with ENGINE_2), or simply refer to the constant directly.
I know this is beyond the scope of your question, but you have a lot of duplicated code here, and you might want to think about creating a class or module to manage your Matlab instances -- spinning up new ones as needed, and shutting down old ones that are no longer in use.
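For example, combining the initializer idea with a tiny manager might look roughly like this; the MATLAB_ENGINES constant and the engine keys are assumptions, while the setup script and spm_image calls come from the question:

# config/initializers/matlab_engines.rb
require 'win32ole'

# Constants defined in an initializer survive code reloading in development.
# The Hash block lazily creates and memoizes one engine per key.
MATLAB_ENGINES = Hash.new do |cache, key|
  engine = WIN32OLE.new("Matlab.Application.Single")
  engine.execute("run('setup_path.m')")
  cache[key] = engine
end

# Anywhere in the app:
MATLAB_ENGINES[:ct].execute("spm_image('Init','Post_Incident_CT.hdr')")
MATLAB_ENGINES[:mr].execute("spm_image('Init','Post_Incident_MRI.hdr')")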

RSpec - How to mock a stored procedure

Consider the following stored procedure:
CREATE OR REPLACE FUNCTION get_supported_locales()
RETURNS TABLE(
  code character varying(10)
) AS
...
And the following method that calls it:
def self.supported_locales
  query = "SELECT code FROM get_supported_locales();"
  res = ActiveRecord::Base.connection.execute(query)
  res.values.flatten
end
I'm trying to write a test for this method, but I'm running into problems with the mocking:
it "should list an intersection of locales available on the app and on last fm" do
res = mock(PG::Result)
res.should_receive(:values).and_return(['en', 'pt'])
ActiveRecord::Base.connection.stub(:execute).and_return(res)
Language.supported_locales.should =~ ['pt', 'en']
end
This test succeeds, but any test that runs after this one gives the following message:
WARNING: there is already a transaction in progress
Why does this happen? Am I doing the mocking incorrectly?
The database is Postgres 9.1.
Your test is running using database level transactions. When the test completes, the transaction is rolled back so that none of the changes made in the test are actually saved to the database. In your case, this rollback can't happen because you have stubbed out the execute method on the ActiveRecord connection.
You can disable transactions globally and switch to using DatabaseCleaner to enable/disable transactions for various tests. You could then set up to use transactions through DatabaseCleaner by default so your existing tests don't change, and then in this one test choose to disable transactions in favor of some other strategy (such as the null strategy since there is no cleaning to be done for this test).
This other SO post indicates you may be able to avoid disabling transactions globally and turn them off on a per-test basis as well; I have not tried that myself, though.
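In rspec-rails 2.x that per-group opt-out is a one-liner on the example group; a sketch reusing the test from the question:

describe Language, '.supported_locales' do
  self.use_transactional_fixtures = false # skip the wrapping BEGIN/ROLLBACK for this group only

  it "should list an intersection of locales available on the app and on last fm" do
    res = mock(PG::Result)
    res.should_receive(:values).and_return(['en', 'pt'])
    ActiveRecord::Base.connection.stub(:execute).and_return(res)
    Language.supported_locales.should =~ ['pt', 'en']
  end
end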

How to fire raw MongoDB queries directly in Ruby

Is there any way I can fire a raw Mongo query directly in Ruby, instead of converting it to native Ruby objects?
I went through the Ruby Mongo tutorial, but I cannot find such a method anywhere.
If it were MySQL, I would have fired a query something like this:
ActiveRecord::Base.connection.execute("Select * from foo")
My Mongo query is fairly large, and it executes properly in the MongoDB console. What I want is to execute that same query directly from Ruby code.
Here's a (possibly) better mini-tutorial on how to get directly into the guts of your MongoDB. This might not solve your specific problem but it should get you as far as the MongoDB version of SELECT * FROM table.
First of all, you'll want a Mongo::Connection object. If you're using MongoMapper then you can call the connection class method on any of your MongoMapper models to get a connection, or ask MongoMapper for it directly:
connection = YourMongoModel.connection
connection = MongoMapper.connection
Otherwise I guess you'd use the from_uri constructor to build your own connection.
Then you need to get your hands on a database. You can do this using the array access notation, the db method, or by getting the current one straight from MongoMapper:
db = connection['database_name']    # This does not support options.
db = connection.db('database_name') # This does support options.
db = MongoMapper.database           # This should be configured like the rest of your app.
Now you have a nice shiny Mongo::DB instance in your hands. But you probably want a Collection to do anything interesting, and you can get that using either array access notation or the collection method:
collection = db['collection_name']
collection = db.collection('collection_name')
Now you have something that behaves sort of like an SQL table, so you can count how many things it has or query it using find:
cursor = collection.find(:key => 'value')
cursor = collection.find({:key => 'value'}, :fields => ['just', 'these', 'fields'])
# etc.
And now you have what you're really after: a hot-out-of-the-oven Mongo::Cursor that points at the data you're interested in. Mongo::Cursor is an Enumerable, so you have access to all your usual iterating friends such as each, first, map, and one of my personal favorites, each_with_object:
a = cursor.each_with_object([]) { |x, a| a.push(mangle(x)) }
There are also command and eval methods on Mongo::DB that might do what you want.
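For genuinely raw commands, the closest analogue of db.runCommand in the mongo shell, Mongo::DB#command takes the command document as a hash; a small sketch with an assumed users collection:

# Same as db.runCommand({distinct: 'users', key: 'country'}) in the shell
result = db.command(:distinct => 'users', :key => 'country')
puts result['values'].inspect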
In case you are using mongoid you will find the answer to your question here.
If you're using Mongoid 3, it provides easy access to its MongoDB driver, Moped. Here's an example of accessing some raw data without going through your models:
db = Mongoid::Sessions.default

# inserting a new document
collection = db[:collection_name]
collection.insert(name: 'my new document')

# finding a document
doc = collection.find(name: 'my new document').first

# "select * from collection"
collection.find.each do |document|
  puts document.inspect
end
