I am working on a Rails application, and I am using SQLite in my dev environment and PostgreSQL in production. Is there any way to write a "database-aware" migration, i.e. one that would execute a certain SQL statement on SQLite and a different statement on PostgreSQL?
You should be able to write something like:
class MyMigration < ActiveRecord::Migration
  def up
    if ActiveRecord::Base.connection.kind_of? ActiveRecord::ConnectionAdapters::SQLite3Adapter
      execute 'SQL Statement...'
    else
      execute 'Different SQL Statement...'
    end
  end

  def down
    # ...
  end
end
It's not something I have had to implement myself, so I'm not aware of any pitfalls.
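One thing worth hedging on: referencing ActiveRecord::ConnectionAdapters::SQLite3Adapter directly assumes the sqlite3 adapter is actually loaded in every environment. A variant (a sketch, untested) that only compares the adapter_name string avoids that assumption:

class MyMigration < ActiveRecord::Migration
  def up
    # adapter_name is e.g. "SQLite" or "PostgreSQL" depending on the connection
    if ActiveRecord::Base.connection.adapter_name =~ /sqlite/i
      execute 'SQL Statement...'
    else
      execute 'Different SQL Statement...'
    end
  end
end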
I have a Ruby project, without Rails. In my project I execute some PL/SQL using the ruby_plsql gem, like this:
require 'ruby_plsql'

plsql.connection = OCI8.new('user','password',"//host:1521/my-db") # This executes only once

def execute_pl
  ds = plsql.my_package.my_procedure(company, country) # This executes many times, reusing the connection
  ds
end
At this point, everything works nicely.
Now, because of a new requirement, I need to execute PL/SQL against another database, depending on some parameter.
If I do this:
plsql.connection = OCI8.new('user','password',"//host2:1521/my-other-db")
From then on, all the PL/SQL is executed against the other database, which is not the idea. The idea is to be able to run PL/SQL dynamically against either of the two databases, without creating a new connection every time some PL/SQL is run.
How do I build another method that executes PL/SQL against the other database, without creating a connection on every call?
To use another connection with the ruby-plsql gem, we can use an alias:
plsql(:my_alias).connection = OCI8.new('user','password',"//host2:1521/my-other-db")
And the methods:
def execute_pl
  ds = plsql.my_package.my_procedure(company, country)
  ds
end

def execute_pl_in_other_db
  ds = plsql(:my_alias).my_package.my_procedure(company, country)
  ds
end
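If which database to use depends on a runtime parameter, one option (a minimal sketch reusing the :my_alias connection configured above; execute_pl_on and the :other symbol are just names picked for illustration) is to select the ruby-plsql namespace from that parameter:

def execute_pl_on(db, company, country)
  # Pick the already-established connection; nothing new is opened per call
  conn = (db == :other) ? plsql(:my_alias) : plsql
  conn.my_package.my_procedure(company, country)
end

Both connections are created once and then reused on every call.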
I'm trying to call a stored procedure in a DB2 database that has output params and also returns a cursor. I can get this done using JDBC through JRuby, but I'd like to extend Sequel to do it, because of the nicer interface. I've gotten this far:
Sequel::JDBC::Database.class_eval do
  def call_test
    sql = "{call ddd.mystoredproc(?)}"
    result = {}
    synchronize do |conn|
      cps = conn.prepare_call(sql)
      cps.register_out_parameter(1, java.sql.Types::INTEGER)
      result[:success] = cps.execute
      result[:outparam_val] = cps.get_int(1)
      if result[:success]
        # getResultSet gives the cursor opened by the procedure
        dataset.send(:process_result_set, cps.get_result_set) do |row|
          yield row
        end
      end
      # rescue block
    end
  end
end
This gets me a ResultSet that I have to work with in a very Java-ish way, though, not a nice Sequel::Dataset object. I know this code doesn't make sense - I'm just using it to experiment with values, so at one point I was returning the result hash and seeing what it contained. If I can get something that works, I will clean it up and make it more flexible. It looks like the log_yield method just logs the sql and yields to the block, so I don't know how anything else is getting converted to a Sequel::Dataset. Doing something like DB[:ddd__sometable] will return a dataset that I can loop through, but I can't figure out how and at what point the underlying Java ResultSet is getting changed over, or how to do it myself.
edit: Since Sequel::Database can create a dummy Dataset, and the Sequel::JDBC::Dataset has a private method that converts a result set and yields it to a block, the above is what I have now. This works, but I'm absolutely positive that there has to be a better way.
Sequel seems like the best database library for Ruby, which is why I'm trying to work with it, but if there are alternatives that are nicer than using straight JDBC, I'd like to know about them, too.
Sequel doesn't currently support OUT params in stored procedures on JDBC, so what you are currently doing is probably best.
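For completeness, a hypothetical invocation of the monkey-patched method from the question (assuming DB is the Sequel JDBC Database object; process_result_set yields each row as a hash):

DB.call_test do |row|
  p row
end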
I am testing a class, with RSpec, that reads data from the database. Depending on the arguments, it does not return the same data.
My current strategy is something like this:
before do
  # create a bunch of data
end
it 'test1' # ...
it 'test2' # ...
it 'test3' # ...
Which of course means that my data will be created before each test.
I would like to be able to create the data once, in the scope of this file, and do my reads on the data.
I tried setting use_transactional_fixtures to false for the file and use database_cleaner but it made my tests twice as slow because I had to re-seed my db before and after the tests.
I was wondering if there was a way to tell RSpec "run each of these tests in the same transaction".
Or maybe, since I'm having a hard time finding that, there's a better strategy for that kind of testing?
It looks like using the database_cleaner gem was the right thing to do; I was just doing it wrong. I had set my cleaning strategy to truncation, which emptied the db...
In this GitHub issue, David Chelimsky explains that using database_cleaner is the way to go: https://github.com/dchelimsky/rspec-rails/issues/2
My tests now look like this:
before(:all) do
  self.use_transactional_fixtures = false
  DatabaseCleaner.strategy = :transaction
  DatabaseCleaner.start
  create_data
end

after(:all) do
  DatabaseCleaner.clean
end

def create_data
  # create the data...
end

it 'test1' # ...
it 'test2' # ...
it 'test3' # ...
The same tests now run in ~3.5s versus ~7s before. I am happy :)
edit: a single before(:all) block is enough
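If several spec files need the same pattern, the DatabaseCleaner calls could be moved into a shared configuration (a sketch using the same RSpec-2-era API as above; note it applies to every example group, so it only makes sense when all groups share this setup, and use_transactional_fixtures still has to be disabled per group):

RSpec.configure do |config|
  config.before(:all) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.start
  end

  config.after(:all) do
    DatabaseCleaner.clean
  end
end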
I am using DataMapper to interface with MySQL. Is there any check I can do with DataMapper to ensure that the database is up?
If you want to test whether you can actually do some work with your database, something like this could be helpful:
begin
  DataMapper.repository(:default).adapter.execute('SHOW TABLES;')
rescue
  puts "Problem!"
end
This will make sure that the server is up and that the database you chose is valid (that's why something like SELECT 1 wouldn't work).
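If you would rather have the check as a reusable predicate than an inline begin/rescue, a small helper along the same lines (a sketch; database_up? is just a name picked for illustration):

def database_up?
  # Same probe as above: fails if the server is down or the database is invalid
  DataMapper.repository(:default).adapter.execute('SHOW TABLES;')
  true
rescue StandardError
  false
end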
We are using DataMapper in a Sinatra application and would like to use a case-insensitive LIKE that works on both SQLite (locally in development) and PostgreSQL (on Heroku in production).
We have statements like
TreeItem.all(:name.like =>"%#{term}%",:unique => true,:limit => 20)
If term is "BERL" we get the suggestion "BERLIN" from both the SQLite and PostgreSQL backends. However, if term is "Berl" we only get that result from SQLite and not PostgreSQL.
I guess this has to do with the fact that both dm-postgres-adapter and dm-sqlite-adapter output a LIKE in the resulting SQL query. Since PostgreSQL has a case-sensitive LIKE, we get this (for us unwanted) behavior.
Is there a way to get a case-insensitive LIKE in DataMapper without resorting to a raw SQL query to the adapter or patching the adapter to use ILIKE instead of LIKE?
I could of course use something in between, such as:
TreeItem.all(:conditions => ["name LIKE ?","%#{term}%"],:unique => true,:limit => 20)
but then we would be tied to the use of PostgreSQL within our own code and not just as a configuration for the adapter.
By writing my own DataObjects adapter that overrides the like_operator method, I managed to get Postgres' case-insensitive ILIKE.
require 'do_postgres'
require 'dm-do-adapter'

module DataMapper
  module Adapters
    class PostgresAdapter < DataObjectsAdapter
      module SQL #:nodoc:
        private

        # @api private
        def supports_returning?
          true
        end

        def like_operator(operand)
          'ILIKE'
        end
      end

      include SQL
    end

    const_added(:PostgresAdapter)
  end
end
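With the patched adapter loaded, the original query from the question should generate ILIKE without any change to the application code:

TreeItem.all(:name.like => "%#{term}%", :unique => true, :limit => 20)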
Eventually, however, I decided to port the application in question to use a document database.
For other people who happen to use DataMapper and want support for ILIKE as well as SIMILAR TO in PostgreSQL: https://gist.github.com/Speljohan/5124955
Just drop that in your project, and then to use it, see these examples:
Model.all(:column.ilike => '%foo%')
Model.all(:column.similar => '(%foo%)|(%bar%)')