I am using DataMapper to interface with MySQL. Is there any check I can do with DataMapper to ensure that the database is up?
If you want to test whether you can actually do some work with your database, something like this could be helpful:
begin
  # Run a real statement against the configured database.
  DataMapper.repository(:default).adapter.execute('SHOW TABLES;')
rescue StandardError
  puts "Problem!"
end
This will make sure that the server is up and that the database you chose is valid; something like SELECT 1 would only confirm that the server is reachable, not that the chosen database is usable.
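If you need this check in more than one place, you could wrap it in a helper. A minimal sketch, assuming MySQL and the default repository (the method name database_up? is my own, not part of DataMapper):

require 'dm-core'

# Hypothetical helper: true only if the server is reachable and the
# configured database accepts queries.
def database_up?
  DataMapper.repository(:default).adapter.execute('SHOW TABLES;')
  true
rescue StandardError
  false
end

abort 'Database is down!' unless database_up?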
I'm trying to call a stored procedure in a DB2 database that has output params and also returns a cursor. I can get this done using JDBC through JRuby, but I'd like to extend Sequel to do it, because of the nicer interface. I've gotten this far:
Sequel::JDBC::Database.class_eval do
  def call_test
    sql = "{call ddd.mystoredproc(?)}"
    result = {}
    synchronize do |conn|
      cps = conn.prepare_call(sql)
      cps.register_out_parameter(1, Types::INTEGER)
      result[:success] = cps.execute
      result[:outparam_val] = cps.get_int(1)
      if result[:success]
        dataset.send(:process_result_set, cps.get_data_set) do |row|
          yield row
        end
      end
      # rescue block
    end
  end
end
This gets me a ResultSet that I have to work with in a very Java-ish way, though, not a nice Sequel::Dataset object. I know this code doesn't make sense - I'm just using it to experiment with values, so at one point I was returning the result hash and seeing what it contained. If I can get something that works, I will clean it up and make it more flexible. It looks like the log_yield method just logs the sql and yields to the block, so I don't know how anything else is getting converted to a Sequel::Dataset. Doing something like DB[:ddd__sometable] will return a dataset that I can loop through, but I can't figure out how and at what point the underlying Java ResultSet is getting changed over, or how to do it myself.
edit: Since Sequel::Database can create a dummy Dataset, and the Sequel::JDBC::Dataset has a private method that converts a result set and yields it to a block, the above is what I have now. This works, but I'm absolutely positive that there has to be a better way.
Sequel seems like the best database library for Ruby, which is why I'm trying to work with it, but if there are alternatives that are nicer than using straight JDBC, I'd like to know about them, too.
Sequel doesn't currently support OUT params in stored procedures on JDBC, so what you are currently doing is probably best.
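If you do stay on the JDBC route, you could at least factor the pattern into a reusable method. A sketch along the lines of the code above (call_with_out_param and the single INTEGER OUT parameter are assumptions taken from the example, and it still leans on Sequel's private process_result_set, so it may break between Sequel versions):

Sequel::JDBC::Database.class_eval do
  # Call a stored procedure with one INTEGER OUT parameter, yielding any
  # returned rows and returning the OUT parameter's value.
  def call_with_out_param(proc_name)
    synchronize do |conn|
      cps = conn.prepare_call("{call #{proc_name}(?)}")
      begin
        cps.register_out_parameter(1, java.sql.Types::INTEGER)
        if cps.execute
          dataset.send(:process_result_set, cps.get_result_set) { |row| yield row }
        end
        cps.get_int(1)
      ensure
        cps.close
      end
    end
  end
end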
I am working on a Rails application, and I am using SQLite in my dev environment and PostgreSQL in production. Is there any way to write a "database-aware" migration? I.e. one that would execute a certain SQL statement on SQLite and a different statement on Postgres?
You should be able to write something like:
class MyMigration < ActiveRecord::Migration
  def up
    if ActiveRecord::Base.connection.kind_of? ActiveRecord::ConnectionAdapters::SQLite3Adapter
      execute 'SQL Statement...'
    else
      execute 'Different SQL Statement...'
    end
  end

  def down
    ...
  end
end
It's not something I have had to implement myself, so I'm not aware of any pitfalls.
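An alternative that avoids referencing adapter classes directly is to branch on the connection's adapter_name. A sketch (the case-insensitive match is defensive, since the exact strings vary slightly between adapter versions):

class MyMigration < ActiveRecord::Migration
  def up
    case ActiveRecord::Base.connection.adapter_name
    when /sqlite/i
      execute 'SQL Statement...'
    when /postgresql/i
      execute 'Different SQL Statement...'
    end
  end
end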
We are using DataMapper in a Sinatra application and would like to use a case-insensitive LIKE that works on both SQLite (locally in development) and PostgreSQL (on Heroku in production).
We have statements like:
TreeItem.all(:name.like => "%#{term}%", :unique => true, :limit => 20)
If term is "BERL" we get the suggestion "BERLIN" from both the SQLite and PostgreSQL backends. However, if term is "Berl" we only get that result from SQLite and not PostgreSQL.
I guess this has to do with both dm-postgres-adapter and dm-sqlite-adapter emitting a LIKE in the resulting SQL query. Since PostgreSQL's LIKE is case-sensitive, we get this (for us unwanted) behavior.
Is there a way to get a case-insensitive LIKE in DataMapper without resorting to a raw SQL query to the adapter, or patching the adapter to use ILIKE instead of LIKE?
I could of course use something in between, such as:
TreeItem.all(:conditions => ["name LIKE ?", "%#{term}%"], :unique => true, :limit => 20)
but then we would be tied to PostgreSQL within our own code, and not just as a configuration for the adapter.
By writing my own DataObjects adapter that overrides the like_operator method, I managed to get Postgres' case-insensitive ILIKE.
require 'do_postgres'
require 'dm-do-adapter'

module DataMapper
  module Adapters
    class PostgresAdapter < DataObjectsAdapter
      module SQL #:nodoc:
        private

        # @api private
        def supports_returning?
          true
        end

        # Use PostgreSQL's case-insensitive ILIKE for .like conditions.
        def like_operator(operand)
          'ILIKE'
        end
      end

      include SQL
    end

    const_added(:PostgresAdapter)
  end
end
Eventually, however, I decided to port the application in question to a document database.
For other people using DataMapper who want support for ILIKE as well as SIMILAR TO in PostgreSQL: https://gist.github.com/Speljohan/5124955
Just drop that in your project, and then to use it, see these examples:
Model.all(:column.ilike => '%foo%')
Model.all(:column.similar => '(%foo%)|(%bar%)')
I want to add a comment to every query sent by ActiveRecord, in order to find the source of queries in the MySQL slow query log. How can I modify a query before ActiveRecord sends it?
For example, I want to see this in my central MySQL slow query log:
SELECT * FROM articles
-- File: refresh-article.rb
ActiveRecord already logs db requests with timing information to your app log.
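In a plain script (outside Rails) you have to attach the logger yourself; a minimal sketch:

require 'active_record'
require 'logger'

# Route ActiveRecord's SQL log (each statement plus its timing) to stdout.
ActiveRecord::Base.logger = Logger.new($stdout)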
I solved the problem with a monkey patch:
ActiveRecord::ConnectionAdapters::Mysql2Adapter.class_eval do
  # Prefix every statement with the name of the running script.
  def execute_with_log(sql, name = nil)
    sql = "-- Script: #{$0}\n#{sql}"
    execute_without_log(sql, name)
  end
  alias_method_chain :execute, :log
end
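Note that alias_method_chain was deprecated in Rails 5 and later removed; on newer versions the same patch can be written with Module#prepend. A sketch (the module name is my own, and the argument splat is defensive because the execute signature differs across Rails versions):

module ScriptCommentTagging
  # Prefix every statement with the name of the running script.
  def execute(sql, *args, **kwargs)
    super("-- Script: #{$0}\n#{sql}", *args, **kwargs)
  end
end

ActiveRecord::ConnectionAdapters::Mysql2Adapter.prepend(ScriptCommentTagging)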
In your Rails app, you can see your queries with timing in log/(production|development).log.
However, if you want anything more than that, I suggest checking out New Relic in development mode. It is free, and it shows you the source of where a query was executed (which looks like what you want). It really is one of the best logging/performance analyzers out there.
I found a solution by monkey-patching Mysql2's execute method.
ActiveRecord 6 allows queries to be annotated:
User.annotate("selecting user names").select(:name)
# SELECT "users"."name" FROM "users" /* selecting user names */
User.annotate("selecting", "user", "names").select(:name)
# SELECT "users"."name" FROM "users" /* selecting */ /* user */ /* names */
https://api.rubyonrails.org/classes/ActiveRecord/QueryMethods.html#method-i-annotate
You could combine this with the caller_locations kernel method:
User.annotate("#{caller_locations(1,1).first}").select(:name)
https://www.rubydoc.info/stdlib/core/2.0.0/Kernel:caller_locations
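For example, a tiny hypothetical helper (annotated is my own name, not a Rails method) that tags any relation with the location that built it:

# Hypothetical helper: annotate a relation with its call site.
def annotated(relation)
  relation.annotate(caller_locations(1, 1).first.to_s)
end

annotated(User.all).select(:name)
# e.g. SELECT "users"."name" FROM "users" /* app/reports/daily.rb:12:in `build' */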
I'm building a command line application using ActiveRecord 3.0 (without rails). How do I clear the query cache that ActiveRecord maintains?
To a first approximation:
ActiveRecord::Base.connection.query_cache.clear
Have a look at the method clear_query_cache in http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/QueryCache.html
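The connection also exposes this directly, so the following should be equivalent:

ActiveRecord::Base.connection.clear_query_cache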
We use:
ActiveRecord::Base.connection.query_cache.clear
(ActiveRecord::Base.connection.tables - %w[schema_migrations versions]).each do |table|
  table.classify.constantize.reset_column_information rescue nil
end
But I am not certain even this is enough.
If you only want to do this temporarily, you can use ActiveRecord::Base.uncached like so:
::ActiveRecord::Base.uncached { User.order('random()').limit(3) }
Oftentimes when you see caching of database queries, your db is doing the caching, not ActiveRecord, which means you need to clear the cache and buffers at the db level, not the ActiveRecord level.
For example, to clear Postgres' cache and buffers on Mac, you would do sudo purge, which forces the disk cache to be flushed and emptied.
To clear Postgres' cache and buffers on Linux, you would shut down postgres, drop the caches, and start postgres back up again:
service postgresql stop
sync
echo 3 > /proc/sys/vm/drop_caches
service postgresql start
Further reading:
See and clear Postgres caches/buffers?
Does Postgres provide a command to flush buffer cache?
https://linux-mm.org/Drop_Caches
Since the title of the question is so broad, I stumbled over it while searching for a related problem: when adding columns to an ActiveRecord model during a migration, it may happen that the new column is not available on the ActiveRecord class. Rails caches the column information, and to fix this we need to call reset_column_information.
Here is an example:
# migration
Product.first # Rails caches the schema
add_column :products, :name, :string
Product.first.update(name: 'xxx') # fails
Product.reset_column_information
Product.first.update(name: 'xxx') # now it succeeds