Create a connection with a PostgreSQL database using Sequel and Puma - Ruby

For the past few weeks I've been learning Ruby, and I must say it hasn't been easy to get a grasp of some things.
That leads me to this question: I'm trying to set up a project that uses Rubinius as the Ruby engine, Puma as the web server (Puma states that it works best with Rubinius because of its concurrency implementation), PostgreSQL as the database, and Sequel as the database toolkit.
What I'm struggling with is making the database connection. As it is, I'm doing it in the config.ru:
require 'rubygems'
require 'bundler/setup'
require 'uri'
require 'yaml'
require 'erb'

Bundler.require :default, ENV['RACK_ENV']

DATABASE.disconnect if defined?(DATABASE)

if ENV['DATABASE_URL']
  db_config = URI.parse ENV['DATABASE_URL']
else
  #noinspection RubyResolve
  db_config = YAML.load(ERB.new(File.read('config/database.yml')).result)[ENV['RACK_ENV']]
end

DATABASE = Sequel.connect db_config

require File.expand_path('../application/api', __FILE__)
run APP::API
But I've been told that this isn't the best place to do it if I want concurrency rather than a shared connection. If I were using Unicorn I would do it in before_fork, but Puma has no such hook.
It does provide an on_worker_boot hook, but that isn't useful with Sequel: if I preload the app, Sequel requires a database connection before it can create my models (class SomeClass < Sequel::Model).
I am a bit confused now and not sure where to go from here. I tried to find guides or good practices on this matter, but everything I found uses ActiveRecord.
Does someone know how to actually do this properly, connecting to the database?

If you haven't set up Puma to fork and preload the app (the -w and --preload flags), you shouldn't need to do anything. If you have set up Puma to fork and preload, then after loading your model classes, call DATABASE.disconnect; each worker's connection pool will then open fresh connections on demand. You may also want to lobby the Puma developers to add a hook similar to Unicorn's before_fork.
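A sketch of that ordering in config.ru, under the answer's assumptions (Puma started with -w and --preload; DATABASE and APP::API as in the question):

```ruby
# config.ru -- sketch only; assumes Puma runs with -w <n> --preload
DATABASE = Sequel.connect(ENV['DATABASE_URL'])

# Loading the app here defines the Sequel::Model classes,
# which need the connection above to introspect their tables.
require File.expand_path('../application/api', __FILE__)

# Close the master process's pooled connections before workers fork;
# each worker's pool reopens connections on first use.
DATABASE.disconnect

run APP::API
```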

Related

Rails 4 Simple Model Caching With Redis

I am new to Redis and Rails caching and would like to perform simple model caching. I have just read these two articles:
http://www.sitepoint.com/rails-model-caching-redis/
http://www.victorareba.com/tutorials/speed-your-rails-app-with-model-caching-using-redis
Since Redis model caching consists of storing JSON strings in Redis and retrieving them with code like
def fetch_snippets
  snippets = $redis.get("snippets")
  if snippets.nil?
    snippets = Snippet.all.to_json
    $redis.set("snippets", snippets)
  end
  #snippets = JSON.load snippets
end
I don't understand why the following gems are needed:
gem 'redis-rails'
gem 'redis-rack-cache'
I don't see where the cache store or other caching mechanisms come into play in examples like this, since they consist only of reading from and writing to Redis.
Thank you for any help.
Here is what I have in my Gemfile
gem 'redis'
gem 'readthis'
gem 'hiredis'
gem 'redis-browser'
readthis - recently added a nice feature that keeps Rails from crashing when Redis is down (see "Disable Rails caching if Redis is down"). It also supports advanced Redis data types (not just strings, as redis-rails does).
hiredis - is a little faster.
redis-browser - lets me see what is actually cached (easier than the CLI).
Here is my application.rb
config.cache_store = :readthis_store, { expires_in: 1.hour.to_i, namespace: 'foobar', redis: { host: config.redis_host, port: 6379, db: 0 }, driver: :hiredis }
Then in my models I do:
def my_method_name
  Rails.cache.fetch("#{cache_key}/#{__method__}", expires_in: 1.hour) do
    # put my code here
  end
end
I used https://github.com/MiniProfiler/rack-mini-profiler to see which queries were firing lots of DB requests, and from that determined what I should cache.
The snippet you posted isn't really clever: it assumes the snippet collection never changes, since it sets no expiration on the content stored in Redis and never invalidates the key.
As for the gems, you don't need them at all if your goal is the example you posted.
redis-rails is likely a plugin for connecting Rails to Redis. However, connecting to Redis yourself is as easy as creating an initializer file and opening a connection with the correct Redis URL using the Ruby redis gem.
The second gem seems to add a Redis-based storage for Rack cache. If you don't know what it is, it's probably better if you don't use it at all.
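For reference, the initializer route the answer mentions could be as small as this (the file path and the REDIS_URL fallback are assumptions, using the plain redis gem):

```ruby
# config/initializers/redis.rb -- sketch only
require 'redis'

# One shared connection, matching the $redis.get / $redis.set style above.
$redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0'))
```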

How to access environment in datamapper migration

I have a Padrino server application with DataMapper as the ORM layer. I have a database migration, say:
migrate 1, :test do
  up do
    execute 'Some Query'
  end
end
This migration is run using the command:
padrino rake dm:migrate -e <env>
Now my problem is that I need access to env in my query (not to choose a schema or anything DataMapper does automatically; it's for something very specific to my functionality). I tried debugging the migration to see if there is a variable that stores this value, but had no luck. Is there a way?
As it turns out, since I am using Padrino, I can directly use Padrino.env inside up do..end or down do..end blocks:
migrate 1, :test do
  up do
    env = Padrino.env
    execute "Some Query #{env}"
  end
end
Although this is Padrino-specific, so is the concept of an environment. I am sure something similar (e.g. Rails.env) would work in other frameworks like Rails as well.
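If a framework offers no such helper, the environment name is conventionally also available as an environment variable; a framework-agnostic sketch (the helper name and the fallback order are assumptions):

```ruby
# Resolve the current environment from conventional variables,
# defaulting to 'development' when none is set.
def current_env
  ENV['RACK_ENV'] || ENV['RAILS_ENV'] || 'development'
end

query = "Some Query #{current_env}"
```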

Output SQL from an ActiveRecord migration without executing it (not Rails!)

There's a bunch of questions out there similar to this one that suggest Rails plugins as a solution, but I'm not using Rails; read on for more.
I have a Rakefile in a Sinatra project which lets me run rake db:migrate. It does my migration perfectly, but I'd like to pass it a flag (or write a new rake task) that does the same thing, except it outputs the SQL to STDOUT and doesn't commit the changes to the database. Does anyone know how to do this?
My first thought was to try ActiveRecord logging and see if I could get the SQL out at all, but that doesn't work! Any ideas?
namespace :db do
  task :migrate_sql do
    require 'logger'
    ActiveRecord::Base.logger = Logger.new(STDOUT)
    Rake::Task['db:migrate'].invoke
    # This does the migration and doesn't output SQL - so no good!
  end
end
I think there isn't any easy way to do it, for the following reasons:
up, down, and change are methods that execute other methods; there isn't a global migration query string that gets built up and then executed.
None of the statement methods (add_column, etc.) exposes its statements as strings. As I understand it, they are implemented as connection-adapter methods; for example, the MySQL adapter has an add_column_sql method, while the PostgreSQL adapter does not, and its SQL is a local variable inside its add_column method.
So, if you really need this functionality, I think your best option is to copy the SQL from the log.
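One hedged workaround, sketched in plain Ruby rather than against any real ActiveRecord API: route statements through a proxy that records execute calls instead of running them. Whether this captures everything a migration issues depends on the adapter, so treat it as an illustration of the idea only.

```ruby
# A recording proxy: collects SQL strings instead of executing them.
class DryRunConnection
  attr_reader :statements

  def initialize
    @statements = []
  end

  # Anything that would be run is captured and printed, not executed.
  def execute(sql)
    @statements << sql
    puts sql
  end
end

conn = DryRunConnection.new
conn.execute("ALTER TABLE users ADD COLUMN age integer")
conn.statements  # => ["ALTER TABLE users ADD COLUMN age integer"]
```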

How to use Datamapper in conjunction with Heroku Scheduler?

I have a Postgres database that I manipulate using DataMapper on my Ruby web server. I am trying to use Heroku Scheduler to update parts of the database every 10 minutes. However, when it tries to execute the script, it keeps giving this error:
/app/vendor/bundle/ruby/1.9.1/gems/dm-core-1.2.0/lib/dm-core/repository.rb:72:in `adapter': Adapter not set: default. Did you forget to setup? (DataMapper::RepositoryNotSetupError)
The database is initialized when the server starts up, so why can't this script update the database like I normally do in the rest of the code?
For example the script being called by the scheduler will contain lines such as:
User.update(:user_x => "whatever")
Is there a certain require statement I absolutely need?
First, you need to point the scheduled process at your database, so include the DataMapper.setup call, but don't migrate or initialize.
Then you need to read the database. You probably want DataMapper to create object models, just like your app uses; I've been using dm-is-reflective for that.
require 'data_mapper'
require 'dm-is-reflective'

dm = DataMapper.setup(:default, ENV['DATABASE_URL'])

class User
  include DataMapper::Resource
  is :reflective
  reflect
end

p User.fields

Preloading classes without Rails?

I am working on a big project, and I realized that several of the components were groups of classes that I could turn into services and strip from Rails. But now that I've done that, I realize that the slowness of loading classes without Spork isn't a function of Rails being slow, but of Ruby being slow. Is there something like Spork that will work in non-Rails projects?
Spork should work just fine for any Ruby project; it just requires a bit more setup.
Assuming you're using RSpec 2.x and Spork 0.9, make a spec_helper.rb that looks something like:
require 'spork'

# the rspec require seems to be necessary,
# without it you get "Missing or uninitialized constant: Object::RSpec" errors
require 'rspec'

Spork.prefork do
  # do expensive one-time setup here
  require 'mylibrary'
  MyLibrary.setup_lots_of_stuff
end

Spork.each_run do
  # do setup that must be done on each test run here (setting up external state, etc):
  MyLibrary.reset_db
end
Everything in the Spork.prefork block runs only once (at Spork startup); the rest runs on every test invocation.
If you have lots of framework-specific setup, you'd probably be better off making an AppFramework for your library. See the padrino AppFramework for an example.
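With a spec_helper.rb like the one above in place, the usual two-terminal workflow (commands provided by the spork and rspec gems) is:

```shell
# Terminal 1: start the Spork DRb server; it runs the prefork block once and waits.
spork

# Terminal 2: run the suite against the running server.
rspec --drb spec/
```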
