I am new to Redis and Rails caching, and would like to perform simple model caching. I have just read these two articles:
http://www.sitepoint.com/rails-model-caching-redis/
http://www.victorareba.com/tutorials/speed-your-rails-app-with-model-caching-using-redis
Since Redis model caching consists of storing JSON strings in Redis and retrieving them with code like:
def fetch_snippets
  snippets = $redis.get("snippets")
  if snippets.nil?
    snippets = Snippet.all.to_json
    $redis.set("snippets", snippets)
  end
  #snippets = JSON.load snippets
end
I don't understand why there is a need for
gem 'redis-rails'
gem 'redis-rack-cache'
I don't see where the cache store or other caching mechanisms come into play in this kind of example, since it only reads from and writes to Redis.
Thank you for any help.
Here is what I have in my Gemfile
gem 'redis'
gem 'readthis'
gem 'hiredis'
gem 'redis-browser'
readthis - recently added a nice feature that stops Rails from crashing when Redis is down (it disables Rails caching instead). It also supports advanced Redis data types, not just strings like redis-rails.
hiredis - a little faster (it's a C-backed driver).
redis-browser - lets me see what is actually cached (easier than the CLI).
Here is my application.rb
config.cache_store = :readthis_store, { expires_in: 1.hour.to_i, namespace: 'foobar', redis: { host: config.redis_host, port: 6379, db: 0 }, driver: :hiredis }
Then in my models I do:
def my_method_name
  Rails.cache.fetch("#{cache_key}/#{__method__}", expires_in: 1.hour) do
    # put my code here
  end
end
I used https://github.com/MiniProfiler/rack-mini-profiler to see which queries were firing lots of DB requests and determined what I should cache.
The snippet you posted isn't really clever: it assumes the snippet collection is never updated, since it doesn't set any expiration on the content stored in Redis.
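To address the missing expiration, here is a minimal sketch. The key name, the one-hour TTL, and the `redis`/`load_json` arguments are illustrative choices, not from the original post:

```ruby
require 'json'

# Sketch: same read-through pattern, but with a TTL via SETEX so a stale
# snapshot expires on its own. `redis` is any client responding to
# get/setex (the redis gem does); `load_json` stands in for
# Snippet.all.to_json.
def fetch_snippets(redis, load_json)
  cached = redis.get("snippets")
  if cached.nil?
    cached = load_json.call
    redis.setex("snippets", 3600, cached) # expire after one hour
  end
  JSON.parse(cached)
end
```

Cache invalidation still matters: if snippets change before the hour is up, you would also want to delete the key (e.g. in an after_save callback).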
As for the gems, you don't need them at all if your goal is the example you posted.
redis-rails is a plugin that lets Rails use Redis as its cache store. However, connecting to Redis yourself is as easy as creating an initializer file and opening a connection with the Ruby redis gem and the correct Redis URL.
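For example, a bare-bones initializer might look like this. The `$redis` global and the `REDIS_URL` environment variable are common conventions, not requirements:

```ruby
# config/initializers/redis.rb (sketch)
require 'redis'

# The redis gem connects lazily, so this is cheap at boot time.
$redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0'))
```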
The second gem seems to add a Redis-based storage for Rack cache. If you don't know what it is, it's probably better if you don't use it at all.
Related
I'm trying to use Redis as my session store, which seems to work just fine. However, I can't figure out how to let multiple instances of Sinatra access the same session. This is what I have in my config.ru:
require 'redis-rack'
use Rack::Session::Redis, :redis_server => "redis://#{ENV['REDIS_HOST']}:6379/0"
I must be missing an argument to set this, but the documentation is lacking for this case:
https://github.com/redis-store/redis-rack
Maybe that's not the right way to achieve this behavior?
The end goal is to deploy my Sinatra application with Docker to a clustered environment so I can release new versions without downtime. So whatever lets me share the Rack session between multiple instances works. I suppose I could create a Redis object manually and not use the session mechanism, but that seems like the wrong way to do it.
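For what it's worth, session sharing across instances mostly comes down to every instance pointing at the same Redis and using the same cookie settings, so the browser sends the same session id to whichever instance it hits. A sketch of a config.ru (the cookie name and expiry are illustrative):

```ruby
# config.ru (sketch): all instances share one Redis, so any instance can
# read a session that another instance created.
require 'redis-rack'

use Rack::Session::Redis,
    redis_server: "redis://#{ENV['REDIS_HOST']}:6379/0",
    key: 'rack.session',   # same cookie name on every instance
    expire_after: 86_400   # one day, for illustration

run Sinatra::Application
```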
I've got a Sinatra app that I'm setting up with a database using ActiveRecord.
Due to one of the quirks of this database (namely a string primary key), I want to use a SQL schema (structure.sql) instead of a Ruby one (schema.rb). I don't mind that this restricts me to using a specific database flavour, we use Postgres for everything anyway.
To achieve this in Rails, I would put config.active_record.schema_format = :sql in config/application.rb. How do I do the same thing in Sinatra?
It's easy to configure your database by hand with Sinatra. We like to build our tables in MySQL instead of using ActiveRecord Migrations.
You'll have to create your database models by hand instead of using generators and you'll add this line to manage your connection:
ActiveRecord::Base.establish_connection(database_settings)
This is super easy. We typically read in the settings from a YAML file. It gets complicated when you want to write automated tests. Here's a blog I wrote about how to set up automated tests with Sinatra, MiniTest, and ActiveRecord.
Since you are still using Active Record, you can just add the following line to your config (I put it in config/initializers/active_record.rb).
ActiveRecord::Base.schema_format = :sql
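If you are using the sinatra-activerecord gem for the database Rake tasks, one place to put that line is the Rakefile. A sketch (the file layout and gem usage are assumptions about your setup):

```ruby
# Rakefile (sketch)
require 'sinatra/activerecord/rake'
require './app' # loads ActiveRecord and your models

# Dump db/structure.sql instead of db/schema.rb after migrations.
ActiveRecord::Base.schema_format = :sql
```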
I am using ActiveRecord with Sinatra and PostgreSQL. When the database connection drops (due to temporary network failure or postgres server restarting), my app doesn't re-acquire connection automatically. I'll have to restart the app in order to connect to postgres again. I remember I didn't have this problem when I was using Rails in another project.
Do I need to put some configuration or code to tell ActiveRecord to reconnect to PostgreSQL automatically?
ActiveRecord::Base.verify_active_connections! was removed back in 2012 in Rails commit 9d1f1b1ea9e5d637984fda4f276db77ffd1dbdcb, so we can't use that method.
What follows is the result of a short investigation. I'm no expert in Rails or ActiveRecord, so take it with caution (but I hope it's helpful).
A comment in connection_pool.rb says:
# 1. Simply use ActiveRecord::Base.connection as with Active Record 2.1 and
# earlier (pre-connection-pooling). Eventually, when you're done with
# the connection(s) and wish it to be returned to the pool, you call
# ActiveRecord::Base.clear_active_connections!. This will be the
# default behavior for Active Record when used in conjunction with
# Action Pack's request handling cycle.
So maybe you (and I, since I'm in the same situation) have to return the connection to the pool.
To return connections to the pool in Sinatra the way Action Pack's request handling cycle does, use ActiveRecord::ConnectionAdapters::ConnectionManagement:
use ActiveRecord::ConnectionAdapters::ConnectionManagement
Then, as stated in Rails commit 9d1f1b1ea9e5d637984fda4f276db77ffd1dbdcb, a different approach is used: connections are always checked out and verified (checkout_and_verify) when Base.connection is used, by obeying the Action Pack lifecycle:
def connection
  # this is correctly done double-checked locking
  # (ThreadSafe::Cache's lookups have volatile semantics)
  @reserved_connections[current_connection_id] || synchronize do
    @reserved_connections[current_connection_id] ||= checkout
  end
end
UPDATED 2019-01-11 As of Rails 4.2 I have to use
ActiveRecord::Base.clear_active_connections!
and ActiveRecord will reconnect on the next query. This also works from the Rails console, which is rather convenient.
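In a classic Sinatra app, a simple place to call this after every request is an after filter. A sketch (it mirrors what the ConnectionManagement middleware does, so use one or the other):

```ruby
require 'sinatra'
require 'active_record'

# Return the thread's connection to the pool once the request is done,
# so a dead connection is not reused on the next request.
after do
  ActiveRecord::Base.clear_active_connections!
end
```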
From https://www.new-bamboo.co.uk/blog/2010/04/11/automatic-reconnection-of-mysql-connections-in-active-record/
If you use Active Record outside Rails or at least outside controller actions you have to verify connections on your own before executing a database statement. This can be done with the following code:
ActiveRecord::Base.verify_active_connections!
Since Active Record uses one connection per thread, in multi-threaded applications this verification has to be executed for each thread separately.
The blog post is about reconnecting to MySQL but I'm guessing it would be the same regardless of the engine used, as it's abstracted away. The blog also mentions a reconnect option in the configuration, but you'll have to find out if that works for Postgres.
For the past few weeks I've been learning Ruby and I must say that it hasn't been easy to get a grasp of some things.
That leads me to this question. I'm trying to set up a project which uses Rubinius as the Ruby engine, Puma as the web server (since Puma states that it works best with Rubinius because of their concurrency implementations), PostgreSQL as the database, and Sequel as the database toolkit.
What I'm struggling with is making the database connection. As it is, I'm doing it in the config.ru:
require 'rubygems'
require 'bundler/setup'
require 'uri'
require 'yaml'
require 'erb'
Bundler.require :default, ENV['RACK_ENV']
DATABASE.disconnect if defined?(DATABASE)
if ENV['DATABASE_URL']
  db_config = URI.parse ENV['DATABASE_URL']
else
  #noinspection RubyResolve
  db_config = YAML.load(ERB.new(File.read('config/database.yml')).result)[ENV['RACK_ENV']]
end
DATABASE = Sequel.connect db_config
require File.expand_path('../application/api', __FILE__)
run APP::API
But I've been told that it's not the best place to do it if I want concurrency and not a shared connection. If I were using Unicorn I would do it in the before_fork, but Puma does not have such a function.
Though it does provide an on_worker_boot hook, that is not useful with Sequel, because if I preload the app, Sequel requires a database connection before it can create my models (class SomeClass < Sequel::Model).
I am a bit confused now and I'm not sure where to go from this point. I was trying to find some guides or some good practices on this matter, but the only things I found were using ActiveRecord.
Does someone know how to actually do this properly, connecting to the database?
If you haven't set up puma to fork and preload the app (-w and --preload flags), you shouldn't need to do anything. If you have set up puma to fork and preload, then after loading your model classes, call DATABASE.disconnect. You may also want to lobby the puma developers to add a hook similar to before_fork in unicorn.
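Concretely, with -w and --preload, the config.ru from the question might end like this. The names follow the question's code; this is a sketch, not a tested setup:

```ruby
# config.ru (sketch; only needed when Puma forks a preloaded app)
DATABASE = Sequel.connect db_config

# Models are defined while the parent process still holds a live connection...
require File.expand_path('../application/api', __FILE__)

# ...then drop it, so each forked worker opens its own fresh connections
# instead of sharing the parent's sockets.
DATABASE.disconnect

run APP::API
```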
I am not able to use memcached as the session store with Rails 4 using the dalli gem.
Here's what I have done.
I added dalli gem to Gemfile
gem 'dalli'
I added the following line in config/initializers/session_store.rb
Rails.application.config.session_store ActionDispatch::Session::CacheStore, :expire_after => 20.minutes
And I added the following line in development.rb
config.cache_store = :dalli_store
Now when I start my development server (thin) without starting the memcached server, I can still log in as usual. Shouldn't I get an error like "no memcached server running" or something like that?
I am not sure whether Rails is using memcached as the session store or not.
Can someone tell me what I have missed in using memcached as the session store in the development environment?
For your information, I have been using devise as authentication gem.
Thanks
Yes, you should see an error like this in the console:
DalliError: No server available
However, you will still get the session cookie, since Rails will generate it and send it to the browser; it's just that Rails does not have a place to store the data associated with the cookie.
So, for a demo, try this:
Stop memcached. In some controller action do this:
def some_action
  puts session[:test]
  session[:test] = "hello"
end
You should not see "hello" in STDOUT.
Now, restart memcached and hit the action again (you might need to refresh the browser twice).
This time, you should see "hello".
If you again stop memcached, the "hello" will no longer be displayed.
I hope that makes it clear that the generation of the cookie (containing the session key) and the storage of data against the value of the cookie (i.e. the session key) are two different things. And of course, ensure that memcached really is stopped.
As for being able to log in even with memcached stopped: check that you have cleared all cookies for the domain (localhost) and that you have restarted the Rails server after making the change. Also, clear out the tmp/cache directory.
PS. If you do not see the error DalliError: No server available, then memcached is probably still running somewhere. Try accessing memcached via Dalli from the Rails console and see if you are able to store/get data.
PPS. If you see files being stored in tmp (like tmp/cache/D83/760/_session_id%3A4d65e5827354d0e1e8153a4664b3caa1), then that means that Rails is falling back to FileStore for storing the session data.
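As the PS above suggests, a quick probe from the Rails console might look like this (localhost:11211 is the memcached default; the key and value are arbitrary):

```ruby
require 'dalli'

dc = Dalli::Client.new('localhost:11211')
dc.set('probe', 'ok')
dc.get('probe') # "ok" when memcached is reachable; an error such as
                # "DalliError: No server available" when it is not
```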