I am using Sequel and I have a model defined like this:
class A < Sequel::Model
  one_to_one :lang, class: ALang, key: :a_id,
             graph_join_type: :inner do |ds|
    ds.where(ALang__lang: I18n.locale.to_s)
  end

  delegate :title, :titleSanitized, :description, to: :lang
  # ...
end
I18n.locale = :de
A.eager(:lang).all
# the block is called (the "ds.where(ALang__lang: I18n.locale.to_s)" code)
# the database was queried (I can see the query in the logs)

I18n.locale = :en
A.eager(:lang).all
# the block is not called
# the database was queried (I can see the query in the logs)
Is it a bug or a feature? Or am I doing something wrong?
Thank you
In this case, the block is eagerly evaluated and the resulting dataset is cached. To delay the evaluation of the current locale, you need to use delayed evaluation:
one_to_one :lang, class: ALang, key: :a_id,
           graph_join_type: :inner do |ds|
  ds.where(ALang__lang: Sequel.delay { I18n.locale.to_s })
end
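With Sequel.delay in place, the locale is looked up each time the query's SQL is generated rather than once when the class is defined, so (assuming the same setup as in the question) both calls filter on whatever the locale is at call time:

I18n.locale = :de
A.eager(:lang).all  # filters on lang 'de'
I18n.locale = :en
A.eager(:lang).all  # filters on lang 'en'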
I've updated Sequel's documentation to reflect this.
Assuming that by "block" you mean the body of the A class or something within said body, that makes perfect sense. Classes are only loaded once (typically; unless you're monkey patching, but even then "loading" is a debatable term).
The body of A, in this case, sets up the declarative logic of the queries you are performing. If you're talking about the block passed to one_to_one, it's likely that Sequel::Model is calculating its result and caching it when the class is loaded.
Am I missing the question here?
That is a feature:
The block is only evaluated once, when the class is loaded. This is the reason why you use lambdas in ActiveRecord to define variable parts in scopes or associations. I don't know whether Sequel also supports lambdas in query or association definitions.
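For comparison, this is what the lambda pattern looks like in ActiveRecord (a minimal sketch with made-up model and column names): the lambda body runs each time the scope is used, so the current locale is read at query time rather than at class-load time.

class Article < ActiveRecord::Base
  # Evaluated on every call to Article.localized, not when the class loads.
  scope :localized, -> { where(lang: I18n.locale.to_s) }
end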
The database is not called twice, because associations are cached after being retrieved. See Caching in the docs for Sequel::Model
Most FactoryBot factories look like this:
FactoryBot.define do
  factory :product do
    association :shop
    title { 'Green t-shirt' }
    price { 10.10 }
  end
end
It seems that inside the :product block we are building a data structure, but it's not a typical hash: the "keys" are not declared as symbols and no commas are used.
So my question is: what kind of data structure is this, and how does it work?
Why doesn't declaring "association" inside the block trigger a
NameError: undefined local variable or method `association'
when it would in many other situations? Is there a subject in computer science related to this?
The block is not a data structure, it's code. association and friends are all method calls, probably being intercepted by method_missing. Here's an example using that same technique to build a regular hash:
class BlockHash < Hash
  def method_missing(key, value=nil)
    if value.nil?
      return self[key]
    else
      self[key] = value
    end
  end

  def initialize(&block)
    self.instance_eval(&block)
  end
end
With which you can do this:
h = BlockHash.new do
  foo 'bar'
  baz :zoo
end
h
#=> {:foo=>"bar", :baz=>:zoo}
h.foo
#=> "bar"
h.baz
#=> :zoo
I have not worked with FactoryBot, so I'm going to make some assumptions based on other libraries I've worked with. Mileage may vary.
The basics:
FactoryBot is a class (obviously).
define is a class-level method on FactoryBot (I'm going to assume I still haven't lost you ;) ).
define takes a block, which is pretty standard stuff in Ruby.
But here's where things get interesting.
Typically, when a block is executed it has a closure relative to where it was declared. This can be changed in most languages, but Ruby makes it super easy: instance_eval(&block) will do the trick. That means the block can have access to methods that weren't available where it was written.
factory on line 2 is just such a method. You didn't declare it, but the block it's running in isn't being executed with a standard scope. Instead, your block is immediately passed to FactoryBot, which passes it to an inner class named DSL, which instance_evals the block so that its own factory method is the one that runs.
Lines 3-5 don't work that way, since you can have an arbitrary name there.
Ruby has several ways to handle missing methods, but the most straightforward is method_missing: an overridable hook that any class can define, telling Ruby what to do when somebody calls a method that doesn't exist.
Here it's checking to see if it can parse the name as an attribute name and use the parameters or block to define an attribute or declare an association. It sounds more complicated than it is. Typically in this situation I would use define_method, define_singleton_method, instance_variable_set etc... to dynamically create and control the underlying classes.
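To make that concrete, here is a small, self-contained sketch (not FactoryBot's actual internals; all names are made up) that combines instance_eval, method_missing, and define_method to turn a block of "bare" method calls into attribute readers:

class TinyFactory
  def initialize(&block)
    @attributes = {}
    instance_eval(&block)
  end

  # Unknown method names become attribute definitions, taken either from a
  # block (evaluated lazily) or from a plain argument.
  def method_missing(name, *args, &value_block)
    @attributes[name] = value_block || args.first
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  # Build a throwaway object whose readers return the stored values,
  # calling any stored blocks on access.
  def build
    attrs = @attributes
    Class.new do
      attrs.each do |attr_name, value|
        define_method(attr_name) do
          value.respond_to?(:call) ? value.call : value
        end
      end
    end.new
  end
end

product = TinyFactory.new do
  title { 'Green t-shirt' }
  price 10.10
end.build
product.title #=> "Green t-shirt"
product.price #=> 10.1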
I hope that helps. You don't need to know this to use the library; the developers made a domain-specific language so people wouldn't have to think about this stuff. But stay curious and keep growing.
I'm using Rails 5. I have this model
class MyObject < ActiveRecord::Base
...
belongs_to :distance_unit
and I notice when I have a line like below
distance = Distance.new({:distance => my_obj.distance, :distance_unit => my_obj.distance_unit})
it causes the following to be executed
SELECT "distance_units".* FROM "distance_units" WHERE "distance_units"."id" = $1 LIMIT $2 [["id", 1], ["LIMIT", 1]]
Nothing unusual, but I have a cached method created in my DistanceUnit model
class DistanceUnit < ActiveRecord::Base
  def self.cached_find_by_id(id)
    Rails.cache.fetch("distanceunit-#{id}") do
      puts "looking for id: #{id}"
      find_by_id(id)
    end
  end
end
and I would like the "distance = Distance.new({:distance => my_obj.distance, :distance_unit => my_obj.distance_unit})" line to invoke my cached functionality instead of running off to the database. How can I achieve this?
Regarding my comment: note that the memory_store is only used within a process, so each process will have its own store (consider MemCacheStore if that is an issue). Also, the cache is gone when the process ends.
As a general note: The belongs_to association inherently triggers a lookup if the object hasn't been fetched yet. There are plenty of caches built in the Rails framework and in general I wouldn't worry too much about premature optimization in the beginning (and only optimize when you find queries are running too slow).
Also: accessing the cache is itself a call to a kind of database, and a lookup on a (probably indexed) id typically isn't a heavy call. And if there are just a few distance units, maybe simply using a constant could work?
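For example, if the table is small and rarely changes, something along these lines could work (illustrative only; this uses per-process memoization rather than a true constant):

class DistanceUnit < ActiveRecord::Base
  # Loads the whole (small) table once per process and keeps it in memory.
  def self.by_id
    @by_id ||= all.index_by(&:id)
  end
end

# DistanceUnit.by_id[my_obj.distance_unit_id]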
But to answer your question: there doesn't seem to be anything in your code that calls cached_find_by_id (and while Rails does a lot of things automagically, this isn't one of them).
You could create the following method in MyObject that overrides the 'getter' that is created by the belongs_to association:
def distance_unit
  DistanceUnit.cached_find_by_id(distance_unit_id)
end
However, if you're simply initializing an ActiveRecord object and you don't need the DistanceUnit in this call you could also pass the id directly, since that is what is stored in the database.
Distance.new({:distance => my_obj.distance, :distance_unit_id => my_obj.distance_unit_id})
I am reading through The Rails 4 way (by Obie Fernandez), a well-known book about Rails, and from what I've read so far, I can highly recommend it.
However, there is an example in section 9.2.7.1, Multiple Callback Methods in One Class, that confuses me.
Bear with me: to make the problem clear for everyone, I have replicated the steps the book describes in this question.
The section talks about Active Record callbacks (before_create, before_update and so on), and that it is possible to create a class that handles multiple callbacks for you. The listed code is as follows:
class Auditor
  def initialize(audit_log)
    @audit_log = audit_log
  end

  def after_create(model)
    @audit_log.created(model.inspect)
  end

  def after_update(model)
    @audit_log.updated(model.inspect)
  end

  def after_destroy(model)
    @audit_log.destroyed(model.inspect)
  end
end
The book says then that to add this audit logging to an Active Record class, you would do the following:
class Account < ActiveRecord::Base
  after_create Auditor.new(DEFAULT_AUDIT_LOG)
  after_update Auditor.new(DEFAULT_AUDIT_LOG)
  after_destroy Auditor.new(DEFAULT_AUDIT_LOG)
  ...
end
The book then notes that this code is very ugly, having to add three Auditors on three lines, and that it is not DRY. It then goes ahead and tells us that to solve this problem, we should monkey-patch an acts_as_audited method into the ActiveRecord::Base class, as follows:
(the book suggests putting this file in /lib/core_ext/active_record_base.rb)
class ActiveRecord::Base
  def self.acts_as_audited(audit_log=DEFAULT_AUDIT_LOG)
    auditor = Auditor.new(audit_log)
    after_create auditor
    after_update auditor
    after_destroy auditor
  end
end
which enables you to write the Account Model class as follows:
class Account < ActiveRecord::Base
  acts_as_audited
  ...
end
Before reading the book, I had already made something similar that adds functionality to multiple Active Record models. The technique I used was to create a module. To stay with the example, what I did was similar to this:
(I would put this file inside /app/models/auditable.rb)
module Auditable
  def self.included(base)
    # The including class can override the log by defining self.audit_log
    # before including this module.
    @audit_log = (base.audit_log if base.respond_to?(:audit_log)) || DEFAULT_AUDIT_LOG
    base.after_create :audit_after_create
    base.after_update :audit_after_update
    base.after_destroy :audit_after_destroy
  end

  def audit_after_create
    @audit_log.created(self.inspect)
  end

  def audit_after_update
    @audit_log.updated(self.inspect)
  end

  def audit_after_destroy
    @audit_log.destroyed(self.inspect)
  end
end
Note that this file replaces both the Auditor class and the monkey-patched ActiveRecord::Base method. The Account class would then look like:
class Account < ActiveRecord::Base
  include Auditable
  ...
end
Now you've read both the way the book does it, and the way I would have done it in the past. My question: Which version is more sustainable in the long-term? I realize that this is a slightly opinionated question, just like everything about Rails, but to keep it answerable, I basically want to know:
Why would you want to monkey-patch ActiveRecord::Base directly, over creating and including a Module?
I would go for the module for a few reasons.
It's obvious; that is to say, I can quickly find the code that defines this behavior. With acts_as_* I don't know whether it comes from some gem, library code, or is defined within this class. There could be implications about it being overridden or piggy-backed somewhere in the call stack.
It's portable. It uses method calls that are commonly defined in libraries that define callbacks. You could conceivably distribute and use this library in non-ActiveRecord objects.
It avoids adding unnecessary code at the global level. I'm a fan of having less code to manage (less code to break). I like using Ruby's niceties without doing too much to force it to be "nicer" than it already is.
In a monkey-patching setup you are tying the code to a class or module that could go away, and there are scenarios where it would fail silently until your class can no longer call acts_as_*.
One downside of the portability argument is testing. In that case I would say you can write your code to guard against portability issues, or fail early with clear warnings about what will and won't work when used portably.
Say I have a User model. It has an instance method called status. Status is not an association; it doesn't follow any Active Record pattern because it's a database that is already in production.
class User < ActiveRecord::Base
  def status
    Connection.where(machine_user_id: self.id).last
  end
end
So I do this.
@users = User.all
First of all I can't eager load the status method.
@users.includes(:status).load
Second of all I can't cache that method within the array of users.
Rails.cache.write("user", @users)
It seems the status method never gets called until the view layer.
What is the recommended way of caching this method?
Maybe an instance method is not what I want here. I've looked at scopes, but they don't look like what I want either.
Maybe I just need an association? Then I get the includes and I can cache.
But can associations handle complex logic? In this case the instance method is a simple query, but what if I had complex logic in that instance method?
Thanks for any help.
Have you tried encapsulating this logic inside a plain Ruby object, like this (I wouldn't use this for very large sets, though):
class UserStatuses
  def self.users_and_statuses
    Rails.cache.fetch "users_statuses", :expires_in => 30.minutes do
      User.all.inject({}) { |hsh, u| hsh[u.id] = u.status; hsh }
    end
  end
end
After that you can use a helper method to access the cached version:
class User < ActiveRecord::Base
  def cached_status
    UserStatuses.users_and_statuses[id]
  end
end
It doesn't solve your eager loading problem; Rails doesn't have any cache warm-up techniques built in. But by extracting the logic like this, warming the cache is easily done by running a rake task from cron.
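For example, a rake task along these lines could be scheduled with cron (the task name and file location are just illustrative):

# lib/tasks/cache.rake
namespace :cache do
  desc 'Warm the cached user statuses'
  task warm_user_statuses: :environment do
    Rails.cache.delete 'users_statuses'
    UserStatuses.users_and_statuses
  end
end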
Also, in this case I don't see any problem with using an association. Rails associations allow you to pass various options, including foreign and primary keys.
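Assuming the Connection model from the question, a sketch of what such an association might look like (the names and the descending order used to emulate .last are assumptions):

class User < ActiveRecord::Base
  has_one :status, -> { order(id: :desc) }, class_name: 'Connection',
          foreign_key: :machine_user_id
end

# With an association in place, eager loading works the usual way:
# User.includes(:status)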
I have a Project model and a Developer model. I have the concept of calculating the "interestingness" for a project for a particular developer:
class Project < ActiveRecord::Base
  def interestingness_for(developer)
    some_integer_based_on_some_calculations
  end
end
I think it would be neat, instead of having something like Project.order_by_interestingness_for(bill), to be able to say
Project.order(:interestingness, :developer => bill)
and have it be a scope as opposed to just a function, so I can do stuff like
Project.order(:interestingness, :developer => bill).limit(10)
I don't know how to do this, though, because it's not obvious to me how to override a scope. Any advice?
Assuming you will not need to use the standard ActiveRecord order query method for the Project class, you can override it like any other class method:
def self.order(type, options)
  self.send(:"special_#{type}_calculation_via_scopes", options)
end
Then the trick is to ensure you create the needed calculation methods (which will vary according to your interestingness and other algorithms), and that the calculation methods only use scopes or other AR query interface methods. If you aren't comfortable converting the method logic to a SQL equivalent using the query interface, you can try the Squeel DSL gem, which can potentially work with the method directly, depending on your specific calculation.
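As a rough illustration only (the join and column names are invented, since the real interestingness logic lives in Ruby in the question), such a calculation method might look like this:

def self.special_interestingness_calculation_via_scopes(options)
  developer = options[:developer]
  joins(:interests).
    where(interests: { developer_id: developer.id }).
    order('interests.score DESC')
end

# Project.order(:interestingness, :developer => bill).limit(10) would then
# dispatch to this method and still return a chainable relation.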
If you may be needing the classic order method (and this is usually a safe assumption), then don't override it. Either create a proxy non-ActiveRecord object for this purpose, or use a different naming convention.
If you really want to, you can use aliasing to achieve a similar effect, but it may have unintended consequences for the long term if the second argument ('options' in this case) suddenly takes on another meaning as Rails progresses. Here is an example of what you can use:
def self.order_with_options(type, options = nil)
  if options.nil?
    order_without_options(type)
  else
    self.send(:"special_#{type}_calculation_via_scopes", options)
  end
end

class << self
  alias_method_chain :order, :options
end