How to find a document in Mongoid by ID without the model? - ruby

Is there a way to use Mongoid to find a document by id, without knowing which model it is?

Since Mongoid is an ODM (Object-Document-Mapper) framework for MongoDB in Ruby, I do not believe this is possible. Knowing the model is a crucial part of how Mongoid translates between your objects in code and their document representation within MongoDB.

A possible workaround is to iterate over all the collections and call find on each of them.
(Depending on the number and size of the collections, this can have a noticeable performance impact.)
The code below assumes that collection names follow the usual convention: the lower-case, pluralized form of the model name.
def self.find_with_id_in_all_collections(id)
  all_collections = Mongoid.default_session.collections
  all_models = all_collections.collect { |col| col.name.singularize.camelize }

  all_models.each do |model|
    begin
      return eval(model + ".find(id)")
    rescue Mongoid::Errors::DocumentNotFound
      # nothing to do: keep searching in the other collections
    end
  end

  # no document with this ID was found in any of the collections
  raise "No document with the ID #{id} found in any of the collections #{all_collections.collect(&:name)} (models: #{all_models})"
end
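For reference, a minimal usage sketch (the class holding the method and the example ID below are assumptions, not part of the original answer):
# Hypothetical call site: the ID is known, but the model is not.
doc = DocumentFinder.find_with_id_in_all_collections("4f75c5d6a4a9a23525000001")
puts "Found a #{doc.class.name}: #{doc.inspect}"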

Related

If I eager load associated child records, then that means future WHERE retrievals won't dig through database again?

Just trying to understand... if at the start of some method I eager load a record and its associated children like this:
@object = Object.includes(:children).where(email: "test@example.com").first
Then does that mean that if later I have to look through that object's children this will not generate more database queries?
I.e.,
@found_child = @object.children.where(type_of_child: "this type").first
Unfortunately not - using ActiveRecord::Relation methods such as where will query the database again.
You could however filter the data without any further queries, using the standard Array / Enumerable methods:
@object.children.detect { |child| child.type_of_child == "this type" }
It will generate another database query in your case.
Eager loading is used to avoid N+1 queries; it works by loading all associated objects up front. But that doesn't help when you filter the list with where later on: Rails will then build a new query and run it.
That said: in your example the includes actually makes your code slower, because it loads the associated objects but then cannot use them.
I would change your example to:
@object = Object.find_by(email: "test@example.com")
@found_child = @object.children.find_by(type_of_child: "this type")
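If the point is to reuse the preloaded children without hitting the database again, here is a short sketch of the difference (the model and column names are the ones from the question; the rest is assumed):
@object = Object.includes(:children).find_by(email: "test@example.com")

# No extra SQL: this filters the Array that `includes` already loaded.
in_memory = @object.children.select { |c| c.type_of_child == "this type" }

# Extra SQL: `where` builds and runs a fresh query against the database.
queried = @object.children.where(type_of_child: "this type").to_a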

Rails ActiveRecords with own attributes + associated objects' IDs

I have a rather simple ActiveRecord association like this (specifically in Rails 4):
An organization has many users
A user belongs to an organization
But in terms of ActiveRecord queries, what's an optimal way to construct a query that returns an array of Organizations, each with an array of the user ids associated with it? Basically, I'd like to return the following data structure:
#<ActiveRecord::Relation [#<Organization id: 1, name: "org name",.... user_ids: [1,2,3]>, <Organization id: 2...>]>
... or to distill it even further in JSON:
[{id: 1, name: 'org name', ... user_ids: [1,2,3]}, {...}]
where user_ids is not a column of the organizations table but simply an attribute constructed on the fly by ActiveRecord.
Thanks in advance.
EDIT: After trying a few things out, I came up with something that returned the result in the format I was looking for. But I'm still not sure (nor convinced) if this is the most optimal query:
Organization.joins(:users).select("organizations.*, '[#{User.joins(:organization).pluck(:id).join(',')}]' as user_ids").group('organizations.id')
Alternatively, the JBuilder/Rabl approach @Kien Thanh suggested seems very reasonable and approachable. Is that considered current best practice nowadays for Rails-based API development (the app has the back-end and front-end pieces completely de-coupled)?
The only thing to be aware of with a library solution such as JBuilder or Rabl is to watch the performance when they build the json.
As for your query, use includes instead of joins to pull the data back.
orgs = Organization.includes(:users)
You should not have to group your results this way (unless the group was for some aggregate value).
ActiveRecord::Relation gives you some automatic helper methods, one of which is association_ids.
So if you create your own JSON from a hash you can do
orgs.map { |o| o.attributes.merge(user_ids: o.user_ids).to_json }
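If JSON is the end goal, roughly the same payload can be built with as_json (only and methods are standard Rails serialization options; the attribute list here is an assumption):
orgs = Organization.includes(:users)

# user_ids comes from the has_many association, so each element looks like
# { "id" => 1, "name" => "org name", "user_ids" => [1, 2, 3] }.
payload = orgs.map { |org| org.as_json(only: [:id, :name], methods: [:user_ids]) }
json = payload.to_json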
EDIT: Forgot to add the reference for has_many http://guides.rubyonrails.org/association_basics.html#has-many-association-reference

DataMapper use only certain columns

I have a code section like the following:
users = User.all(:fname => "Paul")
This of course results in getting all users called "Paul". Now I only need some of the columns available for each user which leads to replacing the above line by something like this:
users = User.all(:name => "Paul", :fields => [:id, :fname, :lname, :email])
Until now everything works as expected. Unfortunately, now I want to work with users, but as soon as I use something like users.to_json, the other columns are lazy-loaded as well, even though I don't need them. What's the correct, or at least a good, way to end up with users only containing the attributes I need for each user?
An intermediate object like the one suggested in How to stop DataMapper from double query when limiting columns/fields? is not a very good option, as I have a lot of places where I would need to define at least twice which fields I need, and I would also lose the speed improvement gained by loading only the needed data from the DB. In addition, such an intermediate object seems quite ugly to build when multiple rows are selected from the DB (=> multiple objects in a collection) instead of just one.
If you usually work with the collection as JSON, I suggest overriding the as_json method in your model:
def as_json(options = nil)
  # this example ignores the user's options
  super({ :only => [:fname] }.merge(options || {}))
end
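With that override in place, serializing the collection should only touch the whitelisted field; a rough usage sketch (assuming ActiveSupport's JSON encoding is what ends up calling as_json on each element):
users = User.all(:fname => "Paul", :fields => [:id, :fname, :lname, :email])

# as_json above restricts the output to :fname, so the columns that were not
# selected are never lazy-loaded just to build the JSON string.
puts users.to_json   # => e.g. [{"fname":"Paul"}, ...]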
You can find a more detailed explanation here: http://robots.thoughtbot.com/better-serialization-less-as-json

Active Record class

I am working on a migration project: I want to migrate a Rails 2.x app to 3.x, and I have a problem with Active Record.
In Rails 2.x:
arr=StorageUnit.find(:all, :conditions =>"type='Drawer'")
The above code will get me all records with type Drawer.
arr.class
=> Array
In Rails 3.x:
Here the above call is deprecated, so I had to use:
arr=StorageUnit.where("type='Drawer'")
The above code will get me all records with type Drawer.
arr.class
=> ActiveRecord::Relation
I guess this is because of the change in Active Record.
My problem is that I have some code based on this class.
For ex:
if arr.class == Array
do something
else
do something
end
So as of now I have changed it to
if arr.class == ActiveRecord::Relation
do something
else
do something
end
Just curious to know whether there is a better solution or an alternative way to solve it. I have a lot of places where such checks are used.
EDIT:
arr=StorageUnit.where("type='Drawer'").all
will return the class as Array. My objective is to know: if the code without the suffix already provides the required records, then what is the use of all at the end? Is it just to change the class? Can anyone explain?
StorageUnit.where simply returns the ActiveRecord relation. Tacking on .all will execute the SQL and create instances of StorageUnit.
arr = StorageUnit.where(:type => 'Drawer').all
There are many interesting side effects of it being returned as a relation. Amongst other things, you can combine scopes before executing:
StorageUnit.where(:type => 'Drawer').where(:color => 'black')
You can view the resulting SQL for debugging:
StorageUnit.where(:type => 'Drawer').to_sql
Imagine this:
class StorageUnit < ActiveRecord::Base
  scope :with_drawer, where(:type => 'Drawer')
  scope :with_color, lambda { |c| where(:color => c) }
end
Now:
StorageUnit.with_drawer.with_color('black').first_or_create # return the first storage unit with a black drawer
StorageUnit.with_drawer.with_color('black').all # return all storage units with black drawers
The relation allows the underlying query to be built up and even saved for later use. all and other modifiers like it have special meaning to the relation: they trigger the database execution and the building of model instances.
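A short sketch of that lazy behaviour in Rails 3, using the scopes defined above (the variable names are assumptions):
black_drawers = StorageUnit.with_drawer.with_color('black')  # no SQL has run yet
black_drawers = black_drawers.order(:id)                     # still no SQL

black_drawers.to_sql        # inspect the statement without executing it
units = black_drawers.all   # now the SELECT runs; in Rails 3 this returns an Array
units.class                 # => Array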

Ignore 'read-only' column in creates and updates in Ruby ActiveRecord

I'm looking for a solution to the following problem: I have an ActiveRecord entity that is backed by an updatable database view (in DB2 via the activerecord-jdbc-adapter gem). This view contains one column that is calculated from other columns and is 'read-only': you cannot set that column in any valid way. When a new record is created for this entity, that field should not be set. However, by default, ActiveRecord does set it with the 'default' (NULL), which is rejected by the database.
attr_readonly isn't a solution, because that only excludes a column from updates and not from creates.
attr_ignore, such as implemented by the 'lincoln' gem, is not a solution either, because then the field is ignored entirely. However, the column still needs to be read and be accessible. It's actually even used as part of a relation.
There are ways to prevent you from setting a certain attribute of an ActiveRecord entity, but that doesn't usually prevent the attribute from being included in create or update statements.
Does anyone know if there is a way in ActiveRecord to specify a column as 'never set this field'?
Update, in response to Arsen7:
I've attempted to use the after_initialize hook to remove the attribute from a newly created entity, so it isn't included in the SQL that is built. The trouble with this is that the attribute is completely removed and not available anymore at all, pretty much identical to the attr_ignore situation described above. Due to caching, that's not trivial to get around and would require additional logic to force a reload of entities of these specific tables. That can probably be achieved by overriding create to add a reload, in addition to using the after_initialize.
(As pointed out by Arsen7, I forgot to mention I'm at ActiveRecord 3.0.9)
My solution
Since my entities already inherit from a subclass of ActiveRecord::Base, I've opted to add before_create and after_create hooks. In the before_create hook, I remove the 'calculated' columns from the @attributes of the instance. In the after_create hook, I add them again and read the values of the 'calculated' columns from the database to set them to the values they received.
Adding such hooks is almost identical to overriding create, so I consider Arsen7's answer to be correct.
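A rough sketch of what those hooks can look like (the base-class name and the 'calculated' column name below are assumptions; the real list of columns depends on the view):
class ViewBackedRecord < ActiveRecord::Base
  self.abstract_class = true

  CALCULATED_COLUMNS = ['computed_value']  # assumed name of the read-only view column

  before_create :strip_calculated_columns
  after_create  :reload_calculated_columns

  private

  # Drop the read-only columns from @attributes so they are left out of the INSERT.
  def strip_calculated_columns
    CALCULATED_COLUMNS.each { |column| @attributes.delete(column) }
  end

  # Re-read the row so the values computed by the database become visible again.
  def reload_calculated_columns
    reload
  end
end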
I'm afraid ActiveRecord is not prepared for the use case you need. (By the way: which version of AR are you using?)
But I believe you may apply two possible workarounds.
The first is to overwrite the create method of your model, executing some other SQL, prepared manually in the worst case. I suppose that the real method which needs to be overwritten will not be create itself, but by looking at the sources you could find the right one.
The other solution, and I believe, a more elegant one, would be to create a trigger in the database. I am more in the PostgreSQL world, where I would use a 'CREATE RULE', but looking at the DB2 documentation I see that in DB2 there are 'INSTEAD OF' triggers. I hope this may be helpful.
I have achieved the same result by overriding ActiveRecord::Base#arel_attributes_values in my model:
class Model < ActiveRecord::Base
  @@skip_attrs = [:attr1, :attr2]

  def arel_attributes_values(include_primary_key = true, include_readonly_attributes = true, attribute_names = @attributes.keys)
    # use string column names so the Arel attributes match the keys built by super
    skip_attrs = @@skip_attrs.map { |attr| self.class.arel_table[attr.to_s] }
    attrs = super(include_primary_key, include_readonly_attributes, attribute_names)
    attrs.delete_if { |key, value| skip_attrs.include?(key) }
  end
end
The attributes in the @@skip_attrs array will be ignored by ActiveRecord on both insert and update statements, as they both rely on arel_attributes_values for returning the list of attributes of the model.
A better solution would be a patch on ActiveRecord::Base#arel_attributes_values along with an attr_ignore macro similar to attr_readonly.
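Such a macro might look roughly like this (purely a sketch against ActiveRecord 3.x, where arel_attributes_values has the signature used above; all names are assumptions):
module AttrIgnore
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # attr_ignore :foo, :bar  -- columns to leave out of INSERT and UPDATE statements
    def attr_ignore(*attrs)
      @ignored_attrs = attrs.map(&:to_s)
    end

    def ignored_attrs
      @ignored_attrs || []
    end
  end

  def arel_attributes_values(include_primary_key = true, include_readonly_attributes = true, attribute_names = @attributes.keys)
    ignored = self.class.ignored_attrs.map { |name| self.class.arel_table[name] }
    attrs   = super(include_primary_key, include_readonly_attributes, attribute_names)
    attrs.delete_if { |key, _value| ignored.include?(key) }
  end
end

class Model < ActiveRecord::Base
  include AttrIgnore
  attr_ignore :attr1, :attr2
end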
cheers
I know this is very old, but I have been struggling with this very same issue. I have a database with a trigger that calculates an index value based on the max value within a key. I, too, want to prevent any ability to set the value in AR as it could throw off the index applied as rows are inserted.
CREATE TRIGGER incr_col_idx
AFTER INSERT ON fl_format_columns
FOR EACH ROW
BEGIN
  UPDATE fl_format_columns
  SET idx = (SELECT coalesce(max(idx), -1) + 1
             FROM fl_format_columns
             WHERE fl_file_format_id = new.fl_file_format_id)
  WHERE fl_file_format_id = new.fl_file_format_id AND name = new.name;
END;
I've tried a variety of things, but it always came back to overriding the setter directly.
# @raise ArgumentError when an attempt is made to set a value that is calculated in the db
def idx=(o)
  raise ArgumentError, 'the value of idx is set by the db. attempts to set the value are not allowed.' unless o.nil?
end
This will require catching the exception rather than interrogating the errors array, but that is what I ended up with. It does pass the following inspection:
context 'column index' do
  it 'should prevent idx from being set' do
    expect { FL_Format_Column.create(fl_file_format_id: -1, name: 'test idx', idx: 0) }.to raise_error(ArgumentError)
  end

  it 'should calculate idx relative to zero' do
    x = FL_Format_Column.create(fl_file_format_id: -1, name: 'test_idx_nil')
    expect(x.errors[:idx].any?).to be false
    expect(FL_Format_Column.last.idx).to be > -1
  end
end
