Mongoid class with embedded_in and belongs_to entries - ruby

I am creating a Mongoid-based application which will have a class (called Question) whose objects are stored in two different ways for different purposes. One group of those objects needs to be stored in an N:N relationship with the class Page, and another group of the same objects needs to be stored as embedded (1:N) entries in a different class (FilledPage).
I need to be able to copy a Question Object which has been referenced in a Page into a FilledPage and for the purposes of speed, I need that to be an embedded relationship.
I have tried creating a Superclass with the information and then two child classes, but I can't convert from one child class to the other without considerable work (and this same design needs to be used in a few other areas with much greater complexity).
Is there any way to support both embedding and references in the same class, or some other solution that will achieve something similar?

Nothing blocks the same class from being either embedded or standalone with references. The limitation is about linking from a master document to an embedded document: that is not easy with MongoDB, because you need to fetch the master document and then extract the embedded one from it.
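A common workaround (a hedged sketch, not taken from the answer above; the class and field names are illustrative) is to keep Question as a referenced model and store a separate embedded copy class inside FilledPage, copying the fields across when needed:

class Question
  include Mongoid::Document
  field :text, type: String                      # illustrative field
  has_and_belongs_to_many :pages                 # the N:N side
end

class Page
  include Mongoid::Document
  has_and_belongs_to_many :questions
end

class FilledPage
  include Mongoid::Document
  embeds_many :question_copies
end

class QuestionCopy
  include Mongoid::Document
  embedded_in :filled_page
  field :text, type: String
  field :question_id, type: BSON::ObjectId       # back-reference to the original

  # Hypothetical helper: snapshot a referenced Question into an embedded copy.
  def self.from_question(question)
    new(text: question.text, question_id: question.id)
  end
end

Copying is then just filled_page.question_copies << QuestionCopy.from_question(question).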

Related

Laravel / Eloquent special relation type based on parsed string attribute

I have developed a system where various classes have attributes consisting of a custom formula. The formula can contain special tokens which refer to different types of object. For example, an object of class FruitSalad may have the following attribute:
$contents = "[A12] + [B76]";
In somewhat abstract terms, this means "add apple 12 to banana 76". It can also get significantly more complex than that with as many as 15 or 20 references to other objects involved in one formula.
I have a trait which parses formulae such as this, and each time it finds a reference to a model (e.g. "[A12]") it fetches it from the database with A::find(12) and adds it to an array of component objects which can be used for other processes later in the request.
So, in essence, it's a relationship. But instead of a pivot table to describe the relationship, there is a formula on the parent model which can include references to child models.
This is all working. Yay! But it's really inefficient because there are so many tiny queries to get single models as formulae are parsed. One request may quite easily result in hundreds of queries. Oops.
I see two potential options;
1. Get all my apples and bananas from the database at the start of the request and get them from an in-memory store instead of from the database when parsing a formula (is this the repository pattern??).
2. Create a custom relation type (something like hasManyFromFormula) which makes eager loading work so that the parsing becomes much simpler because the relevant apples and bananas would already be loaded into the parent model.
Is there a precedent for this? As for why I am doing it like this, it would be a bit tough to explain in brief, but suffice to say it is to support a highly configurable data retrieval system which supports as-yet unknown input data configurations.
Help!
Thanks,
Geoff
I am not completely sure if it is the best solution, but in the end I created a new directory class for basic components and then set it up in the app service provider as a singleton. The constructor of the directory class loaded all models of several relevant classes and made them available as collections throughout the app.
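The code in question is Laravel/PHP, but purely to illustrate the preloaded-directory idea (and to stay consistent with the Ruby used elsewhere on this page), a hedged sketch with made-up class names might look like this:

# Hypothetical registry that eager-loads every Apple and Banana once,
# so formula parsing can resolve "[A12]" / "[B76]" tokens from memory
# instead of issuing one query per token.
class ComponentDirectory
  def initialize
    @apples  = Apple.all.map  { |a| [a.id, a] }.to_h
    @bananas = Banana.all.map { |b| [b.id, b] }.to_h
  end

  def lookup(token)                    # e.g. "A12"
    type, id = token[0], token[1..].to_i
    type == "A" ? @apples[id] : @bananas[id]
  end
end

directory = ComponentDirectory.new     # built once per request or process
directory.lookup("A12")

The Laravel equivalent is what the answer describes: bind the directory class as a singleton in a service provider so it is constructed once and shared.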

Specify table name mid application Ruby-Datamapper

I want to dynamically create and query tables using DataMapper.
While DataMapper allows you to work with legacy tables and schemas, and in this way lets you set the table name used, this only happens during initialisation, not within the running application.
Is there an easy way to tell DataMapper to migrate/upgrade a Model with an assigned table name from within the application, and to then tell it to query this table?
This should not be a problem.
All Ruby classes can be created and re-defined at run-time. Even initialization is run-time; initialization just happens to be executed first, before other code.
That is why monkey-patches work so easily: they are just additional code at initialization that re-defines classes to add extra methods, variables, etc.
There is no Ruby code that is "special" in the sense that it only runs at compile time. Ruby is an interpreted language.
To dynamically create a class, see Dynamically creating class in Ruby.
Assuming you don't need to dynamically create classes from an array of strings, you can define additional methods with define_method, or call DataMapper methods at runtime to add attributes.
To define new methods in a class:
Post.send :define_method, :new_method_name do
  # method body goes here
end
To define a new property using DataMapper's property method:
class Post
  include DataMapper::Resource
  property :title, String                 # the static way
end

Post.send :property, :title, String       # add a property the dynamic way (at run-time)
First warning: any tables or properties you define at run-time will not be available if you restart your server, unless the code that dynamically generates them is re-executed.
To update your tables at runtime, you simply do the same thing as normal, that is, call:
DataMapper.auto_upgrade!
To upgrade only a single table, you can also do:
Post.auto_upgrade!
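Putting those pieces together, here is a hedged sketch (the class name, table name, and properties are all made up, and it assumes DataMapper.setup has already been called) of building a model class at run-time, binding it to a specific table via storage_names, and then migrating and querying it:

# Build a named model class at run-time and point it at a specific table.
klass = Class.new do
  include DataMapper::Resource
  property :id,    DataMapper::Property::Serial
  property :title, String
end
Object.const_set(:DynamicPost, klass)                  # DataMapper models need a constant name
DynamicPost.storage_names[:default] = "posts_archive"  # assumed table name

DataMapper.finalize
DynamicPost.auto_upgrade!                              # create/upgrade just this table

DynamicPost.create(title: "hello")
DynamicPost.all(title: "hello")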
Second warning: if you have multiple processes, the dynamic code will need to be run in each process, or the additional table models and properties will not be available.
This is a problem if you have multiple worker processes, as might happen in production (e.g. Nginx with multiple Unicorn workers, or multiple Mongrel workers behind HAProxy).
If you have a single-process server, that is not a problem. However, if you have multiple worker processes, you must run the dynamic code that generates these extra classes and properties in EACH process to make them available.
This is actually the same for initialization, because each process goes through initialization (or if forked, inherit any initialization).
The easiest way without changing anything under the hood is to use separate databases instead of tables (assuming that any relationships will also be stored in the separate database) and open a connection to an additional repository inside a repository block:
DataMapper.setup(:external, "adapter://username:password@hostname/dbname")
DataMapper.repository(:external) do
  # queries in this block run against the :external database
end

What is the better way to cache fields of referenced document in Mongoid?

Currently I use additional fields for that:
class Trip
  include Mongoid::Document
  belongs_to :driver
  field :driver_phone    # cached copy of Driver#phone
  field :driver_name     # cached copy of Driver#name
end

class Driver
  include Mongoid::Document
  field :name
  field :phone
end
Maybe it would be clearer to store the cache as a nested object, so in Mongo it would be stored as:
{ driver_cache: { name: "john", phone: 12345 } }
I have also thought about an embedded document with a 1-1 relation. Is that the right choice?
The author of Mongoid (Durran Jordan) suggested the following option:
This gem looks handy for this type of thing:
https://github.com/logandk/mongoid_denormalize
Great question. I authored and maintain mongoid_alize, a gem for denormalizing mongoid relations. And so I struggled with this question.
As of 0.3.1, alize stores data for a denormalized one-to-one in a Hash, very much like your second example above with driver_cache. Previous versions, however, stored the data in separate fields (your first example).
The change was motivated by many factors, but largely to make the handling of one-to-ones and one-to-manys consistent (alize can denormalize one-to-one, one-to-many, and many-to-many relations). One-to-manys have always been handled by storing the data in an array of hashes, so storing one-to-ones as just a Hash makes for a much more symmetrical design.
It also solved several other problems. You can find a more detailed explanation here - https://github.com/dzello/mongoid_alize#release-030
Hope that helps!
Either way, you're fine.
First approach seems slightly better because it explicitly states what data you're caching. Also, it (probably) requires less work from Mongoid :-)
Alexey,
I would recommend thinking about how the data will be used. If you always use the driver information in the context of a trip object then embedding is probably the proper choice.
If, however, you will use that information in other contexts, perhaps it would be better kept in its own collection, as you have created it.
Also, consider embedding the trips inside the driver objects. That may or may not make sense given what your app is trying to do, but logically it would make sense to have a collection of drivers that each have a set of trips (embedded or not), instead of having trips that embed drivers. I can see that scenario (where a trip is always thought of in the context of a driver) being more common than the above.
-Tyler
Alternative hash storage of your cacheable data:
field :driver_cache, type: Hash
(Keep in mind, internally the keys will be converted to strings.)
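A minimal sketch of the Hash-based variant (the callback and keys below are assumptions, not taken from the answers above), keeping driver_cache in sync whenever the trip is saved:

class Trip
  include Mongoid::Document
  belongs_to :driver
  field :driver_cache, type: Hash, default: {}

  before_save :cache_driver_fields

  private

  # Copy the referenced driver's fields into the cached Hash.
  def cache_driver_fields
    return unless driver
    self.driver_cache = { "name" => driver.name, "phone" => driver.phone }
  end
end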

How to extend an existing Ruby model to support persistence

I have a gem which implements my entire business logic, so that I can use it in different applications. Now, one of these applications requires persistence. How do I easily extend my existing Ruby models to support persistence? Should I monkey patch them?
To give you a bit of background, my model objects are usually just built from XML or JSON files, but now I need to store them in a relational database.
Are there common patterns for this problem? Should I write new model objects that support persistence and map between my legacy objects and the new model objects or should I extend the existing ones to be representable in a database?
Any tips, hints, and links are highly welcome.
I am not sure that I fully understand your question. However, the DataMapper library can be very easily used to add persistence to an already existing object model after the fact, for two reasons:
It doesn't rely on class inheritance (like, e.g., ActiveRecord does) but on mixin inheritance, and you can inherit from as many mixins as you like, which means you won't have to change the inheritance tree of your object model just to add DataMapper to it.
The object-relational-mapping is declared explicitly in the model, not inferred from the data-store. This means that you can have very complex mappings between the data-store and your models, unlike the rather simple 1:1 table == class, row == object, column == attribute mapping of ActiveRecord.
Now, whether or not you will manage to keep the persistence aspect fully orthogonal, e.g. in a separate gem, is another question. You could indeed keep it in a separate library that just opens up all the model classes, includes DataMapper::Resource and declares all the properties. This would allow you to still deploy your object model gem without persistence, but the persistence gem will obviously be rather tightly coupled to the object model gem.
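As a hedged sketch (the Invoice class, its properties, and the adapter are made up), the persistence gem could simply re-open an existing plain-Ruby model and mix DataMapper in after the fact:

require 'dm-core'
require 'dm-migrations'

# Re-open the existing domain class from the business-logic gem and
# bolt persistence onto it without touching its original definition.
class Invoice
  include DataMapper::Resource

  property :id,     Serial
  property :number, String
  property :total,  Decimal
end

DataMapper.setup(:default, 'sqlite::memory:')   # adapter choice is illustrative
DataMapper.finalize
DataMapper.auto_upgrade!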

Core Data: Instantiating a "root object" in a document based application

I am creating a document based project using Core Data and have run into what may simply be a conceptual issue for me, as while I am not new to Cocoa, this is my first attempt to utilize Core Data. What I am trying to accomplish should be relatively simple: with each new document launched, I would like a new instance of one of my model objects created that serves as a "root" object.
What I have done is add an NSObjectController to my xib, set its mode to Entity Name (with the correct entity name provided), checked off "Prepares Content", and bound its managed object context to File's Owner with managedObjectContext as the model key path. To test this, I bound the title of my main window to the object controller, with controller key as selection and model key path as one of the keys in my entity.
I know I can create my root object programmatically, but am trying to adopt the mediator pattern as is recommended by Apple. I have seen the instructions in the department-employee tutorial under the "adopting the mediator pattern" section and the steps detailed are exactly what I believe I have done.
Any thoughts?
Edit:
Perhaps I did not state the problem correctly. The models are created in Core Data and the relationships are setup as I need them to be (with a "root", children and leaves, using to-one parent relationships, to-many children relationships and an isLeaf boolean attribute). My issue is ensuring that this root object is instantiated as a singleton every time a new document is launched. There should be exactly a 1:1 relationship between the root object and the current document, that root object must always exist and be available without any user interaction to create it, and child nodes that are created and attached to the root are the data objects that are used and manipulated by the application.
I have implemented the above functionality programmatically but, in keeping with Core Data principles, would like to adopt the mediator pattern completely and not manage any creation of data objects within my application logic.
If you want a "root" managed object like you would find in linked-list or tree, then you have to set that up in data model itself.
By default, a Core Data data model has no particular hierarchy among objects. Objects may be related but no object is logically "above" or "below" another one. You can reach in object in any relationship by starting with any other object and walking the relationship/s back to the desired object.
A hierarchy of managed objects needs a tree-like structure in the model, for example:
Tree {
  nodeName: String
  parent   <-->> Tree.children   (to-one, inverse of children)
  children <<--> Tree.parent     (to-many, inverse of parent)
}
... so that the "root" object is the sole Tree instance whose parent == nil.
Having said all this, I would point out that the Apple docs you refer to say that it is best NOT to use this kind of built-in hierarchy in most cases. It is just a simplification used for purposes of demonstration (and, I think, a bad one).
The data model is intended to model/simulate the real-world objects, conditions or events that the app deals with. As such, the logical relationships between the entities/objects in the model/graph should reflect the real-world relationships. In this case, unless the real-world things you are modeling exist in a hierarchy with a real-world "root" object, condition or event, then your model shouldn't have one either.

Resources