I have a gem which implements my entire business logic, so that I can use it in different applications. Now, one of these applications requires persistence. How do I easily extend my existing Ruby models to support persistence? Should I monkey patch them?
To give you a bit of background: my model objects are usually just built from XML or JSON files, but now I need to store them in a relational database.
Are there common patterns for this problem? Should I write new model objects that support persistence and map between my legacy objects and the new model objects, or should I extend the existing ones to be representable in a database?
Any tips, hints, and links are highly welcome.
I am not sure that I fully understand your question. However, the DataMapper library can be very easily used to add persistence to an already existing object model after the fact, for two reasons:
It doesn't rely on class inheritance (as e.g. ActiveRecord does) but on mixin inheritance, and you can inherit from as many mixins as you like, which means you won't have to change the inheritance tree of your object model just to add DataMapper to it.
The object-relational mapping is declared explicitly in the model, not inferred from the data store. This means that you can have very complex mappings between the data store and your models, unlike the rather simple 1:1 table == class, row == object, column == attribute mapping of ActiveRecord.
Now, whether or not you will manage to keep the persistence aspect fully orthogonal, e.g. in a separate gem, is another question. You could indeed keep it in a separate library that just opens up all the model classes, includes DataMapper::Resource, and declares all the properties. This would still allow you to deploy your object model gem without persistence, but the persistence gem will obviously be rather tightly coupled to the object model gem.
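As a minimal sketch of what that separate persistence library could look like (the Book class and its properties are hypothetical stand-ins for your real model):

require "dm-core"
require "dm-migrations"

# Reopen a model class that already exists in the object-model gem
# and mix persistence in after the fact.
class Book
  include DataMapper::Resource

  property :id,    Serial
  property :title, String
  property :isbn,  String
end

DataMapper.setup(:default, "sqlite::memory:")
DataMapper.finalize
DataMapper.auto_migrate!

Because only a mixin is added, Book keeps whatever inheritance tree and behaviour it already had in the gem.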
Related
Apologies for the "pedantic" question, however I have been wondering how to structure the following. If I am building a JPA-style application, my persistent classes (annotated with @Table etc.) may be collected in a foo.bar.entities package. However, I may also have objects of a similar structure (POJOs) that are not used for persistence. Where would I place these so that it is clear that their function is something other than JPA: foo.bar.dto (for data transfer objects)? Or am I confusing my terminology? Maybe they are "model" classes, although that is really what the entities are?
The term 'DTO' is mostly used to refer to this kind of object. Use a vertical-slice architecture to place these classes under a different package.
You can place DTOs under a dto package and entity/domain classes under a domain package. You can also use entities as your package name; just be consistent across your project with your naming conventions.
I've started using DataMapper in my project, and then found out that it is actually a frozen project. DataMapper 2.0 moved to ROM (Ruby Object Mapper), while the Property API from DataMapper was extracted into the Virtus project.
What I need is to keep the definition of a particular class in one place (relationships + attributes), and I cannot keep the model in a schema definition that precedes the class, as that breaks low-level requirements of the project. I need to map classes to the model (the way of persistence), not the model to classes.
So I am starting to wonder if there is any way to glue Virtus and Sequel or ROM together, to have the attribute definitions in the same class declaration and get the database schema automatically, as I would with DataMapper. I am looking for a way to hook into Virtus's machinery and add the schema to the model via DB.create_table() (Sequel)... or something similar.
Please avoid answers and comments that are not hints on how to do that. If it is not possible, I will just remove DataMapper, abandon ORMs, and create a repository of marshaled objects instead.
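To make it concrete, this is roughly the glue I am imagining, as a hypothetical, untested sketch (the User class and the create_table_for helper are made up; attribute_set is Virtus's attribute registry):

require "sequel"
require "virtus"

DB = Sequel.sqlite # in-memory database, just for the example

class User
  include Virtus.model

  attribute :name, String
  attribute :age,  Integer
end

# Hypothetical helper: derive a Sequel table from a Virtus attribute set.
def create_table_for(model, table_name)
  DB.create_table?(table_name) do
    primary_key :id
    model.attribute_set.each do |attr|
      column attr.name, attr.primitive # Sequel accepts Ruby classes as generic column types
    end
  end
end

create_table_for(User, :users)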
I'm starting to develop for iOS, and right now I'm studying Core Data.
One thing was not clear to me: while studying, I saw a lot of people managing Core Data entities in the controller.
To me this isn't MVC, since Core Data belongs to the model layer.
So I think it would be nice to implement Core Data using the DAO pattern, but first I want to know whether there is an established Core Data pattern, or whether there are any cons to implementing DAO on top of Core Data.
It is indeed correct to avoid implementing data look-up methods in the controller. This way the philosophy of the MVC design pattern is adhered to: the controller should just be calling high-level "glue" code and therefore acting as a document that describes how the view is interacting with the model.
With regards to persistent objects, there are two main approaches to this:
Use the ActiveRecord pattern
Use the Data Access Object pattern.
A Data Access Object (DAO) is an interface dedicated to the persistence of a model/domain object to a data-source.
The ActiveRecord pattern puts the persistence methods on the model object itself, whereas the DAO defines a discrete interface. The advantages of the DAO pattern are:
It's easy to define another style of persistence, e.g. moving from a database to the cloud, without changing the interface and thus affecting other classes.
The persistence concerns are modularized away from the main model object concerns.
The advantage of the ActiveRecord pattern is simplicity.
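To make the contrast concrete, here is the shape of each, sketched in Ruby for brevity since the split is language-agnostic (all names are hypothetical):

# ActiveRecord pattern: persistence methods live on the model itself.
class User
  def save
    # talk to the data store directly
  end
end

# DAO pattern: the model stays plain; a dedicated object owns persistence.
class UserDAO
  def find_by_name(name)
    # query the data store, return plain User objects
  end

  def save(user)
    # write the given User to the data store
  end

  def delete(user)
    # remove the given User from the data store
  end
end

Swapping the data store then means swapping the UserDAO implementation, while User itself is untouched.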
ActiveRecord for CoreData
At the present time the ActiveRecord pattern seems to be a lot more popular among Objective-C developers. The following project provides ActiveRecord for CoreData: https://github.com/magicalpanda/MagicalRecord
DAO for CoreData
I'm not familiar with a widely used library that provides the DAO pattern for Core Data. However, it can be applied quite easily without the assistance of a library:
Define all the data methods for a particular entity (findByName, save, delete, etc.) on a protocol.
Implement the protocol by calling the appropriate CoreData methods.
NB: The example project for the Typhoon framework will soon include some examples of applying the DAO pattern with CoreData.
You are looking for something like the Core Data Persistence Framework.
This framework allows you to do things like:
DAOFactory *factory = [DAOFactory factory];
DAO *dao = [factory createRuntimeDAO:@"EntityName"];
NSArray *items = [dao findAll];
And a lot more interesting things.
Need suggestions for migrating a few modules of an ASP.NET MVC 3 application:
Right now we are using EF with POCO classes, but in the future, for some performance-critical modules, we will need to move to ADO.NET or some other ORM tool (maybe Dapper.NET).
The issue we are facing right now is that our views depend on collection classes loaded through EF. What strategy should I use so that these entity classes are loaded exactly the same way by some other ADO.NET or ORM tool as they are by EF?
What I want is to be able to switch between EF and ADO.NET with a minimum of change at the data access layer, as I don't want my views to be affected by that.
You should use the Repository design pattern to hide the implementation of your data access layer. The repository will return the same POCOs and expose the same operation contracts no matter what data access layer you are using under the hood. This works fine as long as you are using real POCOs that do not have virtual methods for lazy loading in them; you will need to explicitly handle the loading of dependent collections on entities to make this work.
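As a hedged illustration of that contract, sketched in Ruby for brevity (the idea carries over to C# directly; all names here are made up):

# The repository exposes one contract no matter which data access
# technology sits underneath (EF, raw ADO.NET, Dapper, ...).
class OrderRepository
  def initialize(store)
    @store = store # hypothetical adapter object wrapping the actual DAL
  end

  def find(id)
    @store.find_order(id) # returns a plain POCO-style object
  end

  def save(order)
    @store.save_order(order)
  end
end

Your views and services only ever see OrderRepository, so replacing the adapter never ripples upward.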
From what I've seen so far, a seamless transition from one ORM/DAL to another is an illusion. The repository pattern suggested by Kevin is a great help, a prerequisite even (+1). But nonetheless, each ORM leaves a footprint in the domain layer. With EF you probably use virtual properties, maybe data annotations, or, easily forgotten, a validation framework that happens to fit in (while dismissing others that don't). You may rely on EF's ability to map across join tables. And there may be things you don't (but would like to) do because of EF.
(Not to mention tools that execute scaffolding on top of an EF context. That would make the lock-in even tighter.)
Some things I might do in your situation (forgive me if I'm stating the obvious for you):
Use EF optimally (best mappings, best associations, ...). Preparing for the future is good, but it easily degenerates into YAGNI.
Become proficient in LINQ + EF: there are many ways to needlessly kill performance with EF. EF may well turn out to be good enough!
Keep looking for alternatives for high performance, like using stored procedures, parallelization, background processing, and/or reducing data volumes, and choose one when the requirements (functional and non-functional) are clear enough.
Maybe introduce an abstraction layer with DTOs that serve your views now and that can later be materialized by another ORM or by ADO.NET.
Expose POCOs as interfaces, which can be implemented by other objects later.
Just to add to both of these answers...
I always structure my solution into these basic projects:
Front end (usually an ASP.NET MVC web app),
Services/Core with business processes,
Objects with application model POCOs and communication interfaces, referenced by all other projects so they serve as data interchange contracts, and
Data that implements repositories that return POCO instances from Objects.
Front end references Objects and Services/Core.
Services/Core references Objects and Data.
Data references Objects.
Objects doesn't reference any of the others, because it is the domain application model.
If the front end needs any additional POCOs, I create those in the front end as view models; they are only visible to that project (e.g. a registration process usually uses a separate type with more, and different, properties than the Objects.User entity/POCO). There is no need to put these kinds of types in Objects.
The same goes for data-only types. If additional properties are required, these classes usually inherit from the corresponding Objects class and add extra properties and methods, like a generic ToPoco() method that knows exactly how to map the data type back to the application model class.
Changing DAL
So whenever I need to change my data access code (which, as @GetArnold pointed out, is never completely seamless), I only have to provide a different Data project that uses a different library/framework. All additional data-specific types etc. are then part of it as well, but all communication with the other layers stays the same.
Instead of creating a new Data project, you can also change the existing one by rewriting its repositories to use a different DAL.
I'm implementing several classes which do not hold data themselves, just logic. These classes implement an access control policy on data, which depends on several parameters taken from other models.
I initially tried to find the answer to "Where do I store such classes?" here, and the answer was the app/models directory. That's OK, but I would like to clearly separate these classes from the ActiveRecord-derived classes in the hierarchy, both as files and as classes.
So, I created classes inside a Logic module, like Logic::EvaluationLogic or Logic::PhaseLogic. I also wanted to have constants that are passed between these logics, and I prefer to place those constants in the Logic module too. Thus, I implemented it like this:
# in logic/phase_logic.rb
module Logic
  PHASE_INITIAL = 0
  PHASE_MIDDLE  = 1000

  class PhaseLogic
    def self.some_phase_control_code(phase)
    end
  end
end

# in logic/evaluation_logic.rb
module Logic
  class EvaluationLogic
    def self.some_other_code
      Logic::PhaseLogic.some_phase_control_code(Logic::PHASE_INITIAL)
    end
  end
end
Now, it works just fine with RSpec (it passes the tests I wrote without issues), but not with the development server, since it can't find the Logic::PHASE_INITIAL constant.
I suspect this is related to a mismatch between Rails's autoloading scheme and what I wanted to do. I tried to tweak Rails, but had no luck, and ended up eliminating the module Logic wrapper.
Now the question I want to ask: how can I organize these classes with Rails?
I'm using Rails 3.2.1 at the moment.
Posted a follow-up question "How I can organize namespace of classes in app/modules with rails?"
I am not sure whether I really understand your classes, but couldn't you create the Logic module, or (what I would rather do) the PhaseLogic and EvaluationLogic classes, in the /lib directory?
It is not a given that a "model" is always a descendant of ActiveRecord. If an object belongs to the business logic, then it is a model. You can have models which do not touch the database in any way. So, if your classes are "business objects", place them in app/models and use them like any other model.
Another question is whether you should use inheritance or modules here, but I would rather think about including a module in PhaseLogic than about defining PhaseLogic inside a module. Of course, all of this depends heavily on the intended role of your objects.
Because in Ruby the class of an object is not important, you do not need to use inheritance. If you want to 'plug' the logic objects into other objects, just make sure that all '*Logic' classes have the required methods. I know this is all very vague, but I cannot give more concrete suggestions without knowing more about the role of these objects.
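For example, a minimal duck-typed sketch (all names hypothetical):

# Both logic classes expose the same method surface, so callers do not
# care which one they get; no common base class is needed.
class PhaseLogic
  def permitted?(record)
    record.phase >= 1000
  end
end

class EvaluationLogic
  def permitted?(record)
    record.evaluated?
  end
end

# Any object that responds to permitted? can be plugged in here.
def render_if_allowed(logic, record)
  yield if logic.permitted?(record)
end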
Ah, and one more thing!
If you find yourself fighting with Rails class autoloading, just use a good old require "lib/logic.rb" in all the classes where you use the Logic::PHASE_INITIAL constant.
In this case I suppose the problem was caused by load order: logic/evaluation_logic.rb was loaded before logic/phase_logic.rb. The problem may disappear if you create a logic.rb somewhere class autoloading can find it, and define these constants in that file.
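A minimal sketch of such a file, reusing the constants from the question:

# lib/logic.rb, placed where autoloading (or a plain require) can find it
module Logic
  PHASE_INITIAL = 0
  PHASE_MIDDLE  = 1000
end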
Don't name your classes or modules Logic; use specific names. Start by extracting the logic into separate classes, and then try to break those into smaller ones. Use namespaces to distinguish them from each other in the lib folder. After these steps you will be able to extract some parts of the logic into separate gems, reducing the codebase and the complexity of the application. Also take a look at the presenter pattern.