I just switched from ActiveRecord/NHibernate to Dapper. Previously, I had all of my queries in my controllers. However, some properties that were convenient to implement on my models (such as summaries, sums, totals, and averages) could be calculated by iterating over instance variables (collections) in my model.
To be specific, my Project has a notion of AppSessions, and I can calculate the total number of sessions, plus the average session length, by iterating over someProject.AppSessions.
Now that I'm using Dapper, this seems muddled: my controller methods make queries to the database via Dapper (which seems okay), but my model classes also make queries to the database via Dapper (which seems strange).
TL;DR: Should the DB access go in my model, in my controller, or in both? Both seems incorrect, and I would like to limit it to one "layer" so that changing the DB access style later doesn't have too much impact.
You should consider using a repository pattern:
With repositories, all of the database queries are encapsulated within a repository that is exposed through a public interface, for example:
public interface IGenericRepository<T> where T : class
{
    T Get(object id);
    IQueryable<T> GetAll();
    void Insert(T entity);
    void Delete(T entity);
    void Save(T entity);
}
Then you can inject a repository into a controller:
public class MyController
{
    private readonly IGenericRepository<Foo> _fooRepository;

    public MyController(IGenericRepository<Foo> fooRepository)
    {
        _fooRepository = fooRepository;
    }
}
This keeps the UI free of any DB dependencies and makes testing easier; from unit tests you can inject any mock that implements the repository interface. It also allows the repository to switch between technologies such as Dapper or Entity Framework at any time, without any client changes.
The above example used a generic repository, but you don't have to; you can create a separate interface for each repository, e.g. IFooRepository.
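For illustration, here is a minimal sketch of what a Dapper-backed implementation might look like, assuming a reasonably recent Dapper version and a Foo entity stored in a Foo table with Id and Name columns (the SQL and type names are illustrative, not from the question):

using System.Data;
using System.Linq;
using Dapper;

public class DapperFooRepository : IGenericRepository<Foo>
{
    private readonly IDbConnection _connection;

    public DapperFooRepository(IDbConnection connection)
    {
        _connection = connection;
    }

    public Foo Get(object id)
    {
        return _connection.Query<Foo>("SELECT * FROM Foo WHERE Id = @Id",
            new { Id = id }).SingleOrDefault();
    }

    // Dapper has no IQueryable provider, so GetAll materializes the rows;
    // with Dapper you may prefer IEnumerable<T> in the interface instead.
    public IQueryable<Foo> GetAll()
    {
        return _connection.Query<Foo>("SELECT * FROM Foo").AsQueryable();
    }

    public void Insert(Foo entity)
    {
        _connection.Execute("INSERT INTO Foo (Name) VALUES (@Name)", entity);
    }

    public void Delete(Foo entity)
    {
        _connection.Execute("DELETE FROM Foo WHERE Id = @Id", entity);
    }

    public void Save(Foo entity)
    {
        _connection.Execute("UPDATE Foo SET Name = @Name WHERE Id = @Id", entity);
    }
}

Swapping this for an Entity Framework implementation is then just a matter of registering a different class against the same interface.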
There are many examples of, and many variations on, how the repository pattern can be implemented, so search around to understand it. Here is one of my favorite articles on layered architectures.
Another note: For small projects, it should be OK to put queries directly into controllers...
I can't speak for Dapper personally, but I've always restricted my DB access to models only, except in very rare circumstances. That seems to make the most sense to me.
A little more info: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
A model notifies its associated views and controllers when there has been a change in its state. This notification allows the views to produce updated output, and the controllers to change the available set of commands. A passive implementation of MVC omits these notifications, because the application does not require them or the software platform does not support them.
Basically, data access in models seems to be the standard.
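To make that concrete with the asker's example, the model itself could own the aggregate queries. This is only a sketch: the schema, the SQL Server DATEDIFF call, and the Dapper usage are all assumptions:

using System.Data;
using Dapper;

public class Project
{
    public int Id { get; set; }

    // The model computes its own summary straight from the database
    // instead of iterating an in-memory AppSessions collection.
    public double GetAverageSessionLengthSeconds(IDbConnection connection)
    {
        // CAST to FLOAT avoids integer-averaging truncation in SQL Server.
        return connection.ExecuteScalar<double>(
            "SELECT AVG(CAST(DATEDIFF(second, StartTime, EndTime) AS FLOAT)) " +
            "FROM AppSessions WHERE ProjectId = @Id",
            new { Id });
    }
}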
I agree with @void-ray regarding the repository pattern. However, if you don't want to get into interfaces and dependency injection, you could still separate out your data access layer and use static methods to return data from Dapper.
When I am using Dapper, I typically have a repository library that returns very small objects or lists that can then be mapped into a ViewModel and passed to the view (the mapping is done by StructureMap, but it could be handled in the controller or another helper).
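That static-method approach could look roughly like this; the connection handling and the SessionData/AppSession names are illustrative, not from the original answer:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Dapper;

public static class SessionData
{
    // In practice this would come from configuration.
    private static readonly string ConnectionString = "...";

    public static IList<AppSession> GetSessionsForProject(int projectId)
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            return connection.Query<AppSession>(
                "SELECT * FROM AppSessions WHERE ProjectId = @ProjectId",
                new { ProjectId = projectId }).ToList();
        }
    }
}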
Related
I'm currently thinking about how I handle my domain objects with Hibernate, considering the following:
My model objects are directly annotated with JPA annotations; there is no separate entity layer.
For some database-heavy operations, I don't mind tuning my code to take full advantage of the proxies, even though that can be considered a leak of abstraction/implementation masking. Of course, I prefer it when I can do otherwise.
Because I don't have an entity layer, I don't have a DAO layer; the entity manager is itself considered the DAO layer (related: I found that JPA and the like don't encourage the DAO pattern).
However, I was thinking about improving what I'm doing now in order to reduce the complexity a bit, or at least relocate that complexity to a place where it fits better, such as the entity's related service, and perhaps further abstract away the fact that I'm using an ORM.
Here is the generic CRUD service from which all my business services inherit. This code shows how things are done currently (annotations and logging removed for clarity):
public void create(T entity) {
    this.entityManager.persist(entity);
}

@Transactional(value = TxType.REQUIRED, rollbackOn = NumeroVersionException.class)
public void update(T entity) throws NumeroVersionException {
    try {
        this.entityManager.merge(entity);
    } catch (OptimisticLockException ole) {
        throw new NumeroVersionException("for entity " + entity, ole);
    }
}

public T read(int id) {
    return this.entityManager.find(entityClass, id);
}

public void delete(int id) {
    T entity = this.entityManager.getReference(entityClass, id);
    this.entityManager.remove(entity);
    // edit: removed null test thanks to @JBNizet
}
The problem with this kind of implementation is that if I want to create an object and then use the advantages of the proxies, I basically have to create it and then re-fetch it. The query may not hit the database but only Hibernate's cache (I'm not sure about that, though), but it still means I must not forget to re-fetch the proxy.
This means I leak the fact that I'm using an ORM and proxies behind the scenes.
So I was thinking of changing my interface to something like:
T read(int id);
T update(T t) throws NumeroVersionException;
T create(T object);
void delete(int id);
List<T> list();
Meaning that once I pass an object to this layer, I have to use the returned value.
And implementing update specifically, like:
public T update(T t) throws NumeroVersionException {
    // Merge only detached instances; HibernateProxy (org.hibernate.proxy)
    // is Hibernate's marker interface for proxies.
    if (!(t instanceof HibernateProxy)) {
        // + still check whether it is a detached proxy
        return entityManager.merge(t);
    }
    return t; // already managed: changes are flushed by the persistence context
}
Since merge hits the database every time it is called, this can be annoying for operations involving only ten or so entities, so I wouldn't want to call it in an update method when given a proxy.
Of course, I expect there to be some edge cases where I'll need the entityManager to flush things and so on. But I think this would significantly reduce the current complexity of my code and better isolate the concerns.
In short, what I'm trying to do is relocate the ORM code within the service so I can hide the fact that I'm using an ORM and proxies, and use the interface as I would any other implementation, without losing the benefits of using an ORM.
So the questions are:
Is this new design a good step toward that goal?
Did I miss anything about how to handle this properly?
Note: even though I'm talking about performance, my concern is also separation of concerns, maintainability, and ease of use for the developers I work with who aren't familiar with ORMs and Java.
Thanks to @JBNizet, I'm seeing some things more clearly:
I should use the value returned by the merge() method.
A managed entity is not always a proxy.
I don't have to abstract away the fact that I use managed entities; that would lead to complex and inefficient code.
I chose JPA and I won't switch away from it, which holds true unless the full model is rewritten for something based on a non-relational database.
So I'll just change the update method from the original code and keep the rest.
I am very new to DDD and have been reading various discussions on validation (where to put it) on Stack Overflow as well as on the web. I do like the idea of keeping the validations outside the entities and validating them depending upon the context.

In our project, we have an entity called MedicalRecord, and there are various rules for when one can be created or saved. I want to create a service layer, let's say RecordService, that would do some checks to make sure a user can create a medical record before creating one. I also want to create a MedicalRecordRepository that would save the medical record.

What confuses me is the access modifiers on my entity and repository classes. Since both will be public, how can I force the clients of my application to use the service instead of just creating a new medical record (with its public constructor) and using the repository to save it? I am using C#. I know DDD is language-independent, but I am wondering if someone could provide some thoughts on this.
Thanks.
You must control record creation by making the constructor non-public and allowing creation only through a factory:
public class MedicalRecord
{
    internal MedicalRecord()
    {
    }
}

public static class MedicalRecordFactory
{
    public static MedicalRecord Create(User onBehalfOf)
    {
        // do logic here
        return new MedicalRecord();
    }
}
For the internal keyword to be applicable, either both classes must be in the same assembly or the class assembly must allow the factory assembly access with the InternalsVisibleTo attribute.
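For example, if the factory lives in a separate assembly, the entity's assembly would declare something like this (the assembly name is illustrative):

// In the domain assembly, e.g. in AssemblyInfo.cs
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyCompany.Factories")]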
Edit
If you need to be able to perform validation at any time, you additionally have to encapsulate validation logic in another class (here partially done via an extension method):
public static class MedicalRecordValidator
{
    // "Context" stands in for whatever contextual data your rules need;
    // the original left it as a placeholder.
    public static bool IsValid(this MedicalRecord medicalRecord, Context context)
    {
        return IsValid(context);
    }

    public static bool IsValidForCreation(User onBehalfOf)
    {
        return IsValid(null, onBehalfOf);
    }

    private static bool IsValid(Context context, User user = null)
    {
        // do validation logic here
        return true; // placeholder result
    }
}
Then, the factory could do this:
public static MedicalRecord Create(User onBehalfOf)
{
    return MedicalRecordValidator.IsValidForCreation(onBehalfOf) ? new MedicalRecord() : null;
}
and you could also always do this:
if (myMedicalRecord.IsValid(context))
{
    // ....
}
Only use your repository to retrieve your entities, not to save them. Saving (persisting) your entities is the responsibility of your unit of work.
You let your service change one or more entities (for instance, a MedicalRecord) and track the changes in a unit of work. Before committing the changes in your unit of work, you can validate all entities, including rules that span multiple entities. Then you commit your unit of work (in a single transaction), which will persist all your changes, or none at all.
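A minimal sketch of that flow, assuming a hand-rolled unit of work rather than any particular library (all names here are illustrative):

public interface IUnitOfWork
{
    void RegisterDirty(object entity); // track a modified entity
    void Commit();                     // validate, then persist everything in one transaction
}

public class RecordService
{
    private readonly IUnitOfWork _unitOfWork;

    public RecordService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void Amend(MedicalRecord record)
    {
        // Change the entity through its own methods so it protects its invariants,
        // then track it; Commit validates across entities before persisting.
        _unitOfWork.RegisterDirty(record);
        _unitOfWork.Commit(); // all changes persist, or none at all
    }
}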
For clarity, your MedicalRecord should protect its own invariants; such that a client retrieving a MedicalRecord using a repository, calling some methods to modify it and then persisting it using a unit of work, should result in a "valid" system state. If there are situations where this is not the case, then it can be argued that it should not be possible to retrieve a MedicalRecord on its own - it is part of some larger concept.
For creation purposes, using a factory like @Thomas suggests below is a good approach, although I think I'd call it a service (because of the collaboration between entities) instead of a factory. What I like about Thomas' approach is that it does not allow a client to create a MedicalRecord without a User (or other contextual information), without tying the MedicalRecord tightly to the user.
I have several thoroughly unit-tested and finely crafted rich DDD model classes, with final immutable invariants and integrity checks. Object instantiation happens through appropriate constructors, static factory methods, and even builders.
Now, I have to provide a Spring MVC form to create new instances of some classes.
It seems to me (I'm not an expert) that I have to provide an empty constructor and attribute setters for all the form-backing classes I want to bind.
So, what should I do?
Create anemic objects dedicated to form backing and transfer the information to my domain model (so much for the DRY principle...) by calling the appropriate methods/builder?
Or is there a mechanism that I missed that can save my day? :)
Thank you in advance for your wisdom!
The objects that are used for binding with the presentation layer are normally called view models. They are DTOs intended for displaying data mapped from domain objects and for mapping user input back onto domain objects. View models typically look very similar to the domain objects they represent; however, there are some important differences:
Data from the domain objects may be flattened or otherwise transformed to fit the requirements of a given view. Having the mapping be in plain objects is easier to manage than mappings in the presentation framework, such as MVC. It is easier to debug and detect errors.
A given view may require data from multiple domain objects - there may not be a single domain object that fits requirements of a view. A view model can be populated by multiple domain objects.
A view model is normally designed with a specific presentation framework in mind and as such may utilize framework specific attributes for binding and client side validation. As you stated, a typical requirement is for a parameterless constructor, which is fine for a view model. Again, it is much easier to test and manage a view model than some sort of complex mapping mechanism.
View models appear to violate the DRY principle; however, after a closer look, the responsibility of the view model is different, so with the single responsibility principle in mind it is appropriate to have two classes. Also, take a look at this article discussing the fallacy of reuse, often driven by the DRY principle.
Furthermore, view models are indeed anemic, though they may have a constructor accepting a domain object as a parameter and a method for creating and updating a domain object using the values in the view model as input. From experience, I find it good practice to create a view model class for every domain entity that is going to be rendered by the presentation layer. It is easier to manage the parallel class hierarchies of domain objects and view models than to manage complex mapping mechanisms.
Note also, there are libraries that attempt to simplify the mapping between view models and domain objects, for example AutoMapper for the .NET Framework.
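For example, with newer AutoMapper versions the mapping can be configured once at startup and reused; Person and PersonViewModel here are assumed types, not from the answer above:

using AutoMapper;

// Configure once at application startup.
var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonViewModel>());
var mapper = config.CreateMapper();

// Map a domain object to its view model before handing it to the view.
PersonViewModel viewModel = mapper.Map<PersonViewModel>(person);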
Yes, you will need to create objects for the form to take all the input, and then update your model with these objects in one operation.
But I wouldn't call these objects anemic (especially if you are doing DDD). These objects represent one unit of work, so they are domain concepts too!
I solved this by creating a DTO interface:
public interface DTO<T> {
    T getDomainObject();
    void loadFromDomainObject(T domainObject);
}

public class PersonDTO implements DTO<Person> {
    private String firstName;
    private String lastName;

    public PersonDTO() {
        super();
    }

    // setters, getters ...

    @Override
    public Person getDomainObject() {
        return new Person(firstName, lastName);
    }

    @Override
    public void loadFromDomainObject(Person person) {
        this.firstName = person.getFirstName();
        this.lastName = person.getLastName();
    }

    // validation methods, view formatting methods, etc.
}
This also stops view validation and formatting concerns from leaking into the domain model. I really dislike having Spring-specific (or other framework-specific) annotations (@Value, etc.) and javax.validation annotations in my domain objects.
I have the following project structure; these are all separate projects. I was told to do it that way, so it was not my choice.
CORE
-- Self-explanatory
DATA
-- Contains EF 4.1 EDMX, POCOs, Generic Repository Interface
DATAMapping
-- Contains Generic Repository
Services
-- Contains nothing at the moment
MVC 3 Application
-- Self-explanatory
Here is my question. I have been reading that it is best practice to keep the controllers on a diet and that models/viewmodels should be dumb, hence the service layer in my project structure. The actual question: is this a good approach, or am I creating way too much work for myself?
So if I want to, say, have some CRUD operations on products, categories, or any of the other entities, should the repository be instantiated from the service layer / business logic layer?
Some input, please?
Personally, I have my service layer reference only generic and abstract repositories for CRUD operations. For example, a service class might look like this:
public class MyService : IMyService
{
    private readonly IFooRepository _fooRepo;
    private readonly IBarRepository _barRepo;

    public MyService(IFooRepository fooRepo, IBarRepository barRepo)
    {
        _fooRepo = fooRepo;
        _barRepo = barRepo;
    }

    public OutputModel SomeBusinessMethod(InputModel input)
    {
        // ... use CRUD methods on _fooRepo and _barRepo to define a business operation
    }
}
and the controller will simply take an IMyService in its constructor and use the business operation.
Then everything will be wired by the dependency injection framework of your choice.
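For instance, with a recent StructureMap version the composition root might look like this (FooRepository and BarRepository are assumed concrete implementations; any container works similarly):

using StructureMap;

// Composition root: map interfaces to concrete types once at startup.
var container = new Container(x =>
{
    x.For<IFooRepository>().Use<FooRepository>();
    x.For<IBarRepository>().Use<BarRepository>();
    x.For<IMyService>().Use<MyService>();
});

// The container builds MyService and injects both repositories.
var service = container.GetInstance<IMyService>();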
I have a situation where a client is requiring that we implement our data access code to use either an Oracle or SQL server database based on a runtime configuration settings. The production environment uses Oracle but both dev and QA are running against a SQL Server instance.
(I don't have any control over this or have any background on why this is the case other than Oracle is their BI platform and dev wants to work with SQL Server.)
Their request is to use LINQ-to-SQL / LINQ-to-Oracle for all data access. They will need to support the application and do not have the knowledge to jump into EF yet (their requirement), although I believe the same problem exists if we use EF.
While I can implement LINQ-to-XYZ classes for both databases so that I can connect to both, they don't share a common interface (other than DataContext), so I really can't code against an interface and plug the actual implementation in at runtime.
Any ideas how I should approach this?
UPDATE
After writing this post, I did a little investigating into EF, and it appears to me that the same problem exists if I use EF, which would be my long-term goal.
Just a quick thought: use the MEF framework and plug your DAL layers into it. Then, based on the environment (dev, production, QA), you can switch between the various DAL layers (Oracle, SQL Server, etc.).
If you want to know about MEF, here is a quick intro.
Also, some time back I saw a Generic Data Access Framework by Joydip Kanjilal. You could have a look at that as well.
What you have to do is encapsulate the ORM DataContext in an interface of your own creation, like IDataContext.
Then share this interface between all DALs and implement it. How you plug it in is just your preference, using MEF as suggested or an IoC container.
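Combining the two suggestions, a sketch might look like the following; all type names, contract names, and the factory are illustrative, not an existing library:

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IDataContext
{
    string GetCustomerName(Guid customerId);
}

[Export("SqlServer", typeof(IDataContext))]
public class SqlServerDataContext : IDataContext
{
    public string GetCustomerName(Guid customerId)
    {
        // The LINQ-to-SQL DataContext would be used here.
        throw new NotImplementedException();
    }
}

[Export("Oracle", typeof(IDataContext))]
public class OracleDataContext : IDataContext
{
    public string GetCustomerName(Guid customerId)
    {
        // The LINQ-to-Oracle context would be used here.
        throw new NotImplementedException();
    }
}

public static class DataContextFactory
{
    // providerName would come from the runtime configuration setting.
    public static IDataContext Create(string providerName)
    {
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        var container = new CompositionContainer(catalog);
        return container.GetExportedValue<IDataContext>(providerName);
    }
}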
For the sake of closure on this topic, here is what I ended up doing:
I implemented a combination of the Unit of Work and Repository patterns. The Unit of Work (UoW) class is what consuming code works with and exposes all of the operations that can be performed on my root entities. There is one UoW per root entity. The UoW makes use of a repository class via an interface. The actual implementation of the repository depends on the data access technology being used.
So, for instance, if I have a customer entity and I need to support retrieving and updating each record, I would have something like:
public interface ICustomerManager
{
    ICustomer GetCustomer(Guid customerId);
    void SaveCustomer(ICustomer customer);
}

public class CustomerManager : ICustomerManager
{
    public CustomerManager(ICustomerRepository repository)
    {
        Repository = repository;
    }

    public ICustomerRepository Repository { get; private set; }

    public ICustomer GetCustomer(Guid customerId)
    {
        return Repository.SingleOrDefault(c => c.ID == customerId);
    }

    public void SaveCustomer(ICustomer customer)
    {
        Repository.Save(customer);
    }
}

public interface ICustomerRepository : IQueryable<ICustomer>
{
    void Save(ICustomer customer);
}
I'm using an Inversion of Control framework to inject the ICustomerRepository implementation into the CustomerManager class at runtime. The implementation class will be in a separate assembly that can be swapped out as the data access technology is changed. All we are concerned about is that the repository implements each method using the contract defined above.
As a side note, to do this with LINQ-to-SQL, I simply created a LinqCustomerRepository class that implements ICustomerRepository and added a partial class for the generated Customer entity class that implements ICustomer. Then I can return the L2S entity from the repository as the implementation of the ICustomer interface for the UoW and calling code to work with, and they'll be none the wiser that the entity originated from L2S code.
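To make that last step concrete, here is a sketch of the L2S side; the MyDataContext name and the empty-Guid convention for new records are assumptions, and interface-typed predicates may need care with the L2S query provider:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// The generated entity opts into the domain contract via a partial class.
public partial class Customer : ICustomer { }

public class LinqCustomerRepository : ICustomerRepository
{
    private readonly MyDataContext _context; // L2S DataContext, name assumed
    private readonly IQueryable<ICustomer> _query;

    public LinqCustomerRepository(MyDataContext context)
    {
        _context = context;
        _query = context.Customers.Cast<ICustomer>();
    }

    // IQueryable<ICustomer> members delegate to the underlying L2S table.
    public Type ElementType { get { return _query.ElementType; } }
    public Expression Expression { get { return _query.Expression; } }
    public IQueryProvider Provider { get { return _query.Provider; } }
    public IEnumerator<ICustomer> GetEnumerator() { return _query.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }

    public void Save(ICustomer customer)
    {
        var entity = (Customer)customer;
        if (entity.ID == Guid.Empty)
        {
            _context.Customers.InsertOnSubmit(entity); // new record
        }
        _context.SubmitChanges(); // persists inserts and tracked updates
    }
}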