Recently, while going through Spring's MongoTemplate documentation, I came across repositories (here). Take a CRUD repository, for example: is it a repository made for all your CRUD operations?
Could anyone explain in simpler terms what exactly the purpose of a repository is?
Persisting data is all about CRUD (Create/Read/Update/Delete), but you can use different technologies to implement these operations.
The link you provided happens to choose MongoDB, a popular NoSQL document database. Spring Data can also work with relational SQL databases, object databases, graph databases, etc.
The beauty of designing to an interface, and the true power of Spring, is that you can separate what you need accomplished from the details of how it's done. Spring dependency injection makes it easy to swap in different implementations so you don't have to be bound too tightly to your choice.
Here's a simple generic DAO interface with CRUD operations:
package persistence;

import java.util.List;

public interface GenericDao<K, V> {

    List<V> find();

    V find(K id);

    K save(V value);

    void update(V value);

    void delete(V value);
}
You could have a HibernateGenericDao:
package persistence;
public class HibernateGenericDao<K, V> implements GenericDao<K, V> {
    // Implement the methods using Hibernate here.
}
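Filled in, the stub might look something like the sketch below. The SessionFactory wiring and the K extends Serializable bound are assumptions for this sketch (classic Hibernate identifiers are Serializable), not part of the original stub:

package persistence;

import java.io.Serializable;
import java.util.List;

import org.hibernate.SessionFactory;

public class HibernateGenericDao<K extends Serializable, V> implements GenericDao<K, V> {

    private final SessionFactory sessionFactory;
    private final Class<V> entityClass;

    public HibernateGenericDao(SessionFactory sessionFactory, Class<V> entityClass) {
        this.sessionFactory = sessionFactory;
        this.entityClass = entityClass;
    }

    @Override
    @SuppressWarnings("unchecked")
    public List<V> find() {
        // HQL query returning every instance of the mapped entity.
        return sessionFactory.getCurrentSession()
                .createQuery("from " + entityClass.getName())
                .list();
    }

    @Override
    @SuppressWarnings("unchecked")
    public V find(K id) {
        // Returns null when no row has the given identifier.
        return (V) sessionFactory.getCurrentSession().get(entityClass, id);
    }

    @Override
    @SuppressWarnings("unchecked")
    public K save(V value) {
        // Session.save returns the generated identifier.
        return (K) sessionFactory.getCurrentSession().save(value);
    }

    @Override
    public void update(V value) {
        sessionFactory.getCurrentSession().update(value);
    }

    @Override
    public void delete(V value) {
        sessionFactory.getCurrentSession().delete(value);
    }
}

Because callers depend only on GenericDao, a JDBC- or JPA-based implementation could later be swapped in through Spring configuration without touching them.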
Related
We are working on a RESTful project with lots of DB tables. The operations on the tables are almost all the same, mainly INSERT/UPDATE/DELETE/FETCH.
My question is:
Will we have to create a repository (extending JpaRepository) for every entity (domain class) we create, or is there an option of creating a GenericRepository that can handle all the above-mentioned functionality for all the entities?
I.e., a single GenericRepository for all.
If so, could you share an example?
is [there] an option of creating a GenericRepository that can handle all the above-mentioned functionalities for all the entities?
You are looking at this with a wrong assumption: you are really not supposed to have a repository per table/entity, but one per aggregate root. See "Are you supposed to have one repository per table in JPA?" for more details.
Second: having a generic repository kind of defies the purpose of Spring Data JPA; after all, JPA already has a generic repository. It's called EntityManager. So if you just need the operations you mentioned, injecting an EntityManager should be fine; there is no need to use Spring Data JPA at all. And if you want something between your business code and the JPA specifics, you can wrap it in a simple repository as described by @AlexSalauyou.
One final point: you'll have the code to create all the tables somewhere. You'll also have the code for all the entities, and you'll have the code for testing all this. Is having a trivial interface definition for each entity really going to be a problem?
For insert/update/delete operations, such a repository may be as simple as:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

@Component
public class CommonRepository {

    @PersistenceContext
    EntityManager em;

    @Transactional
    public <E> E insert(E entity) {
        em.persist(entity);
        return entity;
    }

    @Transactional
    public <E> E update(E entity) {
        return em.merge(entity);
    }

    @Transactional
    public void delete(Object entity) {
        em.remove(entity);
    }
}
For more accurate code, refer to Spring's SimpleJpaRepository implementation.
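As a usage sketch, such a repository can then be injected into any service; the OrderService and Order types below are hypothetical:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    @Autowired
    private CommonRepository repository;

    // Order stands in for any mapped JPA entity.
    public Order placeOrder(Order order) {
        return repository.insert(order);
    }
}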
I work on a Java EE application with Spring and JPA (EclipseLink). We developed a user-friendly interface for administrating the database tables. Now that I know more about Spring and transactions, I have decided to refactor my code to add better transaction management. The question is how best to deal with generic DAOs, generic services and Spring transactions.
Our current solution was:
A generic BasicDao which deals with all common database actions (find, create, update, delete, ...)
A DaoFactory which contains a map of BasicDao implementations for all entity types (which only need basic database actions) and which gets the EntityManager injected by Spring to pass on to the DAOs
A generic BasicService which offers the common services (currently mapped directly to the DAO methods)
A ServiceFactory which contains a map of BasicService implementations for all entity types, gets the DaoFactory injected and passes it to the services. It has a getService(Class<T>) method to provide the right service to the controllers.
Controllers corresponding to the entity types, which delegate their requests to a generic controller that handles the request parameters using reflection and retrieves the right service from the service factory's map to call the update/create/remove method.
The problem is that when I add the @Transactional annotations on the generic service and my ServiceFactory creates the typed services in its map, these services don't seem to have active transactions running.
1) Is this normal, due to the genericity and the fact that only Spring-managed services can have transactions?
2) What is the best solution to solve my problem:
Create managed typed services that merely implement the generic service, and inject them directly into my ServiceFactory?
Remove the service layer for these basic services? (But maybe I'll run into the same transaction problem at my generic DAO layer...)
Other suggestions?
I have read some related questions on the web but couldn't find examples that go as far into genericity as this, so I hope somebody can advise me. Thanks in advance!
For basic gets you don't need a service layer.
A service layer is for dealing with multiple aggregate roots, i.e. complex logic involving multiple different entities.
My implementation of a generic repository looks like this:

import java.util.List;

import javax.annotation.Resource;

import org.hibernate.Criteria;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.criterion.Restrictions;
import org.springframework.transaction.annotation.Transactional;

public class DomainRepository<T> {

    @Resource(name = "sessionFactory")
    protected SessionFactory sessionFactory;

    private final Class<T> genericType;

    public DomainRepository(Class<T> genericType) {
        this.genericType = genericType;
    }

    @Transactional(readOnly = true)
    @SuppressWarnings("unchecked")
    public T get(final long id) {
        return (T) sessionFactory.getCurrentSession().get(genericType, id);
    }

    @Transactional(readOnly = true)
    @SuppressWarnings("unchecked")
    public List<T> getFieldEquals(String fieldName, Object value) {
        final Session session = sessionFactory.getCurrentSession();
        final Criteria crit = session.createCriteria(genericType)
                .add(Restrictions.eq(fieldName, value));
        return crit.list();
    }

    // and so on ...
}
with different types instantiated by Spring:
<bean id="tagRepository" class="com.yourcompany.data.DomainRepository">
    <constructor-arg value="com.yourcompany.domain.Tag"/>
</bean>
and it can be referenced like so:
@Resource(name = "tagRepository")
private DomainRepository<Tag> tagRepository;
It can also be extended manually for complex entities, as sketched below.
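For instance, a manual extension might add custom finders on top of the inherited generic methods. This is a sketch, assuming Tag has a mapped name property; the findByNamePrefix method is illustrative, not from the original answer:

import java.util.List;

import org.hibernate.criterion.Restrictions;
import org.springframework.transaction.annotation.Transactional;

public class TagRepository extends DomainRepository<Tag> {

    public TagRepository() {
        super(Tag.class);
    }

    // Custom finder built with the protected sessionFactory inherited
    // from DomainRepository; assumes a mapped "name" property on Tag.
    @Transactional(readOnly = true)
    @SuppressWarnings("unchecked")
    public List<Tag> findByNamePrefix(String prefix) {
        return sessionFactory.getCurrentSession()
                .createCriteria(Tag.class)
                .add(Restrictions.like("name", prefix + "%"))
                .list();
    }
}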
I just switched from ActiveRecord/NHibernate to Dapper. Previously, I had all of my queries in my controllers. However, some properties that were convenient to implement on my models (such as summaries/sums/totals/averages) I could calculate by iterating over instance variables (collections) in the model.
To be specific, my Project has a notion of AppSessions, and I can calculate the total number of sessions, plus the average session length, by iterating over someProject.AppSessions.
Now that I'm using Dapper, this seems muddled: my controller methods make queries to the database via Dapper (which seems okay), but my model class also makes queries to the database via Dapper (which seems strange).
TL;DR: Should the DB access go in my model, or controller, or both? Putting it in both seems incorrect, and I would like to limit it to one "layer" so that changing the DB access style later doesn't have too much impact.
You should consider using a repository pattern:
With repositories, all of the database queries are encapsulated within a repository which is exposed through a public interface, for example:
public interface IGenericRepository<T> where T : class
{
    T Get(object id);
    IQueryable<T> GetAll();
    void Insert(T entity);
    void Delete(T entity);
    void Save(T entity);
}
Then you can inject a repository into a controller:
public class MyController
{
    private readonly IGenericRepository<Foo> _fooRepository;

    public MyController(IGenericRepository<Foo> fooRepository)
    {
        _fooRepository = fooRepository;
    }
}
This keeps the UI free of any DB dependencies and makes testing easier; from unit tests you can inject any mock that implements IGenericRepository<T>. It also allows the repository to implement, and switch between, technologies like Dapper or Entity Framework without any client changes and at any time.
The above example used a generic repository, but you don't have to; you can create a separate interface for each repository, e.g. IFooRepository.
There are many examples and many variations of how the repository pattern can be implemented, so google some more to understand it. Here is one of my favorite articles on layered architectures.
Another note: For small projects, it should be OK to put queries directly into controllers...
I can't speak for Dapper personally, but I've always restricted my DB access to the models, except in very rare circumstances. That seems to make the most sense, in my opinion.
A little more info: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
A model notifies its associated views and controllers when there has been a change in its state. This notification allows the views to produce updated output, and the controllers to change the available set of commands. A passive implementation of MVC omits these notifications, because the application does not require them or the software platform does not support them.
Basically, data access in models seems to be the standard.
I agree with @void-ray regarding the repository pattern. However, if you don't want to get into interfaces and dependency injection, you could still separate out your data access layer and use static methods to return data from Dapper.
When I am using Dapper I typically have a Repository library that returns very small objects or lists that can then be mapped into a ViewModel and passed to the View (the mapping is done by StructureMap, but could be handled in the controller or another helper).
I don't know much about the DDD repository pattern, but the implementation in Spring is confusing me:
public interface PersonRepository extends JpaRepository<Person, Long> { … }
As the interface extends JpaRepository (or MongoDBRepository, etc.), if you change from one DB to another, you have to change the interface as well.
For me, an interface is there to provide some abstraction, but here it isn't very abstract at all...
Do you know why Spring-Data works like that?
You are right: an interface is an abstraction of something that works the same for all implementing classes, from an outside point of view.
And that is exactly what happens here:
JpaRepository is a common view of all your JPA repositories (for all the different entities), while MongoDBRepository is the same for all MongoDB entities.
But JpaRepository and MongoDBRepository have nothing in common, except what is defined in their common super-interfaces:
org.springframework.data.repository.PagingAndSortingRepository
org.springframework.data.repository.Repository
So for me it looks normal.
If you use the classes that implement your repository, then use PagingAndSortingRepository or Repository if you want to be able to switch from a JPA implementation to a document-based one (sorry, but I cannot imagine such a use case anyway). And of course your repository implementation should implement the correct interface (JpaRepository, MongoDBRepository), depending on what it is.
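In other words, a repository that needs to stay portable across stores can extend only the store-neutral interface. A minimal sketch, reusing the Person entity from the question:

import org.springframework.data.repository.CrudRepository;

// Depends only on the store-neutral CrudRepository; whichever Spring Data
// module is on the classpath (JPA, MongoDB, ...) supplies the backing
// implementation.
public interface PersonRepository extends CrudRepository<Person, Long> {
}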
The reasoning behind this is stated pretty clearly in this blog post: http://blog.springsource.com/2011/02/10/getting-started-with-spring-data-jpa/.
Defining this interface serves two purposes: First, by extending JpaRepository we get a bunch of generic CRUD methods into our type that allows saving Accounts, deleting them and so on. Second, this will allow the Spring Data JPA repository infrastructure to scan the classpath for this interface and create a Spring bean for it.
If you do not trust sources so close to the source (pun intended), it might be a good idea to read this post as well: http://www.brucephillips.name/blog/index.cfm/2011/3/25/Using-Spring-Data-JPA-To-Reduced-Data-Access-Coding.
What I did not need to code is an implementation of the PersonRepository interface. Spring will create an implementation of this interface and make a PersonRepository bean available to be autowired into my Service class. The PersonRepository bean will have all the standard CRUD methods (which will be transactional) and return Person objects or collections of Person objects. So by using Spring Data JPA, I've saved writing my own implementation class.
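As a small sketch of the autowiring that post describes (the PersonService class here is hypothetical):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class PersonService {

    // Spring Data generates the PersonRepository implementation and
    // registers it as a bean, so it autowires like any other dependency.
    @Autowired
    private PersonRepository personRepository;

    public Person find(Long id) {
        return personRepository.findOne(id);
    }
}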
Until M2 of Spring Data we required users to extend JpaRepository due to the following reasons:
The classpath scanning infrastructure only picked up interfaces extending that interface. As one might use Spring Data JPA and Spring Data MongoDB in parallel, with both of them pointed to the very same package, it would not be clear which store to create the proxy for. However, since RC1 we simply leave that burden to the developer, as we think it's a rather exotic case; the benefit of just using Repository, CrudRepository or the like outweighs the effort you have to take in the corner case just described. You can use the exclude and include elements in the namespace to gain finer-grained control over this.
Until M2 we had transactionality applied to the CRUD methods by redeclaring them and annotating them with @Transactional. This decision was in turn driven by the algorithm AnnotationTransactionAttributeSource uses to find transaction configuration, as we wanted to give the user the possibility to reconfigure transactions by just redeclaring a CRUD method in the concrete repository interface and applying @Transactional to it. For RC1 we decided to implement a custom TransactionAttributeSource to be able to move the annotations back to the repository CRUD implementation.
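As an illustration of that mechanism, redeclaring a CRUD method lets you attach your own transaction configuration to it; the AccountRepository below and its 10-second timeout are made up for the example:

import org.springframework.data.repository.CrudRepository;
import org.springframework.transaction.annotation.Transactional;

public interface AccountRepository extends CrudRepository<Account, Long> {

    // Redeclaring save(...) overrides the default transaction settings
    // applied by the backing CRUD implementation.
    @Override
    @Transactional(timeout = 10)
    <S extends Account> S save(S entity);
}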
Long story short, here's what it boils down to:
As of RC1 there's no need to extend the store-specific repository interface anymore, unless you want to…
use List-based access to findAll(…) instead of the Iterable-based one in the core repository interfaces (although you could simply redeclare the relevant methods in a common base interface to return Lists as well), or
make use of JPA-specific methods like saveAndFlush(…) and so on.
Generally, you are much more flexible regarding the exposure of CRUD methods since RC1, as you can extend just the Repository marker interface and selectively add the CRUD methods you want to expose. As the backing implementation will still implement all of the methods of PagingAndSortingRepository, we can still route the calls to the instance:
import java.io.Serializable;
import java.util.List;

import org.springframework.data.repository.Repository;

public interface MyBaseRepository<T, ID extends Serializable> extends Repository<T, ID> {
    List<T> findAll();
    T findOne(ID id);
}

public interface UserRepository extends MyBaseRepository<User, Long> {
    List<User> findByUsername(String username);
}
In that example we define MyBaseRepository to expose only findAll() and findOne(…) (which will be routed to the instance implementing the CRUD methods), and the concrete repository adds a finder method on top of the two CRUD ones.
For more details on that topic please consult the reference documentation.
I have a situation where a client is requiring that we implement our data access code to use either an Oracle or a SQL Server database based on a runtime configuration setting. The production environment uses Oracle, but both dev and QA run against a SQL Server instance.
(I don't have any control over this or have any background on why this is the case other than Oracle is their BI platform and dev wants to work with SQL Server.)
Their request is to use LINQ-to-SQL / LINQ-to-Oracle for all data access. They will need to support the application and do not have the knowledge to jump into EF yet (their requirement) - although I believe the same problem exists if we use EF.
While I can implement LINQ-to-XYZ classes for both databases so that I can connect to both, they don't share a common interface (other than DataContext), so I really can't code against an interface and plug the actual implementation in at runtime.
Any ideas how I should approach this?
UPDATE
After writing this post, I did a little investigating into EF and it appears to me that this same problem exists if I use EF - which would be my long term goal.
Just a quick thought: use the MEF framework and plug your DAL layers into it. Then, based on the environment (dev, QA, production), you can switch between the various DAL layers (Oracle, SQL Server, etc.).
If you want to know about MEF, here is a quick intro.
Some time back I also came across a Generic Data Access Framework by Joydip Kanjilal; you could have a look at that as well.
What you have to do is encapsulate the ORM DataContext in an interface of your own creation, like IDataContext.
Then share this interface between all the DALs and implement it in each. How you plug it in is just your preference: using MEF as suggested, or an IoC container.
For the sake of closure on this topic, here is what I ended up doing:
I implemented a combination of the Unit of Work and Repository patterns. The Unit of Work class is what consuming code works with and exposes all of the operations that can be performed on my root entities. There is one UoW per root entity. The UoW makes use of a repository class via an interface. The actual implementation of the repository is dependent on the data access technology being used.
So, for instance, if I have a customer entity and I need to support retrieving and updating each record, I would have something like:
public interface ICustomerManager
{
    ICustomer GetCustomer(Guid customerId);
    void SaveCustomer(ICustomer customer);
}

public class CustomerManager : ICustomerManager
{
    public CustomerManager(ICustomerRepository repository)
    {
        Repository = repository;
    }

    public ICustomerRepository Repository { get; private set; }

    public ICustomer GetCustomer(Guid customerId)
    {
        return Repository.SingleOrDefault(c => c.ID == customerId);
    }

    public void SaveCustomer(ICustomer customer)
    {
        Repository.Save(customer);
    }
}

public interface ICustomerRepository : IQueryable<ICustomer>
{
    void Save(ICustomer customer);
}
I'm using an Inversion of Control framework to inject the ICustomerRepository implementation into the CustomerManager class at runtime. The implementation class will be in a separate assembly that can be swapped out as the data access technology is changed. All we are concerned about is that the repository implements each method using the contract defined above.
As a side note, to do this with LINQ-to-SQL, I simply created a LinqCustomerRepository class that implements ICustomerRepository, and added a partial class for the generated Customer entity class that implements ICustomer. Then I can return the L2S entity from the repository as the implementation of the ICustomer interface for the UoW and calling code to work with, and they'll be none the wiser that the entity originated from L2S code.