I'm currently thinking about how I handle my domain objects with Hibernate, considering the following:
My model objects are directly annotated with JPA annotations; there is no separate entity layer.
For some database-heavy operations, I don't mind tuning my code to take full advantage of the proxies, even if that can be considered a leak of abstraction/implementation masking. Of course I prefer to avoid it when I can.
Because I don't have an entity layer, I don't have a DAO layer; the entity manager is itself considered the DAO layer (related: I found JPA, or alike, don't encourage DAO pattern).
However, I was thinking about improving what I'm doing now in order to reduce the complexity a bit, or at least relocate that complexity to a place where it fits better, such as the entity's related service, and maybe better abstract away the fact that I'm using an ORM.
Here is the generic CRUD service from which all my business services inherit. This code shows how things are done currently (annotations and logs removed for clarity):
public void create(T entity) {
    this.entityManager.persist(entity);
}

@Transactional(value = TxType.REQUIRED, rollbackOn = NumeroVersionException.class)
public void update(T entity) throws NumeroVersionException {
    try {
        this.entityManager.merge(entity);
    } catch (OptimisticLockException ole) {
        throw new NumeroVersionException("for entity " + entity, ole);
    }
}

public T read(int id) {
    return this.entityManager.find(entityClass, id);
}

public void delete(int id) {
    T entity = this.entityManager.getReference(entityClass, id);
    this.entityManager.remove(entity);
    // edit: removed null test thanks to @JBNizet
}
The problem with this kind of implementation is that if I want to create an object and then use the advantages of the proxies, I basically have to create it and then refetch it. The query may not hit the database but only Hibernate's cache (I'm not sure about that, though), but it still means I have to remember to refetch the proxy.
This means I leak the fact that I'm using an ORM and proxies behind the scenes.
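Concretely, with the current interface, calling code ends up looking something like this (Sample and its accessors are hypothetical):

Sample sample = new Sample();
service.create(sample);
// easy to forget: refetch so that subsequent code works against
// a managed instance/proxy instead of the original object
sample = service.read(sample.getId());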
So I was thinking of changing my interface to something like:

public T read(int id);
public T update(T t) throws NumeroVersionException;
public T create(T object);
public void delete(int id);
List<T> list();

Meaning that once I pass an object to this layer, I have to use the returned value.
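In other words, callers would always reassign the result, along these lines (again with a hypothetical Sample entity):

Sample sample = sampleService.create(new Sample());
sample = sampleService.update(sample); // keep working with the returned, managed instance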
And implement update specifically, something like this (using Hibernate's org.hibernate.proxy.HibernateProxy to detect proxies):

public T update(T t) throws NumeroVersionException {
    // only merge when t is not a proxy, or is a detached proxy;
    // a proxy that is still managed needs no merge
    if (!(t instanceof HibernateProxy) || !entityManager.contains(t)) {
        return entityManager.merge(t);
    }
    return t;
}
Since merge can hit the database every time it is called, this would be annoying for operations involving just ten or so entities, so I wouldn't want to call it in an update method when I'm handed a proxy that is still managed.
Of course I expect some edge cases where I'll need the entityManager to flush things and so on. But I think this would significantly reduce the current complexity of my code and better isolate the concerns.
In short, I'm trying to relocate the ORM code inside the service so I can hide the fact that I'm using an ORM and proxies, and use the interface as I would use any other implementation, without losing the benefits of the ORM.
So the questions are:
Is this new design a good step towards that goal?
Did I miss anything about how to handle this properly?
Note: even though I'm talking about performance, my concern is also separation of concerns, maintainability, and easier usability for the developers I work with who aren't familiar with ORMs and Java.
Thanks to @JBNizet I'm seeing some things more clearly:
I should use the value returned by the merge() method.
A managed entity is not always a proxy.
I don't have to abstract away the fact that I use managed entities; that would lead to complex and inefficient code.
I chose JPA and I won't switch away from it, which holds true unless I rewrite the full model for something based on a non-relational database.
So I'll just change my update method from the original code and keep the rest.
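For reference, a minimal sketch of the reworked update, now using the value returned by merge() (same annotations as in the original code):

@Transactional(value = TxType.REQUIRED, rollbackOn = NumeroVersionException.class)
public T update(T entity) throws NumeroVersionException {
    try {
        // merge() returns the managed instance; hand that back to the caller
        return this.entityManager.merge(entity);
    } catch (OptimisticLockException ole) {
        throw new NumeroVersionException("for entity " + entity, ole);
    }
}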
Related
I want to have a service which keeps a list in memory so I don't need to access the database every time. The service is accessed by a controller. Is this a valid approach, or am I missing something? What about concurrent access here (from the controller)? Is this (a stateful service) an anti-pattern?
@Service
public class ServiceCached {

    private List<SomeObject> someObjects;

    @PostConstruct
    public void initOnce() {
        someObjects = /** longer running loading method **/
    }

    public List<SomeObject> retrieveObjects() {
        return someObjects;
    }
}
Thanks!
I wouldn't call it an anti-pattern, but in my opinion loading the list from the database in a @PostConstruct method is not a good idea, as it slows down the startup of your application. I'd rather use a lazy-loading mechanism, but that would potentially introduce some concurrent-access issues that would need to be handled.
In your example, concurrent access from the controller should not be a problem: the list is loaded in a @PostConstruct method and the controller depends on this service, so the service has to be fully constructed before it is injected into the controller, and therefore the list is already loaded.
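For illustration, a thread-safe lazy-loading variant could look like the sketch below (loadObjects() is a stand-in for the longer-running loading method):

@Service
public class ServiceLazyCached {

    // volatile + double-checked locking: the list is loaded at most once,
    // even under concurrent access from multiple request threads
    private volatile List<SomeObject> someObjects;

    public List<SomeObject> retrieveObjects() {
        List<SomeObject> result = someObjects;
        if (result == null) {
            synchronized (this) {
                if (someObjects == null) {
                    someObjects = loadObjects();
                }
                result = someObjects;
            }
        }
        return result;
    }

    private List<SomeObject> loadObjects() {
        // replace with the longer-running loading method from the example above
        return new ArrayList<>();
    }
}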
Preferably I'd suggest using Spring Caching: Caching Data with Spring, Documentation, Useful guide
Usage example:
@Cacheable("books")
public Book getByIsbn(String isbn) {
    simulateSlowService();
    return new Book(isbn, "Some book");
}
This way you do not need to take care of loading and evicting the objects yourself; once set up, the caching framework takes care of that for you.
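If the cached data can change, eviction is declarative as well, e.g. (updateBook and bookRepository are hypothetical):

@CacheEvict(value = "books", allEntries = true)
public void updateBook(Book book) {
    // the annotation clears the "books" cache, so the next
    // getByIsbn() call reloads fresh data from the database
    bookRepository.save(book); // hypothetical persistence call
}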
I am wondering if there is a way to wrap all argument resolvers, e.g. for @PathVariables or @ModelAttributes, in one single transaction? We are already using the OEMIV filter, but Spring/Hibernate is spawning too many transactions (one per select whenever the selects are not wrapped within a service class, which is the case in path-variable resolvers, for example).
While the system is still pretty fast, I think this is unnecessary and inconsistent with the rest of the architecture.
Let me explain:
Let's assume that I have a request mapping involving two entities, where the conversion is based on a StringToEntityConverter.
The actual URL would look like this if we supported GET: http://localhost/app/link/User_231/Item_324
#RequestMapping("/link/{user}/{item}", method="POST")
public String linkUserAndItem(#PathVariable("user") User user, #PathVariable("item") Item item) {
userService.addItem(user, item);
return "linked";
}
@Converter
// simplified
public Object convert(String classAndId) {
    return entityManager.find(getClass(classAndId), getId(classAndId));
}
The UserService.addItem() method is transactional, so there is no issue there.
BUT:
The entity converter resolves the User and the Item against the database before the controller is called, thus creating two selects, each running in its own transaction. Then we have @ModelAttribute methods which might also issue selects, and each of them spawns a transaction as well.
And this is what I would like to change: I would like to create ONE read-only transaction around all of it.
I was not able to find any way to intercept/listen/etc. by means of Spring.
First I wanted to override the RequestMappingHandlerAdapter, but the resolver calls are well "hidden" inside the invokeHandleMethod method...
The ModelFactory is not a Spring bean, so I cannot write an interceptor either.
So currently I only see a way by completely replacing the RequestMappingHandlerAdapter, but I would really like to avoid that.
Any ideas?
This seems like a design failure to me. OEMIV is usually a sign that you're doing it wrong™.
Instead, do:
#RequestMapping("/link/User_{userId}/Item_{itemId}", method="POST")
public String linkUserAndItem(#PathVariable("userId") Long userId,
#PathVariable("itemId") Long itemId) {
userService.addItem(userId, itemId);
return "linked";
}
Where your service layer takes care of fetching and manipulating the entities. This logic doesn't belong in the controller.
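A sketch of such a service method, assuming a mapped Item collection on User exposed via getItems() (both entities as in the question):

@Service
public class UserService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void addItem(Long userId, Long itemId) {
        // both selects and the association update now run inside this
        // single transaction, instead of one transaction per resolver
        User user = entityManager.find(User.class, userId);
        Item item = entityManager.find(Item.class, itemId);
        user.getItems().add(item); // assumes a User -> Item collection mapping
    }
}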
I'm a beginner with Hibernate 4 & Spring 3.2.
I have read some tutorials and discussions on Stack, but I didn't find a clear answer to my questions. And I think the best way to understand is to ask and share knowledge!
Here we go!
So each time you create a POJO, a DAO, and a service class, with methods annotated @Transactional. That's OK. I'm using SessionFactory to handle my transactions, and I'm looking for good practices.
1- If you want to use the delete method and the save method from the same service, how do you make them work in a same transaction? When I look at the log, each method is executed in a different transaction.
This is SampleServiceImpl:

@Transactional
public void save(Sample sample){
    sampleDao.save(sample);
}

@Transactional
public void delete(Sample sample){
    sampleDao.delete(sample);
}

// A solution could be this, but it's not very clean... there should be another way, no?
@Transactional
public void action(Sample sample){
    sampleDao.save(sample);
    sampleDao.delete(sample);
}
2- If you want to use the delete method and the save method from different service classes, how do you make them work in a same transaction, given that each method in each service class is handled by a @Transactional annotation? Do you create a global service calling all the sub-services in one method annotated @Transactional?
SampleServiceImpl:

@Transactional
public void save(Sample sample){
    sampleDao.save(sample);
}

ParticipantServiceImpl:

@Transactional
public void save(Participant participant){
    participantDao.save(participant);
}

// A solution could be this, but it's not very clean... there should be another way, no?

GlobalServiceImpl:

@Transactional
public void save(Participant participant, Sample sample){
    participantDao.save(participant);
    sampleDao.save(sample);
}
3- And the last question, but not the least: if you want to use several methods from several services in one global transaction, how is that possible? Imagine you want to fill up 5 or more tables in one execution of a standalone program. Each service has its own transactional methods, so each time you call one of those methods there is a separate transaction.
a- I have successfully managed to fill up two tables in a single transaction using Mkyong's tutorial and the cascade property in the mapping. So I see how to make it work for one table directly joined to one or more other tables.
b- But if you have 3 tables, Participant -> Samples -> Derived Products, how do you fill up the three tables in a same transaction?
I don't know if I'm clear, but I would appreciate some help or examples on that from advanced users.
Thanks a lot for your time.
Your solution is fine. Alternatively, this may work if you want to use nested transactional methods (note: I saw this solution a couple of days ago and didn't test it):

<tx:annotation-driven mode="aspectj"/>
<context:load-time-weaver aspectj-weaving="on"/>

@Transactional
public void action(Sample sample){
    save(sample);
    delete(sample);
}

The transaction should propagate. (In the default proxy mode, calling save() and delete() from action() is a self-invocation that bypasses the Spring proxy, which is why AspectJ weaving is suggested here.)
GlobalServiceImpl:

@Transactional
public void save(Participant participant, Sample sample){
    participantDao.save(participant);
    sampleServiceImpl.save(sample);
}
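This works because REQUIRED is the default propagation: the inner @Transactional save() joins the transaction opened by the outer method instead of starting a new one. The same idea scales to question 3's case of more tables; a sketch, with derivedProductDao and DerivedProduct as made-up names:

@Transactional
public void saveAll(Participant participant, Sample sample, DerivedProduct product) {
    // one transaction spanning three tables: if any save fails,
    // all three inserts are rolled back together
    participantDao.save(participant);
    sampleDao.save(sample);
    derivedProductDao.save(product); // hypothetical third DAO
}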
The approach you are following is the cleaner approach.
Service objects are meant to contain business logic, hence they will always manipulate data objects.
What we do in practice is create another layer that uses the data objects and other functional calls of the same layer. This whole business layer is then called via the service layer carrying the @Transactional annotation.
Can you please mention why you think this approach is dirty?
I just switched from ActiveRecord/NHibernate to Dapper. Previously, I had all of my queries in my controllers. However, some properties that were convenient to implement on my models (such as summaries/sums/totals/averages) could be calculated by iterating over instance variables (collections) in my model.
To be specific, my Project has a notion of AppSessions, and I can calculate the total number of sessions, plus the average session length, by iterating over someProject.AppSessions.
Now that I'm in Dapper, this seems confused: my controller methods now make queries to the database via Dapper (which seems okay), but my model class also makes queries to the database via Dapper (which seems strange).
TLDR: Should the DB access go in my model, or controller, or both? It seems that both is not correct, and I would like to limit it to one "layer" so that changing DB access style later doesn't impact too much.
You should consider using a repository pattern:
With repositories, all of the database queries are encapsulated within a repository, which is exposed through a public interface, for example:
public interface IGenericRepository<T> where T : class
{
    T Get(object id);
    IQueryable<T> GetAll();
    void Insert(T entity);
    void Delete(T entity);
    void Save(T entity);
}
Then you can inject a repository into a controller:
public class MyController
{
    private readonly IGenericRepository<Foo> _fooRepository;

    public MyController(IGenericRepository<Foo> fooRepository)
    {
        _fooRepository = fooRepository;
    }
}
This keeps the UI free of any DB dependencies and makes testing easier; from unit tests you can inject any mock that implements IGenericRepository<T>. It also allows the repository to implement and switch between technologies like Dapper or Entity Framework without any client changes, at any time.
The above example used a generic repository, but you don't have to; you can create a separate interface for each repository, e.g. IFooRepository.
There are many examples and many variations of how the repository pattern can be implemented, so google some more to understand it. Here is one of my favorite articles on layered architectures.
Another note: For small projects, it should be OK to put queries directly into controllers...
I can't speak for Dapper personally, but I've always restricted my DB access to models only, except in very rare circumstances. That seems to make the most sense in my opinion.
A little more info: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
A model notifies its associated views and controllers when there has been a change in its state. This notification allows the views to produce updated output, and the controllers to change the available set of commands. A passive implementation of MVC omits these notifications, because the application does not require them or the software platform does not support them.
Basically, data access in models seems to be the standard.
I agree with @void-ray regarding the repository pattern. However, if you don't want to get into interfaces and dependency injection, you could still separate out your data access layer and use static methods to return data from Dapper.
When I am using Dapper, I typically have a repository library that returns very small objects or lists that can then be mapped into a ViewModel and passed to the view (the mapping is done by StructureMap, but it could be handled in the controller or another helper).
I have a situation where a client requires that we implement our data access code to work against either an Oracle or a SQL Server database based on a runtime configuration setting. The production environment uses Oracle, but both dev and QA run against a SQL Server instance.
(I don't have any control over this, or any background on why this is the case, other than that Oracle is their BI platform and dev wants to work with SQL Server.)
Their request is to use LINQ-to-SQL / LINQ-to-Oracle for all data access. They will need to support the application and do not have the knowledge to jump into EF yet (their requirement), although I believe the same problem exists if we use EF.
While I can implement LINQ-to-XYZ classes for both databases so that I can connect to either one, they don't share a common interface (other than DataContext), so I really can't code against an interface and plug the actual implementation in at runtime.
Any ideas how I should approach this?
UPDATE
After writing this post, I did a little investigating into EF, and it appears to me that this same problem exists if I use EF, which would be my long-term goal.
Just a quick thought: use the MEF framework and plug your DAL layers into it. Then, based on the environment (dev, production, QA), you can switch between the various DAL layers (Oracle, SQL Server, etc.).
If you want to know about MEF, here is a quick intro.
Also, some time back I saw a Generic Data Access Framework by Joydip Kanjilal. You can have a look at that as well.
What you have to do is encapsulate the ORM DataContext in an interface of your own creation, like IDataContext.
Then share this interface between all DALs and implement it there. How you plug it in is just your preference: using MEF as suggested, or an IoC container.
For the sake of closure on this topic, here is what I ended up doing:
I implemented a combination of the Unit of Work and Repository patterns. The Unit of Work class is what consuming code works with and exposes all of the operations that can be performed on my root entities. There is one UoW per root entity. The UoW makes use of a repository class via an interface. The actual implementation of the repository is dependent on the data access technology being used.
So, for instance, if I have a customer entity and I need to support retrieving and updating each record, I would have something like:
public interface ICustomerManager
{
    ICustomer GetCustomer(Guid customerId);
    void SaveCustomer(ICustomer customer);
}

public class CustomerManager : ICustomerManager
{
    public CustomerManager(ICustomerRepository repository)
    {
        Repository = repository;
    }

    public ICustomerRepository Repository { get; private set; }

    public ICustomer GetCustomer(Guid customerId)
    {
        return Repository.SingleOrDefault(c => c.ID == customerId);
    }

    public void SaveCustomer(ICustomer customer)
    {
        Repository.Save(customer);
    }
}

public interface ICustomerRepository : IQueryable<ICustomer>
{
    void Save(ICustomer customer);
}
I'm using an Inversion of Control framework to inject the ICustomerRepository implementation into the CustomerManager class at runtime. The implementation class will be in a separate assembly that can be swapped out as the data access technology is changed. All we are concerned about is that the repository implements each method using the contract defined above.
As a side note, to do this with Linq-to-SQL, I simply created a LinqCustomerRepository class that implements ICustomerRepository and added a partial class for the generated Customer entity class that implements ICustomer. Then I can return the L2S entity from the repository as the implementation of the ICustomer interface for the UoW and calling code to work with and they'll be none the wiser that the entity originated from L2S code.