JPA and DAO - what's the standard approach?

I'm developing my first app with JPA/Hibernate and Spring. My first attempt at a DAO class looks like this:
@Repository(value = "userDao")
public class UserDaoJpa implements UserDao {

    @PersistenceContext
    private EntityManager em;

    public User getUser(Long id) {
        return em.find(User.class, id);
    }

    public List<User> getUsers() {
        TypedQuery<User> query = em.createQuery("select e from User e", User.class);
        return query.getResultList();
    }
}
I also found some examples using JpaDaoSupport and JpaTemplate. Which design do you prefer? Is there anything wrong with my example?

I'd say your approach looks totally sound. Personally I don't use JpaDaoSupport or JpaTemplate because you can do everything you need with the EntityManager and Criteria Queries.
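For instance, the getUsers() query above could equally be written against the injected EntityManager with the Criteria API (a minimal sketch, assuming the same User entity and DAO as in the question):
// Criteria API equivalent of "select e from User e"
public List<User> getUsers() {
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<User> cq = cb.createQuery(User.class);
    cq.select(cq.from(User.class));
    return em.createQuery(cq).getResultList();
}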
Quote from the JavaDoc of JpaTemplate:
JpaTemplate mainly exists as a sibling of JdoTemplate and HibernateTemplate, offering the same style for people used to it. For newly started projects, consider adopting the standard JPA style of coding data access objects instead, based on a "shared EntityManager" reference injected via a Spring bean definition or the JPA PersistenceContext annotation.

I prefer the template-less approach (i.e. your current approach) because
it's less invasive, you don't tie DAOs to Spring
templates don't offer much value with APIs that use unchecked exceptions
And this is the Spring recommendation, as summarized in the blog post "So should you still use Spring's HibernateTemplate and/or JpaTemplate??" and the official javadoc:
The real question is: which approach to choose?
(...)
So in short (as the JavaDoc for HibernateTemplate and JpaTemplate already mentions) I'd recommend you to start using the Session and/or EntityManager API directly if you're starting to use Hibernate or JPA respectively on a new project. Remember: Spring tries to be non-invasive, this is another great example!

I personally prefer your approach: inject the EntityManager and use it directly. JpaTemplate is also an option, but I don't like it because it adds yet another, unnecessary layer of abstraction.

I don't know if there's a "standard" approach.
If you're using JPA, you have your choice of implementations: Hibernate, TopLink, etc.
If you deploy to Google App Engine, you'll use JPA talking to BigTable.
So if your objective is to maximize portability and stick to the JPA standard rather than tie yourself to a particular implementation like Hibernate, make sure that your DAOs use only JPA constructs.

Related

spring boot jpa open-in-view false. How to convert existing application to not use OSIV?

There are a lot of articles about why not to use OSIV in production. Unfortunately, my app is finished and I used open-in-view: true throughout development because that's the default setting and I did not know this. Please, could you give me advice on the easiest way to convert the whole application?
Should I use
@PersistenceContext
private EntityManager em;
in every controller and call a native query?
Or do you have some example of a Spring Boot application without OSIV? Thank you
That's no easy task. If you rely on lazy loading outside of the data layer, you will have to rework the data layer to fit all those needs. The easiest way is to use @EntityGraph on your repository methods to do the fetching of associations. Sometimes you will have to duplicate methods for different use cases to apply different @EntityGraph annotations. There are still some issues you can run into with such a design, but this should get you pretty far already.
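For illustration, a repository method with an entity graph could look roughly like this (a sketch; the Order entity and its items association are assumptions, not taken from the question):
public interface OrderRepository extends JpaRepository<Order, Long> {

    // Fetches the items association together with the order, so nothing
    // needs to be lazily loaded after the persistence context is closed
    @EntityGraph(attributePaths = "items")
    Optional<Order> findWithItemsById(Long id);
}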
IMO the best solution is to use DTOs, as this will improve performance when done right and eliminates all lazy loading issues. The "problem", though, is that this approach might require quite a few changes in your application.
Either way, I would recommend you take a look at what Blaze-Persistence Entity Views has to offer as a way to implement a DTO approach.
I created the library to allow easy mapping between JPA models and custom interface or abstract class defined models, something like Spring Data Projections on steroids. The idea is that you define your target structure (domain model) the way you like and map attributes (getters) via JPQL expressions to the entity model.
A DTO model could look like the following with Blaze-Persistence Entity-Views:
@EntityView(User.class)
public interface UserDto {
    @IdMapping
    Long getId();
    String getName();
    Set<RoleDto> getRoles();

    @EntityView(Role.class)
    interface RoleDto {
        @IdMapping
        Long getId();
        String getName();
    }
}
Querying is a matter of applying the entity view to a query, the simplest being just a query by id.
UserDto a = entityViewManager.find(entityManager, UserDto.class, id);
The Spring Data integration allows you to use it almost like Spring Data Projections: https://persistence.blazebit.com/documentation/entity-view/manual/en_US/index.html#spring-data-features
UserDto findOne(Long id);
Using Entity-Views will improve performance immediately as it will only fetch the state that is necessary.

Filter entities in spring repository

Is it possible to apply a filter to results with annotations instead of extending the method name?
For instance:
@Repository
public interface JobRepository extends JpaRepository<Job, Long> {
    List<Job> findAllByUserAndEnabledIsTrue(User u);
}
Here I apply the filter 'enabled == true'. But assume we have a lot of methods; writing them all with extended names is inconvenient. Could I apply this filter to the whole repository?
I found @FilterDef, but I don't know how to use it, or whether Spring supports this annotation.
As far as I know Spring Data JPA is not Hibernate-dependent; it can work with any JPA implementation. Hibernate's @Filter is not a JPA standard, so the simple answer is no: Spring Data JPA does not support @Filter.
But you can apply @Filter using AOP, simply by applying aspects to your repository methods.
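A rough sketch of that idea, assuming a Hibernate @FilterDef/@Filter named enabledFilter (with the condition enabled = true) is declared on the Job entity, and that the repository lives in a package such as com.example:
@Aspect
@Component
public class EnabledFilterAspect {

    @PersistenceContext
    private EntityManager em;

    // Enable the Hibernate filter before any repository method runs
    @Before("execution(* com.example.JobRepository.*(..))")
    public void enableFilter() {
        em.unwrap(Session.class).enableFilter("enabledFilter");
    }
}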
By the way, I believe the better solution is to write the queries by hand using Spring Data JPA's @Query annotation, because that way you can name methods after their contextual meaning rather than their internal implementation.
For example, you can name your method findActiveJobsForUser, which is more meaningful and readable.
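As a sketch of that idea, using the JobRepository from the question (the JPQL assumes Job has user and enabled properties, as the derived method name suggests):
@Repository
public interface JobRepository extends JpaRepository<Job, Long> {

    // The filter lives in the query; the method name describes the intent
    @Query("select j from Job j where j.user = :user and j.enabled = true")
    List<Job> findActiveJobsForUser(@Param("user") User user);
}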

DataRepository and mybatis support

I have followed the guide react-js-and-spring-data-rest.
https://spring.io/blog/2015/10/28/react-js-and-spring-data-rest-part-5-security
This tutorial uses JPA/Hibernate. I really like the React/API design, but I don't want to use a JPA/Hibernate DAO; I would like to use MyBatis.
Is there a way to use a Spring DataRepository with MyBatis?
As far as I know that is not possible. You can, however, use the MyBatis-Spring-Boot-Starter integration, which is not far from Spring Data repositories, although it is not quite as simple as a DataRepository can be. For example, one mapper would be:
@Mapper
public interface CityMapper {

    @Select("SELECT * FROM CITY WHERE state = #{state}")
    City findByState(@Param("state") String state);
}
Then you can inject it as a bean without writing an implementation:
@Autowired
private CityMapper cityMapper;
Unfortunately, you have to write all the CRUD operations for each entity yourself... this can be tedious, whereas Spring Data repositories give you them for free.
The examples are in the MyBatis reference documentation, where they are explained in much more detail than here.
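As an illustration of the hand-written CRUD point, a mapper for the same City entity might look roughly like this (a sketch; the column and property names are assumptions):
@Mapper
public interface CityCrudMapper {

    @Insert("INSERT INTO CITY (name, state) VALUES (#{name}, #{state})")
    void insert(City city);

    @Update("UPDATE CITY SET name = #{name}, state = #{state} WHERE id = #{id}")
    void update(City city);

    @Delete("DELETE FROM CITY WHERE id = #{id}")
    void deleteById(Long id);

    @Select("SELECT * FROM CITY WHERE id = #{id}")
    City findById(Long id);
}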

How to manage transactions with JAX-RS, Spring and JPA

I'm using JAX-RS to provide an HTTP-based interface to manage a data model. The data model is stored in a database and interacted with via JPA.
This allows me to modify the interface to the data model to suit REST clients and mostly seems to work quite well. However, I'm not sure how to handle the scenario where a method provided by a JAX-RS resource requires a transaction, which affects the JPA get, update, commit-on-tx-end pattern, because there is only a transaction wrapping the get operation, so the update is never committed. I can see the same problem occurring if a single REST operation requires multiple JPA operations.
As I'm using Spring's transaction support, the obvious thing to do is to apply @Transactional to these methods in the JAX-RS resources. However, in order for this to work, Spring needs to manage the lifecycle of the JAX-RS resources, and the usage examples I'm aware of have resources being created via `new` when needed, which makes me a little nervous anyway.
I can think of the following solutions:
update my JPA methods to provide a transaction-managed version of everything I want to do from my REST interface atomically. Should work, keeps transactions out of the JAX-RS layer, but prevents the get, update, commit-on-tx-end pattern and means I need to create a very granular JPA interface.
Inject resource objects; but they are typically stateful, holding at least the ID of the object being interacted with
Ditch the hierarchy of resources and inject big, stateless super resources at the root that manage the entire hierarchy from that root; not cohesive, big services
Have a hierarchy of injected, stateless, transaction-supporting helper objects that 'shadow' the actual resources; the resources are instantiated and hold the state but delegate method invocations to the helper objects
Anyone got any suggestions? It's quite possible I've missed some key point somewhere.
Update - to work around the lack of a transaction around the get, update, commit-on-tx-close flow, I can expose the EntityManager merge(object) method and call it manually. Not neat and doesn't solve the larger problem though.
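That workaround amounts to something like the following (a sketch, assuming the JPA service exposes a transactional merge method):
// In the JPA service layer (Spring-managed, so the annotation is effective)
@Transactional
public MyEntity merge(MyEntity entity) {
    return em.merge(entity);
}

// In the JAX-RS resource: load, modify, then explicitly merge the change back
public void updateMyEntity(final String id, final MyEntityRepresentation rep) {
    MyEntity entity = jpa.getMyEntity(id);
    entity.setSomeField(rep.getSomeField());
    jpa.merge(entity); // the change is persisted even though the resource itself is not transactional
}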
Update 2 @skaffman
Code example:
In the JPA service layer (injected, so annotations work):
public class MyEntityJPAService {
    ...
    @Transactional(readOnly = true) // do in transaction
    public MyEntity getMyEntity(final String id) {
        return em.find(MyEntity.class, id);
    }
}
In the JAX-RS resource (created by new, so no transactions):
public class MyEntityResource {
    ...
    private MyEntityJPAService jpa;
    ...
    @Transactional // not injected, so not effective
    public void updateMyEntity(final String id, final MyEntityRepresentation rep) {
        MyEntity entity = jpa.getMyEntity(id);
        entity.setSomeField(rep.getSomeField());
        // no transaction commit, change not saved...
    }
}
I have a few suggestions
Introduce a layer between your JPA and JAX-RS layers. This layer would consist of Spring-managed @Transactional beans and would compose the various business-level operations from their component JPA calls (see the sketch after this list). This is somewhat similar to your (1), but keeps the JPA layer simple.
Replace JAX-RS with Spring MVC, which provides the same (or similar) functionality, including @PathVariable, @ResponseBody, etc.
Programmatically wrap your JAX-RS objects in transactional proxies using TransactionProxyFactoryBean. This would detect your @Transactional annotations and generate a proxy that honours them.
Use @Configurable and AspectJ load-time weaving to allow Spring to honour @Transactional even if you create the object using `new`. See 8.8.1 Using AspectJ to dependency inject domain objects with Spring.
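A minimal sketch of the first suggestion, reusing the MyEntityJPAService and representation types from the question (the service class itself is a hypothetical name):
// Spring-managed, so @Transactional is honoured; composes the JPA calls into one atomic operation
@Service
public class MyEntityService {

    @Autowired
    private MyEntityJPAService jpa;

    @Transactional
    public void updateSomeField(final String id, final MyEntityRepresentation rep) {
        MyEntity entity = jpa.getMyEntity(id);
        entity.setSomeField(rep.getSomeField());
        // the managed entity is flushed and the change committed when the transaction ends
    }
}
The JAX-RS resource then simply delegates to this bean instead of touching the JPA layer directly.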

Confused about Spring-Data DDD repository pattern

I don't know much about the DDD repository pattern, but the implementation in Spring is confusing me.
public interface PersonRepository extends JpaRepository<Person, Long> { … }
As the interface extends JpaRepository (or MongoDBRepository...), if you change from one db to another, you have to change the interface as well.
For me an interface is there to provide some abstraction, but here it isn't much of an abstraction...
Do you know why Spring-Data works like that?
You are right: an interface is an abstraction of something that works the same for all implementing classes, from an outside point of view.
And that is exactly what happens here:
JpaRepository is a common view of all your JPA Repositories (for all the different Entities), while MongoDBRepository is the same for all MongoDB Entities.
But JpaRepository and MongoDBRepository have nothing in common, except the stuff that is defined in their common super-interfaces:
org.springframework.data.repository.PagingAndSortingRepository
org.springframework.data.repository.Repository
So for me it looks normal.
If you want to be able to switch from a JPA implementation to a document-based implementation (sorry, but I cannot imagine such a use case anyway), make the code that uses your repository depend on PagingAndSortingRepository or Repository instead. And of course your repository implementation should implement the correct interface (JpaRepository, MongoDBRepository) depending on what it is.
The reasoning behind this is pretty clearly stated in this blog post http://blog.springsource.com/2011/02/10/getting-started-with-spring-data-jpa/.
Defining this interface serves two purposes: First, by extending JpaRepository we get a bunch of generic CRUD methods into our type that allows saving Accounts, deleting them and so on. Second, this will allow the Spring Data JPA repository infrastructure to scan the classpath for this interface and create a Spring bean for it.
If you do not trust sources so close to the source (pun intended) it might be a good idea to read this post as well http://www.brucephillips.name/blog/index.cfm/2011/3/25/Using-Spring-Data-JPA-To-Reduced-Data-Access-Coding.
What I did not need to code is an implementation of the PersonRepository interface. Spring will create an implementation of this interface and make a PersonRepository bean available to be autowired into my Service class. The PersonRepository bean will have all the standard CRUD methods (which will be transactional) and return Person objects or collection of Person objects. So by using Spring Data JPA, I've saved writing my own implementation class.
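For example, the generated repository bean can then be injected and used like any other bean (a sketch; the PersonService class is a hypothetical name, not from the post):
@Service
public class PersonService {

    @Autowired
    private PersonRepository personRepository;

    // Uses the CRUD methods Spring Data generates; no hand-written implementation needed
    public Person register(Person person) {
        return personRepository.save(person);
    }
}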
Until M2 of Spring Data we required users to extend JpaRepository for the following reasons:
The classpath scanning infrastructure only picked up interfaces extending that interface. As one might use Spring Data JPA and Spring Data Mongo in parallel and have both of them pointed at the very same package, it would not be clear which store to create the proxy for. Since RC1, however, we simply leave that burden to the developer, as we think it's a rather exotic case; the benefit of just using Repository, CrudRepository or the like outweighs the effort you have to take in the corner case just described. You can use exclude and include elements in the namespace to gain finer-grained control over this.
Until M2 we had transactionality applied to the CRUD methods by redeclaring the CRUD methods and annotating them with @Transactional. This decision in turn was driven by the algorithm AnnotationTransactionAttributeSource uses to find transaction configuration, as we wanted to provide the user with the possibility to reconfigure transactions by just redeclaring a CRUD method in the concrete repository interface and applying @Transactional to it. For RC1 we decided to implement a custom TransactionAttributeSource to be able to move the annotations back to the repository CRUD implementation.
Long story short, here's what it boils down to:
As of RC1 there's no need to extend the store-specific repository interface anymore, unless you want to:
Use List-based access to findAll(…) instead of the Iterable-based one in the more core repository interfaces (although you could simply redeclare the relevant methods in a common base interface to return Lists as well)
You want to make use of the JPA-specific methods like saveAndFlush(…) and so on.
Generally you are much more flexible regarding the exposure of CRUD methods since RC1 as you can even only extend the Repository marker interface and selectively add the CRUD methods you want to expose. As the backing implementation will still implement all of the methods of PagingAndSortingRepository we can still route the calls to the instance:
public interface MyBaseRepository<T, ID extends Serializable> extends Repository<T, ID> {

    List<T> findAll();

    T findOne(ID id);
}

public interface UserRepository extends MyBaseRepository<User, Long> {

    List<User> findByUsername(String username);
}
In that example we define MyBaseRepository to only expose findAll() and findOne(…) (which will be routed into the instance implementing the CRUD methods) and the concrete repository adding a finder method to the two CRUD ones.
For more details on that topic please consult the reference documentation.
