spring-data-jdbc integrate with mybatis - spring

I am currently using MyBatis in my project. It is good for complex SQL, but I found that I need to create a lot of redundant SQL for basic CRUD. So I came across spring-data-jdbc (here), which is a very nice library (similar to spring-data-jpa but without JPA persistence) that helps generate CRUD SQL statements from method names.
The guide on the official website on how to integrate with MyBatis is very vague, and I couldn't find any other source showing how to do it. Basically I am looking for a way to do something like the below:
@Repository
@Mapper
public interface PersonRepository extends CrudRepository<Person, Long> {
    // via spring-data-jdbc
    List<Person> findAll();
    // via spring-data-jdbc
    List<Person> findByFirstName(String name);
    // via MyBatis; the actual query is in PersonMapper.xml
    List<Person> selectAllRow();
}
As you can see, calls to findAll and findByFirstName would be handled by spring-data-jdbc, while a call to selectAllRow would look up the corresponding MyBatis mapper file for the actual SQL. This way I could combine the two mechanisms, handling simple CRUD queries and complex queries (via MyBatis) together.
But the above doesn't work, so currently I have to split them into two interfaces, one for PersonRepository and another for PersonMapper, which is not a very nice design.
Has anyone done something similar before?

Having two separate interfaces, one for the repository and another for the MyBatis mapper, is just how the MyBatis integration in spring-data-jdbc works as of now (version 2.2.1).
The mapper is just an implementation detail of the Spring Data repository and should not be used by clients, so this should not be an issue.
What you want can probably be done, but it will require quite some work and might not be worth it.
The @Repository annotation instructs spring-data-jdbc to create a proxy implementing PersonRepository with all the machinery for fetching objects from and saving them to the database.
@Mapper, on the other hand, instructs mybatis-spring to create another proxy that will handle queries from the mapping provided in XML or via annotations on the interface methods.
Only one of these proxies can be injected in the places where you are going to use PersonRepository. What you can do is create your own proxy for PersonRepository that creates both the spring-data-jdbc proxy and the MyBatis proxy and dispatches calls to them. But the complexity of this solution would be rather high compared to a simple class that delegates to two separate interfaces, PersonRepository and PersonMapper.
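The simple delegating class mentioned above can be sketched without Spring at all. The interfaces below are minimal stand-ins for the two generated proxies (the single-method shapes and the PersonDao name are assumptions for illustration, not part of either library's API):

```java
import java.util.List;

// Minimal stand-ins for the two generated proxies (assumed shapes).
interface PersonRepository { List<String> findAll(); }
interface PersonMapper { List<String> selectAllRow(); }

// A plain facade that delegates simple CRUD to the Spring Data proxy and
// complex queries to the MyBatis proxy, so callers see a single API.
class PersonDao {
    private final PersonRepository repository;
    private final PersonMapper mapper;

    PersonDao(PersonRepository repository, PersonMapper mapper) {
        this.repository = repository;
        this.mapper = mapper;
    }

    // Handled by spring-data-jdbc.
    List<String> findAll() { return repository.findAll(); }

    // Handled by MyBatis (PersonMapper.xml).
    List<String> selectAllRow() { return mapper.selectAllRow(); }
}
```

In a real application both fields would simply be constructor-injected by Spring, since each proxy is a bean.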
Update: it seems there is a way to combine a MyBatis mapper with a repository, as described in this comment: https://github.com/spring-projects/spring-data-jdbc/pull/152#issuecomment-660151636
This approach uses repository fragments to add a MyBatis fragment to the repository implementation.
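Structurally, the fragment idea looks like the sketch below: the repository interface extends a fragment interface whose implementation is backed by the MyBatis mapper, so both sets of methods end up on one injected bean. Names here (PersonQueries) are assumptions; CrudRepository is omitted to keep the sketch self-contained:

```java
import java.util.List;

// Placeholder entity for illustration.
class Person {}

// The fragment: its implementation would be backed by the MyBatis mapper.
interface PersonQueries {
    List<Person> selectAllRow();
}

// The repository would also extend CrudRepository<Person, Long> (omitted here);
// extending the fragment makes both derived and hand-written queries
// available on the same repository bean.
interface PersonRepository extends PersonQueries {
    List<Person> findByFirstName(String name); // derived by spring-data-jdbc
}
```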

Related

Where does JPA pick up the method getUserByUserName, as I have not given any implementation (and I have checked the inner classes too)?

In my Spring Boot project I am using the Spring Data JPA starter. I have done all the DB-related configuration in application.properties, and the project is working fine. What I fail to understand is where this method's definition is. We have just declared an abstract method; how does this method even work?
public interface UserRepository extends JpaRepository<UserEntity, Integer> {
    Optional<UserEntity> getUserByUserName(String user);
}
This is part of the magic of JPA repositories. I don't actually know the details of how it works either; I just know how to use it.
Ultimately, I think it has to do with how Spring proxies interfaces. Spring creates an instance of the interface at runtime, and when the methods are named according to the specs, Spring can generate an appropriate query from the method name.
Here is a good article that goes into detail on how you can construct the method names to make the query that you want: https://www.baeldung.com/spring-data-derived-queries.
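A short sketch of the naming convention: the method name itself encodes the query, and no implementation is ever written. The entity fields (userName, age) and the second method are hypothetical examples, not taken from the question:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical entity with fields userName and age, for illustration only.
class UserEntity {}

// Spring Data parses these names at startup and generates the queries;
// the interface body stays empty of implementations.
interface UserRepository /* extends JpaRepository<UserEntity, Integer> */ {
    // roughly: ... WHERE user_name = ?
    Optional<UserEntity> getUserByUserName(String userName);

    // roughly: ... WHERE age > ? ORDER BY user_name ASC
    List<UserEntity> findByAgeGreaterThanOrderByUserNameAsc(int age);
}
```

If a method name cannot be parsed against the entity's properties, the application fails at startup rather than at query time, which is how these typos get caught early.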

Adding new DB support in spring data

Currently Spring Data supports multiple databases (MySQL, Cassandra, Mongo... a very big list); however, I want to add my own custom repository from scratch, i.e. add custom DB support to Spring Data. I don't want to extend any existing repositories; instead, I want to create a parallel repository structure for my custom datasource. Looking at the current implementation, it looks like a tedious task. It would be great if someone could help me with the minimal requirements to do this.
You could create a @Repository-annotated bean into which you inject the EntityManager, or whatever bean plays that role for the database type you are using.
@Repository
public class MyCustomRepositoryImpl implements MyCustomRepository {

    @Autowired
    private EntityManager entityManager;

    // the methods that you are going to create
}
For more details see:
https://docs.spring.io/spring-data/data-commons/docs/1.6.1.RELEASE/reference/html/repositories.html
Chapter: 1.3 Custom implementations for Spring Data repositories

Where to use #Transactional annotation and #Repository annotation

Some of the examples on the internet use the @Transactional annotation on DAO implementation methods, while some use it on service-layer methods. Where is it more appropriate to put @Transactional, and why?
Similarly, where should the @Repository annotation go: on the DAO interface or on the DAO implementation?
I have always used the @Service and @Repository annotations on the implementations, but they can be put on either. However, putting the annotation on an interface means that you cannot have more than one implementation, because you would get a NoUniqueBeanDefinitionException.
In the case of @Transactional, it depends, but normally it goes on the service. If you want to be able to combine several DB calls in one transaction, it should go on the service. If you want to make small transactions, the DAO would be best, but then you wouldn't be able to modify several tables in one single transaction. Another con of having it on the DAO is that you won't be able to roll back multiple modifications, only the ones being executed by that DAO.
EDIT
After several projects using Spring, each of different proportions, I ended up changing my own practices. I would like to add that even though adding @Transactional to the service layer isn't exactly bad practice, it can negatively affect the performance of the application. So, in my own experience, it is better to add it to the DAO/repository layer, and only add it at the method level in the service layer when a transaction must be atomic.
One more thing: if you are using Spring Data, @Repository must be added on the interface. Only if you extend the JpaRepository will you need to add the @Repository annotation on the implementation. In that case, the JpaRepository interface and the custom implementation will both have @Repository.
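The core trade-off above can be seen in a framework-free sketch: with the boundary at the service level, several DAO calls share a single commit/rollback unit, so a failure in the second call also undoes the first. TxRunner below is a hypothetical stand-in for what the @Transactional proxy arranges around a method, not a real Spring API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the transaction proxy: begin before the work,
// commit on success, roll back everything if any step throws.
class TxRunner {
    final List<String> log = new ArrayList<>();

    void run(Runnable work) {
        log.add("begin");
        try {
            work.run();          // e.g. two DAO calls inside one service method
            log.add("commit");
        } catch (RuntimeException e) {
            log.add("rollback"); // both DAO calls are undone together
        }
    }
}
```

With the annotation on each DAO method instead, each call would get its own begin/commit pair, and a failure in the second call could no longer undo the first.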

Multiple datasources in Spring Boot Repository Annotated Interface

My application is based on Spring Boot, Hibernate, MySQL using Spring Data JPA to stitch them.
The use case is to use a slave DB node for heavy read operations, so as to avoid all traffic being served from the master MySQL node. One way of achieving this is to have multiple entity managers pointing to separate data sources (one to the master and the other to the slave node). This approach has been explained quite well in the SO question and blog below.
Spring Boot, Spring Data JPA with multiple DataSources
https://scattercode.co.uk/2016/01/05/multiple-databases-with-spring-boot-and-spring-data-jpa/
Where I am stuck is understanding whether there is a way to inject different entity managers for different use cases in my repository-annotated interface.
The only way I see to do it is to extend the repository with a custom implementation that uses a custom entity manager annotated with the relevant persistence context, like below:
public interface CustomerRepository extends JpaRepository<Customer, Integer>, MyCustomCustomerRepository {
}

public class MyCustomCustomerRepositoryImpl implements MyCustomCustomerRepository {

    @PersistenceContext(unitName = "entityManagerFactoryTwo")
    EntityManager entityManager;
}
I would like to avoid doing this custom implementation. Any help around solving this use case(which I feel should be very common) would be appreciated.
NOTE: Entities are same in both databases so giving separate packages for entity scanning and similar solutions might not work.
Here is a nice sample you can use:
dynamic-datasource-routing-with-spring.
Inside you can find an AbstractRoutingDataSource plus an interceptor for a custom annotation that wires the service method to the required database.
Alternatively, you can switch the datasource explicitly.
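The routing idea can be sketched without Spring: AbstractRoutingDataSource resolves a target DataSource from a per-call lookup key, and the interceptor binds that key to the current thread before the service method runs. In the hypothetical simplification below, strings stand in for real DataSource beans and all names are assumptions:

```java
import java.util.Map;

// Hypothetical, framework-free sketch of the AbstractRoutingDataSource idea:
// a thread-bound key decides which target is resolved on each call.
class RoutingDataSource {
    private static final ThreadLocal<String> KEY = ThreadLocal.withInitial(() -> "master");
    private final Map<String, String> targets; // key -> target (stand-in for DataSources)

    RoutingDataSource(Map<String, String> targets) { this.targets = targets; }

    // In the real setup an interceptor calls this before the service method.
    static void use(String key) { KEY.set(key); }

    // Mirrors determineCurrentLookupKey() in the Spring class.
    String determineCurrentLookupKey() { return KEY.get(); }

    String currentTarget() { return targets.get(determineCurrentLookupKey()); }
}
```

Because the key is thread-bound, repositories keep seeing a single DataSource bean while reads are transparently routed to the slave and writes to the master.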
Below is the pull request that shows the diff and how I made it work with most of the configuration annotation-driven instead of XML. It is based on cra6's answer above, i.e. using Spring's RoutingDataSource capability.
https://github.com/himanshuvirmani/rest-webservice-sample/pull/1/files

How to use multiple spring-boot-starter-data-* dependencies

Taking an example: I want some entities to be persisted in MongoDB and some in Cassandra.
I have my repository interfaces extending CrudRepository. My Cassandra entities have the @Table annotation and my MongoDB entities have the @Document annotation.
However, on startup spring-data attempts to create an instance of MyMongoObjectRepository, and thus complains that "Cassandra entities must have the @Table, @Persistent or @PrimaryKeyClass Annotation".
How are the libraries discovering which repository interfaces they are supposed to implement, and how can I control them so they don't try to implement them for unsupported entities?
Further question: if I wanted to store some entities in both storage systems, can multiple implementations of a single repository be generated, or do I need an interface for each store?
Edit
On further inspection, the problem seems to come from the entity scanning rather than the repository scanning. Both mappers pick up all the entities (as their annotations are all meta-annotated with @Persistent). One of the Mongo entities has a nested entity (without any annotations) that the Cassandra mapper cannot deal with.
You can use the basePackages setting on @EnableMongoRepositories and @EnableJpaRepositories to specify where they should look for repository definitions.
Like so:
@EnableMongoRepositories(basePackages = {
    "com.some.package.to.look.inside",
    "com.some.package.to.look.also.at"
})
And
@EnableJpaRepositories(basePackages = {
    "com.some.differentpackage.to.look.inside",
    "com.some.differentpackage.to.look.also.at"
})
For this to work you need to namespace your repository definitions in sensible packages.
Answer to your follow-up question:
If you want to store entities in multiple places at once, I would implement a service in front of the repositories, using @Autowired to inject the repositories, and put @Transactional on the service method that calls the repository methods. Having @Transactional on the service method ensures that if an error occurs while saving, no half-way saves are left lying around, with rollbacks performed if necessary.
Edit: @Transactional does not work for databases that do not support transactions, like Cassandra and MongoDB.
The problem is that all the different entity scanners look for the @Persistent annotation, while all the store-specific annotations (@Table, @Document, etc.) have @Persistent as a meta-annotation.
Therefore, the entities for the different repositories must be in separate packages, and you must construct your own scanner so that you can pass it the packages, as it does not accept a generic filter.
