Spring Data JPA - Executing complex multi-join queries - Spring Boot

I have a requirement to execute a number of arbitrary, complex queries with multiple joins for reporting purposes, so I am planning to use the EntityManager native query feature directly. I just tried it and it seems to work.
@Service
public class SampleService {

    @Autowired
    private EntityManager entityManager;

    public List<Object[]> execute(String sql) {
        Query query = entityManager.createNativeQuery(sql);
        return query.getResultList();
    }
}
This code is invoked once every 30 seconds by a single-threaded scheduled process.
Question:
Should I be using entity manager or entity manager factory?
Should I close the connection here, or is it managed automatically?
How do I reduce the DB connection pool size, since this is not a multi-threaded app? Or should I not be worried about that?
Any other suggestions!?

Should I be using entity manager or entity manager factory?
Injecting EntityManager Vs. EntityManagerFactory
EntityManager looks fine in this instance.
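If you prefer the standard JPA style, the EntityManager can also be injected with @PersistenceContext rather than @Autowired; a minimal sketch:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;

@Service
public class SampleService {

    // Spring injects a shared, thread-safe proxy that delegates to the
    // transaction-bound EntityManager at runtime
    @PersistenceContext
    private EntityManager entityManager;
}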
Should I close the connection here, or is it managed automatically?
No, I don't think you need to, as the entity manager handles this for you.
How do I reduce the DB connection pool size, since this is not a multi-threaded app? Or should I not be worried about that?
I doubt you need to concern yourself with the connection pool unless you are expecting large volumes and your application is running slowly under load. Try doing some benchmarking; you may have much more capacity than you need and be prematurely optimising your app.
It is more likely that you would increase the number of connections rather than decrease them. To change the number of connections you do that in application.properties (or application.yml).
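For example, with Spring Boot's default HikariCP pool the size can be set roughly like this (the property names assume HikariCP and the values are arbitrary; other pools use different keys):

# application.properties
spring.datasource.hikari.maximum-pool-size=5
spring.datasource.hikari.minimum-idle=1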
Any other suggestions!?
Rather than a generic method, I would consider having a separate repository class outside of the service and have that repository method do something specific. Make each method return a specific result or object rather than passing in arbitrary SQL.
As a rough outline of two separate classes (files), something like this:
@Service
public class SampleService {

    @Autowired
    private MyAuthorNativeRepository myAuthorNativeRepository;

    public List<Author> getAuthors() {
        return myAuthorNativeRepository.getAuthors();
    }
}

@Service
public class MyAuthorNativeRepository {

    @Autowired
    private EntityManager entityManager;

    public List<Author> getAuthors() {
        Query q = entityManager.createNativeQuery("SELECT blah blah FROM Author");

        @SuppressWarnings("unchecked")
        List<Object[]> rows = q.getResultList();

        List<Author> authors = new ArrayList<>();
        for (Object[] row : rows) {
            Author author = new Author();
            author.setName((String) row[0]);
            authors.add(author);
        }
        return authors;
    }
}
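If a repository method does need runtime input, it is safer to bind it as a query parameter than to concatenate it into the SQL string; a small sketch (the table, column and variable names are made up):

Query q = entityManager.createNativeQuery(
        "SELECT a.name FROM author a WHERE a.country = ?1");
q.setParameter(1, country); // positional parameter, avoids SQL injection
List<?> rows = q.getResultList();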

Related

Hibernate/Spring transaction - "Bad" flush order

In my app (Spring Boot based) I am using Hibernate and have a custom repository like this:
@Repository
public interface MyRepository extends JpaRepository<MyRepoEntity, Long> {

    @Query(value = "SELECT NEXTVAL('mytable')", nativeQuery = true)
    Long nextId();

    @Procedure(procedureName = "SCHEMA2.save_2", outputParameterName = "res")
    String callProcedure(@Param("prm_nrid") Long nr);
}
In my manager I have a method with the following business logic:
@Transactional
String invokeProcedure1() {
    Long id = myRepo.nextId();
    return myRepo.callProcedure(id);
}
The problem is that Hibernate performs the two actions in seemingly random order because there is no DB "relationship" between them.
Is there a way (preferably without explicitly using flush()) to have nextId() invoked before callProcedure()?
Thank you all!
These are native queries, which are executed immediately. I don't know what your real application looks like, but the code you posted will first run the NEXTVAL query and only then call the stored procedure.

Primary/secondary datasource failover in Spring MVC

I have a Java web application developed on the Spring framework which uses MyBatis. I see that the datasource is defined in beans.xml. Now I want to add a secondary data source as a backup. For example, if the application is not able to connect to the DB and gets an error, or if the server is down, it should be able to connect to a different datasource. Is there a configuration in Spring to do this, or will we have to code this manually in the application?
I have seen primary and secondary annotations in Spring Boot but nothing in Spring. I could achieve this in my code where the connection is created/retrieved, by connecting to the secondary datasource if the connection to the primary datasource fails/times out. But I wanted to know whether this can be achieved just by making changes in the Spring configuration.
Let me clarify things one by one:
Spring Boot has a @Primary annotation, but there is no @Secondary annotation.
The purpose of the @Primary annotation is not what you have described. Spring does not automatically switch data sources in any way. @Primary merely tells Spring which data source to use when we don't specify one explicitly. For more detail on this, see https://www.baeldung.com/spring-data-jpa-multiple-databases
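To illustrate, a minimal sketch of two data sources where @Primary only marks the default one; it adds no failover behaviour (bean and property names are made up, and the DataSourceBuilder package differs between Boot versions):

import javax.sql.DataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class DataSourceConfig {

    // Used whenever no specific DataSource is requested
    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.main")
    public DataSource mainDataSource() {
        return DataSourceBuilder.create().build();
    }

    // Only used where explicitly qualified, e.g. @Qualifier("backupDataSource")
    @Bean
    @ConfigurationProperties("app.datasource.backup")
    public DataSource backupDataSource() {
        return DataSourceBuilder.create().build();
    }
}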
Now, how do we actually switch datasources when one goes down?
Most people don't manage this kind of high availability in code. People usually prefer to run 2 master database instances in an active-passive mode which are kept in sync. For auto-failover, something like keepalived can be used. This is also a highly subjective and contentious topic, and there are a lot of things to consider here, like whether we can afford replication lag and whether there are slaves running for each master (because then we have to switch slaves too, as the old master's slaves would now become out of sync, etc.). If you have databases spread across regions, this becomes even more difficult (read: awesome) and requires yet more engineering, planning, and design.
Now, since the question specifically mentions using application code for this, there is one thing you can do. I don't advise using it in production though. EVER. You can create an AspectJ advice around all your primary transactional methods using your own custom annotation. Let's call this annotation @SmartTransactional for our demo.
Sample code (I did not test it though):
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface SmartTransactional {}

public class SomeServiceImpl implements SomeService {

    @SmartTransactional
    @Transactional("primaryTransactionManager")
    public boolean someMethod() {
        //call a common method here for code reusability or create an abstract class
    }
}

public class SomeServiceSecondaryTransactionImpl implements SomeService {

    @Transactional("secondaryTransactionManager")
    public boolean usingTransactionManager2() {
        //call a common method here for code reusability or create an abstract class
    }
}

@Component
@Aspect
public class SmartTransactionalAspect {

    @Autowired
    private ApplicationContext context;

    @Pointcut("@annotation(...SmartTransactional)")
    public void smartTransactionalAnnotationPointcut() {
    }

    @Around("smartTransactionalAnnotationPointcut()")
    public Object methodsAnnotatedWithSmartTransactional(final ProceedingJoinPoint joinPoint) throws Throwable {
        Method method = getMethodFromTarget(joinPoint);
        Object result = joinPoint.proceed();

        boolean failure = Boolean.TRUE; // check if result is failure
        if (failure) {
            String secondaryTransactionManagerBeanName = ""; // get class name from joinPoint and append 'SecondaryTransactionImpl' instead of 'Impl' in the class name
            Object bean = context.getBean(secondaryTransactionManagerBeanName);
            result = bean.getClass().getMethod(method.getName()).invoke(bean);
        }
        return result;
    }
}

Why does OpenEntityManagerInViewFilter change @Transactional propagation REQUIRES_NEW behavior?

Using Spring 4.3.12, Spring Data JPA 1.11.8 and Hibernate 5.2.12.
We use the OpenEntityManagerInViewFilter to ensure our entity relationships do not throw LazyInitializationException after an entity has been loaded. Often in our controllers we use a @ModelAttribute annotated method to load an entity by id and make that loaded entity available to a controller's request mapping handler method.
In some cases, like auditing, we have entity modifications that we want to commit even when some other transaction errors and rolls back. Therefore we annotate our audit work with @Transactional(propagation = Propagation.REQUIRES_NEW) to ensure this transaction will commit successfully regardless of any other transactions which may or may not complete successfully.
What I've seen in practice when using the OpenEntityManagerInViewFilter is that Propagation.REQUIRES_NEW transactions attempt to commit changes which occurred outside the scope of the new transaction, causing work which should always commit successfully to the database to instead roll back.
Example
Given this Spring Data JPA powered repository (the EmployeeRepository is similarly defined):
import org.springframework.data.jpa.repository.JpaRepository;
public interface MethodAuditRepository extends JpaRepository<MethodAudit,Long> {
}
This service:
@Service
public class MethodAuditorImpl implements MethodAuditor {

    private final MethodAuditRepository methodAuditRepository;

    public MethodAuditorImpl(MethodAuditRepository methodAuditRepository) {
        this.methodAuditRepository = methodAuditRepository;
    }

    @Override
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void auditMethod(String methodName) {
        MethodAudit audit = new MethodAudit();
        audit.setMethodName(methodName);
        audit.setInvocationTime(LocalDateTime.now());
        methodAuditRepository.save(audit);
    }
}
And this controller:
@Controller
public class StackOverflowQuestionController {

    private final EmployeeRepository employeeRepository;
    private final MethodAuditor methodAuditor;

    public StackOverflowQuestionController(EmployeeRepository employeeRepository, MethodAuditor methodAuditor) {
        this.employeeRepository = employeeRepository;
        this.methodAuditor = methodAuditor;
    }

    @ModelAttribute
    public Employee loadEmployee(@RequestParam Long id) {
        return employeeRepository.findOne(id);
    }

    @GetMapping("/updateEmployee")
    // @Transactional // <-- When uncommented, transactions work as expected (using OpenEntityManagerInViewFilter or not)
    public String updateEmployee(@ModelAttribute Employee employee, RedirectAttributes ra) {
        // method auditor performs work in new transaction
        methodAuditor.auditMethod("updateEmployee"); // <-- at close of this method, employee update occurs, triggering rollback
        // No code after this point executes
        System.out.println(employee.getPin());
        employeeRepository.save(employee);
        return "redirect:/";
    }
}
When the updateEmployee method is exercised with an invalid pin number updateEmployee?id=1&pin=12345 (pin number is limited in the database to 4 characters), then no audit is inserted into the database.
Why is this? Shouldn't the current transaction be suspended when the MethodAuditor is invoked? Why is the modified employee flushing when this Propagation.REQUIRES_NEW transaction commits?
If I wrap the updateEmployee method in a transaction by annotating it as @Transactional, however, audits will persist as desired. And this will work as expected whether or not the OpenEntityManagerInViewFilter is used.
While your application (server) tries to make two separate transactions, you are still using a single EntityManager and a single DataSource, so at any given time JPA and the database see just one transaction. If you want those things to be separated, you need to set up two DataSources and two EntityManagers.
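A rough sketch of what a second, independent EntityManagerFactory and transaction manager could look like (the package, bean, and qualifier names are made up):

import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class AuditPersistenceConfig {

    // A second EntityManagerFactory bound to its own DataSource
    @Bean
    public LocalContainerEntityManagerFactoryBean auditEntityManagerFactory(DataSource auditDataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(auditDataSource);
        emf.setPackagesToScan("com.example.audit");
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }

    // Referenced from the audit service via @Transactional("auditTransactionManager")
    @Bean
    public PlatformTransactionManager auditTransactionManager(EntityManagerFactory auditEntityManagerFactory) {
        return new JpaTransactionManager(auditEntityManagerFactory);
    }
}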

Spring webflux and reading from database

Spring 5 introduces the reactive programming style for REST APIs with WebFlux. I'm fairly new to it myself and was wondering whether wrapping synchronous calls to a database in Flux or Mono makes sense performance-wise. If yes, is this the way to do it:
@RestController
public class HomeController {

    private MeasurementRepository repository;

    public HomeController(MeasurementRepository repository) {
        this.repository = repository;
    }

    @GetMapping(value = "/v1/measurements")
    public Flux<Measurement> getMeasurements() {
        return Flux.fromIterable(repository.findByFromDateGreaterThanEqual(new Date(1486980000L)));
    }
}
Is there something like an asynchronous CrudRepository? I couldn't find it.
One option would be to use alternative SQL clients that are fully non-blocking. Some examples include:
https://github.com/mauricio/postgresql-async or https://github.com/finagle/roc. Of course, none of these drivers is officially supported by database vendors yet. Also, the functionality is much less attractive compared to mature JDBC-based abstractions such as Hibernate or jOOQ.
An alternative idea came to me from the Scala world: dispatch blocking calls onto an isolated thread pool so as not to mix blocking and non-blocking calls together. This allows us to control the overall number of threads and lets the CPU serve non-blocking tasks in the main execution context with some potential optimizations.
Assuming that we have a JDBC-based implementation such as Spring Data JPA, which is indeed blocking, we can make its execution asynchronous and dispatch it on a dedicated thread pool.
@RestController
public class HomeController {

    private final MeasurementRepository repository;
    private final Scheduler scheduler;

    public HomeController(MeasurementRepository repository, @Qualifier("jdbcScheduler") Scheduler scheduler) {
        this.repository = repository;
        this.scheduler = scheduler;
    }

    @GetMapping(value = "/v1/measurements")
    public Flux<Measurement> getMeasurements() {
        return Mono.fromCallable(() -> repository.findByFromDateGreaterThanEqual(new Date(1486980000L)))
                .publishOn(scheduler)
                .flatMapMany(Flux::fromIterable); // unwrap the List into a Flux so the return type matches
    }
}
Our Scheduler for JDBC should be configured using a dedicated thread pool whose size equals the number of connections.
@Configuration
public class SchedulerConfiguration {

    private final Integer connectionPoolSize;

    public SchedulerConfiguration(@Value("${spring.datasource.maximum-pool-size}") Integer connectionPoolSize) {
        this.connectionPoolSize = connectionPoolSize;
    }

    @Bean
    public Scheduler jdbcScheduler() {
        return Schedulers.fromExecutor(Executors.newFixedThreadPool(connectionPoolSize));
    }
}
However, there are difficulties with this approach. The main one is transaction management. In JDBC, transactions are possible only within a single java.sql.Connection. To make several operations in one transaction, they have to share a connection. If we want to make some calculations in between them, we have to keep the connection. This is not very effective, as we keep a limited number of connections idle while doing calculations in between.
This idea of an asynchronous JDBC wrapper is not new and is already implemented in the Scala library Slick 3. Finally, non-blocking JDBC may come along on the Java roadmap; it was announced at JavaOne in September 2016, and it is possible that we will see it in Java 10.
Based on this blog, you should rewrite your snippet in the following way:
@GetMapping(value = "/v1/measurements")
public Flux<Measurement> getMeasurements() {
    return Flux.defer(() -> Flux.fromIterable(repository.findByFromDateGreaterThanEqual(new Date(1486980000L))))
            .subscribeOn(Schedulers.elastic());
}
Obtaining a Flux or a Mono doesn’t necessarily mean it will run in a dedicated Thread. Instead, most operators continue working in the Thread on which the previous operator executed. Unless specified, the topmost operator (the source) itself runs on the Thread in which the subscribe() call was made.
If you have blocking persistence APIs (JPA, JDBC) or networking APIs to use, Spring MVC is the best choice for common architectures at least. It is technically feasible with both Reactor and RxJava to perform blocking calls on a separate thread but you would not be making the most of a non-blocking web stack.
So... How do I wrap a synchronous, blocking call?
Use a Callable to defer execution, and use Schedulers.elastic() because it creates a dedicated thread to wait for the blocking resource without tying up other resources.
Schedulers.immediate() : Current thread.
Schedulers.single() : A single, reusable thread.
Schedulers.newSingle() : A per-call dedicated thread.
Schedulers.elastic() : An elastic thread pool. It creates new worker pools as needed and reuses idle ones. This is a good choice for I/O blocking work, for instance.
Schedulers.parallel() : A fixed pool of workers that is tuned for parallel work.
example:
Mono.fromCallable(() -> blockingRepository.save())
.subscribeOn(Schedulers.elastic());
Spring Data supports reactive repository interfaces for MongoDB and Cassandra.
Spring data MongoDb Reactive Interface
Spring Data MongoDB provides reactive repository support with Project Reactor and RxJava 1 reactive types. The reactive API supports reactive type conversion between reactive types.
public interface ReactivePersonRepository extends ReactiveCrudRepository<Person, String> {

    Flux<Person> findByLastname(String lastname);

    @Query("{ 'firstname': ?0, 'lastname': ?1}")
    Mono<Person> findByFirstnameAndLastname(String firstname, String lastname);

    // Accept parameter inside a reactive type for deferred execution
    Flux<Person> findByLastname(Mono<String> lastname);

    Mono<Person> findByFirstnameAndLastname(Mono<String> firstname, String lastname);

    @InfiniteStream // Use a tailable cursor
    Flux<Person> findWithTailableCursorBy();
}
public interface RxJava1PersonRepository extends RxJava1CrudRepository<Person, String> {

    Observable<Person> findByLastname(String lastname);

    @Query("{ 'firstname': ?0, 'lastname': ?1}")
    Single<Person> findByFirstnameAndLastname(String firstname, String lastname);

    // Accept parameter inside a reactive type for deferred execution
    Observable<Person> findByLastname(Single<String> lastname);

    Single<Person> findByFirstnameAndLastname(Single<String> firstname, String lastname);

    @InfiniteStream // Use a tailable cursor
    Observable<Person> findWithTailableCursorBy();
}
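Such a repository can then be returned directly from a WebFlux controller without any manual scheduling; a small usage sketch (the controller class, mapping, and path are assumptions):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class PersonController {

    private final ReactivePersonRepository repository;

    public PersonController(ReactivePersonRepository repository) {
        this.repository = repository;
    }

    // The repository already returns a Flux, so no blocking wrapper or scheduler is needed
    @GetMapping("/v1/persons/{lastname}")
    public Flux<Person> byLastname(@PathVariable String lastname) {
        return repository.findByLastname(lastname);
    }
}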

JPA: Nested transactional method is not rolled back

UPD 1: Upon further research I think the following information may be useful:
I obtain the datasource through a JNDI lookup on WildFly 9.0.2, then 'wrap' it in an instance of HikariDataSource (e.g. return new HikariDataSource(jndiDSLookup(dsName))).
the transaction manager that ends up being used is JTATransactionManager.
I do not configure the transaction manager in any way.
ORIGINAL QUESTION:
I am experiencing an issue with JPA/Hibernate and (maybe) Spring Boot where DB changes introduced in a transactional method of one class, called from a transactional method of another class, are committed even though the changes in the caller method are rolled back (as they should be).
Here are my transactional services
StuffService:
@Service
@Transactional(rollbackFor = IOException.class)
public class StuffService {

    @Inject private BarService barService;
    @Inject private StuffRepository stuffRepository;

    public Stuff updateStuff(Stuff stuff) {
        try {
            if (null != barService.doBar(stuff)) {
                stuff.setSomething(SOMETHING);
                stuff.setSomethingElse(SOMETHING_ELSE);
                return stuffRepository.save(stuff);
            }
        } catch (FirstCustomException e) {
            logger.error("Blah", e);
            throw new SecondCustomException(e.getMessage());
        }
        throw new SecondCustomException("Blah 2");
    }

    // other methods
}
and BarService:
@Service
@Transactional
public class BarService {

    @Inject private EntityARepository entityARepository;
    @Inject private EntityBRepository entityBRepository;

    /*
     * Updates existing entity A and persists new entity B.
     */
    public EntityA doBar(Stuff stuff) throws FirstCustomException {
        EntityA a = entityARepository.findOne(/* some criteria */);
        a.setSomething(SOMETHING);

        EntityB b = new EntityB();
        b.setSomething(SOMETHING);
        b.setSomethingElse(SOMETHING_ELSE);
        entityBRepository.save(b);

        return entityARepository.save(a);
    }

    // other methods
}
EntityARepository and EntityBRepository are very similar Spring-Boot repositories defined like this:
public interface EntityARepository extends JpaRepository<EntityA, Long>{
EntityA findOne(/* some criteria */);
}
FirstCustomException extends Throwable
SecondCustomException extends RuntimeException
The Stuff entity is versioned, and every once in a while it is concurrently updated by StuffService.updateStuff(). In that case the changes to one of the Stuff instances are rolled back, as expected, but everything that happens in barService.doBar() ends up being committed.
This puzzles me quite a lot since transaction propagation on both methods should be REQUIRED (the default) and both methods belong to different classes, hence @Transactional should apply to both.
I did see Transaction is not completely rolled back after server throws OptimisticLockException
But it did not really answer my question.
Can anyone please give me an idea of what's going on?
Thank you.
This isn't a 'nested' transaction - these services are operating in completely independent transactions. If you want the rollback of one to affect the other, you need to have them take part in the same transaction rather than each starting its own.
Or, if your issue is that there is a problem with the version of the Stuff instance passed into the doBar method and you want it verified, you will need to do something with that instance that causes an optimistic lock check, and so results in an exception if it is stale; see EntityManager.lock.
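If you go the optimistic-locking route, a minimal sketch of what that could look like inside a transactional method (a fragment; stuff.getId() and the variable names are assumptions, and Stuff must have a @Version field, which the question says it does):

// Load the managed instance and request an optimistic lock: the version is
// checked at commit time, so a stale Stuff causes an OptimisticLockException
// and the transaction rolls back.
Stuff managed = entityManager.find(Stuff.class, stuff.getId());
entityManager.lock(managed, LockModeType.OPTIMISTIC);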
