Combining a multi-tenant Spring application with distributed JTA transactions

I have a multi-tenant (database per tenant) Spring application. I have configured multiple data source beans, one for each tenant, but only one entity manager factory bean, because the tenants have the same tables with the same structure, i.e. the same entities. Unfortunately, the uniqueness of the entity manager factory, combined with how SharedEntityManagerCreator works, causes difficulties when I use distributed JTA transactions across these tenants. Before creating a new entity manager, SharedEntityManagerCreator uses the entity manager factory bean instance as the key to check whether an entity manager already exists in the resources of the current transaction:
public static EntityManager doGetTransactionalEntityManager(EntityManagerFactory emf, @Nullable Map<?, ?> properties, boolean synchronizedWithTransaction) throws PersistenceException {
    EntityManagerHolder emHolder = (EntityManagerHolder) TransactionSynchronizationManager.getResource(emf);
    /* ... */
}
If such a holder exists, it is reused. Therefore switching the tenant within the current transaction has no effect: the entity manager is not recreated but reused, so it still holds a reference to the previous data source, and the operations are executed against the previous tenant rather than the new one.
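To make the collision concrete, here is a minimal illustration of the keying described above (not the actual Spring internals, just the lookup that SharedEntityManagerCreator performs):

import javax.persistence.EntityManagerFactory;
import org.springframework.orm.jpa.EntityManagerHolder;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class TenantSwitchIllustration {

    // The lookup key is the factory instance, not the tenant: with a single
    // factory bean, every tenant in the same transaction resolves to the same
    // holder, and therefore to the same EntityManager and data source.
    static EntityManagerHolder currentHolder(EntityManagerFactory emf) {
        return (EntityManagerHolder) TransactionSynchronizationManager.getResource(emf);
    }
}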
I found a quick solution. Since the entity manager is wrapped in an EntityManagerHolder object inside the transaction resources, I created a class that extends EntityManagerHolder and wraps not a single entity manager but a map of entity managers keyed by tenant:
public class MultiTenantEntityManagerHolder extends EntityManagerHolder {

    private final Map<String, EntityManager> entityManagers = new HashMap<>();
    private final EntityManagerFactory entityManagerFactory;

    public MultiTenantEntityManagerHolder(EntityManagerFactory entityManagerFactory) {
        super(entityManagerFactory.createEntityManager());
        this.entityManagerFactory = entityManagerFactory;
    }

    @Override
    public EntityManager getEntityManager() {
        String tenantId = <get current tenant>; // placeholder: resolve the current tenant here
        if (!entityManagers.containsKey(tenantId)) {
            entityManagers.put(tenantId, entityManagerFactory.createEntityManager());
        }
        return entityManagers.get(tenantId);
    }
}
Then an object of type MultiTenantEntityManagerHolder is created at the beginning of the transaction and placed inside the transaction resources:
TransactionSynchronizationManager.bindResource(entityManagerFactory, new MultiTenantEntityManagerHolder(entityManagerFactory));
But this solution looks like a hack to me, one that may stop working in the next version of Spring. Therefore I have two questions: is my current solution really a hack, i.e. a weak solution that should be abandoned? And what other approaches are possible for this problem?
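One alternative for the second question, sketched under the assumption that all tenants can share the same entity metadata, is to define one entity manager factory per tenant so that each factory instance becomes its own key in the transaction resources, with a JTA transaction manager spanning them. The bean names, the data source parameter and the package name below are placeholders, and the XA configuration of the data sources and the JTA provider is omitted:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class PerTenantJpaConfig {

    // One factory per tenant: each factory instance is its own key in the
    // transaction resources, so entity manager holders are never shared
    // across tenants. Repeat this bean per tenant (or register the factories
    // programmatically from a tenant list).
    @Bean
    public LocalContainerEntityManagerFactoryBean tenantAEntityManagerFactory(DataSource tenantADataSource) {
        // tenantADataSource is the existing per-tenant data source bean, injected by name
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setJtaDataSource(tenantADataSource);       // XA-capable data source for tenant A
        emf.setPackagesToScan("com.example.entities"); // the shared entity classes
        emf.setPersistenceUnitName("tenantA");
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }

    // A single JTA transaction manager coordinates all per-tenant resources.
    @Bean
    public JtaTransactionManager transactionManager() {
        return new JtaTransactionManager();
    }
}

Each repository or DAO would then reference its tenant's unit, e.g. via @PersistenceContext(unitName = "tenantA"), instead of relying on one shared entity manager.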

Related

Spring boot change connection schema dynamically inside transaction

In my Spring Boot application I need to read data from one schema and write to another. To do so, I followed this guide (https://github.com/spring-projects/spring-data-examples/tree/main/jpa/multitenant/schema) and used this answer (https://stackoverflow.com/a/47776205/10857151) to be able to change the schema used at runtime.
While this works fine inside a service without any transaction scope, it doesn't work in a more complex architecture (exception: session/EntityManager is closed) where a couple of services share a transaction to ensure rollback.
Below is a simple example of the architecture:
// simple JPA repositories
private FirstRepository repository;
private SecondRepository secondRepository;
private Mapper mapper;
private SchemaUpdater schemaUpdater;

@Transactional
public void entrypoint(String idSource, String idTarget) {
    // copy first object
    firstCopyService(idSource, idTarget);
    // copy second object
    secondCopyService(idSource, idTarget);
}

@Transactional
public void firstCopyService(String idSource, String idTarget) {
    // change schema to the source default
    schemaUpdater.changeToSourceSchema();
    Object obj = repository.get(idSource);
    // convert obj before persisting - set new id reference and other things
    obj = mapper.prepareObjToPersist(obj, idTarget);
    // change schema to the target default
    schemaUpdater.changeToTargetSchema();
    repository.saveAndFlush(obj);
}

@Transactional
public void secondCopyService(String idSource, String idTarget) {
    // change schema to the source default
    schemaUpdater.changeToSourceSchema();
    Object obj = secondRepository.get(idSource);
    // convert obj before persisting
    obj = mapper.prepareObjToPersist(obj);
    // change schema to the target default
    schemaUpdater.changeToTargetSchema();
    secondRepository.saveAndFlush(obj);
}
I need to know the best way to ensure this dynamic switching while keeping the transaction scope in each service, without causing problems when restoring and cleaning the entity manager session.
Thanks
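For context, the linked guide relies on Hibernate's schema-based multi-tenancy, where the schema is chosen through a CurrentTenantIdentifierResolver when a Session is opened. A minimal sketch of such a resolver follows; CURRENT_SCHEMA is a hypothetical ThreadLocal that a schema updater like the one above would set, and the behaviour noted in the comment is the likely reason the switch is ignored inside an already running transaction:

import org.hibernate.context.spi.CurrentTenantIdentifierResolver;

public class SchemaTenantResolver implements CurrentTenantIdentifierResolver {

    // Hypothetical holder that the schema updater writes to before each step.
    public static final ThreadLocal<String> CURRENT_SCHEMA =
            ThreadLocal.withInitial(() -> "default_schema");

    @Override
    public String resolveCurrentTenantIdentifier() {
        // Consulted when Hibernate opens the Session; a transaction-bound
        // Session keeps the identifier it was opened with, so changing
        // CURRENT_SCHEMA mid-transaction does not redirect later statements.
        return CURRENT_SCHEMA.get();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}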

Why does OpenEntityManagerInViewFilter change @Transactional propagation REQUIRES_NEW behavior?

Using Spring 4.3.12, Spring Data JPA 1.11.8 and Hibernate 5.2.12.
We use the OpenEntityManagerInViewFilter to ensure our entity relationships do not throw LazyInitializationException after an entity has been loaded. Often in our controllers we use a @ModelAttribute annotated method to load an entity by id and make that loaded entity available to a controller's request mapping handler method.
In some cases, such as auditing, we have entity modifications that we want to commit even when some other transaction may error and roll back. Therefore we annotate our audit work with @Transactional(propagation = Propagation.REQUIRES_NEW) to ensure this transaction commits successfully regardless of any other transactions which may or may not complete successfully.
What I've seen in practice when using the OpenEntityManagerInViewFilter is that Propagation.REQUIRES_NEW transactions attempt to commit changes which occurred outside the scope of the new transaction, causing work that should always commit successfully to the database to instead roll back.
Example
Given this Spring Data JPA powered repository (the EmployeeRepository is similarly defined):
import org.springframework.data.jpa.repository.JpaRepository;

public interface MethodAuditRepository extends JpaRepository<MethodAudit, Long> {
}
This service:
@Service
public class MethodAuditorImpl implements MethodAuditor {

    private final MethodAuditRepository methodAuditRepository;

    public MethodAuditorImpl(MethodAuditRepository methodAuditRepository) {
        this.methodAuditRepository = methodAuditRepository;
    }

    @Override
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void auditMethod(String methodName) {
        MethodAudit audit = new MethodAudit();
        audit.setMethodName(methodName);
        audit.setInvocationTime(LocalDateTime.now());
        methodAuditRepository.save(audit);
    }
}
And this controller:
@Controller
public class StackOverflowQuestionController {

    private final EmployeeRepository employeeRepository;
    private final MethodAuditor methodAuditor;

    public StackOverflowQuestionController(EmployeeRepository employeeRepository, MethodAuditor methodAuditor) {
        this.employeeRepository = employeeRepository;
        this.methodAuditor = methodAuditor;
    }

    @ModelAttribute
    public Employee loadEmployee(@RequestParam Long id) {
        return employeeRepository.findOne(id);
    }

    @GetMapping("/updateEmployee")
    // @Transactional // <-- When uncommented, transactions work as expected (using OpenEntityManagerInViewFilter or not)
    public String updateEmployee(@ModelAttribute Employee employee, RedirectAttributes ra) {
        // method auditor performs work in new transaction
        methodAuditor.auditMethod("updateEmployee"); // <-- at close of this method, the employee update occurs, triggering rollback
        // No code after this point executes
        System.out.println(employee.getPin());
        employeeRepository.save(employee);
        return "redirect:/";
    }
}
When the updateEmployee method is exercised with an invalid pin number, e.g. updateEmployee?id=1&pin=12345 (the pin number is limited in the database to 4 characters), no audit is inserted into the database.
Why is this? Shouldn't the current transaction be suspended when the MethodAuditor is invoked? Why is the modified employee flushed when this Propagation.REQUIRES_NEW transaction commits?
If I wrap the updateEmployee method in a transaction by annotating it as @Transactional, however, audits persist as desired. And this works as expected whether or not the OpenEntityManagerInViewFilter is used.
While your application (server) tries to create two separate transactions, you are still using a single EntityManager and a single DataSource, so at any given time JPA and the database see just one transaction. If you want those things to be separated, you need to set up two DataSources and two EntityManagers.
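A rough sketch of that suggestion for the audit work above, assuming a separate auditDataSource bean and a dedicated audit entity package; the bean names, packages and omitted JPA properties are placeholders, not part of the original answer:

import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

@Configuration
@EnableJpaRepositories(
        basePackageClasses = MethodAuditRepository.class, // audit repositories only
        entityManagerFactoryRef = "auditEntityManagerFactory",
        transactionManagerRef = "auditTransactionManager")
public class AuditJpaConfig {

    // Second factory bound to its own data source, so audit writes never share
    // the web-bound EntityManager opened by the OpenEntityManagerInViewFilter.
    @Bean
    public LocalContainerEntityManagerFactoryBean auditEntityManagerFactory(
            @Qualifier("auditDataSource") DataSource auditDataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(auditDataSource);
        emf.setPackagesToScan("com.example.audit"); // where MethodAudit lives
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }

    @Bean
    public JpaTransactionManager auditTransactionManager(
            @Qualifier("auditEntityManagerFactory") EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}

The auditor would then run with @Transactional(transactionManager = "auditTransactionManager", propagation = Propagation.REQUIRES_NEW), so its commit is independent of the request-scoped persistence context.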

JPA: Nested transactional method is not rolled back

UPD 1: Upon further research I think the following information may be useful:
I obtain the datasource through a JNDI lookup on WildFly 9.0.2, then 'wrap' it in an instance of HikariDataSource (e.g. return new HikariDataSource(jndiDSLookup(dsName))).
The transaction manager that ends up being used is JtaTransactionManager.
I do not configure the transaction manager in any way.
ORIGINAL QUESTION:
I am experiencing an issue with JPA/Hibernate and (maybe) Spring Boot where DB changes introduced in a transactional method of one class, called from a transactional method of another class, are committed even though the changes in the caller method are rolled back (as they should be).
Here are my transactional services
StuffService:
@Service
@Transactional(rollbackFor = IOException.class)
public class StuffService {

    @Inject private BarService barService;
    @Inject private StuffRepository stuffRepository;

    public Stuff updateStuff(Stuff stuff) {
        try {
            if (null != barService.doBar(stuff)) {
                stuff.setSomething(SOMETHING);
                stuff.setSomethingElse(SOMETHING_ELSE);
                return stuffRepository.save(stuff);
            }
        } catch (FirstCustomException e) {
            logger.error("Blah", e);
            throw new SecondCustomException(e.getMessage());
        }
        throw new SecondCustomException("Blah 2");
    }

    // other methods
}
and BarService:
@Service
@Transactional
public class BarService {

    @Inject private EntityARepository entityARepository;
    @Inject private EntityBRepository entityBRepository;

    /*
     * Updates existing entity A and persists new entity B.
     */
    public EntityA doBar(Stuff stuff) throws FirstCustomException {
        EntityA a = entityARepository.findOne(/* some criteria */);
        a.setSomething(SOMETHING);

        EntityB b = new EntityB();
        b.setSomething(SOMETHING);
        b.setSomethingElse(SOMETHING_ELSE);
        entityBRepository.save(b);

        return entityARepository.save(a);
    }

    // other methods
}
EntityARepository and EntityBRepository are very similar Spring Data JPA repositories, defined like this:
public interface EntityARepository extends JpaRepository<EntityA, Long> {
    EntityA findOne(/* some criteria */);
}
FirstCustomException extends Throwable
SecondCustomException extends RuntimeException
The Stuff entity is versioned, and every once in a while it is concurrently updated by StuffService.updateStuff(). In that case the changes to one of the Stuff instances are rolled back, as expected, but everything that happens in barService.doBar() ends up being committed.
This puzzles me quite a lot, since transaction propagation on both methods should be REQUIRED (the default) and the methods belong to different classes, hence @Transactional should apply to both.
I did see Transaction is not completely rolled back after server throws OptimisticLockException.
But it did not really answer my question.
Can anyone please give me an idea of what's going on?
Thank you.
This isn't a 'nested' transaction: these services are operating in completely independent transactions. If you want the rollback of one to affect the other, you need to have them take part in the same transaction rather than each starting its own.
Or, if your issue is that there is a problem with the version of the 'stuff' passed into the doBar method and you want it verified, you will need to do something with the stuff instance that would cause an optimistic lock check, and so result in an exception if it is stale. See EntityManager.lock.
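A minimal sketch of that second suggestion, using a hypothetical helper bean; merging re-attaches the incoming Stuff to the current persistence context so it can be locked:

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
@Transactional
public class StuffVersionGuard {

    @PersistenceContext
    private EntityManager entityManager;

    // Call this at the start of doBar(): OPTIMISTIC forces a version check at
    // commit, so a stale Stuff makes this transaction fail and roll back too.
    public Stuff verifyVersion(Stuff stuff) {
        Stuff managed = entityManager.merge(stuff);
        entityManager.lock(managed, LockModeType.OPTIMISTIC);
        return managed;
    }
}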

How to link JPA persistence context with single database transaction

Latest Spring Boot with JPA and Hibernate: I'm struggling to understand the relationship between transactions, the persistence context and the Hibernate session, and I can't easily avoid the dreaded 'no session' lazy initialization problem.
I update a set of objects in one transaction and then want to loop through those objects, processing each one in a separate transaction - it seems straightforward.
public void control() {
    List<Entity> entities = getEntitiesToProcess();
    for (Entity entity : entities) {
        processEntity(entity.getId());
    }
}

@Transactional(value = TxType.REQUIRES_NEW)
public List<Entity> getEntitiesToProcess() {
    List<Entity> entities = entityRepository.findAll();
    for (Entity entity : entities) {
        // Update a few properties
    }
    return entities;
}

@Transactional(value = TxType.REQUIRES_NEW)
public void processEntity(String id) {
    Entity entity = entityRepository.getOne(id);
    entity.getLazyInitialisedListOfObjects(); // throws LazyInitializationException: could not initialize proxy - no Session
}
However, I get a problem because (I think) the same Hibernate session is being used for both transactions. When I call entityRepository.getOne(id) in the second transaction, I can see in the debugger that I am returned exactly the same object that was returned by findAll() in the first transaction, without a DB access. If I understand this correctly, it's the Hibernate cache doing this? If I then call a method on my object that requires lazy initialization, I get a "no session" error. I thought the cache and the session were linked, so that's my first confusion.
If I drop all the @Transactional annotations, or if I put @Transactional on the control method, it all runs fine, but the database commit isn't done until the control method completes, which is obviously not what I want.
So, I have a few questions:
How can I make the Hibernate session align with my transaction scope?
What is a good pattern for running separate transactions in a loop with JPA and declarative transaction management?
I want to retain the declarative style (i.e. no XML), and I don't want to do anything Hibernate-specific.
Any help appreciated!
Thanks
Marcus
Spring creates a proxy around your service class, which means @Transactional annotations are only applied when the annotated methods are called through the proxy (i.e. where you have injected this service).
You are calling getEntitiesToProcess() and processEntity() from within control(), which means those calls do not go through the proxy but instead share the transactional scope of the control() method (assuming you aren't also calling control() from another method in the same class).
In order for @Transactional to apply, you need to do something like this:
@Autowired
private ApplicationContext applicationContext;

public void control() {
    MyService myService = applicationContext.getBean(MyService.class);
    List<Entity> entities = myService.getEntitiesToProcess();
    for (Entity entity : entities) {
        myService.processEntity(entity.getId());
    }
}
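If fetching the bean from the ApplicationContext feels awkward, another option (a sketch, not tied to the original code) is programmatic demarcation with TransactionTemplate, which starts a new transaction no matter which method makes the call; executeWithoutResult requires Spring 5.2 or later:

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class EntityBatchProcessor {

    private final TransactionTemplate newTxTemplate;
    private final EntityRepository entityRepository;

    public EntityBatchProcessor(PlatformTransactionManager transactionManager,
                                EntityRepository entityRepository) {
        this.newTxTemplate = new TransactionTemplate(transactionManager);
        // Each execute(...) call below runs in its own transaction.
        this.newTxTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
        this.entityRepository = entityRepository;
    }

    public void control() {
        // First transaction: load and update the entities.
        List<Entity> entities = newTxTemplate.execute(status -> {
            List<Entity> result = entityRepository.findAll();
            // update a few properties
            return result;
        });
        // One fresh transaction (and session) per entity.
        for (Entity entity : entities) {
            String id = entity.getId();
            newTxTemplate.executeWithoutResult(status -> {
                Entity fresh = entityRepository.getOne(id);
                fresh.getLazyInitialisedListOfObjects(); // session is open within this transaction
            });
        }
    }
}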

Adding @Transactional causes "collection with cascade="all-delete-orphan" was no longer referenced"

I am upgrading a working project from Spring 2 + Hibernate 3 to Spring 3 + Hibernate 4. Since HibernateTemplate and HibernateDaoSupport have been retired, I did the following.
Before (simplified)
public List<Object> loadTable(final Class<?> cls)
{
    Session s = getSession(); // was calling the old Spring getSession
    Criteria c = s.createCriteria(cls);
    List<Object> objects = c.list();
    if (objects == null)
    {
        objects = new ArrayList<Object>();
    }
    closeSession(s);
    return objects;
}
Now (simplified)
@Transactional(propagation = Propagation.REQUIRED)
public List<Object> loadTable(final Class<?> cls)
{
    Session s = sessionFactory.getCurrentSession();
    Criteria c = s.createCriteria(cls);
    List<Object> objects = c.list();
    if (objects == null)
    {
        objects = new ArrayList<Object>();
    }
    return objects;
}
I also added the transaction annotation declaration to the Spring XML and removed this from the Hibernate properties:
"hibernate.current_session_context_class", "org.hibernate.context.ThreadLocalSessionContext"
The @Transactional annotation seems to have worked, as I see this in the stack trace:
at com.database.spring.DatabaseDAOImpl$$EnhancerByCGLIB$$7d20ef95.loadTable(<generated>)
During initialization, the changes outlined above seem to work for a few calls to the loadTable function, but when it gets around to loading an entity with a parent, I get the "collection with cascade="all-delete-orphan" was no longer referenced" error. Since I have not touched any other code that sets/gets parents or children, am only trying to fix the DAO method, and the query is only doing a SQL SELECT, can anyone see why the code broke?
The problem seems similar to Spring transaction management breaks hibernate cascade
This is unlikely to be a problem with Spring, but rather an issue with your entity handling/definition. When you use delete-orphan on a relation, the underlying PersistentSet MUST NOT be removed from the entity itself; you are only allowed to modify the set instance. So if you are trying to do anything clever within your entity setters, that is the cause.
Also, as far as I remember, there are some issues when you have delete-orphan on both sides of the relation and/or load/manipulate both sides within one session.
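To illustrate the first paragraph, here is a sketch of the setter pattern that triggers the error and the safe variant that keeps the Hibernate-managed collection instance alive (Parent and Child are placeholder entities, written with JPA annotations for brevity):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Parent {

    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<Child> children = new HashSet<>();

    // BAD: replaces the Hibernate-managed collection instance and can trigger
    // "collection with cascade=all-delete-orphan was no longer referenced".
    // public void setChildren(Set<Child> children) { this.children = children; }

    // OK: mutate the existing collection instance instead of swapping it out.
    public void setChildren(Set<Child> children) {
        this.children.clear();
        if (children != null) {
            this.children.addAll(children);
        }
    }
}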
Btw. I don't think "hibernate.current_session_context_class", "org.hibernate.context.ThreadLocalSessionContext" is necessary. In our project, this is the only configuration we have:
@Bean
public LocalSessionFactoryBuilder sessionFactoryBuilder() {
    return ((LocalSessionFactoryBuilder) new LocalSessionFactoryBuilder(
            dataSourceConfig.dataSource()).scanPackages(ENTITY_PACKAGES).
            setProperty("hibernate.id.new_generator_mappings", "true").
            setProperty("hibernate.dialect", dataSourceConfig.dialect()).
            setProperty("javax.persistence.validation.mode", "none"));
}

@Bean
public SessionFactory sessionFactory() {
    return sessionFactoryBuilder().buildSessionFactory();
}
The issue was with session management. The same block of transactional code was being called by other modules that were doing their own session handling. To add to our woes, some of the calling modules were Spring beans while others were written in direct Hibernate API style. This disorganization was enough to keep us from moving up to Hibernate 4 immediately.
Moral of the lesson (how do you like that English?): Use a consistent DAO implementation across the entire project and stick to a clearly defined session and transaction management strategy.
