dirty reads of associations in hibernate second level cache - spring

When enabling ehCache (2.7.0) as the Hibernate (4.3.7) second-level cache, Hibernate returns the old collection association.
Model: A Member has a Wallet with Wallet transactions.
Scenario: a Wallet transaction is added to a Member inside a transaction (with ehCache enabled). However, after the scenario (after the commit), the Wallet transaction isn't present on the Member, whereas it is present in the DB.
Scenario testing code:
startTransaction(); // used to create a transaction through Spring
member = findMemberById(); // Hibernate get() to retrieve the member from the DB
final WalletTransaction walTx = member.getEnsureWallet().addWalletTransaction(10); // add wallet tx of 10 euro
member.saveOrUpdate(); // will update the member and the wallet transactions through cascading
commitTransaction();
// assert the wallet transaction is present
startTransaction();
final Taxer mem = findMemberById(member.getId()); // refresh member in session through its PK; logging indicates it comes from the cache
// final Taxer mem = findMemberByLoginName(member.getLoginName()); // when retrieving the member through its loginName, the test works
assertTrue(mem.containsWalletTransaction(walTx)); // FAILS
commitTransaction();
The Hibernate member mapping snippet:
<class name="com.core.domain.MemberDefault" table="mem" discriminator-value="Mem">
    <component name="wallet" class="com.core.domain.Wallet">
        <set name="transactions" table="wallet_tx" cascade="save-update, delete">
            <!--cache usage="read-write" /-->
            <key column="idMember" not-null="true" />
            <composite-element class="com.core.domain.WalletTransactionDefault">
                <property name="amount" type="big_decimal" column="amount" />
                <!-- ... (more props) -->
            </composite-element>
        </set>
    </component>
</class>
The MemberDefault and WalletDefault class snippets:
public class MemberDefault implements Member {
    private Wallet wallet;
    // ...
}

public class WalletDefault implements Wallet {
    private Set<WalletTransaction> transactions;

    public void setTransactions(Set<WalletTransaction> transactions) {
        this.transactions = transactions;
    }

    public Set<WalletTransaction> getTransactions() {
        return this.transactions;
    }
}
Notes:
If I do not add the wallet transaction inside a transaction (i.e. remove the first start/commit), the test passes (the above is isolated test code to reproduce the bug).
If I turn off the second-level cache, the test works.
If I retrieve the member from the DB not through its PK but through its loginName, so that Hibernate uses a query (and thus the query cache), the test works.
I have debugged, enabled Hibernate/ehCache debug logging, modified the Hibernate cache settings, and tried older Hibernate 4 and ehCache versions, but nothing seems to solve it, which is a bit frustrating.
Any advice on how to solve this?

I solved it by always assigning an instance to the wallet field (the Hibernate component) in MemberDefault.
That is, instead of:
private Wallet wallet;
we had to use:
private Wallet wallet = new WalletDefault();
in the MemberDefault class.
Is this a bug, or is there some logic behind it?
I think it is a bug, as Hibernate knows from the mapping that it is a component of type WalletDefault.
(I discovered it by removing the Wallet component for testing)
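For clarity, a minimal sketch of the fix in context; the accessor names (getEnsureWallet/setWallet) are assumptions based on the test code above, and the only real change compared to the original class is the eager initialization of the component field:
public class MemberDefault implements Member {

    // The fix: initialize the component field so Hibernate populates this
    // instance instead of leaving a stale component when the entity comes
    // out of the second-level cache.
    private Wallet wallet = new WalletDefault();

    public Wallet getEnsureWallet() { // name assumed from the test snippet above
        return this.wallet;
    }

    public void setWallet(Wallet wallet) {
        this.wallet = wallet;
    }
}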

Related

Spring boot change connection schema dynamically inside transaction

In my Spring Boot application I need to read data from one schema and write to another. To do so I followed this guide (https://github.com/spring-projects/spring-data-examples/tree/main/jpa/multitenant/schema) and used this answer (https://stackoverflow.com/a/47776205/10857151) to be able to change the schema used at runtime.
While this works fine inside a service without any transaction scope, it doesn't work in a more complex architecture (exception: session/EntityManager is closed) where a couple of services share a transaction to ensure rollback.
Below is a simple example of the architecture:
// simple JPA repositories and collaborators
private FirstRepository repository;
private SecondRepository secondRepository;
private Mapper mapper;
private SchematUpdater schemaUpdater;

@Transactional
public void entrypoint(String idSource, String idTarget) {
    // copy first object
    firstCopyService(idSource, idTarget);
    // copy second object
    secondCopyService(idSource, idTarget);
}

@Transactional
public void firstCopyService(String idSource, String idTarget) {
    // change schema to the source default
    schemaUpdater.changeToSurceSchema();
    Object obj = repository.get(idSource);
    // convert obj before persisting - set new id reference and other things
    obj = mapper.prepareObjToPersist(obj, idTarget);
    // change schema to the target default
    schemaUpdater.changeToTargetSchema();
    repository.saveAndFlush(obj);
}

@Transactional
public void secondCopyService(String idSource, String idTarget) {
    schemaUpdater.changeToSurceSchema();
    Object obj = secondRepository.get(idSource);
    // convert obj before persisting
    obj = mapper.prepareObjToPersist(obj);
    // change schema to the target default
    schemaUpdater.changeToTargetSchema();
    secondRepository.saveAndFlush(obj);
}
I need to know what the best solution could be to allow this dynamic switch while maintaining the transaction scope in each service, without causing problems related to restoring and cleaning the EntityManager session.
Thanks
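For reference, a hedged sketch of what a schema switch like the one in the linked answer might look like when it is performed on the JDBC connection of the current Hibernate session; the class and method names here are illustrative, not the asker's actual SchematUpdater:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.hibernate.Session;
import org.springframework.stereotype.Component;

@Component
public class SessionSchemaUpdater {

    @PersistenceContext
    private EntityManager entityManager;

    // Switch the schema on the JDBC connection currently bound to the
    // Hibernate Session; inside a shared transaction this mutates the same
    // connection the surrounding services are using.
    public void changeSchema(String schema) {
        entityManager.unwrap(Session.class)
                     .doWork(connection -> connection.setSchema(schema));
    }
}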

Combining multi-tenant Spring application with distributed JTA transactions

I have a multi-tenant (database-per-tenant) Spring application. I have configured multiple data source beans, one per tenant, but only one entity manager factory bean, because the tenants have the same tables with the same structure, i.e. the same entities. Unfortunately the uniqueness of the entity manager factory, in the context of how SharedEntityManagerCreator works, creates difficulties for me when using distributed JTA transactions across these tenants. Before creating a new entity manager, SharedEntityManagerCreator uses the entity manager factory bean instance as a key to check whether an entity manager object already exists in the resources of the current transaction:
public static EntityManager doGetTransactionalEntityManager(EntityManagerFactory emf, @Nullable Map<?, ?> properties,
        boolean synchronizedWithTransaction) throws PersistenceException {
    EntityManagerHolder emHolder = (EntityManagerHolder) TransactionSynchronizationManager.getResource(emf);
    /* ... */
}
and if one exists, it is reused. Therefore changing the tenant in the current transaction has no effect, because the entity manager is not recreated but reused, which means that the entity manager object still holds the reference to the previous data source and the operations are executed on the previous tenant rather than on the new one.
I found a quick solution. Since the entity manager object is wrapped in an EntityManagerHolder object inside the transaction resources, I created another class that extends EntityManagerHolder and wraps not one entity manager object but a map of entity managers with tenant IDs as keys:
public class MultiTenantEntityManagerHolder extends EntityManagerHolder {

    private Map<String, EntityManager> entityManagers = new HashedMap<>();
    private EntityManagerFactory entityManagerFactory;

    @Override
    public EntityManager getEntityManager() {
        String tenantId = <get current tenant>;
        if (!entityManagers.containsKey(tenantId)) {
            entityManagers.put(tenantId, entityManagerFactory.createEntityManager());
        }
        return entityManagers.get(tenantId);
    }
}
Then an object of type MultiTenantEntityManagerHolder is created at the beginning of the transaction and placed inside the resources:
TransactionSynchronizationManager.bindResource(entityManagerFactory, new MultiTenantEntityManagerHolder(entityManagerFactory));
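For completeness, a hedged guess at the constructor the snippet above omits; it has to satisfy EntityManagerHolder's constructor and capture the factory passed in the binding call, but its exact shape is an assumption, not shown in the question:
public MultiTenantEntityManagerHolder(EntityManagerFactory entityManagerFactory) {
    // EntityManagerHolder requires an EntityManager, so seed it with a fresh one
    super(entityManagerFactory.createEntityManager());
    this.entityManagerFactory = entityManagerFactory;
}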
But I view this solution as a hack that may not work in the next version of Spring. Therefore I have two questions: is my current solution really a hack, i.e. a weak solution that should be abandoned? And what other approaches are possible for this problem?

How to link JPA persistence context with single database transaction

Latest Spring Boot with JPA and Hibernate: I'm struggling to understand the relationship between transactions, the persistence context and the Hibernate session, and I can't easily avoid the dreaded no-session lazy initialization problem.
I update a set of objects in one transaction and then I want to loop through those objects, processing each in a separate transaction - seems straightforward.
public void control() {
    List<Entity> entities = getEntitiesToProcess();
    for (Entity entity : entities) {
        processEntity(entity.getId());
    }
}

@Transactional(value = TxType.REQUIRES_NEW)
public List<Entity> getEntitiesToProcess() {
    List<Entity> entities = entityRepository.findAll();
    for (Entity entity : entities) {
        // Update a few properties
    }
    return entities;
}

@Transactional(value = TxType.REQUIRES_NEW)
public void processEntity(String id) {
    Entity entity = entityRepository.getOne(id);
    entity.getLazyInitialisedListOfObjects(); // throws LazyInitializationException: could not initialize proxy - no Session
}
However, I get a problem because (I think) the same Hibernate session is being used for both transactions. When I call entityRepository.getOne(id) in the 2nd transaction, I can see in the debugger that I am returned exactly the same object that was returned by findAll() in the 1st transaction, without a DB access. If I understand this correctly, it's the Hibernate cache doing this? If I then call a method on my object that requires lazy evaluation, I get a "no session" error. I thought the cache and the session were linked, so that's my first confusion.
If I drop all the @Transactional annotations, or if I put a @Transactional on the control method, it all runs fine, but the database commit isn't done until the control method completes, which is obviously not what I want.
So, I have a few questions:
How can I make the Hibernate session align with my transaction scope?
What is a good pattern for doing separate transactions in a loop with JPA and declarative transaction management?
I want to retain the declarative style (i.e. no XML), and don't want to do anything Hibernate-specific.
Any help appreciated!
Thanks
Marcus
Spring creates a proxy around your service class, which means @Transactional annotations are only applied when annotated methods are called through the proxy (i.e. via an injected reference to this service).
You are calling getEntitiesToProcess() and processEntity() from within control(), which means those calls do not go through the proxy but instead have the transactional scope of the control() method (assuming control() itself isn't also being called from another method in the same class).
In order for @Transactional to apply, you need to do something like this:
@Autowired
private ApplicationContext applicationContext;

public void control() {
    MyService myService = applicationContext.getBean(MyService.class);
    List<Entity> entities = myService.getEntitiesToProcess();
    for (Entity entity : entities) {
        myService.processEntity(entity.getId());
    }
}
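An alternative sketch along the same lines (not part of the original answer; bean and method names reuse those from the question and are illustrative): lazily self-inject the proxied bean and route the calls through it, which achieves the same effect without looking the bean up from the ApplicationContext by hand.
@Service
public class MyService {

    // Lazily injected proxy of this very bean; calls made through it get the
    // @Transactional(REQUIRES_NEW) behaviour applied.
    @Lazy
    @Autowired
    private MyService self;

    public void control() {
        List<Entity> entities = self.getEntitiesToProcess();
        for (Entity entity : entities) {
            self.processEntity(entity.getId()); // new transaction per entity
        }
    }
}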

jpa fetching data from cache not database eclipselink

Hi, I am working on a small project using JPA (EclipseLink). I am running two instances of the same application on different Tomcat servers, and they use the same schema. But if I change a value from one application, the other application does not fetch the data from the database; it always returns its local data. I have also tried @Cacheable(false) on the entity class and
<property name="eclipselink.cache.shared.default" value="false" />
<property name="eclipselink.query-results-cache" value="false"/>
in the XML file, but they still do not return the latest data from the database. My code is as follows:
EntityManager entityManager = GlobalBean.store.globalEntityManager();
String queryString = "select g from GlobalUrnConf g";
TypedQuery<GlobalUrnConf> globalUrnConfQuery = entityManager.createQuery(queryString, GlobalUrnConf.class);
return globalUrnConfQuery.getSingleResult();
I am fetching the entity manager and factory in the following way:
public EntityManagerFactory factory() {
    if (this.entityManagerFactory == null) {
        entityManagerFactory = Persistence.createEntityManagerFactory("FileUpload");
    }
    return this.entityManagerFactory;
}

public EntityManager globalEntityManager() {
    if (this.entityManager == null) {
        this.entityManager = factory().createEntityManager();
    }
    return this.entityManager;
}
Please help me. Thanks in advance.
You are caching the EntityManager, which is required to cache and hold onto managed entities that are read through it, both for identity purposes and to manage changes. You can get around this by using a refresh query hint, or other query hints that avoid the caches, but it is probably better if you manage the EntityManager's lifecycle and cache more directly: only obtain them as needed, or clear them at logical points when the returned entities can be released. Try calling EntityManager.clear(), for instance.
return globalUrnConfQuery
        .setHint(QueryHints.CACHE_USAGE, CacheUsage.DoNotCheckCache) // bypass the cache and read from the database
        .getSingleResult();
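A minimal sketch of the other suggestion: clear the long-lived EntityManager (or, better, create a fresh one per unit of work) so previously managed entities are released. GlobalBean.store is the asker's accessor from the question; the structure here is illustrative.
EntityManager entityManager = GlobalBean.store.globalEntityManager();
entityManager.clear(); // detach previously managed entities so the next query hits the database
GlobalUrnConf conf = entityManager
        .createQuery("select g from GlobalUrnConf g", GlobalUrnConf.class)
        .getSingleResult();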

Why does Hibernate flush on select queries (EmptyInterceptor)?

I would like to understand a counter-intuitive Hibernate behaviour I am seeing. I always thought that "flush" meant that Hibernate had an in-memory data structure that has to be written to the DB. This is not what I am seeing.
I have created the following Interceptor:
public class FeedInterceptor extends EmptyInterceptor {

    @Override
    public void postFlush(Iterator entities) {
        System.out.println("postFlush");
    }
}
Registered it in my ApplicationContext
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="entityInterceptor">
        <bean class="interceptor.FeedInterceptor"/>
    </property>
</bean>
But, strangely enough, I see "postFlush" written to the console for every row retrieved from the DB by my DAO:
Session session = sessionFactory.getCurrentSession();
Query query = session.createQuery("from Feed feed");
query.list();
Why is that?
Let's assume Hibernate didn't flush the session; then you could have the following situation:
Person p = new Person();
p.setName("Pomario");
dao.create(p);
Person pomario = dao.findPersonByName("Pomario")
//pomario is null?
When finding a person by name, Hibernate issues a select statement to the database. If it didn't send the previous statements first, the returned result might not be consistent with changes made earlier in the session: here the database hasn't received the create statement yet, so it would return an empty result set.
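A hedged sketch (not from the original answer): if you explicitly do not want Hibernate to flush before queries, you can switch the session's flush mode and flush manually; FlushMode.MANUAL exists in Hibernate 3/4, and flushing then becomes your responsibility.
Session session = sessionFactory.getCurrentSession();
session.setFlushMode(FlushMode.MANUAL); // no automatic flush before queries
Query query = session.createQuery("from Feed feed");
query.list();    // runs without triggering postFlush
session.flush(); // flush explicitly when you actually want pending changes written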
