Ehcache local transactions with Spring @Transactional

I'm trying to set up a transactional Ehcache, making use of Spring @Cacheable and @Transactional.
My caches work fine with @Cacheable, but as soon as I set up my cache to use a local transaction:
<cache name="currencyCodeMaps" maxElementsInMemory="100" overflowToDisk="false" timeToIdleSeconds="5" timeToLiveSeconds="600" memoryStoreEvictionPolicy="LRU" transactionalMode="local"/>
When I access the cache I get the error:
net.sf.ehcache.transaction.TransactionException: transaction not started
even though the same method is annotated @Transactional.
My Spring transaction manager is:
org.springframework.orm.jpa.JpaTransactionManager
The Ehcache documentation says local transactions are controlled explicitly:
Local transactions are not controlled by a Transaction Manager.
Instead there is an explicit API where a reference is obtained to a
TransactionController for the CacheManager using
cacheManager.getTransactionController() and the steps in the
transaction are called explicitly
But this will be hard, as I want to sync my Ehcache transactions with DB transactions, and DB transactions are controlled by @Transactional.
Is there a way to get local Ehcache transactions to work with Spring @Transactional?

Yes, there is a way to achieve your goal.
Because you have two transactional resources (JPA and Ehcache) and do not use JTA, you have to use a compound transaction manager like org.springframework.data.transaction.ChainedTransactionManager from the spring-data project:
@Bean
public PlatformTransactionManager transactionManager() {
    return new ChainedTransactionManager(ehcacheTransactionManager(), jpaTransactionManager());
}

@Bean
public EhcacheTransactionManager ehcacheTransactionManager() {
    return new EhcacheTransactionManager(ehcacheManager().getTransactionController());
}

@Bean
public PlatformTransactionManager jpaTransactionManager() {
    return new JpaTransactionManager(entityManagerFactory());
}
You need to specify which transaction manager should be used by default:
@Configuration
public class Configuration implements TransactionManagementConfigurer {
    ...
    @Override
    public PlatformTransactionManager annotationDrivenTransactionManager() {
        return transactionManager();
    }
    ...
}
The EhcacheTransactionManager implementation:
import net.sf.ehcache.TransactionController;
import net.sf.ehcache.transaction.local.LocalTransactionContext;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionException;
import org.springframework.transaction.support.AbstractPlatformTransactionManager;
import org.springframework.transaction.support.DefaultTransactionStatus;

public class EhcacheTransactionManager extends AbstractPlatformTransactionManager {

    private TransactionController transactionController;

    public EhcacheTransactionManager(TransactionController transactionController) {
        this.transactionController = transactionController;
    }

    @Override
    protected Object doGetTransaction() throws TransactionException {
        return new EhcacheTransactionObject(transactionController.getCurrentTransactionContext());
    }

    @Override
    protected void doBegin(Object o, TransactionDefinition transactionDefinition) throws TransactionException {
        int timeout = transactionDefinition.getTimeout();
        if (timeout != TransactionDefinition.TIMEOUT_DEFAULT) {
            transactionController.begin(timeout);
        } else {
            transactionController.begin();
        }
    }

    @Override
    protected void doCommit(DefaultTransactionStatus defaultTransactionStatus) throws TransactionException {
        transactionController.commit();
    }

    @Override
    protected void doRollback(DefaultTransactionStatus defaultTransactionStatus) throws TransactionException {
        transactionController.rollback();
    }

    public class EhcacheTransactionObject {

        private LocalTransactionContext currentTransactionContext;

        public EhcacheTransactionObject(LocalTransactionContext currentTransactionContext) {
            this.currentTransactionContext = currentTransactionContext;
        }
    }
}
The source code and a test case can be found here.
This solution has a significant drawback: the Ehcache transaction coordinator does not support suspend/resume operations, so inner transactions (PROPAGATION_REQUIRES_NEW) are not possible. That is why I had to find another approach.
Another option is not to use local Ehcache transactions at all and instead use org.springframework.cache.transaction.AbstractTransactionSupportingCacheManager#setTransactionAware, which decorates caches so that operations are postponed until the transaction ends (see the sketch after this list). But it has the following drawbacks:
Evicted keys stay accessible inside the transaction until the transaction commits
The putIfAbsent operation is not postponed
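A minimal sketch of that decoration, assuming Spring's EhCacheCacheManager (which extends AbstractTransactionSupportingCacheManager):

@Bean
public CacheManager cacheManager(net.sf.ehcache.CacheManager ehcacheManager) {
    EhCacheCacheManager cacheManager = new EhCacheCacheManager(ehcacheManager);
    // Decorate all caches so puts/evicts run in the after-commit phase
    // of Spring-managed transactions instead of immediately
    cacheManager.setTransactionAware(true);
    return cacheManager;
}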
These were problems for me, so I implemented the functionality in a different way: check me.qnox.springframework.cache.tx.TxAwareCacheManagerProxy in the same repository, where the problems described above are solved.

You do not want local transactions; you want XA transactions, which are supported by Ehcache.
Have a look at the documentation for Ehcache 2.10.x or Ehcache 2.8.x.
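For illustration, the cache from the question could be declared in XA mode instead (a sketch; XA mode additionally requires a JTA transaction manager, e.g. Bitronix or Atomikos, together with Spring's JtaTransactionManager):

<cache name="currencyCodeMaps" maxElementsInMemory="100" overflowToDisk="false" timeToIdleSeconds="5" timeToLiveSeconds="600" memoryStoreEvictionPolicy="LRU" transactionalMode="xa_strict"/>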

Related

When trying to update record in after job, batch throws "No transaction is in progress"

I have three different datasources to work with in this batch. Everything runs fine. The job repository is map-based, using a ResourcelessTransactionManager. I configured it like this:
@Configuration
@EnableBatchProcessing
public class BatchConfigurer extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
    }
}
I also use a different PlatformTransactionManager than Spring Batch does (issue), so I set spring.main.allow-bean-definition-overriding to true in my properties.
Now my problem is: I want to update the status of the record that triggers my batch, according to the job's exit status, so I use a JobExecutionListener to achieve that. But while everything works great locally, it throws an error on our remote server (a Kubernetes environment), which makes it more interesting.
The part where I try to update my record is:
@Slf4j
@Component
@Lazy
public class MyListener implements JobExecutionListener {

    @Autowired
    private MyRepo myRepo;

    @Override
    public void beforeJob(JobExecution jobExecution) {
        // before job
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        myRepo.saveAndFlush(myUpdatedEntity); // I cannot share all code because of my company's policy but there is no issue in here
    }
}
The error I get is:
javax.persistence.TransactionRequiredException: no transaction is in progress
As far as I know, Spring Batch is not handling this transaction; I already have a transaction manager for it. Like I said, it works locally, so there shouldn't be any configuration issue. I tried adding @Transactional(transactionManager = "myTransactionManager") to make sure, but it didn't work. What do you think?
Edit: I defined my transaction manager as:
@Configuration
@EnableJpaRepositories(
        basePackages = "repo's package",
        entityManagerFactoryRef = "entityManagerFactory",
        transactionManagerRef = "transactionManager"
)
public class DatasourceConfiguration {

    @Bean(name = "transactionManager")
    public PlatformTransactionManager transactionManager(
            @Qualifier("entityManagerFactory") EntityManagerFactory entityManagerFactory) { // I defined these (datasource etc.)
        return new JpaTransactionManager(entityManagerFactory);
    }
}
Edit 2:
Setting hibernate.allow_update_outside_transaction to true resolved the issue, but I have some concerns about it. Could it affect the rollback of a chunk when an error occurs? I suppose not, because the chunk has its own transaction, but I need to be sure. And I couldn't fully understand why this happens.
Since you are using JPA, you need to configure the job repository as well as the step to use a JpaTransactionManager.
For the job repository, you need to override BatchConfigurer#getTransactionManager as mentioned in the documentation here: https://docs.spring.io/spring-batch/docs/4.3.7/reference/html/job.html#javaConfig.
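A minimal sketch of that override, reusing the JpaTransactionManager bean defined in the question (the qualifier name is an assumption):

@Configuration
@EnableBatchProcessing
public class BatchConfigurer extends DefaultBatchConfigurer {

    @Autowired
    @Qualifier("transactionManager") // the JpaTransactionManager from the question (assumed bean name)
    private PlatformTransactionManager jpaTransactionManager;

    @Override
    public PlatformTransactionManager getTransactionManager() {
        // Use the JPA transaction manager for the job repository instead of the default
        return jpaTransactionManager;
    }
}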
For the step, you can set the transaction manager using the builder:
@Bean
public Step step(JpaTransactionManager transactionManager) {
    return this.stepBuilderFactory.get("step")
            // configure step type and other properties
            .transactionManager(transactionManager)
            .build();
}
EDIT: Add transaction details about JobExecutionListener
JobExecutionListener#afterJob is executed outside the transaction driven by Spring Batch for the step. So if you want to execute transactional code inside that method, you need to manage the transaction yourself. You can do that either declaratively by adding #Transactional(transactionManager = "jpaTransactionManager", propagation = Propagation.REQUIRES_NEW) on your repository method, or programmatically with a TransactionTemplate, something like:
@Override
public void afterJob(JobExecution jobExecution) {
    new TransactionTemplate(transactionManager, transactionAttribute).execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            myRepo.saveAndFlush(myUpdatedEntity);
        }
    });
}

Spring Data Neo4j OGM version and transaction isolation/propagation

I have a service method mergeAndUpdateAndCompliance() which uses the OGM version of Spring Data Neo4j:
@Transactional("neo4jTransactionManager")
@Override
public ComplianceMatrix mergeAndUpdateAndCompliance() {
    mergeAndUpdate();
    return compliance();
}

@Override
public void mergeAndUpdate() {
    // do some update operations
}

@Override
@Transactional(readOnly = true, transactionManager = "neo4jTransactionManager")
public ComplianceMatrix compliance() {
    // do some read operations
}
mergeAndUpdateAndCompliance() invokes two other service methods - mergeAndUpdate() and compliance(). compliance() reads the data updated by mergeAndUpdate(). Right now compliance() doesn't see the data updated in mergeAndUpdate() in the same transaction.
It works only if I add session.getTransaction().commit(); between them:
@Autowired
private Session session;

@Transactional("neo4jTransactionManager")
@Override
public ComplianceMatrix mergeAndUpdateAndCompliance() {
    mergeAndUpdate();
    session.getTransaction().commit();
    return compliance();
}
Is it safe to place session.getTransaction().commit(); inside a Spring transaction? What is the right way to solve this issue? Is it possible to use transaction propagation with SDN in order to configure mergeAndUpdate with REQUIRES_NEW?
You have applied @Transactional on the mergeAndUpdateAndCompliance function, so it also spans the nested compliance method. You should try it this way:
@Override
public ComplianceMatrix mergeAndUpdateAndCompliance() {
    mergeAndUpdate();
    return compliance();
}

@Override
@Transactional("neo4jTransactionManager")
public void mergeAndUpdate() {
    // do some update operations
}

@Override
@Transactional(readOnly = true, transactionManager = "neo4jTransactionManager")
public ComplianceMatrix compliance() {
    // do some read operations
}
Instead of applying it on mergeAndUpdateAndCompliance, apply it on the mergeAndUpdate and compliance functions separately, so that you don't have to manually commit the transaction. (Note that this only helps if the calls go through the Spring proxy; @Transactional is not applied on self-invocation within the same bean.)

Spring Boot transaction support using the @Transactional annotation not working with MongoDB, does anyone have a solution for this?

Spring Boot version - 2.4.4,
MongoDB version - 4.4.4
In my project I want to write entries to two different MongoDB documents, but if one write fails the other should be rolled back. MongoDB supports transactions since version 4.0, but only if you have at least one replica set.
In my case I don't have a replica set and cannot create one given my project structure, so I can't use MongoDB's transaction support. Instead, I am using Spring transactions.
According to the Spring docs, to use transactions in Spring Boot you only need the @Transactional annotation and everything will work (i.e. rollback or commit).
I tried many things from many sources, but the transaction is not rolled back if one write fails.
The demo code is below (demo code, not the actual project).
This is my service class:
@Service
public class UserService {

    @Autowired
    UserRepository userRepository;

    @Autowired
    UserDetailRepository userDetailRepository;

    @Transactional(rollbackFor = Exception.class)
    public ResponseEntity<JsonNode> createUser(SaveUserDetailRequest saveUserDetailRequest) {
        try {
            User _user = userRepository.save(new User(saveUserDetailRequest.getId(), saveUserDetailRequest.getFirstName(), saveUserDetailRequest.getLastName()));
            UserDetail _user_detail = userDetailRepository.save(new UserDetail(saveUserDetailRequest.getPhone(), saveUserDetailRequest.getAddress()));
        } catch (Exception m) {
            System.out.print("Mongo Exception");
        }
        return new ResponseEntity<>(HttpStatus.OK);
    }
}
I also tried the code below, but it is still not working:
@EnableTransactionManagement
@Configuration
@EnableMongoRepositories({ "com.test.transaction.repository" })
@ComponentScan({ "com.test.transaction.service" })
public class Config extends AbstractMongoClientConfiguration {

    private com.mongodb.MongoClient mongoClient;

    @Bean
    MongoTransactionManager transactionManager(MongoDbFactory dbFactory) {
        return new MongoTransactionManager(dbFactory);
    }

    @Bean
    public com.mongodb.MongoClient mongodbClient() {
        mongoClient = new com.mongodb.MongoClient("mongodb://localhost:27017");
        return mongoClient;
    }

    @Override
    protected String getDatabaseName() {
        return "test";
    }
}
The transaction support in Spring is only there to make things easier; it doesn't replace the transaction support of the underlying datastore being used.
In this case, it will simply delegate the starting/committing of a transaction to MongoDB. When using a relational database, it will eventually delegate to that database, etc.
As this is the case, the prerequisites of MongoDB still need to be honoured, and you will still need a replica set.
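To illustrate the delegation, this is roughly what happens at the driver level when Spring starts and commits a Mongo transaction (a conceptual sketch using the MongoDB Java driver, not Spring's actual internals):

try (ClientSession session = mongoClient.startSession()) {
    session.startTransaction();
    try {
        // ... perform writes through this session ...
        session.commitTransaction();
    } catch (RuntimeException e) {
        // On a standalone server (no replica set) the transaction calls
        // themselves fail, which is why @Transactional cannot help here
        session.abortTransaction();
        throw e;
    }
}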

How to involve a Collection in a Spring transaction?

I currently have a Spring application with Hibernate and a PlatformTransactionManager running on JBoss/WildFly.
Some of the methods that manipulate the database also call a bean which contains a LinkedBlockingQueue. This queue stores logging messages that are periodically dispatched to someplace else on another thread (using a simple Spring @Scheduled task).
Would it be possible to make my queue (inside a bean) transactional? I.e., if the transaction rolls back, would I be able to "undo" any operations made on my collection? What's the best strategy to implement this?
So, in short, something like:
@Service
@Transactional
public class PersonService {

    @Autowired
    EntityManager EM;

    @Autowired
    LoggingBuffer logger;

    public void addPerson(String name) {
        EM.persist(new Person(.....));
        logger.add("New person!");
        // A rollback here via some thrown exception would not affect the queue
    }
}

@Component
public class LoggingBuffer {

    private Queue<String> q = new LinkedBlockingQueue<String>();

    public void add(String msg) {
        q.add(msg);
    }
}
Try something like this
@Transactional
public void addPerson(String name) {
    EM.persist(new Person(.....));
    //logger.add("New person!");
    // A rollback here via some thrown exception would not affect the queue
}

public void wrapAddPerson(String name) {
    List<String> localBuffer = new ArrayList<>();
    try {
        addPerson(name);
        localBuffer.add(".....");
    } catch (Exception e) {
        localBuffer.clear();
    } finally {
        localBuffer.forEach(logger::add);
    }
}
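Alternatively, if you want the flush tied to the actual transaction outcome rather than a manual wrapper, Spring's transaction synchronization callbacks can defer the queue operation until after commit. A minimal sketch, assuming the LoggingBuffer bean from the question and Spring 5.3+ (where TransactionSynchronization has default methods; older versions would extend TransactionSynchronizationAdapter):

import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional
public void addPerson(String name) {
    EM.persist(new Person(name));
    // Enqueue the log message only once the surrounding transaction commits;
    // on rollback the callback never runs, so the queue stays untouched
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            logger.add("New person!");
        }
    });
}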

Null pointer exception using Autowired annotation - GemFire listener

I have moved all the Cassandra configuration into a single class. When I tried to create an instance of CassandraOperations in the GemFire cache listener, I was getting a NullPointerException. Can you please assist me with this error?
I have not received any NullPointerException using Spring and Cassandra alone, but I am getting one while integrating with GemFire.
@Component
public class CacheListener<K, V> extends CacheListenerAdapter<K, V> implements Declarable {

    @Autowired
    private CassandraOperations cassandraOperations;

    @Override
    public void init(Properties props) {
    }

    public void afterCreate(EntryEvent e) {
        cassandraOperations.insert(e.getNewValue());
    }

    @Override
    public void close() {
    }
}
public class CassandraConfig {

    @Autowired
    private Environment environment;

    private static final Logger LOGGER = LoggerFactory.getLogger(CassandraConfig.class);

    @Bean
    public CassandraClusterFactoryBean cluster() {
        CassandraClusterFactoryBean cluster = new CassandraClusterFactoryBean();
        cluster.setContactPoints(environment.getProperty("cassandra.contactpoints"));
        cluster.setPort(Integer.parseInt(environment.getProperty("cassandra.port")));
        return cluster;
    }

    @Bean
    public CassandraMappingContext mappingContext() {
        BasicCassandraMappingContext mappingContext = new BasicCassandraMappingContext();
        mappingContext.setUserTypeResolver(new SimpleUserTypeResolver(cluster().getObject(), environment.getProperty("cassandra.keyspace")));
        return mappingContext;
    }

    @Bean
    public CassandraConverter converter() {
        return new MappingCassandraConverter(mappingContext());
    }

    @Bean
    public CassandraSessionFactoryBean session() throws Exception {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster().getObject());
        session.setKeyspaceName(environment.getProperty("cassandra.keyspace"));
        session.setConverter(converter());
        session.setSchemaAction(SchemaAction.NONE);
        return session;
    }

    @Bean
    public CassandraOperations cassandraTemplate() throws Exception {
        return new CassandraTemplate(session().getObject());
    }
}
Exception
[error 2017/05/05 11:16:04.874 CDT <http-nio-7878-exec-1> tid=0x5b] Exception occurred in CacheListener
java.lang.NullPointerException
at CacheListener.afterCreate(CacheListener.java:27)
at com.gemstone.gemfire.internal.cache.EnumListenerEvent$AFTER_CREATE.dispatchEvent(EnumListenerEvent.java:97)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchEvent(LocalRegion.java:8897)
at com.gemstone.gemfire.internal.cache.LocalRegion.dispatchListenerEvent(LocalRegion.java:7376)
at com.gemstone.gemfire.internal.cache.LocalRegion.invokePutCallbacks(LocalRegion.java:6158)
at com.gemstone.gemfire.internal.cache.EntryEventImpl.invokeCallbacks(EntryEventImpl.java:1919)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap$ProxyRegionEntry.dispatchListenerEvents(ProxyRegionMap.java:548)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPutPart2(LocalRegion.java:6012)
at com.gemstone.gemfire.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:232)
at com.gemstone.gemfire.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5824)
at com.gemstone.gemfire.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:118)
at com.gemstone.gemfire.internal.cache.LocalRegion.basicPut(LocalRegion.java:5214)
at com.gemstone.gemfire.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1597)
at com.gemstone.gemfire.internal.cache.LocalRegion.put(LocalRegion.java:1580)
at com.gemstone.gemfire.internal.cache.AbstractRegion.put(AbstractRegion.java:327)
at org.springframework.data.gemfire.GemfireTemplate.put(GemfireTemplate.java:189)
at org.springframework.data.gemfire.repository.support.SimpleGemfireRepository.save(SimpleGemfireRepository.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
What is not apparent in your code/configuration above is how you configured your application-specific GemFire CacheListener using Spring (Data GemFire).
I see you annotated your application CacheListener using Spring's @Component stereotype annotation, but this does nothing without help.
Are you using Spring's classpath component scanning functionality, or perhaps Spring's annotation-based container configuration support? If you are using the latter, you know you still have to explicitly define your application CacheListener in config (JavaConfig or XML), right?
Whenever you encounter a NullPointerException on an @Autowired component/collaborator field, it is a good indication you have a configuration problem, particularly since the @Autowired annotation implies that the "dependency" (e.g. CassandraOperations) is "required" (unless you explicitly set the required attribute of the @Autowired annotation to false, which you did not; required defaults to true).
Therefore, if the CacheListener component had been picked up in the scan and the dependency could not be injected (auto-wired) because no bean of the specified type (e.g. CassandraOperations) was defined in the Spring application context (which it is), then Spring would have thrown an exception when evaluating your configuration class(es).
Also, even your CassandraConfig class must be annotated with Spring's @Configuration annotation, or with the @Component annotation when using either classpath component scanning or annotation-based container config. Or, it must be explicitly defined as a bean in the Spring application context if using neither.
NOTE: the naming convention (i.e. CacheListener) is not very good since it clashes with GemFire's own CacheListener interface. It would be better to call your application-specific extension/implementation something like "GemFireToCassandraCacheListener".
By way of example...
import ...;

@Configuration
class GemFireConfiguration {

    @Bean
    CacheFactoryBean gemfireCache() {
        return new CacheFactoryBean();
    }

    @Bean("CassandraCache")
    PartitionedRegionFactoryBean cassandraCacheRegion() {
        PartitionedRegionFactoryBean cassandraCacheRegion =
            new PartitionedRegionFactoryBean();
        cassandraCacheRegion.setCache(gemfireCache());
        cassandraCacheRegion.setClose(false);
        cassandraCacheRegion.setCacheListeners(
            new CacheListener[] { gemfireToCassandraCacheListener() });
        return cassandraCacheRegion;
    }

    @Bean
    GemFireToCassandraCacheListener gemfireToCassandraCacheListener() {
        return new GemFireToCassandraCacheListener();
    }
}

import ...;

@Configuration
class CassandraConfig {

    // what you have above

}
I have plenty of GemFire configuration examples here, showing native GemFire config alongside Spring (Data GemFire) config, XML vs. JavaConfig vs. annotations, etc.
Finally...
Technically, it might be better to use a GemFire CacheWriter, attached to the Region, rather than a CacheListener, since what you are doing (updating Cassandra on a cache create) is the intended purpose of a CacheWriter.
Of course, the CacheListener is called "after" create vs. the CacheWriter which is "before" create. However, I would say it is always better to update the "primary" data source (or "source of truth") before updating the "cache" to reflect the data source. This is applicable especially if there are constraints in the primary data source that might cause an update to fail. You would not want the cache to be updated if the primary data source could not be.
A CacheWriter is configured similarly to a CacheListener, like so...
@Bean("CassandraCache")
PartitionedRegionFactoryBean cassandraCacheRegion(
        GemFireToCassandraCacheWriter gemfireToCassandraCacheWriter) {
    PartitionedRegionFactoryBean cassandraCacheRegion =
        new PartitionedRegionFactoryBean();
    cassandraCacheRegion.setCache(gemfireCache());
    cassandraCacheRegion.setClose(false);
    // The writer bean is injected here since its own bean method takes a parameter
    cassandraCacheRegion.setCacheWriter(gemfireToCassandraCacheWriter);
    return cassandraCacheRegion;
}

@Bean
GemFireToCassandraCacheWriter gemfireToCassandraCacheWriter(
        CassandraOperations cassandraOperations) {
    return new GemFireToCassandraCacheWriter(cassandraOperations);
}
Where the GemFireToCassandraCacheWriter would be defined as...
class GemFireToCassandraCacheWriter extends CacheWriterAdapter {

    private CassandraOperations cassandraOperations;

    // Using constructor injection is better than field injection
    GemFireToCassandraCacheWriter(CassandraOperations cassandraOperations) {
        this.cassandraOperations = cassandraOperations;
    }

    public void beforeCreate(EntryEvent<?, ?> event) {
        cassandraOperations.insert(event.getNewValue());
    }
}
NOTE: a Region can only have 1 CacheWriter. FYI, functionally the CacheWriter is the counterpart to a CacheLoader. See the GemFire User Guide for more details. In particular, see here, here and here.
Additionally, if you are just using GemFire as a cache for state that is primarily managed in Cassandra, then you might also consider Spring's Cache Abstraction, for which Spring Data GemFire positions GemFire as a "provider" in the abstraction.
Not sure what your GemFire-to-Cassandra use case is all about, but food for thought.
Hope this helps!
-John
