Spring Declarative Transaction not rolling back - spring

I created a simple Spring application to test the basics of Spring declarative transactions.
As per the rules, a declarative transaction should roll back in case of a RuntimeException.
But it was not rolling back in my case.
The main test class has this code:
public class SpringOraTest {
    public static void main(String[] args) {
        ApplicationContext aplctx = new FileSystemXmlApplicationContext("src\\config\\SpringConfigForOra.xml");

        // Call to test declarative transaction with annotation
        TrxHandleAnnotated prxyobj = (TrxHandleAnnotated) aplctx.getBean("dbCommandAnnotated");
        prxyobj.doTask();
    }
}
The class TrxHandleAnnotated had this code:
@Transactional
public class TrxHandleAnnotated {
    public void doTask() {
        ApplicationContext aplctx = new FileSystemXmlApplicationContext("src\\config\\SpringConfigForOra.xml");
        JdbcTemplate jdbcTemplate = (JdbcTemplate) aplctx.getBean("jdbcTemplate");
        jdbcTemplate.update("insert into kau_emp values(4,'forthmulga' )");
        throw new RuntimeException();
    }
}
And there was the required configuration in the config XML.
I was expecting the transaction to be rolled back when the exception was thrown. But it was not rolled back and the record was getting committed to the DB.
Even after a long search on the internet I could not understand why it was not getting rolled back.
Later I realised that in the doTask() code I was creating the context once again and taking the JdbcTemplate instance out of that new context. This was the root cause of the issue.
I changed the code so that both classes use the same context. And it worked!
public class SpringOraTest {
    public static ApplicationContext aplctx;

    public static void main(String[] args) {
        aplctx = new FileSystemXmlApplicationContext("src\\config\\SpringConfigForOra.xml");

        // Call to test declarative transaction with annotation
        TrxHandleAnnotated prxyobj = (TrxHandleAnnotated) aplctx.getBean("dbCommandAnnotated");
        prxyobj.doTask();
    }
}

@Transactional
public class TrxHandleAnnotated {
    public void doTask() {
        JdbcTemplate jdbcTemplate = (JdbcTemplate) SpringOraTest.aplctx.getBean("jdbcTemplate");
        jdbcTemplate.update("insert into kau_emp values(4,'forthmulga' )");
        throw new RuntimeException();
    }
}
This was a lesson learnt for me: unless otherwise required, the whole application should use only one context object.
This will sound obvious to Spring practitioners, but a Spring novice like me can make such silly mistakes, so I thought of sharing it.
In this particular case, instead of manually creating the JdbcTemplate it is better to declare it as a member variable and use setter injection.
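For example, the class could look roughly like the sketch below; the setter wiring of the "jdbcTemplate" bean in the config XML is assumed, not shown in the original post.

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

@Transactional
public class TrxHandleAnnotated {

    private JdbcTemplate jdbcTemplate;

    // wired by Spring, e.g. <property name="jdbcTemplate" ref="jdbcTemplate"/>
    public void setJdbcTemplate(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void doTask() {
        jdbcTemplate.update("insert into kau_emp values(4,'forthmulga' )");
        throw new RuntimeException();
    }
}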

Use @TransactionConfiguration("name", ROLLBACK) (check the syntax) after @Transactional when declaring TrxHandleAnnotated. See this link for more information about @Transactional and its usage.
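If this refers to Spring Test's @TransactionConfiguration annotation, note that it normally goes on a test class rather than on the service bean itself. A rough sketch of the usual usage follows; the test class, runner and transaction-manager bean name are assumptions, not from the original answer.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("file:src/config/SpringConfigForOra.xml")
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
@Transactional
public class TrxHandleAnnotatedTest {

    @Test
    public void insertIsRolledBackAfterTheTest() {
        // each test method runs in a transaction that is rolled back by default
    }
}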

Related

Spring - Instantiating beans results in infinite recursion and (ironic) StackOverflow exception. How to fix?

When I launch my application, for some reason not apparent to me, it waits until it instantiates the SchedulerFactoryBean to instantiate the jtaTransactionManager bean. When it does this, Spring goes into an infinite recursion, resulting in a StackOverflow exception.
After tracing the code, I see no circular dependency - the transaction manager is not dependent in any way on the SchedulerAccessor.
In the stack view image at the bottom, the Proxy$98 class is some enhancement of org.springframework.scheduling.quartz.SchedulerAccessor.
Edit 1: Update
What is happening is that the SchedulerFactoryBean is being initialized in the preInstantiateSingletons() method of the bean factory. The transaction manager is not a singleton, so it is not pre-initialized. When Spring goes through the advisements, it tries to initialize the bean, but the advisement leads it back to the same pathway.
Edit 2: Internals (or infernals)
The Spring class org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration implements the transactionManager attribute as a LazyProxy.
This is executed well before the initialization code constructs the actual TransactionManager bean. At some point, the class needs to invoke a transaction within the TransactionManager context, which causes the Spring container to try to instantiate the bean. Since there is an advice on the bean proxy, the method interceptor in the SimpleBatchConfiguration class tries to execute the getTransaction() method, which in turn causes the Spring container to try to instantiate the bean, which calls the interceptor, which tries to execute the getTransaction() method ....
Edit 3: @EnableBatchProcessing
I use the word "apparent" a lot here because it's guesswork based on the failure modes during startup.
There is (apparently) no way to configure which transaction manager is used by the @EnableBatchProcessing annotation. Stripping out @EnableBatchProcessing eliminated the recursive call, but left me with an apparent circular dependency.
For some unknown reason, even though I have traced it and this code is called exactly once, it fails because it thinks the bean named "configurer" is already in creation:
#Bean({ "configurer", "defaultBatchConfigurer" })
#Order(1)
public BatchConfigurer configurer() throws IOException, SystemException {
DefaultBatchConfigurer result = new DefaultBatchConfigurer(securityDataSource(), transactionManager());
return result;
}
The code that initiates the recursion is:
protected void registerJobsAndTriggers() throws SchedulerException {
    TransactionStatus transactionStatus = null;
    if (this.transactionManager != null) {
        transactionStatus = this.transactionManager.getTransaction(new DefaultTransactionDefinition());
    }
AppInitializer Startup Code:
@Override
public void onStartup(ServletContext container) throws ServletException {
    Logger logger = LoggerFactory.getLogger(this.getClass());
    try {
        // DB2XADataSource db2DataSource = null;
        AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
        rootContext.register(DatabaseConfig.class);
        rootContext.register(SecurityConfig.class);
        rootContext.register(ExecutionContextConfig.class);
        rootContext.register(SimpleBatchConfiguration.class);
        rootContext.register(MailConfig.class);
        rootContext.register(JmsConfig.class);
        rootContext.register(SchedulerConfig.class);
        rootContext.refresh();
    } catch (Exception ex) {
        logger.error(ex.getMessage(), ex);
    }
}
Construction of the jtaTransactionManager bean in DatabaseConfig:
@Bean(destroyMethod = "shutdown")
@Order(1)
public BitronixTransactionManager bitronixTransactionManager() throws IOException, SystemException {
    btmConfig();
    BitronixTransactionManager bitronixTransactionManager = TransactionManagerServices.getTransactionManager();
    bitronixTransactionManager.setTransactionTimeout(3600); // TODO: Make this configurable
    return bitronixTransactionManager;
}

@Bean({ "transactionManager", "jtaTransactionManager" })
@Order(1)
public PlatformTransactionManager transactionManager() throws IOException, SystemException {
    JtaTransactionManager mgr = new JtaTransactionManager();
    mgr.setTransactionManager(bitronixTransactionManager());
    mgr.setUserTransaction(bitronixTransactionManager());
    mgr.setAllowCustomIsolationLevels(true);
    mgr.setDefaultTimeout(3600);
    mgr.afterPropertiesSet();
    return mgr;
}
Construction of the SchedulerFactoryBean in SchedulerConfig:
@Autowired
@Qualifier("transactionManager")
public void setJtaTransactionManager(PlatformTransactionManager jtaTransactionManager) {
    this.jtaTransactionManager = jtaTransactionManager;
}

@Bean
@Order(3)
public SchedulerFactoryBean schedulerFactoryBean() {
    Properties quartzProperties = new Properties();
    quartzProperties.put("org.quartz.jobStore.driverDelegateClass", delegateClass.get(getDatabaseType()));
    quartzProperties.put("org.quartz.jobStore.tablePrefix", getTableSchema() + ".QRTZ_");
    quartzProperties.put("org.quartz.jobStore.class", org.quartz.impl.jdbcjobstore.JobStoreCMT.class.getName());
    quartzProperties.put("org.quartz.scheduler.instanceName", "MxArchiveScheduler");
    quartzProperties.put("org.quartz.threadPool.threadCount", "3");

    SchedulerFactoryBean result = new SchedulerFactoryBean();
    result.setDataSource(securityDataSource());
    result.setNonTransactionalDataSource(nonJTAsecurityDataSource());
    result.setTransactionManager(jtaTransactionManager);
    result.setQuartzProperties(quartzProperties);
    return result;
}
There were several impossibly convoluted steps to a resolution. I ended up monkeying with it until it worked, because the exception messages were not informative.
In the end, here is the result:
Refactored the packaging so that job/step-scoped and globally-scoped beans were in different packages, so the context scan could easily capture the right beans in the right context.
Cloned and modified org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration to acquire the beans I wanted for my application.
Took out the @EnableBatchProcessing annotation. Since I was already initializing things less automagically, everything was initializing twice, which created confusion.
Cleaned up the usage of datasources - XA and non-XA.
Used the @Primary annotation to pick out the correct datasource, as in the sketch below (biting my tongue here - no way to tell the framework which of several datasources to use without implicitly telling it that in case of questions it should always use "this one"? Really???)
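A rough illustration of that last point; the bean names and JDBC URLs here are placeholders, not the poster's actual XA/non-XA configuration.

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    @Bean
    @Primary
    public DataSource securityDataSource() {
        // injected wherever an unqualified DataSource is required
        return new DriverManagerDataSource("jdbc:h2:mem:security");
    }

    @Bean
    public DataSource nonJTAsecurityDataSource() {
        // only injected where explicitly referenced by name or @Qualifier
        return new DriverManagerDataSource("jdbc:h2:mem:securityNonXa");
    }
}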

Spring static context accessor and integration tests

We have a Spring component which sets the application context into a static field. This static field is then accessed from other parts of the application. I know statics should not be used, but sometimes it is necessary to access the Spring context from non-Spring-managed beans. E.g. the field looks like this:
public class ApplicationContextProvider implements ApplicationContextAware {

    private static ApplicationContext context;

    public ApplicationContext getApplicationContext() {
        return context;
    }

    @Override
    public void setApplicationContext(ApplicationContext ctx) {
        context = ctx;
    }
}
(taken from http://www.dcalabresi.com/blog/java/spring-context-static-class/)
The problem is that when using the JUnit (or Spock) framework for integration tests, a new Spring context is created for tests that have annotations like @TestPropertySource or @ContextConfiguration; such contexts are then cached and reused by other tests with the same configuration (context caching in the Spring test framework).
However, the static field is only updated when a Spring context is created. That means that when a test context is retrieved from the cache, it does not update the static field, because the context was already initialized before being cached. The static field was already overwritten by the last context created in a previous test run with a different configuration, so it does not see the same context as the one the test starts with.
The consequence is that part of the test runs in one Spring context, and from the point where it accesses the static field it runs in the other context.
Does anybody have a solution to this problem? Has anybody got into the same situation?
I've faced the same issue.
A possible solution might be saving the context before the test and restoring it afterwards.
For convenience this can be done via a JUnit rule:
public class ContextRestoreRule extends ExternalResource {

    private ApplicationContext context;

    @Override
    protected void before() throws Throwable {
        // remember the context that is current when the test class starts
        // (assumes ApplicationContextProvider exposes static getContext()/setContext() accessors)
        context = ApplicationContextProvider.getContext();
    }

    @Override
    protected void after() {
        // restore the remembered context once the test class has finished
        ApplicationContextProvider.setContext(context);
    }
}
And in the test (which modifies the context):
@ClassRule
public static ContextRestoreRule contextRestore = new ContextRestoreRule();
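A hypothetical integration test showing where the rule sits; the inner TestConfig class and the test body are placeholders, not from the original answer.

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = SomeIntegrationTest.TestConfig.class)
public class SomeIntegrationTest {

    @Configuration
    static class TestConfig {
        // beans needed by this test would be declared here
    }

    // snapshots the static context before the class runs and restores it afterwards
    @ClassRule
    public static ContextRestoreRule contextRestore = new ContextRestoreRule();

    @Test
    public void runsAgainstAConsistentContext() {
        // code that reads the context via ApplicationContextProvider
    }
}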

ClassBridge with DAO class injected

I have a Hibernate Search ClassBridge where I want to use @Inject to inject a Spring 4.1 managed DAO/service class. I have annotated the ClassBridge with @Configurable. I noticed that Spring 4.2 adds some additional lifecycle methods that might do the trick, but I'm on Spring 4.1.
The goal of this is to store a custom field in the index document based on a query result.
However, since the DAO depends on the SessionFactory being initialized, it doesn't get injected, because it doesn't exist yet when the @Configurable bean gets processed.
Any suggestions on how to achieve this?
You might try to create a custom field bridge provider, which could get hold of the Spring application context through some static method. When provideFieldBridge() is called, you could return a Spring-ified instance from the application context, assuming the timing is better and the DAO bean is available by then.
Not sure whether it'd fly, but it may be worth trying.
Hibernate Search 5.8.0 includes support for bean injection; see the issue https://hibernate.atlassian.net/browse/HSEARCH-1316.
However, I couldn't make it work in my application, so I implemented a workaround.
I created an application context provider to obtain the Spring application context.
public class ApplicationContextProvider implements ApplicationContextAware {

    private static ApplicationContext context;

    public static ApplicationContext getApplicationContext() {
        return context;
    }

    @Override
    public void setApplicationContext(ApplicationContext context) throws BeansException {
        ApplicationContextProvider.context = context;
    }
}
I have added it to the configuration class.
@Configuration
public class RootConfig {

    @Bean
    public ApplicationContextProvider applicationContextProvider() {
        return new ApplicationContextProvider();
    }
}
Finally, I used it in a bridge to retrieve the Spring beans.
public class AttachmentTikaBridge extends TikaBridge {

    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        // get the service bean from the application context provider
        // (to be replaced when Hibernate Search bridges support bean injection)
        ApplicationContext applicationContext = ApplicationContextProvider.getApplicationContext();
        ExampleService exampleService = applicationContext.getBean(ExampleService.class);
        // use exampleService ...
        super.set(name, value, document, luceneOptions);
    }
}
I think this workaround is quite simple in comparison with other solutions, and it doesn't have any big side effects, except that the bean injection happens at runtime.

Spring transaction: REQUIRES_NEW behaviour

Maybe I misunderstand Spring's REQUIRES_NEW behaviour. Here is my code:
@Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED)
public void outterMethod() throws Exception {
    innerMethod1();
    innerMethod2();
}

@Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
public void innerMethod1() throws Exception {
    testService.insert(new Testbo("test-2", new Date()));
}

@Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
public void innerMethod2() throws Exception {
    testService.insert(new Testbo("test-2", new Date()));
    throw new Exception();
}
When innerMethod2 throws the Exception, I thought innerMethod1 would still be able to commit. But all the outer and inner transactions roll back. What am I doing wrong here? What can I do so that innerMethod1 commits when innerMethod2 rolls back?
Although you have correctly understood the behaviour of Propagation.REQUIRES_NEW, you have stumbled upon a common misconception about Spring's transactional behaviour.
In order for the transactional semantics to be applied (that is, for the annotation on the method to have any effect), the method needs to be called from outside the class. Calling a method annotated with @Transactional from inside the same class has absolutely no effect on the transactional processing, because the Spring-generated proxy class that contains the transactional code does not come into play.
In your example, innerMethod2 may be annotated with @Transactional, but since it is called from outterMethod, the annotation is not processed.
Check out this part of the Spring documentation.
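A sketch of one common fix (not part of the original answer): move the REQUIRES_NEW methods into a separate Spring bean so that every call crosses the transactional proxy. InnerService and OuterService are made-up names; TestService and Testbo come from the question, and in practice each class would live in its own file.

import java.util.Date;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
class InnerService {

    @Autowired
    private TestService testService;

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
    public void innerMethod1() {
        testService.insert(new Testbo("test-1", new Date()));
    }

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRES_NEW)
    public void innerMethod2() throws Exception {
        testService.insert(new Testbo("test-2", new Date()));
        throw new Exception();
    }
}

@Service
class OuterService {

    @Autowired
    private InnerService innerService;

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED)
    public void outterMethod() throws Exception {
        innerService.innerMethod1();     // commits in its own transaction when the call returns
        try {
            innerService.innerMethod2(); // rolls back only its own transaction
        } catch (Exception e) {
            // caught here so the outer transaction is not marked rollback-only
        }
    }
}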

Accessing Spring context from a non-Spring component that is loaded at the same time as Spring

The cool enterprise app I'm working on is in the process of going Spring. That's a very cool and exciting exercise for the whole team, but also a huge source of stress. What we do is gradually move legacy components into the Spring context. Now what we have is a huuuge, I mean it, huuuuge component that is no piece of cake to spring-ify, and at the same time it needs access to some of the Spring beans.
Now here comes the problem: this component is loaded at application startup (or bootstrap, whatever you prefer!). That means there is a race condition between this guy and Spring itself, so sometimes when I access the context from within that non-Spring monstrosity, I get a sweet and nice NPE. Which basically means that at the time we need that context, it's not yet initialized!
You might be curious how exactly we're accessing the context: the answer is, it's the standard AppContextProvider pattern.
public class ApplicationContextProvider implements ApplicationContextAware {

    private static ApplicationContext ctx;

    public void setApplicationContext(ApplicationContext applicationContext) {
        ctx = applicationContext;
    }

    public static ApplicationContext getApplicationContext() {
        return ctx;
    }
}
The ideal workaround for me in this case would be to tell Spring to notify that non-Spring component "Okay, I'm up!", and to perform all actions that require the context only after that. Is this actually possible?
Thanks in advance!
The correct way to make the application context available to non-Spring beans is to use the ContextSingletonBeanFactoryLocator.
Take a look at this answer for more details.
Take a look at the mechanism of context events.
Perhaps you can block getApplicationContext() until a ContextRefreshedEvent has been received (if it doesn't create deadlocks):
public class ApplicationContextProvider implements ApplicationListener<ContextRefreshedEvent> {

    private static ApplicationContext ctx;
    private static final Object lock = new Object();

    @Override
    public void onApplicationEvent(ContextRefreshedEvent e) {
        synchronized (lock) {
            ctx = e.getApplicationContext();
            lock.notifyAll(); // wake up anyone blocked in getApplicationContext()
        }
    }

    public static ApplicationContext getApplicationContext() {
        synchronized (lock) {
            while (ctx == null) {
                try {
                    lock.wait();
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IllegalStateException("Interrupted while waiting for the Spring context", ie);
                }
            }
            return ctx;
        }
    }
}
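A hypothetical call site inside the legacy (non-Spring) component: the call blocks until Spring has published ContextRefreshedEvent, after which the bean lookup is safe. LegacyReportService is a made-up bean name, not from the original question.

import org.springframework.context.ApplicationContext;

public class LegacyBootstrapTask {

    public void run() {
        // blocks until the Spring context has been fully refreshed
        ApplicationContext ctx = ApplicationContextProvider.getApplicationContext();
        LegacyReportService reports = ctx.getBean(LegacyReportService.class);
        reports.generateStartupReport();
    }
}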
