Atomikos transaction management with Spring Boot / Spring JMS

I have a Spring Boot application with Spring JMS using a DefaultMessageListenerContainer. I am using Atomikos for transaction management.
On exception, the message queue rollback works fine and messages do move to the backout queue, but the database updates do not roll back. I have set the auto-configured JtaTransactionManager on the DefaultMessageListenerContainer bean. Are there any other configurations required here to get true global transaction management? I am using MyBatis for database access.
public class CusListener implements MessageListener {
    public void onMessage(Message message) {
        try {
            //Database call
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
@Configuration
public class ListenerContainer {
    @Bean
    public DefaultMessageListenerContainer defaultMessageListenerContainer(ConnectionFactory queueConnectionFactory, MQQueue queue, MessageListener listener,
            JtaTransactionManager jtaTransactionManager) {
        DefaultMessageListenerContainer defaultMessageListenerContainer =
                new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setConnectionFactory(queueConnectionFactory);
        defaultMessageListenerContainer.setDestination(queue);
        defaultMessageListenerContainer.setMessageListener(listener);
        defaultMessageListenerContainer.setTransactionManager(jtaTransactionManager);
        defaultMessageListenerContainer.setSessionTransacted(true);
        defaultMessageListenerContainer.setConcurrency("3-10");
        return defaultMessageListenerContainer;
    }
    //other bean declarations passed into the method above
}
@Configuration
public class PlanListenerSqlSessFac {
    @Bean(name = "sqlSessionFactory")
    public SqlSessionFactory sqlSessionFactory(@Qualifier("dataSource") NMCryptoDataSourceWrapper dataSource) throws Exception {
        // ... (body omitted in the original post)
    }
    @Bean(name = "driverManagerDataSource")
    public DriverManagerDataSource driverManagerDataSource() {
        DriverManagerDataSource driverManagerDataSource = new DriverManagerDataSource();
        return driverManagerDataSource;
    }
}

You should use an AtomikosDataSourceBean as the dataSource. A DriverManagerDataSource is not XA-aware, so the JDBC work never enlists in the global transaction and only the JMS session rolls back.
See the documentation: https://www.atomikos.com/bin/view/Documentation/ConfiguringJdbc
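For illustration, here is a minimal sketch of such a configuration. It assumes an Oracle XADataSource underneath and the auto-configured JtaTransactionManager on top; the class name, resource name, and connection details are illustrative, not taken from the question:
import com.atomikos.jdbc.AtomikosDataSourceBean;
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class XaDataSourceConfig {
    @Bean(initMethod = "init", destroyMethod = "close")
    public DataSource dataSource() {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName("oracleXa");
        // Wrap a real XADataSource so that JDBC work enlists in the JTA transaction
        ds.setXaDataSourceClassName("oracle.jdbc.xa.client.OracleXADataSource");
        Properties xaProperties = new Properties();
        xaProperties.setProperty("URL", "jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // illustrative
        xaProperties.setProperty("user", "user");
        xaProperties.setProperty("password", "password");
        ds.setXaProperties(xaProperties);
        ds.setPoolSize(5);
        return ds;
    }
}
MyBatis's SqlSessionFactory should then be built on this data source, and no separate DataSourceTransactionManager should be configured, so that the single JtaTransactionManager coordinates both the JMS session and the JDBC work.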

Related

Spring: disable @Transactional from a Configuration Java file

I have a code base which is used by two different applications. Some of my Spring service classes have the annotation @Transactional. On server start I would like to disable @Transactional based on some configuration.
Below is my configuration class.
@Configuration
@EnableTransactionManagement
@PropertySource("classpath:application.properties")
public class WebAppConfig {
    private static final String PROPERTY_NAME_DATABASE_DRIVER = "db.driver";
    @Resource
    private Environment env;
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getRequiredProperty(PROPERTY_NAME_DATABASE_DRIVER));
        dataSource.setUrl(url);
        dataSource.setUsername(userId);
        dataSource.setPassword(password);
        return dataSource;
    }
    @Bean
    public PlatformTransactionManager txManager() {
        DefaultTransactionDefinition def = new DefaultTransactionDefinition();
        def.setIsolationLevel(TransactionDefinition.ISOLATION_DEFAULT);
        if (appName.equals("ABC")) {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_NEVER);
        } else {
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
        }
        CustomDataSourceTransactionManager txM = new CustomDataSourceTransactionManager(def);
        txM.setDataSource(dataSource());
        return txM;
    }
    @Bean
    public JdbcTemplate jdbcTemplate() {
        JdbcTemplate jdbcTemplate = new JdbcTemplate();
        jdbcTemplate.setDataSource(dataSource());
        return jdbcTemplate;
    }
}
I am trying to override methods in DataSourceTransactionManager to get this behaviour, but it still tries to commit/roll back the transaction at the end of the transaction. Since there is no database connection available, it throws an exception.
If I keep @Transactional(propagation=Propagation.NEVER), everything works perfectly, but I cannot modify it because another app uses the same code base and requires it.
I would like to know if there is a way to fully disable transactions from configuration without modifying the @Transactional annotations.
I'm not sure if it will work, but you can try to implement a custom TransactionInterceptor and override the method that wraps the invocation in a transaction, removing the transactional behaviour. Something like this:
public class NoOpTransactionInterceptor extends TransactionInterceptor {
    @Override
    protected Object invokeWithinTransaction(
            Method method,
            Class<?> targetClass,
            InvocationCallback invocation
    ) throws Throwable {
        // Simply invoke the original unwrapped code
        return invocation.proceedWithInvocation();
    }
}
Then declare a conditional bean in one of your @Configuration classes:
// assuming this property is stored in the Spring application properties file
@ConditionalOnProperty(name = "turnOffTransactions", havingValue = "true")
@Bean
@Role(BeanDefinition.ROLE_INFRASTRUCTURE)
public TransactionInterceptor transactionInterceptor(
        /* default bean would be injected here */
        TransactionAttributeSource transactionAttributeSource
) {
    TransactionInterceptor interceptor = new NoOpTransactionInterceptor();
    interceptor.setTransactionAttributeSource(transactionAttributeSource);
    return interceptor;
}
You'll probably need some additional configuration; I can't verify that right now.

Define an in-memory JobRepository

I'm testing Spring Batch using Spring Boot. My need is to define jobs working on an Oracle database, but I don't want to save job and step state inside this DB.
I've read in the documentation that I can use an in-memory repository with the MapJobRepositoryFactoryBean.
So I've implemented this bean:
@Bean
public JobRepository jobRepository() {
    MapJobRepositoryFactoryBean factoryBean = new MapJobRepositoryFactoryBean(new ResourcelessTransactionManager());
    try {
        JobRepository jobRepository = factoryBean.getObject();
        return jobRepository;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
But when my job starts, the first thing Spring Batch does is create its tables in the Oracle DB, and it continues to use the Oracle datasource. It's as if my JobRepository definition isn't taken into account.
What did I miss?
EDIT: I'm using Spring Boot 1.5.3 and Spring Batch 3.0.7
With Spring Boot 2.x, the solution is simpler.
You have to extend the DefaultBatchConfigurer class like this:
@Component
public class NoPersistenceBatchConfigurer extends DefaultBatchConfigurer {
    @Override
    public void setDataSource(DataSource dataSource) {
        // deliberately empty: no DataSource means no persistent job repository
    }
}
Without a datasource, the framework automatically switches to the MapJobRepository.
A few things here:
If you have a DataSource configured in your ApplicationContext, by default Spring Batch will try to use it.
In order not to use a DataSource when one is available within the ApplicationContext, you'll need to create your own BatchConfigurer. You can do that by extending the DefaultBatchConfigurer.
Don't use the MapJobRepository except for testing purposes. It has a number of issues (thread safety, etc.) and is not recommended for production use. Use an in-memory database like HSQLDB instead (you'll still need to create your own BatchConfigurer to do so); a sketch of that follows below.
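A minimal sketch of the in-memory-database route, assuming spring-jdbc's EmbeddedDatabaseBuilder and the HSQLDB schema script that ships inside the Spring Batch jar (the class and bean names are illustrative):
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
public class InMemoryBatchDataSourceConfig {
    @Bean
    public DataSource batchDataSource() {
        // Embedded HSQLDB initialized with the Batch metadata schema, so job and
        // step state lives in memory instead of Oracle
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .addScript("classpath:org/springframework/batch/core/schema-hsqldb.sql")
                .build();
    }
}
Your BatchConfigurer would then hand this DataSource to the job repository (DefaultBatchConfigurer accepts one via its constructor), keeping the batch metadata in HSQLDB while the job itself still reads and writes the Oracle database.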
Thanks to pvpkiran's comment I've found my problem: it's necessary to define a JobLauncher bean as well.
Below is an example:
@Bean
public JobRepository jobRepository() {
    MapJobRepositoryFactoryBean factoryBean = new MapJobRepositoryFactoryBean(new ResourcelessTransactionManager());
    try {
        JobRepository jobRepository = factoryBean.getObject();
        return jobRepository;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}

@Bean
public JobLauncher jobLauncher(JobRepository jobRepository) {
    SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
    jobLauncher.setJobRepository(jobRepository);
    return jobLauncher;
}
If you're using Spring Boot and @EnableBatchProcessing, you would extend DefaultBatchConfigurer and override the createJobRepository method. Create a ResourcelessTransactionManager and a JobRepository using MapJobRepositoryFactoryBean; the rest of the beans will be auto-created by Spring Boot.
@Configuration
public class InMemoryBatchContextConfigurer extends DefaultBatchConfigurer {
    @Bean
    public ResourcelessTransactionManager resourcelessTransactionManager() {
        return new ResourcelessTransactionManager();
    }
    @Override
    protected JobRepository createJobRepository() throws Exception {
        MapJobRepositoryFactoryBean factoryBean = new MapJobRepositoryFactoryBean();
        factoryBean.setTransactionManager(resourcelessTransactionManager());
        return factoryBean.getObject();
    }
}
Extend the DefaultBatchConfigurer class and override the createJobRepository method, just like below:
@Configuration
public class InMemoryBatchConfigurer extends DefaultBatchConfigurer {
    @Override
    protected JobRepository createJobRepository() throws Exception {
        return new MapJobRepositoryFactoryBean().getObject();
    }
}

Make OracleDataSource robust against database restarts and hiccups

So I've got an advanced queue working with a ConnectionFactory:
ConnectionFactory jmsQueueConnectionFactory() throws JMSException, SQLException {
    final OracleDataSource dataSource = new OracleDataSource();
    dataSource.setUser(username);
    dataSource.setPassword(password);
    dataSource.setURL(url);
    dataSource.setImplicitCachingEnabled(true);
    dataSource.setFastConnectionFailoverEnabled(true);
    return AQjmsFactory.getConnectionFactory(dataSource);
}
This runs against a shared database which might be restarted, or sometimes the network has a short hiccup, which results in no more messages arriving from the queue.
I use a Spring MessageListener to retrieve messages, and there is actually no indicator whatsoever that the queue is no longer running. After restarting the application I then get a load of older messages that should have been processed already.
Is there a way, or a specific data source implementation, that reconnects or something?
Update: Listener Impl
@Bean
OracleAqQueueFactoryBean etlQueueFactory() throws JMSException, SQLException {
    final OracleAqQueueFactoryBean bean = new OracleAqQueueFactoryBean();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setOracleQueueUser("USER");
    bean.setOracleQueueName("QUEUE");
    return bean;
}

@Bean
DefaultMessageListenerContainer jmsContainer() throws JMSException, SQLException {
    final DefaultMessageListenerContainer bean = new DefaultMessageListenerContainer();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setDestination(etlQueueFactory().getObject());
    bean.setMessageListener(new MyListener());
    bean.setSessionTransacted(false);
    return bean;
}

public class MyListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        ...
    }
}
I guess you have to handle this at the JMS level, not the DB level.
Not sure what type of listener you are using, but Spring's DefaultMessageListenerContainer is implemented with a consumer.receive(timeout) loop. It's more robust than a plain listener because it will attempt to reconnect on each poll cycle (if needed).
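Building on the jmsContainer bean from the question, a sketch of two container settings that may help; setRecoveryInterval and setExceptionListener are real DefaultMessageListenerContainer APIs, but the values here are illustrative guesses, not from the original answer:
@Bean
DefaultMessageListenerContainer jmsContainer() throws JMSException, SQLException {
    final DefaultMessageListenerContainer bean = new DefaultMessageListenerContainer();
    bean.setConnectionFactory(jmsQueueConnectionFactory());
    bean.setDestination(etlQueueFactory().getObject());
    bean.setMessageListener(new MyListener());
    bean.setSessionTransacted(false);
    // Keep retrying the broker connection every 5 seconds after a failure
    bean.setRecoveryInterval(5000);
    // Surface connection-level problems instead of failing silently
    bean.setExceptionListener(ex -> System.err.println("JMS connection problem: " + ex));
    return bean;
}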

Multiple datasources in a Spring Boot application

I'm trying to use two database connections within a Spring Boot (v1.2.3) application as described in the docs (http://docs.spring.io/spring-boot/docs/1.2.3.RELEASE/reference/htmlsingle/#howto-two-datasources).
Problem seems to be that the secondary datasource is getting constructed with the properties for the primary datasource.
Can someone point out what I'm missing here?
@SpringBootApplication
class Application {

    @Bean
    @ConfigurationProperties(prefix="spring.datasource.secondary")
    public DataSource secondaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public JdbcTemplate secondaryJdbcTemplate(DataSource secondaryDataSource) {
        return new JdbcTemplate(secondaryDataSource)
    }

    @Bean
    @Primary
    @ConfigurationProperties(prefix="spring.datasource.primary")
    public DataSource primaryDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "primaryJdbcTemplate")
    public JdbcTemplate jdbcTemplate(DataSource primaryDataSource) {
        return new JdbcTemplate(primaryDataSource)
    }

    static void main(String[] args) {
        SpringApplication.run Application, args
    }
}
application.properties:
spring.datasource.primary.url=jdbc:oracle:thin:@example.com:1521:DB1
spring.datasource.primary.username=user1
spring.datasource.primary.password=
spring.datasource.primary.driverClassName=oracle.jdbc.OracleDriver

spring.datasource.secondary.url=jdbc:oracle:thin:@example.com:1521:DB2
spring.datasource.secondary.username=user2
spring.datasource.secondary.password=
spring.datasource.secondary.driverClassName=oracle.jdbc.OracleDriver
Both JdbcTemplate beans are being created with the primary DataSource, because by-type injection resolves to the @Primary bean. You can use @Qualifier to have the secondary DataSource injected into the secondary JdbcTemplate, as shown below. Alternatively, you could call the DataSource bean methods directly when creating the JdbcTemplate beans.
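A minimal sketch of the @Qualifier variant, using the bean name Spring derives from the secondaryDataSource method in the question:
@Bean
public JdbcTemplate secondaryJdbcTemplate(
        @Qualifier("secondaryDataSource") DataSource secondaryDataSource) {
    // Explicitly request the secondary DataSource bean instead of the @Primary one
    return new JdbcTemplate(secondaryDataSource);
}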

spring-boot-starter-jta-atomikos and spring-boot-starter-batch

Is it possible to use both these starters in a single application?
I want to load records from a CSV file into a database table. The Spring Batch tables are stored in a different database, so I assume I need to use JTA to handle the transaction.
Whenever I add @EnableBatchProcessing to my @Configuration class, it configures a PlatformTransactionManager, which stops one being auto-configured by Atomikos.
Are there any spring boot + batch + jta samples out there that show how to do this?
Many Thanks,
James
I just went through this and I found something that seems to work. As you note, @EnableBatchProcessing causes a DataSourceTransactionManager to be created, which messes everything up. I'm using modular=true in @EnableBatchProcessing, so the ModularBatchConfiguration class is activated.
What I did was stop using @EnableBatchProcessing and instead copy the entire ModularBatchConfiguration class into my project. Then I commented out the transactionManager() method, since the Atomikos configuration creates the JtaTransactionManager. I also had to override the jobRepository() method, because that was hardcoded to use the DataSourceTransactionManager created inside DefaultBatchConfiguration.
I also had to explicitly import the JtaAutoConfiguration class. This wires everything up correctly (according to the Actuator's "beans" endpoint, thank god for that). But when you run it, the transaction manager throws an exception because something somewhere sets an explicit transaction isolation level. So I also wrote a BeanPostProcessor to find the transaction manager and call txnMgr.setAllowCustomIsolationLevels(true); a sketch of that follows below.
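A minimal sketch of such a post-processor, assuming the Atomikos transaction manager is exposed as Spring's JtaTransactionManager (the class name is illustrative, not the answerer's actual code):
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;
import org.springframework.transaction.jta.JtaTransactionManager;

@Component
public class IsolationLevelRelaxingPostProcessor implements BeanPostProcessor {
    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof JtaTransactionManager) {
            // Atomikos rejects custom isolation levels unless this flag is set
            ((JtaTransactionManager) bean).setAllowCustomIsolationLevels(true);
        }
        return bean;
    }
}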
Now everything works, but while the job is running I cannot fetch the current data from the batch_step_execution table using JdbcTemplate, even though I can see the data in SQLyog. This must have something to do with transaction isolation, but I haven't been able to understand it yet.
Here is what I have for my configuration class, copied from Spring and modified as noted above. P.S. I have the DataSource that points to the database with the batch tables annotated as @Primary. Also, I changed my DataSource beans to be instances of org.apache.tomcat.jdbc.pool.XADataSource; I'm not sure if that's necessary.
@Configuration
@Import(ScopeConfiguration.class)
public class ModularJtaBatchConfiguration implements ImportAware
{
    @Autowired(required = false)
    private Collection<DataSource> dataSources;

    private BatchConfigurer configurer;

    @Autowired
    private ApplicationContext context;

    @Autowired(required = false)
    private Collection<BatchConfigurer> configurers;

    private AutomaticJobRegistrar registrar = new AutomaticJobRegistrar();

    @Bean
    public JobRepository jobRepository(DataSource batchDataSource, JtaTransactionManager jtaTransactionManager) throws Exception
    {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(batchDataSource);
        factory.setTransactionManager(jtaTransactionManager);
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    @Bean
    public JobLauncher jobLauncher() throws Exception {
        return getConfigurer(configurers).getJobLauncher();
    }

//    @Bean
//    public PlatformTransactionManager transactionManager() throws Exception {
//        return getConfigurer(configurers).getTransactionManager();
//    }

    @Bean
    public JobExplorer jobExplorer() throws Exception {
        return getConfigurer(configurers).getJobExplorer();
    }

    @Bean
    public AutomaticJobRegistrar jobRegistrar() throws Exception {
        registrar.setJobLoader(new DefaultJobLoader(jobRegistry()));
        for (ApplicationContextFactory factory : context.getBeansOfType(ApplicationContextFactory.class).values()) {
            registrar.addApplicationContextFactory(factory);
        }
        return registrar;
    }

    @Bean
    public JobBuilderFactory jobBuilders(JobRepository jobRepository) throws Exception {
        return new JobBuilderFactory(jobRepository);
    }

    @Bean
    // hopefully this will autowire the Atomikos JTA txn manager
    public StepBuilderFactory stepBuilders(JobRepository jobRepository, JtaTransactionManager ptm) throws Exception {
        return new StepBuilderFactory(jobRepository, ptm);
    }

    @Bean
    public JobRegistry jobRegistry() throws Exception {
        return new MapJobRegistry();
    }

    @Override
    public void setImportMetadata(AnnotationMetadata importMetadata) {
        AnnotationAttributes enabled = AnnotationAttributes.fromMap(importMetadata.getAnnotationAttributes(
                EnableBatchProcessing.class.getName(), false));
        Assert.notNull(enabled,
                "@EnableBatchProcessing is not present on importing class " + importMetadata.getClassName());
    }

    protected BatchConfigurer getConfigurer(Collection<BatchConfigurer> configurers) throws Exception {
        if (this.configurer != null) {
            return this.configurer;
        }
        if (configurers == null || configurers.isEmpty()) {
            if (dataSources == null || dataSources.isEmpty()) {
                throw new UnsupportedOperationException("You are screwed");
            } else if (dataSources.size() == 1) {
                DataSource dataSource = dataSources.iterator().next();
                DefaultBatchConfigurer configurer = new DefaultBatchConfigurer(dataSource);
                configurer.initialize();
                this.configurer = configurer;
                return configurer;
            } else {
                throw new IllegalStateException("To use the default BatchConfigurer the context must contain no more than " +
                        "one DataSource, found " + dataSources.size());
            }
        }
        if (configurers.size() > 1) {
            throw new IllegalStateException(
                    "To use a custom BatchConfigurer the context must contain precisely one, found "
                            + configurers.size());
        }
        this.configurer = configurers.iterator().next();
        return this.configurer;
    }
}

@Configuration
class ScopeConfiguration {

    private StepScope stepScope = new StepScope();

    private JobScope jobScope = new JobScope();

    @Bean
    public StepScope stepScope() {
        stepScope.setAutoProxy(false);
        return stepScope;
    }

    @Bean
    public JobScope jobScope() {
        jobScope.setAutoProxy(false);
        return jobScope;
    }
}
I found a solution where I was able to keep @EnableBatchProcessing but had to implement BatchConfigurer and the Atomikos beans; see my full answer in this SO answer. A skeleton of the BatchConfigurer side follows below.
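As a rough illustration of that direction (not the linked answer's exact code), a skeleton BatchConfigurer that hands Spring Batch a JTA transaction manager; it assumes an Atomikos-backed JtaTransactionManager and an XA-capable batch DataSource are already defined elsewhere:
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.BatchConfigurer;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.explore.support.JobExplorerFactoryBean;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.stereotype.Component;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.jta.JtaTransactionManager;

@Component
public class JtaBatchConfigurer implements BatchConfigurer {

    private final DataSource batchDataSource;
    private final JtaTransactionManager jtaTransactionManager;

    public JtaBatchConfigurer(DataSource batchDataSource, JtaTransactionManager jtaTransactionManager) {
        this.batchDataSource = batchDataSource;
        this.jtaTransactionManager = jtaTransactionManager;
    }

    @Override
    public JobRepository getJobRepository() throws Exception {
        // Batch metadata writes join the global JTA transaction
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(batchDataSource);
        factory.setTransactionManager(jtaTransactionManager);
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    @Override
    public PlatformTransactionManager getTransactionManager() {
        return jtaTransactionManager;
    }

    @Override
    public JobLauncher getJobLauncher() throws Exception {
        SimpleJobLauncher launcher = new SimpleJobLauncher();
        launcher.setJobRepository(getJobRepository());
        launcher.afterPropertiesSet();
        return launcher;
    }

    @Override
    public JobExplorer getJobExplorer() throws Exception {
        JobExplorerFactoryBean factory = new JobExplorerFactoryBean();
        factory.setDataSource(batchDataSource);
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}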
