Migration to Spring Boot 2 and using Spring Batch 4 - spring-boot

I am migrating Spring Boot from 1.4.2 to 2.0.0, which also includes migrating Spring Batch from 3.0.7 to 4.0.0, and it looks like the batch process no longer works when I try to run it with the new Spring Batch version.
When I tried to debug, I found a problem when Batch tries to get data from batch_job_execution_context.
I can see that getting the data from the database works fine, but the new version of Batch fails to parse the database data
{"map":[{"entry":[{"string":["name",""]},{"string":["sender",""]},{"string":["id",""]},{"string":["nav",""]},{"string":["created",140418]}]}]}
with this error:
com.fasterxml.jackson.databind.exc.MismatchedInputException: Unexpected token (START_OBJECT), expected VALUE_STRING: need JSON String that contains type id (for subtype of java.lang.Object) at [Source: (ByteArrayInputStream); line: 1, column: 9] (through reference chain: java.util.HashMap["map"])
I have found that when I delete all batch metadata tables and recreate them from scratch, the batch seems to work again. It looks like the metadata JSON format has changed to this
{"name":"","sender":"145844","id":"","nav":"","created":"160909"}
I do not want to delete the old data to make this work again, so is there any way to fix this?
Has anyone else tried this upgrade? It would be nice to know about any other breaking changes that I may not have noticed.
Thanks

Before Spring Batch 4, the default serialization mechanism for the ExecutionContext was XStream. It now uses Jackson by default, which is not compatible with the old serialization format. The old serializer (XStreamExecutionContextStringSerializer) is still available, but you'll need to configure it yourself by implementing a BatchConfigurer and overriding the configuration in the JobRepositoryFactoryBean.
For the record, this is related to this issue: https://jira.spring.io/browse/BATCH-2575.

Based on Michael's answer above, this code block worked for me to extend the default configuration. I had to wire up the serializer to both the JobRepository and the JobExplorer:
@Configuration
@EnableBatchProcessing
public class MyBatchConfigurer extends DefaultBatchConfigurer {
private final DataSource dataSource;
@Autowired
public MyBatchConfigurer(final DataSource dataSource) {
this.dataSource = dataSource;
}
@Bean
ExecutionContextSerializer getSerializer() {
return new XStreamExecutionContextStringSerializer();
}
@Override
protected JobRepository createJobRepository() throws Exception {
final JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setDataSource(dataSource);
factory.setSerializer(getSerializer());
factory.setTransactionManager(getTransactionManager());
factory.afterPropertiesSet();
return factory.getObject();
}
@Override
protected JobExplorer createJobExplorer() throws Exception {
final JobExplorerFactoryBean jobExplorerFactoryBean = new JobExplorerFactoryBean();
jobExplorerFactoryBean.setDataSource(dataSource);
jobExplorerFactoryBean.setSerializer(getSerializer());
jobExplorerFactoryBean.afterPropertiesSet();
return jobExplorerFactoryBean.getObject();
}
}

In the solution by @anotherdave and @michael-minella, you could also replace the plain XStreamExecutionContextStringSerializer with an instance of the following class. It accepts both formats when deserializing and serializes to the new format.
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Map;
import com.fasterxml.jackson.core.JsonProcessingException;
import org.springframework.batch.core.repository.ExecutionContextSerializer;
import org.springframework.batch.core.repository.dao.Jackson2ExecutionContextStringSerializer;
import org.springframework.batch.core.repository.dao.XStreamExecutionContextStringSerializer;
/**
* Enables Spring Batch 4 to read both ExecutionContext entries written by earlier versions and the Spring 5 format. Entries are
* written in the Spring 5 format.
*/
@SuppressWarnings("deprecation")
class XStreamOrJackson2ExecutionContextSerializer implements ExecutionContextSerializer {
private final XStreamExecutionContextStringSerializer xStream = new XStreamExecutionContextStringSerializer();
private final Jackson2ExecutionContextStringSerializer jackson = new Jackson2ExecutionContextStringSerializer();
public XStreamOrJackson2ExecutionContextSerializer() throws Exception {
xStream.afterPropertiesSet();
}
// The caller closes the stream; and the decoration by ensureMarkSupported does not need any cleanup.
#SuppressWarnings("resource")
#Override
public Map<String, Object> deserialize(InputStream inputStream) throws IOException {
InputStream repeatableInputStream = ensureMarkSupported(inputStream);
repeatableInputStream.mark(Integer.MAX_VALUE);
try {
return jackson.deserialize(repeatableInputStream);
} catch (JsonProcessingException e) {
repeatableInputStream.reset();
return xStream.deserialize(repeatableInputStream);
}
}
private static InputStream ensureMarkSupported(InputStream in) {
return in.markSupported() ? in : new BufferedInputStream(in);
}
@Override
public void serialize(Map<String, Object> object, OutputStream outputStream) throws IOException {
jackson.serialize(object, outputStream);
}
}
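To plug this combined serializer into the BatchConfigurer shown earlier, the getSerializer() bean can simply return it instead of the plain XStream serializer. A minimal sketch (the throws clause is only there because the constructor calls afterPropertiesSet()):

@Bean
public ExecutionContextSerializer getSerializer() throws Exception {
    // Reads old XStream entries as well as new Jackson entries; always writes the new format.
    return new XStreamOrJackson2ExecutionContextSerializer();
}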

Related

Spring Cloud Function (GCP adapter) throws Hibernate lazy could not initialize proxy - no session

This is a common error in Spring when it tries to automatically transform an entity object that contains Hibernate proxies, but I don't know how to load the Jackson Datatype Hibernate5 module in the Spring Cloud Function GCP adapter.
@SpringBootApplication
@Log4j2
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
@Bean
public WebMvcConfigurer corsConfigurer() {
log.info("configurando cors");
return new WebMvcConfigurer() {
@Override
public void addCorsMappings(CorsRegistry registry) {
registry.addMapping("/**").allowedOrigins("*");
}
};
}
@Bean
public Module datatypeHibernateModule() {
log.info("Cargando modulo hibernate jackson");
return new Hibernate5Module();
}
}
If I use the same code in a normal Spring Boot project the module works, but in this case I found in the log that the adapter doesn't use Jackson; it uses Gson instead.
at com.google.gson.Gson.toJson(Gson.java:638)
at com.google.gson.Gson.toJson(Gson.java:618)
at org.springframework.cloud.function.json.GsonMapper.toJson(GsonMapper.java:70)
This is the entire log file
My first workaround is to change the Page object to a String and use the Jackson mapper manually.
public class ObtenerEstados implements Function<Void, String> {
@Autowired
private EstadoService estadoService;
@SneakyThrows
@Override
public String apply(Void unused) {
Page<Estado> page = estadoService.buscarTodos(0, 33);
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.registerModule(new Hibernate5Module());
String objectAsString = objectMapper.writeValueAsString(page);
return objectAsString;
}
}
I created two branches in the GitHub repository:
functions (this branch has the bug)
function-jackson-hibernate5 (this branch has the workaround)
If you already have Docker and Docker Compose installed you can reproduce the bug easily.
Follow these steps:
git clone https://github.com/ripper2hl/sepomex.git
cd sepomex
git checkout -b dev origin/functions
docker-compose pull db
docker-compose up -d db
export spring_profiles_active=local
mvn -Pgcp function:run
Then execute with curl or any REST client:
curl http://localhost:8080/
I know the alternative of using a DTO object, but I prefer not to use that option.
So whenever Gson is on the classpath it is given priority, and of course with Google that is the case. Set the spring.http.converters.preferred-json-mapper=jackson property to force Jackson.
Finally I fixed it with this fragment of code:
@SpringBootApplication
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
@Bean
public JsonMessageConverter jsonMessageConverter() {
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.registerModule(new Hibernate5Module());
JacksonMapper jacksonMapper = new JacksonMapper(objectMapper);
return new JsonMessageConverter(jacksonMapper);
}
}
The documentation explains that Gson is the default MessageConverter, but it is not clear how to easily change from Gson to Jackson.
https://docs.spring.io/spring-cloud-function/docs/current/reference/html/spring-cloud-function.html#_provided_messageconverters

Injecting configuration dependency

I am creating a cache client wrapper using the Spring framework. This is to provide a cache layer to our application. Right now we are using Redis. I have found that the spring-data-redis library is very good for creating my wrapper.
My application will pass a configuration POJO to my wrapper and will then use the interface that I will provide.
spring-data-redis provides an easy way to access Redis using two types:
RedisConnectionFactory
RedisTemplate<String, Object>
However, I will be providing a better interface to my application, with interface functions like:
public Object getValue( final String key ) throws ConfigInvalidException;
public void setValue( final String key, final Object value ) throws ConfigInvalidException;
public void setValueWithExpiry(final String key, final Object value, final int seconds, final TimeUnit timeUnit) throws ConfigInvalidException;
I still want to provide RedisConnectionFactory and RedisTemplate beans.
My question is how to initialize my wrapper application with this configuration POJO?
Currently my configuration looks like this:
import java.util.List;
public class ClusterConfigurationProperties {
List<String> nodes;
public List<String> getNodes() {
return nodes;
}
public void setNodes(List<String> nodes) {
this.nodes = nodes;
}
}
And my AppConfig.java looks like this:
import com.ajio.Exception.ConfigInvalidException;
import com.ajio.configuration.ClusterConfigurationProperties;
import com.ajio.validator.Validator;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
@Configuration
public class AppConfig {
@Autowired
private ClusterConfigurationProperties clusterConfigurationProperties;
@Autowired
private Validator validator;
@Bean
ClusterConfigurationProperties clusterConfigurationProperties() {
return null;
}
@Bean
Validator validator() {
return new Validator();
}
@Bean
RedisConnectionFactory connectionFactory() throws ConfigInvalidException {
if (clusterConfigurationProperties == null)
throw new ConfigInvalidException("Please provide a cluster configuration POJO in context");
validator.validate(clusterConfigurationProperties);
return new JedisConnectionFactory(new RedisClusterConfiguration(clusterConfigurationProperties.getNodes()));
}
@Bean
RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) throws ConfigInvalidException {
RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
redisTemplate.setConnectionFactory(connectionFactory());
redisTemplate.setKeySerializer( new StringRedisSerializer() );
redisTemplate.setHashValueSerializer( new GenericToStringSerializer<>( Object.class ) );
redisTemplate.setValueSerializer( new GenericToStringSerializer<>( Object.class ) );
return redisTemplate;
}
}
Here I am expecting a ClusterConfigurationProperties POJO as a bean in the application that will be using the wrapper's interface.
But to compile my wrapper, I have created a null bean myself. Then when the application uses it, there will be two beans, one from the application and one from the wrapper.
How should I resolve this problem?
Actually, what I wanted was to have the cluster config as a bean in my client application. For that I don't need to declare an @Autowired cluster config in my wrapper; instead the bean method should take the cluster config as a parameter, so that the client passes the cluster config object when creating the bean. And the bean created in the client code should contain the code for creating the Redis connection factory.
But all of this was meant to keep my client unaware of Redis. So the better solution is to have a wrapper class that takes the cluster config POJO and creates the Redis connection factory etc., and the client should create this wrapper as a bean (a rough sketch is below).
A poor grasp of Spring and design patterns led me to this mistake.
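A rough sketch of that idea, assuming a hypothetical RedisCacheClient class name and reusing the ClusterConfigurationProperties, Validator and ConfigInvalidException types from the question (imports as in the AppConfig above):

// Hypothetical wrapper: the client application only deals with this class and the config POJO.
public class RedisCacheClient {

    private final RedisTemplate<String, Object> redisTemplate;

    public RedisCacheClient(final ClusterConfigurationProperties properties) throws ConfigInvalidException {
        new Validator().validate(properties);
        JedisConnectionFactory factory =
                new JedisConnectionFactory(new RedisClusterConfiguration(properties.getNodes()));
        factory.afterPropertiesSet();
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericToStringSerializer<>(Object.class));
        template.afterPropertiesSet();
        this.redisTemplate = template;
    }

    public Object getValue(final String key) {
        return redisTemplate.opsForValue().get(key);
    }

    public void setValue(final String key, final Object value) {
        redisTemplate.opsForValue().set(key, value);
    }
}

The client application then declares the config POJO and the wrapper as beans, for example:

@Bean
public RedisCacheClient redisCacheClient(ClusterConfigurationProperties properties) throws ConfigInvalidException {
    return new RedisCacheClient(properties);
}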

Multiple servlet mappings in Spring Boot

Is there any way, via the 'context-path' property, to set multiple mappings for the same Spring Boot MVC application? My goal is to avoid creating many DispatcherServlets for the URI mapping.
For example:
servlet.context-path =/, /context1, context2
You can create a @Bean annotated method that returns a ServletRegistrationBean and add multiple mappings there. This is the preferable way, as Spring Boot encourages Java configuration rather than config files:
@Bean
public ServletRegistrationBean myServletRegistration()
{
String urlMapping1 = "/mySuperApp/service1/*";
String urlMapping2 = "/mySuperApp/service2/*";
ServletRegistrationBean registration = new ServletRegistrationBean(new MyBeautifulServlet(), urlMapping1, urlMapping2);
//registration.set... other properties may be here
return registration;
}
On application startup you'll be able to see in logs:
INFO | localhost | org.springframework.boot.web.servlet.ServletRegistrationBean | Mapping servlet: 'MyBeautifulServlet' to [/mySuperApp/service1/*, /mySuperApp/service2/*]
You only need a single DispatcherServlet, with its root context path set to whatever you want (could be / or /mySuperApp).
By declaring multiple @RequestMapping annotations, you will be able to serve different URIs with the same DispatcherServlet.
Here is an example: setting the DispatcherServlet to /mySuperApp with @RequestMapping("/service1") and @RequestMapping("/service2") would expose the following endpoints (a controller sketch follows the list):
/mySuperApp/service1
/mySuperApp/service2
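A minimal sketch of such controllers (hypothetical class names, each in its own file; assumes server.servlet.context-path=/mySuperApp on Spring Boot 2, or server.context-path on 1.x):

@RestController
@RequestMapping("/service1")
public class Service1Controller {

    @GetMapping
    public String service1() {
        return "service1"; // served at /mySuperApp/service1
    }
}

@RestController
@RequestMapping("/service2")
public class Service2Controller {

    @GetMapping
    public String service2() {
        return "service2"; // served at /mySuperApp/service2
    }
}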
Having multiple contexts for a single servlet is not part of the Servlet specification. A single servlet cannot serve from multiple contexts.
What you can do is map multiple values in your request mappings:
@RequestMapping({"/context1/service1", "/context2/service1"})
I don't see any other way around it.
You can use the 'server.contextPath' property to set the context path for the entire Spring Boot application (e.g. server.contextPath=/live/path1).
Also, you can set a class-level mapping that will be applied to all the methods, e.g.:
@RestController
@RequestMapping(value = "/testResource", produces = MediaType.APPLICATION_JSON_VALUE)
public class TestResource{
@RequestMapping(method = RequestMethod.POST, value="/test", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<TestDto> save(@RequestBody TestDto testDto) {
...
With this structure, you can use /live/path1/testResource/test to execute the save method.
None of the answers to this sort of question seem to mention that you'd normally solve this problem by configuring a reverse proxy in front of the application (e.g. nginx or Apache httpd) to rewrite the request.
However, if you must do it in the application, then this method works (with Spring Boot 2.6.2 at least): https://www.broadleafcommerce.com/blog/configuring-a-dynamic-context-path-in-spring-boot.
It describes creating a filter, putting it early in the filter chain and basically rewriting the URL (like a reverse proxy might) so that requests all go to the same place (i.e. the actual servlet.context-path); a rough sketch of that approach follows.
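A minimal sketch of such a filter (not the blog's exact code), under the assumption that the application itself runs at the root context path and a legacy /context2 prefix should be rewritten onto the real /context1 paths:

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Hypothetical filter that rewrites a legacy prefix before the request reaches the DispatcherServlet.
@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class LegacyContextRewriteFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain chain) throws ServletException, IOException {
        String uri = request.getRequestURI();
        if (uri.startsWith("/context2/")) {
            // Forward internally to the equivalent path under the real prefix.
            String rewritten = "/context1" + uri.substring("/context2".length());
            request.getRequestDispatcher(rewritten).forward(request, response);
            return;
        }
        chain.doFilter(request, response);
    }
}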
I've found an alternative to the filter described in https://www.broadleafcommerce.com/blog/configuring-a-dynamic-context-path-in-spring-boot that requires less code.
This uses Tomcat's RewriteValve (https://tomcat.apache.org/tomcat-9.0-doc/rewrite.html) to rewrite URLs outside of the context path, e.g. if the real context path is "context1" then it will map /context2/* to /context1/*:
@Component
public class LegacyUrlWebServerFactoryCustomizer implements WebServerFactoryCustomizer<TomcatServletWebServerFactory> {
private static final List<String> LEGACY_PATHS = List.of("context2", "context3");
@Override
public void customize(TomcatServletWebServerFactory factory) {
RewriteValve rewrite = new RewriteValve() {
@Override
protected void initInternal() throws LifecycleException {
super.initInternal();
try {
String config = LEGACY_PATHS.stream() //
.map(p -> String.format("RewriteRule ^/%s(/.*)$ %s$1", p, factory.getContextPath())) //
.collect(Collectors.joining("\n"));
setConfiguration(config);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
};
factory.addEngineValves(rewrite);
}
}
If you need to use HTTP redirects instead then there is a little bit more required (to avoid a NullPointerException in sendRedirect):
@Component
public class LegacyUrlWebServerFactoryCustomizer implements WebServerFactoryCustomizer<TomcatServletWebServerFactory> {
private static final List<String> LEGACY_PATHS = List.of("context2", "context3");
@Override
public void customize(TomcatServletWebServerFactory factory) {
RewriteValve rewrite = new RewriteValve() {
@Override
protected void initInternal() throws LifecycleException {
super.initInternal();
try {
String config = LEGACY_PATHS.stream() //
.map(p -> String.format("RewriteRule ^/%s(/.*)$ %s$1 R=permanent", p, factory.getContextPath())) //
.collect(Collectors.joining("\n"));
setConfiguration(config);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
@Override
public void invoke(Request request, Response response) throws IOException, ServletException {
if (request.getContext() == null) {
String[] s = request.getRequestURI().split("/");
if (s.length > 1 && LEGACY_PATHS.contains(s[1])) {
request.getMappingData().context = new FailedContext();
}
}
super.invoke(request, response);
}
};
factory.addEngineValves(rewrite);
}
}
I use this approach:
import javax.servlet.ServletContext;
import javax.servlet.ServletRegistration;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;
@Configuration
public class WebAppInitializer implements WebApplicationInitializer {
@Override
public void onStartup(ServletContext servletContext) {
AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
rootContext.register(AppConfig.class);
rootContext.setServletContext(servletContext);
ServletRegistration.Dynamic dispatcher = servletContext.addServlet("dispatcher", new DispatcherServlet(rootContext));
dispatcher.setLoadOnStartup(1);
dispatcher.addMapping("/mapping1/*");
dispatcher.addMapping("/mapping2/*");
servletContext.addListener(new ContextLoaderListener(rootContext));
}
}

Spring MongoDB: How to configure MongoDB with MongoClientFactoryBean

When configuring MongoDB in Spring, the reference documentation says to register MongoDB like this:
@Configuration
public class AppConfig {
/*
* Use the standard Mongo driver API to create a com.mongodb.Mongo instance.
*/
public @Bean Mongo mongo() throws UnknownHostException {
return new Mongo("localhost");
}
}
This pollutes the code with the UnknownHostException checked exception. The use of the checked exception is not desirable, as Java-based bean metadata uses methods as a means to set object dependencies, making the calling code cluttered.
So Spring proposes:
@Configuration
public class AppConfig {
/*
* Factory bean that creates the com.mongodb.Mongo instance
*/
public @Bean MongoFactoryBean mongo() {
MongoFactoryBean mongo = new MongoFactoryBean();
mongo.setHost("localhost");
return mongo;
}
}
But unfortunately, since Spring Data MongoDB 1.7, MongoFactoryBean has been deprecated and replaced by MongoClientFactoryBean.
So:
@Bean
public MongoClientFactoryBean mongoClientFactoryBean() {
MongoClientFactoryBean factoryBean = new MongoClientFactoryBean();
factoryBean.setHost("localhost");
return factoryBean;
}
Then it's time to configure the MongoDbFactory, which has only one implementation, SimpleMongoDbFactory. SimpleMongoDbFactory has only two non-deprecated constructors, one of which is SimpleMongoDbFactory(MongoClient, databaseName).
But MongoClientFactoryBean can only return type Mongo instead of MongoClient.
So, am I missing something to make this pure Spring configuration work?
Yes, it returns a Mongo :-(
But as MongoClient extends Mongo, that'll be OK anyway; just @Autowire the bean as a Mongo:
@Autowired
private Mongo mongo;
Then use it:
MongoOperations mongoOps = new MongoTemplate(mongo, "databaseName");
Do you really need the SimpleMongoDbFactory? See this post.
In my case, I'm using the following code to create the MongoTemplate. I'm using MongoRepository, and as it only needs a MongoTemplate, I need to create the MongoTemplate bean only.
@Bean
public MongoTemplate mongoTemplate() throws Exception {
MongoClient mongoClient = new MongoClient("localhost");
MongoDbFactory mongoDbFactory = new SimpleMongoDbFactory(mongoClient, "kyc_reader_test");
return new MongoTemplate(mongoDbFactory);
}
In my configuration file, I've added:
@EnableMongoRepositories(basePackages = "mongo.repository.package.name")
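Putting those pieces together, the whole configuration class might look roughly like this (a sketch assembled from the snippets above; the package and database names are just the placeholders already used):

@Configuration
@EnableMongoRepositories(basePackages = "mongo.repository.package.name")
public class MongoConfig {

    @Bean
    public MongoTemplate mongoTemplate() throws Exception {
        // Same construction as above: client -> factory -> template.
        MongoClient mongoClient = new MongoClient("localhost");
        MongoDbFactory mongoDbFactory = new SimpleMongoDbFactory(mongoClient, "kyc_reader_test");
        return new MongoTemplate(mongoDbFactory);
    }
}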

spring-boot-starter-jta-atomikos and spring-boot-starter-batch

Is it possible to use both these starters in a single application?
I want to load records from a CSV file into a database table. The Spring Batch tables are stored in a different database, so I assume I need to use JTA to handle the transaction.
Whenever I add @EnableBatchProcessing to my @Configuration class it configures a PlatformTransactionManager, which stops this being auto-configured by Atomikos.
Are there any spring boot + batch + jta samples out there that show how to do this?
Many Thanks,
James
I just went through this and I found something that seems to work. As you note, @EnableBatchProcessing causes a DataSourceTransactionManager to be created, which messes up everything. I'm using modular=true in @EnableBatchProcessing, so the ModularBatchConfiguration class is activated.
What I did was to stop using @EnableBatchProcessing and instead copy the entire ModularBatchConfiguration class into my project. Then I commented out the transactionManager() method, since the Atomikos configuration creates the JtaTransactionManager. I also had to override the jobRepository() method, because that was hardcoded to use the DataSourceTransactionManager created inside DefaultBatchConfiguration.
I also had to explicitly import the JtaAutoConfiguration class. This wires everything up correctly (according to the Actuator's "beans" endpoint, thank god for that). But when you run it, the transaction manager throws an exception because something somewhere sets an explicit transaction isolation level. So I also wrote a BeanPostProcessor to find the transaction manager and call txnMgr.setAllowCustomIsolationLevels(true) (a sketch of that idea is below).
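A minimal sketch of such a post-processor (the class name and the instanceof check are illustrative, not the original answer's code):

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;
import org.springframework.transaction.jta.JtaTransactionManager;

// JtaTransactionManager rejects custom isolation levels unless this flag is set.
@Component
public class AllowCustomIsolationLevelsPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof JtaTransactionManager) {
            ((JtaTransactionManager) bean).setAllowCustomIsolationLevels(true);
        }
        return bean;
    }
}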
Now everything works, but while the job is running, I cannot fetch the current data from batch_step_execution table using JdbcTemplate, even though I can see the data in SQLYog. This must have something to do with transaction isolation, but I haven't been able to understand it yet.
Here is what I have for my configuration class, copied from Spring and modified as noted above. P.S. I have the DataSource that points to the database with the batch tables annotated as @Primary. Also, I changed my DataSource beans to be instances of org.apache.tomcat.jdbc.pool.XADataSource; I'm not sure if that's necessary.
@Configuration
@Import(ScopeConfiguration.class)
public class ModularJtaBatchConfiguration implements ImportAware
{
@Autowired(required = false)
private Collection<DataSource> dataSources;
private BatchConfigurer configurer;
@Autowired
private ApplicationContext context;
@Autowired(required = false)
private Collection<BatchConfigurer> configurers;
private AutomaticJobRegistrar registrar = new AutomaticJobRegistrar();
@Bean
public JobRepository jobRepository(DataSource batchDataSource, JtaTransactionManager jtaTransactionManager) throws Exception
{
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setDataSource(batchDataSource);
factory.setTransactionManager(jtaTransactionManager);
factory.afterPropertiesSet();
return factory.getObject();
}
@Bean
public JobLauncher jobLauncher() throws Exception {
return getConfigurer(configurers).getJobLauncher();
}
// @Bean
// public PlatformTransactionManager transactionManager() throws Exception {
// return getConfigurer(configurers).getTransactionManager();
// }
@Bean
public JobExplorer jobExplorer() throws Exception {
return getConfigurer(configurers).getJobExplorer();
}
@Bean
public AutomaticJobRegistrar jobRegistrar() throws Exception {
registrar.setJobLoader(new DefaultJobLoader(jobRegistry()));
for (ApplicationContextFactory factory : context.getBeansOfType(ApplicationContextFactory.class).values()) {
registrar.addApplicationContextFactory(factory);
}
return registrar;
}
@Bean
public JobBuilderFactory jobBuilders(JobRepository jobRepository) throws Exception {
return new JobBuilderFactory(jobRepository);
}
@Bean
// hopefully this will autowire the Atomikos JTA txn manager
public StepBuilderFactory stepBuilders(JobRepository jobRepository, JtaTransactionManager ptm) throws Exception {
return new StepBuilderFactory(jobRepository, ptm);
}
@Bean
public JobRegistry jobRegistry() throws Exception {
return new MapJobRegistry();
}
@Override
public void setImportMetadata(AnnotationMetadata importMetadata) {
AnnotationAttributes enabled = AnnotationAttributes.fromMap(importMetadata.getAnnotationAttributes(
EnableBatchProcessing.class.getName(), false));
Assert.notNull(enabled,
"#EnableBatchProcessing is not present on importing class " + importMetadata.getClassName());
}
protected BatchConfigurer getConfigurer(Collection<BatchConfigurer> configurers) throws Exception {
if (this.configurer != null) {
return this.configurer;
}
if (configurers == null || configurers.isEmpty()) {
if (dataSources == null || dataSources.isEmpty()) {
throw new UnsupportedOperationException("You are screwed");
} else if(dataSources != null && dataSources.size() == 1) {
DataSource dataSource = dataSources.iterator().next();
DefaultBatchConfigurer configurer = new DefaultBatchConfigurer(dataSource);
configurer.initialize();
this.configurer = configurer;
return configurer;
} else {
throw new IllegalStateException("To use the default BatchConfigurer the context must contain no more than" +
"one DataSource, found " + dataSources.size());
}
}
if (configurers.size() > 1) {
throw new IllegalStateException(
"To use a custom BatchConfigurer the context must contain precisely one, found "
+ configurers.size());
}
this.configurer = configurers.iterator().next();
return this.configurer;
}
}
@Configuration
class ScopeConfiguration {
private StepScope stepScope = new StepScope();
private JobScope jobScope = new JobScope();
@Bean
public StepScope stepScope() {
stepScope.setAutoProxy(false);
return stepScope;
}
@Bean
public JobScope jobScope() {
jobScope.setAutoProxy(false);
return jobScope;
}
}
I found a solution where I was able to keep @EnableBatchProcessing but had to implement BatchConfigurer and the Atomikos beans; see my full answer in this SO answer.
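A rough sketch of that direction (not the linked answer verbatim): a BatchConfigurer that hands the JTA transaction manager auto-configured for Atomikos to the batch infrastructure. The class name and constructor wiring are assumed for illustration:

import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.BatchConfigurer;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.explore.support.JobExplorerFactoryBean;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.jta.JtaTransactionManager;

// Hypothetical BatchConfigurer that reuses the Atomikos-backed JtaTransactionManager
// instead of letting @EnableBatchProcessing create a DataSourceTransactionManager.
@Configuration
public class JtaBatchConfigurer implements BatchConfigurer {

    private final DataSource batchDataSource;
    private final JtaTransactionManager jtaTransactionManager;

    public JtaBatchConfigurer(DataSource batchDataSource, JtaTransactionManager jtaTransactionManager) {
        this.batchDataSource = batchDataSource;
        this.jtaTransactionManager = jtaTransactionManager;
    }

    @Override
    public JobRepository getJobRepository() throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(batchDataSource);
        factory.setTransactionManager(jtaTransactionManager);
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    @Override
    public PlatformTransactionManager getTransactionManager() {
        return jtaTransactionManager;
    }

    @Override
    public JobLauncher getJobLauncher() throws Exception {
        SimpleJobLauncher launcher = new SimpleJobLauncher();
        launcher.setJobRepository(getJobRepository());
        launcher.afterPropertiesSet();
        return launcher;
    }

    @Override
    public JobExplorer getJobExplorer() throws Exception {
        JobExplorerFactoryBean factory = new JobExplorerFactoryBean();
        factory.setDataSource(batchDataSource);
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}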
