Spring Batch writes empty file - spring

I am new to batch programming and I am using Spring Batch. I tried some online examples, but after fixing a lot of errors I am getting an empty file. I am trying to read from a simple MySQL database and write to a CSV file. users.csv is located in the resources folder. I have spent a lot of time trying to see what I am doing wrong. There is no error message, but the file is always empty. I would really appreciate it if anyone could help me fix this issue. Here is my code:
BatchConfig.class
@Configuration
@EnableBatchProcessing
public class BatchConfig {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private DataSource dataSource;

    @Bean
    public JdbcCursorItemReader<User> reader() {
        JdbcCursorItemReader<User> cursorItemReader = new JdbcCursorItemReader<>();
        cursorItemReader.setDataSource(dataSource);
        cursorItemReader.setSql("SELECT user_id,first_name,last_name,email FROM demodb.user");
        cursorItemReader.setRowMapper(new UserRowMapper());
        return cursorItemReader;
    }

    @Bean
    public UserItemProcessor processor() {
        return new UserItemProcessor();
    }

    @Bean
    public FlatFileItemWriter<User> writer() {
        FlatFileItemWriter<User> writer = new FlatFileItemWriter<User>();
        writer.setResource(new ClassPathResource("users.csv"));
        DelimitedLineAggregator<User> lineAggregator = new DelimitedLineAggregator<User>();
        lineAggregator.setDelimiter(",");
        BeanWrapperFieldExtractor<User> fieldExtractor = new BeanWrapperFieldExtractor<User>();
        fieldExtractor.setNames(new String[]{"userId", "firstName", "lastName", "email"});
        lineAggregator.setFieldExtractor(fieldExtractor);
        writer.setLineAggregator(lineAggregator);
        return writer;
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory
                .get("step1")
                .<User, User>chunk(100)
                .reader(reader())
                .processor(processor())
                .writer(writer())
                .build();
    }

    @Bean
    public Job exportUserJob() {
        return jobBuilderFactory
                .get("exportUserJob")
                .incrementer(new RunIdIncrementer())
                .flow(step1())
                .end()
                .build();
    }
}
UserRowMapper.class
public class UserRowMapper implements RowMapper<User> {

    @Override
    public User mapRow(ResultSet rs, int rowNum) throws SQLException {
        User user = new User();
        user.setUserId(rs.getInt("user_id"));
        user.setFirstName(rs.getString("first_name"));
        user.setLastName(rs.getString("last_name"));
        user.setEmail(rs.getString("email"));
        return user;
    }
}
User.class
public class User {

    private int userId;
    private String firstName;
    private String lastName;
    private String email;

    public int getUserId() {
        return userId;
    }

    public void setUserId(int userId) {
        this.userId = userId;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}
UserItemProcessor.class
public class UserItemProcessor implements ItemProcessor<User, User> {

    @Override
    public User process(User person) throws Exception {
        return person;
    }
}
application.properties
spring.datasource.url=jdbc:mysql://localhost:3306/demodb
spring.datasource.password=root
spring.datasource.username=root
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.batch.initialize-schema=ALWAYS
MySQL db
UPDATE
I was able to solve the issue with new FileSystemResource(). But I was replicating a business use case. As I mentioned, I am new to Spring Batch, and I have a requirement to read from a production database where I have only view permission. I have come to know that Spring Batch uses metadata tables to process the batch. So is write permission to those tables required? If that is the case, then I won't be able to use Spring Batch with only view permission in the production environment. Is there any way I can work around this, perhaps by overriding some configuration?

Try decreasing the chunk value.
chunk: indicates that this is an item-based step, and sets the number of items to be processed before the transaction is committed.
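For illustration, a smaller chunk size against the step1 bean from the question might look like this (a sketch; 10 is an arbitrary value, not something prescribed by the original post):
@Bean
public Step step1() {
    return stepBuilderFactory
            .get("step1")
            .<User, User>chunk(10) // commit after every 10 items instead of 100
            .reader(reader())
            .processor(processor())
            .writer(writer())
            .build();
}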

Use FileSystemResource instead of ClassPathResource in your writer's configuration; a classpath resource resolves into the build output (or inside the jar), so it is not a reliable target for writing:
writer.setResource(new FileSystemResource("users.csv"));
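The same idea works with an absolute path if the file should land in a fixed location regardless of the working directory (the /tmp path below is purely an example, not from the original answer):
writer.setResource(new FileSystemResource("/tmp/users.csv"));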

I have verified this by running the program; it works fine as it is, with no issue.
Issue:
As I mentioned in the comments below, the user was looking for the file in the wrong location.
The user creates a file at src/main/resources/users.csv
The user runs the program
The file gets copied under target/classes/users.csv
The program writes to the file under target/classes/users.csv
But the user was looking for the file at src/main/resources/users.csv
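Regarding the UPDATE about a read-only production database: the Spring Batch metadata tables do need write access, but they do not have to live in the source database. You can point the job repository at a separate writable database, or drop JDBC metadata entirely. A minimal sketch of the latter (assuming Spring Batch 4.x with Spring Boot, matching the builder factories used above; the class name is hypothetical): extend DefaultBatchConfigurer and leave the DataSource unset, which makes Spring Batch fall back to an in-memory, Map-based JobRepository (metadata will not survive a restart):
@Configuration
public class InMemoryBatchConfig extends DefaultBatchConfigurer {

    @Override
    public void setDataSource(DataSource dataSource) {
        // Intentionally empty: with no DataSource set, DefaultBatchConfigurer
        // falls back to a Map-based, in-memory JobRepository, so no write
        // access to the read-only production database is needed for metadata.
    }
}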

Related

How to run a job multiple times in parallel with different Excel files as input in Spring Batch

I have a use case where users upload different Excel files; each file is processed in parallel, and every row of the Excel needs to be saved with its job execution ID in an H2 database.
The issue I am facing: when a user uploads the first file and the processing runs in the background, saving every row of that Excel with job ID 1, and then uploads another Excel file with different data before the first one completes, the data from the first Excel also gets saved with the latest job execution ID, which is 2. How can I resolve this, so that each job's data is saved with its own ID while different jobs run in parallel? (Screenshots of the first and second Excel sheets and of the resulting H2 output are omitted.)
This is the service class
@Service
public class BatchTestService {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    private JobLauncher batchJobLauncher;

    @Autowired
    private WriterImpl writer;

    public Job job(byte data[]) {
        return jobBuilderFactory.get("job")
                .incrementer(new RunIdIncrementer())
                .flow(step(data))
                .end()
                .build();
    }

    @SneakyThrows
    public PoiItemReader<TestEntity> reader(byte[] data) {
        ReaderImpl reader = new ReaderImpl();
        reader.setLinesToSkip(1);
        reader.setResource(toResource(data, "TEST"));
        reader.setRowMapper(new MapperClass());
        return reader;
    }

    public Step step(byte data[]) {
        return stepBuilderFactory.get("step").<TestEntity, TestEntity>chunk(2)
                .reader(reader(data))
                .writer(writer)
                .build();
    }

    public ThreadPoolTaskExecutor getThreadPoolTaskExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setCorePoolSize(2);
        taskExecutor.setMaxPoolSize(4);
        taskExecutor.setThreadNamePrefix("test");
        taskExecutor.afterPropertiesSet();
        return taskExecutor;
    }

    public void uploadExcel(MultipartFile file) throws Exception {
        String jobId = String.valueOf(System.currentTimeMillis());
        JobParameters parameters = new JobParametersBuilder().addString("jobId", jobId)
                .toJobParameters();
        ((SimpleJobLauncher) batchJobLauncher).setTaskExecutor(getThreadPoolTaskExecutor());
        batchJobLauncher.run(job(file.getBytes()), parameters);
    }

    public static Resource toResource(byte bytesFile[], String sheetName) throws IOException {
        ByteArrayInputStream bin = new ByteArrayInputStream(bytesFile);
        XSSFWorkbook workbook = new XSSFWorkbook(bin);
        var outputStream = new ByteArrayOutputStream();
        workbook.write(outputStream);
        return new ByteArrayResource(outputStream.toByteArray());
    }
}
This is the config class.
@Configuration
public class BatchDataSourceConfig {

    @Value("${spring.datasource.driver-class-name}")
    private String driverName;

    @Value("${spring.datasource.url}")
    private String url;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(driverName);
        dataSource.setUrl(url);
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }

    @Bean
    public JobLauncher batchJobLauncher(JobRepository jobRepository) {
        SimpleJobLauncher jobLauncher = new SimpleJobLauncher();
        jobLauncher.setJobRepository(jobRepository);
        return jobLauncher;
    }
}
This is the reader class
public class ReaderImpl extends PoiItemReader<TestEntity> {}
This is the writer class
@Component
public class WriterImpl implements ItemWriter<TestEntity> {

    private static Logger logger = LoggerFactory.getLogger(WriterImpl.class);

    @Autowired
    private TestEntityRepository testEntityRepository;

    // Note: this writer is a singleton @Component, so this field is shared by
    // every job that is running concurrently.
    private StepExecution stepExecution;

    @BeforeStep
    public void beforeStep(final StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    @SneakyThrows
    public void write(List<? extends TestEntity> modelObjectList) {
        logger.info("Writer is reached...");
        Thread.sleep(3000);
        for (TestEntity testEntity : modelObjectList) {
            testEntity.setJobExecutionId(stepExecution.getJobExecutionId());
            testEntityRepository.save(testEntity);
        }
    }
}
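Since WriterImpl is a singleton shared by all concurrently running jobs, whichever job started last overwrites the stepExecution field, which matches the symptom described. One commonly suggested remedy, sketched below (my own illustration, not from the original post), is to make the writer step-scoped, so each step execution gets its own instance and the execution ID can be injected directly:
@Component
@StepScope
public class WriterImpl implements ItemWriter<TestEntity> {

    @Autowired
    private TestEntityRepository testEntityRepository;

    // Step scope creates one writer instance per step execution, so this ID
    // belongs to exactly one job instead of a shared mutable field.
    @Value("#{stepExecution.jobExecutionId}")
    private Long jobExecutionId;

    @Override
    public void write(List<? extends TestEntity> items) {
        for (TestEntity entity : items) {
            entity.setJobExecutionId(jobExecutionId);
            testEntityRepository.save(entity);
        }
    }
}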
The respective RowMapper class is also defined.
public class MapperClass implements RowMapper<TestEntity> {

    @Override
    public TestEntity mapRow(RowSet rowSet) {
        TestEntity testEntity = new TestEntity();
        testEntity.setStudentName(rowSet.getColumnValue(0));
        testEntity.setRollNo(rowSet.getColumnValue(1));
        testEntity.setSection(rowSet.getColumnValue(2));
        return testEntity;
    }
}
This is the model class
@AllArgsConstructor
@Data
@Entity
@NoArgsConstructor
@Table(name = "TEST_ENTITY")
public class TestEntity {

    @GeneratedValue(strategy = GenerationType.AUTO)
    @Id
    private Integer id;

    private String studentName;
    private String rollNo;
    private String section;
    private Long jobExecutionId;
}

How can we use the result of `MethodInvokingTaskletAdapter` as a reader in a Spring Batch step?

How can we use the result of MethodInvokingTaskletAdapter as a reader in a Spring Batch step? References: https://docs.spring.io/spring-batch/docs/current/reference/html/index-single.html#taskletStep and https://github.com/spring-projects/spring-batch/pull/567
Here is the code that I developed
JobConfiguration.java
@Configuration
public class JobConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Bean
    public CustomService service() {
        return new CustomService();
    }

    @StepScope
    @Bean
    public MethodInvokingTaskletAdapter methodInvokingTasklet() {
        MethodInvokingTaskletAdapter methodInvokingTaskletAdapter = new MethodInvokingTaskletAdapter();
        methodInvokingTaskletAdapter.setTargetObject(service());
        methodInvokingTaskletAdapter.setTargetMethod("getEmployees");
        return methodInvokingTaskletAdapter;
    }

    @Bean
    public Job methodInvokingJob() {
        return this.jobBuilderFactory.get("methodInvokingJob")
                .start(methodInvokingStep())
                .build();
    }

    @Bean
    public Step methodInvokingStep() {
        // Looking to configure a chunk-based step here; I don't know how to do
        // that using MethodInvokingTaskletAdapter.
        return this.stepBuilderFactory.get("methodInvokingStep")
                .tasklet(methodInvokingTasklet())
                .build();
    }
}
CustomService.java
public class CustomService {

    public void serviceMethod(String message) {
        System.out.println(message);
    }

    public void invokeMethod() {
        System.out.println("=============== Your method has executed !");
    }

    public List<Employee> getEmployees() {
        // In the real world, this will be a GET API call to the XYZ system
        List<Employee> employees = new ArrayList<>();
        employees.add(Employee.builder().firstName("Ravi").lastName("Shankar").email("ravi.shankar@gmail.com").age(30).build());
        employees.add(Employee.builder().firstName("Parag").lastName("Rane").email("parag.rane@gmail.com").age(11).build());
        employees.add(Employee.builder().firstName("Priya").lastName("Pande").email("priya.pande@gmail.com").age(40).build());
        employees.add(Employee.builder().firstName("Kiran").lastName("khot").email("kiran.khot@gmail.com").age(50).build());
        return employees;
    }
}
Employee.java
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
public class Employee {

    private String firstName;
    private String lastName;
    private String email;
    private int age;
}
MethodInvokingTaskletApplication.java
@EnableBatchProcessing
@SpringBootApplication
public class MethodInvokingTaskletApplication {

    public static void main(String[] args) {
        SpringApplication.run(MethodInvokingTaskletApplication.class, args);
    }
}
To answer your question, you can't. The MethodInvokingTaskletAdapter is meant to adapt a POJO to a Tasklet. We have an ItemReaderAdapter that you can use to adapt a POJO to an ItemReader. You can read about it in the documentation here: https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/item/adapter/ItemReaderAdapter.html
Now you'll have an issue with your service as configured in that each call to the delegating POJO is considered an item. That means that your item as configured will be a List<Employee> instead of just an Employee. Given your configuration states it's not the real service, I'll assume that your real service should return an Employee per call and null once the results are exhausted.
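For illustration, a service shaped the way ItemReaderAdapter expects might look like the sketch below (my own example, not from the question; method and field names are assumptions). The target method returns one Employee per call and null once the results are exhausted:
public class CustomService {

    private final Iterator<Employee> employees = fetchEmployees().iterator();

    // ItemReaderAdapter calls this once per item; returning null signals
    // the end of the data set, matching the ItemReader contract.
    public Employee getEmployee() {
        return employees.hasNext() ? employees.next() : null;
    }

    private List<Employee> fetchEmployees() {
        // In the real world, a GET API call to the external system
        return new ArrayList<>();
    }
}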
To update your configuration (with your service as it is configured in your question), your sample becomes:
...
@StepScope
@Bean
public ItemReaderAdapter itemReader() {
    ItemReaderAdapter reader = new ItemReaderAdapter();
    reader.setTargetObject(service());
    reader.setTargetMethod("getEmployees");
    return reader;
}

@Bean
public Job methodInvokingJob() {
    return this.jobBuilderFactory.get("methodInvokingJob")
            .start(methodInvokingStep())
            .build();
}

@Bean
public Step methodInvokingStep() {
    return this.stepBuilderFactory.get("methodInvokingStep")
            .<List<Employee>, List<Employee>>chunk(1) // chunk() takes a commit interval; with this service, each item is the whole List<Employee>
            .reader(itemReader())
            // You'll need to define a writer...
            .writer(itemWriter())
            .build();
}
...

Saving file information in Spring Batch MultiResourceItemReader

I have a directory containing text files. I want to process the files and write the data into the DB. I did that by using MultiResourceItemReader.
I have a scenario where, whenever a file arrives, the first step is to save the file info, like filename and record count, in a log table (a custom table).
Since I used MultiResourceItemReader, it loads all files at once, and the code I wrote executes only once at server startup. I tried the getCurrentResource() method, but it returns null.
Please refer to the code below.
NetFileProcessController.java
@Slf4j
@RestController
@RequestMapping("/netProcess")
public class NetFileProcessController {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    @Qualifier("netFileParseJob")
    private Job job;

    @GetMapping(path = "/process")
    public @ResponseBody StatusResponse process() throws ServiceException {
        try {
            Map<String, JobParameter> parameters = new HashMap<>();
            parameters.put("date", new JobParameter(new Date()));
            jobLauncher.run(job, new JobParameters(parameters));
            return new StatusResponse(true);
        } catch (Exception e) {
            log.error("Exception", e);
            Throwable rootException = ExceptionUtils.getRootCause(e);
            String errMessage = rootException.getMessage();
            log.info("Root cause is instance of JobInstanceAlreadyCompleteException --> " + (rootException instanceof JobInstanceAlreadyCompleteException));
            if (rootException instanceof JobInstanceAlreadyCompleteException) {
                log.info(errMessage);
                return new StatusResponse(false, "This job has been completed already!");
            } else {
                throw new ServiceException(errMessage);
            }
        }
    }
}
BatchConfig.java
@Configuration
@EnableBatchProcessing
public class BatchConfig {

    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    public void setJobBuilderFactory(JobBuilderFactory jobBuilderFactory) {
        this.jobBuilderFactory = jobBuilderFactory;
    }

    @Autowired
    StepBuilderFactory stepBuilderFactory;

    @Value("file:${input.files.location}${input.file.pattern}")
    private Resource[] netFileInputs;

    @Value("${net.file.column.names}")
    private String netFilecolumnNames;

    @Value("${net.file.column.lengths}")
    private String netFileColumnLengths;

    @Autowired
    NetFileInfoTasklet netFileInfoTasklet;

    @Autowired
    NetFlatFileProcessor netFlatFileProcessor;

    @Autowired
    NetFlatFileWriter netFlatFileWriter;

    @Bean
    public Job netFileParseJob() {
        return jobBuilderFactory.get("netFileParseJob")
                .incrementer(new RunIdIncrementer())
                .start(netFileStep())
                .build();
    }

    public Step netFileStep() {
        return stepBuilderFactory.get("netFileStep")
                .<NetDetailsDTO, NetDetailsDTO>chunk(1)
                .reader(new NetFlatFileReader(netFileInputs, netFilecolumnNames, netFileColumnLengths))
                .processor(netFlatFileProcessor)
                .writer(netFlatFileWriter)
                .build();
    }
}
NetFlatFileReader.java
@Slf4j
public class NetFlatFileReader extends MultiResourceItemReader<NetDetailsDTO> {

    public NetFlatFileReader(Resource[] netFileInputs, String netFilecolumnNames, String netFileColumnLengths) {
        setResources(netFileInputs);
        setDelegate(reader(netFilecolumnNames, netFileColumnLengths));
    }

    private FlatFileItemReader<NetDetailsDTO> reader(String netFilecolumnNames, String netFileColumnLengths) {
        FlatFileItemReader<NetDetailsDTO> flatFileItemReader = new FlatFileItemReader<>();
        FixedLengthTokenizer tokenizer = CommonUtil.fixedLengthTokenizer(netFilecolumnNames, netFileColumnLengths);
        FieldSetMapper<NetDetailsDTO> mapper = createMapper();
        DefaultLineMapper<NetDetailsDTO> lineMapper = new DefaultLineMapper<>();
        lineMapper.setLineTokenizer(tokenizer);
        lineMapper.setFieldSetMapper(mapper);
        flatFileItemReader.setLineMapper(lineMapper);
        return flatFileItemReader;
    }

    /*
     * Mapping column data to the DTO
     */
    private FieldSetMapper<NetDetailsDTO> createMapper() {
        BeanWrapperFieldSetMapper<NetDetailsDTO> mapper = new BeanWrapperFieldSetMapper<>();
        try {
            mapper.setTargetType(NetDetailsDTO.class);
        } catch (Exception e) {
            log.error("Exception in mapping column data to dto ", e);
        }
        return mapper;
    }
}
I am stuck on this scenario. Any help is appreciated.
I don't think MultiResourceItemReader is appropriate in your case. I would run one job per file, for all the reasons of making one thing do one thing and do it well:
Your preparatory step will work by design
It would be easier to run multiple jobs in parallel and improve your file ingestion throughput
In case of failure, you would only restart the job for the failed file
EDIT: an example
Resource[] netFileInputs = ... // same code that looks for files, as currently in your reader
for (Resource netFileInput : netFileInputs) {
    Map<String, JobParameter> parameters = new HashMap<>();
    parameters.put("netFileInput", new JobParameter(netFileInput.getFilename()));
    jobLauncher.run(job, new JobParameters(parameters));
}

axon org.axonframework.commandhandling.NoHandlerForCommandException: No node known to accept

When trying to implement a DistributedCommandBus using Spring Cloud, I am getting the following error intermittently. I have reason to believe that there is some sort of race condition happening between the auto-configuration of my aggregate root class, its command handlers, and my configuration bean class.
org.axonframework.commandhandling.NoHandlerForCommandException: No node known to accept.
I am using Axon version 3.3.5.
Here is my configuration class:
@Configuration
@AutoConfigureBefore(CustomerAggregate.class)
public class AxonConfig {

    @Value("${mongo.servers}")
    private String mongoUrl;

    @Value("${mongo.db}")
    private String mongoDbName;

    @Value("${axon.events.collection.name}")
    private String eventsCollectionName;

    @Value("${axon.snapshot.collection.name}")
    private String snapshotCollectionName;

    @Value("${axon.saga.collection.name}")
    private String sagaCollectionName;

    @Bean
    @Primary
    public CommandGateway commandGateway(@Qualifier("distributedBus") DistributedCommandBus commandBus) throws Exception {
        return new DefaultCommandGateway(commandBus, new IntervalRetryScheduler(Executors.newSingleThreadScheduledExecutor(), 1000, 10));
    }

    @Bean
    @Primary
    @Qualifier("springCloudRouter")
    public CommandRouter springCloudCommandRouter(DiscoveryClient client, Registration localServiceInstance) {
        return new SpringCloudCommandRouter(client, localServiceInstance, new AnnotationRoutingStrategy());
    }

    @Bean
    @Primary
    @Qualifier("springCloudConnector")
    public SpringHttpCommandBusConnector connector() {
        return new SpringHttpCommandBusConnector(new SimpleCommandBus(), new RestTemplate(), new JacksonSerializer());
    }

    @Bean
    @Primary
    @Qualifier("distributedBus")
    public DistributedCommandBus springCloudDistributedCommandBus(@Qualifier("springCloudRouter") CommandRouter router) {
        return new DistributedCommandBus(router, connector());
    }

    @Bean
    @Primary
    public AggregateFactory<CustomerAggregate> aggregateFactory() {
        return new GenericAggregateFactory<CustomerAggregate>(CustomerAggregate.class);
    }

    @Bean
    @Primary
    public EventCountSnapshotTriggerDefinition countSnapshotTriggerDefinition() {
        return new EventCountSnapshotTriggerDefinition(snapShotter(), 3);
    }

    @Bean
    @Primary
    public Snapshotter snapShotter() {
        return new AggregateSnapshotter(eventStore(), aggregateFactory());
    }

    @Bean
    @Primary
    public EventSourcingRepository<CustomerAggregate> customerAggregateRepository() {
        return new EventSourcingRepository<>(aggregateFactory(), eventStore(), countSnapshotTriggerDefinition());
    }

    @Bean(name = "axonMongoTemplate")
    public MongoTemplate axonMongoTemplate() {
        return new DefaultMongoTemplate(mongoClient(), mongoDbName)
                .withDomainEventsCollection(eventsCollectionName)
                .withSnapshotCollection(snapshotCollectionName)
                .withSagasCollection(sagaCollectionName);
    }

    @Bean
    public MongoClient mongoClient() {
        MongoFactory mongoFactory = new MongoFactory();
        mongoFactory.setMongoAddresses(Arrays.asList(new ServerAddress(mongoUrl)));
        return mongoFactory.createMongo();
    }

    @Bean
    @Primary
    public MongoEventStorageEngine engine() {
        return new MongoEventStorageEngine(new JacksonSerializer(), null, axonMongoTemplate(), new DocumentPerEventStorageStrategy());
    }

    @Bean
    @Primary
    public EventStore eventStore() {
        return new EmbeddedEventStore(engine());
    }
}
And here is my aggregate class with command handlers:
@Aggregate(repository = "customerAggregateRepository")
public class CustomerAggregate {

    Logger logger = LoggerFactory.getLogger(this.getClass());

    @AggregateIdentifier
    private String id;
    private String firstName;
    private String lastName;
    private String email;

    private CustomerAggregate() {}

    public String getId() {
        return id;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public String getEmail() {
        return email;
    }

    @CommandHandler
    public CustomerAggregate(CreateCustomer cmd) {
        logger.debug("Received creation command: " + cmd.toString());
        apply(new CustomerCreated(cmd.getId(), cmd.getFirstName(), cmd.getLastName(), cmd.getEmail()));
    }

    @CommandHandler
    public void on(UpdateCustomer cmd) {
        logger.debug("Received update command: " + cmd.toString());
        apply(new CustomerUpdated(this.id, cmd.getFirstName(), cmd.getLastName(), cmd.getEmail()));
    }

    @CommandHandler
    public void on(UpdateCustomerEmail cmd) {
        logger.debug("Received update command for existing customer: " + cmd.toString());
        apply(new CustomerUpdated(cmd.getId(), this.firstName, this.lastName, cmd.getEmail()));
    }

    // Various event handlers...
}
Any help is much appreciated.

The annotation @Bean is disallowed for this location error

I read in a book that whenever we want Java-based configuration and want to define a bean, we use the @Bean annotation. But when I did that, I got the error: The annotation @Bean is disallowed for this location. My bean is:
package com.mj.cchp.bean;

import javax.validation.constraints.Digits;
import javax.validation.constraints.NotNull;
import org.springframework.context.annotation.Bean;
import com.mj.cchp.annotation.Email;

@Bean
public class UserBean {

    @NotNull
    @Email
    private String email;

    @NotNull
    private String firstName;

    @NotNull
    private String lastName;

    @Digits(fraction = 0, integer = 10)
    private String phoneNo;

    @NotNull
    private String role;

    public String getEmail() {
        return email;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public String getPhoneNo() {
        return phoneNo;
    }

    public String getRole() {
        return role;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public void setPhoneNo(String phoneNo) {
        this.phoneNo = phoneNo;
    }

    public void setRole(String role) {
        this.role = role;
    }
}
The @Bean annotation is used to define a bean to be loaded into the Spring container. It is similar to the XML config of specifying
<bean id="myId" class="..."/>
It should be used in a Java configuration class, which is the equivalent of your applicationContext.xml:
@Configuration
@ComponentScan("...")
public class AppConfig {

    @Bean
    public MyBean myBean() {
        return new MyBean();
    }
}
The @Bean, @Configuration and other newly introduced annotations do exactly what you would otherwise do in an XML config.
The @Bean annotation tells Spring that a method annotated with @Bean returns an object that should be registered as a bean in the Spring application context.
So you need a UserBeanConfig class, annotated with @Configuration, that has a method that creates the new bean.
@Configuration
public class UserBeanConfig {

    @Bean
    public UserBean userBean() {
        return new UserBean();
    }
}
From my point of view, Spring is not designed to construct simple domain objects.
You should use Spring to bootstrap the dependencies of services, DAOs, etc.
So I suggest avoiding Spring for domain objects.
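For instance, the split usually looks like the sketch below (a hypothetical example; the service name is mine): Spring manages the service, while the domain object is created with plain new:
@Service
public class UserService {

    public UserBean createUser(String email) {
        UserBean user = new UserBean(); // plain domain object, not a Spring-managed bean
        user.setEmail(email);
        return user;
    }
}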
