Spring ThreadPoolTaskExecutor: transaction management

I am trying to make an asynchronous method call in my service layer code. Some pseudo code for the same is as below:
public void createXXX() {
    dao.saveOrUpdate(entity); // save an entity
    ...................
    ...................
    callAServiceXXX();
}

...........

public void callAServiceXXX() {
    SomeEntity entity = dao.getEntity(); // entity NOT NULL
    this.threadPoolTaskExecutor.execute(new Runnable() {
        public void run() {
            try {
                callAMethodXXX();
            } catch (Exception e) {
            }
        }
    });
}

public void callAMethodXXX() {
    SomeEntity entity = dao.getEntity(); // entity always NULL
}
My spring config file has the following defined for the service layer bean containing the above logic:
<property name="transactionAttributes">
<props>
<prop key="callAServiceXXX">PROPAGATION_REQUIRED</prop>
<prop key="callAMethodXXX">PROPAGATION_MANDATORY</prop>
</props>
</property>
As shown above, when I try to fetch the entity that I save in createXXX(), it is always NULL when the DAO call is executed from callAMethodXXX().
I am not sure about the reason for this behavior. I tried a few other transaction attributes in the Spring config file, but without success.
A workaround I tried to make this work was:
1) Create a helper class and inject it into this service layer class.
2) Move the method callAMethodXXX() to this helper class.
3) Define <prop key="callAMethodXXX">PROPAGATION_REQUIRES_NEW</prop>, as I want to make sure that callAMethodXXX() is executed in a new transaction.
However, I do not want to use an extra helper class and want to make sure that the logic works from within the single service layer class.
Any inputs on the above will be helpful.
Regards,

Your Runnable thread is not aware of transaction management.
Maybe you can try adding to your managed bean a reference to itself, and configure PROPAGATION_REQUIRES_NEW for the method callAMethodXXX():
@Component
public class MyService {

    @Autowired
    public MyService myService;

    public void callAServiceXXX() {
        SomeEntity entity = dao.getEntity();
        this.threadPoolTaskExecutor.execute(new Runnable() {
            public void run() {
                try {
                    myService.callAMethodXXX();
                } catch (Exception e) {
                }
            }
        });
    }
}
<property name="transactionAttributes">
<props>
<prop key="callAServiceXXX">PROPAGATION_REQUIRED</prop>
<prop key="callAMethodXXX">PROPAGATION_REQUIRES_NEW</prop>
</props>
</property>
Disclaimer: you might still have a problem with that approach, because you are calling an async method and you cannot be sure that the transaction containing the saveOrUpdate call has committed by the time the new transaction runs.
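One way to address that timing issue is to defer submitting the task to the executor until the surrounding transaction has committed. Here is a minimal sketch using Spring's TransactionSynchronizationManager (afterCommit is a standard callback of TransactionSynchronizationAdapter; it assumes callAServiceXXX() itself runs inside a transaction, which PROPAGATION_REQUIRED gives you, and it reuses the dao, threadPoolTaskExecutor and myService fields from the example above):

public void callAServiceXXX() {
    // Register a callback that runs only after the current transaction commits,
    // so the background thread is guaranteed to see the persisted data.
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            threadPoolTaskExecutor.execute(new Runnable() {
                public void run() {
                    myService.callAMethodXXX(); // runs in its own PROPAGATION_REQUIRES_NEW transaction
                }
            });
        }
    });
}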

Related

Spring Batch CompositeItemProcessor get value from other delegates

I have a compositeItemProcessor as below
<bean id="compositeItemProcessor" class="org.springframework.batch.item.support.CompositeItemProcessor">
<property name="delegates">
<list>
<bean class="com.example.itemProcessor1"/>
<bean class="com.example.itemProcessor2"/>
<bean class="com.example.itemProcessor3"/>
<bean class="com.example.itemProcessor4"/>
</list>
</property>
</bean>
The issue I have is that within itemProcessor4 I require values from both itemProcessor1 and itemProcessor3.
I have looked at using the Step Execution Context but this does not work as this is within one step. I have also looked at using @AfterProcess within ItemProcessor1, but this does not work as it isn't called until after ItemProcessor4.
What is the correct way to share data between delegates in a compositeItemProcessor?
Is a solution of using util:map that is updated in itemProcessor1 and read in itemProcessor4 under the circumstances that the commit-interval is set to 1?
Using the step execution context won't work, as it is persisted at the chunk boundary, so it can't be shared between processors within the same chunk.
@AfterProcess is called after the registered item processor, which is the composite processor in your case (so after ItemProcessor4). This won't work either.
The only option left is to use some data holder object that you share between item processors.
Hope this helps.
This page seems to state that there are two types of ExecutionContexts, one at step-level, one at job-level.
https://docs.spring.io/spring-batch/trunk/reference/html/patterns.html#passingDataToFutureSteps
You should be able to get the job-level context from the step context and set keys on that.
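For example, a delegate processor can grab the StepExecution via a @BeforeStep callback and write to the job-level ExecutionContext, where a later delegate can read it. A rough sketch (the MyItem type and the "sharedValue" key are placeholders; note that a delegate inside a CompositeItemProcessor usually has to be registered explicitly as a step listener for @BeforeStep to fire):

public class ItemProcessor1 implements ItemProcessor<MyItem, MyItem> {

    private StepExecution stepExecution;

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public MyItem process(MyItem item) {
        // Store a value in the job-level ExecutionContext so a later delegate can read it.
        stepExecution.getJobExecution().getExecutionContext().put("sharedValue", item.toString());
        return item;
    }
}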
I had a similar requirement in my application too. I went with creating a data transfer object ItemProcessorDto which is shared by all the ItemProcessors. You store data in this DTO in the first processor and all the remaining processors get the information out of it. In addition, any ItemProcessor can update or retrieve data from the DTO.
Below is a code snippet:
@Bean
public ItemProcessor1<ItemProcessorDto> itemProcessor1() {
    log.info("Generating ItemProcessor1");
    return new ItemProcessor1();
}

@Bean
public ItemProcessor2<ItemProcessorDto> itemProcessor2() {
    log.info("Generating ItemProcessor2");
    return new ItemProcessor2();
}

@Bean
public ItemProcessor3<ItemProcessorDto> itemProcessor3() {
    log.info("Generating ItemProcessor3");
    return new ItemProcessor3();
}

@Bean
public ItemProcessor4<ItemProcessorDto> itemProcessor4() {
    log.info("Generating ItemProcessor4");
    return new ItemProcessor4();
}

@Bean
@StepScope
public CompositeItemProcessor<ItemProcessorDto> compositeItemProcessor() {
    log.info("Generating CompositeItemProcessor");
    CompositeItemProcessor<ItemProcessorDto> compositeItemProcessor = new CompositeItemProcessor<>();
    compositeItemProcessor.setDelegates(Arrays.asList(itemProcessor1(), itemProcessor2(), itemProcessor3(), itemProcessor4()));
    return compositeItemProcessor;
}
@Data
public class ItemProcessorDto {
    private List<String> sharedData_1;
    private Map<String, String> sharedData_2;
}

Instantiate a property with the return type from a method

Say I have the following class
public class AbcFactory {

    @Autowired
    private Builder1 builder1;

    @Autowired
    private Builder2 builder2;

    public Builder<Employee> getBuilder(Employee employee) {
        if (employee.isMale()) {
            return builder1;
        } else {
            return builder2;
        }
    }
}
How can I use the return value of AbcFactory.getBuilder() as a property of another bean?
Something I tried looks like this:
<property name="builder">
?????
</property>
Try:
<bean id="emp" class="com.pack.Employee"/>
<bean id="factory" class="com.pack.AbcFactory">
</bean>
<bean id="result" class="com.pack.Builder"
factory-bean="factory" factory-method="getBuilder">
<constructor-arg ref="emp"/>
</bean>
Aren't you mixing up static configuration (launch time) with dynamic behavior (runtime)? Spring cannot be set up according to a call that has not happened yet.
Or maybe "employee" is a bean itself? See JavaConfig in that case.

Get the object which failed validation in Spring Batch

I have a task to process input .csv and .txt files and store the data in a database. I am using Spring Batch for this purpose. Before dumping the data into the database, I have to perform some validation checks on it. I am using Spring Batch's ValidatingItemProcessor and Hibernate's JSR-303 reference implementation (Hibernate Validator) for this. The code looks something like:
public class Person {

    @Pattern(regexp = "someregex")
    String name;

    @NotNull
    String address;

    @NotNull
    String age;

    // getters and setters
}
And then I wrote a validator which looks something like this:
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.ValidatorFactory;

import org.springframework.batch.item.validator.ValidationException;
import org.springframework.batch.item.validator.Validator;
import org.springframework.beans.factory.InitializingBean;

class MyBeanValidator implements Validator<Person>, InitializingBean {

    private javax.validation.Validator validator;

    @Override
    public void afterPropertiesSet() throws Exception {
        ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();
        validator = validatorFactory.usingContext().getValidator();
    }

    @Override
    public void validate(Person person) throws ValidationException {
        Set<ConstraintViolation<Person>> constraintViolations = validator.validate(person);
        if (constraintViolations.size() > 0) {
            generateValidationException(constraintViolations);
        }
    }

    private void generateValidationException(Set<ConstraintViolation<Person>> constraintViolations) {
        StringBuilder message = new StringBuilder();
        for (ConstraintViolation<Person> constraintViolation : constraintViolations) {
            message.append(constraintViolation.getMessage() + "\n");
        }
        throw new ValidationException(message.toString());
    }
}
And then I have a processor which subclasses Spring Batch's ValidatingItemProcessor.
public class ValidatingPersonItemProcessor extends ValidatingItemProcessor<Person> {

    @Override
    public Person process(Person person) {
        // some code
    }
}
The records that pass validation checks will be passed on to another processor for further processing, while the failed ones will be cleaned and then passed on to the next processor.
Now I want to catch hold of the records which failed validation. My objective is to report all input records that failed validation and clean those records before passing them on to the next processor for further processing. How can I achieve this?
Will the Spring Batch process terminate if validation fails for some input? If yes, how do I avoid that? My processor configuration looks something like:
<batch:chunk reader="personItemReader" writer="personDBWriter" processor="personProcessor"
commit-interval="100" skip-limit="100">
<batch:skippable-exception-classes>
<batch:include class="org.springframework.batch.item.validator.ValidationException"/>
</batch:skippable-exception-classes>
<batch:listeners>
<batch:listener>
<bean
class="org.someorg.poc.batch.listener.PersonSkipListener" />
</batch:listener>
</batch:listeners>
</batch:chunk>
<bean id="personProcessor"
class="org.springframework.batch.item.support.CompositeItemProcessor">
<property name="delegates">
<list>
<ref bean="validatingPersonItemProcessor" />
<ref bean="personVerifierProcessor" />
</list>
</property>
</bean>
<bean id="validatingPersonItemProcessor" class="org.someorg.poc.batch.processor.ValidatingPersonItemProcessor" scope="step">
<property name="validator" ref="myBeanValidator" />
</bean>
<bean id="myBeanValidator" class="org.someorg.poc.batch.validator.MyBeanValidator">
</bean>
<bean id="personVerifierProcessor" class="org.someorg.poc.batch.processor.PersonVerifierProcessor" scope="step"/>
</beans>
I guess your validatingPersonItemProcessor bean has its validator property set to your myBeanValidator, so the exception will be thrown by the processor.
Create your own SkipListener. In onSkipInProcess() you put the logic for what happens when an item fails validation (write it to a file, a DB, etc.).
You need to add the ValidationException you throw to <batch:skippable-exception-classes> so it will be caught (and doesn't terminate your batch), and add your SkipListener to <batch:listeners> so it is called when the exception is thrown.
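A minimal sketch of such a listener, using the PersonSkipListener class name from your configuration (the logging call stands in for whatever reporting or cleanup you need):

public class PersonSkipListener implements SkipListener<Person, Person> {

    @Override
    public void onSkipInProcess(Person item, Throwable t) {
        // Called for each item skipped because the processor threw a skippable exception.
        // Report or store the failed record here, e.g. write it to a file or a table.
        System.out.println("Validation failed for " + item + ": " + t.getMessage());
    }

    @Override
    public void onSkipInRead(Throwable t) {
        // not needed for validation failures
    }

    @Override
    public void onSkipInWrite(Person item, Throwable t) {
        // not needed for validation failures
    }
}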
EDIT: Answer to comment.
If your processor is a ValidatingItemProcessor and you set the validator, it should automatically call validate(). However, if you make your own processor by extending ValidatingItemProcessor, you should explicitly call super.process(yourItem) (the process() of ValidatingItemProcessor) to validate your item.
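Applied to the processor above, that would look roughly like this (the post-validation logic is just a placeholder):

public class ValidatingPersonItemProcessor extends ValidatingItemProcessor<Person> {

    @Override
    public Person process(Person person) {
        // Runs the configured validator; throws ValidationException if the item is invalid.
        super.process(person);
        // ... further processing of the valid item ...
        return person;
    }
}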

Spring: Inject bean depending on context (session/web or local thread/background process)

Is it possible to create a factory or proxy that can decide whether the thread is running in a (web) request or a background process (i.e. a scheduler) and then, depending on that information, creates a session-scoped bean or a prototype bean?
Example (pseudo Spring config :)
<bean id="userInfoSession" scope="session" />
<bean id="userInfoStatic" scope="prototype" />
<bean id="currentUserInfoFactory" />
<bean id="someService" class="...">
<property name="userInfo" ref="currentUserInfoFactory.getCurrentUserInfo()" />
</bean>
I hope this makes my question easier to understand...
My Solution
It's never too late to update your own questions ;). I solved it with two different client session instances, one session-scoped and one singleton-scoped. Both are normal beans.
<bean id="sessionScopedClientSession" class="com.company.product.session.SessionScopedClientSession" scope="session">
<aop:scoped-proxy />
</bean>
<bean id="singletonScopedClientSession" class="com.company.product.session.SingletonScopedClientSession" />
<bean id="clientSession" class="com.company.product.session.ClientSession">
<property name="sessionScopedClientSessionBeanName" value="sessionScopedClientSession" />
<property name="singletonScopedClientSessionBeanName" value="singletonScopedClientSession" />
</bean>
The ClientSession then decides whether to use the singleton or the session scope:
private IClientSession getSessionAwareClientData() {
String beanName = (isInSessionContext() ? sessionScopedClientSessionBeanName : singletonScopedClientSessionBeanName);
return (IClientSession) ApplicationContextProvider.getApplicationContext().getBean(beanName);
}
Whether we are in a session context can be determined like this:
private boolean isInSessionContext() {
return RequestContextHolder.getRequestAttributes() != null;
}
All the classes implement an interface called IClientSession. Both the singleton-scoped and the session-scoped beans extend a BaseClientSession where the implementation is found.
Every service can then use the client session, e.g.:
@Resource
private ClientSession clientSession;
...
public void doSomething() {
    Long orgId = clientSession.getSomethingFromSession();
}
Going one step further, we can write something like an emulator for the session. This can be done by having the clientSession fall back to the singleton session when no request context is present. All services can then use the same clientSession and we can still "emulate" a user, e.g.:
clientSessionEmulator.startEmulateUser( testUser );
try {
service.doSomething();
} finally {
clientSessionEmulator.stopEmulation();
}
One more piece of advice: take care with threading in the singleton-scoped clientSession instance! Wow, I thought I could do it in fewer lines ;) If you would like to know more about this approach, feel free to contact me.
I created a small, universal workaround to inject beans depending on context.
Say we have two beans:
<bean class="xyz.UserInfo" id="userInfo" scope="session" />
<bean class="xyz.UserInfo" id="userInfoSessionLess" />
We want to use the "userInfo" bean for web user actions and the "userInfoSessionLess" bean for background services, for example.
We also want to write code without thinking about the context, for example:
@Autowired
// You would normally get "java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request?" for session-less services.
// We can fix that and autowire "userInfo" or "userInfoSessionLess" depending on the context...
private UserInfo userInfo;

public void save(Document superSecureDocument) {
    ...
    superSecureDocument.lastModifier = userInfo.getUser();
    ...
}
Now we need to create a custom session scope to make this work:
public class MYSessionScope extends SessionScope implements ApplicationContextAware {

    private static final String SESSION_LESS_POSTFIX = "SessionLess";

    private ApplicationContext applicationContext;

    public Object get(String name, ObjectFactory objectFactory) {
        if (isInSessionContext()) {
            log.debug("Return session Bean... name = " + name);
            return super.get(name, objectFactory);
        } else {
            log.debug("Trying to access session Bean outside of Request Context... name = " + name + " return bean with name = " + name + SESSION_LESS_POSTFIX);
            return applicationContext.getBean(name.replace("scopedTarget.", "") + SESSION_LESS_POSTFIX);
        }
    }

    private boolean isInSessionContext() {
        return RequestContextHolder.getRequestAttributes() != null;
    }

    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }
}
Register new scope:
<bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
<property name="scopes">
<map>
<entry key="mySession">
<bean class="com.galantis.gbf.web.MYSessionScope" />
</entry>
</map>
</property>
</bean>
Now we need to modify the bean definitions like this:
<bean class="xyz.UserInfo" id="userInfo" scope="mySession" autowire-candidate="true"/>
<bean class="xyz.UserInfo" id="userInfoSessionLess" autowire-candidate="false"/>
That's all. The bean whose name ends with "SessionLess" will be used for any "mySession"-scoped bean that is accessed outside of an actual web request thread.
Your rephrase is indeed considerably simpler :)
Your currentUserInfoFactory could make use of RequestContextHolder.getRequestAttributes(). If a session is present and associated with the calling thread, then this will return a non-null object, and you can then safely retrieve the session-scoped bean from the context. If it returns null, then you should fetch the prototype-scoped bean instead.
It's not very neat, but it's simple, and should work.
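A rough sketch of such a factory, assuming the two bean names from the question ("userInfoSession" and "userInfoStatic") and that the factory is made ApplicationContextAware:

public class CurrentUserInfoFactory implements ApplicationContextAware {

    private ApplicationContext applicationContext;

    public UserInfo getCurrentUserInfo() {
        // A non-null result means we are on a thread bound to a web request,
        // so the session-scoped bean can be resolved safely.
        boolean inRequest = RequestContextHolder.getRequestAttributes() != null;
        String beanName = inRequest ? "userInfoSession" : "userInfoStatic";
        return applicationContext.getBean(beanName, UserInfo.class);
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }
}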
Create two custom context loaders that bind the same scope definition to different implementations:
public final class SessionScopeContextLoader extends GenericXmlContextLoader {
protected void customizeContext(final GenericApplicationContext context) {
final SessionScope testSessionScope = new SessionScope();
context.getBeanFactory().registerScope("superscope", testSessionScope);
}
...
}
Then you make a corresponding one for the singleton case (make your own scope backed by statics).
Then you just specify the appropriate context loader in the xml startup for each of the two contexts.
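For the "scope backed by statics" mentioned above, a minimal sketch could look like this (class name is made up; Spring's Scope interface is implemented only as far as needed, and the lookup is not strictly thread-safe):

public class StaticSingletonScope implements Scope {

    // Shared across the whole JVM, standing in for singleton behavior.
    private static final Map<String, Object> BEANS = new ConcurrentHashMap<String, Object>();

    public Object get(String name, ObjectFactory<?> objectFactory) {
        Object bean = BEANS.get(name);
        if (bean == null) {
            bean = objectFactory.getObject();
            BEANS.put(name, bean);
        }
        return bean;
    }

    public Object remove(String name) {
        return BEANS.remove(name);
    }

    public void registerDestructionCallback(String name, Runnable callback) {
        // destruction callbacks not supported in this sketch
    }

    public Object resolveContextualObject(String key) {
        return null;
    }

    public String getConversationId() {
        return null;
    }
}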

Spring @Transactional wrapping 2 methods

I'm a Spring newbie. I use the @Transactional annotation for my DAO methods:
@Transactional
public Person getById(long id) {
    return new Person(jdbcTemplate.queryForMap(...));
}

@Transactional
public void save(Person person) {
    jdbcTemplate.update(...);
}
and I've set up the transaction manager like this:
<tx:annotation-driven transaction-manager="txManager" />
<bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
The problem is that when my client code calls dao.save(..) and then dao.getById(4) these happen in two separate transactions. How is it possible to wrap those 2 calls in the same database transaction? Ideally without doing it in a programmatic way.
thanks
It is bad practice to put transactional attributes in the DAO layer. Also, I am not sure why you require a transaction for the getById method. Even if you do, you need a wrapping method with REQUIRES_NEW propagation that calls both save and getById:
@Transactional(propagation = Propagation.REQUIRES_NEW, readOnly = false)
public Person saveAndGetById(Person person, long id) {
    save(person);
    return getById(id);
}

@Transactional(propagation = Propagation.REQUIRED)
public Person getById(long id) {
    return new Person(jdbcTemplate.queryForMap(...));
}

@Transactional(propagation = Propagation.REQUIRED, readOnly = false)
public void save(Person person) {
    jdbcTemplate.update(...);
}
However, the best thing would be to have the "save" method return an ID, because it is hard to know beforehand which ID the Person will have once persisted.
Good practice in this case would be to mark the service method which invokes both of these DAO methods as @Transactional. The case was clearly discussed here.
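A minimal sketch of that approach, assuming a PersonDao with the two methods from the question:

@Service
public class PersonService {

    @Autowired
    private PersonDao dao;

    // Both DAO calls join the single transaction started here.
    @Transactional
    public Person saveAndReload(Person person, long id) {
        dao.save(person);
        return dao.getById(id);
    }
}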
