Spring Integration: start a new transaction in my message flow (IntegrationFlowBuilder) to commit a change and resume the outer transaction - spring-boot

My jdbcSourceMessage executes a select for update, fetching a batch of 100 rows at a time.
The integrationFlow is executed inside a transaction so that a lock is held on the database for the fetched batch.
I would like to start a new transaction for my JdbcSourceUpdate (within the message flow) to execute an update and commit the change for each row sent through the channel.
@Bean
public IntegrationFlow integrationFlow() {
    IntegrationFlowBuilder flowBuilder = IntegrationFlows.from(jdbcSourceMessage());
    flowBuilder
            .split()
            .log(LoggingHandler.Level.INFO, message ->
                    message.getHeaders().get("sequenceNumber")
                            + " events published on the message bus out of "
                            + message.getHeaders().get("sequenceSize")
                            + " events read (batch)")
            .transform(Transformers.toJson())
            .log()
            .enrichHeaders(h -> h.headerExpression("type", "payload.typ_evenement"))
            .publishSubscribeChannel(publishSubscribeSpec -> publishSubscribeSpec
                    .subscribe(flow -> flow
                            .bridge()
                            .transform(Transformers.toJson())
                            .transform(kafkaGuyTransformer())
                            .channel(this.rabbitMQchannel.demandeInscriptionOutput()))
                    .subscribe(flow -> flow
                            .handle(jdbcMessageHandler()))
            );
    return flowBuilder.get();
}
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata defaultPoller() {
    PeriodicTrigger trigger = new PeriodicTrigger(this.proprietesSourceJdbc.getTriggerDelay(), TimeUnit.SECONDS);
    PollerMetadata pollerMetadata = Pollers.trigger(trigger)
            .advice(transactionInterceptor())
            .get();
    pollerMetadata.setMaxMessagesPerPoll(proprietesSourceJdbc.getMaxRowsPerPoll());
    return pollerMetadata;
}

@Bean
public JdbcSourceUpdate jdbcSourceUpdate() {
    return new JdbcSourceUpdate();
}

public TransactionInterceptor transactionInterceptor() {
    return new TransactionInterceptorBuilder()
            .transactionManager(transactionManager())
            .build();
}

public PlatformTransactionManager transactionManager() {
    DataSourceTransactionManager transactionManager = new DataSourceTransactionManager(sourceDeDonnees);
    transactionManager.setRollbackOnCommitFailure(false);
    return transactionManager;
}
public class KafkaGuyTransformer implements GenericTransformer<Message, Message> {

    @Override
    public Message transform(Message message) {
        Message<String> msg = null;
        try {
            DemandeRecueDTO dto = objectMapper.readValue(message.getPayload().toString(), DemandeRecueDTO.class);
            msg = MessageBuilder.withPayload(dto.getTxtDonnee())
                    .copyHeaders(message.getHeaders())
                    .build();
        } catch (Exception e) {
            LOG.error(e.getMessage(), e);
        }
        return msg;
    }
}

public class JdbcSourceUpdate implements MessageHandler {

    @Override
    public void handleMessage(Message<?> message) throws MessagingException {
        try {
            Thread.sleep(100);
            DemandeRecueDTO dto = objectMapper.readValue(message.getPayload().toString(), DemandeRecueDTO.class);
            jdbcTemplate.update(proprietesSourceJdbc.getUpdate(), dto.getIdEvenementDemandeCrcd());
        } catch (Exception e) {
            LOG.error(e.getMessage(), e);
        }
    }
}

Since you have that JdbcSourceUpdate implementation, it is enough to do just this:
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Override
public void handleMessage(Message<?> message) throws MessagingException {
See its JavaDocs for more info:
/**
 * Create a new transaction, and suspend the current transaction if one exists.
 * Analogous to the EJB transaction attribute of the same name.
 * <p><b>NOTE:</b> Actual transaction suspension will not work out-of-the-box
 * on all transaction managers. This in particular applies to
 * {@link org.springframework.transaction.jta.JtaTransactionManager},
 * which requires the {@code javax.transaction.TransactionManager} to be
 * made available to it (which is server-specific in standard Java EE).
 * @see org.springframework.transaction.jta.JtaTransactionManager#setTransactionManager
 */
REQUIRES_NEW(TransactionDefinition.PROPAGATION_REQUIRES_NEW),
UPDATE
Pay attention to the NOTE though:
* Actual transaction suspension will not work out-of-the-box
* on all transaction managers. This in particular applies to
* {@link org.springframework.transaction.jta.JtaTransactionManager}.
So, it sounds like DataSourceTransactionManager doesn't work with suspension. I can suggest that you consider using a .gateway() for that JdbcSourceUpdate, but with an ExecutorChannel. This way your handle() for that JdbcSourceUpdate will be performed on a new thread and, therefore, in a new transaction. The main flow will wait for the reply from that gateway, holding its transaction open.
Something like this:
.subscribe(f -> f
        .gateway(subFlow ->
                subFlow.channel(c -> c.executor())
                        .handle(jdbcMessageHandler()))
        .channel("nullChannel")
));
But your JdbcSourceUpdate must return something for the gateway reply. Consider not implementing MessageHandler there; instead make it a plain POJO with a single non-void method.
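For illustration, a minimal sketch of that POJO variant (the method name and the reply value are my assumptions, not from the original):
// Hypothetical POJO replacement for JdbcSourceUpdate: no MessageHandler,
// just one non-void method whose return value becomes the gateway reply.
public class JdbcSourceUpdate {

    public Message<?> updateRow(Message<?> message) {
        try {
            DemandeRecueDTO dto = objectMapper.readValue(message.getPayload().toString(), DemandeRecueDTO.class);
            jdbcTemplate.update(proprietesSourceJdbc.getUpdate(), dto.getIdEvenementDemandeCrcd());
        }
        catch (Exception e) {
            LOG.error(e.getMessage(), e);
        }
        return message; // the reply releases the waiting main flow
    }
}
It would then be registered in the sub-flow with .handle(jdbcSourceUpdate(), "updateRow").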

Related

RabbitMQ Spring "Cannot determine target ConnectionFactory for lookup key" when using Java lambda parallelStream

We have a Spring Java application using RabbitMQ, and here is the scenario:
There is a consumer receiving messages from a queue and sending them to another one. We are using SimpleRabbitListenerContainerFactory as the container factory, but when sending the messages to the other queue inside a parallelStream we get an IllegalStateException: "Cannot determine target ConnectionFactory for lookup key".
When we remove the parallelStream it works flawlessly.
public void sendMessage(final StagingMessage stagingMessage, final Long timestamp, final String country) {
    final List<TransformedMessage> messages = processMessageList(stagingMessage);
    messages.parallelStream().forEach(message -> {
        final TransformedMessage transformedMessage = buildMessage(timestamp, ApiConstants.POST_METHOD, country);
        myMessageSender.sendQueue(country, transformedMessage);
    });
}
Connection factory, where the lookup key is set:
@Configuration
@EnableRabbit
public class RabbitBaseConfig {

    @Autowired
    private QueueProperties queueProperties;

    @Bean
    @Primary
    public ConnectionFactory connectionFactory(final ConnectionFactory connectionFactoryA, final ConnectionFactory connectionFactoryB) {
        final SimpleRoutingConnectionFactory simpleRoutingConnectionFactory = new SimpleRoutingConnectionFactory();
        final Map<Object, ConnectionFactory> map = new HashMap<>();
        for (final String queue : queueProperties.getAQueueMap().values()) {
            map.put("[" + queue + "]", connectionFactoryA);
        }
        for (final String queue : queueProperties.getBQueueMap().values()) {
            map.put("[" + queue + "]", connectionFactoryB);
        }
        simpleRoutingConnectionFactory.setTargetConnectionFactories(map);
        return simpleRoutingConnectionFactory;
    }

    @Bean
    public Jackson2JsonMessageConverter jackson2JsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
}
Welcome to Stack Overflow!
You should always show the pertinent code and configuration beans when asking questions like this.
I assume you are using the RoutingConnectionFactory.
It uses a ThreadLocal to store the lookup key so the send has to happen on the same thread that set the key.
You generally should never go asynchronous in a listener anyway; you risk message loss. To increase concurrency, use the concurrency properties on the container.
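For example, with the SimpleRabbitListenerContainerFactory you already use, that could look like this sketch (the numbers are arbitrary):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // scale the listener container instead of going async inside the listener
    factory.setConcurrentConsumers(4);
    factory.setMaxConcurrentConsumers(8);
    return factory;
}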
EDIT
One technique would be to convey the lookup key in a message header:
@Bean
public RabbitTemplate template(ConnectionFactory rcf) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(rcf);
    Expression expression = new SpelExpressionParser().parseExpression("messageProperties.headers['cfSelector']");
    rabbitTemplate.setSendConnectionFactorySelectorExpression(expression);
    return rabbitTemplate;
}
@RabbitListener(queues = "foo")
public void listen1(String in) {
    IntStream.range(0, 10)
            .parallel()
            .mapToObj(i -> in + i)
            .forEach(val -> {
                this.template.convertAndSend("bar", val.toUpperCase(), msg -> {
                    msg.getMessageProperties().setHeader("cfSelector", "[bar]");
                    return msg;
                });
            });
}
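Alternatively, since the key lives in a ThreadLocal, each worker thread can bind its own lookup key around the send with Spring AMQP's SimpleResourceHolder. A sketch against the sendMessage() from the question (routingConnectionFactory and queueFor() are assumed names):
messages.parallelStream().forEach(message -> {
    final TransformedMessage transformedMessage = buildMessage(timestamp, ApiConstants.POST_METHOD, country);
    // bind the lookup key on THIS worker thread, keyed by the routing connection factory
    SimpleResourceHolder.bind(routingConnectionFactory, "[" + queueFor(country) + "]");
    try {
        myMessageSender.sendQueue(country, transformedMessage);
    }
    finally {
        SimpleResourceHolder.unbind(routingConnectionFactory); // always clear the ThreadLocal
    }
});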

How to dead-letter RabbitMQ messages when an exception happens in a service after an aggregator's forceRelease

I am trying to figure out the best way to handle errors that might occur in a service that is called after an aggregator's group timeout, mimicking the same flow as if the releaseExpression had been met.
Here is my setup:
I have an AmqpInboundChannelAdapter that takes in messages and sends them to my aggregator.
When the releaseExpression has been met before the groupTimeout expires, if an exception gets thrown in my ServiceActivator, all the messages in that MessageGroup get sent to my dead letter queue (10 messages in my example below, which is only used for illustrative purposes). This is what I would expect.
If my releaseExpression hasn't been met but the groupTimeout has elapsed and the group times out, if an exception gets thrown in my ServiceActivator, then the messages do not get sent to my dead letter queue and are acked.
After reading another blog post,
link1
it seems this happens because the processing happens on another thread, owned by the MessageGroupStoreReaper, and not the one the SimpleMessageListenerContainer was on. Once processing moves away from the SimpleMessageListenerContainer's thread, the messages are auto-acked.
I added the configuration mentioned in the link above and see the error messages getting sent to my error handler. My main question is: what is considered the best way to handle this scenario to minimize messages getting lost?
Here are the options I was exploring:
Use a BatchingRabbitTemplate in my custom error handler to publish the failed messages to the same dead letter queue that they would have gone to if the releaseExpression had been met. (This is the approach I outlined below, but I am worried about messages getting lost if an error happens during publishing.)
Investigate whether there is a way I could let the SimpleMessageListenerContainer know about the error that occurred and have it send the batch of messages that failed to a dead letter queue. (I doubt this is possible, since it seems the messages are already acked.)
Don't set the SimpleMessageListenerContainer to AcknowledgeMode.AUTO, and manually ack the messages when they get processed via the service when the releaseExpression is met or the groupTimeout happens. (This seems kind of messy, since there can be 1..N messages in the MessageGroup, but I wanted to see what others have done.)
Ideally, I want a flow that mimics the flow when the releaseExpression has been met, so that the messages don't get lost.
Does anyone have a recommendation on the best way to handle this scenario, that they have used in the past?
Thanks for any help and/or advice!
Here is my current configuration using Spring Integration DSL
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    container.setTransactionManager(transactionManager);
    container.setChannelTransacted(true);
    container.setTxSize(10);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    return container;
}

@Bean
public AmqpInboundChannelAdapter inboundRabbitMessages() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(workListenerContainer());
    return adapter;
}
I have defined an error channel and my own taskScheduler to use for the MessageGroupStoreReaper:
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    MessagePublishingErrorHandler mpe = new MessagePublishingErrorHandler();
    mpe.setDefaultErrorChannel(myErrorChannel());
    ts.setErrorHandler(mpe);
    return ts;
}

@Bean
public PollableChannel myErrorChannel() {
    return new QueueChannel();
}
public IntegrationFlow aggregationFlow() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
            })
            .handle("someService", "processMessages")
            .get();
}
Here is my custom error flow
@Bean
public IntegrationFlow errorResponse() {
    return IntegrationFlows.from("myErrorChannel")
            .<MessagingException, Message<?>>transform(MessagingException::getFailedMessage,
                    e -> e.poller(p -> p.fixedDelay(100)))
            .channel("myErrorChannelHandler")
            .handle("myErrorHandler", "handleFailedMessage")
            .log()
            .get();
}
Here is the custom error handler
@Component
public class MyErrorHandler {

    @Autowired
    BatchingRabbitTemplate batchingRabbitTemplate;

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(Message<?> message) {
        ArrayList<SomeObject> payload = (ArrayList<SomeObject>) message.getPayload();
        payload.forEach(m -> batchingRabbitTemplate.convertAndSend("some.dlq", "#", m));
    }
}
Here is the BatchingRabbitTemplate bean
@Bean
public BatchingRabbitTemplate batchingRabbitTemplate() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(5);
    scheduler.initialize();
    BatchingStrategy batchingStrategy = new SimpleBatchingStrategy(10, Integer.MAX_VALUE, 30000);
    BatchingRabbitTemplate batchingRabbitTemplate = new BatchingRabbitTemplate(batchingStrategy, scheduler);
    batchingRabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    return batchingRabbitTemplate;
}
Update 1) to show the custom MessageGroupProcessor:
public class CustomAggregtingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected final Object aggregatePayloads(MessageGroup group, Map<String, Object> headers) {
        return group;
    }
}
Example Service:
@Slf4j
public class SomeService {

    @ServiceActivator
    public void processMessages(MessageGroup messageGroup) throws IOException {
        Collection<Message<?>> messages = messageGroup.getMessages();
        // Do business logic
        // ack messages in the group
        for (Message<?> m : messages) {
            com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                    m.getHeaders().get("amqp_channel");
            long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
            log.debug("deliveryTag = {}", deliveryTag);
            log.debug("Channel = {}", channel);
            channel.basicAck(deliveryTag, false);
        }
    }
}
Updated integrationFlow
public IntegrationFlow aggregationFlowWithCustomMessageProcessor() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
New error handler to do the nack:
public class MyErrorHandler {

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(MessageGroup messageGroup) throws IOException {
        if (messageGroup != null) {
            log.debug("Nack messages size = {}", messageGroup.getMessages().size());
            Collection<Message<?>> messages = messageGroup.getMessages();
            for (Message<?> m : messages) {
                com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                        m.getHeaders().get("amqp_channel");
                long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
                log.debug("deliveryTag = {}", deliveryTag);
                log.debug("channel = {}", channel);
                channel.basicNack(deliveryTag, false, false);
            }
        }
    }
}
Update 2) added a custom ReleaseStrategy and changed the aggregator:
public class CustomMeasureGroupReleaseStratgedy implements ReleaseStrategy {

    private static final int MAX_MESSAGE_COUNT = 10;

    public boolean canRelease(MessageGroup messageGroup) {
        return messageGroup.getMessages().size() >= MAX_MESSAGE_COUNT;
    }
}

public IntegrationFlow aggregationFlowWithCustomMessageProcessorAndReleaseStratgedy() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.transactional(true);
                a.releaseStrategy(new CustomMeasureGroupReleaseStratgedy());
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
There are some flaws in your understanding. If you use AUTO, only the last message will be dead-lettered when an exception occurs. Messages successfully deposited in the group before the release will be acked immediately.
The only way to achieve what you want is to use MANUAL acks.
There is no way to "tell the listener container to send messages to the DLQ". The container never sends messages to the DLQ; it rejects a message, and the broker sends it to the DLX/DLQ.
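A sketch of what the MANUAL variant of the container from the question would look like; the basicAck()/basicNack() calls from Update 1 and the nacking error handler then do the acknowledgment (transaction settings omitted here for brevity):
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    // MANUAL: deliveries stay unacked until the service calls channel.basicAck(),
    // so a group released by the reaper thread can still be basicNack'ed to the DLX/DLQ
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}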

Can we batch up groups of 10 messages in Mosquitto using Spring Integration?

This is how I have defined my MQTT connection using Spring Integration. I am not sure whether this is possible, but can we set up an MQTT subscriber that only fires after receiving a load of 10 messages? Right now the subscriber fires after each published message, as it should.
@Autowired
ConnectorConfig config;

@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    factory.setServerURIs(config.getUrl());
    factory.setUserName(config.getUser());
    factory.setPassword(config.getPass());
    return factory;
}

@Bean
public MessageProducer inbound() {
    MqttPahoMessageDrivenChannelAdapter adapter =
            new MqttPahoMessageDrivenChannelAdapter(config.getClientid(), mqttClientFactory(), "ALERT", "READING");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    adapter.setOutputChannel(mqttRouterChannel());
    return adapter;
}
/** this is the router **/
@MessageEndpoint
public class MessageRouter {

    private final Logger logger = LoggerFactory.getLogger(MessageRouter.class);

    static final String ALERT = "ALERT";
    static final String READING = "READING";

    @Router(inputChannel = "mqttRouterChannel")
    public String route(@Header("mqtt_topic") String topic) {
        String route = null;
        switch (topic) {
            case ALERT:
                logger.info("alert message received");
                route = "alertTransformerChannel";
                break;
            case READING:
                logger.info("reading message received");
                route = "readingTransformerChannel";
                break;
        }
        return route;
    }
}
I need to batch up groups of 10 messages at a time.
That is not a MqttPahoMessageDrivenChannelAdapter responsibility.
We use the MqttCallback there with this semantic:
/**
 * @param topic name of the topic on the message was published to
 * @param message the actual message.
 * @throws Exception if a terminal error has occurred, and the client should be
 * shut down.
 */
public void messageArrived(String topic, MqttMessage message) throws Exception;
So, we can't batch them there on this channel adapter, by the nature of the Paho client.
What we can suggest from the Spring Integration perspective is an Aggregator EIP implementation.
In your case you should add a @ServiceActivator for an AggregatorFactoryBean @Bean before that mqttRouterChannel, i.e. before sending to the router.
That may be as simple as:
@Bean
@ServiceActivator(inputChannel = "mqttAggregatorChannel")
AggregatorFactoryBean mqttAggregator() {
    AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
    aggregator.setProcessorBean(new DefaultAggregatingMessageGroupProcessor());
    aggregator.setCorrelationStrategy(m -> 1);
    aggregator.setReleaseStrategy(new MessageCountReleaseStrategy(10));
    aggregator.setExpireGroupsUponCompletion(true);
    aggregator.setSendPartialResultOnExpiry(true);
    aggregator.setGroupTimeoutExpression(new ValueExpression<>(1000));
    aggregator.setOutputChannelName("mqttRouterChannel");
    return aggregator;
}
See more information in the Reference Manual.
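Presumably the adapter's output channel then points at the aggregator instead of the router; a sketch reusing the inbound() bean from the question (mqttAggregatorChannel as a DirectChannel is my assumption):
@Bean
public MessageChannel mqttAggregatorChannel() {
    return new DirectChannel();
}

@Bean
public MessageProducer inbound() {
    MqttPahoMessageDrivenChannelAdapter adapter =
            new MqttPahoMessageDrivenChannelAdapter(config.getClientid(), mqttClientFactory(), "ALERT", "READING");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    // deliver into the aggregator first; it forwards batches of 10 to mqttRouterChannel
    adapter.setOutputChannel(mqttAggregatorChannel());
    return adapter;
}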

No delay happening when sending messages between message channels

I am new to Spring Integration DSL. Currently, I am trying to add a delay between the message channels "ordersChannel" and "bookItemsChannel", but the flow continues as though there is no delay.
Any help appreciated.
Here is the code:
@Bean
public IntegrationFlow ordersFlow() {
    return IntegrationFlows.from("ordersChannel")
            .split(new AbstractMessageSplitter() {

                @Override
                protected Object splitMessage(Message<?> message) {
                    return ((Order) message.getPayload()).getOrderItems();
                }
            })
            .delay("normalMessage", new Consumer<DelayerEndpointSpec>() {

                public void accept(DelayerEndpointSpec spec) {
                    spec.id("delayChannel");
                    spec.defaultDelay(50000000);
                    System.out.println("Going to delay");
                }
            })
            .channel("bookItemsChannel")
            .get();
}
It seems to me that you have mixed up the init phase, when you see that System.out.println("Going to delay"), and the real runtime, when the delay happens for each incoming message.
We have some delay test cases in the DSL project, but I've just written this one to prove that the defaultDelay works well:
@Bean
public IntegrationFlow ordersFlow() {
    return f -> f
            .split()
            .delay("normalMessage", (DelayerEndpointSpec e) -> e.defaultDelay(5000))
            .channel(c -> c.queue("bookItemsChannel"));
}

...

@Autowired
@Qualifier("ordersFlow.input")
private MessageChannel ordersFlowInput;

@Autowired
@Qualifier("bookItemsChannel")
private PollableChannel bookItemsChannel;

@Test
public void ordersDelayTests() {
    this.ordersFlowInput.send(new GenericMessage<>(new String[] { "foo", "bar", "baz" }));

    StopWatch stopWatch = new StopWatch();
    stopWatch.start();

    Message<?> receive = this.bookItemsChannel.receive(10000);
    assertNotNull(receive);

    receive = this.bookItemsChannel.receive(10000);
    assertNotNull(receive);

    receive = this.bookItemsChannel.receive(10000);
    assertNotNull(receive);

    stopWatch.stop();
    assertThat(stopWatch.getTotalTimeMillis(), greaterThanOrEqualTo(5000L));
}
As you see, it is very close to your config, but it doesn't prove that something is wrong with .delay().
So, it would be better if you provided something similar to confirm the unexpected problem.

Spring Integration - JdbcPollingChannelAdapter commit instead of rollback on handled Exceptions

I am using Spring 4.1.x APIs, Spring Integration 4.1.x APIs and Spring Integration Java DSL 1.0.x APIs for an EIP flow where we consume messages from an Oracle database table using a JdbcPollingChannelAdapter as the entry point into the flow.
Even though we have an ErrorHandler configured on the JdbcPollingChannelAdapter's Poller, we are seeing that a session's Transaction is still rolled back and not committed when a RuntimeException is thrown and correctly handled by the ErrorHandler.
After reading through this thread: Spring Transactions - Prevent rollback after unchecked exceptions (RuntimeException), I get the feeling that it is not possible to prevent a rollback and instead force a commit. Is this correct? And, if there is a way, what is the cleanest way to force a commit instead of a rollback when an error is safely handled?
Current Configuration:
IntegrationConfig.java:
@Bean
public MessageSource<Object> jdbcMessageSource() {
    JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(
            dataSource,
            "select * from SERVICE_TABLE where rownum <= 10 for update skip locked");
    adapter.setUpdateSql("delete from SERVICE_TABLE where SERVICE_MESSAGE_ID in (:id)");
    adapter.setRowMapper(serviceMessageRowMapper);
    adapter.setMaxRowsPerPoll(1);
    adapter.setUpdatePerRow(true);
    return adapter;
}
@SuppressWarnings("unchecked")
@Bean
public IntegrationFlow inFlow() {
    return IntegrationFlows
            .from(jdbcMessageSource(),
                    c -> {
                        c.poller(Pollers.fixedRate(100)
                                .maxMessagesPerPoll(10)
                                .transactional(transactionManager)
                                .errorHandler(errorHandler));
                    })
            .channel(inProcessCh()).get();
}
ErrorHandler.java
@Component
public class ErrorHandler implements org.springframework.util.ErrorHandler {

    @Autowired
    private PlatformTransactionManager transactionManager;

    private static final Logger logger = LogManager.getLogger();

    @Override
    public void handleError(Throwable t) {
        logger.trace("handling error:{}", t.getMessage(), t);
        // handle error code here...
        // we want to force commit the transaction here?
        TransactionStatus txStatus = transactionManager.getTransaction(null);
        transactionManager.commit(txStatus);
    }
}
--- EDITED to include ExpressionEvaluatingRequestHandlerAdvice Bean ---
@Bean
public Advice expressionEvaluatingRequestHandlerAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice expressionEvaluatingRequestHandlerAdvice = new ExpressionEvaluatingRequestHandlerAdvice();
    expressionEvaluatingRequestHandlerAdvice.setTrapException(true);
    expressionEvaluatingRequestHandlerAdvice.setOnSuccessExpression("payload");
    expressionEvaluatingRequestHandlerAdvice.setOnFailureExpression("payload");
    expressionEvaluatingRequestHandlerAdvice.setFailureChannel(errorCh());
    return expressionEvaluatingRequestHandlerAdvice;
}
--- EDITED to show Dummy Test Message handler ---
.handle(Message.class,
        (m, h) -> {
            boolean forceTestError = (boolean) m.getHeaders().get("forceTestError");
            if (forceTestError) {
                logger.trace("simulated forced TestException");
                TestException testException = new TestException(
                        "forced test exception");
                throw testException;
            }
            logger.trace("simulated successful process");
            return m;
        }, e -> e.advice(expressionEvaluatingRequestHandlerAdvice())
--- EDITED to show ExecutorChannelInterceptor method ---
@Bean
public IntegrationFlow inFlow() {
    return IntegrationFlows
            .from(jdbcMessageSource(), c -> {
                c.poller(Pollers.fixedRate(100).maxMessagesPerPoll(10)
                        .transactional(transactionManager));
            })
            .enrichHeaders(h -> h.header("errorChannel", errorCh(), true))
            .channel(
                    MessageChannels.executor("testSyncTaskExecutor",
                            syncTaskExecutor()).interceptor(
                            testExecutorChannelInterceptor()))
            .handle(Message.class, (m, h) -> {
                boolean forceTestError = (boolean) m.getHeaders().get("forceTestError");
                if (forceTestError) {
                    logger.trace("simulated forced TestException");
                    TestException testException = new TestException(
                            "forced test exception");
                    throw testException;
                }
                logger.trace("simulated successful process");
                return null;
            }).channel("nullChannel").get();
}
It won't work, just because your ErrorHandler is applied only after the TX has already finished.
Here are a couple of lines of the source code (AbstractPollingEndpoint.Poller):
@Override
public void run() {
    taskExecutor.execute(new Runnable() {

        @Override
        public void run() {
            .............
            try {
                if (!pollingTask.call()) {
                    break;
                }
                count++;
            }
            catch (Exception e) {
                ....
            }
        }
    });
}
Where:
The ErrorHandler is applied to the taskExecutor (a SyncTaskExecutor by default).
The TransactionInterceptor, being an Aspect, is applied to the proxy around that pollingTask.
Therefore the TX is committed or rolled back around pollingTask.call() and then is gone. Only after that does your ErrorHandler start to work, inside taskExecutor.execute().
To fix your issue, you need to figure out which downstream part of the flow isn't so critical for TX rollback and add some try...catch there, or use ExpressionEvaluatingRequestHandlerAdvice to swallow that RuntimeException.
But, as you have noticed from my reasoning, that must be done within the TX.
