The service consumes messages from Kafka and sends them as emails (org.springframework.mail.javamail). Sometimes a connection interruption occurs and I would like to retry sending the message. I'm using a RetryTemplate as follows:
@StreamRetryTemplate
RetryTemplate retryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();

    AlwaysRetryPolicy alwaysRetryPolicy = new AlwaysRetryPolicy();
    retryTemplate.setRetryPolicy(alwaysRetryPolicy);

    ExponentialBackOffPolicy exponentialBackOffPolicy = new ExponentialBackOffPolicy();
    exponentialBackOffPolicy.setInitialInterval(backOffInitialInterval);
    exponentialBackOffPolicy.setMaxInterval(backOffMaxInterval);
    exponentialBackOffPolicy.setMultiplier(backOffMultiplier);
    retryTemplate.setBackOffPolicy(exponentialBackOffPolicy);

    log.info("Configured email consumer's back off: backOffInitialInterval={}, backOffMaxInterval={}, backOffMultiplier={}",
            backOffInitialInterval, backOffMaxInterval, backOffMultiplier);
    return retryTemplate;
}
It works as expected (it retries the same message until the connection is re-established), but for some reason after a certain number of messages has been sent (the number actually varies, as I have noticed), the service (consumer) just hangs and nothing happens. When I restart the service it sometimes hangs the same way, but after a few restarts it behaves correctly.
Is it somehow related to the partitions assigned to the consumer? Or what is going on? Is there another, better and simpler approach to retrying?
I am using the DefaultMessageListenerContainer to consume messages from an ActiveMQ queue as below. With this implementation, is there any polling mechanism: does the listener poll the queue to see if there is a new message every second or so, or does the onMessage method get invoked whenever there is a new message in the queue? If it uses polling, how can we increase or decrease the polling frequency (time)?
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setMessageListener(new MessageJmsListener());

public class MessageJmsListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        if (message instanceof TextMessage) {
            try {
                // process the message and create a record in the database
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
}
The container polls the JMS client, but the broker pushes messages to the client.
So, no, the container does not poll the queue directly.
If there are no messages in the queue, the container will time out after receiveTimeout and immediately re-poll, and it will get the next message as soon as the broker sends it.
The prefetch determines how many messages the broker sends to the consumer ahead of time, so that might impact performance (but it's 1000 by default, I think, with recent ActiveMQ versions).
Setting the prefetch to 1 will give you the slowest delivery rate.
If you want to slow things down, you can add a Thread.sleep() in your listener.
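For reference, a minimal sketch of where those two knobs live, assuming Spring's DefaultMessageListenerContainer and the ActiveMQ client; the broker URL, queue name, and the 1-second timeout are illustrative assumptions, not values from your setup:

@Bean
public DefaultMessageListenerContainer container() {
    // Prefetch is an ActiveMQ client-side setting; here it is passed as a broker-URL option.
    ActiveMQConnectionFactory connectionFactory =
            new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");

    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("my.queue");   // hypothetical queue name
    container.setMessageListener(new MessageJmsListener());
    container.setReceiveTimeout(1000);          // how long each receive() blocks (ms) before the container loops
    return container;
}

The receiveTimeout only controls how long each internal receive() call waits when the queue is empty; it does not slow down delivery when messages are available.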
I am using Spring Integration and the Scatter-Gather handler (https://docs.spring.io/spring-integration/docs/5.3.0.M1/reference/html/scatter-gather.html) in order to send 3 parallel requests (using ExecutorChannels) to external REST APIs and aggregate their responses into one single message.
Everything works fine until an exception is thrown within the Aggregator's aggregatePayloads method (AggregatingMessageHandler). In this scenario the error message is successfully delivered to the Messaging Gateway which initiated the flow (the caller). However, the ScatterGatherHandler thread remains hanging, waiting for the gatherer reply (I believe), which never arrives due to the exception within it. I.e. each sequential call leaves one additional thread in a "stuck" state, and eventually the thread pool runs out of available worker threads.
My current Scatter-Gather configuration:
@Bean
public MessageHandler distributor() {
    RecipientListRouter router = new RecipientListRouter();
    router.setChannels(Arrays.asList(Channel1(asyncExecutor()), Channel2(asyncExecutor()), Channel3(asyncExecutor())));
    return router;
}

@Bean
public MessageHandler gatherer() {
    AggregatingMessageHandler aggregatingMessageHandler = new AggregatingMessageHandler(
            new TransactionAggregator(),
            new SimpleMessageStore(),
            new HeaderAttributeCorrelationStrategy("correlationID"),
            new ExpressionEvaluatingReleaseStrategy("size() == 3"));
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true);
    return aggregatingMessageHandler;
}

@Bean
@ServiceActivator(inputChannel = "validationOutputChannel")
public MessageHandler scatterGatherDistribution() {
    ScatterGatherHandler handler = new ScatterGatherHandler(distributor(), gatherer());
    handler.setErrorChannelName("scatterGatherErrorChannel");
    return handler;
}

@Bean("taskExecutor")
@Primary
public TaskExecutor asyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(4);
    executor.setMaxPoolSize(10);
    executor.setQueueCapacity(100);
    executor.setThreadNamePrefix("AsyncThread-");
    executor.initialize();
    return executor;
}
So far the only solution that I have found is to set requiresReply and gatherTimeout values for the ScatterGatherHandler, like below:
handler.setGatherTimeout(120000L);
handler.setRequiresReply(true);
This produces an exception and releases the ScatterGatherHandler's thread back to the pool after the specified timeout, and after the aggregator's exception is delivered to the messaging gateway. I can see the following message in the log:
[AsyncThread-1] [WARN] [o.s.m.c.GenericMessagingTemplate$TemporaryReplyChannel:] [{}] - Reply message received but the receiving thread has already received a reply: ErrorMessage
Is there any other way to achieve this? My main goal is to make sure that I am not blocking any threads when an exception is thrown within the aggregator's aggregatePayloads method.
Thank you.
Technically, this is really expected behavior. See the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#scatter-gather-error-handling
In this case a reasonable, finite gatherTimeout must be configured for the ScatterGatherHandler. Otherwise it is going to be blocked waiting for a reply from the gatherer forever, by default.
There is really no way to break out of the BlockingQueue.take() in that ScatterGatherHandler code other than the timeout.
I implemented a Spring Batch project that reads from a WebLogic JMS queue (a custom ItemReader, not message driven), then passes the JMS message data to an item writer (chunk = 1) where I call some APIs and write to the database.
However, I am trying to implement parallel JMS processing: reading JMS messages in parallel and passing them to the writer without waiting for the previous processing to complete.
I've used a DefaultMessageListenerContainer in a previous project and it offers parallel consumption of JMS messages, but in this project I have to use the Spring Batch framework.
I tried using the easiest solution (a multi-threaded step), but it didn't work: I get a JmsException ("invalid blocking receive when another receive is in progress"), which probably means that my reader is stateful.
I thought about using remote partitioning, but then I would have to read all the messages and put the data into step execution contexts before calling the slave steps, which isn't really efficient when dealing with a large number of messages.
I looked a little bit into remote chunking; I understand that it passes data via queue channels, but I can't see the point of reading from JMS and putting the messages into a local queue for the slave workers.
How can I approach this?
My code:
@Bean
Step step1() {
    return steps.get("step1").<Message, DetectionIncoherenceLiqJmsOut>chunk(1)
            .reader(reader()).processor(processor()).writer(writer())
            .listener(stepListener()).build();
}

@Bean
Job job(@Qualifier("step1") Step step1) {
    return jobs.get("job").start(step1).build();
}
JMS code:
@Override
public void initQueueConnection() throws NamingException, JMSException {
    Hashtable<String, String> properties = new Hashtable<String, String>();
    properties.put(Context.INITIAL_CONTEXT_FACTORY, env.getProperty(WebLogicConstant.JNDI_FACTORY));
    properties.put(Context.PROVIDER_URL, env.getProperty(WebLogicConstant.JMS_WEBLOGIC_URL_RECEIVE));
    InitialContext vInitialContext = new InitialContext(properties);
    QueueConnectionFactory vQueueConnectionFactory = (QueueConnectionFactory) vInitialContext
            .lookup(env.getProperty(WebLogicConstant.JMS_FACTORY_RECEIVE));
    vQueueConnection = vQueueConnectionFactory.createQueueConnection();
    vQueueConnection.start();
    vQueueSession = vQueueConnection.createQueueSession(false, 0);
    Queue vQueue = (Queue) vInitialContext.lookup(env.getProperty(WebLogicConstant.JMS_QUEUE_RECEIVE));
    consumer = vQueueSession.createConsumer(vQueue, "JMSCorrelationID IS NOT NULL");
}
@Override
public Message receiveMessages() throws NamingException, JMSException {
    return consumer.receive(20000);
}
Item reader:
@Override
public Message read() throws Exception {
    return jmsServiceReceiver.receiveMessages();
}
Thanks! I'd appreciate the help :)
There's a BatchMessageListenerContainer in the spring-batch-infrastructure-tests sub project.
https://github.com/spring-projects/spring-batch/blob/d8fc58338d3b059b67b5f777adc132d2564d7402/spring-batch-infrastructure-tests/src/main/java/org/springframework/batch/container/jms/BatchMessageListenerContainer.java
Message listener container adapted for intercepting the message reception with advice provided through configuration.
To enable batching of messages in a single transaction, use the TransactionInterceptor and the RepeatOperationsInterceptor in the advice chain (with or without a transaction manager set in the base class). Instead of receiving a single message and processing it, the container will then use a RepeatOperations to receive multiple messages in the same thread. Use with a RepeatOperations and a transaction interceptor. If the transaction interceptor uses XA, then use an XA connection factory, or else the TransactionAwareConnectionFactoryProxy to synchronize the JMS session with the ongoing transaction (opening up the possibility of duplicate messages after a failure). In the latter case you will not need to provide a transaction manager in the base class - it only gets in the way and prevents the JMS session from synchronizing with the database transaction.
Perhaps you could adapt it for your use case.
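For illustration only, a sketch of how such a container might be wired up, assuming it exposes the setAdviceChain(...) setter its Javadoc describes; the batch size of 10, the queue name, and the listener body are invented for the example:

@Bean
public BatchMessageListenerContainer batchContainer(ConnectionFactory connectionFactory) {
    // Repeat interceptor: receive up to 10 messages per batch on the same thread.
    RepeatTemplate repeatTemplate = new RepeatTemplate();
    repeatTemplate.setCompletionPolicy(new SimpleCompletionPolicy(10));
    RepeatOperationsInterceptor repeatInterceptor = new RepeatOperationsInterceptor();
    repeatInterceptor.setRepeatOperations(repeatTemplate);

    BatchMessageListenerContainer container = new BatchMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setDestinationName("my.queue");                 // hypothetical queue name
    container.setAdviceChain(new Advice[] { repeatInterceptor });
    container.setMessageListener((MessageListener) message -> {
        // hand the message off to your processing/writing logic
    });
    return container;
}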
I was able to do it with a multi-threaded step:
// Jobs and steps
@Bean
Step stepDetectionIncoherencesLiq(@Autowired StepBuilderFactory steps) {
    int threadSize = Integer.parseInt(env.getProperty(PropertyConstant.THREAD_POOL_SIZE));
    return steps.get("stepDetectionIncoherencesLiq").<Message, DetectionIncoherenceLiqJmsOut>chunk(1)
            .reader(reader()).processor(processor()).writer(writer())
            .readerIsTransactionalQueue()
            .faultTolerant()
            .taskExecutor(taskExecutor())
            .throttleLimit(threadSize)
            .listener(stepListener())
            .build();
}
And a JmsItemReader with a JmsTemplate instead of creating sessions and connections explicitly; the template manages connections, so I no longer get the JMS exception (JmsException: "invalid blocking receive when another receive is in progress"):
@Bean
public JmsItemReader<Message> reader() {
    JmsItemReader<Message> itemReader = new JmsItemReader<>();
    itemReader.setItemType(Message.class);
    itemReader.setJmsTemplate(jmsTemplate());
    return itemReader;
}
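The jmsTemplate() bean referenced above is not shown in the post; a minimal sketch of what it could look like, assuming a ConnectionFactory bean is already exposed (for example a JNDI lookup of the WebLogic factory). The 20-second receive timeout mirrors the earlier consumer.receive(20000) call; the destination property name is taken from the original JNDI code:

@Bean
public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
    // Default destination the JmsItemReader will receive from.
    jmsTemplate.setDefaultDestinationName(env.getProperty(WebLogicConstant.JMS_QUEUE_RECEIVE));
    // Block for at most 20 seconds per receive, like the original consumer.receive(20000).
    jmsTemplate.setReceiveTimeout(20000);
    jmsTemplate.setSessionTransacted(true);   // lets the step roll the message back on failure
    return jmsTemplate;
}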
Spring Cloud Kafka Streams does not retry upon deserialization error even after specific configuration. The expectation is that it should retry based on the configured retry policy and, in the end, push the failed message to the DLQ.
The configuration is as below.
spring.cloud.stream.bindings.input_topic.consumer.maxAttempts=7
spring.cloud.stream.bindings.input_topic.consumer.backOffInitialInterval=500
spring.cloud.stream.bindings.input_topic.consumer.backOffMultiplier=10.0
spring.cloud.stream.bindings.input_topic.consumer.backOffMaxInterval=100000
spring.cloud.stream.bindings.iinput_topic.consumer.defaultRetryable=true
public interface MyStreams {

    String INPUT_TOPIC = "input_topic";
    String INPUT_TOPIC2 = "input_topic2";
    String ERROR = "apperror";
    String OUTPUT = "output";

    @Input(INPUT_TOPIC)
    KStream<String, InObject> inboundTopic();

    @Input(INPUT_TOPIC2)
    KStream<Object, InObject> inboundTOPIC2();

    @Output(OUTPUT)
    KStream<Object, outObject> outbound();

    @Output(ERROR)
    MessageChannel outboundError();
}

@StreamListener(MyStreams.INPUT_TOPIC)
@SendTo(MyStreams.OUTPUT)
public KStream<Key, outObject> processSwft(KStream<Key, InObject> myStream) {
    return myStream.mapValues(this::transform);
}
The metadataRetryOperations in KafkaTopicProvisioner.java is always null, and hence a new RetryTemplate is created in afterPropertiesSet():
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties, KafkaProperties kafkaProperties) {
    Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
    this.adminClientProperties = kafkaProperties.buildAdminProperties();
    this.configurationProperties = kafkaBinderConfigurationProperties;
    this.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties, kafkaBinderConfigurationProperties);
}

public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
    this.metadataRetryOperations = metadataRetryOperations;
}

public void afterPropertiesSet() throws Exception {
    if (this.metadataRetryOperations == null) {
        RetryTemplate retryTemplate = new RetryTemplate();

        SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
        simpleRetryPolicy.setMaxAttempts(10);
        retryTemplate.setRetryPolicy(simpleRetryPolicy);

        ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
        backOffPolicy.setInitialInterval(100L);
        backOffPolicy.setMultiplier(2.0D);
        backOffPolicy.setMaxInterval(1000L);
        retryTemplate.setBackOffPolicy(backOffPolicy);

        this.metadataRetryOperations = retryTemplate;
    }
}
The retry configuration only works with MessageChannel-based binders. With the KStream binder, Spring just helps with building the topology in a prescribed way; it's not involved with the message flow once the topology is built.
The next version of spring-kafka (used by the binder) has added the RecoveringDeserializationExceptionHandler (commit here); while it can't help with retry, it can be used with a DeadLetterPublishingRecoverer to send the record to a dead-letter topic.
You can use a RetryTemplate within your processors/transformers to retry specific operations.
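For example, a hedged sketch of wrapping a single call in a RetryTemplate inside a mapValues step; the retryTemplate field, its settings, and the use of RetryTemplate.builder() (available in recent spring-retry versions) are assumptions, not code from the question:

// Assumed field; the attempt count and back-off values are illustrative.
private final RetryTemplate retryTemplate = RetryTemplate.builder()
        .maxAttempts(3)
        .exponentialBackoff(500, 2.0, 10_000)
        .build();

@StreamListener(MyStreams.INPUT_TOPIC)
@SendTo(MyStreams.OUTPUT)
public KStream<Key, outObject> processSwft(KStream<Key, InObject> myStream) {
    // Retries only the transform() call; a deserialization failure happens before this
    // point in the topology, so it is not covered by this retry.
    return myStream.mapValues(value -> retryTemplate.execute(context -> transform(value)));
}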
Spring Cloud Kafka Streams does not retry upon deserialization error even after specific configuration.
The behavior you are seeing matches the default settings of Kafka Streams when it encounters a deserialization error.
From https://docs.confluent.io/current/streams/faq.html#handling-corrupted-records-and-deserialization-errors-poison-pill-records:
LogAndFailExceptionHandler implements DeserializationExceptionHandler and is the default setting in Kafka Streams. It handles any encountered deserialization exceptions by logging the error and throwing a fatal error to stop your Streams application. If your application is configured to use LogAndFailExceptionHandler, then an instance of your application will fail-fast when it encounters a corrupted record by terminating itself.
I am not familiar with Spring's facade for Kafka Streams, but you probably need to configure the desired org.apache.kafka.streams.errors.DeserializationExceptionHandler, instead of configuring retries (they are meant for a different purpose). Or, you may want to implement your own, custom handler (see link above for more information), and then configure Spring/KStreams to use it.
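As a hedged illustration, this is what that configuration could look like in plain Kafka Streams properties; with the Spring Cloud Stream Kafka Streams binder, the same key would need to be passed through the binder's configuration properties (an assumption about your setup):

Properties props = new Properties();
// Replace the default fail-fast handler (LogAndFailExceptionHandler) with one that
// logs and skips corrupted records instead of stopping the application.
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
        LogAndContinueExceptionHandler.class);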
I have a non-durable topic client which is supposed to receive messages asynchronously using a listener.
When a message is published on the topic, I can see on the admin console that the message is published and consumed, but my client never receives it.
The client is able to establish the connection properly, as I can track it on the console.
Any suggestions?
EDIT:
I did some more analysis and found that the issue is with the API used for the connection.
I was able to listen to messages when I use the following code:
TopicConnection conn;   // obtained from a TopicConnectionFactory (not shown here)
TopicSession session = conn.createTopicSession(false, TopicSession.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic(monacoSubscriberEmsTopic);
conn.start();
tsubs = session.createSubscriber(topic);
tsubs.setMessageListener(listener);
But when I use the following code, it doesn't work:
DefaultMessageListenerContainer listenerContainer = createMessageListenerContainer();

private DefaultMessageListenerContainer createMessageListenerContainer() {
    DefaultMessageListenerContainer listenerContainer = new DefaultMessageListenerContainer();
    listenerContainer.setClientId(clientID);
    listenerContainer.setDestinationName(destination);
    listenerContainer.setConnectionFactory(connectionFactory);
    listenerContainer.setConcurrentConsumers(minConsumerCount);
    listenerContainer.setMaxConcurrentConsumers(maxConsumerCount);
    listenerContainer.setPubSubDomain(true);
    listenerContainer.setSessionAcknowledgeModeName(sessionAcknowledgeMode);
    if (messageSelector != null) {
        listenerContainer.setMessageSelector(messageSelector);
    }
    listenerContainer.setSessionTransacted(true);
    return listenerContainer;
}

listenerContainer.initialize();
listenerContainer.start();
What is wrong with the second approach?