Filter messages before executing @RabbitListener - spring-boot

How can I filter a message before it is processed by a @RabbitListener annotated method?
If the message is, for example, "duplicated" because it contains a header with a particular value, I would like to return "ack" and skip processing (skip the body of the @RabbitListener method).
I tried to do it in a MessagePostProcessor (with addAfterReceivePostProcessors) but cannot skip execution based on a message property (header), for example.
This is the signature of MessagePostProcessor:
Message postProcessMessage(Message message) throws AmqpException;
I would like to return an "ack" here so the message processing is skipped.
Thank you for your support.

I think an AmqpRejectAndDontRequeueException is what you need to throw from your MessagePostProcessor impl.
See its javadocs:
/**
* Exception for listener implementations used to indicate the
* basic.reject will be sent with requeue=false in order to enable
* features such as DLQ.
*
* @author Gary Russell
* @since 1.0.1
*/
@SuppressWarnings("serial")
public class AmqpRejectAndDontRequeueException extends AmqpException {
And respective docs: https://docs.spring.io/spring-amqp/docs/current/reference/html/#exception-handling
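For illustration, a minimal sketch of wiring that into the container factory (the "x-duplicate" header name and the factory bean are assumptions, not from the question):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setAfterReceivePostProcessors(message -> {
        // Hypothetical duplicate marker; adapt the header name/value to your case
        if (Boolean.TRUE.equals(message.getMessageProperties().getHeaders().get("x-duplicate"))) {
            // Sends basic.reject with requeue=false; the @RabbitListener method is never invoked
            throw new AmqpRejectAndDontRequeueException("skipping duplicate message");
        }
        return message;
    });
    return factory;
}

Note this is a reject rather than an ack, so without a DLQ bound to the queue the skipped message is simply dropped.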

Related

Kafka ConsumerInterceptor onCommit not being called when using transactions

I'm using Spring Kafka in a Spring Boot application. I'm attempting to use a Kafka ConsumerInterceptor to intercept when offsets are committed.
This seems to work when producer transactions are not enabled, but when transactions are turned on, Interceptor::onCommit is no longer called.
In the following minimal example, everything works as expected:
@SpringBootApplication
@EnableKafka
class Application {
    @KafkaListener(topics = ["test"])
    fun onMessage(message: String) {
        log.warn("onMessage: $message")
    }
}
Interceptor:
class Interceptor : ConsumerInterceptor<String, String> {
    override fun onCommit(offsets: MutableMap<TopicPartition, OffsetAndMetadata>) {
        log.warn("onCommit: $offsets")
    }

    override fun onConsume(records: ConsumerRecords<String, String>): ConsumerRecords<String, String> {
        log.warn("onConsume: $records")
        return records
    }

    // Required by the ConsumerInterceptor contract (Configurable + AutoCloseable)
    override fun configure(configs: MutableMap<String, *>?) {}
    override fun close() {}
}
Application config:
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      auto-offset-reset: earliest
      properties:
        "interceptor.classes": com.example.Interceptor
      group-id: test-group
    listener:
      ack-mode: record
Inside a test using @EmbeddedKafka:
@Test
fun sendMessage() {
    kafkaTemplate.send("test", "id", "sent message").get() // block so we don't end before the consumer gets the message
}
This outputs what I would expect:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords#6a646f3c
onMessage: sent message
onCommit: {test-0=OffsetAndMetadata{offset=1, leaderEpoch=null, metadata=''}}
However, when I enable transactions by providing a transaction-id-prefix, the Interceptor's onCommit is no longer called.
My updated config only adds:
spring:
  kafka:
    producer:
      transaction-id-prefix: tx-id-
And the test is updated to wrap send in a transaction:
@Test
fun sendMessage() {
    kafkaTemplate.executeInTransaction {
        kafkaTemplate.send("test", "a", "sent message").get()
    }
}
With this change, my log output now contains only:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords#738b5968
onMessage: sent message
The Interceptor's onConsume method is called and the @KafkaListener receives the message, but onCommit is never called.
Does anyone happen to know what's happening here? Are my expectations about what I should see incorrect?
Offsets are not committed via the consumer when using transactions (exactly once semantics). Instead, the offset is committed via the producer.
KafkaProducer...
/**
* Sends a list of specified offsets to the consumer group coordinator, and also marks
* those offsets as part of the current transaction. These offsets will be considered
* committed only if the transaction is committed successfully. The committed offset should
* be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
* <p>
* This method should be used when you need to batch consumed and produced messages
* together, typically in a consume-transform-produce pattern. Thus, the specified
* {@code groupMetadata} should be extracted from the used {@link KafkaConsumer consumer} via
* {@link KafkaConsumer#groupMetadata()} to leverage consumer group metadata. This will provide
* stronger fencing than just supplying the {@code consumerGroupId} and passing in {@code new ConsumerGroupMetadata(consumerGroupId)},
* however note that the full set of consumer group metadata returned by {@link KafkaConsumer#groupMetadata()}
* requires the brokers to be on version 2.5 or newer to understand.
*
* <p>
* Note, that the consumer should have {@code enable.auto.commit=false} and should
* also not commit offsets manually (via {@link KafkaConsumer#commitSync(Map) sync} or
* {@link KafkaConsumer#commitAsync(Map, OffsetCommitCallback) async} commits).
* This method will raise {@link TimeoutException} if the producer cannot send offsets before expiration of {@code max.block.ms}.
* Additionally, it will raise {@link InterruptException} if interrupted.
*
* @throws IllegalStateException if no transactional.id has been configured or no transaction has been started.
* @throws ProducerFencedException fatal error indicating another producer with the same transactional.id is active
* @throws org.apache.kafka.common.errors.UnsupportedVersionException fatal error indicating the broker
* does not support transactions (i.e. if its version is lower than 0.11.0.0) or
* the broker doesn't support latest version of transactional API with all consumer group metadata
* (i.e. if its version is lower than 2.5.0).
* @throws org.apache.kafka.common.errors.UnsupportedForMessageFormatException fatal error indicating the message
* format used for the offsets topic on the broker does not support transactions
* @throws org.apache.kafka.common.errors.AuthorizationException fatal error indicating that the configured
* transactional.id is not authorized, or the consumer group id is not authorized.
* @throws org.apache.kafka.clients.consumer.CommitFailedException if the commit failed and cannot be retried
* (e.g. if the consumer has been kicked out of the group). Users should handle this by aborting the transaction.
* @throws org.apache.kafka.common.errors.FencedInstanceIdException if this producer instance gets fenced by broker due to a
* mis-configured consumer instance id within group metadata.
* @throws org.apache.kafka.common.errors.InvalidProducerEpochException if the producer has attempted to produce with an old epoch
* to the partition leader. See the exception for more details
* @throws KafkaException if the producer has encountered a previous fatal or abortable error, or for any
* other unexpected error
* @throws TimeoutException if the time taken for sending offsets has surpassed max.block.ms.
* @throws InterruptException if the thread is interrupted while blocked
*/
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets,
        ConsumerGroupMetadata groupMetadata) throws ProducerFencedException {
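For context, a rough sketch of the consume-transform-produce loop that javadoc describes, using plain Kafka clients (producer/consumer setup and topic names are assumed; this is not the Spring-managed flow):

// Assumes the producer was created with transactional.id set
// and the consumer with enable.auto.commit=false
producer.initTransactions();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    if (records.isEmpty()) {
        continue;
    }
    producer.beginTransaction();
    try {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> record : records) {
            producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
            // Commit position = last processed offset + 1
            offsets.put(new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1));
        }
        // Offsets are committed through the producer as part of the transaction,
        // bypassing the consumer commit path that would invoke ConsumerInterceptor.onCommit
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    } catch (KafkaException e) {
        producer.abortTransaction();
    }
}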

Is this how Spring @Retryable is supposed to work?

I recently added Spring @Retryable to handle 502 errors received instead of appropriate responses from the target server (with retrofit2).
Below is just pseudo-code but the original code handles exceptions in a similar way.
class BadGatewayException(message: String) : RuntimeException(message)

@Retryable(include = [BadGatewayException::class])
class A {
    private fun <T> handle(block: () -> Call<T>): T {
        try {
            val response = block().execute()
            ...
            if (response.code() == 502) {
                throw BadGatewayException("The server suffers temporary connection problems.")
            }
            ...
        } catch (e: Exception) {
            throw RuntimeException("a system error has occurred")
        }
    }
}
I expected @Retryable wouldn't retry, as the BadGatewayException that occurs on a 502 would be wrapped in a RuntimeException straight away in the catch block and then thrown. But when I tested it, it seemed to follow these steps:
try to get a response from a retrofit request
502 occurs
BadGatewayException thrown
retry (3 by default) - here BadGatewayException is caught somehow
RuntimeException thrown
The point is, is @Retryable supposed to intercept exceptions this way? Or am I missing something here?
In your pseudo-code you don't wrap your BadGatewayException into the RuntimeException. But if you do something like throw RuntimeException("your text", e), where e is the BadGatewayException, I can explain what happens:
The class AnnotationAwareRetryOperationsInterceptor, which backs the @Retryable annotation, creates the RetryPolicy like this: return new SimpleRetryPolicy(maxAttempts, policyMap, true, retryNotExcluded);
The third parameter is traverseCauses, and it is always true.
This means @Retryable works with causes: when it gets the RuntimeException, it takes the cause, which is BadGatewayException. This is why the retry happens in your case if you do the wrap.
I don't know how to change this behavior. In the latest version (2.0.0), the retry policy is still created with the same code.
I know it is late, but maybe it will be helpful for someone.
The SimpleRetryPolicy used by the annotation does not traverse the cause chain to look for classified exceptions; only the top level exception is compared to the classified list.
You would have to wire up your own RetryInterceptor with an appropriately configured RetryTemplate and SimpleRetryPolicy.
/**
* Create a builder for a stateless retry interceptor.
* @return The interceptor builder.
*/
public static StatelessRetryInterceptorBuilder stateless() {
    return new StatelessRetryInterceptorBuilder();
}
RetryInterceptorBuilder.stateless()
        .retryPolicy(new SimpleRetryPolicy(...))
        .build();
See
/**
* Create a {@link SimpleRetryPolicy} with the specified number of retry attempts. If
* traverseCauses is true, the exception causes will be traversed until a match or the
* root cause is found. The default value indicates whether to retry or not for
* exceptions (or super classes thereof) that are not found in the map.
* @param maxAttempts the maximum number of attempts
* @param retryableExceptions the map of exceptions that are retryable based on the
* map value (true/false).
* @param traverseCauses true to traverse the exception cause chain until a classified
* exception is found or the root cause is reached.
* @param defaultValue the default action.
*/
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses, boolean defaultValue) {
(Use defaultValue = true and Map.of(BadGatewayException.class, false).)
Then set the bean name of the interceptor in the interceptor property of the annotation.
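A minimal sketch of that wiring (the bean name and policy values are illustrative, not from the answer):

@Bean
public MethodInterceptor retryInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .retryPolicy(new SimpleRetryPolicy(3,
                    Map.of(BadGatewayException.class, false), // classified: never retry
                    true,   // traverseCauses: unwrap the RuntimeException to find it
                    true))  // defaultValue: retry anything not classified
            .build();
}

// Then reference it from the annotation:
// @Retryable(interceptor = "retryInterceptor")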

Type Id property error received when sending text

I have a Spring Boot 2.2 application that receives MQ messages via JMS.
The data is JSON.
In my code I have this config:
@Bean
public MessageConverter jacksonJmsMessageConverter() {
    MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
    converter.setTargetType(MessageType.TEXT);
    converter.setTypeIdPropertyName("_type");
    return converter;
}
When I send data via IBM MQ Explorer, I get this error
org.springframework.jms.support.converter.MessageConversionException: Could not find type id property [_type] on message
Is there a way to set this property with IBM MQ Explorer?
If your message doesn't have a property containing type information, you can subclass the MappingJackson2MessageConverter and override getJavaTypeForMessage:
/**
* Determine a Jackson JavaType for the given JMS Message,
* typically parsing a type id message property.
* <p>The default implementation parses the configured type id property name
* and consults the configured type id mapping. This can be overridden with
* a different strategy, e.g. doing some heuristics based on message origin.
* @param message the JMS Message to set the type id on
* @throws JMSException if thrown by JMS methods
* @see #setTypeIdOnMessage(Object, javax.jms.Message)
* @see #setTypeIdPropertyName(String)
* @see #setTypeIdMappings(java.util.Map)
*/
protected JavaType getJavaTypeForMessage(Message message) throws JMSException {
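For example, a sketch of such a subclass, assuming every message without the property should map to one known class (MyPayload is a placeholder):

public class FallbackTypeMessageConverter extends MappingJackson2MessageConverter {

    @Override
    protected JavaType getJavaTypeForMessage(Message message) throws JMSException {
        // If the sender (e.g. MQ Explorer) did not set the _type property,
        // fall back to a fixed target type instead of throwing
        if (message.getStringProperty("_type") == null) {
            return TypeFactory.defaultInstance().constructType(MyPayload.class);
        }
        return super.getJavaTypeForMessage(message);
    }
}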

How to implement Spring Retry for SocketTimeoutException from Rest Template

I want to use the Spring Retry functionality in case of a SocketTimeoutException from the RestTemplate,
but the Spring RestTemplate throws the following instead:
org.springframework.web.client.ResourceAccessException: I/O error: Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
I have added SocketTimeoutException to the RetryTemplate map.
Does Spring Retry work if I only add SocketTimeoutException to the RetryTemplate map, or do I need to add ResourceAccessException as well?
You need to use a custom SimpleRetryPolicy that has the traverseCauses option set. Then, instead of just looking at the top level exception, it will examine the cause hierarchy to look for a match.
/**
* Create a {@link SimpleRetryPolicy} with the specified number of retry
* attempts. If traverseCauses is true, the exception causes will be traversed until
* a match is found.
*
* @param maxAttempts the maximum number of attempts
* @param retryableExceptions the map of exceptions that are retryable based on the
* map value (true/false).
* @param traverseCauses is this clause traversable
*/
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses) {
    this(maxAttempts, retryableExceptions, traverseCauses, false);
}
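A sketch of wiring such a policy into a RetryTemplate (the attempt count and the call itself are placeholders):

RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3,
        Map.of(SocketTimeoutException.class, true),
        true)); // traverseCauses: matches the SocketTimeoutException nested in ResourceAccessException

String result = retryTemplate.execute(context ->
        restTemplate.getForObject("https://example.org/endpoint", String.class));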

Spring AMQP @RabbitListener convert to original object

I am trying to send a message based on a flattened Map using Spring Boot and AMQP. The message should then be received using @RabbitListener and converted back to a Map.
First I have a nested JSON String, flatten it, and send it using the following code:
// Flatten the JSON String returned into a map
Map<String, Object> jsonMap = JsonFlattener.flattenAsMap(result);
rabbitTemplate.convertAndSend(ApplicationProperties.rmqExchange, ApplicationProperties.rmqTopic, jsonMap, new MessagePostProcessor() {
    @Override
    public Message postProcessMessage(Message message) throws AmqpException {
        message.getMessageProperties().setHeader("amqp_Key1", "wert1");
        message.getMessageProperties().setHeader("amqp_Key2", "Wert2");
        message.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
        return message;
    }
});
So far so good.
On the receiving side I try to use a listener and convert the message payload back to the Map as it was sent before.
The problem is that I have no idea how to do it.
I receive the message with the following code:
@RabbitListener(queues = "temparea")
public void receiveMessage(Message message) {
    log.info("Receiving data from RabbitMQ:");
    log.info("Message is of type: " + message.getClass().getName());
    log.info("Message: " + message.toString());
}
As I mentioned before, I have no idea how I can convert the message back to my old Map. The __TypeId__ of the message is: com.github.wnameless.json.flattener.JsonifyLinkedHashMap
I would be more than glad if somebody could assist me in getting this message back to a Java Map.
BR
Update after answer from Artem Bilan:
I added the following code to my configuration file:
@Bean
public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setMessageConverter(new Jackson2JsonMessageConverter());
    factory.setConnectionFactory(connectionFactory());
    factory.setMaxConcurrentConsumers(5);
    return factory;
}
But still I have no idea how to get the Map out of my message.
The new code block does not change anything.
You have to configure a Jackson2JsonMessageConverter bean, and Spring Boot will pick it up for the SimpleRabbitListenerContainerFactory bean definition, which is used to build listener containers for the @RabbitListener methods.
UPDATE
Pay attention to the Spring AMQP JSON Sample.
There is a bean like jsonConverter(). According to Spring Boot auto-configuration, this bean is injected into the default:
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(SimpleRabbitListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
This is what is really used for the @RabbitListener by default, when the containerFactory attribute is empty.
So, you just need to configure that bean and don't need any custom SimpleRabbitListenerContainerFactory. Or, if you do define one, you should specify its bean name in the containerFactory attribute of your @RabbitListener definitions.
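A minimal sketch of that approach (the Map-typed listener relies on the inferred argument type; only the converter bean is new):

@Bean
public Jackson2JsonMessageConverter jsonConverter() {
    return new Jackson2JsonMessageConverter();
}

@RabbitListener(queues = "temparea")
public void receiveMessage(Map<String, Object> map) {
    // The payload arrives already converted from JSON back to a Map
    log.info("Received map: " + map);
}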
Another option to consider is Jackson2JsonMessageConverter.setTypePrecedence():
/**
* Set the precedence for evaluating type information in message properties.
* When using {@code @RabbitListener} at the method level, the framework attempts
* to determine the target type for payload conversion from the method signature.
* If so, this type is provided in the
* {@link MessageProperties#getInferredArgumentType() inferredArgumentType}
* message property.
* <p> By default, if the type is concrete (not abstract, not an interface), this will
* be used ahead of type information provided in the {@code __TypeId__} and
* associated headers provided by the sender.
* <p> If you wish to force the use of the {@code __TypeId__} and associated headers
* (such as when the actual type is a subclass of the method argument type),
* set the precedence to {@link TypePrecedence#TYPE_ID}.
* @param typePrecedence the precedence.
* @since 1.6
* @see DefaultJackson2JavaTypeMapper#setTypePrecedence(Jackson2JavaTypeMapper.TypePrecedence)
*/
public void setTypePrecedence(Jackson2JavaTypeMapper.TypePrecedence typePrecedence) {
So, if you still want to have a Message method argument but gain the JSON conversion based on the __TypeId__ header, you should consider configuring the Jackson2JsonMessageConverter with Jackson2JavaTypeMapper.TypePrecedence.TYPE_ID.
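A sketch of that variant (the converter bean is then picked up as described above):

@Bean
public Jackson2JsonMessageConverter jsonConverter() {
    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
    // Prefer the sender's __TypeId__ header over the inferred method argument type
    converter.setTypePrecedence(Jackson2JavaTypeMapper.TypePrecedence.TYPE_ID);
    return converter;
}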
