Spring Cloud Stream Kafka Custom Error Handler

I have a Kafka consumer using Spring Cloud Stream:
spring:
  cloud:
    stream:
      function:
        definition: myConsumer
      bindings:
        myConsumer-in-0:
          destination: myDest
          binder: kafka
          group: myGroup
          content-type: application/json
I was able to catch errors through the global error channel:
@ServiceActivator(inputChannel = IntegrationContextUtils.ERROR_CHANNEL_BEAN_NAME)
public void processError(Message<MessageHandlingException> message) {
    log.info("an error has happened: {}", message);
}
Now I have two questions:
1. How can we redirect the error to a specific internal channel per consumer?
2. Will the message be acknowledged automatically even in the error scenario?
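For reference, besides the global error channel, Spring Cloud Stream also registers a binding-specific error channel named <destination>.<group>.errors, which lets errors be handled per consumer. A minimal sketch, assuming the myDest/myGroup binding above:
// Handles only errors raised by the myDest/myGroup consumer binding;
// ErrorMessage is org.springframework.messaging.support.ErrorMessage.
@ServiceActivator(inputChannel = "myDest.myGroup.errors")
public void handleMyConsumerError(ErrorMessage errorMessage) {
    log.error("error from myConsumer binding: {}", errorMessage);
}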

Related

spring amqp (rabbitmq) and sending to DLQ when exception occurs

I am using org.springframework.boot:spring-boot-starter-amqp:2.6.6.
According to the documentation, I set up a @RabbitListener using a SimpleRabbitListenerContainerFactory, and the configuration looks like this:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ObjectMapper om) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    factory.setConcurrentConsumers(rabbitProperties.getUpdater().getConcurrentConsumers());
    factory.setMaxConcurrentConsumers(rabbitProperties.getUpdater().getMaxConcurrentConsumers());
    factory.setMessageConverter(new Jackson2JsonMessageConverter(om));
    factory.setAutoStartup(rabbitProperties.getUpdater().getAutoStartup());
    factory.setDefaultRequeueRejected(false);
    return factory;
}
The service receives messages from RabbitMQ, calls an external service over REST (using RestTemplate), and writes information to the database based on the response (using Spring Data JPA). This works, but during testing I ran into a problem: if any exception is thrown up the stack while processing, the message is not sent to the configured DLQ; it simply hangs in the broker as unacked. Can you please tell me how to tell Spring AMQP that, if any error occurs, the message should be redirected to the DLQ?
The listener itself looks something like this:
@RabbitListener(
        queues = {"${rabbit.updater.consuming.queue.name}"},
        containerFactory = "rabbitListenerContainerFactory"
)
@Override
public void listen(
        @Valid @Payload MessageDTO message,
        Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag
) throws IOException {
    log.debug(DebugMessagesConstants.RECEIVED_MESSAGE_FROM_QUEUE, message, deliveryTag);
    messageUpdater.process(message);
    channel.basicAck(deliveryTag, false);
    log.debug(DebugMessagesConstants.PROCESSED_MESSAGE_FROM_QUEUE, message, deliveryTag);
}
In the RabbitMQ management UI the message shows as unacked, and it stays unacked until the consuming application stops.
See the error handling documentation: https://docs.spring.io/spring-amqp/docs/current/reference/html/#annotation-error-handling.
So, just don't use AcknowledgeMode.MANUAL, and rely on the Dead Letter Exchange configuration for those messages which are rejected in case of error.
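A minimal sketch of that approach: declare the work queue with dead-letter arguments so the broker re-routes rejected deliveries (the names my.queue, my.dlx and my.queue.dlq are placeholders, not from the question):
// When a delivery is rejected without requeue, the broker routes it
// to the exchange/routing key given in the x-dead-letter-* arguments.
@Bean
public Queue workQueue() {
    return QueueBuilder.durable("my.queue")
            .withArgument("x-dead-letter-exchange", "my.dlx")
            .withArgument("x-dead-letter-routing-key", "my.queue.dlq")
            .build();
}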
Or try using this.channel.basicNack(deliveryTag, false, false) in case messageUpdater.process(message) throws an exception...
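A minimal sketch of that manual variant, keeping the question's MANUAL acknowledge mode (this would replace the body of the listen(...) method shown above; it assumes a dead-letter exchange is configured on the queue):
try {
    messageUpdater.process(message);
    channel.basicAck(deliveryTag, false);
} catch (Exception e) {
    log.error("processing failed, dead-lettering the message", e);
    // multiple = false, requeue = false: the broker dead-letters the message
    channel.basicNack(deliveryTag, false, false);
}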

Async RabbitMQ communication using Spring Integration

I have two Spring Boot services that communicate using RabbitMQ.
Service1 sends request for session creation to Service2.
Service2 handles request and should return response.
Service1 should handle the response.
Service1 method for requesting session:
public void startSession() {
    ListenableFuture<SessionCreationResponseDTO> sessionCreationResponse = sessionGateway.requestNewSession();
    sessionCreationResponse.addCallback(response -> {
        // handle success
    }, ex -> {
        // handle exception
    });
}
On Service1 I have defined an AsyncOutboundGateway, like:
@Bean
public IntegrationFlow requestSessionFlow(MessageChannel requestNewSessionChannel,
                                          AsyncRabbitTemplate amqpTemplate,
                                          SessionProperties sessionProperties) {
    return flow -> flow.channel(requestNewSessionChannel)
            .handle(Amqp.asyncOutboundGateway(amqpTemplate)
                    .exchangeName(sessionProperties.getRequestSession().getExchangeName())
                    .routingKey(sessionProperties.getRequestSession().getRoutingKey()));
}
On Service2, I have a flow for receiving these messages:
@Bean
public IntegrationFlow requestNewSessionFlow(ConnectionFactory connectionFactory,
                                             SessionProperties sessionProperties,
                                             MessageConverter messageConverter,
                                             RequestNewSessionHandler requestNewSessionHandler) {
    return IntegrationFlows.from(Amqp.inboundGateway(connectionFactory,
                    sessionProperties.requestSessionProperties().queueName()))
            .handle(requestNewSessionHandler)
            .get();
}
Service2 handles these requests:
@ServiceActivator(async = "true")
public ListenableFuture<SessionCreationResponseDTO> handleRequestNewSession() {
    SettableListenableFuture<SessionCreationResponseDTO> settableListenableFuture = new SettableListenableFuture<>();
    // goes through an asynchronous session-creation process and sets the value on the future
    return settableListenableFuture;
}
The problem is that Service2 immediately returns the ListenableFuture to Service1 as the message payload, instead of waiting for the future's result and sending that back.
If I understood the documentation correctly, by setting the async parameter of @ServiceActivator to true, the successful result should be returned, and in case of an exception the error channel would be used.
I probably misunderstood the documentation, so perhaps I need to unwrap the ListenableFuture in Service2's flow before returning it as the response, but I am not sure how to achieve that.
I tried something with publishSubscribeChannel, but without much luck.
Your problem is here:
.handle(requestNewSessionHandler)
Such a configuration doesn't see your @ServiceActivator(async = "true") and uses it as a regular, blocking service activator.
Let's see if this helps you:
.handle(requestNewSessionHandler, "handleRequestNewSession", e -> e.async(true))
It is better to think of it as either/or: use only annotation configuration, or only programmatic configuration via the Java DSL.
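A minimal sketch of Service2's corrected flow under that suggestion (bean and method names as in the question; with this style the @ServiceActivator annotation on the handler method is no longer needed):
@Bean
public IntegrationFlow requestNewSessionFlow(ConnectionFactory connectionFactory,
                                             SessionProperties sessionProperties,
                                             RequestNewSessionHandler requestNewSessionHandler) {
    // async(true) makes the endpoint treat the returned ListenableFuture as an
    // asynchronous reply instead of sending the future itself as the payload
    return IntegrationFlows.from(Amqp.inboundGateway(connectionFactory,
                    sessionProperties.requestSessionProperties().queueName()))
            .handle(requestNewSessionHandler, "handleRequestNewSession", e -> e.async(true))
            .get();
}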

Embedded headers found in Spring Cloud Stream message body

I use Spring Cloud Stream 1.3.2.RELEASE to publish a String message to Kafka. When I consume the message using the command-line Kafka consumer or a Spring Kafka @KafkaListener, a contentType header is always embedded in the message body.
Question:
Is there any way to get rid of the embedded headers?
Spring Cloud Stream as producer
private void send() {
    channel.test().send(MessageBuilder.withPayload("{\"foo\":\"bar\"}").build());
}
Command line Kafka consumer
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
�
contentType
"text/plain"{"foo":"bar"}
Spring Kafka as consumer
@KafkaListener(topics = "test")
public void receive(Message message) {
    log.info("Message payload received: {}", message.getPayload());
}
2018-05-16 07:12:05.241 INFO 19475 --- [ntainer#0-0-C-1] com.demo.service.Listener : Message payload received: �contentType"text/plain"{"foo":"bar"}
@KafkaListener(topics = "test")
public void receive(@Payload String message) {
    log.info("Message payload received: {}", message);
}
2018-05-16 07:16:14.313 INFO 19747 --- [ntainer#0-0-C-1] com.demo.service.Listener : Message payload received: �contentType"text/plain"{"foo":"bar"}
See the headerMode binding property: https://docs.spring.io/spring-cloud-stream/docs/Ditmars.SR3/reference/htmlsingle/#_properties_for_use_of_spring_cloud_stream. You need to set it to raw for the destination to which you send messages.
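A minimal sketch of that setting, assuming the output binding is named test to match the producer code above (Spring Cloud Stream 1.3.x / Ditmars property names):
# disables embedded headers for this producer binding
spring:
  cloud:
    stream:
      bindings:
        test:
          producer:
            headerMode: raw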

Spring Cloud Stream IntegrationFlow with RabbitMQ messaging: the consumer gives ASCII numbers as the message payload

I am using Spring Cloud Stream for messaging. On the consumer side I used an IntegrationFlow to listen to the queue. It is listening and printing the message from the producer side, but the format is wrong, and that is the problem I am facing. The content-type from the producer is application/json, yet the IntegrationFlow message payload shows ASCII numbers. The consumer code is given below:
@EnableBinding(UserOperationConsume.class)
public class ConsumerController {
    @Bean
    IntegrationFlow consumerIntgrationFlow(UserOperationConsume u) {
        return IntegrationFlows
                .from(u.userRegistraionProduces())
                //.transform(Transformers.toJson()) // not working as expected
                //.transform(Transformers.fromJson(UserDTO.class))
                .handle(String.class, (payload, headers) -> {
                    System.out.println(payload.toString()); // here the output is 123,34,105,100,34,58,49,44,34,110,97,109,101,34,58,34,86,105,115,104,110,117,34,44,34,101,109,97,105,108,34,58,34,118...
                    return null;
                }).get();
    }
}
The input interface is:
public interface UserOperationConsume {
    @Input
    public SubscribableChannel userRegistraionProduces();
}
And the consumer YAML configuration is:
server:
  port: 8181
spring:
  application:
    name: nets-alert-service
---
spring:
  cloud:
    config:
      name: notification-service
      uri: http://localhost:8888
---
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
---
spring:
  cloud:
    stream:
      bindings:
        userRegistraionProduces:
          destination: userOperations
        input:
          content-type: application/json
I have tried the Sink.class binding; that time I got the exact message from the queue. So please let me know if there is any mistake in this IntegrationFlow configuration, because I am a newbie to Spring Cloud Stream and IntegrationFlow. Is there any way to transform this ASCII to the exact string?
Thanks in advance.
Using IntegrationFlows.from(channel) provides no conversion hints, so you just get the raw byte[] payload (containing JSON). It's not clear why you then use a toJson() transformer.
Your .handle(String.class, (payload, headers) -> {... causes a simple ArrayToStringConverter to be used, which is why you are seeing each byte value.
In any case, you are not using the framework properly. Use...
@StreamListener("userRegistraionProduces")
public void listen(UserDTO dto) {
    System.out.println(dto);
}
...and the framework will take care of the conversion for you. Or...
@StreamListener("userRegistraionProduces")
public void listen(Message<UserDTO> dtoMessage) {
    System.out.println(dtoMessage);
}
if your producer conveys additional information in headers.
EDIT
If you prefer to do the conversion yourself, this works fine...
@Bean
IntegrationFlow consumerIntgrationFlow(UserOperationConsume u) {
    return IntegrationFlows.from(u.userRegistraionProduces())
            .transform(Transformers.fromJson(UserDTO.class))
            .handle((payload, headers) -> {
                System.out.println(payload.toString());
                return null;
            }).get();
}
...since the Json transformer can read a byte[].

Kafka Spring Integration: headers not coming through to the Kafka consumer

I am using Kafka Spring Integration for publishing and consuming messages with Kafka. I see the payload is properly passed from producer to consumer, but the header information is getting overridden somewhere.
@ServiceActivator(inputChannel = "fromKafka")
public void processMessage(Message<?> message) throws InterruptedException,
        ExecutionException {
    try {
        System.out.println("Headers :" + message.getHeaders().toString());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I get the following headers:
Headers :{timestamp=1440013920609, id=f8c645f7-677b-ec32-dad0-a7b79082ef81}
I am constructing the message at the producer end like this:
Message<FeelDBMessage> message = MessageBuilder
        .withPayload(samplePayloadObj)
        .setHeader(KafkaHeaders.MESSAGE_KEY, "key")
        .setHeader(KafkaHeaders.TOPIC, "sampleTopic").build();
// publish the message
publisher.publishMessage(message);
and below is the header info at the producer:
headers={timestamp=1440013914085, id=c4159c1c-2c67-634b-ef8d-3fb026b1172e, kafka_messageKey=key, kafka_topic=sampleTopic}
Any idea why the headers are overridden by a different value?
Simply because, by default, the framework uses the immutable GenericMessage.
Any manipulation of an existing message (e.g. MessageBuilder.withPayload) will produce a new GenericMessage instance.
On the other hand, Kafka doesn't support any headers abstraction the way JMS or AMQP do. That's why KafkaProducerMessageHandler just does this when it publishes a message to Kafka:
this.kafkaProducerContext.send(topic, partitionId, messageKey, message.getPayload());
As you see, it doesn't send headers at all. So the other side (the consumer) deals only with the message from the topic: the payload, plus some system options as headers, such as topic, partition and messageKey.
In two words: we don't transfer headers over Kafka because it doesn't support them.
