I'm trying to implement a CloudEvents demo.
I have a few Spring Boot services with RabbitMQ as a message bus; they all send messages to a queue, and one of them listens to the queue messages.
I wrap my messages as CloudEvents to make them more standard.
I use the following code to wrap the message (data) as a CloudEvent.
try {
    inputEvent = CloudEventBuilder.v1()
            .withSource(new URI("app://" + messageData.getChangeRequestId().toString()))
            .withDataContentType("application/json")
            .withId(messageData.myId().toString())
            .withType("com.data.BaseMessageData")
            .withData(objMapper.writeValueAsBytes(eventData))
            .build();
} catch (Exception e) {
    throw new MyMessagingException("Failed to convert the message into json. (See inner exception for further details)", e);
}
The data is converted to bytes since CloudEventData is byte-based.
Of course, in my listener method I get an exception, since SimpleMessageConverter can't handle a byte array.
Now, I could implement a custom message converter, or look into the CloudEvents AMQP binding, but I'm not keen on the amount of code that involves and I don't want to pull in more technologies unless absolutely necessary.
Should I go down this path and implement a custom message converter?
Is there any other standard solution for standardizing service messaging over queues?
You will need a custom message converter; but see this blog post:
https://spring.io/blog/2020/12/10/cloud-events-and-spring-part-1
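For illustration only, here is a minimal sketch of what such a converter could look like, assuming the CloudEvents Java SDK (cloudevents-core plus cloudevents-json-jackson) is on the classpath and the structured JSON format is used; the class name and wiring are assumptions, not code from the blog post:

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.support.converter.MessageConversionException;
import org.springframework.amqp.support.converter.MessageConverter;

import io.cloudevents.CloudEvent;
import io.cloudevents.core.format.EventFormat;
import io.cloudevents.core.provider.EventFormatProvider;
import io.cloudevents.jackson.JsonFormat;

// Hypothetical converter: carries the whole CloudEvent as a structured JSON body.
public class CloudEventMessageConverter implements MessageConverter {

    private final EventFormat format =
            EventFormatProvider.getInstance().resolveFormat(JsonFormat.CONTENT_TYPE);

    @Override
    public Message toMessage(Object object, MessageProperties props) throws MessageConversionException {
        if (!(object instanceof CloudEvent)) {
            throw new MessageConversionException("Expected a CloudEvent but got: " + object);
        }
        props.setContentType(JsonFormat.CONTENT_TYPE); // application/cloudevents+json
        return new Message(format.serialize((CloudEvent) object), props);
    }

    @Override
    public Object fromMessage(Message message) throws MessageConversionException {
        // The listener method would then take a CloudEvent parameter and unwrap its data itself.
        return format.deserialize(message.getBody());
    }
}

You would register it on both sides, e.g. via rabbitTemplate.setMessageConverter(...) and on the listener container factory, so the sender and the listener agree on the format.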
I am trying to implement a DLQ using Spring Cloud Stream with batch mode enabled:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(BatchErrorHandler handler) {
    return (container, destinationName, group) -> {
        if (dlqEnabledTopic.contains(destinationName)) {
            container.setBatchErrorHandler(handler);
        }
    };
}

@Bean
public BatchErrorHandler batchErrorHandler(KafkaOperations<String, byte[]> kafkaOperations) {
    CustomDeadLetterPublishingRecoverer recoverer = new CustomDeadLetterPublishingRecoverer(kafkaOperations,
            (cr, e) -> new TopicPartition(cr.topic() + "_dlq", cr.partition()));
    return new RecoveringBatchErrorHandler(recoverer, new FixedBackOff(1000, 1));
}
but I have a few queries:
How do I configure the key/value serializer using properties? My message is of String type, but KafkaOperations is using ByteArraySerializer.
In a batch there are multiple messages, but if the first message fails it goes to the DLQ, and I don't see the processing of the next messages.
Requirement - if the batch fails at any index, I need only that message to be sent to the DLQ, and the rest of the messages should be processed again.
Is DLQ supported with batch mode now, just like with record mode where it can be enabled using properties?
Use the spring.kafka.producer.* properties - however, the DLT publishing should use the same serializers as the main stream app, and ByteArraySerializer is generally correct.
The recovering batch error handler will perform seeks for the unprocessed records and they will be returned. Debug logging should help you figure out what's wrong. If you can't figure it out, provide an MCRE that exhibits the behavior you are seeing.
No; the binder does not support DLQ for batch mode; configuring the error handler is the correct approach.
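For the serializer question, here is a minimal sketch, under assumptions, of one way the KafkaOperations bean injected into the error handler could be supplied explicitly, keeping ByteArraySerializer for the values as recommended above (the class name, bean name, and broker address are made up for the example):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaOperations;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class DlqProducerConfig {

    @Bean
    public KafkaOperations<String, byte[]> dlqKafkaOperations() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class); // String keys
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class); // raw bytes to the DLT
        return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    }
}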
We have implemented an SqsListener as the documentation suggests is the best way to receive AWS SQS messages (Spring Cloud AWS docs):
There are two ways for receiving SQS messages, either use the receive methods of the QueueMessagingTemplate or with annotation-driven listener endpoints. The latter is by far the more convenient way to receive messages.
Everything is working as expected. If the business process fails, we throw a runtime exception, and the particular message is sent back to the SQS queue for retry. When the visibility timeout has passed, the message reappears to the worker for processing.
Sample Code is here:
@SqsListener(value = "sample-standard-queue", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void receiveMessage(String message) {
    log.info("Message Received **************************** " + message);
    log.info("After Conversion" + new JSONObject(message).getString("payload"));
    throw new RuntimeException("An exception was thrown during the execution of the SQS listener method and Message will be still available in Queue");
}
But there are some examples where "Acknowledgment" is used instead of throwing a runtime exception, and the documentation doesn't suggest that.
Which one is the best way to deal with a business logic failure scenario? Is Acknowledgment necessary?
Thanks in advance.
One way is to keep track of the messages being processed in some RDS table. If a message gets retried, increase its retry count in the table.
There should be some configured number of retries for a particular message, after which you may want to move it to a dead-letter queue, or log it and simply discard it.
There can be multiple ways of handling it. One way can be:
@SqsListener(value = "sample-standard-queue", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void receiveMessage(String message) {
    try {
        log.info("Message Received **************************** " + message);
        log.info("After Conversion" + new JSONObject(message).getString("payload"));
    } catch (Exception e) {
        // check whether its retry count is exhausted or not
        // if exhausted - acknowledge it (push it into the dead-letter queue) and don't throw the exception
        // if not exhausted - increase the retry count in the table before throwing the exception
        throw new RuntimeException("An exception was thrown during the execution of the SQS listener method and Message will be still available in Queue");
    }
}
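For completeness, the "Acknowledgment" variant mentioned in the question would look roughly like this (a sketch assuming Spring Cloud AWS 2.x; the class and queue names are illustrative). With SqsMessageDeletionPolicy.NEVER the message is deleted only when you acknowledge it explicitly; otherwise it reappears after the visibility timeout, which gives effectively the same retry behaviour as throwing an exception with ON_SUCCESS:

import org.springframework.cloud.aws.messaging.listener.Acknowledgment;
import org.springframework.cloud.aws.messaging.listener.SqsMessageDeletionPolicy;
import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener;

public class OrderListener {

    @SqsListener(value = "sample-standard-queue", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
    public void receiveMessage(String message, Acknowledgment acknowledgment) {
        try {
            // ... business processing ...
            acknowledgment.acknowledge(); // delete the message from the queue only on success
        } catch (Exception e) {
            // Do not acknowledge: the message becomes visible again after the
            // visibility timeout and will be redelivered for another attempt.
        }
    }
}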
In a Spring Boot application, I have a class with a JMS listener.
public class PaymentNotification {

    @JmsListener(destination = "payment")
    public void receive(String payload) throws Exception {
        // map/string conversion
        ....
        paymentEvent = billingService.insert(paymentEvent); // transactional method
        // call rest...
        billingService.save(paymentEvent);
        // send info to jms
    }
}
I saw that when an error happens, data is inserted into the database (that's OK), but it's like the receive method is called again and again, yet the queue is empty when I check on the server.
If there is an error, I don't want the method to be called again. Is there something for that?
The JMS Message Headers might contain additional information to help with your processing. In particular JMSRedelivered could be of some value. The Oracle doc states that "If a client receives a message with the JMSRedelivered field set, it is likely, but not guaranteed, that this message was delivered earlier but that its receipt was not acknowledged at that time."
I ran the following code to explore what was available in my configuration (Spring Boot with IBM MQ).
@JmsListener(destination = "DEV.QUEUE.1")
public void receive(Message message) throws Exception {
    for (Enumeration<String> e = message.getPropertyNames(); e.hasMoreElements();) {
        System.out.println(e.nextElement());
    }
}
From here I could see that JMSXDeliveryCount is available; it is defined in JMS 2.0. If that property is not available, you may well find something similar for your own configuration.
One strategy would be to use JMSXDeliveryCount, a vendor-specific property, or maybe JMSRedelivered (if suitable for your needs) as a check before you process the message. Typically, the message would be sent to a specific backout queue when the redelivery count exceeds a set threshold.
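As an illustration only (not from the original answer), a sketch of that strategy in a Spring listener might look like this; the threshold, the backout queue name, and the assumption that the provider exposes JMSXDeliveryCount are all placeholders:

import javax.jms.JMSException;
import javax.jms.Message;

import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.messaging.handler.annotation.Payload;

public class PaymentNotification {

    private static final int MAX_DELIVERIES = 3; // assumed threshold

    private final JmsTemplate jmsTemplate;

    public PaymentNotification(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @JmsListener(destination = "payment")
    public void receive(Message message, @Payload String payload) throws JMSException {
        // JMSXDeliveryCount is mandatory in JMS 2.0; on older providers a vendor
        // property or JMSRedelivered may have to be used instead.
        int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
        if (deliveryCount > MAX_DELIVERIES) {
            // Park the message on a backout queue (queue name is an assumption) and stop retrying.
            jmsTemplate.convertAndSend("payment.backout", payload);
            return;
        }
        // ... normal processing; throwing an exception here causes redelivery ...
    }
}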
Depending on the messaging provider you are using, it might also be possible to configure backout queue processing as properties of the queue.
I am using Reactor Kafka to send Kafka messages and to receive and process them.
While receiving the Kafka payload, I do some deserialization, and if there is an exception, I want to just log that payload (by saving it to Mongo) and then continue receiving other payloads.
For this I am using the below approach -
@EventListener(ApplicationStartedEvent.class)
public void kafkaReceiving() {
    for (Flux<ReceiverRecord<String, Object>> flux : kafkaService.getFluxReceives()) {
        flux.delayUntil(/* some function to do something */)
            .doOnNext(r -> r.receiverOffset().acknowledge())
            .onErrorResume(this::handleException) // here I'll just save to mongo
            .subscribe();
    }
}

private Publisher<? extends ReceiverRecord<String, Object>> handleException(Throwable ex) {
    // save to mongo
    return Flux.empty();
}
Here I expect that whenever I encounter an exception while receiving a payload, onErrorResume should catch it and log to Mongo, and then I should be good to continue receiving more messages when I send to the Kafka topic. However, I see that after the exception, even though the onErrorResume method gets invoked, I am not able to process any more messages sent to the Kafka topic.
Anything I might be missing here?
If you need to handle the error gracefully, you can add onErrorResume inside delayUntil:
flux
    .delayUntil(r -> {
        return process(r)
                .onErrorResume(e -> saveToMongo(r));
    })
    .doOnNext(r -> r.receiverOffset().acknowledge())
    .subscribe();
Reactive operators treat an error as a terminal signal, and if your inner logic (inside delayUntil) throws an error, delayUntil will terminate the sequence; an onErrorResume placed after delayUntil will not make it continue processing the events from Kafka.
As mentioned by @bsideup too, I ultimately went ahead with not throwing an exception from the deserializer. Kafka is not able to commit the offset for such a record, and there is no clean way of ignoring it and going ahead with further consumption, since we don't have the offset information of the record (it is malformed). So even if I try to ignore the record using reactive error operators, the poll fetches the same record and the consumer is then effectively stuck.
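To sketch what "not throwing from the deserializer" could look like (the class name, payload type, and Jackson usage are assumptions, not the exact code used): a value deserializer that returns null for malformed payloads, so the record still advances the offset and the pipeline can treat null values as poison messages to be saved to Mongo:

import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;

import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical "lenient" deserializer: swallows deserialization errors instead of throwing.
public class LenientJsonDeserializer implements Deserializer<MyEvent> { // MyEvent is a placeholder type

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure in this sketch
    }

    @Override
    public MyEvent deserialize(String topic, byte[] data) {
        try {
            return mapper.readValue(data, MyEvent.class);
        } catch (Exception e) {
            // Return null so the offset can still be committed; downstream code
            // decides whether to log or save the raw payload elsewhere.
            return null;
        }
    }

    @Override
    public void close() {
    }
}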
What is the best practice for handling exceptions in MassTransit 3+ with regard to the request/response pattern? The docs here mention that if a ResponseAddress exists on a message, the Fault message will be sent to that address, but how does one consume/receive the messages at that address? The ResponseAddress for Bus.Request seems to be an auto-generated MassTransit address that I don't have control over, so I don't know how to access the exception thrown in the main consumer. What am I missing? Here's my code to register the consumer and its fault consumer using the Unity container:
cfg.ReceiveEndpoint(host, "request_response_queue", e =>
{
    e.Consumer<IConsumer<IRequestResponse>>(container);
    e.Consumer(() => container.Resolve<IMessageFaultConsumer<IRequestResponse>>() as IConsumer<Fault<IRequestResponse>>);
});
And here's my attempt at a global message fault consumer:
public interface IMessageFaultConsumer<TMessage>
{
}

public class MessageFaultConsumer<TMessage> : IConsumer<Fault<TMessage>>, IMessageFaultConsumer<TMessage>
{
    public Task Consume(ConsumeContext<Fault<TMessage>> context)
    {
        Console.WriteLine("MessageFaultConsumer");
        return Task.FromResult(0);
    }
}
This approach DOES work when I use Bus.Publish as opposed to Bus.Request. I also looked into creating an IConsumeObserver and putting my global exception-logging code into its ConsumeFault method, but that has the downside of being invoked for every exception, before the retries give up. What is the proper way to handle exceptions for request/response?
First of all, the request/response support in MassTransit is meant to be used with the .Request() method or the request client (MessageRequestClient or PublishRequestClient). With these methods, if the consumer of the request message throws an exception, that exception is packaged into the Fault<T>, which is sent to the ResponseAddress. Since the .Request() method and the request client are both asynchronous, using await will throw an exception with the exception data from the fault included. That's how it is designed: await the request and it will either complete, time out, or fault (throw an exception upon await).
If you are trying to put in some global "exception handler" code for logging purposes, you really should log those at the service boundary, and an observer is the best way to handle it. This way, you can just implement the ConsumeFault method, and log to your event sink. However, this is synchronous within the consumer pipeline, so recognize the delay that could be introduced.
The other option is, of course, to just consume Fault<T>, but as you mentioned, it does not get published when the request client is used with the response address in the header. In this case, perhaps your requester should publish an event indicating that operation X faulted, and you can log that at the business-context level rather than the service level.
There are many options here, it's just choosing the one that fits your use case best.