How to do event-driven microservices with Quarkus and SmallRye correctly

Dear all,
I am trying to build some kind of event-driven microservices. Currently I am able to consume a message from Kafka and update a database record when the message is received, using Quarkus and the SmallRye Reactive Messaging extension. What I want to achieve further is to send a message to another topic on success, and to an error topic otherwise. I know that we can use a return value with the @Outgoing annotation for emitting a new message, but I don't think it fits my use case. I need guidance here: if an error happens while consuming a message, should I return the message to the original topic (by not acknowledging it), or should I consume it and produce an error message to a different topic to roll back the original transaction?
Here is my code:
@Incoming("new-payment")
public void newMessage(String msg) {
    LOG.info("New payment has been received.");
    LOG.info("Payload is {}", msg);
    PaymentEvent pe = jsonb.fromJson(msg, PaymentEvent.class);
    mysqlPool.preparedQuery("select totalBuyers from Book where isbn = ?",
            Tuple.of(pe.getIsbn()))
        .thenApply(rs -> {
            RowIterator<Row> iterator = rs.iterator();
            if (iterator.hasNext()) {
                return iterator.next().getInteger(0) + 1;
            } else {
                return Integer.valueOf(0);
            }
        })
        // thenCompose (not thenApply), so the chain waits for the update itself to finish
        .thenCompose(totalCount ->
            // the where clause is needed so only this one book is updated
            mysqlPool.preparedQuery("update Book set totalBuyers = ? where isbn = ?",
                    Tuple.of(totalCount, pe.getIsbn())))
        .whenComplete((rs, err) -> {
            if (err != null) {
                // Emit an error to the error topic.
            } else {
                // Emit a msg to the other service.
            }
        });
}
Also, if you have better code, please share it; I am still a newbie in reactive programming :).

I've been doing enterprise integration for years, and I think you would want to do both.
"Should I return the message to the original topic (by not acknowledging it), or should I consume it and produce an error message to a different topic to roll back the original transaction?"
The event should remain on the topic for another instance to potentially pick up and process, and the error should be logged as an event. Perhaps the same consumer could pick up and reprocess the event successfully.
An EDA (Event-Driven Architecture) may offer different ways to handle this, but on an ESB the message would be marked as tried. Generally, three failed attempts would send it to a dead-letter queue so that it can be corrected and reprocessed later.
Our enterprise is also starting to design and build applications using EDA, so I am interested to read what others have to say on this question. And kudos to you for focusing on Quarkus; I believe it is one of the best technologies I have seen come from Red Hat yet!
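In Quarkus/SmallRye terms, that tried-then-dead-letter behavior is available on the Kafka connector through its failure strategy. A minimal sketch in application.properties, assuming the new-payment channel from the question and a hypothetical payments-dlq topic name (check the SmallRye Reactive Messaging Kafka connector docs for your version):

# nacked messages are forwarded to a dead-letter topic instead of failing the channel
mp.messaging.incoming.new-payment.failure-strategy=dead-letter-queue
# optional override of the dead-letter topic name (hypothetical)
mp.messaging.incoming.new-payment.dead-letter-queue.topic=payments-dlq

With this in place, a message whose processing fails (the method throws, or the returned CompletionStage completes exceptionally) ends up on the dead-letter topic for later correction and replay.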

Another problem with this approach is that you are doing "2 writes in 1 service", i.e. one call to the database and another one to a topic, and this becomes problematic when one of the two writes fails.
If you want to avoid this and use a pure event-driven approach, you need to reorder your events in such a way that writing to the database is the last event in the whole flow, so that you prevent two writes from one service.
Thus, in your case: change the second step of the chain from updating the database into firing a new event to another topic, and let the consumer of this new topic do the database update (see the sketch below). The flow then becomes:
Producer -> topic1 -> consumer (select from ...) & fire event to another topic -> topic2 -> consumer (update table).
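To make this concrete, here is a minimal sketch of that reordered flow with SmallRye Reactive Messaging, mirroring the CompletionStage-based MySQL client from the question. The channel names topic1/topic2, the class name, and the "isbn:count" payload format are illustrative assumptions, not a definitive implementation:

import java.util.concurrent.CompletionStage;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
// plus the same reactive MySQL client types (MySQLPool, Tuple, Row, RowIterator) as in the question

@ApplicationScoped
public class BookBuyersFlow {

    @Inject
    MySQLPool mysqlPool;

    // Consumer of topic1: only reads, then fires an event to topic2.
    @Incoming("topic1")
    @Outgoing("topic2")
    public CompletionStage<String> prepareUpdate(String isbn) {
        return mysqlPool
                .preparedQuery("select totalBuyers from Book where isbn = ?", Tuple.of(isbn))
                .thenApply(rs -> {
                    RowIterator<Row> it = rs.iterator();
                    int count = it.hasNext() ? it.next().getInteger(0) + 1 : 0;
                    return isbn + ":" + count; // illustrative payload format
                });
    }

    // Consumer of topic2: the database write is the last step of the whole flow.
    @Incoming("topic2")
    public CompletionStage<Void> applyUpdate(String payload) {
        String[] parts = payload.split(":");
        return mysqlPool
                .preparedQuery("update Book set totalBuyers = ? where isbn = ?",
                        Tuple.of(Integer.valueOf(parts[1]), parts[0]))
                .thenApply(rs -> (Void) null);
    }
}

If the update in the second consumer fails, only that single write has to be retried or dead-lettered; there is no half-committed pair of writes to unwind.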

Related

Manually ACK batch AMQP messages

I'm able to receive batch messages with the code below. But now my question is: how should I manually ACK the messages? Should I ACK all the messages in the list one by one, or is ACKing the last message in the list enough? Thanks in advance!
public class MyMessageListener implements ChannelAwareBatchMessageListener {
    @Override
    public void onMessageBatch(List<Message> messages, Channel channel) {
        // Do something...... then pick one of the two options below
        try {
            // option 1: ack each message individually (multiple = false)
            for (Message msg : messages) {
                channel.basicAck(msg.getMessageProperties().getDeliveryTag(), false);
            }
            // option 2: one cumulative ack (multiple = true) covering everything
            // up to and including the last delivery tag of the batch
            channel.basicAck(messages.get(messages.size() - 1)
                    .getMessageProperties().getDeliveryTag(), true);
        } catch (IOException e) {
            // basicAck is declared to throw IOException
        }
    }
}
This depends on your business case. Generally speaking, acknowledging all the messages individually will increase the network traffic, and consequently your message throughput will go down.
The second option is a pragmatic approach, provided it is OK to risk losing some intermediate messages: if an intermediate message has not been delivered yet when a later message is delivered and acknowledged cumulatively, the undelivered messages will also get acked.
So it is a design decision, driven by how sensitive the payload of your messages is.

Message processing guarantees with spring-cloud-stream-binder-kafka functional binding

Given the default configuration and this binding:
@Bean
public Function<Flux<Message<Input>>, Flux<Message<Output>>> process() {
    return input -> input
            .map(message -> {
                // simplified
                return MessageBuilder.build();
            });
}
Is there any guarantee that the input message offset is committed after the output is written to Kafka? I don't need full transactions, and I can live with at-least-once delivery and possible duplicates, but I cannot lose the output message. I was unable to find this exact scenario in the docs, and I believe the previous channel-based binding worked as I need it to, since it was blocking by nature, but I am not sure about the functional style.

In the functional model of Spring Cloud Stream and Kafka, how can I send to another topic (an error topic) in case an exception occurs?

The snippet below shows the function; please suggest how to send data to different topics based on whether there was an error or not.
public Function<KStream<String, ?>, KStream<String, ?>> process() {
    return input -> input.map((key, value) -> {
        try {
            // logic of function here
        } catch (Exception e) {
            // How do I send to a different topic from here??
        }
        return new KeyValue<>(key, value);
    });
}
Set the Kafka consumer binding's enableDlq option to true; when the listener throws an exception, the record is sent to the dead-letter topic after retries are exhausted. If you want to fail immediately, set the consumer binding's maxAttempts property to 1 (the default is 3).
See the documentation.
enableDlq
When set to true, it enables DLQ behavior for the consumer. By default, messages that result in errors are forwarded to a topic named error.<destination>.<group>. The DLQ topic name can be configured by setting the dlqName property or by defining a @Bean of type DlqDestinationResolver. This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome. See Dead-Letter Topic Processing for more information. Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: x-original-topic, x-exception-message, and x-exception-stacktrace as byte[]. By default, a failed record is sent to the same partition number in the DLQ topic as the original record. See Dead-Letter Topic Partition Selection for how to change that behavior. Not allowed when destinationIsPattern is true.
Default: false.
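For illustration, assuming a message-channel binding named process-in-0 in a consumer group myGroup (both hypothetical names), the relevant properties would look roughly like this:

spring.cloud.stream.bindings.process-in-0.group=myGroup
# fail fast instead of the default 3 attempts
spring.cloud.stream.bindings.process-in-0.consumer.maxAttempts=1
spring.cloud.stream.kafka.bindings.process-in-0.consumer.enableDlq=true
# optional override of the default error.<destination>.<group> topic name
spring.cloud.stream.kafka.bindings.process-in-0.consumer.dlqName=my-error-topic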

How to retry a kafka message when there is an error - spring cloud stream

I'm pretty new to Kafka. I'm using Spring Cloud Stream Kafka to produce and consume messages.
@StreamListener(Sink.INPUT)
public void process(Order order) {
    try {
        // have my message processing
    } catch (Exception e) {
        // retry that record here..
    }
}
I just want to know how I can implement a retry. Any help on this is highly appreciated.
Hi,
There are multiple ways to handle "retries", and it depends on the kind of errors you encounter.
For basic issues, the Kafka framework will retry for you to recover from an error condition; for example, in case of a short network downtime, the consumer and producer APIs implement auto retry.
In particular, Kafka supports "built-in producer/consumer retries" to correctly handle a large variety of errors without loss of messages, but as a developer you must still be able to handle other types of errors with the try-catch block you mention.
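For illustration, those built-in producer retries are plain client configuration; a sketch with the standard Kafka producer properties (class name and values are arbitrary examples):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class RetryingProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.RETRIES_CONFIG, 3);                   // retry transient send failures
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);       // wait between attempts
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000); // upper bound for the whole send
        return new KafkaProducer<>(props);
    }
}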
Errors in Kafka can be divided into the following categories:
(producer & consumer side) Non-retriable broker errors, such as errors regarding message size, authorization errors, etc. -> you must handle them in the "design phase" of your app.
(producer side) Errors that occur before the message was sent to the broker, for example serialization errors -> you must handle them at runtime in your app.
(producer & consumer side) Errors that occur when the producer has exhausted all retry attempts, or when the available memory of the producer is filled to the limit because it is all used to store messages while retrying -> you should handle these errors.
Another point of attention regarding "how to retry" is how to handle the order of commits correctly when auto-commit is set to false.
A common and simple pattern to get commit order right is to use a monotonically increasing sequence number: increase the sequence number every time you commit, and record the sequence number at the time of the commit in the commit callback. When you are getting ready to send a retry, check whether the commit sequence number the callback got is equal to the instance variable; if it is, there was no newer commit and it is safe to retry. If the instance sequence number is higher, don't retry, because a newer commit was already sent. A sketch of this pattern follows below.
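A minimal sketch of that sequence-number pattern with the plain Kafka consumer API (the class and method names are illustrative; the pattern itself is exactly the one described above):

import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitRetrier {

    private final AtomicLong commitSeq = new AtomicLong(); // monotonically increasing

    public void commitWithRetry(KafkaConsumer<String, String> consumer,
                                Map<TopicPartition, OffsetAndMetadata> offsets) {
        long seqAtCommit = commitSeq.incrementAndGet(); // number this commit attempt
        consumer.commitAsync(offsets, (committedOffsets, exception) -> {
            if (exception != null && seqAtCommit == commitSeq.get()) {
                // No newer commit was sent in the meantime, so retrying is safe.
                commitWithRetry(consumer, offsets);
            }
            // If commitSeq has moved on, a newer commit supersedes this one: do not retry.
        });
    }
}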

Spring Cloud Stream Listener not pausing / waiting for the messages in Integration Testing Code

I have an application which connects to RabbitMQ through Spring Cloud Stream, and that works perfectly.
For integration test cases I am trying to use this sample - https://github.com/piomin/sample-message-driven-microservices/blob/master/account-service/src/test/java/pl/piomin/services/account/OrderReceiverTest.java
However, in my case the application sends back 3 messages over some time interval. With the lines below I can fetch the messages, but when there is a delay before they arrive, the loop just keeps spinning.
int i = 1;
while (i > 0) { // i never changes, so this polls in a busy loop forever
    Message<String> received = (Message<String>) collector.forChannel(channels.statusMessage()).poll();
    if (received != null) {
        LOGGER.info("Order response received: {}", received.getPayload());
    }
}
So instead of my custom polling, is there any way I can wait and poll for my messages, and stop once I have received them?
I also want to pick up messages based on the response routing key and send them to different channels. Is that possible?
--> Example: if the routing key is "InProcess", it should go to the inProcess method.
1) Your question is not at all clear; expand on it and explain exactly what you mean.
2) Routing keys are used within Rabbit to route to different queues; they are not used within the framework to route to channels or methods.
You can, however, use a condition on the @StreamListener (matching on headers['amqp_receivedRoutingKey']), but it's better to route messages to different queues instead.
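For illustration, such a condition could look roughly like this (the "InProcess" value comes from the question; Sink.INPUT and the method name are assumptions):

@StreamListener(target = Sink.INPUT,
        condition = "headers['amqp_receivedRoutingKey'] == 'InProcess'")
public void inProcess(Message<String> message) {
    // handles only the responses whose routing key was "InProcess"
}

The condition is evaluated against the message headers with SpEL, so each routing key you care about needs its own annotated method.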
