@KafkaListener per specific header value - Spring

I have a @KafkaListener:
@KafkaListener(topicPattern = "SameTopic")
public void onMessage(Message<String> message, Acknowledgment acknowledgment) {
    String eventType = new String((byte[]) message.getHeaders().get("Event-Type"), StandardCharsets.UTF_8);
    switch (eventType) {
        case "create" -> doCreate(message);
        case "update" -> doUpdate(message);
        case "delete" -> doDelete(message);
    }
}
The producer sets a custom header Event-Type with three possible values: create, update, delete. Currently I'm reading this header value from the Message and then invoking the rest of the logic according to the header value.
Is there any way to create three @KafkaListeners where each of them consumes only the messages matching some criteria - in my case, filtered by the Event-Type header value?
@KafkaListener(topicPattern = "SameTopic", ...)
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message);
}

@KafkaListener(topicPattern = "SameTopic", ...)
public void onDelete(Message<String> message, Acknowledgment acknowledgment) {
    doDelete(message);
}
I'm aware of RecordFilterStrategy, but I couldn't get any help from it.

Consider having those event types mapped to specific partitions of the topic.
That way you definitely can have a different @KafkaListener with a specific partition assigned:
/**
 * The topicPartitions for this listener when using manual topic/partition
 * assignment.
 * <p>
 * Mutually exclusive with {@link #topicPattern()} and {@link #topics()}.
 * @return the topic names or expressions (SpEL) to listen to.
 */
TopicPartition[] topicPartitions() default {};
The doc is here: https://docs.spring.io/spring-kafka/docs/current/reference/html/#manual-assignment
It's probably not going to work well with several instances of your app, since with manual assignment there is no consumer group involved. You may consider refining the logic into 3 different topics. Or, if that is not possible on the producer side, use Kafka Streams to split() the original topic into other topics according to the record key.
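For illustration, a minimal sketch of such manually assigned listeners, assuming the producer routes create/update/delete events to partitions 0, 1 and 2 of SameTopic (that partition mapping is a producer-side assumption, not something Spring provides):
@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "0"))
public void onCreate(Message<String> message, Acknowledgment acknowledgment) {
    doCreate(message); // partition 0 <- "create" events under the assumed mapping
}

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "1"))
public void onUpdate(Message<String> message, Acknowledgment acknowledgment) {
    doUpdate(message); // partition 1 <- "update" events
}

@KafkaListener(topicPartitions = @TopicPartition(topic = "SameTopic", partitions = "2"))
public void onDelete(Message<String> message, Acknowledgment acknowledgment) {
    doDelete(message); // partition 2 <- "delete" events
}
Here @TopicPartition is org.springframework.kafka.annotation.TopicPartition.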

How to see the types that flow in Spring Integration's IntegrationFlow

I'm trying to understand what type is returned when I aggregate in Spring Integration, and that's pretty hard. I'm using Project Reactor and my code snippet is:
public FluxAggregatorMessageHandler randomIdsBatchAggregator() {
    FluxAggregatorMessageHandler f = new FluxAggregatorMessageHandler();
    f.setWindowTimespan(Duration.ofSeconds(5));
    f.setCombineFunction(messageFlux -> messageFlux
            .map(Message::getPayload)
            .collectList()
            .map(GenericMessage::new));
    return f;
}
@Bean
public IntegrationFlow dataPipeline() {
    return IntegrationFlows.from(somePublisher)
            // ----> The type Message<?> passed? Or Flux<Message<?>>?
            .handle(randomIdsBatchAggregator())
            // ----> What type has been returned from the aggregation?
            .handle(bla())
            .get();
}
Beyond understanding the types passed in this example, I want to know, in general, how I can tell which objects flow through an IntegrationFlow and what their types are.
IntegrationFlows.from(somePublisher)
This creates a FluxMessageChannel internally, which subscribes to the provided Publisher. Every single event is emitted from this channel to its subscriber - your aggregator.
The FluxAggregatorMessageHandler produces whatever is explained in the setCombineFunction() JavaDocs:
/**
 * Configure a transformation {@link Function} to apply for a {@link Flux} window to emit.
 * Requires a {@link Mono} result with a {@link Message} as value as a combination result
 * of the incoming {@link Flux} for window.
 * By default a {@link Flux} for window is fully wrapped into a message with headers copied
 * from the first message in window. Such a {@link Flux} in the payload has to be subscribed
 * and consumed downstream.
 * @param combineFunction the {@link Function} to use for result windows transformation.
 */
public void setCombineFunction(Function<Flux<Message<?>>, Mono<Message<?>>> combineFunction) {
So, it is a Mono with a message, which is exactly what you produce with your .collectList(). That Mono is subscribed by the framework when it emits a reply message from the FluxAggregatorMessageHandler. Therefore your .handle(bla()) must expect a list of payloads, which is really natural for an aggregator result.
See more in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#flux-aggregator
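For illustration only, here is a sketch of the flow with the downstream handler typed to the aggregated list; the List.class overload of handle() stands in for the unspecified bla() from the question:
@Bean
public IntegrationFlow dataPipeline() {
    return IntegrationFlows.from(somePublisher)
            // each event from the Publisher arrives here as an individual Message<String>
            .handle(randomIdsBatchAggregator())
            // the combine function collects a 5-second window into a single
            // Message<List<String>>, so this handler receives the whole batch
            .handle(List.class, (batch, headers) -> {
                batch.forEach(System.out::println); // process the aggregated payloads
                return null; // terminal endpoint: no reply message
            })
            .get();
}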

Spring Cloud Function - Separate routing-expression for different Consumer

I have a service which receives differently structured messages from different message queues. With @StreamListener conditions we can choose, for every message type, how the message should be handled. As an example:
We receive two different types of messages, which have different header fields and values e.g.
Incoming from "order" queue:
Order1: { Header: {catalog:groceries} }
Order2: { Header: {catalog:tools} }
Incoming from "shipment" queue:
Shipment1: { Header: {region:Europe} }
Shipment2: { Header: {region:America} }
There is a binding for each queue, and with the corresponding @StreamListener I can process the messages by catalog and region differently, e.g.
@StreamListener(target = OrderSink.ORDER_CHANNEL, condition = "headers['catalog'] == 'groceries'")
public void onGroceriesOrder(GroceryOrder order) {
    ...
}
So the question is: how can this be achieved with the new Spring Cloud Function approach?
The documentation at https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.2.RELEASE/reference/html/spring-cloud-stream.html#_event_routing mentions:
Also, for SpEL, the root object of the evaluation context is Message, so you can do evaluation on individual headers (or the message) as well: ...routing-expression=headers['type']
Is it possible to add the routing-expression to the binding in application.yml, like the following?
onGroceriesOrder-in-0:
  destination: order
  routing-expression: "headers['catalog']==groceries"
EDIT after first answer:
If the above expression at this location is not possible, which is what the first answer implies, then my question is as follows:
As far as I understand, an expression like routing-expression: headers['catalog'] must be set globally, because the result maps to certain (consumer) functions.
How can I make sure that the two different message types on each queue are forwarded to their own consumer functions, e.g.
Order1 --> MyOrderService.onGroceriesOrder()
Order2 --> MyOrderService.onToolsOrder()
Shipment1 --> MyShipmentService.onEuropeShipment()
Shipment2 --> MyShipmentService.onAmericaShipment()
That was easy with @StreamListener, because each method gets its own @StreamListener annotation with different conditions. How can this be achieved with the new routing-expression setting?
Aside from the fact that the above is not a valid expression, I think you meant headers['catalog']=='groceries'. If so, what would you expect to happen from evaluating it, given that the only two possible outcomes are true/false? Anyway, these questions are rhetorical, but they help to understand the problem and how to fix it.
The expression must result in the name of a function to route TO. So...
routing-expression: headers['catalog'] - assumes that the actual value of the catalog header is the name of the function to invoke.
routing-expression: headers['catalog']=='groceries' ? 'processGroceries' : 'processOther' - maps the value 'groceries' to the 'processGroceries' function.
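In Spring Cloud Stream terms, that second expression could land in application.yml roughly as follows (a sketch; onGroceriesOrder and onToolsOrder are assumed to be registered Consumer beans, and the property names follow the 3.x documentation):
spring:
  cloud:
    stream:
      function:
        routing:
          enabled: true
      bindings:
        functionRouter-in-0:
          destination: order
    function:
      routing-expression: "headers['catalog'] == 'groceries' ? 'onGroceriesOrder' : 'onToolsOrder'"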
For a specific routing, you can use the MessageRoutingCallback strategy:
MessageRoutingCallback
The MessageRoutingCallback is a strategy to assist with determining the name of the route-to function definition.
public interface MessageRoutingCallback {
    FunctionRoutingResult routingResult(Message<?> message);
    ...
}
All you need to do is implement and register it as a bean to be picked up by the RoutingFunction. For example:
@Bean
public MessageRoutingCallback customRouter() {
    return new MessageRoutingCallback() {
        @Override
        public FunctionRoutingResult routingResult(Message<?> message) {
            return new FunctionRoutingResult((String) message.getHeaders().get("func_name"));
        }
    };
}
See the Spring Cloud Function documentation for more details.

gRPC: Client streaming with a configuration message

Here's a proto definition for a service that consumes a stream of events from a client:
message Event {
  // ...
}

service EventService {
  rpc Publisher(stream Event) returns (google.protobuf.Empty);
}
The problem is that the server needs to be told what to do with this stream. Ideally, it would first receive an Options message:
message Event {
  // ...
}

message Options {
  // ...
}

service EventService {
  rpc Publisher(Options, stream Event) returns (google.protobuf.Empty);
}
However, gRPC only supports a single parameter for rpc methods.
One solution is to introduce an additional PublishMessage message which can contain either an Options or an Event message:
message PublishMessage {
  oneof content {
    Options options = 1;
    Event event = 2;
  }
}
The service would then expect the first PublishMessage to contain an Options message, with all subsequent ones containing Event messages. This introduces additional overhead from the wrapping message and makes the API a little clunky.
Is there a cleaner way to achieve the same result?
Using oneof is the suggested approach when many fields or messages are in play. The overhead is minimal, so it wouldn't generally be a concern. There is the clunkiness, though.
If there are only a few fields, you may want to combine the fields from Options and Event into a single message, or, similarly, add Options to Event as a field. You'd expect the Options fields to be present on the first request and missing from subsequent ones. This works better when there are fewer configuration fields, like just a "name."
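A sketch of that second suggestion - folding Options into Event as a field that is only set on the first message of the stream:
message Options {
  // configuration fields...
}

message Event {
  // Present only on the first message of the stream;
  // the server ignores it on subsequent messages.
  Options options = 1;
  // actual event fields...
}

service EventService {
  rpc Publisher(stream Event) returns (google.protobuf.Empty);
}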

How to create a CRUD with gRPC service without much repetition?

I'm trying to use gRPC to build a simple CRUD service, but I keep finding myself creating messages with big overlaps.
This is best described by an example:
message Todo {
  // id is only available for a persisted entity in database.
  string id = 1;
  string content = 2;
  // this is only available for users with admin role.
  string secret_content = 3;
}
service Todos {
  rpc CreateTodo(CreateRequest) returns (CreateResponse) {}
  rpc ReadTodo(ReadRequest) returns (ReadResponse) {}
}

message CreateRequest {
  // this todo is not supposed to have an id;
  // should I create another version of Todo without an id field?
  Todo todo = 1;
}

message CreateResponse {
  // this todo will always have an id.
  Todo todo = 1;
}

message ReadRequest {
  string id = 1;
}

message ReadResponse {
  // this todo should only have the secret_content field if the
  // user is authenticated as an admin; if not, the field should not
  // fall back to the zero value - the whole field must be missing.
  Todo todo = 1;
}
Is this a good approach to building a CRUD-like resource with gRPC? That is, having a single message (Todo) represent the resource, and wrapping this message in request/response types per action.
Should the Todo message have all fields covered by all requests/responses, and leave unset the ones not used by each?

Should the Todo message have all fields covered by all requests/responses, and leave unset the ones not used by each?
Yes, this seems like a reasonable design. In protobuf v2 you would have marked such fields optional to make this easier to understand, but in v3 all fields are optional by default anyway.
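On the server side the admin-only field can then simply be left unset: in proto3, a string field left at its default value is not serialized at all. A sketch, assuming the Java classes generated from the Todo proto above (entity and callerIsAdmin are hypothetical):
// Build the response Todo, revealing secret_content only to admins.
Todo.Builder todo = Todo.newBuilder()
        .setId(entity.getId())
        .setContent(entity.getContent());
if (callerIsAdmin) { // hypothetical authorization check
    todo.setSecretContent(entity.getSecretContent());
}
// For non-admins, secret_content stays at its default and is omitted from the wire.
return ReadResponse.newBuilder().setTodo(todo).build();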

Domain Driven Design - complex validation of commands across different aggregates

I've only just begun with DDD and am currently trying to grasp the ways to do different things with it. I'm trying to design the system using asynchronous events (no event sourcing yet) with CQRS. Currently I'm stuck on validation of commands. I've read this question: Validation in a Domain Driven Design; however, none of the answers seem to cover complex validation across different aggregate roots.
Let's say I have these aggregate roots:
Client - contains list of enabled services, each service can have a value-object list of discounts and their validity.
DiscountOrder - an order to enable more discounts on some of the services of given client, contains order items with discount configuration.
BillCycle - each period when bills are generated is described by own billcycle.
Here's the use case:
A discount order can be submitted. Each new discount period in the discount order should not overlap with any of the BillCycles, and no two discounts of the same type can be active at the same time on one service.
Basically, using Hibernate in CRUD style, this would look something like the following (Java code, but the question is language-agnostic):
public class DiscountProcessor {
    ...
    @Transactional
    public void processOrder(long orderId) {
        DiscOrder order = orderDao.get(orderId);
        BillCycle[] cycles = billCycleDao.getAll();
        for (OrderItem item : order.getItems()) {
            // Validate billcycle overlapping
            for (BillCycle cycle : cycles) {
                if (periodsOverlap(cycle.getPeriod(), item.getPeriod())) {
                    throw new PeriodsOverlapWithBillCycle(...);
                }
            }
            // Validate discount overlapping
            for (Discount d : item.getForService().getDiscounts()) {
                if (d.getType() == item.getType() && periodsOverlap(d.getPeriod(), item.getPeriod())) {
                    throw new PeriodsOverlapWithOtherItems(...);
                }
            }
            // Maybe some other validations in future or stuff
            ...
        }
        createDiscountsForOrder(order);
    }
}
Now here are my thoughts on the implementation:
Basically, the order can be in three states: "DRAFT", "VALIDATED" and "INVALID". The "DRAFT" state can contain any kind of invalid data, the "VALIDATED" state should only contain valid data, and "INVALID" should contain invalid data.
Therefore, there should be a method which tries to switch the state of the order; let's call it order.validate(...). The method performs the validations required for the transition (DRAFT -> VALIDATED or DRAFT -> INVALID) and, if successful, changes the state and emits an OrderValidated or OrderInvalidated event.
Now, what I'm struggling with is the signature of said order.validate(...) method. To validate the order, it requires several other aggregates, namely BillCycle and Client. I can see these solutions:
1. Put those aggregates directly into the validate method, like order.validateWith(client, cycles) or order.validate(new OrderValidationData(client, cycles)). However, this seems a bit hackish.
2. Extract the required information from client and cycle into some kind of intermediate validation data object, something like order.validate(new OrderValidationData(client.getDiscountInfos(), getListOfPeriods(cycles))).
3. Do the validation in a separate service method which can do whatever it wants with whatever aggregates it wants (basically similar to the CRUD example above). However, this seems far from DDD, as the method order.validate() becomes a dummy state setter, and calling it makes it possible to bring an order unintuitively into a corrupted state (status = "valid" but containing invalid data, because nobody bothered to call the validation service).
What is the proper way to do it, and could it be that my whole thought process is wrong?
Thanks in advance.
What about introducing a delegate object to manipulate Order, Client, BillCycle?
class OrderingService {
    @Injected private ClientRepository clientRepository;
    @Injected private BillingRepository billRepository;

    Specification<Order> validSpec() {
        return new ValidOrderSpec(clientRepository, billRepository);
    }
}

class ValidOrderSpec implements Specification<Order> {
    @Override public boolean isSatisfiedBy(Order order) {
        Client client = clientRepository.findBy(order.getClientId());
        BillCycle[] billCycles = billRepository.findAll();
        // validate here
    }
}

class Order {
    void validate(Specification<Order> spec) {
        if (spec.isSatisfiedBy(this)) {
            validated();
        } else {
            invalidated();
        }
    }
}
The pros and cons of your three solutions, from my perspective:
order.validateWith(client, cycles)
It is easy to test the validation with the order:
// file: OrderUnitTest
@Test public void should_change_to_valid_when_xxxx() {
    Client client = new ClientFixture()...build();
    BillCycle[] cycles = new BillCycleFixture()...build();
    Order order = new OrderFixture()...build();
    subject.validateWith(client, cycles);
    assertThat(order.getStatus(), is(VALID));
}
So far so good, but there seems to be some duplicated test code for the DiscountProcessor.
// file: DiscountProcessor
@Test public void should_change_to_valid_when_xxxx() {
    Client client = new ClientFixture()...build();
    BillCycle[] cycles = new BillCycleFixture()...build();
    Order order = new OrderFixture()...build();
    DiscountProcessor subject = ...
    given(clientRepository).findBy(client.getId()).thenReturn(client);
    given(cycleRepository).findAll().thenReturn(cycles);
    given(orderRepository).findBy(order.getId()).thenReturn(order);
    subject.processOrder(order.getId());
    assertThat(order.getStatus(), is(VALID));
}

// or in mock style
@Test public void should_change_to_valid_when_xxxx() {
    Client client = mock(Client.class);
    BillCycle[] cycles = array(mock(BillCycle.class));
    Order order = mock(Order.class);
    DiscountProcessor subject = ...
    given(clientRepository).findBy(client.getId()).thenReturn(client);
    given(cycleRepository).findAll().thenReturn(cycles);
    given(orderRepository).findBy(order.getId()).thenReturn(order);
    given(client)....
    given(cycle1)....
    subject.processOrder(order.getId());
    verify(order).validated();
}
order.validate(new OrderValidationData(client.getDiscountInfos(), getListOfPeriods(cycles)))
Same as the above one: you still need to prepare data for both OrderUnitTest and DiscountProcessorUnitTest. But I think this one is better, as the order is not tightly coupled with Client and BillCycle.
order.validate()
Similar to my idea, if you keep validation in the domain layer. Sometimes it is just not any single entity's responsibility; consider a domain service or a specification object.
// file: OrderUnitTest
@Test public void should_change_to_valid_when_xxxx() {
    Client client = new ClientFixture()...build();
    BillCycle[] cycles = new BillCycleFixture()...build();
    Order order = new OrderFixture()...build();
    Specification<Order> spec = new ValidOrderSpec(clientRepository, cycleRepository);
    given(clientRepository).findBy(client.getId()).thenReturn(client);
    given(cycleRepository).findAll().thenReturn(cycles);
    subject.validate(spec);
    assertThat(order.getStatus(), is(VALID));
}
// file: DiscountProcessor
@Test public void should_change_to_valid_when_xxxx() {
    Order order = new OrderFixture()...build();
    Specification<Order> spec = mock(ValidOrderSpec.class);
    DiscountProcessor subject = ...
    given(orderingService).validSpec().thenReturn(spec);
    given(spec).isSatisfiedBy(order).thenReturn(true);
    given(orderRepository).findBy(order.getId()).thenReturn(order);
    subject.processOrder(order.getId());
    assertThat(order.getStatus(), is(VALID));
}
Do the 3 possible states reflect your domain, or is that just an extrapolation? I'm asking because your sample code doesn't seem to change the Order state but throws an exception when it's invalid.
If it's acceptable for the order to stay DRAFT for a short period of time after being submitted, you could have DiscountOrder emit a DiscountOrderSubmitted domain event. A handler catches the event and (delegates to a domain service that) examines whether the submission is legit or not. It would then issue a ChangeOrderState command to make the order either VALIDATED or INVALID.
You could even assume the change is legit by default and have processOrder() take the order directly to VALIDATED, until proven otherwise by a subsequent INVALID counter-order issued by the validation service.
This is not much different from your third solution or Hippoom's, though, except that every step of the process is made explicit with its own domain event. I guess that with your current aggregate design you're doomed to have a third-party orchestrator (as un-DDD and transaction-script-esque as it may sound) controlling the process, since the DiscountOrder aggregate doesn't have native access to all the information needed to tell whether a given transformation is valid or not.
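A rough sketch of that event-driven variant, with all type names hypothetical and matching the vocabulary above:
// Reacts to the domain event emitted when a DiscountOrder is submitted.
class DiscountOrderSubmittedHandler {

    private final OrderRepository orderRepository;
    private final OrderValidationService validationService; // domain service owning the checks
    private final CommandBus commandBus;

    DiscountOrderSubmittedHandler(OrderRepository orderRepository,
                                  OrderValidationService validationService,
                                  CommandBus commandBus) {
        this.orderRepository = orderRepository;
        this.validationService = validationService;
        this.commandBus = commandBus;
    }

    void on(DiscountOrderSubmitted event) {
        DiscOrder order = orderRepository.get(event.getOrderId());
        // The domain service pulls in the Client and BillCycle aggregates itself.
        boolean legit = validationService.isValid(order);
        commandBus.send(new ChangeOrderState(order.getId(), legit ? OrderState.VALIDATED : OrderState.INVALID));
    }
}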
