How to send messages to one of multiple topics based on a condition in a Spring Cloud Stream Kafka application

Currently I have a Spring Cloud Function which consumes a topic and publishes to another topic. Now I have multiple topics and need to publish the message to one of them based on certain checks in the Spring Cloud Function. How can I achieve this? Here is the current implementation.
@Bean("producerBean")
public Function<Message<SourceMessage>, Message<SinkMessage>> producerBean(SinkService<SourceMessage> sinkService) {
    return sinkService::processMessage;
}
@Service("SinkService")
public class SinkService<T> {

    public Message<SinkMessage> processMessage(Message<SourceMessage> message) {
        log.info("Message consumed at {} \n{}", message.getHeaders().getTimestamp(), message.getPayload());
        try {
            if (message.getPayload().isManaged()) {
                /*
                 Need to add one more check here.
                 if (type == 2)
                     send to topic1
                 else if (type == 4)
                     send to topic2
                 else
                     just log the type, do not send to any topic.
                */
                Message<SinkMessage> output = new GenericMessage<>(new SinkMessage());
                output.getPayload().setPayload(message.getPayload());
                return output;
            }
        } catch (Exception exception) {
            exception.printStackTrace();
        }
        return null;
    }
}
application.properties
spring.cloud.stream.kafka.binder.brokers=${bootstrap.servers}
spring.cloud.stream.kafka.binder.configuration.enable.idempotence=false
spring.cloud.stream.binders.test_binder.type=kafka
spring.cloud.stream.bindings.producerBean.binder=test_binder
spring.cloud.stream.bindings.producerBean-in-0.destination=${input-destination}
spring.cloud.stream.bindings.producerBean-in-0.group=${input-group}
spring.cloud.stream.bindings.producerBean-out-0.destination=topic1
spring.cloud.stream.bindings.producerBean-out-1.destination=topic2
pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
    <version>3.2.5</version>
</dependency>

You can use StreamBridge with the Kafka topic name and Spring Cloud will bind it automatically at runtime. That approach also auto-creates the topic if it does not exist; you can turn that off.
@Autowired
private StreamBridge streamBridge;

public void sendDynamically(Message<?> message, String topicName) {
    streamBridge.send(topicName, message);
}
https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_streambridge_and_dynamic_destinations
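For the type-based routing from the question, a minimal sketch with StreamBridge could look like the following; the getType() accessor and the reduced service signature are assumptions based on the question, not part of the original code.

@Service("SinkService")
public class SinkService<T> {

    private final StreamBridge streamBridge;

    public SinkService(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    public void processMessage(Message<SourceMessage> message) {
        SourceMessage payload = message.getPayload();
        if (!payload.isManaged()) {
            return;
        }
        SinkMessage output = new SinkMessage();
        output.setPayload(payload);
        // Route to one of the topics based on the type check from the question (getType() is assumed).
        if (payload.getType() == 2) {
            streamBridge.send("topic1", new GenericMessage<>(output));
        } else if (payload.getType() == 4) {
            streamBridge.send("topic2", new GenericMessage<>(output));
        } else {
            log.info("Type {} not routed to any topic", payload.getType());
        }
    }
}

With this approach the Function bean can be replaced by a Consumer<Message<SourceMessage>> binding (only producerBean-in-0 remains), since StreamBridge resolves the output destinations topic1 and topic2 at runtime.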

Related

Producer callback in Spring Cloud Stream with reactor core publisher

I have written a Spring Cloud Stream application where producers publish messages to the designated Kafka topics. My query is: how can I add a producer callback to receive an ack/confirmation that the message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, callback) (keeping the producer asynchronous)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        Message<?> response = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
    }
How can I make sure that tryEmitNext has successfully written to the topic?
Would implementing a ProducerListener be a possible solution? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have implemented the following now; it seems to work as expected.
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on onSuccess
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}

@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
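A minimal sketch of that wiring follows; the channel name sendResults and the binding name event-out-0 (derived from the Supplier bean above) are assumptions. The binder property would be set per binding, e.g. spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel=sendResults.

@Configuration
public class SendResultConfig {

    // Channel bean the binder sends successful send results to (name is an assumption).
    @Bean
    public MessageChannel sendResults() {
        return new DirectChannel();
    }

    // Consumes the send results; the RECORD_METADATA header carries the
    // partition and offset the record was written to.
    @ServiceActivator(inputChannel = "sendResults")
    public void handleSendResult(Message<?> sent) {
        RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
        log.info("Published to partition {} at offset {}", meta.partition(), meta.offset());
    }
}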

Kafka Consumer with Circuit Breaker, Retry Patterns using Resilience4j

I need some help in understanding how I can come up with a solution using Spring Boot, Kafka, and Resilience4j to make a microservice call from my Kafka consumer. If the microservice is down, I need my Kafka consumer to stop fetching messages/events, using a circuit breaker pattern, until the microservice is up and running again.
With Spring Kafka, you can use the pause and resume methods depending on the CircuitBreaker state transitions. The best way I found for this is to define a "supervisor" class annotated with @Configuration; Resilience4j is used as well.
@Configuration
public class CircuitBreakerConsumerConfiguration {

    public CircuitBreakerConsumerConfiguration(CircuitBreakerRegistry circuitBreakerRegistry, KafkaManager kafkaManager) {
        circuitBreakerRegistry.circuitBreaker("yourCBName").getEventPublisher().onStateTransition(event -> {
            switch (event.getStateTransition()) {
                case CLOSED_TO_OPEN:
                case CLOSED_TO_FORCED_OPEN:
                case HALF_OPEN_TO_OPEN:
                    kafkaManager.pause();
                    break;
                case OPEN_TO_HALF_OPEN:
                case HALF_OPEN_TO_CLOSED:
                case FORCED_OPEN_TO_CLOSED:
                case FORCED_OPEN_TO_HALF_OPEN:
                    kafkaManager.resume();
                    break;
                default:
                    throw new IllegalStateException("Unknown transition state: " + event.getStateTransition());
            }
        });
    }
}
This is what I used in combination with a KafkaManager annotated with @Component.
@Component
public class KafkaManager {

    private final KafkaListenerEndpointRegistry registry;

    public KafkaManager(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void pause() {
        registry.getListenerContainers().forEach(MessageListenerContainer::pause);
    }

    public void resume() {
        registry.getListenerContainers().forEach(MessageListenerContainer::resume);
    }
}
In addition my consumer service looks like this:
@KafkaListener(topics = "#{'${topic.name}'}", concurrency = "1", id = "CBListener")
public void receive(final ConsumerRecord<String, ReplayData> replayData, Acknowledgment acknowledgment) throws Exception {
    try {
        httpClientServiceCB.receiveHandleCircuitBreaker(replayData);
        acknowledgment.acknowledge();
    } catch (Exception e) {
        acknowledgment.nack(1000);
    }
}
And the @CircuitBreaker annotation:
@CircuitBreaker(name = "yourCBName")
public void receiveHandleCircuitBreaker(ConsumerRecord<String, ReplayData> replayData) throws Exception {
    try {
        String response = restTemplate.getForObject("http://localhost:8081/item", String.class);
    } catch (Exception e) {
        // throwing the exception is needed to trigger the Circuit Breaker state change
        throw new Exception(e);
    }
}
And this is additionally supplemented by the following application.properties
resilience4j.circuitbreaker.instances.yourCBName.failure-rate-threshold=80
resilience4j.circuitbreaker.instances.yourCBName.sliding-window-type=COUNT_BASED
resilience4j.circuitbreaker.instances.yourCBName.sliding-window-size=5
resilience4j.circuitbreaker.instances.yourCBName.wait-duration-in-open-state=10000
resilience4j.circuitbreaker.instances.yourCBName.automatic-transition-from-open-to-half-open-enabled=true
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.listener.ack-mode=MANUAL_IMMEDIATE
Also have a look at https://resilience4j.readme.io/docs/circuitbreaker
If you are using Spring Kafka, you could use the pause and resume methods of the ConcurrentMessageListenerContainer class.
You can attach an event listener to the CircuitBreaker which listens for state transitions and pauses or resumes the processing of events. Inject the CircuitBreakerRegistry into your bean:
circuitBreakerRegistry.circuitBreaker("yourCBName").getEventPublisher().onStateTransition(
    event -> {
        switch (event.getStateTransition()) {
            case CLOSED_TO_OPEN:
                container.pause();
                break;
            case OPEN_TO_HALF_OPEN:
                container.resume();
                break;
            case HALF_OPEN_TO_CLOSED:
                container.resume();
                break;
            case HALF_OPEN_TO_OPEN:
                container.pause();
                break;
            case CLOSED_TO_FORCED_OPEN:
                container.pause();
                break;
            case FORCED_OPEN_TO_CLOSED:
                container.resume();
                break;
            case FORCED_OPEN_TO_HALF_OPEN:
                container.resume();
                break;
            default:
                break;
        }
    }
);
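The container referenced above is not defined in the snippet. One way to obtain it (an assumption, mirroring the KafkaManager approach from the first answer) is to look it up from the KafkaListenerEndpointRegistry by the listener id and register the transition handler in a component:

@Component
public class ContainerSupervisor {

    public ContainerSupervisor(CircuitBreakerRegistry circuitBreakerRegistry,
                               KafkaListenerEndpointRegistry registry) {
        // "CBListener" is the id declared on the @KafkaListener above; adjust as needed.
        MessageListenerContainer container = registry.getListenerContainer("CBListener");
        circuitBreakerRegistry.circuitBreaker("yourCBName").getEventPublisher()
                .onStateTransition(event -> {
                    switch (event.getStateTransition()) {
                        case CLOSED_TO_OPEN:
                        case CLOSED_TO_FORCED_OPEN:
                        case HALF_OPEN_TO_OPEN:
                            container.pause();
                            break;
                        case OPEN_TO_HALF_OPEN:
                        case HALF_OPEN_TO_CLOSED:
                        case FORCED_OPEN_TO_CLOSED:
                        case FORCED_OPEN_TO_HALF_OPEN:
                            container.resume();
                            break;
                        default:
                            break;
                    }
                });
    }
}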

How to publish events to multiple instances from the command side in Axon

I tried to implement an application with CQRS and event sourcing using the Axon Framework. I implemented the command side and the query part as separate micro-services and replicated (scaled up) the query micro-service. I use RabbitMQ as the message broker. When the command part publishes an event, it does not update all query micro-service instances; it works in a round-robin way. How can I update all micro-services at the same time?
Here is my dependency file
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-amqp</artifactId>
    <version>${axon.version}</version>
</dependency>
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-spring-boot-starter</artifactId>
    <version>${axon.version}</version>
</dependency>
These are my configs on the command side:
@Bean
public Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("SeatReserveEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("SeatReserveEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This is application.yml
axon:
  amqp:
    exchange: SeatReserveEvents
This is the command-side configuration:
@Bean
public SpringAMQPMessageSource statisticsQueue(Serializer serializer) {
    return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {
        @RabbitListener(queues = "SeatReserveEvents")
        @Override
        public void onMessage(Message arg0, Channel arg1) throws Exception {
            super.onMessage(arg0, arg1);
        }
    };
}
This is the handler:
@Component
@ProcessingGroup("statistics")
public class EventLoggingHandler {

    @EventHandler
    protected void on(SeatResurvationCreateEvent event) {
        System.err.println(event);
    }

    @EventHandler
    protected void on(SeatReservationUpdateEvent event) {
        System.err.println(event);
    }
}
This is the application.yml:
axon:
  eventhandling:
    processors:
      statistics.source: statisticsQueue
I'd say this is more of an AMQP/RabbitMQ configuration question than an Axon Framework specific one. That said, you'd want to set up RabbitMQ to do pub/sub rather than round robin, as described in this tutorial.
I do, however, have another, more Axon Framework specific suggestion in mind.
Why immediately publish your events on a queue, if you could also pull the events from the store directly? You'd have TrackingEventProcessors on the query side of your application, which pull events from the event store as they are appended by the command side of your application.
That's how a monolithic version of an Axon Framework application incorporating CQRS would initially look anyway. Hence the simplest next step to split that CQRS application into a command and a query side would be to leave the way of receiving events as is, without adding the queue in between.
If you've got specific requirements to publish over a queue, however, or you just prefer to use a queue instead of letting the query applications pull from the event store directly, please disregard this comment and revert to the RabbitMQ tutorial.
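If you follow that suggestion, the query side keeps a TrackingEventProcessor and needs no AMQP source at all. A minimal sketch of the processor configuration, assuming the statistics processing group from the question and the property names of the Axon Spring Boot starter (verify against your Axon version):

axon:
  eventhandling:
    processors:
      statistics:
        mode: tracking

The event store (for example a shared database or Axon Server) then has to be reachable from both the command and the query services.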
We need to change the RabbitMQ configuration to publish events to multiple instances from the Axon command side. For that we have to change the configuration on the publisher side as below.
@Bean
public FanoutExchange fanoutExchange() {
    return new FanoutExchange("SeatReserveEvents");
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(fanoutExchange());
}
The next thing is that on the subscriber side we have to change the bean as below:
@Bean
public SpringAMQPMessageSource statisticsQueue(Serializer serializer) {
    return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue,
                exchange = @Exchange(value = "SeatReserveEvents", type = ExchangeTypes.FANOUT),
                key = "orderRoutingKey"))
        @Override
        public void onMessage(Message arg0, Channel arg1) throws Exception {
            super.onMessage(arg0, arg1);
        }
    };
}
Now we can replicate the consumer across multiple instances. This is the publish/subscribe pattern, and the exchange type is fanout.

How to configure Spring Cloud Stream (Kafka) to use Protobuf for serialization

I am using Spring Cloud Stream (Kafka) to exchange messages between producer and consumer microservices.
It exchanges data with native Java serialization. As per the Spring Cloud documentation, it supports JSON and Avro serialization.
Has anyone tried Protobuf serialization (a message converter) in Spring Cloud Stream?
---------------- Added later
I wrote this MessageConverter:
public class ProtobufMessageConverter<T extends AbstractMessage> extends AbstractMessageConverter {

    private Parser<T> parser;

    public ProtobufMessageConverter(Parser<T> parser) {
        super(new MimeType("application", "protobuf"));
        this.parser = parser;
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        if (clazz != null) {
            return EquipmentProto.Equipment.class.isAssignableFrom(clazz);
        }
        return true;
    }

    @Override
    public Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) {
        if (!(message.getPayload() instanceof byte[])) {
            return null;
        }
        try {
            // return EquipmentProto.Equipment.parseFrom((byte[]) message.getPayload());
            return parser.parseFrom((byte[]) message.getPayload());
        } catch (Exception e) {
            this.logger.error(e.getMessage(), e);
        }
        return null;
    }

    @Override
    protected Object convertToInternal(Object payload, MessageHeaders headers, Object conversionHint) {
        return ((AbstractMessage) payload).toByteArray();
    }
}
It's really not a question of trying but rather just doing it, since converters are a natural extension mechanism (inherited from Spring Integration) in Spring Cloud Stream that exists specifically to address these concerns. So yes, you can add your own custom converter.
Also, keep in mind that with Kafka there is also the concept of native serde, so you need to make sure that the two do not conflict.
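As a rough sketch of that registration (the bean and binding names are assumptions; newer Spring Cloud Stream versions pick up custom MessageConverter beans automatically, while older ones require the @StreamMessageConverter qualifier):

@Configuration
public class ProtobufConverterConfig {

    // Register the custom converter so the binder can use it for application/protobuf payloads.
    @Bean
    public MessageConverter equipmentProtobufConverter() {
        return new ProtobufMessageConverter<>(EquipmentProto.Equipment.parser());
    }
}

You would then pin the content type on the binding so the converter is selected, e.g. spring.cloud.stream.bindings.output.content-type=application/protobuf.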

Not able to to filter messages received using condition attribute in Spring Cloud Stream #StreamListener annotation

I am trying to create an event-based system for communication between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my receiver class methods as below.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, I do not see the methods annotated with the condition attribute being called; I only see the handleDefaultEvent method being called.
I am sending a message to this receiver application from the sending/source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {

    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in Source App as below,
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below,
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter the messages received by the sink/receiver application and handle them accordingly.
For this I have used the @StreamListener annotation with the condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream version Chelsea.SR2.
Can someone help me resolve this issue?
It seems the headers are not propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers; see http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties.
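In the Chelsea release line the Kafka binder only transmits custom headers that are explicitly listed, so the producer's application.properties would need something like the following (eventType taken from the question):

spring.cloud.stream.kafka.binder.headers=eventType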
