Spring Kafka Template implementation example for seek offset, acknowledgement - spring

I am new to spring-kafka-template. I have tried some basic things in it and they work fine, but now I am trying to implement some concepts mentioned in the Spring docs, such as:
Offset Seeking
Acknowledging listeners
I tried to find examples for these on the net but was unsuccessful; the only thing I found is the source code.
We have the same issue as mentioned in this post: Spring kafka consumer, seek offset at runtime.
But there is no example available showing how to implement it.
Can someone give an example of how to implement these?
Thanks in advance.

You should implement ConsumerSeekAware to deal with seeks:
static class Listener implements ConsumerSeekAware {

    private final ThreadLocal<ConsumerSeekCallback> seekCallBack = new ThreadLocal<>();

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        // the callback is registered per consumer thread, hence the ThreadLocal
        this.seekCallBack.set(callback);
    }

    @KafkaListener(...)
    public void listen(@Payload String foo, Acknowledgment ack) {
        // topic and partition are placeholders for the partition you want to rewind
        this.seekCallBack.get().seek(topic, partition, 0);
        // commit the offset manually (requires manual ack mode)
        ack.acknowledge();
    }
}
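Note that for the Acknowledgment argument to be populated, the listener container must be in manual ack mode. A minimal sketch of that configuration (in recent spring-kafka versions the enum lives on ContainerProperties; in older ones it is on AbstractMessageListenerContainer):
@Configuration
@EnableKafka
public class KafkaConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // MANUAL ack mode hands an Acknowledgment to the listener method
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        return factory;
    }
}
With Spring Boot, the single property spring.kafka.listener.ack-mode=manual should achieve the same.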

Related

Spring 6: Spring Cloud Stream Kafka - Replacement for @EnableBinding

I was reading "Spring Microservices In Action (2021)" because I wanted to brush up on microservices.
Now with Spring Boot 3 a few things changed. In the book, an easy example of how to push messages to a topic and how to consume messages from a topic is presented.
The problem is: the examples presented just do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in this project:
@Component
public class SimpleSourceBean {

    private Source source;

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source) {
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this in Spring Boot 3? Config-wise and "code-wise"?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
Whenever a message arrives on orgChangeTopic, the method loggerSink is supposed to fire.
How do I do this in Spring Boot 3?
In Spring Cloud Stream 4.0.0 (the version used if you are using Boot 3), a few things were removed - such as EnableBinding, StreamListener, etc. We deprecated them in 3.x and finally removed them in the 4.0.0 version. The annotation-based programming model was removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as java.util.function.Function|Consumer|Supplier etc. for a processor, sink, and source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
Your example #1 can be re-written like this:
@Component
public class SimpleSourceBean {

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config:
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Just so you know, you no longer need the zkNodes property, nor the content-type property, since the framework auto-converts that for you.
StreamBridge's send takes a binding name and the payload. The binding name can be anything, but for consistency reasons we used output-out-0 here. Please read the reference docs for more context around the reasoning for this binding name.
If you have a simple source that runs on a timer, you can express it simply as a Supplier, as below (instead of using a StreamBridge).
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // return the payload
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
Example #2
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return model -> {
        log.info("Received an {} event for organization id {}",
                model.getAction(), model.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output rather than the generated in-0, out-0, etc. names, there are ways to make that happen. Details are in the reference docs; a sketch follows below.
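For example, a minimal sketch of that renaming, assuming the spring.cloud.stream.function.bindings mapping described in the reference docs:
spring.cloud.function.definition=loggerSink
# map the generated binding name to a custom one...
spring.cloud.stream.function.bindings.loggerSink-in-0=input
# ...and configure the binding under that custom name
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.group=licensingGroup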

Spring Cloud Stream Resolving Input Channel dynamically based on Message

I need a way of resolving an inbound channel dynamically based on the type of the incoming message.
I am not looking for a header-based solution, which is already mentioned in this link:
https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.0.M1/spring-cloud-stream.html#_using_streamlistener_for_content_based_routing
The resolution has to happen based on the type of the message. If there is a custom binding that can be done at application startup to make this possible, that would be OK. Please give me some samples of how this can be achieved.
There is no such support in Spring Cloud Stream.
The underlying Spring for Apache Kafka project does have support for such scenarios.
See @KafkaListener on a Class.
It requires the payload to have been deserialized by the Kafka deserializer; then the method called depends on the payload type.
It also supports a fallback "default" method.
@KafkaListener(id = "multi", topics = "myTopic")
static class MultiListenerBean {

    @KafkaHandler
    public void listen(String foo) {
        ...
    }

    @KafkaHandler
    public void listen(Integer bar) {
        ...
    }

    @KafkaHandler(isDefault = true)
    public void listenDefault(Object object) {
        ...
    }
}
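Since the handler method is picked by the deserialized payload type, the consumer needs a type-aware value deserializer. A minimal properties sketch, assuming a Spring Boot app and spring-kafka's JsonDeserializer (the trusted package name is illustrative):
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
# trust only the package(s) that contain your payload classes
spring.kafka.consumer.properties.spring.json.trusted.packages=com.example.model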

How to mask sensitive information while logging in spring integration framework

I have a requirement to mask sensitive information while logging. We are using the wire-tap provided by the integration framework for logging, and we have many interfaces already designed which log using wire-tap. We are currently using Spring Boot 2.1 and Spring Integration.
I hope all your integration flows log via the mentioned single, global wire-tap.
A wire-tap is just the start of another integration flow anyway: it is not only a channel with a logger on it. You really can build a wire-tapped flow of any complexity.
My point is that you can add a transformer before the logging-channel-adapter and mask the payload and/or headers any way you require. The logger will then receive already-masked data.
Another way is to use some masking functionality in the log-expression; you may call a bean or a static utility for masking from there (see the sketch below): https://docs.spring.io/spring-integration/reference/html/#logging-channel-adapter
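A minimal sketch of that second approach (the masker bean and its mask method are hypothetical; setLogExpressionString is the LoggingHandler hook for the SpEL expression):
@Bean
@ServiceActivator(inputChannel = "loggingChannel")
public LoggingHandler loggingHandler() {
    LoggingHandler handler = new LoggingHandler(LoggingHandler.Level.INFO);
    // delegate masking to a (hypothetical) bean before the data hits the log
    handler.setLogExpressionString("@masker.mask(payload)");
    return handler;
}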
I don't know if this is a fancy approach, but I ended up implementing a sort of "error message filter" to mask headers in case the sensitive one is present (this can be extended to multiple header names, but it gives the idea):
@Component
public class ErrorMessageFilter {

    private static final String SENSITIVE_HEADER_NAME = "sensitive_header";

    public Throwable filterErrorMessage(Throwable payload) {
        if (payload instanceof MessagingException) {
            Message<?> failedMessage = ((MessagingException) payload).getFailedMessage();
            if (failedMessage != null && failedMessage.getHeaders().containsKey(SENSITIVE_HEADER_NAME)) {
                MessageHeaderAccessor headerAccessor = new MessageHeaderAccessor(failedMessage);
                headerAccessor.setHeader(SENSITIVE_HEADER_NAME, "XXX");
                // withPayload(..) is statically imported from MessageBuilder
                return new MessagingException(withPayload(failedMessage.getPayload())
                        .setHeaders(headerAccessor)
                        .build());
            }
        }
        return payload;
    }
}
Then, in the @Configuration class, I added a way to wire my filter into Spring Integration's LoggingHandler:
@Autowired
public void setLoggingHandlerLogExpression(LoggingHandler loggingHandler, ErrorMessageFilter messageFilter) {
    loggingHandler.setLogExpression(new FunctionExpression<Message<?>>(m -> {
        if (m instanceof ErrorMessage) {
            return messageFilter.filterErrorMessage(((ErrorMessage) m).getPayload());
        }
        return m.getPayload();
    }));
}
This also gave me the flexibility to reuse my filter in other components where I handle error messages (e.g.: send error notifications to Zabbix, etc.).
P.S.: sorry about all the instanceof checks and ifs, but at a certain layer the dirty code has to start.

Kinesis as producer in Spring Boot Reactive Stream API

I'm trying to build a small Spring Boot reactive API. The API should let users subscribe to some data, returned as SSE.
The data is located on a Kinesis stream.
Creating the reactive API and the StreamListener for Kinesis is fairly easy, but can I combine these, so that the Kinesis stream is used as a producer for the event stream used by my data service?
The code looks more or less like this:
// Kinesis binding, with listenerMode: rawRecords
@EnableBinding(Sink.class)
public class KinesisStreamListener {

    @StreamListener(value = Sink.INPUT)
    public void logger(List<Record> payload) throws Exception {
    }
}

@RestController
@RequestMapping("/data")
public class DataResource {

    @Autowired
    DataService service;

    @GetMapping(produces = {MediaType.TEXT_EVENT_STREAM_VALUE, MediaType.APPLICATION_STREAM_JSON_VALUE})
    public Flux<EventObject> getData() {
        return service.getData();
    }
}

@Component
public class DataService {

    Flux<EventObject> getData() {
        Flux<Long> interval = Flux.interval(Duration.ofMillis(1000));
        Flux<EventObject> dataFlux = Flux.fromStream(Stream.generate(() -> ???));
        return Flux.zip(interval, dataFlux).map(Tuple2::getT2);
    }
}
Here is a sample of how I would do that: https://github.com/artembilan/sandbox/tree/master/cloud-stream-kinesis-to-webflux.
Once we agree on the details and some improvements, it can go to the official Spring Cloud Stream Samples repository: https://github.com/spring-cloud/spring-cloud-stream-samples
The main idea is to reuse the same Flux provided by the @StreamListener via the Spring Cloud Stream Reactive Support. This is already a FluxPublish, so any new SSE connection works as a plain reactive subscriber.
There are a couple of tricks to take into account:
For listenerMode: rawRecords, we also need to configure contentType: application/octet-stream to avoid any conversion attempts when the Binder sends a message to the Sink.INPUT channel.
Since listenerMode: rawRecords returns a List<Record>, our Flux in the @StreamListener method should expect exactly this type, not a plain Record.
Both concerns are considered candidates for framework improvements.
So, let us know how it looks and works for you.
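A rough sketch of the idea, assuming the reactive @StreamListener style (the whole Flux is injected once via @Input) and the Kinesis model API where Record.getData() returns a ByteBuffer; the mapping to String is illustrative:
@EnableBinding(Sink.class)
public class KinesisStreamListener {

    private Flux<String> events;

    @StreamListener
    public void listen(@Input(Sink.INPUT) Flux<List<Record>> records) {
        this.events = records
                .flatMap(Flux::fromIterable)
                .map(record -> StandardCharsets.UTF_8.decode(record.getData()).toString())
                // share one upstream subscription across all SSE clients
                .publish()
                .autoConnect();
    }

    public Flux<String> events() {
        return this.events;
    }
}
And the matching properties for the two tricks above (binder property names as in the sample):
spring.cloud.stream.bindings.input.content-type=application/octet-stream
spring.cloud.stream.kinesis.bindings.input.consumer.listenerMode=rawRecords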

Spring Cloud Stream RabbitMQ

I am trying to understand why I would want to use Spring Cloud Stream with RabbitMQ. I've had a look at the RabbitMQ Spring tutorial 4 (https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html), which is basically what I want to do. It creates a direct exchange with 2 queues attached, and depending on the routing key a message is routed either to Q1 or to Q2.
The whole process is pretty straightforward if you look at the tutorial: you create all the parts, bind them together, and you're ready to go.
I was wondering what benefit I would gain from using Spring Cloud Stream, and whether this is even the use case for it. It was easy to create a simple exchange, and even defining destination and group was straightforward with Stream. So I thought, why not go further and try to handle the tutorial case with Stream?
I have seen that Stream has a BinderAwareChannelResolver which seems to do the same thing, but I am struggling to put it all together to achieve the same as in the RabbitMQ Spring tutorial. I am not sure if it is a dependency issue, but I seem to misunderstand something fundamental here; I thought something like:
spring.cloud.stream.bindings.output.destination=myDestination
spring.cloud.stream.bindings.output.group=consumerGroup
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='key'
should do the trick.
Is there anyone with a minimal example for a source and sink which basically creates a direct exchange, binds 2 queues to it and depending on routing key routes to either one of those 2 queues like in https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html?
EDIT:
Below is a minimal set of code which demonstrates how to do what I asked. I did not attach the build.gradle as it is straightforward (but if anyone is interested, let me know).
application.properties: set up the producer
spring.cloud.stream.bindings.output.destination=tut.direct
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=headers.type
Sources.class: set up the producer's channel
public interface Sources {

    String OUTPUT = "output";

    @Output(Sources.OUTPUT)
    MessageChannel output();
}
StatusController.class: respond to REST calls and send messages with specific routing keys
/**
 * Status endpoint for the health-check service.
 */
@RestController
@EnableBinding(Sources.class)
public class StatusController {

    private static final Logger log = LoggerFactory.getLogger(StatusController.class);

    private int index;

    private int count;

    private final String[] keys = {"orange", "black", "green"};

    private Sources sources;

    private StatusService status;

    @Autowired
    public StatusController(Sources sources, StatusService status) {
        this.sources = sources;
        this.status = status;
    }

    /**
     * Service available; returns "OK".
     * @return The status of the service.
     */
    @RequestMapping("/status")
    public String status() {
        String status = this.status.getStatus();
        StringBuilder builder = new StringBuilder("Hello to ");
        if (++this.index == 3) {
            this.index = 0;
        }
        String key = keys[this.index];
        builder.append(key).append(' ');
        builder.append(Integer.toString(++this.count));
        String payload = builder.toString();
        log.info(payload);
        // add kv pair - the routing-key-expression (which reads the 'type' header)
        // will evaluate it and use the value as the routing key
        Message<String> msg = new GenericMessage<>(payload, Collections.singletonMap("type", key));
        sources.output().send(msg);
        // return rest call
        return status;
    }
}
Consumer side of things, properties:
spring.cloud.stream.bindings.input.destination=tut.direct
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=orange
spring.cloud.stream.bindings.inputer.destination=tut.direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.bindingRoutingKey=black
Sinks.class:
public interface Sinks {

    String INPUT = "input";

    @Input(Sinks.INPUT)
    SubscribableChannel input();

    String INPUTER = "inputer";

    @Input(Sinks.INPUTER)
    SubscribableChannel inputer();
}
ReceiveStatus.class: Receive the status:
@EnableBinding(Sinks.class)
public class ReceiveStatus {

    private static final Logger log = LoggerFactory.getLogger(ReceiveStatus.class);

    @StreamListener(Sinks.INPUT)
    public void receiveStatusOrange(String msg) {
        log.info("I received a message. It was orange number: {}", msg);
    }

    @StreamListener(Sinks.INPUTER)
    public void receiveStatusBlack(String msg) {
        log.info("I received a message. It was black number: {}", msg);
    }
}
Spring Cloud Stream lets you develop event-driven microservice applications by enabling the applications to connect (via @EnableBinding) to the external messaging systems using the Spring Cloud Stream Binder implementations (Kafka, RabbitMQ, JMS binders, etc.). Spring Cloud Stream uses Spring AMQP for the RabbitMQ binder implementation.
The BinderAwareChannelResolver is applicable to dynamic destination binding support for producers, and I think in your case it is about configuring the exchange and the binding of consumers to that exchange.
For instance, you need to have 2 consumers with the appropriate bindingRoutingKey set based on your criteria, and a single producer with the properties (routing-key-expression, destination) you mentioned above, except the group. I noticed that you have configured group for the outbound channel; the group property is applicable only to consumers (hence inbound).
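In other words, the producer side of your snippet reduces to (same values as yours, with the group line dropped):
spring.cloud.stream.bindings.output.destination=myDestination
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='key'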
You might also want to check this issue: https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/57, as I see some discussion there around using routing-key-expression; specifically, check the comments on using the expression value.
