I have a Spring Cloud Stream client reading from a Kafka topic consisting of several partitions. The client calls a webservice for every Kafka message it reads. If the webservice is unavailable after a few retries, I want to stop the consumer from reading from Kafka. Referring to a previous Stack Overflow question (Spring cloud stream kafka pause/resume binders), I autowired BindingsEndpoint and call the changeState() method to try to stop the consumer, but the logs show the consumer continuing to read messages from Kafka after changeState() is invoked.
I am using Spring Boot version 2.1.2.RELEASE with Spring Cloud version Greenwich.RELEASE. The managed version for spring-cloud-stream-binder-kafka is 2.1.0.RELEASE. I have set the properties autoCommitOffset=true and autoCommitOnError=false.
Below is a snippet of my code. Is there something I have missed? Is the first input parameter to changeState() supposed to be the topic name?
If I want the consumer application to exit when the webservice is not available, can I simply do System.exit() without needing to stop the consumer first?
@Autowired
private BindingsEndpoint bindingsEndpoint;
...
...
@StreamListener(MyInterface.INPUT)
public void read(@Payload MyDTO dto,
                 @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                 @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                 @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
    try {
        logger.info("Processing message " + dto);
        process(dto); // this is the method that calls the webservice
    } catch (Exception e) {
        if (e instanceof IllegalStateException || e instanceof ConnectException) {
            bindingsEndpoint.changeState("my.topic.name",
                    BindingsEndpoint.State.STOPPED);
            // Binding<?> b = bindingsEndpoint.queryState("my.topic.name"); ==> Using topic name returns a valid Binding object
        }
        e.printStackTrace();
        throw e;
    }
}
You can do so by utilising the Binding visualization and control feature, where you can visualize as well as stop/start/pause/resume bindings.
Also, you are aware that System.exit() will shut down the entire JVM?
I had the same issue; the first input parameter to changeState() should be the binding name, not the topic name. That worked for me.
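As a hedged illustration (assuming MyInterface.INPUT is the binding/channel name, e.g. "input", and a Spring Cloud Stream version where BindingsEndpoint exposes changeState() and queryState()):

// Stop the consumer by its *binding* name (the channel name), not the Kafka topic name
bindingsEndpoint.changeState(MyInterface.INPUT, BindingsEndpoint.State.STOPPED);

// Optional sanity check: querying by the same name should now return the stopped binding
Binding<?> binding = bindingsEndpoint.queryState(MyInterface.INPUT);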
My application needs to forward events from a component to a RabbitMQ message publisher.
My component fires the event using ApplicationEventPublisher.publishEvent(e).
On the other side, a message producer should receive the event, process it, then publish it to a Rabbit queue.
I'm using Spring Cloud Stream and Spring Cloud Function for the messaging part:
@Configuration
public class MessagingConfig {

    @Autowired
    StreamBridge sb;

    @EventListener
    void handleEvent(Event e) {
        sb.send("topic", e);
    }
}
Is there a way to rely on a function rather than StreamBridge?
@Bean
Supplier<Event> messageProducer() {
    // Get the event and publish it
}
Or should I consider using an ApplicationEventListener as the bridge to the binder?
@Bean
Function<Event, Event> messageProcessor() {
    // redirect the event to the Rabbit binder
}
I'm a bit confused.
Thank you for your help.
The @EventListener and StreamBridge combination is an easier way to achieve your task. For a Supplier variant you need some intermediate buffer (a Flux?) where you would place your events, and that would be a bit more involved with the Flux.create() API: https://projectreactor.io/docs/core/release/reference/#producing.create.
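To illustrate the Supplier variant, here is a minimal, hedged sketch assuming Reactor 3.4+ (the Sinks API plays the buffering role mentioned above) and your own Event class; the class and bean names are made up:

// imports: reactor.core.publisher.Sinks, reactor.core.publisher.Flux, java.util.function.Supplier
@Configuration
public class EventSupplierConfig {

    // Intermediate buffer bridging Spring application events to the reactive supplier
    private final Sinks.Many<Event> sink = Sinks.many().unicast().onBackpressureBuffer();

    @EventListener
    public void onEvent(Event e) {
        sink.tryEmitNext(e); // place the application event into the buffer
    }

    @Bean
    public Supplier<Flux<Event>> messageProducer() {
        return sink::asFlux; // Spring Cloud Stream binds this to the output destination
    }
}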
It is possible to use Spring Integration ApplicationEventListeningMessageProducer to catch those events and produce them to the binding's MessageChannel.
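If you go the Spring Integration route, a sketch (assuming spring-integration-core is on the classpath and the binding's MessageChannel can be injected, e.g. qualified by the binding name) might look like this:

// imports: org.springframework.integration.event.inbound.ApplicationEventListeningMessageProducer
@Bean
public ApplicationEventListeningMessageProducer eventAdapter(MessageChannel output) {
    ApplicationEventListeningMessageProducer producer = new ApplicationEventListeningMessageProducer();
    producer.setEventTypes(Event.class);   // only forward our Event type
    producer.setOutputChannel(output);     // the binding's MessageChannel
    return producer;
}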
I am consuming events from Kafka in a Spring Boot 2.4 application. The Kafka client version is 2.3. There are two consumers consuming the events. I want to put the events back in Kafka in case of any error; I do NOT want to put the failed event on a dead-letter queue. I am using ConsumerAwareListenerErrorHandler.
@Override
public Object handleError(Message<?> message, ListenerExecutionFailedException exception, Consumer<?, ?> consumer) {
    ConsumerRecord<?, ?> record = (ConsumerRecord<?, ?>) message.getPayload();
    // consumer.seek(new TopicPartition(record.topic(), record.partition()), record.offset());
    Collection<TopicPartition> collection = Arrays.asList(new TopicPartition(record.topic(), record.partition()));
    consumer.seekToBeginning(collection);
    return null;
}
Now what I want is that if I stop this consumer, the same error event should be consumed by the other running consumer. Kindly help.
Thanks
That won't work because any other records fetched by the previous poll() will still be processed; use a SeekToCurrentErrorHandler instead.
https://docs.spring.io/spring-kafka/docs/2.5.5.RELEASE/reference/html/#seek-to-current
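As a hedged sketch against spring-kafka 2.5.x, replacing the error handler on the listener container factory (the bean name and generic types here are assumptions) might look like this:

// imports: org.springframework.kafka.listener.SeekToCurrentErrorHandler, org.springframework.util.backoff.FixedBackOff
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Re-seeks the failed record (and the remaining records from the same poll) so they are redelivered;
    // after the retries are exhausted the record is logged by the default recoverer.
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 3)));
    return factory;
}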
I'm trying to set up a project with Spring Cloud Stream and Kafka. I managed to build a simple example where a listener gets messages from a topic and, after processing them, sends the output to another topic.
My listener and channels are configured like this:
@Component
public class FileEventListener {

    private FileEventProcessorService fileEventProcessorService;

    @Autowired
    public FileEventListener(FileEventProcessorService fileEventProcessorService) {
        this.fileEventProcessorService = fileEventProcessorService;
    }

    @StreamListener(target = FileEventStreams.INPUT)
    public void handleLine(@Payload(required = false) String jsonData) {
        this.fileEventProcessorService.process(jsonData);
    }
}
public interface FileEventStreams {

    String INPUT = "file_events";
    String OUTPUT = "raw_lines";

    @Input(INPUT)
    SubscribableChannel inboundFileEventChannel();

    @Output(OUTPUT)
    MessageChannel outboundRawLinesChannel();
}
The problem with this example is that when the service starts, it doesn't check for messages that already exist in the topic; it only processes messages sent after it starts. I'm very new to Spring Cloud Stream and Kafka, but from what I've read, this behavior may correspond to the fact that I'm using a SubscribableChannel. I tried to use a QueueChannel, for example, to see how it works, but I got the following exception:
Error creating bean with name ... nested exception is java.lang.IllegalStateException: No factory found for binding target type: org.springframework.integration.channel.QueueChannel among registered factories: channelFactory,messageSourceFactory
So, my questions are:
If I want to process all messages that exist in the topic once the application starts (and also have messages processed by only one consumer), am I on the right path?
Even if QueueChannel is not the right choice to achieve the behavior explained in 1., what do I have to add to my project to be able to use this type of channel?
Thanks!
Add spring.cloud.stream.bindings.file_events.group=foo
Anonymous groups consume from the end of the topic only; bindings with a group consume from the beginning, by default.
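In application.properties that could look like the sketch below; the group name foo is just an example, and the startOffset line (a Kafka binder consumer property) is optional since a new group starts from the earliest offset by default:

# the consumer group makes the binding durable and, by default, starts it from the earliest offset
spring.cloud.stream.bindings.file_events.group=foo
# optional: make the starting position explicit (Kafka binder property)
spring.cloud.stream.kafka.bindings.file_events.consumer.startOffset=earliest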
You cannot use a PollableChannel for a binding; it must be a SubscribableChannel.
I have a @StreamListener method that performs a REST call. When the REST call fails with an exception, the @StreamListener method throws a RuntimeException and a retry is performed. I want the @StreamListener method to retry an unlimited number of times when it throws a RuntimeException. How can I achieve this?
Spring Cloud Stream Retry configuration:
spring.cloud.stream.kafka.bindings.inputChannel.consumer.enableDlq=true
spring.cloud.stream.bindings.inputChannel.consumer.maxAttempts=3
spring.cloud.stream.bindings.inputChannel.consumer.concurrency=3
spring.cloud.stream.bindings.inputChannel.consumer.backOffInitialInterval=300000
spring.cloud.stream.bindings.inputChannel.consumer.backOffMaxInterval=600000
Spring Boot microservice dependency versions:
Spring Boot 2.0.3
Spring Cloud Stream Elmhurst.RELEASE
Kafka broker 1.1.0
Using RetryTemplate or increasing the maxAttempts property has the restriction that retries must complete within max.poll.interval.ms; otherwise the Kafka broker will think the consumer is down and reassign the partition to another consumer (if available).
Another option is to make the listener re-read the same message from Kafka using the consumer.seek method.
#StreamListener("events")
public void handleEvent(#Payload String eventString, #Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer,
#Header(KafkaHeaders.RECEIVED_PARTITION_ID) String partitionId,
#Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
#Header(KafkaHeaders.OFFSET) String offset) {
try {
//do the logic (example: REST call)
} catch (Exception e) { // Catch only specific exceptions that can be retried
consumer.seek(new TopicPartition(topic, Integer.parseInt(partitionId)), Long.parseLong(offset));
}
}
You can certainly increase the number of attempts (the maxAttempts property) to something like Integer.MAX_VALUE, or you can provide your own RetryTemplate bean, which can be configured as you wish.
Here is where you can get more info: https://docs.spring.io/spring-cloud-stream/docs/current/reference/htmlsingle/#_retry_template
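A minimal sketch of such a RetryTemplate bean, using plain Spring Retry APIs; how the binder picks it up depends on your Spring Cloud Stream version (newer versions support a @StreamRetryTemplate annotation, so check the linked docs for the mechanism that matches your version):

@Bean
@StreamRetryTemplate // available in later Spring Cloud Stream versions; see the linked docs for yours
public RetryTemplate myRetryTemplate() {
    RetryTemplate template = new RetryTemplate();

    SimpleRetryPolicy policy = new SimpleRetryPolicy();
    policy.setMaxAttempts(Integer.MAX_VALUE); // effectively unlimited retries
    template.setRetryPolicy(policy);

    ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
    backOff.setInitialInterval(1_000);
    backOff.setMaxInterval(10_000);
    template.setBackOffPolicy(backOff);

    return template;
}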
After some trial and error, we found out that the Kafka configuration max.poll.interval.ms defaults to 5 minutes. Due to our consumer retry mechanism, our whole retry process can take 15 minutes in the worst case.
So 5 minutes after the first message is consumed, Kafka decides that the consumer did not provide any response, triggers a rebalance, and assigns the partition (and hence the same message) to another consumer.
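If you do need such a long in-listener retry window, one option is to raise max.poll.interval.ms for the binding's consumer. A hedged sketch (the 16-minute value is only an illustration, and inputChannel matches the binding name used in the question's configuration):

# allow up to ~16 minutes between polls so in-listener retries do not trigger a rebalance
spring.cloud.stream.kafka.bindings.inputChannel.consumer.configuration.max.poll.interval.ms=960000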
Maybe someone has an idea for the following problem:
I am currently on a project where I want to use AWS SQS with Spring Cloud integration. For the receiver part, I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the documentation:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss as to how to dynamically "connect" my provided "MessageHandler" (from my API) to the incoming messages for the specified queue name.
In the config example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ and thought that it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am trying to go an unusual way: I set up an AWS client bean at build time and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation so I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I wrote an example that iterates over the queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void poll() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);

                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());

                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify, in my case, I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which holds the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation).
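For illustration, a hedged sketch of dynamically registering a handler the JMS way, assuming a javax.jms.ConnectionFactory backed by SQS (for example the SQSConnectionFactory from the amazon-sqs-java-messaging-lib) is already configured and injected; only standard JMS APIs are used here, and MessageHandler is your own interface from the question:

// imports: javax.jms.*
public void register(String queueName, MessageHandler handler) throws JMSException {
    Connection connection = connectionFactory.createConnection();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));

    // Bridge the incoming JMS message to the user-supplied handler
    consumer.setMessageListener(message -> {
        try {
            handler.handle(((TextMessage) message).getText());
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    });

    connection.start(); // begin delivering messages to the listener
}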
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}